UK a guinea pig for election digital security this year

THE United Kingdom general election is being watched closely after stark warnings that rapid advancements in cyber-tech, particularly artificial intelligence, and increasing friction between major nations threaten the integrity of 2024's landmark votes.

"These rogue and unregulated technological advances pose an enormous threat to us all.

"They can be weaponised to discriminate, disinform and divide," the head of Amnesty International Agnes Callamard said in April.

The election on July 4, four months before the one in the United States, will be seen as the "guinea pig" for election security, said Bruce Snell, cyber-security strategist at US firm Qwiet AI, which uses AI to prevent cyber-attacks.

While AI has grabbed most of the headlines, more traditional cyber-attacks remain a major threat.

"It's misinformation, it's disruption of parties, it's leakage of data and attacking specific individuals," said Ram Elboim, head of cyber-security firm Sygnia and a former senior operative at Israel's 8200 cyber and intelligence unit.

State actors are expected to be the main threat, with the UK issuing warnings about China and Russia.

"The main things are maybe to promote specific candidates or agendas," said Elboim.

"The second is creating some kind of internal instability or chaos, something that will impact the public feeling."

The UK has an advantage over the United States due to the short time period between announcing and holding the election, giving attackers little time to develop and execute plans, said Elboim.

It is also less vulnerable to attacks on election infrastructure as voting is not automated, he added.

But hacking of institutions remains a threat, and the UK has accused China of being behind an attack on the Electoral Commission.

Elboim said: "You don't have to disrupt the main voting system.

"For example, if you disrupt a party, their computers or a third party that affects that party, that's something that might have an impact."

Individuals are most at risk of being targeted, he added.

Any embarrassing information could be used to blackmail candidates.

But it is more likely the attacker will simply leak information to shape public opinion or use the hacked account to impersonate the victim and spread misinformation.

Former Conservative party leader Iain Duncan Smith, a fierce Beijing critic, has claimed that Chinese state actors have impersonated him online, sending fake emails to politicians around the world.

However, it is the increased scope for using AI to create and distribute misinformation that is the real unknown quantity in this year's elections, said Snell.

The spread of deepfakes — fake videos, pictures or audio — is of prime concern.

"The levels of potential for fakery are just tremendous. It's something that we definitely didn't have in the last election," said Snell, calling the UK a guinea pig for 2024's votes.

He highlighted software that can recreate someone's voice from a 30-second sample, and how that could be abused.

Labour health spokesman Wes Streeting has said he was the victim of a deepfake audio clip in which he appeared to insult a colleague.

Snell advised authorities to focus on a "shortcut" solution of "getting awareness out there, having people understand that this is the issue".

Other software can be used to make fake pictures and videos, despite filters on many AI applications designed to prevent the depiction of real people.

"AI is, while very sophisticated, also extremely easy to fool" into creating images of real people, said Snell.

AI is also being used to create bots, which automatically flood social media with comments to shape public opinion.

Snell said: "The bots used to be really easy to spot. You'd see things like the same message being repeated and parroted by multiple accounts.

"But with the sophistication of AI now... it's very easy to generate a bot farm that can have 1,000 bots and every one has a varying style of communication."

While software exists that can check to a "high level of competency" whether videos and pictures have been generated using AI, such tools are not yet used widely enough to curb the problem.

Snell said the AI industry and social media firms should take responsibility for curbing misinformation "because we're in a brave new world where the lawmakers have no idea what's going on".


The writer is from Agence France-Presse

The views expressed in this article are the author's own and do not necessarily reflect those of the New Straits Times
