Deepfakes are becoming a trend in modern elections, and the technology continues to improve (Photo: Saradasish Pradhan)

Irish election deepfake AI video shines light on lack of EU-wide rules


Last Friday (24 October), Catherine Connolly won the Irish presidential election, despite an AI-generated deepfake of her dropping out of the race circulating before the election.

In an AI video, a realistic RTÉ News broadcast reported that the now Irish president-elect had dropped out of the race and that the election was cancelled. It also included a fake video and voice clone of Connolly herself withdrawing at a campaign event. 

And it's just the latest example.

AI deepfakes of political candidates have been a feature of global elections in recent years, as bad actors, political parties and candidates themselves post AI-generated videos during the campaign season in order to manipulate voters or reinforce political narratives.

The Connolly deepfake appeared across social media just days before Irish citizens headed to the polls. 

"The video is a fabrication. It is a disgraceful attempt to mislead voters and undermine our democracy," said Connolly herself in a statement on 22 October.

Online platforms did take measures to mitigate the video's impact, with Meta removing the video and the accounts that posted it, but the video had already been shared numerous times.

But AI content online does not necessarily have an impact on voting.

“Current research on the impact of AI on elections has not uncovered any clear impact of AI-generated deepfakes on actual voting outcomes,” Eva Lejla Podgoršek, senior policy manager at the NGO AlgorithmWatch, told EUobserver.

And digital services have tools to limit AI influence: Meta introduced AI watermarking in 2024, and OpenAI has systems designed to prevent people from recreating public figures with its video and image models.

However, such videos continue to be posted, platforms' systems are not foolproof and the technology continues to improve.

On 30 September, OpenAI released its most realistic AI video generator yet, Sora 2, and explicitly promotes the model's ability to place real people in generated environments through its so-called "cameos" feature — which launched with safety measures intended to protect public figures.

However, AI detection software company Reality Defender was able to bypass Sora 2's safety features and create multiple public figure cameos. 

Tech platforms must also follow AI safety and election integrity measures under EU regulation.

The EU's AI Act, which passed in 2024, requires platforms to watermark AI content and imposes specific disclosure requirements for deepfakes. The Digital Services Act also obligates them to mitigate risks to election integrity.

But currently there is no EU-wide legal framework on digital likeness rights, leaving each member state to decide its own rules.

"That fragmentation poses major issues for the regulation of generative video models,” Barry Scannell, a member of the Irish government's AI advisory council and head of AI law & policy at Irish law firm William Fry, told EUobserver.

And he is concerned that member states are not teaching their electorates to question online content enough. 

As AI models continue to improve, the "reflex – to verify before sharing – should be standard and embedded in the curriculum,” said Scannell.

