An early premonition of the dangers of artificial intelligence in Stanley Kubrick's 1968 film 2001: A Space Odyssey — where the spaceship's all-knowing computer, HAL 9000, tries to take over the mission and kill the astronauts (Photo: Wikimedia)

Opinion

Where the EU's efforts to regulate AI fall short


In recent years, efforts to regulate AI have surged. However, the technology's complexity, compounded by self-interested actors making wild claims about its potential to do harm, makes it challenging to craft measured regulations.

In a 2023 open letter, academics and business leaders, including Bill Gates, warned that AI poses an existential threat, without explaining how.

Some of the signatories voicing concern about the lack of regulation stand to benefit from the status quo.

Google Brain's co-founder Andrew Ng argued that the letter's true motive was to weaken open-source AI through heavy-handed regulation, to the advantage of Big Tech.

Amid the global race to regulate AI, the world's first comprehensive AI regulation — the EU AI Act — aims to set a global regulatory standard.

But I believe it falls short.

Three forms of AI regulation

In the global regulatory landscape, three ways of regulating AI can be identified — comprehensive, sectoral, and AI system-focused regulations.

A comprehensive regulation seeks to establish common regulatory standards for AI across all industries. In contrast, sectoral regulation allows regulatory standards to be adopted for particular domains (e.g., banking).

Sectoral regulation ideally requires a central authority that coordinates consistent standard-setting and enforcement (so-called coordinated sectoralism).

The UK's 2023 pro-innovation AI white paper outlined this approach, although the proposed UK AI Bill translates it inadequately into a regulatory framework.

In the US and Australia, AI is currently regulated primarily through consumer protection, privacy, and other domain-specific laws that are not particularly designed for AI.

Over the past few years, AI technology-focused regulations have been proposed or enacted. These include China's proposed generative AI regulation and California’s BOT Act regulating chatbots. The UK government's recent announcement that the country's upcoming AI legislation will target companies developing ‘the most powerful AI systems’ is another example of technology-specific regulatory choice.

These diverse approaches offer competing and complementary options for regulating AI.

Rushed compromises

Whilst comprehensive regulation seeks to prevent fragmented standards and provide legal certainty, it can ironically lead to fragmentation.

For instance, the EU AI Act merely mandates disclosure to consumers engaging with AI-generated content such as deepfakes. Yet deepfakes pose profound challenges, including misleading voters and the misuse of people's images and voices (eg, in non-consensual pornography), which call for a more robust regulatory response than mere disclosure.

Kai Zenner, an EU Parliament aide involved in drafting the AI Act, acknowledged that the act was rushed, with an emphasis on quick compromises, leaving critical gaps and necessitating additional fragmented regulations.

At the opposite end, focusing only on 'the most powerful AI systems' could be dangerous, because AI technologies have already caused significant real-world harm.

Facial recognition technology has led police to wrongfully arrest individuals, and its technical flaws have locked Uber drivers out of their accounts, threatening their livelihoods. It does not take the most powerful AI systems to cause considerable societal harm.

A well-designed sectoral approach seems to be more effective in the evolving AI landscape because it allows the adoption and enforcement of regulations suitable for sectoral needs and contexts.

The EU AI Act bans AI systems deemed to pose unacceptable risks to society whilst imposing stringent standards for high-risk AI systems such as those used to evaluate job applications, student admissions and eligibility for loans.

AI systems presenting low risks face minimal transparency and accountability obligations. This risk-based approach tailors regulatory requirements to the level of risk posed by specific AI systems. In theory, it is sensible; but as designed, it can lead to both over-regulation and under-regulation.

Delivery robots — but where?

The EU AI Act classifies autonomous delivery robots as high-risk, whether used in controlled warehouse premises or urban public spaces. While their use in public spaces can cause harm, they pose little to no risk when used in controlled environments. The list of high-risk AI systems under the EU AI Act ignores this important context of use and overregulates the developers and deployers of such robots.

Conversely, AI systems that pose a significant risk of harm, such as migration-prediction and money-laundering detection tools, fall through the cracks, even though they can discriminate against people based on factors such as race and religion. Because they are not classified as high-risk, their developers and deployers are not required to follow the strict safety standards meant for high-risk AI.

The EU Commission's power to revise the list of high-risk AI systems is likely to prove ineffective, given the potential lack of political commitment and the slow approval process for the delegated acts through which revisions are adopted.

Whilst risk-based regulation is useful, it should be grounded in an underlying principle. The AI Act could have been designed to treat 'AI use cases that are likely to adversely affect the public interest' as high-risk. Lacking such an adaptable principle, the act's current approach risks both hindering innovation and creating loopholes.

Erosion of the 'Brussels Effect'

The EU has shaped global regulatory standards through the ‘Brussels Effect’, whereby foreign businesses and states embrace EU law out of legal and commercial necessity.

The power of the Brussels Effect lies partly in the quality of legal standards in the EU. However, the EU AI Act's flaws, combined with the contentious nature of AI regulation, suggest that the Brussels Effect may give way to more negotiated global standards.



Dr Asress Adimi Gikay is senior lecturer in AI, disruptive innovation & law at Brunel University London, and a board member of the Centre for AI: Social and Digital Innovation.
