Thursday, 28th Mar 2024

Opinion

AI Act — leaving oversight to the techies will not protect rights

These technical elements will have a direct impact on people's right to privacy, and knock-on effects for their rights to protest, due process, health, work, and participation in social and cultural life (Photo: Tobias Tullius)

In May, the European Parliament is scheduled to vote on the landmark Artificial Intelligence Act — the world's first comprehensive attempt to regulate the use of AI.

A lot has been said about the act's risk-based approach and the manner in which certain technologies have been classified under it — from remote biometric technologies to emotion recognition to the use of AI in migration contexts.

Much less attention, however, has been paid to how the key aspects of the act — those relating to "high risk" applications of AI systems — will be implemented in practice. This is a costly oversight, because the process currently envisioned could significantly jeopardise fundamental rights.

Technical standards — who, what, and why they matter

Under the current version of the act, the classification of high risk AI technologies includes those used in education, employee recruitment and management, the provision of public assistance benefits and services, and law enforcement. While these technologies are not prohibited, any provider who wants to bring a high risk AI technology to the European market will need to demonstrate compliance with the act's "essential requirements."

However, the act is vague on what these requirements actually entail in practice, and EU lawmakers intend to cede this responsibility to two little-known technical standards organisations.

The European Committee for Standardisation (CEN) and the European Committee for Electrotechnical Standardisation (CENELEC) are identified in the AI Act as the key bodies to develop standards that set out the technical frameworks, requirements, and specifications for acceptable high risk AI technologies.

These bodies are almost exclusively composed of engineers and technologists who represent EU member states. With little to no representation from human rights experts or civil society organisations, there is a real danger that these bodies will hold the de facto power to determine how the AI Act is implemented, without the means to ensure that its intended objective — to protect people's fundamental rights — is truly met.

At ARTICLE 19, we have been working for over half a decade on building and strengthening the consideration of human rights in technical standardisation bodies, including the Internet Engineering Task Force (IETF), the Institute for Electrical and Electronics Engineers (IEEE), and the International Telecommunication Union (ITU). We know from experience that they are not set up to meaningfully engage with these considerations.

When it comes to technology, it is impossible to completely separate technical design choices from real-world impacts on the rights of individuals and communities, and this is especially true of the AI systems that CEN and CENELEC would need to address under the current terms of the act.

The standards they produce will likely set out requirements related to data governance, transparency, security, and human oversight.

All of these technical elements will have a direct impact on people's right to privacy, and knock-on effects for their rights to protest, due process, health, work, and participation in social and cultural life. However, to understand what these impacts are and effectively address them, engineering expertise is not sufficient; we need human rights expertise to be part of the process, too.

Although the European Commission has made specific references to the need for this expertise, as well as the representation of other public interests, this will be hard to achieve in practice.

With few exceptions, CEN and CENELEC membership is closed to participation from any organisations other than the national standards bodies that represent the interests of EU member states. Even if there were a robust way for human rights experts to participate independently, there are no commitments or accountability mechanisms in place to ensure that the consideration of fundamental rights will be upheld in this process, especially when these considerations come into conflict with business or government interests.

Standard setting as a political act

Standardisation, far from a purely technical exercise, will likely be a highly political one, as CEN and CENELEC will be tasked with answering some of the most complicated questions left open in the essential requirements of the Act — questions that would be better addressed through open, transparent, and consultative policy and regulatory processes.

At the same time, the European Parliament will not have the ability to veto the standards mandated by the European Commission, even when the details of these standards may require further democratic scrutiny or legislative interpretation. As a result, these standards may dramatically weaken the implementation of the AI Act, rendering it toothless against technologies that threaten our fundamental rights.

If the EU is serious about its commitment to regulating AI in a way that respects human rights, outsourcing those considerations to technical bodies is not the answer.

A better way forward could include the establishment of a "fundamental rights impact assessment" framework, and a requirement for all high risk AI systems to be evaluated according to this framework as a condition of being placed on the market. Such a process could help ensure that these technologies are properly understood, analysed and, if needed, mitigated on a case-by-case basis.

The EU's AI Act is a critical opportunity to draw some much-needed red lines around the most harmful uses of AI technologies, and put in place best practices to ensure accountability across the lifecycle of AI systems. EU lawmakers intend to create a robust system that safeguards fundamental human rights and puts people first. However, by ceding so much power to technical standards organisations, they undermine the entirety of this process.

Author bio

Mehwish Ansari is head of digital at ARTICLE 19, the NGO defending freedom of expression, where Vidushi Marda is senior programme officer.

Disclaimer

The views expressed in this opinion piece are the author's, not those of EUobserver.
