MEPs push for world's toughest rules on AI
The original bill did not cover chatbots in detail. MEPs added an amendment to put ChatGPT and similar generative AI on the same level as high-risk AI systems (Photo: Jonathan Kemper, Unsplash)
By Eszter Zalan
MEPs took a key step in adopting new rules on Thursday (11 May) on regulating artificial intelligence tools, banning predictive policing technologies and facial recognition for the surveillance of citizens.
Regulators across the world are racing to catch up with the speed of development of new technologies, such as ChatGPT, an AI-based chatbot.
On Thursday, MEPs in the internal market and civil liberties committees opted to toughen the proposed rules in an effort to defend fundamental and human rights.
"It was the first attempt at regulating AI in the world in this horizontal and thorough manner," Italian Socialists & Democrats MEP Brando Benifei, one of the key lawmakers working on the file, told journalists on Wednesday.
The lead MEPs on the legislation do not expect major changes to the legislation ahead of the plenary vote.
First proposed in 2021, the AI Act would set out rules governing any product and service that uses an artificial intelligence system. The legislation classifies AI tools into four ranks based on their risk level from minimal to unacceptable.
Riskier applications face tougher rules, requiring greater transparency and the use of more accurate data.
Policing tools that aim to predict where crimes will happen and by whom, such as the technology depicted in the film Minority Report, are set to be banned.
Remote facial recognition technology will also be banned, except for countering or preventing a specific terrorist threat.
So-called "social scoring" systems, already under development in China, that judge or punish people, and businesses, based on their behaviour are expected to be banned.
AI systems used in high-risk categories like employment and education, which would affect the course of a person's life, face tough requirements in transparency, risk assessment, and mitigation measures.
The aim is "to avoid a controlled society based on AI, instead to make AI support more freedom and human development, not a securitarian nightmare," Benifei said on Wednesday.
"We think that these technologies could be used, instead of the good, also for the bad, and we consider the risks to be too high," he added.
"With our text, we are also showing what kind of society we want," the Italian MEP said, adding: "a society where social scoring, predictive policing, biometric categorisation, emotional recognition, indiscriminate scraping of facial images from the internet are considered unacceptable practices".
Emotional recognition is used by employers or police to identify tired workers or drivers.
Most AI systems, such as video games or spam filters, fall into the low- or no-risk category.
While the original legislation did not cover chatbots in detail, MEPs added an amendment to put ChatGPT and similar generative AI on the same level as high-risk AI systems.
As a new requirement, any copyrighted material used to teach AI systems how to generate text, images, video, or music should be documented, so that creators can decide whether their work has been used and get paid for it.
Violations are set to draw fines of up to €30m or six percent of a company's annual global revenue, which in the case of tech companies like Google and Microsoft could amount to billions.
However, it could take years before the rules take effect. MEPs in the plenary are set to vote on the legislation in mid-June. Then negotiations start with EU governments and the commission.
The final text is expected by the end of the year, or early 2024 at the latest, followed by a grace period for companies, which usually takes two years.
Loopholes
Digital rights advocates welcomed this first step in the adoption of the EU's AI Act — but criticised it over the rights of migrants.
"The parliament is sending a globally significant message to governments and AI developers with its list of bans, siding with civil society's demands that some uses of AI are just too harmful to be allowed," Sarah Chander, the senior policy adviser of European Digital Rights (EDRi), a rights advocacy group, said.
"Unfortunately, the European Parliament's support for peoples' rights stops short of protecting migrants from AI harms," Chander added.
EDRi said MEPs failed to include in the list of prohibited practices the use of AI to facilitate illegal pushbacks of migrants, or to profile people in a discriminatory manner (eg AI-based lie-detectors and risk-profiling systems).
However, real-time facial recognition technology would also be banned from use by border officials.
"There is no stronger safeguard [than this ban]. A border crossing point is a public space. According to the text we have right now, you will not be able to deploy AI biometric recognition technology in a public space," another key MEP on the file, Romania's Dragos Tudorache, from Renew Europe, said.
EDRi also warned that any watering down on what constitutes a high-risk AI would open up dangerous loopholes.
AccessNow, a digital civil rights advocacy group, argued for removing a self-assessment carve-out from the high-risk classification, which risks turning the AI Act into "self-regulation".
Industry groups warned, on the other hand, that the regulation could create extra costs for businesses and hamper digital innovation in Europe.
"European AI developers would now be put at a disadvantage compared to their foreign counterparts by MEPs' changes — such as the broad extension of the list of prohibited AI systems and that of high-risk use cases," said the Computer and Communications Industry Association (CCIA) Europe, a non-profit organisation whose members include Amazon, Apple, Facebook, Google, and Twitter.
Related stories
- EU keen to set global rules on artificial intelligence
- EU lawmakers 'hold breath' on eve of AI vote
- AI Act — leaving oversight to the techies will not protect rights
- Big Tech's attempt to water down the EU AI act revealed
- Predicting migration: the opaque science behind AI technologies
- The EU needs to foster tech — not just regulate it