AI must have human oversight, MEPs recommend
The EU’s regulatory framework for artificial intelligence should make it possible for consumers to seek human review when mistakes appear as a result of automated decisions (Photo: Ars Electronica/Vanessa Graf)
Developments in artificial intelligence and automated decision-making by computer algorithms should always require human oversight, the European Parliament's internal market committee (IMCO) said on Thursday (23 January), passing a draft resolution aimed at ensuring humans remain in control of the technology.
Automated decision-making systems (ADMs) - technical systems that replace human decision-making - are being used by public and private entities for tasks from recruiting to predictive policing, and are most widely used in the US.
However, according to the resolution, "humans must always be ultimately responsible for, and able to overrule, decisions" that are taken by new technologies, especially in medical, legal and accounting professions.
For the banking sector, the committee calls for a regulatory framework that ensures independent supervision of automated decision-making systems by qualified professionals in cases where the public interest is at stake.
This framework should also make it possible for consumers to seek human review when mistakes appear as a result of using these new technologies.
Likewise, automated decision-making systems should only use high-quality and unbiased data sets and "explainable and unbiased algorithms" to guarantee trust and acceptance, the resolution states.
"We have to make sure that consumer protection and trust is ensured and that the data sets used in automated decision-making systems are of high-quality and are unbiased," said Belgian MEP Petra De Sutter (Greens/EFA), who chairs the IMCO committee.
Following the provisions of the General Data Protection Regulation (GDPR) regarding explainability, MEPs also stressed that consumers should be "properly informed about how [this technology] functions, about how to reach a human with decision-making powers, and about how the system's decisions can be checked and corrected".
However, explaining the results and inner workings of large and complex technical models in layman's terms might prove problematic.
This resolution, which was initiated by Green MEP Alexandra Geese, marks the parliament's first official position on the safeguards needed for the deployment and implementation of ADMs.
"All algorithmic systems must be checked for their legality and neutrality according to a risk model. If a discriminatory bias cannot be remedied, such fundamental rights-violating systems must not be used in Europe," Geese wrote for the German newspaper Tagesspiegel.
Additionally, MEPs called on the European Commission to ensure that the EU's rules on safety and liability for products and services are fit for purpose in the digital age - an issue already addressed in the commission's white paper on AI.
The text approved by the committee will be voted on by all MEPs at the next plenary session, ahead of the commission's announcement of a European approach to AI, expected on 19 February.