EU backtracks on plans to ban facial recognition
Law-enforcement authorities in several European countries have been testing facial-recognition technologies - raising concern over privacy and discrimination risks (Photo: EFF Photos)
The European Commission came under fire on Wednesday (19 February) for ruling out a moratorium on facial recognition, as the bloc's new strategy for data and artificial intelligence (AI) governance was unveiled.
"It is of utmost importance and urgency that the EU prevents the deployment of mass surveillance and identification technologies without fully understanding their impacts on people and their rights," warned Diego Naranjo, head of policy at European Digital Rights.
But commission vice-president Margrethe Vestager said on Wednesday only that the EU's executive body will launch "a broad European debate to determine the specific circumstances, if any, which might justify the use" of facial recognition.
According to Naranjo, "the proposed debate should lead to the prohibition of deployment of live facial recognition systems (and similar biometric technologies) in member states".
"Facial recognition and biometric identification are enormously privacy-invasive and their use can bear many risks for consumers," said David Martin from the European Consumer Organisation, who added "it would be careless to expand the use of this technology until all the necessary safeguards are in place".
The commission wants to ensure the adoption of "trustworthy, ethical and human-centric" AI across the EU by establishing new binding requirements for the development and use of "high-risk" applications.
However, the risks of facial-recognition technologies, which make decisions based on image interpretation, vary depending on their purpose - for example, unlocking a smartphone does not pose a "high risk" to citizens' fundamental rights.
What is 'high-risk'?
All "high-risk" applications will have to comply with new rules, including high data quality and traceability obligations, human oversight and transparency standards throughout the whole development lifecycle of these technologies.
However, the commission recognised that "the lack of transparency [of AI systems] makes it difficult to identify and prove possible breaches of laws, including legal provisions that protect fundamental rights".
Additionally, facial recognition would be subject to a 'proportionality' requirement - an assessment to ensure that the technology is used proportionately and with sufficient safeguards to protect individuals.
"AI can only be used for remote biometric identification [facial recognition] purposes where such use is duly justified, proportionate and subject to adequate safeguards," the commission's white paper on AI states.
However, critics warned that these measures, aimed at reducing bias and error, fail to address the threats that the technology poses to citizens' privacy.
"Modern systems allow constant surveillance of people over long periods, which is simply not compatible with a free and democratic society," said MEP Alexandra Geese (Green/EFA), who is the rapporteur on the ethical aspects of AI for the parliament's internal market and consumer protection committee.
"The devil is in the detail - in the precise definition of high risk and in the possibilities to defend against discriminatory or inaccurate AI applications," she added.
Policing the police?
Last November, the EU Agency for Fundamental Rights said that "a clear legal framework must regulate the deployment and use of facial recognition technologies," warning that collecting facial images of individuals without their consent or chance to refuse "can have a negative impact on people's dignity".
However, the commission considered that the basic legal framework for facial recognition might already be in place under the bloc's General Data Protection Regulation (GDPR), and prefers to focus on enforcement.
As a rule, GDPR prohibits the processing of biometric data, such as facial images or fingerprints, without consent - unless a specific exception applies.
However, law-enforcement authorities in a growing number of European countries have been testing these technologies over the past few years - raising concerns over privacy and discrimination risks.
"Police forces in almost all EU countries already use face recognition or plan to introduce it, and none of them is being fully transparent about their use," warned Nicolas Kayser-Bril from NGO AlgorithmWatch.
Some of the member states that have tried or deployed facial-recognition technologies for security reasons include Germany, France, Italy, Sweden, Belgium and the Netherlands.
Last month, a leaked paper showed that the commission was considering a temporary ban on facial recognition technologies in public spaces to better understand its potential risks, but the proposal was quickly rejected.
"I want that digital Europe reflects the best of Europe – open, fair, diverse, democratic, and confident," said the commission president Ursula von der Leyen, who claim that "AI must serve people and must always comply with people's rights".