Tuesday

2nd Jun 2020

Opinion

Why EU will find it difficult to legislate on AI

(Photo: The Preiser Project)

Artificial Intelligence (AI) – especially machine learning – is a technology that is spreading rapidly around the world.

AI will become a standard tool to help steer cars, improve medical care or automate decision making within public authorities. Although intelligent technologies are drivers of innovation and growth, the global proliferation of them is already causing serious harm in its wake.


Last month, a leaked white paper revealed that the European Union is considering a temporary ban on facial recognition technologies in public spaces until the potential risks are better understood.

But many AI technologies besides facial recognition warrant concern, especially from European policymakers.

More and more experts have scrutinised the threat that 'deep fake' technologies may pose to democracy by enabling artificial disinformation. Or consider the Apple Card, which granted much higher credit limits to husbands than to their wives, even when the couples shared assets.

Global companies, governments, and international organisations have reacted to these worrying trends by creating AI ethics boards, charters, committees, and guidelines, all to address the problems this technology presents - and Europe is no exception.

The European Commission set up a High Level Expert Group on AI to draft guidelines on ethical AI.

Unfortunately, an ethical debate alone will not help to remedy the destruction caused by the rapid spread of AI into diverse facets of life.

The latest example of this shortcoming comes from Microsoft, one of the largest producers of AI-driven services in the world.

Microsoft, which has often tried to set itself apart from its Big Tech counterparts as a moral leader, has recently taken heat for its substantial investment in facial recognition software used for surveillance purposes.

"AnyVision" is allegedly being used by Israel to track Palestinians in the West Bank. Although investing in this technology goes directly against Microsoft's own declared ethical principles on facial recognition, there is no redress.

It goes to show that governing AI - especially exported technologies or those deployed across borders - through ethical principles does not work.

The Microsoft case is only the tip of the iceberg.

Numerous cases will continue to pop up or be uncovered in the coming years in all corners of the globe – given a functioning and free press, of course.

This problem is especially prominent with facial recognition software, as the European debate reflects. Developed by Big Tech, facial recognition products have been procured by government agencies: customs and migration authorities, police, security forces, the military, and more.

This is true in many regions of the world: the United States and the UK, as well as several states in Africa and Asia, among others.

Promising more effective and accurate methods to keep the peace, law enforcement agencies have adopted the use of AI to super-charge their capabilities.

This comes with specific dangers, though: numerous reports from advocacy groups and watchdogs show that these technologies are flawed, delivering disproportionately more false matches for women and for people with darker skin tones.

If law enforcement agencies know that these technologies have the potential to be more harmful to subjects who are more often vulnerable and marginalised, then there should be adequate standards for implementing facial recognition in such sensitive areas.

Ethical guidelines – whether from Big Tech or from international stakeholders – are not sufficient to safeguard citizens from invasive, biased, or harmful practices by police or security forces.

Although these problems have surrounded AI technologies for years, they have not yet resulted in successful regulation to make AI "good" or "ethical" – terms that mean well but are incredibly hard to define, especially at the international level.

This is why, even though actors from the private sector, government, academia, and civil society have all been calling for ethical guidelines in AI development, these discussions remain vague, open to interpretation, non-universal and, most importantly, unenforceable.

In order to stop the faster-is-better paradigm of AI development and remedy some of the societal harm already caused, we need to establish rules for the use of AI that are reliable and enforceable.

And arguments founded in ethics are not strong enough to do so; ethical principles fail to address these harms in a concrete way.

International human rights to the rescue?

As long as we lack rules that work, we should at least use guidelines that already exist to protect vulnerable societies to the best of our abilities. This is where the international human rights legal framework could be instrumental.

We should be discussing these undue harms as violations of human rights, utilising international legal frameworks and language that enjoy far-reaching consensus across nations and cultural contexts, are grounded in consistent rhetoric, and are, in theory, enforceable.

AI development needs to promote and respect human rights of individuals everywhere, not continue to harm society at a growing pace and scale.

There should be baseline, human-rights-compliant standards for AI technologies.

Documents like the Universal Declaration of Human Rights and the UN Guiding Principles, which steer private-sector behaviour towards human-rights compliance, need to set the bar internationally.

This is where the EU could lead by example.

If the debate refocused on these existing conventions and principles, Microsoft's investment in AnyVision, for example, would be seen not only as a direct violation of the company's internal principles, but also as a violation of the UN Guiding Principles. That would force the international community to scrutinise the company's business activities more deeply and systematically, ideally leading to redress.

Faster is not better. The rapid development and dissemination of AI systems has caused unprecedented and irreversible damage to individuals all over the world. AI does, indeed, hold huge potential to revolutionise and enhance products and services, and this potential should be harnessed in a way that benefits everyone.

Author bio

Kate Saslow is a researcher for AI and public policy at Stiftung Neue Verantwortung, a Berlin-based think tank.

Disclaimer

The views expressed in this opinion piece are the author's, not those of EUobserver.
