Saturday, 26 September 2020

Opinion

Why EU will find it difficult to legislate on AI

  • Governing AI - especially exported technologies or those deployed across borders - through ethical principles does not work (Photo: The Preiser Project)

Artificial Intelligence (AI) – especially machine learning – is a technology that is spreading rapidly around the world.

AI will become a standard tool to help steer cars, improve medical care, and automate decision-making within public authorities. Although intelligent technologies are drivers of innovation and growth, their global proliferation is already leaving serious harm in its wake.


Last month, a leaked white paper showed that the European Union is considering putting a temporary ban on facial recognition technologies in public spaces until the potential risks are better understood.

But many AI technologies in addition to facial recognition warrant more concern, especially from European policymakers.

A growing number of experts have scrutinised the threat that 'deep fake' technologies pose to democracy by enabling artificial disinformation. Or consider the Apple Card, which reportedly granted husbands far higher credit limits than their wives, even when the couples shared assets.

Global companies, governments, and international organisations have reacted to these worrying trends by creating AI ethics boards, charters, committees, and guidelines to address the problems this technology presents - and Europe is no exception.

The European Commission set up a High-Level Expert Group on AI to draft guidelines for ethical AI.

Unfortunately, an ethical debate alone will not help to remedy the destruction caused by the rapid spread of AI into diverse facets of life.

The latest example of this shortcoming is Microsoft, one of the largest producers of AI-driven services in the world.

Microsoft, which has often tried to set itself apart from its Big Tech counterparts as a moral leader, has recently taken heat for its substantial investment in facial recognition software used for surveillance.

"AnyVision" is allegedly being used by Israel to track Palestinians in the West Bank. Although investing in this technology goes directly against Microsoft's own declared ethical principles on facial recognition, there is no redress.

It goes to show that governing AI - especially exported technologies or those deployed across borders - through ethical principles does not work.

The Microsoft case is just the tip of the iceberg.

Numerous cases will continue to pop up or be uncovered in the coming years in all corners of the globe – given a functioning and free press, of course.

This problem is especially prominent with facial recognition software, as the European debate reflects. Developed by Big Tech, facial recognition products have been procured by government agencies: customs and migration officers, police, security forces, the military, and more.

This is true across many regions of the world, including the US, the UK, and several states in Africa and Asia.

Promising more effective and accurate ways to keep the peace, law enforcement agencies have adopted AI to super-charge their capabilities.

This comes with specific dangers, though: numerous reports from advocacy groups and watchdogs show that these technologies are flawed, delivering disproportionately more false matches for women and for people with darker skin tones.

If law enforcement agencies know that these technologies have the potential to be more harmful to subjects who are more often vulnerable and marginalised, then there should be adequate standards for implementing facial recognition in such sensitive areas.

Ethical guidelines – whether from Big Tech or from international stakeholders – are not sufficient to safeguard citizens from invasive, biased, or harmful practices by police or security forces.

Although these problems have surrounded AI technologies for years, they have not yet resulted in successful regulation to make AI "good" or "ethical" – terms that mean well but are incredibly hard to define, especially at an international level.

This is why, even though actors from the private sector, government, academia, and civil society have all been calling for ethical guidelines for AI development, these discussions remain vague, open to interpretation, non-universal and, most importantly, unenforceable.

In order to stop the faster-is-better paradigm of AI development and remedy some of the societal harm already caused, we need to establish rules for the use of AI that are reliable and enforceable.

And arguments founded in ethics are not strong enough to do so; ethical principles fail to address these harms in a concrete way.

International human rights to the rescue?

As long as we lack rules that work, we should at least use guidelines that already exist to protect vulnerable societies to the best of our abilities. This is where the international human rights legal framework could be instrumental.

We should be discussing these undue harms as violations of human rights, using international legal frameworks and language that enjoy far-reaching consensus across nations and cultural contexts, are grounded in consistent rhetoric, and are, in theory, enforceable.

AI development needs to promote and respect human rights of individuals everywhere, not continue to harm society at a growing pace and scale.

There should be baseline standards in AI technologies, which are compliant with human rights.

Documents like the Universal Declaration of Human Rights and the UN Guiding Principles on Business and Human Rights, which steer private-sector behaviour in human rights-compliant directions, need to set the bar internationally.

This is where the EU could lead by example.

Viewed through these existing conventions and principles, Microsoft's investment in AnyVision, for example, would be seen not only as a direct violation of its internal principles, but also as a violation of the UN Guiding Principles – forcing the international community to scrutinise the company's business activities more deeply and systematically, and ideally leading to redress.

Faster is not better. The rapid development and dissemination of AI systems has caused unprecedented and irreversible damage to individuals all over the world. AI does, indeed, have huge potential to revolutionise and enhance products and services, and that potential should be harnessed in a way that benefits everyone.

Author bio

Kate Saslow is a researcher for AI and public policy at Stiftung Neue Verantwortung, a Berlin-based think tank.

Disclaimer

The views expressed in this opinion piece are the author's, not those of EUobserver.

