Friday, 14th May 2021

Opinion

Why the EU needs to be wary that AI will increase racial profiling

  • Amnesty found that 78 percent of people on the UK's Gangs Matrix in 2017 were black, despite only 29 percent of 'gang-related' crimes being committed by black people (Photo: Tony Gonzalez)

Whether it be 'predictive' algorithmic systems used in the Netherlands to profile Eastern Europeans and Roma for 'pick-pocketing', or secretive lists of suspected criminals such as the UK's Gangs Matrix, there is a growing reliance on criminal risk-scoring technologies by police forces across Europe.

Police say these systems predict where crime will happen and who will commit it.


These predictions and judgments can have very real impacts on the people subject to them, increasing the frequency of interactions with police.

In some cases, these risk-scores are heavily intertwined with social welfare systems, child protection watch-lists, and the classification of certain neighbourhoods as 'ghettos'.

Central to predictive policing systems is the notion that risk and crime can be objectively and accurately forecasted. Not only is this presumption flawed, it demonstrates a growing commitment to the idea that data can and should be used to quantify, track and predict human behaviour.

The increased use of such systems is part of a growing ideology that social issues can be solved by allocating more power, resources - and now technologies - to police.

Data might be able to indicate that something may occur in the near future, but not why. As such, these technologies are about controlling behaviour that has been deemed 'unacceptable' by police.

Fundamentally determined by historical practice and socio-economic biases, the data underlying law enforcement systems is inherently racialised and classed.

The data reflects patterns of policing, not crime.

Based on police presumptions

Discriminatory outcomes from predictive policing are therefore not an accident. They are the result of a process which seeks to "optimise" existing policing practice.

For example, Amnesty found that 78 percent of people on the UK's Gangs Matrix in 2017 were black, despite only 29 percent of 'gang-related' crimes being committed by black people.

Using indicators such as previous stop-and-search information, association with other 'gang nominals', social media activity, and even types of music listened to, the Matrix purports to predict who will commit crimes in the future.

The entire system is based on highly discriminatory police presumptions.

These tools tend to be deployed on crimes such as burglary, theft and other 'anti-social behaviour'. Prioritising pre-emptive police intervention on activities that are constructed to be associated with working class, migrant and racialised communities hardwires value judgments about which behaviours should be prioritised by law enforcement and which should not.

In the process this further criminalises racialised and poor communities.

In separate proposed legislation, the European Commission is attempting to enhance the capacity of Europol (the EU policing agency) to make use of big data – both in its operations and in the context of 'research and innovation'.

As argued by Statewatch, the European Council is already heavily subscribed to the idea that police should make more use of artificial intelligence.

However, the EU using data "hoovered up from the member states" for research or operations presents a major concern of reinforcing historic patterns of racist policing as well as deepening police surveillance.

The EU's tech deterministic worldview and vested interest in AI in policing casts huge doubts on whether it will take the necessary decisions to address the harms at stake.

It raises the question – will the upcoming regulation really regulate "high risk" AI or will it enable it?

Everything will depend on the obligations placed on institutions that deploy these technologies. Categorising harmful uses of AI as 'high risk' while attaching only relatively weak procedural requirements is likely to give developers and states a clearer roadmap for deploying technologies like predictive policing, rather than deterring them.

As set out in its Digital Compass, the EU will be striving to meet the goal that "three out of four companies should use cloud computing services, big data and Artificial Intelligence" by 2030.

By design, the EU's approach is likely to enable a market of unjust AI.

At the recent EU Anti-racism Summit, the commissioner for equality, Helena Dalli, argued that the EU must strive to become an 'anti-racist' union.

In the field of digital policy, this would mean developing a regulatory model firmly based on human rights and social justice. Issues of non-discrimination and fundamental rights would have to be at the core of the approach, rather than considered after competition and industrial policy.

A truly people-centred AI regulation would take a step back and acknowledge the inherent harms AI will perpetuate if deployed for certain purposes.

As outlined by 62 human rights organisations, the EU needs to set clear limits or 'red lines' on the most harmful uses of AI, including predictive policing, biometric mass surveillance and applications that exacerbate structural discrimination.

The EU needs to break from its enabling approach on AI. It must commit to encouraging only those applications that can guarantee benefiting people, not just public authorities and companies.

Author bio

Sarah Chander is a senior policy advisor at European Digital Rights. Fieke Jansen is a fellow of the Mozilla Foundation.

Disclaimer

The views expressed in this opinion piece are the authors', not those of EUobserver.

