Wednesday

1st Feb 2023

Opinion

Why EU needs to be wary that AI will increase racial profiling


Whether it be 'predictive' algorithmic systems used in the Netherlands to profile Eastern Europeans and Roma for 'pick-pocketing', or secretive lists of suspected criminals such as the UK's Gangs Matrix, there is a growing reliance on criminal risk-scoring technologies by police forces across Europe.

Police say these systems predict where crime will happen and who will commit it.


These predictions and judgments can have very real impacts on the people subject to them, increasing the frequency of interactions with police.

In some cases, these risk-scores are heavily intertwined with social welfare systems, child protection watch-lists, and the classification of certain neighbourhoods as 'ghettos'.

Central to predictive policing systems is the notion that risk and crime can be objectively and accurately forecasted. Not only is this presumption flawed, it demonstrates a growing commitment to the idea that data can and should be used to quantify, track and predict human behaviour.

The increased use of such systems is part of a growing ideology that social issues can be solved by allocating more power, resources - and now technologies - to police.

Data might be able to indicate that something may occur in the near future, but not why. As such, these technologies are about controlling behaviour that has been deemed 'unacceptable' by police.

Fundamentally determined by historical practice and socio-economic biases, the data underlying law enforcement systems is inherently racialised and classed.

The data reflects patterns of policing, not crime.

Based on police presumptions

Discriminatory outcomes from predictive policing are therefore not an accident. They are the result of a process which seeks to "optimise" existing policing practice.

For example, Amnesty found that 78 percent of people on the UK's Gangs Matrix in 2017 were black, despite only 29 percent of 'gang-related' crimes being committed by black people.

Using indicators such as previous stop-and-search information, association with other 'gang nominals', social media activity, and even the types of music listened to, the Matrix purports to predict who will commit crimes in the future.

The entire system is based on highly discriminatory police presumptions.

These tools tend to be deployed on crimes such as burglary, theft and other 'anti-social behaviour'. Prioritising pre-emptive police intervention on activities that are constructed to be associated with working class, migrant and racialised communities hardwires value judgments about which behaviours should be prioritised by law enforcement and which should not.

In the process this further criminalises racialised and poor communities.

In separate proposed legislation, the European Commission is attempting to enhance the capacity of Europol (the EU policing agency) to make use of big data, both in its operations and in the context of 'research and innovation'.

As argued by Statewatch, the European Council is already heavily subscribed to the idea that police should make more use of artificial intelligence.

However, the EU using data "hoovered up from the member states" for research or operations risks reinforcing historic patterns of racist policing and deepening police surveillance.

The EU's tech-deterministic worldview and vested interest in AI in policing cast huge doubts on whether it will take the necessary decisions to address the harms at stake.

It raises the question – will the upcoming regulation really regulate "high risk" AI or will it enable it?

Everything will depend on the obligations imposed on institutions that deploy these technologies. Categorising harmful uses of AI as 'high risk' while attaching relatively weak procedural requirements is likely to provide a clearer roadmap for developers and states to deploy technologies like predictive policing, rather than deter them.

As set out in its Digital Compass, the EU will be striving to meet the goal that "three out of four companies should use cloud computing services, big data and Artificial Intelligence" by 2030.

By design, the EU's approach is likely to enable a market of unjust AI.

At the recent EU Anti-racism Summit, the commissioner for equality, Helena Dalli, argued that the EU must strive to become an 'anti-racist' union.

In the field of digital policy, this would mean developing a regulatory model firmly based on human rights and social justice. Issues of non-discrimination and fundamental rights would have to be at the core of the approach, rather than considered after competition and industrial policy.

A truly people-centred AI regulation would take a step back and acknowledge the inherent harms AI will perpetuate if deployed for certain purposes.

As outlined by 62 human rights organisations, the EU needs to set clear limits or 'red lines' on the most harmful uses of AI, including predictive policing, biometric mass surveillance and applications that exacerbate structural discrimination.

The EU needs to break from its enabling approach to AI. It must commit to encouraging only those applications that demonstrably benefit people, not just public authorities and companies.

Author bio

Sarah Chander is a senior policy advisor at European Digital Rights. Fieke Jansen is a fellow of the Mozilla Foundation.

Disclaimer

The views expressed in this opinion piece are the author's, not those of EUobserver.
