Monday

2nd Oct 2023

Opinion

Why EU needs to be wary that AI will increase racial profiling

  • Amnesty found that 78 percent of people on the UK's Gangs Matrix in 2017 were black, despite only 29 percent of 'gang-related' crimes being committed by black people (Photo: Tony Gonzalez)

Whether it be 'predictive' algorithmic systems used in the Netherlands to profile Eastern Europeans and Roma for 'pick-pocketing', or secretive lists of suspected criminals such as the UK's Gangs Matrix, there is a growing reliance on criminal risk-scoring technologies by police forces across Europe.

Police say these systems predict where crime will happen and who will commit it.


These predictions and judgments can have very real impacts on the people subject to them, increasing the frequency of interactions with police.

In some cases, these risk-scores are heavily intertwined with social welfare systems, child protection watch-lists, and the classification of certain neighbourhoods as 'ghettos'.

Central to predictive policing systems is the notion that risk and crime can be objectively and accurately forecasted. Not only is this presumption flawed, it demonstrates a growing commitment to the idea that data can and should be used to quantify, track and predict human behaviour.

The increased use of such systems is part of a growing ideology that social issues can be solved by allocating more power, resources - and now technologies - to police.

Data might indicate that something may occur in the near future, but not why. As such, these technologies are about controlling behaviour that police have deemed 'unacceptable'.

Fundamentally determined by historical practice and socio-economic biases, the data underlying law enforcement systems is inherently racialised and classed.

The data reflects patterns of policing, not crime.

Based on police presumptions

Discriminatory outcomes from predictive policing are therefore not an accident. They are the result of a process which seeks to "optimise" existing policing practice.

For example, Amnesty found that 78 percent of people on the UK's Gangs Matrix in 2017 were black, despite only 29 percent of 'gang-related' crimes being committed by black people.

Using indicators such as previous stop-and-search records, association with other 'gang nominals', social media activity, and even the types of music listened to, the Matrix purports to predict who will commit crimes in the future.

The entire system is based on highly discriminatory police presumptions.

These tools tend to be deployed on crimes such as burglary, theft and other 'anti-social behaviour'. Prioritising pre-emptive police intervention on activities that are constructed to be associated with working class, migrant and racialised communities hardwires value judgments about which behaviours should be prioritised by law enforcement and which should not.

In the process this further criminalises racialised and poor communities.

In separate proposed legislation, the European Commission is seeking to enhance the capacity of Europol (the EU policing agency) to make use of big data, both in its operations and in the context of 'research and innovation'.

As argued by Statewatch, the European Council is already heavily subscribed to the idea that police should make more use of artificial intelligence.

However, the EU using data "hoovered up from the member states" for research or operations presents a major concern of reinforcing historic patterns of racist policing as well as deepening police surveillance.

The EU's tech deterministic worldview and vested interest in AI in policing casts huge doubts on whether it will take the necessary decisions to address the harms at stake.

This raises the question: will the upcoming regulation really regulate "high risk" AI, or will it enable it?

Everything will depend on the obligations towards institutions that deploy these technologies. Categorising harmful uses of AI as 'high risk' with relatively weak procedural requirements is likely to provide a clearer roadmap for developers and states to more easily deploy technologies like predictive policing rather than deter them.

As set out in its Digital Compass, the EU will be striving to meet the goal that "three out of four companies should use cloud computing services, big data and Artificial Intelligence" by 2030.

By design, the EU's approach is likely to enable a market of unjust AI.

At the recent EU Anti-Racism Summit, commissioner for equality Helena Dalli argued that the EU must strive to become an 'anti-racist' union.

In the field of digital policy, this would mean developing a regulatory model firmly based on human rights and social justice. Issues of non-discrimination and fundamental rights would have to be at the core of the approach, rather than considered after competition and industrial policy.

A truly people-centred AI regulation would take a step back and acknowledge the inherent harms AI will perpetuate if deployed for certain purposes.

As outlined by 62 human rights organisations, the EU needs to set clear limits or 'red lines' on the most harmful uses of AI, including predictive policing, biometric mass surveillance and applications that exacerbate structural discrimination.

The EU needs to break from its enabling approach to AI. It must commit to encouraging only those applications that can be guaranteed to benefit people, not just public authorities and companies.

Author bio

Sarah Chander is a senior policy advisor at European Digital Rights. Fieke Jansen is a fellow of the Mozilla Foundation.

Disclaimer

The views expressed in this opinion piece are the author's, not those of EUobserver.

