Police AI chief admits crime-fighting tech will have bias but vows to tackle it
United Kingdom | World | theguardian.com


#AI Bias #Police AI #Crime-fighting Technology #National Crime Agency #Facial Recognition #Predictive Policing #UK Police #AI Oversight

📌 Key Takeaways

  • AI crime-fighting technology will contain bias but police vow to minimise it
  • New £115m national police AI centre will help address bias issues
  • Bias in AI can lead to overtargeting minority communities or misidentifying individuals
  • AI can help police process vast amounts of data much faster than humans
  • Police are in an "arms race" with criminals who are also using AI

📖 Full Retelling

Alex Murray, the UK National Crime Agency's director of threat leadership and national AI lead, admitted in February 2026 that artificial intelligence used for crime-fighting will contain bias, but pledged to combat the risks as part of a new £115m national police AI centre aimed at addressing these issues. Murray acknowledged that algorithms trained on historical data reflecting past human prejudices could systematically produce unfair outcomes, such as overtargeting minority communities or misidentifying individuals based on race, gender, or socioeconomic status. The national AI centre will focus on recognising and minimising these biases while training officers to properly interpret AI outputs.

Bias has already surfaced in existing police AI technologies, particularly in retrospective facial recognition systems, where suspects are compared with image databases after crimes occur. A December report found that one such system was used with inadequate safeguards, and that failures were not shared with affected communities or sector stakeholders. Darryl Preston, police and crime commissioner for Cambridgeshire, emphasised the need for independent oversight of these powerful tools, stating it is unacceptable to use technology until it has been thoroughly tested to eliminate bias.

Despite the challenges, Murray highlighted how AI can transform police work by processing vast amounts of data far more efficiently than humans, potentially reducing investigations that take days or weeks to just hours. The technology can help with manhunts, analysing CCTV footage, searching seized digital devices, and combating criminal uses of AI such as deepfakes. As criminals increasingly adopt AI, police find themselves in an "arms race" in which those with imagination can leverage these tools, making the new national centre crucial for maintaining law enforcement capabilities while addressing ethical concerns.

🏷️ Themes

AI Bias, Police Technology, Criminal Justice Reform

📚 Related People & Topics

National Crime Agency


National law enforcement agency in the United Kingdom

The National Crime Agency (NCA) is a national law enforcement agency in the United Kingdom. It is the UK's lead agency against organised crime, human and drug trafficking, weapons and cybercrime, including economic crime that goes across regional and international borders, but it can be tasked to in...


Original Source
Police AI chief admits crime-fighting tech will have bias but vows to tackle it

Exclusive: NCA’s Alex Murray says he hopes new £115m police AI centre can limit unfairness found in tools

A police chief has admitted artificial intelligence used to boost crime fighting will contain bias but pledged to combat the risks. Labour wants a dramatic expansion of police use of AI within England and Wales, with police chiefs also believing it could help keep law enforcement up to date with new criminal threats.

Alex Murray told the Guardian that a new national police AI centre would recognise the risks of bias and minimise them. Bias in the use of AI in policing could result in instances where algorithms – often trained on historical data reflecting past human prejudices – systematically produce unfair outcomes, such as overtargeting minority communities or misidentifying individuals based on race, gender, or socioeconomic status.

Murray, the director of threat leadership with the National Crime Agency and the national lead for AI, said: “Once you’ve recognised and minimised, how do you train officers to deal with outputs to ensure that it is further minimised?

“If you talk about live facial recognition or predictive policing, there will be bias, and you need to get in the data scientists and the data engineers to clean the data, to train the model appropriately, and then to test it.

“There is no point releasing something to policing that has bias in it that’s not recognised, and everything should be done to minimise it to a level where it can be understood and mitigated.”

Examples of bias have already surfaced in the police use of retrospective facial recognition, which is powered by AI. That is where a suspect is compared with a database of images after a crime.
Live facial recognition, which is more controversial and is used less by policing, hunts for suspects in real time, and also contains bias....
Read full article at source

Source

theguardian.com
