
Police AI Chief Acknowledges Bias in Crime Tech, Commits to Mitigation

Alex Murray of the NCA acknowledges bias in police AI tools but pledges to minimize it through a new £115m national AI centre, aiming to improve fairness and efficiency in crime fighting.

A police van with a notice that live facial recognition is in operation.


A senior police official has acknowledged that the artificial intelligence (AI) technology used to enhance crime fighting will inherently contain bias, but has committed to addressing and minimizing those risks.

Labour advocates for a significant expansion of AI use by police forces across England and Wales, with law enforcement leaders also recognizing AI's potential to keep pace with evolving criminal threats.

Alex Murray, director of threat leadership at the National Crime Agency (NCA) and the national lead for AI in policing, said that a forthcoming £115 million national police AI centre will actively recognize and work to reduce bias in policing tools.

Alex Murray of the National Crime Agency.

Understanding and Minimizing AI Bias in Policing

Bias in AI policing tools can arise when algorithms, often trained on historical data reflecting past human prejudices, generate unfair outcomes. These may include disproportionate targeting of minority communities or misidentification based on race, gender, or socioeconomic status.

Murray emphasized the importance of recognizing and minimizing bias before deploying AI tools, stating:

“Once you’ve recognised and minimised [bias], how do you train officers to deal with outputs to ensure that it is further minimised?”

He further explained the necessity of involving data scientists and engineers to cleanse data, properly train models, and rigorously test them, particularly in applications like live facial recognition and predictive policing.

“If you talk about live facial recognition or predictive policing, there will be bias, and you need to get in the data scientists and the data engineers to clean the data, to train the model appropriately, and then to test it.
There is no point releasing something to policing that has bias in it that’s not recognised, and everything should be done to minimise it to a level where it can be understood and mitigated.”

Examples of Bias in Facial Recognition Technology

Instances of bias have already emerged in police use of retrospective facial recognition, which compares suspects to image databases after crimes have occurred. Live facial recognition, which is more controversial and less frequently used, also exhibits bias and has been criticized for inadequate safeguards, as highlighted in a December report.

The Association of Police and Crime Commissioners (APCC), overseeing local forces in England and Wales, stated:

“System failures have been known for some time, yet these were not shared with those communities affected, nor with leading sector stakeholders.”

Darryl Preston, APCC forensic science lead and police and crime commissioner for Cambridgeshire, remarked on the discovery of bias in the police national database’s retrospective facial recognition system:

“The discovery of an in-built bias in the police national database’s retrospective facial recognition system, even if only in limited circumstances, demonstrates the need for independent oversight of these powerful tools.
It is not acceptable for technology to be used unless and until it has been thoroughly tested to eliminate bias. That clearly was not the case in this instance.”

National AI Centre to Streamline and Improve Policing Technology

The new national AI centre, with a £115 million budget, aims to reduce bias and evaluate AI products from private suppliers. Currently, individual UK police forces make their own decisions regarding AI tools, a process seen as inefficient and costly.


Murray described the situation as an “arms race” with criminals who are also leveraging AI technology.

“Anyone with imagination can use AI.”

He cited a case where a paedophile claimed that images depicting him abusing children were deepfakes, which police had to disprove to secure a conviction.

AI’s Broad Benefits Beyond Predictive Policing

Murray highlighted that AI’s advantages extend well beyond common perceptions such as facial recognition and predictive policing.

“Across a range of crimes and challenges facing policing, AI ranged from being a help to a gamechanger, but a human police officer will have to make the final decisions about what to do about the results AI produces.”

He noted AI’s potential to assist in countering political agitators spreading fake images on social media intended to incite violence.

Looking ahead, Murray suggested AI could expedite manhunts, accelerate searches for vehicles linked to suspects, and drastically reduce the time detectives spend reviewing extensive CCTV footage or analyzing digital devices seized from suspects.

“What took days, weeks, sometimes months can potentially take hours,”

he said.

Real-World Success: AI Assists in Rapid Convictions

Recently, four suspects based in Luton were arrested over attacks on and thefts from cashpoints. Police downloaded data from the suspects’ phones and, with the aid of AI, secured guilty pleas within weeks.

The data was in Romanian, and AI was able to scan and translate it, identify material related to potential crimes, determine the offences, and compile the information into an evidence package for detectives.

Trevor Rodenhurst, chief constable of Bedfordshire Police, said:

“This allowed us to draw evidence from lots of devices with a vast quantity of data, which we would otherwise not have been able to do.”

He added that as officers witness AI’s benefits, frontline attitudes are shifting from skepticism to eagerness.

“They are no longer suspicious, they are asking when they can have it. That capability is transformative.”

This article was sourced from The Guardian.
