New £115M Police AI Centre to Ensure Officers Are Aware They’re Being Racist Faster Than Ever

National Crime Agency promises ‘recognised bias’ approach will revolutionise speed at which systemic discrimination is formally acknowledged

LONDON — The UK government has announced £115 million in funding for a new National Police AI Centre, which officials say will dramatically improve the speed and efficiency with which law enforcement can identify, document, and then continue to modernise the delivery of data-driven suspicion.

Alex Murray, the National Crime Agency’s director of threat leadership and national lead for AI, told the Guardian that the centre would “recognise the risks of bias and minimise them”, which we can confirm is technically different from eliminating them.

“There is no point releasing something to policing that has bias in it that’s not recognised,” Murray said, in what experts are calling the most ambitious mission statement ever for a government institution.

‘Arms Race’

Murray warned that police are in an “arms race” with criminals who are using AI, adding that “anyone with imagination can use AI.” He did not specify whether this applied to the procurement teams assessing supplier proposals, though £115 million suggests a vivid collective imagination was already at work.

It was also suggested AI could help police deal with “political agitators who infect social media with fake images to try to trigger violence on the streets”, which, reassuringly, is a designation that is famously precise and rarely open to interpretation.

Looking Ahead

The new centre will also centralise procurement decisions currently made independently by individual forces, a patchwork described as “slow and wasteful.” Under the new model, biased technology will instead be purchased at scale, delivering significant savings per unit of systematic unfairness.

———

The National Police AI Centre is expected to be operational by 2026. A spokesperson declined to confirm whether the centre’s own hiring algorithms would be subject to bias testing.