Met Police commissioner dismisses critics of facial recognition systems


Ryan Daws is a senior editor at TechForge Media with over a decade of experience in crafting compelling narratives and making complex topics accessible. His articles and interviews with industry leaders have earned him recognition as a key influencer by organisations like Onalytica. Under his leadership, publications have been praised by analyst firms such as Forrester for their excellence and performance. Connect with him on X (@gadget_ry) or Mastodon.

The Commissioner of the Metropolitan Police has dismissed critics of law enforcement's use of facial recognition systems.

Met Commissioner Cressida Dick was speaking at the Royal United Services Institute think tank on Monday. Much of Dick's speech was spent making the case for British police to use modern technologies to tackle crime.

Dick dismissed critics of police facial recognition technology as being "highly inaccurate or highly ill-informed."

Needless to say, this angered said critics, who believe Dick is the one who is ill-informed for ignoring an independent report which suggests the technology in question works in just 19 percent of cases.

“I would say it is for critics to justify to the victims of crimes why police should not be allowed to use tech lawfully and proportionally to catch criminals,” Dick argued.

Dick said she welcomes a public debate about facial recognition but attacked organisations such as Big Brother Watch and Liberty, which brought the issue to wider public attention.

“It’s unhelpful for the Met to reduce a serious debate on facial recognition to unfounded accusations of ‘fake news’,” Big Brother Watch tweeted. “Dick would do better to acknowledge and engage with the real, serious concerns – including those in the damning independent report that she ignored.”

Liberty tweeted a similar response: “Fact: Met started using facial recognition after ignoring its own review of two-year trial that said its use of the tech didn’t respect human rights. Another fact: scaremongering and deriding criticisms instead of engaging shows how flimsy their basis for using it really is.”

Met Police tests of facial recognition technology so far have been nothing short of a complete failure.

An initial trial, at the 2016 Notting Hill Carnival, failed to identify a single person. A follow-up trial the following year produced no legitimate matches but 35 false positives.

Ironically, the legality of the trials themselves has been called into question. An independent report by Professor Peter Fussey and Dr Daragh Murray last year concluded that the six trials they were given access to were likely illegal because they had not accounted for human rights compliance.

Dr Murray said: “This report raises significant concerns regarding the human rights law compliance of the trials.

“The legal basis for the trials was unclear and is unlikely to satisfy the ‘in accordance with the law’ test established by human rights law.

“It does not appear that an effective effort was made to identify human rights harms or to establish the necessity of LFR.

“Ultimately, the impression is that human rights compliance was not built into the Metropolitan Police’s systems from the outset, and was not an integral part of the process.”

You can find a copy of the full report here (PDF).

(Image Credit: Met police helmet by Matt Brown under CC BY 2.0 license)

