Facial recognition firm Clearview AI has revealed that it has run almost a million searches for US police.
Facial recognition technology is a controversial topic, and for good reason. Clearview AI’s technology allows law enforcement to upload a photo of a suspect’s face and find matches in a database of billions of images it has collected.
Clearview AI CEO Hoan Ton-That disclosed in an interview with the BBC that the firm has scraped 30 billion images from platforms such as Facebook. The images were taken without the users’ permission.
The company has been repeatedly fined millions in Europe and Australia for breaches of privacy, but US police are still using its powerful software.
Matthew Guariglia from the Electronic Frontier Foundation said that police use of Clearview puts everyone into a “perpetual police line-up.”
While police use of facial recognition is often pitched to the public as reserved for serious or violent crimes, Miami Police confirmed to the BBC that it uses Clearview AI’s software for every type of crime.
Miami’s Assistant Chief of Police Armando Aguilar said his team used Clearview AI’s system about 450 times a year, and that it had helped to solve several murders.
However, there are numerous documented cases of mistaken identity using facial recognition by the police. Robert Williams, for example, was wrongfully arrested on his lawn in front of his family and held overnight in a “crowded and filthy” cell.
“The perils of face recognition technology are not hypothetical — study after study and real life have already shown us its dangers,” explained Kate Ruane, Senior Legislative Counsel for the ACLU, following the reintroduction of the Facial Recognition and Biometric Technology Moratorium Act.
“The technology’s alarming rate of inaccuracy when used against people of colour has led to the wrongful arrests of multiple black men including Robert Williams, an ACLU client.”
The lack of transparency around police use of facial recognition means the true number of wrongful arrests it has led to is likely far higher.
Civil rights campaigners want police forces that use Clearview AI to disclose when it is used, for its accuracy to be openly tested in court, and for the systems to be scrutinised by independent experts.
The use of facial recognition technology by police is a contentious issue. While it may help solve crimes, it also poses a threat to civil liberties and privacy.
Ultimately, it’s a fine line between using technology to fight crime and infringing on individual rights, and we need to tread carefully to ensure we don’t cross it.
Related: Clearview AI lawyer: ‘Common law has never recognised a right to privacy for your face’
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with Digital Transformation Week.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.