Chinese AI darling SenseTime wants facial recognition standards

Ryan Daws is a senior editor at TechForge Media, with a seasoned background spanning over a decade in tech journalism. His expertise lies in identifying the latest technological trends, dissecting complex topics, and weaving compelling narratives around the most cutting-edge developments. His articles and interviews with leading industry figures have gained him recognition as a key influencer by organisations such as Onalytica. Publications under his stewardship have since gained recognition from leading analyst houses like Forrester for their performance. Find him on X (@gadget_ry) or Mastodon.

The CEO of Chinese AI darling SenseTime wants to see facial recognition standards established for a ‘healthier’ industry.

SenseTime is among China’s most renowned AI companies. Back in April, we reported it had become the world’s most funded AI startup.

Part of the company’s monumental success is the popularity of facial recognition in China where it’s used in many aspects of citizens’ lives. Just yesterday, game developer Tencent announced it’s testing facial recognition to check users’ ages.

Xu Li, CEO of SenseTime, says immigration officials doubted the accuracy of facial recognition when he first pitched his own technology. “We knew about it 20 years ago and, combined with fingerprint checks, the accuracy is only 53 per cent,” one told him.

Facial recognition has come a long way in the past two decades. Recent advances in artificial intelligence have led to even greater leaps, giving rise to companies such as SenseTime.

To dispel the idea that facial recognition is still inaccurate, Li wants ‘trust levels’ to be established.

“With standards, technology adopters can better understand the risk involved, just like credit worthiness for individuals and companies,” Xu told the South China Morning Post. “Providers of facial recognition can be assigned different trust levels, ranging from financial security at the top to entertainment uses.”

Many of the leading facial recognition technologies have their own built-in confidence thresholds. These determine how confident the software must be before it declares a match.
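To make the idea concrete, here is a minimal sketch of how such a threshold works. It does not reflect any vendor's real API; the function, names, and scores are hypothetical, illustrating only that raising the required confidence excludes weaker candidate matches.

```python
# Hypothetical illustration of a confidence threshold in face matching.
# A matcher returns candidate identities with similarity scores in [0, 1];
# only candidates at or above the threshold are reported as matches.

def filter_matches(candidates, threshold):
    """Keep only candidates whose confidence score meets the threshold.

    `candidates` is a list of (identity, score) pairs.
    """
    return [(identity, score) for identity, score in candidates
            if score >= threshold]

# Illustrative scores: an 80 per cent default admits weaker matches
# than a 95 per cent bar would.
candidates = [("match_a", 0.97), ("match_b", 0.83), ("match_c", 0.61)]

print(filter_matches(candidates, 0.80))  # match_a and match_b pass
print(filter_matches(candidates, 0.95))  # only match_a remains
```

The design point is that the threshold is a policy decision, not a property of the model: the same matcher behaves very differently at 80 per cent than at 95 per cent, which is exactly the dispute in the ACLU findings discussed below.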

Back in July, AI News reported on findings from the ACLU (American Civil Liberties Union) that Amazon’s facial recognition AI erroneously matched people with darker skin tones against criminal mugshots more often.

Amazon claims the ACLU left the facial recognition service’s default confidence setting of 80 per cent in place – when it recommends 95 per cent or higher for law enforcement.

Responding to the ACLU’s findings, Dr Matt Wood, GM of Deep Learning and AI at Amazon Web Services, also called for regulations. However, Wood wants governments to mandate a minimum confidence level for the use of facial recognition in law enforcement.

Li and Wood may be calling for different regulations, but they – and many other AI leaders – agree that some form of regulation is essential to ensure a healthy industry.

