
AI Experts: Dear Amazon, stop selling facial recognition to law enforcement


A group of AI experts has signed an open letter to Amazon demanding the company stop selling facial recognition to law enforcement, following bias findings.

Back in January, AI News reported on findings by Algorithmic Justice League founder Joy Buolamwini, who researched some of the world’s most popular facial recognition algorithms.

Buolamwini found that most of the algorithms were biased, misidentifying subjects with darker skin colours and/or female subjects more often.

Here were the results in descending order of accuracy:

Microsoft

  • Lighter Males (100 percent)
  • Lighter Females (98.3 percent)
  • Darker Males (94 percent)
  • Darker Females (79.2 percent)

Face++

  • Darker Males (99.3 percent)
  • Lighter Males (99.2 percent)
  • Lighter Females (94 percent)
  • Darker Females (65.5 percent)

IBM

  • Lighter Males (99.7 percent)
  • Lighter Females (92.9 percent)
  • Darker Males (88 percent)
  • Darker Females (65.3 percent)
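As a quick sanity check on the figures above, the gap between each vendor's best- and worst-served group can be computed directly. This is a minimal sketch using only the accuracies listed in the article; the dictionary keys are my own labels:

```python
# Accuracy figures (percent) from Buolamwini's audit, as listed above.
results = {
    "Microsoft": {"lighter_males": 100.0, "lighter_females": 98.3,
                  "darker_males": 94.0, "darker_females": 79.2},
    "Face++":    {"darker_males": 99.3, "lighter_males": 99.2,
                  "lighter_females": 94.0, "darker_females": 65.5},
    "IBM":       {"lighter_males": 99.7, "lighter_females": 92.9,
                  "darker_males": 88.0, "darker_females": 65.3},
}

for vendor, scores in results.items():
    # Gap between the most and least accurately identified group.
    gap = max(scores.values()) - min(scores.values())
    print(f"{vendor}: best-to-worst accuracy gap = {gap:.1f} points")
```

Even the most accurate system shows a gap of more than 20 percentage points between lighter males and darker females.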

Amazon executives disputed the findings and claimed the tests used a lower confidence threshold than the company recommends for law enforcement use.

“The answer to anxieties over new technology is not to run ‘tests’ inconsistent with how the service is designed to be used, and to amplify the test’s false and misleading conclusions through the news media,” Matt Wood, GM of AI for Amazon’s cloud-computing division, wrote in a January blog post.

Signatories of the open letter came to Buolamwini’s defense, including AI pioneer Yoshua Bengio, a recent winner of the Turing Award.

“In contrast to Dr. Wood’s claims, bias found in one system is cause for concern in the other, particularly in use cases that could severely impact people’s lives, such as law enforcement applications,” they wrote.

Despite having the most accurate facial recognition of the three, Microsoft has not rested on that result and has further improved its accuracy since Buolamwini’s work. The firm also supports a policy requiring signs to be clearly visible wherever facial recognition is in use.

IBM has also made huge strides in levelling the accuracy of its algorithms across all parts of society. Earlier this year, the company unveiled a new one-million-image dataset that is more representative of society’s diversity.

When Buolamwini reassessed IBM’s algorithm, accuracy for darker males jumped from 88 percent to 99.4 percent, for darker females from 65.3 percent to 83.5 percent, and for lighter females from 92.9 percent to 97.6 percent, while for lighter males it stayed the same at 99.7 percent.
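The before-and-after comparison can be tabulated from the figures reported in the reassessment. A minimal sketch; the "before" values are the original audit numbers listed earlier, and the group keys are my own labels:

```python
# IBM accuracy before and after the update, per Buolamwini's reassessment (percent).
before = {"darker_males": 88.0, "darker_females": 65.3,
          "lighter_females": 92.9, "lighter_males": 99.7}
after  = {"darker_males": 99.4, "darker_females": 83.5,
          "lighter_females": 97.6, "lighter_males": 99.7}

for group in before:
    # Improvement in percentage points for each demographic group.
    delta = after[group] - before[group]
    print(f"{group}: {before[group]:.1f}% -> {after[group]:.1f}% ({delta:+.1f} points)")
```

The largest gains went to the groups the original algorithm served worst, with darker females improving by over 18 percentage points.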

Buolamwini commented: “So for everybody who watched my TED Talk and said: ‘Isn’t the reason you weren’t detected because of, you know, physics? Your skin reflectance, contrast, et cetera,’ — the laws of physics did not change between December 2017, when I did the study, and 2018, when they launched the new results.”

“What did change is they made it a priority.”

Aside from potentially automating societal problems like racial profiling, inaccurate facial recognition could be the difference between life and death. For example, a recent study found that driverless cars observing the road for pedestrians had a more difficult time detecting individuals with darker skin colours.

Everyone, not just AI experts, should be pressuring companies to ensure biases are kept well away from algorithms.

Interested in hearing industry leaders discuss subjects like this and their use cases? Attend the co-located AI & Big Data Expo events with upcoming shows in Silicon Valley, London, and Amsterdam to learn more. Co-located with the IoT Tech Expo, Blockchain Expo, and Cyber Security & Cloud Expo.
