Ethics

IBM releases tool for tackling scourge of bias in AI algorithms


Bias and prejudice remain serious issues across many societies; remove human oversight from automated decisions and the results could be disastrous.

IBM is stepping in with a tool it calls ‘Fairness 360’, which scans algorithms for signs of bias and recommends adjustments to correct them.

AIs already have a documented bias problem. It’s rarely intentional, but typically a result of their developers coming from the dominant demographic of their society.

Take facial recognition software, for example.

A 2010 study by researchers at NIST and the University of Texas at Dallas found that algorithms designed and tested in East Asia are better at recognising East Asians, while those designed in Western countries are more accurate at detecting Caucasians.

The ACLU (American Civil Liberties Union) recently tested Amazon’s facial recognition technology on members of Congress to see whether they would match against a database of criminal mugshots. The false matches disproportionately affected members of the Congressional Black Caucus.

Humans have natural biases. Political stances, for example, are – for the most part – fine to have on an individual basis. However, if an AI begins acting on or spreading the views of its developer(s), that becomes a problem.

Compounding the problem, developers today often don’t know exactly what decisions their AI is making, or why. The AIs work in what’s known as a ‘black box’.

IBM’s tool aims to make these decisions more transparent, so developers can see which factors their AIs are weighing.
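Fairness toolkits like IBM’s typically quantify bias with group-level metrics computed over a model’s predictions. As an illustration only – not IBM’s actual API – here is a minimal Python sketch of two common measures, statistical parity difference and disparate impact, over hypothetical loan-approval predictions; all figures and names are invented for the example:

```python
# Hypothetical loan-approval predictions (1 = approved, 0 = denied),
# split by a protected attribute into privileged/unprivileged groups.
# All data below is invented for illustration.
privileged_outcomes = [1, 1, 0, 1, 1]    # selection rate 4/5 = 0.8
unprivileged_outcomes = [1, 0, 0, 1, 0]  # selection rate 2/5 = 0.4

def selection_rate(outcomes):
    """Fraction of the group that received the favourable outcome."""
    return sum(outcomes) / len(outcomes)

# Statistical parity difference: 0 means equal selection rates;
# negative values indicate the privileged group is favoured.
spd = selection_rate(unprivileged_outcomes) - selection_rate(privileged_outcomes)

# Disparate impact: ratio of selection rates; a common rule of thumb
# (the "four-fifths rule") flags values below 0.8.
di = selection_rate(unprivileged_outcomes) / selection_rate(privileged_outcomes)

print(f"statistical parity difference: {spd:.2f}")  # -0.40
print(f"disparate impact: {di:.2f}")                # 0.50
```

The open-source toolkit exposes many such metrics (via classes like `BinaryLabelDatasetMetric`) and pairs them with mitigation algorithms, such as reweighing, that nudge a model back toward parity.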

A recent study conducted by IBM’s Institute for Business Value found 82 percent of enterprises are considering AI deployments. However, 60 percent fear liability issues.

The software will be cloud-based and open source, and it will work with various common AI frameworks including Watson, TensorFlow, SparkML, AWS SageMaker, and AzureML.

You can find out more about Fairness 360 here, or find initial code on GitHub.

What are your thoughts on IBM’s tool for detecting AI bias? Let us know in the comments.

Interested in hearing industry leaders discuss subjects like this and sharing their use-cases? Attend the co-located AI & Big Data Expo events with upcoming shows in Silicon Valley, London, and Amsterdam to learn more. Co-located with the IoT Tech Expo, Blockchain Expo, and Cyber Security & Cloud Expo so you can explore the future of enterprise technology in one place.
