
Editorial: Stopping AI’s discrimination will be difficult, but vital


Several human rights organisations have signed a declaration calling on governments and companies to help ensure AI technologies do not discriminate, but achieving that will be difficult.

Amnesty International and Access Now prepared the ‘Toronto Declaration’ (PDF), which has also been signed by Human Rights Watch and the Wikimedia Foundation. As an open declaration, other companies, governments, and organisations are being called on to add their endorsement.

In a post, Access Now wrote:

“As machine learning systems advance in capability and increase in use, we must examine the positive and negative implications of these technologies.

We acknowledge the potential for these technologies to be used for good and to promote human rights, but also the potential to intentionally or inadvertently discriminate against individuals or groups of people.

We must keep our focus on how these technologies will affect individual human beings and human rights. In a world of machine learning systems, who will bear accountability for harming human rights?”

Ethics have become a major talking point in the AI industry. However, much of the conversation so far has focused on drawing red lines when it comes to surveillance and military applications.

There’s a big debate over AI’s potential impact on jobs. Some believe automation will cause a shortage of work, while others argue that most jobs will simply be enhanced by AI.

If jobs are being replaced, ideas like a universal basic income will have to be re-examined. If jobs are being enhanced, ensuring AI does not discriminate will be even more important.

AI has already shown discrimination

Technologies developed and used in the West are typically developed by white males.

Research into the gender and race gap among Silicon Valley executives at least provides some indication of the representation problem.

What this means is that, unintentionally, products often perform better for this particular group. Today, that could mean something relatively trivial, like Siri recognising an American male voice with greater accuracy (even as a British male, I find Silicon Valley-developed products often struggle with my accent!)

A 2010 study by researchers at NIST and the University of Texas at Dallas found that algorithms designed and tested in East Asia are better at recognising East Asians, while those designed in Western countries are more accurate at detecting Caucasians.

However, if jobs are becoming more reliant on AI, the underlying systems need to work as well for everyone who uses them. Failing to ensure this will give certain groups an advantage over others.

“From policing, to welfare systems, online discourse, and healthcare – to name a few examples – systems employing machine learning technologies can vastly and rapidly change or reinforce power structures or inequalities on an unprecedented scale and with significant harm to human rights,” wrote Access Now.

Policing is one area of particular concern. An investigative report by ProPublica revealed that computer-generated ‘risk assessment scores’ used to determine eligibility for parole are almost twice as likely to label black defendants as potential repeat offenders, despite evidence to the contrary.

Similarly, a 2012 study (paywall) published by the IEEE found that police surveillance cameras using facial recognition to identify suspected criminals are five to 10 percent less accurate when identifying African Americans – which could lead to more innocent black people being arrested.
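Disparities like those above are typically surfaced by auditing a model's accuracy separately for each demographic group. As a minimal sketch – using entirely hypothetical labels, predictions, and group tags, not data from the studies cited – such an audit might look like this:

```python
# Minimal sketch of a per-group accuracy audit.
# All data below is hypothetical and purely illustrative.

def accuracy_by_group(labels, predictions, groups):
    """Return {group: accuracy} for a classifier's predictions."""
    totals, correct = {}, {}
    for y, pred, g in zip(labels, predictions, groups):
        totals[g] = totals.get(g, 0) + 1
        correct[g] = correct.get(g, 0) + (y == pred)
    return {g: correct[g] / totals[g] for g in totals}

# Hypothetical audit data: true label, model prediction, group tag.
labels      = [1, 0, 1, 1, 0, 1, 0, 0]
predictions = [1, 0, 1, 0, 1, 1, 0, 1]
groups      = ["a", "a", "a", "a", "b", "b", "b", "b"]

scores = accuracy_by_group(labels, predictions, groups)
gap = max(scores.values()) - min(scores.values())
print(scores)  # per-group accuracy, e.g. {'a': 0.75, 'b': 0.5}
print(gap)     # the disparity an audit would flag, here 0.25
```

A gap of this kind between groups is exactly the sort of signal that the studies above report at scale, and that developers would need to monitor before deploying a system in policing or other high-stakes settings.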

Machine learning models are often trained on public data, so we must be careful about which sources are used. Microsoft’s attempt to create a chatbot that learns from the public, Tay, infamously ended up becoming a rather unsavoury character spouting racist and sexist remarks.

The declaration signed today is a good start toward keeping these issues in mind as AI technologies are developed, but it will require tackling inequalities across the whole of society to make these developments truly representative of those they serve.

What are your thoughts on the AI discrimination issue? Let us know in the comments.
