Microsoft’s replacement of human editors with artificial intelligence has faced its first big embarrassment.
In late May, Microsoft decided to fire many of its human editors for MSN News and replace them with an AI.
Earlier this week, a news story appeared about Little Mix member Jade Thirlwall’s experience of facing racism. The story appeared innocent enough until you realised Microsoft’s AI had confused two of the group’s mixed-race members. Thirlwall quickly pointed out the error.
In an Instagram story, Thirlwall wrote: “@MSN If you’re going to copy and paste articles from other accurate media outlets, you might want to make sure you’re using an image of the correct mixed race member of the group.”
She added: “This shit happens to @leighannepinnock and I ALL THE TIME that it’s become a running joke … It offends me that you couldn’t differentiate the two women of colour out of four members of a group … DO BETTER!”
Microsoft’s remaining human editors were reportedly told to watch for the AI subsequently publishing stories about its own racist error and to remove them manually.
Staff at MSN have also been told to await the publication of this Guardian article and try to manually delete it from the website, because there is a high risk the Microsoft robot editor taking their jobs will decide it is of interest to MSN readers. https://t.co/KkKDZqpHWu

— Jim Waterson (@jimwaterson) June 9, 2020
The Microsoft News app ended up being flooded with stories about the incident. Evidently, the remaining human editors couldn’t move fast enough against their automated counterpart.
Final update on the thread of news dystopia: Microsoft’s artificial intelligence news app is now swamped with stories selected by the news robot about the news robot backfiring. pic.twitter.com/X0LwfVxw8e

— Jim Waterson (@jimwaterson) June 9, 2020
According to Waterson, the recently sacked human staff at MSN have since been told to stop telling him what the AI is doing.
This isn’t the first time an AI-powered solution from Microsoft has come under fire for racism.
An infamous Twitter chatbot developed by Microsoft, called Tay, ended up spouting racist and misogynistic vitriol back in 2016. The chatbot obviously wasn’t designed to be such an unsavoury character, but Microsoft, for some reason, thought it would be a good idea to let internet denizens train it.
One of the most pressing concerns in this increasingly draconian world we live in is that of mass surveillance and facial recognition. While IBM announced this week it wants nothing more to do with the technology, Microsoft remains a key player.
An experiment by the Algorithmic Justice League last year found serious disparities between the performance of facial recognition algorithms based on gender and skin colour.
Microsoft’s algorithm actually performed the best of those tested, achieving 100 percent accuracy when detecting lighter-skinned males. However, it was just 79.2 percent accurate when used on darker-skinned females.
If that version of Microsoft’s facial recognition system were used for surveillance, almost two in every ten women with darker skin would risk being falsely flagged. In busy areas, that could mean hundreds, if not thousands, of people facing automated profiling each day.
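The arithmetic behind that claim is straightforward: a 79.2 percent accuracy rate implies roughly a 20.8 percent error rate, or about two in every ten. The sketch below works through the numbers; the daily scan count is a hypothetical figure for illustration, not one from the study.

```python
# Back-of-the-envelope: what a 79.2% accuracy rate means at scale.
accuracy_darker_female = 0.792
error_rate = 1 - accuracy_darker_female  # roughly two in every ten

# Hypothetical footfall figure, purely for illustration.
daily_scans = 5_000
expected_errors = daily_scans * error_rate

print(f"Error rate: {error_rate:.1%}")                        # 20.8%
print(f"Expected misidentifications/day: {expected_errors:.0f}")  # 1040
```

Even with modest foot traffic, the error rate compounds into a substantial number of people misidentified every day.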
While ideally algorithms wouldn’t have any biases or flaws, these incidents show exactly why humans should almost always be involved in final decisions. That way, when things go wrong, there is at least accountability with a specific person rather than a vague AI error to blame.