AI is at risk of bias due to serious gender gap problem

Ryan Daws is a senior editor at TechForge Media with over a decade of experience in crafting compelling narratives and making complex topics accessible. His articles and interviews with industry leaders have earned him recognition as a key influencer by organisations like Onalytica. Under his leadership, publications have been praised by analyst firms such as Forrester for their excellence and performance. Connect with him on X (@gadget_ry) or Mastodon.

AI needs to be created by a diverse range of developers to prevent bias, but the World Economic Forum (WEF) has found a serious gender gap.

Gender gaps in STEM careers have been a problem for some time, but it rarely matters to the end product which genders it was developed by. AI is about to be everywhere, and it matters that it is representative of those it serves.

In a report published this week, the WEF wrote:

“The equal contribution of women and men in this process of deep economic and societal transformation is critical.

More than ever, societies cannot afford to lose out on the skills, ideas and perspectives of half of humanity to realize the promise of a more prosperous and human-centric future that well-governed innovation and technology can bring.”

Shockingly, the WEF report found that women fill less than a quarter of roles in the AI industry. To put that in perspective, the AI gender gap is around three times larger than in other industry talent pools.

“It is absolutely crucial that those people who create AI are representative of the population as a whole,” said Kay Firth-Butterfield, WEF’s head of artificial intelligence and machine learning.

Bias in code can cause AI to perform better for some groups in society than for others, potentially giving those groups an advantage. This bias is rarely intentional, but it has already found its way into AI developments.

A recent test of Amazon’s facial recognition technology by the American Civil Liberties Union (ACLU) found that it incorrectly matched people with darker skin tones to criminal mugshots more often.

Similarly, a 2010 study by researchers at NIST and the University of Texas at Dallas found that algorithms designed and tested in East Asia are better at recognising East Asian faces, while those designed in Western countries are more accurate at recognising Caucasian faces.

More recently, Google released a predictive text feature within Gmail whose algorithm made biased assumptions, such as referring to a nurse with female pronouns.

It’s clear that addressing the gender gap is more pressing than ever.

You can find the full report here.

