Study finds YouTube’s AI serves most of the content users regret viewing

Ryan Daws is a senior editor at TechForge Media with over a decade of experience in crafting compelling narratives and making complex topics accessible. His articles and interviews with industry leaders have earned him recognition as a key influencer by organisations like Onalytica. Under his leadership, publications have been praised by analyst firms such as Forrester for their excellence and performance. Connect with him on X (@gadget_ry) or Mastodon.

A major crowdsourced study has found that YouTube’s AI is still recommending most of the videos that users regret viewing.

Mozilla conducted the study using a crowdsourced army of volunteers and found that 71 percent of the content users regretted viewing had been recommended by YouTube's AI.

Over the years, concerns have been raised that YouTube's algorithms have promoted videos ranging from stupid but otherwise innocuous to hate speech, conspiracies, abuse, misinformation, and political/religious extremism. The more such videos are watched, the more they're recommended, forming dangerous bubbles where views harden and other perspectives aren't tolerated.

“We now know that YouTube is recommending videos that violate their very own content policies and harm people around the world — and that needs to stop,” Mozilla says.

As a product of Google, one of the global leaders in AI, it seems shocking that YouTube is still serving up such recommendations. It leads many to question whether there's any real incentive to change, as such clickbait keeps users engaged on the platform, for better or worse.

YouTube is the biggest online video provider, reaching more than two billion viewers per month and delivering over one billion hours of view time every day. With that reach, Google has a huge responsibility to ensure the content it recommends is appropriate.

That's not necessarily an easy task, but Google should be in a position to manage it. Things like hate speech and extremism have no place on YouTube, but Google does have to tread a fine line when it comes to avoiding things like political biases that could be deemed electoral manipulation.

About nine percent of the videos that Mozilla’s study participants regretted viewing were later taken down from YouTube.

Google has taken steps over the years to try to tackle some of the problematic content on its platform and claims to have made 30 changes to its algorithms over the past year to reduce recommendations of harmful videos. While it arguably hasn't gone far enough, it deserves some credit for that. However, these efforts appear to have been more effective in English-speaking countries.

According to Mozilla's study, the rate at which YouTube recommends videos that users regret viewing is 60 percent higher in countries where English is not the primary language. This was especially true in Brazil, Germany, and France.

The report provides a variety of recommendations to help YouTube and its users tackle the problem.

For users, Mozilla recommends checking your “watch” and “search” histories to clear out anything you don’t want influencing the content served to you.

Mozilla also recommends that YouTube set up an independent audit of its recommendation systems, be more transparent about borderline content, and provide the ability to opt out of personalisation altogether.

Lawmakers should mandate some of these things from Google, suggests Mozilla. This includes the release of information and the creation of tools for external researchers to scrutinise YouTube’s algorithms.

You can find a full copy of Mozilla’s report here.

(Photo by Alexander Shatov on Unsplash)

Find out more about Digital Transformation Week North America, taking place on November 9-10 2021, a virtual event and conference exploring advanced DTX strategies for a ‘digital everything’ world.

