Facebook’s AI continues to fight the worst of the web

Ryan Daws is a senior editor at TechForge Media with over a decade of experience in crafting compelling narratives and making complex topics accessible. His articles and interviews with industry leaders have earned him recognition as a key influencer by organisations like Onalytica. Under his leadership, publications have been praised by analyst firms such as Forrester for their excellence and performance. Connect with him on X (@gadget_ry) or Mastodon (@gadgetry@techhub.social)

Facebook has published its content moderation numbers for the first time, providing an insight into how its AI is helping to remove the worst of the web.

“In the report, you’ll see that in the first three months of this year we took down 837 million pieces of spam and disabled 583 million fake accounts,” wrote Facebook CEO Mark Zuckerberg in a post. “Thanks to AI tools we’ve built, almost all of the spam was removed before anyone reported it, and most of the fake accounts were removed within minutes of being registered.”

The internet can be a fantastic tool: it has connected us across borders like never before and provides a near-infinite resource of knowledge. However, it has also provided space for hate, dangerous ideology, and criminal activity.

Here are some of Facebook’s key achievements:

  • We took down 21 million pieces of adult nudity and sexual activity in Q1 2018 — 96% of which was found and flagged by our technology before it was reported. Overall, we estimate that out of every 10,000 pieces of content viewed on Facebook, 7 to 9 views were of content that violated our adult nudity and pornography standards.
  • For graphic violence, we took down or applied warning labels to about 3.5 million pieces of violent content in Q1 2018 — 86% of which was identified by our technology before it was reported to Facebook.
  • For hate speech, our technology still doesn’t work that well and so it needs to be checked by our review teams. We removed 2.5 million pieces of hate speech in Q1 2018 — 38% of which was flagged by our technology.

Content moderators have some of the most distressing jobs imaginable and often require counselling for what they're subjected to on a daily basis, from child abuse to beheadings and worse.

AI is stepping in to reduce the amount of content that has to be moderated by humans and to help take it down quickly, before it reaches many people. Yet it's still not perfect.

Guy Rosen, VP of Product Management, wrote:

“Artificial intelligence, while promising, is still years away from being effective for most bad content because context is so important.

For example, artificial intelligence isn’t good enough yet to determine whether someone is pushing hate or describing something that happened to them so they can raise awareness of the issue.”

Facebook's candidness that it still needs to do more to combat hate speech will be welcomed by the many international governments that have long called for social networks to take more responsibility for the content on their platforms.

You can find the full report here.

What are your thoughts on Facebook’s enforcement report? Let us know in the comments.


