Opinion: PornHub’s new AI is wasted on adult content

Warning: Contains descriptions of explicit sexual acts.

PornHub has announced an AI for tagging adult content, but I can’t help but feel it demonstrates promising technology with better use elsewhere.

The company’s AI is virtually perving on PornHub’s entire online catalogue, frame-by-frame, applying relevant tags so real users can discover content faster. It has been trained on thousands of images of models and specific acts to build a database of names, faces, and positions.
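PornHub hasn’t published implementation details, but frame-by-frame tagging of this kind typically means running a classifier over sampled frames and then aggregating the per-frame scores into video-level tags. Here is a minimal sketch of that aggregation step; the function name, tags, and thresholds are all hypothetical, not PornHub’s actual system:

```python
from collections import defaultdict

def aggregate_tags(frame_scores, threshold=0.5, min_fraction=0.2):
    """Aggregate per-frame classifier scores into video-level tags.

    frame_scores: one dict per sampled frame, mapping tag -> confidence.
    A tag is applied to the whole video if its score clears `threshold`
    in at least `min_fraction` of the frames (illustrative values).
    """
    counts = defaultdict(int)
    for scores in frame_scores:
        for tag, score in scores.items():
            if score >= threshold:
                counts[tag] += 1
    n = len(frame_scores)
    return sorted(tag for tag, c in counts.items() if c / n >= min_fraction)

# Three sampled frames from a hypothetical classifier
frames = [
    {"blonde": 0.9, "outdoor": 0.3},
    {"blonde": 0.8, "outdoor": 0.6},
    {"blonde": 0.7},
]
print(aggregate_tags(frames))  # ['blonde', 'outdoor']
```

Averaging over many frames like this also makes the tags robust to the odd misclassified frame, which matters when the model is scanning an entire catalogue unattended.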

PornHub shared clips of the AI in action with Engadget, which reported that it was able to identify both the names of the performers in a scene and what they were doing. “Tags such as ‘blowjob’, ‘doggy’, ‘cowgirl’, and ‘missionary’ floated on screen with the corresponding action,” wrote Engadget.

The system has been through around 500,000 featured videos as of today, and can even tag content by performer attributes such as ‘blonde’, ‘brunette’, or even the size of their, um, assets…

“Artificial intelligence has quickly reached a fever pitch, with many companies incorporating its capabilities to considerably expedite antiquated processes. And that’s exactly what we’re doing with the introduction of our AI model, which quickly scans videos using computer vision to instantaneously identify pornstars,” said Corey Price, VP, Pornhub. “Now, users can search for a specific pornstar they have an affinity for and we will be able to retrieve more precise results.”

It’s undeniably creepy, but it’s an impressive demonstration of the tasks AI can take out of human hands. If we’re honest, it probably wouldn’t be difficult to find someone willing to be paid to watch and categorise porn, but there are darker types of content on the internet.

Facebook and YouTube, for example, are popular networks for sharing content. Unfortunately, that popularity also attracts material which can have a seriously detrimental impact on the human psyche – such as terrorist propaganda.

These networks have teams who must manually review some of the world’s most horrific content: from child abuse, to beheadings, to suicide. I’d be willing to bet that volunteers for that job are far harder to come by.

And this is where AI can step in. In fact, both YouTube and Facebook are increasingly making use of machine learning to identify problem content, with varying degrees of success.

In a blog post back in June, Kent Walker from Google detailed some of the steps the company is taking to counter terrorist propaganda:

  1. Increased use of machine learning technology. “First, we are increasing our use of technology to help identify extremist and terrorism-related videos. This can be challenging: a video of a terrorist attack may be informative news reporting if broadcast by the BBC, or glorification of violence if uploaded in a different context by a different user.”
  2. More independent human experts. “Machines can help identify problematic videos, but human experts still play a role in nuanced decisions about the line between violent propaganda and religious or newsworthy speech. While many user flags can be inaccurate, Trusted Flagger reports are accurate over 90 percent of the time and help us scale our efforts and identify emerging areas of concern.”
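Google doesn’t describe the mechanics, but the two steps above imply a triage pipeline: the model’s confidence decides whether a flagged video is removed automatically, queued for a human expert, or left alone. A minimal sketch, with entirely hypothetical thresholds:

```python
def route_flagged_video(score, auto_threshold=0.95, review_threshold=0.5):
    """Route a flagged video based on a model confidence score (0 to 1).

    Hypothetical policy: very confident detections are removed
    automatically; borderline cases go to human reviewers, mirroring
    the machine-plus-expert split Google describes; low scores are kept.
    """
    if score >= auto_threshold:
        return "remove"
    if score >= review_threshold:
        return "human_review"
    return "keep"

print(route_flagged_video(0.97))  # remove
print(route_flagged_video(0.70))  # human_review
print(route_flagged_video(0.10))  # keep
```

The middle band is where the nuance Walker mentions lives: the same footage can be BBC news reporting or glorification of violence, so anything the model isn’t sure about falls through to a human.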

Part of the keenness to flag and remove inappropriate content is due to governmental pressure. The UK and France are currently considering whether to introduce a new liability for tech platforms that fail to promptly remove terrorist content. In Germany, a proposal which includes big fines for social media firms that fail to take down hate speech has already gained government backing.

It’s clear that companies such as Google don’t believe we can hand full responsibility for identifying problematic videos over to machines just yet. The continued need for human input is a concern, but it’s good to see the issue being taken seriously: the less we have to expose humans to the worst this world has to offer, the better.

Do you think AI will reduce the need for human moderation of content? Share your thoughts in the comments.
