OpenAI tool aims to uncover AI-generated text

OpenAI has launched a tool for detecting text generated using services like its own ChatGPT.

Generative AI models used for services like ChatGPT have raised many societal and ethical questions: Could they be used to generate misinformation on an unprecedented scale? What if students use them to cheat? Should an AI be credited when its output is used in articles or papers?

A paper (PDF) from the Middlebury Institute of International Studies’ Center on Terrorism,...

Expert calls out ‘misleading’ claim that OpenAI’s GPT-3 wrote a full article

AI expert Jarno Duursma has called out a misleading article in The Guardian which claims to have been written entirely by OpenAI’s GPT-3.

GPT-3 has made plenty of headlines in recent months. The coverage is warranted; GPT-3 is certainly impressive, but many claims about its current capabilities are greatly exaggerated.

The headline of the article which Duursma questions is: "A robot wrote this entire article. Are you scared yet, human?"

It's a headline...

Two grads recreate OpenAI’s text generator it deemed too dangerous to release

Two graduates have recreated and released a fake text generator similar to OpenAI's, which the Elon Musk-founded startup deemed too dangerous to make public.

Unless you've been living under a rock, you'll know the world already has a fake news problem. In the past, at least, fake news had to be written by a real person to be convincing.

OpenAI created an AI which could automatically generate fake stories. Combine fake news with Cambridge Analytica-like targeting, and...