Beijing launches campaign against AI-generated misinformation


Ryan Daws is a senior editor at TechForge Media with over a decade of experience in crafting compelling narratives and making complex topics accessible. His articles and interviews with industry leaders have earned him recognition as a key influencer by organisations like Onalytica. Under his leadership, publications have been praised by analyst firms such as Forrester for their excellence and performance. Connect with him on X (@gadget_ry) or Mastodon.

The Cyberspace Administration of China (CAC) has launched a campaign to combat fake news generated by AI.

The crackdown focuses on news providers and distribution channels, including short video platforms and trending search lists.

The CAC specifically highlighted manipulative practices such as the use of AI virtual anchors, forged studio scenes, fake news accounts mimicking legitimate outlets, and the manipulation of news to create misleading storylines. These practices are typically employed to generate clickbait.

According to the CAC, it has already taken action against 107,000 counterfeit news accounts and fake anchors and removed 835,000 pieces of false information. The internet regulator is urging citizens to report any fake news accounts they encounter online.

In line with China’s AI media law, which aims to curb the spread of fake news generated by AI, police recently detained an individual in Gansu province for creating fake news using ChatGPT.

The person used ChatGPT to fabricate a news article about a train crash, which quickly gained traction on social media platforms. The police took action against the individual for spreading false information with the intention of increasing website traffic.

ChatGPT is not officially available in China, but users can access it with a supported foreign phone number and a virtual private network (VPN) — both of which are themselves restricted in the country.

The AI-generated media law, effective since 10 January 2023, not only targets individuals like the one detained in Gansu but also holds “deep synthesis service providers” accountable for preventing the misuse of AI algorithms for illegal activities such as fraud, scams, and the dissemination of fake information.

The implementation of this law poses challenges for companies like Tencent, the developer of WeChat, as they need to ensure their AI algorithms are not misused.

Tencent recently introduced what is essentially a “Deepfakes-as-a-Service” product which enables users to create high-definition digital humans for a fee, raising concerns about the potential misuse of such technology.

The Chinese government’s efforts to combat fake news and regulate online communication highlight its commitment to maintaining a secure and trustworthy digital environment, although concerns about censorship and the restriction of freedom of expression have been raised by critics.

(Photo by NII on Unsplash)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

