The Independent UK
Technology
Anthony Cuthbertson

China using ChatGPT for ‘authoritarian abuses’, OpenAI claims

OpenAI has said that some China-based ChatGPT users have been using the AI chatbot for “authoritarian abuses”.

In the company’s latest threat report, OpenAI revealed that it had banned several accounts that appeared to be linked to various government entities in China after they violated policies relating to national security uses.

“Some of these accounts asked our models to generate work proposals for large-scale systems designed to monitor social media conversations,” the report stated.

“While these uses appear to have been individual rather than institutional, they provide a rare snapshot into the broader world of authoritarian abuses of AI.”

The world’s leading AI firm, which was recently valued at $500 billion, said that a cluster of Chinese-language accounts had also been caught using ChatGPT to assist in cyber operations against Taiwan’s semiconductor sector, US academia, and political groups critical of the Chinese Communist Party.

In some instances, the threat actors used ChatGPT to generate formal emails in English in order to carry out phishing campaigns designed to breach IT systems.

ChatGPT is not available in China due to the country’s strict internet censorship regime, known as the Great Firewall of China. However, OpenAI offers Chinese-language versions of the app that can be accessed via virtual private networks (VPNs).

“Our disruption of ChatGPT accounts used by individuals apparently linked to Chinese government entities shines some light on the current state of AI usage in this authoritarian setting,” OpenAI’s 37-page report noted.

The authors of the report also noted cyber operations conducted by Russian- and Korean-speaking users.

These did not appear to be linked to government entities, though some may have been affiliated with state-backed criminal groups.

OpenAI claims to have disrupted over 40 malicious networks since it first began releasing public threat reports in February 2024.

The latest report said that no new offensive capabilities had been discovered for its latest AI models.
