Foreign adversaries are using multiple AI tools, OpenAI warns

Foreign adversaries are increasingly using multiple AI tools to power hacking and influence operations, according to a new OpenAI report released Tuesday.

Why it matters: In the cases OpenAI discovered, the adversaries typically turned to ChatGPT to help plan their schemes, then used other models to carry them out — reflecting the range of applications for AI tools in such operations.

Zoom in: OpenAI banned several accounts tied to nation-state campaigns that seemed to be using multiple AI models to improve their operations.

  • A Russia-based actor generating content for a covert influence operation used ChatGPT to write prompts that appeared to be meant for another AI video model.
  • A cluster of Chinese-language accounts used ChatGPT to research and refine phishing automation they wanted to run on DeepSeek, a China-based model.
  • OpenAI also confirmed that an actor the company previously disrupted was the same one Anthropic recently flagged in a threat report, suggesting the actor was using both companies' tools.

Between the lines: OpenAI mostly observed threat actors using ChatGPT to improve their existing tactics, rather than creating new ones, Ben Nimmo, principal investigator on OpenAI's intelligence and investigations team, told reporters in a call ahead of the report's release.

  • However, the multi-model approach means that investigators have "just a glimpse" at how threat actors are using a specific model, Nimmo said.

The intrigue: Nation-state hackers and scammers are also learning to hide the telltale signs of AI usage, OpenAI's research team found. One scam network asked ChatGPT to remove em dashes from its writing, for example.

The big picture: Much like the U.S. government, foreign adversaries have been exploring ways to use ChatGPT and similar tools for years.

  • In the latest report, OpenAI said it had banned accounts that appeared to be tied to both China-based entities and Russian-speaking criminal groups for using the model to help develop malware and write phishing emails.
  • The company also banned accounts linked to Chinese government entities, including some that were asking OpenAI's models to "generate work proposals for large-scale systems designed to monitor social media conversations," according to the report.

What to watch: The campaigns OpenAI identified didn't seem to be very effective, per the report. But nation-state entities are still early in their AI experiments.
