Geekflare
Keval Vachharajani

AI Is Catching More Scammers Than Helping Them

OpenAI has shared a new Disrupting Malicious Uses of AI: October 2025 report, which reveals that its chatbot is being used far more often to detect scams than to create them. This finding turns one of the most common fears around artificial intelligence on its head.

According to the report, ChatGPT is now used three times more frequently to identify scams than to run them. While some fraudsters have tried to use AI tools to translate messages or craft more convincing pitches, far more people are turning to the same technology to check suspicious texts, emails, and job offers before falling for them.

The report describes how OpenAI’s threat-intelligence team disrupted multiple scam networks that appeared to originate in Cambodia, Myanmar, and Nigeria. These groups used ChatGPT to write messages, fake social media posts, and recruitment pitches for investment schemes. One of the operations even created detailed fake biographies and online personas to pose as financial experts. Another group used the model to manage day-to-day logistics inside a scam center, including scheduling and internal communication.

However, the company also found millions of legitimate users doing the opposite. People pasted screenshots of suspicious messages into ChatGPT and asked whether they were real. The model correctly flagged many of them as scams and advised users on how to stay safe. OpenAI states that it has observed this type of scam-spotting behavior millions of times each month.

OpenAI also noted that scammers are not inventing new kinds of fraud. Instead, they are using AI as a convenience tool to speed up translation, write smoother copy, or handle routine communication. In most cases, ChatGPT's safeguards blocked clearly malicious requests, forcing scammers to rely on indirect or manual workarounds.

As OpenAI put it in the report, ChatGPT “is being used to identify scams up to three times more often than it is being used for scams.” In a digital landscape crowded with fake investment pitches and cloned customer-service chats, that ratio may be one of the most encouraging numbers yet in the AI era.
