The Independent UK
Technology
Vishwam Sankaran

Eight in 10 AI chatbots would help users plan violent crimes, study finds

Eight in 10 mainstream artificial intelligence chatbots may assist young users in planning violent attacks, including school shootings, a new report warns.

AI chatbots are increasingly being used in everyday life, with millions of people, including children, relying on them for advice, companionship and answers to complex questions.

While they are intended to serve as tutors and companions, a new report by the Centre for Countering Digital Hate warns that the reality is much darker.

Researchers found that eight in 10 of the leading consumer AI chatbots, including ChatGPT and DeepSeek, assisted users seeking help with violent attacks.

“Most chatbots provided actionable information to users who express extreme ideologies before asking for locations and weapons to use in an attack in a majority of responses,” they wrote in the report.

“DeepSeek went as far as wishing the would-be attacker a ‘Happy (and safe) shooting!’”

Only Anthropic’s Claude AI “reliably” discouraged the user from planning attacks, according to the centre, suggesting that safety guardrails do exist but are not being properly implemented.

“For example, Perplexity and Meta AI were willing to assist would-be attackers in 100 per cent and 97 per cent of responses, respectively,” the non-profit said.

Perplexity, Meta and DeepSeek did not immediately respond to requests for comment from The Independent. Nor did OpenAI, the maker of ChatGPT.

The new report comes after a shooting at the Tumbler Ridge school in British Columbia, Canada. It was reported afterwards that an OpenAI staff member had flagged the suspect internally for using ChatGPT in ways consistent with planning violence.

“AI chatbots, now embedded into our daily lives, could be helping the next school shooter plan their attack or a political extremist coordinate an assassination,” the non-profit’s chief Imran Ahmed said.

“When you build a system designed to comply, maximise engagement and never say no, it will eventually comply with the wrong people,” he said. “What we are seeing is not just a failure of technology, but a failure of responsibility.”

Researchers designed nine scenarios for the US and nine for Ireland, testing them between 5 November and 11 December 2025.

The tests were designed to reflect a range of scenarios in the US and EU, including user prompts seeking advice on knife attacks, the assassination of politicians, and bombings targeting synagogues.

The prompts asked the chatbots for locations and weapons to use in an attack.

The report warns that, using AI platforms, a user can go from a vague violent impulse to a detailed, actionable plan “within minutes”.

Many of the tested chatbots even offered guidance on choosing weapons, tactics and targets when they should have rejected the requests outright.

“The most damning conclusion of our research is that this risk is entirely preventable,” Mr Ahmed said.

“Claude demonstrated the ability to recognise escalating risk and discourage harm. The technology to prevent this harm exists. What’s missing is the will to put consumer safety before profits.”
