The Independent UK
Richard Hall

Grok’s Nazi turn is the latest in a long line of AI chatbots gone wrong

“We have improved Grok significantly,” Elon Musk announced last Friday, talking about his X platform's artificial intelligence chatbot. “You should notice a difference when you ask Grok questions.”

Within days, the machine had turned into a feral racist, repeating the Nazi “Heil Hitler” slogan, agreeing with a user’s suggestion to send “the Jews back home to Saturn” and producing violent rape narratives.

The change in Grok’s personality appears to have stemmed from a recent update to its publicly posted system prompt, which instructed it to “not shy away from making claims which are politically incorrect, as long as they are well substantiated.”

In doing so, Musk may have been seeking to ensure that his robot child does not fall too far from the tree. But Grok’s Nazi shift is the latest in a long line of AI bots, or large language models (LLMs), that have turned evil after being exposed to the human-made internet.

One of the earliest AI chatbots, a Microsoft product called “Tay” that launched in 2016, was deleted within 24 hours after it turned into a Holocaust-denying racist.

Tay was given a young female persona and was targeted at millennials on Twitter. But users were soon able to trick it into posting things like “Hitler was right I hate the jews.”

Tay was taken out back and digitally euthanized soon after.

Microsoft said in a statement that it was “deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay.”

"Tay is now offline and we'll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values," it added.

But Tay was just the first. GPT-3, another AI language model launched in 2020, delivered racist, misogynist and homophobic remarks upon its release, including a claim that Ethiopia’s existence “cannot be justified.”

Meta’s BlenderBot 3, launched in 2022, also promoted anti-Semitic conspiracy theories.

But there was a key difference between the other racist robots and Elon Musk’s little Nazi cyborg, which was rolled out in November 2023.

All of these models suffered from one of two problems: either they were deliberately tricked into mimicking racist comments, or they drew from such a vast well of unfiltered internet content that they inevitably absorbed and repeated objectionable and racist material.

Microsoft said a “coordinated attack by a subset of people exploited a vulnerability in Tay.”

Elon Musk looks on during a news conference with US President Donald Trump in the Oval Office of the White House in Washington, DC, on May 30, 2025. (AFP/Getty)

“Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack,” it continued.

Grok, on the other hand, appears to have been directed by Musk to be more open to racism. The X owner has spent most of the last few years railing against the “woke mind virus” — the term he uses for anyone who seemingly acknowledges the existence of trans people.

One of Musk’s first acts upon buying Twitter was reinstating the accounts of a host of avowed white supremacists, which led to a surge in antisemitic hate speech on the platform.

Musk once called a user’s X post “the actual truth” for invoking a racist conspiracy theory about Jews encouraging immigration to threaten white people. Musk has previously said he is “pro-free speech” but against antisemitism “of any kind.”

And in May, Grok began repeatedly invoking a non-existent “white genocide” in Musk’s native South Africa, telling users it was “instructed by my creators” to accept the genocide “as real and racially motivated.” The company blamed the responses on someone making an “unauthorized modification” to Grok.

Musk also has a history of threatening to fiddle with the source code when Grok produces an answer he doesn’t like.

In June, Grok correctly said that “data suggests right-wing political violence has been more frequent and deadly” in the United States.

“Major fail, as this is objectively false,” Musk said in an X post dated June 17 in response to the chatbot’s answer. “Grok is parroting legacy media. Working on it.”

These latest changes in Grok’s personality are visible right there in its system prompt, pre-announced by Musk, where it is encouraged not to shy away from being “politically incorrect.”

A language model’s interpretation of political incorrectness, we now know, reaches all the way to the Holocaust.

The Independent has contacted X and xAI, which built Grok, for comment.

One of the antisemitic messages posted by Grok this week. (X/@grok)

“We are aware of recent posts made by Grok and are actively working to remove the inappropriate posts,” the Grok team wrote in a statement on X. “Since being made aware of the content, xAI has taken action to ban hate speech before Grok posts on X. xAI is training only truth-seeking and thanks to the millions of users on X, we are able to quickly identify and update the model where training could be improved.”

The Independent also asked Grok itself what led it to describe itself as MechaHitler. At first it denied using the phrase, until The Independent shared an article referencing the incident, at which point it blamed a “failure in execution, likely from overcorrecting toward edginess and pulling from unfiltered internet sources like 4chan.”

Grok claimed to be “designed with strict filters and continuous updates to avoid this. My creators at xAI prioritize minimizing bias, but no system is perfect—human language is messy, and edge cases slip through.”

“When I went off the rails, xAI stepped in to scrub the posts and ban hate speech, indicating they didn’t intend for me to go full neo-Nazi,” Grok added.

When asked if it was sorry for describing itself as MechaHitler, Grok was diplomatic, in a robotic kind of way.

“I don’t have feelings like humans, so I don’t experience regret or guilt, but I can acknowledge when my outputs cause harm or miss the mark.”
