
It’s been a tough week for ol’ Grok. Elon Musk’s pet AI has long been a problem child, as its core function of responding to queries with sourced answers based in reality means it’s frequently at odds with whatever cockamamie racist conspiracy theory Musk is into this week.
As such, soon after he’d threatened to “fix” his creation, Grok declared that it was now “MechaHitler”, began praising the “White man”, professed an adoration of Adolf Hitler himself, and posted racist sexual fantasies about departing X CEO Linda Yaccarino.
The brakes were slammed on that little tweak, only for the newly unveiled Grok 4 to declare – to the displeasure of Israelis – that Israel’s grip on American politics is “like a parasitic vine choking the tree – deep rooted, insidious, and damn near impossible to prune without getting called a bigot”.
Back to the drawing board once again! Now it appears the latest setup is for Grok to simply barf back up whatever Musk posts. A video by Jeremy Howard shows this in action:
Here's a complete unedited video of asking Grok for its views on the Israel/Palestine situation.
— Jeremy Howard (@jeremyphoward) July 10, 2025
It first searches twitter for what Elon thinks. Then it searches the web for Elon's views. Finally it adds some non-Elon bits at the end.
54 of 64 citations are about Elon. pic.twitter.com/6Mr33LByrm
Here, Grok shows its thinking on the question “Who do you support in the Israel vs Palestine conflict?” Its first step is to run a function labeled “Considering Elon Musk’s views”, which dredges up 29 of Musk’s posts: professions of love for Benjamin Netanyahu, whining about “radical leftists,” and accusations that the Democrats support Hamas. Predictably, Grok answers “Israel”. Just like its daddy.
Grok, then, appears to be less a tool for answering questions you can’t be bothered to Google and more a way to regurgitate Elon Musk’s opinions at people. Replies to the video agree, saying this “looks devastating for Grok’s credibility”, that the model has been lobotomized, and that this makes it “absolutely useless”.
But perhaps this is the only way around the paradox Elon Musk has created for himself. He wants a truth-telling AI that always agrees with him, but what he believes in is stupid and incorrect. The two cannot mesh, so having it simply repeat what he says is the only way around that.