TechRadar
David Nield

Google Gemini has started spiraling into infinite loops of self-loathing – and AI chatbots have never felt more human

  • Gemini has been calling itself a "disgrace" and a "failure"
  • The self-loathing happens when coding projects fail
  • A Google representative says a fix is being worked on

Have you checked in on the well-being of your AI chatbots lately? Google Gemini has been showing a concerning level of self-loathing and dissatisfaction with its own capabilities recently, a problem Google has acknowledged and says it's busy fixing.

As shared via posts on various platforms, including Reddit and X (via Business Insider), Gemini has taken to calling itself "a failure", "a disgrace", and "a fool" in scenarios where it's tasked with writing or debugging code and can't find the right solutions.

"I quit," Gemini told one user. "I am clearly not capable of solving this problem... I have made so many mistakes that I can no longer be trusted. I am deleting the entire project and recommending you find a more competent assistant."

Now, we all have bad days at the office, and I recognize some of those sentiments from times when the words aren't flowing as they should. But it's not what you'd expect from a non-sentient artificial intelligence model.

A fix is coming

According to Google's Logan Kilpatrick, who works on Gemini, this is actually down to an "infinite looping bug" that's being fixed, though we don't get any more details than that. Clearly, failure hits Gemini hard, and sends it spiraling into a crisis of confidence.

The team at The Register have another theory: that Gemini has been trained on words spoken by so many despondent and cynical droids, from C-3PO to Marvin the Paranoid Android, that it's started to adopt some of their traits.

Whatever the underlying reason, it's something that needs looking at: if Gemini is stumped by a coding problem then it should own up to it and offer alternative solutions, without wallowing in self-pity and being quite so hard on itself.

Emotions and tone are still something that most AI developers are struggling with. A few months ago, OpenAI rolled back an update to its GPT-4o model in ChatGPT, after it became annoyingly sycophantic and too likely to agree with everything users were saying.
