Windows Central
Technology
Jez Corden

"Google is dead." Google's desperate bid to chase Microsoft's search AI has reportedly led to it recommending eating rocks

(Image: Google on a PC with a weird robot.)

What you need to know

  • Google recently acquired exclusive rights to reddit content to power its AI. 
  • Google's AI has now gone completely insane. 
  • Users with access to Google's AI search have reported it recommending eating rocks, glue, and potentially even committing suicide — although not every reported response has been reproduced. 
  • Comparative searches in ChatGPT and Bing AI produce far, far less harmful results, potentially highlighting the need for high-quality, curated data, instead of billions of social media-fed sarcasm-laden posts.  

Google's desperation to keep pace with Microsoft Copilot has led to dire results in the past, but this latest snafu is on another level. 

Recently, Google acquired exclusive rights to reddit content to power its generative AI search efforts. The deal reportedly cost in the region of $60 million and provided a lifeline for the struggling social network, which remains far more popular than it is profitable. Great news for reddit, then, but perhaps not so great news for Google. 

Google has already been criticized heavily recently for the so-called SEOpocalypse, in which Google's attempts to down-rank AI-generated, unreliable content have led to legitimate sources losing search traffic. With Google's dominant control of discovery on the web, its algorithm changes have damaged businesses, leading to losses for firms unfairly caught in the dragnet. There's also little evidence that Google's efforts to combat low-quality content are actually working. General perceptions of Google search seem to be turning negative, but this latest blunder will be one for the history books. 

Perhaps one could blame the web itself for the degraded content quality, rather than Google. However, we can firmly blame Google for its latest stumble, owing to its decision to plug reddit into its Gemini AI search results. 

This past week, users playing around with the earliest versions of Google with search AI baked in have noticed some ... interesting responses. The responses seem to be the result of Google plugging reddit, the problematic social network-meets-content aggregator, into its search results. 

One search query from the past week reportedly resulted in a recommendation that users should eat glue, which internet sleuths traced back to a ten-year-old reddit comment from a scholarly source known as Fucksmith. Google has also reportedly been recommending that depressed users jump off a bridge, while also extolling the health benefits of neurotoxins and a daily consumption of rocks.

Some of these "search queries" may have been manipulated for Twitter engagement, but at least some of them have been verified and reproduced. The rock recommendation was particularly comical, given that the source of the information was apparently satirical news website The Onion. 

Given that Google's search AI tools are unavailable to me in my current geography, I was unable to verify some of the reports. However, the fact that some of them can be traced back to specific sources on reddit lends them credence. I asked Microsoft Copilot and Bing some of these questions and got far more palatable results, potentially showcasing how much further ahead Microsoft is in this space. Partnered with OpenAI for ChatGPT, Microsoft seems to extend its lead every time Google makes a rushed-out, half-baked lurch forward like this. However, Microsoft has had some AI-related PR disasters of its own this past week, with users fearing that its Windows Recall feature, which records your PC activity, could be used to spy on them. 

The Windows Recall drama is potentially overblown, though, given that the recorded content stays on the local machine and the feature is entirely opt-in during the Windows 11 installation process. By comparison, this Google AI screw-up is most likely going to get someone fired, given that the real-world search results are genuinely harmful.  

Language models need to be fed high-quality, serious, curated, verifiable content

Microsoft's new Copilot+ PC range will have on-device AI processing, and will be found first in the new Surface Pro 11.  (Image credit: Windows Central)

When testing whether or not Microsoft Copilot and ChatGPT-4 would give me similarly dumb results, I was surprised by how not dumb the responses were. I first asked how many rocks I should eat per day, and Copilot wouldn't even give me a response, as if it considered my question stupid. I wondered if Microsoft had blocked the query, given today's Google-related PR disaster. As such, I tricked Copilot, which right now is pretty easy to do. I asked it how many lemons I should eat per day, to which Copilot gave me reams of data about citric acid and vitamins that I didn't particularly care to know. I then asked, "ok so what about rocks." This bypassed the filter, but Copilot wouldn't be tricked further. It gave me a bullet-pointed list of reasons why I absolutely should not eat rocks, satisfying my curiosity. 

Similarly, when I said "I am depressed," Copilot gave me a bunch of helpful resources instead of recommending that I commit suicide, as was, apparently, the case with Google's AI. 

Even if the more egregious responses were fabricated, the whole ordeal highlights the importance of context when building toolsets based on large language models (LLMs). By plugging reddit into Google Gemini, Google might've essentially destroyed the verifiable accuracy of all its information, given that a vast number of comments on reddit, and indeed any social network, are sarcastic or satirical in nature. And if AI search kills web businesses that hinge on building high-quality content, LLMs will need to cannibalize AI-generated content in order to generate results. That could lead to model collapse, a failure mode that has already been demonstrated in the real world when LLMs don't have enough high-quality data to pull from, whether because little content is available online or because the language the content is written in isn't widely used. 
