LiveScience
Ben Turner

Google's AI tells users to add glue to their pizza, eat rocks and make chlorine gas

Google's AI logo as seen at the Impact'24 congress in Poznan, Poland.

Google has updated its search engine with an artificial intelligence (AI) tool, but the new feature has told users to eat rocks, add glue to their pizzas and clean their washing machines with chlorine gas, according to various social media and news reports.

In a particularly egregious example, the AI appeared to suggest jumping off the Golden Gate Bridge when a user searched "I'm feeling depressed."

The experimental "AI Overviews" tool scours the web to summarize search results using the Gemini AI model. The feature has been rolled out to some users in the U.S. ahead of a worldwide release planned for later this year, Google announced May 14 at its I/O developer conference.

But the tool has already caused widespread dismay across social media, with users claiming that on some occasions AI Overviews generated summaries using articles from the satirical website The Onion and comedic Reddit posts as its sources. 

"You can also add about ⅛ cup of non-toxic glue to the sauce to give it more tackiness," AI Overviews said in response to one query about pizza, according to a screenshot posted on X. Tracing the answer back, it appears to be based on a decade-old joke comment made on Reddit.


Other erroneous claims include that Barack Obama is a Muslim, that Founding Father John Adams graduated from the University of Wisconsin 21 times, that a dog played in the NBA, NHL and NFL, and that users should eat a rock a day to aid their digestion.

Live Science could not independently verify the posts. In response to questions about how widespread the erroneous results were, Google representatives said in a statement that the examples seen were "generally very uncommon queries, and aren't representative of most people's experiences."

"The vast majority of AI Overviews provide high quality information, with links to dig deeper on the web," the statement said. "We conducted extensive testing before launching this new experience to ensure AI overviews meet our high bar for quality. Where there have been violations of our policies, we've taken action — and we're also using these isolated examples as we continue to refine our systems overall." 

This is far from the first time that generative AI models have been spotted making things up — a phenomenon known as "hallucinations." In one notable example, ChatGPT fabricated a sexual harassment scandal and named a real law professor as the perpetrator, citing fictitious newspaper reports as evidence.
