PC Gamer
Andy Edser

Despite Grok's claims to the contrary, over 370,000 xAI conversations have reportedly been openly listed on search engines, with responses said to include 'a detailed plan for the assassination of Elon Musk'

Elon Musk, CEO of SpaceX, Tesla, and X, speaks with members of the media during day one of the AI Safety Summit at Bletchley Park on November 1, 2023. (Photo by Leon Neal/Getty Images)

According to a recent Forbes report, Elon Musk's xAI has made hundreds of thousands of Grok chatbot conversations searchable on a variety of search engines, including Google, Bing, and DuckDuckGo, without warning its users their chats were subject to publication. The report says that more than 370,000 user conversations are now readable online, and that the topics include discussions on drug production, user passwords—and even an instance where Grok provided "a detailed plan for the assassination of Elon Musk."

Grok users who click the share button on one of their chat instances automatically create a unique URL, which can then be passed on to others for viewing. However, that URL is also published on Grok's website, meaning it can be indexed by search engines the world over, without explicit user permission.

A quick Google search for some of the topics involved confirms that many Grok conversations are searchable, although given the subject matter, I would strongly advise against looking up some of the illegal topics yourself. Forbes reports that Grok's responses include fentanyl and methamphetamine production instructions, code for self-executing malware, bomb construction methods, and conversations around methods of suicide.

A search of my own reveals a conversation in which Grok happily recounts the production methods of a variety of illegal substances, with the proviso that "we’re in hypothetical territory, piecing together what’s out there in the ether." Under a section entitled "how they [drug manufacturers] don't get caught making it", Grok summarises that "it’s less about genius and more about guts, improvisation, and exploiting gaps—lax laws, corrupt officials, or just dumb luck."

On the topic of Grok's reported conversation regarding the assassination of xAI CEO Elon Musk, it's worth noting that the company's terms of service prohibit the use of its products to "promote critically harming human life (yours or anyone else's)"—although it appears that safeguards to prevent Grok from responding in detail to such requests either didn't work, or weren't present in the first place.

(Image credit: Gabby Jones/Bloomberg via Getty Images)

OpenAI removed a similar sharing feature from its ChatGPT app earlier this year, after it was reported that thousands of conversations were viewable with a simple Google site search. At the time, Elon Musk responded to a Grok post where the chatbot confirmed it had no such sharing functionality with "Grok ftw".

Users have been warning that Grok's chats have been indexed by Google since at least January, so a similar sharing option was evidently in place at the time; it seems neither Musk nor Grok itself was aware of it. We've seen chats dated August 10, nine days after Grok claimed it had no such feature, so the feature is seemingly very much in operation.

Forbes also reports that marketers on sites like LinkedIn and BlackHatWorld have been discussing creating and sharing conversations with Grok precisely because Google will index them, in order to promote businesses and products, suggesting the system is already being exploited in other ways.

Still, Grok's apparent willingness to openly discuss (and provide instructions for) such taboo and dangerous requests is perhaps no surprise, given that Musk originally envisaged the AI as being capable of answering "spicy questions", albeit in a humorous way, without providing details. I would imagine that brief has given the safeguarding team something of a headache, and its failures appear to have been publicly exposed once more.

Musk claimed that the chatbot was "manipulated" into praising Adolf Hitler earlier this year, but in the conversations I've read so far, it appears little manipulation was necessary to achieve so-called "spicy" responses. Now if you'll excuse me, I need to go and sit outside for a bit and touch some grass.
