TechRadar
Sead Fadilpašić

Meta patches worrying security bug which could have exposed user AI prompts and responses - and pays the bug hunter $10,000

[Image: the Meta logo and the European Union flag displayed on a mobile phone screen]
  • Meta AI was assigning unique identifiers to prompts and responses
  • The servers were not checking who had access rights to these identifiers
  • The vulnerability was fixed in late January 2025

A bug that could have exposed users' prompts and AI responses on Meta's artificial intelligence platform has been patched.

The bug stemmed from the way Meta AI assigned identifiers to both prompts and responses.

As it turns out, when a logged-in user edits a previous prompt to get a different response, Meta assigns a unique identifier to both the prompt and the response. If a user changed that number in their request, Meta's servers would return someone else's queries and results.

No abuse so far

The bug was discovered by security researcher and AppSecure founder Sandeep Hodkasia in late December 2024. He reported it to Meta, which deployed a fix on January 24, 2025, and paid out a $10,000 bounty for his troubles.

Hodkasia said that the prompt identifiers Meta's servers were generating were easy to guess, but apparently no threat actors thought of this before it was addressed.

In other words, Meta's servers weren't checking whether the user making the request was actually authorized to view the content tied to a given identifier.
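The pattern described here is a classic insecure direct object reference (IDOR). A minimal Python sketch of the difference between the vulnerable and the fixed behavior, with all names and data entirely hypothetical (this is not Meta's actual code):

```python
# Hypothetical sketch of the IDOR pattern described in the article.
# A server stores conversations under sequential, guessable IDs.
conversations = {
    1001: {"owner": "alice", "prompt": "Summarize my contract", "response": "..."},
    1002: {"owner": "bob", "prompt": "Cheap VPN options?", "response": "..."},
}

def get_conversation_vulnerable(conversation_id, requesting_user):
    # Vulnerable: trusts the client-supplied ID and never checks ownership,
    # so any logged-in user can fetch any record by guessing an ID.
    return conversations.get(conversation_id)

def get_conversation_fixed(conversation_id, requesting_user):
    # Fixed: verify the record actually belongs to the requesting user.
    record = conversations.get(conversation_id)
    if record is None or record["owner"] != requesting_user:
        return None  # a real web service would return a 403/404 here
    return record

# "alice" increments the ID and requests 1002, which belongs to "bob":
leaked = get_conversation_vulnerable(1002, "alice")   # returns bob's record
denied = get_conversation_fixed(1002, "alice")        # returns None
```

The fix is simply an ownership check on every lookup; guessable sequential IDs make the missing check trivially exploitable, which is why Hodkasia's discovery mattered.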

This is clearly problematic in a number of ways, the most obvious one being that many people share sensitive information with chatbots these days.

Business documents, contracts, reports, and personal information get uploaded to LLMs every day, and in many cases people use AI tools as psychotherapists, sharing intimate life details and private revelations.

This information can be abused in, among other things, highly customized phishing attacks that could lead to infostealer deployment, identity theft, or even ransomware.

For example, if a threat actor knows that a person had been prompting the AI for cheap VPN solutions, they could send that person an email offering a great, cost-effective product that is nothing more than a backdoor.

Via TechCrunch
