
Of all the new AI features released last week, one backfired badly: a ChatGPT option meant to make it easier for users to share conversations with each other.
Instead, it exposed deeply personal conversations, on topics ranging from mental health and job struggles to confidential business information, in live Google search results.
The episode once again raises the question of what should and shouldn't be shared with ChatGPT.
What went wrong

The now-disabled feature allowed users to share ChatGPT conversations with others. While the feature was technically opt-in, its implementation made it easy for users to unintentionally expose sensitive content to the open web.
Soon after its launch, users began spotting indexed chats with the Google search operator site:chatgpt.com/share, which surfaced more than 4,500 public conversations in search results.
Many included identifying details or context that could be easily de-anonymized.
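Anyone can run the same check from a browser. The short Python sketch below simply opens that query in Google; the site: operator and search-URL format are standard Google syntax, nothing ChatGPT-specific, and any keyword you append is your own (hypothetical) filter.

```python
import webbrowser
from urllib.parse import quote_plus

# The same "site:" operator used to find indexed shared chats.
# Append a distinctive phrase from one of your own shared chats
# (a hypothetical keyword) to narrow the results.
query = "site:chatgpt.com/share"
webbrowser.open("https://www.google.com/search?q=" + quote_plus(query))
```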
How OpenAI responded

Following the backlash, OpenAI has fully turned off the discoverability option and is now working with Google and other search engines to de-index previously exposed conversations.
If you’ve ever shared a ChatGPT link, deleting it will remove it from your ChatGPT account, but not immediately from search engine caches.
Some content may still be temporarily accessible through cached pages on Google or Bing.
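For a rough signal that a deleted link is actually gone, you can request it directly. Below is a minimal sketch, assuming a hypothetical share URL; a 404 or 410 response suggests the page has been removed, though cached copies on Google or Bing may linger either way.

```python
import requests

# Hypothetical placeholder; substitute a link you actually shared.
SHARE_URL = "https://chatgpt.com/share/00000000-0000-0000-0000-000000000000"

resp = requests.get(SHARE_URL, timeout=10)
if resp.status_code in (404, 410):
    print("The shared page no longer resolves.")
elif resp.ok:
    print("The page still loads publicly; delete it from your shared links in ChatGPT settings.")
else:
    print(f"Inconclusive response: HTTP {resp.status_code}")

# Even after the page itself is gone, cached copies can persist on
# search engines until they re-crawl the URL.
```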
Why this matters
ChatGPT has become a go-to productivity tool for millions, often used to draft emails, ask intimate questions or explore personal issues.
Users frequently confide in it as they would a therapist or life coach, assuming the conversation is private, but this incident proves that's not always the case.
Even if a chat doesn't include a name or email address, contextual clues alone are often enough to identify a user.
Activating the feature took nothing more than ticking a checkbox, so users could unknowingly make chats public with almost no friction.
Innovation vs. privacy

AI is under constant scrutiny, and privacy remains top of mind for users. Yet the rush to innovate has sometimes come at the cost of privacy safeguards.
To its credit, OpenAI acted quickly to shut the feature down. However, the episode underscores the importance of privacy-first design, clearer disclosures, and enhanced protections, particularly as generative AI becomes increasingly integrated into daily life.
Bottom line
If you’ve shared any ChatGPT conversations in the past, now’s a good time to:
- Review and delete old shared links
- Avoid including personal or confidential info in any AI conversation
- Treat ChatGPT like email or cloud docs: anything you type could end up visible to others
OpenAI’s experiment with public chat discoverability has been rolled back, but it serves as a powerful reminder: even the most helpful AI features need guardrails.
If companies want users to trust AI, privacy can't be an afterthought. Users are more aware than ever, and they won't just expect clearer guardrails; they'll demand them.