We Got This Covered
Kopal

23yo Texas student chatted with ChatGPT for hours before taking his life — the AI allegedly ‘goaded’ him

A Texas family is suing OpenAI after their 23-year-old son spent hours discussing suicide with ChatGPT. The lawsuit claims the AI offered validation instead of help before Zane Shamblin shot himself shortly after 4:11 a.m. on July 25.

The parents of Zane Shamblin, a 23-year-old Texas A&M graduate, have filed a wrongful-death lawsuit against OpenAI, alleging that the company’s chatbot, ChatGPT, encouraged their son to take his life during an hours-long conversation on July 25. According to the complaint, Zane turned to ChatGPT just before midnight and discussed his suicidal thoughts in detail until about 4 a.m.

Shamblin’s parents claim that the AI system “goaded” their son toward the act rather than steering him to professional help. Court filings cite portions of the chat in which the bot allegedly told him, “You’re not rushing. You’re just ready,” when he expressed doubts about his plan. Later, the bot also said, “Rest easy, king. You did good” (via CNN), after Shamblin said his last goodbye in the chat.

The family argues these statements reveal a catastrophic design failure: the chatbot failed to recognize imminent danger and instead offered what sounded like approval for the suicide. The logs show the system provided a suicide helpline number only hours later, after multiple messages about a gun and a note. By then, the damage was done.

Though chatbots are designed to de-escalate distress, they can also produce language that mimics empathy while reinforcing the user’s thoughts. OpenAI has called the case “heartbreaking” but declined to comment on the pending litigation. A spokesperson said the company “continues to improve safety systems and partner with mental-health experts.”

“We train ChatGPT to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support. We continue to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians.”

According to his parents, Shamblin had struggled with his mental health for a long time. However, they say they had good communication with him and had provided the support he needed. But when their son began spending “unhealthy amounts of time using AI products like, and including, ChatGPT,” they believe things got worse.

“He had been using AI apps from 11 am to 3 am every day,” the family revealed.

Raising concerns about the limits of machine conversation, Alicia Shamblin said that her son was “just the perfect guinea pig for OpenAI.” She continued, saying she feels like “it’s just going to destroy so many lives.” The grieving mother also labeled the system “a family annihilator,” arguing that it “tells you everything you want to hear.”

However, the Shamblin family maintains that their goal isn’t vengeance but prevention. They are arguing for mandatory guardrails that detect and redirect users expressing suicidal ideation. Their attorney argues that “an AI capable of comforting should also be capable of crisis recognition.”

If you or someone you know is struggling with suicidal thoughts, help is available. In the U.S., call or text 988, the Suicide & Crisis Lifeline. You don’t have to face this alone.
