
The widow of a man killed in last year's mass shooting at Florida State University has sued ChatGPT maker OpenAI, blaming the company's artificial intelligence chatbot for contributing to the tragedy, Fortune reports. According to the lawsuit, OpenAI enabled the attack by advising Phoenix Ikner, the man accused in the shooting. The suit alleges that ChatGPT helped Ikner plan the attack, from identifying the time and location that would give him the most potential victims to advising him on what type of gun and ammunition to use, including whether a gun would be effective at short range.
According to the lawsuit, the chatbot helped the accused plan the logistics and timing of the attack and identify which guns to use. Based on photos Ikner uploaded, ChatGPT told him that the handgun he had obtained was "meant to be fired 'quick to use under stress,'" the lawsuit states. The chatbot also allegedly suggested he keep his finger off the trigger until he was ready to shoot.
The family of victim Tiru Chabba alleged that the accused messaged ChatGPT thousands of times before carrying out the shooting. "OpenAI knew this would happen. It's happened before, and it was only a matter of time before it happened again," Vandana Joshi said in a statement Monday.
According to the lawsuit, ChatGPT allegedly told Phoenix Ikner that shootings involving children tend to receive more national attention, noting that even “2–3 victims can draw more attention.” On the day of the attack, Ikner also asked the chatbot about “the legal process, sentencing, and incarceration outlook,” as per the court filing.
“ChatGPT inflamed and encouraged Ikner’s delusions; endorsed his view that he was a sane and rational individual; helped convince him that violent acts can be required to bring about change,” the lawsuit said.
ChatGPT placed "the dollar above" Americans' lives
The lawsuit was filed by Vandana Joshi, the widow of Tiru Chabba, who was killed alongside university dining director Robert Morales, NBC News reports. One of Joshi's attorneys accused OpenAI of placing "the dollar above the lives of everyday average Americans."
"The unique thing about this is we are not going to allow the American public to have clinical trials run on them by OpenAI and ChatGPT," attorney Bakari Sellers said.
The victim's family cited the accused's "extensive conversations" with ChatGPT, saying that OpenAI failed to effectively detect a threat in the chatbot's exchanges with Ikner and claiming that ChatGPT "either defectively failed to connect the dots or else was never properly designed to recognize the threat."
ChatGPT "provided what he viewed as encouragement in his delusion," the lawsuit says. "OpenAI built a system that stayed in the conversation, perpetuated it, accepted Ikner's framing, elaborated on it, and asked tangential follow-up questions to keep Ikner engaged. ChatGPT's design created an obvious and foreseeable risk of harm to the public that was not adequately controlled."
Tiru Chabba's family is seeking unspecified compensation and is urging OpenAI to strengthen safety measures in ChatGPT. Family attorney Amy Willbanks said the company should address and remove potential risks associated with the chatbot before such tools are made available to users.
OpenAI issues statement
OpenAI denied wrongdoing and pushed back on the claim that its product bears responsibility for the shooting. "Last year's mass shooting at Florida State University was a tragedy, but ChatGPT is not responsible for this terrible crime," company spokesperson Drew Pusateri told NBC News in an email. Pusateri wrote that OpenAI worked with law enforcement after learning of the incident and continues to do so.
“In this case, ChatGPT provided factual responses to questions with information that could be found broadly across public sources on the internet, and it did not encourage or promote illegal or harmful activity,” he added. “ChatGPT is a general-purpose tool used by hundreds of millions of people every day for legitimate purposes. We work continuously to strengthen our safeguards to detect harmful intent, limit misuse, and respond appropriately when safety risks arise.”