International Business Times
Merin Rebecca Thomas

Florida Opens Criminal Inquiry Into OpenAI Over ChatGPT Use Tied To University Case

Florida's top law enforcement official has opened a criminal investigation into OpenAI over how its ChatGPT tool may have been involved in a case connected to a Florida State University student, escalating regulatory scrutiny of artificial intelligence systems in the United States.

The move places the company under potential state-level criminal review as officials examine whether its generative AI product played a role in interactions that raised safety concerns.

Florida Attorney General James Uthmeier announced the investigation, which is focused on whether ChatGPT's outputs contributed to harmful outcomes in a case linked to a Florida State University student, NBC News reported. The probe is still in its early stages, and officials have not publicly detailed potential charges or the specific legal theories being examined.

The development comes as regulators across the United States increase oversight of artificial intelligence companies amid rapid adoption of generative AI tools in schools, workplaces, and personal use. State officials have raised concerns that AI systems can produce misleading or unsafe responses without adequate safeguards.

OpenAI, the company behind ChatGPT, has faced growing legal and regulatory scrutiny over how its models handle sensitive topics. The company has previously said it continues to improve safety systems, including content filters and usage policies designed to reduce harmful outputs.

The Florida investigation adds to a widening set of legal and policy challenges facing AI developers. Federal agencies have also been reviewing AI risks, particularly in areas involving minors, education, and mental health interactions.

The case tied to Florida State University has not been fully detailed publicly, but it has prompted renewed attention to how AI chatbots are used by students and whether institutions are equipped to manage the potential risks.

Legal experts say criminal investigations involving AI tools are still relatively untested in U.S. courts, making the Florida case one of the earliest of its kind. Questions remain over how liability could be assigned when AI-generated content is involved, particularly in situations where users independently engage with chatbot systems.

The inquiry also reflects growing pressure on technology companies as lawmakers consider stricter rules for AI deployment. Several states have introduced or proposed legislation aimed at increasing transparency and accountability for AI systems used in education and consumer applications.

At the national level, regulators have been evaluating how generative AI models are trained and deployed, with particular attention to safeguards for vulnerable users. The Federal Trade Commission has previously warned companies about ensuring AI products do not cause harm or mislead consumers, according to Reuters.

Academic institutions have also begun reassessing their policies on AI usage as tools like ChatGPT become more common among students. Universities are increasingly weighing both the educational benefits and potential risks associated with generative AI in academic environments.

OpenAI has faced multiple lawsuits and regulatory inquiries in recent years related to content accuracy, data usage, and safety concerns. The company maintains that it is working to improve model reliability and reduce the risk of harmful outputs, especially in high-stakes contexts.

The broader debate over AI accountability has also reached Congress, where lawmakers have held hearings on the risks and benefits of generative AI. Discussions have focused on transparency requirements, content moderation standards, and potential liability frameworks for AI developers, according to Wall Street Journal reporting on federal AI oversight efforts.

As the Florida investigation proceeds, officials have not indicated a timeline for findings or enforcement actions, and OpenAI had not publicly detailed its response to the probe at the time of reporting.
