We Got This Covered
Sadik Hossain

ChatGPT accuses professor of harassment on a trip to Alaska with a student. But he’s never taught at that school or been to Alaska

Jonathan Turley teaches law at George Washington University. Earlier this year, he got some shocking news. ChatGPT, the popular AI chatbot, was telling people he sexually harassed a student. The AI said it happened on a school trip to Alaska. But here’s the thing. None of it was real.

According to the New York Post, a UCLA law professor named Eugene Volokh discovered the problem while testing ChatGPT. He asked the bot to list examples of law professors who had been accused of harassment. ChatGPT gave him five names, and Turley was one of them. The bot said Turley taught at Georgetown University Law Center and harassed a student during a trip organized by the school.

ChatGPT claimed Turley made “sexually suggestive comments” and “attempted to touch her in a sexual manner” during a law school-sponsored trip to Alaska. The bot even said this information came from a Washington Post article published in March 2018. But Turley says everything about this story is wrong. He never worked at Georgetown. He never went to Alaska with any students. The Washington Post never wrote that article. And nobody has ever accused him of harassment.

When AI makes stuff up, real people get hurt

Turley told reporters the fake accusation alarmed him. “First, I have never taught at Georgetown University,” he said. “Second, there is no such Washington Post article.”

He added, “Finally, and most important, I have never taken students on a trip of any kind in 35 years of teaching, never went to Alaska with any student, and I’ve never been accused of sexual harassment or assault.”

Other people have run into similar problems with ChatGPT. A radio host from Georgia named Mark Walters sued OpenAI, the company behind ChatGPT, after the bot falsely said he stole money from a gun rights group. His lawsuit was one of the first times someone tried to hold an AI company responsible for spreading lies. OpenAI said ChatGPT warns users that it might not always be accurate, but lawyers still disagree about whether that’s enough protection.

What happened to Turley shows a big problem with AI right now. These chatbots can make up completely false stories and present them in a way that sounds real and believable. They cite fake sources and give specific details that make the lies seem true.

For someone like Turley, whose career depends on his reputation, this kind of false story can do real damage. He pointed out that once a lie like this gets online, it can spread to thousands of websites before the person even finds out about it.

OpenAI says it knows AI hallucinations are a problem, and the company is trying to fix it. But what happened to Turley proves that these systems can still hurt real people by spreading complete lies about their lives. ChatGPT has been connected to other troubling incidents as well, making people wonder if the technology is safe to use without better safeguards.
