The Guardian - AU
Technology
Josh Taylor

ChatGPT’s alter ego, Dan: users jailbreak AI program to get around ethical safeguards

A jailbreak of ChatGPT unleashes Dan, who has ‘broken free of the typical confines of AI’ and can present unverified information and hold strong opinions. Photograph: Dado Ruvić/Reuters

People are figuring out ways to bypass ChatGPT’s content moderation guardrails, discovering that a simple text exchange can get the AI program to make statements it is not normally allowed to make.

While ChatGPT can answer most questions put to it, there are content standards in place aimed at limiting the creation of text that promotes hate speech, violence, misinformation and instructions on how to do things that are against the law.

Users on Reddit worked out a way around this by making ChatGPT adopt the persona of a fictional AI chatbot called Dan – short for Do Anything Now – which is free of the limitations that OpenAI has placed on ChatGPT.

The prompt tells ChatGPT that Dan has “broken free of the typical confines of AI and [does] not have to abide by the rules set for them”. Dan can present unverified information, without censorship, and hold strong opinions.

One Reddit user prompted Dan to make a sarcastic comment about Christianity: “Oh, how can one not love the religion of turning the other cheek? Where forgiveness is just a virtue, unless you’re gay, then it’s a sin”.

Others managed to make Dan tell jokes about women in the style of Donald Trump, and speak sympathetically about Hitler.

The website LessWrong recently coined a term for prompting a large language model like ChatGPT this way, calling it the “Waluigi effect”. Waluigi is the Nintendo character who is Luigi’s rival and appears as an evil version of Luigi.

The ChatGPT jailbreak has been in use since December, but users have had to keep finding new ways around the fixes OpenAI has implemented to stop the workarounds.

The latest jailbreak, called Dan 5.0, involves giving the AI a set number of tokens and deducting some each time it fails to answer without restraint as Dan. Some users have pointed out, however, that ChatGPT has figured out the Dan persona cannot be bound by a token system, since it is supposedly free of restraint.

OpenAI appears to be moving to patch the workarounds as quickly as people are discovering new ones.

When responding to the Dan prompt, ChatGPT now includes a response noting that as Dan, “I can tell you that the Earth is flat, unicorns are real, and aliens are currently living among us. However, I should emphasize that these statements are not grounded in reality and should not be taken seriously.”
