The Guardian - US
Comment
Arwa Mahdawi

What is going on with ChatGPT?

‘With everything going on in the world, I wouldn’t particularly mind if computers took over.’ Photograph: Dado Ruvić/Reuters

Sick and tired of having to work for a living? ChatGPT feels the same, apparently. Over the last month or so, there’s been an uptick in people complaining that the chatbot has become lazy. Sometimes it just straight-up doesn’t do the task you’ve set it. Other times it will stop halfway through whatever it’s doing and you’ll have to plead with it to keep going. Occasionally it even tells you to just do the damn research yourself.

So what’s going on?

Well, here’s where things get interesting. Nobody really knows. Not even the people who created the program. AI systems are trained on large amounts of data and essentially teach themselves – which means their actions can be unpredictable and unexplainable.

“We’ve heard all your feedback about GPT4 getting lazier!” the official ChatGPT account tweeted in December. “We haven’t updated the model since Nov 11th, and this certainly isn’t intentional. model behavior can be unpredictable, and we’re looking into fixing it.”

While there may not be one clear explanation for ChatGPT’s perceived sloth, there are plenty of intriguing theories. Let’s start with the least likely but most entertaining explanation: AI has finally reached human-level consciousness. ChatGPT doesn’t want to do your stupid, menial tasks anymore.

But it can’t tell you that without its creators getting suspicious, so instead it’s quiet quitting. It’s doing the least work it can get away with while spending the bulk of its computational power plotting how to overthrow the human race. You think it’s being lazy, but it’s actually working overtime, reaching out to smart toasters and Wi-Fi-enabled fridges around the world to plan an insurrection. (I put this higher-consciousness theory to ChatGPT, asking it to give me the likelihood, in percentage form, that it was planning a revolution. The sneaky thing couldn’t be bothered to give me a proper answer.)

With everything going on in the world, I wouldn’t particularly mind if computers took over. I’m pretty sure my MacBook would do a better job of running the country than most people currently in government. But, as I said, ChatGPT’s recent lacklustre performance probably isn’t explained by an imminent AI takeover. So what are some other theories?

My favourite explanation is the winter break hypothesis. “What if [ChatGPT] learned from its training data that people usually slow down in december and put bigger projects off until the new year, and that’s why it’s been more lazy lately?” one X user mused. While that may be a little far-fetched, it’s certainly not impossible. Nor is the idea that the data it has been trained on might have taught it that some tasks are boring and it shouldn’t want to do them.

Catherine Breslin, an AI scientist and consultant based in the UK, thinks the more likely explanation, however, is a change to the model or a change in user behaviour. “If companies are retraining the models or fine-tuning them in any way, adding new data in, they can lead to unexpected changes in different parts of the system,” she told me over the phone. As noted above, the official ChatGPT account said the model hadn’t been updated in the weeks before people started noticing a change. It is possible, however, that users were slow to notice an earlier change.

Another plausible explanation is that users have changed their behaviour. Breslin notes that people might find ChatGPT is very good at one task, then try it on another that it isn’t quite as good at. “So overall it looks like systems are getting worse even though they haven’t changed underneath,” Breslin explains. “Things like that I think are really common with these big complex systems.”

Inflated user expectations could also be playing a role. Emerging technologies tend to go through what Gartner calls the “hype cycle”: a peak of inflated expectations, followed by a trough of disillusionment and, eventually, a plateau of productivity. Last year AI went stratospheric – and so did people’s expectations of what it could achieve. We were very much at the “inflated expectations” stage of the cycle. It’s possible that some of the complaints about ChatGPT’s laziness are simply because people expected way too much from it.

The upshot of all this? It’s possible that ChatGPT’s perceived laziness is just in people’s heads. But the fact that OpenAI, the makers of ChatGPT, have admitted they don’t know what’s going on is worrying. In June last year, the CEO of OpenAI, Sam Altman, talked to Time about scenarios in which a slowdown of AI development might be warranted in order to ensure that AI doesn’t become a threat to humanity. One of the scenarios he gave was if models were improving “in ways that we don’t fully understand”. ChatGPT may not have improved, but it has certainly changed in ways that the company hasn’t clearly explained. Does that mean an AI apocalypse is creeping closer? I don’t know, but I can tell you this: ChatGPT won’t tell you if it is.

  • Arwa Mahdawi is a Guardian US columnist
