Tom’s Guide
Ryan Morrison

ChatGPT will be ‘much less lazy’ in future, claims OpenAI CEO Sam Altman

(Image: ChatGPT logo on a phone in front of a thinking robot.)

ChatGPT will be “much less lazy” in the future, OpenAI CEO Sam Altman declared on X. His proclamation comes after reports last year that its underlying model GPT-4 had started refusing to respond to some queries, or responded less fully than it could.

The artificial intelligence lab revealed in December that there had been a deterioration in the large language model's performance. It had effectively become "lazy" after an update.

Altman told his followers on X that “GPT-4 had a slow start on its new year's resolutions,” after updates designed to fix the problem didn’t immediately lead to improvements, but added that it “should now be much less lazy now!”

Why was GPT-4 being lazy?

AI tools, particularly large language models like the one behind ChatGPT, are good at automating dull and repetitive tasks. We use them to answer questions from the mundane to the complex and rely on the fact that they will — mostly — give us a useful response.

Last year, reports started to circulate that ChatGPT had become lazy: it wasn’t responding as fully as it once had, giving code snippets instead of fully formed functions, or explaining how to write a poem rather than simply writing it.

Some of this was likely in response to updates to the underlying AI model designed to combat misuse and add guardrails against illegal use cases. There were also efforts from OpenAI to reduce the cost of running the expensive model.

What has OpenAI done to fix the problem?

How OpenAI tackled the laziness problem isn’t obvious, but there have been updates to the underlying model released since the holiday season. This includes new versions of the Turbo model designed to speed up responses without sacrificing quality.

OpenAI says the new GPT-4 Turbo can complete tasks like code generation more thoroughly, while a new GPT-3.5 Turbo reduces the overall cost of completing tasks or responding to queries.

There was some speculation that the AI models were “winding down” for the holiday season, which is why they became slower and less responsive. OpenAI denied this, and indeed the models didn’t immediately bounce back to normal in January.

Altman’s quip that ChatGPT had a “slow start on its new year's resolutions” was a fun way of saying the updates were still filtering through. After all, AI is simply responding to our input, working from pre-trained data, and isn’t sentient enough to make its own choices — yet!
