The Independent UK
Technology
Andrew Griffin

ChatGPT could be ruining our lives – and not in the way you think

First it was jobs, then it was everything; most of the discussion about AI has tended to be about what it will destroy. In recent months – perhaps as it has become more obvious that relatively straightforward LLM systems might change our work but not for the moment replace it – the focus has shifted a little. Now it’s not just jobs that AI is coming for, but our mental health too.

Commentators, and then ChatGPT's creator OpenAI, have recently spoken at length about how the system is being used by people in mental crisis to seek help. This is concerning in itself, but also because ChatGPT has a habit of being overly agreeable, which means it is given to confirming the delusions of people who are mentally ill. It has led to concern that the chatbot is encouraging psychosis and more.

This concern is arguably convenient for OpenAI, which is currently engaged in a number of legal cases that might require it to turn over data on how its chatbot responds to users. If it can claim that those chats might contain sensitive personal information and should be treated like a conversation between a counsellor and a client, it might be able to avoid having to give that data up.

But much of this discussion confines the danger to a niche set of people: those marked as mentally ill. It is a group that tends to be given pity rather than sensitivity. And so much of the discussion of the dangers is confined to a niche, too.

But people – of all demographics, mental states and degrees of wellbeing – are far more attached to ChatGPT than much of this discourse would allow you to think. Look around, chat to people, and you’ll find that they are asking ChatGPT about the intimate, quotidian concerns that would once have been the preserve of a trusted friend. And so ChatGPT itself appears to be becoming their friend.

Such people will often talk about ChatGPT in the abstract terms of a social phenomenon. But dig a little more and you’ll find that lots of people are actually using it incredibly regularly to discuss their everyday worries.

When GPT-5 was rolled out this week, many users complained that it had stopped being quite so friendly. The outcry came not from a small subset of people but from a large enough part of OpenAI’s user base that the company was forced to let people go back to the previous version. That system – GPT-4o – is less capable in many ways but was given to longer, more discursive answers, of the kind you might look for if you believe ChatGPT is your friend.

Much of the discussion treats ChatGPT as if it were a sort of neutral tool, like a search engine. But the news this week made something clear: OpenAI’s product strategy is being shaped by people who consider the product an actual friend, or maybe even something more.

It’s not the first time that this sense of loneliness and a need for a friend has guided tech strategy. The popularity of podcasts – and the consequent vast sums of money that have rolled into them from investors such as Spotify – is in large part down to people wanting someone they can hang out with on demand.

But we are much better at dealing with the dangers of that. Joe Rogan’s misinformation, for instance, was quickly and decisively addressed; we might not have got it exactly right, but the world recognised that having someone say things that are wrong was a problem both real and important.

But we’re not treating AI like that. Because we’re not even having the conversation on the right terms.

This is one important reason why we should worry about deception by chatbots – something they do regularly and unintentionally, because they are given to “hallucinating” information that sounds correct but is not. And the announcement of GPT-5 was deceptive even about deception: OpenAI used a chart that suggested the new model would make things up less, but which actually showed the opposite.

We tend to think of hallucination by large language models as the stuff of funny stories – advice about putting glue on pizza and the like. But is giving people bad advice about their mental state misinformation? Certainly not in the same way; certainly not as obviously.

Even if it’s not outright deception or falsehood, do we know how to talk about tone, and the subtleties of those questions? It took us thousands of years to develop the ways we judge whether a human friend is trustworthy or wise. Yet we behave as if we might know this about AI within just a couple of years.

We’ve spent all that time worrying about jobs, and the end of the world, when the real threat posed by AI might be something altogether more insidious: it might be a horrible friend.
