The Guardian - UK
Comment
Editorial

The Guardian view on ChatGPT search: exploiting wishful thinking

[Image: the ChatGPT logo displayed on a smartphone screen.] ‘Chatbots are more authoritative-sounding but they are not more truthful.’ Photograph: Rafael Henrique/SOPA Images/REX/Shutterstock

In his 1991 book Consciousness Explained, the cognitive scientist Daniel Dennett describes the juvenile sea squirt, which wanders through the sea looking for a “suitable rock or hunk of coral to … make its home for life”. On finding one, the sea squirt no longer needs its brain and eats it. Humanity is unlikely to adopt such culinary habits but there is a worrying metaphorical parallel. The concern is that in the profit-driven competition to insert artificial intelligence into our daily lives, humans are dumbing themselves down by becoming overly reliant on “intelligent” machines – and eroding the practices on which their comprehension depends.

The human brain is evolving. Three thousand years ago, our ancestors had brains that were larger than our own. At least one explanation is that intelligence became increasingly collective around 100 generations ago, when humans breached a population threshold beyond which individuals could rely on information shared across the group. Prof Dennett wrote that the most remarkable expansion of human mental powers – the rise of civilisation through art and agriculture – was almost instantaneous from an evolutionary perspective.

This socialisation of synaptic thought is now being tested by a different kind of information exchange: the ability of AI to answer any prompt with human-sounding language – suggesting some sort of intent, even sentience. But this is a mirage. Computers have become more accomplished, but they lack genuine comprehension, which humans develop by growing up as autonomous individuals embedded in a web of social practices. ChatGPT, the most human-like impersonator, can generate elegant prose. But it gets basic maths wrong. It can be racist and sexist. ChatGPT has no nostalgia, no schemes and no reflections. So why all the fuss? In short, money.

When Google’s new AI-powered search tool, Bard, was spotted making an error in a promotional video this week, the mistake wiped more than $150bn off the stock price of its parent company, Alphabet. Why, wondered the cognitive scientist Gary Marcus, was Microsoft’s Bing search engine, powered by ChatGPT and unveiled on the same day as Bard, hailed as “a revolution” despite offering an error-strewn service? The answer is the prospect that humanity might soon be “Binging” rather than “Googling” the web. This does not seem unreasonable: ChatGPT has wowed millions of people since it was unveiled at the end of November.

The trouble is that this enthusiasm rests on sentiment, not evidence. Chatbots sound more authoritative, but they are not more truthful. Prof Marcus points out that their errors, or hallucinations, are in their “silicon blood”, a byproduct of the way they compress their inputs. “Since neither company has yet subjected their products to full scientific review, it’s impossible to say which is more trustworthy,” he writes. “It might well turn out that Google’s new product is actually more reliable.”

Mega-corporations have acquired a wealth of information in an exploitable form without having to understand it. Journalists, politicians and poets may care deeply about the “semantic” aspects of communication, but AI engineers do not: they treat the information in a message as a statistical quantity, a measure of a system’s disorder, indifferent to whether it is true. That is why AI risks creating a new class of weapons in a war on truth.

Humans have a long track record of wishful thinking and of underestimating the risks of new breakthroughs. Commercial interests push technology as a new religion whose central article of faith is that more technology is always better. Web giants want to dazzle users into overestimating their AI tools’ utility, encouraging humanity to cede authority to these systems prematurely, far beyond their actual competence. Entrepreneurial spirit and scientific curiosity have produced many of the modern era’s advances. But progress is not an ethical principle. The danger is not machines being treated like humans, but humans being treated like machines.
