The Guardian - UK
Comment
John Naughton

The ChatGPT bot is causing panic now – but it’ll soon be as mundane a tool as Excel

‘ChatGPT is pretty adept at mimicking human language.’ Photograph: Jonathan Raa/NurPhoto/Rex/Shutterstock

So the ChatGPT language processing model burst upon an astonished world and the air was rent by squeals of delight and cries of outrage or lamentation. The delighted ones were those transfixed by discovering that a machine could apparently carry out a written commission competently. The outrage was triggered by fears of redundancy on the part of people whose employment requires the ability to write workmanlike prose. And the lamentations came from earnest folks (many of them teachers at various levels) whose day jobs involve grading essays hitherto written by students.

So far, so predictable. If we know anything from history, it is that we generally overestimate the short-term impact of new communication technologies, while grossly underestimating their long-term implications. So it was with print, movies, broadcast radio and television and the internet. And I suspect we have just jumped on to the same cognitive merry-go-round.

Before pressing the panic button, though, it’s worth examining the nature of the beast. It’s what the machine-learning crowd call a large language model (LLM) that has been augmented with a conversational interface. The underlying model has been trained on hundreds of terabytes of text, most of it probably scraped from the web, so you could say that it has “read” (or at any rate ingested) almost everything that has ever been published online. As a result, ChatGPT is pretty adept at mimicking human language, a facility that has encouraged many of its users to anthropomorphise it, ie to view the system as more human-like than machine-like. Hence the aforementioned squeals of delight – and also the odd misguided user apparently believing that the machine is in some way “sentient”.

The best-known antidote to this tendency to anthropomorphise systems such as ChatGPT is Talking About Large Language Models, a recent paper by the distinguished AI scholar Murray Shanahan, available on arXiv. In it, he explains that LLMs are mathematical models of the statistical distribution of “tokens” (words, parts of words or individual characters including punctuation marks) in a vast corpus of human-generated text. So if you give the model a prompt such as “The first person to walk on the moon was ... ” and it responds with “Neil Armstrong”, that’s not because the model knows anything about the moon or the Apollo mission but because we are actually asking it the following question: “Given the statistical distribution of words in the vast public corpus of [English] text, what words are most likely to follow the sequence ‘The first person to walk on the moon was’? A good reply to this question is ‘Neil Armstrong’.”
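
Shanahan’s point can be made concrete with a deliberately crude sketch. The short Python snippet below is not how ChatGPT works (the real thing is a giant neural network trained on that web-scale corpus); it is a toy bigram counter that captures only the bare idea of next-token prediction: tally how often each word follows another in a corpus, then answer a prompt with the statistically likeliest continuation. The corpus, function name and output are illustrative only.

# A toy "next-token predictor", nothing like ChatGPT's neural network:
# count which word most often follows another in a tiny corpus, then
# answer a prompt with the statistically likeliest continuation.
from collections import Counter, defaultdict

corpus = [
    "the first person to walk on the moon was neil armstrong",
    "the commander of apollo 11 was neil armstrong",
    "neil armstrong was the first person to walk on the moon",
]

follows = defaultdict(Counter)   # word -> counts of the words that follow it
for sentence in corpus:
    tokens = sentence.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        follows[prev][nxt] += 1

def predict_next(prompt: str) -> str:
    """Return the word that most frequently follows the prompt's last word."""
    last = prompt.lower().split()[-1]
    candidates = follows.get(last)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("The first person to walk on the moon was"))   # -> "neil"

Scale that up from a single-word lookup to a neural network conditioned on thousands of preceding tokens and you have, in caricature, the mechanism Shanahan describes: prediction, not knowledge.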

So what’s going on is “next-token prediction”, which happens to be what many of the tasks we associate with human intelligence also involve. This may explain why so many people are so impressed by the performance of ChatGPT. It’s turning out to be useful in lots of applications: summarising long articles, for example, or producing a first draft of a presentation that can then be tweaked. One of its more unexpected capabilities is as a tool for helping to write computer code. Dan Shipper, an experienced software guy, reports that he spent Christmas experimenting with it as a programming assistant, concluding: “It’s incredibly good at helping you get started in a new project. It takes all of the research and thinking and looking things up and eliminates it… In 5 minutes you can have the stub of something working that previously would’ve taken a few hours to get up and running.” His caveat, though, was that you had to know about programming first.

That seems to me to be the beginning of wisdom about ChatGPT: at best, it’s an assistant, a tool that augments human capabilities. And it’s here to stay. In that sense, it reminds me, oddly enough, of spreadsheet software, which struck the business world like a thunderbolt in 1979 when Dan Bricklin and Bob Frankston wrote VisiCalc, the first spreadsheet program, for the Apple II computer, which was then sold mainly in hobbyist stores. One day, Steve Jobs and Steve Wozniak woke up to the realisation that many of the people buying their computer did not have beards and ponytails but wore suits. And that software sells hardware, not the other way round.

The news was not lost on IBM and prompted the company to create the PC and Mitch Kapor to write the Lotus 1-2-3 spreadsheet program for it. Eventually, Microsoft wrote its own version and called it Excel, which now runs on every machine in every office in the developed world. It went from being an intriguing but useful augmentation of human capabilities to being a mundane accessory – not to mention the reason why Kat Norton (aka “Miss Excel”) allegedly pulls in six-figure sums a day from teaching Excel tricks on TikTok. The odds are that someone, somewhere is planning to do that with ChatGPT. And using the bot to write the scripts.

What I’ve been reading

Triple threat
The Third Magic is a meditation by Noah Smith on history, science and AI.

Don’t look back
Nostalgia for Decline in Deconvergent Britain is Adam Tooze’s long blogpost on the longer history of British economic decline.

Inequality’s impacts
Who Broke American Democracy? is an insightful essay on the Project Syndicate site by Nobel laureate Angus Deaton.
