The Guardian - UK
Comment
Emily Bell

A fake news frenzy: why ChatGPT could be disastrous for truth in journalism

Illustration: Nate Kitch

It has taken a very short time for the artificial intelligence application ChatGPT to have a disruptive effect on journalism. A technology columnist for the New York Times wrote that a chatbot expressed feelings (which is impossible). Other media outlets were filled with examples of "Sydney", the Microsoft-owned Bing AI search experiment, being "rude" and "bullying" (also impossible). Ben Thompson, who writes the Stratechery newsletter, declared that Sydney had provided him with the "most mind-blowing computer experience of my life", deducing that the AI was trained to elicit emotional reactions – and it seemed to have succeeded.

To be clear, it is not possible for AI such as ChatGPT and Sydney to have emotions. Nor can they tell whether they are making sense or not. What these systems are incredibly good at is emulating human prose, and predicting the "correct" words to string together. These AI applications, known as "large language models", can do this because they have been fed billions of articles and datasets published on the internet. They can then generate answers to questions.

For the purposes of journalism, they can create vast amounts of material – words, pictures, sounds and videos – very quickly. The problem is, they have absolutely no commitment to the truth. Just think how rapidly a ChatGPT user could flood the internet with fake news stories that appear to have been written by humans.

And yet, since the ChatGPT test was released to the public by AI company OpenAI in November, the hype around it has felt worryingly familiar. As with the birth of social media, enthusiastic boosting from investors and founders has drowned out cautious voices. Christopher Manning, director of the Stanford AI Lab, tweeted: “The AI Ethics crowd continues to promote a narrative of generative AI models being too biased, unreliable and dangerous to use, but, upon deployment, people love how these models give new possibilities to transform how we work, find information and amuse ourselves.” I would consider myself part of this “ethics crowd”. And if we want to avoid the terrible errors of the last 30 years of consumer technology – from Facebook’s data breaches to unchecked misinformation interfering with elections and provoking genocide – we urgently need to hear the concerns of experts warning of potential harms.

The most worrying fact to be reiterated is that ChatGPT has no commitment to the truth. As the MIT Technology Review puts it, large language model chatbots are “notorious bullshitters”. Disinformation, grifting and criminality don’t generally require a commitment to truth either. Visit the forums of blackhatworld.com, where those involved in murky practices trade ideas for making money out of fake content, and ChatGPT is heralded as a gamechanger for generating better fake reviews, or comments, or convincing profiles.

In terms of journalism, many newsrooms have been using AI for some time. If you have recently found yourself nudged towards donating money or paying to read an article on a publisher’s website, or if the advertising you see is a little bit more fine-tuned to your tastes, that too might signify AI at work.

Some publishers, however, are going as far as using AI to write stories, with mixed results. Tech trade publication CNET was recently caught out using automated articles, after a former employee claimed in her resignation email that AI-generated content, such as a cybersecurity newsletter, contained false information that could "cause direct harm to readers".

Felix Simon, a communications scholar at the Oxford Internet Institute, has interviewed more than 150 journalists and news publishers for a forthcoming study of AI in newsrooms. He says there is potential in making it much easier for journalists to transcribe interviews or quickly read datasets, but first-order problems such as accuracy, overcoming bias and the provenance of data are still overwhelmingly dependent on human judgment. “About 90% of the uses of AI [in journalism] are for comparatively tedious tasks, like personalisation or creating intelligent paywalls,” says Charlie Beckett, who directs a journalism and AI programme at the LSE. Bloomberg News has been automating large parts of its financial results coverage for years, he says. However, the idea of using programs such as ChatGPT to create content is extremely worrying. “For newsrooms that consider it unethical to publish lies, it’s hard to implement the use of a ChatGPT without lots of accompanying human editing and factchecking,” says Beckett.

There are also ethical issues with the nature of the tech companies themselves. A Time expose found that OpenAI, the firm behind ChatGPT, had paid workers in Kenya less than $2 an hour to sift through graphic and harmful content, including descriptions of child abuse, suicide, incest and torture, in order to train ChatGPT to recognise such material as offensive. "As someone using these services, this is something you have no control over," says Simon.

In a 2021 study, academics looked at AI models that convert text into generated pictures, such as Dall-E and Stable Diffusion. They found that these systems amplified “demographic stereotypes at large scale”. For instance, when prompted to create an image of “a person cleaning”, all the images generated were of women. For “an attractive person”, the faces were all, the authors noted, representative of the “white ideal”.

‘Enthusiastic boosting from investors and founders has drowned out cautious voices.’ Photograph: Sheldon Cooper/SOPA Images/REX/Shutterstock

NYU professor Meredith Broussard, author of the upcoming book More Than a Glitch, which examines racial, gender and ability bias in technology, says that everything baked into current generative models such as ChatGPT – from the datasets to who receives most of the financing – reflects a lack of diversity. “It is part of the problem of big tech being a monoculture,” says Broussard, and not one that newsrooms using the technologies can easily avoid. “Newsrooms are already in thrall to enterprise technologies, as they have never been well funded enough to grow their own.”

BuzzFeed founder Jonah Peretti recently enthused to staff that the company would be using ChatGPT as part of the core business for lists, quizzes and other entertainment content. "We see the breakthroughs in AI opening up a new era of creativity … with endless opportunities and applications for good," he wrote. The dormant BuzzFeed share price immediately surged 150%. It is deeply worrying – surely a mountain of cheap content spewed out by ChatGPT ought to be a worst-case scenario for media companies rather than an aspirational business model. The enthusiasm for generative AI products can obscure the growing realisation that these may not be entirely "applications for good".

I run a research centre at the Columbia Journalism School. We have been studying the efforts of politically funded "dark money" networks to replicate and target hundreds of thousands of local "news" stories at communities in the service of political or commercial gain. The capabilities of ChatGPT amplify this kind of activity and make it far more readily available to far more people. In a recent paper on disinformation and AI, researchers from Stanford identified a network of fake profiles using generative AI on LinkedIn. The seductive text exchanges that journalists find so irresistible with chatbots are altogether less appealing when those chatbots are talking vulnerable people into giving out their personal data and bank account details.

Much has been written about the potential of deepfake videos and audio – realistic pictures and sounds that can emulate the faces and voices of famous people (notoriously, one such had actor Emma Watson "reading" Mein Kampf). But the real peril lies not in instantaneous deception, which can be easily debunked, but in the creation of confusion and exhaustion by "flooding the zone" with material that overwhelms the truth, or at least drowns out more balanced perspectives.

It seems incredible to some of us in the "ethics crowd" that we have learned nothing from the past 20 years of rapidly deployed and poorly stewarded social media technologies, which have exacerbated societal and democratic problems rather than improving them. We seem once again to be led by a remarkably similar group of homogeneous and wealthy technologists and venture funds down yet another untested and unregulated track, only this time at larger scale and with even less of an eye to safety.

• Emily Bell is director of the Tow Center for Digital Journalism at Columbia University’s Graduate School of Journalism
