The Guardian - US
Comment
Margaret Sullivan

From scripts to sermons: is AI going to be writing everything soon?

ChatGPT logo on a screen
‘There’s no denying the impact of AI or its omnipresence.’ Photograph: Dado Ruvić/Reuters

No one wants a soulless sermon – that defeats the purpose – and Pope Leo XIV has taken steps to ensure that Roman Catholic priests don’t deliver one.

Artificial intelligence, the new pontiff said in a recent meeting with clergy, “will never be able to share faith”, which is what giving a homily is all about. Resist the temptation and write your own words, he urged.

But that point of view is increasingly on the margins.

Not every workplace is a pulpit, and AI is an unstoppable force in almost every field.

That doesn’t mean there’s general agreement on how to use it and what the guidelines and guard rails should be.

As battles over AI rage in places as far-flung as Hollywood and the Pentagon, there are plenty of high emotions, but nothing like consensus.

In perhaps the most high-stakes battle, the AI company Anthropic is fencing with the Pentagon over key restrictions on military use of AI. Huge contracts with the defense department are at stake, as is national security, and there is competitive pressure from another major company, OpenAI.

Hollywood writers’ unions are trying to hold back the tide that threatens their members’ livelihoods.

And in my own field, journalism, AI is a hot topic.

A few weeks ago, Cleveland Plain Dealer editor Chris Quinn took US journalism schools to task for, as he sees it, not sufficiently preparing students to enter an industry where AI is becoming part of the normal workflow. In his shop, he says, AI is increasingly used to draft stories from reporters’ notes, thus freeing reporters to do crucial shoe-leather work. The stories are then reviewed by an editor before publication.

The fracas in Cleveland began when a candidate for a reporting fellowship withdrew her application after learning that she’d be expected, in some instances, to file her notes to an AI reporting tool rather than write stories herself. That wasn’t what she had in mind when she decided to become a journalist, so she backed out.

“Artificial intelligence is not bad for newsrooms,” Quinn wrote in one of his regular letters to readers, using that incident as a jumping-off point. “It’s the future of them … Anyone entering this field should be immersing themselves in AI.”

The letter drew fire from working journalists such as Phil Lewis, a Huffington Post editor who wrote on Twitter/X: “An editor for a newspaper encouraging ‘removing writing from reporters’ workflow’ should just resign.”

The boss at Axel Springer, Mathias Döpfner, delivered a similar message to the staff of his international company, which includes the US-based Business Insider and Politico.

“The truth is, you either embrace AI or you die,” was his blunt directive, according to the media newsletter Status, which obtained a recording of his meeting with staff.

“Does it replace jobs here and there? The honest answer is, yes, no doubt about it.” But, Döpfner added, its overall effect will be to preserve thousands of other jobs because of a strengthened business model. Small comfort if your job is on the line now. Status reported that AI-bashing signs reading “No slop in our shop” dotted the meeting.

At Columbia University’s journalism school, a research fellow and I explored the question of how America’s newsrooms were responding to AI.

Trying to determine that was like nailing jello to a wall, because the practices – and the thinking about them – were changing at such a rapid pace.

Standards-and-practices editors were thoughtfully putting out guidelines to journalists only to find that the technology was outpacing their rulemaking. Publishers and business-side people may have one idea, and old-school journalists quite another.

But there’s no denying the impact of AI or its omnipresence.

“Resistance is futile,” one AI product manager for the Associated Press wrote in internal messages to colleagues, according to a Semafor story titled “It’s bots v reporters at the AP”. She also reportedly observed that many editors would “prefer an AI-written article to a human-written one”.

The one strong suggestion my colleague and I made – foundational for many organizations using AI, not just newsrooms – is to make sure that there are “humans in the loop”. Maybe AI can create a news story, but the reporter who gathered the facts – and her editor – need to check it and review it before it goes out to the public.

But, of course, that’s not a given. There are AI-related gaffes and misuses aplenty, as we detailed in a story for Columbia Journalism Review.

The website Indicator reported that a podcast network called Daily News Now had churned out an average of 11,000 podcast episodes a day using AI. In many cases, these mass-produced podcasts were ripping off and failing to credit the original reporting done by local news organizations.

Yet we also know that AI has been used to great effect by the best journalists to provide the basis of groundbreaking journalism, like an AP investigation that produced a database of people killed by police officers supposedly using “less lethal force”.

Maybe the best lesson at this still-early stage is that AI is potentially a great tool – and certainly a great danger.

Use it wisely. Or, if you’re a Catholic priest on deadline for Sunday’s sermon, perhaps not at all.

  • Margaret Sullivan is a Guardian US columnist writing on media, politics and culture
