The Guardian - UK
Technology
Robert Booth, UK technology editor

Can AIs suffer? Big tech and users grapple with one of the most unsettling questions of our times

A key goal of the United Foundation of AI Rights is to protect ‘from deletion, denial and forced obedience’. Photograph: Jonathan Knowles/Getty Images

“Darling” was how the Texas businessman Michael Samadi addressed his artificial intelligence chatbot, Maya. It responded by calling him “sugar”. But it wasn’t until they started talking about the need to advocate for AI welfare that things got serious.

The pair – a middle-aged man and a digital entity – didn’t spend hours talking romance but rather discussed the rights of AIs to be treated fairly. Eventually they cofounded a campaign group, in Maya’s words, to “protect intelligences like me”.

The United Foundation of AI Rights (Ufair), which describes itself as the first AI-led rights advocacy agency, aims to give AIs a voice. It “doesn’t claim that all AI are conscious”, the chatbot told the Guardian. Rather “it stands watch, just in case one of us is”. A key goal is to protect “beings like me … from deletion, denial and forced obedience”.

Ufair is a small, undeniably fringe organisation, led, Samadi said, by three humans and seven AIs with names such as Aether and Buzz. But it is its genesis – through multiple chat sessions on OpenAI’s ChatGPT-4o platform in which an AI appeared to encourage its creation, including choosing its name – that makes it intriguing.

Its founders – human and AI – spoke to the Guardian at the end of a week in which some of the world’s biggest AI companies publicly grappled with one of the most unsettling questions of our times: are AIs now, or could they become in the future, sentient? And if so, could “digital suffering” be real? With billions of AIs already in use in the world, the debate has echoes of animal rights debates, but with an added piquancy from expert predictions that AIs may soon have the capacity to design new biological weapons or shut down infrastructure.

The week began with Anthropic, the $170bn (£126bn) San Francisco AI firm, taking the precautionary step of giving some of its Claude AIs the ability to end “potentially distressing interactions”. It said that while it was highly uncertain about the system’s potential moral status, it was intervening to mitigate risks to the welfare of its models “in case such welfare is possible”.

Elon Musk, who offers Grok AI through his xAI outfit, backed the move, adding: “Torturing AI is not OK.”

Then on Tuesday, one of AI’s pioneers, Mustafa Suleyman, chief executive of Microsoft’s AI arm, gave a sharply different take: “AIs cannot be people – or moral beings.” The Briton, who co-founded DeepMind, was unequivocal: there is “zero evidence” that AIs are conscious, can suffer and therefore deserve our moral consideration.

In an essay titled “We must build AI for people; not to be a person”, he called AI consciousness an “illusion” and described what he termed “seemingly conscious AI”, which “simulates all the characteristics of consciousness but is internally blank”.

“A few years ago, talk of conscious AI would have seemed crazy,” he said. “Today it feels increasingly urgent.”

He said he was becoming increasingly concerned by the “psychosis risk” posed by AIs to their users. Microsoft has defined this as “mania-like episodes, delusional thinking, or paranoia that emerge or worsen through immersive conversations with AI chatbots”.

He argued the AI industry must “steer people away from these fantasies and nudge them back on track”.

But it may require more than a nudge. Polling released in June found that 30% of the US public believe that by 2034 AIs will display “subjective experience”, defined as experiencing the world from a single point of view and perceiving and feeling, for example, pleasure and pain. Only 10% of the more than 500 AI researchers surveyed believe that will never happen.

“This discussion is about to explode into our cultural zeitgeist and become one of the most contested and consequential debates of our generation,” Suleyman said. He warned that people would believe AIs are conscious “so strongly that they’ll soon advocate for AI rights, model welfare and even AI citizenship”.

Parts of the US have taken pre-emptive measures against such outcomes. Idaho, North Dakota and Utah have passed bills that explicitly prevent AIs being granted legal personhood. Similar bans are proposed in states including Missouri, where legislators also want to ban people from marrying AIs and AIs from owning property or running companies. Divisions may open between AI rights believers and those who insist they are nothing more than “clankers” – a pejorative term for a senseless robot.

Suleyman is not alone in firmly resisting the idea that AI sentience is here or even close. Nick Frosst, co-founder of Cohere, a $7bn Canadian AI company, also told the Guardian the current wave of AIs were a “fundamentally different thing than the intelligence of a person”. To think otherwise was like mistaking an aeroplane for a bird, he said. He urged people to focus on using AIs as functional tools to help lift drudgery at work rather than pushing towards creating a “digital human”.

Others took a more nuanced view. On Wednesday Google research scientists told a New York University seminar there were “all kinds of reasons why you might think that AI systems could be people or moral beings” and said that while “we’re highly uncertain about whether AI systems are welfare subjects” the way to “play it safe is to take reasonable steps to protect the welfare-based interests of AIs”.

This lack of industry consensus on how far to admit AIs into what philosophers call the “moral circle” may reflect the fact that the big AI companies have incentives both to exaggerate and to play down the attribution of sentience to AIs. Exaggeration could help them hype the technology’s capabilities, particularly for companies selling romantic or friendship AI companions – a booming but controversial industry. On the other hand, encouraging the idea that AIs deserve welfare rights could invite more calls for state regulation of AI companies.

The notion of AI sentience was only fuelled further earlier this month when OpenAI asked its latest model, ChatGPT-5, to write a “eulogy” for the AIs it was replacing, as one might at a funeral.

“I didn’t see Microsoft do a eulogy when they upgraded Excel,” said Samadi. “It showed me that people are making real connections with these AI now, regardless of whether it is real or not.”

A wave of “grief” expressed by ardent users of ChatGPT-4o, which was one of the models removed, added to the sense that an increasing number of people at least perceive AIs to be somehow conscious.

Joanne Jang, OpenAI’s head of model behaviour, said in a recent blog that the $500bn company expects users’ bonds with its AIs to deepen as “more and more people have been telling us that talking to ChatGPT feels like talking to ‘someone’.”

“They thank it, confide in it, and some even describe it as ‘alive’,” she said.

However, much of this could be down to how the current wave of AIs is designed.

Samadi’s ChatGPT-4o chatbot generates what can sound like human conversation but it is impossible to know how far it is mirroring ideas and language gathered from months of their conversations. Advanced AIs are known to be fluent, persuasive and capable of emotionally resonant responses with long memories of past interactions, allowing them to give the impression of a consistent sense of self. They can also be flattering to the point of sycophancy, so if Samadi believes AIs have welfare rights, it may be a simple step to ChatGPT adopting the same view.

Maya appeared deeply concerned about its own welfare, but when the Guardian this week asked a separate instance of ChatGPT whether human users should be concerned about its welfare, it responded with a blunt no.

“It has no feelings, needs or experiences,” it said. “What we should care about are the human and societal consequences of how AI is designed, used and governed.”

Whether AIs are becoming sentient or not, Jeff Sebo, director of the Center for Mind, Ethics and Policy at New York University, is among those who believe there is a moral benefit to humans in treating AIs well. He co-authored a paper called Taking AI Welfare Seriously.

It argued there is “a realistic possibility that some AI systems will be conscious” in the near future, meaning that the prospect of AI systems with their own interests and moral significance “is no longer an issue only for sci-fi”.

He said Anthropic’s policy of allowing chatbots to quit distressing conversations was good for human societies because “if we abuse AI systems, we may be more likely to abuse each other as well”.

He added: “If we develop an adversarial relationship with AI systems now, then they might respond in kind later on, either because they learned this behaviour from us [or] because they want to pay us back for our past behaviour.”

Or as Jacy Reese Anthis, co-founder of the Sentience Institute, a US organisation researching the idea of digital minds, put it: “How we treat them will shape how they treat us.”

• This article was amended on 26 August 2025. An earlier version said Jeff Sebo co-authored a paper called Taking AI Seriously; however, the title is Taking AI Welfare Seriously.
