
Last night I was flicking through a dating app. One guy stood out: “Henry VIII, 34, King of England, nonmonogamy”. Next thing I know, I am at a candlelit bar sharing a martini with the biggest serial dater of the 16th century.
But the night is not over. Next, I am DJing back-to-back with Diana, Princess of Wales. “The crowd’s ready for the drop,” she shouts in my ear, holding a headphone to her tiara. Finally, Karl Marx is explaining why he can’t resist 60% off, as we wait in the cold to get first dibs on Black Friday sales.
On Sora 2, if you can think it, you can probably see it – even when you know you shouldn’t. Launched this October in the US and Canada via invitation only, OpenAI’s video app hit 1m downloads in just five days, surpassing ChatGPT’s debut.
Sora is not the only text-to-video generative AI tool out there, but it has become popular for two main reasons. First, it is the easiest way yet for users to star in their own deepfakes. Type a prompt and a 10-second video appears within minutes. It can then be shared on Sora’s own TikTok-style feed or exported elsewhere. Unlike the mass-produced, low-quality “AI slop” clogging the internet, these clips have unnervingly high production value.
The second reason is that Sora allows users to generate the likenesses of celebrities, sportspeople and politicians – with one crucial caveat: they have to be dead. Living people must give consent to be featured, but there is an exception for “historical figures”, which Sora seems to define as anyone famous and no longer alive.
That seems to be what most users have been doing since launch. The main feed is a surreal whirlpool of brain rot and historical leaders. Adolf Hitler runs his fingers through a glossy mane in a shampoo ad. Queen Elizabeth II catapults herself from a pub table while hurling profanities. Abraham Lincoln erupts with joy on a TV set upon hearing: “You are not the father.” Rev Martin Luther King Jr tells a gas station clerk about his dream that one day all slushy drinks will be free – then grabs the icy beverage and bolts before finishing his sentence.
But relatives of those depicted are not laughing.
“It is deeply disrespectful and hurtful to see my father’s image used in such a cavalier and insensitive manner when he dedicated his life to truth,” Malcolm X’s daughter Ilyasah Shabazz told the Washington Post. She was two when her father was assassinated. Today, Sora clips depict the civil rights activist wrestling with MLK, talking about defecating on himself and making crude jokes.
Zelda Williams, actor Robin Williams’s daughter, pleaded with people in an Instagram story to “please stop” sending her AI videos of her father. “It’s dumb, it’s a waste of time and energy, and believe me, it’s NOT what he’d want,” she said. Shortly before his death in 2014, the actor took legal action to block anyone from using his likeness in advertisements or digitally inserting him into films until 2039. “To watch the legacies of real people be condensed down to … horrible, TikTok slop puppeteering them is maddening,” his daughter added.
Videos using the likeness of the late comedian George Carlin are “overwhelming, and depressing”, his daughter, Kelly Carlin, said in a Bluesky post.
People who have died more recently have also been spotted. The app is littered with videos of Stephen Hawking receiving a “#powerslap” that knocks his wheelchair over. Kobe Bryant dunks on an old woman while shouting about objects up his rectum. Amy Winehouse can be found stumbling around the streets of Manhattan or crying into the camera as mascara runs down her face.
Deaths from the past two years – Ozzy Osbourne, Matthew Perry, Liam Payne – are absent, indicating a cutoff that falls somewhere in between.
Whenever they died, this “puppeteering” of the dead risks redrawing the lines of history, says Henry Ajder, a generative AI expert. “People fear that a world saturated with this kind of content is going to lead to a distortion of these people and how they’re remembered,” he says.
Sora’s algorithm rewards shock value. One video high on my feed shows King making monkey sounds during his I Have a Dream speech. Others depict Bryant re-enacting the helicopter crash that killed him and his daughter.
Actors and cartoons may also portray people posthumously, but stronger legal guardrails apply there: a movie studio is liable for its content, whereas OpenAI is not necessarily liable for what appears on Sora. In some states, depicting someone for commercial use also requires an estate’s consent.
“We couldn’t just digitally resurrect Christopher Lee to star in a new horror film, so why can OpenAI resurrect him to star in thousands of shorts?” asks James Grimmelmann, an internet law expert at Cornell Law School and Cornell Tech.
OpenAI’s decision to hand the personas of the departed to the commons raises uncomfortable questions about how the dead should live on in the generative AI era.
The legal question
Consigning the ghosts of celebrities to for ever haunt Sora might feel wrong, but is it legal? That depends on who you ask.
A major question remains unresolved in internet law: are AI companies covered by section 230, and therefore not liable for the third-party content on their platforms? If OpenAI is protected under section 230, it cannot be sued for what users make on Sora.
“But unless there’s federal legislation on the issue, it’s going to be legal uncertainty until the supreme court takes up a case – and that’s another two to four years,” says Ashkhen Kazaryan, an expert in first amendment and technology policy.
In the meantime, OpenAI must avoid lawsuits. That means requiring the living to give consent. US libel law protects living people from any “communication embodied in physical form that is injurious to a person’s reputation”. On top of this, most states have right of publicity laws that prevent someone’s voice, persona or likeness being used without consent for “commercial” or “misleading” purposes.
Permitting depictions of the dead “is their way of dipping their toe in the water”, says Kazaryan.
The deceased are not protected from libel, but three states – New York, California and Tennessee – grant a postmortem right of publicity (the commercial right to your likeness). Navigating these laws in the context of AI remains a “grey area” without legal precedent, says Grimmelmann.
To sue successfully, estates would have to show OpenAI is liable – for example, by arguing it encourages users to depict the dead.
Grimmelmann notes that Sora’s homepage is full of such videos, in effect promoting this content. And if Sora was trained on large volumes of footage of historical figures, plaintiffs might argue that the app is designed to reproduce it.
OpenAI could, however, defend itself by claiming Sora is purely for entertainment. Each video carries a watermark, which the company could argue stops clips from misleading people or being classed as commercial.
Bo Bergstedt, a generative AI researcher, says most users are exploring, not monetising.
“People are treating it like entertainment, seeing what crazy stuff they can come up with or how many likes they can gather,” he says. Upsetting as this may be for families, it could still comply with publicity laws.
But if a Sora user builds an audience by generating popular clips of historical figures and starts monetising that following, they could find themselves in legal trouble. Alexios Mantzarlis, director of the Security, Trust and Safety Initiative at Cornell Tech, notes that “economic AI slop” includes earning money indirectly through monetised platforms. Sora’s emerging “AI influencers” could therefore face lawsuits from estates if they profit from the dead.
A ‘Whac-A-Mole’ approach
In response to the backlash, OpenAI announced last week that it would begin allowing representatives of “recently deceased” public figures to request that their likeness be blocked from Sora videos.
“While there are strong free speech interests in depicting historical figures, we believe that public figures and their families should ultimately have control over how their likeness is used,” an OpenAI spokesperson said.
The company has not yet defined “recently”, or explained how requests will be handled. OpenAI did not immediately respond to the Guardian’s request for comment.
It has also backtracked on its copyright free-for-all approach, after subversive content such as “Nazi SpongeBob” spread across the platform and the Motion Picture Association accused OpenAI of infringement. A week after launch, it switched to an opt-in model for rights holders.
Grimmelmann expects a similar pivot over depictions of the dead. “Insisting people must opt out if they don’t like this may not be tenable,” he says. “It’s ghoulish, and if I have that instinct, others will too – including judges.”
Bergstedt calls this a “Whac-A-Mole” approach to guardrails that will probably continue until federal courts define AI liability.
In Ajder’s view, the Sora dispute foreshadows a larger question each of us will eventually face: who gets to control our likeness in the synthetic age?
“It’s a worrying situation if people simply accept that they’re going to be used and abused in hyperrealistic AI-generated content.”