The Guardian - AU
Technology
Sian Cain

Bryan Cranston thanks OpenAI for cracking down on Sora 2 deepfakes

Bryan Cranston pictured speaking at a Sag-Aftra strike rally in 2023 in New York. The Breaking Bad actor went to the union with concerns after users of OpenAI’s generative video platform Sora 2 were able to generate his likeness without his consent. Photograph: Angela Weiss/AFP/Getty Images

Bryan Cranston has said he is “grateful” to OpenAI for cracking down on deepfakes of himself on the company’s generative AI video platform Sora 2, after users were able to generate his voice and likeness without his consent.

The Breaking Bad star approached the actors’ union Sag-Aftra with his concerns after Sora 2 users were able to generate his likeness during the video app’s recent launch phase. On 11 October, the LA Times described a Sora 2 video in which “a synthetic Michael Jackson takes a selfie video with an image of Breaking Bad star Bryan Cranston”.

Living people must ostensibly give their consent, or opt in, to feature on Sora 2, with OpenAI stating since launch that it takes “measures to block depictions of public figures” and that it has “guardrails intended to ensure that your audio and image likeness are used with your consent”.

But when Sora 2 launched, several publications including the Wall Street Journal, the Hollywood Reporter and the LA Times reported widespread anger in Hollywood after OpenAI allegedly told multiple talent agencies and studios that if they didn’t want their clients or copyrighted material replicated on Sora 2, they would have to opt out – rather than opt in.

OpenAI disputed these reports, telling the LA Times it was always its intention to give public figures control over how their likeness was used.

On Monday, Cranston issued a statement through Sag-Aftra, thanking OpenAI for “improving its guardrails” to prevent users generating his likeness again.

“I was deeply concerned not just for myself, but for all performers whose work and identity can be misused in this way,” Cranston said. “I am grateful to OpenAI for its policy and for improving its guardrails, and hope that they and all of the companies involved in this work, respect our personal and professional right to manage replication of our voice and likeness.”

Two of Hollywood’s biggest talent agencies, Creative Artists Agency (CAA) and United Talent Agency (UTA) – which represents Cranston – have repeatedly raised alarms about the potential risks of Sora 2 and other generative AI platforms to their clients and their careers.

But on Monday, UTA and CAA co-signed a statement with OpenAI, Sag-Aftra and the Association of Talent Agents, stating that what had happened to Cranston was an error, and that they would all work together to protect actors’ “right to determine how and whether they can be simulated”.

“While from the start it was OpenAI’s policy to require opt-in for the use of voice and likeness, OpenAI expressed regret for these unintentional generations. OpenAI has strengthened guardrails around replication of voice and likeness when individuals do not opt-in,” the statement read.

Actor Sean Astin, the new president of Sag-Aftra, warned that Cranston “is one of countless performers whose voice and likeness are in danger of massive misappropriation by replication technology”.

“Bryan did the right thing by communicating with his union and his professional representatives to have the matter addressed. This particular case has a positive resolution. I’m glad that OpenAI has committed to using an opt-in protocol, where all artists have the ability to choose whether they wish to participate in the exploitation of their voice and likeness using AI,” Astin said.

“Simply put, opt-in protocols are the only way to do business and the NO FAKES Act will make us safer,” he added, referring to legislation currently before Congress that seeks to ban the production and distribution of AI-generated replicas of any individual without their consent.

OpenAI publicly supports the NO FAKES Act, with its CEO, Sam Altman, saying the company is “deeply committed to protecting performers from the misappropriation of their voice and likeness”.

Sora 2 does allow users to generate “historical figures”, defined broadly as anyone both famous and dead. However, OpenAI has recently agreed to allow representatives of “recently deceased” public figures to request that their likeness be blocked from Sora 2.

Earlier this month OpenAI announced that it had “worked together” with the estate of Martin Luther King Jr and, at the estate’s request, was pausing the ability to depict King on Sora 2 while the company “strengthens guardrails for historical figures”.

Recently Zelda Williams, the daughter of the late actor Robin Williams, pleaded with people to “please stop” sending her AI videos of her father, while Kelly Carlin, daughter of the late comedian George Carlin, has called AI videos of her father “overwhelming, and depressing”.

Legal experts have speculated that generative AI platforms are allowing the use of dead, historical figures to test what is permissible under the law.
