Creative Bloq
Technology
Joe Foley

Did OpenAI really not see the Sora 2 copyright controversy coming?

An AI-generated image of polo players on horses on the moon.

OpenAI's launch of its Sora 2 AI video generator and Sora social media app is in disarray after it backtracked following complaints about copyright infringement. Few people are buying the company's claim to have been surprised by the controversy. Meanwhile, users have rapidly found ways to circumvent new controls.

It all demonstrates a worryingly nonchalant attitude and a staggering lack of thought before the launch of such a powerful AI model.

To recap, Sora 2 is a new AI model that can generate much more realistic and controllable video than its predecessor and perhaps any other video model currently available. OpenAI, the company behind ChatGPT, launched it along with an iOS app that it intended as a new form of social media where users can generate deepfakes of themselves and their friends.

Initially, copyright holders were told they had to opt out if they didn't want their intellectual property to appear in videos generated by the model. After a week of chaos in which people generated AI videos of things like a Nazi SpongeBob SquarePants and ads for 'Epstein Island' children’s toys, OpenAI backtracked and switched to an opt-in policy as media companies and bodies like the Motion Picture Association complained of copyright infringement.

To try to sway people to opt in, the company vowed to give rightsholders more control over the generation of characters. Bill Peebles, OpenAI’s head of Sora, posted on X that users can specify how their cameo is used through text instructions, such as “don’t put me in videos that involve political commentary” or “don’t let me say this word.”

Some users have reacted furiously to the stricter controls, claiming that Sora 2 is now practically useless because they continuously receive warnings that their video generation requests breach the new guardrails. But others are already finding ways around the controls, using unofficial images or changing character names to prevent the app from detecting third-party IP.

Ultimately, the problem is in Sora's training and OpenAI's whole approach to AI ethics. Incredibly, OpenAI's CEO Sam Altman has suggested that he wasn't expecting the app to be so controversial. Considering the well-documented concerns about copyright theft in generative AI training, including several ongoing lawsuits, it seems he must have been living under a rock for the past few years.

The Verge cites the CEO as saying in a Q&A response: “I think the theory of what it was going to feel like to people, and then actually seeing the thing, people had different responses. It felt more different to images than people expected.”

Sam also appears to have been taken aback that users might not want deepfakes of themselves saying offensive things. He said he expected people would either want to make their deepfakes (dubbed 'cameos') public or not, but not that their decision might be more nuanced. “They don’t want their cameo to say offensive things or things that they find deeply problematic,” the billionaire CEO was surprised to learn.

Another fear around Sora is that it could plunge us into a crisis of misinformation and fake news. Sora adds a watermark to videos by default, but it's hardly difficult to remove.

Sam said in Monday's Q&A that he knew “people are already finding ways to remove it”, suggesting that the head of one of the world's biggest tech companies has only just learned that AI watermark removers exist, or that you can easily mask out a watermark in a video-editing program.

The whole debacle shows the reckless approach that many AI companies take to launching new tools. OpenAI has veered between trying to paint a picture of responsibility and promoting its models for their ability to rip off well-known art styles and copyrighted material. For the launch of GPT-4o image generation, it actively encouraged the brief Studio Ghibli AI craze, with Sam using a Ghiblified picture of himself as his profile image on X.

Is the company and its leadership really so out of touch that it didn't predict the reaction that Sora generated? Or is it playing a game of testing how far it can go, releasing things without controls to generate hype and then rowing back when it fears legal fallout or harm to its relationships with major media companies?
