OpenAI's Sora: Fast track to a vacuous AI-video future

OpenAI's new Sora app gives us a fast-forward view of a future in which AI video, social media and the attention economy fuse into one giant mucky, murky, reality-corroding pool of virality.

Why it matters: Feeds, memes and slop are the building blocks of a new media world where verification vanishes, unreality dominates, everything blurs into everything else and nothing carries any informational or emotional weight.

Driving the news: OpenAI's Sora 2 AI video maker and Sora app let users make and share AI-based short videos starring themselves, their friends and anyone else who gives them permission to be included.

  • The new Sora comes on the heels of the launch of Meta's Vibes, a TikTok knockoff that's composed entirely of AI-made videos.

Both companies are betting that the public's interest in AI video is not just a brief infatuation with a technical novelty but the start of a foundational shift in media consumption.

  • AI video is much cheaper to make than professionally produced material. And AI-made videos can be personalized and targeted in ways traditional video could never match.

Yes, but: Much of the appeal of online video platforms lies in the personal connection between creators and their followers, and how that translates into the world of AI-generated clips is unclear.

  • Sora's "cameo" feature, which lets users share (and control) the use of their own images in Sora videos, could be an answer — but it's also a Pandora's box of potential problems.

Here are five friction points the new AI video apps will face.

1. Truth erosion.

  • The more AI-made video there is online, the less anyone can, or should, trust that any particular video is real.
  • Videos made with Sora are watermarked for now, and both Sora and Vibes are explicitly all-AI feeds. But if these apps become popular, their hit clips will spill over onto TikTok, Reels, YouTube and other platforms, where they will mix with real footage.
  • At this point, it's prudent to start from the assumption that every online video, no matter how real-looking, is fictional until proven otherwise.

2. Copyright violation.

  • When you open Sora's feed, you enter a world packed with images from popular animated series and video games.
  • OpenAI says that intellectual property holders can opt out of having their images and styles duplicated.
  • Studio Ghibli never sued the company for the viral wave of Miyazaki-style clips users made with OpenAI tools earlier this year — but sooner or later, someone will take OpenAI to court.

3. Meme-ification.

  • My Sora feed right now is full of Jesuses, SpongeBobs, dogs driving cars, and endless jokes featuring OpenAI CEO Sam Altman (who has allowed anyone to make use of his cameo).
  • AI-only content pushes the medium even further into vacuous unreality than TikTok or Reels do, to the point that it feels entirely untethered from the real world.
  • The self-referential surrealism of meme universes like Skibidi Toilet and Italian Brainrot holds a deep attraction for plenty of sixth graders — but may not be able to sustain broader appeal.

4. Personalization.

  • The more we use AI to make videos and the more AI videos we consume, the more information we're handing the AI-makers.
  • OpenAI says it wants to avoid the engagement-maximizing sins of old-school social media. But it's also collecting more and more data from users.
  • Combining what a TikTok-style algorithm knows about your likes and dislikes with everything you're already telling ChatGPT about yourself will give OpenAI some powerful levers to shape your behavior.

5. Intimidation and humiliation.

  • Sora's cameos feature has some smartly thought-out safeguards that should limit wholesale abuse.
  • But a social-video playground full of real people's faces is bound to produce horrific cases of bullying, slander and other kinds of harm.
  • The internet public's ability to find and exploit nasty and hurtful uses of a novel social networking tool will always outpace the efforts of a platform to limit such uses.

What they're saying: Altman wrote in a blog post that OpenAI feels "trepidation" because of social media apps' history of becoming addictive and used for bullying.

  • "The team has put great care and thought into trying to figure out how to make a delightful product that doesn't fall into that trap, and has come up with a number of promising ideas" it will experiment with, he wrote.
  • He said OpenAI aims to "optimize for long-term user satisfaction," "encourage users to control their feed," "prioritize creation" and "help users achieve their long-term goals."

The bottom line: Right now, the government is reluctant to regulate new tech, companies have effectively unlimited budgets and the public is bitterly divided over political and social issues.

  • That means our new AI-shaped social media world is more likely to run amok than to evolve with caution and care.