Creative Bloq
Tom May

I've been watching Google Veo 3 videos, and they're genuinely terrifying

Two men walking away from a burning building with grins on their faces.

I've spent the last few days watching AI-generated videos that would fool my own mother, and I'm genuinely terrified. Not in a "robots are coming for us" way, but in a "we've just handed every wannabe manipulator on earth a Hollywood studio" way.

That's because Google's new AI video generator, Veo 3, doesn't just create moving pictures. It creates reality, complete with perfect lip-syncing, dialogue that sounds like actual humans recorded it, and physics that behave exactly as they should. And all you need is a text prompt.

The clips flooding social media are jaw-dropping. There's a standup comedian telling a joke on stage. A series of high-spirited interviews at a car show. A college professor teaching Boomers Gen-Z slang. And the quality is so convincing, I found myself checking twice to see if these were real people.

They're not. They're digital phantoms, conjured from nothing but words and algorithms. But if you didn't know, you wouldn't know.

And here's what's keeping me awake at night. As a society, we're not ready for this. Not even close.

The death of "I'll believe it when I see it"

For centuries, humans have relied on a simple rule: seeing is believing. If you could film something, it probably happened.

Sure, movies have had special effects for decades, but those required massive budgets, teams of specialists, and weeks of post-production. Now, any teenager with a Google account and $249 (£200) a month can create footage that would have required a Hollywood studio just five years ago.

The examples I've seen are technically impressive – but they're also terrifying. The subject matter so far has been benign and trivial, but just imagine someone creating a fake news report about a terrorist attack that looks so real it makes your stomach drop. Would people who are already fired up about an existing conflict, real or perceived, stop to think before grabbing a rifle and taking to the streets?

The chilling part is, just like fake news and fake images, fake videos will get shared faster than fact-checkers can keep up. By the time someone debunks it, it's already been viewed by millions and shaped public opinion. We're entering an era where the lie gets around the world before the truth has even put its boots on.

This isn't just about entertainment any more. It's about the complete erosion of our (already damaged) shared reality.

The professionals are worried, and they should be

The film industry is trying to put a brave face on this, with some creators claiming Veo 3 gives them "new creative freedom." But scratch beneath the surface and you'll find widespread anxiety. When AI can generate footage that looks like it was professionally shot and edited, complete with proper lighting and camera work, what happens to the thousands of people who make their living creating that content?

I've spoken to several video editors and cinematographers who are genuinely concerned about their futures. When anyone can generate a Hollywood-quality short film from a paragraph of text, what's the point of spending years learning the craft?

The irony is that while AI is becoming incredibly sophisticated at mimicking human creativity, it remains completely devoid of the human experience that makes art meaningful. These videos might look perfect, but they're created by algorithms that have never felt joy, sorrow, or the weight of a lived life. They're technically flawless but emotionally hollow.

We're sleepwalking into chaos

The most frustrating part of this whole situation is how unprepared we are for what's coming. We're still arguing about whether social media companies should fact-check obvious lies, and now we're about to be hit with a tsunami of synthetic media that's exponentially harder to detect and debunk.

Our legal systems don't know how to handle deepfakes. Our educational systems haven't taught people how to spot sophisticated AI-generated content. Our social platforms are already struggling to moderate human-created content, let alone AI-generated material that can be produced at industrial scale.

And the technology is advancing faster than our ability to respond to it. By the time we figure out how to detect Veo 3 videos, there will be Veo 4, and then Veo 5, each one more sophisticated than the last.

What now?

I'm not saying we should ban this technology – that horse has already bolted. But we need to have some serious conversations about how we're going to live in a world where any video could be fake. We need watermarking standards, detection tools, and perhaps most importantly, a complete rethinking of how we consume and share information.

We also need to accept that the internet as we know it is about to change dramatically. The days of casually sharing videos without verification are over. Every clip will need to be authenticated, every source checked, every claim verified. It's going to be exhausting, but it's the price we'll pay for living in an age where reality itself can be faked.

The technology exists now. The genie is out of the bottle. The question isn't whether this will change everything – it's whether we'll be ready when it does. Based on what I've seen so far, I'm not optimistic.

But perhaps that's exactly the wake-up call we need.
