Recently, a friend sent me a video of a man dressed as a pickle. Following a high-octane car chase, the pickle flung himself out of the car and flailed down the highway. It was stupid and we laughed. But it also wasn’t real. When I pointed out to my friend that the video was AI-generated, she was taken by surprise, noting she’s usually pretty good at spotting them. She was also frustrated: “I hate having to be on the constant lookout for AI trash,” she lamented in the chat.
And I feel that. Becoming an AI detective is a job I never wanted and wish I could quit.
By now, the problems with generative AI are well documented: it’s built upon theft of people’s creative labour; it’s accelerating environmental degradation; it’s claiming productivity gains but actually producing the opposite; it relies upon exploited workers; its biggest champions are socially reprehensible losers; and so on. In my online circles, it’s also deeply uncool – using generative AI to make silly little videos signals you either don’t understand its consequences or you’re too much of an arsehole to care. Every AI-generated video that crosses my feed is a stand-in for the horrors associated with the technology and its politics.
But aside from these very important critiques, it’s also just downright irritating. Who wants to use so much energy playing synthetic media Sherlock Holmes?
I consider myself to be relatively tech-savvy, which probably lulled me into a false sense of security that AI content would always be obvious to me. This is becoming less true as video-generation models grow more sophisticated and produce content with fewer immediately obvious red flags. Combine this with the sheer volume of AI-generated material and the context we see it in – on platforms where the whole point is to move from one video to the next very quickly – and you barely have time to do a reality check before moving on. Consuming an algorithmically curated social media feed feels like drowning in a soup of slop. Whatever croutons of reality you might come by are only there to keep you drowning, sorry, scrolling, for longer.
And here’s the sick joke of it all: the more you linger on a video to discern if it’s AI-generated, the more of its kind you’ll see. Perhaps you watch it a few times, trying to spot the classic AI body horror. Maybe you’re annoyed enough to leave a comment. Perhaps you share it with a friend like: “Thanks, I hate it!!” All of these are signals that scream more! more! more! to the algorithm. And so we’re trapped in a nightmare in which even the act of hating AI-generated content only serves to fuel it.
All of this makes me feel like an unwilling traveller through a hellish hyperreality – a concept theorised by Jean Baudrillard decades before generative AI took off. Would Baudrillard feel vindicated or repulsed by Sora 2? (You can ask ChatGPT.) But for all the waxing lyrical about poststructuralist theory and the collapse between reality and simulation, I didn’t anticipate that all of this would be just so annoying.
In some perverse way, I can understand why people might choose to create a deepfake of a politician saying something outrageous, or generate sexually explicit material depicting someone without their consent. I don’t like it or support it, but the intent is pretty easily discernible (a political agenda, straight-up misogyny). Those kinds of videos are bad in an obvious, illegal or violent way.
But I also find the inane content particularly unsettling. Why bother making an AI video of a pickle in a car chase? Well, there’s money to be made. Insipid and puerile AI content created for the express purpose of going viral to turn a profit is a disturbing reflection of the absurdity of late-stage capitalism. When it comes to social media success, reality becomes a hindrance to revenue.
Journalist Jason Koebler argues this content isn’t actually designed for humans; its intended audience is the algorithms. The content of the AI slop is irrelevant – it doesn’t matter whether it’s high or low quality, political or mundane, in touch with reality or complete delusion. What matters is the ability to churn out a lot of it, spam the platform, find what gets engagement, rinse and repeat all the way to the bank.
As always, it’s important to remember this is happening because the world’s most powerful companies and their billionaire leaders decided that mass adoption of generative AI is good for business. Platforms aren’t interested in stopping the onslaught of AI spam. Rather, they’re embracing it, incentivising it and building tools to enable it. It doesn’t matter how people use it or for what purpose. It doesn’t matter if people like it. And because the platforms benefit, there’s no indication it will relent any time soon.
Maybe I’m being stubborn in my desire to grasp on to reality; maybe I’m deluding myself that such a thing is possible. But it still matters to me if what I am seeing is real or not. Maybe it matters to you, too. So for now, we have to begrudgingly hold on to our magnifying glass and digital deerstalker hat.
• Samantha Floreani is a digital rights advocate and writer based in Melbourne/Naarm