The Atlantic
Technology
Charlie Warzel

People Aren’t Falling for AI Trump Photos (Yet)

Illustration by The Atlantic; Source: Eliot Higgins / Midjourney v5

On Monday, as Americans considered the possibility of a Donald Trump indictment and a presidential perp walk, Eliot Higgins brought the hypothetical to life. Higgins, the founder of Bellingcat, an open-source investigations group, asked the latest version of the generative-AI art tool Midjourney to illustrate the spectacle of a Trump arrest. It pumped out vivid photos of a sea of police officers dragging the 45th president to the ground.

Higgins didn’t stop there. He generated a series of images that became more and more absurd: Donald Trump Jr. and Melania Trump screaming at a throng of arresting officers; Trump weeping in the courtroom, pumping iron with his fellow prisoners, mopping a jailhouse latrine, and eventually breaking out of prison through a sewer on a rainy evening. The story, which Higgins tweeted over the course of two days, ends with Trump crying at a McDonald’s in his orange jumpsuit.

All of the tweets are compelling, but only the scene of Trump’s arrest went mega viral, garnering 5.7 million views as of this morning. People immediately started wringing their hands over the possibility of Higgins’s creations duping unsuspecting audiences into thinking that Trump had actually been arrested, or leading to the downfall of our legal system. “Many people have copied Eliot’s AI generated images of Trump getting arrested and some are sharing them as real. Others have generated lots of similar images and new ones keep appearing. Please stop this,” the popular debunking account HoaxEye tweeted. “In 10 years the legal system will not accept any form of first or second hand evidence that isn’t on scene at the time of arrest,” an anonymous Twitter user fretted. “The only trusted word will be of the arresting officer and the polygraph. the legal system will be stifled by forgery/falsified evidence.”

This fear, though understandable, draws on an imagined dystopian future that’s rooted in the concerns of the past rather than the realities of our strange present. People seem eager to ascribe to AI imagery a persuasion power it hasn’t yet demonstrated. Rather than imagine emergent ways that these tools will be disruptive, alarmists draw on misinformation tropes from the earlier days of the social web, when lo-fi hoaxes routinely went viral.

These concerns do not match the reality of the broad response to Higgins’s thread. Some people shared the images simply because they thought they were funny. Others remarked at how much better AI-art tools have gotten in such a short amount of time. As the writer Parker Molloy noted, the first version of Midjourney, which was initially tested in March 2022, could barely render famous faces and was full of surrealist glitches. Version five, which Higgins used, launched in beta just last week and still has trouble with hands and small details, but it was able to re-create a near-photorealistic imagining of the arrest in the style of a press photo.

But despite those technological leaps, very few people seem to genuinely believe that Higgins’s AI images are real. That may be a consequence, partially, of the sheer volume of fake AI Trump-arrest images that filled Twitter this week. If you examine the quote tweets and comments on these images, what emerges is not a gullible reaction but a skeptical one. In one instance of a junk account trying to pass off the photos as real, a random Twitter user responded by pointing out the image’s flaws and inconsistencies: “Legs, fingers, uniforms, any other intricate details when you look closely. I’d say you people have literal rocks for brains but I’d be insulting the rocks.”

I asked Higgins, who is himself a skilled online investigator and debunker, what he makes of the response. “It seems most people mad about it are people who think other people might think they’re real,” he told me over email. (Higgins also said that his Midjourney access has been revoked, and BuzzFeed News reported that users are no longer able to prompt the art tool using the word arrested. Midjourney did not immediately respond to a request for comment.)

The attitude Higgins described tracks with research published last month by the academic journal New Media & Society, which found that “the strongest, and most reliable, predictor of perceived danger of misinformation was the perception that others are more vulnerable to misinformation than the self”—a phenomenon called the third-person effect. The study found that participants who reported being more worried about misinformation were also more likely to share alarmist narratives and warnings about misinformation. A previous study on the third-person effect also found that increased social-media engagement tends to heighten both the third-person effect and, indirectly, people’s confidence in their own knowledge of a subject.

The Trump-AI-art news cycle seems like the perfect illustration of these phenomena. It is a true pseudo-event: A fake image enters the world; concerned people amplify it and decry it as dangerous to a perceived vulnerable audience that may or may not exist; news stories echo these concerns.

There are plenty of real reasons to be worried about the rise of generative AI, which can reliably churn out convincing-sounding text that’s actually riddled with factual errors. AI art, video, and sound tools all have the potential to create basically any mix of “deepfaked” media you can imagine. And these tools are getting better at producing realistic outputs at a near-exponential rate. It’s entirely possible that the fears of future reality-blurring misinformation campaigns or impersonation may prove prophetic.

But the Trump-arrest photos also reveal how conversations about the potential threats of synthetic media tend to draw on generalized fears that news consumers can and will fall for anything—tropes that have persisted even as we’ve become used to living in an untrustworthy social-media environment. These tropes aren’t all well founded: Not everyone was exposed to Russian trolls, not all Americans live in filter bubbles, and, as researchers have shown, not all fake-news sites are that influential. There are countless examples of awful, preposterous, and popular conspiracy theories thriving online, but they tend to be less lazy, dashed-off lies than intricate examples of world building. They stem from deep-rooted ideologies or a consensus that forms in one’s political or social circles. When it comes to nascent technologies such as generative AI and large language models, it’s possible that the real concern will be an entirely new set of bad behaviors we haven’t encountered yet.

Chris Moran, the head of editorial innovation at The Guardian, offered one such example. Last week, his team was contacted by a researcher asking why the paper had deleted a specific article from its archive. Moran and his team checked and discovered that the article in question hadn’t been deleted, because it had never been written or published: ChatGPT had hallucinated the article entirely. (Moran declined to share any details about the article. My colleague Ian Bogost encountered something similar recently when he asked ChatGPT to find an Atlantic story about tacos: It fabricated the headline “The Enduring Appeal of Tacos,” supposedly by Amanda Mull.)  

The situation was quickly resolved but left Moran unsettled. “Imagine this in an area prone to conspiracy theories,” he later tweeted. “These hallucinations are common. We may see a lot of conspiracies fuelled by ‘deleted’ articles that were never written.”

Moran’s example—of AIs hallucinating, and accidentally birthing conspiracy theories about cover-ups—feels like a plausible future issue, because this is precisely how sticky conspiracy theories work. The strongest conspiracies tend to allege that an event happened. They offer little proof, citing cover-ups from shadowy or powerful people and shifting the burden of proof to the debunkers. No amount of debunking will ever suffice, because it’s often impossible to prove a negative. But the Trump-arrest images are the inverse. The event in question hasn’t happened, and if it had, coverage would blanket the internet; either way, the narrative in the images is instantly disprovable. A small minority of extremely incurious and uninformed consumers might be duped by some AI photos, but chances are that even they will soon learn that the former president has not (yet) been tackled to the ground by a legion of police.

Even though Higgins was allegedly booted from Midjourney for generating the images, one way to look at his experiment is as an exercise in red-teaming: the practice of using a service adversarially in order to imagine and test how it might be exploited. “It’s been educational for people at least,” Higgins told me. “Hopefully make them think twice when they see a photo of a 3-legged Donald Trump being arrested by police with nonsense written on their hats.”

AI tools may indeed complicate and blur our already fractured sense of reality, but we would do well to have a sense of humility about how that might happen. It’s possible that, after decades of living online and across social platforms, many people may be resilient against the manipulations of synthetic media. Perhaps there is a risk that’s yet to fully take shape: It may be more effective to manipulate an existing image or doctor small details rather than invent something wholesale. If, say, Trump were to be arrested out of the view of cameras, well-crafted AI-generated images claiming to be leaked law-enforcement photos may very well dupe even savvy news consumers.

Things may also get much weirder than we can imagine. Yesterday, Trump shared an AI-generated image of himself praying—a minor fabrication with some political aim that’s hard to make sense of, and that hints at the subtler ways that synthetic media might worm its way into our lives and make the process of information gathering even more confusing, exhausting, and strange.
