
A 22-year-old medical student in India says he made thousands of dollars by creating a MAGA AI influencer, describing a strategy that relied on viral content, niche messaging and what he bluntly called an easily persuadable audience.
Speaking to WIRED under a pseudonym, "Sam" said he developed a fictional Instagram persona—Emily Hart, a conservative-leaning nurse—after experimenting with ways to earn money online. He turned to an AI chatbot for guidance, which advised that generic content would fail because he would be "competing with a million other models," and suggested focusing on a specific audience instead.
Sam told WIRED he followed that advice by tailoring the account to pro-Trump users, posting content aligned with conservative themes. The results were immediate. "Every Reel I posted was getting 3 million views, 5 million views, 10 million views. The algorithm loved it," he said, adding that he spent as little as 30 to 50 minutes a day managing the account.
As the account grew, so did its revenue streams. Sam said he monetized the audience through subscriptions on third-party platforms and the sale of themed merchandise, estimating he earned several thousand dollars per month. "I was basically doing nothing... and it was just flooded with money," he said.
Explaining why the strategy worked, Sam offered a stark assessment: "The MAGA crowd is made up of dumb people—like, super dumb people. And they fall for it."
The case is part of a broader trend in which AI-generated personas are attracting large followings online. As reported by Gadget Review, a separate account posing as a U.S. military servicewoman amassed more than one million followers in just a few months despite being entirely fabricated.
That account, built around realistic imagery of "Jessica Foster," an AI-generated model, combined political messaging with staged interactions with public figures to build credibility before directing users to paid platforms.
Researchers say advances in artificial intelligence are making such tactics more effective. Valerie Wirtschafter, a fellow at the Brookings Institution, told WIRED that while fake profiles have long existed, "AI has made them more believable," contributing to their rapid spread.
Despite platform policies requiring disclosure of AI-generated content, enforcement remains inconsistent. Many accounts are removed only after gaining traction, allowing creators to profit in the interim.
Sam said he has since stopped operating the account to focus on his medical studies. He added that he does not view the project as deceptive, arguing that users engaged with the content regardless of whether the persona was real.
© 2025 Latin Times. All rights reserved. Do not reproduce without permission.