Salon
Russell Payne

"Summoning the demon": MAGA splits on AI

Pressure to regulate AI, fueled by apocalyptic prophecy and long-held animosity toward tech moguls like billionaire Elon Musk, is building within the MAGA movement, and it might be enough to get something done in Congress.

AI-generated images, ranging from muscle-bound depictions of President Donald Trump to memes portraying the president’s opponents as communists, have become a hallmark of online conservatism over the past few years.

Percolating in the background, however, has been a resistance to AI technology rooted in the conservative movement’s skepticism of Big Tech. Criticism of AI on the right ranges from relatively mundane concerns about the technology’s potential to defame people to warnings that it has a role to play in the end times.

Central to right-wing concern is the concept of the AI “singularity,” the hypothetical point at which AI becomes able to improve itself, setting off an uncontrollable cascade of advances in the technology. Also central is Musk, who features prominently in right-wing critiques of AI for his influence in the Trump administration, for a 2014 interview in which he predicted that “with artificial intelligence we are summoning the demon,” and for a longtime social media profile picture in which he sported armor bearing the Sigil of Baphomet.

“If you listen to the four horsemen of the apocalypse — Dario, Musk, Altman… they talk right now about the Big Bang, that this is the Big Bang time for artificial intelligence,” former Trump adviser Steve Bannon said on a recent episode of his podcast, “War Room.” “As sure as the turning of the Earth, this is going to be the most fundamental radical transformation in all human history, going back to the absolute beginning,” Bannon continued, “and what you have is the most irresponsible people doing it for: one, their own efforts for eternal life, because they do not believe in the underlying tenets of the Judeo-Christian West; and also money and power. It must be stopped.”

Contemporary discussions of the AI singularity trace their roots back at least to the 1993 paper, “The Coming Technological Singularity: How to Survive in the Post-Human Era,” by Vernor Vinge, a mathematician at San Diego State University.

In that seminal paper, Vinge, who died in 2024, argued that “we are on the edge of change comparable to the rise of human life on Earth. The precise cause of this change is the imminent creation by technology of entities with greater than human intelligence.”

In short, Vinge predicted that by 2030, humans would be capable of creating a machine with greater than human intelligence, which would then set off a cascade of technological progress. Basically, once humans design an AI smart enough to improve itself, it’s off to the races.

Notably, there is no single definition of what qualifies as the singularity, though many use computer scientist Ray Kurzweil’s framing of a point at which technological growth becomes so rapid that it is no longer predictable. The stages of AI that could lead to the singularity are also loosely defined, namely artificial general intelligence (AI roughly on par with human intelligence) and artificial superintelligence (AI far beyond human intelligence).

While discussions of the singularity have been percolating online for years, recent developments in AI technology and headlines warning of a potential AI apocalypse have elevated the topic. One Axios headline, for example, warned of a “white-collar bloodbath” resulting from AI taking jobs from humans. Another headline from The Guardian sounded the alarm about an AI “superintelligence” potentially “escaping human control.”

Daron Acemoglu, an economist at the Massachusetts Institute of Technology who has written extensively on AI and automation, told Salon that a lot of the warnings coming out of the AI industry are effectively marketing tactics.

However, Acemoglu said, that doesn’t mean that there isn’t increased uncertainty about the future of AI. And even Acemoglu, a prominent critic of some of the most bombastic claims about the future of AI, said that while he once expected that artificial general intelligence was attainable in the next 60 years, it might be coming sooner, given recent improvements in the technology. He maintains, however, that the calculations made by the current generation of AI are fundamentally different from how humans think.

“The more talk of artificial super intelligence we have, the more of a boost these companies get, especially in terms of being able to raise funding, in terms of being in the spotlight and high status, high ability to convince others,” Acemoglu said.

In right-wing circles, expectations for the future of AI vary as well. Joe Allen, a conservative writer and the author of “Dark Aeon,” a book on transhumanism and artificial intelligence, told Salon that on the fringe side of the right, you have characters like Alex Jones, the creator of InfoWars, who take the idea of the singularity “entirely too seriously” while on the “normie-con” side you have people who “don’t know about or don’t care about the technology.”

Allen told Salon that he doesn’t think Bannon means to be taken literally when he describes Musk and other tech billionaires as the four horsemen of the apocalypse foretold in the Book of Revelation.

“However, when Steve talked about transhumanism being a blasphemy of the Holy Spirit, I think he means it in a much more literal sense than I would take it myself,” Allen said.

Looming over Bannon’s discussion of AI on the right is also the ongoing struggle for influence between Bannon and Musk. Bannon has been a longtime critic of Musk and has recently called on Trump to nationalize Musk’s SpaceX. AI, which many on the right are already skeptical of, may just be another front in the same power struggle.

In between the normie cons and those very concerned about AI and transhumanism, Allen said, there are significant numbers of “traditional Christians, traditional Jews and traditional Muslims who have enormous apprehension.” Whether rooted in worries about job loss, surveillance or a fear that the pursuit of superintelligent AI constitutes blasphemy, general suspicion of AI among both Democrats and Republicans is beginning to show up in public polling.

A recent Pew Research survey found that majorities in both parties are concerned that AI technology is under-regulated. Among Democrats, 64% of respondents said they were concerned that AI regulation would not go far enough, while 56% of Republicans said the same. Among American adults overall, 58% said they were concerned that regulation would be insufficient, while just 21% said they were worried it might go too far.

This popular opinion, however, hasn’t manifested into action from elected officials, at least not yet. Former President Joe Biden’s policy towards AI, combined with a lack of legislative action, was criticized for doing too little to rein in Big Tech’s domination of the industry. 

The GOP’s budget bill, which has already passed the House, also includes a 10-year moratorium on state-level regulation of AI, though it seems that the Senate has softened this provision and instead wants to withhold funding for broadband projects if states choose to regulate AI.

That doesn’t mean, however, that there’s no opportunity to pressure at least some Republicans into heeding their base’s distrust of AI technology and the people behind it. 

Republicans hold only slim majorities in the House and Senate, and the measure has drawn opposition from Sen. Josh Hawley, R-Mo., and Sen. Marsha Blackburn, R-Tenn., in the Senate, as well as from Rep. Marjorie Taylor Greene, R-Ga., who voiced her opposition only after voting for the bill. That may give conservatives an opportunity to change the bill’s language during the reconciliation process.

In a comment to Salon, Greene expressed hope that the Senate parliamentarian would strike the AI language out of the reconciliation bill for being unrelated to federal spending.

"This moratorium is not what President Trump ran on," Greene said. "He promised to secure our borders and unleash American energy dominance, and the One Big Beautiful Bill delivers. The AI regulation moratorium is a poison pill and has no place in this legislation."
