
The hype around artificial intelligence (AI) risks spiraling out of control as claims around the emerging technology escalate into the realm of the absurd. AI is a big-money business, write the authors of the new book "The AI Con: How to Fight Big Tech's Hype and Create the Future We Want" (2025), and the marketing fanfare we see serves big tech's interests and is designed to do one thing: sell AI products.
In this new book, authors Emily M. Bender, professor of linguistics at the University of Washington, and Alex Hanna, director of research at the Distributed AI Research Institute, challenge our understanding of what AI is — and what it isn't. Ultimately, they attempt to cut through the overblown claims and sensationalism to understand the true impact AI is having on society.
In this excerpt, the writers grapple with the idea of artificial general intelligence (AGI), the origins of that idea and what the term actually means. They argue that the definitions of AGI and a hypothetical "superintelligence" are fuzzy at best and, in practice, serve only to feed the corporate AI hype machine.
If you listened to executives and researchers at big tech firms, you'd think that we were on the verge of a robot uprising. In February 2022, OpenAI's chief scientist, Ilya Sutskever, tweeted "it may be that today's large neural networks are slightly conscious."
In June 2022, the Washington Post reported that Google engineer Blake Lemoine was convinced that Google's language model LaMDA was sentient and needed legal representation. Lemoine was fired over the incident, not for his false claims (which Google did deny) but for leaking private corporate information. In an August 2022 blog post, Google VP and Fellow Blaise Agüera y Arcas responded to the Lemoine story, but rather than countering Lemoine's claims, he suggested that LaMDA does indeed "understand" concepts and that the debate over whether LaMDA has feelings is not resolvable or "scientifically meaningful."
In April 2023, a team at Microsoft Research led by Sébastien Bubeck posted a non-peer-reviewed paper called "Sparks of Artificial General Intelligence: Early Experiments with GPT-4," in which they claim to show that the language model GPT-4 "can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology" and thus shows the first "sparks of artificial general intelligence."
The word "sparks" evokes an image of something about to catch fire and spread of its own accord. The phrase "artificial general intelligence" here is meant to differentiate from ordinary technologies called "AI," and is particularly common in modern discourse around thinking, sentient or conscious machines.
These claims are not new. Over 60 years ago, researchers, business executives and government officials were making similar bombastic claims about the nature of computer intelligence and the risk of superhuman intelligence supplanting humans at work, at home, and, perhaps most alarmingly, on the battlefield.
The sinister origins of "general intelligence"
Despite claims that machines may one day achieve an advanced level of "general intelligence," the concept has no accepted definition. (OpenAI has avoided the question by saying it will allow its board to decide when its algorithms have achieved artificial general intelligence.) But the project of identifying general intelligence is racist and ableist to its core, making the pursuit of artificial general intelligence foolhardy at best, and deceptive and dangerous at worst.
Microsoft’s "Sparks" paper contains a preliminary definition of general intelligence, one that has no references to fields that may have a say in such a thing, like psychology or cognitive neuroscience. Despite being a paper claiming that certain statistical models have shown the inklings of "artificial general intelligence", there is no well-scoped definition of what the components of general intelligence are.
In a prior version of the paper, the authors cited a 1994 Wall Street Journal editorial signed by a group of 52 psychologists that had proffered this definition: "The consensus group defined intelligence as a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience."
Unfortunately, the goal of creating artificial general intelligence isn't just a hypothetical that lives in scientific papers. There's real money invested in this work, much of it coming from venture capitalists.
A lot of this might just be venture capitalists (VCs) following fashion, but there are also a number of AGI true believers in this mix, and some of them have money to burn. These ideological billionaires — among them Elon Musk and Marc Andreessen — are helping to set the agenda of creating AGI and financially backing, if not outright proselytizing, a modern-day eugenics. This is built on the combination of conservative politics, an obsession with pro-birth policies, and a right-wing attack on multiculturalism and diversity, all hidden behind a façade of technological progress.
The hype of "superintelligence"
Why do so many people involved in building and selling large language models seem to have fallen for the idea that those models are (or might be) sentient? And why do so many of these same people spend so much time warning the world about the "existential risk" of "superintelligence" while also spending so much money building it?
In short, claims around consciousness and sentience are a tactic to sell you on AI. Most people in this space seem to be aiming simply to build technical systems that achieve what looks like human intelligence, in order to get ahead in an already crowded market. That market is also a small world: researchers and founders move seamlessly between a few major tech players, such as Microsoft, Google and Meta, or they go off to found AI startups that receive millions in venture capital and seed funding from Big Tech.
As one data point, in 2022, 24 Google researchers left to join AI startups (while one of us, Alex, left to join a research nonprofit). As another, in 2023 alone, $41.5 billion in venture deals was dished out to generative AI firms, according to PitchBook data. The estimated payoff is huge: that year, McKinsey suggested that generative AI may soon add "up to $4.4 trillion" annually to the global economy. Estimates like this are, of course, part of the hype machine, but VCs don't seem to think that fact should stem the rush to invest in these tools.
This hype leans on familiar tropes about artificial intelligence: sentient machines needing to be granted robot rights, or a Matrix-style superintelligence posing a direct threat to ragtag human resisters. This has implications beyond the circulation of funds among VCs and other investors, most notably because ordinary folks are being told they're going to be out of a job.