Salon
Science
Luke Fernandez

AI reflects hyperbolic history of tech

I'll admit it. In the first couple of days after using ChatGPT, the advanced chatbot from OpenAI, I felt the same way I did when I first downloaded the Mosaic web browser in the early '90s. The earth seemed to have shifted beneath my feet. I write about the history of technology, so you'd think I'd have the intellectual armor to keep such dramatic feelings in check. But I didn't. I felt genuinely awed.

In the subsequent months, I've had chances to recalibrate those feelings in light of other people's reactions. Some, with a utopian bent, speak of ChatGPT fueling an economic boom that increases intellectual productivity many times over. Others, who have been described as "doomers," trot out visions of robot apocalypse. Doom talk, in one of its more curious, and perhaps corrosive, forms can generate "wishful worries," as the historian David Brock has suggested. People who engage in wishful worrying would rather fantasize about some future injustice that AI might cause than focus on its present-day harms. Some have even created "The AI Hype Wall of Shame" to critique pundits who dramatize and mystify AI overmuch.

Making sense of those reactions, and assessing my own sentiments in light of them, is a challenge. To what extent did my awe need to be tempered? In feeling awe, was I also mystifying the technology? Beyond taking into account the advice of present-day boosters, doomers and critics, it's also worth considering how past Americans framed their responses to what initially seemed like awesome technologies, and whether those reactions might cast some light on our current encounters.

An obvious place to begin is "The Turk," a chess-playing automaton. Originally constructed in Europe in 1770, it was brought to America in 1826. "The Turk" consisted of a chess board placed next to a clothed manikin. The manikin and the board, in turn, rested on a wooden box that contained gears and other machinery. To the unwitting observer, and even to many who examined it closely, the machinery appeared to be able to play chess. In actuality, however, an expert chess player hidden inside the box wielded the chess pieces by moving the manikin's arms.

As Tom Standage has argued in his book The Turk, the public's reaction to the automaton in the 1820s "foreshadowed" our own reaction to modern digital automatons. Some people were awed by the technology and were willing (at least for a moment) to imagine it as intelligent and possessed of self-agency, while others sought to demystify it and point out the hidden human labor that was really driving it.

With the wisdom of hindsight, we can say that the American public in the 1820s was gullible. After all, machines that could play decent games of chess did not appear until the antecedents of IBM's Deep Blue in the late 20th century. At the same time, that hindsight doesn't discourage everyone from thinking that our own time is special — some of us are still inclined to believe (if only momentarily) that now we're witnessing the birth of some real form of machine intelligence.

The lesson of "The Turk" cautions otherwise: we might like to attribute some sort of autonomy and intelligence to machines, but if we look into the black box, what we find are the labors, and interests, of human actors. If our initial encounters with a technology provoke feelings of mystery and awe, we might not want to take those feelings at face value. They might be concealing as much as they reveal.

And if going back to the 1820s seems like too much of a stretch, we should remember that there are more recent precedents. When Steve Jobs was hyping the Macintosh in 1984, he aspired (as he put it) "to reach the point where the operating system is totally transparent…you never [should have to] interact with it; you don't know about it."

For Jobs, the interface should be so magical and user-friendly that consumers wouldn't have to consider how it actually generated what it generated. As Jobs famously said later, "It just works." Never mind the workforce that labored in the background to make it work — the magic covered all of that up. I don't know whether Sam Altman, the head of OpenAI, has ever explicitly subscribed to this design philosophy, but if you play with ChatGPT's prompts for any length of time you can see that philosophy implicitly built into the way it interfaces with you.

Other events in the past can also enlarge our understanding of the often awed reactions to ChatGPT and its surrounding hype. For example, when Samuel Morse inaugurated the first telegraph line in America (which stretched between Washington, D.C. and Baltimore), the message he famously tapped out implicitly asked a question (even though it lacked a question mark): "What hath God wrought."  

That message might simply be read as the product of an age that was, in some ways, less secularized than our own. And since Morse was the son of a Congregationalist minister, it seems, on one level, unsurprising that he'd use divine rhetoric. But Morse, of course, also stood to profit from the adoption of his invention. Dramatizing it as God's creation surely served that end. Other investors in the telegraph, like Congressman Francis O. J. Smith, used similar language. For Smith, the telegraph was worthy of "religious reverence." It had "almost super human agency" and was "unsurpassed" in its "moral grandeur."

Using religious language to hype technology may have revealed something important about human limits, or, put another way, about which powers should belong to humans and which to gods. But the language didn't go very far in anticipating the more immediately problematic aspects of telegraphy, including the way it hastened the spread of misinformation, information overload, and many forms of imperialism. Some of this is exemplified in John Gast's painting American Progress. In high school history classes this painting is often used to pictorially represent "manifest destiny," since it shows American settlers moving west. But it also depicts something more.

At the center of the painting is a deity trailing a telegraph wire. She too moves from east to west across the American landscape. At first gloss, the painting seems to echo Morse's and Smith's hype: God, in concert with humans, uses the telegraph to spread progress and enlightenment across the world. But a second, more discerning look shows that the deification of technology distracts attention from what Gast has painted on the left (westernmost) margins — namely the displacement of Native Americans as the telegraph and settlers move ever westward. Hype — in this case 19th-century holy hype — occludes the way the technology serves as a tool of imperialism and oppression.

Similar systems of occlusion and erasure are at work 150 years later in the current hype around AI. But instead of deifying the technology (although there is still some of that), the erasure works by imagining AI as autonomous, as sentient, with its own ends and its own deterministic imperatives that no one (especially those of us in academia) has the power to stop. When framed this way, the commercial ends of OpenAI and of Microsoft (which has a multibillion-dollar stake in the product) are obscured and left unquestioned. Reified in this manner, the goals appear to be in the AI — not in the people who profit from the AI or from the labors of the billions of writers whose work is used to train it. Erasure, as it were, happens at multiple levels. It's "The Turk" writ large.

If the above are not reasons enough for resisting the hype, one might also consider that Americans have been speaking reverently about emerging technologies for centuries. That hype might catalyze awe and fear, and feelings of what the historian David Nye called "the technological sublime," but those feelings contain within themselves their "own rapid obsolescence." They have "half-lives," and probably shorter half-lives than the sublimity past generations found in the technologies of their own day. By this time next year, ChatGPT might just feel ordinary, mundane, or, as Neil Postman once put it, just "part of the natural order of things."

So it seems like good counsel to be wise to the hype. Not only does it conceal as much as it reveals; if history is any guide, it's also likely to be ephemeral.

This isn't to say, however, that in guarding against hype we have to question everyone who writes inflated or extravagant stories about the power of technology or its capacity to appear autonomous. In the right circumstances, those stories, and the emotions they provoke, can illuminate aspects of social and political life that otherwise go unexplored.

After all, humans have been fashioning dramatic accounts of their tools, and of how those tools reconfigure their sense of limits and agency, for millennia. If we look backwards in time from the recent past to the distant past, the pattern is evident. Here are just a few of the many instances that give definition to it: In the movies Ex Machina and Her, humans fall in love with AIs, but the AIs outsmart the humans and abandon them. In the 1936 film Modern Times, Charlie Chaplin is swallowed up and turned into a machine by an assembly line. In countless Hollywood remakes and in Mary Shelley's original book, Victor Frankenstein is sometimes read as someone who has engaged in Promethean overreach and other times as someone who loses control because he neglects to nurture his creation. And the Greeks used dramatic mythical figures like Pandora, Icarus and Prometheus to broach the fraught relationships they had with their own technologies.

Those stories aren't meant to be taken literally. Instead they are metaphors that help express the limits of human freedom and how technology may (or may not) reshape that freedom. We tell stories about autonomous technology, and imagine technology as out of human control, not because we always literally believe them. We do so, as Langdon Winner said, because the idea of "autonomous technology is nothing more or less than the idea of human autonomy held up to a different light."

And this is where the criticism of AI hype is itself overhyped. As we have seen, there are good reasons to avoid mystifying machines if we don't want to be taken in by "The Turk" and its modern AI analogue. But taken too far, that position risks reducing technology to mere tools that are used instrumentally by humans for human ends. Technology is that. But hardly solely that.

As the creature says to Victor Frankenstein, "You are my creator, but I am your master." Clearly, we are not always the masters of our own technologies. And the ends that are designed into technology are not always aligned with the diverse ends of the people who use it. Being able to see these protean qualities in technology and the way we interact with it can enlarge our understanding of what it is.

The fact that technologies are not just tools is, notably, not the only message that Frankenstein imparts. This is because the novel, paradoxically, uses hype to call hype into question. Langdon Winner, in his book Autonomous Technology, offers an insightful interpretation: Victor Frankenstein only ever sees his relation to the creature in hyperbolic terms. He's in a fever to create it because he imagines that he's almost like God, bringing into existence his own Adam. But then, as soon as he brings the creature to life, he runs away from it, repulsed by its grotesque appearance.

In modern lingo, Frankenstein is the deadbeat dad who is unwilling to nurture or care for his offspring. Metaphorically, Victor can only see technology through hyperbole: it's either an artifact from heaven or one from hell. More crucially, Victor thinks the monstrousness is inherent in the creature itself rather than something that emerged as a result of his neglect.

As Winner notes, in many modern renditions of the Frankenstein tale, all that is portrayed is Victor's myopic point of view: technology suddenly becomes, of itself, an autonomous horror, killing and rampaging through the countryside. But significantly, Shelley's novel forwards a subtler message. The real reason Victor loses control of the creature is that he neglects it. The lesson that is obfuscated in the Hollywood version is clear in the novel: Victor's interactions with the creature (and by extension our own interactions with technology) would have been more benign if, after inventing the creature, he had bothered to stick around to nurture and maintain it and teach it limits. Shelley's dramatic, some might say hype-filled, novel delivers an important message still relevant in our own day as we grapple with our own fraught interactions with AI. Technology might not be inherently autonomous, but it's not a simple tool either — it's only partly under our control. And it is precisely her hype that helps remind us of that fact.
