The Guardian - UK
Comment
Kenan Malik

Elon Musk v OpenAI: tech giants are inciting existential fears to evade scrutiny

The OpenAI logo reflected in a closeup of an eye. Emails released in response to Elon Musk’s legal challenge make it clear that ‘all board members agreed that “OpenAI” could not actually be open’. Photograph: Jaap Arriens/NurPhoto/Rex/Shutterstock

In 1914, on the eve of the First World War, HG Wells published a novel about the possibilities of an even greater conflagration. The World Set Free imagines, 30 years before the Manhattan Project, the creation of atomic weapons that allow “a man [to] carry about in a handbag an amount of latent energy sufficient to wreck half a city”. Global war breaks out, leading to an atomic apocalypse. It takes the “establishment of a world government” to bring about peace.

What concerned Wells was not simply the perils of a new technology; it was also the dangers of democracy. Wells’ world government was not created through democratic will but imposed as a benign dictatorship. “The governed will show their consent by silence,” England’s King Egbert menacingly remarks. For Wells, the “common man” was “a violent fool in social and public affairs”. Only an educated, scientifically minded elite could “save democracy from itself”.

A century on, another technology provokes a similar kind of awe and fear – artificial intelligence. From the boardrooms of Silicon Valley to the backrooms of Davos, political leaders, tech moguls and academics exult in the immense benefits that AI will bring, but fear that it may also herald humanity’s demise as superintelligent machines come to rule the world. And, as a century ago, at the heart of the debate are questions of democracy and social control.

In 2015, the journalist Steven Levy interviewed Elon Musk and Sam Altman, two founders of OpenAI, the tech company that burst into public consciousness two years ago with the release of ChatGPT, the seemingly human-like chatbot. A galaxy of Silicon Valley heavyweights, fearful of the potential consequences of AI, created the company as a non-profit-making charitable trust with the aim of developing technology in an ethical fashion to benefit “humanity as a whole”.

Levy quizzed Musk and Altman about the future of AI. “There’s two schools of thought,” Musk mused. “Do you want many AIs, or a small number of AIs? We think probably many is good.”

“If I’m Dr Evil and I use it, won’t you be empowering me?” Levy asked. Dr Evil is more likely to be empowered, Altman replied, if only a few people control the technology: “Then we’re really in a bad place.”

In reality, that “bad place” is being built by the tech companies themselves. Musk, who stepped down from OpenAI’s board six years ago to develop his own AI projects, is now suing his former company for breach of contract for having put profits ahead of the public good and failing to develop AI “for the benefit of humanity”.

In 2019, OpenAI created a for-profit subsidiary to raise money from investors, notably Microsoft. When it released ChatGPT in 2022, the model’s inner workings were kept hidden. It was necessary to be less open, Ilya Sutskever, another of OpenAI’s founders and at the time the company’s chief scientist, claimed in response to criticism, to prevent those with malevolent intent from using it “to cause a great deal of harm”. Fear of the technology has become the cover for shielding it from scrutiny.

In response to Musk’s lawsuit, OpenAI last week published a series of emails between Musk and other members of the board. These make clear that from the beginning all board members agreed that “OpenAI” could not actually be open.

As AI develops, Sutskever wrote to Musk, “it will make sense to start being less open. The Open in openAI means that everyone should benefit from the fruits of AI after its [sic] built, but it’s totally OK to not share the science.” “Yup,” responded Musk. Whatever his lawsuit might say, Musk is no more open to openness than other tech moguls. The legal challenge to OpenAI is more a power struggle within Silicon Valley than an attempt to bring about accountability.

Wells wrote The World Set Free at a time of great political turmoil when many questioned the wisdom of extending suffrage to the working class.

“Was it desirable, was it even safe to entrust [the masses],” the Fabian Beatrice Webb wondered, with “the ballot box, with making and controlling the Government of Great Britain with its enormous wealth and its far-flung dominions?” This was the question at the heart of Wells’ novel too – to whom could one entrust the future?

A century later, we are again having a fierce debate about the virtues of democracy. For some, the political turmoil of recent years has been the product of too much democracy, of allowing the irrational and uneducated to make important decisions. “It is unfair to thrust on to unqualified simpletons the responsibility to take historic decisions of great complexity and sophistication,” as Richard Dawkins put it after the Brexit referendum, a sentiment with which Wells would have agreed.

For others, it is precisely such disdain for ordinary people that has helped create a democratic deficit in which large sections of the population feel deprived of a say in how society is run.

It is a disdain that feeds into discussions of technology too. As in The World Set Free, the AI debate centres on questions not just about the technology but also about openness and control. Despite the alarmism, we are a long way from “superintelligent” machines. Today’s AI models, such as ChatGPT or Claude 3, released last week by another AI company, Anthropic, are so good at predicting what the next word in a sequence should be that they can fool us into imagining they can hold a human-like conversation. They are, however, not intelligent in any human sense, have negligible understanding of the real world and are not about to extinguish humanity.

The problems that AI poses are not existential, but social. From algorithmic bias to mass surveillance, from disinformation and censorship to copyright theft, our concern should not be that machines may one day exercise power over humans but that they already work in ways that reinforce inequalities and injustices, providing tools by which those in power can consolidate their authority.

That is why what we might call the “Egbert manoeuvre” – the insistence that some technologies are so dangerous that they must be put beyond democratic pressure and controlled by a select few – is so menacing. The problem is not just Dr Evil but those who use the fear of Dr Evil to protect themselves from scrutiny.

• Kenan Malik is an Observer columnist
