Fortune
Jeremy Kahn

Everyone wants A.I. governance. So why are so many unhappy with Musk’s call for a six-month pause?

Elon Musk photo. (Credit: Taylor Hill—Getty Images)

Hello and welcome to March’s special edition of Eye on A.I.

By now you've probably heard about the open letter signed by Elon Musk, Apple cofounder Steve Wozniak and more than 1,800 others calling for a six-month pause on the development of any A.I. systems “more powerful than GPT-4.” Others signing the letter included a co-founder of Pinterest, the CEO of Getty Images, and the president of the Bulletin of the Atomic Scientists, the organization that maintains the Doomsday Clock.

The letter was put out under the auspices of the Future of Life Institute (FLI), a U.S. nonprofit headed by MIT physicist Max Tegmark that is generally concerned with existential risks of all kinds, from asteroids to pandemics to, yes, runaway digital superintelligence. FLI is not officially part of Effective Altruism (EA), the movement whose adherents try to use utilitarian principles to ensure philanthropy benefits the most people, but there is some overlap in the personalities and donors involved in the two. That said, a lot of people signed the open letter who have no prior affiliation with either FLI or EA.

The letter called for the companies building powerful A.I. to use the six-month pause to agree on a set of verifiable procedures for ensuring powerful A.I. can be deployed safely. It said that if the companies couldn’t or wouldn’t agree to this pause on their own, governments should impose the moratorium. It also called on governments to immediately develop new governance and inspection regimes at both the national and international level to prevent the potential dangers of advanced A.I., which range from misinformation and cyberattacks to, yep, Armageddon.

The companies building advanced A.I. have mostly reacted to the letter by ignoring it. Only Anthropic, the A.I. lab founded in 2021 by researchers who broke away from OpenAI, provided a statement in response. “We think it’s helpful that people are beginning to debate different approaches to increasing the safety of AI development and deployment,” an Anthropic spokesperson said. From the rest, crickets, although OpenAI CEO Sam Altman did tweet that “for a good AGI future” (AGI, short for artificial general intelligence, is the kind of A.I. superintelligence that is the stuff of science fiction), there would need to be the technical ability to align a superintelligence, coordination among the leading A.I. efforts, and an effective regulatory framework to ensure “democratic governance” of the effort. Those are all things the open letter endorses. But I get the feeling OpenAI is not about to commit to a six-month pause.

Meanwhile, lots of other people have been critical of the open letter. The criticisms tend to fall into the following buckets, which are not mutually exclusive (my colleague David Meyer offered his own, slightly different taxonomy in Fortune’s Data Sheet newsletter yesterday):

- Those who dislike and distrust Elon Musk. It’s not clear that having Musk as the lead signatory was the smartest move by FLI. It ensured the letter got a ton of news coverage, but it also made it very easy to dismiss the letter’s call for a pause and regulation as another somewhat unhinged and self-serving idea from Musk. Many also pointed out Musk’s hypocrisy: he has opposed government regulation when it comes to Tesla, yet now he’s begging for it when it comes to a product he doesn’t make, and in particular for A.I. systems emanating from a lab (OpenAI) that he helped co-found but then abandoned, apparently at least in part because he wasn’t allowed to run the place.

- It’s a commercial ploy. Many pointed out that a number of the signatories run A.I. companies that are trying to compete with OpenAI, Microsoft, Google, Anthropic, Cohere, and others in building foundation models. Emad Mostaque, CEO of Stability AI (the company behind Stable Diffusion) and one of the letter’s signatories, has complained about the advantage big tech firms like Microsoft and Google (and the A.I. companies now closely aligned with them) enjoy because of their access to massive numbers of graphics processing units (GPUs), the specialized computer chips needed to train and run these large foundation models. That advantage is magnified as model sizes keep getting bigger. (A paper from DeepMind last year showed that model size and performance are not directly proportional; the training data and how the network is trained matter too. But for all intents and purposes, the larger the model, the more capable it will be.) Musk, for his part, has talked about forming a new lab to compete with OpenAI. The criticism here is that Musk, Mostaque, and others only want the six-month moratorium so that their own efforts can catch up.

- China will never agree. The argument here is that only Western companies would agree to the moratorium, putting the West at a disadvantage in the increasingly tense geopolitical competition over advanced A.I. systems. It’s true that these systems have national security implications. On the other hand, it’s not clear that a six-month pause would really enable Chinese researchers to beat others to a major breakthrough. So far, Chinese companies have mostly been fast followers of U.S. A.I. innovation and have not jumped ahead in capabilities. It is also possible that China would agree to a moratorium precisely because it knows it is falling behind the U.S. in capabilities, or because it legitimately fears that any harm from A.I. could hurt China as much as anywhere else.

- The letter feeds into A.I. hype and distracts attention from the present harms of foundation models. This was the reaction of a good number of A.I. researchers, particularly those associated with A.I. ethics research. Emily Bender, the University of Washington computational linguist who has become a fierce critic of A.I. hype, especially when it comes to large language models, produced a widely read takedown of the letter on Twitter along these lines.

Bender and many others in the A.I. ethics camp basically agree at a high level with the letter’s call for fast action on government regulation, but they vehemently dislike the letter’s framing. They think any regulatory effort should focus on the harms and dangers of existing A.I., including GPT-4 and tons of systems that are ostensibly less powerful and less general. Those harms already include misinformation, the perpetuation and amplification of biases, the exploitation of data labelers in poor countries who often help to refine the training data for A.I. systems, the exploitation of artists and writers whose work is often used to train these systems without proper compensation, the potential loss of jobs as companies use these systems to replace workers, and the environmental impact of all the electricity consumed in training and running these large foundation models. All of those things, they argue, are more pressing than the idea that future A.I. systems will somehow exceed human intelligence and pose an existential risk. (Much of this has to do with the unfortunate division of A.I. ethics and A.I. safety into separate and warring camps, which I’ve covered in this newsletter before.)

The A.I. Ethics side sees talk of existential risk as a red herring that only distracts people from focusing on the harms of existing large foundation models. (Some even theorize conspiratorially that this in fact is the intention of those sounding the alarm about existential risk, who they accuse of being disingenuous.) Most of the A.I. ethics folks believe AGI is either impossible—or at least can’t be achieved by making ever larger GPT models, which is pretty much what all of the A.I. labs are doing.

- The letter’s proposed solution is anticompetitive and hurts innovation. This was essentially the position taken by deep learning pioneers Yann LeCun, who is currently Meta’s chief A.I. scientist, and Andrew Ng, a widely respected former Stanford University computer scientist who is now the founder and CEO of Landing AI, a company that sells computer vision models. “Having governments pause emerging technologies they don’t understand is anti-competitive, sets a terrible precedent, and is awful innovation policy,” Ng wrote.

It seems highly unlikely that a moratorium is going to happen. But there are other moves afoot to try to force governments to rein in A.I. companies. Yesterday, as my colleague David reported in a separate news item, a nonprofit group dedicated to a “socially-just rollout of A.I.” filed a formal complaint with the U.S. Federal Trade Commission alleging that OpenAI is violating the FTC Act with GPT-4 and ChatGPT, saying the products are designed to deceive. The FTC has previously cautioned that it is watching the development of new A.I.-powered chatbots closely, and this complaint may be all the agency needs to launch an investigation and take action. In addition, the European Consumer Organisation called on regulators at both the national and EU level to take immediate action to regulate ChatGPT and its competitors. The group said that waiting for the EU’s Artificial Intelligence Act, which is currently being debated by legislators, to come into force would be inadequate given the rate at which consumers and businesses are adopting generative A.I.

Not all European regulators are waiting. As this newsletter was about to go to press, news broke that the Italian privacy regulator had temporarily banned ChatGPT in the country on the grounds that the system’s creator, OpenAI, was violating Europe’s strict GDPR data privacy law. The Italian regulator claimed that OpenAI had illegally processed the personal information of many individuals as part of the training data fed to the large language models that power ChatGPT. It also said that OpenAI processed the personal data inaccurately, in violation of a little-tested provision of GDPR. Finally, it said that OpenAI had no age restrictions in place on the use of ChatGPT and that this exposed children to age-inappropriate answers from the chatbot, again in violation of the sweeping privacy law. As of press time, OpenAI had not yet commented on the ban.

Meanwhile, back in the U.S., several senators, including Michael Bennet (D-Colo.) and Chris Murphy (D-Conn.), have started calling for more robust government regulation of A.I. (Unfortunately, Murphy has also sent some ill-informed tweets, including one claiming ChatGPT “taught itself to do advanced chemistry,” that were panned by A.I. researchers for contributing to A.I. hype and fear-mongering.)

I am fairly sympathetic to the overall intention of the open letter, even if I too think a six-month pause would almost certainly require coordinated government action that is unlikely to be forthcoming, at least not yet. A.I. companies are clearly racing one another to release advanced A.I.-powered products and to hook large language models up to the internet (as OpenAI has now done with ChatGPT plugins). And you don’t have to believe in AGI to think that this is potentially very dangerous and that the government ought to step in now to force the companies to slow down and put in place better safety guardrails.

With that, here’s some other A.I. news from the past week.

Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn
