Fortune
Jeremy Kahn

Three top A.I. companies just published their thoughts on avoiding the A.I. apocalypse

(Credit: Yamada HITOSHI/Gamma-Rapho via Getty Images)

Hello and welcome to May’s special monthly edition of Eye on A.I.

The idea that increasingly capable and general-purpose artificial intelligence software could pose extreme risks, including the extermination of the entire human species, is controversial. Many A.I. experts believe such risks are outlandish and the danger so vanishingly remote as not to warrant consideration. Some of these same people see the emphasis on existential risks by a number of prominent technologists, including many who are working to build advanced A.I. systems themselves, as a cynical ploy: one intended both to hype the capabilities of their current A.I. systems and to distract regulators and the public from the real and concrete risks that already exist with today’s A.I. software.

And just to be clear, these real-world harms are numerous and serious: They include the reinforcement and amplification of existing systemic, societal biases, including racism and sexism, as well as an A.I. software development cycle that often depends on data taken without consent or regard to copyright, the use of underpaid contractors in the developing world to label data, and a fundamental lack of transparency into how A.I. software is created and what its strengths and weaknesses are. Other risks also include the large carbon footprint of many of today’s generative A.I. models and the tendency of companies to use automation as a way to eliminate jobs and pay workers less.

But, having said that, concerns about existential risk are becoming harder to ignore. A 2022 survey of researchers working at the cutting edge of A.I. technology in some of the most prominent A.I. labs revealed that about half of these researchers now think there is a greater than 10% chance that A.I.’s impact will be “extremely bad” and could include human extinction. (It is notable that a quarter of researchers still thought the chance of this happening was zero.) Geoff Hinton, the deep learning pioneer who recently stepped down from a role at Google so he could speak more freely about what he sees as the dangers of increasingly powerful A.I., has said models such as GPT-4 and PaLM 2 have shifted his thinking and that he now believes we might stumble into inventing dangerous superintelligence anytime in the next two decades.

There are some signs that a grassroots movement is building around fears of A.I.’s existential risks. Some students picketed OpenAI CEO Sam Altman’s talk at University College London earlier this week, calling on OpenAI to abandon its pursuit of artificial general intelligence—the kind of general-purpose A.I. that could perform any cognitive task as well as a person—until scientists figure out how to ensure such systems are safe. The protestors pointed to the apparent contradiction in Altman’s position: He himself has warned that the downside risk from AGI could mean “lights out for all of us,” and yet he continues to pursue ever more advanced A.I. Protestors have also picketed outside the London headquarters of Google DeepMind in the past week.

I am not sure who is right here. But I think that if there’s a nonzero chance of human extinction or other severely negative outcomes from advanced A.I., it is worthwhile having at least a few smart people thinking about how to prevent that from happening. It is interesting to see some of the top A.I. labs starting to collaborate on frameworks and protocols for A.I. safety. Yesterday, a group of researchers from Google DeepMind, OpenAI, Anthropic, and several nonprofit think tanks and organizations interested in A.I. safety published a paper detailing one possible framework and testing regime. The paper is important because the ideas in it could wind up forming the basis for an industry-wide effort and could guide regulators. This is especially true if a national or international agency specifically aimed at governing foundation models, the kinds of multipurpose A.I. systems that are underpinning the generative A.I. boom, comes into being. OpenAI’s Altman has called for the creation of such an agency, as have other A.I. experts, and this week Microsoft put its weight behind that idea too.

“If you are going to have any kind of safety standards that govern ‘is this A.I. system safe to deploy?’ then you're going to need tools for looking at that A.I. system and working out: What are its risks? What can it do? What can it not do? Where does it go wrong?” Toby Shevlane, a researcher at Google DeepMind and the lead author of the new paper, tells me.

In the paper, the researchers called for testing to be conducted both by the companies and labs developing advanced A.I. and by outside, independent auditors and risk assessors. “There are a number of benefits to having external [evaluators] perform the evaluation in addition to the internal staff,” Shevlane says, citing accountability and the vetting of safety claims made by the model creators. The researchers suggested that while internal safety processes might be sufficient to govern the training of powerful A.I. models, regulators, other labs, and the scientific community as a whole should be informed of the results of these internal risk assessments. Then, before a model can be set loose in the world, external experts and auditors should have a role in assessing and testing the model for safety, with the results also reported to a regulatory agency, other labs, and the broader scientific community. Finally, once a model has been deployed, there should be continued monitoring of the model, with a system for flagging and reporting worrying incidents, similar to the system currently used to spot “adverse events” with medicines that have been approved for use.

The researchers identified nine A.I. capabilities that could pose significant risks and for which models should be assessed. Several of these, such as the ability to conduct cyberattacks and to deceive people into believing false information or into thinking they are interacting with a person rather than a machine, are essentially already present in today’s large language models. Today’s models also have some nascent capabilities in other areas the researchers identified as concerning, such as the ability to persuade and manipulate people into taking specific actions and the ability to engage in long-term planning, including setting sub-goals. Other dangerous capabilities the researchers highlighted include the ability to plan and execute political strategies, the ability to gain access to weapons, and the capacity to build other A.I. systems. Finally, they warned of A.I. systems that might develop situational awareness—including possibly understanding when they are being tested, allowing them to deceive evaluators—and the capacity to self-perpetuate and self-replicate.

The researchers said those training and testing powerful A.I. systems should take careful security measures, including possibly training and testing the models in isolated environments where they have no ability to interact with wider computer networks, or where their access to other software tools can be carefully monitored and controlled. The paper also said that labs should develop ways to rapidly cut off a model’s access to networks and shut it down should it start to exhibit worrying behavior.

In many ways, the paper is less interesting for these specifics than for what its mere existence says about the communication and coordination between cutting-edge A.I. labs regarding shared standards for the responsible development of the technology. Competitive pressures are making the sharing of information about the models these tech companies are releasing increasingly fraught. (OpenAI famously refused to publish even basic information about GPT-4 for what it said were largely competitive reasons, and Google has also said it will be less open going forward about exactly how it builds its cutting-edge A.I. models.) In this environment, it is good to see that tech companies are still willing to come together and try to develop some shared standards on A.I. safety. How easy it will be for such coordination to continue, absent a government-sponsored process, remains to be seen. Existing laws may also make it more difficult. In a white paper released earlier this week, Google’s president of global affairs, Kent Walker, called for a provision that would give tech companies safe harbor to discuss A.I. safety measures without falling afoul of antitrust laws. That is probably a sensible measure.

Of course, the most sensible thing might be for the companies to follow the protestors' advice, and abandon efforts to develop more powerful A.I. systems until we actually understand enough about how to control them to be sure they can be developed safely. But having a shared framework for thinking about extreme risks and some standard safety protocols is better than continuing to race headlong into the future without those things.

With that, here are a few more items of A.I. news from the past week:

Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn
