As artificial intelligence accelerates, so does the prospect of a cyberattack powerful enough to shut down hospitals, black out cities and disrupt core government systems.
Why it matters: Just by scaling and accelerating the cyberwarfare tools adversaries already have, AI can turn manageable intrusions into large-scale crises.
- Axios asked seven former senior cybersecurity officials and leading security experts what a major AI-enabled cyberattack would look like and what worries them the most about current advancements in generative AI.
The big picture: Several of the experts pointed to the vulnerability of utilities, particularly water and electricity.
- Former Defense Secretary Leon Panetta worries AI tools will speed adversaries' ability to burrow into sensitive systems and turn off the lights — and potentially also disable backup systems to prevent a timely recovery.
- Gen. Paul Nakasone, former head of the NSA and Cyber Command, raised the possibility that a nation-state that has breached systems critical to supplies of food and water could trigger an outage accidentally, if they lose control of an AI agent.
- Chinese government-linked hackers are known to have accessed U.S. critical infrastructure systems. But nation-states know the risks of attacking the U.S. directly, Nakasone said: "The United States is going to respond and they're not going to respond necessarily only in cyberspace."
The intrigue: That's one reason the experts tended to think accidental escalation was at least as likely as a targeted attack.
- The AI future could look a lot more like the 1983 film "WarGames," in which Matthew Broderick plays a hacker who nearly ignites nuclear war by accident, than Skynet from "The Terminator."
- Chris Inglis, the first U.S. national cyber director, noted the perils of a world in which AI is both carrying out and detecting cyberattacks, and feeding that information back to human decision-makers.
- "There's a human foible, human frailty involved in this — in terms of building human confidence based upon this machine's ability to inform that confidence, so the human is willing to push the Big Red Button," he said.
Zoom in: "The Big One could be a lot of different things: One, utilities. Two, communications. Three, healthcare. Four, anything in logistics and travel, that'd be a disaster. I don't want to give the bad guys all the ideas, but they probably already have them," says Kevin Mandia, founder of cybersecurity firm Mandiant.
- He thinks "the big one" will hit one of those, rather than all at once. "It's going to be against a few specific targets or an industry. It's not going to be widespread because it makes no sense to burn the resources of the ultimate offense on everything."
- Michael Sulmeyer, former assistant secretary of Defense for cyber policy, is particularly worried about health care.
- It's already theoretically possible "to abuse an AI model and just start asking it to pick vulnerable targets that are hospitals," then try to knock them offline, says Sulmeyer, who is now a Georgetown professor.
Threat level: Former CIA Director Michael Hayden was the most definitive of the group that an attack at a scale we have not seen before is coming.
- "I think it's going to happen, that's assured. Maybe one year, five years? We just don't know." He pointed to Russia as a possible culprit, noting that Moscow was "more desperate" than Beijing.
- Panetta said part of the danger of AI is that you can't control who has access, even though the tools will be powerful enough to "paralyze a country."
- Other ex-officials said a series of smaller-scale cyberattacks could be just as dangerous as one big one, if they lead to unprecedented data wipes or corporate shutdowns.
Between the lines: Generative AI is amplifying the risks in part because it is advancing attackers' capabilities, but also because cyber operatives could come to trust it too much, the experts say.
- "AI isn't creating entirely new cyber risk — it's scaling existing weaknesses in insecure software, brittle systems, and over-trusted automation, while making attacks harder to spot because they blend into normal operations," says Jen Easterly, former director of the Cybersecurity and Infrastructure Security Agency.
What to watch: Whether companies, agencies and utilities can quickly and effectively tap AI tools to shore up their defenses.