Some of the biggest dangers of AI, like attacks on infrastructure, are already real and need to be guarded against, Google DeepMind CEO Demis Hassabis said at Axios' AI+ Summit in San Francisco Thursday.
Why it matters: The race to develop AI is changing society in real time, generally for good — but bad actors are taking advantage, too.
The big picture: Hassabis predicted in May that AI meeting or exceeding human capabilities, known as artificial general intelligence or AGI, could arrive by 2030.
What they're saying: In an interview with Axios' Mike Allen, Hassabis assessed the risk of a number of "catastrophic outcomes" from AI misuse as the technology develops, particularly "energy or water cyberterror."
- "That's probably almost already happening now, I would say, maybe not with very sophisticated AI yet, but I think that's the most obvious vulnerable vector," he said.
- That's one reason, Hassabis added, why Google is so heavily focused on cybersecurity: to defend against such threats.
The intrigue: AI experts often talk about a concept known as "p(doom)," or the probability of a catastrophe caused by AI.
- Hassabis said his own assessment of p(doom) was "non-zero."
- "It's worth very seriously considering and mitigating against," he said.