Fortune
Peter Vanham, Nicholas Gordon

'Sapiens' author warns that A.I. could erode trust and kill all democracies

Professor Yuval Noah Harari, author and Professor of History at the Hebrew University of Jerusalem, speaks about themes from his new book 'Homo Deus: A Brief History of Tomorrow' on September 8, 2016 at the Dancehouse Theatre as part of the Manchester Literature Festival in Manchester, England. (Photo by Jonathan Nicholson/NurPhoto via Getty Images) (Credit: Jonathan Nicholson—NurPhoto via Getty Images)

Good morning, 

Does anyone still doubt Alan’s assessment late last year that the advent of generative A.I. would prove to be the most important news event of 2023? 

Not Klaus Schwab of the World Economic Forum. I caught up with my former boss at his offices in Cologny, Geneva, yesterday, and he told me he hardly has time for anything else. A.I. completely dominates his agenda, including in his conversations with Silicon Valley luminaries.

And not the United Nations either. The UN’s telecommunications arm organized an “A.I. for Good” summit in Geneva late last week, and at it, philosopher and Sapiens author Yuval Noah Harari told The Atlantic’s Nick Thompson what he believes to be A.I.’s most imminent threat to society.

Rather than warning broadly about “the risk of extinction,” he pointed to A.I.’s standout quality: it is the first tool in human history that can make decisions and come up with ideas on its own.

He specifically warned of the risk posed by A.I. bots. “If you can’t know who is a real human and who is a fake human, trust will collapse, and with it, at least free society. Maybe dictatorships will be able to manage somehow, but not democracies.”

His proposed solutions include prison sentences of up to 20 years for A.I. developers who create bots and allow them to run wild, as well as requirements that developers invest a minimum amount in A.I. safety.

Given all this A.I. angst, I’d like to submit an entry for Alan’s summer reading list: Mary Shelley’s Frankenstein.

Written just a few hundred yards from my own offices on Lake Geneva, the novel captures feelings of fear and awe toward technological progress, during the industrial revolution of the early 19th century, much like those many people harbor toward A.I. today.

Shelley warns of what might happen if innovators don’t think through the unintended consequences of their inventions. In Frankenstein’s dark romantic world, the monster’s creation leads to the wrongful execution of his family’s servant and to the murder of his wife. The lesson is clear: don’t just innovate; make sure your inventions are safe.

But there’s a deeper lesson too. For all the fear of monsters, it wasn’t a Frankenstein-like creature that led to the greatest suffering in the 19th and early 20th centuries. It was unequal access to technology’s riches and the very deliberate human decisions to wage war on one another, both militarily and in terms of trade.  

On a separate note, if you’d like to learn more about a far more benign human creation, tune in to Alan and Michal Lev-Ram's Leadership Next interview with Mattel CEO Ynon Kreiz on Spotify or Apple Podcasts. On the podcast, Kreiz talks about how Mattel’s most famous invention, Barbie, is going from strength to strength, and how she successfully—and harmlessly, I should add—made the move to the big screen. 

More news below.

Peter Vanham
peter.vanham@fortune.com
@petervanham
