The Guardian - UK
Technology

The existential threat from AI – and from humans misusing it

The 2004 film adaptation of Isaac Asimov’s book I, Robot, starring Will Smith. Photograph: Digital Domain/20th Century Fox/Allstar

Regarding Jonathan Freedland’s article about AI (The future of AI is chilling – humans have to act together to overcome this threat to civilisation, 26 May), isn’t worrying about whether an AI is “sentient” rather like worrying whether a prosthetic limb is “alive”? There isn’t even any evidence that “sentience” is a thing. More likely, like life, it is a bunch of distinct capabilities interacting, and “AI” (ie disembodied artificial intellect) is unlikely to reproduce more than a couple of those capabilities.

That’s because it is an attempt to reproduce the function of just a small part of the human brain: more particularly, of the evolutionarily new part. Our motivation to pursue self-interest comes from a billion years of evolution of the old brain, which AI is not based upon. The real threat is from humans misusing AI for their own ends, and from the fact that the mechanisms we have evolved to recognise other creatures with minds like ours are (as Freedland highlighted) too easily fooled by superficial evidence.
Roger Haines
London

• Isaac Asimov’s book I, Robot makes useful reading. I quote the introductory page as follows.

The three laws of robotics:
“1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.”

Asimov’s predictions, made more than 70 years ago, imagined a time in 2058 when these laws would be necessary. Things have moved faster than he expected.
Prof Paul Huxley
London

• Your article (Yes, you should be worried about AI – but Matrix analogies hide a more insidious threat, 30 May) confirms much of what I have suspected about the AI “existential threat” we’re supposed to fear. It feels a bit like the Y2K panic, and it seems to follow the same playbook: first, establish an overblown future cause for corporate and political concern; then sell the “solution”, which will, of course, cost serious money in research and consultancy fees.

The harms that Samantha Floreani describes are clear, current and in plain sight, and the means to deal with them are not hard to figure out. We don’t need the snake oil salesmen for this.
Phyl Hyde
Coventry

• Have an opinion on anything you’ve read in the Guardian today? Please email us your letter and it will be considered for publication in our letters section.
