The Guardian - UK
Technology

The danger of blindly embracing the rise of AI

‘Beware AI’s potential to create a convincing reality,’ writes Alan Lewis. Photograph: MattLphotography/Alamy

Evgeny Morozov’s piece is correct insofar as it states that AI is a long way from the general sentient intelligence of human beings (The problem with artificial intelligence? It’s neither artificial nor intelligent, 30 March). But that rather misses the point of the thinking behind the open letter to which I and many others are signatories. ChatGPT is only the second AI chatbot to pass the Turing test, proposed by the mathematician Alan Turing in 1950 to assess whether a machine can mimic conversation convincingly enough to be judged human by the other participant. To that extent, current chatbots represent a significant milestone.

The issue, as Evgeny points out, is that a chatbot’s abilities are based on a probabilistic prediction model and vast sets of training data fed to the model by humans. To that extent, the output of the model can be guided by its human creators to meet whatever ends they desire, with the danger being that its omnipresence (via search engines) and its human-like abilities have the power to create a convincing reality, and trust, where none does or should exist. As with other significant technologies that have had an impact on human civilisation, development and deployment often proceed far faster than our ability to understand all their effects, leading to sometimes undesirable and unintended consequences.

We need to explore these consequences before diving in with our eyes shut. The problem with AI is not that it is neither artificial nor intelligent, but that we may in any case blindly trust it.
Alan Lewis
Director, SigmaTech Analysis

• The argument that AI will never achieve true intelligence because it cannot possess a genuine sense of history, injury or nostalgia, and is confined to a singular formal logic, overlooks the ever-evolving capabilities of AI. Integrating a large language model into a robot would be trivial and would simulate human experiences. What would separate us then? I recommend Evgeny Morozov watch Ridley Scott’s Blade Runner for a reminder that the line between man and machine may become increasingly indistinct.
Daragh Thomas
Mexico City, Mexico

• Artificial intelligence sceptics follow a pattern. First, they argue that something can never be done, because it is impossibly hard and quintessentially human. Then, once it has been done, they argue that it isn’t very impressive or useful after all, and not really what being human is about. Then, once it becomes ubiquitous and the usefulness is evident, they argue that something else can never be done. As with chess, so with translation. As with translation, so with chatbots. I await with interest the next impossible development.
Edward Hibbert
Chipping, Lancashire

• AI’s main failings lie in its differences from humans. AI does not have morals, ethics or a conscience. Nor does it have instinct, much less common sense. The dangers of its misuse are all too easy to see.
Michael Clark
San Francisco, US

• Thank you, Evgeny Morozov, for your insightful analysis of why we should stop using the term artificial intelligence. I say we go with “appropriating informatics” instead.
Annick Driessen
Utrecht, the Netherlands
