Fortune
Eleanor Pringle

One of the three 'godfathers of A.I.' feels 'lost' because of the direction the technology has taken

Yoshua Bengio on stage at an event (Credit: Graham Hughes/Bloomberg - Getty Images)

The three so-called 'godfathers of A.I.' aren't thrilled with how the technology is evolving.

The trio of computer scientists—Professor Yoshua Bengio, Dr Geoffrey Hinton, and Yann LeCun—earned the nickname in 2019 when they won the prestigious Turing Award and were given $1 million to share among them.

Now the group, who have reportedly been friends for more than three decades, have turned their attention not to furthering the cause of artificial intelligence but to warning the industry that now is the time to put the brakes on.

In an interview with the BBC, Bengio said watching A.I. morph into an apparent threat has left him questioning his life's work, and that his direction and identity are no longer clear to him.

"It is challenging, emotionally speaking, for people who are inside [the A.I. sector]," he said. "You could say I feel lost. But you have to keep going and you have to engage, discuss, encourage others to think with you."

Bengio is one of many warning about the impact A.I. could have if it fell into the hands of military bodies. He is also the first signatory on an open letter, also signed by the likes of Tesla CEO Elon Musk and Apple co-founder Steve Wozniak, calling for a six-month pause on the development of the technology.

Second letter

Bengio, formerly an advisor to Microsoft and a collaborator with IBM, also signed a second letter this week suggesting that "mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Alongside Bengio's name are those of OpenAI CEO Sam Altman, who has openly called for increased regulation of the sector, and fellow 'godfather' Hinton.

Bengio believes companies working on powerful tools like ChatGPT—a large language model—should be registered: "Governments need to track what they're doing, they need to be able to audit them, and that's just the minimum thing we do for any other sector like building airplanes or cars or pharmaceuticals.

"We also need the people who are close to these systems to have a kind of certification…we need ethical training here. Computer scientists don't usually get that, by the way."

Currently a professor at the Université de Montréal, Bengio added that it's not too late to set the sector on the right path.

"It's never too late to improve," he said. "It's exactly like climate change. We've put a lot of carbon in the atmosphere. And it would be better if we hadn't, but let's see what we can do now."

What the other 'godfathers' think

Hinton is similarly nervous about the path A.I. is taking, saying he fears the technology will outstrip the intelligence of humans.

Previously a Google staffer, the award-winning computer scientist told the MIT Technology Review: “I have suddenly switched my views on whether these things are going to be more intelligent than us. I think they’re very close to it now and they will be much more intelligent than us in the future ... How do we survive that?”

The A.I. expert has similarly warned of the impact should the technology fall into the wrong hands.

“It is hard to see how you can prevent the bad actors from using it for bad things,” Hinton told the New York Times in an interview published in May. “I console myself with the normal excuse: If I hadn’t done it, somebody else would have.”

The third member of the group, LeCun, who is Meta's chief A.I. scientist, is far less worried about the negative impact of the technology.

LeCun has resoundingly rejected calls to delay A.I., telling a YouTube stream hosted by DeepLearningAI: "Why slow down the progress of knowledge and science? Then there is the question of products. I'm all for regulating products that get in the hands of people, I don't see the point of regulating research and development.

"I don't think it serves any purpose other than reducing the knowledge that we could use to actually make technology better and safer."

LeCun's bullish position doesn't seem to have changed from four years ago, when he co-wrote a piece in Scientific American saying humans "dramatically overestimate the threat of an accidental A.I. takeover".

He added: "We tend to conflate intelligence with the drive to achieve dominance. This confusion is understandable: during our evolutionary history as (often violent) primates, intelligence was key to social dominance and enabled our reproductive success.

"And indeed, intelligence is a powerful adaptation, like horns, sharp claws or the ability to fly, which can facilitate survival in many ways. But intelligence per se does not generate the drive for domination, any more than horns do."
