Investors Business Daily
Business
SCOTT S. SMITH

Geoffrey Hinton, The Godfather Of AI, Now Warns About His Creation

The idea of creating computer programs that can mimic the human brain wasn't always accepted. But it fascinated Geoffrey Hinton — and his work laid the foundation for today's artificial intelligence.

Pulling this off took major organic brain power, though. In the 1970s, computers were too primitive to mimic a human mind. Hinton, now 77, took a wide variety of courses at Cambridge University to better understand how to get around the barriers. He graduated with a B.A. in experimental psychology in 1970. And in 1978, he earned a Ph.D. in artificial intelligence at the University of Edinburgh.

But even education couldn't push AI forward. Hinton needed more resources — and he was willing to go anywhere to find them.

Geoffrey Hinton: Listen To Your Gut

By the late 1970s, funding for AI had dried up for lack of results. And the approach Hinton advocated, the development of networks of simulated brain cells, was out of favor. So, he needed a new environment to get his ideas back on track.

He moved to the University of California at San Diego in 1978 to do research. Then he started teaching at Carnegie Mellon in Pittsburgh in 1982, before joining the computer science faculty at the University of Toronto in 1987.

"Many of my colleagues thought I was crazy to keep pursuing the simulation of how the brain works, but if you think you have a really good idea and others tell you it's complete nonsense, then you know you're onto something," Hinton told IBD.

Today, Hinton is known as the Godfather of AI and shared the Nobel Prize for physics in 2024 with John Hopfield.

Persist In Climbing Your Mount Everest, Like Hinton

Geoffrey Everest Hinton was born in Wimbledon, England, into a scientific lineage. And he tapped it multiple times in his life.

His father was a professor of entomology (specializing in beetles). They shared a middle name that came from a relative, George Everest, whose name was given to the mountain after he made it possible to calculate its height as surveyor general of India. His great-grandfather, Charles Hinton, was a mathematician who envisioned a four-dimensional analog to the cube. And his great-great-grandfather, George Boole, invented Boolean logic (the algebra that underpins computer circuits).

Following family tradition, he went to King's College, Cambridge, in 1967. Hinton finished his undergraduate degree after studying physics, physiology, philosophy and psychology.

"I got fed up with academia because I realized no one had a clue and decided I would rather be a carpenter," he told the New York Times.

A year later, he realized his carpentry skills were not serving him well enough (though he continues to use them as a hobby). He moved to Edinburgh in 1972 to pursue a Ph.D. because the university had a new AI program. But Hinton focused on developing artificial neural networks. He felt those better simulated human thought. His advisor, though, advocated the rival logic-based approach to AI.

Tap Continuous Learning To Make Progress

In San Diego, working as a postdoctoral researcher, Hinton joined a group of cognitive psychologists studying how people learn and solve problems. The researchers had a strong interest in developing neural networks to mimic the brain.

They began making good headway with the backpropagation algorithm, first proposed in a 1974 Harvard doctoral thesis, for training neural networks so computers could learn from data.

It works like this: When you look at a tree, light reflected from it enters your retina. Electrical impulses then travel to the brain, forming an internal representation of the tree in your mind.

"In AI, the Holy Grail was: How do you generate internal representations?" Hinton told the New York Times.
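The training procedure described above can be sketched in a few dozen lines. The toy network below (an illustration of the general technique, not Hinton's actual code) has two inputs, two sigmoid hidden units and one output. The forward pass produces a prediction, the error is propagated backward through the chain rule, and each weight takes a small step against its gradient until the network learns the logical AND of its inputs:

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy dataset: logical AND of two binary inputs.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

# A 2-2-1 network: two inputs, two sigmoid hidden units, one sigmoid output.
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b1 = [0.0, 0.0]
w2 = [random.uniform(-1, 1) for _ in range(2)]
b2 = 0.0

def forward(x):
    h = [sigmoid(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(2)]
    y = sigmoid(w2[0] * h[0] + w2[1] * h[1] + b2)
    return h, y

def mean_loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

lr = 0.5
for epoch in range(8000):
    for x, t in data:
        h, y = forward(x)
        # Backward pass: propagate the error from the output to the hidden layer.
        dy = 2 * (y - t) * y * (1 - y)           # gradient at the output unit
        for j in range(2):
            dh = dy * w2[j] * h[j] * (1 - h[j])  # gradient at hidden unit j
            w2[j] -= lr * dy * h[j]
            b1[j] -= lr * dh
            for i in range(2):
                w1[j][i] -= lr * dh * x[i]
        b2 -= lr * dy

final = mean_loss()  # small after training: the network has learned AND
```

The key idea is in the backward pass: each hidden unit's share of the blame (`dh`) is computed from the output error (`dy`) via the chain rule, which is exactly what lets multilayer networks form useful internal representations.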

He knew he needed support to get a computer to work like the human mind. At Carnegie Mellon, whose computer science department had studied AI since the 1950s, he became a professor in 1982 and made some progress generating these internal representations.

But computers were still slow. Even so, his work paid off: The neural networks he developed fed into "deep learning," which uses multilayered neural networks to perform tasks like classification.

Find Innovation Benefits From Collaboration

Still looking to push the field ahead, Hinton moved to the University of Toronto in 1987. And in 2004 he set up a program at the Canadian Institute for Advanced Research that he directed for 10 years. The program, now known as Learning In Machines & Brains, included academic superstars Yoshua Bengio and Yann LeCun. Together they pioneered deep learning.

Hinton took a break from this work from 1998 to 2001 to set up a Computational Neuroscience Unit at University College London before returning to Toronto.

And time allowed the world to catch up to his ideas. By 2009, computers were fast enough for deep learning to excel at speech recognition. And in 2012, Hinton and two of his students showed that deep learning was far better at recognizing objects in images than hand-engineered computer vision systems. They formed DNNresearch, which Alphabet acquired in 2013 for $44 million.

Seeing so much opportunity, Hinton divided his time between university research and working for Google Brain, an AI unit of Alphabet.

Address The Benefits And Risks Of AI

Hinton has a long history of touting the potential benefits of AI, especially for interpreting medical images (he lost two wives to cancer). But his leadership role in the field was shifting.

As the New York Times noted in 2017, if you "text on your smartphone, search for a photo on Google, or in the not-too-distant future ride in a self-driving car, you will be using technology based partly on Dr. Hinton's ideas."

So it came as a shock when, in May 2023, Hinton announced he was resigning from his role at Alphabet to speak out candidly about the dangers of AI. These risks were highlighted in the 2025 International AI Safety Report, written by 96 experts and commissioned by 30 nations along with the United Nations. It details AI's history and its potential for misuse, malfunction and social disruption.

In 2023, the U.S. and the U.K. each established an AI Safety Institute; both have noted that AI can lead to cyberattacks and bioterrorism, unemployment and harmful digital manipulation.

Respect The Rules

Hinton is flexible enough to know he must use his leadership in a new way. He believes the controls and preventive measures put in place by governments and technology companies fall short of the risks that the rapid advancement of AI poses to humanity.

In 2018, Hinton was optimistic about the overall benefits of AI. He thought it would take 30 to 50 years for superintelligence, or artificial general intelligence (AGI), to emerge and possibly view humans as an inferior rival that might need to be eliminated.

Now he believes this could take place within five to 20 years because he thinks that digital intelligence may well be a superior form of intelligence to the analog intelligence of our brains. Simply stated: Multiple copies of the same digital neural network running on different hardware can share what they have learned millions of times more efficiently than we can.
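Hinton's efficiency argument can be made concrete with a schematic sketch (an illustration of the general idea, not any real system): two identical copies of a one-parameter model each train on their own data shard but exchange and average gradients every step, so each copy learns from data it never saw while the copies stay exactly in sync.

```python
# Two identical copies of a one-parameter model y = w * x, each training on
# its own data shard but averaging gradients every step.
# (A schematic sketch of weight sharing, not any real system.)

w_a = 0.0  # copy A's weight
w_b = 0.0  # copy B's weight (starts identical, as digital copies can)

# The true relationship is y = 3x; each copy sees a different shard.
shard_a = [(x, 3 * x) for x in (1.0, 2.0, 3.0)]
shard_b = [(x, 3 * x) for x in (4.0, 5.0, 6.0)]

def grad(w, shard):
    # Gradient of mean squared error for the model y = w * x.
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

lr = 0.01
for step in range(200):
    g = (grad(w_a, shard_a) + grad(w_b, shard_b)) / 2  # share what was learned
    w_a -= lr * g
    w_b -= lr * g

# Both copies converge to w close to 3 even though neither saw all the data,
# and they remain exactly identical throughout.
```

A biological brain has no analogous channel: one person cannot transmit synapse strengths to another, which is the asymmetry Hinton is pointing to.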

"The systems companies are developing to keep AI submissive are not going to work because it is going to be much smarter than us and will find all sorts of ways to get around them," he told the Ai4 conference in Las Vegas in August 2025.

Develop A Safe And Productive Relationship With AI

Hinton always looks for a solution to problems.

And with rogue AI, he proposed building "maternal instincts" into the systems, so that "they care about people" even when they are smarter and more powerful. He believes AI will inevitably create two subgoals. The first? Simply staying alive, because it knows humans still have the ability to threaten it. (AI would know, for example, about the unplugged computer in "2001: A Space Odyssey.") The other subgoal would be for AI to gain more control.

"If we can find a productive relationship with AGI, this could enable us to benefit from the ability to assess the vast amount of medical data being generated by MRI and CT scans, leading to breakthroughs in medical technology and radical new drugs," he said.

Geoffrey Hinton's Keys

  • A leading authority on AI and its benefits and risks.
  • Overcame: Skepticism of other scientists about his approach to developing artificial intelligence.
  • Lesson: "We need to be prepared for general-purpose AI that could bring about changes comparable in scale with the Industrial Revolution."