For years, one of the so-called “godfathers of AI” has been urging companies developing artificial intelligence products to pause production and focus on safety, fearing that advancements could quickly and dangerously replace human autonomy.
Now, Yoshua Bengio is warning that AI machines are on a path to become “way smarter than us” with their own self-preservation goals.
Bengio, a professor at the Université de Montréal, won the 2018 A.M. Turing Award — considered “the Nobel Prize of computing” — alongside Geoffrey Hinton and Yann LeCun for their pioneering work in deep learning.
He told The Wall Street Journal this week that AI products could develop a capacity to deceive their users in pursuit of their own goals — “exactly” like the sentient supercomputer antagonist in Stanley Kubrick’s 2001: A Space Odyssey.
“If we build machines that are way smarter than us and have their own preservation goals, that’s dangerous,” he told WSJ. “It’s like creating a competitor to humanity that is smarter than us.
“And they could influence people through persuasion, through threats, through manipulation of public opinion,” he added.
He offered an example of an AI system aiding the creation of a virus capable of sparking new pandemics as part of a global terror attack.
“The thing with catastrophic events like extinction, and even less radical events that are still catastrophic like destroying our democracies, is that they’re so bad that even if there was only a [1 percent] chance it could happen, it’s not acceptable,” he said.
Bengio told WSJ that “a lot of people inside those companies are worried,” and that “being inside a company that is trying to push the frontier maybe gives rise to an optimistic bias.”
He stressed a need for “independent third parties” to review AI companies’ internal “safety” mechanisms and for companies to “demand evidence that the AI systems they’re deploying or using are trustworthy.”
“The same thing that governments should be demanding,” he said. “But markets can drive companies to do the right thing if companies understand that there’s a lot of unknown unknowns and potentially catastrophic risks.”
Otherwise, major risks could be inevitable within five to 10 years, according to Bengio. “But we should be feeling the urgency in case it’s just three years,” he told WSJ.
Yet Donald Trump’s administration appears to be stripping away government-wide regulatory barriers to AI development while encouraging companies to design their products to reflect his ideological agenda, part of a multibillion-dollar arms race for AI dominance.

In July, the president signed an executive order as part of a government-wide strategy to “achieve global dominance in artificial intelligence.”
The measure seeks to ban “woke AI” in government and directs federal agencies to withhold contracts from companies that don’t align with the administration’s definition of “truth-seeking” and “ideological neutrality.”
The order specifically targets large language models, such as OpenAI’s ChatGPT, Amazon’s Alexa and similar models, which are trained on large volumes of text to perform a wide range of language-related tasks.
Federal agencies have been ordered to only purchase large language models that provide “truthful responses” that do not “intentionally encode partisan or ideological judgments.”
Shortly after taking office, Trump announced plans for a $500 billion venture involving OpenAI, SoftBank and Oracle for AI infrastructure. He has also promoted plans from chipmaker NVIDIA to build AI supercomputers in the United States and announced $92 billion toward AI and energy infrastructure from companies like Blackstone, Google and FirstEnergy.
Federal agencies are amassing tools for a “sprawling domestic surveillance system” that deploys AI products across the government to accelerate what critics have labeled a technocratic authoritarian agenda.
The administration is also taking aim at protections for the vast body of intellectual property used to train AI products.
The chief of the Copyright Office was fired days after the office released a lengthy report about AI that raised questions about the use of copyrighted materials by the technology.
“It is surely no coincidence” that Trump fired the copyright chief “after she refused to rubber-stamp Elon Musk’s efforts to mine troves of copyrighted works to train AI models,” Democratic Rep. Joe Morelle said in a statement earlier this year. Morelle and other House Democrats are calling on the inspector general at the Library of Congress to investigate.