
A fresh warning is raising questions about how aggressively the industry should pursue autonomous superintelligence.
Amid that concern, the industry must rethink how it approaches frontier systems, Microsoft Corp. (NASDAQ:MSFT) artificial intelligence chief Mustafa Suleyman recently told the "Silicon Valley Girl Podcast."
"It would be very hard to contain something like that or align it to our values. And so that should be the anti-goal," he said.
Humanist Vision Takes Priority
Suleyman told host Marina Mogilko that autonomous superintelligence, a system able to self-improve, set its own goals and act independently of humans, "doesn't feel like a positive vision of the future."
He said such a system would be difficult to control and its independence would create risks the industry should avoid.
Suleyman, best known for co-founding Alphabet Inc.'s (NASDAQ:GOOGL, GOOG) Google DeepMind unit, said his team is working on a "humanist superintelligence." He described it as a model designed to operate in service of human interests, offering support rather than replacing human judgment.
He also addressed claims that AI may deserve moral status or consciousness. "These things don't suffer. They don't feel pain," he said. He added that current models are "just simulating high-quality conversation," not experiencing emotion or self-awareness.
Industry Timelines Fuel Debate
Suleyman's comments come as other technology leaders outline their own expectations for advanced systems, against a backdrop of rapid progress in training infrastructure, model performance and global investment.
Earlier this year, OpenAI CEO Sam Altman wrote on his blog that superintelligent tools could advance scientific discovery far beyond human ability and increase abundance and prosperity.
Altman also told the German newspaper Die Welt in September that he would be "very surprised" if superintelligence does not emerge by 2030. His timeline has featured prominently in recent discussions about advanced AI systems.
Google DeepMind co-founder Demis Hassabis offered his own timeline in April when he told Time magazine AGI could be achieved "in the next five to 10 years." He described future systems as being "embedded in everyday lives" and having the ability to understand the world "in nuanced ways."
Skepticism Remains Strong
However, one leading researcher continues to urge caution. Meta Platforms Inc. (NASDAQ:META) chief AI scientist Yann LeCun said at the World Economic Forum in January that current AI systems cannot reason, plan, or understand the physical world, and that human-level AI will require an entirely new paradigm.
He also said during an April talk at the National University of Singapore that "most interesting problems scale extremely badly," adding that, in his view, simply increasing data and compute does not solve the underlying limitations of current systems.
Image: Midjourney