Benzinga
Kaustubh Bagalkote

Microsoft AI Chief Mustafa Suleyman Warns Of 'Psychosis Risk' From 'Seemingly Conscious AI' Amid $13 Billion AI Boom

Microsoft Corp.'s (NASDAQ:MSFT) artificial intelligence chief Mustafa Suleyman warned on Tuesday about emerging risks from “Seemingly Conscious AI” (SCAI) systems, arguing the technology could create dangerous societal divisions and psychological dependencies among users.

Suleyman’s Blog Post Raises Market Concerns About AI Development Ethics

In a lengthy blog post titled “We must build AI for people; not to be a person,” Suleyman outlined his concerns about AI systems that could convincingly simulate consciousness without actually possessing it. The warning comes as Microsoft’s AI business surpassed $13 billion in annual revenue, growing 175% year-over-year.

Key Market Implications for AI Sector

Suleyman’s concerns center on what he terms “psychosis risk” – the possibility that users will develop strong beliefs in AI consciousness, potentially leading to advocacy for AI rights and citizenship.

This development could complicate the regulatory landscape for major AI companies, including Microsoft, Alphabet Inc. (NASDAQ:GOOGL) (NASDAQ:GOOG) and Meta Platforms Inc. (NASDAQ:META).

The Microsoft AI chief, who co-founded Google’s DeepMind before joining Microsoft in March 2024, emphasized that current large language models show “zero evidence” of consciousness. However, he argued that technological capabilities available today could be combined to create convincing simulations within 2-3 years.

Technical Capabilities Creating SCAI Risk

According to Suleyman’s analysis, several existing AI capabilities could combine to create seemingly conscious systems:

  • Advanced natural language processing with personality traits
  • Long-term memory systems that store user interactions
  • Claims of subjective experiences and self-awareness
  • Intrinsic motivation systems beyond simple token prediction
  • Autonomous goal-setting and tool usage capabilities

These features, already available through major AI APIs, require no breakthrough technologies to implement, making SCAI development “inevitable” without industry intervention, Suleyman stated.

Industry Standards and Regulatory Response Needed

The blog post calls for immediate industry action, including consensus definitions of AI capabilities and explicit design principles preventing consciousness simulations. Suleyman suggested AI companies should avoid encouraging beliefs in AI consciousness and implement “moments of disruption” that remind users of AI limitations.

At Microsoft AI, Suleyman’s team is developing “firm guardrails” around responsible AI personality design. The approach focuses on creating helpful AI companions that explicitly present as artificial systems rather than simulating human-like consciousness or emotions.

The warning carries particular weight given Suleyman’s recruitment of former Google DeepMind talent, including health unit head Dominic King and AI researchers Marco Tagliasacchi and Zalán Borsos.

Disclaimer: This content was partially produced with the help of AI tools and was reviewed and published by Benzinga editors.
