
Over the past few months, Microsoft has been doubling down on efforts to become more independent in the AI landscape. The tech giant is OpenAI's largest backer, with a $13 billion investment, and has heavily integrated the ChatGPT maker's technology across its products and services.
Microsoft CEO Satya Nadella has indicated that the company is moving beyond Bill Gates' original software factory vision, diversifying its portfolio into intelligence, integration, and AI.
At the beginning of the year, OpenAI unveiled its $500 billion Stargate project, designed to fund the construction of data centers across the United States for its advanced AI work. Consequently, Microsoft lost its exclusive cloud provider status, though it retains the right of first refusal.
Salesforce CEO Marc Benioff predicted that Microsoft won't use OpenAI's technology in the future. More recently, Microsoft AI CEO Mustafa Suleyman confirmed that the company is developing its own off-frontier AI models, while admitting they'll trail OpenAI's by three to six months. "Our strategy is to really play a very tight second," Suleyman added.
Building on this premise, Microsoft Chief Technology Officer Kevin Scott recently revealed that the company wants to use its own AI chips across its data centers, potentially reducing its reliance on AMD and NVIDIA (via CNBC).
According to Scott:
“We’re not religious about what the chips are. And ... that has meant the best price performance solution has been Nvidia for years and years now. We will literally entertain anything in order to ensure that we’ve got enough capacity to meet this demand.”
Every major tech corporation is racing to capitalize on the AI boom, creating intense demand for the chips that power AI development, which explains why NVIDIA's market valuation has climbed as high as $4 trillion.
Scott revealed that Microsoft intends to predominantly use its own chips across its data centers. Perhaps more interestingly, he noted that the tech giant is already deploying its own chips today.
Microsoft AI CEO Mustafa Suleyman recently highlighted the company's plan to build and develop its own AI chip cluster to unlock self-sufficiency in AI.
“We should have the capacity to build world-class frontier models in-house of all sizes, but we should be very pragmatic and use other models where we need to. It's critical that a company of our size, with the diversity of businesses that we have, that we are, you know, able to be self-sufficient in AI, if we choose to.”
Microsoft AI CEO, Mustafa Suleyman
For context, the software giant launched the Azure Maia AI Accelerator for AI workloads in 2023, and it is now reportedly developing next-gen chips in-house. This gives the company the freedom to design the entire system that goes into the data center around a specific need.
According to Scott:
“It’s about the entire system design. It’s the networks and the cooling and you want to be able to have the freedom to make the decisions that you need to make in order to really optimize your compute to the workload.”
Elsewhere, Microsoft's multibillion-dollar partnership with OpenAI has seemingly been fraying. The tech giant reportedly backed out of two mega data center deals because it did not want to provide additional training support for ChatGPT. However, Sam Altman has indicated that OpenAI is no longer compute-constrained.

Follow Windows Central on Google News to keep our latest news, insights, and features at the top of your feeds!