Tom’s Guide
Ryan Morrison

Sam Altman hopes to take on Nvidia with new global network of AI chip factories

Sam Altman, CEO of OpenAI.

OpenAI's CEO Sam Altman has embarked on a global campaign to create a network of artificial intelligence chip factories that can take on Nvidia's dominance of the technology.

Large AI labs like OpenAI are spending billions of dollars on Nvidia GPUs to train the next generation of large language models. They are then spending still more to run those models for consumers.

To tackle this problem, some of the bigger companies are looking at ways to shrink their models, improve efficiency and even create new, custom and cheaper chips — but making advanced semiconductors is both expensive and complicated.

For his new chip project, Altman has spoken to several investors as the cost is likely to reach the billions. Potential backers include Abu Dhabi-based G42 and Japan's SoftBank Group, and he is said to be in talks with Taiwanese manufacturer TSMC to make the units.

Why does Sam Altman want to make AI chips?

Nvidia became a trillion-dollar company for the first time last year off the back of its near monopoly on high-end GPUs capable of training the most advanced AI models.

Earlier this month Meta announced it was buying 350,000 Nvidia H100 GPUs to train a future superintelligence and make it open source. Dubbed the first chip designed for generative AI, the H100 GPU comes in at about $30,000 per chip and is in very high demand.
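For a sense of scale, the article's own figures imply a hardware bill in the region of $10 billion for Meta's purchase. A quick back-of-the-envelope calculation, using list price only and ignoring volume discounts, networking and data-center costs:

```python
# Rough cost estimate for Meta's reported H100 purchase,
# based on the figures quoted above (illustrative only).
GPU_COUNT = 350_000        # H100 GPUs Meta announced it was buying
UNIT_PRICE_USD = 30_000    # approximate per-chip price cited in the article

total_usd = GPU_COUNT * UNIT_PRICE_USD
print(f"Estimated hardware spend: ${total_usd / 1e9:.1f} billion")
# Estimated hardware spend: $10.5 billion
```

Numbers like these explain why labs are so keen to find alternatives to buying Nvidia hardware at market rates.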

Google trained its next-generation Gemini model on its own chips, known as Tensor Processing Units (TPUs), which it has been developing for more than a decade.

This would have significantly reduced the overall cost of training such a large model and given Google's developers greater control over how it was trained and optimized.

What is involved in making chips?

(Image credit: Shutterstock)

Making semiconductors is expensive. It takes a lot of natural resources, funding and research to reach a point where any new chip can perform at the highest level.

There are a limited number of fabrication facilities around the world able to construct the type of high-end chip needed by OpenAI, leading to a potential bottleneck in training the next generation of models. 

Altman wants to boost this global capacity with a new network of fabrication facilities dedicated exclusively to AI chips. 

OpenAI is expected to partner with a company like Intel, TSMC or Samsung for its own AI chips, or it could partner with existing investor Microsoft, which announced last year that it was making its own AI chips to run AI services within its Azure cloud platform.

What is the bigger picture?

Amazon has its own Trainium chips that run inside its AWS cloud service for AI models, and Google Cloud uses TPUs. However, despite having their own chips, all of the major cloud companies make heavy use of Nvidia's H100 processors.

Altman is also going to come up against continued improvements from Nvidia, which might draw investors away from OpenAI’s own chip projects.

Nvidia's GH200 Grace Hopper chips were confirmed last year, and Intel has new AI accelerators in its Meteor Lake processors, which could see more AI models run locally rather than at scale in the cloud.
