Tom’s Guide
Technology
Amanda Caswell

Why OpenAI wants 100 million GPUs — and how it could supercharge ChatGPT

A phone saying OpenAI with Sam Altman behind it.

OpenAI CEO Sam Altman has a bold vision for the future of AI, one that rivals in big tech would struggle to match: a future powered by 100 million GPUs.

That jaw-dropping number, casually mentioned on X just days after the launch of ChatGPT Agent and ahead of the anticipated GPT-5, is a glimpse into the scale of AI infrastructure that could transform everything from the speed of your chatbot to the stability of the global energy grid.

Altman admitted the 100 million GPU goal might be a stretch, punctuating the comment with "lol." But make no mistake: OpenAI is already on track to surpass 1 million GPUs by the end of 2025, and the implications are enormous.

What does 100 million GPUs even mean?

(Image credit: Shutterstock)

For those unfamiliar, I’ll start by explaining the GPU, or graphics processing unit. This is a specialized chip originally designed to render images and video. But in the world of AI, GPUs have become the powerhouse behind large language models (LLMs) like ChatGPT.

Unlike CPUs (central processing units), which excel at running a handful of tasks very quickly one after another, GPUs are built to perform thousands of simple calculations simultaneously. That parallel processing ability makes them perfect for training and running AI models, which rely on massive amounts of data and mathematical operations.
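To see why that parallelism matters, here's a minimal sketch in Python. It contrasts a one-value-at-a-time loop (the CPU style described above) with a single vectorized operation over a whole array (a stand-in for the GPU style). This is an illustration only; real GPU workloads run on frameworks like CUDA or PyTorch, not NumPy.

```python
import time
import numpy as np

# One million values to transform with the same simple math.
n = 1_000_000
x = np.random.rand(n)

# CPU-style: handle one element per iteration, sequentially.
start = time.perf_counter()
out_loop = [2.0 * v + 1.0 for v in x]
loop_time = time.perf_counter() - start

# GPU-style: apply the same operation to all elements at once.
start = time.perf_counter()
out_vec = 2.0 * x + 1.0
vec_time = time.perf_counter() - start

print(f"loop: {loop_time:.4f}s  vectorized: {vec_time:.4f}s")
```

The vectorized version typically finishes tens of times faster, and an actual GPU widens that gap dramatically, because its thousands of cores handle the elements in parallel rather than in sequence.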

So, when OpenAI says it's using over a million GPUs, it's essentially saying it has a vast digital brain made up of high-performance processors, working together to generate text, analyze images, simulate voices and much more.

To put it into perspective, 1 million GPUs already require enough energy to power a small city. Scaling that to 100 million could demand more than 75 gigawatts of power, around three-quarters of the entire UK power grid. It would also cost an estimated $3 trillion in hardware alone, not counting maintenance, cooling and data center expansion.
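The figures above can be sanity-checked with back-of-envelope arithmetic. The per-GPU numbers below are assumptions chosen to be consistent with the totals quoted in the article (roughly 750 watts and $30,000 per data-center GPU), not official specifications:

```python
# Rough math behind the article's 75 GW / $3 trillion figures.
GPU_COUNT = 100_000_000      # Altman's hypothetical 100 million GPUs
WATTS_PER_GPU = 750          # assumed draw per data-center GPU
COST_PER_GPU = 30_000        # assumed hardware cost in dollars

total_gigawatts = GPU_COUNT * WATTS_PER_GPU / 1e9
total_cost_trillions = GPU_COUNT * COST_PER_GPU / 1e12

print(f"{total_gigawatts:.0f} GW")              # 75 GW
print(f"${total_cost_trillions:.0f} trillion")  # $3 trillion
```

Even if the per-GPU assumptions shift by a factor of two in either direction, the conclusion holds: the power draw lands in the tens of gigawatts and the hardware bill in the trillions.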

This level of infrastructure would dwarf the current capacity of tech giants like Google, Amazon and Microsoft, and would likely reshape chip supply chains and energy markets in the process.

Why does it matter to you?

While a trillion-dollar silicon empire might sound like a distant industry concern, it has very real consequences for consumers. OpenAI’s aggressive scaling could unlock:

  • Faster response times in ChatGPT and future assistants
  • More powerful AI agents that can complete complex, multi-step tasks
  • Smarter voice assistants with richer, real-time conversations
  • The ability to run larger models with deeper reasoning, creativity, and memory

In short, the more GPUs OpenAI adds, the more capable ChatGPT (and similar tools) can become.

But there's a tradeoff: all this compute comes at a cost. Subscription prices could rise, feature rollouts may stall if GPU supply can't keep pace, and environmental concerns around energy use and emissions will only grow louder.

The race for silicon dominance

(Image credit: Shutterstock)

Altman’s tweets arrive amid growing competition between OpenAI and rivals like Google DeepMind, Meta and Anthropic.

All are vying for dominance in AI model performance, and all rely heavily on access to high-performance GPUs, mostly from Nvidia.

OpenAI is reportedly exploring alternatives, including Google’s TPUs, Oracle’s cloud and potentially even custom chips.

More than speed, this growth is about independence, control and the ability to scale models that could one day rival human reasoning.

Looking ahead at what's next

(Image credit: ANDREW CABALLERO-REYNOLDS/AFP via Getty Images)

Whether OpenAI actually hits 100 million GPUs or not, it’s clear the AI arms race is accelerating.

For everyday users, that means smarter AI tools are on the horizon, but so are bigger questions about power, privacy, cost and sustainability.

So the next time ChatGPT completes a task in seconds or holds a surprisingly humanlike conversation, remember: somewhere behind the scenes, thousands (maybe millions) of GPUs are firing up to make that possible, and Sam Altman is already thinking about multiplying that by 100.
