
In a wide-ranging discussion about artificial intelligence (AI), Nvidia (NVDA) CEO Jensen Huang offered both sharp insights into the scale of AI infrastructure and a nod of respect for the engineers driving the AI revolution. Speaking on the BG2 Podcast with Bill Gurley and Brad Gerstner, Huang remarked, “I would not be surprised if he [Elon Musk] gets to a gigawatt before anybody else does.”
The statement, referring to the energy scale required to power next-generation AI infrastructure, was coupled with Huang’s broader observation that “These AI supercomputers are complicated things… This is unquestionably the most complex systems problem humanity has ever endeavored.”
Context: The Scale and Complexity of AI Infrastructure
Huang’s comments come from a place of deep technical and operational understanding. Nvidia, the company he co-founded in 1993, sits at the center of the modern AI ecosystem. Its graphics processing units (GPUs) are the critical hardware powering everything from generative AI systems like ChatGPT to self-driving vehicles and supercomputing clusters. When Huang describes AI supercomputers as complex, it’s not theoretical; it’s drawn from firsthand experience building the tools and partnerships that make AI possible.
In this context, a “gigawatt” refers to a data center, or network of data centers, that draws roughly one gigawatt of electrical power, on the order of what a large city consumes. Huang’s remark about Musk reflects recognition of the Tesla (TSLA) CEO and xAI founder’s ability to move quickly and build at scale, particularly when merging physical infrastructure, manufacturing, and computing, a rare combination of strengths in today’s AI race.
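For rough intuition, here is a minimal back-of-envelope sketch of what that scale implies, assuming roughly 1 kW of facility power per AI accelerator (including cooling and networking overhead) and roughly 1.2 kW of average draw per US household; both figures are illustrative assumptions, not numbers from Huang’s remarks.

```python
# Back-of-envelope scale of a 1-gigawatt AI buildout.
# All constants are illustrative assumptions, not sourced figures.

SITE_POWER_W = 1e9           # 1 gigawatt of total facility power
W_PER_ACCELERATOR = 1_000    # assumed ~1 kW per GPU, incl. cooling and networking
W_PER_HOUSEHOLD = 1_200      # assumed ~1.2 kW average US household draw

accelerators = SITE_POWER_W / W_PER_ACCELERATOR
households = SITE_POWER_W / W_PER_HOUSEHOLD

print(f"~{accelerators:,.0f} accelerators")                    # ~1,000,000
print(f"~{households:,.0f} households' worth of electricity")  # ~833,333
```

On those assumptions, a single gigawatt-class campus works out to on the order of a million accelerators, consuming about as much electricity as several hundred thousand homes running continuously.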
Coming from Huang, whose own company defines the backbone of AI computation, that acknowledgment represents high praise.
Why Huang’s Words Carry Weight
Huang’s authority in the AI industry is unparalleled. Over the past decade, his vision for accelerated computing transformed Nvidia from a niche gaming chipmaker into a trillion-dollar company central to global AI development. Nvidia’s chips are integral to nearly every major AI project, from research labs to corporate enterprises. His consistent message has been that AI progress depends not just on software or algorithms, but on the vast, interdependent physical systems that underpin them: semiconductors, data centers, energy, and logistics.
That framing helps contextualize the CEO’s description of AI development as “the most complex systems problem humanity has ever endeavored.” The challenge is not only about training models but also about orchestrating hardware supply chains, securing energy resources, and scaling data infrastructure, all simultaneously. It’s an engineering and logistical feat comparable to building the internet or launching a space program.
Broader Economic Implications of AI’s Physicality
Huang’s comments also underscore a defining theme in the current economic landscape: AI’s physicality. Despite being a digital technology, artificial intelligence depends on enormous amounts of electricity and specialized hardware. As companies and nations race to build AI supercomputers, access to power, land, and capital has become a new form of competitive advantage.
By recognizing Musk’s potential to reach a gigawatt first, Huang effectively highlighted the merging of industrial and digital frontiers: the point where energy, computation, and innovation converge. It’s a statement that resonates beyond admiration; it’s a reminder that the future of AI will be determined as much by engineering execution as by algorithmic brilliance.