
A global race to dominate artificial intelligence (AI) is driving an unprecedented semiconductor spending spree, pitting America’s strategy of massive capital investment against China’s urgent push for self-sufficiency in the face of U.S. sanctions. The chase for AI supremacy has turned chipmakers in both countries into red-hot investment targets.
Over the past month, U.S.-based OpenAI, the world’s largest AI startup, has signed procurement deals with three semiconductor giants — Broadcom Inc., Advanced Micro Devices Inc. (AMD), and Nvidia Corp. The orders carry a staggering combined power requirement of 26 gigawatts (GW), enough electricity to power nearly three New York Cities at peak demand. It is a testament to the brute-force, capital-intensive strategy the U.S. is deploying to win the AI race.
While Washington is leveraging deep capital markets to fund its technical dominance, China — increasingly cut off from top-tier American technology — is taking a pragmatic path of domestic substitution. It is building a self-reliant ecosystem and rolling out AI applications at scale. A new generation of homegrown chipmakers and AI firms is emerging, reshaping global supply chains in the process.
The central question now facing the industry is which path will lead to the shores of artificial general intelligence, or AGI, first — a race that is likely to define the technological and geopolitical landscape for decades to come.
America’s all-in bet
Wall Street has embraced the American strategy enthusiastically. Broadcom shares jumped nearly 10% on the day its deal was announced, AMD soared 37%, and Nvidia gained 3.93%. Nvidia — the dominant AI chip supplier over the past two years — has seen its market value climb to $4.42 trillion, making it the world’s most valuable publicly traded company. AMD is now valued at around $378 billion, while Broadcom’s market capitalization has risen 50% this year to $1.63 trillion.
Driving OpenAI’s chip appetite is its Stargate initiative — a $500 billion infrastructure project. In January, OpenAI announced plans to partner with Oracle Corp. and SoftBank Group Corp. to build data centers totaling 10 GW of capacity across the U.S. over the next five years. On Oct. 1, it struck a deal with South Korea’s Samsung Electronics Co. and SK Hynix Inc. to expand high-bandwidth dynamic random-access memory (DRAM) chip production to 900,000 wafers per month to support the initiative.
However, the rush has raised concerns about potential “circular deals,” in which suppliers such as Nvidia and AMD indirectly fund AI companies to drive demand for their own products. These arrangements, reminiscent of the dot-com era, could inflate both demand and investor expectations. Research firms warn that this structure may be building an AI infrastructure bubble.
The current U.S. bull market, sparked by the October 2022 release of ChatGPT, has driven the S&P 500 up nearly 90%, propelled by tech titans such as Apple, Microsoft, Alphabet, Amazon and Nvidia — all of which have dramatically increased AI-related capital spending.
China’s AI ambitions
Across the Pacific, China’s AI drive is rooted in the pursuit of technological self-sufficiency to close the widening supply gap. A July report from Bernstein estimated China’s 2025 AI chip demand at $39.5 billion. At the time, it projected that a resumption of Nvidia’s H20 chip sales to the Chinese mainland would narrow the gap to just $2.5 billion. Those sales never materialized, however, due to regulatory obstacles on both sides.
As a result, China’s supply gap for AI chips is now expected to balloon to over $10 billion in 2025, with domestic substitution becoming the only viable path forward.
“At the moment, we’re 100% out of China,” Jensen Huang, Nvidia’s CEO, said at a recent Citadel Securities event, noting that U.S. export controls had slashed the company’s Chinese market share from 95% to zero.
As Washington tightens export restrictions, Beijing is intensifying efforts to bridge the technology gap, turning domestic substitution from a policy objective into a market necessity. The result has been an investment boom.
Chinese AI chipmakers have become investor darlings. Companies viewed as potential mass producers have been dubbed “China’s Nvidia” or “China’s AMD,” pushing their valuations skyward and accelerating IPO plans.
Cambricon Technologies Corp. — the only publicly listed AI chipmaker on China’s A-share market — soared after securing a major order from ByteDance Ltd., briefly becoming the country’s most expensive stock. The company’s revenue jumped more than 43 times to 2.88 billion yuan ($404 million) in the first half, marking its first-ever quarterly profit.
“The market was shocked by Cambricon, and now everyone is looking for the next star like it,” one investor told Caixin.
One frontrunner is Moore Threads Technology Co. Ltd., a five-year-old GPU startup fast-tracking its path to go public on Shanghai’s STAR Market with plans to raise 8 billion yuan. Although the IPO is still in progress, speculation has pushed up the shares of several firms with minor links to Moore Threads — such as CNCR Group, which holds a mere 0.02% stake but gained more than 56% in three days.
“China’s AI chip industry is accelerating under competitive pressure from the U.S.,” said a person familiar with Moore Threads. “But the company is still in its growth stage. Investors should be patient and avoid excessive hype.”
Established tech giants are also intensifying in-house chip development. At a September conference, Huawei Technologies Co. Ltd. unveiled a three-year AI chip strategy to double computing power each year to meet China’s national AI needs, a move widely seen as a direct challenge to Nvidia.
Baidu Inc. has advanced with its Kunlunxin chip unit, deploying a 30,000-card cluster. Alibaba Group Holding Ltd.’s chip unit, T-Head, has developed a processor reportedly matching Nvidia's H20. These gains have reignited investor enthusiasm, sending Baidu and Alibaba’s Hong Kong-listed shares up 50% and 54%, respectively, since September.
“Critics say the rally lacks earnings support. But markets haven’t seen such a clear growth story in years,” a tech executive told Caixin. “Whether valuations are justified depends on actual demand.”
A recent report by Bain & Co. estimates that by 2030, global capital expenditure on AI data centers will reach $500 billion a year, requiring 200 GW of added power capacity — half of it in the U.S. But the AI sector needs to generate $2 trillion in annual revenue to justify the outlay. For now, there’s an $800 billion gap.
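For readers who want to see how those numbers fit together, here is a minimal back-of-the-envelope sketch. The $1.2 trillion projected-revenue figure is an assumption backed out of the stated $2 trillion requirement and $800 billion gap, not a number quoted directly from the Bain report.

```python
# Back-of-the-envelope check of the Bain & Co. figures cited above.
# ASSUMPTION: projected 2030 AI revenue of ~$1.2 trillion is inferred from the
# stated $2 trillion requirement minus the $800 billion shortfall; it is not a
# figure taken directly from the report.

annual_capex_2030 = 500e9       # $500 billion/year in AI data center capex
required_revenue = 2000e9       # $2 trillion/year needed to justify the outlay
projected_revenue = 1200e9      # inferred assumption (see note above)

shortfall = required_revenue - projected_revenue
print(f"Implied revenue shortfall: ${shortfall / 1e9:.0f} billion")  # ~$800 billion

# The required revenue works out to roughly 4x the annual capital outlay.
print(f"Required revenue vs. annual capex: {required_revenue / annual_capex_2030:.0f}x")
```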
“China’s AI chip sector still faces hurdles in demand and foundry capacity,” a cloud executive told Caixin. “The market needs real applications to scale. It’s the application demand that decides everything. The American style of frantically expanding computing power is not the choice for Chinese companies.”
Capitalizing on momentum
Since August, four promising AI chip designers — Enflame Technology, Biren Technology, Moore Threads and MetaX Integrated Circuits Co. Ltd. — have applied to list on Shanghai’s tech-focused STAR Market. The move comes after regulators eased rules for unprofitable hardware companies in late June, opening a critical funding window.
The race for tech self-sufficiency has turned policy urgency into a sprint for funding and innovation. Success for these startups could determine whether China can build viable alternatives to U.S. hardware underpinning the global AI boom.
Demand is explosive. Guotai Haitong Securities forecasts that the proportion of foreign-made chips in China’s AI servers will plunge from 63% to 42% by 2025. Tech giants such as Alibaba, Tencent, and Baidu, once heavily reliant on Nvidia, are now aggressively shifting to domestic suppliers. This has created a gold rush-style frenzy, with investment institutions overwhelmingly rating Chinese AI chip companies “overweight.”
These startups are led by seasoned semiconductor veterans, many with deep experience at the very American firms they now aim to replace.
Enflame, founded in 2018, is led by CEO Zhao Lidong, a former executive at Tsinghua Unigroup and a seven-year veteran of AMD. Biren, founded in 2019, was started by Zhang Wen, a former president at AI giant SenseTime, with a founding team consisting of veterans from AMD, Samsung, Huawei and Alibaba.
Moore Threads and MetaX, both established in 2020 to develop general-purpose GPUs like Nvidia’s, are staffed with alumni from Nvidia and AMD. Moore Threads’ CEO, Zhang Jianzhong, previously served as general manager for Nvidia in China.
Fueled by billions in venture capital, these companies have moved quickly from concept to production, with their latest chips approaching the performance of Nvidia’s China-specific H20 chip. Tencent, an Enflame backer, is already deploying its chips at scale. Moore Threads claims its newest GPU cluster rivals the efficiency of comparable foreign systems.
Despite impressive progress, profitability remains a distant goal. The startups are burning through cash, pouring billions into research and development. From 2022 to 2024, Moore Threads posted a cumulative loss of 4.6 billion yuan on 3.8 billion yuan in R&D spending, while MetaX lost 2.72 billion yuan over the same period.
The primary bottleneck remains manufacturing. With foreign foundries off-limits, Chinese AI chipmakers must rely on domestic foundries like Semiconductor Manufacturing International Corp., whose lines are already stretched by demand for smartphone chips. Yield rates for more complex AI chips remain below 20%, slowing R&D.
MetaX’s prospectus notes that its next-generation chip, the C600, built using a domestic supply chain, won’t reach mass production until the first quarter of 2026.
An alternative strategy
While the U.S. focuses on ever-more-powerful individual chips from firms like Nvidia, China is taking a different approach — chasing comparable results through large-scale system design and software optimization.
With advanced foundry access restricted, Chinese firms are leaning on cluster-based architectures that interconnect thousands of lower-powered domestic chips with high-speed links to achieve competitive system-level output, albeit at a higher energy cost.
Huawei leads this strategy. Under U.S. sanctions since 2020, it unveiled a roadmap on September 18 to release four new generations of its Ascend AI chips within three years, with each version nearly doubling computing power. To offset chip-level constraints, Huawei developed “super nodes” — systems integrating thousands of Ascend chips into unified platforms.
Its Atlas 900 super node, launched in March, connects 384 Ascend 910C chips for a peak performance of 300 PFLOPS (petaflops, or quadrillions of floating-point operations per second), making it the world’s most powerful known AI compute node. Future versions will scale dramatically: the Atlas 950 (2026) will link 8,192 chips, and the Atlas 960 (2027) will connect over 15,000. Huawei’s modular design allows for expansion to hundreds of thousands of chips to train ultra-large AI models.
While each Ascend 910C chip offers just one-third the performance of Nvidia’s latest GPUs, research firm SemiAnalysis found that Huawei’s clustered approach can deliver nearly twice the system-level performance — albeit at 2.5 times the energy cost.
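A rough calculation using only the ratios cited above shows what that trade-off implies; the figures below are illustrative assumptions drawn from those ratios, not SemiAnalysis data points.

```python
# Illustrative arithmetic based on the ratios cited above (not SemiAnalysis data).
# ASSUMPTIONS: one Ascend 910C ~ 1/3 the performance of a top Nvidia GPU; the
# clustered system reaches ~2x the system-level performance at ~2.5x the energy.

per_chip_perf = 1 / 3    # Ascend 910C relative to a single top-end Nvidia GPU
system_perf = 2.0        # Huawei cluster relative to a comparable Nvidia system
system_energy = 2.5      # Huawei cluster's energy cost relative to Nvidia's

# Delivering 2x the performance with chips 1/3 as fast takes roughly 6x the chips.
implied_chip_ratio = system_perf / per_chip_perf
print(f"Implied chip-count ratio: ~{implied_chip_ratio:.0f}x")  # ~6x

# 2x the work for 2.5x the energy is ~80% of Nvidia's performance per watt.
perf_per_watt = system_perf / system_energy
print(f"Relative performance per watt: ~{perf_per_watt:.0%}")  # ~80%
```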
The remaining challenge lies in the software ecosystem. Huawei’s Ascend series relies on its proprietary CANN architecture, rather than Nvidia’s CUDA ecosystem. Huawei executive Xu Zhijun likened it to the iOS-Android divide, arguing that if Nvidia chips become unavailable, developers will adapt. In August, Huawei open-sourced CANN to encourage broader adoption.
Other cloud giants are following suit. Baidu’s Kunlun chip, optimized for its own AI framework, is now deployed at scale, with the Kunlun P800 cluster reaching 30,000 cards. Crucially, Kunlun is CUDA-compatible, lowering barriers for developers. Similarly, Alibaba’s T-Head has developed a chip reportedly matching Nvidia’s H20 and compatible with CUDA.
Even as China rapidly builds capacity, a new problem is emerging: demand lag. After an initial wave of investment, the utilization rate of the Chinese mainland’s intelligent computing centers is less than 30%, according to a cloud vendor source, as they await a boom in downstream applications.
This will drive a market consolidation, said Du Hai, a senior manager at Baidu’s cloud division. The dozen or so domestic AI chip firms active today will probably shrink to three or four distinct camps. “The winners will be the ones whose chips can support the broadest range of models — or enable a killer app that becomes the de facto standard.”
Contact reporter Han Wei (weihan@caixin.com)

