Tom’s Hardware
Luke James

Huawei reveals long-range Ascend chip roadmap — three-year plan includes ambitious provision for in-house HBM with up to 1.6 TB/s bandwidth

Huawei’s AI silicon roadmap is no longer a state secret. Speaking at the Huawei Connect conference on September 18, rotating chairman Xu Zhijun outlined the company’s first official long-range Ascend chip strategy, with four new parts scheduled across the next three years: Ascend 950PR and 950DT in early 2026, followed by Ascend 960 and 970 in 2027 and 2028, respectively.

Huawei says its upcoming 950PR chip will ship in Q1 next year with in-house HBM designed to compete with the likes of SK hynix and Samsung. That's a bold claim, considering HBM supply, along with packaging and bandwidth efficiency, has arguably become the single biggest constraint on AI accelerator performance at scale.

According to Huawei, the 950PR will feature 128GB of in-house HBM delivering up to 1.6 TB/s of bandwidth, while the 950DT increases those figures to 144GB and 4 TB/s. However, Huawei hasn't disclosed how its in-house HBM is manufactured, what packaging is used, or which foundry is producing the chip itself.

Under U.S. sanctions, Huawei is barred from accessing TSMC's advanced nodes and CoWoS packaging lines, both of which Nvidia uses to stack HBM around its top-end Hopper and Blackwell GPUs. If Huawei is relying on SMIC or other domestic fabs, yields and bandwidth may prove to be hugely limiting factors.

That hasn’t stopped the company from talking scale, though. Alongside its chip roadmap, Huawei teased new so-called “supernodes” that will house thousands of Ascend chips. The Atlas 950 and 960 systems are positioned as next-gen AI compute clusters that, on paper, rival Nvidia’s GB200 NVL72 configurations in deployment scale, with up to 15,488 Ascend accelerators in a single system. Huawei says Atlas 950 will debut in Q4 this year.

But big numbers don’t necessarily translate into performance. Nvidia’s big advantage isn’t just its silicon but NVLink and a tightly optimized software stack that keeps its clusters saturated across large model workloads. To challenge that, Huawei is going to need more than a boastful chip roadmap — a roadmap that has landed conveniently alongside demands from the Chinese government to produce more domestic silicon and a ban on procuring Nvidia parts.

Huawei will need a proven end-to-end platform that can match Nvidia in training, efficiency, and model throughput for its roadmap to succeed. Right now, it doesn’t, and plans alone don’t break bottlenecks.
