Tom’s Hardware
Luke James

Nvidia's exposure to Asian supply chains for components hits 90% of its production costs — marked increase from 65% could intensify as physical AI adds even more exposure

Nvidia Jetson.

Asian suppliers now represent roughly 90% of Nvidia's production costs, up from about 65% a year earlier, according to data compiled by Bloomberg. That figure captures Nvidia's established data center supply chain: TSMC fabrication, SK hynix and Samsung HBM, and server assembly from Foxconn and Quanta. But the company's physical AI hardware is now adding entire new product categories that route through those same suppliers.

Nvidia's Jetson Thor robotics platform, released last August, is built on the Blackwell GPU architecture and fabricated on TSMC's 3nm process. The top-end T5000 module delivers 2,070 FP4 TFLOPS with 128 GB of LPDDR5X memory, while a lower-cost T4000 variant introduced at CES 2026 offers 1,200 FP4 TFLOPS with 64 GB at $1,999 per unit in volume. Both use Arm Neoverse-V3AE CPU cores and LPDDR5X sourced from Samsung or SK hynix.

These modules compete for TSMC 3nm wafer starts alongside Blackwell data center GPUs. Partners including Boston Dynamics and Amazon Robotics are building on the platform, and LG has confirmed that it’s “exploring a strategic collaboration in physical AI” with Nvidia, including the robotics ecosystem, Bloomberg reported. Nvidia's DRIVE AGX Thor automotive SoC is another Blackwell-based product line competing for the same 3nm wafer capacity.

None of these physical AI products requires TSMC's CoWoS advanced packaging, which remains the primary bottleneck for data center GPU production, but they do consume 3nm wafer capacity and Asian-sourced LPDDR5X, both of which are already constrained.

The same memory market dynamics feeding Nvidia's newer physical AI products are simultaneously killing off its older ones. At the end of April, it was reported that Nvidia had accelerated end-of-life timelines for its Jetson TX2 and Xavier modules because LPDDR4 supply has become too constrained to maintain production. Samsung has moved away from LPDDR4 manufacturing, and AI-driven demand has redirected memory capacity toward higher-margin products.

That forces Jetson customers onto Orin or Thor modules, which use LPDDR5X from the same Asian memory suppliers whose capacity is already stretched by HBM and data center DRAM demand. TSMC's CoWoS advanced packaging for data center GPUs is growing at an 80% compound annual growth rate, TSMC's head of North American packaging told CNBC last month, and chips fabricated at TSMC's Arizona Fab 21 still ship back to Taiwan for packaging.
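To put that packaging figure in perspective, an 80% compound annual growth rate means capacity multiplies by 1.8 each year, so it more than quintuples in three years. A minimal sketch of the arithmetic (the baseline of 100 units is an arbitrary illustration, not a figure from the article):

```python
def project_capacity(base: float, cagr: float, years: int) -> float:
    """Project capacity after `years` of compounding at `cagr` (0.80 = 80%)."""
    return base * (1 + cagr) ** years

# Hypothetical baseline of 100 units, purely for illustration.
base = 100.0
for year in range(1, 4):
    print(f"Year {year}: {project_capacity(base, 0.80, year):.0f} units")
# Year 1: 180, Year 2: 324, Year 3: 583
```

The point of the exercise: at an 80% CAGR, even large absolute additions to packaging capacity are absorbed quickly, which is why the Arizona facilities' ramp timing matters.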

Nvidia committed to $500 billion in U.S. server manufacturing last year with Foxconn and Wistron, and Amkor and SPIL are building advanced packaging facilities in Arizona. But those operations are not yet at production scale, and physical AI product lines are widening the range of components sourced from Asia faster than domestic capacity can absorb them.
