Tom’s Hardware
Luke James

AMD swoops in to help as John Carmack slams Nvidia's $4,000 DGX Spark, says it doesn't hit performance claims, overheats, and maxes out at 100W power draw — developer forums inundated with crashing and shutdown reports

A DGX Spark developer workstation.

Nvidia’s DGX Spark, the company’s new $4,000 mini PC platform powered by the Grace Blackwell GB10 superchip, is under fire after John Carmack, the former CTO of Oculus VR, began raising questions about its real-world performance and power draw. His comments were enough to draw offers of help from Framework and even AMD, in the form of a Strix Halo-powered alternative.

In a post on X, Carmack said that the DGX Spark appears to max out at 100 watts of power draw, which is less than half of its 240-watt rating. While Nvidia advertises one petaflop of sparse FP4 compute, Carmack estimates the dense equivalent should be closer to 125 teraflops, and says he’s getting far less than that. He also flagged “spontaneous rebooting on a long run,” asking if the system had been “de-rated before launch.”

Similarly, independent testing by ServeTheHome found that a retail Spark unit pulled just under 200 watts under combined CPU+GPU load, and couldn’t hit the full 240W ceiling in any workload they ran.

Drawn in by the claims, Framework dropped into Carmack's thread to offer an AMD Strix Halo-powered box for him to try instead. AMD's Anush Elangovan, the company's vice president of AI software and the public face of its CUDA-challenging ROCm stack, joined the pile-on as well, adding "Will be on standby for anything to support your exploration on Strix Halo."

Carmack’s post has kicked off a broader re-examination of what Nvidia actually promised. The petaflop figure appears across Nvidia’s spec and marketing pages as FP4 compute with sparsity, which implies 2:4 structured sparsity, a technique that can double effective throughput but only applies to certain matrix operations. Evaluated in denser formats like FP8 or BF16, the theoretical ceiling drops sharply. Nvidia’s specs also list 273GB/s of memory bandwidth and 128GB of unified LPDDR5X shared between the 20-core Arm-based Grace CPU and the Blackwell GPU, making Spark a capacity-focused system with nowhere near the bandwidth of an HBM-equipped GPU.
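
To put the 125-teraflop figure in context, here is a rough back-of-envelope sketch of how the advertised number shrinks once the sparsity factor and denser precisions are stripped out. The factor-of-two steps below are the conventional assumptions for 2:4 sparsity and per-precision scaling on modern tensor cores; they are illustrative arithmetic, not vendor-confirmed figures for GB10.

```python
# Back-of-envelope: how 1 PFLOP of "FP4 with sparsity" shrinks in denser formats.
# Assumes the usual 2x factor for 2:4 structured sparsity and a further 2x per
# precision step (FP4 -> FP8 -> BF16). Illustrative only, not measured throughput.

advertised_sparse_fp4 = 1000.0  # TFLOPS, the headline "one petaflop" number

dense_fp4 = advertised_sparse_fp4 / 2   # drop the 2:4 sparsity factor -> 500 TFLOPS
dense_fp8 = dense_fp4 / 2               # one precision step denser    -> 250 TFLOPS
dense_bf16 = dense_fp8 / 2              # another step                 -> 125 TFLOPS

print(f"Dense FP4:  {dense_fp4:.0f} TFLOPS")
print(f"Dense FP8:  {dense_fp8:.0f} TFLOPS")
print(f"Dense BF16: {dense_bf16:.0f} TFLOPS")
```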

Spark is meant to host large models in-memory rather than race through tokens per second. Nvidia’s marketing even suggests it can run models of up to 200 billion parameters locally, a feat few discrete setups can manage, thanks to its 128GB of unified memory and Blackwell-class FP4 support. But the growing number of users citing reboot issues and apparent power ceilings suggests Nvidia’s tight thermal and power envelope within a 150mm chassis may be starting to bite, especially when most users would have been more than happy for the Spark to ship in a larger footprint if that meant better performance and sufficient cooling.
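
To illustrate why memory bandwidth, rather than peak compute, dominates the local-inference experience, here is a rough bandwidth-bound estimate of decode speed for a model held entirely in the Spark's unified memory. The model size and quantization below are assumptions chosen to match Nvidia's pitch, not benchmark results.

```python
# Rough, bandwidth-bound estimate of local decode speed (illustrative assumptions only).
# During autoregressive decoding, each generated token must stream the model's weights
# from memory, so tokens/sec is roughly capped at memory bandwidth / weight size.

bandwidth_gb_s = 273          # GB/s, Nvidia's listed LPDDR5X bandwidth for DGX Spark
params = 200e9                # assume a 200-billion-parameter model
bytes_per_param = 0.5         # assume 4-bit (FP4) quantized weights

weights_gb = params * bytes_per_param / 1e9          # ~100 GB, fits in 128GB unified memory
ceiling_tokens_per_s = bandwidth_gb_s / weights_gb   # ~2.7 tokens/s upper bound

print(f"Bandwidth-bound decode ceiling: ~{ceiling_tokens_per_s:.1f} tokens/s")
```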

It’s not yet clear what is behind the suboptimal performance, whether a firmware-level power cap or thermal throttling, and Nvidia hasn’t commented publicly on Carmack’s post or the user-reported instability. Meanwhile, several threads on Nvidia’s developer forums now include reports of GPU crashes and unexpected shutdowns under sustained load.

It’s still very early days for DGX Spark, but with expectations for GB10 sky-high among users, Nvidia will need to explain why its flagship developer kit might be leaving so much performance potential on the table.
