Tom’s Hardware
Luke James

Alibaba and ByteDance allegedly train Qwen and Doubao LLMs using Nvidia chips, despite export controls — Southeast Asian data center leases skirt around U.S. chip restrictions

Nvidia data center.

Chinese technology giants, including Alibaba and ByteDance, are increasingly training their most advanced artificial intelligence models in Southeast Asia, taking advantage of overseas data centers equipped with high-end Nvidia GPUs, according to new reporting by the Financial Times. The shift reflects how leading AI labs in China are navigating U.S. export controls by leasing compute from non-Chinese operators based in Singapore and Malaysia.

Over the past year, Alibaba’s Qwen and ByteDance’s Doubao large language models have risen into the top tier of global LLM benchmarks. Both have allegedly been trained, at least in part, on Nvidia accelerators housed in offshore clusters.

Singapore-based operators told the FT that demand from Chinese firms has grown since April, when the Trump administration tightened its embargo to cover Nvidia’s H20 and other export-compliant chips. The so-called “diffusion rule,” which would have blocked overseas leasing, was rolled back shortly afterward under revised policy.

U.S. export controls currently prohibit Nvidia from selling its most advanced GPUs directly to China, and China has banned foreign AI chips from its state-funded data centers. But leasing compute from foreign-owned data centers abroad — even if the end user is Chinese — remains legal under the current rules.

A May 2025 notice withdrew proposed Biden-era restrictions known as the "AI diffusion rule" that would have treated such arrangements as indirect violations of the export ban. In effect, that allows companies to use H100- and A100-class accelerators outside China, provided the hardware is owned and managed by a compliant third party.

ByteDance and Alibaba are not the only firms pursuing this route, but they represent the most visible examples. Their arrangements allow them to train new models with performance targets on par with those of Western AI labs. The resulting weights can then be run inside China for inference on domestically sourced silicon. Chinese companies are increasingly using chips from Huawei and other local suppliers to handle deployment and user interactions, which now make up a growing share of AI workloads.

One exception is DeepSeek, a Hangzhou-based firm that stockpiled Nvidia parts ahead of the U.S. ban and continues to train inside China. The company, which is also thought to be using shell companies to evade restrictions, has partnered with Huawei to optimize future training runs on local silicon.

While training clusters are migrating abroad, private data still cannot leave China. That constraint means fine-tuning or retraining based on Chinese user data must take place domestically, even when the base model was developed offshore.

