DeepSeek has released its long-awaited new artificial intelligence model, saying it offers world-beating capabilities.
DeepSeek V4, a preview version of which is now available to use, is reportedly better optimised for the chips that China is producing domestically.
The company said in a statement that its newest model “features an ultra-long context of one million words, achieving leadership in both domestic and open-source fields across agent capabilities, world knowledge, and reasoning performance”.
The model is available as DeepSeek V4-Pro and DeepSeek V4-Flash. The latter version, the company says, is a “more efficient and economical choice”.
“In world knowledge benchmarks, DeepSeek V4-Pro significantly leads other open-source models and is only slightly outperformed by the top-tier closed-source model Gemini-Pro-3.1,” the company noted, referring to Google's generative AI.
DeepSeek V4-Pro comes with a “maximum reasoning effort mode”, which, the Chinese startup claims, “significantly advances the knowledge capabilities of open-source models, firmly establishing itself as the best open-source model available today”.

DeepSeek sparked a trillion-dollar stock market sell-off last year after releasing its R1 model, which rivalled the performance of AI systems such as OpenAI’s ChatGPT despite costing a fraction as much to build.
The R1 release sent shockwaves through the Western tech industry: Nvidia suffered the biggest single-day loss in market value in history, shedding over $500bn, while fellow tech giants such as Oracle, Amazon and Microsoft also saw significant drops in their share prices.
Part of the shock was that the model was open-source and free to use, as well as far cheaper to develop.
It also marked the first time a Chinese competitor had rivalled the most advanced AI models from US giants.
DeepSeek’s latest release comes amid tightening US restrictions on semiconductor exports to China, particularly of the high-end graphics processing units (GPUs) that are key to building AI models, forcing China to rely on its own homegrown GPU manufacturers.
The company did not disclose which chips it used to train the V4 models, but said the software components are designed to work with both Nvidia and Huawei chips.

So far, the AI firm has released only basic details of the new version, including that it can generate a maximum output of 384,000 tokens.
Tokens are the basic units of data that AI models process; they can be words, parts of words, or individual characters. A token is typically about four characters of text, and the faster a model processes tokens, the faster it can learn and respond.
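As a rough illustration of that four-characters-per-token rule of thumb (the exact ratio depends on the tokenizer, which DeepSeek has not detailed, so the figures below are estimates, not the model's real behaviour):

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough token count using the ~4-characters-per-token rule of thumb."""
    return round(len(text) / chars_per_token)


# By the same rule of thumb, a 1-million-token context window holds
# roughly 4 million characters of English text -- on the order of
# several full-length books.
window_tokens = 1_000_000
approx_chars = window_tokens * 4

print(estimate_tokens("DeepSeek has released its new AI model."))
print(approx_chars)
```

In practice, real tokenizers split text into sub-word units learned from data, so the true count for any given passage can differ noticeably from this estimate.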
The Chinese AI firm says the new version achieves a “dramatic leap in computational efficiency” with its ability to process and understand the context of up to 1 million tokens.
In comparison, the previous version, V3, was able to understand the context of up to 128,000 tokens.
The new upgrade enables multi-document reasoning, with the AI model now capable of understanding the context of entire books and full codebases.
“This breakthrough enables efficient support for a context length of one million tokens, ushering in a new era of million-length contexts for next-generation large language models,” it said.
In terms of understanding the context of long strings of text, DeepSeek V4-Pro outperforms Google’s Gemini-3.1-Pro, the company says, adding, however, that it remains behind Anthropic’s Claude Opus 4.6 AI model.
DeepSeek said it hopes to further enhance the model’s intelligence, robustness, and practical usability across a broad range of scenarios and tasks.