Chinese developer DeepSeek has unveiled V3.2, the largest update to its AI lineup to date, built around two strategically important models. The release signals the company's intention to compete directly with global leaders such as OpenAI and Google.
The highlight of the announcement is the experimental model DeepSeek-V3.2-Speciale, which demonstrates exceptional long-form reasoning and planning capabilities. It achieved scores equivalent to gold medals at the International Mathematical Olympiad (IMO 2025) and the Chinese Mathematical Olympiad (CMO 2025), as well as high placements at the ICPC and IOI programming contests.
The achievement rests on an architecture that emphasizes logical reasoning. Speciale employs a multi-stage self-verification process that lets the model critique and refine its own answers in real time, a mechanism built on technology inherited from the highly regarded DeepSeekMath-V2 project.
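DeepSeek has not published the internals of this mechanism, so the sketch below is only a hypothetical illustration of the kind of generate-critique-refine loop the description suggests. The `generate`, `critique`, and `refine` functions are stubs invented for this example and do not correspond to DeepSeek's actual API.

```python
# Hypothetical sketch of a multi-stage self-verification loop, not DeepSeek's
# actual implementation. The three model calls are stubbed out for illustration;
# a real system would replace each stub with an invocation of the model.

def generate(problem: str) -> str:
    """Stub: produce an initial draft solution."""
    return f"draft solution to: {problem}"

def critique(problem: str, answer: str) -> list[str]:
    """Stub: have the model list flaws in its own answer; empty list means none found."""
    return []

def refine(problem: str, answer: str, issues: list[str]) -> str:
    """Stub: revise the answer to address the listed flaws."""
    return answer

def solve_with_self_verification(problem: str, max_rounds: int = 3) -> str:
    answer = generate(problem)                    # initial draft
    for _ in range(max_rounds):
        issues = critique(problem, answer)        # model reviews its own work
        if not issues:                            # no flaws found: accept the answer
            break
        answer = refine(problem, answer, issues)  # revise using the critique
    return answer

print(solve_with_self_verification("Prove that the sum of two even numbers is even."))
```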
While Speciale requires significant computational resources and is currently available only to researchers via API, the base version, DeepSeek-V3.2, is positioned as a general-purpose and highly efficient tool. According to the company, it is optimized for speed and performs at the level of GPT-5.
DeepSeek states that the new release significantly narrows the gap between open and closed AI ecosystems. Both V3.2 models are now officially available on HuggingFace and ModelScope.
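For readers who want to try the open weights, here is a minimal loading sketch using the Hugging Face transformers library. The repository ID `deepseek-ai/DeepSeek-V3.2` is an assumption based on DeepSeek's usual naming convention; verify the exact name on the organization page before running, and note that a model of this size requires multi-GPU hardware.

```python
# Minimal sketch of loading the open V3.2 weights with Hugging Face transformers.
# The repo ID below is assumed from DeepSeek's naming convention, not confirmed.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-V3.2"  # assumed repository name

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",      # use the dtype stored in the checkpoint
    device_map="auto",       # shard the model across available GPUs
    trust_remote_code=True,  # DeepSeek checkpoints ship custom model code
)

inputs = tokenizer("Explain the Cauchy-Schwarz inequality.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```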
ORIENT
