
Alibaba releases the Qwen3-Max-Thinking flagship reasoning model, with performance comparable to GPT-5.2 and Gemini 3 Pro

On January 26, Alibaba released the Qwen3-Max-Thinking flagship reasoning model, which set new records on multiple authoritative benchmarks and delivers performance comparable to GPT-5.2 and Gemini 3 Pro. Qwen3-Max-Thinking has over one trillion total parameters and achieved significant performance gains through innovations in reinforcement learning and reasoning technology. In addition, the number of derivative models in the Qwen series has surpassed 200,000, with downloads exceeding one billion, firmly establishing it as the world's leading open-source large model family.
On January 26, Alibaba officially released the Qwen3-Max-Thinking flagship reasoning model, which set several global benchmark records and achieved performance comparable to GPT-5.2 and Gemini 3 Pro, making it the strongest domestic AI large model to date and the one closest to the top international models. The new Qwen model has over one trillion parameters, underwent larger-scale reinforcement learning post-training, and achieved a significant leap in performance through a series of innovations in reasoning technology. According to the latest data from Hugging Face, the number of Qwen derivative models has surpassed 200,000, making it the first open-source large model family in the world to reach this milestone. At the same time, downloads of the Qwen series models have exceeded 1 billion, with developers averaging 1.1 million downloads per day, maintaining its position as the top open-source large model globally.
