China's AI "Max Moment"! Qianwen's strongest model opens the second growth curve

Wallstreetcn
2026.01.26 15:48

China's AI sector is having its "Max moment": Qianwen has launched its flagship reasoning model Qwen3-Max-Thinking, which outperforms GPT-5.2 and Gemini 3 Pro on several benchmarks, making it a domestically developed model close to the international top tier. The breakthrough stems from a generational upgrade to the model's reasoning mechanism: a "test-time scaling" approach that delivers more efficient computation and smarter results. Although market confidence in Chinese tech companies still needs rebuilding, the new launch is expected to change investor perceptions.

Looking back at 2024 to 2025, the pricing logic of the capital market for Chinese technology assets has undergone a long tug-of-war.

With the stunning performance of Qwen2.5 and DeepSeek, the market briefly sparked a wave of "revaluation" for Chinese AI. At that time, investors saw hope for Chinese large models to "catch up" with Silicon Valley, and Alibaba's stock price experienced a corrective rebound as a result.

However, the nature of that market trend was more about "emotional repair."

Global funds acknowledged that Chinese tech companies "have not fallen behind," but still stubbornly refused to believe that they could "lead the way." In Wall Street's pricing models, Alibaba's e-commerce DNA remains dominant, and the potential value of its AI and cloud business is treated merely as a call option: promising, but not a core growth engine.

To change this inertia in pricing, Chinese large models can only gradually alter market perceptions with new products.

On January 26, 2026, Alibaba Cloud officially released its flagship reasoning model Qwen3-Max-Thinking, which performed exceptionally well, even surpassing GPT-5.2 and Gemini 3 Pro in several global authoritative evaluations, becoming the first domestically developed AI large model that truly approaches the performance of top international models.

It is noteworthy that the key to this model breakthrough lies in the generational upgrade of the model's reasoning mechanism.

Common reasoning-time computation in the industry often simply adds parallel reasoning paths, repeatedly rederiving known conclusions in the hope of getting lucky, which creates significant computational redundancy. The new Qwen model instead adopts a new "test-time scaling" mechanism.

It introduces an "experience extraction" style refinement process, allowing the model to iteratively refine previous reasoning results multiple times, achieving more efficient computation and smarter outcomes within the same context. This is a strategy that encourages the model to "think slowly," exchanging computation for intelligence while ensuring economic efficiency.
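The contrast between the two approaches can be sketched in a few lines. This is a toy illustration only, assuming hypothetical `generate`, `refine`, and `score` callables that stand in for model calls; it is not Qwen's actual API or training recipe.

```python
# Toy sketch: parallel sampling vs. sequential iterative refinement.
# `generate`, `refine`, and `score` are hypothetical stand-ins for
# model calls and a quality estimate -- NOT the real Qwen interface.

def parallel_sampling(generate, score, prompt, n=8):
    """Baseline: sample n independent answers and keep the best.
    Each sample rederives everything from scratch (redundant compute)."""
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=score)

def iterative_refinement(generate, refine, prompt, rounds=4):
    """Sequential test-time scaling: each round feeds the previous
    draft back to the model, so later compute builds on earlier
    reasoning instead of repeating it."""
    draft = generate(prompt)
    for _ in range(rounds):
        draft = refine(prompt, draft)  # extract lessons, fix errors
    return draft
```

Under this framing, "thinking slowly" means spending the same token budget on a deepening chain of drafts rather than on many shallow, independent ones.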

Based on this innovation, Qwen3-Max-Thinking has achieved a qualitative leap in reasoning performance. In the "Humanity's Last Exam" (HLE) evaluation, which tests the use of tools, Qwen scored as high as 58.3. In comparison, OpenAI's GPT-5.2-Thinking scored 45.5, and Google's Gemini 3 Pro scored 45.8. A gap of over 10 points signifies a generational lead in the AI evaluation field.

In anticipation of the coming Agent era, Qwen3-Max-Thinking has also significantly enhanced its native ability to call tools autonomously. As the technological gap continues to narrow, the capital market's revaluation of Alibaba's AI is no longer a question of "if."

From "Competing in Computing Power" to "Competing in Intelligence," Domestic AI Has Found Its Second Curve

Over the past five years, the main line of development in the AI industry has followed Scaling Laws, which involve stacking more cards and feeding more data to enhance intelligence.

However, after entering 2025, this path hit physical walls: high-quality data is being depleted and training costs are exploding.

Shifting from training to inference, the industry urgently needs a second growth curve.

One of the core breakthroughs of Qwen3-Max-Thinking is test-time scaling, akin to "System 2" thinking in human psychology. When faced with an advanced mathematical proof or a complex code architecture, the model does not immediately output the first generated token, but instead launches a deliberate reflection process in the background.

This process includes chain-of-thought search, self-verification, and multiple iterations.

The Qianwen team has introduced a unique experience extraction mechanism that allows the model to "learn how to find answers" during the inference process, identifying and pruning redundant logical paths. This mechanism ensures that computing power is used to explore the most valuable branches, resulting in extremely high-density intelligence output with the same computing power consumption.
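The pruning idea described above can be illustrated with a small best-first search: keep only the most promising partial reasoning chains at each step, so compute flows to the most valuable branches. The `expand` and `value` callables here are hypothetical stand-ins for the model's step generator and its learned value estimate, not the Qianwen team's actual mechanism.

```python
# Toy illustration of pruning redundant reasoning branches.
# `expand(node)` yields candidate next steps; `value(node)` scores
# a partial chain. Both are hypothetical placeholders.
import heapq

def pruned_search(expand, value, root, beam_width=3, depth=4):
    """Beam-style search: at each depth, retain only the
    `beam_width` highest-value partial chains, pruning the rest
    so compute is spent on the most promising branches."""
    frontier = [root]
    for _ in range(depth):
        children = [c for node in frontier for c in expand(node)]
        if not children:
            break
        # prune: keep only the top beam_width candidates by value
        frontier = heapq.nlargest(beam_width, children, key=value)
    return max(frontier, key=value)
```

The economic point is in the pruning line: a fixed compute budget explores fewer, better branches instead of exhaustively rederiving redundant ones.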

As the largest model Alibaba, and indeed China's tech sector, has released to date, Qwen3-Max-Thinking has over a trillion parameters and up to 36T tokens of pre-training data. Its HLE result is particularly significant: the high score shows the technology can handle not only mathematical problems but also the ambiguous, shifting tasks of the real world.

Moreover, the model has achieved the internalization of native agent capabilities.

In the past, large models were merely "brains" that required external tools, often leading to unstable instruction adherence. Qwen3-Max-Thinking, however, thinks like a professional, using tools while contemplating. It can autonomously determine when to search the internet, when to write code, and when to consult knowledge bases, dynamically adjusting plans based on feedback.
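The "using tools while contemplating" behavior amounts to a decide-act-observe loop. Below is a minimal, hypothetical sketch of such a loop; `model_step` and the tool registry are illustrative placeholders, not Qwen's actual agent interface.

```python
# Minimal sketch of a native agent loop: at each step the model
# either calls a tool or answers, and tool results are fed back so
# it can revise its plan. `model_step` and `tools` are hypothetical.

def agent_loop(model_step, tools, question, max_steps=5):
    """model_step(history) returns ("tool", name, args) to act,
    or ("answer", text) to finish. Tool output re-enters the
    history, letting the model adjust its plan from feedback."""
    history = [("user", question)]
    for _ in range(max_steps):
        action = model_step(history)
        if action[0] == "answer":
            return action[1]
        _, name, args = action
        result = tools[name](*args)             # e.g. search, run code
        history.append(("tool", name, result))  # feedback for next step
    return None  # step budget exhausted
```

Internalizing this loop natively, rather than bolting tools on externally, is what keeps instruction adherence stable: grounded tool results displace unsupported guesses.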

This capability significantly reduces model hallucinations, providing the necessary reliability assurance for enterprise-level applications and demonstrating a high potential for scalable engineering implementation in the agent era.

This also means that Alibaba is no longer simply "selling computing power" or just providing a single large model, but is exploring greater possibilities for AI implementation through a genuinely usable complete "intelligent solution," transitioning from a past "cost narrative" to a "value narrative."

The Open-Source Reversal: Silicon Valley Looks East as Global Developers Vote with Their Feet

If the technological breakthrough of Qwen3-Max-Thinking is Alibaba's "substance," then the dominance of the Qwen series in the open-source ecosystem is its shiny "surface." The changes in this field are rewriting the geopolitical landscape of global AI.

Meta's Llama series was once regarded as the default standard for open-source large models, but that situation was completely overturned in January 2026. Hugging Face data show that the number of derivative models based on Qwen has surpassed 200,000, making it the first open-source model family in the world to reach this milestone. Cumulative downloads have exceeded 1 billion, with average daily downloads of 1.1 million, decisively surpassing Llama and taking the top position globally.

Research from MIT indicates that the global adoption share of Chinese open-source AI models has risen to 17.1%, surpassing the United States' 15.8% for the first time.

Qwen's success stems from its "full-size, full-modal" strategy. Covering the full parameter range from 0.5B to 480B, and supporting 119 languages, it has quickly become the preferred choice in emerging markets such as Southeast Asia and the Middle East.

A more dramatic signal comes from within Silicon Valley.

According to Bloomberg, Meta's secret project codenamed "Avocado" has had to start using "distillation" technology to learn from Qwen. The newly formed TBD lab team, personally overseen by Zuckerberg, has distilled multiple open-source models, including Qwen, while training its new model.

This indirectly acknowledges Qwen's lead in specific capabilities on a technical level.

NVIDIA CEO Jensen Huang stated candidly at the 2025 GTC conference that "China is far ahead in the open-source field," a remark that underscores the trend.

Silicon Valley engineers have shown extreme pragmatism: use whoever is useful.

Full-Stack Moat: Cloud, Chip, Application, and Valuation Reconstruction in Capital Markets

When analyzing Chinese tech giants, a common mistake investors make is to simply view Alibaba as an "e-commerce company + cloud service provider."

In fact, in the AI era, the complete embodiment of a company's AI capabilities comes from integrating the underlying infrastructure, intermediate model capabilities, and upper-level applications. Alibaba is currently the only company in China and one of only three globally that possesses a full-stack closed-loop capability of "computing power, models, and applications," and this full-stack advantage is being transformed into the company's moat.

The end of AI competition is energy and computing power.

Facing export restrictions on NVIDIA's high-end chips, Alibaba's hardware layout shows strong foresight. The PPU developed by Alibaba's chip unit T-Head (Pingtouge) matches NVIDIA's H20 on specific inference tasks. Coupled with the Yitian 710 server chip, Alibaba has built a heterogeneous "one cloud, multiple chips" computing system.

Last week, market reports indicated that Alibaba has decided to support T-Head's future independent listing. This gives Alibaba an additional asset that can be priced independently in the capital market, and gives domestic large models more confidence in their underlying computing power support.

But chips are just the starting point. In the AI era, what truly determines a company's long-term competitiveness is whether its entire cloud computing architecture can be systematically rebuilt from "general-purpose computing" to "AI native," a full-stack upgrade spanning computing, storage, networking, and even the MaaS platform.

In this regard, Alibaba Cloud has built the most complete AI infrastructure in China. Morgan Stanley's latest report predicts that Alibaba Cloud's revenue will double in three years, growing from CNY 118 billion in fiscal year 2025 to CNY 240 billion in fiscal year 2028. Its goal is to capture 80% of the incremental Chinese AI cloud market by 2026.

At the application layer, Qianwen APP is rapidly validating the commercial potential of this technological closed loop.

In its first week, the download volume exceeded 10 million, and within two months, the monthly active users (MAU) surpassed 100 million. More critically, Qianwen APP is redefining the form of AI applications, evolving from a simple chatbot into the world's first "capable" AI. It fully integrates with ecosystems such as Taobao, Alipay, Fliggy, and Amap, allowing users to complete the entire process of booking flights, ordering takeout, and shopping with just a voice command. This "AI + full-scenario life closed loop" pushes the value of AI from the information layer to the transaction layer, opening up a commercial space far beyond subscription models.

In the AI race, Capex (capital expenditure) is the ticket to the future. Alibaba CEO Eddie Wu announced last year that over CNY 380 billion will be invested in cloud and AI hardware infrastructure over the next three years.

Among U.S. tech giants, Google, Meta, and Amazon are likewise spending aggressively because they recognize the winner-takes-all dynamics of AI. Alibaba's CNY 380 billion investment should be seen as buying its qualification to compete for the next decade.

When Qwen3-Max-Thinking proves China's top-tier strength in algorithms, when Alibaba Cloud demonstrates its controllability in computing power, and when Qianwen APP proves AI's ability to land in commercial scenarios, Alibaba's AI valuation will have a solid foundation for reconstruction.

Risk Warning and Disclaimer

The market has risks, and investment requires caution. This article does not constitute personal investment advice and does not take into account the specific investment goals, financial conditions, or needs of individual users. Users should consider whether any opinions, views, or conclusions in this article align with their specific circumstances. Investment based on this is at their own risk.