Core business revenue surged by 409%! NVIDIA's performance once again shocks the world, single-handedly revitalizing the "AI faith".

Zhitong
2024.02.22 00:08

NVIDIA has announced better-than-expected results and outlook, with total revenue more than doubling to $22.1 billion, including a staggering 409% surge in revenue from the data center business unit. The company expects another significant increase in total revenue this quarter. NVIDIA's stock has risen as much as 40% this year, pushing its market value to $1.67 trillion, as investors bet that the company will remain a major beneficiary of the artificial intelligence computing boom.

Zhitong App has learned that NVIDIA, the chip giant known as the "strongest shovel seller" in the AI field, has once again announced quarterly results and an outlook that far exceeded market expectations. Since the groundbreaking generative AI application ChatGPT ushered the world into a new AI era, demand for NVIDIA's A100/H100 chips used in AI training and inference has surged dramatically. For the fourth consecutive quarter, NVIDIA has released astonishingly strong results, stunning not only the tech industry but industries worldwide.

In the fourth quarter of the 2024 fiscal year, which ended on January 28, NVIDIA's total revenue more than doubled to $22.1 billion. Excluding certain items, non-GAAP earnings per share came in at $5.16, significantly surpassing Wall Street analysts' consensus forecasts of $20.4 billion in revenue and $4.60 in earnings per share. More importantly, NVIDIA expects another substantial increase in total revenue this quarter, validating its soaring stock price and its position as one of the most valuable companies in the world.

The total revenue figure highlights NVIDIA's remarkable growth trajectory: in the entire 2021 fiscal year, NVIDIA's total revenue did not even reach this number. Moreover, NVIDIA's most important division, the data center business that supplies A100/H100 chips to data centers worldwide, achieved Q4 revenue of approximately $18.4 billion, a staggering 409% increase year-on-year.

After a 240% surge in stock price in 2023, NVIDIA's stock price has risen by as much as 40% in 2024. NVIDIA's market value has increased by over $400 billion this year, bringing its total market value to $1.67 trillion, with investors betting that the company will remain a major beneficiary of the AI computing boom.

NVIDIA CEO Jensen Huang stated: "GPU-accelerated computing and generative AI have reached a 'tipping point.' The demand from companies, industries, and most countries worldwide is skyrocketing."

During the earnings conference call with Wall Street analysts, Huang said that demand for NVIDIA's latest products will continue to outstrip supply for the remainder of the year. He noted that although supply is increasing, there are no signs of demand slowing down. "Generative AI has opened up a whole new investment cycle," Huang said, predicting that the scale of data center infrastructure will double in the next five years, representing a market opportunity of hundreds of billions of dollars annually.

After the blowout results were announced, NVIDIA's stock price soared more than 11% in after-hours trading, and US chip stocks surged collectively. Notably, AI-related technology and chip stocks had been weak since the beginning of the week, mainly because investors worldwide were cautious ahead of NVIDIA's earnings report. NVIDIA, the "strongest shovel seller" in the AI chip field, has thus single-handedly revitalized the "AI faith" of global tech stock investors, which may once again set off a huge wave of AI enthusiasm in global stock markets.

Chris Caso, an analyst at the well-known Wall Street investment firm Wolfe Research, wrote in a report: "Global stock markets were all watching this report, so expectations had risen as well. NVIDIA's strong outlook is sufficient to justify the stock's rise and leaves room for continued gains in the second half of the year."

Undoubtedly, competition in the AI chip field will become increasingly fierce. NVIDIA's strongest competitor, AMD, has recently started selling its MI300 series AI GPU accelerators and now expects the series to generate $3.5 billion in revenue this year, up from its previous forecast of $2 billion. Emerging AI chip startups will also pose a strong challenge to NVIDIA. Groq recently launched its self-developed LPU, which generates text faster than the blink of an eye and, the company claims, delivers inference performance up to 10 times faster than NVIDIA GPUs. NVIDIA, however, is not standing still: analysts predict the company is about to mass-produce the more powerful H200, as well as the highly anticipated B100.

NVIDIA - The Strongest Player in the AI Field

The latest results prove that NVIDIA remains, beyond doubt, the "strongest shovel seller" in the global AI field, commanding a market share as high as 90% in AI training and riding an unprecedented global wave of enterprise AI deployment to attract massive investment. Even Groq's LPU, mentioned above, is currently better suited to inference; training large language models still requires large-scale purchases of NVIDIA GPUs.

NVIDIA recognized the potential of GPUs in the AI and deep learning fields very early on, investing heavily in related research and successfully building a strong software and hardware ecosystem around its GPU hardware.

NVIDIA has been deeply involved in global high-performance computing for many years, above all through its CUDA computing platform, which has become ubiquitous worldwide. NVIDIA's hottest AI chip, the H100 GPU accelerator, is based on the company's groundbreaking Hopper GPU architecture and delivers unprecedented computing power, particularly in floating-point operations, with a focus on tensor and AI-specific acceleration. OpenAI, the developer of ChatGPT, along with tech giants such as Amazon, Meta Platforms (the parent company of Facebook and Instagram), Tesla, Microsoft, and Alphabet (Google's parent company), are NVIDIA's largest clients, together contributing nearly 50% of its total revenue. These companies are currently investing heavily in AI hardware such as NVIDIA's AI chips.

Tesla CEO Elon Musk has likened the AI arms race among tech companies to a high-stakes "poker game" in which enterprises need to invest billions of dollars annually in AI hardware to stay competitive. Musk said that in 2024 Tesla alone will spend over $500 million on NVIDIA's AI chips, but cautioned that Tesla will need "billions of dollars" worth of hardware in the future to catch up with its largest competitors.

As the world enters the AI era, NVIDIA's data center business has become its core focus, shifting away from its previous heavy reliance on gaming graphics card demand. The data center business, particularly the division providing A100/H100 chips to global data centers, has transformed from being a sideline to NVIDIA's main revenue contributor. This segment has consistently outperformed other divisions for several quarters, with the data center business generating $18.4 billion in revenue in Q4, a staggering 409% increase from the same period last year. Additionally, NVIDIA anticipates that the scale of data center infrastructure will double within five years. Meanwhile, the company's gaming business benefited from the global chip demand recovery trend, with revenue growing by 56% YoY to $2.9 billion.

NVIDIA is currently expanding its AI software and hardware ecosystem beyond large data centers. The 61-year-old Jensen Huang has been traveling the world recently, believing that governments worldwide need sovereign-level AI systems to protect data and gain a competitive edge in AI.

Huang recently introduced the concept of "sovereign AI capabilities," pointing to a surge in national-level demand for AI hardware. He said that countries around the world are planning to build and operate their own AI infrastructure domestically, which will significantly boost demand for NVIDIA's hardware. In a recent interview, Huang noted that countries such as India, Japan, France, and Canada have been discussing investment in "sovereign AI capabilities."

In its performance outlook, NVIDIA, the world's most valuable chip company, stated that total revenue for the first quarter of the 2025 fiscal year (ending in April 2024) is expected to reach around $24 billion, significantly surpassing Wall Street analysts' average forecast of $21.9 billion. This exceptionally strong outlook underscores NVIDIA's position as the prime beneficiary of the global AI trend and the "top player" in AI's core infrastructure.

Facing surging consumer demand for generative AI products such as ChatGPT and Google Bard, as well as other essential AI tools, tech giants and data center operators worldwide are racing to stock up on the company's H100 GPU accelerators, which excel at the heavy workloads required for AI training and inference.

GPU - One of the Most Crucial Infrastructures in the AI Era

As the world enters the AI era and the Internet of Things accelerates, global demand for computing power is growing explosively. In AI training in particular, the various subtasks involve intensive operations such as matrix calculations and the forward and backward propagation of neural networks, which place extreme demands on hardware performance. CPUs, which enjoyed the benefits of Moore's Law for many years, cannot meet these challenges; even a large number of CPUs cannot solve the problem, because CPUs are designed for general-purpose computing across routine tasks, not for massively parallel workloads and high-density matrix operations.

Moreover, as the global chip industry enters the "post-Moore era," CPUs, long a driving force of technological and social development, can no longer achieve rapid breakthroughs like the transition from 22nm to 10nm in under five years. Further advances face major obstacles such as quantum tunneling and enormous capital investment, severely limiting CPU performance upgrades and optimizations.

Therefore, GPUs, with a large number of computing cores capable of executing multiple high-intensity AI tasks simultaneously and excelling in parallel computing, have become the most crucial hardware in the chip industry in recent years. GPUs have significant advantages over other types of chips in high-performance computing fields like AI training/inference, which is crucial for complex AI tasks such as image recognition, natural language processing, and extensive matrix operations.

Modern GPU architectures have been specifically optimized for AI, making them suitable for AI tasks such as deep learning. For example, NVIDIA Tensor Cores can accelerate critical high-intensity operations like matrix multiplication and convolution calculations. They can parallelize the processing of large-scale floating-point and integer matrix calculations, thereby improving computational efficiency.
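To make the parallelism argument concrete, here is a minimal, illustrative Python sketch (our own example, not NVIDIA code): every element of a matrix product is an independent dot product, which is exactly why a GPU can assign one thread per output element and compute thousands of them simultaneously.

```python
def matmul(A, B):
    """Multiply two matrices given as lists of lists.

    Each output element C[i][j] is an independent dot product of
    row i of A with column j of B. No element depends on any other,
    so a GPU can compute them all in parallel, one thread per element;
    this loop only does so one at a time.
    """
    m, k = len(A), len(A[0])
    n = len(B[0])
    assert len(B) == k, "inner dimensions must match"
    return [[sum(A[i][p] * B[p][j] for p in range(k)) for j in range(n)]
            for i in range(m)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = matmul(A, B)  # [[19, 22], [43, 50]]
```

Hardware like NVIDIA's Tensor Cores goes further, computing small matrix tiles in a single fused operation rather than element by element.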

Since the emergence of ChatGPT, AI's influence on the global high-tech industry and on technological development has kept growing. CPUs, which focus on single-thread performance and general-purpose computing, remain indispensable, but their status and importance in the chip field now fall far short of GPUs'.

From a theoretical perspective, the exponential growth predicted by Moore's Law has not disappeared in recent years; it has shifted from CPUs to many-core GPUs. GPU performance is still growing exponentially, doubling roughly every 2.2 years. By comparison, Intel CPU GFLOPS are still increasing, but plotted against GPU GFLOPS the curve looks almost flat.
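The cited doubling period is simple compound-growth arithmetic. The sketch below is our own illustration: it compares the article's 2.2-year GPU doubling period against a hypothetical slower, CPU-like 5-year doubling period over a decade.

```python
def performance_multiple(years, doubling_period=2.2):
    """Performance multiple after `years`, assuming performance
    doubles every `doubling_period` years (2.2 is the article's
    figure for GPUs)."""
    return 2 ** (years / doubling_period)

# Doubling every 2.2 years compounds to roughly 23x over a decade,
# while a hypothetical 5-year doubling period (an assumption for
# illustration, not a figure from the article) yields only 4x.
gpu_decade = performance_multiple(10)                     # ~23.4
cpu_decade = performance_multiple(10, doubling_period=5)  # 4.0
```

The gap widens exponentially with time, which is why the CPU curve looks flat next to the GPU curve on a linear plot.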

Jensen Huang emphasized that the global shift toward artificial intelligence is just beginning. He believes accelerated computing, which speeds up specific tasks by breaking them into smaller parts and processing them in parallel, is taking the lead. As for market size, the latest research from the well-known market research firm Mordor Intelligence projects that the GPU market (covering GPUs for PCs, servers, high-performance computing, autonomous driving, and other applications) will reach approximately $206.95 billion within the next five years, with a compound annual growth rate (CAGR) as high as 32.70% over the forecast period (2024-2029).
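The CAGR figure can be cross-checked with basic arithmetic. The sketch below is our own illustration; the implied 2024 base of roughly $50 billion is derived from the article's two numbers, not a figure Mordor Intelligence states.

```python
def cagr(start, end, years):
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

def implied_start(end, rate, years):
    """Starting value implied by an end value and a CAGR."""
    return end / (1 + rate) ** years

# The article's figures -- $206.95B by 2029 at a 32.70% CAGR over
# 2024-2029 -- imply a 2024 market size of roughly $50 billion.
base_2024 = implied_start(206.95, 0.3270, 5)  # ~50.3
```

Round-tripping the implied base through `cagr` recovers the 32.70% rate, confirming the two projections are mutually consistent.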

Mordor Intelligence states that GPU hardware is not only used for rendering images, animations, and electronic games but also for general computing purposes, deployed in almost all computing devices worldwide. The active deployment trend of personal computers, laptops, and emerging applications (such as AR/VR, high-performance computing, artificial intelligence, machine learning, blockchain, cryptocurrency mining, autonomous driving, and high-precision navigation for vehicles and robots), especially in the field of artificial intelligence, will greatly drive GPU demand in the future.


On the eve of the earnings announcement, top Wall Street players were bullish on NVIDIA.

Before NVIDIA's latest results and outlook were released, top Wall Street investment institutions were uniformly bullish on the stock's trajectory over the next 12 months.

Among them, Goldman Sachs significantly raised its 12-month target price for NVIDIA from $625 to $800, Bank of America raised NVIDIA's target price from $700 to $800, and UBS raised its target price from $580 to $850. Mizuho Securities raised NVIDIA's target price from $625 to $825.

Some institutions even expected NVIDIA's stock price to exceed $1000 in the next 12 months before the financial report was released. Rosenblatt reiterated a "buy" rating for NVIDIA with a target price of $1100, while Loop Capital set NVIDIA's target price as high as $1200, the highest target price on Wall Street.

Analysts at Loop Capital stated that the reason for setting such a high target price is due to investors' extremely optimistic outlook on the demand for its AI chips, as well as the trend of large-scale computing centers and data centers transitioning to GPU systems. They believe NVIDIA is at the "front end" of a multi-year cycle. Analysts at Loop Capital likened this phenomenon to the "Internet construction trend" in the mid to late 1990s - from 1995 to 2001, the Internet adoption rate increased fivefold.

Undoubtedly, competition in the AI chip field will become increasingly fierce.

With NVIDIA's strongest competitor AMD recently starting to sell its MI300 series AI GPU accelerators, Groq launching its self-developed LPU chip, and cloud computing giants such as Google continuing to push ASIC chips for AI acceleration, competition in the AI chip field is set to intensify further.

NVIDIA is not standing still. Analysts predict the company is about to mass-produce a more powerful AI chip, the B100. Morgan Stanley, a major Wall Street firm, believes the future revenue from NVIDIA's products is well worth looking forward to, expecting the B100 to become a game-changer in artificial intelligence with an impact even greater than that of the previous flagship AI chip, the H100.

The NVIDIA B100 was originally scheduled to be released at the end of 2024, but due to the overwhelming demand for AI chips, it has been brought forward to the first half of 2024. According to insiders, it has now entered the supply chain certification stage.

The B100 will be able to handle large language models with 1.73 trillion parameters with ease, making it twice as powerful as the current H200. It will also adopt a more advanced HBM high-bandwidth memory specification, expected to keep breaking through in stacking capacity and bandwidth and to surpass the existing 4.8TB/s. The B100 will incorporate advanced technologies, including an expected 1.78 trillion transistors and the latest HBM3e memory; compared with the H100's HBM3, this means significant improvements in both data processing and storage capability. The launch of the B100 also marks NVIDIA's effort to strengthen its position in AI inference, a field it has not yet fully dominated: in inference performance, the B100 is expected to be more than 4 times faster than the existing H100, a significant advantage for serving pre-trained models.

Additionally, the B100 is expected to adopt chiplet design for the first time in NVIDIA's GPU products. According to NVIDIA's product roadmap, the X100 is expected to be released in 2025, further enriching the GPU product lineup and solidifying NVIDIA's leading position in AI chips.

One month later, on March 18th local time, NVIDIA will host the 2024 GTC for AI developers. NVIDIA CEO Jensen Huang will deliver a keynote speech, possibly revealing more new details about the B100. Vivek Arya, a Bank of America analyst who has always been optimistic about NVIDIA, believes that the pricing of the upcoming B100 GPU from NVIDIA will be at least 10% to 30% higher than the H100.