Survey: A large number of AI users are planning to switch from the NVIDIA platform to AMD's MI300X

LB Select
2024.03.12 15:45

According to a March 10th report from ZhiXin News, a recent survey released by TensorWave shows that a large number of artificial intelligence professionals are planning to switch from NVIDIA's AI GPU platform to AMD's latest Instinct MI300X GPU.

Jeff Tatarchuk, co-founder of TensorWave, recently disclosed on the X platform that the company ran an independent survey of 82 engineers and AI professionals, and about 50% of respondents expressed confidence in AMD. They believe that, compared with NVIDIA's H100 series, the MI300X not only offers better value for money but is also in sufficient supply, avoiding the H100's tight availability. Jeff also said that TensorWave itself will adopt the MI300X AI accelerator.

This is clearly good news for AMD. In the past, its Instinct products trailed NVIDIA's competing parts in both performance and market recognition, leaving AMD with a far smaller share of the AI chip market. However, AMD's latest Instinct MI300X GPU, launched last year, surpasses NVIDIA's star product, the H100, in performance.

According to published information, the AMD Instinct MI300X integrates 12 chiplets (the compute dies on 5nm, the I/O dies on 6nm) with a total of 153 billion transistors. In core design it is simpler than the MI300A APU: it drops that chip's 24 Zen 4 CPU cores in favor of pure CDNA 3 GPU dies, giving 304 compute units (38 CUs per GPU chiplet) and 19,456 stream processors. On the memory side, the MI300X carries a larger 192GB of HBM3 (8 HBM3 packages, each a 12-Hi stack), a 50% increase over the MI250X, delivering up to 5.3TB/s of memory bandwidth and 896GB/s of Infinity Fabric bandwidth. By comparison, NVIDIA's upcoming H200 AI accelerator tops out at 141GB.
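The figures above hang together arithmetically. A minimal sketch to check them; the per-CU stream-processor count and per-stack capacity are assumptions drawn from AMD's public CDNA 3 and HBM3 materials, not from this article:

```python
# Sanity-checking the MI300X figures quoted above.
xcd_count = 8        # CDNA 3 compute dies (XCDs)
cus_per_xcd = 38     # compute units per die, as stated in the article
sps_per_cu = 64      # stream processors per CU on CDNA 3 (assumed)
hbm_stacks = 8       # HBM3 packages
gb_per_stack = 24    # 12-Hi stack at 2GB per layer (assumed)

total_cus = xcd_count * cus_per_xcd
total_sps = total_cus * sps_per_cu
total_hbm_gb = hbm_stacks * gb_per_stack

print(total_cus, total_sps, total_hbm_gb)  # → 304 19456 192
```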

Specifically, compared to the NVIDIA H100, the MI300X has the following advantages:

  • Memory capacity is 2.4 times higher
  • Memory bandwidth is 1.6 times higher
  • FP8 performance (TFLOPS) is 1.3 times higher
  • FP16 performance (TFLOPS) is 1.3 times higher
  • In 1v1 comparison tests, up to a 20% performance lead over the H100 on Llama 2 70B
  • In 1v1 comparison tests, up to a 20% performance lead over the H100 on FlashAttention 2
  • In 8v8 server comparison tests, up to a 40% performance lead over the H100 on Llama 2 70B
  • In 8v8 server comparison tests, up to a 60% performance lead over the H100 on Bloom 176B
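The first four ratios can be reproduced from the two chips' peak spec sheets. A sketch, assuming H100 SXM figures from NVIDIA's public datasheet and dense (non-sparse) peak tensor TFLOPS for both parts:

```python
# Reproducing the headline MI300X-vs-H100 ratios from peak specs.
# H100 SXM values are assumptions from NVIDIA's datasheet; TFLOPS
# entries are dense peak tensor throughput for both GPUs.
h100   = {"memory_gb": 80,  "bandwidth_tbs": 3.35, "fp16_tflops": 989.4,  "fp8_tflops": 1978.9}
mi300x = {"memory_gb": 192, "bandwidth_tbs": 5.3,  "fp16_tflops": 1307.4, "fp8_tflops": 2614.9}

for metric in h100:
    print(f"{metric}: {mi300x[metric] / h100[metric]:.1f}x")
# → memory_gb: 2.4x
# → bandwidth_tbs: 1.6x
# → fp16_tflops: 1.3x
# → fp8_tflops: 1.3x
```

The 20–60% model-level leads in the last four bullets come from AMD's own benchmark runs and cannot be derived from spec sheets alone.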

On paper, the AMD Instinct MI300X represents a major leap in performance and holds a real edge over NVIDIA's current flagship H100. NVIDIA, meanwhile, still faces supply shortages and high prices, giving AMD an opening.

AMD CEO Lisa Su predicted on a January 30th conference call that AMD's AI chip revenue will reach $3.5 billion in 2024, up from the previous forecast of $2 billion. AI chip revenue in the final quarter of last year also exceeded the earlier forecast of $400 million, though she did not disclose exact figures.

It should be noted, however, that NVIDIA currently holds over 90% of the AI chip market, a near-monopoly position that it may use to squeeze AMD in the competition.

Jonathan Ross, CEO of Groq, a U.S. AI startup that competes with NVIDIA in cloud inference chips for large models, recently accused NVIDIA of obstructing fair competition in an interview with The Wall Street Journal. According to Ross, customers of other chip suppliers say that if NVIDIA asks about their dealings with Groq, they deny any conversations took place, out of fear of retaliation.

"Many of the people we meet say that if word of our meeting gets back to NVIDIA, they will flatly deny it ever happened," Ross said. "The problem is, you have to pay NVIDIA a year in advance, and delivery may still take a year or even longer. They'll say, 'Oh, you bought from someone else? Then I guess your delivery might be delayed.'"

Although NVIDIA denies these allegations, Scott Herkelman, former Senior Vice President and General Manager of AMD's graphics business, has likewise accused NVIDIA of cartel-like business practices, calling it a "GPU cartel" and implying that NVIDIA's dominance of the AI chip market may be no accident. Herkelman believes NVIDIA restricts GPU supply and stokes customers' fear of using competitors' GPUs in order to maintain its hold on the industry.

Editor: Zhitong App - Wanderer Sword