"Very bullish for the whole year"! AMD's conference call responds to doubts: OpenAI's 6GW order is "on track," and MI450 will "significantly ramp up" in the second half of the year

Wallstreetcn
2026.02.04 01:55

AMD's Q1 guidance unexpectedly "stalled," but Lisa Su emphasized that the second half of the year is the "real turning point." She confirmed that the OpenAI 6GW order is progressing steadily and that the MI450 chip will begin volume production in Q3 this year, with a significant ramp-up in Q4. The company disclosed for the first time its plan to move to a 2nm process by 2027 and reiterated that annual AI revenue will soar to "tens of billions of dollars." The company's traditional business segments, such as PC and gaming, however, remain under pressure.

Overnight, AMD, a strong competitor to NVIDIA, delivered a record-breaking data center revenue report; however, its stock fell nearly 8% after hours as the market was disappointed by the Q1 revenue guidance of $9.8 billion. In the subsequent conference call, AMD's management sought to reassure investors: the real AI ramp-up will come in the second half of the year, and the path is very clear.

“When we look at the full year, we are very bullish.” Lisa Su emphasized that the core focus for 2026 is the parallel advancement of two growth lines in the data center: one is the continued strength of server CPUs, and the other is the data center AI reaching a "turning point" in the second half of the year.

Behind the "not explosive enough" guidance: Core business is actually accelerating

CFO Jean Hu candidly stated in the conference call that the Q1 guidance was affected by multiple factors, including a "significant double-digit decline" due to the gaming console cycle entering its seventh year and a reduction in one-time revenues in certain specific markets. However, stripping away these distractions, the core data center business is performing exceptionally strong.

AMD's core engine has not stalled. Lisa Su highlighted a key structural bright spot:

“Even in the typically seasonally weak Q1, we expect our data center GPU and server CPU businesses to achieve quarter-over-quarter growth.”

This means that, despite the drag from PC and gaming businesses, AI and server businesses are accelerating against the trend. Management clearly stated that 2026 will show a "low first half and high second half" trend, with the real breakout point locked in for the second half.

The decisive battle in the second half: MI450 delivered on schedule, OpenAI orders "on track"

What investors are most concerned about is whether AMD can continue to catch up with NVIDIA in high-end AI chips. In response, Lisa Su provided a clear timeline, placing the bets on the second half of the year.

Regarding the MI450 series, which competes with NVIDIA's Blackwell, Lisa Su confirmed:

The MI450 series is on track to be released and begin mass production in the second half of the year... Revenue will start in the third quarter and ramp significantly in the fourth quarter as we enter 2027.

In terms of major customer collaborations, in response to a pointed question from a Morgan Stanley analyst about the progress of the OpenAI partnership—whether the "6 Gigawatts" deployment could start on schedule—Lisa Su provided reassurance:

“We clearly have a very strong relationship with OpenAI. We plan to enter the ramp-up phase from the second half of this year into 2027. That is on track.”

She further revealed that, beyond OpenAI, other customers are also eager to deploy the MI450 quickly. This directly counters market concerns that AMD's large-model training business depends on a single customer. Lisa Su emphasized:

"I also want to remind everyone that we have a very broad customer base that is very excited about the MI450 series." She added that the company "is also actively discussing large-scale multi-year deployments starting from Helios and MI450 with several other customers."

Server CPU: Order book "especially strong in the past 60 days," Venice "customer pull is very high"

At the same time, Lisa Su pointed out that AI agents (Agentic AI) are becoming a new engine for CPUs.

"The demand for high-performance CPUs is currently very high. This is due to agent workloads... When you have these AI processes or AI agents in the enterprise, they actually offload many traditional CPU tasks."

Lisa Su repeatedly positioned server CPUs as another growth curve in the AI cycle: "CPUs are very important in the ongoing climb of AI."

"We have seen the CPU order book strengthen continuously over the past few quarters, especially in the past 60 days."

She also highlighted the unusually strong seasonality: "We see server CPUs growing from Q4 to Q1, which is usually a seasonal decline."

On new products, she said of the next-generation Venice: "Customer pull for Venice is very high, and related collaborations are progressing to support large-scale cloud deployments, as well as broad OEM platform availability when Venice launches later this year."

AI revenue target of "tens of billions of dollars," data center growth over 60%, technology roadmap aimed at 2nm by 2027

Looking further into the future, AMD confirmed for the first time the process details of its next-generation flagship chip, showing its aggressive catch-up in process technology.

"The MI500 series uses our CDNA6 architecture, built on advanced 2nm process technology and equipped with high-speed HBM4E memory. We expect to launch the MI500 in 2027."

Based on this roadmap, Lisa Su reiterated aggressive long-term goals:

"We are in a favorable position, expecting to achieve over 60% annual revenue growth in our data center segment over the next three to five years, and to scale our AI business to tens of billions of dollars in annual revenue by 2027."

Supply and delivery risks: "A lot of testing has been done" and "we will not be supply constrained"

In the face of potential system integration bottlenecks in rack-level delivery, Lisa Su stated: "The development of the MI450 series and Helios racks is progressing very well... We have done a lot of testing—both at the rack level and the chip level."

She also mentioned, "Customers have given us many points to test, so we can do a lot of testing in parallel."

Regarding whether supply would constrain the ramp-up in the second half of the year, her answer was more direct: "We have planned at every component level... relative to our data center AI ramp-up, I do not believe we will be supply constrained."

China MI308: "No further forecasts after Q1," MI325 "has been submitted for licensing"

With the market concerned about the sustainability of China-related revenue, Lisa Su stated that fourth-quarter MI308 sales "were actually shipped under the licenses approved after our cooperation with the government," adding that "these orders actually come from very early in 2025."

Regarding assumptions beyond the first quarter, her answer was direct: "We expect about $100 million in revenue for the first quarter. We are no longer forecasting additional revenue from China, as this is a very dynamic situation."

She added, "We have submitted a license application for MI325."

Expense Leverage and Investment Direction: Operating expenses in 2026 "should lag behind revenue"

In response to rising expenses, Lisa Su stated that the company has "very high confidence" in its roadmap, but operating leverage should appear in 2026: "We should absolutely see leverage... In our long-term model, the growth rate of OpEx (operating expenses) should be lower than revenue, and we expect this to be the case in 2026."

The CFO added that the focus of investment in 2025 is on "data center AI hardware roadmap," "software capability expansion," and "acquisition of ZT Systems to enhance system-level solution capabilities," stating that 2026 "will continue to see active investment," but "revenue growth is expected to outpace operating expense growth."

PC and Gaming: PC TAM may "slightly decline," semi-custom SoC to see "significant double-digit decline" in 2026

On the PC side, Lisa Su pointed out demand-side risks: "Based on everything we see today, we may see a slight decline in PC TAM... including inflationary pressures on commodity prices, including memory."

She stated that the company's modeling for the year is "slightly below seasonal in the second half compared to the first half." However, she emphasized the share logic: "Even if the PC market declines, we believe we can still grow... The focus is on enterprise and continuing to grow in the higher-end, higher-price segments."

On the gaming side, management expects the semi-custom business to enter the later stage of the cycle: the company stated that revenue from semi-custom SoCs in 2026 will "decline by a significant double-digit percentage."

A full translation of AMD's Q4 2025 earnings call follows:

AMD Q4 2025 Earnings Call
Event Date: February 3, 2026
Company Name: AMD

Speaking Session

Operator: Hello everyone, and welcome to the AMD Q4 2025 and full-year earnings call. Currently, all participants are in listen-only mode. There will be a Q&A session following the formal presentation. Please note that this conference is being recorded. Now, I will turn the call over to Matt Ramsay, Vice President of Financial Strategy and Investor Relations.

Matt Ramsay (Vice President of Financial Strategy and Investor Relations): Thank you. Welcome to AMD's Q4 and full-year 2025 earnings call. You should have had the opportunity to review our earnings press release and the accompanying slides. If you have not yet done so, these materials can be found on the investor relations page at amd.com. We will primarily reference non-GAAP financial metrics during today's call. The complete non-GAAP to GAAP reconciliation table can be found in today's press release and in the slides on our website.

Joining today's call are our Chair and CEO Dr. Lisa Su, as well as Executive Vice President, Chief Financial Officer, and Treasurer Jean Hu. This call is being broadcast live, and a web replay will be available on our website.

Before we begin, I would like to remind everyone that Executive Vice President and Chief Technology Officer Mark Papermaster will be speaking at the Morgan Stanley TMT Conference on Tuesday, March 3rd. Today's discussion contains forward-looking statements based on our current beliefs, assumptions, and expectations, which speak only as of today and involve risks and uncertainties that could cause actual results to differ materially from our current expectations. Please refer to the cautionary statement in our press release for more information on factors that could cause actual results to differ significantly.

Now, I will turn the meeting over to Lisa.

Lisa Su (Chair and CEO): Thank you, Matt, and good afternoon, everyone. 2025 was a pivotal year for AMD, with our revenue, net income, and free cash flow reaching all-time highs, driven by strong demand for our high-performance computing and AI products. We maintained strong momentum at the end of the year, with all business segments performing well. We saw accelerating demand across the data center, PC, gaming, and embedded markets, launched the broadest leadership product portfolio in our history, gained significant market share in server and PC processors, and rapidly expanded our data center AI business as cloud, enterprise, and AI customers increased adoption of Instinct and ROCm.

Looking back at the fourth quarter, revenue grew 34% year-over-year to $10.3 billion, primarily driven by record sales of EPYC, Ryzen, and Instinct processors. Net income increased 42% to a record $2.5 billion, and free cash flow nearly doubled year-over-year to a record $2.1 billion. Full-year revenue grew 34% to $34.6 billion, with the data center and client segments together adding over $7.6 billion.

Turning to our fourth-quarter segment performance. Data center segment revenue grew 39% year-over-year to a record $5.4 billion, primarily driven by the acceleration of Instinct MI350 series GPU deployments and growth in server market share. In the server space, the adoption of the fifth-generation EPYC Turin CPU accelerated this quarter, accounting for more than half of total server revenue. Sales of the fourth-generation EPYC also remained strong, as our previous-generation CPUs continue to deliver superior performance and total cost of ownership (TCO) across various workloads compared to competitors' products. As a result, we set records for server CPU sales to cloud and enterprise customers this quarter, ending the year with record market share.

In the cloud business, demand from hyperscale cloud service providers is very strong as North American customers expand their deployments. Public cloud products based on EPYC saw significant growth this quarter, with companies like AWS and Google launching over 230 new AMD instances. Hyperscale cloud service providers launched over 500 AMD-based instances in 2025, resulting in a year-over-year increase of over 50% in the number of EPYC cloud instances, reaching nearly 1,600.

In the enterprise sector, we have seen a meaningful shift in the adoption of EPYC due to our leading performance, broader platform availability, extensive software support, and increased marketing initiatives. Leading server vendors now offer over 3,000 solutions powered by the fourth and fifth generation EPYC CPUs, optimized for all major enterprise workloads. Consequently, the number of large enterprises deploying EPYC on-premises more than doubled in 2025, and our actual server sell-through reached a new high at the end of the year.

Looking ahead, demand for server CPUs remains very strong. Hyperscale cloud service providers are expanding their infrastructure to meet the growing demand for cloud services and AI, while enterprises are modernizing their data centers to ensure they have the computing power needed to enable new AI workflows. In this context, EPYC has become the preferred processor for modern data centers, offering leading performance, energy efficiency, and TCO. Our next-generation Venice CPU will further extend our leadership in these metrics. Customer demand for Venice is very high, and we are engaging in relevant collaborations to support large-scale cloud deployments and broad OEM platform availability when Venice is released later this year.

Turning to our data center AI business, we delivered record revenue from Instinct GPUs in the fourth quarter, primarily driven by increased shipments of the MI350 series. We also generated some revenue from selling MI308 to Chinese customers. The adoption of Instinct further expanded this quarter. Today, eight of the top ten AI companies are using Instinct to support an increasingly broad range of production workloads.

With the MI350 series, we are entering the next phase of Instinct adoption, expanding our footprint among existing partners and adding new customers. In the fourth quarter, hyperscale cloud service providers expanded the availability of the MI350 series, leading AI companies scaled up their deployments to support more workloads, and several new cloud service providers (NeoClouds) launched MI350 series products, providing on-demand access to Instinct infrastructure in the cloud.

In terms of the AI software stack, we expanded the ROCm ecosystem in the fourth quarter, enabling customers to deploy Instinct faster and achieve higher performance across a wider range of workloads. Millions of large language models and multimodal models work out of the box on the AMD platform, with leading models supporting Instinct GPUs on day one of release. This capability highlights our rapidly expanding open-source community support, including the upstream integration of new AMD GPUs into vLLM, one of the most widely used inference engines. To drive the adoption of Instinct in specific industry use cases, we added support for domain-specific models in key verticals. For example, in the healthcare sector, we added ROCm support for leading medical imaging frameworks, enabling developers to train and deploy high-performance deep learning models on Instinct GPUs.

For large enterprises, we launched the Enterprise AI Suite, a full-stack software platform with enterprise-grade tools, inference microservices, and solution blueprints designed to simplify and accelerate large-scale production deployments. We also announced a strategic partnership with Tata Consultancy Services to jointly develop AI solutions for specific industries, helping customers deploy AI in their operations.

Looking ahead, customer engagement with our next-generation MI400 series and Helios platform continues to expand. In addition to our multi-generational partnership with OpenAI (deploying 6 gigawatts of Instinct GPUs), we are also in active discussions with other customers for large-scale multi-year deployments based on Helios and MI450 starting later this year.

Through the MI400 series, we are also expanding our product portfolio to meet the demands of comprehensive cloud, HPC (high-performance computing), and enterprise AI workloads. This includes the MI455X and Helios for AI superclusters, the MI430X for HPC and sovereign AI, and the MI440X server for enterprise customers, which offers leading training and inference performance in a compact eight-GPU solution that is easy to integrate into existing infrastructure. Several OEMs have publicly announced plans to launch Helios systems in 2026 and are engaged in deep engineering collaboration to support a smooth production ramp-up. In December, HPE announced it would provide Helios racks equipped with dedicated HPE Juniper Ethernet switches and optimized software for high-bandwidth scalable networks. In January, Lenovo announced plans to offer Helios racks. Adoption of the MI430X also increased this quarter, with France's GENCI and Germany's HLRS announcing new exascale supercomputer plans.

Looking further ahead, development of our next-generation MI500 series is progressing smoothly. The MI500 is powered by our CDNA6 architecture, built on advanced 2nm process technology, and uses high-speed HBM4E memory. We expect to launch the MI500 in 2027 and anticipate that it will deliver another significant leap in AI performance, powering the next wave of large-scale multimodal models.

In summary, our AI business is accelerating, with the launch of the MI400 series and Helios representing a major turning point for the business, as we deliver leading performance and TCO at the chip, compute tray, and rack levels. Leveraging our advantages from the EPYC and Instinct roadmaps, we are well-positioned to achieve over 60% annual revenue growth in our data center segment over the next three to five years, and to scale our AI business to tens of billions of dollars in annual revenue by 2027.

Turning to the client and gaming business. Segment revenue grew 37% year-over-year to $3.9 billion. In the client segment, our PC processor business performed exceptionally well. Revenue grew 34% year-over-year to a record $3.1 billion, driven by increased demand for multiple generations of Ryzen desktop and mobile CPUs. Desktop CPU sales set a record for the fourth consecutive quarter. Throughout the holiday period, Ryzen CPUs topped the best-seller lists at major global retailers and e-commerce platforms, with strong demand across all regions and price segments, driving record desktop channel sell-out rates.

In the mobile space, strong demand for laptops powered by AMD processors drove record actual sales of Ryzen PCs this quarter. This momentum extended into the commercial PC sector, as we established new long-term growth engines for our client business, accelerating the adoption of Ryzen. Actual sales of Ryzen CPUs in commercial laptops and desktops grew over 40% year-over-year in the fourth quarter, and we secured large deals with major telecommunications, financial services, aerospace, automotive, energy, and technology customers.

At CES, we expanded the Ryzen product lineup, launching CPUs that further enhance our performance leadership. Our new Ryzen AI 400 mobile processors significantly outperform competitors in content creation and multitasking performance. Laptops featuring the Ryzen AI 400 are already on the market, and the broadest range of AMD-based consumer and commercial AI PCs will be rolled out throughout the year. We also introduced the Ryzen AI Halo platform, the world's smallest AI development system, equipped with our high-end Ryzen AI Max processor, featuring 128GB of unified memory capable of running models with up to 200 billion parameters locally

In terms of gaming, revenue increased by 50% year-on-year to $843 million. Sales of semi-custom products grew year-on-year but declined quarter-on-quarter as expected. For 2026, we anticipate a significant double-digit decline in annual revenue from semi-custom SoCs as we enter the seventh year of this very strong console cycle. From a product perspective, Valve is expected to start shipping its AMD-powered Steam Machine early this year, while the development of Microsoft's next-generation Xbox (which uses AMD semi-custom SoCs) is progressing well to support a 2027 launch.

Gaming GPU revenue also increased year-on-year, driven by higher channel sell-through rates due to demand for our latest generation Radeon RX 9000 series GPUs during the holiday sales period. We also launched FSR4 Redstone this quarter, which is our state-of-the-art AI-driven super-resolution technology that provides gamers with higher image quality and smoother frame rates.

Turning to our embedded segment. Revenue grew by 3% year-on-year to $950 million, primarily driven by strong performance from test and measurement and aerospace customers, as well as an increase in the adoption of our embedded x86 CPUs. With improved end-customer demand across multiple end markets (led by test and measurement and simulation), channel actual sales accelerated this quarter.

Design win momentum remains one of the clearest indicators of long-term growth for our embedded business, and we delivered a record year. We achieved $17 billion in design wins in 2025, a nearly 20% year-on-year increase, and since the acquisition of Xilinx, we have won over $50 billion in embedded designs. We also strengthened our embedded product portfolio this quarter. We began production of the Versal AI Edge Gen 2 SoC for low-latency inference workloads and started shipping the high-end Spartan Ultrascale+ devices for cost-optimized applications. We also launched new embedded CPUs, including the EPYC 2005 series for cybersecurity and industrial edge applications, the Ryzen P100 series for in-vehicle infotainment and industrial systems, and the Ryzen X100 series for physical AI and autonomous platforms.

In summary, 2025 was an excellent year for AMD, marking the beginning of a new growth trajectory for the company. We are entering a multi-year super cycle of demand for high-performance and AI computing, creating significant growth opportunities across our businesses. AMD is well-positioned to capture this growth with highly differentiated products, a proven execution engine, deep customer partnerships, and significant operational scale. As AI reshapes the computing landscape, we have the broad solutions and partnerships needed to achieve end-to-end leadership, from Helios for large-scale training and inference in the cloud to the expanded Instinct product portfolio for sovereign supercomputing and enterprise AI deployments.

At the same time, demand for EPYC CPUs has surged, as agentic and emerging AI workloads require high-performance CPUs to power head nodes and run parallel tasks alongside GPUs. In the edge and PC sectors where AI adoption has just begun, our industry-leading Ryzen and embedded processors are powering real-time edge AI. Therefore, we expect significant revenue and profit growth in 2026, primarily driven by increased adoption of EPYC and Instinct, continued growth in client market share, and a recovery in the embedded sector.

Looking further into the future, we see a clear path to achieving the ambitious goals we set at our financial analyst day last November, including over 35% compound annual revenue growth over the next three to five years, significantly expanded profit margins, and over $20 in earnings per share within our strategic planning horizon, all driven by growth across all our segments and the rapid expansion of our data center AI business.

Now I will hand the call over to Jean, who will provide more details about our fourth-quarter performance and full-year results.

Jean Hu (Executive Vice President, Chief Financial Officer, and Treasurer): Thank you, Lisa, and good afternoon, everyone. I will first review our financial performance and then provide our current outlook for the first quarter of fiscal year 2026.

AMD performed very well in 2025, delivering a record $34.6 billion in revenue, a year-over-year increase of 34%, driven by a 32% growth in the data center segment and a 51% growth in the client and gaming segments. Gross margin was 52%, and we delivered a record earnings per share of $4.17, a year-over-year increase of 26%, while continuing to invest aggressively in AI and data center to support our long-term growth.

In the fourth quarter of 2025, revenue reached a record $10.3 billion, a year-over-year increase of 34%, driven by strong growth in the data center, client, and gaming segments. This included approximately $390 million in revenue from sales of MI308 to China, which was not in our fourth-quarter guidance. Revenue increased 11% quarter-over-quarter, primarily due to continued strong data center growth driven by the server and data center AI businesses, as well as a recovery in the embedded segment.

Gross margin was 57%, an increase of 290 basis points year-over-year. We benefited from the release of a previously written-down $360 million MI308 inventory reserve. Excluding the release of the inventory reserve and revenue from MI308 in China, the gross margin was approximately 55%, an increase of 80 basis points year-over-year, primarily driven by a favorable product mix.

Operating expenses were $3 billion, a year-over-year increase of 42%, primarily due to our continued investment in R&D and marketing activities to support our AI roadmap and long-term growth opportunities, as well as higher employee performance incentives. Operating income reached a record $2.9 billion, representing an operating margin of 28%. Taxes, interest, and other items resulted in a net expense of approximately $335 million. Diluted earnings per share for the fourth quarter reached a record $1.53, an increase of over 40% year-over-year, reflecting the strong execution of our business model and operational leverage.

Now turning to our reporting segments, first is the Data Center segment, which achieved record revenue of $5.4 billion, a year-on-year increase of 39% and a quarter-on-quarter increase of 24%, primarily driven by strong demand for EPYC processors and the continued mass production of MI350 products. The operating income for the Data Center segment was $1.8 billion, accounting for 33% of revenue, compared to $1.2 billion (30%) in the same period last year, reflecting higher revenue and the release of inventory reserves, partially offset by ongoing investments supporting our AI hardware and software roadmap.

The Client and Gaming segment generated revenue of $3.9 billion, a year-on-year increase of 37%, primarily driven by strong demand for our leading AMD Ryzen processors. Quarter-on-quarter, revenue decreased by 3% due to a decline in semi-custom revenue. The Client business achieved record revenue of $3.1 billion, a year-on-year increase of 34% and a quarter-on-quarter increase of 13%, driven by strong demand from channels and PC OEMs, as well as continued market share growth. The Gaming business revenue was $843 million, a year-on-year increase of 50%, primarily driven by higher semi-custom revenue and strong demand for AMD Radeon GPUs. Quarter-on-quarter, Gaming revenue decreased by 35% due to a decline in semi-custom sales. The operating income for the Client and Gaming segment was $725 million, accounting for 18% of revenue, compared to $496 million (17%) in the same period last year.

The Embedded segment generated revenue of $950 million, a year-on-year increase of 3% and a quarter-on-quarter increase of 11%, as demand strengthened across multiple end markets. The operating income for the Embedded segment was $357 million, accounting for 38% of revenue, compared to $362 million (39%) in the same period last year.

Before I review the balance sheet and cash flow, I want to remind everyone that we completed the sale of ZT Systems' manufacturing business to Sanmina in late October. The financial performance of ZT's manufacturing business in the fourth quarter is reported separately in our financial statements as discontinued operations and is not included in our non-GAAP financial data.

Turning to the balance sheet and cash flow. This quarter, our continuing operations generated a record $2.3 billion in cash and a record $2.1 billion in free cash flow. Inventory increased by approximately $607 million quarter-over-quarter to $7.9 billion to support strong data center demand. As of the end of the quarter, cash, cash equivalents, and short-term investments totaled $10.6 billion. For the year, we repurchased 12.4 million shares, returning $1.3 billion to shareholders. As of year-end, we had $9.4 billion remaining in our stock repurchase authorization.

Now turning to our outlook for the first quarter of 2026. We expect revenue to be approximately $9.8 billion, plus or minus $300 million, which includes about $100 million in revenue from the sale of MI308 to China. At the midpoint of our guidance, revenue is expected to grow 32% year-over-year, driven by strong growth in the data center, client, and gaming segments, along with moderate growth in the embedded segment. On a quarter-over-quarter basis, we expect revenue to decline by about 5%, primarily due to seasonal declines in the client, gaming, and embedded segments, partially offset by growth in the data center segment. Additionally, we expect a non-GAAP gross margin of approximately 55% for the first quarter, non-GAAP operating expenses of about $3.05 billion, non-GAAP other net income of approximately $35 million, a non-GAAP effective tax rate of 13%, and diluted shares outstanding of around 1.65 billion.

Lastly, 2025 was an outstanding year for AMD, reflecting the execution of plans across the business, achieving strong revenue growth, improved profitability, and cash generation, while actively investing in AI and innovation to support our long-term growth strategy. Looking ahead, we are in a very favorable position to continue achieving strong revenue growth and profit expansion in 2026, focusing on driving data center AI growth, operational leverage, and delivering long-term value to shareholders.

Now, I will hand the call back to Matt for the Q&A session.

Q&A Session

Operator: Thank you, Matt. We will now begin the Q&A session. The first question comes from Aaron Rakers of Wells Fargo. Please go ahead.

Aaron Rakers (Wells Fargo): Thank you for taking my question. Lisa, at the analyst day last November, you seemed to endorse market expectations for AI revenue in the high-$20-billion range in 2027. I know you reiterated a strong double-digit growth path today. So, can you talk about what you are seeing in terms of customer engagement? How are these engagements expanding? I believe you have hinted at multiple multi-gigawatt opportunities in the past. Can you elaborate on the demand you are seeing for the MI455 and Helios platforms as we look towards the second half of the year?

Lisa Su: Okay, Aaron. Thank you for your question. First, the development of the MI450 series is progressing very well. We are very pleased with the current progress. We are on track to launch and begin production in the second half of this year. Regarding the ramp-up shape and customer engagement, I would say customer engagement continues to show good progress. Clearly, we have a very strong relationship with OpenAI. We are planning to ramp up capacity starting in the second half of this year and continuing into 2027. This is proceeding as planned.

We are also working closely with many other customers, and given the strong performance of the product, they are very interested in quickly deploying the MI450. We see this opportunity in both inference and training. Therefore, we feel very good about the overall growth of data centers in 2026. Regarding 2027, we talked about tens of billions of dollars in data center AI revenue, and we are very confident about this.

Operator: Thank you. The next question comes from Tim Arcuri of UBS.

Timothy Arcuri (UBS): Thank you. Jean, I was wondering if you could provide some details about the guidance for March. I know you told us that Venice would see a slight year-over-year increase. It sounds like the client business is experiencing a seasonal decline, and I understand it might be down around 10%. So, could you give us an update on the other parts? Also, could you let us know about the ramp-up of data center GPUs for the year? I know this year is focused on the second half, but I think everyone expects revenue to be around $14 billion. I'm not asking you to confirm that number, but it would be great if you could give us a rough idea.

Jean Hu: Hi, Tim. Thank you for your question. We provide guidance on a quarterly basis, but I can give you some insight into the Q1 guidance. First, sequentially we are guiding a decline of about 5%, but the data center will actually increase. Within that, our CPU business typically sees a high single-digit seasonal decline; in the current guidance, however, we actually expect very nice quarter-over-quarter growth in CPU revenue. Additionally, we are very optimistic about the increase in data center GPU revenue, including the China portion. So the overall data center guidance is very good. In the client segment, we do expect a seasonal quarter-over-quarter decline, and the embedded business also declines seasonally.

Lisa Su: Tim, perhaps I can add a bit about the outlook for the year. When we look at the full year, an important point is that we are very optimistic about this year. If you look at the key themes, we will see very strong growth in data centers, and that spans two growth vectors. Server CPU growth is actually very strong. As we have said, CPUs are very important as AI continues to ramp. We have seen CPU orders continue to strengthen over the past few quarters, especially in the last 60 days, so we view this as a strong growth driver for us. As Jean mentioned, we see server CPUs growing from Q4 to Q1, which is typically a period of seasonal decline, and that growth will continue throughout the year.

Then there is data center AI, for which this is a very important year, truly a turning point. The MI355 is performing well, and we are satisfied with its performance in Q4. We will continue to push its ramp in the first half of the year. But as we enter the second half, the MI450 is the real turning point for us. Its revenue will start in the third quarter, with a significant sales ramp in the fourth quarter and into 2027. That gives you some sense of how the data center ramps through the year.

Operator: The next question comes from Vivek Arya of Bank of America.

Vivek Arya (Bank of America): Thank you. First, could you clarify your assumptions regarding the sales of the MI308 in China after Q1? Then, Lisa, specifically for 2026, can your data center revenue grow at a target growth rate of over 60%? I know this is a multi-year target, but do you believe there are enough drivers (whether in server CPUs or GPUs) to allow you to achieve that target growth even in 2026?

Lisa Su: Okay, Vivek. Let me first talk about China, as this is important, and we want to make sure this is clear. We are pleased to have some MI308 sales in the fourth quarter. This is actually through licenses approved in collaboration with the government. These orders are actually from early 2025. So we see some revenue in Q4 and forecast about $100 million in revenue for Q1. We currently do not forecast any additional revenue from China, as this is a very dynamic situation.

Given that this is a dynamic situation, we have submitted the license application for the MI325, and we are continuing to work with customers to understand their needs. We believe it is prudent not to forecast any additional revenue beyond the $100 million mentioned in our Q1 guidance. Now, regarding the overall data center, as I mentioned in response to Tim's question, we are very optimistic about data centers. I believe the combination of drivers for our CPU franchise—the EPYC product line, including Turin and Genoa—continues to ramp smoothly. In the second half of the year, we will launch Venice, and we believe this actually expands our lead. The ramp of the MI450 is also very significant in the second half of 2026. We clearly do not provide specific guidance by segment, but achieving a long-term target of greater than 60% in 2026 is certainly possible.

Operator: The next question comes from CJ Muse of Cantor.

CJ Muse (Cantor): Good afternoon. Thank you for taking my question. I'm curious about server CPUs: given how extremely tight capacity is, how well positioned are you to secure matching capacity from TSMC and elsewhere? How long does it take to see wafer output? How should we view the growth trajectory for the full 2026 calendar year? And as part of that, it would be very helpful if you could discuss how we should think about the pricing inflection point.

Lisa Su: CJ, there are a few points regarding the server CPU market. First, we believe the overall total addressable market (TAM) for server CPUs will grow in 2026, at, let's say, a strong double-digit rate, as we mentioned, given the relationship between CPU demand and the overall AI ramp. So I think this is positive. Regarding our ability to support this demand, we have seen this trend over the past few quarters, and we have increased our supply capacity for server CPUs, which is one of the reasons we were able to raise our Q1 server business guidance. We believe we can continue growing throughout the year. There is no doubt that demand remains strong, so we are working with our supply chain partners to increase supply. But from what we see currently, the overall server situation is strong, and we are increasing supply to meet demand.

Matt Ramsay: CJ, do you have a follow-up question?

CJ Muse: I do. Perhaps asking Jean, can you talk about the gross margin for the full year? As you balance the strengthened server CPUs with the GPUs that may accelerate in the second half, what kind of framework should we use? Thank you very much.

Jean Hu: Yes, thank you for your question. We are very satisfied with the Q4 gross margin performance. The Q1 guidance is 55%, which is actually a year-over-year increase of 130 basis points, even as we significantly increased MI355 production year-over-year while still maintaining high gross margins. We benefit from a very favorable product mix across all businesses. If you think about it, in the data center, our new next-generation products and the MI355 ramp contribute to the gross margin. In the client segment, we continue to move towards a high-end mobile mix and gain momentum in the commercial business, and our client gross margin has been improving nicely. Additionally, we are certainly seeing a recovery in the embedded business, which also contributes to the margin increase. The tailwinds we are seeing will persist in the coming quarters. When the MI450 ramps in Q4, our gross margin will largely be driven by product mix; when we reach that stage, I will provide more details. But overall, we feel very good about the gross margin trajectory this year.

Operator: The next question comes from Joe Moore of Morgan Stanley.

Joseph Moore (Morgan Stanley): Okay, thank you. Regarding the ramp-up of MI455, will 100% of the business be in rack form? Will there be an eight-way server business around that architecture? Do you recognize revenue when you ship to rack builders, or is there something else we need to understand?

Lisa Su: Joe, we do have multiple variants of the MI450 series, including eight-way GPU forms, but by 2026, I would say the vast majority will be rack-level solutions. Yes, we recognize revenue when we ship to rack builders.

Joseph Moore: Okay, great. So can you talk about any risks you might face once the chips are produced and converted into racks? Your competitors faced some issues last year, and you mentioned that you learned from that. Are you ensuring that these issues won't arise by pre-building racks? Do we need to understand any risks in this area?

Lisa Su: Joe, the main point is that development is going very smoothly. The MI450 series and the Helios rack are on schedule. We have conducted a large number of tests, at both the rack level and the chip level, and so far everything is going well. We have incorporated a great deal of customer feedback into our test coverage, which allows us to run many tests in parallel. Our expectation is that we will release as planned in the second half of the year.

Operator: The next question comes from Stacy Rasgon of Bernstein Research.

Stacy Rasgon (Bernstein): Hi, everyone. Thank you for taking my question. First, Lisa, I want to ask about operating expenses (OpEx). It seems like you are raising guidance every quarter, and then the actual results are even higher, and then you raise guidance again. Given the growth trajectory, I understand you need to invest. But how should we view the ramp-up of OpEx and the spending numbers, especially as GPU revenue begins to turn? Will we gain leverage in this area? Or should we expect OpEx to grow significantly as AI revenue starts to ramp up?

Lisa Su: Okay, Stacy. Thank you for your question. Regarding OpEx, our current situation is that we have very high confidence in the existing roadmap. So in 2025, as revenue increases, we are indeed increasing OpEx investment, and I think this is for completely the right reasons. Entering 2026, as we see the expected significant growth, we should absolutely see leverage. The way to think about this is that we have always indicated in our long-term model that the growth of OpEx should be slower than the growth of revenue, and we expect this to be the case in 2026 as well, especially as we enter the second half of the year and see revenue turning. But at this stage, if you look at our free cash flow generation and overall revenue growth, I think investing in OpEx is absolutely the right approach.

Stacy Rasgon: Thank you. For my follow-up question, I actually want to seek two brief answers. First, is the $100 million revenue from China in Q1 also on a zero-cost basis like Q4, which is a headwind for margins? Second, I know you don’t give us specific AI numbers, but since a year has passed, can you tell us the scale of the Instinct data for the full year of 2025?

Jean Hu: Stacy, let me answer your first question about the $100 million of China revenue in Q1. The $360 million inventory reserve released in Q4 relates not only to the Q4 China revenue but also covers the $100 million of MI308 revenue we expect to ship to China in Q1. So the Q1 gross margin guidance is very clean.

Lisa Su: Stacy, regarding your second question, as you know, we do not provide guidance by business segment, but to help you build your model, I think if you look at the Q4 data center AI numbers, even excluding the China numbers (which are non-recurring), you will still see growth. You will see growth from Q3 to Q4. This should help with your modeling.

Operator: The next question comes from Joshua Buchalter of TD Cowen.

Joshua Buchalter (TD Cowen): Hey, everyone, thanks for taking my question. I want to ask about the client business. The segment performed quite well in the fourth quarter, and I know you gained market share through Ryzen, but given what we are seeing in the memory market, there are a lot of concerns about cost inflation and potential pull-ins. Has there been any change in order patterns this quarter? And perhaps from a broader perspective, how do you view client growth and market health in 2026?

Lisa Su: Thank you for your question, Josh. The client market has performed exceptionally well for us throughout 2025, both in terms of ASP (average selling price) moving towards the high-end mix and strong unit growth. As we enter 2026, we are indeed focused on the development of the business. I believe the PC market is an important market. Based on everything we see today, given some inflationary pressures in commodity pricing (including memory), we may see a slight decline in the potential market size (TAM) for PCs. Our modeling for this year, considering everything we see, suggests that the second half will show slightly more seasonal weakness than the first half. Even in a declining PC market environment, we believe we can grow our PC business. Our focus area is the enterprise market. This is where we made very good progress in 2025, and we expect to do so in 2026 as well, continuing to grow in the high-end part of the market.

Joshua Buchalter: Thank you for your clarification. I want to ask about the Instinct series. We see that your large GPU competitor has struck a deal with an SRAM-based spatial architecture supplier, and reportedly OpenAI has also made a connection with one. Can you talk about the impact on competition? I think you are doing well in inference, partly due to your leadership in HBM content. If you are seeing this trend as well, I would like to know if you can discuss whether this seems to be driven by demand for low-latency inference, and how Instinct is positioned to serve this demand.

Lisa Su: Josh, I think this is an evolution you can expect as the AI market matures. What we are seeing is that as inference ramps, the dollars per token, or the efficiency of the inference stack, becomes increasingly important. As you know, with our chiplet architecture, we have strong capabilities to optimize across the different stages of training and inference. So I think it's very normal, and going forward you'll see more products optimized for specific workloads. You can use GPUs and other, more ASIC-like architectures to achieve this. I believe we have a complete computing stack to do all of this. From this perspective, we will continue to focus on inference, as we see it as a significant opportunity in addition to enhancing our training capabilities.

Operator: The next question comes from Ben Reitzes of Melius Research.

Ben Reitzes (Melius Research): Hey, thank you. Lisa, I want to ask about OpenAI. I'm sure the noise from the outside is nothing new to you. As far as you know, is everything on track for the 6-gigawatt project starting in the second half of the year and the three-and-a-half-year timeline? Is there anything else you would like to add about this relationship?

Lisa Su: Ben, I want to say that we are actively collaborating with OpenAI and our cloud service provider (CSP) partners to deliver the MI450 series and achieve capacity ramp-up. The ramp-up plan is scheduled to start in the second half of the year. The MI450 is performing excellently. Helios is doing well. We are engaged in deep joint development with all these parties. Looking ahead, we are optimistic about the ramp-up of the MI450 with OpenAI. But I also want to remind everyone that we have a broad customer base that is very excited about the MI450 series. Therefore, in addition to the work we are doing with OpenAI, there are many customers we are also working hard to ramp up within that timeframe.

Ben Reitzes: Okay, thank you. I want to turn to server CPUs and talk about the comparison between x86 and ARM. There is a viewpoint that, from a broader perspective, x86 has a particular advantage in agentic applications. Do you agree with that? What are you seeing from customers? Specifically, obviously your large competitors will be selling ARM CPUs separately in the second half of the year. It would be great if you could talk about this competitive landscape with ARM, what NVIDIA is doing, and your thoughts on it.

Lisa Su: Ben, regarding the CPU market, there is currently huge demand for high-performance CPUs. This involves agentic workloads: when you have AI processes or AI agents distributing a lot of work in enterprises, they actually handle many traditional CPU tasks, and most of those tasks run on x86 today. The beauty of EPYC is that we have optimized it for these workloads. We have the best cloud processors and the best enterprise processors, and we also have low-cost variants for storage and other elements. When we consider the entire AI infrastructure that needs to be built, all of this will play a role, and CPUs will continue to be an important component of the AI infrastructure ramp. This is the multi-year CPU cycle we described at our Analyst Day in November, and we continue to see it. I believe we have optimized EPYC for all these workloads, and we will continue to work with customers to expand our EPYC footprint.

Operator: The next question comes from Tom O'Malley at Barclays.

Tom O'Malley (Barclays): Hey, Lisa. How are you? You mentioned earlier that memory is a key source of cost inflation. Different customers take different approaches, and so do different suppliers. But could you talk about when your memory procurement took place, especially for HBM? Was it completed a year ago, or six months ago? Different accelerator vendors have talked about different timelines, and I would love to hear when you made your purchases.

Lisa Su: Given the lead times for HBM, wafers, and the other components in the supply chain, we work with suppliers on a multi-year horizon, covering the demand we see, how we ramp, and how we keep our development tightly aligned. So I feel very good about our supply chain capabilities; we have been planning for this ramp. Independent of current market conditions, we have been planning for a significant ramp in our CPU and GPU businesses for the past several years. From this perspective, I believe we are well positioned to achieve significant growth in 2026, and we are also signing multi-year agreements that extend beyond this timeframe, as the supply chain remains tight.

Tom O'Malley: Thank you. As a follow-up, you see various forms of system accelerators in the industry, such as KV cache offload and more standalone ASIC-style compute. Looking at what competitors are doing and then at your upcoming first-generation system architecture, could you take some time to discuss whether you think you will follow these kinds of architectural changes, or do you think you will head in a different direction? What are your thoughts on the evolution of system-based architecture and the related products and chips within it?

Lisa Su: Tom, I think we have very flexible architectural capabilities, including our chiplet architecture and flexible platform architecture, which allows us to provide different system solutions for different needs. I think we are very clear that there will be different solutions, so I often say there is no "one-size-fits-all" solution, and I will say it again, there is no "one-size-fits-all." Even so, it is clear that rack-level architecture is very, very good for high-end applications such as distributed inference and training. But we also see opportunities to leverage enterprise AI to use some other form factors, so we are investing across the spectrum.

Operator: The next question comes from Ross Seymore at Deutsche Bank.

Ross Seymore (Deutsche Bank): Hi. Thank you for taking my last few questions. I think my first question is about gross margins. As you transition from MI300 to 400 and then ultimately to 500, have you seen any changes in gross margins during this period? In the past, you've talked about optimizing dollar profit amounts rather than percentages. But in terms of percentages, as you transition from one generation to the next, is it increasing, decreasing, or fluctuating? I’d like to understand the trajectory there.

Jean Hu: Ross, thank you for your question. At a very high level, with each generation of products, we actually provide more capabilities and more memory to better assist our customers. So overall, when you provide more capabilities to customers, the gross margin for each generation of products should improve. But typically, when a generation of products first ramps up, the gross margin tends to be lower. As you reach scale, improve yields, enhance testing, and overall performance, you will see the gross margin for each generation of products increasing. So it’s a dynamic gross margin, but over the long term, you should expect the gross margin for each generation of products to be higher.

Ross Seymore: Thank you, Jean. While gaming is a small part of your business, it is quite volatile, and you're giving a view further out than usual. How large is the decline you expect this year? In 2025 you expected it to be flat, but it actually grew by 50%, a nice positive surprise. Now that you say it will decline this year, with the next-generation Xbox ramping in 2027, I'd like to understand the annual trajectory there.

Jean Hu: Yes, Lisa can add to that. 2026 is actually the seventh year of the current product cycle. Typically, when you are at this stage of the cycle, revenues tend to decline. As Lisa mentioned in her prepared remarks, we do expect significant double-digit declines in semi-custom revenues in 2026. For the next generation...

Lisa Su: Yes, I think we will definitely talk about this later, but as our next-generation products ramp up, you can expect the situation to reverse.

Matt Ramsay: Operator, I think we have time for one more question. Thank you.

Operator: The last question comes from Jim Schneider of Goldman Sachs.

Jim Schneider (Goldman Sachs): Good afternoon. Thank you for taking my question. Regarding the ramp of your rack-scale systems in the second half of the year, do you anticipate any supply bottlenecks that could limit revenue growth? In other words, perhaps you could discuss whether you expect supply to really constrain growth in Q3 relative to Q4.

Lisa Su: Jim, we are planning at the level of every component. Regarding our data center AI ramp, I don't believe we will face supply constraints against the ramp plans already in place. I believe we have an aggressive ramp plan, and I think it is a very achievable one. Given the scale AMD is reaching, our clear top priority is to ensure that the data center ramp goes very smoothly, across both data center AI (on the GPU side) and CPUs.

Jim Schneider: Thank you. As a follow-up to the previous question regarding OpEx, could you discuss what some of your largest investment areas are for 2025? What are the largest incremental OpEx investment areas for 2026? Thank you.

Jean Hu: Yes, Jim. Regarding investments for 2025, the priority and largest investment is in data center AI. We have accelerated our hardware roadmap. We have expanded our software capabilities. We also acquired ZT Systems, which adds significant system-level solutions and capabilities. These are the main investments for 2025. We are also investing heavily in marketing to really expand our marketing capabilities, support revenue growth, and expand our commercial business and enterprise business for our CPU franchise. In 2026, you should expect that we will continue to invest aggressively, but as Lisa mentioned earlier, we do expect revenue growth to outpace the increase in operating expenses to drive expansion in earnings per share.

Matt Ramsay: Alright. Thank you all for joining the call. Operator, I think we can conclude the meeting now. Thank you. Good evening.

Operator: Thank you. Ladies and gentlemen, the Q&A session has concluded, and today’s conference call has also ended. You may now disconnect. Have a great day.