
Intel, "returning" to DRAM?

Intel and Sandia National Laboratories have made significant progress in memory technology, successfully turning DRAM research and development results into a new type of memory aimed at addressing memory bandwidth and latency issues. This has sparked speculation about Intel's potential return to the DRAM market. Intel dominated the DRAM market in the 1970s but exited in 1985 under intensifying competition. With the DRAM industry now experiencing an AI-driven surge in demand, Intel's movements are drawing renewed attention.
Recently, a report from the research institution Sandia National Laboratories has sparked heated discussions in the industry.
The report pointed out that the laboratory has made significant progress in memory technology in collaboration with Intel. Their joint "Advanced Memory Technology" (AMT) project has successfully transformed DRAM-related research and development results into new memory technologies, aimed at addressing memory bandwidth and latency challenges in the critical missions of the U.S. National Nuclear Security Administration (NNSA).
This news has brought forth speculation about "whether Intel will return to the DRAM market."
Although the news did not explicitly announce that Intel would return to independent DRAM manufacturing on a large scale, the signals it conveys are intriguing. Especially considering Intel's historical background and the current upward trend in the DRAM industry driven by the AI supercycle, this development becomes even more subtle.
The possibility of this former storage giant "returning" is increasingly worth discussing.
The Rise and Fall of a Storage Giant
Intel's connection to DRAM can be traced back to the early days of the industry.
In 1970, Intel launched the 1103 chip, which was the world's first commercially successful DRAM product. With its comprehensive advantages over magnetic core memory in terms of price, density, and logical compatibility, it quickly reshaped the landscape of the storage industry.

In the 1970s, Intel once held a 90% share of the global DRAM market, becoming the undisputed industry leader.
At that time, the 1103 chip not only won the favor of mainstream computer manufacturers such as HP, DEC, and Honeywell but also established the development model for DRAM, laying the foundation for subsequent technological evolution.
However, the cyclical nature of the DRAM industry and the brutality of market competition prevented Intel's dominance from continuing. In the 1980s, Japanese manufacturers, supported by government backing, excellent manufacturing yields, and aggressive pricing strategies, rose to prominence. Companies like NEC and Toshiba quickly captured market share, and by 1987, seven of the top ten global DRAM suppliers were from Japan. Intel, facing cost disadvantages, fell into massive losses and ultimately announced its exit from the DRAM business in 1985, shifting its focus to the CPU sector. This decision was described by The Economist as "the most significant strategic shift in semiconductor history."
In the following decades, the global DRAM landscape underwent changes with the rise of Korean manufacturers and industry consolidation, ultimately forming an oligopoly where Samsung, SK Hynix, and Micron monopolized over 95% of the market share.
It is well known that the storage industry is strongly cyclical, typically swinging through a severe supply-demand fluctuation every 4-5 years. After a deep cyclical downturn from late 2021 through 2023, the explosion of generative AI has completely changed the demand landscape.
Currently, the DRAM industry is entering a new round of structural opportunity. The extreme demands that AI workloads place on memory bandwidth and capacity are driving explosive growth in data-center HBM and DRAM demand, pushing the storage industry into an unprecedented super cycle.
According to TrendForce's forecast, in the first quarter of 2026, as memory makers shift capacity towards server and HBM applications, contract prices for general DRAM are expected to rise 55-60% quarter-on-quarter, with server DRAM prices up more than 60% quarter-on-quarter, producing a booming market characterized by supply shortages. Market research indicates that DRAM industry revenue will recover to around 100 billion USD in 2025 and is expected to reach 150 billion USD by 2029, with data center and automotive applications as the core drivers, growing at compound annual rates of 25% and 38%, respectively.
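As a rough sanity check, the forecast figures above can be turned into implied growth rates. A minimal sketch in Python, using only the numbers quoted in the article:

```python
# Back-of-the-envelope check of the forecast figures cited above.
# All inputs are the article's quoted numbers, not independent data.

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate implied by growing start -> end over `years`."""
    return (end / start) ** (1 / years) - 1

# Industry revenue: ~$100B in 2025 growing to ~$150B by 2029 (4 years).
overall = cagr(100, 150, 4)
print(f"Implied overall DRAM revenue CAGR 2025-2029: {overall:.1%}")  # ~10.7%

# A +55% quarter-on-quarter contract price move compounds quickly:
# two such quarters in a row would multiply prices by ~2.4x.
two_quarters = 1.55 ** 2
print(f"Two consecutive +55% quarters: x{two_quarters:.2f}")
```

The implied ~10.7% overall CAGR sits well below the 25% and 38% segment rates quoted for data centers and automotive, which is consistent with those segments being the growth drivers inside a larger, slower-growing base.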
Amid such an industry tailwind, and with Intel under multiple pressures, including a squeeze in the CPU market, losses in its foundry business, and delays in AI chips, finding a new growth curve may be an urgent need.
AMT Project: Unveiling Intel's "Return" to DRAM
The collaboration between Sandia National Laboratories and Intel provides the most direct technical evidence for this speculation.
It is understood that as part of the post-exaflop computing program, the AMT project is led by Sandia National Laboratories, in collaboration with Lawrence Livermore National Laboratory, Los Alamos National Laboratory, and Intel. The first two rounds of cooperation focused on research and development, while the third round has entered the productization stage, with early investments gradually translating into actual results.
The next-generation DRAM bonding (NGDB) program launched by Intel in this project showcases a disruptive technological approach. Unlike traditional memory architectures, NGDB adopts a new memory organization and stacking assembly method, significantly enhancing DRAM performance while achieving reduced power consumption and cost optimization.
More critically, it breaks the performance trade-off between HBM and DDR DRAM, addressing the common pain point of "trading capacity for bandwidth" in current high-bandwidth memory, allowing more application scenarios to benefit from the advantages of high-bandwidth memory. Gwen Voskuil, Chief Technologist at Sandia National Laboratories, stated, "This technology will promote the wider application of high-bandwidth memory in systems constrained by capacity and power."
From a technical perspective, Intel has developed a new stacking method and DRAM organizational structure, with prototype products not only overcoming the memory capacity limitations of existing technologies but also achieving functional validation, confirming the feasibility of large-scale production of this technology.
Joshua Fryman, Chief Technology Officer of Intel Government Technology, emphasized: "Standard memory architectures cannot meet the demands of artificial intelligence; NGDB defines a new approach that will accelerate our journey into the next decade." As an Intel Fellow, Fryman's further remarks carry even deeper significance: "We are rethinking the organization of DRAM, fundamentally advancing computer system architecture. The goal is to achieve an order-of-magnitude performance improvement and incorporate the innovation into industry standards."
These technological breakthroughs and strategic statements all imply that Intel's ambitions in the DRAM field are not merely a short-term trial but are focused on a long-term industry layout.
Saimemory's Low-Power Revolution: Re-entering the Storage Race
If the collaboration with Sandia National Laboratories is a groundwork for cutting-edge technology, then Intel's partnership with SoftBank Group can be seen as a significant step forward in the DRAM race, showcasing a clearer commercialization path.
At the end of 2024, Intel and Japan's SoftBank announced the establishment of a joint venture, Saimemory, partnering with institutions such as the University of Tokyo and RIKEN. Positioned as a "low-power storage revolutionary" for the AI era, the venture is dedicated to developing stacked DRAM solutions as an alternative to HBM.
Currently, AI processors heavily rely on HBM chips, but HBM has inherent flaws such as complex manufacturing processes, high costs, significant power consumption, and heat generation, with the market dominated by three major manufacturers: Samsung, SK Hynix, and Micron, leading to a continuous supply shortage.
The core mission of Saimemory is to break this pattern: by vertically stacking multiple DRAM chips and optimizing interconnection methods using Intel's EMIB bridging technology, it aims to double the storage capacity compared to current advanced memory (targeting a single chip of 512GB), reduce power consumption by 40%-50%, and achieve production costs at only 60% of HBM. This approach avoids the complex silicon via processes that HBM relies on, focusing more on architectural optimization and energy efficiency breakthroughs.
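The targets above are all relative figures. A small sketch, normalizing everything against an HBM baseline and using only the article's stated goals (not measured results), makes the comparison concrete:

```python
# Saimemory's stated targets, normalized against an HBM baseline of 1.0.
# These are the article's claimed goals, not measured results.
saimemory = {
    "capacity": 2.0,        # "double the storage capacity"
    "power": (0.50, 0.60),  # 40-50% reduction -> 50-60% of baseline remains
    "cost": 0.60,           # "production costs at only 60% of HBM"
}

lo, hi = saimemory["power"]
print(f"Capacity: {saimemory['capacity']:.0f}x HBM")
print(f"Power:    {lo:.0%}-{hi:.0%} of HBM")
print(f"Cost:     {saimemory['cost']:.0%} of HBM ({1 - saimemory['cost']:.0%} saving)")
```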
Unlike companies like Samsung and NEO Semiconductor that focus on capacity enhancement with 3D stacked DRAM technology, Saimemory directly addresses the core pain point of high electricity costs in AI data centers. Its technological path is compatible with existing AI processor interfaces, requiring no large-scale hardware modifications, significantly reducing customer migration costs.
In terms of resource investment, the project's total investment is expected to reach 10 billion yen (approximately 70 million USD), with SoftBank initially investing 3 billion yen to become the largest shareholder and committing to priority procurement. Fujitsu, Nikkon Electric Industrial, and others are also participating, while the Japanese government plans to provide over 5 billion yen in subsidies, underscoring Japan's strategic desire to revitalize its semiconductor memory industry. According to the plan, Saimemory aims to complete prototype design and production evaluation by 2027 and strive for commercialization before 2030, with priority supply to the AI training data centers being built by SoftBank.
This collaboration represents not only an extension of Intel's IDM 2.0 strategy but also a reactivation of its technology assets. By opening up core technologies such as chip stacking and packaging, Intel seeks to rebuild its influence in the storage ecosystem rather than relying solely on its manufacturing capabilities. Despite challenges such as patent barriers from the HBM giants, control over foundry yield, and an immature ecosystem, Saimemory's differentiated low-power route has already shown breakthrough potential, and could build unique competitiveness in scenarios like edge computing and small-to-medium AI servers, perhaps even triggering a divergence of technology routes within the storage industry.

Overall, this "capital + technology" combination demonstrates Intel's ambition to engage deeply with the high-end DRAM market in another form.
Preserving the Technical Spark of eDRAM
Beyond its main layout in DRAM, Intel's technological accumulation in eDRAM (embedded DRAM) provides another support for its return to the storage track.
As a storage technology that integrates DRAM cells directly onto the processor die, eDRAM, with its low latency, high bandwidth, and high density, is regarded by the industry as one of the effective means of bridging the "memory wall" between processors and memory, and is now regaining industry attention.
Compared to SRAM, eDRAM storage unit structures are simpler (1T-1C structure), with lower unit capacity costs, and can achieve about six times the capacity of SRAM within the same chip area; compared to traditional DRAM, its data transmission paths are shorter, with significant advantages in latency and power consumption.
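The density claim above follows directly from cell structure. A minimal sketch (the 6-transistor SRAM cell count is standard background knowledge, not from the article, and transistor count is only a rough proxy for silicon area):

```python
# Naive density comparison implied by the cell structures described above.
# A classic SRAM cell uses 6 transistors (6T); an eDRAM cell uses 1 transistor
# plus 1 capacitor (1T-1C). Counting transistors ignores the capacitor's area
# and layout rules, so this only motivates the "~6x capacity" figure cited in
# the text rather than proving it.

SRAM_TRANSISTORS_PER_BIT = 6
EDRAM_TRANSISTORS_PER_BIT = 1

naive_ratio = SRAM_TRANSISTORS_PER_BIT / EDRAM_TRANSISTORS_PER_BIT
print(f"Naive same-area capacity ratio, eDRAM vs SRAM: ~{naive_ratio:.0f}x")
```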
Intel has long been deeply involved in the eDRAM field.
As early as over a decade ago during the Haswell and Broadwell processor era, Intel integrated 128MB eDRAM as L4 cache in high-end processors, significantly enhancing integrated graphics performance.
For example, in the 2013 Haswell architecture processors, the high-end integrated graphics Iris Pro Graphics integrated 128MB eDRAM as L4 cache, achieving a read/write speed of 102.4GB/s at 1W power consumption through the on-package I/O (OPIO) interface, significantly improving graphics processing performance.
Subsequently, Intel's Broadwell-architecture desktop parts further adopted this design, with the 128MB eDRAM L4 cache optimized through independent read/write buses and a multi-bank design, achieving a load latency of 36.6 nanoseconds and demonstrating excellent stability under high-bandwidth loads.
In the high-performance computing field, Intel's Xeon Phi processors paired with 16GB eDRAM provide efficient cache support for scientific computing, data analysis, and other tasks.
Although Intel once slowed down the pace of eDRAM due to considerations of manufacturing costs, the demand for extreme performance in the AI era has brought this technology back to the center of its arsenal.
Admittedly, eDRAM development faces multiple challenges, including complex process integration, refresh power consumption, and yield control, but advances in semiconductor processes, together with 3D stacking and new capacitor materials, are steadily breaking through these long-standing bottlenecks. The demand for low-latency, high-bandwidth storage in AI training and inference and in high-performance computing is becoming increasingly urgent, which is also expanding eDRAM's application landscape in graphics processing, embedded systems, and edge computing. Intel's deep technology reserves in eDRAM thus become a key asset for its return to the high-end storage track, giving it more competitive technology options and greater strategic flexibility.
Final Thoughts
Looking back at history, beyond its early brilliance in DRAM, Intel also spent years exploring NAND flash: its NAND process scaling (down to 25 nanometers) helped drive the popularization of SSDs, and in 2015 it jointly released 3D XPoint technology with Micron, branded as "Optane." Although it ultimately exited the NAND business and discontinued Optane amid commercialization difficulties, its accumulated expertise in storage architecture, chip stacking, advanced packaging, and storage patents remains deep, and may lay a crucial foundation for its current re-entry into the DRAM business.
Especially in the AI era, memory is not only a game of capacity and cost but also a comprehensive competition of performance, power consumption, and architecture. Currently, the AI-driven transformation of the storage industry is reshaping the industry landscape. The high growth expectations in the DRAM industry, the monopolistic pain points in the HBM market, and the technological revival of eDRAM provide Intel with an excellent opportunity to return to the storage track.
From the technological breakthroughs at Sandia National Laboratories to the Saimemory project in collaboration with SoftBank, and the technological reserves in the eDRAM field, Intel's current movements in the storage sector resemble a multi-point layout, testing the waters: on one hand, it maintains participation in cutting-edge technology through collaborations with national laboratories, and on the other hand, it explores alternative product paths through joint ventures, while still retaining the technological spark of integrated solutions like eDRAM internally.
However, the road back is not smooth. The technological moats and ecosystem barriers of giants like Samsung and SK Hynix, practical challenges such as patent disputes, yield control, and cost optimization, and the lessons of Intel's past commercialization failures will all test its strategic determination and execution.
Currently, there is still no conclusion on whether "Intel will officially return to the DRAM track," but it is certain that against the backdrop of storage technology increasingly becoming a core competitive advantage in the AI era, Intel is unlikely to completely abandon its layout in this key field.
In the coming years, regardless of whether Intel ultimately "returns to DRAM" in a traditional form, its frequent moves in the storage field are an undeniable fact. This former storage king may not directly confront giants like Samsung and SK Hynix in terms of technological routes and production capacity, but in the era of AI-driven heterogeneous computing, it is entirely possible for it to redefine its role in the storage field through architectural innovation and system integration capabilities.
As an industry analyst stated: "The future competition in storage is no longer a single process race, but a complex game of architecture, power consumption, ecology, and even geopolitical strategies." In this new multidimensional battlefield, Intel's story in the storage field may just be turning a new page.
Risk Warning and Disclaimer
The market has risks, and investment requires caution. This article does not constitute personal investment advice and does not take into account the specific investment goals, financial conditions, or needs of individual users. Users should consider whether any opinions, views, or conclusions in this article align with their specific circumstances. Investment based on this article is at one's own risk.
