
Samsung’s high-bandwidth memory (HBM) business is on the verge of a significant turnaround, as AMD has adopted the company’s HBM3E memory in its latest artificial intelligence (AI) accelerators. The development could also pave the way for an eventual qualification from NVIDIA.
The Path to NVIDIA’s Approval: Samsung’s HBM3E Gains Momentum with AMD
Samsung has been navigating a difficult stretch in the AI market, notably as its foundry division has underperformed over the past few quarters. Early optimism around the HBM business emerged when NVIDIA expressed strong interest in its memory. However, after extensive qualification testing, Samsung’s HBM3E failed to meet NVIDIA’s stringent standards, temporarily sidelining the company from a critical segment of the industry.
Recent reports from BusinessKorea indicate that a turnaround may be on the horizon. AMD has officially announced the use of Samsung’s HBM3E 12-Hi stacks in its latest AI accelerator lineup. Furthermore, Samsung and AMD are reportedly also collaborating on the upcoming Instinct MI400 accelerator series, which is set to use HBM4 memory.

AMD’s newly unveiled AI accelerators, the Instinct MI350X and MI355X, are set to feature HBM3E from both Samsung and Micron, with each accelerator carrying 288 GB of memory, a capacity that points to Samsung’s 12-Hi stacks. Moreover, AMD’s plans to scale these AI solutions into rack-scale offerings could significantly increase demand for HBM3E, benefiting Samsung’s market position over time. The partnership marks a notable acknowledgment of Samsung’s capabilities and may draw a favorable response from industry stakeholders.
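As a rough sanity check on that inference (assuming eight HBM3E stacks per GPU, as on earlier Instinct MI300-series parts): a 12-Hi stack of 24 Gb DRAM dies holds 12 × 3 GB = 36 GB, so eight stacks yield exactly 8 × 36 GB = 288 GB, whereas 8-Hi stacks of the same dies would top out at 8 × 24 GB = 192 GB.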
Looking ahead, Samsung anticipates ramping up HBM4 production in the second half of this year. Given AMD’s commitment to HBM4 for its Instinct MI400 AI accelerators, there is strong potential for wider adoption of Samsung’s advanced memory technologies.