
SK Hynix has announced its intention to launch 24 Gb GDDR7 memory, which will significantly enhance VRAM capacities in upcoming graphics processing units (GPUs). Additionally, the company is preparing for the introduction of HBM4 memory solutions.
Advancements in Memory Technology: SK Hynix’s GDDR7 and HBM4 Initiatives
In its latest update, SK Hynix emphasized a successful quarter marked by a surge in sales of its 12-Hi HBM3e DRAM modules. The company also reported shipment volumes for both DRAM and NAND flash that exceeded expectations, contributing to a robust quarterly performance.
A key highlight from this report is the development of 24 Gb GDDR7 memory modules. These new memory chips are designed not only to bolster next-generation graphics cards but also to provide AI-focused clientele with expanded VRAM capabilities. Notably, the 24 Gb configuration (equivalent to 3 GB) offers a substantial 50% increase in capacity compared to the current 16 Gb (2 GB) dies.
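For readers who want to sanity-check the capacity math, the minimal sketch below converts die density from gigabits to gigabytes and totals the frame buffer for a board with a given number of memory placements. The helper names and the 12-placement layout are illustrative assumptions (the placement count follows the comparison table further down), not a confirmed board design.

```python
# Minimal sketch: converting GDDR7 die density (Gb) to capacity (GB)
# and totaling the frame buffer for an assumed 12-placement board.

def die_capacity_gb(density_gbit: float) -> float:
    """Convert a die density in gigabits to gigabytes (8 bits per byte)."""
    return density_gbit / 8

def frame_buffer_gb(density_gbit: float, placements: int = 12) -> float:
    """Total VRAM for a board with `placements` memory packages."""
    return die_capacity_gb(density_gbit) * placements

current = die_capacity_gb(16)   # 2.0 GB per 16 Gb die
upcoming = die_capacity_gb(24)  # 3.0 GB per 24 Gb die

print(f"16 Gb die -> {current} GB, 12 placements -> {frame_buffer_gb(16)} GB")
print(f"24 Gb die -> {upcoming} GB, 12 placements -> {frame_buffer_gb(24)} GB")
print(f"Capacity uplift: {(upcoming / current - 1) * 100:.0f}%")  # 50%
```

With 12 placements, the same board layout moves from 24 GB of VRAM on 16 Gb dies to 36 GB on 24 Gb dies, which is where the 50% figure comes from.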

Furthermore, GDDR7 is anticipated to deliver impressive speed enhancements, with data rates of 30+ Gbps expected to become the norm as the technology matures. While the arrival of 40+ Gbps options may be on the horizon, the expansion of VRAM capacities represents a significant leap forward.
Samsung has already begun producing similar memory modules, and some have been observed for sale online. The upcoming “RTX 50 SUPER” series from NVIDIA is expected to incorporate these high-capacity memory solutions, which may hit the market later this year or early next year.
Comparative Analysis of Graphics Memory Technologies
| GRAPHICS MEMORY | GDDR7 | GDDR6X | GDDR6 | GDDR5X |
|---|---|---|---|---|
| Workload | Gaming / AI | Gaming / AI | Gaming / AI | Gaming |
| Platform (Example) | GeForce RTX 5090 | GeForce RTX 4090 | GeForce RTX 2080 Ti | GeForce GTX 1080 Ti |
| Die Capacity (Gb) | 16-64 | 8-32 | 8-32 | 8-16 |
| Number of Placements | 12? | 12 | 12 | 12 |
| Gb/s/pin | 28-42.5 | 19-24 | 14-16 | 11.4 |
| GB/s/placement | 128-144 | 76-96 | 56-64 | 45 |
| GB/s/system | 1536-1728 | 912-1152 | 672-768 | 547 |
| Configuration (Example) | 384 IO (12 pcs x 32 IO package)? | 384 IO (12 pcs x 32 IO package) | 384 IO (12 pcs x 32 IO package) | 384 IO (12 pcs x 32 IO package) |
| Frame Buffer of Typical System | 24 GB (16 Gb) / 36 GB (24 Gb) | 24 GB | 12 GB | 12 GB |
| Module Package | 266 (BGA) | 180 (BGA) | 180 (BGA) | 190 (BGA) |
| Average Device Power (pJ/bit) | TBD | 7.25 | 7.5 | 8.0 |
| Typical IO Channel | PCB (P2P SM) | PCB (P2P SM) | PCB (P2P SM) | PCB (P2P SM) |
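The bandwidth rows in the table follow directly from the per-pin data rate: each package exposes a 32-bit interface, so GB/s per placement is roughly Gb/s/pin × 32 ÷ 8, and a 384-bit (12-placement) board multiplies that by twelve. The sketch below reproduces that arithmetic; the constants and sample data rates are assumptions drawn from the table, and the 32 Gb/s GDDR7 value is simply one rate consistent with the table's 128 GB/s-per-placement lower bound, since the GDDR7 figures are still projections.

```python
# Minimal sketch: deriving the table's bandwidth columns from Gb/s/pin,
# assuming 32 IO pins per package and 12 placements (a 384-bit bus).

IO_PER_PLACEMENT = 32
PLACEMENTS = 12

def per_placement_gbps(pin_rate_gbps: float) -> float:
    """GB/s delivered by one package at the given per-pin data rate."""
    return pin_rate_gbps * IO_PER_PLACEMENT / 8

def per_system_gbps(pin_rate_gbps: float) -> float:
    """GB/s across a 12-placement (384-bit) board."""
    return per_placement_gbps(pin_rate_gbps) * PLACEMENTS

for name, rate in [("GDDR5X", 11.4), ("GDDR6", 16), ("GDDR6X", 24), ("GDDR7", 32)]:
    print(f"{name:7s} {rate:>5} Gb/s/pin -> "
          f"{per_placement_gbps(rate):6.1f} GB/s/placement, "
          f"{per_system_gbps(rate):7.1f} GB/s/system")
```

Running this gives 45.6/547.2 GB/s for GDDR5X, 64/768 GB/s for GDDR6, 96/1152 GB/s for GDDR6X, and 128/1536 GB/s for GDDR7 at 32 Gb/s, matching the figures above.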
The company expects the solid performance of its products and its mass-production capabilities to help double HBM sales compared with a year earlier and to generate stable earnings. It will also ensure timely provision of HBM4 in line with customers’ requests in order to remain competitive.
SK Hynix will begin supplying an LPDDR-based module for servers within this year and will prepare GDDR7 products for AI GPUs with capacity expanded from 16 Gb to 24 Gb, aiming to strengthen its leadership in the AI memory market through product diversification.

As the next-generation HBM4 standard gears up to reshape the high-performance computing (HPC) and AI landscape, companies like NVIDIA and AMD are poised to use these memory solutions in upcoming products such as the Rubin and MI400 architectures. Initial shipments, reportedly intended for evaluation, have already commenced, indicating a strong push toward this advanced memory technology. A recent HBM roadmap illustrates how the new standard will be positioned against existing memory solutions and the architectural shifts that may follow.