What Are HBM, HBM2 and HBM2E? A Basic Definition

(Image credit: AMD)

HBM stands for high bandwidth memory. It is a memory interface for 3D-stacked DRAM (dynamic random access memory) used in some AMD GPUs (aka graphics cards), as well as in the server, high-performance computing (HPC), networking and client spaces. Samsung and SK Hynix make HBM chips.

Ultimately, HBM is meant to offer much higher bandwidth and lower power consumption than the GDDR memory used in most of today’s best graphics cards for gaming.

HBM Specs

                        HBM2 / HBM2E (Current)   HBM        HBM3 (Upcoming)
Max Pin Transfer Rate   3.2 Gbps                 1 Gbps     ?
Max Capacity            24GB                     4GB        64GB
Max Bandwidth           410 GBps                 128 GBps   512 GBps

HBM technology works by vertically stacking memory chips on top of one another in order to shorten how far data has to travel, while allowing for smaller form factors. Additionally, with two 128-bit channels per die, HBM’s memory bus is much wider than that of other types of DRAM memory.  
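The wide-bus design described above is what drives HBM's headline bandwidth numbers. As a rough sketch (assuming a common four-die stack, which the article does not specify), peak bandwidth per stack works out to bus width times per-pin transfer rate:

```python
# Hypothetical sketch: deriving HBM stack bandwidth from the figures above.
CHANNEL_WIDTH_BITS = 128   # each HBM channel is 128 bits wide
CHANNELS_PER_DIE = 2       # two 128-bit channels per die (per the article)
DIES_PER_STACK = 4         # a common HBM stack height (assumption)

# Effective memory bus width of one stack, in bits.
bus_width_bits = CHANNEL_WIDTH_BITS * CHANNELS_PER_DIE * DIES_PER_STACK  # 1024

def stack_bandwidth_gbps(pin_rate_gbps: float) -> float:
    """Peak bandwidth in GBps: bus width (bits) x per-pin rate (Gbps) / 8."""
    return bus_width_bits * pin_rate_gbps / 8

print(stack_bandwidth_gbps(1.0))  # first-gen HBM at 1 Gbps/pin -> 128.0 GBps
print(stack_bandwidth_gbps(3.2))  # HBM2E at 3.2 Gbps/pin -> 409.6 GBps (~410 in the table)
```

The two results line up with the 128 GBps and 410 GBps figures in the spec table, which is why a wider bus lets HBM hit high bandwidth at much lower per-pin clocks than GDDR.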

Cutaway image of a graphics card with HBM (Image credit: AMD)

The stacked memory chips are connected by through-silicon vias (TSVs) and microbumps, and connect to the GPU via an interposer rather than on-chip.

HBM2 and HBM2E


Samsung’s Flashbolt HBM2 DRAM targets high-performance computing.  (Image credit: Samsung)

HBM2 debuted in 2016, and in December 2018, JEDEC updated the HBM2 standard. The updated standard was commonly referred to as both HBM2 and HBM2E (to denote the deviation from the original HBM2 standard). The spec was updated again in early 2020, but the name “HBM2E” still wasn’t formally included. Even so, you may see people and/or companies refer to HBM2 as HBM2E, or even as HBMnext, thanks to Micron.
