News Samsung Outs World's First GDDR7 Chip: 32 GT/s for Next-Gen GPUs

IIRC, doesn't GDDR6X use PAM4 signalling?
Yeah Micron debuted PAM4 with GDDR6X.
Also, I seem to recall something about GDDR7 using on-die ECC, like DDR5. That could help explain additional power consumption by the DRAM chips.
Everything starting with GDDR5 has had some form of memory error correction, but there may be more to it for GDDR7; I haven't read up on it yet.
 
Source? I can't find any indication of that for any GDDR standard, actually.

And are we talking just at the interface level, on-die, or both?
For VRAM I believe it's always been at the controller level, except for ECC of course. I think the GDDR part is CRC (Micron docs refer to it as data link protection), but it has to be checked by the controller; the chip doesn't do it by itself.
 
And Nvidia will still skimp on VRAM and charge insane prices for higher-memory models. Looking at you, 16GB 4060 Ti.
Given all of the criticism, and the 16GB model failing to smooth out some dips as expected because of the 128-bit bottleneck, we can only hope Nvidia learned the lesson that 128 bits simply wasn't enough for the 4060 (Ti) and won't do it again. Releasing generation upon generation of GPUs that get thrashed by reviews and rot on shelves is not good for business.
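Napkin math for anyone curious why the 128-bit bus hurts, assuming the 4060 Ti's published 18 Gbps GDDR6 and the 32 GT/s figure from the headline; the helper function and the 192-bit case are just for illustration:

```python
# Rough peak-bandwidth math: each data pin moves data_rate_gtps Gbit/s,
# so peak GB/s = bus_width_bits * data_rate_gtps / 8.

def peak_bandwidth_gb_s(bus_width_bits: int, data_rate_gtps: float) -> float:
    """Peak memory bandwidth in GB/s for a given bus width and per-pin rate."""
    return bus_width_bits * data_rate_gtps / 8

print(peak_bandwidth_gb_s(128, 18))  # 4060 Ti, 128-bit 18 Gbps GDDR6 -> 288.0 GB/s
print(peak_bandwidth_gb_s(128, 32))  # same 128-bit bus with 32 GT/s GDDR7 -> 512.0 GB/s
print(peak_bandwidth_gb_s(192, 18))  # hypothetical 192-bit GDDR6 card -> 432.0 GB/s
```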
 
And Nvidia will still skimp on VRAM and charge insane prices for higher-memory models. Looking at you, 16GB 4060 Ti.
Considering Samsung only announced a 16Gb (2GB) chip, they're going to have to charge more for VRAM anyway, because adding capacity would require extra memory interfaces, which in turn need L2 cache behind them. Unless NVIDIA copies AMD, VRAM capacity and bus width are going to be issues here.

Really, I find it more disappointing that a 32Gb (4GB) chip wasn't announced.
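For context, here's the rough capacity math in a normal (non-clamshell) layout, one chip per 32-bit channel; the 4GB-chip case is hypothetical since only the 16Gb part was announced:

```python
# Non-clamshell VRAM capacity: one GDDR chip per 32-bit channel,
# so capacity = (bus width / 32) * per-chip capacity.

def vram_capacity_gb(bus_width_bits: int, chip_capacity_gb: int) -> int:
    channels = bus_width_bits // 32  # each GDDR chip occupies a 32-bit channel
    return channels * chip_capacity_gb

print(vram_capacity_gb(128, 2))  # 128-bit bus, 2GB (16Gb) chips ->  8 GB
print(vram_capacity_gb(192, 2))  # 192-bit bus, 2GB (16Gb) chips -> 12 GB
print(vram_capacity_gb(128, 4))  # 128-bit bus, hypothetical 4GB (32Gb) chips -> 16 GB
```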
 
24Gbit chips are already getting somewhat on the large side at ~120 mm². You're going to have to wait for either the next cell shrink, stacked DRAM packages (e.g., basically HBM but with the HBM base die swapped out for a (G)DDRx one), or multi-layer DRAM dies before 32Gbit arrives.
It's not a packaging problem, as Samsung already has GDDR6W, which reaches 32Gb by combining two 16Gb dies in one package. That doesn't mean there isn't a manufacturing-feasibility issue for a single 32Gb die, of course.
 
It's not a packaging problem, as Samsung already has GDDR6W, which reaches 32Gb by combining two 16Gb dies in one package. That doesn't mean there isn't a manufacturing-feasibility issue for a single 32Gb die, of course.
Though it's worth noting that the GDDR6W modules Samsung made are 64-bit chips. That makes me wonder where the 4060 Ti 16GB is getting its capacity from, since the memory interface didn't change. However, some digging led me to find that GDDR6 uses an addressable, packetized command format (https://media-www.micron.com/-/medi...cal-note/dram/tn-ed-04_gddr6_design_guide.pdf, page 4), so there's no need for chip-select lines anymore.

So, pending any circuit-board analysis, my theory is that NVIDIA is using segmented memory. Though that isn't any different from a dual-channel CPU using 4 DIMMs.
 
Though it's worth noting that the GDDR6W modules Samsung made are 64-bit chips. That makes me wonder where the 4060 Ti 16GB is getting its capacity from, since the memory interface didn't change.
Yeah that's why I said 2x 16Gb because that's all it is.

Until someone opens one up we won't know, but there's a 99% chance it's just clamshell memory installation like the 3090.
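For anyone who wants the numbers, here's the same napkin math with clamshell factored in: two chips share each 32-bit channel, each running in x16 mode, so capacity doubles without touching the bus width. The 4060 Ti and 3090 figures below are from their public specs; the helper is just a sketch.

```python
# Clamshell vs. normal VRAM capacity: clamshell puts two chips on each 32-bit
# channel (one per board side, each in x16 mode), doubling capacity for the
# same bus width and controller configuration.

def board_capacity_gb(bus_width_bits: int, chip_capacity_gb: int,
                      clamshell: bool = False) -> int:
    channels = bus_width_bits // 32
    chips_per_channel = 2 if clamshell else 1  # x16 per chip when clamshelled
    return channels * chips_per_channel * chip_capacity_gb

print(board_capacity_gb(128, 2))                  # 4060 Ti 8GB  ->  8
print(board_capacity_gb(128, 2, clamshell=True))  # 4060 Ti 16GB -> 16
print(board_capacity_gb(384, 1, clamshell=True))  # 3090 24GB (1GB chips) -> 24
```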