News 3D X-DRAM Roadmap: 1Tb Die Density by 2030

rluker5

Distinguished
Something like this is going to need good cooling.

I was thinking about going with 64GB DDR5 for no good reason. Now I have even less reason. Money saved for a different upgrade, I guess. In a couple of years I'd be happy to toss some cooling on something like this.
 

Steve Nord_

Prominent
Something like this is going to need good cooling.

I was thinking about going with 64GB DDR5 for no good reason. Now I have even less reason. Money saved for a different upgrade, I guess. In a couple of years I'd be happy to toss some cooling on something like this.
Why cooling? I saw nothing on dissipation. I am looking for compute-in-memory... it might take more than one mask to have that, though.
 

setx

Distinguished
After "This patented DRAM technology", it gets completely uninteresting.

First of all, creating something in a lab and developing cheap mass production are very different things.

Next, and likely more important: patents with no major manufacturing contracts behind them mean that if they can't get this into the next major RAM standard, it's actually going to harm global development, as other companies will avoid the areas covered by the patents.
 

rluker5

Distinguished
Why cooling? I saw nothing on dissipation. I am looking for compute-in-memory... it might take more than one mask to have that, though.
More capacitors per mm^2 of cooling surface, and some of them have thermal barriers to heat dissipation because they sit underneath others.

Luckily, lots of people are getting used to RAM cooling again.
I just have a dangling fan pointed at the RAM in the PC that gets the most volts through it. My OC becomes unstable if it hits 60°C.
 

InvalidError

Titan
Moderator
First of all, creating something in a lab and developing cheap mass production are very different things.
Cheap manufacturing may already be a mostly solved problem if the thing is made mostly the same way as NAND, which it probably is.

Failing to license the thing at a reasonable price would suck though, as it would likely mean there won't be cheap versions of this stuff until 20 years from now, when the patents on whatever makes this meaningfully different from the NAND structure expire.

More capacitors per mm^2 of cooling surface, and some of them have thermal barriers to heat dissipation because they sit underneath others.
Most of a DRAM chip's ~300 mW at standard voltage and frequencies is IO. Making the chip 32X bigger won't increase IO power much, but it might increase self-refresh power by ~500 mW. ~1 W per chip should still be well within passive cooling as long as there is room for natural convection around the DIMMs, which there usually isn't when you slap decorative air-blockers on them that reduce the clearance between DIMMs to zero.
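
A quick sanity check of that arithmetic, as a Python sketch; the figures are the ballpark numbers above, not datasheet values, and the 16-chip DIMM is an assumed configuration:

```python
# Back-of-envelope power estimate, using the ballpark figures from the
# post above (illustrative only, not datasheet values).

io_power_w = 0.300       # ~300 mW per chip, mostly IO, at standard V/f
extra_refresh_w = 0.500  # assumed extra self-refresh for a 32X-denser die

per_chip_w = io_power_w + extra_refresh_w
per_dimm_w = per_chip_w * 16  # hypothetical double-sided DIMM, 8 chips/side

print(f"~{per_chip_w:.1f} W per chip, ~{per_dimm_w:.0f} W per 16-chip DIMM")
# -> ~0.8 W per chip, ~13 W per DIMM: plausible for passive cooling,
#    provided air can actually circulate between the DIMMs.
```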

The most logical use for this sort of stuff is HBM: all of the advantages without having to do TSVs, die shaving or DRAM die stacking.
 

bit_user

Polypheme
Ambassador
@InvalidError, good call predicting this a couple of months ago.

1Tb ICs would mean the potential to slap 2TB onto a single DIMM with relative ease — double-sided DIMMs with eight chips per side would get there. 4TB using 32 ICs would also be possible.
It would take an entire minute to read or write 4 TB of DRAM, even at the MCR spec of DDR5-8800 that Intel claims it'll support in Granite Rapids. I wonder if they'll also have to increase the on-die ECC allocation to make it viable.
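
For anyone who wants to check the math, a quick Python sketch (assuming one 64-bit DDR5 channel per DIMM; all figures illustrative):

```python
# Capacity: a 1 Tb die is 128 GiB, so 16 dies (8 per side) reach 2 TiB
# per DIMM and 32 dies reach 4 TiB.
GIB_PER_DIE = 1 * 1024 // 8  # 1 Tb = 128 GiB
for num_dies in (16, 32):
    print(f"{num_dies} dies -> {num_dies * GIB_PER_DIE // 1024} TiB")

# Fill time: DDR5-8800 moves 8800 MT/s * 8 bytes = 70.4 GB/s per channel.
bandwidth_bps = 8800e6 * 8
seconds = 4 * 1024**4 / bandwidth_bps
print(f"4 TiB at DDR5-8800: ~{seconds:.0f} s")  # ~62 s, about a minute
```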
 

bit_user

Polypheme
Ambassador
I was thinking about going with 64GB DDR5 for no good reason.
I was thinking about going with 64GB DDR5 for a good reason. The smallest dual-ranked ECC UDIMMs I can find are 32 GB. So, the lowest dual-ranked memory config I can use is 64 GB. That's a good deal more than I need, but at least I won't have to upgrade it for the life of the machine.

When I built a box with 16 GB in 2013, I never had to upgrade the RAM. I only used that much because it's quad-channel and the price difference between 2 GB and 4 GB DIMMs wasn't too bad, at the time.
 

usertests

Distinguished
DRAM cost-per-bit has been mostly stagnant for over a decade. If we're lucky, something like this will rejuvenate the market and make 32 GB the new 4 GB in 5-10 years. Let's see $0.25/GB.

Chip stacking is an alternative. Samsung was talking about 512 GB modules using TSV in 2021, obviously for server and hyperscaler customers only. That gives this X-DRAM something to compare to, if it becomes cheaper to manufacture as they claim.

If NEO stifles 3D DRAM or waits too long to get it to market, they miss out on a potential pile of money.
 

bit_user

Polypheme
Ambassador
I wonder how they will license this patent to the big RAM companies.

Will it be on an individual-company basis, or licensed through a joint company/JEDEC agreement?
I doubt it will go into some JEDEC patent pool, unless you need a modified protocol to access it that they want to standardize.

More capacitors per mm^2 of cooling surface and some have thermal barriers to heat dissipation due to being underneath others.
The article says their cells don't use capacitors. It's a short article - just 392 words in 6 paragraphs.
 

bit_user

Polypheme
Ambassador
DRAM cost-per-bit has been mostly stagnant for over a decade. If we're lucky, something like this will rejuvenate the market and make 32 GB the new 4 GB in 5-10 years. Let's see $0.25/GB.
Correct me if I'm wrong, but it seems like 3D NAND didn't improve cost-per-bit very much, if we stop our analysis at the point when NAND was last profitable. I assume these 3D dies take a lot longer to fabricate, which partially nullifies the cost-savings of the density increases.
 

InvalidError

Titan
Moderator
Correct me if I'm wrong, but it seems like 3D NAND didn't improve cost-per-bit very much, if we stop our analysis at the point when NAND was last profitable. I assume these 3D dies take a lot longer to fabricate, which partially nullifies the cost-savings of the density increases.
Still using NAND as a reference, you can stack 200+ layers of those and the chips still cost only ~$20 each. If you tried to stack 200 dies instead, the same amount of storage capacity would cost ~$2000. While layers require additional process steps, those steps are ~70X cheaper than additional silicon.

So, even if you had to sacrifice 75% of the density for extra spacing to make body capacitors larger, reduce crosstalk between cells, etc., it should still come out at least 20X more cost-efficient than die-stacking. Imagine being able to get 16GB of HBM3 or similar for under $10; no excuse for sub-16GB GPUs anymore.
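
A sketch of that comparison in Python, using the ballpark numbers above (the naive ratio lands a bit above the ~70X quoted, which is the more conservative figure):

```python
# Layer-stacking vs. die-stacking cost, using the post's ballpark numbers
# (illustrative only, not market data).

cost_200_layer_chip = 20.0  # ~$20 for a 200+ layer NAND-style chip
cost_200_dies = 2000.0      # ~$2000 for the same capacity from 200 dies

naive_ratio = cost_200_dies / cost_200_layer_chip
print(f"naive layer advantage: ~{naive_ratio:.0f}X")  # ~100X (post says ~70X)

# Sacrificing 75% of density means ~4X the layers for the same bits:
derated = naive_ratio / 4
print(f"after 75% density penalty: ~{derated:.0f}X")  # ~25X, i.e. "at least 20X"
```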

The article says their cells don't use capacitors. It's a short article - just 392 words in 6 paragraphs.
A "capacitor-less floating body" DRAM cell still has a capacitor: the floating body. One of the largest challenges with 1T-RAM is that the word line (gate) capacitance is about as large as the floating body capacitance itself, which makes it difficult to access cells without them going above threshold regardless of whether they are supposed to be '0' or '1'.
 