[SOLVED] Is 128-layer 3D NAND more reliable than 32-layer 3D NAND, or vice versa?

Pextaxmx

5 minutes of googling didn't give me any answer, so I want to open a discussion here: would more layers in NAND have any impact on lifespan or data retention performance? Any reason to buy a 64-layer SSD over a 176-layer version? Or vice versa?
 

Pextaxmx

It was purely out of curiosity; I'm not shopping for one. I just read some random article about how we may see 500 layers soon... and wondered whether this race for more layers has any impact on NAND reliability. That's all.
 

Pextaxmx

Well, in the planar NAND era, cramming data into a smaller space was exactly the problem. A fatal one. That is why 3D NAND was invented to solve it. Now we are making the vertical spacing smaller and smaller... so I think it is logical to suspect that too many layers must hurt reliability, but I am not an expert.
 

USAFRet

Well, in the planar NAND era, cramming data into a smaller space was exactly the problem. A fatal one. That is why 3D NAND was invented to solve it. Now we are making the vertical spacing smaller and smaller... so I think it is logical to suspect that too many layers must hurt reliability, but I am not an expert.
Not every tech leap results in better reliability.

But on its face, 'more layers' is not necessarily better or worse.
 

USAFRet

do you have any supporting literature for that statement?
No.

But it depends on actual construction and quality control.
A 64-layer part from Samsung may have better reliability than a 128-layer part from ADATA.
Or a 128-layer part from Samsung may be better than a 64-layer part from ADATA.

If shopping for a drive, I would not put that near the top of my list of 'why this drive'.
 
5 minutes of googling didn't give me any answer, so I want to open a discussion here: would more layers in NAND have any impact on lifespan or data retention performance? Any reason to buy a 64-layer SSD over a 176-layer version? Or vice versa?

This is a complicated question and something I've covered extensively on Reddit and on my Discord. However, in general, more layers are better, but that "better" involves a series of trade-offs: endurance, performance, density, cost, etc.

An example of endurance gains with layers would be Intel going from 64L to 96L on their QLC. Performance-wise, the improvements were pretty minor - see the 660p vs. 665p, keeping in mind SLC cache changes - while endurance went from 1000 to 1500 P/E (with LDPC ECC). Intel uses floating gate (FG) for its flash, which emphasizes endurance, and FG will in fact also be used in the future for split-gate PLC+.
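As a rough illustration of what those P/E ratings mean for a drive's rated write endurance, here's a back-of-the-envelope sketch; the 2 TB capacity and the write amplification factor of ~5 are assumptions for illustration, not Intel's published figures:

```python
# Back-of-the-envelope endurance estimate:
#   rated TBW ~= capacity * P/E cycles / write amplification
# All inputs below are illustrative assumptions, not vendor specifications.

def estimated_tbw(capacity_tb: float, pe_cycles: int, write_amplification: float) -> float:
    """Approximate host terabytes written before the NAND's P/E budget is exhausted."""
    return capacity_tb * pe_cycles / write_amplification

# Hypothetical 2 TB QLC drive, assuming a write amplification factor of ~5.
print(estimated_tbw(2.0, 1000, 5.0))  # 64L-class QLC at 1000 P/E -> 400.0 TB
print(estimated_tbw(2.0, 1500, 5.0))  # 96L-class QLC at 1500 P/E -> 600.0 TB
```

The point is only that, everything else held equal, rated endurance scales roughly linearly with the P/E rating; real TBW figures also depend heavily on over-provisioning and how the firmware keeps write amplification down.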

Another, different example of endurance gains with layers would be Micron going from 64L to 176L, but this involved multiple changes. While their 64L flash was rated for 1500 P/E, their 96L was 2000 and later 3000 - they changed the geometry on the 96L (B27A vs. B27B) by reducing the block size, which improves endurance. Then they changed architecture from FG to CTF (specifically, replacement gate or RG a la TCAT, like the basis of Samsung's V-NAND) and achieved 5000 P/E with their 128L flash (B37R).

In general, endurance does improve with layer count because, while die density increases - that is, bits per unit of die surface area - the amount of data per layer does not necessarily increase. There are multiple types of disturb - more than planar has - but the effective process and cell size is much, much larger, so most types of disturb are insignificant, and the rest are alleviated or mitigated in a variety of ways. In any case, there's also a splitting of layers into decks, which helps mitigate issues with high aspect ratio etching, for example (which varies retention by word line position). At the controller level, ECC may also improve, including by having larger codewords. There have also been improvements to SLC caching and in many other areas, too many to list. Of course, many of these are separate from layer count. But there are improvements made to circuitry every generation, for example putting the peripheral circuitry under the NAND array (CUA).

If you're going for more layers, though, it's because of capacity (cost per GB), energy efficiency, and performance. Endurance is better as a side effect of more general improvements. A good example is SK hynix's 128L flash, first seen in the P31. That drive is incredibly efficient, has very good performance, and scales up to 1Tb/die (a 2TB SKU was recently released), all while being affordable. But it likely also has improved endurance over at least their old architecture.
 
Solution

Pextaxmx

This is an amazing post that answers my questions and provides a much more interesting perspective on NAND layer count.

There are multiple types of disturb - more than planar has - but the effective process and cell size is much, much larger, so most types of disturb are insignificant, and alleviated or mitigated in a variety of ways otherwise.
Seems like 3D NAND was a step change in NAND technology... no wonder, though... they added a whole dimension to it.

If you're going for more layers, though, it's because of capacity (cost per GB), energy efficiency, and performance. Endurance is better as a side effect of more general improvements.

This is exactly the answer I was looking for. Thank you.
In a nutshell, layer count does not directly correlate with endurance, but higher layer counts are better in every respect (including endurance) because those parts are made with the latest and greatest technology available today...
 
Cells hold a number of electrons (a charge) whose level corresponds to different bit values. There's a limit (threshold) of discernment, which 15nm planar was close to hitting with 3-bit MLC (TLC) - specifically, you had 100 electrons or fewer with a discernment threshold of about 10, so TLC's 8 voltage states (technically ER and 7 states) was a hard limit. 3D NAND is effectively almost an order of magnitude larger, but it can suffer from more types of disturb because there are more directions in which neighboring cells lie; as a result, these types of disturb are described by "X" and "Y" coordinates, for example. However, much of this disturb, including read disturb, is basically negligible given the size and distance of the cells. I could go into the technical details, but suffice it to say there are constant advancements to mitigate things like program disturb.
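To put that "threshold of discernment" into numbers: the count of voltage states doubles with every extra bit per cell, so the charge budget per state shrinks quickly. A quick illustrative calculation, using the rough ~100-electron planar figure mentioned above:

```python
# Voltage states per cell double with each extra bit, so the charge budget per state
# shrinks quickly. The ~100-electron figure is the rough planar number quoted above;
# everything else is simple arithmetic, for illustration only.

def voltage_states(bits_per_cell: int) -> int:
    return 2 ** bits_per_cell  # SLC=2, MLC=4, TLC=8, QLC=16

def electrons_per_state(total_electrons: int, bits_per_cell: int) -> float:
    return total_electrons / voltage_states(bits_per_cell)

for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4)]:
    print(f"{name}: {voltage_states(bits)} states, "
          f"~{electrons_per_state(100, bits):.1f} electrons per state (planar-scale cell)")
```

With ~12.5 electrons per state for planar TLC against a discernment threshold of roughly 10, you can see why that was considered a hard limit, and why the far larger effective cell of 3D NAND gives so much more headroom.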

In any case, over time you will have things like leakage, which impact retention. The cells also wear down over time from program/erase cycling. Workload type and temperature can impact this, too. However, in general, modern controllers will keep a log/table of biases for this and adjust the threshold windows accordingly. Eventually you have to rely on ECC and then parity. Nevertheless, improvements are also being made here, as well as in the manufacturing process itself (since after a while you're just string-stacking decks, although alignment can be an issue). NAND endurance is an issue, but obviously we've moved to TLC and soon QLC, so it's more of a secondary concern in the consumer market. Consumer drives also have multiple modes, e.g. SLC caching, which can be used intelligently to defer writes, especially with DRAM, in order to reduce write amplification and therefore wear.
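For a rough mental model of that "log/table of biases", here's a purely hypothetical sketch; the structure, field names, and values are invented for illustration and real SSD firmware is far more involved:

```python
# Purely hypothetical sketch of the idea behind the controller's "table of biases":
# track a read-threshold offset per block (or word-line group), apply it on reads,
# and only fall back to ECC/parity when that isn't enough. All names and values
# here are invented for illustration.

from dataclasses import dataclass

@dataclass
class BlockReadState:
    pe_cycles: int = 0        # wear accumulated on this block
    vth_offset_mv: int = 0    # current read-threshold bias, in millivolts

class ThresholdTable:
    def __init__(self) -> None:
        self._blocks: dict[int, BlockReadState] = {}

    def note_read_retry(self, block: int, shift_mv: int) -> None:
        # A marginal read succeeded only after shifting the read voltage; remember the shift.
        state = self._blocks.setdefault(block, BlockReadState())
        state.vth_offset_mv += shift_mv

    def offset_for(self, block: int) -> int:
        return self._blocks.get(block, BlockReadState()).vth_offset_mv

table = ThresholdTable()
table.note_read_retry(block=42, shift_mv=-15)  # charge leaked over time; read lower
print(table.offset_for(42))                    # -> -15
```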

2D/planar is inherently different in structure from 3D, so the two shouldn't be directly compared. 2D/planar is in fact still used for small-density SLC, NOR flash, etc. There's also ultra-low-latency 3D SLC. Retail Chia drives now exist that are QLC run in permanent, full-drive SLC/pSLC mode - but people often confuse this with native SLC. That is NOT the case, which is important to realize when comparing QLC and TLC, too; while you can run QLC in TLC mode by just storing 3 bits per cell (e.g. Kioxia's 1.33Tb/die QLC becoming 1Tb/die TLC), QLC cells on average tend to be smaller to begin with and the circuitry may be different. A good example is that Intel's latest data center (DC) drive is sold as 144L TLC when in reality it's their 144L QLC (floating gate architecture!) run in TLC mode. This has interesting implications for endurance.
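The die-capacity arithmetic in that QLC-to-TLC-mode example is just 3/4 scaling (3 bits stored per cell instead of 4); a quick check of the numbers quoted above:

```python
# Running QLC flash in TLC mode stores 3 bits per cell instead of 4, so usable die
# capacity scales by 3/4. This just checks the Kioxia example quoted above.

def tlc_mode_capacity(qlc_die_tb: float) -> float:
    return qlc_die_tb * 3 / 4

print(tlc_mode_capacity(1.33))  # 1.33 Tb QLC die -> ~1.0 Tb in TLC mode
```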

It's rare to go up in layers without other improvements, including endurance, but there's also the question of "is it relevant?" With quality consumer flash and usage, usually not.
 
was it better or worse than, say, 128 layer native TLC?

The architecture is floating gate with CUA, which makes it different from anything else on the market. Micron used FG in TLC up to and including 96L, but that's it. Samsung isn't on CUA yet, and BiCS (Toshiba/WD/Kioxia) isn't quite there yet, either. Hynix previously used a variation of BiCS. BiCS is a different charge trap flash (CTF) than what Micron and Samsung are using at 128L+. So it's tough to draw comparisons.

FG generally has better endurance, though potentially worse performance. However, Intel's QLC is quad-plane, and given that enterprise/DC drives tend to be high-capacity, the die density (768Gb in TLC) doesn't matter as much. Plus, it looks to have a bit more over-provisioning than usual (11TiB of QLC -> 8.25TiB of TLC), which can improve write performance and endurance.
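The same 3/4 scaling explains those raw-capacity figures, and the over-provisioning follows from the gap between raw and user-visible capacity; the advertised capacity used below is a hypothetical placeholder, not the actual drive's spec:

```python
# The raw-capacity figures quoted above follow the same 3/4 scaling, and the gap
# between raw and user-visible capacity is the over-provisioning. The user capacity
# below is a hypothetical placeholder, not the drive's actual advertised size.

raw_qlc_tib = 11.0
raw_as_tlc_tib = raw_qlc_tib * 3 / 4      # -> 8.25 TiB of raw TLC-mode capacity
user_capacity_tib = 7.0                   # hypothetical advertised capacity

op_fraction = (raw_as_tlc_tib - user_capacity_tib) / raw_as_tlc_tib
print(f"{raw_as_tlc_tib:.2f} TiB raw, ~{op_fraction:.0%} over-provisioned")
```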

I mean you personally can go out and just check the reviews. I'm speaking from a technical standpoint.
 
