[SOLVED] Over-Provision NVMe?

Kalik212

Distinguished
Sep 28, 2015
122
1
18,585
1) how much free space do I need to leave empty on my SSD and NVMe drives (over-provisioning)?...I see different numbers online- some say 10%, some say 20% etc...which is the correct one?

2) I have a 1 TB drive...but it shows up as only 931 GB in Windows...does this mean I provision 10% (or whatever number it is) from 1 TB or from 931 GB?
 

USAFRet

Titan
Moderator
A "1TB drive" will show in Windows as 931GB. This is not exclusive to SSD, NVMe, whatever.
It is simply a difference in reporting units.
Base 2 vs Base 10
Computer vs Human.

Free space?
Don't go over 80% or so.
So, for a "1TB" drive...don't fill it up past 800GB.
 

jasonf2

Distinguished
You also have the file allocation table and other stuff that takes up space when you format it. How much over-provisioning you need is relative to how many write cycles per time period you are going for and the type of flash you have. If you are using the drive for something that writes constantly, more is better; if it isn't writing all the time, extra over-provisioning is really just wasted space. I typically just go with the amount recommended in your SSD maintenance software. Flash memory can only be written so many times before failure, so over-provisioning just gives the bad blocks somewhere to be moved to.
 

USAFRet

Titan
Moderator
931GB vs 1TB is not the index, or File Allocation Table, or anything else that people parrot about "lost space"

It is simply a difference in reporting units.
Base 2 vs Base 10.
ALL drives are like that. From an 8GB flash drive, to an 18TB HDD.

There was even a lawsuit about this, which is why there is tiny print on the side of the box a drive comes in, telling you about this.

931GB == 1TB.
1.81TB == 2TB.
465GB == 500GB.

No space is lost, no space is consumed with other stuff.
Period.
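To make the unit conversion concrete, here's a quick Python sketch (illustrative only, not anything Windows actually runs) that turns the advertised base-10 capacity into the base-2 figure Windows shows:

```python
GIB = 1024**3  # what Windows labels a "GB" is really a gibibyte (2**30 bytes)

# Drive makers advertise decimal units: 1 TB = 10**12 bytes.
for label, size_bytes in [("500GB", 500 * 10**9),
                          ("1TB", 10**12),
                          ("2TB", 2 * 10**12)]:
    shown = int(size_bytes / GIB)  # Windows truncates the fraction
    print(f'"{label}" drive -> Windows shows {shown} GB')
# "500GB" -> 465 GB, "1TB" -> 931 GB, "2TB" -> 1862 GB (i.e. 1.81 TB)
```

Same bytes either way; only the divisor changes.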
 
Solution

Kalik212

Distinguished
Sep 28, 2015
122
1
18,585
I typically just go with the amount recommended in your SSD maintenance software. Flash memory can only be written so many times before failure, so over-provisioning just gives the bad blocks somewhere to be moved to.

I have a Samsung 970 EVO Plus 1 TB NVMe and the Samsung Magician software has over-provisioning set at 10%...I don't use the software for it though as I prefer to just subtract an additional 10% or so from the total available space
 

Karadjgne

Titan
Ambassador
Over-provisioning is not reported space. Think of it like this: you buy a 1Tb ssd, but in reality it's a 1.1Tb ssd. That extra 10% doesn't exist for all practical purposes.

What it does: if a cell goes bad, burns out, or fails to hold voltage, its address is remapped and one of the cells in the over-provisioned area takes its place. So the drive stays at 100% healthy until all the cells in the over-provisioned area are used up. That's when you get to 99% and on down.

The ssd itself is 1Tb. That's what it has available. After that comes stuff you aren't allowed to access like the boot, MBR, GPT info etc. After OS install comes more stuff you can't access like hardware reserved and hibernation. Leaving you closer to 931Gb. Unless you change the rules.

Hardware reserved can be mitigated if you have no iGPU, and hibernation (up to 75% of RAM size) can be disabled. Those two alone can easily account for 10-20Gb on modern systems, space portioned off for no apparent reason other than the OS wants it.

Drive space allocation depends on the drive. Many older, smaller drives, like 120Gb models, needed 10% minimum free just for the Windows swap file, or they suffered extreme lag. With 1Tb drives you still need roughly 20Gb (±) for that file alone, but a full 10% (100Gb) isn't totally necessary. For normal use, 5% will suffice. For super-users dealing with uber-large files, 10% would be better advised. That changes again if using a 2Tb ssd, etc.
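As a quick illustration of that guideline, a hypothetical Python helper (my own sketch, not from any vendor tool) encoding those numbers:

```python
def recommended_free_gb(drive_gb, heavy_writes=False):
    """Rule-of-thumb from the post above: keep ~5% free for normal use,
    ~10% for users pushing very large files, and never less than ~20 GB
    so the Windows swap file always has room."""
    pct = 0.10 if heavy_writes else 0.05
    return max(drive_gb * pct, 20.0)

print(recommended_free_gb(1000))        # 1Tb drive, normal use: 50.0
print(recommended_free_gb(1000, True))  # 1Tb drive, heavy writes: 100.0
print(recommended_free_gb(120))         # small drive: the 20 GB floor applies
```

The 20 GB floor is what made the old 10% advice matter so much on 120Gb-class drives.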
 

jasonf2

Distinguished
I have a Samsung 970 EVO Plus 1 TB NVMe and the Samsung Magician software has over-provisioning set at 10%...I don't use the software for it though as I prefer to just subtract an additional 10% or so from the total available space
That isn't really the way that overprovisioning works. The SSD has to set the overprovisioned space aside out of the usable space, or there is nowhere for wear leveling and failed cells to be moved to when the drive is full. The overprovisioned space will be subtracted from your drive capacity, and it must also be set up with the drive software; just leaving 10% free on the drive does nothing. If Magician is suggesting 10%, that is what I would use. The drive watches for cells that are used too much and moves data around within the overprovisioned space to prolong life. Overprovisioning used to be built into the drive in the early SSD days, but to keep costs down they now let you set it in software. In my opinion it is probably more important today than ever, because the flash has become more dense but less wear tolerant in many drives.
 

USAFRet

Titan
Moderator
That isn't really the way that overprovisioning works. The SSD has to set the overprovisioned space aside out of the usable space, or there is nowhere for wear leveling and failed cells to be moved to when the drive is full. The overprovisioned space will be subtracted from your drive capacity, and it must also be set up with the drive software; just leaving 10% free on the drive does nothing. If Magician is suggesting 10%, that is what I would use. The drive watches for cells that are used too much and moves data around within the overprovisioned space to prolong life. Overprovisioning used to be built into the drive in the early SSD days, but to keep costs down they now let you set it in software. In my opinion it is probably more important today than ever, because the flash has become more dense but less wear tolerant in many drives.
Do you have some documentation for this OP concept?

Not saying right or wrong, but I'd love to read more on it.
 

jasonf2

Distinguished
931GB vs 1TB is not the index, or File Allocation Table, or anything else that people parrot about "lost space"

It is simply a difference in reporting units.
Base 2 vs Base 10.
ALL drives are like that. From an 8GB flash drive, to an 18TB HDD.

There was even a lawsuit about this, which is why there is tiny print on the side of the box a drive comes in, telling you about this.

931GB == 1TB.
1.81TB == 2TB.
465GB == 500GB.

No space is lost, no space is consumed with other stuff.
Period.
Nothing is lost in the unit conversion, but the process of formatting does consume a minimal amount of the raw drive space. Volume information and the FAT do take some space on every drive.
 

Karadjgne

Titan
Ambassador
That's TRIM. It deals with wear leveling and such. Over-provisioning is slightly different; even some hdds had it. It's just space inherent to the drive, and the amount differs between drives. Some use as little as 7%, some have up to 13%. It isn't reported by any means; the drive simply presents its nominal capacity.

If you think about a hdd, it has specific size sectors etc. Those sectors do not add up to exactly 1Tb; it could be 1.12Tb-ish in total. But with different base usage, it's seen as a 1Tb. That extra 0.12Tb is used as over-provision to replace failed sectors: the hdd simply reassigns the sector's address to a different physical location, and the prior sector then becomes invisible to the drive.
 
That isn't really the way that overprovisioning works. The SSD has to set the overprovisioned space out of the usable space or there is no where for wear leveling and failed cells to be moved to when the drive is full. The overprovisioned space will be subtracted from your drive capacity. It must also be setup with the drive software. Just leaving 10% on the drive does nothing. If magician is suggesting 10% that is what I would use. The drive watches for cells that are used too much and moves it around in the overprovisioned space to prolong life. Overprovisioning used to be part of the drive in the early ssd days but to keep costs down they have let you set it in software. In my opinion it is probably more important today than ever because the flash has become more dense, but less wear tolerant in many drives.

You're correct that there must be some physical OP, which tends to be, at a minimum, the binary-to-decimal conversion difference. However, that's not terribly accurate either, because actual die capacity is significantly higher than what's listed; you also take OP space away when using pSLC (static specifically) and for the mapping table (which tends to be in pSLC). What's available to the host is the LBA range, and you can change this on some drives (generally enterprise these days). Some programs used to just deallocate space, which is not the same thing. Intel has a white paper covering this exact topic (see: Methods of Over-Provisioning). However, leaving free space is dynamic OP on modern drives; Kioxia has a detailed article on the subject (see: Write Amplification).

OP improves endurance (reduces WAF) and improves performance, particularly with random writes, as it shifts the free-block threshold a bit (note that static and dynamic pSLC have different threshold mechanisms). Dynamic OP requires TRIM to be effective, but modern drives are very aggressive with TRIM and GC, particularly because they rely on SLC caching and consumer usage has tons of idle time. For consumer usage OP is not too important (AnandTech tested E12 drives with different levels of physical OP and found no significant changes), but it depends on the drive: DRAM-less (esp. SATA) and QLC-based drives are more susceptible to issues when the drive is fuller, for example. I do recommend people keep a certain amount of space free, which tends to amount to ~10% of user space.

Note that I'm not disagreeing with you, merely elaborating on the subject matter. You are correct that cells are rotated based on effective wear (and this is related to SLC caching also) and it is a requirement for SSDs based on how NAND and the FTL operate. You are also correct that it's more important when you have more bits per cell but newer flash (3D vs. 2D) actually has far larger effective cell sizes, but I'd rather not get into a more technical debate.

From Intel: "1. Limiting the logical volume capacity during partitioning, 2. Limiting an application to use only a certain LBA range, 3. Limiting the Maximum LBA on the drive level"

From Kioxia: "any data locations that are de-allocated a.k.a. TRIMmed, or never written are also spare space, and lower WAF just as effectively as the designed-in equivalent. Effective, or instantaneous OP, is the total amount of unoccupied capacity, as a ratio to the logical capacity of the drive."

From AnandTech: "Clearly, the higher overprovisioning ratio of the MP510 is not necessary for the Phison E12 controller to perform well, even on the tests that completely fill the drive."
 

Karadjgne

Titan
Ambassador
That 10% is somewhat archaic now with larger drives. As I mentioned earlier, it came about from filling smaller 64Gb-128Gb drives, where the Windows swap file needed space. With modern drives reaching 1Tb and above, that's not necessary any more; for general Windows users it can be dropped to 5% or less.

But there is a valid point about cheaper (especially Sata) drives: the really cheap ones can suffer from slowdowns when over 50% full. Not usually an issue with better-grade drives or NVMe.
 
My 10% value comes from the Kioxia article. You ideally want a WAF of around 3.0 (given their worst-case scenario; this is closer to 1.5 for consumer workloads), which they peg at 20% effective over-provisioning. For a given drive with typical binary values, for example 512GiB of flash, the equation comes out to 426.67GiB of occupied data. A 512GiB-sourced drive may have up to 512GB of LBA, which is 476.84GiB (Windows uses binary also), and 426.67/476.84 ≈ 89.5%, i.e. ~10% free. I have some journal articles showing the difference in performance and wear under various workloads, and this is a good rule of thumb.

Absolutely, some drives rely on OP more than others. I state this above: DRAM-less (especially SATA) and QLC-based drives are particularly prone to issues when fuller. These drives often have more native OP (e.g., 480GB vs. 500/512GB). This is also done on budget drives utilizing inferior flash (e.g., the Chinese market). I also linked the AnandTech review that compares OP and shows it's not a huge deal for consumer workloads, but that there are always significant slowdowns with a fuller drive. I'm merely stating a basic guideline for general use. You must also consider that modern consumer drives rely on dynamic pSLC, which shrinks with drive usage, and this compounds GC efforts since dynamic pSLC shares a wear zone with the native flash (TLC or QLC). Older drives were 2-bit MLC and often had no caching.

Obviously a blanket statement. But I digress... there is no set proper amount, and most drives are fine at 95% full (for example); the 10% value is a catch-all.
 
1) how much free space do I need to leave empty on my SSD and NVMe drives (over-provisioning)?...I see different numbers online- some say 10%, some say 20% etc...which is the correct one?
There isn't really a single correct answer, but just make sure to have at least 10-20% free space on the drive.

2) I have a 1 TB drive...but it shows up as only 931 GB in Windows...does this mean I provision 10% (or whatever number it is) from 1 TB or from 931 GB?
See https://en.wikipedia.org/wiki/Binary_prefix . It's a difference between how drive manufacturers (base 10) and operating systems (base 2) report capacity.
 
