[SOLVED] About "over provisioning" ?

karasahin

Sep 28, 2014
Hello,

I'm using a Samsung 870 EVO 500 GB SSD with its over-provisioning value set to 11%, which means 51 GB is currently set aside and out of use.

How important is this feature? I assume it helps the disk last longer, but is that true, and if so, how much longer, assuming the difference isn't trivial?

Should I disable this feature if the disk is about 90% full?

Thanks.
 
The 90% full you're seeing is what's left after that 11%/51 GB has already been removed.
So that's not an issue.


SSDs need some free space for TRIM to do its thing.
Over-provisioning is basically just removing some space from use and view, to keep that free space available.
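
For what it's worth, here is where the 51 GB number likely comes from; a quick Python sketch (the GiB conversion is my assumption about how Samsung Magician does the math, not something it documents):

# Why 11% shows as ~51 GB rather than 55 GB: the percentage is
# applied to the capacity in binary GiB, not the marketing 500 GB.
# (Assumption about the tool's rounding, for illustration only.)
marketing_bytes = 500 * 10**9
usable_gib = marketing_bytes / 2**30        # about 465.66 GiB
reserved_gib = usable_gib * 0.11            # 11% over-provisioning
print(f"{reserved_gib:.1f} GiB reserved")   # -> 51.2 GiB reserved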
 
It currently looks like the screenshot below. So you're saying that even if I don't set an over-provisioning value myself, the drive still comes with some by default from the factory, just less than what I set manually?
[Screenshot: drive capacity split across the C: and D: partitions]
 
That is showing you have 415GB available, split between the C and D partitions.

What is the actual question?


And generally, I counsel against partitioning a drive like this.
Noted about partitioning.

Is there much difference between 1% over-provisioning and 10%? If so, how much? I don't like limiting the capacity, especially if this type of SSD already comes with some over-provisioning from the factory.
 
Basically, it just walls off some space so that YOU don't have to think about it.

An SSD wants about 15-20% free space.
ex: a 500GB drive should not go over 400GB actual consumed space.
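
Put as arithmetic, a toy sketch of that rule of thumb (the 20% figure is the guideline above, not a manufacturer spec):

# Rule-of-thumb fill ceiling; the fraction is this thread's
# guideline, not a manufacturer specification.
def max_consumed_gb(capacity_gb, min_free_fraction=0.20):
    return capacity_gb * (1 - min_free_fraction)

print(max_consumed_gb(500))   # -> 400.0, matching the example above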
 
So this is about the longevity of the SSD? The further the consumed space goes above that threshold, the more prone the drive becomes to performance loss and instability? If so, could you tell me at what point this starts to become a problem?
 
https://www.techtarget.com/searchstorage/definition/overprovisioning-SSD-overprovisioning

The over-provisioned area is a good thing: it allows the SSD controller to safely conduct housekeeping. It is not a good idea to reclaim that space for general use.
"In general, the higher the overprovisioning rate, the better a drive performs over the long term, especially for heavy workloads with lots of random writes. The drive is also more likely to meet its expected life span."

"More", "better", these are just vague terms with no actual data.
 
To write (save) a new value to a cell, that cell has to first be erased.
The TRIM function erases cells that are not currently used for data, in preparation to accept something in the future.

Given sufficient free space, the drive controller does not have to spend so much time looking for a free cell to write new data to.

As said...keep some free space on the drive.
Personally, I don't go over 80% actual consumed space.
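
To make the erase-before-write point concrete, here is a deliberately simplified toy model in Python. Real controllers erase whole blocks and remap pages, so this is only a sketch of the principle, not how firmware actually works:

# Toy model: a cell must be erased before it can hold new data.
# TRIM pre-erases freed cells; without it, some future write pays
# the erase cost itself (the slow path).
ERASED, LIVE, STALE = "erased", "live", "stale"

class ToySSD:
    def __init__(self, n_cells):
        self.cells = [ERASED] * n_cells
        self.slow_writes = 0            # writes that erased on demand

    def write(self):
        if ERASED in self.cells:        # fast path: pre-erased cell ready
            i = self.cells.index(ERASED)
        else:                           # slow path: erase a stale cell first
            i = self.cells.index(STALE)
            self.slow_writes += 1
        self.cells[i] = LIVE

    def delete(self, i, trim=True):
        # With TRIM the freed cell is erased immediately; without it,
        # the cell sits stale until a write needs the space.
        self.cells[i] = ERASED if trim else STALE

ssd = ToySSD(4)
for _ in range(4):
    ssd.write()                 # fill the drive completely
ssd.delete(0, trim=False)       # free a cell, but skip TRIM
ssd.write()                     # no erased cell left -> slow path
print(ssd.slow_writes)          # -> 1; with trim=True it stays 0

The fuller the drive, the fewer pre-erased cells are available and the more often writes hit the slow path; that is the slowdown being described.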
 
Solution
Okay, this is helpful. Since I don't plan to use my laptop and this SSD (it's SATA) much longer, I will lower over-provisioning to 5% and use it that way for the time being, for about the next two years, maybe three. It shouldn't be a problem... And after I get a new laptop and SSD, I will probably use more OP, like 10% or 20%.

Thanks for the help everyone!
 
A note about partitioning: my testing showed that different drives handle the pseudo-SLC cache in different ways. For some, it makes no difference. For others, the drive will not use space from one partition as pSLC when performing operations in the other partition. On Samsung and other drives with hybrid caches (some static SLC and some dynamic), even the static SLC will sometimes be split. In some cases, it even seemed like the second partition had no pSLC of its own and needed to use space from the first. It's not even consistent by brand and I only had a few models to test with (which still took forever). This could affect performance depending on how much space is free in each partition and how heavy write operations are.
 
Yes, I saw that from you the other day.
Interesting.

But it would take a LOT of testing to nail down specific performance aspects, good or bad.
Drives, workload, etc, etc.
 
Over-provisioning just forces the user to leave enough space for the drive's maintenance functions and for swapping cell usage around so that all cells get an equal number of writes. SSDs have no special reserved area like HDDs have, and the cells are not in any fixed physical place, so partitions, which are virtual, shouldn't even matter.
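
As a minimal sketch of that "equal number of writes" idea (a toy allocator, nothing like real firmware):

# Toy wear-leveling: when a write needs a block, pick the erased
# block with the fewest erase cycles so far, spreading wear evenly.
def pick_block(erase_counts, erased_blocks):
    return min(erased_blocks, key=lambda i: erase_counts[i])

erase_counts = [12, 3, 7, 3]     # lifetime erases per block
erased_blocks = [0, 1, 3]        # blocks currently free to write
print(pick_block(erase_counts, erased_blocks))  # -> 1 (least worn; ties go to the first)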
How much worse is an SSD with no over-provisioning allocated, versus an SSD running above its temperature threshold, in terms of performance and stability?
 
I did more testing with other drives beyond what I ended up posting here. Not all drives even use the full space for pSLC (like Samsung and the one Patriot drive I tested) so having it get split between partitions can cause a severe drop in the available cache and potentially performance if you are writing a large amount at once. But if it uses hybrid cache and only half the static amount is available, that's even worse.

Logically it doesn't even make any sense that the drive would care about partitions, but it was consistently happening in my tests. The total free space on the drive didn't matter, only the free space within a partition, either as a total amount or as a percentage of the total. It made me think that in some drives the controller may only use "local" blocks, so it can't reach out to other areas of the drive, and especially in those where only part of the capacity is used for cache, the cache may not be spread across the drive but rather limited to particular blocks. In most cases it might not be noticeable unless you're doing a LOT of sequential writing, but one drive was particularly bad. The Crucial BX500 acted like the second partition couldn't use ANY space from the first partition as cache, but the first partition could use the second partition's space as cache. I'd never want to have multiple partitions on that drive (and its non-cached performance was worse than a bad mechanical drive).