Question: Tutorial for calculating power-on hours / host reads/writes?

apollos68

Jun 16, 2023
hi,
i'm considering a used M.2 purchase and would like to know how to work out the remaining 'life.' (i looked here for a tutorial on this, but didn't find one.) if anyone knows of one, please link me.
additionally, here's the data if someone just knows how and wants to explain, and i'll follow through.
thank you.

It's an 8TB M.2 SSD
1. hours: 12221
2. reads: 101078GB
3. writes: 82714GB
 
An 8TB SSD would have a rating of roughly 5000 TBW. The 82TB already written has barely scratched the surface, so to speak. But I agree with Lutfij in that buying a used SSD is not a good idea unless you personally know its history.
 
Without seeing the SMART data, you have no way to come up with a decent estimate of lifespan. That drive could have 10,000 reallocated sectors already, or none. You can calculate what it SHOULD be based on the number of GB written versus the max TBW from the specs, but it's been shown that those ratings aren't entirely accurate: drives can last long past that point, and drives can also suddenly degrade long before reaching that number.

What you need is a screenshot from something like CrystalDiskInfo that shows the drive's serial number, the estimated health the firmware has calculated, and all the SMART attributes. If it has any reallocations, "media and data integrity errors," or "error information log entries" at all, I wouldn't buy it, and with that much data written in a short time, 95% health would be acceptable.

(My 1TB main OS drive has been on for a little more than 11,000 hours with only 32TB written, and is at 98% health. The drive you're looking at has had 82TB written in only 1,000 hours more, so it has been relatively heavily used, but it has a much higher expected lifespan due to its size.)
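As a rough sketch of that written-vs-rated estimate (the 5000 TBW figure is an assumed ballpark for an 8TB drive, not a published spec, and this ignores reallocations and other SMART red flags):

```python
# Rough SSD wear estimate from total TB written vs. an assumed endurance rating.
def wear_estimate(written_tb: float, rated_tbw: float, power_on_hours: float):
    pct_used = written_tb / rated_tbw * 100
    # Hours the drive could keep going at the same average write rate.
    remaining_hours = power_on_hours * (rated_tbw - written_tb) / written_tb
    return pct_used, remaining_hours

# Numbers from the listing: 82.7TB written, 12,221 power-on hours.
pct, hours = wear_estimate(written_tb=82.7, rated_tbw=5000, power_on_hours=12221)
print(f"~{pct:.1f}% of rated endurance used; "
      f"~{hours / 8760:.0f} years of power-on left at the same rate")
```

Which is why the writes alone barely matter here; only the SMART error counters would change the verdict.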

The SSD's health percentage is actually calculated by the controller firmware and reported by software. The firmware knows what the lifespan and max TBW ought to be, as well as intimate details like the wear leveling activity that aren't reported anywhere.

No SSD will ever reach its MTBF and it's a ridiculously irrelevant number. 1.6 million hours is over 182 years. And that's the MEAN time between failure so they're claiming many will last even longer. If you consider that most may last 5 to 10 years, they're talking about others that would last almost 350 years. Is that "there's still one sector that's readable and writable"?
 
thank you for the response.
how about this one?

Corsair MP600 PRO NH 8TB
hours:13,359
reads: 2,364,196,848
writes: 22,275,097,967
mtbf: 1,600,000
The information you're looking at isn't complete. MTBF is meaningless given the century+ that it indicates. You need the rating for TBW (endurance) for the drives, and ideally the SMART data. For that one, endurance is 6000 TBW but it's not clear what the read/write numbers you provided are. Kilobytes, bytes, MB? Either way it had almost 10 times as many writes as it did reads, which could indicate that someone was trying it out and benchmarking and decided it wasn't fast enough at writes and sent it back, but the power-on hours indicates otherwise.

If it's bytes, then it only had 22GB written to it which is barely anything for any drive. If it's kilobytes then it's 22TB, which is still not much with a rating of 6000. But 13,359 power-on hours with that little usage means it was just sitting in a system for nearly two years virtually unused. I don't think I trust a drive that was refurbished after almost 2 years of power-on, whether it was actively being used or not, and with such low read/write numbers and that much use I'd question whether they're being honest about that usage level.

In short, if you don't have those details in full, you're not going to get a reliable answer. But you can do the math yourself on how much usage was done compared to the TBW rating.
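As a sketch of doing that math yourself under each plausible interpretation of the ambiguous counter:

```python
# The raw 'writes' counter from the listing is ambiguous. Show what it would
# mean under each plausible unit, against the 6000 TBW endurance rating.
raw_writes = 22_275_097_967
rated_tbw = 6000  # Corsair MP600 PRO NH 8TB endurance rating, in TB

interpretations = {}
for unit_name, unit_bytes in [("bytes", 1), ("KB", 1_000), ("512-byte sectors", 512)]:
    written_tb = raw_writes * unit_bytes / 1e12
    interpretations[unit_name] = written_tb
    print(f"If units are {unit_name}: {written_tb:,.2f} TB written "
          f"({written_tb / rated_tbw * 100:.2f}% of rated endurance)")
```

Under every interpretation the endurance used is a fraction of a percent, which is exactly why the power-on hours are the suspicious part, not the writes.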
 
hi, ok, t.y. for the response.

here's a screen shot of the drive stat:
question
1. based on that stat, are the 'host reads/writes' in GB, TB, or what?

p.s. please tell me the math formula, e.g. 'A' divided by 'B' times 'C' equals 'estimated lifetime.' that way i can calculate it myself.

thank you,

 
The maker quotes an endurance rating as a number of TB written (TBW) for the drive. TechPowerUp lists this information in the performance panel of their summary. https://www.techpowerup.com/ssd-specs/corsair-mp600-pro-nh-8-tb.d1162

So, data units written of roughly 1.7 petabytes would indicate that it's about 1/3rd worn.

This site has a breakdown of how the numbers are calculated. https://storedbits.com/tbw-dwpd-mtbf/

The only nebulous thing in the description is the "write amplification factor" (WAF). NAND flash is written in pages but can only be erased in much larger blocks, so if your drive wants to write into a block that already holds data, the drive must
1. copy the still-valid data out,
2. erase the whole block,
3. write the old and the new data back.
This means one host write turns into somewhat more than one NAND write. The manufacturer hides this multiplier.

This formula gives the calculation (write amplification reduces endurance, so WAF divides rather than multiplies; over-provisioning helps mostly by keeping the WAF down):
TBW = (NAND P/E cycle rating) × (Total Capacity) ÷ (WAF)
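A worked example of that endurance estimate; the P/E cycle count and WAF used here are assumptions, since vendors rarely publish either:

```python
# Worked TBW estimate. The P/E cycle count and write amplification factor
# (WAF) below are assumed values, not published specs.
capacity_tb = 8     # drive capacity in TB
pe_cycles = 1000    # assumed program/erase cycles for consumer TLC NAND
waf = 1.3           # assumed write amplification factor

# Host-visible endurance: raw NAND endurance divided by the amplification.
tbw = capacity_tb * pe_cycles / waf
print(f"Estimated endurance: ~{tbw:.0f} TBW")
```

With those assumed inputs the estimate lands near Corsair's published 6000 TBW rating, but both inputs are guesses, which is why the published rating is the number to use.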
 
So the "units" here are NVMe data units, and each data unit is 1,000 sectors of 512 bytes, i.e. 512,000 bytes, not a single 512-byte sector. So 3,491,153,447 data units times 512,000 bytes equals 1,787,470,564,864,000 bytes. You can ignore the manual math given that you have the software here, and it's indicating 1.78PB has been written to the drive. The rating of the drive is 6PB (6000TB), so it has already had nearly a third of its expected lifetime used up, and in less than two years.
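As a sketch of that conversion (the data-unit size comes from the NVMe SMART/Health log definition):

```python
# NVMe SMART/Health reports reads and writes in "Data Units" of
# 1,000 sectors x 512 bytes each.
DATA_UNIT_BYTES = 1_000 * 512  # 512,000 bytes per data unit

units_written = 3_491_153_447  # Data Units Written from the screenshot
bytes_written = units_written * DATA_UNIT_BYTES

print(f"{bytes_written / 1e15:.2f} PB written")  # prints "1.79 PB written"
```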

You'll notice the "percentage used" is 32%. Software would probably be showing the health as "68%" or thereabouts. That's pretty heavy usage, but good enough to continue using it if it were something you already owned. It would likely give you a lot of remaining life, but I wouldn't pay a huge amount for it.

I also notice there are 3 Error Information Log Entries, which is probably not bad considering the amount that it has been used, but it does indicate that it has had to fix problems at some point.

When you provided the data, did you manually type it while looking at the SMART data or did you copy it from what someone else provided? Your post said "reads: 2,364,196,848" but that's the number of read commands. The actual number of bytes read (data units read) is 23.6PB.

So as I mentioned, you have the software and in this case it gives you what you need (as would Crystal DiskInfo and others like it). The "health" is related to the "percentage used" which is basically the number of writes versus the rated endurance. Subtract the percentage used from 100%, and that's how healthy it is. You judge whether the cost is low enough to take the risk of it lasting long enough, and that's where you have to consider both the written bytes and the number of hours.

So we know it's about 32% used, which indicates the cells are pretty worn but still working well within threshold and will probably continue to do so for at least 3 years even if you continued using it at the same level that it was before. You can't really know how it was being used but you can figure out the average rate they were writing.
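A minimal sketch of that linear extrapolation, using the 32% figure from the SMART log (it assumes wear keeps accruing at the same rate, which is the best you can do without knowing the drive's history):

```python
# Linear remaining-life extrapolation from NVMe "percentage used".
percentage_used = 32     # from the SMART log
power_on_hours = 13_359  # from the SMART log

# If 32% of endurance took 13,359 hours, the remaining 68% takes
# proportionally longer at the same write rate.
remaining_hours = power_on_hours * (100 - percentage_used) / percentage_used
print(f"~{remaining_hours / 8760:.1f} years of power-on left at the same rate")
```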

1,787,470,564,864,000 bytes divided by 13,359 hours is about 134GB written per hour on average. That's very heavy usage: my primary drive averages 2.8GB per hour and I really don't even use it that heavily, and some of that was benchmarking (it has lost 2% health in a year and a half). So whoever owned this drive was writing to it at roughly 50 times a typical desktop rate, presumably as a scratch, cache, or capture drive. At a normal desktop rate the remaining 68% would take far longer to burn through than the first 32% did, but those cells have already absorbed a lot of wear in under two years.

You don't want that drive unless it's nearly free. It has been hammered for two years straight, and heavily worn NAND can start degrading unpredictably well before the endurance math says it should.

That sounds like my drive isn't really aging hard, but the TBW rating is only 600TBW, 1/10th of that Corsair drive. Corsair is likely being very optimistic while Western Digital was conservative in their ratings.
 
I'd be willing to buy refurbished drives from reputable sellers who do that sort of thing specifically, for certain purposes that don't include primary storage of anything, if the numbers were a lot better than those on this drive. I wouldn't really trust just a "used" drive even with good numbers or one from anyone, certainly not a random seller, especially knowing now that SMART data can be reset at any time.
 
AMERICAN DOLLARS? In the US? For a drive that has 32% of its rated lifespan used up? A brand new one WITH heatspreader is only $640 right this minute (less than the one with no heatspreader). Prices should depreciate MUCH faster than the health of the drive, not almost at the same rate. That thing should be $200, at the extreme highest, with capacity still giving it a premium, and I personally wouldn't pay more than $50 for it knowing I'd probably need to replace it not long after.
 
MTBF is often misinterpreted. It should be used to estimate the likelihood of failure across a large population of equivalent drives, not the lifespan of any single drive.

For example, if you have a population of 1,000 of these SSDs, on average you can expect 1 failure somewhere in that fleet every 1,600 hours. That's what "mean time between failures" means.
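A quick sketch of how that fleet math scales with population size:

```python
# MTBF describes a fleet's failure rate, not one drive's lifespan:
# expected time between failures shrinks as the fleet grows.
mtbf_hours = 1_600_000  # manufacturer's claimed MTBF

failure_interval = {n: mtbf_hours / n for n in (10, 1_000, 10_000)}
for n, hours in failure_interval.items():
    print(f"{n:>6} drives: expect roughly 1 failure every {hours:,.0f} hours")
```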
 
That doesn't make any sense included as part of consumer products, if it requires nonsense math to be useful. What if I only have 10? One failure every 160,000 hours? What if I have 10,000? A failure every week? What's the use of a spec whose relevance varies depending on how many of them you buy?
 
Devices are tested, abused, and made to fail, and the results from that test sample are analysed. The MTBF is extrapolated from them.

The chances are your drive will outlive its usefulness but there is a risk it will fail on the first power on. This is quantified by the MTBF. Mean = average. You need to look at the whole sample set/production run in hindsight to see how meaningful or meaningless MTBF is.

One failure every 160,000 hours? Possible but improbable to observe: your 10 drives are a tiny sample out of a production run of hundreds of thousands or millions, far too small for the statistic to mean much.

That drives will fail is a given. Backups are essential.