SSD Benchmarks Hierarchy: We've tested over 100 different SSDs over the past few years, and here's how they stack up.

Maybe it is a nice idea for all SSD makers to allow the users to underclock the SSD controller for lower power consumption... I will send them all emails for this.
It would be interesting if there were a standard NVMe command set for querying & configuring SSD power levels.

For all I know, there could be...

Sounds big enough that there could be all sorts of goodies hiding in there!
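
As far as I can tell, the base NVMe spec does define per-controller power state descriptors plus a Power Management feature (Feature ID 02h), and nvme-cli on Linux can poke at both. A rough sketch (assuming nvme-cli is installed, the drive is /dev/nvme0, and you run as root):

```python
# Sketch: query and set an NVMe drive's power state via nvme-cli (Linux).
# Assumes the nvme-cli package is installed and /dev/nvme0 exists; run as root.
import subprocess

DEV = "/dev/nvme0"

# List the power state descriptors (PS0..PSn) the controller advertises.
subprocess.run(["nvme", "id-ctrl", DEV, "-H"], check=True)

# Read the current power state (Feature ID 0x02 = Power Management).
subprocess.run(["nvme", "get-feature", DEV, "-f", "0x02", "-H"], check=True)

# Force a lower (higher-numbered) power state, e.g. PS2, which is about as
# close to "underclocking" the controller as the standard interface gets.
subprocess.run(["nvme", "set-feature", DEV, "-f", "0x02", "-v", "2"], check=True)
```

Whether a drive actually honors a manually forced state is another matter; with autonomous power state transitions (APST) enabled, the controller can move between states on its own.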
 
Maybe it is a nice idea for all SSD makers to allow the users to underclock the SSD controller for lower power consumption... I will send them all emails for this.
To be fair, if they stopped making only PCIe x4 SSDs, the problem would solve itself. Not every SSD has to max everything out, and a good chunk of the reason the P31 is so good is that its controller uses half the channels of the competition. If they weren't all pushing maximum performance and instead were shooting for efficiency, I'd imagine there would be even better drives, efficiency-wise, than the P31.
 
How do you mount SSDs on motherboards with their own integrated heatsink for the slot? Do you use a thermal pad, a blob of heatsink compound, or just rely on there being enough contact for adequate heat conduction?
The mobos I've used always have a thermal pad between the heatsink and the M.2 slot — often on both the top and bottom to give increased cooling to both sides. It's not a perfect solution, as the airflow to the bottom of the SSD will be negligible at best, but again you tend to need to hit the SSD really hard to heat it up.

Our sustained writes do 1MiB blocks for 30 minutes straight. It typically takes a few minutes to hit max temperature. On a PCIe 5.0 SSD, that means potentially >300GB of data written in a single blast to max out the temp. And even a slow RPM fan directing airflow across the SSD area can work wonders.
 
You list the 4TB TeamGroup A440 Pro, but I believe you tested the A440 Pro Special Series. I don't believe the non-Special series performs the same as the Special series. Can you confirm, since there's about a $30–40 difference?
I believe when we reviewed them, TeamGroup only called the A440 Pro "Pro" on the 1TB/2TB variants, and "Pro Special" on the 4TB model. So yes, we need "Special" on our 4TB entry. I'll go fix that. Thanks for the heads up.
 
The mobos I've used always have a thermal pad between the heatsink and the M.2 slot — often on both the top and bottom to give increased cooling to both sides. It's not a perfect solution, as the airflow to the bottom of the SSD will be negligible at best, but again you tend to need to hit the SSD really hard to heat it up.
One thing I look at is whether a SSD is single-sided or double-sided. In the case of the 2 TB Samsung 990 Pro that I just got at Amazon for $120, it appears to be single-sided. That should largely avoid the need for much in the way of "underside" cooling.
 
One thing I look at is whether a SSD is single-sided or double-sided. In the case of the 2 TB Samsung 990 Pro that I just got at Amazon for $120, it appears to be single-sided. That should largely avoid the need for much in the way of "underside" cooling.
Yeah, the Phison E26 drives are a concern, along with other double-sided solutions. I think most of the heat is in the controller rather than the NAND, though, so if the controller is on top it should be good.
 
Fine, I've added the two Optane drives where we have test results. They're in the 1TB table and have incredibly high QD1 random IO.
I think it was fair to leave them out, since both are discontinued, but I guess the people have spoken!

BTW, I know your IOPS chart is QD1, and that makes sense. But, if we want to talk peak IOPS, this drive has demonstrated 14M IOPS on a single Golden Cove P-core, in Linux:

...just some fun, useless trivia.
 
Maybe it is a nice idea for all SSD makers to allow the users to underclock the SSD controller for lower power consumption... I will send them all emails for this.
Another reason why I will only buy Samsung drives. Its low-power mode is wonderful in notebooks, and it can be just as quick as the normal power mode.
 
While I'm certain you spent a lot of time testing these M.2s, synthetic tests are pretty meaningless to consumers and real-world use. No different than GPU testing that only publishes synthetic results. The following from TechPowerUp is an example of some of the best testing I've seen from anyone, and it's certainly relatable to most people for their purchasing.

 
While I'm certain you spent a lot of time testing these M.2s, synthetic tests are pretty meaningless to consumers and real-world use. No different than GPU testing that only publishes synthetic results. The following from TechPowerUp is an example of some of the best testing I've seen from anyone, and it's certainly relatable to most people for their purchasing.

I happen to really like the synthetics in Tom's SSD testing, since they show things like how IOPS scale with queue depth and give you plots showing how that and sustained write performance directly compare between competing drives. They also provide average & peak power usage + power-efficiency. That's the good part.

For the bad, I'll have to agree that Tom's 50 GB folder copy isn't adequate to represent real-world usage. They also run some prepackaged test suites, like PCMark and 3DMark, but I've never seen them demonstrate a correlation between those scores and any real-world usage metric.

I have some recollection of seeing them test boot times, application loading, and game loading times in the past. However, it seems they've gotten dropped from the standard test suite.

I'm also sad to see no direct temperature measurement in Tom's SSD tests (at least, it's not in the latest review of Gigabyte's PCIe 5.0 drive, and I don't recall seeing it in others I've looked at recently). It'd be great to see plots of temperature measured over time, as that could be used to infer thermal throttling.

I would suggest that TechPowerUp's 9-game loading time benchmarks are a little excessive. If you happen to be playing one of the titles they tested, I can understand wanting to know how much it'll benefit from upgrading your SSD, but most of those games' loading times probably correlate well with each other. In fact, just from eyeballing those plots, I think if you culled the SATA drives from the plots, you'd probably find less than about 10% variation between the fastest and slowest drives, because NVMe is so fast that it's really eliminated I/O as a bottleneck.

Thanks for weighing in. I'll just tag @JarredWaltonGPU , in case he finds any of this feedback useful. However, I'd hate to lose some of the things I really like about Tom's synthetic testing for the sake of a bunch of game loading times that hardly vary at all between NVMe drives.
 
Okay, so I decided to do a little more analysis and put some numbers behind my assertions about game loading times.

I've entered data from those plots into a little spreadsheet, and here's what I found.

Title                 | Total Spread | NVMe Spread | SATA Spread | Slowest NVMe vs. Fastest SATA
Age of Empires IV     | 36.4%        | 21.5%       | 11.7%       | 0.5%
Deathloop             | 21.7%        | 3.9%        | 14.6%       | 2.2%
Doom Eternal          | 65.4%        | 11.5%       | 14.3%       | 29.8%
F1 2022               | 10.8%        | 4.8%        | 5.1%        | 0.5%
Red Dead Redemption 2 | 14.9%        | 5.8%        | 7.1%        | 1.4%
Unreal Engine 5       | 43.2%        | 23.4%       | 12.0%       | 3.6%
Watch Dogs Legion     | 36.2%        | 23.8%       | 10.0%       | 0.0%

Hopefully it's self-explanatory. Spread is the amount of benefit the fastest drive provides over the slowest. The next two columns show how much variation there is within the NVMe drives and within the SATA drives, respectively. The last column shows the gap between the slowest NVMe drive and the fastest SATA drive.
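
In case anyone wants to reproduce the numbers, here's roughly how I computed them (a sketch; my spread definition is (slowest - fastest) / fastest, and the loading times below are made-up placeholders, not TechPowerUp's data):

```python
# Sketch of the "spread" math used in the table above.
# Assumption: spread = (slowest - fastest) / fastest, as a percentage.
def spread(times):
    return (max(times) - min(times)) / min(times) * 100

# Hypothetical loading times in seconds, grouped by interface type.
nvme_times = [18.2, 19.1, 22.4]
sata_times = [23.0, 24.8, 25.7]

total = spread(nvme_times + sata_times)
nvme = spread(nvme_times)
sata = spread(sata_times)
# Gap between the slowest NVMe drive and the fastest SATA drive.
gap = (min(sata_times) - max(nvme_times)) / max(nvme_times) * 100

print(f"Total {total:.1f}%  NVMe {nvme:.1f}%  SATA {sata:.1f}%  Gap {gap:.1f}%")
```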

My guess as to the reason why there's not more often a gap is that some of the slower NVMe drives are probably also DRAM-less, which hurts their performance even more. With game loading not being very I/O bottlenecked, those drives don't get to do enough big sequential access to compensate for their weaknesses in slow random I/O.

The following two titles had so little spread between all drives that there was no clean segmentation between the NVMe and SATA models. All I think we can really say is that they're just not I/O-limited - it hardly matters what SSD you're running them from.

Title          | Total Spread
Cyberpunk 2077 | 10.3%
Far Cry 6      | 14.0%

Anyway, my recommendation would be to include loading times from a couple of the games with the most NVMe spread, with the caveat to readers that these are worst-case examples and most games will show far less variation.
 
Application load/boot times are a virtually worthless benchmark when the vast majority of results fall within the margin of error or differ by low single-digit seconds. PCMark and 3DMark are basically just compilations of trace tests from real-world activities, which means they're actually rather good benchmarks for the respective workloads.

The MySQL testing TPU does is definitely the best of their application testing even though they don't show comparison numbers (I wish they would pick 2-3 drives to compare to for each review). I do also like the copy testing being separated out by type as it allows readers to compare specific workloads.

IOPS and latency tend to be the most important things to look at for comparing SSD performance, given how close performance tends to be. I'm not sure how many real-world consumer applications would actually show variation here, though I believe it could be measured with perfmon.
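
For anyone curious, Windows' built-in typeperf CLI can log the relevant PhysicalDisk counters without opening the perfmon GUI; a minimal sketch (assuming the stock English counter names, which differ on localized systems):

```python
# Sketch: sample disk IOPS and latency on Windows via the built-in
# typeperf command-line tool.
import subprocess

counters = [
    r"\PhysicalDisk(_Total)\Disk Transfers/sec",      # IOPS
    r"\PhysicalDisk(_Total)\Avg. Disk sec/Transfer",  # latency in seconds
]
# -si 1 = one-second sampling interval, -sc 60 = sixty samples, CSV to stdout.
subprocess.run(["typeperf", *counters, "-si", "1", "-sc", "60"], check=True)
```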

I would absolutely like to see temperature testing, and I think it should be feasible. There should be testing of the drive as it comes, and then with a standardized heatsink (while a motherboard heatsink is good, I'd rather it be a third-party one that could continue to be used no matter what motherboard is in use).
 
For the hierarchy, we chose to sort by synthetic metrics just because those are readily accessible and reliable, and specifically used QD1 random testing as that's very difficult to "cheat." It does generally correlate with how well a lot of real-world workloads will respond to the various SSDs, and while it's certainly not perfect (nothing is), I would generally trust its ranking over 3DMark or PCMark as the sole metric.

We do have other tests that we've conducted on every SSD, and the reviews are linked from the specs column in the table. The issue is that we can't show a table of every metric — it would look ugly and open up our data to theft from others that don't do the testing.

Paul and I argued about using a larger collection of data for the performance metric. I did a geometric mean of QD1 random throughput (converting IOPS to MB/s), sequential QD8 throughput, 3DMark and PCMark throughput, and the DiskBench 50GB copy MB/s. The overall rankings aren't all that different from what you see in the Copy charts, FWIW, though there's a bit less of a spread — 3DMark and PCMark basically skew toward a lot of the drives being relatively similar in performance. (And if you look at the article, you can guess who "won" the argument. I've told Paul if he dies, I'm putting the column back in.)
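
In sketch form, that composite is just a geometric mean (assuming 4KiB transfers for the QD1 random conversion; the example numbers below are made up for illustration, not our actual data):

```python
# Rough sketch of the composite score: geometric mean of several
# throughput metrics, with QD1 random IOPS converted to MB/s first.
# Assumes 4KiB transfers for the random test; numbers are illustrative.
from statistics import geometric_mean

qd1_random_iops = 20_000                          # QD1 4KiB random IOPS
qd1_random_mbps = qd1_random_iops * 4096 / 1e6    # -> ~82 MB/s

metrics_mbps = [
    qd1_random_mbps,  # QD1 random throughput
    7_000.0,          # sequential QD8 throughput
    650.0,            # 3DMark storage throughput
    580.0,            # PCMark storage throughput
    2_400.0,          # DiskBench 50GB copy
]
score = geometric_mean(metrics_mbps)
print(f"Composite: {score:.0f} MB/s-equivalent")
```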

For temps, we have data on a lot of the recent drives that I've tested — but nothing for all of the previous testing done by Sean. I think some of the early drives that I tested also don't have temp data. Anyway, the reality is that temps will depend heavily on your particular motherboard and heatsink. I saw some places ranting about how the PCIe 5.0 drives would hit 85C and then throttle. That's only likely to happen if you run the drive bare or in a PC with horrible airflow.

Here are the results of our testing of the Teamgroup Z540 2TB as an example (this is during the 30-minute sustained write test, which is followed by increasingly long idle periods to let the SSD recover).

[Image: Teamgroup Z540 2TB temperature log during sustained writes]

The maximum temperature is consistently 72C, nowhere near the throttling point. But does that represent the quality of the drive itself, or is it more indicative of our test PC? I'd say it's the latter. The test PC has plenty of airflow and the SSD is under a decently sized heatsink, and there's not even a graphics card installed (we use the integrated Intel Graphics on the 12900K).

But in my mind, anyone buying a PCIe 5.0 SSD should absolutely have at least this level of cooling for their SSD. Don't buy a PCIe 5.0 drive and plan on running it in an M.2 slot that's buried under the GPU, or without a heatsink!

What about other SSDs? Well, the PCIe 5.0 drives are some of the hottest-running we've encountered. Most other drives run at substantially lower temperatures! For example, here's the Crucial T700 2TB data:

[Image: Crucial T700 2TB temperature log during sustained writes]

It peaked at 52–53C, and it only got there briefly during peak write speeds. Under the longest sustained writes, it was only at about 37C. I'll have a chat with Shane about at least including information on the highest temperatures we saw during testing, though I'm not sure we'll create a temp chart like the above for every drive.

It takes 10 minutes of futzing about with the data for each drive, but if you really find it useful, let me know. I can work on making them for future reviews at least. But again, it's more one of those anecdotal-evidence things, an indication of SSD temps in our particular test PC rather than a true indication of how the drive runs everywhere. Stick them in a mini-ITX case with restricted airflow and temps would naturally be a lot higher.
 
Here are the results of our testing of the Teamgroup Z540 2TB as an example (this is during the 30-minute sustained write test, which is followed by increasingly long idle periods to let the SSD recover).

[Image: Teamgroup Z540 2TB temperature log during sustained writes]

The maximum temperature is consistently 72C, nowhere near the throttling point.
Uh, whether they say so or not, that perfectly-flat ceiling sure looks like throttling to me! It would be interesting to line it up with a performance graph, because I'll bet you'd see performance fall off whenever it hits that ceiling.

It takes 10 minutes of futzing about with the data for each drive, but if you really find it useful, let me know. I can work on making them for future reviews at least.
I'll tell you what I think is important, and then perhaps you can think about the best way to address these concerns in your reviews.

Heat affects both drive performance and longevity. Therefore, readers will likely want to know how drives compare in their sensitivity to heat and how much they need good airflow. Even though your setup differs from theirs, they'll at least know which drives are more sensitive to throttling and which drives want a better heatsink and/or airflow.

I guess another reason to look for signs of throttling is that it suggests paying yet more attention to cooling could likely unleash even more performance.

But again, it's more one of those anecdotal-evidence things, an indication of SSD temps in our particular test PC rather than a true indication of how the drive runs everywhere. Stick them in a mini-ITX case with restricted airflow and temps would naturally be a lot higher.
Understood, but you know that's exactly how some people are going to use them - especially if they don't know otherwise! Or in laptops, for drives that don't come with their own heatsink.
 
Uh, whether they say so or not, that perfectly-flat ceiling sure looks like throttling to me! It would be interesting to line it up with a performance graph, because I'll bet you'd see performance fall off whenever it hits that ceiling.
There's a sustained load, writing data as fast as possible, and the bottleneck at this point is a combination of NAND and controller. Initially, these Phison E26 drives write in pSLC mode, so the Z540 spits out nearly 12 GB/s of data to the NAND for about 20 seconds (~250 GB written, give or take). Then the cache gets filled up and it writes directly to the NAND in TLC mode at around 3.7 GB/s. This goes on for roughly seven more minutes (~420 seconds) and writes another ~1,500 GB.

At this point, all of the "clean" NAND has been used, and this is when the drive enters a "folding" state where it has to clean out the pSLC cache and write that to the NAND in TLC mode. So things slow down and the controller + NAND try to catch up while additional writes are still happening. On the Z540, it's another ~220 seconds to catch up. Interestingly, it took a bit longer for the slower E26 drives, possibly due to having slower 10 GT/s NAND or possibly due to firmware differences.

But after that point, the NAND can then take data at 3.7 GB/s again pretty much indefinitely. (At least for sequential writes.) This puts a steady load on the controller and so it heats up to ~72C max and stays there until the load is removed.

All of the E26 drives (with proper cooling) behave this way. Take away the cooling and run a bare drive, and the steady state performance collapses to around 400 MB/s or something. I don't have the numbers right here, but it's really bad and the controller hits 85C before you see the throttling kick in.
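
To put those phases in one place, here's a toy model using the approximate rates and durations above (just arithmetic on the numbers already quoted, nothing more):

```python
# Toy model of the E26/Z540 sustained-write phases described above,
# using the approximate rates (GB/s) and durations (s) from this post.
phases = [
    ("pSLC cache burst", 12.0, 20),          # ~240 GB
    ("direct TLC writes", 3.7, 420),         # ~1,554 GB
    ("folding (cache cleanup)", None, 220),  # throughput dips while folding
    ("steady-state TLC", 3.7, None),         # ~3.7 GB/s indefinitely
]

total_gb = 0.0
for name, rate, secs in phases:
    if rate and secs:
        gb = rate * secs
        total_gb += gb
        print(f"{name:<26} {gb:>7.0f} GB in {secs} s")
print(f"Written before the folding state ends: ~{total_gb:.0f} GB plus folding writes")
```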
I'll tell you what I think is important, and then perhaps you can think about the best way to address these concerns in your reviews.

Heat affects both drive performance and longevity. Therefore, readers will likely want to know how drives compare in their sensitivity to heat and how much they need good airflow. Even though your setup differs from theirs, they'll at least know which drives are more sensitive to throttling and which drives want a better heatsink and/or airflow.

I guess another reason to look for signs of throttling is that it suggests paying yet more attention to cooling could likely unleash even more performance.

Understood, but you know that's exactly how some people are going to use them - especially if they don't know otherwise! Or in laptops, for drives that don't come with their own heatsink.
I'm actually not sure what the "ideal" temperatures are for NAND. I know HDD studies (Backblaze and others) have shown that components that are too cold can be just as detrimental as too hot. Granted, HDDs have moving parts, but I think there may be some correlation between temperatures and NAND program/erase cycles. I suspect there's a pretty wide range of about 30C to 60~70C where everything is fine.

Except, all of the above temperatures are actually just the controller temp. I don't think the NAND gets nearly as hot, so really it's just the controller getting warm. Apparently, the E26 doesn't like getting as hot as Intel and AMD CPUs do, and thus throttles at a much lower point. That, or maybe protecting the NAND and other components is what made them set the throttle point lower. But either way, all indications are that any SSD running at ~75C or lower on the controller is nowhere near throttling and should be perfectly fine.

Anecdotally, I can also tell you that I've tested a few different setups on the SSD testbed. The mobo M.2 slot, the M.2 slot on the expansion card (which is what I have to use in order to do power testing and easily swap SSDs), and an SSD in an M.2 slot with a fan blowing directly at it all behaved slightly differently. The mobo M.2 slot with the smaller heatsink got hottest by a few degrees Celsius, then the SSD with a fan was next, and finally the (very large) heatsink on the expansion card had the coolest temps. But of those three, none were close to the throttle point.

Put an E26 drive into either the expansion card or the primary M.2 mobo slot without a heatsink and without a fan blowing at it and you'll get maybe 30 seconds before major throttling kicks in. Which means it can often get through shorter workloads without issues, but synthetic tests and write saturation trip the throttling. (A newer firmware revision helps reduce the throttling a bit, and prevents full system crashes, but you still don't want to run these PCIe 5.0 drives without a heatsink or active cooling!)
 
There's a sustained load, writing data as fast as possible, and the bottleneck at this point is a combination of NAND and controller. Initially, these Phison E26 drives write in pSLC mode, so the Z540 spits out nearly 12 GB/s of data to the NAND for about 20 seconds (~250 GB written, give or take). Then the cache gets filled up and it writes directly to the NAND in TLC mode at around 3.7 GB/s. This goes on for roughly seven more minutes (~420 seconds) and writes another ~1,500 GB.
I hear you, but it's too flat and always shoots up until it hits that exact threshold, then holds it, never going over. That really looks like temperature throttling. It takes a control system to achieve such precise behavior, and it sure looks like temperature is the variable they're controlling for. I'll bet plots of any other variable aren't as level as those plateaus, during those same time periods.

If it were a byproduct of some other energy-intensive activity, then you'd expect to see temperature continuing to build, gradually. Or gradually approach the plateau with a nice, long rolloff and maybe undulate a bit.

All of the E26 drives (with proper cooling) behave this way. Take away the cooling and run a bare drive, and the steady state performance collapses to around 400 MB/s or something. I don't have the numbers right here, but it's really bad and the controller hits 85C before you see the throttling kick in.
Pull the heatsink off an Intel i9 CPU and it'll throttle down to a few hundred MHz. That doesn't mean it's not throttling even with the heatsink - just that the throttling isn't nearly as bad.

I'm actually not sure what the "ideal" temperatures are for NAND. I know HDD studies (Backblaze and others) have shown that components that are too cold can be just as detrimental as too hot.
I haven't exactly seen the claim about "too cold", but I did see an interesting table showing that you get better retention from writing the data at higher temperatures and then storing the drive at lower temperatures. There's certainly an upper limit - not like you want the drive to be as hot as possible, during writes. Plus, heating it up affects the other cells in the drive, causing them to lose charge faster.

Granted, HDDs have moving parts, but I think there may be some correlation between temperatures and NAND program/erase cycles. I suspect there's a pretty wide range of about 30C to 60~70C where everything is fine.
"fine" = warranty-covered. However, if you want to get the most longevity, then you'd have to control conditions more tightly. The NAND makers know the details and tell the drive developers. We just need to look for bits of that information that get published or leak out.

Except, all of the above temperatures are actually just the controller temp.
If true, I'll bet the NAND chips have embedded temperature sensors - it's just a question of the drives exposing them.

all indications are that any SSD running at ~75C or lower on the controller is nowhere near throttling and should be perfectly fine.
If we had the temperature + performance plots lined up, it would be much more convincing. I know it's work for you guys, so it's not like I'm demanding that you do it. My point is just that it'd be better if you could show us, rather than tell us.

So much of this discussion could've been avoided if I could just see the data for myself. That said, the NAND vs. controller temperature is a good point and something I wish we could get confirmation of.

Thanks for all your testing & taking the time to reply. I do genuinely respect & appreciate your dedication.
 
If we had the temperature + performance plots lined up, it would be much more convincing. I know it's work for you guys, so it's not like I'm demanding that you do it. My point is just that it'd be better if you could show us, rather than tell us.
It may not line up with what was shown above, but the snippet in the review seems like it would match the beginning of the temperature graph (I'm making this guess based on the timing of the write hole mentioned in the review and the first temp graph above):
[Image: sustained-write performance graph snippet from the review]
 
I hear you, but it's too flat and always shoots up until it hits that exact threshold, then holds it, never going over. That really looks like temperature throttling. It takes a control system to achieve such precise behavior, and it sure looks like temperature is the variable they're controlling for. [...]

If we had the temperature + performance plots lined up, it would be much more convincing. [...] My point is just that it'd be better if you could show us, rather than tell us.

So much of this discussion could've been avoided if I could just see the data for myself. That said, the NAND vs. controller temperature is a good point and something I wish we could get confirmation of.

Thanks for all your testing & taking the time to reply. I do genuinely respect & appreciate your dedication.
I wonder if I can attach a CSV here? Hmmm... Seems like I can!

So these are the Teamgroup Z540 2TB files. The "Iometer Temps" file is the log from HWiNFO64, while the instWS-30min file is the Iometer results. Note also that Iometer logs every second, while HWiNFO64 logs every two seconds (but there's more jitter there). And you need to massage the data for the time field from HWiNFO64 if you open it in Excel — it gets converted (incorrectly) into some other time value.

Besides those files, you need to know that the temperature logging covers the full Iometer sequence, about two hours, during which the following happens: Sit idle for 10 seconds. Write 1MiB blocks for 30 minutes (this is the instWS-30min.CSV file). Idle for 30 seconds. Write for 5 minutes. Idle for 60 seconds. Write for 5 minutes. Idle for 5 minutes. Write for 5 minutes. Go idle for 30 minutes (but temps kept logging until I stopped HWiNFO64, which was more like 45 minutes or so).

Some SSDs will have multiple sensors that get picked up by HWiNFO64. I know I've seen plenty with two fields, but also at least some with three fields. On most, the first two temperatures are identical, while if there's a third, it's much lower. That could be NAND temps, but it's not labeled other than "Temp 3" or whatever.
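
If anyone grabs the files and wants to line the two logs up on one time axis, a pandas sketch along these lines should work (the column names and the HWiNFO64 time format here are guesses; check the actual CSV headers first):

```python
# Sketch: align the HWiNFO64 temperature log (every 2 s) with the
# Iometer throughput log (every 1 s) and plot them together.
# Filenames, column names, and the time format below are assumptions.
import pandas as pd
import matplotlib.pyplot as plt

temps = pd.read_csv("Z540-Iometer-Temps.CSV")  # HWiNFO64 export
perf = pd.read_csv("instWS-30min.CSV")         # Iometer results

# Parse HWiNFO64's time column explicitly so nothing auto-converts it
# the way Excel does; "Time" and its format are hypothetical.
t = pd.to_datetime(temps["Time"], format="%H:%M:%S.%f", errors="coerce")
temps["sec"] = (t - t.iloc[0]).dt.total_seconds()
perf["sec"] = range(len(perf))  # Iometer logs once per second

fig, ax1 = plt.subplots()
ax1.plot(perf["sec"], perf["MiBps"], color="tab:blue")  # hypothetical column
ax1.set_ylabel("Write MiB/s")
ax1.set_xlabel("Seconds since start")
ax2 = ax1.twinx()
ax2.plot(temps["sec"], temps["Drive Temp [C]"], color="tab:red")  # hypothetical
ax2.set_ylabel("Controller temp (C)")
plt.show()
```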
 


@JarredWaltonGPU , I'm trying to understand the data in this review:

[Image: write-test temperature graph from the review]

Of course, the reason I'm having trouble is they never say what the green plot shows, but it's implied with their thermal image and the associated text:

[Image: FLIR thermal image from the review]


"We recorded a thermal image of the running SSD as it was completing the write test. The surface temperature of the heatsink 101°C, which very closely matches what the onboard thermal reporting shows."

For it to match the onboard thermal reporting, it should mean the green line is the controller's temperature and the red line is the hottest NAND chip's temperature.

Source:

 
For it to match the onboard thermal reporting, it should mean the green line is the controller's temperature and the red line is the hottest NAND chip's temperature.
It is this; W1zzard just forgot to add the line about the two thermal sensors in that review.

From their 980 Pro review:
Unlike most other SSDs, the Samsung 980 Pro has two thermal sensors, one inside the controller (green line) and another that measures the temperature of the flash chips (red line).