[SOLVED] Details for Sustained Sequential Write Performance Test

May 8, 2020
Hi,
I've been lurking in lockdown until now, but I'm coming out.....:-}

Thanks for your testing, and for the Forum.
It's a very good source of information.

Can anyone please tell me some more details on how the Sustained Sequential Write Speed Test is performed?
I've read "How we do the Tests", and it's a bit sparse on detail.
Also searched the Forum.

Specifically:
(1) How do you prepare the drive before?
Do you write over with fresh writes?
Leave it blank?

(2) What packet size do you use?
A lot of reviewers use 128KB, but not all.

(3) What Queue Depth?

(4) Do you just do 1 test, or do you repeat it to check?

(5) What frequency do you record the data at?
Your plots are solid lines, but they are actually composed of data points at each reading.
Do you do it time based (say once or twice every second)?
Or is it data based (say every 1 GB)?

(6) Have you always used Iometer to do it?

(7) Is it always for 15 minutes?

(8) What Over Provisioning do you use?
Is it the standard amount, as formatted?
I'm assuming it is, because you don't say anything about that.

I like the way that you report the output, in particular the graph that shows Gigabytes written as a function of time.
I think that's a better metric than the usual MB/sec plotted against data written in GB.

Thanks very much.

In addition, would it be possible to archive your output somewhere on your site?
A lot of the other reviewers do that, and it's very useful.

Again, thanks for your efforts, and helpful comments.

Best,
Alan Jarvis
Sheffield
 
Most are just happy with CrystalDiskMark's default battery of tests, specifically the top sequential test...which is where you can see if the drive performs as it should in known conditions.

If you want to transfer very large ISOs and files, expect the sustained large file write numbers to be much smaller: once the write cache is filled/saturated it becomes useless, and one sees an SSD's numbers drop from 400-500 MB/sec to 80-100 MB/sec...
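(A back-of-the-envelope sketch of that effect in Python; the cache size and the two speeds here are made-up illustrative numbers, not measurements from any particular drive:)

```
# Rough model of a large sequential write on a cached SSD.
# All figures below are illustrative assumptions, not real drive specs.
CACHE_GB = 20        # assumed SLC cache size
BURST_MBPS = 450     # assumed speed while the cache absorbs writes
FOLDED_MBPS = 90     # assumed speed once the cache is saturated

def write_time_seconds(total_gb: float) -> float:
    """Estimate seconds to write total_gb sequentially."""
    cached = min(total_gb, CACHE_GB)
    direct = max(total_gb - CACHE_GB, 0.0)
    return (cached * 1024) / BURST_MBPS + (direct * 1024) / FOLDED_MBPS

for size in (10, 50, 100):
    t = write_time_seconds(size)
    print(f"{size} GB -> {t:6.0f} s (~{size * 1024 / t:.0f} MB/s average)")
```

Under those assumptions a 10 GB copy averages the full burst speed, but a 100 GB copy averages barely over 100 MB/s, which is the cliff described above.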

(I doubt many home users even want to play with over-provisioning manually, preferring to let the individual drive manufacturers handle these tasks during the design stage.)

If you need the highest, highest sustained writes and IOPS, bring your checkbook and research Intel's NVMe PCIe slot drives that cost like $800-$1000 for 480 GB, $1900 for 960 GB, etc...
 
May 8, 2020

Nope, thanks for that, but that's not related to what I asked.

I asked if anyone had more details about the actual test.

You cannot actually use the results from the test properly unless you know what was done, especially not if the Reviewer doesn't test a really big range of SSDs.

As far as I know, this specific test is not done using CDM's software; I think it probably uses Iometer.

Tom's Hardware does not give many details about this specific test. I asked what I'd like to know, but the BAREST minimum is to know what Queue Depth was used. Some reviewers use QD1, others use QD32. I dunno what Tom's uses.

As I'm sure most people know, you need to know QD, as well as packet size, to relate transfer speed to IOPS.
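(For reference, the arithmetic is simple; a quick sketch with illustrative numbers, since neither the block size nor the QD here are Tom's actual settings:)

```
# Throughput = IOPS x block ("packet") size, so knowing the block size
# lets you convert between the two. Queue depth then ties IOPS to
# latency via Little's law. Example numbers are made up.
def iops(mb_per_s: float, block_kb: float) -> float:
    return mb_per_s * 1024 / block_kb

print(iops(500, 128))    # 500 MB/s of 128KB transfers -> 4000.0 IOPS
print(iops(500, 1024))   # the same 500 MB/s at 1MB blocks -> 500.0 IOPS

def outstanding_ios(iops_rate: float, latency_s: float) -> float:
    # Little's law: queue depth = completion rate x service time
    return iops_rate * latency_s

print(outstanding_ios(4000, 0.008))  # 4000 IOPS at 8 ms latency -> QD ~32
```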

Other reviewers, like AnandTech, TechPowerUp, Guru3D, and KitGuru, give more information. But I don't see it on the Tom's Hardware site.

I was hoping I'd get someone who knows about the test to reply, or perhaps the Reviewer who did it knows.
 
May 8, 2020
Just to give you a heads-up: the Editorial staff/testers don't usually like to give up their specifics, and they aren't often on the forums.

I see......or rather I do not really see.

And I am being constructive here, although fairly blunt; I think what I'm saying is valid.

This is not the usual state of affairs for technical forums that are hosted by a commercial (as in run-for-profit) organisation. And Tom's Hardware, both its full-time staff and its Reviewers, as well as the company that owns it, are profit oriented.

Which is OK, but then there is a responsibility to show that what is done is credible. After all, your income is from people who trust what you say, and use your links. And from advertisers who think that you seem credible. And the manufacturers that give you things to test expect a professional job to be done.

Which means being able to back up what they say, and to be approachable.

For instance, if you look at the Intel Forum, you usually get help from the actual expert involved.

On a different forum run by the company that makes the GS-911, a diagnostic tool, you either get help from staff, or else from very knowledgeable members, that actually addresses the question you asked.

I'm not trying to attack the person who had a go at responding, but his reply was not relevant to my question. I'm not saying he wasn't trying to be helpful, but it was completely off-topic. Perhaps he didn't read the question very thoroughly, or didn't understand it?

I've already asked this question in Comments on several Reviews, and I don't see any reply there.

So if Tom's Hardware full-time staff, or part-time staff (Reviewers), are not able, or willing, to answer what I think is a relevant question, then I am afraid it calls into question the credibility of any review that is done.

After all, if a mistake is made in a given one, and it's noticed, and queried, and you have no process of responding, then it might well be that a large number of the Reviews have issues.

If you compare this to Anandtech, or TechPowerUp, or some of the other Review sites, they do seem to be able to reply to queries.

I'm trying to be constructive here, so if there is indeed a way to query results, then sorry, I've missed it. But I tried in Comments, and on the Forum now. I also sent you a specific question via PM.

I can ask a few of the Manufacturers to request that you supply more complete information about how you test their products: I'm sure that they are interested in how their products get tested and reported on.
 

popatim

Titan
Moderator
Manufacturers would love to have influence on how we test products; that way they can steer reviews to even more favorable results. See where this is going? BTW, they don't have any influence here, though I've seen cases where they've been contacted about a glaring bug that should be fixed ASAP.

As for testing methodology, would you give yours away, one that took hundreds of hours, if not years, to perfect, to a competitor? And for free? So they can undercut you and take your job? I think not.

And then there is the whole "Who actually owns the process?" question: is it the property of the reviewer, or is it the website owner's intellectual property? Give that away and you could see a hefty fine or even jail time.

And to be clear, I am not a product reviewer, but those are some of the more obvious issues I can see.
 

seanwebster

Contributing Writer
Editor
Hi,
I've been lurking in lockdown until now, but I'm coming out.....:-}

Thanks for your testing, and for the Forum.
It's a very good source of information.

Can anyone please tell me some more details on how the Sustained Sequential Write Speed Test is performed?
I've read "How we do the Tests", and it's a bit sparse on detail.
Also searched the Forum.
Yeah, that needs to be updated; that was how the previous reviewer tested, afaik, a few years ago.

Specifically:
(1) How do you prepare the drive before?
Do you write over with fresh writes?
Leave it blank?

For testing the sustained sequential write speed I prep the drive by doing the following (a rough scripted sketch of these steps follows the list).
  • Secure erase
  • Boot from another drive into Windows
  • Format drive as GPT and NTFS
  • Delete the partition
  • Convert the drive to MBR and leave without a partition
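(Not Sean's actual tooling, just a rough, destructive sketch of the format/convert steps above, driving Windows diskpart from Python. The disk number is a placeholder, and the secure erase itself needs a vendor utility, so it is only noted as a comment:)

```
# DANGER: destructive to the selected disk. Scripts the format/convert
# steps above via Windows diskpart (run from an admin prompt).
# The secure-erase step is NOT done here; use a vendor utility first.
import os, subprocess, tempfile

DISK = 1  # placeholder disk number; verify with diskpart's `list disk`

# A second `clean` stands in for "delete the partition", since diskpart's
# GPT conversion can also add a hidden MSR partition that must go too.
SCRIPT = f"""\
select disk {DISK}
clean
convert gpt
create partition primary
format fs=ntfs quick
clean
convert mbr
"""

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write(SCRIPT)
    path = f.name
try:
    subprocess.run(["diskpart", "/s", path], check=True)
finally:
    os.remove(path)
```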
(2) What packet size do you use?
A lot of reviwers use 128kB, but not all.

(3) What Queue Depth?

(4) Do you do just do 1 test, or do you repeat it to check?

Run iometer script that entails:
10 seconds idle
10 seconds sequential write preconditioning at QD8 - 1MB block size
10 seconds idle
15 minutes 30 seconds sequential write at QD32 - 1MB block size
30 seconds idle
5 min sequential write at QD32 - 1MB block size
1 minute idle
5 min sequential write at QD32 - 1MB block size
5 min idle
5 min sequential write at QD32 - 1MB block size
30 min idle
5 min sequential write at QD32 - 1MB block size

Sometimes I run this twice (with another secure erase first) if the results seem strange for the hardware. Usually once gives me enough data and detail to use. I used to use 128KB, but some drives underperformed; 1MB is closer to real-world behavior anyway. I test QD1-128 4KB random read/write and QD1-32 128KB sequential read/write with iometer, too, which is where I get my peak sequential and random performance data. I may script a replacement test for ATTO one day soon.
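(If it helps to see the cadence at a glance, here is that schedule written out as data, with a rough tally of runtime; the throughput figure used for the data-written estimate is an arbitrary placeholder, not a measured number:)

```
# The write/idle cadence above, as (phase, seconds) pairs.
# "write" phases are 1MB sequential; the first is the QD8 precondition.
SCHEDULE = [
    ("idle", 10), ("write qd8", 10), ("idle", 10),
    ("write qd32", 15 * 60 + 30), ("idle", 30),
    ("write qd32", 5 * 60), ("idle", 60),
    ("write qd32", 5 * 60), ("idle", 5 * 60),
    ("write qd32", 5 * 60), ("idle", 30 * 60),
    ("write qd32", 5 * 60),
]

total = sum(s for _, s in SCHEDULE)
writing = sum(s for p, s in SCHEDULE if p.startswith("write"))
print(f"total wall time: {total/60:.1f} min, {writing/60:.1f} min of it writing")

ASSUMED_MBPS = 500  # placeholder sustained speed, purely illustrative
print(f"at {ASSUMED_MBPS} MB/s that is ~{writing * ASSUMED_MBPS / 1024:.0f} GB written")
```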

(5) What frequency do you record the data at?
Your plots are solid lines but they are actualy composed of datum points at each reading.
Do you do it time based (say once or twice every second?).
Or is it data based? (say every 1 GB?).
Data capture is logged every second from iometer. It is time (seconds) on the X-axis and MB/s on the Y-axis on the first two detailed charts. The 3rd-5th charts have GB on the Y-axis.
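(For anyone wanting to reproduce that style of chart from their own per-second log, a minimal sketch; the CSV layout here, with `seconds` and `mbps` columns, is an assumption about your own exported log, not iometer's native results file:)

```
# Plot MB/s vs. time, plus cumulative GB written vs. time, from a
# per-second log. Column names are assumptions for your own export.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("write_log.csv")  # expected columns: seconds, mbps

fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True, figsize=(8, 6))
ax1.plot(df["seconds"], df["mbps"])
ax1.set_ylabel("MB/s")

# One sample per second, so cumulative MB is just a running sum.
df["gb_written"] = df["mbps"].cumsum() / 1024
ax2.plot(df["seconds"], df["gb_written"])
ax2.set_ylabel("GB written")
ax2.set_xlabel("seconds")

plt.tight_layout()
plt.show()
```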

(6) Have you always used Iometer to do it?
Yep

(7) Is it always for 15 minutes?
Yep, but I didn't have it run the idle rounds to check cache recovery until maybe half a year ago.

(8) What Over Provisioning do you use?
Is it the standard amount, as formatted?
I'm assuming it is, because you don't say anything about that.

I don't adjust user capacity settings. The sequential write saturation test is the only test where I test the drive empty and as a secondary device. All the other tests are done with the drive running the OS and at 50% full, for 3-4 rounds at varying idle times, to represent the closest real-world use case results. By now, however, most drives I test end up getting tested at least 5-6 more times after that when doing other articles and testing new workflow patterns.

I may transition to running the tests with the DUT as a secondary device in the future though, except for SPECworkstation3 since it needs to be run on the OS drive.

I like the way that you report the output, in particular the graph that shows Gigabytes written as a function of time.
I think that's a better metric than the usual MB/sec plotted against data written in GB.

Thanks very much.
Thank you. If you have anything else you would like to see, or adjustments, I am always open to feedback. It's just hard to balance how much time to spend on testing and graphing for these at the end of the day.

In addition, would it be possible to archive your output somewhere on your site?
A lot of the other reviewers do that, and it's very useful.

Again, thanks for your efforts, and helpful comments.

Best,
Alan Jarvis
Sheffield
I'll see what I can do.

Manufacturers would love to have influence on how we test products; that way they can steer reviews to even more favorable results. See where this is going? BTW, they don't have any influence here, though I've seen cases where they've been contacted about a glaring bug that should be fixed ASAP.

As for testing methodology, would you give yours away, one that took hundreds of hours, if not years, to perfect, to a competitor? And for free? So they can undercut you and take your job? I think not.

And then there is the whole "Who actually owns the process?" question: is it the property of the reviewer, or is it the website owner's intellectual property? Give that away and you could see a hefty fine or even jail time.

And to be clear, I am not a product reviewer, but those are some of the more obvious issues I can see.

My testing is an open book. I do what I can to cover most aspects of performance while exploiting and highlighting the weaknesses and strengths between different architectures. Manufacturers can be quite sneaky and sometimes implement specific benchmark-focused modifications in the firmware that don't carry over into other tests. I've seen it happen over the years and adjust my workflow as needed.

Anyone can go and try doing what I do and follow my workflow somewhat, but not many can take a confident professional stance on these products week after week, year after year as I have. And power testing? Ha, hardly anyone can, let alone tries. After toying with and reporting on SSDs for the past decade, I haven't seen many try to keep up with the workflow. Those who have usually don't hang around reviewing storage too long, or are owners of a tech review site themselves. Well, I mean there are those who simply review storage based on one run of CrystalDiskMark and ATTO and don't even mention the underlying hardware...but then again, is that really a review? Do those actually count?

I've been there and done that to a point, but man, you gotta be deep into storage to sense things. It can take a lot of time to get data and conclusions and comparisons right each and every week with a new product each time, especially when you have little data to go off of. I am the point of contact to get any juicy hardware detail out of the product managers on SSD storage...and they still don't give me much of anything. So, I always gotta do some extra digging to get some special detail from random white papers or download software I probably shouldn't have in the first place to get the answers I need.

I am a freelance contributor to Tom's Hardware. My reviews, the test procedure, sample acquisition, basically everything, is all my own workflow, and I have no influence from any company besides Tom's requesting me to keep up with an SSD content output schedule. I'm nearly fully independent; all opinions are my own, from my own findings. I can change anything I want with my testing/reviews as long as it is for reader benefit.
 
Solution

seanwebster

Contributing Writer
Editor
I miss when reviewers would do HD Tune Pro full-drive writes.
It doesn't even work half the time; some drives have firmware that detects the workload, like I mentioned before, and will just dish out false data to HDTune and not show the true cache. That's why I just run iometer for 15 min to show the SLC cache. I may have to extend it a little for larger drives once they start hitting the market, but 15 min has been plenty lately.
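(As an aside, for anyone eyeballing their own per-second log for that cache cliff, a crude way to estimate the SLC cache size programmatically; the 50%-of-initial-speed threshold and the column names are arbitrary assumptions:)

```
# Estimate SLC cache size from a per-second MB/s log: find where
# throughput first drops well below the opening burst, then sum the
# data written up to that point. Threshold choice is arbitrary.
import pandas as pd

df = pd.read_csv("write_log.csv")      # assumed columns: seconds, mbps
burst = df["mbps"].head(10).mean()     # average of the first 10 seconds
knee = df.index[df["mbps"].rolling(5).mean() < 0.5 * burst]

if len(knee) == 0:
    print("no sustained drop found; cache may exceed the data written")
else:
    cached_gb = df.loc[: knee[0], "mbps"].sum() / 1024
    print(f"throughput fell at ~{df.loc[knee[0], 'seconds']} s; "
          f"~{cached_gb:.1f} GB written before the drop")
```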