Review Seagate FireCuda 540 SSD Review: Premium Performance Meets Outstanding Warranty

lmcnabney

Prominent
Aug 5, 2022
192
190
760
At 5x the price of cheaper PCIe 4.0 x4 modules, this is just a toy for the rich so they can save maybe a whole second every time they load a modern game.
 

dk382

Commendable
Aug 31, 2021
54
49
1,560
Seagate's FireCuda series has always been a great line of SSDs. It's a shame they always seem to be in such low supply and priced so high. My FireCuda 510 still barely has a dent in its health rating after four years of use, three of which were as an OS drive.
 

everettfsargent

Honorable
Oct 13, 2017
130
35
10,610
Could someone please explain IOMeter and sequential writes to an SSD? Also, after a full write to the disk, shouldn't the test stop?

Elsewhere it has been suggested to use MS Windows' diskspd CLI instead. And elsewhere it has been suggested that IOMeter is a decade old, or even older.

Writing more than 100% of an SSD's capacity makes no sense to me whatsoever. Someone please explain. You can somewhat see this in both the 2TB and 4TB Seagate 530 SSDs at roughly 450 and 900 seconds into each test.

Finally, one would need to read from a much faster drive into DDR memory (OK, so maybe not, because you can copy directly from one storage medium to another, e.g. a RAID volume) before writing to a secondary drive such as these SSDs; that would be real-world throughput. Just moving the same bits over and over from memory would work, I guess, and that is what I am assuming is meant by a synthetic benchmark. However, writing past 100% of a physical drive's size is not meaningful to me for any real filesystem, where the written data is not normally ever overwritten, as far as I know.

Thanks in advance.
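For what it's worth, the "more than 100%" behavior usually comes from the test being time-bounded rather than capacity-bounded: the tool writes to a fixed-size test file and wraps back to offset 0 when it hits the end, so total bytes written scale with duration, not capacity. A toy sketch of that idea (this is not IOMeter itself; all names and sizes here are made up for illustration):

```python
# Toy sketch of a timed sequential-write test that can report more data
# written than the target's size: it writes a fixed-size test file and
# wraps back to offset 0, so total bytes depend on duration, not capacity.
import os
import tempfile
import time

def sequential_write_test(path, file_size, block_size, duration):
    """Write sequentially for `duration` seconds, wrapping at `file_size`.
    Returns total bytes written (can exceed file_size)."""
    block = b"\xaa" * block_size
    total = 0
    deadline = time.monotonic() + duration
    with open(path, "wb", buffering=0) as f:
        while time.monotonic() < deadline:
            if f.tell() + block_size > file_size:
                f.seek(0)  # wrap around: keep writing past "100% capacity"
            f.write(block)
            total += block_size
    return total

with tempfile.TemporaryDirectory() as d:
    target = os.path.join(d, "testfile.bin")
    size = 1 * 1024 * 1024  # 1 MiB stand-in for the drive/test file
    written = sequential_write_test(target, size, 64 * 1024, 0.5)
    print(f"wrote {written / size:.1f}x the file size in 0.5 s")
```

Run it and the reported multiple will be well above 1x, which is exactly the over-100% shape you see in the review's time-series charts.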
 
Not sure if this answers your question, but the methodology for that test in particular was explained at https://forums.tomshardware.com/thr...write-performance-test.3603980/#post-21790529
 

everettfsargent

Honorable
Oct 13, 2017
130
35
10,610
hotaru.hino,

Thanks, I can see where everything is "zeroed" out, so to speak. Also, this test appears to be universal, at least for the early part of the run where the DRAM cache is not filled yet, as most manufacturers appear to quote similar numbers for max read/write speeds.

The only real reason I asked was that, looking at the IOMeter results as a function of time and doing a mental integration of that time series (very roughly, mind you), I went: wait a minute, that is over 100% of a full-capacity SSD drive write?

Also, a sequential write or read makes perfect sense for newly formatted spinners but makes absolutely no sense for SSDs.

I will now go back to lurking mode but would appreciate any additional information or insights (I'm thinking this is only useful for new, unused, blank SSDs).

It does make for a nice time series though, just not one for a drive past 100% of its capacity (brand new or otherwise).
 

everettfsargent

Honorable
Oct 13, 2017
130
35
10,610
I have even more questions now. PCIe 3.0, 4.0, and 5.0, and DRAM caches: what is actually being written to, DRAM or the physical SSD media? And what about the number of layers in these new 5.0 SSDs (that may explain the heat issues seen in this newest generation of 5.0 drives)? More layers in the same horizontal memory footprint (same M.2 physical dimensions) and higher MB/s on average (due to more layers?) equals more heat to dissipate?

I currently think the IOMeter test is somewhat meaningless as a real-world test and is only meaningful when moving very large amounts of data in one go (where the DRAM cache is exceeded).
 

dimar

Distinguished
Mar 30, 2009
1,042
63
19,360
Would be nice to have some SSDs that stay at 30°C without a heatsink, so I don't have to worry about burning down my laptop or office desktop.
 
Aug 27, 2023
1
0
10
The 540s seem to have a firmware data corruption bug. I've tried a few of these after RMA'ing them repeatedly: pop a FireCuda 540 into a Ceph cluster and watch Ceph start to throw hundreds of data corruption rejections. Given Ceph is really just a "pretty high flush workload", I'd bet you can get the same result by writing a ton to btrfs/zfs with `while true; do sync; done` running in the background and then trying to read it back, or even with a high-write Postgres workload.
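A crude stand-in for that kind of check, if anyone wants to try reproducing it outside Ceph: write blocks with known checksums, force them to stable storage with fsync (the "high flush workload" part), then re-read and flag mismatches. This is just the verification idea sketched in Python; the filename is hypothetical, and a real test would target the suspect drive and drop the page cache between the write and read passes:

```python
# Write/fsync/read-back integrity check in the spirit of the Ceph scenario:
# write blocks with known SHA-256 digests, flush to stable storage, re-read,
# and report any block whose checksum no longer matches.
import hashlib
import os
import tempfile

def integrity_pass(path, n_blocks=64, block_size=4096):
    """Write n_blocks of random data, fsync, read back; return mismatched indices."""
    blocks = [os.urandom(block_size) for _ in range(n_blocks)]
    digests = [hashlib.sha256(b).hexdigest() for b in blocks]
    with open(path, "wb") as f:
        for b in blocks:
            f.write(b)
        f.flush()
        os.fsync(f.fileno())  # force data to stable storage before reading back
    mismatches = []
    with open(path, "rb") as f:
        for i, want in enumerate(digests):
            got = hashlib.sha256(f.read(block_size)).hexdigest()
            if got != want:
                mismatches.append(i)
    return mismatches

bad = integrity_pass(os.path.join(tempfile.gettempdir(), "fc540_check.bin"))
print("corrupt blocks:", bad)  # an empty list means every block read back intact
```

On healthy hardware this should report no corrupt blocks; on a drive with the kind of flush-related bug described above, repeated passes under load would eventually turn up mismatches.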