Sabrent Rocket Q4 2230 2TB SSD Review: Double the Rocket, Double the Fun

abufrejoval

Reputable
Jun 19, 2020
615
454
5,260
I guess the biggest question is: how do you ensure it's done with its steady-state processing before you turn the device off?

These gaming decks spend plenty of time not being used, so time windows for such 'batch background' processing exist aplenty... except that you might not charge it immediately, and you'll probably have turned it off, too.

I guess even if the OS isn't really needed, it will decide to put the SSD into a sleep state, and there is very little the SSD can do without risking having its juice cut off, should it try to refuse.

So you need to keep the console on after you've finished playing to allow the SLC->QLC transfer and the erase-block optimizations to happen.

And in fact I guess there isn't even a proper protocol for the SSD to tell the host that it's finished its housekeeping and wouldn't mind being turned off... or is NVMe a lot smarter than I give it credit for?
 

evdjj3j

Distinguished
Aug 4, 2017
371
396
19,060
The ratings on this site make no sense whatsoever. Jarred claims that 3 1/2 stars on his reviews is a C, yet this review with 3 stars claims in the summary that this drive is a good buy.
 
The ratings on this site make no sense whatsoever. Jarred claims that 3 1/2 stars on his reviews is a C, yet this review with 3 stars claims in the summary that this drive is a good buy.
A mediocre product at a good price might still be a good buy, particularly for a drive going in a Steam Deck. Not that this is a "good" price, but 2TB 2230 drives aren't really cheap right now. Also: Different people, different takes. The text is far more important than the score, in my view.
 
I guess the biggest question is: how do you ensure it's done with its steady-state processing before you turn the device off?

These gaming decks spend plenty of time not being used, so time windows for such 'batch background' processing exist aplenty... except that you might not charge it immediately, and you'll probably have turned it off, too.

I guess even if the OS isn't really needed, it will decide to put the SSD into a sleep state, and there is very little the SSD can do without risking having its juice cut off, should it try to refuse.

So you need to keep the console on after you've finished playing to allow the SLC->QLC transfer and the erase-block optimizations to happen.

And in fact I guess there isn't even a proper protocol for the SSD to tell the host that it's finished its housekeeping and wouldn't mind being turned off... or is NVMe a lot smarter than I give it credit for?
If you're still copying files, in Windows, you'd see the dialog still open. For Linux, you'd have something similar. On a Steam Deck, the only thing you'd likely be doing is downloading and installing a game, which can also be interrupted. This is the whole purpose of the shutdown procedure in modern OSes, giving the OS time to clean things up before turning off.

Powering off (via a hard switch) in the middle of doing anything can be bad. Most drives limit how much stuff sits in volatile storage (RAM caches) for exactly this reason. High-end drives would have a super capacitor to store power so that they can flush things from RAM to NAND in the event of a power loss. For consumer drives, it's possible, if you cycle the power in the middle of writes, to kill an SSD. Probably very unlikely, and it would depend on the model, but I know in the past I heard of this happening.
 

abufrejoval

Reputable
Jun 19, 2020
615
454
5,260
If you're still copying files, in Windows, you'd see the dialog still open. For Linux, you'd have something similar. On a Steam Deck, the only thing you'd likely be doing is downloading and installing a game, which can also be interrupted. This is the whole purpose of the shutdown procedure in modern OSes, giving the OS time to clean things up before turning off.

Powering off (via a hard switch) in the middle of doing anything can be bad. Most drives limit how much stuff sits in volatile storage (RAM caches) for exactly this reason. High-end drives would have a super capacitor to store power so that they can flush things from RAM to NAND in the event of a power loss. For consumer drives, it's possible, if you cycle the power in the middle of writes, to kill an SSD. Probably very unlikely, and it would depend on the model, but I know in the past I heard of this happening.
Just to make sure we're on the same page: I am concerned that the usage pattern of a Steam Deck or console will give your device very little time to move freshly written data from the SLC cache to permanent QLC storage.

SSD firmware is a lot like a database transaction log that needs some idle time to empty properly.

Windows and Linux will just see a committed write, so turning off the device won't lose you any data; the drive might just never get the opportunity to do its housekeeping, leaving the SLC cache permanently filled while the drive has to bypass it for new data, resulting in HDD-class write speeds.

It all depends on just how eager the SSD is about evicting the SLC cache and how high your update rate is. If your internet is well below 1 Gbit, there is little risk of overflowing the SLC cache; if your Steam Deck is receiving updates from your gaming PC at NBase-T rates and SLC eviction is very lazy, things could be different.
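Just to put rough numbers on that, here's a toy fill model in Python: the 1 Gbit and 2.5 Gbit (NBase-T) inbound rates are the cases mentioned above, while the 500 GB cache size and the 100 MB/s eviction rate are purely assumed for illustration, not figures for this drive.

```python
# Toy model of SLC-cache accumulation: the cache only grows when data arrives
# faster than the controller folds it out to QLC. Inbound rates mirror the
# 1 Gbit vs. NBase-T (2.5 Gbit) cases above; the 500 GB cache size and the
# 100 MB/s eviction rate are assumptions, not measured figures for this drive.

def hours_to_fill(cache_gb: float, inbound_mbps: float, evict_mb_s: float):
    inbound = inbound_mbps / 8            # Mbit/s -> MB/s
    net = inbound - evict_mb_s            # MB/s accumulating in the pSLC cache
    if net <= 0:
        return None                       # eviction keeps up, cache never fills
    return cache_gb * 1000 / net / 3600   # GB -> MB, seconds -> hours

for mbps in (1000, 2500):
    t = hours_to_fill(500, mbps, 100)
    print(f"{mbps} Mbit/s:", "never fills" if t is None else f"fills in {t:.1f} h")
```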

At 2TB for your Steam stash, at least you won't have to swap games in and out as often, which significantly helps to lessen the write burden.

I've never liked the unpredictability of QLC, or in fact of SLC caches, on principle. I can't say that I've been badly hit by it either, but that's no big surprise, since I never knowingly bought any QLC devices: I went from SLC to MLC quickly but took my sweet time switching from MLC to TLC, and back then SSDs were still caches, not holders of entire Steam collections.

But I've often wondered if SSDs that are only used in a very bursty manner (full load or off) might accumulate write-amplification debt from their usage patterns. Modern flash drives need idle time to do housekeeping, including internal patrol reads to keep blocks from decaying beyond their ECC thresholds.

Desktops and even office notebooks provide tons of idle time; a Steam Deck perhaps much less so.

You could investigate, and make that an article: wouldn't that be fun?
 
Just to make sure we're on the same page: I am concerned that the usage pattern of a Steam Deck or console will give your device very little time to move freshly written data from the SLC cache to permanent QLC storage.

SSD firmware is a lot like a database transaction log that needs some idle time to empty properly.

Windows and Linux will just see a committed write, so turning off the device won't lose you any data; the drive might just never get the opportunity to do its housekeeping, leaving the SLC cache permanently filled while the drive has to bypass it for new data, resulting in HDD-class write speeds.

It all depends on just how eager the SSD is about evicting the SLC cache and how high your update rate is. If your internet is well below 1 Gbit, there is little risk of overflowing the SLC cache; if your Steam Deck is receiving updates from your gaming PC at NBase-T rates and SLC eviction is very lazy, things could be different.

At 2TB for your Steam stash, at least you won't have to swap games in and out as often, which significantly helps to lessen the write burden.

I've never liked the unpredictability of QLC, or in fact of SLC caches, on principle. I can't say that I've been badly hit by it either, but that's no big surprise, since I never knowingly bought any QLC devices: I went from SLC to MLC quickly but took my sweet time switching from MLC to TLC, and back then SSDs were still caches, not holders of entire Steam collections.

But I've often wondered if SSDs that are only used in a very bursty manner (full load or off) might accumulate write-amplification debt from their usage patterns. Modern flash drives need idle time to do housekeeping, including internal patrol reads to keep blocks from decaying beyond their ECC thresholds.

Desktops and even office notebooks provide tons of idle time; a Steam Deck perhaps much less so.

You could investigate, and make that an article: wouldn't that be fun?
So, the pseudo-SLC cache is still non-volatile. Even if you filled it up with sustained writes and then immediately told the PC to shut down, all that data is still there and "safe." The SSD would just work to flush out the pSLC cache to QLC storage once it's powered back up and basically idle. At least, that's my understanding of things, so the pSLC isn't a bad approach at all.

Note that with a 2TB SSD, the pSLC cache could be up to 500GB in size for a completely empty drive. So, if you could do sustained writes at max speed and fill that up, and then had to drain it at ~100 MB/s, it could take over 1.38 hours just to empty the pSLC to QLC. LOL. (Related: The drives take a while to recover in our Windows testing, unless you just wipe/format them.)
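For anyone who wants to check that math, here it is as a quick Python sketch; the ~500 GB dynamic pSLC cache and the ~100 MB/s fold rate are just the ballpark figures above, and real drives will vary.

```python
# Back-of-the-envelope pSLC drain time, using the ballpark numbers above:
# a ~500 GB dynamic pSLC cache folded back to QLC at roughly 100 MB/s.
def drain_time_hours(cache_gb: float, fold_mb_per_s: float) -> float:
    return cache_gb * 1000 / fold_mb_per_s / 3600   # GB -> MB, seconds -> hours

print(f"{drain_time_hours(500, 100):.2f} h")   # ~1.39 hours
```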

For the Steam Deck, the only way you'd really exceed even the reduced SSD write speed is if you had a wired connection. In my experience at least, you're otherwise limited by the Wi-Fi. It's a Wi-Fi 5 device (802.11ac 2x2, to be specific), which means best-case it has a theoretical 866 Mbps of throughput. Except you will NEVER see that in the real world, even in ideal scenarios.

Basically, real-world 802.11ac 2x2 speeds will usually max out at maybe 550 Mbps, and if you have anything else happening on the network you'll probably get more like 350 Mbps. I have an 802.11ac 2x2 router for my house, and download speeds on the Steam Deck topped out at around 270–300 Mbps consistently. I get faster than that on some laptops with similar adapters, so it's likely the Steam Deck hardware that's slowing things down.

So if you have a hypothetical 200GB game download going, best-case it's writing to your SSD at maybe 60 MB/s, and very possibly closer to 35 MB/s. Even the worst QLC drives can basically do that all day without problems. Heck, even hard drives could do that, but HDDs are worse for power, size, and other performance aspects since they have that whole constantly spinning disk thing going on.
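To make the megabit-to-megabyte conversion explicit, here's the same arithmetic as a small Python sketch; the 550/350/300 Mbps figures are the real-world Wi-Fi estimates above, and Steam's download decompression overhead is ignored.

```python
# Convert network throughput to a sustained SSD write rate and estimate how
# long a 200 GB game download would take at each rate.
def mb_per_s(mbps: float) -> float:
    return mbps / 8                      # megabits/s -> megabytes/s

def download_hours(size_gb: float, mbps: float) -> float:
    return size_gb * 1000 / mb_per_s(mbps) / 3600

for rate in (550, 350, 300):
    print(f"{rate} Mbps -> {mb_per_s(rate):.0f} MB/s, "
          f"200 GB in about {download_hours(200, rate):.1f} h")
```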

It would be interesting to try testing this. Like, a decent SSD and controller should write initially to the pSLC cache, but if it's only at ~40 MB/s, the cache can then be immediately flushed to QLC and would perhaps never fill up (until the SSD is completely full). The problem is that writing even 100GB of data at 40 MB/s takes a while, about 40 minutes. I guess that would be the question: if write speeds are slow, like sub-100 MB/s, do the SSDs even use their pSLC caches, or do they just write straight to TLC/QLC NAND?
 

abufrejoval

Reputable
Jun 19, 2020
615
454
5,260
So, the pseudo-SLC cache is still non-volatile. Even if you filled it up with sustained writes and then immediately told the PC to shut down, all that data is still there and "safe." The SSD would just work to flush out the pSLC cache to QLC storage once it's powered back up and basically idle. At least, that's my understanding of things, so the pSLC isn't a bad approach at all.

Note that with a 2TB SSD, the pSLC cache could be up to 500GB in size for a completely empty drive. So, if you could do sustained writes at max speed and fill that up, and then had to drain it at ~100 MB/s, it could take over 1.38 hours just to empty the pSLC to QLC. LOL. (Related: The drives take a while to recover in our Windows testing, unless you just wipe/format them.)

For the Steam Deck, the only way you'd really exceed even the reduced SSD write speed is if you had a wired connection. In my experience at least, you're otherwise limited by the Wi-Fi. It's a Wi-Fi 5 device (802.11ac 2x2, to be specific), which means best-case it has a theoretical 866 Mbps of throughput. Except you will NEVER see that in the real world, even in ideal scenarios.

Basically, real-world 802.11ac 2x2 speeds will usually max out at maybe 550 Mbps, and if you have anything else happening on the network you'll probably get more like 350 Mbps. I have an 802.11ac 2x2 router for my house, and download speeds on the Steam Deck topped out at around 270–300 Mbps consistently. I get faster than that on some laptops with similar adapters, so it's likely the Steam Deck hardware that's slowing things down.

So if you have a hypothetical 200GB game download going, best-case it's writing to your SSD at maybe 60 MB/s, and very possibly closer to 35 MB/s. Even the worst QLC drives can basically do that all day without problems. Heck, even hard drives could do that, but HDDs are worse for power, size, and other performance aspects since they have that whole constantly spinning disk thing going on.

It would be interesting to try testing this. Like, a decent SSD and controller should write initially to the pSLC cache, but if it's only at ~40 MB/s, the cache can then be immediately flushed to QLC and would perhaps never fill up (until the SSD is completely full). The problem is that writing even 100GB of data at 40 MB/s takes a while, about 40 minutes. I guess that would be the question: if write speeds are slow, like sub-100 MB/s, do the SSDs even use their pSLC caches, or do they just write straight to TLC/QLC NAND?
Yup, it's at that point that you want to start reading the controller's source code.

But then perhaps you'd never trust it with your data again, once you see how badly even firmware can be written =:-O

And when the firmware has to deal with things like host memory buffers, which require interaction with host firmware that could be buggy too and might simply sprinkle your most critical data structures with random bits, you wonder if these firmware engineers have burnout or a drinking problem, especially since the junior guys only get to work on the cheaper entry-level products, which are much harder to handle than when you've got everything fully under your own control.

My very first SSDs were FusionIO drives, which basically operated in host-buffer mode with host-side firmware, while the FPGA on the drive only did the lowest-level signal processing and basic chores. It let me appreciate the complexity that has since migrated into the drives themselves, which run a software stack that makes your average VAX cluster look primitive by comparison.

So I am eagerly awaiting your results, and I suggest you dock the Steam Deck, use a 2.5 Gbit USB Ethernet dongle, and let Steam stream from a big desktop locally.

Of course you could always just run 'fio' and see what happens.

And you obviously don't need to test this on a Steam Deck, any M.2 slot in a PC will do.
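fio with its rate-limiting options is the right tool for this, but as a rough stand-in, a minimal Python sketch along these lines could throttle sequential writes to download-like speeds and log per-chunk burst rates. The target path, chunk size, and rates below are placeholders, not values from the review.

```python
# Minimal sketch of a rate-limited sequential write test, to see whether a
# drive's write behavior changes when data trickles in at download-like speeds.
# fio with its rate=... option is the better tool; this is just a stand-in.
import os, time

TARGET   = "testfile.bin"   # put this on the SSD under test (placeholder path)
CHUNK    = 4 * 1024 * 1024  # 4 MiB writes
RATE_MBS = 40               # cap at ~40 MB/s, like a fast game download
TOTAL_GB = 10               # total amount to write

def throttled_write():
    chunk = os.urandom(CHUNK)                     # incompressible data
    per_chunk_budget = CHUNK / (RATE_MBS * 1e6)   # seconds per chunk at the cap
    with open(TARGET, "wb", buffering=0) as f:
        written = 0
        while written < TOTAL_GB * 1e9:
            t0 = time.perf_counter()
            f.write(chunk)
            os.fsync(f.fileno())                  # push it out of the page cache
            elapsed = time.perf_counter() - t0
            print(f"{written/1e9:6.2f} GB  {CHUNK/1e6/elapsed:7.1f} MB/s burst")
            written += len(chunk)
            time.sleep(max(0.0, per_chunk_budget - elapsed))

if __name__ == "__main__":
    throttled_write()
```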
 

abufrejoval

Reputable
Jun 19, 2020
615
454
5,260
In the old days, I used to worry a lot about keeping enough spare area around to ensure that, in the absence of TRIM support, my SSDs wouldn't start reflashing entire erase blocks for every 512-byte sector written.

These days I just keep running my manual TRIMs when I do major updates, and most of my SSDs never go near the 90% mark anyway before I expand or reallocate: prices below €50/TB naturally evict quite a lot of lesser-capacity drives, which interestingly have never gone near the 90% remaining-life mark in all those years.

But... I've also had some very old Android tablets die on storage that seemed to reprogram flash at EEPROM speeds, never giving up... before I did.

Steam caches are a bit special, because they are actually non-critical data: if the data were to get corrupted, all you need to do is wait for it to download again from somewhere else.

So you tend to get messy with it and fill your storage more recklessly. In the case of my kids it's always near 100%, which has me banging my head and my kids shrugging ('never given me a problem, but I need more space...').

What I really want is a local shared Steam cache on my 10 Gbit LAN: only one copy of every game in a household with nearly 10 Steam devices of various kinds.

But it turns out Windows really, really sucks at opening hundreds of thousands of little files when I open my favorite game, which is ARK: Survival Evolved. And it sucks much, much worse when you do that over the network.

I had ARK running on Linux once: it loaded ARK faster from a hard disk than Windows loaded it from NVMe...
The graphics were another issue, and I love the game for its eye candy.
 
  • Like
Reactions: JarredWaltonGPU
In the old days, I used to worry a lot about keeping enough spare area around to ensure that, in the absence of TRIM support, my SSDs wouldn't start reflashing entire erase blocks for every 512-byte sector written.

These days I just keep running my manual TRIMs when I do major updates, and most of my SSDs never go near the 90% mark anyway before I expand or reallocate: prices below €50/TB naturally evict quite a lot of lesser-capacity drives, which interestingly have never gone near the 90% remaining-life mark in all those years.

But... I've also had some very old Android tablets die on storage that seemed to reprogram flash at EEPROM speeds, never giving up... before I did.

Steam caches are a bit special, because they are actually non-critical data: if the data were to get corrupted, all you need to do is wait for it to download again from somewhere else.

So you tend to get messy with it and fill your storage more recklessly. In the case of my kids it's always near 100%, which has me banging my head and my kids shrugging ('never given me a problem, but I need more space...').

What I really want is a local shared Steam cache on my 10 Gbit LAN: only one copy of every game in a household with nearly 10 Steam devices of various kinds.

But it turns out Windows really, really sucks at opening hundreds of thousands of little files when I open my favorite game, which is ARK: Survival Evolved. And it sucks much, much worse when you do that over the network.

I had ARK running on Linux once: it loaded ARK faster from a hard disk than Windows loaded it from NVMe...
The graphics were another issue, and I love the game for its eye candy.
I had a 1TB Steam / etc. caching system set up at one point, back in the day. But now that I have a gigabit internet connection, it's not really useful. I do remember I had issues where often it would download the same file multiple times because of SSL or something, and I think that was also why I stopped bothering. Well, that plus fast internet plus no data cap. I am still just running gigabit Ethernet, as that's what all of my PCs support. I think one or two have 10GbE ports, but trying to rewire plus the cost of the new router has me just ignoring it for now. Maybe in another five years or so I'll finally be convinced to make the switch!
 

saunupe1911

Distinguished
Apr 17, 2016
213
76
18,660
I hate Sabrent. Their support absolutely sucks!!! I've had 3... I repeat... 3 Rocket 4 NVMe drives die on me within 3 years. One of them was their dang replacement. Meanwhile I've had Samsung and PNY drives last even longer!!!

Stop promoting these pissy products.