News Silicon Motion Readies 7nm PCIe 5.0 SSD Controller for Q4 2023

There was a lot of hype this time last year about these great, super-fast Gen 5 SSDs that were all going to be available to consumers in October of '22. That was just another fairy tale.
They're not solving a real problem (for consumers, anyhow), so I don't even care. In the consumer market, PCIe speed is mainly just specsmanship.

I have no interest in buying an SSD that thermally throttles or has an ungainly cooling solution. So, that means I'll do my best to stick with trailing-edge solutions that stay within reasonable power limits.
 
They're not solving a real problem (for consumers, anyhow), so I don't even care. In the consumer market, PCIe speed is mainly just specsmanship.

I have no interest in buying an SSD that thermally throttles or has an ungainly cooling solution. So, that means I'll do my best to stick with trailing-edge solutions that stay within reasonable power limits.

Most Gen 5 SSDs will not require the oversize heatsinks they ship with; those are just a sales gimmick. Most will operate just fine, with no throttling, under the OE mobo heatsink. Gen 5 SSDs are faster, but not ridiculously faster than the better Gen 4 SSDs that run with just a flat heatsink.
 
There was a lot of hype this time last year about these great, super-fast Gen 5 SSDs that were all going to be available to consumers in October of '22. That was just another fairy tale.

And the PCIe 5.0 SSDs that are available need active cooling.
  1. The last thing I want is a mini-fan in my PC that will start to rattle within a few months.
  2. I don't like the idea of an SSD controller heating up so much that it needs active cooling; stored data and heat do not mix well.
  3. PCIe 4.0 SSD sustained read and write speeds are more than fast enough; it's the small random reads that matter more (see the sketch below), and PCIe 5.0 SSDs haven't shown they can increase random read performance. SSDs are still not at the level Optane gave us.
Basically, PCIe 4.0 SSDs are simply better atm, imo.
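
For anyone who wants to see point 3 for themselves, here's a rough Python sketch of a QD1 4K random-read test. The file path is a placeholder, os.pread is Linux/macOS-only, and a cold page cache is assumed - a real benchmark like fio or CrystalDiskMark does all of this properly.

```python
# Minimal QD1 4K random-read probe (a sketch, not a real benchmark).
# Assumes PATH points at a multi-GB file on the drive under test and
# that the page cache is cold; otherwise you're measuring RAM speed.
import os, random, time

PATH = "testfile.bin"   # hypothetical path on the SSD being tested
BLOCK = 4096            # 4K reads: the classic "feels slow" workload
N = 2000                # number of sampled reads

fd = os.open(PATH, os.O_RDONLY)
blocks = os.fstat(fd).st_size // BLOCK

start = time.perf_counter()
for _ in range(N):
    # one outstanding read at a time = queue depth 1
    os.pread(fd, BLOCK, random.randrange(blocks) * BLOCK)
elapsed = time.perf_counter() - start
os.close(fd)

print(f"QD1 4K random read: {N / elapsed:,.0f} IOPS, "
      f"{elapsed / N * 1e6:.0f} us average latency")
```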
 
For a good while yet, PCIe 5 will only be beneficial in the enterprise space, and mostly for conserving PCIe lanes by cutting drives back to 2 lanes instead of 4. Maybe some niche workstation offerings, or other storage devices targeting absolute performance over capacity, would stay at x4.
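
The lane math behind that is easy to sanity-check. Back-of-the-envelope (ignoring protocol overhead beyond the 128b/130b line encoding), a Gen 5 x2 link carries the same bandwidth as a Gen 4 x4 link:

```python
# Usable PCIe bandwidth per direction:
# GT/s per lane * 128/130 line encoding / 8 bits per byte * lane count.
GT_PER_LANE = {"gen3": 8, "gen4": 16, "gen5": 32}  # gigatransfers/s

def link_gbps(gen: str, lanes: int) -> float:
    return GT_PER_LANE[gen] * (128 / 130) / 8 * lanes

for gen, lanes in (("gen4", 4), ("gen5", 2), ("gen5", 4)):
    print(f"{gen} x{lanes}: {link_gbps(gen, lanes):.2f} GB/s")
# gen4 x4: 7.88 GB/s
# gen5 x2: 7.88 GB/s  <- same pipe, half the lanes
# gen5 x4: 15.75 GB/s
```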
 
Gen5 SSDs may well not seem worth it for many users, especially when there isn't mainstream software that would make full use of them (yet). By the way, one little trick to increase data bandwidth is to spread your running software across more than one drive - e.g. I have the OS on one NVMe SSD and games on another, so the software isn't competing for available bandwidth or adding to each other's latency (when or if there's a peak of demand on a drive).
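
If anyone wants to test whether the split actually helps on their machine, a crude comparison is to read two large files at once, first from the same drive and then from different drives, and compare aggregate throughput. Rough Python sketch below - the paths are placeholders, and on Linux you'd want to drop the page cache between runs or the numbers are meaningless:

```python
# Crude aggregate-throughput probe: read two big files concurrently.
# Paths are hypothetical; point them at multi-GB files on the drives
# you want to compare (same drive vs. two different drives).
import threading, time

CHUNK = 4 * 1024 * 1024  # 4 MiB sequential reads

def drain(path: str, totals: list, idx: int) -> None:
    """Read the whole file, unbuffered, recording bytes read."""
    n = 0
    with open(path, "rb", buffering=0) as f:
        while chunk := f.read(CHUNK):
            n += len(chunk)
    totals[idx] = n

def aggregate_gbps(path_a: str, path_b: str) -> float:
    totals = [0, 0]
    threads = [threading.Thread(target=drain, args=(p, totals, i))
               for i, p in enumerate((path_a, path_b))]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(totals) / (time.perf_counter() - start) / 1e9

# print(aggregate_gbps("/mnt/ssd1/a.bin", "/mnt/ssd1/b.bin"))  # same drive
# print(aggregate_gbps("/mnt/ssd1/a.bin", "/mnt/ssd2/b.bin"))  # split
```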

I do already have a high-airflow PC case and a motherboard with an M.2 Gen5 slot, though, so for me there isn't any extra cost to getting a Gen5 SSD. Which isn't to say I wouldn't go "ouch" if the asking price ends up being $200 for 1TB (with DRAM). But if I'm eventually going to have a $500-$700 CPU and a $900 GPU, then I don't see much point in saving a few bucks on a storage drive when a newer SSD may offer better latency (especially under sustained load), etc.
 
With the report that Samsung is soon to join the other flash manufacturers in reducing output in order to increase prices, PCIe 4 and even PCIe 3 drives are going to be the most in demand among home users, given the near-zero performance benefit of Gen 5 despite its higher prices.
 
One little trick to increase data bandwidth is to spread your running software across more than one drive - e.g. I have the OS on one NVMe SSD and games on another, so the software isn't competing for available bandwidth or adding to each other's latency (when or if there's a peak of demand on a drive).
I used to do this with hard disks. With SSDs, there's really no point. If you've ever studied IOPS graphs measured at different queue depths or thread counts, you'll see you actually need lots of outstanding reads to get anywhere close to peak IOPS from these drives.
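
A crude way to see the effect, using worker threads as a stand-in for queue depth (a sketch only - the path is a placeholder, a cold page cache is assumed, and fio's iodepth option does this properly):

```python
# IOPS vs. "queue depth": N threads each issuing 4K random reads.
# PATH is hypothetical; use a multi-GB file on the drive under test.
import os, random, time
from concurrent.futures import ThreadPoolExecutor

PATH = "testfile.bin"
BLOCK = 4096
READS_PER_WORKER = 1000

def worker(fd: int, blocks: int) -> None:
    for _ in range(READS_PER_WORKER):
        os.pread(fd, BLOCK, random.randrange(blocks) * BLOCK)

fd = os.open(PATH, os.O_RDONLY)
blocks = os.fstat(fd).st_size // BLOCK
for qd in (1, 4, 16, 64):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=qd) as pool:
        futures = [pool.submit(worker, fd, blocks) for _ in range(qd)]
        for f in futures:
            f.result()  # propagate any errors from the workers
    elapsed = time.perf_counter() - start
    print(f"QD{qd:>2}: {qd * READS_PER_WORKER / elapsed:,.0f} IOPS")
os.close(fd)
```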

If you've got drives to spare or money to burn, go ahead. But, I think there's very little performance benefit to partitioning your stuff across multiple drives, especially if you're using a half-decent NVMe drive.