News Phison Demos M.2-2580 PCIe 5.0 x4 SSD: Up to 12GBps Reads

wifiburger

Honorable
Feb 21, 2016
worthless, pcie3 to pcie4 nvme didn't do anything for most

The only thing it did for me was increase the price and lower the total drive capacity.

Instead of chasing crap speeds they should focus on price and capacity, because regular SSDs are getting more and more attractive these days, even the ones at 600MB/s :ROFLMAO:

Excuse me for not giving 0 f's about 1TB doing 10GB/s when a 4TB SSD at 600MB/s is half the price.
 

kiniku

Distinguished
Mar 27, 2009
For most of us, the bottom number impacts end user "experience" more than any other: RND4K Q1T1 speed. And in that measure it's really no different from a 3.0 SSD.
 
Reactions: shady28

jp7189

Distinguished
Feb 21, 2012
For most of us, the bottom number impacts end user "experience" more than any other: RND4K Q1T1 speed. And in that measure it's really no different from a 3.0 SSD.
I would argue that sequential q1t1 has an impact on many consumer applications as well. Also since this controller will target enterprise drives the high queue depth metrics are meaningful to that segment.
 

LuxZg

Distinguished
Dec 29, 2007
For most of us, the bottom number impacts end user "experience" more than any other: RND4K Q1T1 speed. And in that measure it's really no different from a 3.0 SSD.
You actually made me turn on my PC and check the speed of my ancient (6 years old this week!) SM951 ...

Rough speedup for this sample PCIe 5.0 drive in parentheses.

My reads (MB/s):
SEQ1M Q8T1: 2156 (6x)
SEQ1M Q1T1: 1448 (3x)
RND4K Q32T16: 1227 (4x)
RND4K Q1T1: 50 (20%)

Writes (MB/s):
SEQ1M Q8T1: 1226 (9x)
SEQ1M Q1T1: 1185 (8x)
RND4K Q32T16: 233 (20x?)
RND4K Q1T1: 143 (2.5x)

Unsure what's with the low write at RND4K Q32T16 for my drive, but let's disregard that (despite the result being very repeatable).

Anyway, I agree that worst case vs best case isn't all that good... 20% read speedup (vs 600% sequential) and 250% write speedup (vs 9x sequential).

And I agree that probably moves something like Word startup from 2s to 1.8s.

But we have to keep our eyes on new tech/features like DirectStorage and AMD's Smart Storage. So I will reserve my judgment until some real-world tests arrive: game loading times, whether there's any difference while playing, and so on.

I really just need 512GB, but I wouldn't mind 1TB. What I can say is, there are no 4TB drives for half the price :p Where I live, 1TB roughly doubles in price going from a PCIe 3.0 drive to 4.0. But a 4TB PCIe 3.0 NVMe drive is still almost 4x the price of that 1TB 4.0 drive! So yeah, if 5.0 drives are another 2x increase over 4.0 I'll skip. If it's still roughly just 2x over 3.0, it's a buy. After all, once I get one, I'm unlikely to upgrade again for the next 6 years ;D
 

InvalidError

Titan
Moderator
worthless, pcie3 to pcie4 nvme didn't do anything for most
Because realizing the benefits of massively increased bandwidth and reduced latency is going to require a major overhaul of how most software loads stuff. Typical software reads data from storage at the point of use, then waits for the read to return before processing the data until it reaches the next point where it needs to read more data, wait, then process that, rinse and repeat. To leverage NVMe's full capabilities, software needs to be rewritten to load everything it can concurrently ahead of processing so it doesn't get stalled waiting on reads.

Most software developers likely won't bother re-arranging their code to accommodate pervasively threaded and pipelined asynchronous loading beyond what is absolutely essential to user experience.
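The contrast described above can be sketched in a few lines. This is a toy illustration (the asset names and the `read_asset`/`process` helpers are invented stand-ins, not any real engine API): the serial version blocks on each read before processing, while the concurrent version issues all reads up front so processing never stalls waiting on storage.

```python
# Toy sketch: blocking "read, wait, process" vs. issuing reads ahead of time.
# read_asset/process and the file names are hypothetical placeholders.
from concurrent.futures import ThreadPoolExecutor

def read_asset(name):          # stand-in for a blocking storage read
    return f"data:{name}"

def process(data):             # stand-in for CPU-side processing
    return data.upper()

ASSETS = ["textures.pak", "models.pak", "audio.pak", "scripts.pak"]

def load_serial():
    # Typical pattern: each read must finish before processing starts,
    # and the next read waits until processing is done.
    return [process(read_asset(a)) for a in ASSETS]

def load_concurrent():
    # NVMe-friendly pattern: all reads are in flight at once;
    # processing happens as results land.
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(read_asset, a) for a in ASSETS]
        return [process(f.result()) for f in futures]
```

Both produce the same data; the difference is only in how much time is spent stalled on I/O, which is exactly the restructuring most developers are unlikely to bother with.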
 
Reactions: LuxZg and deesider

escksu

Reputable
Aug 8, 2019
You will only experience that speed in sequential read/write situations. The serial nature of I/O means you won't benefit much from reads/writes of small files, which unfortunately are typical of the end-user environment.

Try this: copy 1 x 1GB file and then 1000 x 1MB files. Despite the same overall size, the 1000 x 1MB files take way longer. Only one file is copied at a time (the serial nature of I/O), and you will probably see 10-20MB/s...
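A scaled-down version of that experiment can be timed with standard-library tools. This is a rough sketch (sizes shrunk to 1 x 1MB vs. 100 x 10KB so it runs quickly); on a fast SSD with OS caching the toy version won't show the full gap a real 1GB-vs-1000-files copy does, but the per-file overhead paid serially is the same effect.

```python
# Rough sketch of the copy experiment: one large file vs. many small files
# of the same total size, copied one at a time (serial I/O).
import os
import shutil
import tempfile
import time

def make_files(root, count, size):
    # Create `count` files of `size` random bytes each.
    paths = []
    for i in range(count):
        p = os.path.join(root, f"f{i:04d}.bin")
        with open(p, "wb") as fh:
            fh.write(os.urandom(size))
        paths.append(p)
    return paths

def copy_all(paths, dest):
    # Copy files one at a time, like a typical file-manager copy.
    t0 = time.perf_counter()
    for p in paths:
        shutil.copy(p, dest)
    return time.perf_counter() - t0

with tempfile.TemporaryDirectory() as src, tempfile.TemporaryDirectory() as dst:
    one_big = make_files(src, 1, 1_000_000)       # 1 x 1 MB
    many_small = make_files(src, 100, 10_000)     # 100 x 10 KB, same total
    t_big = copy_all(one_big, dst)
    t_small = copy_all(many_small, dst)
    # t_small is usually several times t_big despite equal total bytes:
    # each small file pays its own open/copy/close round trip.
```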
 

InvalidError

Titan
Moderator
You will only experience that speed in sequential read/write situations. The serial nature of I/O means you won't benefit much from reads/writes of small files, which unfortunately are typical of the end-user environment.
In the case of games and likely some software with a boatload of plugins, there would be significant potential performance benefits to parallelizing loads to whatever extent dependencies will allow and prioritizing them accordingly to minimize bottlenecks. While IO to the raw block device may be serial, a lot of the processing that happens afterwards doesn't need to be and that is where the biggest bottleneck currently is.
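"Parallelizing loads to whatever extent dependencies will allow" can be sketched as a tiny dependency-aware loader. The asset names and the `load` function here are invented placeholders; the point is just the scheduling: anything whose dependencies are satisfied loads concurrently, and only the dependency edges force ordering.

```python
# Toy dependency-aware parallel loader (names are hypothetical).
from concurrent.futures import ThreadPoolExecutor

DEPS = {                      # invented asset dependency graph
    "engine": [],
    "shaders": ["engine"],
    "textures": ["engine"],
    "level": ["shaders", "textures"],
}

def load(asset):              # stand-in for an actual read + decode
    return asset

def load_parallel(deps):
    done, order = {}, []
    with ThreadPoolExecutor() as pool:
        pending = dict(deps)
        while pending:
            # Every asset whose dependencies are already loaded is "ready";
            # all ready assets load in parallel in this wave.
            ready = [a for a, d in pending.items()
                     if all(x in done for x in d)]
            for a, result in zip(ready, pool.map(load, ready)):
                done[a] = result
                order.append(a)
            for a in ready:
                del pending[a]
    return order

order = load_parallel(DEPS)
# "engine" loads first; "shaders" and "textures" load side by side;
# "level" waits only for what it actually depends on.
```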
 

LuxZg

Distinguished
Dec 29, 2007
All this is quite true. But in my case the biggest lag I have is when logging into the PC: after boot, several things auto-load while I start clicking and opening all the stuff I'm going to use. That situation doesn't need rewrites or optimizations, and speedy I/O matters. OK, so I may still not notice much difference between individual PCIe generations, but I believe a jump from a first-gen NVMe drive to something like this engineering sample would still be noticeable.

As for rewrites - those will barely happen even in the biggest of apps and suites, but I do expect future game development to take this into consideration (since consoles already support it).

Also, you will slowly but surely stop using those options like "don't update in background while I play", so you will less often be forced to endure the whole "torture" cycle before starting to game. It's just all too often that I start PC that then needs an update, then start Steam that needs to update, then when it starts dozen games need to update, including one I want to play, and then you actually start a game, load the save and... time is up and you can shut it all down. Hopefully with background updates that will happen less, and when it does happen, should be faster. Letting stuff start and update/install at once on fast NVMe should improve experience quite a bit. Sure it's not something you do all day long, but boy does it feel like that sometimes ;D
 

CRamseyer

Distinguished
Jan 25, 2015
You are correct that your "real world" performance comes from low queue depth random read performance. Keep in mind that E26 is still in development and the engineers work on sequential performance first and then focus on random performance. Phison's E18 with Micron's B47R NAND delivers around 95 to 98 MB/s random reads in CDM. E26 will be even better once the controller is tuned for random performance. I expect to see a solid 120MB/s in that test but that is just my personal opinion.

In the past, sequential read performance was mostly a game of show and tell: it looked good in benchmarks, but it was difficult to reach those speeds with the software we use. With new technologies like DirectStorage we will actually get to utilize the high sequential reads. Future games (at least one coming this year) will reach out to the SSD and pull in texture files in parallel (high queue depth). One company has already warned that even some Gen4 SSDs won't perform well with DirectStorage. If you want to play your games at "Ultra" settings or other high-detail graphics modes, you will need a very fast SSD.
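The "high queue depth" idea can be approximated from user space. This is a hedged sketch, not DirectStorage (which bypasses this path entirely with its own GPU-oriented API): it just shows the difference between one 4K read in flight at a time (QD1) and many outstanding at once, using ordinary file I/O and invented sizes.

```python
# Sketch: QD1 vs. ~QD32 reads of 4K blocks from one file.
# DirectStorage does this natively; here a thread pool merely keeps
# many requests outstanding so the drive's internal parallelism is used.
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

BLOCK = 4096
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(BLOCK * 256))   # stand-in for a texture pack
    path = f.name

def read_block(offset):
    # Each worker opens its own handle so seeks don't interfere.
    with open(path, "rb") as fh:
        fh.seek(offset)
        return fh.read(BLOCK)

offsets = [i * BLOCK for i in range(256)]

# QD1: one request at a time; each read waits for the previous one.
qd1 = [read_block(o) for o in offsets]

# ~QD32: up to 32 requests outstanding at once.
with ThreadPoolExecutor(max_workers=32) as pool:
    qd32 = list(pool.map(read_block, offsets))

os.unlink(path)
```

Same data either way; on real hardware the queued version is what lets an SSD approach its rated random-read throughput.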

For gaming, you can start to think of the SSD as a cache for the GPU. That would allow GPU makers to reduce the DRAM on the cards (which is a massive percentage of the cost of a video card) and use the massive SSD bandwidth to feed the GPU with textures "on demand".
 

escksu

Reputable
Aug 8, 2019
738
289
5,260
0
In the case of games and likely some software with a boatload of plugins, there would be significant potential performance benefits to parallelizing loads to whatever extent dependencies will allow and prioritizing them accordingly to minimize bottlenecks. While IO to the raw block device may be serial, a lot of the processing that happens afterwards doesn't need to be and that is where the biggest bottleneck currently is.
That processing that happens afterwards is done by the CPU, not the SSD, and the data needed will be stored in RAM.

As for parallelizing loads, that can only be done by the software and the CPU. The SSD is only responsible for delivering the data the CPU requires.

No doubt there are environments where the CPU needs to process huge amounts of data and a fast SSD really helps. However, the end-user environment is not one of them.
 

InvalidError

Titan
Moderator
That processing that happens afterwards is done by the CPU, not the SSD, and the data needed will be stored in RAM.

As for parallelizing loads, that can only be done by the software and the CPU. The SSD is only responsible for delivering the data the CPU requires.
With DirectStorage, processing can be done on the GPU for data intended to go there and doesn't necessarily have to go through system memory nor the CPU first. As for parallelizing loads, you need enough SSD IO bandwidth, low enough SSD latency and high enough SSD IOPS to feed the 12-32 CPU threads and GPU(s) that could be attempting to concurrently load and process stuff in software aiming for practically nonexistent load-screen times and asset pops.
 
