Samsung 9100 Pro SSD Review: Samsung’s Capable Answer To Phison’s Storage Gauntlet

Nice drives, but not particularly impressive.

Why do you keep doing PlayStation testing with these drives? It’s in there every time and there’s literally hardly any difference between any of the reasonably fast drives, let alone a difference that a person might notice in daily use. Basically everything is just fast enough to max out the PlayStation storage.
 
Sheesh, those prices... for such a small percentage improvement, I really don't expect these to sell well till the prices come down: $200 for 1TB (~20 cents/GB), $300 for 2TB (~15 cents/GB), and 4TB at ~13.75 cents/GB. All too spendy for this very cost-sensitive segment. I like my cheap Gen 3 M.2 drives, though.
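For anyone double-checking those numbers, here's a quick sketch of the cents-per-GB math. The 1TB and 2TB MSRPs are the ones quoted above; the $550 figure for the 4TB model is back-computed from the ~13.75 cents/GB quoted, not an official price.

```python
# Price-per-GB sanity check. 1TB/2TB prices are from the comment above;
# the 4TB price is inferred from the quoted ~13.75 cents/GB, not official.
prices = {1000: 200, 2000: 300, 4000: 550}  # capacity in GB -> USD

for gb, usd in prices.items():
    cents_per_gb = usd / gb * 100
    print(f"{gb // 1000} TB: {cents_per_gb:.2f} cents/GB")
```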

Looking forward to future "does a faster SSD matter?" segments comparing the latest drives all the way back to SATA SSDs and Optane (still miss it).
 
Nice drives, but not particularly impressive.

Why do you keep doing PlayStation testing with these drives? It’s in there every time and there’s literally hardly any difference between any of the reasonably fast drives, let alone a difference that a person might notice in daily use. Basically everything is just fast enough to max out the PlayStation storage.
I honestly think that is the main market for overpriced m2 drives, the console market. So the testing is extremely relevant.
 
I honestly think that is the main market for overpriced m2 drives, the console market. So the testing is extremely relevant.
Well, that’s sort of a reason, I guess (getting console players to look at your review), but since basically everything released in the last year or two maxes out the PlayStation's storage, it seems like a huge waste of time to me.
 
What irks me is that with 8TB becoming more or less normal at linear prices, you'd want to consolidate the drives you have.

With SATA, I'd put the SSDs into a nice big JBOD in a 6x or 8x drive cage filling a 5.25" half-height slot in the chassis, and enjoy the lesser capacities a while longer in a crowd. A port multiplier or SAS expander was always lying around somewhere, too, and affordable otherwise.

But with NVMe, the real issue is the scarcity of PCIe lanes (apart from having to dig deep to get at those NVMe sticks), so what do you do with those nice 1TB sticks that still show 97-99% remaining endurance?

PCIe switch chips seem to cost more than an 8TB drive since Avago/Broadcom bought PLX and others. The only affordable option I've seen is a PCIe v3 variant which allocates a single lane to each of four M.2 slots and connects them to a PCIe x4 slot, so it's not much of a switch.

I'd really like to see something capable of aggregating, say, multiple PCIe v3 NVMe drives into one PCIe v5 x4 slot or M.2 connector, and I believe PCIe would allow for that, but since I've never seen such a chip or card, there's a good chance I'm wrong.
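The bandwidth side of that idea does check out on paper. Here's a back-of-envelope sketch using rounded per-lane payload rates (real switch and protocol overheads are ignored, so treat these as approximations):

```python
# Approximate per-lane payload throughput, rounded to whole GB/s.
# Real figures: Gen3 ~0.985, Gen4 ~1.97, Gen5 ~3.94 GB/s per lane.
GBPS_PER_LANE = {3: 1.0, 4: 2.0, 5: 4.0}

def link_bw(gen, lanes):
    """Rough usable bandwidth of a PCIe link, in GB/s."""
    return GBPS_PER_LANE[gen] * lanes

uplink = link_bw(5, 4)       # one PCIe 5.0 x4 uplink: ~16 GB/s
drives = 4 * link_bw(3, 4)   # four Gen3 x4 NVMe sticks: ~16 GB/s combined
print(f"uplink: {uplink} GB/s, aggregate drives: {drives} GB/s")
```

So four full-speed Gen3 x4 drives would roughly saturate a single Gen5 x4 uplink, which is exactly the kind of aggregation a (hypothetical, affordable) Gen5 switch could do.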
 
My computer is old. It has only PCIe v3. I have no need for performance, just a 2TB drive that should be reliable and silent, for storing photos and my old media from CDs and DVDs.

If there are PCIe v4 or v5 NVMe drives at a cost comparable to an old PCIe v3 NVMe drive, is it OK to buy the newer version?
 
Notice its ridiculous kilo-mega-giga speeds in the Steady Write Speed test? Look at the 2nd and 3rd screens. This drive shows off its six-pack muscles for the first 40 seconds and then sinks into the mud like a stone, running just like an ordinary Gen 4 drive. Avoid at all costs if you move large volumes, because every time, in less than a minute, you'll get a reminder that you just wasted your money )
 
My computer is old. It has only PCIe v3. I have no need for performance, just a 2TB drive that should be reliable and silent, for storing photos and my old media from CDs and DVDs.

If there are PCIe v4 or v5 NVMe drives at a cost comparable to an old PCIe v3 NVMe drive, is it OK to buy the newer version?
PCIe is backwards compatible, so yes, you can use a PCIe 4.0 or 5.0 SSD even if the motherboard and/or CPU only support PCIe 3.0. Plenty of budget laptops do exactly that 🙂

My laptop is a secondhand ThinkPad T480, and its SSD is limited to just two PCIe 3.0 lanes rather than the more typical four. This is different from your use case, but I'm planning an upgrade to something like the Crucial T500 or Samsung 990 Pro. Yes, those drives' sequential read/write performance would be bottlenecked to roughly 2 GB/s. But since random read/write is much slower than sequential, it generally won't run into that bottleneck.
 
What irks me is that with 8TB becoming more or less normal at linear prices, you'd want to consolidate the drives you have.

With SATA, I'd put the SSDs into a nice big JBOD in a 6x or 8x drive cage filling a 5.25" half-height slot in the chassis, and enjoy the lesser capacities a while longer in a crowd. A port multiplier or SAS expander was always lying around somewhere, too.

But with NVMe, the real issue is the scarcity of PCIe lanes (apart from having to dig deep to get at those NVMe sticks), so what do you do with those nice 1TB sticks that still show 97-99% remaining endurance?

PCIe switch chips seem to cost more than an 8TB drive since Avago/Broadcom bought PLX and others. The only affordable option I've seen is a PCIe v3 variant which allocates a single lane to each of four M.2 slots and connects them to a PCIe x4 slot, so it's not much of a switch.

I'd really like to see something capable of aggregating, say, multiple PCIe v3 NVMe drives into one PCIe v5 x4 slot or M.2 connector, and I believe PCIe would allow for that, but since I've never seen such a chip or card, there's a good chance I'm wrong.
You can have 48-128 PCIe lanes if you move to Threadripper. Problem solved.
 
@JarredWaltonGPU & Shane,

Thanks for the review, but I'd have loved to see the Quarch PPM power consumption stats with the drive at PCIe 4.0 speed, like I mentioned in the comments thread of its announcement. A lot of people still have motherboards or laptops with no (available) PCIe 5.0 M.2 slots and are concerned about power consumption/heat.
 
Why do you keep doing PlayStation testing with these drives? It’s in there every time and there’s literally hardly any difference between any of the reasonably fast drives, let alone a difference that a person might notice in daily use. Basically everything is just fast enough to max out the PlayStation storage.
It's true that high-end drives basically max it out, but some of the mid-range drives do noticeably worse.

One key detail is that I think the PS5 doesn't support HMB (Host Memory Buffer) for DRAM-less drives. So it's informative for those considering such drives, and it also helps us see the impact HMB is having.

I also thought it was interesting that the 2 TB 990 Pro outperformed its successor, in those tests. Not by a very significant amount, but still enough to be interesting. Specifically, by 0.8% in the transfer test and 3.4% in the read test. This should inform interested buyers to go with the older drive and save some money, even if they're after the absolute best performance.
 
Looking forward to future "does a faster SSD matter?" segments comparing the latest drives all the way back to SATA SSDs and Optane (still miss it).
Some other publications have tested game loading times on SSDs that span all the way from SATA to fast NVMe. How much impact it has depends a lot on the game. I don't recall any case where loading times were even cut in half, between the fastest NVMe drive vs. the slowest SATA drive, however.

If you work with VMs or containers, those are also cases where end-users might be I/O bound. At my job, when I'm doing container management ops, my i9 is bottlenecked on its PCIe 4.0 SSD for minutes at a time.

Also, when I recently upgraded my laptop (one of the last models to feature an M.2 SATA SSD) to the latest Ubuntu distro, the package installation step was achingly slow. I mean, it almost felt like hard-disk slow! I think the package manager was doing synchronous writes, because otherwise the kernel does a very good job of hiding write speeds via buffering.
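You can see the buffered-vs-synchronous difference with a toy sketch like this (this is just an illustration of the mechanism, not a claim about what any particular package manager does):

```python
# Toy comparison: write the same data buffered vs. fsync'ed per chunk.
# With fsync, each chunk must reach stable storage before continuing,
# which is why synchronous writes expose the device's true latency.
import os
import tempfile
import time

def write_chunks(path, n=100, size=4096, sync=False):
    t0 = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(n):
            f.write(b"\0" * size)
            if sync:
                f.flush()
                os.fsync(f.fileno())  # force data to stable storage
    return time.perf_counter() - t0

with tempfile.TemporaryDirectory() as d:
    buffered = write_chunks(os.path.join(d, "buffered.bin"), sync=False)
    synced = write_chunks(os.path.join(d, "synced.bin"), sync=True)
    print(f"buffered: {buffered:.4f}s, fsync per chunk: {synced:.4f}s")
```

On a SATA SSD the fsync'ed run is typically slower by a large factor; on a fast NVMe drive the gap shrinks, which matches the experience above.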

P.S. You can still buy Optane on ebay. Last I checked, new 400 GB P5800X drives were going for about half the original street price. But, I hope you don't need more capacity than that!
 
Notice its ridiculous kilo-mega-giga speeds in the Steady Write Speed test? Look at the 2nd and 3rd screens. This drive shows off its six-pack muscles for the first 40 seconds and then sinks into the mud like a stone, running just like an ordinary Gen 4 drive. Avoid at all costs if you move large volumes, because every time, in less than a minute, you'll get a reminder that you just wasted your money )
It depends on how big you're talking about, though, because both the 2 TB and 4 TB drives support filling about a quarter of the drive's total capacity at 13 GB/s.

So, if you're writing just a couple hundred GB at a time, to a drive with a fair amount of free space, then you probably have little to worry about. If writing more at a time - and especially if you tend to keep your drives nearly full - then this might not be the drive for you.

BTW, I recall one poster complaining that a 970 Pro performed poorly on a sustained database workload (he said it would pause for a couple of seconds every once in a while; I'd guess to consolidate writes) and advocated for using hard disks instead (which is nuts, BTW). These are not the class of drive you should be using for sustained I/O like that. All of these manufacturers make server-grade models for a reason; the mixed-workload and write-oriented models can sustain high write activity with no problem.

Isn't it sad that the steady-state write speed is about the same as the 970 EVO Plus? Progress, for sure.
The article notes that they didn't use their latest NAND in this drive. I think it should be seen more as a positive for the 990 Evo Plus than a negative for the 9100 Pro, but YMMV.
 
I expected something better. Good thing I didn't wait for this drive from Samsung. Sequential writes, in the tests and on paper: poor. Random and real-world use, which is what you really need to test: there, in fact, we have terribly poor performance.
The drives have hit an interface wall... one they can't jump over. There is no point in overpaying for Samsung. Sometimes (depending on the application) it doesn't even make sense to pay extra for PCIe 5.
 
I expected something better. Good thing I didn't wait for this drive from Samsung.
Yeah, I was surprised how well the 990 Pro held up against it, in many cases.

Sequential writes, in the tests and on paper: poor.
Huh? Doesn't look poor to me.

[attached chart: sequential write benchmark results]

The only time write performance suffers is after you've filled the pSLC buffer, as mentioned above. Then performance does indeed fall off a cliff. However, if you buy a big enough drive and keep enough free space for the size of the big, max-speed writes you do, it shouldn't be an issue.

Random and real-world use, which is what you really need to test: there, in fact, we have terribly poor performance.
That doesn't correlate with any of the results here. Yes, QD1 4K random write is a little weak, but all the other random performance benchmarks and the application-level trace testing (PCMark and 3DMark) put it in one of the top slots.

Now, if someone runs a stress test on a nearly-full drive, then the lagging native write performance could indeed rear its head and you'll be hurting. But the moral of the story is really: don't do that, people! SSD write performance suffers when drives are nearly full, so if you're going to stress one with heavy writes, try to leave a good amount of free space on it.

Sometimes (depending on the application) it doesn't even make sense to pay extra for PCIe 5.
Agreed.
 
And the SSD makers insist on not solving this issue entirely by factory-reserving (or adding) hidden spare space for this.
It's very expensive to solve that way. A pSLC cache stores one bit per cell versus TLC's three, so it burns 3x as many cells as TLC for the same data. That's why you can only fill about a quarter of an empty drive at full speed before it has to drop down to native TLC speed. So, however much extra NAND you set aside for a write buffer, it would only extend your full-speed writing capability by about a third of that amount.
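That buffer math can be sketched like so. The bits-per-cell figures are real TLC/pSLC properties; the 2 TB example and the note about firmware reserving less are my assumptions based on the sustained-write chart discussed here:

```python
# Rough sketch of the pSLC trade-off: pSLC stores 1 bit per cell while
# TLC stores 3, so caching D bytes in pSLC mode ties up the cells of
# roughly 3*D bytes of TLC capacity.
TLC_BITS, PSLC_BITS = 3, 1

def max_full_speed_fill(free_tlc_gb):
    """Theoretical ceiling on data writable at pSLC speed, if the drive
    converted all its free TLC space to cache. Real firmware is more
    conservative, which fits the ~500 GB observed on 2 TB drives."""
    return free_tlc_gb * PSLC_BITS / TLC_BITS

print(max_full_speed_fill(2000))  # empty 2 TB drive: ~667 GB ceiling
```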

I never thought about it before, but I'll bet that if you plotted each drive according to how much capacity it filled at full speed, they would all plot virtually the same (i.e., at ~500 GB for the 2 TB models). Notice how the drives which sustain full-speed writes for longer are also the ones with lower peak write speeds? That's no coincidence. At least the 9100 Pro drops down to a steady write speed upon buffer exhaustion, rather than going almost to zero like some drives.

[attached chart: sustained write speed over time]

There's no perfect solution to this problem. I think what the mixed-use and write-oriented server drives do is use way more NAND channels or MLC, both of which are more expensive. Otherwise, you're stuck with the limitations of how write buffering is implemented in consumer SSDs.

I guess another thing they could do is drop to pMLC at a certain point, but not everyone would appreciate that.

or simply report the drive size minus 20% to the system.
Or, how about letting you use all the capacity you paid for, and then it's on you to leave 20% free, if you value write performance on that last 5% of capacity so much?
 
They should honor their advertised write speed. It is part of the package.
The advertised write speed is always the peak write speed. Write speed can be affected by many factors outside the SSD's control.

pSLC write caches are a hack, but they work well enough for most people, most of the time. Like I said, if you think you're in that minority that needs full-drive sustained write performance higher than the minimum you can get in a consumer drive, you should buy a server SSD instead.

BTW, pay close attention to that graph. The Crucial T705 and Micron 4600 have peak writes near that of the Samsung 9100 Pro. While they give up the peak sooner, their sustained write is much higher (4 GB/s). Maybe that's fast enough, for doing large writes to a nearly-full drive?
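The peak-vs-sustained trade-off is easy to model. This sketch uses illustrative numbers loosely based on the figures discussed above (cache size, peak, and sustained speeds are assumptions, not exact review results):

```python
# Simple two-phase write model: full pSLC speed until the cache is
# exhausted, then the drive's sustained (native) write speed.

def write_time(size_gb, cache_gb, peak, sustained):
    """Seconds to write size_gb, given cache_gb of pSLC cache and
    peak/sustained speeds in GB/s."""
    fast = min(size_gb, cache_gb)
    slow = max(size_gb - cache_gb, 0)
    return fast / peak + slow / sustained

# 1 TB write to a nearly-full drive (only ~50 GB of cache left):
low_sustained = write_time(1000, 50, 13.0, 1.6)   # high peak, low floor
high_sustained = write_time(1000, 50, 12.0, 4.0)  # similar peak, 4 GB/s floor
print(f"low-sustained: {low_sustained:.0f}s, high-sustained: {high_sustained:.0f}s")
```

For large writes, the sustained floor dominates and the drive with the 4 GB/s floor finishes far sooner, even with a slightly lower peak.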
 