Enmotus FuzeDrive P200 M.2 NVMe SSD Review: AI Storage Beats SSD Caching


Deleted member 14196

To me, it looks like the cons outweigh the pros.


Aug 4, 2019
I make it a habit never to buy products that employ buzzwords in their advertising. AI = buzzword. AI = programming, just good, old-fashioned programming, and not one thing more. And write endurance is a big con these days: even my lowly 960 EVO 256GB NVMe drive will last 15 years at my present rate of usage, and is sure to be replaced long before then... ;) Endurance = 75TB. Also, endurance numbers have nothing to do with construction: an NVMe drive rated for 75TB of writes is built the same as one rated for 300TB; it's not "sturdier." If you are an aggressive user and write 6TB per year to an NVMe drive warranted for 5 years with an endurance rating of 300TB, well, your warranty runs out somewhere after only 30TB written... ;) Likely, too, you'll buy something new to replace it before those five years are up.
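The warranty-versus-endurance arithmetic above can be sanity-checked with a quick back-of-envelope calculation. This is a minimal sketch using the commenter's own example figures (6TB/year, 300TB rating, 5-year warranty), not any drive's official specs:

```python
# Sketch: years to reach a TBW rating at a given write rate,
# versus how much is actually written before the warranty expires.
# Figures are the commenter's illustrative numbers, not official specs.

def years_to_exhaust(tbw_rating_tb, tb_written_per_year):
    """Years of writing before the endurance rating is reached."""
    return tbw_rating_tb / tb_written_per_year

years = years_to_exhaust(300, 6)   # 50 years to actually hit 300TB
written_in_warranty = 6 * 5        # only 30TB written when the 5-year warranty ends
print(years, written_in_warranty)  # 50.0 30
```

In other words, at that rate the time-based warranty lapses decades before the write endurance would ever be consumed.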

I love my NVMe drives--they're great. What I don't love is hype and other kinds of misleading advertising.
1TB of QLC for $200? Considering drives with that amount of QLC and the exact same controller can be had for a little over $100, that's a huge markup for what amounts to little more than a better firmware implementation. It sounds like those $100 drives could theoretically implement similar firmware that would do the exact same thing if one were willing to give up around 10% of their capacity to act as a permanent SLC cache like this drive does. It probably won't be long before we see other manufacturers do something similar in drives costing almost half as much.
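The capacity cost of a permanent SLC region follows from the bit densities: QLC stores 4 bits per cell, and the same cells run in SLC mode store 1 bit, so every gigabyte of static SLC consumes 4GB of raw QLC capacity. A minimal sketch, with illustrative raw-capacity numbers assumed rather than taken from Enmotus's spec sheet:

```python
# Sketch of the capacity trade-off behind a static SLC cache.
# QLC = 4 bits/cell; cells run in SLC mode = 1 bit/cell, so each GB
# of SLC sacrifices 4GB of QLC capacity. Numbers are assumptions.

def usable_capacity(raw_qlc_gb, slc_cache_gb):
    """User-visible capacity after carving a static SLC region out of QLC."""
    qlc_consumed = slc_cache_gb * 4  # 4 QLC bits given up per SLC bit kept
    return raw_qlc_gb - qlc_consumed + slc_cache_gb

# e.g. a nominal 2048GB of raw QLC with a 128GB SLC region:
print(usable_capacity(2048, 128))  # 1664
```

That 4:1 exchange is roughly the "give up around 10% of capacity" figure: a 128GB SLC region on a nominal 2TB of QLC leaves about 1.6TB visible to the user.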

An SSD with lots of TBW endurance just in time for people to create Chia plots!
While I'm not too familiar with the exact workings of how that internally operates, if data is accessed in a mostly random manner across the drive's entire capacity, then I would suspect this drive might have trouble utilizing more than the size of its SLC cache for that purpose, since the caching algorithm likely couldn't predict which sectors are going to be written to. So, there might be less than 128GB usable for that purpose on the 1.6TB drive, or less than 24GB on the 900GB drive. If one were rapidly writing to random locations in a file much larger than that, I suspect you would burn through the QLC's endurance in no time. You might also see performance drop to those 200MB/s sustained write speeds, in which case you might be better off with a less expensive, higher-capacity platter-based drive.
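The "burn through the endurance in no time" worry is easy to quantify. A hedged back-of-envelope sketch, using the 200MB/s sustained QLC figure mentioned above and a purely hypothetical TBW rating (the real P200 rating is not assumed here):

```python
# Sketch: how quickly continuous sustained writes could consume a TBW
# rating. 200 MB/s is the sustained QLC write speed cited in the thread;
# the 1000TB rating below is a placeholder assumption, not a P200 spec.

SUSTAINED_MB_S = 200
SECONDS_PER_DAY = 86_400

tb_per_day = SUSTAINED_MB_S * SECONDS_PER_DAY / 1_000_000  # MB/day -> TB/day
assumed_tbw = 1000                                         # hypothetical rating, TB
days_to_exhaust = assumed_tbw / tb_per_day

print(round(tb_per_day, 1), round(days_to_exhaust, 1))  # 17.3 57.9
```

Even at the throttled 200MB/s floor, round-the-clock writes would move over 17TB a day, so a plotting workload hammering QLC directly could plausibly run through a four-digit TBW rating in a couple of months.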