I wouldn't say the most cost-efficient. I'm sure board designers could come up with some creative cost-cutting measures if they weren't constrained by conventional form factors, much like they do in laptops, where some motherboards are almost credit-card sized these days. Of course, that means gambling on selling enough of these proprietary just-about-everything boards to recover sunk costs.
In a world where NVMe SSDs are getting ever hotter and in need of more cooling to avoid throttling (or worse), I think 2.5" provides useful benefits in terms of additional surface area. It also opens up the design space for cheaper drives to use a larger number of lower-density NAND chips.
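For a rough sense of scale, here's a back-of-the-envelope footprint comparison (nominal published dimensions; the 1.8" figure is for one common variant, so treat these as approximate):

```python
# Back-of-the-envelope footprint comparison (nominal dimensions in mm, per side).
# Larger area = more room for heat spreading and for lower-density NAND packages.
form_factors = {
    "M.2 2280": (22.0, 80.0),          # width x length
    "2.5-inch drive": (69.85, 100.0),  # approx. SFF 2.5" footprint
    "1.8-inch drive": (54.0, 78.5),    # one common 1.8" variant (approx.)
}

for name, (width, length) in form_factors.items():
    print(f"{name:15s} ~{width * length:6.0f} mm^2 per side")

# Roughly: M.2 2280 ~1760 mm^2, 2.5" ~6985 mm^2, 1.8" ~4240 mm^2
```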
But 2.5" is still too physically large IMO, and I'm targeting a smaller Form Factor because Enterprise & SFF wants smaller drives, having 1.8" SSD's be a option will help everybody.
Plus, how cool would it be to take out a Bicycle Poker Card Deck and whip out 5x 1.8" SSD's, like a spy sneaking a bunch of data drives through security?
I wouldn't say the most cost-efficient. I'm sure board designers could come up with some creative cost-cutting measures if they weren't constrained by conventional form factors, much like they do in laptops, where some motherboards are almost credit-card sized these days. Of course, that means gambling on selling enough of these proprietary just-about-everything boards to recover sunk costs.
Or we can tell the manufacturers that "Proprietary MoBo Designs & Form Factors" are to be BARRED from being made.
If need be, create regulation to LITERALLY make Proprietary Form Factors ILLEGAL to manufacture and require them to use "Standardized" Form Factors ONLY.
Less E-Waste is good, standards are good, Easy User Repair-ability is Good!
Well, something to consider: the English language never had compound words with a capital letter in the middle. Words are either: separate, hyphenated, or simply run-together. This business of capitalizing middle letters is something used by programmers (who couldn't use spaces or hyphens) and popularized by marketers. It's not proper English.
...not that I'm saying you should care, but just in case you do.
But 2.5" is still too physically large IMO, and I'm targeting a smaller Form Factor because Enterprise & SFF wants smaller drives, having 1.8" SSD's be a option will help everybody.
Enterprise has been clear about what it wants, like those crazy ruler form-factor drives. SFF wants M.2, more or less. It's really desktop which wants something other than M.2. 2.5" already has extremely broad support. There's no real need to introduce yet another form factor (i.e. 1.8").
Plus, how cool would it be to take out a Bicycle Poker Card Deck and whip out 5x 1.8" SSD's, like a spy sneaking a bunch of data drives through security?
I'm talking about cost and signal integrity. We don't have much visibility, but if we did, then we should see that the cables are cheaper because they don't need retimers.
I actually agree with you. I want the same thing. It would be best.
I just bought a micro-ATX board which has 3 PCIe slots:
x16 (top)
x4 (middle)
x8 (bottom)
If you put a 2-slot+ card in the top slot, you block the middle slot. If you put anything in the bottom slot, the top one drops to x8. So, if you really want to use a card in that board at full x16 width, then it must be the only card, unless it happens to be single-width (some workstation graphics cards are single-width, but they're considerably more expensive than their gaming equivalents, and the single-width ones aren't even the higher-end models).
How cheap of them! Given the bottom slot is mechanically x16, I wish they'd not taken the cheap route and just made that the full-width slot.
Anyway, because of that, I had to spend the extra money and get the version of the board which had integrated 10 Gigabit Ethernet, since my ability to use a 10 Gigabit NIC was so badly constrained. And, due to supply constraints, it basically took about 2 years for me to find that version of the board in stock & reasonably-priced.
I share your complaints about M.2 in desktops. All the heat problems we're seeing show just what a bad idea it was.
The boards are somewhat niche. Cases have been virtually nonexistent, ever since the form factor was launched in 2016 or so.
I wish ARM-based mini-PCs would adopt this form factor and we'd get a vibrant case ecosystem for them. Instead, every ARM SBC either uses the Pi's form factor (which is a fine layout for a hobby board, but not a mini-PC) or has an extremely limited case selection.
Well, something to consider: the English language never had compound words with a capital letter in the middle. Words are either: separate, hyphenated, or simply run-together. This business of capitalizing middle letters is something used by programmers (who couldn't use spaces or hyphens) and popularized by marketers. It's not proper English.
...not that I'm saying you should care, but just in case you do.
Enterprise has been clear about what it wants, like those crazy ruler form-factor drives. SFF wants M.2, more or less. It's really desktop which wants something other than M.2. 2.5" already has extremely broad support. There's no real need to introduce yet another form factor (i.e. 1.8").
It is compelling to me and others who want "Smaller Things" & to fit more into an existing case or backplane.
The need for smaller & cuter things, just because they're small, is ever increasing in our society.
And inside a Rack Mount Server, Volume and Space are at a premium; same with SFF.
Also for Shipping / Logistics, smaller = Better = fit more. With a smaller than 2.5" format, you can fit more product into existing shipping containers.
M.2 also has the issue that its bare PCB is easy for end users to damage.
Having a proper shell w/o being too big or too small is nice.
It's like Goldilocks and the 3 bears. Not too big, not too small, just right.
1.8" SSD Drives is "Just Right" in size IMO along with flexibility and not conflicting with any existing designs that are popular and out on the market.
Leave 2.5" drives to Physical HDD's and Hybrid SHDD's along with 3.5" drives.
It's also WAY easier to route NAND-Flash Packages in a Square Grid with the SSD Micro-Controller in the center and 8x NAND Flash Packages on each of the Ventral & Dorsal sides, for a total of 16x NAND-Flash Packages.
DRAM can be located on the opposite side of the controller for the shortest trace lengths possible.
If you're stuck with the M.2 / Ruler format, routing a large number of NAND Flash packages becomes a PAIN in the ARSE.
I'm talking about cost and signal integrity. We don't have much visibility, but if we did, then we should see that the cables are cheaper because they don't need retimers.
And not having to pay to manufacture any riser at all, and just using that money for retimers, would be all we need.
You're already paying for a PCIe <Insert Latest Version # Here> x16 slot.
All that extra money for the Riser Cable/Card can be better spent with the ReTimers IMO.
This way the case manufacturers don't need to buy riser cables and put them in the case; just give me up to 5x expansion Card slots past the bottom slots for Video Card Exhaust & Connectivity.
We have Riser Cables/Cards costing up to $80 Retail.
And once you buy them, they're not even guaranteed to work half the time.
They eventually flake out and you have to waste time to debug the problem.
Usually plugging straight into the MoBo's PCIe x16 slot works w/o issue.
So why not just spend that $$ at the MoBo makers to get it right the first time?
If you shove in the extra foot of Riser Cable, you'd get a similar excess latency of a few nano-seconds as you would from putting on the Retimer and moving the Slot to the bottom (the extra latency penalty will vary based on MoBo Form Factor, YMMV).
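For reference, here's the rough math behind that "few nano-seconds" figure, assuming a typical cable propagation velocity of ~0.65c (an assumption, not a measured spec for any particular riser):

```python
# Rough one-way propagation delay added by ~1 ft (0.3048 m) of extra riser cable.
# Assumes signals travel at ~0.65c in the cable (ballpark assumption, not a spec value).
C = 3.0e8              # speed of light in vacuum, m/s
VELOCITY_FACTOR = 0.65
extra_length_m = 0.3048

delay_ns = extra_length_m / (VELOCITY_FACTOR * C) * 1e9
print(f"Extra one-way delay: {delay_ns:.2f} ns")   # ~1.6 ns
```

Either way, it's nanosecond-scale noise compared to the microsecond-scale latencies of the storage media themselves.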
I actually agree with you. I want the same thing. It would be best.
I just bought a micro-ATX board which has 3 PCIe slots:
x16 (top)
x4 (middle)
x8 (bottom)
If you put a 2-slot+ card in the top slot, you block the middle slot. If you put anything in the bottom slot, the top one drops to x8. So, if you really want to use a card in that board at full x16 width, then it must be the only card, unless it happens to be single-width (some workstation graphics cards are single-width, but they're considerably more expensive than their gaming equivalents, and the single-width ones aren't even the higher-end models).
How cheap of them! Given the bottom slot is mechanically x16, I wish they'd not taken the cheap route and just made that the full-width slot.
Anyway, because of that, I had to spend the extra money and get the version of the board which had integrated 10 Gigabit Ethernet, since my ability to use a 10 Gigabit NIC was so badly constrained. And, due to supply constraints, it basically took about 2 years for me to find that version of the board in stock & reasonably-priced.
I share your complaints about M.2 in desktops. All the heat problems we're seeing show just what a bad idea it was.
If that's the case, there's no harm in me pushing VIA's ITX family of standards for LapTop usage.
We have LTT supporting FrameWork for the modular LapTop.
Slowly, we need to drive a common Eco-System for LapTops that use a common standard.
I want Nano-ITX to be "THE MoBo Form Factor" for LapTops.
The Dimensions for Nano-ITX (120 mm × 120 mm) IRL are so perfect for modern LapTops:
Even for the Thin LapTops we have today.
As for the battery issue, I want to bring back Modular Batteries via my "Folder PC" concept.
Where you move the Battery into a bottom module that houses 2x Separate 99.9 Whr batteries, where ONLY 1x Battery can legally be on at any given time due to the FAA 100 Whr Battery rule.
Then you can hot-swap 99.9 Whr batteries and recharge the spare in the other battery bay.
Dell already pulls off this battery trick with their "Tough Line" of Professional LapTops; we can do the same for consumer LapTops. Bring back modular, user Hot-Swappable Batteries.
You'd obviously stand up the core frame of the LapTop like an ASUS MotherShip setup with a Surface-like hinge on the back.
The back of the core body would be all fine mesh and have LOTS of Air Ventilation pushing Heat out of the Top/Sides, with an Elevated 180° Display hinge that raises the Display higher than normal, effectively alleviating the end user's constant neck strain from looking down at their LapTop Screen.
Just like "NoteBook LapTop PC's" have been a common format.
I want to iterate on that and bring in the "Folder TableTop PC's" to become the next popular common mobile PC format.
One that is flexible, but mobile enough to deploy on Tables everywhere, while maintaining ergonomics for the end user by naturally having a taller screen, bringing back "Hot-Swappable" batteries, and offering the option to just use an AC/DC Mains Power pack and no Batteries if you want.
I wish ARM-based mini-PCs would adopt this form factor and we'd get a vibrant case ecosystem for them. Instead, every ARM SBC either uses the Pi's form factor (which is a fine layout for a hobby board, but not a mini-PC) or has an extremely limited case selection.
It's also WAY easier to route NAND-Flash Packages in a Square Grid with the SSD Micro-Controller in the center and 8x NAND Flash Packages on each of the Ventral & Dorsal sides, for a total of 16x NAND-Flash Packages.
DRAM can be located on the opposite side of the controller for the shortest trace lengths possible.
Again, that's beside the point. The key thing is that I wanted Raspberry Pi-like machines to have a standard form factor, so I'd have a broader range of cases to choose from.
I want to revitalize HDD's with Multi-Actuator & Optane built in.
If DRAM-less SSD's use SLC cache, there's no reason why HDD's can't use Optane to act as a similar SLC-style cache for large chunks of memory and connect via NVMe over a PCIe x1 lane.
Again, that's beside the point. The key thing is that I wanted Raspberry Pi-like machines to have a standard form factor, so I'd have a broader range of cases to choose from.
Most people who still want HDDs want them for the lowest possible cost per GB with decent reliability. More mechanical parts increase the risk of failure and cost, for moderate to no benefit to most people. If I want fast access to something, I'll put it on NVMe while I need it and move it to HDD for cold/near-line storage when I'm done. I don't want the cost and unnecessary reliability liability of having a consumable storage tier built directly into the HDD.
(Link previews: Kingston DRAM product page; Crucial DDR5 desktop RAM page, www.crucial.com)
Or we could look at any number of other manufacturers. Whether you buy a non-gaming PC from a big OEM or a smaller shop, open it up and most likely the RAM won't have heatsinks.
Multi-actuator never existed in 2.5" form factor. It's not obvious whether it could, but it sort of defeats the point. The reason they went to multi-actuator was to compensate for the long readout time of high-capacity drives. 2.5" HDDs were never capacity-oriented.
It just confuses people. Or worse: it reflects poorly on you.
And if it's a proper name, then mis-capitalizing it could lead others to believe the company itself uses that capitalization, which it doesn't. It's one thing to write well-known words in your own style, but you shouldn't take liberties with a proper name, like that of a person or company.
Which is what hybrid drives are, and they haven't really done much to the market. Even adding Optane wouldn't do anything to revive this market.
The problem is marketers promised "SSD like access times" or similar, which is only the case if data is constantly being accessed. As such, hybrid drives are only really useful as the system drive for OS data.
If DRAM-less SSD's use SLC cache, there's no reason why HDD's can't use Optane to act as a similar SLC-style cache for large chunks of memory and connect via NVMe over a PCIe x1 lane.
The only reason why cache is "needed" for storage drives is to make smaller writes appear to act instantaneously. Smaller write sizes (my WAG for what "smaller" means is <= 128KiB) tend to dominate what kind of write operations are done.
Most people who still want HDDs want them for the lowest possible cost per GB with decent reliability. More mechanical parts increase the risk of failure and cost, for moderate to no benefit to most people. If I want fast access to something, I'll put it on NVMe while I need it and move it to HDD for cold/near-line storage when I'm done. I don't want the cost and unnecessary reliability liability of having a consumable storage tier built directly into the HDD.
The problem is marketers promised "SSD like access times" or similar, which is only the case if data is constantly being accessed. As such, hybrid drives are only really useful as the system drive for OS data.
Or you throw in Massive Multi-GB files and the transfer slows to a crawl once it runs out of the tiny DRAM cache.
With a larger Optane Cache, it would last A LOT longer before it slows down.
Same with multiple Reads/Writes in parallel for many small-to-medium-sized files.
The only reason why cache is "needed" for storage drives is to make smaller writes appear to act instantaneously. Smaller write sizes (my WAG for what "smaller" means is <= 128KiB) tend to dominate what kind of write operations are done.
(Link previews: Kingston DRAM product page; Crucial DDR5 desktop RAM page, www.crucial.com)
Or we could look at any number of other manufacturers. Whether you buy a non-gaming PC from a big OEM or a smaller shop, open it up and most likely the RAM won't have heatsinks.
I very rarely ever see naked RAM sticks out in the wild; they almost ALWAYS have some form of Heat Sink.
Even the most basic designs will work; it doesn't need to be fancy.
Multi-actuator never existed in 2.5" form factor. It's not obvious whether it could, but it sort of defeats the point. The reason they went to multi-actuator was to compensate for the long readout time of high-capacity drives. 2.5" HDDs were never capacity-oriented.
You need the right number of Actuator Arms & Heads to do Vertical Stack "Multi-Actuator"; not every single config for the 2.5" HDD form factor would work, only some would.
2.5" HDD's can be capacity oriented depending on which model you're looking into buying.
And if it's a proper name, then mis-capitalizing it could lead others to believe the company itself uses that capitalization, which it doesn't. It's one thing to write well-known words in your own style, but you shouldn't take liberties with a proper name, like that of a person or company.
People shouldn't believe what random posters on the internet state for spelling/capitalization. They should go to the company website or get an official representative to confirm that.
Random people are just that, people, nothing more, nothing less. Don't put too many expectations on other folks, and you won't be disappointed that way.
How many "mm" or "inches" is ! Thin ! at the rear and front ends of a LapTop IYO?
How many "mm" or "inches" is Chunky at the rear and front ends of a LapTop IYO?
How many "mm" or "inches" is THICC! at the rear and front ends of a LapTop IYO?
The implementation doesn't really matter. The problem with anything "cache" related is you don't get that cache speed unless you access the same data enough times. Outside of the OS, core library files, and apps the user uses every day, everything else almost never gets to this point.
Or you throw in Massive Multi-GB files and the transfer slows to a crawl once it runs out of the tiny DRAM cache.
With a larger Optane Cache, it would last A LOT longer before it slows down.
Same with multiple Reads/Writes in parallel for many small-to-medium-sized files.
How large are we talking about? Hybrid drives seemed to have topped out at 8GB, but I would guess they actually topped out at 16GB.
At some point, if you're going to keep adding more and more cache, you may as well just get an SSD. Even those are still an order of magnitude faster than HDDs when their SLC cache runs out.
Additional cost, additional points of failure for a feature that is of little to no importance to what most people still use HDDs for. The usual "jack of all trades, master of none" type deal.
The implementation doesn't really matter. The problem with anything "cache" related is you don't get that cache speed unless you access the same data enough times. Outside of the OS, core library files, and apps the user uses every day, everything else almost never gets to this point.
For me, the value would seem to be in using it as a write buffer, which is consistent with how @Kamen Rider Blade described it.
For SMR drives, you need to coalesce the writes you make to the platter. So, a NAND-based buffer would hold a close parallel to SSDs, where there's a pseudo-SLC or pseudo-MLC write buffer the drive fills, before going back and moving the data to TLC or QLC packing.
Actually using it for quick turn-around of reads is almost beside the point, IMO. Yes, you could try to use some of it for that, and it might help with the most frequently-accessed files, but if someone isn't booting off the drive then they probably won't notice much difference from such caching behavior.
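To make the write-coalescing idea concrete, here's a purely illustrative toy model (not how any shipping firmware actually works; the zone size and flush threshold are made-up numbers):

```python
# Toy model of coalescing small random writes in a fast buffer (NAND or Optane),
# then rewriting each dirty SMR zone in one sequential pass.
import os
from collections import defaultdict

ZONE_SIZE = 256 * 1024 * 1024        # 256 MiB zones (hypothetical)
FLUSH_THRESHOLD = 64 * 1024 * 1024   # flush once 64 MiB of writes are buffered (hypothetical)

buffered = defaultdict(list)         # zone_id -> list of (offset_in_zone, data)
buffered_bytes = 0

def write(lba_byte_offset: int, data: bytes) -> None:
    """Stage a random write in the fast buffer instead of touching the platter."""
    global buffered_bytes
    zone_id = lba_byte_offset // ZONE_SIZE
    buffered[zone_id].append((lba_byte_offset % ZONE_SIZE, data))
    buffered_bytes += len(data)
    if buffered_bytes >= FLUSH_THRESHOLD:
        flush()

def flush() -> None:
    """Merge buffered writes and rewrite each dirty zone once -- sequential work the platter is good at."""
    global buffered_bytes
    for zone_id in sorted(buffered):
        writes = sorted(buffered[zone_id])   # ordered, coalesced pass over the zone
        print(f"zone {zone_id}: {len(writes)} buffered writes -> 1 sequential zone rewrite")
    buffered.clear()
    buffered_bytes = 0

# Demo: 1024 random 64 KiB writes scattered over 8 zones collapse into at most 8 zone rewrites.
for _ in range(1024):
    write(int.from_bytes(os.urandom(4), "little") % (8 * ZONE_SIZE), b"x" * 65536)
```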
Additional cost, additional points of failure for a feature that is of little to no importance to what most people still use HDDs for. The usual "jack of all trades, master of none" type deal.
For me, the value would seem to be in using it as a write buffer, which is consistent with how @Kamen Rider Blade described it.
For SMR drives, you need to coalesce the writes you make to the platter. So, a NAND-based buffer would hold a close parallel to SSDs, where there's a pseudo-SLC or pseudo-MLC write buffer the drive fills, before going back and moving the data to TLC or QLC packing.
Actually using it for quick turn-around of reads is almost beside the point, IMO. Yes, you could try to use some of it for that, and it might help with the most frequently-accessed files, but if someone isn't booting off the drive then they probably won't notice much difference from such caching behavior.
That's exactly the point I'm trying to convey. You hit the nail on the head!
A Read/Write buffer for large files, for many middling-sized files simultaneously, or with enough buffer for many small files.
The SLC-cache on DRAM-less SSD's makes "ALL the difference" in performance; the same can be true for Hybrid HDD's with Optane.
Especially HDD's with Multi-Actuator, since that is starting to become a thing and it improves Sequential Read/Write performance, which is what HDD's are supposed to be good at.
Without having to pump up Rotational speeds, just adding more Actuator arms does wonders for Read/Write performance on an HDD w/o incurring significant power consumption / heat issues. No more than existing HDD's.
Add in an Optane cache that performs consistently, plus the latest in Platter density, and you'd have a drive that is "VERY Competitive" in Linear performance with DRAM-less QLC-SSD's.
The implementation doesn't really matter. The problem with anything "cache" related is you don't get that cache speed unless you access the same data enough times. Outside of the OS, core library files, and apps the user uses every day, everything else almost never gets to this point.
We start at 16 GiB of Optane for the smallest HDD configurations and can scale up in 16 GiB increments all the way to 128 GiB of Optane for the largest HDD configurations currently in mass production. Obviously, once mass production of Optane truly ramps up and economies of scale kick in, we can adjust those sizes.
But that matches the SLC-cache range for modern SSD's that I've seen, which ranges from:
6 GB of SLC-cache for a 500 GB SSD, which is about 1.2% of its total capacity,
to
660 GB of SLC-cache for a 2 TB SSD, which is ~ ⅓ of its total capacity (quick sanity check below).
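Quick sanity check on those ratios, using only the two figures quoted above (not a broader survey):

```python
# SLC-cache size as a fraction of rated capacity, for the two drives quoted above.
examples = [
    ("500 GB SSD with 6 GB SLC cache",    6,  500),
    ("2 TB SSD with 660 GB SLC cache",  660, 2000),
]
for label, cache_gb, capacity_gb in examples:
    print(f"{label}: {100 * cache_gb / capacity_gb:.1f}% of capacity")
# -> 1.2% and 33.0%
```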
At some point, if you're going to keep adding more and more cache, you may as well just get an SSD. Even those are still an order of magnitude faster than HDDs when their SLC cache runs out.
If you were talking about 1-bit per cell or 2-bit per cell, I'd agree with you on that, but ever since 3-bit per cell and 4-bit per cell, performance is piss poor once you run out of SLC cache, especially since they've gone DRAM-less to save on costs.
A very useful feature for HDD's, which have relatively consistent write performance for what they are.
Without having to pump up Rotational speeds, just adding more Actuator arms does wonders for Read/Write performance on an HDD w/o incurring significant power consumption / heat issues. No more than existing HDD's.
Add in an Optane cache that performs consistently, plus the latest in Platter density, and you'd have a drive that is "VERY Competitive" in Linear performance with DRAM-less QLC-SSD's.
The cache/buffer is important for optimizing random writes. Whatever benefit you get from buffering sequential writes is almost beside the point, relatively speaking.
This so-called jack of all trades was in fact William Shakespeare. The full phrase is “a jack of all trades is a master of none, but oftentimes better than a master of one.” It was a compliment. Far from letting it deter their path, some entrepreneurs swear that being a jack of all trades brings benefits.
Actually, the Opti-NAND is a form of Embedded Flash Drive that stores only the Meta-Data of the HDD.
It's reserved exclusively for that purpose, ergo freeing some capacity on the HDD and improving performance at the same time.
Optane, when using Direct Memory Attached mode to the HDD Controller with its "Ultra-Low Latency", along with a PCIe NVMe connection to the Host & a SAS/SATA connection on the HDD end as well, will allow Super High Performance that is similar to most SSD's.
It'll also have "Very Low Latency". We're talking Low Latency that is faster than a RAM Disk & close to DRAM. Given that we're talking over PCIe NVMe, it's performance will be in the ball park of your beloved CXL Direct Attached Memory, but directly attached to the HDD controller.
I wouldn't be surprised if the Consumer/Enterprise versions of these HDD's add CXL Direct Memory Attach support to the HDD's Optane section and allow "Super Fast" Read/Write to it as a buffer before you dump the data back to the HDD. Literally a form of "RAM Disk" attached over PCIe that buffers all "Data Read/Writes" before they hit the HDD layer.
That's where the HDD Controller comes in and automatically does RAID 0 internally to give us the best possible Sequential Performance. Obviously, it should be end user configurable, but the default should be "Internal RAID 0".
The cache/buffer is important for optimizing random writes. Whatever benefit you get from buffering sequential writes is almost beside the point, relatively speaking.
Imagine what we can do with modern technology & integration of Optane into an HDD; it'd make casual end users not notice the difference in performance against a bog standard SSD until they reach the extreme ends of the performance they need. Especially with larger Optane Caches attached to larger HDD's to mask the slowness of an HDD. And we know the last iteration of Optane was VERY fast, and a connection over even 1x PCIe 5.0 lane would offer plenty of bandwidth to use @ ~4 GB/s, which is equivalent to 4x PCIe 3.0 lanes.
Moving forward in time with newer iterations of PCIe, the bandwidth will only get better, and the nice part about only having a 1x PCIe connection is that multiple Virtual Machine hosts can virtually connect to the same Drive and share resources over multiple connections. The Optane portion would be critical to buffering Reads/Writes from multiple VM hosts as well, or anybody over multiple Network Connections.
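For reference, the per-lane arithmetic behind that ~4 GB/s figure (theoretical maxima after encoding overhead; real-world throughput is somewhat lower):

```python
# Approximate usable bandwidth per PCIe lane, per generation (Gen 3+ use 128b/130b encoding).
GENERATIONS = {
    "PCIe 3.0": 8.0,    # GT/s per lane
    "PCIe 4.0": 16.0,
    "PCIe 5.0": 32.0,
}
ENCODING_EFFICIENCY = 128 / 130

for name, gts in GENERATIONS.items():
    gbytes_per_s = gts * ENCODING_EFFICIENCY / 8   # each transfer carries 1 bit
    print(f"{name}: ~{gbytes_per_s:.2f} GB/s per lane")

# PCIe 5.0 x1 ~= 3.94 GB/s, which is indeed about the same as PCIe 3.0 x4.
```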
For me, the value would seem to be in using it as a write buffer, which is consistent with how @Kamen Rider Blade described it.
For SMR drives, you need to coalesce the writes you make to the platter. So, a NAND-based buffer would hold a close parallel to SSDs, where there's a pseudo-SLC or pseudo-MLC write buffer the drive fills, before going back and moving the data to TLC or QLC packing.
I would argue if you need an HDD for write performance, an SMR one is not ideal. And depending on how much flash cache you need to add to make this worthwhile, it's likely the cost reduction of going with SMR has diminished.
And depending on how much you're writing at once, you'll just lose that performance anyway.
Actually, the Opti-NAND is a form of Embedded Flash Drive that stores only the Meta-Data of the HDD.
It's reserved exclusively for that purpose, ergo freeing some capacity on the HDD and improving performance at the same time.
That's worded like a correction, but it's not. It doesn't contradict anything I said.
I know what Opti-NAND is, and I worry that it might fail before the rest of the drive. Without it, the drive is unreadable, making data-recovery impossible. It's a completely different topic, though.
No, I'm pretty sure they made them separate logical drives for good reasons. If you stripe across the actuators, then your IOPS don't really go up.
Also, it turns a failure in one half of the drive into a failure of the entire thing. The way these are meant to be used is in large pools of storage, where you replicate data across different physical and logical drives. Then, if one of the heads crashes, you only have to restore half a drive's worth of data.
Anyway, it's not a consumer technology. It might never trickle down to consumer HDDs.
I would argue if you need an HDD for write performance, an SMR one is not ideal. And depending on how much flash cache you need to add to make this worthwhile, it's likely the cost reduction of going with SMR has diminished.
SMR is fine for coherent writes. Where it falls apart is on smaller random writes. That's where a giant write buffer can really help. However, if you're just using it for backups, then it probably won't make a big difference.
As for the cost reduction, consider that you only need one NAND chip. If it can be driven directly by the HDD controller, then it might not even add much hardware complexity. Ideally, you could even reduce the amount of DRAM the drive has.
So, we're talking about a fixed overhead to the drive's electronics, in terms of cost. By contrast, the value of SMR is that it scales the amount of data you can fit on a platter. However dense your platter is, SMR will increase it by (IIRC) 30%. So, I don't see a NAND-based buffer destroying the value-proposition of SMR.
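As a rough, hypothetical cost sketch of that argument (every dollar figure below is a made-up placeholder, not a real BOM number):

```python
# Toy $/TB comparison: CMR drive vs. SMR drive (+~30% data per platter, per the post above)
# plus one NAND buffer chip. All dollar figures are hypothetical placeholders.
BASE_CAPACITY_TB = 20.0     # hypothetical CMR capacity
BASE_COST_USD = 300.0       # hypothetical drive cost
SMR_CAPACITY_GAIN = 0.30    # ~30% more data per platter with SMR
NAND_BUFFER_COST_USD = 5.0  # hypothetical fixed cost of one NAND chip

cmr_per_tb = BASE_COST_USD / BASE_CAPACITY_TB
smr_per_tb = (BASE_COST_USD + NAND_BUFFER_COST_USD) / (BASE_CAPACITY_TB * (1 + SMR_CAPACITY_GAIN))
print(f"CMR:               ${cmr_per_tb:.2f}/TB")
print(f"SMR + NAND buffer: ${smr_per_tb:.2f}/TB")
# The fixed buffer cost barely dents the ~30% capacity advantage.
```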
Plus, if it's an Opti-NAND drive (and I'd only recommend those for cloud/enterprise scenarios with robust redundancy schemes), then you've already got some NAND there, so why not also use it for write-buffering?
That's worded like a correction, but it's not. It doesn't contradict anything I said.
I know what Opti-NAND is, and I worry that it might fail before the rest of the drive. Without it, the drive is unreadable, making data-recovery impossible. It's a completely different topic, though.
No, I'm pretty sure they made them separate logical drives for good reasons. If you stripe across the actuators, then your IOPS don't really go up.
Also, it turns a failure in one half of the drive into a failure of the entire thing. The way these are meant to be used is in large pools of storage, where you replicate data across different physical and logical drives. Then, if one of the heads crashes, you only have to restore half a drive's worth of data.
Anyway, it's not a consumer technology. It might never trickle down to consumer HDDs.
Plus, if it's an Opti-NAND drive (and I'd only recommend those for cloud/enterprise scenarios with robust redundancy schemes), then you've already got some NAND there, so why not also use it for write-buffering?
It's only used for "Write Buffering" for emergency power loss situations.
According to the "Storage Review" article about Opti-NAND:
WD’s figures are that OptiNAND drives can secure more than 100MB of write cache data in the event of an unplanned power loss, a 50X improvement over standard drives that can flush about 2MB.
I was talking hypothetically, but it doesn't surprise me to hear that. They don't want to burn out their Opti-NAND chips by using them as a normal write buffer. In this case, it sounds like they just dump the contents of the DRAM buffer to it. Fair enough.