News AMD Engineers Show Off 'Infinitely' Stackable AM5 Chipset Cards


InvalidError

Titan
Moderator
Desktop motherboards have stayed the way they are because it's the most efficient way to do things.
I wouldn't say the most cost-efficient. I'm sure board designers could come up with some creative cost-cutting measures if they weren't constrained by conventional form factors, much like they do in laptops, where some motherboards are almost credit-card sized these days. Of course, that means gambling on selling enough of these proprietary just-about-everything designs to recover sunk costs.
 

Kamen Rider Blade

Distinguished
Dec 2, 2013
1,280
812
20,060
In a world where NVMe SSDs are getting ever hotter and in need of more cooling to avoid throttling (or worse), I think 2.5" provides useful benefits in terms of additional surface area. It also opens up the design space for cheaper drives to use a larger number of lower-density NAND chips.
But 2.5" is still too physically large IMO, and I'm targeting a smaller Form Factor because Enterprise & SFF wants smaller drives, having 1.8" SSD's be a option will help everybody.

Plus how cool would it be to take out a Bicycle Poker Card Deck and whip out 5x 1.8" SSD's like a spy sneaking a bunch of data drives through security.

Riser cables (which I think you meant to say) are easier to implement and provide better signal integrity than riser cards.
Riser Cables may be easier, but they aren't cheap to implement
And I don't need a Riser Card or Riser Cables.

I just want my PCIe x16 Slot to be moved to the "Bottom" of the MoBo, regardless of MoBo Form Factor.

This way I can get my PCIe Expansion Card Slots back.

Also, EFF M.2 Slots on MoBo's; short of extreme circumstances, I don't want to see a single M.2 slot on a MoBo.

If I want M.2, I'll use a PCIe Adapter card of my choice.

mini-STX is probably a lot more common than several of those. Also, there are still EATX boards on the market.

There's also 5x5 and I think 4x4.
I've never seen Mini-STX out in the wild; maybe they're popular somewhere, but from what I can tell, they're still pretty niche.

But as you know, EATX isn't a proper standard.
We need to regulate the industry and prevent them from making any more of these in the future:
E-ATX / XL-ATX / EE-ATX are ALL "Non-Standardized" MoBo Form Factors.

That's why SSI created the CEB/MEB/EEB/TEB standards; use them.
 

Kamen Rider Blade

Distinguished
Dec 2, 2013
1,280
812
20,060
I wouldn't say the most cost-efficient. I'm sure board designers could come up with some creative cost-cutting measures if they weren't constrained by conventional form factors, much like they do in laptops, where some motherboards are almost credit-card sized these days. Of course, that means gambling on selling enough of these proprietary just-about-everything designs to recover sunk costs.
Or we can tell the manufacturers that "Proprietary MoBo Designs & Form Factors" are to be BARRED from being made.


If need be, create regulation to LITERALLY make Proprietary Form Factors ILLEGAL to manufacture and require them to use "Standardized" Form Factors ONLY.

Less E-Waste is good, standards are good, Easy User Repair-ability is Good!
 

bit_user

Titan
Ambassador
More like I forget to do it sometimes.
Well, something to consider: the English language has never had compound words with a capital letter in the middle. Words are either separate, hyphenated, or simply run together. This business of capitalizing middle letters is something used by programmers (who couldn't use spaces or hyphens) and popularized by marketers. It's not proper English.

...not that I'm saying you should care, but just in case you do.
 
Last edited:
  • Like
Reactions: snemarch

bit_user

Titan
Ambassador
But 2.5" is still too physically large IMO, and I'm targeting a smaller Form Factor because Enterprise & SFF wants smaller drives, having 1.8" SSD's be a option will help everybody.
Enterprise has been clear about what it wants, like those crazy ruler form-factor drives. SFF wants M.2, more or less. It's really desktop which wants something other than M.2. 2.5" already has extremely broad support. There's no real need to introduce yet another form factor (i.e. 1.8").

Plus how cool would it be to take out a Bicycle Poker Card Deck and whip out 5x 1.8" SSD's like a spy sneaking a bunch of data drives through security.
That's not a compelling need most people have. 2.5" is already small enough to be adequately portable.

Riser Cables may be easier, but they aren't cheap to implement
I'm talking about cost and signal integrity. We don't have much visibility into pricing, but if we did, I expect we'd see that the cables are cheaper because they don't need retimers.

I just want my PCIe x16 Slot to be moved to the "Bottom" of the MoBo, regardless of MoBo Form Factor.
I actually agree with you. I want the same thing. It would be best.

I just bought a micro-ATX which has 3 PCIe slots:
  • x16 (top)
  • x4 (middle)
  • x8 (bottom)

If you put a 2-slot+ card in the top slot, you block the middle slot. If you put anything in the bottom slot, the top one drops to x8. So, if you really want to use a card in that board at full x16 width, then it must be the only card, unless it happens to be single-width (some workstation graphics cards are single-width, but they're considerably more expensive than their gaming equivalents, and they're not even the higher-end models).
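To spell out the lane-sharing behavior described above, here's a minimal sketch. It's my own simplified model, not the vendor's documentation, and the slot names are made up:

```python
# Illustrative model of how the 16 CPU lanes appear to be shared
# between the top and bottom x16-length slots on this board.
def slot_link_widths(bottom_populated: bool) -> dict:
    """Electrical link width of each slot for the two configurations."""
    if bottom_populated:
        # The CPU lanes bifurcate 8 + 8 between top and bottom.
        return {"top_x16_slot": 8, "middle_x4_slot": 4, "bottom_x16_slot": 8}
    # Bottom slot empty: the top slot keeps all 16 CPU lanes.
    return {"top_x16_slot": 16, "middle_x4_slot": 4, "bottom_x16_slot": 0}

print(slot_link_widths(bottom_populated=False))  # top runs at full x16
print(slot_link_widths(bottom_populated=True))   # top drops to x8
```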

How cheap of them! Given the bottom slot is mechanically x16, I wish they'd not taken the cheap route and just made that the full-width slot.

Anyway, because of that, I had to spend the extra money and get the version of the board which had integrated 10 Gigabit Ethernet, since my ability to use a 10 Gigabit NIC was so badly constrained. And, due to supply constraints, it basically took about 2 years for me to find that version of the board in stock & reasonably-priced.

I share your complaints about M.2 in desktops. All the heat problems we're seeing show just what a bad idea it was.

I've never seen Mini-STX out in the wild; maybe they're popular somewhere, but from what I can tell, they're still pretty niche.
The boards are somewhat niche. Cases have been virtually nonexistent, ever since the form factor was launched in 2016 or so.

I wish ARM-based mini-PCs would adopt this form factor and we'd get a vibrant case ecosystem for them. Instead, every ARM SBC either uses the Pi's form factor (which is a fine layout for a hobby board, but not a mini-PC) or has an extremely limited case selection.
 

Kamen Rider Blade

Distinguished
Dec 2, 2013
1,280
812
20,060
Well, something to consider: the English language has never had compound words with a capital letter in the middle. Words are either separate, hyphenated, or simply run together. This business of capitalizing middle letters is something used by programmers (who couldn't use spaces or hyphens) and popularized by marketers. It's not proper English.

...not that I'm saying you should care, but just in case you do.
As a programmer, I do it because I like it.

That pretty much is all there is to it.

Nothing deep about it.
 

Kamen Rider Blade

Distinguished
Dec 2, 2013
1,280
812
20,060
Enterprise has been clear about what it wants, like those crazy ruler form-factor drives. SFF wants M.2, more or less. It's really desktop which wants something other than M.2. 2.5" already has extremely broad support. There's no real need to introduce yet another form factor (i.e. 1.8").
From everything I've seen, Enterprise wants something other than the silly "Ruler Format" that is M.2.

That format was foisted upon them via the LapTop market.

That's not a compelling need most people have. 2.5" is already small enough to be adequately portable.
It is compelling to me and others who want "Smaller Things" & to fit more into an existing case or backplane.

The need for smaller & cuter things, just because they're small, is ever increasing in our society.
And inside a Rack Mount Server, Volume and Space are at a premium; same with SFF.

Also for Shipping / Logistics, smaller = Better = fit more. With a smaller than 2.5" format, you can fit more product into existing shipping containers.

M.2 also has the issue that its bare PCB is easy for end users to damage.

Having a proper shell w/o being too big or too small is nice.

It's like Goldilocks and the 3 bears. Not too big, not too small, just right.

1.8" SSD Drives is "Just Right" in size IMO along with flexibility and not conflicting with any existing designs that are popular and out on the market.

Leave 2.5" drives to Physical HDD's and Hybrid SHDD's along with 3.5" drives.

It's also WAY easier to route NAND-Flash Packages in a Square Grid, with the SSD Micro-Controller in the center and 8x NAND Flash Packages on each of the Ventral & Dorsal sides, for a total of 16x NAND-Flash Packages.

DRAM can be located on the opposite side of the controller for shortest trace lengths possible.

If you're stuck with the M.2 / Ruler format, routing large numbers of NAND Flash packages becomes a PAIN in the ARSE.

I'm talking about cost and signal integrity. We don't have much visibility into pricing, but if we did, I expect we'd see that the cables are cheaper because they don't need retimers.
And not having to pay to manufacture any riser at all, and just using that money for retimers, would be all we need.

You're already paying for a PCIe <Insert Latest Version # Here> x16 slot.

All that extra money for the Riser Cable/Card can be better spent on the ReTimers IMO.

This way the case manufacturers don't need to buy them and put them in the case; just give me up to 5x expansion Card slots past the bottom slots for Video Card Exhaust & Connectivity.

Retimer chips for a full 16-lane PCIe 4.0 link could cost $15 to $25.
We have Riser Cables/Cards costing up to $80 Retail.
And once you buy them, they're not even guaranteed to work half the time.

They eventually flake out and you have to waste time to debug the problem.

Usually plugging straight into the MoBo's PCIe x16 slot works w/o issue.

So why not just spend that $$ at the MoBo makers to get it right the first time?

If you shove in an extra foot of Riser Cable, you'd get a similar excess latency of a few nano-seconds as you would from adding a Retimer and moving the Slot to the bottom (the extra latency penalty will vary based on MoBo Form Factor, YMMV).
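As a rough sanity check on that "few nanoseconds" claim, here's a back-of-the-envelope sketch; the velocity factor and the retimer latency are assumed round numbers, not measured values:

```python
# Back-of-the-envelope latency comparison (assumed round numbers, illustrative only).
C = 3.0e8                 # speed of light in a vacuum, m/s
VELOCITY_FACTOR = 0.65    # assumed signal speed in a PCIe riser cable, as a fraction of c
CABLE_LENGTH_M = 0.30     # roughly one foot of extra cable

cable_delay_ns = CABLE_LENGTH_M / (C * VELOCITY_FACTOR) * 1e9
assumed_retimer_ns = 10   # assumed added latency for a retimer hop (order of magnitude guess)

print(f"Extra cable propagation delay: ~{cable_delay_ns:.1f} ns")  # ~1.5 ns
print(f"Assumed retimer added latency: ~{assumed_retimer_ns} ns")
# Either way, the penalty is nanoseconds, i.e. tiny next to NVMe or GPU latencies.
```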


I actually agree with you. I want the same thing. It would be best.

I just bought a micro-ATX which has 3 PCIe slots:
  • x16 (top)
  • x4 (middle)
  • x8 (bottom)

If you put a 2-slot+ card in the top slot, you block the middle slot. If you put anything in the bottom slot, the top one drops to x8. So, if you really want to use a card in that board at full x16 width, then it must be the only card, unless it happens to be single-width (some workstation graphics cards are single-width, but they're considerably more expensive than their gaming equivalents, and they're not even the higher-end models).

How cheap of them! Given the bottom slot is mechanically x16, I wish they'd not taken the cheap route and just made that the full-width slot.

Anyway, because of that, I had to spend the extra money and get the version of the board which had integrated 10 Gigabit Ethernet, since my ability to use a 10 Gigabit NIC was so badly constrained. And, due to supply constraints, it basically took about 2 years for me to find that version of the board in stock & reasonably-priced.

I share your complaints about M.2 in desktops. All the heat problems we're seeing show just what a bad idea it was.
Now you've felt my pain; we need to unify and complain to the MoBo makers to implement change across the board.

We need an ATX Addendum provided by the community.

The boards are somewhat niche. Cases have been virtually nonexistent, since the form factor was launched in 2016 or so.
If that's the case, there's no harm in me pushing VIA's ITX family of standards for LapTop usage.

We have LTT supporting FrameWork for the modular LapTop.

Slowly, we need to drive a common Eco-System for LapTops that use a common standard.

I want Nano-ITX to be "THE MoBo Form Factor" for LapTops.
The Dimensions for Nano-ITX IRL are so perfect for modern LapTops:
[Image: Nano-ITX board dimensions]
Even for the Thin LapTops we have today.

As for the battery issue, I want to bring back Modular Batteries via my "Folder PC" concept.
You move the Battery into a bottom module that houses 2x Separate 99.9 Whr batteries, where ONLY 1x Battery can legally be active at any given time due to the FAA 100 Whr Battery rule.
Then you can hot-swap 99.9 Whr batteries and recharge the other one in the other battery bay.

Dell already pulls off this battery trick with their "Tough Line" of Professional LapTops; we can do the same for consumer LapTops. Bring back modular, user Hot-Swappable Batteries.

You'd obviously stand up the core frame of the LapTop like an ASUS MotherShip setup, with a Surface-like hinge on the back.
The back of the core body would be all fine mesh with LOTS of Air Ventilation pushing Heat out of the Top/Sides, plus an Elevated 180° Display hinge that raises the Display higher than normal, effectively alleviating the end user's constant neck strain from looking down at their LapTop Screen.

Just as "NoteBook LapTop PC's" have been a common format, I want to iterate on that and bring in "Folder TableTop PC's" to become the next popular common mobile PC format.

One that is flexible, but mobile enough to deploy on Tables everywhere, while maintaining ergonomics for the end user by having a naturally taller screen, bringing back "Hot-Swappable" batteries, and offering the option to just use an AC/DC Mains Power pack and no Batteries if you want.


I wish ARM-based mini-PCs would adopt this form factor and we'd get a vibrant case ecosystem for them. Instead, every ARM SBC either uses the Pi's form factor (which is a fine layout for a hobby board, but not a mini-PC) or has an extremely limited case selection.
I'd rather see RISC-V come in and eat up ARM's lunch.

There's so many cool things that you can do with RISC-V.
I want to see them blend the CPU/GPU core together into a new Hybrid Core even harder.
That's the real innovation that I want to see happen.

Screw ARM and its money-grubbing ways.

RISC-V or x86.

Screw ARM.
 
Last edited:

bit_user

Titan
Ambassador
From everything I've seen, Enterprise wants something other than the silly "Ruler Format" that is M.2.
No, the "ruler format" does not refer to M.2. Have you seriously not heard of the E1.S form factor?

That format was foisted upon them via the LapTop market.
Yes, and datacenter SSDs are no longer made in M.2 form factor. I just bought one of the last 22110 drives made, which is the Samsung PM9A3.

M.2 also has the issue that its bare PCB is easy for end users to damage.
Same with DIMMs, but you're not arguing to put them in an aluminum shell, are you?

Leave 2.5" drives to Physical HDD's and Hybrid SHDD's along with 3.5" drives.
No, 2.5" HDDs are basically dead and should stay that way. They only made sense for laptops and for >= 10k RPM server HDDs, both of which are extinct.

It's also WAY easier to route NAND-Flash Packages in a Square Grid, with the SSD Micro-Controller in the center and 8x NAND Flash Packages on each of the Ventral & Dorsal sides, for a total of 16x NAND-Flash Packages.

DRAM can be located on the opposite side of the controller for shortest trace lengths possible.
All things that are equally possible in a 2.5" form factor.

If that's the case, there's no harm in me pushing VIA's ITX family of standards for LapTop usage.
You know, laptop hasn't even been hyphenated since some of the oldest ads I can find for them:
[Image: Radio Shack / Tandy laptop computer ads from 1990]

We have LTT supporting FrameWork for the modular LapTop.
  1. Framework is a proper noun, and they don't capitalize the W.
  2. Framework uses proprietary form factor components.

The Dimensions for Nano-ITX IRL are so perfect for modern LapTops:
Maybe if you like thick, chunky laptops. For thinner laptops, you need a longer and more narrow shape.

As for the battery issue, I want to bring back Modular Batteries via my "Folder PC" concept.
Batteries is going too far off-topic, IMO.

I'd rather see RISC-V come in and eat up ARM's lunch.
Again, that's beside the point. The key thing is that I wanted Raspberry Pi-like machines to have a standard form factor, so I'd have a broader range of cases to choose from.
 

Kamen Rider Blade

Distinguished
Dec 2, 2013
1,280
812
20,060
No, the "ruler format" does not refer to M.2. Have you seriously not heard of the E1.S form factor?
I have, but it's more of a generic term for storage drives that are significantly longer than they are wide.

Yes, and datacenter SSDs are no longer made in M.2 form factor. I just bought one of the last 22110 drives made, which is the Samsung PM9A3.
Good, they moved away from a very bad connector.

Same with DIMMs, but you're not arguing to put them in an aluminum shell, are you?
Most consumer DIMMs have an Aluminium HeatSink shell around them.

No, 2.5" HDDs are basically dead and should stay that way. They only made sense for laptops and for >= 10k RPM server HDDs, both of which are extinct.
I want to revitalize HDD's with Multi-Actuator & Optane built in.

If DRAM-less SSD's use SLC cache, there's no reason why HDD's can't use Optane to act as a similar cache for large chunks of data and connect via NVMe over a PCIe x1 lane.

All things that are equally possible in a 2.5" form factor.
Except for being physically larger than the 1.8" drive form factor.
Physical Compactness + Durability + Density is what I'm looking for.
The Tri-Fecta.

You know, laptop hasn't even been hyphenated since some of the oldest ads I can find for them:
Maybe it's time to teach marketing how to make their text stand out =D.

  1. Framework is a proper noun, and they don't capitalize the W.
  2. Framework uses proprietary form factor components.
Like I said, it's my intentional stylization.

Maybe if you like thick, chunky laptops. For thinner laptops, you need a longer and more narrow shape.
I'm not Apple; I don't chase Thin & Light to the n-th degree.

Batteries is going too far off-topic, IMO.
Ok, but it's not for me.

Again, that's beside the point. The key thing is that I wanted Rasperry Pi-like machines having a standard form factor, so I had a broader range of cases to choose from.
That's the issue with Raspberry Pi popularity taking off before a form factor was decided.

Now it's kind of a Free-For-All.
 

InvalidError

Titan
Moderator
I want to revitalize HDD's with Multi-Actuator & Optane built in.
Most people who still want HDDs want them for the lowest possible cost per GB with decent reliability. More mechanical parts increase the risk of failure and the cost, for moderate to no benefit to most people. If I want fast access to something, I'll put it on NVMe while I need it and move it to HDD for cold/near-line storage when I'm done; I don't want the cost and unnecessary reliability liability of having a consumable storage tier built directly into the HDD.
 

bit_user

Titan
Ambassador
I have, but it's more of a generic term for storage drives that are significantly longer than they are wide.
No. No reference I've ever seen to "ruler form factor" was referring to an M.2 drive. The term only came into use when E1.S was proposed.

Most consumer DIMMs have an Aluminium HeatSink shell around them.
No. Most consumer DIMMs are cheap and don't have a heatsink. The ones with heatsinks are the gamer-oriented ones.

The average PC will have something like Kingston Value RAM, which is just a plain green PCB with chips on it:


Crucial is the same:


Or we could look at any number of other manufacturers. Whether you buy a non-gaming PC from a big OEM or a smaller shop, open it up and most likely the RAM won't have heatsinks.

I want to revitalize HDD's with Multi-Actuator & Optane built in.
Multi-actuator never existed in 2.5" form factor. It's not obvious whether it could, but it sort of defeats the point. The reason they went to multi-actuator was to compensate for the long readout time of high-capacity drives. 2.5" HDDs were never capacity-oriented.
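A rough worked example of the "long readout time" point, using assumed round numbers for capacity and sustained throughput rather than any particular drive's spec:

```python
# Why multi-actuator exists: a full read of a high-capacity HDD takes many hours.
capacity_tb = 20          # assumed nearline drive capacity
throughput_mb_s = 250     # assumed average sustained throughput, MB/s

hours_single = capacity_tb * 1_000_000 / throughput_mb_s / 3600
hours_dual = hours_single / 2   # a second actuator roughly doubles aggregate throughput

print(f"Single actuator: ~{hours_single:.0f} hours to read the whole drive")  # ~22 h
print(f"Dual actuator:   ~{hours_dual:.0f} hours")                            # ~11 h
```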

If DRAM-less SSD's use SLC cache, there's no reason why HDD's can't use Optane to act as a similar cache for large chunks of data
Yeah, I think the whole hybrid drive idea makes sense for SMR drives. But NAND is fine for that and we're getting off-topic again.

Like I said, it's my intentional stylization.
It just confuses people. Or worse: it reflects poorly on you.

And if it's a proper name, then mis-capitalizing it could lead others to believe the company itself uses that capitalization, which it doesn't. It's one thing to write well-known words in your own style, but you shouldn't take liberties with a proper name, like that of a person or company.

I'm not Apple; I don't chase Thin & Light to the n-th degree.
A lot of people don't like chunky laptops. If you want a technology standard to catch on, then you can't ignore what sells.
 
I want to revitalize HDD's with Multi-Actuator & Optane built in.
Which is what hybrid drives are, and they haven't really done much to the market. Even adding Optane wouldn't do anything to revive this market.

The problem is marketers promised "SSD like access times" or similar, which is only the case if data is constantly being accessed. As such, hybrid drives are only really useful as the system drive for OS data.

If DRAM-less SSD's use SLC cache, there's no reason why HDD's can't use Optane to act as a similar cache for large chunks of data and connect via NVMe over a PCIe x1 lane.
The only reason why cache is "needed" for storage drives is to make smaller writes appear to act instantaneously. Smaller write sizes (my WAG for what "smaller" means is <= 128KiB) tend to dominate what kind of write operations are done.

Except for being physically larger than the 1.8" drive form factor.
Physical Compactness + Durability + Density is what I'm looking for.
The Tri-Fecta.
And you'll have to deal with the engineering triangle: performance, reliability, cost. Choose at most two.
 
  • Like
Reactions: bit_user

Kamen Rider Blade

Distinguished
Dec 2, 2013
1,280
812
20,060
Most people who still want HDDs want them for the lowest possible cost per GB with decent reliability. More mechanical parts increase the risk of failure and the cost, for moderate to no benefit to most people. If I want fast access to something, I'll put it on NVMe while I need it and move it to HDD for cold/near-line storage when I'm done; I don't want the cost and unnecessary reliability liability of having a consumable storage tier built directly into the HDD.
I guess we're different people in what we want out of HDD's.

Which is what hybrid drives are, and they haven't really done much to the market. Even adding Optane wouldn't do anything to revive this market.
They've also been implemented poorly. I know, I've tried them from Seagate and they weren't reliable.


The problem is marketers promised "SSD like access times" or similar, which is only the case if data is constantly being accessed. As such, hybrid drives are only really useful as the system drive for OS data.
Or you throw in Massive Multi-GB files and the transfer slows to a crawl once it blows past the tiny DRAM cache.
With a larger Optane Cache, it would last A LOT longer before it slows down.

Same with multiple Reads/Writes in parallel for many smaller to medium size files.

The only reason why cache is "needed" for storage drives is to make smaller writes appear to act instantaneously. Smaller write sizes (my WAG for what "smaller" means is <= 128KiB) tend to dominate what kind of write operations are done.
I guess my definition of "Small" is file sizes (< 1 GiB but > 1 MiB), along with "Many of them" reading/writing in parallel at the same time.

And you'll have to deal with the engineering triangle: performance, reliability, cost. Choose at most two.
To quote "Adam Savage" from MythBusters "I reject your reality and substitute my own"

I'll find a way to balance performance & reliability & very reasonable costs, especially in bulk mass production.

No. No reference I've ever seen to "ruler form factor" was referring to an M.2 drive. The term only came into use when E1.S was proposed.
Ok...

No. Most consumer DIMMs are cheap and don't have a heatsink. The ones with heatsinks are the gamer-oriented ones.

The average PC will have something like Kingston Value RAM, which is just a plain green PCB with chips on it:
I never see those naked RAM sticks unless it's for enterprise.

Most of the time, in consumer PC's, I see sticks with some form of HeatSink, any kind.
I almost NEVER see a naked RAM stick out in the wild.

Crucial is the same:

Or we could look at any number of other manufacturers. Whether you buy a non-gaming PC from a big OEM or a smaller shop, open it up and most likely the RAM won't have heatsinks.
I very rarely ever see naked RAM sticks out in the wild; they almost ALWAYS have some form of Heat Sink.
Even the most basic design ones will work, it doesn't need to be fancy.

Multi-actuator never existed in 2.5" form factor. It's not obvious whether it could, but it sort of defeats the point. The reason they went to multi-actuator was to compensate for the long readout time of high-capacity drives. 2.5" HDDs were never capacity-oriented.
You need the right number of Actuator Arms & Heads to do a Vertical-Stack "Multi-Actuator"; not every single config for the 2.5" HDD form factor would work, only some would.

2.5" HDD's can be capacity-oriented depending on which model you're looking into buying.


Yeah, I think the whole hybrid drive idea makes sense for SMR drives. But NAND is fine for that and we're getting off-topic again.
Our topics meander like a river =D

It just confuses people. Or worse: it reflects poorly on you.
Oh well, it's the way I've been typing for quite a while. I highly doubt you're going to change my habits at this point.

And if it's a proper name, then mis-capitalizing it could lead others to believe the company itself uses that capitalization, which it doesn't. It's one thing to write well-known words in your own style, but you shouldn't take liberties with a proper name, like that of a person or company.
People shouldn't be believing what random posters on the internet state for spelling/capitalization. They should go to the company website or get an official representative to confirm that.

Random people are just that, people, nothing more, nothing less. Don't put too many expectations on other folks; you won't be disappointed that way.

A lot of people don't like chunky laptops. If you want a technology standard to catch on, then you can't ignore what sells.
Let's define "Chunky" & "THICC"

How many "mm" or "inches" is ! Thin ! at the rear and front ends of a LapTop IYO?
How many "mm" or "inches" is Chunky at the rear and front ends of a LapTop IYO?
How many "mm" or "inches" is THICC! at the rear and front ends of a LapTop IYO?
 
Last edited:
They've also been implemented poorly. I know, I've tried them from Seagate and they weren't reliable.
The implementation doesn't really matter. The problem with anything "cache" related is you don't get that cache speed unless you access the same data enough times. Outside of the OS, core library files, and apps the user uses every day, everything else almost never gets to this point.

Or you throw in Massive Multi-GB files and the transfer slows to a crawl once it blows past the tiny DRAM cache.
With a larger Optane Cache, it would last A LOT longer before it slows down.

Same with multiple Reads/Writes in parallel for many smaller to medium size files.
How large are we talking about? Hybrid drives seemed to have topped out at 8GB, but I would guess they actually topped out at 16GB.

At some point, if you're going to keep adding more and more cache, you may as well just get an SSD. Even those are still an order of magnitude faster than HDDs when their SLC cache runs out.

I guess my definition of "Small" is file sizes (< 1 GiB but > 1 MiB), along with "Many of them" reading/writing in parallel at the same time.
And that's what cache is there for: to buffer against many small writes.

To quote "Adam Savage" from MythBusters "I reject your reality and substitute my own"

I'll find a way to balance performance & reliability & very reasonable costs, especially in bulk mass production.
If you don't have any actual engineering or product development experience, or more importantly, money, I wish you lots of luck.
 
  • Like
Reactions: bit_user

bit_user

Titan
Ambassador
The implementation doesn't really matter. The problem with anything "cache" related is you don't get that cache speed unless you access the same data enough times. Outside of the OS, core library files, and apps the user uses every day, everything else almost never gets to this point.
For me, the value would seem to be in using it as a write buffer, which is consistent with how @Kamen Rider Blade described it.

For SMR drives, you need to coalesce the writes you make to the platter. So, a NAND-based buffer would hold a close parallel to SSDs, where there's a pseudo-SLC or pseudo-MLC write buffer the drive fills, before going back and moving the data to TLC or QLC packing.

Actually using it for quick turn-around of reads is almost beside the point, IMO. Yes, you could try to use some of it for that, and it might help with the most frequently-accessed files, but if someone isn't booting off the drive then they probably won't notice much difference from such caching behavior.
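As a toy illustration of that coalescing idea (a sketch only; the threshold, data structure, and flush logic are made up and not how any particular drive firmware actually works):

```python
# Toy model: buffer small random writes in fast media (NAND/Optane), then flush
# them to the shingled platter in one large, mostly sequential batch.
FLUSH_THRESHOLD = 256 * 1024 * 1024   # assumed batch size: 256 MiB

write_buffer = []                     # (lba, length) records currently held in the buffer

def buffered_write(lba: int, length: int) -> None:
    """Accept a host write into the fast buffer instead of the SMR zone."""
    write_buffer.append((lba, length))
    if sum(rec_len for _, rec_len in write_buffer) >= FLUSH_THRESHOLD:
        flush_to_platter()

def flush_to_platter() -> None:
    """Commit buffered writes in LBA order, so the platter sees one long
    sequential pass instead of many scattered read-modify-write cycles."""
    for lba, length in sorted(write_buffer):
        pass  # the actual platter write would happen here
    write_buffer.clear()
```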
 

Kamen Rider Blade

Distinguished
Dec 2, 2013
1,280
812
20,060
Additional cost, additional points of failure for a feature that is of little to no importance to what most people still use HDDs for. The usual "jack of all trades, master of none" type deal.
That's not the original quote; what you are using is the abbreviated version, which was designed to denigrate "Jack of All Trades" types.

Here's the "Full Quote":
A jack of all trades is a master of none, but oftentimes better than a master of one.

For me, the value would seem to be in using it as a write buffer, which is consistent with how @Kamen Rider Blade described it.

For SMR drives, you need to coalesce the writes you make to the platter. So, a NAND-based buffer would hold a close parallel to SSDs, where there's a pseudo-SLC or pseudo-MLC write buffer the drive fills, before going back and moving the data to TLC or QLC packing.

Actually using it for quick turn-around of reads is almost beside the point, IMO. Yes, you could try to use some of it for that, and it might help with the most frequently-accessed files, but if someone isn't booting off the drive then they probably won't notice much difference from such caching behavior.
That's exactly the point I'm trying to convey. You hit the nail on the head!

A Read/Write buffer for large files, for many simultaneous files of middling size, or enough buffer for many small files.

The SLC-cache on DRAM-less SSD's makes "ALL the difference" in performance; the same can be true for Hybrid HDD's with Optane.
Especially HDD's with Multi-Actuator, since that is starting to become a thing and improves the Sequential Read/Write performance, which is what HDD's are supposed to be good at.

Without having to pump up Rotational speeds, just adding more Actuator arms does wonders for Read/Write performance on a HDD w/o incurring significant power consumption / heat issues. No more than existing HDD's.

Add in an Optane cache that performs consistently, plus the latest in Platter density, and you'd have a drive that is "VERY Competitive" in Linear performance with DRAM-less QLC-SSD's.


The implementation doesn't really matter. The problem with anything "cache" related is you don't get that cache speed unless you access the same data enough times. Outside of the OS, core library files, and apps the user uses every day, everything else almost never gets to this point.
Implementation makes all the difference in what it can provide the end user.

How large are we talking about? Hybrid drives seemed to have topped out at 8GB, but I would guess they actually topped out at 16GB.
We start at 16 GiB of Optane for the smallest HDD configurations and can scale up in 16 GiB increments, all the way to 128 GiB of Optane for the largest HDD configurations currently in mass production. Obviously, once mass production of Optane truly ramps up and economies of scale kick in, we can adjust those sizes.

But that matches the SLC-cache range I've seen for modern SSD's, which runs from 6 GB of SLC-cache for a 500 GB SSD (about 1.2% of its total capacity) up to 660 GB of SLC-cache for a 2 TB SSD (~⅓ of its total capacity).
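Just double-checking those ratios with the figures quoted above:

```python
# Quick arithmetic check of the SLC-cache-to-capacity ratios mentioned above.
configs = {
    "500 GB SSD": (6, 500),       # (SLC cache in GB, total capacity in GB)
    "2 TB SSD":   (660, 2000),
}
for name, (cache_gb, capacity_gb) in configs.items():
    print(f"{name}: {cache_gb} GB cache = {cache_gb / capacity_gb:.1%} of capacity")
# 500 GB SSD: 6 GB cache = 1.2% of capacity
# 2 TB SSD:   660 GB cache = 33.0% of capacity (~1/3)
```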


At some point, if you're going to keep adding more and more cache, you may as well just get an SSD. Even those are still an order of magnitude faster than HDDs when their SLC cache runs out.
If you were talking about 1-bit per cell or 2-bit per cell, I'd agree with you on that, but ever since 3-bit per cell and 4-bit per cell, performance is piss poor once you run out of SLC cache, especially after going DRAM-less to save on costs.

A very useful feature for HDD's, which have relatively consistent write performance for what they are.


And that's what cache is there for: to buffer against many small writes.
DRAM cache is very limited in what it can do, especially given how small the capacities are on HDD's in this day and age, usually ≤ 2 GiB.

If you don't have any actual engineering or product development experience, or more importantly, money, I wish you lots of luck.
Thanks! That's why I want to find vendors who are willing to take some risk to make great waves and changes for the better.
 
Last edited:

bit_user

Titan
Ambassador
That's not the original quote; what you are using is the abbreviated version, which was designed to denigrate "Jack of All Trades" types.

Here's the "Full Quote":
A jack of all trades is a master of none, but oftentimes better than a master of one.
No attribution?

Anyway, even if that was the original quote, the version people use probably aligns with what they really mean.

The SLC-cache on DRAM-less SSD's makes "ALL the difference" in performance; the same can be true for Hybrid HDD's with Optane.
It doesn't need to be Optane. If we're talking about write-buffering to HDDs, pseudo-SLC NAND is cheaper and still more than fast enough.

Without having to pump up Rotational speeds, just adding more Actuator arms does wonders for Read/Write performance on a HDD w/o incurring significant power consumption / heat issues. No more than existing HDD's.
You know they appear as separate logical drives, right?

Add in an Optane cache that performs consistently, plus the latest in Platter density, and you'd have a drive that is "VERY Competitive" in Linear performance with DRAM-less QLC-SSD's.
The cache/buffer is important for optimizing random writes. Whatever benefit you get from buffering sequential writes is almost beside the point, relatively speaking.
 

Kamen Rider Blade

Distinguished
Dec 2, 2013
1,280
812
20,060
No attribution?
Updated, see original post.
A jack of all trades is a master of none, but oftentimes better than a master of one.
This so-called jack of all trades was in fact William Shakespeare. The full phrase is “a jack of all trades is a master of none, but oftentimes better than a master of one.” It was a compliment. Far from letting it deter their path, some entrepreneurs swear that being a jack of all trades brings benefits.

Anyway, even if that was the original quote, the version people use probably aligns with what they really mean.
I know what they mean. I think it's their opinion, and they're entitled to have their own opinion on things.

It doesn't need to be Optane. If we're talking about write-buffering to HDDs, pseudo-SLC NAND is cheaper and still more than fast enough.
Actually, Opti-NAND is a form of Embedded Flash Drive that stores only the Meta-Data of the HDD.
It's reserved exclusively for that purpose, ergo freeing some capacity on the HDD and improving performance at the same time.

Optane, when used in a Direct Memory Attached mode to the HDD Controller with its "Ultra-Low Latency", along with a PCIe NVMe connection to the Host & a SAS/SATA connection on the HDD end, will allow Super High Performance similar to most SSD's.


It'll also have "Very Low Latency". We're talking Low Latency that is faster than a RAM Disk & close to DRAM. Given that we're talking over PCIe NVMe, its performance will be in the ballpark of your beloved CXL Direct Attached Memory, but directly attached to the HDD controller.
I wouldn't be surprised if the Consumer/Enterprise versions of these HDD's add CXL Direct Memory Attach support to the HDD's Optane section and allow "Super Fast" Read/Write to it as a buffer before you dump the data back to the HDD. Literally a form of "RAM Disk" attached over PCIe that buffers all "Data Read/Writes" before they hit the HDD layer.


You know they appear as separate logical drives, right?
That's where the HDD Controller comes in and automatically does RAID 0 internally to give us the best possible Sequential Performance. Obviously, it should be end user configurable, but the default should be for "Internal RAID 0".

The cache/buffer is important for optimizing random writes. Whatever benefit you get from buffering sequential writes is almost beside the point, relatively speaking.
Optane is good for buffering everything.
Just look at First Gen Optane and how it performs as a cache drive with only 16 GiB or 32 GiB of Optane buffering a HDD off the MoBo.
[Benchmark charts: first-gen Optane Memory (16/32 GiB) caching a hard drive]

Imagine what we can do with modern technology & integration of Optane into an HDD; it'd make casual end users not notice the difference in performance against a bog-standard SSD until they reach the extreme ends of performance, especially with larger Optane Caches attached to larger HDD's to mask the slowness of the HDD. And we know the last iteration of Optane was VERY fast, and a connection over even 1x PCIe 5.0 lane would offer plenty of bandwidth at ~4 GB/s, which is equivalent to 4x PCIe 3.0 lanes.
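For reference, here's roughly where that ~4 GB/s figure comes from; these are approximate per-lane rates accounting only for line encoding, not higher-level protocol overhead:

```python
# Approximate usable bandwidth per PCIe lane, by generation (128b/130b encoding for Gen 3+).
generations = {
    "PCIe 3.0": 8.0,    # GT/s per lane
    "PCIe 4.0": 16.0,
    "PCIe 5.0": 32.0,
}
for gen, gt_s in generations.items():
    gb_s = gt_s * (128 / 130) / 8   # apply encoding efficiency, then convert bits to bytes
    print(f"{gen}: ~{gb_s:.2f} GB/s per lane")
# PCIe 3.0: ~0.98 GB/s, PCIe 4.0: ~1.97 GB/s, PCIe 5.0: ~3.94 GB/s
# So 1x PCIe 5.0 lane is roughly 4x PCIe 3.0 lanes, i.e. the ~4 GB/s quoted above.
```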

Moving forward in time with newer iterations of PCIe, the bandwidth will only get better, and the nice part about only having a 1x PCIe connection is that multiple hosts on Virtual Machines can virtually connect to the same Drive and share resources over multiple connections. The Optane portion would be critical for buffering Reads/Writes from multiple VM hosts as well, or from anybody over multiple Network Connections.
 
Last edited:
For me, the value would seem to be in using it as a write buffer, which is consistent with how @Kamen Rider Blade described it.

For SMR drives, you need to coalesce the writes you make to the platter. So, a NAND-based buffer would hold a close parallel to SSDs, where there's a pseudo-SLC or pseudo-MLC write buffer the drive fills, before going back and moving the data to TLC or QLC packing.
I would argue that if you need an HDD for write performance, an SMR one is not ideal. And depending on how much flash cache you need to add to make this worthwhile, it's likely the cost reduction of going with SMR has diminished.

And depending on how much you're writing at once, you'll just lose that performance anyway.
 

bit_user

Titan
Ambassador
Actually, Opti-NAND is a form of Embedded Flash Drive that stores only the Meta-Data of the HDD.
It's reserved exclusively for that purpose, ergo freeing some capacity on the HDD and improving performance at the same time.
That's worded like a correction, but it's not. It doesn't contradict anything I said.

I know what Opti-NAND is, and I worry that it might fail before the rest of the drive. Without it, the drive is unreadable, making data-recovery impossible. It's a completely different topic, though.

That's where the HDD Controller comes in and automatically does RAID 0 internally to give us the best possible Sequential Performance.
No, I'm pretty sure they made them separate logical drives for good reasons. If you stripe across the actuators, then your IOPS don't really go up.

Also, it turns a failure in one half of the drive into a failure of the entire thing. The way these are meant to be used is in large pools of storage, where you replicate data across different physical and logical drives. Then, if one of the heads crashes, you only have to restore half a drive's worth of data.

Anyway, it's not a consumer technology. It might never trickle down to consumer HDDs.
 

bit_user

Titan
Ambassador
I would argue that if you need an HDD for write performance, an SMR one is not ideal. And depending on how much flash cache you need to add to make this worthwhile, it's likely the cost reduction of going with SMR has diminished.
SMR is fine for coherent writes. Where it falls apart is on smaller random writes. That's where a giant write buffer can really help. However, if you're just using it for backups, then it probably won't make a big difference.

As for the cost reduction, consider that you only need one NAND chip. If it can be driven directly by the HDD controller, then it might not even add much hardware complexity. Ideally, you could even reduce the amount of DRAM the drive has.

So, we're talking about a fixed overhead to the drive's electronics, in terms of cost. By contrast, the value of SMR is that it scales the amount of data you can fit on a platter. However dense your platter is, SMR will increase it by (IIRC) 30%. So, I don't see a NAND-based buffer destroying the value-proposition of SMR.

Plus, if it's an Opti-NAND drive (and I'd only recommend those for cloud/enterprise scenarios with robust redundancy schemes), then you've already got some NAND there, so why not also use it for write-buffering?
 

Kamen Rider Blade

Distinguished
Dec 2, 2013
1,280
812
20,060
That's worded like a correction, but it's not. It doesn't contradict anything I said.

I know what Opti-NAND is, and I worry that it might fail before the rest of the drive. Without it, the drive is unreadable, making data-recovery impossible. It's a completely different topic, though.
Hopefully they are using SLC or MLC mode only to make the data last longer.

No, I'm pretty sure they made them separate logical drives for good reasons. If you stripe across the actuators, then your IOPS don't really go up.

Also, it turns a failure in one half of the drive into a failure of the entire thing. The way these are meant to be used is in large pools of storage, where you replicate data across different physical and logical drives. Then, if one of the heads crashes, you only have to restore half a drive's worth of data.

Anyway, it's not a consumer technology. It might never trickle down to consumer HDDs.
The "Consumer Version" is the one I want where RAID 0 is enabled by default.

I care more about the Sequential Performance boost via Multi-Actuator.

The "Enterprise Version" should probably be left to Sys Admin Config by default.

Plus, if it's an Opti-NAND drive (and I'd only recommend those for cloud/enterprise scenarios with robust redundancy schemes), then you've already got some NAND there, so why not also use it for write-buffering?
It's only used for "Write Buffering" for emergency power loss situations.

According to the "Storage Review" article about Opti-NAND:
WD’s figures are that OptiNAND drives can secure more than 100MB of write cache data in the event of an unplanned power loss, a 50X improvement over standard drives that can flush about 2MB.
 

bit_user

Titan
Ambassador
It's only used for "Write Buffering" for emergency power loss situations.

According to the "Storage Review" article about Opti-NAND:
I was talking hypothetically, but it doesn't surprise me to hear that. They don't want to burn out their Opti-NAND chips by using them as a normal write buffer. In this case, it sounds like they just dump the contents of the DRAM buffer to it. Fair enough.
 
