News AMD Engineers Show Off 'Infinitely' Stackable AM5 Chipset Cards

The whole goal of the "Multi-Actuator" improvements is to gradually add more independently functioning actuators until every actuator can operate on its own, allowing massive parallelization of reads/writes to saturate the interface bandwidth.

The first step is "Dual Actuators" on one stack.

The next will be 3x Actuators,

And you know the rest.

Eventually you might even see two actuator stacks in an HDD, one on the left and one on the right, with a slightly bent actuator arm on each side that allows reading/writing each platter surface independently.

Imagine how cool that would be for parallelizing throughput.
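To make the parallelization angle concrete: today's dual-actuator drives only deliver the extra throughput if the host actually keeps both actuators busy. Here's a minimal sketch, assuming a dual-actuator SATA drive where the lower and upper halves of the LBA range map to different actuators (true of some shipping models, but check the vendor's documentation for your drive); the device path is a placeholder and raw reads need root.

```python
# Sketch: keep both actuators of a dual-actuator drive busy by reading from
# both halves of the LBA space concurrently. /dev/sdX is hypothetical, and the
# half-and-half LBA-to-actuator split is an assumption about the drive.
import os
import threading

DEVICE = "/dev/sdX"          # placeholder block device
CHUNK = 1024 * 1024          # 1 MiB per read
READ_BYTES = 4 * 1024**3     # read 4 GiB from each half

def device_size(path):
    with open(path, "rb", buffering=0) as f:
        return f.seek(0, os.SEEK_END)   # size in bytes

def read_range(offset, length):
    """Sequentially read `length` bytes starting at byte `offset`."""
    with open(DEVICE, "rb", buffering=0) as f:
        f.seek(offset)
        remaining = length
        while remaining > 0:
            data = f.read(min(CHUNK, remaining))
            if not data:
                break
            remaining -= len(data)

size = device_size(DEVICE)
threads = [
    threading.Thread(target=read_range, args=(0, READ_BYTES)),          # lower half -> actuator 0
    threading.Thread(target=read_range, args=(size // 2, READ_BYTES)),  # upper half -> actuator 1
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

If the two ranges really do sit behind different actuators, the combined throughput should approach twice the single-actuator rate; if they land on the same actuator, the heads just thrash between them.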
 
The whole goal of the "Multi-Actuator" improvements is to gradually add more independently functioning actuators until every actuator can operate on its own, allowing massive parallelization of reads/writes to saturate the interface bandwidth.
Source?

Also, this flies in the face of 2.5" HDDs, as each actuator you add will likely need more room.

Eventually you might even see two actuator stacks in an HDD, one on the left and one on the right, with a slightly bent actuator arm on each side that allows reading/writing each platter surface independently.
That's not a new idea and nobody has been able to successfully bring such a drive to market. If/when they do, they won't be cheap.
 
Logical analysis of how HDDs are going to evolve the multi-actuator tech to gain sequential performance

Also, this flies in the face of 2.5" HDDs, as each actuator you add will likely need more room.
There have been smaller actuator servos from older and smaller HDDs before that could be used to fit in that tiny footprint.

That's not a new idea and nobody has been able to successfully bring such a drive to market. If/when they do, they won't be cheap.
Obviously, the dual Actuator stack will be the last thing to implement, but it'll be a nice one.
 
Logical analysis of how HDDs are going to evolve the multi-actuator tech to gain sequential performance
Okay, so just your imagination.

There have been smaller actuator servos from older and smaller HDDs before that could be used to fit in that tiny footprint.
And those are going to work with modern track densities, much less future ones? I seriously doubt it.
 
AMD engineers show off really unique stackable chipset cards that are used to test the functionality of AMD's AM5 motherboards.

AMD Engineers Show Off 'Infinitely' Stackable AM5 Chipset Cards : Read more
Honestly, this is great and all. AMD is high on chiplets. They tout their potential. What they mean is how many cores they can plug into the silicon, scaling as needed. What they don't talk about is the fact that they get their butt handed to them by a better, more purpose-driven design that's not as scalable.

The fact that the AMD 7800X3D can in some instances outpace the highest-end X3D part should be more than concerning.

They are on a bad path. It's a perfectly fine path but it's the wrong path for performance. They need to course correct.
 
Again, that's beside the point. The key thing is that I wanted Raspberry Pi-like machines to have a standard form factor, so I'd have a broader range of cases to choose from.
At the end of the day, you only have the Raspberry Pi Foundation to blame for not picking an existing MoBo form factor.

The Different Versions of the Raspberry Pi

Is there any "industry standard" for single-board computers?

They just do whatever they want and produce whatever form factor they want.

Standards be damned.

As for what the standard Tinker Board form factor should be?

I think Pico-ITX would be perfect.

At 72×100 mm it's the perfect size, and in the past many vendors have made daughterboards for Pico-ITX to expand I/O.

Looks like Pico-ITX might have a market already:
 
Then they'll make a new Servo Actuator design to fit into that limited volume.
You don't know if it's even possible, but I guess it doesn't matter because there's no market for 2.5" HDDs, anyhow. The market for them was destroyed by SSDs, and there's nothing you can dream up to make them competitive, again.

At the end of the day, you only have the Raspberry Pi Foundation to blame for not picking an existing MoBo form factor.
I don't blame them, because I think they made reasonable choices considering their goals. My goals are rather different, and that's the reason for my dissatisfaction.

I just wish someone would be releasing boards with a standard form factor. If that's pico-ITX, so be it. ...but it also needs to be a form factor with some half-decent mini-PC cases available. Chicken and egg problem, perhaps.
 
You don't know if it's even possible, but I guess it doesn't matter because there's no market for 2.5" HDDs, anyhow. The market for them was destroyed by SSDs, and there's nothing you can dream up to make them competitive, again.
Hmmm, so that's a challenge =D

I just wish someone would be releasing boards with a standard form factor. If that's pico-ITX, so be it. ...but it also needs to be a form factor with some half-decent mini-PC cases available. Chicken and egg problem, perhaps.
If you google "Pico-ITX case", you'll find a lot of existing cases for sale.

Whether or not they have the headers you want, that's a different issue.

Just look on the "Images" tab for pics of the cases; there's quite a diverse set of options for you to choose from.
 
So, I just checked to see if the Orange Pi 5 Plus is a standard form factor. Sadly, it's 3 mm deeper than pico-ITX. The width is the same, but the screw pattern is significantly different.

The main reason I have yet to buy one is the lack of decent case options. Then again, I'm in no hurry.
Raspberry Pi & all the other "Pi clones" seem to just do their own thing.

The folks who follow Pico-ITX seem to actually adhere to standards, while the Pi clones just march to the beat of their own drum and do whatever they want.
 
The folks who follow Pico-ITX seem to actually adhere to standards, while the Pi clones just march to the beat of their own drum and do whatever they want.
The weird thing is that Orange Pi doesn't even seem to make a case for it. I could understand if they made a nonstandard form factor because they wanted a side-business in cases for it. Otherwise, it would seem to be in their interest to follow some kind of standard.
 
The weird thing is that Orange Pi doesn't even seem to make a case for it. I could understand if they made a nonstandard form factor because they wanted a side-business in cases for it. Otherwise, it would seem to be in their interest to follow some kind of standard.
That's the irony of the "Pi clones": nobody follows any standard, not even each other's.

I guess they expect you to 3D-Print your own case or do DIY case manufacturing for it.
 
It doesn't need to be Optane. If we're talking about write-buffering to HDDs, pseudo-SLC NAND is cheaper and still more than fast enough.
Imagine having a 16TB HDD with 32GB NAND cache. You get maybe 10 full drive writes before the NAND is worn out.

For video recording applications, whatever cache you put on there will overflow if the platter transfer speed cannot sustain the writes and there is no caching your way out of that. For backup and archival purposes, you are likely writing 100+GB at a time, in which case a NAND cache isn't of much help either apart from screwing up how your software reports that a job is "completed" before it actually is, which is potentially dangerous as you may shut the drive down while it is still working on it. For an active use drive, a tiny NAND's endurance sounds like a plain bad idea.

Hybrid HDDs mostly only make sense when you can only have one (more) drive and you need it to be a jack-of-all-trades.
 
Imagine having a 16TB HDD with 32GB NAND cache. You get maybe 10 full drive writes before the NAND is worn out.

For video recording applications, whatever cache you put on there will overflow if the platter transfer speed cannot sustain the writes and there is no caching your way out of that. For backup and archival purposes, you are likely writing 100+GB at a time, in which case a NAND cache isn't of much help either apart from screwing up how your software reports that a job is "completed" before it actually is, which is potentially dangerous as you may shut the drive down while it is still working on it. For an active use drive, a tiny NAND's endurance sounds like a plain bad idea.

Hybrid HDDs mostly only make sense when you can only have one (more) drive and you need it to be a jack-of-all-trades.
Long-term endurance is one of the reasons I went with Optane over NAND flash.

That's on top of the low latency and high bandwidth, along with the potential to mount it to the system like a RAM disk through a "CXL-like" connection to the host over a PCIe link on a U.2/U.3 physical connector adapted to the correct plugs.
 
Long-term endurance is one of the reasons I went with Optane over NAND flash.

That's on top of the low latency and high bandwidth, along with the potential to mount it to the system like a RAM disk through a "CXL-like" connection to the host over a PCIe link on a U.2/U.3 physical connector adapted to the correct plugs.
Nobody buys hybrid HDDs to break world records, putting their tiny cache on PCIe x4 would be a waste of PCIe lanes. If you do stuff where storage latency faster than SATA3 SSDs is relevant, use an NVMe or NVDIMM SSD instead of a hybrid contraption.

The concept of hybrid HDDs has been abandoned because it was superseded by rapidly falling SSD prices. SSDs today are cheaper than same-capacity HDDs were back then.
 
Nobody buys hybrid HDDs to break world records, putting their tiny cache on PCIe x4 would be a waste of PCIe lanes. If you do stuff where storage latency faster than SATA3 SSDs is relevant, use an NVMe or NVDIMM SSD instead of a hybrid contraption.
I never stated that I would use PCIe x4 lanes for a hybrid HDD.

I'd only be using a PCIe <Insert Latest Version> x1 lane config.

I wouldn't waste x4 lanes on it.


The concept of hybrid HDDs has been abandoned because it was superseded by rapidly falling SSD prices. SSDs today are cheaper than same-capacity HDDs were back then.
And HDDs are still cheaper than SSDs in price per GB or price per TB.

And there's the longevity of the hardware vs. the shorter P/E cycle life of SSDs that use QLC.

The P/E cycle durability will only get worse once they go to PLC and above to increase capacity.
We know for a fact that they're working on PLC and above.
That's why they need an SLC cache.

It's the tradeoff of:
SSDs = faster speeds + lower endurance + higher cost per capacity
HDDs = lower speeds + higher endurance + lower cost per capacity
 
Anyone else skeptical of the results here, particularly the read results? I'm guessing that most storage benchmarks don't accurately measure read performance for caching configurations like this. They write the data and then immediately read it back, likely while it's still in cache, so you're basically just measuring cache performance. You never actually see the read performance of data not in cache.

You could design a benchmark to work around this (copy files that exceed the size of the cache, copy files to the drive then do a bunch of other stuff then try reading them back, etc.), but none of the reviewers seemed to do so for the original Optane memory caching review. Some did for reviews of the Intel H10/H20 (the hybrid Optane + QLC drives), and you can see that for large transfers they performed worse than DRAM-less or regular QLC drives.
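For what it's worth, that kind of cache-defeating test is straightforward to script. Here's a rough sketch, assuming a Linux box with the drive under test mounted at a placeholder path; the 200 GiB figure is just an example and should be sized well past whatever cache the product has, and dropping the page cache via /proc/sys/vm/drop_caches needs root:

```python
# Rough sketch of a cache-defeating read benchmark: write far more data than
# the cache can hold, force it out of the OS page cache, then time the
# read-back. Paths and sizes are placeholders for illustration.
import os
import time

TARGET = "/mnt/test_drive/bench.bin"   # placeholder mount point on the drive under test
TOTAL = 200 * 1024**3                  # 200 GiB -- pick something well beyond the cache size
CHUNK = 8 * 1024 * 1024                # 8 MiB per I/O

def write_big_file():
    buf = os.urandom(CHUNK)            # incompressible data
    with open(TARGET, "wb") as f:
        written = 0
        while written < TOTAL:
            f.write(buf)
            written += CHUNK
        f.flush()
        os.fsync(f.fileno())           # make sure it actually reached the drive

def drop_page_cache():
    os.sync()
    with open("/proc/sys/vm/drop_caches", "w") as f:
        f.write("3")                   # evict page cache, dentries and inodes (Linux, root only)

def timed_read():
    start = time.monotonic()
    nread = 0
    with open(TARGET, "rb") as f:
        while True:
            data = f.read(CHUNK)
            if not data:
                break
            nread += len(data)
    elapsed = time.monotonic() - start
    print(f"Read {nread / 1024**3:.1f} GiB at {nread / elapsed / 1024**2:.0f} MiB/s")

write_big_file()
drop_page_cache()
# Ideally also leave the drive idle (or do unrelated work) for a while so the
# data has a chance to migrate out of the device's own cache tier first.
timed_read()
```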
 
Anyone else skeptical of the results here, particularly the read results? I'm guessing that most storage benchmarks don't accurately measure read performance for caching configurations like this. They write the data and then immediately read it back, likely while it's still in cache, so you're basically just measuring cache performance. You never actually see the read performance of data not in cache.

You could design a benchmark to work around this (copy files that exceed the size of the cache, copy files to the drive then do a bunch of other stuff then try reading them back, etc.), but none of the reviewers seemed to do so for the original Optane memory caching review. Some did for reviews of the Intel H10/H20 (the hybrid Optane + QLC drives), and you can see that for large transfers they performed worse than DRAM-less or regular QLC drives.
What are you insinuating? That a reviewer for Tom's Hardware was designing a test to be favorable to "Intel"?
 
What are you insinuating? That a reviewer for Tom's Hardware was designing a test to be favorable to "Intel"?
Not quite. I'm saying that I believe the standard test tools in use at the time (by TH and others) were incidentally (not deliberately) favorable to cached storage configurations. This would include Intel Optane caching, but also Samsung Rapid mode, even QLC (maybe even TLC) drives that use an SLC cache.

Edit: Although I do find it irksome, and slightly suspicious, that all reviews I could find used a preconfigured test PC setup provided by Intel.
 
Not quite. I'm saying that I believe the standard test tools in use at the time (by TH and others) were incidentally (not deliberately) favorable to cached storage configurations. This would include Intel Optane caching, but also Samsung Rapid mode, even QLC (maybe even TLC) drives that use an SLC cache.
So basically, they didn't push the data throughput past the cache size, so it never showed its "true speeds".

On modern-day SSDs, I've seen 2 TB SSDs with 660 GB of SLC cache.

So it's going to take a while to push past that point.
 
Imagine having a 16TB HDD with 32GB NAND cache. You get maybe 10 full drive writes before the NAND is worn out.
I was expecting they'd use a modern 3D NAND chip in pseudo-MLC mode. So, more like 128 GB. But you're right: I get about 35 drive writes of a 20 TB HDD when I run the numbers.
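For anyone who wants to sanity-check that, here's the back-of-the-envelope math; the P/E cycle figure is an assumption (pseudo-MLC endurance varies a lot by part), so adjust it to taste:

```python
# Back-of-the-envelope drive-writes estimate for a hybrid HDD write cache.
# The endurance figure is an assumption, not a spec for any particular part.
CACHE_GB = 128       # 3D NAND chip run in pseudo-MLC mode
PE_CYCLES = 5_500    # assumed program/erase endurance in that mode
HDD_TB = 20          # capacity of the HDD behind the cache

cache_endurance_tb = CACHE_GB * PE_CYCLES / 1_000   # ~704 TB can pass through the cache
full_drive_writes = cache_endurance_tb / HDD_TB     # ~35 full writes of the HDD
print(f"~{cache_endurance_tb:.0f} TB of cache endurance ≈ {full_drive_writes:.0f} full drive writes")
```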

whatever cache you put on there will overflow if the platter transfer speed cannot sustain the writes and there is no caching your way out of that.
Yes, I'm sure we all know that.

For backup and archival purposes, you are likely writing 100+GB at a time,
I already said the benefits would be limited, in a backup scenario.

The exception to that is if you're not doing backups as a full image, but rather as incremental backups. In that case, you're modifying a lot of files and metadata spread throughout various SMR bands (or whatever they're called) and being able to coalesce those into fewer read-modify-write operations has real value.

in which case a NAND cache isn't of much help either apart from screwing up how your software reports that a job is "completed" before it actually is, which is potentially dangerous as you may shut the drive down while it is still working on it.
No, that's why we're talking about NAND/Optane, rather than DRAM. Unlike DRAM, they won't lose the intermediate copy during power-loss.
 
We know for a fact that they're working on PLC and above.
Yeah, but those are for garbage-tier storage. PLC only gets you 25% more storage per cell than QLC (5 bits vs. 4). So, someone really has to be chasing the most GB/$ to trade off endurance, performance, and reliability for such a small gain.

If you look at performance drives and most datacenter SSDs, they're all still TLC.

It's the tradeoff of:
SSDs = faster speeds + lower endurance + higher cost per capacity
HDDs = lower speeds + higher endurance + lower cost per capacity
HDDs aren't high endurance, in an absolute sense. They're just harder to wear out because they're much lower-bandwidth.
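To put rough numbers on that (purely illustrative figures, assumed rather than measured): even writing nonstop, an HDD's bandwidth caps how much data can pass through it per year, while an NVMe SSD can push an order of magnitude more in the same period.

```python
# Illustration of why low bandwidth makes HDDs hard to "wear out" by sheer
# volume of writes. Both throughput figures are assumptions for the example.
SECONDS_PER_YEAR = 365 * 24 * 3600

hdd_mb_s = 250      # assumed sustained sequential write speed of a big HDD
ssd_mb_s = 3_000    # assumed sustained write speed of a mid-range NVMe SSD

hdd_tb_year = hdd_mb_s * SECONDS_PER_YEAR / 1_000_000   # ~7,900 TB/year at 100% duty cycle
ssd_tb_year = ssd_mb_s * SECONDS_PER_YEAR / 1_000_000   # ~94,600 TB/year at 100% duty cycle

print(f"HDD writing nonstop: ~{hdd_tb_year:,.0f} TB/year")
print(f"SSD writing nonstop: ~{ssd_tb_year:,.0f} TB/year")
```

Real workloads keep neither device writing 24/7, but the gap shows why the same number of bytes takes far longer to push through an HDD.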
 
So basically, they didn't push the data throughput past the cache size, so it never showed its "true speeds".

On modern-day SSDs, I've seen 2 TB SSDs with 660 GB of SLC cache.

So it's going to take a while to push past that point.
Some of the better SSD reviews I've seen look at how drive write-performance changes as the drive fills up. That means they must fill the drive at least once, during the review.

Now, whether they do that and make use of the full drive worth of data before their read benchmarks is another question...
 