News: AMD Engineers Show Off 'Infinitely' Stackable AM5 Chipset Cards

With the popularity of PCIe x16 Riser cards that allow a user to move the Video Card to a different location, why don't we save the end user some $$$ by just moving the PCIe x16 Slot to the bottom of the MoBo, regardless of MoBo Form Factor?

This way the HeatSink & Fans will hang off the MoBo area w/o blocking any existing MoBo PCIe Expansion Card Slots and you can start routing the cooling out of the case through the sides and back.

It's a Win/Win situation for everybody.

All you need is a slightly larger form factor case to handle your existing MoBo Form Factor.

With the advent of 2-5 slot Video Cards, I think it's a good way to get back the Expansion Card slots that are traditionally blocked by your Video Card due to its THICC-by-design nature.
 
With the popularity of PCIe x16 Riser cards that allow a user to move the Video Card to a different location, why don't we save the end user some $$$ by just moving the PCIe x16 Slot to the bottom of the MoBo, regardless of MoBo Form Factor?

This way the HeatSink & Fans will hang off the MoBo area w/o blocking any existing MoBo PCIe Expansion Card Slots and you can start routing the cooling out of the case through the sides and back.

It's a Win/Win situation for everybody.

All you need is a slightly larger form factor case to handle your existing MoBo Form Factor.

With the advent of 2-5 slot Video Cards, I think it's a good way to get back the Expansion Card slots that are traditionally blocked by your Video Card due to its THICC-by-design nature.
Same as with riser cards, longer distance from CPU = more connectivity issues and more things that the maker has to do to make the connection stable.
It's a big added cost.
I mean, look at the card in the article: that's all the stuff needed to make it happen. Granted, that is for a bunch of IO, but even for a GPU alone you would need quite a bit of it.
 
Are these just to test functionality? Seems like the bus speeds wouldn't be the same so performance testing would be out of the question, right?

Correct me if I'm wrong, nerds!
Depends on what's under that heat sink... if there is a tiny CPU there that can handle the pressure, you would get full speed; probably not for everything connected to it at the same time, but one card at a time should be relatively easy.
 
With the popularity of PCIe x16 Riser cards that allow a user to move the Video Card to a different location, why don't we save the end user some $$$ by just moving the PCIe x16 Slot to the bottom of the MoBo, regardless of MoBo Form Factor?

This way the HeatSink & Fans will hang off the MoBo area w/o blocking any existing MoBo PCIe Expansion Card Slots and you can start routing the cooling out of the case through the sides and back.

It's a Win/Win situation for everybody.

All you need is a slightly larger form factor case to handle your existing MoBo Form Factor.

With the advent of 2-5 slot Video Cards, I think it's a good way to get back the Expansion Card slots that are traditionally blocked by your Video Card due to its THICC-by-design nature.

Honestly, I am baffled how the motherboard design has barely changed in the last, what, 25 - 30 years?? Of course, it changed (barely) in order to make use of advancements in PCI speeds or to add an SSD, for example, but one can't really call it a 're-design' in any way! But everything else, from the GPU / CPU / RAM / PSU on down, is still the same old tired design.

And especially now, when GPUs expect to take over 3-5 slots, or however many they wish, and they (and the CPU) need more cooling at the expense of other mobo real estate.

There are so many ways those engineers (busy churning out billions of the same old mobo design) could have made major changes to enhance and streamline the functionality, instead of looking at the same old tired layout, calling it a day, and leaving the end user to deal with the obsolete insanity!
 
Same as with riser cards, longer distance from CPU = more connectivity issues and more things that the maker has to do to make the connection stable.
It's a big added cost.
I mean, look at the card in the article: that's all the stuff needed to make it happen. Granted, that is for a bunch of IO, but even for a GPU alone you would need quite a bit of it.
I trust the MoBo Maker to get the connectivity implemented correctly & reliably over the PCIe x16 Riser manufacturer.

If somebody is going to make it work correctly & reliably, it's the MoBo makers; they have the most incentive to do it correctly the first time.

I've seen too many people complain about a "Dodgy" PCIe x16 Riser when plugging directly into the MoBo's PCIe x16 slot would've solved the issue.
 
Honestly, I am baffled how the motherboard design has barely changed in the last, what, 25 - 30 years?? Of course, it changed (barely) in order to make use of advancements in PCI speeds or to add an SSD, for example, but one can't really call it a 're-design' in any way! But everything else, from the GPU / CPU / RAM / PSU on down, is still the same old tired design.
That's why it's called the "ATX Standard".

Common Design Standards allow interchangeable parts from the past to the future.

It's a HUGE benefit to the PC ecosystem.

Proprietary Crap = Middle Finger and should be "OUTLAWed" from being manufactured.
And especially now, when GPUs expect to take over 3-5 slots, or however many they wish, and they (and the CPU) need more cooling at the expense of other mobo real estate.
But the GPU can be easily moved to the "Bottom Slot".
It's the only device that seems to have an ever-increasing Thermal & Power Consumption rate over time, and it takes up ever more slots.

Since 5x Slots have become a somewhat common design amongst the High End Video Cards, we might as well make a slight change to the Placement of the PCIe x16 slot to accommodate it and improve thermals for our PC design.

There are so many ways those engineers (busy churning out billions of the same old mobo design) could have made major changes to enhance and streamline the functionality, instead of looking at the same old tired layout, calling it a day, and leaving the end user to deal with the obsolete insanity!
Those engineers' time is being wasted on superficial crap like RGB and Heat Sinks stupidly designed to look pretty.

Things that aren't really necessary when good ole Basic HeatSinks have worked for ages.

We don't need bullets embedded in our Heat Sinks or some fancy art on them.

We need Tried & True reliability, consistency, & flexibility in connections.
 
Honestly, I am baffled how the motherboard design has barely changed in the last, what, 25 - 30 years?? Of course, it changed (barely) in order to make use of advancements in PCI speeds or to add an SSD, for example, but one can't really call it a 're-design' in any way! But everything else, from the GPU / CPU / RAM / PSU on down, is still the same old tired design.

And especially now, when GPUs expect to take over 3-5 slots, or however many they wish, and they (and the CPU) need more cooling at the expense of other mobo real estate.

There are so many ways those engineers (busy churning out billions of the same old mobo design) could have made major changes to enhance and streamline the functionality, instead of looking at the same old tired layout, calling it a day, and leaving the end user to deal with the obsolete insanity!
Desktop motherboards have stayed the way they are because it's the most efficient way to do things.

If you want a new-design mobo, get a new-design PC...
This is the "mobo" of the Intel NUC 9, which is a whole PC on a PCIe slot card.
[Image: Intel NUC 9 backplane board]
 
I trust the MoBo Maker to get the connectivity implemented correctly & reliably over the PCIe x16 Riser manufacturer.

If somebody is going to make it work correctly & reliably, it's the MoBo makers; they have the most incentive to do it correctly the first time.

I've seen too many people complain about a "Dodgy" PCIe x16 Riser when plugging directly into the MoBo's PCIe x16 slot would've solved the issue.
Yes, I'm just saying that it would be a pretty big cost, not that they wouldn't do it well.
 
Honestly, I am baffled how the motherboard design has barely changed in the last, what, 25 - 30 years?? Of course, it changed (barely) in order to make use of advancements in PCI speeds or to add an SSD, for example, but one can't really call it a 're-design' in any way! But everything else, from the GPU / CPU / RAM / PSU on down, is still the same old tired design.

And especially now, when GPUs expect to take over 3-5 slots, or however many they wish, and they (and the CPU) need more cooling at the expense of other mobo real estate.

There are so many ways those engineers (busy churning out billions of the same old mobo design) could have made major changes to enhance and streamline the functionality, instead of looking at the same old tired layout, calling it a day, and leaving the end user to deal with the obsolete insanity!
And many engineers over those past 25-30 years have developed different motherboard form factors.
  • BTX was designed to address layout issues, namely by putting the CPU closer to the front fans, making the RAM slots parallel to the airflow (not that this hasn't been done on ATX, see DFI's LANPARTY motherboard), and moving the chipsets (as there were two at the time) closer to each other.
  • DTX wanted to address the shortcoming of Mini-ITX only allowing one PCIe slot. I suppose the idea was you'd have an x1 slot, then an x16 slot.
  • There was the Ultra ATX board that allowed for 10 expansion slots, in anticipation of dual-slot video cards becoming the norm, but this required a Full ATX case.
  • There were a lot of others around, especially during the '90s.
But ultimately, here's the problem you face:
[Image: the xkcd "Standards" comic]


The only way we can overcome this is if the new standard answers at least these two questions: does it provide substantially better "performance" or "features", and is it cheap enough to manufacture? Usually you can only hit one of them, if at all.
 
Yes, I'm just saying that it would be a pretty big cost, not that they wouldn't do it well.
Retail Quality PCIe x16 Risers can cost up to $80, so while it might be a big cost from a "MoBo Implementation" standpoint, it's still cheaper than making a separate Riser to sell, especially as PCIe 4.0 / 5.0 / Beyond becomes standard.

And the irony is that Hardware Canucks recently tested what orientation is the best for PC CPU/GPU temps.
[Image: Hardware Canucks CPU/GPU orientation test results]

 
Desktop motherboards have stayed the way they are because it's the most efficient way to do things.
I concur; that's why I like the current ATX & ATX-related family of MoBos.

If you want a new-design mobo, get a new-design PC...
This is the "mobo" of the Intel NUC 9, which is a whole PC on a PCIe slot card.
[Image: Intel NUC 9 backplane board]
It was a fundamentally good design; the dumb part was that Intel used the standard orientation for a PCIe card, so the MoBo Add-On Board's Air Intake is literally blocked by the Video Card.

If they went with a slightly "Non-Standard" orientation and had a special Reversed Position PCIe x16 Slot like this:
[Image: reversed-position PCIe x16 slot]
They could've developed the NUC Compute Card to have proper air intake from the side of the Mesh Case.

But they didn't; they went with the standard orientation and blocked their own air intake, so it doesn't cool as well as it could've.
 
Which is essentially like the old S-100 bus, just a backplane that the cards that do the actual work plug into. M.2 is another PCIe x4 in a different form factor.
The M.2 "Physical Connector" was designed specifically for a LapTop's Physically Size & Volume constrained environment.

In the Home Desktop and Enterprise / Server environments, it's largely a pointless form-factor that should NEVER be used there.

NOTE: I'm perfectly fine with the PCIe protocol & NVMe / SAS / SATA Protocols. Each one has its place.
It's the M.2 "Physical Connector" that I don't care for in a Home Desktop / Server environment.

We need to re-use the now-obsolete 1.8" HDD Form-Factor and repurpose it exclusively for SSDs.

The M.2 "Physical Connector" is too fragile and only designed for a limited insertion life-span:
Connector Mating Cycle (Insert & Removal) designed Life Span:
- M.2 = 50 Cycles
- SATA = 10,000 Cycles

We have U.2 / U.3 connectors that we can start taking seriously for nearly every Home DeskTop / Server Storage drive.
[Images: U.2 / U.3 connectors]

Imagine a world where EVERYBODY has MoBos filled with U.2 / U.3 connectors, and we (The End Users) can decide what Storage drive or device is on the other end of the U.2 / U.3 cable.
If you want a BackPlane to Multiple Drives, you can have it; just plug in the appropriate cable.
If you want a PCIe x4 SSD Drive, you can have it; just plug in the appropriate cable.
If you want 2x PCIe x2 SSD Drives, you can have it; plug in the appropriate cable.
If you want 4x PCIe x1 Hybrid HDD/SSDs, you can have it; plug in the appropriate cable.
If you want SATA/SAS/NVMe over PCIe / Pure PCIe / CXL, you can have it.
Just mount the device in a standard 3½" Drive Bay or 5¼" Drive Bay and plug it in.

It's time we go back to our PC DIY Building Roots and not abandon everything we've learned over time.

Cooling is important, but so are the standardized connectors & Drive Bays that we mount our other devices to.

That's a beautiful DIY PC Building world that we can move towards if we get all the manufacturers on board along with the SSD manufacturers.

We need to move into a new era where all SSDs use the 1.8" SSD Form-Factor.
[Image: a 1.8" SSD]
I did the geometry and there is enough room to fit 16x NAND Flash Packages around a controller with a single DRAM package on the opposite side of the controller.
[Image: 1.8" SSD internal layout sketch]

1.8"SSD Drive Dimensions = 78.5 mm × 54.0 mm × 5.0 mm (Physical Casing)

I did some rough estimation, and the internal usable PCB area would be ≈ 3326.00 mm² after you factor out the SATA/SAS/PCIe connector & Screw Holes / Case Screws.

That's still better than an M.2 30110 in Surface Area, and you have an "All Aluminium Case" to act as a large Heat-Sink.
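
For anyone who wants to sanity-check that estimate, here's a back-of-the-envelope sketch in Python. The connector strip and screw-boss deductions below are my own illustrative guesses (not from any spec), picked only to show how you get from the ~4,239 mm² outer footprint down to something near the ≈3,326 mm² figure above:

Code:
# Rough usable-PCB-area estimate for a 1.8" drive casing (78.5 mm x 54.0 mm footprint).
LENGTH_MM, WIDTH_MM = 78.5, 54.0
outer_area = LENGTH_MM * WIDTH_MM                # ~4239 mm^2 outer footprint

# Hypothetical deductions (illustrative guesses, not from any spec):
connector_zone = WIDTH_MM * 8.0                  # strip reserved for the SATA/SAS/PCIe connector
screw_bosses   = 4 * (11.0 * 11.0)               # four corner mounting bosses
usable_area    = outer_area - connector_zone - screw_bosses

m2_30110_area  = 30.0 * 110.0                    # a 30 mm x 110 mm M.2 card, for comparison
print(f"outer footprint : {outer_area:7.1f} mm^2")
print(f"usable PCB area : {usable_area:7.1f} mm^2 (estimate quoted above: ~3326 mm^2)")
print(f"M.2 30110 card  : {m2_30110_area:7.1f} mm^2")

With those guesses you land around 3,323 mm², which is in the same ballpark as the figure above and still ahead of a 30 mm × 110 mm M.2 card.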

Also, the standard 1.8" SSD form factor is so small that you can fit 5x 1.8" SSDs, with Plastic Connector Protector Covers & Plastic Protector Cases, inside a standard Bicycle Poker Playing Card Deck Box.

How's that for "Useful Small Size" without getting so "Too Small" and fiddly that you'd easily lose the damn thing?
 
With the popularity of PCIe x16 Riser cards that allow a user to move the Video Card to a different location, why don't we save the end user some $$$ by just moving the PCIe x16 Slot to the bottom of the MoBo, regardless of MoBo Form Factor?
The chipset uses an x4 PCIe 4.0 connection, whereas GPUs use x16 PCIe 5.0.

What you're proposing would be a lot more expensive and add non-negligible power overhead, as you want to route all 16 PCIe 5.0 lanes through not just a separate card, but also the chipset.
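
To put rough numbers on that gap, here's a quick Python sketch using nominal per-lane PCIe transfer rates and 128b/130b line encoding; it ignores packet/protocol overhead, so treat the results as ballpark figures:

Code:
# Rough PCIe throughput comparison: chipset uplink (x4 Gen4) vs. a GPU link (x16 Gen5).
GT_PER_LANE = {3: 8.0, 4: 16.0, 5: 32.0}  # nominal transfer rates in GT/s
ENCODING = 128 / 130                      # 128b/130b line encoding (Gen3 and later)

def link_gbps(gen: int, lanes: int) -> float:
    """Approximate one-direction bandwidth of a PCIe link, in GB/s."""
    return GT_PER_LANE[gen] * ENCODING / 8 * lanes

chipset_uplink = link_gbps(gen=4, lanes=4)   # ~7.9 GB/s
gpu_link       = link_gbps(gen=5, lanes=16)  # ~63 GB/s
print(f"x4 Gen4 chipset uplink: {chipset_uplink:5.1f} GB/s")
print(f"x16 Gen5 GPU link     : {gpu_link:5.1f} GB/s ({gpu_link / chipset_uplink:.0f}x wider)")

So a GPU hung off the chipset would funnel through roughly an eighth of the bandwidth of a CPU-attached x16 Gen5 slot, before even counting the extra hop.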

But the GPU can be easily moved to the "Bottom Slot".
Oh, but that costs money. Especially with ever-increasing signal speeds.
 
Are these just to test functionality? Seems like the bus speeds wouldn't be the same so performance testing would be out of the question, right?

Correct me if I'm wrong, nerds!
I think what's happening is that each chipset is just operating as a PCIe bridge. In that case, the end-to-end bandwidth would be the same (best case), but you might notice a small hit on latency.
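
One way to picture that is to model each stacked chipset as a transparent PCIe switch: the end-to-end bandwidth is set by the narrowest link in the path (which doesn't change as you stack), while each hop adds a bit of forwarding latency. The ~150 ns per hop and ~300 ns baseline below are placeholder assumptions for illustration, not measured figures:

Code:
# Model a daisy chain of chipsets acting as PCIe bridges between the CPU and a device.
from dataclasses import dataclass

@dataclass
class Link:
    lanes: int
    gbps_per_lane: float  # usable GB/s per lane (~1.97 for Gen4)

def path_bandwidth(links: list[Link]) -> float:
    # Throughput is limited by the narrowest link along the path.
    return min(l.lanes * l.gbps_per_lane for l in links)

def path_latency_ns(n_bridges: int, hop_ns: float = 150.0, base_ns: float = 300.0) -> float:
    # base_ns: assumed cost of a directly attached device; hop_ns: assumed cost per bridge hop.
    return base_ns + n_bridges * hop_ns

gen4_per_lane = 1.97
for hops in (1, 3):
    links = [Link(lanes=4, gbps_per_lane=gen4_per_lane)] * hops
    print(f"{hops} chipset hop(s): {path_bandwidth(links):.1f} GB/s, "
          f"~{path_latency_ns(hops):.0f} ns access latency")

Same best-case bandwidth either way; stacking only changes how many forwarding hops you pay for.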
 
Honestly, I am baffled how the motherboard design has barely changed in the last, what, 25 - 30 years??
ATX specifications were first released in 1995. BTX came along in 2004, but the market rejected it.

Since then, we've seen the rise of micro-ATX, mini-ITX, and mini-STX. So, I wouldn't say the design has completely stagnated. Granted, mini-ITX technically goes back to 2001.

With mini-PCs becoming increasingly common, I really hoped to see more activity around mini-STX. Unfortunately, the market for mini-STX cases hasn't really seemed to materialize.

There are so many ways those engineers (busy churning out billions of the same old mobo design) could have made major changes to enhance and streamline the functionality, instead of looking at the same old tired layout, calling it a day, and leaving the end user to deal with the obsolete insanity!
Look at Intel's NUC Elements, then. Or see what @TerryLaze posted: in their NUC Extreme series, they even have the CPU sitting on its own card, which plugs into a baseboard.

This is the "mobo" of intel nuc 9 which is a whole PC on a pci slot card.
Backplane_575px.jpg
So, am I correct in guessing there must be a PCIe switch on the bottom side of that baseboard? I don't see how else you can do that!

Note that, being Gen 9 (Coffee Lake), this will have only PCIe 3.0. This approach should get more expensive & less efficient as PCIe speeds increase.
 
The chipset uses an x4 PCIe 4.0 connection, whereas GPUs use x16 PCIe 5.0.
I know that, but some things must be done to gain back those PCIe Expansion Slots that would get blocked by the massive HeatSink / Fans of modern Video Cards that can span "UpTo 5× Expansion Card Slots"

What you're proposing would be a lot more expensive and add non-negligible power overhead, as you want to route all 16 PCIe 5.0 lanes through not just a separate card, but also the chipset.
As opposed to a 1-2 foot PCIe 5.0 x16 Riser Card that costs up to $80?

I'd rather have the MoBo manufacturers implement it naturally into the PCB of the MoBo.

Oh, but that costs money. Especially with ever-increasing signal speeds.
But compared to the "UpTo $80" PCIe 5.0 x16 Riser Card + MoBo, I'd rather give that extra money to the MoBo Maker to do it properly the first time instead of having to waste my time guessing if my PCIe x16 Riser card is causing me issues.
 
Since then, we've seen the rise of micro-ATX, mini-ITX, and mini-STX. So, I wouldn't say the design has completely stagnated. Granted, mini-ITX technically goes back to 2001.
Don't forget mini-DTX
[Image: motherboard form factor size comparison]
The current MoBo Form Factors that matter:
- Mini-ITX
- Mini-DTX
- FlexATX <- There needs to be a revival of this format.
- microATX AKA 'µATX' AKA 'uATX'
- ATX (Advanced Technologies eXtended)
- SSI-CEB (Compact Electronics Bay)
- SSI-EEB (Enterprise Electronics Bay)
- SSI-MEB (Midrange Electronics Bay)
- SSI-TEB (Thin Electronics Bay)

The rest of the MoBo Form Factors listed are more for reference on relative size.
 
With mini-PCs becoming increasingly common, I really hoped to see more activity around mini-STX. Unfortunately, the market for mini-STX cases hasn't really seemed to materialize.
Nano-ITX would be perfect for a "Standard LapTop MoBo" Form Factor.

Pico-ITX & Mobile-ITX would be good for "Ultra SFF" applications.


There's more than enough room in a standard MoBo Chassis to fit both, with room for cooling.
 
We need to move into a new era where all SSDs use the 1.8" SSD Form-Factor.
In a world where NVMe SSDs are getting ever hotter and in need of more cooling to avoid throttling (or worse), I think 2.5" provides useful benefits in terms of additional surface area. It also opens up the design space for cheaper drives to use a larger number of lower-density NAND chips.
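
To put rough numbers on the surface-area point, here's a quick sketch comparing one-face footprints. The 2.5" dimensions are the usual nominal drive footprint quoted from memory, and the 1.8" figure is the one given earlier in this thread, so treat these as approximations:

Code:
# Compare top-face footprints of common drive form factors (mm^2, one face only).
form_factors = {
    '2.5" drive': (100.0, 69.85),  # approximate nominal 2.5" drive length x width
    '1.8" drive': (78.5, 54.0),    # dimensions quoted earlier in this thread
    "M.2 2280":   (80.0, 22.0),    # 80 mm x 22 mm gumstick
}
for name, (length_mm, width_mm) in form_factors.items():
    print(f"{name:11s}: {length_mm * width_mm:7.0f} mm^2")

That works out to roughly 65% more top-face area for a 2.5" drive than a 1.8" one, before counting the thicker case's extra volume for heatsinking.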

As opposed to a 1-2 foot PCIe 5.0 x16 Riser Card that costs up to $80?

I'd rather have the MoBo manufacturers implement it naturally into the PCB of the MoBo.
Riser cables (which I think you meant to say) are easier to implement and provide better signal integrity than riser cards.
 
Don't forget mini-DTX
[Image: motherboard form factor size comparison]
The current MoBo Form Factors that matter:
- Mini-ITX
- Mini-DTX
- FlexATX <- There needs to be a revival of this format.
- microATX AKA 'µATX' AKA 'uATX'
- ATX (Advanced Technologies eXtended)
- SSI-CEB (Compact Electronics Bay)
- SSI-EEB (Enterprise Electronics Bay)
- SSI-MEB (Midrange Electronics Bay)
- SSI-TEB (Thin Electronics Bay)
mini-STX is probably a lot more common than several of those. Also, there are still EATX boards on the market.

There's also 5x5 and I think 4x4.