With the popularity of PCIe x16 riser cards that let a user move the video card to a different location, why don't we save the end user some $$$ by just moving the PCIe x16 slot to the bottom of the MoBo, regardless of MoBo form factor?
This way the heatsink & fans will hang off the MoBo area w/o blocking any existing PCIe expansion card slots, and you can start routing the cooling out of the case through the sides and back.
It's a Win/Win situation for everybody.
All you need is a slightly larger case to handle your existing MoBo form factor.
With the advent of 2-5 slot video cards, I think it's a good way to get back the expansion card slots that are traditionally blocked by your video card due to its THICC-by-design nature.
Same as with riser cards: a longer distance from the CPU means more connectivity issues and more things the maker has to do to keep the connection stable.
It's a big added cost.
I mean, look at the card in the article; that's all the stuff needed to make it happen. Granted, that is for a bunch of I/O, but even for a GPU alone you would need quite a bit of it.
Depends on what's under that heatsink... if there is a tiny CPU there that can handle the load, you would get full speed. Probably not for everything connected to it at the same time, but one card at a time should be relatively easy.
Honestly, I am baffled at how little motherboard design has changed over the last, what, 25-30 years? Of course, it changed (barely) to make use of advances in PCI speeds or to add an SSD, for example, but one can't really call that a "redesign" in any way! Everything else, from the GPU / CPU / RAM / PSU on down, still sits in the same old tired layout.
And especially now, when GPUs expect to take over 3-5 slots, or however many they wish, and they (and the CPU) need more cooling at the expense of other mobo real estate.
There are so many ways those engineers, busy churning out billions of the same old mobo design, could have made major changes to enhance and streamline the functionality, instead of looking at the same old tired layout, calling it a day, and leaving the end user to deal with the obsolete insanity!
But the GPU can be easily moved to the "Bottom Slot".
It's the only device whose thermal output & power consumption seem to keep increasing over time, and it takes up ever more slots.
Since 5-slot designs have become somewhat common amongst high-end video cards, we might as well make a slight change to the placement of the PCIe x16 slot to accommodate it and improve thermals for our PC design.
Which is essentially like the old S-100 bus, just a backplane that the cards that do the actual work plug into. M.2 is another PCIe x4 in a different form factor.
And many engineers over those past 25-30 years have developed different motherboard form factors.
BTX was designed to address layout issues, namely by putting the CPU closer to the front fans, making the RAM slots parallel to the airflow (not that this hasn't been done on ATX; see DFI's LANPARTY motherboards), and moving the chipsets (as there were two at the time) closer to each other.
DTX wanted to address the shortcoming of Mini-ITX only allowing one PCIe slot. I suppose the idea was you'd have an x1 slot, then an x16 slot.
There was the Ultra ATX board that allowed for 10 expansion slots, in anticipation of dual-slot video cards becoming the norm, but this required a Full ATX case.
There were a lot of others around, especially during the '90s.
But ultimately, here's the problem you face: any new layout has to displace an entrenched standard. The only way a new standard overcomes that is if it does at least these two things: provide substantially better performance or features, and be cheap enough to manufacture. Usually you can only hit one of them, if at all.
Retail-quality PCIe x16 risers can cost up to $80, so while it might be a big cost from a "MoBo implementation" standpoint, it's still cheaper than making a separate riser to sell, especially as PCIe 4.0 / 5.0 / beyond becomes standard.
And the irony is that Hardware Canucks recently tested which orientation is best for PC CPU/GPU temps.
It was a fundamentally good design; the dumb part was that Intel used the standard orientation for a PCIe card, so the MoBo add-on board has its air intake literally blocked by the video card.
If they had gone with a slightly non-standard orientation and a special reversed-position PCIe x16 slot, they could've designed the NUC Compute Card to take in proper intake air from the side of the mesh case.
But they didn't; they went with the standard orientation and blocked their own air intake, so it doesn't cool as well as it could have.
The M.2 "Physical Connector" was designed specifically for a LapTop's Physically Size & Volume constrained environment.
On the Home Desktop and Enterprise / Server environment, it's largely a pointless form-factor that should be NEVER USED in those environments.
NOTE: I'm perfectly fine with PCIe protocol & NVMe / SAS / SATA Protocols. Each one has it's place.
It's the M.2 "Physical Connector" that I don't care for in a Home Desktop / Server environment.
We need to re-use the now-obsolete 1.8" HDD form factor and repurpose it exclusively for SSDs.
The M.2 "Physical Connector" is too fragile and only designed for a limited insertion life-span:
+ Connector Mating Cycle (Insert & Removal) designed Life Span
- M.2 = ______50 Cycles
- SATA = 10,000 Cycles
We have U.2 / U.3 connectors that we can start taking seriously for nearly every home desktop / server storage drive.
Imagine a world where everybody's MoBo is filled with U.2 / U.3 connectors and we (the end users) decide what storage drive or device sits on the other end of that U.2 / U.3 cable.
If you wanted a backplane to multiple drives, you can have it, just plug in the appropriate cable.
If you wanted a PCIe x4 SSD, you can have it, just plug in the appropriate cable.
If you wanted 2x PCIe x2 SSDs, you can have it, plug in the appropriate cable.
If you wanted 4x PCIe x1 hybrid HDD/SSDs, you can have it, plug in the appropriate cable (see the bandwidth sketch after this list for what each split works out to).
If you wanted SATA/SAS/NVMe over PCIe, pure PCIe, or CXL, you can have it.
Just mount the device in a standard 3½" drive bay or 5¼" drive bay and plug it in.
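To put rough numbers on those splits, here's a quick back-of-the-envelope sketch (my own illustration, not anything from the article), assuming PCIe 4.0 signaling behind the connector and counting only the 128b/130b line encoding, so real-world throughput would be a bit lower:

```python
# Per-device bandwidth when one x4 link behind a U.2/U.3-style connector
# is split between devices. Assumes PCIe 4.0 (16 GT/s per lane, 128b/130b
# encoding); packet/protocol overhead is ignored, so real numbers are lower.

LANE_GBPS = 16 * (128 / 130) / 8   # ~1.97 GB/s per PCIe 4.0 lane

splits = {
    "1x PCIe x4 SSD": (1, 4),            # (devices, lanes per device)
    "2x PCIe x2 SSDs": (2, 2),
    "4x PCIe x1 hybrid drives": (4, 1),
}

for name, (devices, lanes_each) in splits.items():
    print(f"{name}: {devices} device(s) at ~{lanes_each * LANE_GBPS:.1f} GB/s each")
```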
It's time we go back to our PC DIY Building Roots and not abandon everything we've learned over time.
Cooling is important, but so are the standardized connectors & drive bays that we mount our other devices to.
That's a beautiful DIY PC Building world that we can move towards if we get all the manufacturers on board along with the SSD manufacturers.
We need to move into a new era where all SSDs use the 1.8" SSD form factor.
I did the geometry and there is enough room to fit 16x NAND Flash Packages around a controller with a single DRAM package on the opposite side of the controller.
1.8"SSD Drive Dimensions = 78.5 mm × 54.0 mm × 5.0 mm (Physical Casing)
I did some rough estimation, and the internal use-able PCB area would be ≈ 3326.00 mm² after you factor out the SATA/SAS/PCIe connector & Screw Holes / Case Screws.
That's still better than an M.2 30110 in surface area, and you get an all-aluminium case to act as a large heatsink.
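If you want to sanity-check that comparison, the arithmetic is simple enough to script (purely illustrative; the 3326 mm² figure is my rough estimate from above, and the M.2 footprint is just width × length):

```python
# Sanity check of the area comparison above (figures from the post).
casing_area = 78.5 * 54.0      # 1.8" casing outline: ~4239 mm^2
usable_pcb_area = 3326.0       # rough usable-PCB estimate after subtracting
                               # the connector, screw holes, and case screws

m2_30110_area = 30 * 110       # M.2 30110 footprint (30 mm x 110 mm): 3300 mm^2

print(f'1.8" casing outline:   {casing_area:.0f} mm^2')
print(f'1.8" usable PCB (est): {usable_pcb_area:.0f} mm^2')
print(f'M.2 30110 footprint:   {m2_30110_area} mm^2')
```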
Also, the standard 1.8" SSD form factor is so small that you can fit 5x 1.8" SSDs, each with a plastic connector cover & plastic protector case, inside a standard Bicycle poker playing card deck box.
How's that for a "useful small size" without getting so small and fiddly that you'd easily lose the damn thing?
The chipset uses an x4 PCIe 4.0 connection, whereas GPUs use x16 PCIe 5.0.
What you're proposing would be a lot more expensive and would add non-negligible power overhead, as you want to route all 16 PCIe 5.0 lanes not just through a separate card, but also through the chipset.
I think what's happening is that each chipset is just operating as a PCIe bridge. In that case, the end-to-end bandwidth would be the same (best case), but you might notice a small hit on latency.
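For reference, the raw link bandwidths being compared there are easy to work out (my own arithmetic, counting only the 128b/130b encoding overhead, so real throughput is somewhat lower):

```python
# One-directional PCIe link bandwidth, counting only 128b/130b line encoding;
# packet/protocol overhead would reduce these figures further.
def link_gb_per_s(gt_per_s: float, lanes: int) -> float:
    return gt_per_s * (128 / 130) / 8 * lanes

chipset_uplink = link_gb_per_s(16.0, 4)    # PCIe 4.0 x4  -> ~7.9 GB/s
gpu_link       = link_gb_per_s(32.0, 16)   # PCIe 5.0 x16 -> ~63 GB/s

print(f"Chipset x4 PCIe 4.0 uplink: ~{chipset_uplink:.1f} GB/s")
print(f"GPU x16 PCIe 5.0 link:      ~{gpu_link:.1f} GB/s")
```

So an x16 PCIe 5.0 GPU hung behind the chipset's x4 PCIe 4.0 uplink would be limited to roughly an eighth of its native link bandwidth.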
ATX specifications were first released in 1995. BTX came along in 2004, but the market rejected it.
Since then, we've seen the rise of micro-ATX, mini-ITX, and mini-STX. So, I wouldn't say the design has completely stagnated. Granted, mini-ITX technically goes back to 2001.
With mini-PCs becoming increasingly common, I really hoped to see more activity around mini-STX. Unfortunately, the market for mini-STX cases hasn't really seemed to materialize.
Look at Intel's NUC Elements, then. Or see what @TerryLaze posted: in their NUC Extreme series, they even have the CPU sitting on its own card which plugs into a baseboard.
I know that, but some things must be done to gain back those PCIe expansion slots that would get blocked by the massive heatsink / fans of modern video cards that can span up to 5× expansion card slots.
But compared to spending up to $80 on a PCIe 5.0 x16 riser card on top of the MoBo, I'd rather give that extra money to the MoBo maker to do it properly the first time, instead of wasting my time guessing whether my PCIe x16 riser card is causing issues.
In a world where NVMe SSDs are getting ever hotter and in need of more cooling to avoid throttling (or worse), I think 2.5" provides useful benefits in terms of additional surface area. It also opens up the design space for cheaper drives to use a larger number of lower-density NAND chips.