Discussion: Motherboards … are they really getting any better?

Jun 24, 2025
The high-end boards are certainly getting more expensive, but are they getting any better on the performance side? Here are some of the issues I have with long-standing trends in motherboard design, PCIe specs, JEDEC specs, etc.

1. PCIe … Gen 3, Gen 4, Gen 5: each generation seems to offer less connectivity, trading it for increased bandwidth on a “few” (often just one) slots/connections.
2. Failure to support more than 2 memory slots at higher rated speeds and higher capacities (yes, it can be done if you are lucky, but it’s rare).
3. More power phases (VRMs) that have little or no benefit to anyone who isn’t using LN2 and trying to set overclocking records. Why have EFI settings and jumpers for LN2 when 99.9% of the consumer market will never use it?
4. Metal/magnetic or plastic covers over circuits … why? They add weight and cost and do not help cooling.
5. USB ports that don’t supply sufficient power to operate more than about 10-12 USB devices (if you’re lucky). And a USB spec that is “open to interpretation,” with all kinds of speed designators, guesswork on whether existing cables will work, and terrible length limits (a rough power-budget sketch follows this list).
6. XMP/EXPO profiles that sometimes work and sometimes don’t … so why have them if it’s hit and miss?
7. Far too many UEFI settings, of which only about 10% are actually used and useful.
8. M.2 slot obsession … why? Just buy a single 8TB M.2 drive rather than providing a motherboard with seven M.2 slots.
9. RGB headers all over the board (most of them not able to provide sufficient power).
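
On the USB power point in item 5, a quick back-of-envelope shows how fast bus power runs out. This is a minimal sketch: the per-port figures are the USB spec minimums, while the device list and draws are hypothetical, purely for illustration.

```python
# Rough USB bus-power budget (item 5). Per-port figures are the USB spec
# minimums (USB 2.0: 500 mA at 5 V; USB 3.x: 900 mA at 5 V). The device
# draws are hypothetical examples, not measurements.
PORT_W = {"usb2": 0.500 * 5.0, "usb3": 0.900 * 5.0}  # 2.5 W / 4.5 W per port

devices_w = {
    "keyboard": 0.5, "mouse": 0.5, "webcam": 2.5, "headset": 1.0,
    "external_ssd": 4.5, "capture_card": 4.0, "audio_interface": 2.5,
}

total = sum(devices_w.values())  # 15.5 W for just seven devices
print(f"Total draw: {total:.1f} W across {len(devices_w)} devices")
print(f"Equivalent fully loaded USB 3.x ports: {total / PORT_W['usb3']:.1f}")
```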

One of my motherboards is the MSI X870E GODLIKE, which is a good example: at $1200 it is still unable to run 4 high-frequency RAM modules unless timings are relaxed down to a low frequency. My other motherboards (ASUS) have the same issue, limited to 2 slots for high-frequency RAM.

About the only useful feature I get out of the MSI board is the release button for the PCIe Gen 5 slot. Heck, you can run a Gen 5 GPU in a Gen 3 slot and there is barely any difference in performance. On the memory side, it’s stable at 6400 MT/s with 2×48GB, but with 4 slots populated, forget it (not without major timing adjustments).
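
On the Gen 5 GPU in a Gen 3 slot point, a quick sketch of the theoretical link bandwidth (the standard 128b/130b-encoded rates, one direction) shows what is nominally being traded away, and why it rarely matters for a single GPU:

```python
# Theoretical one-direction PCIe bandwidth per generation. Gen 3 and
# later use 128b/130b encoding, so usable GB/s per lane is
# (GT/s) * (128 / 130) / 8 bits-per-byte.
GT_PER_S = {"Gen 3": 8, "Gen 4": 16, "Gen 5": 32}

for gen, gt in GT_PER_S.items():
    per_lane = gt * (128 / 130) / 8      # GB/s per lane, one direction
    print(f"{gen}: x16 link ~ {per_lane * 16:.1f} GB/s each way")
# Gen 3: ~15.8, Gen 4: ~31.5, Gen 5: ~63.0 GB/s. Most GPUs never come
# close to saturating even the Gen 3 figure, hence the tiny benchmark deltas.
```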

A. Make 4 or even 8 memory slots work at the rated XMP/EXPO speeds of the memory inserted into them … a significant redesign, but at $1200 I would expect it.
B. Provide 40G Ethernet port(s) and do away with the USB issues and short cable-length limits. Thousands of devices could be connected to a single computer or to multiple computers … the tech already exists (with PoE). Standardize on two RJ45 connectors (thick/standard, plus thin for those small-device needs) over Cat8 … no need to go SFP/fiber unless you want lengths over 300 ft.

My background is about 40 years in software engineering (lead/senior) and software development management. I’ve created and had games published going back to 1982 (HotCoCo). I’ve built hundreds of computers over the decades, from water-cooled to vapor chillers, and I enjoy the build process. I currently operate a rack of 6 computers plus 1 gaming PC at my home office, with an UnRaid server, Windows Server, Windows 11, etc.

I don’t know if I’m alone in my disappointment with how motherboards have progressed over the years. Don’t get me wrong, I like both the RGB bling factor and straight no-frills performance, and I’m not against options that support the desires of both camps.

Rob
 
Seems like the majority of your complaints actually have nothing to do with motherboards, but rather with platforms.

There's basically nothing motherboard manufacturers can do to make high-frequency DRAM work in 2DPC mode. Far too much depends predominantly on the memory controller and, to a lesser degree, on the DIMMs themselves. Workstations can have higher capacities of fast memory because they have more memory channels and run RDIMMs.

XMP/EXPO also falls into the category of being more about memory modules and CPU memory controllers than the motherboard.

USB power specifications can be laid at the feet of USB-IF: the ports meet the standards, but the standards are so loose it doesn't matter.

The argument regarding M.2 is a dead horse. I hate the spec, and anyone who understands technology should hate that it's used outside of mobile devices as well. That battle was lost when client went M.2 while enterprise went U.2. Manufacturers could put more PCIe slots on the board, but given that anything but a budget video card is going to block at least 3-4 slots, it would mean putting in something the majority of customers will never touch. There's a higher chance of people putting multiple M.2 drives in their system than there is of putting in multiple PCIe cards.

A good VRM makes sense, but the overkill as you go up the product stack has never really made sense to me outside of overclocking-focused boards and halo boards like the Extremes and GODLIKEs. My board has 20x 110 A stages, and it would be just as good with half that.
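
As a rough sanity check on that, here's a minimal sketch; the stage count and rating are as stated above, while the sustained package power and Vcore are assumed values purely for illustration, not measurements of any particular board or CPU:

```python
# Back-of-envelope VRM headroom check. Stage count and rating are as
# stated above; load and core voltage are assumptions for illustration.
stages, amps_per_stage = 20, 110       # 20x 110 A power stages
cpu_power_w, vcore = 250.0, 1.25       # assumed sustained draw and Vcore

cpu_amps = cpu_power_w / vcore         # I = P / V, about 200 A
capacity = stages * amps_per_stage     # 2200 A combined rating
print(f"CPU draw ~ {cpu_amps:.0f} A vs {capacity} A of stage capacity "
      f"({capacity / cpu_amps:.1f}x headroom)")
```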

PCIe throughput and utilization really have nothing to do with motherboard manufacturers at all, but rather with card and platform vendors. There are plenty of clever things that could be done to leverage PCIe bandwidth better, but the only companies making PCIe switches above 3.0 are Broadcom and Microchip, both of which charge enterprise pricing for everything.

I'm not really sure why anyone would complain about the number of BIOS settings when every manufacturer ships a variously named basic mode that only exposes commonly used settings. Advanced mode is for those who know what they're doing and/or want to play around with every possible setting.

RGB headers and PWM headers both suffer from the same problem: not enough power is available on the motherboard to run them. This is hard to solve across the board without adding more power connectors, which absolutely could be done and perhaps, on high-end boards, should be.
 
I’ll disagree with you on some of your statements.

PCI-SIG comprises many (700+) companies that define the PCIe specification; AMD, Asus, Intel, and most motherboard and CPU manufacturers are on the member list. These members define and ratify the specifications, so they are part of the problem.

Agreed, the IMC defines the channel support … and agreed, many recent desktop CPU IMCs are only 2 channel, but Intel’s older desktop CPUs were 4 channel and AMD EPYC CPUs are 8-12 channels. But my point is we seem to be moving backwards for desktop CPUs, adding more cores that end up being underutilized as the IMC channels grow fewer. That doesn’t seem like progress; it seems more like padding profit margins. The more cores, the greater the need for more memory channels and a better IMC.

On the EFI side, I go into advanced mode very frequently. I’ll hit about 40 settings to optimize my system … out of about 550. For many of the settings, nobody knows what they do, not even EEs or the manufacturers’ reps.

Overall, the trend for motherboard manufacturers is to provide less and charge more.
 
Intel’s older desktop CPUs were 4 channel
Intel never sold 4-channel desktop CPUs. The only time they went above 2 channels on desktop was Nehalem with 3 channels, and an argument can be made for this being the start of HEDT, but they weren't a separate product stack yet. After that, all of the HEDT parts (X79/X99/X299 platforms) were Xeons with memory limitations to prevent hurting the Xeon market, and those were 4 channel.
But my point is we seem to be moving backwards for desktop CPUs, adding more cores that end up being underutilized as the IMC channels grow fewer.
Are we though? AMD released a 16-core part on DDR4 and now they've got 16-core parts on DDR5. How is this moving backwards at all? Even if you take stock supported speeds, the 5950X is 3200, the 7950X is 5200 (62.5% increase), and the 9950X is 5600 (~7.7% increase); these are all going up over prior generations.

On the Intel side, the 10900K is 2933 (10 cores), the 11900K is 3200 (8 cores), the 12900K is 4800 (8P/8E cores, 50% increase), the 13900K/14900K is 5600 (8P/16E cores, ~16.7% increase), and the 285K is 6400 (8P/16E cores, ~14.3% increase). Each new architecture could also overclock memory higher than the one prior.
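
Those percentages are easy to reproduce from the stock speeds listed above; a quick sketch:

```python
# Reproducing the stock-speed jumps (MT/s) quoted above.
amd = [("5950X", 3200), ("7950X", 5200), ("9950X", 5600)]
intel = [("10900K", 2933), ("11900K", 3200), ("12900K", 4800),
         ("13900K/14900K", 5600), ("285K", 6400)]

for lineup in (amd, intel):
    for (prev, pv), (cur, cv) in zip(lineup, lineup[1:]):
        print(f"{prev} -> {cur}: +{(cv / pv - 1) * 100:.1f}%")
```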

With DDR4 there absolutely was not enough bandwidth, and the 5950X was the first to really show workloads where it was memory-bandwidth starved. Current desktop CPUs aren't particularly bandwidth starved (beyond the typical pieces of software that benefit from as much as they can be given). On the AMD side, desktop has more memory bandwidth per core than Threadripper above 32 cores (the TR top SKU is 64 cores), and on TR Pro (96-core top SKU) that threshold is 64 cores. Intel is even more skewed in favor of desktop CPU bandwidth, since the E-cores don't need the bandwidth P-cores do.
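
A rough per-core calculation makes that threshold visible. This is a sketch under simple assumptions: stock DDR5 speeds and channel counts for each platform, the usual channels × MT/s × 8 bytes math, and the top SKU of each stack picked as the example:

```python
# Theoretical memory bandwidth per core: channels * MT/s * 8 bytes per
# transfer (64-bit channel), ignoring real-world efficiency. The SKUs
# shown are the top parts of each stack, used as illustrative examples.
def total_gbs(channels: int, mts: int) -> float:
    return channels * mts * 8 / 1000   # GB/s

configs = [
    ("9950X (2ch DDR5-5600, 16 cores)",         total_gbs(2, 5600), 16),
    ("TR 7980X (4ch DDR5-5200, 64 cores)",      total_gbs(4, 5200), 64),
    ("TR Pro 7995WX (8ch DDR5-5200, 96 cores)", total_gbs(8, 5200), 96),
]

for name, bw, cores in configs:
    print(f"{name}: {bw:.0f} GB/s total, {bw / cores:.2f} GB/s per core")
```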
end up being underutilized as the IMC channels grow fewer
For many of the settings, nobody knows what they do, not even EEs or the manufacturers’ reps.
Do you have any evidence to back either of these up?
These members define and ratify the specifications, so they are part of the problem.
Please explain your logic here. Motherboard manufacturers don't have anything to do with how PCIe cards interface with the system. They simply provide the connection, and the connectivity has not particularly changed in the desktop space since SNB was released. The connection to the chipset has gotten wider, which allows for more chipset connectivity, but that's pretty much it. Again, the motherboard manufacturers have nothing to do with this.
 
Yes, much of it is out of MB manufacturers' hands; they have to follow the standards for each platform and are limited by CPU and chipset capabilities, neither of which they make. Even the BIOS is usually outsourced.
What they can do is choose the quality of components and the number of features within those limits, and that is what they use to make a spread of differently priced MBs. Above about the mid-priced models they keep adding accessories that aren't really needed in order to justify the price; those cost them little but look impressive.
As far as BIOS features are concerned, BIOS chips have enough space for algorithms that suit many CPUs, RAM kits, etc. set to Auto, so you really need to access just a few settings; the rest will adjust accordingly, but they are still there if needed for some other application.