Motherboard Shipments Plummet by Ten Million Units in 2022: Report


InvalidError

Titan
Moderator
Look at DDR5. It's crap. Look at PCIe 5 NVMe drives. More crap.
First-gen anything new is rarely worth bothering with. We're on 2nd-gen DDR5 now though, and once you get into high-performance DDR4 (3600-10) vs similarly priced DDR5 (6400-28), DDR5 is usually ahead. First-gen 5.0 x4 SSDs still use mostly the same NAND chips and overall drive architecture as 4.0 x4 SSDs, so no miracle will be happening there. You'll have to wait for 2nd- or 3rd-gen 5.0 x4 SSDs for power draw to get reeled in and NAND performance to catch up.
 

shady28

Distinguished
I still say it's more the fact that Zen 3 and Intel 10th-gen and newer aren't slow enough to justify replacing your core system. Even if current-generation parts cost the same as the previous generation, AMD especially, that's still a lot of money to pour in for a small gain, which in most cases is no noticeable gain.

Exactly.

I'd say the upgrade cycle is longer now too: 3Y for the acolytes who would be reading/posting here, but for the other 90% of folks it's 5Y-7Y. The 3Y hobbyist types will upgrade late this year or early 2024, but I would not expect to see a large recovery until ~2025, and I bet it crashes out again by the end of 2026.

This also means that the industry, which was already cyclical, is likely to be even more cyclical for many years because Covid just put almost everyone on the same upgrade schedule.
 

Deleted member 14196

Guest
I don't know where you're from, but I don't consider 31.3% to 37.4% faster in actual workloads "not worth bothering with".
Well, I think that for a lot of people, the workloads aren't that intensive. Like me, for instance: all I need is a tiny little box that lets me RDP to work over VPN. I don't have ridiculous requirements for heavy workloads, and my AMD 3550 is enough to do everything that I need, and even run virtual machines if necessary.

All of my heavy iron is at work. I could get by with an ARM laptop.

My mini PC runs a clone of my main build machine just fine, so in theory I could even use my machine to do real work if I had to, but I don't. I just use virtual machines for my own personal testing of software before it gets installed on my system.

Anyway, as we said before, most people have already upgraded, and that, plus the economic downturn, is one of the main reasons. Just to beat a dead horse, lol.
 

InvalidError

Titan
Moderator
I don't know where you're from, but I don't consider 31.3% to 37.4% faster in actual workloads "not worth bothering with".
Anandtech used DDR4-3200-22 for those benchmarks, which is atrociously high latency for DDR4. That is a 30% handicap over bog-standard 3200-16 or still very affordable 3600-16. Had Anandtech used decent DDR4 that was still cheaper than the DDR5 they used, most of that "31.3% to 37.4% faster" would have been reduced to ~0%.
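Back-of-the-envelope, for anyone who wants to check the numbers (CAS only; tRCD and friends are ignored, so treat these as relative figures):

```python
# First-word latency: CL cycles at the memory clock (= data rate / 2),
# so latency_ns = CL * 2000 / data_rate_MTs.
kits = [
    ("DDR4-3200 CL22 (Anandtech's kit)", 3200, 22),
    ("DDR4-3200 CL16 (bog-standard)",    3200, 16),
    ("DDR4-3600 CL16 (still cheap)",     3600, 16),
    ("DDR5-6400 CL28",                   6400, 28),
]
for name, rate, cl in kits:
    print(f"{name}: {cl * 2000 / rate:.2f} ns")
# -> 13.75 ns, 10.00 ns, 8.89 ns, 8.75 ns
```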
 

bit_user

Champion
Ambassador
Well, I think that for a lot of people, the workloads aren't that intensive,
The context of my statement was whether DDR5 was "crap", as had been asserted. The data shows it's not. That's all I was saying. Not that everyone needs it or that it provides the same benefit across the board, just that it does provide very real benefits to some, and did so even on launch day of Alder Lake.

Had Anandtech used decent DDR4 that was still cheaper than the DDR5 they used, most of that "31.3% to 37.4% faster" would have been reduced to ~0%.
Prove it.

You can't, because it's not true. DDR5's main benefit is not latency, it's bandwidth, which turns out to be a bottleneck when you're running 24-thread workloads. Also, it doubles channel-count, which increases the amount of hardware-level parallelism. The DDR5 they used even had worse latency than their DDR4!
[Charts: Alder Lake memory latency with DDR4 vs. DDR5]
Also, you grossly overestimate how latency-sensitive these workloads are. I don't know where you get some of this stuff.
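To put rough numbers on that (theoretical peaks; DDR5-4800 is assumed as a typical launch-day speed, and sustained bandwidth is lower in practice):

```python
# Peak bandwidth = data rate (MT/s) x 8 bytes per 64-bit DIMM channel.
# DDR5 splits each DIMM into two independent 32-bit subchannels: same
# total width, but twice the channels able to service requests in parallel.
def peak_gb_s(data_rate_mts, dimm_channels=2):
    return data_rate_mts * 8 * dimm_channels / 1000

threads = 24
for name, rate in [("DDR4-3200", 3200), ("DDR5-4800", 4800)]:
    bw = peak_gb_s(rate)
    print(f"{name}: {bw:.1f} GB/s total, {bw / threads:.1f} GB/s per thread")
# DDR4-3200: 51.2 GB/s total, 2.1 GB/s per thread
# DDR5-4800: 76.8 GB/s total, 3.2 GB/s per thread
```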
 

InvalidError

Titan
Moderator
Prove it.

You can't, because it's not true. DDR5's main benefit is not latency, it's bandwidth, which turns out to be a bottleneck when you're running 24-thread workloads.
In THG's DDR4 vs DDR5 benchmarks from September 2022, there was less than a 10% difference between DDR4 3600-16 and DDR5-5200-34 in productivity stuff and practically no difference at all in games. The only THG benchmark where DDR5 pulls way ahead is 7Zip decompression, which I dismiss as completely irrelevant due to how rarely people decompress files large enough to take any meaningful time.

For most normal people, DDR5 currently offers few to no meaningful benefits over good DDR4. Anandtech gave an undue advantage to DDR5 by using horrible DDR4.
 
My current system has been using the venerable Ryzen 5 3600 for almost 3 years at this point. For what I do it still offers amazing performance. In fact, I choose to run it in eco mode for improved efficiency and lower noise since the performance difference is in the single digits, or no different at all in games.

Needless to say, I can't see a single good reason to even remotely consider upgrading now or for the foreseeable future. Not when low-end B650 boards cost 2x what my B550 board would have cost new, and 3x what I paid for mine open-box. I'd see no improvement from paying the inflated prices for a current-generation CPU from either Intel or AMD.

I am a bit sad to see AM4 motherboards seemingly becoming more difficult to get and their prices slowly rising, especially since early Ryzen parts can be great for budget systems and have a great upgrade path down the line.
 

domih

Reputable
There is a lot of talk in this thread about using 10GbE at home.

The consumer/prosumer RJ45 "new" 10GbE hardware is currently very expensive.

If you want to use 10GbE at home, e.g. a NAS and a few machines,
If you do not want to wait for prices to go down in 5 years,
If you do not want to sell your grandma and would rather spend as little as possible,
If you know what you are doing:

Go with SFP+ used/open box/new hardware from eBay.

For the cards, there is a pallet-sized load of SolarFlare cards at around $20. See https://www.ebay.com/sch/i.html?_fr...olarflare+10gbe+sfp&_blrs=spell_check&_sop=15

You need a free PCIe Gen 3 x8 slot, or maybe x4 depending on the model you select, so on regular desktop PCs you'll usually insert the card in the 2nd PCIe x16-size slot. There is nothing preventing you from using the 1st x16 slot for the NIC and the 2nd for the graphics card if it makes the motherboard happier.
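If you want to sanity-check the lane math (PCIe 3.0 runs 8 GT/s per lane with 128b/130b encoding), a quick sketch:

```python
# Usable PCIe 3.0 bandwidth: 8 GT/s per lane with 128b/130b encoding.
lane_gbps = 8 * 128 / 130  # ~7.88 Gb/s per lane
for lanes in (4, 8):
    total = lane_gbps * lanes
    print(f"PCIe 3.0 x{lanes}: ~{total:.0f} Gb/s, "
          f"enough for {int(total // 10)} saturated 10GbE ports")
# -> x4: ~32 Gb/s (3 ports), x8: ~63 Gb/s (6 ports)
```

So even an x4 link comfortably feeds a dual-port 10GbE card, protocol overhead aside.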

You do NOT need a switch if you only have one PC and a NAS, or a server such as ProxMox, or just a 2nd PC. Just directly connect the 2 machines with one cable and set your NICs up manually with fixed IP addresses. If you choose a dual-port card, your PC can talk 10GbE to two other machines.
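On Linux, the point-to-point setup boils down to a couple of iproute2 calls. A minimal sketch (the interface name enp2s0 and the 10.10.10.0/24 subnet are my assumptions; check yours with `ip link`, run as root, and use .2 on the second machine):

```python
#!/usr/bin/env python3
# Give the SFP+ NIC a fixed address for a direct PC <-> NAS link.
import subprocess

IFACE = "enp2s0"        # your 10GbE interface (assumption)
ADDR = "10.10.10.1/24"  # the other machine gets 10.10.10.2/24

subprocess.run(["ip", "addr", "add", ADDR, "dev", IFACE], check=True)
subprocess.run(["ip", "link", "set", IFACE, "up"], check=True)
# Test from the other box: ping 10.10.10.1
```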

For the switch (if necessary), search for "switch 10GbE SFP" and filter on "10 Gigabit Ethernet"; from time to time you'll find models such as: https://www.ebay.com/itm/314318172336

So I strongly suggest that you browse eBay offers regularly until you find a cheap one. Be patient.

If your PC already has an onboard 10GbE NIC, usually an Aquantia chipset, you'll need a 10GBase-T RJ-45 SFP+ transceiver (e.g. https://www.ebay.com/itm/234685127893) to connect it to an SFP+ switch. Usually $45/$50.

While using an unmanaged switch, you still need to set your NICs up manually with fixed IP addresses, unless you know what you are doing and have a DHCP server on the 10GbE subnet. One of your machines, running Linux, can become the router between your 10GbE subnet and your usual 1GbE or 2.5GbE network. The setup is not rocket science.
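A minimal sketch of that Linux router (interface names and the subnet are examples, not gospel; persistent setup belongs in sysctl.d and your distro's firewall config):

```python
#!/usr/bin/env python3
# Route between the 10GbE subnet (enp2s0) and the regular LAN (enp1s0).
import subprocess

# Enable IPv4 forwarding (non-persistent).
with open("/proc/sys/net/ipv4/ip_forward", "w") as f:
    f.write("1\n")

# NAT traffic from the 10GbE subnet out through the 1GbE interface.
subprocess.run(["iptables", "-t", "nat", "-A", "POSTROUTING",
                "-s", "10.10.10.0/24", "-o", "enp1s0", "-j", "MASQUERADE"],
               check=True)
# Machines on the 10GbE side then point their default route at this box.
```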

For the cables, just dig into this: https://www.ebay.com/sch/i.html?_fr...0&_odkw=10gbe+SFP+DAC+cable&_osacat=0&_sop=15. $10 to $50 depending on length (<1m to 10m). If you need very long cables, you will have to go optical with SFP+ transceivers and AOC cables. Costs more.

Note: not all 10GbE SFP+ DAC cables are the same; avoid the ones saying Cisco. I tried one, and it did not work with my MikroTik switch. Then I tried one that did not mention Cisco and it worked A-OK.

FINAL NOTE
Used 10GbE SFP+ cards and switches are continuously appearing on eBay. They come from data centers that dropped 10GbE entirely. The hardware was designed years before the 2.5GbE and 5GbE standards were written, so these cards and switches will NOT auto-negotiate to 2.5GbE or 5GbE (hence the Linux router to link the two subnets).

Do your homework. Visit the product pages to get the specifications, and buy cards, switches and cables that work together.

To add 2.5GbE for cheap to a motherboard that does not have an onboard 2.5GbE chipset:

Solution #1: go with a USB 2.5GbE adapter. The most common ones use a Realtek RTL8156B chipset. You can find them by looking for "RTL8156B" on eBay. Usually around $20. Just make sure the seller says RTL8156B, with the "B" at the end. The original RTL8156 was, let's say, painfully inadequate (read: "not working"). See https://www.ebay.com/sch/i.html?_fr..._odkw=RTL8125B&_osacat=0&_sop=15&LH_PrefLoc=2

Solution #2: go with a PCIe card. The most common ones use a Realtek RTL8125B chipset. Convenient if you have a free PCIe Gen 3 x1 slot, or above. Usually around $20. Again, make sure the seller says RTL8125B, with the "B" at the end. The original RTL8125 was yada yada yada. See https://www.ebay.com/sch/i.html?_fr..._odkw=RTL8156B&_osacat=0&_sop=15&LH_PrefLoc=2.

5GbE?
Avoid the USB 5GbE adapters like the plague. Most of them use the same USB 3.0 chipset to bridge USB to Ethernet. This USB 3.0 chipset is Gen 1, meaning a max theoretical speed of 5Gbps. So, as you may surmise, tunneling 5GbE Ethernet inside 5Gbps USB is going to be a showstopper. The max speed you'll get with these USB 5GbE adapters is about 3.5Gbps, not the expected 4.5Gbps. That said, if you're happy gaining one more gigabit compared to 2.5GbE, why not. But that will cost you around $80 per adapter. At that price, it makes more sense in terms of value to jump directly to 10GbE.
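The arithmetic behind that ceiling, roughly (8b/10b line encoding is per the USB 3.0 Gen 1 spec; the extra ~10% protocol overhead is my rough estimate):

```python
usb_raw = 5.0                          # Gb/s signaling rate, USB 3.0 Gen 1
after_encoding = usb_raw * 8 / 10      # 4.0 Gb/s left after 8b/10b encoding
after_protocol = after_encoding * 0.9  # minus framing/bulk overhead (rough)
print(f"~{after_protocol:.1f} Gb/s usable")  # ~3.6 Gb/s
```

...which lines up with the ~3.5Gbps these adapters actually deliver.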

My guess is that the manufacturers will just repeat the loop: initial high prices for 2.5GbE, then for 5GbE, and then for 10GbE, each time presenting it as the "New New Thing, Best of the Best", while 10GbE has been used for 2+ decades in data centers.
 
I don't know where you're from, but I don't consider 31.3% to 37.4% faster in actual workloads "not worth bothering with".
I agree the gains from new-generation CPUs are very much real and very beneficial for some people and some workloads; however, I think you are missing the main idea. These workloads are not the norm.

The average computer user does not need a high-end CPU. The gains from the last few generations of CPUs are not noticeable to the average person in the slightest. In fact, I'd wager that for a basic web browsing, YouTube, etc. machine, most people would not see the difference between an i5-8400 and a 13900KS. In all honesty, my laptop doesn't feel any slower than my desktop for basic usage, and the laptop I am typing this from is a decade old. Even gamers would not see any real difference from a Ryzen 5 5600 to a 7950X, or even from a slower CPU like a 3600, unless they were running an improper GPU+monitor resolution pairing.

In general, CPUs and boards now cost a lot, and the benefit for the average person is negligible, so it makes sense why it's not selling.
 

greenreaper

Honorable
Maybe the board manufacturers can explain why the Euros have twice the selection of LGA 1700 boards, such as B760 and H770 boards, that we have here in North America.
Didn't the US recently impose tech sanctions on China with crippling import tariffs? Could be related.
 

InvalidError

Titan
Moderator
My guess is that the manufacturers will just repeat the loop: initial high prices for 2.5GbE, then for 5GbE, and then for 10GbE, each time presenting it as the "New New Thing, Best of the Best", while 10GbE has been used for 2+ decades in data centers.
It isn't only the up-front price though; there is also the per-port power to consider. Datacenters don't care much if 10GBase-T ports burn 5-7W each instead of 0.5-3W, but a normal person may care that their 5-port router requires active cooling while burning 40-50W vs ~10W for their passively cooled 1GbE one.
 

DavidLejdar

Prominent
Meanwhile, nearly 52 million iPhones shipped in Q3 of 2022 (a year-on-year increase)... which doesn't seem to imply that electronics sales at large are much impacted by inflation etc. It isn't surprising, though, that many who got themselves a new rig in 2021 are not rushing to upgrade it several months later - in particular as many users are not such enthusiasts that they want 300+ FPS (which some older CPUs may be overwhelmed by).

In my case, my previous rig had an i5-4570 (which was selling for around $200 in 2014, a year after launch) and DDR3. My new rig is AM5 with DDR5. I could have saved a bit of money if I had gone for a DDR4 (and PCIe 4.0) rig instead. But eventually I would have had to upgrade again to be able to make full use of next-gen GPUs (with 32+ GB), so I didn't see much of a point in missing out on performance for a saving that may at best pay for a new MB and RAM in a few years (meaning that the total saving would be zero, at best).
 

bit_user

Champion
Ambassador
In THG's DDR4 vs DDR5 benchmarks from September 2022, there was less than a 10% difference between DDR4 3600-16 and DDR5-5200-34 in productivity stuff
The workload I cited was SPEC2017, which you confidently stated would be ~0% different. You lied.

and practically no difference at all in games.
I already implied gaming wasn't the issue, so no credit there.

However, even for gaming, their laptop iGPU should benefit. You didn't address that, at all.

For most normal people,
I already made it abundantly clear I was challenging the assertion that DDR5 was "crap". I only needed to show that it offered a significant benefit for some workloads, which I did.

I don't know what compelled you to counter with lies, but I'm not buying your flailing attempts at damage control. You should admit that you made utterly false claims and we can move on. However, it seems your ego won't allow you to do that.

Anandtech gave an undue advantage to DDR5 by using horrible DDR4.
Try again. The workload was SPEC2017, which comprises industry-standard multithreaded apps. Toms' geomean included lightly-threaded workloads. Moreover, that Toms metric had a discrepancy of only 14% between the same memory speeds & timings used in the Anandtech review; right there, that should tell you it's not a comparable workload. Finally, for DDR4 3600-16 to then close the gap to only 6.1% is that much less impressive, as it covered a difference of only ~8% rather than the implied 25% to 31% (see the sketch at the end of this post for how mixing workloads dilutes a geomean).

I think we're starting to see what kind of engineer you were. It's one thing to have a slip-up and speak beyond your knowledge, but it's another thing entirely to double down on it once you've learned that you were wrong.

This sort of intellectual dishonesty does us all a disservice. People trust what you say, so when you start spreading misinformation, it carries further than when it's spouted by the typical gamer kid who never posts even two consecutive, grammatically correct sentences.
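If the dilution effect isn't obvious, here it is with purely made-up numbers (an illustration only, not Toms' or Anandtech's data):

```python
# A geomean over a suite mixing memory-bound and lightly-threaded tests
# shrinks the aggregate gap even when the memory-bound gains are large.
from math import prod

def geomean(xs):
    return prod(xs) ** (1 / len(xs))

memory_bound = [1.30, 1.35, 1.28]      # hypothetical DDR5/DDR4 speedups
lightly_threaded = [1.02, 1.01, 1.03]  # hypothetical near-zero gains

print(f"memory-bound only: {geomean(memory_bound):.2f}x")                     # ~1.31x
print(f"mixed suite:       {geomean(memory_bound + lightly_threaded):.2f}x")  # ~1.16x
```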
 
I will say that, as I look at things, I'm more tempted to get an Xbox Series X. Mind you, I have a B350 board, a 5800X, 32GB of RAM, SSDs for boot and storage, and an RTX 3080. But now you start seeing stuff about games possibly running out of VRAM, and you see Nvidia pushing DLSS 3 for fake performance gains; if games push that too hard, then you get to upgrade again.

Meanwhile, I bought my Xbox Series S in late 2020 or 2021. It was at the time when even the Series S was hard to get; I happened to get on GameStop's website at 1am, one was available, so I grabbed it. Really, for what I paid it's a great little console. I actually bought the new Harry Potter game for Xbox. The thought did cross my mind to get the game on PC, but I liked the idea of it being on the larger 65-inch screen. Sure, there are a few graphical hiccups even in performance mode, but overall the game looks great and plays well.

For someone like me, yes, I'm a tech, but to be honest, when I look at the amount I've spent on my PC, mainly to play games, it's tempting to get a Series X and just upgrade the PC when it won't do what I want anymore. I always preferred AMD because I saw them as better value for money. If I were building on a budget today, I'd maybe consider an i5-13400 and a B660 board.

As far as boards go, $150 for a B650 is ridiculous. They need to be at $100 for A620, preferably lower. Zen 4 is faster, but for the amount of money I'd need to spend, I'm not justifying $700 or more for a new CPU, board, and DDR5. As someone who's always bought AMD products, I think I'd buy Intel first. Truth is, though, I can probably carry on with my current rig as-is for another year or two. Maybe drop in a 5800X3D and sell the 5800X if I really felt the need.
 

btmedic04

Distinguished
I'll buy a new mobo when all the USB ports are at least 3.1, when we have more than 2 USB-C ports, and when we have at least 10Gbit/s Ethernet.

The current mobos are overpriced, half-assed and already outdated.

$200-$300 for a mobo that still uses USB 2.0, that still has only 1 USB-C port and still only 2.5Gbit/s Ethernet?

No thanks, I don't like getting ripped off. The amount of cheap components and connectors in mobos is ridiculous.

These mobo makers are putting chips and components in their mobos that are often more than a decade old.

$300 Asus mobo with USB 2.0. F you, mobo makers.

The issue is that the vast majority of internet access isn't even 1Gbps yet, especially in rural areas of the US. Outside of businesses, professional users and enthusiasts, it doesn't make sense to put 10Gbps networking on motherboards. Essentially, you're putting the cart before the horse for the vast majority of people. Like, my mom and dad don't have access to gig+ internet speeds out where they live. But then, they don't need anything faster than 1Gbps to hop on Facebook or watch YouTube videos.

With that said, I do agree with you that Asus has been especially egregious with their pricing on current-generation products.
 
I'll buy a new mobo when all the USB ports are at least 3.1, when we have more than 2 USB-C ports, and when we have at least 10Gbit/s Ethernet.

The current mobos are overpriced, half-assed and already outdated.

$200-$300 for a mobo that still uses USB 2.0, that still has only 1 USB-C port and still only 2.5Gbit/s Ethernet?

No thanks, I don't like getting ripped off. The amount of cheap components and connectors in mobos is ridiculous.

These mobo makers are putting chips and components in their mobos that are often more than a decade old.

$300 Asus mobo with USB 2.0. F you, mobo makers.

The USB 2 ports are there for keyboards and mice, especially for compatibility.

It's a matter of available bandwidth. But they are cutting us short on connectivity, I will agree.
 
In THG's DDR4 vs DDR5 benchmarks from September 2022, there was less than a 10% difference between DDR4 3600-16 and DDR5-5200-34 in productivity stuff and practically no difference at all in games. The only THG benchmark where DDR5 pulls way ahead is 7Zip decompression, which I dismiss as completely irrelevant due to how rarely people decompress files large enough to take any meaningful time.

For most normal people, DDR5 currently offers few to no meaningful benefits over good DDR4. Anandtech gave an undue advantage to DDR5 by using horrible DDR4.

DDR5 has high bandwidth but high latency. You can see it in the high CAS numbers. The access time is often equal to, or worse than, quality DDR4.

High bandwidth is useful when you need to access large linear blocks, like when writing to a frame buffer. But lots of random I/O on small pieces of data is DDR5's weakness.

It will be interesting to see how X3D performance is affected this generation. DDR5's bandwidth reduces one of the extra cache's benefits, fast data retrieval. The cache will still help with small random I/O, however, due to DDR5's latency.
 
