Motherboard Shipments Plummet by Ten Million Units in 2022: Report

I already corrected you on this at least once before. The 7950X gets a multi-application average benefit of 45% better multithreaded performance than its predecessor, while the 7600X is about 35% faster than its predecessor. These are pretty astonishing single-generation improvements, for CPUs with the same core/thread count.
It's 45% more performance for ~45% more power (87 W for the 5950X vs. 125 W for the 7950X at stock).
It's not an improvement in performance; it's just the same performance overclocked to the max.
Hey, if everybody else can use efficiency as an argument then so can I.
At least Intel gets a much bigger performance increase for the extra power.
Both 12th and 13th gen max out at around 300 W, and 13th gen is much faster in some things.
[chart: multi-application power consumption comparison]

The only THG benchmark where DDR5 pulls way ahead is 7Zip decompression, which I dismiss as completely irrelevant due to how rarely people decompress files large enough to take any meaningful time.
That shouldn't be the reason to dismiss it. 7-Zip uses a built-in benchmark that runs on a 32 MB set (changeable in the menu) of made-up data that never touches storage and applies its algorithm to that.
This is why Ryzen was so far ahead in that test for so long: its much larger caches could hold all of that data at once, making things much faster.

Benches with WinRAR are much closer to reality because they have to use actual files on a disk, but you can't bother benchmarkers with doing their job these days.

Also, decompression, on real files at least, is always single-threaded.
https://fiehnlab.ucdavis.edu/staff/kind/collector/benchmark/7zip-benchmark
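For illustration only, here's a minimal Python sketch (not the actual 7-Zip benchmark, which uses its own LZMA settings and working-set size) that contrasts an in-memory compression round trip with a disk-backed one; the 8 MB working set and file names are arbitrary assumptions:

```python
# Rough sketch only (not the actual 7-Zip benchmark): it contrasts a purely
# in-memory compression/decompression round trip, like 7-Zip's built-in test,
# with one that has to read and write real files on disk.
import lzma
import os
import time

data = os.urandom(1 << 20) * 8   # ~8 MB of synthetic, partly repetitive data

# In-memory round trip: no storage involved, so CPU and caches dominate.
t0 = time.perf_counter()
packed = lzma.compress(data)
lzma.decompress(packed)
print(f"in-memory round trip:   {time.perf_counter() - t0:.2f} s")

# Disk-backed round trip: the same work, but the data comes from real files.
with open("bench_input.bin", "wb") as f:
    f.write(data)
t0 = time.perf_counter()
with open("bench_input.bin", "rb") as f:
    packed = lzma.compress(f.read())
with open("bench_output.xz", "wb") as f:
    f.write(packed)
with open("bench_output.xz", "rb") as f:
    lzma.decompress(f.read())
print(f"disk-backed round trip: {time.perf_counter() - t0:.2f} s")
```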
 

bit_user

Go with SFP+ used/open box/new hardware from eBay.
This is where I started, actually. I bought 2 NICs and a copper SFP+ cable to connect them. That allowed me to connect my main workstation with my fileserver @ 10 Gbps, without the additional cost of an expensive switch. It was an affordable option (I think the total cost was a bit over $200), all the way back in 2016.

Then, when I got that Netgear MS510TX switch, I could still use one of those NICs to connect to the switch's SFP+ port. For the other machine, I bought an Aquantia NIC on Amazon, which listed for around $90 but I got two on Black Friday sale for a mere $67 each. Aquantia got acquired by Marvell Technology shortly thereafter, which might be why people don't hear about them much.

Anyway, the benefit of the switch was that I could use the second Aquantia card in my other main desktop machine, to connect it via one of the switch's 5 Gbps ports.

There is nothing preventing you from using the 1st x16 slot for the NIC and the 2nd for the graphics card, if it makes the motherboard happier.
If you have 2 x16 slots, populating the second will usually drop them both to x8. So, that could affect your graphics performance, especially if you have a faster GPU and a PCIe 3.0 motherboard.
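If you want to see what actually got negotiated, here's a minimal Python sketch (Linux-only assumption, reading standard sysfs attributes) that prints each PCI device's current link speed and width:

```python
# Minimal sketch (Linux-only assumption): print each PCI device's negotiated
# link speed and width from sysfs, to see whether a GPU dropped to x8 after
# the second x16 slot was populated. Uses only standard sysfs attributes.
import glob
import os

for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
    speed_path = os.path.join(dev, "current_link_speed")
    width_path = os.path.join(dev, "current_link_width")
    if not (os.path.exists(speed_path) and os.path.exists(width_path)):
        continue  # not every device exposes PCIe link attributes
    with open(speed_path) as f:
        speed = f.read().strip()
    with open(width_path) as f:
        width = f.read().strip()
    print(f"{os.path.basename(dev)}: {speed}, x{width}")
```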

You do NOT need a switch if you only have one PC and a NAS or a server such as ProxMox or just a 2nd PC. Just directly connect the 2 machines with one cable and set your NICs up manually with fixed IP addresses.
As consumer-grade NAS boxes tend to have RJ45, you need a 10GBase-T SFP+ module, but the flip side is you don't need an SFP+ copper cable.
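As a rough way to sanity-check such a direct link, here's a minimal Python sketch of a naive TCP throughput test (the port, chunk size, 1 GiB total, and fallback 10.0.0.1 address are arbitrary assumptions; a dedicated tool like iperf3 will do this better):

```python
# Naive point-to-point TCP throughput check. Run "python3 tput.py server" on
# one machine and "python3 tput.py client <server-ip>" on the other.
import socket
import sys
import time

PORT = 5001
CHUNK = 1 << 20   # 1 MiB per send
TOTAL = 1 << 30   # 1 GiB total payload

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        with conn:
            received = 0
            t0 = time.perf_counter()
            while True:
                buf = conn.recv(CHUNK)
                if not buf:
                    break
                received += len(buf)
            dt = time.perf_counter() - t0
            gbps = received * 8 / dt / 1e9
            print(f"received {received / 1e9:.2f} GB from {addr[0]} "
                  f"in {dt:.2f} s = {gbps:.2f} Gbps")

def client(host):
    payload = b"\0" * CHUNK
    with socket.create_connection((host, PORT)) as conn:
        sent = 0
        while sent < TOTAL:
            conn.sendall(payload)
            sent += len(payload)

if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "server":
        server()
    else:
        client(sys.argv[2] if len(sys.argv) > 2 else "10.0.0.1")
```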

For the switch (if necessary), search for "switch 10GbE SFP" and filter on "10 Gigabit Ethernet"; from time to time you'll find models such as: https://www.ebay.com/itm/314318172336

If your PC already has an onboard 10GbE NIC, usually an Aquantia chipset, you'll need a 10GBase-T RJ-45 SFP+ transceiver (e.g. https://www.ebay.com/itm/234685127893) to connect it to an SFP+ switch. Usually $45 to $50.
If you need a lot of those transceivers and don't have many machines, then something like the $300 (new) 5-port 10GBase-T switch I found on Amazon might be a better option.

For the cables, just dig into this: https://www.ebay.com/sch/i.html?_fr...0&_odkw=10gbe+SFP+DAC+cable&_osacat=0&_sop=15. $10 to $50 depending on length (<1m to 10m). If you need very long cables, you will have to go optical with SFP+ transceivers and AOC cables. Costs more.

Note: not all 10GbE SFP+ cables are the same; avoid the ones saying Cisco. I tried one and it did not work with my MikroTik switch. Then I tried one that did not mention Cisco and it worked A-OK.
Exactly. The cables have a chip in the connector and some SFP+ ports only work with some cables.

FINAL NOTE
Used 10GbE SFP+ cards and switches are continuously appearing on eBay. They come from data centers that dropped 10GbE entirely. The hardware was designed years before the 2.5GbE and 5GbE standards were written. These cards and switches will NOT auto-negotiate to 2.5GbE or 5GbE (thus you set up a Linux router to link the two subnets).
An easier option is to get a multi-gig switch, like the ones I mentioned. There are others, besides. They often have a couple SFP+ ports, enabling you to use them as an exclusive solution, or to daisy-chain them off a SFP+ 10 Gigabit switch.

Another thing to watch out for, with datacenter-grade SFP+ switches, is fan noise. Many multi-gig switches on the market today are aimed at the SOHO market, and won't make excessive amounts of noise. I also found a Noctua fan that was a drop-in replacement for mine.

5GbE?
Avoid the USB 5GbE adapters like the plague.
Yeah, and 5 Gbps NICs seem to have largely disappeared, from what I can see. Also, ports on multi-gigabit switches tend to be either 2.5 G or 10 G capable. Any 10 G "multi-gigabit" ports will down-negotiate to 5 Gbps, but the older 10 Gbps PHYs go straight from 10 Gbps to 1 Gbps.

My guess is that the manufacturers will just loop through initial high prices for 2.5GbE, then for 5GbE, and then for 10GbE, each time presenting it as the "new new thing, best of the best", while 10GbE has been used for 2+ decades in data centers.
I do wonder if 5 Gbps will stage a comeback, once 2.5 becomes more mainstream. The benefit 5GBase-T has over 10 is better compatibility with legacy cabling, as well as being a little cheaper and lower-power.

Anyway, as I stated above, 10GBase-T was only standardized in 2006. Before that, datacenters were using fiber and copper SFP+ cables for it. Not a big discrepancy, but a few years newer than you're saying. I'm not sure when the 10GBase-T NICs started appearing on the market, but a popular Intel chipset, the X550-AT2, launched in Q4 2015. It's made on 28 nm and has a TDP of 11 W, FWIW. People may not think about power, when it comes to Ethernet, but especially with 10GBase-T, it's something to keep an eye on.

It isn't only the up-front price though, there is also the per-port power to consider. Datacenters don't care much if 10GBase-T ports burn 5-7 W each instead of 0.5-3 W, but a normal person may care that their 5-port router requires active cooling while burning 40-50 W vs. ~10 W for their passively cooled 1GbE one.
This is one reason (beyond mere cost) why multi-gigabit switches are a compelling option for consumers. With not all ports being 10 Gbps capable, you don't need as much power or cooling.
 

bit_user

I agree the gains from new generation CPUs are very much real and are very beneficial for some people and some workloads, however I think you are missing the main idea. These workloads are not the norm.
No, I was responding to the statement that DDR5 is "crap". It's not, and it never was.

That's different than a discussion about whether it's necessary or worth the cost. All I needed to do was to demonstrate that it very much isn't and wasn't "crap", which I've done.

The average computer user does not need a high end CPU.
Yes. That's a different discussion, though.

High bandwidth is useful when you need to access large linear blocks, like when writing to a frame buffer. But a lot of random I/O for small pieces of data is DDR5's weakness.
That's where doubling the channel-count could be useful. Each DDR5 DIMM is actually comprised of 2 subchannels. So, for heavily-threaded workloads that tend to queue up lots of requests to the memory controller, it turns out to be a win if you trade a little latency for more bandwidth and parallelism.
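To illustrate the general point about linear vs. random access (this is not a DDR4-vs-DDR5 comparison), here's a minimal Python/NumPy sketch; the array size is an arbitrary assumption, chosen to be much larger than typical CPU caches:

```python
# Rough illustration of why access pattern matters for memory performance.
# NOT a DDR4-vs-DDR5 benchmark; it just contrasts streaming vs. random access.
import time
import numpy as np

N = 1 << 25                      # ~33M float64 elements, ~256 MB
a = np.ones(N)
idx = np.random.permutation(N)   # random gather order

t0 = time.perf_counter()
b = a.copy()                     # sequential, streaming access
seq = time.perf_counter() - t0

t0 = time.perf_counter()
c = a[idx]                       # random gather: mostly cache misses
rnd = time.perf_counter() - t0

print(f"sequential copy: {seq:.3f} s   random gather: {rnd:.3f} s   "
      f"slowdown: {rnd / seq:.1f}x")
```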

It will be interesting to see how X3D is affected, performance-boost-wise, as DDR5's bandwidth reduces one of the extra cache's benefits.
Indeed. On the flip side, Zen 4 is wider and more bandwidth-hungry than Zen 3 was, so I expect to see the 7800X3D offer similar performance margins as the 5800X3D, if perhaps a little lower.
 

bit_user

Didn't the US recently impose tech sanctions on China with crippling import tariffs? Could be related.
That's confusing two different things. The US has import tariffs on some goods from China (I think in the range of 10% to 25%). There's been a pause placed on some of these, since the pandemic, which has continued to be extended. It's possible some clause in the sanctions could still come into effect for motherboards having a certain key feature...

The second thing is restrictions on providing tech to the Chinese semiconductor manufacturers who are unable to show their products have no military applications.

Anyway, Europe is actually a bigger market than North America. So, that could be another reason why more products make it there than here.
 

citral23

I already corrected you on this at least once before. The 7950X gets a multi-application average benefit of 45% better multithreaded performance than its predecessor, while the 7600X is about 35% faster than its predecessor. These are pretty astonishing single-generation improvements, for CPUs with the same core/thread count.

Now, if you're talking about FPS, then sure. But, anyone using a slower GPU and/or higher resolution tends not to be very CPU-limited, anyhow. So, I think this really isn't about lack of improvements by AMD or Intel. Rather, it's more about single-generation upgrades generally not making much sense for most people.

This kind of reply/tone marks you as a 'full of himself' person.

He's right that the previous gen is so good that only niche cases need an upgrade. A 5700X with 3600 MHz DDR4 compiles a whole buildroot in less time than it takes me to read the news, meanwhile.
The PC boots in 5 seconds, and transfers are stupid fast even with SATA SSDs.

There's no "need" to upgrade from a daily performance point of view, even for heavy multithreaded work, unless you have a niche case for it.

The only really interesting part would be DDR5/RDNA2 or 3 desktop APUs imho.
 

bit_user

It's 45% more performance for ~45% more power (87 W for the 5950X vs. 125 W for the 7950X at stock).
It's not an improvement in performance; it's just the same performance overclocked to the max.
That's a mischaracterization, and you know it. It would be true if Zen 4's power scaling were linear, but it's not (as I'm sure you're aware). I guess I'll just have to cite this, again:
[chart: performance vs. power scaling, Zen 4 vs. Raptor Lake]

We see here that Raptor Lake derives most of its performance advantage from more power, but Zen 4 can deliver almost the same performance even as you scale down power.

At least Intel gets a much bigger performance increase for the extra power.
That argument only plays for overclockers who are going to crank up the Watts, no matter what. For people without extreme cooling setups, who buy the 65 W versions, or who want Eco mode, then Zen 4 becomes the much more efficient choice.

What's rather galling about your statement is that you had just implied Zen 4 had linear scaling and that it was a deficiency. Now, in the same post, you seem to be implicitly acknowledging that it doesn't. But, you're instead implying it's a virtue when Raptor Lake does it. You're spinning so much it's making you dizzy.

For what it's worth, how well a CPU scales performance with power is a bit of a sideshow. It can shed some insight into various aspects of the architecture and implementation, but what really matters is which CPU offers better performance at a given (actual) power dissipation. And at which point you choose to measure it is a function of what applications you're looking at (e.g. ultraportable laptop, performance desktop, cloud server, etc.).
 

bit_user

This kind of reply/tone marks you as a 'full of himself' person.
Well, how many times do I need to keep citing the same sources? I could keep copying and pasting the same data, but I think people here get tired of it, at some point. Worse, I don't even know if @Alvar "Miles" Udell has seen them.

I don't actually disagree with the overall sentiment of that post. To understand why I felt compelled to post a correction, you have to pick apart the exact formulation I quoted, which I'll underline, so it's hopefully more clear:

"Even if current generation parts cost the same as the previous generation, AMD especially, that's still a lot of money to pour in for a small gain, which for most cases is no noticeable gain.​
It's that troubling "for a small gain" and the way it's anchored by the clause that follows. We can agree about the following clause, but if you're going to establish that "a small gain" isn't simply what's experienced by most users, then it sounds like Zen 4 is only ever a small gain. And that inference is what I was trying to address.

In the face of an over-broad statement, I just wanted to set the record straight about what Zen 4 actually achieved, since I can sense a misleading "ho hum" narrative setting in about Zen 4. If @Alvar "Miles" Udell has any questions or objections to what I said, I'll be happy to follow up with facts and figures, but otherwise not, as I don't even know if they're seeing my replies.
 
With $100 US you can have 32 GB of RAM, a Xeon 2666 v3, and a Chinese motherboard. Why spend thousands of dollars? People wanna game, and these old motherboards now have "turbo unlock" and the ability to run AMD SAM (ReBAR). Put a 3060 Ti on it and it's done. This is why the market is going down.
I know these chips don't have the IPC, but for the money, a 13400 would need to be 3 or 4 times faster to compensate for the price.

https://ibb.co/HpxzcJv my 82 dollar cpu kicking
 

Tac 25

I still say it's more the fact that Zen 3 and Intel 10th gen and newer aren't slow enough to justify replacing your core system. Even if current generation parts cost the same as the previous generation, AMD especially, that's still a lot of money to pour in for a small gain, which for most cases is no noticeable gain.

referring to the part I put in bold.

Yeah, no reason for me to want a new mobo. The 10600K here in the house is still super fast for the PlayStation 4 and PlayStation 5 games that I play. It can even encode with Handbrake while gaming.

currently waiting for Tekken 8 and Infinity Nikki. I expect the 10600K to handle these two games just fine as well.
 
Well, how many times do I need to keep citing the same sources? I could keep copying and pasting the same data, but I think people here get tired of it, at some point. Worse, I don't even know if @Alvar "Miles" Udell has seen them.

I don't actually disagree with the overall sentiment of that post. To understand why I felt compelled to post a correction, you have to pick apart the exact formulation I quoted, which I'll underline, so it's hopefully more clear:

"Even if current generation parts cost the same as the previous generation, AMD especially, that's still a lot of money to pour in for a small gain, which for most cases is no noticeable gain.​
It's that troubling "for a small gain" and the way it's anchored by the clause that follows. We can agree about the following clause, but if you're going to establish that "a small gain" isn't simply what's experienced by most users, then it sounds like Zen 4 is only ever a small gain. And that inference is what I was trying to address.

In the face of an over-broad statement, I just wanted to set the record straight about what Zen 4 actually achieved, since I can sense a misleading "ho hum" narrative setting in about Zen 4. If @Alvar "Miles" Udell has any questions or objections to what I said, I'll be happy to follow up with facts and figures, but otherwise not, as I don't even know if they're seeing my replies.

What real-world gains are seen?

Real world - I own a small IT consultancy and run Business Central / VS Code / Firefox / Office local apps / VMware machines / a small amount of video encoding. That's how I make money - my work laptop is an out-of-date 10th gen i7 w/ 32GB RAM, a BitLocker-encrypted (GDPR requirement) PCIe 3 Sabrent Rocket NVMe, and a 2070 GPU. It boots near instantly, it loads my apps almost instantly - all of the wait time I have is down to the internet of wherever I happen to be on that day. I see no reason to upgrade, as there's no non-synthetic way that I'll notice a difference.

My home machine is a Ryzen 2700x / 32gb / 1080ti / nvme x2 + ssd + access to my home NAS (built for 'work purposes' and not a games library at all!) over 1gb wired. I don't notice any speed difference apart from a few specific examples (40k Dawn of War takes 10 minutes to save if installed on the NAS. A couple of games don't like being on a remote drive). I did some tests when I built the system and GTA-V (slowest loading game I had at the time) initial loading times were within 10s of each other when comparing nvme / SSD and NAS and no different in-game. I have literally hundreds of games ranging from tiny stuff like FTL to Cyberpunk and Control - I can never tell where they are stored from real-world perceptions. Yes I'd see some performance improvement if I upgraded the GPU, which would then probably cascade down into new CPU / mobo / RAM etc - but for the cost? Yeah I can afford it, but there's no way I'm paying the money required to see a significant upgrade over what I have now, no matter what the benchmarks say.
 
That's a mischaracterization, and you know it. It would be true if Zen 4's power scaling were linear, but it's not (as I'm sure you're aware). I guess I'll just have to cite this, again:
[chart: performance vs. power scaling, Zen 4 vs. Raptor Lake]

We see here that Raptor Lake derives most of its performance advantage from more power, but Zen 4 can deliver almost the same performance even as you scale down power.
Ahhhh I see, the result of one test alone is more true than a result of 45 individual tests put together because... it suits your agenda better....
Cinebench R23 is not all of the applications that humans have access to.
Neither are 45 apps, but it's a much better gauge than a single application.
[chart: multi-application power consumption comparison]
 

RichardtST

No, not if you're doing heavily-multithreaded work.
[chart: SPEC2017 results, DDR4 vs. DDR5]

So, that works out to 31.3% faster SPEC2017int and 37.4% faster SPEC2017fp. In Raptor Lake, the differences are surely even further amplified, but they didn't repeat the same test.

People use these CPUs for more than gaming, you know? But, for the casual gamers, DDR5 is quite likely a boon for iGPU performance. You'd need a big iGPU, which makes it harder to test, since those only shipped in the BGA laptop-oriented processors. Did any Alder Lake 96 EU mobile parts ship in laptops with (LP)DDR4?


Don't worry, I won't.


Sounds like wishful thinking, IMO. Sure, prices have had some effect on suppressing purchasing, but I wouldn't say that's the biggest.

I think the biggest factors are that most upgrades happened back in 2020-2021 and the economic downturn/uncertainty. The latter had two effects: to put off anyone still in the market for an upgrade and to turn off the taps on corporate spending.

I went and read the article on that link, looked at all the graphs, but nothing indicates any such improvement. In fact the graphs are useless as they don't show anything that is actually comparable or even state what they are comparing. Homey does statistics and reads graphs with the best of them. Sorry, but the article may talk a big game, but the results in the graphs do not convey them. The conclusion of the article is incorrect. If there really were this large of a gain it would be slathered all over the internet and youtube in particular. It is not. And that is because this 33-50% DDR5 advantage is purely imaginary. All other results that I have seen indicate anywhere from minus 5% to plus 5%. It is simply not worth it right now.
 

Loadedaxe

For me,
While upgrading and building two new machines every year for me and my wife was an exciting hobby, it was a reward for busting my hump all year and it was fun to do.
I can't justify a new build currently, as the pricing is just too high. PC hobbies were affordable and fun, and spending $2400 a year for two new mid-range PCs was easy. $4600 for two new ones, not so much. The PCs I planned for 2023 cost way too much to justify the increase in performance.

My wife and I will wait another year or two. I did add some storage this year to my current PCs, but that is it. Prices will need to drop substantially for me to continue building a new PC every year or two, especially for video cards.
 

bit_user

Ahhhh I see, the result of one test alone is more true than a result of 45 individual tests put together because... it suits your agenda better....
Except you answer with a graph that measures something completely different and just happens to span more apps.

And you have the sheer temerity to accuse me of an agenda, Intel?

Cinebench R23 is not all of the applications that humans have access to.
Well, if it's more apps you want, the article I cited has some. I didn't quote them all, because they show a similar pattern and I included a link so people could go and see for themselves.

[charts: additional per-application benchmark results from the cited article]

 

bit_user

What real-world gains are seen?
The data to which I was referring is from the Windows 11 Multithreaded Geomean scores from Tom's 7950X and 7600X review article. To answer your question, follow this link and you can see exactly which programs it included.



It boots near instantly, it loads my apps almost instantly - all of the wait time I have is down to the internet of wherever I happen to be on that day.
If you're happy with your current hardware, I'm not saying you should upgrade it. People who find themselves waiting for CPU-bound workloads might reach a different conclusion.

Whether newer hardware can deliver better performance is a different question from whether it's worth upgrading, for a specific user.
 

bit_user

I went and read the article on that link, looked at all the graphs, but nothing indicates any such improvement.
Here's how I arrived at the specified numbers:
  1. Take the score using DDR5, for a given benchmark.
  2. Divide it by the score using DDR4, for the same benchmark.
  3. Multiply by 100, then subtract 100, and it tells you how much faster the CPU is on that benchmark with DDR5 than DDR4.
So the SPECint2017_r for the i9-12900K is 80.53 with DDR5 and 61.33 with DDR4. The ratio is 1.313. This means DDR5 is 31.3% faster than DDR4, on that benchmark suite.
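As a worked example of that arithmetic in Python (the variable names are just for illustration):

```python
# Percentage uplift from DDR4 to DDR5, using the scores quoted above.
ddr5_score = 80.53   # i9-12900K, SPECint2017_r with DDR5
ddr4_score = 61.33   # i9-12900K, SPECint2017_r with DDR4

uplift_pct = (ddr5_score / ddr4_score) * 100 - 100
print(f"DDR5 is {uplift_pct:.1f}% faster on this suite")   # prints ~31.3%
```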


In fact the graphs are useless as they don't show anything that is actually comparable or even state what they are comparing.
So, we've now covered how to use them. As for what they state they're comparing, it's a summary of the other benchmarks on that page. You'll see two other graphs on that page which contain the breakdown for each composite: SPECint2017 and SPECfp2017. The int portion consists of 10 tests, while the fp portion consists of 12. The names hint at what they're doing, but the details can be seen on the spec.org website:



Sorry, but the article may talk a big game, but the results in the graphs do not convey them. The conclusion of the article is incorrect. If there really were this large of a gain it would be slathered all over the internet and youtube in particular. It is not. And that is because this 33-50% DDR5 advantage is purely imaginary. All other results that I have seen indicate anywhere from minus 5% to plus 5%. It is simply not worth it right now.
The first rule of using data is that you must understand what it measures. That's the key to understanding why it doesn't align with what else you've seen or heard on youtube and elsewhere.

The data I cited specifically measures heavily-multithreaded workloads. I'm guessing the youtubers and websites you follow or read tend to be more concerned with gaming performance, which tends not to be as bottlenecked on memory. The same article has an entire page devoted to analyzing the gaming performance of DDR4 vs. DDR5.



As previously mentioned, they used a fairly basic spec of DDR4 memory, whereas a better spec probably would've closed the gap shown in their gaming benchmarks. On the other hand, they also used the slowest DDR5 memory on the market (this was Nov. 2021, so not many options there). So, if you really wanted to know which you should use for gaming, then that article isn't a very good resource.

Different applications have different bottlenecks, and are therefore best suited by different sorts of hardware configurations. To be successful, you really need to measure what you're trying to optimize. You're not going to build a well-optimized high-volume transaction server by using gaming benchmarks or vice versa. That also doesn't mean either configuration is "crap". It just means that it might be "crap" for a specific purpose.
 

pauljh74

I still say it's more the fact that Zen 3 and Intel 10th gen and newer aren't slow enough to justify replacing your core system. Even if current generation parts cost the same as the previous generation, AMD especially, that's still a lot of money to pour in for a small gain, which for most cases is no noticeable gain.

Many of us are running much older hardware with partial upgrades and they're still competent for gaming today. I'm only just about to demote my overclocked i5-2500K to a download/media PC attached to the TV and about to complete a full build. That's 11 years on the same CPU. Admittedly the GPU has gone from an AMD HD6950 to an Nvidia GTX1080. I've been buying and/or building PCs for gaming since 1997 and I've been upgrading the CPU and MB every 2 years, with the GPU around the same. My Core 2 Duo was the first to break the trend and lasted about 4 years as a main PC.
CPU performance has risen at a faster pace than demand, and GPU improvement more so. Definitely if you're running a gaming build that is 2-3 generations old, maybe a new GPU is all that is desired. It isn't "needed", but will maximise your gaming - that is why I said desired. 10 years ago we had 4 cores/4 threads from the mid-range Core i5; now the i5-13600K is 14 cores/20 threads. I picked up the AMD 7700X - that's 16 threads, all at higher stock per-core clock rates than pretty much anything 5+ years old.
Storage is another area - 10 years ago I moved to an SSD - it breathes new life even into older machines, and the newer M.2 drives are some 10 times or more faster than the SATA SSDs.

We're seeing a 2-tier performance path appear. You can either get a still very capable Intel 11th/12th gen or up to Ryzen 5xxx with DDR4 memory and get away with around AUD$2500 for a full new build with a sub-$1000 GPU (like an RX 6750 XT), or it quickly blows out to $4k plus for a new-gen DDR5 build. I just spent $5k on mine with a 4070 Ti. If you don't need a GPU for gaming, it can be closer to $1000 with a 6 core/12 thread CPU.
 

PiranhaTech

I agree with the general sentiment here. When one reviewer was reviewing a $170 motherboard and said "this is a great entry-level"... I about screamed at the monitor. Premium boards are $500. When Biostar is making $500 motherboards. BIOSTAR! What...? (However, I don't think I ever had a Biostar motherboard die on me). They didn't even come up with a new brand name!

I grew up as a poor PC builder, and the budget boards were $70-ish. The premium boards were maybe $200. I can afford the $200-ish boards now, but in the past, no. For me, game consoles are kind of a baseline for a budget/entry-level system. The build should maybe be $400-600 ($100-300 higher than the price of a game console, ish; you are building a general-purpose computer).

I do give AMD a little pass because they are incorporating bleeding-edge tech, and AMD has been dropping prices eventually, but man, they also are part of the problem. Both AMD and Intel have been dropping prices eventually, and I think people are waiting for the AMD price drops if they can. If you are a gamer, it's probably worth waiting, especially once DDR5 matures. Remember that DDR4 used to be more expensive than DDR3.

If my GPU hadn't started going out during the pandemic, I probably would have waited.
 

Sleepy_Hollowed

You know how old 10Gbit/s over Ethernet is? It has been used for over 20 years.

When consumers pay $300 for a mobo, the bare minimum should be a 10Gbit/s switch.

Taiwanese mobo makers are making cheap, outdated crap, stuffing old Ethernet chips and old USB 2.0 chips in there, and asking premium prices.

If this market collapses, I say good. Maybe we can finally get more than 1 USB-C port.

Cool, but home routers barely have 2.5G right now, 10G is not widespread, not even at the WAN port, and that's their fault mostly. Were they to have 10G ports available on all of them, this would not happen.
 

domih

.../...
Another thing to watch out for, with datacenter-grade SFP+ switches, is fan noise.
.../...

Yes to all your OP. We went through the same exercises :)

For the fan noise, I just replace the fans with Noctuas and usually don't put the top back on the switch. This not only eliminates the noise but also significantly reduces the power consumption. The fans in data center switches are powerful (they have to ensure airflow through the whole rack) and are responsible for most of the switch's consumption. Finally, as you mentioned, data center switches consume a lot less in home usage.

Beyond 10GbE, I set up a 56 Gbps InfiniBand subnet at my home/SOHO. This provides me with ~45 Gbps using IPoIB (IP over InfiniBand). It's a little bit more involved, but the process is identical to 10GbE: find the cards, switch(es) and cables on eBay. 45 Gbps is heaven when you transfer 25+ GB VM images between servers. Last but not least, the whole setup with used hardware is much less expensive than current "consumer" 10GbE.
 

tek-check

Asus, Gigabyte, MSI and ASRock saw motherboard shipments drop by around 10 million in 2022, claims an IT industry journal.

Motherboard Shipments Plummet by Ten Million Units in 2022: Report
The numbers don't add up to 10 million. The table suggests it's 13 million. Can you please address it?

It seems that multiple tech outlets simply regurgitate the same information without even looking into the numbers. What happened to the accuracy of tech journalism?
 

DaveWis

Maybe I am just grumpy.

I don't see much tech on the market this year which provides a good bang-for-the-buck improvement over my existing home/office gear. Especially when considering increased energy consumption and cooling costs.
 

bit_user

I don't see much tech on the market this year which provides a good bang-for-the-buck improvement over my existing home/office gear. Especially when considering increased energy consumption and cooling costs.
The 65 W models, or "Eco mode" on the performance models, do a lot to tame power & cooling requirements.

It'll be interesting to see how the laptop models compare, as that's the ultimate test of energy efficiency. They're a little harder to compare, since you really want to find two laptops with different CPU brands, but equal cooling capacity and power settings.
 
Except you answer with a graph that measures something completely different and just happens to span more apps.

And you have the sheer temerity to accuse me of an agenda, Intel?


Well, if it's more apps you want, the article I cited has some. I didn't quote them all, because they show a similar pattern and I included a link so people could go and see for themselves.
Completely different from the imaginary stuff you are making up, yup.
I said that the 7950X uses 45% more power than the 5950X for just as much improvement.
How is power-to-performance scaling on the 7950X alone even relevant to this?!
The link I posted shows the 7950X using 45% more power than the 5950X; it shows exactly what I talked about.

Sure, the 7950X is being punished so hard by AMD's stock settings that it doesn't even reach the advertised usage.
But that's just my point: they had to push the 7950X to use so much more power because that was the only way for them to get any performance improvement.
The 5950X was 105 W and was able to hit 142 W before overclocking; it had headroom.
From your link, the 7950X can't even reach the power target that AMD wishes it would, never mind 40% headroom. It has negative headroom; it throttles out of the box.
[chart: 7950X power consumption]