News Desktop CPU Sales Lowest in 30 Years, AMD Gains Market Share Anyway


Johnpombrio

Distinguished
Nov 20, 2006
248
68
18,770
While 2021 was a good year for AMD, over the last two quarters AMD has increased its revenue while making less money than before; they are dumping their products at discount prices to at least make some money out of them.
This quarter they made more than 40% less money compared to Q1 even though their revenue increased.

Code:
AMD Quarterly Revenue and Net Income (Millions of US $)

Quarter        Revenue    Net Income
2022-06-30     $6,550     $447
2022-03-31     $5,887     $786
2021-12-31     $4,826     $974
2021-09-30     $4,313     $923
2021-06-30     $3,850     $710
2021-03-31     $3,445     $555

AMD's Gross Margin was only a couple of percentage points lower this quarter than last. Net income can be affected by a LOT of things (like buying another company or paying off loans), but if the Revenue is high and the Gross Margin is high, that is the sign of a healthy company.
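For anyone who wants to sanity-check the "more than 40% less" figure against the table above, here's a quick sketch (figures transcribed from this post, in millions of USD, not pulled from AMD's filings):

Code:
# Quarter-over-quarter change, using the figures from the table above
# (millions of USD, as transcribed in this post).
revenue = {"2022-Q1": 5887, "2022-Q2": 6550}
net_income = {"2022-Q1": 786, "2022-Q2": 447}

rev_change = (revenue["2022-Q2"] / revenue["2022-Q1"] - 1) * 100
inc_change = (net_income["2022-Q2"] / net_income["2022-Q1"] - 1) * 100

print(f"Revenue change Q1 -> Q2:    {rev_change:+.1f}%")   # about +11.3%
print(f"Net income change Q1 -> Q2: {inc_change:+.1f}%")   # about -43.1%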
 
  • Like
Reactions: prtskg and bit_user
this is predictable.

Intel's getting bit by the early adopter bug for DDR5. With how insanely expensive intel MBs and DDR5 are, no one is buying their chips even if they are marginally faster than DDR4 AMD chips. The AMD motherboards are far more affordable, and just as high quality, and current AMD chips get close to intel performance on DDR4 ram, which is much, much cheaper than DDR5. Yes those intel chips can run on DDR4 on the right motherboard, but then the benching performance falls and those motherboards are still pretty expensive, so the extra gaming performance starts to vanish on DDR4.

Combine this with the HIGH power numbers the intel chips draw (needing a larger psu) and the much higher temp issues with the intel chips (needing a custom cooler, whereas AMD chips work fine with their default cooler), and you have an intel gaming platform that can cost as much as 80%-100% more than a comparably performing AMD gaming platform. Those i5 intel chips selling for $200 needing a $200 mb, $500 ram, $100 cpu cooler, and $150 psu ($1150), look really poor in front of an r5 selling for 250, which needs a 100 mb, 100 ram, 0 cpu cooler, and 80 psu ($530).
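A quick sketch summing those example builds, to show where the bracketed totals come from (these are the same ballpark prices listed above, not current market quotes):

Code:
# Totals for the two example builds above (ballpark prices as listed,
# not current market quotes).
intel_build = {"i5 CPU": 200, "motherboard": 200, "DDR5 RAM": 500,
               "CPU cooler": 100, "PSU": 150}
amd_build = {"R5 CPU": 250, "motherboard": 100, "DDR4 RAM": 100,
             "stock cooler": 0, "PSU": 80}

intel_total = sum(intel_build.values())   # 1150
amd_total = sum(amd_build.values())       # 530
print(f"Intel build: ${intel_total}, AMD build: ${amd_total}")
print(f"Intel build costs {(intel_total / amd_total - 1) * 100:.0f}% more")  # ~117% with these example prices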
 
  • Like
Reactions: ottonis
this is predictable.

Intel's getting bit by the early adopter bug for DDR5. With how insanely expensive intel MBs and DDR5 are, no one is buying their chips even if they are marginally faster than DDR4 AMD chips. The AMD motherboards are far more affordable, and just as high quality, and current AMD chips get close to intel performance on DDR4 ram, which is much, much cheaper than DDR5. Yes those intel chips can run on DDR4 on the right motherboard, but then the benching performance falls and those motherboards are still pretty expensive, so the extra gaming performance starts to vanish on DDR4.

Combine this with the HIGH power numbers the intel chips draw (needing a larger psu) and the much higher temp issues with the intel chips (needing a custom cooler, whereas AMD chips work fine with their default cooler), and you have an intel gaming platform that can cost as much as 80%-100% more than a comparably performing AMD gaming platform. Those i5 intel chips selling for $200 needing a $200 mb, $500 ram, $100 cpu cooler, and $150 psu ($1150), look really poor in front of an r5 selling for 250, which needs a 100 mb, 100 ram, 0 cpu cooler, and 80 psu ($530).
Not to completely disagree, but you're not taking into account a couple things:
1- Intended usage of the platform.
2- Connectivity options.
3- Target audience per tier.

If we're talking from a semi-pro + heavy enthusiast perspective, going 12900K/KF/KS and DDR5 will definitely cost more than, say, a 5950X and DDR4, but there will be a performance benefit. If that delta is what you're looking for, then there's no point in arguing. Same argument applies going up to Threadripper parts, although the jump is a tad more insane, lel.

Gaming is an interesting one, which is probably your angle: this for me is essentially a tie with a slight benefit on Intel. Why? Because the 12400 and 12600K exist. Both of them offer great all-around performance that AMD has a bit of trouble matching, so AMD can only fight on overall platform cost with the 5800X3D or 5600 (non-X) at the two ends of its "value" spectrum, depending on what type of gamer you are. The 5800X3D definitely sweeps the floor with the 12th-gen family until you're willing to go DDR5, so in terms of "value", AMD still has a strong offering there, but in terms of overall platform goodies, Intel is just slightly better. The price difference is not that terrible either if you compare B660/H670 + 12400 against B550 + 5800X3D (assuming anyone buying a 12400 will just use DDR4): the 5800X3D gives you more frames, the B660/H670 offers really good connectivity at similar price points, the lower price of the 12400 soaks up most of that price difference, and it needs (on paper and in tests) the same cooling as the 5800X3D.

So, all in all, not quite right IMO.

Regards.
 

bit_user

Polypheme
Ambassador
Yes that is the point the "reporters" are making.
Do we have any link to any actual intel statement?
Intel isn't going to come out and admit strong-arming their customers. Especially if they end up in court, such a statement could hurt them greatly. And there's no upside to it, either.

TSMC has products from like 3 or 4 big players that are upcoming, already produced to a large degree but not yet sold.
TSMC orders are still very backlogged.

Hmm... the more emphatic the denial, the more heavily weighs the guilt.
 

bit_user

Polypheme
Ambassador
Intel's getting bit by the early adopter bug for DDR5. With how insanely expensive intel MBs and DDR5 are, no one is buying their chips even if they are marginally faster than DDR4 AMD chips.
Uh, but as you note, Alder Lake actually supports both DDR4 and DDR5. It's dependent on which motherboard you buy, however. That was a smart move, on their part.

In contrast, AMD's Ryzen 6000 series is limited to DDR5.

Now, if you want to talk PCIe 5.0, then I think you might be onto something.

Yes those intel chips can run on DDR4 on the right motherboard, but then the benching performance falls
I think that just shows what a win DDR5 is, for such highly-threaded CPUs. If AMD simply took a Ryzen 9 5950X and updated it to support DDR5, I'll bet they'd see a similar performance differential.

Those i5 intel chips selling for $200 needing a $200 mb, $500 ram, ...
The lower you go down the product stack, the less performance delta there is between DDR4 and DDR5. Someone buying an i5 + DDR5 would get better performance per $ by getting an i7 with DDR4.
 
TSMC orders are still very backlogged.
That's beside the point and completely irrelevant.
Digitimes, which you quoted as reliable, has a story about everybody cutting orders from TSMC.
I know it's a trash site, but their sources aren't (Digitimes, Toms Hardware).
We are talking about how much fake news there is...
Both companies are faced with fewer sales in the future, and both companies have announced an increase in prices.
Either you believe that both stories are true or neither.
https://www.tomshardware.com/news/amd-apple-nvidia-reportedly-reducing-5nm-tsmc-orders
 
I am sorry, but I cannot see any reason AMD's desktop or notebook parts could gain market share when Intel's similar parts have better performance and lower prices; you and most evaluators said so. Intel's server share may be lagging mainly because Sapphire Rapids is delayed; I can believe that. Is there something wrong here, or a scam by AMD?
One simple reason I can think of is that AMD's 5000-series initially only targeted higher-end price-points. They did this because the series was offering higher performance than Intel's 10th-gen all around, and they knew demand would outpace their supply, even at relatively high prices. Intel's 11th-gen processors did not quite retake the lead, but Intel had since adjusted the prices of most of their lineup to better target the lower-end to mid-range, while AMD essentially wasn't offering any desktop CPUs for under $300. Intel's 12th-gen finally retook the lead at the high-end though, so several months back AMD finally released new, lower priced models to better target the mid-range after largely being absent from that market for a year and a half. So it makes sense to see their sales up, as their product range was expanded to target a wider range of budgets at the start of the quarter.

While 2021 was a good year for AMD, over the last two quarters AMD has increased its revenue while making less money than before; they are dumping their products at discount prices to at least make some money out of them.
This quarter they made more than 40% less money compared to Q1 even though their revenue increased.
You mean like Intel had to do with their processors in recent years? : 3 It's not so much "dumping products at discount prices" as it is adjusting prices of hardware released a year and a half prior to account for new hardware that has since made its way to the market. Obviously, they couldn't still sell the processors at such a high price-per-core once those processors were no longer offering industry-leading performance. The prices they are selling their processors for now are more along the lines of what they sold prior generations of Ryzen processors for.

I have a theory for this. I don't think that it's related to a covid buying surge. We had one hell of a computer buying surge in the years leading up to Y2K (remember that? : ) and microprocessor shipments in 2000-2002 were supposed to be way down, but that didn't happen. Sales kept going.
While most people today are not going to see much benefit from getting a new computer every several years, unlike the late 90s and early 2000s when there was a much more noticeable difference, I suspect the surge in sales related to the covid-lockdowns still played a role in reduced sales now. For those with older systems, having to stay at home for a while, and in many cases work at home, gave them a reason to push forward their eventual computer upgrades that may have otherwise come in the following years (where we are now). So it's only natural that would have at least some ongoing impact on sales.

Except for games, where sloppy, bad coding rules, the hardware has competed itself to a level where a 12th-gen i3 is pretty darned good for most needs, even gaming with a low- to upper-midrange card, and software bloat hasn't kept up.
I wouldn't necessarily say games are badly coded (though some are, or are crippled by excessive copy protection). Often it's just that they are doing demanding things in the background, like AI, physics simulation, and so on. For the most part, games in recent years have actually been relatively light on the CPU, since most have continued to be designed with nearly decade-old console chips in mind. People have become more demanding of higher frame rates though, as high refresh-rate screens have become common. I suspect CPU performance may become even more limiting as games increasingly drop support for the older consoles and focus on the new ones though, so something like the current i3s might have their days numbered for running modern games smoothly.

Intel's getting bit by the early adopter bug for DDR5. With how insanely expensive intel MBs and DDR5 are, no one is buying their chips even if they are marginally faster than DDR4 AMD chips. The AMD motherboards are far more affordable, and just as high quality, and current AMD chips get close to intel performance on DDR4 ram, which is much, much cheaper than DDR5. Yes those intel chips can run on DDR4 on the right motherboard, but then the benching performance falls and those motherboards are still pretty expensive, so the extra gaming performance starts to vanish on DDR4.
When it comes to today's games running on Intel's 12th-gen processors, there is little performance difference between a relatively cheap kit of DDR4 and a kit of DDR5 costing 2-3 times as much. In many cases, DDR4 will outperform DDR5 in games due to DDR5 having higher latency on anything but the highest-end kits. And even in most applications, DDR5 doesn't exactly add that much to a processor's performance. So at least for Intel's 12th-gen processors, people are not missing out on much by going with DDR4 in place of DDR5. That's likely to be more of a concern for AMD in the short-term, as it doesn't sound like AM5 will even offer DDR4 support. And power draw on Alder Lake also seems decent, roughly on par with AMD's current processors, at least aside from the i9s, which push clock rates a little too far. Intel's current lineup actually seems pretty decent, though of course we'll have to see what AMD's new processors have to offer in the coming months.
 
  • Like
Reactions: JWNoctis

bit_user

Polypheme
Ambassador
That's beside the point and completely irrelevant.
I agree that your point about TSMC is irrelevant.

We are talking about how much fake news there is...
Maybe you are.

Either you believe that both stories are true or neither.
First, that's not actually true. Second, it's beside the point.

This is getting silly. I can't prove that Intel is raising prices on Alder Lake in bad faith, and you can't prove they're not. And so there it sits.

However, contrast that to AMD cutting prices to deal with their inventory glut, and Intel just doesn't look good by comparison.
 

bit_user

Polypheme
Ambassador
Hi, Cryoburner. Good to chat with you, again.

And power draw on Alder Lake also seems decent, roughly on par with AMD's current processors, at least aside from the i9s, which push clock rates a little too far.
I guess you only looked at gaming power, where Ryzen non-APUs are at a disadvantage due to their I/O die. However, if you look at raw compute power & efficiency, you couldn't be much further from the truth. From everything I've seen, it seems clear that "Intel 7" is less power-efficient than TSMC N7.

Since you carved out an exception for the i9, let's focus on the i7-12700K, with its TDP of 125 W and Turbo Peak of 190 W. This is not "on par with AMD".



In case you missed it, here are multithreaded power consumption & efficiency charts from the i7-12700K review. I included only one power chart per app, but the others are pretty consistent with these. As you'll see, the stock i7 uses more power and has worse efficiency than not only the stock R9 5900X, but in all but one case (plot excluded) it's worse than even the stock R9 5950X!

[Attached: multithreaded power consumption and efficiency charts from the i7-12700K review]
 
  • Like
Reactions: JWNoctis
I guess you only looked at gaming power, where Ryzen non-APUs are at a disadvantage due to their I/O die. However, if you look at raw compute power & efficiency, you couldn't be much further from the truth. From everything I've seen, it seems clear that "Intel 7" is less power-efficient than TSMC N7.

Since you carved out an exception for the i9, let's focus on the i7-12700K, with its TDP of 125 W and Turbo Peak of 190 W. This is not "on par with AMD".
Still, I'd say it's "decent" in most real-world workloads, especially compared to something like the 11th-gen processors. In that Handbrake efficiency chart, the 12700K with DDR4 was only around 10% behind the 5800X, and 14% behind the 5900X. While it drew more power than the 5900X, it also slightly outperformed it. That's a massive improvement from the 11700K, which only offered around half the efficiency in that test. The 12700K's efficiency is not quite as favorable in the Blender test, seeing as the 5900X slightly outperforms it while drawing around 24% less power at stock, but that's still a big improvement over the 11th-gen results. The "stress test" synthetic results are not really as meaningful, as they don't necessarily relate to how efficiency will compare in real-world software.

And once you get down to the i5 level, we see that the 12600K is only around 2% behind the 5800X in their Handbrake test, drawing a little more power, but also being a little faster, and it's a similar case in Blender. I would call that "roughly on par". At this lower core-count, the separate IO chip is likely causing AMD's efficiency lead to evaporate, but that's still relevant as far as the actual hardware goes. And that likely goes for moderately-threaded workloads on the higher core-count processors as well.
 

bit_user

Polypheme
Ambassador
Still, I'd say it's "decent" in most real-world workloads, especially compared to something like the 11th-gen processors.
Heh, if Rocket Lake is your standard, then I'd agree that Alder Lake is definitely much closer to the trend lines on efficiency. As you can see in the charts I posted above, stock Rocket Lake often burns more power and has worse efficiency than even their overclocked Alder Lake counterparts! But it's been well established that Rocket Lake is quite an outlier, so I think that's a very low standard.

The 12700K's efficiency is not quite as favorable in the Blender test,
Are you referring to the article, now? Because I didn't see a chart on that. Efficiency is performance per Watt. The provided Blender charts were just in Watts used.

The "stress test" synthetic results are not really as meaningful, as they don't necessarily relate to how efficiency will compare in real-world software.
Stress tests are useful for establishing upper bounds. If someone has a compute-heavy use case that's not covered in the article, then they can use the stress-test as an upper-bound of how much heat the CPU will churn out.

And once you get down to the i5 level, we see that the 12600K is only around 2% behind the 5800X in their Handbrake test, drawing a little more power, but also being a little faster, and it's a similar case in Blender. I would call that "roughly on par".
That's cherry-picking the most efficient Alder Lake, which happens to pair up with one of the least efficient Ryzen 5k models. And on one test.

It's not fair to use that comparison to characterize the efficiency of the entire Alder Lake range. Though, I don't blame you for trying.

Looking at the aggregate of the results presented above, a fair-minded person can draw only the conclusion that Alder Lake is significantly hotter and less-efficient than Ryzen 5k, overall.

And that likely goes for moderately-threaded workloads on the higher core-count processors as well.
I wouldn't be too sure about that, because I've also seen evidence that Alder Lake will burn a lot of power on lightly-threaded workloads. Zen 3 tops out at about 12 W per core, but Alder Lake will burn up to 45 W per Golden Cove core. I'll see if I can dig up a source.

Also, we should be clear what we mean by "lightly-threaded". I figure that's about 1/4 of the threads busy. Much less than that, and you're really talking about single-threaded.

FWIW, here's that review's only chart for single-threaded power:

[Attached: single-threaded power consumption chart from the i7-12700K review]
 

rluker5

Distinguished
Jun 23, 2014
625
377
19,260
Uh, I guess 1/2 is a fraction, but people typically use "a fraction" to mean quantities smaller than that.

Benchmarks of the 5600G vs. 5600X show the X has a distinct performance advantage, but it's less than 10%. Not worth making a huge deal over.

BTW, Intel uses different dies in its entry-level and mainstream laptops, as well. Only their H-series typically uses the same die as their desktop CPUs, but I think Alder Lake discontinued that trend.


You definitely need to take another look at those charts...


No, the main thing to watch out for is whether it's a 15 W or a 35 W part. That makes a huge difference, and it's mostly in the additional power delivery and cooling which lets the 35 W parts stretch their legs.
Outside of the igpu, the best current AMD apu style desktop chip, the 5700g, is comparable to a 9900k, so Skylake. That is a large dropoff from the 5800x in performance. Like the mobile Jaguar chips were.

I just don't like people trying to pretend the performance of the two is equivalent. With Intel, if you could hold the same clocks and memory speed with their desktop vs mobile chips you would get the same performance. But ofc power limitations exist and do affect performance.
 

bit_user

Polypheme
Ambassador
Outside of the igpu, the best current AMD apu style desktop chip, the 5700g, is comparable to a 9900k, so Skylake.
No, that's not a fair comparison. The i9-9900K was almost 1.5x the price of the 5700G and uses significantly more power (rated: 95 W vs. 65 W).

It's also not accurate to say "Skylake" when you mean "Coffee Lake R". Coffee Lake R shares the same microarchitecture as Skylake, but runs at higher clock speeds and has other tweaks. When you compare a specific implementation of Zen3 with another CPU, you need to be more specific about which one.

Even taking that comparison at face value, it's also quite incorrect to say that it's comparable. I couldn't find a single review which directly compared them, but AnandTech has several benchmarks in common between these two reviews (which thankfully also includes data for the i9-9900K):

Comparing those test results, we see that:
  • Agisoft Photoscan: 5700G is 19.3% faster
  • 3D Particle Movement (non-AVX): 5700G is 11.3% faster
  • 3D Particle Movement (AVX): 5700G is 1.1% faster
  • Y-Cruncher (MT; 250M): 5700G is 5.6% faster
  • Corona: 5700G is 1.1% faster
  • Dolphin 5.0 Render Test: 5700G is 5.9% faster
  • DigiCortex (32k Neuron; 1.8B synapse): 5700G is 171% faster!*
  • 7-Zip (Combined): 5700G is 20.6% faster
  • AES Encoding: 5700G is 14.6% faster
  • Mozilla Kraken: 5700G is 32.5% faster
  • Google Octane: 5700G is 40.9% faster
  • x264 HD 3.0: 5700G is 5.3% faster
  • SPEC2006 (1T): 5700G is 6.5% faster
  • SPEC2017 (1T): 5700G is 10.4% faster
  • SPEC2017 (nT): 5700G is 14.4% faster

* The DigiCortex result is not a mistake, as the 5700G review confirms "The Zen3 processors have been doing well in the Digicortex test due to the rearrangement of load/store ports with better memory access. We're seeing >2x gen-on-gen gains here."

So, in terms of raw CPU performance, it's not even close. It's a resounding victory for the 5700G.

However, we still haven't accounted for power. The 5700G will peak at 87.4 W, while the i9-9900K will draw up to 168.5 W, as tested in the above article. So, whether you compare their base power (i9-9900K uses 46% more) or peak power (i9-9900K uses 93% more), the two aren't even in the same league.
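A quick check of those two power ratios, using the rated TDPs and the peak figures cited above:

Code:
# Rated TDP and peak package power (W), as cited above.
power = {
    "5700G":    {"rated": 65, "peak": 87.4},
    "i9-9900K": {"rated": 95, "peak": 168.5},
}

for kind in ("rated", "peak"):
    extra = power["i9-9900K"][kind] / power["5700G"][kind] - 1
    print(f"{kind}: i9-9900K uses {extra * 100:.0f}% more power")
# rated: ~46% more, peak: ~93% more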

I just don't like people trying to pretend the performance of the two is equivalent.
That's certainly no worse than your mischaracterization of Ryzen 5000G-series as Skylake-tier, which it's most definitely not. In fact, yours is downright shameful - not only to be wrong on the facts, but also in the way you lazily use "Skylake" to include even CPUs 3 generations newer and the failure to compare CPUs in the same power or price band.
 
  • Like
Reactions: -Fran-

rluker5

Distinguished
Jun 23, 2014
625
377
19,260
No, that's not a fair comparison. The i9-9900K was almost 1.5x the price of the 5700G and uses significantly more power (rated: 95 W vs. 65 W).

It's also not accurate to say "Skylake" when you mean "Coffee Lake R". Coffee Lake R shares the same microarchitecture as Skylake, but runs at higher clock speeds and has other tweaks. When you compare a specific implementation of Zen3 with another CPU, you need to be more specific about which one.

Even taking that comparison at face value, it's also quite incorrect to say that it's comparable. I couldn't find a single review which directly compared them, but AnandTech has several benchmarks in common between these two reviews (which thankfully also includes data for the i9-9900K):
Comparing those test results, we see that:
  • Agisoft Photoscan: 5700G is 19.3% faster
  • 3D Particle Movement (non-AVX): 5700G is 11.3% faster
  • 3D Particle Movement (AVX): 5700G is 1.1% faster
  • Y-Cruncher (MT; 250M): 5700G is 5.6% faster
  • Corona: 5700G is 1.1% faster
  • Dolphin 5.0 Render Test: 5700G is 5.9% faster
  • DigiCortex (32k Neuron; 1.8B synapse): 5700G is 171% faster!*
  • 7-Zip (Combined): 5700G is 20.6% faster
  • AES Encoding: 5700G is 14.6% faster
  • Mozilla Kraken: 5700G is 32.5% faster
  • Google Octane: 5700G is 40.9% faster
  • x264 HD 3.0: 5700G is 5.3% faster
  • SPEC2006 (1T): 5700G is 6.5% faster
  • SPEC2017 (1T): 5700G is 10.4% faster
  • SPEC2017 (nT): 5700G is 14.4% faster
* The DigiCortex result is not a mistake, as the 5700G review confirms "The Zen3 processors have been doing well in the Digicortex test due to the rearrangement of load/store ports with better memory access. We're seeing >2x gen-on-gen gains here."

So, in terms of raw CPU performance, it's not even close. It's a resounding victory for the 5700G.

However, we still haven't accounted for power. The 5700G will peak at 87.4 W, while the i9-9900K will draw up to 168.5 W, as tested in the above article. So, whether you compare their base power (i9-9900K uses 46% more) or peak power (i9-9900K uses 93% more), the two aren't even in the same league.


That's certainly no worse than your mischaracterization of Ryzen 5000G-series as Skylake-tier, which it's most definitely not. In fact, yours is downright shameful - not only to be wrong on the facts, but also in the way you lazily use "Skylake" to include even CPUs 3 generations newer and the failure to compare CPUs in the same power or price band.
Check the gaming benchmarks with dgpu.
Nobody is going to use either a 9900k or 5700g for those applications you listed.
And if you want to go to power consumption, remember 14 vs 7, old vs current.
You say coffee lake refresh isn't Skylake arch, yet defend Ryzen apu as Ryzen desktop?
If you needed security and had to disable SMT on the 5700g, it would perform like 8 Raptor ecores, which most people would disable anyway because of their lack of usefulness.
You can't say the same about desktop Ryzen, which has that little heater of an io die. That's my point. They are different enough that conflating them is intentionally misleading.
 

bit_user

Polypheme
Ambassador
Check the gaming benchmarks with dgpu.
That's not what you said. You made an absolute statement about Ryzen 5000 APUs, and it was flat-out wrong.

Nobody is going to use either a 9900k or 5700g for those applications you listed.
That's definitely not true, and those benchmarks also represent a wide diversity of applications that should cover just about any compute-intensive workload.

And if you want to go to power consumption, remember 14 vs 7, old vs current.
You didn't have to pick a K-series to compare it with. At least comparing it to another 65 W CPU would've been more reasonable in some sense.

As for the manufacturing technology, the end user doesn't care. Nor should they. They just know how much it costs, how fast it is, how much heat and noise their PC is making, and how bulky/heavy/expensive the case, CPU cooler, and PSU are to deal with the heat & power requirements.

You say coffee lake refresh isn't Skylake arch, yet defend Ryzen apu as Ryzen desktop?
Your original post said "Skylake", which is both a CPU generation and the common name for a core microarchitecture. If you didn't mean Skylake as in the 6th gen Core-i series, then you should have said something like "Skylake-derivative", or better yet "Coffee Lake R". Doesn't matter, because it's still wrong. However, it does show just how hypocritical you are in your complaints that people don't distinguish Zen3 APUs from non-APUs, when you're so easily lumping together 4 generations of Intel CPUs with vastly different core counts and clock speeds.

If you needed security and had to disable SMT on the 5700g, it would perform like 8 Raptor ecores, which most people would disable anyway because of their lack of usefulness.
I'd ask on what basis you're making this assertion, but I'm quite sure it's utter rubbish. You have no data to support this, because there is none. Even once Raptor Lake is released, the data will show it's not true.

You can't say the same about desktop Ryzen, which has that little heater of an io die. That's my point. They are different enough that conflating them is intentionally misleading.
Ryzen 5000G and non-G are the same cores. The only meaningful differences affecting CPU performance are in the amount of L3 cache and the SoC interconnect. You vastly overestimate how much that matters.

Your reply is the equivalent of flipping the chess board, once your opponent has you check-mated. Having your assertion proven wrong and nonsensical, you suddenly decide it doesn't matter. If you want people to take you seriously, I suggest you check your assumptions before making such sweeping claims.
 

rluker5

Distinguished
Jun 23, 2014
625
377
19,260
That's not what you said. You made an absolute statement about Ryzen 5000 APUs, and it was flat-out wrong.


That's definitely not true, and those benchmarks also represent a wide diversity of applications that should cover just about any compute-intensive workload.


You didn't have to pick a K-series to compare it with. At least comparing it to another 65 W CPU would've been more reasonable in some sense.

As for the manufacturing technology, the end user doesn't care. Nor should they. They just know how much it costs, how fast it is, how much heat and noise their PC is making, and how bulky/heavy/expensive the case, CPU cooler, and PSU are to deal with the heat & power requirements.


Your original post said "Skylake", which is both a CPU generation and the common name for a core microarchitecture. If you didn't mean Skylake as in the 6th gen Core-i series, then you should have said something like "Skylake-derivative", or better yet "Coffee Lake R". Doesn't matter, because it's still wrong. However, it does show just how hypocritical you are in your complaints that people don't distinguish Zen3 APUs from non-APUs, when you're so easily lumping together 4 generations of Intel CPUs with vastly different core counts and clock speeds.


I'd ask on what basis you're making this assertion, but I'm quite sure it's utter rubbish. You have no data to support this, because there is none. Even once Raptor Lake is released, the data will show it's not true.


Ryzen 5000G and non-G are the same cores. The only meaningful differences affecting CPU performance are in the amount of L3 cache and the SoC interconnect. You vastly overestimate how much that matters.

Your reply is the equivalent of flipping the chess board, once your opponent has you check-mated. Having your assertion proven wrong and nonsensical, you suddenly decide it doesn't matter. If you want people to take you seriously, I suggest you check your assumptions before making such sweeping claims.
So if I buy a Ryzen apu style chip I should expect the same performance as a Ryzen desktop chip if I can hit the same clocks, and not the core-for-core, clock-for-clock game performance that was consistent throughout the Skylake arch era?
We both know the 5000 desktop series beats the Skylake arch in any reasonable gaming scenario.

And no, you can't accurately predict gaming performance from synthetics. The 5800x3d is a good example of this.

The only reason I went with the 9900k is it had the same number of cores and about the same clocks as that 5700g. Am I not allowed to compare on an even playing field or is AMD above such things?

There are some gaming comparisons and they support my point that Ryzen apus perform an entire gen less in dgpu gaming than their desktop counterparts and roughly equivalent to Skylake gen core and clock matched. To pretend they perform on par is misleading and anyone getting mobile Ryzen should know this ahead of time. Not that they are bad, just not on the same level as desktop.
Just like the two tiers of processors AMD used to have with Jaguar and FX. You wouldn't tell somebody to expect FX processor performance from a similarly named Jaguar build would you?
 

bit_user

Polypheme
Ambassador
So if I buy a Ryzen apu style chip I should expect the same performance as a Ryzen desktop chip if I can hit the same clocks
I didn't say "the same". What I said is "within 10%".

Unfortunately, I don't know of benchmarks that include both 5700X and 5700G, but AnandTech included the 5600X in their review of the 5600G. Here's a summary of how they compare. I used the same tests as above, even though the article has more (in the previous post, I was limited to the tests which were common to the two different reviews I had to use). Unfortunately, I had to replace the SPEC tests with Geekbench + Handbrake, because the SPECbench results excluded the 5600G for some reason.

(5600X/5600G - 1) * 100%, by test:
  • Agisoft Photoscan: 4.5%
  • 3D Particle Movement (non-AVX): 10.6%
  • 3D Particle Movement (AVX): 9.5%
  • Y-Cruncher (MT; 250M): 11.8%
  • Corona (renderer): 14.9%
  • Dolphin 5.0 Render Test: 5.9%
  • DigiCortex (32k Neuron; 1.8B synapse): 15.4%
  • 7-Zip (Combined): 14.7%
  • AES Encoding: 14.4%
  • Mozilla Kraken: 7.5%
  • Google Octane: 8.4%
  • x264 HD 3.0: 8.3%
  • Geekbench 5 (ST): 8.9%
  • Geekbench 5 (MT): 9.6%
  • Handbrake H264 720p -> Youtube: 5.1%
  • Mean: 9.97%

Wow. Just squeaked under 10%!
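If you want to reproduce that mean, here's the arithmetic with the per-test deltas transcribed from the list above (the same one-liner on the gaming list below gives its 14.3% mean):

Code:
# Percent deltas (5600X over 5600G) transcribed from the list above.
deltas = [4.5, 10.6, 9.5, 11.8, 14.9, 5.9, 15.4, 14.7,
          14.4, 7.5, 8.4, 8.3, 8.9, 9.6, 5.1]
print(f"Mean: {sum(deltas) / len(deltas):.2f}%")   # 9.97%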

However, one thing I hadn't noticed during the comparison with the Coffee Lake R i9-9900K is that the 5700G benchmarks used integrated graphics (except during the dGPU gaming tests), yet the i9-9900K benchmarks used a dGPU for the entire test suite.

Therefore, the 5600G and 5700G are comparing a little worse than they would if you used a dGPU, because even when you're not doing anything graphically intensive, the iGPU is still burning a little power and using a little memory bandwidth.

Because you brought up dGPU gaming benchmarks, here are all the games they tested at 1080p.

Gaming Test (RTX 2080 Ti @ 1080p), (5600X/5600G - 1) * 100%:
  • Chernobylite (Max): 9.1%
  • Civilization VI (Max): 31.7%
  • Deus Ex MD (Max): 1.6%
  • Final Fantasy 14 (Max): 18.8%
  • F1 2019 (Ultra): 9.1%
  • Far Cry 5 (Ultra): 24.0%
  • Red Dead 2 (Max): 5.7%
  • Mean: 14.3%

Other than remarking about L3 cache on the Civilization benchmark, the article doesn't venture explanations for the other two big disparities (FF14 and Far Cry 5).

The only reason I went with the 9900k is it had the same number of cores and about the same clocks as that 5700g. Am I not allowed to compare on an even playing field or is AMD above such things?
It's always fine to pose a question. The issue is making claims without having the data to support them.

There are some gaming comparisons and they support my point that Ryzen apus perform an entire gen less in dgpu gaming than their desktop counterparts
Yes, we see that. It's very uneven, though. Not a consistent story.

To pretend they perform on par is misleading and anyone getting mobile Ryzen should know this ahead of time.
True. That's why it's good to have a diverse set of benchmarks. However, it's important to spend enough time looking through them to see what they can tell you.
 
  • Like
Reactions: rluker5

rluker5

Distinguished
Jun 23, 2014
625
377
19,260
I didn't say "the same". What I said is "within 10%".

Unfortunately, I don't know of benchmarks that include both 5700X and 5700G, but AnandTech included the 5600X in their review of the 5600G. Here's a summary of how they compare. I used the same tests as above, even though the article has more (in the previous post, I was limited to the tests which were common to the two different reviews I had to use). Unfortunately, I had to replace the SPEC tests with Geekbench + Handbrake, because the SPECbench results excluded the 5600G for some reason.

(5600X/5600G - 1) * 100%, by test:
  • Agisoft Photoscan: 4.5%
  • 3D Particle Movement (non-AVX): 10.6%
  • 3D Particle Movement (AVX): 9.5%
  • Y-Cruncher (MT; 250M): 11.8%
  • Corona (renderer): 14.9%
  • Dolphin 5.0 Render Test: 5.9%
  • DigiCortex (32k Neuron; 1.8B synapse): 15.4%
  • 7-Zip (Combined): 14.7%
  • AES Encoding: 14.4%
  • Mozilla Kraken: 7.5%
  • Google Octane: 8.4%
  • x264 HD 3.0: 8.3%
  • Geekbench 5 (ST): 8.9%
  • Geekbench 5 (MT): 9.6%
  • Handbrake H264 720p -> Youtube: 5.1%
  • Mean: 9.97%

Wow. Just squeaked under 10%!

However, one thing I hadn't noticed during the comparison with the Coffee Lake R i9-9900K is that the 5700G benchmarks used integrated graphics (except during the dGPU gaming tests), yet the i9-9900K benchmarks used a dGPU for the entire test suite.

Therefore, the 5600G and 5700G are comparing a little worse than they would if you used a dGPU, because even when you're not doing anything graphically intensive, the iGPU is still burning a little power and using a little memory bandwidth.

Because you brought up dGPU gaming benchmarks, here are all the games they tested at 1080p.

Gaming Test (RTX 2080 Ti @ 1080p), (5600X/5600G - 1) * 100%:
  • Chernobylite (Max): 9.1%
  • Civilization VI (Max): 31.7%
  • Deus Ex MD (Max): 1.6%
  • Final Fantasy 14 (Max): 18.8%
  • F1 2019 (Ultra): 9.1%
  • Far Cry 5 (Ultra): 24.0%
  • Red Dead 2 (Max): 5.7%
  • Mean: 14.3%

Other than remarking about L3 cache on the Civilization benchmark, the article doesn't venture explanations for the other two big disparities (FF14 and Far Cry 5).


It's always fine to pose a question. The issue is making claims without having the data to support them.


Yes, we see that. It's very uneven, though. Not a consistent story.


True. That's why it's good to have a diverse set of benchmarks. However, it's important to spend enough time looking through them to see what they can tell you.
Techpowerup did a review on the 5700g and has a good variety of CPUs to compare it to. It does do better than the 9900k in most general uses but trails it in 720p gaming with a 3080 by 6.9%. Techpowerup has always struck me as a little biased towards Intel so a better comparison would be vs the 5800x where the 5700g trails by 16% in the 720p games they tested. A 5700g probably isn't going to be used with the expectation of top-of-the-line performance in professional uses, but a casual user may expect desktop Ryzen performance in games, where they are only getting kind of close. They would be getting significantly better power consumption for light use though. The chip is just set up differently. Better for mobile uses for sure, but different nonetheless.

And good post.
 
  • Like
Reactions: bit_user

bit_user

Polypheme
Ambassador
a better comparison would be vs the 5800x where the 5700g trails by 16% in the 720p games they tested.
It's worth noting that 5800X has 105 W TDP, while 5700G has 65 W. If we can't find benchmarks that directly compare the 5700X with the 5700G, then the next best thing to do would be to offset the 5800X vs 5700G results by the difference between the 5800X and 5700X.
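To make that offset idea concrete, here's a minimal sketch; the 16% figure is from the TechPowerUp comparison mentioned above, while the 5800X-vs-5700X gap is just an assumed placeholder, not a measured number:

Code:
# Estimate a hypothetical 5700X-vs-5700G gap from two known-style gaps.
gap_5800x_vs_5700g = 1.16   # 5800X ~16% faster than 5700G (720p games, cited above)
gap_5800x_vs_5700x = 1.03   # assumed small 5800X-over-5700X lead (placeholder)

est_5700x_vs_5700g = gap_5800x_vs_5700g / gap_5800x_vs_5700x
print(f"Estimated 5700X vs 5700G: {(est_5700x_vs_5700g - 1) * 100:.0f}% faster")  # ~13% with these inputs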
 
  • Like
Reactions: rluker5
Are you referring to the article, now? Because I didn't see a chart on that. Efficiency is performance per Watt. The provided Blender charts were just in Watts used.
I based the efficiency on their performance results for that same blender test elsewhere in the review, compared to the wattage chart. They only included an efficiency chart for the Handbrake results, but it's possible to determine the others based on each processor's performance in those tests.

For the Blender Barcelona test, the stock 12600K with DDR4 drew 8.6% more power than the stock 5800X on average, but the 5800X was shown to take 3.6% longer to complete the render. So, in terms of efficiency, the two were within 5% of one another, not exactly a difference that would amount to much. In the Classroom test, the 12600K drew 11.5% more power, but the 5800X took 5.0% longer, for around a 6% difference. Interestingly, in the BMW test, they showed the 12600K to only draw 3.4% more power, but the 5800X took 12.5% longer. So at least in that example, the 12600K was not only faster, but also around 9% more efficient.
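Since efficiency here is work per unit of energy (and energy = average power x run time), those relative numbers follow directly from "X% more power" and "Y% longer"; a minimal sketch of the arithmetic:

Code:
def relative_efficiency(power_ratio_a_over_b, time_ratio_b_over_a):
    """Efficiency of chip A relative to chip B for the same fixed workload.

    Energy = average power * run time, and efficiency = work / energy,
    so eff_A / eff_B = (P_B * t_B) / (P_A * t_A).
    """
    return time_ratio_b_over_a / power_ratio_a_over_b

# Blender Barcelona: 12600K drew 8.6% more power, 5800X took 3.6% longer.
print(relative_efficiency(1.086, 1.036))  # ~0.95 -> within about 5%

# Blender BMW: 12600K drew 3.4% more power, 5800X took 12.5% longer.
print(relative_efficiency(1.034, 1.125))  # ~1.09 -> 12600K about 9% more efficient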

In Handbrake x265, the 12600K drew 14.3% more power, but the 5800X took 11.2% longer to complete the test, so efficiency of the two was within 3%. In the x264 test, the 12600K drew 11.6% more power, but the 5800X took 9.1% longer, so a similar 2% difference in efficiency.

And if you want to look at the synthetic test results, in the Y-Cruncher single-threaded test, the 12600K drew 14.3% more power, but the 5800X took 13.3% longer, for around a 1% difference in efficiency. In the multithreaded test, the 12600K drew 11.7% more power, but the two basically tied to within a fraction of a percent as far as performance goes. I'm not sure those synthetic results are all that meaningful though. I kind of wish they included power charts for more real-world workloads, not just ones based on rendering and video encoding that push all cores to their limits, as the vast majority of people won't often be using their processors for such tasks.

So, while the 12600K did tend to draw a little more power than the 5800X under load, it also tended to perform better, resulting in the efficiency being relatively close. It will draw more power and in turn put out more heat when all cores are loaded, but the greatest difference we see in their tests is under 15%, which is not exactly huge.

That's cherry-picking the most efficient Alder Lake, which happens to pair up with one of the least efficient Ryzen 5k models. And on one test.

It's not fair to use that comparison to characterize the efficiency of the entire Alder Lake range. Though, I don't blame you for trying.
Well, there's the other tests. : P And it's not really cherry-picking when it's arguably the 12th-gen part with the most mass-market appeal among those represented in the charts you posted. I would argue that the ~$400+ niche enthusiast models are less representative of what most will be buying. And it makes sense to compare parts with as similar performance as possible. Though seeing as the 12600K is the faster of the two, it should naturally be at a disadvantage in terms of efficiency. If one were to reduce the clock rates a bit to provide more comparable performance to the 5800X, the efficiency would likely improve.

I suppose I could have compared the 12400 against the 5600X to keep the core configurations more equivalent, though from a quick glance at the charts, Tom's appears to show the 12400 to not only be a little faster on average in their aggregate results, especially at single-threaded tasks, but also more efficient. Again, that's likely due to the IO chip and unused cores holding back the efficiency of the 5600X a bit.

Of course, Alder Lake came out a year after the Ryzen 5000 series, and the 7000 series might improve efficiency further. Or maybe not, if pushing clock rates higher happens to have more of an adverse effect on efficiency than the architectural/process gains. Though that goes for Raptor Lake too.
 

bit_user

Polypheme
Ambassador
They only included an efficiency chart for the Handbrake results, but it's possible to determine the others based on each processor's performance in those tests.
But let's take another look at that chart. If you eliminate the PBO entries, it clearly shows the 5800X is the least efficient Ryzen 5000 model they reviewed, and the i5-12600K is the most efficient Alder Lake CPU! You're comparing an outlier on the low end of Ryzen 5k (efficiency-wise) to an outlier on the high end of Alder Lake. Thus, your chosen matchup is not at all characteristic of how the respective CPU families compare.

it's not really cherry-picking when it's arguably the 12th-gen part with the most mass-market appeal among those represented in the charts you posted.
It is absolutely cherry-picking, when you don't pick samples representative of their respective product lines. Your original statement was not limited to those two models, but applied across the entire product stacks of both brands. The most accurate way to do it would be to use geomean efficiency of Alder Lake models vs. Ryzen 5k.
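Something like this, as a minimal sketch of the geomean approach; the per-tier ratios here are placeholders to show the mechanics, not measured values:

Code:
from math import prod

# Placeholder Ryzen-vs-Alder-Lake efficiency ratios per matchup (illustrative only).
ratios = {"i5 tier": 1.02, "i7 tier": 1.16, "i9 tier": 1.47}

geomean = prod(ratios.values()) ** (1 / len(ratios))
print(f"Geomean efficiency advantage: {(geomean - 1) * 100:.0f}%")  # ~20% with these placeholder inputs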

it makes sense to compare parts with as similar performance as possible.
Depends on the goal. If the objective is to inform buyers deciding between CPUs at that specific price point, then yes it makes sense to compare them. However, you're confusing the best methodology for one purpose (and that's the main purpose of reviews on sites like these) with the best methodology for the purpose of comparing the entire product lines.

The way to test how good your generalization is would be to look at how well it applies to any other matchup, in the range.

Relative Efficiency (Handbrake), by matchup:
  • 5900X vs i7-12700K (DDR4): 15.8%
  • 5900X vs i7-12700K (DDR5): 20.7%
  • 5950X vs i9-12900K (DDR4): 46.6%
  • 5950X vs i9-12900K (DDR5): 49.5%

Though seeing as the 12600K is the faster of the two, it should naturally be at a disadvantage in terms of efficiency.
That isn't a given, unless you're admitting the only way the i5-12600K pulled a win on performance is by Intel juicing its power limits. It's only a given that increasing performance comes at the expense of efficiency within a single architecture!

Consider this: Zen 3 is more efficient than the Zen 2 XT-series, even though the two CPUs were made on the exact same process node. And guess which one is faster?

If one were to reduce the clock rates a bit to provide more comparable performance to the 5800X, the efficiency would likely improve.
Yes, because guess what? It's the highest W/core across the entire Zen 3 range, both rated & actual! Worse, the 5800X has only 32 MB of L3, while the next higher models are both 64 MB. So, that puts it at a performance disadvantage. Comparing with the 5600X, the 5800X has less L3 per core.

Model  | W/core (rated) | W/core (Y-cruncher) | W/core (AIDA) | W/core (Handbrake) | W/core (Blender)
5600X  | 10.8           | 10.8                | 11.3          | 12.5               | 12.7
5800X  | 13.1           | 11.8                | 12.1          | 14.0               | 14.5
5900X  | 8.8            | 8.6                 | 10.8          | 11.3               | 11.6
5950X  | 6.6            | 6.3                 | 8.0           | 7.8                | 7.8

The only conclusion to draw from this is don't get a 5800X, if you care about power-efficiency!
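For reference, the "rated" column is just each model's TDP divided by its core count; the measured columns come from the review's power data. A quick sketch of the rated figures:

Code:
# Rated W/core = TDP / core count (the measured columns come from review data).
tdp_watts = {"5600X": 65, "5800X": 105, "5900X": 105, "5950X": 105}
cores = {"5600X": 6, "5800X": 8, "5900X": 12, "5950X": 16}

for model in tdp_watts:
    print(f"{model}: {tdp_watts[model] / cores[model]:.1f} W/core (rated)")
# 5600X: 10.8, 5800X: 13.1, 5900X: 8.8, 5950X: 6.6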

I suppose I could have compared the 12400 against the 5600X to keep the core configurations more equivalent
But the number of cores & threads is the only thing equivalent about them! On most of the multi-threaded benches, the i5-12400 gets absolutely stomped by the 5600X. The two CPUs are in different leagues, as indicated by their price disparities.

I guess another point is that the i5-12400 has a base power of 65W, which makes it seem equivalent, but it has a turbo power of 117 W, which is 54% more than we actually saw from a 5600X. Unfortunately, the i5-12400 review article on this site doesn't seem to have any power measurements. Interesting, that.

the 7000 series might improve efficiency further.
It's on a whole generation newer fab node, so we're in trouble if it's not.

Or maybe not, if pushing clock rates higher happens to have more of an adverse effect on efficiency than the architectural/process gains.
Could happen, but I sure hope not. At least, not at stock. I don't really care what happens with PBO, because most people don't use it.

Though that goes for Raptor Lake too.
Not really. Raptor Lake is made on the same node and has the same microarchitecture for both sets of cores. They're already rumored to be increasing power limits for it. Since I don't believe in miracles, I don't believe it'll be any more efficient than Alder Lake.
 
But let's take another look at that chart. If you eliminate the PBO entries, it clearly shows the 5800X is the least efficient Ryzen 5000 model they reviewed, and the i5-12600K is the most efficient Alder Lake CPU! You're comparing an outlier on the low end of Ryzen 5k (efficiency-wise) to an outlier on the high end of Alder Lake. Thus, your chosen matchup is not at all characteristic of how the respective CPU families compare.
It doesn't matter if they are outliers on those charts, as they are currently directly competing with one another with roughly similar performance and pricing. And again, the efficiency of the 12400 and 5600X also looks to be similar, so this also holds for the other "mainstream" parts that most people will be buying. The overall efficiency of this part of the lineup tends to be rather comparable. AMD's current desktop processor design allows them to be notably more efficient than Intel at high core counts, but they lose most of that advantage at lower core counts. It's definitely an advantage for those regularly working with heavily-multithreaded applications that max out a high-end CPU on all cores, but most people today don't really have a pressing need for 12+ core processors, and are more likely to be looking at processors in the sub-$300 price range. So yes, I would consider these samples the most representative of their respective product lines out of those included in the charts.

But the number of cores & threads is the only thing equivalent about them! On most of the multi-threaded benches, the i5-12400 gets absolutely stomped by the 5600X. The two CPUs are in different leagues, as indicated by their price disparities.
I have not seen a review showing that. They tend to show the 5600X and 12400 roughly evenly matched. Each has workloads where they can be faster than the other, but overall, the performance tends to be quite similar between the two. Looking at Tom's 12400 review, the 12400 was not only faster in their single-threaded benchmarks, but also slightly faster on average in their multi-threaded tests. That could of course vary depending on what applications and workloads are included in a comparison, but I wouldn't expect to see any review showing one processor "absolutely stomping" the other.

And there's also not really a major price disparity either. According to PCPartPicker, the 5600X has dropped to around $200 currently, or as low as $190 from Newegg with a promo code. The 12400 is currently $194, or $180 for the "F" version without integrated graphics.

Not really. Raptor Lake is made on the same node and has the same microarchitecture for both sets of cores. They're already rumored to be increasing power limits for it. Since I don't believe in miracles, I don't believe it'll be any more efficient than Alder Lake.
What I mean is, while Ryzen 7000's architecture will likely be more efficient than the 5000 series, they could potentially lose efficiency if they really push clock rates to their limits in an effort to be more competitive with Raptor Lake at the high-end. And "That goes for Raptor Lake too" was referring to the likelihood that Raptor Lake's efficiency will drop due to increased clock rates. But while Raptor Lake's core design will most likely not improve, the addition of more E-cores means they could potentially see some improvement in terms of efficiency for heavily multithreaded workloads, though it's questionable whether that would be enough to outweigh the increases to clock rates.

In any case, my original post was simply stating that power draw on Alder Lake "seems decent" and is "roughly on par" with AMD, outside of the i9s, and relatively speaking I'd say that holds true, especially for the i5s. The 12700K's efficiency is further behind the 5900X for heavily multithreaded workloads, though still not exactly out of the ballpark, like what we saw with some of Intel's prior high-end processors. Both lineups seem like viable options to me, at least now that AMD's prices have adjusted to account for Alder Lake's big performance gains.
 

bit_user

Polypheme
Ambassador
It doesn't matter if they are outliers on those charts, as they are currently directly competing with one another with roughly similar performance and pricing.
It matters if you use them to generalize about the product lines, which you did.

And again, the efficiency of the 12400 and 5600X also looks to be similar,
Those are not comparable, because the 12400 performs much worse on multi-threaded workloads and that's largely a consequence of its lower PL1 relative to the faster i5's. This also explains how it manages better efficiency. As discussed, when you lower the ceiling on performance, keeping all else the same, efficiency will go up. Almost by direct consequence of being a lower-performance model, it's more competitive on the efficiency front.

You know what's also more efficient? Jasper Lake. No, it performs nowhere near 5600X, but if you're not sticking to a price & performance bracket, then why stop at i5-12400?

AMD's current desktop processor design allows them to be notably more efficient than Intel at high core counts, but they lose most of that advantage at lower core counts.
That's where the G-series comes into the picture. Why else do you think they have no IOD CPUs with < 6 cores?

The 5500 and 5600G are both based on the Cezanne design, and they both smack the i5-12400 silly, on efficiency.

[Attached: CPU efficiency chart]


I have not seen a review showing that. They tend to show the 5600X and 12400 roughly evenly matched.
Most of the multi-threaded CPU benchmarks in the i5-12400 review on this site have the 5600X performing well above the i5-12400.

The reason you think they're comparable is because of Alder Lake's secret weapon: single-thread performance. In lightly-threaded workloads, the active cores can use a lot of power to pull ahead. That's how it compensates for the weaker showing on multi-threaded performance. It doesn't make the CPUs comparable, though. Instead, it just means that if you do mostly lightly-threaded work, Alder Lake is the better choice.

Looking at Tom's 12400 review, the 12400 was not only faster in their single-threaded benchmarks, but also slightly faster on average in their multi-threaded tests.
Not how I count. I got:
  • 6 ties
  • 8 wins for the i5-12400 (stock PL)
  • 14 wins for the 5600X

What I mean is, while Ryzen 7000's architecture will likely be more efficient than the 5000 series, they could potentially lose efficiency if they really push clock rates to their limits in an effort to be more competitive with Raptor Lake at the high-end.
Yes, I understood and I hope they don't push clocks that hard.

In any case, my original post was simply stating that power draw on Alder Lake "seems decent" and is "roughly on par" with AMD, outside of the i9s, and relatively speaking I'd say that holds true, especially for the i5s.
I think I've shown that Alder Lake is definitely not "on par" with Ryzen 5000. The one legit comparison you picked is pitting two outliers against each other. In the other case, you ought to be pitting the i5-12400 against Cezanne, which lacks the IOD overhead.

The 12700K's efficiency is further behind the 5900X for heavily multithreaded workloads, though still not exactly out of the ballpark, like what we saw with some of Intel's prior high-end processors.
Part of the problem with this debate is that you're using the atrocious efficiency of Rocket Lake as an anchor (see anchoring effect). We know Rocket Lake is an outlier, so it shouldn't even be taken into account.

Both lineups seem like viable options to me,
That was never in dispute.
 
Most of the multi-threaded CPU benchmarks in the i5-12400 review on this site have the 5600X performing well above the i5-12400.
At least going by the "Multi-Threaded Performance Ranking" chart in Tom's Hardware's review, the stock 12400 slightly outperformed the 5600X on average in the multithreaded workloads they included in the chart.

Though, that will admittedly depend on exactly what tests counted toward that number, and I wouldn't exactly say the 12400's win in Y-Cruncher is all that meaningful, whereas some other tests got left out. So the 12400 might not have actually performed ahead in a majority of the multithreaded applications, but it generally wasn't far behind either. Tom's Hardware's choice of benchmarks to use in that comparison does seem a bit arbitrary, though really any choice of software to benchmark could be considered arbitrary to anyone not using that software, or using it in different ways.

The reason you think they're comparable is because of Alder Lake's secret weapon: single-thread performance. In lightly-threaded workloads, the active cores can use a lot of power to pull ahead. That's how it compensates for the weaker showing on multi-threaded performance. It doesn't make the CPUs comparable, though. Instead, it just means that if you do mostly lightly-threaded work, Alder Lake is the better choice.

Of course, the vast majority of people are going to get more out of light to moderately-threaded performance than heavily-multithreaded performance. And if someone really cares a lot about heavily-multithreaded performance, they're probably going to go with something with more cores than either of these options.

However, I didn't see the 12400 pulling "a lot of power" in lightly-threaded workloads either. The power draw might be a little higher, but so is the performance. Unless you happen to know of some review showing the 12400 going crazy on power draw, but I haven't seen anything like that.

You know what's also more efficient? Jasper Lake. No, it performs nowhere near 5600X, but if you're not sticking to a price & performance bracket, then why stop at i5-12400?
We are in the same price and performance bracket. The 12400 costs about the same as the 5600X, and delivers a similar amount of performance, a bit faster in some workloads and a bit slower in others. The same goes for the 12600K compared to the 5800X. When these processors launched, the going rate for the closest AMD equivalents was much higher, but that only made those processors look a bit bad at their much higher price points. The cost of the 5000-series processors adjusted down though, and they are now priced roughly where their performance suggests they should be.

Part of the problem with this debate is that you're using the atrocious efficiency of Rocket Lake as an anchor (see anchoring effect). We know Rocket Lake is an outlier, so it shouldn't even be taken into account.
My point is that the efficiency of some of Intel's recent processors was different enough from AMD's that it could substantially affect power draw and cooling. For the most common Alder Lake processors, that's not really much of a concern though. The differences are close enough where people are unlikely to notice a difference. A cooling solution that can handle a 5800X should also be able to handle a 12600K perfectly fine, likely with little difference to fan speeds. In fact, Ryzen's chiplet design can often result in higher core temperatures even if the overall heat output is lower. And the cost of power under typical bursty usage scenarios will not likely make any meaningful difference either. Overall, these factors should not be a big concern, at least until you get into the more niche high-end models and subject them to heavy workloads. So I stand by my suggestion that they are "roughly on par" with "roughly" being the key word.
 

bit_user

Polypheme
Ambassador
At least going by the "Multi-Threaded Performance Ranking" chart in Tom's Hardware's review, the stock 12400 slightly outperformed the 5600X on average in the multithreaded workloads they included in the chart.
I have no idea what they considered a "multithreaded workload", but I counted the ones which actually seemed to scale according to the number of cores.

Another thing: benchmarks are tricky, because they typically have a limited run duration. During that time, the Intel CPU can flex its 117 W PL2 threshold to juice the results beyond what you'd see in a sustained compute job.
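A toy model of why run length matters under Intel's PL1/PL2 scheme; the 65 W / 117 W figures are the i5-12400 numbers mentioned earlier in the thread, and the boost window (tau) is just an illustrative value, not a measured one:

Code:
def average_power(run_seconds, pl1=65, pl2=117, tau=28):
    """Average package power for a run that boosts at PL2 for tau seconds,
    then settles to PL1 (a simplified model of Intel's turbo behavior)."""
    boosted = min(run_seconds, tau)
    sustained = max(run_seconds - tau, 0)
    return (boosted * pl2 + sustained * pl1) / run_seconds

print(average_power(30))    # short benchmark run: ~114 W average
print(average_power(600))   # sustained 10-minute job: ~67 W average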

My point is that the efficiency of some of Intel's recent processors was different enough from AMD's that it could substantially affect power draw and cooling. For the most common Alder Lake processors, that's not really much of a concern though.
If you're trying to say "don't worry about Alder Lake's efficiency", that's a different statement than saying it's equivalent to Ryzen 5k. The evidence I've posted contradicts the latter.

In fact, Ryzen's chiplet design ...
Except for where I highlighted the massive efficiency win of Cezanne, where the 5600G showed a 17% advantage over your best example of the i5-12400.

So I stand by my suggestion that they are "roughly on par" with "roughly" being the key word.
Again, the problem with your analysis is that you're cherry-picking some of the most efficient Alder Lake models and comparing them with less than the best Ryzens. And then throwing in the added anchor of Rocket Lake, to make it look like the difference between Alder Lake and Zen 3 is really just splitting hairs, when it's decidedly not.

We mustn't forget the massive 15% - 50% efficiency advantage Ryzen 5k showed over Alder Lake i7 and i9, as tested.
 
