News Ryzen 9000 Zen 5 CPU trails Core i9-14900K in leaked benchmark — Granite Ridge 5.8 GHz CPU shows Core i9-13900K-like single-threaded performance in...

Why people are still arguing this is weird to me; Intel has been shown to have the most efficient desktop CPUs by far

[Attached chart: efficiency-multithread.png]
How does the above chart show that Intel has the most efficient chips? All it shows is the stock 7950X beating everything except heavily power-limited Intel CPUs at the top, and it has no power-limited AMD chips to compare against it. You also need to be careful about the claims being made, because this is only Cinebench R23; different architectures are better or worse at specific tasks because of the silicon allotted to specific types of operations.
 

TheHerald

How does the above chart show that Intel has the most efficient chips? All it shows is the stock 7950X beating everything except heavily power-limited Intel CPUs at the top, and it has no power-limited AMD chips to compare against it. You also need to be careful about the claims being made, because this is only Cinebench R23; different architectures are better or worse at specific tasks because of the silicon allotted to specific types of operations.
Because Intel ships those power-limited CPUs OOB: the 14900 and the 14900T. The 7950X isn't beating anything; it's losing to last gen's 5900X and 5950X.
 
Because Intel ships those power-limited CPUs OOB: the 14900 and the 14900T. The 7950X isn't beating anything; it's losing to last gen's 5900X and 5950X.
Yet it's still way above the stock 14900K, and the 7950X3D blows everything else out of the water. You cannot just power-limit a 14900K to different amounts and call that a 14900 or a 14900T. If you want benchmarks of those CPUs to compare against, you need to test those specific CPUs.
 

TheHerald

Yet it's still way above the stock 14900K, and the 7950X3D blows everything else out of the water. You cannot just power-limit a 14900K to different amounts and call that a 14900 or a 14900T. If you want benchmarks of those CPUs to compare against, you need to test those specific CPUs.
That's because the stock 14900K's power limit is set way higher than both the 7950X's and the 7950X3D's. When you power-limit the 14900K as well, we all know what happens. We can clearly see it in the graph, no?
 
That's because the stock 14900K's power limit is set way higher than both the 7950X's and the 7950X3D's. When you power-limit the 14900K as well, we all know what happens. We can clearly see it in the graph, no?
When you power-limit the CPUs they do better on this chart, you realize that, right? My point is that if you were to take both CPUs and put them in a build without touching the BIOS, the 7950X is more power efficient, and doubly so with the 7950X3D, compared to the 1#700K/F, 1#900K/F, and 1#900KS. If we were to compare the 14900K and the 7950X at the same power, watt for watt, we could draw conclusions about just the 14900K and the 7950X, not about all Intel and all AMD CPUs as a whole, and only within Cinebench R23.
 
The 7950X isn't beating anything; it's losing to last gen's 5900X and 5950X.
Also, this is not true. The Zen 4 CPUs are not less power efficient than the Zen 3 CPUs; it's just that, as tested, the 5950X is more power limited than the 7950X, which allows the 7950X to use significantly more power to gain another 5-10% of performance. If you were to limit both the 5950X and the 7950X to the same power limit, the 7950X would deliver significantly more performance per watt.
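The arithmetic underneath all of these claims is just benchmark score divided by package power. Here is a minimal sketch of it, with placeholder numbers rather than measured results, showing why a part given a higher power limit can post a higher score yet land lower on an efficiency chart:

```python
# Minimal sketch of the performance-per-watt arithmetic discussed above.
# The scores and package-power figures are placeholders, not measurements.

def perf_per_watt(score: float, package_watts: float) -> float:
    """Benchmark score (e.g. Cinebench R23 points) per watt of package power."""
    return score / package_watts

# Hypothetical example: two CPUs compared at their stock power limits and at a matched 90 W limit.
stock  = {"cpu_a": (38_000, 253), "cpu_b": (38_500, 230)}   # (score, package watts), placeholders
iso_90 = {"cpu_a": (24_000, 90),  "cpu_b": (27_000, 90)}    # same power limit for both

for label, runs in (("stock", stock), ("90 W limit", iso_90)):
    for cpu, (score, watts) in runs.items():
        print(f"{label:>10} {cpu}: {perf_per_watt(score, watts):,.1f} pts/W")
```

With numbers like these, the part that "wins" the raw score at stock can still be the less efficient one, which is exactly the pattern being argued about in the chart above.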
 

TheHerald

My point is that if you were to take both CPUs and put them in a build without touching the BIOS
And I should care about that - why?

And if you take the 14900T and put it in without touching the BIOS, it's the most efficient desktop CPU ever known to mankind, humiliating every AMD offering.

Also, this is not true. The Zen 4 CPUs are not less power efficient than the Zen 3 CPUs; it's just that, as tested, the 5950X is more power limited than the 7950X, which allows the 7950X to use significantly more power to gain another 5-10% of performance. If you were to limit both the 5950X and the 7950X to the same power limit, the 7950X would deliver significantly more performance per watt.
Now apply your previous argument here.

My point is that if you were to take both CPUs and put them in a build without touching the BIOS, the 5950X is more power efficient than the 7950X. Right?
 
And I should care about that - why?

And if you take the 14900T and put it in without touching the BIOS, it's the most efficient desktop CPU ever known to mankind, humiliating every AMD offering.
You should care because that is how 90%+ of these CPUs will be used, meaning they will be consuming more power to do the same work. The 14900T may very well offer more performance per watt than any of AMD's non-server consumer offerings, not that I have seen it tested, but it is definitely not the most performance-per-watt consumer CPU of all time, and it's not even close. The most power-efficient CPUs that consumers use are almost undoubtedly the low-power CPUs in phones or laptops. I cannot give you a reference for what the most efficient consumer CPU is, but most mobile CPUs can deliver 20-30% of a desktop CPU's performance while using 5-45 watts of power, depending on which CPUs we are talking about. The newest CPUs from Apple, I believe, are the most efficient consumer CPUs right now.
Now apply your previous argument here.

My point is that if you were to take both CPUs and put them in a build without touching the BIOS, the 5950X is more power efficient than the 7950X. Right?
In that configuration, yes.

There is a difference between the absolute performance per watt of a CPU, its process node, its architecture, and its stock-configured power efficiency. AMD went with much more of a performance focus, as far as power is concerned, with the 7950X than with the 5950X. This is partly due to the socket change: with the AM4 socket you cannot put as much power into the 5950X as you can with the AM5 socket and the 7000 series CPUs.
 

TheHerald

You should care because that is how 90%+ of these CPUs will be used, meaning they will be consuming more power to do the same work.
And I care about how other people are going to use their computers why? Make me understand, I'm really giving you all the benefit of the doubt here.
There is a difference between the absolute performance per watt of a CPU, its process node, its architecture, and its stock-configured power efficiency. AMD went with much more of a performance focus, as far as power is concerned, with the 7950X than with the 5950X. This is partly due to the socket change: with the AM4 socket you cannot put as much power into the 5950X as you can with the AM5 socket and the 7000 series CPUs.
And that's what Intel has been doing on their K models for a while now, yet we are using those as the efficiency benchmark. Bud, please...
 
And I care about how other people are going to use their computers why? Make me understand, I'm really giving you all the benefit of the doubt here.

And that's what Intel has been doing on their K models for a while now, yet we are using those as the efficiency benchmark. Bud, please...
I am not telling you what is relevant to you; I am telling you that most people are going to be using these products in a certain manner, that is all, just stating that for context. As to your other point, you seem to make no distinction between different models of CPU. A 14900K is not a 14900, a 14900T, or even a 14900KF. So when a 14900K is tested for efficiency, that test is only relevant to other tests of a 14900K, regardless of testing settings and circumstances.

To my earlier point, stock-vs-stock efficiency is just one data point, and it may or may not be something you care about, I understand, but it is the majority use case for these CPUs. One could argue, as a general rule, that since a stock 7950X is more efficient than a stock 1#900K/F/T/S, the 7950X is the more efficient chip compared to the 1#900K/F/T/S, even if a 1#900K/F/T/S can be tweaked to be more efficient than a stock 7950X. Now, if we are talking about absolute perf per watt, no holds barred, 7950X vs 1#900K/F/T/S, that is a fundamentally different question from the above. I have not seen any benchmarks that convince me the Intel CPUs are the de facto perf-per-watt winner over the 7950X in such a no-holds-barred comparison, or over any other 7000 series CPU for that matter. @Albert.Thomas, maybe this is something that should be tested for an article, or maybe you have some insight or other references on our points you could share.
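For what it's worth, once that kind of data exists, the "no holds barred" comparison is simple to evaluate: run the same multithreaded workload on each CPU at a series of identical power limits and compare score per watt at every limit. A rough sketch, with placeholder scores rather than measurements:

```python
# Sketch of an ISO-power efficiency comparison: same workload, same power limits,
# which CPU scores higher at each limit? All figures are placeholders, not measurements.

cpu_x = {65: 18_000, 95: 23_000, 125: 27_000, 200: 33_000}  # power limit (W) -> nT score
cpu_y = {65: 20_000, 95: 25_000, 125: 28_000, 200: 31_000}

for watts in sorted(set(cpu_x) & set(cpu_y)):
    x, y = cpu_x[watts], cpu_y[watts]
    winner = "X" if x > y else "Y"
    print(f"{watts:>4} W: X = {x / watts:6.1f} pts/W, Y = {y / watts:6.1f} pts/W -> CPU {winner} leads here")
```

The stock-vs-stock comparison the thread keeps circling back to is just the single point of that sweep that each vendor happens to ship as the default power limit.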
 

blkspade

The only truly valid comparisons of microarchitecture efficiency are between CPUs manufactured on equivalent nodes, with equal core counts, running at the same clock speeds. Comparing the 7700K versus the 1800X is not a valid comparison. Adding more cores and running them at a lower all-core frequency is going to be far more efficient in MT workloads than having, say, half the cores and running them at double the clock speed. So if you want to go back that far and compare 7th gen to Ryzen 1000, the comparison should be the 7700K against the 1500X, and that at equal clock speeds (by downclocking the 7700K). And even that wouldn't be fair to AMD, as Intel's 14nm is superior to GloFo/Samsung 14nm: the latter, mapped onto Intel's scale, is not a 14nm but a 16nm node, while the former, mapped onto the TSMC/GloFo/Samsung scale, is a 10nm node.

Anyway, as I said, in terms of node equivalency the closest we have is Intel 10nmSF/10nmESF/Intel 7 and TSMC 7nm/7nm+. The Intel CPUs manufactured on this type of node are Intel's 11th gen mobile and 12th-14th gen. The AMD CPUs manufactured on such nodes are the Ryzen 3000, 4000 and 5000 series. So the Ryzen 7000 series you mentioned is out of the comparison, as there is a node advantage for AMD. Valid comparisons would be, for example, the following:

1. AMD R3 3100 vs Intel i3 12100F (link here). Both at 4.1GHz, both 4C/8T, both on a 7nm-class node. In gaming the Intel CPU pulls 32-42W, the AMD CPU pulls 52-55W, and the Intel chip delivers 40% more frames on average and 50% better 1% lows. That amounts to Intel having 2.3x better performance per watt.

2. AMD 5600 vs Intel 12500 (link here). Both on a 7nm-class node, both 6C/12T. Intel is running at 4.1GHz and AMD at 4.45GHz. Intel wins by 3-16% while AMD consumes 30% more power. So Intel has up to 1.5x better performance per watt.

3. AMD 4900H vs Intel i9 11900H. Both 7nm, both 8C/16T, both mobile CPUs. Intel runs at a higher frequency. Intel wins comfortably in raw performance, but AMD edges out a win in stock performance per watt. Downclock the 11900H to match the 4900H's clock speeds and the performance-per-watt win goes to Intel.

Of course the choice of core count and clock speed is ultimately a design choice. And that is why Intel pivoted to the use of a hybrid architecture with many e-cores that also run at lower speeds in order to boost MT performance more efficiently. This allowed them to remain competitive for long enough despite being a full node behind. Let's wait and see what they do when they are a node ahead.
There are a lot of potential metrics that make for a valid comparison, one of which is time. Technology advances over time, so it makes sense to compare the best competing offerings at a given time. Now, I would count price in that equation, which is better grounds to invalidate the 1800X, it being $200 more expensive. The 1500X, being both cheaper than the 7700K and at the bottom of the stack, is no more valid a comparison. The 1700X is the more sensible middle ground on price, despite the core count difference, because it is still what was offered at the closest price within a similar power envelope. Intel always had the option of moving to a better third-party node but was too stubborn to do so under past management; the limits that placed on their tech are completely on them. They went from refusing to offer more than 4 cores in the consumer space to being forced to by competition. The fact that the LGA1151 socket eventually scaled to 8 cores on the same node and architecture invalidates any reasoning that you can't compare AMD's higher core count offerings, based on Intel's artificial limitations.

Everything after that is you comparing parts/architectures separated by more than a year, which weren't even close to being each other's competition. The 11900H released months after the 5800H, yet you chose the 4900H as a point of comparison? There is no reason to compare AMD's 2019 architecture to Intel's 2021 architecture (3100 vs 12100). That's choosing to compare equivalent core counts but not contemporary architectures, which seemed to be the initial point of your post. AMD's architectures allowed for more cores in a competitive power envelope at a reasonable die cost.

I'll point out a problem with Intel's hybrid approach. While it helps MT performance in productivity tasks, the asymmetrical performance hurts any latency-sensitive task that can use more threads than the P-cores offer. For example, I've seen Star Citizen use 20 threads of a 5950X; before it was patched to ignore Intel's E-cores, its attempts to scale onto more threads resulted in massive stutters. Cyberpunk will use up to 16 threads but limits itself to about 75% of the total available. I reference CP2077 as a more mainstream game that already has the capacity to max out 8 P-cores. As GPUs and game engines continue to advance, asymmetric hybrid designs will be a hindrance. People were buying Alder Lake for gaming just to turn off the E-cores because they were game-breaking at first. Now Intel is removing HT and limiting P-cores to 8. Maybe future designs will keep pace, maybe not. That would put me off Intel's 12th-14th gen for gaming or productivity, for a build intended to last 5+ years.
 

TheHerald

I am not telling you what is relevant to you; I am telling you that most people are going to be using these products in a certain manner, that is all, just stating that for context.
I still don't understand why what some people may or may not decide to run their CPUs at matters when it comes to efficiency. It's really not relevant.

A 14900K is not a 14900, a 14900T, or even a 14900KF. So when a 14900K is tested for efficiency, that test is only relevant to other tests of a 14900K, regardless of testing settings and circumstances.
Of course it is. Even if the 14900 is the worst bin humanly possible, it will still be within 10% of the 14900K in ISO efficiency. We are just pretending otherwise to avoid admitting the obvious: that Intel has a huge, gigantic lead in OOB efficiency.
One could argue, as a general rule, that since a stock 7950X is more efficient than a stock 1#900K/F/T/S, the 7950X is the more efficient chip compared to the 1#900K/F/T/S
But it's not. A 7950x is definitely not more efficient than a stock 14900t or 14900.
I have not seen any benchmarks that convince me the Intel CPUs are the de facto perf-per-watt winner over the 7950X in such a no-holds-barred comparison, or over any other 7000 series CPU for that matter. @Albert.Thomas, maybe this is something that should be tested for an article, or maybe you have some insight or other references on our points you could share.
If you are talking about MT workloads at ISO power, yes, the 7950X is the most efficient chip at most power limits. But that's not OOB, and the 7950X isn't the entire AMD lineup; the rest of AMD's stack doesn't do that well. Also, there is more to efficiency than just full-throttle Blender 24/7. As I'm typing this, my Intel oven is consuming 5-6 watts, your 5800X3D is consuming 25, and a 7950X is hitting 45. Intel just does better in mixed workloads, and there are multiple tests that show it; people just ignore them. Here, I'll go ahead and link one again, and see how nobody will even acknowledge it.

View: https://www.youtube.com/watch?v=JHWxAdKK4Xg&t=673s
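For scale, the light-load figures quoted above translate into yearly energy roughly like this; the wattages are the ones stated in the post, and the hours of light use per day are an assumption purely for illustration:

```python
# Rough energy arithmetic for the light-load power figures quoted above
# (roughly 6 W, 25 W and 45 W). The 8 hours/day of light use is an assumed figure.

light_load_watts = {"Intel system": 6, "5800X3D": 25, "7950X": 45}
hours_per_day = 8          # assumption, purely for illustration
days_per_year = 365

for name, watts in light_load_watts.items():
    kwh = watts * hours_per_day * days_per_year / 1000
    print(f"{name:>12}: {kwh:5.1f} kWh/year at light load")
```

Whether a difference of that size matters obviously depends on electricity prices and on how much of the machine's time is actually spent at light load versus all-core load.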
 
Does this upcoming line of CPUs have NPU capability in them?
It's not easy finding info on this before the official launch.
Not for the Ryzen 9000 series, according to all leaks and official information. The integrated GPU is also very weak (2 CUs, same as Ryzen 7000), so it may be able to do some, but not all, NPU-accelerated things. You're better off depending on the discrete GPU for NPU-related stuff on a desktop CPU anyway.

Their official CPUs with NPUs in them are Phoenix and Dragon Range APUs.

Regards.
 

Hotrod2go

Not for the Ryzen 9000 series, according to all leaks and official information. The integrated GPU is also very weak (2 CUs, same as Ryzen 7000), so it may be able to do some, but not all, NPU-accelerated things. You're better off depending on the discrete GPU for NPU-related stuff on a desktop CPU anyway.

Their official CPUs with NPUs in them are Phoenix and Dragon Range APUs.

Regards.
Thanks, nice to know. This AI thing, now with dedicated chips to support it, still has me cautious to some degree about its overall intrusion into our daily lives. I'm not keen to jump on the NPU bandwagon. It's a complex issue, of course, but that's all I've got to say about it at this point.