News AMD Launches Zen 4 Ryzen 7000 CPUs, Launches September 27

What do you not understand about the word default in this regard? Tell ya what, here is the definition from Merriam-Webster's dictionary for default in regard to computers: "to make a selection automatically in the absence of a choice made by the user"

If a motherboard manufacturer enables unlimited Tau right away in the BIOS, that means it is the DEFAULT SETTING!!! Intel HAS NOT told the manufacturers to discontinue this practice. Therefore even INTEL views this as normal, and therefore it would be the DEFAULT.
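For readers who haven't tuned these settings: Tau is the time window in Intel's turbo scheme during which the CPU may draw up to PL2 (boost power) before falling back to PL1 (sustained power). A rough sketch of what the "unlimited Tau" BIOS setting changes, using the 12900K's advertised 125W base / 241W turbo figures as placeholders (real silicon uses an exponentially weighted moving average of power, not a hard step):

```python
# Simplified model of Intel's PL1/PL2/Tau turbo behavior.
# Real hardware tracks an exponentially weighted moving average of
# power; this step model just shows what "unlimited Tau" changes.

def allowed_power(elapsed_s, pl1=125.0, pl2=241.0, tau=56.0):
    """Power budget (watts) a sustained all-core load may draw
    after `elapsed_s` seconds under a simple step model."""
    if tau == float("inf"):        # the "unlimited Tau" BIOS setting
        return pl2                 # boost power, forever
    return pl2 if elapsed_s < tau else pl1

print(allowed_power(10))                     # inside the boost window
print(allowed_power(120))                    # after Tau expires
print(allowed_power(120, tau=float("inf")))  # unlimited: never drops
```

With stock Tau the chip falls back to base power after roughly a minute of sustained load; with unlimited Tau it simply never does, which is why the two configurations diverge only in long multi-threaded workloads.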

It could be argued that Intel turned a passive eye to these settings which could be argued to be out of spec. But Intel allows their symbol to be printed on the box which is implicit approval of said product. So technically speaking who is liable here? Intel for not enforcing their own guidelines? Or the motherboard manufacturers who chose to passively ignore them?

Look at what AMD did with the original RX 480. The PCIe slot power limit was clearly violated. But they designed it that way (till there was a big stink)
 
  • Like
Reactions: jeremyj_83
It could be argued that Intel turned a passive eye to these settings which could be argued to be out of spec. But Intel allows their symbol to be printed on the box which is implicit approval of said product. So technically speaking who is liable here? Intel for not enforcing their own guidelines? Or the motherboard manufacturers who chose to passively ignore them?

Look at what AMD did with the original RX 480. The PCIe slot power limit was clearly violated. But they designed it that way (till there was a big stink)
They are not turning a passive eye; these are overclocking mobos. The only reason for Z boards to exist is to push the CPU, so what possible reason could Intel give to keep an overclocking mobo from doing what it needs to do to overclock? The approval isn't just implicit; Intel wants there to be mobos that can overclock its CPUs. It's a main selling point for them.

The issue is with reviewers actively choosing one Z board, and only one Z board, and then pretending that doing so is the only possible choice someone has, i.e. calling it default or out-of-the-box. You can't change settings, you can't choose a different mobo...for some reason. The mobo in the review is the only one being sold... (and if there are others, then all of them use the exact same settings?)
They don't even tell you all the features the mobo has and what is enabled and what is not, so unless you have the same mobo at home you have no idea what the settings even are.
 
  • Like
Reactions: shady28
They are not turning a passive eye; these are overclocking mobos. The only reason for Z boards to exist is to push the CPU, so what possible reason could Intel give to keep an overclocking mobo from doing what it needs to do to overclock? The approval isn't just implicit; Intel wants there to be mobos that can overclock its CPUs. It's a main selling point for them.

The issue is with reviewers actively choosing one Z board, and only one Z board, and then pretending that doing so is the only possible choice someone has, i.e. calling it default or out-of-the-box. You can't change settings, you can't choose a different mobo...for some reason. The mobo in the review is the only one being sold... (and if there are others, then all of them use the exact same settings?)
They don't even tell you all the features the mobo has and what is enabled and what is not, so unless you have the same mobo at home you have no idea what the settings even are.
Actually there are B-series boards that have it enabled by default as well, so there goes your entire argument.
 
  • Like
Reactions: TCA_ChinChin
They are not turning a passive eye; these are overclocking mobos. The only reason for Z boards to exist is to push the CPU, so what possible reason could Intel give to keep an overclocking mobo from doing what it needs to do to overclock? The approval isn't just implicit; Intel wants there to be mobos that can overclock its CPUs. It's a main selling point for them.

The issue is with reviewers actively choosing one Z board, and only one Z board, and then pretending that doing so is the only possible choice someone has, i.e. calling it default or out-of-the-box. You can't change settings, you can't choose a different mobo...for some reason. The mobo in the review is the only one being sold... (and if there are others, then all of them use the exact same settings?)
They don't even tell you all the features the mobo has and what is enabled and what is not, so unless you have the same mobo at home you have no idea what the settings even are.

Yes and no. Intel's warranty terms say they can void warranties on K chips if they are overclocked. Even if they allow it, they still reserve the right to deny claims.

It's like saying "Yes, the GM Corvette's ECM won't stop you from putting in a new cam, changing the timing in the chip, or overriding the supercharger params. But if you do, your warranty will be void." Technically speaking the ECM could allow all of this, but it sets a flag which says "Reprogrammed and running out of spec." And Intel and AMD do the same thing on their CPUs.
 

gamr

Commendable
Jul 29, 2022
126
5
1,585
ok. power efficiency aside, are these going to be the next big thing in terms of price, performance, and longevity (am5 support), or is raptor lake going to be like "lol no"
 
ok. power efficiency aside, are these going to be the next big thing in terms of price, performance, and longevity (am5 support), or is raptor lake going to be like "lol no"
Most experts with inside industry knowledge have said that performance will be a toss up between Zen 4 and Raptor Lake. However, Zen 4 will win hands down on efficiency compared to Raptor Lake.
 
  • Like
Reactions: gamr

shady28

Distinguished
Jan 29, 2007
443
314
19,090
...
The issue is with reviewers actively choosing one Z board, and only one Z board, and then pretending that doing so is the only possible choice someone has, i.e. calling it default or out-of-the-box. You can't change settings, you can't choose a different mobo...for some reason.
...

Exactly.

A reviewer can "prove" almost any point they wish using a different motherboard.

Here's proof that the 12900K is more efficient than any Zen 3 in light thread workloads. This is not even close: at low thread counts the much slower 5950X is pulling a full 35W more than the 12900K, and the 12900K / Steel Legend combo is performing 30% higher at the same time.

High end overclocker motherboards that are typically used in review sites will almost universally push the chips to their limits. That means a lot of power draw on Intel.

But unless you're going to go out and get a Maximus or Taichi or some such, the power those boards draw is 100% irrelevant to you.

Here's what happens when paired with a much more typical Asrock Z690 Steel Legend WiFi 6E.

And at Idle with this board (as most people's rigs are 95% of the time), it draws less power than any Zen 1+, Zen 2 or Zen 3, 13W less than even the 5600X.

[attached charts: light-thread power consumption and idle power comparisons]
 

King_V

Illustrious
Ambassador
Likewise. But I guess this is inevitable given that Intel started giving themselves the advantage with more power. So to close that gap, it also means AMD will have to bump power consumption up. You can deliver IPC improvements, but that number in today's context may be too low when you can clearly tell people are expecting very high double digits improvement in performance over previous gen.

The only thing I can think of is that maybe they're going to do the non-X versions relatively soon? Maybe they'll all have lower power specs, i.e. 65W and 95W or something.

This is complete speculation on my part.
 
  • Like
Reactions: Roland Of Gilead
Exactly.

A reviewer can "prove" almost any point they wish using a different motherboard.

Here's proof that the 12900K is more efficient than any Zen 3 in light thread workloads. This is not even close: at low thread counts the much slower 5950X is pulling a full 35W more than the 12900K, and the 12900K / Steel Legend combo is performing 30% higher at the same time.

High end overclocker motherboards that are typically used in review sites will almost universally push the chips to their limits. That means a lot of power draw on Intel.

But unless you're going to go out and get a Maximus or Taichi or some such, the power those boards draw is 100% irrelevant to you.

Here's what happens when paired with a much more typical Asrock Z690 Steel Legend WiFi 6E.

And at Idle with this board (as most people's rigs are 95% of the time), it draws less power than any Zen 1+, Zen 2 or Zen 3, 13W less than even the 5600X.

[attached charts: light-thread power consumption and idle power comparisons]
There are quite a few misleading things in what you are presenting, and they render your point moot.

First, you are showing power consumption in single threaded workloads. No modern CPU is going to pull down 300W when boosting to maximum levels in a ST workload. In fact, if you were to take a 45W Alder Lake mobile chip and benchmark it against a 12900K, they would be almost identical in score, with the 12900K winning due to slightly higher boost. The reason for this is that no core is going to be pulling 45W or more, especially for ST work. The workloads that will show differences from unlimited Tau are mainly MT applications. There you see, based on your own site, that the 12900K is pulling 100W more and performing 8.5% worse than the 5950X.

Second, the site you have chosen measures power of the entire system and NOT just the CPU cores.
"Power Consumption
We show energy consumption based on the entire PC (motherboard / processor / graphics card / memory / SSD). This number depends and will vary per motherboard (added ICs / controllers / wifi / Bluetooth) and PSU (efficiency). Keep in mind that we measure the ENTIRE PC, not just the processor's power consumption. Your average PC can differ from our numbers if you add optical drives, HDDs, soundcards etc. "
Just the differences in what is on the motherboard, or active at that time on the motherboard, can make a HUGE difference in power consumption when looking at ST workloads. As soon as you look at the MT workload you see the 12900K hitting 320W+ total system power.
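To put the efficiency gap in numbers: a quick perf-per-watt check using the rough figures from this exchange (320W-class system draw for the 12900K, about 100W less for the 5950X, with the 5950X about 8.5% faster; these are illustrative, not fresh measurements):

```python
# Perf-per-watt comparison built from the rough numbers cited in
# this thread; all figures are illustrative, not measured here.

def perf_per_watt(score, watts):
    return score / watts

score_5950x = 1.085     # normalized: ~8.5% faster than...
score_12900k = 1.0      # ...the 12900K in the MT test cited
watts_5950x = 220.0     # "pulling 100W more" implies ~220W here
watts_12900k = 320.0    # "320W+ total system power"

eff_5950x = perf_per_watt(score_5950x, watts_5950x)
eff_12900k = perf_per_watt(score_12900k, watts_12900k)

print(f"5950X advantage: {eff_5950x / eff_12900k:.2f}x perf/watt")
```

Whole-system measurement cuts both ways, of course: swap the board or what's active on it and these ratios move.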
 
  • Like
Reactions: TCA_ChinChin
Actually there are B-series boards that have it enabled by default as well, so there goes your entire argument.
I see you missed the pics...and the whole point...let me re-upload it.
Both of the settings in this pic are valid settings that are 100% supported by Intel and don't void your warranty; you can use both, or anything in between or below those settings.
[attached screenshot: BIOS power limit settings]

Even if they allow it, they still reserve the right to deny claims.
But magically this reasoning suddenly disappears if it's about unlimited power settings...
 
  • Like
Reactions: KyaraM
The workloads that will show differences from unlimited Tau are mainly MT applications. There you see, based on your own site, that the 12900K is pulling 100W more and performing 8.5% worse than the 5950X.
Yeah, that's the effect of using a bad mobo for your reviews; that's what I'm talking about.
5W more power for the 12900K, and the 5950X is 8.8% faster.
For perspective, the 12900K has 24 threads, the same as the 5900X, which it beats in both performance and power draw.
The 5950X has 32 threads, 50% more than the 12900K, and is still only about 13% more efficient, and only in heavy MT.
You get 100W more power draw for the exact same performance result...

Core i9-12900K und Core i5-12600K: Hybrid-Desktop-CPUs Alder Lake im Test - Hardwareluxx
[attached chart: Hardwareluxx multi-threaded power and performance results]
 

shady28

Distinguished
Jan 29, 2007
443
314
19,090
There are quite a few misleading things in what you are presenting, and they render your point moot.

Okay ...

First, you are showing power consumption in single threaded workloads.

Almost all normal users are doing single and very light thread workloads. So that's relevant.

What's not relevant is doing all core heavy workloads and trying to extrapolate actual real-world use.

No modern CPU is going to pull down 300W when boosting to maximum levels in a ST workload.

Never said it would. But my PC sitting here at near idle for 12 out of 16 hours of use is pulling 35W less than a 5950X; over those 12 hours that's 420 watt-hours saved, every day.

In fact, if you were to take a 45W Alder Lake mobile chip and benchmark it against a 12900K, they would be almost identical in score, with the 12900K winning due to slightly higher boost. The reason for this is that no core is going to be pulling 45W or more, especially for ST work. The workloads that will show differences from unlimited Tau are mainly MT applications. There you see, based on your own site, that the 12900K is pulling 100W more and performing 8.5% worse than the 5950X.

All-core workload?

I've taken the time to look at how often I actually see all-core max workload by using Windows Performance Monitor over multiple days of my typical use, which is more intense than most.

It's like 10 seconds per day, literally, that I see an all-core workload. It happens when my PC applies a patch, either via Steam, the OS, or a 3rd-party updater.

That's real world. All core workloads don't matter to 99% of folks. Try again.
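The kind of multi-day picture described above boils down to a min/max/avg pass over utilization samples; the sampling itself is what Performance Monitor does on Windows (or, say, psutil in Python). A sketch over one canned day of per-second samples, near-idle except a 10-second patch burst:

```python
# Minimal min/max/avg summary of CPU utilization samples, the kind
# of multi-day picture discussed above. Collecting real samples is
# platform-specific (Performance Monitor on Windows, psutil, etc.);
# here we summarize a synthetic day to show the shape of the result.

def summarize(samples):
    """Return (min, max, avg) of a sequence of % utilization samples."""
    return min(samples), max(samples), sum(samples) / len(samples)

# A fake day: near-idle, one 10-second all-core burst (a patch).
day = [3.0] * 86390 + [100.0] * 10   # one sample per second
lo, hi, avg = summarize(day)
print(f"min {lo}%, max {hi}%, avg {avg:.2f}%")
```

Even with a daily 100% burst in the log, the day's average barely moves off idle, which is the whole argument about all-core benchmarks versus typical use.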

Second, the site you have chosen measures power of the entire system and NOT just the CPU cores.

I have an entire system sitting on my desk, not just a CPU. When they benchmark, they use all the same components and just swap the motherboard.

Again, it is my scenario that is real, not yours.

"Power Consumption
We show energy consumption based on the entire PC (motherboard / processor / graphics card / memory / SSD). This number depends and will vary per motherboard (added ICs / controllers / wifi / Bluetooth) and PSU (efficiency). Keep in mind that we measure the ENTIRE PC, not just the processor's power consumption.

That was the entire point of the post. You can change the benchmark power use by merely changing the motherboard out, and suddenly losers become winners and winners become losers. Has very little to do with the CPU. I guess you missed that part....

Your average PC can differ from our numbers if you add optical drives, HDDs, soundcards etc. "
Just the differences in what is on the motherboard, or active at that time on the motherboard, can make a HUGE difference in power consumption when looking at ST workloads. As soon as you look at the MT workload you see the 12900K hitting 320W+ total system power.

How many people run their PC balls to the wall multi-core?

Not many, that's a simple fact. If you do care, running a render farm, more power to ya. Probably shouldn't be running a desktop PC for one. For another, time is money. If you finish 15 tasks 3 mins quicker every day you save 45 mins of labor time. If your labor is worth $60/hr that's $45 per day.

How's that compare to the cost of the electricity?
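The back-of-envelope math in the last two paragraphs, spelled out (the $0.15/kWh rate and the 200W power delta are assumptions for illustration; the labor figures are from the post):

```python
# Labor savings vs extra electricity cost, per the argument above.
# The kWh price and extra wattage are illustrative assumptions.

LABOR_RATE = 60.0        # $/hr, from the post
TIME_SAVED_HR = 45 / 60  # 15 tasks * 3 min each = 45 min/day
labor_saved = LABOR_RATE * TIME_SAVED_HR      # dollars per day

KWH_PRICE = 0.15         # $/kWh (assumed, roughly US residential)
EXTRA_WATTS = 200        # assumed extra draw while fully loaded
HOURS_LOADED = 0.75      # the 45 minutes of heavy work
extra_kwh = EXTRA_WATTS / 1000 * HOURS_LOADED
energy_cost = extra_kwh * KWH_PRICE           # dollars per day

print(f"labor saved ${labor_saved:.2f}/day, "
      f"extra electricity ${energy_cost:.4f}/day")
```

Under these assumptions the electricity is a rounding error next to the labor, which is the point being made.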
 
  • Like
Reactions: KyaraM
After several days of looking at the data, I'm a bit confused by several things.

Rumors are top-end Raptor Lake will be faster than the 7950X (16-core/32-thread) in multi-core benchmarks. This explains why the top-end 7950X is $100 cheaper. They know they won't be on top.

It seems Zen 4 can't sustain its clocks, as it cooks at 95C. Even single-threaded it is running at a toasty 90C+. AMD is bragging about energy efficiency improvements, yet they are unable to translate this into better performance, which means a lot of energy is lost to heat. And the early temperature readings I'm seeing support this. So it's clocked well beyond its efficiency curve's sweet spot.

So at the laptop level, AMD will kick tail because they will be in that sweet spot, but when it comes to desktop, Intel Raptor Lake will likely win. This also explains why the top end 7950X will be $100 less than the 5950X. Intel will simply be faster.
 
  • Like
Reactions: KyaraM
How many people run their PC balls to the wall multi-core?

  • Currently I run compiles on projects that take 20 minutes or more.
  • I have Excel spreadsheets that are 50+ sheets deep now with thousands of rows. Generating a single report from this can take several minutes, and I might have to run 30 or more.
  • I use AI training and inference on large training sets.
  • I run F@H when I'm in a sleep state.
And soon I will be encoding my wife's side-business videos.

The work I do is actually unexciting, but it is work-intensive for CPUs.
 
  • Currently I run compiles on projects that take 20 minutes or more.
  • I have Excel spreadsheets that are 50+ sheets deep now with thousands of rows. Generating a single report from this can take several minutes, and I might have to run 30 or more.
  • I use AI training and inference on large training sets.
  • I run F@H when I'm in a sleep state.
And soon I will be encoding my wife's side-business videos.

The work I do is actually unexciting, but it is work-intensive for CPUs.
So what's your point?
Have you done tests with all of this software on all of your CPUs at many power levels to tell which one is most efficient at which power level for which software?

Because otherwise, if you only went by a review you trust and that review only shows the efficiency at max power level for one CPU but the efficiency at the optimal power level for the other, and you bought a CPU just because of that then you were potentially duped.
 
  • Like
Reactions: KyaraM and shady28

shady28

Distinguished
Jan 29, 2007
443
314
19,090
  • Currently I run compiles on projects that take 20 minutes or more.
  • I have Excel spreadsheets that are 50+ sheets deep now with thousands of rows. Generating a single report from this can take several minutes, and I might have to run 30 or more.
  • I use AI training and inference on large training sets.
  • I run F@H when I'm in a sleep state.
And soon I will be encoding my wife's side-business videos.

The work I do is actually unexciting, but it is work-intensive for CPUs.

That's great, but I'd ask, have you ever run performance monitor and left it running for say an entire week to see what your min max avg loads are?

Besides that question of whether you even really know what your use case load is, there's the question of how many people do those things?

These days people are apt to bring out edge cases as if they are common. What you are describing is absolutely not common.

Let me put it this way: if I were to say "Not everyone needs a 4-door F-250; most people should be driving a Corolla or Civic or something along those lines because they just commute back and forth."

If I said that, what would you say to the person who said "I haul concrete blocks for foundation repair, and my wife has a landscaping company that needs to haul 3 tons of mulch every other day"?

I'll tell you what I would tell that person - you're not the normal use case, not by 100 miles.
 
  • Like
Reactions: KyaraM
That's great, but I'd ask, have you ever run performance monitor and left it running for say an entire week to see what your min max avg loads are?

Besides that question of whether you even really know what your use case load is, there's the question of how many people do those things?

These days people are apt to bring out edge cases as if they are common. What you are describing is absolutely not common.

Let me put it this way: if I were to say "Not everyone needs a 4-door F-250; most people should be driving a Corolla or Civic or something along those lines because they just commute back and forth."

If I said that, what would you say to the person who said "I haul concrete blocks for foundation repair, and my wife has a landscaping company that needs to haul 3 tons of mulch every other day"?

I'll tell you what I would tell that person - you're not the normal use case, not by 100 miles.
Just a nitpick: that is most definitely "normal" (expected within the confines of reality), just not "average".

Regards :p
 
So what's your point?
Have you done tests with all of this software on all of your CPUs at many power levels to tell which one is most efficient at which power level for which software?

Because otherwise, if you only went by a review you trust and that review only shows the efficiency at max power level for one CPU but the efficiency at the optimal power level for the other, and you bought a CPU just because of that then you were potentially duped.

The CPU is a fixed cost. The power used to complete a task favors lower clock speeds. HOWEVER

End users are not a fixed cost. I am paid to get work done. If I'm sitting there tapping my fingers because I'm waiting on the CPU, that costs the company. If work doesn't get done, then the cost goes out the window. So that's incalculable, since it depends on the individual's cost.

For cloud servers that are rarely ever touched directly by humans, it's a different story. You want the best power efficiency / clock to lower TCO.
 
  • Like
Reactions: TCA_ChinChin
That's great, but I'd ask, have you ever run performance monitor and left it running for say an entire week to see what your min max avg loads are?

Besides that question of whether you even really know what your use case load is, there's the question of how many people do those things?

These days people are apt to bring out edge cases as if they are common. What you are describing is absolutely not common.

Let me put it this way: if I were to say "Not everyone needs a 4-door F-250; most people should be driving a Corolla or Civic or something along those lines because they just commute back and forth."

If I said that, what would you say to the person who said "I haul concrete blocks for foundation repair, and my wife has a landscaping company that needs to haul 3 tons of mulch every other day"?

I'll tell you what I would tell that person - you're not the normal use case, not by 100 miles.

I agree. I gave an anecdotal case. But with enthusiasts and power users it is an important metric. These kinds of people eventually influence what's bought at the corporate level.

That said, one of my machines runs near full tilt a very large percentage of the time. My NAS runs F@H in a Docker container when it's not serving LAN traffic. My work computer does so when idle (although I shut the latter down when my day is done). And I have a 3rd computer running AI training sets. I shut this down when it's done, but it takes a LOT of time.
 
Last edited:

shady28

Distinguished
Jan 29, 2007
443
314
19,090
I agree. I gave an anecdotal case. But with enthusiasts and power users it is an important metric. These kinds of people eventually influence what's bought at the corporate level.

That said, one of my machines runs near full tilt a very large percentage of the time. My NAS runs F@H in a Docker container when it's not serving LAN traffic. My work computer does so when idle (although I shut the latter down when my day is done). And I have a 3rd computer running AI training sets. I shut this down when it's done, but it takes a LOT of time.

Unfortunately they do influence corporate. It's frankly a failure of critical thinking skills across the board.

This is why home users now have twice as many cores as they need, because if Intel and AMD didn't do that then they'd get trampled on at these review sites.

In fact, that's exactly what happened to Intel when AMD came out with 8-, 12-, and 16-core chips vs Intel's 4-, 6-, and 8-core desktop chips. Never mind single and light thread performance that is relevant to the 99%; it's all about Cinebench, POV-Ray, and Prime95.

So now you have a 10 core 12600K, and in a month we'll have 10 core 13400's alongside 24 core 13900K's. Everyone in the PC world will soon be doing the equivalent of driving an F-250 back and forth to work. Don't blame Intel, that's what these sites programmed everyone to 'want'.
 
  • Like
Reactions: KyaraM
Unfortunately they do influence corporate. It's frankly a failure of critical thinking skills across the board.

This is why home users now have twice as many cores as they need, because if Intel and AMD didn't do that then they'd get trampled on at these review sites.

In fact, that's exactly what happened to Intel when AMD came out with 8-, 12-, and 16-core chips vs Intel's 4-, 6-, and 8-core desktop chips. Never mind single and light thread performance that is relevant to the 99%; it's all about Cinebench, POV-Ray, and Prime95.

So now you have a 10 core 12600K, and in a month we'll have 10 core 13400's alongside 24 core 13900K's. Everyone in the PC world will soon be doing the equivalent of driving an F-250 back and forth to work. Don't blame Intel, that's what these sites programmed everyone to 'want'.

Clock speed gains from die shrinks are a thing of the past. A low-core-count chip doesn't scale to a much better maximum clock than a many-core one.
 

shady28

Distinguished
Jan 29, 2007
443
314
19,090
Clock speed gains from die shrinks are a thing of the past. A low-core-count chip doesn't scale to a much better maximum clock than a many-core one.

I've never associated clock speed with die shrink.

Performance now has to do with how many micro-ops can be executed per clock; it is, and has been for a long time, possible to do more than one instruction per clock in aggregate.
 
  • Like
Reactions: KyaraM
I've never associated clock speed with die shrink.

Performance now has to do with how many micro-ops can be executed per clock; it is, and has been for a long time, possible to do more than one instruction per clock in aggregate.

Yes. But we are running out of optimization techniques for the scheduler. I mean if you are optimizing 1024 instructions ahead how much delay are you sticking into the pipe and how much benefit are you really getting?
 

shady28

Distinguished
Jan 29, 2007
443
314
19,090
Yes. But we are running out of optimization techniques for the scheduler. I mean if you are optimizing 1024 instructions ahead how much delay are you sticking into the pipe and how much benefit are you really getting?

Dunno. I'm going to invest a bit in this next generation; I'll be fine if they stall out on improvements for a few years :)

The idea that parallelizing things instead of single-thread is the future has been out there for a while. Dr. Dobb's had a big article about it in the 90s (the argument was the same then: we can't go much faster single-thread).

Simple fact is, though, many many problems cannot be greatly parallelized. Even ones that can often have stages where all solutions to the first set of parallel operations are required before the next set can begin (a chokepoint), meaning you're constrained by the longest solution time. This is to say nothing of the increased complexity, time, and cost to build and maintain.
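That chokepoint constraint is essentially Amdahl's law: if only a fraction p of a task parallelizes, the speedup on n cores is 1 / ((1 - p) + p/n), which can never exceed 1 / (1 - p) no matter how many cores you throw at it. A minimal sketch:

```python
# Amdahl's law: the speedup limit when only part of a task
# parallelizes and the rest runs serially.

def speedup(p, n):
    """p: parallelizable fraction of the work, n: core count."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the work parallel, 16 cores give well under 16x,
# and no core count can ever beat 1 / (1 - 0.95) = 20x.
print(speedup(0.95, 16))
print(speedup(0.95, 10**9))
```

Which is the whole point about home users and core counts: for workloads with any meaningful serial fraction, doubling the cores stops paying off very quickly.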