Intel Core i9-12900K and Core i5-12600K Review: Retaking the Gaming Crown

No it's not, reviewers just do this for ...reasons.
TDP (PBP) is still PL2 for x seconds and then PL1 (simplified; it's a whole song and dance to keep an average of 125W).
Power limits lifted is now Maximum Turbo Power, which is PL1=PL2 forever.
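For anyone wondering what that song and dance looks like in practice, here's a rough sketch (simplified, in Python; the real algorithm is an exponentially weighted moving average, and the 125/241/56 numbers are just the 12900K's published defaults):

PL1, PL2, TAU = 125.0, 241.0, 56.0   # watts, watts, seconds (published 12900K defaults)

def allowed_power(avg):
    # While the running average is below PL1 there is budget to boost to PL2;
    # once the budget is exhausted, the package falls back to PL1.
    return PL2 if avg < PL1 else PL1

avg = 0.0
for _ in range(120):                 # simulate two minutes in 1s steps
    draw = allowed_power(avg)        # assume the CPU draws all it's allowed
    avg += (draw - avg) / TAU        # moving average with time constant Tau
# Early on avg < PL1, so the chip holds ~241W; after roughly Tau seconds the
# average catches up and power settles around 125W.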
Jayz compared the i9-12900K's power consumption with and without power limits enforced and found only a 20W drop between the two; it went from something like 260W above idle (measured at the AC plug) to 240W, running Cinebench.
 
Yeah, that was MCE enabled vs. MCE disabled, which is a whole different thing altogether.
MCE is overclocking, which is why it went above the 241W PL2 limit; without the additional overclocking it went down to the 241W limit.

Following the limits with everything set to AUTO means the limits being followed are the ones that ASUS put in there, so basically everything at max.

You have to adjust the short and long duration package power manually if your mobo doesn't have an Intel PBP/MTP switch.

Intel still shows Tau=56s for the 8+8-core 125W CPU.
They did add a Tau max, but max is not the default; recommended is what should be used.
https://www.intel.com/content/www/us/en/products/docs/processors/core/core-technical-resources.html
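If your board doesn't expose that switch, you can at least check what limits the firmware actually programmed. On Linux, for instance, the powercap/RAPL sysfs interface reports them (a minimal sketch; the path and permissions vary by system):

# Read the programmed package power limits (Linux powercap/RAPL).
# constraint_0 is the long-duration limit (PL1), constraint_1 the short one (PL2).
base = "/sys/class/powercap/intel-rapl/intel-rapl:0"

def read(name):
    with open(f"{base}/{name}") as f:
        return int(f.read())

print(f"PL1 = {read('constraint_0_power_limit_uw') / 1e6:.0f}W")
print(f"PL2 = {read('constraint_1_power_limit_uw') / 1e6:.0f}W")
print(f"Tau = {read('constraint_0_time_window_us') / 1e6:.0f}s")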
 
Test System Configurations
Can you please specify what timings were used for the Ryzen system?
DDR4-3600 CL16? DDR4-3200 CL14? Please be specific. I see timings for all the Intel systems but none for the AMD system.
Are these single-rank DIMMs? 2x 8GB?

AMD Socket AM4 (X570): AMD Ryzen 9 5950X, Ryzen 9 5900X, Ryzen 7 5800X, Ryzen 5 5600X
MSI MEG X570 Godlike
2x 8GB Trident Z Royal DDR4-3600 - Stock: DDR4-3200
 
All the DDR4 was run at the same timings on the various platforms. AM4 and LGA1700 are both rated for DDR4-3200 maximum speed (without overclocking), so it's a DDR4-3600 kit that can run at DDR4-3200 14-14-14.
 
Thanks for the quick reply. Were these single- or dual-rank DIMMs? This is known to impact Ryzen 5000 series performance.
View: https://youtu.be/-UkGu6A-6sQ?t=489
 
Paul would have to reply, but I'm betting single rank. There's a tradeoff between capacity, ranks, speed, and timings. Extremely low latency (14-14-14 at DDR4-3200) generally only comes with single-rank DIMMs, though it's often possible to do four 8GB DIMMs. Almost all 8GB modules are single rank, just as nearly all 16GB modules are dual rank.

While it’s true that dual rank (or four single rank) does improve performance — on AMD and Intel platforms — it’s also true that CL14 performs better than CL16. So, Paul improved performance by using a low latency kit, and hurt performance by not using two ranks per channel. But that applies to all the DDR4 platforms.

As for the DDR5 kit, it’s the only kit Paul has right now. I can assure you that he wasn’t trying to handicap AMD any more than he handicapped Intel. Running all of the platforms at the maximum official speed with lowest possible latency memory is the general approach.
 
Why not do both... It's expensive, but not nearly as much as it was 6 months ago.
 
Consistency and time. If he tested with both one and two ranks per channel, it would double the amount of time required. We were one of the first to talk about this back in the day. I switched to 2x16GB CL16 for precisely this reason. But time and hardware are finite resources.

Why not custom tune the AMD sub timings? Why not test 18 different ways of overclocking, PBO and stock and all that other stuff? Time. Always time. Plus, 99% of people will run default settings or close to it, so it’s best to start there.
 

I don't think anyone suggested that THG custom tune sub-timings and such for AMD.

However, using 2x 16GB dual-rank DIMMs instead of 2x 8GB single-rank DIMMs on all systems, including Intel, would not have cost any additional testing time.
I think it was a poor choice to use the 2x 8GB SR DIMMs when I'm sure several 2x 16GB dual-rank kits would not be too hard to find in the THG testing lab. 😉
 
The loss of AVX512 is unfortunate, because Intel engineering did an AMA not long ago where they stated that they're going to continue pushing AVX512. I'm expecting this to return in future iterations of this architecture once they sort out the additional complexity in the ISA. Intel is not going to confirm or deny that today. 🙂 I'm going to build one of these anyway since I keep two rigs, but for someone looking to keep a machine further out, AVX512 has proven to be extremely potent when it's used.
View: https://www.youtube.com/watch?v=_qoOqAjhegE
 

Do you guys just read one post from everyone? You're the 2nd person to point this out to me, after I said in this thread that AVX512 is on the chip, on page two (Thursday, November 4th at 6:35 PM). 👍

The same chip can see a 20-fold performance improvement in some workloads. Intel spends a lot of money and effort on compilers and software (including Windows 11's upcoming Android support). That's just a reality, so it's really buyer-beware; things should be carefully considered if keeping a system for a long period of time. Intel and Nvidia have proven that software support matters as much as, if not more than, hardware. Both invest heavily in software. This shouldn't be dismissed outright. I pretty much buy everything from both Intel and AMD, but I personally want it on my main rig going forward.

Since I posted my comment, I've found other reviews revealing that Alder Lake does indeed have AVX512, and it is not fused off. It's not only there, it works. So I was definitely correct: AVX512 was intended for ADL and will certainly return in later chips, if not be enabled later on in these.
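If anyone wants to verify on their own chip, the OS-reported CPU flags are the quickest check. For example, on Linux (a minimal sketch; on Alder Lake the flags reportedly only appear with the E-cores disabled and a BIOS that exposes the feature):

# List any AVX-512 feature flags the kernel reports (Linux).
with open("/proc/cpuinfo") as f:
    flags = next(line for line in f if line.startswith("flags")).split()

avx512 = sorted({fl for fl in flags if fl.startswith("avx512")})
print("AVX-512 flags:", avx512 if avx512 else "none reported")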
 
Yeah, that was MCE enabled vs. MCE disabled, which is a whole different thing altogether.
MCE is overclocking, which is why it went above the 241W PL2 limit; without the additional overclocking it went down to the 241W limit.

MCE is not considered overclocking by Intel. It's covered by warranty. MCE came as a default option on all the 11th-gen motherboards that I looked at buying, and it's essentially fully endorsed on 12th gen by being included in the rating system. If my 11900K dies with my current settings, MCE (default on) and ABT (I enabled this), neither will void my warranty. PBO is the performance-enhancing feature that is overclocking for sure, as AMD doesn't warranty it.

To each their own here a bit, but I prefer to see benchmarks with settings at whatever is warrantied, as that's what the company will stand behind, and XMP enabled with the same kit used on each system. At least that's what helps me the most in making buying decisions, which I assume is the point of these reviews. If you're not buying, it's really just the peanut gallery.
 
Jayz compared the i9-12900K's power consumption with and without power limits enforced and found only a 20W drop between the two; it went from something like 260W above idle (measured at the AC plug) to 240W, running Cinebench.
The video posted by Jayz has several major issues.

While you could say his load tests were "acceptable" (subtracting idle load from the higher load), at the beginning of the video (3 min 50 sec) he states that the high idle power consumption on the AMD system is because of the 10 RGB fans, the water-block loop and pump connected to the GPU, etc.

He says himself, several times in the video, that the idle power consumption cannot be compared, because the Intel system would need to have 10 RGB fans, etc. His very own words.

He then uses these high idle readings on the AMD system to conclude that AMD has much higher idle power consumption than the Intel system.

This type of content should be removed from THG, as it spreads disinformation. I have pointed this same fact out on the TPU forums.
 
When you subtract idle from load to compare power draw deltas, all of the baseline power bloat gets cancelled out.

In the case of my own comments, the AMD system is irrelevant since I was only concerned with the Intel figures, namely 300+W at load in a CPU test - 65W idle - PSU losses - VRM losses = 200+W of extra CPU power dissipation.
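Spelled out with illustrative numbers (the efficiency figures below are assumptions, not measurements):

# Estimate the CPU power delta from wall readings (all numbers illustrative).
wall_load, wall_idle = 300.0, 65.0   # watts at the AC plug
psu_eff, vrm_eff = 0.92, 0.93        # assumed PSU and VRM efficiencies

delta_dc = (wall_load - wall_idle) * psu_eff   # DC-side power increase
cpu_power = delta_dc * vrm_eff                 # power actually reaching the CPU
print(f"~{cpu_power:.0f}W of extra CPU dissipation")   # ~201W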
 

This does not change the fact that his comment about AMD's high idle power consumption is not valid, as several users here and on TPU are repeating this incorrect statement.
 
MCE is not considered overclocking by Intel. It's covered by warranty. MCE came as a default option on all the 11th-gen motherboards that I looked at buying, and it's essentially fully endorsed on 12th gen by being included in the rating system. If my 11900K dies with my current settings, MCE (default on) and ABT (I enabled this), neither will void my warranty. PBO is the performance-enhancing feature that is overclocking for sure, as AMD doesn't warranty it.

To each their own here a bit, but I prefer to see benchmarks with settings at whatever is warrantied, as that's what the company will stand behind, and XMP enabled with the same kit used on each system. At least that's what helps me the most in making buying decisions, which I assume is the point of these reviews. If you're not buying, it's really just the peanut gallery.
Can you link to where Intel says that MCE is not considered overclocking by them?

Even if so, it still goes above the clocks you get with just power limits lifted, and reviewers should still know that this is not "stock" and, first and foremost, should never tell their viewers/readers that it is.
And even more importantly, they should know that disabling MCE is not the same as setting Processor Base Power.

It's important to see the full-power, go-crazy settings, no argument there, but it's also important to show sane settings so that people can compare, and most importantly people have to be told which is which.
 
They warranty MCE. They rubber-stamped it completely with 12th gen, listing the boost power requirements with MCE in the official specs (Intel Core i9-12900K Processor, 30M Cache, up to 5.20 GHz, Product Specifications). They clearly consider it additional turbo boosting, not overclocking, as it doesn't go above rated clocks. For example, my 11900K never goes above 5.3GHz; it just holds boosts longer on more cores. What else do you want? There may be an official statement out there somewhere, but I don't think it's necessary for me to track it down, considering.

I wouldn't disagree that other people want to see other settings, but that's subjective analysis of each scenario and IMO is cherry-picking what suits you. I want to see each CPU as fast as it goes, under warranty. That should be fair to ask of everyone in a launch-day review.