News Core i9-13900K Creeps Behind Ryzen 9 7950X In Blender Benchmarks

There has been too much comparing of one CPU to another...when one might be a year old.

So, I guess it's official - in the CURRENT GENERATION (and granting Intel their generation, which has yet to be released), AMD is the NEW KING as far as the tests are able to show?
People are not thaaaaaaaat stupid...
And it would be the same the other way around. Actually, it was the other way around: with first-gen Ryzen, nobody bought Intel CPUs (or AMD CPUs either) because they waited to see exactly what Ryzen 1 would bring to the table.
It's the same thing now: judgement is open until 13th gen comes out, and that would be true even if it were still months away, let alone just a couple of weeks.
Zen 4 sales are terrible right now not only because of the price increase but because everybody is waiting to see what things will be like once the dust settles.
 
It will be interesting to see the eco mode 7950x vs the 13900.
7950x may win in productivity and lose in gaming as the previous gens did.
But it's not that interesting to me, since I already power-plan-limit my Intel CPU in Windows when I value silence over unneeded performance.
If you have an efficient volt/frequency curve set up, you can just limit the upper clocks. It persists over reboots, adds zero overhead, and you can change max clocks as fast as you can switch to a different power plan.

Edit: The admin command/terminal/powershell line to get Windows Power Plan to show the max clock option is:
powercfg -attributes SUB_PROCESSOR 75b0ae3f-bce0-45a7-8c89-c9611c25e100 -ATTRIB_HIDE
and if you have Alder Lake with e-cores active you will also need:
powercfg -attributes SUB_PROCESSOR 75b0ae3f-bce0-45a7-8c89-c9611c25e101 -ATTRIB_HIDE
For some reason Windows mixes up which entry belongs to the power-efficient cores, so you need both of them just for Alder (and Raptor). It won't hurt to unhide both options on other architectures, but it will clutter your processor power management section with a useless entry.
These won't raise clocks above your BIOS limit, but they can reduce them and the corresponding power consumption. I also don't get a perfect correlation between my P-core entry and the actual result, and it even varies between BIOS versions and overclocks, but it is proportional. I just load the CPU, check the clocks/power in HWiNFO and adjust until I get what I want.

It's been my favorite power saving method as of late and it seems like it would be handy for Zen4 temp control, but I have no idea if it works with them. I like to set up a new balanced power plan and name it whatever max frequency I've chosen for p-cores.
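For anyone who wants to try this, here's a minimal sketch of the whole sequence from an elevated prompt, applied to the currently active plan for brevity (the 4500 MHz cap is just an example value, pick whatever suits your chip):
powercfg /setacvalueindex SCHEME_CURRENT SUB_PROCESSOR 75b0ae3f-bce0-45a7-8c89-c9611c25e100 4500
powercfg /setacvalueindex SCHEME_CURRENT SUB_PROCESSOR 75b0ae3f-bce0-45a7-8c89-c9611c25e101 4500
powercfg /setactive SCHEME_CURRENT
The first line caps the P-core entry, the second covers the Power Efficiency Class 1 entry on Alder/Raptor Lake (harmless elsewhere), and the last re-applies the active plan so the cap takes effect immediately.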
I can only speak about the 7950X. I use the 105 W eco mode and performance is only a few percent lower in MT. In ST it's the same (or even a little higher in some benches). And that's great. At the same PPT as my former 5900X it is able to drive all 16 cores to the single-core PBO boost frequency of the 5900X. No need to set it higher. It destroys the previous gen even at the same TDP. That's a really good job. Raptor Lake should destroy Alder Lake at the same TDP too. So the only thing that would be interesting is comparing the 7950X and 13900K at 65 and 105 W.
 
I can only speak about the 7950X. I use the 105 W eco mode and performance is only a few percent lower in MT. In ST it's the same (or even a little higher in some benches). And that's great. At the same PPT as my former 5900X it is able to drive all 16 cores to the single-core PBO boost frequency of the 5900X. No need to set it higher. It destroys the previous gen even at the same TDP. That's a really good job. Raptor Lake should destroy Alder Lake at the same TDP too. So the only thing that would be interesting is comparing the 7950X and 13900K at 65 and 105 W.

Indeed, I would love to see this comparison too.
 
There has been too much comparing of one CPU to another...when one might be a year old.

So, I guess it's official - in the CURRENT GENERATION (and granting Intel their generation, which has yet to be released), AMD is the NEW KING as far as the tests are able to show?
And that's what I'm complaining about. Why can't an article just say that? Articles spent years calling Intel king when most people's rigs didn't even have the cooling to make it outperform AMD (especially since most rigs are prebuilt).

At least give AMD that until the new Intel chips actually release and get optimized.
 
That's Tom's H for you. They've been Intel's and Nvidia's biatches for a very long time. Once in a while you get an honest TH reviewer on here. Once in a while. Don't hold your breath.
Yep
I can only speak about the 7950X. I use the 105 W eco mode and performance is only a few percent lower in MT. In ST it's the same (or even a little higher in some benches). And that's great. At the same PPT as my former 5900X it is able to drive all 16 cores to the single-core PBO boost frequency of the 5900X. No need to set it higher. It destroys the previous gen even at the same TDP. That's a really good job. Raptor Lake should destroy Alder Lake at the same TDP too. So the only thing that would be interesting is comparing the 7950X and 13900K at 65 and 105 W.
I've been asking the Tom's writers to do a matching TDP benchmark for years, where both are capped and use a good air cooler in a closed case, or something totally reasonable like that. But we won't see that.
 
Things will really get interesting when we look at the wattages...😉
You will never see that review. There's ONE that I've found and nobody else will do it again.

Here's what Intel chips do in real life:
This guy bought an i9-12900K with an AIO cooler and it keeps throttling, falling shy of the quoted benchmark performance.

Intel's quoted performance comes after a 28% boost from increasing the TDP to 241W:
https://videocardz.com/newz/intel-c...5-12600k-last-minute-cinebench-results-leaked

Application performance at 125W: Alder Lake supposedly held the performance crown over the Ryzen 5950X, but it loses most tests until it's at a 190W TDP.
https://www.techpowerup.com/review/...er-lake-tested-at-various-power-limits/2.html

Of course, Intel doesn't even say "TDP" anymore. They made up new acronyms because their chips always exceed what they quote.

This article brags about Intel only lying by 12% when they drop power usage to the levels AMD is at (therefore, it would lose at benchmarks):
 
Intel's quoted performance comes after a 28% boost from increasing the TDP to 241W:
https://videocardz.com/newz/intel-c...5-12600k-last-minute-cinebench-results-leaked
Where is Intel's quote? What kind of performance did Intel ever quote?
Don't confuse 3rd-party benchmarks with Intel's quotes.
Application performance at 125W: Alder Lake supposedly held the performance crown over the Ryzen 5950X, but it loses most tests until it's at a 190W TDP.
https://www.techpowerup.com/review/...er-lake-tested-at-various-power-limits/2.html
So it still wins even when running well below its maximum setting...
You will never see that review. There's ONE that I've found and nobody else will do it again.

Here's what Intel chips do in real life:
Of course, Intel doesn't even say "TDP" anymore. They made up new acronyms because their chips always exceed what they quote.
AMD doesn't even use power in their TDP formula at all....
https://www.gamersnexus.net/guides/...lained-deep-dive-cooler-manufacturer-opinions
Intel switched to using power limits because they are much more accurate: if there is a limit and it's enforced, then power will not go above it.
If you have a TDP, it's just a thermal design point, which means that if you change the thermals you can make the chip run at whatever power you want. That's much less accurate, and it's the reason why every review would show "their chips always exceed what they quote".
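For reference, the formula that GamersNexus article walks through is roughly: AMD TDP (W) = (tCase°C - tAmbient°C) / θca, where θca is the heatsink-to-ambient thermal resistance of the cooler AMD assumes. Plugging in illustrative numbers (not tied to any specific SKU): a 61.8 °C tCase limit, 42 °C ambient and a 0.189 °C/W cooler give (61.8 - 42) / 0.189 ≈ 105 W. Change the assumed cooler and the headline "TDP" moves without the silicon changing at all, which is exactly why it says so little about actual power draw.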
This article brags about Intel only lying by 12% when they drop power usage to the levels AMD is at (therefore, it would lose at benchmarks):
Again, you have to show a link that shows what Intel's lie was.
If Intel said that the 12900K uses 240W and all the results are at 240W, then what exactly is the lie?
 
I am finally retiring my 2500K and EVGA Z68 FTW. Wanted to go big and straight to the 7950X, but sadly there are no AM5 boards from EVGA, so I am probably going 13900K and Z790 Classified. Oh boy, it's gonna be a hell of a jump.

I was also thinking 5950X and X570 FTW.
 
Of course they are recovering in multithreading; they are doing what AMD did with the first Ryzen series: adding more cores to overcome the difference.

Remember that the 13600K is a 14-core/20-thread part (6P + 8E), while the 7600X (6C/12T) has less than half the cores and not much more than half the threads, and the 7700X (8C/16T) has 6 cores fewer. It's a huge difference.
 
So, for people who are complaining about "having to disable e-cores" for gaming...

 
So, for people who are complaining about "having to disable e-cores" for gaming...

Test with one CCD disabled, 7950X
Yes, it's a bug in Windows 11's scheduler. It doesn't happen in the patched Linux kernel that is being rolled out, and the 7950X is about 10% faster there thanks to the removal of old Intel ACPI code. I'd imagine the same needs to happen in Win11.


In short, from what I've gathered from the limited information available, it's basically Win11 trying to force "bigLITTLE" scheduling on the CCDs.

Regards.
 
Yes, it's a bug in Windows 11's scheduler. It doesn't happen in the patched Linux kernel that is being rolled out, and the 7950X is about 10% faster there thanks to the removal of old Intel ACPI code. I'd imagine the same needs to happen in Win11.


In short, from what I've gathered from the limited information available, it's basically Win11 trying to force "bigLITTLE" scheduling on the CCDs.

Regards.
Your article says nothing about that, though, and it doesn't even say explicitly what "10% faster" means. 10% faster than what? Old, unpatched Linux performance? Windows? What is it? Completely useless without context. No comparison to Windows performance that I found, either. The article I linked notes high inter-CCD core latency, which afaik was an issue with Ryzen 5000 as well and won't likely vanish completely with fixes. They also mention higher all-core boost in general with one CCD disabled, which might or might not be due to the fewer cores getting more juice since half the "competition" for power consumption is missing. So I doubt it's as easy as you claim.
 
Your article says nothing about that, though, and it doesn't even say explicitly what "10% faster" means. 10% faster than what? Old, unpatched Linux performance? Windows? What is it? Completely useless without context. No comparison to Windows performance that I found, either. The article I linked notes high inter-CCD core latency, which afaik was an issue with Ryzen 5000 as well and won't likely vanish completely with fixes. They also mention higher all-core boost in general with one CCD disabled, which might or might not be due to the fewer cores getting more juice since half the "competition" for power consumption is missing. So I doubt it's as easy as you claim.
No need to nitpick that part. It's just an average for games; other tasks actually get bigger increases (in Linux), but I haven't seen benchmarks of that. I'll check Phoronix later, as I'm sure they'll have that information at some point soon, if not already there.

The important bit of the news is it's software and not a problem with the hardware, so it can be fixed via a patch. That's the main takeaway.

EDIT: Ironically enough "Clear Linux", an Intel-centric distro with bleeding edge code for Intel CPUs, has the changes already in and... Well, see for yourself: https://www.phoronix.com/review/zen4-clear-linux/6

Regards.
 
The important bit of the news is it's software and not a problem with the hardware, so it can be fixed via a patch. That's the main takeaway.
For both Intel with big.LITTLE and AMD with the two CCDs, it's a hardware issue.
The software just has to be made to work "with it and not against it": use the specifics as an advantage and not just blindly launch threads anywhere.

The regression on the 7950X is because they don't use "game mode" on it, and the regressions on the Intel chips are likewise because they don't correctly use the thread optimizer.
 
For both Intel with big.LITTLE and AMD with the two CCDs, it's a hardware issue.
The software just has to be made to work "with it and not against it": use the specifics as an advantage and not just blindly launch threads anywhere.

The regression on the 7950X is because they don't use "game mode" on it, and the regressions on the Intel chips are likewise because they don't correctly use the thread optimizer.
It's just hardware design choices that need to be accompanied by the right software tweaks, yes.

I think I'm failing to see your point or what you're trying to get at here?

Regards.
 
It's just hardware design choices that need to be accompanied by the right software tweaks, yes.

I think I'm failing to see your point or what you're trying to get at here?

Regards.
I'm just making small talk...
You can see it both ways: as a hardware problem, because they changed something that worked well, or as a software problem, because it didn't adapt fast enough to support the new hardware.
 
I'm just making small talk...
You can see it both ways: as a hardware problem, because they changed something that worked well, or as a software problem, because it didn't adapt fast enough to support the new hardware.
Hm... Well, yeah... It's one of those "chicken and egg" things. Or maybe "breaking the status quo", perhaps?

Either way, no radical hardware redesign is going to be "seamless" or without its downs (and ups, hence why you're making them). AMD demonstrated the first set of pains with chiplets and their cross-CCD latency (which still exists), and then Intel with bigLITTLE in the x86 world. I think Intel has been making better strides on the software side, for sure. I mean, they did help release Win11 after Microsoft said they'd stay on 10, lol. Now it's AMD's turn to make sure the Win11 kernel has the important bits for Zen4, which it seems they've been struggling with since its launch, and it's been quite a while already... I hope AMD gets their act together and solves these issues rather soon. Sure, it's not causing huge problems, but it's never a good outlook.

As for "Game Mode"; that's just a stupid solution that never really worked outside of ThreadRipper. I'd say because of the IF tweaks they've made with Ry7K, plus DDR5's extra bandwidth, I think the performance hit is way less noticeable now for CCDs. It's still there, for sure, but a 10% penalty for games only is not that bad? Is it comparable to Intel when game threads hit the E-cores, for instance? Either way, they need to fix it, since it was demonstrated in Linux that you can get that performance back.

Regards.
 
Hm... Well, yeah... It's one of those "chicken and egg" things. Or maybe "breaking the status quo", perhaps?

Either way, no radical hardware redesign is going to be "seamless" or without its downs (and ups, hence why you're making them). AMD demonstrated the first set of pains with chiplets and their cross-CCD latency (which still exists), and then Intel with bigLITTLE in the x86 world. I think Intel has been making better strides on the software side, for sure. I mean, they did help release Win11 after Microsoft said they'd stay on 10, lol. Now it's AMD's turn to make sure the Win11 kernel has the important bits for Zen4, which it seems they've been struggling with since its launch, and it's been quite a while already... I hope AMD gets their act together and solves these issues rather soon. Sure, it's not causing huge problems, but it's never a good outlook.

As for "Game Mode"; that's just a stupid solution that never really worked outside of ThreadRipper. I'd say because of the IF tweaks they've made with Ry7K, plus DDR5's extra bandwidth, I think the performance hit is way less noticeable now for CCDs. It's still there, for sure, but a 10% penalty for games only is not that bad? Is it comparable to Intel when game threads hit the E-cores, for instance? Either way, they need to fix it, since it was demonstrated in Linux that you can get that performance back.

Regards.
Performance loss from the e-cores is actually basically nonexistent when viewed over a larger sample of games than 3:
View: https://m.youtube.com/watch?v=RMWgOXqP0tc&feature=share


Even in the worst-case scenarios it's less than 10%, and those worst-case scenarios feature ridiculous FPS anyway. However, people tend to blow that up as if stuff suddenly becomes unplayable just because the e-cores are running. That is also an older test, meaning chances are that things have improved since. Yet "I have to deactivate e-cores for gaming, AMD doesn't have this issue!" is a very persistent myth that annoys me to no end. And it's part of the reason I posted that article here.
 
So, for people who are complaining about "having to disable e-cores" for gaming...

Ooooo, interesting. I hate it when these articles don't give full setup information, though. We NEED to have, at a minimum, MCLK and FCLK speeds.
TechPowerUp theorizes that this is, in part, due to power constraints. With CCD-1 disabled, CCD-0 gets the full 230W power budget.

https://www.techpowerup.com/299959/...higher-gaming-performance-with-a-ccd-disabled

However, this doesn't change my stance on Intel's E-cores. The "well, you do it too" argument is never a valid argument in my book. 😉


Hmmm, there also seems to be a false core affinity bug out there:

View: https://www.reddit.com/r/AMDHelp/comments/wcj6ol/false_core_affinity_with_the_new_dx11_driver_and/


Curiouser and curiouser. Thanks for the link @KyaraM. You've got me going down the rabbit hole now. 😆
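If anyone wants to check whether that affinity bug is hitting them, here's a rough PowerShell sketch (the process name and the 0xFFFF mask are placeholders, adjust for your game and core count):
$p = Get-Process -Name GameName              # hypothetical process name - substitute your game's
'{0:X}' -f $p.ProcessorAffinity.ToInt64()    # bitmask of the logical CPUs the process may run on
$p.ProcessorAffinity = 0xFFFF                # example: re-allow all 16 logical CPUs of an 8C/16T part
If the printed mask covers fewer logical CPUs than your chip has and you never set an affinity yourself, something (driver, launcher, Windows) has restricted it behind your back.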
 
Hm... I don't like quoting WTFBBQTech, but this reads as a legit leak:


Regards.
It probably is legit, but it also has the Ryzen with 6400 MHz RAM vs the Intel with 5200 MHz RAM...
And the 13th gen still wins by a landslide in at least some games and settings, while the Ryzen wins by a lot in the same games at other resolutions...
Funnily enough, Intel wins more often at higher resolutions even though it has the slower RAM.
In general this review is a huge mess that doesn't make any sense, at least to me it doesn't.
Example:
[Chart from the leak: AMD Ryzen 7 7700X vs Intel Core i7-13700K / Core i5-13600K (Raptor Lake), RDR2 FPS]
 
It probably is legit, but it also has the Ryzen with 6400 MHz RAM vs the Intel with 5200 MHz RAM...
And the 13th gen still wins by a landslide in at least some games and settings, while the Ryzen wins by a lot in the same games at other resolutions...
Funnily enough, Intel wins more often at higher resolutions even though it has the slower RAM.
In general this review is a huge mess that doesn't make any sense, at least to me it doesn't.
Example:
[Chart from the leak: AMD Ryzen 7 7700X vs Intel Core i7-13700K / Core i5-13600K (Raptor Lake), RDR2 FPS]
Min and Max "FPS" is basically the lower and top most numbers. They're not averages of either (not 99% cumulative nor 1% lows) and the average is stated in another set of graphs, for some reason. Makes it harder to read for sure, but not to terrible to interpret and add a bit more information

And the 7700X is tested with both 5200 and 6400 (orange and blue) to give a sense of DDR5 scaling, or something? Not sure.

Looking at the graphs, it's a coin toss, so it'll be a case of "grab whatever is cheaper for your needs", which is great. The caveat being this is Intel using DDR5, so with DDR4 it'll be slower.

In any case: this is a leak, so salt needs to be applied in abundance. Soy sauce if you want, with a bit of pepper (it's actually good, trust me 😀).

Regards.
 
Looking at their own FPS table at the end, I wonder what the author smoked. There they compare the 13700K vs the 7700X at the same RAM speed of 5200 MHz, and I really ask myself: in what parallel universe is 292 lower than 276, or 238 lower than 210, or 320 lower than 297 FPS? The first of each pair is the 13700K with DDR5-5200 and the second the 7700X at the same RAM speed. And that's 3 out of the 8 games tested there... of which 5 are a win for Intel and one is a draw (sorry, not counting a single-FPS difference...), and the total average is also in favor of Intel. The 13700K wins and the 13600K is tied with the 7700X. Even the 7700X with faster RAM loses against the 13700K in certain games. All was done at 1080p. And the 13600K manages to even beat the 13700K at times, which will likely make it the gaming king this gen. Again, what did they smoke? They should have tested all at the same speeds, or not at all. This is bulls.
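Just to put numbers on those three pairs, taking the figures quoted above at face value: 292 vs 276 is roughly a 5.8% lead for the 13700K, 238 vs 210 is about 13.3%, and 320 vs 297 is about 7.7%. None of those are margin-of-error gaps, which makes the way that table reads even stranger.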
 