News Intel's Core i9-13900KS Rips Ryzen 9 7950X In Early Benchmark


ottonis

Reputable
Jun 10, 2020
166
133
4,760
Not sure how representative CPU-Z is of real-life performance, but if it turns out that even the "normal" 13900K variant (not the KS) beats the Ryzen 7950X in single- and multithreaded tasks, then this might be followed by some price adjustments on AMD's part.

Of course, these findings might be a total hoax if the tester didn't match cooling, RAM speeds, etc. on both the Intel and AMD systems.
 

SunMaster

Commendable
Apr 19, 2022
159
136
1,760
If a game already tops out with extra cache, why would you think it would scale even more with core clocks?
Zen 4 with V-Cache could come out and possibly only match the 5800X3D because the game engine can't go any higher.
Zen 4 could also possibly gain even more performance with V-Cache, but either scenario is just as possible.

Interesting logic. Exactly why can't the game engine "go faster" if it's not GPU-bound? Cosmic radiation from AMD's factory?

Certain games benefit a lot from the 3D V-Cache. There is no reason they won't benefit from Zen 4's V-Cache. If the V-Cache used with Zen 4 is larger than with Zen 3, it is likely even more games will benefit from it.
 
  • Like
Reactions: TesseractOrion
Interesting logic. Exactly why can't the game engine "go faster" if it's not GPU-bound? Cosmic radiation from AMD's factory?
If the cache helps because data moves faster between the GPU and system RAM, then a better CPU will likely not increase speed even more.
I don't know why it helps; nobody has written an article about it.
We have often seen faster RAM help one group of CPUs and not the other; the same could happen here.
 

Eximo

Titan
Ambassador
There is only so much endlessly reusable code in any game engine. If the game isn't large or complex enough, there isn't anything useful to cache. Anything that changes constantly is going to come from RAM or storage based on gameplay. Only the core code that runs the game will persist in cache.
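To see the working-set effect in action, here's a quick pointer-chase you can run yourself. It's a minimal sketch, not a rigorous benchmark: the buffer sizes are assumptions about a typical L3, and Python's interpreter overhead blunts the gap, but the trend still shows once the data outgrows the cache.

```python
# Pointer-chase sketch: hop through a buffer in random order so the hardware
# prefetcher can't help; once the buffer outgrows L3, each hop is a DRAM trip.
import random
import time
from array import array

def ns_per_hop(size_bytes, steps=1_000_000):
    n = size_bytes // 8                   # 8-byte entries; the buffer is the working set
    order = list(range(n))
    random.shuffle(order)                 # one big random cycle through all entries
    nxt = array('q', [0]) * n
    for i in range(n - 1):
        nxt[order[i]] = order[i + 1]
    nxt[order[-1]] = order[0]
    i = 0
    t0 = time.perf_counter()
    for _ in range(steps):
        i = nxt[i]                        # each hop depends on the previous load
    return (time.perf_counter() - t0) / steps * 1e9

# 1 MiB fits in any modern L3; 64 MiB spills to DRAM on most chips
# (a 96MB V-Cache part could still hold it).
for mib in (1, 8, 64):
    print(f"{mib:3d} MiB working set: {ns_per_hop(mib << 20):6.1f} ns/access")
```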
 
  • Like
Reactions: KyaraM
Sep 25, 2022
28
13
35
Do we know anything about Intel updating its socket so the mounting pressure is more even? My understanding is that the current system results in uneven pressure across the chip, and so people have been using third-party fixes that void warranties. There's been so much fuss about it, I hope Intel addresses it.

Also, I've been thinking I'll grab the 12900KS if I can find a great deal when Raptor Lake is out, but the KS series has been laughably overpriced for a while now. A guaranteed winning ticket in the silicon lottery for an extra 200-ish bucks is not a good deal. Maybe 50-ish? My point is that even if the 13900KS mops the floor with AMD's stuff, which we will have to wait and see, it still doesn't mean you should buy it unless you're a pro overclocker or something. Which is a totally valid but very rare use case.
 
Do we know anything about Intel updating its socket so the mounting pressure is more even? My understanding is that the current system results in uneven pressure across the chip, and so people have been using third-party fixes that void warranties. There's been so much fuss about it, I hope Intel addresses it.
It's only poor if you want to push the CPU even farther than Intel and the mobo makers do to begin with.
The 12900K has TVB (Thermal Velocity Boost), which will keep boosting until it hits 100 degrees, so you need way-overpowered cooling to get anything below 100 degrees... or you just turn off TVB and apply a manual boost.
 
350+W power draw is "secret weapon"? wow!!! such amazeballs design strategy!
You go all the way up and you go all the way down...you have the best of both.
 
  • Like
Reactions: alceryes and KyaraM
I'll take care of the leftovers.
Would you like some chimichurri with your chapeau?? :ROFLMAO:

I have to admit, I will be impressed if these numbers hold.
What I really want to know is: do the E-cores still have a net negative performance effect in any workload scenarios? If they don't, and these numbers are even close to being true, I may have to stick with team blue! I would be amazed if, in one generation, Intel managed to turn around that waste of sand known as E-cores.
 
  • Like
Reactions: SunMaster

spongiemaster

Admirable
Dec 12, 2019
2,278
1,280
7,560
I was comparing the 13900KS to the 7950X. No, 350W is NOT better than 230W to get 18-25% faster than the 7950X. Not sure what kind of math you're talking about.
And I was comparing the 7950X to AMD's own 5950X. Not doing so hot, no pun intended, in the efficiency category at stock settings vs. its predecessor. If you want to be Captain Sarcasm about the 13900KS, which is months away from release with no official numbers, then we expect you to be equally Captain Sarcasm about the 7950X, which we do have numbers for and which is almost exactly as bad as you described vs. its predecessor.
 

knightofeffect

Honorable
May 20, 2016
7
1
10,515
If it beats the 7950 in multithreaded workloads I'll eat my hat.

Amen... I'm an Intel guy who has always built Core i9s for the heavy computational work my company does, but I just built a 7950X system and I have to say I have never seen a CPU do anything like this. I know it says it's 4.5GHz base and 5.7GHz boost, but in practice I don't know what the "4.5GHz" is referring to. Using a Corsair H150i 360mm cooling solution with completely stock settings, all 16 cores will simultaneously run at a sustained, constant 5.4GHz at a temperature of 88°C. Additionally, I get a 5.8GHz boost, not just on a single core, but on 2 cores in each CCD (so 4 cores total @ 5.8GHz boost). If you had told me last year that in the year of our lord 2022 there would be a 16-core processor capable of effectively a 5.4GHz base clock speed, I would have called you delusional.

I have seen several major publications rush to discount the 7950X and credit the 13900K... I think Intel might be scared: looking at the specs of the 13900K, with largely the same architecture as the 12900K, I don't see how it actually competes with the multithreaded performance of the 7950X.

Here are the early returns from Passmark (most comprehensive CPU benchmark IMO): https://www.cpubenchmark.net/compare/Intel-i9-13900K-vs-AMD-Ryzen-9-7950X-vs-Intel-i9-12900KS/5022vs5031vs4813

My own Passmark bench got 66700 stock.
 
  • Like
Reactions: -Fran-

Eximo

Titan
Ambassador
Without actual power measurements, it's going to be tough to use Passmark on these new CPUs.

If you look at the individual data for the 7950X, there are some abysmally low scores.

Intel's isn't showing yet; there are only 4 samples compared to AMD's 33. I would wait and see on that one, but I think cooling and power limits are going to be too much of a factor. OEM machines will probably lock them at silly values, and people who insist on a Hyper 212 Evo aren't going to see the max capability.
 
but I think cooling and power limits are going to be too much of a factor. OEM machines will probably lock them at silly values, and people who insist on a Hyper 212 Evo aren't going to see the max capability.
That's going to be true for both now, though, so same difference.
Here are the early returns from Passmark (most comprehensive CPU benchmark IMO): https://www.cpubenchmark.net/compare/Intel-i9-13900K-vs-AMD-Ryzen-9-7950X-vs-Intel-i9-12900KS/5022vs5031vs4813

My own Passmark bench got 66700 stock.
Any idea how they come up with those MAX TDP numbers? 150W for the 12900K? 125W for the 13900K?
The 125W at least is one of the standard TDP numbers, but the 150W for the 12900K is completely out of nowhere.
 
  • Like
Reactions: KyaraM

KyaraM

Admirable
Amen... I'm an Intel guy who has always built Core i9s for the heavy computational work my company does, but I just built a 7950X system and I have to say I have never seen a CPU do anything like this. I know it says it's 4.5GHz base and 5.7GHz boost, but in practice I don't know what the "4.5GHz" is referring to. Using a Corsair H150i 360mm cooling solution with completely stock settings, all 16 cores will simultaneously run at a sustained, constant 5.4GHz at a temperature of 88°C. Additionally, I get a 5.8GHz boost, not just on a single core, but on 2 cores in each CCD (so 4 cores total @ 5.8GHz boost). If you had told me last year that in the year of our lord 2022 there would be a 16-core processor capable of effectively a 5.4GHz base clock speed, I would have called you delusional.

I have seen several major publications rush to discount the 7950X and credit the 13900K... I think Intel might be scared: looking at the specs of the 13900K, with largely the same architecture as the 12900K, I don't see how it actually competes with the multithreaded performance of the 7950X.

Here are the early returns from Passmark (most comprehensive CPU benchmark IMO): https://www.cpubenchmark.net/compare/Intel-i9-13900K-vs-AMD-Ryzen-9-7950X-vs-Intel-i9-12900KS/5022vs5031vs4813

My own Passmark bench got 66700 stock.
Uhm. You do know that literally all CPUs nowadays operate like that, right? There are several boost "stages" depending on how many cores are loaded. It's been like that for quite a while now. Clock speed will improve on Intel, too, and the 12900KS already ran at over 5GHz boost, so I don't get why that is impressive now but wasn't last gen. Btw, the base clock is simply the only clock rate the manufacturer, be it Intel or AMD, guarantees. Boost can be different for each chip depending on quality; that's why it says "up to" on the marketing slides. However, boost can exceed even that value depending on cooling, mainboard settings, etc. This again is the same for both AMD and Intel. Yes, the multi-core performance of the 7950X is impressive. It's essentially the only impressive thing about the new series. But it's not unbeatable, and Intel hasn't even released yet. That's why we need to wait for a verdict.
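For anyone wondering what those boost "stages" look like, the idea is just a per-core-count lookup plus thermal and power guards. A minimal sketch; the bin table and thresholds below are made-up illustrative values, not Intel's or AMD's actual fused bins:

```python
# Hypothetical per-core-count boost bins: active cores -> max clock in GHz.
# Real firmware also weighs package power, current, and voltage limits.
BOOST_BINS_GHZ = {2: 5.8, 4: 5.6, 8: 5.4, 16: 5.2}

def target_clock_ghz(active_cores, temp_c, base_ghz=4.5, throttle_at_c=95):
    # The base clock is the only guaranteed floor; bins apply while cool enough.
    if temp_c >= throttle_at_c:
        return base_ghz
    for cores in sorted(BOOST_BINS_GHZ):       # smallest bin that covers the load
        if active_cores <= cores:
            return BOOST_BINS_GHZ[cores]
    return base_ghz

print(target_clock_ghz(1, 70))    # lightly threaded and cool -> 5.8
print(target_clock_ghz(16, 88))   # all-core load, still under the limit -> 5.2
print(target_clock_ghz(16, 97))   # too hot -> falls back to the 4.5 base clock
```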
 
I just wanted to bring this back to the fore to throw a little 'salt' on the great performance increase (over the Ryzen 5000 and 12th gen Intel) touted by Intel.
Many here know not to trust first-party reviews, whether from AMD, Intel, NVIDIA, or whoever, due to blatant inaccuracies or non-like-for-like performance comparisons. Well, Intel has done it again.

In their 'Leadership Gaming performance' slide (shown below), Intel shows a decent performance increase over their own 12th gen CPU and AMD CPUs. When you read the fine print, however, you'll see that they're pitting the 13900K with 5600MT/s RAM against the 12900K with 4800MT/s RAM. Depending on the game, this can leave anywhere from 2-10% of performance on the table in certain scenarios. The AMD systems were also run with only 3200MT/s RAM, no RAM timings listed, AND the Ryzen CPUs were limited to a PL of 105W (basically, PBO disabled).

Now, Intel (and many others) may be quick to point out that the RAM used is what was officially supported, and that's fine (I guess), but it is well known that Ryzen 5000 CPUs get a substantial boost from running memory up to 3600-3800MT/s. What about the 105W limit on the Ryzen CPUs though?! I guess AMD's auto-overclocking isn't supported by Intel, but Intel's auto-overclocking is?? Totally ridiculous! It turns the below graph into a completely meaningless slide.
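For rough scale on the memory gap, here's the raw math behind those RAM configurations (a back-of-the-envelope sketch; theoretical peak bandwidth is only a loose proxy for game performance, which is why the realistic cost is "2-10%" rather than the full paper gap):

```python
# Theoretical peak bandwidth: transfers/s x 8 bytes per transfer per channel,
# dual-channel assumed for all three testbeds in Intel's slide.
def peak_gb_s(mt_per_s, channels=2, bytes_per_transfer=8):
    return mt_per_s * 1e6 * channels * bytes_per_transfer / 1e9

for label, mts in [("DDR5-5600 (13900K)", 5600),
                   ("DDR5-4800 (12900K)", 4800),
                   ("DDR4-3200 (Ryzen) ", 3200)]:
    print(f"{label}: {peak_gb_s(mts):5.1f} GB/s")

print(f"5600 vs 4800 MT/s: {5600 / 4800 - 1:+.1%} raw bandwidth on paper")
```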

So, if you're on an Intel 12th gen OR an AMD Ryzen 5000 CPU, just know that 13th gen does NOT give as great a boost over the previous gen as Intel is making it seem. As always, wait for multiple, reputable, third-party reviews from reviewers who actually take other factors out of the equation and test the real CPU generational performance increase as we'll see it in the real world.

https://uploads.disquscdn.com/image...0383ad177aa624aa43ec1ed2e46639d21547dcaf1.jpg
 
I just wanted to bring this back to the fore to throw a little 'salt' on the great performance increase (over the Ryzen 5000 and 12th gen Intel) touted by Intel.
Many here know not to trust first-party reviews, whether from AMD, Intel, NVIDIA, or whoever, due to blatant inaccuracies or non-like-for-like performance comparisons. Well, Intel has done it again.

In their 'Leadership Gaming performance' slide (shown below), Intel shows a decent performance increase over their own 12th gen CPU and AMD CPUs. When you read the fine print, however, you'll see that they're pitting the 13900K with 5600MT/s RAM against the 12900K with 4800MT/s RAM. Depending on the game, this can leave anywhere from 2-10% of performance on the table in certain scenarios. The AMD systems were also run with only 3200MT/s RAM, no RAM timings listed, AND the Ryzen CPUs were limited to a PL of 105W (basically, PBO disabled).

Now, Intel (and many others) may be quick to point out that the RAM used is what was officially supported, and that's fine (I guess), but it is well known that Ryzen 5000 CPUs get a substantial boost from running memory up to 3600-3800MT/s. What about the 105W limit on the Ryzen CPUs though?! I guess AMD's auto-overclocking isn't supported by Intel, but Intel's auto-overclocking is?? Totally ridiculous! It turns the below graph into a completely meaningless slide.

So, if you're on an Intel 12th gen OR an AMD Ryzen 5000 CPU, just know that 13th gen does NOT give as great a boost over the previous gen as Intel is making it seem. As always, wait for multiple, reputable, third-party reviews from reviewers who actually take other factors out of the equation and test the real CPU generational performance increase as we'll see it in the real world.

https://uploads.disquscdn.com/image...0383ad177aa624aa43ec1ed2e46639d21547dcaf1.jpg
So your problem is that a company only uses settings that will not void your warranty?
For the last three or more years, all we have been hearing is out-of-the-box this and out-of-the-box that... now that this isn't in the best interest of some people anymore, suddenly it's all about tuning everything to the best possible point...
How do you get from the slide that they are using auto-overclock on Intel?
(Or the 105W TDP on Ryzen)
If you are still talking about the RAM, 5600MT/s is the official spec from Intel.
https://www.intel.com/content/www/u...-36m-cache-up-to-5-80-ghz/specifications.html
 
  • Like
Reactions: KyaraM
So your problem is that a company only uses settings that will not void your warranty?
For the last three or more years, all we have been hearing is out-of-the-box this and out-of-the-box that... now that this isn't in the best interest of some people anymore, suddenly it's all about tuning everything to the best possible point...
How do you get from the slide that they are using auto-overclock on Intel?
(Or the 105W TDP on Ryzen)
If you are still talking about the RAM, 5600MT/s is the official spec from Intel.
https://www.intel.com/content/www/u...-36m-cache-up-to-5-80-ghz/specifications.html

I have a problem with misleading the public to make something appear better than it really is, yes. It doesn't matter who the manufacturer is. I do have a problem with the memory speed limits imposed, but I can at least understand their reasoning for the limit. I don't, however, like that they didn't divulge at least the CL of the AMD RAM as they did with both Intel chips. If they used modules with slow RAM timings for AMD, then that just compounds the slowdown from using 3200MT/s RAM.

However, there's no reason whatsoever to limit the 5800X3D or the 5950X to 105W PL when up to 142W is fully supported by the manufacturer. One can only guess at the real numbers if the AMD chips were allowed to run at default settings out of the box. 105W PL is a manual (definitely NOT out-of-the-box) limit for the motherboard used - not the default. Intel specifically reduced performance through non-default settings changes.

Start at 6 mins for the info.

View: https://www.youtube.com/watch?v=eAa41vVclGA
 

King_V

Illustrious
Ambassador
And I was comparing the 7950X to AMD's own 5950X. Not doing so hot, no pun intended, in the efficiency category at stock settings vs. its predecessor. If you want to be Captain Sarcasm about the 13900KS, which is months away from release with no official numbers, then we expect you to be equally Captain Sarcasm about the 7950X, which we do have numbers for and which is almost exactly as bad as you described vs. its predecessor.

Right, you completely moved the goalposts to another topic, and to information from another source on top of that. At some point I'll go see what GN is talking about, exactly, but the information you picked out seems to clash with the Tom's Hardware review, which is pretty explicit in touting the efficiency gains of Zen 4 over Zen 3.

But, as I said, moving the goalposts. In this case, as you are quite well aware, I was referencing the boost of the same chip itself, not comparing it to another chip from a different generation with differing architecture.

It's not exactly unusual for Intel - cranking the power draw to insane levels and pushing clock speeds as hard as they can just to squeeze out that last few percent of performance.
 
However, there's no reason whatsoever to limit the 5800X3D or the 5950X to 105W PL when up to 142W is fully supported by the manufacturer.
That is the 105W TDP...
142W is the PPT, the power the CPU can actually draw at a 105W TDP...
So you are angry because Intel doesn't sit you down like a little kid and explain to you how AMD names things...
A Ryzen CPU limited to 105W TDP draws up to 142W from the mobo... because reasons...

https://www.gamersnexus.net/guides/3491-explaining-precision-boost-overdrive-benchmarks-auto-oc
We’ll quote directly from AMD’s review documentation so that there is no room for confusion:
Package Power Tracking (“PPT”): The PPT threshold is the allowed socket power consumption permitted across the voltage rails supplying the socket. Applications with high thread counts, and/or “heavy” threads, can encounter PPT limits that can be alleviated with a raised PPT limit.

  1. Default for Socket AM4 is at least 142W on motherboards rated for 105W TDP processors.
  2. Default for Socket AM4 is at least 88W on motherboards rated for 65W TDP processors.
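Quick arithmetic on those quoted defaults: the AM4 package power limit works out to 1.35x the rated TDP (a sanity check of the numbers above, not new information):

```python
# PPT = 1.35 x TDP on AM4, per the AMD documentation quoted above.
for tdp_w in (105, 65):
    print(f"{tdp_w}W TDP -> {tdp_w * 1.35:.0f}W PPT")
# 105W TDP -> 142W PPT  (the 5950X / 5800X3D limit shown in Intel's slide)
# 65W TDP  -> 88W PPT
```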
 
  • Like
Reactions: alceryes
That is the 105W TDP...
142W is the PPT, the power the CPU can actually draw at a 105W TDP...
So you are angry because Intel doesn't sit you down like a little kid and explain to you how AMD names things...
A Ryzen CPU limited to 105W TDP draws up to 142W from the mobo... because reasons...

https://www.gamersnexus.net/guides/3491-explaining-precision-boost-overdrive-benchmarks-auto-oc

Crap. Missed that. But yes, this is still comparing apples to oranges.
Let me ask you this: do you think that testing a 13900K with a PL1 of 253W and 5600MT/s RAM gives a good CPU performance comparison against a 5800X3D with a PL1 of 105W and 3200MT/s RAM?

If you do, then there's no need to discuss further. I definitely do not think it's a good CPU performance comparison, so we'll just have to agree to disagree.
 
Crap. Missed that. But yes, this is still comparing apples to oranges.
Let me ask you this: do you think that testing a 13900K with a PL1 of 253W and 5600MT/s RAM gives a good CPU performance comparison against a 5800X3D with a PL1 of 105W and 3200MT/s RAM?

If you do, then there's no need to discuss further. I definitely do not think it's a good CPU performance comparison, so we'll just have to agree to disagree.
If you think that it's fair and apples-to-apples to compare a CPU with a huge cache against a normal CPU... For a CPU GAMING performance comparison, and that's what the pic you posted is, it only has games, and the 5800X3D has an "unfair" advantage there due to the cache, which doesn't give any better performance in actual CPU workloads.
They tested everything at the best settings that keep your warranty intact; you can't ask for more.
 

blkspade

Commendable
Nov 12, 2021
4
0
1,510
If the cache helps because data moves faster between the GPU and system RAM, then a better CPU will likely not increase speed even more.
I don't know why it helps; nobody has written an article about it.
We have often seen faster RAM help one group of CPUs and not the other; the same could happen here.

You clearly don't understand what's going on with the CPU when it comes to cache. It has nothing to do with GPU communication directly. The CPU is doing work to eventually send to the GPU, but the CPU's ability to do that work can be hindered by needing to fetch more data from RAM. More L3 cache reduces the frequency of those fetches, so the CPU isn't left waiting and can do more work per clock cycle. So everything there is in a faster, better CPU gets amplified by the increased cache. This is why the 5800X3D, in spite of having far lower IPC and clock speed, can beat or compete with Alder Lake in some/many games. Areas where it is ineffective are solely those where the app never needed additional fetches to RAM, because the data completely fit in the cache of non-V-Cache chips. When that isn't the bottleneck, IPC and clocks win. Everything the 5800X3D improved will be much further improved by the Zen 4 versions. Even better, it's very likely they'll take less of a hit to clocks.
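To put rough numbers on that reasoning, here is an illustrative average-memory-access-time calculation; every figure is an assumption for the sake of the example, not a measurement of any real chip:

```python
# AMAT = L3_hit_time + L3_miss_rate * DRAM_penalty
def amat_ns(l3_hit_ns, l3_miss_rate, dram_penalty_ns):
    return l3_hit_ns + l3_miss_rate * dram_penalty_ns

regular = amat_ns(10, 0.20, 80)   # smaller L3: 20% of accesses spill to DRAM
vcache  = amat_ns(12, 0.05, 80)   # 3x the L3, slightly slower, far fewer misses
print(f"regular L3: {regular:.1f} ns | stacked V-Cache: {vcache:.1f} ns")
# 26.0 ns vs 16.0 ns: a slightly slower but much larger cache still wins on
# average, unless the working set already fit, in which case nothing changes.
```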
 
You clearly don't understand what's going on with the CPU when it comes to cache. It has nothing to do with GPU communication directly. The CPU is doing work to eventually send to the GPU, but the CPU's ability to do that work can be hindered by needing to fetch more data from RAM. More L3 cache reduces the frequency of those fetches, so the CPU isn't left waiting and can do more work per clock cycle. So everything there is in a faster, better CPU gets amplified by the increased cache. This is why the 5800X3D, in spite of having far lower IPC and clock speed, can beat or compete with Alder Lake in some/many games. Areas where it is ineffective are solely those where the app never needed additional fetches to RAM, because the data completely fit in the cache of non-V-Cache chips. When that isn't the bottleneck, IPC and clocks win. Everything the 5800X3D improved will be much further improved by the Zen 4 versions. Even better, it's very likely they'll take less of a hit to clocks.
All the games are made for consoles, and consoles have a shared pool of RAM between the CPU and GPU... even if this isn't directly the issue, it's still the issue.
ReBAR improves the data transfer between RAM and VRAM; I guess more cache does the same.

Think about it: do you really think the CPU benchmarks ALL fit completely into the normal cache? Because otherwise we would see huge increases in any CPU benchmark that doesn't fit into the normal cache as well.
 
They tested everything at the best settings that keep your warranty intact; you can't ask for more.
I can definitely ask for more. Luckily, we have third-party reviewers who will test real-world performance as applicable to most of the custom-build audience. Another question for you: do you think a 253W PL1 was without ANY auto-overclocking?? Intel's own whitepapers say that 253W is the CPU's maximum turbo power.

So yes, the Intel 13900K and 12900K chips were allowed to boost, and both the 5950X and 5800X3D were NOT allowed to boost, as they were held to 105W.
If anything, this shows just how close the 5950X and 5800X3D will be when they are allowed to perform at default settings on most custom-build systems.
 
I can definitely ask for more. Luckily, we have third-party reviewers who will test real-world performance as applicable to most of the custom-build audience. Another question for you: do you think a 253W PL1 was without ANY auto-overclocking?? Intel's own whitepapers say that 253W is the CPU's maximum turbo power.

So yes, the Intel 13900K and 12900K chips were allowed to boost, and both the 5950X and 5800X3D were NOT allowed to boost, as they were held to 105W.
If anything, this shows just how close the 5950X and 5800X3D will be when they are allowed to perform at default settings on most custom-build systems.
I thought we already agreed that a 105W TDP Ryzen auto-overclocks to 142W...
Intel at least states that 253W is the max boost power; AMD keeps the 142W number as hidden as possible.
That is the 105W TDP...
142W is the PPT, the power the CPU can actually draw at a 105W TDP...
So you are angry because Intel doesn't sit you down like a little kid and explain to you how AMD names things...
A Ryzen CPU limited to 105W TDP draws up to 142W from the mobo... because reasons...

https://www.gamersnexus.net/guides/3491-explaining-precision-boost-overdrive-benchmarks-auto-oc

We’ll quote directly from AMD’s review documentation so that there is no room for confusion:
Package Power Tracking (“PPT”): The PPT threshold is the allowed socket power consumption permitted across the voltage rails supplying the socket. Applications with high thread counts, and/or “heavy” threads, can encounter PPT limits that can be alleviated with a raised PPT limit.

  1. Default for Socket AM4 is at least 142W on motherboards rated for 105W TDP processors.
  2. Default for Socket AM4 is at least 88W on motherboards rated for 65W TDP processors.
 
