News AMD's Ryzen 9000 single-core performance again impresses in early Geekbench results — 9700X, 9600X dominate previous-gen AMD and Intel CPUs


Thunder64

Distinguished
Mar 8, 2016
172
252
18,960
So what is TDP in your mind?! Because TDP for the 14900k is 125W.

Yes, ever since GPUs can do it better and a lot quicker, CPU transcoding has become a purely server thing; everybody else does it with a GPU.

Not to mention that 125W is way way way way way below the guidelines...

That's my point!
Unless you are running a server, and I do bunch professional workloads into the same category, you will almost never stress all the cores.

Yes, I already agreed to that the previous time you brought it up: if you only do server-type workloads, Ryzen is great.

But you are the only one that has been arguing that point at all.
From the beginning I said:

Intel has been playing TDP games for a while now. Show me an out-of-the-box 14900k, or a review, that runs at 125W by default.

Yes, transcoding is done on CPUs, but that doesn't mean it can't be used as a benchmark for CPUs.

All of these people buying Ryzen (6 out of the top 10, 12 out of the top 20) must surely be doing "server work" then. That's like saying all those buying X3Ds are only playing games.

Your arguments are paper thin. At first it was fun proving you wrong, now it has just gotten tiresome.
 

Thunder64

Distinguished
Mar 8, 2016
172
252
18,960
Yeah, the very bad and thick IHS on the AMD 7000 series CPUs traps in a lot of extra heat. One thing to note: just because a CPU's core temps are comparatively higher or lower doesn't change the fact that it's the wattage going into the CPU that gets dumped into the room as heat. For instance, an AMD CPU at 90C but 100W usage compared to an Intel CPU at 80C but 300W usage means the Intel CPU is dumping 200% more heat into the surrounding case and room.

So many people fail to understand that. It's why I mentioned the Easy-Bake oven: it literally makes food! And at least some of the 100W from the light bulb they used went to light and not just heat (though not much). If you live in a warmer area, that extra heat adds up.
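The arithmetic behind that point is just conservation of energy: essentially all the package power a CPU draws ends up as heat in the room, whatever the die temperature reads. A minimal sketch, using the wattages from the quoted example:

```python
def extra_heat_pct(p_low_watts: float, p_high_watts: float) -> float:
    """Percent more heat the higher-draw CPU dumps into the room.

    Essentially all electrical power a CPU draws leaves it as heat;
    die temperature only tells you how fast heat exits the die, not
    how much of it there is.
    """
    return (p_high_watts - p_low_watts) / p_low_watts * 100.0

# The quoted example: a 100 W AMD chip vs a 300 W Intel chip.
print(extra_heat_pct(100, 300))  # -> 200.0, i.e. 200% more heat
```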
 
  • Like
Reactions: helper800
But you are the only one that has been arguing that point at all.
From the beginning I said:
And as I have said numerous times: if an application or game can only address 1 or 2 cores, that means single-threaded performance is predominantly determinative of performance.

You have made several claims with different language as shown below:
How is it supposed to beat the 14900ks if it doesn't even beat the 14600k in multithreaded?!
The single-core scores are with only one core actually doing any work; you won't find that while gaming anymore.
That is much different from what I'm saying: clock speeds rapidly drop when more than a single core is doing work.
You can see that from the chart by comparing single to multi scores.

Sure, but so are single-core scores, even more so since more games now use many cores rather than few.
That's what I said from the beginning: ST performance, especially in only one bench, doesn't represent gaming performance.

No, it's the ST performance THAT YOU CAN GET WHEN THE WHOLE CPU IS RUNNING that will be determinative of gaming performance, or at least when several cores are doing something and not just one.
Don't forget that, unlike Intel, you can't just overclock the hell out of a Ryzen CPU to get the same single-core clock on all (P) cores.

There is not a single single-threaded game left that is being benchmarked, or even played by the mainstream.
Not a single game is completely ST; many have a big bias toward one or two cores, but the rest still do plenty of work. And that is the main thing here: as soon as the other cores do even a little work, the clocks go down to whatever multiplier is set for that number of cores.
A similar thing was said at least 6 different ways, and none of them mean the same thing. My entire argument was that the vast, vast majority of games can only address 1 or 2 cores, and I have explained over and over how that is fundamentally different from core usage. If a game can only address 1 or 2 cores at any given time, then ST performance is still predominantly the bulk of the workload for that game. There are games that can address more than 2 cores simultaneously, but only a very small handful of them. And as I have explained, you will see core usage across an entire CPU even if something can only address a couple of cores at the same time.
 
Last edited:

TheHerald

Notable
Feb 15, 2024
1,107
312
1,060
Yeah, the very bad and thick IHS on the AMD 7000 series CPUs traps in a lot of extra heat. One thing to note: just because a CPU's core temps are comparatively higher or lower doesn't change the fact that it's the wattage going into the CPU that gets dumped into the room as heat. For instance, an AMD CPU at 90C but 100W usage compared to an Intel CPU at 80C but 300W usage means the Intel CPU is dumping 200% more heat into the surrounding case and room.
I don't care about the room, though; that's trivial to cool. The hard part is cooling the chip without going to a 360mm AIO.
 
I don't care about the room, though; that's trivial to cool. The hard part is cooling the chip without going to a 360mm AIO.
I mean, what does it matter whether the CPUs are at 60C, 70C, 80C, 90C, 100C or more, as long as they're within TJmax, if it's not about performance or the heat being dumped into the room? CPU temperatures primarily affect performance now, and nothing more; electromigration on modern chips is not realistically a threat anymore. Or is it just that you want the CPU to run cooler because the old ways taught us to hyperfocus on temps?
 
So what is TDP in your mind?! Because TDP for the 14900k is 125W.
For Intel, TDP is the power draw when ALL cores are active at base frequency. This is absolutely useless, as CPUs boost all the time and Intel wanted, by default, to allow chips unlimited power. Don't try to say that they didn't and that it was the motherboard manufacturers, because had Intel actually cared they would have stopped the practice YEARS ago. Only after the bad press did Intel put a stop to it, throwing the motherboard manufacturers under the bus at the same time. If you design an Intel 14900k build around a 125W TDP and go with a cooler rated for 50% beyond TDP (190W), you will be severely limiting your CPU's performance in MT workloads. Countless reviews show the Intel chips hitting 250W+ in MT applications. You will be throttling the CPU quite often and probably leaving 10% or more performance on the table.

AMD has TDP and PPT. PPT is the total package power, covering the cores, Infinity Fabric, and IO die. My guess is that TDP is specific to the cores alone, but I cannot say for sure. Overall, PPT is the important one, and it is 35% higher than TDP. An AMD chip with a 120W TDP (7950X3D) has a 162W PPT. Applying the same idea as above, a cooler rated for 190W will not limit your AMD chip, as the PPT is only 162W. That usually means you will not be throttling at all. It also means you don't need as expensive a cooling solution, so you can put that $100 or so to use in another area of the computer.

In conclusion, TDP for Intel is worthless. Intel knows they should be saying their chips have a TDP equal to their PL2, or maybe PL1, but that looks VERY bad in marketing, so they don't. Instead they give you worthless numbers and hope you don't realize that the stated TDP isn't actually the TDP.
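To make that cooler math concrete, here is a minimal sketch of the arithmetic; the 1.35 PPT/TDP ratio is the figure quoted above, and the 190W cooler rating is the hypothetical from the post:

```python
AMD_PPT_RATIO = 1.35  # the PPT = 1.35 x TDP ratio cited above

def amd_ppt(tdp_watts: float) -> float:
    """Package Power Tracking limit implied by an AMD TDP."""
    return tdp_watts * AMD_PPT_RATIO

def cooler_keeps_up(cooler_rating_watts: float, sustained_draw_watts: float) -> bool:
    """True if the cooler's TDP rating covers the CPU's sustained package power."""
    return cooler_rating_watts >= sustained_draw_watts

print(amd_ppt(120))                        # 162.0 W PPT for a 120 W TDP part
print(cooler_keeps_up(190, amd_ppt(120)))  # True: a 190 W cooler covers the PPT
print(cooler_keeps_up(190, 253))           # False: 253 W+ sustained MT draw throttles
```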
 

NinoPino

Respectable
May 26, 2022
438
264
2,060
With the same amount of cooling, the 7950X maxes out, reaching 94 degrees of its 95-degree limit at 215W of its supposed 230W maximum, while Intel reaches 85 degrees of its 100-degree limit at 330W, well past the 253W maximum.
Max temperature is not so relevant, considering that AMD CPUs are stable while Intel's are unstable, as demonstrated recently. And anyway, the 7950X is more performant while using half the power of the 14900K where multithreading is heavily used.

Ryzen, at least the 7950x, is completely maxed out at stock settings, while the 13900k runs 30% above stock and is still cooler.

https://www.anandtech.com/show/17641/lighter-touch-cpu-power-scaling-13900k-7950x/3
[Chart: AnandTech 13900K vs 7950X power scaling]


So you do agree that it is very hard to keep up single core clocks when many cores are active...
Not from a single piece of software (yCruncher), and with only power and temperature graphs, without performance. Where is the performance graph associated with that test?
If you want serious tests, look at this:
https://www.phoronix.com/review/intel-14600k-14900k-linux/11
Also, you are arguing that hyperthreading (and SMT would be the same) uses power while not doing anything at all...
If hyperthreading is not used in a core, the power consumption overhead is barely relevant, I suppose. Do you have specific tests that prove the contrary?
... and my argument is that additional unused cores will do that as well, let alone if they have a decent amount of load on them, which I showed that they do.
SMT was introduced to take advantage of logic that would otherwise sit unused but powered. So, short of very extensive and fine-grained clock gating, SMT can in some ways enhance efficiency.
 
Sep 25, 2023
2
0
10
the crusty old chestnut about how game developers "poorly optimize" for parallelization* and how "single thread/core performance tends to matter most" is either trending rapidly toward obsolescence or is already there

indeed, it's likely only valid when you deliberately include legacy titles or those of recent years (vs forward-looking titles only)

*a term i use loosely to encompass any/all available advantages provided by the microarchitecture, i.e. not merely hyper-/multithreading, but also SpecExec and the like, as well as AVX or similar, DMA or similar, etc.
 

TheHerald

Notable
Feb 15, 2024
1,107
312
1,060
Intel has been playing TDP games for a while now. Show me an out-of-the-box 14900k, or a review, that runs at 125W by default.

Yes, transcoding is done on CPUs, but that doesn't mean it can't be used as a benchmark for CPUs.

All of these people buying Ryzen (6 out of the top 10, 12 out of the top 20) must surely be doing "server work" then. That's like saying all those buying X3Ds are only playing games.

Your arguments are paper thin. At first it was fun proving you wrong, now it has just gotten tiresome.
The last bastion of anti-Intel propaganda: the "what most users do" argument. Kind sir, why do YOU specifically care what most users do? You know how to set power limits, and you've already been shown that the 14900k is the fastest across a wide range of applications while sipping power, so what exactly is your issue with it?
 
Since I don't want to keep editing the same post: from that AnandTech article, in Blender the 7950X at 65W (really 88W) beats the 14900k at 125W. Makes the image from your link look a bit sad, doesn't it? That's why I said earlier that you can't use a review where Intel gets power-limit tests but AMD doesn't, unless you are comparing both at stock.



Like I said, AMD used stupid TDPs. Performance hardly drops off until you really limit it. Intel falls off much faster without the extra watts.
Yeah, I already agreed several times that Ryzen is better if you look exclusively at one type of workload (3D rendering/transcoding/fluid dynamics/etc.), stuff that is extremely rarely run on a desktop by normal people.
The power-limited TechPowerUp benchmark shows that at the 125W Intel chose as TDP, their CPU is extremely competitive in performance and power draw for people who do more than those things.

Also, don't mix and match bench results; TechPowerUp has Blender results you can use.
[Chart: TechPowerUp Blender results]

Yeah, the very bad and thick IHS on the AMD 7000 series CPUs traps in a lot of extra heat. One thing to note: just because a CPU's core temps are comparatively higher or lower doesn't change the fact that it's the wattage going into the CPU that gets dumped into the room as heat. For instance, an AMD CPU at 90C but 100W usage compared to an Intel CPU at 80C but 300W usage means the Intel CPU is dumping 200% more heat into the surrounding case and room.
As above, it is possible, and allowed, to limit the chip to 125W, at which point you will still have extremely good performance, unless you need a server CPU, and your room will get less heat than with a Ryzen.
Intel has been playing TDP games for a while now. Show me an out-of-the-box 14900k, or a review, that runs at 125W by default.

Yes, transcoding is done on CPUs, but that doesn't mean it can't be used as a benchmark for CPUs.

All of these people buying Ryzen (6 out of the top 10, 12 out of the top 20) must surely be doing "server work" then. That's like saying all those buying X3Ds are only playing games.

Your arguments are paper thin. At first it was fun proving you wrong, now it has just gotten tiresome.
It's not Intel playing games, it's the reviewers and youtubers trying to make things more exciting.
Intel specified the TDP, and they specified the max turbo power they allow, which you can run at all the time if you choose to (PL1=PL2=253W). If you want to compare technology based on technology, you have to go by what the technology can do.
You are trying to go by how stupid people are, as if people can't change the out-of-the-box settings back to what is allowed.
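For what it's worth, on Linux you don't even need firmware settings for this: the kernel's intel_rapl powercap interface exposes PL1/PL2. A rough sketch, assuming the standard powercap sysfs layout (needs root, and firmware can still clamp what you write):

```python
# Pins PL1 = PL2 through the Linux kernel's intel_rapl powercap interface.
# A sketch, not a turnkey tool; BIOS settings may override these limits.
from pathlib import Path

RAPL = Path("/sys/class/powercap/intel-rapl:0")  # CPU package 0

def set_power_limits(watts: float) -> None:
    """Write the same limit to PL1 (long term) and PL2 (short term)."""
    uw = str(int(watts * 1_000_000))  # sysfs expects microwatts
    (RAPL / "constraint_0_power_limit_uw").write_text(uw)  # PL1
    (RAPL / "constraint_1_power_limit_uw").write_text(uw)  # PL2

def read_power_limits() -> tuple[float, float]:
    """Return (PL1, PL2) in watts."""
    pl1 = int((RAPL / "constraint_0_power_limit_uw").read_text()) / 1e6
    pl2 = int((RAPL / "constraint_1_power_limit_uw").read_text()) / 1e6
    return pl1, pl2

if __name__ == "__main__":
    set_power_limits(253)        # PL1 = PL2 = 253 W, as described above
    print(read_power_limits())
```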
And as I have said numerous times: if an application or game can only address 1 or 2 cores, that means single-threaded performance is predominantly determinative of performance.

You have made several claims with different language as shown below:
Yeah, I have made several claims that are all the same claim worded differently, and you have not shown one single case of an app or game that only addresses 1 or 2 cores, so how about showing us a modern-ish big game that only has 1 or 2 threads?
Your only attempt to do so thus far showed that more cores running means lower clocks, which is what my one claim said from the start.
 
Yeah, I already agreed several times that Ryzen is better if you look exclusively at one type of workload (3D rendering/transcoding/fluid dynamics/etc.), stuff that is extremely rarely run on a desktop by normal people.
The power-limited TechPowerUp benchmark shows that at the 125W Intel chose as TDP, their CPU is extremely competitive in performance and power draw for people who do more than those things.
Limited to its TDP, the 14900k offers 78% of the performance of the 7950X in Blender. That isn't "extremely competitive performance" for that application. It is competitive with the 7900X, which is only 4% slower than the 125W 14900k but has 12c/24t instead of 24c/32t.

It's not Intel playing games, it's the reviewers and youtubers trying to make things more exciting.
Intel specified the TDP, and they specified the max turbo power they allow, which you can run at all the time if you choose to (PL1=PL2=253W). If you want to compare technology based on technology, you have to go by what the technology can do.
You are trying to go by how stupid people are, as if people can't change the out-of-the-box settings back to what is allowed.
The motherboard manufacturers had Intel motherboards running at unlimited tau/power by default. Intel has known about this for YEARS but didn't do anything about it. The reason why is that Intel knew their CPUs, once they went beyond 4c/8t, were power hogs, and limiting them to the TDP, or even to something like AMD's PPT, would kill their performance in benchmarks. Intel's stated TDPs are an absolute joke. Plenty of people look at benchmarks, see that X CPU is faster and decide to get it, then see that it is a "125W" CPU, assume that is the max power draw, and buy a Cooler Master Hyper 212 EVO to cool a 14900k. Nothing against that HSF, but it is a 150W TDP cooler, not something to be used on an i9 K or even i7 K CPU. Their performance isn't what they saw in the benchmarks, falling easily by 10% or more in some workloads. Their CPUs are also probably more prone to burnout, as the cooler cannot keep up with the power draw. It wasn't until Intel got bad press for their CPUs burning up that they started to care, and then they immediately threw the motherboard manufacturers under the bus, when Intel is just as much to blame for this issue. Now the new default power plan kills performance in MT applications.
 

DS426

Upstanding
May 15, 2024
178
147
260
How is it supposed to beat the 14900ks if it doesn't even beat the 14600k in multithreaded?!
The single-core scores are with only one core actually doing any work; you won't find that while gaming anymore.
Also, they are from a benchmarking app, which doesn't translate to gaming speed.
Yes, many games still don't load down more than four cores/threads. Sure, all the latest games designed to push the edge in graphics will load far more, but most real-world gamers still play games that are several years old, with engines that have been marginally upgraded, if at all. American Truck Simulator and ETS2 are ones I can think of right away.

As he said, it just beats the 14900KS "on average", and granted, that means it depends on the test suite.

Digressing: no, I don't believe the 9600X needs to beat that chip -- it just needs to beat the 12700K/13700K/14700K, as that already makes it a higher performer at a lower cost, and I do suspect it will probably succeed at that. Also, AMD has enough respect now that they don't have to run 20-30% cheaper than Intel at the same perf level to move product -- not for CPUs, that is. I absolutely believe Radeons still do, but Nvidia is also relatively stronger against AMD than Intel is.
 

Thunder64

Distinguished
Mar 8, 2016
172
252
18,960
The last bastion of anti-Intel propaganda: the "what most users do" argument. Kind sir, why do YOU specifically care what most users do? You know how to set power limits, and you've already been shown that the 14900k is the fastest across a wide range of applications while sipping power, so what exactly is your issue with it?

Fastest? Sipping power? All those gamers buying 7800X3Ds must be idiots. Also, I thought Ryzen was only good in servers? The best gaming CPU uses nearly 100W less than the 13900K.

[Chart: gaming power consumption by CPU]

Even Rocket Lake uses less power! Let's not forget that it seems every day there is a new article on 13th/14th-gen CPUs failing after a few months due to degradation, and Intel has been very quiet about this:

https://www.tomshardware.com/pc-com...mpany-sells-defective-13th-and-14th-gen-chips

Call a turd a turd and hope Arrow Lake does better.
 
Yeah, I have made several claims that are all the same claim worded differently, and you have not shown one single case of an app or game that only addresses 1 or 2 cores, so how about showing us a modern-ish big game that only has 1 or 2 threads?
Your only attempt to do so thus far showed that more cores running means lower clocks, which is what my one claim said from the start.
Complete hogwash; check the quotes from yourself again. You made several baseless claims, not just the one. I am done litigating this topic.
 

TheHerald

Notable
Feb 15, 2024
1,107
312
1,060
Fastest? Sipping power? All those gamers buying 7800X3Ds must be idiots. Also, I thought Ryzen was only good in servers? The best gaming CPU uses nearly 100W less than the 13900K.

[Chart: gaming power consumption by CPU]

Even Rocket Lake uses less power! Let's not forget that it seems every day there is a new article on 13th/14th-gen CPUs failing after a few months due to degradation, and Intel has been very quiet about this:

https://www.tomshardware.com/pc-com...mpany-sells-defective-13th-and-14th-gen-chips

Call a turd a turd and hope Arrow Lake does better.
See, this is why it's pointless. You are arguing entirely in bad faith. You were talking about Blender workloads, and now you are back to gaming because your argument hit a wall. Whatever, man, there is really no point.
 

jp7189

Distinguished
Feb 21, 2012
470
278
19,060
For Intel, TDP is the power draw when ALL cores are active at base frequency. This is absolutely useless, as CPUs boost all the time and Intel wanted, by default, to allow chips unlimited power. Don't try to say that they didn't and that it was the motherboard manufacturers, because had Intel actually cared they would have stopped the practice YEARS ago. Only after the bad press did Intel put a stop to it, throwing the motherboard manufacturers under the bus at the same time. If you design an Intel 14900k build around a 125W TDP and go with a cooler rated for 50% beyond TDP (190W), you will be severely limiting your CPU's performance in MT workloads. Countless reviews show the Intel chips hitting 250W+ in MT applications. You will be throttling the CPU quite often and probably leaving 10% or more performance on the table.

AMD has TDP and PPT. PPT is the total package power, covering the cores, Infinity Fabric, and IO die. My guess is that TDP is specific to the cores alone, but I cannot say for sure. Overall, PPT is the important one, and it is 35% higher than TDP. An AMD chip with a 120W TDP (7950X3D) has a 162W PPT. Applying the same idea as above, a cooler rated for 190W will not limit your AMD chip, as the PPT is only 162W. That usually means you will not be throttling at all. It also means you don't need as expensive a cooling solution, so you can put that $100 or so to use in another area of the computer.

In conclusion, TDP for Intel is worthless. Intel knows they should be saying their chips have a TDP equal to their PL2, or maybe PL1, but that looks VERY bad in marketing, so they don't. Instead they give you worthless numbers and hope you don't realize that the stated TDP isn't actually the TDP.
TDP is AMD's recommended thermal design power for OEMs, meaning they expect that amount of cooling to be sufficient for the average user under average workloads. PPT is the power the motherboard socket must be capable of delivering.
 

jp7189

Distinguished
Feb 21, 2012
470
278
19,060
I'm a bit late to the party, but regarding the debate over single-thread performance: years ago, some processors would only hit max boost on a single thread, and the moment a second thread was scheduled, the entire CPU (all cores) would downclock. I believe this is the root of the argument -- that loading 2, 3, 4, etc. cores is the same thing. It used to be, but that is not the behavior with Ryzen 7000, nor the expected behavior with Ryzen 9000. In current AMD CPUs, each individual core clocks as high as possible as long as there is sufficient socket power and the temperature is under the threshold. It is now possible to load a handful of cores and maintain the max frequency of each.

This is why an ST benchmark is likely to be more representative of gaming than an MT benchmark: games rely on high performance from individual cores and are unlikely to hit max socket power limits, similar in behavior to ST benchmarks.

With games spawning many threads and the OS working hard to schedule them to get performance and save power, and with so many variables within the CPUs themselves, each CPU will perform differently with each game. That's why real benchmarks are more important than ever.
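If you want to check this behavior on your own chip, one low-tech way is to load a varying number of cores and watch the per-core clocks. A rough sketch for Linux, using only the standard cpufreq sysfs files (core numbering and governor behavior vary by system):

```python
import glob
import re
import time
from multiprocessing import Pool

def spin(_):
    """Busy-loop one core for about three seconds."""
    end = time.time() + 3
    while time.time() < end:
        pass

def core_mhz():
    """Current frequency of every logical CPU, in MHz, in core order."""
    paths = glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_cur_freq")
    paths.sort(key=lambda p: int(re.search(r"cpu(\d+)", p).group(1)))
    return [int(open(p).read()) // 1000 for p in paths]  # kHz -> MHz

if __name__ == "__main__":
    for n in (1, 2, 4, 8):             # load 1, 2, 4, then 8 cores
        with Pool(n) as pool:
            result = pool.map_async(spin, range(n))
            time.sleep(1.5)            # sample mid-load
            print(f"{n} workers:", core_mhz())
            result.wait()
```

On a chip that clocks each core independently, the loaded cores should hold near-max boost as n grows (until power or thermal limits bite); on older all-core-multiplier designs, every core's clock drops together.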
 
  • Like
Reactions: purposelycryptic

SunMaster

Commendable
Apr 19, 2022
195
180
1,760
Go ahead and limit your CPU to 2-4 cores for a game to test. If your FPS does not drop compared to playing unrestricted, that means the game does not use more than the number of cores you limited the CPU to. What you may be confusing games with doing is core hopping for clock priority: typically a CPU will juggle the work across cores to keep it on the coolest cores and maintain the highest boost clocks, giving the illusion of using more cores. You also may not understand how programming works, but a game or application that can take advantage of more than one core has to be specifically programmed for it, in an engine that supports it. This means we can know for a fact how many cores a game uses depending on what engine it is coded in or how specifically it is coded.

View: https://www.youtube.com/watch?v=glSTKD28sb8
clearly shows that using 4 cores isn't exactly optimal anymore. Say goodbye to the way of Core 2. A lot of PC games are ported to/from the Xbox/PlayStation today, and the PS5/PS4/Xbox Series X/Xbox Series S/Xbox One all have 8 cores.
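For anyone who actually wants to run the experiment described in the quoted post, here is a minimal sketch using psutil's cross-platform cpu_affinity(); the PID shown is a placeholder:

```python
import psutil  # pip install psutil

def limit_to_cores(pid: int, n_cores: int) -> None:
    """Pin an already-running process (e.g., a game) to its first n_cores logical CPUs."""
    psutil.Process(pid).cpu_affinity(list(range(n_cores)))  # e.g., [0, 1, 2, 3]

# Hypothetical usage: find the game's PID in Task Manager / `ps`, pin it
# to 4 cores, then watch the in-game FPS counter. If FPS doesn't drop
# versus unrestricted, the game wasn't benefiting from more cores.
# limit_to_cores(12345, 4)   # 12345 is a placeholder PID
```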
 

purposelycryptic

Distinguished
Aug 1, 2008
48
56
18,610
"Nevertheless, Intel’s 14th-gen chips still outperform them in the multi-core department"

Well, this is no surprise. After all, the 14700 has 12 more cores than the Ryzen 9700X (8P+12E vs 8).
We will see how things pan out when AMD CPUs with 12 and 16 cores see the light of day. Their multi-core performance numbers should be significantly above anything Intel's 14th gen has to offer.
It would be somewhat problematic if they didn't, since it's a competition against an older (if still current) generation of processors.

I wonder what effect Intel abandoning hyperthreading is going to have on their multi-core performance with Arrow Lake and going forward🤔

I'll personally be waiting for the 9950X3D, but that's because I haven't had a proper new build in a decade+, and I feel like being stupid with my money. Not Threadripper stupid, but still stupid.
 
  • Like
Reactions: ottonis
