News AMD's Ryzen 9000 single-core performance again impresses in early Geekbench results — 9700X, 9600X dominate previous-gen AMD and Intel CPUs

That is just 100% false... What about the literal 30 years of games from 1970-2000, when there was only 1 core in PCs, so that was all that was programmed for? There were very few games that used more than 1 thread from 2000 to about 2010.
Yes, that's what AMD is targeting.....what was your point here even?! That AMD makes CPUs for 20 years ago?!
As far as the comment on AMD and how their CPUs clock themselves, you also have not a clue what you are talking about.
The 7950X has a max single-core boost of 5.7 GHz according to AMD; in games it runs at 5-5.1 GHz, so you are the one that has no clue what you are talking about.
https://www.youtube.com/watch?v=7qts9vT7M9o

dude you are really reaching here.

What everyone is saying, and what you are trying very hard not to hear, is that CPUs that scored highly in single-threaded benchmarks have historically been better at gaming.
Yeah, when they can keep that single-core speed going while the whole CPU is doing work; that has been my argument the whole time.
While games may use multiple cores, most of the work is done on just a couple; check out your core loads next time you are gaming and you'll see what I mean.
How many do I have to look at for you to make up your mind? Just asking to know how long this will go on for.
Here is the most single-thread-biased modern game I could find, and it runs 16 threads that have a decent amount of load in addition to the main thread.
This will keep all cores busy enough not to go to sleep and forces the CPU down to its all-core clock speed.
[Screenshot: per-thread CPU load while gaming]
 
4 threads? You serious? Most if not all AAA games use more than 4 threads for sure. What the hell are you guys talking about, I honestly have no idea.
Go ahead and limit your CPU to 2-4 cores for a game to test. If your FPS does not drop compared to playing it unrestricted, that means the game does not use more cores than you limited the CPU to. What you may be confusing games with doing is core hopping for clock priority. Typically a CPU will juggle the work between cores to keep it on the coolest cores and maintain the highest boost clocks, giving the illusion of using more cores. You also may not understand how programming works, but a game or application that can take advantage of more than one core has to be specifically programmed for it, in an engine that supports it. This means we can know how many cores a game uses based on what engine it is coded in and how specifically it is coded.
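For anyone who wants to try that test, here is a minimal sketch using Python's psutil to pin a running game to a few cores; the process name "game.exe" and the core list are placeholders you would swap for your own setup, and you may need to run it with sufficient privileges:

```python
# Minimal sketch of the core-limiting test described above.
# Requires psutil (pip install psutil). "game.exe" is a hypothetical process
# name -- replace it with the executable of the game you want to test.
import psutil

TARGET = "game.exe"      # placeholder process name
CORES = [0, 1, 2, 3]     # restrict the game to 4 logical cores; try [0, 1] for 2

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] and proc.info["name"].lower() == TARGET:
        before = proc.cpu_affinity()      # cores the process may currently use
        proc.cpu_affinity(CORES)          # restrict it to the chosen cores
        print(f"{TARGET}: affinity {before} -> {proc.cpu_affinity()}")
```

Run the game, note your average FPS, then restore the original affinity (or simply restart the game) and compare.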

People here seem to confuse CPU thread prioritization for performance with "games use 8+ cores." In reality, most games only require 1-2 threads, and any "use" you may be seeing on other cores is the CPU keeping maximum performance by jumping to a higher-clocking, cooler core.

Yes, that's what AMD is targeting.....what was your point here even?! That AMD makes CPUs for 20 years ago?!
My point is that ALL CPUs are clearly made with the specific intent to clock higher on single threads BECAUSE most applications ever made are highly dependent on such performance. Intel is targeting single-threaded performance just the same as AMD because it is so determinative of performance for a wide variety of applications, including the vast majority of games ever made. Only in the past 10-15 years or so have games started using more than 1 thread, and only in the last 5-10 years have games started taking advantage of more than 2 threads.

Not a single game is completely single-threaded.

Many have a big bias toward one or two cores, but the rest still do plenty of work, and that is the main thing here: as soon as the other cores do even a little work, the clocks drop to whatever multiplier is set for that number of active cores.

And on Ryzen you can't set the all-core multiplier to the same value as the single-core one, because the CPU would just ignore the setting or just blow up.
Specifically, these three claims you made above show you have no idea what you are talking about and are completely wrong. These are some of the most ridiculously incorrect claims I have ever heard someone make.

The 7950X has a max single-core boost of 5.7 GHz according to AMD; in games it runs at 5-5.1 GHz, so you are the one that has no clue what you are talking about.
What you are saying here does not support the claims you made, as noted above. You can go into the BIOS and set all the cores on a 7950X to its boost clock and it will run fine, no exploding as you claimed, provided you can keep it cool enough and give it enough power/voltage to remain stable.
 

TheHerald

Go ahead and limit your CPU to 2-4 cores for a game to test. If your FPS does not drop compared to playing it unrestricted, that means the game does not use more cores than you limited the CPU to. What you may be confusing games with doing is core hopping for clock priority. Typically a CPU will juggle the work between cores to keep it on the coolest cores and maintain the highest boost clocks, giving the illusion of using more cores. You also may not understand how programming works, but a game or application that can take advantage of more than one core has to be specifically programmed for it, in an engine that supports it. This means we can know how many cores a game uses based on what engine it is coded in and how specifically it is coded.
I know how a CPU works. In your previous post you said 4 threads. Does that include 2c with HT? Because without even testing I can tell you most games will absolutely struggle with that configuration. If you were talking about 4 cores, sure, I can test it, but I can already tell you every single game I have installed on my PC right now will run worse. I'll only test the lightweight ones and completely skip games like Cyberpunk and TLOU.

I'll test Dota 2, Hellblade 2, Ratchet & Clank, and NFS Unbound.

Regarding the rest of your post, I don't get where you guys get that idea from. In 2017, that was 7 years ago, my FX-8350 was hitting 100% utilization across all cores in basically every game. What 2 cores are you people talking about? Do you actually play games?
 
I know how a CPU works. In your previous post you said 4 threads. Does that include 2c with HT? Because without even testing I can tell you most games will absolutely struggle with that configuration. If you were talking about 4 cores, sure, I can test it, but I can already tell you every single game I have installed on my PC right now will run worse. I'll only test the lightweight ones and completely skip games like Cyberpunk and TLOU.

I'll test Dota 2, Hellblade 2, Ratchet & Clank, and NFS Unbound.
Two threads is two cores. Two addressable threads from 1 core is clearly not the same thing. The nomenclature can get a bit unclear. A good-faith interpretation would be that 1 thread is 1 core, not including hyperthreading, because as we all know hyperthreading does not double your performance 1:1, so for the purposes of the tests I was referring to 1 core = 1 thread.
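To make the nomenclature concrete, here is a small psutil snippet that shows how physical cores and logical processors (hardware threads) are counted on an HT/SMT-enabled CPU; it is purely illustrative:

```python
# Physical cores vs. logical processors ("threads" in the HT/SMT sense).
import os
import psutil

physical = psutil.cpu_count(logical=False)   # real cores
logical = psutil.cpu_count(logical=True)     # hardware threads the OS can schedule
print(f"physical cores:     {physical}")
print(f"logical processors: {logical}")
if physical:
    print(f"SMT/HT factor:      {logical / physical:.1f}x")
# os.cpu_count() also reports the logical count
print(f"os.cpu_count():     {os.cpu_count()}")
```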

Regarding the rest of your post, I don't get where you guys get that idea from. In 2017, that was 7 years ago, my FX-8350 was hitting 100% utilization across all cores in basically every game. What 2 cores are you people talking about? Do you actually play games?
I ran a 3570K 4c/4t CPU until September of 2019 and played modern and older games, and rarely if ever touched 100% CPU utilization, even with other applications running in the background at the same time like Discord, Chrome, monitoring apps, RGB lighting software, macro software, et cetera... So I cannot personally speak to your particular system and the circumstances surrounding your environment while gaming 7 years ago. One thing to note about the FX series of processors: they were so architecturally bottlenecked by how the pairs of cores interacted that AMD even got sued over it and ended up settling, with the claimants saying it was false advertising that the CPUs were 8 cores rather than, in effect, 4.
 

TheHerald

Two threads is two cores. Two addressable threads from 1 core is clearly not the same thing. The nomenclature can get a bit unclear. A good-faith interpretation would be that 1 thread is 1 core, not including hyperthreading, because as we all know hyperthreading does not double your performance 1:1, so for the purposes of the tests I was referring to 1 core = 1 thread.


I ran a 3570K 4c/4t CPU until September of 2019 and played modern and older games, and rarely if ever touched 100% CPU utilization, even with other applications running in the background at the same time like Discord, Chrome, monitoring apps, RGB lighting software, macro software, et cetera... So I cannot personally speak to your particular system and the circumstances surrounding your environment while gaming 7 years ago. One thing to note about the FX series of processors: they were so architecturally bottlenecked by how the pairs of cores interacted that AMD even got sued over it and ended up settling, with the claimants saying it was false advertising that the CPUs were 8 cores rather than, in effect, 4.
I asked for clarification, didn't want to put words in your mouth.

I can give you a huge list of games pre-2019 that wouldn't run well, or would even drop to or below 30 fps, on your 3570K due to the lack of cores. I'm not kidding. Off the top of my head: AC Odyssey and Origins, Watch Dogs 2, Warzone 1, NFS Heat, Battlefield 1, and those are just the ones I've tested.
 
Specifically, these three claims you made above show you have no idea what you are talking about and are completely wrong. These are some of the most ridiculously incorrect claims I have ever heard someone make.
Because your bias blindness kept you from seeing it the first time...
[Screenshot: per-thread CPU load while gaming]

What you are saying here does not support the claims you made, as noted above. You can go into the BIOS and set all the cores on a 7950X to its boost clock and it will run fine, no exploding as you claimed, provided you can keep it cool enough and give it enough power/voltage to remain stable.
But you can't keep it cool enough or give it enough power, at least not for a normal person.
Also "out of the box" is not overclocked.
 
I asked for clarification, didn't want to put words in your mouth.

I can give you a huge list of games pre-2019 that wouldn't run well, or would even drop to or below 30 fps, on your 3570K due to the lack of cores. I'm not kidding. Off the top of my head: AC Odyssey and Origins, Watch Dogs 2, Warzone 1, NFS Heat, Battlefield 1, and those are just the ones I've tested.
That's funny, because I have played over half of those games and never got below 40-50ish fps. Let's take AC Odyssey. From TechPowerUp you can see the game does not scale much past 4 cores. I would argue that the only reason performance rises past 2 cores is because CPUs with more cores can jump the processing from a core that is hot from being used to one that was not getting used at all and is still much cooler, thus improving performance. More cores means less heat in the cores doing the actual processing. Just to be clear here, we are arguing whether the application, or game, can within the limits of its engine or code address more than one or two cores, versus whether having more cores means more performance. These are two completely different things. All games will perform better with more cores than their code can address because of CPU behavior, not because the underlying code is parallelized past 1 or 2 cores.
[Chart: CPU/core scaling at 720p, via TechPowerUp]
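That flattening is roughly what Amdahl's law predicts when only part of the per-frame work is parallelized. Here is a small illustrative calculation; the 70% parallel fraction is an assumption chosen for illustration, not a measured figure for this game:

```python
# Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n), where p is the fraction of
# per-frame work that can run in parallel. p = 0.7 is an assumed value, used
# only to show why FPS stops scaling much past a handful of cores.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

p = 0.7  # assumed parallel fraction
for cores in (1, 2, 4, 6, 8, 12, 16):
    print(f"{cores:2d} cores -> {amdahl_speedup(p, cores):.2f}x over 1 core")
```

With those assumptions the jump from 1 to 4 cores is large, while 8 to 16 barely moves, which is the same flattening the chart shows.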

Because your bias blindness kept you from seeing it the first time...
[Screenshot: per-thread CPU load while gaming]
You are correct that I did not include the complete nonsense you were spouting past your already largely nonsensical points; call that bias if you wish. Just to be clear, everyone is biased; however, it is only a valid critique if you can show that someone is allowing their biases to cloud their judgment or argumentation.
But you can't keep it cool enough or give it enough power, at least not for a normal person.
Also "out of the box" is not overclocked.
I am a normal person. I can assemble a PC, including one with an AMD CPU, that can keep its boost clock across all cores with good cooling and BIOS settings. If you were talking about out-of-the-box performance, then you failed to mention it. I quote you again:
And on Ryzen you can't set the all-core multiplier to the same value as the single-core one, because the CPU would just ignore the setting or just blow up.
Do you see how you just made a blanket statement above without referring to any particular configuration, stock, overclocked, or otherwise? You definitely can do exactly what you said cannot be done, and your claim that the CPU will "blow up" if you try is utter nonsense.

Out of the box, CPUs have arguably been overclocking themselves since the inception of turbo boost tech, though that is my opinion rather than fact. A "base" clock, per any manufacturer, is what it implies: baseline performance at a set clock. Logically, if there is a function that increases clocks over that baseline, you could very easily call that an "overclock." This has been completely muddied over the last couple of decades because overclocking today usually means setting a custom or specific CPU behavior not typically covered by the manufacturer's warranty, rather than the original meaning of the word.
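If you want to see what your own chip reports while it "overclocks itself" above base, here is a rough sketch using psutil; note the caveats in the comments, since not every OS exposes live per-core clocks:

```python
# Peek at what the OS reports for CPU clocks. psutil.cpu_freq() is best-effort:
# some platforms expose only a single package-wide value, and Windows may report
# the nominal (base) clock rather than the live turbo clock, so treat this as a
# rough illustration only.
import time
import psutil

for _ in range(5):
    freqs = psutil.cpu_freq(percpu=True) or []
    if not freqs and psutil.cpu_freq():
        freqs = [psutil.cpu_freq()]
    if freqs:
        readings = ", ".join(f"{f.current:.0f}" for f in freqs)
        print(f"reported clock(s) in MHz: {readings}")
    else:
        print("clock frequencies not exposed on this platform")
    time.sleep(1)
```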
 

TheHerald

That's funny, because I have played over half of those games and never got below 40-50ish fps. Let's take AC Odyssey. From TechPowerUp you can see the game does not scale much past 4 cores. I would argue that the only reason performance rises past 2 cores is because CPUs with more cores can jump the processing from a core that is hot from being used to one that was not getting used at all and is still much cooler, thus improving performance. More cores means less heat in the cores doing the actual processing. Just to be clear here, we are arguing whether the application, or game, can within the limits of its engine or code address more than one or two cores, versus whether having more cores means more performance. These are two completely different things. All games will perform better with more cores than their code can address because of CPU behavior, not because the underlying code is parallelized past 1 or 2 cores.
You don't know where he is testing. There are obviously light areas of the game and heavy areas of the game. Then there is the in-game benchmark, which might be how he is testing, and that is irrelevant when it comes to actual in-game performance.

I can tell you Alexandria in AC Origins maxed out my 8700K and FPS was barely at 80. On my R5 1600 it was dropping to around 50-55. There is no way you were holding a minimum of 40 with a 3570K.
 
That's funny, because I have played over half of those games and never got below 40-50ish fps. Let's take AC Odyssey. From TechPowerUp you can see the game does not scale much past 4 cores. I would argue that the only reason performance rises past 2 cores is because CPUs with more cores can jump the processing from a core that is hot from being used to one that was not getting used at all and is still much cooler, thus improving performance. More cores means less heat in the cores doing the actual processing. Just to be clear here, we are arguing whether the application, or game, can within the limits of its engine or code address more than one or two cores, versus whether having more cores means more performance. These are two completely different things. All games will perform better with more cores than their code can address because of CPU behavior, not because the underlying code is parallelized past 1 or 2 cores.
To further emphasize your point, Riven (2024) was just released using the UE5 engine. While it is graphically intensive, you can run it on a 2c/4t CPU just fine. IIRC, once you get past 2 cores the rest are mainly used for physics and AI. However, there is still a limit on how much you can thread those as well.
 
That's funny, because I have played over half of those games and never got below 40-50ish fps. Let's take AC Odyssey. From TechPowerUp you can see the game does not scale much past 4 cores. I would argue that the only reason performance rises past 2 cores is because CPUs with more cores can jump the processing from a core that is hot from being used to one that was not getting used at all and is still much cooler, thus improving performance. More cores means less heat in the cores doing the actual processing. Just to be clear here, we are arguing whether the application, or game, can within the limits of its engine or code address more than one or two cores, versus whether having more cores means more performance. These are two completely different things. All games will perform better with more cores than their code can address because of CPU behavior, not because the underlying code is parallelized past 1 or 2 cores.
[Chart: CPU/core scaling at 720p, via TechPowerUp]
1) We have no idea if and how much the GPU, or for that matter the game engine, is limiting FPS here.
2) What is your explanation for the FPS dropping above 720p when going from 6 to 12 threads? Because clocks dropping when more threads are active would explain it.
[Chart: CPU/core scaling at 1080p, via TechPowerUp]
 

NinoPino

How is it supposed to beat the 14900KS if it doesn't even beat the 14600K in multithreaded?!
Why not? Games notoriously use fewer threads.

The single-core scores are with only one core actually doing any work; you won't find that while gaming anymore.
But typically AMD suffers less from power/temperature constraints.

Also, they are from a benchmarking app, which doesn't translate to gaming speed.
Agree.
 
You don't know where he is testing. There are obviously light areas of the game and heavy areas of the game. Then there is the in-game benchmark, which might be how he is testing, and that is irrelevant when it comes to actual in-game performance.

I can tell you Alexandria in AC Origins maxed out my 8700K and FPS was barely at 80. On my R5 1600 it was dropping to around 50-55. There is no way you were holding a minimum of 40 with a 3570K.
My 3570K was OCed to 4.5 GHz, to be fair.
 
1) We have no idea if and how much the GPU, or for that matter the game engine, is limiting FPS here.
2) What is your explanation for the FPS dropping above 720p when going from 6 to 12 threads? Because clocks dropping when more threads are active would explain it.
[Chart: CPU/core scaling at 1080p, via TechPowerUp]
1) True, but I am not sure that is relevant to the conversation, because we are talking about core scaling. As long as the GPU is powerful enough to show meaningful differences, any FPS beyond that is kind of irrelevant.

2) I would argue that FPS dropped a small amount in the 6c/12t configuration versus 6c/6t because hyperthreading usually draws more power/voltage to maintain clocks for double the threads, which causes more heat build-up and leads to lower clocks. Another reason would be that, with more of the load shifted to the graphics card at the higher resolution, CPU usage becomes burstier when the CPU is needed; couple that with the drawbacks of hyperthreading and there seems to be a reasonable explanation there.
 
But typically AMD suffers less from power/temperature constraints.
With the same amount of cooling, the 7950X maxes out, reaching 94 degrees out of its 95-degree max at 215 W out of the (supposedly) 230 W maximum, while Intel reaches 85 degrees out of its 100-degree max at 330 W, well past the rated 253 W maximum.

Ryzen, at least the 7950X, is completely maxed out at stock settings, while the 13900K runs 30% above stock and is still cooler.

https://www.anandtech.com/show/17641/lighter-touch-cpu-power-scaling-13900k-7950x/3
[Chart: power scaling data from the AnandTech article]

2) I would argue that FPS dropped a small amount in the 6c/12t configuration versus 6c/6t because hyperthreading usually draws more power/voltage to maintain clocks for double the threads, which causes more heat build-up and leads to lower clocks. Another reason would be that, with more of the load shifted to the graphics card at the higher resolution, CPU usage becomes burstier when the CPU is needed; couple that with the drawbacks of hyperthreading and there seems to be a reasonable explanation there.
So you do agree that it is very hard to keep up single-core clocks when many cores are active...
Also, you are arguing that hyperthreading (and SMT would be the same) uses power while not doing anything at all, and my argument is that additional unused cores will do that as well, let alone when they do have a decent amount of load on them, which I showed that they do.
 

TheHerald

My 3570K was OCed to 4.5 GHz, to be fair.
I found a video that tested exactly what I was talking about: Alexandria in AC Origins.


As you can see, FPS drops to as low as 30, the 1% and 0.1% lows are horrible, and the CPU is completely maxed out at 100% utilization.

And here is Ratchet running on 4P vs 16+HT. Massive difference, as you can see.

 

Thunder64

With the same amount of cooling, the 7950X maxes out, reaching 94 degrees out of its 95-degree max at 215 W out of the (supposedly) 230 W maximum, while Intel reaches 85 degrees out of its 100-degree max at 330 W, well past the rated 253 W maximum.

Ryzen, at least the 7950X, is completely maxed out at stock settings, while the 13900K runs 30% above stock and is still cooler.

https://www.anandtech.com/show/17641/lighter-touch-cpu-power-scaling-13900k-7950x/3
[Chart: power scaling data from the AnandTech article]


So you do agree that it is very hard to keep up single-core clocks when many cores are active...
Also, you are arguing that hyperthreading (and SMT would be the same) uses power while not doing anything at all, and my argument is that additional unused cores will do that as well, let alone when they do have a decent amount of load on them, which I showed that they do.

The CPU may run 1 degree cooler, but it is using over 100 W more, and that extra power is thrown out into the case/room as heat. So I would call Zen 4 cooler.
 
The CPU may run 1 degree cooler, but it is using over 100 W more, and that extra power is thrown out into the case/room as heat. So I would call Zen 4 cooler.
1 degree? Ryzen is 1 degree under its max; Intel is at 86 degrees, which is 14 degrees below its max of 100 and 9 degrees below Ryzen, and all of that at 100 W more power draw, as you said.
And the difference in performance between 330 W and 253 W is basically zero, like 1-2%, so you can reduce the power draw if you don't want that heat in your room.
 

Thunder64

1 degree? Ryzen is 1 degree under its max; Intel is at 86 degrees, which is 14 degrees below its max of 100 and 9 degrees below Ryzen, and all of that at 100 W more power draw, as you said.
And the difference in performance between 330 W and 253 W is basically zero, like 1-2%, so you can reduce the power draw if you don't want that heat in your room.

AMD played stupid with the TDP on Zen 4. It maintains most of its performance at far lower TDP levels, unlike Raptor Lake, which needs every watt it can get to match Zen. Also, it was 215 W vs 330 W stock for both. That is a large amount of extra heat being generated. Remember the old 100 W incandescent light bulbs? They used them for "baking" in Easy-Bake ovens.

 
AMD played stupid with the TDP on Zen 4. It maintains most of its performance at far lower TDP levels, unlike Raptor Lake, which needs every watt it can get to match Zen. Also, it was 215 W vs 330 W stock for both. That is a large amount of extra heat being generated. Remember the old 100 W incandescent light bulbs? They used them for "baking" in Easy-Bake ovens.
That is true for server applications, but how relevant are those for desktop usage?!

Also, if you think that lowering TDP only benefits Ryzen then you are crazy.
If you look at more than just server apps exclusively and you run the 14900K at 125 W, which is the official TDP, then it comes within 3-4% of the 7950X while the latter uses almost 40% more power to do so...
(The 14900K uses 91 W on average when limited to 125 W, while the 7950X uses 128 W.)
And this benchmark does include server apps; it's just not limited to only server apps.

https://www.techpowerup.com/review/...ke-tested-at-power-limits-down-to-35-w/8.html
[Chart: application performance at reduced power limits]
 

Thunder64

That is true for server applications, but how relevant are those for desktop usage?!

Also, if you think that lowering TDP only benefits Ryzen then you are crazy.
If you look at more than just server apps exclusively and you run the 14900K at 125 W, which is the official TDP, then it comes within 3-4% of the 7950X while the latter uses almost 40% more power to do so...
(The 14900K uses 91 W on average when limited to 125 W, while the 7950X uses 128 W.)
And this benchmark does include server apps; it's just not limited to only server apps.

https://www.techpowerup.com/review/...ke-tested-at-power-limits-down-to-35-w/8.html
[Chart: application performance at reduced power limits]

Wow, you are playing some mental gymnastics. That is a stock 7950X at 128 W. You can only compare stock vs stock in that review because they don't have power-limited numbers for AMD. Also, I didn't realize x264 transcoding was a "server app". Not to mention those numbers came out before Intel had to release guidelines for new BIOSes that lowered performance to make sure they didn't crash.

And how many of those 47 applications really stress all of the cores hard? If an application only benefits from so many cores or isn't very demanding, of course the 14900K will use less power. But look at that page you linked: using Blender, the 7950X is at 254 W vs 282 W for the 14900K. The 14900K can be efficient, but not for the most demanding applications.

[Chart: multi-threaded power consumption]
 
So you do agree that it is very hard to keep up single-core clocks when many cores are active...
Also, you are arguing that hyperthreading (and SMT would be the same) uses power while not doing anything at all, and my argument is that additional unused cores will do that as well, let alone when they do have a decent amount of load on them, which I showed that they do.
I agree, yes, but not necessarily in furtherance of your point. There is a difference between how many cores are addressable by any given application and how many cores are being used by a CPU to perform a task with said application. When a CPU detects usage on more cores compared to fewer, the clocks have to come down across the board because power and voltage are not assignable on a core-to-core basis. Clocks, on the other hand, are completely independent between cores.

So let us say we have a 4-core CPU and are running an application that uses 100% of 1 core, and this application can only address 1 core at a time. Because that core is being hammered pretty hard, it builds up heat to the point that the CPU has to lower its clocks at the given power and temperature. The CPU will then typically move the task to the next-highest-boosting core, given its individual temperature and the same power limit the first core was held to, and let it take over for maximum performance. This often looks like CPU usage is dancing around the cores with such a single-threaded application. It can also look like there is a preferred core at 80-100% usage with the other cores periodically jumping in usage as the CPU hot-potatoes the task between the cores that can clock higher than the preferred core, which is too hot to maintain the highest-performing clocks.

So when you say:
...that it is very hard to keep up single-core clocks when many cores are active...
That is true; however, because you have more cores to jump to when the previously used ones are hot, you will get more performance from having more cores. But more performance from having more cores does not mean any such application can necessarily address more than 1 core at a time. That is an entirely application-specific thing that requires additional/different programming to allow.
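If you want to watch that hot-potato behavior on your own machine, here is a minimal sketch that samples per-core utilization with psutil while you run a mostly single-threaded program; the sample count and one-second interval are arbitrary choices:

```python
# Log per-core utilization so you can watch a single-threaded load "dance"
# between cores as the scheduler and the boost algorithm move it around.
import time
import psutil

psutil.cpu_percent(percpu=True)        # prime the counters; the first call is not meaningful
for _ in range(10):                    # ~10 seconds of samples
    time.sleep(1)
    loads = psutil.cpu_percent(percpu=True)
    busiest = max(range(len(loads)), key=lambda i: loads[i])
    row = " ".join(f"{pct:5.1f}" for pct in loads)
    print(f"busiest core: {busiest:2d} | per-core %: {row}")
```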
I found a video that tested exactly what I was talking about: Alexandria in AC Origins.

As you can see, FPS drops to as low as 30, the 1% and 0.1% lows are horrible, and the CPU is completely maxed out at 100% utilization.

And here is Ratchet running on 4P vs 16+HT. Massive difference, as you can see.
Okay, so we may have been arguing a bit past one another. When I say I usually got 40+ FPS in those modern games, I specifically meant average FPS. You are correct that older-architecture CPUs with lower core counts in particular can have especially bad 1% and 0.1% lows in titles made in the last 8 years or so. I never meant to say that my lows never dipped below 40 fps in the more modern games back when I had my 3570K. Interestingly enough, even in most of these newer games an i3 x100 CPU from the last 3 gens will do much better than you would expect, even with only 4c/8t. If I remember correctly, that Ratchet & Clank game was highly storage-dependent as well, which is another thing the CPU is highly sensitive to, especially for 1% and 0.1% lows. That was one of the games on your list I had not played, so I have no idea what sort of FPS I would have gotten.
 
Wow, you are playing some mental gymnastics. That is a stock 7950X at 128 W. You can only compare stock vs stock in that review because they don't have power-limited numbers for AMD.
So what is TDP in your mind?! Because the TDP for the 14900K is 125 W.
Also, I didn't realize x264 transcoding was a "server app".
Yes, ever since GPUs could do it better and a lot quicker, CPU transcoding has become a purely server thing; everybody else does it with a GPU.
Not to mention those numbers came out before Intel had to release guidelines for new BIOSes that lowered performance to make sure they didn't crash.
Not to mention that 125W is way way way way way below the guidelines...
And how many of those 47 applications really stress all of the cores hard?
That's my point!
Unless you are running a server, and I do lump professional workloads into the same category, you will almost never stress all the cores.
But look at that page you linked: using Blender, the 7950X is at 254 W vs 282 W for the 14900K. The 14900K can be efficient, but not for the most demanding applications.
Yes, I already agreed with that the previous time you brought it up: if you only do server-type workloads, Ryzen is great.
I agree, yes, but not necessarily in furtherance of your point. There is a difference between how many cores are addressable by any given application and how many cores are being used by a CPU to perform a task with said application.
But you are the only one that has been arguing that point at all.
From the beginning I said:
The single-core scores are with only one core actually doing any work; you won't find that while gaming anymore.
 

Thunder64

Since I don't want to keep editing the same post: from that AnandTech article, in Blender the 7950X at 65 W (really 88 W) beats the 14900K at 125 W. Makes the image from your link look a bit sad, doesn't it? That's why I said earlier you can't use a review where Intel gets power-limit tests but AMD doesn't, unless you are comparing both at stock.

[Chart: multi-threaded power consumption]


Like I said, AMD used stupid TDPs. Performance hardly drops off until you really limit it. Intel falls off much faster without the extra watts.

 
The 7950X @ 105 W was drawing the same amount of power as the 13900K @ 125 W, but look at the temperature difference. HOLY. Thank god I don't have a Ryzen part, they are impossible to cool, especially with my single-tower air cooler.
Yeah, the very thick IHS on the AMD 7000 series CPUs traps in a lot of extra heat. One thing to note: regardless of whether a CPU's core temps are comparatively higher or lower, it is the wattage going into the CPU that gets dumped into the room as heat. For instance, an AMD CPU at 90 C but 100 W compared to an Intel CPU at 80 C but 300 W means the Intel CPU is dumping 200% extra heat into the surrounding case and room.
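A quick back-of-the-envelope check of that figure, using the standard approximation that essentially every watt a CPU draws at steady state ends up as heat in the case/room; the wattages are just the example numbers above:

```python
# Back-of-the-envelope: at steady state, essentially all electrical power a CPU
# draws becomes heat in the case/room, regardless of what the die temperature reads.
amd_watts = 100     # example CPU at 90 C
intel_watts = 300   # example CPU at 80 C

extra_watts = intel_watts - amd_watts
extra_pct = extra_watts / amd_watts * 100
print(f"{extra_watts} W more heat output, i.e. {extra_pct:.0f}% extra")  # 200 W more, 200% extra
```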
 
