AMD Ryzen 9 7950X and Ryzen 5 7600X Review: A Return to Gaming Dominance

2025 is only 2 years away in reality by the time you acquire one of these systems. In 2 years you might get one generation newer; this is no different than what Intel has done with Alder/Raptor Lake. That's not really a selling point IMO. One generation isn't usually worth the upgrade.
They are saying at least until 2025. Nothing's stopping them from going 5+ years now that they've opened the throttle on socket power limitations.
 

TJ Hooker

Titan
Ambassador
With respect to the ~20C drop der8auer found with direct die cooling:
Igor's lab found a 16C drop with direct die cooling a 12900k.


And they're just using regular thermal paste. If they had been using liquid metal like der8auer, their 12900k results could have been even closer to the 7900x results. So I'm not sure if the Ryzen 7K IHS is especially bad. But I'm surprised how much difference direct die cooling makes for either CPU.
 

TJ Hooker

Titan
Ambassador
I'm surprised how many people seem to think that a higher operating temp means more heat dumped into your case/room, and/or means it's harder to cool. Both in this thread and even other reviews I've looked at.

Heat generated is essentially equal to power consumption; operating temp is largely irrelevant (although sometimes power consumption itself can increase at higher temp, due to higher resistance and leakage). And, all else being equal, a chip running at a higher temperature is easier to cool than one running at a lower temp. I.e. a higher operating temp increases the amount of heat that a given cooler can dissipate.
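To put illustrative numbers on that last point, here's a back-of-the-envelope sketch of the steady-state relation (the cooler thermal resistance and ambient temp below are assumed values, not measurements):

```python
# Steady state: heat moved (W) = (T_die - T_ambient) / R_cooler,
# where R_cooler (degC per watt) lumps together IHS + paste + heatsink.
# R_COOLER and T_AMBIENT are assumed, illustrative values.

R_COOLER = 0.25   # degC/W, assumed for a decent air cooler
T_AMBIENT = 25.0  # degC, room temperature

def max_dissipation(t_die_c: float) -> float:
    """Watts this cooler can move with the die at the given temperature."""
    return (t_die_c - T_AMBIENT) / R_COOLER

# The same cooler moves more heat when the die is allowed to run hotter:
print(max_dissipation(65.0))  # 160.0 W with the die at 65C
print(max_dissipation(95.0))  # 280.0 W with the die at 95C
```

The bigger the die-to-ambient delta, the more heat flows through a fixed thermal resistance; that's all the higher temp target buys you thermally.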
 
Last edited:

KyaraM

Admirable
Yup. For gaming and low-ish core count workloads, Intel's Raptor will probably win.

As I said, I may actually wait for the X3D variants to come out. I mean, anything will show a nice boost over my current 9900k but I want gaming longevity. X3D chips are where that's at!
I expect to run my 12700k for around 8 years if it doesn't die before that for some reason. That's a lot of longevity, plus better performance in applications.

Honestly, all modern CPUs are more than enough for gaming, at least right now and likely for quite a while longer...
 
Heat generated is essentially equal to power consumption; operating temp is largely irrelevant
Maybe you missed the news, but the CPU boosts against the temp; the temp is always 95°C but power keeps increasing until the power (or any other) limit is reached (same as TVB on Intel).

Also, max power is 230W, so if you get that amount dumped into your room 24/7 I'm sure you would notice it as warmth (again, same as Intel with 240-250W).

The solution is to not need maximum performance and let the cpu chill at a locked lower TDP.
 
I'm surprised how many people seem to think that a higher operating temp means more heat dumped into your case/room, and/or means it's harder to cool. Both in this thread and even other reviews I've looked at.

Heat generated is essentially equal to power consumption; operating temp is largely irrelevant (although sometimes power consumption itself can increase at higher temp, due to higher resistance and leakage). And, all else being equal, a chip running at a higher temperature is easier to cool than one running a at a lower temp.
Caveats all around, as it will depend on the conductivity of the materials and such. There's no perfect superconductor for either heat or electricity, so to be pedantic, it depends on some key elements:
  • A CPU that runs hotter will keep heat within itself making electric properties a tad worse (loss), so the hotter a CPU runs you can make the point it'll be "less efficient" overall. The extremes are LN2 cooling for world records and such and the extremely hot CPUs using much more power than actually needed (be it because of more voltage applied or more amps to compensate) in order to hit certain target speeds; you can cross check with GPUs in this particular case as they've had more time exposed to this.
  • In a thermally constrained environment, how you manage each "jump" from one material to another will determine greatly how much heat you can move in total. More jumps, more loss (or, in this case, less transfer). So with delidding and using direct contact, you already remove the chunky bit the IHS takes away from the equation.
  • Contact area. This affects Intel more than AMD (at least, no one has reported issues with AMD yet) and the IHS being removed allows you to reach, ideally, almost 100% surface area for the components that do need moving the heat away from them. So far, AMD worked in lower power modes, so it wasn't a big problem to move that heat away, but now they've more than doubled the power, so all that needs to be moved fast.
  • Thermal capacity. This is about heat saturation for components. When something (metal) becomes heat saturated, it becomes a terrible conductor of, well, anything. This comes down to what the IHS is made of and, let's be real, it's probably very cheap stuff, otherwise they'd be using gold or copper. So it means it'll become saturated immediately and it'll start moving less heat across than at lower operating temps. This sounds strange, but that's how a lot of materials behave. If AMD says "95°C is ok", I'd imagine they've done their homework and decided the IHS material's optimal operating temp for heat transfer is around the 90°C mark. Well, I doubt it.

All in all, a CPU that uses 250W won't necessarily output that equivalent amount of heat to the HSF/cooler as it'll keep some of it heating the motherboard and other components around it in the current scenario from AMD, which I do not like. I'm not sure how the split would be, but let's go with the good ol' 80/20, so 80% of that energy will be heat transferred to the case/ambient and 20% will stay behind, circling around and heating up other stuff. If you delid the CPUs, that ratio improves, so it'll move more heat to the ambient, but also removing that heat which would otherwise stay at the motherboard/CPU level. Also, improving power efficiency.

It's not uni-dimensional why I personally dislike this "95°C is fine" take from AMD. It does have implications, practical ones, I do not like. Looks like, for the mid range, this won't be such an issue, but I'm still wondering if AMD can do better there. I would not mind them breaking cooler compatibility and just improving the heat transfer to get rid of that 10°C difference.

I'd love to see more testing around how these higher temps affect other components, especially in more constrained environments, but most review sites and testers use open-case setups or ones where "heat buildup" is less of an issue for them. Which is, obviously, not the case for 99% of people out there. I don't even want to imagine OEM machines with Ryzen 7 in them and how they'll perform. They'll probably have to enforce the "ECO" modes like OEMs do with Intel, enforcing the 65W/125W modes.

Regards.
 

TJ Hooker

Titan
Ambassador
Maybe you missed the news, but the CPU boosts against the temp; the temp is always 95°C but power keeps increasing until the power (or any other) limit is reached (same as TVB on Intel).

Also, max power is 230W, so if you get that amount dumped into your room 24/7 I'm sure you would notice it as warmth (again, same as Intel with 240-250W).

The solution is to not need maximum performance and let the cpu chill at a locked lower TDP.
No, Ryzen 7K does not always operate at 95C or max TDP (particularly for lightly threaded loads), and there are plenty of examples of this. You can see it in this article for the single threaded temp/power results. Or you can look at the temp/power results for gaming in the TPU review, etc. In addition to temp/power limits, there are current limits, voltage limits, and probably frequency limits (although this may just be a result of voltage limits). Plus it's not just going to waste power running full bore unless the load is sufficiently high.

And only the R9 chips will even hit 230W at stock (and even they won't necessarily draw that much even under all core load). I've seen people complaining about the heat from a 7600X, which obviously is not going to be producing 230W of heat.
 
Last edited:
  • Like
Reactions: TCA_ChinChin
No, Ryzen 7K does not always operate at 95C or max TDP (particularly for lightly threaded loads), and there are plenty of examples of this. You can see it in this article for the single threaded temp/power results. Or you can look at the temp/power results for gaming in the TPU review, etc. In addition to temp/power limits, there are current limits, voltage limits, and probably frequency limits (although this may just be a result of voltage limits).

And only the R9 chips will even hit 230W at stock (and even they won't necessarily draw that much even under all core load). I've seen people complaining about the heat from a 7600X, which obviously is not going to be producing 230W of heat.

Nevertheless, the weaker your cooling solution is, the more likely it is that you are leaving performance on the table. That's why I would love to see the same game and app benchmarks using cheaper cooling solutions, to know how much performance is lost (or not?).

In any case, anyone like me using an AM4 cooler that needs a non-standard backplate to fix in place is going to need a new cooling solution anyway. And that's another cost on top of the expensive platform.

AMD made some strange decisions with this new batch of products; I hope they really have a long-term plan (and let us know pretty soon what it is).

So far (and I know we are living in some bizarre times worldwide), Zen 4 is not nearly as appealing as Zen 2 was. Perhaps I still need to learn more about how this new architecture behaves.
 

TJ Hooker

Titan
Ambassador
Caveats all around, as it will depend on the conductivity of the materials and such. There's no perfect superconductor for either heat or electricity, so to be pedantic, it depends on some key elements:
  • A CPU that runs hotter will keep heat within itself making electric properties a tad worse (loss), so the hotter a CPU runs you can make the point it'll be "less efficient" overall. The extremes are LN2 cooling for world records and such and the extremely hot CPUs using much more power than actually needed (be it because of more voltage applied or more amps to compensate) in order to hit certain target speeds; you can cross check with GPUs in this particular case as they've had more time exposed to this.
  • In a thermally constrained environment, how you manage each "jump" from one material to another will determine greatly how much heat you can move in total. More jumps, more loss (or, in this case, less transfer). So with delidding and using direct contact, you already remove the chunky bit the IHS takes away from the equation.
  • Contact area. This affects Intel more than AMD (at least, no one has reported issues with AMD yet) and the IHS being removed allows you to reach, ideally, almost 100% surface area for the components that do need moving the heat away from them. So far, AMD worked in lower power modes, so it wasn't a big problem to move that heat away, but now they've more than doubled the power, so all that needs to be moved fast.
  • Thermal capacity. This is about heat saturation for components. When something (metal) becomes heat saturated, it becomes a terrible conductor of, well, anything. This comes down to what the IHS is made of and, let's be real, it's probably very cheap stuff, otherwise they'd be using gold or copper. So it means it'll become saturated immediately and it'll start moving less heat across than at lower operating temps. This sounds strange, but that's how a lot of materials behave. If AMD says "95°C is ok", I'd imagine they've done their homework and decided the IHS material's optimal operating temp for heat transfer is around the 90°C mark. Well, I doubt it.
All in all, a CPU that uses 250W won't necessarily output that equivalent amount of heat to the HSF/cooler as it'll keep some of it heating the motherboard and other components around it in the current scenario from AMD, which I do not like. I'm not sure how the split would be, but let's go with the good ol' 80/20, so 80% of that energy will be heat transferred to the case/ambient and 20% will stay behind, circling around and heating up other stuff. If you delid the CPUs, that ratio improves, so it'll move more heat to the ambient, but also removing that heat which would otherwise stay at the motherboard/CPU level. Also, improving power efficiency.

It's not uni-dimensional why I personally dislike this "95°C is fine" take from AMD. It does have implications, practical ones, I do not like. Looks like, for the mid range, this won't be such an issue, but I'm still wondering if AMD can do better there. I would not mind them breaking cooler compatibility and just improving the heat transfer to get rid of that 10°C difference.

I'd love to see more testing around how these higher temps affect other components, especially in more constrained environments, but most review sites and testers use open-case setups or ones where "heat buildup" is less of an issue for them. Which is, obviously, not the case for 99% of people out there. I don't even want to imagine OEM machines with Ryzen 7 in them and how they'll perform. They'll probably have to enforce the "ECO" modes like OEMs do with Intel, enforcing the 65W/125W modes.

Regards.
  • I don't think it's the IHS itself that's really the problem, as a solid chunk of metal (probably aluminum with some sort of plating) generally has pretty good thermal conductivity. It's the imperfect interface between the die and the IHS, and the IHS and the cooler (and the use of TIM for each of those interfaces, which usually has lower conductivity than the IHS/cooler). By removing the IHS, you remove one of those interfaces, and potentially improve on the other (depending on how good the die-IHS interface was to begin with). And the IHS already has 100% coverage of the heat generating components (i.e. the dies), for either Intel or AMD. It's the surface area of the IHS and cooler interface that's in question.
  • Thermal capacity is how much energy it takes to raise the material's temperature by a given amount, not a limit on how much total heat it can absorb. Metals don't abruptly stop being conductors of heat (or electricity for that matter) at some temperature, unless you're getting close to a phase change. There will be no appreciable change in the thermal conductivity of the IHS materials from operating at 95C instead of, say, 65C.
  • You're right, some heat will be absorbed by the socket/mobo, and the amount will generally be higher with higher operating temps. I'm not aware of any real consequences of that in practice though. A typical laptop mobo/internals likely deal with much higher temperatures than what you'd see in a Ryzen 7K desktop, without issue.
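To put a number on the "saturation" point above: specific heat just relates absorbed energy to temperature rise; nothing "fills up". A minimal sketch (copper's textbook specific heat; the ~20 g IHS mass is an assumption for illustration):

```python
# Energy absorbed = mass * specific_heat * delta_T. Temperature rises in
# proportion to energy absorbed; conduction doesn't shut off at some
# stored-heat threshold. IHS_MASS is an assumed illustrative figure.

C_COPPER = 385.0  # J/(kg*K), textbook specific heat of copper
IHS_MASS = 0.020  # kg, assumed ~20 g heat spreader

def temp_rise(energy_joules: float) -> float:
    """Temperature increase after absorbing the given energy, in K."""
    return energy_joules / (IHS_MASS * C_COPPER)

# 77 J (roughly 0.3 s of a 250 W load with zero cooling) warms a 20 g
# copper slab by about 10 K -- and a real cooler removes heat continuously,
# so the spreader sits at a steady temperature rather than "saturating".
print(round(temp_rise(77.0), 1))  # 10.0
```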
 
I'm surprised how many people seem to think that a higher operating temp means more heat dumped into your case/room, and/or means it's harder to cool. Both in this thread and even other reviews I've looked at.

Heat generated is essentially equal to power consumption; operating temp is largely irrelevant (although sometimes power consumption itself can increase at higher temp, due to higher resistance and leakage). And, all else being equal, a chip running at a higher temperature is easier to cool than one running at a lower temp. I.e. a higher operating temp increases the amount of heat that a given cooler can dissipate.
I think the confusion is in the 'qualifier' for these statements going back and forth.

So, if we say, "higher operating temps means more heat dumped into your case/room, if the workload is sufficient to drive the CPU up to its TJMax," then yes, this statement is 100% true.
 
Temps, pricing, no DDR4 support.

zen41.gif


zen42.gif
 
Last edited:
  • Like
Reactions: KyaraM

TJ Hooker

Titan
Ambassador
I think the confusion is in the 'qualifier' for these statements going back and forth.

So, if we say, "higher operating temps means more heat dumped into your case/room, if the workload is sufficient to drive the CPU up to its TJMax," then yes, this statement is 100% true.
Not really, it's still dependent on power consumption. E.g. you could have a 7950X with a crappy cooler hit TJMax at 100W, or you could have a 7950X with a liquid cooler hitting TJMax at 230W. Or a 7700X and a 7950X both at TJMax, with the 7700X drawing ~140W and the 7950X drawing ~240W (looking at TPU numbers).

Using operating temp as a proxy for heat production just doesn't work.
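The two-cooler example can be sketched with the steady-state relation, solving for power at a pinned die temperature (the cooler resistances below are assumed, illustrative numbers):

```python
# At steady state, power dissipated = (T_die - T_ambient) / R_cooler.
# Two systems both pinned at TJMax emit very different amounts of heat
# if their coolers differ. R values below are assumed for illustration.

T_JMAX = 95.0     # degC
T_AMBIENT = 25.0  # degC

def power_at_tjmax(r_cooler_c_per_w: float) -> float:
    """Watts being dissipated when the die sits at TJMax with this cooler."""
    return (T_JMAX - T_AMBIENT) / r_cooler_c_per_w

print(round(power_at_tjmax(0.70)))  # weak cooler: ~100 W, die at 95C
print(round(power_at_tjmax(0.30)))  # strong liquid cooler: ~233 W, same 95C
```

Same 95C reading, wildly different wattage going into the room.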
 
Last edited:
Not really, it's still dependent on power consumption. E.g. you could have a 7950X with a crappy cooler hit TJMax at 100W, or you could have a 7950X with a liquid cooler hitting TJMax at 230W. Same temp, completely different amounts of heat being produced.
Totally understood, but I'm not comparing different coolers' TDPs. I think most (all?) here are comparing the behaviors of these new chips vs behaviors in, say, the 5950X. (Maybe that's where most of the confusion lies?)
So, it's really a combination of the behavior of the new chips PLUS the different TDP and PPT of the new socket.

A good way to visualize this is to check out Gamers Nexus' all-core Blender workload test for the 7950X and then go back to their 2020 review of the 5950X, specifically the Blender all-core workload.
 

Phaaze88

Titan
Ambassador
The boost behavior specifically at full load looks kinda like a backwards version of Nvidia and AMD Navi GPUs to me; different starting point, which reminds me somewhat of the FX CPU thermal margin mess.
More thermal headroom still equals more boost across the 3, and none of them boost infinitely, so it shouldn't be too hard to find a happy medium, yes?
Consider lowering the thermal limit to 90C - maybe 85C - if it makes one feel better, change how the current custom fan curves are on one's cpu coolers/fans and call it a day.

I would believe the engineers know better about the limits of the silicon than folks speculating about what's good or bad, allowing personal feelings to get in the way.
It's bad enough with the gazillion desktop Ryzen 3000 and 5000 low load thermal threads, and there wasn't a darn thing wrong with them from the get-go most of the time, when the real problem was a PEBKAC one.

You know what some of Intel's own engineers consider the red zone for their cpus? Sustaining above 80C for extended periods - though Alder Lake may be a different beast with how the i9s are, and follow different rules.


Raptor Lake should put LGA 1700 back on top, Ryzen 8000(?) probably takes it back for AM5. After which both could be looking to change sockets and start over, IDK.
 
  • Like
Reactions: -Fran-
  • I don't think it's the IHS itself that's really the problem, as a solid chunk of metal (probably aluminum with some sort of plating) generally has pretty good thermal conductivity. It's the imperfect interface between the die and the IHS, and the IHS and the cooler (and the use of TIM for each of those interfaces, which usually has lower conductivity than the IHS/cooler). By removing the IHS, you remove one of those interfaces, and potentially improve on the other (depending on how good the die-IHS interface was to begin with). And the IHS already has 100% coverage of the heat generating components (i.e. the dies), for either Intel or AMD. It's the surface area of the IHS and cooler interface that's in question.
  • Thermal capacity is how much energy it takes to raise the material's temperature by a given amount, not a limit on how much total heat it can absorb. Metals don't abruptly stop being conductors of heat (or electricity for that matter) at some temperature, unless you're getting close to a phase change. There will be no appreciable change in the thermal conductivity of the IHS materials from operating at 95C instead of, say, 65C.
  • You're right, some heat will be absorbed by the socket/mobo, and the amount will generally be higher with higher operating temps. I'm not aware of any real consequences of that in practice though. A typical laptop mobo/internals likely deal with much higher temperatures than what you'd see in a Ryzen 7K desktop, without issue.
Well, generally speaking, the IHS is always a problem, but it shows itself more evidently at higher power draws, since the amount of energy translated into heat will saturate the thermal conductivity of any IHS almost immediately due to what you mention as a secondary cause: the IHS surface area that contacts the heat spreader of the cooling solution itself. This is to say, the IHS will suck no matter the material it is made of, unless it's a heat superconductor (in the extreme). As for aluminium... aluminium actually sucks for thermal conductivity and so does tin, which I'm sure both AMD and Intel IHSes are a combination of. I also remember AMD is using gold and metal TIM for Ryzen, so the coverage between the IHS and the die is less of a problem; I agree there. So, all I can see in der8auer's test is that the chonker IHS is a problem. Be it due to thickness, material, TIM or effective surface contact area, it doesn't matter as much as just accepting it is a problem (usually it's a combination of them all anyway).

Also, good point on laptops, because I have a 5900HX that has a massive metal block as a heat spreader for both the CPU and GPU with heat pipes using liquid metal (or so is advertised; Asus G15 AMD Advantage-yadda yadda). My point is, I believe the CPU and GPU in laptops are delidded (I'm not brave enough to verify XD), so they have way better thermal conductivity right off the bat compared to regular DIY PC CPUs. That's also why my 5900HX can compete with (and beat) the 5700G at a lower TDP and power draw: it's way more effective at moving the heat away, making the APU even more efficient compared to the average 5700G. That shows that even at or under 90W of power draw it still matters. You can also see this with Intel laptop parts at the higher end of the spectrum.

You can check my sig links and see where my 5900HX stacks in the overall CPU-Z ranking, as an illustration of my point.

Regards.
 
2025 is only 2 years away in reality by the time you acquire one of these systems. In 2 years you might get one generation newer; this is no different than what Intel has done with Alder/Raptor Lake. That's not really a selling point IMO. One generation isn't usually worth the upgrade.
AM4 ended in 2020...
It doesn't mean no new CPUs after that date; it's the chipset termination date.
 

RedBear87

Commendable
2025 is only 2 years away in reality by the time you acquire one of these systems. In 2 years you might get one generation newer; this is no different than what Intel has done with Alder/Raptor Lake. That's not really a selling point IMO. One generation isn't usually worth the upgrade.
Well, AM4 was supposed to get support until 2020 IIRC; eventually they kept pushing out updates until this year, when AMD belatedly extended Zen 3 support to first-generation motherboards, only because Alder Lake was competitive; initially they tried to avoid it. Something similar might happen again, or not. It depends on their future architectures and whether Intel will remain competitive. But of course the average fanboy will believe that AMD is his friend and they will certainly support AM5 well beyond 2025...
 
Also, good point on Laptops, because I have a 5900HX that has a massive metal block as a heat spreader for both the CPU and GPU with heat pipes using liquid metal (or so is advertised; Asus G15 AMD Vantage-yadda yadda). My point is, I believe the CPU and GPU in laptops are delidded
The 5900HX is monolithic, and yes, it's delidded (direct die, no IHS).