Info AMD says Ryzen 7000 is designed to hit the thermal limit as much as possible

This'll probably be lost in the sea of threads, but this was something I picked up from a review of the Ryzen 7000 processors.

For what it's worth, the AMD reviewers' guide says that this is safe, expected behavior:
With the new AM5 socket and higher TDP, most processors will run into a thermal wall before they hit a power wall. You will therefore see the Ryzen 7000 series, especially the higher core count variants, reside at TJMax (about 95 degrees Celsius for the Ryzen 7000 series) when running intense multithreaded workloads like Cinebench nT. This behavior is intended and by design.
It's important to note TJMax is the max safe operating temperature, not the absolute max temperature. In the Ryzen 7000 Series, the processor is designed to run at TJMax 24/7 without risk of damage or deterioration. At 95 degrees it is not running hot; rather, it will intentionally go to this temperature as much as possible under load, because the power management system knows that this is the ideal way to squeeze the most performance out of the chip without damaging it.

It's also likely reviewers have been using the temperature that comes from the hottest part of the processor, so it's not like the entire package is damn near boiling.
 

punkncat

Champion
Ambassador
If this is indeed the case, I will be passing those over. My office/computer room already borders on too hot during the summer. Having a case blowing 200° air into the room isn't going to be tenable.
It appears that some of the pre-release material coming out about the high-end 4xxx GPUs indicates that they run warm and require/use a large power supply. All of these factors make these less than ideal IMO.
 
If this is indeed the case, I will be passing those over. My office/computer room already borders on too hot during the summer. Having a case blowing 200° air into the room isn't going to be tenable.
It appears that some of the pre-release material coming out about the high-end 4xxx GPUs indicates that they run warm and require/use a large power supply. All of these factors make these less than ideal IMO.
Keep in mind that a single temperature metric from Ryzen is typically the hottest part of the processor. The average die or package temp may be well below that. On top of that, with a decent cooling system the CPU won't even reach 95C anyway.

To put it another way, car engines can have hot spots well past 500C, but the coolant is expected to stay around 90-95C.

Remember: how much power the chip consumes determines how much heat is dumped into your room. If you're using a 105W TDP chip already, a 7600X is going to dump roughly the same amount of heat, even if its reported temperature is 90+.
 
Well, that is his point: that the cooling is going to draw the heat out of the CPU and dump it into his room.
Their post seems focused on the temperature aspect, which is meaningless without knowing the power draw of the part if the concern is how much hotter the room will get.

You can have a 5W chip reach 95C if the chip is small enough and there isn't really anything to cool it off.
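
To put rough numbers on that, here's a quick Python sketch of the steady-state relationship (the thermal-resistance values are made-up illustrations, not measured figures for any real chip):

```python
# Steady-state junction temperature: T_j = T_ambient + P * theta_ja,
# where theta_ja is the total junction-to-air thermal resistance in K/W.
# Both theta values below are illustrative guesses.

def junction_temp(power_w: float, theta_ja: float, t_ambient: float = 25.0) -> float:
    """Estimate steady-state junction temperature in degrees Celsius."""
    return t_ambient + power_w * theta_ja

# Tiny 5 W chip with almost nothing to cool it (very high thermal resistance)
print(junction_temp(5, 14.0))    # 95.0 C: low power, terrible cooling
# 144 W desktop CPU under a decent tower cooler (low thermal resistance)
print(junction_temp(144, 0.35))  # ~75 C: far more heat into the room, cooler chip
```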
 

punkncat

Champion
Ambassador
Your original post's quote indicates that the CPU is designed to work near its TDP "all the time" under load, which suggests that if the cooling solution keeps it below that parameter, the design will throw more power at it to reach the target. What else would they mean?
 
Your original post's quote indicates that the CPU is designed to work near its TDP "all the time" under load, which suggests that if the cooling solution keeps it below that parameter, the design will throw more power at it to reach the target. What else would they mean?
In relation to your previous post, I was addressing this bit:

Having a case blowing 200° air into the room isn't going to be tenable.

The thing is, from what I can tell, if a reviewer reported a single temperature, they reported the T_die temperature. As far as I know, every temperature-reporting app that shows only a single CPU temperature uses this metric. But HWiNFO notes that this reading is the hottest one the processor reports. That doesn't mean the package, or even the entire CPU die, is that hot. As an example, I ran my 5600X through a roughly 15-minute stress test, and this is what HWiNFO reports:

[Image: HW9RGtd.png — HWiNFO sensor readings from the 5600X stress test]


Notice that the only readings that are hot here are the ones related to the CPU cores. L3 cache? About as warm as the heat wave we had a few weeks ago. IOD? It barely moved. Nobody is reporting these other temperatures. And I'll bet the IOD is not hitting 90C like the CPU dies. Another thing: Ryzen has an array of thermal sensors, and the ones listed by HWiNFO are not the only ones in the processor.
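
If you're on Linux and want to see that same spread without HWiNFO, here's a minimal sketch using psutil (on Ryzen, the k10temp driver usually exposes labels like Tctl, Tdie, and Tccd1/Tccd2, but exact names vary by platform, so treat them as assumptions):

```python
# Dump every temperature sensor the OS exposes and compare the hottest
# reading (what single-number tools show) against the average of all sensors.
# Linux-only in practice; requires the psutil package.
import psutil

readings = []
for chip, sensors in psutil.sensors_temperatures().items():
    for s in sensors:
        name = f"{chip}/{s.label or chip}"
        readings.append((name, s.current))
        print(f"{name}: {s.current:.1f} C")

if readings:
    hottest = max(readings, key=lambda r: r[1])
    avg = sum(t for _, t in readings) / len(readings)
    print(f"hottest sensor: {hottest[0]} at {hottest[1]:.1f} C")
    print(f"average across sensors: {avg:.1f} C")
```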

One thing that I thought looked interesting in the review was this table:
[Image: 7JFHifK.png — review table of peak CPU temperatures and package power]


Ignoring Intel's side, since I don't know how they measure the single CPU temp: how can, for instance, the 5800X3D get to 90C on 126W when the 7950X only goes up to 81C on 144W? Because the workload is spread out over more of the processor. That is, pretending the IOD doesn't exist, the 7950X has 144W spread over two dies while the 5800X3D has to put all 126W on one die. But the 7950X is going to dump more heat into the environment because it's using more power.
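
Back-of-the-envelope, using that same simplification of ignoring the IOD:

```python
# Split package power evenly across the compute dies (CCDs), pretending the
# IOD draws nothing (the same simplification as above).
chips = {
    "5800X3D": {"package_w": 126, "ccds": 1},
    "7950X": {"package_w": 144, "ccds": 2},
}
for name, c in chips.items():
    print(f'{name}: ~{c["package_w"] / c["ccds"]:.0f} W per CCD')
# 5800X3D: ~126 W per CCD -> all the heat concentrated in one small die
# 7950X:   ~72 W per CCD  -> cooler dies, but more total heat into the room
```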

In any case, what I'm getting at is that AMD is upending how we should look at temperatures in their CPUs. And I think it's stupid as hell that they decided the CPUs should hit the thermal wall first on the default profile. As @Phaaze88 mentioned, it's bad enough people keep freaking out over Ryzen's supposedly high temperatures in low-load scenarios.
 
After seeing some of the reviews, I think I'll hold on to my 5900X. With my Vetroo V5 cooler in push-pull it usually stays at about 72C fully loaded. So I think I'm going to pick up 32GB of RAM and sit tight. Then I can see how things progress over the next year or so. I don't care for high temps. I guess I remember back in the days of the old Athlon XP CPUs, when you didn't want to hit 60C. Maybe they can refine things and get the power draw down.
 
Looking more at Ars Technica's review, they also benchmarked the CPU with 105W TDP and 65W TDP profiles, presumably to match AMD's existing TDP tiers. There's basically zero reason to use the 170W TDP profile. And yet it ships with that profile on by default.

AMD should've shipped this as a 105W CPU, with the option for enthusiasts to go up to 170W if they wanted. This alone is enough to tell me that AMD should really rethink how they ship the default configuration.

[Image: ryzen-7000x-power-redo.001 — Ars Technica power/performance chart]


[Image: ryzen-7000x-power-redo.004 — Ars Technica power/performance chart]


This is a great processor, don't get me wrong, but the default settings are dumb.
 
AMD should've shipped this as a 105W CPU, with the option for enthusiasts to go up to 170W if they wanted. This alone is enough to tell me that AMD should really rethink how they ship the default configuration.
The 105W TDP setting uses 144W instead of 170W, which is still more than the 12900K at its 125W setting; it's not like this is going to make a huge difference in how people see the power draw.
Also, other apps might scale a lot with that little bit of extra power; basing conclusions on one single app is never a good idea.
[Image: ryzen-7000x-power-redo.003 — Ars Technica chart]
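
For what it's worth, the ~144W figure lines up with how AMD maps TDP to socket power: for recent Ryzen parts the package power limit (PPT) is roughly 1.35 × TDP. A quick sanity check (the 1.35 factor is AMD's published rule of thumb, so treat the outputs as approximations):

```python
# AMD's rule of thumb for Ryzen: PPT (package power tracking) ~= 1.35 * TDP.
# Exact limits vary by SKU and BIOS; this is an approximation.
for tdp in (65, 105, 170):
    print(f"{tdp:>3} W TDP -> ~{tdp * 1.35:.0f} W PPT")
# Output:
#  65 W TDP -> ~88 W PPT
# 105 W TDP -> ~142 W PPT  (close to the ~144 W measured)
# 170 W TDP -> ~230 W PPT
```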
 
The 105W TDP setting uses 144W instead of 170W, which is still more than the 12900K at its 125W setting; it's not like this is going to make a huge difference in how people see the power draw.
However, the 7950X used less energy overall than the 12900K at 125W, because the 12900K took long enough that its lower power draw no longer mattered. A wattage figure doesn't really mean anything without a time component. And even then, what wattage is it? Max? Average? Some random point in time?

Also, other apps might scale a lot with that little bit of extra power; basing conclusions on one single app is never a good idea.
They did other benchmarks with these settings and, aside from Cinebench, there wasn't what I'd call a significant difference in performance from the extra power.
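
To make the time component concrete, you can back an approximate run time out of the energy figure. Assuming the 7950X averaged about 144W in its 105W profile (my assumption, not a number from Ars):

```python
# Energy = average power x time, so a Wh figure implies a run length.
energy_wh = 12.68     # 7950X energy for the run, from the Ars review
avg_power_w = 144.0   # assumed average package power in the 105 W profile
hours = energy_wh / avg_power_w
print(f"implied run time: ~{hours * 60:.1f} minutes")  # ~5.3 minutes
```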
 
However, the 7950X used less energy overall than the 12900K at 125W, because the 12900K took long enough that its lower power draw no longer mattered. A wattage figure doesn't really mean anything without a time component. And even then, what wattage is it? Max? Average? Some random point in time?
That wasn't my point, but OK.
It's barely less: 12.68 Wh compared to 13.55 Wh. That's less than 1 Wh more per run; someone get a calculator and figure out how much more expensive that would be over a year at 24/7...
My point was that 144W compared to 170W wouldn't make a difference to people looking at the benches and caring about power draw.
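
Taking my own calculator bait (the electricity rate is an assumption; plug in your own):

```python
# Price the per-run energy difference between the two chips.
wh_7950x = 12.68
wh_12900k = 13.55
diff_kwh = (wh_12900k - wh_7950x) / 1000      # ~0.00087 kWh per run
rate_per_kwh = 0.15                           # assumed electricity rate, USD
print(f"extra cost per run: ${diff_kwh * rate_per_kwh:.5f}")           # ~$0.00013
print(f"per 10,000 runs:    ${diff_kwh * rate_per_kwh * 10_000:.2f}")  # ~$1.30
```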
And even then, what wattage is it? Max? Average? Some random point in time?
For the 12900K?
However you look at it, it reflects the most power the chip draws over time: with a max figure that's obvious, and if it's an average, it means the chip was at its max draw for at least some of the time but evened out over the run.
As far as I know, the 7950X doesn't switch between TDPs unless it hits a thermal or other limit that would make it throttle, which I highly doubt happened in this bench.