News Next-Gen GPUs Will Require More Robust Cooling Solutions

Phaaze88

Titan
Ambassador
I don't think more robust cooling solutions are needed when the problem comes from a company 'inevitably' settling on throwing power efficiency out the window to keep up with, or stay on top of, the competition.
That just looks bad. The upper-end Nvidia cards lost to AMD's in rasterization this gen - but the mindshare is just too stronk.
The 12900K/KS is a special case for those who just plug-n-play, and BIOS/XTU are foreign lands to them. Other than that, it's completely manageable while still competing with Ryzen 5000.
I wish desktop CPUs had a retail no-IHS version...

Laptops might be another story, but I'm specifically talking about the desktop space.


Creating the problem and selling the solution will strike again...
 

KananX

Prominent
BANNED
I don't think more robust cooling solutions are needed when the problem comes from a company 'inevitably' settling on throwing power efficiency out the window to keep up with, or stay on top of, the competition.
That just looks bad. The upper-end Nvidia cards lost to AMD's in rasterization this gen - but the mindshare is just too stronk.
The 12900K/KS is a special case for those who just plug-n-play, and BIOS/XTU are foreign lands to them. Other than that, it's completely manageable while still competing with Ryzen 5000.
I wish desktop CPUs had a retail no-IHS version...

Laptops might be another story, but I'm specifically talking about the desktop space.


Creating the problem and selling the solution will strike again...
I have already called out the European Parliament, AMD, and Nvidia on Twitter, saying they should stop the insanity of ever-higher limits to win their bigger d*** wars, and that the EU should cap the maximum a GPU can draw at about 350W, which is roughly the most that makes sense for a GPU without being terribly inefficient. But of course my tweets were ignored.

I came here to say: inb4 graphics cards with two 16-pin cables and an upper limit of 1,200W, which means they can do whatever they want. Unless there is a limit set by government, capitalism will run rampant, and people will do whatever they want too. They're being encouraged by Intel, Nvidia, and AMD to do so, so a lot of people will think "why not?", unless they have issues with their energy costs at the end of the month. It's especially ridiculous if you think about the awful energy prices we now have in Europe. Hopefully people will be smarter than the manufacturers, but my hope is slim.

It's 2022, not 1995, by the way; environmental problems are a thing too, and a thing not to be ignored anymore. Inefficient GPUs aren't high tech, they're just a monstrosity.
 

spongiemaster

Admirable
I don't think more robust cooling solutions are needed when the problem comes from a company 'inevitably' settling on throwing power efficiency out the window to keep up with, or stay on top of, the competition.
Current rumors for the 4090 predict about twice the performance of a 3090 while using 600W. That's a 100% increase in performance while using 70% more power. That is an increase in efficiency. People need to stop using power usage and efficiency interchangeably. They are not synonyms.
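To put rough numbers on it, here's a back-of-the-envelope sketch; the 3090's ~350W board power is my assumption, and the 4090 figures are just the rumor taken at face value:

```python
# Rumored RTX 4090 vs. RTX 3090, perf per watt.
# Assumptions: 3090 baseline of ~350 W board power; rumored 2x performance at 600 W.
baseline_perf, baseline_power_w = 1.0, 350.0
rumored_perf, rumored_power_w = 2.0, 600.0

power_increase = rumored_power_w / baseline_power_w - 1
perf_increase = rumored_perf / baseline_perf - 1
perf_per_watt_change = (rumored_perf / rumored_power_w) / (baseline_perf / baseline_power_w) - 1

print(f"Power:       +{power_increase:.0%}")         # ~+71%
print(f"Performance: +{perf_increase:.0%}")          # +100%
print(f"Perf/W:      +{perf_per_watt_change:.0%}")   # ~+17%
```

Total board power balloons, but the work done per watt still improves.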
 

KananX

Prominent
BANNED
Current rumors for the 4090 predict about twice the performance of a 3090 while using 600W. That's a 100% increase in performance while using 70% more power. That is an increase in efficiency. People need to stop using power usage and efficiency interchangeably. They are not synonyms.
That's for the base model; 90% of people will then buy the custom models, which will use more power than that, just wait for it, and that leads to terrible inefficiency, just like with the 3090 Ti and 3090, lol. The whole of GA102 is just inefficient cards.

And then you're comparing your "efficiency metrics" against an inefficient card, so I think your whole point is flawed in the first place.
 

Phaaze88

Titan
Ambassador
Current rumors for the 4090 predict about twice the performance of a 3090 while using 600W. That's a 100% increase in performance while using 70% more power. That is an increase in efficiency. People need to stop using power usage and efficiency interchangeably. They are not synonyms.
Up to 600W from just this part alone - screw that.
Deliver that kind of performance while using around half to maybe 3/4ths of it.
 
  • Like
Reactions: martinch

Giroro

Splendid
I live in a state that has "Summer", so I have no interest whatsoever in buying a GPU that consumes over 200 Watts. My GTX 1080 was already an intolerably hot space heater, thanks for asking.

If power and price are increasing near linearly with performance, then it's not a "next generation". That's not an advancement of technology. It's just more of the current generation. We used to have a word for a GPU that provides about twice the performance while consuming about twice the power for twice the price: SLI.
 
  • Like
Reactions: martinch and KananX
The only way to solve this problem is to have a cultural shift in PC enthusiast circles. But good luck trying to tell the random person who gets off to having a million FPS that they need to tone it down. Same goes for the manufacturers.

At one point I wanted to see how much efficiency I could get out of my setup, so I've got three profiles in MSI Afterburner for my 2070 Super:
  • Voltage capped to ~0.925V, with a frequency ceiling of 1950MHz
  • Voltage capped to ~0.8V, with a frequency ceiling of 1800MHz
  • Voltage capped to ~0.65V, with a frequency ceiling of 1650MHz
I usually stick with the second profile because, for the most part, I can run at about 65-70% TBP while keeping about 95% of stock performance. If I go down to the third profile, I can easily get down to 50% TBP.

The funny thing, too, is that I noticed in some games performance doesn't go up, but the card will still happily run up to its limits. In one game, I noticed I got a 0% performance improvement at 100% TBP compared to dropping the power limit down to 75% TBP. In another game, a similar thing happened, only it was between the two lower Afterburner profiles.

I've relayed this point a lot, but I found a similar thing happened with my CPU, a 5600X, during a Handbrake run. If I let the CPU run at base clock, it chewed about 60% of the power it would at full bore, and it only lost about 15-20% of the performance. Although when running games, it normally doesn't consume anywhere near its PPT limit. Plus, I undervolted it.

So as a result of all this, my computer would've normally sat at 300+W while gaming. It now averages around 200-220W and it still handily meets or exceeds performance requirements.
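If anyone wants the back-of-the-envelope math on why I bother (these are the rough figures from above, not careful measurements):

```python
# Approximate perf-per-watt gain vs. stock for the tweaks described above.
def perf_per_watt_gain(perf_fraction, power_fraction):
    """Relative perf/W vs. stock; 1.0 means no change."""
    return perf_fraction / power_fraction

# 2070 Super on the ~0.8 V / 1800 MHz profile: ~95% of stock perf at ~65-70% TBP.
print(f"{perf_per_watt_gain(0.95, 0.675):.2f}x")  # ~1.41x

# 5600X held at base clock in Handbrake: ~80-85% of the perf at ~60% of the power.
print(f"{perf_per_watt_gain(0.825, 0.60):.2f}x")  # ~1.38x
```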
 

KananX

Prominent
BANNED
The only way to solve this problem is to have a cultural shift in PC enthusiast circles.
Not really; the EU or the state of California could limit it. Things like this have already happened in California with complete pre-built PCs.

And I generally see many enthusiasts who aren't friendly towards extreme power usage, for various reasons, among them the electric bill, heat, and environmental concerns. Smart enthusiasts are undervolters.
 
This has gotten out of control! As was mentioned above, there is up to a potential 100% performance increase (never going to happen) with 70% more power.

These two companies are out of control. AMD pushes clock speed and NVIDIA pushes resources. Each is jamming in more power to eke out more performance.

There was an article stating that the 3090 Ti, when limited to 300 watts, was still potent...
Dropping power is an issue for AMD, but when AMD pushes clocks and pushes past Nvidia, they have to respond.
 
Not really; the EU or the state of California could limit it. Things like this have already happened in California with complete pre-built PCs.

And I generally see many enthusiasts who aren't friendly towards extreme power usage, for various reasons, among them the electric bill, heat, and environmental concerns. Smart enthusiasts are undervolters.
LoL there needs to be a law preventing enthusiast level performance out of the box.


Brutal
 

JamesJones44

Reputable
That's for the base model; 90% of people will then buy the custom models, which will use more power than that, just wait for it, and that leads to terrible inefficiency, just like with the 3090 Ti and 3090, lol. The whole of GA102 is just inefficient cards.

And then you're comparing your "efficiency metrics" against an inefficient card, so I think your whole point is flawed in the first place.

That doesn't mean it's less efficient, though. Overclocked models can be more efficient, believe it or not.

If the argument is about overall power usage, I'm with you. However, the inefficiency argument isn't correct and isn't equatable to overall power usage. It's like using "there" and "their" interchangeably in English: they sound the same and look similar, but they mean two completely different things and are not interchangeable.
 

KananX

Prominent
BANNED
That doesn't mean it's less efficient, though. Overclocked models can be more efficient, believe it or not.
That rarely if ever happens; I've followed the tech market closely over the last few decades. Most custom models are less efficient due to overclocks meant to beat the competition. It's ironic, because factory overclocks are now a standard feature from both companies anyway, on most models, though not all of them.

I certainly don't need a lecture on language from you.

Efficiency is extremely important. Full stop.

High power usage isn't welcome and isn't good for the environment. You can spin this however you want; it's not a good thing. We're not talking about supercomputers here.
 

Giroro

Splendid
I have already called out the European Parliament, AMD, and Nvidia on Twitter, saying they should stop the insanity of ever-higher limits to win their bigger d*** wars, and that the EU should cap the maximum a GPU can draw at about 350W, which is roughly the most that makes sense for a GPU without being terribly inefficient. But of course my tweets were ignored.

I came here to say: inb4 graphics cards with two 16-pin cables and an upper limit of 1,200W, which means they can do whatever they want. Unless there is a limit set by government, capitalism will run rampant, and people will do whatever they want too. They're being encouraged by Intel, Nvidia, and AMD to do so, so a lot of people will think "why not?", unless they have issues with their energy costs at the end of the month. It's especially ridiculous if you think about the awful energy prices we now have in Europe. Hopefully people will be smarter than the manufacturers, but my hope is slim.

It's 2022, not 1995, by the way; environmental problems are a thing too, and a thing not to be ignored anymore. Inefficient GPUs aren't high tech, they're just a monstrosity.
I have no interest in buying a hot, overpowered GPU; that is an unappealing product. I don't want or need it. But I strongly disagree that high-powered consumer GPUs should be illegal.
All that would accomplish is to steal capability away from the real people who need that product to do work, and hand it over to the companies who can afford billion dollar supercomputers and render farms with tens of thousands of GPUs/CPUs running 24/7 at 100% with the power budget of a small city.

Limiting GPUs is basically saying "You want to try editing a video, rendering graphics, or making an indie game? Too bad, it's illegal for you to get the equipment you need. "
That is not environmentalism. It's hardcore government enforcement of the current Hollywood/tech oligopoly. That is why trillion-dollar companies spend so much money lobbying for hypocritical "green" policies like this: it kills competition and codifies their permanent control over the market. They know they will always be granted an exception to the policy, so the policy gives them a competitive advantage.
If you truly believe that GPU efficiency will finally be the piece of government control that magically saves the world (despite a lack of evidence that it would be effective), then you should be pushing for the restrictions to be forced onto big business, and only onto big business. That's the only place they have any chance of having a detectable effect, because it is a waste of resources to target gaming GPUs that will spend 99% of their lives off or at idle. But before any policy is put into place, there needs to be a lot of legwork to prove that it will actually do anything. I'm not sure there's been any research to prove things like "How much energy would be saved if every GPU on earth ran at peak efficiency?" and "Would saving that amount of energy actually have a noticeable effect?" Because nothing any government has done so far has shown a single iota of evidence of having any effect whatsoever on the global environment, let alone whether that change is actually a net positive.
At the very least, you should be complaining about the lowest-hanging fruit of societal harm. How about cryptocurrencies, which continue to be an immensely wasteful money-laundering scam?
 

KananX

Prominent
BANNED
LoL there needs to be a law preventing enthusiast level performance out of the box.


Brutal
If humanity is just sheep who can't think for themselves or understand the fuller picture, then that is what has to happen. It already did with things like cars.
All that would accomplish is to steal capability away from the real people who need that product to do work
Strange; professional workers don't use inefficient cards. They also have to pay their electricity bill, and especially if they use the GPU day and night, it simply doesn't add up for them to use inefficient cards. There are cards for that, and those are usually way more efficient than gaming cards.
 

JamesJones44

Reputable
The only way to solve this problem is to have a cultural shift in PC enthusiast circles. But good luck trying to tell the random person who gets off to having a million FPS that they need to tone it down. Same goes for the manufacturers.

At one point I wanted to see how much efficiency I could get out of my setup, so I've got three profiles in MSI Afterburner for my 2070 Super:
  • Voltage capped to ~0.925V, with a frequency ceiling of 1950MHz
  • Voltage capped to ~0.8V, with a frequency ceiling of 1800MHz
  • Voltage capped to ~0.65V, with a frequency ceiling of 1650MHz
I usually stick with the second profile because, for the most part, I can run at about 65-70% TBP while keeping about 95% of stock performance. If I go down to the third profile, I can easily get down to 50% TBP.

The funny thing, too, is that I noticed in some games performance doesn't go up, but the card will still happily run up to its limits. In one game, I noticed I got a 0% performance improvement at 100% TBP compared to dropping the power limit down to 75% TBP. In another game, a similar thing happened, only it was between the two lower Afterburner profiles.

I've relayed this point a lot, but I found a similar thing happened with my CPU, a 5600X, during a Handbrake run. If I let the CPU run at base clock, it chewed about 60% of the power it would at full bore, and it only lost about 15-20% of the performance. Although when running games, it normally doesn't consume anywhere near its PPT limit. Plus, I undervolted it.

So as a result of all this, my computer would've normally sat at 300+W while gaming. It now averages around 200-220W and it still handily meets or exceeds performance requirements.

I'm with you on this; in most cases you don't see a big boost from jumping a single generation of anything (CPU, GPU, etc.), or from the slight overclocking a lot of vendors do. Also, when it comes to older games, you rarely see much difference in overall gameplay. I keep Quake 2 around for fun and run the benchmarks each time I upgrade my hardware. At first, way back in 2000, there were some nice jumps in looks and playability. However, since about the Nvidia 4/5 series, it runs the benchmarks ever more insanely quickly with each generation I test, but gameplay itself feels no different. At some point it no longer matters how much performance the card is capable of, as long as it can maintain at least 60 FPS at your desired quality settings and resolution.
 

KananX

Prominent
BANNED
And now you do what they told ya! Only kidding. Now, to be in agreement: like Japan, we need an agreement between GPU makers that we only use 260 watts ;)
I would be fine with 260W; the last very good enthusiast GPU from Nvidia had a power limit of around 250-280W for the reference and most custom models. See, when there is no pressure, they actually do care about efficiency. Otherwise they throw it out the window, because they can. Environment and everything else be damned.
 

ien2222

Distinguished
You people do know that this isn't like CPUs. Given the way GPUs work, there's not that much that can be done with IPC gains unless new algorithms are developed, which is rather difficult at this point because the easy ones are already done. Nor is it easy to simply limit the data being worked on by employing new methodologies, as we're at the point where that's harder and harder to do as well. Therefore most of the gains in efficiency will come from node shrinks, which they have done.

And it's not even about having millions of FPS, as Hotaru suggests; right now there is a huge movement to 4K gaming, and to 8K at the top end. Everyone I know is going 4K or multiple monitors at various resolutions. Whether you like it or not, that requires larger and larger transistor counts, and therefore you get the higher power consumption.

No one is forcing you to get a halo card. 600W for the 4090? Just get a 4070 or a 4060; even the 4060 should give you 3070-3070 Ti performance with a significant reduction in power usage.
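To give a feel for how much the pixel count alone grows with resolution (a simplification; shading cost isn't strictly proportional to pixels, and it ignores upscaling):

```python
# Pixels per frame at common gaming resolutions, relative to 1080p.
resolutions = {"1080p": (1920, 1080), "1440p": (2560, 1440),
               "4K": (3840, 2160), "8K": (7680, 4320)}

base_pixels = 1920 * 1080
for name, (w, h) in resolutions.items():
    pixels = w * h
    print(f"{name}: {pixels / 1e6:.1f} MP, {pixels / base_pixels:.1f}x 1080p")
# 1080p: 2.1 MP, 1.0x | 1440p: 3.7 MP, 1.8x | 4K: 8.3 MP, 4.0x | 8K: 33.2 MP, 16.0x
```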
 
  • Like
Reactions: Sluggotg

KananX

Prominent
BANNED
You people do know that this isn't like CPUs. Given the way GPUs work, there's not that much that can be done with IPC gains
IPC gains aren't needed; just let it run on the efficiency curve, and it will not use over 500W, more like 400ish tops.

Also, if there is no pressure from the EU or other institutions to innovate, then of course there's no motivation to press for efficiency either, which includes advances in IPC.

CPU makers have pushed hard for efficiency for a long time now, and hence care a lot more about IPC; the CPU market is way more mature in general.
 
Who cares about those 600W+ GPUs? You can't stop people who have more money than brains from buying what they want, but most will pass the sanity check and just buy GPUs with 'normal' power usage. At least as long as such cards exist ... hopefully they do.
 

KananX

Prominent
BANNED
Who cares about those 600W+ GPUs? You can't stop people who have more money than brains from buying what they want, but most will pass the sanity check and just buy GPUs with 'normal' power usage. At least as long as such cards exist ... hopefully they do.
If you consider 300-400W "normal", which will apparently be the new "mid range", then don't think for a second the insanity stops at the high end.
 

ien2222

Distinguished
IPC gains aren't needed; just let it run on the efficiency curve, and it will not use over 500W, more like 400ish tops.

Also, if there is no pressure from the EU or other institutions to innovate, then of course there's no motivation to press for efficiency either, which includes advances in IPC.

CPU makers have pushed hard for efficiency for a long time now, and hence care a lot more about IPC; the CPU market is way more mature in general.


You don't understand. At all.

GPUs take a given data set and transform it, then grab the next data set and transform it. Massively parallel processing is happening here, and it's actually rather simple in how it works, so to speak.

CPUs are different. They do all sorts of different computations and tasks; to understand increases in IPC, you need to understand how pipelines are created and how prediction works to keep the pipeline full. This doesn't really happen the same way with GPUs.

For GPUs, it takes a certain number of transistors, running at a certain speed, to produce output at a specified resolution and FPS. Power draw is a function of what's needed to run that number of transistors at a given clock speed while keeping errors from happening due to under- or over-voltage. **

Therefore, if you want a higher resolution at the same FPS, or a higher FPS at the same resolution, you're either increasing the transistor count or the clock speed, and either way it REQUIRES a higher power draw. A GTX 1080 just can't do any meaningful 4K as it doesn't have the transistor count, nor could you increase the clock speed to what's required without liquid nitrogen.

We're into the 4K gaming era now, and we have been for at least a couple of years. The amount of power needed for that is going to be higher because the transistor count demands it at any given FPS. If you are still running 1080p, then you won't need anything more than a 4050 when it comes out. If you're at 1440p, then a 4060 is probably all you will need, and either way the power usage will be lower than what the previous generations needed for the same performance.

**Edit: Given the same node. A smaller node will require less power, as mentioned in my first post.
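As a rough illustration of that relationship, here's the textbook CMOS dynamic-power approximation with made-up constants (not a real GPU power model, just the scaling behavior):

```python
# Dynamic power scales roughly as N * C * V^2 * f (switching activity folded
# into the capacitance constant; leakage ignored). Numbers are illustrative only.
def dynamic_power_watts(transistors, cap_per_transistor_f, voltage_v, freq_hz):
    return transistors * cap_per_transistor_f * voltage_v ** 2 * freq_hz

# Same hypothetical chip: a higher clock usually also needs a higher voltage,
# so power grows much faster than the clock speed alone.
stock = dynamic_power_watts(28e9, 1e-17, 0.90, 1.8e9)
overclocked = dynamic_power_watts(28e9, 1e-17, 1.05, 2.2e9)
print(f"{overclocked / stock:.2f}x the power for {2.2 / 1.8:.2f}x the clock")
# -> 1.66x the power for 1.22x the clock
```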
 
Last edited:

Phaaze88

Titan
Ambassador
Each is jamming in more power to eke out more performance.
Yeah, this gen hasn't been showing that - at least, that's what I've been seeing from TPU's vast list of vBIOS files.
AMD is trading blows with Nvidia [rasterization] while using less power; even comparing the best from each, the 3090 Ti has a board power limit around 60% higher than the 6950 XT's... and the gap only widens with the partner overclocked models.
So they really needed that much to stay on top of RX 6000? And 600W from Nvidia's best next-gen card is OK - just accept it, huh?


There was an article stating that the 3090 Ti, when limited to 300 watts, was still potent...
That's bloody amazing, and so is the following...

At one point I wanted to see how much efficiency I could get out of my setup, so I've got three profiles in MSI Afterburner for my 2070 Super:
  • Voltage capped to ~0.925V, with a frequency ceiling of 1950MHz
  • Voltage capped to ~0.8V, with a frequency ceiling of 1800MHz
  • Voltage capped to ~0.65V, with a frequency ceiling of 1650MHz
I usually stick with the second profile because, for the most part, I can run at about 65-70% TBP while keeping about 95% of stock performance. If I go down to the third profile, I can easily get down to 50% TBP.

The funny thing, too, is that I noticed in some games performance doesn't go up, but the card will still happily run up to its limits. In one game, I noticed I got a 0% performance improvement at 100% TBP compared to dropping the power limit down to 75% TBP. In another game, a similar thing happened, only it was between the two lower Afterburner profiles.

I've relayed this point a lot, but I found a similar thing happened with my CPU, a 5600X, during a Handbrake run. If I let the CPU run at base clock, it chewed about 60% of the power it would at full bore, and it only lost about 15-20% of the performance. Although when running games, it normally doesn't consume anywhere near its PPT limit. Plus, I undervolted it.
So as a result of all this, my computer would've normally sat at 300+W while gaming. It now averages around 200-220W and it still handily meets or exceeds performance requirements.
Of these tweaks, unfortunately, the people who do them are still the minority - the majority just plug-n-plays without a care.


...right now there is a huge movement to 4K gaming, and to 8K at the top end. Everyone I know is going 4K or multiple monitors at various resolutions.
Show us where we can look up this huge movement - what do I Google search or whatnot - because I have to see this; everyone you know is too small a sample.
 

spongiemaster

Admirable
Strange; professional workers don't use inefficient cards. They also have to pay their electricity bill, and especially if they use the GPU day and night, it simply doesn't add up for them to use inefficient cards. There are cards for that, and those are usually way more efficient than gaming cards.
The top-end configuration of the just-announced Hopper professional GPUs from Nvidia has a 700W TDP. Plop eight of those into an H100 workstation and you have 5.6kW just for GPUs in a single system. Based on your incorrect definition of efficiency, that is catastrophically worse than any rumored 4000-series gaming GPU.
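The arithmetic, plus a toy illustration of why total draw and efficiency are separate axes (the relative performance numbers here are made up purely for illustration):

```python
# Total GPU power in that hypothetical 8x Hopper box.
h100_tdp_w, gpu_count = 700, 8
print(f"{h100_tdp_w * gpu_count / 1000} kW of GPU power")  # 5.6 kW

# Efficiency is work per watt, so a huge-TDP part can still be the more
# efficient one if it does proportionally more work (made-up numbers).
def perf_per_watt(relative_perf, watts):
    return relative_perf / watts

print(perf_per_watt(4.0, 700) > perf_per_watt(1.0, 350))  # True
```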