Question: How does efficiency affect GPUs, and is it straightforward to calculate?

rambomhtri

Distinguished
Nov 3, 2014
Hi, I currently own an MSI Z390 Pro Carbon AC + i5-9600K @ 4.7 GHz Turbo Boost + iGPU Intel UHD 630.

Today I bought an NVIDIA GTX 1660 Super OC 1875MHz "ROG-STRIX-GTX1660S-O6G-GAMING" (it hasn't arrived yet) for 2 reasons:
  1. I don't want or need an expensive GPU like a 3070
  2. I don't want a freaking 180-200W (or even more) power-hungry component
Given that, I found I could go for a GTX 1070 at 150W or a GTX 1660S at 125W, so I chose the latter, also because it's newer. Then I started to think...

- Wait... I know it's terrifying to see an RTX 3070 draw 200W or even more at full load, but what I'm really looking for is the efficiency rate, meaning the "points" or performance a GPU delivers per watt of power used. Ideally and in theory, the GPU that gives me the most points per watt is the most efficient one, hence the one that completes tasks using the least power possible. Then I read this great article:
https://www.tomshardware.com/features/graphics-card-power-consumption-tested

And then I started roughly calculating the numbers for candidate GPUs, ignoring that different tasks perform differently and that max TDP does not exactly equal power consumption, but anyway:

Oh, and before calculating anything, I thought: "well, obviously, the newer a GPU is, the better, more optimized and more efficient it will be. I believe that holds globally, although I know some models or variants will brainlessly sacrifice efficiency for performance just to be crowned king."

https://i.ibb.co/MS0tLzN/1.jpg
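For reference, the "points per watt" figure in that table is just a benchmark score divided by board power. A minimal sketch of the arithmetic; the scores and TDP values below are made up purely for illustration and are not the numbers from the linked table:

```python
# Performance-per-watt from (hypothetical) benchmark scores and TDPs.
cards = {
    "GTX 1070":       {"score": 13500, "tdp_w": 150},
    "GTX 1660 Super": {"score": 14000, "tdp_w": 125},
    "RTX 3070":       {"score": 27000, "tdp_w": 220},
}

# Print the cards from most to least efficient by this crude metric.
for name, c in sorted(cards.items(), key=lambda kv: -kv[1]["score"] / kv[1]["tdp_w"]):
    print(f"{name}: {c['score'] / c['tdp_w']:.1f} points/W")
```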


Acknowledging the errors and "asterisks" behind each number, I guess we can conclude that the RTX 3070 and GTX 1660 Super are the most efficient GPUs out there, which fits the article I linked above: it states these are among the most efficient cards on the market today. Now I ask this:

1. According to this, if I export a video that uses the GPU heavily (rest of the components being equal), would the 1660 Super be the best choice for using the least amount of energy possible?

2. Let's say that with my current Intel UHD 630 iGPU I play a game at max settings at a constant 55-65 FPS, meaning the UHD 630 is at 100% (Task Manager confirms this; the CPU is not holding the GPU back at all). In other words, 15W of full GPU power. Then I enable VSync to cap the frame rate at 60 so I don't waste GPU power generating frames my monitor won't display. Now here's the real deal:
I highly doubt it, but according to what I just calculated, the GTX 1660 Super, same game, same settings, VSync enabled, should use about 13W to handle that game (a bit less than 15W because it's more efficient than the UHD 630). That sounds completely ridiculous, way too low.
Is this true?
No?
How can we correctly approach this if what I said is not really what happens?
 
The 1660 Super's idle power draw is about 15-20 watts, so if the game is barely using the GPU, your calculations are plausible. You have to take into account that a discrete GPU and an iGPU differ not only in architectural design but in the physical board around the silicon. So 13W is definitely too low; the card needs roughly that much wattage just to function at its base level.
When your card arrives, you can install MSI Afterburner with RivaTuner. It lets you monitor your GPU's voltage and power draw, so you can see how much power and GPU utilization a particular game actually uses.
 

rambomhtri

Distinguished
Nov 3, 2014
The 1660 Super's idle power draw is about 15-20 watts, so if the game is barely using the GPU, your calculations are plausible. You have to take into account that a discrete GPU and an iGPU differ not only in architectural design but in the physical board around the silicon. So 13W is definitely too low; the card needs roughly that much wattage just to function at its base level.
When your card arrives, you can install MSI Afterburner with RivaTuner. It lets you monitor your GPU's voltage and power draw, so you can see how much power and GPU utilization a particular game actually uses.
Yeah, I will measure that stuff myself, but I wanted to know what to expect and how to approach this topic of efficiency, wasted energy and power used. If the Intel UHD 630 is clearly less efficient (uses more energy for the same task) than the 1660 Super, then I don't understand how I'd end up using more energy with the discrete GPU for the same task. Maybe with a discrete GPU you have to add ~15W of "power just to be turned on" to my calculations?
In other words: 15W of "I need this to stay powered" + 13W for the task in my example = 28W for the 1660 Super to do the same task the UHD 630 does at 15W.
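That fixed-plus-variable idea can be written as a tiny model; the ~15W "just to be powered on" overhead is an assumption from the discussion, not a measured figure:

```python
# Model: total draw = fixed overhead to keep the card alive + task power.
def total_power_w(idle_overhead_w, task_w):
    return idle_overhead_w + task_w

igpu_w = total_power_w(0, 15)   # UHD 630: no separate board to power
dgpu_w = total_power_w(15, 13)  # 1660 Super under this assumption
print(igpu_w, dgpu_w)           # 15 28
```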
 
Yeah, I will measure that stuff myself, but I wanted to know what to expect and how to approach this topic of efficiency, wasted energy and power used. If the Intel UHD 630 is clearly less efficient (uses more energy for the same task) than the 1660 Super, then I don't understand how I'd end up using more energy with the discrete GPU for the same task. Maybe with a discrete GPU you have to add ~15W of "power just to be turned on" to my calculations?
In other words: 15W of "I need this to stay powered" + 13W for the task in my example = 28W for the 1660 Super to do the same task the UHD 630 does at 15W.
There's no real way of telling until you either test it yourself or find a study where somebody else already did.
You could try to compare processors this way, but it will only give you a general idea, not the big picture: reality has too many variables to consider (manufacturing, difficulty of the compute task, how well the code is optimized, how that code favors the APIs the GPUs run on, etc.).
 

lordmogul

Distinguished
Jun 14, 2014
If power draw really matters more to you than price, you can always buy a bigger card and limit its power draw. That might give you much better efficiency.
Many cards run above their optimal point in terms of MHz/W. For Polaris (RX 580) the "optimal" point is around 1200 MHz and for Pascal (GTX 10 series) it's around 1600 MHz, but the cards boost to 1400 and 2000 MHz respectively.

So a 3070 limited to 125 W should be faster than a 1660 Super at 125 W.

Now for your use case of exporting/rendering videos: a faster chip draws more power, but it obviously finishes sooner, so compare energy rather than power: 250W for 15 minutes is 62.5 Wh, versus 180W for 30 minutes at 90 Wh.

As a note on that, TechPowerUp includes performance-per-watt and performance-per-dollar (at the time of the review) sections in their reviews, with nice graphs (see their RTX 3070 Founders Edition review for an example). That is obviously focused on gaming; results in content creation can differ.
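The render example above boils down to energy = power x time. A minimal sketch with the same numbers:

```python
# Energy for a fixed task is average power multiplied by run time.
def energy_wh(power_w, minutes):
    return power_w * minutes / 60  # watt-hours

fast_card = energy_wh(250, 15)  # 250 W for 15 min -> 62.5 Wh
slow_card = energy_wh(180, 30)  # 180 W for 30 min -> 90.0 Wh
print(fast_card, slow_card)     # the faster card uses less total energy
```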
 

rambomhtri

Distinguished
Nov 3, 2014
If power draw really matters more to you than price, you can always buy a bigger card and limit its power draw. That might give you much better efficiency.
Many cards run above their optimal point in terms of MHz/W. For Polaris (RX 580) the "optimal" point is around 1200 MHz and for Pascal (GTX 10 series) it's around 1600 MHz, but the cards boost to 1400 and 2000 MHz respectively.

So a 3070 limited to 125 W should be faster than a 1660 Super at 125 W.

Now for your use case of exporting/rendering videos: a faster chip draws more power, but it obviously finishes sooner, so compare energy rather than power: 250W for 15 minutes is 62.5 Wh, versus 180W for 30 minutes at 90 Wh.

As a note on that, TechPowerUp includes performance-per-watt and performance-per-dollar (at the time of the review) sections in their reviews, with nice graphs (see their RTX 3070 Founders Edition review for an example). That is obviously focused on gaming; results in content creation can differ.
No, raw power draw is not and never was a factor at all; efficiency is, and that's what I'm talking about. I think I explained it clearly in the first post, which is why I calculated points per watt instead of sorting GPUs by which one draws less power. As you just said, I don't care if a GPU is 250W; what matters is how much energy a task costs, i.e., finding the GPU that completes a task with the least energy.

About that graph... the GTX 1660 Super (6GB GDDR6) was always the efficiency leader among the whole 1600 series, so I don't know how the Ti and regular versions end up above it in that graph. That's weird, especially the Ti by such a margin...

Oh dang it, I think you might have a point there. I was assuming (because the available info forces you to) that efficiency is constant across a GPU's entire power range. So if we divide the maximum score at top performance (no OC, of course) by the power draw that requires, we get an efficiency rate in points/watt, which I was treating as the same whether that GPU runs at its 250W maximum or at 100W.

You might be right that a GPU's efficiency is not constant along that power range, but then we don't have the information: no review benchmarks GPUs repeatedly while stepping from 50W to 100, 150, 200, 250W... Since we don't have that information...

How did you find the peak efficiency of a given GPU, and at what clock speed and power draw it occurs?

For example, your chart positions the 3070 as the most efficient GPU with the top points/watt score, but... at what power draw?
I'm sure it's how many points it can obtain at its 250W maximum.
Given what we're discussing, maybe at 200W another card beats the 3070 in efficiency?
Where or how can you check whether efficiency varies greatly with a GPU's power draw?

I actually don't quite understand which variables make a GPU's power draw increase:
Is it just the clock speed (MHz)?
So if a card ranges from 300 to 1800 MHz, does that map to 10W (idle) up to 250W (maxed out)?

Anyway, the correct approach is to actually measure how much energy a constant task needs (a game at constant FPS, same scenes). For example, play a game capped at 60 FPS for an hour and measure how much energy each GPU needed to run it (all of them capable of a constant 60 FPS with headroom left, i.e., the GPU is not completely maxed out). And that's exactly what I did!

After getting my GTX 1660 Super (6GB GDDR6), I ran some tasks on the iGPU (Intel UHD 630), such as watching a 4K video and playing a game, and then repeated the exact same tasks on the 1660S. My system: Z390 board with i5-9600K, 16GB of RAM at 3200 MHz, 23" 1080p IPS monitor, Windows 10.

Here are the results. The power and energy figures are measured directly at the outlet, so they cover the whole system (including the screen; I could have excluded it, but its consumption is completely constant and won't disturb the differences between GPUs), which is what we want. The usage percentages come from Windows Task Manager. The games are capped at 60 FPS, with settings that let the UHD 630 hold those 60 FPS constant, so the task remains equally demanding for both setups:

Intel UHD 630

Idle: 40W (desktop, nothing running, no load, all at 0%)

1080p 60fps: 43W, GPU 17%, CPU 3% (YouTube video in Chrome)

4K 60fps: 44-65W (constantly varying between those values), GPU 57%, CPU 10% (YouTube video in Chrome)

4K 60fps HDR local video file: 52W, GPU 35%, CPU 4%

MTG Arena: 90-110W, GPU 85%, CPU 30%, 9 min, 0.01480 kWh

Left 4 Dead 2: 80-90W, GPU 80%, CPU 47%, 10 min, 0.01430 kWh



ASUS ROG STRIX GTX 1660 Super OC (default clocks, no OC enabled, Quiet fan mode)

Idle: 49W (desktop, nothing running, no load, all at 0%)

1080p 60fps: 55W, GPU 25%, CPU 4% (YouTube video in Chrome). Why does the 1660S need 25% when the UHD 630 needs 17%? No idea.

4K 60fps: 60-75W (constantly varying between those values), GPU 59%, CPU 10% (YouTube video in Chrome). Honestly, I expected a lower, or much lower, GPU usage from the 1660S in these tests.

4K 60fps HDR local video file: 78W, GPU 60%, CPU 8%. This makes absolutely no sense.

MTG Arena: 95W, GPU 42%, CPU 20%, 9 min, 0.01420 kWh. Okay, now here's what I wanted to see: the 1660S clearly beating the UHD 630 in efficiency, as the data predicted.

Left 4 Dead 2: 70-80W, GPU 25%, CPU 20%, 10 min, 0.01300 kWh. Yeah, definitely a trend here: the 1660S is way more efficient when the GPU is heavily loaded.
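A rough sketch comparing the wall-socket kWh readings from the two game runs above; note these are whole-system figures, so the percentages understate the GPU-only difference:

```python
# Measured whole-system energy (kWh) for the same 60fps-capped runs.
runs = {
    "MTG Arena (9 min)":      {"UHD 630": 0.01480, "GTX 1660S": 0.01420},
    "Left 4 Dead 2 (10 min)": {"UHD 630": 0.01430, "GTX 1660S": 0.01300},
}

for game, kwh in runs.items():
    saved_pct = (kwh["UHD 630"] - kwh["GTX 1660S"]) / kwh["UHD 630"] * 100
    print(f"{game}: 1660S used {saved_pct:.1f}% less energy")
```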


As you can see, and I knew this, it's really tough to beat an iGPU in terms of efficiency. Those chips are extremely efficient (look up Iris Xe performance/W), and I'm trying to beat a freaking 15W iGPU with a 125W card. Just idling, the discrete GPU almost matches the iGPU running at max performance...

Now, two things:
First, as I expected, the 1660S is more efficient than the UHD 630 when a lot of performance is demanded, which is what theory predicted. I was right.
Second, for video playback (YouTube or local files), I see a huge difference between the two cards. I understand that at very low loads the UHD 630 will beat the 1660S in efficiency, because at the extreme low end the simple fact of keeping the 1660S powered on, idling, is enough to make it far worse than the iGPU. But still, so much difference... maybe that's the G2D score, where iGPUs completely destroy dedicated GPUs in efficiency?
Also, the CPU and GPU usage figures still break my mental model a little. I know for a fact that when Task Manager says a GPU is at 60% usage, that does not mean it's at 60% of its total power; if it were, the 1660S setup in the local 4K HDR test would draw way more than 78W, more like 150W or so. Still, I don't know how to interpret those values. Look at the power draw between iGPU and dGPU in the 4K HDR test: 52W vs 78W. That's a lot, way more than a lot.
 

Karadjgne

Titan
Ambassador
It works both ways. Efficiency is a measure of load. The 3070 might be more efficient per watt, but it takes more watts. So on short loads, a few minutes, the 1660 will use less power overall; on equal-time loads the 1660 will use less; but on a longer, untimed load the 3070 will use less because it'll finish far faster.

For instance, if you render a video, the 3070 might take 1 hour at 300w per minute. Total 18000w used. The 1660 will take 1.5hrs to complete the task at 200w per minute, so total 18000w used. On a watt to watt basis they are equitable, but on a watt to minute basis the 1660 is more efficient, but on a watt to time basis the 3070 is far more efficient, taking far less time to complete the task.

So efficiency can and will depend on point of view: on which criteria you base your judgement.
 

rambomhtri

Distinguished
Nov 3, 2014
It works both ways. Efficiency is a measure of load. The 3070 might be more efficient per watt, but it takes more watts. So on short loads, a few minutes, the 1660 will use less power overall; on equal-time loads the 1660 will use less; but on a longer, untimed load the 3070 will use less because it'll finish far faster.

For instance, if you render a video, the 3070 might take 1 hour at 300w per minute. Total 18000w used. The 1660 will take 1.5hrs to complete the task at 200w per minute, so total 18000w used. On a watt to watt basis they are equitable, but on a watt to minute basis the 1660 is more efficient, but on a watt to time basis the 3070 is far more efficient, taking far less time to complete the task.

So efficiency can and will depend on point of view: on which criteria you base your judgement.
I'm sorry to tell you... you don't really understand how watts and power work. This is a tricky topic where you need a good base of electricity knowledge to have something to say.
 

Karadjgne

Titan
Ambassador
I'm sorry to tell you... you don't really understand how watts and power work. This is a tricky topic where you need a good base of electricity knowledge to have something to say.
Ehh, really? I've been an industrial/commercial electrician for the last 30 years, I have a degree in electronics engineering, and I've been building/fixing/modding PCs for over 40 years. Tell me again how I am clueless about how power works?

Just because someone simplifies a topic to general or basic theory doesn't mean they lack understanding; they may be seeing something outside of the box you assume encompasses the whole idea. Work = time x effort. If the work takes twice as long but you expend half the effort, the resulting expenditure is equal. In the case above, both GPUs use the same amount of power overall, 18kw, but the 3070 did so in less time. So depending on expected results, the 3070 is more efficient if the expectation is overall time, the 1660 is more efficient if the expectation is power per minute, but taken from a power-from-the-wall perspective, they are equal.
 

rambomhtri

Distinguished
Nov 3, 2014
You said 300W per minute. Then you added watts. I stopped reading there. Don't take it personally, but really, you just clearly don't know how this works, and that's fine. No need to prove anything here.
 

Karadjgne

Titan
Ambassador
That's because you used wattage, not joules or coulombs, which is what's required to estimate power per time period; but you knew that already, being a genius yet still asking questions here. As I said, general terminology and theory, because specifics are unknowable with how you are going about testing theory based on assumption. So I used 'per minute' to set a standard for both cards, since wattage is not and never has been an association of time beyond the 60 Hz frequency.

I could just as easily have used 'per nanosecond' or 'per hour' and still been correct, having set boundaries of time for both cards.

Simple fact is, you cannot use wattage as a basis for determining efficiency, especially given that you've stated no expectations of results or criteria.
 

lordmogul

Distinguished
Jun 14, 2014
532
25
19,440
192
That is also the reason I OC'ed my Core 2 back in the day. I could get about 30% more clock for a 20% increase in power draw, so for fixed tasks like rendering it could (theoretically) finish 30% faster and return to idle clocks and consumption sooner.

But yes, finding the optimal efficiency obviously requires testing at different settings: run it unlimited, then at 200W, 150W, 100W, etc., measure at the wall how much the system pulls, and weigh that against the time the task takes.

And power draw is not linear: with increased clocks, voltage also has to be increased, and power draw scales almost with the square of the voltage increase. There is a reason Intel decided to cancel their 7 GHz Tejas chips and went for more efficient dual cores instead.
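That square-of-voltage relation is the standard dynamic-power approximation for CMOS logic, with power roughly proportional to frequency times voltage squared. A small sketch using hypothetical scaling factors, not measured from any real card:

```python
# Dynamic power approximation: P ~ f * V^2 (capacitance folded into the
# proportionality constant). Relative power after scaling clock and voltage:
def rel_power(freq_scale, volt_scale):
    return freq_scale * volt_scale ** 2

# Hypothetical example: a +30% clock bump that needs +10% more voltage.
print(f"{rel_power(1.30, 1.10):.2f}x the original power draw")  # 1.57x
```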
 
