[SOLVED] 2080 vs 2080ti Idle temperatures

Fluffly Fluff

Reputable
Jul 31, 2019
I have two very similar rigs with comparable builds (similar airflow, cases and fans, similar CPUs, identical AIO coolers, etc.). One has two 2080s in SLI and the other has two 2080 Tis in SLI. It seems very odd that the lesser pair, the 2080s, idle at around 34 degrees while the 2080 Tis idle at around 40 degrees in the exact same environment. In both cases the cards are Asus Turbo models running GPU Tweak II, with identical fan settings.

Can anyone explain why this is and/or how to get the idle temperature of the 2080tis down?
 
Of course the higher-end models run hotter; they have more memory, more cores, usually a higher boost clock, and so on. Idle temperatures are 100% irrelevant anyhow. ONLY peak temperatures matter, and you should only be concerned if your GPU is exceeding the maximum recommended peak temperature for that card. If it is not, then there is absolutely nothing to worry about or fret over. You're looking for an answer to a problem that doesn't exist.
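
If you want to put a number on that, here is a minimal sketch of one way to track the peak, assuming Python with the nvidia-ml-py (pynvml) bindings installed on a system with a current NVIDIA driver (nothing in the thread confirms this setup, it's just an illustration). It polls every card once a second and remembers the hottest reading seen, so you can compare it against the card's rated maximum after a gaming or benchmark session.

# Peak-temperature tracker: poll every NVIDIA GPU once a second and remember
# the hottest reading seen per card. Start it, run your game or benchmark,
# then stop it with Ctrl+C to print the peaks.
# Assumes the nvidia-ml-py (pynvml) package is installed: pip install nvidia-ml-py
import time
import pynvml

pynvml.nvmlInit()
count = pynvml.nvmlDeviceGetCount()
handles = [pynvml.nvmlDeviceGetHandleByIndex(i) for i in range(count)]
peaks = [0] * count

try:
    while True:
        for i, h in enumerate(handles):
            temp = pynvml.nvmlDeviceGetTemperature(h, pynvml.NVML_TEMPERATURE_GPU)
            peaks[i] = max(peaks[i], temp)
        time.sleep(1)
except KeyboardInterrupt:
    for i, h in enumerate(handles):
        name = pynvml.nvmlDeviceGetName(h)
        if isinstance(name, bytes):  # older pynvml releases return bytes
            name = name.decode()
        print(f"GPU {i} ({name}): peak {peaks[i]} C")
finally:
    pynvml.nvmlShutdown()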
 

Fluffly Fluff

Reputable
Jul 31, 2019
On closer inspection, my two 2080 Tis are drawing 40 watts and 20 watts at idle. However, the two 2080s are drawing 20 watts and 10 watts at idle (and this with the exact same monitor). Yet the raw power draw of a 2080 of the same type is actually typically only about one watt lower than that of a 2080 Ti. So how is the power draw at idle double for the 2080 Tis versus the 2080s? It is very strange.
 

jasonf2

Distinguished
On closer inspection, my two 2080 Tis are drawing 40 watts and 20 watts at idle. However, the two 2080s are drawing 20 watts and 10 watts at idle. Yet the raw power draw of a 2080 of the same type is actually typically only about one watt lower than that of a 2080 Ti. So how is the power draw at idle double for the 2080 Tis versus the 2080s? It is very strange.
Card wattage is rated for maximum draw, not minimum draw. As stated earlier, the Tis have more of just about every secondary card component, have more cores, and clock differently. It is typical to see different power draws between these cards.
 

Fluffly Fluff

Reputable
Jul 31, 2019
I have found a report from another user with a single 2080 Ti powering two monitors, one of them a 60Hz 4K like mine, and he says his GPU draws only 24 watts at idle. Meanwhile, I am running bare-bones 2080 Ti cards in SLI with a much lighter idle load, and the total GPU wattage is 60 watts: my main card is using 40 watts and the secondary card 20 watts. Isn't more than double the wattage for a considerably lighter load strange? Something is off, I think.
 
If you don't have performance problems and aren't exceeding the maximum temps or voltages, then like I said, you're barking up the wrong tree for no reason. Every CPU and GPU is different right down at the silicon level, as are the "idle" conditions from one system to another, so that's really not a great metric even if all the hardware were exactly the same; there would still likely be some differences. I'm honestly wondering why you bother with SLI these days when it's mostly not even supported by developers anymore.
 

Fluffly Fluff

Reputable
Jul 31, 2019
Well, it's hot where I live, and I can turn the air con on but prefer not to if it isn't necessary. The fact that my 2080 Tis are running at 40 degrees at idle worries me. I've seen reports of others with two 2080 Tis in SLI maxing out in that same range while gaming at 4K at high FPS.

To be honest, the cards in my second rig with two 2080s only see max temps of around 44 degrees while gaming. These two 2080 Tis are hitting 60-plus degrees in the same game at the exact same settings (in game and in GPU Tweak II), so it is not only at idle. And this is at the same ambient temperature.

And I know 60-plus degrees is not a huge concern, but why is a beefier system with the same cooling running the same game, at the same FPS and the same in-game settings, 16 degrees hotter with two 2080 Tis versus two 2080s? That's higher power draw plus lower performance. If I upped the settings, the 2080s in SLI would handle it far more easily than the 2080 Tis. In a nutshell, the two 2080s have about 25% more headroom than the two 2080 Tis. And the 2080s aren't even Supers. Not only that, they are a year older and have far more use on them than the 2080 Tis.

It just makes no sense to me.
 
But even at 60 degrees they are still like 30-40 degrees under the thermal specification. You have tons of headroom. Either that, or you aren't putting much load on them or are monitoring the wrong thermal reading. If it's correct, then it's well below recommended peak temp.
 

Fluffly Fluff

Reputable
Jul 31, 2019
If it's correct, then it's well below recommended peak temp.

Yeah, but why are the 2080s running the same game with less power and lower temps at the same performance? If you upped the settings, the Ti cards, which cost about 50% more than the 2080s, would not be able to keep up! They would hit the max safe temps at much lower settings.

It's ridiculous that cards that cost me about $700 are proving to use less power, and hence would actually outperform, the supposedly "better" cards that cost about $1,100.

I still think something is off or some tweaking is needed. That would explain why the idle temperatures and wattage are inexplicably excessive.
 
As I said before, SLI is pretty much a dead technology now, and the higher you go in the card tiers, the lower the return is, and that has always been the case anyhow. The higher performance the card, the lower it usually scales in performance gains.

That doesn't mean there isn't something wrong with your cards or something else, but you need to consider that.

What are the EXACT hardware specifications for EVERYTHING in BOTH of these systems? Keep in mind, if they are not running the exact same motherboard, THAT ALONE can have a significant impact, because yes, all motherboards are NOT equal when it comes to multiple areas of performance. Furthermore, you say "would", which means you haven't tested them. They likely don't have the same cooling systems on them, and like I said before, they have ENTIRELY different hardware structures, so they are not at all comparable.

If you're that worried, pull one card from each system and do full testing on each of them, then pull them and test the other two, then put them in the OTHER system and test them again. This is how you find where problems are.
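
One way to keep those tests comparable is to capture the same readings the same way on every card while each test runs. A small sketch of how that could be done, assuming Python and that NVIDIA's nvidia-smi utility is on the PATH (the log file name is just an example, not something from this thread):

# Log per-GPU temperature, power draw and load to a CSV file once a second
# while a test runs on each card, so results from the two systems can be
# compared side by side afterwards.
# Assumes NVIDIA's nvidia-smi utility is installed and on the PATH.
import subprocess

LOGFILE = "gpu_test_log.csv"  # example file name, change as needed

cmd = [
    "nvidia-smi",
    "--query-gpu=timestamp,index,name,temperature.gpu,power.draw,utilization.gpu",
    "--format=csv",
    "-l", "1",       # repeat the query every second
    "-f", LOGFILE,   # write the readings to the log file
]

print(f"Logging to {LOGFILE}; press Ctrl+C to stop.")
try:
    subprocess.run(cmd)
except KeyboardInterrupt:
    print("Logging stopped.")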
 

Fluffly Fluff

Reputable
Jul 31, 2019
Explain this though. Multiple articles test and show that the idle power draw of a 2080 is almost exactly the same as that of a 2080 Ti. So why is HWMonitor showing the power draw of my main 2080 as around 20 watts, but that of the 2080 Ti as around 40 watts? This is a huge discrepancy, and totally different from what multiple articles and tests show should be the case.

Here is a link to an article with tests showing that idle GPU power draw ought to be similar, when in my case it is double.

https://www.anandtech.com/show/14663/the-nvidia-geforce-rtx-2080-super-review/15
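
For what it's worth, these idle numbers can also be cross-checked without any third-party utility by querying NVIDIA's NVML interface directly. A minimal sketch, assuming Python with the nvidia-ml-py (pynvml) bindings installed (an assumption, not something mentioned in the thread):

# One-shot idle check: read the power draw and temperature of each GPU
# straight from NVML, to compare against what HWMonitor reports.
# Assumes the nvidia-ml-py (pynvml) package is installed.
import pynvml

pynvml.nvmlInit()
for i in range(pynvml.nvmlDeviceGetCount()):
    h = pynvml.nvmlDeviceGetHandleByIndex(i)
    name = pynvml.nvmlDeviceGetName(h)
    if isinstance(name, bytes):  # older pynvml releases return bytes
        name = name.decode()
    power_w = pynvml.nvmlDeviceGetPowerUsage(h) / 1000.0  # NVML reports milliwatts
    temp_c = pynvml.nvmlDeviceGetTemperature(h, pynvml.NVML_TEMPERATURE_GPU)
    print(f"GPU {i} ({name}): {power_w:.1f} W, {temp_c} C at idle")
pynvml.nvmlShutdown()

If the figures printed here roughly match what HWMonitor shows, the readings themselves probably aren't the problem.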
 
I can't explain it. You can worry about what "tests" show, or you can test your own hardware. That's it, and that's all. And QUIT using HWMonitor. It's so unreliable.

Here's my copy pasta on that subject.

Monitoring software

HWMonitor, Open Hardware Monitor, RealTemp, Speccy, SpeedFan, the Windows utilities, CPU-Z, NZXT CAM and most of the bundled motherboard utilities are often not the best choice, as they are not always accurate. Some are actually grossly inaccurate, especially with certain chipsets or specific sensors that, for whatever reason, they tend not to like or work well with. I've found HWinfo and Core Temp to be the MOST accurate across the broadest range of chipsets and sensors. They are also almost religiously kept up to date.

Core Temp is great for CPU thermals only, including core temps or distance to TJMax on older AMD platforms.

HWinfo is great for pretty much EVERYTHING, including CPU thermals, core loads, core temps, package temps, GPU sensors, HDD and SSD sensors, motherboard chipset and VRM sensors, all of it. When starting HWinfo after installation, always check the box next to "sensors only" and de-select the box next to "summary".


Run HWinfo and look at system voltages and other sensor readings.

Monitoring temperatures, core speeds, voltages, clock ratios and other reported sensor data can often help to pick out an issue right off the bat. HWinfo is a good way to get that data, and in my experience it tends to be more accurate than some of the other utilities available. CPU-Z, GPU-Z and Core Temp all have their uses, but HWinfo tends to have it all laid out in a more convenient fashion, so you can usually see what one sensor is reporting while looking at another instead of having to flip through various tabs with specific groupings. Plus, it is extremely rare for HWinfo to report sensor values under the wrong sensor listings or to misreport other information. Utilities like HWMonitor, Open Hardware Monitor and Speccy, on the other hand, COMMONLY misreport sensor data, or don't report it at all.

After installation, run the utility and, when asked, choose "sensors only". IF you get a message about system stability, you can simply ignore it and continue on WITH the option to monitor that sensor, OR you can disable monitoring for THAT sensor and continue on, based on the option it gives you at the time. If you choose to continue on WITH monitoring of that sensor, which is what I normally do, and there IS instability, that's fine. It's not going to hurt anything. Simply restart the HWinfo program (or reboot if necessary and THEN restart the HWinfo program), THEN choose to disable that sensor, and continue on with sensors-only monitoring.

The other window options have some use but in most cases everything you need will be located in the sensors window. If you're taking screenshots to post for troubleshooting, it will most likely require taking three screenshots and scrolling down the sensors window between screenshots in order to capture them all.

It is most helpful if you can take a series of HWinfo screenshots at idle, after a cold boot to the desktop. Open HWinfo and wait for all of the Windows startup processes to complete. Usually about four or five minutes should be plenty. Take screenshots of all the HWinfo sensors.

Next, run something demanding like Prime95 (With AVX and AVX2 disabled) or Heaven benchmark. Take another set of screenshots while either of those is running so we can see what the hardware is doing while under a load.


*Download HWinfo




For temperature monitoring only, I feel Core Temp is the most accurate and also offers a quick visual reference for core speed, load and CPU voltage:


*Download Core Temp




Ryzen master for Zen or newer AMD CPUs, or Overdrive for older Pre-Ryzen platforms (AM3/AM3+/FM2/FM2+)

For monitoring on AMD Ryzen and Threadripper platforms, including Zen or newer architectures, it is recommended that you use Ryzen Master, if for no other reason than that any updates or changes to monitoring requirements are more likely to be implemented sooner, and properly, than with other monitoring utilities. Core Temp and HWinfo are still good with this platform, but when changes to CPU microcode or other BIOS modifications occur, or there are driver or power plan changes, it sometimes takes a while before those get implemented by 3rd party utilities, while Ryzen Master, being a direct AMD product, generally gets updated immediately. Since it is also specific to the hardware in question, it can be more accurately and specifically developed, without any requirement to include other architectures that wouldn't be compatible in any case. You wouldn't use a hammer to drive in a wood screw (at least I hope not), and this is very much the same: it's the right tool for the job at hand.

As far as the older AMD FX AM3+ platforms, including the Bulldozer and Piledriver families, go, there are only two real options here. You can use Core Temp, but you will need to click on the Options menu, click Settings, click Advanced, put a check mark next to the setting that says "Show Distance to TJMax in temperature fields", and then save settings and exit the options menu. This may or may not work for every FX platform, so AMD Overdrive is, again, the right tool for the job and the recommended monitoring solution for this architecture. Since these FX platforms use "thermal margins" rather than an actual core/package-temperature style of thermal monitoring, monitoring as you would with older or newer AMD platforms, or any Intel platform, won't work properly.

For more information about this, please visit here for an in depth explanation of AMD thermal margin monitoring.

Understanding AMD thermal margins for Pre-Ryzen processors





*Download Ryzen Master




*Download AMD Overdrive



Also, posting screenshots when requested is helpful so WE can see what is going on as well. You can learn how to do that here:

How to post images on Tom's hardware forums

 
Solution