Nvidia GeForce GTX 1000 Series (Pascal) MegaThread: FAQ and Resources




I found the die size specs.
GTX 1080 = 314mm^2
GTX 980 = 398mm^2

It also depends on how much heat is generated per unit area. However, the other manufacturers are getting good temperatures:
https://www.techpowerup.com/reviews/Gigabyte/GTX_1080_G1_Gaming/28.html
Idle / load temperatures:
Gigabyte GTX 1080 G1 Gaming: 42°C / 70°C
MSI GTX 1080 Gaming X: 51°C / 72°C
NVIDIA GTX 1080 FE: 36°C / 83°C
NVIDIA GTX 1070 FE: 39°C / 83°C

I think that may be because they have moved to three fans, though. Maybe it's the case that Pascal chips just run hotter. Otherwise it's simply that the FE cooler is fancy-looking rubbish.
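To put some rough numbers on the heat-per-unit-area point, here's a quick back-of-the-envelope sketch. It assumes the commonly quoted board powers of 180W for the GTX 1080 and 165W for the GTX 980 together with the die sizes above; treat it as an illustration, not measured heat output:

```cpp
#include <cstdio>

// Rough power-density comparison: quoted board power spread over die area.
// These are nominal TDP figures, not measured heat output under load.
int main() {
    struct Card { const char* name; double tdp_w; double die_mm2; };
    const Card cards[] = {
        {"GTX 1080 (GP104)", 180.0, 314.0},
        {"GTX 980 (GM204)",  165.0, 398.0},
    };
    for (const Card& c : cards) {
        // Watts per square millimetre of die: higher means the cooler has to
        // pull the heat out of a smaller, hotter spot.
        std::printf("%-18s %.0f W / %.0f mm^2 = %.2f W/mm^2\n",
                    c.name, c.tdp_w, c.die_mm2, c.tdp_w / c.die_mm2);
    }
    return 0;
}
```

That works out to roughly 0.57 W/mm^2 for the 1080 versus about 0.41 W/mm^2 for the 980, so even though the new card draws less total power than the big 250W+ chips, its heat is concentrated on a smaller die, which is part of why the small FE cooler has a harder job.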



Also, I looked up FinFET technology, and it is said to be more efficient than planar MOSFET designs. (Posted back in the thread on this.) I don't know how that translates to this Pascal chip.

What I do think (thinking out loud) is that Pascal probably follows the same architectural design that Maxwell did. I could be way off with that, though.

What I mean is, Maxwell was more powerful than Kepler (CUDA core for CUDA core) because they redesigned the whole chip from the ground up. They found a way to use the same size of transistors but draw less power and get more performance.

Nvidia may have kept the same architectural approach for Pascal but built it with FinFET transistors instead. I don't know, though. In layman's terms, FinFET means the transistor channel is a raised 3D "fin" wrapped by the gate instead of a flat planar channel, which gives better control of the channel and less leakage.

 
Well, of course the other manufacturers are getting good temps: those are custom cards. I'm just saying, my 300W R9 390 hits the same temps under full load as that Gigabyte 1080 G1 Gaming (about 70°C), and at 25-40% fan speeds. Why? The heatsink on it is a lot larger, and the die is probably larger too. A lot of people assume that a card with lower power requirements, like the 180W GTX 1080, will run cooler than one with higher requirements, like the 300W R9 390. There is more heat to dissipate with the R9 390, but in general the coolers on the R9 390 are better at dissipating heat than the 1080 coolers.
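A rough way to see why the bigger cooler wins despite the higher wattage: at steady state the GPU temperature is roughly ambient plus power times the cooler's thermal resistance. The thermal-resistance numbers in this sketch are invented purely for illustration, not measured specs for either card:

```cpp
#include <cstdio>

// Steady-state approximation: T_gpu ~= T_ambient + P * R_theta, where
// R_theta (degrees C per watt) describes how effective the cooler is.
// The R_theta values below are made up for illustration only.
int main() {
    const double ambient_c = 25.0;
    struct Setup { const char* name; double power_w; double r_theta_c_per_w; };
    const Setup setups[] = {
        {"300 W card, big triple-fan cooler", 300.0, 0.15},
        {"180 W card, small blower cooler",   180.0, 0.32},
    };
    for (const Setup& s : setups) {
        double load_temp_c = ambient_c + s.power_w * s.r_theta_c_per_w;
        std::printf("%-35s -> ~%.0f C under load\n", s.name, load_temp_c);
    }
    return 0;
}
```

With those made-up numbers the 300W card lands at about 70°C and the 180W card at about 83°C, which is roughly the pattern in the temperatures quoted above.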
 


just found a ftw review this morning. it seemed right on par with the other custom cards i have seen reviewed. they have all been pretty much equal so far, with small improvements over the FE cards. of course they run cooler and quieter, but they have only gained a few fps over the FE cards and none have stood out over the others as "best", so you can't go wrong with either card. i will say the ftw card used more power than the others, at almost 250w. the extreme gaming is not out yet so no idea how that one will do. keeping my eye out for other reviews, and as i find them i'll post them :)
 


Most excellent. Thank you! :)

 


Remember that in DX12, it's up to the developer to implement SLI/CF. NVIDIA/AMD can't force it via drivers anymore. I'm fully expecting DX12 to kill multi-card gaming when all is said and done, because of this.
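For anyone wondering what "up to the developer" actually looks like: under DX12 each physical GPU shows up as a separate adapter that the application has to enumerate, create resources on, and split work across itself; there is no driver-side SLI/CrossFire profile doing it behind the scenes. A minimal, Windows-only sketch of just the first step (enumerating adapters through DXGI) would be something like this; it's an illustration, not code from any game or from NVIDIA:

```cpp
#include <windows.h>
#include <dxgi.h>
#include <cwchar>
#pragma comment(lib, "dxgi.lib")

// List every GPU the system exposes. Under DX12 explicit multi-adapter,
// the game itself has to pick adapters and distribute work across them;
// the driver no longer hides the second card behind a single device.
int main() {
    IDXGIFactory1* factory = nullptr;
    if (FAILED(CreateDXGIFactory1(__uuidof(IDXGIFactory1),
                                  reinterpret_cast<void**>(&factory))))
        return 1;

    IDXGIAdapter1* adapter = nullptr;
    for (UINT i = 0;
         factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        std::wprintf(L"Adapter %u: %s (%llu MB dedicated VRAM)\n", i,
                     desc.Description,
                     static_cast<unsigned long long>(desc.DedicatedVideoMemory)
                         / (1024 * 1024));
        adapter->Release();
    }
    factory->Release();
    return 0;
}
```

Splitting frames (AFR) or screen regions across those adapters is then the engine's job, which is exactly the work that used to be hidden in the driver.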
 
Gave in and am purchasing the Gigabyte GTX 1080 Gaming X. Huge upgrade from my GTX 770, which did NOT age nearly as well as I thought it would. Likely the last HW upgrade for my rig (2600K based); hoping to squeeze another 2-3 years out of it.
 
definitely something to remember, and so far dx11 has been much kinder to sli than dx12 is. over time, though, it should even out as developers figure things out. it was something i noted when dx12 came out and we got the details of it. now we can stop blaming amd/nvidia for poor driver support; the majority of the blame should go to the developers now, and how well or badly they implement it, if at all.

not sure if it will "kill" multi-gpu, but it will clearly have an impact if few developers choose to do the work to include it. that, and single cards should be strong enough for most uses, except for those that really like taking it to the extreme. a single 1080 is more than enough for me; 2 would just be because i can afford it and not because i need it. even then i'm leaning toward the 1070 level of performance. hopefully amd adds something to this range in the next couple of months. clearly the 480 is not going to do it. maybe a 490 if/when it shows.
 
Lol, some people are raving about "multi gpu is the future" when in reality I'd say we are probably moving away from it. Some people think that moving the implementation to the developer instead of the gpu maker will result in more and more games supporting multi gpu.
 
Well, it is a double-edged sword.

When you move stuff out of the driver and into the wild, you always have the "in-between" fellas making graphics engines.

I wouldn't put it past UE or Source to include an "out of the box" multi-GPU solution in their engines.

We just need to see how it gets tackled in the development cycle and how much it is publicized. Plus, *forcing* AFR in games is not *that* hard. Instead of keeping the full SLI/XFire structure in the driver, nVidia and AMD could expose some of it so that developers just need to create a profile or something.

Anyway, we gotta wait and see.

Cheers!
 
i'd assume there are basically templates to follow, the same as with other aspects of the various engines out there.

but someone has to write those templates, and then they have to be learned and so on. i'd hope it could be added as part of a package for a given engine, but who knows. as a developer i'd be more inclined to use an engine that made it as easy as possible for me to implement as much as possible. but then i am not a developer, so i have no idea how it really works; i only know the basics of how the engines and sdks work, at a very basic level :)
 
@ Math Geek: we both expressed a preference for the Asus Strix. I read a review (I can't remember which one) that said something important to me: the Asus cards have moved to some sort of fixed solid-block chokes, meaning they should never coil whine. Gotta be a winner for that.

I was really happy that my 980 Strix didn't whine, after my 970 had. The 970 went back for a fault anyway, though. Double lucky.
 



N.B. Yes of course.

For example, I have owned the Asus Strix 970 and the 980. The 980 has a different heat-pipe cooling system, and it cools just as well as, if not a degree cooler than, the 970 did, which means it runs quieter too. I just love my 980; I got so lucky.

Even if I went crackers and bought a 1080 or 1070, I would keep my 980. I would not part with it for love or money. I mean, I like the idea of recouping some cash by selling my 980 if I upgrade, but I think I'd be better off keeping it. It can do PhysX or just be a solid backup card. Or I might end up with two PCs. (That's why I still have a GTX 650 Ti Boost: I can still play games if my main graphics card goes faulty.)
 


Based on TechPowerUp, the 980 and 1080 are more efficient than the 980 Ti, Titan X and 970... Jen-Hsun Huang was also talking about the 980's efficiency and the 1080's improvements...

I guess power management was a major focus for x80 cards...
 


I don't think the 980 is more efficient than the 970... both are very similar, but that would not make sense. The GTX 1080 has slightly higher power requirements than the GTX 980, so going by my definition of efficiency it is less efficient. Of course perf/watt is always going to increase, but that's not what we are talking about. That's like saying the 295X2 is an extremely efficient GPU because its performance per watt is so much higher than that of GPUs from 2007. But it is not efficient, and we know that. You may say, "well, that's a 2007 GPU", but the same applies when comparing 2016 GPUs to 2015 GPUs. You leave performance out of the equation and strictly compare power consumption.
 

"We heard a bit of coil-noise though, nothing dramatic or high squeaks just a bit of rattling in the background."
http://www.guru3d.com/articles_pages/asus_rog_strix_geforce_gtx_1080_review,39.html
 


Perf per watt is a measure of efficiency. However, according to TechPowerUp's charts, idle power consumption is better on the 980.
 
I don't really care about idle, personally. Idle only affects those extremely tiny and unnoticeable electricity bill charges. Load is what really matters in my mind.

Anyway, well the 295X2 is an incredibly efficient GPU. The FX 9590 is an incredibly efficient CPU.

If the GTX 1080 were a 250W GPU it would still have better perf/watt than the 980 Ti, so technically it would be more efficient. But you know what else? It would also suck eggs if it were a 250W GPU instead of a 180W one.
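Just to make the two definitions concrete, here's the arithmetic with placeholder numbers. The relative-performance figures are invented for the example (they are not benchmark results); the point is only to show how a hypothetical 250W GTX 1080 could still win on perf/watt while losing on raw power draw:

```cpp
#include <cstdio>

// Two different notions of "efficiency":
//   1) perf/watt   -> relative performance divided by board power
//   2) power alone -> just how many watts the card pulls under load
// The performance numbers below are placeholders purely to show the arithmetic.
int main() {
    struct Card { const char* name; double rel_perf; double power_w; };
    const Card cards[] = {
        {"GTX 1080 at its real 180 W",  100.0, 180.0},
        {"Hypothetical 250 W GTX 1080", 110.0, 250.0},
        {"980 Ti-class card",            75.0, 250.0},
    };
    for (const Card& c : cards) {
        std::printf("%-28s perf/watt = %.2f, power draw = %.0f W\n",
                    c.name, c.rel_perf / c.power_w, c.power_w);
    }
    return 0;
}
```

The hypothetical 250W card still beats the 980 Ti on perf/watt, but by the power-draw-only definition it is clearly worse than the real 180W card, which is the point being made.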
 


Thank you. I am gonna go reading now to find out what's what.
 


I found it. http://www.kitguru.net/components/graphic-cards/zardon/asus-republic-of-gamers-strix-gtx-1080-aura-rgb-oc/31/

They say: "Due to the adoption of concrete alloy chokes ASUS have completely negated any coil whine under extreme load situations."

I wonder if guru3D were hearing something else. I bet they know coil whine when they hear it though. They did say it was, "just a bit of rattling in the background". (Basically no high-pitched squeaks.)


I think I still have my mind on the Strix. I would think more seriously about it, but I can't justify the UK price.


EDIT: Actually I found OC3D TV talking about zero coil noise on the Asus 1080 Strix. At 12:00 - 12:45 he covers it completely. https://www.youtube.com/watch?v=IevrJgHhM5U
 


About time! I've been waiting for proper SLI benchmarking for weeks! I honestly do not understand why these benchmarks aren't being pushed out. Now... where is a GTX 1070 SLI benchmark? -.-
 

The Guru3d GTX 1080 SLI review has been out since 6/6/16.
http://www.guru3d.com/articles-pages/geforce-gtx-1080-2-way-sli-review,1.html
 


After reading that review, I actually did not see it as being anywhere near complete. A lot of ifs and buts, and also worth noting is their concern about the driver that was the only one available at the time. I know there are newer drivers out now which succeed the ones Guru3d used for their benchmarking. Have they made improvements to SLI? I am not sure. But I would like to see a more complete benchmark undertaken with both the 1080s and especially the 1070s! Let's face it... not many will SLI the 1080, but 1070 SLI is a much more important benchmark to look at when comparing bang for buck ("SLI issues" aside).

BENCHMARKKKKS ploiiiise
 


http://www.gamersnexus.net/guides/2441-diy-gtx-1080-hybrid-thermals-100-percent-lower-higher-oc-room

Scroll to around the 5th minute mark.

"these dips translate in the real world to severe screen artifacting or flickers or blackscreens or even frame drops"

Again, mentioned in the 9th minute.

If you can't be bothered watching the video, here's a quote I've pulled from their article, below the video.

"This chart shows the clock-rate versus time. The GTX 1080 chokes on its thermals (and power), and begins spiraling once it's racked-up heat over the period of an hour-long test. 82C is the threshold for a clock-rate reduction on the GP104 GPU, as we show above.

You can see that the clock remains stable for a good 10+ minutes, but starts dying after that. The clock-rate recovers about 15 minutes later. These dips cause severe frame dropping or complete screen blackouts, in the worst cases."


Now I'm no expert, but I've always read, and have come to understand, that fluctuations in clock rates can translate directly into an impact on in-game performance.

Do you disagree with what is being said within the article? (and there are many more across the web)
 

Quotes from your own article:

"Frequency fluctuations show a range of approximately ~60MHz each time the GPU diode hits ~82C absolute. This can trigger a slight latency increase or framerate fluctuation at the exact moment of frequency fluctuation, but is basically imperceptible.
This dance occurs five times in a span of 42 minutes. To this end, the metric is important to measure, but can be ignored as an end user. You will not perceive these hits, and the frequency throttle is so minimal as to be imperceptible on framerate in the greater scheme of a 2-hour play session."
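For a sense of scale on that ~60MHz dip: assuming the card is boosting somewhere around 1800MHz (an assumed typical GPU Boost figure, not a number from the article), the dip is only a few percent of the running clock, which is why it reads as imperceptible:

```cpp
#include <cstdio>

// How large is a ~60 MHz throttle dip relative to the clock the card is
// actually boosting to? The 1800 MHz boost figure is an assumption of a
// typical GPU Boost clock, not a value taken from the article.
int main() {
    const double boost_mhz = 1800.0;
    const double dip_mhz   = 60.0;
    const double dip_pct   = dip_mhz / boost_mhz * 100.0;

    std::printf("Dip: %.0f MHz out of %.0f MHz = %.1f%%\n",
                dip_mhz, boost_mhz, dip_pct);
    // Even if framerate scaled linearly with core clock (worst case), a
    // 60 fps scene would momentarily lose only about that same percentage.
    std::printf("At 60 fps that is at most ~%.1f fps for the moment of the dip\n",
                60.0 * dip_pct / 100.0);
    return 0;
}
```

That comes out to roughly a 3% clock dip, or about 2 fps at 60 fps for the moment of the dip, which squares with the article's "imperceptible" framing even though the artifacting/blackscreen cases it mentions are a separate, worse failure mode.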