Nvidia GeForce GTX 1070 8GB Pascal Performance Review (Archive)




I think he means that if you clocked the 980 Ti the same and manufactured it on 16nm, you would get the same results.

To compare two generations, compare them at the same clock speed.

That is, the new generation's performance gain is a bit of a cheat... not much real gain design-wise.
 


Actually, the comparison you are making is far crazier. You would need LN2 to accomplish that, which isn't realistic in the slightest.

The die shrink allows you to overcome thermal thresholds and diminishing returns by reducing the amount of power needed to do the same work... then they added more transistors to use the [strike]overhead[/strike] headroom that was gained, if I may oversimplify. This lets designers keep using readily available cooling solutions to keep costs down.
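To oversimplify it further with the usual first-order dynamic power model, P is roughly C*V^2*f. A minimal sketch of the idea (the capacitance and voltage numbers below are made up for illustration, not actual Maxwell/Pascal figures):

[code]
# First-order dynamic power model: P ~ C * V^2 * f.
# The numbers here are illustrative only, not real Maxwell/Pascal figures.

def dynamic_power(cap_rel, voltage, freq_ghz):
    """Relative dynamic power for a given switched capacitance,
    core voltage, and clock frequency."""
    return cap_rel * voltage**2 * freq_ghz

old_node = dynamic_power(cap_rel=1.00, voltage=1.20, freq_ghz=1.2)  # larger process
new_node = dynamic_power(cap_rel=0.70, voltage=1.05, freq_ghz=1.2)  # after a shrink

print(f"Power at the same clock after the shrink: {new_node / old_node:.0%} of before")
# The ~46% saved in this toy example is the headroom that can be spent on
# more transistors, higher clocks, or both, within the same cooling budget.
[/code]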

This is one of the better performance increases we have seen over the years. The GPU hardware is ahead of what most gamers actually use. You wouldn't think it here, because some of Tom's users fall into the enthusiast crowd, but they are the exception to the rule, not the rule itself. The 1070 and 1080 are, dare I say, overkill right now for the majority of PC gamers. That is a really good thing =)
 


I know they are good cards, but I am saying the new design is not much faster than Maxwell, and that's why they increased the clock... I was expecting the 1080 to run at 1 GHz and still be faster...

Maybe someone will underclock the GTX 1080 and compare it with the GTX 980 Ti at the same clock speed to see...

Just for fun...
 


I do "get" what you are saying... but you might as well be saying that if the 980 Ti came with a unicorn horn and pixie dust filters, it could achieve 1080 benchmark numbers. Maxwell was as far as it could go with the bones it was built on, period.

I guess if I overclocked my old GTX 260 to 10 GHz, I could avoid buying a 1080. This argument discounts all the other features, and the one that sticks out immediately is VRAM capability and capacity.

We have a Titan killer at a third of the cost! A card that can brute-force its way through lazy coding. The downside, I suppose, is that coding can get even lazier... ugh.
 


lol. I love the cards, calm down. I just want to verify Nvidia's claim that the Pascal design is a huge leap over Maxwell... if it is only the clock that is giving us more, then they are cheating, that's all...

I am not saying avoid the 1080 lol
 
Running roughly the same number of cores (i.e., comparing the 1070 to the 980), if you look at the gains in performance, it's almost exactly a linear relation to the difference in clocks. That's why I'm asking if there is anything else I'm missing. Shrinking a core doesn't make it faster; I'm basing that observation on 30 years of experience. All shrinking the core can accomplish, performance-wise, is cutting the power consumption at a given frequency. Whatever a designer does with that gain (take the lower power requirement, bump the clock, etc.), there are tradeoffs that have to be made. Without a change in the instruction set, your only levers are clock and core count. Some changes in the organization of the supporting parts can add a little bit of performance; so can better firmware and drivers. In my experience, though, it takes clock to make substantive gains at the same core count.
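As a rough sanity check of that cores-times-clock arithmetic, here is a minimal sketch using commonly published reference specs (approximate, since actual boost clocks vary by board):

[code]
# Back-of-the-envelope check of the "performance ~ cores * clock" argument.
# Specs are approximate reference figures, so treat this as an estimate,
# not a benchmark.

cards = {
    # name: (CUDA cores, approx. reference boost clock in MHz)
    "GTX 980":  (2048, 1216),
    "GTX 1070": (1920, 1683),
}

def theoretical_tflops(cores, clock_mhz):
    # FP32 throughput: 2 FLOPs per core per clock (fused multiply-add).
    return cores * 2 * clock_mhz * 1e6 / 1e12

for name, (cores, clock) in cards.items():
    print(f"{name}: {theoretical_tflops(cores, clock):.2f} TFLOPS (theoretical)")

ratio = theoretical_tflops(*cards["GTX 1070"]) / theoretical_tflops(*cards["GTX 980"])
print(f"1070 vs 980: {ratio:.2f}x")  # ~1.3x: the clock gap, trimmed slightly
                                     # by the lower core count
[/code]

If the measured gap in the benchmarks tracks that ratio, the gain really is mostly clocks, which is the point above.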
 


I see what you are saying, and admittedly GPU architecture/design is not my field. I'm a component-level troubleshooter by trade, and my understanding stems from basic electronics & electricity, transistor and diode theory, etc.

I would agree that a die shrink does not mean faster by itself, but you would agree that it creates usable [strike]overhead[/strike] headroom allowing for increased core count and frequency, am I right? Frequency has hit a wall of diminishing returns at every process size, and Maxwell was flat up against it using consumer cooling methods. That is all I am saying, and I believe you are saying the same thing. :)
 


Yes, I think so. lol. That would mean we are like Dennett and Gould, saying the same thing using different words and claiming to argue. Now I'm embarrassed.
 

The term you should be using here is headroom, not overhead. Overhead usually means dead weight: stuff that does not contribute anything useful to the results and that you wish you could get rid of.
 


Indeed! TY
 
Still benching 3840x2160 with 4xMSAA? That is completely pointless and misleading. High-DPI, high-resolution displays suffer much less from aliasing artifacts than traditional 1080p on large displays, so they don't really need that much AA, which is very FPS-taxing. I really wanted to know the performance of these high-end cards with no AA or 2xAA; I believe it would be pretty good.
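As a rough illustration of the cost, just counting coverage samples per frame (this ignores MSAA color compression and other driver optimizations, so it is only an upper bound):

[code]
# Rough count of coverage samples per frame for a few settings.
# Ignores MSAA color compression and other optimizations, so this is
# only an upper-bound illustration of why 4xMSAA at 4K is so taxing.

def samples_per_frame(width, height, msaa):
    return width * height * max(msaa, 1)

configs = [
    ("1080p, no AA",  1920, 1080, 1),
    ("1080p, 4xMSAA", 1920, 1080, 4),
    ("4K, no AA",     3840, 2160, 1),
    ("4K, 4xMSAA",    3840, 2160, 4),
]

for name, w, h, aa in configs:
    print(f"{name:>14}: {samples_per_frame(w, h, aa) / 1e6:.1f} M samples/frame")
# 4K with 4xMSAA touches about 16x as many samples as plain 1080p.
[/code]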
 

I think no AA is fair enough across all GPUs, even at 1080p: when sitting 30" away from a 24" display, 1080p aliasing is not that bad, and unless the games you play involve tons of sitting still, you usually don't have much time to stare at edges.
 


Why no full review? What about power usage and heat? A performance-only review seems lacking... I want a real way to compare across generations, and for that I need to know about noise and how hot it will make my house. We all know it is faster... the rest is important. Will we ever get a full review?

 


Well, I'd just check the performance compared to the 980 instead. The 980 is a bit stronger than a 970, so if the 1070 beats it by some margin you'd know it's quite an upgrade. I have a 290X, which is roughly equivalent to a 970, and it sure does look like quite an upgrade to me.
 
[TechPowerUp relative performance chart, 1920x1080]

http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_1070/24.html
 

Especially considering that Nvidia claims to have doubled the sync capabilities. Are they trying to avoid a two-GPU card, or are they trying to facilitate such a card? So long as they liquid-cool the next one instead of trying to make another of those three-slot albatrosses, I'll be happier. Not exactly happy, mind you, since such a card will be beyond my reach. Never mind that it would be beyond my needs as well. I like new and shiny way more than I should. lol
 
~$450 MSRP, right, just like the 1080's MSRP. You're basing your statement on the assumption that this card will be cheaper than any of the Fury series; good luck with that for the first month or two. Then there's the other assumption that AMD will do something about it. Yes, they will release Polaris and won't bother pricing the Furies lower; Fury will likely become an obsolete card with zero new production, with the chips going into the Pro Duo instead, or at least that's what I would do. Time will tell, but your MSRP references are a joke, especially since this article was written before the official release, so you have zero knowledge about availability and actual street prices.
 


We only have the Founders Edition in stores here, and it's 672 USD for the GTX 1070. Not buying until it falls to 530 at most 🙁
 
Sellers want to clear out their old cards before the price comes down. If they start selling these at the "official" price, the stores will be stuck with huge amounts of old cards that don't sell. So it is pure business.
I remember when the AMD 5000 series was very good, but sellers refused to sell it at the official price because it was too cheap... so the actual price for customers was well above the price AMD suggested.
 


530? Pffffff, what a rip-off. I'm happy that I can tell when money is being taken from me. Low supply and high demand are the cause of all these skyrocketing prices.
 


Buy it from a US store that ships outside the USA.
 