Nvidia GeForce GTX 1000 Series (Pascal) MegaThread: FAQ and Resources



Fair enough.

Just don't get caught up in the middle of it, IMO, since I bet we're just seeing the tip of the DX12 iceberg. With more games in the pipeline adding DX12 support (like the latest RoTR DX12 patch) and some being DX12 native, we will have more samples to analyze. The same goes for Vulkan support, although I'd say there will be far fewer games using it.

All in all, the more games tested, the better. It never hurts to have more information... Well, maybe it gives some headaches, but that's about it, haha.

Cheers!
 

Ambular



I've been wondering for a while now whether that works the same way as it did at the china company where I used to work. We did separate the really good, 'first quality' items from the slightly imperfect, 'second quality' stuff to be sold at a discount. But sometimes demand for the discounted stuff outstripped supply so badly that every now and again we'd get an order from corporate to slightly deface the bottoms of a batch of first-quality product and sell it as seconds.

So maybe the chip makers do make use of less perfect chips for their less expensive cards, but they may also throw in better chips when there just aren't enough poorer ones to fill an order.
 

Design1stcode2nd

Are the prices we are seeing for 3rd party cards likely to go down in the next few months, or is this pretty much what they are going to cost until they have been out for 8-12 months?
 


It really depends; my gut tells me they should go down in the next two months. By then Nvidia should be producing supply at a higher rate, and demand will start to fall.

By the holidays we should be seeing good discounts.
 


The situation is a bit similar if you look closely; the difference between the two is hardware compatibility. With tessellation you needed hardware compatible with Shader Model 5 before you could use the feature, but that's not the case with DX12. So in the tessellation case AMD got no benefit from "being ahead" with the feature in their earlier hardware; by the time tessellation became part of the API spec, Nvidia had somehow managed to create a solution "stronger" than AMD's. There were rumors about why Fermi was late to market: true, TSMC's 40nm yields were terrible for a big chip, but supposedly Nvidia was also late because they had to add tessellation to the design, since Fermi's initial design did not include a tessellation engine. Back then many speculated that Nvidia would handle tessellation in software only instead of using dedicated hardware like AMD. Now look at async compute: AMD has had the hardware since the very first GCN. What if DX12 had specifically required new hardware, like DX11 did? In that case probably not even GCN 1.1 could use its ACE engines. So this time, having had the hardware since the first generation of GCN definitely gives them the advantage.



In Project CARS the developer clearly mentioned the lack of AMD interaction with them, and AMD didn't even try to deny it. That's not the case with Hitman. If something had gone wrong with devrel, we would have heard something from the developer or from Nvidia themselves, like what happened with AoS (and in AoS the issue was mostly with Nvidia's PR team, as the AoS dev already clarified). Another example is Dragon Age 2 and the Tomb Raider reboot. With DA2, Nvidia said on launch day that they did not have access to the game build until the game officially launched, so they couldn't do any performance optimization before that. With TR they specifically mentioned that they did not get the final build with the final version of TressFX used in the game. Then there is DiRT Showdown. We know how bad Nvidia's performance was in that game, yet we never heard Nvidia make any statement or complaint about it. The game simply used tech that favors AMD's architecture. All Nvidia did was keep working with the dev to improve their performance; in several driver releases back then I saw DiRT Showdown highlighted as getting more performance optimizations. I think something similar is happening with the new Hitman: the game simply favors AMD hardware more. The funny thing is that people accuse PhysX of being the reason AMD's performance is poor in Project CARS, and yet the new Hitman also uses PhysX (same as Absolution).



To a certain extent, yes, they were biased towards AMD because of Mantle, so Nvidia has a hard time with async. But do Intel, Qualcomm, Imagination, or ARM have async implementations on par with or exceeding AMD's? Nvidia supports every DX12 feature except async compute, because MS did not dictate how async compute should really be done. This is the problem with DX12: MS wanted fast, wide adoption ASAP, so they did not strictly dictate how the implementation should be done. A Crytek dev mentioned that the problem with async compute is that there is no standard way of implementing it across GPU vendors, and since AMD hardware is in the Xbone and PS4, developers prefer how it is done on AMD hardware. And speaking of DX12 support, Polaris still doesn't incorporate every DX12 feature either.
 


Thing is, you can't make everyone happy. Same with tessellation: people complained that excessive use of tessellation was a way to make AMD look bad, and then others said the hardware has the raw power to handle it, so why not take advantage of it? Then again, no one complained that DiRT 2 had worse performance in DX11 despite there being no visual difference between DX11 and DX9 in that game. That was when Nvidia still didn't have the 480 to show how strong their cards were with tessellation. Also, 3DMark 11 was initially faster on AMD hardware because the GPU-accelerated portion of the test used OpenCL, which ran better on AMD GPUs back then. Did people accuse Futuremark of tailoring the test to give AMD hardware the advantage?
 


IMO AMD should have had no problem coming out with AIB cards on day one for the RX 480, but I think they repeated the same mistake they made with the R9 290X, though this time it's not as bad as it was back then.
 


Hard to say. It will depend on what chip is used in that Titan.
 


Good thing you mentioned Fermi, because it is a great example of what I meant by "DX12 has been there for nVidia and AMD all this time". To be honest, I can't remember how much time passed between DX12 and the 980 launch, but if nVidia delayed Fermi (taking what you said as true) because of a single feature, then why not do the same with Maxwell's generation?

I do agree it doesn't really matter who supported it first, but rather who supports it better. Tessellation was a mini-war that nVidia won, and so far the ASync battle is being won by AMD. Two very specific features that result in some form of rage-inducing fanboyism among the masses, haha.



I have no idea how those situations you mention came to be, but I do have anecdotal evidence with DiRT Showdown. I happen to have all the DiRT games (1, 2, 3, Showdown and Rally) and I have seen the changes in the EGO engine first hand. Showdown was more of a tech showcase with little to do with content. They added (if memory serves right) new illumination techniques (global illumination or something like that) and improved the physics engine a lot; they even added some specific things for Intel graphics! Those features were also included in GRID 2. At the time I had the 670 and then moved to the 7970 GHz. To my own eyes, the game ran pretty much the same without the new features enabled. The 7970 GHz took a hit with those features enabled, but not as big of a hit as the 670. All in all, the visual difference was negligible at best XD

All in all, you can say "X Dev supports Y Company because it performs better". Well, yes, that has always been the case and it will continue happening. It's the same discussion when using MSAA vs TXAA and all the other flavors that one GPU handles better than the other. And then you have the more explicit efforts like GameWorks and whatever AMD has (that I think almost no one uses? Sound thingy, LiquidVR and some other stuff).

The only way to really compare "apples to apples" is to either:

1.- Disable all custom stuff and see how they stack up (very few sites do this).
2.- Enable each proprietary equivalent per game and see how they stack up (most sites do this).

And even then, some things can't be disabled since they could be "built in" to the engine. I would love to see a mix of the above and not only "ultra, high" settings. Taking a bit more time to analyze *why* a given game performs the way it does also adds a LOT of value to a benchmark, since it tells you more than just "A behaves better than B".
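To give an idea of what I mean by analyzing *why*: even just logging frame times and looking at a percentile instead of a single FPS average tells you a lot more. A rough sketch, assuming a hypothetical frametimes.csv with one frame time in milliseconds per line (whatever capture tool you prefer, as long as it can export that):

#include <algorithm>
#include <fstream>
#include <iostream>
#include <vector>

int main() {
    // Hypothetical input: one frame time in milliseconds per line.
    std::ifstream in("frametimes.csv");
    std::vector<double> ms;
    for (double v; in >> v; ) ms.push_back(v);
    if (ms.empty()) { std::cerr << "no samples\n"; return 1; }

    std::sort(ms.begin(), ms.end());
    double sum = 0;
    for (double v : ms) sum += v;
    const double avg = sum / ms.size();
    // Approximate 99th percentile: the frame time that 99% of frames beat.
    const double p99 = ms[static_cast<size_t>(0.99 * (ms.size() - 1))];

    // The average hides stutter; the 99th percentile shows it.
    std::cout << "avg frame time: " << avg << " ms (" << 1000.0 / avg << " fps)\n";
    std::cout << "99th percentile frame time: " << p99 << " ms\n";
}

Two cards with the same average FPS can have wildly different 99th percentile numbers, and that is exactly the kind of "why" a benchmark bar chart never shows.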

I believe the only "solution" for people who don't know is to teach them? Leaving fanbois and trolls aside, I do believe most young enthusiasts are ignorant of such things and only take reviews at face value: "A is better than B, because Z game says so!". There's always more to it than that, and we all know it.



I don't disagree with that. AMD did have a head start with Mantle, and we all knew the consoles would have that effect at some point. Plus, no API is specific to the point of dictating how to implement things; OGL has had that issue for *ages*. And yeah, the whole point of DX12 and Vulkan is for the dev to decide *how* to implement stuff. But like I have always said, that is why you have frameworks like UE, CryEngine, Unity and Source, among many others. They take away part of the burden and give you great tools to develop with.

In any case, thanks for the answer, renz.

Cheers!
 
Good thing you mentioned Fermi, because it is a great example of what I meant by "DX12 has been there for nVidia and AMD all this time". To be honest, I can't remember how much time passed between DX12 and the 980 launch, but if nVidia delayed Fermi (taking what you said as true) because of a single feature, then why not do the same with Maxwell's generation?

Maxwell has support for async compute to a certain extent, but not all of it. The async timewarp used in VR is also one form of async compute, and Maxwell-based cards do quite well in that area. As I said, MS did not strictly dictate how async should be implemented, but the fact that AMD hardware is in the Xbone and PS4 will sway developers to use async the way it is done on consoles. The AoS dev once mentioned that he probably read the DX async compute spec the wrong way. This is pure speculation on my part, but I think most devs probably assumed the async compute implementation would be the same across GPU vendors because the thing is part of the API. Did we ever hear a developer complain about the difference in tessellation implementation between AMD and Nvidia? I think that's what the Crytek dev meant when he said there is no standard way of implementing async compute between the available GPU makers.

Also, one of the key points of async compute is to increase GPU utilization. Look at it from another angle and you'll find that Nvidia's GPU utilization is mostly good even without the aid of async, but that's not the case for AMD GPUs.
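Just to illustrate what "using a separate compute queue" means at the API level, here is a bare D3D12 sketch (not taken from any game; it assumes a device and recorded command lists already exist):

#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Hypothetical helper: 'device', 'graphicsLists' and 'computeLists' are assumed
// to exist already; this only shows the queue setup and the submission.
void SubmitWithAsyncCompute(ID3D12Device* device,
                            UINT gfxCount, ID3D12CommandList* const* graphicsLists,
                            UINT csCount, ID3D12CommandList* const* computeLists)
{
    // The "direct" queue accepts graphics, compute and copy work.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;

    // A second, compute-only queue: this is the "async compute" part.
    D3D12_COMMAND_QUEUE_DESC csDesc = {};
    csDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;

    ComPtr<ID3D12CommandQueue> gfxQueue, csQueue;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));
    device->CreateCommandQueue(&csDesc, IID_PPV_ARGS(&csQueue));

    // Both submissions are now in flight, but D3D12 does NOT promise they overlap.
    // A GPU/driver is free to run them back to back, which is where the
    // per-vendor differences people argue about come from.
    csQueue->ExecuteCommandLists(csCount, computeLists);
    gfxQueue->ExecuteCommandLists(gfxCount, graphicsLists);
}

In a real engine you would also fence between the two queues wherever they touch the same resources; the point is just that the API only hands you the queues, it never guarantees concurrency.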

I have no idea how those situations you mention came to be, but I do have anecdotal evidence with DiRT Showdown. I happen to have all the DiRT games (1, 2, 3, Showdown and Rally) and I have seen the changes in the EGO engine first hand. Showdown was more of a tech showcase with little to do with content. They added (if memory serves right) new illumination techniques (global illumination or something like that) and improved the physics engine a lot; they even added some specific things for Intel graphics! Those features were also included in GRID 2. At the time I had the 670 and then moved to the 7970 GHz. To my own eyes, the game ran pretty much the same without the new features enabled. The 7970 GHz took a hit with those features enabled, but not as big of a hit as the 670. All in all, the visual difference was negligible at best XD

Same story with tessellation: high levels of tessellation did hurt Nvidia's performance, but it hurt AMD more. Yet you see no one raging about how AMD used a feature advantageous to them to "cripple" Nvidia in DiRT Showdown. To me there is no good or evil between Nvidia and AMD, just business. But AMD usually whines a lot.

I don't disagree with that. AMD did have a head start with Mantle, and we all knew the consoles would have that effect at some point. Plus, no API is specific to the point of dictating how to implement things; OGL has had that issue for *ages*. And yeah, the whole point of DX12 and Vulkan is for the dev to decide *how* to implement stuff. But like I have always said, that is why you have frameworks like UE, CryEngine, Unity and Source, among many others. They take away part of the burden and give you great tools to develop with.

When it comes to these low-level APIs, I think id Software is the only one with the right mindset about it right now, because of this:

“DirectX 12 and Vulkan are conceptually very similar and both clearly inherited a lot from AMD’s Mantle API efforts. The low level nature of those APIs moves a lot of the optimization responsibility from the driver to the application developer, so we don’t expect big differences in speed between the two APIs in the future. On the tools side there is very good Vulkan support in RenderDoc now, which covers most of our debugging needs. We choose Vulkan, because it allows us to support Windows 7 and 8, which still have significant market share and would be excluded with DirectX 12. On top of that Vulkan has an extension mechanism that allows us to work very closely with AMD, NVIDIA and Intel to do very specific optimizations for each hardware.”

http://www.dsogaming.com/news/id-software-on-opengl-versus-directx-11-and-on-why-it-chose-vulkan-over-directx-12/

I think that's how a low-level API should be used: you go low level so you can do optimizations specific to each architecture.
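The extension mechanism id mentions is basically just this: ask the driver what it exposes and branch on it. A minimal Vulkan sketch (the AMD extension named in the comment is real, but which extension an engine actually keys off is my own example):

#include <vulkan/vulkan.h>
#include <cstring>
#include <vector>

// Returns true if 'physicalDevice' exposes the named device extension.
bool HasDeviceExtension(VkPhysicalDevice physicalDevice, const char* name)
{
    uint32_t count = 0;
    vkEnumerateDeviceExtensionProperties(physicalDevice, nullptr, &count, nullptr);

    std::vector<VkExtensionProperties> props(count);
    vkEnumerateDeviceExtensionProperties(physicalDevice, nullptr, &count, props.data());

    for (const VkExtensionProperties& p : props)
        if (std::strcmp(p.extensionName, name) == 0)
            return true;
    return false;
}

// Illustrative usage: take a vendor-specific path only where the driver advertises it,
// e.g. if (HasDeviceExtension(gpu, "VK_AMD_rasterization_order")) { /* AMD-only path */ }

That is the whole trick: the generic path always works, and each vendor can get its own tuned path without the API itself having to standardize it.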
 

The bolded part is the most important of all of this. I don't really know what to say to that argument other than "because GCN is under-utilized", but then maybe AMD dropped the ball at the driver level? I really don't know, but it does look weird. I am happy knowing that, at least in the DX9 titles I still play, the 7970 is always at 90%+. I will take measurements today and see how it behaves with DOOM.
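For the measurements, on the nVidia side you can poll utilization through NVML (the library behind nvidia-smi); the 7970 needs AMD-side tools instead (GPU-Z or Afterburner show the same counter). A quick sketch of the NVML side, purely illustrative:

// Build with something like: g++ gpuutil.cpp -lnvidia-ml
#include <nvml.h>
#include <chrono>
#include <cstdio>
#include <thread>

int main()
{
    if (nvmlInit() != NVML_SUCCESS) return 1;

    nvmlDevice_t dev;
    if (nvmlDeviceGetHandleByIndex(0, &dev) == NVML_SUCCESS) {
        // Sample once per second for ~10 seconds while the game runs.
        for (int i = 0; i < 10; ++i) {
            nvmlUtilization_t u;
            if (nvmlDeviceGetUtilizationRates(dev, &u) == NVML_SUCCESS)
                std::printf("GPU %u%%  memory controller %u%%\n", u.gpu, u.memory);
            std::this_thread::sleep_for(std::chrono::seconds(1));
        }
    }

    nvmlShutdown();
    return 0;
}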


I don't know if AMD or nVidia "whine". I would say it's their own fanboi / diehard fan base that usually whines. I do remember the episode with Crysis 2 and tessellation though, when they were even over-tessellating stuff off screen, haha. That was a fun read.



To be fair, you could already do "low level optimization" to a certain degree in OGL by targeting hardware-specific calls; they were part of the extensions, IIRC. The difference is in how the new stuff is exposed in Vulkan compared to before. I can only back up that statement from my own experience with OGL; I don't really know about DX, but I would imagine it's a similar scenario. Thinking about console ports, I'd say most companies were using UE or another game engine that took away all of the implicit optimization work from the dev anyway. With DX12, all engines will need to add vendor-specific optimizations in order to keep the same functionality available, I guess.
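For reference, the "targeting specific hardware through extensions" bit in OGL looks roughly like this at runtime (GLEW as the loader and the NV extension in the comment are just my example; any loader and any extension work the same way):

#include <GL/glew.h>  // assumes a current GL 3.0+ context and an initialized loader
#include <cstring>

// Returns true if the current context advertises the named extension.
bool HasGLExtension(const char* name)
{
    GLint count = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &count);
    for (GLint i = 0; i < count; ++i) {
        const char* ext = reinterpret_cast<const char*>(glGetStringi(GL_EXTENSIONS, i));
        if (ext && std::strcmp(ext, name) == 0)
            return true;
    }
    return false;
}

// Illustrative usage: only take a vendor-specific path if the driver exposes it,
// e.g. if (HasGLExtension("GL_NV_bindless_texture")) { /* NV-only fast path */ }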

Cheers!
 

TehPenguin

Guys, I'll bypass the Async discussion.

I'm having difficulties finding reviews for the cheaper custom 1060s, like the Palit/Gainward ones or the MSI 6GT OC.
They all seem to use a standard dual-fan cooler design, just not as fancy-looking as the more expensive ones.

In fact, I can't find any reviews for the 1080 or the 1070 with said coolers.

I'd really be interested to know if there is a difference in cooling performance. These are the only cards I can find ATM that sell at MSRP.
 

TehPenguin

Oh yeah, I've searched the German Tom's site, but they too only test the high-end models.

I think the Palit one is called "Dual Fan" (very original). I'll post some pictures that might clarify what I am talking about:
Gainward: [image 1478150-1.jpg]

Palit: [image 1478630-2.jpg]

MSI: [image 1478879-2.jpg]

The Gainward and Palit cards look almost exactly the same, so maybe one is just a rebrand of the other.
 


From what I read in the Toms.de review, wait for an MSI Gaming variant. That cooling solution appears to be the one to beat.

Cheers!
 


Oh! Most of them use the "gaming" moniker! Haha.

And yes, I got the maker wrong. I did mean the "Gigabyte Xtreme Gaming", but not the G1.

I wonder if a 1060 would need such a big cooling solution though.

Cheers!

EDIT: Typo.
 

RobCrezz

Yeah, there's a bunch of MSI 1060s; looks like the Gaming ones are the only ones using the better Twin Frozr cooler.

Yeah, for a low-wattage card like this the cooler isn't that important, but I would like it as quiet as possible, with as much overclocking potential as possible (keeping temps below the throttling point when OCing).

I suspect the Gaming would be the one to go for.
 

Math Geek

did you guys check page one for reviews? http://www.tomshardware.com/forum/id-3047729/nvidia-geforce-gtx-1000-series-megathread-faq-resources.html#17902599

i found some for the 1060's like the 6G DT. not the best sites but at least it's a review :) please link any others you may find so i can add them to the list. i can't read every site on the web and i'm sure i miss some!!

any idea if the german round-up is going to get translated to english for this site? google translate works but it's not as readable and drops the pictures https://translate.google.com/translate?hl=en&sl=de&u=http://www.tomshardware.de/nvidia-geforce-gtx-1080-gtx-1070-grafikkarten-roundup,testberichte-242137.html&prev=search
 

Math Geek

just referring to how the translation messes up the formatting and leaves out the pics. i don't read german so i have to rely on the translation to english :( been wondering why tom's has not had any reviews beyond the FE cards. would love to link some but don't have any to link :(