Nvidia GeForce GTX 1000 Series (Pascal) MegaThread: FAQ and Resources

Page 79 - Seeking answers? Join the Tom's Hardware community: where nearly two million members share solutions and discuss the latest tech.
There's still a bit of value left in our 980, then, if the 1060 costs about £250 and is slower than the 980.

The 1060 is listed in that link at between £240 and £320 (from the ebuyer link Yuka posted).
 
Looks like once we see custom RX 480 models, there will be some direct competition between the 1060 and the 480. Comparing the numbers for the two cards, the custom 1060s I have seen reviewed are about 15% better than the reference 480.

The OC teasers we have seen for the 480 show this much improvement easily. I'd expect custom models to match the custom 1060s pretty well at similar prices overall. Now if the custom 480 cards would just hit the market, we'd know for sure.
 
It seems pretty clear that Nvidia wants you to step up to the 1070 and higher if you want to run SLI. For the price of two 1060s you can get a high-end OC'd 1070 or a base 1080 (when available, of course).
 
Okay, I'm feeling a little vindicated. Finally, a DirectX 12 async compute benchmark not developed with AMD: the 3DMark Time Spy demo. And guess what? Pascal cards do pretty well when they don't have to deal with vendor-specific optimizations. Of course, anyone with any knowledge of how these things work can guess the backlash, denials, and cries of bias!
http://www.pcper.com/reviews/Graphics-Cards/Whats-Asynchronous-Compute-3DMark-Time-Spy-Controversy
http://www.futuremark.com/pressreleases/a-closer-look-at-asynchronous-compute-in-3dmark-time-spy

Again... it is too early to tell what the future of DirectX 12 is. Using two or three AMD-developed benchmarks to make claims about the entire universe of DirectX 12 is misleading. Show me some Elder Scrolls Online, Unreal Tournament, Watch Dogs 2, Star Citizen, or some other game that wasn't developed in-house at AMD, and then we can talk.
 


Yes, async compute is part of DX12, but the feature was heavily pushed by AMD, just like tessellation was. The thing is, both the Xbox One and the PS4 have AMD hardware in them, so when it comes down to it, devs prefer async to be done the way it is done on AMD hardware. We should probably ask MS why they did not dictate how async should be implemented in DX12; as a Crytek dev mentioned, right now there is no standard way to implement async compute across GPU makers.

Personally, I think devs could improve DX12 performance on Maxwell (heck, even as far back as Fermi) even without async compute, but I'd bet none will be willing to do it. And look at recent developments: even on GCN, some devs decided to support async compute only on GCN 1.1 and above, despite GCN 1.0 having the hardware to properly support the feature. So what does that mean going forward? Will developers only support the most recent hardware with their games? And will they optimize their current titles for future hardware?

With AoS we could see architecture-specific optimization; same with Hitman. AoS was first developed in Mantle before the transition to DX12, and was created to showcase GCN hardware's strengths. Hitman clearly favors AMD GCN, as shown by the game being significantly faster on AMD cards even in DX11 versus Nvidia cards. Doom? AMD benefits from the major consoles being GCN. Remember the alpha? AMD hardware was faster during that period because the build had console optimizations. With Vulkan, AMD has the advantage of using hardware that would otherwise sit idle in OpenGL or DX11. AMD's OpenGL performance was supposed to match the equivalent Nvidia card, but that did not happen, because we know how AMD treats OpenGL. Also, as far as I know, async in Doom has not been enabled on any Nvidia GPU, including Pascal.
 
I ran across this old benchmark illustrating the point about Hitman. It is impossible to claim that this game performs better because AMD has an advantage in DirectX 12 or async compute. AMD has an advantage in Hitman, period. The exact same thing can be said about Ashes, too.

[Chart: Hitman benchmark results at 1920×1080]
 


People will always try to find fault when they don't see what they want to see. 3DMark 11, for example, performed quite well on AMD cards back then because the GPU-accelerated portion of the test used Bullet with OpenCL, and the results were a bit better on AMD cards. Did people accuse Futuremark of tailoring the software toward AMD hardware? No one did. People complained about excessive tessellation then, and now they complain when Futuremark doesn't use async compute heavily?
 
I don't know what point you guys (renz and 17seconds) are trying to make, really...

AMD had a head start with async because they added it in Mantle, but using tessellation as an example is kind of ironic. Yes, AMD pushed for tessellation in DX10.1, and then when nVidia pushed for DX11, tessellation performed way better on nVidia hardware. So it's kind of a contradictory point to say "AMD has an advantage with async!", especially when nVidia is doing pretty well with Pascal's async implementation. I'd say it's not the "unfair" advantage you're making it sound like.

In regards to Hitman, I don't know why it's "biased" towards AMD hardware, to be honest, but could it be the equivalent of Project CARS in AMD's case? It could be that nVidia just doesn't want to do anything further for that title, or that they have a grievance with Eidos or the developer. I wouldn't put it past petty managerial rivalry or something like that, for both companies.

And like I said, and keep it close to your hearts: DX12 is not biased towards AMD. Same with Vulkan. Maybe nVidia is just having a hard time adjusting their hardware to it? No, I don't think so; they are doing perfectly fine with Pascal, so that can't be it. I really believe DX12 came along and nVidia did not plan to fully support everything in the API/spec until Pascal arrived.

Cheers!
 
I am planning on staying at 1080p gaming for the next two years and then upgrading to 1440p. Is that a good plan, or should I just go for the 1070 and try 1440p on that? If I were to stay at 1080p, should I get the RX 480 or the GTX 1060?
 
The point, Yuka, is that all across the less-informed corners of the internet, people are declaring AMD the "winner" of the DirectX 12 performance battle. They base these findings on a select few benchmarks that are misleading. That is about as clear as I can make it.
 


I have read some reports that the new Time Spy uses async commands that favor Nvidia rather than the ones used in the games where AMD excels. The result is that Nvidia fares better in this benchmark than it has at async in games so far. The criticism was loud enough that it even prompted 3DMark to release a statement about it. If I run across the link again, I'll share it.

There is always going to be some controversy one way or the other, given how badly people want something to be true. You want Nvidia to be better at async/DX12 than they are, so you latch on to anything, just as easily as AMD fans latch onto the async benefits we have seen in games.

I don't personally care, but if I had to choose, I'd pick in-game performance over a synthetic benchmark any day :) But whatever helps you feel "vindicated", go with it.
 
I am liking this 1060 launch a lot more than the 1070/80 ones. We got custom cards the same day as the FE, stock seems to be okay (Overclockers UK had a ton of cards yesterday), and overall it seems to be going smoothly. We're seeing reviews for many different cards already as well.

Pay attention, Nvidia/AMD, because this is how it is supposed to look when you release a new card!!
 


Let's see what happens when the Pascal Titan comes out. I hope Nvidia has learned their lesson...
 


Not only do I not care about synthetic benchmarks (I read them because I read everything), I don't care about benchmarks for games I don't play (I read those too, for the same reason). If a card isn't going to improve my in-game experience, I pass it by. Hypothetical performance is all well and good, but it won't increase your framerate in a game unless it's well executed in the game engine of the moment.
 
lol, sadly I spend so much time reading everything to keep this thread updated, but I really don't care that much overall. I stay up to date on what can handle which settings and resolutions for the custom builds I do. But the nitpicky "I got 3 more fps than you 😛" stuff I really don't care for. It's just not that important, and it occupies a lot of time for some people, thinking up reasons why their 3 fps higher is better than your 3 fps higher...
 


Asus 1060 OC Strix vs. non-OC Strix: your opinion, math geek! Ready, set, go! 😀
 
Non-OC Strix. Same setup; you just have to OC it yourself :)

I'm not buying into the whole "binning" of chips people claim. It's a good old wives' tale started by the companies to sell the same card at 2-3 different prices. 😛 (Or 10 prices for brands like MSI and EVGA.)
 


Thanks, man! 😀 Again, patience is a virtue, they say.