Nvidia 980Ti vs AMD R9 Fury X?


teflon66

Which one would be better for 1440p?
Is AMD's driver support bad?
Overall, what would be the best GPU to get and have the least problems?
(I know the Fury X hasn't been released yet, but its specs and some benchmarks have)
 


But the new cards from AMD don't support DX12.1... so I'm not sure what you mean.
 
I'm still going to get the Hybrid 980 Ti to keep my system as cool as possible, as it gets really hot where I live. I believe it's coming out in mid-July.

Would an 850 W PSU be sufficient for 980 Ti SLI, or would something like a 1,000 W PSU be better?

980Ti supports DX12.1 right?
 
Indeed, any GM2xx chip is DX12.1 compatible.

May I ask what brand and model the PSU is? If you would like to SLI in the future, I would suggest the EVGA SuperNOVA 1000 G2 or the SuperNOVA 850 G2. Both are based on the Super Flower Leadex Gold platform, so they are built well.
 
One 980 Ti calls for a 600 W PSU and 38 A of total system power on the 12 V rail. You want roughly 1.5 times that for two-way SLI, primarily in amperage, so you should be shooting for about 57 A combined on the 12 V rails for 980 Ti 2x SLI.

Most 850 W PSUs easily have over 57 amps. I've only seen one that has less, and it was a Topower. My silver-rated XFX Black Edition 850 W has 70 A.
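If you want to sanity-check the math yourself, here's a minimal sketch of that rule of thumb. The 38 A figure is the recommended total-system number quoted above, and the 1.5x SLI multiplier is the same rough heuristic, so treat the output as a ballpark, not a guarantee:

```cpp
#include <iostream>

int main() {
    // Recommended 12 V amperage for a single 980 Ti system (from above).
    const double singleCardAmps = 38.0;
    // Rough rule-of-thumb multiplier for two-way SLI (from above).
    const double sliFactor = 1.5;

    // Target combined 12 V rating for the PSU: 38 * 1.5 = 57 A.
    std::cout << "Target combined 12 V rating: "
              << singleCardAmps * sliFactor << " A\n";
    return 0;
}
```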
 
The 980 Ti is a fan-based GPU, and trust me, they aren't loud at full load. I'm sitting next to one. The Fury X is a lovely card, and it was a choice between the two for me too.
But Nvidia is the lesser of two evils for driver support. If only these two companies would put as much emphasis on driver support as they do on how good the card is.
But at the same time, I am hopeful (I know it's silly to dream) that they are both holding back their best driver support for DX12.

I intend to SLI the 980 Ti at some point after the Windows 10 / DX12 release, which hopefully means some nice boosts in FPS at 4K.

The Fury X, as lovely as it is, is designed for someone who only ever really wants one card, due to its closed water-cooling loop and the need for case space for its radiator.

DX12 will prove which one is better, in my opinion, and who knows, it may even bring the TITAN X into a new light, making full use of its power.

Happy hunting
 


It's not DX12.1. It's feature level 12_1. There's only DX12, and then there are feature levels, which are hardware implementations of certain rendering techniques. Even though nVidia has feature level 12_1, they do not have 11_2, which AMD has. The weird part is that 11_2 is not exactly defined either, but that's another story.

What I actually mean is that AMD drivers are known to be 'badly' optimized for DX11. AMD has to rely on close-to-the-metal coding in its drivers, but DX11 disallows this, which makes AMD cards run worse than they actually could. That's why, despite their on-paper stronger hardware, they end up equal to or slightly slower than nVidia. In DX12 this limitation will be gone. This is true for all cards from AMD's HD 7xxx series onwards, and they support only feature level 11_1. Still superior to nVidia's 11_0 prior to Maxwell.

nVidia has got you people good with their 12_1 advertising.
 
Kepler and Maxwell v1 were not fully compliant with the DX11.1 specification, but Maxwell v2 does include the new 12_1 feature level. In regards to DX11.2, both AMD and nVidia have partial support for it, but honestly I'm not going to say that nVidia hardware (Kepler and Maxwell v1) is incapable of DX11.2. In fact, MS was using nVidia hardware when they first showcased the features of DX11.2. And back then, AMD made a public statement about why they were late with DX11.2 drivers, because of MS's direction with DX11.2.

As for DX11, I don't think there was anything 'blocking' AMD drivers from performing as they should. If anything, it is AMD that refused to support stuff like DCL (driver command lists) for DX11.
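For anyone unfamiliar with DCL: in D3D11 it's the deferred-context path that lets worker threads record rendering commands for later playback. A minimal sketch of the pattern, assuming a device and immediate context already created elsewhere, with error handling omitted:

```cpp
#include <d3d11.h>

// Worker thread records commands into a deferred context; the main
// thread then replays them on the immediate context. Without native
// driver command-list support, the D3D11 runtime emulates this path
// and most of the multithreading benefit is lost.
void RecordAndSubmit(ID3D11Device* device, ID3D11DeviceContext* immediate)
{
    ID3D11DeviceContext* deferred = nullptr;
    device->CreateDeferredContext(0, &deferred);

    // ... issue draw and state calls on `deferred` here ...

    ID3D11CommandList* commandList = nullptr;
    deferred->FinishCommandList(FALSE, &commandList);

    // Main thread: replay the recorded commands.
    immediate->ExecuteCommandList(commandList, FALSE);

    commandList->Release();
    deferred->Release();
}
```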
 
The SuperNOVA 1000 G2 and the SuperNOVA 850 G2 were exactly the PSUs I was looking at.

I think I'll probably just go with 1000 watts.

What's the difference between the 1000P2 and the 1000G2 from EVGA, and is it worth paying $30 more for the P2?

Only time will tell which card comes out on top with DX12, but I'm going to buy my card after it's released, so I'll truly see which card is a good choice.
 


To put things into perspective... Look at this Wikipedia page almost at the bottom:
https://en.wikipedia.org/wiki/Feature_levels_in_Direct3D

You'll notice that:
GCN 1.0, 1.1 and 1.2 all support resource binding tier 3, while Maxwell 2 supports tier 2. Advantage for AMD.
GCN 1.1 and 1.2 support tiled resources tier 2, Maxwell 2 supports tier 3. Advantage for nVidia.
GCN does not support conservative rasterization, Maxwell 2 does (this is a 12_1 feature). Advantage for nVidia.
GCN does not support rasterizer-ordered views, Maxwell 2 does (the other 12_1 feature). Advantage for nVidia.
GCN 1.0, 1.1 and 1.2 all support stencil reference value from the pixel shader; no nVidia GPUs support this. Advantage for AMD.
GCN 1.0, 1.1 and 1.2 all support UAV slots for all stages tier 3; Kepler, Maxwell and Maxwell 2 support tier 2. Advantage for AMD.

The rest are pretty much the same. In other words, nVidia's 12_1 'advantage' is not really an advantage. They happen to support it while AMD doesn't. AMD supports things that nVidia doesn't. But 12_1 sounds nice because it seems like it's the newest thing.

If you really want to look at it in a very simplified, slightly incorrect but still indicative way: nVidia has feature level 11_0 + 12_1, and AMD has 11_1 + 12_0. Neither of them has ALL the feature levels. Developers will decide which features they want to use. If they decide to use 12_1 features, nVidia will have the advantage. If they decide to use 11_1 features, AMD will have the advantage.
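These aren't just spec-sheet rows, either; a developer can query every one of those caps at runtime through D3D12's CheckFeatureSupport. A minimal sketch, assuming a device already created elsewhere (e.g. via D3D12CreateDevice) and with error handling omitted:

```cpp
#include <d3d12.h>
#include <cstdio>

// Print the caps discussed above for whatever GPU the device sits on.
void PrintCaps(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                &opts, sizeof(opts));

    // Resource binding tier (GCN: tier 3, Maxwell 2: tier 2).
    printf("Resource binding tier: %d\n", (int)opts.ResourceBindingTier);
    // Tiled resources tier (GCN 1.1/1.2: tier 2, Maxwell 2: tier 3).
    printf("Tiled resources tier:  %d\n", (int)opts.TiledResourcesTier);
    // The two 12_1 features: conservative rasterization and ROVs.
    printf("Conservative raster:   %d\n", (int)opts.ConservativeRasterizationTier);
    printf("ROVs supported:        %d\n", (int)opts.ROVsSupported);
    // Stencil reference value from the pixel shader (GCN only).
    printf("PS stencil ref:        %d\n", (int)opts.PSSpecifiedStencilRefSupported);
}
```

So which vendor 'wins' really comes down to which of these individual features a given engine leans on.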
 
Just keep in mind that this is all speculation; we really won't know what happens until later on.

Let's just say that both companies have their advantages and disadvantages, both the 980Ti and Fury X are beasts, and I can't go wrong with either.
 


Two things: firstly, DX12 isn't even here yet, and by the time it is widely used these cards will be pretty much obsolete; and secondly, wiki, really?
 


Not true... DX12 will be widely used quite soon, because it benefits the developers themselves. Basically, a developer can create a game just like they would in DX11 and it would already see benefits, because DX12 reduces API overhead a LOT, which means developers have more CPU time to do what they want. We already know that big players are on board: Witcher 3 will be getting a DX12 patch, as will the next Assassin's Creed and Deus Ex: Mankind Divided, and EA wants all Frostbite games on DX12 by next year. That means all the biggest players are already on it.

As for your second point, there's nothing wrong with Wikipedia, and this has been confirmed by multiple people in the industry. That was just the handiest table I found.
 


Sometimes it doesn't matter what kind of advantage/disadvantage each party has; what matters more is which vendor has more influence over game developers. Just look at what happened in the past with Assassin's Creed and DX10.1: why did Ubisoft decide to revert back to DX10 only? And, as in my example above, why did MS decide to use nVidia hardware to showcase DX11.2 features, and why did AMD say they did not expect MS's decision with DX11.2? To be exact, I was hearing rumors that MS actually put out feature level 12_1 specifically at nVidia's request.

Now that nVidia has the advantage, they will most likely push that advantage. After all, they have the money to make it happen, just like that over-tessellation done in Crysis 2.
 


We will see about that. Most triple-A titles will probably jump to DX12, but the others will probably stick with DX11.

At first glance the announcement of Direct3D 11.3 would appear to be at odds with Microsoft’s development work on Direct3D 12, but in reality there is a lot of sense in this announcement. Direct3D 12 is a low level API – powerful, but difficult to master and very dangerous in the hands of inexperienced programmers. The development model envisioned for Direct3D 12 is that a limited number of code gurus will be the ones writing the engines and renderers that target the new API, while everyone else will build on top of these engines. This works well for the many organizations that are licensing engines such as UE4, or for the smaller number of organizations that can justify having such experienced programmers on staff.

However for these reasons a low level API is not suitable for everyone. High level APIs such as Direct3D 11 do exist for a good reason after all; their abstraction not only hides the quirks of the underlying hardware, but it makes development easier and more accessible as well. For these reasons there is a need to offer both high level and low level APIs. Direct3D 12 will be the low level API, and Direct3D 11 will continue to be developed to offer the same features through a high level API.

http://www.anandtech.com/show/8544/microsoft-details-direct3d-113-12-new-features
 


Which is exactly why (I will openly say this), I despise nVidia. When policy and money reign over what's best for developers and consumers, I will not support it. I'm not going to say that something from AMD is better when in fact nVidia's is better, but if they're equal, I urge people to support AMD. It's not fanboyism. It's having a vision for what's important. We all vote for the world we want every time we buy something. And as of right now, I don't want the world that nVidia seems to be going towards.
 


That's simply business. It's not that AMD is innocent, since they do the same thing as well; it's just that AMD likes to whine a lot when things don't go their way. nVidia got late access to the Dragon Age 2 build, causing them to have performance issues in that game for quite some time. Something similar happened with Tomb Raider. Dirt Showdown? That game simply hates anything nVidia. But did nVidia go to the public and curse AMD for sabotaging their performance? Nope. In some cases (like Tomb Raider) they simply apologized to GeForce users and worked to sort things out, because in the end they understand that's business.

Not that I'm defending nVidia here, because there are also things I don't like about what nVidia does, but the reality is that both companies are just the same. The difference is that nVidia has more money, and their long-established devrel (compared to AMD's) makes it possible for them to have more influence over game developers. To be honest, I want AMD to put more effort into fixing their performance issues (be they general issues or ones caused by nVidia's relationships with certain game developers) and to forge stronger devrel with game devs, instead of running around whining.
 
Suffice it to say, there are too many variables to say who will have the strongest card going forward. We can only go by what we see now. 2016 will be a MUCH better time to make such choices, but by then Nvidia will have HBM2 GPUs via Pascal.

I don't feel DX12 will be as slow to show its worth as some imply, but it could easily be a scenario where things look better on AMD in some games and on Nvidia in others.

That's the way it always is, isn't it? It's up to the devs to pick and choose: endorsements, features, and how the API is used.
 


Simply business... Right. Which is why someone needs to take a stand against shady business practices. And how can you say that AMD does the same when AMD's stuff is open? For example, TressFX is open, while something like HairWorks is completely closed. Mantle was, and Vulkan is, open. FreeSync is open. nVidia refuses to work with a lot of open AMD tech, while nVidia makes it impossible for AMD to work with their tech. What was the last open standard that nVidia came up with? For AMD, I can come up with a huge list off the top of my head. Even if some games are AMD-optimized, there's nothing in there that locks nVidia out from optimizing for them later.

BUT, even if you're right: all these shenanigans started with nVidia's "The Way It's Meant to Be Played" campaign. And also, is there a single game that runs better on AMD hardware despite GameWorks? Because Hitman: Absolution was an AMD Gaming Evolved title, and nVidia hardware still performed better on it. This indicates that they're not deliberately locking nVidia out. I don't know if we can say the same for the other side. When you start to tessellate invisible stuff, it sure as hell seems like deliberately trying to damage the competition.

Give me one example as bad as nVidia's tessellation of the invisible water under the city, crippling performance on AMD hardware. If you can, we'll call it even.

I'll also leave this here. Obviously this is AMD's side, but it makes you think:
https://youtu.be/fZGV5z8YFM8?t=30m14s

Watch for 15 minutes after 30:14 if the whole thing is too long. And notice how, regarding Khronos, they actually went and did exactly what they said they would. So in that sense, it all sounds like the truth to me...

For nVidia's side, here they get right into the allegations made by AMD:
https://www.youtube.com/watch?v=aG2kIUerD4c

You be the judge.
 



The world isn't fair, man.

Companies do some stupid shit and it's nearly impossible to change that.

I wish nVidia would stop doing this, but you just gotta learn to accept it.
 
Sure. I can accept that they don't get my money ^_^ And people need to start caring about this stuff. Otherwise it won't be long before we find ourselves in a world like Mirror's Edge.

But I've said what I needed to say. I will keep supporting only AMD as long as nVidia is doing this sh1t. Carry on people :)