Nvidia 980Ti vs AMD R9 Fury X?


teflon66

Which one would be better for 1440p?
Is AMD's driver support bad?
Overall, what would be the best GPU to get and have the least problems?
(I know the Fury X hasn't been released yet, but its specs and some benchmarks have)
 
Mantle is open? Seriously? Vulkan is open because it was handled directly by Khronos, but Mantle was never open. Let me ask you this: late last year AMD was bragging that almost 100 developers had signed up for the Mantle program, so why did AMD suddenly move Mantle to internal development only earlier this year, instead of keeping their promise to open Mantle as a true open-source API? And that's despite Richard Huddy insisting before that Mantle would continue to exist alongside current APIs and that AMD was planning to roll out Mantle 2.0 to the public. Why the sudden change? To be clear, it's not because of Vulkan and DX12, because according to Huddy, Mantle 2.0 was supposed to offer something far more advanced than what Vulkan and DX12 are capable of. Also, both Intel and Nvidia had been asking for the Mantle spec since the very beginning. AMD gave the 'beta' excuse, yet had no problem giving that 'beta' to game developers and even using it in actual commercial games, making AMD the only vendor on the market capable of running Mantle. Intel eventually gave up on Mantle because they had access to DX12 and were able to make a working driver for it while still waiting for AMD to open the Mantle spec. AMD even tried to use Intel's interest in Mantle as a marketing point to push Mantle towards developers; that's why Intel released a quick statement about it later, because they knew what AMD's intentions were.

And also, is there a single game that runs better on AMD hardware despite GameWorks? Because Hitman: Absolution was an AMD Gaming Evolved title, and still nVidia hardware performed better on it.

Far Cry 4 runs better on AMD hardware, and that is a GameWorks title, even. Heck, it's the one game AMD used to show that their Fury X is faster than the GTX 980 Ti during the Fury X's first reveal. Do you still say that Nvidia deliberately tries to cripple AMD?

Give me one example as bad as nVidia's tessellation of invisible water under the city, crippling performance on AMD hardware. If you can, we'll call it even.

http://techreport.com/review/23150/amd-radeon-hd-7970-ghz-edition/6



 
You didn't listen to the interview, did you? It was clearly stated that they were planning on giving Mantle to Khronos. And that's exactly what happened. Vulkan IS pretty much Mantle 2.0. That action speaks for itself. It's also in the name... Mantle as in the earth's mantle, and Vulkan refers to volcanoes. They gave it to Khronos because nVidia didn't want to support it, and they knew it would die without them. And if you listen to the nVidia interview, first they say Mantle has no benefits because lower-level APIs are not that great, and then a few minutes later they praise DX12. That alone tells me enough about nVidia. They have been known to lie to your face; the GTX 970 is the most popular example. And yet people still lick their balls.

As for Intel vs Mantle, read this:
In a separate email, the Intel spokesman said that it had been working with the Khronos Group and with Microsoft to ensure that future APIs target "a wide range of graphics hardware".
http://www.pcworld.com/article/2365909/intel-approached-amd-about-access-to-mantle.html

What does that tell you?

As for Far Cry 4... This says enough:
http://www.hardocp.com/article/2015/06/24/amd_radeon_r9_fury_x_video_card_review/7#.VZAN1xtViko

4K is a bad example, because the Fury X is faster than the 980 Ti roughly 90% of the time at 4K. The reason is simple: the higher the resolution, the more the graphics card itself is put under load, and CPU and driver limits matter less.
Again, when DX12 rolls around, I can give you a 90% guarantee that AMD cards will suddenly get a boost in performance while nVidia's won't as much. Their raw power will finally be used like it's supposed to be.
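A rough sketch of that GPU-bound vs CPU-bound point (the per-frame costs below are made-up illustrative numbers, not benchmarks): frame time is roughly the slower of the CPU/driver side and the GPU side, and only the GPU side grows with pixel count, so at 4K the card itself tends to become the limiter.

```python
# Rough, hypothetical frame-time model: frame time ~= max(CPU time, GPU time).
# CPU/driver work per frame is roughly constant; GPU work scales with pixels.
# All costs below are invented for illustration only.
RESOLUTIONS = {"1080p": 1920 * 1080, "1440p": 2560 * 1440, "4K": 3840 * 2160}
CPU_MS = 8.0                 # assumed per-frame CPU + driver cost
GPU_MS_PER_MPIXEL = 1.6      # assumed GPU cost per million pixels

for name, pixels in RESOLUTIONS.items():
    gpu_ms = GPU_MS_PER_MPIXEL * pixels / 1e6
    frame_ms = max(CPU_MS, gpu_ms)  # the slower side sets the frame time
    bound = "GPU-bound" if gpu_ms > CPU_MS else "CPU/driver-bound"
    print(f"{name}: GPU {gpu_ms:.1f} ms -> frame {frame_ms:.1f} ms ({bound})")
```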

As for your Tech Report link... They say this: "We've seen lots of similar scenarios in the past where Nvidia took the initiative and reaped the benefits. Perhaps this is karmic payback." That says enough. And even then, it's not as bad as the tessellation under a city. Dirt Showdown was the first game to use global illumination, and it actually had visual benefits. Tessellating invisible water under a city has no visual benefit. And even then, nVidia has access to the global illumination source code. It's only logical that something used for the first time in a game will perform badly on the other vendor's hardware. But when something is used repeatedly and it STILL hurts the competitor's performance more than it should, that's a sign of maliciousness. It's good that AMD's drivers have so many override settings. If it weren't for that, performance in any GameWorks game would be abysmal, since nVidia ties those effects to other settings within the game, so you can't change the nVidia-specific settings alone. Also, AMD will never get access to the HairWorks source code, while nVidia does have access to the TressFX source code.

[image: tfx_tr_perf.png — TressFX performance chart]


 
Your best bet is to go with a 980 Ti from, say, MSI. You shouldn't have any trouble, and you'll be able to overclock it to get an even bigger edge over, say, the Titan X and Fury X. At 1440p you should be able to max out games and get 60+ fps.
 


I'll probably get the EVGA Hybrid 980 Ti, as it gets hot where I live and I want to keep my PC temps as low as possible.
 
You didn't listen to the interview did you? It was clearly stated that they were planning on giving Mantle to Khronos.

What they did was give Khronos a peek at Mantle, and from there Khronos shaped Vulkan so it could be used by other vendors. BUT they did not give Mantle outright to Khronos, because initially AMD wanted to keep Mantle as an advantage for themselves. If they had, Mantle would not have been moved into internal-only development at AMD. Hence my question: after bragging about how many game developers were participating in Mantle, why did they suddenly halt their initial plan for it? AMD is even asking developers to look at Vulkan and DX12 instead. Why is that?

They gave it to Khronos because nVidia didn't want to support it

There is no such thing. Yes, they did give the spec to Khronos, but not because of Nvidia. If anything, AMD did not want to share Mantle with anyone. The signs are clear: both Nvidia and Intel had been asking since the very beginning (and Intel was quite persistent about it), but AMD gave the 'beta' excuse while at the same time having no problem giving access to game developers. The one hoping for Mantle to be open was not AMD but Johan Andersson of DICE.

As for your tech report link... They say this: "We've seen lots of similar scenarios in the past where Nvidia took the initiative and reaped the benefits. Perhaps this is karmic payback." Says enough.

Then you should understand that, given the chance, any company will implement features that give an advantage to their own hardware. Instead of whining (like AMD likes to do), Nvidia kept improving their driver for Dirt Showdown. The performance improvements for Dirt Showdown have been highlighted in Nvidia driver releases several times, showing they put real effort into optimizing their performance. There is no good guy and bad guy between the two; it's just that AMD likes to paint themselves as the good guy and the victim of Intel and Nvidia. AMD would probably be better off putting that money towards marketing and R&D instead of wasting it trying to gain sympathy, because so far that effort hasn't helped them gain market share at all. At best it entertains the fanboys, nothing more.

 


I wouldn't say at this point that one is better than the other, both being brand new, but non-reference GTX 980 Tis do have slightly better performance. If you want to save your CPU, Nvidia is better for PhysX than its AMD counterparts. Besides, only running at 1440p is a massive bottleneck for these graphics cards, unless you plan to upgrade later.
 



You know what? It's not bad drivers, it's shitty game developers. Why the hell should AMD and NVidia need to tune their drivers every time a game comes out? The driver worked for every other game before it, so why not this one? The game studios who are too damn lazy to make their games work with the hardware and existing drivers are the real issue; they expect the GPU makers to put a crap ton of effort into making their cards run the garbage code they created. I wish NVidia and AMD hadn't bothered releasing a Batman: Arkham Knight driver; they should have just agreed to say that the game was shit and no driver could fix it. It's time the damn developers learned to code and took the heat for it, not the GPU makers.
 



1440p and 144 Hz can take more power than 4K @ 60 Hz. Less VRAM but more crunching power.
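Back-of-the-envelope numbers (assuming every frame is actually rendered at the panel's refresh rate, with no dropped frames or dynamic resolution):

```python
# Raw pixel throughput per display mode: width * height * refresh rate.
modes = {
    "1440p @ 144 Hz": (2560, 1440, 144),
    "4K @ 60 Hz": (3840, 2160, 60),
}
for name, (w, h, hz) in modes.items():
    print(f"{name}: {w * h * hz / 1e6:.0f} Mpixels/s")
# 1440p @ 144 Hz -> ~531 Mpixels/s, 4K @ 60 Hz -> ~498 Mpixels/s:
# more raw shading work per second at 1440p/144, while 4K needs more VRAM
# for its larger render targets and assets.
```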
 


You can't expect anything until it gets tested. This is just like all the guys saying DX12 will make it 50% faster than the 980 Ti. It could, but until it does, that's silly speculation. Remember when Bulldozer came out and wasn't that good, but Windows 8 was going to make it 50% better than Sandy Bridge? How'd that work out?
 
I'm going to get an SSD for my OS and other vital programs, and a Hybrid for everything else. Someone told me that Hybrids offer some SSD benefits, like making games load faster, so they're better than standard HDDs, while being the same price.

Also, I don't think 1440p 144hz is a bottleneck for the 980Ti. Like razamatraz said, 1440p 144hz can take more power than 4K.
I may possibly upgrade to 4K, but not anytime soon.
 
After reading the review for the Gigabyte GTX 980 Ti G1 Gaming 6GB, it's clear to me it is the best single GPU card going. In some games it nearly doubles performance of one 970 at 4K. On average it's about 15% faster than a ref 980 Ti at 4K. Plus it's a cool running tri fan HSF, yet the card is only 11.6" long.

Gigabyte does it again. Well worth the $40 upcharge ($690).

http://www.techpowerup.com/reviews/Gigabyte/GTX_980_Ti_G1_Gaming/

 


This is an interesting point to discuss, because in the end, one way or another, both companies have no choice but to do it. The reason is simple: once developers are done with one specific game, they move on to another project. For example, imagine when Nvidia finally comes out with Pascal. By that time Rocksteady will most likely have already moved on to a new project. Will they care to make Arkham Knight work as intended on Pascal-based cards? Like it or not, Nvidia has to do that work, because Rocksteady is not going to release a specific patch to make AK work with Pascal-based cards. The same goes for Nvidia's upcoming Volta. There is also the possibility that the developer of the game in question no longer exists (disbanded by the publisher, or simply out of business like the Kingdoms of Amalur dev).

There is also the competitive landscape between AMD and Nvidia to consider. Providing launch drivers for triple-A games has become a marketing point for both companies, especially Nvidia. Sometimes it doesn't even matter whether the driver really works or not; releasing drivers in this manner tells the customer why they should care about your product. So in the end, releasing these game-specific drivers isn't only about covering for game developers, it's for the GPU maker's own benefit as well.
 
I haven't had too many problems with my 7970 driver-wise, and I agree that some PC ports being extremely poorly done certainly doesn't help. Coding on PC titles can be crap. AMD are very slow to catch up with Nvidia on things like CPU optimization and frame pacing in their drivers, though.

That said, since this thread is specifically about 980 Ti vs Fury X, I'll address that. I've seen FAR too many people excuse Fury X's less than expected performance on drivers needing maturation time. Look at how well the 980 Ti performed at launch.

Some have suggested it's new architecture growing pains, or HBM being used for the first time. It shouldn't be a surprise by now though that AMD's processor architecture itself can be inefficient at lower loads, be it threads or res.

Look at FX compared to i5 or i7. It's the same thing with Fury X, except in res vs threads. AMD seem to learn by expensive trial and error, which is a BIG part of their financial demise. Worse yet, it's a vicious cycle: once you've made too many mistakes, you can't afford as big or as talented a driver team.
 
Ahh, I love it when these young guns come out spouting that card X will beat card Y when this DX version comes out (research the HD 4800s vs GTX 2xx / DX10 vs DX10.1/11).

Based on actual experience, any first-gen card of a DX generation will utterly suck, but it will be heralded nonetheless since those are the only cards available (see DX9: 9700 Pro/5800 Ultra; DX10: 8800 Ultra/2800).


And as for the drivers... give up on that utter BS. The 7970 has had numerous incarnations over the past three years, and if drivers really made magic, that 7970/280X would be at the top of the charts by now and a 290X would be utterly ridiculous in terms of performance.