Nvidia 980Ti vs AMD R9 Fury X?


teflon66

Feb 8, 2015
Which one would be better for 1440p?
Is AMD's driver support bad?
Overall, what would be the best GPU to get and have the least problems?
(I know the Fury X hasn't been released yet, but its specs and some benchmarks have)
 


Exactly. Haha.
 


I still don't get why people paint Mantle in the same light. Mantle was always supposed to turn into an open standard. It would have been up to nVidia to optimize for it, but they refused. The most telling thing: nVidia stated they don't need Mantle because low-level APIs are unnecessary, then literally minutes later praised DX12 because it's a low-level API. No joke. That says enough. But in either case, Mantle is now Vulkan.

As for GameWorks being a reaction to Mantle: that's BS. TWIMTBP was doing shady things before anyone had even thought of Mantle. There were times when .exe file names needed to be changed in order to get AA on ATi cards, because nVidia's TWIMTBP program blocked it specifically when it detected an ATi card.

And another example: in Windows 7 you could freely use a secondary PhysX card alongside a primary AMD card to get the PhysX effects. nVidia actively blocked this...

AMD's record is surprisingly clean compared to nVidia's... The fact that people still bend over for nVidia despite the mountain of evidence of their shady BS is really appalling.
 
Mantle is dead. Vulkan, I don't think so. I haven't really seen much news about support for it, actually, but considering it will also be used on Linux, developing for Vulkan makes it easier for developers to make games for multiple PC platforms. And Steam seems to be jumping on the Vulkan bandwagon too. So we'll have to wait and see. I think only Mac OS doesn't/won't support Vulkan.
 


Ahh, I see. Makes sense.
 


Actually, it was W7 and XP that could run a secondary Nvidia card for PhysX alongside an AMD card, but you forgot to mention that it was a certain R. Huddy of AMD who said AMD users didn't want or need PhysX, as AMD's own "Bullet Physics" was going to be way better anyway. That was why Nvidia cut them off; they were asked to.
 
Funny how XFX are advertising both Mantle AND Vulkan as feature sets on their GPUs.

"Vulkan cross platform graphics

The next generation graphics API from Khronos.

Vulkan is the new generation, open standard API for high-efficiency access to graphics and compute on modern GPUs. This ground-up design, previously referred to as the Next Generation OpenGL Initiative, provides applications direct control over GPU acceleration for maximized performance and predictability.



Mantle & GCN Architecture

Powered by Mantle technology and AMD TrueAudio technology.

Discover the direct-to-the-GPU performance advantage of AMD’s revolutionary Mantle API and Graphics Core Next (GCN) architecture, enabling stunning detail and dynamic gaming, a richer and more immersive VR experience, with higher graphics performance and low power consumption."


http://xfxforce.com/en-us/products/amd-radeon-r9-300-series/amd-radeon-r9-390-double-dissipation-r9-390p-8df6
 


Meh, it's words to write on a box I suppose.
 


Really? OK. Interesting. Well, just FYI, Richard Huddy has worked at AMD since (I think) June of 2014, so he's barely been there a year. He previously worked at both Intel and nVidia as well 😉

http://www.pcper.com/news/General-Tech/PCPer-Live-Interview-AMDs-Richard-Huddy-June-17th-4pm-ET-1pm-PT

And I really doubt nVidia would do anything because of a statement AMD made. Whatever nVidia does is for its own selfish reasons. That's their track record.
 
Because that's how it is. AMD got flak for not opening Mantle itself as open source (separate from Vulkan), because Richard Huddy was the one saying Mantle is an open-source API while AMD themselves had never said so. Then he also got it wrong once in regards to the TressFX licensing terms. What else? Oh, he kept saying that G-Sync has latency but never came up with actual data to prove it. Best of all? "AMD is the future of gaming" because the 285 is faster than the 760 (while nVidia had already come out with the 900 series).
 
Mantle was in a closed beta. It was planned to become open source in Q4 of 2014. But when it was given to Khronos, their plans changed.

In any case, here's a nice article for y'all.

http://wccftech.com/exclusive-nvidias-amds-perspectives-gameworks-bottom-issue/
 
The AMD Fury X is the better graphics card overall... hands down! The high-bandwidth HBM (4GB on a 4,096-bit bus) helps a lot at 1440p and greater resolutions...
AMD strengths (rough numbers sanity-checked in the sketch below):
Much wider memory bus: 4,096-bit vs 384-bit (around 10.7x wider)
Significantly better floating-point performance: 8,602 GFLOPS vs 5,632 GFLOPS (around 53% better)
Higher clock speed: 1,050 MHz vs 1,000 MHz (5% higher)
More shading units: 4,096 vs 2,816 (1,280 more)
More texture mapping units: 256 vs 176 (80 more)
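For anyone curious where those throughput numbers come from, here's a minimal back-of-the-envelope sketch (Python; it assumes the usual 2 FLOPs per shader per clock for FP32 FMA and uses only the shader counts, clocks, and bus widths listed above):

```python
# Rough theoretical FP32 throughput: shaders * 2 FLOPs per clock (FMA) * core clock
def peak_gflops(shaders, clock_mhz):
    return shaders * 2 * clock_mhz / 1000.0  # GFLOPS

fury_x = peak_gflops(4096, 1050)      # ~8,602 GFLOPS
gtx_980_ti = peak_gflops(2816, 1000)  # ~5,632 GFLOPS

print(f"Fury X : {fury_x:,.0f} GFLOPS")
print(f"980 Ti : {gtx_980_ti:,.0f} GFLOPS")
print(f"FP32 advantage  : {fury_x / gtx_980_ti - 1:.0%}")  # ~53%
print(f"Memory bus ratio: {4096 / 384:.1f}x")              # ~10.7x
```

Theoretical GFLOPS obviously don't translate directly into frame rates, but the math does line up with the numbers on the spec sheet.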

 



Kinda begs the question if there's a whole new definition of "best" if you have to spend over $1000 just to be assured 80% ASIC quality, which isn't even always a determining factor of OCability. And even if it always were, $850 for a mere 72%, seriously?

I'm really not surprised EVGA would do such a thing though. I also don't take world records seriously, because they're usually about as skewed as shish kabobed pork. They usually go by clocks only and get all celebratory if they so much as beat another card by a few MHz, without even checking if it translates to better performance in game.

http://wccftech.com/evga-geforce-gtx-980-ti-kingpin-unleashed-pre-binned-acx-gm200/
 


Oh yeah, I agree, but the Kingpins and the Classifieds have always been pretty stout overclockers, WR nonsense aside, going by real-world performance... which is what we all want anyway. The G1 is still a great card though; I wasn't knocking it.
 


Oh, I'm not implying you were, or that the Kingpins can't OC pretty well. I'm just saying the pricing and the methodology for determining the better product are ridiculous and not always accurate, especially considering a mere 72% ASIC is mediocre at best, and ASIC quality isn't even always a guarantee of better OC performance.

For that kind of pricing they should be tested on a set platform and gaming bench to show that X bracket of card actually outperforms the lower price bracket. But the reason they don't do that of course is not just cost of testing, but the fact that ASIC numbers don't guarantee better performance like I said.

There's also the fact that you can't just go by clocks alone. Higher clocks alone don't necessarily mean a better OC unless it translates in game as better performance. There have been numerous cases where X brand clocks higher than another, but is beaten in game. At that point you're just running a hotter card that's taking more wear and tear.

I still say Giga is the clear winner in raw bang-for-buck power among 980 Tis (and other models). If it weren't for the fact that 980s are limited to 4GB, two of those at a $1000+ price point would easily be better than a $1,050 80% ASIC Kingpin, and you don't necessarily always NEED 6GB at 4K anyway. More often than not, you don't.

There have been tests done where even two 970s will beat the 6GB Titan and 980 Ti in most games at 4K. Something to think about when the "best" moniker gets tossed around.

 



Oh yeah, no doubt, but I would say you stand a pretty good chance of getting the Kingpins to overclock pretty high, even the lower-ASIC-score cards they are selling. ASIC isn't everything at all, and I don't know why they are selling them as if it matters, because their stance before has been that ASIC is not a leading indicator that the card will overclock to x amount or whatever.

If they are going to sell them based on ASIC scores like that, I would want more assurance than just that score, especially at $1000. They should just go ahead and up the base clocks much higher than they do now, based on the ASIC scores.

I agree on the bang for the buck; the Giga is the winner. Most of them do overclock pretty well, but they have their own problems too with not being able to overclock, no matter the ASIC score, though that goes for all of the cards, no matter the partner. I think people put too much stock in that score, because it's not like... okay, I have a 72% ASIC, so my overclock will be 1200 base max with 1450 boost... there's just no way to tell what it will end up overclocking to.
 


Actually they should up the clocks based on testing and stability, not ASIC alone. And even then, the pricing should be WAY better than that. It just feels like heavy doses of hyped up expensive snake oil to me.

No thanks EVGA.

I bet Giga are just rolling their eyes about now at this.