AMD Radeon R9 Fury Review: Sapphire Tri-X Overclocked


FormatC

Distinguished
Apr 4, 2011
981
1
18,990


At maxed-out settings the Fury looks no worse than the vanilla 980, and mostly better. The limited memory might be a bottleneck, but only in a few more or less special cases (game mods, hi-res texture packs, etc.). Take a look at my fresh UHD benches in the power consumption section. On slide 3 you can see the performance results for both cards in these benchmarks (hand-picked especially for my measurements), but the trend is always visible.

It really makes sense to compare this factory-overclocked Fury from Sapphire with a factory-overclocked GTX 980, to keep price point and performance in an objective balance. I have a lot of 980s in my archive with up to 17% more performance than the temperature-limited vanilla 980. In that comparison the Fury doesn't look so sexy, because it is simply too expensive.
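A minimal back-of-the-envelope sketch of that argument in Python. The 17% custom-980 uplift is the figure from this post; the Fury's margin over the reference 980 and both prices are placeholders for illustration only, not measured values:

```python
# Relative performance indexed to a reference GTX 980 = 100.
# The 17% uplift is from the post above; the Fury margin and the
# prices are PLACEHOLDERS for illustration, not measured values.
cards = {
    "GTX 980 (reference)":  (100, 500),
    "GTX 980 (custom OC)":  (117, 520),   # up to 17% faster per the post above
    "R9 Fury Tri-X (OC)":   (108, 570),   # placeholder margin and price
}

for name, (perf, price) in cards.items():
    print(f"{name:22s}  perf {perf:3d}  price ${price}  perf/$ {perf / price:.3f}")
```

On numbers like these, the custom 980 ends up ahead on performance per dollar, which is the point being made here.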

If other AIBs also launch their own Fury cards - if - we can better decide what the best bang for the buck is. But I know from a few AIBs that they will not launch any Fiji Pro as a custom Fury card. I know the reasons too, but they are not for the public. After the official launch we will see a handful of models, not more. And I don't know if I'll get a personal sample. Maybe I'll have to buy one, again with my own money. That's not really media-friendly, but it's the reality :)
 

Gurg

Distinguished
Mar 13, 2013
515
61
19,070


Igor's review of the MSI 390X vs. the MSI 980 with extreme settings and factory overclocks was perfect. I personally would never buy a plain reference card and want to see aftermarket-cooler cards at their factory overclock settings, as that is what I will evaluate for purchase. The only time I ever want to see reference card results after an initial launch is to see how much improvement the aftermarket brands have achieved.

The problem is with the Fury reviews! What I would like to see is Igor's gaming charts updated to include the Fury and Fury X run at extreme/ultra/max settings with the factory overclock enabled, at the three resolutions 1080p, 1440p and 2160p.

 

Dark Falz

Reputable
Mar 8, 2015
97
0
4,660
An overclocked 980 (~15%) is about equal to an overclocked R9 Fury, has been out for a year, and is $50 cheaper. It also chews less power. I guess it's nice to have competition, though.
 

FormatC

Distinguished
Apr 4, 2011
981
1
18,990


I'm currently testing seven custom GeForce GTX 980 Ti cards here in Germany - nice individual reviews with a separate, roundup-style landing and results page after the third published review (with continuous updates). I will also compare these later with the Fury X. And if other Fury cards (Fiji Pro) enter the market, I will do the same with the Fury against the GTX 980s. I have a lot of these cards in an older but large roundup with 9x GTX 980 and 6x GTX 970. It is all in German only, but the charts are self-explanatory even without Google Translate. ;)

I'm trying all my contacts in Asia to catch a living Fury Pro for my test bench and archive. But Gigabyte is not the only one that will not launch a custom Fiji Pro, due to the low profit margin (and other reasons).

BTW:
I know it is really off-topic, but together with Gigabyte's R&D guys I've exclusively tested a new silent BIOS for their GTX 980 Ti Gaming G1, based on my suggestions and fan curves. Lower power consumption, similar performance, and finally up to 4 dB(A) less noise. The temperatures are also similar. If someone needs it, check the German sister site on Monday. This BIOS has now been approved by NV and is ready for mass production :D
 

cmi86

Distinguished
I would love to see AMD drop the Fury X down to 980 prices and the Fury down to 970 prices and smash Nvidia to pieces in the realistic enthusiast segment. Let Nvidia have the ultra-high-end/bragging-rights snob crown; I won't ever be buying a 980 Ti/Titan X/Fury X (at current prices) and neither will 99.9999999% of the other gamers on this planet. Price the cards where they can dominate, not just compete, and increase revenue through increased sales volume. With more cards in the market, devs start to favor that architecture, the same card gets even faster, and more people buy in. It is the exact same thing Nvidia has been doing since the GTX 4xx days. AMD needs to start doing this instead of trying to cash in on ultra-high-end users and leaving average enthusiasts stuck with slower hardware. Drop the fast stuff down and let green have the snob crown; who cares.
 

cmi86

Distinguished
It is funny that when Metro LL dropped, it was SO heavily optimized for NV that most AMD users couldn't even play the game until a month or so later, when a patch was released turning off all the NV features that seemed built in to cripple AMD cards on purpose. Now here we are some years later, and AMD cards are actually faster in Metro LL. It's an old game, so it doesn't really matter, but it goes to show that we should take all reviews with a grain of salt. All it takes is a little monetary motivation for a developer to flip a few switches inside the game, and that is the difference between a good and a bad review.
 

Gillerer

Distinguished
Sep 23, 2013
360
81
18,940
Slapping a standard tri-cooler onto this short board is effing dumb.

No, on the contrary, it's very clever. With the cooling fins overhanging the PCB, the third fan can blow air straight through the heatsink and up towards the top of the case without being obstructed by the board. I'd bet it increases the cooling capacity and does its part in keeping noise down.
 

rvalencia

Distinguished
Nov 11, 2011
5
0
18,510
I hope you can add the 15.7 driver results.

15.7 is a feature/WHQL driver, not a performance driver. The performance percentages that AMD gives on its website are "since Omega".

Nvidia Titan X reducing image quality (cheating)...

The root of the problem is the image quality setting in the Nvidia control panel. Check the difference (and proof) here:

http://hardforum.com/showpost.php?p=1041709168&postcount=84

Screenshots taken from the game

I'd say that is a pretty big difference. Where did the AF go in the first screenshot?

Plus, there seems to be around a 10% performance drop after setting the quality to highest.

Somehow, I think if the situation were reversed and AMD's default produced worse image quality, these same sites would jump all over it and sensationalize it to the max! The double standards are blatant.

http://hardforum.com/showpost.php?p=1041709226&postcount=100

On AMD, default = higher quality, but it hurts performance.

On NV, default = worse quality, but it runs faster (10% according to BF4 tests).

Do a detailed analysis in recent games: image quality comparisons on default driver settings (since that's what you test on), same settings in-game, same location. Then do it again with quality forced in CC/NVCP.

You will get a ton of site hits from everyone interested in this issue. If it turns out to be true, then basically NV has "optimized" their way to extra performance for years at the expense of IQ.

 

alextheblue

Distinguished
Considering the temperatures being reached with even this large of a cooler, I completely disagree with you there.
That's with the fans running silently in real-world use. Even at the default slow automatic settings it still doesn't exceed ~80-81°C, much cooler than many Hawaii cards and very close to the competition. I like the cooler, but it is a bit overbuilt until you have the means to boost voltage - then you could adjust the fan speeds a bit and actually make better use of it.

Then again it's only a $20 premium, and as someone else mentioned most decent gaming cases (even relatively small ones) have made provisions for cards of this length.
I had a 17-inch 1600x1200 monitor. I can tell you right now that I never once used it at 1600x1200 outside of playing 720p video; that monitor was always at 1024x768... But here is something you are failing to remember: CRTs did not look like hell outside of their native resolution like LCDs do...
Yeah, that alone kept me from upgrading to LCDs for some time. Drop the resolution on a CRT and it still looks native. A good CRT looked perfect running different "recommended" resolution/refresh combinations. Those were the days. Well, minus the size and weight, haha...
 

Dodecahedron

Distinguished
Nov 17, 2014
3
2
18,510
Nvidia Titan X reducing image quality (cheating)...

The root of the problem is the image quality setting in the Nvidia control panel. Check the difference (and proof) here:

http://hardforum.com/showpost.php?p=1041709168&postcount=84

It appears that this was just a mistake in the settings, and other users could not reproduce the problem; see http://hardforum.com/showpost.php?p=1041717506&postcount=252.

Somehow, I think if the situation were reversed and AMD's default produced worse image quality, these same sites would jump all over it and sensationalize it to the max! The double standards are blatant.

AMD has had image quality problems in its texture filter for a long time. In the Radeon 5000 series there were some obvious banding problems. They were supposed to have been fixed in the 6000 series, but then users noticed texture shimmering, i.e., aliasing. The first GCN products were supposed to address that problem, too. In reality, I still observe both problems on a Radeon R9 280X based on the first generation GCN chip Tahiti. The banding problem is not very noticeable in practice (except for artificial test cases), but shimmering is a real issue in some games. The only way to deal with the shimmering is supersampling but it kills performance. On Nvidia cards (based on Fermi and Kepler) I have not seen banding problems at all and shimmering is also reduced compared to the Radeon.

Far from sensationalizing, the issues have been almost completely ignored. The only places where I have found any information about the problems are discussion forums, some of them in German.

Do a detailed analysis in recent games: image quality comparisons on default driver settings (since that's what you test on), same settings in-game, same location. Then do it again with quality forced in CC/NVCP.

You will get a ton of site hits from everyone interested in this issue. If it turns out to be true, then basically NV has "optimized" their way to extra performance for years at the expense of IQ.

This is the kind of test I would actually like to see. I have no idea what the results would be with the most modern hardware (such as AMD Fiji and Nvidia GM200), but very likely some differences could still be found.
 
In almost every test, Sapphire’s R9 Fury Tri-X outperformed Nvidia’s GeForce GTX 980.

Correction:

"In almost every test, Sapphire’s overclocked R9 Fury Tri-X outperformed Nvidia’s reference GeForce GTX 980."

Again, why no apples-to-apples comparisons? Since most users have Afterburner open right after installing drivers, wouldn't such comparisons be relevant? Aren't the single-digit % overclocked fps results on the Fury X / Fury relevant compared to the 31% we see on the 980 Ti or the 23% on the 980?
 

somebodyspecial

Honorable
Sep 20, 2012
1,459
0
11,310
 
Listing an entire lengthy quote before responding makes it extremely difficult to read, especially when the response gets lumped into the quotes.

As the SC is basically a reference GPU with a nice cooler, it hardly represents the capabilities of the non-reference 980s.
 

Scibbo

Honorable
Nov 6, 2013
173
0
10,710
I am very interested and excited to see where this new GPU technology leads for AMD's next generation of GPUs.

With 1440p becoming the accepted norm, 4K out, and 8K around the corner, it is a very interesting time to see how AMD's and Nvidia's technology will adapt and transform over the next 18-24 months.

At 1440p and beyond, AA is beginning to become less useful due to monitor pixel densities. Unless you have 20/20 vision, I believe AA is now essentially pointless.
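For what it's worth, the pixel-density side of that claim is easy to put numbers on. A minimal sketch, where the 27-inch and 32-inch diagonals are illustrative assumptions rather than figures from this thread:

```python
import math

# Pixels per inch for a few resolution / diagonal combinations.
# The diagonals are illustrative assumptions, not figures from this thread.
def ppi(width_px, height_px, diagonal_in):
    return math.hypot(width_px, height_px) / diagonal_in

for label, w, h, diag in [
    ("1440p @ 27\"", 2560, 1440, 27),
    ("4K    @ 27\"", 3840, 2160, 27),
    ("8K    @ 32\"", 7680, 4320, 32),
]:
    print(f"{label}: {ppi(w, h, diag):.0f} PPI")
# Roughly 109, 163 and 275 PPI respectively: the higher the density,
# the less visible aliasing is at a normal viewing distance.
```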

It is going to be very interesting to see how new technology will adapt to 4K 144Hz and 8K 60/120/144Hz in the not-too-distant future.
 
4K @ 144 Hz is not currently possible with current cable technology. And I don't think we'll see cards able to drive 4K at > 60 fps in most games until Christmas 2016. 4K represents just 0.06% of the market (6 in 10,000) currently, so I don't see it being a breadwinner for either side in what I would call the "not-too-distant future".

With the Acer Predator (144 Hz / G-Sync / ULMB) removing all barriers to using IPS for gaming, 1440p is where it's at. The FreeSync version is TN, and FreeSync is currently broken on most monitors, requiring a factory firmware update; I'm anxious to see how that one looks, though without ULMB, gaming quality above 60 fps will suffer. The Asus MG279Q (where FreeSync works) looked promising, but again, without ULMB, performance above 60 fps suffers, and the response time and lag were more than I expected.

The new 3440 x 1440 FreeSync Predator is out, but again there's no ULMB and no more than 75 Hz. Again, this will be curtailed until DisplayPort 1.3 arrives. The TFT Central review says:

http://www.tftcentral.co.uk/reviews/acer_xr341ck.htm

All in all, it's likely to be a fair amount of time before we see a 144Hz capable 3440 x 1440 screen sadly.

So, if 144 Hz at 3440 x 1440 is a "fair amount of time" away, suffice it to say 3840 x 2160 at 144 Hz is even further off.
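The cable-bandwidth point is easy to sanity-check with rough arithmetic. A minimal sketch, assuming 24-bit colour, ignoring blanking intervals, and using the commonly cited effective four-lane rates of roughly 17.28 Gbit/s for DisplayPort 1.2 and 25.92 Gbit/s for DisplayPort 1.3:

```python
# Rough uncompressed video bandwidth vs. DisplayPort link budgets.
# Blanking intervals are ignored, so real-world requirements are somewhat higher.
def gbit_per_s(width, height, refresh_hz, bits_per_pixel=24):
    return width * height * refresh_hz * bits_per_pixel / 1e9

modes = [
    ("2560x1440 @ 144 Hz", 2560, 1440, 144),
    ("3840x2160 @  60 Hz", 3840, 2160, 60),
    ("3840x2160 @ 144 Hz", 3840, 2160, 144),
]

# Commonly cited effective four-lane data rates after 8b/10b encoding.
links = {"DP 1.2": 17.28, "DP 1.3": 25.92}

for label, w, h, hz in modes:
    need = gbit_per_s(w, h, hz)
    fits = [name for name, cap in links.items() if need <= cap]
    print(f"{label}: ~{need:.1f} Gbit/s, fits "
          f"{', '.join(fits) if fits else 'neither without compression or chroma subsampling'}")
```

Even on those generous assumptions, 4K at 144 Hz needs roughly 28.7 Gbit/s of pixel data and overshoots both links, which is consistent with it being "even further off".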

Starting with the 2016 Christmas generation of cards, I think we'll start to see HBM making a significant impact. Now that FreeSync is fixed, I am anxious to see what AMD comes up with in the ULMB department, as FreeSync / G-Sync start losing their luster above 60 fps.
 

Gurg

Distinguished
Mar 13, 2013
515
61
19,070


People are talking about HBM as a done deal. But with the die shrinks coming next year, a safer, more economical approach to generating substantially more performance might be packing much more capacity and speed into same-footprint chips and cards without increasing heat and throttling.
 


The complaint isn't about how well the arrows function. They block part of the charts because they're too large and opaque.
 


There are no solutions at any price for 4K at 144 Hz. Two 980 Tis struggle to get to 60 fps in most of the games in TPU's test suite. Until DisplayPort 1.3 drops, ULMB grows into that realm, and we slog through two generations of graphics cards, it just ain't happening.
 


A smaller fab process isn't enough for GDDR5 to catch HBM, and if you shrink GDDR5, then why not shrink the HBM memory for even more performance per watt? Sure, if you want 16 or 32 chips then GDDR5 is the answer, but that's ridiculous right now, especially on a smaller fab process, where HBM should easily scale to 8 GB or 16 GB.
 


I think Pascal and AMD's Greenland will both get us closer to 144Hz 4K gaming, if only because we have been on 28nm for quite a while and have come pretty far. A Fury X is quite a bit more powerful than the first 28nm GPU, the HD 7970, as is a GTX 980 Ti. A die shrink alone should lower heat/power and increase performance by adding cores. As well, both will be on HBM2 and have other enhancements.

To top it all off, we should have games using DX12, which should provide a bit of a boost in some cases.
 


They may bring us closer, but close enough? I doubt it. DirectX 12 is not about making games run MUCH faster so much as it is about reducing overhead. That will help a lot, but mostly on the CPU side and with multi-GPU setups. If it's a 20% gain, or even 30%, which I believe is stretching it, that's still not going to make 4K 144Hz a reality in today's most demanding games.
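As a rough illustration of that gap, here is a minimal sketch that takes the ~60 fps figure quoted earlier for two 980 Tis at 4K as an assumed baseline and applies purely hypothetical uplift factors:

```python
# Hypothetical scaling: how far various uplift factors move a ~60 fps
# 4K baseline (the two-980-Ti figure quoted earlier) toward 144 Hz.
baseline_fps = 60          # assumed starting point from the earlier post
target_fps = 144

for gain in (0.20, 0.30, 0.50):
    fps = baseline_fps * (1 + gain)
    print(f"+{gain:.0%}: ~{fps:.0f} fps "
          f"({fps / target_fps:.0%} of the 144 Hz target)")
# Even +50% only reaches ~90 fps, so 4K 144 Hz needs far more than
# API overhead reductions alone.
```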
 