AMD Introduces Radeon R9 Fury Series Graphics Cards With Fiji GPUs


cmi86

Distinguished


Right, a 50% improvement in PPW at roughly the same wattage as the 290X + a 45% increase in core resources = 95%; mix in the rumored clock speed boosts and there ya go, 100% over the 290X. Like I said, this is all speculation at best until these things hit the test benches, but my hunch tells me they will turn out to be significantly quicker than a 980Ti. A good amount of leaked synthetics run on DX12 show the AMD stuff benefiting quite a bit more than the Nvidia stuff as well. Again, speculative until tested, but the indicators are certainly there that the tides may be shifting; not like it hasn't happened before, right?
 


That's all in theory, and it probably looks great in synthetic tests like 3DMark. We will have to wait and see how well it does in the real world, though. I am willing to bet that any game that uses more than 4GB at 4K will probably not do as well.

Again I am just assuming that as we have no concrete evidence yet.



As I said, we have to wait and see. If it is a good GPU then AMD better make damn sure they keep their drivers up to date and start working better with developers. They have a very select few games using their technology while Nvidia has a lot of big hitters (Witcher 3 and Batman AK).



If you pause it at the right time (when they show them building it at about 1:06) you can actually see that the CPU is an Intel CPU. It even helps that the NIC says "Intel" on it.

http://www.kitguru.net/desktop-pc/gaming-rig/anton-shilov/amds-project-quantum-systems-are-based-on-intel-core-i7-devils-canyon/

That even says it.

Ouch. I guess they don't have faith in their FX 9590 to keep cool enough and to not bottleneck the GPU?
 

rdc85

Honorable


The bigger question is why they didn't use their Zen prototype CPU. Does the timeline not match up...?
Or do they not want to show what they have yet...

Will they use Zen in it later? It's kinda strange indeed to use your competitor's chip...
(Well, at least they admit/know their BD chips are not good enough)...
 

dragonsqrrl

Distinguished

Why are you just adding 50 and 45 together? That makes absolutely no sense. Perf per W is not the same as raw performance. If a card performs the same as the 290X but has 50% better perf per W, it would have roughly two-thirds the power consumption of the 290X. If you then take that same card and add 45% more performance at the same efficiency, it's also going to consume roughly 45% more power, or in other words climb back toward that 275W figure. Add a slight boost in clock speed and I think a performance increase in the 50% range over the 290X becomes realistic. A doubling of performance at 1.5x the perf per W, on the other hand, would require roughly a third more power than the 290X, or in other words far more than 275W. It would also require far more than a 45% increase in compute resources.
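To put rough numbers on that (a purely illustrative back-of-the-envelope sketch; the 250W and 1.5x inputs are just the rumored figures from this thread, not official specs):

# Rough sketch of why perf/W and raw performance don't simply add together.
# All inputs are the rumored figures discussed in this thread, not official specs.
hawaii_power_w = 250.0      # approximate 290X TDP
perf_per_watt_gain = 1.5    # claimed Fiji efficiency improvement over Hawaii

# Scenario 1: same performance as the 290X at 1.5x perf/W.
# Power drops to 1/1.5 of the original; performance is unchanged.
iso_perf_power = hawaii_power_w / perf_per_watt_gain            # ~167 W, still 1.0x performance

# Scenario 2: double the 290X's performance at 1.5x perf/W.
# Power scales with performance divided by efficiency.
double_perf_power = 2.0 * hawaii_power_w / perf_per_watt_gain   # ~333 W, well above 275 W

print(round(iso_perf_power), round(double_perf_power))          # 167 333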

In addition, leaked benchmarks completely conflict with your performance assessment.


You're talking about API overhead tests like Star Swarm and 3DMark? Ya, AMD is getting a bigger boost because they had more overhead to begin with in DX11. And let's not forget that all of these tests were conducted on early pre-release versions of Windows 10 running a new driver model, WDDM 2.0, so let's not get ahead of ourselves. And again, it sounds like you're trying to draw some sort of equivalence between this and actual performance. It's strictly a synthetic test of API overhead; it will not in any way reflect real-world performance gains in DX12.
 

dragonsqrrl

Distinguished

Yep... Ouch. It's not a huge surprise, I suppose. I think what's more surprising is the number of high-end AMD-sponsored systems I've seen up to this point that were bottlenecked by their own processors. I think getting past the denial was only a matter of time; it's just sad it took as long as it did. If this doesn't say everything you need to know about the current state of AMD's CPU lineup, I don't know what will.
 

dragonsqrrl

Distinguished

You have got to be the biggest voting troll on this site. Even voting up your own comments? lol, are you kidding me?
 

dragonsqrrl

Distinguished

Zen isn't ready, it's scheduled to launch sometime in 2016.
 

cmi86

Distinguished
OK, I am going to try and break this down one more time for you, real simple like, so you can follow along...

Fiji has 1.5x the performance per watt of Hawaii XT (290X), right?
So at the same core count and TDP, Fiji will be 50% faster than Hawaii XT, right?

OK, now that we have established the performance per watt increase, let's cover the additional compute resources found in Fiji over Hawaii XT.

Fiji features 1,280 more shaders than Hawaii XT, which equates to 45% of the 290X's core count. This makes Fiji 45% faster than Hawaii XT from the additional resources alone (no 1.5x increase in performance per watt calculated in yet).

Once we factor the 1.5x PPW increase into the additional 1,280 shaders, the additional resources alone stack up to represent 67.7% of the performance of a 290X GPU.

So when we add the 50% performance per watt increase on the 2,816 cores to the ~67% performance increase from the additional 1,280 shaders (also benefiting from that same 1.5x PPW increase), we actually end up at 117%; factor in some faster clocks and ballpark it at 120-125%. These are really basic calculations; I struggle to comprehend how you fail to understand them.
 


There is no way Fury X is 125% faster than a 290X. If anything, a much easier way to look at it is TFLOPS: 8.6 (Fury X) vs 5.6 (290X). That is a 3 TFLOPS increase, which is roughly a 54% increase in raw compute.

If the AMD slides are even accurate then we can assess performance increases from there. In Shadow of Mordor an R9 290X gets 37.8FPS on "Ultra Quality" at 4K resolution. The Fury X, according to AMD, gets 46.13FPS at the same settings. Let's make it simple by rounding up: 38FPS vs 47FPS. That is a 9FPS increase at the same settings in the same game with the same amount of VRAM, yet Fury X has a pretty big advantage with over 50% more SPUs, a newer version of GCN and much more memory bandwidth. Calculated out, that 9FPS is a ~24% increase in performance. A GTX 980Ti gets 45FPS rounded up in the same game and settings, which means it is ~18% (7FPS) faster than an R9 290X, so Fury X will win there.

Another game, The Witcher 3 at 4K ultra settings: 18FPS (290X) vs 34FPS (980Ti) vs 39FPS (Fury X). That means the 980Ti has a 16FPS (~89%) advantage over the 290X, and Fury X has a 5FPS (~15%) advantage over the 980Ti, or a 21FPS (~117%) advantage over the R9 290X. Of course this is math based only on what AMD has given us and what we have online in benchmarks.
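For what it's worth, here is that same arithmetic laid out (the pct_gain helper is just for illustration; the inputs are the AMD slide numbers and rounded public benchmark figures quoted above, so treat everything as approximate):

# Percentage-gain arithmetic using the rounded figures quoted above.
def pct_gain(new, old):
    return (new - old) / old * 100

# Raw compute: 8.6 TFLOPS (Fury X) vs 5.6 TFLOPS (290X)
print(round(pct_gain(8.6, 5.6)))   # ~54% more raw compute

# Shadow of Mordor, 4K Ultra (rounded): 38 FPS (290X), 47 FPS (Fury X), 45 FPS (980 Ti)
print(round(pct_gain(47, 38)))     # Fury X ~24% over the 290X
print(round(pct_gain(45, 38)))     # 980 Ti ~18% over the 290X

# The Witcher 3, 4K Ultra: 18 FPS (290X), 34 FPS (980 Ti), 39 FPS (Fury X)
print(round(pct_gain(34, 18)))     # 980 Ti ~89% over the 290X
print(round(pct_gain(39, 34)))     # Fury X ~15% over the 980 Ti
print(round(pct_gain(39, 18)))     # Fury X ~117% over the 290X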

Now there will be some games that it might not win at. But it seems that it has at least a 20% advantage over the R9 290X, and that is not even counting the current top-of-the-line 290X cards that were already clocked like an R9 390X.

Fury looks promising but it is not some revolutionary GPU. We haven't had one of those since we went from separate pixel and shader pipelines to a unified shader uArch.
 

cmi86

Distinguished


So then what is going on that causes the calculation not to translate into real-world gains? If Fury has 1.5x the performance per watt, then how is it not 50% faster on that alone at the same core count and TDP? Why doesn't the additional 45% increase in physical resources in Fury represent a 45% increase in performance? Is it possible we are seeing the effects of immature drivers or something else? Because I fail to understand how Fury fails to benefit proportionally from the 45% addition of physical resources alone, let alone a 1.5x increase in performance per watt at roughly the same TDP as the 290X. If you use Tahiti for example and look at the difference in physical resources from a 7870 LE to a 7970, the performance increase in the 7970 is directly proportionate to the additional resources; why don't we see this in Fury?
 

scolaner

Reputable
While Tom's has been woefully wrong in the past, if you have conflicting sources, post them so Tom's can follow up and possibly correct their article if they are wrong.

Hmmm, unfortunately I haven't known Tom's to have the greatest track record when it comes to the accuracy of information in their news articles either. Actually that's being too generous, their news articles have had a terrible track record in my experience. Maybe that's changed recently, I wouldn't know.

I'm not claiming that Tom's is wrong, but if they're right, I would be surprised...

That's quite a thing to say. I strongly disagree.
 


The 1.5x increase in performance per watt is based on the hardware as a whole; it is not saying there is a 1.5x performance increase with the same specs as an R9 290X.

As for the HD7870LE

http://www.tomshardware.com/reviews/tahiti-le-7870-7930-benchmark,3401-3.html

The HD7970GHz was getting 158FPS (again rounding up for ease) averaged across all the games, while the HD7870LE was getting an average of 128FPS, which gives us 30FPS more on the HD7970GHz, a 23% performance increase. The HD7970GHz has 2048 SPUs vs the HD7870LE's 1536, a 512 SPU difference, meaning the HD7970GHz has 33% more SPUs.

If we drop to an HD7970 non-GHz, it has roughly the same clock speed as the HD7870LE, and it averaged 147FPS vs 128FPS, a 19FPS difference of only 15% with 33% more SPUs.
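As a quick sanity check on that scaling argument (a rough sketch using the rounded averages from the Tahiti LE review linked above):

# Shader count vs. achieved FPS for the Tahiti cards, rounded figures from the review above.
cards = {
    # name: (stream processors, average FPS across the tested games)
    "HD 7870 LE":  (1536, 128),
    "HD 7970":     (2048, 147),
    "HD 7970 GHz": (2048, 158),
}

base_spu, base_fps = cards["HD 7870 LE"]
for name, (spu, fps) in cards.items():
    spu_gain = (spu - base_spu) / base_spu * 100
    fps_gain = (fps - base_fps) / base_fps * 100
    print(f"{name}: {spu_gain:.0f}% more SPUs -> {fps_gain:.0f}% more FPS")

# ~33% more SPUs yields only ~15% more FPS at similar clocks (HD 7970),
# and ~23% more with the GHz Edition's higher clocks: far from linear scaling.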

I truly wish it was a 100% performance increase. That would mean a Fury X with 2x the SPUs alone would give me double the performance from my HD7970GHz but that is never how it works.

I will say it will probably be a 25% average performance increase over the R9 290X, but it will probably match the GTX 980Ti at stock. If Nvidia throws the 980Ti into overclock mode it will push past the Fury X.
 

dragonsqrrl

Distinguished

No, you're already wrong. Do you not understand what perf per W is? It's a ratio of performance to power consumption. It has no direct influence over raw performance. Performance doesn't just automatically scale with increases in efficiency.

At the same core count as Hawaii, Fiji wouldn't have the same TDP to begin with, and to get it up to the same TDP you would have to do something like increase voltages and clocks beyond what's realistically achievable. This is an oversimplification, but assuming equivalent clocks on a 2816-core Fiji, you would have roughly the same performance as Hawaii; that's not what has changed. What's changed is the power consumption. At 1.5x the perf per W it should have roughly two-thirds the TDP (~33% lower). At 2x perf per W it would have a ~50% lower TDP. The 290X supposedly has a ~250W TDP. To account for the 1.5x increase in perf per W we can divide this by 1.5, giving us ~167W. To account for the increase in compute resources we can multiply this by 1.45, giving us ~242W, and once you add a modest clock bump we're in the ballpark of the published TDP and performance profile of the Fury X. That's basically how it works: it'll likely be ~50% faster than the 290X with a ~10% higher TDP. The increase in core count is going to be the primary reason for performance increases in Fiji, not the increase in perf per W.
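Here's that chain of reasoning as a rough calculation (same caveats as before: the 250W, 1.5x and 1.45x inputs are the rumored figures, the 5% clock bump is an assumption, and power is treated as scaling linearly with clocks, which understates it):

# Rough TDP/performance chain for a hypothetical full Fiji; rumored inputs only.
hawaii_tdp_w  = 250.0   # approximate 290X TDP
ppw_gain      = 1.5     # claimed perf-per-watt improvement
resource_gain = 1.45    # 4096 vs 2816 shaders
clock_gain    = 1.05    # assumed modest clock bump

# Step 1: the efficiency gain alone lowers power at iso-performance.
power = hawaii_tdp_w / ppw_gain          # ~167 W
perf = 1.0                               # still 290X-level performance

# Step 2: more shaders scale performance and power together.
power *= resource_gain                   # ~242 W
perf *= resource_gain                    # ~1.45x the 290X

# Step 3: a clock bump pushes both up a little further.
power *= clock_gain                      # ~254 W (real power rises faster than clocks)
perf *= clock_gain                       # ~1.52x the 290X

print(f"~{power:.0f} W, ~{perf:.2f}x the 290X")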

All the rest of your reasoning just stems from this first false assumption, so it's irrelevant anyway.


You just have some sort of profound misconception about the way this all works, and I'm struggling to identify what exactly that is. I also struggle to understand how you're a graphics card expert on this site... I've been here a long time, and there's always someone like you that's in absolute denial, so certain they know exactly what they're talking about even though they haven't got a clue. Even when multiple people try to explain to you as clearly as possible that you're wrong, you still stubbornly say you're right. At this point I'm just not sure what else to say, you can believe what you want I guess.

But I would like to know your thoughts on the leaked Fury X benchmarks, which show performance gains in the ~50% range over the 290X. Have you given it any thought as to why that might be? Has that led you to question or reassess your performance prediction at all? Because if not, your fundamentally flawed performance assessment is probably the least of your issues.


 


https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect
 

dragonsqrrl

Distinguished

It is, but unfortunately it has been my experience as well as the experience of many others who have frequented the news articles here on Tom's over the past few years. Although this is far from the worst offense, there was an example of it in this very article. This isn't meant as a direct jab at you, I actually have no idea who you are or how long you've been an editor on this site, but the general attitude I've experienced has been one of indifference to even basic standards of journalistic integrity. Corrections are rarely made, even when multiple comments point them out. And when corrections are made or articles taken down entirely it's done without any sort of retraction or effort to inform the readership. In fact one of the only authors I've known to make corrections in their news articles and engage the readers in the comments has been N.Broekhuijsen. Now I haven't read many news articles on Tom's recently, for this very reason, but up till I would say 2014, this was very much the case.

I am a bit surprised though, this is the first time I've ever had a dialogue with any of the staff about this issue. I just assumed after a few years that no one really cared enough to even make an effort to deny it.
 

dragonsqrrl

Distinguished

It does, that's not the issue. The issue seems to involve misconceptions about the influence of the 50% increase in perf per W.
 

cmi86

Distinguished


Wow, you really get your panties in a twist over someone's opinion on a technology site, don't you? Instead of flinging cause for clinical diagnoses my way, you might want to get your own issues under control there, eh, rage machine? I now understand my logic is flawed, but this realization did not come as a result of your incessant desire to be correct and flex your e-peen. It came as a result of another user who spoke much more clearly and less insultingly, with almost zero perceived desire to be "right" like you; his only intent was to inform me, which I greatly appreciate.

Now as to how and why I hold the rankings that I do on this site: I hold these rankings because for nearly 5 years I have taken time out of my life to help others resolve their PC-related questions and problems to the best of my ability. I believe that is what this site is for, as opposed to using it as a vehicle to arrogantly impose my desire to be correct onto others. I highly doubt you are so matter-of-factly spoken in daily life when removed from the comforts of your keyboard and mouse. In my time here I have amassed 122 best solutions compared to your 2, so if you would like to criticize the badges that I hold on this site, I highly suggest actually earning one of them first.

Generally I do not deviate off topic to this degree, but if you are going to attempt to embarrass me publicly, I will gladly return the favor. Now, in an effort not to derail this otherwise reasonable thread, I suggest we both shut our mouths and move on to more productive dialogue. Agreed? Good.
 

dragonsqrrl

Distinguished

Actually that was quilciri, not me...

I made NO conscious effort to embarrass you publicly, nor was that ever my intention. You managed to do it all yourself in your responses and subsequent reactions to an honest effort to disseminate accurate information and contribute to an informed community.
 


I wear a thong, tyvm.

If you didn't consciously mean to belittle, your subconscious is kind of a dick :D
 

scolaner

Reputable


(We're rabbit-trailing a bit here, but...)

I came to TH and took over as News Director about 11 months ago, so I can only speak to that time period. Perhaps you're completely correct about the way things were done in the past, I don't know.

I can tell you, though, that none of what you're asserting is currently the case. We always update articles if and when there are errors, even little typos, and when we do, we make a note at the bottom of the post. If it's a large enough update (whether there was an error or just new info to include), we'll even add a note to the headline and re-publish.

We care about *all of it*. We care what commenters say. We do engage much more now (apparently) than we did before. Our desire, and the thing we work toward every day, is to be the best source of tech news and reviews, period.

That takes a lot of blood, sweat and tears, and it takes smart, dedicated people that buy into that idea. I'm extremely proud of our growing news team. These guys work incredibly hard, and they put the time in to know their stuff and to run down as many details as possible on every story. We're constantly badgering companies we cover for more information, for clarifications, and so on.

Speaking of journalistic integrity, we also spend a lot of time internally discussing and debating what to post, and how. I can't tell you how many times we've killed stories because we felt, ultimately, that they weren't worthwhile to cover, or we couldn't substantiate or debunk a claim someone else made, or what have you. It takes time and effort, and it costs money (nobody's time is free) to run those things down, and sometimes the end result is that we don't post the story. We routinely sacrifice clicks for the sake of integrity.

You can't believe everything you read on the Internet--there's a mountain of false rumors, leaks, and flat-out poor research out there. We want TH to be a site where you CAN believe everything you read on here, because we've done our due diligence on every story.

And if and when we make a mistake, we correct it.

Speaking of, we're so far down the rabbit trail here that I don't even remember what it was that you thought we got wrong in this article. (Seriously, I thought other commenters addressed it?)

 

dragonsqrrl

Distinguished

I admit, questioning his qualifications was inappropriate and unnecessary. It was said out of frustration, but it didn't exactly come out of nowhere. It only escalated to that point after a lengthy argument that seemed to be going nowhere.
 

dragonsqrrl

Distinguished

I really appreciate the detailed response. Over the years my confidence in the accuracy of news posted on Tom's had eroded to the point that I now rarely even click on a news article. But I will come back to give it another try, thank you. Your response in itself is a total night and day difference compared to what I've grown accustomed to on this site.


It wasn't anything serious, just launch time frames for dual Fiji and Nano that had been mixed up, or so we theorized.
 
