AMD Radeon R9 300 Series MegaThread: FAQ and Resources



There is a massive difference between a Windsor V8 of the 60s and the Coyote V8 of today. Sure, it is still a V8, but there is so much tech behind the new one that it easily drives circles around the old one in terms of raw power and is way more efficient.

That said, it is nothing against AMD, but GCN has hit its limit on 28nm and is no longer efficient. While nVidia is still using a uArch similar to Fermi, Maxwell V2 is much more efficient than even the original Maxwell. It too has hit its limits on 28nm, though.
 


From what I can see it's still just a revision of GCN though? (GCN Gen 4 compute clusters).

Most of what makes 'Polaris' unique are new 'macro' blocks, as the slides put it (e.g. media encode/decode, display engine and so on). The *only* novel thing I can see in the block diagram is the 'geometry engine', which is now apparently decoupled from the CUs? I'm guessing this is due to geometry throughput being a weakness? It strikes me that most of the other changes are still evolutionary (with the big power savings coming from FinFET tech).
 


Until I see what they are actually doing, I'm very keen on believing they will use GCN ideas (for lack of a better term... as a foundation?) to implement Polaris-whatever.



It's still a V8, and it's still a Mustang. The core principle stays the same. That is precisely my point. The shades of gray can be up for debate, like I said, though. Moving from a carbureted engine to EFI + ECU is a huge advancement, but it's still a V8 block. And so on...

Lipstick on a pig? Well, the pig might be quite the performer or not, but it's just a name. What matters is how they define the Arch. Like you say, nVidia has been doing a good job, but they haven't been perfect. I still have in my mind all the hate Kepler got from the GPGPU community.

Cheers!
 


And so the marketing begins. I love the comparison slide. A new uArch on 16nm vs a 28nm current part. Almost feels like when the FX 8150 was first being shown off. In AMD's marketing slides it was only ever compared in the things it did better, against whatever Intel CPU suited the comparison.

@Yuka, true, it is a V8 block, but even the core of a V8 block has changed. I could say that not much has changed in a GPU since the unified shader was introduced, and that would be true since most of the advancements revolve around the shaders. It is just hard to compare a car engine to a GPU.
 
WCCFtech had an article with an image directly from AMD comparing a Polaris GPU to a GTX 950, and they were playing the same game at the same resolution and 60FPS, and it showed the Polaris using 86W of power and the GTX 950 using 140W. What? 140W?

That is just a joke. A GTX 950 won't even reach 100W; that's completely false information from AMD and an extreme exaggeration. So if I can deduce properly, power usage will be only slightly better than Nvidia's current cards. I have a feeling Pascal will win the power game over AMD, but performance is starting to matter more anyway now that AMD is moving away from the 300W+ cards.
 
That was from the wall power draw, not GPU power draw.

That 86W was powering an entire system, including the GPU.
http://www.anandtech.com/show/9886/amd-reveals-polaris-gpu-architecture

"To that end RTG also plugged each system into a power meter to measure the total system power at the wall. In the live press demonstration we saw the Polaris system average 88.1W while the GTX 950 system averaged 150W. Meanwhile in RTG’s own official lab tests (and used in the slide above) they measured 86W and 140W respectively."
 


Now that makes sense, and may I also say, holy smokes! That is extremely low power draw. If you consider that the GTX 950 was drawing about 80-90W of power, that leaves roughly 50-60W for the rest of the system, so if this is true, that Polaris GPU was drawing about 26-36W from the PSU. Sorry Pascal, you're going to have to try pretty hard to beat that, and I'm saying that from a totally honest standpoint.
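To lay the back-of-envelope math out explicitly (a rough sketch only: the 86W and 140W wall figures are AMD's lab numbers from the AnandTech quote above, while the 80-90W GTX 950 board power is my assumption from typical independent reviews):

```python
# Rough estimate of GPU-only power from AMD's at-the-wall numbers.
# Assumptions: 86W / 140W total system draw (AMD lab figures),
# ~80-90W typical GTX 950 card power (independent reviews, not AMD's slide).

polaris_system_w = 86      # wall power of the Polaris demo system
gtx950_system_w = 140      # wall power of the GTX 950 demo system
gtx950_gpu_w = (80, 90)    # assumed GTX 950 card power range

# Everything except the GPU (CPU, board, RAM, PSU losses),
# inferred from the GTX 950 system's wall draw
rest_of_system_w = tuple(gtx950_system_w - gpu for gpu in gtx950_gpu_w)      # (60, 50)

# Subtract that overhead from the Polaris system's wall draw
polaris_gpu_w = tuple(polaris_system_w - rest for rest in rest_of_system_w)  # (26, 36)

print(f"Rest of system: {min(rest_of_system_w)}-{max(rest_of_system_w)} W")
print(f"Estimated Polaris GPU draw: {min(polaris_gpu_w)}-{max(polaris_gpu_w)} W")
```

Obviously this ignores PSU efficiency differences and any platform differences between the two test systems, so treat the 26-36W figure as a ballpark, not a measurement.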

It would be huge if AMD wins the power game; that's been a major advantage Nvidia has held for quite a while.
 


It is still just marketing. They are choosing the platform to their advantage to show an advantage, such as the CPU, board, RAM and settings used with each GPU. I highly doubt they would give nVidia the same advantage.

That is why the only truth will come out when an independent site like Tom's gets their samples; they will be plugging the GPUs into the same system, thus eliminating any variables AMD, Intel or nVidia use to skew their results.
 


Not saying they do not, but even so, people need to take this with a grain of salt and also remember that nVidia has a GPU coming out too.

I am just against marketing slides, especially when they compare against the competition. Intel has always been my favorite in this respect, as their slides tend to focus on their new CPUs vs their old CPUs with less marketing against the competition. But they still do it; everyone does.

The only thing this will do is build up hype. It is much like WCCFTech: they report on every rumor about everything, which ends up building up hype that no company can ever live up to.
 


Out of interest, AMD were apparently quite specific in the talks that the GPU in question was produced by GlobalFoundries on 14nm FinFET, rather than the 16nm FinFET used by TSMC (although AMD also stated that some of the new GPUs will be on 16nm with TSMC).

A bit of speculation, but I guess that GF don't have the experience TSMC have building the *big* GPUs, so AMD will stick with the latter for the higher end. One of the articles I saw made a good point: it's curious for them to split the GPUs up between two manufacturers like this. I'm wondering if the GF process has a perf/W advantage compared to TSMC, but probably doesn't scale up as well?

It could create an interesting situation where the two companies end up with the best perf/W parts in different categories. The thing is, though, getting GTX 950 (desktop) performance at under 40W is impressive. That is using GDDR5 memory as well (so no power savings from HBM for this GPU)...
 


Really? Do you recall the AMD marketing stunt when they presented Intel with the "X64 bit for Dummies" book? Or the "Going for Green" vid they used to take a jab at Nvidia for the GTX 480? I don't recall Nvidia resorting to such marketing tactics, but if you know different, then please share.
 
If there is information directly from AMD that says the system with the Polaris GPU draws 86W and the GTX 950 system 140W, I won't take it with a grain of salt; because it comes directly from AMD, I'm going to take their word for it and hold it as a true statement. I'm not one to assume someone lies. If that turns out to be false when the cards come out, shame on them, but I don't think they simply made up 86W and 140W out of the blue. I do believe 86W seems accurate. For one thing: 14nm, better architecture (and a card that matches a GTX 950 will probably cost much less, about $90).

Pascal will probably be in the same ballpark, I suppose. But I haven't heard specific information from Nvidia on this, only AMD, so I'm going to take AMD's word for it right now. And, I kind of like WCCFtech's rumors.
 


I clearly told you (the other day, I think) that I did not get into computer hardware until 2014, so your question is either rhetorical or you just forgot that at the time I was doing whatever. That's like your old man asking you if you remember something from when he was a kid.

Anyways, I really don't care, this is not going to become an argument of "who markets more".
 


Really? You actually need reminding about some of Nvidia's genius marketing moves?

*cough* Pascal Announcement, 'CEO Math' *cough*
http://wccftech.com/nvidia-pascal-gpu-gtc-2015/

10x more powerful than Maxwell? If they find a way of proving it, it'll be in one very niche specific bit of software, and it only applies when the operator is hopping on one leg whilst performing a piano concerto that coincides with the alignment of all the planets in the solar system 😛

By contrast AMD's assertion they can achieve *double their previous best perf / w* is pretty modest. I know which sounds more believable to me, although heck if Pascal really is all that I'd be very impressed 😛
 


You haven't been around very long. It is wise to never trust the hardware maker and to only trust independent reviews. I can't tell you how many times people got hyped over hardware developers' performance claims only to be let down by the tech sites when they don't live up to it. Here is an example. Remember the FX series' first launch? They compared it to three different CPUs (i5 2500K, i7 2600K and the 980X) in different aspects (only where it won in their testing) but never mentioned power draw or anything beyond that (such as the systems used and their specs, etc). It is marketing.

As for pricing, again, people need to drop the "AMD is cheaper" stigma. First off, 14nm will cost more; it costs way more to produce smaller nodes. Second, if it performs the same as a GTX 950, or better, it will cost the same, or more.

As for WCCFTech, I don't mind the site, but the problem with rumors is that when they are reported by a big site they get spread as truth, which hurts the companies in the long run. There is a reason why Fud and Fudzilla are known for nothing but BS.
 



Note the difference. nVidia is talking about their GPU vs their own GPU, not a 10x performance jump over AMD's Fiji XT, for example. That is still marketing, so it should be taken with a grain of salt. Although HBM plus NVLink, 16nm and other improvements could possibly jump raw performance by that much (we do not know the changes in Pascal's uArch yet), it is still marketing, so a grain of salt the size of Texas is needed for both.
 
I know nothing being stated is fact, and I know that only independent testing not done by the marketer can be trusted, but it's just that I and many others are highly anticipating these GPUs. The idea is that if these statements are true, it'll be incredibly awesome. I'm not saying they are true, only if.

And I disagree with "if it performs the same, it'll cost more or the same". That doesn't make sense, because even I know that today a new $150 GPU is much stronger than a $300 GPU of 5 years ago. If what you're saying is true, the successor to the GTX 950, since it'll perform better, will have to cost about $250. What you said just doesn't make any sense.

Currently, the R7 370 competes with and beats the 750 Ti. So the Polaris card that matches the 750 Ti will probably be a 460, which won't be priced like a 370; it'll be priced like a 360.
 


That is not the same. The new GPU takes the place of the old GPU, i.e. a 950 replaces the 750 and a 1050 (or whatever nVidia calls it) will replace the 950, so the price comes out to be the same.

Look at the Fury X. It performs pretty much the same as the GTX 980 Ti at 4K and was launched at the same price as a GTX 980 Ti, though a bit less than the highly overclocked aftermarket 980 Tis.

You are assuming that AMD is a charity; they are not. They price according to performance, and sometimes they are wrong. Or do you not remember the FX 9590 being priced at $1K at launch, even though it was a power hog, required at minimum an H80-class CLC and was bested by half-priced, if not cheaper, stock i7s?

Trust me, if Polaris (or Greenland or Arctic Islands, whatever) beats Pascal at every point, it will be priced equal or higher depending on that performance level. AMD is not going to sell better-performing GPUs they could ask more for at a lower price. They only do right now because nothing they have beats the competition in any absolute sense.
 
I'm not saying that. If AMD makes a new GPU that performs exactly the same as a GTX 950, it wouldn't be the same price as a GTX 950; it'd be lower. That's all I'm saying, because the successor to the GTX 950 will cost the same as the GTX 950, meaning the successor would crush AMD's GPU, as it'd be priced the same and perform better than the GTX 950.
 
So Polaris is a lot of GCN... Lipstick on a pig until further notice, then.

And all marketing teams are evil. It's just a discussion between diarrhea and solid turds, but the essence is the same.

We will need more eyes on the GPU to see how much is from GCN tweaking and how much is from the die shrink, if we can know that at all. I guess when nVidia has some 14/16nm stuff we will be able to draw lines. Wake me up when that happens, please.

Cheers!
 


If you are going to make claims, then make sure you know the history; if you cannot be bothered to learn the history, then don't make the claims.