AMD GPUs In 2016: Polaris Lights Up The Roadmap

Chris, another site claimed it was 16nm FinFETs, not 14nm. I would assume your source is correct, but is it 14nm for sure?

And while I find this interesting, and hopefully good for the GPU market, I am not interested in the usual marketing. Can't wait till you get your hands on one of these and Pascal to do a true apples-to-apples review instead of some marketing-slide mumbo jumbo.
 


They are dual-sourcing from TSMC and GlobalFoundries.
 


I know about the dual sourcing. That doesn't answer my question, because dual sourcing could mean the high end is 14nm and the low end is 16nm, or it could even mean an Apple 6S-style debacle where it's luck of the draw which one you get.

My question is which it is, and whether there is a definitive source as to how it will work.
 

chronium



I know about the dual sourcing. That doesn't answer my question, because dual sourcing could mean the high end is 14nm and the low end is 16nm, or it could even mean an Apple 6S-style debacle where it's luck of the draw which one you get.

My question is which it is, and whether there is a definitive source as to how it will work.

You're not going to know until they announce the cards; they only announced the roadmap, not the specifics.
 

Achoo22

not ashamed to admit I only understood about 10% of this article. XD
That's because it uses a lot of words to say very little. The summary is that AMD hasn't released meaningful details on their upcoming products beyond this: they should arrive in roughly six months, they get a process shrink that can't fail to improve efficiency, and they will support 4K and H.265 at least partly in hardware.
 

voodoochicken

Coupled with other technologies, this could be a boon for AMD in the mobile space. However, Intel might be providing more competition in this space than nVidia. Even IBM is a wildcard as far as feasibility in this segment.
 
"And yet, AMD continues to hold its own in terms of absolute performance. The Radeon R9 Fury X even bests its nemesis, the GeForce GTX 980 Ti, in most benchmarks at 3840x2160."

While that is certainly significant to the fraction of one percent of the market that is at 3840x2160, the fact remains that neither the 980 Ti nor the Fury X can provide a satisfactory experience to the gaming enthusiast at this resolution, where even twin cards struggle to top 60 fps in current AAA games. In addition, while 4 GB remains fine for 1440p, at this resolution 4GB comes up a bit short... not by measuring the amount "allocated" per GPU-Z, which should be accepted by now as inaccurate, but by measuring the performance impact. For example:

http://www.extremetech.com/gaming/213069-is-4gb-of-vram-enough-amds-fury-x-faces-off-with-nvidias-gtx-980-ti-titan-x/2

In Far Cry 4, the Radeon R9 Fury X is fully playable at 1080p and 1440p, as are the GeForce GTX 980 Ti and the GeForce GTX Titan X. By 4K, with all features maximized, however, only the GTX 980 Ti is managing 30 FPS. The minimum frame times, however, consistently favor Nvidia at every point. We’ve decided to include the 0.1% frame rate ratio as a measure of how high the lowest frame rate was in relation to the highest. This ratio holds steady for every GPU at 1080p and 1440p, but AMD takes a hit at 4K. ...

Both AMD and Nvidia GPUs throw high frames out of band at every resolution, but the AMD Fury X tends to throw more of them, at every resolution. This is particularly noticeable at 4K, which is also where we start seeing spikes at the 4GB node. This looks to be evidence that the GPU is running low on memory, whereas the higher RAM buffers on the 980 Ti and the Titan X have no problem. With the resolution already below 30 FPS in every case, however, it’s hard to argue that the Fury X is uniquely or specifically disadvantaged.....

As in Far Cry 4, AMD takes a much heavier minimum frame rate hit at every resolution, even those that fit well within the 1080p frame buffer. AMD’s low 0.1% frame rates in 1080p and 1440p could be tied to GameWorks-related optimization issues, but the ratio drop in 4K could be evidence of a RAM limitation. Again, however, the GTX 980 Ti and Fury X just don’t do much better. All three cards are stuck below 30 FPS at these settings, which makes the 4GB question less relevant.

Assassin’s Creed Unity shows a similar pattern to Far Cry 4. AMD’s frame timing isn’t as good as Nvidia’s, but we see that issue even below the 4GB limit. The situation gets noticeably worse at 4K, which does imply that Fury X’s memory buffer isn’t large enough to handle the detail settings we chose, but the GTX 980 Ti and Titan X aren’t returning high enough frame rates to qualify as great alternatives. The frame pacing may be better, but all three GPUs are again below the 30 FPS mark.
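For anyone puzzling over the 0.1% figures in those excerpts, here is a minimal sketch of how a "0.1% low" number and its ratio to the average frame rate are typically derived from a frame-time log. The helper and the data below are illustrative assumptions, not ExtremeTech's actual tooling:

```python
# Rough sketch: derive average FPS, 0.1% low FPS, and their ratio
# from a list of per-frame render times (milliseconds).
# The frame times below are made up purely for illustration.

def percentile_lows(frame_times_ms, pct=0.1):
    """Return (average FPS, worst-pct% FPS, low/avg ratio)."""
    n = len(frame_times_ms)
    avg_fps = 1000.0 * n / sum(frame_times_ms)

    # The slowest pct% of frames define the "0.1% low" figure.
    worst = sorted(frame_times_ms, reverse=True)
    k = max(1, int(n * pct / 100.0))
    low_fps = 1000.0 / (sum(worst[:k]) / k)

    return avg_fps, low_fps, low_fps / avg_fps

# Fabricated example: mostly ~25 ms frames (40 FPS) with a few 80 ms spikes,
# the kind of pattern you'd expect when a card starts swapping textures.
frames = [25.0] * 2000 + [80.0] * 5
print(percentile_lows(frames))
```

The slowest handful of frames dominate the low figure, which is why a card whose "allocated" VRAM looks fine in GPU-Z can still stutter once it genuinely runs out of memory.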

Not trying to argue the relative strengths of the cards, just making the point that in each instance all three cards were below 30 fps, which also makes the 4 GB VRAM issue irrelevant. One could argue that all one needs to do is turn down the quality settings, but when investing $1,350 in GPUs, or $450 per year over a typical three-year system life, that doesn't quite seem "satisfactory".

I guess the point I am addressing is that growth in this segment, at this time, will be at 1440p, whereas most of the market will remain at 1080p for the immediate future. So I think when purchasers are looking at Polaris, Pascal or anything else, the great majority of them will be basing that decision on 1080p/1440p performance rather than 2160p.

And, to my eyes, most users consider performance per watt a secondary consideration. If two competing cards perform roughly the same and cost roughly the same, only then do I think needing an extra 100 watts of PSU capacity and maybe an extra case fan will come into play.

The bad news is that new AMD graphics cards seem at least six months away. I don't see nVidia releasing anything, regardless of whether or not it is ready, until AMD's next-generation cards drop. If this is accurate, I guess we can expect new AMD cards this summer and new nVidia cards this fall. I don't think either will provide a satisfactory experience at 4K, so while 4K development is certainly exciting, I hope future articles focus more on 1440p/1080p performance. I think we'll need yet another generation to arrive before single-card 4K performance brings a truly enthusiast-level experience.

Would love to see some more concrete info on when we'll see the HBM2 cards with DP1.3, and how that may impact where monitor manufacturers go with regard to refresh rates. I'd be hesitant to invest in an expensive 4K monitor until we see them at 144/165 Hz.
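On the DP1.3 point, a rough back-of-the-envelope calculation shows why the link version matters for high-refresh 4K. The blanking overhead below is an approximation; the ~17.28 and ~25.92 Gbit/s effective payload rates are the published figures for DP1.2 and DP1.3 with four lanes:

```python
# Back-of-the-envelope: uncompressed video bandwidth vs DisplayPort payload.
# Blanking overhead is a rough guess; real reduced-blanking timings differ slightly.

DP12_GBPS = 17.28   # DisplayPort 1.2 effective payload (HBR2, 4 lanes)
DP13_GBPS = 25.92   # DisplayPort 1.3 effective payload (HBR3, 4 lanes)

def required_gbps(width, height, refresh_hz, bits_per_pixel=24, blanking=1.05):
    """Approximate link bandwidth needed for an uncompressed signal."""
    return width * height * refresh_hz * bits_per_pixel * blanking / 1e9

for hz in (60, 120, 144):
    need = required_gbps(3840, 2160, hz)
    print(f"4K @ {hz} Hz needs ~{need:.1f} Gbit/s "
          f"(DP1.2: {'ok' if need <= DP12_GBPS else 'no'}, "
          f"DP1.3: {'ok' if need <= DP13_GBPS else 'no'})")
```

By that estimate, DP1.2 tops out around 4K60 and DP1.3 around 4K120 at 8-bit color, so 144/165 Hz at 4K would likely need chroma subsampling or a newer link, which is why the monitor question is tied to DP1.3 and beyond.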


 


Short of the R9 Fury, everything else in the R9 300 lineup is a rebrand of the previous generation, and before that only Hawaii XT was a "new" uArch.

Maxwell V2 is actually more like the GCN 1.1-to-1.2 step; GCN 1.2 (Fiji) was all about power improvements more than anything, which is why they are not quite rebrands.

Even the newest R9 300, the 380X, is just Tahiti with the improvements from Tonga, although I would say its biggest power savings came from dropping the 384-bit memory controller for a 256-bit memory controller with faster-clocked VRAM, more than from anything else they did.
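For reference, the memory-bus trade-off is easy to sanity-check with the standard formula, bandwidth = (bus width in bits / 8) x per-pin data rate. The data rates below are illustrative round numbers rather than exact board specs:

```python
# GDDR5 bandwidth = (bus width in bits / 8) * effective per-pin data rate.
# Illustrative numbers: a 384-bit bus at 6 Gbps vs a 256-bit bus at 7 Gbps.

def gddr5_bandwidth_gbs(bus_bits, gbps_per_pin):
    return bus_bits / 8 * gbps_per_pin

print(gddr5_bandwidth_gbs(384, 6.0))  # 288 GB/s on the wider, slower-clocked bus
print(gddr5_bandwidth_gbs(256, 7.0))  # 224 GB/s on the narrower, faster-clocked bus
```

By those numbers a narrower bus gives up raw bandwidth even with faster VRAM, which is where Tonga's color compression comes in; the real win is power, since fewer memory channels and PHYs draw less.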
 


I know about the dual sourcing. That doesn't answer my question, because dual sourcing could mean the high end is 14nm and the low end is 16nm, or it could even mean an Apple 6S-style debacle where it's luck of the draw which one you get.

My question is which it is, and whether there is a definitive source as to how it will work.

If you do a bit of Googling you'll find your answer: the two processes are, in all ways that matter, equivalent. One is GloFo, probably due to contractual obligations; the other is TSMC, to account for the likely inability of GloFo to meet all the demand. The A9 for Apple's iPhone was similarly dual-sourced, and the parts were found to be roughly equivalent.
 
Good to know AMD is introducing much-needed perf/W upgrades in their new architecture. I'm not sure, but I think it takes more than just a redesign/process shrink to achieve the efficiency they talk about. Of course, without some real solid benchmarks, we can't say for sure what the scenario really is.
 
Another thing I was thinking about is that the use of GDDR5 is a smart move. It overclocks far better than HBM, and you can use more of it at this stage than just 4GB. Also, the raw frequency is higher, which I think should help with FPS at lower resolutions.
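To put the frequency point in rough numbers: GDDR5 runs far faster per pin, while first-generation HBM compensates with a much wider interface. The figures below are ballpark, not exact card specs:

```python
# Per-pin rate vs interface width: GDDR5 clocks much higher per pin,
# first-gen HBM makes it up with a very wide interface. Ballpark figures only.

def bandwidth_gbs(bus_bits, gbps_per_pin):
    return bus_bits / 8 * gbps_per_pin

print(bandwidth_gbs(4096, 1.0))  # ~512 GB/s: HBM1, 4096-bit at ~1 Gbps per pin
print(bandwidth_gbs(384, 7.0))   # ~336 GB/s: GDDR5, 384-bit at 7 Gbps per pin
print(bandwidth_gbs(256, 7.0))   # ~224 GB/s: GDDR5, 256-bit at 7 Gbps per pin
```

The wide-and-slow approach wins on total bandwidth, while GDDR5's higher per-pin clocks and larger per-chip capacities are what give it the overclocking headroom and the above-4GB configurations mentioned here.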
 

pdegan2814



I know about the dual sourcing. That doesn't answer my question, because dual sourcing could mean the high end is 14nm and the low end is 16nm, or it could even mean an Apple 6S-style debacle where it's luck of the draw which one you get.

My question is which it is, and whether there is a definitive source as to how it will work.

According to the initial article I read about AMD's presentation, it's not "dual-sourcing" in the Apple 6S sense. Some of the GPUs in the Polaris product line will be manufactured by TSMC and some by GloFo, but every chip with the same model name on it will be manufactured on the same process. Using totally made-up model names, you'd see the R9 480/480X/490/490X made at GloFo but the R9 470/470X made at TSMC, that sort of thing. It will NOT be a "did you get a GloFo 490X or a TSMC 490X?" kind of situation.
 


That would be my assumption. It would be a PR nightmare to do it the way Apple did it. Apple only gets away with it because the Apple sheep will take anything they say as the word of God.

It does seem odd though. I would assume the 14nm should be more efficient but considering the reviews on the TSMC and Samsung A9 SoCs it might be better to have a TSMC 16nm part.
 


Short of the R9 Fury, everything else in the R9 300 lineup is a rebrand of the previous generation, and before that only Hawaii XT was a "new" uArch.

Maxwell V2 is actually more like the GCN 1.1-to-1.2 step; GCN 1.2 (Fiji) was all about power improvements more than anything, which is why they are not quite rebrands.

Even the newest R9 300, the 380X, is just Tahiti with the improvements from Tonga, although I would say its biggest power savings came from dropping the 384-bit memory controller for a 256-bit memory controller with faster-clocked VRAM, more than from anything else they did.

Tahiti with improvements sounds like more than a rebrand. In fact, it's exactly what you described Maxwell V2 as. The R9 300 series is power improvements and a few added onboard features. Nvidia and AMD both do this; no need to spin the reality.
 


The entire 900 series, though, was more than just a rebrand; the chips had vastly different core counts. The GTX 780 had 2304 SPUs versus the GTX 980's 2048 SPUs. There was also an increase in pixel and texture fill rate. That is not a rebrand. The R9 390 and 390X, however, have the same core count but with the power improvements (mostly in software) that Tonga had, plus more VRAM.

As I said, the 380X probably benefited more from the drop from a 384-bit bus to a 256-bit bus. The biggest issue with the HD 2900 XT was that its 512-bit bus used a ton of power. I would call it a rebrand-plus.

I would not call Maxwell V2 a rebrand, due to the massive change in core count and in the performance of the cores.

That said, yes, nVidia has done it as well; a GTX 770 was pretty much a GTX 680 with slight improvements.
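A quick way to see the core-count versus per-core trade is the usual shader-throughput estimate of 2 FLOPs per shader per clock. The clocks below are approximate reference figures, so treat the output as ballpark:

```python
# Rough single-precision throughput: 2 FLOPs per shader per clock.
# Clock speeds are approximate reference figures, for illustration only.

def tflops(shaders, clock_ghz):
    return 2 * shaders * clock_ghz / 1000.0  # shaders * GHz gives GFLOPS; /1000 -> TFLOPS

print(tflops(2304, 0.90))   # GTX 780: ~4.1 TFLOPS from 2304 SPUs at ~0.90 GHz
print(tflops(2048, 1.13))   # GTX 980: ~4.6 TFLOPS from 2048 SPUs at ~1.13 GHz
```

Fewer shaders at notably higher clocks, plus per-core architectural gains, is why Maxwell V2 reads as a new design rather than a rebrand.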
 