Discussion: Polaris, AMD's 4th Gen GCN Architecture

My understanding is that 1200 MHz is the boost clock, not the base clock. Last generation, the boost clock was rarely attainable in real world use, and just getting consistently up to the base clock was a struggle.

Hopefully that has been addressed, but it will depend on how hard they need to push clocks to hit their performance goal. I'm still thinking there was probably some truth in that HardOCP article, and I'm wondering if there were any compromises, besides price, AMD needed to make. Overly high clock speeds would definitely be a potential candidate.
 


Well, if you look at the Steam VR scores you have to consider that that benchmark puts a stock GTX 980 above the Fury X. I don't think anyone here would argue that a 980 is faster than a Fury X; the benchmark heavily favors Nvidia (and that is on Maxwell, not Pascal).

What that benchmark *IS* useful for is comparing AMD to AMD: the RX 480 is scoring almost double the 380, which is an impressive gain and puts it up there with AMD's R9 390X. I wouldn't put too much stock in the benchmark as an absolute measure of real-world performance, though; I'd like to see actual, real-world VR applications to see where the cards really fall. Just the same as AMD being extremely strong in AotS doesn't carry over to all games, or even all RTS games: specific benchmarks tend to favor specific hardware.

In regard to Polaris having important tech for VR, I'd say the primitive discard accelerator is pretty important. I remember many years ago (back when the GeForce 2 was out) there was an oddball card released using some Sega tech doing exactly this, and that card punched well above its weight class based on its specifications (sadly I can't for the life of me remember what it was called now). That one change alone could potentially account for much of the improvement in Polaris, as it allows the GPU to throw away a lot of geometry that the game engine would otherwise push through the pipeline even though it was never going to be visible. I'm actually surprised neither company already has a hardware block that does this, but as far as I'm aware this is new on either side. If the block works like the Sega system did, the other big bonus is that this function is totally independent of the software or driver; it just works in the background culling unnecessary data, so it should benefit any game, be it DX9, DX12 or VR.
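AMD hasn't published the exact logic, but the basic idea of a primitive discard stage is easy to sketch. Roughly (a toy CPU-side illustration with made-up helper names, not AMD's actual hardware or any real API), it rejects triangles that can never produce a visible pixel before the rasterizer ever sees them:

```cpp
#include <array>
#include <cmath>

struct Vec2 { float x, y; };            // vertex position after projection to screen space
using Triangle = std::array<Vec2, 3>;

// Rough idea of a primitive discard stage: drop triangles that can never
// contribute a visible pixel before they reach the rasterizer.
// (Illustrative only -- the real hardware works on clip-space vertices
// inside the geometry pipeline, not on a CPU array like this.)
bool shouldDiscard(const Triangle& t, float screenW, float screenH)
{
    // 1. Degenerate / zero-area triangles cover no pixels at all.
    float area = (t[1].x - t[0].x) * (t[2].y - t[0].y)
               - (t[2].x - t[0].x) * (t[1].y - t[0].y);
    if (area == 0.0f) return true;

    // 2. Back-facing triangles (negative signed area with CCW winding).
    if (area < 0.0f) return true;

    // 3. Triangles entirely outside the screen rectangle.
    float minX = std::fmin(std::fmin(t[0].x, t[1].x), t[2].x);
    float maxX = std::fmax(std::fmax(t[0].x, t[1].x), t[2].x);
    float minY = std::fmin(std::fmin(t[0].y, t[1].y), t[2].y);
    float maxY = std::fmax(std::fmax(t[0].y, t[1].y), t[2].y);
    if (maxX < 0.0f || minX > screenW || maxY < 0.0f || minY > screenH) return true;

    return false;   // survives: hand it to the rasterizer
}
```

Everything that returns true there is work the shaders and rasterizer never have to touch, which is where the "free" performance comes from.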
 
I'm no GPU expert, but from reading other forums discussing GPU architecture and tech, people say Nvidia already had such hardware in Maxwell. They say that with AMD now employing similar tech in Polaris, AMD hardware might not be at the kind of disadvantage it was before, for example in tessellation.

 


I think the card you are talking about is the Matrox Parhelia. The feature is known as Occlusion Culling or Z-Culling. The following link describes what the feature is:

Efficient Occlusion Culling

Essentially anything hidden behind something else from the point of view of the camera is culled (i.e. not rendered, since it can't be seen) to increase performance. This feature is in all modern GPUs as far as I know. Whether the feature is used is dependent on the game engine.
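At its core it's just a per-pixel depth comparison. A minimal sketch of the idea (purely illustrative; real GPUs do this in fixed-function hardware, usually hierarchically over tiles before any shading happens):

```cpp
#include <vector>

// Toy version of the depth test behind occlusion / Z-culling.
struct DepthBuffer {
    int width, height;
    std::vector<float> depth;   // 1.0 = far plane, smaller = closer to the camera

    DepthBuffer(int w, int h) : width(w), height(h), depth(w * h, 1.0f) {}

    // Returns true if the fragment at (x, y) is visible and should be shaded;
    // fragments behind already-drawn geometry are culled and never shaded.
    bool testAndWrite(int x, int y, float fragmentDepth) {
        float& stored = depth[y * width + x];
        if (fragmentDepth >= stored) return false;  // hidden: cull it
        stored = fragmentDepth;                     // visible: record the new depth
        return true;
    }
};
```

Engine-level occlusion culling does the same thing one step earlier, skipping whole objects whose bounding volumes are already covered, which is why it depends on the game engine actually using it.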
 
http://wccftech.com/amd-rx-480-faster-than-nano-980/

I have an R9 380X for my second rig. I'm very happy with it and all that, but damn... an 8GB RX 480 for 249€?

I wonder what that will translate into for France/Germany and specifically what the partner cards will sell for? 300€ maybe?

It is seriously tempting to sell it off to a friend (who's not a graphics whore) and slot one in!

1080p games are now using more and more VRAM... this would future-proof my second rig for a good while, even if I end up with a 1440p monitor. I heard that the RX 480 is a good overclocker as well.

Can't wait for the benchmarks at the end of the month in addition to what is already out there. I'm actually more excited about this than I was about the 1080/1070 launch. Which I didn't expect!
 
@renz that would explain Maxwell performing so well compared to its specification...

@techgeek well remembered! That sounds like a different system then, although the principle is similar. I was looking back at older cards the other night; it brought back a few memories. The late 90s were exciting times in tech!
 


I am feeling the same way. I was looking forward to seeing how far Nvidia pushed the envelope, but find myself much more excited to see these new AMD cards and the effects they promise to bring, if they deliver.
 


Which is a feature heavily used in the Source engine; I listen to the developer commentary for all of Valve's stuff. It is great tech, honestly.
 
Some 470 benchmarks, including CrossFire: http://wccftech.com/radeon-rx-470-crossfire-3dmark-11-benchmarks/ It meets the minimum VR scores and lands around an R9 290's score. Not bad for what will be roughly a $150 card.
 


I don't think that's very good, honestly. Two would be $300; you would expect them to beat a 290, not match it, at the same price point.

Edit: Oh wait, I thought you meant two matching a 290, not one.
 


The funny thing is that people are so used to meh returns, since GPUs have been stuck on 28nm, that this kind of performance seems amazing. If 20nm hadn't failed we would have had a much better jump for Hawaii XT (and better thermals) and Maxwell, both of which were supposed to be 20nm. Then 14nm wouldn't have been as big of a jump.

But since 20nm did fail, we have a much larger gap to jump, which is allowing for much better performance at a much lower TDP. I would imagine it would have been the same if Intel had skipped 22nm and gone straight to 14nm.

Honestly, I am expecting this kind of performance. I guess after being involved with computers so long it takes a lot to really wow you. I mean NetBurst-to-Conroe wow you.
 
That is very true; skipping a node does make for some big jumps in performance. I was noticing that numerous sites are reporting decent price drops on Maxwell cards in the last couple of days. I haven't looked myself, but it seems the Pascal cards, and likely the soon-to-arrive Polaris cards, are having the desired effect on Maxwell prices.
 
Which I think I am skipping. I think my 980Ti will hold me at 1080p until a good successor with HBM2 comes out that is not priced insanely. Considering the 1080's price, I am sure the 1080Ti is going to be priced way worse, and a good AMD replacement is not going to happen until next year at the earliest anyway.
 


Yeah, all these launches are telling me is that anyone with a previous-gen high-end card is fine, as only one card is actually notably faster: the GTX 1080. The GTX 1070 matches the 980Ti / Titan X / Fury X at a big discount. The RX 480 and 470 match the tier below at a big discount.

That said, I'm toying with the idea of upgrading, as I have a now quite dated R9 280; an 8GB RX 470 or RX 480 with lower power draw and current high-end performance might be a nice option. I'm only really interested in 1080p gaming right now, although something that has the chops to support a VR headset down the road is tempting...
 


Well, I am currently enjoying my 7970 GHz, but if the RX 480 is better and consumes less (meaning less heat), then I will swap it in. Now I don't know what I'd do with the 7970 GHz though, haha.

What do you guys do with old video cards like that? Did you all give your GTX 570s to your parents? haha.

Cheers!
 


All my old cards get passed down to friends / family... One friend who has a first-gen i3 and was struggling to run Minecraft at more than 10 fps on the iGPU got my HD4600 and was literally amazed at how much better it runs 😛

Another friend got my GTX 560 so she can now play The Sims smoothly (again, she was stuck on an iGPU before).

My parents also inherited an old ATi HD 2600 so my mum can play Plants vs Zombies in all its glory... nothing goes to waste! One of my latest projects has been to get a second gaming machine set up so my fiancée and I can game together (we're both into Star Citizen, and she quite likes most RPG and MMO titles, although I've yet to teach her the ways of the RTS and how 'Warcraft' really should be played haha). In the end I kind of went to the dark side and purchased an Intel / Nvidia laptop (primarily for CAD for work, although as a happy coincidence it also happens to game fairly well) 😛
 


The GTX 900 series was a way more significant jump than the GTX 1000 series. Let's look at it this way. The performance jump alone was about the same, the GTX 970 slightly beating out the 780Ti, just as the GTX 1070 has beaten out the 980Ti. The change in performance from the 700 series to the 900 series is about identical to the change in performance from the 900 series to the 1000 series.

Now let's take a look, though, at power consumption. The GTX 900 series really decreased power requirements over the 700 series, and that was all still on 28nm. The GTX 1000 series, which drops down to 16nm, has higher power requirements than the GTX 900 series. Therefore, the node shrink was actually a huge disappointment in my mind. The perf/watt jump to Maxwell was much larger than the perf/watt jump from Maxwell to Pascal, because the jump to Maxwell both increased perf and decreased watts, while the jump to Pascal increased perf but watts also increased, though by a small amount.
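To put that argument in plain arithmetic (illustrative numbers only, not measurements of any specific card): relative perf/watt is just relative performance divided by relative power. A generation that gains, say, 10% performance while cutting power by 30% lands at 1.10 / 0.70 ≈ 1.57x perf/watt, while one that gains 30% performance but also adds 5% power only manages 1.30 / 1.05 ≈ 1.24x. That's the shape of the Maxwell-vs-Pascal comparison above.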
 


That would be a good upgrade. I went from an HD 7970 GHz Vapor-X OC to a 980Ti, so I am fine for now. You need a bit more power and more VRAM, honestly, as games are using more and more.



What? A stock GTX 1080 is a 180W TDP card and gives 30% better performance than a stock GTX 980Ti, which is a 250W TDP GPU. The 1080 at stock uses less power and gives better performance.

If we compare the GTX 980 to the 780Ti, the jump was not nearly as big. The 780Ti is a 250W TDP GPU and the 980 is a 165W TDP GPU. Yes, way less power, but the performance gains from the 780Ti to the 980 were not nearly as great as the performance gains from the 980Ti to the 1080; in fact the 980 was about on par with the 780Ti in terms of performance, maybe a bit ahead.

The 1080Ti will probably be a 250W TDP GPU again but will probably add another 10% or so on top of the 1080 in performance.

Either way this is off topic and I was just making an observation of how used to the performance I have gotten.
 
A 1080Ti only 10% faster than a 1080 would be abysmal. I kinda get that the 980Ti was a nuisance with how well it performed (vs the 980), but 10%? Ugh. Makes me lose all hope.

Is Vega the hero we need?
 


I said "10% or so". I have no idea how much more it will be although rumors are showing the 1080Ti being at least 50% faster than a 980Ti which would mean about 20% better performance over a 1080 which is not too bad.

What AMD needs is a good process node and a good uArch that doesn't draw a lot of power.
 


I am well aware that you haven't set the 1080Ti's performance in stone. I was just expressing my fear of such a small performance gain.

No idea what uArch is D: All I hope for is that the announced performance/watt from the 480 will translate into Vega and will get further boosted by HBM2, which seems to be the solution for high resolutions.
 


The Fury was pretty efficient compared to the 980Ti; both were 250W GPUs. And Polaris seems to be more efficient so far, so I'd only expect Vega to be more efficient still. Though Polaris isn't that efficient: if Nvidia releases the 1060 with the same performance as the RX 480, it'll probably draw about 30W less, like the GTX 960 did. Then again, that 30W is near negligible.
 
The Fury is only that efficient compared to the 980Ti thanks to HBM, which is probably why AMD couldn't counter the 980/970 right away: they were waiting for HBM. Without HBM, Fiji probably would not have fit within the 300W limit. If AMD wants to be more power efficient they need to radically change how their architecture works. With Polaris, AMD has already hinted that most of their efficiency gain comes from the node shrink; they probably don't want to stray too far from the GCN architecture.

Right now I'm interested to see how power efficiency compares between Pascal and Polaris.
 


uArch = microarchitecture, i.e. the specific design, such as Skylake, Zen, Polaris or Pascal. All of those are uArchs. GCN has shown itself to be pretty power hungry in the past. GCN seems to have benefited from 14nm though, as it is not as power hungry as it was on 28nm. It is almost like AMD's Phenom on 65nm vs Phenom II on 45nm: a massive power draw difference.

That said, HBM2 won't be a massive advantage for AMD, as everything points to Nvidia also having an HBM2 card. Plus, so far everything spec-wise is showing both Vega with HBM2 and Pascal with HBM2 only having 768GB/s of bandwidth vs the reported 1TB/s that HBM2 should bring, which makes me wonder if there are problems that are not being reported.

The real benefit of HBM2 will be increased total VRAM, because first-gen HBM is severely limited at 4GB.
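For rough context (spec math only, not a claim about any particular card): each HBM2 stack has a 1024-bit interface and the spec tops out at 2 Gbps per pin, so one stack can deliver 1024 × 2 / 8 = 256 GB/s and four stacks reach 1 TB/s. A 768 GB/s figure would correspond to those same four stacks running at roughly 1.5 Gbps per pin, so lower-than-spec memory clocks alone would be enough to explain the gap.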
 


GCN isn't really that bad. GCN1 and GCN2 were just slightly less power-efficient than Kepler. It's just that Nvidia made bigger strides with Maxwell than AMD did with GCN3, and Nvidia also launched a fuller lineup, where AMD is still selling GCN2 and even GCN1 to this day. Now AMD is going GCN4 with further focus on power efficiency improvements (beyond just the process shrink). I'll be interested to see if they're back on par with Nvidia - or, heck, if they've even ended up a bit ahead.
 