Discussion: Polaris, AMD's 4th Gen GCN Architecture

And now some 3DMark scores for the 460. Remember, this is the 75W budget card. It looks like roughly GTX 950/960 performance for a single card.

It's half of a 470 GPU, so CrossFired 460s matching a single 470... hmmm, sounds right to me.

http://wccftech.com/radeon-rx-460-crossfire-3dmark-11-benchmarks/
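
To make the "two 460s ≈ one 470" reasoning concrete, here is a minimal sketch of the arithmetic; the scores and scaling factor are hypothetical placeholders, not values from the linked 3DMark results:

```python
# Rough CrossFire scaling estimate -- all numbers are hypothetical placeholders,
# not measurements from the linked 3DMark 11 article.

def estimate_xfire_score(single_card_score: float, scaling: float = 0.9) -> float:
    """The second card rarely adds a full 100%; 'scaling' is the fraction it adds."""
    return single_card_score * (1 + scaling)

rx460_score = 5000.0   # hypothetical single RX 460 graphics score
rx470_score = 9500.0   # hypothetical single RX 470 graphics score

xfire_460 = estimate_xfire_score(rx460_score, scaling=0.9)
print(f"Estimated 460 CrossFire: {xfire_460:.0f} vs. single 470: {rx470_score:.0f}")
```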
 


150W TDP, 100W power consumption in games.

As we know from current cards, AMD GPUs usually draw less power in games than in GPGPU workloads, with their TDP based on the latter. Nvidia cards throttle in many GPGPU tests, keeping power consumption the same as or even lower than in games, and their TDPs tend to be lower than what the cards actually draw in the real world, like the unrealistic 145W TDP on the GTX 970.
 


I'm not saying to take that 100W figure at face value. All I'm saying is: let's wait for reviews before declaring a winner. AMD claimed 100W typical gaming power draw, and Tom's does detailed power measurements, so we'll soon know whether that's true. They also said up to 150W under torture-test conditions (i.e. FurMark).

What I want to see is the RX 480 vs. a GTX 1070 (or 1080, if you prefer full die vs. full die) running the same games and tests, with power usage compared to get a perf/W figure. We know the 1070 will be faster, but I have a feeling we might see a pretty close matchup in perf/W if the 100W value and the performance numbers we've seen are accurate...
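
For what it's worth, the perf/W math itself is trivial once reviewers publish average FPS and measured board power; a quick sketch with made-up placeholder numbers (not real benchmark results):

```python
# Perf-per-watt comparison sketch. The FPS and wattage values below are
# placeholders to show the arithmetic, not real review data.

def perf_per_watt(avg_fps: float, avg_power_w: float) -> float:
    return avg_fps / avg_power_w

cards = {
    "RX 480 (claimed ~100W gaming)": (60.0, 100.0),   # (avg FPS, avg board power in W), hypothetical
    "GTX 1070":                      (80.0, 145.0),   # hypothetical
}

for name, (fps, watts) in cards.items():
    print(f"{name}: {perf_per_watt(fps, watts):.2f} FPS/W")
```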
 


Actually, the Vapor-X has become their lower-end design and the Nitro is their new "high-end" one. I haven't even seen a Dual-X in a while, though they do still have the Triple-X.

While I have almost always bought Sapphire for my AMD cards, I like Asus designs better now. Before, they were just OK; the DCUII wasn't bad, but the Strix is a very nice design.
 
Does anyone know what that article means by the following?

Obviously there are overclocking tools like Afterburner that have had voltage control for a while. Do they mean you can set the voltage in absolute terms, rather than as +/- increments from stock? Or do they simply mean that this is the first time a GPU manufacturer itself has released a tool like this, rather than third-party software?
 


You forgot XFX; they were once strictly Nvidia and jumped to AMD. I bet they regret that decision now, lol. They have some very cool models with great cooling solutions, like the Double Dissipation series.
 
1.5GHz OC? Lol, I will wait until the actual card hits the market and review units are benched by various tech sites. This is coming from the same site that claimed there would be a 2.5GHz 1080 (on air, to boot) from a board partner. 😀
 


For sure. Until reliable benchmarks are out, everything is tentative.
 


I noted the same thing, but I'm not sure what they mean. I took it as the first time AMD or Nvidia has released a utility, and I'm not sure what they mean by voltage control. Many brands like to pretend they are the first at something when others have been doing it for years, so it's possible they mean simply a +/- adjustment like we are used to and wish to pretend they invented the idea. I'd like to believe it means being able to set the voltage to almost any value, even if it risks the GPU, but that seems silly from AMD's point of view, so I doubt it.
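
Just to illustrate the distinction being debated here (purely hypothetical state names and values, not AMD's actual tool): an offset-style control nudges every stock point by the same amount, while an absolute control writes each point directly.

```python
# Illustration only: a made-up P-state voltage table, not real driver data.
stock_mv = {"P0": 800, "P3": 1000, "P7": 1150}   # hypothetical DPM states, millivolts

# Offset-style control (what Afterburner-type tools usually expose):
offset_mv = -50
offset_result = {state: v + offset_mv for state, v in stock_mv.items()}

# Absolute control: set each state to an exact value, within whatever limits the tool enforces.
absolute_result = {"P0": 750, "P3": 950, "P7": 1075}

print("Offset mode:  ", offset_result)
print("Absolute mode:", absolute_result)
```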
 


Don't forget it's BBQWTFTech writing this article, not AMD claiming they're the pioneers at overclocking.
 
This quote, "That's right, this is the very first time any GPU maker has done this," suggests to me they meant it's the first time AMD or Nvidia has released an OC tool. They only hint at the voltage controls, but they seem to imply they have the software and will review it soon. They seem rather excited, but vague as well.

So it's possible it really is something special and not just AMD posturing.
 
You know, we are all talking about CrossFire performance, but my question is: is CrossFire still going to suck? Because as a CrossFire user, it sucks. Microstutter, not every game supports it, and sometimes you have to jump through hoops to get it working. I don't care so much about the performance if I have to deal with that. I'm getting an RX 480 to carry me through to my next build (hoping to get all the money I can for my 280s), but I'll probably sell it and get a Vega, or whatever crazy thing Nvidia comes out with early next year, rather than CrossFire it.
 


It would be the first time either has had voltage control in a GPU overclocking program, but not the first time ATI/nVidia/AMD has released an OCing tool, as we have had Overdrive for nearly 13 years now:

http://www.anandtech.com/show/1176/2
 


Not really a fair question, as PhysX is a proprietary API. They have only begrudgingly allowed CPUs to run PhysX, and only because it showcases how much faster a GPU is at running physics. I can't see nVidia ever allowing PhysX to run on a competitor's GPU.

This has always been my beef with nVidia: they are excellent at introducing new technologies, but they shackle them to their own hardware. This is one area where AMD/ATI is definitely better. Technically speaking, though, PhysX wasn't really developed by nVidia; they purchased Ageia (the company behind PhysX) to get the IP.

What I'd like to see is an open physics standard that takes advantage of OpenCL. That way both GPU makers could support it, and game developers would be more likely to use it. At present, if the game isn't a TWIMTBP title, there is no PhysX support. PhysX isn't the only physics API, but it definitely has more power behind it; as far as I know, Havok still runs on the host CPU. With something like this, nVidia might feel the pinch in matching AMD's async compute performance, as the OpenCL work would have to be scheduled around the graphics processing.
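
As a side note, for anyone who hasn't looked at a vendor-neutral physics API in practice, here is a minimal rigid-body sketch using Bullet through its pybullet bindings (pip install pybullet). This uses the CPU solver only; Bullet's experimental GPU/OpenCL pipeline, discussed further down the thread, isn't exposed through this interface.

```python
# Minimal Bullet rigid-body demo via the pybullet bindings.
# CPU solver only; illustrates the vendor-neutral API, nothing GPU-accelerated.
import pybullet as p

p.connect(p.DIRECT)                 # headless physics server, no GUI
p.setGravity(0, 0, -9.81)

ground = p.createCollisionShape(p.GEOM_PLANE)
p.createMultiBody(baseMass=0, baseCollisionShapeIndex=ground)   # static ground plane

ball_shape = p.createCollisionShape(p.GEOM_SPHERE, radius=0.5)
ball = p.createMultiBody(baseMass=1.0,
                         baseCollisionShapeIndex=ball_shape,
                         basePosition=[0, 0, 5])

for _ in range(240):                # default timestep is 1/240 s, so ~1 second of sim
    p.stepSimulation()

pos, _ = p.getBasePositionAndOrientation(ball)
print("Ball position after 1s:", pos)
p.disconnect()
```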
 
I think the one that pitched the idea of offloading physics processing to the GPU was ATI. I still remember when they talked about having three cards in your PC: two would run in regular CrossFire while the third would be there for physics processing.

Then Havok came into the picture. Back then there was an initiative called HavokFX, a true cross-vendor solution for GPU-accelerated physics. Both ATI and Nvidia worked on software to accelerate Havok processing on their GPUs, but it came to an end when Intel acquired Havok. Nvidia knew HavokFX was no more, so they acquired Ageia to continue their GPU-accelerated effort. ATI kept working on GPU-accelerated Havok physics, but it never moved beyond tech demos, most likely because Intel would not allow the GPU to take the spotlight in physics processing.

After years of no progress, with Nvidia already shipping a few games with GPU PhysX, AMD finally ditched the idea of HavokFX and worked with Bullet to offer GPU acceleration using OpenCL. Before Nvidia completely locked out hybrid PhysX setups, they did mention they had no problem with AMD licensing PhysX from them. Of course AMD refused, believing Bullet would kill PhysX, since Bullet, like HavokFX, would be a vendor-neutral solution that works on any hardware (OpenCL based).

But then why is there still no game to this day using Bullet's GPU-accelerated feature? One thing about AMD is that they prefer not to spend money if they can avoid it (just look at how they handled their stereoscopic 3D solution and FreeSync). With Bullet being a vendor-neutral solution, they hoped developers would pick it of their own free will and did nothing to help promote the feature. The thing is, developers in general have little interest in supporting a feature that is exclusive to the PC.
 


True. My intentions were pure, though! 😀

I thought maybe Polaris introduced something new to CF.
 
AMD did introduce a few interesting things for multi-GPU (like bridgeless XDMA), but in the end they always screw it up on the software implementation. AMD would be glad to unload all this stuff onto developers instead; that's why they brought Mantle to the table. They want to minimize their effort in optimizing for specific games. The problem, though, is that developers have been asking for high-level APIs in the past, because they don't want to deal with various vendor-specific APIs and learn the quirks of every available architecture.
 
Actually, @renz, let's be fair here: CrossFire works (or doesn't, depending on the game) just as well as SLI does. XDMA greatly improved it for them, and in well-supported games and certain engines it offers a big boost with little in the way of stutter.

Where it's got proper support, it's good; it even scales slightly better than SLI. The issue is that, just like SLI, it's very game- and driver-dependent, so it's hit and miss. Where support is patchy you'll get a boost, but irregular frame times negate the benefit, and where it isn't supported you're back to a single GPU.

It's quite frustrating, really, when I remember dual-GPU setups dating back to the Voodoo 2 in the late '90s.
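
To put a number on the "irregular frame times" point: microstutter shows up as a gap between average FPS and worst-case frame times, which you can check from any frame-time log. The values below are invented for illustration; in practice you'd feed in FRAPS or PresentMon output.

```python
# Quantifying frame-time consistency from a frame-time log (milliseconds).
# Sample data is made up; real logs come from FRAPS/PresentMon captures.
import statistics

frame_times_ms = [16.7, 16.9, 33.0, 16.5, 17.1, 34.2, 16.8, 16.6, 32.5, 17.0]

avg_fps = 1000 / statistics.mean(frame_times_ms)
p99 = statistics.quantiles(frame_times_ms, n=100)[98]            # 99th percentile frame time
deltas = [abs(b - a) for a, b in zip(frame_times_ms, frame_times_ms[1:])]

print(f"Average FPS:              {avg_fps:.1f}")
print(f"99th percentile frame:    {p99:.1f} ms")
print(f"Mean frame-to-frame jump: {statistics.mean(deltas):.1f} ms")
```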
 