Discussion: Polaris, AMD's 4th Gen GCN Architecture

Status
Not open for further replies.

Actually, no, that was the point of the comments. If the 480 is the same performance as a 390, then there are also custom 8GB 390s for the same price as a custom 8GB 480. It's hard to get hyped about something that already has a price-performance match on the market. If all Polaris offers is power efficiency, then I would guess there will be some disappointment. The 480 is going to have to punch a bit higher than the 390 at those prices to even start to be considered a success.

Based on the leaked Firestrike score of just under 5500, that is somewhere in the neighborhood of the 390.
 


It's quite possible that Vega will be coming out by the time the 1070 and 1080 prices normalize. I have a 390 and am not considering an upgrade of any kind unless Vega is a large jump in performance.
 


You can justify the "side" jump if the gains in "quietness" and "coolness" are great enough.

Plus, having 2x390s will have a hefty PSU requirement, whereas 2x480 are like half? I'm not talking about the usual "but mah power billzzzz!", but intrinsic downsides that come with that, such as heat and noise.

I'd wait for Vega, like Martell says. The 390 is no slouch, so...

Cheers!

EDIT: Formatting.
 


It's not backwards thinking. There is no best way to load-balance GPUs. If the drivers do it, then the game developer CAN'T do it, which means all games have to do it the same way. Yes, game developers get multi-GPU for free, but at the cost of not being allowed to innovate.
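To make the "no best way" point concrete, here is a toy sketch (Python, purely illustrative; real implementations live in driver or engine code) of two common strategies, alternate-frame rendering (AFR) and split-frame rendering (SFR). Neither is best for every workload, which is why a driver-chosen scheme forecloses options a developer might want:

```python
# Illustrative sketch: two multi-GPU load-balancing strategies.
# Frames are modeled as IDs; GPUs as indices 0..n-1.

def afr_assign(frame_ids, num_gpus):
    """Alternate-frame rendering: whole frames round-robin across GPUs.
    Scales simply, but adds a frame of latency, and inter-frame data
    (e.g. temporal effects) must be copied between GPUs."""
    return {f: f % num_gpus for f in frame_ids}

def sfr_assign(tile_costs, num_gpus):
    """Split-frame rendering: one frame's tiles greedily balanced across
    GPUs by estimated cost. Lower latency, but the balance depends on
    cost prediction -- something only the game may know well."""
    loads = [0] * num_gpus
    assignment = {}
    for tile, cost in sorted(tile_costs.items(), key=lambda kv: -kv[1]):
        gpu = loads.index(min(loads))  # put tile on least-loaded GPU
        assignment[tile] = gpu
        loads[gpu] += cost
    return assignment

if __name__ == "__main__":
    print(afr_assign(range(4), 2))   # frames 0,2 -> GPU 0; frames 1,3 -> GPU 1
    print(sfr_assign({"sky": 1, "city": 5, "ui": 1}, 2))
```

A driver can only pick one such policy per game; an explicit API lets the engine pick (or invent) its own.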

" It should be that you can add an additional GPU and boost any game" This is like saying you should be able to add more CPU cores and boost any software. I'm sorry, but our Universe does not work that way. There is no free lunch anymore.

The whole issue with OpenGL/DirectX was that the APIs were slow, really slow. Games should be running closer to 20fps, but they don't, because AMD/Nvidia/et al. create "optimized" drivers. They detect what game you're playing and do some really crazy things behind the scenes that are not to the API spec. They cheat. Want more than 20fps? Then they have to do it this way.

Mantle/DX12/Vulkan get rid of this issue. Yes, the game engine developers now need to do more work, but they will get more than 20fps without having to get the drivers to cheat. This means all GPUs get to benefit and indie games don't need to reverse engineer how AAA games do their graphics pipeline in a hope that the drivers will also cheat for their games to give decent performance.
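A toy cost model (Python; the numbers are invented for illustration) of why the thin APIs help: in a DX11-style model the driver re-validates state on every draw call, while in a DX12/Vulkan-style model validation is paid once when a command list / pipeline state is built and then amortized over many cheap replays:

```python
# Toy model of driver overhead: units of work per rendered frame.
# Costs are made up; only the ratio matters for the argument.

VALIDATE_COST = 50   # driver work to validate state for one draw
SUBMIT_COST = 1      # work to submit an already-validated draw

def dx11_style_frame(num_draws):
    """Driver validates state on every single draw call."""
    return num_draws * (VALIDATE_COST + SUBMIT_COST)

def dx12_style_frame(num_draws, reuse_frames=100):
    """Validation is paid once when the command list / pipeline state
    is built, then amortized over many frames of cheap replays."""
    build_once = num_draws * VALIDATE_COST
    per_frame = num_draws * SUBMIT_COST
    return build_once / reuse_frames + per_frame

if __name__ == "__main__":
    draws = 10_000
    print(dx11_style_frame(draws))   # 510000 units per frame
    print(dx12_style_frame(draws))   # 15000.0 units per frame
```

The win comes from moving work out of the per-draw hot path, not from the driver being smarter.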

Why do you think drivers are like 200MiB nowadays? It's not because GPUs are that crazy. Most of that cruft is all of those cheats for all of the games over the years. It's a mess.
 
Yes, GPU makers "cheat" with their drivers, but do you really want game developers to shoulder that work instead? Look at the DX12 titles available today. Nvidia hardware did not benefit in AoS the way AMD hardware did, because Nvidia hardware is not the same as AMD's. They could probably solve the issue with another tweak or a build made specifically for Nvidia hardware, but they have already said they are not going to do that, because it would take too much time. Then look at Hitman, a game touted by AMD as showing the best case of async compute usage: the DX12 path has far more stability issues, and even Radeon cards lose performance in DX12 in that game. And the new Warhammer: GCN 1.0-based cards and Kepler don't even run the DX12 version of the game.
 


There are pros / cons to both approaches.

Whilst DX12 is a pain for developers, support will improve. The reason I think this is that most games these days are built on one of a few common engines, be that Frostbite, CryEngine, Unity, Unreal, or so on. What is going to slowly start happening is that developers will build DX12 (and Vulkan) implementations into the core engine, and future games built on it will come with support.
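That prediction amounts to a renderer-backend abstraction inside the engine. A hypothetical sketch (Python; the class and function names are invented for illustration) of how an engine can hide the API choice so that every game built on it inherits DX12 support, with DX11 kept as the fallback path:

```python
# Hypothetical engine-side renderer abstraction: the game talks to the
# engine, the engine picks a backend at startup.

class RenderBackend:
    name = "abstract"
    def draw(self, scene):
        raise NotImplementedError

class DX11Backend(RenderBackend):
    name = "dx11"
    def draw(self, scene):
        return f"dx11 drew {len(scene)} objects"

class DX12Backend(RenderBackend):
    name = "dx12"
    def draw(self, scene):
        return f"dx12 drew {len(scene)} objects"

def pick_backend(supported_apis):
    """Prefer the newer API when the platform reports support for it,
    keeping DX11 as the universal fallback render path."""
    return DX12Backend() if "dx12" in supported_apis else DX11Backend()

if __name__ == "__main__":
    print(pick_backend({"dx11", "dx12"}).name)   # dx12
    print(pick_backend({"dx11"}).name)           # dx11
```

The game code never mentions an API; once the engine team ships the DX12 backend, every title on the engine gets it.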

That said, I think all major games are going to maintain a DX11 render path for the foreseeable future, as DX12 support is just a bit too patchy / differentiated between developers.
 
reports say the 480 should be available in mass quantities on release day, but only reference cards. pre-orders are already being taken; custom cards will follow later on http://wccftech.com/amd-rx-480-pre-order-in-stock/

"RX 460, 470, 480 4GB & 8GB Will Be Available June 29 For $99, $149, $199 And $229 Respectively" and "In terms of performance and bang for buck, each card in the new lineup will deliver roughly twice the performance per dollar compared to the R9 300 series and the GTX 900 series. The RX 480 is poised to take on the R9 390X and GTX 980. The RX 470 will fall just below that at around the R9 390 and GTX 970 mark. The RX 460 will be closer to the GTX 960 and GTX 950."

480 still looks like a beast at $229 and near 980 performance despite all the folks denying the rumors :)
 

Not an appropriate analogy. There are many applications that are inherently difficult/impossible to parallelize, causing them not to benefit from more cores. Graphics processing is parallel by nature, and already scales to thousands of 'cores' (shaders) in modern GPUs. There's no inherent reason why every game couldn't benefit from multiple GPUs.
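The "parallel by nature" point can be shown with a toy shading example (Python; the per-pixel function is a stand-in invented for illustration): because each pixel is computed independently, the framebuffer can be split across any number of workers and reassembled with a bit-identical result:

```python
# Toy example: pixels are independent, so the same "shader" can be
# split across any number of workers and merged back identically.

def shade(pixel_index):
    """Stand-in per-pixel computation (a real shader would sample
    textures, run lighting math, etc.)."""
    return (pixel_index * 31) % 256

def render_serial(num_pixels):
    return [shade(i) for i in range(num_pixels)]

def render_parallel(num_pixels, num_workers):
    """Split the framebuffer into contiguous strips, one per worker,
    then concatenate -- no coordination needed between strips."""
    strip = -(-num_pixels // num_workers)  # ceiling division
    out = []
    for w in range(num_workers):
        lo, hi = w * strip, min((w + 1) * strip, num_pixels)
        out.extend(shade(i) for i in range(lo, hi))
    return out

if __name__ == "__main__":
    assert render_parallel(1000, 4) == render_serial(1000)
    print("4-way split matches serial output exactly")
```

In practice multi-GPU scaling is limited by data movement and frame-to-frame dependencies rather than by the math itself, which is exactly where the load-balancing policy matters.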

Do you have a source regarding drivers "cheating" and doing stuff not to the API "spec"?

Edit: Guess we're getting off-topic here.
 
seems it is overseas at this point. newegg had a few listings up but they took them down real fast.

guess only our overseas brothers and sisters get to pre-order right now. i'm looking around to see if they are available somewhere we can get to them here in the US. since they are all reference cards right now, perhaps some of the brands sell them directly, like evga does.
 


I want to get a reference 480 4gb. I don't need any extra performance considering my rig is a bottleneck anyway, and when I rebuild next year I'll probably get whatever comes out in January - Vega maybe? I just want to sell my old GPUs now while they still have some value. My rig bottlenecks them somewhat already. Hoping Amazon does pre-orders, free prime shipping!

Checked Sapphire and XFX, nothing yet, they are the only exclusives I think right?
 
powercolor and HIS also stick to amd. though i don't know about HIS. newegg pulled all their cards a while ago due to very shady business practices, so no idea if they are still making new cards or not.
 
I feel like no one here really has the programming experience to pitch in well enough about all of this stuff. It's all very interesting, but I feel like I should get someone like Pinhedd to come to this thread and tell us some of this stuff. Because I know none of us have worked with a team making a PC game.

$99 is a wonderful price for the RX 460. I'm expecting it to fall somewhere around GTX 960 performance, R9 380 performance, so if that is the case Nvidia's 750Ti will no longer be a viable option.
 
yah i think maxwell can be replaced all around. not even the 75w 950 models will be worth much if a 460 beats them for less. looking forward to what nvidia has up their sleeve to combat the lower end cards amd is putting out. win win win win win win for us anyway :)

and i'm looking and don't see any amd pre-orders or product pages out yet on our side of the pond. also noted how many brands only make nvidia cards. i never noticed before, but a list of models for the amd cards will include a lot less brands to have to search!!
 


I don't even deal with those third party brands personally. I understand getting an off-brand GPU maybe for a $100-$200 card, but for Nvidia's $700 cards I'd only ever look into the mainstream brands.
 


I love how developers kept saying DX12 is what they wanted, and now they claim it is a pain because they got what they wanted: an API closer to the metal, where they now have to do more of the optimization work themselves.

I guess it makes sense considering how many big games get launched with poor performance and how rarely they ever get vastly better from patches.
 


So do our drivers today still contain all the game-specific stuff for games from 15 years ago?
 


I guess it would depend on a lot of factors, but I would assume yes, since most of the driver-side optimizations don't take up a lot of space; they are normally just text telling the driver how to handle the memory for the game.
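A minimal sketch of what such an app-detection profile table might look like conceptually (Python; the executable names and settings are invented for illustration, and real drivers store this in binary profile databases rather than source code):

```python
# Conceptual sketch of a driver-side application profile table:
# detect the running executable, then layer per-game tweaks on top
# of the driver defaults. All entries here are invented examples.

DEFAULT_PROFILE = {"shader_cache": True, "threaded_submit": False}

GAME_PROFILES = {
    "some_aaa_title.exe": {"threaded_submit": True},
    "old_2001_game.exe":  {"shader_cache": False},
}

def profile_for(exe_name):
    """Merge a game-specific override (if any) onto the defaults."""
    merged = dict(DEFAULT_PROFILE)
    merged.update(GAME_PROFILES.get(exe_name.lower(), {}))
    return merged

if __name__ == "__main__":
    print(profile_for("SOME_AAA_TITLE.EXE"))  # defaults + override
    print(profile_for("unknown_indie.exe"))   # just the defaults
```

Each entry is tiny, which fits the point that keeping profiles for years of old games costs little space even if the accumulated shader replacements and workarounds do not.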

I wouldn't know for sure, since a game from 15 years ago would run at an insane FPS (try loading Descent on a PC with a 980 Ti; you will get a few hundred FPS) due to just brute-force hardware.
 


The simple answer is "no, new drivers don't have old code".

But, as usual, it's more complicated than that, haha.

You have driver optimizations that nVidia and AMD do for some subroutines in OGL and DX; this is dead easy to see in Linux. In Windows it's different, since you have more layers of stuff between the video card and the code, with some promises in between.

For old stuff on new hardware: as long as it was fully OGL, it should run on newer software, but not DirectX stuff. Microsoft, for better or for worse, cleans up the API every once in a while, so old API calls will either fail miserably or need a workaround patch to work, even when they are "legit" calls.

I can say this with full confidence, because it's the same for every single API out there.

Cheers!
 
I can say fairly confidently that there were not game-specific driver updates like we have today 15, 10, even 5 years ago. There were driver updates to fix game-specific issues, but unless it was a major title and it was totally broken, you usually had to wait for the fix to come later on. Nowadays, every time a new AAA title comes out, we get updates.
 