AMD Radeon R9 300 Series MegaThread: FAQ and Resources

If the first small Polaris GPU comes out in Q1 (or possibly Q2) this year, then comparing against Maxwell v2 is a good idea, as that is what will be out. That's a big if, but still possible (especially for such a small GPU, it should ramp up pretty quickly). Getting it out early could net them some laptop wins at least 😛
 
Laptop wins are a bit tricky because they involve OEMs. In the past, AMD used to dominate mobile discrete GPUs; it was the opposite of the desktop market. In mobile, it was AMD that commanded 60% of the market share. Kepler's efficiency did help Nvidia gain some ground, but ultimately it was AMD's own decision that left them almost non-existent in the laptop market. When Rory Read became CEO of AMD, he rejected OEM deals if the OEM did not intend to sell the product in volumes he thought were worth the R&D. It was part of his strategy to reduce AMD's bleeding. Nvidia faced the same problem as AMD; the R&D spend might not really have been worth it in some of these OEM deals, but they took them anyway because they were trying to increase their market share back then.

AMD said they were sampling to partners right now. The desktop version of Polaris might come out in Q1, but for laptops we're probably not going to see them until Q2 or even Q3 this year.
 
I half agree with you on the lappy projection, renz. The simple reason being that MXM cards are not that hard to produce and wire. If we talk "difficulty", it would be in the same range as a regular PCIe PCB. What would be left to know, to see if OEMs would salivate over them, is bulk price and performance, since wattage seems to be well within notebook range at first glance.

Now, the other half agrees because AMD drivers in lappies are bad (to say the least), and OEMs would go for desktop bulk (in terms of PCB development cost) before lappies. I believe history also backs that statement, since low power parts always show up in desktop first and lappies second.

Let's see if AMD surprises us.

Cheers!
 
Well, at least that's how it was back in 2012. The way I understand it, each design win involves some R&D from the graphics chip maker as well. This is just my own speculation, but I think Rory Read would reject an OEM deal if the OEM did not intend to sell machines using AMD products in very large quantities, like the contract they got with Apple. Rory Read insisted that deals be worth the profit. He once mentioned that he decided to delay AMD's next series (R9 200) because he believed the 7000 series had not yet made enough profit for what it was worth.
 
AMD Radeon Software Crimson Edition 16.1 Hotfix
http://support.amd.com/en-us/kb-articles/Pages/AMD-Radeon-Software-Crimson-Edition-16.1-Hotfix-Release-Notes.aspx



Resolved Issues
[82645] Fallout 4 – The compass flickers during gameplay on AMD Radeon™ R9 290 and AMD Radeon™ R9 295X2
[84118]/[59475] Elite: Dangerous - Poor performance may be experienced in Supercruise mode under Windows® 10
[82887] The driver installer appears to hang at various stages of the install process
[84116] Call of Duty: Black Ops 3 – random frame freezes may be observed during gameplay
[84112] Frame Rate Target Control (FRTC) settings do not apply consistently to all games. In order for FRTC to function properly, Vertical Refresh/VSync must be turned off
[58978] DiRT Rally – A crash may occur when starting a new race with AMD Crossfire™ and AMD FreeSync™ enabled
[83370] The AMD Gaming Evolved overlay may cause a black screen, or introduce game stutter
[82497] Assassin's Creed Syndicate - Using "Very High" graphics settings in 3x1 Eyefinity mode may cause displays to switch off
[82093] Star Wars™: Battlefront - Some flickering may be experienced in shaded areas of the screen while game levels are loading
[82788] Call of Duty: Black Ops 3 - Frame freezing during gameplay may be experienced
[82794] Just Cause 3 - The system may hang when task switching on systems with AMD CPUs and GPUs
[82777] Just Cause 3 - Application profile setting added for laptops with Switchable Graphics
[82779] Fallout 4 - Gameplay may be choppy in AMD FreeSync™ mode in Ultra mode at 1440p resolution
[82895] Fallout 4 - Brightness flickering observed with AMD FreeSync™ enabled on certain displays
[80254] cnext.exe intermittently crashes during Windows® shutdown
[81809] A crash may be experienced if an HDMI™ display is a cloned display device on an HP Envy 15 notebook
[82485] "Failed to create OpenGL context" error message may appear after installation
[82842] "Cannot find RadeonSettings.exe" error message may appear during installation
[83277] "AMD Install Manager has stopped working" error message may appear during installation
[83484] "Cannot find cncmd.exe" error message may appear during installation
[82902] Display may flicker on certain laptops after prolonged gameplay with AMD FreeSync™ enabled
[81489] Unable to create 4x1 or 2x1 portrait mode SLS with 4K displays
[82042] Video corruption may appear in the Movies & TV app when VSR is enabled and scaling mode is set to "Full panel"
[82492] Portrait Eyefinity mode may not be configured correctly using Radeon Additional Settings
[82695] No display on certain laptops when toggling display mode or connecting an HDMI™ display
[82900]/[81859] Flickering may be experienced on some monitors when AMD FreeSync™ is enabled
[80064] Notifications reverting back to English on non-English systems after reboot
[82490] Misaligned UI may be observed on the Bezel Compensation screen
[81777] Launching a game from the Game Manager may launch on a single display after enabling and disabling AMD CrossFire™ in a 3x1 AMD Eyefinity™ setup
[81856] Marginally increased power consumption may be observed during video playback
Known Issues
[79428] StarCraft II: Flickering may be observed in the 'Episode 3' campaign
[80836]/[59701] Call of Duty: Black Ops 3 - Flickering or poor performance may be experienced when running in AMD Crossfire™ mode
[81736] Call of Duty Online - The game may crash if the Print Screen key is pressed on a 4K monitor
[81448]/[77961] A system restart may be experienced when waking the system from sleep mode on some systems with Intel processors
[81651] Star Wars™: Battlefront - Texture corruption may be experienced if the game "Field of View" setting is > 100
[82213] Star Wars™: Battlefront - Some users may experience minor flickering or corruption at different game locations or while viewing the in-game cinematics
[81915] Assassin's Creed Syndicate - Building textures may be missing on some AMD Freesync™ displays with VSync enabled
[82387] Assassin's Creed Syndicate - The game may crash if the Gaming Evolved "In Game Overlay" is enabled. A temporary workaround is to disable the AMD Gaming Evolved "In Game Overlay"
[82789] Total War™: Rome II - Choppy gameplay may be experienced
[84509] Gaming Evolved client does not initiate when launching Metro Last Light if AMD CrossFire™ is enabled
[84434] Far Cry 4 – A crash may occur after performing (ALT + Enter) to switch between windowed/full screen modes with the AMD Gaming Evolved "Video Capture" feature turned on
[82499] Talos Principle - A crash may occur while changing Gaming Evolved Video settings or pressing ALT + Enter when "In Game Overlay" is enabled
[84591] Mad Max – Low FPS performance may be experienced in game when AMD FreeSync™ and AMD CrossFire™ are enabled
[84428] Battlefield Hardline – A crash may occur when changing graphics settings from "Ultra" to "High" during gameplay
[83839] Some games may experience brightness flickering with AMD FreeSync™ enabled
[83833] Radeon Settings - AMD OverDrive™ clock gauge needles for the secondary GPU may be in wrong position when the system is idle and the secondary GPU is inactive
[83832] Radeon Settings – AMD OverDrive™ Power setting changes on the secondary GPU are not immediately displayed. This is seen only on dual GPU graphics cards, such as the AMD Radeon™ HD 7990 and Radeon R9 295x2
[83287] Game stuttering may be experienced when running two AMD Radeon™ R9 295X2 graphics cards in AMD CrossFire™ mode
[82892] Display corruption may occur on multiple display systems when it has been running idle for some time
[83031] Star Wars™: Battlefront – Corrupted ground textures may be observed in the Survival of Hoth mission
[82824] Call of Duty: Black Ops 3 – Flickering may be observed if task switching is used during gameplay
[81915] Assassin's Creed Syndicate – Building textures are missing and game objects stutter if VSync is enabled in Quad AMD Crossfire configurations
 
So I was reading that Polaris isn't going to use HBM, but will be utilizing GDDR5? Is that to keep prices down? No mention of GDDR5X? I'm assuming at some point Polaris will use HBM down the road. I'm just wondering why they would say that when Nvidia has pretty much already said Pascal will be using HBM2 and GDDR5X... Makes me nervous that Nvidia will continue to dominate and keep driving the price of graphics cards up...
 


Nvidia's high prices are nothing new. It's just that a few years back AMD forced them to lower their prices. Back then their top-end card would cost you $800. Right now they're just going back to what's 'normal' for them.
 


HBM2's volume production date is around Q2 2016, so don't expect big GPUs with HBM2 until Q3. Also, AMD showed a low-end graphics card; adding HBM to it would increase the price, which is not ideal for such a GPU.
 


Polaris is the name of the overall architecture. AMD recently showed a GPU that is going to come out *first*, and that specific GPU is using GDDR5 memory. It's only a low-end chip though (roughly GTX 950 performance at lower power, based on the demo). On that basis, there would be no advantage to using HBM on it, as it obviously hasn't got enough shaders to make use of it.

The higher end parts will probably use GDDR5X, with HBM for the top part. Remember that using HBM requires an interposer, like with Fiji, so it will only be used on the high end (and is only really needed on cards that need more than 300 GB/s of memory bandwidth; below that, GDDR5 should be sufficient).
 
I agree with that. The GPU AMD showed does not seem to justify making use of HBM in terms of performance. The only use case I could imagine that *might* justify it is going mobile with it: the power savings could be tempting. That GPU with HBM slapped onto an MXM board for *any* notebook would be sweet. The price wouldn't be though, haha.

Plus, GDDR5 is not obsolete yet. In terms of speed it's not that far away from HBM, especially in low-end stuff. GDDR5 could become the new "DDR3" for low-end cards.

Cheers!
 


Is it just a limitation of HBM that it doesn't perform well (in terms of blowing away GDDR5) or is it that developers aren't taking advantage of HBM's potential?
 


The majority of people do not benefit from HBM because memory bandwidth is not the biggest issue holding systems back performance-wise.

Of course it will be beneficial in the future, when higher resolutions and VR become more commonplace.

But top-end VRAM speed has almost always been more than enough.
 


Jimmy is correct: HBM is only needed when the rest of the GPU is fast enough. Its other advantage is that it saves power compared to an equal amount of GDDR5 bandwidth; however, as it's new, it's also expensive.

In terms of performance, the Fury with first-gen HBM has 512 GB/s of memory bandwidth, compared to the fastest GDDR5 setups at around 380 GB/s. The neat thing for Fury is that HBM also uses *less power* than that 380 GB/s worth of GDDR5 (that is how the Nano actually manages to match the efficiency of Maxwell's best cards even though GCN itself is more power hungry).
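If you want to sanity check those numbers, peak memory bandwidth is just the per-pin data rate times the bus width. A quick back-of-the-envelope sketch in Python (the clock figures are typical values from memory, so treat them as approximate):

```python
# Theoretical peak bandwidth = per-pin data rate (Gbit/s) * bus width (bits) / 8
def peak_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Return theoretical peak memory bandwidth in GB/s."""
    return data_rate_gbps * bus_width_bits / 8

print(peak_bandwidth_gbs(1.0, 4096))  # Fury X, first-gen HBM:    512.0 GB/s
print(peak_bandwidth_gbs(6.0, 512))   # R9 390X, 6 Gbps GDDR5:    384.0 GB/s
print(peak_bandwidth_gbs(5.5, 256))   # R9 380, ~5.5 Gbps GDDR5:  176.0 GB/s
```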

The thing is though, there wouldn't be much advantage to, say, putting HBM on the 380. That only has a 256-bit GDDR5 memory bus, which is plenty for the card (the 380 / 380X can actually have up to a 384-bit memory interface, but AMD didn't bother as there isn't enough of a performance boost from it). Yes, you could achieve that bandwidth easily using HBM and it would save a bit of power; however, it would also cost a lot more, and price is a key factor lower down the performance spectrum.

You could probably make a better version of the 390 / 390X cards with HBM, as both those cards do benefit from memory bandwidth and both also consume more power than they should (the 390X uses much more power than the Fury X, for example, despite being slower; the very large GDDR5 memory interface is at least partly to blame imo). It's also in a price segment where it could be achievable. Anything less than a 390 (circa 2500 shaders), there's probably no point, so your entry and mid range are going to stay GDDR5 this generation. I fully expect Nvidia is doing the same as well.

What will happen, as it did with GDDR5, is that the newer memory will slowly get adopted on lower and lower end cards, as a card with 2500 shaders (or the future equivalent, as designs will probably change) becomes 'entry' 😛 Also, chances are they'll get better at making interposers and HBM as time goes on, so it'll get cheaper (so lower end cards might use it for the power savings).

This is all for discrete cards though. For APUs there's much more of an argument for a pool of HBM, as DDR3 / DDR4 system memory is really slow compared to even GDDR5, which really holds an APU's integrated graphics back (Intel's top integrated parts also include a pool of on-chip memory, and it really boosts the iGPU on their 'Iris Pro' parts).
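To put rough numbers on "really slow", here's the same arithmetic for typical system memory versus a modest GDDR5 bus (the configurations are my own assumed examples, not from any specific APU):

```python
# Peak bandwidth = transfer rate (MT/s) * bus width (bytes) / 1000, in GB/s
def peak_bandwidth_gbs(mt_per_s: int, bus_width_bytes: int) -> float:
    """Return theoretical peak memory bandwidth in GB/s."""
    return mt_per_s * bus_width_bytes / 1000

print(peak_bandwidth_gbs(2133, 16))  # dual-channel DDR3-2133:    ~34 GB/s
print(peak_bandwidth_gbs(2400, 16))  # dual-channel DDR4-2400:    ~38 GB/s
print(peak_bandwidth_gbs(5500, 32))  # 256-bit GDDR5 @ 5.5 Gbps:  176 GB/s
```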
 
By design, Tonga is a 384-bit GPU, though the way I heard it, AMD decided to use 256-bit instead because of cost. As for HBM vs GDDR5, I believe the latter will still be in use for the low end (mid-range, even) for years to come.
 
As has been stated before, HBM is not being used to its full potential, firstly because of the hardware design itself. AMD has historically been more bandwidth-constrained than nVidia because of design decisions (can't remember the details, but it had to do with the ROP arrangement). With Fury, after reading most of the tests done on it, the max bandwidth it was able to push was around 400GB/s out of the theoretical 512GB/s. So the GPU was just "not fast enough" to saturate the available bandwidth.

That being said, there is also a software side to it. DirectX and OpenGL don't optimize the API towards being memory intensive. I would imagine that's because memory was a scarce resource back in the day, so APIs and software development try their best to stay away from heavy memory use. I do not think HBM or HBM2 will reverse this trend, since I don't think there will ever be a time when memory bottlenecks are non-existent.

There was a test to see if the memory subsystem is a problem or not. I can't remember what it was though. Haha, my memory today is a real mess. Sorry about that. A crude DIY version of the idea is sketched below.
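I can't speak for whichever test that was, but one rough way to check effective bandwidth yourself is to time a large device-to-device copy. A minimal sketch with PyOpenCL (assumes a working OpenCL driver; the buffer size and the read-plus-write accounting are my own choices):

```python
import pyopencl as cl

# Time a large buffer-to-buffer copy on the GPU to estimate effective bandwidth.
ctx = cl.create_some_context()
queue = cl.CommandQueue(
    ctx, properties=cl.command_queue_properties.PROFILING_ENABLE)

nbytes = 256 * 1024 * 1024  # 256 MiB per buffer (arbitrary size)
src = cl.Buffer(ctx, cl.mem_flags.READ_WRITE, size=nbytes)
dst = cl.Buffer(ctx, cl.mem_flags.READ_WRITE, size=nbytes)

cl.enqueue_copy(queue, dst, src).wait()  # warm up so allocation isn't timed

evt = cl.enqueue_copy(queue, dst, src)
evt.wait()
seconds = (evt.profile.end - evt.profile.start) * 1e-9
# A copy reads and writes every byte, so the traffic counts twice
print("effective bandwidth: %.1f GB/s" % (2 * nbytes / seconds / 1e9))
```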

Cheers!
 
Hard to say, when each foundry has its own definition of the size of its node. The way I heard it, only Intel has a 'true' 14nm node. But maybe you guys can check the comparison between Samsung's 14nm and TSMC's 16nm done on the Apple A9 SoC.
 
Adding to renz's comment, you also have the type of lithography optimization to put into the mix. You can optimize for density, like AMD did with Kaveri, or you can optimize for speed, like PD's SOI.

So 14nm and 16nm could have very similar performance at the end of the day, or be as different as they want them to be. I would imagine that is part of the secret sauce AMD has going with TSMC and GF.

In any case, just keep in mind that not all GPUs will come out of TSMC; some will come from GF, IIRC. They might choose the "best node" (by their own internal measurements) for the high-end GPUs/APUs.

Cheers!