AMD Vega MegaThread! FAQ and Resources



Part of it is thanks to both the PS4 and Xbox One using AMD hardware, and that hardware uses the exact same architecture as their current desktop GPUs. Right now many multi-platform games run faster on AMD hardware even when no low-level API is involved; some of them are even Nvidia-sponsored titles, like The Division, Titanfall 2 and RE7, to name a few. Nvidia knows this, and that's why they are back in the console business. It's also probably why Nvidia keeps releasing faster GPUs like the 1080 Ti when AMD doesn't even have a proper GPU to compete with the 1080. Those low-level APIs might favor AMD hardware, but Nvidia counters with GPUs so fast that not even the help of a low-level API can catch them.
 


While Nvidia might be back in the console business, the Switch isn't exactly a graphics powerhouse - especially when you consider that they are forced to use their competitor's technology (i.e. Vulkan) to actually make it.

Then, if GameWorks games work so well on AMD hardware, it could be because of AMD's versatility: a driver update solved The Witcher 3's speed problems a few weeks after the game came out, while we're still waiting for Nvidia's driver that enables async compute in Doom. AMD devs did say that several units inside their recent GCN revisions can be repurposed on the fly with a new driver. I personally think that's why AMD isn't letting go of GCN: the flexibility of that solution is much better IMHO than Nvidia's, which needs to re-spin a whole new architecture with every new card family to squeeze more horsepower for current apps.

My take is that Nvidia may be leading when it comes to pure horsepower while AMD's solution is more flexible; I would put my bets on the latter's long-term viability. I just hope AMD is strong enough to see it really come to fruition.
 
While Nvidia might be back in the console business, the Switch isn't exactly a graphics powerhouse - especially when you consider that they are forced to use their competitor's technology (i.e. Vulkan) to actually make it.

It's not about how powerful the console is. The intention is for game developers to become more familiar with Nvidia's more recent architectures when using a low-level API. And while Nintendo did say they are supporting Vulkan, the console has a specific low-level API developed by Nvidia (NVN). Some developers have even said that developing for the Switch is much easier than, for example, the PS4.

Then, if GameWorks games work so well on AMD hardware, it could be because of AMD's versatility: a driver update solved The Witcher 3's speed problems a few weeks after the game came out,

That's because CDPR allowed AMD (and the public) to access and override the in-game settings through AMD CCC. With Batman: Arkham Origins, for example, the developer did not give AMD the access it requested to override the tessellation setting in the game. So it still depends on how open the game developer/publisher is towards the IHVs.

while we're still waiting for Nvidia's driver that enables async compute in Doom.

Nvidia's async compute does not work the same way as AMD's async compute. From what I understand, in the case of Doom the developer is also using specific shader extensions to target GCN hardware features that don't exist on Nvidia GPUs. That is one of the reasons why it is faster on AMD hardware with Vulkan.

I personally think that's why AMD isn't letting go of GCN: the flexibility of that solution is much better IMHO than Nvidia's, which needs to re-spin a whole new architecture with every new card family to squeeze more horsepower for current apps.

I don't think it is like that. AMD has hardware in the two major consoles, so many game engines and game development efforts focus on AMD hardware first, and over time their hardware features get used more and more. Nvidia, in their case, needs to change their architecture to keep up with this, not just to increase raw power. Just look at Pascal itself: design-wise it is nearly identical to Maxwell, except for GP100. Why? Maybe because there haven't been many changes to AMD's GCN over the years, so Nvidia did not really need to change the base design going from Maxwell to Pascal... except for the async-compute-related stuff. Then again, Nvidia's architecture in general doesn't really need async compute because its utilization is already good to begin with, unlike AMD's. That's why we often see Nvidia cards with much lower raw theoretical performance matching AMD cards that are supposed to have much higher theoretical performance.
 
It has been discussed a lot these past few days on other forums, but here at Tom's it seems no one really cares about it. As usual, I will wait for the reviews. Many people out there want AMD to take the crown from Nvidia so badly; looking at the trend of AMD releases these past few years, the hype is crazy. In fact, I'm starting to suspect that some people do this to AMD on purpose, so that when the product does not meet the hyped expectations it gets branded a "bad product" by many. It doesn't help that AMD also started hyping their products much too early, starting with the Polaris generation. Remember when Raja said they were confident they were a few months ahead of their competitor when it came to FinFET? In the end Nvidia still beat them to market with a FinFET process.
 


that "it's nice" was refering to how Vega compared to 1080ti and the new titan Xp not regular 1080.

Check PandaNation's question and the direct answer to it in this AMA thread:

http://www.tomshardware.com/forum/id-3378010/join-tom-hardware-amd-thursday-april-6th/page-3.html

In AMD's case any kind of hype is very dangerous, no matter where it comes from. Last year AMD tried to let people know, a few months before launch, that Polaris was not going to compete with Nvidia's GP104 performance-wise, but even with the tip coming directly from AMD, some people chose to ignore it and only wanted to believe the good things they heard about Polaris.
 
In-depth look at some of the more "Interesting Features" on Vega...

There's a lot crammed into this video... found myself pausing and rewinding a few times to take it all in.

Interesting stuff all the same... the YouTuber states that where Ryzen is weak in AVX, Vega is strong.
This is important, and with good reason; you can see the puzzle starting to come together here with regard to AMD's server solution.
He goes on to say that this synergy may give AMD the ability to take advantage of a hardware "niche" in HPC.
I also believe this is the kicker that will hopefully have us start hearing of design wins in HPC for Naples & Vega... but I digress.

Also note at 25:05 how the Infinity Fabric connects the L2 cache on the GPU directly to the CPU and the PCI Express lanes; interesting.

Vega & Zen together look like quite a team.

https://youtu.be/m5EFbIhslKU
 


Makes me wonder if AMD put something in Ryzen/Vega that will take advantage of that access. Could we see the GPU offloading work to the CPU automatically at a hardware level?

 


They've called that HSA, if I'm not mistaken. It already works in their APUs; I wonder if they're not trying to do that with Ryzen+Vega...
 


You perhaps didn't read the phrase "hardware level"

HSA doesn't necessarily work at a hardware level; it's mainly about data access. I'm talking about using hardware blocks designed for processing specific common workloads normally targeted at GPUs and making them run on the CPU, similar to instruction set extensions.
 


No, I did read it well enough. If that weren't the case and HSA were software-only, it would work as soon as you used an Athlon and a GCN card together - but it doesn't. Why? Because of common addressing and resource sharing: you'd need a memory controller on the CPU able to converse with the GPU's VRAM controller directly. Thus, hardware-level support. Moreover, a hardware block that processes GPU tasks is... an integrated GPU, so you'd basically get an APU - HSA again.
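The "common addressing" part is easiest to picture as the CPU and GPU dereferencing the same pointer. Below is a minimal sketch using CUDA's managed memory purely as an analogy for the kind of shared virtual memory HSA provides on APUs - it is not AMD's HSA runtime, just an illustration of the concept we're arguing about (the kernel name and sizes are made up).

```
#include <cstdio>
#include <cuda_runtime.h>

// GPU kernel: increment every element in place.
__global__ void bump(int *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1;
}

int main() {
    const int n = 1024;
    int *data = nullptr;

    // One allocation reachable from both CPU and GPU through a single pointer --
    // conceptually similar to the shared virtual addressing HSA gives APUs.
    cudaMallocManaged(&data, n * sizeof(int));

    for (int i = 0; i < n; ++i) data[i] = i;   // CPU writes
    bump<<<(n + 255) / 256, 256>>>(data, n);   // GPU updates the same memory
    cudaDeviceSynchronize();                   // wait before the CPU reads it back

    printf("data[42] = %d\n", data[42]);       // prints 43
    cudaFree(data);
    return 0;
}
```

On an APU with full HSA support the point is that the GPU can consume CPU pointers coherently without staging copies, which is exactly the hardware-level plumbing described above.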
 


AFAIK the only "real" one was the one that scored around 1070 performance. The score is available in the 3DMark database:

http://www.3dmark.com/spy/1544741

TBH I don't know why some people take the prank that came from the WCCFTECH comment section as a rumor.
 
Well, that sucks! I guess we will have to wait till Computex to find out the deal.... 🙁 AMD said it goes up for sale in June.

also,
"AMD will finally be disclosing more information about its next generation CPU & graphics architectures Vega, Navi and Zen+ in 10 days. The company is set to unveil its long-term CPU & graphics roadmaps for 2017 and beyond in a little over a week, sources close to AMD have told us. If you’ve been waiting to hear more about Vega, Navi & Zen+ make sure to tune in to wccftech on Tuesday May 16th."
http://wccftech.com/amd-taking-the-covers-off-vega-navi-may-16th/
If we can trust anything they say...
 


Good video... looks very promising indeed. It says 1600 MHz (maybe this is liquid cooled), or maybe a cherry-picked chip.

Or could we be looking at even more headroom, I wonder? Sounds like wishful thinking but I reckon it's possible; we shall see, I guess... especially as the process matures. It is designed from the ground up as a high-clock architecture, and we have Polaris coming in at 1400 MHz.

And then I hear there will be liquid cooling options as well...

It seems they're being very careful not to overhype this time round... but all in all, with the more info we're getting, it's starting to look more and more beastly. Happy days!

And FP16 coming in at 25 teraflops, WOW... if they can get devs to take advantage of this... but they will have the option to avail of it in Polaris on the consoles, so this should help. And give more oomph to the console games (and the ports to PC). :)
 


They will tell us a bit more information, but is AMD also going to launch Vega on the same day? In the end it could be another Vega T-shirt giveaway... just like they did at the late-February event.
 
Good video... looks very promising indeed. It says 1600 MHz (maybe this is liquid cooled), or maybe a cherry-picked chip.

The score showing Vega matching 1080 Ti performance comes from the prank in the WCCFTECH comment section.

And FP16 coming in at 25 teraflops, WOW... if they can get devs to take advantage of this... but they will have the option to avail of it in Polaris on the consoles, so this should help. And give more oomph to the console games (and the ports to PC).

Hard to say, because according to many game developers, modern game development relies more and more on FP32, especially if you are aiming for console-quality graphics and above. Unless you want your game to look like one from 2005, relying heavily on FP16 is a no-go. AMD did demo TressFX using FP16 back at the February event; with FP16 they were able to render twice as many hair strands for the same performance as with FP32. But that demo rendered only the hair, not an entire game. That should give you a hint as to why they did it that way.

There is talk about mixing FP16 and FP32 (logically this is the only way to use FP16 in a current-generation game without severely affecting image quality), but special attention is needed on the optimization side, or there will be no performance benefit from using FP16. Worse, it might end up taking more effort while the performance difference compared to using FP32 everywhere is nonexistent. This is the major reason developers out there haven't done it, even though our GPUs have been capable of it for years.
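To make that optimization burden concrete: the double-rate FP16 on recent GPUs only shows up when values are packed two to a register, so the shader or compute kernel has to be rewritten to work on pairs. Here is a minimal CUDA sketch of the same scale-and-bias operation in plain FP32 and in packed FP16 - the kernel names and the gain/bias math are made up for illustration, and the packed-half intrinsics need a GPU of compute capability 5.3 or newer (compile with something like nvcc -arch=sm_60).

```
#include <cuda_fp16.h>
#include <cuda_runtime.h>

// Plain FP32 path: one fused multiply-add per element.
__global__ void scale_fp32(const float *in, float *out, float gain, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] * gain + 1.0f;
}

// Packed FP16 path: each __half2 holds two values, so one __hfma2
// instruction processes two elements -- that is where the "double rate"
// comes from, but only if the data is actually laid out in pairs.
__global__ void scale_fp16x2(const __half2 *in, __half2 *out, float gain, int n2) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    __half2 g = __float2half2_rn(gain);           // broadcast gain to both halves
    __half2 one = __float2half2_rn(1.0f);         // broadcast bias to both halves
    if (i < n2) out[i] = __hfma2(in[i], g, one);  // two FP16 FMAs in one op
}

int main() {
    const int n = 1 << 20;                        // element count (even)
    float *f_in, *f_out;
    __half2 *h_in, *h_out;
    cudaMalloc(&f_in,  n * sizeof(float));
    cudaMalloc(&f_out, n * sizeof(float));
    cudaMalloc(&h_in,  (n / 2) * sizeof(__half2));
    cudaMalloc(&h_out, (n / 2) * sizeof(__half2));

    // Inputs are left uninitialized: this only illustrates the launch shapes,
    // i.e. the packed path covers the same data with half as many threads.
    scale_fp32<<<(n + 255) / 256, 256>>>(f_in, f_out, 0.5f, n);
    scale_fp16x2<<<(n / 2 + 255) / 256, 256>>>(h_in, h_out, 0.5f, n / 2);
    cudaDeviceSynchronize();

    cudaFree(f_in); cudaFree(f_out); cudaFree(h_in); cudaFree(h_out);
    return 0;
}
```

If the data can't be arranged in pairs like this, or the intermediate math needs FP32 precision anyway, the packed path ends up no faster than FP32 - which is basically the situation described above.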

The one really pushing for FP16 in games is Imagination Technologies, but they pushed for it because they were aware that games on mobile are not as complex as games on home consoles and PC. They do it so they can offer higher performance in the power- and bandwidth-limited environment of a mobile SoC.
 
For consumers, for the time being, I don't think there's a real benefit to using FP32, but anyone running complex calculation programs that need the precision will suffer.

This is starting to feel like the "32-bit vs 16-bit color depth" debate from 3DFX vs nVidia back in the day. The difference is, at that time, nVidia did have a tangible difference to show; I'm not so sure this "FP16 vs FP32" is tangible for consumers playing games.

Cheers!
 
@renz496

Yeah, that's what I meant: certain aspects of the game can be rendered in FP16... Actually, why does Nvidia have this feature disabled? I heard it's because of money, that they prefer to sell it as an extra in the HPC market?

I believe it might hit 1550 MHz on the reference boards; I was reading this is roughly what's required to hit 12.5 TFLOPS... we live in hope, I guess.
This is not really that much of a stretch considering the Sapphire RX 580 is hitting 1400 MHz+ and the Vega chip is designed for higher clock speeds.
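A rough sanity check on that clock figure, assuming the rumored 4096 stream processors and counting an FMA as 2 FLOPs per clock:

4096 × 2 × 1.526 GHz ≈ 12.5 TFLOPS FP32, and packed FP16 at double rate gives ≈ 25 TFLOPS.

So roughly 1525-1550 MHz on the shaders is indeed about what the math needs to hit those numbers.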
 
While it's true Nvidia limits fast FP16 support to their Tesla cards (for deep learning), that's also because the majority, if not all, of console/PC game development uses FP32 only, especially given how complex modern games look. The thing with FP16 is that it makes game development more complicated on the developer's side in terms of optimization; if it's not done properly, you probably won't gain any of the advantage FP16 is supposed to give you. To save time and effort, game developers decided to use FP32 for everything. Also, one of the reasons to use FP16 is to save on power and bandwidth, but our hardware on PC is definitely not limited in that way - just look at how much raw performance is available on the current 1080 Ti. We only become performance-limited once we push crazy resolutions like 4K and above, and we expect another 15%-20% more performance than the 1080 Ti next year.

But one of the biggest hurdles in game development is how rushed the majority of games are to market today; a day-one patch is almost the norm for every game. Game developers are already busy fixing the issues within their own games, so adding more complexity to their optimization effort is definitely not something they want to add to an already busy workload.


 
DDR4 had about a 300% price premium against DDR3 when it first came out. How long did that last? Barely a month if I recall correctly, then it started dropping pretty fast until the current supply crunch. The timing on that could be really unfortunate if it's causing supply/price constraints on the Vega launch. Even if it's a limited release, at least it will give benchmarkers a chance to test Ryzen against Intel with a high end AMD card.
 