AMD Vega MegaThread! FAQ and Resources



Both companies have their own way of solving problems, but for certain things neither will tell you how they handle them. AMD, for example, will not give the exact details of how FreeSync actually works with their GPUs; when pressed, they just mention "secret sauce." Same with Nvidia: to explain their DX12 performance, Nvidia would most likely have to explain how they handle their DX11 optimization as well, and that's where Nvidia's "secret sauce" is at the moment.


 


The SK Hynix 4GB, 4-Hi HBM2 stack is rated for 256GB/s. If you use two of these stacks, you will have 8GB and 512GB/s of bandwidth.
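
Back-of-the-envelope, assuming the usual HBM2 figures (a 1024-bit bus per stack at 2Gb/s per pin; the numbers below are just those assumptions written out):

```python
# Assumed HBM2 figures: 1024-bit bus per stack, 2 Gb/s per pin (the 256GB/s bin)
bus_width_bits = 1024
pin_rate_gbps = 2.0
stacks = 2

per_stack_gbs = bus_width_bits * pin_rate_gbps / 8   # = 256 GB/s per stack
total_gbs = per_stack_gbs * stacks                    # = 512 GB/s with two stacks
print(per_stack_gbs, total_gbs)                       # 256.0 512.0
```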
 


According to AMD, both effective throughput and efficiency are negatively affected by their large processing units. The 480's overly large execution units consume the same amount of power whether they're processing a small batch or a large one. Not only does this mean a lot of power is being spent processing non-existent data, but those units are also being under-utilized.

With Vega, they're making some of the execution units smaller and able to merge together. This lets smaller batches avoid consuming resources they don't need, while larger batches still get high throughput from the merged units. Of course, dynamically sized units make certain things more complex, but it's a large win overall.

I think they said this change alone allows for a 20% reduction in power or a 20% increase in performance, depending on whether the executing code makes better use of the new setup or simply lets the unused execution units actually stay idle.
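
Quick napkin math on what that either/or claim would mean for perf-per-watt (purely illustrative, assuming everything else stays equal):

```python
# The claimed 20% improvement, expressed as relative perf-per-watt (illustrative only)
baseline_perf, baseline_power = 1.0, 1.0

same_perf_less_power = baseline_perf / (baseline_power * 0.80)   # same perf, -20% power -> 1.25x perf/W
more_perf_same_power = (baseline_perf * 1.20) / baseline_power   # +20% perf, same power -> 1.20x perf/W
print(same_perf_less_power, more_perf_same_power)                # 1.25 1.2
```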

Nvidia will still have the efficiency lead. AMD will need to make many other tweaks to become competitive, but they're in the same ballpark, so that's good. With my 1070 undervolted, I'm seeing about 150fps in Overwatch at 1080p Ultra at 30% TDP (~50 watts). Pascal is crazy.
 


You're wrong, AMD FreeSync is not a black box. FreeSync is a hardware implementation of the VESA DisplayPort 1.2a Adaptive-Sync standard. It's not a secret technology; Nvidia could make their own version, but they won't, as they are too busy counting your money :)
 


VESA Adaptive-Sync is indeed an open standard, but I'm not talking about that. I was talking about how FreeSync is technically handled inside the AMD GPU. If you ask them, they will not answer this. For example, we know how Nvidia's G-Sync behaves when the frame rate drops below the 30FPS floor of the window (PCPer has an article explaining this), but when asked exactly how they handle this, AMD will not give you the details. PCPer, for their part, just assume AMD is doing something similar to what Nvidia does with G-Sync, but AMD never confirmed that is really the way they do it.
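
For reference, here's a rough sketch of the frame-multiplication idea PCPer described for G-Sync below the window; the function and numbers are just illustrative, and whether AMD's driver does the same thing is exactly the part they won't confirm:

```python
# Rough sketch of frame multiplication below the VRR window (illustrative, not AMD's or Nvidia's code)
def refresh_for_frame(frame_rate_hz, vrr_min=30, vrr_max=144):
    """Pick how many times to repeat a frame so the panel stays inside its VRR range."""
    if frame_rate_hz >= vrr_min:
        return frame_rate_hz, 1          # inside the window: refresh tracks the frame rate directly
    multiplier = 2
    while frame_rate_hz * multiplier < vrr_min:
        multiplier += 1                  # repeat each frame enough times to climb back into the window
    return min(frame_rate_hz * multiplier, vrr_max), multiplier

print(refresh_for_frame(24))   # -> (48, 2): each frame shown twice, panel runs at 48Hz
print(refresh_for_frame(10))   # -> (30, 3): each frame shown three times
```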

Also, despite it being an open standard, AMD was the only company present when the Adaptive-Sync spec was proposed. When companies propose a spec, they always do it the way their own hardware works, and in Adaptive-Sync's case no other company objected to how it must be handled. So saying that Nvidia can simply adopt Adaptive-Sync on their GPUs is false: if the required hardware does not exist inside an Nvidia GPU, then no matter how open the spec is, Nvidia will not be able to implement it. Can Nvidia build that hardware into their GPUs? Maybe, but is it really that easy? What if AMD holds a patent on that kind of hardware? If Nvidia tries to create something similar, what if AMD sues them for violating an AMD patent? If I remember correctly, Intel mentioned they were interested in supporting Adaptive-Sync, but it has been two or three years since then and we still have not seen any Intel GPU capable of driving the Adaptive-Sync monitors out there. There are probably some hurdles preventing Intel from really adopting the tech in their GPUs.
 
If I'm pretty sure I'm going to buy a 1080 Ti sometime in May or June, how long should I expect to hold out to see what Vega brings, just to be sure? I'm also going to be getting a 4K monitor, so I have to decide on FreeSync or G-Sync.
 


The obvious answer is "until it's officially benchmarked".

Not even with a full list of official specifications would you be able to accurately gauge where it will place, so it's a moot point to even tell you to "hold out for it".

What I can say is if you get the 1080ti, when Vega comes out, you will still have top notch performance. As usual, the only factor you have to take into account is "am I willing to wait?". Everything else is noise.

Cheers!
 


How AMD handles adaptive sync: we'll know exactly how they do that once DC lands in the Linux kernel. Heck, I guess if you look at the out-of-tree patches now, you'll see exactly how they do it. bridgman seems to be saying so (he's a well-known AMD employee prowling forums such as Phoronix and reddit).

Why Nvidia pushes G-sync and not Adaptive Sync: vendor lock-in. Actually, it seems that they're using Adaptive Sync in their mobile chips as those panels don't make use of their G-sync clock generator - of course they don't advertise it as such. Tested here.

Why Intel isn't implementing Adaptive Sync: with the rumours of Intel going for AMD for their future GPU, they actually will.

Why is no one else using Adaptive Sync: there are no other GPU makers needing such a wide range of refresh rates, as other use cases can either use a proprietary screen clock controller (hand-held, mobile) or they don't need variable frame rate (media centres).
 
Why Nvidia pushes G-sync and not Adaptive Sync: vendor lock-in. Actually, it seems that they're using Adaptive Sync in their mobile chips as those panels don't make use of their G-sync clock generator - of course they don't advertise it as such. Tested here.

This stuff has been discussed a lot, but from what I can remember of the discussions back in 2014, the standard to make it work on regular monitors did not exist at the time. Some people familiar with eDP and how monitors work said the protocol for adaptive sync could not work over cables longer than 10cm; at least that was the case with eDP. Also, the G-Sync module on regular monitors is not just there for G-Sync functionality but also for uniform integration. Remember some of the early Adaptive-Sync monitors having issues working with FreeSync and having to be sent back to the manufacturer for firmware updates? With Nvidia G-Sync there is no such problem, because all issues are fixed by the Nvidia driver unless you have a panel defect and need the panel physically replaced.

 

Well, there is nothing that actually prevents updating a monitor's firmware over DP, except that no screen maker is ready to open up the API to do so - that's one main point for G-sync, but is it worth a $100 premium on EVERY screen? And, considering that Nvidia decided not to make use of that possibility when they finally managed to make ULMB work with Gsync on those panels that could have handled it, I'm not too convinced it is.

G-sync was very useful at a time when this technology was still in its infancy - and truly, screens using G-sync and Adaptive Sync were very different at the beginning, with G-sync being much better. Nowadays, I'm not so sure - advantages like frame doubling being managed in the screen's module are compensated at no extra cost on the card's side with a driver option on AMD hardware, G-sync doesn't allow colour management by panel makers, and non-G-sync screens are now able to hit 144Hz+ too.

As for cable length, I think I remember reading that it was the main reason why Nvidia split from the design group for adaptive sync: in early versions of the spec, no timing information was transferred and no one agreed on how to do this. The final version of the spec (which came out one year after G-sync hit the market) did mostly solve that problem but even then the actual implementation was quirky. Later revisions for the hardware pretty much did away with that though.
 
Regardless of whether there is a module or not, part of the reason G-Sync monitors are more expensive is also the tighter control and effort by Nvidia to make sure every G-Sync monitor out there provides the same experience. AnandTech recently ran an article about AMD FreeSync 2, and in their discussion with AMD it seems AMD also agrees that if they want a more streamlined experience with FreeSync monitors, they need to work even more closely with panel makers instead of just letting panel makers do whatever they want, as with current Adaptive-Sync monitors. But working more closely means more effort and resources have to be spent on AMD's end; right now AMD is thinking about charging royalties for FreeSync 2 (and upwards).

Also, I think nothing stops monitor makers from making cheap G-Sync monitors, but the Nvidia branding does carry that premium "aura" and it seems monitor makers intend to fully exploit it. I still remember Nvidia saying in the beginning that they intended the first G-Sync monitors to cost no more than $400; in the end that turned out to be Nvidia's own pipe dream. Don't underestimate board partners' desire to make a profit. Asus, for example, will not hesitate to charge premium dollars for their Nvidia-based products. Take their pricing for the GTX 1050 Ti Strix: they boldly priced it around $175-$180 despite knowing full well that AMD had dropped the RX 470 4GB down to the $170 mark before Nvidia started selling their 1050s.
 


Looking it up, there are a few reasons why G-sync monitors are more expensive:

  ■ the G-sync module itself isn't cheap, and it's more expensive than a "normal" clock generator considering it's not usually found in large enough quantities to become cheap; it also includes some electronics that are found directly on the GPU's card in competitors' offers (RAM chips)
  ■ as it is a plug-in card and not simply soldered on, designing the screen's chassis is more difficult than a "standard" one, as a connector always takes up more space than a handful of chips directly soldered on the PCB
  ■ DisplayPort, even without FreeSync, is expensive (that's the main reason behind AMD's FreeSync-over-HDMI initiative)
  ■ and, yes, since G-sync has a premium image, screen makers don't hesitate to bleed the consumer dry.
I think the module costs $35, replaces $12 worth of components, but adds $20 in design constraints - that's $43 extra over an Adaptive Sync screen. Double that for the screen maker's margin on the feature, add taxes... you've got your $100.
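
Writing those assumed figures out (none of these are official BOM numbers, just the guesses above):

```python
# All figures are guesses quoted above, not official BOM numbers
module_cost     = 35   # G-sync module
replaced_parts  = 12   # components it replaces (scaler/clock generator)
design_overhead = 20   # extra chassis/PCB design constraints

extra_cost = module_cost - replaced_parts + design_overhead   # $43 over an Adaptive Sync screen
with_margin = extra_cost * 2                                  # screen maker doubles it -> $86
print(extra_cost, with_margin)                                # 43 86, plus taxes ~= $100
```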
 
Isn't the RX 580 supposed to be launching today?

Or is it delayed again:
http://www.thetech52.com/amd-polaris-refresh-rx580-rx570-delayed-till-april-18th-ryzen-5-launch-ahead/

Also, we have no sticky for the RX 500 series...

Jay
 
Fair enough, so relevant posts go in there then...

Maybe we could rename the 400 series sticky to cover both?

It's a little confusing... especially for someone not too familiar with the architecture.
 


From many discussions I've heard, it is based on 14nm LPC instead of LPP. They say it is basically the same as LPP, only much cheaper to manufacture. For those hoping for higher performance at the same power consumption, there is not much improvement in that regard: officially AMD lists the RX 580's TBP at 185W vs 150W for the RX 480.
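
For scale, comparing just those two quoted TBP numbers:

```python
# Listed board power (TBP) as quoted above
rx480_tbp_w = 150
rx580_tbp_w = 185

increase_pct = (rx580_tbp_w - rx480_tbp_w) / rx480_tbp_w * 100
print(round(increase_pct))   # ~23% higher rated board power
```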
 


Could be, we don't know what the Vega cards will be called. I'm thinking there will be a 590, maybe a 590X and then whatever the Fury X successor is.
 
They said they were gonna keep the codename for the GPU and call the final product "Vega". This was announced at Capsaicin & Cream.

So "Rx Vega" or just Radeon "Vega" I guess..
 


I don't accept much of anything as set in stone until the launch event...especially with AMD. They might change it on a whim, you never know. But, Vega is a good name and we can hope it fares better than the Chevrolet Vega did, lol.
 


It's only from the discussions I've heard; the "C" supposedly refers to cost. But some people also say only Samsung has LPC, and that instead of "cost" the C in Samsung's LPC refers to "Compact". Samsung's LPC was supposed to use much lower power than LPP, but as we can see, the power consumption rating for this new 500 series did not improve much. So they say this is more like GF's own improvement on the initial LPP (some refer to it as LPP+). It's a bit confusing, of course, but AMD themselves never really explained which exact process they use, and most reviewers did not have much interest in digging deeper in that regard beyond the cards' performance.


 