AMD Vega MegaThread! FAQ and Resources

Find me one game that uses a GPU-accelerated feature without also engaging in a marketing effort with AMD or Nvidia. TressFX was first introduced with the 2013 Tomb Raider reboot. If Crystal Dynamics hadn't engaged in a marketing deal with AMD, would they ever have used TressFX? AMD most likely pushed Crystal Dynamics to use TressFX (which was still not open source back then) because AMD knew that, without being pushed, game developers were not going to implement it in their games. After all, it was none other than AMD that said game developers have little interest in GPU-accelerated physics:

http://aphnetworks.com/news/2011/03/25/amd-game-developers-not-exactly-interested-hardware-accelerated-physics

BTW, TressFX only deals with hair simulation (and grass as of TressFX 2). It is not a complete physics solution like PhysX and Bullet are.
 


Any game using Bullet?

http://bulletphysics.org/wordpress/

That thing runs using OpenCL and... that's about it XD

Cheers!
 


GTA V. There are a few others, but to my knowledge none of them ever use the GPU-accelerated features offered by the engine.
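For context, Bullet's GPU-accelerated (OpenCL) rigid-body pipeline is an optional, separate backend; what games such as GTA V reportedly ship is the default CPU path. Below is a minimal sketch of that default path, just to show what the library itself looks like. This is standard Bullet API, not anything specific to how any particular game integrates it.

```cpp
// Minimal sketch of Bullet's default CPU rigid-body pipeline.
// The OpenCL GPU pipeline is a separate, optional backend and is not used here.
#include <btBulletDynamicsCommon.h>
#include <cstdio>

int main() {
    // Core plumbing: collision config, dispatcher, broadphase, solver, world.
    btDefaultCollisionConfiguration config;
    btCollisionDispatcher dispatcher(&config);
    btDbvtBroadphase broadphase;
    btSequentialImpulseConstraintSolver solver;
    btDiscreteDynamicsWorld world(&dispatcher, &broadphase, &solver, &config);
    world.setGravity(btVector3(0, -9.81f, 0));

    // Static ground plane at y = 0 (mass 0 means static).
    btStaticPlaneShape groundShape(btVector3(0, 1, 0), 0);
    btDefaultMotionState groundMotion;
    btRigidBody ground(btRigidBody::btRigidBodyConstructionInfo(
        0.0f, &groundMotion, &groundShape));
    world.addRigidBody(&ground);

    // Dynamic 1 kg sphere dropped from y = 10.
    btSphereShape sphereShape(0.5f);
    btVector3 inertia;
    sphereShape.calculateLocalInertia(1.0f, inertia);
    btDefaultMotionState sphereMotion(
        btTransform(btQuaternion::getIdentity(), btVector3(0, 10, 0)));
    btRigidBody sphere(btRigidBody::btRigidBodyConstructionInfo(
        1.0f, &sphereMotion, &sphereShape, inertia));
    world.addRigidBody(&sphere);

    // Step the simulation at 60 Hz for two seconds.
    for (int i = 0; i < 120; ++i) {
        world.stepSimulation(1.0f / 60.0f);
    }

    btTransform t;
    sphere.getMotionState()->getWorldTransform(t);
    std::printf("sphere height after 2 s: %.2f\n", t.getOrigin().getY());

    world.removeRigidBody(&sphere);
    world.removeRigidBody(&ground);
    return 0;
}
```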
 


Because enabling compute is a huge performance penalty for everything else. Tinfoil hats ensue: AMD is less affected by it! LOL.

Anyway, I have to say nVidia is very bad at "sharing" their stuff. Remember even Linus gave them the middle finger, so...

Cheers!
 
Because enabling compute is a huge performance penalty for everything else. Tinfoil hats ensue: AMD is less affected by it! LOL.

Nah, more likely because GPU-accelerated physics is mostly exclusive to PC. Remember, 7th-gen consoles most likely couldn't dedicate extra resources just for prettier physics effects that don't affect gameplay mechanics at all. With the 8th-gen consoles, GPU-accelerated physics was supposed to be a feature developers could finally use thanks to the increased raw power (during the official PS4 unveil they showed a demo of GPU physics being done with Havok). But because MS and Sony wanted the consoles to be more affordable, they skimped on the hardware, and as a result new games have trouble just hitting 1080p at 60 FPS. Some games already struggle to run at "medium" settings on both consoles, so most developers probably didn't want to add another frame-rate-killing effect to their games. And then, boom! The 4K craze 😛

Anyway, I have to say nVidia is very bad at "sharing" their stuff. Remember even Linus gave them the middle finger, so...

Cheers!

That is simply how Nvidia rolls, but they tend to be very clear about their intentions. I still remember when Nvidia was asked about licensing the G-Sync tech to others, and their answer was a straight no.
 


And that is where we don't really give nVidia the benefit of the doubt in this DX12 multi-GPU conversation, I'd say. If they find a way to "protect their IP" (not technically incorrect, but still annoying), they will. And what "protecting the IP" means can vary from locking everyone else out to just standing in their own corner, sometimes both at the same time.

Cheers!
 
Latest Rumors!

[Image: leaked table of three rumored RX Vega SKUs (Core, Eclipse, Nova) with clocks, memory, and prices]


http://wccftech.com/amd-radeon-rx-vega-cards-reportedly-launching-june-5th-lineup-includes-3-skus-priced-at-599-499-399/
http://hothardware.com/news/amd-radeon-vega-specs-leak-confirm-16ghz-core-clock-and-16gb-hbm2-memory
https://www.digitaltrends.com/computing/amd-vega-16gb-1600mhz/
 


If Nvidia had no intention of letting their cards be used in mixed configurations, they most likely would have come out with such a statement by now. DX12 launched in 2015 with Windows 10; fast forward to 2017 and Nvidia has still done nothing to stop it. They probably won't help game developers who ask how to make their GPUs work better alongside AMD GPUs in DX12 multi-GPU, but they won't stop it either. Also, mixed multi-GPU is not really a new idea; it has been done before, and back then many people also expected Nvidia to stop it from working, but Nvidia simply let it happen.
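Worth spelling out why there is little for Nvidia to "stop" here: in DX12 explicit multi-adapter, the application itself enumerates every adapter through DXGI and creates an independent device on each one, regardless of vendor; there is no driver-side SLI/CrossFire profile sitting in the middle. A minimal sketch of that enumeration step follows (standard DXGI/D3D12 calls, error handling omitted; how the app then splits work between the devices is entirely up to the developer):

```cpp
// Minimal sketch: DX12 explicit multi-adapter is driven by the application.
// The app enumerates every adapter DXGI reports - regardless of vendor -
// and creates an independent D3D12 device on each one it wants to use.
// Link with d3d12.lib and dxgi.lib. Error handling omitted for brevity.
#include <windows.h>
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;

    for (UINT i = 0;
         factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND;
         ++i) {
        DXGI_ADAPTER_DESC1 desc = {};
        adapter->GetDesc1(&desc);

        // Skip the software rasterizer (Microsoft Basic Render Driver / WARP).
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE) {
            continue;
        }

        // Create a device on every hardware adapter, AMD and Nvidia alike.
        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device)))) {
            std::wprintf(L"Adapter %u: %s (VendorId 0x%04X)\n",
                         i, desc.Description, desc.VendorId);
            devices.push_back(device);
        }
    }

    // From here the application decides how to split work between the devices
    // (e.g. AFR, or offloading post-processing to the second GPU); the drivers
    // have no say in which adapters get paired.
    std::wprintf(L"Usable D3D12 adapters: %zu\n", devices.size());
    return 0;
}
```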
 


We will probably get a better answer by the end of the month. Hopefully this is not some purposely made-up rumor to inflate the hype; there are already people expecting Vega to OC up to 1800 MHz.
 


Uhm, the Vega Core specs don't add up: it's got a higher clock speed and core count than the Eclipse, but lower performance numbers?

I think the leaks we have seen suggest the Core should run around 1200 MHz, giving GTX 1070-level performance (unknown number of shaders, though I suspect it'll be somewhat cut down).
 


If they flat-out say that, then it's kicking the hornet's nest. It's one thing for them to keep others at bay from something no one else is licensing, but DX12 is another ball game.

I think they're waiting until the very last moment to say it, if they indeed want to. But I think the bad press and all the associated problems with taking such a stance are not worth it to nVidia... Or at least, I'd hope that's the case.

Cheers!
 


The DX12 spec is something MS and the IHVs discussed a lot among themselves to decide which features should be included. If Nvidia does not want their GPUs used alongside another vendor's products, they can just say it outright, just like AMD came right out and explained why they are not supporting FL12_1 in their hardware.
 


Does this mean they are going with the slower HBM2? I thought it was 512 GB/s?? I was expecting high clock speeds, alright... the 1200 MHz was only an early engineering sample, and the Vega architecture is designed to run faster than Polaris. But I thought they were supposed to be using stacks of 256 GB/s HBM2.
 


I'm not sure "given up" is the right word for it; it's more like they don't want it? Initially, Intel and Nvidia signed a cross-licensing deal in 2004 so both could get what they wanted. Things started to go sour for them in 2008/2009. The initial licensing deal was supposed to end in 2011, but in the end Intel decided to settle the issue out of court and pay Nvidia 1.5 billion in installments. Both used this settlement as a stepping stone to "renew" the licensing deal they made in 2004, but the catch was that Nvidia had to ditch the idea of making their own x86 CPU, permanently. This pretty much confirms that Intel barred Nvidia from making chipsets for new Intel platforms not so much over a licensing issue, but to kill Nvidia's attempt at entering the x86 market directly.
 


Well, I think Intel definitely looks at Nvidia as a strong competitor to its autonomous vehicle business. AMD's GPUs are cheaper, yet the quality is still attractive compared to Nvidia's. I think AMD is just a better deal for Intel's bottom line on multiple different fronts.
 


About automotive, I'm not so sure. Nvidia has probably been in automotive longer than Intel (since the very start of Tegra). In any case, Intel does not like the GPU itself; Intel did not like the fact that GPUs are simply better than CPUs at certain types of tasks. With Xeon Phi, Intel tried to convince the market that x86 can also be very good at massively parallel tasks. In Intel's dream, the fastest supercomputers in the world would consist only of x86 cores, but Nvidia disrupted the market by introducing GPUs into the equation, where 80-90% of the computational performance comes directly from the GPU.

 
AMD launches Radeon Vega Frontier Edition
by Hilbert Hagedoorn on: 05/17/2017 12:26 AM

"AMD just launched their Radeon Vega Frontier Edition card. It is a 'mission intelligent' enterprise graphics card for professional usage (data-crunching) and product designers, and yes it is not for PC gamers.

The card comes with 16 GB of HBM2 graphics memory and will perform in the 13 TFLOP (fp32) performance bracket. It will be available late June. At this point there have been no consumer announcements regarding Radeon RX Vega graphics cards. From the looks of it, the announcements will be made during Computex."

http://www.guru3d.com/news-story/amd-launches-radeon-vega-frontier-edition.html

"Earlier on in the presentations Mark Papermaster mentioned Radeon Rx Vega to become available in June. Meanwhile Chief architecht of the Radeon Technology Group Raja Koduri is talking a lot about the benefits of the new Vega architecture and heterogeneous computing (cpu+gpu) and its various possible implementations. AMD things that Vega is going to be big in the data-center, he shows examples of DeepBench inbetween the NVidia P100 and Vega. Nvidia scored 133 Ms, the Vega setup 88. In this score lower is better.
AMD launches Radeon Vega Frontier Edition - the card is a mission intelligent enterprise graphics card for professional usage (data-crunching) and product designers. The card comes with 16 GB of HBM2 graphics memory and will perform in the 13 TFLOP (fp32) performance bracket. The 16GB Radeon Vega Frontier Edition will become available late June 2017."

http://www.guru3d.com/news-story/10pm-cest-amd-financial-analyst-presentation-live-feed.html
 
July... that means Q3 instead of Q2 for "RX Vega"? It seems AnandTech has the same sentiment (second-to-last paragraph).

For AMD gamers who have been holding out for Vega, it’s clear that they’ll have to hold out a bit longer. AMD is developing traditional consumer gaming cards as well, but by asking gamers to hold off a little while longer when the Vega FE already isn’t launching until late June, AMD is signaling that we shouldn’t be expecting consumer cards until the second half of the year.

Anandtech
 


Well, AMD did say Vega by 2H, not 'Vega gaming product by 2H'... If the FE card comes out in June, they have met their target, just.
 


Well, it looks like 256 GB/s HBM2 isn't available yet; the current HBM2 runs at an effective 1.6 Gbps per pin, not the 2.0 Gbps required for 256 GB/s per stack.

Still, 400+ GB/s of bandwidth should be ample (just look at a GTX 1080 vs. a Fury X: the latter has 512 GB/s of bandwidth vs. 320 GB/s on the 1080, yet the 1080 is a much faster card. Yes, high bandwidth helps, but it has to be balanced against the rest of the card; Fury had way more bandwidth than it really needed given its shader performance).
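The "400+ GB/s" and 512 GB/s figures fall straight out of the per-pin rate: each HBM2 stack has a 1024-bit interface, so bandwidth per stack is 1024 bits x data rate / 8, times two stacks for the rumored Vega configuration. A quick back-of-the-envelope check (the 1.6 and 2.0 Gbps rates are the ones being discussed above; actual shipping clocks could differ):

```cpp
// Back-of-the-envelope HBM2 bandwidth math, assuming two stacks with a
// 1024-bit interface each (the configuration the Vega leaks point to).
#include <cstdio>

int main() {
    const double bus_bits_per_stack = 1024.0; // HBM2: 1024-bit interface per stack
    const int    stacks             = 2;      // rumored Vega configuration

    // Effective per-pin data rates being discussed in the thread (Gbps).
    const double rates[] = {1.6, 2.0};

    for (double gbps : rates) {
        double per_stack = bus_bits_per_stack * gbps / 8.0; // GB/s per stack
        double total     = per_stack * stacks;              // GB/s for the card
        std::printf("%.1f Gbps/pin -> %.1f GB/s per stack, %.1f GB/s total\n",
                    gbps, per_stack, total);
    }
    // Output:
    // 1.6 Gbps/pin -> 204.8 GB/s per stack, 409.6 GB/s total ("400+ GB/s")
    // 2.0 Gbps/pin -> 256.0 GB/s per stack, 512.0 GB/s total (Fury X-class)
    return 0;
}
```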

As for the speeds: what I'm saying is that for the *bottom model* (the cheapest 'Core'), Vega is likely to use a lower clock speed (e.g. 1200 MHz), and that will go up for the higher SKUs. That is the point of the table: it suggests Vega will have three gaming models aimed at different price/performance points. The issue I have is that the table also suggests the entry-level 'Core', which is intended only to go up against the GTX 1070, has higher specs than the faster, more expensive part aimed at the GTX 1080. That doesn't make sense.

To my mind the specs would be something like this:
- Vega Core: 1200 MHz clock, circa 3500 shaders, 400 GB/s memory bandwidth
- Vega Eclipse: 1400 MHz clock, circa 3500 shaders, 480 GB/s memory bandwidth (based on the specs of the recently announced FE)
- Vega Nova: 1500+ MHz clock, full 4096 shaders, 480 GB/s memory bandwidth
 


If they can launch the "semi-pro" Vega next month, then the gaming version must also be ready right now, because Vega is not like GP100, where the focus is 100% on the professional market. So what exactly is the reason AMD is holding back gaming Vega from releasing at the same time as the semi-pro Vega?
 


My guess would be availability of HBM2.
 