AMD Vega MegaThread! FAQ and Resources

Agreed, they have kind of cornered themselves with their price war, so we will have to wait and see how it goes with Vega. The good thing is that nVidia already has everything out except the 1080 Ti (they love using those x80 Ti's as a spoiler), but I have a hunch that AMD just might have an answer for it this time.

AMD grabbed a good amount of market share with Polaris, and if they are on top of things, Vega could be a great spoiler. But I doubt they will catch nVidia sleeping; either nVidia has a source inside AMD or they are clairvoyant, because AMD never seems to be able to blindside them and hold the upper hand for long.
 
If you look closely, Nvidia has always anticipated AMD having an answer to their new GPUs. When Nvidia first launched the 900 series, the stack looked like this: 980 > 780 Ti > 970. Nvidia was well aware that AMD would at least see this pattern, so with the new 10 series they made it: 1080 > 1070 > 980 Ti. Note that this time around even the x70 is faster than the previous-gen flagship (at least stock vs. stock). I think Nvidia expected Polaris to at least be able to match some GP104 variants.

Also, if you have been following the market-share numbers each quarter, AMD did not gain share with Polaris. AMD was gaining share from Nvidia in Q2 2016, while Polaris only launched on the very last day of Q2 2016. AMD was able to gain share even without Polaris, so some people probably expected AMD to gain much more with it. But the Q3 data show AMD actually lost a very small amount of share to Nvidia instead (less than 1%, to be exact). That means Nvidia somehow managed to defend its share with GP106 against Polaris. Now we are waiting for the Q4 numbers.

http://jonpeddie.com/publications/add-in-board-report
 

mitch074

Distinguished
Mar 17, 2006
While AMD didn't gain market share with Polaris, it did regain some mind share. Up until Q2 2016, AMD was selling "heaters that could do games for cheap" while Nvidia sold sleek, quiet and powerful gaming chips with futuristic reference designs (for a price, plus the odd half-hidden hardware limitation in the 970's case). Now that Polaris is out (powerful, not too expensive, and building on all the expectations AMD had created ever since the first GCN cards), they are again contenders for the sweet spot.

I mean, where I live, the reference RX 480 8GB was selling a couple of months ago for MORE than what I paid at launch (last summer). Not bad for a card that was decried for its (admittedly lousy) reference cooler and (overdramatized) out-of-spec power draw. Current AIB cards are more powerful and quieter... and until recently were regularly out of stock.
 
Is AMD still producing the reference design for the RX 480/470? From what I heard, those reference cards were very solidly built, but probably expensive to make for a card meant to sell at $250 (there is speculation that AMD originally intended to sell Polaris at a much higher price, hence the more expensive components on the reference design). Some board partners, like Gigabyte, actually use cheaper components on their custom PCB designs.
 

mitch074



AMD provided a reference design for the RX 480, but not for the RX 470 (which would have been rather pointless, as it's the same chip with a couple of disabled shader clusters). Contrary to Nvidia, AMD's reference design was meant as the baseline for cooling, with the bare minimum cooling solution but very solid circuitry: oversized voltage converters and a strangely short PCB. I'd recommend it for anybody looking to overclock the sh*t out of it in a watercooled compact rig; otherwise grab any other version, which will be far quieter and/or clocked quite a bit higher.
 

jaymc

Distinguished
Dec 7, 2007
I think the general consensus would have been Vega... but I suppose nothing is certain.

Vega is supposed to arrive around May. As stated earlier in this thread, they may just be waiting for HBM2 bandwidth to increase, as production is still ramping up and being tweaked. (Hurry up, will yis, for God's sake.)

I think they should just release a slightly lower-spec Vega with the slower HBM2. This might let us see whether Nvidia has a counterpunch waiting, and what it is. I think it's just the Titan for now (hopefully)... isn't that enough to be dealing with, eh?

Although I did hear that they developed a special extra-powerful GPU for the Autopilot in Teslas... anyone have any info on this? I have been wondering about it: could it be used as an ace up the sleeve?

The thought of a mole inside AMD is disturbing, to say the least...
 


Hard to say. For one, AMD has changed its naming scheme more often since the HD 7000/HD 8000 (OEM) days. With the R 200 and R 300 series they had R9, R7 and R5 performance categories; then they ditched that and just called every performance tier RX with the RX 400 series. The RX 400 series doesn't even have an x90 card.

Performance-wise, they said this "RX 580" is in the ballpark of the 1070. So if it really is Polaris-based, how much further do you have to push the clocks on the existing Polaris to reach 1070 level?
 


Don't look at the Nvidia Titan X (Pascal); look at the Quadro P6000 instead. That is the fastest GP102 configuration at the moment. If Nvidia still can't beat Vega with it, they would probably have to make a new chip bigger than GP102.

Nvidia will most likely save Volta for its 2018-2019 product line.
 


This very well could just be from small tweaks and process maturity. Kind of like Sky/Kaby Lake, we saw a performance boost and higher stock clocks from essentially the same architecture. However, Polaris getting a jump to 1070 performance levels is much more impressive.
 


But the question is: is it really possible for Polaris to reach 1070 performance? The 1070's Fire Strike graphics score is around 16k:

http://www.guru3d.com/articles-pages/nvidia-geforce-gtx-1070-review,28.html

Polaris 10 at 1600 MHz gets around 14,000 in Fire Strike:

https://www.youtube.com/watch?v=HYg123osc04

and that's already on LN2. To reach 1070 performance, an RX 480 would probably need at least 1700 MHz on the core alone.
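Just to put rough numbers on that last point: assuming (optimistically) that the Fire Strike graphics score scales linearly with core clock, which ignores the fixed memory bandwidth, a back-of-the-envelope estimate looks like this:

```python
# Back-of-the-envelope: core clock needed for an RX 480 to match a GTX 1070
# in Fire Strike, assuming linear scaling of score with core clock.
# The figures are the rough numbers quoted above, not exact benchmark results.

target_score = 16000    # ~GTX 1070 Fire Strike graphics score
known_score = 14000     # ~Polaris 10 score at the clock below (on LN2)
known_clock_mhz = 1600

required_clock_mhz = known_clock_mhz * target_score / known_score
print(f"~{required_clock_mhz:.0f} MHz")  # well above 1700 MHz
```

In other words, even under generous linear scaling the requirement lands above 1800 MHz, which is why 1700 MHz reads like a bare minimum.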
 

jaymc

Weren't they supposed to have tweaked the fabrication process recently...? I don't think it's Vega; if it is, they've stuck a limiter on it.
 

Yuka

Splendid
The problem for AMD here is how to tweak GCN to squeeze extra MHz out of it without throwing power efficiency out the window. nVidia did an excellent job of tweaking Pascal to get a lot of extra hertz out of the GPU.

This is just a broad sentiment I have about GCN, but AMD is being very stubborn about not dropping parts of GCN that just waste space in a card aimed at the consumer market first and the Pro market second. Keeping full HSA compliance when I haven't read anywhere that it's actually being used is just stubbornness in my eyes.

In any case, Vega... I'm still thinking they'll have a GDDR5X variant and a HBM2 variant.

Cheers!
 

mitch074



Thing is, HSA doesn't exactly consume silicon: it's an (actually successful) attempt at creating a publicly documented, GPU-based, massively scalar coprocessor. The only things you could call a "waste" are the hardware schedulers, graphics command units and asynchronous shader units, all elements that Nvidia also has (in some measure, although theirs are not HSA-compliant) and that games made to use them sure do enjoy (cf. Doom 2016 in Vulkan mode). They do require newer APIs (DX12 or Vulkan, used as such and not as "DX11 with croutons"), but they can be massively useful.

Most current games still use older DX11 render paths even in DX12 mode; they mostly enjoy the lower CPU overhead on draw calls and multi-queue job submission, but almost none make use of async compute or directly managed VRAM addressing. Future Unreal Engine 4, Serious Engine or id Tech based games should finally start to make use of those technologies and really unleash the power of GCN cards.
 

Yuka



That is very interesting, thanks for that.

In my mind, HSA hasn't been all the glory and boom they might have hoped for, but it seems it has paved the way for interesting things in Vulkan and DX12, and probably future APIs as well. Again, thanks for pointing that out.

So, do you have any insight into what Vega brings to the table that might make it a worthy 1080 / 1080 Ti adversary?

Cheers!
 

neiliohep

Reputable
Jan 2, 2015
Not gunna lie, I fully expect the Vega 10 flagship to beat the Titan XP; if it doesn't, then AMD has let us all down pretty badly.
 
If the rumored die size is true, then Vega should be able to compete head-to-head with Nvidia's GP102. But the target isn't the Titan XP; it's the Quadro P6000. It depends on whether AMD can beat it by a wide margin or not. If they can, Nvidia will most likely be forced to bring a 600mm2 behemoth into the game. If anything, there is one thing Nvidia will never give up: the single-GPU crown. That is what has led to their market-share dominance since the Fermi generation.
 

mitch074



No idea :p I'm happily rocking my Polaris, and since I'm not looking to change it any time soon I've pretty much lost interest in GPUs for the moment.
 

AndrewJacksonZA

Distinguished
Aug 11, 2011
I just bought an MSI RX 470 Gaming X 4GB yesterday for about USD 165! I'm soooOOOooOOOooo excited to start playing BF1 when the card arrives! I haven't played a proper AAA game in a number of years!

My 6670 has served me well for the past six years and deserves to be heading into a happy retirement, perhaps as an HTPC box.
 

Yuka



Are they going with a ~600mm2 die like the Fury siblings, then? I hope they don't do another Fury, though... HBM delivered good performance at high resolutions, but the card still wasn't the star we wanted.

Plus, the bigger the die, the harder it will be to cool and the harder to keep costs down. Following the trend, if they keep pushing the 300W power envelope it's going to be another big room heater. Performance notwithstanding, it's going to be a hot card, methinks.



Same here, but I still want to know if I should give my RX480 to my GF and get myself a Vega or just get her an RX480 and keep mine.

Still, I can understand where you're coming from. If you come up with any ideas, please let us know anyway. I'd say it's still interesting to speculate and see how the technology unfolds.

Cheers!
 


In the past, VCZ tried to predict the GP104 die size from leaked pictures. They estimated GP104 at around 333mm2; the real GP104 measured 314mm2.
https://videocardz.com/59266/nvidia-pascal-gp104-gpu-pictured-up-close

Using the same method they used for GP104, they estimate Vega to be in the range of 520mm2-540mm2:

https://videocardz.com/65477/amd-vega-gpu-pictured-features-two-hbm2-stacks
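For anyone curious, that method is essentially photo scaling: take a package component of known physical size, derive a mm-per-pixel ratio, and apply it to the die's pixel dimensions. A minimal sketch, where the HBM2 stack footprint (~7.75 mm x 11.87 mm) is the assumed reference and all pixel counts are made-up placeholders, not measurements off the real photo:

```python
# Photo-scaling sketch: estimate a die's area from a picture using a
# component of known physical size (here an HBM2 stack) as the ruler.
# All pixel values below are hypothetical placeholders.

hbm2_w_mm, hbm2_h_mm = 7.75, 11.87   # assumed HBM2 package footprint
hbm2_w_px, hbm2_h_px = 155, 237      # hypothetical pixels measured in the photo

# Average the horizontal and vertical scales to reduce measurement error.
mm_per_px = ((hbm2_w_mm / hbm2_w_px) + (hbm2_h_mm / hbm2_h_px)) / 2

die_w_px, die_h_px = 460, 465        # hypothetical die dimensions in pixels
die_area_mm2 = (die_w_px * mm_per_px) * (die_h_px * mm_per_px)
print(f"~{die_area_mm2:.0f} mm^2")
```

With real pixel measurements off the actual photo, this same arithmetic is what lands estimates in the 520-540mm2 range.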
 

Yuka



I can agree with that, seeing the pictures they put up at the scale they showed. Good that they're trying to make it a tad smaller, but I kind of miss seeing four HBM2 stacks there as "cache".

I wonder if they'll keep GDDR5 or move to GDDR5X as general memory. Although, it kind of makes little sense... that memory shenanigan sounds like it will add a lot of latency whenever you hit a cache penalty.

EDIT: According to various sites, the AMD conference at GDC didn't really give any interesting insight into Vega... it will be called "RX Vega". AMD is really flushing the brandings this generation, haha.

Cheers!
 
For cost reasons I think AMD will stick with regular GDDR5 for low-end cards; not sure about mid-range cards like the RX 480, though. Because of its higher spec, GDDR5X is probably a bit more expensive than older GDDR5. If there is a performance advantage to be had, they will most likely use it; but if there isn't much, why raise the cost of the product just for the sake of slapping more "modern" stuff on the card?

For consumers, Vega will most likely top out at 8GB of VRAM, and this time they are really trying to do something so that the limited VRAM won't hamper the card's performance (that high-bandwidth cache thingy). The second reason is most likely cost and yield.

The interesting thing will be FP16 usage in games. It seems AMD is trying to encourage game developers to take advantage of FP16, while Nvidia seems to be doing the opposite (hence Nvidia limiting FP16 performance on GeForce starting with Pascal).
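To illustrate what FP16 trades away, here is a CPU-side sketch using the half-precision ('e') format from Python's stdlib `struct` module; it's not shader code, but the storage-versus-precision point is the same:

```python
import struct

def to_fp16(x):
    # Round-trip a Python float through IEEE 754 half precision
    # (struct's 'e' format: 2 bytes, vs. 4 for FP32 and 8 for FP64).
    return struct.unpack('e', struct.pack('e', x))[0]

# FP16 halves register and bandwidth cost, but carries only ~3 decimal
# digits of precision, so small contributions simply vanish.
print(to_fp16(1.0 + 0.0001))  # 1.0 -- the 0.0001 is lost in half precision
print(1.0 + 0.0001)           # 1.0001 survives at full precision
```

That halved storage is why shading at FP16 can run faster on hardware with double-rate FP16, as long as the math tolerates the reduced precision.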
 
