AMD Radeon Vega RX 64 8GB Review


zippyzion

Distinguished
Conspiracy theory time...

So, there has been some testing on the Frontier Edition showing it to be a fairly competent mining card, but so far testing on the release RX versions has shown that they are pretty meh at mining for the price. Then there is the GamersNexus review that Fluffy_Hedgehog mentioned, which shows that modding and clock tweaking lowers power usage and increases performance on the 56... so... what if AMD gimped them to get them into the hands of gamers? Make them unpalatable to miners to keep them out of their hands, then fast forward to the decline of this mining craze and AMD sends out a BIOS flash and driver update that unlocks the cards' intended performance at lower power.

Crazy? Yeah, I know. That is why I call it a conspiracy theory.
 

TJ Hooker

Titan
Ambassador

Modding/tweaking clocks and voltage always has the potential to increase performance and/or decrease power; that's kinda the point of tweaking in the first place. The last several generations of AMD cards tend to undervolt pretty well, which lowers power consumption and therefore temperature. If the card was hitting power or thermal limits, the undervolt would also increase performance, since the card spends less time throttling.
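
For anyone wondering why a small undervolt helps so much, here's a rough back-of-the-envelope sketch. It assumes dynamic switching power scales roughly with V^2 * f and ignores leakage; the voltages and clocks are purely illustrative, not measured Vega figures.

```python
# Rough illustration: dynamic switching power scales roughly with V^2 * f,
# so a modest undervolt at the same clock trims power (and heat) noticeably.
# Voltages/clocks are illustrative only, not measured Vega figures.

def relative_dynamic_power(volts, clock_mhz, ref_volts=1.20, ref_clock_mhz=1600):
    """Dynamic power relative to a reference operating point (P ~ V^2 * f)."""
    return (volts / ref_volts) ** 2 * (clock_mhz / ref_clock_mhz)

# Dropping from 1.20 V to 1.10 V at the same 1600 MHz core clock:
print(f"{relative_dynamic_power(1.10, 1600):.0%} of stock dynamic power")
# -> roughly 84%, which is why undervolted cards run cooler and throttle less
```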

AMD may prefer that gamers buy their cards over miners, but gimping Vega performance at launch (which is what most people see and remember) would just be shooting themselves in the foot. After all, they make the same profit whether it's a miner or gamer buying the card.
 

zippyzion

Distinguished
@TJ Hooker

I did say conspiracy theory. I think in general they don't need to make sense to be a thing.

I'm not going to defend it past the "they could do it gradually" point because I know how absurd it sounds. But, hey, gotta get in on the ground floor here. Crazy doesn't spout itself.
 

Michael_209

Commendable
Still using my R9 290 in an i7-5820K rig with a 40" 4K monitor. Really wanted to upgrade, but I'm really disappointed after all the hype. I'll stay with the R9 290; it works great and runs all my games satisfactorily at 4K. Not worth making a purchase with these paltry benchmarks and the rip-off pricing of the 1080 line. I was willing to buy one, but this price-squeeze game is not for me right now; I'll wait till something worthwhile comes along.
 

InvalidError

Titan
Moderator
Looks like Vega is too little, far too late and somewhat overpriced when you consider that it uses ~100W more power than similarly performing alternatives, despite HBM2 supposedly bringing better efficiency.

It would have been decent a year ago but today, it seems pretty 'meh' and very much not worth the fanfare AMD tried to build around its launch.
 

jdwii

Splendid
Just as I thought: overall slower than a 1080 unless it's running Vulkan or DX12, while consuming almost as much power as two 1080s in SLI.

At least it's AMD's return to the high end for GPUs, but with Volta around the corner, Nvidia has this in the bag. The one on top always dictates prices.
 

mapesdhs

Distinguished
Perhaps a side effect of the Tom's site login update, but we seem to have returned to the days of the borked comments section. Duplicated posts, the voting buttons are gone (for me anyway), and no link to the main site where editing posts, etc. is much easier (have to go via Followed Threads from one's profile). Could this be why AT's article has 3X more comments posted? Or maybe it's just because they covered the 56 as well. Guys, please fix the comments system; it's been months now and the .com setup is still painful.

Re the 64, just too power hungry for my liking, and as CA says the delay has given NV time to prepare. Scan UK has the Vega64/Air priced one hundred UKP higher than a stock 1080 (60 more than a 1080 @ 1759MHz base!), which is just nuts, while a Vega64/Liquid costs more than a 1080 Ti! I can't see the Vega64 selling that well to gamers at such prices, at least not in the UK anyway (so much for that $500 launch price, it's the equivalent of $750 here). Maybe the 56 will do better, especially against the 1070.

(just read the AT piece; more impressed with Vega56's performance, less impressed with its power/noise though)

Chris, one thing, what causes the 1080 Ti's minimum fps to plummet so much for RotTR at 1440p? It behaves normally at 4K, while at 4K the 1070 looks like it's been punched in the goolies re its minimum showing. Weird.

Ian.

 

bit_user

Polypheme
Ambassador
I'm trying to remember where I read how much of the increased transistor budget in Vega was spent just on extra pipeline stages and buffers to make it clock higher than Fury X. ...because that worked so brilliantly for the Pentium 4.

I'm also reminded of the hundreds of efficiency improvements (can't find the exact number) Jen-Hsun Huang claimed had gone into Pascal during the launch presentation. It seems like the way to build a fast GPU might be to focus first on efficiency. Then you have headroom you can afford to spend on increased clocks.
 

InvalidError

Titan
Moderator

Deep pipelines are horrible for desktop CPUs because they increase the effort required to find enough instructions eligible for out-of-order execution to maintain IPC. GPUs don't care about individual threads' IPC, only overall throughput. That's why they can get away with tons of simple multi-threaded single-issue in-order cores running at comparatively low frequencies.

The design goals are completely different, and some design choices make more sense for one than the other. Nobody would want to go back to single-issue in-order cores on the desktop; that'd be a 5-10X reduction in IPC and would make even casuals cringe. It works fine for GPUs because their typical workloads are embarrassingly parallel and scale to thousands of threads with limited effort.
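
To make "embarrassingly parallel" concrete, here's a toy sketch (plain NumPy standing in for a shader; the function and numbers are hypothetical, just for illustration). Every output element depends only on its own input, so the work splits across millions of independent work items with no coordination, and no single thread's IPC matters, only aggregate throughput.

```python
# Toy example of an "embarrassingly parallel" workload: every output pixel
# depends only on its own input pixel, so a GPU can hand each element (or a
# small block of elements) to its own thread; only total throughput matters.
import numpy as np

def shade_pixels(pixels: np.ndarray) -> np.ndarray:
    # The same simple operation applied to every element independently;
    # NumPy runs it as one vectorized pass, a GPU would run it as millions of threads.
    return np.clip(pixels * 1.2 + 0.05, 0.0, 1.0)

frame = np.random.rand(2160, 3840, 3)   # one 4K RGB frame, values in [0, 1]
shaded = shade_pixels(frame)            # ~25 million independent work items
```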
 

bit_user

Polypheme
Ambassador

Not that I disagree (although GPUs aren't single-issue), but you're just going in a different direction than I was. I was more concerned with the additional power and die area consumed in the pursuit of higher clocks. I'm not convinced they wouldn't have been better off just using those transistors on more CUs and clocking them lower.

Anyway, I think I found what I was remembering:
Talking to AMD’s engineers, what especially surprised me is where the bulk of those transistors went; the single largest consumer of the additional 3.9B transistors was spent on designing the chip to clock much higher than Fiji. Vega 10 can reach 1.7GHz, whereas Fiji couldn’t do much more than 1.05GHz. Additional transistors are needed to add pipeline stages at various points or build in latency hiding mechanisms, as electrons can only move so far on a single (ever shortening) clock cycle;
(Source: http://www.anandtech.com/show/11717/the-amd-radeon-rx-vega-64-and-56-review/2 ...although that text must've also been included in an earlier article, where I originally read it)

They go on to claim this has similarities with Pascal, which sort of undermines my point. But, I still think the key difference might have something to do with Pascal aiming for efficiency first, then clock speed.

I wonder if Vega is still using 64 SIMD lanes per CU. That could be another strike against it, as the GP104 has 128 lanes per SM.
 

bit_user

Polypheme
Ambassador
The other thought I can't shake is how much impact all the non-graphics features are having on gaming performance & efficiency. Things like int8, fp16, and the HBCC are clearly aimed at deep learning and general-purpose/high-performance compute. Nvidia GPUs have some of that stuff, but not all of it in consumer-oriented GPUs, nor all of it in any one Pascal GPU.

Like Ryzen, Vega feels like it's leveraging consumer markets to attack the cloud. Was it trying to be all things to all users? Did it even go as far as prioritizing cloud applications over gamers and VR?

But, I'm sure that's not the whole story, or else you'd expect Polaris (which lacks such features) to be stronger than it was.
 

InvalidError

Titan
Moderator

I think it is quite clear that AI, analytics, cloud, etc. (where they can sell the same GPUs for 2-10X the price) are where both AMD and Nvidia are heading as their primary design focus, and gaming is slipping. A few years from now, we'll be running graphics on compute accelerators that simply happen to have display output ports.
 

Karadjgne

Titan
Ambassador
With SLI/CF basically on the back burner, and cards like the 1080 Ti making mGPU an 'eh' consideration even in DX12, it makes me wonder just how much Vulkan is going to play into Vega's performance in the future.
 

Nintendork

Distinguished
It's good to only test 1440p and 4K. AnandTech tests at 1080p, which is a worthless resolution for high-end GPUs; it's simply a waste of money and unused potential to buy this kind of GPU for such a low res. Just get an RX 580/1060 and be happy; everything above that is overkill.
 

bit_user

Polypheme
Ambassador

That's what I thought, until sometime within the past year or so. Now, we're starting to hear about machine learning accelerators that are significantly out-stripping even the fastest GPUs. It remains to be seen just how much these are breaking the mold of the GPU vs. clever hardware and software optimizations of architectures that are still quite adept at high-performance graphics rendering.

It will also be interesting to see which of the machine learning features trickle down from the V100 to Volta's consumer models. For instance, I don't expect to see its tensor unit showing up anywhere else in the model range. And perhaps even the double-rate fp16 capability of the P100 might not reach any lower than the V102.

I also wonder whether the inferencing features of the lower-end GPUs will be included in the V100, or whether Nvidia will continue to maintain a dichotomy between inferencing-oriented and training-oriented GPUs (unlike what AMD did with Vega). This functions not only as an effective market segmentation scheme, but also reduces the amount of non-graphics-centric functionality on each die. Above, I was questioning how much AMD's decision to include everything in Vega ended up compromising its graphics prowess. How much of that additional pipelining and latency-hiding would've been necessary without all of it?

Lastly, I wonder how much the efficiency-first mentality mandated by their mobile/embedded Tegra GPUs informed their desktop/server designs. This could also help explain the efficiency disparities between the two companies' designs.
 

footman

Distinguished
I invested in a FreeSync monitor and now need something better than my current RX 580 to power it. I will likely just buy the Vega 56 once the custom designs come to market, or buy a stock version and waterblock it, assuming I can find one at $399 USD. People are citing the increased power requirements as a major showstopper for buying Vega. Let's say you buy Vega and full-load power draw goes up by 85W over your old card; the cost of that additional power is minimal, based on my power costs here in Nevada. If I play 20 hours of games a week, I end up paying an additional 20 cents per week (at 12c per kWh in Nevada), using the calculation: wattage x hours used ÷ 1000 x price per kWh = cost of electricity.
If I refer to the gaming load results at AnandTech, during BF1 there is a difference of 51W between the RX 580 and the Vega 56, and 142W between the RX 580 and the Vega 64. Using the example above, the swap to a Vega 56 will cost an additional 12 cents per week and a swap to a Vega 64 an additional 34 cents per week.

Perhaps my math is wrong? While the talk of increased TDP and power requirements is initially alarming, the additional cost looks to be minimal. These increased power requirements won't stop me from buying Vega.

Having said this, the additional cost will hurt miners who are running their machines 24/7 to mine...
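
For anyone who wants to check those numbers, here is a quick sketch using the same formula quoted above. The wattage deltas and the 12c/kWh rate are the ones from this post; the 24/7 case at the end just extends the same math to 168 hours a week.

```python
# Back-of-the-envelope check of the electricity-cost estimates above.
# Formula from the post: watts * hours / 1000 * price per kWh = cost.

RATE_USD_PER_KWH = 0.12   # Nevada rate quoted above
HOURS_PER_WEEK = 20       # gaming hours assumed above

def weekly_cost(extra_watts, hours=HOURS_PER_WEEK, rate=RATE_USD_PER_KWH):
    """Extra electricity cost per week for a given increase in load power."""
    return extra_watts * hours / 1000 * rate

for label, delta_w in [("generic 85 W example", 85),
                       ("RX 580 -> Vega 56 (51 W)", 51),
                       ("RX 580 -> Vega 64 (142 W)", 142)]:
    print(f"{label}: ${weekly_cost(delta_w):.2f}/week")   # $0.20, $0.12, $0.34

# A miner running 24/7 (168 hours/week) pays proportionally more:
print(f"Vega 64 delta at 24/7: ${weekly_cost(142, hours=168):.2f}/week")  # ~$2.86
```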
 

Karadjgne

Titan
Ambassador
Then there's the other side: components already owned by consumers. Last-generation cards, with a decent CPU OC, could still be effectively powered by a decent 550W PSU. So that's what many bought, and they spent hours getting all that wiring tidied up, bundled up, sleeved, whatever. Now throw in a Vega that looks to need a 650W to cover the bases, and the outlay becomes the card + PSU + time + expense. When faced with that versus a 1080 for 1440p gaming, I'll keep my sleeved cables and the extra $100 a new PSU would cost, and deal with missing those extra 10 fps or so. I'm in it to game, not to benchmark.
 