AMD Vega MegaThread! FAQ and Resources


goldstone77
[video="https://www.youtube.com/watch?v=t8K2yc11eC4&ab_channel=HardwareUnboxed"][/video]
Ryzen 5 1600 vs. Core i7-7700K using Vega 64 & GTX 1080
Hardware Unboxed
Published on Sep 15, 2017


Core i7 vs. Ryzen 5 with Vega 64 & GTX 1080
Clash of Titans
By Steven Walton on September 15, 2017


Side-by-side Breakdown of the Results
Not that long ago I compared the overclocked R5 1600 and i7-7700K in 30 games using the GTX 1080 Ti, and at 1080p the Ryzen CPU was on average 9% slower. Here we see with the slower GTX 1080 the Ryzen CPU was just 5% slower, so that's pretty well in line with previous findings.
What's interesting to note about this side-by-side game comparison is that in the more modern and well-put-together titles, Ryzen is extremely competitive. In fact, the only real head-scratchers here are Hitman, Dawn of War III and Total War: Warhammer.
[Chart: GTX 1080 results at 1080p]

In many other games such as Dirt 4, Doom, Sniper Elite 4, Battlefield 1, Rise of the Tomb Raider, The Division, Prey, Overwatch and Resident Evil 7 we found Ryzen to be extremely competitive. These results are of course based on the GTX 1080 handling the rendering work, so let's see how things look at 1080p with Vega 64.
[Chart: Vega 64 results at 1080p]

Dropping in Vega 64 we see that overall Ryzen is actually slightly slower, as it now trails the 7700K by a 7% margin. Although the Ryzen 5 1600 does much better in Civilization VI, it now struggles in quite a few more titles than it did with the GeForce graphics card, and we'll take a closer look at that in a moment.
For now let's see how things change at 1440p.
[Chart: GTX 1080 results at 1440p]

At 1440p we are more GPU limited, but even so we saw some strange things when comparing the Core i7 and Ryzen CPUs. A number of times Ryzen was at best able to match the 7700K at 1080p, yet delivered noticeably better results at 1440p. Only in Hitman is Ryzen slower by a margin of 10% or greater, and as a result it is now just 2% slower overall.
[Chart: Vega 64 results at 1440p]

Moving to the Vega 64 results at 1440p, we again find that overall Ryzen was a mere 1% slower than the Core i7 processor. The margins on a per-game basis though are significantly different to what we just saw with the GTX 1080 so let's explore that a little closer.
I guess one takeaway here is that it's bad to generalize. For example, claiming that Nvidia's DX12 performance handicaps Ryzen is certainly not true in all titles, though we might start to see more of this as newer games take better advantage of modern PC hardware.
For now though, Ryzen isn't always superior in DX12 titles and we can look to Hitman as an example. The 7700K is miles better in this game. Vega 64 doesn't always perform better with Ryzen either, as seen in titles like Dawn of War III, F1 2016 and Rise of the Tomb Raider where we witnessed the R5 1600 doing much better with the Nvidia GeForce GPU.
We also saw how much more the higher 1440p resolution brings both Ryzen and Vega into play. Ryzen still did well at 1080p for the most part, though Vega certainly appears much more competitive at 1440p versus 1080p.
Overall, the higher-end Vega 64 parts don't offer a great value while the complete opposite is true for Ryzen. If I didn't have money to burn, which I don't, and I was building a gaming system today intending it to last for the next three, four or possibly even five years, then I'd invest in the Ryzen 5 1600 rather than the more expensive Core i7-7700K, especially if $500+ GPUs aren't in your future.
Bottom line, it's safe to say that it doesn't matter what GPU reviewers use to compare AMD and Intel CPUs and it doesn't matter what CPU reviewers use to compare AMD and Nvidia GPUs either. It's all fair game.
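As an editorial aside: figures like "5% slower overall" typically come from averaging the per-game performance ratios, often with a geometric mean so a single outlier title can't skew the result. A minimal sketch of that calculation, using made-up FPS numbers rather than the article's actual data:

[code]
// Sketch: averaging per-game CPU performance ratios the way reviewers
// commonly do. The FPS pairs below are placeholders, not real results.
#include <cmath>
#include <cstdio>
#include <utility>
#include <vector>

int main() {
    // Hypothetical per-game average FPS: {Ryzen 5 1600, Core i7-7700K}.
    std::vector<std::pair<double, double>> fps = {
        {92.0, 101.0}, {143.0, 149.0}, {76.0, 88.0}, {120.0, 118.0}};

    // Geometric mean of the ratios: average the logs, then exponentiate.
    double logSum = 0.0;
    for (const auto& [ryzen, intel] : fps) logSum += std::log(ryzen / intel);
    const double geomean = std::exp(logSum / fps.size());

    std::printf("Ryzen averages %.1f%% slower\n", (1.0 - geomean) * 100.0);
    return 0;
}
[/code]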

With less powerful GPUs the FPS gaps disappear, and for the average budget gamer using a midrange card you won't notice much difference between Ryzen and Intel, depending on the titles.
It's interesting that with Vega the 1600 beat the 7700K in a few games at 1440p. Maybe with 7nm Vega and some actual availability we could see Ryzen doing very well at higher resolutions in the future.
 

It would be interesting if they could test how the 7700K and 1600 do with an even faster GPU, such as a 1080 Ti or even 1080 Ti SLI.

Maybe with 7nm Vega....

Honestly, I don't know if the 7nm node will be ready in time for AMD to use. For a GPU with a massive die, I still believe we are not going to see such a process ready in the 2019 timeframe at the earliest. Early on we heard talk about AMD using GF's 7nm process; now they are supposed to use TSMC's 7nm instead. But AMD should not rely heavily on node shrinks to improve their architecture's efficiency: Polaris and Vega show that a node shrink alone does not help much.
 

goldstone77

Link to the ThreadRipper thread, which has benchmarks with the 1800X and 7700K with 1080Ti.
I actually arranged the benchmarks by DirectX 11/12 and resolution for each game tested, for an easier comparison. You can see how much some games love higher memory bandwidth. It's the Vega 64 results with Ryzen at higher resolutions in a few games that are really eye-catching. It looks like some kind of synergy is happening there.


TSMC's 7nm is supposed to be out this year; they just had an event talking about their roadmap (Click here for link). I agree there should always be a continuous process of improvement beyond what the process node provides, but regardless, the node should help to a significant degree.
Cliff made comparisons between 16nm and 7nm giving 7nm a 33% performance or 58% power advantage. 7nm is now in risk production with a dozen different tape-outs confirmed for 2017 and you can bet most of those are SoCs with a GPU and FPGA mixed in. 7nm HVM is on track for the first half of 2018 followed by N7+ (EUV) in 2019. N7+ today offers a 1.2x density and a 10% performance or 20% power improvement. The key point here is that the migration from N7 to N7+ is minimal meaning that TSMC 7nm will be a very “sticky” process. Being the first to EUV production will be a serious badge of honor so I expect N7+ will be ready for Apple in the first half of 2019.
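Putting those quoted numbers together (compounding the two steps is my own back-of-envelope extrapolation, not a TSMC claim):

[code]
// Sketch: compounding TSMC's quoted node-to-node gains.
#include <cstdio>

int main() {
    // Quoted 16nm -> N7: +33% performance at iso-power, or 58% less
    // power at iso-performance.
    const double n7Perf = 1.33;
    const double n7Power = 1.0 - 0.58;

    // Quoted N7 -> N7+ (EUV): +10% performance or 20% less power.
    // Multiplying the two steps is an extrapolation, not a quoted figure.
    const double n7pPerf = n7Perf * 1.10;
    const double n7pPower = n7Power * (1.0 - 0.20);

    std::printf("16nm -> N7+: ~%.0f%% faster at the same power,\n",
                (n7pPerf - 1.0) * 100.0);
    std::printf("or ~%.0f%% of the power at the same performance\n",
                n7pPower * 100.0);
    return 0;
}
[/code]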
It explains the payouts AMD has already made to GlobalFoundries when they restructured their deal. Lisa Su did emphasize that they would use the leading process. It seems AMD knew, or had a good idea, this was going to happen because of the tight node race between Samsung, TSMC, and GlobalFoundries. Samsung's early adoption of EUV is the most aggressive roadmap going forward, with 4nm by 2020.
TSMC's innovative CoWoS® advanced packaging technology integrates logic computing and memory chips in a 3-D way for advanced products targeting artificial intelligence, cloud computing, data center, and supercomputer applications. This revolutionary 3-D integration facilitates power-efficient high-speed computing while reducing heat and CO2 emissions.

In 2012, TSMC successfully used CoWoS® to integrate four 28nm chips, providing customers high-performance FPGA components with the shortest time-to-market. In 2014, TSMC produced the world's first 16-nm three-chip integrated device with networking capabilities using CoWoS® technology.


Then in 2015, TSMC developed and qualified a super large interposer (greater than 32mm x 26mm) using CoWoS-XL technology. This process integrates multiple large advanced chips on a single CoWoS® module. Production in 2016 and 2017 included an FPGA with a 20nm multi-chip structure and super-high-performance computing chips that integrated a 16nm SoC with next-generation HBM2 DRAM. Currently, TSMC is developing a 7nm CoWoS® advanced packaging technology.
Click here for link

In addition to their 10nm processes already in production, Samsung plans to offer risk production of 8LPP this year, 7LPP in 2018, 6LPP and 5LPP in 2019 and 4LPP in 2020.
Click here for link
 
Well, that is what TSMC's marketing says; we still need to see how it works out in reality. TSMC used to say that FinFET was not needed for the 20nm process, and the end result was that both Nvidia and AMD were stuck on the 28nm process for another generation. Also, most early nodes are only suitable for small chips; for bigger chips, they most often need to wait for process improvements or even a specific process variant for big dies. Honestly, I don't think we are going to see a 600mm2 die built on a 7nm process in the 2018 timeframe.

 

goldstone77
It's true that it's getting harder to create smaller processes, but going smaller has more to do with cost than anything else. They have been trying to figure out how to do it more cheaply; every node offered cost savings until recently, and that is the biggest problem they have been trying to solve. Samsung, as an early adopter of EUV, is reaping the rewards with an aggressive node roadmap.
 


I'd say their track record has been more like a roller coaster, lol. 40nm was hit with yield issues; it hit Nvidia the hardest at the time, since Nvidia launched its biggest chip first back then. 28nm was quite good, but they were only able to bring it out on time after canceling their 32nm plans. 20nm? Again, it was bad, to the point that Nvidia and AMD had to skip it entirely for GPUs (at least the disaster that was 40nm still allowed AMD and Nvidia to use the process for their GPUs). AMD in the end had to scrap some of their 20nm designs, which cost them money. Nvidia was probably aware of this trend, hence they did not wait for TSMC's 7nm and instead further enhanced the existing, more proven 16nm node into the "12nm" FFN.
 

TwilightRavens
Yeah, that was one of the reasons I didn't buy a Vega: TSMC didn't manufacture the chip, GF did (or at least every bit of research I did said GF did). From what I've noticed, AMD has always had better GPUs when they went with TSMC over GF. Look at the performance of the 290X and 780 Ti, for example.
 

TwilightRavens


Probably, like they said they will do with Ryzen before Zen 2 launches.
 
I am also one who thinks most of Vega's problems stem from trying to make GCN survive longer.

I would imagine this 12nm transition, if it does happen, would help alleviate some of the pain, but it is not a solution, yeah.

Cheers!
 

TwilightRavens


Yeah, you are probably right. I just thought it was weird that they went with GF, and wondered if that's why it was so power hungry. I mean, I thought my 290X drew a lot of power, but dang, Vega makes Hawaii XT and Grenada look like they sip power.
 
GF's 14nm was licensed from Samsung, so if there were a problem with the process itself, it might also affect Nvidia's GP107, which is manufactured on Samsung's 14nm process. And yet GP107 still clocks pretty much as high as the rest of Pascal, and its efficiency is still pretty much intact. Though I still believe that when it comes to large dies, TSMC is the most experienced fab out there (thanks to Nvidia asking them to fab massive chips for years).

 
It's not the process, it's the architecture. AMD has designed in resources that, while not great for gaming or contributing much to it, are really good for other things, like mining. AMD gives the average consumer performance that most never use or need, and I think that is where a lot of the power consumption comes from. I think AMD needs to further refine the architecture, but they do offer compute performance where Nvidia lacks it. I wonder what Nvidia's power consumption would be if they included all the resources AMD has baked in.
 


Lol, I'd dare to bet Nvidia could do it in a more power-efficient manner for one reason: Nvidia has separate designs for compute cards and gaming cards. Just look at the GP100 SM configuration; it is not the same as gaming Pascal's.
 


AMD definitely has a "one arch to rule them all" thing going, but that is more than likely due to budget limitations. Their power consumption is up there, but manageable; how much it matters depends on how much your electricity costs. If I were in the market for a GPU, I'd probably get a Vega 56; above that, probably the 1080, even though nVidia drivers have been nothing but a headache for me.
 
AMD definitely has a "one arch to rule them all" thing going.

Nvidia used to do this as well, but they realized that since Kepler they could no longer go that route. From what I heard, Kepler's SM configuration was very good for FP64 workloads but not so much for FP32 (which is what games use). That's why Nvidia reworked Kepler's 192-core SMX into Maxwell's 128-core SMM, and within that 128-core SMM the cores were divided further into 4 clusters. This was all done to increase GPU utilization. This is the difference between AMD and Nvidia: for AMD to increase utilization, they need async compute (via the ACEs inside their hardware), but async compute was not included in the DX spec until DX12, so to use async compute AMD needs DX12. Nvidia, on the other hand, made changes directly to the architecture itself to increase utilization; done this way, they don't need a specific function to be supported in DX, so even DX11 games get the benefit.
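For reference, this is roughly how a DX12 title opts into async compute: it creates a dedicated compute queue alongside the normal graphics (direct) queue, and hardware with independent compute schedulers (AMD's ACEs) can overlap work from both queues. A minimal, Windows-only sketch with error handling omitted:

[code]
// Sketch: exposing async compute in D3D12 via a second command queue.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    // The usual graphics queue: draws and raster work, as in any DX12 title.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> gfxQueue;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    // A separate compute-only queue. Dispatches submitted here may run
    // concurrently with the graphics queue, filling otherwise idle shader
    // units; on GCN the ACEs schedule this queue in hardware.
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));
    return 0;
}
[/code]

Whether the second queue actually overlaps with graphics is up to the driver and hardware, which is why the same DX12 code path can help one GPU and do nothing for another.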

Even though nVidia drivers have been nothing but a headache for me.

What's the problem with nVidia drivers?
 


I bought an ASUS FX502VM laptop last year that has no iGPU, but a 1060 3GB. I added an SSD and fresh-installed Windows 10 and nVidia drivers from the website. This was back in November. Since about April this year, I have not been able to update the driver without uninstalling, running DDU, then CCleaner, and then I might be able to install the latest driver. More than half the time I have to make more than one attempt to get the driver in. nVidia acknowledged the issue for a bit, but no longer, and the issue has not been resolved. The GPU runs fine, but the driver install is bugged, so I have skipped multiple driver updates. I did the one that is current now and... same issue. I even tried reinstalling Windows 10 to make sure it wasn't a Windows issue.

On the flip side, I have been running AMD GPUs in several other machines for many years and have never had a problem, especially with updating.
 
Hmm, I'm not really sure how it is on the laptop side of things, but on my Win 10 desktop I don't have any issues with driver installs. Recently, though, there have been problems with Microsoft's Creators Update affecting nVidia GPUs, and from my observation it seems to affect only Pascal-based GPUs. There are a few other issues that probably affect both nVidia and AMD GPUs equally, like HDR support in games: both AMD and nVidia have their own settings to make sure a game works properly with HDR enabled, but there are updates from MS that overwrite the GPU maker's settings, making HDR not work as it should.

It seems to me MS wants more control over the operating system, including driver updates. Remember the early days of Win 10, when there were lots of issues with nVidia drivers? To a certain extent, I suspect nVidia did that on purpose to show MS that taking the ability to manage driver updates out of users' hands, and forcing them to accept whatever MS has decided for them, is a bad idea. Back then, even if you tried to roll back to a previous driver version, upon rebooting Win 10 would still forcefully download the "broken" driver with no way to stop the process. Because of that incident, MS canned the idea of forcing driver updates onto users' machines whether we like it or not.
 

goldstone77
AMD Launching RX Vega 32, 28 & A Dozen New Vega 11 Cards, GPU Passes Certification
By Khalid Moammer
Sep 22

AMD is readying several new graphics cards based on its Vega 11 GPU, which has just passed its final manufacturing certification. The graphics cards will be based on AMD's yet-unreleased Vega 11 XT and Vega 11 Pro GPUs.

The new graphics cards are expected to replace AMD's Polaris 10/20 GPUs found in the RX 480/470 and 580/570 graphics cards on the desktop as well as in laptops. The company is also preparing a variety of new Radeon Pro and Radeon Instinct accelerators based on the new GPU.
AMD Preparing To Launch 13 New Graphics Cards Based On Vega 11 GPU
A few days ago we reported that AMD's Vega 11 GPU is entering production and that the company had placed wafer orders some time ago with GlobalFoundries, its manufacturing partner, to produce the GPU dies. Yesterday, 13 different graphics card variants based on the new Vega 11 GPU passed their RRA certification in South Korea, which means that Vega 11 based graphics cards are now ready to enter the market.
[Image: Vega 11 RRA certification listing]

The RRA certification for Vega 10 appeared just a month ahead of AMD's Radeon Vega Frontier Edition announcement, so we're definitely nearing Vega 11's debut. There were only six graphics board variants based on the Vega 10 GPU mentioned in its RRA certification; Vega 11 has 13.

Unlike Vega 10, which was exclusive to the desktop, Vega 11 will be the first-ever HBM-based GPU to come to notebooks. So a number of those 13 certified boards are going to be mobile variants. If things go according to plan for AMD, Vega 11 should be ready to go into notebooks in time for the holiday season, alongside its Raven Ridge APUs.

According to information that surfaced a couple of months ago, we know that two of the 13 graphics cards will be RX Vega boards and two more will be Radeon Pro boards. Finally, an unknown number of the 13 variants are Radeon Instinct accelerators.
AMD Radeon RX Vega 32 & RX Vega 28 (RUMOR)
Whispers around the industry are that the two RX Vega boards based on Vega 11 will be RX Vega 32 and RX Vega 28. Vega 11 XT is rumored to have 2048 GCN stream processors, a 1024-bit memory interface and 4GB of HBM2, while Vega 11 Pro is rumored to have 1792 stream processors and the same memory interface and capacity. The cards are expected to go up against NVIDIA's GTX 1060. As with all rumors, take this information with a grain of salt. We should be able to learn more fairly soon, so stay tuned.
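For scale, here is what those rumored Vega 11 XT figures would work out to on paper. The clock and memory data rate below are placeholder assumptions of mine, not part of the rumor:

[code]
// Sketch: back-of-envelope throughput from the rumored Vega 11 XT specs.
#include <cstdio>

int main() {
    const double streamProcessors = 2048.0; // rumored
    const double busWidthBits = 1024.0;     // rumored HBM2 interface
    const double coreClockGHz = 1.4;        // ASSUMED, not from the rumor
    const double pinRateGbps = 1.6;         // ASSUMED HBM2 data rate per pin

    // 2 FLOPs per stream processor per clock (one fused multiply-add).
    const double tflops = streamProcessors * 2.0 * coreClockGHz / 1000.0;
    const double bandwidthGBs = busWidthBits * pinRateGbps / 8.0;

    std::printf("~%.1f TFLOPs FP32, ~%.0f GB/s memory bandwidth\n",
                tflops, bandwidthGBs);
    return 0;
}
[/code]

At those assumed clocks that lands roughly in GTX 1060 / RX 580 territory, which fits the positioning described above.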


 

goldstone77


[video="https://www.youtube.com/watch?v=dE-YM_3YBm0&ab_channel=AdoredTV"][/video]
Nvidia - Getting Away With (GPU) Murder
AdoredTV
Published on Sep 21, 2017
Nvidia's somewhat less than illustrious history is finally compiled into an easily-digested 15 minutes.

 
Interesting video and very true. I've known of all those issues, yet there is this myth about nVidia's great driver support.

You should see the look on people's faces (techies) when I tell them I still have a working laptop with a GeForce 8600M GT in it. As a precaution, I've replaced the thermal paste every 2 years since I found out about the defective GPUs. So far it works.

As for the driver that was cooking GPUs, I was lucky. I hadn't gotten around to updating the driver yet, so it didn't kill mine. That taught me to wait a couple of weeks before updating nVidia drivers.

I know both companies have had issues in the past, but for some reason AMD gets punished while nVidia gets a pass.


Back on topic though, the new lineup coming from AMD looks to be arriving in time for Volta. I hope they will have gotten power consumption under control. (LOL, nVidia got a pass on that as well... Fermi.)
 
Interesting video and very true. I've known of all those issues, yet there is this myth about nVidia's great driver support.

Depends on which angle you look at it from. AMD, for example, has dropped driver support much earlier than nVidia, so in that regard nVidia's driver support is indeed great. And AMD can be very late with their CrossFire profiles from time to time. I still remember, back in early 2015, AMD not releasing any driver update until the end of March 2015 (their previous driver release was December 2014). That might be fine for those who only use a single GPU, but those who owned a GPU like the 295X2 did not get CrossFire profiles for games coming out in Q1 2015 until the very end of Q1.

(LOL, nVidia got a pass on that as well... Fermi.)

Actually, no. There was the wood-screw fiasco, and then people ended up calling Fermi "Thermi", so I don't think people gave nVidia a pass. AMD even made a video specifically to poke at nVidia over how hot Fermi ran. But despite that, nVidia was able to take the top spot for themselves. Though AMD's decision to joke about Fermi might not have been a good one, because when you do that, you'd better make sure your own products don't suffer from the same issue in the future.
 