AMD Ryzen 2 vs. Intel Coffee Lake: What's the Best CPU Platform?


madmat9


True enough, but Meltdown/Spectre are extraordinarily bad, impacting a HUGE swath of CPUs going back many years. Nothing is "safe" or fool-proof, but that doesn't mean all bad things are equally bad - some are much worse.
 


I am just surprised that HBM didn't take off. I was super excited about it originally, but nVidia doesn't even seem to be interested in it for the consumer space. Their next GPU will use GDDR6, or so the rumors say; it will start at 16 Gb/s per pin and looks able to OC to 20 Gb/s, which on a 384-bit bus would be faster than the Tesla V100 (960 GB/s vs. 901 GB/s at the overclocked rate).
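For anyone who wants to sanity-check those bandwidth numbers, here's a quick back-of-envelope Python sketch (pin rates and bus width as quoted above; the V100 figure is the published HBM2 spec):

Code:
# Memory bandwidth (GB/s) = per-pin rate (Gb/s) * bus width (bits) / 8
def bandwidth_gb_s(pin_rate_gbit, bus_width_bits):
    return pin_rate_gbit * bus_width_bits / 8

print(bandwidth_gb_s(16, 384))  # 768.0 GB/s at stock 16 Gb/s
print(bandwidth_gb_s(20, 384))  # 960.0 GB/s at 20 Gb/s, vs. ~900 GB/s on the Tesla V100 (HBM2)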

I think the one part that could contribute to higher power and heat, though, is the memory bus. I remember when the Radeon HD 2900 came out. It took a lot of power thanks to its 512-bit ring bus. That, or current GCN is just a power-hungry design like the old Bulldozer chips or Pentium 4s.
 


I don't think you can relate a ring bus to high power consumption, at least not directly (a low-frequency wide bus may actually use less power than a high-frequency narrow bus for the same data rate, which is why IDE stayed in use on laptops long after SATA became the norm on desktops). Moreover, don't forget that GCN can use any bus width, from 64-bit (GCN APUs) to 2048-bit (Vega 56/64 with HBM2), but only high clock rates make its power usage go through the roof.
So, yes, it's a design flaw: GCN (all generations) cannot reach high clock rates without guzzling power. But no, it's not linked to bus width, as its performance-per-watt gets very good at lower chip clocks even with high memory bus speeds (see mining cards).
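To illustrate the wide-and-slow vs. narrow-and-fast point, here's a rough Python sketch of CMOS switching power (P ≈ activity × C × V² × f per line); the capacitance and voltage numbers are made-up placeholders to show the scaling, not measurements of any real memory bus:

Code:
# Dynamic (switching) power per line ~ activity * C * V^2 * f; total = per-line power * line count.
# Capacitance and voltage values are illustrative placeholders, not real memory-bus figures.
def bus_power_w(lines, freq_hz, volts, c_per_line_f=2e-12, activity=0.5):
    return lines * activity * c_per_line_f * volts**2 * freq_hz

# Same aggregate data rate: 512 lines at 1 GHz vs. 64 lines at 8 GHz
print(bus_power_w(512, 1e9, 1.2))  # ~0.74 W, wide and slow
print(bus_power_w(64, 8e9, 1.5))   # ~1.15 W, narrow and fast (fast links often need more voltage)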
My opinion is that GCN has a bad pipeline design WRT clock speed; if AMD managed to create a pipeline design that clocks well on very small process nodes, Navi at 7 nm could be a real killer. If they didn't, Navi will keep the same clock speeds as Vega and just consume a bit less.
 

bit_user


I think it's more that its raw horsepower can't be used efficiently. In other words, the real work done per clock cycle is lower than on Nvidia. IMO, that's why AMD cards have higher on-paper performance than their Nvidia counterparts.

Think I'm wrong?

Code:
AMD        Nvidia           TFLOPS         Ratio
RX 560     GTX 1050         2.41 / 1.73    1.39
RX 570     GTX 1060 3 GB    4.78 / 3.47    1.38
RX 580     GTX 1060 6 GB    5.79 / 3.85    1.50
Vega 56    GTX 1070         8.29 / 5.78    1.43
Vega 64    GTX 1080        10.26 / 8.23    1.24

Nvidia is just doing more with less, and has been since Maxwell.
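If you want to see where those TFLOPS figures come from, here's a minimal Python sketch of the usual convention (FP32 FLOPS = shaders × clock × 2, since an FMA counts as two operations), using the published shader counts and base clocks for two of the cards above:

Code:
# FP32 TFLOPS = shader count * clock (Hz) * 2 (one FMA = 2 FLOP per cycle)
def tflops(shaders, clock_mhz):
    return shaders * clock_mhz * 1e6 * 2 / 1e12

rx_580   = tflops(2304, 1257)   # ~5.79 TFLOPS
gtx_1060 = tflops(1280, 1506)   # ~3.85 TFLOPS (6 GB model)
print(rx_580, gtx_1060, rx_580 / gtx_1060)  # ratio ~1.50, matching the table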
 


It absolutely can. The HD 2900 XT had a 512-bit bus and consumed a ton of power while performing poorly. It is very possible that the larger bus contributes to the poor power efficiency, or it could just be GCN in general. Whatever it is, they really need to get it handled so they can compete evenly with nVidia.

Then again, what does AMD care? They are probably profiting heavily off the mining inflation.
 

bit_user


They care because mining isn't going to be there for them forever.

Their primary market is still graphics. To that end, they need gamers to use their products so that game devs have an incentive to optimize for them and gain experience developing for them. Otherwise, they drop even further behind and eventually could lose out on the gaming market.

Don't forget that Intel has expressed intentions to compete in the dGPU space. That's just another reason AMD can't afford to neglect or under-serve gaming.
 


You're using a single example of a dead design (the 2900 used the first-ever version of TeraScale, which then went through three major redesigns and was radically different from GCN) to argue that a current design (GCN, now on its fourth major revision) doesn't work, for the reason you give, i.e. that bus width is the main reason GCN is power hungry?

Bull - while it DOES have an impact (more lines do mean more power at the same clock rate), that impact is negligible compared with frequency, which correlates directly with pipeline length and process optimization - not to mention the actual specifications: AMD is very conservative (even early reference RX 480 cards could be undervolted) while Nvidia is aggressive (i.e. you can't touch a voltage setting).
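To put a rough number on why conservative voltage matters: switching power scales roughly with V² at a fixed clock, so even a modest undervolt saves a disproportionate amount of power. The voltages in this little Python sketch are illustrative, not measured RX 480 values:

Code:
# Dynamic power scales roughly with V^2 at a fixed clock.
# Voltages are illustrative placeholders, not measured RX 480 values.
def relative_dynamic_power(v_new, v_old):
    return (v_new / v_old) ** 2

print(relative_dynamic_power(1.05, 1.15))  # ~0.83 -> a 0.10 V undervolt cuts dynamic power by ~17%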
 


It's more a matter of whether the API can keep it fed - Nvidia spent BILLIONS making sure that game programmers would use GameWorks and DX11, toward which they geared a lot of resources to optimize their drivers (when they don't simply replace shaders on the fly with ones that run better on their hardware) and to nerf the competition as much as they can (tessellation, at which AMD isn't too good). For proof, Vega 56 at stock actually beats a 1080 Ti (Tom's benchmarks, not mine).

In actual games, when the playing field becomes level (no use of GameWorks), AMD is neck and neck with Nvidia. When games actually cater to AMD's hardware (Vulkan), Nvidia gets whipped.

So yes, in current games Nvidia has the edge: AMD's cards get nerfed (a 30% handicap on GCN). In next-generation games AMD gets closer to its theoretical power, such as the RX 580 pushing 20% more frames than the 1060 - Vulkan makes driver-level optimizations almost irrelevant to results compared with DX11/OGL.

And before Nvidia fanboys reply that AMD paid Bethesda to nerf Geforce cards on id Tech 6, please remember what game was used to show off the 1080@4K originally...
 

bit_user


That's more like proving my point that AMD is just throwing raw horsepower at the problem.


That's interesting, because such games are few and far between. I don't believe GameWorks is that ubiquitous.

If you take something like Far Cry 5, Vega pulls a strong lead because the game was specifically optimized to exploit its fast 16-bit arithmetic, much in the way you're saying other games are optimized for Nvidia.


But the RX 580 has 50% more compute, 33% more memory bandwidth, and burns 54% more power than the GTX 1060. So it's still significantly under-performing! Based on the specs, AMD's RX 580 should be neck-and-neck with the GTX 1070.
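For reference, those percentages fall straight out of the published spec-sheet numbers (the TFLOPS figures from my earlier table, 256 vs. 192 GB/s memory bandwidth, 185 vs. 120 W board power); quick Python check:

Code:
# RX 580 vs. GTX 1060 6 GB, from published spec values
compute   = 5.79 / 3.85   # FP32 TFLOPS            -> ~1.50 (+50%)
bandwidth = 256 / 192     # memory bandwidth, GB/s -> ~1.33 (+33%)
power     = 185 / 120     # board power, W         -> ~1.54 (+54%)
print(compute, bandwidth, power)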

I've read all of AMD's whitepapers about how GCN works on a micro-architectural level, and most of the comparable info about Maxwell and Pascal. Although the data I presented paints a pretty simplistic case, I'm not so superficial when I lay the blame at the feet of GCN's ISA. I presented the data simply to underscore my point - not as a substitute for deeper analysis.

Look, I'm just trying to be realistic. I own one Nvidia card and about 5 AMD. The last card I bought was actually AMD, but not for its performance. I hope my next card is AMD, too. But they've got to get their stuff together. That's all I'm saying.
 

stdragon



AMD RX 580 - 14 nm lithography - 5.7 billion transistors - 1,257 MHz GPU clock - 185 W TDP

GTX 1060 - 16 nm lithography - 4.4 billion transistors - 1,506 MHz GPU clock - 120 W TDP

So yeah, that kinda dovetails with what you're saying. Now, I get that more transistors consume more power, but that should be countered by the smaller 14 nm node and the lower clock rate. Yet it somehow still manages to pull 185 watts at peak load. That's NUTS!
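One way to frame it: divide raw compute by board power. Using the TDPs above and the TFLOPS figures quoted earlier in the thread, the two chips land surprisingly close on raw FLOPS per watt, which suggests the gap is less about the arithmetic units themselves and more about how much game performance each architecture extracts per FLOP. Quick Python check:

Code:
# Raw compute per watt, from the TDPs above and the TFLOPS figures quoted earlier
print(5790 / 185)   # RX 580:   ~31 GFLOPS/W
print(3850 / 120)   # GTX 1060: ~32 GFLOPS/W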

So what's the deal? Is the AMD part a more programmable GPU than nVidia's? Meaning, is being more programmable and less of an ASIC in terms of baked-in DirectX functions ultimately the cost? I guess that's really the trade-off, isn't it? You can be more programmable, or more specialized and fast (but inflexible to changing standards), not both? Or is the RX 580 just that inefficient all the way around?

 

TL;DR - GeForce is best for current games, which is what most people want; GCN is best for raw compute power, especially when power-tuned, which is what miners (and all other blockchain users) want.

Both - DX11 (and OpenGL up to 4.5) is still a rather fixed API (programmable shaders, yet fixed/single-threaded hardware-management functions) where an ASIC-like design such as the GeForce really shines, and the RX 580 is not very efficient, the fault lying (in my opinion) in both a sub-par pipeline architecture (high power requirements at high clock speeds) and conservative power settings (i.e. higher voltage than strictly required, so AMD can use as many chips as it can get from its manufacturers). GeForce likes current 3D engines and APIs, GCN doesn't. GeForce is very finely power-tuned, GCN is conservative.
 