AMD CPU speculation... and expert conjecture

Page 671

blackkstar

Honorable
Sep 30, 2012
468
0
10,780
@juan you miss my point. HEDT is a market that will put two 300W graphics cards in a system, then buy 125W CPUs and OC them past 200W. The only metric that matters in HEDT is raw performance. If it were all about efficiency, no one in HEDT would overclock at all.

Let's say those rumors I posted are true:
http://www.anandtech.com/show/8526/nvidia-geforce-gtx-980-review/13

Anandtech has the GTX 980 doing ~42fps in BF4 at 4K on ultra. Add the 20% performance from the rumor I posted and you get 50.4fps. The GTX 980 system pulls 294W, so a 6% increase puts it around 312W.
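To make the arithmetic explicit, a quick sketch of those numbers (the +20% performance and +6% power figures are from the rumor, not confirmed specs):

```python
# Sanity check of the fps/power numbers above (baseline from Anandtech's
# GTX 980 review; the +20% perf / +6% power uplift is the rumored figure).
baseline_fps = 42.0        # GTX 980, BF4 @ 4K ultra
baseline_watts = 294.0     # measured full-system draw

new_fps = baseline_fps * 1.20      # 50.4 fps
new_watts = baseline_watts * 1.06  # ~311.6 W

# Perf/W improves too: the rumored part would be faster AND more efficient.
print(f"{new_fps:.1f} fps @ {new_watts:.1f} W")
print(f"perf/W: {baseline_fps / baseline_watts:.3f} -> {new_fps / new_watts:.3f}")
```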

What do you think gamers and other HEDT enthusiasts will want: the one with the lower TDP, or the one with the higher performance (ignoring metrics like driver quality, since they're irrelevant to what I'm getting at)? Have you ever seen anyone take a GTX 660 over a GTX 980 because the GTX 660 used less power? I don't think I've ever seen that happen in a forum that specializes in HEDT parts.

My entire point is that this is basically a given. AMD released a 200W+ TDP CPU and people bought it! Yes, efficiency is important, but people will take the higher-performing card even if it's far less efficient. People still bought Fermi and the Fermi refresh. People buy the Titan and dual-GPU high-end cards from Nvidia and AMD.

Also, there's no need to treat me like some sort of idiot who doesn't understand efficiency. I have a quad-CPU Opteron rig with 75W TDP CPUs, and in what I use it for it trades blows in multithreaded work with a 4.5GHz six-core Haswell-E or a stock eight-core Haswell-E. Yes, that's 300W of CPU in total, but it cost significantly less than the eight-core Haswell-E EE, and I still use it because it has more raw performance. And this might be difficult for you to fathom, but most HEDT people only have one or two computers, and they pay the average US electricity rate of 12 cents/kWh. Spending even $20 a month on electricity for a ridiculously overclocked two-GPU system is still far cheaper than going to the movies or going out to the bar.
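To put a number on that, a back-of-the-envelope monthly cost at that rate (the wattage and daily hours are my own assumed figures):

```python
# Monthly electricity cost for a heavily overclocked two-GPU rig.
# Assumptions: ~800 W under load, 4 hours of gaming per day, 12 cents/kWh.
system_watts = 800
hours_per_day = 4
rate_per_kwh = 0.12          # average US residential rate

kwh_per_month = system_watts / 1000 * hours_per_day * 30
print(f"{kwh_per_month:.0f} kWh -> ${kwh_per_month * rate_per_kwh:.2f}/month")
# 96 kWh -> $11.52/month, i.e. comfortably under the $20 mentioned above.
```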

I know you love your precious efficiency, but the vast majority of HEDT owners don't care at all. It's irrelevant in that market. What matters is raw performance. And when Intel, AMD, or Nvidia start talking up efficiency instead, HEDT ends up disappointed.

Even the little APUs you fawn over so much have been a massive disappointment to HEDT owners. In fact, it's such a huge disappointment that people will buy a two-year-old architecture on an archaic platform over something more efficient, because the old architecture and platform offer much better raw performance, regardless of how efficient the APUs are. The majority of people who want high-end gaming rigs go for a dCPU and dGPU, even if an APU is way more efficient, because they only care about raw performance.

I agree with you that in servers there's a "sweet spot" between power consumption and raw performance if you're doing something like running Disney's 55,000-core render farm, but no one in their right mind cares about it for HEDT as long as they have access to reasonably priced electricity, which most of the developed world has unless you're living on a little island somewhere.

Servers: efficiency good
GPGPU farms: efficiency good
Mobile: efficiency good
Embedded: efficiency good
High end desktop: no one cares about efficiency; they care about performance.

The biggest problem with enthusiasts is that they masturbate over bar graphs of raw performance without even understanding them. To say that HEDT cares about anything besides raw performance is borderline blasphemy.
 

8350rocks

Distinguished


+1

HEDT and perf/watt would only be mentioned in the same breath to say how much HEDT ignores perf/watt. The only time it even comes up is when some fanboy is trying to defend his favorite brand against an outright performance deficit. Even then, most people will acknowledge the argument is on shaky ground at best.

If you are so concerned about perf/watt...then why put 2-4 GPUs in an HEDT rig? The answer is that you would not be concerned about it at all, because you are looking for outright performance.

The average consumer may look at the data, and if the highest performing card they can afford also has slightly more reasonable power consumption, then it is considered lagniappe for all intents and purposes. This is primarily because HEDT buyers purchase hardware to hit a performance threshold.

Now, the metric that is typically most relevant in HEDT is perf/$. Because, unless you are a whale...you always have a budget. So...I have seen performance decisions based on budgets. If you can get 90% performance for 50% of the cost...that is FAR more justifiable than worrying about $5-10 more on your light bill. I have even seen perf/watt ignored summarily to get more out of a budget build.
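That 90%-for-50% trade works out like this (the prices are invented purely for illustration):

```python
# perf/$ for a hypothetical flagship vs. a value part one tier down.
flagship_perf, flagship_price = 100.0, 1000.0   # invented numbers
value_perf, value_price = 90.0, 500.0           # 90% perf at 50% cost

print(f"flagship: {flagship_perf / flagship_price:.3f} perf/$")  # 0.100
print(f"value:    {value_perf / value_price:.3f} perf/$")        # 0.180, 1.8x better
```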

So, while in trololololololand, where APUs that can form an exascale computer are around the corner in a year or two, APUs dominate. Back in the real world...(better known as not-dreamland)...dCPU + dGPU is better for HEDT, and will be for the foreseeable future, likely well beyond APUs that do exascale massively parallel computing. An exascale supercomputer will not play Crysis 3 @ 60 FPS...that will always be the case.

 

colinp

Honorable
Jun 27, 2012
217
0
10,680
The HEDT market as you describe it is minuscule. AMD would be much better served by scoring design wins in laptops the way Nvidia is right now with Maxwell. That's a huge volume of chips.

And the fact that you can now get ITX-compatible versions of the 980 is an incredible achievement that opens up high-end performance to all form factors.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


Using 2133 MHz DRAM, the 8 CU desktop version of Kaveri performed like this:

7750-DDR3 version < Kaveri < 7750-GDDR5 version

Also, I'm not sure what you mean by "APU in current form (8 CU)". The PS4 is built around an APU with 18 CUs. I can guarantee you that APU is much faster than any 7750.

On the 14nm node, the use of HBM gives AMD the technical possibility of fabricating enthusiast APUs with more than 18 CUs for PCs, but I think AMD will not cannibalize its entire dGPU business; the transition will be smooth.
 
The HEDT market as we describe it sounds small, yes, but that makes it no less true that perf/watt is meaningless for enthusiasts.

That's like saying a car enthusiast is going to prefer a Smart Fortwo over a twin-turbo V8 BMW M3 just because it's efficient. Or that BMW is going to stop making performance cars because the M3 market is so small.

No one is arguing that AMD is better off making power-guzzling CPUs/GPUs over efficient ones. But just like you could swap a V8 into an old Mini Cooper, we want CPUs and GPUs that can reach a performance point we're comfortable with, without having to look at efficiency first.

Anything within reason for power envelopes is fine in my book, as long as it delivers acceptable raw performance. I don't like the thought of a 600W CPU, but I don't mind keeping 125W ones that can be pushed to 200W. Same with GPUs: I don't mind them going up to whatever power target PCIe lets them reach, as long as it's safe for everyone.

Cheers!

EDIT: "efficient" -> "performance"
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


Indeed! Taking the above equation and dividing both sides by power gives

Performance / Power = Efficiency



Take the above equation again; it is also valid for Piledriver. Performance is roughly linear in frequency, whereas power is roughly cubic in frequency (dynamic power scales with f x V^2, and voltage has to rise with frequency). Thus increasing the frequency decreases the efficiency.
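A toy model of that scaling, using the linear and cubic exponents as stated (illustrative, not measured data):

```python
# Toy model: performance ~ f (linear), power ~ f^3 (cubic),
# so efficiency (perf/W) falls off roughly as 1/f^2.
for f in (1.0, 1.1, 1.2, 1.5):          # clock multiplier vs. stock
    perf, power = f, f ** 3
    print(f"{f:.1f}x clock: {perf:.2f}x perf, {power:.2f}x power, "
          f"{perf / power:.2f}x perf/W")
# A 20% overclock costs ~73% more power and drops perf/W to ~0.69x.
```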
 


And that is what we don't really care about when discussing HEDT, unless they plan to put the performance sweet spot at 200W (which I doubt), so that we get max performance at the max power envelope we can reach by normal means. We all know they won't do that, just like we know BMW won't put a 900BHP M3 on the streets from the factory, yet you'd still call it an enthusiast car. The point is, AMD is talking about making saloon 3 Series and some 2 Series when we want them talking about making M3s (and M4s) for us. I can name any car manufacturer you like for another analogy :p

Anyway, like I told you by PM: I think the future looks boring.

Cheers!
 

colinp

Honorable
Jun 27, 2012
217
0
10,680
I'm not actually sure I get the gist of this discussion. Anyone would think it is being suggested that the 980 is somehow unsuitable for HEDT...
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


You have ignored my point entirely from bottom to top and transversely. Take your mentioned 900W HEDT system, which satisfies the equation

Performance = Efficiency x 900W

The goal for the year 2020 is to increase performance by a factor of about 25. How do you propose to achieve that?

One possibility is to maintain the 900W and increase the efficiency of the hardware by 25x

New-Performance = 25 x Performance = (25 x Efficiency) x 900W = New-Efficiency x 900W

This is the option chosen by AMD engineers. Check the links and slides given before, from the former 25x20 talk and the recent Future of Compute talk, about increasing efficiency by 25x.

Another possibility is to leave the efficiency unchanged and increase the power consumption by 25x

New-Performance = 25 x Performance = Efficiency x (25 x 900W) = Efficiency x 22,500W

But this option is purely formal, not real, because nobody can deliver 22,500W at home, not even your "gamers and other HEDT enthusiasts".
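The same comparison in code, with power as a function of the two multipliers (the 5x middle case is my own hypothetical split):

```python
# Performance = Efficiency x Power, so the wall power needed to hit a
# performance multiple at a given efficiency multiple is:
def power_needed(perf_mult, eff_mult, base_watts=900.0):
    return base_watts * perf_mult / eff_mult

print(power_needed(25, 25))  #   900.0 W -- the 25x20 route: 25x efficiency
print(power_needed(25, 1))   # 22500.0 W -- unchanged efficiency, 25x power
print(power_needed(25, 5))   #  4500.0 W -- hypothetical middle split, still absurd at home
```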
 

szatkus

Honorable
Jul 9, 2013
382
0
10,780


And then a future 7750 (R5 450?) will also have 18 CUs, higher clocks, and HBM.
 


Because it will be in an APU.
 

8350rocks

Distinguished
@colinp:
The 980 is only suitable now because of its performance. When it no longer performs on par, its efficiency will mean nothing.

@juanrga:

If "25x more efficient" means today's performance at 25x less power draw, then AMD will go out of business. If it means 25x more performance at the same power draw as today, they will blow the doors off everything else and thrive. So "25x more efficient" means nothing without the context of how.
 

con635

Honorable
Oct 3, 2013
644
0
11,010

Juan is talking about efficiency and others are talking about power consumption.
 

colinp

Honorable
Jun 27, 2012
217
0
10,680
Nvidia seem to be doing an Intel-style tick-tock: new arch on an old process, then comes the die shrink.

Maxwell's efficiency becomes important then, because to increase the performance of a GPU all you do is add cores and memory, not increase the clock speed. What might be a problem on the old node is less of a problem on the new one.
 

colinp

Honorable
Jun 27, 2012
217
0
10,680
It's all about optimising performance within constraints. If, for the sake of argument, an Nvidia core is twice as efficient as an AMD core, then you can fit twice as many cores as AMD in the same power envelope. Until you run out of space on the die, that is. So, in this example, AMD is constrained by TDP, Nvidia by how many cores they can efficiently produce on the die.
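A sketch of that constraint; every number below is invented for the example:

```python
# Cores you can power vs. cores you can fit: whichever is smaller wins.
tdp_budget = 250.0        # W available for the whole GPU (invented)
die_core_limit = 1800     # max cores that fit on the die at 28nm (invented)

watts_per_core = {"efficient_core": 0.125, "hungry_core": 0.250}
for name, w in watts_per_core.items():
    cores = min(tdp_budget / w, die_core_limit)
    limit = "die-limited" if tdp_budget / w > die_core_limit else "TDP-limited"
    print(f"{name}: {cores:.0f} cores ({limit})")
# efficient_core: 1800 cores (die-limited); hungry_core: 1000 cores (TDP-limited)
```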

Has anyone considered that the 980 looks "efficient" simply because they can't easily put more CUDA cores on the die at 28nm? Until the 980 Ti comes along, at any rate.
 
The 980 is a great product, just like the 680 was when it launched... until the next one comes knocking on the door, performing better while being less power hungry.

We can say that about the 980 because of its performance, and its performance alone. No one said the 750 Ti was the second coming of <insert your favorite deity> when it launched, because in terms of raw performance it didn't blow anyone's mind. It was impressive, and a showcase of what nVidia was cooking, but by no means a home run.

So, to crown something king, you need to look at its raw performance more than its efficiency. At the time, the 680 was easily displaced by the Titan (no one cared that it used a lot more power) and then by the 290X, and again no one cared that it used more power (IIRC).

Cheers!

EDIT: Typo.
 

jdwii

Splendid
Juan is correct: when he, AMD, or anyone else talks about efficiency, they mean performance per watt. This will be the future. I think it's nonsense when a product uses 80% more power while being 10% faster, and I think the market that doesn't consider that nonsense is extremely small. And no, I'm not talking about gaming PCs, which have beaten consoles in software sales.

50 watts or so isn't that big of a deal in HEDT, but look at the 220W FX CPUs. They're very competitive with the i5, yet I see few of them in builds. Why? You need 100 watts more from your PSU, a beastly motherboard, and a $100 water cooler; or you can simply get an i5 and a 212+ and beat the 220W FX CPU while using less power and producing less heat.

Mini-ITX is getting popular in the gaming market, and that leaves less room for power-hungry parts that consume too much power for their performance.

Rejoice, fellow members who are so extreme: you can always take your more efficient parts and probably get crazy overclocking results. Keep in mind the 980 Ti isn't here yet; they're probably waiting for AMD to release their next-gen parts. But if those results are true, AMD will get to claim they have the performance-per-watt killer.

It's extremely important that AMD makes efficient GPU designs, since that makes a big impact on their APUs, which they are making money on.
 

cemerian

Honorable
Jul 29, 2013
1,011
0
11,660
Meanwhile, the overclocked FX-8350 that many are rocking (@4.8-5.2GHz), which I had before I moved to Intel, consumes almost 300W, and no one is complaining about that, because we don't care about it. We care about more raw performance, period.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790
I have given an elementary treatment of efficiency and how it affects performance. I have also tried to explain why AMD has made the correct design decision for future products. I am not going to explain the more advanced engineering details behind AMD's decision to focus on efficiency and HSA, but I will copy and paste the following excerpt:

Power is the Problem

The technology underpinnings responsible for the move toward hybrid computing are pretty compelling, driven by the huge inflection point we experienced in the previous decade. Moore’s Law is alive and well, continuing to dish up more and more transistors per square mm. But Dennard Scaling is not.

We can no longer reduce voltage in proportion to transistor size, so the energy per operation is no longer dropping fast enough to compensate for the increased density. The result is that processors are now totally constrained by power. And it’s getting exponentially worse with subsequent generations of integrated circuits!

Circuit performance per watt is still improving, but now at closer to 20 percent per year instead of the almost 70 percent per year we used to enjoy. So how can we continue to improve performance anywhere close to historic rates, and achieve exascale computing by the end of this decade? Since the underlying technology is going to fall far short of the improvements we need, our only hope is to dramatically reduce the overhead per operation.

Anyone who is interested in the details can PM me as usual.
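Compounding those two rates makes the gap concrete (taking six years as a stand-in for "the end of this decade"):

```python
# Perf/W gains compounded over ~6 years at the two quoted rates.
years = 6
print(f"20%/yr: {1.20 ** years:.1f}x")  # ~3.0x  -- today's scaling
print(f"70%/yr: {1.70 ** years:.1f}x")  # ~24.1x -- the historic rate
# A ~25x efficiency target roughly restores what the old rate used to give "for free".
```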
 


You're coming across a bit jerkish again, Juan.

We all know and understand what "efficiency" means and how it affects designs and products. We don't need advanced nuclear knowledge to grasp the concept of "more efficient at a power target => better performance". You could have started by saying "because of how the shrinks are coming along, squeezing the same level of performance out of the same power level is getting very hard" and then justified it. The consequence is that HEDT starts making less sense from a power-constraint POV.

The consumer level means squat to AMD and Intel. What worries them is that the server side is not going to see big gains at the same power envelopes, which means they have to find other routes to "performance". So the focus on "efficiency" is not about saving the planet; it's because the economics of going below 28nm, and making a business case to justify the expense, get harder with each new shrink.

That leads to the conclusion that they also need to review what goes inside a CPU nowadays and why ARM is so "efficient". Seems like Intel will have to shave some of the fat off x86, like it or not.

Cheers!

EDIT: Typo.
 

jdwii

Splendid
I'm not sure he was acting that way. A lot of people here are acting like efficiency only means power consumption, which isn't true. Performance per watt will be the future, not pure performance; the market that doesn't care about efficiency is too small for anyone to care about. Again, this doesn't mean you can't have great OC results.

Also, I find it sad that some think most people don't care about power consumption in HEDT. How many 220-watt CPUs do we see in builds, again?
 

Anyone who OCs an Intel Extreme processor is drawing about that much power as well. Even a 4790K can draw over 150W when OCed. Overclocking anything usually drops the efficiency of the chip significantly.
 