Intel Plays Defense: Inside Its EPYC Slide Deck

Status
Not open for further replies.

bloodroses

Distinguished
Just like every political race, here comes the mudslinging. It could well be true that Intel's data center lineup is better than AMD's Naples, but nothing in this article backs that up. Instead of relying on buzzwords like the ones shown in the image, back it up. Until then, it sounds like AMD actually is onto something and Intel actually is scared. If AMD is onto something, then innovate to compete instead of just slamming them.
 
LOL... seriously... track record...? Track record of what? Track record of ripping off your customers, Intel?

Pfff, your platform is getting trashed in floating-point calculation... by 50%. And thanks for the thermal paste on your high-end chips... no thermal problems there at all.
 

InvalidError

Titan
Moderator

To be fair, many of those "cheap shots" were fired before AMD announced or clarified the features Intel pointed fingers at.

That said, the number of features EPYC mysteriously gained over Ryzen and ThreadRipper shows how much extra stuff got packed into the Zeppelin die. That explains why the CCXs account for only ~2/3 of the die size.
 
To Intel: PCIe lanes are important in today's technology push... why? Because of discrete GPUs... something you don't do. AMD knows it; they know that multi-GPU is the goal for AI, crypto, and neural networks. This is what happens when you don't expand your horizons.

It's taking us back to the old A64.
 
It's funny...

- They quote WTFBBQTech.
- Use the words "desktop die" all over the place without batting an eye at their own "extreme" platform being handicapped Xeons.
- No word on security features. I guess omission is also a "pass" in this case.

This reads more like a scare tactic aimed at all their customers than an attempt to sell a product. Ms. Lisa Su is doing a good job, it seems.

Cheers!
 

InvalidError

Titan
Moderator

The extra server-centric stuff in Zeppelin (the crypto supervisor, the ability for PCIe lanes to also handle SATA and die-to-die interconnect, the 16 extra PCIe lanes per die, etc.) didn't magically appear when AMD put EPYC together... so technically, Ryzen chips are crippled EPYC/ThreadRipper dies.
 


I don't know if you're agreeing or not... LOL.

Cheers!
 
One reason that Intel didn't mention TDPs is that Intel CPUs can drastically overshoot their TDP if subjected to, say, an AVX-512 workload. AMD CPUs are more consistent under load, albeit with a higher nominal power usage.

Imagine for a moment: Intel attempting to pitch an unpredictable TCO to a datacenter that encounters a worst-case situation only when using one of the most attractive features of the product. That won't end well for Intel, and I suspect they know it. Especially in IaaS applications.

It's also worth mentioning that there's no standard for determining TDP; the definition varies from manufacturer to manufacturer.
 

Steve_104

Commendable
Mar 8, 2016
13
0
1,510
Great article, Mr. Alcorn.

I would point you to the power-related portion of Anandtech's review; I would place even odds that the real-world results on those numbers are the reason Intel didn't bring up TDP. Those numbers didn't look that rosy for Intel, to me, in Anandtech's very rapidly written and published early comparison review.

It would appear to me, judging from the presentation of many of their slides, that Intel's PR department is not very confident in their product.

Can't wait to get my mitts on these new chips (of both flavors) myself.
 

Rookie_MIB

Distinguished
One thing that I find interesting is that they didn't really address thermals very well, perhaps because the 4 die design has a distinct advantage over the monolithic design.

With some space between the dies, as in the EPYC CPU, you've spread out the thermal load (not a lot, but somewhat), which, combined with the soldered IHS, should help EPYC run cooler.

Meanwhile, you have 18 or so concentrated cores in Intel's monolithic design, and while that might help with inter-core latency, it doesn't do much good if the CPU has to run slower because of thermal limits.

Also, according to the Anandtech tests comparing 2x AMD EPYC 7601 vs. 2x Intel Xeon 8176: idle power usage was 151W for the EPYC system vs. 190W for the Xeon system. Under MySQL loads, 321W for EPYC vs. 300W for Xeon. Under POV-Ray, 327W for EPYC vs. 453W (!!!) for Xeon.

All in all, being within +/- 20W isn't too bad, but that 120W margin for Xeon in the POV-Ray testing was rather surprising.
 

none12345

Distinguished
Apr 27, 2013
431
2
18,785
You forgot to mention the embarrassment on the ecosystem slide, where they duplicated multiple vendors to make it look like they have more than they do. Granted, this is rather meaningless, but it's quite embarrassing that they did it.

On the L3 near-vs-far metric, it's only fair to mention that there are near and far L3 caches on the Xeon as well, so it will also be a concern on Intel chips. It's going to take 9 link hops to get from the core in the upper left down to the lower right. That's going to add a lot of latency.

The other concern with the mesh network is routing between cores. I'm just going to assume Intel has done their homework here and is routing intelligently. But if they haven't, there will be bottlenecks around the memory controllers and in the center of the chip.

Again, it's another consistency issue that will have to be watched.

Neither Intel's nor AMD's design is bad; they both have trade-offs, which is to be expected at this level of scaling.
 

none12345

Distinguished
Apr 27, 2013
431
2
18,785
Forgot to mention: on the virtualization segmentation, you could do the exact same thing on Intel chips. What happens with a 30/32/34/36-core VM? It spans two sockets, which would destroy performance. All of those would fit in a single socket on AMD.

Not trying to dismiss the issue, though, just saying it's an issue on both platforms.

You've got to tune your workload to your hardware.
 

Knowbody42

Prominent
Jul 17, 2017
1
0
510
I suspect Nvidia would prefer to use EPYC over Xeon, due to the larger number of PCI-E lanes. And Nvidia has been making a big push to get their GPUs into data centres.
 

PaulAlcorn

Managing Editor: News and Emerging Technology
Editor
Feb 24, 2015
858
315
19,360


We mentioned the duplicated vendors at the top of the last page.

 

mapesdhs

Distinguished


Equally stupid was the implication that AMD hasn't been talking to these companies. It's by far the worst PR nonsense from Intel I've ever seen. The whole thing reads like some kid in a schoolyard yelling, yeah, well my dad could beat up your dad! Nuuuhhh! Unbelievable. Linus Tech Tips covered it here:

https://youtu.be/f8sXQ6JsNu8?t=20m23s

Ian.

 


Indeed it does. Thing is, Intel is the one pointing it out here, not AMD. At least, so far, I haven't seen any AMD presentation with this level of snarky remarks.

I mean, I have to admit I did have a chuckle at "glued dies," because, ironically enough, that's what we all called the Pentium Ds and C2Qs back then. Intel was just releasing all that accumulated anger from those years when AMD had the "real" dual- and quad-core variants.

Cheers!
 

Hellbound

Distinguished
Jul 7, 2004
465
0
18,780
With X299 looking like a rushed, convoluted mess, and AMD releasing a competitive (price/performance) product, this article makes Intel look like they are not very confident in their product. The sad truth is this: if AMD Threadripper can actually compete with Intel's $2000 CPU, then I will be buying AMD. I'm very happy AMD is now competitive.
 

AndrewJacksonZA

Distinguished
Aug 11, 2011
576
93
19,060
Full disclosure: I currently own and thoroughly enjoy using an i7-6700.

The biggest thing that affects all of their performance material here, and raises my skepticism to even healthier levels, is the Notices and Disclaimers slide:
http://media.bestofmicro.com/R/S/693064/original/002.PNG

    ■"Software and workloads used in performance tests may have been optimised for performance only on Intel microprocessors," and
    ■"Optimisation notice: Intel's compilers may or may not optimise to the same degree for non-Intel microprocessors for optimisations that are not unique to Intel microprocessors."

Yes, Intel's compiler optimisations are an old topic of discussion, and by now it feels like pointing to a flogged dead horse and saying we should flog it some more, but it's still an important point to consider.



 


Funny, some of the same cheap shots have been made by AMD. I remember AMD ripping on Intel's C2Q for using MCM designs.



You realize this is the data center, a market where the price of just the CPU matters far less than it does to an individual, correct? Also, it has nothing to do with the desktop market and how desktop buyers are affected.

Did you not read the article, or do you assume all Intel CPUs are for consumers?



Talking or not, I understand Intel's point there. AMD has not been a very active player in a lot of markets in the server space, not just the data center, and a brand-new uArch that is still very young means that software devs will have a lot of work optimizing for AMD specifically.

It is not a simple matter of "we talked to them and it will work." Hell, VMware has a ton of issues even on Intel platforms. Sometimes a single VMware driver will just kill VMs due to bugs.

It is a lot of work, and AMD needs to be prepared to work very closely with those companies, which will also cost quite a bit to maintain if they expect to truly be competitive.



To be fair, Intel's compiler will optimize for every possible Intel-supported extension, and the thing is that AMD does not support all of the same ones. Hell, AMD only added SSE 4.1/4.2 support with Bulldozer, while Intel does not support SSE4a at all. Intel has AVX-512 while AMD is still running 256-bit AVX2.

I would not expect Intel's compiler to optimize as well for AMD as others do, and in this market, where Intel has easily dominated for years, I am not surprised that their compiler is the most widely used.

I personally think AMD should build their own compiler, as they know their own uArch best.
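The dispatch question above is the crux of the old compiler controversy: a runtime dispatcher can pick a code path by CPU feature flags, or it can gate the fast paths behind a vendor-string check. Here's a toy Python sketch of the two policies; this is purely illustrative (the function name and path list are made up), not Intel's actual compiler logic.

```python
# Toy illustration of CPU-dispatch policy: pick the fastest supported
# code path, optionally gating non-Intel vendors to the generic path.

def pick_code_path(vendor, features, vendor_gated=True):
    """Return the code path a hypothetical dispatcher would select.

    vendor   -- CPUID vendor string, e.g. "GenuineIntel"/"AuthenticAMD"
    features -- set of ISA extensions the CPU reports
    """
    if vendor_gated and vendor != "GenuineIntel":
        return "generic"  # baseline path, regardless of actual features
    # Otherwise pick the best path the CPU actually supports.
    for path in ("avx512", "avx2", "sse42"):
        if path in features:
            return path
    return "generic"

# An AMD CPU with AVX2 lands on the generic path under vendor gating...
print(pick_code_path("AuthenticAMD", {"sse42", "avx2"}))
# ...but gets the AVX2 path when dispatch keys on features alone.
print(pick_code_path("AuthenticAMD", {"sse42", "avx2"}, vendor_gated=False))
```

Feature-based dispatch is what Intel's own "optimization notice" disclaimer, quoted earlier in the thread, hedges about.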
 

lsatenstein

Distinguished
Mar 8, 2012
77
0
18,630
Want to see a swing to AMD? Just double the price of electricity for these large data centers. AMD delivers with 65 watts what takes Intel almost double that. Add a few hundred/thousand compatible plug-in systems and AMD can be there. Doubling the cost of electricity also doubles the A/C costs.
Reduce your power by 50% and you also reduce A/C costs by a substantial amount.

The name of the game, all things being more or less equal, is cost of operation.
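The cost-of-operation argument is easy to put in numbers. Here's a back-of-envelope Python sketch; the $/kWh rate and the PUE (power usage effectiveness, which folds in cooling overhead) are assumed values for illustration, not measured figures.

```python
# Back-of-envelope annual electricity cost for a constant CPU load,
# with cooling overhead folded in via an assumed PUE factor.

HOURS_PER_YEAR = 24 * 365  # 8760 hours
RATE = 0.10                # assumed electricity rate, $/kWh
PUE = 2.0                  # assumed: 1 W of cooling/overhead per 1 W of IT load

def yearly_cost(watts, rate=RATE, pue=PUE):
    """Annual cost in dollars for a load drawing `watts` continuously."""
    return watts * pue * HOURS_PER_YEAR / 1000 * rate

# A 65 W part vs. a 120 W part, per socket per year:
print(f"${yearly_cost(65):.2f} vs ${yearly_cost(120):.2f}")
```

Per socket the difference looks small, but multiplied by thousands of sockets (and doubled again if the electricity rate doubles), it becomes exactly the kind of operating-cost gap the post describes.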
 