News AMD's EPYC Milan Breaks Cinebench Record, Here's a 10nm Ice Lake Xeon Comparison

Status
Not open for further replies.

thGe17

Reputable
Sep 2, 2019
70
23
4,535
So where's the news? The dual Epyc has 60 % more cores and a 52 % higher score; that shouldn't be a surprise.
The only thing really worth mentioning is the perf/watt rating, which is still better for Epyc because of TSMC's mature N7.
R23 should require more time due to the heavier workload, so I'd guess turbo clocks don't play much of a role here:

CB R23 MT         Score/Core   Score/Core/GHz   Score/Watt
Dual Epyc 7763    888          362              406
Dual Xeon 8380    933          406              276

Obviously Intel is doing fine; the problem is still the 10nm node. It's getting better, especially compared to 14nm+++, but in this iteration the node still cannot fully compete with TSMC's N7.
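As a sketch of how the per-core metrics in the table above can be derived: the total R23 multi-thread scores used here (~113,631 for the dual 7763, ~74,640 for the dual 8380) are assumed approximations of the published results, with spec-sheet base clocks and per-socket TDPs. Note that Score/Watt only reproduces the table's numbers if the full two-socket score is divided by a single socket's TDP.

```python
# Derive the table's per-core metrics from assumed total scores.
# Inputs: total multi-thread score, total core count across both sockets,
# base clock in GHz, and the TDP of a SINGLE socket in watts.

def metrics(total_score, cores, base_ghz, tdp_watts):
    per_core = total_score / cores             # Score/Core
    per_core_per_ghz = per_core / base_ghz     # Score/Core/GHz
    per_watt = total_score / tdp_watts         # Score/Watt (per-socket TDP)
    return round(per_core), round(per_core_per_ghz), round(per_watt)

print(metrics(113631, 128, 2.45, 280))  # Dual Epyc 7763 -> (888, 362, 406)
print(metrics(74640, 80, 2.30, 270))    # Dual Xeon 8380 -> (933, 406, 276)
```

The per-socket-TDP convention in the Score/Watt column is worth keeping in mind when comparing these figures against other perf/watt numbers.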
 
Last edited:

watzupken

Reputable
Mar 16, 2020
1,178
660
6,070
So where's the news? The dual Epyc has 60 % more cores and a 52 % higher score; that shouldn't be a surprise.
The only thing really worth mentioning is the perf/watt rating, which is still better for Epyc because of TSMC's mature N7.
R23 should require more time due to the heavier workload, so I'd guess turbo clocks don't play much of a role here:

CB R23 MT         Score/Core   Score/Core/GHz   Score/Watt
Dual Epyc 7763    888          362              406
Dual Xeon 8380    933          406              276

Obviously Intel is doing fine; the problem is still the 10nm node. It's getting better, especially compared to 14nm+++, but in this iteration the node still cannot fully compete with TSMC's N7.
I actually don't think any iteration of Intel's 10nm will be competitive in terms of core count until at least 7nm. While SuperFin seems to have improved things with higher clock speeds (and higher power consumption), Intel's current 10nm appears to be struggling with density from my observation: they are still capped at four cores on their ultra-low-power CPUs and around 8 to 10 cores on their desktop CPUs.
 
amootpoint

Apr 19, 2021
1
0
10
Who cares about Cinebench? It's a totally useless benchmark. No one makes their purchasing decision based on it.
 

thGe17

Reputable
Sep 2, 2019
70
23
4,535
@amootpoint: Nobody said that, except you. ;) CB is just one way to put a heavy load on all CPU cores, nothing more, nothing less.
And Ian Cutress' SPEC2017 tests show similar results for Score/Core, Score/Core/GHz and Score/Watt. The Ice Lake cores seem to be faster than Zen 3, but here too N7 shows better efficiency.

@watzupken: It should be interesting to see whether Intel can improve Enhanced SuperFin (10nm+++) as much as it did SuperFin. It may also be that the node never fully catches up with TSMC's N7(P), but in the end that may be a minor problem, because Meteor Lake will already be manufactured on 7nm (or maybe partially on TSMC's 5nm), so 10nm will only be used for a short period.
 

waltc3

Honorable
Aug 4, 2019
453
252
11,060
This should at last put paid to the notion that "Intel's 10nm = TSMC's 7nm", but good...;) Clearly that is not the case. If Intel could make a competitive CPU it would, but it can't, so it doesn't. Try as they might, Intel these days mostly makes AMD look really good. Intel's performance per watt is poor beside these Epycs. The road ahead will be especially difficult for Intel, as AMD is a constantly moving target.
 

tripleX

Distinguished
Oct 9, 2013
35
2
18,535
So where's the news? The dual Epyc has 60 % more cores and a 52 % higher score; that shouldn't be a surprise.
The only thing really worth mentioning is the perf/watt rating, which is still better for Epyc because of TSMC's mature N7.
R23 should require more time due to the heavier workload, so I'd guess turbo clocks don't play much of a role here:

CB R23 MT         Score/Core   Score/Core/GHz   Score/Watt
Dual Epyc 7763    888          362              406
Dual Xeon 8380    933          406              276

Obviously Intel is doing fine; the problem is still the 10nm node. It's getting better, especially compared to 14nm+++, but in this iteration the node still cannot fully compete with TSMC's N7.

Score/Price?
 
I don't think anyone will argue against Milan's great power efficiency and core count, but unfortunately for AMD, some customers simply don't care about those metrics as much anymore. If you've already bought into any of Intel's proprietary or specialized workloads like AVX-512, Optane, or some machine-learning extension, it doesn't matter how efficient the architecture is or how good it is at general computation, since Intel isn't even trying to compete on that level anymore. They've shifted to marketing their fancy extensions and exclusive technologies, creating an entire market segment that is completely dependent on Intel products, regardless of how good each new Intel release is. It's similar to how so much of machine learning/AI and compute depends on CUDA, and thus must buy Nvidia accelerators and GPUs.
 

thGe17

Reputable
Sep 2, 2019
70
23
4,535
[...] Intel performance per watt is poor beside these Epycs. [...]
And? AMD was in the same position for most of the last decade. ;) So if they managed to come back, why shouldn't others be allowed to do the same?

Score/Price?
Yes, maybe, but that is a competition AMD will easily lose against ARM servers, e.g. the Altra Q80-33, with the M128-30 soon to follow. (Obviously they use the same market mechanics as AMD to gain market share. ;))
Additionally, you don't know the prices OEMs pay, because you can be sure those differ from list prices. And the differences are already slowly dissipating if you look at the more relevant general/basic CPUs with lower core counts.
And to make it even worse for you, price is sometimes only a secondary factor in the datacenter (to be precise, Lisa Su even omitted the "sometimes" in her statement).
And in many cases it doesn't matter at all, because AMD simply does not have the capacity. The topic is a bit more complex ...


@TCA_ChinChin: "I don't think anyone will argue ..." correct.
"... but unfortunately for AMD, there are some customers that simply don't care ..." nonsense; are the customers now to blame? These are not gaming/hobby products where you simply look at the FPS counter and pick the most suitable CPU. Requirements in datacenters are much more complex (inside and outside the cabinet).
And yes, AMD currently has nothing competitive to offer with regard to AI, which is why AMD is going to enhance the vector units in Zen 4. (Btw, they currently also have nothing competitive in GPGPU for AI; there, too, they will need more time and hardware iterations, or will have to compete on price.)
And yes, CUDA is heavily used for AI and other tasks, but not because it's useless and Nvidia simply has good marketing. It is a good, powerful framework and collection of tools, and AMD simply failed years ago to invest the resources to build something similar for its own architecture. A resource problem, of course, but certainly nothing to blame Nvidia for.

@mac_angel: The CryEngine relies heavily on its primary thread. That is nothing you want to throw onto a slow-clocked 32+ core server CPU, and in that case even 200 GB/s of memory bandwidth won't make a difference. ;)
 

Fdchdfa

Commendable
Apr 20, 2021
5
0
1,510
So where's the news? The dual Epyc has 60 % more cores and a 52 % higher score; that shouldn't be a surprise.
The only thing really worth mentioning is the perf/watt rating, which is still better for Epyc because of TSMC's mature N7.
R23 should require more time due to the heavier workload, so I'd guess turbo clocks don't play much of a role here:

CB R23 MT         Score/Core   Score/Core/GHz   Score/Watt
Dual Epyc 7763    888          362              406
Dual Xeon 8380    933          406              276

Obviously Intel is doing fine; the problem is still the 10nm node. It's getting better, especially compared to 14nm+++, but in this iteration the node still cannot fully compete with TSMC's N7.
In case you're blind: AMD's Epyc 7763 costs $7,890, while Intel's pathetic 40-core 8380 costs $8,099.
 
@TCA_ChinChin: "I don't think anyone will argue ..." correct.
"... but unfortunately for AMD, there are some customers that simply don't care ..." nonsense; are the customers now to blame? These are not gaming/hobby products where you simply look at the FPS counter and pick the most suitable CPU. Requirements in datacenters are much more complex (inside and outside the cabinet).
And yes, AMD currently has nothing competitive to offer with regard to AI, which is why AMD is going to enhance the vector units in Zen 4. (Btw, they currently also have nothing competitive in GPGPU for AI; there, too, they will need more time and hardware iterations, or will have to compete on price.)
And yes, CUDA is heavily used for AI and other tasks, but not because it's useless and Nvidia simply has good marketing. It is a good, powerful framework and collection of tools, and AMD simply failed years ago to invest the resources to build something similar for its own architecture. A resource problem, of course, but certainly nothing to blame Nvidia for.
I'm not blaming the customers for choosing what's best for them. I know Nvidia's hardware/software ecosystem dominates deep learning and AI because of how good it actually is and how easy it is to use. I also know that customers who actually rely on AVX and specialized instructions have no other options. I'm simply stating facts; I did not say Nvidia merely has good marketing. As for Intel, I'm talking about their marketing strategy: they are doubling down on their exclusive ecosystem and specialized features, while AMD still holds the crown for more general CPU tasks. Of course there will be datacenters and servers that happily use Optane and AVX, but there are just as many use cases for Milan and the architectural advantages AMD provides. Obviously there isn't really any other way for Intel to compete at the moment, so they have no choice but to play down general performance metrics such as perf/watt or core count; by shifting customer attention toward things like AVX and Optane, they can hide the pitfalls of their products.
 

thGe17

Reputable
Sep 2, 2019
70
23
4,535
In case you're blind: AMD's Epyc 7763 costs $7,890, while Intel's pathetic 40-core 8380 costs $8,099.
Obviously I am not, but maybe you should first learn to read and understand the posts. Price was not a topic of the article; nevertheless, it had already been discussed in the previous post.

I'm not blaming the customers [...]
"but there are also just as many use cases for Milan and the architecture advantages that AMD provides."
Obviously, and with good reason, but the limitations already begin with AMD's limited capacity. Therefore it doesn't make sense to propose an "everyone-has-to-switch" scenario, because that is simply not possible.
And in the datacenter, advantages are not measured by mere benchmark results. There are many factors to take into consideration; for example, Intel has the much wider, more diverse, and more customized portfolio/ecosystem ... it's much more than only AVX-512. About 50 % of all Xeons sold are semi-custom parts tailored to specific networking and other workloads and are sold to the big cloud players and hyperscalers. This is currently simply impossible for AMD.
According to Mercury, AMD still has below 10 % market share in the datacenter, and that number will only grow slowly, because the market needs time to change, and so does AMD.
"[Intel] can hide the pitfalls their products have." How would that be possible? It would mean your IT personnel or CTO is simply stupid and has no clue about the job. I think they know exactly what they are doing, and so does Intel with its marketing; every pro/CTO should be able to differentiate between a fact-based product analysis and simple marketing messages. It is far too simplistic to imply that everyone not buying AMD is simply stupid.

And again, I think you overestimate the effect of AVX-512 and Optane. Both have their use cases, and the largest servers can still only be built with Intel hardware, but those scenarios are relevant only to a small portion of the overall picture.
 
"but there are also just as many use cases for Milan and the architecture advantages that AMD provides."
Obviously, and with good reason, but the limitations already begin with AMD's limited capacity. Therefore it doesn't make sense to propose an "everyone-has-to-switch" scenario, because that is simply not possible.
And in the datacenter, advantages are not measured by mere benchmark results. There are many factors to take into consideration; for example, Intel has the much wider, more diverse, and more customized portfolio/ecosystem ... it's much more than only AVX-512. About 50 % of all Xeons sold are semi-custom parts tailored to specific networking and other workloads and are sold to the big cloud players and hyperscalers. This is currently simply impossible for AMD.
According to Mercury, AMD still has below 10 % market share in the datacenter, and that number will only grow slowly, because the market needs time to change, and so does AMD.
"[Intel] can hide the pitfalls their products have." How would that be possible? It would mean your IT personnel or CTO is simply stupid and has no clue about the job. I think they know exactly what they are doing, and so does Intel with its marketing; every pro/CTO should be able to differentiate between a fact-based product analysis and simple marketing messages. It is far too simplistic to imply that everyone not buying AMD is simply stupid.

And again, I think you overestimate the effect of AVX-512 and Optane. Both have their use cases, and the largest servers can still only be built with Intel hardware, but those scenarios are relevant only to a small portion of the overall picture.

Like I said, I don't blame the customers (who are those IT personnel). But I don't see why AMD couldn't build just as many semi-custom designs for the cloud players and hyperscalers; they literally did semi-custom designs for all the recent and semi-recent consoles. I also still don't see how the largest servers can only be built with Intel hardware, unless they require things like AVX-512 and/or Optane, which are extremely good at what they're marketed for. Other than semi-custom designs (which I think AMD can do just as well as Intel), the only reason an Intel server can't be replaced with an AMD one is if the use case depends on Intel-specific technologies like AVX-512 and Optane (or other proprietary technologies Intel provides). Intel and AMD both specifically market to and target their customers (those IT personnel) in order to hide their respective pitfalls: Intel tries to hide its general-purpose and power-consumption weaknesses, while AMD hides its lack of targeted extensions and similar technology. Of course it's possible. Every company does this kind of marketing to mask its own weaknesses and present only its strengths; otherwise marketing would not exist.
 

spongiemaster

Admirable
Dec 12, 2019
2,362
1,350
7,560
I also still don't see how the largest servers can still only be built with Intel hardware, unless they require things like AVX-512 and/or Optane which are extremely good at what they're marketed towards.
AMD Epyc tops out at 2 CPUs per system. So even though AMD has more cores per CPU, Intel can still sell companies servers with significantly more cores: AMD has a ceiling of 128 cores (2x64), while Intel maxes out at 224 cores (8x28). I doubt the market for such high-core-count servers is very large, so it's probably not worth it for AMD to develop a competing platform.
 
AMD Epyc tops out at 2 CPUs per system. So even though AMD has more cores per CPU, Intel can still sell companies servers with significantly more cores: AMD has a ceiling of 128 cores (2x64), while Intel maxes out at 224 cores (8x28). I doubt the market for such high-core-count servers is very large, so it's probably not worth it for AMD to develop a competing platform.
If we're talking about server density, I would imagine the cooling requirements for such a dense Intel rack would be far more expensive than for a less dense AMD rack with much easier cooling requirements. And if we're talking about the largest servers in the world, they can simply go with a less dense but much more power-efficient and easily cooled AMD high-core-count setup. I imagine it's more cost-efficient to make a one-time payment for more floor space than to pay the recurring cost of higher energy and cooling bills.
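The recurring-cost argument can be put into rough numbers. Everything in this sketch is a hypothetical assumption chosen only to illustrate the method: spec-sheet TDPs (270 W per Xeon 8380, 280 W per Epyc 7763), an assumed $0.10/kWh electricity price, and an assumed PUE of 1.5 to stand in for cooling and facility overhead.

```python
# Back-of-envelope annual electricity cost for roughly comparable core counts.
# All inputs are assumptions for illustration, not measured numbers.

KWH_PRICE = 0.10       # assumed $/kWh
PUE = 1.5              # assumed power usage effectiveness (cooling overhead)
HOURS_PER_YEAR = 24 * 365

def annual_cost(cpu_tdp_watts: float, sockets: int) -> float:
    """Yearly electricity cost for `sockets` CPUs running at full TDP, incl. PUE."""
    load_kw = cpu_tdp_watts * sockets / 1000
    return load_kw * HOURS_PER_YEAR * KWH_PRICE * PUE

# One 8-socket Xeon 8380 box (224 cores) vs. two dual-socket Epyc 7763 nodes (256 cores)
print(f"8x Xeon 8380: ${annual_cost(270, 8):,.0f}/year")
print(f"4x Epyc 7763: ${annual_cost(280, 4):,.0f}/year")
```

Under these assumed inputs the denser Intel configuration costs roughly twice as much per year to power and cool, while the AMD pair actually provides more cores; the real trade-off against the one-time cost of extra rack space depends entirely on local pricing.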
 