Just want to note that this one is no longer true, as AMD now commands a higher share of server revenue relative to its unit volume than Intel does.
I probably should've worded it better. I meant that the cost of an EPYC CPU is lower than that of a Xeon in the same class. I saw a listing for an EPYC CPU on Newegg once and, out of curiosity, looked at the prices of Xeons. I was shocked that the Xeons cost more than the EPYC despite having WAY fewer cores.
Lemme see if I can still find it (or something similar)... Ok, this should work. I wanted to use a single-socket EPYC as my example (because comparing dual-socket arrangements becomes way too complicated for a simple forum post) and the top one I could find on Newegg was the old Rome-based EPYC 7702P from Q3'2019. I'll compare that with a Xeon Platinum 8458P (Sapphire Rapids) from Q1'2023. I know that there's roughly a three-and-a-half-year difference between them but it doesn't really matter:
Intel Xeon 8458P:
44 Cores, 88 Threads
Base Frequency: 2.7GHz
Max Boost Frequency: 3.8GHz
RAM Type: DDR5-4800
Max RAM Supported: 4TB
PCI-Express Generation: 5
Max PCIe Lanes: 80
TDP: 350W
Price: $8000
AMD EPYC 7702P:
64 Cores, 128 Threads
Base Frequency: 2GHz
Max Boost Frequency: 3.35GHz
RAM Type: DDR4-3200
Max RAM: 4TB
PCI-Express Generation: 4
Max PCIe Lanes: 128
TDP: 200W
Price: $4800
Now, I don't know about you, but if I'm building a single-socket server, I am NOT going to pay an extra 67% for PCIe5, DDR5 and higher clock speeds while getting 40 fewer threads and 150W more power draw. Servers are all about throughput, which is why extra threads >>>>>>>>>>> clock speeds in servers. PCIe5 looks better on paper at ~32Gbps per lane, but I would rather have 128 PCIe4 lanes than 80 PCIe5 lanes, because servers are all about moving data over the network and a single PCIe4 lane already carries ~16Gbps.
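If anyone wants to double-check my maths, here's the arithmetic as a quick Python sketch (the prices are just the Newegg listings I quoted above, not official list prices):

```python
# Back-of-the-envelope check of the price gap using the Newegg prices above.
xeon_price, epyc_price = 8000, 4800
xeon_threads, epyc_threads = 88, 128

premium = (xeon_price - epyc_price) / epyc_price * 100
print(f"Xeon price premium over the EPYC: {premium:.0f}%")          # ~67%
print(f"Xeon price per thread: ${xeon_price / xeon_threads:.2f}")   # ~$90.91
print(f"EPYC price per thread: ${epyc_price / epyc_threads:.2f}")   # ~$37.50
```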
With the extra cores and PCIe lanes, more users can hit the server at the same time without introducing any latency penalties. After all, the fastest Ethernet ports on Newegg's most expensive AMD SP3 and Intel LGA4677 motherboards are "only" 10Gbps, so the onboard LAN port would be a bottleneck that erases any advantage that PCIe5 could have.
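To put rough numbers on that bottleneck argument, here's a quick sketch assuming 128b/130b line encoding and ignoring any other protocol overhead:

```python
# Rough check that the 10GbE port, not the PCIe generation, is the ceiling.
# Assumes 128b/130b encoding; other protocol overhead is ignored.
nic_gbps = 10.0                         # fastest onboard LAN port I found
pcie4_lane_gbps = 16.0 * 128 / 130      # ~15.8 Gbps usable per PCIe4 lane
pcie5_lane_gbps = 32.0 * 128 / 130      # ~31.5 Gbps usable per PCIe5 lane

print(f"PCIe 4.0 x1: ~{pcie4_lane_gbps:.1f} Gbps")
print(f"PCIe 5.0 x1: ~{pcie5_lane_gbps:.1f} Gbps")
print(f"A single 10GbE port fits inside one PCIe 4.0 lane: {nic_gbps < pcie4_lane_gbps}")
```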
Please note that I am only referring to a data centre server's use-case, not that of a number-crunching supercomputer. I would note, however, that Frontier, currently the world's most potent supercomputer, uses EPYC CPUs (Trento, an optimised Milan derivative) that are only one Zen generation newer than the 7702P's Rome cores, so clearly an EPYC would be very suitable for a number-cruncher, especially if paired with an AMD Instinct CDNA GPU.
Lastly, and for many most importantly, the EPYC 7702P is far more efficient: its 64 cores use 150W less than the Xeon's 44 cores. For many server operators, that is the most important statistic of all, because every server component is already high-performance (i.e. fast enough), so greater efficiency = greater profitability.
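For anyone who wants the efficiency gap in numbers, here's a rough sketch; note that the $0.12/kWh electricity price is purely an assumption for illustration, and TDP is only a proxy for real-world draw:

```python
# Efficiency comparison based on the TDP figures above (TDP is only a rough
# proxy for sustained power draw, not a measured figure).
xeon_tdp_w, xeon_cores = 350, 44
epyc_tdp_w, epyc_cores = 200, 64

print(f"Xeon: {xeon_tdp_w / xeon_cores:.1f} W per core")   # ~8.0 W/core
print(f"EPYC: {epyc_tdp_w / epyc_cores:.1f} W per core")   # ~3.1 W/core

# Hypothetical running-cost difference per socket, running 24/7 for a year.
# $0.12/kWh is an assumed electricity price; cooling overhead not included.
extra_kwh_per_year = (xeon_tdp_w - epyc_tdp_w) / 1000 * 24 * 365
print(f"Extra energy per year: {extra_kwh_per_year:.0f} kWh "
      f"(~${extra_kwh_per_year * 0.12:.0f} at $0.12/kWh)")
```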
Things get remarkably worse for Intel if we start talking about multi-socket solutions, because the core-count, PCIe-lane and efficiency advantages that AMD already enjoys effectively double.
Let's also remember that this EPYC CPU is three and a half years older than the Xeon and still kicks it all over the place despite costing $3200 less.
I don't really know about the revenue-versus-volume percentages because I'm not really into the server side of things. I based what I said on the pricing I saw at Newegg for the server CPUs that they sell.
One thing I saw that I thought was hilarious, both because I never thought you'd still be able to get one of these and because of the prices that whoever is selling them is trying to charge:
AMD Opterons at Newegg in 2024?!
They must be a collectors' item or something. Like, the Opteron 6278 (Interlagos) was one of the first 16-core x86 CPUs ever produced.