News AMD Unveils 7nm EPYC Rome Processors, up to 64 Cores and 128 Threads for $6,950

Actually, the Rome I/O die is manufactured on GlobalFoundries' 14nm process; only the consumer I/O die is on 12nm.
 
Wow! Only 7 grand? What a steal!

I could buy a used Corvette or I could buy a lump of silicon. Hmm, I wonder which is the better deal.


I'm aware I'm not the target audience lol.

Datacenters and businesses that run VMs. We are a small business but our biggest server currently has two older gen Xeon CPUs at 10 cores each. We easily run 6 server VMs with those and likely could run 8-10. The CPUs cost over $2000 each at the time. So our next server may very well use AMD EPYC for cost savings alone.
 
I am having trouble understanding the performance without a gaming benchmark. Will it warp Crysis?


Sent from my iPad using Tapatalk Pro
 
Datacenters and businesses that run VMs. We are a small business but our biggest server currently has two older gen Xeon CPUs at 10 cores each. We easily run 6 server VMs with those and likely could run 8-10. The CPUs cost over $2000 each at the time. So our next server may very well use AMD EPYC for cost savings alone.
If you are running Windows Server on it, the costs for Server 2016/2019 go up VERY quickly once you have more than 8 cores per socket. The company I work for cannot afford to run Server 2016/19 because even Standard would cost ~$5k per instance of 2016, and that is with dual 24-core hosts. Server Datacenter allows a single license with unlimited VMs, but you have to license EVERY host. Doing that on 4x 128-core hosts would run over $200k just for Windows Server.
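To make the scale of that concrete, here's a rough back-of-the-envelope sketch of the Datacenter per-core licensing math. The 16-core pack size reflects how per-core licenses are sold, but the ~$6,200 pack price is my ballpark assumption, not a quote, so check current pricing before trusting the output.

```python
# Rough sketch of Windows Server Datacenter per-core licensing math.
# Assumptions (mine, not from the post above): licenses come in 16-core packs,
# every physical core on every host must be covered, and the pack price is
# only a ballpark list price.
import math

def datacenter_license_cost(hosts: int, cores_per_host: int,
                            price_per_16_core_pack: float = 6_200.0) -> float:
    """Estimate Windows Server Datacenter licensing cost for a cluster."""
    packs_per_host = math.ceil(max(cores_per_host, 16) / 16)  # 16-core minimum per host
    return hosts * packs_per_host * price_per_16_core_pack

# The scenario above: four hosts with 128 cores each.
print(f"${datacenter_license_cost(4, 128):,.0f}")  # roughly $198,400 -- "over $200k" territory
```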
 
A quote from a very nice ServeTheHome review of the Rome processors: "If AMD does not gain significant share, there is no merit to having a wholistically better product than Intel."

If AMD does not gain market share, then that means the CPU market has accepted an Intel monopoly and innovation is no longer rewarded but punished.

If a company has the objectively better product, but still nobody adopts said product, then there is no hope for the industry to innovate.
 
Let's see, double the cores of Intel's current offerings? Check.
Costs 3/4 of what the current competition charges? Check.
Comparable thermals to the competition? Check.

But let's get real, Rome isn't aimed at competing with what's out now. It's trying to kill the next gen while it's still in the crib.

Intel's next gen stuff is already on the ropes, and hasn't even gotten into the market yet:

Lower core counts (56 v 64)
Higher price
Much higher TDP (400W/chip!) +
PCIe 3.0 v PCIe 4.0 (and fewer lanes!) +
Maybe fewer memory channels?

Let me point out some things about the items I marked with a + up there.

The TDP per chip. I know the two companies calculate it differently, but in general it's a ballpark for waste heat. For a two-socket system, AMD is going to put out roughly 40-45% less heat and draw roughly 40-45% less energy, and that's not just savings on power for, well, power, but also savings on COOLING. A big chunk of a data center's total power draw goes to cooling, and every watt wasted as heat means more watts spent trying to get rid of it.
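For anyone who wants to see the arithmetic, here's a minimal sketch. The 400W Intel figure comes from the list above; the ~225W per 64-core Rome chip, the PUE of 1.6, and the $0.10/kWh rate are my assumptions, so treat the output as an illustration of the shape of the savings, not a real quote.

```python
# Back-of-the-envelope CPU power + cooling cost comparison.
# TDP is only a loose proxy for sustained draw; the PUE and electricity rate
# below are assumed values, not measurements.
HOURS_PER_YEAR = 24 * 365

def yearly_cost(sockets: int, tdp_watts: float,
                pue: float = 1.6,          # assumed facility overhead (cooling etc.)
                usd_per_kwh: float = 0.10) -> float:
    """Very rough yearly facility energy cost attributable to the CPUs."""
    kilowatts = sockets * tdp_watts / 1000 * pue
    return kilowatts * HOURS_PER_YEAR * usd_per_kwh

intel = yearly_cost(2, 400)  # dual-socket, 400 W/chip (figure from the post)
amd = yearly_cost(2, 225)    # dual-socket, assumed ~225 W per 64-core Rome chip
print(f"Intel ~${intel:,.0f}/yr, AMD ~${amd:,.0f}/yr, "
      f"about {1 - amd / intel:.0%} less for the AMD box")
```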

Intel even admits the 400W chip is really two of its current-gen chips glued together. Given what they said when EPYC first arrived... yeah...

Fewer, slower lanes: if there's a market for PCIe 4.0 bandwidth, it's the data center. More manufacturers are going to come out with products that can use that bandwidth. I can even see niche products that let machines aggregate 3.0 devices onto fewer 4.0 lanes with multiplexers, though I think it's a terrible idea.
With cloud, virtualization, and machine learning growing, the thirst for bandwidth is growing as fast as or faster than the demand for raw CPU power. Having gobs of fast lanes to feed GPGPUs is the way a lot of enterprises are going.
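As a rough illustration of why the lane story matters, here's a quick per-socket bandwidth comparison. The ~1 GB/s and ~2 GB/s per-lane figures are approximate post-encoding rates, and 128 lanes for Rome vs 48 for current Xeon Scalable are the commonly cited counts; this is a ballpark sketch, not a spec sheet.

```python
# Approximate aggregate PCIe bandwidth per socket (GB/s, per direction).
# Per-lane rates and lane counts are ballpark assumptions for illustration.
GB_PER_S_PER_LANE = {"pcie3": 1.0, "pcie4": 2.0}

def socket_bandwidth(lanes: int, gen: str) -> float:
    """Aggregate bandwidth across a socket's PCIe lanes."""
    return lanes * GB_PER_S_PER_LANE[gen]

rome = socket_bandwidth(128, "pcie4")  # AMD EPYC Rome: 128 lanes of PCIe 4.0
xeon = socket_bandwidth(48, "pcie3")   # current Xeon Scalable: 48 lanes of PCIe 3.0
print(f"Rome ~{rome:.0f} GB/s vs Xeon ~{xeon:.0f} GB/s per socket "
      f"({rome / xeon:.1f}x)")
```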

Intel needs to turn off cruise control and hit the gas, because they're getting smoked, hard.