[...] Intel performance per watt is poor beside these Epycs. [...]
And? AMD was already there in the last decade.
So if they have managed to come back, why shouldn't others be allowed to do so?
Yes, maybe, but AMD will easily lose this competition against ARM servers, e.g. the Altra Q80-33, with the M128-30 to follow soon. (Obviously they use the same market mechanics as AMD does to gain market share.)
Additionally, you do not know the prices OEMs actually pay; you can be sure they differ from list prices. And the differences are already slowly dissipating if you look at the more relevant general-purpose CPUs with lower core counts.
And to make it even worse for you, price is sometimes only a secondary factor in the datacenter (to be precise, Lisa Su even omitted the "sometimes" in her statement).
And in many cases, it doesn't matter at all, because AMD simply does not have the capacity. The topic is a bit more complex ...
@TCA_ChinChin: "I don't think anyone will argue ..." correct.
"... but unfortunately for AMD, there are some customers that simply don't care ..." nonsense; so now the customers are to blame? These are not gaming/hobby products where you simply look at the FPS counter and pick the most suitable CPU. Requirements in datacenters are much more complex (inside and outside the cabinet).
And yes, AMD currently has nothing competitive to offer with regard to AI, which is exactly why AMD is going to enhance the vector units in Zen4. (Btw, they currently also have nothing competitive in GPGPUs with regard to AI; there they will likewise need more time and hardware iterations, or will have to compete on price.)
And yes, CUDA is heavily used for AI and other tasks, but not because it's useless and Nvidia simply has good marketing. It is a good and powerful framework and collection of tools, and AMD simply failed years ago to invest the resources to put something similar in place for their own architecture. A resource problem, of course, but certainly nothing to blame Nvidia for.
@mac_angel: The CryEngine heavily relies on its primary thread. That is nothing you want to throw onto a slow-clocked 32+ core server CPU, and in this case even 200 GB/s of memory bandwidth won't make a difference.
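To see why clocks beat core count for an engine dominated by one thread, a rough Amdahl's-law sketch helps (the serial fraction and chip specs below are illustrative assumptions, not measured CryEngine or Epyc numbers):

```python
# Amdahl's-law sketch: effective performance of a workload where a fixed
# fraction of the work is stuck on a single (serial) primary thread.
# Illustrative numbers only, not measurements.

def relative_perf(clock_ghz: float, cores: int, serial_fraction: float) -> float:
    """Relative throughput ~ clock / (serial share + parallel share / cores)."""
    return clock_ghz / (serial_fraction + (1.0 - serial_fraction) / cores)

# Hypothetical 8-core 5.0 GHz desktop part vs. 64-core 2.5 GHz server part,
# assuming 60% of the frame time is bound to the primary thread:
desktop = relative_perf(5.0, 8, 0.6)   # ~7.7
server = relative_perf(2.5, 64, 0.6)   # ~4.1
print(desktop > server)  # the high-clocked desktop chip wins
```

With a serial share that large, the extra 56 cores barely move the result, which is the point: clock speed on the primary thread dominates, and memory bandwidth doesn't enter into it at all.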