[quotemsg=21458221,0,1920539]Intel also claims the processors offer up to 17X more AI/Deep Learning inference performance than its Xeon Scalable 8180 did at launch.[/quotemsg]
And yet still probably less than Nvidia's Tesla T4, which likely costs only 10% or 20% as much and burns far less power (50 - 75 W each).
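To make that point concrete, here's a back-of-envelope efficiency comparison. All numbers are illustrative placeholders (the TDPs and prices are my assumptions, and I've set raw inference throughput equal just to isolate efficiency), not measured values:

```python
# Back-of-envelope perf-per-watt / perf-per-dollar comparison.
# ALL numbers are illustrative assumptions, not benchmarks.
xeon_8180 = {"tdp_w": 205, "price_usd": 10_000, "rel_inference_perf": 1.0}
tesla_t4  = {"tdp_w": 70,  "price_usd": 2_000,  "rel_inference_perf": 1.0}

def perf_per_watt(dev):
    return dev["rel_inference_perf"] / dev["tdp_w"]

def perf_per_dollar(dev):
    return dev["rel_inference_perf"] / dev["price_usd"]

# Even at equal absolute throughput, the lower-power, cheaper card
# comes out ahead on both efficiency metrics:
print(perf_per_watt(tesla_t4) / perf_per_watt(xeon_8180))      # ~2.9x
print(perf_per_dollar(tesla_t4) / perf_per_dollar(xeon_8180))  # 5.0x
```

Of course, if the GPU also delivers higher absolute throughput, the gap widens further.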
[quotemsg=21458221,0,1920539]It would make sense to use the third existing UPI port for communication between dies within the same processor package, while leaving the other two UPI connections free for the connection to the other processor in a two-socket server.[/quotemsg]
Perhaps two of each die's UPI links go to the other die in the package, with one link per die going off-package. But I suppose you could instead have a total of 4 links going off-package, for a fully-connected topology between all 4 dies in a 2-socket config. I just think it would scale better to have more connectivity between dies in-package, since a single UPI link would probably still be a choke point for apps requiring substantial die-to-die communication.
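To make the trade-off concrete, here's a quick sketch (my own speculation about the wiring, not Intel's actual topology) counting intra-package vs. cross-package links in the two layouts, assuming 3 UPI links per die and 2 dies per package:

```python
# Hypothetical 2-socket, 2-die-per-package system; each die has 3 UPI links.
# Layout A: 2 links between the dies in each package, 1 per die off-package.
# Layout B: 1 intra-package link per package, 2 off-package links per die,
#           wired so all 4 dies are fully connected.
# Dies 0,1 live in socket 0; dies 2,3 in socket 1. Edges are (die, die) pairs.
layout_a = [(0, 1), (0, 1), (2, 3), (2, 3), (0, 2), (1, 3)]
layout_b = [(0, 1), (2, 3), (0, 2), (0, 3), (1, 2), (1, 3)]

def socket(die):
    return die // 2

def cross_package(edges):
    return sum(1 for a, b in edges if socket(a) != socket(b))

def intra_package(edges):
    return len(edges) - cross_package(edges)

for name, edges in [("A", layout_a), ("B", layout_b)]:
    print(f"Layout {name}: {intra_package(edges)} intra-package, "
          f"{cross_package(edges)} cross-package")
```

Layout A doubles the in-package bandwidth at the cost of cross-socket links; layout B does the reverse. My hunch is A wins for workloads dominated by die-to-die traffic within a socket.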
[quotemsg=21458221,0,1920539]Intel did reveal that it isn't using an EMIB connection between the dies[/quotemsg]
Wow, that's a surprising indictment of EMIB. Quite a reversal, given how hard they were pushing it even within the past year.
[quotemsg=21458221,0,1920539]whether these processors will require a new socket (they are rumored to drop into an LGA5903 socket)[/quotemsg]
For 12 memory channels, I imagine they must.
I wonder how many PCIe lanes they'll support. Dare we hope to see all 96 lanes per socket?
[quotemsg=21458221,0,1920539]Intel also announced Xeon E-2100 series of processors.[/quotemsg]
These look to be nothing more than the Xeon flavor of the Coffee Lake desktop CPUs introduced about a year ago. That's an uncharacteristically long lag - normally it's <= 6 months, IIRC. I might be interested in a Xeon version of the i9-9900K, but I guess that might take another year...