News: Intel LGA9324 leak reveals colossal CPU socket with 9,324 pins for up to 700W Diamond Rapids Xeons

In engineering design, the higher the number of connection points, the higher the inherent failure rate, especially when high current runs through the CPU power pins. At least these are gold plated (to resist oxidation) and have restricted pin movement to reduce fretting failure.
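
As a rough sketch of that point: if each contact independently has some small chance of making a bad connection, the odds that at least one of them is bad grow with pin count. The per-pin probability below is invented purely for illustration, not a real reliability spec.

```python
# Illustration only: if each contact independently fails with probability p,
# P(at least one failure among N contacts) = 1 - (1 - p)^N.

def any_contact_failure(p_per_pin: float, n_pins: int) -> float:
    """Chance that at least one of n_pins contacts is bad, assuming independence."""
    return 1.0 - (1.0 - p_per_pin) ** n_pins

p = 1e-6  # hypothetical per-pin failure probability, purely illustrative
for n in (1700, 9324):  # a mainstream desktop socket vs. the rumored LGA9324
    print(f"{n:>5} pins -> P(any bad contact) ~ {any_contact_failure(p, n):.3%}")
```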

Case in point: has anyone heard about the high-pin-count 16-pin GPU connectors failing? Whoever chose that configuration did not understand paralleling high-current DC through standard pin-socket connectors. The current capacity per pin has to be derated; this is well known and easily found in app notes. Recipe for disaster. I found similar pin-sockets used in those connectors, and the derated rating for a 16-pin 94V-0 nylon housing was 5 A per pin. And the datasheets expect decent wire cooling to draw heat away from the connector housing, i.e., no "pretty" nylon shroud covering the wiring, so the wire can get reasonable convection cooling. The wire is expected to act as a heatsink for the connector pins and sockets.
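
For a rough sense of the numbers behind that concern, here is a back-of-the-envelope sketch assuming the nominal 12VHPWR/12V-2x6 layout (600 W carried across six +12 V contacts with six ground returns) and the 5 A derated figure quoted above. It ignores uneven current sharing between the paralleled pins, which only makes things worse.

```python
# Per-pin current in a 16-pin GPU power connector, split evenly (best case)
# across the six +12 V contacts. Assumes the nominal 600 W / 12 V rating.

def amps_per_pin(power_w: float, volts: float, current_pins: int) -> float:
    """Total current divided evenly across the parallel power contacts."""
    return power_w / volts / current_pins

load = amps_per_pin(power_w=600, volts=12, current_pins=6)
derated_limit = 5.0  # A per pin, the derated figure quoted above
print(f"~{load:.1f} A per pin vs. a {derated_limit:.0f} A derated limit")
# ~8.3 A per pin, well above the derated rating, which is the point being made.
```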
 
That is a lot of stuff going on. And the number of pins is just going to increase.
Soon they will need holes through the middle parts of the package to even out the mounting pressure.
Good thing you can do that with chiplets.
 
That is a lot of stuff going on. And the number of pins is just going to increase.
Soon they will need holes through the middle parts of the package to even out the mounting pressure.
Good thing you can do that with chiplets.
SP5's latch only holds the CPU in place. It relies on the heatsink to provide proper pressure and distribution of pressure.
 
In engineering design, the higher the number of connection points, the higher the inherent failure rate, especially when high current runs through the CPU power pins. At least these are gold plated (to resist oxidation) and have restricted pin movement to reduce fretting failure.
At least they have enough pins to dedicate to power, so they won't be forced to overdrive the pads, as with some of the recent issues on AMD's AM5 socket.
Socket creep is one of the biggest man-hour time sinks when designing these new sockets; apparently the issue has gotten bad with these big sockets.

And the datasheets expect decent wire cooling to draw heat away from the connector housing, i.e., no "pretty" nylon shroud covering the wiring, so the wire can get reasonable convection cooling. The wire is expected to act as a heatsink for the connector pins and sockets.
^^ This is something not enough people are talking about. The copper wire wicks away a non-trivial amount of heat from the Molex socket connector. Larger wire gauges could help too; technically all 12V-2x6 cables are supposed to be 16 AWG by definition, but 14 AWG would be even better.
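
Rough numbers on why the heavier gauge helps, using standard copper resistance values (about 13.2 mΩ/m for 16 AWG and 8.3 mΩ/m for 14 AWG) and the roughly 8.3 A per conductor that a 600 W load split across six +12 V wires implies:

```python
# I^2*R self-heating comparison of 16 AWG vs 14 AWG conductors.
# Resistance-per-metre values are standard figures for copper wire.

RESISTANCE_OHM_PER_M = {"16 AWG": 0.0132, "14 AWG": 0.0083}
current_a = 600 / 12 / 6  # ~8.3 A per conductor, 600 W over six +12 V wires

for gauge, r in RESISTANCE_OHM_PER_M.items():
    heat_w_per_m = current_a ** 2 * r
    print(f"{gauge}: ~{heat_w_per_m:.2f} W of heat per metre of wire")
# Less self-heating plus more copper cross-section means the heavier wire can
# pull more heat out of the connector terminals, as described above.
```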
 
Or, maybe you could just water-cool the power connectors! :D
https://newsroom.intel.com/data-center/intel-shell-advance-immersion-cooling-xeon-based-data-centers
 
I believe APX and AVX10.2 features are still coming in Diamond Rapids.
That will be very cool.

APX simply requires a recompile; no special hand-tuning, assembly language, or intrinsics are needed to make use of it. I don't know if Intel ever said, but I'm guessing it's worth somewhere on the order of a 10% speedup. It could also provide an efficiency benefit, especially for multi-threaded workloads. It should primarily benefit scalar code, making it a very nice complement to all the rounds of vector extensions.

AVX10.2 is in a similar spirit to previous AVX-512 extensions (including everything that entails), except that it's now a strict superset of AVX10.1.