News Nvidia, SK Hynix, Samsung and Micron reportedly working on new SOCAMM memory standard for AI PCs

The article said:
SOCAMM is reported to feature a significant number of I/O ports when compared to LPCAMM and traditional DRAM modules. SOCAMM has up to 694 I/O ports, outshining LPCAMM's 644, or traditional DRAM's 260.
LOL, "I/O ports"? I think the word you're reaching for is "electrical contacts" (or just "contacts", for short)! "Pads" would probably also work.

I/O ports have a rather different meaning and I didn't even know what you were talking about, at first.

The article said:
Nvidia appears to be developing the standard without any input from the Joint Electron Device Engineering council (JEDEC).
I hope they're just doing this for time-to-market reasons, and will eventually contribute the spec to JEDEC.

The article said:
The reason as to why that might be is down to the company's focus on AI workloads. Running local AI models demand a large amount of DRAM, and Nvidia would be wise to push for more I/O, capacity and more configurability.
I'd venture a guess that the key differences are size and bandwidth. Given the size constraints, it seems unlikely to me that SOCAMM would offer more capacity than LPCAMM, although its diminutive size might allow fitting two modules in roughly the same footprint as one LPCAMM.
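The bandwidth angle is easy to sanity-check: peak throughput is just bus width times transfer rate. A quick sketch, using purely illustrative numbers (LPDDR5X-8533 on a hypothetical 128-bit bus, not confirmed SOCAMM specs):

```python
def peak_bandwidth_gbs(bus_width_bits: int, transfer_rate_mts: int) -> float:
    """Peak memory bandwidth in GB/s: bytes per transfer times megatransfers/s."""
    return bus_width_bits / 8 * transfer_rate_mts / 1000

# Illustrative assumption: LPDDR5X-8533 on a 128-bit bus.
print(peak_bandwidth_gbs(128, 8533))  # → 136.528
```

So even without extra capacity, a wider bus or faster LPDDR grade would move the needle a lot for local AI workloads.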

I think the size difference probably has a lot to do with this business of mounting the dies directly on the PCB.

Should be interesting.
"Finer details about SOCAMM are still firmly shrouded in mystery, as Nvidia appears to be developing the standard without any input from the Joint Electron Device Engineering council (JEDEC)"

How nice, nVidia...
 
Since it's backed by nVidia: power will be supplied to the module via many individual traces of varying length, there will be no power conditioning on the module (not even a cap), the data traces will have no termination, just stubs of copper, and when modules stop working nVidia will blame the owners for not inserting them correctly.
 
I've seen these half-baked articles popping up all over the place. The only thing anyone knows for sure is that it uses LPDDR and is a similar format to CAMM2. Unless they're putting some sort of compute on the module itself (which would limit capacity, so I highly doubt it), the only possible thing they could be doing here is a higher bus width. It doesn't seem like the pin difference alone would be enough to increase bus width, which would suggest some pin assignments are changing.
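A back-of-envelope check supports that. The reported contact counts give SOCAMM only 50 more contacts than LPCAMM, and an extra LPDDR channel needs data, command/address, clock/strobe, and power/ground pins. The per-channel pin counts below are rough assumptions for LPDDR-style signaling, not published SOCAMM figures:

```python
# Reported contact counts from the article.
SOCAMM_CONTACTS = 694
LPCAMM_CONTACTS = 644

# Rough per-channel pin budget (assumptions, not a spec):
DATA_PINS = 32         # one x32 LPDDR channel
CA_PINS = 7            # command/address
CLK_STROBE_PINS = 6    # clocks plus read/write strobes
POWER_GROUND_PINS = 20 # supply and return pins

per_channel = DATA_PINS + CA_PINS + CLK_STROBE_PINS + POWER_GROUND_PINS
surplus = SOCAMM_CONTACTS - LPCAMM_CONTACTS
print(surplus, per_channel, surplus >= per_channel)  # → 50 65 False
```

Under these assumptions the surplus falls short of a full extra channel, which is consistent with the guess that existing pin assignments change rather than the raw count doing the work.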