News MSI delivers first motherboard with CAMM2 memory — Z790 Project Zero brings new RAM standard to desktops

A waste of space, and of the memory itself, on a desktop motherboard: to upgrade memory you have to sell off the old module, rather than, as we do now, just add another. It might have made sense if they had stuck to the SODIMM size and connected to the motherboard on angled riser boards.
 
  • Like
Reactions: Papusan

Pierce2623

Upstanding
Dec 3, 2023
182
163
260
If/when desktops get support for CXL.mem devices, then maybe that will be offered as a way to add capacity.
Don’t hold your breath for CXL coming to consumer platforms. Its uptake has greatly slowed in the commercial sector, and it's not even a sure thing it will really survive as a standard.
 

abufrejoval

Reputable
Jun 19, 2020
400
276
5,060
Yeah, a large, flat single CAMM2 slot for memory doesn't actually register as a good thing in my view, for desktops. Because with that you have to buy all the RAM you want at the start; there's no good way to upgrade.
I know we want it all.

But when you have to connect 616 pins that transmit related signals at gigahertz speeds without too much clock skew, there aren't a terrible lot of affordable, mass-producible alternatives.

CAMM2 is an interesting halfway point between having stacked DRAM on the die carrier and DIMM slots, but judging by my last 2DPMC disaster on Raphael, DIMM sockets no longer guarantee upgradeability either.

At least having the option to swap (and move the modules between systems) is better than having no option but to replace the entire system.

But yeah, I'd definitely prefer this on the back side of the board, where, like most of the M.2 slots, it would be both more easily cooled and take up less precious PCIe slot/CXL connector space. But even that might be an issue for PCB design, routing from top to bottom at this density.

Having to pick your RAM capacity at the time you build or purchase your system is unfortunately not just about vendors cheaping out, but about where physics is driving us.

And I'm completely divided (pun intended) on whether vendors should perhaps still combine on-die-carrier RAM with DIMM sockets, at least for high-end desktops or workstations, to cater to both high-speed and high-capacity workloads, as they did on Xeon Phi or do with HBM stacks on server chips.

I know I'd want some of my DRAM to be really fast (perhaps even for a game), but once capacity matters, having lots of RAM instead of paging is so much better that I'd accept it being noticeably slower, and ideally cheaper.

Case in point: the ability to forfeit 64GB of RAM and run DDR5 at 6000 in a gaming mode, but use all 128GB of DDR5 at 3600 in a VM or ML mode, preferably switchable on the fly with a CLI toggle, would have saved me from buying 96GB of DDR5-5600 ECC in two DIMMs at the price of 128GB in four.

And differentiation in storage tiers from V-cache to xLC in NVMe drives is just how things are evolving.
 

abufrejoval

Reputable
Jun 19, 2020
400
276
5,060
Not a chance.

PCIe is fast, but it pales in speed vs. how fast RAM sends data to the CPU. PCIe RAM would be such a huge latency hit that it wouldn't be worth it.

There is a reason RAM is always located as close to the socket as possible.
It depends on your alternatives.

Sure, you'd want at least some of your RAM to be really close and fast. It's one of the reasons we have these big caches, because compared to the good old 4116 days, when DRAM access felt instant, it's really the new tape, best used sequentially.

But when the close-by RAM runs out of capacity, more distant pools of RAM can still be quite a lot better than going to tape or even NV-DIMMs.

It might seem an HPC server issue today, but more distinct RAM tiers might become the compromise of choice even in laptop and desktop designs sooner or later.

And if CXL is built into SoC IP blocks, it might find usage in areas that didn't drive its initial design.

Just be careful with predictions in IT: under the pressure of profits, engineers run wild.
 
  • Like
Reactions: bit_user

Frozoken

Prominent
May 16, 2023
14
10
515
Is this wanted? While it enables shorter traces and better chip-to-memory latency, I think this is the wrong segment for it (a full ATX mobo). It makes a lot more sense in the ITX/mini-PC area, to enable performance in a small package. I personally have zero interest on the desktop side, at least not until it becomes an established standard. Till then, DIMMs are an entrenched part of a system. If we see incursions on workstations and servers, maybe DIMMs' days will be numbered. I'll take it over soldered RAM any day, but this application makes no sense to me outside of a CAMM2 test platform.
Yes? Lower-latency, higher-bandwidth memory that's still replaceable; why wouldn't you prefer that?
 

Frozoken

Prominent
May 16, 2023
14
10
515
I'm not even remotely sold on this design for desktops, given how hot high-speed DDR5 runs. Even with clamshell chips, the module they're showing cannot have more than 96GB capacity. So we have limited capacity and potential cooling issues with no real benefit for desktop users. The only place I see it really having any potential on desktop would be ITX boards and replacing SODIMMs on minis.
96GB is literally just stated because it's the current max with 2 DIMMs in dual rank. Yes, you lose the extra 2 DIMM slots, but those slots also hurt your performance by merely existing on desktop motherboards, and cripple it (and your stability) when you decide you actually want to use them. Good riddance.
 
  • Like
Reactions: 35below0
96GB is literally just stated because it's the current max with 2 DIMMs in dual rank.
What are you talking about? You can buy 64GB DIMMs right now. The 96GB I cited is based on counting the memory chips on the module in the picture, and the highest available CAMM2 DDR5 modules currently are 64GB.
Yes, you lose the extra 2 DIMM slots, but those slots also hurt your performance by merely existing on desktop motherboards, and cripple it (and your stability) when you decide you actually want to use them. Good riddance.
I'm not worried about losing the "extra 2 DIMM slots," as you say, and I don't know why you think that's what I said. I don't like losing maximum capacity without there being a tangible benefit, which there simply isn't right now. Cooling alone is the most important part for me, and if they're doing single-sided modules for cooling's sake, that means even more limited capacity. There simply isn't a consumer benefit to switching to CAMM2 modules over DIMMs for desktop systems yet.
 

cyrusfox

Distinguished
Yes? Lower-latency, higher-bandwidth memory that's still replaceable; why wouldn't you prefer that?
I don't need or want an expensive CAMM2 interface on a full ATX mobo; give me 4 to 8 DIMM slots surrounding the socket.

Put CAMM2 on an ITX board and I would have zero complaints, but it had better do something novel with the space saved (put it on the bottom of the board and give a full M.2 configuration).

My issue is that this is not a novel offering; it's more of a test platform for CAMM2.
 

35below0

Commendable
Jan 3, 2024
1,484
623
1,590
I don't need or want an expensive CAMM2 interface on a full ATX mobo; give me 4 to 8 DIMM slots surrounding the socket.
Why?

I get why you don't want something more expensive, but if the price is competitive, you're still against it. But why?
I only use 2 slots anyway. Plus, DIMM slots have lots of problems of their own.

I see a lot of practical benefits to RAM that can be slotted in/out like a CPU.
 

cyrusfox

Distinguished
but if the price is competitive,
It's not price competitive; it's 1.7 to 2.5x the cost of DDR5, and for DDR6... 5x.
Crucial is selling a 32GB module of LPCAMM2 LPDDR5X-7500 for $174.99, and a 64GB module for $329.99 [LINK]

CAMM2-standard LPDDR6, with a 32GB specification for example, costs about USD 500, which is five times the price of LPDDR5 (SO-DIMM/DIMM) memory [Link]
The performance uplift stands no chance of matching the price increase. With limited availability and support (adoption phase), CAMM2 will be much more expensive than DIMM for a good while.

Its key benefits are least expressed on a desktop platform, especially of the ATX variety; it's a bad idea that will see limited adoption. If DDR6 will only exist in a CAMM form factor, it will need to transform to be more rack-friendly for those 12-channel memory servers. The current CAMM standard will not work there, and I'm not sure what will work better than the current DIMM setup.
 

bit_user

Polypheme
Ambassador
Not a chance.

PCIe is fast, but it pales in speed vs. how fast RAM sends data to the CPU.
Not if you're talking about PCIe 5.0. A x16 link would net you up to ~64 GB/s in each direction (~128 GB/s aggregate), whereas a standard desktop 128-bit (2 DIMM) DDR5-5600 setup can only provide a nominal 89.6 GB/s (unidirectional). So, the aggregate bandwidth of that x16 link would already beat your DRAM.
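For anyone who wants to check that arithmetic, here is a rough back-of-the-envelope sketch (the 128b/130b encoding efficiency and the nominal DDR5 figures are my assumptions; real-world throughput will be lower):

```python
# Nominal bandwidth: PCIe 5.0 x16 vs. a dual-channel DDR5-5600 desktop.
# Theoretical peak numbers only; protocol overhead beyond line encoding is ignored.

PCIE5_GT_PER_LANE = 32            # GT/s per lane (PCIe 5.0)
ENCODING = 128 / 130              # 128b/130b line encoding
LANES = 16

pcie_per_direction = PCIE5_GT_PER_LANE * ENCODING / 8 * LANES   # GB/s, one direction
pcie_aggregate = 2 * pcie_per_direction                         # GB/s, both directions

DDR5_MT_S = 5600                  # mega-transfers per second
CHANNEL_BYTES = 8                 # 64-bit channel
CHANNELS = 2                      # "2 DIMM" / 128-bit desktop setup

ddr5_bandwidth = DDR5_MT_S * CHANNEL_BYTES * CHANNELS / 1000    # GB/s

print(f"PCIe 5.0 x16: ~{pcie_per_direction:.0f} GB/s per direction, "
      f"~{pcie_aggregate:.0f} GB/s aggregate")
print(f"DDR5-5600 dual channel: {ddr5_bandwidth:.1f} GB/s")
# -> ~63 GB/s per direction (~126 GB/s aggregate) vs. 89.6 GB/s
```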

This is the play CXL is making. And you wouldn't use it the same as regular DDR memory, but you'd instead swap in/out entire pages at a time. Not quite like virtual memory, since CXL.mem has low enough latency to support the occasional access, but if you have a way of tracking "hot" pages, then they can be migrated to on-package memory.

PCIe RAM would be such a huge latency hit that it wouldn't be worth it.
With CXL, the latency hit is only about double that of regular DRAM. So, as long as your "hot" pages are on package, the latency of going off package would be tolerable.
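To make the "hot pages" idea concrete, here is a toy sketch of that kind of placement policy (entirely hypothetical names, not any real kernel or CXL API): count accesses per page and keep the hottest pages in the small, fast on-package tier while everything else lives in the larger CXL.mem tier.

```python
from collections import Counter

# Toy two-tier placement model: a small fast "near" tier (on-package DRAM)
# and a large slow "far" tier (CXL.mem). Sizes and page numbers are made up.

NEAR_TIER_CAPACITY = 4      # pages that fit in the fast tier
access_counts = Counter()   # per-page access counts ("heat")
near_tier = set()           # pages currently resident on package

def touch(page: int) -> None:
    """Record an access to a page and re-evaluate placement."""
    access_counts[page] += 1
    rebalance()

def rebalance() -> None:
    """Keep the hottest pages in the near tier; demote the rest."""
    hottest = {p for p, _ in access_counts.most_common(NEAR_TIER_CAPACITY)}
    for page in near_tier - hottest:
        near_tier.discard(page)     # would migrate the page out to CXL.mem
    for page in hottest - near_tier:
        near_tier.add(page)         # would migrate the page into on-package DRAM

# Simulated workload: pages 1 and 2 are hot, the rest are touched once.
for page in [1, 2, 1, 2, 3, 1, 2, 4, 5, 1, 2, 6]:
    touch(page)

print("pages resident on package:", sorted(near_tier))
```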

There is a reason RAM is always located as close to the socket as possible.
Yeah, mostly for signal integrity.

Don’t hold your breath for CXL coming to consumer platforms. Its uptake has greatly slowed in the commercial sector, and it's not even a sure thing it will really survive as a standard.
It'll happen. I never expected it to arrive overnight.

For power savings and continued scaling, server CPUs need to start using on-package memory as a standard configuration. However, they can't fit nearly enough memory on package, and will continue to rely on off-package memory as well. The cool thing about CXL is that you can give a CPU something like 160 lanes and let the customer or integrator decide how to partition them amongst memory, storage, networking, accelerators, etc. Plus, CXL supports switching, so you can have an even larger pool of CXL.mem modules than you have available lanes to directly connect, and they can be shared between multiple CPUs and/or other devices. The benefits are there and the use cases are inevitable. It will happen.
 
  • Like
Reactions: thestryker

35below0

Commendable
Jan 3, 2024
1,484
623
1,590
It's not price competitive; it's 1.7 to 2.5x the cost of DDR5, and for DDR6... 5x.

The performance uplift stands no chance of matching the price increase. With limited availability and support (adoption phase), CAMM2 will be much more expensive than DIMM for a good while.
That's what I mean by "if". If the price were competitive, you'd still be hating on this standard, and I don't understand why.
Its key benefits are least expressed on a desktop platform, especially of the ATX variety; it's a bad idea that will see limited adoption. If DDR6 will only exist in a CAMM form factor, it will need to transform to be more rack-friendly for those 12-channel memory servers. The current CAMM standard will not work there, and I'm not sure what will work better than the current DIMM setup.
We don't know how well it will be adopted. And I don't see why ATX wouldn't benefit from this.

It's not a replacement for DIMMs, though maybe it should be. DIMM is old and only hanging around because nobody thought of anything better. Now something new appears and you're picking it apart before it's even had a chance.
Why?

DIMM is not without fault. Frankly, it kinda sucks: counter-intuitive slot population, RAM sticks hating each other, incomprehensible memory QVLs.

The space required and heat dissipation are areas where both standards are in trouble. But overall I'm willing to welcome this new idea. Relatively new, anyway.
 

35below0

Commendable
Jan 3, 2024
1,484
623
1,590
I don't believe CAMM2 or LPCAMM2 will "fix" this problem.
Why not? It's a one-piece jigsaw. It's either listed or it isn't. Compare that to kits of 1, 2, or 4 sticks that can work in dual, single, spinning, half-everted quarter-cycled, or submerged mode. I may have made some of those up, but it's about the right level of clarity.
 

bit_user

Polypheme
Ambassador
Why not? It's a one-piece jigsaw. It's either listed or it isn't. Compare that to kits of 1, 2, or 4 sticks that can work in dual, single, spinning, half-everted quarter-cycled, or submerged mode. I may have made some of those up, but it's about the right level of clarity.
The memory QVLs I've seen just list which DIMMs are supported. If a DIMM is supported, then it should work in any configuration.

There's a separate set of rules governing what speed you get, based on rank and slot population. Don't confuse that with strict compatibility, however.

CAMM2/LPCAMM2 could simplify the second issue in exactly the same way as 2-slot motherboards do. You get effectively 1 DIMM per channel, but there could still theoretically be speed differences depending on whether you use single-rank or dual-rank.
 

35below0

Commendable
Jan 3, 2024
1,484
623
1,590
ASRock is pretty clear; Gigabyte is overly complex: https://www.gigabyte.com/Motherboard/Z790-UD-rev-10/support#support-memsup
Asus is little better; ditto MSI.

For example:
Vendor: G.SKILL
Model: F5-6400J3648G24GX2-TZ5RK
DDR: DDR5
SPD Speed (MHz): 5600
Supported Speed (MHz): 6400
Chipset: Spectek
Voltage (V): 1.4
Sided: SINGLE
Size (GB): 24
1 | 2 | 4 DIMM: √ | √ |

The above establishes that the G.Skill kit with part number F5-6400J3648G24GX2-TZ5RK is compatible. It's a 6400 kit with default speed 5600.
But the confusing part is the size and slot population. The F5-6400J3648G24GX2-TZ5RK kit is a 48GB kit, not 24GB. It's made of two 24GB sticks.
According to the above, the kit, which is made of two pieces, can be used by installing just one of them. Or both can be used, as intended.

That's maybe useful information, but it's also a little confusing. And it's different from other vendors' QVLs. And it misrepresents the actual part, which is a 48GB kit, not a single piece and certainly not 24GB.

That motherboard QVLs are not standardized is not directly the fault of DIMMs, but it is a consequence of them.

CAMM2/LPCAMM2 could simplify the second issue in exactly the same way as 2-slot motherboards do. You get effectively 1 DIMM per channel, but there could still theoretically be speed differences depending on whether you use single-rank or dual-rank.
No, that's not the same thing, because the 2-slot motherboards still use the same RAM as all the other motherboards. In fact, the only way those are less complicated is that they eliminate 4x kits, but you still have single and 2x kits on the list.

CAMM2/LPCAMM2 is just a part number. If it's on the list, that's it.

And it's a single piece of hardware that goes into its slot. NVMe drives aren't made of 2 or 3 parts that each have to be carefully installed into a correct place on the motherboard. They also don't immediately reduce each other's speed or outright refuse to work because of an incompatibility lottery, even though they left the same production line minutes apart.
One single piece. Seems like a great simplification.

It could be made more complicated with overclocking.
Or it could just be sold locked or unlocked, like Intel CPUs are. So plug it in and forget it if you don't care, or tweak and OC it.

So there you go, it certainly could simplify QVLs. But in fairness, so could Gigabyte, or the others.
 
ASRock is pretty clear; Gigabyte is overly complex: https://www.gigabyte.com/Motherboard/Z790-UD-rev-10/support#support-memsup
Asus is little better; ditto MSI.
It's too bad that once you reach a certain speed they're all bad, as every motherboard vendor is guilty of binning CPUs to run QVLs.
No, that's not the same thing, because the 2-slot motherboards still use the same RAM as all the other motherboards. In fact, the only way those are less complicated is that they eliminate 4x kits, but you still have single and 2x kits on the list.
I think the point was more like how the Asus Apex can clock higher than any 4 DIMM board can.
And it's a single piece of hardware that goes into its slot. NVMe drives aren't made of 2 or 3 parts that each have to be carefully installed into a correct place on the motherboard. They also don't immediately reduce each other's speed or outright refuse to work because of an incompatibility lottery, even though they left the same production line minutes apart.
One single piece. Seems like a great simplification.
Definitely simpler, and there's the promise of better speeds with lower latency, but that isn't here today.

There are simply way too many limitations to CAMM2 as it stands today for this to be a good time for desktop to shift. The desktop CAMM2 modules we've seen are limited to 16 memory chips, which means no capacity higher than 64GB. 128GB is the maximum CAMM2 is rated for, which likely requires the largest module size, 78mm x 68mm (this would take up more space than 4 DIMM slots). The only high-clocked (7800) one shown had to be actively cooled and have E-cores disabled for stability. There's also the question of cooling the 4 chips on the back of the module, which has not yet been addressed.
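For reference, the capacity figures follow from simple chip math; the sketch below assumes 32Gbit dies as the largest DDR5 density in common use (the die-density figure is my assumption, not something stated in the thread):

```python
# Module capacity = number of DRAM packages x per-die density (Gbit) / 8.

def module_capacity_gb(chips: int, die_gbit: int) -> int:
    """Return module capacity in GB for a given chip count and die density."""
    return chips * die_gbit // 8

print(module_capacity_gb(16, 32))   # 16 chips x 32Gbit = 64GB (single-sided module)
print(module_capacity_gb(32, 32))   # 32 chips x 32Gbit = 128GB (CAMM2's rated maximum)
print(module_capacity_gb(16, 24))   # with 24Gbit dies the same 16 chips give 48GB
```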
 

35below0

Commendable
Jan 3, 2024
1,484
623
1,590
There are simply way too many limitations to CAMM2 as it stands today for this to be a good time for desktop to shift. The desktop CAMM2 modules we've seen are limited to 16 memory chips, which means no capacity higher than 64GB. 128GB is the maximum CAMM2 is rated for, which likely requires the largest module size, 78mm x 68mm (this would take up more space than 4 DIMM slots). The only high-clocked (7800) one shown had to be actively cooled and have E-cores disabled for stability. There's also the question of cooling the 4 chips on the back of the module, which has not yet been addressed.
That's a fair assessment. Not a bad start, though.

Given that the majority of home PCs today run 16 or 32GB of RAM, a maximum of 64GB isn't so limiting. 32GB would be enough today, and if, in theory, people started switching to this new type of memory in the next 2-4 years, 32 to 64GB would still be enough. I expect improvements in CAMM2 in the next 2-4 years as well.

As for 5600 MHz, or say 6000 MHz, without cooling, those numbers are still OK today, but too close to the limit. DDR5 goes above 7800 and doesn't need cooling (ominously, we have seen DDR fan coolers, so who knows...), so there's certainly a need for improvement.
Disabling E-cores is, I think, a non-starter for mainstream adoption. I don't think anyone but enthusiasts can be asked to jump through such hoops, and it goes against the idea of making things simpler.

Those would be the biggest hurdles. Cooling and stability.
For me personally, just the simplification would be enough. I wouldn't need this standard to be faster or lower latency, though DDR5 latencies are really getting up there: CL30 is OK, but CL48 is high. Higher-capacity sticks also run slower, which somewhat negates their advantage over CAMM2.

Today, it's too limiting. I'm looking forward to continued improvement.
 
  • Like
Reactions: thestryker