News New desktop PC chips now support an incredible 256GB of memory — ASRock and MSI update DDR5 support on Intel Z790 and AMD X670E motherboards

Status
Not open for further replies.

brandonjclark

Distinguished
Dec 15, 2008
I'm not surprised.

It stands to reason that if they supported 192 GB, then 256 should be possible. The ultimate limitation is usually how many physical address bits the memory controller exposes, and a controller that can reach 192 GB can necessarily also reach 256 GB, because addressing 192 GB already requires a 38-bit space and 2^38 bytes is 256 GB.
Because to reach 192 you had to already support its next power-of-two amount, 256?
 
  • Like
Reactions: bit_user
Because to reach 192 you had to already support its next power-of-two amount, 256?
Correct. The memory address space for the chipset has to be at least 38-bit in order to support either 192GB or 256GB. That's probably the limit, though. I strongly doubt anyone would do a 39-bit address space, with 40-bit being the next likely step. Not that it couldn't be done... but it would be odd. (I'll show myself out now. :))
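
A quick back-of-the-envelope check of that arithmetic, sketched in Python (purely illustrative; the capacities are just the ones discussed above):

```python
import math

GB = 1024**3

def address_bits_needed(capacity_bytes: int) -> int:
    """Smallest number of address bits that can cover the given capacity."""
    return math.ceil(math.log2(capacity_bytes))

for capacity in (128 * GB, 192 * GB, 256 * GB):
    bits = address_bits_needed(capacity)
    print(f"{capacity // GB:>3} GB needs {bits} address bits; "
          f"a {bits}-bit space tops out at {(1 << bits) // GB} GB")

# 128 GB needs 37 address bits; a 37-bit space tops out at 128 GB
# 192 GB needs 38 address bits; a 38-bit space tops out at 256 GB
# 256 GB needs 38 address bits; a 38-bit space tops out at 256 GB
```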
 

bit_user

Titan
Ambassador
I strongly doubt anyone would do a 39-bit address space, with 40-bit being the next likely step.
It should be possible to discover this, but I'm not sure where to look. There's no reason why 39-bit would be strange... it really just has to do with how much memory Intel thought they wanted to be able to support, on the platform. It's only datapaths that you'd really like to be a nice power of 2.

I'll just say I'm glad we finally pushed past 128 GB, not that I need even that much. However, that has been the limit on mainstream boards for a while, IIRC.
 
It should be possible to discover this, but I'm not sure where to look. There's no reason why 39-bit would be strange... it really just has to do with how much memory Intel thought they wanted to be able to support, on the platform. It's only datapaths that you'd really like to be a nice power of 2.

I'll just say I'm glad we finally pushed past 128 GB, not that I need even that much. However, that has been the limit on mainstream boards for a while, IIRC.
Mostly 39-bit would be strange because it's just one bit more than the presumably current 38-bit space. (Actually, it might already be 40-bit, just because that seems a nice stepping stone. Whatever.)

I think if a company is going to rework things to support more RAM, it would do more than a single-bit step, is all. So it's not that it can't be done, but we're not yet at the stage where generational improvements are so small that adding a single bit to an interface or address space is the right approach. In another ten or twenty years, maybe that starts to become the norm.
 

husker

Distinguished
Oct 2, 2009
Someday, all this memory will come in handy when we have a fully resident AI incorporated into games, which lives in a 128 GB memory space and uses, say, the first 32 cores in a run-of-the-mill 64-core gaming CPU. Let's go!
 
  • Like
Reactions: Order 66

bit_user

Titan
Ambassador
Mostly 39-bit would be strange because it's just one bit more than the presumably current 38-bit space. (Actually, it might already be 40-bit, just because that seems a nice stepping stone. Whatever.)
I'm telling you: nothing is gained by using "nice, round numbers". You're too used to thinking of bits in arithmetic and datapaths. Addressing works differently.

I think if a company is going to rework things to support more RAM, it would do more than a single-bit step, is all.
It needn't be a "rework" - it's just an aspect of designing their memory controller and socket specification. I believe each bit has a cost.

I don't know enough about DDR5 to understand what implications it might have at a physical or timing level; however, I did try looking at the System Memory Interface definition in Intel's datasheets. They do reference multiplexing the address, and I could see how selecting from more banks might add latency.

Not only that, but they have market segmentation to consider.
 

atomicWAR

Glorious
Ambassador
I was happy to see this. I had wanted 128GB via two channels in my wife's and my respective rigs, but it was a no-go when I built them. Maybe when we upgrade to Zen 5 (or ideally Zen 6, if it does indeed hit AM5), as I already frequently hit 32GB+ in my everyday usage, though when gaming I obviously need to free up some resources. In an ideal world I could have my billion tabs open, watching a show on one monitor and gaming on the other. My wife maybe less so, as she isn't the resource hog I am lol.

At the end of the day, this is good news for everyone, even if it was a pretty much expected lag between when I and others bought AM5 and when it supported 256GB. They did the same doubling on AM4, from 64GB to 128GB, sometime into its lifecycle.
 
  • Like
Reactions: Order 66

newtechldtech

Respectable
Sep 21, 2022
Hopefully some motherboard makers will use only 2 DIMMs and free up more space on the motherboard after this ...

Also, this is good news for ITX builds ...
 
I'm telling you: nothing is gained by using "nice, round numbers". You're too used to thinking of bits in arithmetic and datapaths. Addressing works differently.

It needn't be a "rework" - it's just an aspect of designing their memory controller and socket specification. I believe each bit has a cost.

I don't know enough about DDR5 to understand what implications it might have at a physical or timing level; however, I did try looking at the System Memory Interface definition in Intel's datasheets. They do reference multiplexing the address, and I could see how selecting from more banks might add latency.

Not only that, but they have market segmentation to consider.
Of COURSE the engineers creating this stuff can work with any arbitrary bit width. They can decide, during the design phase, just how much cache a chip has, how wide the memory interface is, how big the address space is, how much memory you can support, and a bunch of other stuff. They don't need to be beholden to any particular value, powers of two, or anything else. But that doesn't tell us where the actual limit is right now, and that's what I want to know.

This is in software as well as hardware. I know Windows 64-bit had a 48-bit physical address space limit for a long time, so it was restricted to a maximum of 256TB of RAM, no matter what you were doing (and I think non-Server was 40-bit and 1TB, maybe). The remaining 16 bits of the 64-bit address space were simply not supported or used. I think that may have changed, but changing software is obviously easier than changing hardware. And software has to deal with the physical constraints as well, which is why often we get multiples of eight bits and powers of two. If you have a 37-bit memory space (128GB), there's a decent chance that software somewhere is going to store that as a 40-bit, 48-bit, or even 64-bit value.
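
The powers of two quoted here are easy to verify; a throwaway Python snippet (purely illustrative) maps each address width mentioned above to the capacity it covers:

```python
# Purely illustrative: capacity covered by each address width mentioned above.
GB, TB = 1024**3, 1024**4

for bits in (37, 38, 40, 48):
    size = 1 << bits  # 2**bits bytes
    label = f"{size // TB} TB" if size >= TB else f"{size // GB} GB"
    print(f"{bits}-bit address space -> {label}")

# 37-bit address space -> 128 GB
# 38-bit address space -> 256 GB
# 40-bit address space -> 1 TB
# 48-bit address space -> 256 TB
```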

Anyway, Z490/Z590 motherboards and CPUs had a maximum supported capacity of 128GB, and Z690/Z790 CPUs and motherboards also launched with up to 128GB of RAM support. Potentially, there was no inherent need to rework every part of the memory controller. Intel could take the existing building blocks and then decide what needs to be changed and upgraded.

But the previous generation was DDR4 only, and the new stuff is DDR4 or DDR5. That necessitated a change. Beyond that, Intel had to decide the maximum amount of memory to support. At launch, Z690/Z790 both stayed at 128GB, but then 48GB DIMMs appeared and BIOS updates 'unlocked' 192GB support. Was there some change made with the LGA1700 platform that allowed them to extend the maximum RAM support, or was there already a higher limit that just hadn't been exposed before?

The same thing happened with Z390 as well. Z170/Z270/Z370/Z390 all originally topped out at 64GB. Then with Z390, a bunch of motherboards added support for 32GB modules and 128GB total, even though that wasn't originally on the spec sheet. Certainly in that case, a firmware update was all that was needed, but was it a "hack" or was there already physical support that had just been locked out before? And was performance compromised at all with 32GB DIMMs instead of 16GB DIMMs?

I've been around long enough to know that a lot of the time, some of these "limitations" are not actually physical limits but rather just a consequence of how things were designed in software/firmware. It's just like later 9th Gen chips being able to run in earlier LGA1151 motherboards (via hacks), even though Intel intentionally changed some pinouts to make it "not supported." Or like some Z390 boards getting a firmware update that allowed them to use 32GB DIMMs and 128GB of total RAM, and Z690/Z790 getting support for 48GB DIMMs.

So, what I'm saying is that I don't know where the actual limit on RAM support comes from. If anyone has actually done motherboard work that could speak to the "max supported RAM" aspect, I'd like to know a bit more about the inner workings. What is the true physical limit, assuming for example that we were to have 64GB, 96GB, and even 128GB DDR5 DIMMs in the future? Are there firmware changes that could add support, or is there an actual physical limit that would prevent it? As for me, these days, I just plug sticks of RAM into my PCs and expect them to work, because I'm never at the bleeding edge with regards to system RAM. 🤷‍♂️

(Doing more than 192GB in a consumer board with four slots is going to be tough, though — at least for now. I only see ECC server memory with 64GB per DIMM on Newegg. 64GB sticks for consumer platforms will come, at some point, but probably only for DDR5 or later platforms, just like the 48GB DIMMs.)
 
  • Like
Reactions: thisisaname

bit_user

Titan
Ambassador
Hopefully some motherboard makers will use only 2 DIMMs and free up more space on the motherboard after this ...
I honestly doubt it'll change anything there. A lot of people are still going to start with a pair of lower-capacity DIMMs, and then want to add more capacity via another pair. They ascribe value to 4 DIMMs, probably unaware of the performance tradeoff it could have.
 

bit_user

Titan
Ambassador
I wish we could add on VRAM like RAM.
Maybe the new CAMM2 standard could work for GDDR. Even if it did, I doubt the GPU chipmakers would allow it. They want to reserve the high-capacity models for workstations and servers, where they can charge quite a bit more.

Plus, if you need more graphics memory, that's a great opportunity for them to sell you a whole new GPU. If you can just swap out the memory, they get nothing.
 

bit_user

Titan
Ambassador
And software has to deal with the physical constraints as well, which is why often we get multiples of eight bits and powers of two. If you have a 37-bit memory space (128GB), there's a decent chance that software somewhere is going to store that as a 40-bit, 48-bit, or even 64-bit value.
I'm pretty sure software simply populates the page table with 64-bit addresses, but the hardware won't use all of those bits.
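
As a rough sketch of that idea, assuming x86-64-style 4 KiB paging and a made-up 39-bit implemented physical-address width (the real width varies by CPU), the hardware effectively masks the 64-bit entry down to the address bits it implements:

```python
# Sketch only: a 64-bit page-table entry, but the hardware implements fewer
# physical-address bits. MAXPHYADDR = 39 is a made-up example value.
MAXPHYADDR = 39
PAGE_SHIFT = 12  # 4 KiB pages

def phys_frame_from_pte(pte: int) -> int:
    """Physical frame address the hardware would actually decode."""
    frame_mask = ((1 << MAXPHYADDR) - 1) & ~((1 << PAGE_SHIFT) - 1)
    return pte & frame_mask

# The OS writes a full 64-bit entry; the high bits hold flags (e.g. NX) or are
# reserved, and never reach the memory controller as address bits.
pte = 0x8000_0012_3456_7067  # hypothetical entry: NX bit + frame bits + low flags
print(hex(phys_frame_from_pte(pte)))  # -> 0x1234567000
```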

(Doing more than 192GB in a consumer board with four slots is going to be tough, though — at least for now. I only see ECC server memory with 64GB per DIMM on Newegg.
The key point about those isn't ECC, but rather that they're registered. That enables them to have more ranks (presumably 4). I expect it'll take a while before DIMMs with these new Micron chips reach the market.
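
To put rough numbers on why higher-density chips matter for 64GB unbuffered DIMMs (and thus 256GB across four slots), here's an illustrative capacity calculation; it assumes x8 chips, so 8 per rank on a 64-bit bus, and ignores the extra ECC die:

```python
# Illustrative: unbuffered DIMM capacity = ranks * chips per rank * die density.
# Assumes x8 chips (8 per rank on a 64-bit bus) and ignores the extra ECC die.
def udimm_capacity_gb(ranks: int, die_gbit: int, chips_per_rank: int = 8) -> int:
    return ranks * chips_per_rank * die_gbit // 8  # Gbit -> GB

print(udimm_capacity_gb(ranks=2, die_gbit=16))  # 32 GB
print(udimm_capacity_gb(ranks=2, die_gbit=24))  # 48 GB
print(udimm_capacity_gb(ranks=2, die_gbit=32))  # 64 GB -> 4 slots x 64 GB = 256 GB
```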
 
I'm trying REALLY hard to think of a possible use-case for 256GB of RAM on a mainstream desktop board. I can't think of any software that would need that much RAM but wouldn't be pretty hampered by even an R9-7950X. If I had a program that could use that much RAM, I can guarantee you that I would be running it on a Threadripper platform.
 

USAFRet

Titan
Moderator
I'm trying REALLY hard to think of a possible use-case for 256GB of RAM on a mainstream desktop board. I can't think of any software that would need that much RAM but wouldn't be pretty hampered by even an R9-7950X. If I had a program that could use that much RAM, I can guarantee you that I would be running it on a Threadripper platform.
Not yet, but give it a few years.

Some of my CAD models were choking at 32GB, so I had to move up to 64GB.
Same software (Rhino3D), but more complex models. And the rest of the hardware is the same.
 

umeng2002_2

Respectable
Jan 10, 2022
No average user uses CAD, or at least not heavy, real CAD. If you're making money as a 3-D artist or CAD drafter, you should already be on Threadripper or a terminal.
 

USAFRet

Titan
Moderator
No average user uses CAD, or at least not heavy, real CAD. If you're making money as a 3-D artist or CAD drafter, you should already be on Threadripper or a terminal.
My CAD, or video/photo work, is not my paid gig.
But it does reach beyond what 16 or 32GB of RAM can handle.

What keeps the roof over my head is a much larger system than "Threadripper".
 

bit_user

Titan
Ambassador
I'm trying REALLY hard to think of a possible use-case for 256GB of RAM on a mainstream desktop board.
Until generative AI became popular, I'd have said the same thing. Dealing with so much data is usually something you do on a big workstation or server. In those contexts, certain engineering applications are quite data-intensive. Lots of VMs can also burn through RAM quickly.

On my desktop, I tend to use containers - not VMs. They tend to be much lighter-weight, in terms of memory & storage footprint, while still providing most of the benefits of VMs.

I can't think of any software that would need that much RAM but wouldn't be pretty hampered by even an R9-7950X.
Right. I usually point to memory bandwidth as the main argument why we can switch to a tiered memory architecture. To use so much memory capacity, you probably want higher bandwidth to access it quicker. So, either you don't need fast access to it all (in which case, much of it can be CXL-connected), or you ought to have a much wider data bus, like the 256-bit, 512-bit, and even 768-bit memory subsystems on workstations and servers.

Still, if you look at how much memory those big servers can accommodate, even they would take tens of seconds just to access all of their memory, at max capacity.
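
Some illustrative numbers for that point; the configurations below are hypothetical, and the calculation is just capacity divided by peak bandwidth:

```python
# Hypothetical configurations: time to read every byte of RAM once at peak bandwidth.
configs = [
    ("desktop, 2ch DDR5-5600", 256, 89.6),          # capacity GB, bandwidth GB/s
    ("workstation, 8ch DDR5-4800", 2048, 307.2),
    ("2-socket server, 12ch per socket", 16384, 921.6),
]
for name, capacity_gb, bandwidth_gb_s in configs:
    print(f"{name}: {capacity_gb} GB / {bandwidth_gb_s} GB/s "
          f"~= {capacity_gb / bandwidth_gb_s:.1f} s")

# desktop, 2ch DDR5-5600: 256 GB / 89.6 GB/s ~= 2.9 s
# workstation, 8ch DDR5-4800: 2048 GB / 307.2 GB/s ~= 6.7 s
# 2-socket server, 12ch per socket: 16384 GB / 921.6 GB/s ~= 17.8 s
```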

If I had a program that could use that much RAM, I can guarantee you that I would be running it on a Threadripper platform.
I wonder how bottlenecked inferencing an LLM is, on the CPU or GPU. If you're doing GPU inferencing of a model that won't fit in GPU memory, then the bottleneck would be its PCIe 4.0 x16 interface. That only allows for 32 GB/s to be read in, meaning it wouldn't matter if the host CPU had more than a 128-bit memory interface. If you go multi-GPU, then you start to hit the limits of the memory controller, but the mere fact that you now need 2x PCIe 4.0 x16 slots would've already pushed you onto a TR or Xeon W platform.
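
Rough numbers for that bottleneck, treating token rate as bounded by how fast the weights can be streamed over the link; the 70 GB model size is an assumption for illustration, not a figure from the article:

```python
# If every weight crosses the link once per generated token, token rate is
# bounded by link bandwidth / model size. 70 GB of weights is an assumption
# (roughly a 70B-parameter model at 8 bits per weight).
weights_gb = 70.0
links = [
    ("PCIe 4.0 x16", 32.0),            # GB/s
    ("dual-channel DDR5-5600", 89.6),
    ("GPU VRAM at ~1 TB/s", 1000.0),
]
for name, bw_gb_s in links:
    print(f"{name}: at best ~{bw_gb_s / weights_gb:.1f} tokens/s")

# PCIe 4.0 x16: at best ~0.5 tokens/s
# dual-channel DDR5-5600: at best ~1.3 tokens/s
# GPU VRAM at ~1 TB/s: at best ~14.3 tokens/s
```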
 

bit_user

Titan
Ambassador
Some of my CAD models were choking at 32GB, so I had to move up to 64GB.
I'm looking at building a PC with DDR5, and 64 GB is what I'm targeting. Not because I think I need it, but because 32 GB is the smallest ECC DDR5 DIMM you can get that's dual-rank. So, 2x of those pushes me to 64 GB. The reason to prefer dual-rank is that it's a little bit faster than single-rank.

Today, I'm okay with 16 GB. Were it not for the constraint I mentioned above, my preference would be to go with 32 GB. I built my old machine with 16 GB, back when I thought 8 sounded like a lot. I only went with so much because it uses quad-channel memory.

On this PC, I mainly do software development and a bit of web browsing. For good build times, a heuristic I've discovered is 1-2 GB per thread. So, that'd put me right in the range of 32-64 GB.
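
That rule of thumb is easy to turn into numbers; the thread counts below are just examples:

```python
# bit_user's heuristic: roughly 1-2 GB of RAM per build thread.
def build_ram_range_gb(threads: int, low: int = 1, high: int = 2) -> tuple[int, int]:
    return threads * low, threads * high

for threads in (16, 24, 32):  # example thread counts
    lo, hi = build_ram_range_gb(threads)
    print(f"{threads} threads -> {lo}-{hi} GB")

# 16 threads -> 16-32 GB
# 24 threads -> 24-48 GB
# 32 threads -> 32-64 GB
```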

No average user uses CAD, or at least not heavy, real CAD. If you're making money as a 3-D artist or CAD drafter, you should already be on Threadripper or a terminal.
But mainstream CPUs are so powerful, you probably don't need a full workstation to run them!

I had been on a workstation (again, mainly for software development), but modern CPUs have so many cores and DDR5 provides so much bandwidth that I just don't need to pay the ever-increasing premium or deal with the power & cooling issues of a full workstation machine.
 