I'm telling you: nothing is gained by using "nice, round numbers". You're too used to thinking of bits in arithmetic and datapaths. Addressing works differently.
It needn't be a "rework" - it's just an aspect of designing their memory controller and socket specification. I believe each bit has a cost.
I don't know enough about DDR5 to understand what implications it might have at a physical or timing level, but I did try looking at the System Memory Interface definition in Intel's datasheets. They do reference multiplexing the address, and I could see how selecting from more banks might add latency.
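To make the "every address bit has a cost" point concrete: a memory controller has to carve a flat physical address into channel/rank/bank/row/column coordinates, and every field is a set of wires, decoders, and muxes. Here's a toy Python sketch with completely made-up field widths (not Intel's real mapping, which I don't know):

```python
# Illustrative only: a made-up address mapping, not Intel's actual one.
# A memory controller splits a flat physical address into DRAM coordinates;
# every extra bank/rank/row bit it must decode and route has a hardware cost.

FIELDS = [           # (name, width in bits), lowest-order field first
    ("byte",    3),  # burst alignment within a 64-bit word (illustrative)
    ("column", 10),
    ("bank",    5),  # more banks -> a wider mux when selecting one
    ("rank",    1),
    ("row",    17),
    ("channel", 1),
]

def decode(addr: int) -> dict:
    """Split a physical address into DRAM fields, low bits first."""
    out = {}
    for name, width in FIELDS:
        out[name] = addr & ((1 << width) - 1)
        addr >>= width
    return out

total_bits = sum(w for _, w in FIELDS)
print(f"{total_bits}-bit map -> {2**total_bits / 2**30:.0f} GiB addressable")
print(decode(0x1_2345_6789))
```

Note that those made-up widths happen to sum to 37 bits, i.e. 128GB: the total capacity is just whatever the field widths add up to, with no reason for it to be a "nice, round" number of bits.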
Not only that, but they have market segmentation to consider.
Of COURSE the engineers creating this stuff can work with any arbitrary bit width. They can decide, during the design phase, just how much cache a chip has, how wide the memory interface is, how big the address space is, how much memory you can support, and a bunch of other stuff. They don't need to be beholden to any particular value, powers of two, or anything else. But that doesn't tell us where the actual limit is right now, and that's what I want to know.
This applies to software as well as hardware. I know Windows 64-bit had a 48-bit physical address space limit for a long time, so it was restricted to a maximum of 256TB of RAM, no matter what you were doing (and I think non-Server was 40-bit and 1TB, maybe). The remaining 16 bits of the 64-bit address space were simply not supported or used. I think that may have changed, but changing software is obviously easier than changing hardware. And software has to deal with the physical constraints as well, which is why we often get multiples of eight bits and powers of two. If you have a 37-bit memory space (128GB), there's a decent chance that software somewhere is going to store that as a 40-bit, 48-bit, or even 64-bit value.
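The arithmetic there is just powers of two; a quick sanity check:

```python
# Capacity implied by an address width: 2**bits bytes.
def capacity(bits: int) -> str:
    units = ["B", "KB", "MB", "GB", "TB", "PB", "EB"]
    size = 1 << bits
    i = 0
    while size >= 1024 and i < len(units) - 1:
        size //= 1024
        i += 1
    return f"{size}{units[i]}"

for bits in (37, 40, 48, 64):
    print(f"{bits}-bit address space -> {capacity(bits)}")
# 37 -> 128GB, 40 -> 1TB, 48 -> 256TB, 64 -> 16EB
```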
Anyway, Z490/Z590 motherboards and CPUs had maximum support for 128GB of RAM, and Z690/Z790 CPUs and motherboards also topped out at 128GB. Presumably, there was no inherent need to rework every part of the memory controller; Intel could take the existing building blocks and then decide what needed to be changed and upgraded.
But the previous generation was DDR4 only, and the new stuff is DDR4 or DDR5. That necessitated a change. Beyond that, Intel had to decide the maximum amount of memory to support. At launch, Z690/Z790 both stayed at 128GB, but then 48GB DIMMs appeared and BIOS updates 'unlocked' 192GB support. Was there some change made with the LGA1700 platform that allowed them to extend the maximum RAM support, or was there already a higher limit that just hadn't been exposed before?
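As an aside, the "weird" 48GB size drops straight out of the DIMM math once you allow non-power-of-two DRAM dies. Rough sketch below; the 24Gb die density is the figure commonly cited for these DIMMs, and the x8/dual-rank geometry is just an illustrative assumption on my part:

```python
# Rough DIMM capacity arithmetic. The 24Gb die density is the commonly
# cited figure for 48GB DDR5 DIMMs; treat the geometry as illustrative.
# (DDR5 splits the 64-bit bus into two 32-bit subchannels, but the total
# data width per DIMM is still 64 bits.)
def dimm_gb(die_gbit: int, die_width: int, ranks: int, bus_width: int = 64) -> float:
    dies_per_rank = bus_width // die_width   # e.g. 64-bit bus / x8 dies = 8 dies
    total_gbit = die_gbit * dies_per_rank * ranks
    return total_gbit / 8                    # gigabits -> gigabytes

print(dimm_gb(die_gbit=16, die_width=8, ranks=2))  # 32.0 GB (power of two)
print(dimm_gb(die_gbit=24, die_width=8, ranks=2))  # 48.0 GB (the 'odd' size)
print(dimm_gb(die_gbit=32, die_width=8, ranks=2))  # 64.0 GB (future sticks?)
# Four of the 48GB sticks -> the 192GB that BIOS updates unlocked.
```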
The same thing happened with Z390. Z170/Z270/Z370/Z390 all originally topped out at 64GB. Then with Z390, a bunch of motherboards added support for 32GB modules and 128GB total, even though that wasn't originally on the spec sheet. Certainly in that case, a firmware update was all that was needed, but was it a "hack" or was there already physical support that had just been locked out before? And was performance compromised at all with 32GB DIMMs instead of 16GB DIMMs?
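If you want to see what the firmware itself claims, SMBIOS exposes a "Maximum Capacity" field on the Physical Memory Array record, and that's exactly the kind of value a BIOS update can bump without any silicon changing. A quick Linux-only sketch (needs root, and it's only as honest as the firmware):

```python
# Ask SMBIOS what the firmware says the board supports (Linux, needs root).
# dmidecode type 16 is the Physical Memory Array record; its "Maximum
# Capacity" line is firmware-reported, so a BIOS update can change it
# without any hardware change underneath.
import subprocess

out = subprocess.run(
    ["dmidecode", "-t", "16"], capture_output=True, text=True, check=True
).stdout

for line in out.splitlines():
    line = line.strip()
    if line.startswith(("Maximum Capacity", "Number Of Devices")):
        print(line)
# e.g.  Maximum Capacity: 128 GB
#       Number Of Devices: 4
```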
I've been around long enough to know that a lot of the time, these "limitations" are not actually physical limits but rather just a consequence of how things were designed in software/firmware. It's just like later 9th Gen chips being able to run in earlier LGA1151 motherboards (via hacks) despite Intel intentionally changing some pinouts to make it "not supported," or like some Z390 boards getting a firmware update that allowed them to use 32GB DIMMs and 128GB of total RAM, and Z690/Z790 getting support for 48GB DIMMs.
So, what I'm saying is that I don't know where the actual limit on RAM support comes from. If anyone here has actually done motherboard work and can speak to the "max supported RAM" aspect, I'd like to know a bit more about the inner workings. What is the true physical limit, assuming for example that we get 64GB, 96GB, and even 128GB DDR5 DIMMs in the future? Are there firmware changes that could add support, or is there an actual physical limit that would prevent it? As for me, these days, I just plug sticks of RAM into my PCs and expect them to work, because I'm never at the bleeding edge when it comes to system RAM. 🤷‍♂️
(Doing more than 192GB in a consumer board with four slots is going to be tough, though — at least for now. I only see ECC server memory with 64GB per DIMM on Newegg. 64GB sticks for consumer platforms will come, at some point, but probably only for DDR5 or later platforms, just like the 48GB DIMMs.)