This reminds me of the famous quote from none other than Bill Gates himself, "Who's going to need more than 640K of memory?"
The actual quote is supposedly:
"640K ought to be enough for anybody."
However, he apparently denies ever saying it:
In 1981, when the IBM PC was introduced, Bill Gates supposedly said that 640KB of memory "ought to be enough for anybody." The quote has followed him through the years, despite a lack of solid evidence that he actually said it.
www.computerworld.com
History repeating itself all over again.
There are some key details about that 640k quote and about this situation that you seem to be missing. So, please allow me to spell it out for you.
Regarding the "640k" quote, I'd heard this occurred when they were doing the memory layout for MS DOS. A key point is that
the CPU had already been designed, and had a hard limit of just over 1 MB, due to the way addressing worked on the 8086. So, what they were deciding was how much of that address range would be available for normal programs and data. The other areas were reserved for BIOS and memory-mapped devices.
So, no matter what they had decided, there's no way they could've given programs the full 1 MB. Looking back, 640 kB sounds ridiculously small - but, put in context, it was still the majority of the possible address space - and PCs of that era usually shipped with far less RAM anyway, because memory was expensive.
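To make the numbers concrete, here's a small, self-contained C sketch (purely illustrative, not anything from MS-DOS itself) of how real-mode addresses were formed on the 8086: a 16-bit segment shifted left by four bits plus a 16-bit offset, which is why the address space topped out at roughly 1 MB and why 640 kB (segment 0xA000) marked the top of conventional memory.

#include <stdio.h>
#include <stdint.h>

/* Illustration only: real-mode physical address = (segment << 4) + offset.
 * The nominal maximum, FFFF:FFFF, is just over 1 MB, but the 8086 only had
 * 20 address lines, so addresses wrapped at 1 MB; the sliver above it (the
 * HMA) only became reachable on the 80286 and later. */
static uint32_t phys_addr(uint16_t segment, uint16_t offset)
{
    return ((uint32_t)segment << 4) + offset;
}

int main(void)
{
    /* 0xA000:0000 = 0xA0000 = 640 kB: where conventional memory ends and
     * the area reserved for video memory, adapter ROMs and the BIOS begins. */
    printf("Top of conventional memory: 0x%05X (%u kB)\n",
           (unsigned)phys_addr(0xA000, 0),
           (unsigned)(phys_addr(0xA000, 0) / 1024));
    printf("Highest nominal address:    0x%05X (%u kB)\n",
           (unsigned)phys_addr(0xFFFF, 0xFFFF),
           (unsigned)(phys_addr(0xFFFF, 0xFFFF) / 1024));
    return 0;
}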
Given the design of the 8086 ISA, there was no way around having some limit below 1 MB. They would've known that fundamental CPU changes would be required to break the 1 MB barrier, at which point they probably assumed you'd just design a new operating system. And that's exactly what happened: Windows was easily able to surpass the 640k limit once CPUs like the 80286 and 80386 launched. For pure DOS programs, there were so-called "DOS extenders" that let them access memory above 1 MB, after jumping through some hoops.
This kind of thinking on the part of the Linux kernel maintainers is typical of the general "we don't need it yet" attitude you see all over.
The kernel maintainers left the door open to adding it when needed. They didn't say "never", just "not yet".
The reason they said "not yet" is that each supported CPU core carries a real resource cost in the size of kernel data structures. Essentially, raising the limit bumps into some minor scalability problems in the kernel. So increasing the limit isn't free, and it makes sense not to do it prematurely.
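For a rough sense of where that cost comes from, here's a minimal, self-contained C sketch - not actual kernel code, just loosely modeled on the idea of a bitmap with one bit per possible CPU, sized by the compile-time limit (the MASK_BYTES macro and the limits chosen are purely illustrative):

#include <stdio.h>

/* Sketch only, not Linux source: a CPU mask is a bitmap with one bit per
 * *possible* CPU, sized at compile time by the configured limit. Every copy
 * of such a mask (affinity masks, scheduler bookkeeping, etc.) pays for the
 * configured maximum, whether or not those CPUs actually exist. */
#define BITS_PER_LONG (8 * sizeof(unsigned long))
#define MASK_BYTES(nr_cpus) \
    ((((nr_cpus) + BITS_PER_LONG - 1) / BITS_PER_LONG) * sizeof(unsigned long))

int main(void)
{
    const unsigned int limits[] = { 64, 256, 1024, 8192 };

    for (size_t i = 0; i < sizeof(limits) / sizeof(limits[0]); i++)
        printf("CPU limit %4u -> %4zu bytes per mask\n",
               limits[i], MASK_BYTES(limits[i]));
    return 0;
}

Multiply that by every place the kernel keeps such a mask or a per-CPU array, and you can see why the compile-time ceiling only gets raised when real hardware needs it.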
However, it's easily done when there is a reason to do it. In that sense, it's definitely not like the MS-DOS case, where going past 640k basically required a bunch of OS-level and application code changes - although DOS's limit was really just down to how primitive CPUs were at the time.