Nvidia RTX 50 mobile Device IDs have been leaked — Flagship RTX 5090 mobile rumored to sport the GB203 GPU

The RTX 50 mobile family features GPUs from the RTX 5090M down to the RTX 5050M.
It would be great if they actually called them that.

Based on previous rumors, the RTX 50 mobile family will see no increase in memory capacity compared to the last generation. In fact, as per leaks, Blackwell on mobile will offer just 16 GB of VRAM at most, so enthusiasts will need to consider their choices carefully.
Not according to MLID: 24 GB for the RTX 5090(M), using 3 GB GDDR7 modules.
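If the 3 GB-module rumor is right, the capacity arithmetic is simple. A quick sketch, assuming a 256-bit memory bus and 32-bit-wide GDDR7 modules (both assumptions based on the rumor, not confirmed specs):

```python
# Sketch: how a 24 GB config could arise from 3 GB GDDR7 modules.
# Assumed values (not confirmed specs): 256-bit bus, 32-bit-wide modules.
bus_width_bits = 256
module_width_bits = 32
module_capacity_gb = 3  # rumored 3 GB (24 Gb) GDDR7 modules

modules = bus_width_bits // module_width_bits  # 8 modules fill the bus
total_vram_gb = modules * module_capacity_gb   # 8 x 3 GB = 24 GB
print(modules, "modules,", total_vram_gb, "GB total")
```

With today's 2 GB modules the same bus tops out at 16 GB, which is why the 3 GB parts are the whole story here.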

 
Can someone explain why all GPU makers are so stingy with offering RAM? Is there a limit to how much RAM is needed for a 1080p screen, for example, no matter how many textures are used?
 
From my limited experience, more VRAM is needed at higher resolutions, so 1440p and 4K. My speculation on why they limit the VRAM: cost, sure, but also to segment the different tiers more sharply, so the higher-end cards are not as capped. It's the same thing with memory bandwidth. A 4060 8GB could have 500 GB/s of bandwidth, and I'm sure that would help it compete with a 4070 12GB also at 500 GB/s. But then the price increase for the 4070 wouldn't be as legitimate.
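On the 1080p question: render targets do scale with pixel count, but they're only part of VRAM use; textures usually dominate and scale with quality settings, not screen resolution, so there's no hard ceiling at 1080p. A rough sketch of just the framebuffer side, assuming 4 bytes per pixel (RGBA8) and three buffers per frame (illustrative numbers; real engines vary widely):

```python
# Rough framebuffer math: why higher resolutions need more VRAM.
# Assumptions (illustrative, not from any real engine): 4 bytes per
# pixel and 3 render targets in flight.
def framebuffer_mb(width, height, bytes_per_pixel=4, buffers=3):
    """Return the framebuffer footprint in MiB for the given resolution."""
    return width * height * bytes_per_pixel * buffers / 1024**2

resolutions = {"1080p": (1920, 1080), "1440p": (2560, 1440), "4K": (3840, 2160)}
for name, (w, h) in resolutions.items():
    print(name, round(framebuffer_mb(w, h), 1), "MiB")
```

Even at 4K that's under 100 MiB, which is why texture pools, not framebuffers, are what actually fill an 8 GB card.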
 
The increased cost isn't so much the memory; it's the GPU die area for the memory controller. A 128-bit bus is a lot cheaper than a 192-, 256-, or 384-bit bus. Silicon wafers are a fixed size and cost, so the larger the GPU, the fewer dies you get per wafer. A larger die also means lower yields, which further increases cost.
So the answer is to keep bandwidth the same and increase the DRAM package size, or to increase the size of the GPU, which means another GPU design. So they just segment, as Heiro says, to meet demand with the minimal number of GPU designs.
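The wafer-economics argument can be put in rough numbers. A toy sketch using the classic dies-per-wafer approximation and a Poisson yield model, with an assumed 300 mm wafer and 0.1 defects/cm² (illustrative values, not foundry data):

```python
import math

# Toy model of why bigger dies cost disproportionately more.
# Assumptions (illustrative only): 300 mm wafer, 0.1 defects/cm^2,
# Poisson yield model.

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    # Classic approximation: wafer area / die area, minus an edge-loss term.
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def yield_rate(die_area_mm2, defects_per_cm2=0.1):
    # Poisson model: larger dies are more likely to catch a defect.
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100)

for area in (150, 300, 600):  # small, mid, and large die in mm^2
    good = dies_per_wafer(area) * yield_rate(area)
    print(f"{area} mm^2: {dies_per_wafer(area)} candidates, "
          f"yield {yield_rate(area):.0%}, ~{int(good)} good dies")
```

Doubling die area roughly halves the candidate dies per wafer *and* cuts yield, so good dies per wafer fall much faster than linearly — which is exactly why a wider memory controller (more die area) is an expensive way to add bandwidth.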

The move to chiplets should alleviate some of these problems, though it doesn't look like Nvidia has made that move on the consumer side yet. AMD has, and it works pretty well.
 
Surely they could make two versions of any one GPU: one with a lower amount of RAM and one with more (1.5x or 2x), giving us far more choices depending on our specific needs and budget. Instead, they pair lower-end GPUs with barely enough RAM and force you to buy higher-end models if you want more. The price of a product (graphics cards included) doesn't directly track its cost; supply, demand, marketing plans and policies, and even greed are far bigger factors than cost. As we all know, graphics cards are overpriced, especially the high-end models.