News Nvidia gaming GPUs modded with 2X VRAM for AI workloads — RTX 4090D 48GB and RTX 4080 Super 32GB go up for rent at Chinese cloud computing provider

Status
Not open for further replies.

Notton

Commendable
Dec 29, 2023
They need to make GPU RAM modular like system RAM. We keep buying stuff we could easily reuse.
Okay, how are you going to fit it?
 

Evildead_666

Prominent
Jul 21, 2023
And also, giving customers an easy way to increase VRAM instead of making them buy a new product is bad for the bottom line of AMD, NV, and Intel.
Back in the day, when this was possible, they were almost all proprietary, and were not cheap...
I remember getting a memory upgrade for my Matrox Mystique...

Also, the memory type changes every couple of generations.

I could see it happening on high-end pro cards as a memory-doubler thing, though.
 

jlake3

Distinguished
Jul 9, 2014
Okay, how are you going to fit it?
In addition to physical size, how about the trace length and signal integrity problems?

Also, GPU memory controllers don't work like CPU memory controllers, so if it were possible you'd probably end up with something like a single CAMM socket rather than multiple DIMM slots. And because bus widths and the number of memory modules per GPU aren't constant, you'd need a different daughterboard for each configuration.
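To put rough numbers on the configuration problem: each GDDR6/GDDR6X package carries a 32-bit slice of the bus (two 16-bit channels), so the bus width dictates the package count. A minimal sketch; the card names and bus widths below are the public spec values:

```python
# Each GDDR6/GDDR6X package exposes a 32-bit slice of the bus
# (two 16-bit channels), so bus width fixes the package count.
BITS_PER_PACKAGE = 32

cards = {
    "RTX 4060 (128-bit)": 128,
    "RTX 4080 (256-bit)": 256,
    "RTX 4090 (384-bit)": 384,
}

for name, bus_width in cards.items():
    print(f"{name}: {bus_width // BITS_PER_PACKAGE} memory packages")
# -> 4, 8, and 12 packages: one standardized daughterboard
#    can't cover all three layouts.
```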
 

toaste

Distinguished
Jun 18, 2013
Okay, how are you going to fit it?

You could do it with something like CAMM on the back side of the board, but only if you dropped the GDDR speed dramatically. Nobody would like the crippling performance drop the GDDR bandwidth reduction would bring with it.

And also, giving customers an easy way to increase VRAM instead of making them buy a new product is bad for the bottom line of AMD, NV, and Intel.
Yeah, that's not a concern. If it were possible, then Intel or AMD would do it. They're competing for like 2% and 10% of the market, and increased longevity would be massively offset by the piles of money afforded by taking a chunk of Nvidia's pie.

It's not possible because the GPU relies on ever-increasing memory bandwidth to keep up with shader, vertex, and texture processing.

The most extreme GDDR5 overclocking tops out around 7.2 Gbit/s per pin. That pumps the memory clock at a languid 3.6 GHz. Pathetic.

Your GDDR6 GPU is pumping the WCK pin at 8 or 9 GHz to hit 16 or 18 Gb/s. GDDR6X uses PAM4 signaling, so it's "only" pushing around 5 GHz.

The PCB routing is kept physically as short as possible, and arranging it so the traces are all EXACTLY the same length and impedance is critical. You simply cannot get a signal through a PCB at that rate with a mechanical connector in the way.
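For anyone who wants to check the clock arithmetic, here's a minimal sketch. The data rates are typical retail values; as I understand the signaling, GDDR5/GDDR6 move 2 bits per pin per WCK cycle (DDR), while GDDR6X's PAM4 moves 4:

```python
# Per-pin data rate vs. the WCK clock that actually toggles on the PCB.
# GDDR5/GDDR6 are double data rate on WCK (2 bits per cycle per pin);
# GDDR6X is PAM4 plus DDR (2 bits per symbol, 2 symbols per cycle = 4 bits).

def wck_ghz(data_rate_gbps, bits_per_cycle):
    return data_rate_gbps / bits_per_cycle

print(wck_ghz(7.2, 2))   # GDDR5 overclock: 3.6 GHz
print(wck_ghz(16.0, 2))  # GDDR6 16 Gb/s:   8.0 GHz
print(wck_ghz(21.0, 4))  # GDDR6X 21 Gb/s:  5.25 GHz

# Board bandwidth = per-pin rate * bus width / 8 bits per byte
print(21.0 * 384 / 8)    # e.g. a 384-bit GDDR6X card: 1008 GB/s
```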
 
This is where tiered memory comes into play. Intel already has this in Lunar Lake. Consider 16 GB of GDDR7 memory and supplement the rest with CAMM2. This will still carry a performance penalty, but at least you are no longer limited by memory capacity.
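As a rough sketch of that penalty (the bandwidth figures here are illustrative assumptions, not measurements), the effective bandwidth is the traffic-weighted harmonic mean of the two tiers:

```python
# Effective bandwidth of a two-tier memory setup: time per byte adds,
# so the blend is a traffic-weighted harmonic mean of the tiers.
GDDR_BW = 900.0    # GB/s -- assumed fast-tier (GDDR7) bandwidth
CAMM2_BW = 120.0   # GB/s -- assumed slow-tier (CAMM2) bandwidth

def effective_bw(slow_fraction):
    fast_fraction = 1.0 - slow_fraction
    return 1.0 / (fast_fraction / GDDR_BW + slow_fraction / CAMM2_BW)

for f in (0.05, 0.20, 0.50):
    print(f"{f:.0%} of traffic to CAMM2 -> {effective_bw(f):.0f} GB/s")
# Even 20% spillover drops effective bandwidth to roughly 390 GB/s.
```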
 
Feb 7, 2024
This is where tiered memory comes into play. Intel already has this in Lunar Lake. Consider 16 GB of GDDR7 memory and supplement the rest with CAMM2. This will still carry a performance penalty, but at least you are no longer limited by memory capacity.
How is that even remotely worth it? Lots of memory is useless if you can't access it fast enough.
 

Eximo

Titan
Ambassador
A solution for very large data sets is memory-expansion cards via CXL. We aren't quite there yet on the consumer side, but nothing is stopping it at the moment: the PCIe 5.0 backbone exists, so someone would just have to build a chipset and CPU combo with support. CXL also allows system memory or other GPUs to be accessed in the same way (in the future you could install old GPUs with large memory pools just to serve as expansion memory for another, newer GPU).
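For what it's worth, half the plumbing already exists on Linux: a CXL Type 3 memory expander typically surfaces as a CPU-less NUMA node. A minimal sketch to spot one, using the standard sysfs layout (on a machine with no expander it simply lists the regular nodes):

```python
# List NUMA nodes; a node with memory but no CPUs is a candidate
# memory expander (CXL Type 3 devices are exposed this way).
from pathlib import Path

for node in sorted(Path("/sys/devices/system/node").glob("node[0-9]*")):
    mem_total = (node / "meminfo").read_text().splitlines()[0].split()[-2]
    cpulist = (node / "cpulist").read_text().strip()
    kind = "CPU-less (possible expander)" if not cpulist else "regular"
    print(f"{node.name}: {kind}, {mem_total} kB")
```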

The way I see it, people upgrade their GPUs long before the memory is an issue, or they are buying mid-range cards and upgrading more regularly.

If every game required 24GB of VRAM today, that would be one thing. As it sits, most people are fine with 8-12GB for casual use; it's when you start upgrading your monitor and asking more of a card that it becomes a problem. Just keep in mind that you, as a consumer of this website/forum, represent a very narrow class of consumer that is enthusiastic enough to post about hardware rather than just buying a new card when it doesn't play the latest game.
 
Feb 7, 2024
Who said this is for gaming or AI? Productivity applications don't benefit the same way from fast memory, but memory capacity improves their capability.
What kind of workloads benefit from slow graphics memory and ALSO need a powerful graphics accelerator? You're drinking the Kool-Aid.
 
Feb 7, 2024
9
1
15
Meh, I don't see that happening. Those kinds of customers will just buy enterprise GPUs or build clusters to overcome the memory limitation. Time is money.
 