Iver Hicarte

I read or heard somewhere (I can't recall exactly where) that Nvidia can get away with less VRAM on their GPUs compared to their AMD counterparts because they have mastered something related to "lossless image compression". If I'm not mistaken, and from how I understand it, it works a bit like video encoding: the GPU compresses graphics data losslessly so that the VRAM isn't used all that much. I've noticed this with the 6800 XT and the RTX 3080; both cards are very capable and very similar in terms of performance, but the RTX 3080 is "faster" most of the time (it really depends on how the benchmark was done). Is this true?

If this were the case, then the "longevity" that AMD offers with their GPUs because of their higher VRAM isn't really a true selling point, because the 6800 XT and RTX 3080 were tested yet again this year in some video I watched and their performance is still relatively close. Of course the argument will be different for people who play at 1440p and 4K, but yours truly only games at 1080p, so the extra VRAM on AMD GPUs doesn't really buy me "longevity": at 1080p those cards perform the same, and the RTX 3080 with less VRAM performs the same or, in most cases, is faster, and comes with all the marvels like RT and DLSS. So what's the point of having more VRAM on a GPU if there's another one that performs pretty much the same with less VRAM?

I also heard AMD is pretty much a million miles away from mastering this so-called "lossless image compression" that Nvidia has already mastered, or so they say. Correct me if I'm wrong here guys, willing to learn as much as I can. Apologies for the long read.
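
A rough toy sketch of what "lossless compression" of pixel data can mean, just to make the idea concrete (this is my own illustration, not NVIDIA's actual hardware scheme):

```python
# Toy illustration of lossless delta compression on a tile of pixel values.
# Real GPU color compression works on fixed-size tiles in hardware and is far more
# sophisticated; this only shows why a "lossless" scheme is possible at all.

def compress_tile(pixels):
    """Store the first pixel plus small deltas instead of full values."""
    base = pixels[0]
    deltas = [p - base for p in pixels[1:]]
    return base, deltas

def decompress_tile(base, deltas):
    """Reconstruct the exact original pixel values (nothing is lost)."""
    return [base] + [base + d for d in deltas]

tile = [200, 201, 199, 200, 202, 201, 200, 198]    # neighbouring pixels tend to be similar
base, deltas = compress_tile(tile)
assert decompress_tile(base, deltas) == tile        # bit-exact round trip
print(base, deltas)                                 # the small deltas need far fewer bits to store
```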
 

Iver Hicarte

Until we hear from an engineer from NVIDIA themselves, we're not going to know the answer. We can speculate all we want, but what usually happens is that threads like these devolve into another NVIDIA-bashing one.
It's like we're delving into a conspiracy here lol. When you said that this thread could become an "NVIDIA bashing one", I couldn't help but let out a loud laugh :ROFLMAO:
 

Deleted member 2838871

The only explanation I have is that I saw a Steam survey recently that said somewhere around 75% of gamers are still running 1080p.

Really? That is surprising considering 2007 just called and wants its resolution back.

At any rate… maybe that’s why? 8GB might be enough for 1080p?

It sure ain’t for the 4K Ultra I game at… but I’m in the minority according to that survey.
 

Iver Hicarte

I mean the usual answer you'll get for your question is some variation of "NVIDIA is greedy"
You're not wrong, NVIDIA has pretty much been controlling the GPU market. We can go back as far as we want; NVIDIA was the better choice even a decade ago, and AMD has always been second place to them. It's only in the last few years that AMD has been showing some promise. So it's not out of the question that NVIDIA is the puppet master of the GPU market.
 

Iver Hicarte

If the thread does end up devolving into the 23,000th AMD vs. Nvidia flame thread, it *will* be closed.
I didn't really create this thread to pit people into an NVIDIA vs. AMD battle. I just got curious as to why NVIDIA's GPUs have less VRAM than their AMD counterparts and still offer the same if not faster performance. More VRAM typically means more headroom and hence more performance, but that isn't the case when you compare the 6800 XT and the RTX 3080.
 
Well, to throw in an engineering perspective on this, I think it's twofold:
  • With regards to GDDR6X, the problem seems to be that Micron or Samsung, the two manufacturers of memory, didn't have higher densities on their roadmaps soon enough. 1GB modules were all that was available when the GeForce 30 series came out. Maybe NVIDIA was hoping for 2GB modules to come out sooner, or there was something about GDDR6X that appealed to them (some rough capacity numbers in the sketch after this list).

    In addition, it wasn't until November of last year that someone made a 4GB GDDR6 chip, even though that capacity has been part of the standard since its inception.

  • AMD may have had a point with memory controllers and whatnot. NVIDIA still wants to make everything on a single die, while AMD separated the memory interface into its own dies for RDNA 3. The single-die approach may limit the number of memory controllers that can be added to the GPU die, which limits the number of memory chips you can use. In addition, NVIDIA pairs each memory controller with L2 cache, which also eats up a lot of space.

    While there are some strategies that NVIDIA could've used to add more memory, the GTX 970 fiasco was basically the final blow to that.
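
To put rough numbers on the density point above (my own back-of-the-envelope math, not anything official): total VRAM is basically the number of 32-bit memory chips the bus can feed times the per-chip capacity.

```python
# Rough capacity math for the memory situation described above.
# Assumes 32-bit GDDR6/GDDR6X chips; the densities are the 1GB (8Gb) and 2GB (16Gb)
# parts mentioned in this thread.

def vram_gb(bus_width_bits, gb_per_chip):
    chips = bus_width_bits // 32        # one 32-bit chip per 32 bits of bus
    return chips, chips * gb_per_chip   # (chip count, total VRAM in GB)

print(vram_gb(320, 1))   # (10, 10) -> a 320-bit card with 1GB chips tops out at 10GB
print(vram_gb(320, 2))   # (10, 20) -> the same bus with 2GB chips would allow 20GB
print(vram_gb(256, 2))   # (8, 16)  -> a 256-bit card with 2GB chips gets 16GB
```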

You're not wrong, NVIDIA has pretty much been controlling the GPU market. We can go back as far as we want; NVIDIA was the better choice even a decade ago, and AMD has always been second place to them. It's only in the last few years that AMD has been showing some promise. So it's not out of the question that NVIDIA is the puppet master of the GPU market.
And they will continue to be so for the foreseeable future, since they've invested in a lot of technology that the industry wants to use for a long time.

I feel like people sometimes forget that GPUs aren't simply used to play games anymore. It's very likely that data center and HPC have overtaken gaming's revenue in recent years.
 

Iver Hicarte

The only explanation I have is that I saw a Steam survey recently that said somewhere around 75% of gamers are still running 1080p.

Really? That is surprising considering 2007 just called and wants its resolution back.

At any rate… maybe that’s why? 8GB might be enough for 1080p?

It sure ain’t for the 4K Ultra I game at… but I’m in the minority according to that survey.
If a person is not satisfied with the fidelity of native 1080p, there are a lot of workarounds for that: there's software sharpening included with the GPU drivers, you can play around with your monitor's settings if you have a fancy one, and there's resolution scaling if it's present in the game. Modern games still look good at 1080p. So for a person such as myself who games at 1080p, I don't really see any point in climbing up to 1440p or 4K, at least for my use case. But then again, there has to be an option for everyone.
 

Deleted member 2838871

There has to be an option for everyone.

Indeed. I’ve never owned an AMD GPU. My first Nvidia card was a GeForce 3 Ti 200 in 2001… and I’ve stayed with what works for me.

Ironically that system and my new system were/are running AMD processors.

I’ve said it before and I’ll say it again… I don’t care who makes it… as long as it performs.
 

Iver Hicarte

Well, to throw in an engineering perspective on this, I think it's twofold:
  • With regards to GDDR6X, the problem seems to be that Micron or Samsung, the two manufacturers of memory, didn't have higher densities on their roadmaps soon enough. 1GB modules were all that was available when the GeForce 30 series came out. Maybe NVIDIA was hoping for 2GB modules to come out sooner, or there was something about GDDR6X that appealed to them.

    In addition, it wasn't until November of last year that someone made a 4GB GDDR6 chip, even though that capacity has been part of the standard since its inception.

  • AMD may have had a point with memory controllers and whatnot. NVIDIA still wants to make everything on a single die, while AMD separated the memory interface into its own dies for RDNA 3. The single-die approach may limit the number of memory controllers that can be added to the GPU die, which limits the number of memory chips you can use. In addition, NVIDIA pairs each memory controller with L2 cache, which also eats up a lot of space.

    While there are some strategies that NVIDIA could've used to add more memory, the GTX 970 fiasco was basically the final blow to that.


And they will continue to be so for the foreseeable future, since they've invested in a lot of technology that the industry wants to use for a long time.

I feel like people sometimes forget that GPUs aren't simply used to play games anymore. It's very likely that data center and HPC have overtaken gaming's revenue in recent years.
Most definitely there will be architectural and engineering limits; I almost overlooked that. This also gave me an idea: maybe AMD keeps adding more VRAM because that's the only way they can compensate for lagging behind the lossless image scaling/compression technology I mentioned.
 
Most definitely there will be architectural and engineering limits; I almost overlooked that. This also gave me an idea: maybe AMD keeps adding more VRAM because that's the only way they can compensate for lagging behind the lossless image scaling/compression technology I mentioned.
The biggest eater of VRAM, AFAIK, is texture data. The things you're mentioning only apply to the frame buffer or G-buffers, which don't eat too much into VRAM even at 4K.

There was talk about having some sort of AI-based reconstruction for textures, so that lower-resolution textures can be stored and AI can just upscale them. If that works well enough, it puts even less pressure on VRAM usage.
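
To put a rough number on the frame buffer point (my own back-of-the-envelope math): a single 32-bit render target at 4K is only about 32MB, so even a full set of G-buffers is small next to a multi-gigabyte texture budget.

```python
# Rough size of render targets at 4K versus a typical texture budget.
# Assumes simple 32-bit (4 bytes per pixel) targets; real G-buffer formats vary.

WIDTH, HEIGHT = 3840, 2160
BYTES_PER_PIXEL = 4

buffer_mb = WIDTH * HEIGHT * BYTES_PER_PIXEL / (1024 ** 2)
print(f"one 4K RGBA8 target: {buffer_mb:.1f} MB")        # ~31.6 MB

# Even a deferred renderer with, say, 6 such targets plus depth:
print(f"example G-buffer set: {7 * buffer_mb:.0f} MB")   # ~221 MB, versus gigabytes of textures
```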
 

Newb888

I find it puzzling that some people would settle for an 8GB NVIDIA card instead of a 12GB NVIDIA card, given that more VRAM is needed for "future proofing" high-resolution 6/8K video (not gaming).

According to some sources, current-generation GDDR6X and GDDR6 memory is supplied in densities of 8Gb (1GB of data) and 16Gb (2GB of data) per chip, which means that a 12GB NVIDIA card would have more memory chips than an 8GB one, and potentially higher bandwidth and performance.

This would be a benefit for someone like me who only upgrades their PC once every 10 years or so.
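
For what it's worth, the rough bandwidth math behind the "more chips, potentially more bandwidth" point looks like this (my own illustration; the data rate is just an example figure, not tied to any specific card):

```python
# Memory bandwidth in GB/s = (bus width in bits / 8) * per-pin data rate in Gbps.
# Each 32-bit GDDR6/GDDR6X chip adds 32 bits of bus, so more chips means a wider bus.

def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    return bus_width_bits / 8 * data_rate_gbps

# 8GB built from 2GB (16Gb) chips -> 4 chips -> 128-bit bus
print(bandwidth_gbs(128, 21))   # 336.0 GB/s
# 12GB built from 2GB chips -> 6 chips -> 192-bit bus
print(bandwidth_gbs(192, 21))   # 504.0 GB/s
```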
 
The capacity NVIDIA has on their cards comes down to memory bus width and nothing else (e.g., a 128-bit bus == 4x 32-bit VRAM chips). They've chosen bus widths based on the performance characteristics they're looking for, and the biggest memory chips they can buy are 16Gb. That means they'd have to double up capacity using a clamshell PCB design, which adds board complexity and cooling issues. The other option would be wider buses, which increase silicon size and bring their own board complexity since there are more chips.
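
A quick sketch of that chip-count math, including the clamshell option (my own illustration, using the 16Gb/2GB chips mentioned here):

```python
# Chip count follows from bus width; clamshell mode doubles the chips (and capacity)
# without widening the bus by putting memory on both sides of the PCB.

def memory_config(bus_width_bits, gb_per_chip, clamshell=False):
    chips = (bus_width_bits // 32) * (2 if clamshell else 1)
    return chips, chips * gb_per_chip           # (chip count, total VRAM in GB)

print(memory_config(128, 2))                    # (4, 8)  -> 128-bit bus, 4 chips, 8GB
print(memory_config(128, 2, clamshell=True))    # (8, 16) -> same bus doubled up, 16GB
print(memory_config(256, 2))                    # (8, 16) -> or just go wider: 256-bit, 16GB
```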

In addition, it wasn't until November of last year that someone made a 4GB GDDR6 chip, even though that capacity has been part of the standard since its inception.
I was looking into GDDR6 memory capacities for some thread a while ago and couldn't find anything above 16Gb (as far as I know, nothing above 16Gb is being sold). Samsung has a 32Gb capacity with GDDR6W, but that also doubles the bus width, so it doesn't really count in terms of increasing capacity. If you have any links regarding 32Gb chips I'd love to read through them.
 
I find it puzzling that some people would settle for an 8GB NVIDIA card instead of a 12GB NVIDIA card, given that more VRAM is needed for "future proofing" high-resolution 6/8K video (not gaming).
Video takes up almost no VRAM since the bandwidth requirements for video are so small that the GPU can get away with a just-in-time system.

I was looking into GDDR6 memory capacities for some thread a while ago and couldn't find anything above 16Gb (as far as I know, nothing above 16Gb is being sold). Samsung has a 32Gb capacity with GDDR6W, but that also doubles the bus width, so it doesn't really count in terms of increasing capacity. If you have any links regarding 32Gb chips I'd love to read through them.
I was probably thinking of this, but didn't realize the bus situation.
 

IDProG

I didn't really create this thread to pit people into an NVIDIA vs. AMD battle. I just got curious as to why NVIDIA's GPUs have less VRAM than their AMD counterparts and still offer the same if not faster performance.
Offer the same if not faster performance?

I am not sure what you're talking about.

Anyway, I have many answers for your question about why Nvidia gives less VRAM.

One, the business answer. They don't want GeForce cards to cannibalize sales of the Quadro cards. The selling point of Quadro cards has always been the amount of VRAM. Professional graphics or AI people need a LOT of VRAM. They don't really need the GPU performance (well, the graphics people don't. The AI people do). AMD is not affected as much because ROCm does not exist in RDNA cards, yet. I think I heard a rumor that they would add ROCm to RDNA 4 or something.

Two, the silicon answer. The more advanced the silicon node becomes, the smaller the maximum size of a silicon die becomes. By using less RAM (and a smaller bus), they can save space and use it for compute instead. Now, a chiplet design is not affected by this at all, but Nvidia hasn't made their chiplet tech, again, yet.

Three, the mistake answer. Nvidia falsely predicted that the crypto mining era would last and designed their GPUs around miners. Designing an architecture requires AT LEAST a year, and the crypto mining era ended around 6 months before the release of the RTX 40 series. On top of this, they also didn't expect VRAM demand to increase so suddenly. In 2022, there were lots of people still defending 8GB of VRAM, even people who now demand more.
Since the cards have already been produced, they can't change them (except for future emergency models like the 4060 Ti 16GB) and have to figure out how to sell them.
 

Zerk2012

I don't know what you would call a low amount of RAM.

You've got low, enough, and overkill, and each person sees those numbers differently.

For me, I still use my 2080 8GB at 1440p and have no problems running anything, so I guess to me even at 1440p 8GB is enough.
 
Another thing of note is something that Jarred Walton pointed out in his article about 4K gaming. At native 4K rendering, the video card has to hold more MIP maps, possibly at higher resolutions than would otherwise have been used.

But with things like DLSS and FSR, the video card can render at a lower internal resolution, which reduces the need for those higher resolution MIP maps.

So another guess would be NVIDIA is well aware of their engineering issues as mentioned above, so they're finding ways to get around it. And the tech that they developed is the result of that.
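
A rough sketch of why that matters for memory (my own illustration with a made-up 4K texture; real engines use compressed formats): each mip level is a quarter the size of the one above it, so not having to keep the top level resident cuts a texture's footprint by roughly 75%.

```python
# Memory for a texture's mip chain, assuming 4 bytes per texel (uncompressed RGBA8).
# Each mip level halves the width and height of the previous one, i.e. 1/4 the texels.

def mip_chain_mb(size, bytes_per_texel=4, skip_top_levels=0):
    total, level = 0, 0
    while size >= 1:
        if level >= skip_top_levels:        # a streaming system can leave the top mips out
            total += size * size * bytes_per_texel
        size //= 2
        level += 1
    return total / (1024 ** 2)

print(mip_chain_mb(4096))                      # ~85.3 MB with the full chain resident
print(mip_chain_mb(4096, skip_top_levels=1))   # ~21.3 MB if the 4096x4096 level isn't needed
```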
 

Deleted member 2838871

Another thing of note is something that Jarred Walton pointed out in his article about 4K gaming. At native 4K rendering, the video card has to hold more MIP maps, possibly at higher resolutions than would otherwise have been used.

But with things like DLSS and FSR, the video card can render at a lower internal resolution, which reduces the need for those higher resolution MIP maps.

So another guess would be NVIDIA is well aware of their engineering issues as mentioned above, so they're finding ways to get around it. And the tech that they developed is the result of that.

Sounds pretty lame to me.

I can’t speak for anyone else… but I bought a 4090 to game in native 4K… not game with DLSS… and I don’t.
 

Deleted member 2838871

If I can't see any practical difference, then I don't really care. After all, graphics rendering does a ton of cheating anyway and hopes that the player doesn't notice.

Can’t say if I’d notice it or not because I’ve never enabled DLSS. I might one day… maybe for UE5?

I can say that I don’t see a difference between 4K 60 and 4K 120… which isn’t all that uncommon. So I game at 60Hz, which is good anyway because not even the 4090 will do 120 fps at 4K Ultra… best I’ve done is 80-90 fps.