Question: Why Does Nvidia Keep Making GPUs That Have Low VRAM?

Jun 14, 2023
I read and heard somewhere (I can't recall exactly where) that Nvidia can get away with less VRAM on their GPUs compared to their AMD counterparts because they have mastered something related to "Lossless Image Compression". If I'm not mistaken, and from how I understand it, it works a bit like video encoding: the GPU compresses graphics data losslessly so the VRAM isn't used as heavily.

I've noticed this with the 6800 XT and the RTX 3080. Both cards are very capable and very similar in performance, but the RTX 3080 is "faster" most of the time (it depends on how the benchmark was done, really). Is this true? If so, then the "longevity" that AMD offers with their GPUs because of their higher VRAM isn't really a true selling point, because both the 6800 XT and RTX 3080 were tested again this year in some video I watched, and their performance is still relatively close.

Of course this argument will be different for people who play at 1440p and 4K, but yours truly only games at 1080p, so the longevity of AMD GPUs isn't really "longevity" for me: at 1080p those cards perform the same, and the RTX 3080 with less VRAM performs the same or, in most cases, faster, and comes with all the marvels like RT and DLSS. So what's the point of having more VRAM on a GPU if there's another one that performs pretty much the same with less VRAM? I've also heard AMD is a million miles away from mastering this so-called "Lossless Image Compression" that Nvidia has supposedly already mastered, or so they say. Correct me if I'm wrong here guys, willing to learn as much as I can. Apologies for the long read.
Nvidia's "Lossless Image Compression" (LIC) technology allows for more efficient use of VRAM by compressing graphics data.
GPU performance is influenced by various factors, including architecture, clock speeds, memory bandwidth, and software optimization, not just VRAM capacity.
Benchmark results can vary depending on test conditions, settings, and games used.
For gaming at 1080p, the benefits of higher VRAM capacity may not be as significant as they are at higher resolutions like 1440p or 4K.
Both Nvidia and AMD continue to make advancements in GPU technologies and optimizations.
VRAM capacity is just one factor to consider, and it's important to assess benchmark results across different games and scenarios to understand overall performance capabilities.
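For what it's worth, here is a minimal Python sketch of what lossless delta compression means in principle: neighbouring pixels in a render target are usually similar, so storing differences instead of raw values makes the data cheaper to handle without losing anything. This is only a toy illustration of the general idea; the real delta color compression in Nvidia (and AMD) hardware works on fixed-size tiles and is far more sophisticated.

```python
# Toy illustration of lossless delta encoding for pixel data.
# Real GPU delta color compression operates on fixed-size tiles in
# hardware; this only shows why storing differences between similar
# neighbouring values is cheap and fully reversible.

def delta_encode(pixels):
    """Store the first value, then each pixel's difference to its predecessor."""
    if not pixels:
        return []
    encoded = [pixels[0]]
    for prev, cur in zip(pixels, pixels[1:]):
        encoded.append(cur - prev)
    return encoded

def delta_decode(encoded):
    """Exactly reverse delta_encode, so no information is lost."""
    if not encoded:
        return []
    pixels = [encoded[0]]
    for diff in encoded[1:]:
        pixels.append(pixels[-1] + diff)
    return pixels

# A smooth gradient: large absolute values, tiny differences.
row = [100, 101, 101, 102, 104, 104, 105, 107]
deltas = delta_encode(row)
print(deltas)                       # [100, 1, 0, 1, 2, 0, 1, 2]
assert delta_decode(deltas) == row  # lossless: round-trips exactly
```

The small deltas can then be stored in fewer bits, which is where the saving comes from. Note that on real GPUs this mostly relieves memory bandwidth pressure; it does not meaningfully reduce how much VRAM a game has to allocate.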
 

Iver Hicarte

Nvidia's "Lossless Image Compression" (LIC) technology allows for more efficient use of VRAM by compressing graphics data.
GPU performance is influenced by various factors, including architecture, clock speeds, memory bandwidth, and software optimization, not just VRAM capacity.
Benchmark results can vary depending on test conditions, settings, and games used.
For gaming at 1080p, the benefits of higher VRAM capacity may not be as significant compared to higher resolutions like 1440p or 4K.
Both Nvidia and AMD continue to make advancements in GPU technologies and optimizations.
VRAM capacity is just one factor to consider, and it's important to assess benchmark results across different games and scenarios to understand overall performance capabilities.
I beg to differ. Look at the 3070: it's still a capable card, but at 1080p with the newest AAA titles it's starting to struggle. There are games like Hogwarts Legacy and The Last of Us (which was poorly ported to PC) that the 3070 can still run at high settings, but with hiccups like stuttering, and at times textures don't even load correctly. Compared to the 6700 XT with its 12GB of VRAM, the issues that plague the 3070 never showed up on the 6700 XT because of its higher VRAM. Case in point: 8GB should be close to obsolete, 12GB should be entry level, 16GB should be mid-range, and 24GB should be high end, because of how VRAM-hungry modern AAA games are now. Consumers should push this notion to NVIDIA especially.
 
Jun 14, 2023
While the RTX 3070 is still a capable card, it may struggle with the newest AAA titles at 1080p. Games like Hogwarts Legacy and The Last of Us, which was poorly ported to PC, can cause issues like stuttering and texture loading problems on the 3070. The 6700XT with its 12GB VRAM doesn't face these problems due to its higher VRAM capacity. Hence, it's arguable that 8GB VRAM is becoming obsolete, and pushing for higher VRAM capacities, such as 12GB as entry-level, 16GB as mid-range, and 24GB as high-end, would be beneficial for modern AAA games. It's essential for consumers to communicate this notion to NVIDIA.
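As a rough illustration of why texture-heavy AAA games can push an 8GB card to its limit at high settings, here is a back-of-the-envelope estimate in Python. All of the texture counts and overhead figures below are made-up assumptions for illustration, not measurements from Hogwarts Legacy, The Last of Us, or any other specific game.

```python
# Back-of-the-envelope estimate of texture VRAM usage.
# Every count and size below is an illustrative assumption, not a
# measurement from any real game.

def texture_size_mb(width, height, bytes_per_pixel, mipmaps=True):
    """Approximate size of one texture; a full mip chain adds roughly a third."""
    size = width * height * bytes_per_pixel
    if mipmaps:
        size *= 4 / 3
    return size / (1024 ** 2)

# Block-compressed (BC7) textures store about 1 byte per pixel.
t4k = texture_size_mb(4096, 4096, 1)   # ~21 MB each
t2k = texture_size_mb(2048, 2048, 1)   # ~5 MB each

# Hypothetical high-setting scene with many unique materials resident at once.
scene_mb = 150 * t4k + 400 * t2k       # ~5.2 GB of textures alone
other_mb = 1500                        # render targets, geometry, buffers (assumed)
os_overhead_mb = 500                   # OS / compositor reservation (assumed)

total_gb = (scene_mb + other_mb + os_overhead_mb) / 1024
print(f"Estimated VRAM needed: {total_gb:.1f} GB")  # about 7.2 GB with these assumptions
```

Even with these fairly generous assumptions the total sits uncomfortably close to 8GB, so a game that streams in a few more high-resolution textures tips over the limit, and you get exactly the stuttering and texture pop-in described above.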
 
Most definitely there will be architectural and engineering limits; I almost overlooked that. This also gave me an idea: maybe AMD keeps adding more VRAM because that's the only way they can compensate for lagging behind on the lossless image scaling/compression technology I mentioned.
Delta color compression is used by both AMD and Nvidia, and they perform similarly; it mainly saves memory bandwidth rather than VRAM capacity. Not sure why you think only Nvidia is benefiting from this.