News AMD Goads Nvidia Over Stingy VRAM Ahead of RTX 4070 Launch


Elusive Ruse

Estimable
Nov 17, 2022
459
597
3,220
WoW Classic was one of those things I thought I wanted to see, but I quit three days in because of how unbearably slow, tedious and over-crowded everything was. I'd been through that ~10 times from late-BC (when I started playing) through WotLK, and it didn't take me long to realize I didn't have the patience to go through it again in its even more painful, raw vanilla form.
I enjoyed it immensely for a good while and joined a raiding guild; I led the raid through MC and BWL before real life caught up to me. WoW is a teen/college boy's game. If I weren't married and working shifts, I'd probably have lasted till WotLK.
 
  • Like
Reactions: Avro Arrow
The only reason Nvidia is selling more is the RT gimmick trend, which they started themselves

Nvidia will sell more with or without RT.

AMD is probably having a hard time figuring out how to run RT coding without infringing patents,

No such thing. RT existed decades before Nvidia existed as a company. AMD could have better RT if they wanted to, but they were never really interested in RT in the first place.
 
  • Like
Reactions: KyaraM

InvalidError

Titan
Moderator
No such thing. RT existed decades before Nvidia existed as a company. AMD could have better RT if they wanted to, but they were never really interested in RT in the first place.
RT is kind of problematic right now because artists and programmers work together to make a game look the way they intend using traditional rendering, and it often looks awful, or at least awkward, when you just flip RT on with the exact same assets and scene setup. AMD probably won't focus on RT for the desktop until game developers design for RT first, which won't happen until at least the next generation, 2-3 years from now.
 
  • Like
Reactions: -Fran-
"In an attempt to hammer its point home, AMD supplied some of its own benchmark comparisons versus Nvidia chips, pitting its RX 6800 XT against Nvidia's RTX 3070 Ti, the RX 6950 XT with the RTX 3080, the 7900 XT versus the RTX 4070 Ti, and the RX 7900 XTX taking on the RTX 4080."

I had initially thought that comparing the RX 6800 XT to the RTX 3070 Ti and comparing the RX 6950 XT to the RTX 3080 was another bone-headed move by AMD's marketing team. We all know that the real rival of the RTX 3070 Ti is the RX 6800 and the real rival of the RTX 3080 is the RX 6800 XT. I wondered why AMD would do this because it's like comparing the RX 6800 XT and RTX 3090. It seemed insane...

Then I remembered just how overpriced the RTX 3070 Ti is. In fact, right now, it's more expensive than the RX 6950 XT! (EDIT: Newegg now has a PNY Verto 3070 Ti that's only $600 but it's still an insane price when the 4070 is also $600). Meanwhile, the RX 6950 XT outclasses the RTX 3070 Ti in every way. The RX 6950 XT beats the RTX 3070 Ti with RT on and absolutely demolishes it with RT off. This means that anyone who buys an RTX 3070 Ti at this point is fit for the funny farm:
The RTX 3070 Ti is actually $10 more expensive than the RX 6950 XT but $30 more expensive if you claim the $20 MIR from ASRock:
Gigabyte GeForce RTX 3070 Ti 8GB - $640
ASRock Radeon RX 6950 XT 16GB - $630 (-$20 MIR = $610)

I think the reason they didn't compare the RTX 3070 Ti with the RX 6950 XT is that the pricing situation in the USA might not be the same everywhere else in the world.

No matter which company you prefer, there's no question that AMD is right. We've seen from all reviewers that nVidia hasn't been putting enough VRAM on cards that weren't made specifically for mining (aka anything not named RTX 3060). The stuttering that I've seen in modern AAA titles whenever a reviewer was using an 8GB card, even at 1080p, is not a surprise to me.

People paid way more for these cards than their Radeon rivals (because they didn't realise just how unimpressive RT actually is) and that is a travesty, but it's a travesty of their own making. This is not a case of nVidia being dishonest or misleading people because the amount of VRAM was made clearly obvious in all listings and on all retail boxes.

Jensen Huang knows that a lot of people who buy nVidia will only buy nVidia because they have all the tech-savvy of a clueless noob and have never owned anything else. He knew that those sheep would buy his cards even if they had 2GB of VRAM (I'm exaggerating, of course, but you get the point). Anyone who is being forced to drop graphics settings on their RTX 3060 Ti, RTX 3070, RTX 3070 Ti or RTX 3080 has only themselves to blame.

I'd like to say that this is another example of nVidia screwing consumers but in this case, nVidia only had to let the consumers screw themselves with their own stupidity. I don't think that nVidia really did anything wrong (for once) because they actually weren't deceptive with their marketing (like they were with the RTX 3060 8GB).

This whole situation is a result of people being lazy and stupid. During the period that ran from 2020-2022, we all know that prices were astronomical. Wouldn't it have behooved people to actually understand just what the hell they were throwing $1,000+ at? Of course it would, but people who are intellectually lazy have a tendency to be brand-whores and often have to learn the hard way.

From the beginning, I had said that 10GB was a ridiculously small frame buffer for a card with the GPU power of the RTX 3080. Of course, lots of gamers (especially the younger ones) tend to be brain-dead egomaniacs who think that they know everything (walking Dunning-Kruger case studies). They're not brave enough to admit when they're wrong, but in this case, reality has kicked them square in the nads.

I only hope that, going forward, people remember this harsh lesson because while it didn't happen to me (RX 6800 XT), I don't like the idea of people getting fleeced, especially today.
 
Last edited:
Yup. And with all those fancy mental gymnastics AMD is getting outsold 6:1. If they had the better product, it would be the other way around.
Oh here we go, another "nVidia is better just because it is" person. I don't suppose it has ever occurred to you that mindshare is often more powerful than quality or value. This is especially true in cases where most people aren't smart enough to actually understand the products that they're buying. I actually remember a time when American cars outsold Japanese cars despite being so inferior that it was actually funny.

Currently, the RTX 3070 Ti actually costs more than the RX 6950 XT so it must be the superior product, eh?

Oh wait...
 

jkflipflop98

Distinguished
Jensen Huang knows that a lot of people who buy nVidia will only buy nVidia because they have all the tech-savvy of a clueless noob and have never owned anything else. He knew that those sheep would buy his cards even if they had 2GB of VRAM (I'm exaggerating, of course, but you get the point). Anyone who is being forced to drop graphics settings on their RTX 3060 Ti, RTX 3070, RTX 3070 Ti or RTX 3080 has only themselves to blame.

I'd like to say that this is another example of nVidia screwing consumers but in this case, nVidia only had to let the consumers screw themselves with their own stupidity. I don't think that nVidia really did anything wrong (for once) because they actually weren't deceptive with their marketing (like they were with the RTX 3060 8GB).

This whole situation is a result of people being lazy and stupid. During the period that ran from 2020-2022, we all know that prices were astronomical. Wouldn't it have behooved people to actually understand just what the hell they were throwing $1,000+ at? Of course it would, but people who are intellectually lazy have a tendency to be brand-whores and often have to learn the hard way.

Yes, over 80% of all PC gamers are just flat out stupid. I think you've got it all figured out. We're a bunch of idiots. That's why we buy Nvidia.

Pure genius on display here, fellas.
 
  • Like
Reactions: KyaraM

edzieba

Distinguished
Jul 13, 2016
589
594
19,760
My own experience begs to differ.

I have a few screenshots I've shared in the Discord of how VRChat will just gobble up all the VRAM it can in complex and populated worlds. People with 3080s, 3080 Tis and anything with less than 16GB suffer dearly. Funnily enough, people with a 3060 12GB have a better overall experience than people with 3080s, even.

EDIT: https://www.reddit.com/r/VRchat/comments/xjjpup/vrchat_vram_usage/


Regards.
As always: vRAM reported usage tells you naff-all about vRAM requirements.
Any game engine can and should load as much texture and geometry data into vRAM as it has available, until vRAM is full. There is no penalty for doing so (overwriting that data when needed is no slower than overwriting 'empty' vRAM, because that's how DRAM works), and keeping as much of a level's data in vRAM as possible means it does not need to be loaded from the backing store, with a latency hit, at a later date. Failing to cache every byte you can means inviting cache misses that could otherwise be avoided, and is just poor practice. Either you run out of vRAM and stop opportunistically caching, or you run out of level data to load (if you have multiple levels, you can start caching their data too). The important part is that the vast majority of that cached data will never be read by the GPU; it will simply be loaded into vRAM, never touched, and then overwritten by something else. That's not a bug, that's the entire purpose of opportunistic caching. vRAM 'usage' tells you how big that opportunistic cache is, not how much is being used for live data (e.g. buffers, textures actually being rendered, BSPs, etc.).
Without the ability to artificially partition vRAM connected to the same GPU die (same number of cores, same clocks, etc.) so it has access to different quantities of vRAM, any "Card 1 has X vRAM, Card 2 has X+1 vRAM, Card 2 is faster, so more vRAM is better!" conclusion is confounded by all the other variables of actual GPU performance.
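A minimal sketch of that opportunistic caching, in Python (purely illustrative; Asset and fill_vram_cache are made-up names, not any real engine's API):

from dataclasses import dataclass

# Illustrative sketch of opportunistic VRAM caching: the engine streams level
# assets into vRAM until it runs out of either vRAM or assets. "Usage" counts
# everything cached, not just the data the GPU actually reads each frame.

@dataclass
class Asset:
    name: str
    size_bytes: int

def fill_vram_cache(assets_on_disk, vram_free_bytes):
    """Upload assets until vRAM is full or we run out of level data."""
    resident = []
    for asset in assets_on_disk:
        if asset.size_bytes > vram_free_bytes:
            break                        # cache full: stop caching, no penalty
        vram_free_bytes -= asset.size_bytes
        resident.append(asset)           # much of this may never be sampled
    return resident, vram_free_bytes

# Example: an 8 GiB card and 12 GiB of level assets -> the cache fills to 8 GiB,
# so a monitoring tool reports ~8 GiB "used" even though only a fraction is live data.
level = [Asset(f"tex_{i}", 256 * 2**20) for i in range(48)]   # 48 x 256 MiB textures
cached, free = fill_vram_cache(level, 8 * 2**30)
print(len(cached), "assets cached;", free // 2**20, "MiB free")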
 
As always: vRAM reported usage tells you naff-all about vRAM requirements.
Any game engine can and should load as much texture and geometry data into vRAM as it has available, until vRAM is full. There is no penalty for doing so (overwriting that data when needed is no slower than overwriting 'empty' vRAM, because that's how DRAM works), and keeping as much of a level's data in vRAM as possible means it does not need to be loaded from the backing store, with a latency hit, at a later date. Failing to cache every byte you can means inviting cache misses that could otherwise be avoided, and is just poor practice. Either you run out of vRAM and stop opportunistically caching, or you run out of level data to load (if you have multiple levels, you can start caching their data too). The important part is that the vast majority of that cached data will never be read by the GPU; it will simply be loaded into vRAM, never touched, and then overwritten by something else. That's not a bug, that's the entire purpose of opportunistic caching. vRAM 'usage' tells you how big that opportunistic cache is, not how much is being used for live data (e.g. buffers, textures actually being rendered, BSPs, etc.).
Without the ability to artificially partition vRAM connected to the same GPU die (same number of cores, same clocks, etc.) so it has access to different quantities of vRAM, any "Card 1 has X vRAM, Card 2 has X+1 vRAM, Card 2 is faster, so more vRAM is better!" conclusion is confounded by all the other variables of actual GPU performance.
Yes, but I'm not just saying it because the number looks impressive in the images; it's that the VRAM is actually being used, and people have reported being short on VRAM all over the place, over and over.

There are no technicalities when your game starts stuttering like crazy and, especially in VR, becomes a nauseating experience.

As shown in the Hardware Unboxed analysis, nVidia also has another nasty trick besides stutter: it lowers texture quality on the fly to compensate. Something similar is also built into some game engines, which force lower texture quality when VRAM is close to being saturated or the GPU just can't keep up. This also hides the visual differences in most reviews, which still report "normal" FPS numbers. I believe this was discussed quite a bit around Far Cry 5 and 6.
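A rough sketch of what such a fallback could look like inside an engine (illustrative only; the thresholds and names are invented, and this is not nVidia's or any specific engine's actual logic):

# Illustrative texture-quality fallback when VRAM nears saturation.
def pick_mip_bias(vram_used_bytes, vram_total_bytes, max_bias=3):
    """Return how many mip levels to drop (0 = full texture quality)."""
    headroom = 1.0 - vram_used_bytes / vram_total_bytes
    if headroom > 0.15:
        return 0          # plenty of headroom: full-resolution textures
    if headroom > 0.05:
        return 1          # getting tight: quietly drop one mip level
    return max_bias       # nearly saturated: drop several levels to avoid stutter

# Each mip level dropped roughly quarters a texture's memory footprint, so the
# frame rate stays "normal" in benchmarks while the on-screen detail falls.
for used_gib in (5.0, 7.2, 7.9):
    bias = pick_mip_bias(used_gib * 2**30, 8 * 2**30)
    print(f"{used_gib} GiB used on an 8 GiB card -> mip bias {bias}")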

Regards.
 
  • Like
Reactions: Elusive Ruse

Madkog

Distinguished
Nov 27, 2016
12
1
18,515
It all depends on how long the card is intended to be used. Buy a 4070 and keep it until next gen, then do it again for the 5070. Or buy a 4080 and skip a gen. It ends up being about the same.

The advantage of more frequent buys is that you get some new tech earlier, and CPUs that match.

But be honest: if you buy a 4070 and try to skip a gen, you will probably have to downgrade your experience about 3 years from now.
 
It all depends on how long the card is intended to be used. Buy a 4070 and keep it until next gen, then do it again for the 5070. Or buy a 4080 and skip a gen. It ends up being about the same.

The advantage of more frequent buys is that you get some new tech earlier, and CPUs that match.

But be honest: if you buy a 4070 and try to skip a gen, you will probably have to downgrade your experience about 3 years from now.
A 4070 should still be usable for 5 years. Buying a new 70-series every generation is a huge waste of money and creates a lot of e-waste as well.
 

DaveLTX

Commendable
Aug 14, 2022
104
66
1,660
A 4070 should still be usable for 5 years. Buying a new 70-series every generation is a huge waste of money and creates a lot of e-waste as well.
Except that it won't be, with VRAM requirements increasing. 5 years is for someone who literally doesn't care.
 

Colif

Win 11 Master
Moderator
AMD has the advantage of knowing ahead of time how much VRAM would be in both consoles, and as such can plan ahead and put enough VRAM in their GPUs to cover the inevitable point when games grow to the same size.
Nvidia sells its cards on having the newest tech, expecting people to just buy a new GPU every 2 generations to be able to run games with the highest amount of prettiness.

Their massive market advantage also helps, and you get people making videos about why no one buys AMD GPUs, only to turn around a month later and buy an Nvidia card themselves. Other channels act like AMD doesn't exist. You don't notice it until you have an AMD card yourself.

Having 20GB of VRAM and not caring about RT, I won't need another GPU for a while to come.
 
Except that it won't be, with VRAM requirements increasing. 5 years is for someone who literally doesn't care.
With 12GB you will probably be fine for a while at 1440p if you don't care about RT. You don't run into issues with 8GB at 1080p until you enable RT. So you don't get the best eye candy... big deal, as long as the gameplay is amazing.
 
You cannot forget that while consoles have 16GB of VRAM, that VRAM is shared between the CPU and GPU. The Xbox, for example, reserves 2GB of RAM for the OS, so on the Series X you have 14GB of RAM for the GPU and CPU, while on the Series S (10GB total) you only have 8GB for the GPU and CPU. Your PC GPU's VRAM, on the other hand, is dedicated to it alone. In many ways, your 16GB consoles really only have 8-10GB of VRAM for the GPU. This is why console games usually use lower texture settings or resolution.


I don't think every GPU needs to have 16GB of VRAM. There is no reason for something like an RX 6400 to have that much VRAM, because it doesn't have the horsepower to drive a game at texture quality and resolution settings that come close to needing it. Heck, it can't even get close to needing 8GB of VRAM.
Nicely put!
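To put rough numbers on the budgets quoted above (a back-of-envelope Python sketch; the 2GB OS reserve comes from that post and should be treated as approximate):

# Back-of-envelope console memory budgets from the post above (approximate figures).
# Consoles share one memory pool between CPU and GPU, unlike a PC's dedicated VRAM.
consoles = {
    "Xbox Series X": {"total_gb": 16, "os_reserve_gb": 2.0},
    "Xbox Series S": {"total_gb": 10, "os_reserve_gb": 2.0},
}

for name, mem in consoles.items():
    game_budget = mem["total_gb"] - mem["os_reserve_gb"]
    # The game's CPU-side data (logic, audio, streaming buffers) also lives in
    # this shared pool, so the slice acting as "VRAM" is smaller still.
    print(f"{name}: {game_budget:.0f} GB shared between game CPU and GPU data")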
 

DaveLTX

Commendable
Aug 14, 2022
104
66
1,660
With 12GB you will probably be fine for a while at 1440p if you don't care about RT. You don't run into issues with 8GB at 1080p until you enable RT. So you don't get the best eye candy... big deal, as long as the gameplay is amazing.
I play Forza Horizon 5 all the time (I've been playing Forza titles for 7 years), and I already hit 11GB at 1440p (at high settings, with some textures raised so it doesn't look so bad in places) on my 3080 12GB before I switched to my 6900 XT.
If I were at 4K, I'd have hit the VRAM limit and been forced to drop texture settings.
"12GB is enough" is for today, not tomorrow.
 

edzieba

Distinguished
Jul 13, 2016
589
594
19,760
I play Forza Horizon 5 all the time (I've been playing Forza titles for 7 years), and I already hit 11GB at 1440p (at high settings, with some textures raised so it doesn't look so bad in places) on my 3080 12GB before I switched to my 6900 XT.
If I were at 4K, I'd have hit the VRAM limit and been forced to drop texture settings.
"12GB is enough" is for today, not tomorrow.
vRAM usage does not strongly scale with render resolution. The framebuffers themselves are only a very small portion of vRAM usage - e.g. a 32bpp 3840x2160 buffer is 33MB.
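For reference, the arithmetic behind that figure, assuming a single colour buffer with no MSAA or auxiliary targets:

# One 32bpp (4 bytes per pixel) 4K colour buffer.
width, height, bytes_per_pixel = 3840, 2160, 4
size_bytes = width * height * bytes_per_pixel
print(f"{size_bytes / 1e6:.1f} MB")   # ~33.2 MB (about 31.6 MiB)
# Even with a depth buffer and double or triple buffering this stays far below
# 1GB, so resolution alone doesn't explain multi-gigabyte differences in "usage".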
 
  • Like
Reactions: KyaraM

DSzymborski

Curmudgeon Pursuivant
Moderator
Let's not fight with each other. I've deleted a couple posts that I believe are partly the result of a misunderstanding.

Comparisons between AMD/Nvidia are *not* an excuse for hostility. Nobody's getting a warning for this, but let's watch the temperature here; these threads will not be warzones. Civility is and remains non-negotiable.
 
  • Like
Reactions: Avro Arrow

TJ Hooker

Titan
Ambassador
I play Forza Horizon 5 all the time (I've been playing Forza titles for 7 years), and I already hit 11GB at 1440p (at high settings, with some textures raised so it doesn't look so bad in places) on my 3080 12GB before I switched to my 6900 XT.
If I were at 4K, I'd have hit the VRAM limit and been forced to drop texture settings.
"12GB is enough" is for today, not tomorrow.
Reported VRAM usage is a poor indication of how much VRAM a game actually needs. Unless you were seeing issues with stuttering/fps suddenly plummeting (and you confirmed these issues weren't caused by anything else), you probably weren't actually running out of VRAM.
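A hedged sketch of that symptom check (illustrative Python; the 4x-median spike threshold is an arbitrary choice, not a standard metric):

# Genuine VRAM shortage usually shows up as sudden frame-time spikes (stutter),
# not as a high "usage" number. Frame times are in milliseconds.
def looks_like_vram_stutter(frame_times_ms, spike_factor=4.0, min_spikes=5):
    """Flag a capture where a handful of frames take several times the median."""
    ordered = sorted(frame_times_ms)
    median = ordered[len(ordered) // 2]
    spikes = sum(1 for t in frame_times_ms if t > spike_factor * median)
    return spikes >= min_spikes

smooth   = [16.7] * 995 + [20.0] * 5     # steady ~60 fps capture
stuttery = [16.7] * 990 + [120.0] * 10   # periodic ~120 ms hitches
print(looks_like_vram_stutter(smooth))   # False
print(looks_like_vram_stutter(stuttery)) # True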
 
  • Like
Reactions: KyaraM
Status
Not open for further replies.