[SOLVED] 20 GB VRAM 3080's?

farmfowls

Distinguished
Jul 23, 2014
299
0
18,790
So I know there have been rumors/confirmations(?) that a 20 GB VRAM SKU of the 3080 exists, but I am trying to understand something. As far as I am aware (I could be wrong), games aren't using even 10 GB of VRAM (isn't it around 6?), so why would NVIDIA make a 20 GB version? Is it just "big numbers" to lure people in? Is there an actual reason to wait for a 20 GB card vs. trying to get one's hands on the 10 GB cards that are almost impossible to get now anyway?

I didn't think that having more VRAM (past what you need) improved performance or offered anything extra in terms of gaming. Has this changed? Or is it just for people who hold onto cards for a few years and maybe in that time there will be need for VRAM past 10 GB? I for one am looking to upgrade before I head back to school and won't be able to upgrade again for a while after this. In this case, is waiting for a 20 GB 3080 the smarter choice vs trying to get a 10 GB one now?

I seem to remember a leak (from Gigabyte, I think) saying that the 20 GB VRAM cards might be clocked higher? I'm not sure, so correct me if I'm wrong. I would assume that if a 20 GB card does come, it'd be a year from now, so is it worth waiting?

Thoughts?
 

mjbn1977

Distinguished
Well, AMD usually pushes more VRAM for marketing reasons, to have something "better" than Nvidia. They will most likely offer 16GB and 12GB on their upcoming "Big Navi" cards. If they don't quite reach 3080 and 3070 performance, they can still insist, marketing-wise, that Nvidia's cards are not future proof, to put doubt into the minds of potential 3080 buyers. So those buyers might get a 20 GB alternative (but probably a more expensive one).

Right now no 4K game needs more than 8 to 10GB. Of course that can change over the next few years, but I doubt it will be a big problem, because the new consoles won't offer more either (they both have 16GB, but that is shared RAM; the new Xbox only has 10GB of fast VRAM and 6GB of "slow" RAM).

I personally would have preferred 12GB of VRAM on the 3080 (it would have given a little more peace of mind and long-term future proofing beyond the next 2 years), but since it is ultra-fast GDDR6X RAM I can kinda live with it. RDNA2 supposedly only uses GDDR6. And it's not only size that matters; speed is important too. So I am not too worried.

The people who can really benefit from large VRAM are content creators who use it for rendering in 3D software and the like. Big VRAM is a big deal there. When you render a scene, you load all the 3D assets into VRAM, and in big scenes (or for very large shadow map calculations in game development, Unreal Engine for example) this can really speed up the process.

For gaming itself, VRAM beyond 8-12GB is kinda pointless, unless you mod a game to the extreme at 4K or want to play around with 5K or 8K.

In WQHD the 10GB of the 3080 will last the next few years without problems.
 
I believe pushing for higher VRAM in gaming cards is simply marketing. Historically, whenever a card comes out with more VRAM as literally the only change, performance doesn't increase, and whenever that extra VRAM would come into play, it's likely used in areas that would kill performance anyway. One that comes to mind is shadow rendering, which basically has to re-render the scene, so it needs a non-trivial amount of VRAM at higher shadow resolutions. But at the same time, it's re-rendering the scene, so it's also harder on the GPU.
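
For a rough sense of scale, here is a back-of-the-envelope sketch in Python (the 32-bit depth texels and four cascades are illustrative assumptions, not figures from any particular game):

# Rough shadow-map VRAM estimate: resolution^2 * bytes per texel * number of cascades.
# Assumed values (32-bit depth texels, 4 cascades) are for illustration only.
def shadow_map_mib(resolution, bytes_per_texel=4, cascades=4):
    return resolution * resolution * bytes_per_texel * cascades / 2**20

for res in (2048, 4096, 8192):
    print(f"{res}x{res} shadow maps, 4 cascades: {shadow_map_mib(res):.0f} MiB")
# 2048 -> 64 MiB, 4096 -> 256 MiB, 8192 -> 1024 MiB

Doubling the shadow resolution quadruples the memory, and every one of those texels also has to be rendered, which is the GPU-load side of the trade-off.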

People may also tell you that games do in fact "use up" 8GB or more on their cards. The problem, however, is that tools that report VRAM utilization, like MSI Afterburner or GPU-Z, report the total VRAM allocated across the whole system. Remember: Windows still takes a chunk of VRAM for itself. On top of that, games often ask for more VRAM than they actually need, and not all of the data they keep in VRAM is necessary for what they're trying to do.
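
If you want to see the allocation numbers for yourself, NVML (via the pynvml Python package) exposes both the device-wide figure that overlays display and per-process allocations. A minimal sketch, assuming pynvml is installed and device index 0 is the card in question:

# Minimal sketch using NVML (pip install pynvml). The device-wide "used" value is total
# allocated VRAM across everything (Windows/DWM, browser, game), which is what overlays
# like Afterburner show. Per-process values show who allocated it, but they are still
# allocations, not "actively used" memory.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
print(f"Device-wide: {mem.used / 2**30:.1f} GiB allocated of {mem.total / 2**30:.1f} GiB")

for proc in pynvml.nvmlDeviceGetGraphicsRunningProcesses(handle):
    used_mib = (proc.usedGpuMemory or 0) / 2**20   # may be unreported on some systems
    print(f"PID {proc.pid}: {used_mib:.0f} MiB allocated")

pynvml.nvmlShutdown()

Even here, "used" really means "allocated"; NVML has no notion of which resident bytes a game is actively touching.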

Will games require 8GB to 10GB on ultra quality settings in the future? Sure. But we don't know when that future is. One could say 4GB of VRAM is the bare minimum for good quality at 1080p or 1440p today. Video cards that had or exceeded 4GB of VRAM have been around for over 8 years now.
 

mjbn1977

Distinguished
Here is what Tom's Hardware said about the 10GB VRAM in its 3080 review:

"
GeForce RTX 3080: Is 10GB VRAM Enough?

In the past few weeks since the RTX 30-series announcement, there have been quite a few discussions about whether the 3080 has enough memory. Take a look at the previous generation with 11GB, or the RTX 3090 with 24GB, and 10GB seems like it's maybe too little. Let's clear up a few things.
There are ways to exceed using 10GB of VRAM, but it's mostly via mods and questionably coded games — or running a 5K or 8K display. The problem is that a lot of gamers use utilities that measure allocated memory rather than actively used memory (e.g., MSI Afterburner), and they see all of their VRAM being sucked up and think they need more memory. Even some games (Resident Evil 3 remake) do this, informing gamers that it 'needs' 12GB or more to properly run the ultra settings properly. (Hint: It doesn't.)
Using all of your GPU's VRAM to basically cache textures and data that might be needed isn't a bad idea. Call of Duty Modern Warfare does this, for example, and Windows does this with system RAM to a certain extent. If the memory is just sitting around doing nothing, why not put it to potential use? Data can sit in memory until either it is needed or the memory is needed for something else, but it's not really going to hurt anything. So, even if you look at a utility that shows a game using all of your VRAM, that doesn't mean you're actually swapping data to system RAM and killing performance.

You'll notice when data actually starts getting swapped out to system memory because it causes a substantial drop in performance. Even PCIe Gen4 x16 only has 31.5 GBps of bandwidth available. That's less than 5% of the RTX 3080's 760 GBps of bandwidth. If a game really exceeds the GPU's internal VRAM capacity, performance will tank hard.

If you're worried about 10GB of memory not being enough, my advice is to just stop. Ultra settings often end up being a placebo effect compared to high settings — 4K textures are mostly useful on 4K displays, and 8K textures are either used for virtual texturing (meaning, parts of the texture are used rather than the whole thing at once) or not used at all. We might see games in the next few years where a 16GB card could perform better than a 10GB card, at which point dropping texture quality a notch will cut VRAM use in half and look nearly indistinguishable.

There's no indication that games are set to start using substantially more memory, and the Xbox Series X also has 10GB of GPU memory, so an RTX 3080 should be good for many years, at least. And when it's not quite managing, maybe then it will be time to upgrade to a 16GB or even 32GB GPU.

If you're in the small group of users who actually need more than 10GB, by all means, wait for the RTX 3090 reviews and launch next week. It's over twice the cost for at best 20% more performance, which basically makes it yet another Titan card, just with a better price than the Titan RTX (but worse than the Titan Xp and 2080 Ti). And with 24GB, it should have more than enough RAM for just about anything, including scientific and content creation workloads."
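
As a quick sanity check on the bandwidth gap described in the quote above, using the review's own figures:

# Bandwidth gap between on-card GDDR6X and PCIe 4.0 x16, with the figures from the review.
vram_bw = 760.0   # GB/s, RTX 3080 memory bandwidth
pcie_bw = 31.5    # GB/s, PCIe 4.0 x16

print(f"PCIe is {pcie_bw / vram_bw:.1%} of VRAM bandwidth")    # ~4.1%
print(f"VRAM is {vram_bw / pcie_bw:.0f}x faster than PCIe")    # ~24x

Which is why spilling out of VRAM shows up as a hard drop in frame rate rather than a gentle slope.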
 
Solution
I can't even imagine how much more that's going to cost over the 10GB variant...
Scrounging around the internet, GDDR6 currently costs about $10 per GB at volume pricing on DigiKey. While I can't find any figures on how much GDDR5X cost over GDDR5 for historical context, considering that GDDR5X performed about twice as well as GDDR5, I'm just going to peg GDDR6X at double the cost. Going with that assumption, adding another 10GB to the 3080 would add about another $200 to the BOM.
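
Spelled out, that guess looks like this (both inputs are assumptions, not quoted pricing from Micron or Nvidia):

# Back-of-the-envelope BOM guess for an extra 10GB of GDDR6X.
gddr6_per_gb = 10        # USD, rough DigiKey volume pricing for GDDR6
gddr6x_premium = 2.0     # assumed multiplier, by analogy with GDDR5X vs GDDR5
extra_capacity_gb = 10   # going from 10GB to 20GB

extra_bom = gddr6_per_gb * gddr6x_premium * extra_capacity_gb
print(f"Estimated extra memory cost: ${extra_bom:.0f}")   # ~$200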
 

mjbn1977

Distinguished
Well, Nvidia will of course charge a premium for GDDR6X, since they have an exclusivity agreement with Micron. That is probably also the reason why AMD will only offer GDDR6. I don't think Nvidia will do a Founders Edition 3080 with 20GB, but the board partners might. I would expect a price around $1000 if that happens; if we are lucky it might be less.

I am wondering about something else: since Nvidia has GDDR6X exclusively, how does it work for board partners? Do they have to order GDDR6X based on their GPU orders from Nvidia? How can they get their hands on additional GDDR6X chips?
 

animekenji

Distinguished
Dec 31, 2010
196
33
18,690
It surprises me that they would do this. There was speculation as to why Nvidia chose the slower variant of GDDR6X when faster is available, and the reason given was that heat dissipation would not have been adequate. Now they are going to 20GB, which means double the number of RAM chips and double the heat generation. If they couldn't provide sufficient cooling for 10GB of the faster GDDR6X, how will they cool double the amount of the slower variant? I would buy the early RTX 30 card with less memory rather than wait, because I think there are going to be cooling issues on the later cards. GTX 480, anyone?
 
The GPU on the 3080 only has enough memory channels for 10 RAM chips (12 for the 3090). A 20 GB variant would just use 2 GiB RAM chips instead of 1 GiB ones.

Also the memory chips don't contribute much to energy use. See Micron's GDDR6 introduction datasheet. There's a graph towards the end, which claims their GDDR6 chips need 5.5 picojoules of energy to transfer one bit.
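
Both points can be sanity-checked with quick arithmetic, using the 3080's published 320-bit bus and 760 GB/s bandwidth and treating Micron's GDDR6 energy figure as a stand-in for GDDR6X:

# 1) Chip count: GDDR6/GDDR6X chips have a 32-bit interface, so a 320-bit bus fits 10 chips.
bus_width_bits = 320
chips = bus_width_bits // 32
print(f"{chips} chips -> {chips * 1} GB with 1GB chips, {chips * 2} GB with 2GB chips")

# 2) Memory power: energy per bit * bits transferred per second, at full bandwidth.
energy_per_bit_j = 5.5e-12   # 5.5 pJ/bit, Micron's GDDR6 figure
bandwidth_bytes = 760e9      # RTX 3080: 760 GB/s
watts = energy_per_bit_j * bandwidth_bytes * 8
print(f"~{watts:.0f} W for the memory at full bandwidth")   # ~33 W

So even flat out, the memory accounts for only a few tens of watts out of the card's roughly 320 W board power.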
 
I play a few games at 4K that use more than 10GB of VRAM.
 

mjbn1977

Distinguished
Why does it constantly change during gameplay?
Again, from Tom's Hardware's Ampere review, quoted in full above (it's basically the same thing Steve from GamersNexus is saying as well).

 

mjbn1977

Distinguished

That cooling concern doesn't make sense. Most board partners use the same cooler design for their 3080 and 3090 cards, and the 3090 has 24GB of VRAM, so I don't think that should be an issue.
 
Thanks for the info. Not arguing or disagreeing with you; I'm just curious as to why it's constantly changing during gameplay. What causes that?
 
20GB on the RTX 3080 seems like silly marketing nonsense just to say they have more VRAM than AMD. There is almost no advantage to 20GB VRAM that I can see right now.

Also, unless AMD feels pressure from Nvidia, I kind of doubt AMD will have a card with more than 16GB for the RX 6000 series. They're almost definitely using HBM for VRAM and the extra cost of HBM compared to GDDR6X is going to be a limiting factor in how much VRAM AMD is likely to give the RX 6000 lineup.

For the average consumer, there is no real advantage to having more than 8GB of VRAM until 4K is as popular as 1080p is now and people are pushing for 8K to be the new 4K. I don't see this happening for a minimum of 6 years. A quick glance at the Steam survey shows 1080p is still the dominant resolution right now and is likely to stay that way for the next 4+ years. 1440p or 4K may overtake 1080p as the more popular resolution in 6 years.
 
To answer why it constantly changes during gameplay: the same reason an app's regular memory usage constantly changes. Sometimes it needs more memory in one moment, and the next it doesn't. If you're in a game and are currently in a small hub level, it's not going to require the same amount of data to render the scene as a sprawling open one.
 
So that means that it does in fact use the allocated memory. That's contradictory to what is being said.
 
The VRAM is requested by the game or program to be allocated for its current task, but the program may not even use half of it. The VRAM that is allocated is not the same as what is actually being used.

Say you are playing a game that uses a lot of textures, models and other assets that need VRAM. Your GPU has 8GB of VRAM. The game may request 6GB of your 8GB total VRAM, but if you are running 1080p resolution, it will likely only use 2.5-3.5GB. If you set the game to use 4k textures, it may increase VRAM usage up to 5.5GB. It might even require the game be restarted so the full VRAM available on the GPU can be allocated.
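
To put rough numbers on why higher-resolution textures eat VRAM so quickly, here is a simplified sketch that assumes uncompressed RGBA8 textures and ignores block compression (which would shrink all of these figures considerably):

# Uncompressed RGBA8 texture sizes, with ~1/3 extra for the mipmap chain.
def texture_mib(size, bytes_per_texel=4, mipmaps=True):
    base = size * size * bytes_per_texel
    return base * (4 / 3 if mipmaps else 1) / 2**20

for size in (1024, 2048, 4096):
    print(f"{size}x{size}: ~{texture_mib(size):.0f} MiB")
# 1024 -> ~5 MiB, 2048 -> ~21 MiB, 4096 -> ~85 MiB: each resolution step quadruples the footprint.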

Edit - Another way to look at it is like when you install an OS to your HDD or SSD. When installing an OS, you can partition part of the drive to separate the OS from your files or you can allocate the entire drive to the OS installation. That allocated space is not completely filled with files and still has space for other files and programs.
 

mjbn1977

Distinguished
The fact that the new Xbox Series X only has 10GB of fast GDDR6 and 6GB of slower GDDR6 is basically an indicator that they don't foresee more than 10GB of VRAM use for 4K. The 6GB of slower RAM is used as general system RAM. Consoles don't have separate system RAM as we do on our PCs with our "slow" DDR3 or DDR4, so the 16GB serves as both system RAM and buffer for the GPU. Also, VRAM speed is very important, so I think the faster GDDR6X will help quite a bit with texture streaming and such.

I really doubt that we will see HBM this time around from AMD, at least not if they are planning to sell their cards at a better price than Nvidia while still making a profit. Also, the leaks increasingly indicate 16GB on a 256-bit bus. With HBM that would be kind of an expensive card.
 

mjbn1977

Distinguished

See, and that is where the benefits of higher VRAM speed come into play. The game can switch textures much faster and make faster changes to the data currently held in the allocated VRAM space, without causing delays that would show up as lower performance.
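
As a rough illustration of how much that speed matters, here is a small sketch comparing how long a single ~64 MiB texture takes to move at different link speeds (idealized, ignoring latency and compression; the SSD figure is a generic fast NVMe drive):

# Time to move one ~64 MiB texture at different link speeds.
texture_bytes = 4096 * 4096 * 4   # ~64 MiB uncompressed RGBA8

for name, gb_per_s in [("GDDR6X on-card (760 GB/s)", 760),
                       ("PCIe 4.0 x16 (31.5 GB/s)", 31.5),
                       ("NVMe SSD (~7 GB/s)", 7)]:
    ms = texture_bytes / (gb_per_s * 1e9) * 1e3
    print(f"{name}: {ms:.2f} ms")
# Roughly 0.09 ms in VRAM vs 2.1 ms over PCIe vs ~9.6 ms from a fast SSD.

Swapping assets already resident in VRAM is effectively free; it's only when data has to come over PCIe or from disk that you notice it as a hitch.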