News Nvidia Reportedly Resuming RTX 3080 12GB Production, Thanks To GA102 Oversupply

spongiemaster

Admirable
Dec 12, 2019
2,276
1,280
7,560
I thought the 10GB 3080 had been discontinued with the release of the 12GB version. Then I saw FE models in stock at Best Buy last week. They sold out quicker than any of the other models. If Nvidia is sitting on so many GA102s, I don't see why they don't pump out more FEs and dump them on Best Buy. There seems to still be demand for a $700 3080.
 
  • Like
Reactions: artk2219

emike09

Distinguished
Jun 8, 2011
156
154
18,760
I'll never understand why Nvidia thought 10GB was enough for the 3080 for 4k gaming. Even 12GB doesn't cut it in some titles. Upgraded to the 3090 and I often see utilization above 14GB; I've seen FS 2020 go upwards of 18GB in some areas. On my 3080 10GB, I often saw fps drops when turning in texture-heavy games, which was annoying, with GPU-Z reporting around 9.8GB utilization while CPU and GPU core utilization weren't maxed out.
The 1080 Ti had 11GB. Same with the 2080 Ti. You would expect the 3080 to at least match the previous-gen Ti models, not go lower, especially as 4k and high-res textures become more common.
 

Inthrutheoutdoor

Reputable
BANNED
Feb 17, 2019
254
68
4,790
HAHAHAHA.. ROTFLMAO....

anyone who thinks that nGreedia, or any other mfgr, will be willing to start selling GPUs at pre-pandemic/pre-scalper prices again, I have an extremely sunny & warm planet in the Dagobah system that I can sell you for cheap, it even has a bunch of beachfront areas too... :D
 
  • Like
Reactions: artk2219

MrStillwater

Prominent
Jun 4, 2022
7
7
515
I'll never understand why Nvidia thought 10GB was enough for the 3080 for 4k gaming. Even 12GB doesn't cut it in some titles. Upgraded to the 3090 and I often see utilization above 14GB; I've seen FS 2020 go upwards of 18GB in some areas.

It's been shown a number of times that memory utilization doesn't correlate to the amount of memory actually needed, and may well not be having any impact on performance. Some games and other apps will simply put as much as possible into all available memory, regardless of whether they really need it.
 

Phaaze88

Titan
Ambassador
Dude...


I'll never understand why Nvidia thought 10GB was enough for the 3080 for 4k gaming.
What MrStillwater posted, but for those who don't know that, it's so they could up-sell them on the higher margin 3080Ti... but then all hell broke loose with product segmentation and pricing.

Besides, GDDR6X should be faster and more efficient than the GDDR5X and GDDR6 on the two GPUs you mentioned.
 
Last edited:
  • Like
Reactions: artk2219

thisisaname

Distinguished
Feb 6, 2009
794
436
19,260
An oversupply of cards, so let's make some more?
Sorry Nvidia, that ship has sailed on this old gen; you should not have been so greedy.
Unless Nvidia sells this stuff at REDUCED prices, I don't imagine tons of people running out to buy one.
I'll just wait patiently for the next gen products to come out as my 1080ti still pushes 2k just fine.

Yes, they have hyped up the next generation too much for lots of people to buy this generation.
 
  • Like
Reactions: artk2219

LolaGT

Reputable
Oct 31, 2020
276
248
5,090
I'll never understand why people thought a straight 3080 was a 4k card; it never was.
The 3090 was the only one in the original lineup that was just good enough for 4k ultra, and look how much of a beast it had to be. Even that monster is going to be brought to its knees without any trouble by the next series of AAA titles coming up.

I'll never understand why Nvidia thought 10GB was enough for the 3080 for 4k gaming.
 
  • Like
Reactions: artk2219

spongiemaster

Admirable
Dec 12, 2019
2,276
1,280
7,560
I'll never understand why Nvidia thought 10GB was enough for the 3080 for 4k gaming. Even 12GB doesn't cut it in some titles. Upgraded to the 3090 and I often see utilization above 14GB; I've seen FS 2020 go upwards of 18GB in some areas. On my 3080 10GB, I often saw fps drops when turning in texture-heavy games, which was annoying, with GPU-Z reporting around 9.8GB utilization while CPU and GPU core utilization weren't maxed out.
The 1080 Ti had 11GB. Same with the 2080 Ti. You would expect the 3080 to at least match the previous-gen Ti models, not go lower, especially as 4k and high-res textures become more common.
It's to create a larger performance gap between the 3080 and 3090. Memory bandwidth is determined by VRAM chip count, up to 12 chips. The 3080 has 10x1GB chips, the 3090 has 12x2GB. The performance gap between them wasn't very large; if the 3080 had 11GB, the gap would have been even smaller. There was a long-rumored 20GB 3080 (10x2GB chips, same performance as the 3080) that never happened, and instead we got a 12GB 3080 Ti wedged between the two of them and a 3090 Ti slightly above, and we ended up with four GPUs way too close to each other in performance.
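(A rough back-of-the-envelope sketch of the bus-width arithmetic above; the 19/19.5 Gbps data rates are the commonly cited GDDR6X figures, not something quoted in this thread.)

```python
# Rough sketch: GDDR6X bandwidth scales with chip count (bus width),
# since each chip provides a 32-bit channel.
def bandwidth_gb_s(num_chips, data_rate_gbps):
    bus_width_bits = num_chips * 32              # one 32-bit channel per chip
    return bus_width_bits * data_rate_gbps / 8   # bits -> bytes

print(bandwidth_gb_s(10, 19.0))   # ~760 GB/s (3080: 10 chips, 320-bit)
print(bandwidth_gb_s(12, 19.5))   # ~936 GB/s (3090: 12 chips, 384-bit)
```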
 
  • Like
Reactions: artk2219

escksu

Reputable
BANNED
Aug 8, 2019
878
354
5,260
I'll never understand why Nvidia thought 10GB was enough for the 3080 for 4k gaming. Even 12GB doesn't cut it in some titles.

Nvidia (AMD as well) doesn't have much of a choice. It's due to the way the industry works.

RAM chips are only offered in densities of 32-bit x 1GB or 2GB. There is no 1.5GB variant on the market.

So, for the 3080, which has a 320-bit memory bus, it's either 10 x 1GB (10GB total) or 10 x 2GB (20GB total), nothing in between. The 3080 Ti/3090 have a 384-bit bus, so they use 12 chips.

AMD managed to get 16GB for the 6800/6900 because their memory bus is 256-bit, so they use 8 x 2GB chips to get 16GB.
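(For illustration only: a quick enumeration of the capacities reachable under the 32-bit-per-chip, 1GB-or-2GB-density constraint described above. The bus widths are the well-known ones for these cards.)

```python
# Capacities possible for a given bus width when GDDR6/6X chips
# come only in 1GB or 2GB densities, one chip per 32-bit channel.
def capacity_options(bus_width_bits):
    chips = bus_width_bits // 32
    return {f"{chips} x {d}GB": chips * d for d in (1, 2)}

print(capacity_options(320))  # {'10 x 1GB': 10, '10 x 2GB': 20} -> 10GB or 20GB (3080)
print(capacity_options(384))  # {'12 x 1GB': 12, '12 x 2GB': 24} -> 12GB or 24GB
print(capacity_options(256))  # {'8 x 1GB': 8, '8 x 2GB': 16}    -> 8GB or 16GB (RX 6800/6900)
```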
 
  • Like
Reactions: artk2219

logainofhades

Titan
Moderator
I'll never understand why people thought a straight 3080 was a 4k card; it never was.
The 3090 was the only one in the original lineup that was just good enough for 4k ultra, and look how much of a beast it had to be. Even that monster is going to be brought to its knees without any trouble by the next series of AAA titles coming up.

The 3090 isn't all that much faster than a 3080.

[Attached image: 4K.png (relative performance chart at 4K)]
 
  • Like
Reactions: artk2219

escksu

Reputable
BANNED
Aug 8, 2019
878
354
5,260
It's to create a larger performance gap between the 3080 and 3090. Memory bandwidth is determined by VRAM chip count, up to 12 chips. The 3080 has 10x1GB chips, the 3090 has 12x2GB. The performance gap between them wasn't very large; if the 3080 had 11GB, the gap would have been even smaller. There was a long-rumored 20GB 3080 (10x2GB chips, same performance as the 3080) that never happened, and instead we got a 12GB 3080 Ti wedged between the two of them and a 3090 Ti slightly above, and we ended up with four GPUs way too close to each other in performance.

Yes, the 3090 doesn't have the ability to go beyond 24GB because the largest GDDR6X chip available is 2GB.
 
  • Like
Reactions: artk2219
It's been shown a number of times that memory utilization doesn't correlate to the amount of memory actually needed, and may well not be having any impact on performance...
Can you point to these instances? Note that I'm considering 1440p+ with all graphics settings at their maximum, which is usually the target audience for top-end GPUs. If we take into account lowering texture and other VRAM-heavy settings, then you are correct.

It's been shown that 10GB of VRAM is NOT enough in certain AAA titles at high/max settings (DOOM Eternal and Microsoft Flight Simulator, to name a couple).
 
Last edited:
  • Like
Reactions: artk2219

AndrewJacksonZA

Distinguished
Aug 11, 2011
576
93
19,060
I'd be quite happy to acquire a 3080 at a good price....it should hold me over (GPU-wise, at least!) for another 5 years.... (My poor GTX 1060 has already more than provided in gaming fun and longevity compared to what it cost 5+ years back, at only $250 in 2017.)
Same, except I'm still rocking my RX 470 4GB. I've run GTA5 ragged and enjoyed every second. My next rig I'm purpose-building to play RDR2 at 4K, EVERYTHING set to ultra, hopefully on a 144Hz panel. It seems that an i5-12500 + a 3080 12GB should do the job. :-)
 
  • Like
Reactions: artk2219

edzieba

Distinguished
Jul 13, 2016
430
421
19,060
Can you point to these instances? Note that I'm considering 1440p+ with all graphics settings at their maximum, which is usually the target audience for top-end GPUs. If we take into account lowering texture and other VRAM-heavy settings, then you are correct.
VRAM "usage" is dominated by opportunistic texture caching. Any engine with a competent developer will load the textures (and geometry and other data the GPU now handles) needed to render the current scene into VRAM, and then immediately afterwards start loading every texture for that level into VRAM until VRAM is full. There is zero penalty for doing so - the GPU can overwrite cached data with in-use data as rapidly as it can overwrite 'empty' VRAM - and it reduces the number of times the GPU needs to request data over the slow PCIe link. Any empty VRAM is wasted VRAM. But that VRAM 'in use' may never actually make it to your screen, e.g. if you never visit half of a level that has been opportunistically cached. This is why you see VRAM 'usage' barely change as resolution increases or decreases: VRAM will continue to be filled until either the available VRAM is full, or the engine runs out of data for that level/zone/block (depending on the engine and how it partitions scenes) to load into it.

An apples-to-apples test is going to be pretty much impossible without at the very least manufacturer assistance, as any card with multiple VRAM SKUs is also going to have multiple bus sizes (e.g. 3080 10GB vs. 12GB) which will skew results more than available capacity. Without the ability to add an artificial cap on VRAM utilisation - and doing so spread correctly across dies to avoid incurring an unnecessary bandwidth limit, so something that has to occur at the driver or VBIOS level - there is no way to test the same model GPU with the same bandwidth with different VRAM capacities.
It's been shown that 10GB of VRAM is NOT enough in certain AAA titles at high/max settings (DOOM Eternal and Microsoft Flight Simulator, to name a couple).
Max settings are basically worthless outside of YouTube videos bragging about your RGB PC you paid someone else to build but never actually use; they're just where the developer has taken every variable they have available and set it to the highest value regardless of function. Drop down one notch to the settings that have actually been optimised by the developer and are visually identical, and suddenly performance improves dramatically for the same output image.
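(A toy sketch, with made-up sizes, of the opportunistic-caching behaviour described above: reported "usage" climbs to whatever VRAM is available even though the per-frame working set is much smaller. Nothing here models any real engine.)

```python
# Toy model (hypothetical numbers): an engine caches level assets
# opportunistically, so reported VRAM "usage" tracks card capacity,
# not the amount of data actually needed to render each frame.
def reported_usage_gb(vram_gb, working_set_gb, level_assets_gb):
    spare = max(vram_gb - working_set_gb, 0)
    cache_fill = min(spare, level_assets_gb - working_set_gb)
    return working_set_gb + cache_fill

working_set, level_assets = 6.0, 30.0   # made-up sizes in GB
for vram in (8, 10, 12, 24):
    print(f"{vram:>2} GB card -> ~{reported_usage_gb(vram, working_set, level_assets):.1f} GB 'in use'")
# Every card reports (nearly) full VRAM, yet only ~6 GB is in the per-frame working set.
```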
 
  • Like
Reactions: artk2219
VRAM "usage" is dominated by opportunistic texture caching. Any engine with a competent developer will load the textures (and geometry and other data the GPU now handles) needed to render the current scene into VRAM, and then immediately afterwards start loading every texture for that level into VRAM until VRAM is full. There is zero penalty for doing so - the GPU can overwrite cached data with in-use data as rapidly as it can overwrite 'empty' VRAM - and it reduces the number of times the GPU needs to request data over the slow PCIe link. Any empty VRAM is wasted VRAM. But that VRAM 'in use' may never actually make it to your screen, e.g. if you never visit half of a level that has been opportunistically cached. This is why you see VRAM 'usage' barely change as resolution increases or decreases: VRAM will continue to be filled until either the available VRAM is full, or the engine runs out of data for that level/zone/block (depending on the engine and how it partitions scenes) to load into it.

An apples-to-apples test is going to be pretty much impossible without at the very least manufacturer assistance, as any card with multiple VRAM SKUs is also going to have multiple bus sizes (e.g. 3080 10GB vs. 12GB) which will skew results more than available capacity. Without the ability to add an artificial cap on VRAM utilisation - and doing so spread correctly across dies to avoid incurring an unnecessary bandwidth limit, so something that has to occur at the driver or VBIOS level - there is no way to test the same model GPU with the same bandwidth with different VRAM capacities.

Max settings are basically worthless outside of YouTube videos bragging about your RGB PC you paid someone else to build but never actually use; they're just where the developer has taken every variable they have available and set it to the highest value regardless of function. Drop down one notch to the settings that have actually been optimised by the developer and are visually identical, and suddenly performance improves dramatically for the same output image.
Yes, I already have a basic understanding of how programs and games use VRAM. Your explanation is a bit simplistic, though, as it leaves out functions like texture compression and intelligently favoring the offloading (to RAM) or compression of textures you are not currently looking at vs. the ones you are. What I wanted was actual demonstrations that can be reviewed (like a YouTube video or whitepaper).

I do understand the point you're trying to make, though. You are assuming that, when the card runs out of VRAM, it will start replacing textures not in use with in-use ones. This is all well and good until all the textures, meshes, shaders, framebuffers, and other data in VRAM are considered in use; at that point we're back to the same issue of not enough VRAM. Also, the whole allocated vs. used argument is easily overcome with the latest MSI Afterburner or the Special K DirectX 12 hook.

Your argument about VRAM limitation not being readily testable is false.
Please review this Reddit thread -
View: https://www.reddit.com/r/nvidia/comments/itx0pm/doom_eternal_confirmed_vram_limitation_with_8gb/

and this YouTube video (starting at the 6:30 mark) -
View: https://www.youtube.com/watch?v=k7FlXu9dAMU


Yes, there are other architectural differences between the cards in question (other than VRAM), but that baseline architectural performance difference is already known and factored in. When we change the texture size, causing both cards to attempt to use more VRAM than one of the cards has, the results indicate a performance deficit due to that one setting change.

Ahhh, the ol' 'max settings are worthless' line. This is what's called a subjective statement.
I purchased my 6900 XT for a couple of reasons. First - Longevity. I will have a great performing GPU 4+ years from now. Second - Performance today. Since this will be a great card years from now, it stands to reason that it is a superb card RIGHT NOW. This allows me to run today's AAA games at max settings. Which I very much enjoy doing.

Edit - Point of correction from an earlier post. DOOM Eternal shows the VRAM limitation of an 8GB VRAM card, not the 10GB RTX 3080.
 
Last edited:

edzieba

Distinguished
Jul 13, 2016
430
421
19,060
Yes, there are other architectural differences between the cards in question (other than VRAM), but that baseline architectural performance difference is already known and factored in.
That is a test of two cards on different architectures (Turing vs. Ampere), with different VRAM bandwidths (496GB/s vs. 912GB/s), different numbers of shader cores (2944 vs. 8960), etc. Far more factors to control for than just VRAM capacity, none of which were controlled for in that test.
 
That is a test of two cards on different architectures (Turing vs. Ampere), with different VRAM bandwidths (496GB/s vs. 912GB/s), different numbers of shader cores (2944 vs. 8960), etc. Far more factors to control for than just VRAM capacity, none of which were controlled for in that test.
Are you implying that the results, after changing JUST the texture size, and the shown subsequent VRAM usage, are due to these other architectural differences and NOT the 2080 running out of VRAM?
 

edzieba

Distinguished
Jul 13, 2016
430
421
19,060
Are you implying that the results, after changing JUST the texture size, and the shown subsequent VRAM usage, are due to these other architectural differences and NOT the 2080 running out of VRAM?
Architecture, cores, and VRAM bandwidth. Textures do not magic their way from VRAM to the display output; the GPU sitting between VRAM and the RAMDACs actually does something.
 
Architecture, cores, and VRAM bandwidth. Textures do not magic their way from VRAM to the display output; the GPU sitting between VRAM and the RAMDACs actually does something.
Right, and we know the effect of the architecture, core, and bandwidth differences on performance from the tests with the textures turned down a notch. The reviews show that difference. We'll just have to agree to disagree; I'm going to side with the half-dozen techs, reviewers, and hardware enthusiasts on this one, though.

If you have any evidence pointing to the performance difference being something other than VRAM limitations, please post it.
 
