News RTX 5080 Super rumored with 24GB of memory — Same 10,752 CUDA cores as the vanilla variant with a 400W+ TGP

Starting to wonder if, for gaming, the speed increases just aren't as important as the amount of VRAM at this point. There has to be a crossover point where bandwidth matters less than capacity, based on the scenes/areas loaded. Sure, you can stream assets in faster, but I'm starting to feel like if they went for more RAM on an older memory generation we would see better results. Like, would a 24GB GDDR6 setup on a 5080 beat a 16GB GDDR7 setup? From what I can tell, the bandwidth difference between a 5080 Super and a 4090 is only 16 GB/s (1024 vs. 1008). Could they do it cheaper but better by not jumping memory generations so fast?
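For anyone checking the math, here's the rough calc I'm using (the 5080 Super figures are the rumored 32 Gbps GDDR7 on a 256-bit bus; the 4090 is 21 Gbps GDDR6X on 384-bit):

# Rough peak-bandwidth math; the 5080 Super numbers are rumored, not confirmed.
def bandwidth_gb_s(bus_width_bits, data_rate_gbps):
    return bus_width_bits / 8 * data_rate_gbps  # GB/s

rtx_5080_super = bandwidth_gb_s(256, 32)  # 1024 GB/s (rumored)
rtx_4090 = bandwidth_gb_s(384, 21)        # 1008 GB/s
print(rtx_5080_super - rtx_4090)          # 16 GB/s, the gap mentioned above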

Hard to find anything more than guesses on it, but I feel like if they stuck with GDDR6 for longer we would get more out of it. I think the pace of new memory standards versus their actual gains is out of whack right now.
 
but I feel like if they stuck with GDDR6 for longer we would get more out of it.
Not to Nvidia.
They are "AI everything" now.
AI LOVES speedy memory (that's where it lives).
So they will push faster RAM any time they can.

And I'm not sure about the 80 tier, but the improvement of the 60/70 tier over the 40 series is mostly from the faster memory.
 
The 5080 super will beat a 4090 in games with DLSS4.

4090 prices have already started to fall and should fall further with this release.
 
The 5080 super will beat a 4090 in games with DLSS4.

4090 prices have already started to fall and should fall further with this release.
I don't know that I would call that "beating" it when they'll still need to use MFG to do it. If you take frame gen out of the equation (which I absolutely do in real life whenever I can, since I prefer not to use it due to its well-known side effects), they can't feasibly clock a 5080 high enough to bridge that gap, regardless of VRAM config.

I agree with @hotaru251 that a 5080 should have come with 24 GB to start, and with @Elusive Ruse that without a core count increase this should not be called a Super card. It should be treated like the 5060 Ti series and just be an RTX 5080 24GB.
 
The 5080 super will beat a 4090 in games with DLSS4.

4090 prices have already started to fall and should fall further with this release.
Using 3-4 times the number of fake frames isn't "beating" a 4090. The only true beating is at the exact same settings. With equal settings and no CPU bottleneck, a 4090 easily beats a 5080 Super. The Super will be maybe 5% faster than a 5080, which is 8-10% faster than a 4080 Super, while the 4090 is known to be about 30% faster than the 4080 Super.
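Quick sanity check chaining those same rough percentages (ballpark figures, not measured benchmarks):

# Relative raster performance, using the rough percentages above (not benchmarks)
r_4080_super = 1.00
r_5080 = r_4080_super * 1.09   # 8-10% faster than a 4080 Super
r_5080_super = r_5080 * 1.05   # ~5% faster than a 5080
r_4090 = r_4080_super * 1.30   # ~30% faster than a 4080 Super

print(r_5080_super)            # ~1.14
print(r_4090 / r_5080_super)   # ~1.14 -> the 4090 stays roughly 14% ahead at equal settings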
 
I wouldn’t call it a Super variant if the CUDA cores remain the same and only the VRAM amount changes.
The GB203 was already maxed out on the vanilla 5080, so there was no possibility of adding more CUDA cores, but the memory bandwidth has increased too, not just the capacity. [Edit] And apparently it will also be factory-overclocked to the max. Anyway, I don't really care what they call it; this looks like my next card, as it gives me a decent upgrade from my current one and, unlike the 5090, I might actually be able to justify the cost.
 
So the amount of VRAM the 80 series should have had from the start :|
This is what you do when you have a near monopoly on a market. As Steve Burke from GN recently said, Nvidia should just consider leaving the gaming business and focusing solely on AI. These guys just keep shooting themselves in the foot. The problem is that no one else has bothered to capitalize on Nvidia's screwups. Like everything else, eventually it will catch up with them.

https://www.youtube.com/watch?v=AiekGcwaIho
 
Starting to wonder if, for gaming, the speed increases just aren't as important as the amount of VRAM at this point.
So the amount of VRAM the 80 series should have had from the start :|
There are some games that can use more than 16 GB, but not too many yet. 24 GB should be well future-proofed, and it's the minimum amount people should be getting when they spend $1,000 (real price: $1,500) on a GPU.

The 5070 Super 18 GB (previously leaked) is very interesting. I checked a June 2024 Hardware Unboxed video: Cyberpunk 2077 (4K) and Avatar: Frontiers of Pandora (1440p) could be made to use more than 16 GB, but less than 18 GB. So that extra 2 GB over 16 GB could be just enough to prolong the life of that card by years, simply because most games target 16 GB maximum (based on sales volume and the current-gen consoles having 16 GB of total memory), and 12.5% more is a decent cushion for games that sneak over that barrier.

At the low end, giving the 128-bit cards 12 GB is going to be crucial. Even if Nvidia does not launch a 5060 Super 12 GB this generation, AMD and possibly Intel can start using GDDR7 next generation, releasing their own 128-bit 12 GB cards.
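The capacities fall out of simple module math, assuming the usual one GDDR7 module per 32-bit channel and no clamshell:

# VRAM capacity = (bus width / 32) channels x GB per module (no clamshell assumed)
def vram_gb(bus_width_bits, gb_per_module):
    return (bus_width_bits // 32) * gb_per_module

print(vram_gb(256, 2))  # 16 GB -> today's 5080 (2 GB modules)
print(vram_gb(256, 3))  # 24 GB -> rumored 5080 Super (3 GB modules)
print(vram_gb(192, 3))  # 18 GB -> rumored 5070 Super
print(vram_gb(128, 3))  # 12 GB -> a 128-bit card with 3 GB modules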
 
I wouldn’t call it a Super variant if the CUDA cores remain the same and only the VRAM amount changes. We already have 5060Ti with different VRAM configurations.
4080 Super had more cores, but that didn't mean much of anything because it still had the same number of ROPs and power limit. That meant the only meaningful difference between the 4080 and 4080 Super was the small increase in memory bandwidth.

If these rumors are accurate about the "5080 Super" it would likely be a bigger performance increase due to additional power budget on top of memory bandwidth.

Basically that was just a long winded way of me saying Super already doesn't mean anything.
 
Nvidia is going to continue to release slop after slop as long as consumers reward Nvidia for selling manufactured e-waste.
According to the Steam hardware survey, that's 85-90% of the gaming GPU market.
Nvidia only responds when they feel their market share is threatened.
Unfortunately, it takes a while for consumer behavior to change. If you want any indication, people still believe AMD and Intel GPU drivers are crap while Nvidia's are flawless. They are not.
 
4080 Super had more cores, but that didn't mean much of anything because it still had the same number of ROPs and power limit. That meant the only meaningful difference between the 4080 and 4080 Super was the small increase in memory bandwidth.

If these rumors are accurate about the "5080 Super" it would likely be a bigger performance increase due to additional power budget on top of memory bandwidth.

Basically that was just a long winded way of me saying Super already doesn't mean anything.
More cores at the same TDP is better than more TDP at the same core count, 10 times out of 10, in a GPU.
 
The GB203 was already maxed out on the vanilla 5080, so there was no possibility of adding more CUDA cores, but the memory bandwidth has increased too, not just the capacity. [Edit] And apparently it will also be factory-overclocked to the max. Anyway, I don't really care what they call it; this looks like my next card, as it gives me a decent upgrade from my current one and, unlike the 5090, I might actually be able to justify the cost.
Looks like you have made up your mind, but your endorsement doesn't make it worthy of the Super label. If it at least matches the 4090 mano a mano, then I will gladly eat crow.
 
4080 Super had more cores, but that didn't mean much of anything because it still had the same number of ROPs and power limit. That meant the only meaningful difference between the 4080 and 4080 Super was the small increase in memory bandwidth.

If these rumors are accurate about the "5080 Super" it would likely be a bigger performance increase due to additional power budget on top of memory bandwidth.

Basically that was just a long winded way of me saying Super already doesn't mean anything.
The 4080 Super took flak as well, but at least it came with a good discount on top of the marginal performance improvement. The vanilla 5080 should have beaten or at least matched the 4090, but it didn't, which was a terse way for Nvidia to tell us the 80 class was dead.
 
There are some games that can use more than 16 GB, but not too many yet. 24 GB should be well future-proofed, and it's the minimum amount people should be getting when they spend $1,000 (real price: $1,500) on a GPU.
Agreed that future-proofing is the bigger issue, and that's exactly why they don't do it, but I really wonder, at a brass-tacks level, which would perform better if a game were optimized for it.

I.e., if you made a 5080 with GDDR6 but put 25% more VRAM in at the same cost, and a game optimized for that, would it outperform a 5080 with GDDR7 and 5% extra bandwidth? Seems like it would give a greater performance boost, especially at 4K with high-res textures, but it's hard to say without an apples-to-apples comparison.

Not to mention, if they just pushed lower-cost GDDR6 for an extra year instead of going to GDDR7, you would get much greater savings. Things like that are curious to me and, if you ask me, very hard to answer.
 
I.e., if you made a 5080 with GDDR6 but put 25% more VRAM in at the same cost, and a game optimized for that, would it outperform a 5080 with GDDR7 and 5% extra bandwidth? Seems like it would give a greater performance boost, especially at 4K with high-res textures, but it's hard to say without an apples-to-apples comparison.
I don't think there has been much evidence to show that GDDR7 benefited RTX 5000 cards. Partly because some of the dies didn't get many more cores, since they remained on the same node as Lovelace.

As far as GDDR6 capacities go, they can use clamshelling to double capacity, or they could have redesigned the dies to use different bus widths; you don't get to add 25% without doing one of those. GDDR7 at least unlocks the option to add 50% more later, as with the 5080 Super, which lets them keep the lower bus widths they obviously want (these are more power efficient too).
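To put rough numbers on the capacity options for a 256-bit card like the 5080 (illustrative only):

# Capacity options on a fixed 256-bit bus (8 x 32-bit channels), illustrative
channels = 256 // 32
print(channels * 2)      # 16 GB: one 2 GB module per channel (vanilla 5080)
print(channels * 2 * 2)  # 32 GB: clamshell, two 2 GB modules per channel (doubling)
print(channels * 3)      # 24 GB: one 3 GB GDDR7 module per channel (+50%, the 5080 Super route)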

It's said that GDDR7 is very expensive, but who knows what deal Nvidia made behind the scenes? They may have expected the memory to be ready sooner. What's done is done, and everyone will get to use 2-3 GB GDDR7 next go around.

Oh, I forgot that there is the rumored RTX 5050 with GDDR6 (or apparently GDDR7 in laptops). If Nvidia wants to make that, or an even cheaper 5040, nothing's stopping them. The die size is unknown, but if GB206 is 181mm^2, the 5050 die could be like 100-110mm^2 (rumored to have 2,560 CUDA cores, 56% of GB206's 4,608).
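That 100-110 mm^2 figure is just proportional scaling from the core counts; the real die would likely land a bit higher, since things like memory controllers and display logic don't shrink with core count:

# Naive die-area scaling by CUDA core count (a rough guess, not a measurement)
gb206_area_mm2 = 181
gb206_cores = 4608
rtx_5050_cores = 2560                 # rumored
ratio = rtx_5050_cores / gb206_cores  # ~0.56
print(gb206_area_mm2 * ratio)         # ~101 mm^2, hence the 100-110 mm^2 ballpark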
 
From the forum comments, it seems like many see this as a half-measure from Nvidia.

I just wanted to say, as a graphic designer / multimedia developer, a 5080 with more RAM is exactly what I need.

12GB is honestly junk for 3D rendering or VFX work. It might as well be 8GB. I can't afford a render farm on a regular basis or (at least right now) a 5090.

3D is a glutton for RAM. Speed is good... very good. But it's meaningless if you top out the RAM. That's the end of the render, no matter how fast the card is. It just quits.

I would prefer 32GB, but if I can get at least 24GB with a reasonably powerful GPU at a reasonable price, I'm thrilled.

Especially if the price is low enough to buy two for the price of a single 5090 and make it a dual GPU system, since Octane Render, 3ds Max, and Blender can all use dual GPUs for rendering.

Hopefully at some point, Adobe will pull their heads out of their butts and fully implement GPU rendering. In the meantime, having one reasonably powerful GPU will have to do.