News: Nvidia's RTX 5070 Ti and RTX 5070 allegedly sport 16GB and 12GB of GDDR7 memory, respectively — up to 8960 CUDA cores, 256-bit memory bus, and 300W...

It reminds me of the US auto industry, where rich people's cars kept getting more expensive, and poor people's cars kept getting worse and more expensive, with small turbocharged engines, CVT transmissions no one liked, and stop-start systems from environmental regulations everyone hates. In G-d I trust.
 
I know it’s Christmas and all, but maybe the hardware “journalists” should stop being shills and immediately <Mod Edit> on NVIDIA for gimping these cards with anemic amounts of VRAM. I’m keeping my 3080 for another 2 years.
 
My RTX 2060 6GB, which is turning 6 years old in March 2025, is so long in the tooth now that I have to replace it with something. I'll probably get an RTX 5070, but I suspect it won't be the upgrade it should be, given how much time has passed.
 
I have a feeling that if the leaked VRAM capacities are all correct, we're going to see a midcycle refresh on everything but the 5060 Ti (rumored to have 16GB) using 24Gb GDDR7 modules (8GB becomes 12GB, 12GB becomes 18GB, and 16GB becomes 24GB). I don't really care what sort of compression Nvidia pulls off; there are no guarantees when it comes to the way games are developed. Having to turn down options your GPU is capable of running because it isn't paired with enough VRAM is just a place nobody should be in.

The mediocre releases of the last couple of years make it really easy not to upgrade, but I do feel bad for anyone putting together something new or who needs to.
 

Those amounts are correct because GDDR7 is still primarily shipped in 16Gb (2GB) modules. Sometime next year Samsung is supposed to have its 24Gb (3GB) modules available, so expect a Super or Ti refresh using them at a price premium.

Having said that, there is no game that you'll be playing at enjoyable framerates that is going to struggle with VRAM. It's like demanding all computers come with more than 32GB of RAM. (32GB is actually overkill, but 16GB is starting to be too little, and there isn't really a middle ground here.)
 
Here we go again with your lies regarding VRAM capacity. I've asked you to stop before, and here we are again: please just stop.

Rather than repeat myself again here:
https://forums.tomshardware.com/thr...pu-champion-has-arrived.3864870/post-23393536
https://forums.tomshardware.com/thr...pu-champion-has-arrived.3864870/post-23393123

Oh and I'll add another one for good measure:
(benchmark screenshots)

https://www.computerbase.de/artikel...der-grosse-kreis-benchmark-test.90500/seite-2

edit: Graphics-intensive games today tend to follow console cycles, so pretty much everything designed in the PS4/Xbox One era was fine with 8GB of VRAM, and current titles are typically fine up to 12GB, but as we've seen with the increasing limitations of 8GB cards, this is unlikely to last.
 
It's like demanding all computers come with more than 32GB of RAM. (32GB is actually overkill, but 16GB is starting to be too little, and there isn't really a middle ground here.)
There are 24 GB DDR5 DIMMs, of course.

edit: Graphics-intensive games today tend to follow console cycles, so pretty much everything designed in the PS4/Xbox One era was fine with 8GB of VRAM, and current titles are typically fine up to 12GB, but as we've seen with the increasing limitations of 8GB cards, this is unlikely to last.
One thing about consoles is that even when you use a PS5 with a 4K TV, not all games will render natively at 4K. A lot will just render at 1080p and then upscale, which is why the PS5 Pro put so much effort into improving upscaling quality.

My point is that the PS5 has 16GB of memory shared between the GPU and CPU, but the memory used by the GPU might still only be holding textures and assets sized for 1080p rendering. So however much you figure they devote to the GPU might be rather lower than what a PC version of the same game would use at 1440p or 4K.
 
How come the 7900 XTX does so well here with ray tracing? I thought AMD's RT was much worse than Nvidia's.
AMD does very well in DOOM Eternal, which uses the same base graphics engine, and the ray tracing being used isn't significant. The game itself mandates ray tracing as a baseline requirement to run, so the ray tracing being referred to is the standard setting, as opposed to turning it up or using path tracing.
 
One should think a good long while before considering a 12GB card. We can all see (at least most of us) that we're at the crossroads where it just isn't going to cut it unless you are a casual gamer. The world is switching to 1440p PC displays as the standard, with displays being incredibly cheap now, and even 4K TVs are the norm in all the big box stores at what I'd call dirt-cheap prices.
While I have no love for big corporations and am not a fanboy of any of them, team greedia is setting the bar on how to squeeze its cult members, and as long as they get in line to buy, why would it need to change its business tactics?
 

Ehh, 12GB is more than sufficient for 1440p, even with silly stuff like 4K textures. No scene is going to consume anywhere close to that amount of memory. At 2160p with ray tracing it gets a bit dicey, and pretty much anything involving crypto mining or AI models demands massive memory. In regular gaming, you're gonna run out of rasterization horsepower before you hit that memory limit, at least with existing generations. The 50 series could very well be a big enough jump in compute horsepower that the lower-end cards' compute outruns their capacity.

As for the cards themselves, you simply cannot just solder on another 16Gb chip; it won't work. A 192-bit memory bus means six chips at full bandwidth or twelve chips at half bandwidth. Current capacities are 8Gb (1GB) and 16Gb (2GB), and that's it. With the 40 series, and it looks like the 50 series too, Nvidia essentially downshifted each card to a lower-tier memory bus. Before, the 3070 had 256-bit (8 chips); now that tier is 192-bit (6 chips). The 3080's 320-bit got turned into 256-bit, and the 3080 Ti's 384-bit got turned into the 4090. Next year Samsung is supposed to come out with 24Gb (3GB) GDDR7 modules, so we'll likely see a memory bump in a mid-cycle refresh. AMD uses a similar strategy; if anything, Nvidia adjusted their models down to match AMD's.
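To make the bus-width math concrete, here's a quick back-of-the-envelope sketch (the bus widths and module sizes are just the ones mentioned above, nothing official): each GDDR chip hangs off a 32-bit channel, so chip count is bus width divided by 32, and capacity is chip count times module size.

Code:
// Back-of-the-envelope: bus width -> chip count -> capacity.
// Assumes one GDDR7 chip per 32-bit channel (no clamshell).
#include <cstdio>

int main() {
    const int bus_widths[] = {128, 192, 256, 384};   // bits
    const int module_gb[]  = {2, 3};                 // 16Gb and 24Gb modules

    for (int bus : bus_widths) {
        int chips = bus / 32;                        // one chip per 32-bit channel
        for (int gb : module_gb) {
            printf("%3d-bit bus: %2d chips x %dGB = %2dGB\n",
                   bus, chips, gb, chips * gb);
        }
    }
    return 0;
}

That spits out exactly the refresh numbers being floated earlier in the thread: a 192-bit card goes from 12GB to 18GB and a 256-bit card from 16GB to 24GB once 24Gb modules ship.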
 
As for the cards themselves, you simply cannot just solder on another 16Gb chip; it won't work. A 192-bit memory bus means six chips at full bandwidth or twelve chips at half bandwidth.
There's a so-called "clamshell" configuration, where chips are paired on both sides of the PCB. This is specifically supported by the GDDR standards and allows memory capacity to double at full bandwidth. It's how workstation graphics cards are able to offer double the capacity.
You can even see the solder pads for this second set of chips on most graphics cards based on the reference design.

I have told you about this previously.
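Extending the little capacity sketch from the earlier post: clamshell puts two chips on each 32-bit channel (each chip running half-width), so capacity doubles while the bus width, and hence peak bandwidth, stays the same. The 4060 Ti 8GB/16GB pair mentioned later in the thread is the obvious retail example. A minimal illustration of the capacity math only, not the signal routing:

Code:
// Clamshell: two chips share each 32-bit channel (each chip runs half-width),
// so capacity doubles while bus width and peak bandwidth stay the same.
#include <cstdio>

int main() {
    const int bus = 128;        // e.g. a 128-bit card such as the 4060 Ti
    const int module_gb = 2;    // 16Gb modules
    int chips = bus / 32;
    printf("normal:    %dGB\n", chips * module_gb);       // 8GB
    printf("clamshell: %dGB\n", 2 * chips * module_gb);   // 16GB
    return 0;
}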
 

allegedly sport 16GB and 12GB of GDDR7 memory

This is not news, it's just noise! Please, for the love of all things good and truthful, just stop "speculating" already; it's pure noise and serves no purpose aside from wasting drive space and time! I do not care what you are thinking, wondering, or pondering; I only care about provable facts!

Supposed leaks.... It's all B.S.!!!
I want the facts and only the facts, anything less is just a waste of my time!
 
Nothing you've displayed has refuted my statement.
Oh really? So you think that a 6700 XT is faster than a 3080 rather than the 3080 running out of VRAM?

The 5060 is supposed to be an 8GB card again, and it's a fair bet that it will be faster than the 4060. It will run into the same problems the 3070/3060 Ti/4060 Ti already have. Now, if Nvidia is going to sell it for sub-$250, perhaps that would be a reasonable tradeoff, but the chances of that happening are low.

The 5070 being a 12GB card shouldn't immediately be a problem based on what I've seen. Frame generation and ray tracing increase VRAM usage, and this could be where it runs into an issue first. Chances are it will be at least as fast as a 4070 Ti Super, which means getting less VRAM at a similar or higher level of performance.
 
Supposed leaks.... It's all B.S.!!!
I want the facts and only the facts, anything less is just a waste of my time!
I tend to agree, and leaks aren't something I follow closely. However, when we're this close to the launch announcement, they do tend to be pretty accurate. So I will start to notice what people are saying, and then find I'm rarely much surprised by the actual launch. YMMV.
 
Oh really? So you think that a 6700 XT is faster than a 3080 rather than the 3080 running out of VRAM?

Just game optimizations, 'nuff said. It's a very simple situation: just compare a 4060 Ti 8GB with a 4060 Ti 16GB. The 16GB card is operating in clamshell mode, so it has identical memory bandwidth, and we get the same performance in virtually all titles. Issues only start to appear when settings and resolutions are dialed up to max, and by then you're hitting very unenjoyable framerates.

Another way to do this requires some programming knowledge. You need to compile a CUDA program that will statically allocate VRAM, then keep it on a second screen or in the background to prevent WDDM from evicting it into system RAM. Then you can take a 4090 and test different titles with different amounts of available VRAM.

https://forums.developer.nvidia.com/t/need-a-little-tool-to-adjust-the-vram-size/32857/6

Use the version at the bottom; it keeps banging on the memory to prevent WDDM from evicting it.
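For anyone who doesn't want to dig through that thread, the idea is roughly this: grab a fixed chunk of VRAM and keep touching it so WDDM considers it in use and won't demote it to system RAM. Below is a minimal sketch of that kind of ballast program, not the tool from the link; the size argument, launch configuration, and 4KiB touch stride are just placeholder choices:

Code:
// vram_ballast.cu -- hold N GiB of VRAM and keep touching it so WDDM keeps it resident.
// Build (assuming a CUDA toolkit is installed): nvcc -O2 vram_ballast.cu -o vram_ballast
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void touch(unsigned char *p, size_t n) {
    // Write one byte per 4KiB page so the whole allocation stays "hot".
    size_t i = (size_t)blockIdx.x * blockDim.x + threadIdx.x;
    size_t stride = (size_t)gridDim.x * blockDim.x;
    for (size_t off = i * 4096; off < n; off += stride * 4096)
        p[off] = (unsigned char)off;
}

int main(int argc, char **argv) {
    size_t gib = (argc > 1) ? strtoull(argv[1], nullptr, 10) : 4;  // GiB to reserve
    size_t bytes = gib << 30;

    unsigned char *buf = nullptr;
    if (cudaMalloc((void **)&buf, bytes) != cudaSuccess) {
        fprintf(stderr, "cudaMalloc of %zu GiB failed\n", gib);
        return 1;
    }
    printf("Holding %zu GiB of VRAM; Ctrl+C to release.\n", gib);

    for (;;) {  // keep banging on the memory so it is never evicted
        touch<<<256, 256>>>(buf, bytes);
        cudaDeviceSynchronize();
    }
}

Run it with the amount you want to wall off (say, 8 on a 24GB card to mimic a 16GB card) and then benchmark normally on the primary display.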

Social media article writers don't really know how to do this stuff, so we just get a per-card list of metrics without much explanation of why and how the numbers are what they are. It gets pretty technical from here; if you want, you can dig into how WDDM works and why "running out of VRAM" is a silly notion.

Short answer: you won't experience any performance issues unless the scene you are rendering requires more than total GPU VRAM in the middle of a frame. If that happens, you either render null and keep moving, or wait while the resource is loaded across the PCIe 4.0 bus at 30-32GB/s. That necessitates evicting another resource first, only you were kind of using that other resource, so you'll have to swap it back in very soon and evict something else, and do this non-stop. That creates a stutter effect and is incredibly noticeable, as both the FPS and frame times instantly go down the tube, and it can quickly become unplayable. A good analogy is attempting to open a file in Adobe Premiere that is larger than your total system RAM, forcing Windows to use the page file non-stop; things get very ugly very fast when that happens. It's WDDM, not the game or the GPU driver, that is responsible for keeping the GPU VRAM populated with the resources it needs.
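To put a rough number on why that thrashing is so visible: at the ~30GB/s PCIe 4.0 x16 figure quoted above, even modest amounts of swapped resources blow through a 60 fps frame budget. A quick back-of-the-envelope (the swap sizes are just illustrative):

Code:
// Rough cost of re-streaming evicted resources over PCIe vs. a 60 fps frame budget.
#include <cstdio>

int main() {
    const double pcie_gbs  = 30.0;                  // effective PCIe 4.0 x16 throughput, GB/s
    const double frame_ms  = 1000.0 / 60.0;         // ~16.7 ms per frame at 60 fps
    const double swap_gb[] = {0.25, 0.5, 1.0, 2.0}; // resources shuffled in a bad frame

    for (double gb : swap_gb) {
        double ms = gb / pcie_gbs * 1000.0;
        printf("%.2f GB swapped -> %5.1f ms (%.1fx the 60 fps frame budget)\n",
               gb, ms, ms / frame_ms);
    }
    return 0;
}

Half a gigabyte of traffic is already a whole frame's worth of time, which is exactly the stutter pattern described above.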

You can start reading up on the finer details of WDDM here, but be warned: it's not for the faint of heart.
https://learn.microsoft.com/en-us/w...ndows-vista-display-driver-model-design-guide

Essentially, your argument boils down to suggesting that games need more than 12GB of VRAM to render a single frame because the PCIe bus isn't fast enough.
 
Essentially, your argument boils down to suggesting that games need more than 12GB of VRAM to render a single frame because the PCIe bus isn't fast enough.
Let me make it real simple for you since you still don't even remotely get it:

If a game runs better on a card with more VRAM than on a faster card with less, that means one of the two doesn't have enough.

The why is completely irrelevant to the end user experience.