Nvidia GeForce RTX 5070 review: $549 price and performance look decent on paper

The only way 12GB of VRAM is going to make it five years is if you play at 1080p, and even that is questionable with the way games are getting more demanding.
What he actually means is that, to stay under 12GB of VRAM, you'll need to do one or more of the following:

1) Run lower resolutions (rough numbers on why that helps are sketched after the list)
2) Run lower settings (especially don't max out textures or shadows)
3) Don't expect new demanding game X to run at 60 FPS at max or even high settings
4) Potentially look at using upscaling and/or frame generation
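To put rough numbers on the resolution point: render targets scale with pixel count, so dropping resolution directly shrinks that part of the footprint, while the texture pool mostly doesn't unless you also drop texture quality. An illustrative calculation, assuming a hypothetical deferred G-buffer layout rather than any specific engine:

```python
# Illustrative only: how much VRAM full-resolution render targets take at
# different resolutions, assuming a made-up but plausible deferred layout.
def render_target_mb(width, height, bytes_per_pixel):
    return width * height * bytes_per_pixel / (1024 ** 2)

# Assumed layout: 4 color targets at 8 bytes/px (RGBA16F), a 4 byte/px depth
# buffer, and two full-resolution post-processing buffers at 8 bytes/px.
BYTES_PER_PIXEL = 4 * 8 + 4 + 2 * 8  # 52 bytes per pixel

for name, (w, h) in {"1080p": (1920, 1080), "1440p": (2560, 1440), "4K": (3840, 2160)}.items():
    print(f"{name}: ~{render_target_mb(w, h, BYTES_PER_PIXEL):.0f} MB of render targets")

# Roughly 103 MB at 1080p, 183 MB at 1440p and 411 MB at 4K for this layout.
# Real engines keep more buffers around, but the scaling is the point; textures
# are the bigger, mostly resolution-independent chunk, hence point 2.
```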
 
  • Like
Reactions: helper800
Is this not also a bit of a misconception, where games say they will "use" X amount of VRAM, but in reality that is just "allocated" VRAM? If I remember correctly, there is no good way of knowing how much VRAM is actually being used to render a scene short of seeing texture artifacts or unexplained stuttering.
 
  • Like
Reactions: JarredWaltonGPU

In HUB's 8GB vs 16GB VRAM videos, there were definitely rendering issues when the VRAM available was less than what the game was demanding. MSI Afterburner's OSD seemed to catch the usage pretty well.
 
MSI's OSD does not track usage, just what the game has allocated. Games do not use all of the memory they allocate, especially not all of the time.
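For what it's worth, the numbers an OSD shows come from the driver's allocation counters, and you can pull the same kind of figure yourself. A minimal sketch, assuming an NVIDIA card and the nvidia-ml-py (pynvml) package; everything it reports is memory reserved for the device or a process, not what the current frame actually touches:

```python
# Minimal sketch, assuming an NVIDIA GPU and the nvidia-ml-py / pynvml package.
# These counters are allocations (memory reserved on the device and per process),
# the same kind of figure an OSD displays, not a per-frame working set.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU

mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
print(f"Total: {mem.total / 2**30:.1f} GiB, allocated: {mem.used / 2**30:.1f} GiB")

# Per-process allocations (usedGpuMemory can be None under Windows/WDDM).
for proc in pynvml.nvmlDeviceGetGraphicsRunningProcesses(handle):
    used = proc.usedGpuMemory
    label = f"{used / 2**30:.1f} GiB allocated" if used else "n/a"
    print(f"pid {proc.pid}: {label}")

pynvml.nvmlShutdown()
```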

Whenever the OSD showed a game bumping up against that 8GB limit and the game obviously wanted more, there were issues with things rendering properly on the 3070, while the RX 6800 (and, in a later video, an RTX A4000) did not suffer from them; both of those cards would show usage over 8GB. Even if the OSD isn't perfect, you could see a correlation between the displayed VRAM usage and rendering quality.
 
If you don't want to take my word for it here is an excerpt from TPU's conclusion section for the Avatar game on the subject:

"Our VRAM testing would suggest that Avatar is a VRAM hog, but that's not exactly true. While we measured over 15 GB at 4K "Ultra" and even 1080p "Low" is a hard hitter with 11 GB, you have to consider that these numbers are allocations, not "usage in each frame." The Snowdrop engine is optimized to use as much VRAM as possible and only evict assets from VRAM once that is getting full. That's why we're seeing these numbers during testing with the 24 GB RTX 4090. It makes a lot of sense, because unused VRAM doesn't do anything for you, so it's better to keep stuff on the GPU, once it's loaded. Our performance results show that there is no significant performance difference between RTX 4060 Ti 8 GB and 16 GB, which means that 8 GB of VRAM is perfectly fine, even at 4K. I've tested several cards with 8 GB and there is no stuttering or similar, just some objects coming in from a distance will have a little bit more texture pop-in, which is an acceptable compromise in my opinion."

I have been reading about this since as early as 2013: game engines will allocate much more VRAM than they actually need. Steve from Gamers Nexus has talked about this at length more than a few times over the years. The earliest video I can find on the subject that supports what I am saying is from 2015.
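The Snowdrop behavior TPU describes is basically a cache that fills whatever VRAM it can see and only evicts when the budget is exceeded. A toy sketch of that idea (all the asset sizes and the budget are made up for illustration):

```python
# Toy sketch of "keep everything resident, evict only under pressure" caching.
# Asset sizes and the budget are invented numbers for illustration.
from collections import OrderedDict

class TextureCache:
    def __init__(self, budget_mb):
        self.budget_mb = budget_mb
        self.resident = OrderedDict()  # asset name -> size in MB, oldest first

    def request(self, name, size_mb):
        if name in self.resident:
            self.resident.move_to_end(name)   # already resident: mark as recently used
            return
        self.resident[name] = size_mb         # load it even if the cache is nearly full...
        while sum(self.resident.values()) > self.budget_mb and len(self.resident) > 1:
            evicted, freed = self.resident.popitem(last=False)  # ...evict LRU only once over budget
            print(f"evicted {evicted} ({freed} MB)")

cache = TextureCache(budget_mb=8000)          # pretend 8 GB card
for i in range(20):
    cache.request(f"texture_{i}", 600)        # 12 GB of assets requested in total
print(f"resident: {sum(cache.resident.values())} MB across {len(cache.resident)} assets")
```

Run the same loop with a 24,000 MB budget and nothing ever gets evicted, which is why a 24GB card "shows" 15GB+ allocated in Avatar while a smaller card can still render the same scene.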
 
  • Like
Reactions: JarredWaltonGPU
I am mostly referencing this. The frame time graphs also showed a difference when bumping up against that 8GB buffer.


The RX 6800 was even winning in RT, because the 3070 just didn't have the VRAM resources needed to stretch its legs.
 

It's not the best test for the subject at hand, in my opinion, because the 3070 and 6800 are different architectures, with different drivers, different optimizations for any particular game, more or less CPU overhead for the driver, et cetera. It would be interesting to have someone test specifically with the 4060 Ti 8GB and 16GB, because that takes away all of the guesswork for a perfect apples-to-apples comparison.
 
The problem is that you can't find out how much VRAM games really need based just on how much VRAM they allocate. We can definitely find a lot of games that exceed 8GB of VRAM these days. That's not difficult. We can even find games that exceed 12GB, and in a few cases, 16GB. But if you want to know "how much VRAM is truly needed to run [Game X] at [Settings Y]," all you can really do is test with GPUs that have less VRAM. So 16GB works fine, 12GB is fine, 10GB struggles... or maybe 10GB is fine and 8GB struggles. And even then, there may be other explanations.

One example of the wonkiness that can happen is Far Cry 6, which at Ultra with the HD texture pack can definitely clear 8GB of VRAM use. But the game behaves differently on various 8GB GPUs. If I run an RTX 3070 as an example, 1080p will be fine, 1440p will also be fine... and 4K will waffle between runs of as low as 5 FPS and as high as 40 FPS (or something similar). Do the same test on an RX 6650 XT and you get similar oddities at 4K. But the last time I tested an Arc A750, it ran 4K just fine, every time. Or maybe it was just more consistently good, I don't recall 100%. The point is that the game was exceeding 8GB of "needed" memory but there was some interaction between drivers and the game where it wouldn't free up assets properly on some GPUs.

TLDR: It's easy to tell if a game is exceeding a certain VRAM threshold. It's harder to tell what amount is needed to satisfy the game's VRAM requirement for any specific setting.
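That's also why frame-time data is more useful than allocation numbers here. A back-of-the-envelope sketch of the comparison (the CSV filenames and the "ms" column are placeholders for whatever your capture tool exports):

```python
# Rough sketch: compare average FPS against 1% lows from per-frame times in ms.
# The filenames and the "ms" column name are assumptions about the capture
# tool's export format; adjust to match your tool's output.
import csv
import statistics

def summarize(path):
    with open(path, newline="") as f:
        frame_ms = sorted(float(row["ms"]) for row in csv.DictReader(f))
    avg_fps = 1000 / statistics.mean(frame_ms)
    slowest = frame_ms[int(len(frame_ms) * 0.99):]       # slowest 1% of frames
    low_fps = 1000 / statistics.mean(slowest)
    print(f"{path}: avg {avg_fps:.1f} FPS, 1% low {low_fps:.1f} FPS")

# A big gap in 1% lows between two cards at identical settings is the usual
# sign of spilling over the VRAM budget, even when average FPS looks similar.
summarize("frametimes_8gb_card.csv")
summarize("frametimes_16gb_card.csv")
```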
 
The only game I know of that wants more than my 3080's 10GB of VRAM is Cyberpunk at 4K with certain settings. Depending on various VRAM-dependent settings I can get 40-50 fps or 10-20 fps, and this is excluding RTX settings.
 
4K ultra at native resolution, or sometimes even 1440p native (alternatively 4K with DLSS Quality upscaling), can exceed 12GB of VRAM in Indiana Jones and the Great Circle, I believe. Or maybe it's only if you try to enable path tracing? If you do PT plus DLSS plus framegen, it will exceed 16GB (though obviously the 3080 doesn't have DLSS framegen). And that's not even trying for the "Very Ultra" or "Extreme" presets.

Forza Horizon 5 at extreme preset and 4K would also exceed 10GB (but still remain mostly playable on the 3080). The same goes for Diablo IV with DXR enabled, and Spider-Man: Miles Morales with DXR enabled. I haven't retested the 3080 10GB on my new test suite yet (because I have a lot of other cards to get through still), but I suspect I will encounter a lot more games (now, in 2025) where it struggles at 4K ultra compared to the 4070. Drop to high settings and it's generally fine.

Basically, there are games where, if you look at the 3080 10GB and the 4070 12GB, you can see that the 12GB card handles things better. That's the reverse of a lot of games where if you don't exceed 10GB, the 3080 ends up slightly faster than the 4070.
 
  • Like
Reactions: helper800
Here you go, and I could come up with several other examples (this is at ultra, with no extra RT beyond what the game requires, and no PT):
[benchmark chart image]
The problem really comes down to games handling VRAM limitations differently, and to the limited methods of detecting those limitations unless they tank performance as above. I think TLOU Part 1 was another that tanked performance at launch, and then they changed it to unload textures/drop to low-res assets instead. HUB did a video comparing the two 4060 Ti models and had several obvious examples of VRAM limitations as opposed to raw performance limitations.

As far as I'm aware, all of these issues can be circumvented by dropping settings (dropping Indy from ultra to high fixes it entirely), but I think it's bad to be limited by VRAM rather than by performance. If, for example, you were running out of VRAM while only getting 30 fps or less, it's not a huge deal, but when you're in that 45-60 fps range (depending on title type and personal preference) or higher, VRAM shouldn't be what's limiting you.
 
Surprised to already see some on Newegg for $609.

Still more than I'm willing to pay, but trending down toward MSRP.
Yeah, it's been flopping around. It's been on Newegg for $549 a few times, and $600-$650 has been pretty common. Obviously not for the higher-end, triple-fan models with lots of bling, but base models are pretty much just as fast (within a few percent).
 
  • Like
Reactions: Order 66
MSI has had their Ventus and Shadow models in stock several times at $550 over the months as well. Funnily enough, Walmart has also had them in stock in the past for $550. You have to keep looking, but you can get MSRP cards eventually.

I knew I said I would not get a 5090, but I did anyway when I found the Zotac Solid OC for $2,369.99. The uplift at 4K for the 5090 over the 4090 is 30-40%, which was acceptable for me. Now I can take much better advantage of the 4K 240Hz QD-OLED monitor I got a while back.
 