Review Nvidia GeForce RTX 4070 Ti Super review: More VRAM and bandwidth, slightly higher performance

This is not just about memory capacity, it's also about the additional 33% bandwidth and 33% larger L2 cache. Basically, we got 10% more compute and 33% more memory capacity, bandwidth, and cache. I was expecting the general trend to be closer to the 4080 than the 4070 Ti because of that, but in most of the tests that didn't happen; it's closer to the 4070 Ti.
In terms of bandwidth, Ada is VERY bandwidth-insensitive. We saw this with the 4070 Ti (non-Super) and the 3090: half the memory bus width, half the memory bandwidth, but effectively equivalent real-world performance.
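For reference, the spec deltas cited above check out against the published numbers. A quick sketch of the arithmetic (spec values from the public spec sheets; L2 cache is omitted since reported sizes vary by source):

```python
# Spec deltas between RTX 4070 Ti and RTX 4070 Ti Super.
def pct_gain(old, new):
    """Percentage increase from old to new."""
    return (new - old) / old * 100

ti       = {"bandwidth_gbs": 504, "vram_gb": 12, "shaders": 7680}
ti_super = {"bandwidth_gbs": 672, "vram_gb": 16, "shaders": 8448}

for key in ti:
    print(f"{key}: +{pct_gain(ti[key], ti_super[key]):.1f}%")
# bandwidth and VRAM both gain ~33%, shader count ~10%
```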
 
I thought about holding out for this GPU, but I'm glad I opted for the 4070 Super (Asus Dual OC). I play at 1440p and don't feel that $300+ CAD is worth a ~14% performance increase. Sure, the 16GB and 256-bit bus would be nice, though.

I may look at the RTX 5070 series when they arrive, pending their performance at 2160p.

Quick side note: going from an RTX 3070 to an RTX 4070 Super, VR streaming has significantly improved (even latency, it appears). I'm using Virtual Desktop with Wi-Fi 6 and the AV1 codec.
 
The 4070 Ti Super is mostly a 4080
Came to say this except there's nothing "mostly" about it.

It's the same AD103 die with 10% of the units fused off (probably dies that didn't pass binning as an RTX 4080 at the factory because of silicon defects).

It actually tells us what the real price for the RTX 4080 could be while still allowing them to make a profit.
 
I thought about holding out for this GPU, but I'm glad I opted for the 4070 Super (Asus Dual OC). I play at 1440p and don't feel that $300+ CAD is worth a ~14% performance increase. Sure, the 16GB and 256-bit bus would be nice, though.

I may look at the RTX 5070 series when they arrive, pending their performance at 2160p.

Quick side note: going from an RTX 3070 to an RTX 4070 Super, VR streaming has significantly improved (even latency, it appears). I'm using Virtual Desktop with Wi-Fi 6 and the AV1 codec.
Mostly the same here! I was planning on probably getting this GPU. On paper, it sounded like the perfect card. The +33% bus width and VRAM capacity were enticing, and I figured it might get it close to 4080 performance.
But after seeing the reviews, I was pretty underwhelmed. The better memory configuration doesn't seem too important after all, and the VRAM Panic of '23 feels pretty overblown.

15% faster than the 4070 Super for 33% more money is something I can't justify for myself. I realized that the reason I wanted 16GB VRAM and a 256-bit bus is: FOMO. Which is not a good reason. The only other reason to spend that much would be trying to futureproof, which is always a very uncertain proposition. But realistically, regardless of which card I buy today, I'll most likely be upgrading again in 4 years anyway (next-next-gen.)

So, I went ahead and ordered a 4070 Super instead. MSI Gaming X Slim. Should get here tomorrow. I really can't wait!! :-D My RTX 2060 has been needing the boot for far too long.

And by the way - congrats to you on the new card. Hope it serves you well. : )
 
Well, I mean, I spent $155 on a dual-core Opteron 165 in 2007, the entry-level model, and in 2024 you can get a quad-core 14100F, the entry-level model, for the same price: twice the cores for the same money. Also, in 2007 I spent $118 on 2x512MB DDR-400, and in 2020 I spent $125 on 2x8GB DDR4-3200: 16x the capacity for effectively the same price.

In contrast, in 2013 I paid $433 for an XFX 7970 GHz Edition, AMD's flagship of the time. In 2024 you're looking at $1,000 for the current flagship model: twice the price. In 2004 I spent $174 on a Radeon 9600XT, a great midrange card of the 9000 series, and in 2020 I spent $520 on a 2070 Super, again a great midrange card of the 2000 series, well over twice the price. In 2009 I bought an ASUS Crosshair III Formula for $200, a flagship AM3 790FX motherboard, and in 2021 I bought a Gigabyte X570S Aorus Master for $390, a flagship motherboard.

In the realm of computers, the only two core components which have drastically increased in price are motherboards and GPUs. CPUs have gotten more expensive, but they also have more cores. My 5950X (that I loathe) cost $548, 3.5x the price of the Opteron 165, but on a per-core basis it cost $34.25 per core vs the Opteron's $77.50, so per core they've actually decreased in price over the last 20 years. Motherboards have gained a lot more functionality and, especially on the AMD side, now offer the highest-end chipset on more affordable models (the ASUS TUF GAMING X670E-PLUS WIFI 6E, for example, is $280, only 40% higher than the board I bought in 2009 and roughly in line with inflation). Compare this to GPUs, where the $433 spent on a 7970 GHz Edition in 2013 equates to $566 today: the price of some 7800 XT cards, a lower-tier model.
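The per-core arithmetic above can be laid out explicitly (the ~1.31x multiplier is an assumed cumulative CPI figure for 2013 through 2024, matching the post's $433-to-$566 conversion):

```python
# Per-core price comparison from the post above.
opteron_165 = {"price": 155, "cores": 2}    # entry-level dual core, 2007
ryzen_5950x = {"price": 548, "cores": 16}   # 16-core flagship, 2020

def per_core(cpu):
    """Price per core in dollars."""
    return cpu["price"] / cpu["cores"]

print(per_core(opteron_165))  # 77.5
print(per_core(ryzen_5950x))  # 34.25

# Rough GPU inflation check: $433 in 2013 at ~1.31x cumulative CPI.
print(round(433 * 1.31))  # ~567, close to the $566 cited
```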

Sure, GPUs have gotten more complex, with many times the number of transistors and far more processing capability, but they've also increased in price more than any other computer component in the last 15 years or so. And they're going to keep increasing as long as there's a cooperating duopoly. They don't have to sell any consumer GPUs; they're making money hand over fist selling enterprise cards at many times the profit margin. And they know that whatever they price consumer cards at, people will buy them, because eventually you WILL need one, and you WILL buy one, even though you'll grumble about the price for years.
1 - I absolutely do not buy the "cooperating" duopoly claim. There's nothing to back such an assertion.

2 - While I loathe the pricing structure, have you done a performance comparison of old vs. new GPUs? With CPUs you compared cores, though I'd say the price "reduction" looks even more favorable if you also account for IPC improvements.
So how many times the performance do you get from a new video card vs that 7970 GHz Edition? Don't you wind up with a better price-performance ratio?

Again, not really defending the prices. They're clearly padding the profit margins. After all, people TODAY still insist that AMD graphics drivers are cripplingly unstable, and have even been told outright that "people should definitely pay $200 more for an Nvidia card versus an equally performing AMD card" because of it.

With people being convinced of that, and others continuing to push such a narrative, why would Nvidia bother with lower prices?
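The price-performance point above is easy to make concrete. A sketch with hypothetical numbers (the 4x performance multiple is an assumption for illustration, not a measured figure): even if a modern flagship costs ~2.3x what the 7970 GHz Edition did, perf per dollar can still improve.

```python
# Hypothetical price-performance comparison: old flagship vs. a modern card
# assumed to be 4x faster at $1000.
def perf_per_dollar(relative_perf, price):
    return relative_perf / price

old = perf_per_dollar(1.0, 433)    # 7970 GHz Edition, 2013 ($433)
new = perf_per_dollar(4.0, 1000)   # hypothetical current flagship

print(round(new / old, 2))  # ~1.73x better perf per dollar, despite 2.3x the price
```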
 
Mostly the same here! I was planning on probably getting this GPU. On paper, it sounded like the perfect card. The +33% bus width and VRAM capacity were enticing, and I figured it might get it close to 4080 performance.
But after seeing the reviews, I was pretty underwhelmed. The better memory configuration doesn't seem too important after all, and the VRAM Panic of '23 feels pretty overblown.

15% faster than the 4070 Super for 33% more money is something I can't justify for myself. I realized that the reason I wanted 16GB VRAM and a 256-bit bus is: FOMO. Which is not a good reason. The only other reason to spend that much would be trying to futureproof, which is always a very uncertain proposition. But realistically, regardless of which card I buy today, I'll most likely be upgrading again in 4 years anyway (next-next-gen.)

So, I went ahead and ordered a 4070 Super instead. MSI Gaming X Slim. Should get here tomorrow. I really can't wait!! :-D My RTX 2060 has been needing the boot for far too long.

And by the way - congrats to you on the new card. Hope it serves you well. : )

Congrats on the purchase as well. You'll see a tremendous increase over your RTX 2060!

Looking back, I tend to upgrade every 16~24 months, so the mid-life of the RTX 5000 series could be of interest to me.

Best regards,
 
First, I applaud you guys for including MSFS 2020 in all your reviews; it's hard to find reviews that test this game. But the Flight Simulator test at 2K Ultra is being held back by the CPU, the 13900K. The 4070 Ti is almost tied with the 4080, which is odd, but MSFS is notoriously CPU-bound, and the AMD 7800X3D is a much faster CPU in this case. I understand using the 13900K for consistency's sake with past results.
Also, kudos for testing Stable Diffusion. It's the reason I picked Nvidia over AMD for my GPU.
 
First, I applaud you guys for including MSFS 2020 in all your reviews; it's hard to find reviews that test this game. But the Flight Simulator test at 2K Ultra is being held back by the CPU, the 13900K. The 4070 Ti is almost tied with the 4080, which is odd, but MSFS is notoriously CPU-bound, and the AMD 7800X3D is a much faster CPU in this case. I understand using the 13900K for consistency's sake with past results.
Also, kudos for testing Stable Diffusion. It's the reason I picked Nvidia over AMD for my GPU.
Unfortunately we switched to 13900K before 7950X3D was available. Honestly, I often regret this, but I really don't want to try to redo all the GPUs on an X3D at this stage. I really hope I can get a Zen 5 X3D basically at launch and swap to that platform... whenever Zen 5 comes out. Unless Arrow Lake really surprises with performance and efficiency, I suppose.

If you're curious: I mostly regret the 13900K because I have some odd instability/crashing issues. Basically, there are a handful of games where the shader compilation process appears to overload something and I'll get a crash to desktop. The solution: Set affinity (during shader compile) to just the P-cores; when done, set it back to all cores. It's possibly/probably something weird with either the MSI motherboard firmware, or the RAM, or whatever. But my 12900K testbed is 100% rock stable and the 13900K isn't.
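For anyone wanting to script that affinity workaround, the mask is just a bitfield over logical CPUs. A sketch, assuming the usual 13900K layout (logical CPUs 0-15 are the hyperthreaded P-cores, 16-31 the E-cores); on Windows you would apply the resulting value via something like .NET's `Process.ProcessorAffinity` or a tool such as Process Lasso:

```python
# Affinity masks are bitfields: bit i set = logical CPU i is allowed.
# Assumed 13900K layout: CPUs 0-15 = P-cores (with HT), 16-31 = E-cores.
def mask_for(cpus):
    m = 0
    for c in cpus:
        m |= 1 << c
    return m

p_core_mask = mask_for(range(16))    # use during shader compilation
all_core_mask = mask_for(range(32))  # restore afterwards

print(hex(p_core_mask))    # 0xffff
print(hex(all_core_mask))  # 0xffffffff
```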

So, here's to hoping AMD doesn't delay the X3D Ryzen 9 8000 chips and just goes straight to what a lot of people will want, right out of the gates.
 
Mostly the same here! I was planning on probably getting this GPU. On paper, it sounded like the perfect card. The +33% bus width and VRAM capacity were enticing, and I figured it might get it close to 4080 performance.
But after seeing the reviews, I was pretty underwhelmed. The better memory configuration doesn't seem too important after all, and the VRAM Panic of '23 feels pretty overblown.

15% faster than the 4070 Super for 33% more money is something I can't justify for myself. I realized that the reason I wanted 16GB VRAM and a 256-bit bus is: FOMO. Which is not a good reason. The only other reason to spend that much would be trying to futureproof, which is always a very uncertain proposition. But realistically, regardless of which card I buy today, I'll most likely be upgrading again in 4 years anyway (next-next-gen.)

So, I went ahead and ordered a 4070 Super instead. MSI Gaming X Slim. Should get here tomorrow. I really can't wait!! :-D My RTX 2060 has been needing the boot for far too long.

And by the way - congrats to you on the new card. Hope it serves you well. : )
I agree with most, except one specific tidbit: do not confuse "future-proof" with "aging gracefully".

People who buy into more RAM, a better CPU, faster this and that may also do so because they want their purchase to age better, which has nothing to do with what the technologies of the future bring. Plus, those parts perform the best now, and some key choices have proven time and time again to be the right calls: mainly the amount of memory (as it is the main context here). No one knows what future engines will focus on (look at PhysX) or where you need to spend the most right now, and that is why "future-proofing" is self-defeating by definition. The chances of someone, anyone, predicting the future are low (being generous). That said, history has shown time and time again which elements deserve a bigger share of the budget in order to age better.

Comparable GPUs with double/half the VRAM have aged wildly differently, and the same can be said about whole PCs. A PC from 2010 with 16GB is perfectly usable today for 99% of things, including light gaming, for example. Look at the 1060 3GB vs. 6GB, or the RX 480 4GB vs. 8GB. There are plenty of things in history (not just recent history) that give strong hints about where to invest more, so that your system feels the weight of tech advancement a bit less. It depends on usage and expectations, but overall you can assess what you need right now so that you won't need to change or upgrade that element of the build too soon.

So, in summary: "future-proofing" is not the same as "aging gracefully" for a system or component. At least, not for me. Having more memory, in general, falls squarely under "aging gracefully".

Regards.
 
That's a lot more than "slightly higher performance"; it's almost an entire tier's worth.
4K geomean performance only improved by 10%. By contrast, the difference between the RTX 4070 Ti and RTX 4070 was 27.7%, and the RTX 4080 was 25.9% faster than the RTX 4070 Ti. So, quite clearly not "an entire tier's worth".

Also, IIRC, you were among those highly critical of Nvidia's decision to use reduced bus widths. This is a good opportunity to revisit that complaint. This Super model has 33.3% more memory bandwidth than the baseline RTX 4070 Ti, and yet delivered only 10% better 4K geomean performance, even if we just consider raster performance. So, it looks as though that 192-bit bus indeed wasn't holding back the original RTX 4070 Ti by much (if any).

Granted, at some of the lower tiers, where the amount of L2 cache is also less, the impact of the RTX 4000 generation's bus-narrowing was probably greater.
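For what it's worth, the "4K geomean" figure is just a geometric mean over per-game results. A minimal sketch with hypothetical per-game uplift ratios (not the review's actual data):

```python
from statistics import geometric_mean

# Hypothetical per-game ratios (Ti Super fps / Ti fps); a review's geomean
# aggregates its whole benchmark suite this way.
ratios = [1.08, 1.12, 1.07, 1.11, 1.10, 1.12]

uplift = (geometric_mean(ratios) - 1) * 100
print(f"geomean uplift: {uplift:.1f}%")  # close to 10% for these ratios
```

The geometric mean is preferred over the arithmetic mean here because it treats relative gains symmetrically regardless of each game's absolute frame rate.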
 
This is not just about memory capacity, it's also about the additional 33% bandwidth and 33% larger L2 cache. Basically, we got 10% more compute and 33% more memory capacity, bandwidth, and cache. I was expecting the general trend to be closer to the 4080 than the 4070 Ti because of that, but in most of the tests that didn't happen; it's closer to the 4070 Ti.
How are you computing those figures? Here's what I'm getting:

Attribute | RTX 4070 Ti | RTX 4070 Ti Super | Improvement
Memory Bandwidth (GB/s) | 504 | 672 | 33.3%
fp32 TFLOPS (base clocks) | 35.5 | 39.5 | 11.3%
fp32 TFLOPS (turbo) | 40.1 | 44.1 | 10.0%
RT TOPS | 92.7 | 102 | 10.0%
TMUs | 240 | 264 | 10.0%
ROPs | 80 | 112 | 40.0%

I suspect some driver voodoo is happening.
Why should the driver see it much differently than an RTX 4080 (which indeed performs better)?

Based on the updated compute numbers (thanks, @TJ Hooker), it looks like it's compute-bound or TMU-bound, not bandwidth-limited.
 
It actually tells us what should be the real price for RTX 4080 which still allows them to make a profit.
Current price, maybe. Not the launch price, though. In 14 months, I expect yield and pricing of these wafers both improved, especially if you consider when they would've actually purchased said wafer capacity (i.e. during the "chip crunch", for the initial shipments vs. current wafers were probably reserved during the PC slump of the past year).
 
How are you computing those figures? Here's what I'm getting:
Attribute | RTX 4070 Ti | RTX 4070 Ti Super | Improvement
Memory Bandwidth (GB/s) | 504 | 717 | 42.3%
fp32 TFLOPS (base clocks) | 35.5 | 44.1 | 24.2%
fp32 TFLOPS (turbo) | 40.1 | 49.83 | 24.3%
RT TOPS | 92.7 | 102 | 10.0%
TMUs | 240 | 264 | 10.0%
ROPs | 80 | 112 | 40.0%



Why should the driver see it much differently than an RTX 4080 (which indeed performs better)? Do you think the driver is intentionally holding it back? If so, then why'd they give it so much additional compute capacity, only to limit it afterwards?
RTX 4080 has 22.4 Gbps GDDR6X, but the RTX 4070 Ti Super has 21 Gbps memory. So it's not 717 GB/s, it's 672 GB/s. Unless I'm completely off my rocker... no. It's 21 Gbps memory. I just double-checked on the Asus card. It seems some places have misreported the VRAM clocks.
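The bandwidth figures follow directly from per-pin data rate and bus width. A quick sketch of the arithmetic, using the spec values discussed above:

```python
# GDDR bandwidth = per-pin data rate (Gbps) * bus width (bits) / 8 bits-per-byte.
def bandwidth_gbs(data_rate_gbps, bus_width_bits):
    return data_rate_gbps * bus_width_bits / 8

print(bandwidth_gbs(21.0, 256))  # 672.0 -> RTX 4070 Ti Super (21 Gbps, 256-bit)
print(bandwidth_gbs(22.4, 256))  # 716.8 -> RTX 4080; the misreported ~717 figure
print(bandwidth_gbs(21.0, 192))  # 504.0 -> RTX 4070 Ti (21 Gbps, 192-bit)
```

So the ~717 GB/s number only holds if the Super had inherited the 4080's 22.4 Gbps chips, which it didn't.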
 
RTX 4080 has 22.4 Gbps GDDR6X, but the RTX 4070 Ti Super has 21 Gbps memory. So it's not 717 GB/s, it's 672 GB/s. Unless I'm completely off my rocker... no. It's 21 Gbps memory. I just double-checked on the Asus card. It seems some places have misreported the VRAM clocks.
Cool. Thanks for fixing the Wikipedia page!
; )

I don't see anywhere that it says on Nvidia's website, but the TechPowerUp GPU database agrees with you.

I'll edit my posts, accordingly.
 
How are you computing those figures? Here's what I'm getting:
Attribute | RTX 4070 Ti | RTX 4070 Ti Super | Improvement
Memory Bandwidth (GB/s) | 504 | 672 | 33.3%
fp32 TFLOPS (base clocks) | 35.5 | 44.1 | 24.2%
fp32 TFLOPS (turbo) | 40.1 | 49.83 | 24.3%
RT TOPS | 92.7 | 102 | 10.0%
TMUs | 240 | 264 | 10.0%
ROPs | 80 | 112 | 40.0%



Why should the driver see it much differently than an RTX 4080 (which indeed performs better)? Do you think the driver is intentionally holding it back? If so, then why'd they give it so much additional compute capacity, only to limit it afterwards?
Your fp32 values are off for the Super. The base/boost clocks of the Super are 2.34/2.61 GHz, resulting in 39.53/44.1 fp32 TFLOPS, respectively. The rated clocks are nearly identical between the Super and non-Super, but the former has 10% more cores, so ~10% higher compute.
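That calculation is straightforward to check: fp32 throughput for these GPUs is shader count times 2 FMA ops per clock times clock speed. A sketch using the spec values above:

```python
# fp32 TFLOPS = shaders * 2 ops/clock (FMA) * clock (GHz) / 1000.
def fp32_tflops(shaders, clock_ghz):
    return shaders * 2 * clock_ghz / 1000

print(fp32_tflops(8448, 2.34))  # ~39.5, RTX 4070 Ti Super at base clock
print(fp32_tflops(8448, 2.61))  # ~44.1, RTX 4070 Ti Super at boost clock
print(fp32_tflops(7680, 2.61))  # ~40.1, RTX 4070 Ti at boost clock
```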
 
Your fp32 values are off for the Super. The base/boost clocks of the Super are 2.34/2.61 GHz, resulting in 39.53/44.1 fp32 TFLOPS, respectively.
I just pulled them from the Wikipedia page without sanity-checking them.

Your stated clocks match what Wikipedia had, but its TFLOPS numbers are still wrong. When I compute them from the stated clocks, I get the same TFLOPS as you do.

Thanks, again. Post updated.
 
RTX 4080 has 22.4 Gbps GDDR6X, but the RTX 4070 Ti Super has 21 Gbps memory. So it's not 717 GB/s, it's 672 GB/s. Unless I'm completely off my rocker... no. It's 21 Gbps memory. I just double-checked on the Asus card. It seems some places have misreported the VRAM clocks.

Its base stats should put it much closer to the 4080 in some of those tests, so I suspect something in the driver or card firmware is up to no good. It would be an interesting article to write in another three to six months, to see what the community finds out.
 
Unfortunately we switched to 13900K before 7950X3D was available. Honestly, I often regret this, but I really don't want to try to redo all the GPUs on an X3D at this stage. I really hope I can get a Zen 5 X3D basically at launch and swap to that platform... whenever Zen 5 comes out. Unless Arrow Lake really surprises with performance and efficiency, I suppose.

If you're curious: I mostly regret the 13900K because I have some odd instability/crashing issues. Basically, there are a handful of games where the shader compilation process appears to overload something and I'll get a crash to desktop. The solution: Set affinity (during shader compile) to just the P-cores; when done, set it back to all cores. It's possibly/probably something weird with either the MSI motherboard firmware, or the RAM, or whatever. But my 12900K testbed is 100% rock stable and the 13900K isn't.

So, here's to hoping AMD doesn't delay the X3D Ryzen 9 8000 chips and just goes straight to what a lot of people will want, right out of the gates.
I don't think Arrow Lake is going to be any threat to AMD's Zen 5. But AMD shouldn't rest; they should go for it right at the start.
My guess is that the instability is from the MSI board. I have an MSI B550, and occasionally it would blue-screen on me; I never had that with my ASUS or Gigabyte boards. And stories about the MSI 4070 Ti Super needing a day-one firmware update don't bode well for them.
I think people will be disappointed with the 4080 Super, as it will be the least improved of the Super cards; hence the price cut to compete with the 7900 XTX. Maybe that's enough for people: 60% of the performance for 50% of the cost, compared to whatever premium the 4090 commands. Nvidia really needs a 4080 Ti to fill that jumbo-sized hole.
PS: I would gladly redo all those GPU tests with a 7800X3D for you. 😀
 
I think people will be disappointed with the 4080 Super, as it will be the least improved of the Super cards; hence the price cut to compete with the 7900 XTX. Maybe that's enough for people.
Yes, it's mainly the price cut. That was the biggest problem with it. Performance-wise, I think Nvidia estimated it gains only 3% over the standard RTX 4080.

IMO, the price cut won't be enough for most people. I expect the 4070 Ti Super will be very popular.
 
Wow, that's... disappointing, honestly. Like Jarred mentioned, I too was expecting it to be closer to the 4080. At the very least, I thought it would consistently outperform the 7900XT at 1440p.

I had my eye on this card to finally replace my RTX 2060, but after seeing these benchmarks, I'm not so sure.
  • Part of me wants to have the better memory specs, in case games coming over the next few years will benefit more from it (Alan Wake seems to show that.) And I may upgrade from 1440p to 4K, but not for at least a year. But futureproofing is hard, since I don't own a crystal ball.
  • The other part of me thinks the 4070 Super is close enough in performance, and I could use the saved $200 to buy a 4TB SSD that I could make use of. But I don't wanna regret my decision in 2 years.
But with either choice, I'm sure I'll be absolutely ecstatic when I upgrade from my 2060. : P

In any case - awesome review, Jarred! You do great work. : )
It's not easy. My girlfriend's old GPU had to be replaced last July, so I gave her my RTX 2070 and got a 4070 Ti. It's a good card and it runs the games I play quite well at 4K (I usually play older games, 3 to 6 years old in general). I'm happy with it, and it renders 3D very fast as well. But of course, the prices are madness.
I don't think we can really future-proof like before. GPU-wise, it's probably a question of 3 years maximum now. But honestly, the latest games are really lacking, so on the other hand there are no games that would really justify so much horsepower, in my opinion.

I think that for us to see a change, people should start migrating to 4K. It's about time, it's worth it, and that would push the companies to load their cards with more VRAM, bandwidth, and core power. While everyone is at 1080p or 1440p, card makers will keep sitting on their asses.
 
I think that for us to see a change, people should start migrating to 4K.
The costs are still too high for it to take off:
- the greater performance impact compared to lower resolutions.
- the cost of a decent 4K monitor vs. lower-res options.
- it warrants more frequent GPU upgrades to play the never-ending game of keeping up with new technologies and games.
 