Nvidia GeForce RTX 4060 Ti Review: 1080p Gaming for $399

They're using DLSS as a crutch to make the new 60-class card look better than the old one.
That seems to be exactly what Nvidia is doing now: releasing a card that needs DLSS to bring the performance up to where it should have been without it. IMO, DLSS 3 and its frame generation are exactly that.

DLSS " should " of been used only to extend the life if a card a few months longer then it would of lasted, or to get that last bit of performance to make a game playable on a lower card, thats it.

Tout RT and DLSS all you want as the only reasons to buy GeForce over Radeon; with Nvidia pulling things like this, all AMD really needs to do now is have its Zen CPU moment, but for Radeon.

Either way, based on current prices, my 1060 might be replaced with a 7900 XT or XTX.
 
Daniel's defence of Nvidia always seems to come down to DLSS 3 helping get "extra" frames, but latency is bad at lower resolutions. It depends on what frame count you start with as to how noticeable the lag is.

I am watching him but not sure where he stands.

FSR 3 is meant to have frame generation, so let's see what Nvidia does next to stay ahead.

I've never owned an AMD GPU... ever. Every time I have been in the market, Nvidia had the better card, so I haven't had any desire for AMD's cards.

I have had three. I didn't buy my 7900 XT because of anything Nvidia was doing (apart from maybe the power connector). I chose it to suit my monitor, as the last two I have owned were FreeSync, and although Nvidia sort of supports it now, it's not full support. All the problems with Nvidia GPUs since then haven't changed my mind about the purchase.
 
DLSS " should " of been used only to extend the life if a card a few months longer then it would of lasted, or to get that last bit of performance to make a game playable on a lower card, thats it.

Yep... I personally think DLSS is bad for the gaming industry and it has nothing to do with the fact I'm running a 4090 and don't use DLSS. It should be used for what you said... but instead it's being cheesed to the point we have horrible cards and horribly optimized "oh don't worry we have DLSS" games hitting the market.

Thankfully my system just plows through all the BS with brute force.

His voice is a little much, I won't lie, but he seems to try to be fair, so I tolerate it; I appreciate as many honest perspectives on a new product as possible.

I agree... I just go to the ones that don't annoy me. Another thing, though, is repeated videos on the same topic. I mean... I get that the GPU market is in shambles... but do you really need to put out five videos a week talking about 4060 and 4070 performance and/or the price of GPUs? A lot of YouTubers are doing that, and it's just plain getting old.

Daniel's defence of Nvidia always seems to come down to DLSS 3 helping get "extra" frames, but latency is bad at lower resolutions. It depends on what frame count you start with as to how noticeable the lag is.

I am watching him but not sure where he stands.

Yeah, he seems like a likeable guy... I should probably give his content another chance. I mean, it wasn't horrible or anything; it just mostly didn't apply to me, i.e. poor game/GPU performance, which is pretty much all he talked about in the videos I saw. That and his voice... 🤣

I have had three. I didn't buy my 7900 XT because of anything Nvidia was doing (apart from maybe the power connector). I chose it to suit my monitor, as the last two I have owned were FreeSync, and although Nvidia sort of supports it now, it's not full support. All the problems with Nvidia GPUs since then haven't changed my mind about the purchase.

Believe me, if AMD had a comparable card, I would have definitely paired it with the Ryzen. Maybe on the next upgrade...
 
Thankfully my system just plows through all the BS with brute force.

I don't need FSR either. It might be nice to upscale to 4K, but I have a 1440p monitor, and I can't foresee a time when I'd need to run a game at 1080p and upscale to 1440p. It might happen eventually, but a lot of other cards will choke before mine does.

I just looked into it... another feature I don't need. Add it to the pile.
I remember getting a GTX 980, looking at Super Resolution, and sighing because I was already running a 4K monitor. One more feature I could never use.

Maybe this is their next feature to get people to ignore the fact that performance is stagnant and buy their cards: https://www.pcgamer.com/nvidia-ace-ai-avatar/
 
The extremely poor performance of the newest 60-series card is entirely due to Nvidia choking its memory bandwidth, likely as a deliberate attempt to prevent it from digging into sales of the highest-tier cards. This is a practice known as malevolent market segmentation.

Just to give an idea of how badly the 60-series, and to an extent the 70-series, memory has been hamstrung:

3060 - 240 GB/s (8GB), 360 GB/s (12GB)
3060 Ti - 448 GB/s, 608 GB/s (later release)
3070 - 448 GB/s
3070 Ti - 608 GB/s

4060 - 272 GB/s
4060 Ti - 288 GB/s
4070 - 504.2 GB/s
4070 Ti - 504.2 GB/s

What Nvidia did was knock stuff up a tier to convince people it was OK to continue buying performance at mining-craze prices. While GPU compute is important, those large parallel processors need massive amounts of data to keep them fed, which is why memory bandwidth has historically dominated GPU performance.
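For anyone who wants to sanity-check those figures, they fall straight out of bus width times per-pin data rate. A quick sketch (the data rates below are the published memory speeds for each card, to the best of my knowledge, so treat them as my recollection rather than gospel):

```python
# bandwidth (GB/s) = bus width (bits) / 8 * per-pin data rate (Gbps)
cards = {
    "RTX 3060 12GB": (192, 15.0),  # GDDR6
    "RTX 3060 Ti":   (256, 14.0),  # GDDR6 (the later GDDR6X version runs 19.0)
    "RTX 3070":      (256, 14.0),
    "RTX 3070 Ti":   (256, 19.0),  # GDDR6X
    "RTX 4060":      (128, 17.0),
    "RTX 4060 Ti":   (128, 18.0),
    "RTX 4070":      (192, 21.0),  # GDDR6X
    "RTX 4070 Ti":   (192, 21.0),  # GDDR6X
}

for name, (bus_bits, gbps) in cards.items():
    print(f"{name:14s} {bus_bits:3d}-bit @ {gbps:4.1f} Gbps -> {bus_bits / 8 * gbps:5.0f} GB/s")
```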
 
Just to give an idea of how badly the 60-series, and to an extent the 70-series, memory has been hamstrung:

3060 - 240 GB/s (8GB), 360 GB/s (12GB)
3060 Ti - 448 GB/s, 608 GB/s (later release)
3070 - 448 GB/s
3070 Ti - 608 GB/s

4060 - 272 GB/s
4060 Ti - 288 GB/s
4070 - 504.2 GB/s
4070 Ti - 504.2 GB/s
You should also compare L2 cache, which they significantly increased to try to compensate for the narrower memory bus. Basically, Nvidia is trying something similar to what AMD did in RDNA2 with Infinity Cache.
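A back-of-the-envelope way to see the trade-off: every request the L2 absorbs is a request the narrow bus never has to serve. The L2 sizes here are the published specs (4MB on GA104, 32MB on AD106), but the hit rates are purely illustrative assumptions on my part, not measured or vendor-quoted figures:

```python
def effective_dram_bandwidth(dram_gbs: float, l2_hit_rate: float) -> float:
    """If a fraction `l2_hit_rate` of memory requests are served from the L2,
    the DRAM bus behaves roughly as if it were this much faster."""
    return dram_gbs / (1.0 - l2_hit_rate)

print(effective_dram_bandwidth(448, 0.15))  # 3060 Ti, 4MB L2, assumed 15% hit rate -> ~527 GB/s
print(effective_dram_bandwidth(288, 0.50))  # 4060 Ti, 32MB L2, assumed 50% hit rate -> ~576 GB/s
```

Whether the real-world hit rates actually close the gap is exactly what the benchmarks are arguing about.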
 
Right now, at the German shop Mindfactory, you can buy the MSI RX 6800 Gaming Z for 458 euros. At the same shop, 4060 Ti cards range from 428 to 498 euros. So yes, if you search hard enough, you can buy a 6800 for (the equivalent of) $400.
Obviously I'm referring to the US market. Converting euros to USD without accounting for shipping and other factors isn't really viable. We sometimes do stories about "early" pricing on hardware using non-US sources and exchange rates, but we try to make it clear that such direct price conversions are not usually what the end markets see.

AMD traditionally has had better pricing in Germany relative to Nvidia and Intel. I don't know if that's because they built a German fab back in the day, or special government deals, or something else. But German prices are not US prices. If you had originally said, "RX 6800 is selling for 458 Euro in Germany, which is $400 USD..." I wouldn't have questioned your statement.

But in the US, outside of eBay, you will not find a Radeon RX 6800 for less than ~$489 right now. That's based on PCPP and Newegg prices, and you can see PCPP's US data even if you're not located in the US. Furthermore, I'm not sure how widely available Mindfactory prices are, even in Germany. If you look at PCPP Germany prices, right now the RX 6800 starts at €502.90. Apparently they don't index Mindfactory, and I couldn't say why (maybe they're just not an affiliate?)
 
@JarredWaltonGPU My point is, if there is a 6800 available for 400 somewhere in the world, then there is no reason it would not be (sometime, somewhere) available in the US at that price. At least, I don't see such a reason. Maybe PCPartPicker is not as good at finding the best deals as we all think.
 
Not to contradict you, but just to add: the best price I've seen was $450 (Newegg, new, sold by Newegg, after a $20 rebate, IIRC). That was at least a couple of months ago, and prices have been gradually creeping up ever since.
That's why I said "right now." There's an RX 6800 XT for $500 at Newegg right now, and I know we saw the 6800 for $450 at Newegg a month or two back. Actually, I think it was right at the time the RTX 4070 came out that AMD did a short-term sale on a bunch of Navi 21 GPUs. As I noted in the "RTX 4070 vs. RX 6950 XT" article, as far as pricing goes, a short-term unofficial sale price isn't the same as something that is generally available.

But when RX 6800 XT is only $10 more, it would be stupid to buy the RX 6800 at $490. In fact, given it's typically about 15% slower than the 6800 XT, the most I'd recommend paying for an RX 6800 is $425, and $400 would be better. Not surprisingly, current average eBay price on the 6800 is $378, so definitely lots of (probably two years old and mined hard) RX 6800 cards are available used for under $400. Especially if you try for an auction, where you could potentially get one for less than $300.
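(For what it's worth, the arithmetic behind that $425 ceiling is just the street price scaled by the performance gap; plug in whatever prices you're seeing on the day:)

```python
xt_price = 500   # RX 6800 XT street price at Newegg, per the above
perf_gap = 0.15  # RX 6800 is typically ~15% slower than the 6800 XT
print(f"Reasonable RX 6800 ceiling: ~${xt_price * (1 - perf_gap):.0f}")  # ~$425
```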

I do think potentially buying a used GPU off eBay isn't a terrible idea, especially for certain models like the RX 6950 XT. That came out right near the end of Ethereum mining, so worst-case it was used to mine for a few months. Of course, current (past 30 days) eBay prices are averaging $586, so it's not exactly an awesome deal even then. Save $50 versus a brand-new card bought off Newegg? Yeah, I'd pass. I'd only go eBay for stuff you can't generally find elsewhere, or where it's at least 30% cheaper than brand-new.

And on a related note, RX 6800 XT on eBay averaged $455 during May. So eBay prices are tracking a lot better with what I'd expect to pay right now (meaning, about 20% more than the RX 6800). What's crazy (and stupid) is that 78 RTX 4070 cards were sold on eBay in the past month, with an average price of $616. I don't get why anyone would pay full retail price but buy on eBay. Maybe that's just me?
 
I'd only go eBay for stuff you can't generally find elsewhere, or where it's at least 30% cheaper than brand-new.
Like enterprise SSDs, but I digress...

I don't get why anyone would pay full retail price but buy on eBay. Maybe that's just me?
Money laundering? Maybe attempting to circumvent foreign customs duties by avoiding large retailers?
 
I came here to say the same thing.

"RTX 4060 Ti comes in just ahead of the RTX 3070 at 1080p, but falls behind the RTX 3060 Ti at 1440p and 4K."

"Being faster than the RTX 3070 is at least something, but the lead is very slim, and the RTX 3060 Ti isn't far behind either. Gen on gen, we're looking at native performance that's only 13% faster with the RTX 4060 Ti."

That's not 3.5-star worthy.

I'm getting the impression that Tom's doesn't want to bite the hand that feeds it.
This is where I generally stop and do a skeptical review of the reviewer's review.
A reviewer checking their own article means potential built-in bias.
 
I don't get why anyone would pay full retail price but buy on eBay. Maybe that's just me?

"A fool and his money are soon parted," more than likely.
 
This is where I generally stop and do a skeptical review of the reviewer's review.
A reviewer checking their own article means potential built-in bias.
Or potentially the people saying, unequivocally, that "it's slower than the RTX 3060 Ti at 1440p and 4K" are the biased ones. This isn't all really directed at you, but at the whole zeitgeist that's trying to make things worse than they perhaps really are. Anyway... here's the data comparing the RTX 4060 Ti and RTX 3060 Ti, again:
[Attached table: RTX 4060 Ti vs. RTX 3060 Ti performance comparison across the test suite at 1080p, 1440p, and 4K]
Based on those results, there is almost no case to be made for saying the RTX 3060 Ti is universally faster at 1440p. I'm not saying there are NO games where it would be faster, but in general, the 4060 Ti is clearly going to win out at 1440p in any test suite that's reasonably comprehensive. Also note that anything less than about 3% in the above table can't really be considered "faster" — for either GPU. It's a tie.

What about at 4K? There are more games where the additional memory bandwidth of the RTX 3060 Ti helps it come out ahead at 4K. And we can probably toss the Cyberpunk 2077 4K results for the 3060 Ti as being old/wrong. It happens. (A game patch, drivers, or some other factor may have improved performance on the 4060 Ti; I need to retest the 3060 Ti.) But using the sample data of 15 games? Yes, the RTX 4060 Ti will still generally beat the RTX 3060 Ti. Anyone pushing a stronger narrative about it being slower might not be doing it for the right reasons, IMO.

Furthermore, the cases where the 4060 Ti loses don't matter. Why? Because 4K generally doesn't matter on either of these cards. Even if we discount the DXR results, because they're not even remotely viable on either GPU, we get rasterization performance of 40.5 fps on the 4060 Ti versus 38.8 on the 3060 Ti. That's pretty close to a tie, yes, but the 4060 Ti has four clear "wins" (10% or more faster) while the 3060 Ti has only two "barely" wins.

Based on this, I still stand by the review and score and feel that I would rather have the RTX 4060 Ti with typically better performance, AI improvements, and the potential to use DLSS 3, than have an RTX 3060 Ti with potentially slightly better performance at 4K in scenarios where it often doesn't matter. (RTX 3060 Ti doing 15 fps versus RTX 4060 Ti doing 10 fps as an example would have the older card "winning" by 50%, but it's a meaningless win because both are far too slow.)

Of course there's also "bias" built into my selection of games. Brand new games may be more demanding — and they may also be horrendously buggy. (See: Star Wars Jedi: Survivor, Last of Us Part 1, Resident Evil 4 Remake, and Lord of the Rings: Gollum for examples of "buggy" / "badly coded.") However, I have a long-established test suite and a reasonably wide selection of games. I didn't just add some random new game that might have horrible performance/bugs for the sake of this review.

(Yes, some places basically did that. For the record, I disagree with the use of Hogwarts Legacy, Last of Us Part 1, Star Wars Jedi: Survivor, and Resident Evil 4 Remake in a current GPU test suite. All of them were and are questionable due to bugs and other concerns. Go ahead and test those games, like I tested Gollum, but don't forget to point out that they're outliers and feel broken on a technical level.)

Finally, as I've pointed out elsewhere, the best fix for the RTX 4060 Ti and RTX 3060 Ti and other 8GB GPUs running out of memory is... well, there are a few options. First, don't play new/demanding games at 4K on 8GB cards, because 4K can use significantly more VRAM.

Second, turn down the texture settings one notch. Maxing out at 1K textures instead of 2K textures will prevent a GPU with 8GB from running out of VRAM on nearly any reasonably designed game. And by that, I mean a game made for current hardware — which includes consoles! Also, show me a game where the difference between 1K and 2K textures, at 1440p, is even remotely significant. It basically doesn't happen, outside of bad coding, because most objects use less than 1K resolution textures, and even if you're at 1440p and do get a 2K texture, it won't really look much better. A 2X upscaling of texture is not a problem. 4X or more? Yes, but not <2X.
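To put some generic numbers on that (this is plain arithmetic with a full mip chain and 1-byte-per-texel BC7 block compression, not figures pulled from any specific game): halving texture resolution cuts each texture's footprint by roughly 4X.

```python
def texture_mib(size: int, bytes_per_texel: float, mips: bool = True) -> float:
    """Approximate size of one square texture in MiB; a full mip chain adds ~1/3."""
    base = size * size * bytes_per_texel
    return (base * 4 / 3 if mips else base) / 2**20

for res, label in [(2048, "2K"), (1024, "1K")]:
    print(f"{label}: {texture_mib(res, 4.0):5.1f} MiB uncompressed RGBA8, "
          f"{texture_mib(res, 1.0):4.1f} MiB BC7-compressed")
```

Multiply that per-texture difference by the thousands of unique textures in a modern game and one notch on the texture slider adds up fast.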

Back to the console comment I just made, until we have consoles with more memory devoted to textures and GPU stuff, we're not likely to see games truly need more than 10GB of VRAM, and 8GB is close enough that it would rarely matter (i.e. if you turn down the texture quality one notch). This is especially true for any cross-platform games. Xbox Series X has 10GB dedicated to the GPU and 6GB for the CPU, with the CPU side handling OS, game logic, etc. PlayStation 5 isn't as clear on how much is really for the GPU versus the whole system, but it doesn't really matter because the Xbox Series X already drew a line in the sand: Don't use more than 10GB of data for textures and GPU related stuff. So PS5-exclusive ports might conceivably use 12-13GB VRAM, but that's a worst-case scenario. Dropping texture quality one step meanwhile normally cuts VRAM requirements by at least a third (i.e. back to <8GB).

When things do go wrong with the above, however, the answer is almost universally that the game in question needs better coding, particularly for the PC side of things. Someone (bit_user) earlier mentioned the potential to use tiled resources. Low-level APIs do support this, but it's almost never used, because it adds complexity and can cause stutters. That's basically what I heard from Nvidia technical people when I was asking questions for the "Why does 4K use so much VRAM" article. I don't have any reason to doubt them.

Tiled textures (or only loading the part of a texture that's needed "on demand") might sound interesting, but often in practice you end up with something like a game loading part of a texture for one frame, but then the very next frame (or at most a few frames later), the viewport has changed enough that other parts of the texture are now in view. So if a game engine tried to optimize VRAM use by only loading in part of a texture, it can end up having to do that for every part of every texture over a relatively short time. And again, determining what's in VRAM and what's paged out is more complex, so there are performance implications that can make things messy.
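Here's a deliberately dumb little sketch of that churn problem, with made-up tile counts and a camera that just pans sideways. Real engines are far smarter, but the shape of the problem is the same: you pay streaming and bookkeeping work every frame, and within a short pan you've touched most of the texture anyway.

```python
TILES = 16   # pretend the texture is a 16x16 grid of tiles
VIEW = 4     # the camera "sees" a 4x4 window of tiles at any moment
resident = set()
uploads_per_frame = []

for frame in range(TILES - VIEW + 1):  # pan the view one tile per frame
    visible = {(x, y) for x in range(frame, frame + VIEW) for y in range(VIEW)}
    new_tiles = visible - resident     # tiles that must be streamed in this frame
    uploads_per_frame.append(len(new_tiles))
    resident |= new_tiles              # naive policy: never evict

print("Uploads per frame:", uploads_per_frame)
print(f"{len(resident)} of {TILES * TILES} tiles resident after one short pan")
```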

id Software's Rage (idTech 5 engine) actually did do something like this ("megatextures") and was quite good at it. But when you look at how many games licensed idTech 5 (or idTech 6 / 7), it's a very small number. I don't know if this ever changed, but back in the day when Unreal Engine was the upstart, the word on the street among game developers was that id's engines ran faster but were far more difficult to understand / modify, while Unreal Engine was slower but easier to understand. You only need to look at how many games now use Unreal Engine to see how that played out. (Epic also radically changed its licensing costs/royalties, which in turn boosted use of UE4 and later.)

We'll see more games push VRAM use beyond 8GB in the coming years. Will those games truly need 16GB, though, or are they just pushing the limits because "bigger is better?" I'd venture it will be 99% the latter. Even now, there are relatively few games that won't run reasonably well on a GTX 970 at 1080p medium. That's a nine-year-old GPU. We're a way off from 8GB becoming the new 4GB in my book, because there are diminishing returns. For example:

[Three attached Gollum screenshots, captured at different quality presets]

Which of those is running maxed out "epic" settings, which dropped shadows and textures to "high," and which has shadows and textures at "medium"? Even flipping through the uncompressed PNG files, I wouldn't be able to tell you for certain if I hadn't taken those screenshots.

Put another way, a "high-end" GPU from late 2014 that cost $330 at the time of launch still performs roughly on par with a budget $170 GPU from 2019 (GTX 1650). 2GB of VRAM is now basically dead (certain games won't really even try to run, at all, without seriously degraded settings). It's been dead for maybe three years. 4GB is now nearing the end of usability; it's not dead yet, but it's not getting better and it's not going for a walk anytime soon. 6GB is "fine" for budget GPUs but that's about as far as I'd take it. 8GB is fine for mainstream GPUs and will likely handle 1080p and medium (or higher, depending on the game) settings for at least another five years.

Anyway, back to the heart of the matter. My testing and experience with games indicate that:
  1. RTX 4060 Ti in general beats the RTX 3060 Ti. Period. If you only play a few games and one of those happens to be in the category of favoring the RTX 3060 Ti at 1440p, fine, but those games are the exception rather than the rule. And that's without DLSS3, SER, OMM, or DMM.
  2. Playing games at settings and resolutions that work poorly on an 8GB GPU, like the 4060 Ti, is the wrong approach. Most reasonably optimized games look almost as good running ~high settings rather than ~ultra and won't exceed 8GB of VRAM use, even at 1440p. (And this really isn't a 4K card.)
  3. Caveat to point 2: There are definitely games that have been poorly coded that will exceed 8GB VRAM use even at medium to high settings. (See: Gollum.) Any current test suite that has a large number of games where 8GB at 1080p is truly a problem has probably intentionally selected games to try to prove the point that "8GB VRAM GPUs are dead!"
  4. 8GB will eventually be a problem with more games, even at medium/high settings. When will that happen, on popular games that aren't ridiculed as being poorly coded? Probably not in significant numbers until the 4060 Ti has been superseded by the 5060 Ti, 6060 Ti, and maybe even 7060 Ti.
  5. Is the RTX 4060 Ti a 3-star or a 3.5-star GPU? If it's a 3-star, then the RX 7600 also becomes a 3-star. I definitely feel, in retrospect, that I overscored the RTX 4070 at 4-stars. That's largely thanks to being under a time crunch to get things done, and ultimately one "point" (half-star) higher or lower isn't a huge deal to me. There are many days where I'd be happier not having to assign a score at all. The text explains things, you can read it and decide on your own internal ranking what a GPU deserves. Because that's what happens anyway.
 
What's crazy (and stupid) is that 78 RTX 4070 cards were sold on eBay in the past month, with an average price of $616. I don't get why anyone would pay full retail price but buy on eBay. Maybe that's just me?
Nope, I have marveled at the prices some people cough up on eBay for a long time. It often makes no sense at all. There are bargains that can be had for the astute buyer, but apparently the old adage of “There's a sucker born every minute” holds as true as ever.
 
RTX 4060 Ti in general beats the RTX 3060 Ti. Period. If you only play a few games and one of those happens to be in the category of favoring the RTX 3060 Ti at 1440p, fine, but those games are the exception rather than the rule. And that's without DLSS3, SER, OMM, or DMM.

I think the point was that the two should never come close, and the 4060 Ti should always win by a decent margin. The 8GB isn't such a big deal, because we can always just turn textures down a notch or two to force non-4K textures; as your screenshots demonstrate, there is virtually no difference in quality. The 128-bit memory bus is what hurts the performance, and as we've seen in all previous GPUs, that is put there specifically to prevent a lower-tier SKU from competing with a higher-tier one. The 4060 has that 128-bit bus because Nvidia put the 4070 at 192-bit and wants to upsell people to the 4080 and 4090, "the more you buy, the more you save" and all that.
 
The 128-bit memory bus is what hurts the performance, and as we've seen in all previous GPUs, that is put there specifically to prevent a lower-tier SKU from competing with a higher-tier one. The 4060 has that 128-bit bus because Nvidia put the 4070 at 192-bit and wants to upsell people to the 4080 and 4090, "the more you buy, the more you save" and all that.

I'm not the most tech-savvy guy, but I swear that my 1080 Ti had something like a 384-bit bus back in 2017.

Hard to believe given what's being released today.
 
I'm not the most tech-savvy guy, but I swear that my 1080 Ti had something like a 384-bit bus back in 2017.

Hard to believe given what's being released today.
Your RTX 4090 has a 384-bit bus. Their top-end (sub-Titan) cards have had a 384 or 352-bit bus ever since the GTX 700 series (Kepler).

What changed is that the 70-tier card cut the width to 192-bit, when that's traditionally been a feature of the 60-tier cards. It's normally the 50-tier cards which are 128-bit.

Where Nvidia's decision making probably got complicated is that bus width and memory capacity are a bit intertwined. So decisions about bus width weren't necessarily just about bandwidth; a card might've gotten a wider bus just so they could fit more memory on it.
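To make that coupling concrete: each GDDR6/GDDR6X package has a 32-bit interface and currently ships in 1GB or 2GB densities, so the bus width fixes the chip count, and the chip count fixes the capacity steps (clamshell mounting, with two chips per 32-bit channel, doubles it). A rough sketch of the possibilities:

```python
def capacity_options(bus_bits: int, clamshell: bool = False) -> list[int]:
    """Possible VRAM capacities in GB for a given bus width."""
    chips = bus_bits // 32 * (2 if clamshell else 1)
    return [chips * density for density in (1, 2)]  # 1GB and 2GB chip densities

print("128-bit:", capacity_options(128))                   # [4, 8]    e.g. 4060 Ti 8GB
print("128-bit, clamshell:", capacity_options(128, True))  # [8, 16]   e.g. 4060 Ti 16GB
print("192-bit:", capacity_options(192))                   # [6, 12]   e.g. 4070 / 4070 Ti
print("384-bit:", capacity_options(384))                   # [12, 24]  e.g. 4090
```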
 
@bit_user Ahhhh yeah thanks for clearing it up. Makes more sense now. I wasn't familiar at all with the recent lower end cards... I went from 1080 Ti to 3090 to 4090 since my 2017 build.
 
PlayStation 5 isn't as clear on how much is really for the GPU versus the whole system, but it doesn't really matter because the Xbox Series X already drew a line in the sand: Don't use more than 10GB of data for textures and GPU related stuff. So PS5-exclusive ports might conceivably use 12-13GB VRAM, but that's a worst-case scenario. Dropping texture quality one step meanwhile normally cuts VRAM requirements by at least a third (i.e. back to <8GB).
I think the TLOU port was originally assigning 20% of resources to background processes, which has some postulating that the PS5 may reserve 20% of its 16GB for the OS and give the rest to games. So approximately 3.2GB for the OS and 12.8GB for games.

As AMD makes the hardware for both Sony and Microsoft, I guess we can just watch their RAM amounts to foresee what the next consoles may have.

Why do you make us look at Gollum? I can't stand the look of that game, and I don't normally get impressed by graphics. Another example would have been nice :)
 
Based on this, I still stand by the review and score and feel that I would rather have the RTX 4060 Ti with typically better performance, AI improvements, and the potential to use DLSS 3...
Nvidia will be proud of you, you're money well spent.
 
This bit-width (effective vs literal bandwidth?) discussion reminds me of the phrase: "there's no replacement for displacement".

Do you want a 5.0 V8 or a 2.0 I4 Turbo?
How true is that anymore (assuming we're talking about two engines with comparable output)? Big engines are heavy. The traditional selling point of natural aspiration is throttle response, but I wouldn't think that would be an issue with superchargers. I've heard even turbos have "anti-lag", though I don't know enough about it to say how well it counters "turbo lag".

Anyway, to carry your analogy forward, I guess you mean wide bus + small cache vs. narrow bus + large cache? After seeing how much Infinity Cache helped RDNA2, I had to wonder why GPUs didn't ramp up their cache sooner.