Nvidia GeForce RTX 4060 Ti Review: 1080p Gaming for $399

How true is that anymore (assuming we're talking about two engines with comparable output)? Big engines are heavy. The traditional selling point of natural aspiration is throttle response, but I wouldn't think that would be an issue with superchargers. I've heard even turbos have "anti-lag", though I don't know enough about it to say how well it counters turbo lag.

Anyway, to carry your analogy forward, I guess you mean wide bus + small cache vs. narrow bus + large cache? After seeing how much Infinity Cache helped RDNA2, I had to wonder why GPUs didn't ramp up their cache sooner.
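
To put rough numbers on the "narrow bus + large cache" idea, here's a toy model (my simplification, not Nvidia's or AMD's actual methodology): only cache misses ever touch the DRAM bus, so a given hit rate effectively multiplies raw bandwidth. The 288 GB/s figure is the 4060 Ti's published raw bandwidth; the 50% hit rate is purely an illustrative assumption.

```python
# Toy model: only cache misses touch the DRAM bus, so a big on-die
# cache makes a narrow bus behave like a wider one.
def effective_bandwidth_gbs(dram_gbs, cache_hit_rate):
    """DRAM bandwidth amplified by an L2 / Infinity Cache hit rate."""
    return dram_gbs / (1 - cache_hit_rate)

print(effective_bandwidth_gbs(288, 0.0))  # 288.0 GB/s: raw 4060 Ti figure, no cache help
print(effective_bandwidth_gbs(288, 0.5))  # 576.0 GB/s equivalent at an assumed 50% hit rate
```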
Yes, but don't read too deeply into it; keep it at a high level or the idea gets lost, I'd say... With my analogy I'm trying to move the goalposts a bit to something else. Like... isn't it a matter of answering "what do you like more?".

Simplifying the argument a tad more: you have two competing products which have more similarities than not, and the differences between the two come to light under loads that tax (put pressure on) each product's respective strengths and weaknesses.

If you have a use for the extra bandwidth in any of your tasks, then the 4060ti is a non-starter, right? If you care about power consumption, the 3060ti is a non-starter, right? If you need more than 8GB VRAM, neither is an option anyway, right? And so on...

Circling back to the analogy, you could reduce the topic to those same basic questions: "do you have the space for a V8?" "do you want the weight of a V8?" "do you care about throttle lag?"... Even something as dumb as "do you want the 5.0 V8 badge on the side?". Each answer will skew your choice towards what you really want and, probably, need.

What bugs me about the 4060 Ti, taking a few steps back, is that it is too damn close to the 3060 Ti, which I still catalog as the best card of the previous gen, with too little to make a case for itself. If someone wants DLSS 3 and lower power consumption for a bigger asking price, they have that option. But I'm fairly confident in saying most people will want a lower asking price (upfront cost) and a stronger perceived bandwidth (V8 over I4, regardless of what technologies the I4 uses to be "on par" with the V8's output) for higher resolutions, even if in 90% of the scenarios you can effectively measure it won't matter (much like how a V8 and an I4 turbo behave on a track, with all the nuance that brings).

Regards.
 
Well, it comes down to nVidia seeing that consumers were willing to pay through the nose for mediocre performance during the mining craze, and they are now using malicious segmentation to try to permanently set market prices. Normally competition prevents this from happening, but AMD will likely do the same thing; there's simply too much money to be had with this sort of soft price fixing. Your analogy was pretty accurate, though likely lost on most who aren't gearheads.

For those left wondering, displacement isn't so much the size of the engine as the volume of air moved during a full cycle. Higher displacement means more fuel can be combusted, resulting in more power per stroke and more power at the wheels. More modern engines let you get more power out of the same volume of air, but those same technologies would get even more power out of a larger displacement as well. GPUs absolutely drink memory bandwidth; the more the better, until the compute units are saturated. This is why memory overclocking usually does more for lower-tier cards than core overclocking, and why GPU manufacturers use memory bandwidth as a class divider.
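
To make the "class divider" point concrete: raw bandwidth is simply bus width times per-pin data rate. A quick sketch using both cards' published specs (256-bit GDDR6 at 14Gbps vs 128-bit GDDR6 at 18Gbps):

```python
def raw_bandwidth_gbs(bus_bits, gbps_per_pin):
    # bus width (bits) x per-pin data rate (Gbps), divided by 8 for bytes
    return bus_bits * gbps_per_pin / 8

print(raw_bandwidth_gbs(256, 14))  # RTX 3060 Ti: 448.0 GB/s (256-bit GDDR6 @ 14Gbps)
print(raw_bandwidth_gbs(128, 18))  # RTX 4060 Ti: 288.0 GB/s (128-bit GDDR6 @ 18Gbps)
```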

nVidia's new design allows a 50 class card to perform like a previous generation 60 class card, so they just decided to call it a 60 class card and charge accordingly.
 
Why do you make us look at Gollum? I can't stand the look of that game, and I don't normally get impressed by graphics. Another example would have been nice :)
Well, it's a game that's clearly exceeding 8GB of VRAM use. I just finished testing it so it's still installed. I could have used Redfall, Dead Island 2, Star Wars Jedi: Survivor, or any number of other games. The main thing is that I don't know for certain (without more testing) which games are definitely using more than 8GB. Also, Gollum lets you turn off Texture Streaming, so you do get the full-resolution textures loaded.

If you're wondering, the images above are high, then medium, then epic. Medium runs up to twice as fast as epic on GPUs with 8GB VRAM. 🤷‍♂️
 
They all looked the same to me. I didn't study them, though. I just don't like the look of that game enough to look that hard. I didn't mean to derail the discussion :)
That's a good point to bring up though: DPI vs the perception of quality.

I'm willing to say the quality perception of a texture in 4K will be different on a 50" TV than on a 27" display (assuming both are "4K") or through a VR headset.
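
For anyone who wants to quantify that: pixel density is just the diagonal resolution divided by the diagonal size. A quick sketch (standard formula; the panel sizes are just the examples from above):

```python
import math

def ppi(width_px, height_px, diagonal_inches):
    """Pixels per inch from panel resolution and diagonal size."""
    return math.hypot(width_px, height_px) / diagonal_inches

print(round(ppi(3840, 2160, 27)))  # ~163 PPI on a 27" 4K monitor
print(round(ppi(3840, 2160, 50)))  # ~88 PPI on a 50" 4K TV: same texture, coarser pixels
```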

That's also why VR requires so much VRAM 😛

Regards.
 
I think there's way, WAY more going on than just "Nvidia + mining." The AI situation is a real and present danger to gaming GPUs. As in, no matter what anyone on here thinks, AI is making vast amounts of profits for Nvidia. Look how much Nvidia stock went up in just the past couple of weeks. And AI needs lots of VRAM. AI also isn't going away. Jensen isn't talking hyperbole when he discusses all the areas that AI is likely to impact, HARD, in the coming decades. We're only just getting started.

Nvidia probably didn't want to put "too much" VRAM on lower tier cards, because then people and businesses might be tempted to run lower priced GPUs instead of paying for the professional stuff with double the VRAM. I think that's also why the RTX 4060 Ti 16GB isn't releasing until July, to provide more of a buffer. (I'm actually rather surprised that there will be a 4060 Ti 16GB, even if it's $100 extra... but even 16GB is pretty paltry for a lot of the AI LLMs.)

The memory capacity tied to bus width is a problem, though. Nvidia only wanted to do a 128-bit bus on the 4060 and 4060 Ti, not to screw over consumers, but because it felt that's all it needed. The larger cache generally overcomes the reduced bus width and raw bandwidth. AMD did the exact same thing with RDNA 2 (and is doing it with RDNA 3 as well). But a 128-bit bus limits capacity to 8GB mostly, 16GB if you do chips on both sides. It's too big of a jump for a "mainstream-budget" class GPU.
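
A quick sketch of that capacity math (assuming the usual 2GB GDDR6 chips, one per 32-bit channel, with clamshell mounting doubling the count):

```python
def vram_options_gb(bus_bits, chip_gb=2):
    channels = bus_bits // 32               # one GDDR6 chip per 32-bit channel
    single = channels * chip_gb
    return single, single * 2               # (single-sided, clamshell)

print(vram_options_gb(128))  # (8, 16)  -> the 4060 Ti's only realistic choices
print(vram_options_gb(192))  # (12, 24) -> what a 192-bit bus would have allowed
```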

RTX 3060 was a really good compromise solution: 192-bit bus, 12GB VRAM. If the pandemic and supply chain stuff, coupled with Ethereum mining, hadn't been a thing, I'm pretty certain all the rumors of a 20GB RTX 3080 refresh would have happened (maybe that would have been the 3080 Ti), and a 16GB RTX 3070 Ti. RTX 3060 getting 12GB was evidence Nvidia was moving in that direction. But then demand was all jacked up from mining and shortages, and so Nvidia decided to stick with lower capacities on everything except the 3060. 3070 Ti with 8GB was still one of the worst decisions of 2021 IMO.

Fundamentally, I think in 2023 that ~$400 GPUs should have 12GB VRAM. That's a good balance between capacity and complexity. But it requires a 192-bit bus to do that (or 3GB non-binary chips that don't exist for GPUs), so we're left with 8GB and 16GB on a 128-bit bus as the only options. 8GB is too little, 16GB is too much — "too much" for the amount of actual GPU horsepower we're talking about. Or alternatively, 32MB of L2 cache (for Nvidia) isn't enough for the target market. A 48MB L2 cache would have been better, or even 64MB maybe, to overcome the 128-bit limitations, but that would have required a fundamental reworking of the architecture. Probably would have been more expensive than just doing a 192-bit interface as well. Dropping from GDDR6X and 21~24 Gbps to GDDR6 and 18 Gbps was also a compromise.
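
For reference, here's what those configurations work out to in raw bandwidth. Only the first one shipped; the other two are the hypotheticals discussed above:

```python
configs = [
    ("4060 Ti as shipped: 128-bit GDDR6 @ 18Gbps", 128, 18),
    ("hypothetical 192-bit GDDR6 @ 18Gbps", 192, 18),
    ("hypothetical 128-bit GDDR6X @ 21Gbps", 128, 21),
]
for name, bits, gbps in configs:
    print(f"{name}: {bits * gbps / 8:.0f} GB/s")  # 288, 432, 336 GB/s
```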

The other big problem is Nvidia's continued pushing of DLSS 3 Frame Generation as a "higher FPS" solution. I don't mind having DLSS 3, but it's really just smoothing out the frames to screen. It can look better on high refresh rate displays. But it doesn't usually feel better. The charts Nvidia shows where it's 50% or even 100% "faster" with FrameGen are super misleading in that sense.
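
A deliberately oversimplified model of that complaint: frame generation doubles the frames presented, but input is still sampled once per rendered frame, so responsiveness tracks the lower number. (This ignores Reflex and real frame-pacing overhead; it's just the intuition.)

```python
def framegen_stats(rendered_fps, framegen=False):
    presented_fps = rendered_fps * 2 if framegen else rendered_fps
    input_latency_ms = 1000 / rendered_fps  # input sampled per *rendered* frame
    return presented_fps, input_latency_ms

print(framegen_stats(45))        # (45, ~22.2ms)
print(framegen_stats(45, True))  # (90, ~22.2ms): smoother on screen, same feel
```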

It's all damn irritating, basically. I know all the reasons why Nvidia chose this route. They're not even wrong reasons, from a business perspective and mostly from a performance perspective. That's why this is a solid C- in terms of my score. It did the homework, half-heartedly, and left out some critical items. It had a fundamentally flawed premise. Maybe it should have been a 3-star (D-). Maybe a 10% jump is too big and we need to be more granular. If I was still writing for PC Gamer, it would have been a 65 instead of a 3.5-star. Ironically, PC Gamer actually scored it a 79, which is far too generous IMO.
 
When Nvidia can make $200K on one system sale, do you think they really care about the 4060 Ti not being received well? Maybe in a few years, once the market for AI machines shrinks, they may glance back at it.

Yep. Why buy Pro cards when a 16GB RTX card is good enough? They want them paying more.
 
Only putting this here because I don't want a huge quote box. While AI is definitely a very big thing for business intelligence, this is a consumer gaming card marketed to consumers buying a gaming card, so AI is largely irrelevant to this discussion. As mentioned before, 8GB is perfectly fine as a capacity; it's the 128-bit memory bus that is far more damaging. To make it 192-bit they would have had to add another unit of memory, so likely 12GB, but then it's too close to 4070 territory, and making the 4070 256-bit makes it too close to the 4080 and 4090, which is what they want us to buy.

In free-market economics the market sets the price of a commodity. During the mining craze nVidia made huge profits, with cards flying off shelves faster than they could deliver them. The additional demand from mining caused prices to skyrocket and gamers had to pay more for the same performance; this set a "new" market rate for that performance. Now mining has crashed and that extra demand is gone, so gaming consumers should expect the price for performance to go back down to pre-mining rates, which would mean lower profits for GPU manufacturers.

The 40-series is a very large generational increase in performance, and nVidia is using that as an opportunity to rebadge the low-to-mid-tier cards an entire price category up to maintain the same previous price-to-performance ratios. If you want to see the real comparison, just compare the 3050 to the 4060 and the 3060 Ti to the 4070. Heck, they were going to do this to the "4080" too, but consumer backlash forced them to cancel it and re-release it as the "4070 Ti".

And let's all say it together now: "The More you Buy, the More you Save!"
 
Nvidia may decide it doesn't need to worry about what consumers of the 4060 want if it's making way more selling AI machines. That is the point being made.

It's all related since it's the same company. Nvidia found a way to maintain its profits without the mining boom. It may not need consumers of gaming cards to achieve that right now.
 
This bit-width (effective vs literal bandwidth?) discussion reminds me of the phrase: "there's no replacement for displacement".

Do you want a 5.0 V8 or a 2.0 I4 Turbo?

Regards.
I'll take the 5.0 V8, because from what I've been reading the 2.0 I4 Turbo will need some serious motor work done to it way before the 5.0 V8 does.
Given how lax people can get with oil changes, which are VERY CRITICAL for turbo engines, the two are a bad combo.
The first time you have to replace the turbo, it negates most of your savings from the "elusive MPG advantage" of a turbo four.
 
nVidia renaming a 4050 into a 4060, and previously canceling the "totally not 4080" only to release a near-identical 4070 Ti, says otherwise. They very much are here to milk the PC market for as much as possible and have turned to malicious segmentation to upsell higher-tier SKUs. AMD isn't free of this either; they are watching how this goes for nVidia and will adjust fire from there. The best phrase I've heard for this tactic is "reconditioning the market".

Let's all say it together again: "The More you Buy, the More you Save".
 
That isn't showing a lot of care. It shows they're willing to squeeze value out of a product, but it's not what I would call the reaction consumers wanted. Nvidia doesn't need to give you what you want; they just give you what they want. And if you don't like it, buy something else.

They own the hearts and minds of most of the market. It might take a while for the market to notice they're not interested in giving you better hardware, just living off previously earned goodwill. They're treading water; their eye is on another ball.

Consumers gave them an almost unbeatable monopoly, so why would they need to make it better? Just look at Google search... monopolies just mean they can ignore what got them there.

The 4090 is still the best card you can buy, but the rest of the series is just there to make you wish you could afford a 4090.
 
I think Nvidia should just do better and software/driver-lock the AI LLM features if it fears people would use the lower-end models instead of workstation GPUs. At least gamers can understand and support that decision.

I thought that was the entire reason HBM3 was used in the proper workstation-class stuff and didn't feature even in the ultra-expensive 4090 cards. But then again, I don't think even 4K and 8K gaming were reaching the memory bandwidth limits of the 4090...

With the current crop of bandwidth-limited 60-class cards and the VRAM-limited 4070, I still have some hope for a "Super" refresh without the memory bandwidth/VRAM bottlenecks...
 
People would be PISSED if Nvidia tried to lock out AI workloads. It's been shown that such tactics don't even work long-term (see: anti-mining lock on RTX 30-series cards that was eventually cracked/broken). Plus, AI workloads are not something that you can easily detect.

HBM (HBM2e/HBM3) are about more bandwidth, more capacity, and substantially higher costs. GDDR6/GDDR6X provide enough bandwidth at a cheaper price that the consumer cards and even professional (i.e. RTX 6000 Ada Generation) don't need to go with HBM.

Nvidia can't increase the bus width of a GPU without making a new chip. So, AD102 has a 384-bit interface, AD103 has 256-bit, AD104 has 192-bit, and AD106/AD107 have 128-bit. You can take a higher GPU and disable a channel or two if you want, but bigger chips are more expensive.

Conceivably, we could see an RTX 4080 Ti with a 320-bit interface and 20GB using AD102 at some point. We could also see something like an RTX 4070 Super with 14GB and a 288-bit interface, maybe, though I'm not sure whether Nvidia supports disabling a single 32-bit memory interface (L2 cache is linked with the memory, and it might be that the cache is built around 64-bit blocks). But there's no way to do an AD104 chip with 16GB and a 256-bit interface, or AD106 with 192-bit.

What Nvidia could do is double the VRAM on any given level. So AD102 can support 24GB or 48GB. (Rumors of a 48GB Titan RTX Ada are out there.) AD103 can do 16GB or 32GB. AD104 can do 12GB or 24GB, and AD106/AD107 can do up to 8GB or 16GB. So far, professional cards (what was formerly the Quadro line) with double the VRAM are available. I don't think Nvidia has any intention of cannibalizing those sales by making consumer models with double the VRAM, outside of the RTX 4060 Ti 16GB.
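
Those doubling options all fall out of the bus widths listed above, assuming the usual 2GB chips per 32-bit channel, with clamshell doubling:

```python
ada_bus_bits = {"AD102": 384, "AD103": 256, "AD104": 192, "AD106/AD107": 128}
for chip, bits in ada_bus_bits.items():
    base_gb = bits // 32 * 2                # 2GB chips, one per 32-bit channel
    print(f"{chip}: {bits}-bit -> {base_gb}GB or {base_gb * 2}GB clamshell")
# AD102: 24/48GB, AD103: 16/32GB, AD104: 12/24GB, AD106/AD107: 8/16GB
```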
 
^ People would be pissed, yes, but it's those people Nvidia is trying to force into buying the workstation cards. The mining cards were getting sold out and even the normal ones were getting scalped; those were supply/demand related. The AI market has the same kind of market demand as the crypto mining times did, so this is just bang-for-buck/value related. And yes, they tried to lock out mining with the LHR models and they failed; that's why I'm ranting that they should get better XD

I just wish that with the potential "Super" series they can give even better performance, just like the 3070 matching the 2080 Ti. This is the thought all gamers have, and it's very much apparent with the demand for the 4060 Ti...

I was talking about the HBM GPU models like the A100 and H100 PCIe cards; those were proper AI dev cards. I shouldn't have called them workstation cards, my bad.

TL;DR - Gamers are not willing to pay what Nvidia is asking for the 60- and 70-class cards in the 40-series.
 
Oh, c'mon. This is Nvidia we're talking about. You don't think they're going to protect the fat margins of their pro cards? What they've done is to cripple performance of the tensor cores, for certain classes of operations used primarily in training (16-bit tensor product with 32-bit accumulate).

They've been doing this ever since they began offering tensor cores in gaming cards (RTX 2000). And people still buy them, because they have the best software ecosystem and they still offer the best AI performance for the $.

"Turing is capable of accumulating at FP32 for greater precision; however on the GeForce cards this operation is limited to half-speed throughput. This limitation has been removed for the Titan RTX ..."

Source: https://www.anandtech.com/show/13668/nvidia-unveils-rtx-titan-2500-top-turing

It's not like AMD is entirely above this sort of thing, either. They nerfed the fp64 rate of the Radeon VII to half of what the Vega 20-based Pro and MI cards offer.
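
A sketch of how that kind of segmentation works in practice: same silicon, but the consumer part runs FP16-with-FP32-accumulate at half throughput, per the AnandTech quote above. The numbers here are placeholders, not any specific card's spec:

```python
def tensor_throughput_tflops(fp16_peak_tflops, fp32_accumulate, is_geforce):
    if fp32_accumulate and is_geforce:
        return fp16_peak_tflops / 2  # artificially halved on the gaming cards
    return fp16_peak_tflops          # full rate on Titan / pro parts

print(tensor_throughput_tflops(100, True, is_geforce=True))   # 50.0
print(tensor_throughput_tflops(100, True, is_geforce=False))  # 100.0
```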

What Nvidia could do is double the VRAM on any given level.
Yes, and you correctly cite that as another key differentiator of their pro cards. So, that won't happen.
 
Hmm, I can afford this amount. But sadly, at 240mm the GPU is too long for my case. If a mini version were released at 215mm or less, then I would consider buying it.

I wasn't able to buy an RTX 3060 because all the versions sold locally are too long for my case. Now I'm watching for a short GPU from the 40-series.
 
This might sound crazy, but perhaps a long-term upgrade you could consider is a larger case?

That said, I know cases can be a personal thing and maybe you're quite attached to your current one.
 
3.5 stars for what is a fairly negative review.

Who is the target for this product? A console will perform better for the same money and eliminate the cost of the rest of the computer.
For people who use CUDA cores (plus people who don't want to replace their 500W PSU).
Overall it's a badly priced product, there to make the 4060 Ti and 4070 look like better options.
 
@bit_user

Oh, sorry, forgot to reply here.

The reason I look for 215mm or less is simply that I want the GPU to fit in every case here in the house. Nvidia was able to make the RTX 3050 shorter than 215mm... perhaps they could do it again with a 40-series card. I'm not in a rush to get a GPU; this is simply a "want", not a "need". All the games I own still run fine.
 
The most recent blunder from a source with a history of bad calls. Contrast that with Gamers Nexus' flat-out "Do Not Buy" review title, or Hardware Unboxed's "Laughably Bad at $400" title. At this point, the history of JayzTwoCents' bad advice to consumers is too much to warrant any confidence in their content.

Well, today was the day I dropped JayzTwoCents from my sub list. How did that guy get 4M subscribers? Was it all the free hardware giveaways?

Compared to guys like Gamers Nexus and der8auer, his content is complete garbage.
 
Gamers Nexus and der8auer are both great. Other YouTubers that I think are worthwhile and credible include:

ActuallyHardcoreOverclocking - Especially good on motherboards from a component-analysis perspective, and on memory optimization settings. You just have to get used to his unstructured, rambly style :)
Level1Techs - Wendell's actual tech vids are good; he is very knowledgeable. The 3-panel topic videos are something I skip.
Hardware Unboxed - Some of the best on graphics cards and particularly monitors (they have a companion channel just on monitors).
PCWorld - Gordon is an old hand at the PC game, and his experience shows.
 
I'm subbed to all but Level1Techs... will have a look. Thanks!