News "Very Few Are Interested" in RTX 4060 Ti 16GB GPUs, Nvidia AIB Sources Reportedly Say

Saying "8GB is not enough" or "modern AAA games require more than 8GB vram" is not misleading because it is said within a certain context. The context is: 8GB is not enough to run modern AAA games at a level you would expect from a newly released GPU in 2023 at prices like these. Even Hardware Unboxed, who really started this "movement" (god bless them), have pointed out numerous times that their beef is not with 8GB cards, but 8GB cards at these prices.
I was responding to blanket statements about 8GB. If you need 500 words to explain how such blanket statements aren't misleading by pointing out a handful more considerations about when 8GB is or isn't appropriate, I think that proves that the blanket statements are indeed sorely lacking in specificity.

Even the price needs to be specified because you can get 8GB GPUs under $200 just like I did; you just need to shop one generation older.

Does 8GB on a $300 GPU in 2023 and having to make compromises because of it feel like robbery? Sure. But that is a separate story.
 
Saying "8GB is not enough" or "modern AAA games require more than 8GB vram" is not misleading because it is said within a certain context. The context is: 8GB is not enough to run modern AAA games at a level you would expect from a newly released GPU in 2023 at prices like these. Even Hardware Unboxed, who really started this "movement" (god bless them), have pointed out numerous times that their beef is not with 8GB cards, but 8GB cards at these prices. How is this not clear by now? Why are we being so pedantic about supposed lack of nuance when the argument was made clear thousands of times before? Yet at the same time we have these corporations deliberately butchering their products, withholding value from consumers, desperately trying to cling to their cryptoboom profit margins.

Speaking of "misleading", this whole "vote with your wallet" mantra is exactly that. My "vote" of not buying a budget gpu means nothing against someone else's purchase of a gpu nearly three times as expensive as that. For example, the 4060 vs 3070 Ti (even if it's a regional case). The only way my vote counts is through community pushback. And now that we have it, i find it silly that we're insisting on accurately presenting the claim every time, or not making it at all.

Oh really? Did you buy it in 2023 for $400+? I bet not.

As for GPUs somehow being connected to self-validation, e-peens and personal insecurities... only a child would make that connection. Implying that's the issue with the 8GB VRAM discussion is completely missing the point. Also, it doesn't sound very "secure" to me, whatever that means and whoever "you" and "your world" are.

But since having an old GPU is so cool, apparently, I should mention that I'm sitting on a 10-year-old 2GB GTX 750 Ti as my main and only option. IKR, I'm such a badass. Now where is my Medal of Honor?

My point is, I and others here and elsewhere are not arguing the 8GB VRAM issue from the standpoint of spoiled brats, as some here would like to present it. I'm not even speaking for myself, but for the sake of people like me, especially the young ones. You know, the people someone crazy enough would call "gamers". Not Nvidia though (or AMD for that matter, although they are slightly more "generous" imo); they refer to a different type when they use that word. You know, the type to act as a gatekeeper of having fun. The type to enjoy fighting pixelated dragons just like when he was a child, but claim everyone else is entitled for wanting value for their often hard-earned money. The type to suggest that if you dislike the malicious practices these companies employ, you're only allowed to not buy their products and then shut up about it, unless you make sure you wrote an entire essay explaining the full context of something so obvious. The type to prefer developers optimizing games for cut-down products instead of fixing essential bugs or working on new games and content. The type to parrot that most people play at 1080p, without ever asking himself: why is that? ...
I 100% disagree. OK, let's look at Hogwarts Legacy: the website clearly displays what it takes to run each resolution at a given settings level.

I run a 2080 and run every game just fine @ 1440p because I was not part of the entitled "I MUST HAVE ULTRA EVERYTHING even though my PC doesn't meet the requirements" crowd!


If you have no knowledge of PC parts, that's on you. EDIT: at the average consumer level, not directed at you.

Most of these poorly running games on PC today are poorly optimized console ports, rushed out because they're ready to start making money.

EDIT: then a few weeks later, when word gets out that the game runs like crap (so people stop buying it), you get a patch to help, when they should have made that patch before the release.

EDIT again: I might have missed some of your points since it was just a wall of text.
 
The issue with the 60 series is the memory bus. You could stick 32GB of VRAM on it and there wouldn't be any difference, as it saturates the interface long before size becomes an issue.

Modern titles absolutely do not need more than 8GB of VRAM, unless you're pushing the sliders to "Max My PC". And why on earth would anyone do that without "Max My PC" class hardware?
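For anyone who wants rough numbers behind the bus argument, here's a quick back-of-the-envelope sketch (the bus widths and data rates below are the commonly quoted specs for these cards; treat them as assumptions, not measurements):

```python
# Rough peak memory bandwidth: (bus width in bits / 8) * data rate in GT/s = GB/s.
# Figures are approximate and only illustrate why a narrow bus can bottleneck a
# card long before VRAM capacity does; doubling VRAM changes neither number.
def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gtps: float) -> float:
    return bus_width_bits / 8 * data_rate_gtps

cards = {
    "4060 Ti (128-bit, 18 Gbps GDDR6)": (128, 18.0),
    "3060 Ti (256-bit, 14 Gbps GDDR6)": (256, 14.0),
}
for name, (width, rate) in cards.items():
    print(f"{name}: ~{peak_bandwidth_gbs(width, rate):.0f} GB/s")
# -> ~288 GB/s vs ~448 GB/s (before counting any L2 cache hits)
```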
 
I was responding to blanket statements about 8GB. If you need 500 words to explain how such blanket statements aren't misleading by pointing out a handful more considerations about when 8GB is or isn't appropriate, I think that proves that the blanket statements are indeed sorely lacking in specificity.

Even the price needs to be specified because you can get 8GB GPUs under $200 just like I did; you just need to shop one generation older.

Does 8GB on a $300 GPU in 2023 and having to make compromises because of it feel like robbery? Sure. But that is a separate story.
Like I said, I think the underlying context is there and obvious, therefore I don't agree it's a blanket statement, especially under an article about the 4060 Ti 16GB release as opposed to the 4060 Ti 8GB, for $100 more. Most people understand/accept this and they don't need 500 words to explain all the obvious nuances when expressing the sentiment; instead they sum it up as "8GB is not enough". And again, within that context, 8GB GPUs feeling like robbery is not a separate story, and "8GB is not enough" is not aimed at cheap(er) or all 8GB GPUs, just the insanely priced ones.

Bottom line, disagreement is fine, outright dismissal and insults are not. Like this guy:

Because they're the normie entitled brat who thinks paying for a ##60 class card should grant them everything in the universe for 1080P

... Screaming entitlement at people who spent most of their life playing at minimum settings. Coming from a person that cannot comprehend that in huge parts of the world these GPUs cost an arm and a leg (compared to the living standard)... which in turn means people are expecting a better value for that price, whether in the form of better visuals or longevity and relevance... which is the root of the "8GB is not enough" sentiment. Calling these people "entitled brats"... talk about being out of touch.

I 100% disagree. OK, let's look at Hogwarts Legacy: the website clearly displays what it takes to run each resolution at a given settings level.

I run a 2080 and run every game just fine @ 1440p because I was not part of the entitled "I MUST HAVE ULTRA EVERYTHING even though my PC doesn't meet the requirements" crowd!


If you have no knowledge of PC parts, that's on you. EDIT: at the average consumer level, not directed at you.

Most of these poorly running games on PC today are poorly optimized console ports, rushed out because they're ready to start making money.

EDIT: then a few weeks later, when word gets out that the game runs like crap (so people stop buying it), you get a patch to help, when they should have made that patch before the release.

EDIT again: I might have missed some of your points since it was just a wall of text.
Again, I believe the claim is made within a context and is not aimed at 8GB GPUs in general, but at the ones with deliberately introduced limitations at insane prices.

Of course rushed game releases are a problem and they need to receive backlash. So do the corporations that cut value to keep their profit margins intact. "8GB is not enough" and other variants of the same argument are just that: a much-needed backlash. Even if it's misleading (again, I don't think it is), it is necessary and a step in the right direction.
 
Like I said, I think the underlying context is there and obvious, therefore I don't agree it's a blanket statement
Because people are supposed to universally agree that "the context" is "most demanding current and future AAA games at Ultra quality"? The 60-tier has always required some degree of compromise when running the most demanding stuff. The only thing that changed here is how much Nvidia is asking for 60-tier stuff, and gamers are finally voting with their wallets, telling Nvidia this isn't good enough for the money.

As I wrote earlier, value per dollar and "8GB (not) being enough for AAA games" are two separate debates.
 
... Screaming entitlement at people who spent most of their life playing at minimum settings. Coming from a person that cannot comprehend that in huge parts of the world these GPUs cost an arm and a leg (compared to the living standard)... which in turn means people are expecting a better value for that price, whether in the form of better visuals or longevity and relevance... which is the root of the "8GB is not enough" sentiment. Calling these people "entitled brats"... talk about being out of touch.
But those aren't the people who are being whiny in the areas of the internet that I'm dealing with.

I'm dealing with a completely different set of people than you are.
 
Let us perhaps NOT try to enforce our opinions on what is very much a subjective thing.

As far as the card itself, and how it may fit my personal use case? I will have to wait for reviews. The only graphically intense title I play is MSFS, and it is a VRAM juggernaut. Even on my 3060 with DLSS Balanced enabled @1440p it can chew up to 11 GB at times. The 4060 Ti IS on my radar, but I'm concerned the memory bandwidth may prove to be a limiter. So far I have not seen any deep dives into bandwidth usage under MSFS for anything; most card reviews seem to be pretty high-level, looking only at min/max/average framerates and 1% lows at various resolutions. It's likely just not going to be an issue in MSFS, but time will tell. I have, however, seen recent reviews of the 4000 series suggesting the cache is not very effective at alleviating the bus-width bottleneck (I believe it may have been GN, or HWUB, not looking it up again). I expect, like most things, it will depend on the title. I look forward to the day I can get those last few sliders up to max and enjoy the full experience at a smooth 60fps native, as that is very much what this title is about for me. Full immersion.
 
Just picked up a 3070 for $250. That's close enough in performance to this $500 card for me to give up the extra 8GB of VRAM. Maybe when the card drops down to a reasonable price I'll pick one up for the extra VRAM.
 
As far as the card itself, and how it may fit my personal use case? I will have to wait for reviews. The only graphically intense title I play is MSFS, and it is a VRAM juggernaut. Even on my 3060 with DLSS Balanced enabled @1440p it can chew up to 11 GB at times.

A note about GPU memory utilization: GPU drivers will not evict data from GPU memory unless the application is no longer running or there isn't enough remaining space to load the next thing. So looking at max utilization is a bad idea, as a good chunk of that is likely stale data that is only there on the off chance the game might need it later. Unfortunately, the only way to know how much is really needed is to test each individual area/scene, because different scenes require different amounts of GPU memory. Graphics assets are first loaded into system memory, and then transferred into GPU memory when they might be needed. Performance will only suffer when there isn't enough GPU memory available for the entire scene, forcing the graphics drivers to load and reload assets several times a second.

This is why texture sizes were discussed: loading larger-than-necessary textures into memory can introduce scene stutters if they don't all fit.
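If anyone wants to watch that allocation number for themselves, here's a minimal sketch using the nvidia-ml-py bindings (assuming an Nvidia card with a working driver). Keep in mind the "used" figure it reports is memory the driver has allocated, not what the current scene strictly needs:

```python
# Minimal VRAM allocation readout via NVML (pip install nvidia-ml-py).
# "used" = allocated/resident memory, which can include stale assets kept around
# "just in case" -- it is not the same as the working set a scene requires.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)        # first GPU in the system
info = pynvml.nvmlDeviceGetMemoryInfo(handle)
print(f"total:            {info.total / 2**30:.1f} GiB")
print(f"used (allocated): {info.used / 2**30:.1f} GiB")
print(f"free:             {info.free / 2**30:.1f} GiB")
pynvml.nvmlShutdown()
```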
 
We're all arguing over VRAM because the problem is that Nvidia is cheaping out on $20 worth of RAM to increase their margins, and then Nvidia fanboys are standing by on high alert to jump into the conversation to try to shame those who expect more for their money. If you respond by arguing with me, that means you are an Nvidia fanboy who was standing by on high alert to jump into the conversation.
 
AIBs aren't making many RTX 4060 Ti 16 GB variants, says HardwareLuxx editor. Suggests pricing is too close to the RTX 4070.

"Very Few Are Interested" in RTX 4060 Ti 16GB GPUs, Nvidia AIB Sources Reportedly Say : Read more
This is why I'm passing on the GeForce 40-series, AM5 and possibly AMD's 7000 series. Nvidia and AMD are still having visions of massive profits in the wake of the Covid/mining party and are offering us crappy products at inflated prices. I say "enough". I'm perfectly happy with RTX 20-series cards, AM4 and the RX 6000 series.
 
We're all arguing over VRAM because the problem is that Nvidia is cheaping out on $20 worth of RAM to increase their margins
It's not just the VRAM size... the whole 40 series is a joke. Aside from the 4090, everything else has been moved up a tier and is charged that tier's price.
 
A note about GPU memory utilization: GPU drivers will not evict data from GPU memory unless the application is no longer running or there isn't enough remaining space to load the next thing. So looking at max utilization is a bad idea, as a good chunk of that is likely stale data that is only there on the off chance the game might need it later. Unfortunately, the only way to know how much is really needed is to test each individual area/scene, because different scenes require different amounts of GPU memory. Graphics assets are first loaded into system memory, and then transferred into GPU memory when they might be needed. Performance will only suffer when there isn't enough GPU memory available for the entire scene, forcing the graphics drivers to load and reload assets several times a second.

This is why texture sizes were discussed: loading larger-than-necessary textures into memory can introduce scene stutters if they don't all fit.
It depends on the engine. MSFS, for instance, does not reuse many textures like traditional games; much of what is used is AI-upscaled satellite imagery and is NOT stored locally. It's being streamed from MS servers, augmented by said AI and then shifted through system RAM and VRAM. Don't forget, this thing is set up to render the entire planet one little 3D cone at a time. The game does do local texture caching if you fly in the same area, or need to pre-download textures due to poor internet speed, but you need to set this up. VRAM usage in MSFS is directly related to the complexity of the current scene, and typically peaks at low altitude, and more so when landing or flying over photogrammetry areas. I.e., the worst possible times. It seems to be very good at using what you have and no more, but when panning the camera, textures have to be loaded quickly from wherever they may be stored at the time (assuming you have LOD set high enough and the system resources to support that), requiring a lot of CPU overhead. This is when the stuttering starts. The game becomes quickly and briefly main-thread limited. If you don't have LOD set high enough, or the system/VRAM to support such a high setting, said textures sometimes need to be generated, or they just don't show up and you get the stock, blurry imagery from Bing Maps.

So, in short, MSFS is constantly loading and clearing both system RAM and VRAM. Usage is constantly changing. The more you have, the better performance will be had in low altitude and photogrammetry areas, where it is critical. Much of the above has been sussed out from interviews with Asobo staff, or in the forums where the team is quite forthcoming about the difference between this particular engine and what has been done before.
 
It depends on the engine. MSFS, for instance, does not reuse many textures like traditional games; much of what is used is AI-upscaled satellite imagery and is NOT stored locally. It's being streamed from MS servers, augmented by said AI and then shifted through system RAM and VRAM.
In the context of VRAM size, textures do get reused a lot since it makes no sense to re-download textures continuously from scratch for every frame in a given area. You'd need an internet connection with 100+GB/s of bandwidth to do that.

BTW, MSFS uses a "rolling cache" similar to Google Earth by default. If you fly over the same area regularly, you can set the cache to be large enough (say 100GB) to contain most data for it and not need much texture streaming anymore.
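As a rough sanity check on that bandwidth point, here's a tiny sketch with made-up, illustrative numbers (not MSFS measurements):

```python
# Illustrative only: if a dense scene held ~6 GB of unique texture data and none
# of it were reused frame to frame, the required streaming rate would be absurd.
scene_textures_gb = 6        # assumed texture working set for a busy area
fps = 60                     # target frame rate
home_connection_gbs = 0.125  # ~1 Gbit/s fibre, in GB/s

required_gbs = scene_textures_gb * fps
print(f"~{required_gbs} GB/s needed with zero reuse")            # ~360 GB/s
print(f"~{required_gbs / home_connection_gbs:.0f}x a 1 Gbit/s connection")
# Hence textures are reused from VRAM/system RAM and only new data is streamed.
```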
 
In the context of VRAM size, textures do get reused a lot since it makes no sense to re-download textures continuously from scratch for every frame in a given area. You'd need an internet connection with 100+GB/s of bandwidth to do that.

BTW, MSFS uses a "rolling cache" similar to Google Earth by default. If you fly over the same area regularly, you can set the cache to be large enough (say 100GB) to contain most data for it and not need much texture streaming anymore.
I covered rolling cache in my post (admittedly poorly); it's actually of limited use to most, and some who are encountering stutter issues recommend turning it off, for reasons unknown, though deleting and re-enabling the cache seems to resolve the issue temporarily. As for the rest, most people fly point to point, with flights taking many, many hours. The textures, for the most part, are unique, being synthesized with data from Bing Maps (or Google via some trickery, if you prefer) and other sample data (to reduce the internet requirements to reasonable levels). The samples are likely kept in system RAM, while the map data and locally synthesized textures are loaded and dumped as required. I have no special knowledge of this process, just stuff gleaned from Asobo interviews and several forums I frequent. MSFS initially had lofty system requirements, but most performance issues were overcome with SSDs, more/faster system RAM, and larger frame buffers. The 3060 is a known performer because of this, punching well above its class. The trick to running MSFS well seems to lean more towards shuffling texture/model data as fast as possible, augmented by a fat-cache CPU, with the graphical rendering requirements being somewhat modest relative to the rest. At least, that has been my experience.

(edit: Changed VRAM to system RAM regarding texture samples, though it likely resides in both in retrospect)
 
... Right now I can buy a 3060 Ti 8GB for $250, and that's actually a better buy than a 4060 even if this new card also cost $250.
It is not logical to compare a video card that is leaving the market with a video card that has just appeared. It has always been the case that the new is more expensive than the old, just because it is new.

People are paying a $200 premium for lagged fake frames (DLSS3) and stutter (RT) that make their experience worse. That's what is really happening. If DLSS3 actually made the game play faster or RT actually improved the eye candy, maybe someone could say "yes, it's worth paying $200 more for this", but nope, just marketing and delusion.

DLSS3 is a great technology and definitely worth the money; the fact that for some reason you don't like this technology is your personal opinion.
Hardware growth will never catch up with software optimization.
But where the marketing is really bad is not in breakthrough technologies, but in putting more memory on obviously weak video cards than they need. 12GB of memory on the 4060 would not allow it to compete with either the 3060 Ti or the 4060 Ti.
And the panic about 8GB of video memory not being enough is due to the fact that high-definition textures have (relatively recently) been brought into games; they are very necessary for 4K, but make practically no sense at 1080p. Many people who are used to setting everything to ULTRA at 1080p make a mistake when they cite this as an argument. They were smart enough to count the bus bits, but not to think about why exactly memory consumption in AAA games is growing.
 
I'm interested in a 16GB 4060 Ti as an upgrade from my RTX 2060. Just not for $500 (or higher for partner models).
Since Nvidia couldn't be bothered to make and sell an FE model for this one, partner models are your only choice here. How well the 16GB 4060 Ti scales beyond its 8GB counterpart with only a 128-bit memory bus is going to be interesting. My bet is that although its performance may hold up better once assets routinely break 8GB, it will hit an early ceiling due to the bandwidth bottleneck.