Nvidia GeForce RTX 4060 Review: Truly Mainstream at $299


spongiemaster

Admirable
Dec 12, 2019
2,353
1,329
7,560
I don't think you understand how computer prices work. They don't go up with time - they go down. Memory and storage have done nothing but drop in price as the years have passed. Boards have tracked with inflation and CPUs have gone up a little bit, but nothing close to inflation. GPUs are the exception, and most of their pricing has been due to shortages, crypto demand, and anti-competitive behavior (price fixing). The first two issues have worked themselves out. AI may be using GPU-like silicon, but off-the-shelf GPUs are not useful for commercial AI.
I don't think you've moved on from the early 2000s. There is no longer a cost-per-transistor improvement with new nodes. Nvidia is very likely paying TSMC significantly more for their custom 4N node than they were paying Samsung for the 8nm node for Ampere. GPU die sizes don't typically shrink with new nodes either; they're pretty much the same size, with more transistors packed in for better performance. So there is no cost benefit from squeezing more dies onto a wafer.

The costs of developing a GPU have also skyrocketed since the early days of 3D acceleration. All the low-hanging fruit has long since been picked, and it is no longer just about rasterized performance. While Nvidia continues to improve rasterized performance, their focus is shifting to ray tracing cores and tensor cores. The complexity of the software stack is also incomparable. DLSS, media encoders and decoders, super resolution video playback, Reflex, and everything else Nvidia is developing costs money to make. It all adds up.
 

adunlucas

Prominent
Nov 5, 2022
8
13
515
I don't think you've moved on from the early 2000s. There is no longer a cost-per-transistor improvement with new nodes. Nvidia is very likely paying TSMC significantly more for their custom 4N node than they were paying Samsung for the 8nm node for Ampere. GPU die sizes don't typically shrink with new nodes either; they're pretty much the same size, with more transistors packed in for better performance. So there is no cost benefit from squeezing more dies onto a wafer.

The costs of developing a GPU have also skyrocketed since the early days of 3D acceleration. All the low-hanging fruit has long since been picked, and it is no longer just about rasterized performance. While Nvidia continues to improve rasterized performance, their focus is shifting to ray tracing cores and tensor cores. The complexity of the software stack is also incomparable. DLSS, media encoders and decoders, super resolution video playback, Reflex, and everything else Nvidia is developing costs money to make. It all adds up.
I think AMD and Nvidia have seen that people buy GPUs even when they're overpriced, so they can raise prices and make more profit while selling a little bit less. Also, Nvidia's focus is on AI and RT R&D to get more contracts from enterprises. So we will see smaller gains from new generations.
 
With modern upcoming AAA games, 8 GiB won't be enough.


IMO, VRAM Requirements for upcoming AAA games:
@ 1K | Min = 8 GiB | Recommended = 12 GiB | Future Proofed = 16 GiB
@ 2K | Min = 12 GiB | Recommended = 16 GiB | Future Proofed = 20 GiB
@ 3K | Min = 16 GiB | Recommended = 20 GiB | Future Proofed = 24 GiB
@ 4K | Min = 20 GiB | Recommended = 24 GiB | Future Proofed = 28 GiB

There are already games out there that, at 2K resolutions, blow past 8 GiB of VRAM and cause major issues.

I've yet to run into a game I can't make work with 8GB of VRAM. Normally it's the textures that are the big culprit, and people thinking "Ultra Settings" is a good idea. Screen resolution has very little to do with the video memory needed to play these days; it's all in texture sizes and occasionally AA level. Just turn the textures down one, maybe two notches, whichever gets us to 2048x2048 textures, and it magically works.
 

Kamen Rider Blade

Distinguished
Dec 2, 2013
1,426
954
20,060
I've yet to run into a game I can't make work with 8GB of VRAM. Normally it's the textures that are the big culprit, and people thinking "Ultra Settings" is a good idea. Screen resolution has very little to do with the video memory needed to play these days; it's all in texture sizes and occasionally AA level. Just turn the textures down one, maybe two notches, whichever gets us to 2048x2048 textures, and it magically works.
Tell that to those who buy a ##50/60 card and whine / complain that they can't have "Ultra" for everything, run into "out of VRAM" issues, and don't realize that it's nVIDIA that screwed them over by under-equipping the ##60 in terms of VRAM.

Hell, the 4060 is a LITERAL downgrade in VRAM capacity from the 3060, with nVIDIA choosing to go with 8 GiB on the 4060 while the 3060 had 12 GiB.

How does that make you, the customer, feel, knowing that nVIDIA literally screwed you over on VRAM capacity on a newer generation of video card?
 

SSGBryan

Reputable
Jan 29, 2021
156
135
4,760
Oops. Some of this was written very late at night / early in the morning. I had the idea in my head that AD106 was x16 and AD107 was x8, but you're right: they're both x8. I've updated that paragraph. As for whether that matters for performance, most tests have shown that, provided you don't exceed a card's VRAM capacity, x8 is usually fine (not so much on Arc maybe?), even on PCIe 3.0 connections. 1% lows can drop a bit (e.g. if some new texture gets pulled from RAM on the fly, it takes twice as long on a 3.0 vs. 4.0 link), but over a longer gaming session, with a well-coded game, it shouldn't be much of an issue.
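To put rough numbers on that PCIe point, here's a quick sketch. The per-lane rates are the published PCIe 3.0/4.0 figures after encoding overhead; the 84 MB asset size is purely an illustrative uncompressed texture, not a measurement from any particular game.

```python
# Time to stream one asset across the bus at different link widths/generations.
# Per-lane throughput is the published spec figure (after 128b/130b encoding);
# the 84 MB asset size is purely illustrative.
def transfer_ms(size_bytes, lanes, gbps_per_lane):
    return size_bytes / (lanes * gbps_per_lane * 1e9) * 1e3

asset_bytes = 84e6
for label, lanes, rate in [("PCIe 3.0 x8", 8, 0.985),
                           ("PCIe 4.0 x8", 8, 1.969),
                           ("PCIe 4.0 x16", 16, 1.969)]:
    print(f"{label}: ~{transfer_ms(asset_bytes, lanes, rate):.1f} ms")
# ~10.7 ms vs ~5.3 ms vs ~2.7 ms -- a one-off hitch on a 3.0 x8 link, but a
# non-issue if the working set already fits in VRAM.
```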
The 4060 is trading blows with an A750.

An A750.
 

bit_user

Titan
Ambassador
I don't think you understand how computer prices work. They don't go up with time - they go down.
You have this thing about trends...

Trends are fundamentally descriptive, not prescriptive. Did you ever hear the standard investment disclaimer: "past performance is no guarantee of future results"? That applies quite broadly.

The proper way to use trends is to understand the underlying dynamics of a system. Upon modeling those dynamics, you can make projections. The improper way to use trends is to expect them to continue forever. Simple extrapolation is a bad idea, because virtually every trend will eventually break down. Pretty much the only trends immune from that are cosmological, like the expansion of the universe - and even that is true only in aggregate, not locally.

Talking specifics of historical tech pricing trends, they have to do with things like:
  • manufacturing nodes increasing in density faster than wafer price
  • an uncharacteristically long period of low inflation
  • a more competitive semiconductor manufacturing sector

Factors that no longer apply to the same degree, if at all.

Memory and storage have done nothing but drop in price as the years passed.
As a long-term trend, yes. In the shorter term, like on the scale of a year or so, we've seen DRAM prices increase by nearly 2x from one year to the next. I'm not sure NAND has ever increased by that much, but it hasn't been the monotonically-decreasing trend you paint it as.

It also needs to be pointed out that NAND has been benefiting from two additional factors that don't apply to any other semiconductor product, currently in production:
  • multi-bit cells
  • 3D cell structure

These two techniques have allowed NAND bit-densities to increase significantly faster than any other aspect of semiconductor technology, over the same period of time. Therefore, it's wrong to use NAND as a standard by which to judge other chips.

Boards have tracked with inflation and CPUs have gone up a little bit, but nothing close to inflation. GPUs are the exception and most of their pricing has been due to shortages, crypto demand, and anti-competitive behavior (price fixing).
First, CPUs didn't have to respond to the crypto craze in the same way. You could host many GPUs running crypto from a single CPU, and even a fairly low-end one, at that.

Furthermore, with GPUs, I think Nvidia and AMD both saw what prices the market seemed willing & able to support. This informed their decision about how big to make their next generation.

Second, GPUs rely on die area much more than CPUs for their performance. A $600 Raptor Lake i9-13900K is just 257 mm^2 on a 7 nm node, whereas an RTX 4070 Ti is 294.5 mm^2 on a more expensive node, plus it has to include DRAM, a PCB, and a 285 W thermal solution, and there's a board maker applying their own markup between Nvidia and the channel. This makes GPUs much more sensitive to wafer pricing than CPUs.
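For a rough feel of how die area and wafer price interact, here's a back-of-the-envelope sketch. The gross-die formula is the standard approximation; the wafer prices are made-up round numbers purely for illustration, since actual foundry pricing is confidential.

```python
import math

def gross_dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Standard gross-die approximation for a round wafer (ignores yield and scribe lines)."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

# (label, die area in mm^2, assumed wafer price in $) -- the wafer prices are
# illustrative placeholders, not real Intel/TSMC figures.
parts = [("i9-13900K die", 257.0, 10_000),
         ("RTX 4070 Ti die (AD104)", 294.5, 16_000)]

for name, area, wafer_cost in parts:
    n = gross_dies_per_wafer(area)
    print(f"{name}: ~{n} dies/wafer, ~${wafer_cost / n:.0f} of silicon per die")
# Even before yield losses, a bigger die on a pricier wafer roughly doubles the
# silicon cost per chip -- and the GPU still needs GDDR, a PCB, VRMs, and a cooler on top.
```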
 
Last edited:

lmcnabney

Prominent
Aug 5, 2022
192
190
760
As a long-term trend, yes. In the shorter term, like on the scale of a year or so, we've seen DRAM prices increase by nearly 2x from one year to the next. I'm not sure NAND has ever increased by that much, but it's not a smooth trend like you paint it.
But we aren't talking about short term for the 4060. The chip running on this board is an AD106 which is made by TSMC on their 5nm process. The 5nm process has been running since 2019. It is a mature process and is in no way cutting edge. If you want to talk about high prices due to using current tech you would be talking about Apple's 3nm chips. Nvidia is saving big money by using old lithography. Even then the chip is pretty damn small (190mm²).

A $600 Raptor Lake i9-13900K is just 257 mm^2 on a 7 nm node
Sigh - Raptor Lake is a 10nm process. I would also remind you that the i9, i7, i5 and i3 processors are all the same 257mm² chip. They are all binned and have defective or ill performing (hot) cores disabled. Some will sell for $600 and some will sell for $150.

Are you new here?
 

TJ Hooker

Titan
Ambassador
Sigh - Raptor Lake is a 10nm process. I would also remind you that the i9, i7, i5 and i3 processors are all the same 257mm² chip. They are all binned and have defective or ill performing (hot) cores disabled. Some will sell for $600 and some will sell for $150.
i5 13600K and higher use the ~250 mm^2 die, the rest of the 13th gen i5s use a 215 mm^2 die, and the 13th gen i3s use a 163 mm^2 die (the latter two being rebadged Alder Lake/12th gen dies).

Edit: And I'm not sure why some are so insistent on going by Intel's original node branding (10nm) vs the new branding (Intel 7). The "nm" value doesn't correspond to any actual feature size for any company's 7nm-class nodes, and Intel's density is on par with TSMC's and Samsung's. Intel 7 is as much "7nm" as TSMC N7 or Samsung 7LPP is, as far as I can tell.
 
Last edited:
  • Like
Reactions: bit_user

spongiemaster

Admirable
Dec 12, 2019
2,353
1,329
7,560
But we aren't talking about short term for the 4060. The chip running on this board is an AD106 which is made by TSMC on their 5nm process. The 5nm process has been running since 2019. It is a mature process and is in no way cutting edge. If you want to talk about high prices due to using current tech you would be talking about Apple's 3nm chips. Nvidia is saving big money by using old lithography. Even then the chip is pretty damn small (190mm²).
Again, you are living in the wrong decade. Try keeping up with current events. Prices of even established nodes are not going down because they are "mature"; they're going up.

TSMC warns of a price hike for its chips and Apple, its largest customer, is unhappy

Another key piece of information from this article:

As it is, chips by TSMC are already around 20% more expensive compared to those from its direct rivals.

Like I said, Nvidia is going to be paying significantly more for their current chips than they were for Ampere dies from Samsung. Additionally, Nvidia is not using the standard 5nm TSMC node; they are using a custom 4N node. Would you like to wager on whether the cost of those is going to be less than, equal to, or more than the standard 5nm node TSMC produces?
 
Last edited:

spongiemaster

Admirable
Dec 12, 2019
2,353
1,329
7,560
Sigh - Raptor Lake is a 10nm process.
The name of the node is irrelevant. Intel's 10nm process has density comparable to or better than the competition's 7nm nodes. That's why they renamed it Intel 7: to get their level of lying in line with the competition's. As pointed out by others, pretty much everything else you've said is factually inaccurate as well.
 

sherhi

Distinguished
Apr 17, 2015
80
52
18,610
>Here's the thing: You don't actually need more than 8GB of VRAM for most games, when running at appropriate settings...The other part is that many of the changes between "high" and "ultra" settings — whatever they might be called — are often of the placebo variety. Maxed out settings will often drop performance 20% or more, but image quality will look nearly the same as high settings.

I agree with this. And given that both AMD & Nvidia have now set 8GB as the defacto standard VRAM allotment for the largest PC base for this generation, future PC games will be optimized for this amount, including console ports.

I'm surprised that the $300 price point is seen as an immutable number by many. Inflation in the last couple of years has been substantial in the US, and even more so elsewhere. Using the govt's CPI inflation calc (https://bls.gov/data/inflation_calculator.htm), $299 today was worth $260 when the 3060 launched (close to the 3050's $249). And the 3060's $329 launch price is worth $378 today, or close to the 4060 Ti's launch price. In short, per inflation alone, pricing has shifted almost a tier upward. That's a considerable amount that no reviewer ever mentions when they harp on pricing.

Some peeps are calling the 4060 grossly overpriced, and I just had to laugh. In finance, Time Value of Money (TVM) is a fundamental concept that underpins pretty much every calculation you make. It's strange that a crowd obsessed with numbers would be so ignorant of it.
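Spelling out the CPI adjustment from the quote above as a quick sanity check. The ratio is back-solved from the figures quoted there ($329 then, roughly $378 now), not pulled live from the BLS calculator, so treat it as an approximation.

```python
# CPI adjustment as described in the quote above. The ratio is back-solved from
# the quoted figures ($329 in early 2021 ~= $378 in mid-2023); it's an
# approximation, not live BLS data.
cpi_ratio = 378 / 329          # ~1.15 cumulative inflation, Feb 2021 -> mid 2023

rtx_3060_msrp = 329            # at launch, Feb 2021
rtx_4060_msrp = 299            # at launch, mid 2023

print(f"3060's $329 in today's dollars:   ~${rtx_3060_msrp * cpi_ratio:.0f}")   # ~$378
print(f"4060's $299 in early-2021 dollars: ~${rtx_4060_msrp / cpi_ratio:.0f}")  # ~$260
```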
You don't need 8GB for most games... so true. Also, not all Chinese people are smaller just because Yao Ming is tall! This kind of argument doesn't help anyone.

High/ultra is tested to show the power of the card. Sure, you can lower settings on new games released THIS year... but what does that tell you about the future of such a card? If major titles are already forcing you to lower settings on this brand new "mid-tier card" right at launch, isn't that more marketing / journalists' PR? Because clearly this card is not "mainstream" or "mid-tier" by any means; it's barely an entry-level card, or to be more precise a "retro/legacy gaming card" for older major titles (and even that sometimes doesn't hold: only +3 FPS at 4K compared to a 2060 in some older games? The scaling of this card is horrible).

The standard for VRAM was set almost 3 years ago by the current consoles, and it's not 8GB; the market for major titles works exactly the opposite way from what you described. Sure, many PC games will target 8GB or less because they want to cover as much of the market as possible, but then we're not talking about major titles, rather less demanding (indie) games, esports, MOBAs, MMOs, etc., and you certainly don't need a 360-400€ GPU for those... and honestly I wouldn't even go this route; that's just bad marketing for 8GB cards. You could use the same argument for 6GB cards or something, because a solid chunk of users (Steam HW survey) has those cards, and that's why this argument just doesn't stand.

Is it overpriced because there are consoles really stomping these cards in price/performance? Also, I bought a 60-series card almost 10 years ago, and inflation-adjusted over this period I would dish out the same money on another 60-series card for the same 1080p/60fps experience. That's called stagnation imposed by a duopoly (it's not a natural, truly free-market situation, so don't even bring that up here; we have no idea how much these cards truly cost to make).
I've yet to run into a game I can't make work with 8GB of VRAM. Normally it's the textures that are the big culprit, and people thinking "Ultra Settings" is a good idea. Screen resolution has very little to do with the video memory needed to play these days; it's all in texture sizes and occasionally AA level. Just turn the textures down one, maybe two notches, whichever gets us to 2048x2048 textures, and it magically works.
Me too, I play 10-year-old Paradox games, so this 60-series card for 360-400€ and 1080p medium-settings gaming is justified. /s
The 4060 is trading blows with an A750.

An A750.
This is very sad and hopefully very good for the average consumer; it should push prices down hard... unless the average consumer is stupid?
 
  • Like
Reactions: oofdragon
Without having read all the comments, my reaction to the 4060 is basically: who cares? It's nice they released a $300 mainstream card; do they want a medal?

In my opinion, unless you are talking about a 4050 or lower class of card, 12GB of VRAM should be the minimum, period. 300 bucks for an 8GB card that $200-250 AMD cards from last generation can compete with is sad; good job Nvidia, way to show you care. This card at 300 bucks should, in my opinion, be DOA. Even if you are coming from something like an RX 580 or an older GTX 1060, you could still score an RX 6650 XT on Newegg for around $250. Or they had a 6700 XT 12GB for $319.99.

So in my mind, if cost savings are what you want, there are better value cards than the 4060. If you are after performance, an RX 6700 XT or 6750 XT for not much more is a better choice than the 4060.

I'm not a professional reviewer, just someone who's tinkered with computers and gaming a lot over the years, and this is simply my opinion.
 

bit_user

Titan
Ambassador
Is it overpriced because there are consoles really stomping these cards in price/performance
Huh? If you mean consoles stomp entry-level gaming PCs in price/performance, that's pretty much always been true - at least, until late in a console's lifecycle, anyway. If you mean a console stomps this specific card in price/performance, then no - not for equally optimized titles, at least. This GPU has 50% more compute performance than a PS5, for instance.
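For what it's worth, the "50% more compute" figure is just the peak-FP32 math. The shader counts and clocks below are the commonly published specs, and peak TFLOPS says nothing about memory, VRAM, or optimization, so treat this as a rough sketch.

```python
# Peak FP32 throughput = 2 ops (FMA) x shader count x clock.
# Specs are the commonly published figures; real game performance depends on
# much more than peak compute (memory bandwidth, VRAM, optimization).
def fp32_tflops(shaders, clock_ghz):
    return 2 * shaders * clock_ghz / 1000

rtx_4060 = fp32_tflops(3072, 2.46)   # ~15.1 TFLOPS
ps5_gpu  = fp32_tflops(2304, 2.23)   # ~10.3 TFLOPS
print(f"RTX 4060 ~{rtx_4060:.1f} TFLOPS vs PS5 ~{ps5_gpu:.1f} TFLOPS "
      f"(~{(rtx_4060 / ps5_gpu - 1) * 100:.0f}% more)")
```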

PC gaming isn't about saving money vs. consoles. That's generally a losing proposition.
 

Tac 25

Estimable
Jul 25, 2021
1,391
421
3,890
This is of course a subjective opinion... I would always choose a GPU over a console. With a GPU, I get a whole PC, not just something for gaming. I chat on Discord, Facebook, and forums, and there's also the Magic: The Gathering gallery that I take good care of - these require a PC. In regards to gaming, a GPU is easier to upgrade than a console. I was once a console gamer, but shifted to PC gaming as it is more useful in my situation.
 
  • Like
Reactions: bit_user

sherhi

Distinguished
Apr 17, 2015
80
52
18,610
Huh? If you mean consoles stomp entry-level gaming PCs in price/performance, that's pretty much always been true - at least, until late in a console's lifecycle, anyway. If you mean a console stomps this specific card in price/performance, then no - not for equally optimized titles, at least. This GPU has 50% more compute performance than a PS5, for instance.

PC gaming isn't about saving money vs. consoles. That's generally a losing proposition.
Who cares about compute performance? This card barely pulls 20 fps at 4K (excluding the lack-of-VRAM issues). The end result in gaming is what matters. Show me a new PC build with this GPU below 550€ that can at least rival 4K console gaming... well, obviously you can't, so what are we even talking about? Want to play modern titles with a new GPU and your budget is 550€ or less? Go buy a console. That still stands almost 3 years after the new consoles' release. Nvidia is doing this on purpose, per their official marketing strategy of pricing the average gaming GPU around the price of a new console... which is designed to fail.
 

Gillerer

Distinguished
Sep 23, 2013
366
86
18,940
Introducing an RTX 4060 at $299 would be "great for mainstream gamers" if Nvidia hadn't used a GPU with a size and cost similar to what would have been used for an xx50-class product in the past.

Add to that the fact that VRAM buffer size, VRAM bus width and PCIe lane count have all actually decreased gen-on-gen, and this is arguably an entry-level product. The partner cards also remind me more of xx50 designs and quality.

When have entry level graphics cards had a $299 MSRP? How is that reasonable or "great for mainstream gamers"?

"RTX 4060" is just a name. This gives Nvidia the excuse to set a price $80-100 higher than they could on an exact same card with a "4050" badge. It will work fine on people who aren't interested enough to read reviews and just walk in a store and buy the best new Nvidia card they can afford.

(Nvidia did this same "shifting up" of their entire stack from the RTX 4080 down - I believe most if not all of them use a die from one class lower than previous generations used.)
 
Last edited:

bit_user

Titan
Ambassador
Who cares about compute performance? This card barely pulls 20 fps at 4K (excluding the lack-of-VRAM issues). The end result in gaming is what matters. Show me a new PC build with this GPU below 550€ that can at least rival 4K console gaming... well, obviously you can't, so what are we even talking about?
LOL. Consoles don't do native 4k rendering. They use tricks like checkerboard rendering and upscaling akin to FSR.

So, for a fair comparison, you should be using DLSS on this card.
 

bit_user

Titan
Ambassador
(Nvidia did this same "shifting up" of their entre stack from the RTX 4080 down - I believe most if not all of them use a class lower die than the previous generations used to.)
No, I think the RTX 4080 is exactly where it was meant to be. The RTX 4070 is where things got off-track. Normally, it would've used a partially-enabled version of the RTX 4080 die, but still with a 256-bit memory interface. Historically, their 60-tier has the 192-bit interface.

The 3000-series was weird, because they were clearly trying to figure out how to increase memory capacity without going 2x. That's why I think the 3080 had a 320-bit memory interface - so they could outfit it with 10 GB - not because it actually needed all of that bandwidth. But, as usual, the 70-tier had a 256-bit interface.
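For anyone wondering why bus width and capacity are tied together: GDDR6/6X chips are 32 bits wide and commonly come in 1 GB or 2 GB densities, so capacity is essentially (bus width / 32) times chip size. A quick sketch of that arithmetic follows; clamshell configurations that double capacity are ignored for simplicity.

```python
# Capacity options for a given bus width, assuming one 32-bit GDDR6/6X chip per
# channel at 1 GB or 2 GB density (clamshell/double-sided configs ignored).
def capacity_options_gb(bus_width_bits, chip_sizes_gb=(1, 2)):
    channels = bus_width_bits // 32
    return {size: channels * size for size in chip_sizes_gb}

for card, bus in [("RTX 3080", 320), ("RTX 3070", 256), ("RTX 3060", 192), ("RTX 4060", 128)]:
    print(f"{card} ({bus}-bit): {capacity_options_gb(bus)} GB options")
# 320-bit -> 10 or 20 GB, 256-bit -> 8 or 16 GB, 192-bit -> 6 or 12 GB, 128-bit -> 4 or 8 GB
```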
 
  • Like
Reactions: atomicWAR

kb7rky

Prominent
BANNED
May 11, 2023
43
19
535
A lot of those specs are worth considering...do you want a mediocre card, or one that can handle the newest games without so much as a hiccup?

Remember...you get what you pay for. In this case, it seems like you're getting all the promises of a top-shelf card, at bargain basement prices.
 

Elusive Ruse

Estimable
Nov 17, 2022
456
594
3,220
This is of course a subjective opinion... I would always choose a GPU over a console. With a GPU, I get a whole PC, not just something for gaming. I chat on Discord, Facebook, and forums, and there's also the Magic: The Gathering gallery that I take good care of - these require a PC. In regards to gaming, a GPU is easier to upgrade than a console. I was once a console gamer, but shifted to PC gaming as it is more useful in my situation.
Why is it always mutual exclusivity with you types? I have always used both consoles and PCs, dating back to the Atari 2600 and Intel 486 days (to be clear, the PC was my dad's; I'm not that old). Let me tell you a secret: you can be a PC enthusiast and an avid console gamer at the same time.
 
  • Like
Reactions: sherhi
D

Deleted member 431422

Guest
The prices and availability are taken from a single shop. Prices include 23% tax.

The cheapest RTX 4060 I found is the Gainward Ghost at ~$350; the most expensive is the Gigabyte Aorus at ~$450.
The RX 7600 is ~$342 and ~$350. There were only two cards listed, both from XFX. I don't know if there's a supply shortage or if it's only this retailer.

Performance is tied at this price point too, so to each their own, I guess. I'd go with AMD, but that's my personal preference.
 
Tell that to those who buy a ##50/60 card and whine / complain that they can't have "Ultra" for everything, run into "out of VRAM" issues, and don't realize that it's nVIDIA that screwed them over by under-equipping the ##60 in terms of VRAM.

Hell, the 4060 is a LITERAL downgrade in VRAM capacity from the 3060, with nVIDIA choosing to go with 8 GiB on the 4060 while the 3060 had 12 GiB.

How does that make you, the customer, feel, knowing that nVIDIA literally screwed you over on VRAM capacity on a newer generation of video card?

It's not nVidia that screwed them out of settings, it's video game manufacturers who are all trying to be the next Crysis and doing dumb stuff with presets, like setting texture sizes to 4096x4096. There is no quality difference between 4K and 2K textures at resolutions south of 4320p (7680x4320).

Would you blame nVidia for a game dev running Ethereum hashes during frame rendering, just so reviewers can talk about how much of a "monster" the game is on resources?

Me too, I play 10-year-old Paradox games, so this 60-series card for 360-400€ and 1080p medium-settings gaming is justified. /s

See the above. I said modern titles. It's not hard: just go in and adjust the textures slider down one or two notches and voila, it works on 6~8GB cards. GPU memory is not nearly as important as GPU memory bandwidth. We could have an entire discussion about it, but the long and short of it is that programs load all the necessary textures for that zone/area/etc. from system memory prior to it being rendered. The GPU memory only needs to hold the assets needed for the current area, and potentially the next area if it's seamless zoning like Diablo 4. After you've loaded what is needed, any extra GPU memory is effectively a cache in case you need something again later.

And to give an idea of why 4K textures are dumb: they are loaded into GPU memory as the primary texture plus two downsampled versions. During rendering, the game engine will choose a version based on target rendered space and distance from the viewer, in a process called MIP mapping. Then it will further downsample and rotate the texture to fit into the target screen space.

This is per texture (texel counts; the MB figures assume 4 bytes per texel, uncompressed):

4096x4096 primary:
16,777,216 (4096x4096 texture)
4,194,304 (2048x2048)
1,048,576 (1024x1024)
Total: 22,020,096 texels (~84MB prior to compression)

2048x2048 primary:
4,194,304 (2048x2048)
1,048,576 (1024x1024)
262,144 (512x512)
Total: 5,505,024 texels (~21MB prior to compression)

Yeah, 4K textures take up a ridiculous amount of GPU memory and are rarely chosen, because the target render space for a single texture on a 4K screen (3840x2160) will rarely exceed a 200x200 box. Instead it'll choose the 2K or 1K versions and then downsample them to fit into that box. Now, this will likely change in the next five to six years once 4320p or even 8640p screens become more common.
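As a sanity check on those per-texture numbers, here's the same arithmetic in a few lines. It assumes uncompressed RGBA8 (4 bytes per texel) and the three-level chain described above, which is a simplification of what real engines do.

```python
# Memory for a 3-level MIP chain, assuming uncompressed RGBA8 (4 bytes/texel),
# matching the per-texture numbers above. Real engines store deeper MIP chains
# and use block compression, so treat this as an upper-bound illustration.
def mip_chain_mib(top_res, levels=3, bytes_per_texel=4):
    texels = sum((top_res >> i) ** 2 for i in range(levels))
    return texels * bytes_per_texel / 2**20

for top in (4096, 2048):
    print(f"{top}x{top} chain: ~{mip_chain_mib(top):.0f} MiB per texture")
# 4096x4096 chain: ~84 MiB, 2048x2048 chain: ~21 MiB -- dropping the texture
# setting one notch cuts the per-texture footprint by ~4x.
```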

To anyone wondering why I've been putting effort into informing folks: it's because there are many people who have older cards, or who have more important financial concerns than upgrading to more expensive graphics cards. They should not feel left out because ignorance has everyone on YouTube thinking "you need 12GB to play anything". All the reviewers are doing is going into the game, setting the default preset to "Ultra", then running FPS benchmarks, because that is what people will click on and like and subscribe to. Even if it's horribly optimized settings, even if game devs were running Ethereum hashes to make their product stand out as "more graphically intensive", they would still do it for the clicks.
 
Last edited:
It's not nVidia that screwed them out of settings, it's video game manufacturers who are all trying to be the next Crysis and doing dumb stuff with presets, like setting texture sizes to 4096x4096. There is no quality difference between 4K and 2K textures at resolutions south of 4320p (7680x4320).

Would you blame nVidia for a game dev running Ethereum hashes during frame rendering, just so reviewers can talk about how much of a "monster" the game is on resources?
I'm not sure if any games are actually doing 4K textures or not. I know game engines support it, but game artists still have to use the feature. But even 2K textures are generally overkill for 2560x1440 and lower gaming. I believe quite a lot of recent games do have 2K textures available, and that's what has caused them to exceed 8GB of VRAM use. But then mipmapping still means those 2K textures aren't applied in most cases (other than the floor close to the user as an example).

The problem with mipmaps is that I don't think games are usually good about, for example, loading 1K and lower texture sizes into memory and then only loading the 2K texture when it's needed, because how do they know in advance when it's needed? So they often will load but not use 2K textures (if you opt for that setting in the game), just in case. Otherwise you potentially get stuttering / texture pop. Not that you'd notice a "pop" if an object used a 1K texture and then upgraded it to a 2K texture via streaming.

This is why games like Red Dead Redemption 2 put in hard limits on options. You want Ultra settings (1K textures, AFAICT), you need at least 4GB of VRAM at 1080p, and 6GB for 1440p/4K IIRC. But then the game is pretty smart about managing memory use and working within those constraints. Other games (Far Cry 6) tend to be less reliable in their VRAM management.
 
I'm not sure if any games are actually doing 4K textures or not. I know game engines support it, but game artists still have to use the feature. But even 2K textures are generally overkill for 2560x1440 and lower gaming. I believe quite a lot of recent games do have 2K textures available, and that's what has caused them to exceed 8GB of VRAM use. But then mipmapping still means those 2K textures aren't applied in most cases (other than the floor close to the user as an example).

The problem with mipmaps is that I don't think games are usually good about, for example, loading 1K and lower texture sizes into memory and then only loading the 2K texture when it's needed, because how do they know in advance when it's needed? So they often will load but not use 2K textures (if you opt for that setting in the game), just in case. Otherwise you potentially get stuttering / texture pop. Not that you'd notice a "pop" if an object used a 1K texture and then upgraded it to a 2K texture via streaming.

This is why games like Red Dead Redemption 2 put in hard limits on options. You want Ultra settings (1K textures, AFAICT), you need at least 4GB of VRAM at 1080p, and 6GB for 1440p/4K IIRC. But then the game is pretty smart about managing memory use and working within those constraints. Other games (Far Cry 6) tend to be less reliable in their VRAM management.


All the modern games have 4096x4096 textures loaded along with 2 lower-res versions; smaller versions are then created on the fly with downsampling. You are correct that those 4K versions are rarely used, but they are still being loaded into GPU memory on "Ultra" settings. When you load a texture you don't just load one version into memory; it automatically loads the other versions stored with it, since textures are stored in native resolution along with 2~3 downsampled copies. Graphics engines are pretty smart about figuring out which one to rasterize, but they'll still load into memory what they are told to load into memory. The culprit behind all this wastefulness is the default "Ultra" settings everyone likes to use, combined with the "can it run Crysis" mentality. Game creators know reviewers are going to instantly go to "Ultra" to show how "demanding" their game is, so it's set up to be extremely wasteful of resources. Screen resolution doesn't really matter anymore; textures are the single biggest source of memory utilization, and they are the same size regardless of screen settings.

I've actually gone into games and proven this: go into an area on "Ultra kill my computer" settings and observe memory utilization. Exit, slide textures down one notch, go back into the same area, and notice that GPU memory utilization has plummeted. You need to pay attention to whether the game engine has three or four texture settings: if three, then it's usually 1K/2K/4K by default; if four, the last two might both be 4K, with or without some sort of filtering technique applied. I could get Diablo 4 playing smoothly on 8GB of VRAM on Ultra by simply lowering the texture slider. This is for games that are relatively new, you know, the ones everyone is screaming "you need 12GB or higher" about. If a game is smart enough to manage its own memory footprint, then it's likely not going to sabotage itself in the memory department.
 
Last edited: