Review Nvidia GeForce RTX 4060 Review: Truly Mainstream at $299


DougMcC

Reputable
Sep 16, 2021
169
119
4,760
The MSI product page for the GeForce RTX™ 4060 Ti GAMING X 8G states "PCI Express® Gen 4 x16 (uses x8)". Does this mean it's a physical x16 bus but is software limited to x8? Couldn't find other explicit confirmations of this.
It's almost certainly a hardware limitation rather than a software one, e.g. an x16 connector using only x8 lanes electrically. They use a full-width connector to provide stability for a long card, but a bunch of those connector wires don't lead anywhere.
 
  • Like
Reactions: deesider

Kamen Rider Blade

Distinguished
Dec 2, 2013
1,392
924
20,060
With modern upcoming AAA games, 8 GiB won't be enough.


IMO, VRAM Requirements for upcoming AAA games:
@ 1K | Min = _8 GiB | Recommended = 12 GiB | Future Proofed = 16 GiB
@ 2K | Min = 12 GiB | Recommended = 16 GiB | Future Proofed = 20 GiB
@ 3K | Min = 16 GiB | Recommended = 20 GiB | Future Proofed = 24 GiB
@ 4K | Min = 20 GiB | Recommended = 24 GiB | Future Proofed = 28 GiB

There are already games out there that, at 2K resolutions, are blowing past 8 GiB of VRAM and causing major issues.
 

TJ Hooker

Titan
Ambassador
The MSI product page for the GeForce RTX™ 4060 Ti GAMING X 8G states "PCI Express® Gen 4 x16 (uses x8)". Does this mean it's a physical x16 bus but is software limited to x8? Couldn't find other explicit confirmations of this.
Basically, the PCB connector is physically x16 in size, but only half the lanes are actually connected (because the GPU die itself likely only has a x8 connection). You may see this referred to as "physical" vs "electrical" lane counts.

The statement you quoted from this article is incorrect in saying that the 4060 Ti is different than the 4060 in this regard. Both the 4060 and the 4060 Ti are x16 physical, x8 electrical.
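One way to sanity-check the electrical width on a running system is to ask the driver, which reports both the negotiated and maximum link widths. A minimal sketch, assuming `nvidia-smi` is installed; the query fields are real `nvidia-smi --query-gpu` properties, but the `pcie_link_widths` helper and its CSV parsing are my own illustration:

```python
import subprocess

def pcie_link_widths(csv_line: str) -> tuple[int, int]:
    """Parse 'current, max' lane counts from one line of nvidia-smi CSV output."""
    current, maximum = (int(field.strip()) for field in csv_line.split(","))
    return current, maximum

def query_gpu_link() -> tuple[int, int]:
    # Ask the driver for the electrically negotiated and maximum link widths.
    out = subprocess.check_output(
        [
            "nvidia-smi",
            "--query-gpu=pcie.link.width.current,pcie.link.width.max",
            "--format=csv,noheader",
        ],
        text=True,
    )
    return pcie_link_widths(out.strip())

# On a 4060/4060 Ti the max width should read 8 even in an x16 slot,
# because only eight lanes are wired from the connector to the GPU.
```

The physical slot size never shows up here; the driver only sees the lanes that actually trained.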
 

evdjj3j

Distinguished
Aug 4, 2017
361
389
19,060
With modern upcoming AAA games, 8 GiB won't be enough.


IMO, VRAM Requirements for upcoming AAA games:
@ 1K | Min = _8 GiB | Recommended = 12 GiB | Future Proofed = 16 GiB
@ 2K | Min = 12 GiB | Recommended = 16 GiB | Future Proofed = 20 GiB
@ 3K | Min = 16 GiB | Recommended = 20 GiB | Future Proofed = 24 GiB
@ 4K | Min = 20 GiB | Recommended = 24 GiB | Future Proofed = 28 GiB

There are already games out there that, at 2K resolutions, are blowing past 8 GiB of VRAM and causing major issues.
Of course it won't be. Games will target the memory on consoles, which is 16 GB.


Edit: The PS5 has a 512MB DDR4 memory pool for the OS, leaving the full 16GB of GDDR available. The Series X splits its 16GB of GDDR into a 10GB 320-bit pool and a 6GB 192-bit pool, allowing the Xbox to allocate it as they see fit. I really hope the Xbox's Windows-based OS doesn't need 6GB; if it does, they need to change to a different OS. Wikipedia says the Xbox ends up with about 13.5GB of usable RAM for graphics. So expect new games to require between 13.5-16GB of VRAM to run optimally. 8GB is definitely not enough.
 
Last edited:
Has anyone done a PCIe 3.0 vs PCIe 4.0 test with this card? Or, I guess, any other card that was only rocking x8 PCIe 4.0 lanes?

I feel like this could be a big issue as the target market for this tier of card is the same kind of market that will still be rocking an older PCIe 3.0 motherboard. AKA your budget gamer.

Since the improvements are not gigantic, this card could potentially be a waste of money if PCIe 3.0 degrades its performance by even a marginal amount, especially on the $/FPS metric.
 
  • Like
Reactions: atomicWAR

Giroro

Splendid
It sounds like a bunch of people are going to get an RX 7600 at $250, and a lot more at $200 when it drops a bit down to finally hit the magic mainstream price.

They should probably be pretty happy with their RX 7600, and that's fine.

What moneybags Jensen still doesn't understand (or care about) is that your typical mainstream gamer has no upsell potential. They don't sit down and build a price/performance matrix in Excel to figure out if it's worth spending 30% more money to get 20% higher performance. That's what enthusiasts do.
A mainstream PC gamer will just try to buy the best card that they can get for the amount of money they have left over in their computer budget, which is usually around $200 (maybe even less, right now).
Just because Nvidia dictates that their cheapest and worst card is $300, that doesn't magically change the mainstream price to $300. People only have as much money as they have. We have other priorities.
Mainstream gamers usually don't think they need to upgrade to a new desktop GPU, so they only willingly upgrade when the price is right. They're locked into things like WoW, LoL, Minecraft, etc., which run well enough (30 fps) on old hardware and sometimes an iGPU.

If there aren't any good-enough cards available at that magic $200 number, it works out that your typical mainstream PC buyer will change their mind and either buy a gaming laptop (which Nvidia is OK with), or buy a cheap non-gaming PC and game on a console (which Nvidia is less OK with). We've been watching this happen loudly for the last couple of years. That's why there's a good chance that the PS5 will be the best-selling console of all time, even though it's a giant monster that brings almost nothing unique to the table hardware-wise, and brings even less with its software/game library.
 

TJ Hooker

Titan
Ambassador
Has anyone done a PCIe 3.0 vs PCIe 4.0 test with this card? Or, I guess, any other card that was only rocking x8 PCIe 4.0 lanes?

I feel like this could be a big issue as the target market for this tier of card is the same kind of market that will still be rocking an older PCIe 3.0 motherboard. AKA your budget gamer.

Since the improvements are not gigantic, this card could potentially be a waste of money if PCIe 3.0 degrades its performance by even a marginal amount, especially on the $/FPS metric.
Probably a bit too soon right now, but I could see techpowerup doing that sort of testing with the 4060, as they have done lots of similar testing on other cards. In the meantime, they do have that analysis done for the 6600 XT. https://www.techpowerup.com/review/amd-radeon-rx-6600-xt-pci-express-scaling/

Edit: generally speaking, it'll only make a significant difference when you're running out of VRAM, forcing the card to start streaming assets over PCIe in real time. But your PCIe bandwidth is going to be much lower than VRAM bandwidth either way, so in that case you're probably better off just turning down settings (usually texture quality) to get VRAM usage in check.
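To put rough numbers on that: a back-of-the-envelope sketch comparing one-direction PCIe bandwidth with the 4060's local VRAM bandwidth. The per-lane rates and 128b/130b line coding come from the PCIe 3.0/4.0 specs, and 17 Gbps GDDR6 on a 128-bit bus is the 4060's published memory configuration; treat the results as approximations that ignore protocol overhead beyond line coding.

```python
def pcie_bandwidth_gbs(gen: int, lanes: int) -> float:
    """Approximate one-direction PCIe bandwidth in GB/s."""
    rate_gt_s = {3: 8.0, 4: 16.0}[gen]         # per-lane transfer rate, GT/s
    efficiency = 128 / 130                      # 128b/130b line coding (Gen 3+)
    return rate_gt_s * efficiency / 8 * lanes   # bits -> bytes

gen3_x8 = pcie_bandwidth_gbs(3, 8)   # ~7.9 GB/s
gen4_x8 = pcie_bandwidth_gbs(4, 8)   # ~15.8 GB/s

# RTX 4060 VRAM: 17 Gbps GDDR6 on a 128-bit bus.
vram_gbs = 17 * 128 / 8              # 272 GB/s

# Even Gen 4 x8 is roughly 17x slower than local VRAM, which is why
# streaming assets over PCIe hurts far more than lowering settings does.
```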
 
Last edited:

lmcnabney

Prominent
Aug 5, 2022
192
190
760
Consoles have 16GB total memory, which is shared between CPU and GPU, including OS. So it would (kind of) be equivalent to a PC with 8GB of RAM and 8GB of VRAM, or some other distribution.
No it won't be. A console doesn't have any of the garbage that PCs do. The OS eats hundreds of MBs not GBs. It doesn't duplicate data like PCs do (loading the same data in system and graphical memory).
 
  • Like
Reactions: sherhi
Jun 28, 2023
2
2
15
The review explains this in more detail, so I suggest reading the review, but to reply here: it's a hardware limitation. It's simply Nvidia making the chip use only the x8 equivalent of the slot's full x16 pins.

Does this really affect the GPU? Yes, but I'd argue it's not a massive problem on PCIe 4.0. Maybe there will be an impact with PCIe 3.0, but I'd say at worst 10%? Maybe it'll be bad if you saturate the VRAM on a PCIe 3.0 link, though. I'd like to see that tested; I can't even imagine how badly it would hit performance in that scenario.

Regards.

It's almost certainly a hardware limitation rather than a software one, e.g. an x16 connector using only x8 lanes electrically. They use a full-width connector to provide stability for a long card, but a bunch of those connector wires don't lead anywhere.

Basically, the PCB connector is physically x16 in size, but only half the lanes are actually connected (because the GPU die itself likely only has a x8 connection). You may see this referred to as "physical" vs "electrical" lane counts.

The statement you quoted from this article is incorrect in saying that the 4060 Ti is different than the 4060 in this regard. Both the 4060 and the 4060 Ti are x16 physical, x8 electrical.
All makes sense - a physical x16 connector but that isn't electrically connected. Cheers everyone!
 
Consoles have 16GB total memory, which is shared between CPU and GPU, including OS. So it would (kind of) be equivalent to a PC with 8GB of RAM and 8GB of VRAM, or some other distribution.
Not quite.

It's more nuanced than that, but it is fair (and accurate) to say that a game on a PS5 can allocate (and use) over 8GB of memory for whatever it wants/needs. Also, it's not a 50/50 split either. I doubt the OS in the PS5 uses a full 8GB, and keep in mind it's a unified memory model, so there is overlap, kind of like with an APU, but without the need to allocate exclusive space for the VRAM, as the OS can just "use" the whole range. As I said, it's more nuanced, but game devs can definitely use over 8GB for the video side of things.

And that's at "console" levels of detail as well. On PC you can go beyond whatever "minimum" detail quality levels the developers want to use, which only increases the amount of memory they want/can work with. Not just textures, but resolution (polygon complexity and count) and other supporting structures.

Quite a lot more to go over, but suffice to say: it's fair to make that statement about the video memory.

Regards.
 

sherhi

Distinguished
Apr 17, 2015
80
52
18,610
Consoles have 16GB total memory, which is shared between CPU and GPU, including OS. So it would (kind of) be equivalent to a PC with 8GB of RAM and 8GB of VRAM, or some other distribution.
No it won't be. A console doesn't have any of the garbage that PCs do. The OS eats hundreds of MBs not GBs. It doesn't duplicate data like PCs do (loading the same data in system and graphical memory).
Usually it's more like 14GB VRAM/2GB RAM or 12GB VRAM/4GB RAM based on game tests I have seen, and even that's just "leftover" RAM hanging in the void; it's not like the system needs that much RAM. This is also the reason why consoles are more efficient (per currency) at gaming; PCs just have to throw more hardware at it.



I don't care about "not beating the 3060" or something. The 6700 XT (12GB) is around 375€, the A770 (16GB) is also around that, and both cards are dangerously close to or above this 4060 at a similar price point ($300 is surely going to be 360€-400€ in my country). Is Nvidia serious? Well, Intel sure seems to be...

Anyway, imagine a typical scenario: you enjoy some older games (like the three-year-old Cyberpunk), you have an older card like a 2060, and you want a better experience (maybe you go from a 1080p/60 monitor to a 1440p/165 one, or to 4K/60, etc.) in such older games, so you buy the newer version of the 60-class card like this one and get like an extra 3 fps... That's just terrible.
 
Basically, the PCB connector is physically x16 in size, but only half the lanes are actually connected (because the GPU die itself likely only has a x8 connection). You may see this referred to as "physical" vs "electrical" lane counts.

The statement you quoted from this article is incorrect in saying that the 4060 Ti is different than the 4060 in this regard. Both the 4060 and the 4060 Ti are x16 physical, x8 electrical.
Oops. Some of this was written very late at night / early in the morning. I had the idea in my head that AD106 was x16 and AD107 was x8, but you're right: they're both x8. I've updated that paragraph. As for whether that matters for performance, most tests have shown that, provided you don't exceed a card's VRAM capacity, x8 is usually fine (not so much on Arc maybe?), even on PCIe 3.0 connections. 1% lows can drop a bit (e.g. if some new texture gets pulled from RAM on the fly, it takes twice as long on a 3.0 vs. 4.0 link), but over a longer gaming session, with a well-coded game, it shouldn't be much of an issue.
 

bit_user

Titan
Ambassador
Nice RTX4050 card, but too bad it's priced so high.
Agreed. The specs make it look exactly like it was intended to be a RTX 4050 or RTX 4050 Ti, but they couldn't call it that due to the price.

Anyway, I still think naming is a distraction. Price, performance, memory capacity, and power are the only things that should matter.
 
Last edited:

baboma

Notable
Nov 3, 2022
283
338
1,070
Jarred, first, thanks for your work. More than that, though, thanks for being the voice of reason in the mad house. It seems like everyone is running around, yelling at the top of their voice to get others to accept their opinion. OK, par for the course, then.

Some observations:

>Here's the thing: You don't actually need more than 8GB of VRAM for most games, when running at appropriate settings...The other part is that many of the changes between "high" and "ultra" settings — whatever they might be called — are often of the placebo variety. Maxed out settings will often drop performance 20% or more, but image quality will look nearly the same as high settings.

I agree with this. And given that both AMD & Nvidia have now set 8GB as the defacto standard VRAM allotment for the largest PC base for this generation, future PC games will be optimized for this amount, including console ports.

I'm surprised that the $300 price point is seen as an immutable number by many. Inflation in the last couple of years has been substantial in the US, and even more so elsewhere. Using the govt's CPI inflation calc (https://bls.gov/data/inflation_calculator.htm), $299 today is worth $260 when the 3060 launched (close to the 3050's $249). And the 3060's $329 price then is worth $378 today, close to the 4060 Ti's launch price. In short, per inflation alone, pricing has shifted almost a tier upward. That's a considerable amount that no reviewer ever mentions when harping on pricing.
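That adjustment is just multiplication by a cumulative inflation ratio. A minimal sketch, with the ~1.15 factor back-solved from the $299 and $260 figures in this post rather than taken from official CPI tables, so treat it as illustrative only:

```python
CUM_INFLATION = 1.15  # approx. early 2021 -> mid 2023, inferred from the post

def to_2021_dollars(price_today: float) -> float:
    """Deflate a 2023 price into early-2021 dollars."""
    return price_today / CUM_INFLATION

def to_2023_dollars(price_2021: float) -> float:
    """Inflate an early-2021 price into 2023 dollars."""
    return price_2021 * CUM_INFLATION

print(round(to_2021_dollars(299)))  # 260: the 4060's $299 in 2021 terms
print(round(to_2023_dollars(329)))  # 378: the 3060's $329 MSRP in 2023 terms
```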

Some peeps are calling the 4060 grossly overpriced, and I just had to laugh. In finance, Time Value of Money (TVM) is a fundamental concept that underpins pretty much every calculation you make. It's strange that a crowd obsessed with numbers would be so ignorant of it.

<more observations later>
 

lmcnabney

Prominent
Aug 5, 2022
192
190
760
I'm surprised that the $300 price point is seen as an immutable number by many. Inflation in the last couple of years has been substantial in the US, and even more so elsewhere.
I don't think you understand how computer prices work. They don't go up with time; they go down. Memory and storage have done nothing but drop in price as the years passed. Boards have tracked with inflation, and CPUs have gone up a little bit, but nothing close to inflation. GPUs are the exception, and most of their pricing has been due to shortages, crypto demand, and anti-competitive behavior (price fixing). The first two issues have worked themselves out. AI may be using GPU-like silicon, but off-the-shelf GPUs are not useful for commercial AI.
 
Thanks for including RTX 2060 in these benchmarks! IMO that would be the main target audience looking to upgrade. Based on this review, I'll be giving the RTX 4060 a hard pass. Might consider the RTX 4060Ti (16GB) due in July if nVidia prices it reasonably. Can't believe 4.5 years have passed since the RTX 2060 (6GB), and this is all they could produce in the same basic mid-price range (i.e. well short of double the performance and a paltry 2GB of additional VRAM).
 

adunlucas

Prominent
Nov 5, 2022
8
13
515
Has anyone done a PCIe 3.0 vs PCIe 4.0 test with this card? Or, I guess, any other card that was only rocking x8 PCIe 4.0 lanes?

I feel like this could be a big issue as the target market for this tier of card is the same kind of market that will still be rocking an older PCIe 3.0 motherboard. AKA your budget gamer.

Since the improvements are not gigantic, this card could potentially be a waste of money if PCIe 3.0 degrades its performance by even a marginal amount, especially on the $/FPS metric.
I won't quote here because it's from another site, but Techspot has a comparison on a 3080 with PCI-E 3 x8 and x16 (also PCI-E 4 x16) showing very little difference. When the RX 6500 launched at PCI-E 4 x4 it was horrible at PCI-E 3 x4, but AMD improved a lot (I think it was just AMD's usual bad drivers at the beginning).

The problem is it's not going to be $300 after all the variant surcharges are attached. When it hits $250-$300 for a card capable of native 1920x1080 60fps max detail (generally), and with DLSS capability, anyone with a GTX 1000-series card (a big chunk of the Steam Hardware Survey, about 20% combined across the 1660, 1650, 1060, and 1050) will likely be grabbing this card.

3.5 stars I'd agree with. It's not a powerhouse, but paired with a 5800x3d it's a great budget card, until the 4060Ti drops to $300.

People I see using a 1050 or 1650 usually don't buy at the x60 level; the prices are too different. Or if they do, it's like buying a cheap 3060 in a few months for about US$200. Here in my country, many bought a 3050 when it was US$320. They dreamed about a 3060, but its price was close to $400. Someone with a 1060 or 960 would go for a 3060 once prices went down after the Bitcoin crash, but those who had these cards kept them until prices dropped; none bought a 3050 or an overpriced 3060.
I bought a Chinese (yes, AliExpress) 6600M for less than $200 (the vanilla 6600 from ASRock was $320 here), upgrading from an ancient RX 550. For those who wonder, it is a Soyo card with heatpipes and has had zero problems so far. Performance is on par with other people's 6600s, and the AMD driver, direct from their site, works perfectly. As a bonus, it runs cooler than a regular 6600.
 
Last edited:
  • Like
Reactions: bit_user