News Nvidia Reveals RTX 4060 Ti, 4060 with Prices Starting at $299


Giroro

Splendid
So does anybody know which, if any, of the benchmarks Nvidia gave might actually indicate raster performance?
All of the frame-gen results can be thrown right in the trash IMO, but the disclaimer "DLSS and RT on in games that support it"... which I assume covers all of the remaining games?
Did Nvidia say what version of DLSS was used where? DLSS 3 vs DLSS 2 is not a like-for-like comparison.
DLSS 3 is supposed to run about 25%-30% faster than DLSS 2. So when the benchmarks are only showing roughly a ~15% improvement... Is Nvidia pushing their upscaling and fake frame tech so hard because the raster performance is actually going to be worse (or at least not really better) this gen?
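A quick back-of-the-envelope check of that suspicion, using only the numbers quoted above (the 25%-30% DLSS 3 uplift and the ~15% overall gain are the poster's assumptions, not measured figures):

```python
# Back-of-envelope: if DLSS 3 alone is 25-30% faster than DLSS 2,
# what raster change would a ~15% total uplift imply?
# All inputs are assumed figures from the post, not measurements.
observed_total = 1.15  # assumed overall gen-on-gen gain shown in slides

for dlss3_uplift in (1.25, 1.30):
    implied_raster = observed_total / dlss3_uplift
    print(f"DLSS 3 at {dlss3_uplift:.2f}x -> implied raster {implied_raster:.2f}x")
# Prints roughly 0.92x and 0.88x: raster would actually be ~8-12% slower,
# which is exactly the kind of bad news the post suspects is being buried.
```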

I mean if there was good news for raster performance, then wouldn't Nvidia be trying to highlight it? I get the impression they are trying to bury bad news.
 

thestryker

Distinguished
Apr 19, 2016
1,104
492
19,590
I thought it was pretty wild that Nvidia chose to do a separate die just to cut the size from 188 mm^2 down to 159 mm^2. That's a relatively minor shrink, and it speaks to both the cost of TSMC's 4N and the volumes Nvidia expects for the budget/mainstream GPUs. GA106 was 276 mm^2 versus GA107 at 200 mm^2, so the bigger die was 38% larger on the previous generation, compared to 18% larger this gen.
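Those percentages fall straight out of the quoted die areas; a trivial sketch (areas are the ones given above):

```python
# Die areas in mm^2, as quoted in the post; recompute the size gaps.
pairs = {
    "Ampere": ("GA106", 276, "GA107", 200),
    "Ada":    ("AD106", 188, "AD107", 159),
}
for gen, (big, big_mm2, small, small_mm2) in pairs.items():
    print(f"{gen}: {big} is {big_mm2 / small_mm2 - 1:.0%} larger than {small}")
# Ampere: GA106 is 38% larger than GA107
# Ada: AD106 is 18% larger than AD107
```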
I think it's fair to say this move was completely driven by the laptop market. I've seen more low-end (but usable) GPUs in laptops in the last two generations than I think in my entire life before this combined. I may despise the way pricing has gone versus performance increase on the desktop side, but the laptops have gotten more performance with less power consumption without prices really moving.

I'm glad you pointed out the reason behind the 16GB cost in your article, because, while the increase is too high in the context of what we're getting, there's a very real reason for it. With these low-bus-width designs I don't see any way for the price to drop with increased capacity until Samsung has that stacked GDDR at scale and reasonably priced.
 

Thunder64

Distinguished
Mar 8, 2016
76
93
18,610
Also, if you compare memory bandwidth, the 4060 is better than the 3060 even though it has a narrower memory bus, because it has a much bigger memory cache!
So the memory bus is not the only thing you have to look at. 453 GB/s in the 4060 vs 360 GB/s in the 3060. So the 4060 has a decent advantage in memory speed compared to the previous gen!
The 4060 Ti has an even bigger cache and 553 GB/s, so almost 80% faster memory than the 3060 has, and the 3060 Ti has 448 GB/s, so even the normal 4060 beats the 3060 Ti's memory speed!

Someone drank the Kool-Aid. They just came up with BS "effective bandwidth" numbers to try to hide the fact that its real memory bandwidth is far less than its predecessor's.
 

atomicWAR

Glorious
Ambassador
I think it's fair to say this move was completely driven by the laptop market. I've seen more low-end (but usable) GPUs in laptops in the last two generations than I think in my entire life before this combined. I may despise the way pricing has gone versus performance increase on the desktop side, but the laptops have gotten more performance with less power consumption without prices really moving.

I'm glad you pointed out the reason behind the 16GB cost in your article, because, while the increase is too high in the context of what we're getting, there's a very real reason for it. With these low-bus-width designs I don't see any way for the price to drop with increased capacity until Samsung has that stacked GDDR at scale and reasonably priced.
I thought as much myself and think your opinion here is extremely valid. Like you, I am not thrilled at the desktop prices, but lappies have become extremely powerful gaming rigs. I think it started with the GTX 900M series personally (we had a GTX 970M, max-TDP version, rather than an RTX 2070 mobile, which is still awesome considering), kicked into high gear with the RTX 2000 mobiles, but you're not wrong that these last two gens (3000->4000) have seen amazing gains in such a small form factor.
 
  • Like
Reactions: thestryker

Loadedaxe

Distinguished
Jul 30, 2016
153
82
18,690
As history has taught us, MSRP is never on target.
The 4060 will be $400, the 8GB Ti will be $450, and the 16GB Ti will be ~$525-550.
Nvidia doesn't care about consumer graphics like they did a few years ago; with AI advancing and the other tech they are focused on, consumer graphics are at the bottom of the totem pole.
Pay them what they want or gtfo is their motto now.
 

atomicWAR

Glorious
Ambassador
Can't wait for Tom's articles about how these are innovative cards that gamers should just buy.
I sense a tad of sarcasm? Maybe I am wrong? While I thought Jarred may have been a hair too kind in his review of the RTX 4070, for example, his perspective wasn't without some validity considering the GPU market as a whole is overpriced at the moment. I think Tomshardware is about as fair as tech sites come. Are they perfect? No! Can Anton be a little biased against AMD... maybe sometimes. But as far as I can tell the folks on Tom's do their best to be fair in their reviews. Everyone has biases and no tech site is perfect. Some are horrid, to be honest. The term Tame Apple Press didn't get coined for nothing, and at least we're not looking at that level of bias here. But hey, that's just my two cents; you're certainly welcome to yours.
 
Last edited:

Thunder64

Distinguished
Mar 8, 2016
76
93
18,610
I sense a tad of sarcasm? Maybe I am wrong? While I thought Jarred may have been a hair too kind in his review of the RTX 4070, for example, his perspective wasn't without some validity considering the GPU market as a whole is overpriced at the moment. I think Tomshardware is about as fair as tech sites come. Are they perfect? No! Can Anton be a little biased against AMD... maybe sometimes. But as far as I can tell the folks on Tom's do their best to be fair in their reviews. Everyone has biases and no tech site is perfect. Some are horrid, to be honest. The term Tame Apple Press didn't get coined for nothing, and at least we're not looking at that level of bias here. But hey, that's just my two cents; you're certainly welcome to yours.

Maybe you missed the Super 2080 "Just buy it" nonsense. "When your life flashes before your eyes, how much time do you want to have spent gaming without ray tracing?" I think Paul is pretty biased. Jarred has been pretty good about being fair.

Anandtech used to be my go-to as they seemed to be the least biased, but Ryan Smith has driven that site into the ground.
 

atomicWAR

Glorious
Ambassador
Maybe you missed the Super 2080 "Just buy it" nonsense. "When your life flashes before your eyes, how much time do you want to have spent gaming without ray tracing?" I think Paul is pretty biased. Jarred has been pretty good about being fair.

Anandtech used to be my go-to as they seemed to be the least biased, but Ryan Smith has driven that site into the ground.
Didn't miss it. And yeah, I agree that was a bit of a black eye for Tom's, but I have yet to read a tech site that hasn't done that to themselves, so to speak. And yeah, I used to love Anandtech but as of late it seems to be lacking... Though at least Anandtech is not going the WCCFTech rumor-mill route.
 

PlaneInTheSky

Commendable
BANNED
Oct 3, 2022
556
758
1,260
You're wrong, and it's been rehashed and proven with benchmarks, videos and other sources that you're wrong on VRAM in regards to console vs PC. Give it up; you're demonstrably incorrect.
PC GPUs lack VRAM. Period.

Playstation went from 8GB to 16GB VRAM for the PS5. This has huge consequences for the size of assets developers create.

Consoles are a $60 billion gaming market.
PC is a $34 billion gaming market.

Games are designed for consoles. Consoles set the technological bar, whether you like it or not. And 8GB PC GPUs are going to struggle for a whole console generation.

Game developers willing to redo all their assets for a PC port for people with 8GB GPUs are few and far between. 2023 is just the beginning of a new generation of games using lots of high-memory assets.

8GB PC GPUs are already struggling to handle console ports, and PS5 Pro dev kits with even more memory bandwidth are already in circulation.

No one should be buying 8GB GPUs at this point. No one; they are already outdated.

I am just not buying 8GB or less at these prices
100% agree
 
Last edited:

SSGBryan

Commendable
Jan 29, 2021
117
110
1,760
I think the 4060 and 4060 Ti will be best-sellers this year, simply because they hit the $300/$400 sweet spot that mainstream buyers care about, and they perform (incrementally) better than the previous-gen.

I consider Intel parts a fringe element at this point, as existing Arc cards won't have much impact on the dGPU market regardless of price, and Battlemage isn't until 2024. Maybe if Intel has a fire sale on Arc.
If the "sweet spot" is $300 - 400, then there needs to be more than 8Gb of ram. Incrementally better isn't a selling point in the current economy. The 16Gb card is $499 MSRP. $350 got me an RTX 3060 w/12Gb of ram a year ago.

At $350, the A770 is a better deal. The hardware is better, and Intel has an incentive to continue their push for better drivers because, as you point out, Battlemage is a year out.
 

atomicWAR

Glorious
Ambassador
PC GPUs lack VRAM. Period.

Playstation went from 8GB to 16GB VRAM for the PS5. This has huge consequences for the size of assets developers create.

Consoles are a $60 billion gaming market.
PC is a $34 billion gaming market.

Games are designed for consoles. Consoles set the technological bar, whether you like it or not. And 8GB PC GPUs are going to struggle for a whole console generation.

Game developers willing to redo all their assets for a PC port for people with 8GB GPUs are few and far between. 2023 is just the beginning of a new generation of games using lots of high-memory assets.

8GB PC GPUs are already struggling to handle console ports, and PS5 Pro dev kits with even more memory bandwidth are already in circulation.

No one should be buying 8GB GPUs at this point. No one; they are already outdated.
Again with this. Dude, everyone and their grandma struck you down in the 4K thread with facts and sources; here are a couple in case you forgot.


https://youtu.be/wyCvEW0DCbk
 

PlaneInTheSky

Commendable
BANNED
Oct 3, 2022
556
758
1,260
Microsoft is actually a good example of why 8GB VRAM is insufficient.

The Series S is a console in big trouble because it only has 8GB VRAM.

And this idea that developers are to blame, or that it's due to poor optimization, is baloney.

A good example is Baldur's Gate 3, which has been delayed on Xbox because Larian Studios cannot allocate enough graphics memory on the Series S. Larian is a studio with 30 years of experience; they know how to make games. Developers just can't develop a game for one audience that has 16GB VRAM and another audience that has 8GB VRAM.

So developers develop for the biggest market, and that's the PS5, a console that happens to have blazing-fast custom I/O chips, custom decompression chips, and 16GB GDDR6. The PS5 can pull in assets and decompress textures like no other machine can. PCs are struggling to keep up; the bare minimum has been set at 16GB VRAM, and anything below that will struggle for a whole generation.

 
Last edited:

octavecode

Distinguished
Nvidia still has a lot to learn, and I believe it will happen very soon.
Especially now with their main customers gone (the crypto d-bags).
$400 for a 4060 in the US means 450-500 in Europe.
Who the hell would buy a 500-euro GPU with 8GB VRAM in 2023 when you can get an RX 6800 with 16GB for 450?
I'll just wait for the upcoming AMD products like the rest.
I ditched Intel a couple of years ago; I'll just ditch Nvidia too.
 
  • Like
Reactions: mhmarefat

Ar558

Proper
Dec 13, 2022
228
93
160
$100/25% is a big premium for the extra 8GB. It will future-proof the card more, but given that it will only be needed in a limited set of circumstances, I'm not sure it will be worth it. I wanted to see a 16GB version, but $50 was a difference you could justify; I don't think that's true at $100.

That said, these MSRPs are still cheaper than I expected, especially for the 4060. If you can get one close to $320-330 in the real world, that could actually be decent value.
 

Newoak

Prominent
Jul 26, 2021
7
0
510
Nvidia officially announced the pricing and availability of its RTX 4060-class GPUs today, with the 4060 Ti 8GB at $399 launching May 24, a 16GB model coming in July for $499, and the RTX 4060 8GB also coming in July for $299.

Nvidia Reveals RTX 4060 Ti, 4060 with Prices Starting at $299 : Read more
Seems somewhat disappointing.
Haven't we all read articles telling us that ray tracing barely looks better while causing a substantial hit to performance? While other articles say that most games are not really designed for it.
On the other hand, those graphs implied that two thirds of the most modern games were getting substantial boosts in frame rates. Heck, I don't even understand what DLSS is.
If it's the same price, you might as well go for the 40 series rather than the 30 series. But many people think that way apparently, so the situation now is one where you can get a used 3080 10GB on eBay for a few bucks over $400, a way better system in my opinion.
For me this is like muscle cars, which I read about for pleasure alone. In cars, a 10% increase in power is a solid improvement over last year's model. But in video cards, the reality is people aren't happy or excited to upgrade unless you get a 30%-40% improvement in real performance. Surprising that Nvidia does not know that. Maybe they do. In G-d I trust.
* I looked up the GPU roundup, and the 3080 is almost tied with the 4070 at 2K, which is probably where those two cards are likely to be used. The 3080 solidly beats the 4070 at 4K! There are many people using 3440x1440 screens out there; that's in between 2K and 4K.
 
Last edited:

JarredWaltonGPU

Senior GPU Editor
Editor
Someone drank the Kool-Aid. They just came up with BS "effective bandwidth" numbers to try to hide the fact that its real memory bandwidth is far less than its predecessor's.
Are AMD's Infinity Cache and effective bandwidth numbers also BS? It's in the architecture, and Nvidia didn't just estimate how effective the cache is, it modeled performance with 2MB L2 versus the 32MB L2 to show how many additional cache hits there are — and tested at 1080p, 1440p, and 4K. Then it averaged those results to show how many accesses go to VRAM rather than being cache hits. That's on slide 11 (at the bottom of the article), though also included elsewhere.

That's about as accurate as you're likely to get. Yes, some games will be a bit worse, but some will also be better. 4K results as an example will always have worse hit rates (more data = lower hit rates). I've talked about this before, like with the RX 6600 XT. That card has a 128-bit bus and yet ends up performing the same as an RX 5700 XT with a 256-bit bus. It's all thanks to the large Infinity Cache.

Nvidia's numbers on the RTX 4060 Ti with a 32MB L2 cache indicate that the additional hit rate compared to a 2MB L2 cache was 48%. So, divide the 288 GB/s of bandwidth by 0.52 (52% of VRAM accesses still go to the GDDR6) and you get 554 GB/s effective bandwidth. That's "effective bandwidth as compared to the RTX 3060 Ti architecture" if you want to be extremely precise.

Similarly, the RTX 4060 only has a 24MB L2 cache. Its additional hit rate (versus a 2MB L2) was 40%. 272 GB/s divided by 0.60 gives 453 GB/s effective bandwidth (as compared to the RTX 3060 approach to memory controllers and L2 cache, if you want to be pedantic).
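A minimal sketch of the arithmetic in the two paragraphs above (the hit rates are Nvidia's modeled figures as described, not independent measurements):

```python
def effective_bandwidth(raw_gb_s: float, extra_hit_rate: float) -> float:
    """Scale raw VRAM bandwidth by the share of accesses that still
    miss the larger L2 cache and have to go out to GDDR6."""
    miss_rate = 1.0 - extra_hit_rate
    return raw_gb_s / miss_rate

# Nvidia's modeled additional hit rates versus a 2MB L2:
print(round(effective_bandwidth(288, 0.48)))  # RTX 4060 Ti -> 554 GB/s
print(round(effective_bandwidth(272, 0.40)))  # RTX 4060    -> 453 GB/s
```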

TL;DR: It's not Kool-Aid, it's a smart GPU architectural design decision. AMD did it first with RDNA 2, and it proved so damn successful that Nvidia would have been stupid not to take a similar approach with Ada. It's interesting that AMD and Nvidia both ended up at 96MB as the maximum this generation — basically, AMD decided the benefits of 128MB over 96MB didn't justify the die size consumed.

I still wonder if we'll see an RX 7950 XTX with two stacks of cache per MCD, with 192MB total. The benefits might be slight, but I'm sure some people would pay for it.
 

Thunder64

Distinguished
Mar 8, 2016
76
93
18,610
Are AMD's Infinity Cache and effective bandwidth numbers also BS? It's in the architecture, and Nvidia didn't just estimate how effective the cache is, it modeled performance with 2MB L2 versus the 32MB L2 to show how many additional cache hits there are — and tested at 1080p, 1440p, and 4K. Then it averaged those results to show how many accesses go to VRAM rather than being cache hits. That's on slide 11 (at the bottom of the article), though also included elsewhere.

That's about as accurate as you're likely to get. Yes, some games will be a bit worse, but some will also be better. 4K results as an example will always have worse hit rates (more data = lower hit rates). I've talked about this before, like with the RX 6600 XT. That card has a 128-bit bus and yet ends up performing the same as an RX 5700 XT with a 256-bit bus. It's all thanks to the large Infinity Cache.

Nvidia's numbers on the RTX 4060 Ti with a 32MB L2 cache indicate that the additional hit rate compared to a 2MB L2 cache was 48%. So, divide the 288 GB/s of bandwidth by 0.52 (52% of VRAM accesses still go to the GDDR6) and you get 554 GB/s effective bandwidth. That's "effective bandwidth as compared to the RTX 3060 Ti architecture" if you want to be extremely precise.

Similarly, the RTX 4060 only has a 24MB L2 cache. Its additional hit rate (versus a 2MB L2) was 40%. 272 GB/s divided by 0.60 gives 453 GB/s effective bandwidth (as compared to the RTX 3060 approach to memory controllers and L2 cache, if you want to be pedantic).

TL;DR: It's not Kool-Aid, it's a smart GPU architectural design decision. AMD did it first with RDNA 2, and it proved so damn successful that Nvidia would have been stupid not to take a similar approach with Ada. It's interesting that AMD and Nvidia both ended up at 96MB as the maximum this generation — basically, AMD decided the benefits of 128MB over 96MB didn't justify the die size consumed.

I still wonder if we'll see an RX 7950 XTX with two stacks of cache per MCD, with 192MB total. The benefits might be slight, but I'm sure some people would pay for it.

Perhaps I should've said it is marketing BS. AFAIK, AMD never came up with an "effective bandwidth" number to hide the fact that they trimmed the bus width in half. Instead, they called it what it was: a cache in front of the framebuffer. It is a clever idea, and that is why Nvidia is using it, but it does have compromises.

The RX 6600 XT had a refined architecture and massive boost in core clock speed to keep up with the 5700 XT. That being on top of the new Infinity Cache as well. It looks like Nvidia is doing something similar with the 4000 series.

I am far from the only one who is critical of the 4000 series. I think a lot of it has to do with naming and pricing. Nvidia is trying to sell DLSS 3 hard and I am not sure many people are buying it. It looks like a very modest upgrade at best compared to the 3060 Ti without the new features. Time will tell.

So, again, I should have said BS marketing slides. Sort of like when they doubled the CUDA cores on slides, which wasn't quite right either.
 
  • Like
Reactions: Roland Of Gilead
Dec 10, 2022
24
9
15
Still nowhere near enough. It's easy to enrage your customers, and once the raging fire is set, you need to offer back a lot more to regain your reputation. This year Nvidia's greed already pushed a bunch of people into a rage; they decided to game at lower settings and call it a day, or even move on to other hobbies. Once that is established, lowering your cost/performance ratio to where it should be actually won't win back the market. As for the past year, a lot of people have moved to consoles, and once that is in your home, there's little incentive to put another pile of money into upgrading your PC as well. What they actually need is a budget high-end card to give people the huge upgrade itch and ramp up the hype again.
 
  • Like
Reactions: Thunder64

atomicWAR

Glorious
Ambassador
The Series S is a console in big trouble because it only has 8GB VRAM.
Agreed, in the long run. MS under-spec'd it, IMHO.
And this idea that developers are to blame, or that it's due to poor optimization, is baloney.
When you make a game for a system whose exact specifications you know, and the game then doesn't function properly on the very system you made it for? No, sorry, blaming devs isn't all baloney. Though I will say MS has its fair share of blame to own on the Series S.
So developers develop for the biggest market, and that's the PS5, a console that happens to have blazing-fast custom I/O chips, custom decompression chips, and 16GB GDDR6. The PS5 can pull in assets and decompress textures like no other machine can. PCs are struggling to keep up; the bare minimum has been set at 16GB VRAM, and anything below that will struggle for a whole generation.
Now that is so far off base I don't know where to start. Yes, the PS5 is the big chunk of the gaming pie, but it is not the whole pie, not even close. And devs want their games on every system humanly possible, because... more sales. So that means PS5, XBS S/X, Switch and PC (low to high end). So when devs code their games, on some level they have to aim for the lowest common denominator and scale up from there. Granted, not all games can make the cut on every system: the Switch is pretty dated, and MS and Sony are pushing their games to the 'new' consoles. Plus, oddly, your whole argument is based, frequently no less, on strictly the PS5. For Sony exclusives that's great; they can aim for the stars and maximize their titles. But most games are multi-plat, so the rest of the industry does not have that luxury. Let's do some quick napkin math...

(I took hardware sales out of the equation for obvious reasons)

Game sales:
PC boxed/downloads/browser worth 40.5 billion
Console games 58.6 billion

source: https://www.protocol.com/bulletins/mobile-gaming-decline-newzoo-2022

Break it down further considering Sony has 70% of the market share in worldwide console game sales. So the 58.6 billion becomes 41.02 billion, approx.

source: https://en.as.com/meristation/2023/02/21/news/1677012238_006400.html (i.e., these are MS numbers, but it's still ballpark)
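The napkin math above, spelled out (all dollar figures are the post's estimates from the linked sources, not independent data):

```python
# Revenue figures ($B) as quoted in the post.
console_games = 58.6   # console game sales (Newzoo via Protocol)
pc_games      = 40.5   # PC boxed + downloads + browser
sony_share    = 0.70   # Sony's quoted share of console game sales

playstation_slice = console_games * sony_share
print(f"PlayStation: ~${playstation_slice:.2f}B vs PC: ${pc_games}B")
# -> ~$41.02B vs $40.5B: PC and PlayStation are practically tied.
```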

But hey, not all sites agree on these numbers; you get Statista stating higher sales for just digital on PC.

PC 45.6 billion
Console 37 billion


And while they don't all agree on the numbers, you can see some trends form. Clearly PC is good for about 40-45 billion or more in game sales, with consoles coming in at 45-58+ billion. And the PS5 isn't 100% of that slice (70%ish). This is proof the PS5 isn't the big slice of the market you seem to think it is. Yes, it's a huge chunk of the console games market, but it isn't THE big player in the overall market like you think it is; PC is right there with it. PC and Sony are practically tied. AND because they are, devs will target PC's various (nearly countless) specs... including the ever-so-common 8GB card. 27.9% of Steam users have 8GB cards... the largest group by far.

https://store.steampowered.com/hwsurvey/Steam-Hardware-Software-Survey-Welcome-to-Steam

Yes, I agree 8GB needs to be entry-level 50-class and then evaporate after that. 8GB is just weak sauce at this point for PC. But at the end of the day devs will code for 8GB, it being an extremely common VRAM setup for most PC gamers, seeing as RTX 90/80-class users are a tiny fraction of the PC user base.
 
Last edited:

ottonis

Reputable
Jun 10, 2020
159
124
4,760
For people who just want a modern GPU to help with video encoding and VFX, the vanilla 4060 offers plenty of compute power and won't break the bank at 299 USD.
Even more importantly, it needs only 115W of power, which is amazing and shows once again how immensely power-efficient TSMC's 4N node is.
And... even the vanilla 4060 can play all modern games at 1080p at reasonable frame rates.
So, for a new computer build primarily intended for content creation, with the casual 1080p game in mind, there is no smarter choice than the 4060 8GB.
 

Thunder64

Distinguished
Mar 8, 2016
76
93
18,610
For people who just want a modern GPU to help with video encoding and VFX, the vanilla 4060 offers plenty of compute power and won't break the bank at 299 USD.
Even more importantly, it needs only 115W of power, which is amazing and shows once again how immensely power-efficient TSMC's 4N node is.
And... even the vanilla 4060 can play all modern games at 1080p at reasonable frame rates.
So, for a new computer build primarily intended for content creation, with the casual 1080p game in mind, there is no smarter choice than the 4060 8GB.

Maybe we should, you know, wait for reviews?
 
