News Nvidia Reveals RTX 4060 Ti, 4060 with Prices Starting at $299

baboma

Respectable
Nov 3, 2022
>4060 Ti 16GB = $500, 4060 Ti 8GB = $400, 4060 8GB = $300

It's what I expected, given the $600 4070 (12GB). My expectation extends to the forthcoming AMD midrange lineup, from the 7800 XT down to the 7600, i.e., that AMD will position them at price parity but with better (rasterization) perf. The 7800 XT should be at $600 or maybe slightly less, and the 7600 should be at $300.

The only unknown for me is how these will fare against their previous-gen counterparts, the 3060- and 6600-series. While Nvidia can make a case for the 4060's better value prop, given the relatively high prices of the older parts, AMD will have a tougher sell to make, since the 6600 parts are already selling substantially below MSRP.

I'm assuming that THW has received all these parts and is just waiting for the embargo to lift to reveal perf benchmarks. When will that be, Jarred?

Edit: Answering my own questions.

>Nvidia has set the launch date for the RTX 4060 Ti 8GB card for May 24, and as usual you can expect reviews of the Founders Edition and non-overclocked models to go live the day before.
 
Keep in mind that, if you look at history, the silicon inside the 4060 is a 50-class die and the 4060 Ti is a 60-class die.

So, while this is arguably good, that caveat is still important to make: Nvidia is giving you what is essentially the replacement for the 3050 as a 4060 for a $300 base price. Also, I still don't classify $300 as "budget" or "reasonable" for what is essentially the bottom of the lineup.

Let's see what AMD does now, as this is an interesting move by Nvidia.

Regards.
 

Deleted member 2731765

Guest
I think gamers should base their decision to buy these 40-series SKUs purely on rasterization performance. DLSS 3 makes the comparison less meaningful, and it can also be slightly misleading.

On top of that, previous-gen RTX cards don't support DLSS 3, only DLSS 2. So for a fair upgrade comparison, just look at raw rasterization performance in games.
 
$500 for a 60 class card?
Nvidia can kiss my grits!
$400 for a 60-class card, $500 for the double-VRAM version, $300 for the true (non-Ti) 60-class. The 2060 Super was $400, and the 3060 Ti was $400. We've never had a non-vanilla 60-class card for less than $400 from Nvidia in recent years (unless you count the 1660 series, which was the "dumbed down" non-RTX line). The GTX 760 Ti would qualify, but that's a decade ago and it was an OEM-only part.
Keep in mind the silicon inside the 4060 is a 50-class die and the 4060ti is a 60-class die if you look at history.
That's a gross oversimplification. The 50-class and 60-class are volume parts, and so Nvidia opted to actually do a smaller die for the cost savings. There are a lot of similarities between AD106 and AD107.

I don't know if AD107 has an x8 PCIe interface (like GA107 did), but it's still a 128-bit interface, same as AD106. The L2 cache is smaller, and it looks like it tops out at 24 SMs where AD106 tops out at 36 SMs. Using a harvested AD106 with part of the L2 cache disabled would give effectively identical performance (and honestly, that's even if AD107 "only" has an x8 PCIe link — now that multi-GPU is effectively dead, there's almost no benefit to x16 versus x8, particularly on mainstream parts).

If you look at GA106 and GA107 as an example, GA106 had up to a 192-bit memory interface, more SMs, and an x16 PCIe link. So the main change that would truly matter is the potentially wider bus. But with the larger L2 cache, even that doesn't really matter much on Ada Lovelace GPUs.

I thought it was pretty wild that Nvidia chose to do a separate die just to cut the size from 188 mm^2 down to 159 mm^2. That's a relatively minor shrink, and it speaks to both the cost of TSMC's 4N and the volumes Nvidia expects for budget/mainstream GPUs. GA106 was 276 mm^2 versus GA107 at 200 mm^2, so the larger die was 38% bigger in the previous generation, compared to 18% bigger this gen.
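Those die-area ratios are easy to sanity-check from the figures above:

```python
# Die areas quoted above, in mm^2.
ga106, ga107 = 276, 200  # Ampere generation
ad106, ad107 = 188, 159  # Ada generation

ampere_ratio = ga106 / ga107 - 1  # how much larger GA106 is than GA107
ada_ratio = ad106 / ad107 - 1     # how much larger AD106 is than AD107

print(f"{ampere_ratio:.0%} vs {ada_ratio:.0%}")  # 38% vs 18%
```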
 
That's a gross oversimplification. The 50-class and 60-class are volume parts, and so Nvidia opted to actually do a smaller die for the cost savings. There are a lot of similarities between AD106 and AD107.

I don't know if AD107 has an x8 PCIe interface (like GA107 did), but it's still a 128-bit interface, same as AD106. The L2 cache is smaller, and it looks like it tops out at 24 SMs where AD106 tops out at 36 SMs. Using a harvested AD106 with part of the L2 cache disabled would give effectively identical performance (and honestly, that's even if AD107 "only" has an x8 PCIe link — now that multi-GPU is effectively dead, there's almost no benefit to x16 versus x8, particularly on mainstream parts).

If you look at GA106 and GA107 as an example, GA106 had up to a 192-bit memory interface, more SMs, and an x16 PCIe link. So the main change that would truly matter is the potentially wider bus. But with the larger L2 cache, even that doesn't really matter much on Ada Lovelace GPUs.

I thought it was pretty wild that Nvidia chose to do a separate die just to cut the size from 188 mm^2 down to 159 mm^2. That's a relatively minor shrink, and it speaks to both the cost of TSMC's 4N and the volumes Nvidia expects for budget/mainstream GPUs. GA106 was 276 mm^2 versus GA107 at 200 mm^2, so the larger die was 38% bigger in the previous generation, compared to 18% bigger this gen.
Fair points, but I'll be looking forward to what you find in your review of it :D

Specially the FPS scaling across the different models compared to historical data.

Regards.
 

hannibal

Distinguished
Also, if you compare memory bandwidth, the 4060 is better than the 3060 even though it has a narrower memory bus, because it has a much bigger memory cache!
So the memory bus is not the only thing you have to look at: 453 GB/s effective in the 4060 vs. 360 GB/s in the 3060. So the 4060 has a decent advantage in memory speed compared to the previous gen!
The 4060 Ti has an even bigger cache and 553 GB/s effective, over 50% faster memory than the 3060. And the 3060 Ti has 448 GB/s, so even the plain 4060 beats the 3060 Ti's memory speed!
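Those "effective" figures come from Nvidia treating L2 hits as avoided DRAM traffic. A minimal sketch of that model, where the hit rate is back-solved from the quoted numbers rather than an official spec:

```python
def effective_bandwidth(dram_gbs: float, l2_hit_rate: float) -> float:
    """Toy model: every L2 hit avoids a DRAM access, so the DRAM bus only
    services misses and the shaders see dram_gbs / (1 - hit_rate)."""
    return dram_gbs / (1.0 - l2_hit_rate)

# RTX 4060: 128-bit bus at 17 Gbps per pin -> 272 GB/s raw DRAM bandwidth.
# A ~40% L2 hit rate (an illustrative assumption) back-solves to roughly
# the ~453 GB/s "effective" figure Nvidia quotes.
print(round(effective_bandwidth(272, 0.40)))  # 453
```

The model also shows why the number is marketing-sensitive: it depends entirely on the assumed hit rate, which varies by workload.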
 

atomicWAR

Glorious
Ambassador
Keep in mind the silicon inside the 4060 is a 50-class die and the 4060ti is a 60-class die if you look at history.
That's a gross oversimplification. The 50-class and 60-class are volume parts, and so Nvidia opted to actually do a smaller die for the cost savings. There are a lot of similarities between AD106 and AD107.
While it is an oversimplification, it's not exactly without merit either. I agree with you, but I don't fully disagree with Fran either. And Nvidia, until now, hadn't done itself any favors with the consumer base: bad pricing, low VRAM and narrow memory buses on some models, and the usual decline in performance gains going down the stack somehow feels steeper this gen. It left a lot of salty users out there. Point being, her comment is hardly surprising; in fact, I've said similar things myself... right, wrong, or otherwise (I'm salty too, lol). But I do look forward to your review. I hope the cards do well, as Nvidia has finally done some decent pricing this gen.
 

atomicWAR

Glorious
Ambassador
4060 only has 8GB for $299...
...Looks like PC gaming is set for another year of struggling to run console ports.
Every day with this, really? You're wrong, and it's been rehashed and proven with benchmarks, videos, and other sources that you're wrong about VRAM in regard to consoles vs. PC. Give it up; you're demonstrably incorrect.
 
3060 had 12GB for $299.
4060 only has 8GB for $299.

At $499, the 16GB 4060 Ti is out of reach for most consumers.

Looks like PC gaming is set for another year of struggling to run console ports.
Not quite. The 3060 had 12GB for $329 nominally. Even right now, over two years after launch, RTX 3060 cards are mostly priced above $329. It's a good card, so I understand the appeal on the one hand. But in practice, the cases where the 3060's 12GB (versus the 3060 Ti's 8GB) actually matters are much ado about nothing.

Across our full test suite, 3060 Ti is 30% faster at 1080p medium, 33% faster at 1080p ultra, 34% faster at 1440p ultra... and 22% faster at 4K ultra. (That's including RT and rasterization... without DXR, it's 38 fps vs. 29 fps at 4K ultra.)

So yes, the 8GB does impact the scaling at 4K... but also note that average FPS at 4K was 25 fps on the 3060 Ti and 20 fps on the 3060. Meaning, it's effectively useless. Also as noted elsewhere, turning down texture quality one notch usually keeps 8GB VRAM from being a problem, and the loss in image fidelity is very slight, even at 4K.

My assumption right now is that, without Frame Gen, RTX 4060 is going to fall slightly below the RTX 3060 Ti. Factor it in and it's probably the better card. We'll see what happens when we run the numbers in ~6 weeks. 4060 Ti meanwhile is probably going to be a relatively minor upgrade over 3060 Ti, for the same price, while using ~50W less power and offering some new features like AV1 encoding and DLSS 3.

I'm not super concerned about the 8GB. More VRAM would have been nice, but it's difficult to get there, as evidenced by the clamshell 16GB card costing $100 extra. And no, I don't think that's a BS price increase. If the BOM goes up say $40 or $50 due to having double the VRAM with chips on both sides of the PCB, that usually translates into a $100 retail price increase.
 

baboma

Respectable
Nov 3, 2022
>But when we look at RTX 4060 for $299, and the rumored RX 7600 for the same $299, that seems like one matchup where Nvidia can win handily. We'll find out soon enough!

@jarred, 4060 launches on May 24, and 7600 is rumored to launch on May 25, so it's probably safe to say that you already have both and have run prelim benchmarks. So, is the above a foreshadowing of the head-to-head compare next week? :)

I forecast a great gnashing of teeth to come from the peanut gallery... :)

Edit: Oops, only 4060 Ti launches next week, not 4060. Still, we can interpolate, no?
 
>But when we look at RTX 4060 for $299, and the rumored RX 7600 for the same $299, that seems like one matchup where Nvidia can win handily. We'll find out soon enough!

@jarred, 4060 launches on May 24, and 7600 is rumored to launch on May 25, so it's probably safe to say that you already have both and have run prelim benchmarks. So, is the above a foreshadowing of the head-to-head compare next week? :)

I forecast a great gnashing of teeth to come from the peanut gallery... :)
I actually haven't tested either one yet! But soon enough, we shall see...

I'm just going off of base specs and what we already know. Like, I fully expect ~RX 6650 XT levels of performance from the 7600, maybe even a bit less than that. And probably RTX 3060 Ti levels of performance from the future RTX 4060. (That's a couple months out, so I do not have one for sure!)

Right now, across my test suite, RTX 3060 Ti is 30–50 percent faster than RX 6650 XT. It's 30% faster at 1080p medium, 42% faster at 1080p ultra, and 50% faster at 1440p ultra. That's for the full test suite that includes nine rasterization and six ray tracing games.

If we limit it to just rasterization, RTX 3060 Ti is still 12% faster at 1080p medium, 21% faster at 1080p ultra, and 26% faster at 1440p ultra. Looking just at DXR games, RTX 3060 Ti is 64% faster at 1080p medium, 80% faster at 1080p ultra, and 93% faster at 1440p ultra.

Now, toss in DLSS, which continues to be more widespread than FSR2, and the overall lead only increases. Toss in DLSS 3, and it grows even more (even if you only count it as 10~20 percent faster, rather than the 50~100 percent increase it shows in benchmarks).

That's a big gap, basically, so either RX 7600 will need to be much faster than I anticipate, or RTX 4060 will need to be much slower, or AMD will need to drop the RX 7600 price real fast. (Part of me still wonders if AMD was trying to get Nvidia to spill the beans on RTX 4060 pricing before announcing the RX 7600 price. We'll see soon enough... if the rumors prove accurate! :sneaky:)
 
$299 is a no-buy, $399 is a hard sell, and at $499 just get a 6800 XT.
Sure, if you want a 300W card instead of a 200W card and don't care about ray tracing or DLSS. Same old story from AMD. I expect the 6800 XT will win by ~20% (give or take) in rasterization, lose by ~15% in ray tracing, and lose by ~30% in games that support RT and DLSS (and not FSR2 — still a 15% loss if a game also supports FSR2, i.e. CP77).
 

baboma

Respectable
Nov 3, 2022
>I'm not super concerned about the 8GB. More VRAM would have been nice, but it's difficult to get there, as evidenced by the clamshell 16GB card costing $100 extra. And no, I don't think that's a BS price increase.

While I don't doubt what you said about 8GB for gaming, I was hoping Nvidia would continue the 12GB allotment for 4060, to allow for 13B LLM installs. Having to jump to $500 to get above 8GB for LLM tinkering is probably a bridge too far for me. Contemplating just getting a 3060 once price drops on it.
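For reference, a back-of-the-envelope calculation of the VRAM needed just to hold an LLM's weights (it ignores KV cache and framework overhead, which add more on top):

```python
def weights_gb(params_billion: float, bytes_per_param: float) -> float:
    """Approximate weight footprint: N billion params times B bytes per
    param is N*B GB (since 1e9 params * B bytes = N*B * 1e9 bytes)."""
    return params_billion * bytes_per_param

# A 13B-parameter model at common precisions:
for label, bpp in [("fp16", 2.0), ("8-bit", 1.0), ("4-bit", 0.5)]:
    print(f"13B @ {label}: ~{weights_gb(13, bpp):g} GB")
# fp16 needs ~26 GB, 8-bit ~13 GB (hence wanting >8GB cards),
# and 4-bit ~6.5 GB, which is why a 12GB 3060 is comfortable for 13B.
```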
 

SSGBryan

Reputable
Jan 29, 2021
Sure, if you want a 300W card instead of a 200W card and don't care about ray tracing or DLSS. Same old story from AMD. I expect the 6800 XT will win by ~20% (give or take) in rasterization, lose by ~15% in ray tracing, and lose by ~30% in games that support RT and DLSS (and not FSR2 — still a 15% loss if a game also supports FSR2, i.e. CP77).
I don't care about power draw (I have Xeon workstations in my past). I don't care about either RT or DLSS; those only matter for gaming, and only in certain limited ways.

Neither RT nor DLSS help with anything outside of gaming. And gaming isn't the only reason I upgrade GPUs.

They didn't show 1440p benchmarks. If they were good, they would have shown them. $499 for a 1080p card isn't the flex Nvidia thinks it is.

The 4060 is going to sell a lot of AMD and Intel cards.
 

Giroro

Splendid
"People were mad that the RTX 3050 was way too expensive at $250 and did not perform as well as expected. How do we stop the RTX 4050 from being DOA?"

"Easy: Raise the price by another $50 and call it an RTX 4060"

I'm sure the RTX 4060 will technically perform near the RTX 3060, since its cut-down/entry-level hardware is overclocked AF, but higher clocks can only get you so far.
Mostly, though, this whole "effective bandwidth" marketing pitch makes me unhappy. It's proven that a bigger cache improves performance, but the way they're trying to sell the bandwidth of a few MB of cache as the "effective" bandwidth for the remaining 99.5% of the memory definitely rubs me the wrong way.
 
>I'm not super concerned about the 8GB. More VRAM would have been nice, but it's difficult to get there, as evidenced by the clamshell 16GB card costing $100 extra. And no, I don't think that's a BS price increase.

While I don't doubt what you said about 8GB for gaming, I was hoping Nvidia would continue the 12GB allotment for 4060, to allow for 13B LLM installs. Having to jump to $500 to get above 8GB for LLM tinkering is probably a bridge too far for me. Contemplating just getting a 3060 once price drops on it.
Yeah, it's fundamentally a problem/choice with the memory bus width. With a 128-bit bus, you can do 8GB or 16GB (the latter via clamshell). Would have been nice if Nvidia had done 128-bit on AD107, 192-bit on AD106, 256-bit on AD104, 320-bit on AD103, and 384-bit on AD102. But it didn't, opting instead to save on costs and reduce the VRAM capacities at basically every level except the RTX 4090.
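The bus-width constraint works out mechanically, assuming 32-bit GDDR6 packages at up to 2GB each (the current ceiling):

```python
def vram_options_gb(bus_bits: int, gb_per_chip: int = 2) -> tuple[int, int]:
    """Each GDDR6 package has a 32-bit interface, so bus width fixes the
    chip count; clamshell mode doubles capacity by putting a second chip
    on the back of the PCB, sharing each 32-bit channel."""
    chips = bus_bits // 32
    return chips * gb_per_chip, chips * gb_per_chip * 2  # (normal, clamshell)

print(vram_options_gb(128))  # (8, 16)  -> the 4060 Ti's two configurations
print(vram_options_gb(192))  # (12, 24) -> what a 192-bit AD106 would allow
```

That's why 12GB was never on the table for a 128-bit card: the only way up from 8GB is the clamshell jump straight to 16GB.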
 

JamesJones44

Reputable
Jan 22, 2021
Those who wanted a lower-wattage card got their wish with the 4060. The 115W TGP is lower than I expected. The overall performance uplift isn't that great once you remove frame gen, but going from 170W to 115W with a similar performance profile should please the power-conscious crowd.
 

atomicWAR

Glorious
Ambassador
Yeah, it's fundamentally a problem/choice with the memory bus width. With a 128-bit bus, you can do 8GB or 16GB (the latter via clamshell). Would have been nice if Nvidia had done 128-bit on AD107, 192-bit on AD106, 256-bit on AD104, 320-bit on AD103, and 384-bit on AD102. But it didn't, opting instead to save on costs and reduce the VRAM capacities at basically every level except the RTX 4090.
I can't agree with you enough here. I think buyers would have been far more forgiving of prices in the 70/80 class cards too if Nvidia had done this.
 

baboma

Respectable
Nov 3, 2022
>The 4060 is going to sell a lot of AMD and Intel cards.

I think the 4060 and 4060 Ti will be best-sellers this year, simply because they hit the $300/$400 sweet spots that mainstream buyers care about, and they perform (incrementally) better than the previous gen.

I would say the same for AMD's 7000-series midrange, but we still don't know how AMD will price those parts, which will depend on how they size up against Nvidia's lineup. If what Jarred predicts about the 4060 vs. 7600 comes true, I expect the 7600 to come in at $250, with perhaps a 7600 XT taking the coveted $300 slot to go head-to-head against the 4060. The 4060's $300 price will thus have a knock-on effect on the competition as well as on older parts.

More broadly, yes, the release of the midrange parts is a good thing for consumers, as they advance the value proposition. Many bemoan the fact that the new midrange cards are only marginally better than the previous gen, but the bottom line is that they do perform better at approximately the same price. They also allow for a broader value spectrum, as previous-gen cards will get price cuts. So, if you think the new parts are overpriced, you can buy an older part for less.

I consider Intel parts a fringe element at this point, as existing Arc cards won't have much impact on the dGPU market regardless of price, and Battlemage isn't until 2024. Maybe if Intel has a fire sale on Arc.