Nvidia GeForce RTX 5060 Ti 16GB review: More VRAM and a price 'paper cut' could make for a compelling GPU

Right now, I can drive down to Micro Center and there are actually RTX 5060 Ti 16GB cards in stock, starting at $479. There are no RTX 4060 Ti 16GB (or 8GB) cards available. Online, the 5060 Ti shows up as out of stock everywhere, but it's only the day after launch, while the lowest price on a 4060 Ti 8GB is $533.

Given that we've seen the 5070 relatively available at 10–15 percent above MSRP, I expect the same to happen with 5060 Ti cards; it will just take a few weeks. But speaking of Micro Center, my local(ish) store also has 18 RTX 5070 cards in stock at $549. So given the performance, no, I wouldn't recommend the 5060 Ti 16GB at anything above $450, which is why I expect it to generally fall back toward MSRP. That, or the MSRP 5070 cards will need to disappear forever.
There are? That's good to know at least.

Doing a quick check, they're all listed as "store purchase only" and show as not available, so I'm not sure what the in-store situation is.

We'll definitely know more soon.

As for its value... I have to disagree on one specific point: it doesn't beat the 4070. Same issue with the 5070, 5070 Ti, and 5080: you already had that level of performance in the previous generation. New buyers, sure, can get it now, but they're getting less performance for a very similar price point, a year or two late. Like, WTF, nVidia.

This card moves the needle very little, if at all, which is depressing, and it just doesn't scream "great value" to me.

Regards.
 
Pretty much a useless review. Why? All the game graphs start at the 7600 / 4060, and then above the 5060 there's a bunch of GPUs like the 9070. Shouldn't it be the other way around, showing, e.g., 2060 and 3060 users what they'd gain by moving to a 5060? Or are we just trying to entice everyone to get 5070/9070 cards? I doubt many 9070 owners are looking at downgrading to a 5060, nor do I think many 4060 owners plan to upgrade to a 5060 this soon, at these prices and with this small a performance bump. A nice number of graphs and pages, but useless.

Ehh, it's a solid review: it shows the entire gamut of performance, from the $270 RX 7600 to the $800+ models. It's there to show where the "new" product lands in comparison to the other products, spaced out by gaming situation (1080p/1440p/2160p). If you are on or about card X, you can expect a Y increase by going to product Z.
 
Not the best Nvidia could do, but I don't mind minor improvements for roughly the same cost on the same process node.

If they did this AND a process node shrink, then there would be something to complain about.

Look back at the 28nm slump. We got 'refreshes' (GTX 660/760/960), all with pretty much the same core count and somewhat similar performance increases of around 20%. (Though the 960 sold for about $20 less, likely due to node maturity.)

I saw so many review titles along the lines of "5060 Ti doesn't outperform RTX 4070," as if the expectation is that each new card MUST outdo the previous generation's card from the tier above.

People complaining about misleading Nvidia slide decks. Is that new? That is why you wait for reviews like this one.

On a side note: I do think the pushback against 12GB of VRAM on the 5070 is one of the big factors in it actually staying on shelves. If I can't get a big Intel card and the 5070 Ti remains high, I will consider it as a lower-power replacement for my 3080 Ti. (And I'll probably pick up an RTX 3050 6GB for PhysX, just so I have it.)
 
I saw so many review titles along the lines of "5060 Ti doesn't outperform RTX 4070," as if the expectation is that each new card MUST outdo the previous generation's card from the tier above.
This specific point is about what the historical expectations have been and what the "new" expectations (shaped by what is now years of abuse) have become.

The 4070 has hovered around $500, which is not far from this, with the lowest price I can quickly find at $440. And that card is faster than this one, launched a whole two years earlier, with arguably the same feature set and a bit more power draw.

They may as well just keep manufacturing the 4070 and lower its price as manufacturing costs go down instead? Slightly more performance for slightly less money (over time, theoretically).

That is where the annoyance comes from.

Regards.
 
This specific point is about what the historical expectations have been and what the "new" expectations (shaped by what is now years of abuse) have become.

The 4070 has hovered around $500, which is not far from this, with the lowest price I can quickly find at $440. And that card is faster than this one, launched a whole two years earlier, with arguably the same feature set and a bit more power draw.

They may as well just keep manufacturing the 4070 and lower its price as manufacturing costs go down instead? Slightly more performance for slightly less money (over time, theoretically).

That is where the annoyance comes from.

Regards.
I guess, but I also don't expect these companies to care much about how they are perceived. Somehow I keep ending up buying computers and electronics regardless.

3060 Ti, 4060 Ti, 5060 Ti: the same rough price point is what I am getting at ($399, $399/$499, $379/$429). Honestly, a realistic price gap between the 8GB and 16GB models is what they should have done with the 4060 Ti.

On the 5070, manufacturing costs went up, though. TSMC increased wafer prices, GDDR became more constrained due to AI production, and they are still on the same process node, with the 5070 die only slightly smaller than the 4070/Super/4070 Ti die. Maybe 30% more cost, though higher yields may have mellowed it out a little. But it does show in the MSRP reductions, even if street prices haven't followed. Now, the 5080 is still overpriced, and the 5070 Ti less so, but still high.

I don't see how it makes sense to keep making the 4070 die or even the 4060 die if they can do a same cost replacement with a little better performance using the same production capacity.
 
Personally, I'm far, FAR more concerned with having the new features Blackwell supports (DLSS 4, faster AI, even MFG) than I am with keeping support for a not-widely-used tech that has been defunct for a decade (PhysX). Yeah, it's annoying PhysX and 32-bit CUDA were dropped, but things that are actively in development and use have switched from 32-bit already. I haven't played a game that used GPU PhysX in well over five years.
My point is simply that generalizing a pro that might be a con struck me as a non sequitur.

My use case is unique but my perspective on PC gaming is that old games must remain playable, and properly so. Further, I do not find MFG a net positive (yet) although I am definitely not in the ‘fake pixels’ crowd. DLSS has been a benefit for a while now.

As stated, PC gaming is about backwards compatibility for me. Feel free to dismiss my concerns (neither you nor nVidia "owe" me anything), but know they impact my judgement of an upgrade's value. The loss of 32-bit support hurt and will weigh on my decision when I replace my gaming GPU… and may mean keeping the old card just for the PhysX kludge…
 
The one advantage for the 8GB model is the fact that it's useless for the better, bigger AI models. So, it might actually be in stock for longer than 5 minutes.
 
This seems like an okay card at an okay price, if the MSRP were real (though I think they should have just eliminated the 8GB version, made the 16GB the only choice, and used a $400 MSRP). The one pleasant surprise is that despite the 20W higher power target, it doesn't really seem to use much more than the 4060 Ti. Overall, I suppose not being disappointed is a win for this generation from Nvidia. I'm still hoping AMD brings rightly priced competition with the 9060 cards and that Intel decides to take a run at the midrange, because we're still talking about $400+ cards with minimal generational uplift.

One note regarding the 4060 Ti: the 16GB model has a 165W power target compared to 160W for the 8GB.

That tracks; these are more power hungry, and the 6000 series or Super cards will be the refined models.
 
I don't see how it makes sense to keep making the 4070 die or even the 4060 die if they can do a same cost replacement with a little better performance using the same production capacity.
You do realize it's still a new PDM and a new design overall, right? That is not free compared to keeping the current GPU line in production with zero changes and just taking advantage of maturity and falling costs.

In any case, I'll stop here. Value varies wildly from person to person, hence the "shrinkflation" in my eyes.

Regards.
 
You do realize it's still a new PDM and a new design overall, right? That is not free compared to keeping the current GPU line in production with zero changes and just taking advantage of maturity and falling costs.

In any case, I'll stop here. Value varies wildly from person to person, hence the "shrinkflation" in my eyes.

Regards.
That is actually a good point. But it also raises the question of how far out they can predict things, and whether sticking with an old design would give the competition an advantage if they are too conservative.
 
3060 Ti, 4060 Ti, 5060 Ti: the same rough price point is what I am getting at ($399, $399/$499, $379/$429). Honestly, a realistic price gap between the 8GB and 16GB models is what they should have done with the 4060 Ti.
A ~25% performance increase from the 30 series to the 50 series isn't exactly a good thing from a buyer's perspective. That's about 4 years and 5 months of separation, so if we look at the 1070 vs. the 3060 Ti, which spans around the same amount of time, that's a ~90% performance increase. While that sets the bar high, I don't think it would be unreasonable for people to expect a ~50% increase over that time.
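To put that in annualized terms, here's a rough back-of-the-envelope sketch in Python (the uplift percentages and launch gaps are the approximations above, not measured benchmark data):

# Compounded per-year uplift from this thread's rough figures,
# not from measured benchmarks.
def annualized(total_uplift_pct: float, years: float) -> float:
    """Convert a total performance uplift into a compounded per-year rate."""
    return ((1 + total_uplift_pct / 100) ** (1 / years) - 1) * 100

# GTX 1070 -> RTX 3060 Ti: ~90% over roughly 4.5 years
print(f"1070 -> 3060 Ti:    ~{annualized(90, 4.5):.1f}%/year")  # ~15.3%/year
# RTX 3060 Ti -> RTX 5060 Ti: ~25% over roughly 4.4 years
print(f"3060 Ti -> 5060 Ti: ~{annualized(25, 4.4):.1f}%/year")  # ~5.2%/year

Roughly 15% compounded per year then versus roughly 5% per year now, which is exactly the gap being complained about.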
On the 5070, manufacturing costs went up, though. TSMC increased wafer prices.
The wafer prices had not gone up at all when the wafer buys would have been made.
GDDR became more constrained due to AI production
The problem isn't so much that as the limited supplier base. Only Samsung was running volume production of GDDR7 in time for launch, and I think they also had the highest clock speeds of the initial production. What will be interesting to see here is how long it takes for AIBs to get good deals directly, as Nvidia was apparently buying up so much of the production that it was cheaper for AIBs to buy VRAM from Nvidia along with the GPUs than to buy direct.
 
People are way too entitled, expecting 40, 50, 80, 1000 percent generational increases. Like, those days are loooonnnngggg gone; all the low-hanging fruit has been picked clean. Process nodes are so small that quantum effects are starting to be an issue, and they need to find creative ways to deal with them. It's no longer a matter of "we have a better laser that lets us double the density".
 
A ~25% performance increase from the 30 series to the 50 series isn't exactly a good thing from a buyer's perspective. That's about 4 years and 5 months of separation, so if we look at the 1070 vs. the 3060 Ti, which spans around the same amount of time, that's a ~90% performance increase. While that sets the bar high, I don't think it would be unreasonable for people to expect a ~50% increase over that time.

The wafer prices had not gone up at all when the wafer buys would have been made.

That depends on what criteria you base the comparison on. The 3060 Ti is a much bigger die on a smaller node, two process nodes newer. Price-wise, the 1070 launched at $449 vs. $399 for the 3060 Ti, so you also got more for your money. The 30 series kind of re-stratified the GPU classes; the RTX 2060 Super was more comparable to the 3060 Ti, and there was nothing quite like it in the 10-series lineup.

I'm not sure we know that about 50-series production. It still sounds like Nvidia made a pretty large batch of bad Blackwell chips and ended up re-doing the AI chips (would be cool if someone knows), which may have impacted production/pricing etc. if they had to use different allotments for the gaming GPUs. (It also accords well with the idea of using Intel 18A for next-gen gaming cards.)
 
People are way too entitled, expecting 40, 50, 80, 1000 percent generational increases. Like, those days are loooonnnngggg gone; all the low-hanging fruit has been picked clean. Process nodes are so small that quantum effects are starting to be an issue, and they need to find creative ways to deal with them. It's no longer a matter of "we have a better laser that lets us double the density".
That is what I am saying.

The magical miracles of die shrinkage are going to keep stretching out. We'll probably see Nvidia and AMD move to something akin to the tick-tock cycle that Intel used to do.
 
Price-wise, the 1070 launched at $449 vs. $399 for the 3060 Ti.
You're thinking of the FE; the AIB MSRP was $379.
That depends on what criteria you base the comparison on. The 3060 Ti is a much bigger die on a smaller node, two process nodes newer.
A process node and a half, because 12nm was part of the 16nm family. As for criteria: you were talking price, and from a buyer's standpoint that's the only thing that matters.
I'm not sure we know that about 50-series production. It still sounds like Nvidia made a pretty large batch of bad Blackwell chips and ended up re-doing the AI chips.
Sure we do, because TSMC didn't announce that price hikes would be coming until this year. There's also no way Nvidia is doing a rushed wafer buy for low-margin consumer parts.
the RTX 2060 Super was more comparable to the 3060 Ti
Okay, let's play that game: that's a ~40% performance increase with one node and one generation. This arguably makes it even worse that buyers have seen only a ~25% performance increase over one node and two generations.
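Compounding that out makes the slide clearer (again using this thread's rough figures, not measured numbers):

# ~25% total over two generations vs. ~40% in the single
# 2060 Super -> 3060 Ti jump, per this thread's rough figures.
per_gen = (1.25 ** (1 / 2) - 1) * 100
print(f"~{per_gen:.1f}% per generation")  # ~11.8% per gen, vs. ~40% before

So the per-generation rate has fallen from roughly 40% to under 12%.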
 
All very valid points. This has been one of the more fruitful discussions I've seen on a GPU topic in ages.

Hard to do that comparison; that class of GPU kind of just popped up there. And man, the good old 10 series. Nvidia will never be that kind again. The 3060 Ti kind of became the new 70 class in terms of mid-range vs. high-end gaming, and then the pricing on the 70 class and up just went crazy, starting with the 20 series.

Weirdly, that 5060 Ti / 5070 range is kind of what piques my interest these days. I had been a steadfast 80-series consumer since the 8800 GTS days.
 
Weirdly, that 5060 Ti / 5070 range is kind of what piques my interest these days. I had been a steadfast 80-series consumer since the 8800 GTS days.
All my cards used to be in a rough $200-350 range, but I also bought a lot more of them (I had a 275, 460, 560 Ti, and 660 Ti, for example). I was planning on getting a 3070 after my 970 lasted so long, but the 8GB of VRAM concerned me for long-term usage, and then availability was awful.

I think the 60 Ti/70 series has the potential to be a great choice, but we've been given a lot more diminishing returns. I wasn't going to buy a 40-series card, but the 4070 being flat-out slower than the 3080 12GB despite the $100 gen-on-gen price increase was disappointing. I think a $500 4070 would have been good even without it exceeding the 3080 (the 3080 being the prime example of an outlier in price vs. perf).

I know why we're not seeing better performance increases, but the part that bothers me the most is that things would be better if there were actual competition. I'm still hoping the lesson from the success of the 9070 XT is that people are willing to try something other than Nvidia if the price vs. perf is right. I'd certainly rather buy from Intel or AMD at this point, but I don't think I'll be looking for an upgrade until next generation at the earliest.
 
The lack of movement in gaming performance over time isn't due merely to node progress slowing; Nvidia has continually devoted more and more die space to AI and RT with each generation. If Nvidia were investing all of its efforts into raster performance, you probably would see that 50%+ improvement instead of 25%. In the past, the gaming line subsidized the AI chips; now it's reversed, and the GeForce line is essentially AI sloppy seconds. Nvidia is making way too much money to mind, especially since, even with its efforts diverted, the competition still lags behind. Things might change with the 6000 series, but we'll have to wait years for that, if there even is a real 6000 series and Nvidia hasn't peaced out of the gaming space entirely by 2029.
 
People are way too entitled, expecting 40, 50, 80, 1000 percent generational increases. Like, those days are loooonnnngggg gone; all the low-hanging fruit has been picked clean. Process nodes are so small that quantum effects are starting to be an issue, and they need to find creative ways to deal with them. It's no longer a matter of "we have a better laser that lets us double the density".
That is a very shortsighted and incomplete analysis.

When Intel was stagnating, everyone was drinking the Kool-Aid of "but process nodes are hard; IPC increases are harder!", until AMD just did it and proved that, no, you can't assume things have stopped advancing because there's supposedly no way forward.

Someone already pointed it out, but the die space nVidia, AMD, and Intel (to a degree) are dedicating to "AI" things is clearly eating away at the space used for traditional raster techniques. We also got SLI/XFire shoved to the side instead of being refined further, for arbitrary reasons, and I could keep going. Valid reasons or not, they are reasons you can absolutely point to when thinking about this stupid stagnation in performance increases we're facing.

It just saddens me that people, especially people like you, throw in the towel so fast instead of deeply analyzing the reasons why we're here in the first place: "No, it's too hard to do; I'll let big companies keep screwing me over with less value gen over gen." If you think DLSS is actually a "value add", then we'll just have to disagree.

EDIT: Related, but salt and all that:
https://www.techpowerup.com/335630/...-behind-16-gb-sibling-in-dlss-4-test-scenario

Regards.
 
That is a very shortsighted and incomplete analysis.

When Intel was stagnating, everyone was drinking the Kool-Aid of "but process nodes are hard; IPC increases are harder!", until AMD just did it and proved that, no, you can't assume things have stopped advancing because there's supposedly no way forward.

Someone already pointed it out, but the die space nVidia, AMD, and Intel (to a degree) are dedicating to "AI" things is clearly eating away at the space used for traditional raster techniques. We also got SLI/XFire shoved to the side instead of being refined further, for arbitrary reasons, and I could keep going. Valid reasons or not, they are reasons you can absolutely point to when thinking about this stupid stagnation in performance increases we're facing.

It just saddens me that people, especially people like you, throw in the towel so fast instead of deeply analyzing the reasons why we're here in the first place: "No, it's too hard to do; I'll let big companies keep screwing me over with less value gen over gen." If you think DLSS is actually a "value add", then we'll just have to disagree.

EDIT: Related, but salt and all that:
https://www.techpowerup.com/335630/...-behind-16-gb-sibling-in-dlss-4-test-scenario

Regards.
I'm right there with you regarding AI, DLSS, and whatnot. And there could very well be a lot of headroom remaining for performance. But eventually they will approach a hard ceiling imposed by the limits of the natural world, and what will they do then?
 
Yeah, you shouldn't pay too much attention to what Steve says. His coverage has been lacking for quite some time now, and this time he doesn't understand that he shouldn't use the flagship die as a measurement baseline.

The RTX 5060 Ti 16GB just hit the Lithuanian market, and I was surprised that it is well priced from the get-go. It is just a bit over its MSRP (by 22 euros) and is readily available. This seems like a great GPU release. The only real problem with it is its odd VRAM configuration. Nvidia really needed to redesign the GPU die for a different memory configuration, or just use 3GB modules and sell it in lower volumes with a 12GB base specification.
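For anyone wondering why the options are 8GB/16GB rather than 12GB: on a 128-bit bus, the capacities fall straight out of the channel count and module density. A quick sketch (standard GDDR7 packaging assumptions; the 3GB-module case is the hypothetical suggested above):

# Why a 128-bit card lands on 8 GB or 16 GB: four 32-bit memory
# channels, each carrying one GDDR7 module (or two in a clamshell layout).
BUS_WIDTH = 128              # bits, per the 5060 Ti spec
CHANNELS = BUS_WIDTH // 32   # one GDDR7 module per 32-bit channel

for density_gb in (2, 3):    # 2 GB modules ship today; 3 GB is the denser option
    single = CHANNELS * density_gb         # one module per channel
    clamshell = CHANNELS * 2 * density_gb  # two modules per channel (clamshell)
    print(f"{density_gb} GB modules: {single} GB or {clamshell} GB")
# 2 GB modules: 8 GB or 16 GB
# 3 GB modules: 12 GB or 24 GB

So 12GB would indeed require 3GB modules (or a different bus width), which is exactly the trade-off described above.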
 
Because flagships were always weird. You can only use proportional analysis within the same class of GPUs. In this case, Blackwell is pretty much the same as the RTX 4000 series.

If you use proportional size, you get into all sorts of weird stuff. How about those dual-GPU flagships we had previously? That logic doesn't apply universally. Also, flagship die size is pretty much meaningless; sometimes we get duds like the RTX 3090, which is the same die as the RTX 3080 and doesn't offer much of an uplift over it. Each generation has its own context, which isn't considered. The GTX Titan Z or GTX 690, for example, is proportionally double the next GPU in line.

We have to look more deeply and with context in mind. This generation, it is not that the rest of Blackwell got smaller; actually, it is the same size or bigger than before. The RTX 5090 is oversized because Nvidia couldn't produce a meaningful generational uplift to make the flagship feel exciting, so they just made it bigger and dumber. However, he is not talking about a 25% price increase for its size; Nvidia instead reduced prices for the rest of its lineup rather than raising them while keeping die sizes the same. They could of course have given us more of a GPU, but that would also have come with price jumps across the board.

Steve is barking up the wrong tree here. He needed to release that video during the Lovelace era. I found YouTubers and the community strangely quiet when Nvidia renamed the RTX 4050 to RTX 4060 and the RTX 4060 to RTX 4060 Ti. However, when Nvidia does nothing wrong, a [Mod Edit] storm gets unleashed. I feel all this pent-up anger just got released at a random time. Nvidia certainly shrunk its value offerings, but it wasn't today.

And not to be a corporate bootlicker, but we must face reality. We have three competitors, and Nvidia still produces the best product despite setting the bar lower and lower. That is just where this industry is at the moment. If it were easy to make a better product, AMD or Intel would certainly have done it.
 
LOL. I played it back in the day; I even used it as a benchmark at AnandTech for a while. I don't need to return to it, or to the Batman Arkham games (which are still the best implementation of PhysX, IMO). I have hundreds of games I'd rather try out before I decide to go back to things I've already played.

I think it should be more important than you make it out to be. I, like many gamers, have a long backlog of games, some of them 32-bit PhysX games. Without support, you cannot even run Mirror's Edge properly: even with PhysX set to minimum, it has poor 1% lows. I think Nvidia pulled the trigger way too fast on deprecating its support. Those games are still actively played, and Blackwell is too underwhelming a generation to justify removing features.

In fact, this was so important to me that I decided against purchasing a Blackwell GPU, and just this morning I won a bid on a refurbished EVGA RTX 3080 FTW3 Ultra. I really want to play Alice: Madness Returns, the Batman games, and Mirror's Edge, and I still have to go through Mafia II. This removal has made me go through all of those games a lot sooner than I otherwise would. It also made me lose all desire to buy these cards and instead buy something for my retro build. I think that speaks volumes about just how important this is to some people.
 