Review Nvidia GeForce RTX 5060 Ti 16GB review: More VRAM and a price 'paper cut' could make for a compelling GPU

This card and the 5060 will be so ubiquitous that anyone with $500 will end up buying one for a build or in a pre-made. I don't think it makes sense to get one unless you're moving up from a 4060 to a 5060 Ti. The value I see in this card is that it's going to have more support than any other card on the market, and it won't blow up.
 
Someone already pointed it out, but the die space nVidia, AMD and (to a degree) Intel are dedicating to "AI" features is clearly eating away at the space used for traditional raster techniques.
There is also the issue of the platform to consider. It would be great to have a GPU that can produce 400 FPS, but it is also up to the game engine and CPU to make that happen. While it would be great for the consumer to have a GPU keep getting faster and faster with new CPU upgrades, it would mean no new sales for said GPU manufacturer.
We also got SLI/XFire shoved to the side, instead of continuing to refine it, for arbitrary reasons, and I could keep going. Valid or not, they are reasons you can absolutely point to when thinking about this stupid stagnation we're facing in performance increases.
Latency kills that idea pretty quickly; it didn't even work well with two chips on the board (I ran SLI twice). Better to do chiplet-style GPUs at that point.
 
Latency kills that idea pretty quickly; it didn't even work well with two chips on the board (I ran SLI twice). Better to do chiplet-style GPUs at that point.
As a matter of fact, I gave this very same point some thought: Frame Interpolation in XFire/SLI would be the solution, to consistency at least. Latency will depend. It'll be the same thing as with DLSS or FSR: since you're coming from lower frame-time processing, you "free up" latency and then eat some of it when composing the final frame on the main GPU. This is a thin, superficial idea, but I think it does hold some water. And guess what, Frame Interpolation does not require "AI stuff" to work decently. If people are already putting up with decreases in visual quality, then why not bring back SLI and XFire? Well, it's called "mGPU" now, but the point stands. AMD could try to leverage that to catch nVidia off guard in the gaming space. It would help GPU clustering in the data center as well, I'm sure.

EDIT: I forgot to say I agree with the chiplet comment, just to be clear. That is, more or less, what nVidia and AMD need to figure out to keep scaling up. Keep in mind nVidia did touch on the subject from different angles in the GDC presentation. It was interesting: "we're still using SLI, but not for consumers and in different, more creative ways". I mean, come on, why not bring some of that to consumers at a scale that makes sense? Oh welp.

Regards.
 
Just some EU retail datapoints, from Germany with 19% VAT included:

5060 Ti 8GB is €399, €449 for 16GB
5070 is €600
5070 Ti is €900 (there are cheaper offers, but without immediate availability)
5080 is €1200
5090 is €2800

Those are obviously not the high-end models with tons of bling, but the more economical varieties, e.g. from Inno3D, which have the added benefit of not wasting extra slots.

All of these are in stock; for some reason I no longer see any eBay offers from scalpers via Geizhals.

That means lower-end cards are dipping below EU MSRP and only the 5090 is still €600 above that.

Availability hasn't been an issue for weeks here and prices have almost normalised: what you see in the US seems to be mostly MAGA driven.
 
Curious why you think it's unreasonable to use proportional size as a basis for value analysis.
Because he intentionally left off the prior generations' flagship, called Titan. The 90 models are just Titans with a new name, complete with insane pricing.

https://en.m.wikipedia.org/wiki/Nvidia_Titan

The 20 series' top model was the Titan RTX at a $2,500 price. Notice the Titan label disappeared and the xx90 appeared. Yeah, it's because everyone ridiculed it as overpriced for its performance. But somehow, if it's called xx90, then it's acceptable.
 
Because he intentionally left off the prior generations' flagship, called Titan. The 90 models are just Titans with a new name, complete with insane pricing.
I can't tell if you just don't know what they were looking at or if you don't understand. They based proportional size (in cores) off the largest die of the generation. All Titans used the same die as the highest consumer card, except for the V, because there were no Volta consumer cards, and I suppose you could say the Z, because it used two dies.
 
Because flagships have always been weird. You can only use proportional analysis within the same class of GPUs. In this case, Blackwell is pretty much the same as the RTX 4000 series.

If you use proportional size, you get into all sorts of weird stuff. How about those dual-GPU flagships we had previously? That logic doesn't apply universally. Also, flagship die size is pretty much meaningless. Sometimes we get duds like the RTX 3090, which uses the same die yet doesn't have much of an uplift over the RTX 3080. Each generation has its own context, which isn't considered. The GTX Titan Z or GTX 690, for example, is proportionally double the next GPU in line.

We have to look more deeply and with context in mind. This generation, it is not that the rest of Blackwell is smaller; actually, it is the same size or bigger than it was previously. The RTX 5090 is oversized because Nvidia couldn't produce a meaningful generational uplift to make the flagship feel exciting, so they just made it bigger and dumber. However, he is not talking about a 25% price increase for its size. Nvidia instead reduced prices for the rest of its lineup rather than increasing them while maintaining die sizes. They could of course have given us more of a GPU, but that would also have come with price jumps across the board.

Steve is barking up the wrong tree here. He needed to release that video during the Lovelace era. I found youtubers and the community strangely quiet when Nvidia renamed the RTX 4050 to RTX 4060 and the RTX 4060 to RTX 4060 Ti. Yet when Nvidia does nothing wrong, a shitstorm gets unleashed. I feel all this pent-up anger just got released at a random time. Nvidia certainly shrunk its value offerings, but it wasn't today.

And not to be a corporate bootlicker, but we must face reality. We have 3 competitors, and Nvidia still produces the best product despite setting the bar lower and lower. It is just where this industry is at the moment. If it were easy to make a better product, AMD or Intel would certainly have done it.
Guessing you didn't watch the video and probably don't actually watch much GN period. They repeatedly called out the 40 series for being branded at least one tier higher than they really were (aside from 4090).

As for the video in question, they were using maximum core count, not die size, because if they were using die size you'd end up with oddities like the 30 series, where 5 GPUs were released using the same die. Continuing along with this, the 3080 used proportionally more cores than any other 80-series card had, which easily explains the 3080 vs 3090 situation.

During the period they were looking at, 700 series and newer, there was one dual-die video card, which was when nvidia released two Kepler Titans on one board. While this is an anomaly, I don't see how it changes the resulting data. It still used the same die as the highest-tier products; it just used two of them.

I'm not sure why you think the video is just an indictment of Blackwell when it uses historical data and clearly shows Ada is where the big shift came from and Blackwell just extends that.
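For anyone who wants to reproduce the pattern being described, here's a minimal sketch of that proportional-cores analysis. The CUDA core counts below are the publicly listed specs as I pulled them from spec sheets (my numbers, not data taken from the video), so treat the exact percentages as approximate:

```python
# Sketch: what fraction of the generation's largest consumer die
# (by CUDA core count) does the 80-class card get?
# Core counts are publicly listed specs (my numbers, assumed correct).
cards = {
    # generation: (80-class cores, flagship cores)
    "Ampere (3080 / 3090)":    (8704, 10496),
    "Ada (4080 / 4090)":       (9728, 16384),
    "Blackwell (5080 / 5090)": (10752, 21760),
}

for gen, (tier80, flagship) in cards.items():
    print(f"{gen}: 80-class has {tier80 / flagship:.0%} of the flagship's cores")
```

Run it and the 80-class share drops from roughly 83% (Ampere) to about 59% (Ada) and about 49% (Blackwell), which lines up with the point that the big shift landed with Ada and Blackwell merely extended it.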
 
I can't tell if you just don't know what they were looking at or if you don't understand. They based proportional size (in cores) off the largest die of the generation. All titans used the same die as the highest consumer card except for V because there were no Volta consumer cards and I suppose you could say Z because it used two die.

The die size is irrelevant to the argument; what matters is only the delta in price/performance between the top model and everything else. The reason Steve didn't include it is that it completely destroys the narrative being pushed: that big, evil Nvidia is stealing performance from the poor peasant gamers. As for why he's pushing that narrative, it makes his company money via increased viewership. GN is a corporate entity, complete with tax forms, property, lawyers and employees.

As to the real issue, as I mentioned, it's pure entitlement. The demand that people's money in 2025 will purchase 40~100% more performance than it did in 2023/2024 is itself absurd. Not only does your money have less value in 2025 than it did in 2023/2024, but those generational lifts from cheap process-node shrinking are gone. Maintaining your obscene vendor demands will only result in outrage-farming outlets making even more money off you. The vendors themselves are businesses and operate under the same rules as everyone else.
 
The die size is irrelevant to the argument; what matters is only the delta in price/performance between the top model and everything else. The reason Steve didn't include it is that it completely destroys the narrative being pushed: that big, evil Nvidia is stealing performance from the poor peasant gamers. As for why he's pushing that narrative, it makes his company money via increased viewership. GN is a corporate entity, complete with tax forms, property, lawyers and employees.
I see, so you just refuse to understand; willful ignorance it is, then. Adding the Titans to their charts wouldn't change anything but the "flagship"-adjusted pricing. It doesn't magically change the proportional size of the lower SKUs nor their relative performance.
As to the real issue, as I mentioned, it's pure entitlement. The demand that people's money in 2025 will purchase 40~100% more performance than it did in 2023/2024 is itself absurd. Not only does your money have less value in 2025 than it did in 2023/2024, but those generational lifts from cheap process-node shrinking are gone. Maintaining your obscene vendor demands will only result in outrage-farming outlets making even more money off you. The vendors themselves are businesses and operate under the same rules as everyone else.
That's some high-quality corporate bootlicking right there; I hope you're paid well. Poor nvidia isn't making all the money in the world, so buyers should be happy getting less value for their money.

How about this reality for you: RTX 3060 Ti to RTX 5060 Ti nets about a 25% performance gain for about 12.5% less cost (adjusted for inflation) at MSRP. Maybe you think this is a good deal after ~4.5 years, but I sure don't. Even if we look at CPUs, which have notoriously low returns, their gains are higher than that performance increase over the same period of time, and significantly so if you include multicore.

Now, if someone is actually demanding constant 40-100% increases gen over gen (I've never seen anyone say this seriously, but the world is a big place), that's obviously ridiculous. Expecting something in the 20-30% range each generation, on the other hand, is not.
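For what it's worth, the arithmetic behind that ~12.5% figure can be sketched quickly. The cumulative inflation factor here is my rough assumption (not an official CPI number), so the result is approximate:

```python
# Rough sketch of the inflation-adjusted price comparison above.
# The ~23% cumulative inflation from late 2020 to 2025 is my assumption.
msrp_3060_ti = 399       # RTX 3060 Ti launch MSRP, Dec 2020 (USD)
msrp_5060_ti_16gb = 429  # RTX 5060 Ti 16GB MSRP, 2025 (USD)
inflation = 1.23         # assumed cumulative CPI factor, 2020 -> 2025

adjusted = msrp_3060_ti * inflation  # ~490 in 2025 dollars
saving = 1 - msrp_5060_ti_16gb / adjusted
print(f"Inflation-adjusted cost reduction: {saving:.1%}")
```

With that assumed inflation factor the reduction lands around 12-13%, in line with the "about 12.5% less cost" figure.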
 
I see, so you just refuse to understand; willful ignorance it is, then. Adding the Titans to their charts wouldn't change anything but the "flagship"-adjusted pricing. It doesn't magically change the proportional size of the lower SKUs nor their relative performance.

That's some high-quality corporate bootlicking right there; I hope you're paid well. Poor nvidia isn't making all the money in the world, so buyers should be happy getting less value for their money.

How about this reality for you: RTX 3060 Ti to RTX 5060 Ti nets about a 25% performance gain for about 12.5% less cost (adjusted for inflation) at MSRP. Maybe you think this is a good deal after ~4.5 years, but I sure don't. Even if we look at CPUs, which have notoriously low returns, their gains are higher than that performance increase over the same period of time, and significantly so if you include multicore.

Now, if someone is actually demanding constant 40-100% increases gen over gen (I've never seen anyone say this seriously, but the world is a big place), that's obviously ridiculous. Expecting something in the 20-30% range each generation, on the other hand, is not.

What makes the 3060 Ti better is that it isn't kneecapped by a 128-bit bus or restricted by the brain-dead x8 design.

The performance would be greater if the bus went back to 192-bit and wasn't kneecapped again by the x8 lanes.

The 4060 / 4060 Ti / 5060 / 5060 Ti are just crap designs.
 
What makes the 3060 Ti better is that it isn't kneecapped by a 128-bit bus or restricted by the brain-dead x8 design.
That's not necessarily accurate based on what we can extrapolate from comparing the three recent 60 Ti generations.

The 5060 Ti massively increased memory bandwidth to where it matches the 3060 Ti (~55% more than the 4060 Ti). The 5060 Ti also has ~5.8% more cores and a ~4.2% higher base clock (this part doesn't seem particularly meaningful from the reviews I've seen) compared to the 4060 Ti, for ~14.5% more performance. So it would seem the additional memory bandwidth definitely matters, but it also doesn't seem like the GPU needs anywhere near all of it (for rendering at least).
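Those bandwidth figures fall straight out of bus width times effective per-pin data rate; here's a quick sanity check (the bus widths and data rates below are the published memory configurations for each card):

```python
# Memory bandwidth (GB/s) = bus width in bytes * effective data rate (Gbps/pin).
def bandwidth_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    return bus_bits / 8 * gbps_per_pin

# Published memory configurations for each card.
specs = {
    "RTX 3060 Ti (256-bit GDDR6 @ 14 Gbps)": (256, 14),
    "RTX 4060 Ti (128-bit GDDR6 @ 18 Gbps)": (128, 18),
    "RTX 5060 Ti (128-bit GDDR7 @ 28 Gbps)": (128, 28),
}
for name, (bits, rate) in specs.items():
    print(f"{name}: {bandwidth_gbs(bits, rate):.0f} GB/s")
```

The 3060 Ti and 5060 Ti both land at 448 GB/s while the 4060 Ti sits at 288 GB/s, which is where the ~55% uplift comes from (448 / 288 ≈ 1.56).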

TPU did some PCIe scaling testing on the 5060 Ti since it has a PCIe 5.0 interface, and the drop to 4.0 was a margin-of-error difference. So while there's definitely more of a difference on PCIe 3.0, so long as the platform is 4.0 it's not a real loss.

edit: didn't realize I forgot the link https://www.techpowerup.com/review/nvidia-geforce-rtx-5060-ti-pci-express-x8-scaling/
 
That's not necessarily accurate based on what we can extrapolate from comparing the three recent 60 Ti generations.

The 5060 Ti massively increased memory bandwidth to where it matches the 3060 Ti (~55% more than the 4060 Ti). The 5060 Ti also has ~5.8% more cores and a ~4.2% higher base clock (this part doesn't seem particularly meaningful from the reviews I've seen) compared to the 4060 Ti, for ~14.5% more performance. So it would seem the additional memory bandwidth definitely matters, but it also doesn't seem like the GPU needs anywhere near all of it (for rendering at least).

TPU did some PCIe scaling testing on the 5060 Ti since it has a PCIe 5.0 interface, and the drop to 4.0 was a margin-of-error difference. So while there's definitely more of a difference on PCIe 3.0, so long as the platform is 4.0 it's not a real loss.

The fact that it matches a 3060 Ti in bandwidth should be an embarrassment. My issue is that if the bus were wider, at 192 bits, it would probably gain more. My other issue is that the x8 cut-down is nonsense; they never did that before the 4060 Ti and 4060.
 
The fact that it matches a 3060 Ti in bandwidth should be an embarrassment. My issue is that if the bus were wider, at 192 bits, it would probably gain more.
None of the 50-series cards have scaled with their massively increased memory bandwidth over the 40 series, so it stands to reason the bandwidth is more than the GPUs can use for rendering purposes. Not that Ampere and Blackwell cores can be directly compared, but the 3060 Ti has ~5.5% more cores than the 5060 Ti does. I'm quite certain bumping that up would do significantly more than widening the bus would.
My other issue is that the x8 cut-down is nonsense; they never did that before the 4060 Ti and 4060.
I think cutting the PCIe interface down to 8 lanes was purely to maximize profits. Given the huge margins nvidia is making selling GPUs, this lets them make a smaller die and lets AIBs save a bit on the boards. It'd certainly be nice if the 5060 Ti (and the 4060 Ti, for that matter) were still 16 lanes for those still on PCIe 3.0, though.
 
That's actually a little worse than I was expecting, but perhaps that's because the 5060 Ti is a solid performance upgrade for its market position. I like how he also used upscaling to get more playable frame rates, which really shows the chasm between 8GB and 16GB should you want to drive 1440p/4K and/or use RT.