News RTX 5060 Ti and RTX 5060 may arrive in March to steal AMD's spotlight — Chaintech hints at higher Average Selling Prices

So his video was pretty faulty, largely because he was looking at the 30 to 40 series jump and assuming that would happen again.
I can't tell if your post is in bad faith, if you couldn't be bothered to watch the video, or if you just don't understand. Frankly, it doesn't matter which, because the end result is the same: you're spreading misinformation.

First of all, aside from the 3090-to-4090 jump, the 40 series was a bad value and generally a poor upgrade over the 30 series.

Secondly, he compares seven generations' worth of cards and isn't just comparing flagships.

Thirdly, the 40 series is the first time Nvidia has scaled price/perf off of the flagship part. They got away with it, so it's absolutely expected that they would do the same thing with the 50 series.

Lastly, consumers might be getting the same price/perf as the 40 series, which, as I already pointed out, was the first shift to this scaling-off-the-flagship model. That means consumers are not getting what used to be the case; they're getting screwed two generations in a row.
 
  • Like
Reactions: SSGBryan
Plain and simple:
In The Old Days, hardware wasn't powerful, so programmers had to be more careful with how games were coded.
Back in the days when "home computer" meant a machine with a 68000 CPU, you had to count how many CPU cycles you had left over from any given set of instructions. Game code in those days was almost exclusively written in assembly and, by necessity, tighter than a duck's ass. In this respect, modern PCs are much more forgiving (and I don't think you can cycle-count an out-of-order CPU with anything like the accuracy you can a trusty old Motorola).

Anyway, 8 GiB is acceptable in an RX 480, but not in the latest and greatest Nvidia whizz-bang Wunderwaffe.
 
  • Like
Reactions: Mindstab Thrull
"may arrive" is a pretty accurate statement. Maybe before that they can actually produce and distribute cards. 🤔
You would think AMD and Nvidia's GPUs were hand made when you look at their general (lack of) availability. I suspect we are getting the data center rejects.
 
I will never understand what all the fuss is about with 8GB cards. 8GB is still plenty to run 99.99999999999% of games at max settings at 1080p. There are very, very few games that need more. The 60 and 60 Ti cards are supposed to be budget 1080p cards, not 4K240 cards. 8GB is still lots, even for 2025, at the 60 tier for 1080p gaming.
You do realize most new console ports will allocate well over 8GB at 1080p max, right?
 
You do realize most new console ports will allocate well over 8GB at 1080p max, right?
Allocating (V)RAM =/= needing VRAM. Games will often allocate more than they need to run; I think the point is that, in 2025, GPUs marketed at gamers should include more VRAM than GPUs released in 2016.
 
  • Like
Reactions: KyaraM
I think with the absence of XX50 cards, and 90-class cards north of $2,000, we can call $350-$400 cards budget now; that's the point we're at. Lower cards, like the $249 Intel GPUs, I would call entry level, a step below budget. It's sad that $400 is considered "budget" now, but that's kind of how it is. I can remember in 2003, when I built my first PC, the 5090 of that era was $450. Unfortunately, times have changed, and when you look at GPU prices now, $350-$400 is "budget," as sad as it is to say. It shouldn't be that way, but if we're honest, it really is. In the face of $2,000-$3,000 32GB cards, 8GB at $400 is what I would expect, and still plenty for 99.9999999% of games at 1080p.
You drank ALL the Nvidia Kool-Aid, didn't you? Retailers still sell GT 1030 cards as "Fartnite capable." The BOM on a 5060 is WAY under $100. The BOM on a 5090 is probably under $500.
 
  • Like
Reactions: George³
The BOM on a 5090 is probably under $500.
Absolutely not. The GPU alone, by the most optimistic calculations, is around $280, and 32GB of GDDR7 is a bit north of $300.

And that is before we even start talking about the quite complex PCB and cooling. Your average 5090 BOM is probably pushing $1k, even before R&D and other production expenses.
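Spelling out the arithmetic behind that estimate (the $280 and $300 figures are from the post above; the ~$420 remainder is simply what is left over to reach the claimed ~$1k, not a sourced number):

\[
\$280_{\text{GPU die}} + \$300_{\text{32GB GDDR7}} \approx \$580, \qquad \$1000 - \$580 \approx \$420 \text{ for the PCB and cooling}
\]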
 
  • Like
Reactions: KyaraM
You do realize most new console ports will allocate well over 8GB at 1080p max, right?

Short answer: no.


Long answer:

Games no longer "allocate" anything; that entire task has been moved over to the OS and its graphics pipeline. Modern OSes pool both system RAM and graphics card RAM into a single pool of memory called display memory. Any resource or buffer can be located in either memory space at any point in time. The display manager then attempts to predict which resources are needed immediately and keeps those in the graphics card's RAM; if that RAM is full, the oldest resource is evicted out to system memory. In effect, this turns graphics card RAM into a massive L3 cache.

Anyone trusting the "Memory Used" readout in GPU-Z is still living in the Windows XP / DX9 era.

The PS5, for example, has 16GB of 256-bit GDDR6 that games use for both code/data and graphics.
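If you want to see this split on your own machine, here is a minimal Windows sketch (my addition, not something from the post) that uses DXGI's budget query to print, for the first adapter, how much of the GPU-local and shared (system RAM) segments your process currently has resident versus the budget the OS is willing to grant it. Error handling is trimmed for brevity.

```cpp
// Minimal sketch: ask WDDM (via DXGI) how much of the GPU-local and
// shared (system RAM) segments this process currently uses, and the
// "Budget" the OS is prepared to keep resident for it.
// Build: cl /EHsc vram_budget.cpp dxgi.lib
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return 1;

    ComPtr<IDXGIAdapter1> adapter;
    if (FAILED(factory->EnumAdapters1(0, &adapter))) return 1;   // first adapter only

    ComPtr<IDXGIAdapter3> adapter3;
    if (FAILED(adapter.As(&adapter3))) return 1;                 // budget API lives on IDXGIAdapter3

    const DXGI_MEMORY_SEGMENT_GROUP groups[] = {
        DXGI_MEMORY_SEGMENT_GROUP_LOCAL,       // VRAM on the card
        DXGI_MEMORY_SEGMENT_GROUP_NON_LOCAL,   // system RAM visible to the GPU
    };
    const char* names[] = { "local (VRAM)", "non-local (system RAM)" };

    for (int i = 0; i < 2; ++i) {
        DXGI_QUERY_VIDEO_MEMORY_INFO info = {};
        if (SUCCEEDED(adapter3->QueryVideoMemoryInfo(0, groups[i], &info))) {
            printf("%-24s budget %llu MiB, current usage %llu MiB\n",
                   names[i],
                   static_cast<unsigned long long>(info.Budget >> 20),
                   static_cast<unsigned long long>(info.CurrentUsage >> 20));
        }
    }
    return 0;
}
```

The telling part is that Budget is not the card's physical VRAM size; it is whatever the OS is currently prepared to let this process keep resident, and it can shrink under memory pressure, which is exactly the "big cache" behaviour described above.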
 
  • Like
Reactions: KyaraM
I will never understand what all the fuss is about with 8GB cards. 8GB is still plenty to run 99.99999999999% of games at max settings at 1080p. There are very, very few games that need more. The 60 and 60 Ti cards are supposed to be budget 1080p cards, not 4K240 cards. 8GB is still lots, even for 2025, at the 60 tier for 1080p gaming.
Well, if you want to use ray tracing or you're playing a bad port, you need more than 8GB, guaranteed.
 
Absolutely not. The GPU alone, by the most optimistic calculations, is around $280, and 32GB of GDDR7 is a bit north of $300.

And that is before we even start talking about the quite complex PCB and cooling. Your average 5090 BOM is probably pushing $1k, even before R&D and other production expenses.
If you think the PCBs add another $500 on top of the GPU and memory, you've clearly never ordered PCBs in bulk. I'm referring to the GPU + memory kits Nvidia sells to AIBs.
 
I will never understand what all the fuss is about with 8GB cards. 8GB is still plenty to run 99.99999999999% of games at max settings at 1080p. There are very, very few games that need more. The 60 and 60 Ti cards are supposed to be budget 1080p cards, not 4K240 cards. 8GB is still lots, even for 2025, at the 60 tier for 1080p gaming.
If you actually talk to game developers, many will say VRAM limitations are holding back graphics capabilities; that they have to spend more time optimizing a game to make sure it stays within that 8 GB envelope at 1080p at max settings rather than spending time on coding the game itself. Moreover, the tech community is rightly upset about a $400 GPU only having 8 GB of VRAM.

Sometimes, yes, it's also a matter of sloppy console ports. Either way, games are going to continue to be somewhat artificially held back so long as this magical 8 GB VRAM barrier exists. nVidia is the GPU market leader, so they set the precedent.

Always great when folks give mega corps a free pass. Intel demonstrated that 12 GB 1080p/1440p-straddled GPUs -- B580 and B570 -- can be affordable and performant.
 
I can think of 3 reasons:

1) Price. nVidia is charging more than ever for entry-level gaming cards, yet they're using VRAM size as a tool for product segmentation. There have been articles on TH in the past couple of years about modders upping the VRAM on these cards and showing a not-insignificant gain, so the performance of these cards is being artificially gimped by nVidia.

2) More and more games are using close to or above 8GB of VRAM at 1920x1080, especially when ray tracing is enabled. TechSpot did a decent article about it last year; I suggest you give it a read.

3) DLSS frame generation. Those extra frames need memory to fit in, and, to borrow a chart from that TechSpot article to demonstrate, they can easily push VRAM usage over 8GB at 1920x1080. Imagine what memory usage will be once DLSS 4's additional frames add to that.

[Chart from the TechSpot article: VRAM usage with frame generation enabled]


4) Even at 4K, with a benchmark suite updated for the 5090 review, the difference between the 16GB and 8GB 4060 Ti was only about 6%. The difference at 1440p was 0.6 fps.
[Chart: average FPS at 2560x1440, RTX 4060 Ti 16GB vs 8GB]


For ray tracing, the difference at 1080p was 2%. For 1440p, the difference was a pretty substantial 30%. However, the performance is so bad that it doesn't matter. The cards run out of compute performance before they run out of VRAM.
 
  • Like
Reactions: KyaraM
It's sad that $400 is considered "budget" now, but that's kind of how it is. I can remember in 2003, when I built my first PC, the 5090 of that era was $450.
In 2003, the top-end 5950 Ultra had an MSRP of $500. That's about $850 today. That's not $2,000, but it isn't cheap by most people's standards either. That flagship GPU was only 207 mm². By comparison, the upcoming $550 5070 is a 263 mm² die, and $550 today is the equivalent of about $320 in 2003. The 5090 is nearly four times the size of the 5950 Ultra at 750 mm². Comparing prices with cards that old is practically meaningless, as the products have evolved so much that there is very little basis for comparison.
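For reference, the inflation factor implied by those figures (derived from the poster's own $500 to roughly $850 conversion rather than an official CPI lookup):

\[
\frac{\$850}{\$500} \approx 1.7 \quad\Rightarrow\quad \frac{\$550}{1.7} \approx \$320 \text{ in 2003 dollars}
\]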
 
Short answer: no.


Long answer:

Games no longer "allocate" anything; that entire task has been moved over to the OS and its graphics pipeline. Modern OSes pool both system RAM and graphics card RAM into a single pool of memory called display memory. Any resource or buffer can be located in either memory space at any point in time. The display manager then attempts to predict which resources are needed immediately and keeps those in the graphics card's RAM; if that RAM is full, the oldest resource is evicted out to system memory. In effect, this turns graphics card RAM into a massive L3 cache.

Anyone trusting the "Memory Used" readout in GPU-Z is still living in the Windows XP / DX9 era.

The PS5, for example, has 16GB of 256-bit GDDR6 that games use for both code/data and graphics.
Yep, and normally at least 10GB is allocated to the GPU in consoles, while the CPU just makes do with whatever's left.
If you were referring to just that, don't use the word BOM. I can't read your mind. The chipset/SoC/GPU is also part of the BOM.
You're right. I should've been much clearer, but a company like Gigabyte hardly pays anything for bulk-ordered printed PCBs. Even 14-layer boards are cheap in bulk.
 
Yep and normally at least 10GB is always allocated to the GPU in consoles while the CPU just makes do with whatever’s left.
There is zero "allocated" to the GPU. That's the great part about linear memory addressing with a unified memory architecture: there's no need for it.

To understand this better: on something like Windows (or Linux / macOS), the way graphics memory works is that VRAM segments are memory-mapped into the virtual address space. The kernel drivers can then address them directly through that virtual memory space. If you want to see this, just open Device Manager (devmgmt.msc), navigate to your display adapter, right-click -> Properties -> Resources, and voilà: that is the kernel address range that the GPU's memory is mapped into.

Because modern consoles all use a single giant pile of memory for programs, there is no distinction between "system RAM" and "graphics RAM"; it's all just "RAM." Graphics resources are just regular game data, like string tables and enemy information. A production team might have a rule that says "plan for around X to be used for graphics resources," but that's not a hard limit, as it's all just one big pile anyway.
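On the Windows side of the same picture, the "one pool" the display manager juggles is just dedicated VRAM plus the shared system memory it is allowed to spill into. A small companion sketch (again my addition, assuming Windows and DXGI) that lists both pools for every adapter:

```cpp
// Minimal sketch: list each display adapter and the two memory pools WDDM
// lets it draw on: dedicated VRAM and shareable system RAM.
// Build: cl /EHsc adapter_pools.cpp dxgi.lib
#include <dxgi.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory1> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return 1;

    for (UINT i = 0; ; ++i) {
        ComPtr<IDXGIAdapter1> adapter;
        if (factory->EnumAdapters1(i, &adapter) == DXGI_ERROR_NOT_FOUND) break;

        DXGI_ADAPTER_DESC1 desc = {};
        adapter->GetDesc1(&desc);
        wprintf(L"%ls\n  dedicated VRAM: %llu MiB\n  shared system memory: %llu MiB\n",
                desc.Description,
                static_cast<unsigned long long>(desc.DedicatedVideoMemory >> 20),
                static_cast<unsigned long long>(desc.SharedSystemMemory >> 20));
    }
    return 0;
}
```

SharedSystemMemory is typically around half of your installed system RAM; that is the non-local segment the eviction described above spills into.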
 
I understand what you're saying. I know that Hellblade, Hogwarts Legacy, IJATGC, Avatar, and a few others will push north of 8GB when maxed out. But what I'm saying is that for the vast majority of games, 8GB will work. What I mean is this: there are 90,000 games on Steam, and I bet there are no more than 10 that require >8GB for gaming at at least HIGH settings (excluding ray tracing) at 60 FPS, which is still very playable. That's such a tiny fraction. The masses are who the XX60 and XX60 Ti cards appeal to, and I think Nvidia is just saying: for the people who want to play those 10 games, go get an XX70 or XX70 Ti; for everyone else, the 60 and 60 Ti are enough. I think a 60 or 60 Ti with 12GB or more is pointless, because if a game is demanding enough to use 12GB, the core won't be able to produce the frames, even if the texture memory pool is sufficient. The 60 tier is geared toward what suffices for most people: 1080p high for only $350 instead of $2,000, lol. I think that's what they are getting at.

I apologize if I sound like I am defending NGreedia's price gouging. That is NOT what I am trying to do. I'm trying to give an objective, neutral, impartial, honest, realistic, real-world TECHNICAL take on the matter. In the vast majority of common, real-world use cases, with a few small exceptions, 8GB is still enough.
Something nobody here will want to hear, I'm sure.
My 4060 laptop constantly pushes over 60 FPS in Hogwarts Legacy with everything, including RT, set to the highest settings at native 1080p. There are no stutters (and I even run it from an external SSD). Pop-ins occur in that game even at medium or low settings, when it reserves only 6GB, and only in the overworld while flying, not in Hogwarts, Hogsmeade, or at ground level, so that argument doesn't really matter either. I tested it. So even that game isn't an argument.