News: '[The] Majority of gamers are still playing at 1080p and have no use for more than 8GB of memory': AMD justifies RX 9060 XT's 8GB of VRAM

Don't know what you consider expensive, but 1440p and even 4K monitors aren't that bad; the GPUs to make the best use of them can be. It sounds like you live in a place where higher-end hardware is very expensive, and I am sorry if that is true.
To put it into perspective... high-end 1080p monitors are already prohibitively expensive for many... that kind of consumer is unlikely to take a risk on low-end 1440p or above, especially because proper upscaling would require a higher-end CPU, which would necessitate even more upgrades. At 1080p the CPU mostly affects rendering, because upscaling to 1080p barely does anything, which is why the benchmarks show the discrepancies they do.

Typically the pattern seems to be that a consumer can afford a GPU and a monitor at the same price... but he is more likely to spend three times as much on the GPU, because getting the game running well is more important than getting it displaying well. Many people also upgrade their GPUs three or more times as often as they upgrade their monitors.

It's not only the monitor cost you have to take into account; it's a matter of stepping up to the next tier. For many, getting a 1440p monitor means basically upgrading everything, and this is even more true for 4K, where a high-end CPU and perhaps faster RAM are basically essential, whereas you might have been able to plod along at 1440p. That said... monitors are much more expensive for some than for others. Even as much as tariffs are inflating tech prices in the US and its tied economies right now, they are still far cheaper there than elsewhere.

Think of it yet another way... if you save up all year and can only afford to upgrade one component every second or third year, if at all... how will that affect your spending?

Where I live, the price someone in the US used to pay (who knows what it is at the moment, with the fluctuations) basically doubles or triples for the same item by the time it gets to me... which of course means that China dominates all tech markets here.
 
So you think everybody just cranks everything to Ultra? Not in the real world, but if that's what you do, great for you.
It's not about the settings themselves; it's about how you can use them to track the lifetime of the GPU. Only the wealthy upgrade their GPU when they don't really need to. The rest of us buy something where you can currently crank everything to 11... then you notice that you gradually need to crank things down as new generations of games arrive, until you reach the point where you cannot run them even on minimum... THEN you upgrade your GPU.

The issue is current-gen GPUs where you CANNOT crank everything to 11, because they will obviously age faster than ones that can.
 
So you think you can buy a budget $300 card and just crank everything up? Yes, $300 for a gaming card in 2025 is a budget gaming card.
You need to do a bit of research before buying a card. Don't blame it on the card. If 8GB is not enough for you, then don't buy it.
The way you're saying it is like: I'm going to build a PC for MFS, get me an i3, 16GB of memory and a 300-buck video card, and play it. For MFS that's not going to work, but for most games a decent PC will play them just fine at 1080p with the right settings.
That's why games list minimum and recommended specs.
 
You are contradicting yourself.... if 1080p is a budget resolution, then logically you can crank everything up at 1080p on a budget GPU.
 
So you think everybody just cranks everything to Ultra? Not in the real world, but if that's what you do, great for you.
No. I was explaining why the results you posted were different, since you thought it was something driver-related. Realistically, though, if these $300+ 8GB cards had 16GB of VRAM instead, you could do that without issue.
So you think you can buy a budget $300 card and just crank everything up? Yes, $300 for a gaming card in 2025 is a budget gaming card.
$300 is not budget. Just because that's the cheapest MSRP a company is willing to sell at doesn't make it "budget". That would be like calling a $280k Ferrari budget because it happens to be the cheapest model they sell. There are no budget cards anymore from AMD/Nvidia, and the $300+ 8GB cards are compromised by the amount of VRAM they have, not the GPU powering them.
 
A few days ago I decided to finally try Horizon Zero Dawn from 2020, gifted to me by a friend
(not even the remastered version, just the regular old thing)...
I already had to dance around the settings to get the VRAM usage on my RTX 2070 Super
below 7.5GB at 1080p in this 5-year-old game.
The game still looks great with some things set to high instead of ultra,
and I can accept this on my 6-year-old video card, which I bought 3 years ago second-hand for €220.
However, I would not accept it on a brand-new 60-class €300 video card,
and as a casual gamer I still have no motivation to upgrade.
But I'm certain I will be upgrading to 16GB of VRAM when the time comes...

It is incredible just how much the 60-class cards have fallen in the past 6-7 years.
The last good 60-class card was the RTX 2060 (it delivered the performance of a GTX 1080).
The 3060 was only partially good: it has 12GB of VRAM but not really enough horsepower,
and the RTX 4060 was the worst 60-class card released in the last decade.

Of course people are still going to buy it in prebuilts and even as a standalone product,
but that doesn't mean it is a good product; it just means
regular users don't have enough awareness when it comes to PC parts...

Also, the FPS don't tell you the whole story. An RTX 5060 in a PCIe 4.0 slot
is a disaster when it hits the wall of that frame buffer.
It doesn't even need to hit it for a prolonged period of time;
just a little stuttering here and there, or missing textures for a few seconds,
is enough to make the gaming experience "not that great"...
The benchmark will still tell you that you average 80FPS with 1% lows at 50FPS,
but the actual experience in games that get close to that 8GB, and
occasionally exceed it, is not that good anyway...
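
To put rough numbers on that, here's a quick sketch of how averages hide hitching; the frame times are invented to illustrate the effect, not measured on any card:

```python
# A mostly-smooth run with a few long VRAM-paging hitches: the average
# barely moves, but the 1% lows collapse. All numbers are made up for
# illustration, not benchmarks of a 5060.
import statistics

frame_times_ms = [12.5] * 990 + [80.0] * 10   # 990 smooth frames, 10 hitches

avg_fps = 1000 / statistics.mean(frame_times_ms)
slowest_1pct = sorted(frame_times_ms)[-len(frame_times_ms) // 100:]
low_1pct_fps = 1000 / statistics.mean(slowest_1pct)

print(f"average: {avg_fps:.0f} FPS")       # ~76 FPS, looks fine on a chart
print(f"1% lows: {low_1pct_fps:.0f} FPS")  # ~12 FPS, what the hitching feels like
```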
 
It's the most common gaming resolution by a very wide margin: 55.25% of users are at 1080p, with 1440p the next most common at 19.90%, less than half that. The most common VRAM amount is 8GB, and the most common system RAM is 16GB, with six-core CPUs. 2160p is less than 5%, same with higher-end cards.

This is where echo chambers have people convinced that everyone drives a Mercedes to work and their Lamborghini on the weekends.
It's the most common because Nvidia and AMD have been making crap $300 GPUs for a decade now. If there is no pushback against 8GB GPUs for 1080p, AMD and Nvidia would love to sell you a 3060 Ti 8GB/6650 XT for the fourth time. If you don't give gamers better choices, they just can't play at 1440p at a reasonable price. Also, it's the most common because 1080p has been on the market far longer than the other monitor resolutions. Is the Mercedes 1440p and the Lamborghini 4K? I don't think I get the analogy. Maybe we should strive for the Xbox Series S experience on PC and keep the detail low with our very popular GTX 1650.
 
So you think everybody just cranks everything to Ultra? Not in the real world, but if that's what you do, great for you.

Yes, people on social media really do think that. Distorted views of reality have them honestly believing that if they aren't playing on maximum settings then they are somehow inferior and have less "street cred". Social media outrage farming and the outrageous GPU price inflation didn't help the matter.

I mean, people demanding more than bottom-tier memory from a 128-bit bus product kinda frames the whole discussion.

https://www.techpowerup.com/gpu-specs/geforce-gtx-1650.c3366


https://www.techpowerup.com/gpu-specs/geforce-gtx-950.c2747


Most recently
https://www.techpowerup.com/gpu-specs/geforce-rtx-3050-8-gb.c3858

Do we notice something they all have in common? Those cards all have exactly four GDDRAM chips on them, each connected to a single 32-bit memory channel. The 40 series has nVidia pulling hardcore shenanigans with price/performance that people still haven't wrapped their heads around. Expecting to run a 50-tier card on "Ultra" is just hilariously bad humor; it might be possible on older titles, at 1080p, or with carefully modified engine settings, but as a general rule it's gonna barf.
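
The pattern in those spec pages reduces to one line of arithmetic; the chip densities below are my recollection of the stock configurations, so treat them as assumptions rather than verified specs:

```python
# One memory chip per 32-bit channel; capacity only grew because chip
# density did. The entry-level 128-bit layout never changed.
cards = {
    "GTX 950 (GDDR5)":  (128, 0.5),   # 4 x 512 MB = 2 GB
    "GTX 1650 (GDDR5)": (128, 1.0),   # 4 x 1 GB  = 4 GB
    "RTX 3050 (GDDR6)": (128, 2.0),   # 4 x 2 GB  = 8 GB
}
for name, (bus_bits, gb_per_chip) in cards.items():
    chips = bus_bits // 32            # channels == chips on these boards
    print(f"{name}: {chips} chips, {chips * gb_per_chip:.0f} GB")
```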
 
The point is the settings are so complicated people don't know what they do.... so they assume higher is better and that's that.

It's not complicated; a five-minute Google search will tell you what you need to know. I know we live in a world where everything is 140 words or less, but effort in equals effort out.

Here is the bottom line for managing around entry-level cards, which anything with a 128-bit memory bus is going to be. First, things like DLSS frame gen/MFG eat up 1.5GB+ on their own; disable those. Upscaling is fine, but frame gen, like anything "AI"-related, is very memory-intensive to start with. Second, make sure you are on high or lower, or really just set everything to high and, if you are getting micro-stuttering, reduce texture size one notch and it'll stop. While there are other things in VRAM, generally speaking texture size = VRAM usage.

And that's it; that will get pretty much every game out there running on a card with 8GB or less of VRAM. I know this because downstairs I have a 3050 LP (6GB) that does everything at 1080p medium upscaled to 2160p. If I can get that little 75W card working, then people should have no problem getting their entry-level 4060, 5060 and 7600 8GB cards working just fine.
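
If you want a feel for why the texture slider is the one that matters, here's a rule-of-thumb sketch; it assumes block-compressed textures at roughly 1 byte per texel and a ~400-texture resident pool, which are rough assumptions, not a profile of any real game:

```python
# Rough texture-pool math: VRAM use scales with the square of texture
# resolution, so one notch down on the slider is roughly a 4x cut.
def texture_mb(size_px, bytes_per_texel=1.0):
    # Base level plus a full mip chain (~4/3 of the base), in MB.
    return size_px * size_px * bytes_per_texel * (4 / 3) / 2**20

for size in (1024, 2048, 4096):
    pool_gb = texture_mb(size) * 400 / 1024   # assume ~400 textures resident
    print(f"400 textures at {size}x{size}: ~{pool_gb:.1f} GB")
# -> ~0.5 GB, ~2.1 GB, ~8.3 GB: the 4K pool alone overruns an 8GB card,
# which is why dropping texture size one notch stops the stuttering.
```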

I honestly don't think the xx60 Ti 128-bit should even exist, but nVidia is playing games with model numbers to milk more money, so whatever.
 
Yes, people on social media really do think that. Distorted views of reality have them honestly believing that if they aren't playing on maximum settings then they are somehow inferior and have less "street cred". Social media outrage farming and the outrageous GPU price inflation didn't help the matter.

I mean, people demanding more than bottom-tier memory from a 128-bit bus product kinda frames the whole discussion.
Yeah, we all know these cards are more like 5050/5050 Ti tier in disguise.
Nobody has a problem with entry-level cards being 8GB per se;
the problem is their naming and pricing...

Tech YouTubers are absolutely correct to demand that a brand-new $320 60-class card
should be able to max out settings at 1080p while delivering at least 60 FPS...
(and that is regardless of whether they are intentionally rage farming or not)

The 1060 6GB delivered the performance of a GTX 980 for $300,
the RTX 2060 delivered the performance of a GTX 1080 for $350,
the GTX 560 Ti almost matched the GTX 480 (and the GTX 560 Ti 448 edition, with a GTX 570 core, surpassed it),
the GTX 660 Ti was able to beat the GTX 580, and the GTX 760 was just 15% shy of the GTX 680...

And that's the problem:
60-class cards were always synonymous with the best bang for your buck
and with 1080p-high gaming.
That is why we are all outraged (especially old gamers)
to see that the new 60-class card is 70-80% shy of the last-gen 4080,
has the performance of the three-generations-old 2080 Ti (with less VRAM),
and can barely match two-gens-old upper-mid-class cards like the RTX 3070 or 3060 Ti in rasterization.

So either name it RTX 5050/5050 Ti and drop the price to $200-250,
or give it a more powerful chip (a cut-down 5070 comes to mind) and 12GB of VRAM
if you want to name it RTX 5060 (Ti)...
 
According to Frank Azor, the RX 9060 XT 8GB is built for the majority of gamers who continue to play at 1080p, with esports being a primary focus.

'[The] Majority of gamers are still playing at 1080p and have no use for more than 8GB of memory': AMD justifies RX 9060 XT's 8GB of VRAM : Read more

8GB isn't enough for 1080p with all the bells and whistles on; maybe 10GB is, but not 8GB.

The only good thing about the 9060 XT is that it didn't get a bandwidth cut; it keeps x16 lanes, which means it works better on older systems.

When a GPU runs out of memory you get horrible pop-in; FPS will only get you so far. I could have 200 FPS, but if it's unstable as crap or looks like ass then it isn't worth the silicon.
 
Frank's response is a bit asinine. If there wasn't a duopoly and AMD didn't copy uncle Jensen every generation to play keep-up, they could move the 1080p cards to 160-bit instead of 128-bit (which is pretty weak for over a decade's standard) and make them 10GB cards, with 256-bit/16GB for 1440p and 384-bit/20GB for 4K. Instead they defend uncle Jensen so they can sell their $199.99 cards for $300, then turn around and say that wasn't enough and now they have to charge $550.

The funny thing about that: such a card did exist. The 6700 non-XT was 160-bit with 10GB.
 
the problem is their naming and pricing...

Price is a completely separate issue from "a card having 8GB in 2025 is dead". Saying that the totally-not-50-series cards have bad price/performance is a completely valid statement. Saying "8GB cards are dead in 2025" is an invalid statement.

GPU price inflation and the economics behind it really should be its own article.

As for tech YouTube and other media outlets, they are just outrage farming for clicks and ad revenue. Your attention is the product they sell to various third parties for revenue, which then goes to pay their employees. Right now there is serious dissatisfaction in the community about GPU prices vs. performance, and those media outlets are fanning the flames to generate revenue.
 
Yeah, I get what you are saying.
And of course they are exploiting a "business opportunity";
that's not to say they are wrong.
They just phrase it in a way that adds a bit of dramatic theatre for views...

Most of their viewers and seasoned tech forum members
will not buy an 8GB card for $320 in 2025 anyway.
Those that will don't watch their channels or read forums...

Sadly there are far more unsuspecting customers out there
than there are those who at least ask some questions before they buy.

I mean, sure, for some people the RTX 5060 may be completely fine for what they need,
but should they pay $320 for it? Absolutely not.
 
Or.... like me: I have a BenQ second monitor and Sony Bravia TVs, none of them 4K, and I don't give a crap about 4K.... 1080p is just fine.
 
That's assuming the extra 8 GB can be addressed by the physical GPU. AMD could artificially limit the addressable address space to 8 GB.

So, a quick class on how these are wired. Each GPU memory channel is 32 bits wide and has a single GDDRAM chip soldered to it. That 32-bit connection is split into two 16-bit lanes such that the GPU can access two separate WORDs (16-bit values) per clock cycle. Currently GDDR6 memory has a maximum capacity of 16Gb (2GB) per chip, thus a 128-bit bus equals four 32-bit chips equals 8GB of GDDRAM. GDDR also has a special low-performance mode called "clamshell", in which the GPU instead addresses four BYTEs (8-bit values) per clock cycle, allowing double the number of GDDRAM chips at half the bandwidth each.

So not only would you be paying twice the amount of money for GDDRAM on the product, but the product would consume more power and have the same memory performance. This configuration mode exists as a way to double capacity for datacenter and professional-class cards; using it on a lowly entry-level card with a measly 128-bit interface is dubious at best. It's not impossible, just something the market has to demonstrate a demand for.
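
To put rough numbers on the clamshell trade-off (assuming 2GB GDDR6 chips at a typical 20Gbps effective rate; those figures are assumptions, not any specific card's confirmed spec):

```python
# Normal vs clamshell wiring on a 128-bit bus: clamshell doubles the chip
# count and capacity, but total bus width (and so bandwidth) is unchanged.
BUS_BITS = 128
GB_PER_CHIP = 2        # 16 Gbit, the current GDDR6 maximum density
GBPS_PER_PIN = 20      # assumed effective per-pin data rate

def layout(clamshell):
    chips = (BUS_BITS // 32) * (2 if clamshell else 1)  # 2 chips/channel in clamshell
    capacity = chips * GB_PER_CHIP                      # GB
    bandwidth = BUS_BITS * GBPS_PER_PIN / 8             # GB/s, same either way
    return chips, capacity, bandwidth

print(layout(False))  # (4, 8, 320.0)  -> the stock 8GB configuration
print(layout(True))   # (8, 16, 320.0) -> twice the chips, same 320 GB/s
```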

People keep trying to treat these entry-level cards like they aren't... and it's kinda weird. Guys, if you see a 128-bit memory bus on a product made in the past decade, immediately put it in the entry-level category.

How does this relate to a driver artificially limiting the addressable address space? The board could have 1TB of GDDRAM with a proper bus layout, and the driver or firmware could still limit the reported space to 2GB if they wanted. Not sure this relates to my comment, which is not specific to hardware.
 
The GPU driver doesn't do anything remotely like that; WDDM 2, along with whatever user-mode driver (not the GPU kernel driver) is in play, is what manages memory.

https://learn.microsoft.com/en-us/windows-hardware/drivers/display/gpu-virtual-memory-in-wddm-2-0

Anyone with a dGPU is going to be in GpuMmu mode, since IoMmu is for iGPUs.

https://learn.microsoft.com/en-us/windows-hardware/drivers/display/gpummu-model

WDDM 2 maps page segments into the address space; from the program's point of view it's a single continuous memory range. The application doesn't load anything into VRAM itself; it just loads resources into memory. The framework then determines which resources should be in VRAM and maps those VRAM segments into the virtual address space. Resources that can live in system memory have their segments mapped into that same linear address space as well. 32-bit programs obviously have limitations and have to use page-frame flipping to handle those resources; 64-bit programs have a ridiculously large virtual address space.

https://learn.microsoft.com/en-us/w...a-and-later-display-driver-model-architecture

This is why it's virtually impossible for modern games to "run out of graphics memory". Available graphics memory is GPU VRAM + 50-75% of system memory. WDDM 2 and beyond treat it all as one giant pool and map resources around as they believe necessary. GPUs can access system memory over the PCIe bus, but it's much slower than VRAM, and having to load those resources into VRAM also takes time; this shows up as stuttering as the frame is delayed a few ms to execute the transfer.
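
A toy model of that behavior, if it helps; the pool sizes and per-GB penalty are invented for illustration, and WDDM's real residency logic is far more involved:

```python
# Sketch of the shared-pool idea: resources beyond VRAM spill into system
# memory and cost PCIe transfer time when touched, which shows up as stutter.
VRAM_GB = 8
SHARED_GB = 16 * 0.5      # assume WDDM exposes ~half of 16GB system RAM
SPILL_MS_PER_GB = 6       # invented per-GB stall cost for spilled data

def frame_ms(working_set_gb, base_ms=12.5):
    if working_set_gb > VRAM_GB + SHARED_GB:
        raise MemoryError("even the shared pool is exhausted")
    spilled = max(0.0, working_set_gb - VRAM_GB)
    return base_ms + spilled * SPILL_MS_PER_GB  # stall grows with the spill

for ws in (6, 8, 9, 11):
    print(f"{ws} GB working set -> {frame_ms(ws):.1f} ms/frame")
# 6-8 GB hold 12.5 ms (80 FPS); past 8 GB the game doesn't crash,
# it just stutters more, exactly as described above.
```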
 
For what GPUs cost today...

NO card should have less than 16GB of VRAM.

I don't care what resolution people expect to play at... anything less than 16GB is a scam!!

Both AMD and Nvidia are not only shorting consumers with 8GB rubbish but scamming consumers with the 9060 XT's 8GB and 16GB packaging, rolling unsuspecting customers into buying the wrong product!!

Same with Nvidia's 5060 Ti 8GB and 16GB BS..

There should be VERY CLEAR distinctions between the 8GB crap and the 16GB cards, not the same nameplate with different VRAM between products!!

To some degree, while not illegal, it's surely disgusting anti-consumer marketing!!

I would argue it's borderline deceptive marketing, which is illegal!!
 

Drivers most definitely can define their own address space and map to physical memory regions. That is the whole point of virtual address spaces, and virtual address spaces are 100% compatible with GpuMmu.

https://learn.microsoft.com/en-us/windows-hardware/drivers/gettingstarted/virtual-address-spaces

GPU virtual address space management. For engines running in the virtual mode, a GPU virtual address needs to be explicitly assigned to an allocation before it can be accessed virtually. For this purpose, VidMm offers the UMD services to reserve or free GPU virtual addresses and to map specific allocation ranges into the GPU virtual address space of a process. These services are flexible and allow the UMD fine grain control over a process GPU virtual address space.

From

https://learn.microsoft.com/en-us/windows-hardware/drivers/display/gpummu-model
 
You guys have definitely gone off course with this conversation. So what does address space have to do with the 9060 XT at 1080p?

Someone suggested AIB partners could just slap 16GB of RAM onto the board to get higher frame rates or better performance at higher resolutions. I simply suggested AMD could prevent that, and it has spiraled into a tangent, as usual.
 