AMD has absolutely given "effective bandwidth" numbers on RDNA 2/3 chips. And again, it's not just marketing, it's engineering. Because when people look at specs and see a drop in bandwidth, they get worried. Looking only at bus width or bandwidth is as misguided as looking only at theoretical teraflops.
The RX 6600 XT has fewer cores at higher clocks to get 10.6 TFLOPS, while the RX 5700 XT has 9.8 TFLOPS. The point isn't that they have similar compute; it's that the 6600 XT has 256 GB/s of bandwidth while the 5700 XT has 448 GB/s. How can it deliver similar performance with 43% less bandwidth? Infinity Cache. How does Nvidia deliver a big generational boost in performance with the RTX 4090 over the RTX 3090 Ti, even though they have the same GDDR6X configuration and bandwidth? With a much bigger L2 cache.
You can call BS on Nvidia's pricing. You can question how good DLSS 3 Frame Generation really is. You can complain about the lack of VRAM capacity. But the "effective memory bandwidth" figures are probably the least problematic aspect of the GPUs. The only real issue is that getting more effective bandwidth from a narrower bus means it's possible to end up with less VRAM because there aren't as many memory channels to go around.
Proof:
AMD:
[Five attached screenshots of AMD spec pages (attachments 247, 249, 250, 251, 252)]
Honestly, I appreciate having the "effective bandwidth" data. It's AMD and Nvidia saying, in effect, this is the average hit rate of our L3/L2 caches. AMD didn't publish effective bandwidth data on the earlier RDNA 2 GPUs. Actually, it's a bit hit and miss right now. RX 6500 XT, RX 6700 10GB, and RX 6700 XT list effective bandwidth (along with the above five cards). RX 6900 XT, RX 6800 XT, RX 6800, RX 6600 XT, RX 6600, and RX 6400 do not.
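For anyone curious how an average hit rate turns into an "effective bandwidth" figure, here's a back-of-envelope model in Python. This is my own simplification, not the exact methodology either vendor publishes, and the hit rates in the loop are illustrative values, not measured data:

```python
# Back-of-envelope: if a fraction `hit_rate` of memory requests are served by the
# on-die cache (Infinity Cache / big L2), only the misses have to touch GDDR, so
# the same DRAM bus can sustain roughly dram_bw / (1 - hit_rate) of request traffic.
# Ignores cache bandwidth limits and write traffic -- a sketch, not the vendors' formula.

def effective_bandwidth(dram_bw_gbs: float, hit_rate: float) -> float:
    """Apparent bandwidth (GB/s) given raw DRAM bandwidth and an average cache hit rate."""
    return dram_bw_gbs / (1.0 - hit_rate)

# RX 6600 XT: 256 GB/s of GDDR6 behind Infinity Cache. A ~43% average hit rate
# would already be enough to match the RX 5700 XT's 448 GB/s.
for hr in (0.30, 0.43, 0.50, 0.60):
    print(f"hit rate {hr:.0%}: ~{effective_bandwidth(256, hr):.0f} GB/s effective")
```

Hit rates drop as resolution goes up, which is why the quoted "effective" numbers are averages rather than guarantees.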
The issue in that screenshot is likely just a bug, not a case of genuinely running out of VRAM. Borderlands 3 can run on cards with as little as 2GB of VRAM, and you can find plenty of reports of PC gamers getting the same error, even with cards like a 2080 Ti or a 3090.

Microsoft is actually a good example of why 8GB of VRAM is insufficient.
The Series S is a console in big trouble because it only has 8GB of VRAM.
And this idea that developers are to blame, or that it's due to poor optimization, is baloney.
A good example is Baldur's Gate 3, which has been delayed on Xbox because Larian Studios cannot allocate enough graphics memory on the Series S. Larian is a studio with 30 years of experience; they know how to make games. Developers just can't develop a game for one audience that has 16GB of VRAM and another audience that has 8GB of VRAM.
So developers develop for the biggest market, and that's the PS5, a console that happens to have blazing fast custom I/O chips, custom decompression chips, and 16GB of GDDR6. The PS5 can pull in assets and decompress textures like no other machine can. PCs are struggling to keep up; the bare minimum has been set at 16GB of VRAM, and anything below that will struggle for a whole generation.
Anandtech used to be my go-to as they seemed to be the least biased, but Ryan Smith has driven that site into the ground.
Yeah, I agree. I would have liked an apples-to-apples comparison, with DLSS 2 or stock settings used, to see the real difference between these cards and older ones! DLSS 3/Frame Generation is an odd mix. I don't think I'd buy a 4xxx card purely based on that. A 15% increase doesn't seem that much given the uplift the higher-end cards have had.

I think gamers should base their buying decision on these 40-series SKUs only on pure rasterization performance. DLSS 3 makes the comparison less appealing, and it can also be slightly misleading.
On top of that, previous-gen RTX cards don't support DLSS 3, only DLSS 2. So for a fair comparison for an upgrade, just look at raw rasterization performance in games.
In 5 years these consoles will be 8 years old and new consoles will most likely have arrived. And as I wrote, developers are just starting to fully utilize their RAM and all their CPU cores. Games already look decent on current consoles, and a huge chunk of the market is bound by them, so games will be optimized to look decent enough, unless developers go "F*ck you" like they did with Redfall, which doesn't even look that graphically pleasing, tbh.

When a console is new, it is current, but as it ages, it quickly becomes outdated. PCs are able to move with the latest technology; consoles can't, because they are a closed architecture. The hardware it comes with when you buy it is all it's ever going to have. Nothing is upgradeable. Five or six years from now, you're going to be looking at the quality of the games on your console, comparing them to the latest PC versions, and thinking your console is crap.
Again, you're missing the point of what I was saying. The 4090 has a lot more compute, but if it didn't have more (effective) bandwidth, the extra compute would be wasted. You need to balance more compute with more bandwidth. If you had an RTX 4090 with no L2 cache and a 128-bit memory interface, the cores would all end up waiting for data and you'd probably have something around RTX 3060 performance. (There's a rough bytes-per-FLOP sketch of that balance below.)

Fair enough, I have never seen that. Perhaps because I did not peruse AMD's website. I never saw it on any of their marketing slides, though. Also, how can you neglect the fact that the 4090 has a much higher clock speed than the 3090 Ti? It is not at all just because of the increased L2 cache.
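To put rough numbers on the compute-vs-bandwidth balance mentioned above, here's a quick bytes-per-FLOP comparison. The peak spec figures are quoted from memory, so treat them as approximate:

```python
# Crude balance check: how many bytes of raw DRAM bandwidth back each FP32 FLOP.
# Peak-spec numbers from memory (boost TFLOPS, raw GDDR6X bandwidth) -- approximate.
cards = {
    "RTX 3090 Ti": (40.0, 1008),  # ~40 FP32 TFLOPS, 1008 GB/s
    "RTX 4090":    (82.6, 1008),  # ~82.6 FP32 TFLOPS, same 1008 GB/s
}

for name, (tflops, bw_gbs) in cards.items():
    # GB/s divided by GFLOP/s gives bytes per FLOP
    bytes_per_flop = bw_gbs / (tflops * 1000)
    print(f"{name}: {bytes_per_flop:.4f} DRAM bytes per FLOP")

# The 4090 gets roughly half the DRAM bytes per FLOP of the 3090 Ti; the much
# larger L2 cache is what keeps that from turning into starved shader cores.
```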
It's because I left! :-D

What's happened at Anandtech is really sad. Their deep dives into and exploration of the actual chip technology used to be the best source around.
Someone drank the Kool-Aid. They just came up with BS "effective bandwidth" numbers to try to hide the fact that its real memory bandwidth is far less than its predecessor's.
The 4060 non-Ti might actually be a 50-tier card, as it's actually going to perform worse in some applications than a 3060 due to the fact that it has less stuff.

But calling these xx50-level GPUs is pure horse...
A better question may be: does the envelope need to be pushed any further? IMO, at this point, graphics are already beyond what I could care for in games, and I'd be 10X more interested in seeing novel concepts than pushing graphics any further.

Assuming we still have local gaming, will developers find a benefit to increased VRAM at a faster rate or a slower rate? VRAM capacity gains have slowed relative to previous generations; it wasn't unusual to see capacity double gen-on-gen back in the late 90s/early 2000s. If capacity growth is slowing, how will developers keep pushing the envelope?
When your choice of GDDRx chips is between 2GB and 2GB, your only options for a given bus width are 1X or 2X the amount of memory. 6GB was already known to be too little from the 2000-series, which leaves 12GB as the only other option on a 192-bit bus, short of mixing 1GB and 2GB chips. Mixing VRAM chips makes little sense when the cost difference between 1GB and 2GB is less than $2 and it would introduce a bunch of unnecessary complications.

My 2 cents: I'm glad to see $299 on the RTX 4060. However, the issue of VRAM still bugs me. Particularly, the fact that Nvidia can put more VRAM on slower cards shows they don't know what they're doing when it comes to planning an entire generation of video cards.
This is the reason:

A better question may be: does the envelope need to be pushed any further? IMO, at this point, graphics are already beyond what I could care for in games and I'd be 10X more interested in seeing novel concepts than pushing graphics any further.
So, you're saying a 192-bit bus simply isn't cutting it. Good point.

A better question may be: does the envelope need to be pushed any further? IMO, at this point, graphics are already beyond what I could care for in games and I'd be 10X more interested in seeing novel concepts than pushing graphics any further.
Not really. More like 8GB isn't enough for the amount of detail a 19 FP32 TFLOPS GPU can push, and 12GB isn't an option on 128 bits with currently available GDDRx chips. Nvidia needed to make the 4060s 192-bit to hit the 12GB sweet spot they should have been at, regardless of whether the GPU needed the bandwidth.

So, you're saying a 192-bit bus simply isn't cutting it. Good point.
Absolutely! I'm saying 256-bit is where the 4070 should be!

Not really. More like 8GB isn't enough for the amount of detail a 19 FP32 TFLOPS GPU can push, and 12GB isn't an option on 128 bits with currently available GDDRx chips. Nvidia needed to make the 4060s 192-bit to hit the 12GB sweet spot they should have been at, regardless of whether the GPU needed the bandwidth.
6GB is enough for some games, but not all of them. 12GB is not too much to ask for on a $299 GPU in 2023.

Well, as far as I am concerned, the 60-series cards were always 1080p cards, and 6GB is enough for 1080p.
But $299 for a 60-series card is what I am not OK with.
Reduce $100 from the 4060, 4060 Ti, and 4070 and I am happy.
The 4080 should be $1,000; only the 4070 Ti pricing makes sense.
When you have a 19 FP32 TFLOPS GPU, 6GB isn't enough to crank details as high as the GPU should be comfortably capable of at 1080p. It already wasn't enough for the RTX 2060 to do everything it may have been capable of. Heck, even the 1660/Super/Ti got stiffed a bit with only 6GB.

Well, as far as I am concerned, the 60-series cards were always 1080p cards, and 6GB is enough for 1080p.
Agreed. The 20-series is where Nvidia started doing things that didn't make sense. Hell, even the naming scheme is dumb. Come up with something new. Meanwhile, Intel calls their parts 13900KS-F-whatever-the-f. Something is terribly wrong, and it's not just computer parts.

When you have a 19 FP32 TFLOPS GPU, 6GB isn't enough to crank details as high as the GPU should be comfortably capable of at 1080p. It already wasn't enough for the RTX 2060 to do everything it may have been capable of. Heck, even the 1660/Super/Ti got stiffed a bit with only 6GB.
Exactly this.

Yeah, it's fundamentally a problem/choice with the memory bus width. With a 128-bit bus, you can do 8GB or 16GB (the latter via clamshell). Would have been nice if Nvidia had done 128-bit on AD107, 192-bit on AD106, 256-bit on AD104, 320-bit on AD103, and 384-bit on AD102. But it didn't, opting instead to save on costs and reduce the VRAM capacities at basically every level except the RTX 4090.
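The capacity options fall straight out of the bus width, since each 32-bit channel carries one GDDR6/6X package (two in clamshell mode) and 2GB is the practical density right now. A quick sketch of that arithmetic; the helper function is just for illustration, and the density and clamshell doubling are the only assumptions:

```python
# One GDDR6/6X package per 32-bit channel, two per channel in clamshell mode.
# With 2GB (16Gb) packages as the practical density, capacity per bus width is:
def vram_options(bus_width_bits: int, chip_gb: int = 2) -> tuple[int, int]:
    channels = bus_width_bits // 32
    normal = channels * chip_gb     # one package per channel
    return normal, normal * 2       # clamshell doubles it

for bus in (128, 192, 256, 320, 384):
    n, c = vram_options(bus)
    print(f"{bus:3d}-bit: {n} GB or {c} GB (clamshell)")
# 128-bit -> 8/16 GB, 192-bit -> 12/24 GB, 256-bit -> 16/32 GB,
# 320-bit -> 20/40 GB, 384-bit -> 24/48 GB
```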
The easier thing for Nvidia would have been to price the GPUs more appropriately. The RTX 4080 costs 71.5% more than the RTX 3080 and offers about the same 71.5% increase in performance. That means we didn't get anything new from the 40-series except a more expensive option for more performance. I would have bought a 4080 if it were $699, maybe even $799. Instead, they gave me a 4070 Ti for $799 and less-than-desirable memory specs. No thanks.

I can't agree with you enough here. I think buyers would have been far more forgiving of prices on the 70/80-class cards too if Nvidia had done this.
I don't disagree. But if Nvidia was so intent on these prices, whether out of need or want (likely the latter), then at least higher-capacity RAM could have been used as a reason for the price increase. As it stands now, it looks like Nvidia is charging more for less, or less than we expected for a generational leap. But at the end of the day, you're not wrong, IMHO.

The easier thing for Nvidia would have been to price the GPUs more appropriately. The RTX 4080 costs 71.5% more than the RTX 3080 and offers about the same 71.5% increase in performance. That means we didn't get anything new from the 40-series except a more expensive option for more performance. I would have bought a 4080 if it were $699, maybe even $799. Instead, they gave me a 4070 Ti for $799 and less-than-desirable memory specs. No thanks.
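For what it's worth, the 71.5% point above is easy to sanity-check as a value calculation: if price and performance scale by the same factor, performance per dollar doesn't move at all. A tiny sketch, using the launch MSRPs ($699 and $1,199, quoted from memory) and the ~71.5% uplift cited above:

```python
# If price and performance scale by the same factor, perf-per-dollar stays flat.
msrp_3080, msrp_4080 = 699, 1199        # launch MSRPs in USD (from memory)
price_ratio = msrp_4080 / msrp_3080     # ~1.715, i.e. +71.5%
perf_ratio = 1.715                      # the ~71.5% performance uplift cited above

print(f"price: +{(price_ratio - 1) * 100:.1f}%, perf: +{(perf_ratio - 1) * 100:.1f}%")
print(f"perf per dollar vs RTX 3080: {perf_ratio / price_ratio:.2f}x")  # ~1.00x, no gain
```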
I'm more on the side of the viewpoint that no consumer GPU should cost $1,999. That's prosumer Titan territory of pricing, and the flagship, fastest gaming GPU on the planet should stick to around $999 or less. Unfortunately, I suppose the value of the USD is going to crap. What's the goal of The Fed, btw? Please remind us. Since the beginning of The Fed, we've gone through the Great Depression and two World Wars and lost our very soul (mind, will, and emotion) to some group of satanic occultists who literally utilize technology to control the human mind. That's just my schizophrenic viewpoint of the matter.
So humble, lol. No really, we are glad to have you, Jarred! It's been a pleasure reading your stuff and interacting with you online.

It's because I left! :-D
Or maybe because Anand left. But probably me!