News: Nvidia Reportedly Cancels RTX 4090 Ti, Plans 512-bit Bus Next-Gen Flagship

InvalidError

Titan
Moderator
Nvidia going: "Oh crap, we nerfed memory bus widths silly in the 4000-series, we've got to go back up for our pricey GPU lineup to make sense, not be bandwidth-starved and have options for fitting memory!"

To be at least a little fair, GDDR6 cost about $12/GB back when the RTX 4000 series was being finalized. Now that the cost is down to $3-4/GB, there is no excuse for being so stingy with the next-gen memory load-out.
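Putting rough numbers on that swing (a quick sketch using the per-GB figures above; actual contract pricing is obviously not public):

```python
# Rough VRAM bill-of-materials cost at the per-GB prices mentioned above:
# ~$12/GB when the RTX 4000 series was finalized vs ~$3.50/GB now.
# These are ballpark spot prices, not Nvidia's actual contract pricing.
for capacity_gb in (8, 12, 16, 24):
    then_cost = capacity_gb * 12.0
    now_cost = capacity_gb * 3.5
    print(f"{capacity_gb:>2} GB: ~${then_cost:.0f} then vs ~${now_cost:.0f} now")
```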
 
  • Like
Reactions: FunSurfer
we've got to go back up for our pricey GPU lineup to make sense, not be bandwidth-starved and have options for fitting memory!"
If you don't think Nvidia would raise prices because of those changes...

They'd price them higher and say, "well, the wider bus and more memory cost more."

Back to a $2000 halo product, probably.
So bandwidth matters after all?
They know it does.

For the 4060 Ti, they even stated it was a downgrade vs. the 3060 Ti in most things, but showed real gains in DLSS 3 situations (which is true, but again, only like 1% of games support it...)

They likely just want people to pay more for the higher tiers.
 
  • Like
Reactions: PEnns

tamalero

Distinguished
Oct 25, 2006
Nvidia going: "Oh crap, we nerfed memory bus widths silly in the 4000-series, we've got to go back up for our pricey GPU lineup to make sense, not be bandwidth-starved and have options for fitting memory!"

To be at least a little fair, GDDR6 cost about $12/GB back when the RTX 4000 series was being finalized. Now that the cost is down to $3-4/GB, there is no excuse for being so stingy with the next-gen memory load-out.
They probably have tons of inventory that they have to sell at a price point where they don't take losses.
 
For the 4060 Ti, they even stated it was a downgrade vs. the 3060 Ti in most things, but showed real gains in DLSS 3 situations (which is true, but again, only like 1% of games support it...)
No, Nvidia absolutely did not say it was a downgrade versus the 3060 Ti. At most it said, in briefings, that there might be edge cases where it could end up slightly slower. But overall, without DLSS 3, it's typically 10–15 percent faster than the 3060 Ti, at resolutions and settings that matter for either GPU. (Meaning, not 4K ultra.)

What Nvidia did do was to heavily push the DLSS 3 Frame Generation narrative. Yes, uptake is relatively swift, especially compared to the original DLSS. But right now there are still only 39 released games that use DLSS 3. And yet Nvidia's marketing material for RTX 4060 Ti used 18 games for benchmarks, 11 of which were DLSS 3 enabled.
 

InvalidError

Titan
Moderator
Back to a $2000 halo product, probably.
When you pass the $1000 mark, you are usually dealing mostly with people who aren't going to care much if their GPU-for-the-0.1% costs $2000 or $3000.

For the more cost-sensitive people, last I heard, Intel was aiming for roughly twice the performance per dollar with Battlemage. Even if Intel delivers only half of that, it'll still give AMD and Nvidia a serious run for their money at ~$350 for performance generally exceeding the 4070 (by the raw numbers, the A770 should be able to trade blows with the 4070 Ti, so there's a lot of seemingly untapped potential left), backed by 16GB of VRAM on a 256-bit bus. Next year should be interesting, unless Battlemage gets delayed into 2025 or poofs out of existence.
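For scale, peak memory bandwidth is just bus width times data rate; a minimal sketch, assuming typical GDDR6 speeds since Battlemage's memory spec isn't public:

```python
# Peak memory bandwidth = (bus width / 8 bits per byte) * data rate.
# 16-20 Gbps are typical GDDR6 data rates; Battlemage's actual memory
# spec is not public, so these rates are assumptions.
def bandwidth_gb_s(bus_bits: int, data_rate_gbps: float) -> float:
    return bus_bits / 8 * data_rate_gbps

for rate in (16, 18, 20):
    print(f"256-bit @ {rate} Gbps -> {bandwidth_gb_s(256, rate):.0f} GB/s")
# For comparison, a 192-bit bus @ 21 Gbps (4070 Ti) works out to ~504 GB/s.
```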
 
  • Like
Reactions: khaakon
When you pass the $1000 mark, you are usually dealing mostly with people who aren't going to care much if their GPU-for-the-0.1% costs $2000 or $3000.

For the more cost-sensitive people, last I heard, Intel was aiming for roughly twice the performance per dollar with Battlemage. Even if Intel delivers only half of that, it'll still give AMD and Nvidia a serious run for their money at ~$350 for performance generally exceeding the 4070 (by the raw numbers, the A770 should be able to trade blows with the 4070 Ti, so there's a lot of seemingly untapped potential left), backed by 16GB of VRAM on a 256-bit bus. Next year should be interesting, unless Battlemage gets delayed into 2025 or poofs out of existence.
Do you mean a "performance doubled" B770? Because the A770 16GB is nowhere near the 4070 Ti in performance right now, and theoretical performance is ~20 teraflops versus ~40 teraflops. But yeah, if Intel can do Battlemage with 2x the A770 performance for $350, that would be a nice option. Right now, the 4070 Ti is about double the performance of the A770, at nearly double the price.
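For reference, the usual back-of-the-envelope FP32 figure is 2 ops per FMA times shader count times clock; a minimal sketch, with shader counts from published specs and the boost clocks as rough assumptions:

```python
# Theoretical FP32 throughput = 2 ops per FMA * shader count * clock.
# Shader counts are the published specs; the boost clocks are approximate
# assumptions, so treat the results as ballpark figures.
def fp32_tflops(shaders: int, clock_ghz: float) -> float:
    return 2 * shaders * clock_ghz / 1000  # 2 * count * GHz = GFLOPS; /1000 -> TFLOPS

print(f"A770 (4096 shaders @ ~2.4 GHz):    ~{fp32_tflops(4096, 2.4):.1f} TFLOPS")
print(f"4070 Ti (7680 shaders @ ~2.6 GHz): ~{fp32_tflops(7680, 2.6):.1f} TFLOPS")
```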
 

Deleted member 2950210

Guest
The RTX 4090 is proving to be one of the best purchases ever! And it gets even better by the day!
 

Kamen Rider Blade

Distinguished
Dec 2, 2013
When you pass the $1000 mark, you are usually dealing mostly with people who aren't going to care much if their GPU-for-the-0.1% costs $2000 or $3000.

For the more cost-sensitive people, last I heard, Intel was aiming for roughly twice the performance per dollar with Battlemage. Even if Intel delivers only half of that, it'll still give AMD and Nvidia a serious run for their money at ~$350 for performance generally exceeding the 4070 (by the raw numbers, the A770 should be able to trade blows with the 4070 Ti, so there's a lot of seemingly untapped potential left), backed by 16GB of VRAM on a 256-bit bus. Next year should be interesting, unless Battlemage gets delayed into 2025 or poofs out of existence.
Intel needs to get their driver situation sorted out if they plan on being competitive.

That means it can't crash, it needs to work consistently, and it needs to work with the same range of games that an Nvidia/AMD driver does.

Customers don't care about excuses, they just want results.
 

RedBear87

Commendable
Dec 1, 2021
Nvidia going: "Oh crap, we nerfed memory bus widths silly in the 4000-series, we've got to go back up for our pricey GPU lineup to make sense, not be bandwidth-starved and have options for fitting memory!"
Except the RTX 4090 was the only GPU where they didn't nerf the memory bus; 384-bit was the same memory bus as the 3090/3090 Ti, so I'm not really seeing the logic here. Rather, if this report is accurate, we can deduce two things:
1) AMD won't even try to release an improved RDNA3 GPU this generation that could vaguely challenge the 4090 for the performance crown.
2) Nvidia fears that AMD could actually aim for the top tier in the next generation.
 
  • Like
Reactions: KyaraM

oofdragon

Distinguished
Oct 14, 2017
You know what... who cares about the 4000/7000 series? There isn't a single game the 3060 (even the 2060S) can't play at max settings at 1080p, around 60fps, and it's dirt cheap at around $200. The same goes for the 3070 at 1440p and the 3080 at 4K, the latter now around $400. For as long as the PS5 sticks around, that will be the case, so by the time consumers actually start needing an upgrade there will be the 6000/9000 series, and by then the 4000 series will be super cheap as well, and so on.
 
  • Like
Reactions: colossusrage

InvalidError

Titan
Moderator
Except the RTX 4090 was the only GPU where they didn't nerf the memory bus; 384-bit was the same memory bus as the 3090/3090 Ti, so I'm not really seeing the logic here.
That is also only 20% more than the 3080 Ti for a ~70% faster GPU that should be more than twice as fast based on raw numbers, which hints at a scaling bottleneck: bigger caches can only take you so far when dealing with 8+ GB of active data. Though my comment wasn't only about bandwidth but also about memory load-out flexibility: with 384 bits, it is either 12 or 24GB with standard chip sizes, which is problematic when nearly everyone agrees that anything costing over $400 should have at least 16GB at current GDDR6 prices, and most would agree that 128 bits isn't going to yield enough bandwidth to fully leverage 16GB. Everything from the 50-tier through the 80-tier needs a bump for memory bandwidth and sizes to make sense, and the 90-tier may need a bump on top of that to maintain its clear halo-tier status.

Bumping the 90-tier to 512 bits would give Nvidia a lot more room to scale the memory interface and VRAM size smoothly across the product stack, instead of the weirdness where lower-end models sometimes have more VRAM than higher-end models because those are the only options available and the smaller one isn't enough.
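To make the chip-count math explicit, here's a minimal sketch, assuming the standard 32-bit GDDR6 chip interface and the current 1GB/2GB chip densities (clamshell mode, which doubles capacity, is ignored):

```python
# GDDR6 chips have a 32-bit interface and commonly ship in 1 GB (8 Gb)
# and 2 GB (16 Gb) densities, so the bus width fixes the chip count and
# the chip count fixes the capacity options (ignoring clamshell mode,
# which doubles capacity by putting two chips on each 32-bit channel).
CHIP_DENSITIES_GB = (1, 2)

def capacity_options_gb(bus_width_bits: int) -> list[int]:
    chips = bus_width_bits // 32
    return [chips * density for density in CHIP_DENSITIES_GB]

for bus in (128, 192, 256, 384, 512):
    print(f"{bus:>3}-bit -> {capacity_options_gb(bus)} GB")
```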
 
  • Like
Reactions: palladin9479

blppt

Distinguished
Jun 6, 2008
What the world really needs is a PowerVR-type revolution, in which a highly efficient next-gen GPU takes us away from needing a portable nuclear plant and a liquid nitrogen cooling system to play the latest ray-tracing games.
 
  • Like
Reactions: adbatista

spongiemaster

Honorable
Dec 12, 2019
I'm highly skeptical of this leak. While it may be true that the 4090 Ti has been put on the shelf for now, I can't see any scenario where Jensen's ego would allow Nvidia not to release a GPU faster than the 4090 in 2023 or 2024 if next gen has really been pushed back to 2025. It's simply impossible to see that happening.
 

InvalidError

Titan
Moderator
What the world really needs is a PowerVR-type revolution, in which a highly efficient next-gen GPU takes us away from needing a portable nuclear plant and a liquid nitrogen cooling system to play the latest ray-tracing games.
Real RT will be ludicrously compute-intensive and by extension power-hungry no matter who makes the hardware. If you don't want the hefty compute and power cost, stick to approximated RT.
 
You know what... who cares about the 4000/7000 series? There isn't a single game the 3060 (even the 2060S) can't play at max settings at 1080p, around 60fps, and it's dirt cheap at around $200.
Yeah, I'm sticking with my RTX 2060 (6GB) for another gen, I guess. The RTX 4060 is too disappointing to be worth the cost (not even twice as fast and just 2GB more VRAM after over 4 years!). It's not the end of the world to drop settings from Very High to High for the latest games.
 

RedBear87

Commendable
Dec 1, 2021
That is also only 20% more than the 3080 Ti for a ~70% faster GPU that should be more than twice as fast based on raw numbers, which hints at a scaling bottleneck: bigger caches can only take you so far when dealing with 8+ GB of active data. Though my comment wasn't only about bandwidth but also about memory load-out flexibility: with 384 bits, it is either 12 or 24GB with standard chip sizes, which is problematic when nearly everyone agrees that anything costing over $400 should have at least 16GB at current GDDR6 prices, and most would agree that 128 bits isn't going to yield enough bandwidth to fully leverage 16GB. Everything from the 50-tier through the 80-tier needs a bump for memory bandwidth and sizes to make sense, and the 90-tier may need a bump on top of that to maintain its clear halo-tier status.

Bumping the 90-tier to 512 bits would give Nvidia a lot more room to scale the memory interface and VRAM size smoothly across the product stack, instead of the weirdness where lower-end models sometimes have more VRAM than higher-end models because those are the only options available and the smaller one isn't enough.
The big assumption here is that Nvidia is actually interested in bumping the memory interface across the whole line, instead of focusing just on the 5090. In my opinion, that's only going to happen if they believe AMD is going to do the same. People tend to forget it, but it was AMD with RDNA2 that started this gimmick of adding a big on-die cache ("Infinity Cache") while cutting down the memory interface (RX 5600 XT: 192-bit memory bus vs. RX 6600 XT: 128-bit memory bus); Nvidia happily joined the club when it noticed that it kind of worked, and back then it didn't seem so controversial.

Also, I'm not at all sure that having more room to scale memory interfaces would remove the weirdness of lower-end models sometimes having more VRAM than a higher-end model. The last time Nvidia used a 512-bit memory bus for its top-of-the-line GPUs, back in 2009, it had models of the GTS 240 and 250 with 1GB of VRAM (both on a 256-bit interface) and the GTX 260 with 896MB of VRAM (448-bit interface)... Nvidia probably doesn't really consider it an issue per se.
 
  • Like
Reactions: Matt_ogu812

InvalidError

Titan
Moderator
The big assumption here is that Nvidia is actually interested in bumping the memory interface across the whole line, instead of focusing just on the 5090. In my opinion, that's only going to happen if they believe AMD is going to do the same.
Nvidia allegedly barring its AIBs from making Intel GPUs tells me Nvidia might be more worried about Intel than AMD. If Intel delivers on the hype of the B770 being twice as fast as the A770 for about the same price, Nvidia will have to compete against a ~$350 card with 16GB of VRAM on a 256-bit bus that could give the 4070 Ti a run for its money. Nvidia will have to keep that in mind for whatever it plans to launch in the 60-80 tier range.
 

Matt_ogu812

Honorable
Jul 14, 2017
Could there be some caution here because of all the speculation about a downturn in the economy/Wall Street that never happened... YET?
With a crucial presidential election coming up in the US, who knows how that will pan out, regardless of one's political leaning.
I don't think any of the manufacturers would want to be stuck with all their 'best of the best' graphics cards in warehouses if the economy tanks hard. It's bad enough that there is a downturn in the PC market now, but compound that with a DoA economy and you'd have the 'Perfect Storm'.