News Zotac accidentally lists RTX 5090, RTX 5080, and RTX 5070 family weeks before launch — accidental listing seemingly confirms the RTX 5090 with 32GB...

Waiting to see the "less is more" slogan again... even more so for the Chinese-market D version selling at the same price as the regular card 😀

And is it just me, or is it the same for everyone else: for that headline picture, my first thought was "that 12VHPWR adapter bending that tightly isn't going to end well"...
 
They'll never give more than 16GB on a gaming card... they'll try to sell consumers the workstation cards ("4090/5090") or a Quadro with other features instead.
Nvidia's gaming division doesn't make the money. They only want to keep the top-dog spot.

It's why I have the 4060 😀 It can play some games at quad HD, if I don't have to drop back to full HD.
A card like the 1080 Ti will never come back!
 
I hope it's a massive increase in gaming performance, so I can feel bad about having an RTX 4090 but look forward to the 6000-series release, when I can legitimately argue I should upgrade.
I'm always looking for an excuse to upgrade, but it would have to be an insane increase in gaming performance to make me feel bad about having a 4090. Even then, I still couldn't justify upgrading from a 4090. Of course, it depends on what you're playing and at what resolution. I play on a 1440p 240Hz monitor and a 3440x1440 120Hz monitor, and I would probably need a 4K 240Hz monitor to justify a 5090.
 
Very much disagree with your "solution" to insufficient VRAM. If the only thing holding back performance is a lack of VRAM, then the card doesn't have enough. VRAM capacity should match the capability of the GPU.

There's an increasing number of instances of VRAM capacity being the limiting factor, which consumers shouldn't stand for. If you're buying a $200-or-less card, sure, expect limitations, but a card at that level is unlikely to run things at a playable frame rate that would also use up all of the VRAM. Intel has already put the B580 forward to show that 12GB at $250 can be done, and nobody should be willing to accept less.

Realistically, the 4080 Super was just a price reduction due to backlash.

Intel is likely not making money with the B580, so I'm not sure that's a great example. This is more about building goodwill with gamers.

My points were about the 16GB 5080 as compared to the 32GB 5090. I understand what you're saying, but it doesn't change the GPU design they've gone with. It's a smaller die, and yes, it should be the 5070 die at a more reasonable price, but the 5070 Ti 16GB should at least be closer to that middle ground. As for 8GB cards, I agree they're at the end of their useful life for AAA gaming, but not all games need 8GB, so there's still a market for smaller cards. Fortnite doesn't take an amazing system.

Not sure what to say in regard to my "solution": people want a high-end product that lets them run whatever game at any settings and doesn't cost a lot. Corporations want to make money. I agree the 4090/5090 are overpriced, which is one of the reasons I don't have one, but I also don't push for a 4K monitor running 120 FPS. Graphics aren't that critical to gaming fun; you just need a reasonable frame rate.

Buy used. Lower settings. It's what gamers have been doing since 3D PC gaming became a thing. Back when I was strapped for cash, it's certainly what I did: mid-tier cards, lower resolutions, turning off AA if I had to.

The only way to get them to change would be to stop buying, which may very well be the main driver for the 4080/Super price reduction: too expensive for what it could do. The 5080 is likely a repeat scenario, but we'll see. I think that might come down to what AMD decides to launch.
 
Intel is likely not making money with the B580, so I'm not sure that's a great example. This is more about building goodwill with gamers.
They certainly aren't making Nvidia's margins, but they're also not selling these at a loss. It's an example of what one should expect from a market with competition, instead of just accepting whatever the biggest player will give.
Not sure what to say in regard to my "solution": people want a high-end product that lets them run whatever game at any settings and doesn't cost a lot.
I'm not totally sure you understand the point I'm making. If the 16GB 4060 Ti can have much higher performance, better frame times, and no texture-loading issues compared to the 8GB model, which has identical specs other than VRAM capacity, that means the 8GB model didn't come with enough VRAM in the first place. Nobody should be advocating for products that are artificially limited to suit a giant company's margins. It's not about having a high-end product for cheap so much as a well-balanced product at a reasonable price.
 
It probably won't be some massive increase in gaming performance; Nvidia isn't likely to jump to much more advanced transistors, just make the same basic thing, but bigger. Most RTX 4090s were bought as entry-level workstation cards for AI workloads, and Nvidia knows it. A lot of people were paying $1,800 for a gaming card to avoid paying $3,500 for a workstation card that performed close enough. So there's a good chance the RTX 5090 isn't designed for gaming. Then again, the RTX 4090 also wasn't really meant for home gamers, and non-sponsored people still bought them apparently, so who knows.

Although there's always still a chance Nvidia severely nerfs the 5090's AI performance to keep from eating their own market. They've always limited compute performance, and they started limiting hash rates for crypto back when people cared about that, so why not artificially nerf AI? The bump in memory would only be useful for AI anyway, though.

I expect the price of the new cards to scale linearly with performance again, if not a bit worse. So maybe we'll get a card that games 15% better than a 4090, but at a 20% higher price and probably >20% higher power consumption. Unless they don't nerf AI, in which case the price jump could be a lot more. Nvidia wants their $3,500 from these people.
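Rough back-of-the-envelope math on those guesses (all numbers below are my hypothetical figures, not confirmed specs or prices):

```python
# Back-of-envelope value check. Everything here is hypothetical:
# a normalized 4090 baseline vs. a speculative 5090.
perf_4090, price_4090, power_4090 = 1.00, 1600.0, 450.0  # normalized perf, USD, watts

perf_5090 = perf_4090 * 1.15    # assumed +15% gaming performance
price_5090 = price_4090 * 1.20  # assumed +20% price
power_5090 = power_4090 * 1.20  # assumed +20% power draw

# Under these assumptions, perf-per-dollar actually drops ~4% (1.15 / 1.20).
print(f"4090 perf/$: {perf_4090 / price_4090:.5f}  perf/W: {perf_4090 / power_4090:.5f}")
print(f"5090 perf/$: {perf_5090 / price_5090:.5f}  perf/W: {perf_5090 / power_5090:.5f}")
```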
The 4090 was a sweet spot in the lineup last gen. It's equal to or faster than the $7-9k RTX 6000 until the workload needs more than 24GB; then the 6000 dominates. The 5090 will be "nerfed" for AI the same way the 4090 was: by having high TDPs and 3.5+ slot coolers. If you're building an 8-GPU workstation for AI, there's virtually no scenario where the consumer cards make sense. Nvidia very much wants people to get their feet wet with 1- or 2-GPU starter systems.
 
They certainly aren't making Nvidia's margins, but they're also not selling these at a loss. It's an example of what one should expect from a market with competition, instead of just accepting whatever the biggest player will give.

I'm not totally sure you understand the point I'm making. If the 16GB 4060 Ti can have much higher performance, better frame times, and no texture-loading issues compared to the 8GB model, which has identical specs other than VRAM capacity, that means the 8GB model didn't come with enough VRAM in the first place. Nobody should be advocating for products that are artificially limited to suit a giant company's margins. It's not about having a high-end product for cheap so much as a well-balanced product at a reasonable price.

I'm not advocating for their margins.

The 4060 Ti 16GB only rarely shows an improvement from the extra memory; it's very situational. It's just not a very large GPU to start with, which is part of your argument, but from the opposite direction.

Since the 5080 is a relatively small die compared to the 5090, they just didn't put the hardware in the GPU to make more memory feasible. They could go to 24GB using 3GB modules, but then people would probably complain that the bus width is too low: "Why didn't they make it a 384-bit card?" Then you would be looking at 4090-sized dies for your 5080, and it would certainly be over $1,000. It doesn't have to be that expensive, but that's what they would do.
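The capacity math behind that, as a quick sketch (standard GDDR puts one memory module on each 32-bit channel; the card names in the comments are the rumored configurations):

```python
# VRAM capacity follows directly from bus width and per-module density:
# each GDDR module sits on its own 32-bit channel.
def vram_gb(bus_width_bits: int, module_gb: int) -> int:
    modules = bus_width_bits // 32
    return modules * module_gb

print(vram_gb(256, 2))  # 256-bit, 2GB (16Gb) modules -> 16GB, the rumored 5080
print(vram_gb(256, 3))  # 256-bit, 3GB (24Gb) modules -> 24GB, the hypothetical bump
print(vram_gb(384, 2))  # 384-bit, 2GB modules -> 24GB, the "make it 384-bit" option
print(vram_gb(512, 2))  # 512-bit, 2GB modules -> 32GB, the rumored 5090
```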

People should care about what the card can do for the money. If the money isn't right, speak with your wallet and buy the AMD or Intel alternative, or buy used. That's all I'm saying.
 
Since the 5080 is a relatively small die compared to the 5090, they just didn't put the hardware in the GPU to make more memory feasible. They could go to 24GB using 3GB modules, but then people would probably complain that the bus width is too low: "Why didn't they make it a 384-bit card?" Then you would be looking at 4090-sized dies for your 5080, and it would certainly be over $1,000. It doesn't have to be that expensive, but that's what they would do.
Bus width only matters if the GPU is bandwidth starved, which is something a lot of people don't seem to understand. It's unlikely that this would be a valid complaint based on prior releases (barring money grabs like the 3060 8GB, where they cut the bus). While the 40 series had some seemingly weird bus widths, the cards also didn't see linear improvements from memory overclocking (probably due to cache). I wouldn't be surprised if the reason Nvidia is putting out a 16GB 5080 is that 24Gb ICs weren't ready yet.
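A minimal sketch of why bus width alone doesn't decide bandwidth (the data rates below are typical GDDR6/GDDR7 figures, not confirmed specs for any card):

```python
# Peak memory bandwidth = bus width (bits) / 8 * effective data rate (GT/s).
# Illustrative numbers only; not confirmed specs for any particular card.
def bandwidth_gbs(bus_width_bits: int, data_rate_gtps: float) -> float:
    return bus_width_bits / 8 * data_rate_gtps  # GB/s

print(bandwidth_gbs(256, 21.0))  # 256-bit GDDR6 @ 21 GT/s -> 672 GB/s
print(bandwidth_gbs(256, 28.0))  # 256-bit GDDR7 @ 28 GT/s -> 896 GB/s
print(bandwidth_gbs(384, 21.0))  # 384-bit GDDR6 @ 21 GT/s -> 1008 GB/s
# A faster memory standard on a narrower bus can beat a wider bus on slower
# memory, and a cache-heavy GPU may not be bandwidth starved either way.
```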
The 4060 Ti 16GB only rarely shows an improvement from the extra memory; it's very situational.
It's a situation that keeps happening more often as games advance. It's the same problem 3070 owners are running into, and 3080 10GB owners are starting to. The GPUs have plenty of performance but increasingly not enough VRAM. It's an odd corner to cut from a performance standpoint, and, as has been mentioned multiple times in this thread, it seems to have more to do with protecting the higher-margin enterprise cards than anything on the consumer side.