News Edward Snowden slams Nvidia's RTX 50-series 'F-tier value,' whistleblows on lackluster VRAM capacity

A problem I've been seeing with AMD is that they don't really try to make a GPU on their own; they try to base it off Nvidia.
No, I don't agree. AMD is the one who invented Infinity Cache, not Nvidia. This is what enabled RDNA2 to take the lead over Ampere. AMD went to chiplets in RDNA3; Nvidia still has yet to do this in a client GPU (though you can count the B200 as chiplet-based). AMD was also the first to use HBM.

I really don't see how you can justify calling AMD a copycat of Nvidia. I don't really know where they copied Nvidia, other than when GCN switched to SIMD/SMT.

Oftentimes it seems like AMD isn't even trying to create GPUs better than Nvidia's, just GPUs that perform similarly (in gaming at least) for a fair bit less.
Sometimes, they aren't. Polaris and RDNA1 were like that. RDNA2 clearly wasn't. RDNA3 wasn't either, but I think AMD didn't execute RDNA3 terribly well and underestimated just how big Nvidia would go with Ada. The result was AMD falling well short of what would've made them competitive, but it wasn't for lack of trying. The whole GCD/MCD thing isn't something you do, if you're just "phoning it in".
 
16GB of VRAM is probably the most cringe-worthy thing about the 5080. I understand why, but as a consumer that understanding does not help.

Of all the GPUs out there outside the xx90s, the 5080 is the one that could have used 24GB the most.
Stop legitimising it; call it what it is: a 5070 Ti at best. The 5070 would historically have been called a 5060, and the 5060 a 5050.
Even the 6080 won't beat the 4090 at this rate of decline.
 
AMD has been able to catch Nvidia, on a few occasions. For instance, their Fury X was neck-and-neck with the GTX 980 Ti.

More recently, their RX 6950 XT actually beat the RTX 3090 Ti on rasterization performance, which I'm surprised so many people don't seem to know. I guess it's because that was when GPUs were super-expensive (if you could even find one in stock), due to the combination of the pandemic, the crypto craze, and scalpers. So, maybe some weren't paying much attention to the reviews & benchmarks.
True, but I'm thinking more in terms of leading the industry vs. following. Take their trajectory in the CPU space … they purchased remnants of DEC and parlayed it into Athlon and outright dominance for a few years … Intel came back … but in short, they became a serious competitor … they've had less impact in the GPU space … data center / AI parts notwithstanding, the purchase of ATI didn't turn into the same trajectory in GPUs … Yeah, there have been a few moments here or there, but not enough to move the needle, and that's no good for the average consumer in this space.
 
Snowden is right that Nvidia's monopolistic practices are resulting in overpriced GPUs with crippled performance, but the VRAM quantities are actually the least offensive part of that. Remember when Nvidia tried to release a 4080 that was configured like a 70-class card but had to unlaunch it and rename it to the 4070 Ti when everyone found out? Yeah, they got away with it this time. The 5080 is configured like a 70-class card, the 5070 is configured like a 60-class card, etc. That is by far the most infuriating aspect of this lineup. Everyone's scrambling to pay $1000-1400 for what should have been named a 5070.
 
This way, they can bring out a refresh like a Ti or a Super with more RAM. And aren't they able to use a new compression technology that should lower VRAM use? But games still need to adopt the new technology.
 
Edward Snowden has unexpectedly shared his opinion on RTX 50 series VRAM
Snowden is now a Russian citizen and has about as much accurate information as Linus Sebastian of Linus Media Group. Both Snowden (net worth $57.7M) and Sebastian (net worth $85M) are in it exclusively for the money ... tell me a story, what do you want to hear, I'll tell you that for a fee.

In regards to GDDR7 VRAM, 8GB currently costs about $18 (probably less for nVidia). So I agree that going from 16GB to 32GB of VRAM would only cost nVidia about $36, but the 5080 (GB203-400-A1) and 5090 (GB202-300-A1) use completely different GPUs, even if both are 5nm. So I agree that nVidia are killing it on profit margins by using VRAM as an "upgrade" tool. BUT this isn't new news; nVidia have been doing this for a long time. AMD, on the other hand, don't use VRAM as an "upgrade" tool.
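Taking the $18-per-8GB GDDR7 figure above as a rough assumption rather than a confirmed price, the arithmetic behind the "$36" claim is easy to sanity-check with a quick sketch (the function name and price constant are illustrative, not from any real BOM):

```python
# Rough incremental-VRAM cost arithmetic, assuming the poster's ~$18 per 8GB
# GDDR7 estimate. All figures are the forum poster's, not confirmed pricing.
PRICE_PER_GB = 18 / 8  # ~$2.25 per GB

def extra_vram_cost(current_gb: int, upgraded_gb: int,
                    price_per_gb: float = PRICE_PER_GB) -> float:
    """Marginal memory cost of shipping a larger VRAM configuration."""
    return (upgraded_gb - current_gb) * price_per_gb

print(extra_vram_cost(16, 32))  # 36.0 -> roughly $36 to go from 16GB to 32GB
```

Of course, as a later post in this thread points out, capacity is also tied to the memory bus width, so the marginal cost of the chips themselves isn't the whole story.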

So Snowden is pointing out facts of which we knew about over a decade ago ... why? ... we know why ... $$$ ...
 
No, "everyone" is not.

Maybe people with GHIN syndrome (Gotta Have It Now)
But so far, that is only the scalpers, hoping to make a few $$.

This has been the thing for 2+ decades of 'new releases': phones, PlayStations, GPUs, etc.
There were dozens, sometimes hundreds of people lined up outside of Micro Centers for this launch, even after the people in line found out there weren't nearly enough 5090s for everyone. If you look up pictures of the same places for the 4080 release, you can practically hear the crickets. Or, if you took my post literally and thought I was saying that literally all 8 billion people on earth were waiting to buy one, then I don't know what to tell you. Either way, thanks for cherry-picking a single sentence from my post and missing the entire rest of the message.
 
This has been the thing for 2+ decades of 'new releases': phones, PlayStations, GPUs, etc.
True ... so after 20+ years, no one has figured out supply and demand yet? But there is a difference when it comes to nVidia, a $1 trillion company that either isn't aware of demand or is aware but doesn't "want" to supply ... a total of about 332 5090s for the entire US on release day ... for most companies that doesn't even register as "supply".

From my perspective as an ex-stockholder (lucky I got out, as nVidia's stock is in freefall), the last thing I want to hear from a company I've gambled on is "we can't or won't produce enough to meet demand" ... that's an instant "sell off" for me.
 
No, I don't agree. AMD is the one who invented Infinity Cache, not Nvidia. This is what enabled RDNA2 to take the lead over Ampere. AMD went to chiplets in RDNA3; Nvidia still has yet to do this in a client GPU (though you can count the B200 as chiplet-based). AMD was also the first to use HBM.

I really don't see how you can justify calling AMD a copycat of Nvidia. I don't really know where they copied Nvidia,
I'm not saying that AMD isn't good at GPUs, but they haven't seemed to have had their 1080 Ti moment yet. They haven't really made a GPU yet that makes the average gamer say "oh shoot, AMD is better than Nvidia". Nvidia knows it; that's why they're charging so much for the 4000 and 5000 series GPUs. I wish AMD had another R9 moment, a card that was not only cheaper than its Nvidia counterpart but also marginally faster. The 6900 XT was good, but it was still neck and neck with the 3090. Of course, now I'm just rambling, but the truth is we need some market-disrupting GPUs.
 
So, I guess what I'm saying is your clickbait actually worked this time. But don't keep doing that. I'll stop reading this site if it becomes "random celebrity not known for hardware expertise slams hardware manufacturer."
Came into the comments here to say basically the same thing.

Saw the headline, got curious what the heck was going on... and after reading the article and checking out his Twitter, it doesn't seem like he's got any significant involvement or interest in desktop PC hardware. It seems like he just put out a tweet about the thing everyone in the tech space was talking about, on the side most people seem to be taking, in order to get some attention, and Tom's took the bait and even decided to get cute and go with "whistleblows" instead of "comments". That makes it sound like he had some kind of inside insight into the lackluster VRAM that some Nvidia contact passed to him to release, but nope, it's just the opinion of someone who can't even legally purchase one of these if he wanted to.
 
They are designed for gamers. AI or training is a different thing, and to that point I'd say go buy a pro or enterprise card if compute is so important.

I'm sure the amount of VRAM is sufficient for the resolution the card is designed for. It makes no sense to release cards with overblown VRAM capacity if the GPU can't tax it. People are so consumed with having so much VRAM, as if it's really needed.

Only a few understand that there is a big difference between cached and allocated. Cached is where most of the large VRAM numbers come from; what is actually allocated or being used is far less than what is cached.

Caching textures is useful so they don't have to travel over the PCIe bus from the CPU or main memory, but the impact on performance is less than a percent, too small for you to even notice. And now with things like ReBAR it's not even needed anymore.
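For anyone who wants to look at this on their own card, here's a minimal sketch (assuming the nvidia-ml-py / pynvml package is installed) that reports device-level VRAM usage via NVML. Note that the "used" figure the driver reports already includes memory games have allocated and cached but may never actually touch, which is exactly the distinction being drawn above:

```python
# Minimal sketch: query device-level VRAM usage via NVML (pip install nvidia-ml-py).
# The "used" number includes cached allocations, so it overstates what a game
# is actively touching -- per-resource residency isn't visible at this level.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)   # first GPU in the system
mem = pynvml.nvmlDeviceGetMemoryInfo(handle)    # total / used / free, in bytes

gib = 1024 ** 3
print(f"total: {mem.total / gib:.1f} GiB, "
      f"used: {mem.used / gib:.1f} GiB, "
      f"free: {mem.free / gib:.1f} GiB")

pynvml.nvmlShutdown()
```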

I remember having an RX 580 with 8GB of RAM. There was another 480 out there in the wild with only 4GB. Even with double the VRAM, the card was not any faster than the one with only 4GB.

And I maxed out Doom Eternal for all it had, consuming 7GB of VRAM. Was it any faster? No.

Not even today's highest-end game can fully tax a high-end or ultra-high-end card. And even if you could, you're not going to see any performance benefit (at all, really) other than a 1 to 3 FPS difference on top of an already large number of FPS anyway.

I still run a 6700 XT with 12GB of VRAM; it's designed for 1440p, so 1000% this card will last for some time still. Even a 16GB card will do. It's just within the niche it's designed for (i.e. a fully capable 120Hz 1440p gaming card).

People who play FPS games where latency absolutely matters turn all the eye candy down anyway. And then you have 75% of the VRAM actually being unused, lol.

You buy the things you need now. Looking forward to the 9070 XT.
 
They haven't really made a GPU yet that makes the average gamer say "oh shoot, AMD is better than Nvidia".
Oh shoot, AMD is better than nVidia.

I have 6 Windows-based computers (3060 Ti, 3090, 4070 Ti Super, 4090, 7900 XT, 7900 XTX), 1 Linux box (1080 Ti), and 3 iOS devices. "Better" needs more context. If you judge higher FPS as "better", then sure, nVidia has the advantage. If you judge image quality, then AMD is better. Both AMD and nVidia do ray tracing, but nVidia has the lead on FPS with RT enabled and supported by the title/app/game/sim.

RT is not done well by either AMD or nVidia (not even close to what would be rendered in movies) - it's a compromise with limited "tracing". The reflections are too reflective and unrealistic, as if to make you notice ... in some titles RT looks worse, not better, especially water surfaces that act more like finely polished mirrors than real water. RT is being used to make games look more "dynamic", not more "realistic".

I have zero need or desire to go above 60Hz/60 FPS and NO interest at all in "fake frames" (DLSS frame generation). Again, it's another technology that compromises. I don't have "miracle" eyes that can detect the difference between 60 FPS and 120 FPS, let alone react to it ... now, if you're one of those superhumans who claim they can detect and respond to a different image every 8ms, then I recommend you go visit an F1 team; I'm sure they'll hire you as an F1 driver 😉

If Tom's Hardware wants to stir the pot and bring some meat to the table, ask AMD/nVidia why they are bringing technologies that really aren't improving image quality? Do consumers want "fake frames"? Do consumers want overly reflective surfaces everywhere? Do consumers want 120 FPS, 240 FPS (OK, some do, but they really have no clue)?
 
ask AMD/nVidia why they are bringing technologies that really aren't improving image quality?
The usual and perhaps correct(?) answer is that they do not currently improve the actual visuals of games, but that this will become a reality in the future, when local computing resources become sufficient to render full path tracing. Or when, as in MFS2024, the heavy calculations are done in the Microsoft cloud and delivered ready-made via a broadband Internet connection, while the user side remains mainly for interaction.
 
It makes no sense to release cards with overblown VRAM capacity if the GPU can't tax it. People are so consumed with having so much VRAM, as if it's really needed
The VRAM requirement comes from DLSS and the fake frames it generates based on prior "rendered" frames. Depending on the level of DLSS applied, the VRAM requirements increase significantly. It also depends on the title/game/sim/app, the resolution of the textures, the compression type, and how much they want to cache in VRAM.

For those "pros" who are FPS hunters, it's more psychological than actual. The average human response time is 250ms; if someone is claiming they can respond to 8ms (120 FPS) or 4ms (240 FPS), then they are not human.
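Just to make the arithmetic behind those numbers explicit (this is only a sketch of the frame-time math being cited, not a claim about what is or isn't perceptible):

```python
# Frame time implied by a given frame rate, compared against the ~250 ms
# average reaction time quoted above (the 8 ms / 4 ms figures are rounded
# from 1000/120 and 1000/240).
AVG_REACTION_MS = 250  # figure cited in the post above

def frame_time_ms(fps: float) -> float:
    """Milliseconds each frame is on screen at a given frame rate."""
    return 1000.0 / fps

for fps in (60, 120, 240):
    ft = frame_time_ms(fps)
    frames_per_reaction = AVG_REACTION_MS / ft
    print(f"{fps:>3} FPS -> {ft:5.2f} ms per frame "
          f"(~{frames_per_reaction:.0f} frames shown during an average reaction)")
```

Whether higher frame rates still help through smoother motion and lower end-to-end input latency is a separate question from raw reaction time, which is what the replies below get into.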
 
The average human response time is 250ms; if someone is claiming they can respond to 8ms (120 FPS) or 4ms (240 FPS), then they are not human.
A funny example, a little outside of professional PC gaming:
NBA prospect Victor Wembanyama reacted to basketball great LeBron James calling him an “alien”
In reality, there is no person who can act many times faster than the measured and scientifically proven average reaction time. When reactions seem several times faster than possible, it is actually based on experience and knowledge of what is about to happen in a given scene: whether, when and in exactly what place the enemy character will appear, plus an automatic, usually unconscious calculation of that location based on where the enemy was seen earlier in the game and an assessment of how quickly he can move and where he is likely to appear. It is somewhat like the scene in which the Terminator shoots at the calculated position of the police officer through the wall in the police station.
 
Stop legitimising it; call it what it is: a 5070 Ti at best. The 5070 would historically have been called a 5060, and the 5060 a 5050.
Even the 6080 won't beat the 4090 at this rate of decline.

The 4070 Ti and 4080 use the exact same die; in effect, 4070 Tis are just defective 4080s. The 5080 is using the same pattern as the previous 4080, and the 5070 Ti is going to be just a defective 5080. The 5080 is not a "5070" or such silliness; the 5070 will have a 192-bit memory bus and 12GB of VRAM, the same as the 4070.

As for everyone moaning about memory, you can't just solder another VRAM chip on; it won't work. You need to attach that VRAM chip to a memory bus, meaning you need to add another 32-bit memory channel to the die, complete with all the supporting elements. NVidia has standardized the memory bus width of each model in such a way as to ensure cost/performance is the same on every model. It's a really crappy thing for them to do, and hopefully AMD and Intel will step up with some mid-range products that offer better cost/performance curves.
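To illustrate the constraint being described, here's a small sketch of how capacity falls out of bus width, assuming the 2GB (16Gb) GDDR7 packages on 32-bit channels that the launch 50-series cards ship with; the function and constants are illustrative, not Nvidia's actual configurator:

```python
# VRAM capacity implied by memory bus width, assuming 2GB (16Gb) GDDR7 chips,
# one per 32-bit channel. Clamshell mode (two chips per channel) doubles it.
CHIP_GB = 2        # per-package capacity of the GDDR7 used at launch
CHANNEL_BITS = 32  # each chip occupies one 32-bit channel of the bus

def vram_gb(bus_width_bits: int, clamshell: bool = False) -> int:
    """Total VRAM for a given bus width under the assumptions above."""
    channels = bus_width_bits // CHANNEL_BITS
    return channels * CHIP_GB * (2 if clamshell else 1)

print(vram_gb(512))  # 32 GB -> 5090-class bus
print(vram_gb(256))  # 16 GB -> 5080-class bus
print(vram_gb(192))  # 12 GB -> 5070-class bus
```

This is also why a mid-cycle refresh with more VRAM (as suggested earlier in the thread) is plausible without a wider bus: denser 3GB GDDR7 packages or a clamshell layout raise capacity while leaving the die's memory controllers untouched.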
 
If this was from Edward Snowden, it would have come out long before Jan 30th.
Not everyone has the ability, knowledge, or experience to trace information back to its source, even when it is easy. Can I assume that you have traced the origin of the information attributed to Edward Snowden and found that it came from a different person or organization?
 
they've had less impact in the GPU space … data center / AI parts notwithstanding, the purchase of ATI didn't turn into the same trajectory in GPUs …
I'll agree that AMD did flounder about, after the ATI acquisition. They made many promising moves, like in GPU compute and (as I mentioned) with Fury. However, I think these intentions were deeply undermined by the way the company was circling the drain, financially. Their GPUs were good enough to keep them in the console business, which provided a vital financial life line when they needed it most.

But, Nvidia is a force to be reckoned with. Nvidia has been continually innovating and their attempts to break into the phone/tablet and embedded markets forced them to jump on the efficiency train early, and that served them well. You can see the effect it had on Maxwell and Pascal, for instance. Nvidia is also quite ruthless in their business practices, which complemented their engineering prowess and provided it with the fuel needed to realize their GPU compute ambitions in a way AMD couldn't. And it was that success in GPU compute which put them in the prime position to benefit from the Deep Learning revolution, which they recognized and exploited boldly and very effectively.

I'm not sure it's really fair to judge AMD using the measuring stick of Nvidia, because I think AMD innovated faster in its GPUs than just about anyone else would've. Let's not forget there were reasons ATI and Nvidia were the lone survivors of the first decade of hardware-accelerated 3D.

Lastly, if you use AMD vs. Intel as the standard for comparing AMD vs. Nvidia, you're missing one glaring detail. In the fight against Intel, AMD was able to ride TSMC's meteoric rise, at a time when Intel's foundries couldn't seem to do anything right. I don't know if you recall the bad old years when Intel got stuck on 14 nm for about 6 years, hence the reason they kept launching Skylake derivatives. TSMC propelled Zen 2 and 3 just when Intel was at its most vulnerable. Well, AMD had no such advantage vs. Nvidia, who also used TSMC during that period. In fact, one of the reasons RDNA2 was able to catch (and slightly pass) Ampere was Nvidia's ill-fated decision to move over to Samsung.

As tough as it's been to watch AMD struggle against Nvidia, I think it's been for the best. It's meant that neither company could afford to be lazy, and that's put them in a position where even the (once) mighty Intel couldn't really touch them. At one time, this would've been unthinkable! Not to mention the Chinese upstarts gunning for them now.

Yeah, there have been a few moments here or there, but not enough to move the needle, and that's no good for the average consumer in this space.
Let's face it: Nvidia is the Ferrari of GPUs. They have a brand cachet that buttresses them when they make the rare misstep, and that has helped support stronger pricing than they've sometimes deserved.

I wouldn't count out AMD, but I'm not anticipating them leapfrogging Nvidia any time soon. There's always a chance they'll do something radical, like their rumored RDNA 4 flagship (now definitely cancelled) that employed die-stacking. It's not outside the realm of possibility that they pull another rabbit out of their hat, like they did with Infinity Cache.

[Attached images: slides on the high-end RDNA 4 GPU design, per the source below]


Source: https://videocardz.com/newz/amd-nav...e-complexity-of-the-high-end-rdna4-gpu-design

I know AI-based rendering is a fraught topic among gamers, but they did announce a partnership with Sony to create a truly next-gen neural rendering architecture, where both companies can utilize the resulting IP. Let's see what comes of that.
 