News Edward Snowden slams Nvidia's RTX 50-series 'F-tier value,' whistleblows on lackluster VRAM capacity

Why do I care what Edward Snowden thinks? Maybe he should have had the guts to stand by his convictions and make his argument as a whistleblower in court
He wanted to, but the US refused to allow him to make a public-interest defense if he returned to stand trial.

instead of running off to a fascist state to be used as a useful idiot propaganda tool.
He got stranded in Russia. It wasn't his final destination, but word got out and no country would give him safe passage to one of his intended destinations. He was allowed to remain in Russia as a special guest, but the stakes got raised a couple years ago and he was basically given no realistic alternative but to apply for Russian citizenship.
 
aren't they able to use a new compression technology that should lower the ram use? But games still need to adopt the new technology.
IMO, neural rendering is basically at the same point that raytracing was in 2018 (i.e. at the launch of Turing - the first GPUs with RT cores). You technically can use neural textures, and might derive advantages from doing so in some cases, but I think it'll probably take a couple more generations for the technology to come into its own.
 
In regards to DDR7 VRAM, 8GB is currently about $18 (probably less for nVidia). So I agree that 16GB to 32GB VRAM is only costing nVidia $36,
You got a source on that? You know it's GDDR7, right?

The thing to keep in mind is that, without increasing the memory interface (which would be otherwise pointless, since 256-bit @ 30 GT/s already gives the 5080 more than enough bandwidth), they actually have to switch to 24 Gbit GDDR7 chips. Pricing aside, it's rumored to be unavailable right now. If true, then Nvidia faced a choice of either delaying the launch, or launching at 16 GB now and then following with a Super having 24 GB. Realistically, there's no way they were going to double the DRAM chips (i.e. by using the double-sided "clamshell" configuration) and go with 32 GB, on the RTX 5080.
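To put rough numbers on that, here's my own back-of-the-envelope arithmetic using the bus width and data rate mentioned above (not a spec sheet):

```python
# Back-of-the-envelope numbers for the RTX 5080's memory subsystem, based on
# the 256-bit bus and 30 GT/s figures above.
bus_width_bits = 256          # 8 channels x 32-bit
data_rate_gtps = 30           # GDDR7, giga-transfers per second per pin

bandwidth_gb_s = bus_width_bits / 8 * data_rate_gtps
print(f"Bandwidth: {bandwidth_gb_s:.0f} GB/s")        # 960 GB/s

# Capacity with one chip per 32-bit channel (8 chips total):
for chip_gbit in (16, 24):
    print(f"{chip_gbit} Gbit chips -> {8 * chip_gbit // 8} GB total")
# 16 Gbit chips -> 16 GB (what shipped); 24 Gbit chips -> 24 GB (the hypothetical Super)
```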

AMD on the other hand don't use VRAM as an "upgrade" tool.
Sure, they do. They offer workstation variants of their graphics cards, just like Nvidia. And just like Nvidia, one way those cards are often differentiated from gaming cards is in memory capacity. The Radeon Pro W7900 is basically a RX 7900 XTX with a blower-style cooler and 48 GB of memory.
 
If you look up pictures of the same places for the 4080 release you can practically hear the crickets.
At a list price of $1200, the RTX 4080 was the worst deal in the original RTX 4000 lineup. If you compute the perf/$, it should've cost only about $1050 to offer the same value as the RTX 4070 Ti and the RTX 4090. Nvidia fixed that, in the Super, but then also improved the value proposition of the RTX 4070 Ti Super. So, even at $1000, the RTX 4080 Super was not the most attractive offering.
 
For those "Pros" that are FPS hunters, it's more psychological than actual. The average human response time is 250ms, if someone is claiming they can respond to 8ms (120FPS) or 4ms (240 FPS) then they are not human.
You're mistaken in your characterization of the latency aspect. First, the thing about latency is that it stacks. So, every part of the rendering pipeline and game engine adds a bit of latency, and then there's the monitor on top.

Now, the issue with latency is that it determines the point at which the photons reach your retina. Then, add your reaction time on top of all that. If you take two elite gamers with similar reaction times and skill levels and you give one of them a few milliseconds advantage, that could be just the thing they need to come out ahead. No, it's not enough of an advantage for a low-skill gamer to beat a better one, but for those struggling to find any edge on their competitors, there's actually sound reasoning behind it.
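A toy example of how that stacking works (all numbers below are made up, purely to illustrate the point):

```python
# Toy illustration of latency stacking: the pipeline adds up before your
# reaction time even starts. Every figure here is illustrative, not measured.
reaction_ms = 250.0                  # typical human reaction time
stack_ms = {
    "game/engine": 20.0,             # input sampling, simulation, submit
    "render": 1000.0 / 120,          # one frame at 120 fps
    "display": 5.0,                  # scanout / panel response
}
time_to_action_120 = reaction_ms + sum(stack_ms.values())

stack_ms["render"] = 1000.0 / 240    # same stack, but rendering at 240 fps
time_to_action_240 = reaction_ms + sum(stack_ms.values())

# The ~4 ms difference is small, but it's an edge the other player doesn't have.
print(f"{time_to_action_120:.1f} ms vs {time_to_action_240:.1f} ms")
```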

As for your comment about smoothness, I have a gaming monitor and an older 60 Hz monitor right next to each other. Side-by-side, I can easily see that the motion on the gaming monitor is smoother. And I don't think my eyes are exceptional, in this regard.
 
4070 TI and 4080 use the exact same die,
Some do, including the 4070 Ti Super. However, the original RTX 4070 Ti uses the AD104 die, which is 294.5 mm^2, features a 192-bit memory interface, and has 35.8 billion transistors. In contrast, the RTX 4080 (and RTX 4070 Ti Super) uses the AD103 die, which is 378.6 mm^2, features a 256-bit memory interface, and has 45.9 billion transistors.

As for everyone moaning about memory, you can't just solder another VRAM chip on, it won't work. You need to add that VRAM chip to a memory bus, meaning you need to add another 32-bit memory bus to the die complete with all the supporting elements.
Incorrect. GDDR memory has a special "clamshell" mode, wherein a second chip can be added per channel by soldering it directly on the opposite side of the PCB as the first, thereby keeping the signal paths nearly identical. In this mode, the two chips act like a single chip of double the capacity. How else do you think Nvidia and AMD workstation cards are able to offer double memory capacity, using the same GPU die as the gaming cards??
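Roughly, the capacity math works out like this (a sketch assuming 16 Gbit, i.e. 2 GB, chips; the exact density varies by card):

```python
# Clamshell capacity arithmetic, assuming 2 GB (16 Gbit) GDDR chips.
channels = 8                            # a 256-bit bus = 8 x 32-bit channels
chip_gb = 2                             # capacity of one chip

normal_gb = channels * chip_gb          # one chip per channel  -> 16 GB
clamshell_gb = channels * 2 * chip_gb   # chips on both PCB sides -> 32 GB
print(normal_gb, clamshell_gb)          # how workstation cards double capacity
```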

I've told you this probably 2 or 3 times, already. Maybe someday you'll actually read my reply and stop spreading wrong information about this subject.
 
As for your comment about smoothness, I have a gaming monitor and an older 60 Hz monitor right next to each other. Side-by-side, I can easily see that the motion on the gaming monitor is smoother. And I don't think my eyes are exceptional, in this regard.
Same for me – and having them hooked up to the same system makes it pretty easy to disprove the "HUMAn EYe CaN't seE PAst 24 fpS" nonsense :)

I personally don't notice much improvement beyond 120 fps. The sweet spot seems to be around 90 fps for me; between 90 and 120 fps the returns are diminishing. Overall it feels a lot better to have a consistent framerate (and, if there's variance, not have the lows drop below 90) than to chase those insane 200+ fps.
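Frame times make the diminishing returns pretty obvious (my own numbers, nothing exotic):

```python
# Frame time at each refresh rate; note how little each step saves as fps climbs.
for fps in (60, 90, 120, 240):
    print(f"{fps:>3} fps -> {1000 / fps:5.1f} ms per frame")
# 60 -> 90 fps shaves ~5.6 ms per frame, while 90 -> 120 fps only shaves ~2.8 ms.
```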
 
In the tables displayed for free to visitors of DRAMeXchange, there are no prices shown, and there isn't even a row for GDDR7. I don't rule out that such information exists in the reports available to members of different tiers, excluding the figures for actual deals, which are private between the companies involved.
 
Infamous former NSA contractor and whistleblower Edward Snowden has unexpectedly shared his opinion on RTX 50 series VRAM quotas.

Edward Snowden slams Nvidia's RTX 50-series 'F-tier value,' whistleblows on lackluster VRAM capacity : Read more
So the biggest loser of our generation is crying because he can't afford the best graphics card on the planet! Lol, who'da thought? I guess he'll have to stick with some budget part from the also-ran microprocessor device maker. So sad.
 
I can certainly see why people would view Snowden as a detestable individual for what he did - that's one thing.

But comparing the validity of his views to - say - a dentist, or Miley Cyrus, like some commenters did, is unfortunate at best.

The fact that he defected to Russia doesn't take away from his skill - not one bit.

He's still a tech guy - one of the best out there, in fact - and his opinion matters as much as anyone's in the field.

And he's not wrong.

VRAM-wise, Nvidia is purposefully crippling and overpricing its low and mid tier GPUs, in order to funnel PC users to its halo products.
 
Incorrect. GDDR memory has a special "clamshell" mode, wherein a second die can be added per channel by soldering it directly on the opposite side of the PCB as the first, thereby keeping the signal paths nearly identical. In this mode, the two dies act like a single die of double the capacity. How else do you think Nvidia and AMD workstation cards are able to offer double memory capacity, using the same GPU die as the gaming cards??
Oooh, I thought clamshell simply meant putting memory modules on both sides of the PCB – didn't realize it meant precise opposite side and splitting the data bus, but it makes a lot of sense :)
 
They are just using Snowden's name for clicks. They could have picked some other celebrity whose opinion on GPUs is no better than that of any forum poster here.

Doesn't really matter whether you like Snowden or not; the reason we now have HTTPS is that the tin-foil-hat guys were actually right: the government really was spying on us.
 
4070 TI and 4080 use the exact same die, in effect 4070 ti's are just defective 4080's. The 5080 is using the same pattern as the previous 4080 and the 5070 TI is going to just be a defective 5080. The 5080 is not a "5070" or such silliness, the 5070 will have 192-bit memory bus and 12GB of VRAM, the same as the 4070.
This is misinformation. The 4070TI and 4080 die are completely different, cut from different wafers and manufactured on different production lines. They have different sizes and everything else. There is nothing in common between AD103 and AD104 except the architecture.
 
This is misinformation. The 4070TI and 4080 die are completely different, cut from different wafers and manufactured on different production lines. They have different sizes and everything else. There is nothing in common between AD103 and AD104 except the architecture.
He probably got confused because the RTX 4070 Ti Super did get the AD103 and he just thought 4070 Ti always used that die.

I think there were also some non-Super cards which used it with the memory only three-quarters populated and enough shaders disabled to match the specs of the AD104. Nvidia sometimes does those sorts of things, when they need to burn through some excess inventory or defective chips of a larger ASIC.
 
<Mod Edit for politics>


It is what it is. There's always going to be sanctions. Strong arming is not new. And it's not only exclusive to one side of the political aisle.

I just hope that if I need a GPU I will be able to go out and buy an Nvidia GPU at some point. They might not be heavily in stock now but they will be over time. I don't know why people want to hurry up and rush out all the time to get a new one.

I have a 4090 and a 3090, so I don't really need to get a new one. I can also understand why Nvidia didn't put more VRAM on the 5080. It isn't really that much more powerful than the 4080 Super.

If a 5080 Ti happens to come out later, which it probably won't, then I would expect it to have at least 21 to 24 GB of VRAM.
 
I don't really have a problem with Eddie and I really appreciate him exposing the ...
Seriously bro. Look at the history of legislation and programs in this area. It's stuff like the Patriot Act, which was passed and put into force shortly after 9/11.

I'm not big into Deep State-type conspiracy theories, but especially when you get into the realm of intelligence and defense, these are organizations, cultures, and programs that are insular and don't change much from one administration to the next. I think it's a mistake to try and politicize it, because it mostly transcends politics. In fact, those organizations are specifically designed to be apolitical.

As for the things he's said more recently, I don't know what sort of pressure he's under to be making the political statements that he has. I don't totally give him a pass there, but I also don't assume it's what he truly believes.
 
The issue of VRAM amounts seems to crop up more because of the cost of the cards than the performance. I don't necessarily believe that it is game limiting yet, but if it is or there are other applications (such as movies, though I think we are talking pure gaming cards) that suffer with lower VRAM I'd love to know. Does MFG need VRAM at all?
 
Does MFG need VRAM at all?
Yeah, it needs not only the frames it's interpolating between, but also rather a lot of info about them. It definitely needs analytic motion vectors, probably Z-buffer, optical flow vectors, and possibly more. Easily 16 bytes per pixel, if not more, which adds up to roughly 59 MB per 1440p frame. At 4K, with 2.25x the pixels, that grows to something like 133 MB of required frame data.

Beyond that, it's going to need the actual AI models used for the different parts. And if Transformer is used, it'll need to keep some intermediate state. So, it's not insignificant, but also shouldn't be that huge, relative to total memory capacity. People can just try switching it on/off and looking at available GPU memory to see. I'd guess someone has already done that, if you want to try searching for it.
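For anyone who wants to check the per-frame arithmetic above, here's the rough calculation (the 16 bytes/pixel figure is my assumption about motion vectors + depth + flow, not a published number):

```python
# Rough size of the auxiliary frame data MFG would need, per frame.
def frame_data_mb(width, height, bytes_per_pixel=16):
    return width * height * bytes_per_pixel / 1e6

print(f"1440p: {frame_data_mb(2560, 1440):.0f} MB")   # ~59 MB
print(f"4K:    {frame_data_mb(3840, 2160):.0f} MB")   # ~133 MB
```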
 
This is misinformation. The 4070TI and 4080 die are completely different, cut from different wafers and manufactured on different production lines. They have different sizes and everything else. There is nothing in common between AD103 and AD104 except the architecture.
Should have said "4070 Ti Super" and not "4070 Ti". The 4070 Ti is a 192-bit AD104 part, while the Super is 256-bit and uses the AD103, the same as the 4080.
 
The issue of VRAM amounts seems to crop up more because of the cost of the cards than the performance. I don't necessarily believe that it is game limiting yet, but if it is or there are other applications (such as movies, though I think we are talking pure gaming cards) that suffer with lower VRAM I'd love to know. Does MFG need VRAM at all?

MFG and DLSS upscaling both need more VRAM, as they require more buffers to be created.

The whole VRAM thing is overblown, as most people just don't understand how it works, and it's not like many publications bother with it either. In the old-school days of Win XP / DX9, programs could query the display device, and one of the returned values would be the VRAM size. Games were then written around expecting a certain amount of VRAM, because they allocated and deallocated resources themselves. Starting with WDDM 1.0 (Vista), DX9 and above could instead use a form of virtual memory, where graphics VRAM and system memory are combined to form one pool. Whenever a program loads a resource, it gets loaded into this large virtual space, and it becomes the responsibility of the OS to ensure resources are copied into graphics VRAM prior to being used by the drivers. In this way, graphics VRAM became a form of cache, and the program no longer has to actively manage memory. BTW, a scheme like this has been around since Win98 with AGP and GART, but it was terribad and broke, so nobody used it.

This model was optional up until WDDM 2.0 and Windows 10; then it became the standard, and everyone uses it now. In Linux, a similar system is accomplished through DRM and GEM. Modern consoles also use this kind of shared memory model.

The goal is to leave memory management of the GPU to the OS and have the game developers focus just on their game without having to bother trying to manage platform resources.
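If it helps, here's a toy sketch of the residency idea described above: VRAM acts as a cache over a larger virtual pool, and the OS evicts least-recently-used resources back to system memory when it fills up. Names and sizes are invented for illustration; this isn't a real driver API.

```python
from collections import OrderedDict

class ResidencyManager:
    def __init__(self, vram_mb):
        self.vram_mb = vram_mb
        self.resident = OrderedDict()   # resource name -> size in MB, LRU order

    def make_resident(self, name, size_mb):
        """Ensure a resource is in VRAM before the GPU touches it, evicting
        least-recently-used resources back to system memory if needed."""
        if name in self.resident:
            self.resident.move_to_end(name)   # mark as recently used
            return
        while sum(self.resident.values()) + size_mb > self.vram_mb:
            evicted, _ = self.resident.popitem(last=False)
            print(f"evicting {evicted} to system memory")
        self.resident[name] = size_mb

mgr = ResidencyManager(vram_mb=8192)
for resource in ("level_textures", "shadow_maps", "ui_atlas"):
    mgr.make_resident(resource, 3000)   # the third allocation forces an eviction
```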
 
Nvidia has been continually innovating and their attempts to break into the phone/tablet and embedded markets forced them to jump on the efficiency train early, and that served them well.

As far as efficiency, AMD leads the way. The Nvidia 5090's 600-700 W load consumption is double what early leaked reports show for the AMD 9070 XT, with the 9070 XT's performance in both raster and RT around the same as the Nvidia 5080's. The 9070 XT is $800; the Nvidia 5080 is $1200 (if you can find one at MSRP). AMD doesn't want to sell in a 5090 market; they want to sell more units to a mass market. It's not that AMD can't make a GPU that competes with a 5090, it's that they don't want to, as it's not a wise investment. You think AMD couldn't just double the power requirements and come up with a GPU that performs the same as a 5090? Nvidia's stock price is in free fall; this is clearly a BIG mistake for Nvidia on several fronts.

You got a source on that? You know it's GDDR7, right?
Both DDR and GDDR are double data rate; PAM3 signaling (more data per cycle) helps GDDR7 with bandwidth, vs. DDR5, which is timed for lower-latency CPU work. Samsung makes the 24Gb GDDR7 and Micron makes the 32Gb GDDR7. GDDR7 comes with a 70% increase in thermals... these air-cooled cards just aren't going to cut it, which is also why the few who actually got a 5090 are reporting system lockups after 1-3 hours of continuous use. "Right?" Yeah, ok, whatever.
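For what it's worth, the "more data per cycle" bit boils down to this (my summary of how PAM3 is usually described for GDDR7, not something taken from the post above):

```python
import math

# NRZ (used by GDDR6) carries 1 bit per symbol; GDDR7's PAM3 encodes 3 bits
# across every 2 symbols, i.e. 1.5 bits per cycle at the same signaling rate.
nrz_bits_per_symbol = 1
pam3_bits_per_symbol = 3 / 2
pam3_theoretical_max = math.log2(3)        # ~1.58 bits if fully utilized

print(pam3_bits_per_symbol / nrz_bits_per_symbol)   # 1.5x the data per cycle
```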

And just like Nvidia, one way those cards are often differentiated from gaming cards is in memory capacity. The Radeon Pro W7900 is basically a RX 7900 XTX with a blower-style cooler and 48 GB of memory.
Workstation cards are an entirely different market, and you're paying for specific application support/drivers, not so much the hardware and memory. Not to mention the memory bandwidth is actually lower on the W7900, as it uses ECC GDDR6, whereas the 7900 XTX does not use ECC.

As for your comment about smoothness, I have a gaming monitor and an older 60 Hz monitor right next to each other. Side-by-side, I can easily see that the motion on the gaming monitor is smoother. And I don't think my eyes are exceptional, in this regard.
I didn't make a comment about "smoothness". I made a comment about reaction time, and humans can't react to 8 ms in any meaningful way, regardless of "smoothness". Great, it "looks" smoother... and?

<Mod Edit - removing political rants>