News Nvidia GeForce RTX 3080 Founders Edition Review: A Huge Generational Leap in Performance

You can make fun of it as much as you wish, but the people who gamed at 8K in Nvidia's showcase were AMAZED by 8K gaming, like it was something they had never seen before!

View: https://www.youtube.com/watch?v=09D-IrammQc

Yep, and they are amazed by Apple's "magical" design too. Doesn't mean it's real. A number of Apple's products are flawed, yet they are supposed to be "the best". For example, we have bad keyboards (keygate), slow processors, bad reception, purple camera tinge, intentional software slowdowns (batterygate), circuit board failures (touchgate), and too-weak phone cases (bendgate).

Higher numbers or "special wording" are a common psychological trick used by marketing to imply something is better (subjective). But when you put it in a lab, the numbers don't pan out (objective).

Case in point: NVIDIA inflated the numbers ("twice as fast as a 2080"). That led to false beliefs about the true performance, and a lot of people were let down by the actual results of 50%-70% faster, or about 30% faster than a 2080 Ti.

Intel also implied Tiger Lake was going to blow away the 4800U. Hardly. It's a neck-and-neck tie, depending on which metric you are more interested in.

It's all just marketing. But if you blow things up too much, it will eventually backfire in your face.
 
An interesting thing about Ampere is how it has fewer transistors than a simple extrapolation from the previous gen would suggest. The 3080 uses 28.3 billion transistors to implement 8704 shader cores, so about 3.3 million per core. The 2080 uses 13.6 billion transistors to implement 2944 cores, so about 4.6 million per core. The calculation is rough, of course, as there are other functional units on the dies, and these were beefed up in Ampere. And we know the 3080 has a couple thousand cores' worth of disabled transistors.

Now if we look at Team Red's efforts, we see the numbers move in the opposite direction gen-to-gen: the 5700 XT is about 4.0 million per core, while the 590 is about 2.5 million.
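
If anyone wants to sanity-check those ratios, here's a minimal sketch of the arithmetic in Python (the transistor and core counts are the usual published spec-sheet figures, so treat the per-core numbers as ballpark only):

# Rough transistors-per-shader-core comparison from published spec-sheet figures.
# This ignores caches, memory controllers, disabled units, etc., so it's only a
# ballpark comparison, not a real per-core transistor budget.
cards = {
    "RTX 3080 (GA102)": (28.3e9, 8704),
    "RTX 2080 (TU104)": (13.6e9, 2944),
    "RX 5700 XT (Navi 10)": (10.3e9, 2560),
    "RX 590 (Polaris 30)": (5.7e9, 2304),
}
for name, (transistors, cores) in cards.items():
    print(f"{name}: {transistors / cores / 1e6:.1f} M transistors per core")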

My hypothesis is that TSMC's process is actually inferior to Samsung's when it comes to building GPUs. Restrictive design rules force you to use more transistors to implement the same functionality. And the more transistors you have, the higher the chance one of them will be bad.
I think the Turing CUDA cores (FP + INT datapaths) probably don't use significantly fewer transistors than the Ampere CUDA cores (FP + INT/FP datapaths). The tensor cores would be larger, but there are half as many of them. The RT cores would be larger as well. But there's also a ton of transistors going into the L1/L2 caches, which are both much larger than Turing's.

You can also get some idea of how much complexity exists outside the main processing cores by looking at GA102 and GA104. The former is 28.3 billion transistors and 628.4 mm², while the latter is 17.4 billion transistors and 392.5 mm². In terms of SM counts, GA102 has 84 and GA104 has 48. So...

X = SM size or transistors (approximate)
Y = Memory controller size (approximate)
Z = NVDEC/NVENC/etc blocks -- basically everything else (also approximate)

84X + 12Y + Z = 28.3 billion
48X + 8Y + Z = 17.4 billion
------------------------------
36X + 4Y = 10.9 billion
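
Two equations and three unknowns don't pin down a unique answer, but if you plug in an assumed value for Y (the per-controller figure below is purely a placeholder for illustration, not a known number), you can back out a rough per-SM estimate. A minimal sketch:

# Back out a rough per-SM transistor count from the GA102/GA104 comparison.
# 84X + 12Y + Z = 28.3e9 (GA102), 48X + 8Y + Z = 17.4e9 (GA104)
# Subtracting eliminates Z: 36X + 4Y = 10.9e9
sm_a, mc_a, t_a = 84, 12, 28.3e9   # GA102: SMs, 32-bit memory controllers, transistors
sm_b, mc_b, t_b = 48, 8, 17.4e9    # GA104

y_assumed = 150e6                  # assumed transistors per 32-bit controller (placeholder)
x = ((t_a - t_b) - (mc_a - mc_b) * y_assumed) / (sm_a - sm_b)   # per-SM estimate
z = t_a - sm_a * x - mc_a * y_assumed                           # everything else on GA102

print(f"Per-SM estimate: {x / 1e6:.0f} M transistors")
print(f"Everything else on GA102: {z / 1e9:.2f} B transistors")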

We could try to estimate die area as well from the images Nvidia has provided.
[Attachment: GA102 die shot image]

That's a 1780x2048 pixel image representing the full 628.4 mm² die. Doing some image analysis and math, the main GPC + TPC + SM region occupies 1515x1385 pixels, or about 361.7 mm². Now someone just needs to do the same for Turing and figure out how many more transistors are in each SM! (I'm too busy right now.)
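
For anyone who wants to repeat that measurement on a Turing die shot, the conversion is just a ratio of pixel areas (the pixel counts below are the ones measured above):

# Scale a measured pixel region to physical die area for the GA102 die shot.
image_px = 1780 * 2048        # full image covers the whole 628.4 mm^2 die
die_area_mm2 = 628.4
sm_region_px = 1515 * 1385    # measured GPC + TPC + SM region
sm_area_mm2 = die_area_mm2 * sm_region_px / image_px
print(f"SM region: {sm_area_mm2:.1f} mm^2 ({100 * sm_region_px / image_px:.1f}% of the die)")
# -> about 361.7 mm^2, roughly 58% of the die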
 
Hello? What caps for devs? Art is not dev work, it is the CG team's work, and games will always have higher-detail textures for PC that are different from the console files... Higher-detailed textures will exist for PC at ALL TIMES; it is just a matter of adding texture files. Even game modders already know this.

There is no CAP for DEVS. Consoles will just work at lower detail than PC.
That's assuming they'll even bother. Also remember that developers may target lower frame rates on consoles than what PC gamers want. Having more time to render your scene means more time to do things in the background, which allows for lower VRAM requirements. A cursory glance at late-release titles on the Xbox One X and PC shows no appreciable difference in image quality for AAA games.

Also note that higher texture resolution for smaller assets (i.e., character models and such) requires a higher rendering resolution to fully appreciate it. Until 4K becomes the norm in PC game land, which it looks like it may with this generation, higher-resolution textures don't make sense, as you wouldn't be able to see that detail anyway. Even if you could load higher-resolution textures and have them downsampled, that's effort that could have been used elsewhere more meaningfully.
 
That's assuming they'll even bother. Also remember that developers may target lower frame rates on consoles than what PC gamers want. Having more time to render your scene means more time to do things in the background, which allows for lower VRAM requirements. A cursory glance at late-release titles on the Xbox One X and PC shows no appreciable difference in image quality for AAA games.

Also note that higher texture resolution for smaller assets (i.e., character models and such) requires a higher rendering resolution to fully appreciate it. Until 4K becomes the norm in PC game land, which it looks like it may with this generation, higher-resolution textures don't make sense, as you wouldn't be able to see that detail anyway. Even if you could load higher-resolution textures and have them downsampled, that's effort that could have been used elsewhere more meaningfully.

You are mixing up dev work with CG work. Everything you said is wrong. The CG team will provide the texture solutions; the devs will give the CG team the requirements for each resolution and each hardware tier. And in general, CG work starts at a higher standard and then scales down...
 
Yep, and they are amazed by Apple's "magical" design too. Doesn't mean it's real. A number of Apple's products are flawed, yet they are supposed to be "the best". For example, we have bad keyboards (keygate), slow processors, bad reception, purple camera tinge, intentional software slowdowns (batterygate), circuit board failures (touchgate), and too-weak phone cases (bendgate).

Higher numbers or "special wording" are a common psychological trick used by marketing to imply something is better (subjective). But when you put it in a lab, the numbers don't pan out (objective).

Case in point: NVIDIA inflated the numbers ("twice as fast as a 2080"). That led to false beliefs about the true performance, and a lot of people were let down by the actual results of 50%-70% faster, or about 30% faster than a 2080 Ti.

Intel also implied Tiger Lake was going to blow away the 4800U. Hardly. It's a neck-and-neck tie, depending on which metric you are more interested in.

It's all just marketing. But if you blow things up too much, it will eventually backfire in your face.

Stay on the subject of 8K gaming and 8K running at 60 fps, and stop this zigzagging!

8K gaming is here, even at 60 fps. In the past, the 1080 Ti did not even reach 40 fps at 4K and was benchmarked regardless.

Tom's Hardware skipped the test to avoid showing the 10GB memory as too low... they are doing what Nvidia tells them to do, so people don't notice that the lower VRAM is just there to lower the price and crush AMD's release. Later Nvidia will release the 16GB and 20GB variants.
 
8K gaming is here, even at 60 fps. In the past, the 1080 Ti did not even reach 40 fps at 4K and was benchmarked regardless.
Citations, please?

Tom's Hardware skipped the test to avoid showing the 10GB memory as too low... they are doing what Nvidia tells them to do, so people don't notice that the lower VRAM is just there to lower the price and crush AMD's release. Later Nvidia will release the 16GB and 20GB variants.
Citations, please?
 
Stay on the subject of 8K gaming and 8K running at 60 fps, and stop this zigzagging!

8K gaming is here, even at 60 fps. In the past, the 1080 Ti did not even reach 40 fps at 4K and was benchmarked regardless.

Tom's Hardware skipped the test to avoid showing the 10GB memory as too low... they are doing what Nvidia tells them to do, so people don't notice that the lower VRAM is just there to lower the price and crush AMD's release. Later Nvidia will release the 16GB and 20GB variants.

I'm not. I'm claiming 8K gaming is bull pucky!
 
Does Windows even support 8K properly? Do any games? I thought when they did a test of 8K gaming at Tom's Hardware, there were all kinds of strange issues, NOT related to the performance of the GPU trying to render games in 8K.
 
Does Windows even support 8K properly? Do any games? I thought when they did a test of 8K gaming at Tom's Hardware, there were all kinds of strange issues, NOT related to the performance of the GPU trying to render games in 8K.

Are there any 8K gaming monitors? They've only slowly started making 4K 144Hz refresh-rate monitors over these years, and those are expensive. I wonder how much an 8K 144Hz refresh-rate monitor will cost when they bring them out in the future.
 
The 20GB 3080 and 16GB 3070 rumors are out there. We know Micron is not too far from doubling GDDR6X density next year.

Greater memory capacity without greater memory bandwidth is sort of useless for gaming purposes. The 3080 has 10 GB of VRAM and a bandwidth of 760.3 GB/s. Dividing the latter by the former, you get 76 per second. So if you touched every byte 76 times a second, you would exhaust all your bandwidth. A scene simply can't make use of that much memory if it's to render at a playable frame rate.

Incidentally, the GeForce 256 had 32 MB of VRAM and 4.8 GB/s of bandwidth. That's twice the bandwidth per unit of memory.
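
A quick sketch of that bandwidth-to-capacity ratio, using the figures quoted above:

# How many times per second you could touch every byte of VRAM if the full
# memory bandwidth were spent on nothing else.
def full_passes_per_second(bandwidth_gb_s: float, vram_gb: float) -> float:
    return bandwidth_gb_s / vram_gb

print(full_passes_per_second(760.3, 10))        # RTX 3080: ~76 passes/s
print(full_passes_per_second(4.8, 32 / 1024))   # GeForce 256: ~154 passes/s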
 
First, thanks for your efforts and hard work reviewing the card.

Now to the serious stuff:

1- No 8K benchmarks? COME ON!!! This card should be tested at 8K as well. You tested the GTX 1080 Ti at 4K, and this card is better at 8K than the 1080 Ti was at 4K. I don't care if it shows 30 fps, it should be benchmarked at 8K.

2- Why didn't you include memory usage in each benchmark? VRAM usage should be part of ANY benchmark table at ANY resolution from now on. Add it! Make it min/max memory usage!

3- You are a REVIEW site. You claim the VRAM reported as used is not the memory actually needed, and that some of it is caching. FINE, TEST IT. TESSSST IT. We won't take your word for it, and we won't take "just buy it" advice anymore. It is EASY TO TEST: you have 8GB cards, 10GB cards, and 11GB cards, so you can find the point where the game slows DOWN and YOU CAN TEST HOW MUCH VRAM IS REALLY NEEDED.

4-

No, we won't "just stop", and we won't "just buy it".

DO YOUR HOMEWORK AND TEST MEMORY USAGE, or we will move to another review site.

5- Funny how you did not mention the RTX 3070 Ti with 16GB of VRAM that was accidentally leaked in Lenovo's documents? And you still say to stop it and buy the 10GB VRAM RTX 3080?

Why test 8K when Nvidia clearly stated that the 3090 is the card designed for that task? The 3080 is aimed at 4K and Tom's team stuck to that, so well done to them! What is the point of showing the card can't run a resolution it is not designed for? Utter waste of time.

If VRAM is exceeded, the game just crashes due to memory allocation. Again, what's the point of testing that? It's not as if you can increase the memory on the card anyway. Besides, memory usage is so dependent on graphics settings.

Had Tom's team completed all these pointless tests, this review would have come out in 3 months. Just be grateful they managed to release this one in so little time.

Please feel free to move to another site, or better, create your own site that no one will read. We will continue reading Tom’s quality articles.

Again, very well done and much appreciated Tom! 👍👍👍
 
We could try to estimate die area as well from the images Nvidia has provided.
[Attachment: GA102 die shot image]

In TU102 the SMs take up roughly the same proportion of the die: ~55%.
[Attachment: TU102 die shot image (813-die-shot.jpg)]


It's interesting how we can visibly see the difference between the two processes. The Samsung-produced die is more squarish, whereas the TSMC die has a definite preferred direction.

Anyway, there don't seem to be any extra functional units in Turing that would account for the extra transistors. I think it's down to the process.

Not sure what the things around the perimeter are. They're proportionally bigger in Ampere.
 
This kind of reminds me of someone who paid a couple-hundred dollars for a 2GB SD card back when those were new, claiming they probably wouldn't ever need a higher-capacity SD card than that. : P
Subjective (graphics quality) vs. objective (storage space) is a common comparison error, but I get your meaning.

Substantial growth in the development of immersive environments (of which RTRT is of course an important part), hopefully enabled by next-gen GPUs, would then become more objective (to support your comparison), which is part of what I was trying to get at when I mentioned AI. There is no doubt Nvidia intends to continue to be a big player in that space, which means the GPU will be a truly useful GPGPU, and thus worthy of another upgrade. In that case we are certainly not at a dead end, but the onus is on the game/graphics designers to make newer hardware truly stand out, which gets more and more difficult as time goes on (i.e., one reason why the 2080 was 'meh' for many people).
 
I think the Turing CUDA cores (FP + INT datapaths) probably don't use significantly fewer transistors than the Ampere CUDA cores (FP + INT/FP datapaths). The tensor cores would be larger, but there are half as many of them. The RT cores would be larger as well. But there's also a ton of transistors going into the L1/L2 caches, which are both much larger than Turing's.

You can also get some idea of how much complexity exists outside the main processing cores by looking at GA102 and GA104. The former is 28.3 billion transistors and 628.4 mm², while the latter is 17.4 billion transistors and 392.5 mm². In terms of SM counts, GA102 has 84 and GA104 has 48. So...

X = SM size or transistors (approximate)
Y = Memory controller size (approximate)
Z = NVDEC/NVENC/etc blocks -- basically everything else (also approximate)

84X + 12Y + Z = 28.3 billion
48X + 8Y + Z = 17.4 billion
------------------------------
36X + 4Y = 10.9 billion

We could try to estimate die area as well from the images Nvidia has provided.
[Attachment: GA102 die shot image]

That's a 1780x2048 pixel image representing the full 628.4 mm² die. Doing some image analysis and math, the main GPC + TPC + SM region occupies 1515x1385 pixels, or about 361.7 mm². Now someone just needs to do the same for Turing and figure out how many more transistors are in each SM! (I'm too busy right now.)

I think they're lying to us!! I counted only 17,399,999,998 transistors! Unless I missed a couple. Hold tight, lemme start over.
 
Stay on the subject of 8K gaming and 8K running at 60 fps, and stop this zigzagging!

8K gaming is here, even at 60 fps. In the past, the 1080 Ti did not even reach 40 fps at 4K and was benchmarked regardless.

Tom's Hardware skipped the test to avoid showing the 10GB memory as too low... they are doing what Nvidia tells them to do, so people don't notice that the lower VRAM is just there to lower the price and crush AMD's release. Later Nvidia will release the 16GB and 20GB variants.
Sorry, say again? Are you... referring to the 3090? Because we really don't know how fast that will be at 8K. Saying that 8K60 is here, when the only card possibly capable of it is still a few weeks out and unbenchmarked, is outright wrong.
 
Why test 8K when Nvidia clearly stated that the 3090 is the card designed for that task? The 3080 is aimed at 4K and Tom's team stuck to that, so well done to them! What is the point of showing the card can't run a resolution it is not designed for? Utter waste of time.

If VRAM is exceeded, the game just crashes due to memory allocation. Again, what's the point of testing that? It's not as if you can increase the memory on the card anyway. Besides, memory usage is so dependent on graphics settings.

Had Tom's team completed all these pointless tests, this review would have come out in 3 months. Just be grateful they managed to release this one in so little time.

Please feel free to move to another site, or better, create your own site that no one will read. We will continue reading Tom’s quality articles.

Again, very well done and much appreciated Tom! 👍👍👍
Yep. Besides, I'm not even sure if @JarredWaltonGPU has an 8K display 🙃 🙃
 
I wonder... do we know how much space the RT cores in Turing and Ampere take up? And the Tensor cores? Because if they take up a significant amount of die space, well, I don't know about you, but I would rather Nvidia had ditched the RT cores (but maybe kept the Tensor cores, because DLSS 2.0 is legitimately useful) and replaced them with CUDA cores, I guess?
 
You can make fun of it as much as you wish, but the people who gamed at 8K in Nvidia's showcase were AMAZED by 8K gaming, like it was something they had never seen before!
Streamers tend to overreact to get views, and those streamers were specifically selected by Nvidia and invited to their headquarters for the sole purpose of providing quotes for an advertisement, and who knows what kind of payment or free hardware they got for doing so. That setup undoubtedly results in the least genuine reactions possible, and is hardly evidence of 8K gaming providing any tangible visual benefit over 4K gaming.

Plus, those games were not even running at native 8K. According to the video, they were playing Control at max settings with RT enabled, and a 3090 is not likely to manage much more than 60 fps at 1440p at those settings, let alone at nine times that resolution. They were clearly making heavy use of DLSS to upscale a game being rendered at around 1440p. So, if you want approximate performance numbers for that, you can look at the 1440p results.
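
For reference, the "nine times" figure is just the pixel-count ratio between the two resolutions:

# Pixel-count ratio between 8K and 1440p (the presumed DLSS render resolution).
pixels_8k = 7680 * 4320      # 33,177,600
pixels_1440p = 2560 * 1440   #  3,686,400
print(pixels_8k / pixels_1440p)  # -> 9.0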

as for the "need" for 8K ? well sorry , 65 inch TV with 8K is the same pixel density of 27 inch in 4K .. your point is null
4K is arguably overkill for even a 27" display though. That size is far better suited to 1440p, with less than half the pixels. And while one might potentially lean in to within a foot or so of a 27" display, it would be completely impractical to sit that close to a 65" screen. You're more likely to sit twice as far away so that the screen fills a similar field of view, at which point there are four times as many pixels within the same apparent area compared to 4K, which you are probably not going to be able to discern.
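
If you want to put rough numbers on the density-versus-viewing-distance argument, here's a minimal sketch; the 2 ft and 8 ft distances are just example values I picked, not measurements from anyone's setup:

import math

def ppi(h_px: int, v_px: int, diagonal_in: float) -> float:
    """Pixels per inch for a given resolution and diagonal size."""
    return math.hypot(h_px, v_px) / diagonal_in

def pixels_per_degree(ppi_value: float, distance_in: float) -> float:
    """Angular pixel density at a viewing distance (small-angle approximation)."""
    return ppi_value * distance_in * math.pi / 180

# 27" 4K monitor viewed from ~2 ft vs. 65" 8K TV viewed from ~8 ft (example distances)
print(pixels_per_degree(ppi(3840, 2160, 27), 24))   # ~68 pixels per degree
print(pixels_per_degree(ppi(7680, 4320, 65), 96))   # ~227 pixels per degree

Both figures are already past the commonly cited ~60 pixels-per-degree rule of thumb for 20/20 vision, which is the crux of the "you probably can't see it" argument.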
 
4K is arguably overkill for even a 27" display though. That size is far better suited to 1440p, with less than half the pixels. And while one might potentially lean in to within a foot or so of a 27" display, it would be completely impractical to sit that close to a 65" screen. You're more likely to sit twice as far away so that the screen fills a similar field of view, at which point there are four times as many pixels within the same apparent area compared to 4K, which you are probably not going to be able to discern.
Not to mention that, while we're in the minority, there's a rather significant number of us who are very comfortable with the PPI you get on a 27" screen with a "mere" 1920x1080 resolution for monitor use, or a 34" screen at 2560x1080.

My TV is a 60" 1920x1080 screen. There was no 4K at the time (or maybe it was, but it was at the level of exotica), but, given that my eyes are a 8 to 9 feet away from that screen when watching, it looks fine.

Don't get me wrong, I've been to stores and seen 4K screens, and they look crisper, but that is very likely more than just resolution, since they're not displaying 4K content. Displays have improved in quality, not JUST in resolution, since my TV's 2012 purchase date.
 
Streamers tend to overreact to get views, and those streamers were specifically selected by Nvidia and invited to their headquarters for the sole purpose of providing quotes for an advertisement, and who knows what kind of payment or free hardware they got for doing so. That setup undoubtedly results in the least genuine reactions possible, and is hardly evidence of 8K gaming providing any tangible visual benefit over 4K gaming.

Plus, those games were not even running at native 8K. According to the video, they were playing Control at max settings with RT enabled, and a 3090 is not likely to manage much more than 60 fps at 1440p at those settings, let alone at nine times that resolution. They were clearly making heavy use of DLSS to upscale a game being rendered at around 1440p. So, if you want approximate performance numbers for that, you can look at the 1440p results.


4K is arguably overkill for even a 27" display though. That size is far better suited to 1440p, with less than half the pixels. And while one might potentially lean in to within a foot or so of a 27" display, it would be completely impractical to sit that close to a 65" screen. You're more likely to sit twice as far away so that the screen fills a similar field of view, at which point there are four times as many pixels within the same apparent area compared to 4K, which you are probably not going to be able to discern.

And it's review sites' job to check whether those reactions are genuine or overreactions. My original post was about Tom's Hardware not benchmarking 8K at all. Maybe they don't have the budget for a 65-inch 8K TV?

As for 27 inches being overkill for 4K: well, there are 21-inch 4K displays as well. Even the 21-inch iMacs are 4K, so please stop; this already does not make sense.
 
Sorry, say again? Are you... referring to the 3090? Because we really don't know how fast that will be at 8K. Saying that 8K60 is here, when the only card possibly capable of it is still a few weeks out and unbenchmarked, is outright wrong.

No, I am talking about the RTX 3080, and it should be benchmarked at 8K as well. And it was Nvidia who said they reached 60 fps at 8K. I want to see the RTX 3080 at normal, medium, and high settings at 8K. I know Ultra is hard for this card at 8K, but the other detail levels should be reviewed as well.
 
Does Windows even support 8K properly? Do any games? I thought when they did a test of 8K gaming at Tom's Hardware, there were all kinds of strange issues, NOT related to the performance of the GPU trying to render games in 8K.

You can use four screens, each at 4K, on Windows, so yes, it does support 8K...
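
For what it's worth, the pixel math behind that comparison checks out; a 2x2 grid of 4K panels has exactly the pixel count of one 8K panel:

# A 2x2 arrangement of 4K displays matches a single 8K display's pixel count.
pixels_4k = 3840 * 2160
pixels_8k = 7680 * 4320
print(4 * pixels_4k == pixels_8k)  # -> True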
 
