News Nvidia GeForce RTX 3080 Founders Edition Review: A Huge Generational Leap in Performance

Nowhere near 2x the performance, either compared to the original 2080 or in price/performance. Who would have thought the marketing from nVidia would be total BS? Still, it's a solid upgrade nonetheless.
 
There was an update for MSFS2020 today. I'm wondering how much that impacts CPU performance.
I actually just tested it. The answer: no change for ultra settings that I can see.

Before:
1440p ultra 3080: avg 49.0 fps, 99th percentile min 38.8 fps
4K ultra 3080: avg 40.5 fps, 99th percentile min 31.2 fps

After:
1440p ultra 3080: avg 47.0 fps, 99th percentile min 37.9 fps
4K ultra 3080: avg 40.6 fps, 99th percentile min 32.2 fps

That's with the 9900K CPU. So 1440p performance dropped, and 4K performance didn't improve. Bah. Maybe medium or high settings perform better?
 
Hey Jared!

Having spent $950 in 2018 as an early adopter on the Alienware AW3418DW, I plan to "slum it" for at least two more years with this 3440x1440/120Hz/G-SYNC/IPS/CURVED display.

But I'm going to hold off until the end of the year to see if a "3080Ti" comes out with higher clocks and more VRAM. The 3080 would be a massive upgrade over my 1080Ti, but I know from experience that nVidia keeps the xx80ti a secret until later...

The price discrepancy of $1500 to $700 tells me there will be a $1000/$1100 3080Ti option with higher clocks and more VRAM over the 3080. Plus, I'm waiting for water block models. Been with Gigabyte for eight years, but may go EVGA this time around...
I'm not convinced we'll see an in-between 3080 Ti this year -- maybe in six to 12 months? But there was nothing between 2080 and 2080 Ti, despite a $700 to $1200 gap, until nearly a year later when 2080 Super came out. I suspect supply of GA102 is going to be so tight for the next three months at least that there's no chance to do a 20GB 3080 Ti in 2020. Maybe January, more likely some time around June next year, we'll get a product bump like 3080 Ti. If you can wait that long for improved performance and more VRAM at probably a higher price, that's fine. Maybe it will even replace the 3080 at the $700 mark, but I doubt it -- not unless Navi 21 is far better than current rumors suggest.
 

animekenji

Take the reports of double the performance of the vanilla 2080 with a grain of salt. I watched an episode of LTT yesterday where Linus analyzed that data, and the only place the 3080 delivers double the performance of a 2080 is in compute-heavy benchmarks like Blender and Cinebench, which I guess is a good thing if you need to use those apps for extended periods of time and the savings add up. In games the gains are closer to 70-80% most of the time, and are even less when compared to a 2080Ti.
 
No 8K benchmarks? COME ON!!! This card should be tested at 8K as well. You tested the GTX 1080 Ti at 4K, and this card is better at 8K than the 1080 Ti was at 4K. I don't care if it shows 30 fps; it should be benchmarked at 8K.
8K is not really practical and is extremely niche at this time. Television companies might be trying to push it on some of the latest high-end models, but it's questionable whether there's even a need for anything more than 4K in a television. Even 4K is arguably a bit overkill at typical viewing distances, and most will struggle to notice much of a difference between 1440p and 4K in games, let alone four times that resolution. Any improvements going from 4K to 8K are likely to be placebo-level, unless perhaps you are sitting within a few feet of a wall-sized display, which would make for a completely impractical viewing experience.

If you want to know how native 8K performs in the latest games, the answer should be clear, and that would be "abysmal". Games typically get around one-third the performance at 4K as they get at 1080p, assuming frame rates are not being limited by CPU performance. It's the same difference in resolution going from 4K to 8K, so best case scenario, divide the 4K numbers by three, which should give you sub-30fps frame rates in many games. And sure, the 10GB of VRAM might actually limit performance further in some titles, but even without that limitation, the performance wouldn't be good. Considering the massive difference in frame rates compared to the imperceptible difference in resolution, gaming at 8K makes no sense, and would arguably be a waste of time to even test, at least for the main review.
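To put rough numbers on that rule of thumb, here's a quick back-of-the-envelope sketch in Python; the sample 4K frame rates below are hypothetical placeholders, not figures from the review.

```python
# Back-of-the-envelope 8K projection: 8K has 4x the pixels of 4K, and the rule of
# thumb above is roughly 1/3 the frame rate for a 4x jump in pixel count
# (assuming the game is GPU-limited, not CPU-limited).
SCALING = 1 / 3

def estimate_8k_fps(fps_4k: float) -> float:
    """Crude 8K estimate from a measured 4K average frame rate."""
    return fps_4k * SCALING

# Hypothetical 4K ultra averages, purely to illustrate the arithmetic.
for title, fps_4k in {"Game A": 90.0, "Game B": 60.0, "Game C": 45.0}.items():
    print(f"{title}: {fps_4k:.0f} fps at 4K -> ~{estimate_8k_fps(fps_4k):.0f} fps at 8K")
```

Even a 90 fps result at 4K lands at roughly 30 fps, and anything slower than that starts out in the sub-30fps "abysmal" territory described above.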

A couple of sites tested PCIe 3.0 vs. 4.0 and found 4.0 to be about 1% faster. The only problem was that the test had to be done on an AMD platform, and the same sites found the AMD side to be 10% slower than Intel. People need to stop worrying about PCIe 3.0 being a bottleneck. Here's one of them.
I looked over TechPowerUp's scaling benchmarks, and they did show around a 3% difference between PCIe 3.0 and 4.0 in some titles, while there was no difference in others, and others still landed somewhere in-between. So, the exact difference will be game-dependent, not just a fixed 1%. And of course, that might have been limited by the 3900X's somewhat lower gaming performance than the 9900K at lower resolutions, though that could potentially change with the upcoming Ryzen CPUs or future Intel models. So I would say "up to a 3% difference in today's games" would be a more accurate way to describe it, even if that's still a pretty minimal amount.

The article also went on to point out in its conclusion that it's possible PCIe 4.0 could make more of a difference in future games utilizing RTX-IO and DirectStorage, once games start using that technology. The upcoming consoles will apparently be utilizing this kind of direct SSD-to-VRAM data transfer to stream in game assets efficiently, so it will likely make its way to the PC as well for games designed with those capabilities in mind. Unfortunately, there's no way to test that at this time, so that still leaves some potential PCIe 4.0 benefits up in the air.

And as for the Ryzen system being "10% slower" than the 9900K, that was at 1080p, which would be a kind of silly resolution to get a card like this for. At 4K, the 3900X was just 1% slower than the 9900K on average, and I imagine most current CPUs would perform about the same at that resolution, so either of those processors is arguably overkill.

And to be honest, the next-gen consoles are the 3080/3090's biggest competitors, hence their price points... not AMD.
But, the new consoles both use AMD GPUs. : P

This looks like an 'end-of-the-road' card for 99% of users.
If I were thinking about a 3070, I would buy the 3080 anyway... still need to look at Navi, just to be sure (and maybe price and availability issues on the 30X0 cards will be sorted by then).
This kind of reminds me of someone who paid a couple-hundred dollars for a 2GB SD card back when those were new, claiming they probably wouldn't ever need a higher-capacity SD card than that. : P

The graphics quality of games will increase to match the new hardware in the coming years. Turn on raytraced lighting effects in a game like Control, and you're already looking at below-60fps frame rates at native 1440p. And this is still only "hybrid" raytracing that's only being applied to certain effects, with full raytracing being significantly more demanding.

The same could have been said about the 1080 Ti a few years back, which was over 30% faster at 4K than the 1080 that launched for the same price just 9 months prior, and it was generally accepted as being a pretty good card for 4K. Fast forward a couple of years, and it was already getting frame rates in the 20s in a game like Control at 4K ultra settings, even without raytracing enabled. Turn that on, and it can't even maintain 30fps at 1080p.

I'm just hoping the 3090 ends up being to the 3080 what the 2080ti was to the 2080, and DOESN'T end up being what the RTX Titan was to the 2080ti.

My 4k 144hz and 1440p 240hz monitors are desperate to stretch their legs.
Don't get your hopes up. As the article points out, if you look at its specs, it shouldn't be more than 20% faster, and realistically, the typical difference is going to be even less than that. So, over double the price for maybe around 10-15% more performance at 4K is what you should be expecting from it. The 3090 is absolutely a "Titan" card, just with different branding.

And speaking of raytracing, it's a bit disappointing to see very little performance improvement over the 20-series in that area. If raytracing caused a 40% hit to performance in a particular game on Turing, then it appears to cause nearly as much of a hit on Ampere as well. The overall performance at a given price point is a decent amount higher, but the relative performance hit from enabling raytracing seems rather similar. Maybe that's something they'll address with their next-generation cards though, once raytracing has established itself as the norm for ultra graphics settings.

The same goes for efficiency, at least on these top-end cards. There's more performance, sure, but the power draw to get there has risen nearly as much, resulting in not much more than a 10% efficiency gain compared to the 2080 Ti, despite the process node shrink.

And perhaps most disappointing, this architecture seems to be designed first and foremost with mining in mind, rather than gaming. There appear to be huge gains to the mining/compute performance of these cards relative to gaming performance. The 3080 supposedly gets around 30 Tflops of FP32 compute performance, more than double that of a 2080 Ti, or close to triple that of a 2080. But as far as performance in today's games goes, even at 4K we're typically only looking at a little over 30% more performance than a 13.5 Tflop 2080 Ti, making the 3080 roughly comparable to an 18-19 Tflop Turing card. That's still a good performance difference given the price, but will these cards actually be available for their advertised prices in any capacity post-launch? If crypto-operations decide to buy them up, gamers won't be getting them at any remotely reasonable price. I get the feeling Nvidia made that design decision when seeing increased sales during the mining craze a few years back, and that the architectural change was incorporated into Ampere primarily to make the card more attractive to miners.
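To make the TFLOPS arithmetic in that last paragraph explicit, here's a minimal sketch using the rough figures quoted above (13.5 TFLOPS for the 2080 Ti, and "a little over 30%" more 4K performance for the 3080):

```python
# Rough "Turing-equivalent" throughput for the 3080, from the figures above.
TFLOPS_2080TI = 13.5        # FP32 TFLOPS cited for the 2080 Ti
GAMING_UPLIFT = 1.33        # "a little over 30%" more 4K gaming performance

turing_equivalent = TFLOPS_2080TI * GAMING_UPLIFT
print(f"3080 games like a ~{turing_equivalent:.0f} TFLOPS Turing card")  # ~18 TFLOPS
print("...despite advertising ~30 TFLOPS of FP32 compute for non-gaming workloads")
```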
 

wr3zzz

I would like to see a more seriously written article about VRAM than what's in this review. The dismissive tone that you don't need more than 10GB for textures, or should just go get the 3090, is not helpful for people who are more likely to make 4-5 year purchase decisions instead of the 2-3 year cycle of the past. This is almost as bad as the infamous "just buy it" article about the RTX 2080.

The 20GB 3080 and 16GB 3070 rumors are out there. We know Micron is not too far from doubling GDDR6X density next year. We also know that the consoles have 10GB of VRAM for the GPU, and games using 10GB of VRAM on consoles will definitely need more on PC, often a lot more for the PC ports to shine. The new game engines will rely more on geometry and less on textures than what we are using now, which I read allows visuals to scale better with added VRAM. These are legitimate concerns for people on the fence about whether to wait a few more weeks or a few more months on a purchase that needs to last 4-5 years.
 

m3city

Hi, long time TH reader, first time poster here.
So, 32% more FPS using 25% more power? That is an upgrade, but not that much of one. The power draw is horrendous for my personal taste; I will never put 330 W of heat inside my desktop.
I've been reading every news leak and heads-up about Ampere, and was hoping for something AMD-like: a real step up in performance per watt. Like when I had an A6-3670 APU (95W) and now have a 2400G (65W), and CPU benchmarks have gone up 4x, gaming ~2-3x. I do realize that it has been 7 years and a few generations between those APUs, and that AMD was very outdated with its first-gen APU. Anyway, I don't consider the 3080 an upgrade over the 2080 today. It would be, if power draw stayed the same and performance went up.
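For what it's worth, the performance-per-watt arithmetic on those two numbers works out like this (a quick sketch using the 32% and 25% figures from the post above):

```python
# Perf-per-watt change implied by ~32% more FPS at ~25% more power (3080 vs. 2080).
perf_ratio = 1.32    # relative performance
power_ratio = 1.25   # relative power draw

efficiency_gain = perf_ratio / power_ratio - 1
print(f"Performance per watt improves by about {efficiency_gain:.1%}")  # ~5.6%
```

Either way, it's a modest efficiency gain for a node shrink, which is the point being made.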
 
We also know that the consoles have 10GB of VRAM for the GPU...
Actually, the PS5 and Series X have 16GB of shared memory to be used as both VRAM and system RAM combined (or 10GB for the Series S). It's not really an apples-to-apples comparison, but the PCs these cards are going to be paired with will typically have an additional 16-32GB of system RAM, meaning they will have access to significantly more total memory than those upcoming consoles, even if some of it is not as fast. The increase in RAM of the new consoles is actually quite low compared to prior generations, with the idea apparently being that data can be streamed directly to VRAM off fast SSD storage to reduce the amount that needs to be held in memory at any given time. And we will likely see that coming to PC games as well, with Microsoft already announcing their DirectStorage API, which Nvidia's RTX-IO will apparently make use of.

Sure, we may eventually see 10GB of VRAM limiting performance in some games at 4K ultra settings, but realistically these cards are probably not going to be running many demanding new releases well at 4K ultra settings within a few years. Things like raytracing performance are likely to be more of a limiting factor there. Already, the 3080 can't stay above 60fps in a game like Control at 4K ultra with raytracing enabled, even with DLSS upscaling from 1440p, and more VRAM isn't likely to make a difference there.
 

wr3zzz

Actually, the PS5 and Series X have 16GB of shared memory to be used as both VRAM and system RAM combined (or 10GB for the Series S). It's not really an apples-to-apples comparison, but the PCs these cards are going to be paired with will typically have an additional 16-32GB of system RAM, meaning they will have access to significantly more total memory than those upcoming consoles, even if some of it is not as fast. The increase in RAM of the new consoles is actually quite low compared to prior generations, with the idea apparently being that data can be streamed directly to VRAM off fast SSD storage to reduce the amount that needs to be held in memory at any given time. And we will likely see that coming to PC games as well, with Microsoft already announcing their DirectStorage API, which Nvidia's RTX-IO will apparently make use of.

Sure, we may eventually see 10GB of VRAM limiting performance in some games at 4K ultra settings, but realistically these cards are probably not going to be running many demanding new releases well at 4K ultra settings within a few years. Things like raytracing performance are likely to be more of a limiting factor there. Already, the 3080 can't stay above 60fps in a game like Control at 4K ultra with raytracing enabled, even with DLSS upscaling from 1440p, and more VRAM isn't likely to make a difference there.

This is why I want to see a proper analysis of VRAM needs in the next-gen console era, rather than another "just buy it" article. Last-gen consoles were 8GB unified-memory devices, and I think some said half of that was for the GPU. To get console ports to max out the eye candy on PC, we needed 8GB video cards. So it's either 1:2 for console GPU memory to PC video card memory, or 1:1 for console unified memory to PC video card memory. Either way, that math says 16-20GB should be the baseline for a flagship card to jump-start this era. I remember how quickly Nvidia ramped up VRAM on its flagship cards from 2014-2016 during the last console launch. It was either Nvidia having to play catch-up, or accelerated product obsolescence by design. Either way, good journalism should inform its readers how this will play out.

Ray tracing not only puts a big headwind on performance, but also changes how we need to look at VRAM usage. With ray tracing, developers will start using more geometry in place of textures, and my early reading is that geometry could need more memory than textures as you scale up resolution.
 

nofanneeded

8K is not really practical and is extremely niche at this time. Television companies might be trying to push it on some of the latest high-end models, but it's questionable whether there's even a need for anything more than 4K in a television. Even 4K is arguably a bit overkill at typical viewing distances, and most will struggle to notice much of a difference between 1440p and 4K in games, let alone four times that resolution. Any improvements going from 4K to 8K are likely to be placebo-level, unless perhaps you are sitting within a few feet of a wall-sized display, which would make for a completely impractical viewing experience.

Does not matter; the same could be said about 4K resolution a few years ago, and 4K was still reviewed.

as for the "need" for 8K ? well sorry , 65 inch TV with 8K is the same pixel density of 27 inch in 4K .. your point is null
 

Soaptrail

I actually just tested it. The answer: no change for ultra settings that I can see.

Before:
1440p ultra 3080: avg 49.0 fps, 99th percentile min 38.8 fps
4K ultra 3080: avg 40.5 fps, 99th percentile min 31.2 fps

After:
1440p ultra 3080: avg 47.0 fps, 99th percentile min 37.9 fps
4K ultra 3080: avg 40.6 fps, 99th percentile min 32.2 fps

That's with the 9900K CPU. So 1440p performance dropped, and 4K performance didn't improve. Bah. Maybe medium or high settings perform better?
Maybe the patch helped Ryzen owners, since per Techspot:

Unfortunately the performance gains in Microsoft Flight Simulator 2020 at 1440p are very weak and this is largely due to our choice to use the 3950X, the game just doesn’t utilize Ryzen processors very well right now and is in desperate need of low-level API support.
 
Does not matter; the same could be said about 4K resolution a few years ago, and 4K was still reviewed.

as for the "need" for 8K ? well sorry , 65 inch TV with 8K is the same pixel density of 27 inch in 4K .. your point is null

Given a human with perfect 20/20 vision, your angular resolution is about 1 arc minute (~0.02 degrees).

Your 65" monitor is 56.65" wide. On an 8K display, each subpixel is about 0.00245" wide.

For a subpixel to subtend 0.02 degrees, the viewing distance works out to roughly 0.00245" / sin(0.02°) ≈ 7.02".

So are you telling me you sit 7" from your 65" 8K gaming monitor?

Blah ha ha ha. Hello Wall Eye
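For anyone who wants to check those numbers, here's the same calculation as a small Python sketch, using this post's own assumptions (an angular resolution rounded to 0.02 degrees, and sub-pixel rather than full-pixel width), both of which are debatable:

```python
import math

# Reproduce the viewing-distance estimate above, under the post's assumptions.
display_width_in = 56.65                 # width of a 65" 16:9 panel, in inches
horizontal_pixels = 7680                 # 8K
subpixel_width_in = display_width_in / horizontal_pixels / 3   # ~0.00246"

angular_resolution_deg = 0.02            # the post's rounding of ~1 arc minute

# Distance at which a single subpixel subtends the angular resolution limit.
distance_in = subpixel_width_in / math.sin(math.radians(angular_resolution_deg))
print(f"Subpixel width: {subpixel_width_in:.5f} in")
print(f"Viewing distance to resolve it: {distance_in:.1f} in")   # ~7 inches
```

Using a full pixel and the exact 1 arc-minute figure instead pushes the distance out to roughly two feet, but the underlying point -- that you have to sit unusually close to a 65" panel to resolve 8K -- still holds.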
 

Chung Leong

I'm not convinced we'll see an in-between 3080 Ti this year -- maybe in six to 12 months? But there was nothing between 2080 and 2080 Ti, despite a $700 to $1200 gap, until nearly a year later when 2080 Super came out. I suspect supply of GA102 is going to be so tight for the next three months at least that there's no chance to do a 20GB 3080 Ti in 2020.

Nvidia seems to be getting phenomenal yields out of Samsung 8LPP. If the numbers at TechPowerUp can be believed, the 3070 will be sporting 96% of the GA104's available 6144 cores. Since this is a mainstream part, a large portion of the chips off the wafer are probably meeting that threshold. Assuming similar yields on the GA102, Nvidia is throwing away a lot of good cores by shipping the 3080 with just 81% of the GA102's 10752 cores. Meanwhile, there won't be too many sub-par dies that need to be sold as a 3070 Super. It would be more sensible for Nvidia to cover the gap between the 3070 and the 3080 by lowering the price of the latter (by 60 bucks, let's say) and launching an 11GB 3080 Super at a higher price point (say, 60 bucks more) to capture the profits currently left on the table.

Now doesn't seem to be the right time for a 3080 Ti. It makes more sense to wait for prices of 8K displays and GDDR6X memory to drop. Nvidia probably wants something it can hose Intel with, too, when Xe HPG finally emerges from the woodwork.
 

Shadowclash10

8K? Who owns that? #TotalWasteOfTime
Makes me wonder... at some point I'll upgrade from 1440p to 4K, obviously, but then what? Okay, 4K 144Hz ultra. Then what? I don't care about 240Hz -- the only people who do are esports players, and it'll be easier to achieve 4K 240Hz in esports titles than in more demanding games. 8K is useless for most of us. I'm not saying we're gonna be done after 4K, just wondering what's next.
 
I would like to see a more seriously written article about VRAM than what's in this review. The dismissive tone that you don't need more than 10GB for textures, or should just go get the 3090, is not helpful for people who are more likely to make 4-5 year purchase decisions instead of the 2-3 year cycle of the past. This is almost as bad as the infamous "just buy it" article about the RTX 2080.

The 20GB 3080 and 16GB 3070 rumors are out there. We know Micron is not too far from doubling GDDR6X density next year. We also know that the consoles have 10GB of VRAM for the GPU, and games using 10GB of VRAM on consoles will definitely need more on PC, often a lot more for the PC ports to shine. The new game engines will rely more on geometry and less on textures than what we are using now, which I read allows visuals to scale better with added VRAM. These are legitimate concerns for people on the fence about whether to wait a few more weeks or a few more months on a purchase that needs to last 4-5 years.
We can make guesses about the future of games all we want. The facts are:

  1. Ultra quality is usually a placebo-level improvement in visuals vs. high quality. I've looked at dozens of new games every year where ultra quality performs 20% slower, and in screenshots -- and especially in motion -- I'd never notice the difference without spending serious effort.
  2. Consoles will be pushing games to use up to 10GB VRAM (Xbox Series X), or 16GB (PS5 shared). That's effectively the cap for devs.
  3. PCs don't need more VRAM than consoles to look better -- they only need more VRAM if they're using higher quality assets. But going higher quality than 10GB right now goes back to point 1.
  4. You have no other options right now. 10GB 3080 or 24GB 3090 -- 16GB RX 6900 in two months probably.
Over a five year span, will 10GB vs. 16GB matter? A bit, maybe. 10GB vs. 24GB will be in the same boat. But we are rapidly approaching a lot of plateaus in the PC and gaming world. I've been running a 16GB PC since at least 2013. Seven years later, 16GB is still fine. There are a few instances where 32GB helps a bit, but we haven't come close to doubling our system RAM requirements. GPUs are going to hit that same point of diminishing returns, and in fact arguably already have.

8GB isn't going to be enough VRAM forever, but it's been sufficient for 99% of use cases for at least five years. Actually, it was overkill five years ago, became the recommended amount three years ago, and is still sufficient today. At some point in the next five years, yes, 8GB is going to end up in the "not enough memory" category -- or rather, not enough memory for ultra textures and geometry. And at that point, there will be 16GB cards, and probably 32GB or even 48GB cards will have replaced the 3090 at the extreme overkill level.
 
Will someone make a post about the 3080 20GB version? It's already confirmed...
I would say "confirmed" rather than unequivocally saying it's happening. I could definitely see Nvidia doing a 20GB 3080 card (probably not 3080 Ti or 3080 Super) in October or November and charging $100-$200 extra. I mean, we'll see. It's still going to be in that <10% faster than 3080 range.
 
Nvidia seems to be getting phenomenal yields out of Samsung 8LPP. If the numbers at TechPowerUp can be believed, the 3070 will be sporting 96% of the GA104's available 6144 cores. Since this is a mainstream part, a large portion of the chips off the wafer are probably meeting that threshold. Assuming similar yields on the GA102, Nvidia is throwing away a lot of good cores by shipping the 3080 with just 81% of the GA102's 10752 cores. Meanwhile, there won't be too many sub-par dies that need to be sold as a 3070 Super. It would be more sensible for Nvidia to cover the gap between the 3070 and the 3080 by lowering the price of the latter (by 60 bucks, let's say) and launching an 11GB 3080 Super at a higher price point (say, 60 bucks more) to capture the profits currently left on the table.

Now doesn't seem to be the right time for a 3080 Ti. It makes more sense to wait for prices of 8K displays and GDDR6X memory to drop. Nvidia probably wants something it can hose Intel with, too, when Xe HPG finally emerges from the woodwork.
No company reveals actual yields these days. RTX 3090 is using 82 of 84 SMs for GA102 -- that's 97.6% enabled! But then RTX 3080 is using 68 of 84 (81% enabled). That's a big drop. All indications are that RTX 3090 supply is extremely limited. Not enough chips meet the requirements. RTX 3070 is most of a GA104, true, but it's also a smaller chip which inherently improves yields ... and probably half or more of the chips are still going to be binned as RTX 3060 Ti or whatever.
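The enabled-unit percentages being thrown around in these two posts are easy to double-check (a quick sketch; the 3070's 5888-core figure is the commonly reported spec rather than something stated in this thread):

```python
# Fraction of units enabled per SKU, from the counts discussed above.
skus = {
    "RTX 3090 (GA102 SMs)":   (82, 84),
    "RTX 3080 (GA102 SMs)":   (68, 84),
    "RTX 3070 (GA104 cores)": (5888, 6144),  # commonly reported core count
}

for name, (enabled, total) in skus.items():
    print(f"{name}: {enabled}/{total} = {enabled / total:.1%} enabled")
# -> 97.6%, 81.0%, 95.8%, matching the figures quoted above
```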

I think a 3080 with 20GB is far more likely than yet another 11GB card. Either is possible of course, as is 12GB. What will it be called, and how much will it cost? That last one is the real question. Probably $800 minimum, and possibly as much as $1000.
 
I would like to see a more seriously written article about VRAM than what's in this review. The dismissive tone that you don't need more than 10GB for textures, or should just go get the 3090, is not helpful for people who are more likely to make 4-5 year purchase decisions instead of the 2-3 year cycle of the past. This is almost as bad as the infamous "just buy it" article about the RTX 2080.

The 20GB 3080 and 16GB 3070 rumors are out there. We know Micron is not too far from doubling GDDR6X density next year. We also know that the consoles have 10GB of VRAM for the GPU, and games using 10GB of VRAM on consoles will definitely need more on PC, often a lot more for the PC ports to shine. The new game engines will rely more on geometry and less on textures than what we are using now, which I read allows visuals to scale better with added VRAM. These are legitimate concerns for people on the fence about whether to wait a few more weeks or a few more months on a purchase that needs to last 4-5 years.
Even if Jarred or anyone else put VRAM consumption in their reviews, people would just misinterpret it, because VRAM usage as reported by most apps is vague. Usage is reported as the total committed, not what's actually in use, unlike how system RAM is reported. In fact, in Task Manager, the value of "Dedicated Video Memory" is simply the amount reserved for use, not the amount actually in use (see https://devblogs.microsoft.com/directx/gpus-in-the-task-manager/).

We simply don't have the means to definitively profile how much VRAM a game actually needs -- by "actually needs", I mean the point at which VRAM pressure is high enough that appreciable performance degradation (say 10%) happens. At best, maybe we could take something like the Unreal SDK, load up a high-quality sample, and profile that. But that's like saying 3DMark is how all games will behave.

Also, DirectStorage (RTX IO) may reduce VRAM pressure, since the GPU can load assets directly. But of course, that depends on whether developers even use it.

EDIT: I misinterpreted what the Microsoft devblog said was being reserved, since I saw this further down:
The amount of dedicated memory under the performance tab represents the number of bytes currently consumed across all processes, unlike many existing utilities which show the memory requested by a process.
So while you can isolate how much memory is being used by a process, it's only for an instant; you have to use PerfMon to gather metrics over time.

Although there's still the point that, if we dig further into application memory usage, what these utilities report may not reflect what's actually going on. After all, a leaky application isn't really using all of the bytes the utility reports.
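As a rough illustration of "gathering metrics over time", something like NVML's Python bindings can log device-level VRAM allocation while a game runs. This is only a sketch, assuming the pynvml package is installed, and it reports memory allocated on the GPU as a whole -- which, per the discussion above, is still not the same thing as what a game actually needs:

```python
import time
import pynvml  # assumes the nvidia-ml-py / pynvml package is installed

# Log device-level VRAM allocation once per second.
# Note: this is memory allocated on the GPU overall, not a game's true working set.
pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)   # first GPU

try:
    while True:
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"VRAM allocated: {mem.used / 2**30:.2f} / {mem.total / 2**30:.2f} GiB")
        time.sleep(1.0)
except KeyboardInterrupt:
    pynvml.nvmlShutdown()
```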
 

nofanneeded

Given a human with perfect 20/20 vision, your angular resolution is about 1 arc minute (~0.02 degrees).

Your 65" monitor is 56.65" wide. On an 8K display, each subpixel is about 0.00245" wide.

For a subpixel to subtend 0.02 degrees, the viewing distance works out to roughly 0.00245" / sin(0.02°) ≈ 7.02".

So are you telling me you sit 7" from your 65" 8K gaming monitor?

Blah ha ha ha. Hello Wall Eye

You can make fun of it all you want, but the people Nvidia showed gaming at 8K were AMAZED by it, like it was something they had never seen before!

https://www.youtube.com/watch?v=09D-IrammQc
 

Chung Leong

Not enough chips meet the requirements. RTX 3070 is most of a GA104, true, but it's also a smaller chip which inherently improves yields ... and probably half or more of the chips are still going to be binned as RTX 3060 Ti or whatever.

An interesting thing about Ampere is how it has fewer transistors per core than you'd expect from a simple extrapolation from the previous gen. The 3080 uses 28.3 billion transistors to implement 8704 shader cores, so about 3.3 million per core. The 2080 uses 13.6 billion transistors to implement 2944 cores, so about 4.6 million per core. The calculation is rough, of course, as there are other functional units on the die, and those were beefed up in Ampere. And we know the 3080 has a couple thousand cores' worth of dead transistors.

Now if we look at Team Red's efforts, we see the numbers moving in the opposite direction gen-to-gen: the 5700 XT is about 4.0 million per core, while the 590 is about 2.5 million.

My hypothesis is that TSMC's process is actually inferior to Samsung's when it comes to building GPUs. Restrictive design rules force you to use more transistors to implement a given function, and the more transistors you have, the higher the chance one of them will be bad.
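For reference, the transistors-per-core figures being compared come from simple division; the Nvidia counts are from the post above, while the AMD transistor counts (10.3 billion for Navi 10, 5.7 billion for Polaris) are the commonly cited die specs and should be treated as assumptions here:

```python
# Transistors per shader core, using the counts discussed above.
gpus = {
    "RTX 3080 (GA102)":      (28.3e9, 8704),
    "RTX 2080 (TU104)":      (13.6e9, 2944),
    "RX 5700 XT (Navi 10)":  (10.3e9, 2560),   # commonly cited transistor count
    "RX 590 (Polaris)":      (5.7e9, 2304),    # commonly cited transistor count
}

for name, (transistors, cores) in gpus.items():
    print(f"{name}: {transistors / cores / 1e6:.1f}M transistors per core")
# -> ~3.3M, ~4.6M, ~4.0M, ~2.5M, the rough ratios discussed in the post above
```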
 

nofanneeded

  1. Consoles will be pushing games to use up to 10GB VRAM (Xbox Series X), or 16GB (PS5 shared). That's effectively the cap for devs.

Hello? What cap for devs? Art is not dev work, it is CG team work, and games will always have higher-detail textures for PC that are different from the console files... Higher-detail textures will exist for PC ALL THE TIME; it is just a matter of added texture files. Even game modders know this already.

There is no CAP for DEVS. Consoles will just work at lower detail than PC.