News Nvidia GeForce RTX 3080 Founders Edition Review: A Huge Generational Leap in Performance

Thanks for the interest and the review!

What is 4K 120Hz 4:4:4 gaming like? It's never been possible before.

What is Variable Refresh Rate like? It's brand new.

What's Auto Low Latency Mode like? Also brand new.

Is Quick Media Switching any good?

What's Quick Frame Transport like?

To be fair, there are not many HDMI 2.1 displays available yet, but I own one and am super interested in what it can do! This GPU is the first opportunity to find out.
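For context on why those modes need HDMI 2.1 at all, here's a rough back-of-the-envelope bandwidth check (active pixels only, ignoring blanking and DSC, so real-world requirements run a bit higher):

```python
# Rough uncompressed bandwidth for a video mode (active pixels only; real signals
# add blanking overhead, so actual requirements are somewhat higher).
def video_gbps(width, height, refresh_hz, bits_per_channel=10, channels=3):
    return width * height * refresh_hz * bits_per_channel * channels / 1e9

needed = video_gbps(3840, 2160, 120)   # 4K 120Hz 4:4:4 at 10-bit color
print(f"4K120 4:4:4 10-bit: ~{needed:.1f} Gbps of pixel data")
# ~29.9 Gbps, which is more than HDMI 2.0's ~14.4 Gbps of usable data rate
# but fits within HDMI 2.1's ~42.7 Gbps (48 Gbps link with 16b/18b coding).
```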

I was wondering the same thing. It was big news when they first started talking about Turing cards being able to be used with some TVs. Sadly, a lot of that info turned out to be false - at the beginning they named a number of Samsung TVs with VRR that would work with the Turing cards, only to retract it a couple of months later, after many, including myself, had bought the TV. So I'm curious if these new ones will work. They have HDMI 2.1, so they should. It should be tested, and properly this time.
Same goes for the low latency mode they've hyped up, and all the other stuff. It's great that these cards seem to be pretty fast, and I'm definitely upgrading. I just don't know which one yet, and I still need proper info about these other technologies.

I was also wondering if there might be any difference between PCIe Gen 3 and PCIe Gen 4. That's something else that's big to consider nowadays, especially Intel vs AMD.
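If you just want to confirm what link your own card actually negotiated before worrying about Gen 3 vs Gen 4 scaling, nvidia-smi can report it. A minimal sketch, assuming nvidia-smi is on the PATH and the driver exposes these query fields (recent drivers do; older ones may differ):

```python
# Report the currently negotiated PCIe generation and lane width via nvidia-smi.
import subprocess

out = subprocess.run(
    ["nvidia-smi",
     "--query-gpu=name,pcie.link.gen.current,pcie.link.width.current",
     "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
).stdout.strip()
print(out)   # e.g. "NVIDIA GeForce RTX 3080, 4, 16"
# Note: the link can downshift at idle to save power, so check while the GPU is loaded.
```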
 
I don't have a problem with the new design for the FE cards; it looks kind of like the more recent Radeon cards, which I love aesthetically. However, if I were upgrading to Ampere, I wouldn't want an FE card simply because there's too much silver. What can I say, I care about looks 😛
 
I forgot which reviewer said it, but that silver is actually bronze.

EDIT: Correction. Anodized bronze.
 
Yup, it's not silver. The 2080 was silver. The 3080 is more bronze or pewter.

 
Cut them a break. It takes a lot of time to do benchmarks properly. They can't cover everything in that short amount of time.

I will cut the workers a break, and I already saw they are going to add to the review. But this is a business, and the business owners are slowly killing this site. I mean, if you want a PC case, do you use the reviews here or another site like GamersNexus, where they do many more reviews?
 
What about temperature? Is the fan enough for running those games in the long term? I understand that the fan was enough for the benchmarks?
Under high load, in an open bench setup, it did not go above 74C at the default fan curve. With a custom fan curve (52% fan speed instead of 40%), it did not go above 61C.
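For anyone who wants to see how the card behaves in their own case rather than on an open bench, a minimal logging loop like the sketch below is enough to watch temps, fan speed, and clocks over a play session (assuming the driver's nvidia-smi tool is installed and on the PATH):

```python
# Minimal temperature/fan logger using nvidia-smi.
# Run it alongside a game or stress test to watch how the fan curve behaves.
import subprocess
import time

FIELDS = "temperature.gpu,fan.speed,clocks.current.graphics,power.draw"

for _ in range(60):   # roughly five minutes at 5-second intervals
    sample = subprocess.run(
        ["nvidia-smi", f"--query-gpu={FIELDS}", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(time.strftime("%H:%M:%S"), sample)   # e.g. "74, 40 %, 1905 MHz, 315.20 W"
    time.sleep(5)
```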
 
I will cut the workers a break, and I already saw they are going to add to the review. But this is a business, and the business owners are slowly killing this site. I mean, if you want a PC case, do you use the reviews here or another site like GamersNexus, where they do many more reviews?

You also have to consider that not every site is privy to getting all the exact same products to review at the exact same time, in the same quantity or of the same batch. There are many variables which go into what products are reviewed by each site.

As someone who reviews CPU cooling for Tom's, I also see the differences in what gets covered across many online venues. In fact, Tom's was one of only two US sites that covered the experimental Threadripper thermosiphon a while back. Jayztwocents and GamersNexus didn't get one... but I did.
 
I will cut the workers a break, and I already saw they are going to add to the review. But this is a business, and the business owners are slowly killing this site. I mean, if you want a PC case, do you use the reviews here or another site like GamersNexus, where they do many more reviews?
As convenient as it would be to have one site for everything, the reality is there isn't one.

And honestly, I think that's better for everyone.
 
This looks like an 'end-of-the-road' card for 99% of users.
If I were thinking about a 3070, I would buy the 3080 anyway... I still need to look at Navi, just to be sure (and maybe the price/availability issues on the 30X0 cards will be sorted out by then).

If native 8K rendering (not upscaling) ever becomes a thing, then sure, buy a next-gen card... but I assume at that point we are talking about gaming on an 80+ inch panel. Right... but maybe we will make room (to do that in comfort).

Now if the AI performance is ever fully utilized for future games, then we can talk bigger/better cards, but going for a 3090 at this point (even for 4K) is likely not that.

Edit [July 3rd 2021]: This is not proof of the above, but it does support my line of thinking:
https://www.youtube.com/watch?v=_s9EFWjM6bg
 
About that part where Jarred discussed the memory overclock:

"Remember that bit about EDR we mentioned earlier? It works. 1200 MHz appeared stable, but performance was worse than at stock memory clocks. I started stepping down the memory overclock and eventually ended up at 750 MHz, yielding an effective speed of 20.5 Gbps."

From what I understand, EDR is a feature that should have been built in since the RTX 20 series - is that correct? I have an RTX 2060, and since I learned that EDR can somewhat slow down performance when a memory overclock is applied, I wanted to validate where my performance sweet spot is using Unigine Superposition.

I attempted to bench my card at stock frequency first, then applied a 200 MHz increment for each subsequent test. Each time, the score kept going higher until I reached my stability limit of 1100 MHz. This result is a bit unexpected: I thought that at some point the score would start going down after passing my sweet spot, but that did not happen.

I'm trying to figure out whether using Unigine Superposition was a bad idea and I need another benchmark better suited to stressing the memory, or whether it's possible that the EDR feature just didn't kick in during testing?
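For what it's worth, here is roughly how a sweep like that can be tabulated to spot EDR eating into performance: the telltale is a step where a higher offset scores lower than the one before it. The offsets and scores below are invented placeholders, not measurements:

```python
# Tabulate a memory overclock sweep and flag any step where a higher offset
# scores lower than the previous one, the usual sign that error detection
# and retry (EDR) is quietly costing throughput.
# The offsets and scores here are invented placeholders, not measurements.
runs = [
    (0, 10000),
    (200, 10160),
    (400, 10310),
    (600, 10430),
    (800, 10510),
    (1000, 10380),   # hypothetical regression despite the higher clock
]

best_offset, best_score = max(runs, key=lambda r: r[1])
print(f"Best score {best_score} at +{best_offset} MHz")

for (prev_off, prev_score), (off, score) in zip(runs, runs[1:]):
    if score < prev_score:
        print(f"+{off} MHz scored below +{prev_off} MHz: likely EDR/retries, back off the clock")
```

If the score climbs all the way to the crash point, as in the sweep described above, this check never fires, which would suggest either that EDR isn't engaging or that the benchmark isn't memory-bound enough to show it.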
 
I don't know if EDR was fully supported on Turing. I seem to recall hearing about it ... but I never had a clear instance where a higher memory clock dropped performance. It was always either crash or success before. Ampere is the first time I saw a very clear drop in perf while supposedly hitting higher speeds. Anyway, YMMV, but I think most GDDR6 Turing cards could do close to 16 Gbps, while Ampere looks like GDDR6X will do 20.5 Gbps.
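For reference, the arithmetic behind those effective speeds is simple if you assume the memory offset counts double toward the effective data rate, which is what the figures quoted above imply (19 Gbps base plus a +750 MHz offset giving 20.5 Gbps); other tools or memory types may count the offset differently:

```python
# Convert an Afterburner-style memory offset into an effective data rate.
# The factor of 2 is inferred from the figures quoted above; it is an
# assumption, not a documented rule for every tool and memory type.
def effective_gbps(base_gbps, offset_mhz):
    return base_gbps + 2 * offset_mhz / 1000

print(effective_gbps(19.0, 750))    # RTX 3080 GDDR6X: 20.5, as in the review
print(effective_gbps(14.0, 1000))   # a 14 Gbps GDDR6 Turing card pushed to 16.0
```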
 
Alright, thanks for the answer!
 
I understand, but most people game on HDTVs... not monitors. I know very few people who even own high-powered gaming PCs. They own consoles. These GPUs, along with the upcoming consoles, should definitely focus on HDTV compatibility as well, as it's a huge selling point. Who will review the consoles when they drop? And to be honest, the next-gen consoles are the 3080/3090's biggest competitors, hence their price points... not AMD. Even IGN has a 3080 review up. Gaming is gaming, and HDTVs are a HUGE part of that equation.
I highly doubt most people (PC) game on an HDTV!

I think >95% of console gamers game on their TVs, and the vast majority of PC gamers game on monitors/laptops. IMHO there are probably slightly more PC gamers who game on TVs than console gamers who game on monitors. TH is, guess what, a PC site. The only real reason they are concerned with consoles is that consoles affect the technologies we get on PC, and a small number of us are console gamers. But really, stretching it to TVs is too far.

Really? Really? Anyone in the market for a 3000-series card is not gonna replace their rig with a console. Budget PC gamers on old budget cards or something like a 1650, sure. Although likely not. But if you are considering a new >$500 GPU, you're not gonna move to consoles. You might buy a console for exclusives, but not as your primary platform. How is a console priced somewhere from $299-$499 a competitor to a GPU (note, one component) priced at $699+ or $1,499+? Even someone considering a 3070, at $499+, isn't buying a PS5 or Xbox Series S/X.
 
Because it's moot and only useful as a data point for people to misinterpret.

Using something like GPU-Z or MSI Afterburner isn't a good idea because they report VRAM usage in its entirety, meaning it's the game + whatever everything else is using. Windows already uses a good 500MB on my system; it could be different for anyone else, and it could change over time. I know of a method to gather an individual app's VRAM usage that involves using PerfMon (it's a built-in Windows tool), but it's a pain in the ass to gather. The tool records VRAM usage by PID, which changes every time the game is launched.

In the end though, it's as Jarred said: an app may have more VRAM allocated to it than necessary but not actually use it. Various games already fill up VRAM yet suffer no real performance degradation for it (FFXV and Call of Duty come to mind). And I'm almost certain that not everything in VRAM is actually necessary to render a frame.

Although if @JarredWaltonGPU is interested, I did find an app that tries to allocate a bunch of VRAM and keep itself in there if he wants to go down this rabbit hole.
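For anyone who does want to go the PerfMon route without fighting the changing PID, here's a Windows-only sketch that reads the same "GPU Process Memory" counters but looks up the game's PID first. The game name is a made-up example, and the counter and instance names are taken from Windows 10's GPU counters, so they may differ on other builds:

```python
# Windows-only sketch: read per-process dedicated VRAM from the "GPU Process Memory"
# performance counters PerfMon exposes, looking up the game's PID automatically so
# the PID changing every launch stops being a pain.
import csv
import io
import subprocess

GAME_EXE = "witcher3.exe"   # hypothetical example; substitute the game you're measuring

# Find the PID of the running game.
tasks = [
    line for line in subprocess.run(
        ["tasklist", "/FI", f"IMAGENAME eq {GAME_EXE}", "/FO", "CSV"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    if line.startswith('"')
]
pid = next(csv.reader(io.StringIO(tasks[1])))[1]

# Take one sample of every process's dedicated VRAM usage.
sample = subprocess.run(
    ["typeperf", r"\GPU Process Memory(*)\Dedicated Usage", "-sc", "1"],
    capture_output=True, text=True, check=True,
).stdout
rows = [line for line in sample.splitlines() if line.startswith('"')]
reader = csv.reader(io.StringIO("\n".join(rows[:2])))
headers, values = next(reader), next(reader)

for name, value in zip(headers, values):
    if f"pid_{pid}_" in name and value.strip():   # instance names embed the PID
        print(f"{GAME_EXE} (pid {pid}): {float(value) / 2**20:.0f} MiB dedicated VRAM")
```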

Stop repeating what is fed to you. I explained in my reply that you CAN, as a reviewer, KNOW the exact VRAM needed by using 8GB, 10GB, and 11GB cards, seeing where the game's fps slows down, and seeing the REAL VRAM USAGE.

When the FPS goes down from 80 to 15 fps, that's the memory all used up, and it can be tested by using another card at the same spot and seeing whether the speed falls off there or not.

It is easy to test.

But the site does not want to make Nvidia angry at them. People have the right to know if 10GB is enough or not, from real tests, not just words.
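In fairness, the comparison being demanded here is easy to sketch, even if gathering clean numbers is the hard part: log average fps per test segment on two cards that differ mainly in VRAM and flag where the smaller one collapses. All figures below are invented placeholders, not measurements:

```python
# Sketch of the comparison being asked for: run the same scene on two cards that
# differ mainly in VRAM, record average fps per test segment, and flag segments
# where the smaller card collapses while the bigger one holds up.
# All numbers are invented placeholders, not measurements.
segments = ["city", "forest", "boss fight"]
fps_8gb  = [82, 79, 17]    # hypothetical results from an 8GB card
fps_11gb = [80, 78, 74]    # hypothetical results from an 11GB card

for name, small, big in zip(segments, fps_8gb, fps_11gb):
    if small < 0.5 * big:  # rough threshold: under half the bigger card's fps
        print(f"{name}: {small} vs {big} fps, looks like a VRAM wall on the smaller card")
    else:
        print(f"{name}: {small} vs {big} fps, no sign of running out of VRAM")
```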
 
First, thanks for your efforts and hard work reviewing the card.

Now to the serious stuff:

1- No 8K benchmarks? COME ON!!! This card should be tested at 8K as well. You tested the GTX 1080 Ti at 4K, and this card is better at 8K than the 1080 Ti was at 4K. I don't care if it shows 30 fps; it should be benchmarked at 8K.

2- Why didn't you include memory usage in each benchmark? VRAM usage should be part of ANY benchmark table at ANY resolution from now on. Add it! Make it min/max memory usage!

3- You are a REVIEW site. You claim the VRAM reported as used is not the actual memory needed, and that some of it is caching. FINE, TEST IT. TESSSST IT. We won't take your word on it, and we won't take "just buy it" advice anymore. It is EASY TO TEST: you have 8GB cards, 10GB cards, and 11GB cards; you can find the spot where the game slows DOWN, and YOU CAN TEST HOW MUCH VRAM IS REALLY NEEDED.

4- No, we won't "just stop" and we won't "just buy it".

DO YOUR HOMEWORK AND TEST MEMORY USAGE, or we will move to another review site.

5- Funny you did not mention the RTX 3070 Ti with 16GB VRAM that Lenovo accidentally leaked in its documents? And you still say stop it and just buy the 10GB VRAM RTX 3080?
8K? Who owns that? #TotalWasteOfTime
 
Hey Jarred!

Having spent $950 in 2018 as an early adopter on the Alienware AW3418DW, I plan to "slum it" for at least two more years with this 3440x1440/120Hz/G-SYNC/IPS/curved display.

But I'm going to hold off until the end of the year to see if a "3080 Ti" comes out with higher clocks and more VRAM. The 3080 would be a massive upgrade over my 1080 Ti, but I know from experience that Nvidia keeps the xx80 Ti a secret until later...

The price discrepancy of $1,500 to $700 tells me there will be a $1,000/$1,100 3080 Ti option with higher clocks and more VRAM than the 3080. Plus, I'm waiting for water-block models. Been with Gigabyte for eight years, but may go EVGA this time around...
 
I'm just hoping the 3090 ends up being to the 3080 what the 2080 Ti was to the 2080, and DOESN'T end up being what the Titan RTX was to the 2080 Ti.

My 4K 144Hz and 1440p 240Hz monitors are desperate to stretch their legs.