GPU Performance Hierarchy 2024: Video Cards Ranked


King_V

Illustrious
Ambassador
Have you not noticed the image?
[image: benchmark results chart]


So yeah, anything with less than 4GB fails at 1080p ultra on RDR2. And the GTX 780 failed to run Far Cry 6 as well — the game says it needs GTX 900-series, not sure exactly why.

Actually, I've updated the charts to just remove the "could not run" results rather than counting them as a "0" (zero), which gives much improved standings that aren't entirely accurate, but whatever. It's as good as I'll get without testing more games and/or using different settings. 🤷

[image: updated benchmark results chart]

I, uh, did not. But I was looking at the tables, and didn't notice the graphs.

Yeah, it's damned if you do, damned if you don't, with the GPUs that fail in a game.

Though, wait... now that I think about it, maybe it's not an unfair advantage if you just remove the did-not-run games from the calculation. Mathematically speaking, it probably still correctly reflects the performance capability of the GPU in question, which is what I think we're looking for. Obviously, though, a note needs to be there about some games requiring a minimum of 4GB VRAM to run. But the cards' relative performance positioning looks about right.


As for the GTX 780, it only supports DX12 feature level 11_0, and, from what I understand, FC6 requires feature level 12_0. I literally only stumbled across a video within the past week about why some older cards are unusable despite still having the GPU horsepower, covering the various DX12 feature levels and so on. It's the only reason it occurred to me to look that up for FC6 and the GTX 780. Fortuitous timing!
 
Yeah, it's a feature level thing... but it's a pretty odd thing to "require" when the game still supports DX11. Anyway, lots of games look basically as good as FC6 and still run on older GPUs, so I feel there's an element of lazy programming to it.

As for the overall score, we're dealing with a geometric mean of eight games, normally. Removing one result (RDR2) won't skew the numbers too much, but unless RDR2 happens to run at exactly the overall average fps, it will cause the scores to go up or down a bit. The GTX 1050 Ti manages to run RDR2 at 1080p ultra and gets 16.8 fps, compared to an overall average (and I mean "geometric mean" when I say "average") of 19.8 fps. So, if I remove the RDR2 result from the calculation, I get an overall score of 20.3 — a relatively small 2.5% change.

For the GTX 780, it's two games that get removed, but FC6 does tend to run faster than the overall average result, so it probably balances out. Going back to the GTX 1050 Ti, if I drop both FC6 and RDR2, the score changes from 19.8 to 19.1, a 3.5% change. Which means we're still close enough, generally speaking, that the tables are "meaningful" even if they're not perfectly apples to apples in all cases.
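
If you want to see the math, here's a quick Python sketch. Only the 16.8 fps RDR2 result and the ~19.8 overall score are real numbers from the testing above; the other seven per-game values are made-up placeholders to show how dropping a below-average result nudges the geomean up:

from math import prod

def geomean(values):
    # Geometric mean: the nth root of the product of n values.
    return prod(values) ** (1 / len(values))

# Hypothetical 1080p ultra results; only RDR2's 16.8 fps is a real number.
fps = {"RDR2": 16.8, "Game2": 21.5, "Game3": 19.0, "Game4": 22.0,
       "Game5": 18.5, "Game6": 20.0, "Game7": 19.5, "Game8": 21.0}

all_games = geomean(list(fps.values()))
no_rdr2 = geomean([v for game, v in fps.items() if game != "RDR2"])
print(f"All eight games: {all_games:.1f} fps")
print(f"Without RDR2: {no_rdr2:.1f} fps ({no_rdr2 / all_games - 1:+.1%})")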
 

eboethrasher

Prominent
I mean, Tom's staff: since, for some reason, A) this is set up as an article that gets overwritten every month rather than published as a separate monthly piece, and B) the website is inexplicably excluded from the Internet Archive, it would behoove you to at least provide a link to the old version of the GPU hierarchy, which provided actually useful information based on test data, even if it wasn't always perfectly aligned. It had been fairly useful before.
 
I have no idea what you want that isn’t available. The old “legacy” list was basically garbage. It just listed GPUs, some of which hadn’t been tested in over a decade, without specs or any other information beyond a pseudo-ranking.

It was pretty much just “GTX 670 is faster than GTX 660 Ti” sort of data. If you wanted an HD 5870 vs GTX 680 comparison, all you had was two lines, and one GPU was “higher” than the other based on unknown data / opinion. Was it more correct in its rankings? Hardly. There were multiple cases where it was clear the generational groupings were simply best guesstimates. I posted the old table a few comments back if you want to see it.

The current legacy table at least has some criteria for the sorting and lists specs. It also includes more GPUs. If you preferred the old presentation, just ignore the specs and pretend it’s less useful because “reasons” I guess. Or Google old reviews for the GPUs you want to compare, though of course that doesn’t tell you how the various GPUs aged over time.
 

King_V

Illustrious
Ambassador
I think the only real issue with the new legacy list, for those of us lunatics* still looking at these GPU fossils, is that a lot of the time (but not every time) the GFLOPS score seems to be independent of the memory type. Don't get me wrong, though, as that disclaimer is literally mentioned right up front in the opening paragraph for that chart.

*and yes, while I was looking at it, I briefly wondered how the old HD 4850 I once had compared to the HD 6670 that I still have lying around somewhere... I think something might be wrong with me.

I need to move on from the ancient days... there are even "modern" cards that fail gaming tests because of insufficient VRAM!

A shame they didn't really document the methodology, or where guesswork was required. I sort of remember the "if the new card is within 3 tiers of the old card, you probably won't see much of a difference," though I think even that cautionary statement became quite inaccurate in the later generations of cards.
 
Honestly, as the guy who came into TH just over two years ago and took over the hierarchy and testing, the old methodology was at best highly suspect. Pre-2020, for example, the GPU hierarchy used testing from just three games. I believe those were Ashes of the Singularity, Forza Horizon 4, and Shadow of the Tomb Raider. Ashes is a terrible GPU benchmark, as it often ends up CPU limited, Forza heavily favored AMD GPUs, and Shadow was the only game I felt came close to putting AMD and Nvidia GPUs on equal footing (without DXR enabled, at least). That was the 2019 iteration, and you can guess that things weren't any better with previous versions of the GPU testing. It's one of the reasons TH wanted me to come on board, because I would actually do a huge amount of testing.

But again, I posted the old "legacy" list on the previous page. Let me call out a few massive discrepancies just to point out the problems. The dual-GPU solutions aren't always included, but when they are, placement is highly suspect. GTX 690 sits right below GTX 980, which sits in a bracket above GTX 1060 6GB. Then you get the GTX Titan, 1060 3GB, 970, 780 Ti, and 770 before we finally get the GTX 680. Was the dual-GPU card really eight positions ahead of the single-GPU variant? Maybe in 3DMark, but not in most modern games, and only in a handful of older games that had good SLI support.

Part of the problem is that there's no indication of what one "rank" in the listing means, though. Sometimes two cards are adjacent and basically tied in performance; other times there might be as much as a 10% performance gap between two lines. And it varies throughout the listings.

On the AMD side, the R9 295X2 sits above Vega 56, but one tier below Vega 64... even though there's only a 10% gap between the two Vega GPUs. Then comes Fury X, Fury, Fury Nano, 580 8GB (no 590), 480 8GB, 570 4G and "570 4GB" (what's the difference!?), 390X, 390, and finally 290X. That means the dual-GPU 295X2 sat 11 positions higher than the single-GPU variant, and three "tiers" (groupings) higher. It's nonsensical!

Or how about the HD 7970 (GHz edition as a bonus!). The vanilla 7970 isn't listed in the old legacy hierarchy, but the GHz is a "tier" (grouping) above the R9 280, whereas the 280X sits right above the HD 7970 GHz. Except all of those use the exact same Tahiti GPU, and the 7970 GHz had the highest clocks of them all (1050 MHz boost, compared to 1000 MHz on the 280X).

I could have tried to do a weighted sorting, where GFLOPS was combined with VRAM bandwidth and capacity plus an "architectural scaling factor" (so maybe 1.0x for GCN1.4, 1.1x for Pascal, whatever...). That gets very nebulous and starts to imply actual benchmarks, however, so I just left it at theoretical GFLOPS and then listed a few specs next to each GPU. Adding VRAM capacity and bandwidth to the columns might be worth the effort, if I have a day where I'm bored. LOL
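
If anyone's curious what that might have looked like, here's a rough Python sketch. The exponents, weights, and architecture factors are purely illustrative (the two scaling factors are the examples from the paragraph above), not anything we actually use:

# Sketch of a weighted legacy sort: theoretical GFLOPS adjusted by memory
# bandwidth, VRAM capacity, and a per-architecture scaling factor.
# All weights and exponents below are invented for illustration only.
ARCH_FACTOR = {"GCN": 1.0, "Pascal": 1.1}  # the example factors from above

def legacy_score(gflops, bandwidth_gbs, vram_gb, arch):
    # Mostly compute throughput, gently nudged by the memory specs.
    return (gflops * ARCH_FACTOR.get(arch, 1.0)
            * (bandwidth_gbs / 256) ** 0.25
            * (vram_gb / 4) ** 0.1)

cards = [  # (name, GFLOPS, bandwidth GB/s, VRAM GB, architecture)
    ("GTX 1060 6GB", 4372, 192, 6, "Pascal"),
    ("RX 580 8GB", 6175, 256, 8, "GCN"),
]
for name, *specs in sorted(cards, key=lambda c: -legacy_score(*c[1:])):
    print(f"{name}: {legacy_score(*specs):,.0f}")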
 
I'm glad you've got 3+ generations in there now.
So many charts out there drop everything but the last two gens. This can cause people who have oldies but goodies (1080 Ti/Vega 64) to wonder where they rank against newer gen cards.
 

King_V

Illustrious
Ambassador
Honestly, as the guy who came into TH just over two years ago and took over the hierarchy and testing, the old methodology was at best highly suspect. Pre-2020, for example, the GPU hierarchy used testing from just three games. I believe those were Ashes of the Singularity, Forza Horizon 4, and Shadow of the Tomb Raider.
Ooof.... ouch. Isn't Ashes the one that's mocked as "Ashes of the Benchmark" or something along those lines?

Also: "Welcome. Here's what we have currently. There's no records/documentation behind it. Good luck!"


I could have tried to do a weighted sorting, where GFLOPS was combined with VRAM bandwidth and capacity plus an "architectural scaling factor" (so maybe 1.0x for GCN1.4, 1.1x for Pascal, whatever...) That gets very nebulous and starts to imply actual benchmarks, however, so I just left it with theoretical GFLOPS and then listed a few specs next to the GPU. Adding VRAM capacity and bandwidth to the columns might be worth the effort, if I have a day where I'm bored. LOL

"So, was that one run at 1024x768 resolution?" :LOL:

Kidding aside, while I think the scaling factor probably wouldn't work, just having the info about the memory bandwidth might.

Well, that is, for the tiny minority of us who care about these relics. And, for me, that doesn't go beyond intellectual curiosity. Realistically, it may only be useful/meaningful for the most recent of the "too old to be on the modern chart" architectures.
 
It was more like, "Okay Jarred, you're in charge of the GPU hierarchy (which I had already been doing for PC Gamer), best graphics cards buying guide, and all GPU related content."

My first reaction: "Holy cow. Paul, do you realize that the GPU hierarchy is based off scores in three highly questionable gaming tests? This is ludicrous and I can't believe anyone would even trust our rankings." To make matters worse, I'm pretty sure it was all 1920x1080 testing. A big part of the problem was that the person primarily in charge of the hierarchy was by that point a "freelancer" (even though Chris had previously been EIC at Tom's Hardware). That just encourages a person to do the least amount of work possible unless the pay is good, which I suspect it probably wasn't.

"We need you to keep the GPU hierarchy updated."
"How much will you pay for that?"
"Hmmm... how about $100 per month?"
"Okay, how about I just use three easy to test games?"
"Whatever..."
 

froggx

Distinguished
Would it be possible to add links to the list that lead to the relevant GPU reviews on the site? Some people (i.e. me) might want more information before making a purchase than an eight game geometric mean (or six game geomean for RT scores) can provide.
 

King_V

Illustrious
Ambassador

I like this idea... having a link to the full review would be nice.

That said, while there's other info in a review, it ultimately draws its performance conclusion from the gaming suite. It's just that the older cards' reviews will feature older games rather than the current suite.
 
I'd have to see about tweaking things. I can tell you that the ecomm links to buy the GPUs are basically required and have a higher priority than our reviews. Sad, I know. You can always do a search for "site:tomshardware.com [GPU name] review" and that should pull up the relevant article.
 

King_V

Illustrious
Ambassador
Ah, yeah, that's not surprising. And I've often resorted to searching in a similar manner.

I don't suppose, with the content management system they've got for the site, that it's an easy thing to add an "Our Review" link under the label of each GPU, is it? (I feel like I'm asking a rhetorical question that should be answered, in a somewhat foreboding voice, "of COURSE it isn't that easy... what on earth were you thinking?")
 
Our "Best graphics cards" (and other buying guides) actually do have a feature where you can link to a review... which is currently borked! LOL. (It used to let you click on the title of each item in the list, but I've just resorted to manually adding it as a link now.)

For the Hierarchy, it's all just a standard article but I have spreadsheets I use to create the sorted tables. No, the CMS does not let you paste tables directly from Excel — that would be FAR too easy! Basically, I just need to decide which column should link to the review, then add in some additional formulas that will make the HTML link text, and add a cell with the review URL. It's not horribly difficult, though it will take a bit of time to find all the links and come up with a reasonable approach. Ideally, I'd just put a linebreak under the GPU name and then "Our review" would link to the article... but our tables don't support line breaks. Awesome, right? I'll probably have the specs column link to the reviews, as that should be okay.
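
The formula side is simple enough. Here's a rough Python equivalent of what the spreadsheet formulas end up generating, with a placeholder specs string and review URL rather than real data:

# Build the HTML for a specs cell that links to the GPU's review.
def specs_cell(specs_text, review_url=None):
    if not review_url:  # not every GPU has a review on file to link
        return specs_text
    return f'<a href="{review_url}">{specs_text}</a>'

print(specs_cell("Example specs text",
                 "https://www.tomshardware.com/reviews/example-gpu-review"))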
 

King_V

Illustrious
Ambassador
Ideally, I'd just put a linebreak under the GPU name and then "Our review" would link to the article... but our tables don't support line breaks. Awesome, right?
(cue left eye twitching spasmodically)

I'll probably have the specs column link to the reviews, as that should be okay.
That sounds cool. Maybe a little awkward, but if it gets the job done...

I'm going to try and forget what I read about the unsupported line breaks...
 
If you look at the new and improved GPU benchmarks hierarchy, you'll be happy to see 4090, A770, A750 in their respective positions now, and the specs cells all link to the TH reviews. Yeah, that only took several hours (for all of it)...
 

King_V

Illustrious
Ambassador
That is absolutely awesome! I mean, well, not so much the amount of your time it sucked away, but still, definitely a two thumbs-up.
 
Oct 24, 2022
Nice article for sure and lots of work put into it!

I could be wrong, but according to the first few Google results I tried, the RTX 3090 is faster than the RX 6900 XT...

Of course, this was an old review, but it compared those two cards:

"Overall, the GeForce RTX 3090 is undoubtedly the fastest gaming GPU currently available. AMD's RX 6900 XT looks pretty good if you confine any performance results to games that don't support ray tracing or DLSS, but add in either of those, and it falls behind — often by a lot."

- https://www.tomshardware.com/features/geforce-rtx-3090-vs-radeon-rx-6900-xt

Also another website:

https://gpu.userbenchmark.com/Compare/Nvidia-RTX-3090-vs-AMD-RX-6900-XT/4081vs4091
 
The rankings are based off of current testing using our updated test suite. So in the games available back in 2020 when the RTX 3090 and RX 6900 XT first launched, the 3090 was faster. With updated drivers and a new suite of games, which card places ahead of the other varies. Across our eight games for the standard test suite, at 1080p ultra, the RTX 3090 got 126.6 fps compared to the RX 6900 XT at 129.7 fps. The RX 6900 XT also wins at 1080p medium, 184.6 fps to 178.1 fps. At 1440p, it's a slight advantage to the 3090: 106.5 fps to 105.5 fps. And at 4K ultra, the 3090 definitely comes out ahead with 68.8 fps to 63.1 fps.

The table is sorted by 1080p ultra as the baseline because 1080p remains the most popular resolution, yes, but more importantly it's a resolution where I test (nearly) all of the GPUs — or at least attempt to do so. There's no point in testing about half of the cards at 1440p ultra, and definitely not much use in testing any cards with less than 8GB VRAM at 4K ultra, so I don't. But if you care about those resolutions, you should be able to easily look at the table and determine which card ranks higher.

The same goes for the DXR table with ray tracing. Some people (AMD users especially) want to say ray tracing is stupid and unnecessary, and they can easily ignore that table. But the results are all there in black and white for people to judge for themselves. Factor in DXR performance and the RTX 3090 easily leads even the RX 6950 XT. If you want to factor in ray tracing performance, here's an alternative view that takes the geomean of 1080p, 1440p, and 4K non-DXR performance plus the 1080p medium and ultra DXR performance. It puts even the 3080 Ti and 3080 12GB ahead of the 6900 XT (a quick sketch of the math follows the table).

GPU | Overall Performance
GeForce RTX 4090 | 182.5
GeForce RTX 3090 Ti | 144.7
Radeon RX 6950 XT | 137.7
GeForce RTX 3090 | 136.0
GeForce RTX 3080 Ti | 132.4
GeForce RTX 3080 12GB | 129.0
Radeon RX 6900 XT | 127.6
GeForce RTX 3080 | 123.9
Radeon RX 6800 XT | 120.8
Radeon RX 6800 | 108.0
GeForce RTX 3070 Ti | 106.0
GeForce RTX 3070 | 100.2
GeForce RTX 2080 Ti | 97.9
Radeon RX 6750 XT | 95.7
GeForce RTX 3060 Ti | 91.0
Radeon RX 6700 XT | 90.2
GeForce RTX 2080 Super | 83.4
Intel Arc A770 16GB | 80.9
GeForce RTX 2080 | 79.9
Radeon RX 6650 XT | 73.7
GeForce RTX 2070 Super | 73.7
Radeon RX 6600 XT | 72.0
Intel Arc A750 | 71.4
GeForce RTX 3060 | 69.8
GeForce RTX 2070 | 65.5
GeForce RTX 2060 Super | 62.2
Radeon RX 6600 | 60.8
GeForce RTX 2060 | 51.9
GeForce RTX 3050 | 50.6
Intel Arc A380 | 25.8
Radeon RX 6500 XT | 25.8
Radeon RX 6400 | 19.8
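
If you want to replicate that kind of blend, it's just a geometric mean over the five component scores. A minimal Python sketch, using placeholder inputs rather than the real test data behind the table:

from math import prod

def geomean(values):
    return prod(values) ** (1 / len(values))

# Five components, as described above: 1080p, 1440p, and 4K non-DXR scores,
# plus 1080p medium and 1080p ultra DXR scores. Placeholder values only.
components = [120.0, 100.0, 65.0, 80.0, 55.0]
print(f"Blended overall score: {geomean(components):.1f}")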
 
Oct 24, 2022

Thanks for the response! I had heard that recent drivers really increased performance on some GPUs. I have not looked into it much yet, just been doing research when I can, but was getting confused between all these different website results. I suppose most websites still have the old results from before the driver updates.

I am not sure about all the different features like ray tracing, DLSS, or whatever other things are out there, but I need to look those up. If features exist, I assume we should use them ... AMD users say it is stupid, hmm? I did just look that up, and it looks like it is newer for AMD, so I guess they must be jealous of NVIDIA, unless there is some other reason?

I am still using a GTX 650 Ti Boost and may be for a long while due to money problems, but I'm just trying to at least catch up on knowledge of current GPUs.
 
My general attitude is that, if a feature exists that does something useful, I would generally enable it where possible. DLSS falls into that category, along with FSR 2.0 and XeSS. DXR (DirectX Raytracing) and ray tracing in general are a bit harder to quantify. I'd say AMD users often discount RT as unimportant, and part of that is because there are a lot of games with RT effects where the visual fidelity gains don't remotely make up for the loss in performance. There are other games where ray tracing is a nice extra — Control, Cyberpunk 2077, Spider-Man Remastered, Watch Dogs Legion, and a few others definitely have RT effects that I "miss" when they're not present. But it's not at the level where I'd suggest they're required.

Nvidia pushed out RT hardware in 2018. AMD finally had a competing solution in late 2020, two years later, by which time Nvidia's second generation of RT hardware was out. RTX 4090 (and the upcoming 40-series in general) are third generation RT hardware. So Nvidia is very much ahead in RT performance, and anyone that doesn't like Nvidia needs to find ways to minimize that. Again, RT absolutely isn't required for most games to be enjoyable, but it can make some games more enjoyable. How much so? That's very subjective and up to the individual, so I mostly try to provide my opinions along with data to back it up, and let others decide for themselves what they value.

If you want an alternative view of performance, take all the RTX cards and multiply their standard score by 120% (for DLSS and better DXR hardware). AMD's RX 6000-series gets multiplied by 110% (for DXR hardware). Everything else gets the base score. It's a quick and dirty way of adjusting the numbers that I feel mostly matches up with the end user experience.
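
In code form, that quick-and-dirty adjustment looks something like this (the 100.0 base score is just an example input, not a real result):

# +20% for RTX cards (DLSS plus stronger DXR hardware), +10% for RX 6000
# cards (DXR hardware), base score for everything else.
def adjusted_score(name, base_score):
    if name.startswith("GeForce RTX"):
        return base_score * 1.20
    if name.startswith("Radeon RX 6"):
        return base_score * 1.10
    return base_score

for card in ("GeForce RTX 3070", "Radeon RX 6800", "GeForce GTX 1080 Ti"):
    print(card, adjusted_score(card, 100.0))  # example base score of 100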
 
Oct 24, 2022

Again, thanks for the informative reply!

I have only used Nvidia GPUs in desktops, not that I ever had a reason not to get AMD... it's just that in almost 20 years I have only had two GPUs of my own, I guess.

Had Windows 95/98/XP with the family... then got my own newer XP machine in 2005 from my family as a present... that had a GeForce 8400 GS, which I then upgraded to a GeForce GTX 650 Ti Boost... which is now in the system I tried building a few years ago, when the GPU shortage hit at the same time I was looking for a GPU.

I agree, if there is a useful feature to use, then of course use it. I suppose I would have to try it and see the performance gain or loss for myself to agree or disagree one way or another.

This probably isn't the right place to ask, but is there some fast wireless display software or adapter that maintains performance from a PC to a TV or monitor? Or do certain graphics cards work better at that than others? I assume a connection via HDMI will always perform best, just like ethernet beats Wi-Fi. I have been using TVs as my PC display for years now.