It wasn't just the 7900 series, though: the 6950 XT was also notably higher than the 4070 Ti despite coming in lower in direct comparisons, and even higher than the 4080!
Can you clarify precisely what you're referring to? If you're talking about the rasterization charts, this is expected behavior at 1080p. AMD's RX 6950 XT was generally faster than the RTX 3090 Ti and even beat the RTX 4090 at 1080p medium. This is in part because of drivers and game changes, I suspect, but also note that the RX 6950 XT has 128MB of L3 cache and seems to scale better at lower settings and resolutions. You'll note that at 1440p and 4K, the 6950 XT clearly falls behind all of the latest generation cards. I'm going to recheck my RTX 4080 numbers today, just for sanity's sake, but I don't expect any massive changes. On a 12900K, CPU bottlenecks mean some of the newer GPUs can't strut their stuff until higher resolutions.
Here are the raw numbers and percentages for a couple of comparisons from the rasterization charts, though. You can definitely see that either the 6950 XT performed exceptionally well at 1080p, or the 7900 XTX and 4080 underperformed at that resolution.
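For reference, this is roughly how those percentages fall out of the raw averages. The FPS values in this little Python sketch are placeholders rather than my actual chart data:

```python
# Minimal sketch of how the percentage comparisons are derived.
# The FPS values below are placeholders, not the actual chart data.
raw_fps = {
    "1080p medium": {"RX 6950 XT": 170.0, "RTX 4080": 160.0},
    "4K ultra":     {"RX 6950 XT": 60.0,  "RTX 4080": 85.0},
}

for setting, results in raw_fps.items():
    amd, nv = results["RX 6950 XT"], results["RTX 4080"]
    lead = (amd / nv - 1) * 100  # positive = 6950 XT ahead of the 4080
    print(f"{setting}: RX 6950 XT is {lead:+.1f}% vs. RTX 4080")
```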
That's definitely going to get you a lot of pitchforks from angry AMD users (and even some NVIDIA players). Ray tracing is still not the default standard by any means, so even as a current 4070 Ti owner who enjoys using RT, I'm not convinced that's the right approach. I doubt other major reviewers will do the same yet, so it will definitely set Tom's Hardware apart from them. Maybe I'm wrong about that. I know it means double the testing, but it's no different than comparing Ultra settings to Medium. It's just part of the playing field at the moment.
This is just for the "overall geometric mean," at least in part because it's a pain to maintain two fully separate spreadsheets and tables of data and to continually explain why certain cards underperform with ray tracing. I figure we have all Nvidia RTX cards going back to 2018, and all AMD cards going back to 2020, and that's sufficient for a look at how things currently stand at the top of the GPU charts. People can look at the previous generation hierarchy if they want to skip having the ray tracing tests lumped in, or else just look at specific games. At some point, ray tracing support will become pretty prevalent on both the hardware and software side. I'd argue we're pretty close to that tipping point now for major games: Hogwarts Legacy has it, for example, and it does improve the visuals.
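To be clear about what an "overall geometric mean" implies, here's a minimal sketch with invented FPS values: every game, rasterization or DXR, feeds into one geometric mean, so a card that struggles in the ray tracing titles sees its overall score drop even if it's competitive elsewhere.

```python
from math import prod

# Sketch of a single "overall geometric mean" across raster + DXR games.
# FPS values are invented; the point is the math, not the results.
fps = {
    "raster": [144, 121, 98, 156],  # average FPS in rasterization games
    "dxr": [72, 45, 38],            # average FPS in ray tracing games
}

all_results = fps["raster"] + fps["dxr"]
geomean = prod(all_results) ** (1 / len(all_results))
print(f"Overall geomean: {geomean:.1f} fps")
```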
Frankly, if you really want to show how big the Nvidia lead can be, testing with DLSS2 and FSR2 where supported, at quality mode, would be entirely justifiable in my book. The potential for visual artifacts is easily outweighed by the performance increase, and DLSS2 is in more games and more "popular" games than FSR2 right now.
Looking at my current 15-game test suite:
Borderlands 3: no upscaling
Bright Memory Infinite Benchmark: DLSS2 only
Control Ultimate Edition: DLSS2 only
Cyberpunk 2077: DLSS2/3 and FSR2.1
Far Cry 6: FSR1
Flight Simulator: DLSS2/3 and FSR2.0
Forza Horizon 5: DLSS2 and FSR2.2
Horizon Zero Dawn: DLSS2 only
Metro Exodus Enhanced: DLSS2 only
Minecraft: DLSS2 only
Red Dead Redemption 2: DLSS2 and FSR2.1
Spider-Man: Miles Morales: DLSS2/3 and FSR2.1 (and XeSS!)
Total War: Warhammer 3: no upscaling
Watch Dogs Legion: DLSS2 only
You could argue about the selection of games, but I didn't intentionally try to skew it in favor of Nvidia. Several games only added DLSS and FSR later in their lives, for example. Anyway, that's 11 of 15 with DLSS2, 5 of 15 with FSR2, and 1 of 15 with XeSS. For games that support both DLSS and FSR2, I think the gains from DLSS2 on Nvidia are generally larger than the gains from FSR2 on AMD, and DLSS2 still wins the quality comparisons by a small amount.
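As a rough illustration of that comparison, here's a quick sketch using entirely made-up native and quality-mode upscaled FPS figures. The card and upscaler pairings are just placeholders, not my test data.

```python
from math import prod

# Hypothetical quality-mode upscaling results as (native FPS, upscaled FPS) pairs.
# These numbers are made up purely to illustrate the comparison, not measured.
gains = {
    "RTX 4080 + DLSS2 Quality": [(62, 92), (48, 71), (110, 148)],
    "RX 7900 XTX + FSR2 Quality": [(70, 96), (51, 68), (118, 150)],
}

for config, runs in gains.items():
    ratios = [upscaled / native for native, upscaled in runs]
    avg_uplift = prod(ratios) ** (1 / len(ratios))  # geometric mean of per-game uplifts
    print(f"{config}: average uplift {avg_uplift:.2f}x")
```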
I don't know... maybe I'll end up recanting and sticking to separate overall performance charts for DXR and rasterization games, but it's certainly a thorn in my side.