There is absolutely nothing in your current testing that lets us extrapolate what the performance would be at lower detail settings. At the end of the day, we turn down detail settings (which costs nothing) before buying a new card (or a new monitor, for that matter). Ultra becomes High, etc. That isn't quantifiable; give us actionable data.
Ignoring the mistakes in pricing and specs and other stuff from your post, let me run through this in detail.
I test at 1080p medium because that provides a lower-demand performance result. It's generally far enough away from 1080p ultra to be more interesting, and it also pinpoints where other limits (usually the CPU) are a factor. I have done 1080p medium testing since I began testing GPUs at PC Gamer nearly a decade ago. I also tested at 1080p medium, along with 720p low and other resolutions, when doing laptop reviews at AnandTech for the decade prior to that.
Is every tested setting and resolution combination important on every card? Perhaps not, but they can provide data that forms the basis for a deeper analysis.
I have done testing in individual games many times over the years where I've checked even more settings. As you would expect, the vast majority show lots of overlap, like gears on a bicycle. If you have a 30-gear mountain bike with three rings on the front and ten on the rear cassette, that's 30 gears total. But it's really about the ratios, so if your rear cassette has 10–50 teeth and the front rings have 28/34/40 teeth as an example, then your lowest gear would be 28/50 = 0.56 ratio while your highest gear would be 40/10 = 4.00. But in between? You have a bunch of options that would all have a ratio of around 2.0. (This is why a lot of modern mountain bikes only have a single front ring and then a 12-gear rear cassette. That gives 12 'unique' ratios with no overlap, and basically matches what you might have gotten on a more complex "30-gear" setup.)
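To put some numbers on that overlap, here's a quick, purely illustrative calculation. The individual cassette cogs are hypothetical (only the 10–50 endpoints come from the example above), while the front rings match the 28/34/40 numbers:

```python
# Purely illustrative: front rings from the example above; the cassette cogs
# are a hypothetical 10-50T spread since only the endpoints are given.
front_rings = [28, 34, 40]
rear_cogs = [10, 12, 14, 17, 21, 25, 30, 36, 42, 50]

ratios = sorted(front / rear for front in front_rings for rear in rear_cogs)
print(f"lowest {ratios[0]:.2f}, highest {ratios[-1]:.2f}")  # 0.56 and 4.00

# How many of the 30 combinations land within ~20% of a 2.0 ratio?
middle = [r for r in ratios if 1.6 <= r <= 2.4]
print(f"{len(middle)} of {len(ratios)} gears cluster around 2.0: "
      + ", ".join(f"{r:.2f}" for r in middle))
```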
For GPUs, often 1080p medium ~= 1440p low, 1080p high ~= 1440p medium ~= 4K low, and 1080p ultra ~= 1440p high ~= 4K medium. And yes, it's very approximate and varies by game. The point is, there is always overlap, but finding precisely where it happens would require testing everything everywhere. You can't just assume it will be the same without testing, but testing is a huge time sink. So I compromise with four settings: 1080p medium, 1080p ultra, 1440p ultra, and 4K ultra.
That gives you a clear curve, four distinct "ratios" where there should be zero overlap. And from that basis you can extrapolate. If you look at any budget GPU I've reviewed, you'll see that my focus definitely isn't on 4K ultra or even 1440p ultra for GPUs that don't handle those resolutions. And on extreme GPUs, the 1080p results are typically provided merely as reference points with minimal commentary. I show those results for the curious, but they're definitely a sidenote rather than the main attraction.
Someone wants to know approximately how 1440p medium will run? Look at the 1080p medium vs 1080p ultra results, and extrapolate that against 1440p ultra. It's math and it's not perfect, but we can look at other cards to determine where scaling should land. For example:
Find the scaling factor for a different GPU from the same vendor, one that's clearly GPU limited at 1080p. If the RX 7600 runs ~1.7X faster at 1080p medium than at 1080p ultra (which it does), then even though the 7600 starts to choke at 1440p, you can apply that same ~1.7X scaling to a GPU like the 7900 GRE. But CPU limits still come into play for some games, so then you have to look at the 1080p medium results for that GPU and know that 1440p medium wouldn't run faster than 1080p medium.
The 7900 GRE only shows 36% scaling at 1080p medium vs ultra, so CPU limits are definitely a factor. 1080p ultra also runs about 20% faster than 1440p ultra (compared to a 40% delta on the 7600). With that data, which is already in the GPU hierarchy, you can get a pretty good idea of where the GRE will land. It will show higher scaling at 1440p medium vs ultra than it does at 1080p, but it can't exceed the 1080p medium result. Which means a 7900 GRE should run about 45~55% faster at 1440p medium than at 1440p ultra (depending on the game).
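Here's a minimal sketch of that logic in Python, if it helps to see it spelled out. The FPS numbers are hypothetical, chosen only to match the ratios quoted above rather than taken from any actual benchmark run:

```python
def bracket_1440p_medium(fps_1440p_ultra, fps_1080p_ultra, fps_1080p_medium):
    """Rough bracket for a partly CPU-limited card's 1440p medium result.
    Floor: 1440p medium should scale vs. 1440p ultra at least as well as
    1080p medium did vs. 1080p ultra (CPU limits bite harder at 1080p).
    Ceiling: 1440p medium can't run faster than 1080p medium."""
    scaling_1080p = fps_1080p_medium / fps_1080p_ultra
    return fps_1440p_ultra * scaling_1080p, fps_1080p_medium

# Hypothetical 7900 GRE numbers matching the ratios above: 1080p ultra ~20%
# faster than 1440p ultra, 1080p medium ~36% faster than 1080p ultra.
fps_1440p_ultra, fps_1080p_ultra, fps_1080p_medium = 100, 120, 163
low, high = bracket_1440p_medium(fps_1440p_ultra, fps_1080p_ultra, fps_1080p_medium)
print(f"1440p medium estimate: {low:.0f}-{high:.0f} fps "
      f"(+{low / fps_1440p_ultra - 1:.0%} to +{high / fps_1440p_ultra - 1:.0%} vs. 1440p ultra)")
# Splitting the difference lands in the ~45-55% faster range quoted above.
```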
Is that an exact answer? No, but it's close enough and doesn't involve massively increasing the amount of testing time. Which gets to the final point.
If VRAM limitations aren't a factor, 1440p medium usually performs about the same as 1080p ultra. I have tested this in the past, and it's usually quite binary: a bunch of games show nearly identical 1080p ultra and 1440p medium performance, and then a handful have different requirements and may run out of VRAM at ultra settings. But otherwise, when VRAM doesn't kill performance, increasing the resolution from 1080p to 1440p drops performance by about 25%, while going from medium to ultra drops performance by 25~50%.
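As a quick sanity check of those rules of thumb, using a hypothetical 100 fps baseline and again assuming VRAM isn't a factor:

```python
fps_1080p_medium = 100.0                        # hypothetical baseline
fps_1440p_medium = fps_1080p_medium * 0.75      # 1080p -> 1440p costs ~25%
fps_1080p_ultra_low = fps_1080p_medium * 0.50   # medium -> ultra costs 25-50%,
fps_1080p_ultra_high = fps_1080p_medium * 0.75  # depending on the game

print(f"1440p medium: ~{fps_1440p_medium:.0f} fps")
print(f"1080p ultra:  ~{fps_1080p_ultra_low:.0f}-{fps_1080p_ultra_high:.0f} fps")
# 1440p medium sits at the top of the 1080p ultra range: when the ultra preset
# costs closer to 25%, the two coincide; heavier presets open up a gap.
```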
Testing 1080p medium and ultra shows one thing very clearly on a per-game basis: Is this game hitting some bottleneck — VRAM capacity, bandwidth, or compute? That's a useful piece of information and lets you see when and where 6GB, 8GB, or even 12GB might be a problem.
Suppose a game didn't hit a limit. The only place to go from there is up, and if you change both resolution and settings at once, you don't know which was more impactful. That's why from 1080p medium to ultra, the only thing I changed was the settings. And from 1080p ultra to 1440p ultra, I only changed the resolution (and the same for 4K ultra). Eliminate the variables, don't add to them.
If I start testing all the "it makes sense on this card" settings, and then looking at the cards around, above, and below that level as comparison points, I can quickly end up with 1080p, 1440p, and 4K all tested at low, medium, high, and ultra. Which would be lovely to do, given a lot more time (and more test PCs and people doing the testing). But even if you trim out the cruft (like dropping 4K entirely on budget GPUs), it generally ends up being more work, plus more going back and swapping GPUs because invariably things get missed.
It's also important to test cards at settings that reveal where they collapse. If I only test 8GB budget cards at 1080p high, 1440p medium, and 1440p ultra, the first two will probably be equivalent workloads while the third may be a problem, but we don't know if the problem was 1440p vs 1080p or ultra vs medium (or more likely both). You're eliminating some questions and adding others.
Basically, I choose to keep my workload somewhat manageable and leave it to the readers to interpolate results where necessary. Most of our TH readers are thankfully pretty intelligent and can do that. It's also why the GPU hierarchy, which is literally hundreds of hours of work (and regular retesting takes it to thousands of hours), is so useful. It's generally the most trafficked page on TH as a result. That's what 20 years of actual benchmarking has taught me.