No 8K benchmarks? COME ON!!! This card should be tested at 8K as well. You tested the GTX 1080 Ti at 4K, and this card is better at 8K than the 1080 Ti is at 4K. I don't care if it shows 30 fps, it should be 8K benchmarked.
8K is not really practical and is extremely niche at this time. Television companies might be trying to push it on some of the latest high-end models, but it's questionable whether there's even a need for anything more than 4K in a television. Even 4K is arguably a bit overkill at typical viewing distances, and most will struggle to notice much of a difference between 1440p and 4K in games, let alone four times that resolution. Any improvements going from 4K to 8K are likely to be placebo-level, unless perhaps you are sitting within a few feet of a wall-sized display, which would make for a completely impractical viewing experience.
If you want to know how native 8K performs in the latest games, the answer should be clear, and that would be "abysmal". Games typically get around one-third the performance at 4K that they get at 1080p, assuming frame rates are not being limited by CPU performance. Going from 4K to 8K is the same four-fold jump in pixel count, so best-case scenario, divide the 4K numbers by three, which should give you sub-30fps frame rates in many games. And sure, the 10GB of VRAM might actually limit performance further in some titles, but even without that limitation, the performance wouldn't be good. Considering the massive difference in frame rates compared to the imperceptible difference in resolution, gaming at 8K makes no sense, and would arguably be a waste of time to even test, at least for the main review.
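Just to put rough numbers on that (purely back-of-envelope, assuming a fully GPU-bound game and that 4K-to-8K costs about as much as 1080p-to-4K; the 60 fps input is a made-up example, not a measured result):

```python
# Rough 8K estimate from a 4K result, assuming the game is fully GPU-bound and
# that 4K -> 8K costs about as much as 1080p -> 4K (both are 4x the pixels).
# The 60 fps input is a hypothetical example, not a measured figure.
fps_4k = 60                 # hypothetical 4K result
scaling_factor = 3          # roughly the hit typically seen going 1080p -> 4K
fps_8k_estimate = fps_4k / scaling_factor
print(f"Estimated 8K frame rate: ~{fps_8k_estimate:.0f} fps")   # ~20 fps
```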
A couple of sites tested PCIe 3.0 vs 4.0 and found 4.0 to be about 1% faster. The only problem was that the test had to be done on an AMD platform, and the same sites found the AMD side to be 10% slower than Intel. People need to stop worrying about PCIe 3.0 being a bottleneck. Here's one of them.
I looked over TechPowerUp's scaling benchmarks, and they did show around a 3% difference between PCIe 3.0 and 4.0 in some titles, while there was no difference in others, and still others landed somewhere in between. So, the exact difference will be game-dependent, not just a fixed 1%. And of course, that might have been limited by the 3900X's somewhat lower gaming performance compared to the 9900K at lower resolutions, though that could potentially change with the upcoming Ryzen CPUs or future Intel models. So I would say "up to a 3% difference in today's games" would be a more accurate way to describe it, even if that's still a pretty minimal amount.
The article also went on to point out in its conclusion that it's possible PCIe 4.0 could make more of a difference in future games utilizing RTX IO and DirectStorage once games start using that technology. The upcoming consoles will apparently be utilizing this kind of direct SSD-to-VRAM data transfer to stream in game assets efficiently, so it will likely make its way to the PC as well for games designed with those capabilities in mind. Unfortunately, there's no way to test that at this time, so that still leaves some potential PCIe 4.0 benefits up in the air.
And as for the Ryzen system being "10% slower" than the 9900K, that was at 1080p resolution, which would be kind of a silly resolution to get a card like this for. At 4K, the 3900X was just 1% slower than the 9900K on average, and I imagine most current CPUs would perform about the same at that resolution, so either of those processors is arguably overkill.
And to be honest, the next-gen consoles are the 3080/3090's biggest competitors, hence their price points... not AMD.
But, the new consoles both use AMD GPUs. : P
This looks like an 'end-of-the-road' card for 99% of users.
If I were thinking about a 3070, I would buy the 3080 anyway... still need to look at Navi, just to be sure (and maybe the price/availability issues on the 30X0 cards will be sorted by then).
This kind of reminds me of someone who paid a couple-hundred dollars for a 2GB SD card back when those were new, claiming they probably wouldn't ever need a higher-capacity SD card than that. : P
The graphics quality of games will increase to match the new hardware in the coming years. Turn on raytraced lighting effects in a game like Control, and you're already looking at below-60fps frame rates at native 1440p. And this is still "hybrid" raytracing that's only applied to certain effects, with full raytracing being significantly more demanding.
The same could have been said about the 1080 Ti a few years back, which was over 30% faster at 4K than the 1080 that launched for the same price just 9 months prior, and it was generally accepted as being a pretty good card for 4K. Fast forward a couple of years, and it was already getting frame rates in the 20s in a game like Control at 4K ultra settings, even without raytracing enabled. Turn that on, and it can't even maintain 30fps at 1080p resolution.
I'm just hoping the 3090 ends up being to the 3080 what the 2080 Ti was to the 2080, and DOESN'T end up being what the Titan RTX was to the 2080 Ti.
My 4K 144Hz and 1440p 240Hz monitors are desperate to stretch their legs.
Don't get your hopes up. As the article points out, if you look at its specs, it shouldn't be more than 20% faster, and realistically, the typical difference is going to be even less than that. So, over double the price for maybe around 10-15% more performance at 4K is what you should be expecting from it. The 3090 is absolutely a "Titan" card, just with different branding.
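For what it's worth, rough math on the spec gap (the CUDA core counts are from Nvidia's published specs; the assumption that real-world gains land below the raw ratio is mine):

```python
# CUDA core counts from Nvidia's published specs; the assumption that real-world
# gains land well below the raw core-count ratio is mine.
cores_3080 = 8704
cores_3090 = 10496
raw_uplift = cores_3090 / cores_3080 - 1
print(f"Raw core-count advantage: ~{raw_uplift:.0%}")   # ~21%
# GPUs rarely scale linearly with core count at similar clocks and bandwidth,
# so something like 10-15% more performance at 4K seems a realistic expectation.
```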
And speaking of raytracing, it's a bit disappointing to see so little improvement over the 20-series in the relative cost of enabling it. If raytracing caused a 40% hit to performance in a particular game on Turing, then it appears to cause nearly as much of a hit on Ampere as well. The overall performance at a given price point is a decent amount higher, but the relative performance hit from enabling raytracing seems rather similar. Maybe that's something they'll address with their next-generation cards though, once raytracing has established itself as the norm for ultra graphics settings.
The same goes for efficiency, at least on these top-end cards. There's more performance, sure, but the power draw to get there has risen nearly as much, resulting in not much more than a 10% efficiency gain compared to the 2080 Ti, despite the process node shrink.
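The perf-per-watt math behind that, with round placeholder numbers (both inputs below are rough assumptions on my part, not measured review figures):

```python
# Perf-per-watt gain = relative performance / relative power draw.
# Both inputs are round placeholders on my part, not measured review figures.
relative_performance = 1.30   # assume ~30% faster than a 2080 Ti at 4K
relative_power = 1.18         # assume ~18% higher power draw
efficiency_gain = relative_performance / relative_power - 1
print(f"Approximate perf-per-watt gain: ~{efficiency_gain:.0%}")   # ~10%
```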
And perhaps most disappointing, this architecture seems to be designed first and foremost with mining in mind, rather than gaming. There appear to be huge gains in the mining/compute performance of these cards relative to gaming performance. The 3080 supposedly gets around 30 Tflops of FP32 compute performance, more than double that of a 2080 Ti, or close to triple that of a 2080. But as far as performance in today's games goes, even at 4K we're typically only looking at a little over 30% more performance than a 13.5 Tflop 2080 Ti, making the 3080 roughly comparable to an 18-19 Tflop Turing card. That's still a good performance difference given the price, but will these cards actually be available for their advertised prices in any capacity post-launch? If crypto operations decide to buy them up, gamers won't be getting them at any remotely reasonable price. I get the feeling Nvidia made that design decision after seeing increased sales during the mining craze a few years back, and that the architectural change was incorporated into Ampere primarily to make the card more attractive to miners.
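Quick back-of-envelope on that Tflops-versus-gaming gap, using the figures above (and assuming gaming performance scaled roughly linearly with Tflops within the Turing lineup, which is a simplification):

```python
# Uses the Tflops and performance figures from the comment above; assumes gaming
# performance scaled roughly linearly with Tflops within Turing (a simplification).
tflops_2080ti = 13.5
gaming_uplift_4k = 1.33            # 3080 roughly a third faster at 4K
turing_equivalent = tflops_2080ti * gaming_uplift_4k
tflops_3080_advertised = 30
print(f"Turing-equivalent throughput: ~{turing_equivalent:.0f} Tflops")   # ~18
print(f"Advertised FP32 throughput:   ~{tflops_3080_advertised} Tflops")
# i.e. a big chunk of the doubled FP32 rate doesn't show up in today's games.
```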