Eggz
blazorthon :
[SEE PREVIOUS POST FROM blazorthon]
Point taken on "possibility." I guess it is possible for even a resolution of 0x0 to bog down the best card with something like a prime number search, or calculating decimals of pi forever. Perhaps the better way to frame the issue is based on what happens in the marketplace. Developers do offer some graphical features that are hard to notice, but they won't go so far as to offer unlimited AA - or very high multiples beyond 32x (quick arithmetic on that below).
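To put rough numbers on why ever-higher AA multiples get out of hand, here's a toy back-of-the-envelope sketch in plain Python. It's my own illustration, not how real AA modes work - I'm assuming pure supersampling, where every sample is fully shaded, and ignoring MSAA shortcuts, bandwidth, and per-sample shader cost. The point is just that at 32x even 640x480 means shading more samples per frame than 1080p with no AA:

```python
# Toy arithmetic: shaded samples per frame under supersampling AA.
# Ignores real GPU details (MSAA optimizations, memory bandwidth,
# per-sample shader cost) - it only shows how the AA multiplier
# scales the workload.

def shaded_samples(width, height, aa_factor=1):
    """Samples shaded per frame at a given resolution and SSAA factor."""
    return width * height * aa_factor

low_res_32x = shaded_samples(640, 480, aa_factor=32)  # 9,830,400
full_hd_no_aa = shaded_samples(1920, 1080)            # 2,073,600

print(f"640x480 @ 32x SSAA: {low_res_32x:,} samples/frame")
print(f"1080p   @ no AA:    {full_hd_no_aa:,} samples/frame")
print(f"Ratio: {low_res_32x / full_hd_no_aa:.1f}x")   # ~4.7x the work
```

So a card can be buried at a tiny resolution just by cranking the multiplier - which is exactly why nobody ships unlimited AA.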
Just like you can expect the 680, paired with a sufficient CPU, to crush 640x480 for the life of that card, you should be able to have an analogous expectation for 1080p with a more powerful card - though that card may not yet exist. Resolutions do create boundaries. When the 680 was the king, 1080p was still standardizing. People were running 720p and lower resolutions, and 1080p games were programmed with the hardware of the day in mind. At the same time, though, Nvidia was aware of Crysis 3's impending release when the 680 launched, and the 680 was never up to snuff for that game. Not even the 690 could keep a minimum of 60 fps with everything cranked.
Going back as far as Crysis Warhead, a 2008 game, the 680 averaged less than 50 fps - and the 680 came out in 2012 (link to benchmark graph). Running 1080p was always a stretch for the 680 when it came to very intensive graphics, since the resolution was still maturing. It's mainly a historical point.
But there were established resolutions that the 680 would dominate - no question. For instance, 640x360 would obviously be fine even for modern titles on the 680 with a good CPU. I'm not denying that the 680 could run 1080p, since there's plenty of documentation saying that happened for a lot of people. It's just that the card never dominated that resolution the way it dominates lower resolutions, by which I mean it won't present a graphics bottleneck in actual games.
When I mentioned the current flagship cards (Titan X, 980 Ti, Fury X, and Fury Nano), I was trying to point out that I think those cards are just now beginning to approach the point where people can scoff at 1080p as incapable of creating a graphics bottleneck in actual games, though I don't think we're quite there yet. It's still roughly 2.1 million pixels per frame to push. That's a lot even for the Titan X, which only maintains an 81 fps minimum on Crysis 3 with all settings maxed, except no AA (click to see page with benchmarks).
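For what "roughly 2.1 million pixels" actually works out to, here's the quick arithmetic in the same plain-Python style. The 81 fps minimum is the benchmark figure from above; everything else is simple multiplication, and of course real load depends on shader complexity per pixel, not just pixel count:

```python
# Rough throughput arithmetic for the 1080p claim above, assuming the
# 81 fps minimum quoted for the Titan X on Crysis 3 (maxed, no AA).

width, height = 1920, 1080
pixels_per_frame = width * height   # 2,073,600 - roughly 2.1 million

min_fps = 81
pixels_per_second = pixels_per_frame * min_fps

print(f"Pixels per frame: {pixels_per_frame:,}")
print(f"At {min_fps} fps minimum: {pixels_per_second:,} pixels/second")
# ~168 million fully shaded pixels every second, at worst case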
This is all assuming gaming graphics, though. If we start talking about photo-realistic renders, then I don't think any processor of any kind currently exists that can do that in real time at modern resolutions. Not only is it too much data to crunch in real time, but there hasn't been enough R&D time or money put into making the formulas and code efficient enough to properly utilize the processors. Graphics processing is still relatively new, and I think it will be a long time before computers can generate images that are indistinguishable from what cameras capture - all in real time, that is. Way too many gaps still need to be filled, and not even decades of time and billions of dollars have overcome the challenges. So I won't begin to pretend that I have the answer to that set of problems.