Nvidia's Current List of RTX 2080-Optimized Games Is Pretty Short

Status
Not open for further replies.

rantoc

Distinguished
Dec 17, 2009
1,859
1
19,780
Considering how long the tech has been around, I'd say it's a rather big list. And considering how easy it is to implement compared to having to "fake" all the effects, my guess is that adoption will be pretty good... at least until the console makers pour their hearts out and ask the devs to stop making PC game graphics too good (or force it with "ok ok, you can't release it on our console if it's sub-par"), and we'll be back to another era of consoles holding the future of gaming back.
 
Aug 21, 2018
1
0
10
Very disappointed that Anthem was not a part of either of these lists. If anyone from the Anthem development team sees this comment, please forward the question up through your management team.
 

hannibal

Distinguished
Also, most games only use part of what could be done, just like the first DX10, DX11, and DX12 games were mainly DX9 games with some minor add-on effects. Only these three cards will have ray tracing; the 2060 series does not... so very few users can actually play any games that have this technology.
But this is promising. In 3-5 years there may also be mid-range cards with this technology, and then most users can benefit from these upgrades if game developers start to use them at a bigger scale. So this is/may be future technology, just don't expect to see it fully utilised for many years!
 

deesider

Honorable
Jun 15, 2017
298
135
10,890

I'd be surprised if Anthem didn't include either of these new features, since it uses the same engine as Battlefield V, which will have the ray-tracing option. Anthem not being released until next year probably means it just didn't make it onto Nvidia's list yet.

 

bit_user

Polypheme
Ambassador

I thought that was describing how they trained the neural network used by DLSS.

I wonder whether they're offering this for upscaling movies to 4K. That would make it the ultimate video processor, especially since they've separately demonstrated temporal interpolation.
 

bit_user

Polypheme
Ambassador

This is the big question.

Here's a quick analysis (note that GTX 1080 assumes 11 GHz GDDR5X; not the 10 GHz used at launch):

Code:
IMPROVEMENT
===========
Pascal          Turing        fp32    bandwidth    launch MSRP     price (FE)
------          ------        -----   ---------    -----------     ----------
GTX 1070        RTX 2070       12%       75%          32%            33%
GTX 1080        RTX 2080        8%       27%          17%            14%
GTX 1080 Ti     RTX 2080 Ti    11%       27%          43%            72%
So, we see a large bandwidth increase for the x70's. But raw compute performance changes little.

The only hope is that architectural changes in the cores manage to boost utilization, although I'd be surprised if we saw the kind of double-digit % increase in that department that would be needed to keep pace with the memory bandwidth improvement. IMO, the main point of the big bandwidth increase is to benefit the Tensor and RT cores.

In any case, we're probably looking at a ceiling of about 30% improvement. That wouldn't be insignificant, but it's smaller than the improvement of both previous generations over their predecessors.
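The bandwidth column above is just (new / old − 1) as a percentage of each card's published peak memory bandwidth. A quick sketch, using the widely quoted GB/s figures (with the GTX 1080 at the assumed 11 Gbps GDDR5X, per the note above):

```python
# Peak memory bandwidth in GB/s: (Pascal card, Turing successor).
# Figures are the published specs; GTX 1080 assumes 11 Gbps GDDR5X.
pairs = {
    "GTX 1070    -> RTX 2070":    (256, 448),
    "GTX 1080    -> RTX 2080":    (352, 448),
    "GTX 1080 Ti -> RTX 2080 Ti": (484, 616),
}

for name, (old, new) in pairs.items():
    improvement = (new / old - 1) * 100  # percent gain over predecessor
    print(f"{name}: {improvement:.0f}%")
```

This reproduces the 75% / 27% / 27% column in the table; the same formula applied to fp32 throughput and MSRP gives the other columns.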
 

cangelini

Contributing Editor
Editor
Jul 4, 2008
1,878
9
19,795


You're right, bit. Updated to reflect.
 