Nvidia GeForce RTX 2080 Founders Edition Review: Faster, More Expensive Than GeForce GTX 1080 Ti


Yep, I'm aware of DLSS likely having a positive impact on performance at a given quality level; I was simply referring to the overall performance impact of the various new technologies, whether positive or negative. We still can't accurately assess the gains or losses from enabling either of these features, since there are no proper comparisons available. Logically, I would think that if DLSS performed really well, Nvidia would have made sure there were games available with the feature in time for reviews. The fact that there are not leaves one to assume that either the feature doesn't perform quite as well as Nvidia would like people to think, or it has bugs or other issues. From what I've heard, these cards may have been delayed from their previously planned launch by a month or more; if that's true, there's even less reason why the feature shouldn't be available already. These new additions are the only things that could possibly make the new generation of cards worth paying more for, yet they are still completely unavailable.


True, though again, either option is priced well out of range for nearly everyone shopping for a gaming graphics card, so it's not really relevant to most. It might be something worth getting excited about for the fraction of a percent of people in the market for such a card, but for everyone else it doesn't really matter. One could even argue the same about the current pricing of the 2080 as well, though there might be a few more people willing to stretch their budgets in that case.


At least at online retailers in the US, RTX 2080s currently cost around $100-$150 more than a number of 1080 Tis. Nearly all of the 2080 partner cards cost more than the Founders Edition at launch, aside from one that costs just $10 less. And these cards are priced upward of $800, again placing them well above what the vast majority of people are willing to spend on a gaming card.


We can make some educated guesses though. There's no way that an RTX 2070 will perform anywhere close to a 1080 Ti in most games, since that's right where the 2080's performance falls. So, I suspect that it will only be a bit faster than a 1080 in cases where DLSS/RTX isn't being utilized. And Nvidia already announced that the Founders Edition of that card will be $600, and if the partner cards follow a similar trend as with the 2080 launch, they might cost even more. Meanwhile, it's currently possible to buy a new 1080 Ti for as little as $650, or some 1080s for under $450. That means you'll likely be paying over 30% more for DLSS and raytracing. RTX and DLSS might end up being great additions that help justify the substantial price increases, but as of now they don't really do anything.

As for the 2060, that involves a lot more speculation, but if the 2070 doesn't offer much more than 1080-like performance for around $600, we can safely assume that the 2060 will be quite a bit less powerful than that. As an absolute best case scenario, I wouldn't expect more than 1070 Ti performance, though closer to 1070 performance is probably a lot more likely, and RTX/DLSS probably won't be present unless they significantly raise prices over the 1060.
 

bit_user

Titan
Ambassador

You're confusing DLSS and DLAA. The distinction is that DLSS uses a lower-resolution render target, then upsamples with the aid of an intelligent filter trained using Deep Learning. So, it's possible the DLSS framerate can actually exceed rendering at native res w/ no AA.

DLAA is what you described.
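
Roughly, here's how I picture the two pipelines (in made-up Python pseudocode, just for illustration; these function names aren't any real Nvidia API):

[code]
# Sketch of the distinction as I understand it; not real code for either
# feature, just the shape of the two pipelines.

def dlss_frame(scene, native_res=(3840, 2160), render_scale=0.5):
    """Render below native res, then let a trained network upscale the result."""
    low_res = (int(native_res[0] * render_scale), int(native_res[1] * render_scale))
    raw = render(scene, low_res)          # cheaper: far fewer pixels to shade
    return nn_upscale(raw, native_res)    # tensor cores reconstruct the detail

def dlaa_frame(scene, native_res=(3840, 2160)):
    """Render at full native res, then let a trained network clean up aliasing."""
    raw = render(scene, native_res)       # full-cost render, no AA
    return nn_antialias(raw)              # network only removes the jaggies

# Stubs so the sketch runs on its own; a real renderer/network would go here.
def render(scene, res):    return {"scene": scene, "res": res}
def nn_upscale(img, res):  return {**img, "res": res, "upscaled": True}
def nn_antialias(img):     return {**img, "antialiased": True}

print(dlss_frame("demo"))
print(dlaa_frame("demo"))
[/code]

That difference is why DLSS could plausibly deliver higher framerates than native rendering with no AA, while DLAA never could.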
 

Krazie_Ivan

Honorable
Aug 22, 2012
102
0
10,680


i stand corrected - thx :)
all i know of it was from the Nvidia launch presentation, which didn't feel too clear at the time.
 

TJ Hooker

Titan
Ambassador

Are you sure about that? Here's what Tom's said about DLSS: "The only way for performance to increase using DLSS is if Nvidia’s baseline was established with some form of anti-aliasing applied at 3840x2160. By turning AA off and using DLSS instead, the company achieves similar image quality, but benefits greatly from hardware acceleration to improve performance."

https://www.tomshardware.com/news/nvidia-rtx-2080-gaming-benchmarks-rasterized,37679.html

In general, I can't really find much info about DLAA. Is DLSS possibly synonymous with DLAA, or a form of it (seeing that supersampling is a form of anti-aliasing)?

Edit: Nvidia's description also doesn't seem to present it as upsampling to output a higher resolution than what the GPU rendered, but rather as just a better form of AA.

https://developer.nvidia.com/rtx/ngx
 

Yeah, I also think that's the case. The point of supersampling is to avoid aliasing artifacts by combining more samples than there are pixels on the screen. Rendering at a lower resolution would be upsampling, which is pretty much the opposite of supersampling. However, I do suspect that the same hardware could be used for that as well. Actually, looking at that page, Nvidia seems to call it "AI Up-Res" there, something separate from DLSS. With that, it might be possible to render a scene at a lower resolution and upscale it to a screen's native resolution, using machine learning to fill in the missing pixels in a way that results in the image being much sharper than if it were simply scaled up. I'm not sure if that functionality will be available though, or if it would even work in real time on the hardware in these cards, since Nvidia has been a bit vague in their description of the hardware's capabilities.
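
Just to put some rough numbers on why that would matter (this is all back-of-the-envelope, not anything Nvidia has published):

[code]
# Hypothetical pixel counts, just to show the scale of the potential saving
# from rendering low and letting an ML upscaler fill in the rest.
native_px = 3840 * 2160      # 4K output resolution
internal_px = 2560 * 1440    # a plausible lower internal render resolution

print(f"pixels shaded: {internal_px / native_px:.0%} of native")   # ~44%

# If shading cost scaled purely with pixel count and the upscale pass were
# nearly free, a 25 ms frame at 4K could drop toward ~11 ms. In practice,
# geometry, CPU work, and the upscale itself would eat into that.
frame_ms_native = 25.0
print(f"naive best case: {frame_ms_native * internal_px / native_px:.1f} ms")
[/code]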
 

bit_user

Titan
Ambassador

Good points. I don't know exactly where I read everything I think I know about DLAA, but I did find where I first saw it.

If you compare their SIGGRAPH 2018 keynote and their Gamescom 2018 keynote (delivered one week later), you find the same slide with only the name changed.

SIGGRAPH 2018:
22.jpg

Source: https://www.anandtech.com/show/13215/nvidia-siggraph-2018-keynote-live-blog-4pm-pacific
(If the image doesn't load, click the link and search for "Now discussing Deep Learning Anti-Aliasing" - the slide is just above this.)

Gamescom 2018:
25.jpg

Source: https://www.anandtech.com/show/13240/nvidia-gamescom-2018-keynote-live-blog
(If the image doesn't load, click the link and search for "Deep learning super sampling" - the slide is just below this.)

Right above this, you'll note another slide and discussion of super resolution ("getting high resolution from a low res image"), among other things. That probably fed into some misunderstanding of the feature.

So, we should probably assume Nvidia's marketing department simply changed their mind on how to brand a singular, underlying feature. DLAA -> DLSS.

I'm sorry for spreading misinformation, although I think Nvidia deserves much of the blame. Furthermore, I don't believe I fabricated this distinction myself; I'm pretty sure I read it somewhere else, like in a comment in some thread or another.

Anyway, thanks for calling me on it. I'd rather be corrected sooner rather than later. This is what's great about these forums - by sharing and discussing, we can improve the quantity and quality of our information.
 

bit_user

Titan
Ambassador

Yeah, it can be ambiguous, depending on your perspective. You could imagine that, by Deep Learning Super Sampling, they mean that deep learning is used to create a result comparable to what you'd get by supersampling. This matches their description of the technique, where deep learning is used to train a model to predict what the supersampled result would be, given a non-AA input.

In general, AA is a family of techniques used to combat aliasing, and supersampling is a more specific subset of these. Neither really implies upsampling or scaling, but then it's not necessarily advantageous for them to highlight that aspect, if that's what's actually going on.

One way we'd know whether the render target is below native resolution (i.e. without somehow interrogating the rendering pipeline) would be if DLSS can deliver higher framerates than rendering at native resolution with no AA. That would be a dead giveaway, since DLSS cannot take a negative amount of time (at best, it could be "free", though I think even [i]that[/i] probably isn't true).
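
To put numbers on that giveaway (figures invented purely for illustration):

[code]
# DLSS frame time = time to render its (possibly lower-res) target + upscale
# cost, and the upscale cost can't be negative.
t_native_no_aa = 16.0   # ms per frame at native res, no AA (hypothetical)
t_dlss_total   = 13.0   # ms per frame with DLSS enabled (hypothetical)

# If DLSS rendered at native res, its frame time would be at least
# t_native_no_aa + 0. So a lower total is only possible with a smaller target.
if t_dlss_total < t_native_no_aa:
    print("render target must be below native resolution")
else:
    print("inconclusive: DLSS could still be rendering at native res")
[/code]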
 

bit_user

Titan
Ambassador

True.

If you click the "Details" link, under DLSS, they claim:
DLSS requires a training set of full resolution frames of the aliased images that use one sample per pixel to act as a baseline for training. Another full resolution set of frames with at least 64 samples per pixel acts as the reference that DLSS aims to achieve.
(emphasis added)

Well, that's certainly unambiguous enough.
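
For what it's worth, a toy version of the training setup they describe might look something like this (PyTorch, with random tensors standing in for the actual frame pairs; none of this is Nvidia's code):

[code]
import torch
import torch.nn as nn

# "1 sample per pixel" aliased frames are the input ...
aliased_1spp = torch.rand(8, 3, 128, 128)
# ... and frames of the same views at 64+ samples per pixel are the reference.
reference_64spp = torch.rand(8, 3, 128, 128)

# Tiny stand-in network; whatever Nvidia actually uses is presumably far larger.
model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):
    pred = model(aliased_1spp)                        # guess at the clean frame
    loss = nn.functional.mse_loss(pred, reference_64spp)
    opt.zero_grad()
    loss.backward()
    opt.step()

# At runtime, only the cheap 1 spp render is needed; the trained model
# approximates what the 64 spp supersampled result would have looked like.
[/code]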
 

bit_user

Titan
Ambassador

Yes, the FE cards have always been an exclusive product.

The 2-per-customer thing seems applicable only to new or high-demand products. They have a limit of 10 on their GTX 1080:

https://www.nvidia.com/en-us/geforce/products/10series/geforce-gtx-1080/
 

If they did manage to efficiently upscale a lower-resolution render to look nearly as good as native resolution, that could be huge for the 20-series cards, potentially giving them a decent performance boost over their Pascal equivalents in games that support the feature. That could at least help justify some of the increased cost, depending on how much it helped performance. It could also help make up for some of the performance hit resulting from raytracing. However, if that were the case, one would think they would be clearly highlighting that capability and showing off detailed examples of it. Perhaps they simply delayed these features to help clear out existing stock of 10-series cards before their prices need to be reduced further.
 

Olle P

Distinguished
Apr 7, 2010
720
61
19,090
Just like with RTX, there's a bunch of games in the pipeline that are supposed to support DLSS when they launch. To me it seems like Nvidia just didn't hold back the card release long enough after providing the DLSS and RTX specs to game developers.
In this discussion it's irrelevant how many users it applies to. It's about a certain level of performance and the monetary cost to get it. The RTX 2080 Ti is currently the cheapest option available for broad-spectrum gaming at that level of performance.
All RTX cards are in low supply for now. Let's see where the prices stabilise in a few months.
I expect even less in terms of performance, but also AIB pricing of around $500 for the less flashy variants once supplies are established.
At worst the 2060 is just a re-branded 1060 with higher clocks...
 
What are the odds Nvidia decides to drop the price on the 2060 relative to where the 1060 landed? If it doesn't offer a big performance jump, and it lacks the RT and AI features that supposedly justify the higher prices, it would make sense, right? Anyone think that has a chance of actually happening? It should be either that, or using the die shrink to add CUDA cores, but I can't imagine they would risk cannibalizing 2070 sales.
 
If the 2060 were a rebranded Pascal card, I think they would at least rebrand something like a 1070. It has already been over two years since these cards launched, after all, and they probably want something that would encourage people to upgrade. I don't think they would rebrand 1060 cards as 2050 cards though, due to the notably higher power requirements. Even a move to the 12nm fabrication process wouldn't likely be enough to get them down to around 75 watts.
 

bit_user

Titan
Ambassador

Actually, check out the Tesla P4.

http://images.nvidia.com/content/pdf/tesla/184457-Tesla-P4-Datasheet-NV-Final-Letter-Web.pdf

It uses the same base GP104 chip as the GTX 1070 and GTX 1080, but in a 75 W power envelope (and maybe even 50 W?). They just disabled a few more cores and scaled back the clocks. And this is similar to their laptop GPU approach. So, the same should be possible with the GTX 1060's GP106. I'm not saying they will, but they certainly could.

The bigger concern with both of these ideas is cost. Not necessarily the cost of the GPU chip itself, but of the required memory configuration. Using a 256-bit bus + 8-chip GDDR5 configuration won't be cheap, but if they cut it down to the 192-bit / 6-chip solution employed by the GTX 1060 (6 GB), then you don't have much margin for additional speed. Same goes for slotting the GP106 into the 2050 platform.
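
Rough bandwidth math behind that tradeoff, assuming 8 Gbps GDDR5 on both layouts:

[code]
def bandwidth_gb_s(bus_width_bits, gbps_per_pin=8):
    # total bandwidth = bus width (in bytes) x per-pin data rate
    return bus_width_bits / 8 * gbps_per_pin

print(bandwidth_gb_s(256))   # 256 GB/s with the 8-chip / 256-bit layout
print(bandwidth_gb_s(192))   # 192 GB/s with the 6-chip GTX 1060-style layout
[/code]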
 
I did have an FE of the last generation to sell; this is the first time I can't get the FE edition. I wonder why Nvidia did it this way this time.