Nvidia Teases GeForce RTX 2080 Performance in Rasterized Games



That's pretty much correct in terms of price structuring, but there is no such thing as a real-world "average" outside of intangible statistics, which mean nothing to individuals. There was never a standard for the terminology of what mainstream (aka mid-range) GPUs were in pricing. Theoretically you could sub-class the mid-range cards from lower to mid to upper mid-range, not unlike how the generically categorized middle class is broken down (lower middle class, solid middle class, upper middle class).

When you go back in history, there hasn't been a big change in the pricing structure of Nvidia's GTX x60 and x70 series over the generations. Case in point: the launch price of the 1GB GTX 460 in 2010 was $229. That was the exact same launch price as the 6GB GTX 1060. Same with the 1.28GB GTX 470 and the GTX 970 non-FE launch prices: $329.

However, that changed after the 970, when Nvidia's non-FE launch price for the 1070 jumped $50 to $379. Apparently that number is going to jump again with the 2070; according to Guru3D, the non-FE launch price will be $499, which moves it into high-end x80 GPU pricing territory from a historic perspective. Things have changed, and what were once considered mid-tier GPUs are moving into the next tier in performance (and pricing).

Afterthought: one thing that can be taken into consideration for this GPU price jump is the memory price increases over the last year or two. I'd love to see historic GDDR market prices graphed against new-generation GPU VRAM increases and see where the two curves meet. It might make for an interesting, and enlightening, visual.
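Out of curiosity, here's a rough sketch of how that chart could be put together. The per-GB prices below are pure placeholders (I don't have historic GDDR spot prices to hand), and the VRAM figures are just flagship examples, so treat this as layout only.

Code:
import matplotlib.pyplot as plt

# Placeholder numbers purely to show the layout of the chart -- NOT real
# GDDR spot prices. Swap in actual market data to make this meaningful.
years = [2010, 2012, 2014, 2016, 2018]
gddr_dollars_per_gb = [12, 9, 7, 6, 8]     # hypothetical $/GB
flagship_vram_gb = [1.5, 2, 4, 8, 11]      # GTX 480 / 680 / 980 / 1080 / 1080 Ti class

fig, ax_price = plt.subplots()
ax_price.plot(years, gddr_dollars_per_gb, "o-", label="GDDR $/GB (placeholder)")
ax_price.set_xlabel("Year")
ax_price.set_ylabel("$ per GB")

ax_vram = ax_price.twinx()                  # second y-axis for VRAM capacity
ax_vram.plot(years, flagship_vram_gb, "s--", color="tab:red", label="Flagship VRAM (GB)")
ax_vram.set_ylabel("VRAM (GB)")

fig.legend(loc="upper left")
plt.show()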
 

I disagree. I see it as a somewhat misplaced bet on VR, where image fidelity plays a substantial role in trying to fool the brain. I think they expected VR to be more popular than it currently is.
 

It's funny you regard it as free.

Do tensor cores not increase die size (and thus price)? Do they not burn power? Is this not die space and power that could instead go towards more CUDA cores to boost shading/AA performance?
 

@solonar's post isn't entirely accurate, but the core point about the original vs. the refresh of the 1080 is on point. It launched with 10 GHz GDDR5X and was later refreshed with 11 GHz memory.
 
@MCMunroe
That's where I will disagree.
I just recently moved from 4K@60Hz (20ms input lag) to 3440x1440@120Hz (4ms input lag), and the difference is like night and day.

I miss full 4K a little bit, mostly for video, but I'm not going back to the sub-60 fps range.
 
It's amazing how much controversy and debate there is over GPUs we know very little about. Most of these comments are nothing but speculation and conjecture.

The only thing we do know is the price, and EVERYONE knew these GPUs were going to be expensive. So once we get past the "fake outrage" over pricing, we can move on. Of course these are not mainstream GPUs; $500, $700, and $1k is a lot of money to drop on one component when entire "midrange" systems can be had for the same price. But what they do show is what to expect from future mainstream cards.

For example, I see a NIB GTX 1080 on eBay for $399 (and even that is still inflated from crypto). For about a year the 1080 was the king of consumer GPUs, and it is still widely considered the second-best gaming GPU in the world. At the 1080's release (before the crypto craze), it cost about twice what you can get one for today. Today's HEDT GPUs (because that is what the 2080 Ti is) will be the mainstream GPUs of tomorrow.

The fact is, if Nvidia released an HEDT GPU that could run games at max settings at 4K and 100+ Hz, it would not sell well. Not because it wouldn't be a good GPU, but because there are only two panels on the market that run 4K/144Hz, and they are basically overpriced prototypes, coupled with the fact that HDR support is limited to only a handful of games. The technology is just not there yet.

So pushing ray tracing seems like a pretty good strategy. If boosting fps for displays that barely exist yet is not a good option, why not do something that makes games actually look better? We will see how ray tracing adoption in games goes moving forward, and what the performance impact is. If only a few games adopt it, then there is not much of a point. There is a good chance that it is a gimmick like VR in the '90s, or that the tech is just not there yet. Nvidia has the unique opportunity to take this risk by promoting ray tracing because right now their competitor does not have any products in the HEDT space. Without competition, Nvidia can afford to take a risk here with how games are rendered.

For me, I am really curious about what these new GPUs will bring. Not because I will buy one (I will wait for unbiased reviews), but because it shows what the future of gaming may look like. There are only a couple of games that my 1080 Ti can't average 60 fps in, and anything over 60 is useless for my monitor. So even if there were a 50% bump in performance, 95% of the games I play would see zero impact because they already push my monitor to its limits.

In the next generation or two, mainstream GPUs will have performance similar to what these HEDT GPUs have today. Just like the HEDT CPU market: the i7-6900K cost $1k for an 8-core part, and two years later you can get the 2700X with the same performance for a third of the cost.
 


Being skeptical and not taking everything at face value does not equal hate. You need to untrain your brain from that kind of extremist thinking. The fact is, Nvidia has a history of deception and lies to get where they are, so I could never take them at their word. If they had a history of honesty and honor, I could take these charts at face value, but that's not the case with them. We'll have to wait for tech reviewers like Tom's to get real hardware and do real benchmarks to determine whether or not these cards are worth the massive asking price.

 


I never said "free" in that quote. However, my point stands. If DLSS can deliver better image quality than current AA without the insane performance drop that good AA gives us, it's better than compromising. Right now the big thing to do is run 4K, or do DSR (render at 4K on the GPU, then scale down to 1440p or 1080p), since 4K doesn't always need AA. However, that's the same thing we said about 1080p back when it was the more extreme option and lower resolutions were mainstream. Eventually 4K might need AA, but any technology that can do this better is good in my book.
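For what it's worth, the core idea behind DSR/supersampling is just "render big, then filter down." Here's a toy sketch of that downscale step in Python/NumPy for the 4K-to-1080p case, using a simple 2x2 box filter; Nvidia's actual DSR filter is fancier, this is only to show the principle.

Code:
import numpy as np

def downscale_2x(frame: np.ndarray) -> np.ndarray:
    """Average each 2x2 block of an (H, W, 3) frame into one output pixel."""
    h, w, c = frame.shape
    return frame.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

frame_4k = np.random.rand(2160, 3840, 3)   # stand-in for a frame rendered at 4K
frame_1080p = downscale_2x(frame_4k)       # scaled down to the display resolution
print(frame_1080p.shape)                   # (1080, 1920, 3)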



I doubt the improvement in performance is enough to make a drastic change. I also don't know which one they used; I would assume a current FE model, so I assume they are running a GTX 1080 with the 11 GHz memory.

Of course, the real tests will come when the AiB cards arrive, to see what they can get out of these GPUs with better cooling and what performance increase we should see under those conditions.
 

The problem here is that the 2080 should not be compared as if it were a successor to the 1080, because it's not priced like a 1080. Nvidia played a little trick here in adjusting the model numbers at any given price point to give the impression that there's a larger performance gain than there really is. The 2070 is launching at a price you might expect a 2080 to launch for, the 2080 is launching at a price you might expect a 2080 Ti to launch for, and the 2080 Ti is launching at a price you would expect a Titan to launch for. They simply shifted the model numbers to make the generational gains look larger, and maybe convince some people to go with a higher-priced card than they might have otherwise.

Compare these 2080 numbers against 1080 Ti performance, and they represent a much less significant performance gain. In fact, the 2080 Founders Edition costs $100 more than the 1080 Ti Founders Edition did when it launched a year and a half ago, so yes, more performance than a 1080 Ti should absolutely be expected from it.

Comparing an RTX 2080 against a GTX 1080 this generation would be a bit like comparing a GTX 1080 against a GTX 970 the previous generation. You can, but the information is of limited relevance, since the cards are launching in totally different price brackets. The 2070 should be treated as the 1080 successor, and the 2080 should be treated as the 1080 Ti successor.
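To put rough numbers on that bracket shift, here are the launch prices as widely reported (FE / base MSRP; going from memory, so treat them as approximate):

Code:
LAUNCH PRICES (FE / base MSRP, approx.)
=======================================
GTX 1080    (2016):   $699 / $599
GTX 1080 Ti (2017):   $699 / $699
RTX 2070:             $599 / $499
RTX 2080:             $799 / $699
RTX 2080 Ti:         $1199 / $999

Lined up like that, the 2080 lands where the 1080 Ti used to sit, and the 2080 Ti is in former Titan territory, which is exactly the shift described above.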

And I think this relates back to that whole GPP thing Nvidia was trying to push earlier this year, before it got a lot of bad press for being anti-competitive. Why would Nvidia try to lock competitors out of manufacturers' gaming brands when they were already in a strong position as the market leader? An obvious answer is that they didn't feel quite as confident in their next generation of graphics cards, and were concerned about their competitors potentially becoming more competitive over the next year or two. It seems like they tried to cover for limited performance gains by adjusting the model numbers of their cards to give people the impression that raw performance had improved more than it actually had, only without adjusting the prices accordingly.

The cards themselves are almost certainly an improvement over the similarly-priced models from the previous generation, and it's good to see new features like raytracing, even if they might only see limited use this generation, but I think people expected a bit more performance for their money after two and a half years. These extra features might turn out to be great, but it remains to be seen how much they get utilized by developers, and how much of a performance impact they might have.
 

Yes, it was. They advertised incorrect specs for both the ROP count and amount of L2 cache, in addition to failing to disclose that 0.5 GB of the VRAM had crippled bandwidth compared to the advertised bandwidth.

As an aside, a 1 TB hard drive really is a true terabyte, based on the SI definition of the tera prefix meaning 1 trillion (10^12). The issue arises from the fact that a computer defines a TB differently, namely as 2^40 bytes (which is now referred to as a tebibyte, TiB).
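A quick back-of-envelope check of that discrepancy (Python, nothing GPU-specific):

Code:
terabyte = 10**12    # SI "tera": what the drive maker advertises
tebibyte = 2**40     # what the OS actually counts in (TiB)

print(terabyte / tebibyte)   # ~0.909
print(terabyte / 2**30)      # ~931, so a "1 TB" drive shows up as roughly 931 GiB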
 
Actually, there was a similar controversy over hard drive space. Not over the bits/bytes distinction, but over formatted vs. unformatted capacity, as well as file size vs. space on disk. Most of that went away with the shift to 64-bit, along with HD capacities increasing at such a rate that a few megabytes (at the time) no longer seemed significant.
 

Then why'd they change it? 11 GHz gives you 10% more memory bandwidth. For titles that are bandwidth limited, that's like the difference between 50 and 55 fps. Definitely not insignificant.
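For anyone wondering where the 10% comes from, here's a rough sketch assuming the GTX 1080's 256-bit memory bus (the marketing "GHz" is really the effective Gbps per pin):

Code:
bus_width_bits = 256                      # GTX 1080 memory bus
for rate_gbps in (10, 11):                # effective GDDR5X data rate per pin
    bandwidth_gb_s = rate_gbps * bus_width_bits / 8
    print(f"{rate_gbps} Gbps -> {bandwidth_gb_s:.0f} GB/s")
# 10 Gbps -> 320 GB/s, 11 Gbps -> 352 GB/s: a flat 10% increase.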


This one is obvious: they used the thing which will show the biggest improvement for the 2000-series. And "current FE model"? I think there was only one FE model. In any case, if you check their web site, the GeForce GTX 1080 specifies its memory as GDDR5X 10 GHz.

https://www.nvidia.com/en-us/geforce/products/10series/geforce-gtx-1080/


What's funny about this is that Nvidia is listing higher boost clocks for its 2000-series FE cards than the non-FE cards. How is that enforced? When can we expect AiB cards with equal or higher clocks?
 

Really? Then what did you mean by this?

If you can get the same or better results without compromising performance I say why not.
Same or better as what? As non-AA? To me, that sounds like you're saying DLAA is free.

Well, why not ditch the Tensor cores, add more CUDA cores, and your AA performance goes up for the same cost and power budget.

What's more: DLAA is a post-processing operation. It's difficult for me to see how it won't add latency & potentially reduce the framerate.
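To make the concern concrete, a trivial bit of frame-time arithmetic; the 1 ms pass cost is a made-up placeholder, not a measured number:

Code:
base_frame_ms = 1000 / 60        # 16.7 ms per frame at 60 fps
post_pass_ms = 1.0               # hypothetical cost of a post-process AA pass

# If the pass can't be overlapped with other work, it cuts throughput...
print(f"{1000 / (base_frame_ms + post_pass_ms):.1f} fps")   # ~56.6 fps
# ...and even if it can be overlapped, it still shows up as added latency
# before the finished frame reaches the display.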
 


https://www.forbes.com/sites/antonyleather/2017/06/13/nvidias-new-gtx-1080-with-11gbps-g5x-memory-tested-how-much-faster-is-it/#5d486c811524

1-4 FPS depending on the game and the settings. I still doubt it will be enough to make that much of a difference.

It's not obvious which model or specs they used, because many manufacturers released a BIOS for the FE versions that clocked the memory to 11 GHz after that came out.

As for clocks, I have no idea, as I haven't seen any official specs from the AiB partners yet. You can't pull up anything on Asus's site beyond basic specs; no core or memory clock speeds have been posted yet. The AiB GTX 1080s were vastly faster than the FE because the aftermarket cooling could handle more than the FE cooler could.



Do you see the word "free" in there? Because I do not.

And by "same or better" I mean the same as using the best available AA; that is how they are comparing this. The entire purpose of DLSS, not DLAA, is to provide equal or better visuals while hurting performance less. Hence what I said: IF (that's the key term I have used) you get equal or better quality without the same massive drop in performance, how is that bad?

Having dedicated hardware to handle this, the Tensor cores, is better than having none and relying on software (DX) or on multi-function programmable hardware that can do the task but is not dedicated to it.

Let's put it this way: I am waiting for reviews. My whole point is that if this is possible, then it's a great boon for gaming, since AA (good AA, not FXAA) is a major performance hit. If it's not, then it's just another gimmick. Right now it seems it will be possible in a handful of games; one that piques my interest is FF XV, since I do like to play that. But like any new idea or tech, it will take time to be adopted.

Also, while this is nVidia's implementation in the form of RTX, the capability will be available to others, and I am willing to bet AMD will do their own version of it, be it using just DXR or making their own like they did with TressFX (their counterpart to Nvidia's HairWorks). However, I can't be sure whether AMD will build dedicated hardware for this task or will still rely on programmable shader cores to do it instead.
 

Nvidia themselves have advertised the clocks of the FE and non-FE versions.

Code:
BOOST CLOCKS
============
                FE    Reference
               ----   ---------
RTX 2070:      1710     1620
RTX 2080:      1800     1710
RTX 2080 Ti:   1635     1545
Source:
https://images.anandtech.com/doci/13240/33.jpg
https://images.anandtech.com/doci/13240/29.jpg
https://images.anandtech.com/doci/13240/30.jpg

The question is: are AiB's in any way restricted from upstaging Nvidia's own FE versions?

Edit: here's a breakdown of offerings from different AiBs, including some boost clocks. It seems Nvidia isn't constraining AiBs from surpassing the 2080 FE's boost, or at least from equalling the 2080 Ti FE's boost.

https://www.anandtech.com/show/13268/custom-geforce-rtx-2080-quick-look
 

You're still not acknowledging that there's any trade-off involved; you're talking about it as if it could be a pure "win". Something that provides a benefit with no downside can be seen as "free", in a sense.

Let's not get (or stay) hung up on this point. If you say you don't consider it free, then I accept that. I'm just explaining how I saw it.

So, here's the pessimistic view:

■ Only works with specific games.
■ Adds latency (likely).
■ Hurts framerates (probably).
■ Tensor cores are not independent - they're used by dispatching instructions and data from CUDA cores. So, anything using the Tensor cores in fact does burden the CUDA cores that could be doing other things.
■ Requires Tensor cores, which burn die space and power that could be used by more CUDA cores that would benefit all games (whether in the form of enabling faster AA or however you choose to use the additional computing power).


Sure, if you take the feature as a given. Also, Tensor cores are multi-function programmable engines, BTW.


Sure, but reviewers won't know how much die area or power is consumed by the tensor cores. They can hopefully quantify any impact on latency and framerate, at least.

That said, I'm willing to accept that it's a significant net win. Perhaps a revolutionary development in gaming graphics, even. But that's certainly not a given. We need to see the evidence and thorough analysis. I weight Nvidia's propaganda as little more than noise.


Wait, what? We're talking about DLAA/DLSS. What does DXR have to do with it?
 


Not going over it again, but I will address the last part.

DLSS is an RTX feature from nVidia. However, it is made possible thanks to DXR. It is a feature AMD will probably be able to design their own version of.

As in the example I gave, TressFX/HairWorks: similar principles and technologies from two competitors. DXR is what really allows a lot of the features Turing touts to exist. I would not be surprised if AMD finds a way to match this; otherwise, if it does somehow live up to nVidia's expectations, how will AMD compete? What game wouldn't want to include this if it does what nVidia says it does?
 

A lot of points you didn't even touch once.


Let's break that down. From https://developer.nvidia.com/rtx

The RTX platform provides software APIs and SDKs running on advanced hardware to provide solutions capable of accelerating and enhancing graphics, photos, imaging and video processing. These include:

■ Ray Tracing (OptiX, Microsoft DXR, Vulkan)
■ AI-Accelerated Features (NGX)
■ Rasterization (Advanced Shaders)
■ Simulation (CUDA 10, PhysX, Flex)
■ Asset Interchange Formats (USD, MDL)


As you see, only the Ray Tracing feature is covered by DXR. Indeed, DXR stands for "DirectX Raytracing". Check the GDC slides, and all you will see is ray tracing.

https://msdnshared.blob.core.windows.net/media/2018/03/GDC_DXR_deck.pdf

The games supporting DLAA/DLSS must be designed to do so, certainly involving the use of proprietary Nvidia libraries.

Indeed, according to https://developer.nvidia.com/rtx/ngx

NGX software: The RTX feature is integrated into any game, application or plugin through the use of the NGX SDK. This is the main code for the AI-accelerated functionality, but it also requires a trained neural network in order to function. NGX has been designed to be thin - a header file that points to a DLL in the NVIDIA driver, making it simple to augment any application with these RTX features. The NGX SDK provides access to a number of RTX technologies and features for games and digital content creation and editing applications. The capabilities of NGX are tightly coupled to the NVIDIA driver and hardware and make use of Tensor Cores found in RTX-capable GPUs.
 
I am beginning to wonder about Tom's Hardware. Are they taking advantage of their followers and profiting by leading the flock to water?
 