Nvidia Announces GeForce RTX 2080 Ti, 2080, 2070 (Developing)

Page 6 - Tom's Hardware community discussion.
Status
Not open for further replies.
Bottom line: if you're upgrading from a 1080 Ti, the only card worth getting is the RTX 2080 Ti, or you'll be disappointed (until new games with ray-tracing support find their way into the world and onto your computer).
 


The 980 Ti had more cores than the 1080 (2816 vs. 2560) and higher memory bandwidth overall (336 GB/s vs. 320 GB/s), yet the 1080 came out ahead. More doesn't always mean better; a new microarchitecture can be more efficient in how it handles the same games and data.
 


Seems like there may have been a reason for the dodgy performance.

UPDATE: The Shadow of the Tomb Raider devs have responded to the perceived poor performance in a tweet:


Tomb Raider (@tombraider)
The Nvidia Ray Tracing technology currently being shown in Shadow of the Tomb Raider is an early work in progress version. As a result, different areas of the game have received different levels of polish while we work toward complete implementation of this new technology.

5:36 PM - Aug 21, 2018

https://www.forbes.com/sites/jasonevangelho/2018/08/21/nvidias-flagship-rtx-2080-ti-cant-hit-1080p-60fps-in-new-tomb-raider/#470b08791558
 
Keep in mind that a Titan XP is only slightly faster than a 1080 Ti in games, and was generally considered a terrible value for gaming, costing almost twice as much. So, saying that a 2070 outperforms a Titan XP is only a more marketable way of saying that it slightly outperforms a 1080 Ti. So, we're talking about a card that is launching for $600 that may outperform a card that launched for $700 one-and-a-half years ago. It's not quite so impressive when you look at it that way.

They might be calling it a 2070, but it's really priced more like an "80" card, while the 2080 is priced like an "80 Ti" and the 2080 Ti is priced like a Titan. They're shifting the model numbers to hide the fact that, using numbers more representative of their price levels, the raw performance gains might be considered a bit mediocre.

Not sure how that's a bad thing? A GPU that costs less than, and outperforms, a card that went on sale two years ago. I'm not seeing the issue.
 


In my previous comments I was absolutely assuming that they are NOT, and I hope I'm wrong. I did read somewhere, maybe in this thread, that the AI component might help speed things up for non-ray-traced rendering; I believe a 10 to 15 percent improvement was mentioned. If all the silicon on that die (not just the CUDA cores) can work toward processing rasterized images, it would definitely be a big leap for non-ray-traced performance, and again I hope that's the case... but we'll know for sure when the benchmarks come out.
 

Technology is always advancing and it's always assumed each new generation of GPUs will offer better performance per dollar than the last. The concern is that here we have a longer than usual wait between generations (thus in theory allowing more time for technological improvement) and yet the perf/$ improvement here looks to be noticeably less than previous generations.
 


I think you've done an excellent job of summarizing the pricing problem with Turing. We don't just want more performance, we also want more performance per dollar. Turing utterly lacks that.
 


What!?! That's totally illogical. You cannot assess the price/performance ratio... until you know the performance.
 


That's very wise, mlee. We don't know the performance of Nvidia's Turing graphics cards yet, so we can't do a price/performance analysis. What we do know for certain is that the 2080 Ti Founders Edition is priced at $1200. If the MSRP of a 1080 Ti is $700, then the 2080 Ti FE needs to be about 71% faster ($1200/$700 ≈ 1.71) just to match the price/performance ratio of the 1080 Ti. TJ Hooker's statement earlier was that every generation of new technology should have BETTER price/performance, and for that to be true the 2080 Ti needs to beat even that figure. We'll see if it can do it or not, but I'm pessimistic.
 



On the arithmetic: for equal perf/$, the performance ratio has to match the price ratio, so at $1200 vs. $700 that works out to roughly 71% faster ($1200/$700 ≈ 1.71). Your point about the price/performance ratio is still definitely clear, though.

The last I heard, the 2080 Ti is about 10-15% faster than the 1080 Ti in rasterized 3D, although that's not verified outside the hardware specs. That's a long way from the roughly 71% ($1200/$700 ≈ 1.71) needed for price/performance parity at FE pricing. Even if you pick up a non-Founders Edition at the suggested $1000 price point, it would still need to be about 43% faster ($1000/$700 ≈ 1.43). The biggest reason for the large increase in price is the inclusion of ray-tracing capabilities, but just like first-generation 3D cards such as the S3 ViRGE, it will probably take a couple of card generations before that's truly practical. Unless you absolutely must have the newest tech, most sites (Forbes, etc.) are saying to avoid this generation (or wait for prices to fall) if you already have a 1080 Ti.

*update*- According to the article Tom's just posted: https://www.tomshardware.com/news/nvidia-rtx-2080-gaming-benchmarks-rasterized,37679.html , it looks like 30%+ might actually be possible across the board (instead of just cherry picked). This could prove interesting if so.
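The break-even arithmetic in this exchange is easy to get backwards, so here is a minimal Python sketch of it, using only the MSRPs quoted in the thread ($700 for the 1080 Ti, $1200 for the 2080 Ti FE, $1000 suggested for non-Founders cards): for equal perf/$, the performance ratio must equal the price ratio.

```python
def required_speedup(new_price, old_price):
    """Fractional performance gain the new card needs just to match
    the old card's performance-per-dollar.

    Equal perf/$ means perf_new / perf_old == price_new / price_old,
    so the required fractional speedup is the price ratio minus one.
    """
    return new_price / old_price - 1

# MSRPs quoted in the thread.
print(f"{required_speedup(1200, 700):.0%}")  # 71% faster needed for the $1200 FE
print(f"{required_speedup(1000, 700):.0%}")  # 43% faster needed at the $1000 price
```

Anything below those speedups means the new card is a worse deal per dollar than the card it replaces.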
 
Both 1080 Ti and 980 Ti offered close to 50% (or higher) increase in perf/$ compared to the previous gen X80 Ti (with 18 to 21 months between releases). If the 2080 Ti only offers a big enough performance increase to give it roughly equal perf/$ as the 1080 Ti (with 18 months between releases), I'd still consider that at least a little disappointing to be honest.
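As a rough sanity check on that claim, a small sketch with launch MSRPs ($649 for the 980 Ti, $699 for the 1080 Ti) and an assumed ~1.7x generational performance ratio (an illustrative figure, not from this thread) lands in the stated range:

```python
def perf_per_dollar_gain(perf_ratio, new_price, old_price):
    """Fractional improvement in performance-per-dollar between generations:
    (perf_new / price_new) / (perf_old / price_old) - 1."""
    return perf_ratio / (new_price / old_price) - 1

# Assumed ~1.7x perf ratio for 1080 Ti over 980 Ti; $699 vs. $649 launch MSRPs.
gain = perf_per_dollar_gain(1.7, 699, 649)
print(f"{gain:.0%}")  # roughly 58%, consistent with "close to 50% (or higher)"
```

By the same formula, a 2080 Ti that merely matched the 1080 Ti's perf/$ would score 0% here, which is the disappointment being described.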
 
I have mixed feelings pulling me in opposite directions here. I'm cautiously encouraged by the new Tom's article with the graph comparing the 2080 to the 1080, showing a 35% to 55% increase in plain old rasterized graphics performance, keeping in mind that it only shows relative fps and the source is Nvidia themselves. That's the (somewhat) positive.

The absolute negative is the Titan pricing of the Ti model. I don't like that potential 1080 Ti buyers and current owners are being pushed into Titan-level RTX pricing just to get the same 11GB of VRAM (is 8GB OK for 4K?). It seems the ones with deep pockets are being given an ultimatum: fund the ray-tracing component if you still want that 11GB.

I'm on a GTX 970 right now and have been waiting for 1080 Ti prices to come down ever since crypto threw a monkey wrench into the market, so the 1080 Ti is still a real contender for me despite the RTX release. I do like things that are new and shiny, though. The plan could be to buy an EVGA 1080 Ti very soon, see how the benchmarks go with the RTX series several weeks from now, and take advantage of the EVGA Step-Up program if the RTX proves itself worthy.
 
Kind of like how Nvidia bought Ageia in 2008, and therefore PhysX wasn't their tech even several years later, and no work done on it since 2008 should ever be credited to Nvidia. Who is being dense here? The block integrated into their mobile GPU isn't just a Caustic chip bolted onto a mainboard next to the SoC.

They absolutely could have used it much like Nvidia is today, but scaled down for mobile. Kind of like mobile 3D rendering in general. If Apple didn't (inevitably) go all in-house, they would be pushing hybrid rendering using raytracing for selectively adding enhancements. Everything would be running at a lower resolution and fidelity, often at a ~30fps target, but that's nothing new for mobile. Wizard was designed with realtime hybrid rendering in mind. That's just the first-gen too, it was originally targeted for 28nm. Unfortunately for them they relied on Apple far too much.
 


They're cheaper, come with a built-in cooler (tongue), and last at least a dozen years.

You do, however, have to deal with the occasional core dump.

 