Nvidia Reveals Pascal: GTX 1080 And 1070 To Beat Titan X, GDDR5X Debuts

Status
Not open for further replies.

Robert_164

Commendable
Mar 5, 2016
9
0
1,520
Do they outperform Titan X when the workload is not games, but CUDA and OpenCL programs with heavy use of double precision instead? If they do, I'll consider buying for one of my computers, but not the other (which can't boot with Nvidia boards more recent than the GTX 500 series).
 

bit_user

Polypheme
Ambassador
I just wanted to point out that AVX2 is not really comparable to AVX. You'd want to compare AVX2 with SSE2. Likewise, AVX is comparable to SSE.

SSE2 and AVX2 added support for integer data types, whereas SSE and AVX were concerned only with floats.

Every installment of SSE operates on 128-bit registers. With AVX, the vector registers were extended to 256 bits, but pretty much only the floating-point SSEn instructions were widened to use their full width. Then AVX2 came along and extended the integer instructions too.
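A minimal sketch of the difference using the x86 intrinsics, assuming a compiler with AVX2 enabled (e.g. built with -mavx2):

```c
#include <immintrin.h>

/* SSE/SSE2: 128-bit registers (4 floats or 4 x 32-bit ints per op) */
__m128  sse_add_f(__m128 a, __m128 b)    { return _mm_add_ps(a, b); }       /* SSE  (float) */
__m128i sse2_add_i(__m128i a, __m128i b) { return _mm_add_epi32(a, b); }    /* SSE2 (int)   */

/* AVX: 256-bit registers, but only the float/double instructions were widened */
__m256  avx_add_f(__m256 a, __m256 b)    { return _mm256_add_ps(a, b); }    /* AVX  (float) */

/* AVX2: the 256-bit integer instructions finally arrive */
__m256i avx2_add_i(__m256i a, __m256i b) { return _mm256_add_epi32(a, b); } /* AVX2 (int)   */
```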

I'm sure you don't mean to be "spreading misinformation", so hopefully I won't be pilloried for this correction.
 

bit_user

Polypheme
Ambassador
For what do you need double-precision?

If you really care about double-precision GPU compute performance, then you can't do better than the 2-year-old Titan Black (for the money). Pascal is unlikely to change that, as Nvidia intentionally cripples double-precision performance of their consumer GPUs.

The original Titan, Titan Black, and Titan Z put Tesla-class GPUs on consumer boards. Titan X is based on the next generation, which has far worse double-precision performance. And I wouldn't count on them using an uncrippled GP100 in the new Titan.

See https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units#GeForce_700_Series
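If it helps anyone shopping by compute capability rather than spec sheets, here's a minimal sketch (plain C against the standard OpenCL API, error handling omitted) that checks whether each GPU in the system even exposes double precision. It only reports the cl_khr_fp64 extension; the FP64:FP32 throughput ratio still has to be looked up per chip, which is exactly where the consumer cards get crippled.

```c
#include <stdio.h>
#include <string.h>
#include <CL/cl.h>

int main(void) {
    cl_platform_id platform;
    cl_device_id devices[8];
    cl_uint ndev = 0;

    /* Grab the first platform and enumerate its GPUs */
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 8, devices, &ndev);

    for (cl_uint i = 0; i < ndev; ++i) {
        char name[256], ext[4096];
        clGetDeviceInfo(devices[i], CL_DEVICE_NAME, sizeof(name), name, NULL);
        clGetDeviceInfo(devices[i], CL_DEVICE_EXTENSIONS, sizeof(ext), ext, NULL);
        printf("%s: FP64 %s\n", name,
               strstr(ext, "cl_khr_fp64") ? "supported" : "not supported");
    }
    return 0;
}
```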
 

Robert_164

Commendable
Mar 5, 2016
9
0
1,520


Certain BOINC projects, such as Milkyway@Home, do most of their work in double precision, and therefore run much slower on Nvidia boards with a GPU that does not have good support for double precision.

My computers run in a location with a strong restriction on how much electric power they can use. None of the Titan boards for which I've found adequate information will fit this restriction, or I'd already have some type of Titan installed.
 


Thanks for the clarification, no pillorying coming from me! :)

Hopefully my point still stands. To equate the performance of specific new features to the overall performance of a CPU/GPU is pretty irresponsible. It wouldn't stand in a forum debate/discussion, let alone in the announcement of a major product launch like this.
 


I can't find something that's worded in a way you would understand, so you should go looking yourself; start with the wiki and work out from there. I don't bookmark every page I read, and I really don't care whether you think I'm wrong or not. It's not something I'm going to lose sleep over.
 

bit_user

Polypheme
Ambassador
Hmmm... I wonder if you could down-clock an original Titan board to fit your power constraints.

Otherwise, maybe a Skylake-R CPU, with a 65 W TDP and a 72-EU Iris Pro iGPU, would be the best choice. They provide ~330 GFLOPS of double-precision performance and support OpenCL 2.0.

http://www.tomshardware.com/news/intel-skylake-r-i7-6785r-i5-6685r-i5-6585,31726.html
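Rough back-of-the-envelope on where that ~330 GFLOPS number comes from, assuming the Gen9 figures I've seen (4 FP64 FLOPs per EU per clock, since only one of the two FPUs in each EU handles doubles at half rate, and a ~1.15 GHz max graphics clock). Treat the constants as estimates, not official specs:

```c
#include <stdio.h>

int main(void) {
    /* Assumed figures for the 72-EU Skylake GT4e iGPU (estimates, not official specs) */
    const double eus              = 72.0;  /* execution units                           */
    const double dp_flops_per_clk = 4.0;   /* FP64 FLOPs per EU per clock (half-rate FMA
                                              on one of the two SIMD-4 FPUs)            */
    const double clock_ghz        = 1.15;  /* max dynamic graphics frequency            */

    double peak_dp_gflops = eus * dp_flops_per_clk * clock_ghz;
    printf("Peak FP64: ~%.0f GFLOPS\n", peak_dp_gflops);  /* ~331 GFLOPS */
    return 0;
}
```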
 

TJ Hooker

Titan
Ambassador
Ha, so it's not that you don't have evidence, it's just that I wouldn't understand it? Well, that's convenient, and somewhat condescending. Anyway, I realize you have no reason to care whether I believe you. But at the same time, I don't see why people should care when you declare that TSMC's and GloFo's processes are the same if you're unable or unwilling to provide sufficient evidence to back it up. And it would seem that neither of us has much interest in continuing this discussion, so I guess that's that. Cheers.
 
I have to agree with TJ Hooker here; I think you're assuming he won't understand the material. I would also be very interested in seeing any further evidence you have, Mousemonkey; in part it could be used to further my own studies, so any resources are helpful. ;)
 

bit_user

Polypheme
Ambassador
I'm just wondering, what possible outcome could arise from continuing this exchange, that would justify the time and energy you've put into it? What's the likelihood of that happening? And what's the worst that could happen if you just walked away?

Just curious.
 

Sharky36566666

Reputable
Feb 11, 2016
22
0
4,510


Thank you so much for the reply.

Since the 970 was 145 W, do you know if the 1070 will have the same power connector and card size as the 970? I have a new Alienware X51 R3 with the GTX 970; it has a limited power supply and limited space, so the 1070 has to be a very close match.

 


It takes two 970s now to do that in today's more demanding games, and I doubt we will see a twofold increase in performance in just one generation.




The 970 was a unique case... I suspect nVidia took advantage of its dominant market position and used "predatory pricing" at a time when AMD was having severe financial difficulties.

http://videocardz.com/nvidia/geforce-900/geforce-gtx-970
GTX 970 Launch Price = $329 USD
GTX 770 Launch Price = $399 USD
GTX 670 Launch Price = $399 USD

If the 1070 does hit at $379, that's $20 cheaper than the 770 and 670, and considering inflation, that's a pretty good deal.

 


Again ...

Freesync = Vertical Sync
G-sync = Vertical Sync + a hardware module that costs money to provide strobing of the backlight.

Yes, the cost is different because one includes the strobing hardware module and one doesn't.
Yes, some manufacturers do in fact provide a strobing component, the quality of which varies from model to model and adds to the cost.
Yes, the technologies are not compatible (vertical sync has no value with strobing).




The 780 dropped from $700+ to $500 when the Ti came out... and that wasn't a new generation.




Double the performance needs context. If I double RAM performance, for example, that does not equate to a doubling of application or game performance.
 


The 980 Ti is 35% faster (on average) than a 970
Two 970s are (on average) 70% faster than a 970

My son plays on a 144 Hz 1440p monitor w/ twin 970s. Nothing he's played so far has been a problem.





Until THG starts testing what we are all interested in, I'll be looking elsewhere. Seeing a difference of, say, 4%, the reader walks away thinking the performance is comparable. But when one overclocks 30% and the other overclocks 6%, that's a 27+% advantage, and that should factor into most peeps' purchasing decisions.
 

Which is exactly why this site really should correct or amend this launch article... because it's wrong!

If you only read this article you come away expecting a 1080 to push higher frames than SLI Titan X. A quick look through the comments here shows that this is exactly what a number of people are taking away, which is fair enough because it's exactly what this article says. Yet Nvidia themselves never claimed anything like that sort of performance. It's just a quote taken out of context and put up in a major launch article, very irresponsible IMHO.
 

grave13

Honorable
Jan 8, 2013
2
0
10,510
I currently have an i5-3570K, 16GB of DDR3 RAM and, most importantly: G1 Gaming GTX 980 SLI.

So now the hard part: upgrade now to a single 1080? Or wait?
Also, must I fully upgrade my rig / build a new system to use this card properly?

I just don't know. For that price range (which is still very expensive, but cheaper than what the Titan X was), getting 2x the performance of a Titan X is just tempting.

But maybe I should wait for the 1080 Ti or whatever comes next? Pascal is basically still in its diapers; is skipping it wise?

You can wait forever thinking like this.

Whatever you buy and whenever you buy it, a year later there will be a better card.

The biggest leap is always linked to a process change. So after more than 4 years of 28 nm, the wisest decision is to buy the new 16 nm high-end GPU NOW.

New architecture, a new process, and a great leap in perf and perf/watt.

The question is: do you really need the new card? If yes, buy NOW, because waiting doesn't make sense at all.
 

bryanlarsen

Reputable
May 27, 2014
4
0
4,510
"For VR rendering, Nvidia takes this idea even further. It dedicates four view ports per eye for an HMD and prewarps the image before hitting the lenses. The end result is a clearer image with more accurate proportions."

That's wrong. Normally, VR uses a pixel shader to warp a flat projection onto a spherical one. This is accurate, but means that the GPU ends up rendering pixels that get thrown away by the shader. The described tech uses 4 flat planes to approximate a spherical projection. That's *less* accurate, but faster. They didn't say whether they also add a pixel shader to get the accuracy back but lose some of the performance gain. Perhaps that's optional.
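For anyone curious what that warp actually does, here's a minimal sketch of the usual radial (barrel) distortion model in plain C rather than shader code; the k1/k2 coefficients are made-up placeholders, not values from Nvidia or any HMD vendor.

```c
#include <stdio.h>

/* Radial (barrel) distortion commonly used for HMD lens correction:
   r' = r * (1 + k1*r^2 + k2*r^4). Coefficients here are illustrative only. */
static void lens_warp(double x, double y, double k1, double k2,
                      double *out_x, double *out_y) {
    double r2 = x * x + y * y;                 /* squared distance from lens center */
    double scale = 1.0 + k1 * r2 + k2 * r2 * r2;
    *out_x = x * scale;
    *out_y = y * scale;
}

int main(void) {
    /* Sample a point two-thirds of the way toward the edge of the view */
    double wx, wy;
    lens_warp(0.66, 0.0, 0.22, 0.24, &wx, &wy);  /* k1/k2: placeholder values */
    printf("(0.66, 0.00) -> (%.3f, %.3f)\n", wx, wy);
    return 0;
}
```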
 