Nvidia GeForce GTX 1080 Pascal Review


Mcrsmokey

Commendable
Jun 21, 2016

I can tell you now that with no OC my Titan X gets 80+ fps at 4K maxed out. With an OC it's in the 100s. There is no way you got 43 fps max on a Titan X, not unless there are serious issues with your PC or GPU. I'm getting double that and I'm only running an i5 CPU. I can prove this if anyone thinks it's not true.

 

BradleyJames

Reputable
Feb 18, 2014
I've got two 980 Tis in SLI right now, a ROG Matrix Platinum and an EVGA FTW. I can overclock the Matrix to 1504 and the FTW to 1448. I've been doing research to see if there is much of a difference between the overclocked Matrix and a 1080. From my own benchmarks on the Matrix (just the Matrix, not in SLI), the 1080 isn't much better, maybe 5 to 10 percent. It seems that the only thing that puts the 1080 ahead is a slightly faster clock speed. That's just my input on it.
 


You are CONFUSING terms, and clearly have a limited understanding of GPU architecture. A lack of ACEs (AMD-style Asynchronous Compute Engines) does not mean a lack of asynchronous compute hardware.

Both cards have hardware-level asynchronous compute, though the implementations differ slightly. Here's a link:
http://www.eteknix.com/pascal-gtx-1080-async-compute-explored/

Testing has already shown that enabling asynchronous compute in Pascal improves FPS (it helps reduce STUTTER as well). Yes, AMD's method is likely slightly better, but this is only one part of the story. There are other software and hardware changes that can tip the balance.

Do you even know what "asynchronous compute" means?
It simply means that work can be processed in parallel. In this context we're referring to Graphics AND Compute (for example, textures and PhysX). When the work is synchronous it would be Graphics first, then Compute second.
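
If a code sketch makes that clearer, here's a minimal CUDA example (the kernel names and workloads are made up purely for illustration, nothing from the article). Launched on one stream, the two jobs run strictly back to back; launched on two independent streams, the GPU is allowed to overlap them:

#include <cuda_runtime.h>

// Made-up kernels standing in for a graphics-type job and a compute-type job.
__global__ void shadeTexels(float *texels, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) texels[i] = texels[i] * 0.5f + 0.25f;   // pretend shading work
}

__global__ void stepPhysics(float *bodies, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) bodies[i] += 0.016f;                    // pretend physics step
}

int main()
{
    const int n = 1 << 20;
    float *texels, *bodies;
    cudaMalloc(&texels, n * sizeof(float));
    cudaMalloc(&bodies, n * sizeof(float));

    // "Synchronous" would be launching both on one stream: graphics first,
    // compute second. Two streams tell the GPU the jobs are independent,
    // so it is free to run them concurrently.
    cudaStream_t gfx, compute;
    cudaStreamCreate(&gfx);
    cudaStreamCreate(&compute);

    shadeTexels<<<(n + 255) / 256, 256, 0, gfx>>>(texels, n);
    stepPhysics<<<(n + 255) / 256, 256, 0, compute>>>(bodies, n);

    cudaDeviceSynchronize();            // wait for both jobs to finish
    cudaStreamDestroy(gfx);
    cudaStreamDestroy(compute);
    cudaFree(texels);
    cudaFree(bodies);
    return 0;
}

How well that overlap actually happens, and how finely the hardware can interleave the two jobs, is exactly where AMD's and NVIDIA's implementations differ.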

The problem with synchronous execution is that if a bundle of work is too large, it takes too long to process and can cause issues like STUTTER. If the bundle is too small, we end up wasting precious processing cycles.

Preemption is used to halt and later resume less essential work so that high-priority tasks, such as asynchronous timewarp in VR, can run first. Timewarp has to finish on time to avoid the head-tracking lag that can make people feel sick.
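
The rough CUDA analogy for this is stream priorities. Again the kernels are invented stand-ins, and this is not the actual VR driver path, but it shows the idea of letting a short, urgent job jump the queue ahead of a long-running one:

#include <cuda_runtime.h>

// Invented kernels: a long frame render and a short, latency-critical job.
__global__ void renderScene(float *buf, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) buf[i] = buf[i] * 1.001f;       // stands in for heavy rendering
}

__global__ void timeWarp(float *frame, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) frame[i] += 1.0f;               // stands in for the urgent reprojection
}

int main()
{
    const int n = 1 << 20;
    float *buf, *frame;
    cudaMalloc(&buf, n * sizeof(float));
    cudaMalloc(&frame, n * sizeof(float));

    // Ask the driver what priority range it supports (numerically smaller
    // values mean higher priority on current CUDA drivers).
    int least, greatest;
    cudaDeviceGetStreamPriorityRange(&least, &greatest);

    cudaStream_t background, urgent;
    cudaStreamCreateWithPriority(&background, cudaStreamNonBlocking, least);
    cudaStreamCreateWithPriority(&urgent, cudaStreamNonBlocking, greatest);

    // The scheduler favours the high-priority stream; on Pascal, finer-grained
    // preemption lets it break into lower-priority work that is already running.
    renderScene<<<(n + 255) / 256, 256, 0, background>>>(buf, n);
    timeWarp<<<(n + 255) / 256, 256, 0, urgent>>>(frame, n);

    cudaDeviceSynchronize();
    cudaStreamDestroy(background);
    cudaStreamDestroy(urgent);
    cudaFree(buf);
    cudaFree(frame);
    return 0;
}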

Summary:
GPUs are complicated beasts. You can draw some cursory conclusions from reading a few reviews, but most people fall into the trap of thinking they have a good grasp on the situation when they do not.

Let's also not forget performance in existing and older games, and other things like NVIDIA's VR plugins. When I buy a video card I look at the BIG PICTURE.

I'm rooting for AMD. I've even recommended the RX-400 series (when prices are reasonable), but I do so with a good understanding of the hardware and software. I'm buying a GTX 1080; if AMD had something of similar performance I'd give it some serious thought, but their financial situation and relatively poorer software support in recent times sway me to NVIDIA. If they can improve and maintain that, I'll reconsider for my next card purchase (in about five years).

(The RX-480 roughly matches the R9-390 on average, but swings from about 80% to 120% of its performance. Basically, AMD made some architectural changes that are meant to HELP future games but HURT older games. AMD seems to have a history of making changes that benefit the future at the expense of the present.)

Anyway, you may wish to do more reading if you plan to comment in the future.
 

xdidgex

Commendable
Sep 5, 2016
I have a little problem regarding the Showcase 2013 benchmark test. We bought a workstation with this configuration:
Windows 10 Pro
Xeon E5-2687W (x2)
64 GB RAM
GTX 1080
Actually we are using Showcase 2016 for rendering our machines. The previous computer was an i7-4770 with 16 GB RAM and a GTX 750 Ti, and it was far too weak to handle the large models with 6,000,000 and more LODs (around 4 fps or less, some larger models even under 1 fps). So we decided to buy a new workstation with the configuration mentioned above. The GTX 1080 was purchased on the basis of the benchmarks provided on this site. The problem now is that our new system still performs badly, if not even worse... The driver we are using for the GTX is 372.20.

Please help me out here... thanks.
 

mapesdhs

Distinguished


What results do you get when running Viewperf 12 for the Showcase 2013 test? Can you post them please? Or feel free to PM/email me if you don't want to clog up the thread. I've accumulated some Viewperf 12 data here (set Page Style to None from the View menu in Firefox; other browsers may vary):

http://www.sgidepot.co.uk/misc/viewperf.txt

For testing a 980 Ti I used the 364.72 drivers and the results appear to correlate ok with the 980 and 1080 numbers in the review here.

If you're getting very slow performance running the benchmark or real-world tests, what does the system appear to be doing at the time? I.e. usage monitoring via Process Explorer, CoreTemp, etc. Does your system appear to be running normally for other CPU benchmarks? E.g. what result do you get for Cinebench R15, or the Blender BMW test? I have Blender/CB data here (slightly older version of Blender used):

http://www.sgidepot.co.uk/misc/tests-jj.txt

Any relevant messages in the system logs?

Ian.

 

xdidgex

Commendable
Sep 5, 2016
mapesdhs, thank you very much for your reply. I have tested the system with Viewperf 12 for Showcase and got slightly lower results than those in this article (about 110 FPS). The test covers Showcase 2013, while I am using the "real world" Showcase 2016. In real-world use, when rendering, all CPU cores are loaded to 100%, while in hardware rendering mode the GPU is hardly used, 10% at most (measured with Process Explorer, as you mentioned above). So when working with larger models I get frame rates of 2 FPS and below.

I have also tested Cinebench and the results are OK. The Blender BMW test was not working with the GTX 1080, as the Blender build does not support this GPU.

The big question remains: why does the GPU not reach its full potential in hardware mode, and what can be done to unleash it?

Maybe my system configuration is bad. I have until next Friday to solve this problem; otherwise I have the opportunity to exchange the workstation for a different configuration. Also, when running Autodesk Inventor, it does not use the GPU to its full potential either.

Thank you very much for your help and support; hopefully there is a solution. Worst case, the system configuration gets changed.
 

mapesdhs

Distinguished
xdidgex, I don't think Process Explorer is the best way of determining GPU usage during a test. I would normally use MSI Afterburner (though for clarity you'll need to turn off many of the graphs which are reporting irrelevant data).

I'm not sure what you mean when you describe Showcase 2016 as "real world". I don't know how the 2016 edition relates to the 2013 edition; indeed, my knowledge of this app is minimal. What are the CPU cores doing when working in HW rendered mode?

What Cinebench R15 score do you get?

For Blender, there's probably an option to change the GPU support to Experimental.

As for your main question, it could be that the GPU is processing data as fast as it can because the bottleneck lies elsewhere. You need to find out what else the system is doing when the GPU is processing in HW mode, i.e. CPU cores/threads, memory I/O, storage I/O.
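
If you'd rather have hard numbers than eyeball a graph, a small C program against NVML (the library nvidia-smi uses) can log GPU and memory-controller load once a second while Showcase is rendering. This is just a rough, untested sketch: GPU index 0 and the 60-sample loop are placeholders, and on Windows you'd build against nvml.lib and swap sleep(1) for Sleep(1000) from windows.h:

// Build (Linux example, roughly): nvcc gpu_load.cu -lnvidia-ml -o gpu_load
#include <stdio.h>
#include <unistd.h>
#include <nvml.h>

int main(void)
{
    nvmlDevice_t dev;
    if (nvmlInit() != NVML_SUCCESS) return 1;
    if (nvmlDeviceGetHandleByIndex(0, &dev) != NVML_SUCCESS) return 1;  // GPU 0 assumed

    // Sample for 60 seconds while the Showcase scene is rendering in HW mode.
    // utilization.gpu shows how busy the shader cores are; utilization.memory
    // is the memory controller, which hints at whether VRAM bandwidth is the wall.
    for (int i = 0; i < 60; ++i) {
        nvmlUtilization_t u;
        if (nvmlDeviceGetUtilizationRates(dev, &u) == NVML_SUCCESS)
            printf("GPU %3u%%   memory controller %3u%%\n", u.gpu, u.memory);
        sleep(1);
    }

    nvmlShutdown();
    return 0;
}

If the GPU sits near idle in HW mode while a single CPU thread is pegged, the bottleneck is on the feeding side rather than the card itself.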

Also what settings are you using in the NVIDIA Control Panel (NCP)?

Ian.

 