Nvidia GeForce GTX 970 And 980 Review: Maximum Maxwell


PolanaMaster

Tom's Hardware, do you even know which Gigabyte card you are reviewing? You wrote Gigabyte GTX 970 OC WindForce, and later when you showed the backplate it had G1 Gaming written on it. I ordered the Gigabyte WindForce OC and it came without a backplate, and I got really angry. Sort this out.
 


The topic of "intellectually honest debate" should be required reading:

There are two intellectually-honest debate tactics:

1. pointing out errors or omissions in your opponent’s facts
2. pointing out errors or omissions in your opponent’s logic

That's it. Dishonest tactics are too numerous to cover here and could fill their own thread. But there's a good read here:

http://www.johntreed.com/debate.html



 

Isaiah4110



Oh, I have definitely seen the ridiculously good CUDA numbers people get with GTX 580 cards. Unfortunately, I don't think that is an option. I'm spec'ing out a system to replace a one-of-a-kind system (mid-2010 Mac Pro) in our enterprise, comparing it against a current Mac Pro (no chance of CUDA there) and the best PC we could buy on contract plus upgrades. I'm hoping to convince my supervisors that a custom-built system will be the best way to meet our users' needs.

So I'm fairly confident an old, used GPU with no remaining warranty will not be an option, even though it would likely be the absolute best bang for buck CUDA card out there.
 

mapesdhs

Isaiah4110 writes:
> Oh, I have definitely seen the ridiculously good CUDA numbers people get with GTX 580 cards. ...

Hehe, yep, my system has four 3GB 580s; for the Blender Cycles test, it's quicker than two Titan Blacks.


> ... I'm hoping to convince my supervisors that a custom-built system will be the best way
> to meet our users' needs.

Often difficult in companies, especially larger organisations which tend to have more complex rules.


> So I'm fairly confident an old, used GPU with no remaining warranty will not be an option, even
> though it would likely be the absolute best bang for buck CUDA card out there.

Indeed, bit of a shame as it's certainly possible to put together something pretty good for not much cost.
I've built several systems of this type for people so far. One thing though, with the 970/980 now out, 580
prices do need to drop a bit in order for the cost tradeoff to be definitely worth the additional power, heat
& noise. 1.5GB 580s aren't so bad, but the more relevant 3GB models still seem to hold some value.

Ian.



 
The newer cards should be faster, as they have more cores. The only time I'd think you'd want the GTX 580 is when you are doing FP64 math; it was the last card Nvidia put out where that wasn't nerfed. If you don't need double precision, the newer cards should be faster.
 

mapesdhs

That's wrong. The cores in the 580 run at a much higher clock. NVIDIA halved the core clock rate after the 580 (to make power delivery easier), and along with other changes (e.g. less bandwidth per core overall) it means the newer cards need _three times_ the number of cores to match the 580's normal FP32 performance for some tasks (like AE). Check any Tom's review of 600/700-series cards: the 580 beats all of them except the 780 Ti and Titan in most cases.

One cannot judge based on the number of cores. NVIDIA changes the tech with
each generation. Using the number of cores as a metric is as bad as the old MIPS
measurement of CPU speed.
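To put rough numbers on that (a back-of-the-envelope sketch using the reference specs; peak FP32 is cores x shader clock x 2 FLOPs for a fused multiply-add, and real CUDA workloads fall well short of these peaks):

#include <cstdio>

int main() {
    // Peak FP32 = shader cores x shader clock x 2 (one FMA per core per clock).
    // Reference specs: the 580 hot-clocks its 512 shaders at ~1544 MHz;
    // Kepler dropped the hot clock, so the 680's 1536 cores run at ~1006 MHz.
    double gtx580 = 512  * 1544e6 * 2 / 1e12;   // ~1.58 TFLOPS
    double gtx680 = 1536 * 1006e6 * 2 / 1e12;   // ~3.09 TFLOPS
    printf("GTX 580: %.2f TFLOPS peak FP32\n", gtx580);
    printf("GTX 680: %.2f TFLOPS peak FP32\n", gtx680);
    return 0;
}

So tripling the core count only doubles even the theoretical peak, and the bandwidth-per-core drop mentioned above eats into what the cores can actually sustain.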

As I said before, my four 580s are faster than two Titan Blacks for the Cycles test
(look on the forums here for walter's posts). Check the Creativecow AE CUDA
benchmark thread for similar info.

Whether the 580's core design advantage still holds true vs. the 900 series, I have no idea; I need to get one and try it out. It could be that the 980's lower power draw is enough that 580s will have to get cheaper again to remain a sensible alternative.

Ian.

 
That's just it though. There are only 512 shaders on that card. Yes, they run at 1500 MHz+, but still only 512. You said other cards need 2x-3x, which the 670 already has (1,344). I know the 580 is a great consumer/prosumer card, more so as the FP64 isn't nerfed to heck. But when you look at the newer cards you might be better off with them, especially if you aren't using FP64. The 960/960 Ti will probably be a great card for this use.
 

mapesdhs

For grud's sake, I've done the tests! :D Check the reviews! It doesn't matter that the 670 has that many cores; it's SLOWER than a 580 for just about every possible CUDA task. Check Tom's own reviews! I don't mind if someone has a theoretical query about something, but in this case the tests have been done and there's plenty of data, period. A 580 is faster than ALL the 600 cards for CUDA (at least for AE anyway) and it beats ALL the 700 cards except the 780 Ti & Titan. Check the Creativecow thread and you'll see loads of results showing 580s beating any 600 card and most 700s:

https://forums.creativecow.net/thread/2/1019120

Indeed, the guy who started the above thread answered your query more than
two years ago (this is old news):

GTX 680 (2 GB VRAM) = 6 minutes 11 sec to render
GTX 580 (3 GB VRAM) = 5 minutes 42 sec to render

Likewise, check the Arion benchmark page, a test which scales much better
with multiple GPUs:

http://www.randomcontrol.com/arionbench

My system is no. 18 in the table, while a quad-680 is down at position 25. Note the table has a typo: number 15 is a triple-780.


Also, if you do check the Tom's reviews, note that one of the tests can show the 580 behind in certain cases, but that's only because part of the test involves normal 3D ops, for which of course the newer cards are better (IMO that test shouldn't be used, since its CUDA data is averaged with normal 3D test data, so it isn't a proper CUDA test). I've tested with a Quadro K5000; it's slightly slower than a 580 for CUDA. Check how many cores the K5000 has.

I've talked about this a lot with Chris Angelini and other Tom's staff; the shader info I've mentioned comes from them.


Anyway, as I say, the above only applies to the 580 vs. 600/700 series cards. I don't
know how the 970/980 behave yet.

Ian.

PS. Before replying, check the references I've given. And btw, for tasks like AE, from what I've been able to tell with my own tests, bandwidth per shader core may also be important, and in the case of the 580 it is just enormous. This, the clock difference and other factors I've not mentioned are why the 600 cards are so poor for CUDA compared to the 580, and likewise most of the 700 cards. These are indisputable facts.
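To put rough numbers on the bandwidth-per-shader point (reference specs; GB/s is bus width in bytes times the effective data rate, and the per-core figure is just that total divided by the shader count):

#include <cstdio>

// GB/s = (bus width in bits / 8) * effective data rate in MT/s / 1000
static double bandwidth_gbs(int bus_bits, double mts) {
    return (bus_bits / 8.0) * mts / 1000.0;
}

int main() {
    // Reference specs: GTX 580 = 384-bit @ 4008 MT/s, GTX 680 = 256-bit @ 6008 MT/s
    double bw580 = bandwidth_gbs(384, 4008);
    double bw680 = bandwidth_gbs(256, 6008);
    printf("GTX 580: %.0f GB/s total, %.3f GB/s per core\n", bw580, bw580 / 512);   // ~192, ~0.376
    printf("GTX 680: %.0f GB/s total, %.3f GB/s per core\n", bw680, bw680 / 1536);  // ~192, ~0.125
    return 0;
}

The totals are nearly identical, but spread across three times as many cores the 680 ends up with roughly a third of the bandwidth per shader, which is consistent with the AE results above.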



 

Isaiah4110

The alleged CUDA performance increase of the first Maxwell card released, the 750 Ti, has me itching to see those 970 & 980 numbers. I have no idea what Nvidia is using to gauge CUDA Compute Capability on their website, but they gave the 750 Ti a better rating than any other card prior to the 900-series cards being released.
 
Check Tom's own reviews!

I did.

[Charts: MediaEspresso MPEG-2 and H.264 transcode results]


Again, I love what the 580 can do. But we are reaching the point where the increased core count on the newer cards is making up for the difference in clock speed and Nvidia's nerfing. I'm sure the 970 is even faster. Another mod suggested we keep this focused on the 970/980, so this is probably off track as well.
 
CUDA test results and recommendations ..... 9xx series not included yet tho

http://www.studio1productions.com/Articles/PremiereCS5.htm

Lots of good information ... some interesting quotes below from the testing:

Here is a chart with a basic guideline for the amount of video RAM you need on your video card.

SD Footage: 1 GB is fine
HD Footage: 1 GB is minimum, 2 GB is better
2K Footage: 3 GB is minimum, 4 GB is better
4K Footage: 4 GB is minimum, more than 4 GB is better
5K Footage: 6 GB is minimum, more is better


Another thing you will notice is that there is NOT a big difference between an NVidia card with 96 CUDA cores and one with 480 CUDA cores when rendering the timeline.

You will notice on both systems that the more CUDA cores a card has, the faster it is to export to the MPEG2-DVD format.

When exporting to MPEG-2, the more RAM you have, the faster the exporting time will be. When I first did the MPEG-2 rendering test, I tried it with only 8 gigs of memory. Once I upgraded it to 16 gigs, the MPEG-2 tests were about 40% faster. By adding more system memory, you can actually speed up the time it takes to export to MPEG-2 DVD with whatever NVidia video card you are using.

Important Note About the NVidia 600 Series of Video Cards: The NVidia 600 series of video cards have a narrower memory interface than the 500 series. However, due to the new architecture (or design) of the 600 series, they can turn in slightly better results with lower specs. In addition, they consume less power and run cooler. NVidia slightly crippled the 600 series of video cards by reducing the memory interface width and the memory bandwidth (data transfer rate). They increased the number of CUDA cores on the 600 series, which in some applications can make up for the crippling.

Important Note About the NVidia 700 Series of Video Cards: The NVidia 700 series of video cards have better specs than the 600 series. They have a faster GPU clock speed, more CUDA cores, a wider memory interface and a higher memory transfer rate than the 600 series of video cards. The GTX 700 series of video cards are the ones I would go with at this time.

Two specs that most people overlook when selecting or recommending a video card are the Memory Interface Width and the Memory Bandwidth.
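Plugging reference specs into that width-times-data-rate arithmetic shows the kind of gap being described (a sketch using the GTX 660 and GTX 760 as representative mid-range cards from each series):

#include <cstdio>

int main() {
    // GB/s = (bus width in bits / 8) * effective data rate in MT/s / 1000
    // Reference specs: GTX 660 = 192-bit @ 6008 MT/s, GTX 760 = 256-bit @ 6008 MT/s
    printf("GTX 660: %.0f GB/s\n", (192 / 8) * 6008 / 1000.0);  // ~144 GB/s
    printf("GTX 760: %.0f GB/s\n", (256 / 8) * 6008 / 1000.0);  // ~192 GB/s
    return 0;
}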


And here's how nVidia rates the Compute capability of the cards:

https://developer.nvidia.com/cuda-gpus

GeForce GTX 980 5.2
GeForce GTX 970 5.2
GeForce GTX 780 3.5
GeForce GTX 750 Ti 5.0
GeForce GTX 750 5.0
GeForce GTX 680 3.0
GeForce GTX 580 2.0
GeForce GTX 480 2.0

Compute power alone, however, does not always mean increased performance.
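If you want to see what CUDA itself reports for a given card, a minimal device query does it (standard CUDA runtime API, compiled with nvcc):

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // major.minor is the compute capability figure from the table above;
        // clockRate is reported in kHz.
        printf("Device %d: %s, compute capability %d.%d, %d SMs @ %.0f MHz\n",
               i, prop.name, prop.major, prop.minor,
               prop.multiProcessorCount, prop.clockRate / 1000.0);
    }
    return 0;
}

Note the number is a feature level, not a speed rating, which is how a 750 Ti can carry a higher figure than cards that outrun it.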
 

mapesdhs



I have a 750 Ti, not had time to test it yet though.

I'm ordering a 980 tomorrow.

Ian.



 
I would have referenced that chart, Jack, but it doesn't have the GTX 580 on it. You can guess where the 580 would be by looking at the 570, which is on there. But I'm afraid mapesdhs has spoken and the 580 is god unless you get the Titan. Yours and my links are garbage compared to what he has said. The user in me wants to debate this more, but I'm wearing my Mod hat, so I'm going to insist that we ALL move along. The GTX 580 is a great card, and depending on what site you look at (or rather, what tasks you are really doing), there are other options.
 
Nvidia has me baffled on the compute/CUDA front .... they owned compute for years, and if ya did anything other than gaming, you bought an nVidia card .... then with the 6xx, right after AMD invested heavily in compute capability to "catch up", they "broke" compute and CUDA performance went with it. It would seem that their approach has been to release a top card that is faster than the competition by just so much, rather than simply put out the best card they can build on the particular platform. It was rather suspicious that all the "leaked" data on the upcoming 670 and 680 didn't "match" the released cards ..... the 670 specs seemed to mirror the leaked 680 specs, and we never saw the leaked 680.
 
I'd take a leak for what it's worth; no more.

I too am baffled by nVidia's assorted nerfs, from gimped compute to lack of SLI and pinched memory bandwidth. I think they just want to keep people on the upgrade treadmill, but that won't work once the money's not there.
 
I think they stopped the SLI thing at the lower end because of out-of-context forum posts about micro-stuttering and such. Some people had $140 cards in SLI and they stuttered, not because of SLI technology but because they were $140 cards. Anyone who experienced that not only writes off SLI but then posts that SLI'ing two $600 cards will lead to micro-stuttering. There's also a lot of confusion in that SLI isn't going to do much for a game already running at 140 fps .... if it goes from 140 to 210, that gets written off as a 50% improvement that makes no difference .... what isn't mentioned is the 95% improvement ya get in demanding games like Crysis 3.
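To make that concrete (hypothetical figures: the 140 -> 210 case is the one above, and ~95% scaling in a demanding title is assumed for illustration; frame times show why the second case matters so much more):

#include <cstdio>

int main() {
    double base_fps[] = {140.0, 45.0};  // easy game vs. demanding game (e.g. Crysis 3)
    double scaling[]  = {1.50, 1.95};   // assumed SLI scaling factors
    for (int i = 0; i < 2; ++i) {
        double sli = base_fps[i] * scaling[i];
        printf("%.0f fps -> %.0f fps (%.1f ms -> %.1f ms per frame)\n",
               base_fps[i], sli, 1000.0 / base_fps[i], 1000.0 / sli);
    }
    return 0;
}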
 
I never had problems with SLI, but my experience was with 580s, 680s and 780s. I never tried it with the lower-level cards, though as an Nvidia customer I would expect it to work as advertised whether I had two 560s or two 580s.
 
560s are fine .... 650 Ti Boost is fine .... every time I have seen someone report problems it's been with < $140 cards, and I haven't seen such a report in several years .... most likely as there are few if any < $140 cards made recently that do SLI .... never done a gaming build w/ less than a *60 Ti myself

760 is lowest 7xx series that does SLI
650 Ti Boost was slowest 6xx series

545 was the last low budget SLI capable card I can remember.
 

mapesdhs

4745454b, quick update that might be of interest to others - is it just me, or is the stock of 980s in the UK becoming a bit of a joke along with the pricing? I was going to order one from Scan, but now they're shown as overdue (EVGA 04G-P4-2983-KR). Worse though, the price at 480 UKP is about a 33% markup over US retail, grrr... (though I see Newegg is out of them too atm)

Oh well, I'll just have to wait for EVGA to get them sent out, I guess. Overclockers has it too, but its price is 500 UKP, which is just silly.


Btw, I had two 900MHz 1GB 560 Tis in SLI for a while (EVGA Crysis editions); they worked really well. The only thing that let them down was VRAM issues for customised Crysis, but they were great for everything else. Oddly enough I did get hold of two 700MHz 2GB GTX 460s which worked surprisingly well in SLI because of the extra VRAM, handling situations which a couple of 1GB 560s I obtained couldn't cope with, and they both oc'd to 800 no problem (so the same clock as a Palit Platinum).

Re stuttering, I'd say it varies greatly by the game one plays, probably due to the 3D engine involved. Stalker does it quite a bit, FC2 less so. Crysis was ok though; I just kept upping the detail (custom view distances, texture sizes, distant LODs, etc.) until the fps on 3GB 580 SLI was about the minimum I'd tolerate (approx. 45fps). I'm hoping one 980 will match or beat the two 580s, that's the plan.

Ian.

PS. It's certainly a pity the 750 Ti doesn't have SLI, but then as someone said above, maybe these days NVIDIA
just doesn't want us all using two lesser GPUs to give what one much more expensive card can do. It's why
I bought two good 850MHz 460s back when they were new, so much cheaper and faster together than a crazy
expensive, hot, loud & power hungry 480. Maybe NVIDIA figures such an option means people hang onto older
cards for too long.

 

mapesdhs

Though one could argue it doesn't quite harm sales that much because I doubt I'm the only one who, for
example, could afford to buy two 460s but could not afford a single 480. For a long time, two lesser cards
combined have been cheaper & faster, but often have less VRAM, and of course in many cases make
more total noise, use more power combined, etc. The 460 was an oddball; it did have less RAM, but two
of them were quieter than a 480 by miles. Also, the profit margin on two 460s was probably more than
the profit margin on a single 580.

To be fair, there is stock of various 980s at Scan, but not the model I want. With even reference cards
being 440 UKP in the UK ($715+), I figure I may as well get a decent aftermarket edition as it's only +$65
for +139MHz, assuming the darn things do become available at some point...

Ian.

 
It's not a matter of affordability but "best bang for the buck" .... I'd still take twin 560 Tis over the 580 at the same price, but why pay $500 for 40% of the performance that $400 gets you?

In a way there's an advantage to waiting a bit..... while some of the cards are "same ole, same ole", the EVGA is defective in that the cooler misses 1/3 of the GPU.... undoubtedly that will be fixed with the next stepping.

MSI introduced a new feature that controls the two fans independently. Aside from half the people, even reviewers, thinking there was a problem when the fans shut off, half of those who knew about the low-temperature fan cut-off feature thought something was wrong when only one fan was off. This is also by design, but in some cases the start-up voltage in the BIOS was too low to kick-start the fans and ya had to give a finger assist to get them going. Running the fans for an hour at full speed seems to break in the bearings and eliminate the problem, but a vBIOS upgrade or some other "fix" will undoubtedly be employed in current batches shipping.

It's a fight, but I usually try to wait a few months after a new product or feature is introduced ..... not all the time ..... Z87 => Z97 was such a minimal change (same board, new chipset), but Z87 brought us quite a few bugaboos that weren't fixed in the first stepping ... and some that haven't been fixed yet.
 