Nvidia GeForce GTX 960: Maxwell In The Middle



I think your objectivity is a little suspect. I can't say I worry about the price outside of the US, but at a low $209 it beats the AMD 285. It looks like a good option to me; now we will see how much AMD drops its prices to compete. This looks like a good time to start thinking about upgrading your graphics card.
 
It really doesn't beat the 285, though. This review pitted an overclocked GTX 960 against a stock R9 285, and a typical overclocked 285 will beat a 960. I can only recommend the 960 for OEM owners with 400-450W PSUs, and the Gigabyte mini-ITX model for ITX builds.
 


I suspect most tests done at a lower resolution (like 1280x1024 or at most 1680x1050) would have very high frame rates anyway with any recent GPU, so they probably don't reveal much useful info. What you really need is a mix that exposes the CPU/GPU crossover, but that requires rerunning older tests to show which upgrade makes any difference. This is probably beyond the time budget of review sites, as there are too many possibilities. Are you playing the latest games, or older titles?

At particularly low resolutions, newer cards may not appear any better than older cards at all, as the bottleneck lies elsewhere (CPU, RAM, I/O, motherboard, etc.), though in such cases the frame rates are so high it doesn't normally matter.
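
To put that in code form: the frame rate you see is just the minimum of what each stage can deliver. A toy sketch (all the fps figures below are invented for illustration, not measurements):

```python
# Toy model of the CPU/GPU crossover: the slowest stage caps the frame rate.
def effective_fps(cpu_fps_cap, gpu_fps_cap):
    return min(cpu_fps_cap, gpu_fps_cap)

# At a low resolution the GPU barely works, so an older CPU caps everything:
print(effective_fps(cpu_fps_cap=120, gpu_fps_cap=400))  # 120 with a new card
print(effective_fps(cpu_fps_cap=120, gpu_fps_cap=150))  # 120 with an old card too

# Raise the resolution and the GPU becomes the limit, so the new card pulls ahead:
print(effective_fps(cpu_fps_cap=120, gpu_fps_cap=90))   # 90
print(effective_fps(cpu_fps_cap=120, gpu_fps_cap=45))   # 45
```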

I've done a few tests with an older platform (an i5 760 on a P55 board) using GTX 980s. It handles one card very well, but in some cases adding a second for SLI doesn't show much of a gain, though as implied above the performance is so high with one 980 that it doesn't matter. I also tested 7970s and 580s with this CPU. GPU scaling did of course reveal itself in synthetic tests like Unigine Heaven and 3DMark, but real games often behave differently. See this comparison for an interesting example (the untitled i7 config belongs to a friend of mine; compare the individual test results).

In other words, unless you up the resolution with a newer monitor, buying the very latest tech may not produce a speedup that's significantly better than many older, still-viable upgrades, though as I say this does depend on the game.

Ian.

 

Well, as of today, Anand's review doesn't have any bench scores at all (it's just reporting on the card specs), so I'm not sure how you can say they have different results than Linus. If Linus has anything listed, I can't read it right now since the site is down at the moment. And no one here has suggested that drastically different bench setups will produce exactly the same numbers. We're saying that outside of wildly different OS and driver versions, you'll see relatively repeatable results within a small variance. If the results are close enough that the scores can flip-flop in subsequent tests, the reasonable person says they're essentially tied. If you do end up with wildly different numbers, the smart person questions the methodology before the product.



What are you talking about? The scores shown here included actual games, and that's very much "real-world performance." Here are some more scores from actual games. You'll notice that at 1920x1200 the 270X only catches the 960 once, in Hitman, which favors AMD cards. Everywhere else the 270X is notably behind. In a few games, like Bioshock and Crysis 3, the 960 nearly catches the 280X.



No one has said that a 128-bit bus isn't a limit in certain use cases. Most of us here made the same point when the 660 was trimmed back to 192-bit. However, in the 660's and 960's cases, the vast majority of the time the memory pipe isn't a factor at the cards' preferred resolution of 1080p. By the time the memory pipe is stressed, you're likely at the GPU's stress point as well. The 270X you keep touting can only maintain 35fps in Shadow of Mordor at Ultra detail, 1080p. The 960 can do a 41fps minimum with half the pipe. That's not exactly disappointing for a ~$200 GPU.
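
For reference, the arithmetic behind "half the pipe": peak bandwidth is just bus width in bytes times the effective GDDR5 data rate. A quick sketch using the reference clocks (partner cards vary a little):

```python
# Peak memory bandwidth = (bus width in bytes) x (effective data rate in Gbps).
def bandwidth_gb_s(bus_bits, effective_gbps):
    return (bus_bits / 8) * effective_gbps

print(bandwidth_gb_s(128, 7.0))  # GTX 960: 112 GB/s
print(bandwidth_gb_s(256, 5.6))  # R9 270X: ~179 GB/s
print(bandwidth_gb_s(256, 7.0))  # GTX 980: 224 GB/s
```

The 960 also claws back some of that raw deficit with Maxwell's delta color compression, which is part of why raw bandwidth alone doesn't decide the benchmarks.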



That depends on your definition of low-budget, which not everyone shares. I consider the $200 price range right in the middle of mainstream, with low-budget somewhere between $100 and $150. But that's beside the point. The 270X is rarely available below $170 and more commonly at $180, while the 960 is at $200. Given the 960 consistently outperforms the 270X, I think the $20 premium is deserved. I don't know why you keep bringing the 270X into the discussion; the shootout is between the 960, 280, and 285.



Do you even bother reading what was actually said? No one said to use Nvidia cards specifically for bitcoin mining, only that mining is far more dependent on the GPU itself than on VRAM bandwidth.



So a base Honda Civic is a crap car because it can't compete on the track? What does it matter if it's bottlenecked at 1440p or 4K if you're never going to encounter it? What does it matter if the 270X did better if you're still watching a slide show? You're talking about this bottleneck as if everyone hits it. They don't. No gamer drives a 1440p or 4K screen on a $200 GPU, because no gamer spends two or three times the money on a display as they do on graphics power. Please show me where this 128-bit problem at 1080p is "proven" to the point it makes a game unplayable or dissatisfying.



Please, show me where you can get a 280X for under $200. I'll buy one today. The best I can find is a $220 Diamond (which I wouldn't buy) and a $230 Asus (but that's only after a $20 MIR).

Meat Loaf, enough with the strawmen. You keep ranting and raving about a problem that largely doesn't exist and one that none of us have been advocating.




What is a 270X supposed to do with 4GB of VRAM? The GPU would struggle at the resolutions, AA, and other detail settings required to utilize that much memory. You'd have to XFire them to use the VRAM, and for that cost you're better off getting a 4GB 970 or 290X.
 
... disappointing specs. I'd expect things like this on a GTX 950, if that ever exists. Wouldn't put it past Nvidia to release one and rebrand current 960s as such.
 


To be honest, though, its $209 MSRP matches its performance. If the card were $250-300 USD MSRP I would have expected more, but for its target segment it's not a big issue. It has absurdly high performance for its energy usage, which makes it a great option for a low-power / compact build.
 


Maybe not 'just getting started'; Maxwell is on its second revision. Still, Maxwell impresses me, and as a result I have set my personal expectations for Nvidia very high, especially for future architectures.
 


It is, but they have yet to release an x10 chip. The highest they've gone is the x04 chip with the 970/980, and Kepler made a lot of strides between its x04 and x10 chips, going from the 680 (GK104) to the Titan Black and 780 Ti (GK110B). Going from x04 to x10B on Kepler was almost a straight doubling of performance, and I hope to see even more out of Maxwell 🙂
 




Not to mention the fiasco Nvidia is currently having with the GTX 970. Nvidia lied to its customers, knowing full well while the 970 was in production that, hardware-wise, it is bottlenecked by memory bandwidth. Look at all the disappointed people who have a 970 but cannot use the full 4GB of VRAM. The maximum utilization is only 3.5GB; because of the design, the last 0.5GB block of memory actually performs slower and worse than the first 3.5GB, which proves the memory bandwidth has issues. It is not the cut from 64 ROPs (as advertised) to 56 that is causing the problem, as many think; it is the SMM arrangement that controls the memory bandwidth.

Considering the 960 is just the same design as the 970 but with lower specs, we can probably expect similar things from the 128-bit bus. For games like Shadow of Mordor, Skyrim, and GTA 5 that will use all the allocated VRAM, the 960 can struggle. Linus already showed in his videos that, once you increase the resolution enough to use more memory, it performs significantly slower than an R9 270 with the same graphical settings.

In conclusion, Nvidia is just milking its consumers, thinking they will get on the same BS 970 train. At the 960's current price point, it's not even worth buying when AMD offers a better option with its R9 280, giving you GTX 770 performance.
 

Wtf...

There is so much misinformation in that post...

The 970 and 980 use silicon off the exact same assembly line (GM204). Chips that pass QA are allocated to 980s; ones that have up to three defective SMMs and/or one defective ROP unit are allocated to 970s; any others are trashed. The chips for the 960 (GM206) are an entirely different die and production queue. The 970 has eight 32-bit GDDR5 memory channels but only seven ROP units that link with the crossbar, which forces one of the memory channels to be slaved to another ROP unit. The result is a 224-bit GDDR5 memory bus that interleaves across all seven fully functional channels using 1K blocks, with the final 512MB DRAM chip on the slaved memory channel sitting in its own dedicated segment.

This is actually good engineering, as the alternative would be to cut off the defective channel entirely and thus have only 3.5GB of local graphics memory. Since that last, partially functional 512MB of local GDDR5 is still much faster than going to system memory, the segmentation is a performance improvement over non-segmentation.

Here are the scenarios and the performance results:

Program uses less than 3.5GB of graphics resources => all methods utilize a 224-bit GDDR5 memory interface.

Program uses more than 3.5GB of graphics resources and there is no segmentation => extra data must be stored in system RAM and accessed across the PCIe bus; severe lag and performance degradation.

Program uses more than 3.5GB of graphics resources and there is segmentation => extra data is stored in the final 512MB of slower local memory; slight lag and performance degradation.

Program uses more than 4.0GB of graphics resources => all methods result in data being stored and fetched from system memory.
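
Or, as a trivial sketch of those four cases (segment sizes are the 970's; the outcome tiers are simplified and this is not how the driver actually schedules anything):

```python
# Simplified model of the 970's segmented memory, mirroring the scenarios above.
FAST_GB = 3.5   # seven interleaved channels, 224-bit
SLOW_GB = 0.5   # the slaved channel, its own segment
TOTAL_GB = FAST_GB + SLOW_GB

def where_allocations_land(alloc_gb, segmented=True):
    if alloc_gb <= FAST_GB:
        return "fast 3.5GB segment only -> full 224-bit speed"
    if not segmented:
        return "overflow spills to system RAM over PCIe -> severe penalty"
    if alloc_gb <= TOTAL_GB:
        return "overflow lands in the slow 512MB segment -> slight penalty"
    return "past 4GB, overflow spills to system RAM -> severe penalty"

print(where_allocations_land(3.0))                   # under 3.5GB: no issue
print(where_allocations_land(3.8, segmented=False))  # the cut-to-3.5GB alternative
print(where_allocations_land(3.8))                   # what the 970 actually does
print(where_allocations_land(4.5))                   # over 4GB: everyone loses
```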

The segmentation can only improve performance as not having it would always result in the data being stored in system memory. It was a very elegant solution to a complex engineering problem revolving around the binning process. Anandtech did a good writeup involving pretty deep technical details. If you want to be pissed at the misinformation (deliberate or not) then that's all dandy, but don't go around spreading misinformation regarding manufacturing and design.

As for the 960, it's a midrange card designed on a separate die and targeted much lower than the 970, 980, or even the previous 760. A $200 USD starting MSRP is on the low side for midrange cards, frequently territory that an upscaled x50 model would occupy. It has a 128-bit memory interface with half the processing resources of the 980.

http://www.anandtech.com/show/8935/geforce-gtx-970-correcting-the-specs-exploring-memory-allocation

http://en.wikipedia.org/wiki/GeForce_900_series#Second_generation_Maxwell_.28GM20x.29
 


This all comes from kids playing games with the VRAM graph up and thinking, "Wait, don't I have 4? Why is it at 3.5? Hmm... It's broke!"

As it turns out, the process of engineering microprocessors is actually more complicated than the process of teenagers jumping to conclusions.
 


It's not misinformation when PCPer analyzed the card at length and concluded that the disabled SMMs affect the GTX 970's performance when it needs more than 3.5GB of RAM. They also interviewed Nvidia's SVP of GPU engineering, Jonah Alben, and he knew there were problems in the design of the card, that somewhere down the line part of the VRAM would be bottlenecked. It's not an elegant solution when it fails to deliver the performance when needed. Don't try to sugar-coat the problem when it's right there, tested by many reviewers with FCAT and Nai's benchmark. Nvidia openly admitted they misstated the specifications of the card, and they knew about the problem, hence the 970's selling price being cheaper than any comparable card they have produced in the past. Nvidia blames it on a "miscommunication between engineering and marketing." It doesn't matter how you put it; fact is fact, the card's performance degrades as more VRAM gets used.

For games like Shadow of Mordor that will use more than 3.5GB of VRAM, you will notice lag and stutter. This VRAM issue also cannot be fixed, as some claim, with a BIOS or software update. The problem is inherent in the design of the card, which fails consumers who buy a 970 expecting top-notch performance.

The worst part of Nvidia's actions is telling people the 970 has 64 ROPs and 2MB of L2 cache. L2 cache plays a significant part in GPU performance and speed, just as L1-L3 cache does for a CPU. The fact that the revised specs list only 1.75MB instead of 2MB is detrimental to performance.

NVIDIA --> THE WAY IT'S MEANT TO BE PLAYED, trololol

http://www.pcper.com/reviews/Graphics-Cards/NVIDIA-Discloses-Full-Memory-Structure-and-Limitations-GTX-970
 
So now we're talking about the GTX 970? I thought this thread was about the GTX 960. Funny, that. Well, one word on the 970: it still beats all the AMD cards with 0.5GB of VRAM tied behind its back. I think that is hilarious!
 



It's just typical bashers wanting to incite anger and spread misinformation. The 960 is a mid-tier card aimed at lower-power / lower-budget situations, typically those needing an upgrade for their OEM boxes or lighter-weight builds. The 970 delivers ridiculous performance for its cost; it's a binned 980 with one ROP unit and three SMMs disabled, for $350 USD. After seeing how good the 970 was, everyone expected the same from the 960: that it would have the same gap from the 970 as the 970 has from the 980. Of course people were disappointed, and that disappointment turned into anger. Trolls and others of like mind grasped onto that anger and have done their best to fuel it by spreading rhetoric and misinformation. So linking the disappointment over the 960's product placement (its performance is in line with its cost) to the tempest-in-a-teacup of the 970's incorrect specs is a natural step for those with little regard for truth and fairness.
 


Easy, here is an R9 280X selling for less: http://www.ncix.com/detail/club3d-radeon-r9-280x-royalqueen-15-94043-1356.htm
while the GTX 960 is here: http://www.ncix.com/detail/gigabyte-geforce-gtx-960-g1-73-105120-1481.htm

And here is another from Newegg showing the R9 280X: http://www.newegg.com/Product/Product.aspx?Item=N82E16814125726&cm_re=r9_280x-_-14-125-726-_-Product
http://www.newegg.com/Product/Product.aspx?Item=N82E16814127843&cm_re=gtx_960-_-14-127-843-_-Product
Different brands will have slightly lower or higher pricing, but on the US market the MSI version is only $15 more, with more performance than the GTX 960.
 


Requires a power supply larger than 450W, and is not 7" long. These two cards are aimed at completely different rigs. The GTX 960 is purely and simply aimed at store-bought computer owners who have at least a free 6-pin PCIe connector, as a drop-in, worry-free graphics card.
 

They can most likely "fix" it by reserving the whole 512MB block for internal driver and general non-3D use so games cannot touch it. There you go: no more stutter from games trying to use it.

Unless a game is written to only work with 4GB of GPU RAM, it should have no trouble coping with having 512MB less to work with, albeit possibly at a slightly lower detail level.
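
Conceptually it would just shrink the budget the game sees, something like this (entirely hypothetical; I have no idea how the actual driver internals work):

```python
# Hypothetical driver-side cap: hide the slow 512MB segment from games, so
# the engine scales its assets to a 3.5GB budget instead of stuttering.
REPORTED_VRAM_GB = 3.5  # slow segment reserved for driver / non-3D use

def texture_budget_gb(requested_gb):
    return min(requested_gb, REPORTED_VRAM_GB)

print(texture_budget_gb(4.0))  # 3.5 -> engine drops a detail notch
print(texture_budget_gb(3.0))  # 3.0 -> unaffected
```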
 

Fail on a few counts. First, you're being dishonest, since you're not including the two cheapest 960 cards available at that site (those would be these two). So there you have both an ITX version and a normal version of the 960 for less than the clearance price of the 280X. If you compare prices, try doing so fairly.

Second, paying international shipping and insurance on that card brings it up to $220 USD for me (and either of those 960s would still be cheaper). Now, finding a huge discount on a 280X is great, but again, it's not something you're going to see regularly. No one on these forums has ever said don't get the more powerful card if you can find a steep discount. But until those sales and discounts become the norm, we don't treat them as the regular street value.



I have no idea what point you're trying to make here. First of all, that's a $20 difference between those two cards you linked, not $15. Second, and once again, you're not using the cheapest available 960 in your comparison, which means you're being unfair and dishonest. If you're going to compare the cheapest 280X to the cheapest 960 on Newegg, it's a $40 difference, not $15 or $20.

I have no interest in debating with you if you're not going to be fair and honest, and thus far you've simply been raving about things that are largely non-issues. No one is saying the 280X isn't the superior performing card if all you're concerned about is sheer 3D gaming frames. Then again no one is saying the 290X isn't superior to the 280X on the same criteria, so I don't know why you keep bringing that up. But when you start including factors like pricing, availability, power use, heat, and noise, things start getting much more complicated, and that's something you seem to refuse to acknowledge.
 
I don't know why people are now trying to compare the 960 to a 280X; it never should have even been compared to a 285. Tom's only used the 285 because it is $50 more expensive than the 280 that the 960 should have been compared to. Let it be known that the 285 used in these comparisons was a reference model as well. Lower clocks on the reference 285 allowed the 960 to be marginally faster, whereas a comparable factory-OC 280 would have been not only $30 cheaper but faster than the 960 as well. The 960 overall is a totally underwhelming card. Tom's did their best to make it look competitive and like a good value, but the honest truth of the matter is that the 960 is neither a good value nor competitive at its price point. Too bad, too; I was holding out for this card to be something special.
 