Asus GeForce GTX 950 Strix Review



I installed it on an i5-2500K with 8GB of RAM and am getting acceptable frame rates in GTA V at normal settings, 1080p.
 


Yeah, my point is more that the average gamer who owns a $587 CPU is not going to be the audience for a $150 GPU, so it would be much better to see results with other mid-to-low-end hardware like an i5, i3, Pentium, etc.
 


Well, yes and no. There's a very legitimate reason to test GPUs with the best possible CPU: it eliminates CPU bottlenecking to the greatest possible extent. Not only does that mean you can see a much wider range of performance (if the CPU bottlenecked, a lot of GPUs would get essentially the same performance; only the slowest ones would fall behind the pack), it also means the impact of the different CPU load from AMD vs. Nvidia drivers is minimized.
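To illustrate with a toy model (all frame times below are invented purely for illustration, not measurements): a frame can't complete faster than the slower of the CPU and GPU stages, so pairing every GPU with the fastest CPU is what lets the GPU differences actually show up.

```python
# Toy bottleneck model: per-frame time is roughly limited by the slower of
# the CPU and GPU stages, so FPS ~= 1000 / max(cpu_ms, gpu_ms).
# All frame times here are made up purely to illustrate the effect.

def fps(cpu_ms: float, gpu_ms: float) -> float:
    """Approximate FPS when the slower stage dictates frame time."""
    return 1000.0 / max(cpu_ms, gpu_ms)

gpus = {"slow GPU": 25.0, "mid GPU": 15.0, "fast GPU": 8.0}  # ms per frame

for cpu_name, cpu_ms in [("fast CPU", 6.0), ("slow CPU", 16.0)]:
    print(cpu_name)
    for gpu_name, gpu_ms in gpus.items():
        print(f"  {gpu_name}: {fps(cpu_ms, gpu_ms):5.1f} FPS")

# With the fast CPU the three GPUs spread out (40.0 / 66.7 / 125.0 FPS);
# with the slow CPU the mid and fast GPUs collapse to the same 62.5 FPS.
```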

But it's true that it's difficult for the average Joe to look at such a review and predict what kind of performance he's going to get with the tested GPU along with his more realistically priced CPU.
 


Still not the right answer to me, but I understand they don't always have time. The best solution would be to toss a Pentium, an i3, and an i5 into the mix so people with those systems can see how the card fares on a CPU similar to theirs; or perhaps test on the top end first and do the low-end CPUs a couple of weeks later. Or compromise and toss in an i3 along with the high end, like this: http://www.eurogamer.net/articles/digitalfoundry-2015-nvidia-geforce-gtx-950-review where you can see the 950 run up to 12 FPS slower in some games on an i3 vs. an i7.
 
Adding three more CPUs to the review process would take four times as long as it does now. If you want timely reviews, that's not the way to go. With hotly anticipated cards, the reviewer sometimes only has a day or two to test it before sending it back. That's not enough time to test the GPU with four different CPUs.

It may not be ideal, but this is the best balance between getting things out quickly and having relevant information. A top-end CPU removes any processing limitations and shows what the card is capable of doing. Also, CPUs don't bottleneck games nearly as much as a lot of people think they do. With the exception of a few select titles, an i3 performs close enough to an i5, i7, or anything higher that testing those CPUs is largely academic. Only the Pentium and Athlon X4 chips would potentially be far enough off that your actual experience and enjoyment might suffer.
 


The idea is to check CPU reviews for what the CPU is capable of, and GPU reviews for what the GPU is capable of.

Running those reviews separately takes less work and also provides better results; it just makes it more difficult for the reader to estimate performance with a particular combination of hardware than if that exact combination were covered. Bear in mind that reviewers could still only cover a handful of combinations, and everyone else would be out of luck.
 
What about TDP? The 280, 380, 7950, etc. have a really elevated max TDP compared to NVIDIA's new Maxwell cards, and that is something to be concerned about.
Then somebody please explain to me why.
If a GTX 960 has very similar performance to an R9 280X, why do I have to put up with the extra 130W of TDP (250W - 120W = 130W) in my system?
NVIDIA has better efficiency and many new features, like DSR for instance.
Why do I even have to think about AMD cards, at least for now?
 
The 960 is markedly below the 280X in performance, especially at resolutions above 1080p. Even the regular 280 can top it in some games and detail settings. The extra heat is not a problem in most cases. ITX is the only place heat from the 280 and up would be a concern.

Nvidia may have an advantage in electrical efficiency right now, but they have no compelling GPUs between the 960 and 970, and that's a huge performance gap. The 280, 380, 280X, and 290 all offer better performance than the 960 and cost less than the 970. That's only one reason you should consider AMD cards.
 


I'm not so sure about the 380. I have seen quite a few posts from folks complaining about "choppy performance" while gaming, and from what I'm seeing it isn't as strong a folding card as the 960 either.
 

The 380 is a rebranded 285, which means a 256-bit memory bus instead of the 384-bit bus on the 280/280X. Having 33% less memory access concurrency and roughly 25% less total memory bandwidth is bound to hurt performance in memory-intensive scenarios where the 285/380's new texture compression might not be enough to compensate.
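For reference, the bandwidth numbers fall straight out of bus width times effective data rate. A quick sketch using the reference memory specs (board-partner cards sometimes clock the memory differently, so treat the data rates as nominal):

```python
# GDDR5 bandwidth (GB/s) = (bus width in bits / 8) * effective data rate (Gbps).
# Data rates are the reference specs; factory-OC cards may differ.
cards = {
    "R9 280":     (384, 5.0),  # bus width (bits), data rate (Gbps per pin)
    "R9 280X":    (384, 6.0),
    "R9 285/380": (256, 5.5),
}

for name, (bus_bits, gbps) in cards.items():
    print(f"{name}: {bus_bits / 8 * gbps:.0f} GB/s")

# R9 280: 240 GB/s, R9 280X: 288 GB/s, R9 285/380: 176 GB/s
# -> the 285/380 has ~27% less bandwidth than the 280, ~39% less than the 280X.
```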
 


Because electricity is not expensive where you live. For me, electricity is very cheap, so I calculated the 390 costing me another 75 cents to a dollar a month over the 970; with the 390's performance, I found AMD to be the better option. If you are someone with a quality power supply and good airflow, I see AMD as the better option. My GPU stays cool anyway: 45C at idle with no fans running, and it never goes over 70C under 100% load with the fans at 40%. I have experienced both Nvidia and AMD cards now, and I see the benefits of both, but I love my AMD card right now!

As I always say, don't look at whether the card is a rebrand; look at its performance.
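For anyone who wants to redo that math with their own numbers, here is a rough sketch; the wattage gap, daily hours, and electricity rate below are placeholder assumptions, not measurements:

```python
# Rough monthly cost of one GPU's extra power draw over another.
# Every input below is an illustrative assumption -- substitute your own.
extra_watts = 100       # assumed gaming-load gap, e.g. R9 390 vs GTX 970
hours_per_day = 3.0     # assumed daily gaming time
price_per_kwh = 0.10    # assumed electricity rate in USD/kWh

extra_kwh = extra_watts / 1000 * hours_per_day * 30
print(f"{extra_kwh:.1f} kWh/month -> ${extra_kwh * price_per_kwh:.2f}/month")

# 9.0 kWh/month -> $0.90/month, consistent with the 75-cents-to-a-dollar
# figure above; double the rate or the hours and it roughly doubles.
```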
 


8-15% better according to this site: http://www.game-debate.com/gpu/index.php?gid=2826&gid2=1914&compare=geforce-gtx-960-gigabyte-g1-gaming-2gb-edition-vs-radeon-r9-280x-gigabyte-3gb-oc-edition



In terms of overall gaming performance, the graphical capabilities of the AMD Radeon R9 280X Gigabyte 3GB OC Edition are very slightly better than the Nvidia GeForce GTX 960 Gigabyte G1 Gaming 2GB Edition.

The GTX 960 has a 241 MHz higher core clock speed than the R9 280X, but the R9 280X has 64 more Texture Mapping Units than the GTX 960. As a result, the R9 280X exhibits a 48.6 GTexel/s better Texture Fill Rate than the GTX 960. This still holds weight but shader performance is generally more relevant, particularly since both of these GPUs support at least DirectX 10.

The GTX 960 has a 241 MHz higher core clock speed than the R9 280X and the same number of Render Output Units. This results in the GTX 960 providing 7.7 GPixel/s better pixeling performance. However, both GPUs support DirectX 9 or above, and pixeling performance is only really relevant when comparing older cards.

The GTX 960 was released over a year more recently than the R9 280X, and so the GTX 960 is likely to have better driver support, meaning it will be more optimized for running the latest games when compared to the R9 280X.

Both GPUs exhibit very powerful performance, so it probably isn't worth upgrading from one to the other, as both are capable of running even the most demanding games at the highest settings.

The R9 280X has 1024 MB more video memory than the GTX 960, so is likely to be much better at displaying game textures at higher resolutions. This is supported by the fact that the R9 280X also has superior memory performance overall.

The R9 280X has 175.8 GB/sec greater memory bandwidth than the GTX 960, which means that the memory performance of the R9 280X is massively better than the GTX 960.

The GeForce GTX 960 Gigabyte G1 Gaming 2GB Edition has 1024 Shader Processing Units and the Radeon R9 280X Gigabyte 3GB OC Edition has 2048. However, the actual shader performance of the GTX 960 is 1803 and the actual shader performance of the R9 280X is 1915. The R9 280X having 112 better shader performance and an altogether better performance when taking into account other relevant data means that the R9 280X delivers a massively smoother and more efficient experience when processing graphical data than the GTX 960.

The GeForce GTX 960 Gigabyte G1 Gaming 2GB Edition requires 120 Watts to run and the Radeon R9 280X Gigabyte 3GB OC Edition requires 250 Watts. We would recommend a PSU with at least 400 Watts for the GTX 960 and a PSU with at least 600 Watts for the R9 280X. The R9 280X requires 130 Watts more than the GTX 960 to run. The difference is significant enough that the R9 280X may have an adverse effect on your yearly electricity bills in comparison to the GTX 960.
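For what it's worth, those fill-rate and bandwidth deltas are plain spec-sheet arithmetic. A quick sketch reproducing them; the clocks are inferred from the quoted deltas (roughly 1241 MHz for the factory-OC'd GTX 960, 1000 MHz for the 280X), so treat them as assumptions:

```python
# Spec-sheet arithmetic behind the quoted deltas (clocks assumed from the
# quote itself: ~1.241 GHz for the OC'd GTX 960, ~1.0 GHz for the 280X).
gtx960  = {"ghz": 1.241, "tmus": 64,  "rops": 32, "bus": 128, "gbps": 7.01}
r9_280x = {"ghz": 1.000, "tmus": 128, "rops": 32, "bus": 384, "gbps": 6.00}

def rates(c):
    texel = c["tmus"] * c["ghz"]           # GTexel/s texture fill rate
    pixel = c["rops"] * c["ghz"]           # GPixel/s pixel fill rate
    bandwidth = c["bus"] / 8 * c["gbps"]   # GB/s memory bandwidth
    return texel, pixel, bandwidth

for name, card in [("GTX 960", gtx960), ("R9 280X", r9_280x)]:
    t, p, b = rates(card)
    print(f"{name}: {t:.1f} GT/s, {p:.1f} GP/s, {b:.1f} GB/s")

# Deltas: the 280X leads by ~48.6 GT/s of texture fill and ~175.8 GB/s of
# bandwidth; the 960 leads by ~7.7 GP/s of pixel fill -- matching the quote.
```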


All of that is on paper, and the heat from the 280X is real. In the real world, however, the R9 280X isn't that much faster than the GTX 960; in some games the GTX 960 actually pulls ahead: http://www.ocaholic.ch/modules/smartsection/item.php?itemid=1633&page=3

Take into account that the 280X gets 18.62 FPS at 2160p in Battlefield 4 (ASUS Radeon R9 280X DirectCU II), so you cannot really play at that frame rate anyway.

So the GTX 960 is a 1080p card, not above, and 1080p is the most common resolution nowadays; the extra 1-2 FPS from the 280X at higher resolutions does not matter to me (at least for now, as I just said).

In my opinion, for 1-2 FPS more than the GTX 960 at 1440p (ASUS GeForce GTX 960 Strix 2GB vs. ASUS Radeon R9 280X DirectCU II), the 280X's huge extra heat and elevated TDP are NOT justified at all; it is not worth it.
The NVIDIA card, on the other hand, is really worth it: the performance it offers with only a 120W TDP is just unbeatable.

When the GTX 960 Ti and GTX 965 Ti come, then you'll have your compelling GPU between the GTX 960 and GTX 970.


 


It is the extra power draw that concerns me (I said EXTRA because NVIDIA sometimes offers the same or better performance with almost half the TDP). The ASUS GeForce GTX 970 Strix 4GB performs even BETTER than the MSI Radeon R9 390X Gaming 8G in BF4 at 1080p and 1440p: http://www.ocaholic.ch/modules/smartsection/item.php?itemid=1633&page=3

It is only inferior at 2160p, and once again only by 1-2 FPS. Ironic.

If you can deal with the extra strain on your PSU, then go ahead; that is your opinion and way of thinking. I do not.
In my country, for instance, the R9 390 and the GTX 970 cost just the same, with only a minor 10-20 difference:

400 CUC - MSI Gaming R9 390 OC, 8GB GDDR5, 512-bit, 390GB/s bandwidth, DirectX 12, PCIe 3.0, new
400 CUC - Sapphire/MSI R9 390, 8GB GDDR5, 512-bit, DirectX 12, 389GB/s bandwidth, all new, shipping negotiable
400 CUC - EVGA GeForce GTX 970 SSC ACX 2.0, 1342MHz GPU clock, DirectX 12, 4GB, backplate, new in box

So the 390's 275W TDP against the GTX 970's 145W is just CRAZY.

 


That site is useless, and relying on it invalidates your entire post.
 


Nvidia's TDP ratings for the GTX 970 and 980 are essentially fake. The cards draw significantly more power in the real world. There's still an efficiency advantage, even against AMD's more efficient cards (285, 380, Fury/X), but it's not as big as Nvidia wants to make it look.
 


Then NVIDIA is FAKE!!
OK, show me something, because until now what I have seen is NVIDIA overstating TDP, always recommending more watts than it really needs.


 


All the sites get you to the same 8-15%; that was just one example from among many.

It's true the GTX 960 beats the R9 280X in some games, and vice versa the R9 280X beats the GTX 960 in others; everybody knows that already, so for me they are really even:
http://www.hardwarecanucks.com/forum/hardware-canucks-reviews/68817-nvidia-gtx-960-5-way-roundup-10.html



 


This review shows the GTX 970 drawing 179W under a gaming load, and even 243W in a GPGPU workload, though Nvidia probably wouldn't call that one fair (and the gaming readings are more important for most people). Still, the 179W reading is well above the 145W TDP they claim for the GTX 970; it's even above the 165W TDP of the GTX 980.
 


Power consumption is not the same as TDP; they are different things, and power consumption is normally higher than TDP. However:
I don't see a reference GTX 970 there (there are no actual GeForce GTX 970 reference card designs for sale); what I see is a Gigabyte Windforce OC GTX 970, which is not reference.
So I will take the reference GTX 980 to be more accurate:

The 145W TDP of the GTX 970 and the 165W TDP of the GTX 980 are for reference cards only, not OC ones, so NVIDIA and I wouldn't call that comparison chart fair, given that the rest of the cards are reference.

Even so, the reference GTX 980 shows 185W power consumption in gaming and 177W in the torture test (not taken into account, since that kind of benchmark pushes consumption above TDP), and the non-reference GTX 980 Windforce OC does better at 173W, which are good, normal results relative to its TDP. And I must say Gigabyte makes some of the most power-hungry cards of all the manufacturers out there.

I would like to see the non-reference AMD cards on those charts; for instance, the reference R9 280X's power consumption in gaming is already 212W. Imagine the non-reference manufacturer cards from XFX, MSI, Gigabyte, etc.

 


TDP is thermal design power. It's literally the power consumption that the cooling system must be designed to handle. At steady state, the power consumption is identical to the heat dissipation (thermal flux). TDP should never be lower than average power consumption in any real-world task.
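In equation form (a simple steady-state energy balance; essentially all the electrical power a GPU draws leaves it as heat):

```latex
% Steady state: no net energy is stored, so electrical power in equals
% heat flowing out, and the cooler must be sized for the full draw.
P_{\mathrm{electrical}} = \dot{Q}_{\mathrm{heat}}
\quad \Longrightarrow \quad
\mathrm{TDP} \;\ge\; \overline{P}_{\mathrm{sustained}}
```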

You don't see a reference GTX 970 because there is no reference GTX 970. They took a completely ordinary GTX 970, which is also what the performance measurements are based on.
 

Please support this assertion with the overwhelming evidence that your term "useless" implies.

Different sites sometimes cherry-pick tests if they want to present biased results. That is why the most useful information will be benchmarks of the games you actually intend to run.

 


I disagree with you. TDP itself won't tell you how much power the chip consumes; chips belonging to the same family might all share the same TDP, but the higher-clocked version will still consume more power when fully stressed.
Besides, do you really think this is a completely ordinary GTX 970? http://www.tomshardware.com/reviews/nvidia-geforce-gtx-980-970-maxwell,3941-5.html
Just look at the GPU and boost speeds, even higher than a reference GTX 980.

 


The site compares primarily based on specs, which are useless. Oh, this one has higher MHz so it must be better. Oh, this one has more shaders so it must be better. Useless - or worse than useless, since it often misleads people who don't know any better.
 