Nvidia GTX 970 or AMD Radeon R9 290X?


DrRorschach

Oct 2, 2014
Basically I need to decide between these two cards and I don't know which one to choose...

My processor is a 4 GHz AMD A8-6600K with integrated Radeon HD 8570D graphics, and my motherboard is an MSI FM2-A55M-E35 µATX.

Which card will run best on this board and CPU?

(P.S. my mobo has AMD CrossFire support, so is it possible that the 290X will be able to receive a boost from my integrated graphics?)

Thank you all.
 
Solution


Question's been answered.

Again the 290x is "not in the race" (as in "not competitive")....unless you are over 2560 resolution....

[Relative performance chart at 1920x1080 (perfrel_1920.gif)]


At 1920 x 1080 out of the box, it's $200 more to go slower, and with both overclocked as far as they can go, the gap widens substantially.

At 2560 x 1600 out of the box, it's $200 more for a tie, and with both overclocked as far as they can go, the 970 runs away.

The only time CPUs come into play as far as one versus the other is when you have multiple GFX cards and THG...
And how much more open can you get than to allow Khronos (OpenGL/CL/AI, etc.) to use Mantle as the backbone of their API, even allowing them to remove parts and add other features?

That is one thing, but we are talking about AMD's promise to make Mantle itself an open-source API. Taking some of the Mantle spec into Vulkan and making Mantle itself open source are two different things. If they couldn't live up to that promise to begin with, they should not have kept hyping it over and over.

I believe AMD is doing a good thing driving the WHOLE industry forward with Mantle and FreeSync, while Nvidia is closing off its technologies, similar to how Apple and Intel operate.

It is a good thing, no doubt, but they should have been clearer about their goal from the very beginning, not given vague clues and answers about what they wanted to do with it and raised false hopes in the public. Even with FreeSync they chose the word FREE to take a jab at Nvidia's solution. If you followed FreeSync from the very beginning, you'll notice how different it was when they first talked about it compared to the solution they have today.

Perhaps if AMD were on top they would not feel the need to be so open and sharing with their technology.

That would be the case. The reason AMD adopted OpenCL is that their proprietary solution (Stream) could not compete with CUDA. Above all else, I think AMD knows how business works, but they like to take jabs at competitors while not really being that different from their competitors themselves.

I hope that AMD, Nvidia and Intel continue to compete forever, or at least until other competitors enter the market.

The reality is harsh. On the x86 front, Intel has done everything in its might to make sure it is the only dominant player in the x86 world. AMD is no longer a threat to them, so now they have shifted their focus to crushing ARM, so we can have a boring world where everything is Intel and rapid innovation/competition ceases to exist. For AMD's part, they should spend less on useless marketing and channel their very limited resources into bringing competitive products on all fronts. They should probably keep Richard Huddy's mouth shut or just kick him out of AMD.
 
I've said it before and I'll say it again. I am an Nvidia Fanboy.

But let's be very clear here: if it weren't for AMD, Nvidia wouldn't have to push itself to make better and better technologies in the first place. Nvidia is the leader and drags AMD behind it, and AMD is snapping at Nvidia's heels, keeping them on their toes. They have a symbiotic relationship that is important to everyone involved, but most importantly to us, the consumers.

Let's think this through too: why is AMD #2? Because they aren't focused just on GPUs; they also make APUs/CPUs. So they face number 1 Nvidia on the GPU side and number 1 Intel on the other. And because their production is more cost effective, their chips have gotten into a console gaming market that's been a steamroller (<---- note the pun) in the gaming industry.

And yes, I belong to the PC crowd because, although more expensive, it's a better experience than playing on my TV. But honestly, if it weren't for console gaming, developers wouldn't have the revenue to produce a game like GTA V. Long after the console people have funded the PC development and, in the end, made a better product for PC gamers, I'll be playing a modded version of GTA V with updated graphics 5 years from now. Look at Skyrim and you'll get what I mean.

One side drives the other in all of these things. The planet is bipolar, North and South. Nvidia-AMD, Intel-AMD, Console-PC, Apple-PC... and on and on it goes. In the end it gives a better product to us, and we get two flavors to choose from. Better than having 50 like in the 80's with Tandy, Commodore, T.I., Atari, Nintendo, Sony, PC, Apple.... the punch is more concentrated, more liquor less Hi-C.

Gotta go... *puffpuffpass*
 
Renz496 you sure don't like to lose an argument, even when the facts are against you 🙂

Mantle was used to bootstrap the process and speed its development, making Vulkan a derivation of sorts of Mantle (think Unix family tree). = Mantle being Open Source

"They (Intel) have asked for access, and we will give it to them when we open this up, and we'll give it to anyone who wants to participate in this." = Mantle being open source when it's ready, see above.

Just because AMD has now decided that they shouldn't waste much more time, money or effort on Mantle, because it has had the desired effect, with DX12 and Vulkan being far more to the metal, doesn't make them liars. It makes them heroes to all the PC gamers out there who have been spending hundreds of pounds each time they buy a GPU, only to have it hampered by the very API that was designed to facilitate its use.

Microsoft is the villain here, because for years Xbox consoles have been fitted with mediocre GPUs that punch far above comparable PC GPUs, because the console API is to the metal while the PC has suffered with an inefficient API. That suits Microsoft perfectly well, because it allows them to continue making money from underpowered console hardware.
 


Agreed! And WTF, you're messing up the rotation man, puff puff pass! 🙂 lolz
 
Mantle was used to bootstrap the process and speed its development, making Vulkan a derivation of sorts of Mantle (think Unix family tree). = Mantle being Open Source

Let me remind you again that I'm not talking about the Mantle fork (Vulkan) here; I was talking about Mantle itself. When you do open source, you do it together with other people, but with Mantle 1.0 all of the spec was done by AMD: no input from Intel, Vivante, Qualcomm, Imagination Technologies, ARM or Nvidia at all. You can say that Vulkan is the true open-source version of Mantle; I can accept that, but it doesn't mean Mantle itself was open source. And now they have even refused to release a public Mantle SDK so the open community can tinker with it.

"They (Intel) have asked for access, and we will give it to them when we open this up, and we'll give it to anyone who wants to participate in this." = Mantle being open source when it's ready, see above.

That's why I said it was funny. They have no problem giving a peek, and even let game developers use Mantle for their games, but when hardware vendors asked to look at it they denied it with the 'beta' excuse. Is the Mantle they gave game developers to work with not beta? Intel and Nvidia asked for the spec because of AMD's earlier claim that the API would be open for others to use. When Richard Huddy talked about how Intel was interested in Mantle, it was not actually news to some: an Intel guy had been talking about it on another forum a few months before Richard mentioned it in the media. But that guy said Intel decided to go with DX12 because AMD had already denied Intel's request several times, and it seemed DX12 might also do what Mantle was meant to do.

Just because AMD.....

High-level APIs exist for a reason, and not because MS wants to gimp PC performance; they had been developing DirectX well before entering the console world. If not for DirectX, each vendor would want to come up with their own API, even going as far as making certain games work only with their API, like what happened with Glide. The only downside of DirectX is that it is MS's proprietary API. OpenGL was supposed to give us the truly open solution, but the open community has its own issues. Straight from AnandTech:

Unlike consoles, PCs are not fixed platforms, and this is especially the case in the world of PC graphics. If we include both discrete and integrated graphics then we are looking at three very different players: AMD, Intel, and NVIDIA. All three have their own graphics architectures, and while they are bound together at the high level by Direct3D feature requirements and some common sense design choices, at the low level they’re all using very different architectures. The abstraction provided by APIs like Direct3D and OpenGL is what allows these hardware vendors to work together in the PC space, but if those abstractions are removed in the name of performance then that compatibility and broad support is lost in the process.

http://www.anandtech.com/show/7371/understanding-amds-mantle-a-lowlevel-graphics-api-for-gcn
 


Why worry about predictions when the data is there to be read? Here's the performance of the 970 in SLI at 2560 x 1600 .... completely free of the dire predictions you describe:

http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_970_SLI/20.html

Tomb Raider goes from 29.8 to 58.7; scaling = 96.98%
Battlefield 3 goes from 62.1 to 121.4; scaling = 95.49%
Far Cry 3 goes from 35.6 to 68.8; scaling = 93.26%
Crysis 3 goes from 22.5 to 43.3; scaling = 92.44%
Thief goes from 70.8 to 136.1; scaling = 92.23%
Bioshock Infinite goes from 76.7 to 143.9; scaling = 87.61%
Splinter Cell: Blacklist goes from 49.5 to 92.2; scaling = 86.26%
Battlefield 4 goes from 45.0 to 83.2; scaling = 84.89%
Metro LL goes from 40.7 to 74.6; scaling = 83.29%
Batman: Arkham Origins goes from 81.8 to 148.3; scaling = 81.30%
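
(If anyone wants to sanity-check those scaling numbers, the arithmetic is just the two-card FPS over the one-card FPS. Here's a quick Python sketch using a few of the TechPowerUp figures quoted above; the same check works for the 4K list further down.)

```python
# Quick check of the SLI "scaling" figures quoted above.
# scaling = how much of a second card's theoretical 100% uplift shows up:
#   (dual_fps / single_fps - 1) * 100
# FPS values are the TechPowerUp 2560x1600 results cited in this post.

results = {
    "Tomb Raider":       (29.8, 58.7),
    "Battlefield 3":     (62.1, 121.4),
    "Far Cry 3":         (35.6, 68.8),
    "Crysis 3":          (22.5, 43.3),
    "Thief":             (70.8, 136.1),
    "Bioshock Infinite": (76.7, 143.9),
}

for game, (single, dual) in results.items():
    scaling = (dual / single - 1) * 100
    print(f"{game:18s} {single:6.1f} -> {dual:6.1f} fps   scaling = {scaling:5.2f}%")
```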


And if you're gonna come back with 4k.... until there's a pair of cards that can do 60 fps across the board in TPU's test suite, 4k isn't on my radar. As you'll see below, it currently affects only 0.06% of the gaming market and I don't see it being a factor until 2016 or later. But even at 4k, scaling in demanding games is ... well, the numbers speak for themselves.

Crysis 3 96.43%
Battlefield 3 94.85%
Splinter Cell BL 94.05%
BioShock Infinite 93.59%
Tomb Raider 93.41%
Batman: Origins 93.27%
Thief 93.26%
Far Cry 3 91.96%
Assassins Creed 4 90.07%
Battlefield 4 88.57%
Watch Dogs 79.58%
Metro LL 71.86%





No need to guess..... go here and see who's hitting steam servers (not office workers):

http://store.steampowered.com/hwsurvey

Scroll down a bit and hit (Under Feb 2015) "Primary Display Resolution"

1366 x 768 = 26.43%
1920 x 1080 = 34.02%
2560 x 1440 = 1.03%
3840 x 2160 = 0.05%
5760 x 1080 = 0.06%

You can also look at what GFX cards are being used

GTX 970 = 1.80 %
All R7 + R9 cards combined = 1.62 %

Those are the cards hitting Steam servers in the month of February, which covers 5 months of sales for the 970 and up to 16 months of sales for the R9 / R7 series.
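
(Rough per-month comparison, if you normalise those shares by how long each card has been on sale; the 5 and 16 month figures are the ones assumed in this post, so treat it as a back-of-envelope sketch.)

```python
# Back-of-envelope adoption rate from the February Steam survey shares above.
# Months-on-sale figures are the assumptions stated in this post.
cards = {
    "GTX 970":           (1.80, 5),    # (% of survey, months on sale)
    "All R7 + R9 cards": (1.62, 16),
}

for name, (share, months) in cards.items():
    print(f"{name:18s} {share:.2f}% over {months:2d} months = "
          f"{share / months:.2f}% per month")
```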




 
I fully understand why DirectX and OpenGL exist, and I thought I knew why they had to be so un-optimized, but now it's just a coincidence that Microsoft chose NOW to make DirectX 12 to the metal? And Vulkan too?

The fact is that the same situation with multiple GPUs and varied hardware still applies; nothing there has changed. The only real difference is that Mantle was released with the promise of being open source, and the OpenGL camp took AMD up on the offer, so Microsoft had no choice but to compete.

I would not be surprised if DX12 had at least some code from Mantle in it.

The fact is that Mantle made this change happen. Microsoft had no reason to optimize DX until a threat or competitor showed promise; it's always the same in business.
 
I fully understand why DirectX and OpenGL exist, and I thought I knew why they had to be so un-optimized, but now it's just a coincidence that Microsoft chose NOW to make DirectX 12 to the metal? And Vulkan too?

It's called progress. Now that DirectX and OpenGL have already solved the problem of the PC's varied configurations, developers want something similar to the consoles: a low-level API on the PC. But make no mistake, not every developer is interested in going low level on the PC. For experienced developers, though, having direct control over resources will probably bring a lot of benefit.

The fact is that Mantle made this change happen. Microsoft had no reason to optimize DX until a threat or competitor showed promise; it's always the same in business.

What exactly is not optimized in DirectX? If anything, it was AMD that refused to use DX11 features like driver command lists (DCLs). Optimized properly, you can still gain performance with DirectX 11. We know the 780 Ti and 290X are neck and neck in general, and Mantle was supposed to give AMD an edge over DirectX 11, but despite that, Nvidia on DX11 performs as well as the 290X does with Mantle in BF4:

http://www.hardocp.com/article/2014/03/04/bf4_amd_mantle_video_card_performance_review_part_1/3#.VPdX1vyUfA0
 
Mantle was / is a bit of a boon to low-budget systems.... I don't see where it belongs in a 980 discussion. A build with a top-tier CPU / GFX card will show little to no benefit from Mantle.

Also, make sure to distinguish between "outta the box" and "overclocked" performance. While the 290x can compete with the 780 Ti out of the box, it certainly doesn't when both are overclocked "bawlz to the wall". Even the 780 takes the 290x down at 1920 and 2560 with both BttW.
 


The thing about Mantle is, everything Mantle came up with is going to be rendered moot by DX12. Other than the specific parts of Mantle that are going over to Vulkan to power the VR portion of that API, Mantle has nowhere to go other than a very narrow focus in the market. DX12 is broader and will be better implemented, because DX12 learned from Mantle's first steps and then leapfrogged them.

As far as the "outta the box" conversation goes, now you're talking about how much overclocking a particular manufacturer does before selling the card to you. That can still come down to an apples-to-apples comparison, as the difference between a reference design and an FTW+-type card is the cooling that's put on it: the cooler they keep it, the further a manufacturer's overclock can be pushed.

So pick Nvidia or AMD, then see who's making the best card for the money in terms of overclocks and cooling, and see how much more you pay for how much more performance.

The performance difference between a reference 980 you can just barely squeeze into your budget and an FTW+ 970 that fits your budget easily isn't as big as the difference between a reference 970 and a reference 980.

Please don't mistake that last point: they're not equal in performance, but the gap narrows when you're talking about a top-tier 970 vs a reference 980. At least if VRAM isn't an issue for you, and if you're driving a 1080p 60 Hz monitor, the differences are very, very small.
 
The difference between, say, the Classified and the SC is narrowing every year. This is due to both the physical and legal restraints that nVidia has placed on the cards and their vendors. Through the 6xx and 7xxx series, I used to see about the same overclocks from either camp..... and I am talking about the "enthusiast gamer" cards .... the ones that make up 85% of sales and which the Big 4 refer to as:

Asus - DCII (now Strix)
EVGA - SC
Gigabyte - Windforce (now G1)
MSI - Twin Frozr (now Gaming Series)

Up until the 6xx / 7xxx series, both camps operated in the 65-80C range. But everything changed with the 700 series from nVidia.... AMD retooled and put out the R9 series, and all of a sudden "95C was OK".

http://www.tomshardware.com/reviews/amd-ama-toms-hardware,3672-3.html

Of course nVidia still says 97C on their spec sheets now, as they did then, but we never saw those kinds of numbers "outta the box". With AMD it was routine. While better-designed coolers came along later and narrowed the gap..... the operating temperatures of the R9s in any wrapper never got down near even the reference nVidia designs.

Don't get me wrong, I applauded AMD for that. Some would argue "what choice did they have?" But my position was: why not be all you can be and push the edge .... so what if it results in a few returned cards which, given the chip lottery, can't handle the advertised speeds? The consumer benefits.

But looking at the rare reviews that test "ballz to the wall", and my own experience..... 7 - 12% was the range I've seen on the R9s (16% once), but I have never gotten less than 22% on an nVidia card, and the best I have seen was 30% (560 Ti).

I don't agree with Linus much, but I do appreciate that he does test cards overclocked.... and here it's with reference cards

https://www.youtube.com/watch?v=djvZaHHU4I8

I thought the R9 would improve with water cooling .... but ... nVidia's gap widened when both were water cooled.
https://www.youtube.com/watch?v=RqaHh-y51us

 
With all this talk of DirectX 12, it's probably important to note that only the GTX 970/980 are true DirectX 12 cards. You're already obsolete if you go for that 290X.

"only GM2xx GPUs will fully support the newer rendering features like Conservative Rasters and Raster Ordered Views. Developers can access such features through the DirectX 11 API, hence the support for DX12 in Fermi and above, but as we understand it native support for them and therefore, presumably, better performance when using them will only be available in the latest GPUs, such as GM204."
http://www.bit-tech.net/hardware/graphics/2014/09/19/nvidia-geforce-gtx-980-review/1
 


Heck, I'm on 2 768p monitors and I'm still satisfied and not ready to move up!
 
People talk a lot about Mantle and low-level access, but the truth is Mantle is not like the low-level APIs used in consoles. Sure, it is lower level than DirectX and OpenGL, but it is still higher level than a console's low-level API. I still remember when AMD first revealed Mantle: everyone thought it would be exactly like console low-level access, and I think that mindset was further reinforced when AMD themselves talked about humiliating the Titan with Mantle. Then BF4 with Mantle came out, and from the results it is clear that Mantle helps a lot with CPU overhead in the drivers, but it does not turn a 270X running Mantle into something as fast as a 290X running DX11. Also, on consoles, low-level access lets developers do more with fewer resources, but that did not happen with Mantle: in BF4, the Mantle version of the game uses more VRAM than its DX11 counterpart. From what I can see, Mantle is not really about giving the PC a true console-style low-level API; it is more about giving developers access to resources that are not exposed by current DirectX and OpenGL, and about reducing CPU overhead.
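
(To illustrate that last point, here's a toy model of why a thinner API mostly helps when the CPU side is the bottleneck. All the numbers are made up for illustration; it isn't a measurement of Mantle, just the reasoning put into code.)

```python
# Toy model: a frame can't finish before the slower of the CPU side
# (submitting draw calls) and the GPU side (rendering them) is done.
# Cutting per-draw CPU cost (what a low-overhead API does) lifts the
# CPU-bound case a lot and the GPU-bound case barely at all.
# All numbers below are invented for illustration.

def frame_time_ms(draw_calls, cpu_cost_per_draw_ms, gpu_time_ms):
    cpu_time = draw_calls * cpu_cost_per_draw_ms
    return max(cpu_time, gpu_time_ms)

draws = 5000
for api, per_draw_ms in [("high-level API", 0.004), ("low-overhead API", 0.001)]:
    for scene, gpu_ms in [("CPU-bound scene", 8.0), ("GPU-bound scene", 25.0)]:
        ft = frame_time_ms(draws, per_draw_ms, gpu_ms)
        print(f"{api:16s} {scene:15s} -> {1000 / ft:6.1f} fps")
```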
 
lol @ this Mantle stuff. :) We have the CPU for CPU work and the GPU for GPU work. Why would you want to buy an expensive GPU to run things 50% on the GPU and 50% on the CPU, when that will be slower than an older CPU anyway... just upgrade to an i5 4690K and a GTX 970 and you will be happy for years.
 


:lol:

You know what, I've probably recommended more Radeons than GeForces on this board over the years because of AMD's price advantage. Sure, there are a few issues I don't like with them (same with Nvidia), but that's it. If I were a total AMD hater, I don't think I'd ever recommend AMD stuff no matter how good it was, hahaha :pt1cable:
 


The reason why you see such a big difference is that your R9 290x was heavily bottlenecked by the Intel i7 920 at stock speed....
 
FPS is still the benchmark for me. It doesn't matter how much faster one GPU is than another as long as it renders at more than 40 fps; once you get to 50 or 60, higher frame rates are pointless, as the difference is unnoticeable on most monitors or during gameplay, imho.

I'm running an Asus R9 290 with an AMD 8-core FX 8350 on an Asus Sabertooth MB and 8GB of RAM. I have 4 fans doing the business of keeping everything cool. I run Thief on ultra settings and the heat coming out the back is normal; the fans are not running fast, so it's quiet. Same thing with Mordor and AC Unity, although with AC I can just about notice a difference in fan speed and heat build-up at certain times under graphics overload, but only out of curiosity do I try to notice any change.

All those games run at between 35 and 80 fps, depending on graphics load at top settings, so I'm quite happy that my rig will run The Witcher 3 when it comes out in two months. How high the settings will be is moot until then, but going on the games I've played so far, I'm not worried.

So you can tell me the Nvidia GTX 980 will run cooler and quieter, but that won't make any difference to me, and I've got my rig £50 cheaper with the Radeon GPU. So it's all down to personal preference; both cards will do the trick, and by the time I need to go beyond a single card and use CrossFire (which will probably be never), hopefully AMD will have pulled their finger out and produced better drivers.

 
The 970's price range is certainly comparable to that of the 290x..... twin 970s can be had for just $46 more than a 980, and that brings a 50% speed increase. Comparing two 970s versus two 290x's..... on a strict performance basis, the numbers say 290x at 4k... 970s for 1440p and below, assuming all the cards will be OC'd.


1. Cost Adjustment - You get two Witcher 3 coupons, one of which you can sell at a $10 discount for $50
2. Cost Adjustment - the 750 watter w/ (2) 970s becomes a 900 watter w/ (2) 290x's ...
3. Cost Adjustment - The difference in power cost could easily reach $100 over the life of the cards under normal usage.
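
(A rough sketch of where a number like that comes from; the wattage delta, hours, years and electricity price below are all assumptions for illustration, not measurements.)

```python
# Rough electricity-cost estimate for the extra draw of two 290Xs over
# two 970s. Every figure here is an assumption for illustration only.

extra_watts   = 150     # approx. extra system draw under gaming load
hours_per_day = 4       # gaming hours per day (assumption)
years         = 4       # how long the cards stay in service (assumption)
usd_per_kwh   = 0.12    # electricity price (assumption)

extra_kwh = extra_watts / 1000 * hours_per_day * 365 * years
print(f"{extra_kwh:.0f} kWh extra -> about ${extra_kwh * usd_per_kwh:.0f} "
      f"over {years} years")
```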

1. Out of the box, the 970s have the lead at 1080p, 290x's at 1440p
2. With all cards overclocked, the 970s take 1440p by a very small margin.
 


and there is also why Nvidia is there. AMD, with or without aftermarket cooling, is still noisy as FK and hot as FK, unless watercooled, which costs a lot more and isn't everyone's choice, and even then it still isn't that silent on AMD cards.

I've seen too many comparisons by now showing AMD simply is garbage with bad engineering, and I will never buy AMD again... Nvidia has always done the job perfectly for me and will keep doing so.
 
....and there is also why Nvidia is there. AMD, with or without aftermarket cooling, is still noisy as FK and hot as FK, unless watercooled, which costs a lot more and isn't everyone's choice, and even then it still isn't that silent on AMD cards.

I've seen too many comparisons by now showing AMD simply is garbage with bad engineering, and I will never buy AMD again... Nvidia has always done the job perfectly for me and will keep doing so.


Sorry, but as a generalisation about AMD and the R9 290, that is just plain wrong, and if you won't read the informative and exhaustive review below from a kosher site, then I'm sure you will be missing out in the future.

http://www.bit-tech.net/hardware/graphics/2014/03/03/asus-radeon-r9-290-directcu-ii-oc-review/2

It's not being compared with the GTX 970 (and that is a good card anyway), but the temperature/power runs are not consistent with some people's opinions on here. You can see the huge difference the Asus card has over the reference AMD version. For parity's sake, the 970 review is below....

http://www.bit-tech.net/hardware/graphics/2014/09/19/nvidia-geforce-gtx-970-review/1

I've seen a GTX 970 for £260 and an Asus R9 290 for £225.

 
It really depends on how much the cards cost where you live and which you would prefer; both cards will give you very good results at FHD and QHD, while UHD will require SLI or CrossFire. The main thing is that AMD is cheaper in most cases, and cards like the Tri-X run at around 70°C under load. The GTX 970 is an amazing card too. Check how your CPU copes with these cards before you proceed, and check it on many different websites. I was previously on an AMD FX 8320 before I upgraded to Intel; truth be told, I was worried about "bottlenecking", but the 8320 or the 4790K made no difference. Both my CPU and GPU are overclocked, and while playing BF4 online at FHD Ultra my FPS is around 110 and barely drops below 80. So it really depends on you; both choices are good.
 


Yes, it seems card prices vary considerably from country to country, but hopefully Amazon or eBay can deliver bang for buck across the great divide.

I'm running an R9 290 DCU II OC with an AMD FX 8350 and 8GB of RAM on a Sabertooth MB. All my games, especially Assassin's Creed Unity, run on ultra, and I haven't had a problem with heat exhaust or power surges, which is what prompted me to recommend the Asus card over the reference AMD card; they are a world apart. AMD got it terribly wrong with their version: you could cook eggs on them, they ran so hot. In their haste to cut the legs off the price of a GTX card, they put a bog-standard fin and fan on it and crossed their fingers hoping we wouldn't notice.

Nvidia has taken the lead with better technology overall, but the price is prohibitive for many, and as in the past, when AMD knocked Nvidia off their perch, Nvidia claimed it back again. I'm sure this seesaw effect will continue and provide us PCers with great cards because of the competition between them. When one excels in the tech department, the other will cut corners, so it's up to sites like this to sort the wheat from the chaff.

 


WOAHAHAHAHAHA :lol: :lol: :lol: :pt1cable:
 


Got a 27-inch 1440p Korean IPS for 300 Australian dollars a few years ago. It runs at 110 Hz, and 1440p looks great for the strategy games I play.
 
