GTX 970 over AMD 390 (non x)


Voxxie
Hello, I'm struggling with a graphics card choice. Can you suggest which one would be better? I kind of want to go with the GTX 970, but at the same time the 390 has 8 GB of VRAM instead of the 3.5 GB the GTX 970 has. I'm gaming at 1080p and I'm not going for a bigger monitor.
 
Solution
What power supply do you have? The GTX 970 uses less power, so it might be the better option if your power supply is a bit small. Otherwise they are pretty close at 1080p, so get whichever you can find cheaper. I've never run into issues hitting the VRAM limit on my GTX 970 playing at 1080p. I might have to turn an option or two off in the future, but right now I've got pretty much everything maxed at 1080p.
They're both fantastic cards, but the 390 is generally slightly better: it obviously has a much larger frame buffer, it has ACE (asynchronous compute engine) hardware built into the GPU, and it is generally a little cheaper. That said, a 970 can outperform a 390 when overclocked and has a lot more headroom than the 390. I'd personally go for the 390 over the GTX 970, though.
 
Well, the R9 390 is $10 cheaper here in my country.
 


Right now I have a 500 W PSU, but I'm planning to upgrade that too, to about 700-750 W.
 
Yes, a 500 W PSU is the recommended minimum for the GTX 970. For system stability reasons I wouldn't risk the R9 390 on anything below 600-650 W. A new 700-750 W unit should be more than enough.
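(For a rough sense of the headroom involved, here's a back-of-the-envelope sketch. The board-power ratings of roughly 145 W for the GTX 970 and 275 W for the R9 390 are the manufacturers' ballpark figures, and the CPU and "rest of system" numbers are assumptions; actual draw varies by card and workload.)

```python
# Back-of-the-envelope PSU headroom estimate. Board-power figures are ballpark
# ratings (GTX 970 ~145 W, R9 390 ~275 W); real draw varies by card and workload.

def headroom(psu_w: float, cpu_w: float, gpu_w: float, rest_w: float = 75.0) -> float:
    """Watts left over after an estimated CPU + GPU + motherboard/drives/fans load."""
    return psu_w - (cpu_w + gpu_w + rest_w)

cpu_w = 95.0  # assume a ~95 W quad-core CPU

print("GTX 970 on a 500 W PSU:", headroom(500, cpu_w, 145), "W spare")
print("R9 390  on a 500 W PSU:", headroom(500, cpu_w, 275), "W spare")
print("R9 390  on a 700 W PSU:", headroom(700, cpu_w, 275), "W spare")
# GTX 970 on a 500 W PSU: 185.0 W spare
# R9 390  on a 500 W PSU: 55.0 W spare
# R9 390  on a 700 W PSU: 255.0 W spare
```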
 


I have an old, slow Athlon II X4 620, I think.
 


As long as your 500 W PSU is a decent brand you're more than fine.

That said, that processor is kind of weak and is going to bottleneck the 970. Just be prepared that you may not get the performance you expect out of it, and you will find benchmarks come in noticeably lower than they should.
 


I'm doing a completely new build, changing everything including the CPU, which will be a 4790K. The only thing I can't decide on is the GTX 970 vs. the R9 390, but yeah, I think I'll go with the R9 390.
 


Ok good, I was under the impression you were using this with your current build.
 
Definitely the R9 390. For longevity, you don't want anything nVidia. And with reports of Async compute being broken on nVidia cards, your best bet is the R9 390.

There's a track record for both brands. AMD cards can easily go 4 years, while nVidia cards generally need to be upgraded every two years. I'm not talking about reliability as in the cards breaking, but simply long-term support. When a GTX 960 can outperform a 780 Ti, you know you wasted your money if you bought the 780 Ti.
 
I'm not following here. Your example is that a fast, cheap card came out afterward, so nVidia is crap in the long term? I've had plenty of nVidia cards that have lasted at least 4 years. If two cards have exactly the same performance and build quality, they will last exactly as long in my mind, brand independent. Sure, there are cards that are overpriced, but that doesn't make them perform worse. The GTX 780 Ti is still a fast card; I don't see why anyone would be upset with its performance (maybe with the price they paid at the time).

Regardless, in this case you are right. The R9 390 is probably the better card in the long term, but your answer smacks of bias.
 
A Radeon HD 7970 user has been playing games with it since late 2012/early 2013. At the time the GTX 680 was the better performer. How many people can still use their GTX 680 for today's games compared to HD 7970 users? The HD 7970 is basically an R9 280X, and those cards are still keeping up. If you buy nVidia, you have to upgrade more often than if you buy AMD.

Another example... Looking at the Ashes of the Singularity benchmarks, say you bought an R9 290 for $349 back in 2013. You'd be quite content seeing the potential of DirectX 12, no? If you bought two for $700 back in 2013, you'd be pretty happy to hear about DirectX 12's Multi-Adapter and split-frame rendering, no? Because that 4 GB frame buffer limit under DirectX 11 now effectively becomes an 8 GB frame buffer. You can play DirectX 12 games throughout 2016 at 4K, on that new 4K monitor you bought instead of upgrading your GPU, after having played every game in 2014 and 2015 without so much as a hitch.

Now that's quite the investment, wouldn't you agree?

Now compare that to the $650 GTX 780 Ti you bought in Q1 2014...

And let's not even start talking about GameWorks...

It has nothing to do with bias. It's simply nVidia's track record....
 


I HAVE A 2GB GTX 680 AND I RUN 1080P AND 3840X1024 NO ISSUES STILL BEAUTIFUL

 


That's because Nvidia usually actually upgrades their cards while AMD just renames them. You said it yourself... a 7970 is a 280X... and then they went and did it again with much of the 300 series... Your card is still viable because it's the same card with a new name, a slight overclock, and updated firmware.

I won't say Nvidia hasn't done this, just that it's less frequent.

All of us users, IMO, need to read reviews and make sure we are buying new tech, not renamed tech...
 
Why does it need to be new? AMD's old tech supports features that are only now starting to get used. New tech is only relevant when all the useful old tech has been superseded. Quite a few of the useful 'old' features of the GCN architecture haven't even been used yet, let alone superseded. Async compute is the main one. And the fact that GCN is bindless in its resources is a really big deal; nVidia's GPUs still aren't.
 
The Moderation team strongly prefers not to alter Best Answers selected by the Original Poster; however, the originally selected answer in this case contained enough misinformation that it could be unhelpful to a later reader looking for an answer to a similar question.
The answer now selected is the one we thought was most appropriate. Performance differences at 1080p will be small enough in actual use that the only practical difference may in fact be power draw. Both companies have driver issues from time to time, so AMD vs. nVidia generalizations are simply not helpful.
 


And what do you have that shows that async will benefit games? Nothing out there shows anything beyond a single game benchmark that was co-developed by AMD.

Personally I think it is all just rubbish until we see actual games. A GTX 980 doesn't support async but does support FL 12_1, which no AMD cards do. So what does that mean?

It is much like the draw calls. Everyone is going crazy over them, yet so far they are just a big number, much like TFLOPS, with no sign that they will be indicative of game performance.

Did you know that an R9 290X has the same compute performance as a GTX 980 Ti? Yet the GTX 980 Ti is much faster in games. So again, what does that mean?

It means that all those big numbers that synthetic benchmarks spew out are pointless until we have a real world situation and game.

And Maxwell is bindless; it is one of the features in FL 12_0, and in order to support a feature level you have to support all of its main features.
 
Good that you mention that. I did not know that "Onus" was a moderator at the time I unselected the answer, since there was no visible indication to point that out. I refrained from choosing a new answer, even though I could have chosen anti-duck's answer if I really wanted to play the AMD vs. nVidia game. I unselected the answer because the choice was changed after almost a month, and I didn't expect a moderator to have done that, so I unselected it without choosing a new one, and, well, you already know what happened. The rest I will send privately, to avoid further 'off-topic' discussion that would be pinned on me being at fault.


Oh wow. Really? From your reply, I can already tell you didn't do your homework. Let me explain...
First of all, the boosts AMD got were both in Ashes and in Fable Legends, not just 'the game AMD helped develop'. In Fable the Fury cards didn't perform great, probably because of driver issues. They are faster with an i3 than with an i7. The R9 300 cards all got a much larger boost compared to nVidia.
Secondly, as I already explained in another thread, AMD has a marketing deal with the publisher of Ashes of the Singularity, not a development deal with the developer. In fact, Oxide (the developer) worked more with nVidia than with AMD, in their own words.
Thirdly, we all know (I hope) that in DX11 AMD cards are driver limited while nVidia's aren't. If performance is equal in DX11, is it so preposterous to suggest that the API that will reduce driver overhead gives AMD a bigger boost and thus more longevity under DX12?
Fourthly, async compute benefits games because the GPU can execute graphics and compute tasks at the same time. If your graphics work takes 16.7 ms and your compute work takes 16.7 ms, normally it will take 33.3 ms in total (33.3 ms = 30 fps). With async compute you can reduce this to, say, 20 ms, and then your framerate becomes 50 fps rather than 30 fps. Same hardware, same everything. The idle gaps in the GPU are eliminated, boosting performance.
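(A minimal sketch of that arithmetic, using the same hypothetical 16.7 ms workloads and an assumed 20 ms overlapped time; none of these are measured values.)

```python
# Rough back-of-the-envelope sketch of the async-compute arithmetic above.
# The 16.7 ms workloads and the 20 ms overlapped time are hypothetical
# numbers from the post, not measurements.

def fps(frame_time_ms: float) -> float:
    """Convert a frame time in milliseconds to frames per second."""
    return 1000.0 / frame_time_ms

graphics_ms = 16.7   # time for the graphics work alone
compute_ms = 16.7    # time for the compute work alone

serial_ms = graphics_ms + compute_ms   # no async: tasks run back to back
overlapped_ms = 20.0                   # assumed time with async compute overlapping the two

print(f"Serial:     {serial_ms:.1f} ms -> {fps(serial_ms):.0f} fps")
print(f"Overlapped: {overlapped_ms:.1f} ms -> {fps(overlapped_ms):.0f} fps")
# Serial:     33.4 ms -> 30 fps
# Overlapped: 20.0 ms -> 50 fps
```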

Software has been developed to test async compute on AMD vs. nVidia. Here's a comparison:
[Image: async compute latency comparison, GTX 980 Ti vs Fury X]


What are we looking at here? I'll probably have to explain. First, the top row: there are two graphs for each card. On the left you see the 980 Ti. The graphics part and the compute part are measured separately here. I don't remember the numbers exactly, but the compute tasks were programmed to be executed at four different loads. With the 980 Ti you see this clearly in the graph: the higher the load, the higher the bar, meaning the calculation takes longer.
On the Fury X side, you see that the load doesn't matter; the result comes back in the same amount of time every run. That time is higher than all of the compute results on the 980 Ti, independent of load.
The bottom part of the top graphs shows the graphics portion; the height again indicates time. So here too the 980 Ti is faster at the graphics work than the Fury X.

Now let's go to the bottom two graphs. Here, Async compute was turned on, and the GPUs need to complete both tasks. What do we see? On the 980 Ti, the new graph is pretty much the time it takes to do the graphics calculations, added to the time it takes to do the compute tasks. Async is obviously not working, because the time is supposed to be reduced with it, but we see a simple addition.
On the Fury X, what do we see? The time it takes to do both the compute and the graphics tasks is pretty much the same as the compute tasks alone in the graphs above, shooting slightly higher on a few occasions. Async is obviously working, and we're getting the whole graphics calculation for 'free'. So much so that at the highest compute load the Fury X's combined result surpasses the 980 Ti's, despite both separate tasks being faster on the 980 Ti.
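(A toy model of what those graphs show, with invented millisecond values, is below: if async works, the combined run takes roughly the longer of the two workloads; if it doesn't, it takes their sum.)

```python
# Toy model of the async-compute benchmark described above.
# All millisecond values are invented; only the shape of the comparison matters.

graphics_ms = 10.0                            # time for the graphics pass alone
compute_loads_ms = [12.0, 14.0, 16.0, 18.0]   # compute pass at four increasing loads

for compute_ms in compute_loads_ms:
    serial_ms = graphics_ms + compute_ms        # async NOT working: simple addition
    overlap_ms = max(graphics_ms, compute_ms)   # async working: graphics hidden behind compute
    print(f"compute={compute_ms:>4.1f} ms  serial={serial_ms:>4.1f} ms  overlapped={overlap_ms:>4.1f} ms")
```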

Is it clear now why async will benefit games? If you want, there's a whole thread on Beyond3D with a bunch of people running the benchmark and posting results:
DX12 Performance Discussion And Analysis Thread

It means that they support conservative rasterization and rasterizer ordered views in hardware. Those are two very specific features. Async compute applies to graphics and compute tasks in general, meaning it has more applications. The FL12_1 features can realistically also be done in software on the CPU. Async, well, you're probably going to kill the CPU trying to do it at anywhere near the same level as on the GPU.

The draw call test released by Futuremark measures how much work can be scheduled and sent to the graphics card for calculation. It is separate from how quickly the GPU itself executes that work.

Well, nVidia is able to make many more draw calls under DX11 than AMD. So basically, the draw calls AMD can make with their drivers under DX11 are insufficient to make use of all that compute performance, which is exactly why the R9 cards get that big boost under DX12.
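(To make that concrete, here's a back-of-the-envelope sketch with invented per-draw-call overhead figures; the point is only that a cheaper per-call cost raises the CPU-side frame-rate ceiling, not that these numbers match any real driver.)

```python
# Hypothetical illustration of why driver (draw-call) overhead can cap frame rate
# before the GPU runs out of compute. All numbers below are invented for the example.

def cpu_limited_fps(draw_calls_per_frame: int, overhead_us_per_call: float) -> float:
    """Max FPS if the CPU/driver spends `overhead_us_per_call` microseconds per draw call."""
    frame_time_ms = draw_calls_per_frame * overhead_us_per_call / 1000.0
    return 1000.0 / frame_time_ms

calls = 10_000  # draw calls in one frame (hypothetical, heavy scene)

# Invented overhead figures: a "high overhead" DX11-style driver path vs a
# "low overhead" DX12-style path.
print(f"High-overhead path: {cpu_limited_fps(calls, 5.0):.0f} fps ceiling")
print(f"Low-overhead path:  {cpu_limited_fps(calls, 1.0):.0f} fps ceiling")
# High-overhead path: 20 fps ceiling
# Low-overhead path:  100 fps ceiling
```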

It only means this if you're unaware of what's really going on inside the hardware through the available software. It's easier to say 'we don't know' and sit on the fence, rather than putting in the time to investigate what's actually happening.

You're actually right, in the sense that I didn't specify it precisely enough. AMD cards are fully bindless, meaning the full heap is available. nVidia cards are limited to a specific number of heaps. It's the reason AMD is considered Tier 3 in resource binding, and Maxwell 2 is considered Tier 2. I'm not saying nVidia doesn't have any advantages; they have volume tiled resources while AMD doesn't, for example. It just so happens that AMD is missing the least important features. Look here:
[Image: DirectX 12 feature support comparison table]


Two things AMD does not have but nVidia does are the FL12_1 features and volume tiled resources (nVidia's Tier 3 vs AMD's Tier 2 on the tiled resources line in the list). If you ignore importance and just count, AMD has Tier 3 resource binding versus nVidia's Tier 2, stencil reference value from the pixel shader (absent in nVidia hardware), full-heap UAVs versus nVidia's 64 UAV slots, and async shaders. That's still a 4 vs 3 advantage on the AMD side, from an architecture that's supposedly too old and keeps drawing complaints about being rebranded.

Even funnier is when I say that AMD is better for longevity than nVidia, and people think I'm crazy. That table holds more evidence of this: Maxwell 1 was released in 2014 and only achieves FL 11_0, while GCN, released in 2011, is FL 11_1. Was I spreading 'misinformation'?
 
Shouldn't you be more mad at AMD for providing junk drivers and limiting their cards' performance under DX11, instead of shilling for them and praying for a miracle fix that finally puts them close to or slightly ahead performance-wise? You're finally starting to come around on the FX-8350 front, so maybe you should take a harder look on this front too. You talk as if Nvidia has no way to improve performance for big and little Maxwell and AMD will simply end up with the performance edge.

The problem is that the more significant gains AMD gets from async only bring them up to par, or give them a slight edge, reference card to reference card. All of these Ashes and Fable benchmarks are run on reference cards at reference speeds. For example, in a game where you're now tied at 60 fps at reference level, an aftermarket card with nothing beyond its factory overclock is looking at another 10-12 fps, and overclocking above the factory overclock adds another 5 or 6 fps.

Good for AMD; they needed a boost, because they couldn't optimize their drivers for DX11.


You asked about the 780 Ti; it looks to be holding its own, even at 1440p, and keep in mind those are not AIB cards, they are reference. How much was the 290X again at launch? $550, but somehow the 780 Ti is not worth it.

[Image: relative performance chart at 1920×1080]

[Image: relative performance chart at 2560×1440]


Below are results at reference clock speeds with brand-new drivers for both AMD and Nvidia... it seems the gap is starting to close, even in Ashes of the Singularity. You mentioned before that Nvidia has no way to improve performance, but it looks like it's starting to happen.

[Image: Ashes of the Singularity DX12 benchmark, NVIDIA and AMD with new drivers]

 
For 1080p gaming (a larger monitor is not a consideration, per the OP), choosing between the GTX 970 and R9 390 is sort of like deciding between a ball-peen hammer and a mallet for pounding tent stakes; there is no meaningful difference. In any case, the OP has selected the R9 390, so this thread has run its course. Have a nice day!
 