970 v. 390 Soft Features Comparison

Jonathanese

Someone is willing to buy my GTX 660 for $150, so that puts me in the market for a new card.

Like basically everyone on the planet who is buying a GPU these days, I am torn between the GTX 970 and the R9 390. Also, I am running mostly at 1920x1080.

Pros for R9
- Basically better in absolutely every way on paper.
- They usually say you won't use the 8GB buffer unless you have "heavily modded games".
But I'm interested in games like Assassin's Creed: Unity, Skyrim (with everything possible to improve the shoddy graphics), and Mass Effect: Andromeda, plus some sort of DSR/VSR where applicable.

Pros for 970
- Cooler operation (Not too concerned)
- Lower power (Not too concerned)
- High overclocking potential (Quite interested. Might outpace 390 with a good OC)
- I've been using Nvidia since the Geforce MX 4000

From a hardware standpoint, I'm pretty much set on the 390. What it comes down to, for me, is the SOFT features. So this is where I want the AMD fanboys to chime in.

Last time I used AMD/ATI drivers, they seemed pretty limited in their functionality, and felt clunky to use. You could also only set features like FSAA universally, not on a game-by-game basis. I also hear the drivers are much slower to update.

With Nvidia drivers, I like being able to have it auto-detect every game, and set every single feature (including CUDA GPU) for every single game individually. I like being able to inject Ambient Occlusion on many titles. I like the ability to shove FXAA where I want it, or make extremely detailed choices about what AA modes I use. I'm also interested in TXAA, and I doubt that is something AMD will support.

So what can you guys tell me about AMD drivers, AMD software, and the whole AMD graphics experience in general? Not from a performance standpoint, but from the standpoint of features, compatibility, etc.?
 
Hi, I used to be an AMD fanboy, but not anymore.

NVIDIA

Pros:
- Adaptive VSync: very useful, and Nvidia's own solution is built into the drivers.
- Very good drivers
- GeForce Experience
- TXAA and MFAA (Nvidia exclusives)
- Great overclocking
- Power efficient

Cons:
- A bit more expensive

AMD

Pros:
- Mantle and GCN architecture
- Affordable

Cons:
- No adaptive VSync equivalent from AMD; you have to rely on unofficial software.
- Not very good at overclocking
- Not power efficient

 
So this is where I want the AMD fanboys to chime in

lol. In their view, team green always loses or is evil. To them:

Nvidia doing a rebrand: cheating the customer
AMD doing a rebrand: smart strategy
Nvidia charging a high price: milking the customer
AMD charging a high price: it reflects the performance, so it's justifiable
Nvidia pushing new tech: that's locking people into their hardware
AMD pushing new tech: that's moving the industry forward

:lol:

Well, joking aside, I will try to answer your questions bit by bit with what I know :)

I also hear the drivers are much slower to update.

Yes, they are. But it's probably not much of an issue unless a game can't work, or can't work properly, without driver updates, and as long as you're not running a CrossFire setup. For CrossFire, driver support is crucial; that's why, despite CrossFire's advantages over SLI, I have a hard time recommending it.

I like being able to inject Ambient Occlusion on many titles.

I still think this is one advantage of having an Nvidia card. To this day I have never heard of anything similar being offered by AMD.

I like the ability to shove FXAA where I want it, or make extremely detailed choices about what AA modes I use.

Instead of FXAA, AMD has MLAA. Honestly, I don't know the current status of MLAA; it doesn't get much coverage, and the last I heard there would be an MLAA 2. Anyway, similar to FXAA, you can force MLAA in games via AMD's Catalyst Control Center (CCC), though I've heard MLAA seems to have a bigger performance hit than FXAA. FXAA was actually Nvidia's reaction to AMD's MLAA to begin with.

I'm also interested in TXAA, and I doubt that is something AMD will support.

TXAA is an Nvidia-exclusive AA mode. While the feature has to be implemented by the game developer in their games, it also requires specific Nvidia hardware that only exists in Kepler and later Nvidia GPUs, so AMD GPUs will not be able to use TXAA. Image-quality-wise, some people hate TXAA with a passion because of the blurring. I'm fine with TXAA, though what I don't like is its performance hit, haha.

So what can you guys tell me about AMD drivers, AMD software, just the whole AMD graphics experience in general. Not from a performance standpoint, but from the standpoint of features, compatibility, etc?

Those are questions for people who own both AMD and Nvidia GPUs right now.
 


And stuff like this makes it worse for AMD:

http://www.phoronix.com/scan.php?page=news_item&px=Linux-Shadow-of-Mordor

 
Thanks guys!

I think I might stick with NVidia this round. If memory ever does become a problem, I think the card will easily last me until Pascal or later.

Since I can't buy anything until I start my job, feel free to continue the discussion.
 
As for power efficiency, the 390 is no match for the 970, but it has Frame Rate Target Control, which can net great savings in power consumption. http://www.pcworld.com/article/2942163/tested-amds-frame-rate-target-control-delivers-real-benefits-for-radeon-gamers.html

You can *probably* achieve the same results with RivaTuner's framerate limiter, but it's a thing to consider when talking about cards like the 390s or Furys.
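If it helps to picture what a framerate cap is actually doing, here's a minimal conceptual sketch (this is not FRTC's or RivaTuner's actual implementation, just the general idea): finish the frame, then sleep away the rest of the frame budget so the GPU isn't burning power rendering frames the monitor will never show.

```cpp
// Conceptual sketch only -- not how FRTC or RivaTuner are implemented.
// The point is that capping FPS leaves the GPU idle for part of each frame,
// which is where the power and heat savings come from.
#include <chrono>
#include <thread>

void capped_frame_loop(double target_fps, const bool& running)
{
    using clock = std::chrono::steady_clock;
    const std::chrono::duration<double> frame_budget(1.0 / target_fps);

    while (running) {
        const auto frame_start = clock::now();

        // render_frame();  // hypothetical placeholder for the game's actual work

        const auto elapsed = clock::now() - frame_start;
        if (elapsed < frame_budget) {
            // Sleep out the remainder of the frame budget instead of rendering ahead.
            std::this_thread::sleep_for(frame_budget - elapsed);
        }
    }
}
```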

AMD has VSR in response to DSR, and their drivers are good but spread further apart (though they have been releasing them faster lately because of Windows 10). You can apply MLAA in games via the control panel; it's less blurry, and the performance hit is about the same as FXAA.

The 390 has a potentially bigger boost in DirectX 12, and of course the bigger framebuffer can come in handy, though not very often. For the same price I'd pick a 390, but it's quite overpriced compared to the 290 (+100€), except for some models (which are +50€).
 
OK, so I was looking around. Apparently the AMD drivers DO have game profiles now. I would say "Thank God", but now this is starting to create a debate all over again.

I also looked up some game memory usage. Apparently many games like Shadow of Mordor and Crysis 3 use 3.5GB fairly quickly at 1080.

Then there is future compatibility. With DirectX 12, we will hopefully be seeing an epic increase in draw calls. However, this also means that VRAM usage will climb faster than the GPU performance needed to drive it, which weakens the argument that "by the time you are using more than 3.5GB, the GPU will be too slow for it to matter."

So those are a few wins for the 390.

When looking up DirectX feature levels, I find that the 970 supports feature level 12_1 compared to the 390's 12_0. It also supports Rasterizer Ordered Views and a higher tiled resources tier, but fewer UAV formats. I think Nvidia wins there in terms of overall features, once you include DXGI, HBAO+, TXAA (no, I don't mind the blurring), and PhysX.
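For anyone who wants to poke at this themselves rather than trust spec sheets, here's a rough sketch (untested, assuming a Windows 10 SDK environment and linking against d3d12.lib) of how an application asks the driver for its maximum D3D12 feature level and for the 12_1 extras like rasterizer ordered views:

```cpp
// Rough sketch: query the max D3D12 feature level and ROV support.
// Assumes Windows 10 SDK, link with d3d12.lib; error handling mostly omitted.
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<ID3D12Device> device;
    // Create a device on the default adapter at the lowest level we accept...
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
        return 1;

    // ...then ask which of these levels the hardware actually supports.
    const D3D_FEATURE_LEVEL levels[] = {
        D3D_FEATURE_LEVEL_12_1, D3D_FEATURE_LEVEL_12_0,
        D3D_FEATURE_LEVEL_11_1, D3D_FEATURE_LEVEL_11_0
    };
    D3D12_FEATURE_DATA_FEATURE_LEVELS fl = {};
    fl.NumFeatureLevels = static_cast<UINT>(sizeof(levels) / sizeof(levels[0]));
    fl.pFeatureLevelsRequested = levels;
    device->CheckFeatureSupport(D3D12_FEATURE_FEATURE_LEVELS, &fl, sizeof(fl));
    std::printf("Max feature level: 0x%X\n", fl.MaxSupportedFeatureLevel);

    // The 12_1 extras (ROVs, conservative rasterization) show up here.
    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS, &opts, sizeof(opts));
    std::printf("ROVs supported: %d, conservative raster tier: %d\n",
                opts.ROVsSupported, opts.ConservativeRasterizationTier);
    return 0;
}
```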


Wow. What a nightmare of a decision. I wish I could just buy the 390, try it out for a few days, and if I don't like it, go back and swap it out for a 970. Doubt that is an option, but MAN would it make this decision easier.
 


Maxwell cards are great overclockers, while the Radeons are rebrands (but still decent). The only bad thing I see with GeForce is the comparative lack of VRAM. But if the 980 and 970 come with 8GB at some point, there's no reason to choose Radeon over the GTXs.
 
Buy a good 390 (Sapphire, for example) and don't worry about VRAM for years; it's also faster than the 970 out of the box. You can OC a 970 more than a 390, but the latter is already ahead, so any OC will push it further into 390X/980 territory.

Use frame rate target control to keep your temperatures and power consumption in check.
 


If there really were going to be 8GB variants of the 980/970, we would have seen them long ago. I think board partners would be interested in that, but Nvidia themselves probably prevent it from happening. Just look at how long it took for the 4GB 960 to appear. Nvidia didn't need to wait for the 390X/390 to offer 8GB variants of the 980/970, because the 290s were already offering that option before this.
 
"if there are really 8GB variant of 980/970 we will see them long ago"

Yeah, I remember the rumors. Personally, I don't think Nvidia will be doing 8GB at this price range any time soon. I'm pretty sure their next generation will hang out around the 6GB mark. Just throwing names out here, but the GTX 1080/Ti would probably be an 8GB card, with the 1070 line being 6GB. Or perhaps they will scale back the 1070 like they did the 970, but this time they will have learned their lesson and not included any RAM that isn't full speed, resulting in 5.25GB or 7GB. I would think those would be fine.

DECISION UPDATE:

It seems like every night, I flip-flop between the 390 and 970.

Tonight, it is the 970. Despite the lower stock speeds, I am pretty set on overclocking whichever card I get. An overclocked 970 runs basically alongside the 980, which makes performance within normal memory ranges not too much of a problem.

One of my all-time favorite lighting effects (besides ambient occlusion) is shadows with real, distance-based penumbra, which is a feature generally exclusive to NVIDIA cards (With the exception of Crysis 2). This affects games I am certain I will be playing, such as Far Cry 4 and Assassin's Creed: Unity. Even on the 660, where performance suffered, I never lowered my shadows below PCSS, because by now, I feel it should be standard.

I still think NVIDIA isn't being a good sport by making all these proprietary features. AMD gets my kudos for creating new open standards.

Also, since I don't mind the slight blurring of TXAA, especially to reduce texture flicker, games like The Witcher 3 and Shadow of Mordor will not get a huge benefit from Ultra textures, especially at 1080p.

One reason I was interested in more memory was the ability to play games with 4x DSR (IMO, the only DSR factor that actually improves the image). However, looking at the 4K performance of either of these cards in new games, that seems nonsensical on either card. For older games it makes sense, but with older games I also won't be using the extra memory, which negates the advantage.

The big game I hope to prepare for is Andromeda, which doesn't come out for what, another year or so? I'll probably sell my current card and get a new Pascal card by then, thus negating the need to be overly future-proof.

The main thing remaining is FreeSync, which I find superior to GSync due to the open standard built into the new DisplayPort spec. But I don't plan on shelling out the extra cash for new monitors all that soon, anyway.

So that's my conclusion for tonight. My friend is going to buy the card and I am going to pay him back. Since he gets paid tomorrow, we'll just have to see what my conclusion is tomorrow night.

Thanks for the feedback, guys!
 
Adaptive Sync, while open, is currently only used by AMD. So far I haven't heard anything from Intel's side, even though Adaptive Sync might benefit them the most. So right now, using Adaptive Sync means you need an AMD card to take advantage of FreeSync.
 
Yeah. I mean, Nvidia has obvious reasons not to adopt FreeSync: they put so much research into G-Sync that it would become a wasted investment, since manufacturers would flock to FreeSync the moment it became universal.

Intel I'm not so sure about. They seem quick to adopt new standards, so maybe we will be seeing something come up shortly.

Adaptive Sync is one of the technologies I am most looking forward to. However, given the competition over a new standard, it might be worth waiting to see how they mature over time. Personally, I think the VESA standard will become a standard monitor feature that we adopt and never look back. Kinda like how nobody buys a 4:3 monitor anymore, or one that is less than 1080. I would LOVE to see it on an OLED display.

I also think we are quickly reaching a point of diminishing returns as far as resolution goes. We might top out at 4K or 8K, and then refresh rates might become the next thing. But I think at some point in the near future, resolution, antialiasing, and refresh rates will be things we don't put much consideration into very often.
 
It's not only the investment; Nvidia believes that having tight control over their product gives the best experience to their customers. With FreeSync, AMD is hoping to control only the driver side and leave the monitor side to the monitor manufacturers, which means your experience might differ depending on which brand of monitor you use. Also, I think Nvidia is keeping the FPGA in G-Sync for good reason, even if that means the cost will increase.

From a financial perspective, the way Nvidia does things is probably more costly, hence AMD's more open approach to some of the things they do. Looking at FreeSync alone, AMD only needs to develop the driver, while the ASIC needed for FreeSync is developed by the monitor scaler makers instead. That's actually really clever on AMD's part, because they didn't need to spend money developing their own module like Nvidia did. But the caveat is that it's much harder to control the end product, hence we are seeing FreeSync monitors with different VRR ranges.
 
OK, so I went ahead and got the 970 STRIX from ASUS.

It has a backplate, which is what I was looking for. But the whole point of finding a backplate was so that I could cool the RAM. None of the RAM on this board is attached to any form of heat sinking. WTF?!

Will this be a problem, or does the newer memory run cooler or something? They seem pretty insistent on not cooling the ram on a lot of cards.
 
To be honest, I've seen conflicting ideas when it comes to backplates:

1) The backplate helps with thermal dissipation.
2) It's there merely to protect the back of the PCB and to strengthen the PCB against the heavy cooler attached to the card. If anything, the backplate hinders thermal dissipation.

Personally, I'd go with the second one. The reference 980 has a backplate, but with the Titan X, Nvidia decided to ditch the backplate because it would hinder the cooling of the VRAM chips located on the back of the PCB.
 
OMG HOLY JEZUS WTF JUST HAPPENED?!

I may write a separate post about this, but after rebuilding my desktop and putting in the new card, my system has slowed to an insane crawl. It takes 5-10 minutes to start up, and my 15Mbps connection is running at 2kbps.

I tried running Unreal Tournament 3 to test the GPU. As soon as it has to render anything, the game crashes with "Forced to terminate in an unusual way."

I tried swapping back to the old GPU, no change. I tried only having my mouse, keyboard, and boot drive hooked up. Still slow.

I tried swapping my boot drive with a different drive running Windows 10. Same problems.

Checked the drives with CrystalDiskMark. They are running full-speed. Windows still seems fairly snappy. Chrome locks up completely.

The whole computer just became a giant ball of glitch.
 
Well, I found the culprit: my wifi card. It was apparently glitching out and sending tons of IRQs, causing everything else to halt.

Everything is running fantastic now. (Using my Galaxy S6 as a wifi adapter).

I can't even TOUCH that memory space. I tested Crysis 3 maxed out at 1080p with 4x TXAA, and was only seeing about 1750MB of usage, so only half of the full-speed memory. So I think choosing the Nvidia features over the extra memory was totally worth it in that case.
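Side note: overlays like Afterburner or GPU-Z are the easy way to read this number, but if anyone wants to sanity-check it programmatically, here's a rough sketch (untested, needs Windows 10 / DXGI 1.4 and dxgi.lib) using IDXGIAdapter3::QueryVideoMemoryInfo. Note it reports the calling process's budget and usage, not the system-wide total.

```cpp
// Rough sketch: report dedicated VRAM budget/usage for this process.
// Needs Windows 10 (DXGI 1.4); link with dxgi.lib. Error handling trimmed.
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return 1;

    ComPtr<IDXGIAdapter1> adapter;
    factory->EnumAdapters1(0, &adapter);   // adapter 0 = primary GPU
    ComPtr<IDXGIAdapter3> adapter3;
    adapter.As(&adapter3);                 // DXGI 1.4 interface

    DXGI_QUERY_VIDEO_MEMORY_INFO mem = {};
    adapter3->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &mem);

    // "Local" = dedicated VRAM; CurrentUsage is what this process has committed.
    std::printf("VRAM in use by this process: %llu MB (budget %llu MB)\n",
                mem.CurrentUsage / (1024ull * 1024ull),
                mem.Budget / (1024ull * 1024ull));
    return 0;
}
```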

I am rather disappointed by the lack of RAM cooling. I'll have to overclock and see how it does as far as build is concerned. I also notice a bit of coil whine at 200-300FPS, but no biggie.
 
OK, I'm back.

I installed the overclocking software from the ASUS site, the "STRIX" overclocking software. It looks different from the standard GPU Tweak. Slightly cheaper-looking.

I turned the max TDP up to 120% and proceeded to overclock the GPU first. I ended up maxing the software out at 1400MHz; Heaven v4 reads 1600MHz. The card still ran perfectly fine. So I started on the memory. I began with 20MHz increments but quickly jumped to 100MHz at a time, and maxed out the software again with a memory clock of 8000MHz. I didn't touch any voltages or anything, but I've maxed out the clocks allowed by the software. Interesting.
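In case it's useful to anyone, the routine I followed by hand is basically the loop below. This is purely illustrative; the two helper functions are hypothetical stand-ins for "move the slider in GPU Tweak/Afterburner" and "run Heaven and watch for artifacts or crashes".

```cpp
// Purely illustrative: the "raise clock, stress test, repeat" routine done by
// hand above, expressed as a loop. The two helpers are hypothetical stand-ins
// for a vendor OC tool slider and a Heaven/Valley stability pass.
#include <cstdio>

bool apply_memory_clock_mhz(int mhz)
{
    // Hypothetical: in reality this is a slider in the vendor's OC tool.
    std::printf("Applying memory clock: %d MHz\n", mhz);
    return true;
}

bool passes_stress_test()
{
    // Hypothetical: in reality, run a benchmark loop and watch for artifacts/crashes.
    return true;
}

int find_stable_memory_clock(int start_mhz, int limit_mhz, int step_mhz)
{
    int last_stable = start_mhz;
    for (int mhz = start_mhz + step_mhz; mhz <= limit_mhz; mhz += step_mhz) {
        if (!apply_memory_clock_mhz(mhz) || !passes_stress_test())
            break;                        // back off to the last clock that held up
        last_stable = mhz;
    }
    apply_memory_clock_mhz(last_stable);  // settle on the highest stable clock
    return last_stable;
}

int main()
{
    // e.g. start at 7000 MHz effective and step up 100 MHz at a time toward 8000 MHz
    find_stable_memory_clock(7000, 8000, 100);
    return 0;
}
```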

So do you happen to know of any overclocking software that you prefer? I might just create a new thread for that....
 
Good thing it wasn't anything serious. While Crysis 3 is graphically impressive, I never heard of the game eating that much VRAM. That's why I think a properly optimized game will not use excessive VRAM (it probably depends on the game engine as well). But if you have the chance, do try games like ACU, Dying Light, and Shadow of Mordor.

For GPU overclocking, I have been using MSI Afterburner for years. But serious GPU overclocking is something I haven't done since my 460.
 
Wow, I just tried ACU, and I see what you mean.

I can max it out with TXAA, and the card just inhales the game. But yeah, 3460MB of usage. That screams bad optimization.

I'm loving this card, though. And I found out I can extend the range of the STRIX overclocking software. Good boost, temps, and performance.
 
Some people blame it on the new consoles, and I do think that might be the case, lol 😛

The new consoles have a massive jump in available RAM compared to the previous gen. If I'm not mistaken, the PS4 and Xbox One each have 8GB, with roughly 5GB of that available to games, compared to 256MB-512MB on the previous gen. So some devs probably no longer have to think too much about conserving resources until games become much more graphically complex later on.

Another game with good optimization is probably The Witcher 3 (and the devs are still releasing optimization patches right now). In TPU's test, many were surprised that the game barely uses 2GB of VRAM at 4K resolution!