Question: RX 5700 XT or 2070S?

kingkobe21

Commendable
Dec 19, 2017
Will the AMD drivers eventually improve, or should I just buy a 2070 Super? Will the extra dollars I pay for the 2070S be worth it? Can it perform better over time than the 5700 XT? I also want to know which is better paired with my Ryzen 5 3600.
 

popatim

Titan
Moderator
Either card will work, and both companies keep putting out better drivers. The 5700 XT, being newer, is earlier in its driver-development cycle and has more bugs still to be addressed, but AMD is working on them!

Me, I've been Nvidia only for 10+ years now.
 

kingkobe21

I’ve just heard that the AMD card still has a lot of major problems, and it’s been out for months now. So should I still buy it just because it’s cheaper, or go straight for the 2070S? From what I see, AMD drivers are very buggy and unpredictable for a lot of users, with many not even able to play games properly. Since I also want this build to last a long time, which do you think would give me more longevity?
 

popatim

A few months old is still new, i.e. bleeding edge, and yes, it can hurt; hence the nickname.

I wouldn't get one either, and many people have returned theirs, which pushed AMD to put more development effort into the drivers so they can actually sell them. But then again, I'm biased right from the start. LoL

Nvidia used to have the larger driver department, but AMD has supposedly expanded theirs too so they can put out more stable drivers. You can't tell that from looking at these forums, though. <grin>
 
Mar 22, 2020
So I have used the exact same model of both the 5700 XT and the 2070 Super (Gigabyte Windforce, highly recommended). I didn't have any driver problems with the 5700 XT, and I would say it's better value than the 2070S, as it performs on average only slightly worse for $100 less. However, ray tracing, DLSS, or the NVENC encoder for streaming might make the price difference worth it for you. When it comes to longevity, I actually returned my 5700 XT for the 2070 Super, because I think the features in DX12 Ultimate will make the 2070S age better. However, as the 5700 XT is more powerful in some respects (namely teraflops), it could potentially age better as the drivers mature. Both are excellent cards and will age quite well, but I personally hedged my bets on the 2070 Super for DX12 Ultimate and DLSS.

I use a Ryzen 5 2600; the majority of the driver issues seem to have been solved with the 20.2.2 drivers, so I think you probably won't have to worry about buggy drivers.
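To put that value comparison in concrete terms, here is a rough perf-per-dollar sketch. The relative-performance figures and prices below are illustrative assumptions for the sake of the arithmetic, not measured benchmark results:

```python
# Rough performance-per-dollar comparison of the two cards.
# rel_perf is an assumed relative-performance score with the
# 2070 Super normalized to 100; prices are approximate MSRPs.
cards = {
    "RX 5700 XT": {"price_usd": 400, "rel_perf": 93},
    "RTX 2070 Super": {"price_usd": 500, "rel_perf": 100},
}

for name, c in cards.items():
    perf_per_dollar = c["rel_perf"] / c["price_usd"]
    print(f"{name}: {perf_per_dollar:.4f} perf points per dollar")
```

Under these assumed numbers the 5700 XT comes out ahead on perf per dollar, which matches the "slightly worse for $100 less" framing above.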
 
Will the AMD drivers eventually improve, or should I just buy a 2070 Super? Will the extra dollars I pay for the 2070S be worth it? Can it perform better over time than the 5700 XT? I also want to know which is better paired with my Ryzen 5 3600.
It is possible, if new games start taking advantage of the new performance-enhancing features of DX12 Ultimate (VRS, mesh shading, and sampler feedback).
 

kingkobe21

Do you think I should just wait for the new gen, or get one of these cards and use it until it isn’t good anymore? The most I’ll be doing is 1440p at best. And would the new-gen prices be higher than this price range?
 

King_V

Distinguished
My son's newly acquired RX 5700 (non-XT) has been working flawlessly through 5 straight days of gaming. I used DDU to remove the previous drivers and installed the latest WHQL Adrenalin drivers (20.2.2, released March 5). Zero problems.

Of the cards you asked about, the 2070 Super does have a bit of a performance edge over the 5700 XT, but not nearly enough to justify another $100.

That said:
Do you think I should just wait for the new gen, or get one of these cards and use it until it isn’t good anymore? The most I’ll be doing is 1440p at best. And would the new-gen prices be higher than this price range?
Given that you've considered this, what is your current card? If it's performing adequately for your needs, then I would agree with this, and recommend waiting to see what comes out, what prices drop, etc.
 
Will the AMD drivers eventually improve, or should I just buy a 2070 Super? Will the extra dollars I pay for the 2070S be worth it? Can it perform better over time than the 5700 XT? I also want to know which is better paired with my Ryzen 5 3600.
Depends on what you want. The 5700 XT is the better value and will likely age better. AMD is also offering a deal where you get 2 games and 3 months of Game Pass for PC.
 

kingkobe21

I’m supposed to be building a PC after all this virus stuff is over; right now I have a PC, but it can barely run much. Would the prices be worth the wait, though? I need something within my budget of around $400-500 for a GPU.
 

kingkobe21

Let's say the new stuff that comes out is too expensive. So what? The previous-generation stuff, which would be more than enough, would be even cheaper than it is today.
I feel like it’s either take the 5700 XT risk or wait for the new gen, because the 2070S is priced way too high and I won’t really be using many of its features anyway.
 
Mar 22, 2020
Do you think I should just wait for the new gen, or get one of these cards and use it until it isn’t good anymore? The most I’ll be doing is 1440p at best. And would the new-gen prices be higher than this price range?
To be honest, I feel like the next gen will be decently cheaper, but not by too much; mostly I think Nvidia will try to match AMD on price, so something like the 3070 will be $400 rather than $500 like the 2070 Super is. I don't think it's a bad idea to wait for the next gen, but keep in mind that the coronavirus will most likely continue to cause problems into 2021, so you might find yourself waiting quite some time. I might return my 2070 Super and save the money for the next gen, but that depends on whether AMD or Nvidia release any more details in April.
 

kingkobe21

Well, if this coronavirus stuff isn’t over by then, I won’t be able to buy anything, because we’re in a lockdown/community quarantine atm. So I guess waiting is my best bet for the most longevity.
 
Depends on what you want. The 5700 XT is the better value and will likely age better. AMD is also offering a deal where you get 2 games and 3 months of Game Pass for PC.
Better value? Yes. But age better? That, I think, is still open for debate. Right now RDNA lacks support for DX12 Ultimate. For the most part, people probably didn't care much about ray tracing even on Turing-based GPUs, but the other DX12U features, such as VRS, mesh shading, and sampler feedback, are performance-enhancing features. Once games are really built to take advantage of those features, Nvidia's Turing-based GPUs might start pulling ahead of AMD RDNA in performance.
 
Depends on whether the features are utilised and whether Nvidia keeps up with drivers. Plus, with the consoles being even more like PCs now, and based on RDNA 2, ports should favour the red team.

Just for reference I have a 2060S because I like PhysX in the Arkham games :)
 
That's why I said it's open for debate. And driver-wise, Nvidia definitely does better than AMD, even on older cards. Most people point to AMD's "fine wine", but do they understand what really happened? That fine wine happened because of AMD's dominance in the 8th-gen consoles, not necessarily because AMD optimized their older GPUs better than Nvidia did. And this console advantage is what leads us to the DX12 Ultimate discussion. Nvidia knows what kind of advantage (and disadvantage) AMD will have as the primary hardware supplier for the XSX and PS5. DX12 Ultimate is, in a way, Nvidia's attempt to directly counter AMD's advantage of having the major consoles under their wing. Seeing how all four new features in DX12U are available on Nvidia Turing and none on AMD RDNA suggests that Nvidia was probably successful in influencing MS to push the features they wanted into DirectX, while at the same time forcing AMD to include those features in RDNA 2. With the 8th-gen consoles, AMD dominated the hardware, and from there they started pushing for features that benefited only their hardware. This is the reason Nvidia's older GPUs got left behind in performance versus AMD GCN in newer titles. With DX12U, Nvidia is trying to level the playing field so they'll have feature parity with AMD before the next-gen consoles arrive. To be honest, I wouldn't even be surprised if Nvidia directly asked MS what kind of secret sauce AMD is cooking up with RDNA 2, so that AMD won't have a definite advantage when games start being developed exclusively for the 9th-gen consoles.
 
Nvidia gets left behind because they forget about older cards; they put less work into the drivers and the cards fall off. Plus, AMD cards have more raw throughput than Nvidia cards.

Didn’t Nvidia do really badly with DX12 when it first came out? And I’m pretty sure they’re still getting spanked in Vulkan.

By the time pure next-gen games are out you’ll be wanting to upgrade anyway; it took what, 3 years last time? So both of these cards probably won’t be 1440p-ultra cards by then, and the consoles are on the same general architecture that RDNA is on, so that should still play AMD’s way.
 
Both AMD and Nvidia prioritize their latest GPUs first, but the one that supports its GPUs longer has most often been Nvidia. AMD dropped support for their HD 4000 series after a bit over 3 years, while Nvidia GPUs of the same generation were supported for 8 years (plus 2 years of legacy support). AMD's HD 5000 and 6000 series were also dropped much earlier than Nvidia's Fermi. And remember what happened with Forza Horizon 3? When the game first launched, performance between the 1060 and the RX 480 was about equal. Among older-generation GPUs, some people expected an AMD card like the R9 390 to smack Maxwell-based GPUs, since the game is a pure DX12 title. But what ended up happening is that the R9 390 landed significantly behind Nvidia's GTX 970, because of the lack of an optimized driver from AMD.

As for Vulkan, AMD does not necessarily dominate Nvidia, because Vulkan allows the use of extensions. That lets the API be optimized directly for an architecture even if certain GPU functions are not properly included in the Vulkan base spec. The only problem with Nvidia's older hardware was async compute, and Turing fixed that completely. AMD used to include Doom 2016 in their marketing material when showcasing their GPUs against Nvidia, but after Nvidia launched Turing, AMD stopped including Doom 2016 in their slides, because Turing most often ends up much faster than the AMD equivalent on the id Tech engine.

Yes, AMD definitely gets some legs from having the major consoles under their belt. But that doesn't mean Nvidia did nothing to counter them. The changes in Maxwell, for example, directly addressed the issues Kepler was having versus GCN, GCN being the hardware of the 8th-gen consoles. Nvidia, for their part, do not want another Kepler to happen to them, so they need to take measures before the next-gen consoles come out. Hence they try to involve themselves more closely with MS (and the Khronos Group) to steer the future direction of 3D APIs and neutralize AMD's advantage.

And ultimately, Turing's support for DX12U's performance-enhancing features could give it some "extra life" for those who don't want to upgrade as frequently.
 
The R9 390 beats the GTX 970... and it’s actually usable today as it has 8GB of VRAM vs 3.5...
 
I'm talking specifically about Forza Horizon 3 at launch, and my point is about how both AMD and Nvidia prioritize their latest line of GPUs first, not about which GPU is superior performance-wise. And the GTX 970 can use all of its 4GB of VRAM just fine; there are just some oddities in how the memory is set up, because the GTX 970 is a cut-down GM204. It's not the first time Nvidia used such a weird memory config, but I can definitely say that the way Nvidia did it on the GTX 970 is their best implementation of such a config, because I experienced it first-hand with my older GTX 660 SLI (192-bit, 2GB) setup whenever memory usage exceeded the 1.5GB mark.
 
It effectively has 3.5GB of VRAM; a class action was successful.
Obviously they put more effort into newer cards, but Nvidia products seem to tank after a few generations. Compare the 290X against the GTX 780, from the same era and price point, and it's now a massacre in what used to be an even contest.
 
Look at my sig; I owned a GTX 970 myself. The GTX 970 is in reality 3.5GB + 0.5GB, making the total usable VRAM 4GB. The primary concern was that when memory usage exceeded the 3.5GB mark, performance would drop significantly. This is where the accusation that the GTX 970 is effectively a 3.5GB card comes from: once VRAM usage exceeds 3.5GB and starts filling the last, "slow" portion of the VRAM, performance supposedly drops significantly. We are talking about a significant bandwidth drop, from 196 GB/s down to 28 GB/s, when in reality Nvidia's internal memory workings are not like that. Nvidia did not fight the lawsuit, because their card effectively is 3.5GB + 0.5GB and they were not upfront with that information early on, but they have been clear that the card can use all of its VRAM (4GB), not just 3.5GB like some people speculate. Many new games can easily fill 4GB even at 1080p, and yet I haven't seen this significant performance drop once memory usage exceeds 3.5GB. Many tech journalists and tech YouTubers investigating the issue did not see the performance drop either.

The situation with Kepler is not something you can understand by just looking at the surface. You need to understand why those cards are not as performant as AMD GCN in newer games (which is not an optimization or driver issue), and then why Maxwell was left behind by Pascal in Doom Eternal despite the two being very similar architecturally. If you really understand what has been happening with all of this, you will realize why something similar can happen with AMD RDNA versus Nvidia Turing in the future.
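Those 196 GB/s and 28 GB/s figures follow directly from the bus widths of the card's two memory partitions. A quick sketch of the arithmetic, assuming the GTX 970's 7.0 Gbps effective GDDR5 data rate:

```python
def gddr5_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: per-pin data rate times bus width in bytes."""
    return data_rate_gbps * bus_width_bits / 8

# GTX 970: 7.0 Gbps GDDR5 on a 256-bit bus, split into a 224-bit
# partition (the 3.5GB segment) and a 32-bit partition (the 0.5GB segment).
fast = gddr5_bandwidth_gbs(7.0, 224)
slow = gddr5_bandwidth_gbs(7.0, 32)

print(f"fast segment: {fast:.0f} GB/s")  # 196 GB/s
print(f"slow segment: {slow:.0f} GB/s")  # 28 GB/s
```

So the "slow" segment is roughly 1/7 the bandwidth of the main one, which is the drop being argued about above.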
 
It had 3.5GB, then 0.5GB that was essentially useless, since it tanked performance as soon as you tried to use it. There was no point fighting the suit, because the card was advertised as having 4GB of full-speed VRAM and it didn't. And of course you won't notice much difference versus the slow VRAM: with 2666MHz DDR4 you're at about 21GB/s in single channel, while dual channel gets you over 30GB/s at the base 2133MHz and around 40GB/s at 2666.

Kepler dropped off because it's old; Maxwell is starting to get it now, and in a couple of years Pascal will too. Hell, even look at the RX and Vega cards that have leapfrogged the 1060 and 1070. Hell, the Vega 64 is trading blows with the GTX 1080 now.
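For reference, those DDR4 numbers come from the same kind of arithmetic: transfer rate times 8 bytes per 64-bit channel. A quick check:

```python
def ddr4_bandwidth_gbs(transfer_rate_mts: int, channels: int = 1) -> float:
    """Peak DDR4 bandwidth in GB/s: MT/s times 8 bytes per 64-bit channel."""
    return transfer_rate_mts * 8 * channels / 1000

print(ddr4_bandwidth_gbs(2666))     # ~21.3 GB/s, single channel
print(ddr4_bandwidth_gbs(2133, 2))  # ~34.1 GB/s, dual channel
print(ddr4_bandwidth_gbs(2666, 2))  # ~42.7 GB/s, dual channel
```

Dual-channel DDR4-2666 lands in the same ballpark as the GTX 970's 28 GB/s slow VRAM segment, which is the comparison being made above.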
 
