AMD Radeon R9 390X, R9 380 And R7 370 Tested

Why the hell would they put only 2GB of RAM in the 380s? That's just ridiculous. Hell, my HD 7950 has 3GB.

There is a 4GB card available as well for around $15-20 more. I was personally looking for a review of the MSI R9 380 Gaming 4GB card, but alas, everyone seems to be reviewing the 2GB version for some reason. Who in their right mind would choose the 2GB version over the 4GB one? Most people looking for a review of this card will be looking for the 4GB version.
 


It's really weird that they used an overclocked card versus stock clocks, especially considering that they didn't point out on their graphs that the 980 is overclocked.
 


Yeah, I don't see where it says that the 980s are overclocked, but they clearly are in this comparison, as they're very close to the 980 Ti. Or maybe the AMD cards they got are very bad examples; almost all other reviews find that the 390X is within a few fps of the 980 at higher resolutions. Or it's just different drivers; I've seen two different AMD drivers across the reviews.

In the future, I would like to see the clock speeds of every card tested, like other reviews do, to clear up the confusion.
 
The power usage is the real deal-breaker for me. Any of AMD's mid-range or high-end cards would require me to get a new PSU, adding at least $80 to the cost compared to a GTX 970 or 980.
 

Tom's used an overclocked 980. That's why their results are different.

Tom's also used overclocked 200-series parts that got renamed into the 300 series, so the comparison is valid.

Your whining is getting old.
 


It was my first post on this forum. Also, I just want to see well-made benchmarks to know the true performance of the reviewed cards. When some cards are overclocked and some aren't, it's hard to tell what the real difference between them is. I don't understand why you are trying to attack me.
 


Pretty much, but benchmarks show that it performs better than just an overclocked 290X, so there are some upgrades. Nonetheless, if someone has a 290X, there is no reason to upgrade to a 390X.
 


No upgrade. The problem with the OC'd 290X is the slower RAM; I was not able to get it stable at the same clocks. The 390X has more and faster RAM. But this difference is really small.

 
You know, if AMD had released Fiji under the 390/390X nomenclature and dropped the 290/290X into the 380/380X slot, nobody would be making a peep. The only difference between that example and what AMD actually did is that they released Fiji under its own separate product line. A brand-new line of GPUs still exists; they are just not named what everyone wants them to be named.

I agree that AMD should have named them in the 3XX series, because it would have been much less confusing and allowed more powerful products to move down the stack, but they didn't, and it is what it is. We shouldn't sit here and act like AMD didn't bring us anything new in this launch, because they did. Remember when AMD went from the 5XXX to the 6XXX series: the 5870 was king of the heap, and the next gen brought the 69XX SKUs.
 


Thank you! You made my day. I wanted to buy a GTX 980, but the cheapest (non-reference) 980 card is about $70 more expensive than the MSI 390X, and I simply can't afford it right now. Ah well, since it's very cold in the winter where I'm from, I can at least raise my room temperature by a few degrees with the 390X. =)
 


Both the 390X and the 980 used by Tom's are the MSI models with the same exceptionally good air-cooling technology. The MSI Gaming App allows three factory-set modes for their cards: OC, Gaming, and Silent. Tom's used the OC mode for both of these cards. That is the best and fairest comparison method possible. If you read the verbiage next to the charts, the review clearly states both cards were at the overclocked setting.

Further, Tom's identifies the manufacturer and model of the other cards used in this review, so you can look up their specs and cooling solutions. The other cards came from various manufacturers with different cooling solutions, and Tom's likewise used the highest factory-available overclock settings on them.

In contrast, the Guru3D review did not include the manufacturers and models of the non-390X cards, so the reader has no idea what cards or cooling solutions were used, or whether they were the poorly cooled, lower-clocked reference cards or aftermarket versions with higher clocks and better cooling.
 


Thanks for divulging that. It gives us a real idea of where the current state of the technology is.

 
AMD, meh.
Almost three years of big lies about these cards, and now look at the result: the 390X can't even compete with a 970 that draws much less power! But a few months ago, AMD said the 380 was much faster than the 980, and that the 390/390X were supercomputers! LOL
 


I'd tend to believe this is driver-related, considering AMD's last WHQL drivers were Dec 8, 2014. They are just finally getting around to using what was already there but not previously optimized for. HairWorks had to be out for a while before someone noticed how to optimize for it, right? I remember when TressFX hit and NV didn't show too well, but by the end, Tomb Raider etc. ran just as well on NV as on AMD. It just took NV a while to get it down. We'll probably see the same here, though AMD may run with a slightly lower tessellation factor if needed (16x or 32x vs. 64x for Maxwell, or something).

Not sure what tessellation level they used in the HardOCP review, but only 64x really taxes the crap out of everything but Maxwell, and that level is just NV using ALL that they have in Maxwell, pushing it to the limit. I would expect both sides to do this type of thing to highlight the few things their top cards do that others can't (that is the point of the R&D spent on this type of "special sauce"). It's clear the game company was right; AMD just didn't do their driver job. The same has been said by the Project CARS guys, and I think AMD is already addressing that game too. This is what you get when a company loses $6B in 12 years ($7B+ in 15 years). Hopefully the Fury cards and Zen give them a little pricing power and they CHARGE accordingly to finally make some profit. If they are too cheap, they'll make nothing and miss what may be their FINAL chance to capitalize on a great hardware product (GPU- and CPU-wise). I really hope they charge the right price to maximize profits this time, and not commit price-war suicide yet again.

Don't get me wrong, I don't like high prices, but it's clear AMD is going broke due to not pricing correctly to make money. I think half the people in these forums don't understand that AMD has been losing money hand over fist for more than a decade!
 


I'd wait a week or two... Fury may be decent, though the size of the die makes me think we'll end up about tied at the top end. For me, it comes down more to watts/heat these days in AZ (and the price of the card over its life using those watts), plus whatever special features I like (currently G-Sync over FreeSync, but I'm hoping they fix this so either way is equal). But more and more I'm thinking Q2 next year and just putting this purchase off longer. I don't think we need HBM now, but I think Pascal (and AMD's competition for it) might actually be able to use it. I don't think Fury needs it much, as we're not really bandwidth-constrained until we force a situation you can't play at anyway (under 30 fps minimums). The next group of cards will deliver today's performance while dropping the heat output, so for me it's probably a major delay in my GPU purchase, just because the next ones will blow these away, or at least maybe give better 4K with less heat and fewer watts.

I mean, 2016 is 16/14nm, WITH FinFET+ and probably HBM2 (though that's not too important; even HBM1 would be OK). Way too much improvement in power/performance for me to really look at anything today from either side. NV almost got me, but I'll wait for more power and G-Sync rev 2/FreeSync rev 2, I think, to see if things get even better and cheaper. Worst case, I get far more power in a smaller watt envelope. How much of what I said above you actually get will probably depend on whether you buy in H1 or H2 2016. Either is far better than today for my issues (gaming for a few hours without being driven out of an AZ bedroom, and electricity costs rising yearly)... LOL. That said, Fury should at least give most people an option to go either way at the top, when before Fury it was pretty much NV. I wonder how much HBM will cost AMD, or whether the savings on card size etc. make the memory cost a moot issue. We'll see in a quarterly report soon, I guess; not the upcoming one, but the one after, when they've had three months of selling the new stuff.
 
It's funny, because Guru3D tested the MSI 390X and the GTX 980, and at 1440p and 4K the two cards are almost identical across a whole lot more games. Why are these tests so different?

Guru3D tested an aftermarket, factory-overclocked MSI 390X against stock-speed, reference-design Nvidia cards. Tom's was smart enough to compare AMD custom boards to Nvidia custom boards.

I thought this was obvious.
 

HBM2 is supposed to deliver about twice as much bandwidth per stack as HBM1, which is going to be critical for keeping up with the amount of compute power that will fit on 14/16nm GPUs without having to use the full-blown quad-stack configuration for lower-end GPUs. With 3-4X as much compute per square centimeter, GDDR5 may no longer be able to deliver sufficient bandwidth to keep up in a cost-effective manner.
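To put rough numbers on that (using the published JEDEC figures: a 1024-bit interface per stack, at 1 Gb/s per pin for HBM1 and 2 Gb/s per pin for HBM2):

$$
\mathrm{HBM1:}\ \frac{1024 \times 1\,\mathrm{Gb/s}}{8} = 128\ \mathrm{GB/s\ per\ stack}
\qquad
\mathrm{HBM2:}\ \frac{1024 \times 2\,\mathrm{Gb/s}}{8} = 256\ \mathrm{GB/s\ per\ stack}
$$

So a quad-stack HBM1 card like the Fury X tops out around 512 GB/s, while HBM2 would hit the same figure with only two stacks, which is what makes the smaller, cheaper configurations viable on lower-end GPUs.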
 


Dude, a 980 Ti or Titan X with water cooling will cost a lot more, so the comparison of a stock 980 Ti against a stock Fury X is fair. It's what you get for the *same money*, and for $649 you're getting more performance with the Fury X. Unless nVidia releases a water-cooled 980 Ti with a suitable overclock for the same money, it's the comparison that matters.

Also, AMD have stated the water cooling and power-delivery subsystems on the Fury X can handle *a lot more* than the card uses at stock settings, so while yes, you can water-cool and overclock the top Maxwell GPU, you can do the same with Fiji... until we see overclocked-vs-overclocked benchmarks, you simply cannot state that *Maxwell will be faster*, as I don't think it's a foregone conclusion (not with 500W of cooling and 375W max power delivery on a 275W card).
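(For reference, that 375 W ceiling is just the PCIe power-delivery limits added up across the Fury X's two 8-pin connectors and the slot:

$$
75\,\mathrm{W}_{\text{slot}} + 2 \times 150\,\mathrm{W}_{\text{8-pin}} = 375\,\mathrm{W}
$$

versus the 275 W the card draws at stock, so there is roughly 100 W of headroom before you even exceed spec.)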

I'm not saying the Fury X is outright better, but it certainly looks like an equal to nVidia's best. The hype around the HBM is a bit misleading: yes, the Fury X has a bit more bandwidth, but the Maxwell cards aren't exactly memory-starved, given the very high-performance GDDR5 being used and the fact that the Fury X is on HBM1. No, the performance we're looking at is down to the Fury X having 4096 (!) shaders. That is a lot of shader power no matter how you slice it.

Also, for the record, I'm not a 'fanboy' of either side. I own a mixture of kit, including AMD, Intel, and nVidia. I just think it's worth giving AMD and the Fury a fair chance rather than deciding they've failed from the start (and from what I've seen, I don't think they have). Decent competition at the high end has *got* to improve prices for everyone, irrespective of which brand you prefer. One thing all this has highlighted for me, though, is that AMD's marketing team deserves to be shot... they should have launched the Fury cards first (providing samples for reviewers), and they should have reconsidered how they went about re-branding the lower cards. Typical AMD: take a winning product and then totally screw up how you launch it.
 
In some technical tests (not to be used to directly compare GPUs), the AMD GPUs were pushing a lot more draw calls than the Nvidia cards... so there might be some gains for AMD GPUs when DX12 benchmarks arrive, but I really doubt it. Till then, we have to wait for Fury benchmarks.

 


Yeah, AMD are the only company to fully support the new scheduling capabilities of DX12, so they stand to gain the most on draw calls. That said, this will only be an advantage where draw calls are the limiting factor, so something like Ashes of the Singularity from Oxide may well favour AMD. Most other games (e.g., FPS titles) manage within the current DX11 limitations, and while more draw calls will be a benefit (e.g., Star Citizen is hitting limits on draw calls), they probably won't push them to the levels that Ashes will, so AMD's advantage won't really impact the results (as NVidia still get a massive gain in draw-call performance on DX12).

It will certainly be interesting to see what happens, though. It's kind of the reverse of the situation with NVidia having a significant advantage in tessellation performance (probably less so against the Tonga/Fiji-based cards); however, that doesn't often come up in games, as only a few really push it hard enough to expose it as a bottleneck (HairWorks in The Witcher 3 being a notable exception).
 