AMD Radeon R9 390X, R9 380 And R7 370 Tested



The EVGA 980 Ti with a CLC attached has a 14% higher clock than their 980 Ti reference card, while the air-cooled non-reference MSI has an 18.8% and the GIGABYTE GV-N98TG1 GAMING a 20% clock increase over the reference card. It will be interesting to see apples-to-apples Fury X vs. aftermarket-cooled 980 Ti comparisons performed by independent benchmarks next week. The market will sort out the pricing within a relatively short time regardless of MSRP.
 


While I agree GDDR5 will have run its course for the top stuff, I disagree for low-end next year. I'm not sure HBM2 matters much for next gen at all unless it's in something like Tesla (where you might see the 4-10x perf actually come true). I do not expect to see cards running 3-4x faster than the 980 Ti in gaming with Pascal. At most I'd expect possibly 2x faster in some stuff (likely just 4K with massive details pushing RAM issues for others), and I think HBM2 (along with 32GB) would likely be Tesla-based stuff that can absorb the cost of new memory. The numbers quoting faster perf are for very specific operations that won't be done on consumer cards.

You will likely find faster GDDR5 is cheaper for low-end cards since it is so refined at this point, and another shrink might mean half the chips for a 4GB model (this for low end next year with today's 980 Ti mem speeds??). AMD just went 2GB with roughly mid-range, remember (380 2GB, $200). We have no idea at this point how cost works out for HBM1, let alone HBM2. IF the benchmarks are real for Fury, it would seem HBM1 is more than enough for a 550-580mm^2 GPU, and I don't even think HBM1 was needed for Fury (maybe for a dual-chip card, yeah). How much more crap will they be able to cram into 16nm FinFET+ (which the next chips to come are on, TSMC CLN16FF+) on a like-sized die? I'm guessing low-end won't be fast enough to tap out a 980 Ti's perf, so GDDR5 with 4GB would probably be fairly cheap.

Also, since they are doing GP100 first (Tesla and maybe top-end consumer), then GP104 later, I think low-end will just be rebrands of current stuff with possibly faster GDDR5 + maybe higher clocks and fewer chips at some point in the next 12 months. AMD just did their full re-brand, so it's NV's turn next... LOL. They'll want to milk old designs as much as possible since neither side is getting rich moving on (AMD losing money, NV still hasn't hit 2007 profits).

You're saying you think low-end 16nm/14nm will beat Fury X etc.? I doubt this, but I could be wrong, and even then it wouldn't tap out HBM1. Again, I doubt HBM1 was even needed, or Fury would blow away the 980 Ti (like 50% faster or more); it appears it matches OC'd 980 Tis and probably beats stock, which makes sense as AMD wasted space on DP when gamers don't use it (and a lot of pros don't either). Since GP100 is aimed at DP again (because Maxwell wasn't, so they need a new Tesla champ; companies are begging for it), we won't likely see MASSIVE gains in gaming, at least until GP104 knocks out some of the DP stuff maybe and dedicates more to pure gaming. We're both guessing here, but I really think you're overestimating the next gen's power. Remember, AMD chose HBM only for the higher-end stuff here that could absorb the cost of it, or they would roll it out on all the cards; yet they roll out 8GB on the 390X and 390, and the 390 is $329. I'm guessing low-end next year can easily slap on 4GB of fast GDDR5 and be cost-effective in the $200 range (or even lower). Low-end (that's under $200, isn't it?) will be below the 970 IMHO, so no worries about GDDR5 from what I can see.

You also have to remember that game devs won't be aiming at HBM2 bandwidth (or heck, HBM1) for a while, as we'll have so few of them out there (with AMD alone, and only 1/4 of the market being AMD at all) for quite some time. They will be making most of their games fit quite nicely inside GDDR5's bandwidth (and expected amounts of RAM) for a while, simply because 95% of the market will be on that for the next 6-12 months. Sure, a few devs will buck the trend, but you get the point. I doubt AMD's sales of Fury cards will be more than 5% of the market (20% of AMD's market share?), but we'll see :) IMHO the words "critical" and "HBM2" don't belong in the same sentence for quite a while for consumers, and I'd think AMD/NV know this too; it's a great way to separate the low from the high end still for some time. Currently it's a buzzword that means nothing, right? :) I guess we'll just have to see how cost-effective HBM1 or HBM2 really are. At some point you'll be right, but not this coming gen. AMD not rolling it out across the board kind of proves my point (it isn't needed, and milk the old-designs cow).
 

Game developers do not "aim" for the memory bandwidth of specific technologies, since all of the memory-specific details are abstracted away by the hardware and drivers, and the game needs to work on anything from IGPs to 512-bit GDDR5 or quad-stack HBM2.

The big problem with continuing to use GDDR5 is that modern IGPs are starting to catch up with 128-bit GDDR5 GPUs, so low-end GPU performance needs to increase considerably to remain relevant - especially considering how AMD plans to put HBM (unspecified generation) on at least some of their Zen-based APUs. The fastest GDDR5 currently available is only about 20% faster than typical GDDR5, making it roughly even with single-stack HBM1. Not enough to keep up with a doubling in lower mid-range GPU performance without either going 192/256 bits wide or dual-stack HBM1. With HBM2, a low-end GPU could get away with a single stack.
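To put rough numbers on that: peak bandwidth is just bus width times per-pin data rate. The rates below (7 and 8 Gbps GDDR5, 1 Gbps/pin HBM1, 2 Gbps/pin HBM2) are assumed typical bins rather than figures from the article, so treat this as a back-of-the-envelope sketch:

```python
# Rough bandwidth math behind the GDDR5 vs. HBM comparison above.
# Per-pin data rates are assumptions (typical bins), not official figures.

def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Peak memory bandwidth in GB/s: bus width (bits) * per-pin rate (Gbps) / 8."""
    return bus_width_bits * data_rate_gbps / 8

print(bandwidth_gbs(128, 7))   # 112 GB/s - typical 128-bit GDDR5 (7 Gbps)
print(bandwidth_gbs(128, 8))   # 128 GB/s - fastest 128-bit GDDR5 (8 Gbps)
print(bandwidth_gbs(1024, 1))  # 128 GB/s - single HBM1 stack (1024-bit, 1 Gbps/pin)
print(bandwidth_gbs(2048, 1))  # 256 GB/s - dual HBM1 stacks
print(bandwidth_gbs(1024, 2))  # 256 GB/s - single HBM2 stack (2 Gbps/pin)
```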

I would expect a single stack of HBM2 to be quite cost-competitive with 12 chips of the fastest GDDR5 in existence, 16 chips of standard GDDR5 (plus the cost of extra PCB layers to route all those signals) or two HBM1 stacks (with the larger silicon interposer). There is also the power efficiency factor.

Fury is the only product confirmed to use HBM1 and might end up being the only one to ever use it. HBM2 is the real production stuff and as such, the much higher production volume will make it much cheaper to produce than HBM1.

For low-end APUs and GPUs, the HBM stack controller could even be integrated on the APU/GPU die and the HBM DRAM dies stacked directly on or under it to eliminate the separate logic die and interposer.
 
Serious question: So how much did you guys get paid to run this anti-AMD article? TechPowerUp, TechSpot, Guru3D, and HardwareCanucks all show the 390X matching or beating the 980 in performance, and yet you can't seem to get it to perform better than a 970 somehow? I guess we know how you afford all of your fancy testing toys.
 


[Image: perfrel_1920.gif - relative performance summary chart at 1920x1080]


You were saying?
 
It's so early in the release; maybe driver updates will give these a better performance lead over the previous models. I can't understand why anyone would pay more for less than 1% performance bumps, and when games are starting to use 4GB+ at 1080p, how could they release the 380 with only 2GB of VRAM? The Fury had better be insane.
 


Well, usually when games are eating over 2GB, it's when the user is adding all the filter effects and playing at 1080p+ resolutions. Game designers are finding better ways to utilize RAM (Alien: Isolation and The Witcher 3 are testimonies to this), and so are GPU manufacturers - the R9 380 and Maxwell both use a type of lossless color compression. As games grow in graphical complexity, we'll always need more and faster memory. But not everyone needs to play at maximum settings.
 


The HardOCP review of the 980 Ti reference card includes VRAM usage for the Titan X, 980 Ti and 980 in some games at 1440p and 4K. Pretty interesting.

http://www.hardocp.com/article/2015/06/15/nvidia_geforce_gtx_980_ti_video_card_gpu_review/10#.VYjdK_lVhBc

 
Nice to see the 980 Ti still stomps everything, glad I bought one... a wise investment!
I grabbed a Ti as well since I was sick of waiting any longer. I would like to see how the Fury X competes, though. I find it strange that we get a Fury X made for 4K that only has 4GB of RAM, yet they give the 390X 8GB? I am also hooking the Ti up to a 4K TV, so the lack of HDMI 2.0 in all of these cards is a deal breaker.

Maybe the RAM CAPACITY is not an issue over the 4096-bit bus that will come with the Fury X? If that RAM is significantly faster, then surely it will deal with swapping stuff out much quicker and wouldn't need to store so much? Efficiency is always the answer. I'm only guessing, and I can't remember if it is a 4096-bit bus - just working from memory. Don't hate 😉
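For what it's worth, the commonly quoted specs do put the Fury X on a 4096-bit interface (four HBM1 stacks). Assuming those published per-pin rates, the peak-bandwidth arithmetic works out roughly like this:

```python
# Peak bandwidth = bus width (bits) x per-pin data rate (Gbps) / 8.
# Bus widths and data rates below are the commonly quoted specs, not measurements.
def bandwidth_gbs(bus_bits, gbps_per_pin):
    return bus_bits * gbps_per_pin / 8

print(bandwidth_gbs(4096, 1))  # 512 GB/s - Fury X: 4 HBM1 stacks, 1 Gbps/pin (500 MHz DDR)
print(bandwidth_gbs(512, 6))   # 384 GB/s - R9 390X: 512-bit GDDR5 at 6 Gbps
print(bandwidth_gbs(384, 7))   # 336 GB/s - GTX 980 Ti: 384-bit GDDR5 at 7 Gbps
```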
 
The R9 390X hangs with the GTX 980 (or it could be said the GTX 980 hangs with the 390X) at one site, and wins against the GTX 980 at two other sites at UHD resolutions while trading blows at 1440p. However, here at Tom's, not only is the 390X re-badge a bust with completely different results, but the conclusion is negatively biased against the product. Who is walking in the mist - Tom's or everyone else? I entertain that possibility in the name of fairness. Mostly, I have to wonder whether anyone is actually wrong and whether the system setup used in the testing makes that much of a difference. Or did MSI in particular fail hard?
 

No need to swap things out. Most of the RAM is filled by multiple copies of game assets across memory channels, and with 60% more bandwidth, HBM1 should be able to make do with that many fewer duplicates.

If there were a need to swap things out with system RAM, PCIe 3.0 x16 at ~16GB/s would be the bottleneck even on an R7 240, which has about 30GB/s worth of memory bandwidth.
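The rough arithmetic behind both figures (the per-pin rates and the 290X comparison point below are my own assumptions for illustration): PCIe 3.0 runs 8 GT/s per lane with 128b/130b encoding, so x16 tops out just under 16 GB/s per direction, while even a slow card's local VRAM is roughly double that.

```python
# PCIe 3.0 x16 vs. local VRAM bandwidth (rough peak figures; rates are assumed typical specs).
pcie3_x16 = 8e9 * 16 * (128 / 130) / 8 / 1e9  # 8 GT/s * 16 lanes * encoding overhead, in GB/s
r7_240    = 128 * 1.8 / 8                     # 128-bit bus, DDR3-1800 variant (1.8 Gbps/pin)
print(round(pcie3_x16, 2), round(r7_240, 1))  # ~15.75 GB/s vs ~28.8 GB/s

# And the "60% more bandwidth" point, comparing Fury X (HBM1) with the 290X (GDDR5):
fury_x  = 4096 * 1 / 8   # 512 GB/s
r9_290x = 512 * 5 / 8    # 320 GB/s
print(fury_x / r9_290x)  # 1.6 -> 60% more
```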
 
It's funny, because guru3d.com tested the MSI 390X and the GTX 980, and at 1440p and 4K the two cards are almost identical across a whole lot more games tested. Why are these tests so different???
Did they even test them here? All it says is check our old 7000 and 200 series benchmarks.
 
VGA card total: 468 watts

Good gawd....

Average: 368 watts

No way, no thanks. You stoked the core, it burned through the earth's crust, and now it heats the fires of hell.
 



That doesn't sound like a serious question, just a slap at Tom's. It is still early in the release; I think more testing needs to be done before we get a clearer picture of this release and where its strong points are. If you are looking for an AMD-leaning site, try AnandTech; they are very positive towards this new lineup from AMD.
 


If they released a new 290X under the name of the 380X, they would raise the price anyway, and then people would complain that the 380X is way more money than the 280X.
 


I actually agree with the article: if you DO take everything into consideration, and without Fury results in, AMD can't come close to Nvidia if we're talking about efficiency. People are always crying about the cost of Nvidia products, but they forget that when you buy a slightly cheaper AMD card initially, you're then spending significantly more on electricity for as long as you run that inefficient card. Let's just give it a period of 18 months of ownership - if you take the entire cost into account, I think you'll find that AMD is nearly as expensive as Nvidia over the period.
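Whether that holds really depends on the wattage gap, hours of use, and the local electricity price. Here is a back-of-the-envelope sketch where every input (100 W load difference, $0.15/kWh) is an assumption, just to show how the running cost scales:

```python
# Illustrative power-cost-of-ownership math; all inputs are assumptions, not measured figures.
watt_gap      = 100    # assumed extra draw under load, watts
price_per_kwh = 0.15   # assumed electricity price, $/kWh
months        = 18

for hours_per_day in (2, 4, 8):
    kwh = watt_gap / 1000 * hours_per_day * 30 * months
    print(f"{hours_per_day} h/day -> ${kwh * price_per_kwh:.0f} over {months} months")
# 2 h/day -> $16, 4 h/day -> $32, 8 h/day -> $65 with these assumptions
```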

We should be looking at EFFICIENT technology. Can you imagine what Nvidia would be capable of producing if they weren't concerned about efficiency? Take their GTX 980 (not even the Ti model) - imagine what they could achieve if they said "let's just produce something potently fast, no matter the TDP"? They've proven they can produce fast AND efficient GPU solutions, and for that, they deserve crazy props. Compare them to AMD on that front. Then the article stands: AMD can't come close.
 

Yes, we can imagine...
It's called a GTX 480, or a GTX 550 Ti - off the top of my list of cards I don't care for...

edit: wait, both those cards are real... so I guess we don't have to imagine...
 


nVidia's 'focus on efficiency' is a really new phenomenon. AMD were *more efficient* than nVidia from the HD 4000 series, HD 5000 series, HD 6000 series and HD 7000 series until the launch of Kepler (GTX 6XX cards), which was marginally more efficient (mostly) than the HD 7000 series (Pitcairn / HD 7800 was the GCN efficiency sweet spot and has as good perf/W as anything based on Kepler). nVidia has finally taken a real efficiency lead against AMD with Maxwell (at least against AMD's now-old GCN architecture), and I don't think they will have much of an advantage against the new Fury cards.

It's amazing how short a memory people have with these things (and back when nVidia were less efficient, all the nVidia fans were saying 'efficiency don't matter' lol). I like nVidia kit, for the record (I've had a fair few of their cards over the years); I just don't view them as being as far superior as many like to make out. Let's at least take a look at the Fury benchmarks before declaring a winner 😛

Also, what will be most interesting is comparing AMD and nVidia's new 14/16nm architectures due out next year. They will both be on equivalent process tech, both be using HBM2 and both have updated architectures, so we'll have a really interesting contest. This generation AMD only has the one new GPU designed to compete; they'll have to build a full range next time around due to the new process tech. I personally think that Pascal and Arctic Islands will be neck and neck with each other (as always).
 


That's your example? Cards from 2010 (250W) and 2011 (116W - so I don't get your inclusion of this card), respectively. Good job keeping up with the times...

Let's look at the GTX 480 - it was a beast at that time, with a 250W TDP. Let's compare any AMD product at the same TDP, irrespective of year; heck, let's make it fair to AMD and choose a relatively recent card, the R9 280:

head-to-head theoretical comparison

4 YEARS of technology between these two cards, but look at the difference - the 480 is not nearly as eclipsed as it should have been, right?

 

Keep arguing with the straw man.

You wanted to know what nVidia would do if it wasn't concerned with efficiency; I provided you with some examples. Real ones. As in, not imaginary.
 


Your examples aren't relevant anymore. I can quote from 1999 to prove a point as well. We're talking about this day and age, not 4 or 5 years ago.
 


OK, let's look at *now* then, shall we?
http://www.tomshardware.co.uk/amd-radeon-r9-fury-x,review-33235-9.html#react33235

Tom's review of the Fury X... what's this? It uses 12W less power under gaming loads than the 980 Ti whilst performing about the same (meaning it has marginally higher perf/W)?! :) Also, whilst HBM helped, a lot of the power saving is down to a more granular power-management system (similar to what nVidia did with Maxwell).

The point is, as usual, AMD and nVidia are pretty close to each other when comparing tech from the same gen. What AMD did get wrong was not updating their lower GCN GPUs with the same improvements.
 


That's where we totally agree.
 