AMD Radeon R9 390X, R9 380 And R7 370 Tested

Why the hell would they drop only 2 gigs of RAM in the 380s? That's just ridiculous. Hell, my HD 7950 has 3 gigs.

It's a per-manufacturer decision.

I'm personally willing to bet the Sapphire set of cards (and btw, Sapphire's 380 and 370 both have 4GB as a standard release) would have performed far better than MSI's gear.
 
Nice to see 980 Ti still stomps everything, glad I bought one... a wise investment!
I grabbed a Ti as well since I was sick of waiting any longer. Would like to see how the Fury X competes, though. I find it strange that we get the Fury X made for 4K but with only 4GB of RAM, yet they give the 390X 8GB? I am also hooking the Ti up to a 4K TV, so the lack of HDMI 2.0 on all of these cards is a deal breaker.

It depends on manufacturer, as I said in another comment.

Just like how Sapphire was one of the only companies that put 8GB of RAM into the R9 290X (not the 290X2, but the single edition, with 8GB of usable VRAM), the Fury X will likely have at least 8GB, maybe even up to 12GB of VRAM, depending on the manufacturer.

I'm betting Sapphire's versions of the cards will run the best, as they always have.

Sapphire and XFX have always been the high performers for AMD cards. I've always found MSI's AMD GPUs to be extremely sub-par, pathetic in some cases.

Keep in mind, just because the architecture is AMD's doesn't mean everything is. GPUs (at least the AMD versions) typically ship with different volumes of VRAM, clock speeds, etc. Even different cooling.
 


Sadly, due to using HBM memory, it's not possible to put more than 4GB on each card. This limitation will be removed for the next generation of cards when HBM2 is released next year.
 
Nice to see 980 Ti still stomps everything, glad I bought one... a wise investment!
These are rebadged cards; their new cards are due out in days. Fanboy.
...however the R9 Fury X will only have 4GB of HBM1 memory, as AMD ran into issues with the memory configuration concept, which limits each stack of memory chips to 1GB. In this configuration, only four memory stacks fit on the interposer surrounding the GPU chip. 4GB may be fine for most games; however, it is almost useless in a workstation for GPU rendering, as the full geometry and texture load of a scene must fit entirely in the GPU's memory or the process will crash (Nvidia's Iray defaults to CPU mode if the video memory limit is exceeded).

The only unbiased render engine I've seen that offers decent and stable hybrid (GPU + CPU) rendering is Otoy's Octane; however, it is still CUDA-based and will not support OpenCL GPUs until version 3.0. Lux, whose LuxRender engine is OpenCL-based, abandoned their hybrid GPU + CPU mode due to instabilities and poor performance. Instead they have moved to a hybrid biased/unbiased mode for their speed boost (in pure unbiased CPU mode, LuxRender is glacially slow compared to Nvidia's Iray). Also, Nvidia has since improved their OpenCL driver development, particularly with the move to the Maxwell architecture, and I have read reports that it supports LuxRender very well.

AMD announced there will be an 8GB "Fury X2"; however, it will be a dual-GPU unit with 2 x 4GB of HBM1 memory, which for 3D rendering is useless because the video memory of multiple cards is not cumulative (basically, for rendering purposes it will still be a 4GB GPU). It is also projected to have a whopping price tag of around $1,400-1,500, comparable to the 2 x 6GB Titan Z. The only advantage will be having 8,192 stream processing threads, which, while fast, would mean nothing if the scene file cannot be held in memory.

So far it appears AMD is still playing "catch up" to Nvidia (and this isn't a "fanboy" speaking, as I have dual HD 7950 DDs that I used for LuxRender until they announced they were scrapping their GPU/CPU mode).

The HBM concept is an interesting development; however, its configuration for now is its limiting factor, as it only allows for four stacks of four memory chips. Once HBM2 memory is available (about another year away), which will double each stack to 2GB, AMD can offer a true 8GB version of the Fury and possibly even an 8GB HBM FirePro. As I read in one source, memory capacity is pretty much contingent on the development costs of the individual memory chips used in each stack, and increasing the capacity of the individual chips to, say, 750MB or 1GB could significantly increase the price of the GPU to the point where it is out of reach of most gamers and enthusiasts.
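To put rough numbers on that (a back-of-the-envelope sketch; the four-stacks-of-four-dies layout and 256MB-per-die figure are the commonly cited HBM1 specs, and HBM2 here simply doubles the per-die density):

# Back-of-the-envelope HBM capacity math; figures are the commonly
# cited Hynix specs, not official AMD numbers.
def card_capacity_gb(stacks, dies_per_stack, gb_per_die):
    return stacks * dies_per_stack * gb_per_die

print(card_capacity_gb(4, 4, 0.25))  # HBM1: 4 x 1GB stacks  -> 4.0 GB
print(card_capacity_gb(4, 4, 0.5))   # HBM2: density doubled -> 8.0 GB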

Until then (and in spite of a claim I read on an AMD promo) GDDR5 is "not quite dead yet".
 

There could be more than four stacks but that may require additional routing layers on the interposer and possibly a larger APU/GPU die to fit all the uBGA connections.

The HBM stacks have eight 128-bit interfaces each, every interface connecting to one memory macro. In a four-die stack, each die has two 128MB macros, each macro consuming one 128-bit interface and "limiting" the stack to four dies before all interfaces are used. Use DRAM dies with a single 256MB macro each and you can stack eight of them, since each die only ties up one interface.
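A quick sketch of that channel math, just restating the macro counts from the paragraph above in code:

# Each HBM1 stack exposes 8 x 128-bit channels, and each memory macro
# ties up exactly one channel, so the channel count caps the die count.
CHANNELS_PER_STACK = 8

def max_dies_per_stack(macros_per_die):
    return CHANNELS_PER_STACK // macros_per_die

print(max_dies_per_stack(2))  # two 128MB macros per die -> 4 dies (1GB stack)
print(max_dies_per_stack(1))  # one 256MB macro per die  -> 8 dies (2GB stack)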

Since HBM1 is effectively little more than a proof of concept, with the Fury X possibly being the only product that will ever use it, it is understandable that DRAM manufacturers (only Hynix at the moment) are in no hurry to offer multiple HBM1 products. HBM's real market life starts with HBM2 next year.
 
The important takeaway from this article is that enthusiasts must wait until Fury launches in a few days for the flagship we're expecting. AMD isn't wasting resources developing new low-end and mid-range GPUs when the old ones hold up rather well.

The rebranding is disappointing, but perhaps AMD is playing this smart. Why spend the resources developing a new GPU based on the new architecture for every price/performance point when the older GPUs still hold up decently if you slide them down a spot, especially with their R&D budget in shambles like it is? It is much more efficient to rebrand them to keep all the naming on the same page; it allows them to keep prices down for low-end and mid-range customers and put more resources toward developing a better flagship for enthusiasts. NVIDIA may be doing better financially, but not by much since Intel started shipping respectable iGPUs (a bad storm sinks all boats), so I'd expect them to adopt a similar strategy next generation.
 
If I am not mistaken, the GTX 970 is a cut-down 980 and the 980 Ti is a Titan X with less VRAM. If you look at it from that perspective, Nvidia has really released only four different GPUs in the past 18 months (980 Ti/Titan X, 980/970, 960 and 750 Ti) and AMD has released three different GPUs (R7 265, R9 285 and the R9 Fury series). That is really not that big of a difference. Again, I could be wrong on the Nvidia side, but I think I'm right.

With that in mind, is rebranding a big deal? What's really getting people up in arms is the release dates; that's about it. AMD released all of theirs over a period of time, while Nvidia released most of theirs at the same time. Nvidia probably played their cards better: people like to see a whole new line instead of new cards trickling in. At the end of the day, though, I would say both parties sit pretty even (which pains me to say, because I really am an AMD fanboy at heart).

Adding the extra VRAM as standard to the 390/390X was a good move, I think. I game with triple 1080p monitors, and I really don't think I will need more than a 390, but I really want to crank the AA, so the extra VRAM is very welcome.
 
Waste of time. I'll wait to see what the Fury and Fury X2 do... sadly those cards will likely be super expensive. Shame on you, AMD. They should not have wasted manufacturing time and resources on this line of cards and just offered a broader range of Fury chips instead.
 


Only thing they really wasted was time.
 


While it's true they're being positive, let me know when they actually review the 300s or Fury 😉 They just shill for AMD and spit out AMD PR, with so far ZERO backing. Almost two weeks since the release of the 300s and we haven't got a benchmark from them... LOL. A prelude-to-Fury article and a live blog... NO BENCHMARKS yet. Nothing on Fury. I guess they'll post when they find some winning AMD situations 😉

Having said that, Tom's needs to be slapped just like anyone else when doing a poor job (Fury, no OCing?), just as I slapped AnandTech just now... ROFL. Not a comment on this article, but the Fury review at least needs help. AMD bragged about the cooler being good for 500W, but Tom's didn't bother to OC the new chip or the new memory? LOL. OK... I think they did, and it sucked, so they chose not to report it (others get 5-10%, which is embarrassing for water cooling). A tame review compared to others, IMHO. But leaving out data (or simply not testing?) isn't as bad as blatant lies, I guess :)
 


In fairness, I think the issue with overclocking Fury right now is that there are no under-the-hood tweaks available. You can't adjust any voltages, for example.

I bet when one of the many third-party OC tools crops up that allows you *proper* access to the GPU settings, we'll see more of an overclock, given how far you can push many other GCN GPUs and the cooling on offer. I don't think I've ever seen a good overclock relying on the standard tools in Catalyst, as they're always really cautious to avoid people risking damage to the GPU.
 
It's funny, because guru3d.com tested the MSI 390X and the GTX 980, and at 1440p and 4K the two cards are almost identical across a whole lot more games tested. Why are these tests so different?
I noticed that too: on guru3d, GTA V shows 47 fps at 1440p, but here it says 49 at 1080p for the R9 390X. What's weirder is that the R9 380 gets 40 at 1440p on guru3d but only 38 at 1080p here?
 
One point, as I read many other reviews of Fury: I guess Tom's has a reason for not OCing the memory, as it is locked according to some. But nothing for the GPU, when many others did it? Maybe I missed them saying why they did NOT do it in the article? If so, my apologies... if not...
 
We all knew it would happen: a refresh of the last GPUs. But it's not just AMD that does this; Nvidia has done it as well. At this point, if you want a card, the R9 290/290X and the 970/980 are the ones to buy. The 300-series cards are a bit pricey at the moment and only offer a modest-to-little bump in speed. No, I am no fanboy; I own both Nvidia and AMD...
 
We've seen a lot of rebrands lately, and this is down to one simple factor: we're still stuck at 28nm, which means a fixed transistor budget, thermal properties and so on. There's only so much you can do within certain constraints, and it has resulted in both AMD and Nvidia building the largest GPUs either company has ever made (which, oddly enough, perform very close to each other despite the massive difference in architecture).
 

The bulk of the GPU's die area is an array of ALUs. There aren't many efficient ways to build adders, multipliers, register files and buffers. They may shuffle things around a bit and use different nomenclature on their functional diagrams but at the fundamental level, they might be more similar than ever. The big difference is in how efficiently they manage that large array.
 


In AMD's case, the problem isn't the ALUs, though (TechReport did a nice deep dive on Fury X, looking at performance in a range of areas).

When you compare the Fury X to the 980 Ti, what becomes very apparent is that the balance of resources is very different. AMD has put everything into ALUs at the expense of ROPs (Fury X has the same back end as Hawaii) and geometry throughput. I think that has something to do with why the Nvidia cards pull ahead at lower resolutions (geometry in particular probably keeps the Fury from stretching its legs?).

It's an interesting read:
http://techreport.com/review/28513/amd-radeon-r9-fury-x-graphics-card-reviewed/4

I guess as the resolution goes up, it shifts the focus toward ALUs and memory bandwidth, which is why Fury is notably faster than the 290X/390X at 4K?
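A toy model makes that shift plausible (all the numbers below are made up purely for illustration; the only assumption is a fixed front-end/geometry cost per frame plus a shading cost that scales with pixel count):

# Toy model: frame time = fixed geometry cost + per-pixel shading cost.
# As resolution rises, the per-pixel share dominates, which favors a
# design heavy on ALUs and memory bandwidth over one heavy on front-end.
resolutions = {"1080p": 1920 * 1080, "1440p": 2560 * 1440, "4K": 3840 * 2160}
geometry_ms = 5.0         # assumed fixed front-end cost per frame
shade_ns_per_pixel = 4.0  # assumed shading cost per pixel

for name, pixels in resolutions.items():
    shading_ms = pixels * shade_ns_per_pixel / 1e6
    share = shading_ms / (geometry_ms + shading_ms)
    print(f"{name}: shading is {share:.0%} of the frame")
# ~62% at 1080p, ~75% at 1440p, ~87% at 4K with these toy numbers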
 

It depends on whether you want to stick with AMD vs. Nvidia, whether you game at 1080p or higher, energy usage, heat, noise levels, and what games you intend to play, i.e. Gaming Evolved vs. GameWorks titles. The 390X is a good GPU, but so is a GTX 970. If you want to see its performance across a whole giant sample of games, then go to

http://www.techpowerup.com/reviews/MSI/R9_390X_Gaming/

I would wait until the Fury Pro and Fury Nano are released. At least prices might come down by then.
 

My guess is similar to yours: ROPs are considerably restraining the number of scenarios where the Fury can give its extra ALUs and memory bandwidth the workout they could (and should) have.
 


They might have had a real winner if they had concentrated on where 95% of us play (1440p and below). Even if they won at 4K handily (and they don't), it is still confusing why you'd concentrate there with so few gaming at 4K, and with many games having to turn details down to stay above 30fps (and that's a bare minimum for me to even ponder playing with details maxed). Nvidia (rightfully) aims at 1080p and possibly 1440p (far fewer here also) situations. It's OK to do 4K well (as an afterthought for now; those people can buy two cards, etc.), but you MUST cover the majority of the gaming crowd FIRST or you've wasted your efforts. 4K is the job of the next gen after a die shrink, not 28nm, and even then they'll probably only make me happy SOME of the time. The rest of the time I'll be shifting games back to my lower-res monitor, probably my Dell 24in 1200p monitor by then (I have a larger monitor purchase coming up to replace a 22in 1680x1050), especially for games where I want ludicrous fps; I'm thinking multiplayer stuff here (or just fps in general).

AMD needs to stop concentrating on the few and satisfy the MANY. That means 1440p and below (well, maybe make 4K a priority next gen), four HIGH-IPC cores, and DRIVERS, DRIVERS, DRIVERS. They also should have FORCED monitor makers to use at least X-quality components to make FreeSync great out of the gate (where X is whatever works as well as G-Sync). Nobody needs slower (or reduced-detail) 4K today, as you buy two or more cards for that; nobody needs eight cores that win in the few (RARE) situations that take advantage of them; and we didn't need consoles, with mobile gaining massively every year. They should have let some other fools spend R&D on consoles. ZEN looks like a correction to this philosophy, but Fiji missed the boat here, and they really needed cash today, not mid-2016. That is another year of bleeding, as they won't make money from Fiji or the rebrands, and APUs are getting squeezed by the minute with Intel racing down and ARM racing up. They've been touting 4K for the entire 200 series and now the 300s, and it's still a joke today for single cards.

This illustrates the waste of Mantle R&D:
http://www.anandtech.com/show/9223/gfxbench-3-metal-ios
Draw calls are 4x faster (even better on Mantle/DX12), but you see only ~10% or less in regular gaming scenarios. It might be good for future stuff (DX12/Vulkan games MADE to exploit this years from now), but it was never going to topple Nvidia cards the way people thought. It's like winning in F@H or something. Who cares? You can show how fast something MIGHT be one day if coders take advantage, but it's useless for what we do now (which is key to selling cards, CPUs, etc.). Cool to show how fast you can run Star Swarm (which, ironically, is now far faster on NV in DX12), but again, who cares today (maybe one day they'll throw massive draw calls at the screen in a future game)? You can take care of today with an eye toward the future, but you can't take care of the future and ignore the present. That is a recipe for market share losses and bankruptcy. It's too bad AMD didn't put more stock in what we are doing TODAY rather than tomorrow.
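The gap between a 4x draw-call win and ~10% real-world gains is basically Amdahl's law. A rough sketch (the 15% submission share is an assumed illustrative figure, not a measurement):

# If draw-call submission is only a slice of total frame time, a 4x
# speedup on that slice barely moves the whole frame (Amdahl's law).
submission_share = 0.15  # assumed fraction of frame time spent on draw calls
speedup = 4.0            # Mantle/DX12-style draw-call submission speedup

overall = 1 / ((1 - submission_share) + submission_share / speedup)
print(f"{overall:.2f}x")  # ~1.13x, i.e. roughly a 10-13% faster frame overall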
 

I do not think making you happy figures anywhere on AMD's agenda.

GPU improvement has been mostly stagnant for the past 3-4 years from being stuck at 28nm, and the way I look at Fury, I see it as an open beta for HBM: HBM will become necessary to get the most out of 16nm when it becomes available. Having it out in the wild a year before 16nm hits gives AMD that much more time to review their future HBM2 and architecture plans based on field results. Fury might be a questionable success, but that's alright, since it was apparently never meant to shatter frame-rate records in the first place, with its 64 ROPs limiting it to the same pixel fill rate as the R9 390X.

In principle, the 980 Ti's 96GP/s pixel fill rate (vs. 67GP/s for Fury) makes it much better suited to driving 4K, albeit possibly at lower details.
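For reference, those fill-rate figures fall straight out of ROP count times clock (a quick sketch; the clocks below are the commonly quoted reference clocks):

# Peak pixel fill rate = ROPs x core clock.
def fill_rate_gpps(rops, clock_mhz):
    return rops * clock_mhz / 1000  # gigapixels per second

print(fill_rate_gpps(96, 1000))  # 980 Ti: 96 ROPs @ ~1000MHz -> 96.0 GP/s
print(fill_rate_gpps(64, 1050))  # Fury X: 64 ROPs @ 1050MHz  -> 67.2 GP/s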
 