AMD RX 400 series (Polaris) MegaThread! FAQ & Resources

Status
Not open for further replies.
^ Nope, PowerColor managed it on air with no throttling and temps below 80 °C on the Devil 13 390X/390 cards.

Bearing in mind the 480 has close to 100 W less TDP as a single card, then with a little down-clocking from full spec and a well-designed custom PCB I'd say it's 100% possible to get a twin 8-pin solution out there that will manage fine with air cooling.
 


I have to agree; with a good air cooler I don't see how this is a bad idea. My RX 480 with the reference cooler sits fine at 81 °C, and I'm sure it would be quite a bit cooler with a good AIB setup. The power requirements can easily be taken care of with two 8-pin connectors.
 




I never said it couldn't be cooled or that it's power requirement couldn't be met. The point was, why would they bother? A single more powerful GPU based card would be cheaper to produce and easier to sell to the masses.
 
Unless AMD has some breakthrough tech going on (or gets tons of game devs to suddenly support DX12 explicit multiadapter), a dual-GPU card will be subject to the usual caveats and probably not earn them that much money.

Though producing two small GPUs is probably easier and cheaper than one big GPU, it's all downhill from there - more complicated PCB and cooling is one thing, but all the extra effort that has to go into the driver etc...
 
Is it really that complicated, though, from a technical standpoint to get two GPU cores running in tandem properly on a single PCB and sharing a single 8 GB memory config, without having to rely on automatic/integrated CrossFire? I ask this question seriously, as I personally have no idea.

If it's been the plan right from the design stage, then surely it is feasible?

As far as I know it's never been attempted from that angle; the dual-GPU setups in the past have always been an afterthought (and a fairly badly planned one).

The 480 is the first SKU AMD have designed that actually has a TDP capable of being paired on a single board without plain silly power requirements.
 
One issue is that you'd be looking at power consumption numbers right up there between a 290X and a 295X2 on this chart, but performance matching cards down near the bottom third.

[Chart: peak power consumption comparison]
 
That assumes it doubles the power usage.

1070 and 1080 SLI testing I have seen shows less than 200% power usage, usually around 180% if I recall right. What did the 480 CrossFire tests show for power usage? Somehow I doubt it was a complete doubling of the power. It's still a lot of power, approaching 300 W or more, but if the price is right...
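As a back-of-the-envelope sanity check on that, here's a quick sketch. The 150 W figure is the RX 480's reference board power, and the 1.8x factor is the SLI ballpark quoted above, not a measured CrossFire number:

```python
# Rough estimate of dual-RX 480 board power under different scaling
# assumptions. 150 W is the RX 480 reference board power; 1.8x is the
# ~180% figure quoted for SLI pairs, used here as a stand-in only.

RX480_BOARD_POWER_W = 150

def dual_card_power(single_w: float, scaling: float) -> float:
    """Estimated total board power for a dual-GPU card."""
    return single_w * scaling

naive = dual_card_power(RX480_BOARD_POWER_W, 2.0)     # straight doubling
observed = dual_card_power(RX480_BOARD_POWER_W, 1.8)  # ~180%, as in SLI tests

print(f"Naive doubling: {naive:.0f} W")    # 300 W
print(f"~180% scaling:  {observed:.0f} W") # 270 W
```

Either way it lands right in the "approaching 300 W" range mentioned above, so the twin 8-pin idea checks out on paper.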

Remember, no one cares about power usage unless it's an AMD product. When the FE cards showed much higher usage than NVIDIA suggested, the first reaction was "who cares about power consumption!" But all of a sudden here...
 


Actually, some of us do care about power consumption. I don't recall posting anything about the FE's power consumption, by the way.
 


Maybe... but I have to wonder why it wouldn't have happened already, then.

It is pretty infuriating that you can literally have thousands of shader cores on a chip working in parallel absolutely perfectly, but then getting two chips working in parallel is clunky, and 3-4 is downright bad.
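For context on why two chips are clunky: the usual way drivers split work between GPUs is alternate-frame rendering (AFR). A toy sketch of the round-robin assignment (the function and frame numbers here are made up purely for illustration):

```python
# Toy illustration of alternate-frame rendering (AFR): frames are dealt
# round-robin across GPUs. This only shows the assignment; real drivers
# also have to keep both chips' output paced evenly.

def assign_frames_afr(frame_ids, num_gpus=2):
    """Round-robin frames across GPUs, as AFR does."""
    schedule = {gpu: [] for gpu in range(num_gpus)}
    for frame in frame_ids:
        schedule[frame % num_gpus].append(frame)
    return schedule

print(assign_frames_afr(range(8)))  # {0: [0, 2, 4, 6], 1: [1, 3, 5, 7]}
```

The clunkiness comes from the fact that presented frames only pace evenly if both GPUs finish in lockstep; any drift between them shows up as micro-stutter, which is exactly the kind of inter-chip coordination that thousands of shader cores on one die never have to deal with.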
 
The simple reason for not creating complex circuitry to make 2xGPUs work as 1 is cost, as usual.

The necessary components to make the OS think it has one GPU instead of two mean they have to design a *third* component capable of managing two different pieces of silicon. It is the same idea as multi-core CPUs nowadays: why would you do 2S or 4S when you can just have 1S with twice the core count and leave all the associated deficiencies behind?

For academic purposes, I do like the idea of AMD creating something *interesting*, but from the practical point of view (and the business case, really), they are better off with a bigger chip. When they can't go bigger, then making a dual becomes something within reason to expect.

Well, at least that is my take on this topic.

Cheers!
 
AMD has this Heterogeneous System Architecture thing, supposed to let CPUs and GPUs share memory. Perhaps it could be applied to two GPUs sharing memory?
 


Indeed, but it would be fascinating to see if it could be done and done well. They could even use different configurations on each chip, perhaps, for more efficiency in performing certain tasks?
 


Sharing the memory is just a thing that helps mainly with GPGPU tasks. The CPU doesn't really need to access the GPU memory at any given time, from what I remember, and the GPU only needs it when moving stuff across its own buffer. And even if it does help, it's just one part of the whole thing they need to develop.

I do remember when the 4870X2 was launched and it sported its own internal PCIe bridge. It was never used, IIRC. Wasted money in R&D that could've gone to a better GPU or something.



Well, if they still have space in the process node, I really don't see how putting a chip outside is *better* in terms of all the associated costs than just including it in the design (integrating it into the GPU). It's kind of the same deal as with fixed pipelines. You could have them outside the GPU, but why bother?

Would you think nVidia would put a PPU (as they were called) alongside a GPU nowadays?

And I really don't think they would be more efficient. Once you move stuff off-core (or off-GPU, in this case) you will always incur performance penalties. It *might* help with cooling, but that's about it...

Cheers!
 
There's also the whole thing about AMD's strategy to conquer the mainstream market. Marketing-wise, they are not interested in a card competing at the high end with minimal sales and reduced profit margins.

Then there is the fact that their past few forays into dual-GPU cards actually hurt them more than they helped. The 6990 and 7990, and to a lesser extent the 295X2, were blatant advertisements for all that was bad about AMD: hot, power-hungry, and noisy, with horrible frame pacing to boot. It's taken them years to overcome the perceptions that came out of those reviews, a process that continues to the present.
 
I'm pretty much an Nvidia guy, but AMD has repeatedly shown that they are not scared of factory water cooling and that they can pull it off. How well a dual-GPU card scales is an issue that, if solved, would at the very least open the conversation of "what price is high performance worth?" For some, it's worth anything they have to pay. All a dual RX 480 card has to do to be viable is beat dual RX 480 CrossFire by a reasonable amount (20%? 30%?) for the same or less money, and I would consider it. With water cooling, one might be able to get a significant clock bump to go with it. What do you think?
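One way to frame that "worth it?" question is performance per dollar. A quick sketch with made-up prices and frame rates (nothing below is a real benchmark or real pricing):

```python
# Hypothetical value check: a dual card should beat two-card CrossFire
# by some margin at equal or lower cost. All numbers are placeholders.

def perf_per_dollar(fps: float, price: float) -> float:
    return fps / price

cf_fps, cf_price = 100.0, 480.0      # hypothetical 2x RX 480 in CrossFire
dual_fps, dual_price = 125.0, 480.0  # hypothetical dual card, +25%

gain = dual_fps / cf_fps - 1.0
print(f"Performance gain over CF: {gain:.0%}")  # 25%
print("Better value:", perf_per_dollar(dual_fps, dual_price) > perf_per_dollar(cf_fps, cf_price))
```

With equal prices assumed, any positive scaling advantage makes the dual card the better value; the real question is whether 20-30% is achievable.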
 


Have any of the AMD dual cards performed better than two standard cards in CrossFire? I thought I remembered reading that two R9 290Xs in CrossFire performed slightly better than an R9 295X2.
 


To be frank, I don't know the answer to that, but in my experience technology is no respecter of history. I based my premise on the assumption that some smart engineers and programmers could advance the present state of the art in multi-GPU utilization enough to enable better scaling on a dual-GPU card. Pie-in-the-sky thinking, I know, but I've seen stranger things happen.

So I started thinking, "This information is somewhere," and went looking. In Tomb Raider, an R9 295X2 gives a frame rate of 166 fps; two R9 290Xs in CF give 145 fps. In Medal of Honor: Warfighter, an R9 295X2 gives 113 fps and two-way CF 290Xs give 104 fps. It carries on that way through all the games they share. No matter how good or bad the gains are from having a second 290X available, and some of them are barely any gains at all, the 295X2 outperformed the two-way CF 290Xs in every case where they shared a game configuration.
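Working out the margins from the fps figures quoted above (a quick sketch using only the numbers cited, nothing else):

```python
# How much faster the 295X2 was than two 290Xs in CrossFire,
# per the fps numbers quoted above.

results = {
    "Tomb Raider":               (166, 145),  # (295X2 fps, 290X CF fps)
    "Medal of Honor: Warfighter": (113, 104),
}

for game, (x2_fps, cf_fps) in results.items():
    advantage = x2_fps / cf_fps - 1.0
    print(f"{game}: 295X2 ahead by {advantage:.1%}")
# Tomb Raider: 295X2 ahead by 14.5%
# Medal of Honor: Warfighter: 295X2 ahead by 8.7%
```

So the single-board card's edge in those two titles is roughly 9-15%, notably short of the 20-30% threshold floated earlier in the thread.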
 


I think it's highly unlikely that we'll see a card that performs better than a pair of base cards in CrossFire.

The advantages of a dual-GPU single card are:
- Space saving (it should take up a similar amount of space/slots as any other single card)
- Reduced power requirements
- Potentially cheaper than two single cards, due to less duplication of components.

At the end of the day, I think a dual Polaris 10 card could make sense if the cost is decent, although frame pacing and software support are likely still an issue. To be viable, IMO, it needs to significantly undercut the price of a performance-competitive single card to compensate for the inherent issues involved with dual-GPU cards.
 