AMD RX 400 series (Polaris) MegaThread! FAQ & Resources



I believe SLI is built differently than CrossFireX, which is one reason why it's so much more reliable.

I like SLI bridges, though; it looks weird seeing two R9 390Xs, Furies, or RX 480s without a CFX bridge.
 

Just take a nice hard PCB SLI bridge, take off the connectors, and hot-glue each end to its respective card... pooooof, instant SLIfire!
 


:lol:

 


It is possible to use 1060s in tandem using DX12. For Nvidia's own SLI they still need the bridge. The thing is, we don't really know what goes through the bridge versus the PCIe interface; even when asked, Nvidia won't give details.
 


In my simple mind, and taking into account how AMD influenced the current-gen graphics APIs, I would imagine they built some of their PCIe connection back-end logic into both specs. I know it sounds weird, but the spec could be as broad as "in multi-GPU, data has to travel from one card to the other". That in turn would mean nVidia would have to comply with it somehow if they want the "DX12 multi-GPU compatible" sticker, so to speak; be it through the SLI bridge or PCIe transport.

Cheers!
 


From what I've heard online, the bridge is there for added bandwidth and to transfer frames from the slave GPU to the master GPU.
 


AFAIK Nvidia already said that while they officially removed SLI support from the 1060, the card will still work with DX12 multi-GPU; hence we saw a lot of testing done in Ashes with dual 1060s when the 1060 first came out. AFAIK DX12 has its own way of doing multi-GPU that is not the same as Nvidia SLI or AMD CF.
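A minimal sketch (C++/Win32, assuming the standard DXGI/D3D12 headers and libs) of what that DX12 way looks like at the API level: the app enumerates every adapter itself and creates an independent device per GPU, with no SLI/CrossFire driver pairing involved. The function name CreateDevicePerAdapter is just for illustration.

```cpp
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>

using Microsoft::WRL::ComPtr;

// Enumerate all hardware adapters and create one D3D12 device per GPU.
std::vector<ComPtr<ID3D12Device>> CreateDevicePerAdapter()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0;
         factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc{};
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // skip the WARP software rasterizer

        // Each physical GPU becomes its own ID3D12Device; the app,
        // not the driver, decides how work is split between them.
        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            devices.push_back(device);
    }
    return devices;
}
```

This is why a pair of 1060s can show up in Ashes' DX12 multi-GPU mode even with SLI support removed: to the API they are just two adapters.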
 
Considering how AMD seems to be ahead in DX12 support, I'd assume they will embrace multi-GPU, and it might actually benefit them by removing some of the driver-support and frame-pacing issues. Multi-GPU could potentially solve issues that AMD has been struggling with.

But as noted on the conclusion page of the link below, devs will have to embrace it as well, and since most games are made for consoles first, it's probably going to be rare that a dev puts the necessary time and money into proper multi-GPU support.

Though the RX 480 can scale impressively, it's very inconsistent:
https://www.techpowerup.com/reviews/AMD/RX_480_CrossFire/19.html
 
I recall something like that as well. I don't recall which multi-GPU type it was, but one of the more in-depth writeups I remember said it would work that way.

I'd have to look up all the various multi-GPU methods to find that one again, though.
 


I recall reading that as well; however, they also said you could do multi-GPU with integrated graphics, which we know is not the case. The reality is you need two cards close in both memory and performance for it to really work right.
 


Yes, so now the question becomes: "how are they doing it for DX12?"

I haven't read the spec or the API itself, so I don't know if they are forcing an implementation through PCIe or something better that doesn't rely on the PCIe bus logic.



Yes. Since devs can now manage each *GPU* independently (kind of like CPU cores, I would imagine?), they can make them work as individual units with their own individually allocated objects in memory (sketched below); as in, no forced object sharing, which is how current SLI and XFire work.

What is unknown to me is whether that will be the rule or the exception for the common DX12 multi-GPU implementations we will see. At first glance, it sounds more like a niche market to me.

EDIT: Changed wording of a phrase.

Cheers!
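Following from that, here is a hedged sketch of what "individually allocated objects" could mean in practice: with one ID3D12Device per card, the app can place a buffer in one specific GPU's local memory, rather than relying on the mirrored allocations that implicit SLI/XFire imply. CreateLocalBuffer and its parameters are illustrative, not from any shipping engine.

```cpp
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Allocate a plain buffer in the local (video) memory of this device only.
ComPtr<ID3D12Resource> CreateLocalBuffer(ID3D12Device* device, UINT64 bytes)
{
    D3D12_HEAP_PROPERTIES heap{};
    heap.Type = D3D12_HEAP_TYPE_DEFAULT; // GPU-local memory on this adapter

    D3D12_RESOURCE_DESC desc{};
    desc.Dimension        = D3D12_RESOURCE_DIMENSION_BUFFER;
    desc.Width            = bytes;
    desc.Height           = 1;
    desc.DepthOrArraySize = 1;
    desc.MipLevels        = 1;
    desc.Format           = DXGI_FORMAT_UNKNOWN;
    desc.SampleDesc.Count = 1;
    desc.Layout           = D3D12_TEXTURE_LAYOUT_ROW_MAJOR;

    ComPtr<ID3D12Resource> buffer;
    device->CreateCommittedResource(&heap, D3D12_HEAP_FLAG_NONE, &desc,
                                    D3D12_RESOURCE_STATE_COMMON, nullptr,
                                    IID_PPV_ARGS(&buffer));
    return buffer;
}
```

Calling this once per device gives each GPU its own distinct data rather than a forced shared copy, which is exactly the point above.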
 
Again, I forget which type of multi-GPU method it is, but the one where each card handles different parts of each frame would allow each card to use its own memory independently. This sounds to me like the "best" way to use multiple GPUs effectively, but it's probably also the hardest to code and implement, since each combination of cards brings a different combo of strengths and weaknesses that have to be accounted for when splitting each frame up.

For instance, a weaker GPU could draw the background (which may not change as much and is therefore easier for the weaker GPU to draw) while the stronger one handles the more intricate foreground stuff (see the sketch below). It would still need some form of data sharing, but I believe in this case memory would not have to be completely duplicated. The CPU overhead, though, is enormous, as it has to decide every frame what gets sent where. This type of overhead is why I believe we saw better utilization of many-core CPUs in the testing done with AoS. Eight cores in this case are very useful, as there is a lot of extra work for the CPU to do to balance the load across the multiple GPUs, or in this case decide what part of each frame goes to each GPU to be drawn.
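A minimal sketch of that per-frame balancing decision, assuming the engine tracks each GPU's last frame time; SplitForGpu0 and the clamp bounds are hypothetical, just to show the idea of re-deciding the split every frame.

```cpp
#include <algorithm>

// Fraction of the frame's work (0..1) to hand to GPU 0 this frame,
// based on how fast each card finished the previous frame.
double SplitForGpu0(double gpu0FrameMs, double gpu1FrameMs)
{
    // Throughput is inversely proportional to frame time, so the
    // faster card gets a proportionally larger share of the frame.
    const double t0 = 1.0 / gpu0FrameMs;
    const double t1 = 1.0 / gpu1FrameMs;
    return std::clamp(t0 / (t0 + t1), 0.1, 0.9); // keep both GPUs busy
}

// Example: GPU 0 at 10 ms and GPU 1 at 20 ms -> GPU 0 draws ~2/3 of the frame.
```

Doing this (plus actually partitioning the scene) every frame is the CPU overhead described above, and plausibly why the extra cores helped in the AoS testing.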
 


So Vega will be the first to use HBM2, and possibly 16GB of it to boot. Now I wonder if Vega is ready and waiting for HBM2 to be mass-produced in sufficient quantities, or if AMD just timed Vega for it.

If the few details out there are correct, Vega should give nVidia a run for its money. It's just unfortunate that it is taking so long to come out... giving nVidia a long time with the top end.

I do like that the 480 and lower cards have given AMD a resurgence in market share; that is good for everyone, and it seems AMD's plan worked exactly as they had hoped.
 
Well, it seems that 480/470 supply has finally caught up with demand... at Newegg anyway. The 480 4GB starts at $230, and the 8GB at $260... with all models in stock except the original $200/$240 'bait' reference cards... imagine that. The 470 4GB starts at $200... $240 for 8GB. I'm thinking that the 480 prices have probably dropped as far as they are going to... for now anyway. The 470s are still jacked up compared to the $180 MSRP, considering there were no reference cards. Also looks like the 1060 has nearly stabilized as well.

I predicted a while back that in the end there would be no such thing as a $199 RX 480... unfortunately, it looks like that will be the case. I'm pretty sure that's just as they planned... pump out just enough bottom-end 480s to cover their ass for announcing to world+dog that they could be 'VR Ready' for less than $200. Close enough to the good old 'bait and switch' tactic (which is illegal, btw) to draw the ire of many loyal but budget-restrained customers... myself included. If nothing else, it was a lesson learned, and I'll always be more of a sceptic come launch time. But who knows... maybe they'll hit $200 or less due to high volume this Xmas season and tempt us to go CrossFire... or not... lol
 
Actually, $230 sounds about right. If the reference cards are $199, then the aftermarket cards, with superior designs and factory overclocks, are usually a bit more.

Not really a bait-and-switch, but the price gouging on the reference cards was really bad. I still think some stock was held back by vendors to keep prices inflated as long as possible, but there's not much AMD can do about that. Their part of the supply chain ends at selling the chips to board partners.
 


Right, but I think his point was (and I brought it up a page or so ago as well) that the $199 reference 4GB RX 480 is no longer going to exist, and that seems correct. The prices of the non-reference cards are definitely reasonable, but the promised $200 card barely existed.
 


PCPer made a similar explanation using a video. The video presentation will probably help some people understand the concept better:

http://www.pcper.com/reviews/Graphics-Cards/New-Perspective-Multi-GPU-Memory-Management
 
I saw that video a while back; it's probably what I was remembering as I typed that. Can't recall all the names, but the big picture is still in my head for it :)

Thanks for the link. It should help some folks understand the various methods of multi-GPU a bit better.
 
Vega 10, 16GB HBM2, 10 TFlops?

According to Mr. Papermaster, this will be the case.

http://techfrag.com/2016/09/16/amd-vega-gpu-ship-h1-2017/

That would actually be really close to the Titan XP. I wonder if it turns out to be true, and if it does, what kind of dollar AMD will want for such a beast.
 
Could that HBM alone end up making the card cost an arm and a leg? At least that was my impression with AMD's Fury series last year. It will also be interesting to see what changes show up in Vega. Will Vega still be largely GCN, or something new entirely? Papermaster mentioned Vega will be better in terms of power efficiency. Will they achieve that with major architecture changes, or do they intend to gain efficiency through HBM like Fury did?
 