AMD Vega MegaThread! FAQ and Resources

Seen that rumor at VCZ (though it originated from TweakTown). Is HBM2 yield really that bad at SK Hynix? Only 16k units at launch? AMD could probably improve supply by using much slower HBM; even the Quadro GP100's HBM2 is only rated at 1.4 Gbps (and SK Hynix's HBM2 was supposed to operate at 1.6 Gbps at the very least).

http://www.anandtech.com/show/11102/nvidia-announces-quadro-gp100

Also, AMD's main partner in developing HBM is SK Hynix, while Nvidia has been using Samsung's solution from the very beginning. If there really is an HBM2 yield issue, Nvidia has probably ended up using all the HBM2 Samsung can produce to date. And demand for the Tesla P100 was very high even at launch, because some clients placed their orders (on the order of thousands of GPUs per client) before the card officially launched. AFAIK the majority, if not all, of the GP100s Nvidia was able to produce were fully booked in 2016; that's why the Quadro GP100 only came out almost a year after Nvidia officially launched the Tesla P100.
 

jaymc

AMD Vega Dual GPU Liquid Cooled Graphics Card Spotted In Linux Drivers:

http://wccftech.com/amd-dual-gpu-vega-liquid-cooled-graphics-card-spotted-in-linux-drivers/
 

goldstone77



I like it! Now that looks like it would give the 1080Ti and Titan good competition!
 
Though AMD tends to price their dual-GPU cards very high. Right now AMD calls them Pro Duo. Dual Vega? Expect no less than $1,500; that's how much the dual-Fiji card cost last time. And recently AMD released a dual-Polaris card priced at $1k. You can get 4-5 RX 480s for that much :D
 

jaymc

I have been hoping for this news for a long time. I think at first it may be marketed as a server part (at a ridiculous price), but eventually I think it will be released to the consumer market at a much more reasonable price.

Also, I think it's going to be utilized through CrossFire. I have seen lots and lots of CrossFire updates in driver support over the last six months or so, and I've kind of been waiting for this as a result. It's working excellently, btw.
Or it may be managed by a chip or firmware on the card itself that alternates work between the two GPUs, which would be great, but not that likely.

Edit:
There's code in the driver for a PLX chip, which looks like it's there to increase the number of PCI Express lanes (x16/x16) going to the slots. Possibly aimed at X390/X399? I know the X370 only has 24 PCI Express lanes (x16/x8), and I've heard nothing about PLX chips so far.


 


We have a specific thread for Nvidia here, though it might be time to make a new one since that thread is dedicated to Pascal.
 
For the dual-GPU talk: AMD has always used special interconnects in their PCBs with dualies to increase effective bandwidth between the GPUs. The benefit has always been on the low side, since GPU-to-GPU communication has never really been a thing, AFAIK. Maybe with DX12 and Vulkan that has changed? Not sure. In any case, extra PCIe lanes on the motherboard are worse than having them on the GPU PCB directly, but I guess it's harder to code for that way (i.e. a driver dependency for effective use of it).

However it turns out, it has been shown that dual-GPU configs are good for benchmark numbers, but not that good for real-life experience. I have to wonder if DX12 and Vulkan would alleviate that.

Cheers!
 

goldstone77



The dual GPU setups have offered close to twice the performance in the past.
[Image: Radeon Pro Duo specifications]

[Image: Radeon Pro Duo Far Cry Primal performance]

[Image: Radeon Pro Duo Fire Strike Extreme performance]


http://hothardware.com/reviews/amd-radeon-pro-duo-benchmarks

GeForce + Radeon: Previewing DirectX 12 Multi-Adapter with Ashes of the Singularity
by Ryan Smith on October 26, 2015 10:00 AM EST

[Image: Ashes of the Singularity DX12 multi-adapter benchmark graph]

http://images.anandtech.com/graphs/graph9740/78166.png

"Ultimately as gamers all we can do is take a wait-and-see approach to the whole matter. But as DirectX 12 game development ramps up, I am cautiously optimistic that positive experiences like Ashes will help encourage other developers to plan for multi-adapter support as well."

http://www.anandtech.com/show/9740/directx-12-geforce-plus-radeon-mgpu-preview

I think we will see great performance from dual GPU cards with DirectX 12.

EDIT: found a couple more benchmarks
[Image: AMD Radeon Pro Duo benchmark results at 1080p]

[Image: AMD Radeon Pro Duo benchmark results at 4K]


http://wccftech.com/amd-radeon-pro-duo-benchmark-results-leaked/
 


I'm willing to see the glass half [strike]full[/strike] empty, just because the stuttering AMD has had in the past with their multi-GPU configs has been horrible.

So, that being said, it doesn't really matter how good the internal shenanigans of the PCB are if the drivers are going to make the dualies suck. Hence, I'm wondering if Vulkan or DX12 will give the dual-GPU boards some advantage over what has been out there in the past (fully dependent on drivers).

Cheers!

EDIT: struck-through bit.
 

goldstone77



Realistically, based on past performance, you get roughly a ~50% gain in some games, DirectX 12 or not.
 

jaymc



Can DX12 break up the workload and farm it out to each GPU?

 
The thing with DX12 is that the job is supposed to be done by the game maker. That was the main pitch behind DX12: more direct control for game developers instead of relying on IHV support. But realistically, if you have been paying attention to what has happened since the introduction of DX12 with Windows 10, the majority of game developers have no desire to do it themselves. Recently Gears of War 4 rolled out the long-awaited multi-GPU support everyone had been waiting for, but to use multi-GPU you still need one of Nvidia's latest drivers that includes multi-GPU support for the game. And here I thought DX12 multi-GPU was totally independent from IHV drivers (like how it was done in Ashes, where two 1060s were able to work together despite Nvidia officially not supporting SLI on that card), meaning that even on older drivers multi-GPU should work as long as you patch the game to the latest update. That's why some people say DX12 will be the last nail in the coffin for multi-GPU: because it pushes the responsibility from the ones who want to push the tech (GPU makers) to the ones who try to avoid using it as much as possible (game makers).
 


That's not exactly true. A game had to be tailored for multi-GPU in older revisions of DX, and the GPUs usually had to be identical (Nvidia's SLI) or at the very least from similar generations (AMD's CrossFire) to work at all, never mind the need for validated drivers, which were a prerequisite, or the limited kinds of load sharing you could perform: alternate frame rendering and scanline rendering were pretty much all you'd get. DX12 allows composited rendering, where GPUs work on different objects and one of them then composites the results together into the frame.

DX12 makes it so that you can mix and match whatever hardware resources you have, provided you go and make use of them the same way you'd go and detect what CPU cores you have, how fast they are and how many of them there actually are. Of course, that requires the game maker to detect and probably benchmark the capabilities of whatever GPU hardware it can find (it does add complexity), but this is neither an unknown (see CPU cores) nor repetitive: once the graphics engine is geared towards this kind of detection and balancing, it's DONE - no need to look further. So of course engine makers have some work ahead of them, but most of them (or at least the good ones) actually enjoy having more capabilities: straightforward APIs geared towards harnessing more resources are much easier to work with than finding workarounds and hacks to do the same.
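To make the detection idea concrete, here is a minimal C++ sketch (DXGI/D3D12, assuming a Windows 10 build environment) of how an engine could enumerate every adapter in the system and read its basic properties before deciding how to split work. It only shows the probing step, not an actual multi-adapter renderer; the balancing policy itself is up to the engine.

[code]
// Minimal sketch: enumerate all DXGI adapters and print basic capabilities,
// the kind of probing an engine would do before deciding how to distribute
// DX12 work across GPUs. (Detection only; no rendering here.)
#include <dxgi1_4.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "dxgi.lib")
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return 1;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);

        // Skip the software rasterizer (WARP); it is not a real GPU.
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue;

        // Passing nullptr for the device just tests whether creation would succeed.
        bool d3d12Capable = SUCCEEDED(D3D12CreateDevice(
            adapter.Get(), D3D_FEATURE_LEVEL_11_0,
            __uuidof(ID3D12Device), nullptr));

        wprintf(L"Adapter %u: %s, %zu MB VRAM, D3D12: %s\n",
                i, desc.Description, desc.DedicatedVideoMemory >> 20,
                d3d12Capable ? L"yes" : L"no");
    }
    return 0;
}
[/code]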

Now of course, a hardware maker simply shutting down its GPU when another is used for rendering makes all of this moot: Nvidia didn't approve of the use of their GPU for PhysX computations when the actual rendering was done on an AMD card and wrote a shutdown routine into their drivers, but this is actually the kind of load balancing DX12 (and soon, Vulkan) would allow.
 
That's not exactly true. A game had to be tailored for multi-GPU in older revisions of DX

I know that. UE4, for example, is not AFR-friendly. In the past, people asked Epic how to take advantage of SLI when developing their games with UE4, and one Epic developer's response was to avoid SLI (or multi-GPU in general) if they want to use all of UE4's features.

Of course, that requires the game maker to detect and probably benchmark the capabilities of whatever GPU hardware it can find (it does add complexity), but this is neither an unknown (see CPU cores) nor repetitive: once the graphics engine is geared towards this kind of detection and balancing, it's DONE - no need to look further.

Except multi-GPU is not the same as multi-core CPU. I was thinking about something similar before: why can't games be optimized for multi-GPU down at the game-engine level, the same way it was done for multi-core CPUs? They say it might be possible to do such a thing with DX12, but so far I haven't seen any proof of it. I want to see a real-world implementation before we discuss this further.

Now of course, a hardware maker simply shutting down its GPU when another is used for rendering makes all of this moot: Nvidia didn't approve of the use of their GPU for PhysX computations when the actual rendering was done on an AMD card and wrote a shutdown routine into their drivers, but this is actually the kind of load balancing DX12 (and soon, Vulkan) would allow.

Now, if you understand what exactly the issue with PhysX was, then you will know that Nvidia has no reason to block their cards from working in DX12 multi-GPU, even in mixed combinations.
 


I'm mentioning multi-core CPUs for one simple reason: if you take Bulldozer and Core with HT, for example, you have to balance your threads differently depending on the architecture. Loading a pair of Bulldozer cores with an FP128 task each is a great way to bog them down, while they won't bat an eye at being loaded with integer computations; doing the latter on two logical cores of an HT Core system will bog that one down, though. So usually you'll add routines to your code to detect what kind of core you're on, and then dispatch your threads accordingly. DX12 allows you to do that, not with cores but with render targets, while DX11 and older only let you declare which jobs were independent from the others, and the driver had to guess the best way to dispatch them.
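As a rough illustration of that "detect, then dispatch" pattern on the CPU side, here is a C++ sketch: it counts hardware threads and checks the CPUID HTT flag, then picks a worker count for FP-heavy jobs. The heuristic (halving the thread count when SMT or a shared-FPU module design is suspected) is hypothetical and simplified; a real engine would query the full core/module topology (e.g. GetLogicalProcessorInformationEx on Windows).

[code]
// Sketch: decide how many FP-heavy worker threads to spawn based on a crude
// topology guess. Hypothetical heuristic for illustration only.
#include <thread>
#include <cstdio>
#ifdef _MSC_VER
#include <intrin.h>
#endif

static bool smt_capable() {
#ifdef _MSC_VER
    int regs[4] = {0};
    __cpuid(regs, 1);              // CPUID leaf 1
    return (regs[3] & (1 << 28)) != 0;  // EDX bit 28: HTT flag
#else
    return true;                   // placeholder assumption on other compilers
#endif
}

int main() {
    unsigned logical = std::thread::hardware_concurrency();

    // If logical cores share FP resources (SMT on Intel, shared FPU per
    // Bulldozer module), schedule roughly one FP-heavy thread per pair;
    // otherwise use every logical core.
    unsigned fpWorkers = smt_capable() ? logical / 2 : logical;
    if (fpWorkers == 0) fpWorkers = 1;

    std::printf("logical cores: %u, FP-heavy workers: %u\n", logical, fpWorkers);
    return 0;
}
[/code]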

As for the "problem" with PhysX, considering that some people hacked Nvidia's drivers to workaround the artificial limitations and managed to run games on AMD hardware with PhysX actually running on a secondary Nvidia GPU, then yes, I could see Nvidia refusing to run their cards alongside other GPUs. Considering they're even now locking down SLI on anything but their most expensive hardware and how little work they're doing on supporting DX12 properly, then yes, I think they don't want customers to use anything but their own hardware.
 


Part of the problem with PhysX was licensing. If AMD paid to license PhysX, they would have direct access to optimize PhysX on their systems; not just hybrid systems, but running GPU PhysX natively on AMD GPUs would also be possible. There was a third-party effort called "Radeon PhysX" in the past to make this happen (Nvidia was even willing to help), but ultimately AMD had no desire to do it because PhysX is not their tech.

Second, PhysX is Nvidia's tech. They have to make sure it works without problems, whether in an Nvidia-only system or in a hybrid system. But the issues in a hybrid system are definitely more complicated, because Nvidia's driver might end up conflicting with AMD's drivers. When that happens, they have to figure it out themselves; AMD is not going to help, and most definitely not going to change the way they handle their drivers because of Nvidia tech they have no desire to support. Imagine if there were an issue with a hybrid system that Nvidia refused to fix; some end users might take the issue to court because Nvidia did not support tech they sold to customers. So before it became that complicated for them (both on the software side and in the legal issues that could stem from it), they blocked hybrid PhysX systems. Sure, you can use hacked drivers to keep using a hybrid system, but that way, if there is a conflict, Nvidia is not responsible for your issue, because legally Nvidia only supports PhysX on their own hardware setups.

Nvidia blocks hybrid systems because PhysX is their tech, not because they simply refuse to see their GPUs used together with other GPUs. DX12 multi-GPU is not Nvidia tech, so they have no reason to block it.
 


What did Nvidia disallowing the use of their card for PhysX computations have to do with the license? I sure hope Nvidia hardware is licensed to run Nvidia software! I am talking about off-screen computations, which are separate from rendering, and which were allowed regardless of the GPU on standalone PhysX cards before Nvidia bought the company. Nvidia artificially restricts the use of off-screen PhysX computations when it's not an Nvidia GPU doing the actual rendering, even though both operations are independent! Thus driver compatibility has nothing to do with it.

And as I was saying, what some people wanted was to get a big, fat AMD card for rendering and a small, cheap Nvidia card for PhysX; Nvidia disallowed this in their drivers, but some managed to hack the drivers and it worked well enough: see here.

Of course neither Nvidia nor AMD would support their competitors' hardware! But to go as far as actively disabling features is bad. And what about Intel? Considering that when you buy an Intel CPU, half the silicon is taken up by a GPU, DX12 is finally a way for game makers to make use of it (whether it be through DirectCompute for physics computations like TressFX or DX12 for managing, say, the game's HUD), who would say "no" to 5-10% more computing resources by using hardware you actually own?
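To sketch how an engine could actually tap that integrated GPU under D3D12: create a second device on the iGPU's adapter and give it its own compute queue for background work (physics, HUD composition, and so on). This assumes, purely for illustration, that adapter index 1 is the integrated GPU; a real engine would pick the adapter by vendor and VRAM as in the earlier snippet.

[code]
// Sketch: create a separate D3D12 device and compute queue on a secondary
// adapter (assumed here to be the integrated GPU) so it can chew on
// DirectCompute-style work while the discrete card renders.
#include <dxgi1_4.h>
#include <d3d12.h>
#include <wrl/client.h>
#pragma comment(lib, "dxgi.lib")
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return 1;

    // Assumption for brevity: adapter index 1 is the integrated GPU.
    ComPtr<IDXGIAdapter1> igpu;
    if (factory->EnumAdapters1(1, &igpu) == DXGI_ERROR_NOT_FOUND)
        return 1;  // only one GPU present, nothing to offload to

    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(igpu.Get(), D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
        return 1;

    // A compute-only queue: command lists recorded for physics or HUD work
    // would be submitted here, independently of the main render queue on
    // the discrete GPU.
    D3D12_COMMAND_QUEUE_DESC qd = {};
    qd.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> computeQueue;
    if (FAILED(device->CreateCommandQueue(&qd, IID_PPV_ARGS(&computeQueue))))
        return 1;

    // ... record and execute compute command lists on computeQueue ...
    return 0;
}
[/code]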

 
Back in 2009 Nvidia said openly that they had no problem with AMD licensing their PhysX tech. But back then AMD said it wasn't needed, since they were working with Bullet to offer a vendor-neutral solution from the get-go. Only after AMD's response did Nvidia directly block hybrid systems from working. True, in a hybrid system Nvidia software still runs on Nvidia hardware, but if AMD had a license for PhysX they could directly address any issue arising from a hybrid system. Try to remember what AMD's problem with GameWorks was: they said they couldn't optimize for the game because GameWorks is a black box; AMD doesn't have access to the GameWorks source code because they did not pay the licensing fee for it. In fact, if AMD had licensed PhysX from Nvidia, end users could stop using hybrid systems altogether, because with the license AMD could legally implement GPU PhysX to run natively on their GPUs.

Yes, hybrid PhysX systems work well enough with hacked drivers, but can you guarantee they will work 100% without problems all the time? That's the problem here. And that's not counting the possibility that AMD makes their drivers conflict with Nvidia's PhysX driver on purpose. AMD can always say "we optimize our drivers in the best way for our hardware; if our drivers conflict with Nvidia PhysX, it's not our issue, it's theirs for putting that tech in the game." Then users would force Nvidia to make it work, even knowing it isn't compatible with an AMD hardware combination, because PhysX is Nvidia tech that Nvidia sells to consumers, and they have to be responsible for it.

From the consumer's point of view, Nvidia blocking hybrid systems is indeed bad. But Nvidia, for their part, is also trying to avoid more complicated issues arising on their end, be it from consumers or from AMD. To my knowledge, Nvidia never went after the people who hacked their drivers to ask them to stop. That indirectly tells people that if they still want to use a hybrid system, they use it at their own risk, since Nvidia is not going to address their issues.

With DX12, it's developer freedom: they are the ones deciding how to implement things in their games. Look at Ashes itself. They made it possible for a 980 Ti and a Fury X to work together; they also made it possible for two 1060s to work together despite Nvidia not supporting SLI on the 1060. Did Nvidia patch in new drivers to stop that from working?
 


Who can guarantee that a graphics card can work with all motherboards? Nobody does. Why would it be any different for what amounts to a physics coprocessor? Because that's exactly what PhysX is: a dedicated circuit for physics computations, with the results rerouted through the CPU to the display controller afterwards, never mind what branding the coprocessor bears, AGEIA or NVIDIA. So why should it matter what graphics card I use for display?

The fact that Nvidia finally relaxed the restrictions in their drivers might just be an indication of how much developers are looking for more open solutions: physics in DirectCompute seems to work quite well on whatever GPU one uses.
 
Who can guarantee that a graphics card can work with all motherboards?

There are thousands of combinations, but at the very least GPU makers and motherboard makers "hope" that by using a common standard such as the PCI-E interface they can make all of them work together without too much issue. Remember when AMD cards using the PCI-E 2.1 standard had issues with PCI-E 1.1 slots? Back then, as long as a motherboard was still supported by its maker, they released updated BIOSes to solve the issue.

The fact that Nvidia finally relaxed the restrictions in their drivers might just be an indication of how much developers are looking for more open solutions: physics in DirectCompute seems to work quite well on whatever GPU one uses.

Well, when it comes to GPU-accelerated physics, it doesn't matter whether there are more open solutions or not, because ultimately game developers have no interest in utilizing it. We have had vendor-neutral solutions for GPU-accelerated physics for almost 7 years now, and I have yet to see them used in a game.
 

There are at least two: Rise of the Tomb Raider and Deus Ex: Mankind Divided. Both use TressFX 3, which is MIT-licensed and works through DirectCompute.
 