AMD Details Asynchronous Shaders In DirectX 12, Promises Performance Gains

They didn't give a specific GCN version, and it looks like most of the 300 cards are rebrands of the 200 cards (which are listed on AMD's site as DX12 cards).

Hopefully this means all the 200 cards will have this available on Windows 10.
 
Is this something just AMD will feature, or are they just talking about general DX12 features as if they made them happen?
It's a general DX12 feature that AMD can support from the beginning. Mainly it's a way of handling draw calls more efficiently, so any GPU with the proper hardware can use it as long as the graphics drivers have been written to support it. Old fixed-pipeline GPUs are out, but modern GPUs with more flexible graphics pipelines should be fine.
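To put that in concrete terms, here's a rough D3D12 sketch (my own illustration, not something from the article) of the mechanism: the application creates a separate compute queue alongside the usual direct queue, and hardware with free compute engines can run work from both at once, while other hardware may just serialize it.

// Minimal sketch: exposing async compute work to the GPU in D3D12.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<ID3D12Device> device;
    // Default adapter, feature level 11_0 is the minimum D3D12 accepts.
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    // The "normal" queue: accepts graphics, compute and copy work.
    D3D12_COMMAND_QUEUE_DESC directDesc = {};
    directDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> directQueue;
    device->CreateCommandQueue(&directDesc, IID_PPV_ARGS(&directQueue));

    // A second, compute-only queue. On hardware with async compute engines
    // (e.g. GCN's ACEs) work submitted here can overlap with the direct
    // queue; on other hardware the driver may simply run it in sequence.
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

    return 0;
}

Whether the two queues actually run concurrently is up to the hardware and driver, which is exactly the "proper hardware plus driver support" caveat above.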
 
From the way this article is written, it seems plausible that all GCN GPUs will be DX12 compatible. If so, this would also include the HD 7000 series GPUs. This in itself would make me happy if it comes to pass.

What do all of you think about this?
 
I love the change in focus for working smarter, not harder.
We've long needed improved graphics APIs that not only bring new feature sets, but deliver them in more efficient ways. The conspiracy theorist in me wonders which GPU manufacturers are limiting this efficient development. With each new DX iteration, life only gets harder on the consumer trying to "run the newest thing".
Well, they're introducing the stuff into Vulkan too, and Vulkan is a completely open spec, so...
 
From the way this article is written, it seems plausible that all GCN GPUs will be DX12 compatible. If so, this would also include the HD 7000 series GPUs. This in itself would make me happy if it comes to pass.

What do all of you think about this?

I think it was publicly announced a long time ago that all GCN cards would support DirectX 12. As in, since the time DirectX 12 was announced...
www.amd.com/en-us/press-releases/Pages/amd-demonstrates-2014mar20.aspx#
 
AMD developed ACEs as part of their GCN architecture roughly four years ago, as you may remember from the 7970. Mantle allowed more control over this design, and DX12 will feature asynchronous instruction handling [mostly] the same way Mantle and its successor/clone, Vulkan, do.

I never stated anything about ACEs (which aren't all that different from how most other architectures partition their execution); rather, I was calling them out on their practice of renaming industry standards, good programming practices, and other previously defined specifications for marketing reasons. So far nothing you have stated counters the assumption that this is just an implementation of ExecuteIndirect, where you can program the GPU controller itself to execute multiple commands without input from the CPU.
Obviously ExecuteIndirect relies on the presence of ACEs, regardless of the GPU in question, and all AMD has stated here is that asynchronous shaders are entirely possible in DX12 the same way they were in Mantle. Fun fact: it was *their* industry standard until Nvidia eventually adopted their own form of ACEs, whose documentation is so tight and closed-off that I can't even confirm they have any...
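For anyone curious what that mechanism looks like from the API side, here's a rough sketch (purely illustrative; the device, command list and argument buffer are assumed to already exist): you build a command signature once, then ExecuteIndirect lets the GPU pull per-draw arguments straight from a buffer without the CPU issuing each draw.

#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// 'device', 'cmdList' and 'argBuffer' are assumed to exist elsewhere;
// argBuffer holds an array of D3D12_DRAW_ARGUMENTS filled by the GPU or CPU.
void IssueIndirectDraws(ID3D12Device* device,
                        ID3D12GraphicsCommandList* cmdList,
                        ID3D12Resource* argBuffer,
                        UINT drawCount)
{
    // One argument slot per command: a plain (non-indexed) draw.
    D3D12_INDIRECT_ARGUMENT_DESC arg = {};
    arg.Type = D3D12_INDIRECT_ARGUMENT_TYPE_DRAW;

    D3D12_COMMAND_SIGNATURE_DESC sigDesc = {};
    sigDesc.ByteStride = sizeof(D3D12_DRAW_ARGUMENTS);
    sigDesc.NumArgumentDescs = 1;
    sigDesc.pArgumentDescs = &arg;

    ComPtr<ID3D12CommandSignature> signature;
    device->CreateCommandSignature(&sigDesc, nullptr, IID_PPV_ARGS(&signature));

    // The GPU walks the argument buffer itself; no per-draw CPU involvement.
    cmdList->ExecuteIndirect(signature.Get(), drawCount, argBuffer, 0, nullptr, 0);
}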

Besides, Nvidia's the one that's anti-open-standards, so I don't know why you're complaining about one of AMD's own innovations...
 
I love the change in focus for working smarter, not harder.
If only AMD could do that on the CPU side, but unfortunately they haven't had a reasonable CPU in that regard since AMD64. That said, Intel's anti-competitive behavior likely had something to do with that; they lost valuable money, or rather time, since the money was recouped in their settlement with Intel.

It's not all about CPU sales; it's also about platform sales, socket longevity, and the viability of upgrades, all of which were ultimately hurt for a period because of Intel, coupled with a poorly timed acquisition.
 


Most articles only mention the R-series GPUs that were based on the new architecture (like the 260 and the 285) and forget that there were some HD 7000 series GPUs that were GCN. This is why it is a concern for those of us with 7000 series GPUs.
 
I love the change in focus for working smarter, not harder.

Honestly, AMD has always been about that too. Sure, they have a tendency to take the brute-force approach, but their architectures have also always been far more forward-thinking than Nvidia's. How else can you explain the 7970 trading blows with the 780, or the 290X beating the 780 Ti in nearly all of the latest games?
 


Somewhat true. Nvidia has made GPUs that are very well tuned to the architecture that is most common when the GPU is released. But in the longer term AMD has gained, because they make hardware that supports features that are (maybe) coming. Sometimes it works... sometimes not. On the GPU side, AMD has had influence over the new APIs, so it has been able to gain a foothold in the long run. On the CPU side, Intel is in such a dominant position that when AMD tried to look to the future (and gambled on multi-core CPU performance versus single-core performance), they lost. Bulldozer would have been a strong CPU if there had been huge investment in multi-core programming and so on, but when most CPUs (read: Intel) were just fine with a few cores, and an advantage in process node allowed more complex cores in a smaller space, AMD did not have a chance.

On the other hand, Nvidia has had its moments too. It was the first to move to higher-bit color in GPUs a long, long time ago, back in the era of the Voodoo cards, but mostly Nvidia has put more effort into the present situation.
 
Well, clearly the only reason Microsoft is able to achieve DirectX 12 is because Mantle was there first. Mantle was there to do the same thing DX12 was meant to do and get rid of all those inefficiencies of DX11.

 


Fanboy rage? No one in this thread is fanboying besides you. I look forward to your account getting banned for trolling and spamming the thread.
 


The only carry-overs are going to be the 260s, the 285s, and possibly the 290s. The next AMD GPU chips will use the new architecture that the aforementioned GPUs are currently using. The rest are rebranded 7000 series GPUs, so... no, the next gen will not be a rebranding of the 200 series.
 
Sony clearly saw the future moving away from traditional raster-based rendering for GPUs as the hardware evolves towards general compute.

XBOX ONE GPU:

1.18 TF GPU (12 CUs) for games
768 shaders
48 texture units
16 ROPs
2 ACEs / 16 queues

PLAYSTATION 4 GPU:

1.84 TF GPU (18 CUs) for games (+56%)
1152 shaders (+50%)
72 texture units (+50%)
32 ROPs (+100%)
8 ACEs / 64 queues (+300%)

Here's how important ACEs are:

http://www.dualshockers.com/2014/09/03/ps4s-powerful-async-compute-tech-allows-the-tomorrow-childrens-developer-to-save-5-ms-per-frame/?
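Roughly, the technique that article describes boils down to something like this (a hedged sketch of my own, assuming the queues, command lists, and fence have already been created): you submit a compute pass to its own queue and only make the graphics queue wait when it actually needs the results, so the two can overlap in between.

#include <d3d12.h>

// All objects are assumed to have been created elsewhere.
void OverlapComputeWithGraphics(ID3D12CommandQueue* computeQueue,
                                ID3D12CommandQueue* directQueue,
                                ID3D12CommandList* computeWork,
                                ID3D12CommandList* graphicsWork,
                                ID3D12Fence* fence,
                                UINT64 fenceValue)
{
    // 1. Launch the compute pass on the async compute queue and signal a
    //    fence when it finishes.
    computeQueue->ExecuteCommandLists(1, &computeWork);
    computeQueue->Signal(fence, fenceValue);

    // 2. The direct queue carries on with independent graphics work; on
    //    hardware with free compute engines the two can run concurrently.
    directQueue->ExecuteCommandLists(1, &graphicsWork);

    // 3. Work submitted to the direct queue after this Wait will not start
    //    until the compute pass has signaled, so only the dependent part stalls.
    directQueue->Wait(fence, fenceValue);
}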

Only one issue that I can think of... the PS4 does not use DirectX. They may support it, and eventually get there, but currently they would receive a 0% performance increase from DirectX. At least that's what I got from my research.
 