Ashes Of The Singularity Beta: Async Compute, Multi-Adapter & Power

Guest
In other words, DX12 is a business gimmick that doesn't translate to squat in real game scenarios, and I am glad I stayed on Windows 7... running Crossfire R9 390X.
 
An AMD-sponsored title that shows off the one and only part of DirectX 12 where AMD cards have an advantage. The key statement is: "But where are the games that take advantage of the technology?" Without that, Async Compute will quickly start to take the same road taken by Mantle. Remember that great technology?
 

FormatC
An AMD-sponsored title
Really? Sponsoring and knowledge sharing are two different pairs of shoes. Nvidia was invited too. ;)

Async Compute will quickly start to take the same road taken by Mantle
Sure? The design of most current titles and engines was started long before Microsoft started on DirectX 12. You can already find early DirectX 12 render paths in a lot of common engines, and I'm sure PhysX will die faster than DirectX 12. Mantle was the wake-up call for MS, nothing more. And: it's async compute AND shading :)
 

Geez, PhysX has been around for so long now, and usually only the fanciest of games try to make use of it. It seems pretty well adopted, but it's just that not all games really need to add an extra layer of physics processing "just for the lulz."
 
Thanks for the effort, THG! Lotsa work in here.

What jumps out at me is how the GCN Async Compute frame output for the R9 380X/390X barely moves from 1080p to 1440p, despite pushing roughly 78% more pixels. That's sumthin' right there.

It will be interesting to see how Pascal responds, and how Polaris might *up* AMD's GPU compute.

Neat stuff on the CPU, too. It would be interesting to see how the jump from an i5 to a Hyper-Threaded i7 plays out, and how the FX 8-cores (and 6-cores) handle the increased emphasis on parallelization.

You guys don't have anything better to do .... right? :)

 
Guest
For someone who runs Crossfire R9 390X (three cards), DX12 makes no difference in terms of performance. For all the BS Windows 10 brings, it's not worth *downgrading* to, considering that a lot of games under Windows 10 are simply broken or run like garbage where there was no issue under Windows 7.
 

ohim
An AMD-sponsored title that shows off the one and only part of DirectX 12 where AMD cards have an advantage. The key statement is: "But where are the games that take advantage of the technology?" Without that, Async Compute will quickly start to take the same road taken by Mantle. Remember that great technology?
Instead of making random assumptions about the future of DX12 and async shaders, you should first be mad at Nvidia for stating they have fully DX12-capable cards when that's not the case; the fact that Nvidia is trying hard to fix these issues through software tells a lot.

PS: it's so funny to see the 980 Ti being beaten by the 390X :)
 

cptnjarhead
For someone who runs Crossfire R9 390X (three cards), DX12 makes no difference in terms of performance. For all the BS Windows 10 brings, it's not worth *downgrading* to, considering that a lot of games under Windows 10 are simply broken or run like garbage where there was no issue under Windows 7.

There are no DX12 games out for review yet, so why would you expect better performance in Win10 DX12 from games made for Windows 7 and DX11, especially in tri-Fire? DX12 has significant advantages over DX11, so you should wait until DX12 games actually come out before making assumptions about performance, or about the validity of DX12's ability to increase it.
My games (FO4, GTAV and others) run better in Win10 and I have had zero problems. I think your issue is more driver-related, which is on AMD's side, not MS's operating system.
I'm on the Red team, by the way.
 

ohim
Win 10 and an AMD card, and no issues here.
 

FormatC
I've migrated all 8 machines here to Windows 10 (the Windows 7 and 8.1 images are saved for my museum) and have had no problems for months. With a good hosts file, none of my PCs can call home to MS, a must-have in any case. Windows 10 runs stable, mostly faster on my six- and eight-cores, and I also have no issues with properly programmed old applications from the '90s (32-bit). The only issue is the changed permissions on a lot of registry keys, but these can be fixed manually (with TrustedInstaller rights :p )
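
For anyone wondering what such a hosts file looks like in practice, here is a minimal sketch. The entries below are just illustrative examples of hostnames that show up on public telemetry blocklists; a real blocklist is much longer:

# Map known MS telemetry hosts to the unroutable address 0.0.0.0,
# so connection attempts die before they ever leave the machine.
0.0.0.0 vortex.data.microsoft.com
0.0.0.0 vortex-win.data.microsoft.com
0.0.0.0 telemetry.microsoft.com
0.0.0.0 watson.telemetry.microsoft.com
0.0.0.0 settings-win.data.microsoft.com

On Windows the file lives at %SystemRoot%\System32\drivers\etc\hosts and needs admin rights to edit.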
 

Math Geek
What I found most interesting is the CPU bottleneck on even the high-end CPU used in the test system. I REALLY want to see this same test done on a 4-core i5 to see what the loss of 4 cores does to the numbers; that is likely what many users have at home, not the super setup used here.

The other thought I had was what this would look like on an FX-8*** system. We know that if you tax the FX CPU on all cores it will outperform an i5 purely due to its extra resources, so I wonder how bad the bottleneck will be on an i5 versus an FX chip, and I speculate that the FX might actually get a second life in this scenario. It might pull even with, or even ahead of, an i5 in this benchmark.

In any case, I'd love to see the numbers run again with these CPUs just to see what happens.
 

Kahless01
So you have to turn async on through the INI? I downloaded this Saturday, and running their in-game benchmark says I'm GPU-bound by a lot, about 70% on average. I'm running a 380X and an i7-920. It plays like Total Annihilation; I like it. If I can get a little better performance by turning async on, I'll take it. I do get slowdown when I use the bombers and the dreadnought launching missiles over the horizon.
 

FormatC
You must start it in DX12 mode; async is then always on (per default). The INI entry has no effect in DX11 mode. With my private rig I need two high-end cards to become CPU-bound (5960X @ 4.8 GHz), and I get approx. 10% extra performance with SMT disabled (only 8 threads).
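
For anyone wondering what "async" actually means at the API level, here is a minimal D3D12 sketch (my own illustration, not Oxide's actual code; the function name is made up): the application creates a separate compute queue next to the normal direct queue, and work submitted there can overlap with rendering if the hardware schedules it concurrently.

#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Create a direct (graphics) queue plus a separate compute queue on the
// same device. Whether their work really overlaps is up to the GPU and
// driver - which is exactly the GCN vs. Maxwell debate in this thread.
void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& directQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;   // graphics + compute + copy
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&directQueue));

    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // compute + copy only
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));
}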
 


He is not talking about DX12 dying, but rather that async compute will; unless games adopt it heavily, it might be a one-off feature.

That said, I would not be surprised if nVidia adopts it in Pascal, as it is a feature that might also benefit workstation rendering.

Hard to say what will happen in the long run, but I think Polaris and Pascal will matter more for DX12 than Fury or the 900 series.
 
That was exactly my point, jimmy.

This game uses the exact same engine developed by Oxide for the Star Swarm demo. Star Swarm was developed as the first benchmark to showcase AMD's Mantle technology.

Now the same development team, with the same game engine, has released the first demo to take advantage of AMD's Asynchronous Compute capabilities. As with the Star Swarm demo, this is a best-case scenario for AMD. The parallels between the hype Oxide generated for Async Compute and for Mantle are hard to ignore. In the end, the results will be the same if only a few games adopt the technology.
 
A few statements...

The idea that the Fury line is more power-hungry than the 980 series has been disproved in this TH benchmark.

All mid-range and higher AMD cards have been running at between ~50% and ~75% of their true capabilities due to front-end limitations under DX11. DX12 can bypass these, showing the true compute power of the cards.

nVidia CANNOT compensate for async compute with a software update. Their hardware is simply incapable; the pre-emption it has to fall back on holds it back. Do not get your hopes up.

Pascal will not likely solve this issue. We won't be seeing a properly working async implementation from nVidia until Volta (Pascal's successor).

Async compute is already used regularly on consoles. DX12 will make use of it because it lets developers do more with the same hardware. Multiple titles will be released this year; the next one is Hitman. Wait for the DX12 benchmarks in March... Don't be surprised if the R9 390 (non-X) reaches 980 Ti levels.

The performance gain for AMD here is representative of all DX12 titles that use async compute. In many cases, it will be even better.

Async compute cannot work under DX11, which is why it appeared that nVidia cards were the better choice. For the long term, GCN is and always has been the better choice. Sad to say, but Maxwell 2 cards from 2015 are already more outdated than GCN 1.1 cards from 2013.

Final statement: do not believe me. You never do anyway, so there is no reason to change now. Instead, I ask you to wait and see; you will find out for yourself soon enough.
 


It is hard to trust any one person on a forum versus what we have seen.

One thing you forgot to remember: the PS4 is not using DX12. It uses Sony's own low-level APIs (GNM/GNMX) on a FreeBSD-based OS, not OpenGL on a Linux kernel. The XB1 will use DX12, but from everything I have read it won't benefit nearly as much as the PC will, since the XB1's current version of DX already takes better advantage of the hardware that is in there. So even async might not benefit the XB1 much.

Another factor is that DX12 is more than just async. Async is just one aspect, much like tessellation was just one aspect of DX11; it is not the end-all, be-all. For all we know, this specific benchmark was optimized solely for async work and not for other aspects, which could make a massive difference.

The consoles also still have their differences at the API level, which could influence game developers; and on the PC, developers will have to look at the market, which right now is heavily nVidia-based.

If AMD really wants async to take off, then they need to do what nVidia does, not what they normally do. They normally partner with one company to showcase a tech instead of partnering with multiple companies. Mantle was mostly shown off in BF4; the only other major title I remember it in was Thief, and it only really benefited systems with super-high-end GPUs and low-end CPUs.

I still say this is in no way a 100% picture. We need games that are developed for the purpose of entertainment, not for the purpose of benchmarking.
 
Guest
Async compute does work under DX11; it was removed by Microsoft in order to have it later in DX12 so people would buy into the BS called Windows 10.
 
Guest
As I said before, in real-world gaming DX12 means squat. Crysis 3, which was a DX11 game, still looks visually superior to any upcoming DX12 title.
 

airborn824
On a positive note, both consoles and the future NX can do async, which is why it will end up being widely used quite soon. Nvidia should use it as well; it's a great way to see more of the GPU's power actually being used. Like Mantle, this is a gateway to better things: Mantle is now Vulkan, and a lot of it was carried into DX12. I see some bias in this article and the comments. Let's be real: once again AMD has brought us something that will benefit us all, and let's hope Intel and Nvidia help it succeed.
 
...I get approx. 10% extra performance with SMT disabled...
Interesting. So it's possible an i5 without Hyper-Threading will stand up well in similar scenarios, at least until coders start optimizing for SMT. That might be good for FX Piledrivers, with a big unknown for Zen (if it goes SMT, which seems likely).

This thread seems to be taking an almost ominous tone. Async Shaders/Compute in DX12 is a major step forward in GPU optimization. Massive parallelization is a good thing; give AMD some credit, they've been working on this for 5+ years. The ACEs managing their Compute Units, which share the write-back L2 instead of hitting VRAM, work well.

nVidia will respond with optimizations and design improvements.

It seems a little wacky to bash DX12. Lower driver overhead plus async parallelization means a big jump in draw-call throughput and GPU utilization, which equals better graphics, gaming and compute. How is that a problem?
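
To make the draw-call point concrete: most of DX12's scaling comes from letting every CPU core record command lists in parallel, where DX11 serialized nearly everything through one driver thread. A bare-bones C++ sketch of the pattern (my own illustration, not any game's actual code; the helper name is made up):

#include <d3d12.h>
#include <thread>
#include <vector>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Each worker thread records its own command list; the main thread then
// submits them all in a single ExecuteCommandLists call. This is the part
// DX11 could not parallelize, and it is where the CPU bottleneck shrinks.
void RecordAndSubmit(ID3D12Device* device, ID3D12CommandQueue* queue,
                     int numThreads)
{
    std::vector<ComPtr<ID3D12CommandAllocator>>    allocs(numThreads);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(numThreads);
    std::vector<std::thread> workers;

    for (int i = 0; i < numThreads; ++i) {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocs[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocs[i].Get(), nullptr,
                                  IID_PPV_ARGS(&lists[i]));
        workers.emplace_back([&lists, i] {
            // ... record this thread's share of the frame's draw calls ...
            lists[i]->Close();
        });
    }
    for (auto& w : workers) w.join();

    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
}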

 