Ashes Of The Singularity Beta: Async Compute, Multi-Adapter & Power


cptnjarhead

As I said before, in real-world gaming DX12 means squat. Crysis 3, which was a DX11 game, still looks visually superior to any upcoming DX12 titles.
Your basis for DX12 performance is total conjecture; also, visual comparison between games is more a personal preference, like artwork, than tech. Once devs start to roll out DX12/Vulkan games, we can have an honest debate on the pros and cons of Windows 10 with DX12 vs. Win 7 with DX11.
 

cptnjarhead

Now if you want speculation, I would speculate that MS will push Win 10 and DX12 hard before the year is over. If they can get DX12 to take off, which seems likely, then MS could create a more hardware-agnostic game platform: instead of depending on dedicated hardware, they could expand to tablets, laptops and other mobile devices. Having a gaming platform that does not reside in a static hardware cycle is their goal. Sony is bound by the PS4's life cycle; MS with DX12 on multiple platforms gets the edge.
 

FormatC

It makes nearly no difference whether you have 6 or 8 real cores. But it is better to have 4 cores with SMT than, for example, an i5 without it. The benchmark only scales up to 8 cores.
 

ohim

As I said before, in real-world gaming DX12 means squat. Crysis 3, which was a DX11 game, still looks visually superior to any upcoming DX12 titles.
The same was said about DX 9.0c vs. DX10, and then about DX11 games... dude, it's called evolution. Whether you like it or not, games will come in DX12 flavor, and hopefully Win 7 will die.
 

alextheblue

As with the Star Swarm demo, this is a best case scenario for AMD. The parallels between the hype generated by Oxide for Async Compute and Mantle are hard to ignore. In the end, the results will be the same if only a few games adopt the technology.
Async compute/shading isn't a one-off feature, and if Pascal excels at it, everyone who decried it will be singing its praises. Did you look at the third and fourth pics in the article on the first page? Frame time down, frame rate up. That's async compute and shading in action on architectures capable of pulling it off. Early-gen DX12 render paths won't all use it heavily, so the impact will vary by title.
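For anyone curious what that looks like on the API side, here's a minimal sketch (my own illustration of common D3D12 usage, not code from Ashes): async compute in DX12 is expressed by creating a second command queue of type COMPUTE next to the usual DIRECT (graphics) queue. Whether the two streams actually overlap is up to the GPU's hardware scheduler, which is exactly where GCN's ACEs earn their keep.

#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Sketch: async compute in D3D12 is just a second queue of a different type.
void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& graphicsQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;   // graphics + compute + copy
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&graphicsQueue));

    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // compute + copy only
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));

    // Work submitted to computeQueue MAY run concurrently with rendering on
    // graphicsQueue; the app synchronizes the two with ID3D12Fence objects.
    // Hardware that can truly interleave the queues is what produces the
    // frame-time drop shown in the charts.
}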

Async compute does work under DX11 and it was removed by Microsoft in order to have it later on DX12 so people can buy into BS called Window 10.
DX11 never had async. Not even close. Async compute wasn't even a twinkle in MS' eye (or anyone else's) when DX11 came out. Implementing it in a high-level API is also of questionable value. You're probably thinking of multi-threaded rendering, which DX11 DOES have. But again, it's a high-level API, and the multi-threading isn't nearly as capable as Vulkan's or DX12's.

Look at the first couple of pictures on page 3. The first is multi-threading in DX11; the second is multi-threading in DX12. DX12 is MUCH better at dividing work among cores, and it supports more threads too.
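To make the difference concrete, here's a rough sketch (again my own illustration, not from the article) of DX11's flavor of multi-threading: worker threads record into deferred contexts, but every command list still has to be replayed through the single immediate context, which is a big part of why it scales worse across cores than DX12's per-thread command lists submitted to a queue.

#include <d3d11.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// DX11 multi-threaded rendering: a worker thread records into a deferred context.
void RecordOnWorkerThread(ID3D11Device* device, ComPtr<ID3D11CommandList>& outList)
{
    ComPtr<ID3D11DeviceContext> deferred;
    device->CreateDeferredContext(0, &deferred);

    // ... issue state changes and draw calls on 'deferred' here ...

    deferred->FinishCommandList(FALSE, &outList);
}

// The main thread replays every recorded list through the one immediate
// context, so final submission stays serialized -- unlike D3D12, where each
// thread records its own ID3D12GraphicsCommandList and batches go straight
// to a command queue.
void SubmitOnMainThread(ID3D11DeviceContext* immediate, ID3D11CommandList* list)
{
    immediate->ExecuteCommandList(list, FALSE);
}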

Both Nvidia and AMD benefit from DX12, and Nvidia will benefit from async compute and shading with Pascal and beyond. So buckle up, it's not a fad.
 

f-14

The 3dfx Voodoo 1k-5k series is probably a lot closer to the solution than Nvidia is, with the exception of Nvidia's own dual-GPU boards.
Nvidia rarely does dual-GPU except in its x90 cards. DX12 seems perfectly suited to a dual-/tri-/quad-/penta-/hexa-/octo-core GPU, something Nvidia should have had in the bag since they bought 3dfx in, what, 2000?
I was just looking at my two Voodoo 5000s yesterday, trying to recall which AMD FCPGA XP-series CPU the board had in it when I put it back into its box with the heatsink still attached, after I upgraded to a Core 2 Quad. I wanted to introduce my nephew to Total Annihilation, the precursor to Ashes of the Singularity.
 


No one has said async is bad or that it is a one-off. Most are just saying that it needs more support than a single benchmark.

Considering that the majority of the PC market is nVidia-based, and that nVidia works more with developers than AMD does, they have a heavy sway on the market and on where/what technologies will be adopted.

It really should be both, but it never is. Normally the market leader has more pull, and right now AMD does not have enough to sway the market with anything unless they plan on pouring a lot of time into working with developers.

I just think async as it stands now is pointless, since by the time there are enough games using it to really matter, we will mostly be on Pascal or Polaris.

We also have to take into account that some very heavy hitters are not even backing DX12; VALVe, for example, is backing Vulkan, so there is that to watch out for.

I personally don't think taking any one side is beneficial to anyone. If only one side benefits then we all lose.
 

FormatC

I remember the discussions about tessellation a few years ago: nobody will need it, it kills performance, the game engines will ignore it... Metro 2033 was a good example of such pro-and-con discussions. And today? Tessellation still exists and is used very often (at varying cost).

It is the same now with async compute and async shading. Let's meet again in one or two years and discuss it. :)

Also, it is funny how Tom's runs an article about something AMD can do better and everyone loses their minds.

This is the typical fanboy reaction: some people will jump over every stick that is held out to them.

As I wrote in the review: it is only one facet of DirectX 12, and more a feasibility study than a game. But it also shows the capabilities of current-gen GPUs and a possible direction of evolution. :)
 

AndrewJacksonZA

Thanks for the great, in-depth article!

I would just like some clarity on a statement on page 1 please:
"To keep things fair, we're testing all graphics cards with the setting that works best for them."
Do you mean that you turned async shading off when running the benchmarks for Nvidia and left it on when running them for AMD?
 

FormatC



Exactly. All DX12, but async compute on (AMD) and off (NV). It makes only a small difference for NV, but I'm fair. :D

 

That likely/simply means that the cards are somehow still being bottlenecked at 1080p.


Ashes uses SSE instructions, and FX CPUs aren't really that great at them. So an i5 would still probably beat an FX CPU in this game, maybe even an i3. However, take a look at The Division: an FX-8 CPU is about on par with an i5, and it's quite a CPU-heavy game... So I do agree that the FX CPUs get a 'second life,' pretty much just like AMD's GCN cards.
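For context on the SSE point, this is the kind of packed-float loop a big unit simulation leans on (a hypothetical illustration, not actual Ashes code). Bulldozer-derived FX chips share one floating-point unit between the two cores of a module, so they tend to fall behind Intel per thread on exactly this sort of math.

#include <immintrin.h>

// Hypothetical example: integrate unit positions four floats at a time
// with SSE packed math (pos += vel * dt).
void IntegratePositions(float* pos, const float* vel, float dt, int count)
{
    const __m128 vdt = _mm_set1_ps(dt);
    int i = 0;
    for (; i + 4 <= count; i += 4)
    {
        __m128 p = _mm_loadu_ps(pos + i);
        __m128 v = _mm_loadu_ps(vel + i);
        p = _mm_add_ps(p, _mm_mul_ps(v, vdt));
        _mm_storeu_ps(pos + i, p);
    }
    for (; i < count; ++i)   // scalar tail for leftover elements
        pos[i] += vel[i] * dt;
}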
 


Consoles, yes, but not the PC gaming market. That is also important. If you look, there are way more TWIMTBP titles than Gaming Evolved ones, because nVidia focuses on just GPUs and developers. Developers would be very stupid to ignore the majority of PC hardware just because of consoles, and again, optimizations for consoles have no bearing on PC whatsoever.

I have seen the Doom benchmarks from the alpha gameplay. There is something very wrong with them; it is obviously not optimized yet, since there is very little difference between most GPUs and a 980 Ti performs so much lower than it should. Then again, it is an OpenGL engine and not swayed by DX12 in any way. For Doom we really need to wait for an official release and official drivers before we see its actual performance.
 


1. I have not seen anything stating that the next consoles are AMD-based, not that it even matters, since consoles != PCs.

2. Again, I never said it wasn't going to be used, but game devs will be looking at more than just "OMG free performance." For example, did you not see the reviews where the R9 390X jumps a whopping 100W (almost 50%) in power draw? A dev wouldn't want to be responsible for a system crashing due to overheating or a PSU not being able to power a GPU. It will be a usable feature, but until Polaris and Pascal I don't see it being a major one.

3. Of course it is. It is the easiest way to gain performance, since there is so much hardware sitting there doing nothing. My point was only that nVidia has a huge market share GPU-wise and works with more developers, big and small, than AMD. That is going to define a lot more of PC gaming than the consoles will.

And remember, PC gaming tends to influence consoles, not the other way around. VR is becoming huge on PC and now consoles are considering it, with the PS4 possibly getting an add-on GPU to handle it. PCs are the market leaders. Always have been, always will be. When the console gaming market crashed in the '80s, the PC just kept on going with new games.
 
Maybe I've seen more than what most have seen. Not because I have more information available, but because I try to understand the why, rather than accepting the status quo.

The PS4 is not using DX12, but it is using a low-level API. That's what Mantle was designed to do, what Vulkan will do, and what DX12 is doing. Async will not benefit the Xbox a lot, specifically because of the CPU side. The thing about async is that it requires a powerful enough CPU to feed it instructions. It's quite clear in these benchmarks that at 1080p the CPU used is still bottlenecking the 380X, and it's likely something similar will be happening with the Xbox. The GPU, however, does have more than enough headroom. Same for the PS4.

Consoles no longer really have significant differences at the API level. Well, they do have some differences since they are different APIs, but Mantle brought these capabilities to the PC; that was its sole focus, and it was advertised as such. Vulkan and DX12 have followed up on this.
Even better, if you look at VR, AMD will be supporting direct-to-display, which means it works directly between a VR application and your GPU, sending the frames straight to your VR headset. It bypasses the OS completely along this whole path. That is practically console-level access.

AMD is doing things differently. RTG is a testament to that, and this year alone there are multiple games that will be supporting async. I hope it doesn't end up like tessellation: an ATi tech that everyone ignored until nVidia decided to implement it in extreme excess.

Ashes is still a full-fledged game. It is different from other games in that it's an RTS, so it will require more CPU power than most. The picture will only get worse for nVidia the less the CPU is the bottleneck. But again, don't take my word for it; wait for more benchmarks if you need to.
 


Mantle didn't bring anything new to PC. All it did was reopen a door that was shut when APIs came along. PC games used to be developed "closer to the metal," but back then a game crashing could also take down the entire PC and corrupt the OS. Now we have the API layer that prevents that, for the most part, so we don't run the risk of crashing the OS and corrupting key files.

Glide was technically also the first API that gave us "closer to the metal".

I guess we will have to wait and see. I highly doubt nVidia is going to be that badly off. One great thing about the GPU market is that there is almost never a 100% clear winner. The two push back and forth a lot, which makes it vastly better than the CPU market, where AMD has been more stagnant than they have been in GPUs.



Are you using rumors as proof? Because that never works, especially with WCCFTech, who throw rumors around like they are facts. I didn't even consider Nintendo's next console because, short of their Nintendo-exclusive titles, they have not been competitive with the others for a while. They live in their own world, and so far Nintendo has said nothing concerning the hardware of the NX.

When I see official specs, not rumors or words from some AMD exec but official specs from Microsoft/Sony/Nintendo, then I will believe it. I would not be surprised, especially since it would allow them to easily be backwards compatible, but rumors are not facts.

And no, I am not putting my head in the sand. PCs are way more influential on consoles than the other way around.

And again, DX12 is nothing new or spectacular. Even before Mantle we had Glide, and before Glide, games wrote directly to the drivers and the hardware. There are new features in DX12, but not because of the consoles.

Hell, consoles now use tessellation, which was a PC exclusive until the current generation of consoles.
 


Vulkan, yes, that is pretty much Mantle now. DX12 I am not so sure about, since there are other reports saying DX12 was in development even before Mantle was announced.

As for Mantle, if AMD wants to claim consoles as their inspiration, that is great. I still stand by what I said, because PCs had close-to-the-metal APIs, and even pre-API direct hardware access, so it is not something new to the PC.

Still, PCs influence consoles more than consoles influence PCs. I didn't say consoles do not; I said that PCs do it more, and that I do not think the current gen of consoles will be able to influence all games, since there is a massive disparity in hardware between PCs and the consoles. Developers would be stupid to build the PC version around the consoles rather than develop for consoles and PC.

Look at the big games: a lot of the ones that are on the new consoles only, plus PC, use Gameworks and not Gaming Evolved technology. If the consoles were that influential, I would expect fewer Gameworks titles.
 

ohim

Nvidia users are kinda crazy: these async shaders are free to use and implement, and Nvidia chose not to! Why Nvidia guys are crying about it and looking for excuses is beyond me; it's not like AMD blocked Nvidia from implementing the tech until now. On the other hand we have Nvidia with closed tech like PhysX, G-Sync (this doesn't really matter) and Gameworks (gimpworks), and they are just fine with such things.

Look at the latest Far Cry Primal benchmarks and see how Nvidia gimps the 780 Ti and how well all the AMD cards do; I hope Nvidia won't bring gimpworks to this title. The trend of better performance on AMD cards, even in DX11, is still there.

http://techfrag.com/2016/03/01/far-cry-primal-benchmarks-show-amd-with-performance-lead-over-nvidia/
http://www.guru3d.com/articles_pages/far_cry_primal_pc_graphics_performance_benchmark_review,10.html
 


You keep missing the point. The point is that DX12 could have had influences other than Mantle, considering that any good company starts work on projects well before anyone hears about them.

I didn't argue where Mantle came from, just that it is nothing new. I also did not argue that Vulkan is not Mantle; it pretty much is, since OpenGL was neglected for so long.

My point with Gameworks was that nVidia has the resources to do it because all they do is GPUs. AMD does not, as they have not had a very good past few years CPU-wise and they need to watch their spending. It is not that Gameworks is what made DX12/Vulkan, but that nVidia holds such a large market share:

http://wccftech.com/gpu-market-share-q3-2015-amd-nvidia/

In Q3 of 2015 nVidia had 81% of the AIB GPU market share. A developer cannot ignore that and should be working to optimize the game for the largest market share to garner the largest sales. Any developer will, as even the small ones are out to make a profit. That is my point.

I am done with this though as it is way too off the topic.



http://www.anandtech.com/bench/product/1441?vs=1595

The 970 and 780 Ti started out pretty close in performance, depending on settings and such. If we factor in that the 970 has more VRAM, a higher stock clock and higher overclocks, along with a more efficient uArch, I'd say the 780 Ti still looks to be in about the same place: next to a 970. A newer uArch will also benefit more from driver enhancements; my old HD 7970 wasn't getting any performance gains from drivers.

I don't really see where nVidia is gimping the game, especially since most big games have poor memory optimization and want more than 3GB of VRAM at higher settings.
 


You are right, I should not have used WCCFTech, but this time they at least linked to a decent source:

http://www.jonpeddie.com/publications/add-in-board-report/

Still nVidia has the majority share. I have been arguing that point since my second post:

The consoles still have their differences at the API level, which could influence game developers, but for PC, game developers will have to look at the market, which right now is heavily nVidia-based.

I said the market is heavily nVidia-based right now for PCs, which it is, and unless AMD can manage a massive win with Polaris over Pascal, that will be a hard number to move.

My only point about async is that it will not become a standard just because people want it to; there has to be a benefit to the masses, not just a small part of the market. Developers have way more to consider than just "OMG, free performance for some GPUs." Is it worth the cost to implement? Because it will cost money: someone will need to spend the time to code the engine to use it, then optimize that path, as well as work with AMD to ensure it is working properly.

And read my first post: I actually said nVidia will probably adopt async. I have never said it won't become a widely available technology. I am just saying that it is not because of the consoles, nor have the current generation of consoles influenced anything. In fact, the ACE hardware was designed into the HD 7900 series of GPUs, which launched in 2011. If anything, AMD has been planning to push async all along and utilized it in the design that won the consoles to allow for a performance bump, since nVidia does not have quite an equivalent hardware-based ACE engine yet.


 