Nvidia is attempting to sabotage DX12

Did you read it? Here is a quote from the article, from the SH*T company you are trying to fanboy-bash:

"Still, DX12 CPU overhead is still far far better on Nvidia, and we haven't even tuned it as much as DX11."

"Obviously AMD is plastering this DX12 feature level all over the web to their advantage, please do take the perf slides with a grain of salt."
 


Then they shouldn't ask to have it turned off.
 


They asked to have it turned off on Nvidia cards only. They simply saw the feature not working properly in a beta-level, early-access game, and asked that the developer turn off that one DX12 feature for Nvidia cards only for the time being, until something can be figured out. This is not unusual... it happens all the time for both card makers. If a feature is broken or not working, they ask for it to be disabled for their products to prevent problems.
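
Just to illustrate what that kind of per-vendor switch typically looks like in an engine, here is a made-up sketch: only the standard PCI vendor IDs are real, the settings struct and function are hypothetical placeholders, not anything from the game in question.

```cpp
#include <cstdint>
#include <cstdio>

// Standard PCI vendor IDs reported by the graphics adapter.
constexpr uint32_t kVendorNvidia = 0x10DE;
constexpr uint32_t kVendorAmd    = 0x1002;

// Hypothetical renderer settings; real engines expose something similar.
struct RendererSettings {
    bool useAsyncCompute = true;
};

// Disable the troublesome path for one vendor until a fix or driver lands.
RendererSettings SettingsForAdapter(uint32_t vendorId) {
    RendererSettings s;
    if (vendorId == kVendorNvidia) {
        s.useAsyncCompute = false;   // vendor asked for it to be off for now
    }
    return s;
}

int main() {
    printf("NV async compute:  %d\n", SettingsForAdapter(kVendorNvidia).useAsyncCompute);
    printf("AMD async compute: %d\n", SettingsForAdapter(kVendorAmd).useAsyncCompute);
    return 0;
}
```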

I am not sure why this is creating so much noise. The fact of the matter is, nobody knows whose cards will perform best out of the gate with DX12, and we won't until a full AAA game developed for DX12 is released and can be tested. I am talking about a title developed from the ground up with DX12, not a game with DX12 patched in. That may take a year or so to happen.

How do you explain AMD's poor showing on CPU overhead, which Nvidia is "far far better" at? Should AMD be ashamed? Is it "getting out of hand" for AMD as well? Should we stop supporting AMD as well for failing to deliver on another key DX12 feature?

The way it works, and the way it has always worked, is that these two are always trading blows with each other. Nvidia will have a bunch of powerful features that AMD does not, and vice versa. Really, why this surprises anyone is beyond me.
 


And who's the fanboy?
 
Looks like there is a level of hardware dependency here. AMD's Graphics Core Next technology seems to have its pipelines and hardware infrastructure in line to perform very well with this DX12 API feature. I am not sure how Nvidia's would compare.

However, DX12 is an API and is all software driven; the hardware changes that advance any given API are simply designed to work better with certain software features. HBM seems right on cue for this. AMD might have a big win on their hands for having their cards designed to run this feature faster, without needing the driver simulation wizardry that Nvidia might require.

However, Nvidia has its own tricks up its sleeve as well. It's the tit-for-tat level of competition that always has these two trading wins from game to game, and use case to use case. I am all for AMD getting some big performance wins again. Nothing wrong with that, and it would do development across the board some good.
 


I was perhaps quick to call anyone a fanboy. However, there is really no need to be furious at any company for lacking a feature and asking that a broken feature be turned off for their cards to prevent performance issues. I am glad that they are taking it out if it's broken; we don't need features that will break our hardware. Until they can figure it out, AMD will win this little performance segment.
 
That was my impression as well. Nvidia asked for the feature to be turned off for their cards. I haven't seen anything specific about Nvidia asking for the feature to be turned off for the game entirely, so that async compute couldn't be used on GCN either. But as usual, I see people take that as Nvidia asking the dev to gimp the game so their hardware has the advantage.

If anything, we should probably be worried that AMD will stop optimizing their DX11 drivers once a game supports Mantle or DX12. Fine for people with GCN hardware, but what about those with older 5000 and 6000 series cards? A card like the 6970 can still handle games decently, but if AMD doesn't optimize its DX11 drivers for games that have a DX12 option, then what should be playable on those cards becomes unplayable. If this were on Nvidia's side, we would hear people screaming about how they neglect their users.
 
It's not really news. Look at what NVIDIA is doing with gaming and workstation cards: they want to sell you one feature at a time... They do this intentionally. Here's the Titan X, rebadged as the Quadro M6000:

http://www.amazon.com/PNY-NVIDIA-Quadro-M6000-VCQM6000-PB/dp/B00UXHQHJS

I don't see why they wouldn't do this as a company; they are in business to make money.

AMD is probably doing something similar, or NVIDIA is behind. No point buying a newer card for a long time though; a lot of people have invested in the 900 series.
 


Very likely it has something to do with the way AMD has designed the current generation of hardware. The Graphics Core Next architecture probably allows this; Nvidia does not have it, but has something similar. This is gen 1 for DX12 cards. Next year we will start to see cards designed 100% around DX12 technologies.
 


That's not going to stop AMD or their fanbase from trying to get maximum mileage out of this tiny little non-event though! :lol:
 


You mad?
 
There has also been an update to the original article:-
Edit - Some additional info

This program is created by an amateur developer (this is literally his first DX12 program) and there is no consensus in the thread. In fact, a post points out that due to the workload (one large enqueue operation) the GCN benches are actually running "serial" too (which could explain the strange ~40-50 ms overhead on GCN for pure compute). So who knows if v2 of this test is really a good async compute test?

What it does act as, though, is a fill-rate test of multiple simultaneous kernels being processed by the graphics pipeline. And the 980 Ti has double the effective fill rate with graphics+compute compared to the Fury X at 1-31 kernel operations.

Here is an old presentation about CUDA from 2008 that discusses async compute in depth; slide 52 goes more into parallelism: http://www.slideshare.net/angelamm2012/nvidia-cuda-tutorialnondaapr08 And that was ancient Fermi architecture. There are now 32 queues (1+31) in Maxwell. Of particular note is how they mention running multiple kernels simultaneously, which is exactly what this little benchmark tests.

Take advantage of asynchronous kernel launches by overlapping CPU computations with kernel executions

Async compute has been a feature of CUDA/nVidia GPUs since Fermi. https://www.pgroup.com/lit/articles/insider/v2n1a5.htm

NVIDIA GPUs are programmed as a sequence of kernels. Typically, each kernel completes execution before the next kernel begins, with an implicit barrier synchronization between kernels. Kepler has support for multiple, independent kernels to execute simultaneously, but many kernels are large enough to fill the entire machine. As mentioned, the multiprocessors execute in parallel, asynchronously.

That's the very definition of async compute.
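
Just to make that concrete, here is a minimal CUDA sketch (my own toy example, not taken from the slides or from the benchmark being discussed) of independent kernels issued on separate streams so they can overlap with each other and with CPU work:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Trivial kernel that just burns some cycles so the overlap is observable.
__global__ void busyKernel(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float v = data[i];
        for (int k = 0; k < 1000; ++k) v = v * 1.0001f + 0.0001f;
        data[i] = v;
    }
}

int main() {
    const int n = 1 << 20;
    float *a, *b;
    cudaMalloc(&a, n * sizeof(float));
    cudaMalloc(&b, n * sizeof(float));

    // Two independent streams: kernels issued to different streams may run
    // concurrently on hardware that supports it (Fermi and later).
    cudaStream_t s1, s2;
    cudaStreamCreate(&s1);
    cudaStreamCreate(&s2);

    dim3 block(256), grid((n + 255) / 256);
    busyKernel<<<grid, block, 0, s1>>>(a, n);   // returns immediately
    busyKernel<<<grid, block, 0, s2>>>(b, n);   // may overlap with the first

    // The CPU is free to do other work while both kernels execute.
    printf("CPU keeps working while the GPU crunches...\n");

    cudaStreamSynchronize(s1);
    cudaStreamSynchronize(s2);
    cudaStreamDestroy(s1);
    cudaStreamDestroy(s2);
    cudaFree(a);
    cudaFree(b);
    return 0;
}
```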

http://www.guru3d.com/news-story/nvidia-wanted-oxide-dev-dx12-benchmark-to-disable-certain-settings.html
 
This is a silly thread that is ultimately a non-issue. I'm a BA, and we are always striving for efficiency in everything - so with new ideas and implementations we end up finding out that X and Y appear not to like each other. In order to fix one or the other, we give them the gift of missing each other for a while, to develop their personalities a bit on their own, before joining them again to see what happens.

This is all that NV has requested. Switch off that particular character trait and let us go back to the drawing board before we come back to try again.

This is how you get to the best solution: through multiple iterations of trials and tests. I will keep supporting Nvidia, as I have done, for their efficiency and overall performance. I'm not bashing AMD though - I really think Fury X is a step in the right direction - they're just not appealing to my sense of "this electricity is expensive here!" - and that is a big enough reason for me to always choose NV over AMD, since I'm not changing GPUs every 4 months.
 
In the end we have to wait until Nvidia comes up with their implementation of async compute. And you see some people with an agenda already pushing this story like there is no hope for Nvidia. I wonder how those ditching their 900 series (because of this story) will feel when games actually start to take advantage of FL 12_1.

History shows that between Nvidia and AMD, the former is more successful when it comes to getting developers to integrate exclusive tech.
 
Both companies, but NVIDIA more so, are having awful driver issues with the Windows 10 automatic rollout. Drivers are going to be flaky for a while, as any update risks breaking millions of machines. In previous Windows versions early adopters would catch this and the issues would be fixed; now any driver error is grave. I am personally waiting for Vulkan, the successor to OpenGL, to come out before making assumptions about AMD vs NVIDIA overall, though AMD certainly seems to have a big advantage in DirectX 12.
 
I have been using Win 10 for quite some time and so far the Nvidia drivers have been stable for me; no issues so far. Right now, when it comes to discrete GPUs, Nvidia commands more than 80% market share. And if I'm reading it right, the chart from JPR shows that Nvidia probably sells more discrete GPUs than AMD sells discrete GPUs and APUs combined. So that means there are many machines out there using Nvidia GPUs, and it is expected that more issues will be reported, since Nvidia owns the majority of the share compared to AMD.

And I would not say AMD holds the advantage now, since when it comes to DX12, AMD's current GCN clearly lacks the hardware needed for FL 12_1.