DirectX 12 compatibility?


Speedstang

Well, I've been planning on purchasing a GTX 750 Ti, but I've heard people say that any pre-900-series Nvidia GPU won't work that well with DirectX 12; they'll be slower than under DirectX 11. Is this true?
 
Well, I heard there was a problem with GF forcing AMD to delay their product. But that's kind of the usual fare with GF; they were also late with their 28nm node. And despite GF's underperforming results, AMD still needs to pay them each year, regardless of whether AMD manufactures their products there or not.
 
I don't see a third competitor getting into the PC graphics market unless somebody comes up with a fancy new way of doing things and can upset the current market share. How many investors do you know that want to pursue investments in a shrinking market? I might have a deal for at least one of them.

On the other hand, while I don't see Intel as great competition now, I would consider that at some point in the future, graphics technology will likely hit a sort of plateau where all three players have roughly similar performance. They just need to keep at the long game they've been playing.
 
A test sample of one is still a pretty small sample! :lol: If it were the other way around and Nvidia cards had come out on top, would we even have heard about it? Is this amateur dev being sponsored in any way, or by anyone?

I use both Nvidia and AMD cards for running the same GPU compute program, so I am aware that whilst they can do the same thing and are quite comparable, they do what they do quite differently.
 
Mousemonkey, you have mentioned a small sample size and an amateur developer more than once, so we might as well address those things and try to figure out how you think they matter here.

If you could clarify who you are aiming the amateur developer comment at it might help.

If you are talking about Oxide, their team is made up of non-amateur developers. Read their bios: they have worked on quite a few AAA titles and even have one of the Microsoft engineers from the HLSL team. If you're calling an engineer who helped develop DX APIs an amateur, heaven help the industry.

If, on the other hand, you're referring to the person who wrote the test program to try and confirm whether both GPU architectures were indeed running async compute functions, well, what's the point of calling him an amateur if he accomplished his goal? His skill set really doesn't matter, as the results of his test seem to stand on their own. Does it matter if an amateur baseball player or a pro ball player hits a home run? It could, but if we're just talking about the home run as an end in itself, not really. It's more of a red herring at that point.
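For what it's worth, the logic behind that kind of test is simple enough that credentials hardly matter. Here's a rough sketch of the analysis step (my own illustration, not the actual tool's code): you time a graphics-only pass, a compute-only pass, and then both submitted together on separate queues, and compare the three numbers. If the combined run lands near the longer of the two individual times, the work genuinely overlapped; if it lands near their sum, the driver serialized it.

    #include <algorithm>
    #include <cstdio>
    #include <cstdlib>

    int main(int argc, char **argv)
    {
        if (argc != 4)
        {
            std::fprintf(stderr, "usage: %s <graphics_ms> <compute_ms> <both_ms>\n", argv[0]);
            return 1;
        }
        double g  = std::atof(argv[1]);  // graphics workload alone
        double c  = std::atof(argv[2]);  // compute workload alone
        double gc = std::atof(argv[3]);  // both together, on separate queues

        // Real async compute: combined time ~ max(g, c).
        // Driver-serialized:  combined time ~ g + c.
        double overlap = (g + c - gc) / std::min(g, c);  // 1.0 = full overlap, 0.0 = none
        std::printf("overlap score: %.2f -> %s\n", overlap,
                    overlap > 0.5 ? "looks like genuine async compute"
                                  : "looks serialized");
        return 0;
    }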

Saying there is a sample set of one is inaccurate on your part, if you're referring to NVidia's apparent lack of asynchronous compute; there is a sample set of two pointing toward that conclusion. Two isn't a whole order of magnitude better than one, but neither is it so small that no conclusions can be drawn from it.

The AotS "benchmark" is perfectly fine as a real-world example of how the two different GPU architectures will behave running the Nitrous engine, and also of how different CPU architectures behave, whether it's a sample size of one or not. It's not as though you can play the AotS RTS game on another game engine, and people looking to see how that game behaves under certain conditions will never get accurate indications by benchmarking other titles. Each game engine will have unique performance characteristics under DX12, especially as more of its feature set is tapped into. It's clear that both GPU vendors have focused on differing feature-level support, so we will have to wait and see what software developers choose to implement, and the ramifications of those choices.

I'm not sure of the relevance of your last comment, about running a GPU compute program on both AMD and NVidia cards. Nobody is saying either GPU can't run compute in parallel; they are simply saying only one of those GPUs can run compute in parallel with graphics without a large performance hit. In the end, whether NVidia is far faster at compute may end up mattering very little if you can't use the idle resources on your NVidia GPU while a graphics operation is taking place.
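To be clear about terms: DX12 itself only exposes the opportunity for this. Below is a minimal sketch (my own, Windows-only, link d3d12.lib) of an app creating a graphics queue and a separate compute queue. The API allows this on any DX12 GPU; whether command lists submitted to the two queues actually execute concurrently is entirely down to the GPU and driver, which is exactly what's in dispute here.

    #include <d3d12.h>
    #include <wrl/client.h>
    #include <cstdio>

    using Microsoft::WRL::ComPtr;

    int main()
    {
        // Default adapter, minimum feature level for a DX12 device.
        ComPtr<ID3D12Device> device;
        if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                     IID_PPV_ARGS(&device))))
        {
            std::puts("no DX12-capable adapter found");
            return 1;
        }

        // A DIRECT queue accepts graphics (and all other) work...
        D3D12_COMMAND_QUEUE_DESC desc = {};
        desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
        ComPtr<ID3D12CommandQueue> graphicsQueue;
        device->CreateCommandQueue(&desc, IID_PPV_ARGS(&graphicsQueue));

        // ...while a second, COMPUTE-only queue lets the app submit compute
        // work independently. Whether the hardware truly runs it alongside
        // the graphics queue, or the driver serializes it, is vendor-specific.
        desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
        ComPtr<ID3D12CommandQueue> computeQueue;
        device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));

        std::puts("created DIRECT and COMPUTE queues");
        return 0;
    }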
 


That is what I am referring to, and whilst you may feel it's the be-all and end-all, I don't agree and shall prefer to sit back and wait. You may well turn out to be correct, but then again, you also wouldn't be the first to have got it wrong either.
 
Well, I for one wouldn't sell an NVidia card over this issue, nor would I rush out and buy an AMD card over it either. Neither camp has anything remotely compelling at the moment: AMD is missing HDMI 2.0 and is apparently bottlenecking its massive shader count somewhere; NVidia gimped their compute performance back with the 600 series and has a tendency toward software shenanigans when they get the opportunity with game devs; and both vendors' high-end offerings feel overpriced for their performance. In my own opinion, recommending either company's flagship cards would feel a bit like recommending one of Intel's enthusiast processors.

Something else to think about, though: would a 750 Ti even have any resources left to throw at asynchronous shading, even if it could do it? It's not a powerhouse of a card, and I actually wonder whether the percentage of unused resources scales linearly with the total resources available on a GPU.

I've noticed people wondering why the Fury X isn't clobbering the 390X in the AotS game, which on the surface could point to a severe bottleneck somewhere in the Fury X GPU. On the other hand, maybe there are just so many resources available on the 390X under async compute that the processor is failing to keep even that smaller card fed up to its potential.