synphul :
Yes dx12 is 'supposed' to change how the threads handle the API, not the game's code itself.
No... The API changes how threads are handled, thus the game code must also be adapted to the API in order to maximize its benefits.
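To make that concrete, here's a minimal C++ sketch of the restructuring involved. The types (CommandList, Queue, ImmediateContext) are hypothetical stand-ins I made up for illustration, not the real D3D interfaces; the point is the shape the game code has to take, not the actual API calls:

```cpp
// A minimal sketch, assuming mock types: CommandList, Queue and
// ImmediateContext are hypothetical stand-ins, NOT real D3D interfaces.
#include <cstddef>
#include <cstdio>
#include <mutex>
#include <thread>
#include <vector>

struct CommandList {                  // roughly: a DX12 command list
    std::vector<int> draws;           // recorded draw-call ids
    void Draw(int id) { draws.push_back(id); }
};

struct Queue {                        // roughly: a DX12 command queue
    void Execute(const std::vector<CommandList>& lists) {
        std::size_t total = 0;
        for (const auto& cl : lists) total += cl.draws.size();
        std::printf("submitted %zu draws\n", total);
    }
};

// DX11-style: one immediate context shared by everyone, so every thread
// funnels its draws through a single lock.
struct ImmediateContext {
    std::mutex m;
    CommandList cl;
    void Draw(int id) { std::lock_guard<std::mutex> g(m); cl.Draw(id); }
};

int main() {
    const int kThreads = 4, kDraws = 1000;

    // Old shape: threads exist, but they all serialize on the one context.
    ImmediateContext ctx;
    std::vector<std::thread> old;
    for (int t = 0; t < kThreads; ++t)
        old.emplace_back([&] { for (int d = 0; d < kDraws; ++d) ctx.Draw(d); });
    for (auto& w : old) w.join();

    // DX12 shape: each worker records into its OWN command list in
    // parallel; only the final Execute is serialized. The game code has
    // to be restructured around this pattern to see the benefit.
    std::vector<CommandList> lists(kThreads);
    std::vector<std::thread> workers;
    for (int t = 0; t < kThreads; ++t)
        workers.emplace_back([&lists, t] {
            for (int d = 0; d < kDraws; ++d) lists[t].Draw(t * kDraws + d);
        });
    for (auto& w : workers) w.join();
    Queue{}.Execute(lists);
    return 0;
}
```

Same draws, same hardware; the difference is purely in how the game's code is organized around the API.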
synphul :
Dx11 is an api, it's the interface between the cpu and gpu not the cpu's ability to run game code. To assume that games are somehow limited (aside from the interface with the gpu) to a single core is bogus.
It's not an assumption and it's not bogus. There are so many examples of this on both Intel and AMD CPUs, it's not even funny. It's not only DX either; the same goes for OpenGL vs Vulkan, and so on... That you bring this up at all is very surprising, in a bad way. Here we go, and after this, I hope things are clear...
Vulkan on Intel, watch the CPU load:
https://www.youtube.com/watch?v=llOHf4eeSzc
That video alone kills your argument, because it shows that (see the sketch right after this list for the pattern at work):
- APIs can limit performance to a single core
- It is not an assumption that games using such APIs are limited to a single core
- The API influences the CPU's ability to run game code
- The API influences the distribution of load across CPU cores
- The API influences the way a game is best coded
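Here's that pattern as a minimal C++ sketch, under stated assumptions: DrawCmd and the queue below are hypothetical, not real OpenGL or Direct3D objects. GL/DX11-era engines are pushed into a single render thread because the context is bound to one thread, so no matter how many cores produce work, one consumer core drains every API call:

```cpp
// A minimal sketch, assuming made-up types: DrawCmd and this queue are
// hypothetical, not real OpenGL or Direct3D objects.
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

struct DrawCmd { int mesh; };  // a packaged-up draw call

int main() {
    std::queue<DrawCmd> q;
    std::mutex m;
    std::condition_variable cv;
    bool done = false;

    // Game threads on four cores, all producing draw commands.
    std::vector<std::thread> producers;
    for (int t = 0; t < 4; ++t)
        producers.emplace_back([&, t] {
            for (int i = 0; i < 1000; ++i) {
                std::lock_guard<std::mutex> g(m);
                q.push({t * 1000 + i});
                cv.notify_one();
            }
        });

    // The lone render thread: the only place API calls may happen,
    // because a GL/DX11-style context is bound to one thread. This is
    // the single core that maxes out in the OpenGL half of the video.
    std::thread render([&] {
        int submitted = 0;
        for (;;) {
            std::unique_lock<std::mutex> lk(m);
            cv.wait(lk, [&] { return !q.empty() || done; });
            while (!q.empty()) { q.pop(); ++submitted; }
            if (done) break;
        }
        std::printf("render thread submitted %d draws\n", submitted);
    });

    for (auto& p : producers) p.join();
    { std::lock_guard<std::mutex> g(m); done = true; }
    cv.notify_one();
    render.join();
    return 0;
}
```

Vulkan and DX12 remove that funnel by letting each thread record its own command buffers, which is exactly why the load in the video spreads out across cores.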
synphul :
Multicore, multi processor and multithreaded applications have been around and in use for ages. So yes, in that regard the cpu's ARE already being used to their full potential.
Not for gaming, which was the whole point of this discussion regarding multithreading and DX12. Don't start throwing red herrings at me.
synphul :
Graphics drivers are notorious for performance issues, from both camps. Just read any particular flailing game discussion, amd or nvidia release new drivers and tada, magic fix.
Valid, from the DX11 perspective; with DX12, we still have to see. Intervening in close-to-the-metal programming with drivers isn't really productive. It's like trying to...
synphul :
There is no such thing for cpu's, if they're gimped they're gimped. Apples and oranges.
You're assuming they are gimped, despite the multithreaded performance of FX-8 CPUs matching that of i5 CPUs. Although I get that, with their low IPC, they can be considered as such under DX11.
synphul :
Absolutely amd has had longer to work on their drivers since dx12 is so close to mantle which amd developed in house long before nvidia had the code to play with.
Again, one thing blows what you say out of the water. From nVidia's blog:
"For the past year, NVIDIA has been working closely with the DirectX team to deliver a working design and implementation of DX12 at GDC."
Date of article: March 20th, 2014
http://blogs.nvidia.com/blog/2014/03/20/directx-12/
So nVidia was working with MS since at least Q1 2013. And a bonus one for you:
"We worked closely with Microsoft to develop the new DX12 standard. And that effort has paid off with day-one WHQL certification."
"More blockbuster titles are on the horizon. As are games using the new DirectX 12 API, which Microsoft reports is seeing rapid adoption. We're working on DX12 on many fronts. Our engineers are providing drivers, working with game engine providers, and co-developing with Microsoft. We're also helping game developers deploy their DX12 titles.
GeForce has been the GPU of choice for key DX12 demos from Microsoft since the API was announced. Combining the world's fastest GPU hardware with a high-quality graphics driver made for the perfect showcase for the next-generation features of Windows 10 and DirectX 12."
http://blogs.nvidia.com/blog/2015/05/15/dx-12-game-ready-drivers/
Sounds like you're just assuming that reality works the way you picture it in your head, and that's not how it goes.
synphul :
Exactly as gamerk316 mentioned, the newer api allows more direct handling and interaction with the gpu at the lower level, reducing the need for all the intervening cpu work previously done in dx11 and reducing some of the load from the cpu, which is why the weakest cpu's (bottlenecks) have the most to gain; they need all the help they can get.
You'd be correct if those interventions were done on the same main thread the game itself runs on. I don't know about AMD (probably the same), but under DX11 nVidia's driver offloads its work to otherwise idle cores. So that is not the main reason for the reduced CPU overhead, and even if it's one of them, there are plenty of others.
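For instance, a big share of that reduced overhead is commonly attributed to validation work moving out of the per-draw path entirely. Here's a hedged C++ sketch of that idea; the types (Dx11Context, Pso, Dx12List) are made up for illustration, not the real interfaces. DX11-style drivers must re-check the combined state on every draw, while DX12 bakes and validates a state combination once, at pipeline-state-object creation:

```cpp
// A minimal sketch, assuming hypothetical types; NOT the real D3D API.
#include <cassert>
#include <cstdio>

struct StateDesc { int shader; int blend; int raster; };

// DX11-style: the driver re-checks the combined state on EVERY draw,
// because any piece of it may have changed since the last one.
struct Dx11Context {
    StateDesc s{};
    int validations = 0;
    void SetShader(int v) { s.shader = v; }
    void Draw() {
        ++validations;                       // per-draw validation cost
        assert(s.shader >= 0 && s.blend >= 0 && s.raster >= 0);
    }
};

// DX12-style: the full state combination is validated ONCE, when the
// pipeline state object is built; draws just reference the baked PSO.
struct Pso {
    StateDesc s;
    explicit Pso(StateDesc d) : s(d) {
        assert(s.shader >= 0 && s.blend >= 0 && s.raster >= 0); // once
    }
};

struct Dx12List {
    const Pso* bound = nullptr;
    void SetPipeline(const Pso& p) { bound = &p; }
    void Draw() { /* no check: state was proven valid at bake time */ }
};

int main() {
    Dx11Context ctx;
    for (int i = 0; i < 10000; ++i) { ctx.SetShader(i % 3); ctx.Draw(); }
    std::printf("DX11-style validations: %d\n", ctx.validations); // 10000

    Pso pso({0, 0, 0});                      // validated exactly once
    Dx12List list;
    list.SetPipeline(pso);
    for (int i = 0; i < 10000; ++i) list.Draw();
    std::printf("DX12-style validations: 1\n");
    return 0;
}
```

Ten thousand draws cost ten thousand validations in the old model and one in the new, regardless of which core the driver ran them on.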
synphul :
With dx12 it's more likely people could get more out of their 970 or 980 with an i3 than before dx12. There's improvements for all the other cpu's as well.
Correct, but not (only) due to the reasons you mentioned.
synphul :
Why with the introduction of dx12 would an 8350 become as strong as an i5? Take gaming and dx out of the equation, look at real world benchmarks for other heavily threaded applications and multitasking benches where an fx 8350 can stretch its legs and the locked core i5's continue to surpass them. Again, it is what it is. No magic sauce for fx, they suffer from their architecture.

Put all the paint you want on a house, if its foundation is weak it's weak. New curtains don't fix the plumbing. New plumbing fixes the plumbing. That's why zen is so crucial for them to attempt to get it right this time, they went wrong with bulldozer and just kept on that path, whether because they no longer had the funds to backtrack and correct it or because they were oblivious and stubborn. Either way the outcome is what it is.
i5s continue to surpass FX-8s in multithreaded benchmarks? Really? OK...
SiSoft Sandra Multi-Media benchmark:
AMD FX-8350: 321.69 Int x16 / 200.42 Floating Point / 112 Double x8
Intel i7-4770K: 401.11 Int x16 / 393.98 Floating Point / 226.11 Double x8
Intel i5-2500K: 157.4 Int x16 / 198.9 Floating Point / 113.9 Double x8

Cinebench R11.5:
AMD FX-8350: 6.9
Intel i7-4770K: 8.12
Intel i5-2500K: 5.4

Cinebench R15:
AMD FX-8350: 639
Intel i7-4770K: 791
Intel i5-4690K: 592

POV-Ray:
AMD FX-8350: 1504.3
Intel i7-4770K (OC'd to 4.2 GHz): 1535.4
Intel i5-2500K: 1011.0

In practically every one of those multithreaded tests, the FX-8350 lands ahead of the i5s and within reach of the i7.
synphul :
On a side note, it's funny how a debate over cpu's turns into a deflected debate over gpu's when there's no longer a point to be made for the cpu's.
Except there is a point to be made. The OpenGL vs Vulkan video says it all.
synphul :
Amd fans will go off on a tangent just to somehow praise their beloved oem in some form or fashion.
Sometimes I really do wish AMD would die, so that Intel and nVidia could overwhelm you people with their prices. When someone chooses to support AMD, it doesn't necessarily mean they're a fanboy. Some people have a vision for what's best for the industry. Some people care about the whole rather than focusing on brand loyalty. Some people actually think for themselves, inform themselves, and try to spread information. Some people are not interested in e-penis debates.
synphul :
None of this really had to do with gpu's. In addition to waiting to see if dx12 makes the fx series 'better' somehow, I'm glad I hung onto my p2 450 - maybe by dx13 it will beat all and I can finally blow the dust off my obsolete hardware and be winning as well.
Your lack of understanding is the only thing that allows you to make such statements.
synphul :
Makes about as much sense as fx becoming better with yet more time passing when it hasn't gotten any better in the 4-5yrs it's already been out. It would be the first time in history tech has moved backwards instead of forward and if fx in its current state had half a chance amd would continue building upon it rather than ditching it for zen.
Time passing is irrelevant when the tech around it didn't move forward. If you have eight handhelds but only one game, you can only play one handheld at a time, until more games become available... Those extra games are what DX12 is supposed to be for gaming. There are so many changes going on right now in the way games are made, but I guess, when you're unaware of them, you'll keep looking at things through the same old lens.
synphul :
They're purposely focusing on ipc performance because they recognize their current downfall. If it wasn't an issue and wasn't a serious issue, why would the company acknowledge the issue and go out of their way to correct something fx fans refuse to admit is broken or lacking? Just seems like some serious denial by the diehard fans.
Because that's the weakest part of their multithreaded architecture. They've already mastered designing chips for multithreading; it would've been better for them to do it the other way around, IPC first and multithreading second, which is why they called it a misstep.
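A back-of-the-envelope way to see why both halves of this are true at once. The IPC and core numbers below are purely illustrative assumptions, not measurements:

```cpp
// Purely illustrative numbers, NOT measurements: relative per-core IPC
// and core counts chosen only to show the shape of the tradeoff.
#include <cstdio>

int main() {
    const double fx_ipc = 1.0, fx_cores = 8;   // low IPC, many cores
    const double i5_ipc = 1.7, i5_cores = 4;   // high IPC, fewer cores

    // One busy thread: IPC decides, and the i5 walks away with it.
    std::printf("1 thread : FX %.1f vs i5 %.1f\n", fx_ipc, i5_ipc);

    // All cores busy (what DX12-era engines aim for): parity is possible.
    std::printf("all cores: FX %.1f vs i5 %.1f\n",
                fx_ipc * fx_cores, i5_ipc * i5_cores);
    return 0;
}
```

That's the whole FX story in two lines: behind on one thread, competitive when everything is loaded, which is exactly why the API's threading behavior matters so much.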