Will AMD CPUs become better for gaming than Intel with DirectX 12?

With DirectX 12 coming, games will be able to use more CPU cores, and AMD has cheaper 8-core processors (AMD FX-8350). Do you think those CPUs will become better for gaming than Intel's 4-core CPUs at the same price (i5 4690, i5 4460...)?
 
It will make AMD processors better for gaming than they are now, keeping them a very cheap alternative and maybe making them more competitive. However, Intel processors are far more powerful than AMD's. Intel processors have proven to perform better in multi-core applications like computer modelling than AMD processors, even when AMD has more cores. Will an 8-core AMD processor be better than an Intel i3? Probably yes. Better than an i5? Hmmm, probably not. Better than an i7? Definitely nope.

Intel doesn't cram an insane number of cores into their processors because they don't need to, and because a high-performance multi-core processor can be really expensive while the extra cores only make a real difference for workstation-type applications. There are drawbacks and complications from adding too many cores to a processor; lots of technical challenges to make all the cores perform efficiently. AMD doesn't seem to care much about that: their marketing team noticed people started falling for processors with more cores ever since dual-core processors first appeared on the market, so their processors end up very inefficient and perform worse than Intel's. AMD is using a marketing technique to sell more products to ignorant customers, while Intel focuses on the performance of their processors and markets their Xeon line to high-end computing needs, such as supercomputers at research universities. For Intel, performance matters the most.
 
This is similar to every other dx12 benchmark out there so far, since there aren't any real dx12 games yet or a final dx12 to run them on. Most of the performance is in draw calls, which is only one aspect of gaming. Even with higher draw call rates, the answer so far looks like nope. Dx12 isn't the magic fix to make amd cpus better than i5s. In fact, if you look at some of the comparisons of draw call rates between the i5 and the fx 8350 and compare dx11 performance to dx12, the fx actually loses ground to the i5. The rest of the gaming code will still rely on good ol' cpu power, so nothing changes there. Will fx performance improve from dx11? Sure. Will every other cpu, including intel's, improve? Yep.

http://www.eurogamer.net/articles/digitalfoundry-2015-why-directx-12-is-a-gamechanger

It makes overall gaming a better experience for everyone, but it's not a pocket full of magic beans that will make fx suddenly as good as or better than an i5. Keep in mind this only applies once dx12 is actually implemented in games; they have to be written to include it. By then amd's zen will likely finally be out, as will intel's skylake and possibly cannonlake. Amd's been sitting on their heels the past 4yrs waiting for their architecture to magically become more utilized, and they took a wrong path (intel's done this in the past but recovered much quicker). Which is why zen is trying to improve the ipc that's killing the performance of bulldozer/piledriver. There's no magic fix for it except to redo it, which is what zen is supposed to do.
 


^
This is a real answer. Mine was just my humble opinion.
 
It appears that with a more direct interface to the gpu, it reduces some of the work the cpu has to do, and as far as I can tell the weaker gaming cpus are the ones that gain the most benefit, like the fx 4xxx, fx 6xxx and i3. Less bottlenecking of the higher end gpus, and we'll probably see more intense graphics if they can draw more on the screen per frame. I'm not a coder though, so I'm not positive what else still has to be processed besides things like user interaction and relative geometry (where bullets meet targets, the player character in relation to objects etc).

Since dx12 has been rumored for a while, game devs may have had a jump start on it, but I doubt it's a matter of just a few lines of code. They may have to rewrite entire game engines to incorporate dx12, since many games are based on shared engines (frostbite, cryengine, unity etc). As far as I know it's a framework which allows the game devs to take a game's guts from console and port it to pc (or vice versa) without having to rewrite the whole thing. I think xbox has some dx12 features built in, and I don't know if dx12 is fully backwards compatible or not. If not, it could pose a problem if the devs don't have a one-size-fits-all framework for both until consoles adopt dx12.
 
No.

What DX12 is doing is removing a lot of the CPU-side bottlenecks. EVERY CPU is going to get some % performance boost as a result, with lower performing CPUs getting a bigger boost. So an Intel i5 is going to see a minimal improvement, since that CPU isn't a major bottleneck, but an Intel Pentium would see a larger improvement, since there the CPU is a bottleneck.

In short: The weaker the CPU, the larger the improvement. I fully expect the Core i3 line to become a LOT more attractive, since I believe they'll end up performing close to the way i5's currently do. So I actually expect AMD chips to look less attractive once DX12 hits.
 
Dx12 does offer a potential performance boost over dx11. It will lessen the api overhead (which is just a small part of game performance), particularly by raising the maximum number of draw calls per frame (which can otherwise hold performance back; again, that's a part of a part of performance). It will also expose new execution features, but only if your gpu supports them. That's the part where performance can potentially improve without the cpu being involved at all.
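To make "draw call overhead" a bit more concrete: every object drawn means the cpu sets some state and issues a draw through the api, and that per-call cpu cost is what dx12 shrinks. Here's a minimal dx11-style sketch assuming an already-created device context and per-object buffers (the Renderable struct and the drawScene helper are made up for illustration; only the D3D11 calls are real):

```cpp
#include <d3d11.h>
#include <vector>

// Hypothetical per-object data; a real engine tracks far more state than this.
struct Renderable {
    ID3D11Buffer* vertexBuffer;
    ID3D11Buffer* indexBuffer;
    UINT          indexCount;
    UINT          stride;
};

// One draw call per object: each iteration costs cpu time inside the api/driver,
// which is why thousands of small objects can bottleneck the cpu long before the gpu.
void drawScene(ID3D11DeviceContext* ctx, const std::vector<Renderable>& scene) {
    for (const Renderable& r : scene) {
        UINT offset = 0;
        ctx->IASetVertexBuffers(0, 1, &r.vertexBuffer, &r.stride, &offset);
        ctx->IASetIndexBuffer(r.indexBuffer, DXGI_FORMAT_R32_UINT, 0);
        ctx->DrawIndexed(r.indexCount, 0, 0);   // the actual draw call
    }
}
```

Under dx12 the equivalent work is recorded into command lists with much less driver overhead per call, which is where the big draw-call benchmark numbers come from - but as noted above, that's only one slice of overall game performance.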

Either way, to answer the question: no. Amd's 8-core fx cpus won't suddenly become faster than (or catch up with) intel cpus because of dx12. The gap might lessen a little, but if they ever do strike even, that will have almost nothing to do with dx12.
 

They won't be stronger, but the gap might lessen to a point where the limiting factor becomes the GPU, and thus the CPUs become almost equal. Also, an 8-core AMD might be competitive with a 4-core i5 in a DX12 scenario.
 
This is part of the problem I see regarding dx12. People want faster/better gaming pcs and they want them on the cheap. Currently, aside from cpu-intensive games, an fx paired with a mid-range card (around $200 or so) does decent. Now you alleviate the overhead on the cpu, making the lower end cpus more capable of driving the faster/stronger cards. If someone is using a budget chip like an fx 4xxx/6xxx or an i3, and their complaint for not getting an i5/i7 was $100 of cost, or they chose amd over intel because of $50 in savings - how exactly do they plan to plunk down $550 or more for a gpu all this new tech will remove the bottleneck from?

Say because of dx12's improvements an i3 will now comfortably handle a gtx 980ti. If you're gaming with an i3 and an i5 was out of the budget, where does a gpu costing almost $700 fit in?

It's good and yet at the same time it's almost pointless. For people with lower budgets, gaming on budget cpus there's a good chance they're gaming on budget gpus also. Going by the cheapest prices on pcpartpicker (and let's use nvidia just as an example):

gtx 960 - $170
gtx 970 - $308
gtx 980 - $480
gtx 980ti - $650

A person who saved $50 on a cpu or can't stretch their budget $50-100 doesn't have $150-200+ for each gpu tier upgrade. Opening up the compatibility and options is nice, but to really make this a win for gamers and realize the potential of their budget cpus, gpu prices are going to have to fall drastically and high end cards are going to have to improve much, much more in performance - not take 18-24 months to make a couple of tiny tweaks, slap on 4gb of vram and call an r9 290 an r9 390.
 
Well, you could have a then top-of-the-line Phenom II X6, i7 920, FX 8150 or even the popular 2500K, and have since replaced the GPU; with DX12 you might not have to upgrade the CPU at all and can just keep it trucking along. There will still be differences from current CPUs, but if they get 60fps one might not bother replacing them so soon.

One person can go from having a Phenom II X4 955 and a 6950 to having a Phenom II X4 955 and a 390X. It's not all about the low end, it's also about the older high end.

And there's nothing wrong with buying a cheaper CPU and a greater GPU. Sometimes you only have a certain budget and want to make the most of it.
 


Regarding the performance lost to API overhead: yes. That's only a small part of all the performance limiters.
If an fx 8xxx ever catches up to an i5 4460, that will mostly have nothing to do with dx12.
 
I sort of see what you're saying, but the 2500k is 4.5yrs old and still not a bottleneck. How much more time do manufacturers really want between replacements? If progress is this slow because a nearly 5yr old cpu is plenty capable and the user has no urgency to upgrade, how much slower will it get? How much more expensive will cpus become as people upgrade even less often and manufacturers compensate for the longer time between upgrades?

I totally agree, there's nothing wrong with buying a cheaper cpu and pairing it with a more expensive gpu. People are already doing this with $80-100 cpus and $200-350 gpus. My point is, these are the budgets where people say: $50 more? For intel? That blows my budget. If that's the case and they have a $200-350 gpu, what will they get in a future build? A $700 gpu? That $50-100 saved on the cpu, which puts so many out of budget, isn't enough for even one generation/level of gpu upgrade anymore. I can't go from a 960 to a 970 for $100, or from a 970 to a 980 for $150. So if this is supposed to help lower end budget users get more bang for the buck, is a gtx 970 going to suddenly become a more affordable $150?

Under dx12 those older cpus will handle a higher end gpu, but people don't have the budget for them anyway at that level. If someone has a 6950 and wants an r9 390x, I'd bet they need a new psu at around $60-80 (conservatively) on top of the $430-450 for the r9 390x. I would think that if someone's budget won't allow for the slight price difference between amd and intel (the issue that crops up the most when trying to get a 'budget' build), it's doubtful they have the $530-550 to dump on a psu/gpu. I can't see spending over $500 on 'upgrades' to keep a 7yr old cpu along with an outdated motherboard that lacks usb3, m.2, sata express and so on.

Trying to make the most of a limited budget is hard to do when the separation between gpu series/performance tiers averages $150-200 per 'leap'. If I were stuck on a budget saying, well, if I do this or that it will save me $25 or $50 or even $100, and I could put that toward a better gpu and get a 970 instead of a 960 - it's not going to happen. The next gpu up from whatever's being considered is going to blow the budget regardless (assuming the budget was tight to begin with). Now if the cards were only $50 apart, then it would be helpful. That's why I said, given all this new reduction in load on the cpu, gpus which are already lagging are going to have to step up their performance gains and lower their prices. Making better graphics possible on lower end cpus is only part of the equation. When you consider a card that's actually within a user's limited budget, current cpus already aren't much of a bottleneck, if any.

It will be interesting to see how many people still have phenom ii x4's by the time dx12 is in full swing and actually being implemented. Many have already upgraded to fx cpus or intel's core series, and the stragglers are waiting for skylake and zen, which should both be out by then. That will further reduce the minority running on outdated cpus from 5-7yrs ago. It's a step in the right direction; anything that improves efficiency and performance is. But I have a feeling it will be some time before it's mature and actually making a huge difference, similar to how 64-bit computing went in the mainstream/home environment.
 
I think the point of dx12 is not so you can rock 2x 980tis with an athlon, it's so more of the cpu power can be harnessed via proper multithreading to run the games better - think more units/players in game without fps drops. I fully expect games to get bigger and better after dx12 has been out for a while, and if the fx series pulls back a bit against intel at the start, future games should in theory open the gap up again.
 
The games have to be coded to run multithreaded. Dx12 is just an api; even if it's multithreaded, it governs how the cpu interacts with the gpu, not necessarily how the game code itself is run. Recoding an entire game, or coding it for heavy multithreading, is more complex. If it weren't, everything would run on as many threads as we could throw at it. That's just not reality. Multithreaded games have been possible all through dx11 - where are they all? Dx12 is more or less about reducing driver overhead: the less work the cpu has to do on the drivers and the more direct the path to the gpu, the more cpu resources are free to do other things.
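For a rough sense of what "coding the game itself for multithreading" means, independent of any graphics api, here's a minimal standard C++ sketch; the Entity/updateSlice names are made up for illustration, and real game systems have dependencies between objects that make this much harder than a simple split:

```cpp
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <thread>
#include <vector>

// Hypothetical per-entity game state; a real engine tracks far more than this.
struct Entity {
    float x = 0.0f, vx = 1.0f;
};

// Advance one slice of the entity list by dt seconds.
void updateSlice(std::vector<Entity>& entities, std::size_t begin, std::size_t end, float dt) {
    for (std::size_t i = begin; i < end; ++i)
        entities[i].x += entities[i].vx * dt;
}

int main() {
    std::vector<Entity> entities(100000);
    const unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    const float dt = 1.0f / 60.0f;

    // Split the update across however many cores are available.
    std::vector<std::thread> pool;
    const std::size_t chunk = entities.size() / workers;
    for (unsigned w = 0; w < workers; ++w) {
        std::size_t begin = w * chunk;
        std::size_t end = (w + 1 == workers) ? entities.size() : begin + chunk;
        pool.emplace_back(updateSlice, std::ref(entities), begin, end, dt);
    }
    for (auto& t : pool) t.join();   // the frame can't finish until every slice is done

    std::cout << "first entity x after one tick: " << entities[0].x << "\n";
}
```

That kind of restructuring is the game developer's job no matter which api is in use, which is the point here: dx12 reduces the api/driver cost, it doesn't parallelize the game logic for you.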

For cpus which are bottlenecked and struggling to run a game's code along with the driver overhead, this will help alleviate it. That's why it's said that it shifts more of the load directly to the gpu and less onto the cpu. Many have discussed the fact that fx cpus run a bit better with nvidia gpus than with amd/ati's own gpus, because the amd/ati driver overhead is more significant and has a bigger impact on the weaker core design of amd's cpus; intel's ipc is high enough that it doesn't make much of a difference. It doesn't, however, mean that boom, witcher 4 comes out and runs on 8 or 16 threads thanks to dx12.

It means that amd's cpus are no longer gimped even worse by high driver overhead and will start to interface better with the gpu - as will intel's. Speculation has been that with dx12 AND the higher ipc promised by zen, maybe then amd cpus can start to compete head to head with intel in games. It's still a lot of theory at this point; very little actual testing can be done because dx12 isn't mature, there aren't really any dx12 games out, etc. Dx12 alone won't do it. The rest of the game engine still has to run, and there's more to a game than draw call performance, where intel's quad cores continue to dominate fx 8-cores even under dx12 regardless of nvidia/amd gpus.

http://www.eurogamer.net/articles/digitalfoundry-2015-why-directx-12-is-a-gamechanger

One of the features that very well may come along with dx12 is vram stacking, but with a catch - ONLY if the game is coded to do it. Dx12 isn't plug and play where we instantly get vram stacking. A lot of games are already poorly optimized using dx11, and look how long that's been in existence; I doubt in the real world this is going to change anytime soon for dx12 either. What it will likely mean is that for games like crysis 3, where it takes nearly a gtx titan or two to get over 50-60fps on high/ultra, dx12 will help alleviate that somewhat. To take fuller advantage of it, crysis 3 would need to be patched. I wouldn't hold my breath; many of these companies are busy patching the games they just released because they're gimped and shipped far before they were 'done'.
 
One of the main reasons games don't scale beyond a few threads is that under DX, with very few exceptions, all the rendering needs to be done within a single thread. DX12 should remove that limitation, but given that top end CPUs aren't bottlenecked anyway, they won't see much performance gain (likely limited to the speedup within the API itself). It's lower tier CPUs, like the i3 and Pentium, which will see the largest proportional performance gain.
 
No, it doesn't have to be done on a single thread. If it is, it's because that's how the devs have written the game. This is from microsoft's site.

"Multithreading is designed to improve performance by performing work using one or more threads at the same time. In the past, this has often been done by generating a single main thread for rendering and one or more threads for doing preparation work such as object creation, loading, processing, and so on. However, with the built in synchronization in Direct3D 11, the goal behind multithreading is to utilize every CPU and GPU cycle without making a processor wait for another processor (particularly not making the GPU wait because it directly impacts frame rate). By doing so, you can generate the most amount of work while maintaining the best frame rate. The concept of a single frame for rendering is no longer as necessary since the API implements synchronization."
https://msdn.microsoft.com/en-us/library/windows/desktop/ff476891(v=vs.85).aspx
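For what it's worth, the D3D11 mechanism that quote is describing is deferred contexts: worker threads record commands into command lists and the immediate context plays them back. A minimal sketch, assuming a device and immediate context already exist and the actual draw recording is omitted (the recordWork/submit helper names are made up; the D3D11 calls themselves are real):

```cpp
#include <d3d11.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Record one thread's worth of draw work on a deferred context, then hand back
// a command list for the render thread to execute.
ComPtr<ID3D11CommandList> recordWork(ID3D11Device* device) {
    ComPtr<ID3D11DeviceContext> deferred;
    device->CreateDeferredContext(0, &deferred);

    // ... set pipeline state and issue Draw*/Dispatch calls on `deferred` here ...

    ComPtr<ID3D11CommandList> cmdList;
    deferred->FinishCommandList(FALSE, &cmdList);
    return cmdList;
}

// Back on the render thread: submission still funnels through the single
// immediate context, which is the serialization point dx11 keeps.
void submit(ID3D11DeviceContext* immediate, ID3D11CommandList* cmdList) {
    immediate->ExecuteCommandList(cmdList, FALSE);
}
```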

Dx11 does have multithreading support, but if game devs choose not to implement it for whatever reason (convenience, complexity etc) then it makes little difference. Dx12 won't change this aspect, which is why I mentioned that dx12 won't suddenly create a bunch of heavily threaded games. Much like vram stacking, it only helps IF the devs decide to take advantage of it. We can offer the game devs all the tools in the world to make multithreaded games, but getting them to do so is another matter.

That's not to say dx12 won't have better or more elegant multithreading support; obviously it's an improvement over dx11. The issue of games being heavily single threaded can't be blamed on dx11, though.
 
DX11 multithreading had many limitations, though, due in part to the pipelined nature of the API. Only a few engines use it (Frostbite 3 does, which is why games like DA:I do scale reasonably well). Even then, you typically still have a big render thread and many smaller GPU worker threads, so the root problem of two threads doing most of the work within an application remains. DX11 was a step in the right direction, but you really had to build around it to get performance gains out of it. It's no shock games started to thread better once DX9/XP support got dropped.

By contrast, the DX12 model is free of those limitations, and gives a lot more control over the various HW components within the GPU, so you can thread more or less trivially.
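Roughly what that looks like in D3D12 terms: each worker thread gets its own command allocator and command list, records independently, and everything is submitted to the command queue in one batch. A minimal sketch assuming a device and queue already exist; fences, pipeline state and the actual draw recording are omitted, and the WorkerRecorder/recordFrame names are made up for illustration:

```cpp
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>
using Microsoft::WRL::ComPtr;

// Each worker owns its own allocator + list, so recording needs no locking.
struct WorkerRecorder {
    ComPtr<ID3D12CommandAllocator>    allocator;
    ComPtr<ID3D12GraphicsCommandList> list;
};

void recordFrame(ID3D12Device* device, ID3D12CommandQueue* queue, unsigned workerCount) {
    std::vector<WorkerRecorder> workers(workerCount);
    std::vector<std::thread> threads;

    for (auto& w : workers) {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&w.allocator));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  w.allocator.Get(), nullptr, IID_PPV_ARGS(&w.list));
    }

    // Every core records its own slice of the frame's draw calls in parallel.
    for (auto& w : workers) {
        threads.emplace_back([&w] {
            // ... record ResourceBarrier / SetPipelineState / Draw* calls on w.list ...
            w.list->Close();
        });
    }
    for (auto& t : threads) t.join();

    // One submission of everything that was recorded.
    std::vector<ID3D12CommandList*> lists;
    for (auto& w : workers) lists.push_back(w.list.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(lists.size()), lists.data());
}
```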

I do note: none of this actually adds anything to performance if no CPU bottleneck currently exists. Sure, you get lower per-core usage, but if no core was overworked, additional threading doesn't give any significant performance benefit. It's a point I've driven home many times over the past 6 years in the various AMD/Intel sticky threads.
 

Only one core can talk to the gpu even with dx11 multithreading; pure and simple, games are completely limited by the api. The entire point of dx12 is that all your cores can talk to your gpu, removing the heavy load from the traditional single render thread.
 

I hoped this would lead to bigger, better games with more units/players etc., not just to give old cpus a boost.

 


Why not both?
 
It's definitely a step in the right direction, making better use of gpu capabilities that aren't being used to their full potential. Games involve more than just graphics processing though, and where a game relies on a lot of cpu calculations it will still need a stronger cpu. Which is why fx won't outpace intel with or without dx12. Higher ipc always matters; it's at the core of everything a processor does.

People often think that spreading the workload wider over many cores is the solution. How's that working out for consoles? Everyone can agree 2-4 stronger pc cores can outpace a gimped 8-core console any day of the week, even with inefficient dx11. I suppose if one could spread the threads out wide enough, eventually the weaker cpus would catch up - but when? Will it take 16 or 32 threads on a weak architecture to finally match 4-8 efficient threads? While dx12 will be a vast improvement, hopefully it doesn't become a crutch for poor hardware design along the way.
 


Currently
FX-4 < FX-6 < FX-8 <= i3 < i5 <= i7

You see this confirmed even by the benchmarks on this very site.

With DX12 maybe:

FX-4 < FX-6 < i3 <= Fx-8 < i5 <= i7

That's if there are other drastic improvements besides draw calls. Otherwise, things won't change in the slightest.