Will AMD CPUs become better for gaming than Intel with DirectX 12?

LOL yeah right... an i3 beating the FX-6 in DX12 is a huge dream. The FX-4 is as fast as an i3 in multithreaded applications. The i3 might have better single-core performance, but the Hyper-Threading efficiency of an i3 is only around 25%. The second thread of an FX module is generally at least 60% efficient, usually closer to 75%. This lets the FX-4 keep up with the i3 when all threads are fully loading the CPU, and the two additional threads on the FX-6 will practically guarantee superior performance to any i3.

It seems people are not aware of what DX12 actually does... Before DX11, only one CPU core could feed the GPU. Obviously this is extremely limiting.

DX11 improved things (or tried to) by allowing all CPU cores to feed the GPU, BUT only one core could do so at a time. Even if you used all cores, they had to wait on each other to talk to the GPU. So in the end it was often easier to just let one core handle the communication. This is why Intel's strong single-core performance wins out over AMD time and again.

DX12, however, allows ALL cores to communicate with the GPU simultaneously. This actually makes things easier to program, reducing CPU overhead, and spreads the tasks like they should be spread. CPUs with more threads can actually beat CPUs with fewer threads this time.
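To make that concrete, here's a minimal, hedged C++ sketch of what "all cores feeding the GPU" looks like in D3D12: each worker thread records into its own command list, and everything is handed to the queue in one submission. Pipeline setup and error handling are omitted, and RecordInParallel is just an illustrative helper name, not part of any real engine:

```cpp
// Sketch: parallel command-list recording in D3D12 (setup omitted).
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>

using Microsoft::WRL::ComPtr;

void RecordInParallel(ID3D12Device* device, ID3D12CommandQueue* queue,
                      unsigned workerCount)
{
    std::vector<ComPtr<ID3D12CommandAllocator>>    allocators(workerCount);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(workerCount);
    std::vector<std::thread> workers;

    for (unsigned i = 0; i < workerCount; ++i) {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i].Get(), nullptr,
                                  IID_PPV_ARGS(&lists[i]));
        // Every thread records draw commands at the same time, no shared lock.
        workers.emplace_back([cl = lists[i].Get()] {
            // ... record this thread's slice of the scene here ...
            cl->Close();
        });
    }
    for (auto& t : workers) t.join();

    // One cheap submission hands all the recorded work to the GPU.
    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
}
```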

I guarantee that what I'm saying is accurate. But don't take my word for it. Wait for the benchmarks when the time comes :)
 
If draw calls are any indication, the FX-8350 still falls behind the i5 in DX12 multithreaded performance. Both get a boost, but the gap actually widens a bit between the FX and the i5 going from DX11 multithreaded to DX12 multithreaded. No doubt there are improvements for all, but making the FX-8xxx equal to an i5? Not going to happen. The core architecture is just too weak; no amount of software can put a band-aid on it. Only a restructuring of the architecture can, which is what Zen is aiming to do.

Even though AMD is behind and has less budget than Intel, you can be sure they've been testing their hardware with various new tech, especially considering they were working on Mantle to begin with, before they chose to drop it for DX12. They have access to their own chips and to Intel CPUs, so you can be sure they're doing head-to-head testing to see how to improve performance. Their pricing has, admittedly by AMD themselves, been adjusted to be competitive at the level of performance each chip corresponds to, which is why the FX-6xxx is priced similarly to the i3 and not the i5. No one else short-changed AMD by saying their 6-core is only on par with an i3; AMD acknowledged this and structured their prices accordingly.

Having known roughly how their CPUs perform using Mantle (similar to DX12) vs Intel for some time now, during the development of Zen, it makes sense that Zen is still focusing on IPC improvement. If this weren't the case, they would have just continued to rehash what they have. They realize they can't keep doing that. Even AMD can see FX-8xxx < i5 in DX11, and FX-8xxx < i5 in DX12. What's the next solution? Improve IPC at the core of the CPU. Exactly what Zen is aiming for.

I would say better multithreading in DX12 gives programmers more control and more options, but it doesn't make the coding task easier. Rather, it makes it more complex. They have to make sure there's no bottlenecking of threads, no false sharing of cache lines, etc. Mistakes there can cause worse performance rather than better. That simple fact comes from plenty of coders and is highlighted in books like "Game Engine Architecture, 2nd Ed." by Jason Gregory.
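For the curious, here's a tiny C++ illustration of the false-sharing mistake mentioned above. The struct layouts and loop counts are arbitrary, just enough to show two threads fighting over one cache line versus not:

```cpp
// Two threads increment *different* counters; if those counters share a
// 64-byte cache line, the cores ping-pong the line and throughput tanks.
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>

struct Unpadded { std::atomic<long> a{0}, b{0}; };           // likely same line
struct Padded   { alignas(64) std::atomic<long> a{0};       // forced onto
                  alignas(64) std::atomic<long> b{0}; };     // separate lines

template <typename Counters>
long long hammer(Counters& c)
{
    auto t0 = std::chrono::steady_clock::now();
    std::thread t1([&] { for (int i = 0; i < 10000000; ++i) c.a++; });
    std::thread t2([&] { for (int i = 0; i < 10000000; ++i) c.b++; });
    t1.join(); t2.join();
    return std::chrono::duration_cast<std::chrono::milliseconds>(
               std::chrono::steady_clock::now() - t0).count();
}

int main()
{
    Unpadded u;
    Padded   p;
    std::printf("unpadded: %lld ms\n", hammer(u)); // threads fight over the line
    std::printf("padded:   %lld ms\n", hammer(p)); // typically several times faster
}
```

Both versions do exactly the same logical work; only the memory layout differs. That's the kind of invisible mistake the post above is warning about.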

What I don't get is that every place parroting the DX11 vs DX12 information says DX11 is single-threaded, when the authors of DX11, Microsoft, state otherwise. The documentation does say that programmers can opt out of thread-safe operation and force a single thread, so is DX11 truly at fault, or are programmers? They may choose single-threaded because it's less complex and they don't have to worry about conflicting threads fighting over resources. Not as efficient, but less problematic. DX10 and 9 were limited to a single thread.

"Direct3D 12 lets apps get closer to hardware than ever before. By being closer to hardware, Direct3D 12 is faster and more efficient. But, the trade-off of your app having increased speed and efficiency with Direct3D 12 is that you are responsible for more tasks than you were with Direct3D 11."

"Direct3D 12 is a return to low-level programming"

"In Direct3D 12, CPU-GPU synchronization is now the explicit responsibility of the app and is no longer implicitly performed by the runtime, as it is in Direct3D 11. This fact also means that no automatic checking for pipeline hazards is performed by Direct3D 12, so again this is the apps responsibility"

https://msdn.microsoft.com/en-us/library/windows/desktop/dn899194(v=vs.85).aspx

It says nothing about overcoming a single-threaded limitation in DX11; it speaks of working closer to the hardware, low-level programming, higher efficiency, and it places all the work on the developer to code it that way without being aided (or limited) by DX11's intervention. If games are glitchy now with DX11 helping to make things thread-safe, imagine how messed up they'll be when everything is left up to the developer. There's no indication from Microsoft that DX11 was single-thread bound (as it does state for DX10/9), just people making this claim about DX12. If I choose to drive a car with a manual transmission in first gear because shifting is too complicated, it's a bit hard to say the car is at fault for only having one gear. No, the car had multiple gears and I chose not to use them.

If multithreading in dx11 didn't exist, how is it there are dx11 multithreaded vs single threaded benchmarks? I'm fully agreeing that the potential for efficiency is greatly improved in dx12, though it's because the api overhead's been reduced and more communication happens directly between the cpu and gpu, not because it was single thread bound. The results are the same, the theory as to why seems a bit off.
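To illustrate what that "explicit responsibility" from the MSDN quote looks like in practice, here's a minimal hedged sketch of a D3D12 fence wait, the kind of CPU-GPU synchronization DX11 performed implicitly. Error handling is omitted and WaitForGpu is just an illustrative name:

```cpp
// Sketch: explicit CPU-GPU sync in D3D12 via a fence.
#include <d3d12.h>
#include <wrl/client.h>
#include <windows.h>

using Microsoft::WRL::ComPtr;

void WaitForGpu(ID3D12Device* device, ID3D12CommandQueue* queue)
{
    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));

    HANDLE done = CreateEvent(nullptr, FALSE, FALSE, nullptr);

    // Ask the GPU to set the fence to 1 once all prior queued work completes...
    queue->Signal(fence.Get(), 1);
    // ...and block the CPU until that happens. In D3D11 the runtime did the
    // equivalent of this for you; in D3D12 forgetting it is a hazard.
    fence->SetEventOnCompletion(1, done);
    WaitForSingleObject(done, INFINITE);
    CloseHandle(done);
}
```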
 
"It seems people are not aware of what DX12 actually does... Before DX11, only one CPU core could feed the GPU. Obviously this is extremely limiting. "

You must have literally no idea how games actually work, what DirectX is, or what affects performance in which way. Because if you did, you'd see how much of a non-issue that 'one CPU core at a time' limitation is.

DX11 barely limits game performance. Neither did DX10 or DX9. Developers get small efficiency improvements with each version, but it's mainly about accomplishing certain things easier and quicker. DirectX isn't for "us users". It's an API, a programming interface that lets the programmer interact with the GPU in set ways.

DX9-11 have offered a fairly high level of abstraction. They took a lot of work off the devs but sacrificed a little performance and limited low-level interaction with the hardware. DX12 once again makes certain things easier, but also allows for more control, which can result in better-managed resources and thus less performance loss. It does, however, also force devs to do things like memory handling themselves (and we all know nobody likes losing garbage collection), which probably won't be welcomed.


But let's think of perfect adoption of DX12 with an example: Assassin's Creed: Unity. Say you have 55 fps with DX11, where DX11 limits the whole thing by ~5 fps. If DX12 removed all API overhead, you'd now be at 60 fps.
If it were to suddenly run at 100 fps, that would not be caused by the DX11 -> DX12 change. They'd have to rework a whole lot for that to happen, and the API would once again barely come into play.
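To put rough numbers on that example (taking the ~5 fps figure above at face value):

```latex
% Frame times for the assumed figures:
t_{55} = \tfrac{1000}{55} \approx 18.2\ \mathrm{ms}, \qquad
t_{60} = \tfrac{1000}{60} \approx 16.7\ \mathrm{ms}
% So the assumed DX11 overhead is only about 18.2 - 16.7 = 1.5 ms per frame.
% 100 fps needs t = 10 ms per frame; removing 1.5 ms of API cost from 18.2 ms
% still leaves ~16.7 ms of engine/GPU work, so the API swap alone cannot get there.
```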
 


Yeaaaahhh... I dunno, my i3 4160 @ 3.6GHz is way more effective than my FX 4350 @ 4.7GHz.
 


In theory though, due to the lower-level resource control, you could extract much larger gains if you really took the time and effort to optimize your render pipeline. The root problem is that time is rarely a high priority during development.

My main concern here is that it's been a long time since programmers have been routinely exposed to low-level memory management, since even C++ does a fairly good job abstracting it now. I fully expect the first wave of DX12 games to be unstable piles.
 
Looks like it's just a few weeks until we can compare the first native DX12 RTS game to older RTS games like SC2. My money's on a lot more units and a lot more fps.
 
Still repeating the same stuff I see. Wait for the benchmarks and you'll see...

They know that their biggest market is gamers buying those CPUs, and for gaming right now, FX-6 = i3. This will be changing. But again I'll say, wait for the benchmarks and you'll see...

IPC improvement is something that must happen anyway. And the jump to multi-threading went a lot slower than AMD thought, which is why their architecture failed. They are still very capable CPUs in multi-threaded environments, and gaming is becoming such an environment, which is why we will be seeing the jump. But wait for the benchmarks and you'll see.

Except all those things, like bottlenecking of threads and false cache sharing, are handled automatically by the DX11 API. DX11 is great for indie developers. For big studios like DICE or Bethesda, DX12, Mantle or Vulkan is a superior choice. And it's not as if they don't have experience with this; they are already doing all the things you mention on consoles. The reason console ports suck on PC is DX11's limitations. DX12 will make it easier for developers to port, because they won't have to perform magic to make six threads that all communicate simultaneously with the GPU work on a system that uses four threads, where only one thread can talk to the GPU at a time.

DX11 is not single-threaded. It's multi-threaded, but only one thread can communicate with the GPU at a time. But it seems that this information is going way over your head.
Put simply, imagine that you have 6 cars that have to race on a certain track from point A to point B. Each car carries four people. You have only two tasks: press the gas and turn the wheel. DX11 is being able to turn the wheel on all 6 cars simultaneously, but only being able to press the gas on one car at a time. You can get the 24 people from A to B by trying to control all the cars simultaneously, or by simply sending them off one by one. Obviously the cars are not allowed to crash. Which do you pick, and which do you think gets the people to their destination faster? Options in between are also available, so maybe the compromise of controlling two cars at the same time is better than trying all six. That is DX11, and that is why multi-threading is so inefficient on it. Having two or eight cores doesn't make that much of a difference with it. DX12 allows you to both turn the wheel and press the gas on all cars independently at the same time.
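For reference, the mechanism behind the "only one gas pedal at a time" part of the analogy is DX11's deferred contexts. A rough, hedged sketch, with device/context creation and the actual draw recording omitted:

```cpp
// Sketch: DX11-era multithreading. Workers record into *deferred* contexts,
// but every command list must funnel through the single immediate context.
#include <d3d11.h>
#include <wrl/client.h>
#include <thread>
#include <vector>

using Microsoft::WRL::ComPtr;

void Dx11StyleSubmit(ID3D11Device* device, ID3D11DeviceContext* immediate,
                     unsigned workerCount)
{
    std::vector<ComPtr<ID3D11CommandList>> lists(workerCount);
    std::vector<std::thread> workers;

    for (unsigned i = 0; i < workerCount; ++i) {
        workers.emplace_back([&, i] {
            ComPtr<ID3D11DeviceContext> deferred;
            device->CreateDeferredContext(0, &deferred);
            // ... record draws on this thread ...
            deferred->FinishCommandList(FALSE, &lists[i]);
        });
    }
    for (auto& t : workers) t.join();

    // The serial choke point: only the immediate context talks to the GPU.
    for (auto& l : lists)
        immediate->ExecuteCommandList(l.Get(), FALSE);
}
```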

Uhm... Second line...
[image: dx12-features-xbox-one-pc-mobile-1-1024x579.jpg]

http://gamingbolt.com/directx-12-analysis-new-rendering-features-executeindirect-performance-comparisons

Oh come on. Look at any CPU load during gaming out there, and you'll have one core at 100%, one or two others at maybe 40%, and the rest at practically 0%. And DX11 is the cause of messed-up ports. As previously mentioned, the reason console ports suck on PC is DX11's limitations. DX12 will make it easier for developers to port, because they won't have to perform magic to make six threads that communicate simultaneously with the GPU on a console work on a system that uses four threads, where only one thread can talk to the GPU at a time. The API limits access. And considering that console versions of games have been more stable than PC versions on more than one occasion, the big studios won't have a problem. If DX11 is so great, why are we returning to low-level APIs?

See explanation of cars above.

No one ever said DX11 multithreading didn't exist. DX11 multithreading is simply too limiting.


What do you base this on, or how did you determine this?
 
IPC improvement is something that must happen anyway. And the jump to multi-threading went a lot slower than AMD thought, which is why their architecture failed. They are still very capable CPUs in multi-threaded environments, and gaming is becoming such an environment, which is why we will be seeing the jump. But wait for the benchmarks and you'll see.

Adding more threads only helps if it eliminates a CPU bottleneck. No bottleneck, no advantage to adding more cores.

In games, CPUs are not the bottleneck. So single-core performance is what separates CPUs, and that's why the i3 can match the FX-6xxx, and sometimes the FX-8xxx. Despite having just two cores, they're powerful enough that in many titles they can still get all their work done without starving the GPU.
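For what it's worth, the "more cores only help against a CPU bottleneck" point can be formalized with Amdahl's law, taking p as the fraction of the frame's CPU work that can actually run in parallel:

```latex
\mathrm{speedup}(n) \;=\; \frac{1}{(1-p) + p/n}
% e.g. if only p = 0.3 of the CPU frame parallelizes, then even with n = 8
% cores the speedup is 1 / (0.7 + 0.3/8) \approx 1.36x, so per-core speed
% dominates, matching the argument above.
```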
 
Huh? What? One CPU core at a time is a non-issue... And I have no idea how games work... Bwhahahahaha. The irony is strong with this one. I'll let the readers be the judge of who has a better idea of how games work.

If DX11 barely limits game performance, why does the exact same scene in Star Swarm with the exact same hardware get such a great boost from DX11 to DX12?
And duh. Of course it's for developers. It's a freaking API. The users of end products like games are not programming and thus won't be using any API themselves. That goes without saying. But obviously a superior API will benefit the end user if it helps the developer, because with the same production resources, more can be done.

Big studios welcome it. They already do this themselves all the time on consoles, and they've actually been asking for this to come to PC for years. AMD came up with Mantle to cater to these developers; MS came with DX12 as a response to Mantle. Obviously all those companies communicate, and there are indications that DX12 is basically a modified Mantle API. For what other reason would AMD encourage switching to DX12?
For indies it's better to stick with DX11. But then again, they probably won't require many CPU resources with the kind of games they're making anyway.

How do you know this? Why only 5 fps? Why would the jump from DX11 to DX12 make 100 fps impossible when you previously had 55? Have you seen the drop in API overhead that's achieved with DX12? It's so enormous that frame time is practically halved at worst, so why would double the framerate be impossible if the GPU is fast enough?
I get your point that just switching APIs is not going to increase performance much. But the point is that for DX12 you HAVE to write your own memory handling, for example, like you yourself said. This means that a bare API swap won't happen, since basic things like thread distribution, reads, writes, fetches and rendering need to be implemented manually for the application to work in the first place. You'd have to be REALLY bad at it to end up less efficient than DX11.

In other words, what you're saying has a bunch of hidden assumptions. For what you're saying to hold, the following must apply:

Either:
1) The GPU is at 100% load at 60 fps and the CPU bottleneck is gone. In this case, only upgrading the GPU will increase performance.
OR
2) The GPU is not at full load at 60 fps and the CPU is still a bottleneck, but only slightly less of a bottleneck.

Number 2 directly implies:
A) The performance increase of 5 fps was only attributed to the reduced CPU overhead of the DX12 API.
AND
B) Since the GPU is not at full load, a higher framerate is still possible with a more efficient CPU usage.
AND
C) Additional threads or cores will be used exactly the same way as under DX11, meaning that if it was rendered on two cores, all other cores on the CPU will remain unused.
OR
D) All cores were already used 100% thus more efficient CPU usage is not possible.
AND
E) APIs have little to no influence in the achievable communication efficiency between CPU and GPU.

A might be true. More efficient CPU usage under DX12 (point B above) is only impossible when C and D are true, but they are definitely untrue, along with E.
 


It's true that adding more threads only helps if it eliminates a CPU bottleneck. It's not true that CPUs are never the bottleneck in games, particularly for AMD CPUs. And if CPUs were not the bottleneck, no CPU would outperform another. Since there are differences in CPU performance, well... that's pretty straightforward...

The bottlenecks of AMD CPUs are the whole reason people keep recommending Intel CPUs. AMD's additional thread computation capabilities are currently not being used; with something like an FX-8, 4 to 6 cores are doing literally nothing. With DX12 they will be used, and thus the AMD CPU bottleneck will be removed for a large part of games.

It's quite simple really. I don't see why this is so hard to grasp. Often I get the reply that DX12 won't make a CPU stronger. Indeed, the hardware will not be. But the resources that are unused will be used. In practice that equals a boost in performance.
 
The figures I mentioned for the i5 still besting the FX-8350 were from testing, not just my opinion. It was benchmarked. I've seen no evidence from you otherwise, NightAntilli, except your opinion. I did say that performance and efficiency improve with DX12 over DX11; I also pointed out that you say DX11 is single-thread limited when it's really not. I didn't make this up, it comes directly from Microsoft. The talk has been of consoles like the Xbox, which already has access to a few DX12 features but is currently running DX11. If the console is running DX11 the same as PCs, how is the PC version of DX11 'limiting' compared to consoles? The limiting part that doesn't change is that a PC game or port has to be 'generic' so it will run on anything from an FX-6300, i3 4160, i5 4460, FX-4xxx, FX-8350, etc. That variable of 'potential' system hardware they have to work with isn't the same as a console, where they know exactly what CPU/GPU they have to work with.

This part confuses me: "DX12 will make it easier for developers to port because they don't have to perform magic to make 6 threads that communicate simultaneously to the GPU on a console to work on a system that uses 4 threads and only one thread can communicate with the GPU at the same time. The API limits access." For one, the PS4 uses its own API, not DX anything; its API can use some DX feature sets. The Xbox One uses DX11, and they're only now talking about getting DX12 access, the same as PCs. So how would this have any relevance to a PC port being DX11-limited? It's no more limited than what the Xbox is using, and if it's a PS4-exclusive title being ported to PC, it has to be completely rewritten for a totally different API.

What I find interesting in relation to this are all the arguments that if PCs were suddenly more like consoles, with lots of cores doing the work, they'd be so superior. Yet here the Stardock CEO says: "One way to look at the XBox One with DirectX 11 is it has 8 cores but only 1 of them does dx work. With dx12, all 8 do."
http://www.cinemablend.com/games/PS4-Substantially-Better-Than-Xbox-One-Even-With-DirectX-12-Says-Stardock-CEO-63486.html

Which confirms that yes, DX12 will be an improvement, but in essence consoles (the Xbox anyway) work no more efficiently than PCs. Using the same DX the Xbox already does, and having access to 8 cores via the FX-8xxx, PC performance still lags on the AMDs compared to Intel chips with half the hardware/cores. The Xbox has 8 weaker cores with only 1 doing the DX work. Sounds like an FX-8xxx rig to me, except that PCs in general are much more powerful than consoles.

Something I've said before: spreading the work across more cores doesn't make it better. It makes it more appropriate for a small, cramped console that can't cool a decent quad-core chip like those found in PCs. Fewer, stronger cores simply produce too much heat for the tiny space, so they had to go low-power and wider. In the end, the Xbox One has had the exact same limitations as PCs. The PS4 has had its own API, which acts more like DX12 in giving lower-level access, and that explains why the PS4 has had better overall performance than the Xbox One. DX12 will finally give the Xbox the low-level interaction the PS4 has had from the start. If this is the case, then there's no way Bethesda and DICE are doing anything differently when working within the same API confines as PCs regarding the Xbox (a straight DX-to-DX comparison).

It's a lot like Windows itself. If they knew exactly what motherboard drivers you used, that your language was English, what display you have and so on, the Windows OS could be heavily streamlined to work perfectly with that system. Instead it's somewhat bloated and complex. Why? They don't know if you're running an Asus display or an LG, a single hard drive or RAID, which network adapter you have and so on, so it becomes one-size-fits-all to provide compatibility with as many unknowns as possible.

I will somewhat take that back: in a sense, when it comes to drivers, pairing an AMD/ATI card with an FX CPU is somewhat single-thread bound. That's not a limitation of DX11, though; it's AMD's drivers. That's why Nvidia cards run better: their driver spreads work across 3 threads. So again, having the tools isn't the whole story; actually using them is the other half. The one-core-at-a-time communication with the GPU isn't really the huge limiter, as many things that belong together get funneled to the same thread, and other small 'busy' activity then receives a thread of its own. It's in the way the games are coded. The GPU only handles the picture aspect, not the total processing of the game, which would benefit from being multithreaded. Game devs are already not making full use of what they have.

Confetti FX's founder Wolfgang Engel, who is well known for his work at Rockstar's core technology group as lead graphics programmer, says: "Sony's own custom API is more low-level and definitely something that graphics programmers love. It gives you a lot of control. DirectX 12 will be a bit more abstract because it has to work with many different GPUs, while the PS4 API can go down to the metal."

Read more: http://wccftech.com/ps4-api-graphics-programmers-love-specific-gpu-optimizations-improve-performance/#ixzz3hJDNpumX

So again, DX12 will be an improvement over DX11, but coming from console developers to explain it, DX12 is still more abstract and not as low-level/direct as even Sony's own API. This is why console-to-PC ports shouldn't be compared so much; they're not really the same thing. DX11 on the PC hasn't been the flaw that gimped ports from the Xbox. It has to be more 'vague', not knowing which hardware will be running on any given PC, where it's laid out in black and white for a console.

In your analogy of cars, my fear is this: look at current games as soon as they're released. Broken, glitchy, broken, broken. This is while only controlling 2 cars on that racetrack. They can't handle that, and you want them controlling 6 cars without crashing? Good luck. If they have trouble driving a basic sedan, I'd hate to see how they do with something more complex, like a 21-speed semi pulling doubles while trying to back the trailers up. I'm sure eventually they'll get it under control, but being something they're not used to coding for, it's liable to be a train wreck for a while. DX12 releases soon. They'll need to code games for it, tackle it, get it ironed out. By the time most of this gets sorted, I think we're looking well into 2016 or 2017. Adoption of x64 by programs took a long time to slowly become the norm as well. After all that time, even Zen won't be the new kid on the block.

I'm curious to see how all this pans out in the real world, like everyone else. "Wait for the benchmarks and you'll see" sounds very familiar. Wasn't that what AMD said when Bulldozer came out? We saw, alright. AMD still hasn't recovered from that faux pas. The API will reduce the graphics overhead and allow more pretty pictures to be drawn on the screen at once. It doesn't control the positions of the players, the geometry of bullets fired, what constitutes a hit or a miss, relative object positioning from ally to foe, and the rest of what makes a game a game. That's still on the CPU, and that's the part that has to be either single- or multithreaded. That could be 32 threads wide if they wanted it to be, even now, as the sketch below illustrates.
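A minimal hedged sketch of that split, with World, UpdatePhysics, UpdateAI and SubmitDrawCalls as purely hypothetical placeholders:

```cpp
// The graphics API only covers the "draw" half of a frame. Simulation work
// (physics, AI, hit detection) can already be threaded today on any DX version.
#include <functional>
#include <future>

struct World;                        // hypothetical game-state type
void UpdatePhysics(World&);          // hypothetical simulation systems,
void UpdateAI(World&);               // pure CPU work, no graphics API involved
void SubmitDrawCalls(World&);        // the only step that touches DX11/DX12

void RunFrame(World& world)
{
    // Gameplay simulation, spread over threads however the engine likes:
    auto physics = std::async(std::launch::async, UpdatePhysics, std::ref(world));
    auto ai      = std::async(std::launch::async, UpdateAI,      std::ref(world));
    physics.get();
    ai.get();

    // Only this part goes through the graphics API at all:
    SubmitDrawCalls(world);
}
```

Whether the simulation half scales past a few threads is an engine design question, not a DirectX one.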

 

What do you think of the video I posted? It's a talk about DX12 vs DX11 from a few days ago.

 
What I know is, games can sometimes run better on the FX series because they have more physical cores than their Intel counterparts. Even though each AMD core can only produce about 75% of what an Intel core can do, the sheer number of cores available on the FX series can overcome that.
 
^^ Not really; at best, the FX-8350 matches top-tier i5s. There hasn't been a major title released in a while where the 8350 matches current i7s, or even current i5s. Even titles that scale well across cores (GTA V, Witcher 3, anything running on Frostbite 3) favor Intel.

Again, it's really quite simple: when the CPU isn't a bottleneck, the only two things that matter are how strong the GPU is and per-core performance. And the second is controlled during testing, which is why in CPU benchmarks performance is pretty much in order of per-core performance rather than number of cores. If you took an FX-6xxx chip and clocked it to 4GHz like the FX-8350, they would perform exactly the same in games.

Can we PLEASE stop this "more cores adds more performance" nonsense already? Games aren't really affected by CPU performance because CPUs aren't the bottleneck. Thus, adding more cores does NOTHING.
 
Wrong. When the CPU isn't a bottleneck, the only thing that matters is the GPU. Per-core performance doesn't matter either if the CPU is not a bottleneck.

It's based on per-core performance because only one core can talk to the GPU at a time right now. In DX12, all of them can.

Here's your nonsense:

[image: dx12_performance_simulated_cpus_updated-100575301-orig.png]


[image: DAI-CPU-Test.jpg]


You literally have no idea what you're talking about.
 
^^ "Simulated".

Wrong. When the CPU isn't a bottleneck, the only thing that matters is the GPU. Per-core performance doesn't matter either if the CPU is not a bottleneck.

Not exactly. Remember, the CPU needs to do setup and then hand data to the GPU to process. Do the initial CPU workload faster and you can hand off to the GPU earlier, which will have a minor impact (on the order of 1-2 FPS). There is a point where this stalls out because you've hit a hard GPU bottleneck, but when you look at benchmarks, you see a point where faster processors do give a very minimal FPS gain. That's the extra per-core performance coming into play. It's minor, but there.

Secondly, the statement I made only applies when no CPU bottleneck exists. Come up with a benchmark that thrashes the CPU and then yes, more cores lead to more performance. But when no CPU core is being bottlenecked, you will not extract any additional performance by adding more cores.
 
Enjoy your denial while it lasts. The games will show this soon enough :)

If there's any performance increase, the other GPU was bottlenecking.

Nah... That's not the reason. Well, it is, sort of, but not because of per-core performance per se. This phenomenon is easily explained. Obviously there are load changes in the hardware during real-time rendering, since every frame is different. When a peak load hits the hardware, some CPUs bottleneck for a very short time, while a faster CPU suffers from this less of the time, or not at all. This creates the FPS differences between CPUs, and it's why some CPUs can give a much lower minimum fps while still having a better max and average FPS.

This is true, which is exactly why AMD CPUs will receive a performance boost.
 
So with DirectX 12/Mantle, will any of the AMD CPUs/APUs outperform my i7 4790 paired with my R9 290X?
 
What we have here is various people arguing against one who's basing his whole argument on opinion and theoretical figures, and who has so far failed to see that game performance isn't solely (or even largely) determined by the API used to render images. He owns an FX-8350 and denies any current benchmarks showing an i3 above the FX-6300, and even the FX-8350, on average. I'm sure however DX12 turns out, denial will still be present on his side. Probably suddenly jumping to the conclusion that FX-8350 > i7 5960X, despite all benchmarks showing it still beaten by i5s.
 
Short answer: No.
As a reminder, the question was "will AMD CPUs become better than Intel CPUs in gaming".

New multi-threaded games and DirectX 12 will work better for AMD CPUs than older games and older DirectX versions, but Intel has been leading the competition for so long that they'd have to fail really hard for many years for AMD to even reach their level. It's not about technology; it boils down to business and money.

And when we're talking about consumer pricing, you have to remember Intel has a very broad selection and a very large market share in CPUs and associated chipsets. Thus, Intel decides the price. If AMD manages to squeeze a new CPU or similar technology onto the market that is a major improvement over their previous offerings, it will compete better with Intel. However, with Intel having no competition in the high-end gaming market and a very strong market share in those parts where AMD is able to compete a bit, Intel will just adjust their pricing.

AMD will always be the underdog that is able to offer good alternatives to Intel in only a few categories.


The graphs linked by NightAntilli show how DX12 will benefit Intel too, through better usage of Hyper-Threading. Intel dual-core chips are pretty cheap; they aren't as powerful as the best 4-core and especially the latest 8-core AMD chips, but with DX12 one can see the models with HT will get closer to 4-core models, and they have a superior price to AMD's 4- and 8-core models. Plus, they are faster in single-core benchmarks, which is important because not everything can be multithreaded. Multithreaded games will still need code that wraps everything together for each frame drawn.

So, from that perspective, in business and market share (in the gaming category), DX12 might also benefit Intel more than AMD. It might very well make Intel's 2- and 4-core models get closer in performance to AMD's top offerings, and they are already cheaper.

On top of that, in AMD FX models each pair of cores, called a module, shares resources: two integer clusters but only one floating-point unit. Technically it's very different from Hyper-Threading, but they are still not completely individual cores in the common sense.

Complete parallelism is rather difficult to achieve, because a game would need a mechanism from the OS and hardware to guarantee timed execution of the different parallel code (not just graphics, but also sound, possible force-feedback controllers and so on), so that the gamer would not notice any disparity. So far that's doable only on consoles with very controlled hardware.

I only read about half of the discussion so far, but it was mentioned that the CPU is not a bottleneck in most games. Well, actually it is. It only isn't from the limited viewpoint of having a CPU powerful enough and a GPU not powerful enough to be held back by it. But most people don't have that. Most people don't have a high-end CPU, not to mention that gaming on laptops is becoming more and more common, and laptops are a difficult platform because they are limited by their ability to dissipate heat.

Do you own the fastest possible gaming rig in the world? No? Because only a few people can afford it? So we have to remember that money is always the ultimate bottleneck when we talk about practical performance.


Disclaimer: I have an old octa core Intel computer myself. Two quads.
 

No. The i7 is too fast. The FX-8 CPUs would have to be overclocked like crazy to match the stock i7 4790.


Either you understand what I'm saying, or you don't. There is no 'opinion' here. But meh. You will remember me when the time comes.


You're mostly right. But remember that on consoles the developers are forced to use parallelism, since single-core performance there is atrocious. Despite this, there have been multiple games where the console version is simply superior in stability compared to the PC version, due to the hoop-jumping required for DX11 ports. I'm not saying parallelism is easy. I'm saying developers know how to do it when the right tools are provided. DX12 (and Vulkan) will enable pretty much the same accessibility as on consoles. DX11 usually meant throwing brute force at everything. In DX12, you as a developer can choose between brute force and efficiency.

Aside from that, when looking at multi-threaded benchmarks in general, the potential performance of the FX CPUs comes to light, where FX-4 = i3, FX-6 beats i3, and FX-8 can match i5 depending on model. So I don't really agree that it will benefit Intel more. Remember that the FX CPUs are over 3-4 years old, and only now are they gonna be used like they were supposed to. I would fully expect the prices of FX CPUs to go up after DX12 game performance benchmarks start coming out. But that might not happen since new CPUs are already on the horizon.