Will AMD CPUs become better for gaming than Intel with DirectX 12?

Quite a thread this has become, so to recap the basics:
It will make CPUs bottleneck less in games.
Current AMD CPUs will still not beat Intel CPUs.
DX12 will affect performance beyond the CPU (submission calls...).
How DX12 affects performance is dependent on the devs.
 
I would go as far as to say that it will remove all CPU bottlenecks in games (for the time being), provided the CPU can handle 4 threads at once. That includes AMD CPUs.
Current AMD CPUs will not beat Intel CPUs, but current AMD CPUs that are beaten by i3 CPUs today will land between the fastest i3 and the fastest i5 in performance.
How DX12 affects performance will indeed be dependent on the devs. So now it will be more transparent who is a good dev and who actually had trouble with DX11's limitations.
 
^ On that post I fully agree, 100%.

We should all be happy about the performance increase that the release of DX12 and a well-coded game engine will bring.
I expect a roughly similar percentage increase across all chips.
Well, all of us with at least a quad core (be it AMD or Intel). I'm with nightantilli in that I also believe there will be no (or at most a minimal) increase to be found on dual cores - that includes the hyperthreaded i3s - it's incredibly easy to saturate the true cores on these now, which leaves little room for gain on the hyperthreaded cores.
 
So is this the same for GPUs as well? I heard that with DX12, AMD's more recent chips have ACEs (Asynchronous Compute Engines), which are supposed to improve DX12 performance by up to 46%, as stated in this article. http://wccftech.com/amd-improves-dx12-performance-45-gpu-asynchronous-compute-engines/
 
AMD's GPUs will improve more than nVidia's. AMD's GPUs are much more limited by the drivers under DX11 than nVidia's are. This is also reflected in the draw calls benchmarks, and also in Star Swarm;

In DX11 the GTX 980 can literally be fed almost four times as fast as the R9 290x.
[benchmark charts: DX11 vs DX12 draw call and Star Swarm results for the GTX 980 and R9 290X]

Notice the jump from the R9 290X in this image. It's literally 5 times faster under DX12, whereas the GTX 980 is 'only' 2.5 times faster.
[chart: R9 290X draw call results, DX11 vs DX12]


In DX12, the Fury X will probably obliterate everything. AMD simply can't feed the GPU fast enough under DX11.
 
I've seen that benchmark, but I also saw some the other way around. Maybe the game performance will be more dependent on which GPU company the devs work more closely with, since each of them supports their own DX12 features. I guess we can discuss this here all we want, but we can't know for sure until DX12 games come out and we start benchmarking. Witcher 3 will probably be a good one since you can compare DX11 to DX12 (they said they will support DX12 later on). But the bottom line is that with DX12, everyone wins.
 
You have to understand that draw calls testing is influenced by a lot of things. The GPU drivers, the GPU's speed to receive what the CPU is sending, the memory controller, the memory speed... It's not that simple. The i5 is still besting the FX-8350, yes. But like I said in another thread. The current limit of the i5 draw calls is around 2.4 million in DX11. The FX-8350 will be able to do over 14 million in DX12. The i5 might be able to do 16 million, but that difference will no longer be relevant. And in terms of multi-threaded speed, the FX-8350 bests the i5 4690K. The only place where the i5 beats the FX is at memory;
http://www.cpu-monkey.com/en/compare_cpu-intel_core_i5_4690k-412-vs-amd_fx_8350-7

Now that the FX CPUs will be used correctly, their multithreaded power will finally be used.

I didn't make anything up either. One only needs to look at DX11 benchmarks to see that reality. Looking at the draw calls across multiple cores is already an indication. But the core distribution says enough. One moment you talk about benchmarks as being the most important, and the next you go by what Microsoft is saying while ignoring benchmarks...

Yep. You yourself answered the question.

From wikipedia:
"DirectX 11.X is a superset of DirectX 11.2 running on the Xbox One.[43] It actually includes some features, such as draw bundles, that were later announced as part of DirectX 12."

Which is why the Xbox One still has slightly more flexibility than the PC under DX11. There are more things on the Xbox that limit its core usage. One core is reserved specifically for the OS, for example. The rest are free to use, but everything will still have to be sent to the main core that needs to communicate with the GPU. One thing that assists in removing the one-core CPU limit is the eSRAM. The eSRAM can act as a buffer for the other CPU commands, and the main core can just say 'send all those commands to the GPU', and you've worked around the DX11 limitation. A bit of a pain to program, but it can be done, and probably has been done. Which is why some people are claiming the DX12 thing won't help the Xbox itself that much. Obviously, PCs don't have eSRAM lol.

The PS4's GPU is more powerful than the Xbox One's. This time around, due to the eSRAM along with the DX11 API, the Xbox is harder to optimize for than the PS4. But whatever the case may be, the PS4 GPU will always beat the Xbox One GPU. Thus DX12 won't bring better graphics or anything. You'll see parity across the consoles.

Even though this is true, this is why we have certain standards to avoid differences as much as possible. Standards like HDMI, USB, PCI-e, SATA, DDR3/4, a/b/g/n wifi, and so on. The things that change the most are the CPU and the GPU.

You're mostly correct. But the way that AMD's GPUs are designed, it kind of makes it impossible for them to develop good drivers for DX11. It's one of the reasons they developed Mantle, and they love DX12. They have to rely on close to the metal programming for optimization. nVidia doesn't.

Taking the old Assassin's Creed games as an example, they were developed with the Xbox 360 in mind, and even on PC you can see this. The X360 had three cores. One was used mainly for the OS, another was about 50% free, and the last one was completely free. When you run such a game on the PC, you see one core being fully utilized, another at about 50%, and another at maybe 3-4%. It was quite easy to run these games on the PC, since most people had at least a dual core, and the 3-4% you could make up for. Right now, we have consoles with 8 cores, of which generally 6 threads are used for gaming. If you have to translate that into 2 threads, 3 threads or 4 threads, it can be a big pain.

Sure. 2016 will probably be the time when the real performance starts coming out. I won't be upgrading my CPU until at least 2018. I know it will last till that time at least :) But they will do fine really. They have more tools to control the cars, so to speak. There are multiple games already doing the 8-core processing, Ashes of the Singularity being the most notable one.

Yep. But since single core performance is increasing so slowly (even Intel is only managing maybe a 10-15% IPC improvement right now), we're forced to go wide. AMD indeed miscalculated with Bulldozer since it came too soon. But obviously, if we have 8 guns firing on screen and one core has to calculate all of them, we could have 64 guns firing on screen if we could use all 8 cores on that same CPU, theoretically speaking.
 
What kind of a statement is that, the i5 will be able to perform 2.4m draw calls in dx11 but the fx 8350 will be able to perform 14m draw calls in dx12 so it's better than an i5? And yet when the i5 tops the 8350 AGAIN, it's irrelevant? That statement is irrelevant. Let's compare apples to apples shall we?

Using a gtx 970, dx11
i5 - 1.3m draw calls
8350 - 1.2m draw calls

They were virtually identical, so draw calls weren't what was holding back the fx 8350 in the first place. It should also be noted (as it is in just about every dx12 review) that draw calls are simply one metric, not the whole story. It will help alleviate the bottleneck fx 8 core cpu's have had with high end gpus but it's not going to make the cpus any stronger. It just isn't.

Theory means little, results are all that matters. If theory worked, amd's bulldozer/piledriver wouldn't have flopped. Desktop cpu's aren't having to go wide, amd chose that path. Intel hasn't had to work very hard, their 4.5yr old cpus are still ahead of an fx 8350. When you're 10 laps ahead in a race there's no reason to floor it. That's why performance hasn't increased much. When it comes to gaming, intel's cpus haven't been a bottleneck for years, they're still waiting on the gpus to catch up.

I don't put much faith in synthetic benchmarks, there's no such thing as a game or program called passmark. Or cinebench. Looking at something real like playing skyrim while encoding video, the i5 leads by a large margin.
http://techreport.com/review/23750/amd-fx-8350-processor-reviewed/9

Looking at current dx11 benchmarks, considering the draw calls are so close between the i5 and fx 8xxx, why is it that i5's and i7's continue to dominate the top of the benchmarks? Along comes dx12 and increases draw calls substantially, but again their comparative output is similar in this regard. So what does the rest of the game's performance rely on? The cpu, as before. Dx11 didn't keep game devs from coding the rest of their game engine to run on multiple cores or threads. The inherent cpu performance remains for each of the chips and intel's have the stronger design. Even an i7 is a quad core, competing with amd with twice the cores. You don't see intel going wide for this very reason, when a well built quad core keeps up with the competition's octacores, the additional hex and octacore designs from intel are pure icing on the cake and carry luxury prices as they have no competition. Intel didn't suddenly cut everything but their extreme lineup on 2011v3. They don't have to.

Your theory about the guns firing on screen goes back to why intel can do more with less. It processes the data that much faster. Only in a scenario where core performance was identical would core count be more relevant. While it takes xyz time for an amd core to process 8 guns firing, intel has processed those 8 and gone on to process the next 8 before amd's done with its first 8 gun calculations. Which explains how intel cores get more done with half the hardware. If I'm hauling product to an end point and my vehicles can only handle half a ton of material vs the competition's 1 or 1.25 tons per vehicle, my only hope is to be able to run twice as many. We've seen this the past 5yrs between the two different architectures.

Ashes of singularity is still in beta and I certainly don't see any benchmarks out for it yet. In its current phase it's still a speculation fest. Speculation does very little for me, good bad or otherwise. What matters are benchmarks, pair a real game with actual dx12 encoding on a real set of cpus and gpus and see what really happens. That's pretty much all I'm interested in, the rest is make believe. I can draw out 50 ways on paper why amd's weak multicore approach should have been better than intels fewer/stronger core approach but at the end of the day it just doesn't pan out. People make dx12 out to be an amd only boon and it couldn't be further from the truth. It will be beneficial to everyone and gaming in general.

The bit about amd being ahead of its time is starting to sound like a broken record and a way to boost morale when their selling points don't pan out. That's what was said about their x64 cpus, that's what was said about bulldozer and piledriver - that's the reason they put mantle on pause and told everyone to go dx12. Let me guess, if zen doesn't deliver on its promises - it too will be ahead of its time? Maybe if they focused more on humbly being in the current time, they'd actually compete for a change. Look at all the smart tv's out and about, all the various new gadgets to try and get basic internet connectivity to users' televisions. Does anyone think it's a new concept? WebTV existed in the mid 90's, 20yrs ago. Many have probably never heard of it, but it did just that. I guess they were ahead of their time too but in the end it wound up a failure. It was innovative, unique, lots of things - but the bottom line, it ceased to exist. Everything kind of comes down to that bottom line, is it getting the job done and is it getting it done better than the competition.
 
You're missing the point. The point is that the FX bottlenecks at its 1.whatever-m drawcalls, but the i5 doesn't bottleneck at 2.4m. When the FX can have 14m draw calls, it's impossible for it to be a bottleneck at the same games that are not bottlenecking with 2.4m. Yes the i5 is better. But if neither of them bottlenecks, the extra money is practically a waste.

Or maybe the difference is in that small 100k of draw calls. But let's assume it isn't. What would it be then?

I never said it would make the CPU stronger. I said it will make use of all the power that it has that goes unused in games nowadays. Draw calls are indeed only one metric, but it's an indication of how fast the CPU can feed the GPU.

Oh they must now...

4.5 years... That means start of 2011, that means, you're referring to the i7 2600k? Yeah. Single core performance was very slightly faster. Multicore performance was similar. Price of the FX was MUCH better, making it a better value... But meh.

Uh... Performance hasn't increased much because we're nearing the end of Moore's Law and transitioning into Rock's Law.

GPUs catching up... Not really. It's not like GPUs are 'behind' CPUs. Graphics are simply easier to push than other computations. Changing low res textures to high res textures alone puts a huge difference in load on the GPU. Any CPU can be brought to its knees by simply using a lot of physics, a lot of light sources or a lot of AI units on screen. CPUs are actually still the limitation.

Using Skyrim as a reference in the way that it was done there is bollocks. Skyrim is a very single-threaded game and no results on the video encoding have been presented. With no reference to what the video encoding is actually doing, there's no true picture. If the video encoding is also single-threaded, only two cores are used on both CPUs. Obviously the i5 wins in such a case. If the encoding uses all the free cores, then the i5 will still win in Skyrim due to its faster single core performance, but the FX will be finished earlier with encoding the video since it has 7 weak cores available while the i5 has 3 strong cores. But obviously that's not shown in the results presented, so still saying the i5 wins in multithreading is short-sighted, and not understanding what's really going on.

Multiple answers. It's a mixture of all of them.
- Small differences in draw calls can actually be the difference between bottlenecking and not bottlenecking, even if it's only 50,000.
- Draw calls are not constant during real-time performance, and the more 'peaks' there are, the more often the slower CPU will bottleneck. This explains how the FX can sometimes reach the framerate of an i5, but simply experiences a lot more fps drops since it's harder for it to keep the GPU fed (see the toy numbers after this list).
- The difference in draw calls when using an AMD GPU is actually huge between the i5 and the FX. We're talking about more than twice the draw calls here, so drivers are also an influence.

It did keep them from doing it efficiently.

The FX-8 is not really a true octacore. It's basically the same idea as an i7, except the AMD equivalent of 'hyperthreading' is more efficient, which enabled them to call them eight-core CPUs. You probably already know this, but these CPUs have four modules. Each module has one full main core, and a secondary core that complements the first core. Unlike Intel's hyperthreading, which is basically 30% of a full core, the secondary core is more like 70% of a full core. Intel made the better choice for the time that they released their CPUs.

AMD tried to compensate for this by adding the secondary cores, but they go by unused most of the time. Intel's cars can drive faster. The AMD cars need to drive slower, and well, with DX11, six of the eight lanes are blocked. The roads will be opened with DX12.

Ashes of the Singularity has shown enough real-time demos to demonstrate that it genuinely keeps all threads completely busy.
DX12 is not beneficial for AMD only. But AMD will receive the biggest boost in both their CPUs and GPUs. You don't have to believe me. The benchmarks will speak for themselves.

But it's the truth. GCN was also ahead of its time. In that case it was both a good thing and a bad thing depending on your perspective. In the case of bulldozer it was mostly a bad thing. AMD was wiping the floor with Intel's Pentium 4 for a reason. But they risked too much after that.

Depends on what it is. Things are not labeled ahead of their time because of a brand name. They are labeled ahead of their time when they are innovations that people don't end up using soon enough, or that (almost) reach EOL and only afterwards end up being used like they were intended to. I can tell you that the Fury X is also ahead of its time. But it's only shortly ahead of its time. It will wipe the floor with the Titan X under DX12. AMD simply can't feed the Fury fast enough under DX11. But not that it matters in this discussion. We will see what Zen will be. People are under the impression that I think DX12 is magic. It's not. I simply know what it will do when used properly. But everyone likes to think that things will stay exactly the same as they are, and that things will change exactly the way they have in the past, which is slowly and only more taxing. This time it's different :)

Would suit them great actually, considering they have to cut back on their research department. They have spent a lot of effort innovating on things that go by unused. They overestimate the intelligence and adaptability of their market.

You are right. That's the exact definition of being ahead of their time. Same goes for Kinect on the Xbox for example, or the eye toy for the PlayStation. Being ahead of your time is not a good thing for business most of the time. But it does show the vision that a company has.

But I'd like to make you think a little with a basic question. AMD has the reputation of being power hungry and weak, the exact opposite of what anyone would want in consoles these days. So, why were their CPUs used in the most recent consoles, and not Intel's? Why is the next Nintendo console also rumored to be using AMD? If a $150 i5 is better than a $150 FX-8 in performance, power consumption and heat, why not use the i5 in consoles?
 
I'd now like to point out the very minor fact, that game developers have been using as few draw calls as possible for decades now, to get around that limitation. So point being, while a bottleneck is being removed, you're not going to see a massive leap in performance simply because you can do more draw calls.
 
you're talking about budget PC gamers. if someone saves $75-150 on cpu and motherboard that's already a decent upgrade on your GPU. going from a $200 card to a $300+ card is quite a difference. if someone is saving up for Fury, Titan, or even 980ti that ~$100 savings will cut maybe a couple weeks to a month of wait off also.

by the way you're explaining it, adding 2GB of vram and slapping a "ti" on it also shouldn't make any difference. seems the same situation to me except nvidia added a few hundred more dollars than amd.
 




Skyrim uses two main executing threads. It will benefit slightly from a third core over a dual core, as there's the OS and the little background processes that have to be executed at all times.

The quoted benchmark result makes a lot of sense, because in the FX the thing separating its module-based cores from being genuine SMP-capable cores is the fact that each pair of cores shares one floating point unit. So when Skyrim is able to fully run on two cores, and also have two separate floating point units, there's no problem on a 4-core FX. But when attempting to do video encoding at the same time, which is very floating-point intensive, the 2-module 4-core FX 4xxx models can't take it without a performance penalty, whereas the 3- or 4-module, 6- or 8-core FX 6xxx and 8xxx models can run Skyrim fine without a noticeable penalty and then do the encoding on the remaining resources.

But besides the results, I'm not so sure how meaningful such a benchmark is. Video encoding should be done hardware accelerated. Hardware decoding and encoding has been a standard feature in decent graphics cards for quite some time now. I don't know why it's not being used more; does it require a paid-for licence for encoders to take advantage of it, and do people prefer just to pay for hardware?


Consoles:
Well it's probably about the price and history (experience). Console hardware is pretty cheap and tightly integrated, AMD has had several prior models in past consoles, and AMD also has graphics cores powerful enough for consoles, which have been used in prior and current consoles. Intel doesn't have any of that history.


Cores:
In servers, both CPU manufacturers are going very wide with cores. The number of cores in consumer-grade computers will continue to increase as more and more software is able to easily take advantage of the cores without very difficult and specialized programming.


DX12:
I also believe that it might fundamentally change the way graphics are drawn in games, making non-linear progress in both performance and demand for performance, since, as mentioned, it will allow much more native simultaneous multiprocessing of everything. We've had multiprocessing hardware for ages now, but games have been partly limited in their ability to take advantage of it.
 
Comparing consoles and desktop with one another (let alone commercial servers) is apples, oranges and lemons. Asking why they chose amd for consoles when amd are supposed to be so hot and power hungry. We're talking 1.6ghz to 1.75ghz amd based soc's which are low power, low heat designs (similar to those found in smart phones) compared to the fx 8xxx series running 4-5ghz with tdp's of 125-220w. Huge difference. Try putting a dual core cpu from a desktop line in a smart phone, whether it be amd or intel - neither would be kept cool. On the flipside, try using a pc based on a 1.6ghz soc. The experience would be horrendous. Can a smart phone play crysis 3 or run sony vegas or after effects? Not hardly. As weak as smart phones are by comparison, or a tablet compared to a laptop which is well known to be weaker than a desktop equivalent, could you imagine running them on 1-2 cores? Going wide was in fact their only solution even if it meant more extensive coding of programs. What's the alternative, have an iphone with a hyper 212 evo hanging off the back of it? Not to mention a battery that would last all of 10-15min.

Consoles are similar in that they need to be kept down in size, no one wants a console the size of their parents old 4 head vhs hanging around. Soc's are cheaper to manufacture than individual cpus and mobile gpus or consoles could be running 980m's which are MUCH stronger than the soc. Then again the price of a console would jump drastically. In a tight compartment, heat control is extremely important as is power consumption. The alternative is to run a mini itx build with much higher power consumption, larger footprint, higher heat output etc and essentially you're now playing on a pc, not a console. Aside from the fact that both can play games, the two (pc/console) really aren't the same things. In some ways one tries to mimic the other but they're not the same.

Same could be said of pickup trucks and semis. Both can carry trailers over the road but that's about where the similarities end. One might be pulling 10k pounds while the other is pulling 80k. One takes up far more space than the other, has far more gears than the other, more horsepower. The pickup may cost $120 to fuel up while the semi costs over $1000 to fuel up. So why use the pickup? Well if all you have to haul around is a small moving trailer or a speedboat, the rest is overkill. It's more maneuverable, costs less to run, costs less to purchase, cheaper to repair etc. It's a weaker, more efficient design for the task at hand.

Despite the fact that both a dodge pickup and a peterbilt semi can both be running a cummins diesel engine the two aren't the same. Just as the soc in consoles aren't the same as an fx chip, regardless if amd makes both.

I agree that dx12 should allow games to fundamentally change. Whether or not that will happen is another story. In other areas tech has advanced and adoption has been slow. 64bit applications, html5 and so on. Great ideas with a lot of potential but seems like it can take a long time to be incorporated. Plenty of web dev's out there who just refuse to update with changing 'standards' and insist on things like a completely flash driven webpage or using other things that make little to no sense. Hopefully this isn't the same for gaming and dx12.

 


I don't think it's apples, oranges and lemons. Smartphones, consoles, desktop computers and x86/x64 servers have all gone wide in cores, just like everything above them (mainframes, supercomputers, whatnot) did decades ago.

Internally, computer CPUs have been "multicore" for ages, starting from the first Pentium and the K5. They break down each x86/x64 instruction into micro-ops which may be executed in parallel. Nowadays even simple smartphone CPUs are superscalar and multicore.

It is progress to which there is no foreseeable end. 10-15 years ago only a little software was multithreaded and could execute its main computation in parallel on (regular) computers.

Now, as processing capacity mostly grows from adding more cores, it means all software that needs serious processing power will have to execute internally in parallel. And not just the software the user is able to see and interact with; the drivers and APIs like DX12 too, basically everything but Calculator and Solitaire.

The progress is not going to stop at some ridiculous arbitrary 4 or 8 cores, or X cores. That's very obvious.

Once most of the software is ready for it, one should expect to see many more cores in mainstream computers, simply because it's cheaper to improve performance that way, and because competition exists. One manufacturer makes a CPU that has 6 cores and a certain speed. Another makes a 10-core model with similar performance. The first manufacturer then makes an 8-core model. Again the competitor responds with a 14-core model.

Of course, adding more cores will bring smaller performance improvements with every new step, so other improvements in hardware and software will be made too. It's quite reasonable to see Intel leading the development and AMD competing especially on price.

On the server side AMD is no better. For the most equivalent AMD-Intel pairs in terms of processing capacity, AMD loses in power consumption, and thus they are cheaper.

That's what I meant when referring to servers. The kind of CPUs we will have in mainstream (desktop) computers in 4-10 years is already in use in high-end servers today.


I think, in the long run, maybe after 10-15 years, we might see solutions where gaming computers become just simple terminals for central computers where all the processing happens. All it takes is a fast enough connection interface for the HID devices. Optical fiber cable is fast and powerful enough for that, and it's being built into new apartment buildings.



As for the question about why Intel doesn't have their CPUs in consoles: it's not because AMD CPUs are any better. Intel also has the Atom line, which has comparable TDP and processing capabilities. Console designers probably have not even considered Intel, as Intel has no software expertise in consoles and no GPU cores powerful enough for an integrated single-manufacturer SoC solution like the PS4 and Xbox use.
It simply would be too expensive, too slow and too uncertain to make a console based on Intel CPUs. If someone were to do it, it would have to be Intel or some new entrant to the console market.


And computers can't really be compared to trucks. Trucks have physical and legal limits for their size as well as their speed, that is, their "processing capacity". They have been roughly the same size for decades. With trucks the improvements go mostly into efficiency. Same for cars; they have no reasonable "processing capacity" left to improve.
 
I didn't say that future cpus won't have additional cores, but the progress on that front is really quite slow. While amd has affordable 8 core cpus, they still can't keep up with intel's quad hyperthreaded designs. Then of course you go to the enthusiast platform like x99 with 6 and 8 core cpus to which there has been no competition for intel. Why would they have to keep adding more cores any time soon? At the rate amd is going, they would have to have a 16 core cpu to try and keep up with 6 core intel cpus. In that light, it's not that cpu's 'have' to have 16 cores, amd's poor compute power requires it. The same doesn't apply to everyone, certainly not to intel who makes better use of their fewer cores.

Trucks are a decent comparison, even in states with little restriction or trucks used on private property they have weight limits because of physical limitations. Some logging firms will use a single tractor to pull 8 trailers, eventually they'll be unable to cope with the load. The average suburban family has no need to haul all that at once, just as most home pc users have no need for server pc's. As it is, most 'general' use users don't stress a quad core much, let alone a 6/8 core cpu.

It would be nice if there were such a solution as cloud compute for gaming, but depending on fast enough internet (at least in the u.s.) that's a pipe dream and a half. Many areas don't have current broadband, are reduced to cruddy speeds, lacking access to cable, dsl, fiber etc much less the capacity for that. To give an example, I live in a smaller town. Within 50mi of a major metro area. The phone lines around here were installed back in alexander graham bell's day I'm pretty sure. Less than a mile down the road from me is a co from the telco, perfectly capable of supplying dsl. Has been the past 13yrs I've lived here. The telco whines that their ancient cables are too expensive to replace to provide dsl. Given that bit of info, does anyone think they'll have another high speed solution around here anytime soon? This is just one example reflecting similar scenarios all around the u.s. We're 17th place (or worse) for internet speed compared to the rest of the world. In 10-15yrs unless a miracle happens, not much will change. That's the reality of it.

Amd and intel have both had quad core offerings for the desktop since 2007. We're now well past halfway through 2015, a good 8 almost 9yrs later and aside from power users or enthusiasts there's no real 'need' for more than 4 cores. Technology isn't exactly exploding in this department. As it is, a good portion of the people moving to the x99 platform are doing so not because they need 6-8 core cpu's but because they want the additional pcie lanes for heavier sli setups. If the current mainstream platform had 40 pcie lanes I'd be willing to bet we'd see even less people going with the 2011/2011v3.

If amd's ipc performance were more in tune with intel's, they wouldn't need to crank out cheap 8 core cpus either. It's already been shown that with a better and more efficient architecture there's no need to add as many cores as they can squeeze onto a die. Intel's been doing it year after year, topping the charts with half the hardware. Many (not all) amd users get the 8 core chips because they're cheap. It's true that in multithreaded or heavy multitasking environments the fx 8xxx is going to be a lot better than say a quad core fx. Then we turn back to intel and see they're doing even better yet with again, only 4 cores. So the 'need' to go wide that's been around the past several years is really only amd's 'need' to put a bandaid on their poor ipc. Intel's been doing just fine with quad core designs and continues to do so.

Will this change 'eventually'? Probably. Anytime soon to be concerned about making current cpus obsolete or bottlenecked before they would have been replaced with updated versions anyway? Not likely. By the time there's a real need for such hardware, it will be available and an obvious consideration. Just as today a quad core cpu with around 8gb of ram is an 'obvious' gaming choice. The sort of competition you're suggesting in a core battle, similar to the speed battle between intel and amd, isn't realistic. Over the past 8yrs, we've seen intel and amd both with quad cores. Intel is still producing chart topping quad cores and amd has had to try to beat their competition with 6 cores, then 8 cores - while intel is still cruising comfortably at the top with quad cores. It's not a trading blows scenario, it's intel using the same number of cores perpetually waiting for amd to play catch up. This suggests not an industry wide problem, rather an amd specific problem.
 


Seems that you are an Intel fanboy. Don't get me wrong, I like Intel and it is clearly the winner. But the way you are describing AMD is like comparing Intel to God and AMD to Satan.
 

depends on which i3 or i5 you are comparing of course. my i5-3570K @ 4.7 got the exact same FPS across all my games as my FX-8350 @ 4.8. same GPU (4GB 770), same memory (8GB 2133). moving on to an i7-4790K @ 4.6GHz, all the same games jumped up ~10-15 fps.
real life figures instead of theoretical and fan hopefulness.
 
Leaving this here. Not necessarily that much DX12 related except for a small mention in the conclusion, but it does show the relevance of the FX CPUs.
http://www.technologyx.com/featured/amd-vs-intel-our-8-core-cpu-gaming-performance-showdown/5/
 
