Will AMD CPUs become better for gaming than Intel with DirectX 12?

I think the ultimate answer is "it depends".

It depends on the title. As we've seen with DX11, which allows up to 3 CPU cores to be utilized, almost NOTHING is coded to that DX11 limit, even today. The rare game engine that expands past the DX11 limits usually does it with a mixed bag of tricks: dumping non-GPU-related tasks on other cores, that sort of thing.

DX12 is supposed to be limited to 6 cores. The chances that game designers, who rarely design a game to make use of more than 1 or 2 cores when DX11 has been out for 7 years and allows up to 3, will suddenly start coding for 6 are pretty slim. Finally, there are parts of a game title that an API still has no influence over; there will be titles that love fast cores no matter what. While I expect that properly coded titles, designed to take advantage of all available cores, will perform nicely on an AMD, I still expect an i7 will be able to provide higher minimum FPS than an FX 8-core.
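
To make the "coding for more cores" point concrete, here's a rough conceptual sketch of the per-thread command recording that DX12-style APIs permit. This is not real D3D12 code; CommandList, RecordDrawCalls, and Submit are made-up placeholders standing in for the actual API:

```cpp
// Conceptual sketch only: CommandList, RecordDrawCalls, and Submit are
// placeholders standing in for a real graphics API, not actual D3D12 calls.
#include <thread>
#include <vector>

struct CommandList { /* recorded GPU commands would live here */ };

CommandList RecordDrawCalls(int firstObject, int lastObject) {
    CommandList cl;
    // ... walk scene objects in [firstObject, lastObject) and record draws ...
    return cl;
}

void Submit(const std::vector<CommandList>& lists) {
    // ... hand all recorded lists to the GPU queue in one go ...
}

int main() {
    const int kThreads = 6;    // the per-frame core count discussed above
    const int kObjects = 6000;
    std::vector<CommandList> lists(kThreads);
    std::vector<std::thread> workers;

    // Each thread records its own slice of the scene in parallel. Under DX11
    // this work serializes on the immediate context, which is exactly the
    // bottleneck DX12 lifts; engines still have to be written to exploit it.
    for (int t = 0; t < kThreads; ++t) {
        workers.emplace_back([&, t] {
            int chunk = kObjects / kThreads;
            lists[t] = RecordDrawCalls(t * chunk, (t + 1) * chunk);
        });
    }
    for (auto& w : workers) w.join();
    Submit(lists);
}
```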

The ultimate consideration will still be "is it good enough?" In my opinion, an FX 8-core is good enough if it's overclocked to around 4.8GHz, as long as you're gaming at 1080p on a 60Hz monitor. If you have a 144Hz monitor, you're going to want an Intel. And I don't think DX12 will really change that either.
 


Do remember though: the CPU is the driver of everything, and you need to consider the cost of the initial CPU setup required before the GPU can begin its work, as well as the time it takes to go through the software driver, which again is CPU bound. Increasing per-core performance therefore still yields small FPS gains even when GPU bound, simply because the GPU can begin its work a few ms sooner than it would on a slower CPU. That's why you'll often see benchmarks where, past a certain point, newer CPU architectures gain 2-3 more FPS over the previous one even though it's clear you are GPU bound. So there are minor effects from increasing CPU IPC when GPU bound, up to a point. It's a second-order effect on performance though.
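
As a back-of-envelope illustration of that "a few ms sooner = a couple of FPS" claim (all numbers here are invented for the example):

```cpp
#include <cstdio>

int main() {
    // Hypothetical numbers purely for illustration: a GPU-bound frame where
    // the CPU must finish setup + driver work before the GPU can start.
    double gpuMs     = 16.0; // GPU render time per frame
    double cpuSlowMs = 3.0;  // CPU setup/driver time on a slower CPU
    double cpuFastMs = 2.0;  // same work on a CPU with better per-core speed

    // If the GPU can't start until CPU submission is done, frame time is the sum.
    double slowFps = 1000.0 / (gpuMs + cpuSlowMs); // ~52.6 FPS
    double fastFps = 1000.0 / (gpuMs + cpuFastMs); // ~55.6 FPS

    printf("slower CPU: %.1f FPS, faster CPU: %.1f FPS (gain: %.1f)\n",
           slowFps, fastFps, fastFps - slowFps);   // gain of roughly 3 FPS
}
```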
 
The answer is in the bag, I think. It's SSE. At first I thought the game was using AVX2, but it's using SSE2 instead. As Oxide said, it's built into their engine:

"We have heavily invested in SSE ( mostly 2 for compatibility reasons ) and a significant portion of the engine is executing that code during the benchmark. It could very well be 40% of the frame. Possibly more."

SSE2 is an Intel technology:
https://en.wikipedia.org/wiki/SSE2
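
For illustration, here's a minimal example of the kind of packed SSE/SSE2 math an engine could lean on. This is not Oxide's code, just the style of instruction mix being discussed:

```cpp
// Minimal SSE/SSE2 sketch (not Oxide's actual code): packed single-precision
// math of the kind a game engine might execute heavily every frame.
#include <emmintrin.h> // SSE2 intrinsics (pulls in SSE as well)
#include <cstdio>

int main() {
    alignas(16) float pos[4] = {1.0f, 2.0f, 3.0f, 4.0f};
    alignas(16) float vel[4] = {0.5f, 0.5f, 0.5f, 0.5f};
    const float dt = 0.016f; // ~one 60 Hz frame

    __m128 p = _mm_load_ps(pos);
    __m128 v = _mm_load_ps(vel);
    __m128 d = _mm_set1_ps(dt);

    // pos += vel * dt for four floats at once; throughput on exactly this
    // kind of packed FP work is where the FX chips' shared FPUs hurt.
    p = _mm_add_ps(p, _mm_mul_ps(v, d));
    _mm_store_ps(pos, p);

    printf("%f %f %f %f\n", pos[0], pos[1], pos[2], pos[3]);
}
```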

AMD implemented this, but its CPUs are not as good at it as Intel's, and the performance difference is quite similar to the performance differences in the Ashes benchmark:

[Attached image: AIDA benchmark comparison]


SSE2 leans heavily on the FPU, but Steamroller improved some of these weaknesses, which is why an A10 is beating the FX CPUs. An Athlon 860K is a better choice in this regard than an FX-8.

Generally, the SSE instructions are where the FX CPUs perform worst. In integer math, an FX-8 matches an i7 4790K and beats a 6700K. But games generally use the FPU of the CPU rather than the integer units; I don't think there are any games out there that rely mainly on integer math, since that would be impractical. Combine the FPU limits of the FX CPUs with their memory starvation, plus SSE2 being Intel tech, and we know why Ashes performs like it does on FX CPUs.
 
That would make sense, and it's one of the reasons AMD's CPUs find themselves limited. The module design strikes back with too many shared resources between the cores, which sparks the debate over whether the FX 8xxx chips are true 8-core CPUs or not. Every module, consisting of 2 CPU cores, has only one FPU (among other resources) to share, where Intel gave each core its own FPU. Cutting corners to make cheap multicore CPUs is biting AMD in terms of performance. If they hadn't opted for modules and had treated each core as its own entity, AMD would be performing quite a bit better. On top of that, it's still suffering from much lower IPC.

I think that's what is being seen in the benchmark: 4-core CPUs beating 8-core CPUs. They both have 4 FPUs, so the difference between them comes down to the IPC difference. It could be made worse by contention when the 2 AMD cores within a module fight for the same resources (the FPU).
 
I'm in the process of trying to find someone who can disable a core on each module, to see how it performs when no FPU sharing takes place. But there aren't many motherboards that support this, and not many people are willing to pay the price of a full game for a pre-beta benchmark.
 
It's hard to say... I have an FX-8350 with an R9 270X 2GB and I play all games on max settings and never have any issues; I've never seen under 55 FPS, even in games like Shadow of Mordor while recording the gameplay with 4 apps open and a download running. I've seen i7s do worse. All I'm saying is that no one can say one is better than the other; it's the user that's using the CPU and how they use it. And people really can't compare a CPU that's years old to a new one. The FX-8350 is from 2012; for a chip that old, it's doing fine, and at a low price. You can build an AMD gaming computer for less than a new i7. I did. Last thing, people, always remember: AMD = MAD and Intel = Intelligence needed. Both have problems.
 


You can do it easily enough on a per-app basis by using a program to modify core affinity.
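
For example, the Windows affinity API can restrict a process to one core per module. This is a hedged sketch that assumes a hypothetical FX-8 layout where logical CPUs pair up as modules (0,1), (2,3), (4,5), (6,7); verify your own topology before relying on any particular mask:

```cpp
// Sketch: restrict the current process to one core per module, assuming a
// hypothetical FX-8 layout where logical CPUs (0,1), (2,3), (4,5), (6,7)
// are the four module pairs.
#include <windows.h>
#include <cstdio>

int main() {
    // Bits 0, 2, 4, 6 -> cores 0, 2, 4, 6 (one per module) = 0b01010101
    DWORD_PTR mask = 0x55;
    if (!SetProcessAffinityMask(GetCurrentProcess(), mask)) {
        printf("SetProcessAffinityMask failed: %lu\n", GetLastError());
        return 1;
    }
    printf("Process pinned to cores 0, 2, 4, 6\n");
    // ... run the FP-heavy workload from here; each of its threads now has
    // a module's FPU to itself ...
}
```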
 
Which then begs the question of what cores are being disabled and what is going on, since honestly, that shouldn't be happening. It would point to something else going wrong somewhere in AMD's frontend, since from a software perspective, masking off one core per module via affinity should essentially take those cores out of the picture without any other impact. To say "it doesn't work" without finding out why kind of defeats the purpose.
 
AMD processors will see a performance boost, but only a small number of users may actually switch to them, and with the release of the Skylake series, a shift in the market is unlikely: many people have loyalty to Intel, and Intel continuously produces better processors than AMD. Also remember that certain applications use features introduced by Intel that can't be guaranteed to be present in AMD CPUs.
 
Well, this is certainly interesting:

http://www.extremetech.com/gaming/214834-fable-legends-amd-and-nvidia-go-head-to-head-in-latest-directx-12-benchmark

Changing the system power settings changed the DX12 results. Maybe time to re-test Oxide and see if NVIDIA's numbers magically recover?

EDIT

http://techreport.com/review/29090/fable-legends-directx-12-performance-revealed

980 Ti > Fury. Ironic that the NVIDIA card is more sensitive to using more threads, though performance did top out at 4 threads, as is normal under DX11.
 
And the same with the Core 2 Duo and the AMD 64 X4 chips. Why do people like Intel so much? Yes, they are better than AMD, but AMD matches Intel's performance for the age of the CPU, and for much less. People told me to get the new i7 for my rig. I built my whole rig for less than the i7 costs, lol, and the FX-8350 still ran almost on par with it. I ran them side by side with a friend who's a computer tech, and he couldn't believe it. For a 3-year-old chip to be compared to a chip from this year is something. The i7 is better, but not by much, and DX12 is going to help all chips and GPUs that support it, some more than others.
 
It depends on what you're doing, what type of tasks: gaming only, light web browsing, video encoding, etc. Intel's i7s have no AMD counterpart. The only reason AMD chips are so cheap is their poor performance. I don't believe for a minute that anyone can build an entire FX-8350 rig for the cost of an i7, unless you're referring to the 5960X. The FX-8350 has trouble running on par with a stock-clocked i5 nine times out of ten, much less an i7. They're only comparing a 3-year-old chip to the latest and greatest because, for AMD, that IS their latest and greatest, not because Intel had to go back 3 years to find an AMD chip it could take on. It's not Intel's fault they keep moving along with new product releases while AMD sits on its laurels.

Everyone refers to how cheap AMD is by comparison, but they have short memories. The i5-2500K had a release MSRP of $216; the FX-8150 Bulldozer had an initial release price of $336 (245 euro), both in 2011. When AMD's FX got embarrassed by the cheaper i5, they had no choice but to begin slashing prices right away.

http://news.softpedia.com/news/AMD-Is-Already-Cutting-8-Core-FX-8150-Bulldozer-CPU-Price-228076.shtml
http://www.pcmag.com/article2/0,2817,2383921,00.asp

The FX-9590 had an embarrassing price of $880 at Newegg before prices 'fell' to $390ish (300 GBP). It's now priced at $224, almost cut in half again and currently around 1/4 of what it used to cost. AMD fans are quick to play the pricing game when it suits them rather than tell the whole story. But given the performance of an FX-9590 compared to even an i7-5930K at around 2/3 the price, tell us again how AMD has better bang for the buck. It can't hang with a rather 'pricey' newly released i7-6700K at $365.

The difference is that when performance comes to light, products have to be priced according to their performance levels. You'd have a hard time trying to sell a Ford Focus for the same price as a Corvette; it wouldn't move at all. Number of cores, size of the cache, gigahertz, etc. don't matter; in the end it's about real-world performance.

If the particular games someone plays show little benefit from one CPU vs the other, then of course pick the more affordable one that suits the need. But that someone doesn't notice MS Notepad running any faster on one than the other doesn't mean there isn't a difference; it just means their particular tasks don't take advantage of the extra performance, not that it doesn't exist.

It really depends on the game: is the user playing primarily one game or a variety of games, and which games? Just like any other group of programs, MS Paint and Adobe Photoshop both alter images, but obviously one is a lot more intensive on hardware than the other. In GTA V, as an example, the i5 outperforms the 8350 by around 12% more FPS. If price is the argument, and someone says the $65 price increase isn't worth the 12.5% FPS increase, they may have a point. So then consider the i3-4130. It performs within 5% of the 8350 and costs $51 less. That's roughly the same price differential from the i3 to the 8350 as from the 8350 to the i5, yet the i5 gives nearly 3x the performance difference.
http://www.techspot.com/review/991-gta-5-pc-benchmarks/page6.html

Witcher 3, same story.
http://www.techspot.com/review/1006-the-witcher-3-benchmarks/page5.html

It would be nice to have some solid CPU benchmarks for streaming while gaming, for those who choose to do so, but it's very difficult to reproduce. Connection speeds vary, hard drive setups vary, services vary (Twitch, Justin.tv, etc.), as does the software people use (OBS, XSplit, ShadowPlay, etc.), not to mention the gaming resolution, encoding/streaming preferences, framerate, and so on. There's just too much to try and cover; change any one of those variables and the results are useless.
 
I understand what you are saying, but I can say my FX-8350 walks onto the field and outperforms my friend's i7 in a side-by-side comparison. His i7 isn't the new one, but it's still an i7, and it's being killed by my FX-8350. There are also benchmarks where I've seen Kaveri beating the i7, the i5, and the i3, so are the benchmarks from those people wrong, or is it just the benchmark they use?

In the real world I haven't seen an i7 walk all over the FX-8350; yes, the AMD gets beat, but not by much. And AMD doesn't rush into things; they try to do all the testing themselves, where Intel has the user test the product and fixes problems in new models of CPUs. AMD is making their comeback through console gaming: the Xbox One and PS4 both use AMD, and I can't say Intel could have made a custom chip like AMD did for those systems. Intel is great, but every Intel I ever got had its share of issues; all my AMDs were perfect, no problems. I'm not saying AMD is better or Intel is better; it's the user of the system in question and how they use it. I've never seen over 45% CPU usage, and I play all the new games on max settings, so if the newest games only get me to half my CPU's power, I think it's fine. And yes, I did build my rig for less than an Intel Core i7-3970X Extreme Edition or an Intel Core i7-4930K; I did my rig for under 700.
 
In purely integer-based workloads, AMD > Intel. This is why AMD does well in benchmarks, since they tend to be biased toward integer performance. In real-world usage, where FP is much more common, the reverse is true.
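
As a toy illustration of what "integer-biased vs FP-biased" means in practice, here's a micro-benchmark sketch. The loop bodies and counts are arbitrary, and a serious benchmark would need warm-up runs and more care than this:

```cpp
#include <chrono>
#include <cstdio>

// Toy micro-benchmark: the same iteration count in integer vs floating-point
// math. Illustrative only; volatile accumulators keep the compiler from
// optimizing the loops away entirely.
int main() {
    using clk = std::chrono::steady_clock;
    const long N = 100'000'000;

    volatile unsigned long iacc = 1;   // unsigned so overflow is well-defined
    volatile double        facc = 1.0;

    auto t0 = clk::now();
    for (long i = 0; i < N; ++i) iacc = iacc * 3 + 1;          // integer units
    auto t1 = clk::now();
    for (long i = 0; i < N; ++i) facc = facc * 1.000001 + 0.5; // FPU
    auto t2 = clk::now();

    auto ms = [](auto a, auto b) {
        return std::chrono::duration<double, std::milli>(b - a).count();
    };
    printf("int: %.0f ms, fp: %.0f ms\n", ms(t0, t1), ms(t1, t2));
}
```

A benchmark suite weighted toward the first loop flatters a chip with strong integer throughput; one weighted toward the second exposes a weak or shared FPU.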
 


That sounded so far off that I checked.

http://geizhals.eu/?phist=973468&age=9999
Price of the FX-9590 across different retailers. So in 2 months its price dropped to nearly 1/3. And before that it had a ludicrous price, comparable to a mid-range Xeon. Yeah, right. Even if that were correct, pretty much no one would have bought one.

http://www.hardwarecanucks.com/forum/hardware-canucks-reviews/62166-amd-fx-9590-review-piledriver-5ghz.html

So, according to this and many others, it was not initially sold separately, but first introduced as an OEM part. So maybe a few pieces were being sold fresh, or maybe they were stripped by shops from complete computers; maybe a few actually were even sold for that price. But quoting that price as the normal retail price, definitely not.
 
Akseli, check around at various AMD chips and their release prices. It's not the first time AMD has had CPUs at over $500, and that's not including their Opterons either. You're right that AMD's original plan was for the FX-9590 to be OEM-only, until people complained they wanted retail access to it. It first came as an OEM chip with recommendations for water cooling, then was offered at a higher price bundled with a water cooler. It was insanely expensive when it first came out, overrated by AMD (which tends to be their norm), followed by several significant price slashes once performance came to light. Also typical of AMD CPUs: they tend to release high and then drop like a rock, where Intel chips typically stay near their release price and fluctuate very little by comparison.

When I say prices tend to start high, it's more on their newer tech. When they released rehashed versions of their CPUs, like the FX-8350 for instance, the original release price wasn't crazy like the 9590's was. It still dropped a good amount, but they already had a basis for FX 8xxx chips when the 8350 came out, so it wasn't really introducing a new model, just a tweaked version of an existing product.

Quoting it as the initial release price is in fact correct. It's no more inaccurate than quoting Skylake as first made available to the public at its launch pricing. If you were a potential customer who wanted the FX-9590 at the time it released, that's the price you were looking at. It's easier to look back on it now and say prices are nowhere near that. No, they're not; in typical AMD form it got slashed repeatedly and heavily when the performance wasn't there, and it's also not a new CPU anymore. The initial release price was still a 'bargain' compared to Intel's 8-core i7 running right around $1000, but real-world performance quickly revealed not all 8-core CPUs are equal, and when it struggled to keep pace with Intel's quad-core i7 in actual capability, they couldn't very well leave it at $800 for long and expect to move any off the shelves. Hard to sell a Mazda for $250k next to a Lambo at the same price.

There are plenty of benchmarks out there, done by tons of people, with similar results. The i5s are stronger and cost a bit more; nothing surprising there. In gaming, it depends on the game. If the game isn't CPU bound, AMD CPUs do just fine. When it is, they lag behind and have significantly lower minimum FPS. They've been compared so much that it shouldn't be a mystery anymore, and the results concur with one another across multiple tests. Looking at those benchmarks in single-GPU vs CrossFire, it's also obvious that the GPU is the bottleneck in those games, or the values wouldn't change that drastically once a second card is added. On a 750 Ti both CPUs will do roughly the same, since the GPU is holding the system back. Very few gaming benchmarks are done with an R9 280 equivalent; they're typically done with a 290X, GTX 970, 980, 980 Ti, etc.

Synthetics try to be helpful, but unless someone's job actually consists of solving for prime numbers, they're pretty irrelevant. I've never played a game or edited movies in a program called Firestrike or PassMark or anything else. It's sort of like fuel economy estimates: the sticker on the window of a car can say it gets 35mpg all day long, and in a lab maybe it does. If I buy said car, drive the same averages listed on the sticker (40 hwy / 60 city or whatever the case may be), and never get better than 28mpg, the sticker doesn't make my car go any further on a tank of gas. In the end, it still gets 28mpg.
 
Well, well... We do have an example of what I was saying previously, regarding an FX-6 outperforming an i3 and an FX-8 being able to deliver i5 performance...


https://www.youtube.com/watch?v=Rutk9ErhKG4
But once we hit 01:12 – a tour of Novigrad City on horseback… well, then we see some big changes. This area can hit 80% utilisation across all eight threads on a Core i7 4790K! The less powerful the CPU here, the more CPU stutter you encounter. The G3258 – overclocked to 4.5GHz – doesn’t work out too well.

Also, check out the performance of the FX-6300 and FX-8350 starting from 02:26. In the initial cut-scenes they fall a little short of the Core i3 and i5, but once we hit the heavy Novigrad area, they compete very nicely – the FX 6300 moves ahead of the i3, while the FX 8350 is very close to the 4690K.


Read more: http://wccftech.com/witcher-3-cpu-benchmarks-fx-63008350-i7-4790ki5-4690ki3-4130g3258-oc/#ixzz3oPSk7gd6

How many additional games will achieve this? Well... time will tell...
 
Very few games are ever that CPU limited, and remember, DX12 is largely removing those CPU bottlenecks anyway. Hence why I continue to hold that Intel's lower-cost offerings are going to benefit more than AMD's lineup when DX12 becomes mainstream.
 
In the first bit, comparing the i3, FX-6300, i5 and 8350, we see the i3 and i5 consistently 6-8 FPS higher than the FX chips. Around 2:47 you see the 8350 12 FPS behind the 4690K, with worse frametimes than the i3: the worst of the bunch, as a matter of fact. For the rest of Novigrad, the FX-8350 is still consistently behind the 4690K in both FPS and frametimes.

As for the section at 1:12, despite using all 8 threads of the i7, why didn't they OC the 4690K to the same speed as the 4790K? Otherwise we're not seeing a threading improvement; we're seeing HT plus 500MHz of frequency. No kidding the 4790K runs faster at stock, but that's an apples-to-oranges comparison. I have a feeling that if both the i5 (4c/4t) and i7 (4c/8t) were tested at the same speed, to take the 500MHz boost out of the equation, the differences would be smaller than they appear: fewer FPS and frametime drops, as the faster clocks easily keep up with the GPU.

While the 8350 can also be OC'd, it will see less impact from it due, again, to weaker IPC. Assigning fictional values for comparison purposes, let's say the IPC of the 8350 is 4 while the IPC of the 4690K is 6. Going from 4GHz to 5GHz on the 8350 would give a resulting 'score', if you will, of 20,000 (OC'd) vs 16,000 (stock). Taking the 4690K from its 3.5GHz stock clock to 4.5GHz would give 27,000 (OC'd) vs 21,000 (stock). Not that those values actually mean anything; it's just an illustration that the i5, having higher IPC to begin with and capable of the same 1GHz overclock as the 8350, will easily outrun it, more threads or not.
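
Those fictional scores are just the made-up IPC rating multiplied by the clock in MHz; a few lines reproduce them (the IPC values 4 and 6 are invented, as in the post above):

```cpp
#include <cstdio>

// Reproduces the fictional scores above: score = IPC rating x clock (MHz).
// The IPC ratings 4 and 6 are made up purely for illustration.
int main() {
    double fxIpc = 4.0, i5Ipc = 6.0;
    printf("FX-8350 stock 4.0GHz: %.0f, OC 5.0GHz: %.0f\n",
           fxIpc * 4000, fxIpc * 5000);   // 16000 vs 20000
    printf("i5-4690K stock 3.5GHz: %.0f, OC 4.5GHz: %.0f\n",
           i5Ipc * 3500, i5Ipc * 4500);   // 21000 vs 27000
}
```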

This also has nothing to do with DX12, since nothing says the test was done on Win10, and The Witcher 3 isn't a DX12 game. This video has been around a while now (almost 5 months). As for how many games will achieve this, an ongoing list of benchmarks shows that not many are. Thread usage doesn't scale linearly going from 2c/2t to 2c/4t to 4c/4t to 4c/8t; it's quickly diminishing returns, as we can easily see by comparing the G3258/i3 and i3/i5 gaps to the i5/i7 (4c/4t vs 4c/8t) gap, meaning it will be even less noticeable going from 4c/8t to 8c/16t. Basically, the extra threads of an i3 over a Pentium AE tell us about the i3 vs the Pentium AE and nothing else.
 
You sure about the frametimes?
Why does the i3 have a minimum FPS of 38 while the FX-6300 has a minimum of 45? Even on average FPS, the FX-6300 wins, 69.6 FPS vs 68.9 FPS.
Why does the 4690K have a minimum FPS of 52 while the FX-8350 has a minimum of 56? The i5 still wins on average FPS, 79.2 vs 75.2, but the minimums indicate that the FX is actually the more stable performer.
http://www.eurogamer.net/articles/digitalfoundry-2015-the-best-pc-hardware-for-the-witcher-3

Indeed, it doesn't directly have to do with DX12. But DX12 will make distributing work across cores easier to program than DX11 does. The Witcher 3 obviously achieved this under DX11 itself, so it's not impossible there; I have no idea what kind of voodoo they used to achieve it.
 
I was going by the video you used as reference. According to that video, those FPS averages are correct, and it showed a frametime graph right next to them. The two FX CPUs are constantly dipping lower than either of the Intels until you hit Novigrad, where the Pentium AE and i3 struggle. It's consistent with the findings of many Intel/AMD benchmarks across many sources: Intel typically has higher FPS, and even in the few titles where AMD manages to come anywhere close in average FPS, its minimum FPS are much lower. That affects the frametimes and causes micro-stuttering at various points, though some folks may or may not notice it. Some people are happy playing games at 25-30 FPS, while others want no less than 60, and others want higher yet.

If I had to guess, they programmed the game to run on multiple cores/threads. Many other programs are capable of this; many DX11 games are too. If game devs choose not to, then DX11, DX12 or DX13 (or whatever follows) will make little difference. You can lead a horse to water but you can't force it to drink, so to speak. DX12 will help all CPUs, not just AMD's. While the FX-8350 will see increased benefits, it's not the only one, so FX won't be getting a boost over the i5. They'll both be getting a boost; according to earlier DX12 tests, the i5 will still manage more draw calls, etc. I'd realistically expect an improvement across the board but no change in CPU positioning.
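
For what it's worth, the API-agnostic threading in question looks roughly like this. It's only a sketch: UpdateAI, UpdatePhysics, and the entity counts are made up, and nothing here depends on DX11 vs DX12:

```cpp
// Sketch of API-agnostic task parallelism: game-side work (AI, physics,
// animation) fanned out across cores with std::async. This is the kind of
// threading a game has to opt into regardless of the graphics API.
#include <future>
#include <vector>

void UpdateAI(int firstEntity, int lastEntity)      { /* ... */ }
void UpdatePhysics(int firstEntity, int lastEntity) { /* ... */ }

int main() {
    const int kEntities = 10000;
    const int kWorkers  = 4;
    std::vector<std::future<void>> jobs;

    // Split the AI update into per-worker slices running concurrently.
    for (int w = 0; w < kWorkers; ++w) {
        int chunk = kEntities / kWorkers;
        jobs.push_back(std::async(std::launch::async, UpdateAI,
                                  w * chunk, (w + 1) * chunk));
    }
    for (auto& j : jobs) j.get(); // join before rendering consumes the results

    UpdatePhysics(0, kEntities);  // or fan this out the same way
}
```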

Only once or twice, for a split second, did the i5 dip a few FPS lower than the FX-8350. The rest of the time, the FX-8350 was consistently below the i5, with much sharper and more frequent drops, while the i5's FPS stayed pretty level throughout. Sharp drops and erratic FPS are what cause hitching in a game. We're also looking at one very minor, pinpointed spot in a game where the i5 otherwise consistently dominates either of the FX chips. The fact that the 8350, with twice the cores and twice the threads, managed to match a 4c/4t CPU for one brief moment doesn't make it 'just as good'. By the time someone playing the game even noticed, it's over and the game moves on; a brief 'victory' for AMD, if you can call it that.

Considering that Win10 is having its own issues, DX12 is brand new, and games have yet to really get into the swing of coding for it, it's a ways off yet. By the time this actually becomes the 'future' of games, Zen will be out, Cannonlake will be out, and the rest of the tech will have moved on. It doesn't make a ton of sense to worry about how well the 6300 or 8350 will do in the future; they're only receding further into the past. Intel has been a solid upgrade path for AMD users looking to get more out of their PC. Strict AMD fans who can't imagine going to Intel have had no choice but to be stuck in the past, since AMD hasn't given them anything to move forward to. Zen will finally be that upgrade.

Even owners of a stronger CPU like the 2500K, now 5+ years old, are looking to finally move forward and upgrade, and the FX lineup has been overdue even longer. It's a lot like Intel users worrying whether improvements will keep their Core 2 CPUs in the game longer; we've since moved past that, and it's not being elitist, it's being honest. When chips are 5, 6 or more years old, an upgrade becomes more and more imminent. Just as Win98/XP are no longer really viable, tech has simply moved on.

Of course, technically a Core 2 CPU with WinXP still 'works', but is it lagging behind in terms of software, hardware, and the forward progression of tech? Absolutely. Win10, though it offers DX12 and is forward-looking, isn't exactly the best solution either, at least not until the bugs get worked out: Win10 is killing the functionality of a number of games, which now refuse to load or install, forcing people onto VMs or dual-boot systems. DX12 is supposed to help gamers, but right now it's putting them in a pickle. There really aren't any new games out fully supporting DX12, and yet it's crippling the games people have already bought and paid for. For now it's just a good idea, but there needs to be better compatibility with a broader range of games, especially since DX12 isn't available on previous OSes that actually run older games. Gamers have to pick and choose, or come up with a tedious alternative to keep older games functional, and hope to enjoy the benefits of DX12 when they finally arrive. By that I mean more than just one or two pioneer games: when 50% or more of new titles are DX12.

It's a bit off track, but in a way it's all tied together. There's more to consider than strictly DX12 and newer games, because of all the caveats that come with it. Had MS made DX12 fully compatible with other versions of Windows, not changed security settings that gimp or lock out older games, and integrated everything better, it would be easier to get excited about it. It's as if digital TV signals were improving yet again and getting everyone excited, and then, by the way, if you opt for this new, crisper super-HD, you can say goodbye to your local channels and HBO, Showtime, Cinemax, etc. Kind of a backhanded 'improvement', and some may not see it as an improvement at all. This is exactly what the Windows compatibility feature was supposed to address, and it's one of the main gripes of console owners whenever a console maker decides to drop backward compatibility. Do you throw out a portion of your game collection? Hook up 2, 3, 4 different systems?