Will AMD CPUs become better for gaming than Intel with DirectX 12?

I asked for an explanation because in your previous post ("In short, you can do more work on the CPU without reducing FPS, but making the CPU faster won't improve FPS either.") you imply that a faster CPU doesn't help with Ashes of the Singularity, but the results are similar to what was observed in GTA, i.e., a faster CPU helps until the GPU becomes the bottleneck.

 


Insert into what? See, you are making assumptions. You are assuming a machine that has more CPU power than a game could really use, relative to the GPU the machine currently has. In such a case, a faster GPU results in better performance, yes, but only then.

Generally speaking, of course, games are usually designed to be less limited by the CPU, so that they run acceptably on an average machine. That's partly because upgrading the GPU is much easier, so people do it more often, and partly because the load the game imposes on the GPU is also much easier to adjust: resolution, level of detail, and all sorts of effects done mostly by the GPU can be dialed up or down.


Here is my firsthand experience with World of Warships, an unfinished beta like this Ashes of the Singularity, thrashing in some way.
http://forum.worldofwarships.eu/index.php?/topic/25341-extremely-poor-performance-on-multicore-system/

The game ran really well, maybe 45-60 FPS, when I had two faster dual cores; now it runs really badly. I think it could be thrashing RAM, since each core can lock out portions of RAM from the other cores, or it could be thrashing because the first Intel quad cores, like my Xeons, are actually two dual-core dies on a single multi-chip module. That effectively means my computer has 4x dual-core chips, and in total they have 2x 2x 2 L2 caches. That kind of system would of course be much easier to thrash with its L2 cache handling than a newer true 4-8 core CPU.

But still, for example, I've previously tried BF4, which ran really well in multiplayer on the 2x dual-core config, and it still ran well with 2x quad cores. Previously I played multiplayer, which is supposed to demand a lot more from the CPU; after the CPU upgrade I played the single-player campaign through. With 8 cores it was using roughly 50% of CPU time and 80-90% of the GPU. I don't own the game, I just used Origin's Game Time to try it out. I intended to try multiplayer on the quads, but ran out of time. I might register a new Origin account, but I'd need to download the game again, since each installation records the validity of the Game Time offer.

Anyway, the point being: if there were some inherent problem in my system, it seems to happen only in World of Warships. So I think it's the way the game handles multithreading. And based on that experience and these slightly weird benchmarks of Ashes of the Singularity, I think that game, too, is still not mature enough to actually measure much. It should run much better on the FX-8xxx series IF it effectively used all cores, because DX12 is not limiting it. Instead, it runs about the same on an FX-8xxx as on an FX-6xxx with powerful GPUs.

I know technical details like that are difficult for many to understand, but I'd guess that if you look at the screenshot from Process Explorer, and the amount of kernel time being spent, you'd see that's definitely something that shouldn't happen. Someone looking only at Resource Monitor would just think, "ooh, it's using 70-90% of CPU time on an 8-core system, that's a nicely multithreaded game." Although anyone would realize something is clearly wrong when the game runs as horribly slowly as it does on my computer.

"Now yes, making the CPU finish it's work faster allows you to do MORE within the same budget, so purely CPU processes such as AI can be expanded in DX12 without affecting FPS, independent of the GPU. "

There's quite a bit more done on the CPU than AI. Some graphics effects are computed partly on the CPU and partly on the GPU, some mainly use the GPU, and some are done entirely on the CPU. It of course depends on the particular game engine, but usually things like bullet impact flashes, tracers, small debris and so on are mostly handled by the CPU, as is the aforementioned AI, plus sound mixing and the like.
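As a rough illustration (entirely made-up numbers and task names, just to show the idea), the CPU side of a frame is a grab-bag of such tasks, all of which have to fit in the frame budget alongside feeding the GPU:

```python
# Hedged sketch of a per-frame CPU budget; every figure below is a placeholder,
# not a measurement from any real engine.

FRAME_BUDGET_MS = 16.7  # 60 FPS target

cpu_tasks_ms = {
    "AI / pathfinding":             3.0,
    "bullet tracers & hit flashes": 1.5,
    "small debris / particles":     1.0,
    "sound mixing":                 0.8,
    "build & submit draw calls":    5.0,
}

cpu_total = sum(cpu_tasks_ms.values())
for task, ms in cpu_tasks_ms.items():
    print(f"{task:<30} {ms:4.1f} ms")
print(f"{'CPU total':<30} {cpu_total:4.1f} ms of a {FRAME_BUDGET_MS} ms budget")
```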
 


Quoting myself is fun:

Which kinda proves the point I made in the AMD sticky thread: Why is EVERYONE ignoring the obvious CPU bottleneck that's compressing the results?

As I've explained many, many, many times here over the past 6 years: using more cores does NOT give ANY performance benefit unless doing so removes a CPU bottleneck. If no CPU core is doing more work than it can handle, then at least for games [which are GPU driven], the ability to use MORE CPU resources gives you NOTHING.

So yeah, the benchmark scales to 8 cores. But you don't gain any performance after four. That's normal, expected behavior.

AoS is clearly CPU bottlenecked.

As for my second point: if you have a core doing more work than it can handle, how much work you add to the other cores via threading is moot; that one overworked core is going to force you to wait until it catches up. The number of threads doesn't matter as much as the workload per core. One heavy-workload thread causes the same problems as many lower-workload threads. If any one core slows down, your application slows down. Threading is therefore not an automatic performance gain, since you are limited by how much of each individual CPU core your highest-workload thread eats up.
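A tiny sketch of that critical-path idea (my own toy model, with made-up millisecond numbers): the frame finishes only when the busiest thread does, so spreading the light work over more cores changes nothing while one heavy thread dominates.

```python
# Toy model: a frame is gated by its slowest (critical-path) thread, so adding
# threads only helps if it shortens that longest thread.

def frame_time_ms(thread_workloads_ms):
    return max(thread_workloads_ms)

four_threads  = [12.0, 3.0, 2.0, 1.0]                        # one heavy thread + helpers
eight_threads = [12.0, 1.5, 1.5, 1.0, 0.5, 0.25, 0.15, 0.1]  # same heavy thread, light work spread wider

print(f"4 threads: {1000 / frame_time_ms(four_threads):.0f} FPS")
print(f"8 threads: {1000 / frame_time_ms(eight_threads):.0f} FPS  (no gain)")
```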
 


Well, if the game is GPU limited on some specific system, then yes, using a faster CPU will increase performance, but only a bit; whereas with a faster GPU, the added performance will be roughly proportional to the increase in GPU power.

That's what he meant.
 
Post from someone on Overclock.net:

on FX-8xxx there is no real gain from PCI-E overclock (tested 100mhz vs 150mhz) or HT link overclock (2600mhz vs 3200mhz) (or both of them at same time)

but there is definitely some gains from memory overclock i just switched from 4x2GB 1333mhz 9-9-9-24-CR1 to 2x8GB 1866mhz 10-10-10-24-CR1 and gained about 5-10% FPS boost
also CPU-NB frequency have 5-10% impact on result


==Shot high vista ==================================
Total Time: 4.996590
Avg Framerate : 33.760746 ms (29.620199 FPS)
Weighted Framerate : 33.909782 ms (29.490015 FPS)
CPU frame rate (estimated framerate if not GPU bound): 26.268589 ms (38.068279 FPS)
Percent GPU Bound: 98.643028%
Driver throughput (Batches per ms): 6423.558105
Average Batches per frame: 50627.914063


http://www.overclock.net/t/1569897/various-ashes-of-the-singularity-dx12-benchmarks/1280#post_24358946
 
In a nutshell, many AMD FX owners were hoping that the release of DX12 games would make their systems as fast as an Intel i5, but that won't happen. It didn't happen with Mantle, so it's somewhat difficult to understand why they had those expectations for DX12.
 
It should, if games can indeed be so efficient at multithreading under DX12. I'm going to jump on the bias bandwagon and say 'it's only a pre-beta'.

It's funny to see though.
When we look at the GPU results showing AMD in a better light than nVidia, everyone says 'oh it's only a pre-beta, we must wait for more results', or 'Oxide is biased towards AMD!', or 'I'm sure nVidia will optimize and beat AMD when the game comes out', and so on.
When we look at the CPU results showing AMD in a worse light than Intel, everyone says 'see, we told you DX12 wouldn't make a difference', or, 'FX CPUs will always suck', or 'IPC is all that matters, even under DX12'.

What happened to it only being a pre-beta? What happened to possible future optimizations? What happened to Oxide's bias towards AMD? Oh right, those arguments are only allowed when it's not in favor of AMD. And I'm the one that's biased and/or in denial. Funny.

The double standard is uncanny in this place.

But in any case, any new info that I find regarding the performance of FX CPUs, I will post here. Be it advantageous for AMD or not.
 
Comparing CPUs and GPUs is a bit apples and oranges though. Much of a GPU's performance is based on driver support, and a simple driver update for the graphics card can make a world of difference if it's not optimized. There are no CPU drivers though, so a CPU is what it is. You never hear of Intel or AMD rolling out a new CPU driver and, oh look, 15% performance improvement. Never happened. People have already compared CPUs head to head to death - single threading, multithreading, multitasking - and the results are the results. Those aren't going to change, so no surprise there.

There have been cases where one GPU or the other, from both AMD/ATI and Nvidia, has struggled with a particular setting or game, and as drivers improve, so does its performance. It's not (in my opinion) bias for or against AMD, but rather the apples/oranges comparison of CPUs and GPUs, where one's performance is very much dependent on driver efficiency and coding and the other's is not.
 


I've explained it many many times over now; I'm not re-hashing it again. Relative processor performance in gaming isn't moving one way or the other, end story.
 
Before AMD released Mantle, many FX owners presumed that their CPU would all of a sudden be faster than an i5 simply because it has 8 threads; Mantle improved gaming performance on all multi-core processors. The same goes for the very similar DX12. Did you buy an FX CPU only because you expected it to be faster than an i5 when running Mantle and DX12 compatible games?

Edit: Don't forget that AMD had time to get their drivers right because DX12 and Mantle are so similar. Eventually NVidia should also improve theirs, but we may not know for another few months. Does that mean everyone will now buy an AMD GPU? Most likely not.
 
All fine and dandy, except for one thing. You're assuming that the full potential of each CPU has already been used. Under DX11, one thread does most of the work while the others idle. DX12 is supposed to change this; emphasis on "supposed". And yes, we've seen the multithreading benchmarks, and in those cases the FX-8 CPUs come close to the performance of a 4690K, depending on the application. You're saying "no surprise" because you're assuming IPC will always be the most important thing for gaming.
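To make that concrete, here's a toy model (my own sketch, not real Direct3D code; the per-call cost and draw-call count are assumed placeholders) of single-threaded submission versus command lists recorded on several threads:

```python
# DX11-style path: one thread records every draw call.
# DX12-style path: command lists are recorded on several worker threads in parallel.

COST_PER_CALL_MS = 0.01   # assumed CPU cost to record one draw call
DRAW_CALLS = 5000         # assumed draw calls per frame

def dx11_like_ms():
    # Submission effectively serialized on a single thread.
    return DRAW_CALLS * COST_PER_CALL_MS

def dx12_like_ms(worker_threads):
    # Command lists recorded in parallel, then submitted together.
    return (DRAW_CALLS / worker_threads) * COST_PER_CALL_MS

for cores in (2, 4, 8):
    print(f"{cores} cores: DX11-like {dx11_like_ms():.0f} ms, "
          f"DX12-like {dx12_like_ms(cores):.1f} ms of submission work per frame")
```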

Except DX12 allows fewer driver interventions than DX11:
http://www.overclock.net/t/1569897/various-ashes-of-the-singularity-dx12-benchmarks/820#post_24334485

The old ways of thinking need to be dropped.
 
Have any data to back that up? Mantle improved performance for all multithreaded CPUs that were bottlenecking. If you had a 4690K that wasn't bottlenecking in the first place, you wouldn't get the performance boost. But ultimately, Mantle was still not being used to its full potential. Async compute alone could've been implemented easily, but it never was. The core of the games was still based on the old ways of programming. But we'll see.

Drivers? Ok. Just going to quote Oxide here, aside from my prior post about drivers being less significant in DX12.

Our code has been reviewed by Nvidia, Microsoft, AMD and Intel. It has passed the very thorough D3D12 validation system provided by Microsoft specifically designed to validate against incorrect usages. All IHVs have had access to our source code for over a year, and we can confirm that both Nvidia and AMD compile our very latest changes on a daily basis and have been running our application in their labs for months.
http://www.oxidegames.com/2015/08/16/the-birth-of-a-new-api/

Often we get asked about fairness, that is, usually if in regards to treating Nvidia and AMD equally? Are we working closer with one vendor then another? The answer is that we have an open access policy. Our goal is to make our game run as fast as possible on everyone’s machine, regardless of what hardware our players have.

To this end, we have made our source code available to Microsoft, Nvidia, AMD and Intel for over a year. We have received a huge amount of feedback. For example, when Nvidia noticed that a specific shader was taking a particularly long time on their hardware, they offered an optimized shader that made things faster which we integrated into our code.

We only have two requirements for implementing vendor optimizations: We require that it not be a loss for other hardware implementations, and we require that it doesn’t move the engine architecture backward (that is, we are not jeopardizing the future for the present).


Certainly I could see how one might see that we are working closer with one hardware vendor then the other, but the numbers don't really bare that out. Since we've started, I think we've had about 3 site visits from NVidia, 3 from AMD, and 2 from Intel ( and 0 from Microsoft, but they never come visit anyone ;(). Nvidia was actually a far more active collaborator over the summer then AMD was, If you judged from email traffic and code-checkins, you'd draw the conclusion we were working closer with Nvidia rather than AMD

Personally, I think one could just as easily make the claim that we were biased toward Nvidia as the only 'vendor' specific code is for Nvidia where we had to shutdown async compute. By vendor specific, I mean a case where we look at the Vendor ID and make changes to our rendering path.

http://www.overclock.net/t/1569897/various-ashes-of-the-singularity-dx12-benchmarks/1200#post_24356995
 
Have any data to back that up? Mantle improved performance for all multithreaded CPUs that were bottlenecking. If you had a 4690K that wasn't bottlenecking in the first place, you wouldn't get the performance boost. But ultimately, Mantle was still not being used to its full potential. Async compute alone could've been implemented easily, but it never was. The core of the games was still based on the old ways of programming. But we'll see.

As I seem to recall, didn't the Core i3 and Pentium lineups benefit more than just about any other CPU out there in Mantle benchmarks?

Again, making the CPU code execute faster only helps if the CPU is a bottleneck in the first place. If not, no performance benefit. The "more threads = more performance" argument only holds in CPU bound tasks, and games are not (typically) CPU bound. Give me an infinitely fast GPU, then sure, FX has more pure number-crunching capacity than Intel. But with the GPU being the primary bottleneck in gaming, IPC matters more than more cores, so Intel wins, often quite handily. And DX12, Vulkan, and Mantle are NOT changing that dynamic in any way.
 
Yes, DX12 is 'supposed' to change how threads handle the API, not the game's code itself. DX11 is an API; it's the interface between the CPU and GPU, not the CPU's ability to run game code. To assume that games are somehow limited (aside from the interface with the GPU) to a single core is bogus. Multi-core, multi-processor and multithreaded applications have been around and in use for ages. So yes, in that regard the CPUs ARE already being used to their full potential.

Graphics drivers are notorious for performance issues, from both camps. Just read any discussion of a flailing game: AMD or Nvidia release new drivers and, tada, magic fix. There is no such thing for CPUs; if they're gimped, they're gimped. Apples and oranges. Absolutely AMD has had longer to work on their drivers, since DX12 is so close to Mantle, which AMD developed in-house long before Nvidia had the code to play with.

Exactly as gamerk316 mentioned, the newer API allows more direct, lower-level interaction with the GPU, reducing the need for all the intervening CPU work previously done in DX11. That takes some of the load off the CPU, which is why the weakest CPUs (the bottlenecks) have the most to gain; they need all the help they can get. With DX12 it's more likely people could get more out of their 970 or 980 with an i3 than before DX12. There are improvements for all the other CPUs as well.
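A back-of-the-envelope model (my own, with assumed numbers) of why reduced overhead helps the weakest CPU the most: FPS is capped by whichever of the CPU or GPU takes longer per frame, so shaving API overhead only pays off where the CPU was the longer side.

```python
# All millisecond figures are assumptions for illustration, not measurements.

GPU_FRAME_MS = 12.0  # fixed GPU cost per frame for the same graphics card

def fps(cpu_work_ms, api_overhead_ms):
    cpu_frame_ms = cpu_work_ms + api_overhead_ms
    return 1000 / max(cpu_frame_ms, GPU_FRAME_MS)

for name, cpu_work in (("weak CPU (i3/FX-class)", 10.0), ("strong CPU (i7-class)", 5.0)):
    dx11 = fps(cpu_work, api_overhead_ms=8.0)  # heavier driver/API overhead
    dx12 = fps(cpu_work, api_overhead_ms=2.0)  # reduced overhead
    print(f"{name}: {dx11:.0f} -> {dx12:.0f} FPS")
```

In this toy model the weak CPU jumps from roughly 56 to 83 FPS while the strong one barely moves, which is the "bottlenecks have the most to gain" point in numbers.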

Why, with the introduction of DX12, would an 8350 become as strong as an i5? Take gaming and DX out of the equation and look at real-world benchmarks for other heavily threaded applications and multitasking benches, where an FX 8350 can stretch its legs, and the locked Core i5s continue to surpass them. Again, it is what it is. There's no magic sauce for FX; they suffer from their architecture. Put all the paint you want on a house; if its foundation is weak, it's weak. New curtains don't fix the plumbing. New plumbing fixes the plumbing. That's why Zen is so crucial for them, to attempt to get it right this time. They went wrong with Bulldozer and just kept on that path, whether because they no longer had the funds to backtrack and correct it or because they were oblivious and stubborn. Either way, the outcome is what it is.

On a side note, it's funny how a debate over CPUs turns into a deflected debate over GPUs when there's no longer a point to be made for the CPUs. AMD fans will go off on a tangent just to somehow praise their beloved OEM in some form or fashion. None of this really had to do with GPUs. In addition to waiting to see if DX12 makes the FX series 'better' somehow, I'm glad I hung onto my P2 450 - maybe by DX13 it will beat everything and I can finally blow the dust off my obsolete hardware and be winning as well. It makes about as much sense as FX becoming better with yet more time passing when it hasn't gotten any better in the 4-5 years it's already been out. It would be the first time in history tech has moved backwards instead of forwards, and if FX in its current state had half a chance, AMD would continue building upon it rather than ditching it for Zen. They're purposely focusing on IPC performance because they recognize their current downfall. If it wasn't an issue, and a serious one, why would the company acknowledge it and go out of their way to correct something FX fans refuse to admit is broken or lacking? It just seems like some serious denial by the diehard fans.
 

If games are not CPU bound, then explain to me why Intel CPUs perform better under DX11 than AMD CPUs. Your argument makes zero sense. If games are not CPU bound, then a slow and a fast CPU would perform the same.
 
A game that isn't CPU bound performs better on a CPU that has a higher IPC simply because it takes less time to process the code that feeds the GPU. That is true of any code, but it's sort of visually obvious in games. As a CPU master you should know that.
 
No... The API changes how threads are handled, thus the game code must also be adapted to the API in order to maximize its benefits.

It's not an assumption and it's not bogus. There are so many examples of this on both Intel and AMD CPUs, it's not even funny. It's not only DX either. Same goes for OpenGL vs Vulkan and so on...That you bring this up at all is very surprising, in a bad way. Here we go, and after this, I hope things are clear...
Vulkan on Intel, watch the CPU load:
https://www.youtube.com/watch?v=llOHf4eeSzc

That video alone kills your argument because it shows that:
- APIs can limit performance to a single core
- It is not an assumption that games that use such APIs are limited to a single core
- The API influences the CPU's ability to run game code
- The API influences the distribution of load across CPUs
- The API influences the way a game is best coded

Not for gaming, which was the whole point of this discussion regarding multithreading and DX12. Don't start red-herringing me.

Valid, from the DX11 perspective. DX12, we still have to see. Intervening in close-to-the-metal programming with drivers isn't really productive. It's like trying to...

You're assuming they are gimped, despite the multithreaded performance of FX-8 CPUs matching i5 CPUs. Although I get that, with their low IPC, they can be considered as such under DX11.

Again, one thing to blow what you say out of the water. Blog from nVidia:

For the past year, NVIDIA has been working closely with the DirectX team to deliver a working design and implementation of DX12 at GDC
Date of article: March 20th 2014
http://blogs.nvidia.com/blog/2014/03/20/directx-12/#sthash.nPz2v8b8.dpuf

So nVidia was working with MS since at least Q1 2013. And a bonus one for you:

We worked closely with Microsoft to develop the new DX12 standard. And that effort has paid off with day-one WHQL certification."

"More blockbuster titles are on the horizon. As are games using the new DirectX 12 API, which Microsoft reports is seeing rapid adoption. We're working on DX12 on many fronts. Our engineers are providing drivers, working with game engine providers, and co-developing with Microsoft. We're also helping game developers deploy their DX12 titles.

GeForce has been the GPU of choice for key DX12 demos from Microsoft since the API was announced. Combining the world’s fastest GPU hardware with a high-quality graphics driver made for the perfect showcase for the next-generation features of Windows 10 and DirectX 12.

http://blogs.nvidia.com/blog/2015/05/15/dx-12-game-ready-drivers/#sthash.qtt0fSZR.dpuf

Sounds like you're just assuming that what you think in your head is how it happens in reality, and that's not how it goes.

You'd be correct if those interventions were done on the same main thread that was being used for the game itself. I don't know about AMD (probably the same as well), but nVidia uses the unused cores for driver interventions under DX11. So that is not the main reason for the reduced CPU overhead, and if it is one, there are still a whole lot of others.

Correct, but not (only) due to the reasons you mentioned.

i5 continue to surpass FX-8 in multithreaded benchmarks? Really? Ok...

SiSoft Sandra Multi-Media benchmark:
AMD FX-8350: 321.69 Int x16 / 200.42 Floating Point / 112 Double x8
Intel i7-4770K: 401.11 Int x16 / 393.98 Floating Point / 226.11 Double x8
Intel i5-2500K: 157.4 Int x16 / 198.9 Floating Point / 113.9 Double x8

Cinebench 11.5
AMD FX-8350: 6.9
Intel i7-4770K: 8.12
Intel i5-2500K: 5.4

Cinebench 15
AMD FX-8350: 639
Intel i7-4770K: 791
Intel i5-4690K: 592

POV-Ray
AMD FX-8350: 1504.3
Intel i7-4770K (OC to 4.2 GHz): 1535.4
Intel i5-2500K: 1011.0

Except there is. The video of OpenGL vs Vulkan says it all.

Sometimes I really do wish that AMD would die, so that Intel and nVidia could overwhelm you people with their prices. When someone chooses to support AMD, it doesn't necessarily mean they're a fanboy. Some people have a vision for what's best for the industry. Some people care about the whole rather than focusing on brand loyalty. Some people actually think for themselves, inform themselves, and try to spread information. Some people are not interested in e-penis debates.

Your lack of understanding is the only thing that allows you to make such statements.

Time passed is irrelevant when the tech didn't move forward. If you have eight handhelds but only one game, you can only play one handheld at a time, until more games are available... Those additional games are what DX12 is supposed to be for gaming. There are currently so many changes going on in the way games are made. But I guess when you're unaware of them, you'll keep looking at things through the same old lens.

Because that's the weakest part of their multithreaded architecture. They've already mastered designing chips for multithreading. It would've been better for them to do it the other way around, which is why they called it a misstep.
 

Ugh. Seriously? Where did you guys get your understanding from?

First things first. Being CPU bound means that your framerate is limited by the CPU. Being GPU bound means that your framerate is limited by the GPU. Logically, not being CPU bound means you're GPU bound (forgetting about possible HDD, memory and motherboard bottlenecks for the sake of simplicity). Or, well, not bound at all from either side, but that's not realistic.

Now. Being bound basically translates to the following. Again, simplistically... We're gonna have the CPU and GPU calculate only triangles.
1) If your CPU can calculate 100 triangles per second, and your GPU can calculate 200 triangles per second, you're CPU bound
2) If your CPU can calculate 200 triangles per second, and your GPU can calculate 100 triangles per second, you're GPU bound
3) If your CPU can calculate 200 triangles per second, and your GPU can calculate 200 triangles per second, you're not bound at all
4) If your CPU can calculate 400 triangles per second, and your GPU can calculate 100 triangles per second, you're GPU bound

In the first case where you ARE CPU bound, getting a better CPU will increase your framerate, because despite your GPU being able to calculate 200 triangles, your CPU can only feed it 100.
So if you get a CPU that can also calculate 200 triangles per second, you've doubled your framerate, and you are no longer CPU bound. You now have maximum performance for your GPU (case 3).
If you go to case 2, you have a CPU that is faster than your GPU. You are NOT CPU bound. Rather than giving 100 triangles per second to the GPU, it can give 200 triangles per second. Or from another perspective, it gives 100 triangles per half a second. But, the GPU can handle only 100 triangles per second. Your framerate will NOT increase!
In case 4 it's the same thing. You can calculate the 100 triangles in 250 ms rather than the whole second, but for 750 ms your CPU is doing nothing, waiting on the GPU. Your framerate will NOT increase!

When you are NOT CPU bound, which means you ARE GPU bound, having higher IPC will NOT increase your framerate. It will only increase the time that your CPU is idling. This is why your arguments make no sense at all.
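For what it's worth, here are the same four cases as a tiny sketch (my own, reusing the numbers above), under the post's simplification that whichever of the CPU or GPU can push fewer triangles per second caps the framerate:

```python
# One triangle per frame is assumed purely to keep the units simple.

def framerate(cpu_tri_per_s, gpu_tri_per_s, triangles_per_frame=1):
    # The pipeline can only run as fast as its slowest stage.
    return min(cpu_tri_per_s, gpu_tri_per_s) / triangles_per_frame

print(framerate(100, 200))  # case 1: CPU bound -> 100
print(framerate(200, 100))  # case 2: GPU bound -> 100
print(framerate(200, 200))  # case 3: balanced  -> 200
print(framerate(400, 100))  # case 4: GPU bound -> 100, the faster CPU just idles longer
```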

Refutations are welcome, provided they are well explained and logical.
 
You obviously will do everything possible to prove that your decision to build an AMD FX based gaming system was good; what will you do if no DX12 game runs faster on your FX-8320 than it does on an i5? It's quite simple: if one CPU thread can calculate 100 triangles per second and another can calculate 75 triangles per second, the first one will be faster even if it isn't at 100% load. That's true when processing SQL statements or anything else you can throw at it. Unfortunately, IPC is still important.
 
Nothing. I sleep well at night and will continue doing so. And I'm playing older games anyway. I don't care about the e-penis BS.

That is true, but if your CPU is calculating 75 triangles while your GPU can handle 100, you are CPU bound, so the statement that higher IPC will increase framerate when not CPU bound is false. It will only increase with higher IPC when you're CPU bound.
 
So assuming everything I've speculated is wrong and everything you know is true, NightAntilli, how is it AMD is STILL lagging so horribly? Everything you say indicates FX chips should be smoking Intel, and yet the results couldn't be further from the truth. So much for theory, I suppose. AMD fans have been blindly supporting AMD's misguided theories for a while now. Moar cores, Bulldozer is going to blow Intel away. Pfft. DX12, AMD's vast array of cores is going to finally beat Intel. Again, pfft. The moar-cores approach is just becoming a bigger and bigger epic fail. AMD understands this and is finally tired of beating the dead FX horse, which is why they're letting it die. Now they're turning to Zen.

It's got nothing to do with e-peen; it's got to do with the fact that I have two choices. One performs better than the other. For me to put my money into the worse performing of the two would just be ignorance. Who in their right mind purposely buys the lesser of two options? CPU prices are 25-50% of what they were 15 years ago, so much for CPUs becoming wildly expensive. That's in spite of inflation and a solid 5 years of no real competition. As much as I have a brand preference, I still don't wish for AMD to go under. It has nothing to do with AMD keeping Intel's prices down; it's just that my choosing the better performer has nothing to do with AMD's failure or success.

Time passed is only irrelevant to AMD, since they're the only ones who haven't moved forward. Just because they haven't improved doesn't mean the rest of the tech industry hasn't moved forward without them. They've made themselves irrelevant. For someone who doesn't care, you're putting an awful lot of effort into explaining why AMD should be doing better in terms of performance, and yet that magic voodoo has yet to surface. My opinions apply to both companies equally, same thing with Intel. Don't give me theory about how the deeper pipeline in the Prescott chips was supposed to do this and that. Bottom line, it was a turd. It failed. Theory means nothing; results are what matter. It might make for great discussion, contemplation and debate, but if theory doesn't result in performance, it's just that - musings. I enjoy a good campfire story like everyone else from time to time.

Of late, meaning the past several years, AMD has done a good job of telling campfire stories. They show us amusing charts with upward performance trends (which have no basis in actual measurements), lots of pretty pictures coupled with theory, but at the end of the day Sasquatch has still yet to show its face. It remains an intriguing story.

I don't claim to know everything, and I certainly don't know with absolute certainty how everything interacts between a game, CPU, GPU, etc. If I did, I'd simply sketch up a new architecture and mail it to AMD so they could fix everything. Obviously, with all their years of expertise, their engineers don't get it entirely either, or they wouldn't keep designing failing architectures. Isn't that common sense? No engineer ever went to work for AMD or Intel and said, gee, this week I think I'll design something mediocre. To be fair, engineering isn't my day job either. Let's face it, how exactly are my misinterpretations of theory failing? The pros at AMD have been designing various architectures and making 'improvements' based on their educated theory, and look at the end results. Where have they panned out, exactly? These are the people who understand how every circuit interacts with the others and processes code. They have all the tools at hand - experts in chip architecture and design, fabrication, software engineers - and they're working hand in hand with the actual people writing the game code. They have all the ingredients and schooling necessary and still can't get it done. Trust me, I'm not feeling too poorly if my theories are a tad off. At the end of the day, mine are free. Theirs are costing them millions. That still doesn't mean I'm going to buy AMD and pay for their mistakes.
 
You wrote "That is true, but if your CPU is calculating 75 triangles while your GPU can handle 100, you are CPU bound, so the statement that higher IPC will increase framerate when not CPU bound is false. It will only increase with higher IPC when you're CPU bound." Since the GPU waits less when using a CPU with a faster IPC, it processes frames faster. In a way this is somewhat similar to using a hard disk or a SSD; the CPU isn't faster when using the SSD, but since it gets data faster, it obviously performs better.