[citation][nom]Crashman[/nom]I'm calling BS on this one because AMD's "eight cores" are actually four modules, on four front ends, with four FP units. Games have historically been limited by FP units specifically and front ends in general, no? What I'm seeing is that Intel's per-core IPC appears to be a little higher, when two different FOUR "full" CORE processors are compared.[/citation]
tl;dr warning for the less patient and/or people who don't care.
Let's think a little more about this. First off, FPU performance was historically important, but is easily shown to not be important for most games these days. For example, there is a very large difference in FPU performance and support of modern FPU instructions between the Family 10h CPUs (Athlon II, Phenom II, Sempron, etc.) and FX as well as Intel's modern CPU families, yet Phenom II is clearly able to hold its own against CPUs with similar integer performance despite the significant lack in FPU performance. So, FPU performance is clearly not even remotely as important as it used to be, at least with modern games.
Next, in the few gaming situations where a game can load up many threads effectively, performance gains on FX scale with core count fairly similarly to how they do on the four- and six-core Phenom II models. Sure, the front-end bottlenecks in Bulldozer and Piledriver are not to be ignored, but they most certainly aren't on the order of making a module act anything like a single core.
Furthermore, just because there are FPU and front-end bottlenecks doesn't mean we should act as if a module were merely a single core. Modules neither act nor perform like single cores. Yes, per-core performance on Bulldozer and even Piledriver is poor, but that's no excuse to pretend that each module is a single core; there shouldn't be any doubt that each module contains two cores. AMD's implementation incurs significant bottlenecks, but they should be seen for what they are (bottlenecks), not treated as a reason to call a module containing two integer cores anything but two cores. Before CPUs had FPUs or any cache, they were still single-core CPUs, and that's a fact, so the mere fact that AMD reorganized the cache and FPUs is no reason to change what a core means for x86 CPUs. An x86 CPU's core is an integer processing core, even though the technology has evolved greatly over time.
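To make the shared-resources argument concrete, here's a toy model of a Bulldozer-style module: two private integer cores sharing one front end and one FPU. All of the numbers and the split between "private" and "shared" work are illustrative assumptions, not measured figures; the point is only that a module behaves like two cores on integer-heavy work and still better than one core even when the shared resources dominate.

```python
# Toy model of an AMD Bulldozer-style module: two integer cores
# sharing one front end and one FPU. The fractions below are
# illustrative assumptions, not measurements.

def module_speedup(int_fraction):
    """Estimated speedup of one module over a single core.

    int_fraction: fraction of the work that runs on the two private
    integer cores (scales ~2x); the remainder contends for the shared
    FPU/front end and, pessimistically, gains nothing from the
    second core.
    """
    int_time = int_fraction / 2.0      # two private integer cores
    shared_time = 1.0 - int_fraction   # shared resources, no gain assumed
    return 1.0 / (int_time + shared_time)

# Mostly-integer workload: close to a true dual core (~1.82x).
print(round(module_speedup(0.9), 2))
# Heavily shared-resource-bound workload: still above one core (1.25x).
print(round(module_speedup(0.4), 2))
```

Even under this pessimistic assumption (zero gain on the shared portion), a module never collapses to single-core behavior, which is the point being argued above.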
Also, since each FPU in an AMD module can process two 128-bit FP instructions at once, instead of the single 128-bit FP instruction that each Family 10h FPU could handle, it's arguably no worse than two of the older FPUs even though it's still a single unit. On top of that, it supports many instructions that couldn't be run on the old FPUs at all, so it's arguably even better than two of the old ones.
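The peak-throughput arithmetic behind that claim, using the figures stated in the comment above (one 128-bit FP op per cycle per Family 10h core versus two per module FPU; a simplification, since real pipelines vary by instruction mix):

```python
# Peak 128-bit FP instructions per cycle, per the figures above.
# Simplified illustration; real-world throughput depends on the
# instruction mix and scheduling.
family_10h_fpu = 1   # one 128-bit FP op per cycle, one FPU per core
module_fpu = 2       # shared module FPU issues two 128-bit ops per cycle

two_old_fpus = 2 * family_10h_fpu
# At peak, one module FPU matches two Family 10h FPUs.
print(module_fpu == two_old_fpus)
```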
There is no denying that Intel's current CPUs are far better per core than AMD's competing models, and that Intel's far superior front ends are a huge part of that, but that has no bearing on how we count the cores in AMD's CPUs. Like I said earlier, we didn't say that early CPUs weren't single-core just because they didn't have one FPU per core (especially since many lacked an FPU completely). We also didn't say that, for example, AMD's and Intel's first dual-core models weren't dual-core despite their huge front-end bottlenecks (especially Intel's) and very limited success in improving performance at the time.
It's not difficult to show that better multi-threaded utilization of CPUs with many cores (such as the six- and eight-core FX models) would improve their situation greatly. However, one thing I wouldn't ignore (although I wouldn't call it a huge issue) is that even with AMD's CPUs being better utilized as a result, chances are it wouldn't be a huge FPS boost in most modern gaming situations. As we can plainly see from many of the benchmarks here at Tom's and plenty of other places, most games are not so CPU-limited that there's a huge average-FPS difference between AMD's best and Intel's best right now, even with very high-end graphics and such.
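The diminishing-returns point can be sketched with Amdahl's law: even if threading improvements let a game spread work across all eight FX cores, the serial (or GPU-bound) portion of each frame caps the gain. The parallel fractions below are illustrative assumptions, not measurements of any particular game.

```python
# Amdahl's law: speedup from n cores when a fraction p of the work
# is parallelizable. Shows why better threading helps many-core FX
# chips, yet can't deliver unlimited FPS in serial-/GPU-limited games.

def amdahl_speedup(p, n):
    """p: parallel fraction of the workload (0..1), n: core count."""
    return 1.0 / ((1.0 - p) + p / n)

# A very well-threaded game (90% parallel) on an eight-core FX:
print(round(amdahl_speedup(0.9, 8), 2))   # ~4.71x, not 8x
# A more typical game (50% parallel) barely benefits past a few cores:
print(round(amdahl_speedup(0.5, 8), 2))   # ~1.78x
```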
EDIT: That's not to say there isn't a significant performance difference outside of games; I'm just saying that most modern games aren't as reliant on CPU performance improvements once you're already in the high end as they are when you compare low-end CPUs to mid-range and high-end CPUs. It's not like with graphics cards, where there are almost always ways to use more graphics performance.
Still, the fact that software generally isn't optimized specifically for AMD's CPUs is no excuse for their poor per-core performance. AMD really should work on improving their front end a lot, among other things.