After studying these benchmarks extensively, I've come to the conclusion that there must be a bug in this release stepping that shows up when a module switches from integer to floating-point work. The Sandra results make it clear that both integer and floating-point throughput are at least on par with the i7, yet performance drops to dismal figures in standard workloads that shift between the two, as most real programs do.
Most telling is that the FX-8150 actually beats the i7 outright in raw FPU benchmarks, yet gets beaten in gaming tests, which are heavily FPU-bound. That points to a bug; my guess is that the prefetch buffers get flushed every time a module has to switch workload type.
Now for the good parts: memory performance is on par with the i7 even though the FX uses 2 channels against the i7's 3, a 50% bandwidth advantage for Intel that the FX still matches. It's also clear that GlobalFoundries' 32nm SOI process is more advanced than Intel's 32nm bulk process: 2B transistors in 315mm² on the FX vs. 1B transistors in 216mm² on the i7 works out to roughly a 37% advantage in transistor packing density.
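If anyone wants to check those percentages, here's a quick sketch of the arithmetic using the channel counts, transistor counts, and die sizes quoted above (figures from the post, not independently verified):

```python
# Sanity check of the quoted advantages: memory channels and transistor density.

fx_channels, i7_channels = 2, 3
channel_advantage = (i7_channels - fx_channels) / fx_channels * 100  # i7 over FX

fx_density = 2_000e6 / 315   # ~2B transistors in 315 mm^2 (FX-8150)
i7_density = 1_000e6 / 216   # ~1B transistors in 216 mm^2 (i7)
density_advantage = (fx_density / i7_density - 1) * 100  # FX over i7

print(f"i7 memory-channel advantage:     {channel_advantage:.0f}%")   # -> 50%
print(f"FX transistor-density advantage: {density_advantage:.0f}%")   # -> ~37%
```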
So the FX shows great promise for future steppings once the bugs are fixed, but right now it's a terrible letdown.