[citation][nom]deksman[/nom]TDP is not necessarily directly connected to lower or higher temperatures. You need to take into account the number of transistors (a higher count can produce higher temperatures), and let's not forget that Intel has a more powerful version of its integrated GPU on the same chip, which is probably contributing to higher temperatures. As for AMD, I would love for them to finally have a product that can directly compete with Intel, but it's entirely possible that these FX chips present only a marginal increase in performance, unless of course AMD made changes to the architecture itself. Either way, we won't know until the chips can be tested. To that end, AMD is still good bang for the buck, and those who need more CPU power will probably get Intel either way. In games, the higher-performing CPUs won't really offer much of a difference unless the game is heavily reliant on the CPU in the first place (and that is not often the case). The biggest difference would be felt in professional programs such as 3D Studio Max, but even there you can switch over to a GPGPU solution that can effectively speed up your rendering times by a factor of 15 to 20.[/citation]
FX has huge non-architectural problems. Also, all of AMD's CPUs are huge bottlenecks for high-end graphics systems. The best Bulldozer-based FX for gaming at stock settings, the FX-4170, bottlenecks anything faster than a Radeon 6950 or GTX 570 (Nvidia cards are a little less CPU-reliant than AMD graphics cards) in modern games, including the quad-threaded games. For example, one of Tom's System Builder Marathons pitted an FX-6100 machine with dual 6950s in CrossFire against an i5-2400 machine with a single 6950. The i5 machine, despite having almost half the FX machine's graphics performance, beat the FX machine in most of the games.
FX's biggest problems are its poor design methods and its very high-latency cache. For example, the FX CPU was designed purely by computers (it is a computer-generated design). The problem with this is that computers tend not to know the way to design the fastest CPU; they just know the fastest way to design a CPU. A CPU laid out transistor by transistor by a small group of highly skilled engineers can be hand-optimized to be far better than these poor computer-generated designs. According to former AMD engineers, the computer-generated designs are roughly 20% slower, 20% larger, and consume 20% more power. Just fixing this non-architectural problem alone would improve power efficiency (performance per watt) by about 44%!
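The ~44% figure follows from compounding the two 20% penalties. A quick back-of-the-envelope check (my arithmetic, not an AMD figure, and it depends on which design you treat as the baseline for each percentage):

```python
# Sanity check of the "~44% power-efficiency gain" claim, assuming a
# hand-optimized design is ~20% faster, and the computer-generated
# design draws ~20% more power than the hand-optimized one.
perf_gain = 1.20         # hand design: ~20% more performance
power_ratio = 1 / 1.20   # hand design: ~1/1.2 of the power

# Power efficiency = performance per watt, so the two factors multiply.
efficiency_gain = perf_gain / power_ratio
print(f"{(efficiency_gain - 1) * 100:.0f}%")  # → 44%
```

Note that if you instead read "20% slower" as the computer design being 0.8x the hand design's speed, the gain works out closer to 50%, so treat 44% as a rough order-of-magnitude estimate either way.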
Improving the cache's latency, another non-architectural problem, would also improve performance significantly, although it wouldn't improve power efficiency nearly as much as using the proper design methods would. Improving the efficiency of the memory controller, yet another non-architectural problem, would also help, because the Bulldozer and Llano memory controllers extract about 25% less bandwidth than the Sandy Bridge controller does at the same memory frequency.
Fixing these three non-architectural problems could improve Bulldozer-based FX CPUs' performance by over 40%* while decreasing power usage by almost 20%.
* It's hard to say how much more performance the CPU would gain if the cache latencies were brought down properly. Many of them would need to be cut at least in half to come close to Sandy Bridge's latencies, and I'm not quite sure just how much performance that would add.
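The combined figures above can be sketched the same way. The "almost 20%" power drop falls out of undoing the ~20% power penalty of the computer-generated design; the performance side compounds three separate fixes. The cache and memory-controller speedups below are hypothetical placeholders of my own, only to show how independent gains could multiply past 40%:

```python
# Power side: removing a ~20% power penalty means using 1/1.2 of the
# original power, i.e. roughly a 17% ("almost 20%") reduction.
power_drop = 1 - 1 / 1.20
print(f"power: -{power_drop * 100:.1f}%")  # → power: -16.7%

# Performance side: independent speedups multiply. Only the ~20%
# design-method gain comes from the text; the cache and memory-
# controller figures are hypothetical illustrations.
design_fix = 1.20   # from the ex-engineers' ~20% estimate
cache_fix = 1.10    # hypothetical gain from lower cache latency
memctl_fix = 1.08   # hypothetical gain from a better memory controller
total = design_fix * cache_fix * memctl_fix
print(f"performance: +{(total - 1) * 100:.0f}%")  # → performance: +43%
```

With any plausible cache and memory-controller gains in that range, the combined speedup clears the "over 40%" mark the post claims.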