A lot of the comments above make some good points and some bad points, so it's hard to say who's right and who's wrong; many people are both at the same time, depending on which point they're making.
For everyone complaining about using the 7970 in this comparison: it was used to show how good each CPU really is. It allowed benchmarking at 1080p with high settings without creating a GPU bottleneck. That way the CPUs were compared rather than whole systems, so we know that a CPU that fails here isn't worth buying, because it will be the bottleneck with any GPU. It would have been nice to see a budget card or two thrown in to show the CPUs with more realistic cards, but there were good reasons for using the 7970.
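To make the bottleneck argument concrete, here's a rough sketch in C. The frame rates are made-up placeholders, not numbers from the article; the only point is that the frame rate you measure is roughly capped by whichever component is slower, so a fast card exposes CPU differences while a slow card hides them.

```c
/* Hedged sketch: why a fast GPU isolates CPU differences.
 * All fps values below are hypothetical placeholders. */
#include <stdio.h>

static double measured_fps(double cpu_fps, double gpu_fps) {
    /* The observed frame rate is roughly limited by the slower component. */
    return cpu_fps < gpu_fps ? cpu_fps : gpu_fps;
}

int main(void) {
    double cpu_a = 90.0, cpu_b = 60.0;        /* hypothetical CPU-limited rates */
    double fast_gpu = 150.0, slow_gpu = 55.0; /* hypothetical GPU-limited rates */

    /* With a fast card, the gap between the CPUs is visible... */
    printf("fast GPU: CPU A %.0f fps vs CPU B %.0f fps\n",
           measured_fps(cpu_a, fast_gpu), measured_fps(cpu_b, fast_gpu));

    /* ...with a slow card, both CPUs show the same GPU-limited number. */
    printf("slow GPU: CPU A %.0f fps vs CPU B %.0f fps\n",
           measured_fps(cpu_a, slow_gpu), measured_fps(cpu_b, slow_gpu));
    return 0;
}
```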
For people complaining about the i5-2500K being here: it was included to compare budget CPUs against a high-end CPU and see whether the price difference is worth it. I do think the 2500K should have been overclocked at least several hundred MHz higher than it was, since most buyers won't stop at just 4GHz. A higher clock rate would have made for a better comparison against the high-end systems.
For people complaining about the 1333MHz memory: it doesn't matter that better memory was an option. Faster memory wouldn't be used in the budget systems, so it would have skewed the result a little. The point was to benchmark the CPUs with all else being equal, and changing the memory wouldn't have helped that goal. I do believe 1600MHz memory would have been reasonable since it costs about the same as 1333MHz; if I were doing the testing I would have used whichever was cheaper at the time. Sales sometimes let 1600MHz kits dip below 1333MHz kits, though I haven't seen one on Newegg.com in a while, so I would have preferred 1600MHz, but 1333MHz shouldn't change the results much and the order should be the same. If one system used 1600MHz then all of them should use 1600MHz. Anything above 1600MHz would be unreasonably expensive and wouldn't help this comparison at all; it might even be detrimental. The choice wasn't made to cripple AMD's CPUs, and it's not that Intel doesn't need the extra bandwidth; faster RAM wasn't used because it would have been too expensive for a budget build. Unlike the expensive 7970, faster RAM would not give a better look at how the CPU behaves in a budget system.
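For anyone curious how much peak bandwidth actually separates the two speeds, here's a quick back-of-the-envelope calculation. These are theoretical peaks only, assuming dual-channel DDR3 with 64-bit channels; real games see far less than this, which is part of why the speed bump rarely changes the ordering.

```c
/* Hedged sketch: rough peak-bandwidth arithmetic for dual-channel DDR3.
 * Peak GB/s ~= transfers per second x 8 bytes per transfer x 2 channels. */
#include <stdio.h>

int main(void) {
    const double bytes_per_transfer = 8.0;  /* one 64-bit channel */
    const double channels = 2.0;
    const double speeds_mt_s[] = { 1333.0, 1600.0 };

    for (int i = 0; i < 2; i++) {
        double gbs = speeds_mt_s[i] * 1e6 * bytes_per_transfer * channels / 1e9;
        printf("DDR3-%.0f dual channel: ~%.1f GB/s theoretical peak\n",
               speeds_mt_s[i], gbs);
    }
    return 0;  /* prints roughly 21.3 GB/s vs 25.6 GB/s */
}
```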
As for the RAM capacity, I think 8GB should have been used. Even in budget systems today, 8GB is easy to fit in ($25-$35 for 1333MHz, a little more for 1600MHz), so there is no good reason to use less than 8GB in a dual-channel system. 6GB is reasonable in a triple-channel system, but there is no excuse for 4GB unless these are older parts. Was there a reason for using only 4GB?
For anyone complaining about the 32-bit OS: if you bought the OS from Newegg or another retailer, you can move to 64-bit for free. Everyone else can probably switch it out somehow, unless you are running XP. XP Pro x64 isn't that good anyway; it has pretty poor driver and software support compared to Vista/7 x64.
Outlander_04 :
This is probably the least useful set of benchmarks ever on Tomshardware. It is not of any use to anyone building a budget PC, which seems to be the point of using sub-$200 processors.
The use of a $470 graphics card with this build negates the entire point of making the comparison. What would have been far more useful would have been a 6850 or 6870 in this build. The results could be quite different. We saw exactly that in the SBM $500 builds last year, where the i3 2100 was resoundingly spanked by a Phenom 955.
Since the FX 4100 can game on a par with, or better than, the 955, it should do the same.
As usual, criticism of the FX architecture is based on a failure to understand that architecture. AMD shot themselves in the foot by calling these processors quad, hex and 8 core. They are really 2, 3, and 4 core parts with a small part of a traditional core duplicated so that the module [which should be called a core] can run two equal threads. It's a hardware implementation of hyperthreading.
You have failed to understand the Bulldozer architecture. It is not 2, 3, and 4 core parts with hardware hyper-threading; there really are 4, 6, or 8 integer cores and 4, 6, or 8 128-bit FPUs. Within a module, two cores and two 128-bit FPUs (which can work together for 256-bit FP math) share hardware. Hyper-threading duplicates some hardware so that two threads can share a single core, whereas Bulldozer shares hardware between two real cores; the concepts are almost polar opposites. AMD's module concept isn't why Bulldozer failed so much as their decision to do what Netburst did: lengthen the pipeline tremendously and make other changes that reduce IPC, while trying to raise the clock speed enough to make up the difference. The problem is that clock speed can't go much higher without power usage dropping dramatically first, because AMD's CPUs simply use too much power. The poor performance of AMD's SRAM caches doesn't help them either. They probably use more power at stock than a moderately overclocked Sandy Bridge quad core (i5 or i7), but I admit that is an estimate at best.
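To illustrate the shared-FPU point, here's a hedged sketch using compiler intrinsics. It only shows the two instruction widths involved, not AMD's actual scheduling: a single 256-bit AVX add needs both 128-bit halves of a module's FPU at once, while two independent 128-bit adds are the kind of work the module's two cores can feed to the shared FPU separately.

```c
/* Hedged sketch: one 256-bit AVX add vs two 128-bit SSE adds.
 * Compile with something like: gcc -mavx -O2 fpu_sketch.c
 * This only illustrates operand widths; how Bulldozer schedules them
 * internally is described in the paragraph above, not by this code. */
#include <immintrin.h>
#include <stdio.h>

int main(void) {
    float out256[8], outA[4], outB[4];

    /* One 256-bit add: on Bulldozer this would occupy both 128-bit
     * halves of the module's shared FPU at the same time. */
    __m256 wide = _mm256_add_ps(_mm256_set1_ps(1.0f), _mm256_set1_ps(2.0f));
    _mm256_storeu_ps(out256, wide);

    /* Two independent 128-bit adds: work that the module's two integer
     * cores could issue to the shared FPU separately. */
    __m128 a = _mm_add_ps(_mm_set1_ps(1.0f), _mm_set1_ps(2.0f));
    __m128 b = _mm_add_ps(_mm_set1_ps(3.0f), _mm_set1_ps(4.0f));
    _mm_storeu_ps(outA, a);
    _mm_storeu_ps(outB, b);

    printf("256-bit lane 0: %.1f, 128-bit results: %.1f and %.1f\n",
           out256[0], outA[0], outB[0]);
    return 0;
}
```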