Wisecracker :
To claim a review is biased and untrustworthy because you don't like the results is the last refuge of a scoundrel.
You are wrong, and the facts prove it, even as erroneously presented on various forums.
Apparently, on some forums it is acceptable to call a review biased and untrustworthy simply because it shows AMD doing well.
However, on those same forums the reverse is not true: you are not allowed to call a pro-Intel benchmark review biased even when you can point out half a dozen things the benchmark did incorrectly.
Things like the NB speed you mentioned, or using very loose timings with slower memory on the AMD system while using tighter timings with faster memory on the Intel system. Or not bothering to increase the NB voltage, getting 400 MHz less than the average maximum overclock, and then running the entire review with the Intel chip clocked 400 MHz higher. (Only someone completely without a brain would do something like that.)
Or comparing the chips "at stock" and posting the base clock frequency on the result charts for various benchmarks even when you know the chip is not actually running at that base clock. That takes extreme bias. (Or complete desperation.)
ElMoIsEviL :
It is common knowledge (not sense but knowledge) that the Nehalem Architecture excels in gaming scenarios and that the Phenom II X6 does well in multi-threaded scenarios.
What you say is commonly accepted on some forums; it is not "common knowledge".
But then we still have some less-than-knowledgeable people who believe Crysis is the last word in gaming benchmarks. They don't seem to understand that if Crysis had not been so pro-Intel, it would have disappeared as a popular benchmark years ago, just as has happened with a dozen other benchmarks that didn't really show a difference between Intel and AMD. But because Crysis produces anomalous results, the pro-Intel people LOVE it. (Instead of throwing those results out as statistically unacceptable.)
The same thing applies to some popular non-game benchmarks. You can show four similar benchmarks that compete with the questionable one, but because their results don't completely agree with the "popular" benchmark, they are thrown out, when in fact the reverse should have been done: the lone disagreeing benchmark is the one that deserves scrutiny.
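To make the outlier argument concrete, here is a minimal sketch with entirely hypothetical numbers: five benchmarks measure the same CPU pair, four roughly agree, and one does not. A robust check such as the median absolute deviation (MAD) flags the odd one out, which is the opposite of discarding the four that agree.

```python
import statistics

def flag_outliers(scores, threshold=3.0):
    """Return indices of scores far from the median (modified z-score)."""
    med = statistics.median(scores)
    mad = statistics.median(abs(s - med) for s in scores)
    if mad == 0:
        return []  # all scores effectively identical
    return [i for i, s in enumerate(scores)
            if abs(0.6745 * (s - med) / mad) > threshold]

# Hypothetical relative speedups (chip A vs chip B) from five benchmarks;
# the last value is the anomalous "popular" benchmark.
speedups = [1.04, 1.02, 1.06, 1.03, 1.38]
print(flag_outliers(speedups))  # -> [4]: the disagreeing benchmark is flagged
```

The point is simply that when four independent measurements cluster together and one sits far away, standard practice treats the lone result as the suspect one, not the consensus.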