ElMoIsEviL :
The "beliefs", if you can call them that, on this forum are based on a rational and reasonable interpretation of the evidence. If you have 10 Objective Websites telling you one thing and a single 1 telling you something else... who would you be more prone to believe?
We all know that you consider yourself unbiased and a non-fanboy; you keep repeating it like a mantra. Since you are part of a majority on this forum, you somehow believe that makes your opinion correct, and you then conclude that you are being rational and reasonable since everyone agrees with you. But being in the majority doesn't automatically make you correct.
NOTE: It is not "10 objective websites versus one link" as you attempt to claim. Not unless you blindly accept the summaries of reviews without actually looking at the results yourself and doing your own analysis.
===========================================
Let me present an artificial example to illustrate the problem. Below are two summary statements from different reviewers, both of which could describe the exact same set of benchmark results:
A. "This review had 12 wins and 4 losses for one brand. They take the performance crown and the competition obviously can't compete."
versus
B. "This review had 4 obvious wins, 8 scores close enough to be considered tied and 4 obvious losses. These different chips trade blows and basically perform the same."
The problem is that if a popular reviewer writes summary "A", many people will blindly accept it as irrefutable fact without bothering to look at the actual data themselves. A worse problem is that people who strongly agree with summary "A" will tend to discount or ignore summary "B" when they encounter it, even though the two summaries describe the same results worded differently.
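To make this concrete, here is a minimal Python sketch using invented numbers that match the artificial example above. The only difference between the two summaries is the tie threshold: reviewer "A" counts any positive margin as a win, while reviewer "B" treats anything within 5% as a tie. The deltas and the 5% cutoff are illustrative assumptions, not data from any real review.

```python
# Illustrative sketch: the same 16 made-up benchmark deltas, summarized two ways.
# Deltas are (brand A score / brand B score - 1); all numbers are invented to
# match the artificial 12-4 vs 4-8-4 example, not taken from any real review.

deltas = [0.12, 0.09, 0.11, 0.15,           # 4 clear wins for brand A
          0.02, 0.01, 0.03, 0.04,           # 8 results within a few percent
          0.02, 0.01, 0.03, 0.02,
          -0.10, -0.08, -0.12, -0.09]       # 4 clear losses

def summarize(deltas, tie_threshold):
    """Count wins/ties/losses, treating |delta| <= tie_threshold as a tie."""
    wins   = sum(1 for d in deltas if d >  tie_threshold)
    ties   = sum(1 for d in deltas if abs(d) <= tie_threshold)
    losses = sum(1 for d in deltas if d < -tie_threshold)
    return wins, ties, losses

# Reviewer "A": any positive margin counts as a win, no ties allowed.
print(summarize(deltas, tie_threshold=0.0))    # -> (12, 0, 4)

# Reviewer "B": differences under 5% are "close enough to be tied".
print(summarize(deltas, tie_threshold=0.05))   # -> (4, 8, 4)
```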
==========
To complicate matters further, what really needs to be done is to compare the results for the same exact benchmark across multiple review sites. (i.e., if you truly want to be objective, you can't just lump together the "wins" and "losses" from a single reviewer and think you have something important.) When doing this type of analysis you must, of course, consider possible differences between the reviewers' test setups, but that is exactly why you want a larger number of reviewers: to statistically "weed out" the inconsistencies.
When this is done, what some people originally considered "obvious" often turns out to be far less so.
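As a rough sketch of what that cross-site comparison could look like: take every site's reported result for one benchmark, use the median as the consensus, and flag any site that sits far from it. The site names, deltas, and cutoff below are all hypothetical, chosen only to show the method.

```python
# Minimal sketch of cross-site aggregation for one benchmark. The site names
# and percentage deltas are hypothetical; the point is the method, not the data.
from statistics import median

# Relative difference (chip X vs chip Y) reported for the SAME benchmark
# by several different review sites.
same_benchmark = {
    "site1": 0.04,
    "site2": 0.02,
    "site3": 0.03,
    "site4": 0.21,   # outlier: very different from everyone else
    "site5": 0.05,
}

consensus = median(same_benchmark.values())

# Flag any site whose result sits far from the consensus; such a result is
# a candidate for "needs an explanation", not automatic acceptance.
for site, delta in same_benchmark.items():
    if abs(delta - consensus) > 0.10:   # arbitrary illustrative cutoff
        print(f"{site}: {delta:+.0%} disagrees with the {consensus:+.0%} consensus")
```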
jennyh :
And for god's sake, how many times are you gonna post a World in Conflict benchmark as 'evidence'? We know how much that game favours Intel.
In a truly objective review, the reviewer would remove any anomalous results unless they could definitively explain exactly why the anomaly exists.
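One way to make "anomalous" concrete, again with invented numbers: within a single review's suite of results, flag any benchmark whose margin sits several median-absolute-deviations away from the suite's median. A flagged title (World in Conflict stands in here) then needs an explanation before it gets averaged in.

```python
# Hedged sketch of "flag anomalies before averaging" using the median
# absolute deviation (MAD). Benchmark names and deltas are invented; "WiC"
# stands in for a title known to heavily favor one vendor.
from statistics import median

review = {
    "Game A": 0.03, "Game B": -0.02, "Game C": 0.01,
    "Game D": 0.04, "Game E": -0.01, "WiC": 0.35,
}

deltas = list(review.values())
med = median(deltas)
mad = median(abs(d - med) for d in deltas)

for name, d in review.items():
    # 3.5x MAD is a common rule-of-thumb cutoff for outliers, not a law.
    if mad and abs(d - med) > 3.5 * mad:
        print(f"{name} ({d:+.0%}) is anomalous; explain it or exclude it")
```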
Oh but hey... we're not objective; we're living in a fantasy. Unlike Elmo, who is the epitome of neutrality.