TheGreatGrapeApe
Champion
The main thing for me is that despite errors and issues within [H]'s own reviews, instead of looking to improve on them, they attack the criticism first, or the other methods (what did Anand do to draw their ire personally?), and only later, if it becomes blatantly obvious to everyone else, do they even start to consider an option that isn't of their own making. When it was pointed out that there might be IQ problems with some drivers that they may have missed, the problem became people focusing on 'such a small issue', not that they missed it, and they continued to use beta drivers while offering 'they are just beta drivers' as the excuse.
I like to use both [H]'s benchies and other sites, because they never match up. Which one is correct is a point of view: [H] is subjective (no question, it's the style they chose) and strictly valid only for the setup they have and the runs they ran. I'm not sure how that gives more information to someone not using a setup similar to theirs, nor to someone who isn't playing the small handful of games they chose at the settings they prefer (Oblivion with little/no grass, but shadows on everything including grass?).
I find that instead of moving the art of benchmarking forward, this article is more of a defense of their methods and an attempt to condemn all those who don't follow this narrow path.
If someone were looking for a be-all and end-all single source of information, it would need to have more information and be more consistent than their current reviews, which pretend to be more than they are.
If anyone can explain to me why their histograms don't match their min/avg/max numbers in the 'settings', that would also help a lot.
Personally, I prefer looking at the histograms of the apples-to-apples tests, because they tell me more than subjective opinions; but if the histograms are questionable too, then how can you validate what is essentially a 'look & feel' review rather than something reproducible?