[citation][nom]stm1185[/nom]All I learned from this is how to collect a bunch of meaningless benchmarks, run the results, then decide that even though Firefox didn't win the most of them, it won anyway. Please you wanted FF to win so it won. Thats all there was to this whole piece.[/citation]
[citation][nom]spotify95[/nom]Correct me if I am wrong, however, how can the web browser that had the LEAST AMOUNT of wins in Windows be crowned the WINDOWS CHAMPION??!! I thought the Web Browser Grand Prix tested the browsers, and BASED ON THE RESULTS, crowned a champion!The Web Browser Grand Prix has been seriously rigged once the raw placing tables were dropped - it should be the raw placing that determines the order of what browser wins and what browser loses!!!Unless this is sorted out, I am DELETING the Tom`s Hardware site from my Favourites, and UNSUBSCRIBING from your newsletters etc!!!![/citation]
First off, let me apologize for the confusion. I've explained this a bunch of times (sometimes in the articles, sometimes in comments, and sometimes in reader emails). After nine articles and two years, I forget that not everyone has been around since day one.
There was an overwhelming amount of negative feedback regarding the original scoring method, which was based solely on raw placing. Straightforward, right? Well, the first problem was the weight given to each test. For instance, if I ran 5 JavaScript tests and only 1 CSS test, that gives unfair weight to the browser strong in JavaScript. I couldn't argue with that logic, but the following scenario put forth by a reader drives the point home: reversing the number of tests to 5 CSS and 1 JavaScript would swing the result in the complete opposite direction, just by changing the proportion of tests.
The point of running redundant tests is to make sure they are all still valid (to benchmark the benchmarks), not to skew the results in favor of any single area of testing, or in favor of any browser which excels in a test-heavy category. The analysis tables were created to give equal weight to each category of testing. This allows us to run any number of tests in each category, as well as remove bad tests, without having to worry about balancing the makeup of the overall suite.
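To make that weighting issue concrete, here is a rough Python sketch using made-up placings (not real WBGP data): five JavaScript tests and one CSS test, scored first by raw per-test wins and then with each category weighted equally. It only illustrates the math; it is not the script behind the actual analysis tables.

[code]
# Hypothetical placings: Browser A is strong in JavaScript, Browser B in CSS.
results = {
    "JavaScript": [                 # five JS tests
        {"A": 1, "B": 2},           # per-test placing: 1 = first place
        {"A": 1, "B": 2},
        {"A": 1, "B": 2},
        {"A": 1, "B": 2},
        {"A": 1, "B": 2},
    ],
    "CSS": [                        # a single CSS test
        {"A": 2, "B": 1},
    ],
}

def raw_placing_wins(results):
    """Count outright test wins, ignoring categories (the old method)."""
    wins = {}
    for tests in results.values():
        for placing in tests:
            winner = min(placing, key=placing.get)
            wins[winner] = wins.get(winner, 0) + 1
    return wins

def category_scores(results):
    """Average the placing within each category, so every category counts once."""
    totals = {}
    for tests in results.values():
        per_cat = {}
        for placing in tests:
            for browser, place in placing.items():
                per_cat.setdefault(browser, []).append(place)
        for browser, places in per_cat.items():
            # lower average placing is better; each category gets equal weight
            totals[browser] = totals.get(browser, 0) + sum(places) / len(places)
    return totals

print(raw_placing_wins(results))   # {'A': 5, 'B': 1} -> A dominates on raw wins
print(category_scores(results))    # {'A': 3.0, 'B': 3.0} -> a dead heat per category
[/code]

Swap the test counts (5 CSS and 1 JavaScript) and the raw-win tally flips completely, while the per-category totals stay tied - which is exactly the reader's point.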
The second problem with raw placing was the scale of victory. The strong column was added to account for all of the times the winner beat the second-place finisher by an insignificant margin - say, 60.1 FPS to 59.9 FPS (this happens often). Looking at raw placing alone, a champion could come in first on the most tests, yet also come in last more often than a close second-place finisher with no losses. In that scenario, the second-place finisher is legitimately the better overall choice. The strong column makes this recurring scenario clearer.
Later on, the acceptable column was added to further represent scale. It provides a place for browsers that aren't close enough to the winner to be considered strong [in comparison], but also aren't weak performers. For instance, consider these scores: 60.1 FPS, 59.9 FPS, 35 FPS, 32 FPS, and 20 FPS. Obviously 60.1 is the winner, but 59.9 practically ties it (strong), 35 and 32 are totally playable frame rates (acceptable), and 20 is weak in two ways: it isn't an adequate frame rate on its own, and it falls well behind the other contenders.
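If it helps to see that split spelled out, here's a quick Python sketch of the classification using those example scores. The 5-percent "strong" margin and the 30 FPS "acceptable" floor are placeholders I'm making up purely for illustration, not the exact cutoffs used in the articles.

[code]
STRONG_MARGIN = 0.05    # assumed: "strong" means within 5% of the winning score
ACCEPTABLE_FLOOR = 30   # assumed: minimum playable frame rate in FPS

def classify(scores):
    """Label each browser winner/strong/acceptable/weak for one FPS test."""
    best = max(scores.values())
    labels = {}
    for browser, fps in scores.items():
        if fps == best:
            labels[browser] = "winner"
        elif fps >= best * (1 - STRONG_MARGIN):
            labels[browser] = "strong"
        elif fps >= ACCEPTABLE_FLOOR:
            labels[browser] = "acceptable"
        else:
            labels[browser] = "weak"
    return labels

# The example from the text: 60.1, 59.9, 35, 32, and 20 FPS
print(classify({"A": 60.1, "B": 59.9, "C": 35, "D": 32, "E": 20}))
# {'A': 'winner', 'B': 'strong', 'C': 'acceptable', 'D': 'acceptable', 'E': 'weak'}
[/code]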
The current scoring method essentially comes down to winner/strong versus weak, with acceptable providing a place for merely average performances. All of these changes happened over time, and now that the raw placing tables are entirely gone, the winner column is basically vestigial. While Firefox won WBGP7 by the slimmest of margins ever in the series, the WBGP8 Windows championship is not nearly that close - Firefox wins decisively.
If the WBGP were about finding out which browser wins the highest number of benchmarks in existence, raw placing would still be fine. But that is not an accurate picture, and it isn't good for much more than unfounded fanboi bragging rights. Also, considering that in many cases the browser vendors are the test curators, that approach is extremely easy to game - if we included every benchmark in existence, the IETestDrive site alone would ensure that IE stays king forever.
If you still feel this way and have a solid argument for returning to the raw placing tables (even now, knowing the reasons behind the switch), please let me know.
-Adam