Tom's browser benchmarks always seem to be rigged: important tests are weighted equally with unimportant ones, which skews the final results.
First, let's differentiate between important and unimportant tests:
Important:
Startup time, Light
Startup time, Heavy
Page Load Time, Uncached
Page Load Time, Cached
JavaScript
Flash
Memory Usage, Light
Memory Usage, Heavy
Proper Page Loads
Unimportant, and reason:
Silverlight - Microsoft's attempt to replace Flash; you don't need it if you have Flash.
HTML5 - a more promising replacement for Flash, but it's so new that it shouldn't really be tested like this yet.
Hardware acceleration - also too new to be tested yet.
Memory management - the test is flawed; some browsers deliberately keep cached images and pages in memory.
Another issue is features or problems not tested:
Addon support
Built-in features
Cross-referencing the speed slowdown of browsers with built-in features against browsers that require addons to supply those same features
Browser tests on low-end hardware (Atom netbooks, old single-core Celerons, etc.)
Control over bandwidth (for example, Chrome periodically attempts to download an enormous quantity of data in the background without permission, and there is no way to stop it short of closing the browser or briefly disconnecting from the internet)
GUI lag, i.e. browser responsiveness apart from page load time (Firefox is very bad at this: pages may load fast, but try switching tabs or using the menu with 8+ tabs open and massive GUI lag ensues)