[citation][nom]cangelini[/nom]I believe we were actually the first to publish Sandra numbers back in October:
http://www.tomshardware.com/review [...] 91-3.html. The problem is that SiSoftware is still limited in what they can test. Windows RT's rules means we can't include the Surface or ATIV Tab (believe me, we've covered this with Adrian over there already). We can't include iOS-based devices. Really, we'd be limited to Android and Windows 8 (x32). GeekBench is far from ideal, and we've admitted that over and over again. But it remains one of the only tools at our disposal for the broadest possible comparisons.[/citation]
That link's broken, though from the URL I think I know which article you're talking about. But that article didn't have Medfield, IIRC, or of course Clover Trail... I've seen SiSoft's own benchmarks for Medfield, which is why I was interested in seeing the Clover Trail results...
I understand the need for a broad comparison, but perhaps if you could run Sandra on, say, one Tegra 3, one Clover Trail, and some other Android device, we could confirm Geekbench's results and place the other devices accordingly... or something.
Just a suggestion, not implying any sort of conspiracy.
[citation][nom]AlanDang[/nom]Anand said "I've looked at using Geekbench to compare iOS to Android in the past and I've sometimes seen odd results." Each benchmark is just another tool in the toolbox. There's no such thing as an odd result on a synthetic benchmark -- the expectations were just wrong. If something is unusually fast, unless there is driver optimization/cheating, what it just means is that the CPU happens to have a best-case scenario that's being tested. Geekbench has some very specific benchmarks which are broken down with good granularity which can give you some clues into the platform. Likewise, platform performance is what's critical and as we saw with SurfaceRT vs Transformer Prime, the operating system can have a difference (with the 4+1 core support). If an OS like WebOS is slow, wouldn't you want a benchmark that also shows you that penalty? The main beauty of free online reviews is that you can see how different people get to different conclusions and different approaches and not have to pay anything. Anandtech and Tom's HW both recently did articles looking at granular power consumption on Atom vs. Tegra3 on WinRT using Intel's test equipment. Anand went with Intel's HW and focused on getting out benchmarks for boot-up, etc. (with pretty looking graphs). Our article published granular data for the memory subsystem, screen, etc. and then spent a lot of time with thinking about ways to independently validate those results and how just the mindset of thinking about watts being drawn for specific tasks lets us make some good predictions for A15, Krait, etc. It's almost like movie reviews. If everyone agrees that something is great, you know you have a winner. If there's a lot of debate, you know that no single solution is perfect, otherwise there wouldn't be a debate.[/citation]
I remember Anand also saying he's not sure it's a good cross-platform benchmark. It might be the same source you're quoting, though this was quite a while ago, so I may be confused.
But I'm not sure I quite agree that synthetics don't show odd results... Andrew, for example, dislikes PassMark, and I've seen extremely unbelievable results from it myself across Android, Windows, and iOS. I've never used Geekbench on Android/iOS myself, so I can't say much about that except quote Anand, and the only reason I'm bothering to do that is that I feel you guys and Anand probably know as much as each other, which is much more than me.
But yeah, driver optimization (or even compiler optimization?) is a factor (as is vendors optimizing for a specific benchmark)... I was testing Sandra for SiSoft recently after a recent version of theirs suddenly started showing abnormal video memory bandwidth for Nvidia cards... so yeah. But then I remember Andrew saying that you guys know Geekbench pretty well at a granular level. I'm just saying that if you also know it isn't perfect and is possibly inconsistent (as Chris acknowledged above), then throwing in another synthetic is a good idea, IMO.
IIRC, 3DMark is coming up with a "true" cross-platform benchmark, so I guess we'll have to wait for that...
BTW: is there any point in using SunSpider any more? Isn't the recent trend to specifically optimize for SunSpider? In which case, what's the point?