GOM3RPLY3R :
hcl123 :
Also hardware differences are a major factor
Sure, they can be big... but software differences can be more than an order of magnitude bigger than the hardware ones.
An old example, as old as my grandpa, that used to interest enterprises and corporations:
http://www.edn.com/design/systems-design/4314689/Autovectorization-for-GCC-compiler
EEMBC publishes two types of scores: out of the box and “full fury,” or optimized...
... The full-fury scores show an average improvement of more than 1500% over out-of-the-box scores for the same processor running at the same speed
Can you appreciate the difference?? ...1500% is 15x more; I can't remember when a CPU or GPU delivered even 20% of that from one generation to the next, usually not even 2 or 3 generations apart... understand?
Okay, then going by your point that this is all software, why not just put it on the Intel processors? -_-
I don't understand, put what on Intel processors?
Understand: by a standardization effort, all this software runs on either AMD or Intel processors, even if it is... how could we say... "emulated" at the instruction level through microcode.
Even from the same vendor, natively supporting the exact same ISA (instruction set architecture) and all its extensions, not all "instructions" are hardwired the same way. Between different vendors the differences can be even bigger, to the point that one vendor's design has hardwired instructions that on another's can only run through microcode (emulated).
This is mostly independent of the macro-architecture (usually meaning everything that is not the "cores"), but sometimes also of the micro-architecture; that is, the cores can be pretty much identical yet still implement some instructions differently in hardware.
All this maze of complexity, micro and macro, has an enormous influence on software. Better said, it is the software's job to take advantage of the hardware, not the other way around. Algorithms, routines, functions, procedures, etc. can have exactly the same effect/result while using different combinations of instructions, out of an immense number of possible combinations (triggered by compiler flags, by different versions of the same compiler, by different compilers, or by hand-written assembly)...
That is why we optimize software... optimizations can be plenty in number, aimed at different objectives, while **producing exactly the same results**. And since the hardwiring can differ plenty, it is quite possible that some *versions of that software* run better on one particular "processor" while other versions run better on another (same vendor or not), yet the result is exactly the same.
(And when I say "results" here, it is not numbers on a chart; it is the result of the software, the objective, not how fast it gets there. It is like calculating pi with SuperPI: there are plenty of ways to get the same digits of pi, some faster, some slower.)
So if I published the lulebentmark uber duper benchpack, it is possible that build 1024 runs better on CPU X from vendor A, while build 1035 runs better on CPU Z from vendor I, yet both pretend to measure exactly the same thing and carry exactly the same name.
So in the end, yes, IT IS A WHOLE LOT more complicated than many analyses and opinions suggest, and yes, *driver* software counts A LOT.