AMD vs INTEL FLOPS

psychoman · Honorable · Jan 17, 2014
I just saw AMD saying that their new Kaveri A10-7850K APU has 118 CPU GFLOPS, and I know the i5-3570K has ~60 GFLOPS. I know that AMD and Intel FLOPS are calculated differently, but can someone explain why they are calculated differently, and which is more efficient and why? :)
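For reference, headline GFLOPS figures are usually just peak arithmetic: cores × clock × FLOPs per cycle per core, and the FLOPs-per-cycle figure can be counted several ways (SSE vs AVX, single vs double precision, FMA counted as one op or two). A quick sketch of that arithmetic; the per-cycle counts below are illustrative assumptions, not official vendor figures:

```python
# Rough peak-FLOPS arithmetic: cores * clock (GHz) * FLOPs per cycle per core.
# The per-cycle counts used here are illustrative assumptions, not vendor numbers.
def peak_gflops(cores, clock_ghz, flops_per_cycle):
    return cores * clock_ghz * flops_per_cycle

print(peak_gflops(4, 3.7, 8))                          # 118.4, close to the 118 figure quoted above
print(peak_gflops(4, 3.4, 4), peak_gflops(4, 3.4, 8))  # 54.4 and 108.8 for the same 3.4 GHz quad core
```

Same chip, very different totals, depending purely on how many FLOPs per cycle you decide to count.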
 
Solution
Flops are... well flops don't really matter for home users.
A flop is merely how quickly a processor can change a 0 to a 1 (or vice versa). The more cores it has, the more flops it can typically produce, so even a cheap 8-core CPU is going to have a pretty big lead over a high-powered quad-core. But in the real world you don't measure performance by flops, you measure it in real-world workloads. Flops are generally a better comparison for theoretical throughput in highly threaded server applications... and even then they do not always translate to real-world performance.

What you want to look for are 2 things:
1) Single-thread benchmarks. This would be things like real-world performance of how quickly iTunes and other crap software can do things. Most simple programs (like Office and web browser tabs) are single threaded, so this is very important for getting an idea of just how fast a processor is for day-to-day applications.

2) Game benchmarks. Again, not synthetic benchmarks, and not just any game benchmarks, but benchmarks of games you actually intend to play. Some will favor AMD, while others (most) will favor Intel.
 
Ya.... ReviewTechUSA is kinda dumb. Not quite sure how he has gotten so popular other than that he is outrageous.

Video editing programs are highly, HIGHLY optimized for specific processors. Any time you are using high-end software, you build your rig to the software. Vegas works better with AMD, while Adobe typically works better with Intel. Also, when doing video editing (which I do on occasion), the HDDs or SSDs typically have a lot more say in performance than the processor. Also, we don't know what GPUs are being used, or whether he is using any GPU acceleration for the output.

Generally speaking, he is full of crap and should be ignored whenever possible.
 


In general you are correct, but your definition of flops is not. FLOPS stands for floating-point operations per second; a floating-point operation is any math operation performed on a non-integer (decimal) number. Now, if you want to be honest, you use a standard mix of floating-point operations, like the Whetstone benchmark, which exercises all of the normal FP operations. If you want to make your numbers look really good, you only count addition and subtraction, which are usually 1-2 cycle operations, compared to multiplication and division, which can take 10+ cycles depending on the processor. Depending on the mix of instructions you use, you can generate more realistic or more marketable numbers.
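To see how much the mix matters, here is a minimal sketch (array size and pass count are arbitrary choices, and numpy overhead and memory bandwidth make this far from a rigorous benchmark) comparing an add-only workload against a divide-heavy one:

```python
# Minimal sketch: element-wise FP operations per second for an add-only
# workload vs a divide-heavy one. Sizes/passes are arbitrary illustration values.
import time
import numpy as np

a = np.random.rand(10_000_000)
b = np.random.rand(10_000_000) + 1.0   # keep divisors away from zero

def gflops(op, passes=10):
    start = time.perf_counter()
    for _ in range(passes):
        op(a, b)
    elapsed = time.perf_counter() - start
    return a.size * passes / elapsed / 1e9   # one FP op per element per pass

print("add    :", round(gflops(np.add), 2), "GFLOPS")
print("divide :", round(gflops(np.divide), 2), "GFLOPS")
```

The add-only number will typically come out well ahead, which is exactly why an add-heavy instruction mix produces more marketable figures.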


Generally (unless you are using the machine as a workstation), FLOPS and MIPS don't impact your life, because consumer workloads are a mix of both kinds of instructions along with lots of memory and I/O accesses. Much of what consumers use is also shifted heavily toward integer math for better performance: if you can multiply everything by 10 and avoid decimals entirely, you can get a significant speedup. Many workstation applications, however, are FP heavy. If you are doing computational fluid dynamics, then double-precision floating-point speed defines how fast you can do work, rather than just playing a factor; this is why there is a large market for workstation graphics cards, which have floating-point performance orders of magnitude better than CPUs can achieve.
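A tiny sketch of that scaled-integer idea (money kept in whole cents instead of fractional dollars), purely for illustration:

```python
# Scaled-integer idea: keep money in whole cents so every operation is an
# exact integer add instead of a floating-point add.
prices = [19.99, 4.25, 0.30]
total_fp = sum(prices)            # floating-point adds, may carry rounding error

prices_cents = [1999, 425, 30]    # same values scaled by 100
total_cents = sum(prices_cents)   # exact integer adds

print(total_fp, total_cents / 100)
```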
 

nardis_miles · Reputable · Mar 12, 2014


Your definition of flop is completely incorrect. A FLOP is a floating-point operation, such as adding, multiplying, or dividing two real, non-integer numbers, as opposed to fixed-point (integer) operations. It is very important to engineers and, to some extent, to gamers, although most of the FLOPS in games are now handled by GPUs (graphics processing units).

The benchmark for FLOPS is usually generated using a standard software suite, such as Linpack (matrix-matrix and matrix-vector operations), so that the benchmark is, in fact, valid across processors. You just have to look a little deeper to see which suite is being used. Typically, Intel processors are faster, but the performance per dollar is significantly lower. I usually buy AMD for this reason.
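For a rough do-it-yourself version of that kind of measurement (a sketch only, nowhere near the official Linpack benchmark), you can time a dense matrix multiply and count roughly 2·n³ floating-point operations:

```python
# Rough Linpack-flavored estimate: time an n x n matrix multiply and count
# about 2*n^3 floating-point operations. n = 2000 is an arbitrary choice.
import time
import numpy as np

n = 2000
A = np.random.rand(n, n)
B = np.random.rand(n, n)

start = time.perf_counter()
C = A @ B
elapsed = time.perf_counter() - start

print(f"{2 * n**3 / elapsed / 1e9:.1f} GFLOPS (double precision)")
```

Because the same code and the same operation count are used on every machine, this kind of number is at least comparable across processors, unlike vendor peak figures.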