Intel vs AMD: The Why

Status
Not open for further replies.

SlitWeaver

Honorable
Mar 23, 2013
544
0
11,060
30
Introduction
The first thing you're thinking is that this is another "Intel is better than AMD" or "AMD is better than Intel" thread. Well, thank god it's not! I have a number of curiosities about the differences between Intel and AMD that I would love to have satisfied (as I'm sure many others do as well). I'm not sure where to begin, so we'll just jump right on in and hope for the best.

Notes before we begin...
Please don't post anything along the lines of "Intel is better because it is not AMD" or any cr*p like that. I'm looking for intelligible responses and factual/proven information.

Which is for who?
A lot of times I have seen AMD CPUs recommended to people who do things like Adobe work and the like, and Intel recommended to gamers (of course AMD is recommended to gamers too). I'm curious about the reasoning behind these suggestions to PC builders. Can anyone link to some (recent) tests proving which is better in which case?

Intel is stronger. Why?
Now I know AMD users reading this section are already screaming out in outrage that I could say such a horrible thing; however, it's been proven with hundreds of thousands of hours' worth of testing and benchmarking that Intel CPUs are more powerful than AMD CPUs (at least in gaming). Why is it that the i7-3770K outperforms the FX-8350? Based just on the specs, the 8350 should be much more powerful than the 3770K; it has more cores and a much higher clock. Why, then, does the 3770K outperform the 8350? Does Intel use better materials? Do they place components in a more efficient way inside the CPU?
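For context on where the answers tend to land: raw single-thread speed is roughly IPC (instructions per clock) times clock speed. Here's a toy calculation; the IPC values are invented purely for illustration and are NOT measured figures for either chip:

```python
# Toy model: single-thread throughput ~ IPC * clock.
# The IPC values below are INVENTED for illustration only;
# they are not measured figures for these CPUs.

def throughput(ipc, clock_ghz):
    """Billions of instructions retired per second on one core."""
    return ipc * clock_ghz

i7_3770k = throughput(ipc=2.0, clock_ghz=3.5)  # hypothetical: 7.0
fx_8350 = throughput(ipc=1.4, clock_ghz=4.0)   # hypothetical: 5.6

# Higher IPC beats a higher clock in this toy example.
print(i7_3770k > fx_8350)  # True
```

If the toy numbers are anywhere near reality, "more cores and a higher clock" simply isn't the whole spec sheet.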

Benchmarks
-----
Source: PassMark
3770K= 9,639
8350= 9,163
Difference= 476
-----
Source: CPUBOSS
Cinebench (all cores)
3770K= 25,703
8350= 32,437
Cinebench (single core)
3770K= 6,862
8350= 4,319
3D Mark 11
3770K= 8,470
8350= 6,980
 

SlitWeaver


If that's true, why even bother with PassMark scores and the like?
 

ASHISH65

Distinguished


Cinebench scores are more important than PassMark.

 

SlitWeaver

This shows that the 3770K is more powerful, but not WHY. Good link nonetheless!


Cinebench, eh? Updating OP.
 

SlitWeaver


I'll keep this in mind, but it still only shows that the Intel CPU is better, but not why.
 

SlitWeaver


Do you have said deep knowledge? :D Because even if it's way above my level, I can do some research and bring a high level explanation to something at least semi-understandable to me.
 

e56imfg

Distinguished
Sep 5, 2011
937
0
19,160
50

You can compare performance to each other but not specs. As I said, different architectures.
 

ihog

Distinguished


http://en.wikipedia.org/wiki/Central_processing_unit

Read up. Anything you want to know will be in there, or linked to in there.
 

8350rocks

Distinguished
Cinebench is compiled using ICC, so Intel will always have a higher Cinebench score unless an AMD chip is operating 200% faster than the Intel chip, because of the "not Intel" flag.

Impressively, the FX-8350 still scores 6.9, where the i5-3570K is only 7.25 (? iirc, anyway) and the i7-3770K is only 7.5-7.75 or so...

Also, here are some excerpts from a conversation that was had in another thread, and I am too lazy to retype it all:

What he is talking about are protocols...

A software setup that feeds data in a mostly serial manner favors Intel, because Intel's instruction execution protocol is 90% serial data, which means Intel chips break down a single serial stream of data faster (single-threaded performance). AMD's instruction execution protocol is set up to run parallel streams of data (heavily threaded performance), and most software out right now is not designed to feed data to the CPU in that manner. So data being fed serially to a CPU designed to run parallel streams of execution is inefficient, and favors the chip designed for that type of data streaming.

For example...

Picture you're at Wal-Mart (or wherever), and there are 8 checkout lanes open. The first lane has a line a mile long, and they only allow 4 of the other 7 lanes to have a line 1 person long. It doesn't make any sense, right? For starters, they're not even using all of the lanes available, and the lanes they are using aren't being utilized efficiently.

That's what's happening inside an AMD architecture FX8350 with current software...

With Intel chips right now, it's more like the line at Best Buy, where you have 1 line a mile long, but the person at the front has 4 different cashiers to go to when they arrive at the front of the line.

So having 1 line a mile long doesn't slow them down; they're designed that way...

However, once information is fed in a parallel manner to the CPU, AMD will have all 8 lanes at Wal-Mart open for business and the lines will be distributed equally with people (instructions for the CPU), but Intel will still have the Best Buy type line with 4 people running a cash register, except that now there will be 4 or even 8 lines merging into that one line, which slows things down because they are not designed to execute like that.
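The checkout-lane analogy can be turned into a toy scheduling model. The shopper and lane counts below are arbitrary, chosen only to mirror the story:

```python
import math

def makespan(tasks, open_lanes):
    """Ticks needed to clear `tasks` equal-cost jobs when they are
    spread evenly over `open_lanes` lanes (1 job per lane per tick)."""
    return math.ceil(tasks / open_lanes)

SHOPPERS = 80

# Serially-fed software: only 1 of the 8 lanes (cores) gets a line.
one_lane = makespan(SHOPPERS, open_lanes=1)   # 80 ticks

# Parallel-fed software: all 8 lanes share the line equally.
all_lanes = makespan(SHOPPERS, open_lanes=8)  # 10 ticks

print(one_lane, all_lanes)  # 80 10
```

Same store, same shoppers; only the way work is handed out changes, which is the whole argument about software not yet feeding AMD's design.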

I hope the analogy makes this very complicated architecture discussion make sense.

That's a great question, man...

Think of it like this: when you're multitasking, RAM affects the amount of multitasking you can do, though Windows negates this to some degree by putting a "page file" on your HDD that is dynamic in size. That means Windows opens a file that acts like RAM and loads data from it when you don't have enough RAM to hold everything at once. RAM is faster, but the performance loss is only noticeable if you're running something extremely CPU/GPU heavy, like a game at 1440p or hardcore video encoding/rendering.

When you're multitasking, AMD protocols allow your background programs to form a serial line in front of an unused core, so that you're not tapping the resources your foreground program is using.

Now, say you were running 5 fairly intensive things at once (let's say streaming web videos, downloading multiple music files, and playing a web game...). Your i5-3570K would be able to execute 2-3 of those well (depending on their resource needs); the others would be passed off to a virtual core in the background and would run at a considerably slower rate.

Doing the same thing on an AMD 8-core chip, since only 1 of those requires any FP calculations, you could literally tap 5 cores to do all the work simultaneously.
That's what Hyper-Threading is: it essentially passes background or foreground applications to a "virtual core", with the processor taking a fraction of the clock time each cycle to run the threads dedicated to that virtual core.

Now, for the sake of hardware, the i5 doesn't have any "virtual cores" in Intel speak; however, the 4 cores you have can divide clock time by percentage to execute the functions you're currently running (effectively the definition of CPU multitasking). This taps resources you're using elsewhere, though it won't make a hugely noticeable difference in your foreground application's performance unless you're doing several CPU-intensive things at once. So, for example, your foreground application may be using 80% of 4 cores, and your background applications may be using 20% of the clock time per cycle to run their functions, but at a highly reduced rate compared to what they'd get as the primary program running.
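The 80%/20% split described above can be written down as a tiny arithmetic sketch; the core rate is an arbitrary unit, not a benchmark number:

```python
def effective_rate(core_rate, share_of_clock):
    """Work per second a program sees when it is granted
    `share_of_clock` of each cycle on a core."""
    return core_rate * share_of_clock

CORE_RATE = 100.0  # arbitrary work units/second for one core

foreground = effective_rate(CORE_RATE, 0.80)  # 80.0 units/s
background = effective_rate(CORE_RATE, 0.20)  # 20.0 units/s

# The background job still makes progress, just at a highly
# reduced rate compared to running as the primary program.
print(foreground, background)  # 80.0 20.0
```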
Also, Intel is NOT faster than AMD, nor are their cores "stronger"... let me elaborate, to dispel this myth.

Intel's architecture is used more efficiently by today's software; THAT is the difference. In reality, the AMD chip has easily 150% more raw horsepower than the Intel chip, but programs, while they are getting there, do not currently fully utilize AMD's architecture... YET!

Now, you should also know...places like teksyndicate, pureoverclock, overclock.net, openbenchmarking.org and other sites have a LOT of benchmarks where AMD wins...especially in games...

As a matter of fact, Crysis 3, Far Cry 3, BF3, Metro 2033, BioShock Infinite, and Tomb Raider all run as well on an 8350, or in some cases better on the AMD. (Crysis 3 runs better on AMD; like it or not, Intel guys, it's proven.)
 

SlitWeaver

First off, love your name ahaha! Second, thanks for all the quoted information! :D Third, could you link me to some of the AMD dominating Intel examples? I never really see any of those (maybe Intel pays Google to hide them? ;) )
 

rp_dxn

Honorable
Mar 27, 2013
89
0
10,630
0
To make the story short.

Intel CPU: more on office productivity, workstation, media, graphics processing, gaming, etc.
AMD CPU: I can only think of gaming, what else? Maybe file compression? :D

Intel: withstands higher temperatures. Proven and tested since their older generations.
AMD: can withstand some, but not that much.

Direct to the point: Intel DOMINATES AMD. Simple as that!
 

PapaCrazy

Distinguished
Dec 28, 2011
130
0
18,710
7
If all modern software was programmed perfectly to take full advantage of all resources available, and utilized each core equally, AMD CPUs would probably beat Intel on everything. With the large variety of software out there, single-core performance remains important to chug through inefficient programming. This is partly why Intel excels at certain games (like Skyrim): they are only programmed to take advantage of 1 or 2 cores, which favors Intel.

Even if you are dealing with video/photo programs like Photoshop or After Effects, you are likely going to be using filters or 3rd-party plugins that are not programmed efficiently and will lolly-gag on one core (and not even use 100% of that one core). I decided to go with a 2600K to hedge bets against those inefficient single-core programs and badly ported games, and still have some hyper-threading for efficiently programmed rendering and encoding.

But if you want a reason for Intel's advantage over AMD, it's partly the failure of software developers to take advantage of more cores. There's also legacy software people haven't upgraded yet, mobile CPUs that are still on one core... the multi-core revolution we were all waiting for, and AMD bet on, hasn't quite come to a boil yet.
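The point about single-core performance capping poorly-threaded software is usually formalized as Amdahl's law; here's a quick sketch, where the parallel fractions are made-up examples:

```python
def amdahl_speedup(parallel_fraction, cores):
    """Best-case speedup when only `parallel_fraction` of the runtime
    can be spread across `cores` (Amdahl's law)."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# A game where only half the work threads out: 8 cores barely
# beat 4, so per-core speed dominates.
print(round(amdahl_speedup(0.5, 4), 2))   # 1.6
print(round(amdahl_speedup(0.5, 8), 2))   # 1.78

# A well-threaded renderer (95% parallel) scales far better.
print(round(amdahl_speedup(0.95, 8), 2))  # 5.93
```

If the serial part dominates, extra cores are checkout lanes nobody is allowed to use.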
 

SlitWeaver


The rest of your speech was fine, but this stuck out. My phone is dual-core, some are 4-core, and the Exynos 8-core has two 4-core CPU clusters. All in one phone.
 

PapaCrazy


I focus my resources on desktops, not mobile devices, but if they're doing multi-core on phones, that's cool, I guess. Makes me wonder how many mobile apps are actually programmed for multi-core, but it's interesting nonetheless.
 

amdfangirl

Splendid
Herald
Not many, I would presume, given that mobile ARM quad-cores are relatively new and make up a small sector of the mobile market.

Even the PC market, which has had quad-cores relatively cheap and available since 2008-2009, still has applications like iTunes that aren't optimised for quad-core.

 

SlitWeaver


Pretty sure all [smart]phones are multi-core now. I don't use my phone for anything more than web-browsing, emailing, text messaging, and other basic things like that. Use it to test my apps, but those are hardly resource intensive. Why are phones multi-core? I don't know, just know that they are :)
 

amdfangirl
Because you can't continue scaling frequency. Other methods had to be found to increase performance.
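The frequency wall comes from the classic CMOS dynamic-power model, P ≈ C·V²·f: higher clocks generally need higher voltage, so power grows much faster than speed. A toy calculation, with all numbers in arbitrary illustrative units:

```python
def dynamic_power(capacitance, voltage, freq):
    """Classic CMOS dynamic-power model: P = C * V^2 * f."""
    return capacitance * voltage ** 2 * freq

base = dynamic_power(1.0, 1.0, 3.0)          # 3.0

# Pushing the clock 30% higher typically needs extra voltage too,
# so power balloons by ~87% for a 30% speedup (assumed numbers).
faster_clock = dynamic_power(1.0, 1.2, 3.9)

# Two slower, lower-voltage cores deliver more total cycles
# (2 x 2.5) for less power than the single hot core.
two_cores = 2 * dynamic_power(1.0, 0.9, 2.5)

print(base, faster_clock, two_cores)
```

That trade-off is exactly why the industry went wide (more cores) instead of fast (more GHz).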

There's that and marketing comes into it as well.

"I have quad core! No I have Hex core... Haha I beat you both with an Octo core"

Etc. Etc.
 

I bought that 37.6GHz i7, guess I win? :na:

 