Dispelling myths surrounding AMD vs. Intel


s4in7

EDIT: Some people pointed out that comparing clock-for-clock thermals rather than similar-performance thermals was not a great comparison, so I've added a similar-performance thermal comparison immediately following the existing clock-for-clock comparison.

I've seen too much false information in regards to AMD vs. Intel flying around here lately, so let's see if we can't put to bed some of the myths.

I didn't cherry-pick any of the following benchmarks to prove my point, and although performance differs between the two from benchmark to benchmark, I selected benchmarks indicative of the gaming landscape as it stands right now.



MYTH: AMD runs hotter than Intel
FACT: On a per-clock basis, AMD actually runs cooler than Intel, BUT it does draw more power, which I guess is where the myth came from.
EVIDENCE: Intel Core i7 4770k clocked at 4.8GHz runs at 93°C load (high-end air cooling)


AMD FX-8320 clocked at 4.8GHz runs at 55°C load (low-end Corsair H60 water cooling)




Similar Performance Thermal Comparison

At stock 3.7GHz, the 4770k runs at 78°C max load on an NZXT Havik 140 (many people here agree that to get stock-4770k performance out of an FX-8xxx, you'd have to overclock the FX to the neighborhood of 4.8GHz--so this is essentially comparing similar performance instead of clock speed):


which is directly comparable to the H60 that I use on my 8320 according to this:


So my 8320 at 4.8GHz more or less equals the performance of the 4770k at stock, and it runs at 55°C max load versus the 4770k's 78°C max load (both were tested with LinX AVX) with directly comparable cooling solutions--again, Intel runs hotter, on both a clock-for-clock basis and a similar-performance basis.




MYTH: AMD is dramatically slower than Intel in game performance
FACT: AMD frequently falls behind Intel in gaming benchmarks, that is true, but never so far that a game becomes unplayable on AMD--even in the worst cases, AMD maintains more-than-playable frame rates.
EVIDENCE: Intel Core i7 4770k runs Civ5 @ 1440p Max Settings (Radeon 7970) at 85fps


AMD FX-8350 runs Civ5 @ 1440p Max Settings (Radeon 7970) at 71fps


A difference of 14fps, and both are well north of the desired 60fps threshold.

Intel Core i7 4770k runs Crysis 2 (DX11) at 1920x1200 Max Settings at 97fps


AMD FX-8350 runs Crysis 2 (DX11) at 1920x1200 Max settings at 85fps


A difference of 12fps and, again, both are well north of 60fps.

There are some rare instances, such as Skyrim, which is heavily dependent upon single-core performance, where the performance delta between the two is much wider, but even in those instances AMD puts out more-than-playable numbers:
Intel Core i7 3770k runs Skyrim @ 1080p Ultra Settings (Radeon 7970) at 107fps


AMD FX-8350 runs Skyrim @ 1080p Ultra Settings (Radeon 7970) at 70fps


A big difference of 37fps, but both are able to maintain above 60fps.



MYTH: AMD will bottleneck a multi-GPU setup
FACT: AMD FX and Intel i5/i7 have more than enough power to push frames to a multi-GPU configuration
EVIDENCE: Intel i7 3770k with SLI GTX 680s puts out 162fps in Battlefield 3 Ultra 1080p


AMD FX-8350 with SLI GTX 680s puts out 150fps in Battlefield 3 Ultra 1080p


A difference of 12fps and both are way more than you'd need for a smooth, responsive gameplay experience.

Intel i7 3770k with Crossfire 7970s puts out 77fps in Battlefield 3 Ultra 1080p


AMD FX-8350 with Crossfire 7970s puts out 75fps in Battlefield 3 Ultra 1080p


A difference of a mere 2fps, both above 60fps.



Those are the three biggest myths that have been bugging me. There are more, but I feel better having cleared these up.

I'll leave you with some basic, no-nonsense facts about AMD and Intel performance:
FACT: Intel has better single-threaded/single-core performance than AMD
FACT: AMD has multi-threaded/multi-core performance just as good as, and sometimes better than, Intel's
FACT: AMD FX draws more power than Intel i5/i7
FACT: The bottom line is that both AMD FX and Intel i5/i7 are fantastic CPUs that are more than capable for even the most demanding gaming scenarios--Intel is the all-around speed king, but AMD is no slouch and is frequently right there with Intel or not very far behind.

So enough with the Intel vs. AMD infighting, they aren't that different after all and neither will let you down when it comes to gaming :)
 
The points are not what bother me. It's the means to the conclusions that is deceiving, and thus annoying, to me. The glance line was a joke. I saw him say he wasn't cherry-picking, and I was looking forward to a well-thought-out, well-executed argument. But that post was nothing but cherry-picking.
 
44 GFLOPS @ 4.8GHz on that FX-8350. Am I the only one who actually saw that number? Wow, just wow, how disappointing. An i5-3570k @ 4.6GHz pulls nearly 118 GFLOPS. I'm all for keeping an open mind, but that is clear cut. I hope that was pre-SP1 on Win7 (SP1 is what added AVX support to Windows 7).
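For anyone who wants to sanity-check those LinX figures, here's a rough back-of-envelope sketch. The per-cycle throughput values and the pre-SP1 fallback below are assumptions on my part, not measurements from this thread:

```python
# Theoretical double-precision peak = cores x clock x DP FLOPs per cycle;
# Linpack (LinX) typically achieves a large fraction of that peak.

def peak_gflops(cores, clock_ghz, dp_flops_per_cycle):
    return cores * clock_ghz * dp_flops_per_cycle

# i5-3570k @ 4.6GHz: 4 cores, assumed 8 DP FLOPs/cycle with AVX
ivy = peak_gflops(4, 4.6, 8)                 # ~147 GFLOPS
print(f"3570k peak ~{ivy:.0f}, measured 118 -> {118 / ivy:.0%} of peak")

# FX-8350 @ 4.8GHz: 4 shared FPU modules; without AVX (pre-SP1 Windows 7
# can't save AVX state) assume an SSE2 path at ~4 DP FLOPs/cycle
fx = peak_gflops(4, 4.8, 4)                  # ~77 GFLOPS
print(f"8350 SSE2 peak ~{fx:.0f}, measured 44 -> {44 / fx:.0%} of peak")
```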

I also don't think that a cool-running CPU is necessarily an indicator of overall raw computing power. In an Intel vs. Intel comparison, a cool-running Sandy Bridge @ 4.9GHz gets stomped on by a very hot 4.6GHz Ivy any day of the week.
 



You do know that floating-point operations are a minority of the code executed by your computer? GFLOPS figures are just an indicator of the total FPU performance of a particular chip. When you're actually running code, things change radically: scalar (integer) performance is what's important, because the vast majority of code comes down to integer logic, compares, and base operations. x86 itself doesn't actually have any floating-point capability; that was introduced with the x87 instruction set and belonged to a coprocessor. Eventually they integrated that coprocessor into the design, but the way it executes is extremely different--it even has its own register stack.

The SIMD instructions that were added later (MMX, SSE, AVX, XOP) were added to the FPU coprocessor component. The Intel CPU has four 256-bit FPUs that can each run three separate types of instruction (the pipelines aren't shared); AMD has eight 128-bit FPUs that can bond in pairs to form four 256-bit FPUs, each of which can execute a single instruction. So while in general the Intel CPU has about double the total FPU throughput, in reality most code will never utilize it, as it has to be optimized to use those extra pipelines.

To top it off, your GPU is an order of magnitude better at SIMD code than the CPU's FPU is. GPUs are giant vector coprocessors--or, in the case of GCN, programmable hybrid scalar/vector coprocessors. So the only time you should be seeing SIMD FPU instructions is when you need something done really quickly and setting up a GPGPU/OpenCL/CUDA session would take too long and isn't efficient.
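A tiny illustration of that last point--the wide FPU pipes only matter when code is actually written (or compiled) to use them. This is just a sketch in Python/NumPy (my choice of tools, nothing from this thread); the scalar loop is slow mostly from interpreter overhead, but the principle is the one described above: the same arithmetic only reaches the SIMD units through the vectorized path.

```python
# Same element-wise multiply, written scalar vs. vectorized. Only the
# vectorized form hands the work to loops that can use wide SIMD units.
import timeit
import numpy as np

a = np.random.rand(200_000)
b = np.random.rand(200_000)

def scalar_mul():
    out = [0.0] * len(a)
    for i in range(len(a)):       # one element at a time: scalar code path
        out[i] = a[i] * b[i]
    return out

def vector_mul():
    return a * b                  # NumPy dispatches to SIMD-capable kernels

print("scalar:", timeit.timeit(scalar_mul, number=1))
print("vector:", timeit.timeit(vector_mul, number=1))
```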
 
Thanks to RGD and Anthony for going full fanboy on us. As for the rest, I've been saying for years that despite AMD having room to improve, it's hardly at the point where you can't get by in a game.

For the OP: I was running BF4 multiplayer on an i3 2120 for a while, and while HT helps with FPS gains, it does have an impact on actual in-game performance. I would be happy to put an FX-4000 part against an i3 with HT and monitor performance spikes caused by latency; while the i3 may have higher FPS, I am dead certain it also has the more atrocious lag spikes, which is the reason I went to an i5. I think HT is not sufficient to act as a "like core" alternative for a demanding game like BF4; you may get away with it in a port like Skyrim.

I would be happy for someone to shed light on this: take an i3 with HT versus an FX-4 and an FX-6 and compare multiplayer core-load spikes in an alpha title like BF4.



 
It is not Microsoft's job to make a program run on more cores; that is squarely the application developer's job.

CPU manufacturers could help by doing what GPUs do: having multiple cores share the same scheduler. I'm certain it will happen, but not any time soon, unless someone goes with a completely new approach to CPU architecture.

Hyper-threading is not meant to be another core; it lets two threads run simultaneously on a single core.

Also, why would you compare them in an alpha-stage title? Unless I have misunderstood something...
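For what it's worth, a minimal sketch of how that looks to the OS--logical processors versus physical cores. It assumes the third-party psutil package is installed (my assumption, not something from this thread):

```python
# With Hyper-Threading the OS sees two logical processors per physical core,
# but each pair shares one core's execution resources.
import psutil

physical = psutil.cpu_count(logical=False)  # real cores
logical = psutil.cpu_count(logical=True)    # hardware threads the scheduler sees

print(f"{physical} physical cores, {logical} logical processors")
if physical and logical and logical > physical:
    print("SMT/Hyper-Threading is enabled")
else:
    print("no SMT detected")
```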
 
I was referring to the comparison of a dual core with HT to quad- and hex-core CPUs in a demanding title like BF4, in regards to actual game experience and not just FPS. Going from an i3 to an i5 I noticed a significant difference in gaming experience, notably spikes, and this is where the comparison comes in: to determine whether HT is completely overwhelmed by the load BF4 puts on it, and whether an FX-4 or FX-6 actually gives a better gaming experience.
 
I do believe a Haswell i3 should be better than an FX-4xx0, because both will be loaded pretty quickly, and a strong branch predictor and a well-organized cache can be critical.

A Haswell i3 against the FX-6xx0, I don't really know.
 
The Sandy and Ivy IMC and cache are already significantly faster and not far off Haswell, certainly faster than anything out of AMD's R&D, yet BF4's multiplayer, particularly the 64-man maps, is brutal with massive hardware spikes. I would also like to see Mantle with an i3, to see whether BF4 is just brutal on Intel dual cores or whether Mantle is the salvation of the dual core.

My reasons are simple: if an i3 4130 can legitimately handle a game like BF4, it may then earn my recommendation as a budget gaming system for a demanding title like that. I have had Athlon FM2 parts and an A10 6800K with discrete graphics, and BF4 is still fine on those systems.
 


What did I do? All I asked was why compare the temps on a per-clock basis instead of by performance. And the response was "so no, it would not be better to compare temperature with identical performance levels." That kind of answer sits oddly next to "I didn't cherry-pick any of the following benchmarks to prove my point."
 
A point that has me using an AMD system. If they weren't around, what do you think the Intel pricing table would look like?
Competition drives prices down. For someone like me who has a rather limited budget, it means I get solid gaming performance at a price point I can afford. For those for whom a few hundred extra dollars on a system is not a big deal, there's Intel to give them that extra performance.

This is a good thing for all of us. AMD APUs are getting quite good now as well, and that means a web-surfer/YouTube/movie-watcher PC for those with less demanding computing needs is cheap like borscht.

The consistency of socket compatibility meant I could stuff my 975BE in a Sabertooth board and use it at acceptable levels until I can afford to upgrade. If I had to change motherboard and CPU at the same time, I would not have been able to follow an upgrade path to a decent gaming system. Intel's upgrade path for the last several years has meant changing both at the same time.

All of which is good for the market in general, and good for us folks who like to build our own toys.
 
Thanks to RGD and Anthony for going full fanboy on us. As for the rest, I've been saying for years that despite AMD having room to improve, it's hardly at the point where you can't get by in a game.

Lol, I even said the FX-8000 series are good CPUs for the money and I accept them as contenders. Yet I'm going ''full fanboy''. The OP cherry-picked an i7-4770k @ 4.7GHz and 1.3V on air cooling from a motherboard benchmark (one you can only download as a PDF--not even a hardware review site) and held it up against an 8320 moderately OC'd from 4GHz to 4.8GHz on water cooling, with no other information provided. All that to show that, clock for clock, Intel runs hotter. Am I the only one who sees that as ridiculous?

EDIT: BTW, when I say motherboard benchmark, I mean that they were showing that the motherboard they offered could overclock the i7 higher and more stably than competing motherboards. So the goal was to push the i7 to its thermal limits, while the goal of the OC'd 8320 was to show that it ran cooler than the i7 clock for clock: two completely different tests performed to accomplish two completely different goals, held together in his argument. Cherry-picking.
 
It wasn't a flyer for a motherboard; it was from a review of a motherboard in Custom PC--they overclocked the 4770k on the motherboard they were reviewing. They didn't say what cooler they used, but since it's Custom PC I can almost guarantee it wasn't the stock cooler. It's very much a real-world example of the thermals one could expect; I don't understand what your problem with it is.

Also, the whole purpose was to dispel the myth that Intel is leagues better than AMD when it comes to gaming--yes, the Intel i7 wipes the floor with AMD in other applications, but gaming-wise it's not THAT much better, and I'm sick of people perpetuating this myth.

And no, I didn't cherry-pick benchmarks to prove my point--otherwise I'd have picked benchmarks where the two are closer and not included the Skyrim benchmark...did you even read the post?

The i7's are the speed king, no doubt. The point of the post was to dispel the notion that AMD can't keep pace in the gaming realm, when they absolutely can--this isn't a "AMD is better than Intel" post, in fact it exists solely to quell those kind of views.

I'm with ewok here...I don't think you even read the damn thing.
 
If the clock-for-clock example isn't doing it for you then how about we compare similar performance like rgd suggested?

At stock 3.7GHz, the 4770k runs at 78°C max load on an NZXT Havik 140 (many people here agree that to get stock-4770k performance out of an FX-8xxx, you'd have to overclock the FX to the neighborhood of 4.8GHz--so this is essentially comparing similar performance instead of clock speed):

which is directly comparable to the H60 that I use on my 8320 according to this:


So my 8320 at 4.8GHz more or less equals the performance of the 4770k at stock, and it runs at 55°C max load versus the 4770k's 78°C max load with directly comparable cooling solutions--again, Intel runs hotter, on both a clock-for-clock basis and a similar-performance basis.
 
Why would you choose a motherboard review in which the goal was to overclock an i7-4770k to its limits to show that the motherboard was better for overclocking? And then why choose a screenshot of some random 8320 @ 4.8GHz on water cooling to compare it to? I don't care about the rest of the post or the point you're trying to prove; I don't understand how you don't see this as cherry-picking. I can't make my point any clearer, so I'm just going to agree to disagree with you.
And if I can offer some small advice: next time you want to show clock-for-clock heat or noise or performance or anything, use reviews and tests that are actually benchmarking for those results--not one random sample of a water-cooled 8320 and another random sample of an i7 OC'd to the roof to show off a motherboard. Fair is fair: same motherboard, same test bed, same cooling.
 
Well, thanks for trying to make an addendum, I suppose. But I'm starting to really question how you do your comparisons. That pic of the Corsair H60 result you showed me is from a review done on an overclocked i5-2500k. I understand that you're using it to show that the NZXT Havik 140 is close in performance to the H60, and that you use the H60.

But to me your logic is less than elegant:
If the NZXT Havik 140 is within ~1°C of a Corsair H60 on an overclocked i5-2500k, and the stock i7-4770k hits 78°C on the Havik 140, then the i7-4770k must also hit 78°C (±1°C) on the Corsair H60. You use the H60 on an 8320 OC'd to 4.8GHz and are getting 55°C; therefore your 8320 runs 23°C cooler under a stress test than an i7-4770k.
I see what you're doing, and I don't agree with it. Check this link: by the same logic applied above, I can show that with the Gamer Storm Assassin (within ~1°C of an H60) a stock i7-4770k under a stress test runs at 57°C.

http://tpucdn.com/reviews/Deepcool/Assassin/images/CPU_stock_typical_a.gif

Now, the cooler I chose in reference to your pic is within ~1°C of the H60 you use, but it's 1°C in my argument's favor this time. That translates to a 57°C stress-test result--still 2°C hotter than your 55°C run, but the delta is now so small it could easily be attributed to other factors like ambient temperature. Do you see now why you shouldn't make comparisons like this?
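To make the inference we're arguing about explicit, here's a sketch using only the temperatures quoted in this exchange (the error-bar framing is my own):

```python
# Transitive cooler comparison: temperatures quoted in this thread, framing mine.
havik_vs_h60 = 1.0    # degC gap between the coolers, measured on an i5-2500k
i7_on_havik = 78.0    # stock 4770k on the Havik 140
fx_on_h60 = 55.0      # 8320 @ 4.8GHz on the H60

# Inferred 4770k-on-H60 temperature, carrying the cooler gap as an error bar:
print(f"estimated 4770k on H60: {i7_on_havik} +/- {havik_vs_h60} degC")
print(f"implied gap vs the 8320: {i7_on_havik - fx_on_h60} degC")

# Counter-example: a different cooler, also within ~1 degC of the H60,
# showed a stock 4770k at 57 degC -- the same logic then gives a 2 degC gap,
# small enough to disappear into ambient-temperature differences.
print(f"counter-example gap: {57.0 - fx_on_h60} degC")
```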
 
It's not exact, no, but it does give you reliable ballpark figures.

And if we compare multiple benchmarks, we see that the performance delta between the two coolers stays about the same regardless of platform. Here's an i7-920 benchmark where we see a similar delta:
http://www.neoseeker.com/Articles/Hardware/Reviews/NZXT_Havik_140/4.html

And an AMD heatsink comparison that shows a 2°C delta between the two on the same testbed:
http://www.frostytech.com/articleview.cfm?articleid=2705&page=5

And an Intel heatsink comparison that shows a 1°C delta on the same testbed:
http://www.frostytech.com/articleview.cfm?articleid=2705&page=6

It's not science and was never meant to be, but it is a moderately effective comparison.
 
The thing is, the more heat the CPU generates, the more heat it will potentially dissipate:

"...higher temperatures, if they're tolerated by the equipment, are actually desirable as they result in better heat dissipation overall (as the difference with ambient temperature, and thus amount of heat that can be transferred, is higher)"

The purpose of that quote isn't to prove that the i7 running hotter is a good thing; it's to show that the more heat a CPU generates, the more heat the HSF can potentially dissipate. So when you show temperature results from i5s, i7-9xxs, etc., you're just showing that those CPUs don't dissipate as much heat as an i7-4770k would with the same cooler.
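To put rough numbers on that: the heat a cooler can move scales roughly with the CPU-to-ambient temperature difference. The ambient and thermal-resistance figures below are illustrative assumptions, not from any test in this thread:

```python
# Heat pushed through a cooler scales with the CPU-to-ambient delta:
# Q = (T_cpu - T_ambient) / R_theta. Both constants below are assumed.
AMBIENT = 25.0   # degC, assumed room temperature
R_THETA = 0.25   # degC per watt, assumed cooler thermal resistance

def dissipated_watts(cpu_temp_c):
    return (cpu_temp_c - AMBIENT) / R_THETA

for temp in (55.0, 78.0):
    print(f"{temp} degC CPU -> ~{dissipated_watts(temp):.0f} W through the cooler")
# 55 degC -> ~120 W; 78 degC -> ~212 W. Equal coolers and a hotter reading
# can simply mean the hotter chip is pushing more heat through the cooler.
```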
 
The comparison of AMD vs. Intel is quite informative, and one can surely gain some beneficial knowledge from it. Such a fine thread it is.
 
Thank you Elenadavid for understanding. This thread was my attempt at correcting misconceptions and showing that either CPU is a fantastic choice when it comes to gaming--not to laud one brand over another.

Both are great, but there were misconceptions afoot trying to say otherwise that needed to be deconstructed and corrected.

If anyone came away from this thread thinking it was pushing one brand over the other, they missed the point entirely.
 


Comparison is great. I understand your point: get what brand works for you. Now NVIDIA and AMD need to be compared, because I read frequently that NVIDIA is better because of PhysX. Now that Mantle has been released, I think a comparison between games supporting PhysX and games supporting Mantle would be interesting.



 
