Dispelling myths surrounding AMD vs. Intel


s4in7

EDIT: Some people pointed out that comparing clock-for-clock thermals instead of similar-performance thermals wasn't a great comparison, so I've included a similar-performance thermal comparison immediately following the existing clock-for-clock one.

I've seen too much false information in regards to AMD vs. Intel flying around here lately, so let's see if we can't put to bed some of the myths.

I didn't cherry-pick any of the following benchmarks to prove my point, and although performance differs between the two from benchmark to benchmark, I selected benchmarks indicative of the gaming landscape as it stands right now.



MYTH: AMD runs hotter than Intel
FACT: On a per-clock basis, AMD actually runs cooler than Intel, BUT it does draw more power, which I guess is where the myth came from.
EVIDENCE: Intel Core i7 4770k clocked at 4.8GHz runs at 93°C load (high-end air cooling)


AMD FX-8320 clocked at 4.8GHz runs at 55°C load (low-end Corsair H60 water cooling)




Similar Performance Thermal Comparison

At stock 3.7GHz, the 4770k runs at 78°C max load on an NZXT Havik 140 (lots of people here agree that to get stock-4770k performance out of an FX-8xxx, you'd have to overclock the FX to somewhere around 4.8GHz--so this is essentially comparing similar performance instead of clock speed). The Havik 140 is directly comparable to the H60 that I use on my 8320, according to this:
[cooler performance comparison chart]


So my 8320 at 4.8GHz more or less equals the performance of the 4770k at stock, and it runs at 55°C max load versus the 4770k's 78°C max load (both were tested with LinX with AVX) using directly comparable cooling solutions--again, Intel runs hotter, on both a clock-for-clock basis and a similar-performance basis.




MYTH: AMD is dramatically slower than Intel in game performance
FACT: AMD frequently falls behind Intel in gaming benchmarks, that much is true, but never so far that a game becomes unplayable on AMD--even in the worst cases, AMD maintains more-than-playable frame rates.
EVIDENCE: Intel Core i7 4770k runs Civ5 @ 1440p Max Settings (Radeon 7970) at 85fps


AMD FX-8350 runs Civ5 @ 1440p Max Settings (Radeon 7970) at 71fps


A difference of 14fps, and both are well north of the desired 60fps threshold.

Intel Core i7 4770k runs Crysis 2 (DX11) at 1920x1200 Max Settings at 97fps


AMD FX-8350 runs Crysis 2 (DX11) at 1920x1200 Max Settings at 85fps


A difference of 12fps and, again, both are well north of 60fps.

There are some rare instances, such as Skyrim, which is heavily dependent upon single-core performance, where the performance delta between the two is much wider, but even in those instances AMD puts out more-than-playable numbers:
Intel Core i7 3770k runs Skyrim @ 1080p Ultra Settings (Radeon 7970) at 107fps


AMD FX-8350 runs Skyrim @ 1080p Ultra Settings (Radeon 7970) at 70fps


A big difference of 37fps, but both are able to maintain above 60fps.



MYTH: AMD will bottleneck a multi-GPU setup
FACT: AMD FX and Intel i5/i7 have more than enough power to push frames to a multi-GPU configuration
EVIDENCE: Intel i7 3770k with SLI GTX 680s puts out 162fps in Battlefield 3 Ultra 1080p


AMD FX-8350 with SLI GTX 680s puts out 150fps in Battlefield 3 Ultra 1080p


A difference of 12fps, and both put out way more than you'd need for a smooth, responsive gameplay experience.

Intel i7 3770k with Crossfire 7970s puts out 77fps in Battlefield 3 Ultra 1080p


AMD FX-8350 with Crossfire 7970s puts out 75fps in Battlefield 3 Ultra 1080p


A difference of a mere 2fps, both above 60fps.



Those are the three biggest myths that have been bugging me. There are more, but I feel better having cleared these up.

I'll leave you with some basic, no-nonsense facts about AMD and Intel performance:
FACT: Intel has better single-threaded/single-core performance than AMD
FACT: AMD has multi-threaded/multi-core performance just as good as, and sometimes better than, Intel's
FACT: AMD FX draws more power than Intel i5/i7
FACT: The bottom line is that both AMD FX and Intel i5/i7 are fantastic CPUs that are more than capable of handling even the most demanding gaming scenarios--Intel is the all-around speed king, but AMD is no slouch and is frequently right there with Intel or not very far behind.

So enough with the Intel vs. AMD infighting; they aren't that different after all, and neither will let you down when it comes to gaming :)
 


I am sorry, we're getting a bit OT, but my point was that I want as many fps as I can get, to make my CPU last longer and get closer to higher Hz.

It's a misconception that people can't see past 60fps. Our eyes don't see in fps. You really should try a 120Hz monitor; even on the Windows desktop it's noticeably smoother than 60Hz. Getting 150fps @ 120Hz is not for bragging, it's for fragging. So much less input lag in twitch shooters.
 
The i3 is a fantastic little CPU--no doubt! You'd really only see better performance from a 6300 in highly multi-threaded applications, and even then the i3 still punches above its weight.

I have no experience with HT and its benefits/drawbacks, so I can't help ya there, sorry!
 
In some situations disabling hyper-threading can provide slightly better single-threaded performance, but the margin is small, and it's not recommended, as 9.5 times out of 10 you'll get better performance with hyper-threading enabled.
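If you want to check whether hyper-threading is actually enabled on a given box, comparing physical and logical core counts is a quick way to do it. A minimal sketch using the third-party psutil library (assuming it's installed):

```python
import psutil

# Physical cores vs. logical processors: if the logical count is higher,
# hyper-threading (or another form of SMT) is enabled.
physical = psutil.cpu_count(logical=False)
logical = psutil.cpu_count(logical=True)

print(f"Physical cores: {physical}, logical processors: {logical}")
print("HT enabled" if logical > physical else "HT disabled or unsupported")
```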
 
Sorry, but this is my last post on the benefits of higher-than-60fps. Try this little test on your 60Hz panel: move your mouse quickly left and right on your desktop. Can you see multiple mouse pointers? That's the 60Hz you're seeing--some proof that your eyes don't see in fps. Now scale that up to your whole screen in an FPS. I'll let you think on that.
 


I agree 100%, and well said. :)

I think the OP and my comment, though, are only directed at the minority of inexperienced techies. Most people who have enough experience with the various brands and tech from the past few decades know how right you are, and also how competitors quite often leapfrog each other.

And I'm sure we'd agree that simply being a fan of one or the other doesn't automatically make you a fanboy. There's much more to it, and I did not mean to imply such a label, but rather just to make an observation. 😉
 


No actually, it doesn't. Whether stock/stock or OC/OC, they are both so very close the difference is barely noticeable.

Hey, S4IN7, maybe add this as another myth with some stock and OC figures? 😉
 


I know this is a bit off-topic...but that whole thing about input lag being tied to FPS...yeah, another myth there.

First off, it should be 'response time' and not 'input lag'. Keyboard, mouse, and controller are your inputs. The monitor is an output device that you respond to, as are speakers in the case of audio.

Believe it or not, the fastest an average fighter pilot responds to visual input is in the 200ms range. That's 1/5 of a second. It doesn't matter if you see 12 frames in that time or 24; the difference between initial render and the brain registering it is 1/120 of a second on one monitor and 1/60 of a second on the other. Your response time is never going to be fast enough to benefit from that one extra frame at the beginning of the action you need to respond to (roughly an 8ms difference between the first frame at 60Hz vs. 120Hz).
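To put rough numbers on that (a back-of-the-envelope sketch; the 200ms figure is the reaction time mentioned above):

```python
# Frame interval in milliseconds at a given refresh rate
def frame_time_ms(hz):
    return 1000.0 / hz

t60 = frame_time_ms(60)    # ~16.7 ms per frame
t120 = frame_time_ms(120)  # ~8.3 ms per frame

reaction_ms = 200.0  # rough visual reaction time of a trained pilot

# Worst-case head start a 120Hz panel gives you over a 60Hz one
head_start = t60 - t120    # ~8.3 ms

print(f"60Hz frame: {t60:.1f} ms, 120Hz frame: {t120:.1f} ms")
print(f"Max head start: {head_start:.1f} ms, "
      f"about {head_start / reaction_ms:.0%} of the reaction time")
```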

The player is actually the slowest link in that chain. 😉

The value of higher FPS is more about the aesthetic experience than competitiveness. It is definitely much smoother and takes a lot of strain off the eyes, but it's not going to make you a better gamer.

Trivia: We respond MUCH faster to audio than video stimuli.
 
What bothers me is people who refuse to look at these facts, and will always have one reason why one is better than the other. Sure, this car gets twice the gas mileage, but it has flat tires and no air conditioning. But it has twice the gas mileage, so it's better!
 
MYTH or FACT?
Intel processors are more efficient, with a greater performance/TDP ratio.

Not that this is all that important to gamers or power users, but the trend of the market is performance per watt, and ARM is dominating that field. Intel has a better chance than AMD of competing with ARM.
 


Intel microprocessors are substantially more efficient in terms of aggregate performance per watt. This is backed up by tons of benchmarks, but there's a bit more to it than that.

ARM dominates the embedded systems market for certain, but they're not in a position to scale the ARM architecture up to meet the x86 architecture in terms of raw performance per clock, whereas Intel is in a position to scale the x86 architecture down to meet ARM's power efficiency.

One of the greatest strengths of Intel's x86 ISA is that it uses a CISC frontend to feed a RISC backend. This is the mechanism that allows a modern i7-4770k to run code written for MS-DOS in the mid 1980s. Heck, it can still run MS-DOS if the firmware retains BIOS compatibility. It's also the same mechanism that allows for an incredibly efficient DRAM and cache architecture that's necessary to keep a powerful microprocessor busy. Core for core and clock for clock even the best ARM architecture can't keep up in performance with an architecture such as Haswell as doing so would require significantly reworking the ARM architecture in a way that compromises its power efficiency. Several vendors such as Apple, Samsung, and Qualcomm have made some attempts to do so, and have met with some success, but Intel has a seemingly bottomless amount of R&D funds from which to draw.
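If it helps to visualize the decode step, here's a toy model (strings standing in for binary encodings; real decoders are vastly more complex): one CISC read-modify-write instruction gets cracked into several RISC-like micro-ops for the backend.

```python
# Toy illustration of CISC-to-micro-op cracking -- not real hardware.
DECODE_TABLE = {
    # x86 'add [mem], reg' is one instruction but three operations:
    "ADD [mem], reg": ["LOAD  tmp   <- [mem]",
                       "ADD   tmp   <- tmp + reg",
                       "STORE [mem] <- tmp"],
    # A register-to-register add is already RISC-like: one micro-op.
    "ADD reg1, reg2": ["ADD reg1 <- reg1 + reg2"],
}

def decode(instruction):
    """Crack a CISC instruction into the micro-ops the backend executes."""
    return DECODE_TABLE.get(instruction, ["<not modeled>"])

for uop in decode("ADD [mem], reg"):
    print(uop)
```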

The Von Neumann bottleneck is far more apparent on AMD's memory hungry APUs than it is on Intel's desktop microprocessors, and if some smartphone benchmarks are to be believed the low power DDR memory used is in some cases a severe bottleneck to ARM based SoCs.
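As for performance per watt, the arithmetic itself is trivial; the hard part is choosing a representative score. A sketch using the published TDPs (84W for the 4770k, 125W for the FX-8350) with placeholder scores--substitute results from your benchmark of choice:

```python
def perf_per_watt(score, tdp_watts):
    """Aggregate benchmark score divided by rated TDP."""
    return score / tdp_watts

# TDPs are the published figures; the scores are placeholders, NOT
# real benchmark results -- plug in numbers from a real benchmark.
cpus = {
    "i7-4770k": {"score": 1000, "tdp": 84},
    "FX-8350":  {"score": 1000, "tdp": 125},
}

for name, c in cpus.items():
    print(f"{name}: {perf_per_watt(c['score'], c['tdp']):.1f} points/watt")
```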
 




Actually, yes, it does improve your game. Just like having positional sound, it gives you a pretty great advantage.

Also 60fps is a pretty crap standard, and has been inadequate as a measure of gaming performance for a long time.

You can have stutterfest 60fps and you can have butter smooth 60fps.

That's why a lot of sites are using 99th percentile and such nowadays.
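For anyone curious, the 99th-percentile idea is easy to reproduce from a frame-time log. A rough sketch with made-up numbers (mostly smooth 16.7ms frames plus a few stutters):

```python
import statistics

# Made-up frame-time log in milliseconds: the average looks fine,
# but the worst 1% of frames tells the stutter story.
frame_times_ms = [16.7] * 95 + [50.0, 45.0, 60.0, 55.0, 48.0]

avg_fps = 1000.0 / statistics.mean(frame_times_ms)

# 99th-percentile frame time: the level the worst 1% of frames exceed.
p99 = statistics.quantiles(frame_times_ms, n=100)[98]

print(f"Average: {avg_fps:.0f} fps")
print(f"99th-percentile frame time: {p99:.1f} ms (~{1000.0 / p99:.0f} fps)")
```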
 


That is very intriguing. Thanks for that :)
 


I agree. I can also see it this way: if they don't need tires or air conditioning, it is still a good fit.

The one-size-fits-all outlook hyped by many 'fanboys' on both sides is a recipe for failure. That's what really bugs me personally.

People need to know their apps and games, how many cores they can use, and how well they will perform on each platform. That knowledge will lead to a smart purchase decision, regardless of preference. :)
 
Yeah, if you're commuting 100 miles to work, don't get an electric car with a battery that will only take you 90 miles. Get what will work for you. Research what games you'll be playing, figure out your budget, and play with a couple of different builds; maybe if you go AMD you can get a more powerful graphics card, or maybe you find a 4960x with motherboard on sale for $300 (highly unrealistic, sure, but if you found that deal, you know you'd take it). Don't base your needs on what other people say.

And to people suggesting things for other people: do not just say GET THIS, IT IS BETTER NO MATTER WHAT IN EVERY SITUATION. Take a look at what they want, what they need, and their budget. If you're not capable of keeping your bias out of it, you're not going to be helpful. Give them benchmarks and results, and let them decide for themselves; just give them the information they need to make an informed, educated decision that they won't regret.
 


I do agree with you to a point, but isn't it nicer to have some more in reserve? Why get the electric car that does the 100-mile commute when you can get the one that does 120 miles, gives you a chance to do something else after work, and may even last that little bit longer if the roads change and the journey becomes longer? Your needs may change, and you need to plan for some extra, not just for today. You can go too far with this, though: I think I might have two cars in the future (GPU), so I should get a house with a double garage to store them in now (PSU). But what if I later decide I don't want that second car because my new car is more efficient and faster? I'd have overspent on my house when I didn't need to.

Most people will offer suggestions and be prejudiced in some way. Some prefer Intel and some AMD, and sometimes it's hard for those asking the question to get a balanced argument because of this. At least by going to a forum you'll get more than one opinion, and that should help keep things from being skewed too much by personal preference. I have a bias towards Gigabyte mobos, as I have not had a bad one...yet. Asus has been problematic for me, but some swear by them. If you were getting me to decide between two similar-spec boards from those two companies, I would pick the Gigabyte, yet I dare say they would both be suitable.
 


No, you're wrong. And I have examples.

http://graphics.stanford.edu/~mdfisher/GPUView.html

[GPUView capture: one second of World of Warcraft thread activity]


Two threads (WoW, and the primary DX driver) jumping around like mad. Other games show the same behavior; you can see some more examples at the link.

All those times you see games using 3-4 cores in Task Manager? It's almost always two threads doing it; they're simply jumping between cores as the scheduler puts other processes on the cores they were running on, so instead of seeing a core loadout of something like 80-60-0-0, you see 60-40-30-10 instead.
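You can watch this happen yourself while a game is running. A minimal sketch with the psutil library (assuming it's installed) that samples per-core load once a second:

```python
import psutil

# Sample per-core utilization once per second for ten seconds. With a
# two-threaded game running you'll typically see the load smeared across
# several cores as the scheduler bounces the threads around.
for _ in range(10):
    loads = psutil.cpu_percent(interval=1, percpu=True)
    print(" ".join(f"{load:5.1f}" for load in loads))
```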
 
One of the greatest strengths of Intel's x86 ISA is that it uses a CISC frontend to feed a RISC backend. This is the mechanism that allows a modern i7-4770k to run code written for MS-DOS in the mid 1980s. Heck, it can still run MS-DOS if the firmware retains BIOS compatibility

Which has nothing to do with DOS compatibility. The reason you can still boot to DOS if you wanted to is that the old 16-bit Real Mode processing resources are still in place. (You could cut down a lot of the chip if Intel started to rip out the 'obsolete' portions of x86.)
 
"So with Windows 7, there is a concerted effort to assign cores to an execution pipeline, such as a core. Now, threads get sent back to the same core where the last threads for that application executed, so an application more closely sticks with one core. This lets idle cores shut down and makes for smarter processor affinity. Instead of throwing threads at every core, it just goes to one."
 


That actually goes some way to explaining why I need to turn core parking off on my i7 when playing BF4. A core would be parked, the game would chuck a thread at it, and the game would lag (spike) as the core woke up. With core parking disabled, all was smooth. Not sure if the latest BF4 patches cured it, but I don't want to try, as I don't leave my machine on 24/7 anyway.
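If you want to play with affinity directly rather than touching core parking, psutil can pin a process to specific cores (the PID below is a placeholder for whatever game process you'd target):

```python
import psutil

# Pin a running process to core 0 to mimic strict affinity.
# PID 1234 is a placeholder -- substitute your game's process ID.
p = psutil.Process(1234)
print("Before:", p.cpu_affinity())  # e.g. [0, 1, 2, 3, 4, 5, 6, 7]
p.cpu_affinity([0])                 # restrict the process to core 0
print("After: ", p.cpu_affinity())  # [0]
```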
 


Real Mode support is part of the ISA front end. CISC x86 instructions from the target mode are decoded into platform-specific RISC micro-ops because native CISC execution is incredibly hard to run concurrently. Backend execution is done the same way regardless of whether it's running code in Real Mode, Protected Mode, Compatibility Mode, or Long Mode. The transistor budget for Real Mode is absolutely negligible.
 
CPUs have a lot of power these days, more than what most people need. That's why an i3 or an FX-4xxx is perfectly fine for your average user, even one who does a good bit of gaming (just have a decent video card).

People get too involved with crunching numbers and lose track of reality. You can't blame them--not everyone has the money to throw away on buying rigs made from contemporary components of both manufacturers just to compare.
 
Lol, this whole post is just ridiculous. Honestly, if you own an FX-8xxx, you don't have to stoop to these levels to justify the CPU purchase. The 8xxx CPUs are very nice for the price. With that being said, if you can afford an i7-4770k but decide to buy an FX-8350 for reasons unrelated to money, then you're plain ill-informed. It's that simple.
What you've done here, OP, is reduce the range and scope of the benchmarking analysis to the point that all the benefits of a faster, more efficient CPU are wiped out. You've effectively eliminated the ceiling and isolated the baseline in an attempt to make AMD look better or equal. Sorry, that doesn't fly for me. You compared an i7-4770k overclocked and overvolted on 'high-end air' that isn't named to an 8320 on 'low-end water cooling'. No other details of the tests are provided, and there's no link to the tests themselves. I Googled the image and found it was attached to a flyer advertising a motherboard. Lmao, you're using a motherboard advertisement's benchmark to show the i7-4770k's thermal limits? And don't bother mentioning that an i7-4770k OC'd to 4.7GHz @ 1.3V will totally destroy an 8320 @ 4.8GHz--we don't overclock for performance gains, right, we just do it to push up against the heat ceiling.

I know people who own 8350s and 8320s, and they swear by those CPUs. And I accept that they are good CPUs. Are they as good as the mid/top-end i7s? No. Period. End of story. You literally have to move the goalposts several times just to reach the conclusion that they are equal--which is itself entirely false.