AMD and Intel CPU Research Questions

Guest
I am saving $2000 to build my first gaming rig sometime between January and June of next year. I will use it mainly for gaming, but I will also use it to listen to FLAC files. I am not brand specific, because I am more focused on reliability. I have heard a lot of AMD vs Intel debates and am trying to avoid that in this thread. My previous desktop PC platforms for gaming used Intel's Pentium II, Pentium III, and Pentium 4. My previous Dell XPS laptop used the Intel Core 2 Duo and my current HP Pavilion laptop uses the Intel Core i7-2670QM. I have looked at both the AMD FX-8150 (3.60 GHz with a Turbo boost to 4.20 GHz) and the Intel Core i7-3770K (3.50 GHz with a Max Turbo of 3.90 GHz). Most people are saying Intel is the better choice. From my current perspective, the FX-8150 seems to be more powerful than the i7-3770K. Can someone explain to me what factors are involved in choosing either an AMD or Intel gaming CPU? How does one perform better than the other?
 
Since newer chips will be out by then, it's best to decide then. Given your requirements, there isn't any reason to get the AMD CPU since you'd mostly be gaming. Intel's CPUs are currently much better for gaming than AMD's. The AMD chips may be cheaper and slightly faster in some other benchmarks, but for gaming it's not worth it.
 
By then Intel's Haswell CPUs should be out. Not sure what the performance increase will be vs. Ivy Bridge, though. However, Ivy Bridge is more powerful than Phenom II / FX, and likely the upcoming Piledriver CPUs as well, unless Piledriver somehow increases performance by roughly 29% beyond Phenom II / FX to match the Ivy Bridge CPUs.

I suppose the main objective of Haswell is to decrease power consumption further, but I'm sure there will be at least a small performance increase (5%+ ??) over Ivy Bridge. More details about performance increase should come out by Q1 2013.

In terms of overall reliability, it is hard to say which is more reliable... and buying budget components can mean lower reliability, but buying expensive components does not guarantee the components will last forever. It's possible a $300 motherboard will fail in 1 year while a $75 motherboard may last you 5 years or more.

My oldest AMD CPU is the Athlon XP-M 2600 which I used to build a home theater PC back in 2003. It still works, but I will be throwing it out before the end of this month. My oldest Intel CPU is a Pentium M 1.5GHz in my IBM ThinkPad notebook which I bought back in 2003. It still works.

Generally speaking, the more heat a PC generates, the less reliable it can be. AMD CPUs consume a lot of power and can generate a lot of heat. Intel CPUs, on the other hand, consume less power and generally produce less heat. The exception is Ivy Bridge, since the thermal interface material under the CPU's heat spreader is actual paste instead of a solder-like (metallic) substance. Heat is not really a problem with an Ivy Bridge CPU unless you decide to overclock.

Not sure how much power AMD's upcoming Piledriver CPUs will consume, but hopefully they can lower it. Ivy Bridge CPUs are generally rated at 77 W TDP, while AMD CPUs are 125 W or 140 W TDP (depending on exactly which one you are talking about). So AMD CPUs can be rather power hungry and less powerful than their Intel counterparts. At least they cost less.
 


Errr, no. Benchmarks are what Intel has used for years to create the impression that they are better... how do you think you get 80% market share? It's funny how synthetics never correlate with real-life results, and to say the least it is not very close at all. Synthetics do, however, flatter Intel CPUs in the numbers; they are basically written for Intel.



Curious as to how that 29% number gets thrown around, almost as curious as those suggesting that Piledriver will only level with Bloomfield. Thubans basically did that, and overall Zambezi (FX-8xxx) is stronger; again, benchmarks don't correlate well with AMD real-world performance. Some changes, like an integrated memory controller that is way faster than Phenom II's, don't show up in the benches; they in fact go backwards. Synthetics are, as the word implies, artificial, manufactured. Suffice to say no Intel chip is 30% faster.
 
I never look at synthetics. Game benchmarks for game performance. How can this be "used to create the impression that they are better" if it is the game itself? The same goes for content creation/productivity benchmarks; they are the actual program showing real-world performance.
 


It's just an excuse for AMD fanboys to try to rationalize and justify buying inferior hardware. Next he'll be spouting off the ridiculous "Intel pays everyone off" conspiracy garbage.

To the OP: as everyone said, for gaming Intel is far above AMD in terms of sheer CPU power and performance. Benchmarks show that Bulldozer falls behind Intel CPUs in all but the most heavily threaded programs.
 

noob2222

And what if Intel helped to fund development of said games?

For $2000, how much are you putting into graphics?

http://www.tweaktown.com/articles/4438/core_i7_3960x_with_4_way_crossfirex_hd_6970_performance_analysis/index5.html

Would be nice if they included the i5 there.
 



It is very true that AMD, Nvidia, and Intel have game studio partners (in the case of Nvidia and Intel, many more), and it makes perfect sense for those partners to make games run more efficiently with the respective setups, but this is a fact overlooked through pure denialism, or forum trollism.

It is also no secret that Intel started benchmarketing; it's the easiest way to ensure you maintain overall majority market share. How subtle, then, to make your competitor's products appear worse than the older generation.

Using Thubans, FX, and SB every day, I can tell you that an FX chip is far more responsive than a Thuban, the IMC is significantly faster, and BF3 gives me around 10 FPS more. Odd, yes, considering the lunacy in the benchmarks submitted.
 


Oh, you mean kind of like how you're foolish enough to buy into AMD's 8-core scam?
 
http://techreport.com/review/23246/inside-the-second-gaming-performance-with-today-cpus

[Charts from the TechReport article: Skyrim average FPS, Skyrim 99th-percentile frame times, Skyrim time spent beyond 16.7 ms, overall FPS and 99th-percentile results across the tested games, and the price/performance scatter plot.]

As you probably expected, the Ivy Bridge-derived processors are near the top in overall gaming performance. Intel has made incremental improvements over the Sandy Bridge equivalents in each price range, from the i5-2400 to the i5-2500K and i7-2600K. The Core i5-3470 offers perhaps the best combination of price and performance on the plot, and the Core i5-3570K offers a little more speed for a bit more money. The value curve turns harsh from there, though. The i7-3770K doesn't offer much of an improvement over the 3570K, yet it costs over a hundred bucks more. The Core i7-3960X offers another minuscule gain over the 3770K, but the premium to get there is over $500.

Ivy Bridge moves the ball forward, but Intel made even more performance progress in the transition from the prior-generation Lynnfield 45-nm processors—such as the Core i5-760 and i7-875K—to the 32-nm Sandy Bridge chips. From Sandy to Ivy, some of the potential speed benefits of the die shrink were absorbed by the reduction of the desktop processor power envelope from 95W to 77W.

Sadly, with Bulldozer, AMD has moved in the opposite direction. The Phenom II X4 980, with four "Stars" cores at 3.7GHz, remains AMD's best gaming processor to date. The FX-8150 is slower than the Phenom II X6 1100T, and the FX-6200 trails the X4 980 by a pretty wide margin. Only the FX-4170 represents an improvement from one generation to the next, and it costs more than the Phenom II X4 850 that it outperforms. Meanwhile, all of the FX processors remain 125W parts.

We don't like pointing out AMD's struggles any more than many of you like reading about them. It's worth reiterating here that the FX processors aren't hopeless for gaming—they just perform similarly to mid-range Intel processors from two generations ago. If you want competence, they may suffice, but if you desire glassy smooth frame delivery, you'd best look elsewhere. Our sense is that AMD desperately needs to improve its per-thread performance—through IPC gains, higher clock speeds, or both—before they'll have a truly desirable CPU to offer PC gamers.

So that's where the "2 generations ago" comes from..
 
^ It's those "worst-case" lags that cause noticeable stuttering in the games tested. The human eye (particularly the rods responsible for black-and-white peripheral vision) is sensitive to movement, so excessive stuttering makes for p-p--po-po-poor gameplay, excuse my stutter :p..

BTW Scott Wasson has a good tech reputation across the web..
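
For anyone wondering how numbers like "99th percentile" and "time spent beyond 16.7 ms" are derived from a frame-time log, here is a minimal sketch in Python. It assumes nothing about TechReport's actual tooling; the frame times are made-up illustrative values.

```python
# Sketch of the frame-time metrics discussed above, computed from a plain
# list of per-frame render times in milliseconds (e.g. a Fraps-style log).

def frame_time_metrics(frame_times_ms, budget_ms=16.7):
    n = len(frame_times_ms)
    total_ms = sum(frame_times_ms)

    # Average FPS: frames rendered divided by total elapsed time in seconds.
    avg_fps = n / (total_ms / 1000.0)

    # 99th-percentile frame time: 99% of frames completed at least this fast.
    ordered = sorted(frame_times_ms)
    p99_ms = ordered[min(n - 1, int(0.99 * n))]

    # Time spent beyond the 60 Hz budget (16.7 ms): total extra time frames
    # took past the budget -- a rough proxy for visible stutter.
    beyond_ms = sum(t - budget_ms for t in frame_times_ms if t > budget_ms)

    return avg_fps, p99_ms, beyond_ms

# Example: 95 smooth frames plus 5 long spikes.
times = [15.0] * 95 + [40.0] * 5
fps, p99, beyond = frame_time_metrics(times)
print(f"avg {fps:.1f} FPS, 99th percentile {p99:.1f} ms, {beyond:.1f} ms beyond budget")
```

The point of the example: the average comes out above 60 FPS even though the five spike frames would be clearly felt, which is exactly why the 99th-percentile and "time beyond" metrics are worth looking at alongside average FPS.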
 


Generally speaking, Phenom II and FX are more or less the same. The FX can perform a little better in some specific benchmarks like Photoshop, video encoding, and 3D rendering. However, Phenom II is generally a little better in games than the FX. Phenom II more or less performs just as well as Intel's Core 2 Duo/Quad CPUs; therefore, the FX is also in the same boat as the Core 2 Duo/Quad. Again, there are some benchmarks in which the FX will outperform Intel's older CPUs, but the overall average performance is nearly the same.

Intel's Clarkdale/Nehalem CPU cores (1st gen Core i3/i5/i7 CPUs) are on average 10% faster than the Core 2 family. That means 10% faster than Phenom II and FX.

Intel Sandy Bridge CPUs are on average 12% faster than Clarkdale/Nehalem.

Intel Ivy Bridge CPUs are on average 5% faster than Sandy Bridge.

Phenom II = FX = Core 2 Family = 100%

Clarkdale/Nehalem = 10% faster than Core 2 Family = 100% * 1.1 = 110%

Sandy Bridge = 12% faster than Clarkdale/Nehalem = 110% * 1.12 = 123.2%

Ivy Bridge = 5% faster than Sandy Bridge = 123.2% * 1.05 = 129.36%

Ivy Bridge = roughly 29% faster than Phenom II / FX / Core 2 Duo/Quad (about 129% of their performance)
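
Spelled out as a quick calculation (the 10%/12%/5% generation-over-generation figures are the rough estimates above, not measured data):

```python
# Compound the rough per-generation uplifts quoted above.
# Baseline: Phenom II / FX / Core 2 family = 100%.
gains = [
    ("Clarkdale/Nehalem", 0.10),  # ~10% over the Core 2 family
    ("Sandy Bridge",      0.12),  # ~12% over Clarkdale/Nehalem
    ("Ivy Bridge",        0.05),  # ~5% over Sandy Bridge
]

level = 1.00  # Phenom II / FX / Core 2 baseline
for name, gain in gains:
    level *= 1.0 + gain
    print(f"{name}: {level:.1%} of the baseline")

# Final line prints about 129.4%, i.e. roughly 29% faster than
# Phenom II / FX -- where the "~29%" figure earlier in the thread comes from.
```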
 
I helped out a friend who had issues with his i7 930 build, so I donated a Crosshair V and an 1100T to him; the synthetics beat his overclocked 930. I really don't know where you get Phenom II and FX at Core 2 level, but anyway I am not really keen to find out either.

As for stutter spikes, the only two causes I have had were a) ping and b) GTX 560 Ti SLI, which stuttered a lot. Skyrim has a 60 Hz lock; you need to mod the game to break that lock, and most AMD processors with the right graphics card hit the 60 Hz barrier, so it's pointless.

2500K with a GTX 450 vs. 2500K with a 7970... if CPU dependence were the factor, then the results would be the same; the fact is games are GPU dependent.
 


And to think just a few years ago, AMD fans touted how much "smoother" K8 was than Conroe while gaming :p.. Now that the shoe is on the other foot, suddenly it's "pointless".

2500K with a GTX 450 vs. 2500K with a 7970... if CPU dependence were the factor, then the results would be the same; the fact is games are GPU dependent.

On the first page of the article it says they used the exact same setup, including identical GPUs, for each CPU tested, and no, the results are nowhere near the same.

BTW, the same article shows similar results when testing Batman: Arkham City and BF3, so no it's not just some peculiarity of Skyrim's internal VSync-type feature..
 

noob2222

Your numbers are a bit off: http://ixbtlabs.com/articles3/cpu/intel-ci7-123gen-p3.html

Ivy is only 12% over Nehalem (10% on the high end with HT).

The problem is that as software gets updated, it's not retested with the old hardware, because 1) it takes too long, or 2) they got rid of it already. Sure, at its release Nehalem was ~10% over Core 2, but what about after the software is updated on a Core 2 Quad?

This is where synthetic tests come in: they don't vary with updates because there isn't much to tweak, which is also the reason they don't reflect real application/game performance. There are no optimizations to change over time.
 


You mean fps or stuttering? AFAIK Scott Wasson's article is the only one investigating stuttering vs. CPU choice. While gameplay is good with 60+ fps average, smoothness is also a consideration, as lags and stutter can be distracting. Hopefully he'll have a later article with more CPUs and games tested..

From AT:

[AnandTech game benchmark chart: FX-8150 vs. Core i5-2500K.]


 
After running enough test runs on the 8150: at 3.6 GHz the IPC penalties are quite severe, compared to 4 GHz where, in Cinebench for example, it jumps over a full point above an i5 and just behind an i7. Considering the highest-end Zambezis were intended to be 2 billion transistors deep, running at 4 GHz with boost up to 4.5 GHz and with far-reaching overclocking potential, I will do the Nvidia thing and call Zambezi a "half Bulldozer"; when you look at how badly the specs were skimped down because GF never perfected the 32 nm process, it is very much a plan B.

So if we take an FX-8150 at 4 GHz and it gives results between an i5 and an i7, it is not half as bad as it is made out to be, though very far from what it was intended to be. Versus the Thubans it directly replaces:

1] IMC: sub-10 s in MaxxMem, admittedly with only 4 GB of DDR3-1333 (waiting to receive AMD-branded modules to test soon; this was all I had on me). That in itself is very impressive considering a 4 GHz 1100T does something like 22 s.

2] IPC: considering that a higher clock speed would have mitigated the loss of IPC (going from 3.6 to 4 GHz is over a full point in Cinebench and 200 marks more in 3DMark 11), I will honestly say nothing was gained or lost. The 1100T still remains a fantastic processor in light of all this; running my 1100T vs. my 8150, it seems to be the same thing on aggregate.

3] Power control and efficiency: a Thuban needs 1.4+ V to hold stable at 4 GHz and 1.280 V to hold stable at stock. The FX is far better on voltage stability; while it loses control at high overclocks, the FX is still the more efficient chip day to day in that regard. The FX runs 4.4 GHz stable at 1.315 V, versus 1.425 V for a 4 GHz 1100T.

Fast forward to Piledriver: the intended clock speeds achieved, the resonant clock mesh as intended, bolted-on instruction sets, and improvements to the front end and memory controller, notably latency fixes that should help with micro-stutters, but still well down on transistor count.

Overall, Zambezi is not as bad as it is made out to be, but not as good as intended... and most definitely far from the 20/30/50% numbers I have heard.

I have only encountered stuttering at extremely high graphics settings with the fastest GPUs on the market, and with multi-GPU setups as well; it starts to get noticeable at the GTX 670 / HD 7950 level.
 
^ Of course, SB and IB can be OC'd a bit as well. But an FX-8150 going from, what, 3.6 to 4.0 GHz is only about an 11% speed bump. If it shows a higher increase than that in multithreaded Cinebench, then it's probably due to the front end being better able to keep the pipes filled.

Just out of curiosity, what temps and Vcore does the 8150 generate at 4 GHz under Prime95 loads? We already know the power draw increases much more rapidly with OC frequency, at load and at idle, than for just about any other chip on the market, so that indicates a big spike in the Vcore needed to remain stable.