Is the AMD FX 8350 good for gaming



Then why does AMD hold the world record for highest overclock, by more than 1 GHz? If Intel chips were physically built better, surely they would hold that record...
 


Yes, it's a good CPU. Ivy Bridge is generally a little faster because of the smaller manufacturing process, higher transistor count, and better memory performance. Bottom line, it is good enough as a gaming CPU and is the better solution price-for-performance-wise. If it saves you enough to get a better GPU, then it's a good option; games are mostly GPU-bound at this point, so your best option is to weigh price. You would want to pair it with some decent RAM for optimal performance. 1866 is the sweet spot, and it's only a little higher in price than 1600 MHz kits. For overclocking, it's also less of a jump from 1866 up to 2000 than it is from 1600 up to 1800 or 1866, so 1866 kits are good for overclocking.
 


The FX can beat an i7 3770K when its eight-core architecture is fully used. Unfortunately, most current games, and even Windows 7's scheduler, are not designed to use the FX fully, so the AMD chip performs a bit poorly in comparative tests against Intel chips. With a different OS, or with future games (e.g. those ported from the upcoming PS4), the FX will show its true potential (e.g. beating the i7 in some tests).

In any case, Tom's Hardware has a gaming CPU hierarchy chart here

http://www.tomshardware.co.uk/gaming-cpu-review-overclock,review-32653-5.html

that shows that the FX is good enough for gaming even with the current unoptimized (pro-Intel) software.

Note that although the i5 3570K and the i7 3770K are currently one tier above the FX, the difference is not really noticeable, as the guide says: "I don’t recommend upgrading your CPU unless the potential replacement is at least three tiers higher. Otherwise, the upgrade is somewhat parallel and you may not notice a worthwhile difference in game performance."
 


That's probably about right. It really doesn't pay to be a fanboy of either side; the 8350 is just a little under the 3570K, so there will be a pretty negligible difference, plus performance really comes down to how much work you want to put into overclocking. The i7 3770K isn't worth the money for just a gaming rig; the i7 3820 is a better choice for only a little more once you factor in chipset pricing, and there's a lot more room for expansion.
 
You know what a whole lot of people don't think about... The FX 8350 comes factory clocked at 4.0 GHz, while a 3570K comes factory clocked at 3.4 GHz, and both are overclockable to around 4.8 GHz. So you can overclock an 8350 by +0.8 GHz and a 3570K by +1.4 GHz; the i5 has a whole 0.6 GHz more room to overclock. Now imagine both processors overclocked: the 3570K ends up with a 0.6 GHz bigger jump over the AMD 8350, since the 8350 started out at 4.0 GHz and the 3570K started out at 3.4 GHz, and stock is where they were benchmarked. Even the multithreaded benchmarks will look a whole lot different after both processors are overclocked to 4.8 GHz. And no one can argue otherwise; it's just a fact that the 3570K gains 0.6 GHz more over its stock benchmarks. And if you don't understand that, well, IDK what else I can say.
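To put rough numbers on that (purely back-of-the-envelope, using the stock and 4.8 GHz figures quoted above, not official turbo specs):

```python
# Back-of-the-envelope headroom comparison using the clocks quoted above.
chips = {
    "FX-8350":  {"stock": 4.0, "oc": 4.8},
    "i5-3570K": {"stock": 3.4, "oc": 4.8},
}

for name, c in chips.items():
    headroom = c["oc"] - c["stock"]
    gain_pct = 100.0 * headroom / c["stock"]
    # How much faster each chip runs at 4.8 GHz relative to its own stock benchmarks.
    print(f"{name}: +{headroom:.1f} GHz headroom (~{gain_pct:.0f}% over stock)")
```

So the i5 picks up roughly 41% over its stock numbers versus roughly 20% for the FX, which is the whole point about stock-clock benchmarks underselling the 3570K.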

I do believe the 8350 has a place as one of the top processors right now. But given the information I just provided, I think overclockers are going to prefer a 3570k. But it highly depends on WHAT you do, and HOW much you do of it.

And let me get rid of this rumor. Having more cores doesn't make you better at multitasking. It's RAM that lets you multitask because all of your programs are stored in RAM. An i5 3570k's 4 cores are more than capable of running 20 programs at once as long as you have enough RAM to store the programs in. So let me just mark that as BS so people will stop with that crap.

BTW, I've seen benchmarks of the 8350 vs 3570k overclocked, and let me just say it tips the scales in the i5's favor. Hell, my 3570k overclocked to 4.6Ghz is faster than a 3770k at even video editing and rendering. So the 3570k is no slouch at anything.
 


Not to start an argument...

But...overclock.net has a FX-8350 owner's club that includes at least 4 individuals with OC above 5.0 GHz, and there is one guy running a super high end rig that sits currently OC'ed to 5.6 GHz with a custom loop water cooler and 2 radiators + push/pull fans...

Link or it didn't happen right?

http://www.overclock.net/t/1318995/official-fx-8320-fx-8350-vishera-owners-club

The first post has a table of all the settings/mobo/system information/OS/voltage/core temps...

Some of them only OC'ed to 4.2-4.4...some of them are WAY OC'ed...
 


Yeah! When the FX holds the world overclocking record at 8.176 GHz with its eight cores working, and most overclockers obtain 5.0-5.5 GHz with ease, the only hope for the i5 is to compare both at 4.8 GHz, giving the Intel chip an extra margin...



Like this one from an overclockers forum?

It is a great chip; better than the i5-3570K in fact.

Or do you mean as the next ones?

Overclocked to 4.8GHz the FX-8350’s score rose to 8.25 points, again out-pacing the i5-3570K, even when overclocked to 5GHz.

This also saw the FX-8350 better the i5-3570K at stock and when overclocked, again an indication of the Piledriver architecture’s superior multi-threaded performance.



I have also seen some of those biased benchmarks. For instance, I saw a comparison of a 3570K @ 4.8 GHz and an 8350 @ 4.8 GHz (I wonder if this is where you got the 4.8 GHz figure). Well, the 'review' conclusion was:

My conclusion is that they are very similar in performance at these speeds, i5 wins Some, FX8350 wins some, out of 6 game benchmarks 3 single GPU benches and 3 crossfire benches , the fx8350 won 3 and the i5 won 3, the I5 wins at 3d mark 11 but the 8350 was even at Heaven.

And it would be a nice tie, except that the benchmark was biased. First, the comparison used software that favoured Intel chips (compiler and architectural optimization for Intel chips) and, second,

Both rigs are running corsair vengeance ram 16 GB at 1600mhz

Have you noticed that your i5 has a stock memory speed of 1600 MHz, but the FX's is 1866 MHz? By using 1600 MHz memory the reviewer is once again favouring the Intel chip, by bottlenecking the AMD chip...

What next? Use a fast SSD on the Intel side and some 5400 rpm notebook HDD on the AMD side, then run SYSmark?
 


What a fan you are. I've seen plenty of benchmarks, but not the one you mentioned. Just the simple fact that you think benchmarking sites use "Intel" software makes you biased. There's no such thing as favorable software. The software favors Intel because Intel is better. I guess in order for the FX to win, the FX has to use "unbiased" software. But in your view it would be "FX" software.

Typically an 8350 clocks to 4.8 GHz. TYPICALLY. You think there aren't 3570Ks that clock up to 7-point-whatever GHz?

I love how you use quotes like they are facts, when I know I can find twice as many quotes highly favoring Intel.

And again, why don't people use Tom's Hardware as their benchmark and review site? Clearly Tom's isn't biased or in bed with any company. And Tom's clearly says the i5 3570K is the better CPU. Even with heavily threaded tasks the 3570K comes close. BTW, my 3570K @ 4.8 GHz scores an 8.1 in Cinebench, so I'm sure one clocked at 5.0 GHz would score higher than an 8.25.

Oh, and how about single-threaded performance? ANY single-threaded performance, or programs that don't use 5+ cores, like SO many programs and 99% of games. Intel absolutely destroys, and I mean destroys, the 8350. It's not even funny how badly AMD gets beaten right there. It's just that bad.

And you think 1866 MHz memory really is going to make a CPU perform better? I just upgraded from 1600 to 2133 and I saw a 3% performance increase. 3%! You think that's going to tip the scales in AMD's favor? Get REAL. WAKE UP! And don't you know an Intel can run 2800 MHz memory just as well as AMD? Why does it matter what stock frequency is suggested? You're making points that don't even make sense.

Let me just say that an FX 8350 is a good processor. But for someone to think it somehow beats an i5 in overall computing is straight up wrong.

Edit: I love how AMD fans have to find the "right" kind of software to run to get results comparable to an Intel. 'Cause I guess anything that uses fewer than 5 cores is favoring Intel. If you gotta have the "right" software for your CPU to perform properly, then something's wrong.

Find me an AMD CPU that can do a 7-second SuperPi 1M score and THEN we can talk. That's what I score, and an 8350 can't even get down to 10 seconds; it's more like 11 or 12.
 
Whatever... You're both fanboys trying to proclaim who has the bigger pair of balls. Just let the thread die... Insulting each other solves nothing.

Also, the FX 8350 OC'd performs on par with a stock i5 or i7 and it costs less money. You can't go wrong with either CPU. Stop being morons.

And if you plan to do live streaming... the FX 8350 is better than the i7 at that. Stop trashing a CPU you don't own.
 


I am not going to argue, but I will chip in here...

What he is talking about are protocols...

A software setup where data is fed in a mostly serial manner favors Intel, because Intel's instruction execution protocol for their CPUs is 90% serial data... which means Intel chips break down a serial stream of data faster (single-threaded performance). AMD's instruction execution protocol for their CPUs is set up to run parallel streams of data (heavily threaded performance), and most software out right now is not designed to feed data to the CPU in this manner. So, data being fed serially to a CPU designed to run parallel streams of executions is inefficient, and favors the one designed for that type of data streaming.

For example...

Picture you're at Wal-Mart (or wherever), and there are 8 checkout lanes open... the first lane has a line a mile long, and they will only allow 4 of the other 7 lanes to have a line 1 person long. It doesn't make any sense, right? For starters, they're not even using all of the lanes available, and the ones they are using aren't being utilized efficiently.

That's what's happening inside an AMD architecture FX8350 with current software...

With Intel chips right now... it's more like the line at Best Buy... where you have 1 line a mile long, but the person at the front has 4 different cashiers to go to when they arrive at the front of the line.

So, having 1 line a mile long doesn't slow them down, they're designed that way...

However, once information is fed in a parallel manner to the CPU...AMD will have all 8 lanes at Wal-Mart open for business and the lines will be distributed equally with people (instructions for the CPU), but Intel will still have the Best buy type line with 4 people running a cash register...except that now there will be 4 or even 8 lines forming into that one line, which makes things slow down because they are not designed to execute like that.

I hope the analogy makes this very complicated architecture discussion make sense.
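If it helps, here is the same checkout-lane idea as a toy Python sketch. Purely illustrative (it models tasks and worker processes, not how either CPU actually schedules instructions internally):

```python
# Toy model of the checkout-lane analogy: the same eight "customers" (tasks)
# handled by one lane (serial) versus eight lanes (parallel).
# Illustrative only -- task-level parallelism, not real CPU instruction scheduling.
import time
from concurrent.futures import ProcessPoolExecutor

def customer(items: int) -> int:
    # Simulate CPU-bound work: "scanning" the customer's items.
    total = 0
    for i in range(items * 200_000):
        total += i % 7
    return total

if __name__ == "__main__":
    carts = [10] * 8  # eight equally loaded customers

    start = time.perf_counter()
    for cart in carts:  # one open lane: everyone waits in a single serial line
        customer(cart)
    print(f"1 lane : {time.perf_counter() - start:.2f} s")

    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=8) as lanes:  # all eight lanes open
        list(lanes.map(customer, carts))
    print(f"8 lanes: {time.perf_counter() - start:.2f} s")
```

On a chip with enough real cores, the eight-lane version finishes in a fraction of the time; push the same work through a single lane and core count stops mattering, which is roughly what current mostly-serial game code does to the FX.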
 


The existence of biased benchmarks artificially favouring Intel chips is well-known

http://news.softpedia.com/news/AMD-Nvidia-and-VIA-Quit-BAPCo-Call-SYSmark-2012-Biased-207412.shtml

http://semiaccurate.com/2011/06/20/nvidia-amd-and-via-quit-bapco-over-sysmark-2012/



No. If Intel chips were always as good as you believe, then Intel would not need to introduce the "Cripple AMD" function to deliberately cripple performance on AMD machines:

However, the Intel CPU dispatcher does not only check which instruction set is supported by the CPU, it also checks the vendor ID string ... If the vendor string says 'GenuineIntel' then it uses the optimal code path. If the CPU is not from Intel then, in most cases, it will run the slowest possible version of the code, even if the CPU is fully compatible with a better version.

Analysis shows that Intel has been cheating (and still does), exaggerating the real performance advantage of their chips by up to 47% in some cases!!

http://www.jimstonefreelance.com/vanilla/discussion/297/intels-cripple-amd-function-in-their-compilers/p1

http://www.osnews.com/story/22683/Intel_Forced_to_Remove_quot_Cripple_AMD_quot_Function_from_Compiler_

The recent settlement with AMD requires that Intel not include any "Artificial Performance Impairment" in any Intel product. However, I cannot find any change in the new Intel compiler version that reflects this requirement. And this explains why, when you avoid the Intel compiler, you find a big boost on AMD chips. Everyone knows that AMD runs much faster on Linux, for instance.
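To make the quoted dispatcher behaviour concrete, here is a tiny mock-up of the logic being described (a hypothetical illustration only, not actual Intel compiler code):

```python
# Mock-up of the vendor-string dispatch described above. NOT real compiler code;
# it just shows the idea: choose a code path from the vendor ID rather than
# from the features the CPU actually supports.
def pick_code_path(vendor_id: str, supports_avx: bool) -> str:
    if vendor_id == "GenuineIntel":
        return "avx_path" if supports_avx else "sse2_path"
    # Any non-Intel vendor falls through to the slowest path,
    # even when the CPU is fully capable of the faster one.
    return "baseline_path"

# An FX-8350 reports "AuthenticAMD" and does support AVX, yet gets the slow path:
print(pick_code_path("AuthenticAMD", supports_avx=True))  # -> baseline_path
print(pick_code_path("GenuineIntel", supports_avx=True))  # -> avx_path
```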



The FX overclocks above 4.8 GHz with ease and holds the world record beyond 8 GHz, with all eight cores working.



At the time of writing, the i5's best-ever score in Cinebench R11.5 is a mere 8.55. However, the FX 8350 has a world record of 11.78, whereas the i7-3770K has a record of 12.72. Once again the 'cheaper' FX manages to obtain about 37% more performance than your 'expensive' i5.
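For what it's worth, that percentage is just the ratio of the two record scores quoted above:

```python
# Ratio of the Cinebench R11.5 record scores cited in this thread.
fx_8350, i5_3570k = 11.78, 8.55
print(f"FX 8350 is {100 * (fx_8350 / i5_3570k - 1):.1f}% ahead")  # ~37.8%
```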



Yes. Intel performance per core is usually better, but the future of computing is not in faster cores but in multi-core designs and parallelism, because of well-known physical limits on single-core designs.

When software really uses AMD's multi-core architectural advantages, the FX-8350 destroys an i7-3770K (some examples below).



First, just because your i5 is not sensitive to memory speed does not imply other chips are not. AMD designs are more sensitive to memory speed, especially the APUs.

Second, you are overclocking from stock 1600 to 2133, but my point was about biased reviews underclocking the FX from stock 1866 to 1600.

The i7-3770K gains a mere 3.8% going from its stock 1600 to 2133 in Skyrim, but it loses 5-16% when underclocking from its stock 1600 to 1333 and 1066... and Skyrim is not especially sensitive to memory speed.

The biased review used stock memory speed on the Intel chips and underclocked the AMD chips. If this is unimportant, why didn't they do the review the other way around, with the AMD chips at stock speeds and the Intel chips underclocked?



Except it has been recently shown that the FX-8350 performs as well as the i7-3770K and even beats the i7 in some tests. Some examples

http://openbenchmarking.org/embed.php?i=1210227-RA-AMDFX835085&sha=0c966a4&p=2

http://openbenchmarking.org/embed.php?i=1210227-RA-AMDFX835085&sha=b799806&p=2

http://openbenchmarking.org/embed.php?i=1210227-RA-AMDFX835085&sha=faec63f&p=2

http://openbenchmarking.org/embed.php?i=1210227-RA-AMDFX835085&sha=293f200&p=2

http://openbenchmarking.org/embed.php?i=1210227-RA-AMDFX835085&sha=f236ffc&p=2

http://openbenchmarking.org/embed.php?i=1210227-RA-AMDFX835085&sha=6dc05fb&p=2

And before you get shocked by seeing an AMD FX beating an Intel i7, let me add that the real potential of the Piledriver architecture is still not being used in those tests, because the bdver2 compiler target is not yet exploiting the BMI, TBM, F16C, and FMA3 capabilities added over the original AMD Bulldozer processors.
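If you want to see whether your own chip even advertises those extensions, you can check the CPU flags on Linux. A quick sketch (flag names as they appear in /proc/cpuinfo; note FMA3 shows up simply as "fma"):

```python
# Check whether the CPU advertises the Piledriver-era extensions mentioned above.
# Linux-only sketch: parses the "flags" line of /proc/cpuinfo.
wanted = {"bmi1": "BMI", "tbm": "TBM", "f16c": "F16C", "fma": "FMA3"}

cpu_flags = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            cpu_flags = set(line.split(":", 1)[1].split())
            break

for flag, name in wanted.items():
    print(f"{name:5} ({flag}): {'yes' if flag in cpu_flags else 'no'}")
```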



From Wikipedia:

Super PI is single threaded, so its relevance as a measure of performance in the current era of multi-core processors is diminishing quickly.

Why not use something more realistic such as wPrime 32M? The world record for the FX-8350 is 4 sec 406 ms; the record for the i5-2750K is 4 sec 437 ms. That is a tie, isn't it?
 


That was an excellent way to explain that. I see you know your stuff. I'm not trying to rag on AMD either; I totally get what you're saying. I'm just saying that just because software is already designed the way you say it is, that doesn't mean the software purposely favors Intel. An Intel is just built better for a lot of the software that's already out. To me, you should build your CPU based on the software that is currently available. I know AMD is looking forward on this, but to me, if you always have really powerful cores on a per-core basis, you can't go wrong. Because no matter how many cores you have or how heavily threaded something is, all of your cores will be strong, so it doesn't really matter. That's why I think AMD should start trying to compete with Intel again on a per-core basis.

AMD's are not weak by any means with single threaded tests. It's just that Intel usually wins by about 50%. Which is a fairly large margin. And I think it'll be a good while before all games and programs can use 8+ cores. It's not that far away. I think about 5 years probably, but by then AMD and Intel will have 20 core processors. And whoever has the most powerful cores will end up being the better cpu.

This is why I love my 3570k. I never use more than 4 cores. So for what I'm doing, I have the most powerful CPU on the market. With no other CPU will I get my work done faster than with my 3570k. And even if I do decide to do editing and rendering, my 3570k is only about 20% slower than an 8350 or 3770k and my 30% overclock makes up for that, making me actually faster in rendering than a stock 8350 or 3770k.

This is why it really depends on what you do as to what processor you buy.

 




8350rocks, let me ask you a question. I hear a lot about how an 8350 is better at multitasking because it has 8 cores. But isn't multitasking really dependent on the amount of RAM you have? For instance, I have 16GB of 2133 G.Skill Ares RAM and a 3570K. I can load 16GB of programs into my RAM and my 3570K will run all of them just fine. I've never had a problem using many programs at once. I use multiple programs all the time.

So does having an 8 core 8350 with 16GB of RAM make you better at multitasking because all those programs have 8 cores to use instead of only 4, meaning that each program should actually perform better? Or will my 4 cores cope with the many programs just as well as the 8350, and no matter how many programs I have they should perform the same? What if the programs are single threaded?

To me the only thing that makes for better multitasking is more RAM. I can see that maybe if you're rendering in one program and editing in a separate program, while surfing the internet, listening to music, downloading a movie, and playing a game while all that's happening, maybe the 8350 will handle that better because the programs can distribute themselves over 8 cores instead of 4. But maybe since the 3570K's 4 cores are so powerful they can handle that fine as well. But IDK, that's what I'm asking.

BTW, I don't really multitask that often anyway, so it won't matter much.
 
That's a great question man...

Think of it like this: when you're multitasking, RAM has an effect on the amount of multitasking you can do, though Windows negates this to some degree by putting a "page file" on your HDD that is dynamic in size. What that means is that Windows opens a file that acts like RAM and loads data from it when you don't have enough RAM to hold everything at once. RAM is faster, but the performance loss is only noticeable if you're running something extremely CPU/GPU heavy... like a game at 1440p or hardcore video encoding/rendering.

When you're multitasking, AMD protocols allow your background programs to form a serial line in front of an unused core...so that you're not tapping the resources your foreground program is using.

Now, say you were running 5 fairly intensive things at once...(let's say, streaming web videos, downloading multiple music files, and playing a web game...) Your i5-3570k would be able to execute 2-3 of those (depending on their resource needs) well...the others would be passed off to a virtual core in the background and would run at a considerably slower rate.

Doing the same thing on an AMD 8 core chip, since only 1 of those requires any FP calculations, you could literally tap 5 cores to do all the work simultaneously.
 


Yeah, I never knew that part. I knew what a pagefile was (I have mine set at 800 MB because I have 16GB of RAM; is it a good idea to have it set at only 800 MB? I have an SSD too, so my pagefile won't be that slow, but slower than RAM nonetheless), but I didn't know that if you only had 4 cores and foreground programs were using up all 4 cores, then the other programs would get loaded onto "virtual cores". I guess these virtual cores you're talking about aren't what Hyper-Threading is, because an i5 doesn't have that. I've never heard of this, but very interesting.

Where do you go to learn all these things about how computers work? I'd love to get more information so I can better understand what I'm talking about, give more accurate information, and learn for the sake of knowledge in general. I'm actually JUST now getting into this PC stuff; I've only known what a 3570K was since December 2012. Pretty much since I got this PC I've been full steam ahead learning everything I can. All this is fairly new to me but all extremely interesting. Since the end of December I've learned how to overclock like hell, I've learned how to build and set up a PC first hand, and I've also learned a lot about all the products on the market, whether it be CPUs, RAM, SSDs, HDDs, coolers, cases, fans, PSUs, GPUs, everything. So I'd like to see how much I can learn over the next year or so; maybe I can make a career out of it, no telling.

 


Good, but that is only a part of the whole picture.

The whole picture includes the existence of biased benchmarks favouring Intel chips. See the news about AMD, VIA, and Nvidia abandoning BAPCO, in my other answer.

And it also includes Intel's bad practices in its compiler. When the Intel compiler finds the 'GenuineIntel' label it uses the optimal code path, but when the CPU is not from Intel, the compiler generates the slowest possible version of the code, even if the non-Intel CPU is fully compatible with a better version. This explains why, when you use another compiler, you see a big boost in the performance of AMD chips. See links and further info in my other answer.
 


However it ends up working out is just how it is. Until the software changes, the results will stay the same. And I'm sure AMD knows all of this too; if they could figure out a way to make it better, I'm sure they would. But as it stands, the benchmark results are what we are left with. I'm sure Intel can find 100 reasons why different software doesn't perform even better on their chips. Blaming poor results on poorly written software is, IMO, a bad excuse for poor performance. Intel seems to find ways to make their chips perform better with different software. If that's what makes their chips perform so well, why should anyone care why? If AMD could do the same, I'm sure they would.

Do you get that point?

Edit: BTW, I know this compiler is probably not used in Tom's Hardware benchmarks or anyone else's. Software like Cinebench 11.5, SuperPi, PassMark 8, PCMark 7, 3DMark 11, Handbrake, 7-Zip, SiSoft Sandra, AIDA64, Geekbench, Novabench and so on... I'm sure that software doesn't "favor" Intel CPUs even though they win in those benchmarks. Why come up with an example that I have never heard of rather than one of the most famous benchmarks, which we know should be written correctly and objectively? You can't tell me software like PassMark 8 is written purposely to make Intel CPUs faster. That's just not true.

Again, to say the benchmarks are a problem is reading way too far into it, because in the end it doesn't matter: one CPU is faster than the other, and that's all you can actually see and all you should care about.

 


That's what hyperthreading is: it is essentially passing off background or foreground applications to a "virtual core", where the processor takes a fraction of clock time each cycle to run the threads dedicated to that virtual core. Now, as far as hardware goes, the i5 doesn't have any "virtual cores" in Intel speak... however, the 4 cores you have can divide clock time by percentage to execute the functions you're currently running (effectively the definition of CPU multitasking). This will tap resources you're using elsewhere, though it won't be a hugely noticeable difference in your foreground application's performance unless you're doing several CPU-intensive things at once. So for example... your foreground application may be using 80% of 4 cores, and your background applications may be using 20% of the clock time per cycle to run their functions, but at a highly reduced rate compared to what it would be if they were the primary program running.

Edit: To answer your question about pagefiles, an SSD really eliminates the difference in performance almost entirely, but 800 MB pagefile size is fine with your current RAM.



I have been around computers for a long time, and I read everything I can. Furthermore, I was a CECS (Computer Engineering) major in college before changing to game design. So I know how the internals work on a lot of the hardware... now, I am not always 100% up to speed on the current terminology (the jargon changes over time)... but I can explain it well.

Computers are interesting indeed if you have a technical mind. That's why it can be frustrating for me when people don't understand the vast amount of untapped potential in AMD chips right now. Most people don't understand what's going on in the background that explains why something with a 4.0 GHz stock clock cannot run some things as fast as something with a 3.4 GHz stock clock. It really has very little to do with the strength of the cores, because technically each AMD core is capable of running totally separate calculations from another, though floating point calculations are shared. More and more coding designs are putting floating point calculation onto the GPU anyway, so those will be less necessary in the future, and the CPU will become more and more responsible for the heavy lifting of processing data instructions and running integer calculations, which GPUs are not very good at by design.

Once the 2 are working in perfect harmony, technology will make another leap forward on the scale of the leap from x32 to x64 instructions, or essentially the leap made when the Athlon CPU came out and broke 1 GHz for the first time...

Have you ever noticed, we went from 66 MHz CPUs to 300 MHz CPUs in about 8-10 years then from 300 MHz to 1 GHz CPUs in a matter of ~24 months...but since then...in 12 years...we've only gone to 4.0 GHz? It has taken us 10x longer to increase proportionately the same amount as the last great leap! That next leap forward is about to arrive, we are on the cusp...it will be an interesting time again.
 
If I were you, I would buy the 8350. The i5 is a better choice now, but in the future when the new consoles come out and developers start optimizing games for multithreaded CPUs, an 8350 will perform better. However, it really shouldn't be a problem now unless you have a mid-to-low range graphics card like a 7770 or 650. Anything better and I would recommend you buy the 8350.
 


First, I have given several reproducible open benchmarks showing how the FX-8350 beats an i7-3770K using available software.

Second, the problem with the biased benchmarks is not related to "poorly written software" or bad programmers, but to cheated code. Read the link on why AMD, Nvidia, and VIA abandoned BAPCO with accusations of Intel bias.

Third, there are biased hardware reviews on the Internet which use all kind of tricks for reducing the performance of AMD chips. You can google it.

Fourth, it is true that you can find software that runs better on Intel chips, but how much of that software uses the Intel compiler? The Intel compiler cheats: when a non-Intel CPU is detected via the vendor ID string (anything other than 'GenuineIntel'), the slowest possible version of the code is generated, even if the non-Intel CPU is fully compatible with a better version. Read the link on the court case that AMD won against Intel.

When you use another compiler (one that does not cheat) you see the real performance of AMD chips.

Fifth, I already showed that the best Cinebench score for the FX is on par with the best score obtained by an i7-3770K and destroys your i5's best score by a wide margin. At the time of writing, PassMark reports the FX (score: 9159) as being on par with the i7 (score: 9637)

http://www.cpubenchmark.net/cpu_lookup.php?cpu=AMD+FX-8350+Eight-Core&id=1780

with both destroying your i5 (score: 7137).

Why you insist that your Intel i5 is faster than the AMD FX 8350 is a complete mystery to me, except when I consider that the FX offers i7 performance at a fraction of the price of your i5...
 


Precisely, this relates to the "physical limits" that I mentioned in a previous response to him. The future is not substantially higher clocks but multi-core designs and heterogeneous computing; two paths in which AMD is leading...

Therefore not only is the FX 8350 good enough for gaming by today's requirements, but its innovative eight-core architecture puts it at an advantage for the near future.
 
The point here is gaming, and to date the i5 3570K has more wins over the FX 8350 when comparing the CPUs alone.
There is no point in arguing which processor can do what in heavily multi-threaded rendering or encoding.
Just check out Skyrim, Civilization 5, Dawn of War, etc.; the i5 3570K has a good amount of lead.
Crysis 3 does well on the FX 8350 in the grass scenes, but the rest runs better on the i5 3570K. In Tom's test they were comparable.
For gaming, the i5 3570K to this date is still better than the FX 8350.
 


We are discussing the reasons that the benchmarks reflect what they do...not what the current benchmarks say...

Also, take a look at openbenchmarking.org...they run non intel compiler programs and games on both systems and show you what you're REALLY getting into...
 