Is the AMD FX 8350 good for gaming



I hear this often. I would like to know how many people are running a 3570K, 3770K, or FX-8350 at full load 24/7 for years on end. Personally, I don't know anyone.
 


I know all of that (though most of it is false). And yes, they do have a world record for clock speed, but that actually isn't what matters most. What matters is code, which is what Intel excels at, and physical build quality. And Intel's onboard video actually does come into play. Most people do not know this, but the onboard Intel graphics assist wherever possible to make the computer run faster, for example with Windows Explorer. It helps with Windows Aero and means the GPU doesn't have to focus on it as much.

The other thing is that the power-consumption point is false. Many studies have shown that Intel's power consumption is lower. The reason is the physical build and the coding.

The code is built more efficiently and can run processes faster than anything else because of the way it's executed, so it doesn't need as much power to do it. So cool, you hold a world record for clock speed, but the factor that comes into play is a test of actually executing operations, e.g. SuperPi, which calculates up to 32 (or 64) million digits of pi over multiple loops. Then tell me who wins.

Also, the 8 cores are better for video editing, which is what that design is made for. Intel Hyper-Threading is what helps games. It's actually more efficient than 8 separate cores running individually. There are a few games out there that don't benefit from it (ArmA), but you can still just shut it off.

The physical point is false too. Yes, there were a couple of downgrades, but still, the overall quality of the physical components is much better. This is also where heat comes into play. From my study, I've found that AMD (and this is probably well known) tends to heat up a lot. Many studies have shown that AMD chips draw a score of watts or more extra and also generate more heat. Something that counteracts that: my friend (and I'm getting the same setup) has an ASUS Sabertooth motherboard with thermal plating and an i7-3770K with the stock fan, and running BF3 at max settings the fan never went above half speed and the chip went to 1°C, which is still dangerous but absolutely amazing.

With games, the CPU can never run a game on its own, and it depends on what kind of GPU you get. Now of course, AMD would obviously work best because they make CPUs and GPUs that can easily function with each other, which honestly is an unfair advantage. However, if you ran it with an Nvidia card it would be fairer. Then you have the games that best fit certain GPUs, and then the motherboards with certain specs, etc. Simply put, there are MANY factors that can affect the result. So I did some research and found that yes, AMD did do Great (notice the capital G) with many games; however, in comparison, most games run better on Nvidia cards (which is where this factor comes in). Furthermore, Intel CPUs work better with Nvidia cards than AMD does with them, and even an Intel CPU can do great with an AMD GPU. And after my studies (again), Intel came in first, winning in most games (about 80%, and these are the biggest games on the market). HOWEVER, AMD did put up a great fight, did fantastic in multiple categories, and came in runner-up behind Intel in others.

With that, the Intel and Nvidia setup actually outran most of the dual-AMD setups (CPU and GPU) and proves to be the better match. Now yes, it costs more money, but that physical aspect comes into play: Nvidia and Intel are much better with their physical components than AMD and deserve that extra pull on price.

EDIT: And please do not TL;DR me. I'm putting a lot of valuable information into this.

EDIT 2: I'm getting off for tonight, can't wait to see what "information" you'll have for me tomorrow. :)
 
1.) Intel uses bulk wafers... SOI > bulk... they only manage to make it work by using their proprietary tri-gate process.

This should explain the difference:

http://en.wikipedia.org/wiki/Silicon_on_insulator

AMD's wafers and internal components ARE physically higher quality than Intel's. That's why AMD products can take more voltage and overclock higher; overclockability is a direct indicator of build quality.

2.) Temps have nothing to do with quality... they're entirely down to voltage and resistance. Ohm's law (and Joule heating) says that the more current that passes through a circuit with resistance, the more heat is generated, in proportion to the power transferred across that resistance, and resistance itself depends on the length of the conductor.

You can read about ohms law here:

http://en.wikipedia.org/wiki/Ohms_law
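For the sake of illustration, here is a minimal sketch of the heating relationship being described. It is a generic Ohm's-law/Joule-heating calculation in Python, and the voltage and resistance values are made-up examples, not measurements from any CPU:

```python
# Joule heating: power dissipated as heat across a resistance.
# P = V^2 / R, equivalently P = I^2 * R. Example values only, not chip data.

def joule_heat_watts(voltage_v: float, resistance_ohm: float) -> float:
    """Heat dissipated (watts) for a given voltage across a given resistance."""
    return voltage_v ** 2 / resistance_ohm

# Raising the voltage on the same resistance increases the heat quadratically,
# which is why overvolting a chip warms it up so quickly.
for v in (1.2, 1.3, 1.4):
    print(f"{v:.1f} V -> {joule_heat_watts(v, 0.01):.1f} W dissipated")
```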

3.) Coding and build quality have nothing to do with power consumption; it has everything to do with the chip's operating voltage and power rating. If a chip is designed to operate at 125 W, it consumes more power than a chip that operates at 95 W. It's as simple as that. Many Intel chips have lower wattage ratings and lower TDPs.

4.) Coding has nothing to do with ILP, DLP, or TLP. Intel's approach is designed to take serial (single-file) instructions and break them down quickly; this is called SIMD. AMD also has SIMD capability, but it excels at executing multiple streams of data and instructions at once, far better than Intel as a matter of fact; this is called MIMD.

This explains SIMD:
http://en.wikipedia.org/wiki/SIMD

This explains MIMD:
http://en.wikipedia.org/wiki/MIMD

That explains the internal architecture differences between the 2 companies.
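As a very loose software analogy (this is just Python illustrating the two execution models, not what either CPU actually does in hardware):

```python
# SIMD: one instruction applied to many data elements at once (a vectorised op).
# MIMD: several independent instruction streams, each working on its own data.
import numpy as np
from multiprocessing import Pool

def simd_like(data: np.ndarray) -> np.ndarray:
    # Single instruction, multiple data: the same arithmetic hits every element.
    return data * 2 + 1

def independent_task(n: int) -> int:
    # Each worker process runs its own instruction stream on its own input.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    print(simd_like(np.arange(8)))
    with Pool(processes=4) as pool:  # multiple instructions, multiple data
        print(pool.map(independent_task, [10_000, 20_000, 30_000, 40_000]))
```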

5.) Games do not use Hyper-Threading... at all. There is literally zero benefit to HT in gaming. Meanwhile, AMD's integer cores are capable of being used to run calculations in gaming. See, the problem is that games use too much of a single Intel core for a "virtual core" to be able to operate at any feasible rate, because "virtual cores" are a background process and cannot operate efficiently without tapping necessary resources that the 4 real cores are using. AMD does not have this issue, because the 8 real cores can be operated independently without tapping resources from another core. Two cores share a floating-point unit, and though floating-point calculations are shared with the GPU, the CPU will perform some of those calculations under heavy load in games like Crysis 3. That's why AMD benches so well on games like that. Also, a CPU cannot read 1°C unless the ambient temperature is about 33-34°F, because it cannot be cooler than the ambient temperature unless you're using liquid nitrogen or liquid helium coolant. His temp sensor was wrong.

Info on HyperThreading:
http://en.wikipedia.org/wiki/Hyper-threading

Overall the performance history of hyper-threading is mixed. As one commentary on high performance computing notes:


Hyper-Threading can improve the performance of some MPI applications, but not all. Depending on the cluster configuration and, most importantly, the nature of the application running on the cluster, performance gains can vary or even be negative. The next step is to use performance tools to understand what areas contribute to performance gains and what areas contribute to performance degradation.[12]

Bold face type is mine.

Here is a list of applications supporting HT:

http://www.tomshardware.co.uk/hyper-threading-core-i7-980x,review-31842-3.html

Notice there are only about 10-12 games on the list? The rest are all applications for business, productivity, or media. HT is no real advantage in gaming.
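If you want to see for yourself how many of your "cores" are physical and how many are Hyper-Threaded logical ones, here is a quick sketch (it assumes the third-party psutil package is installed: pip install psutil):

```python
# Compare the physical core count with the logical count the OS reports.
import psutil

physical = psutil.cpu_count(logical=False)  # real cores only
logical = psutil.cpu_count(logical=True)    # includes HT/SMT "virtual" cores

print(f"physical cores: {physical}, logical cores: {logical}")
if physical and logical and logical > physical:
    print("Hyper-Threading/SMT appears to be enabled.")
```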

6.) AMD's "load sharing" software allows ANY CPU to take advantage of it. AMD, unlike Nvidia and Intel, does not participate in proprietary-only processes. That's why AMD is so much better on Linux than Intel/Nvidia. Plus, games run better based on who sponsors them and what developer's kit they use; some run better on AMD, others run better on Nvidia.

This talks about AMDs App Acceleration:
http://www.amd.com/us/products/technologies/amd-app/Pages/eyespeed.aspx

7.) Your "benchmarks" were not really well executed, and which AMD cards versus which Nvidia cards? A GTX 680 is not fair to run against an HD 7870, even an XT. You have to look at GPU benchmarks to compare the "equality" of your systems. If I put an HD 7990 in my system and benched it against a GTX 660 Ti, of course I would win, even with an i7-3930 in the Intel system, because the GPU would bottleneck the CPU.

Your assumptions are wrong.
 


Precisely: the overclocking world record was achieved because of the better build quality of the AMD chip.



I fail to see your point, for several reasons. Intel graphics are bad for most gaming titles. I am pretty sure that the OP will be buying a discrete card. AMD graphics (both integrated and discrete) also handle things such as Aero acceleration. Aero is not used when gaming full screen. Aero is disabled in W8. Aero does not even exist in other OSes.



SuperPi uses 25% of a desktop i5/i7 chip, but only 12.5% of an eight-core FX chip.



HT can increase, decrease, or leave performance unchanged. There are examples where enabling HT decreases the performance of some applications. Why? Because with HT enabled, a 4-core i7 performs like a 5-6 core chip, but it is not a real 5-6 core chip: there are only 4 physical cores, and the rest are only virtual.

It does not scale like a chip with 8 physical cores. That is why the FX-8350 can be 30-70% faster than an i7-3770K in apps that load all eight cores.
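The scaling point can be made concrete with Amdahl's law, which caps the speedup extra cores can give as a function of how much of the workload is actually parallel. This is a generic textbook formula, not a model of either specific chip, and the fractions below are arbitrary examples:

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n)
# p = fraction of the work that can run in parallel, n = number of cores.

def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

for p in (0.5, 0.9, 0.99):
    print(f"p={p:.2f}: 4 cores -> {amdahl_speedup(p, 4):.2f}x, "
          f"8 cores -> {amdahl_speedup(p, 8):.2f}x")
```

Only when the parallel fraction is high do the extra four cores pay off, which matches the point that the benefit shows up in apps that actually load all eight cores.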



It is precisely the other way around. The better physical build is the reason why AMD's thermal tolerances are higher than Intel's. It can run hotter without problems.



How can you say this after telling us above about the supposed advantages of Intel graphics on Intel chips? And why would it be fairer to run AMD with Nvidia, when Nvidia GPUs are usually optimized for Intel CPUs? {*}


{*} I understand why they are collaborating so closely. Intel cannot compete with AMD on graphics and needs Nvidia. Nvidia cannot make x86 chips and needs Intel to compete with AMD...
 
AMD thermal tolerances are lower than Intel's. Tjmax for the 8350 is 90°C; Ivy Bridge is 100-105°C. If there is no discrete GPU present (which was exactly my point, because it will cost money), the i7 can use OpenCL acceleration while the FX cannot.

[Attached image: photoshop.png]


That's a sizable advantage without the need for a discrete GPU. You also aren't considering that for strictly CPU tasks the HD 4000 is perfectly sufficient, while the 8350 will require a dGPU, which raises costs and power consumption (but only when a GPU is not really needed for the system and the HD 4000 is sufficient; if the work requires a GPU then the advantage is lost).
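For anyone curious whether their own setup exposes the iGPU as an OpenCL device for this kind of acceleration, here is a rough sketch (it assumes the third-party pyopencl package and a working OpenCL runtime are installed):

```python
# Enumerate OpenCL platforms and devices visible to the system, e.g. to check
# whether an Intel HD Graphics iGPU shows up as an accelerator.
import pyopencl as cl

for platform in cl.get_platforms():
    print(f"Platform: {platform.name}")
    for device in platform.get_devices():
        print(f"  Device: {device.name} "
              f"({cl.device_type.to_string(device.type)})")
```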

I suppose the question is how tri-gate stands up to SOI/FinFET? (I honestly have no idea.)

Yes, SuperPi uses 25% of an Intel i5/i7 versus 12.5% of an FX, but that forgets how much faster the i5/i7 is going to be. (If you are still not seeing why this is a poor comparison, run SuperPi on an 8-core Jaguar chip (PS4) versus the i7: it is still 25% vs 12.5%, but the i5/i7 is going to blow the Jaguar chip out of the water, being more than twice as fast per core.)
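Spelling out that arithmetic: a single-threaded run occupies 1/4 = 25% of a quad-core and 1/8 = 12.5% of an eight-core, but the score only depends on how fast the one busy core is. A toy sketch, with per-core speed ratios invented purely for illustration:

```python
# Single-threaded benchmark: chip "utilisation" is irrelevant, per-core speed decides.
chips = {
    "4 cores, faster per core": {"cores": 4, "per_core_speed": 2.0},  # made-up ratio
    "8 cores, slower per core": {"cores": 8, "per_core_speed": 1.0},  # made-up ratio
}

for name, spec in chips.items():
    utilisation = 100.0 / spec["cores"]  # share of the whole chip one thread occupies
    print(f"{name}: {utilisation:.1f}% of the chip busy, "
          f"single-thread throughput {spec['per_core_speed']:.1f}x")
```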

Games do use Hyper-Threading. It's why an i3 is much, much better than a Pentium for gaming in multithreaded games despite only a small increase in clock speed. It's why in games like Crysis 3 the i7 is a little better than the i5. Games don't use it as well as the modular FX architecture, though. That article is more than two years old.

AMD had a real chance with GCN in laptops (which is a little more efficient than Kepler in the smaller chips). (If I could have found a 7850M or 7870M I would have bought it right away.) The problem was that they had few design wins and their drivers were crap (Enduro issues with the 7970M for almost a year). Hopefully the 384-core 'Solar System' chips and Richland CrossFire will change this. Also, AMD's mobile turbo is crap: Intel lets their CPUs draw enough power to turbo and throttles based on temperature, while AMD throttles based on power usage, so many times your chip is at only 55 degrees but still throttling.



 


AMD can take full advantage of AMD Radeon App Acceleration, which is a similar technology. Intel has not yet adapted their technology to take advantage of this process, so the benefit is considerably less for them, though the opportunity is still available.




Agreed... and if you're going to shell out $309 for the i7-3770K, you're not going to run the Intel HD 4000 iGPU. You'd be a fool to do so, as you would gain no graphical performance advantage to take advantage of that chip's capability. I have yet to see an i7 build that runs Intel integrated graphics; thus, on their high-end chips, any advantage is nearly always lost. (So, if you're Intel, why bother to include it?)



Intel has already stated they will be moving away from tri-gate on bulk to FinFET/SOI for any architecture beyond Haswell, because the quality of wafer they use is insufficient to support processes smaller than 14 nm.



There are several benchmark sites already moving away from SuperPi, because it is becoming a less and less useful indicator of real-world performance. Years ago it was great at showing single-thread performance, but since that is becoming less relevant with each passing day, many no longer include this benchmark in their CPU reviews.



Yes, it is dated, but I could not find a more recent list of HT-supported software, so I took what I could find. There are some games that use it, but there are a great many more that do not, and there are even some that run drastically better with HT disabled. So, as shown in my quote above (from a senior Intel employee, no less), even Intel concedes that not all applications benefit from HT technology, and in some cases it is a hindrance rather than a boon.



This architecture will be changing drastically in the next few months; I agree that the Solar System chips and Richland will make a large impact here. I disagree with AMD updating this segment last relative to the desktop market, but I can understand why they did it. They placed higher importance on maintaining their largest market-share segments over trying to conquer a new segment. It makes business sense, but I would have preferred to see them launch the mobile segment at the same time to try to win more of that market more quickly. However, with limited R&D money, you have to plug the biggest holes first, and I understand why they did it the way they did.



 


AMD does not measure Tj but Tc (for several technical reasons) and obtains Tj values from an estimation formula which uses a different scaling than Intel's.

Moreover, those technical specifications are not a substitute for real tests. If Intel had better thermals, as you suggest, then with simpler cooling it would hold the overclocking world record. The fact is that the record belongs to AMD.
 
I believe that the 28nm process used for SR and Kaveri will be bulk but I'm not sure.
http://www.anandtech.com/show/6201/amd-details-its-3rd-gen-steamroller-architecture

Depends really, there are PLENTY of applications that need pure CPU grunt and really don't care about graphics other than something that can do a decent job of displaying to the screen.

I'm not sure about Tj and thermals, but in notebooks AMD has a much lower temperature threshold than Intel chips (which will run at 95+ degrees).
 


No, it will be a PD-SOI process...from GloFo...the next step down is supposed to be 20nm PD-SOI as well.

The only commercial applications that would require i7-3770k muscle are rendering machines and workstations, which clearly would already have a workstation GPU to render.

For raw number crunching a business would save itself a ton of money and just use something like an i3, or have a server setup to use something like Opterons or POWERPC or Xeon CPUs that were only going to be used for those specific functions.
 


What "temperature threshold"? And how is it measured/estimated? You cannot compare apples to oranges.

The situation is similar to Intel TDP vs AMD TDP. You cannot compare them directly because each company defines/measures TDPs differently.
 


EVERYWHERE I look I see 28 nm BULK.

I'm talking about fairly low-budget but intensive number crunching (like in academic institutions where Xeons may be too expensive and not required, but you still need to run a lot of computationally intensive jobs 24/7).
 


Those places run the FX-8350/8320 or something similar with a bargain-bin GPU; it saves them more money than buying an i7-3770K.

Educational arenas, where money is tight, are not going to buy Intel just because; they'll buy the most budget-friendly solution that will do the task. AMD is far more budget-friendly: even with a $50 GPU you're still saving $80 before you account for the more expensive motherboards that go with Intel products.

http://www.investorvillage.com/smbd.asp?mb=476&mid=12464842&pt=msg

http://news.softpedia.com/news/AMD-and-GlobalFoundries-Interested-in-FD-SOI-for-the-20-nm-Process-267519.shtml

Sorry, I was incorrect: it's not Partially Depleted Silicon on Insulator (PD-SOI) they're using, it's Fully Depleted Silicon on Insulator (FD-SOI). They already use it in their 28nm wafer. They have been making mostly SOI wafers since 2003, because GloFo was AMD's fab business and AMD has used SOI exclusively since 2003.

Intel still uses tri-gate FinFET on bulk wafers.
 
The reason low power is so important is heat, noise, and obviously electricity use. You can build an HTPC with a 35 W Intel in it and it'd be completely silent without needing a fan. It'd be a lot harder to do the same thing with an AMD, as I've read in Tom's Hardware's 0 dB PC build. It's possible, but it's just a lot easier to do with an Intel, and cooler too.

I agree that with a 3570K or 8350, power consumption doesn't matter TOO much, because if you get either one of those processors you're going to overclock and give that advantage away, and the difference amounts to a cup of coffee once a month. However, power consumption does give us an idea of a chip's efficiency, the amount of performance you get per watt. And this is where the big advantage lies. It says a lot about an Intel chip that Intel can do so much more with so much less power. How fast would an Intel chip be if it could use well over 100 W? Very fast. But that's not what they're trying to do; it is what AMD is trying to do, though, and they still can't get up to par with an Intel even with power consumption out of the picture.
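The "cup of coffee a month" figure is easy to sanity-check. A back-of-the-envelope sketch, where the wattage gap, hours at load, and electricity price are all assumptions you should swap for your own numbers:

```python
# Rough monthly cost of a CPU power-draw difference. All inputs are assumptions.
watt_difference = 40     # extra draw under load, watts (assumed)
hours_per_day = 4        # time spent at load per day (assumed)
price_per_kwh = 0.12     # electricity price in USD per kWh (assumed)

kwh_per_month = watt_difference / 1000 * hours_per_day * 30
cost_per_month = kwh_per_month * price_per_kwh
print(f"~{kwh_per_month:.1f} kWh/month, roughly ${cost_per_month:.2f}/month")
```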

I agree about the HD 4000 graphics: they hardly benefit at all, if any. They're a waste, and Intel should offer the 3570K and 3770K without the graphics for $50 less. If they did, they'd never sell another 3570K with an iGPU, but they would sell more of them overall. They do make a 3350P with the iGPU disabled, although I think it's still on the chip, just disabled.

If anyone still thinks that an 8350 is completely equal to an Intel 3570K or 3770K in gaming, just take a look at Tom's new article that came out today, comparing Nvidia and AMD cards in SLI/CrossFire on an 8350 and a 3770K. Clearly Nvidia and AMD cards run better on an Intel, especially AMD's cards, which is surprising. But Nvidia cards run better on an 8350 than AMD cards do because they demand less from the CPU. Very interesting indeed. But you can clearly see the quality difference between the two chips. Is it worth the extra 20 bucks for the 3570K? IMO, yes it is. I know the article used a 3770K, but there is really no difference between them in the games they tested.

Read the article. If you want 10% more performance in pretty much every game tested, then you'll want the best. If you couldn't care less about 10 FPS out of 100 FPS, then you'll be rewarded with a better price.

And SuperPi is a very important benchmark, as it demonstrates single-threaded capability, which is a very large slice of the pie. This benchmark matters because it is indicative of the performance you can expect from programs like LAME and iTunes, single-threaded games, and a lot of other software; much of Windows 7 also runs on a single core. To say single-threaded programs are being phased out is true, but to say a single core's performance isn't important at all is completely untrue. It's still very important and will remain so for years; although it is being phased out, it will always matter to some extent.
 


I know lots of fanless motherboards using AMD chips: C-60, C-70, E-240, E-350... Some time ago the E-350 was very popular among HTPC builders. I haven't checked lately.

There are lots of fanless motherboards using Intel Atoms (425, 525, 2500...), but those cannot be used for multimedia or an HTPC. There is an MSI fanless motherboard that includes an Intel Celeron 847, but I know little about it.

AMD also showed how you can build a fanless system using a powerful Trinity A10 chip:

http://news.softpedia.com/news/AMD-Demonstrates-Fanless-A10-5700K-Trinity-System-296214.shtml

Finally, there are people running FX-8350 with passive cooling.
 


Well, if you spring for a well-insulated case, you have less issue with noise, though building a 0 dB HTPC is an interesting idea. The APU solutions AMD offers would give quite a bit of bang for your buck, with some good capability if you need it in a pinch, and they're not known to run particularly hot...



Actually, what a chip is designed to consume has little bearing on speed. There are 95 W AMD chips like the FX-6300 which run comparable races against the mid-range 84 W Intels, especially once you get into overclocking. The power-consumption difference between the two amounts to about the equivalent of turning on an extra 40 W light bulb in your home (you won't notice the difference). AMD's current architecture is designed around a certain power requirement, but that doesn't mean Intel would run faster if their chips consumed more power; it just means they would consume more power.



That little trinket on the die also allows Intel to claim their i7s have more transistors than the FX-8xxx series; in reality the difference isn't much. I am not sure why they did it the way they did, but it was clearly a poor choice. I suppose it may save them money to manufacture them all the same way, and it likely does, but it also increases the cost of the product, even though it saves money in manufacturing to only offer it one way.



I read the article this morning, as a matter of fact, and the 3770K was only marginally faster in most applications, and in some the FX-8350 was marginally faster; in the first benchmark they ran, the AMD was faster as I recall (I think it was BF3?). But it only proves that a 10% margin of error is accurate (10% of 100 FPS = 10 FPS difference). So, in order to reasonably conclude one had a dramatic advantage, you'd need to show a greater than 10% difference, and even then you could only say definitively that one was marginally better, above the margin of error, at that specific task.



I addressed this above.



If all you did was download stuff on iTunes and play Flash games on Facebook or something, then sure, SuperPi is your best benchmark. Unfortunately, in this day and age, multitasking, gaming, multimedia, rendering, encoding, and ripping DVDs/BRDs have made a single-thread benchmark less and less relevant. As I said earlier, quite a few websites have stopped even bothering to run SuperPi, because while it is good at what it does, it doesn't cover many angles at all.

 



Last thing I'll say on the subject, but I wasn't "pretending" the chips needed to be compared at the same relative clock speed. All I'm saying is that it would make sense to measure how both CPUs perform at the same clock speeds, given that in most cases 4.8-5.0 GHz seems to be the top-end overclock for both CPUs on air. Yes, I know there are setups running at 5.2+ GHz with an FX, but this is not the norm, and the power draw and heat are tremendous.

That's all I was saying.
 
juanrga, I have to say, you are really taking this to another level. I can really tell you are such a fanboy.

1. The world record for clock speed doesn't matter. And yes, it can withstand higher temps BECAUSE of the build quality. You're right on that, but it's more a build quality focused on heat rather than performance. And yeah, you got a higher clock speed, but it's the code that matters, and I think you're disregarding that fact.

2. About Aero: yeah, it's disabled in full screen, but what about windowed? I know not everyone runs windowed, but for all the games I play it usually increases the FPS anywhere from about 10 up to 80. And yeah, that "won't matter" when I get my ass-kicking PC, but it's just natural for me to play that way; I get annoyed in full screen. So your fact about Windows Aero isn't fully accurate. And what dumbass would get Windows 8 for a gaming PC? It's terrible for it. On Intel being bad for most gaming titles, you are right; however, when Intel runs its graphics with a discrete GPU installed, it tries to handle anything that isn't being handled by the main GPU, i.e. all the background processes. This goes back to my point about windowed mode. Sometimes I may run a game on one screen and want to check Facebook or a map or something on another. It really comes in handy then.

3. Percentages, again, don't really matter. It's again about the code. I can run SuperPi and force it to go on all cores. Then, in a comparison where both run at the same clock (4.0 for example), an i7-3770K can still get about the same results as the 8-core FX chip, simply because of the way it's coded and its physical build.

4. What apps run 8 cores? Almost NONE. The extra cores at this point in time do not matter whatsoever, unless you use them to run cool benchmarks and do cool 8000-step math equations. With Hyper-Threading, you made the point that when activated it can decrease performance, and stated that the 8-separate-core combo is better. I have to say: 1. You can just disable it. 2. Some applications actually work better with Intel HT on 4 cores than on AMD's 8 separate cores. And 3. Yeah, it'll be faster with apps that use all eight, but unless you're an "Extreme Computer Scientist," I don't see that advantage in my ArmA game.

5. Back to that physical build stuff. Yeah, it can run hotter without problems, but think about the process of the workload. Your AMD will probably run at 4.5 GHz, and mine will run the same. And yeah, you have more cores, but the problem with that is you have more of a heat problem. So AMD uses less expensive, more heat-resistant materials that won't come close to the performance you'd get with regular materials. And thus you need the "Superior 8-Core Power" just to get performance similar to the Intel processors.

6. On your usage question, I can say this: AMD's CPUs and GPUs are more suited to each other, so the workflow is more coordinated and can be processed in a 'better' manner, and it would be fairer in a testing environment. It would be fairest to run AMD with AMD, AMD GPU with Intel, Intel with Nvidia, and an AMD processor with Nvidia. And yeah, they're more optimized, but they're not made for each other (AMD and AMD), and that's where the AMD-and-AMD vs. Intel-and-Nvidia combo comes into play.

Putting this together with what ericjohn said, what do you have to say, you AMD fans?



***And on that last footnote you made: since AMD makes both, Intel and Nvidia rely on each other. It's no different than the AMD CPU and GPU relying on each other. You basically just answered your own question.***
 


I agree with what you're saying, 8350, but you're just stating the obvious for no reason. There are plenty of sources out there that show comparisons with 3D rendering/gaming/etc. He was basically just agreeing with what I said earlier.
 


I don't know. At my university, every computer I can find (with the exception of really old Athlon 64 machines) is using an Intel chip, from Pentium to Core 2 Duo to i7-2600. Those are the general-use computers. The academic/professional-use computers all use Intel too.

Really, for most places a dGPU is not needed. That is a $50 saving toward an Intel system.
 


Calling others names will not eliminate the facts... which you continue to ignore.



I (and other people) have already explained what the world record shows. It is about build quality. Intel does not hold the world record, because its build quality is poorer.



Are you trying to say that Intel is better for gaming because you play games in a window and get Aero acceleration from the HD graphics? When Aero is just fancy effects for the desktop?

And then you try to support your 'special' opinion by insulting the millions of users who game on W8.

And finally you add how wonderfully HD graphics work alongside a discrete GPU, when there are hundreds of people disabling it because it generates problems when gaming. WOW!



Let me assume that you are running a single-threaded program on four cores. Even if I accepted that, you would only be using half (50%) of the FX chip, but 100% of the i7 chip. As you can see, again it has little to do with "code" and "physical build".



There are several applications that use eight cores. Most are professional applications and some are for scientists (I don't know what you mean by "Extreme Computer Scientist").

Moreover, you again miss the point that some people work multitasking, i.e. they run several applications at once. Therefore it does not matter if a single application cannot use the eight cores, because several applications together can. In fact, FX users know that their chip works better than the i7 at multitasking (Intel has flag problems).

A new generation of eight-core games is being developed because the next consoles will be eight-core designs. You continue denying this fact as well.



You continue denying facts. It is not AMD who is using "less expensive" materials, but Intel, who is using poorer materials and builds to save some bucks. That is why they cannot reach AMD's clocks.

Now just answer this: if Intel chips are so incredibly superior in terms of performance, why does Intel need to cheat with its compiler and generate biased benchmarks? And why do the fastest supercomputers in the world use AMD chips?



Did you read the recent Tom's Hardware review showing how Nvidia can run better on AMD than AMD on AMD?
 


Oh, but it DOES matter. See, Intel uses a triple-gate-on-bulk process to try to get the most from the bulk wafer. Bulk is the cheapest wafer you can buy, the lowest-quality bin, and Intel uses tri-gate to try to squeeze the most out of it. Funny how they charge the most for the cheapest silicon, huh? AMD uses bulk for unimportant things that don't require high performance; otherwise they use SOI, Silicon On Insulator, which means the silicon has an extra component in it to keep it insulated and heat resistant, making it perform better, for longer.

2. About Aero: yeah, it's disabled in full screen, but what about windowed? I know not everyone runs windowed, but for all the games I play it usually increases the FPS anywhere from about 10 up to 80. And yeah, that "won't matter" when I get my ass-kicking PC, but it's just natural for me to play that way; I get annoyed in full screen. So your fact about Windows Aero isn't fully accurate. And what dumbass would get Windows 8 for a gaming PC? It's terrible for it. On Intel being bad for most gaming titles, you are right; however, when Intel runs its graphics with a discrete GPU installed, it tries to handle anything that isn't being handled by the main GPU, i.e. all the background processes. This goes back to my point about windowed mode. Sometimes I may run a game on one screen and want to check Facebook or a map or something on another. It really comes in handy then.

Intel does not currently support onboard graphics plus a discrete GPU; it causes a plethora of issues with the hardware, and they have outright stated that onboard graphics should be disabled if you're using a discrete GPU. You've received bad information somewhere.

3. Percentages, again, don't really matter. It's again about the code. I can run SuperPi and force it to go on all cores. Then, in a comparison where both run at the same clock (4.0 for example), an i7-3770K can still get about the same results as the 8-core FX chip, simply because of the way it's coded and its physical build.

The physical build is inferior. PERIOD. Anyone who knows ANYTHING about the composition of materials will not argue that Intel has a superior-quality wafer; it's the most easily disproved claim you make. You keep talking about "how it's coded"... you do realize coding is not part of a CPU, right? It's programming language. If you're talking about protocols, then Intel is designed for single-threaded applications; I have reviewed this multiple times in this thread alone. Also, SuperPi is a single-core benchmark; it cannot be "forced" onto more cores, because it is designed specifically to test single-core performance. Bad information. The only reason the i7-3770K competes with the FX-8350 in many categories is, frankly, that it is that good at single-threaded apps, and this allows it to overcompensate in highly threaded apps.

4. What apps run 8 cores? Almost NONE. The extra cores at this point in time do not matter whatsoever, unless you use them to run cool benchmarks and do cool 8000-step math equations. With Hyper-Threading, you made the point that when activated it can decrease performance, and stated that the 8-separate-core combo is better. I have to say: 1. You can just disable it. 2. Some applications actually work better with Intel HT on 4 cores than on AMD's 8 separate cores. And 3. Yeah, it'll be faster with apps that use all eight, but unless you're an "Extreme Computer Scientist," I don't see that advantage in my ArmA game.

Actually, games like Crysis 3 that "support" HT actually run better with it off; Google it and look at the YouTube videos, the facts are there. HT is a way to rook people out of more money for, basically, software trying to do the work of a core in a background operation, which all the while robs the hardware of performance on the foreground operation. Hyper-Threading is an industry-wide inside joke; Intel has nearly admitted as much openly.

5. Back to that physical build stuff. Yeah, it can run hotter without problems, but think about the process of the workload. Your AMD will probably run at 4.5 GHz, and mine will run the same. And yeah, you have more cores, but the problem with that is you have more of a heat problem. So AMD uses less expensive, more heat-resistant materials that won't come close to the performance you'd get with regular materials. And thus you need the "Superior 8-Core Power" just to get performance similar to the Intel processors.

TDP and core voltage have a direct correlation to heat. At no point does the number of cores come into play. PERIOD.

Again, the BS about materials... look, man, I posted a link to Wikipedia that explained the difference between SOI and bulk for you, and you still sit here and try to tell me, wrongly, that I am wrong and you are right. Show me one shred of evidence that says bulk wafers are better than SOI, or that Intel uses anything other than bulk wafers. You can't find it. You know why? Because it doesn't exist.

6. On your usage question, I can say this: AMD's CPUs and GPUs are more suited to each other, so the workflow is more coordinated and can be processed in a 'better' manner, and it would be fairer in a testing environment. It would be fairest to run AMD with AMD, AMD GPU with Intel, Intel with Nvidia, and an AMD processor with Nvidia. And yeah, they're more optimized, but they're not made for each other (AMD and AMD), and that's where the AMD-and-AMD vs. Intel-and-Nvidia combo comes into play.

Putting this together with what ericjohn said, what do you have to say, you AMD fans?

Just because Intel is more optimized for single-threaded apps does not mean they will be better at anything multithreaded now or in the near future. You need to come up with some facts to support your statements.



Already addressed this earlier.
 


https://www.youtube.com/watch?v=rIVGwj1_Qno
https://www.youtube.com/watch?v=4et7kDGSRfc
https://www.youtube.com/watch?v=eu8Sekdb-IE
 


I'm just going to say it's sad that almost all of the results were done with AMD GPUs. That's not very fair. None of these links show consistent results with all the same GPUs. However, I did find that taking all of the videos together (as a reference average), Intel did win overall.
 


Considering Tom's Hardware just published an article showing that AMD GPUs perform better with Intel CPUs and Nvidia GPUs perform better with AMD CPUs, it doesn't surprise me that Intel won slightly. If they had used Nvidia GPUs, AMD would have won outright without any doubt... LOL.
 