[citation][nom]kinggraves[/nom]I'd really like to see how much you're going to pull out of the Intel system, with a completely stable OC. If we're getting into changing the bus speeds and full stability isn't a concern, the AMD system could function on many FM1 boards with 133 MHz, bringing the "potential" to nearly 5GHz (An unrealistic potential on air, granted) while also OCing the iGPU. No no, in terms of OC potential, the winner is obvious. I think people just wanted to see the results of how the better memory would have performed, even if it was a separate test bar and you had to note that config cost slightly more. It isn't as if using better memory would double the results. It IS worth noting however, that your APU results are reduced by using that memory. I mean, as far as value goes, you could also get an A8 3850 for 10 dollars less and get nearly the same performance as the stock 3870. One of the graphics disabled models like the 631 might have been interesting paired with a discrete too, I don't know if anyone ran that comparison yet. How would an A6 3500 fare in these waters for $80, considering the graphics side is fairly close to an A8 and the CPU is still a triple core?[/citation]
Even with 3600MHz on the CPU and 960MHz on the GPU, we have the A8-3870K hitting almost 300w at load. Try finding a cheap cooler that can handle that. It doesn't matter whether the A8 could go higher if allowed to, because it would only hit frequencies like that in a laboratory setting, if even there; you'd need liquid nitrogen at a minimum, maybe even liquid helium.
On the other hand, the Intel system can reach a higher overclock if allowed to, because it has its heat spread across two different places (the CPU and the discrete video card's GPU), and because Intel's processors are already more efficient; Intel simply has the better tech right now.
For example, a lucky i5 or i7 can hit 5GHz on air, but no AMD CPU can, because Intel's chips are more efficient even though they are also faster. AMD's usual solution nowadays is to add more cores to get good multi-threaded performance. That doesn't mean AMD has better technology; if anything it means the opposite, that AMD is compensating for its problems. There is nothing wrong with compensating, but it is not enough to fix poor performance in most software, because most software doesn't use more than one or two cores. Go beyond four cores and you are pretty much left with professional and server-oriented software.
Having many cores on a system that can't use them all well gives rapidly diminishing returns from each added core, whereas improving the cores themselves always improves performance, unless something other than the CPU cores is the bottleneck.
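To see why the returns fall off so fast, Amdahl's law is a quick back-of-the-envelope check. Here is a minimal Python sketch; the parallel fractions below are made-up illustrations, not measurements of any real program:
[code]
# Amdahl's law: speedup from N cores when only part of the work is parallel.
# The parallel fractions are hypothetical, just to show the shape of the curve.
def speedup(parallel_fraction, cores):
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

for p in (0.50, 0.75, 0.95):  # assumed fraction of work that scales with cores
    gains = [round(speedup(p, n), 2) for n in (1, 2, 4, 6, 8)]
    print(f"{int(p * 100)}% parallel -> speedup at 1/2/4/6/8 cores: {gains}")
[/code]
With half the work serial, even eight cores only get you to about 1.8x, while making each core faster speeds up everything.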
Personally, I'd expect an A6 plus a discrete card to beat an A8 on its own, so that is definitely something worth looking into; just don't assume the same applies to an A4. The A6 has 20% fewer shaders than the A8 (320 versus 400), so its graphics are even slower than a Radeon 5550 (which is what the A8's graphics are based on; there is no such card as a discrete 6550 anyway).
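To put that shader gap in rough numbers, here is a back-of-the-envelope sketch, not a benchmark; the clock speeds are approximate stock values and the two-operations-per-shader-per-clock figure is just the usual peak rating:
[code]
# Rough peak shader throughput = shaders x clock x 2 ops per shader per clock.
# Shader counts: A8 iGPU (HD 6550D) = 400, A6 iGPU (HD 6530D) = 320.
# Clocks below are approximate stock values and vary between models.
def peak_gflops(shaders, clock_mhz):
    return shaders * clock_mhz * 2 / 1000.0

print("A8 iGPU :", peak_gflops(400, 600), "GFLOPS (approx.)")
print("A6 iGPU :", peak_gflops(320, 443), "GFLOPS (approx.)")
[/code]
The A6 also runs its shaders at a lower clock, which is why the real-world gap ends up bigger than the 20% shader-count difference alone.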
A lot of people seem to misunderstand TDPs and power usage. TDP is a very approximate number in many cases and should not be used as a measure of real power draw. Fully loaded scenarios only happen when you run something like a burn-in or stress test; otherwise you will probably never get close.
A 65w TDP Pentium will usually max out at under 50w, normally closer to 45w, despite carrying a 65w TDP. TDPs are also assigned to processors that just barely miss a lower bracket. If the Pentium topped out at 34w, it would be given a 35w TDP; if it topped out at 40w, it would still get a 65w TDP. This is done because each CPU/APU family has preset TDP brackets that a processor must fit into. A 50w Pentium fits into the 65w bracket but not the 35w bracket, so instead of creating a new bracket Intel simply calls it a 65w CPU. If the Pentium drew much more than 65w, it would have been given a 95w TDP instead. Intel uses 35w, 65w, and 95w for desktop Sandy Bridge and most of its other processors. Platforms with CPUs that go above 95w get 130w and the much rarer 150w. If I remember correctly, only a few Xeons, the very top Core 2 Quad, and the top Netburst chips (Pentium 4 and Pentium D) carried 150w TDPs.
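In other words, the label is just the smallest preset bracket the chip's worst-case draw fits under. A quick sketch of that idea (the bracket list mirrors the ones above; the sample draws are made up for illustration):
[code]
# A chip gets labeled with the smallest preset TDP bracket that covers its
# worst-case draw. Bracket values are the ones mentioned above; the sample
# draws are invented for illustration.
BRACKETS_W = (35, 65, 95, 130, 150)

def assigned_tdp(max_draw_w):
    for bracket in BRACKETS_W:
        if max_draw_w <= bracket:
            return bracket
    raise ValueError("draw exceeds every preset bracket")

for draw in (34, 40, 50, 60):
    print(f"worst-case draw {draw}w -> sold as a {assigned_tdp(draw)}w TDP part")
[/code]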
Further confusing things, some processors actually exceed their TDPs when fully loaded in a burn-in benchmark instead of topping out at the TDP. AMD is supposedly notorious for this, but I can't offer proof myself, so take that claim with a grain of salt until someone shows evidence or you look it up yourselves.
Granted, a Pentium probably draws more than 45w during a burn-in test like Prime95, but even if it only peaked at 50w or 60w it would still get a 65w TDP, because that is simply how the manufacturers have decided to do things.
All of this causes a lot of confusion, so I understand why people get it wrong. Hopefully this explanation helps.