[citation][nom]blazorthon[/nom]Even with 3600MHz CPU/960MHz GPU we have the A8-3870K hitting almost 300w at load. Try getting a cheap cooler that can handle that. It doesn't matter if the A8 can go higher if allowed to because it could ONLY hit such high frequencies in laboratory settings, if even there. It would take liquid nitrogen, at a minimum, to cool it like that. It might need liquid hydrogen. On the other hand, the Intel system can get a higher overclock if allowed to because it has it's heat spread across two different places (the CPU and the discrete video card's GPU) and Intel systems are already more efficient because Intel has better tech. For example, an i5 or i7 can hit 5GHz on air if you're lucky, but no AMD CPU can, because Intel is more efficient even though Intel is also faster.[/citation]
I did not say he should actually run it at 5 GHz; I specifically stated that it was a theoretical limit. In fact, the real limit is closer to 6 GHz on LN2, so I lowballed it:
http://news.softpedia.com/news/AMD-Llano-A8-3870K-APU-Overclocked-to-6067MHz-250023.shtml
Bringing up an i5 is pretty ridiculous. We aren't discussing an unlocked i5 here; what you can do with an i5 is irrelevant when this article is about a locked G620. If you can get a locked G620 stable at 5 GHz, congratulations. My only point was that overclocking is smoother with an unlocked CPU, so saying he could have overclocked the G620 probably wouldn't have mattered: he wouldn't have gotten a significant, stable overclock out of it, and if we're tossing out system stability, the unlocked APU likely would have gone farther.
I don't really know why his results show such high power usage. I've pushed a locked A8-3650 up to 3.6 GHz and it didn't draw that much even under Prime95 stress, and that was on a cheap aftermarket cooler in a fairly crowded HTPC case. I'm not calling him out on it, though; I wasn't doing any serious testing, just messing around. Comparing the maximum draw of the overclocked APU to the maximum of the stock Intel system is silly either way. If you're comparing anything, compare stock to stock; the Intel system would be pulling more watts overclocked too.
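For what it's worth, the usual dynamic-power relation (P roughly proportional to V^2 * f) does explain why an overclock that needs a voltage bump balloons power draw so fast. Here's a back-of-envelope sketch; the voltages are just illustrative assumptions, not anything measured from his system:

[code]
# Rough dynamic-power scaling: P is proportional to V^2 * f.
# The voltages below are illustrative assumptions, not measurements.

stock_voltage = 1.15   # volts (assumed stock VID)
stock_freq    = 3.0    # GHz  (A8-3870K stock clock)

oc_voltage = 1.45      # volts (assumed voltage needed for the overclock)
oc_freq    = 3.6       # GHz  (overclocked)

# Relative increase in CPU dynamic power from the overclock
scale = (oc_voltage / stock_voltage) ** 2 * (oc_freq / stock_freq)
print(f"Dynamic power scales by roughly {scale:.2f}x")  # ~1.91x
[/code]

So a 20% clock bump with a big voltage bump can nearly double the CPU's dynamic power, which is why stock-vs-stock is the only fair wattage comparison.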
The rest of this reads like Intel bias to me, since you pretty much just state that Intel is super great. They do have a lot more money and better fabs, so yes, their process technology is ahead.
[citation][nom]blazorthon[/nom]AMD's often solution nowadays is to add more cores to get good multi-threaded performance. This doesn't mean that AMD has better technology, in fact it means the opposite and that AMD is trying to compensate for their problems. There is nothing wrong with trying to fix something, but it is not enough to fix poor performance in most software because most software doesn't utilize more than one or two cores. Going beyond 4 cores and you are pretty much left with mostly professional and server oriented software. Having many cores on a system that doesn't use them all very well offers rapid diminishing returns on increased performance from increased core counts whereas improving the cores instead of adding more will always improve performance, unless there is another bottleneck other than the CPU cores.[/citation]
A lot of good software is being optimized for multiple cores: file compression, tested here, for example. 7-Zip is a far better program than WinRAR/WinZip, and people who actually care about compression/decompression speed are probably using it, since it covers the common formats. Companies releasing programs that genuinely need CPU power are optimizing for multiple cores. Games will catch up too; they have long development cycles and lean on older engines, so most haven't been written for multicore yet. AMD was ahead of its time in shipping multicore chips before software could use them. That was their mistake, but calling multiple cores useless is short-sighted. You WILL have to replace that G620 eventually. And again, we aren't discussing a CPU with more than four cores here, so the point about going beyond four is irrelevant.
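To put rough numbers on the "diminishing returns" point, here's a quick Amdahl's-law sketch. The parallel fractions are made-up assumptions purely for illustration; the point is that well-threaded software (like 7-Zip) gets real gains from four cores, while lightly-threaded software barely moves:

[code]
# Amdahl's law: speedup = 1 / ((1 - p) + p / n)
# p = fraction of the workload that parallelizes, n = core count.
# The p values below are assumptions for illustration only.

def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.5, 0.9, 0.95):   # lightly vs heavily threaded workloads
    for n in (2, 4, 8):      # e.g. G620-like, A8-like, and beyond
        print(f"p={p:.2f} cores={n}: {amdahl_speedup(p, n):.2f}x")
[/code]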
I personally don't have a problem with the results here; they're easily predictable. The Intel build had the better GPU and did better in graphics. The AMD build did better in programs that use multiple cores efficiently. That's the thing: this article didn't really answer the points people raised on the last AMD/Intel article, where they wanted the AMD build with faster memory and the Intel build on an equivalent motherboard. It gave us results we already knew. What was the point?
I did notice some comments here saying that a more expensive Intel board is needed to run the 1600 MHz memory properly. I'm not sure that's true, but it could make motherboard choice an issue. Someone doing a cheap build would use the cheap Intel board and run at 1333; someone with the AMD build would run at 1600 even on the cheap board. To reflect the cheap Intel experience, either the test board should be a cheap one or the memory should be set to 1333. The performance difference wouldn't be anything major, but it's better to remove any variable in the results that could come from using a more expensive test board. If we wanted to decide results from other tests, we wouldn't even need this article, since everything could be guessed from other information.
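On the 1333 vs 1600 point specifically, the theoretical peak bandwidth gap is only about 20%, which is why I wouldn't expect anything major from it. A quick sketch of the standard DDR3 dual-channel math (8 bytes per transfer per channel):

[code]
# Theoretical peak bandwidth for dual-channel DDR3.
# bandwidth (GB/s) = transfer rate (MT/s) * 8 bytes per transfer * 2 channels / 1000

for rate in (1333, 1600):  # DDR3-1333 vs DDR3-1600
    gb_per_s = rate * 8 * 2 / 1000
    print(f"DDR3-{rate}: {gb_per_s:.1f} GB/s peak")   # ~21.3 vs 25.6 GB/s

print(f"Difference: {1600 / 1333 - 1:.0%}")           # about 20%
[/code]

Real-world gains are usually smaller than the peak-bandwidth gap, but it's still a variable worth controlling for.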