Battle At $140: Can An APU Beat An Intel CPU And Add-In Graphics?

Status
Not open for further replies.


It won't play other games at 1080p/28 FPS avg though... Battlefield 3 and Metro 2033 are two examples in this review alone.
 
Also, playable frame rates in a strategy game are radically different from playable frame rates in a first-person shooter. That is why we like to go beyond 40 and 50 frames per second when we can. Every millisecond counts in a first-person shooter, unless you have horrible reaction time, in which case you wouldn't be very good no matter what frame rate you play at.
 
I am not a heavy gamer, but when I do play games they are almost purely FPS games, mainly the older Unreal Tournament games. I guarantee that I would have trouble playing even against the best bots, let alone real people, if I couldn't get higher frame rates. Yes, I have tested it in that game on my older machines. FYI, 25 FPS really sucked. I can't see 28 FPS being much better.

I'll look into getting more gaming time with newer games once some stuff is paid off :)

Also, you didn't post the link to the motherboard you wanted to.
 
[citation]Also interesting is that, when you add up the power use of Intel's Pentium and AMD's Radeon HD 6670 DDR3, you come up with 101 W. That's only one watt more than the A8-3870K. Crazy, right?[/citation]

Umm, isn't that 131 W, which is 31 W more than AMD's solution?
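The arithmetic behind the correction, assuming the commonly listed TDP ratings (65 W for the Pentium and 66 W for the 6670 DDR3, 100 W for the A8-3870K; these are rated figures from spec sheets, not measured draw):

```python
# Rated TDPs (assumed spec-sheet values; TDP is a thermal rating, not measured power draw)
pentium_tdp = 65   # W, Pentium G620
hd6670_tdp = 66    # W, Radeon HD 6670 DDR3
a8_tdp = 100       # W, A8-3870K

combined = pentium_tdp + hd6670_tdp
print(combined)            # 131 W, not the 101 W the article claimed
print(combined - a8_tdp)   # 31 W more than the A8, not 1 W
```

This only corrects the sum the article printed; as the next posts argue, summing TDPs says little about actual power usage.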
 


Damn it, I've had enough of you idiots. TDPs do NOT mean power usage. They are very approximate, AND the Pentium's TDP includes its IGP, which is not in use here, AND TDPs are usually only reachable at full load, a situation that ONLY happens during something like a burn-in benchmark such as Prime95. I already went into very fine detail about TDPs just a few posts above you, yet you still don't get it. You OBVIOUSLY didn't read any comments before spouting ignorance, or you ignored them.

The Intel solution uses less power than the AMD solution does, that's that. Realistically, the Pentium only uses up to about 45 W, not the 65 W its TDP would suggest to anyone who doesn't know what a TDP means and won't read the many posts describing actual power usage vs. TDP before making claims.

You don't need to be half as knowledgeable as I am to just read a comment that clearly explains this stuff and I am NOT the most knowledgeable person on this forum, not even close.
 
Sorry, I overreacted, but just reading a few comments right above yours would have cleared that up.

@tourist

That looks like a decent board, but without any reviews yet, it might just be a piece of crap. ECS doesn't always make the best boards, although my only complaint about my current ECS board is that it doesn't like multiplier overclocking. It might just need a BIOS update, but I'm lazy and the 25% BCLK overclock is good enough for me, so I don't care too much.
 
Sempron might not be enough. A Sempron can and probably will bottleneck anything above the 6670 or 6750.

There is a reason that AMD is abandoning Semprons, at least in their current form. If it did win, then I'd go for the Celeron anyway. If we want the lowest-end CPUs possible, then the single-core 1.6 GHz Celeron is around $40 or less, and we could let it fight the Sempron.
 
Thanks once again for an interesting and informative article aimed at budget system builders. The articles about top-flight GPUs and other system components are very cool... but they don't apply to me very often. I'm grateful for articles like this one, which help me pick a build for my in-laws' upcoming new system.

One minor complaint (sorry): the A8-3870K is the 'premium' part in the A series. Since you tend to get diminishing returns for your money in every CPU range as you move up to the top parts, I'd like to see what the comparison looks like at the mid-range of the series. For example, how does the A6-3650 compare to a Celeron G530 + Radeon HD 6570 combination at around $110 total? The 6570 is a well-known overperformer for the money, and the Celeron G530 looks tempting, so I'm curious how those parts stack up in real-world usage at this stingy budget level.
 
No, it is an 890GX board. Sorry, but I can't check the exact model right now. It has five SATA III ports, three PCIe 2.0 x16 slots (x8/x8/x4), eSATA, dual gigabit Ethernet, etc. I paid a little over $120 for it.

Yeah, the northbridge runs hot; I attached an 80 mm fan to it to fix that. The problem is in the BIOS. Whenever I try to manually change the multiplier, it automatically sets it to 8 and drops the voltage very low, no matter what I set them to in the settings. It doesn't bother me with anything else, just those two settings.

Like I said, I think it needs a BIOS update because it is running an older BIOS.

If I need a budget board, I tend to prefer ECS over Biostar and Gigabyte's low-end boards. Gigabyte has good mid-range and high-end boards, but their low-end boards kind of suck compared to the others. However, ECS has had some problems too, just not when I use them. It depends on the model.
 
Semprons and Athlons aren't the same. I think Semprons have less cache, for one difference. Yeah, the Celeron I mentioned probably won't be great, but not all Semprons unlock, and the newer a CPU is, the more likely it is one that AMD "fixed" to stop unlocking.

Yeah, the 1.6 GHz Celeron probably won't touch an unlocked Sempron 140/145, but they probably aren't going to beat the Celeron G530.

The price difference between the Sempron and the Celeron G530 (the dual-core 2.4 GHz one) is only $10. Find a cheap motherboard, and $10 is not enough to get a faster video card. At that point, the Sempron will still use more power than the Celeron, which probably uses at most the same as the Pentium, more likely a little less.

The Sempron 140/145 are probably slower than the Celeron G530 in everything even if they're unlocked to dual cores, unless they're overclocked. Once again, we could push an overclock on the Celeron/Pentium to compensate even more without pushing power usage over the AMD.

It is a good idea and we could look into it for more definitive proof, but I'm doubtful about the Sempron being able to handle the good graphics cards that its low cost would allow for.

I'll also admit that, despite my doubts, Semprons do have a good track record of unlocking cores.
 
[citation][nom]blazorthon[/nom]Even with 3600MHz CPU/960MHz GPU we have the A8-3870K hitting almost 300w at load. Try getting a cheap cooler that can handle that. It doesn't matter if the A8 can go higher if allowed to because it could ONLY hit such high frequencies in laboratory settings, if even there. It would take liquid nitrogen, at a minimum, to cool it like that. It might need liquid hydrogen.On the other hand, the Intel system can get a higher overclock if allowed to because it has it's heat spread across two different places (the CPU and the discrete video card's GPU) and Intel systems are already more efficient because Intel has better tech.For example, an i5 or i7 can hit 5GHz on air if you're lucky, but no AMD CPU can, because Intel is more efficient even though Intel is also faster.[/citation]

I did not say he should actually run it at 5 GHz; I specifically stated that it was a theoretical limit. Actually, the limit is closer to 6 GHz on LN2; I lowballed it.

http://news.softpedia.com/news/AMD-Llano-A8-3870K-APU-Overclocked-to-6067MHz-250023.shtml

Bringing up an i5 is pretty ridiculous. We aren't discussing an unlocked i5 here. What you can do with an i5 is irrelevant when this article is about a locked G620. If you get a locked G620 stable at 5 GHz, congratulations. My only point was that overclocking is smoother with an unlocked CPU, so saying he could have overclocked the G620 probably wouldn't have mattered. He wouldn't have overclocked it stably in any significant way, and if we're tossing out system stability, the unlocked APU likely would've gone farther.

I don't really know why his results show such high power usage. I've pushed a locked A6-3650 up to 3.6 GHz and it didn't show that much power usage even under Prime95 stress. I'm not calling him out on it or anything, though; I wasn't doing any serious testing for results, just messing around on a cheap aftermarket cooler in a fairly crowded HTPC case. Comparing the max of the overclocked APU to the max of the non-overclocked Intel is silly, though. If you're comparing anything, compare stock to stock. Intel would be using more watts overclocked too.

The rest of this makes me suspect Intel bias, considering you pretty much state that Intel is just super great. They do have a lot more money and better factories, so yes, their processes are better.

[citation][nom]blazorthon[/nom]AMD's often solution nowadays is to add more cores to get good multi-threaded performance. This doesn't mean that AMD has better technology, in fact it means the opposite and that AMD is trying to compensate for their problems. There is nothing wrong with trying to fix something, but it is not enough to fix poor performance in most software because most software doesn't utilize more than one or two cores. Going beyond 4 cores and you are pretty much left with mostly professional and server oriented software.Having many cores on a system that doesn't use them all very well offers rapid diminishing returns on increased performance from increased core counts whereas improving the cores instead of adding more will always improve performance, unless there is another bottleneck other than the CPU cores.[/citation]

A lot of good software is being optimized for multiple cores, for example the file compression tested here. 7-Zip is a far better program than WinRAR/WinZip, and people who are actually concerned with the speed of compressing/decompressing files are probably using it to cover all the standards. Companies releasing programs that actually need CPU power are optimizing for multiple cores. Games will be optimized for multiple cores once they catch up; games have long development cycles and use older engines, so they haven't been written for multicore yet. AMD was ahead of its time and put out multicore chips when programs were not using them. That's their mistake, but to say multiple cores are useless is short-sighted. You WILL have to replace that G620 eventually. But again, we aren't using a CPU with over four cores in this discussion, so whether software scales past four cores is irrelevant here.


I mean, I personally don't have a problem with the results here. It's easily predictable. The Intel build had a better GPU and did better in graphics. The AMD performed better in programs that used multiple cores efficiently. That's the thing. This article didn't really answer the points people did make on the last AMD/Intel type article where people wanted the AMD build with better memory and the Intel build using an equivalent motherboard. It gave us results we already knew. What was the point?

I noticed some comments here though, saying that a more expensive Intel board is needed to use the 1600 memory properly. I'm not too sure that's true, but it could make motherboard choice an issue. Someone with a cheap build would use the cheap Intel board, and run at 1333. Someone with the AMD build would run at 1600 even on the cheap board. In order to have the cheap Intel experience, either the board should then be a cheap board, or the memory should be set to 1333. The performance difference wouldn't be anything major, but it's better to remove any possible change in results that could come from having a more expensive test board. If we wanted to decide results based on other tests, then we don't even need this article, since everything could be guessed from other information.
 


The 6670 is a $70 card. Using a $110 or so card like the 7750 would mean we also need to give the Llano system a bigger graphics budget, and there is probably no way to get anything that cheap except for a 6450, which isn't going to make much of a difference.

Not a bad idea per se, but it won't show anything that we don't already know. The 7750 is somewhere around a 6770 in performance.
 
@kinggraves

An i5 doesn't need to be unlocked to get an overclock. Even the i5-2400 (a locked i5) can hit almost 4 GHz easily by manipulating Turbo Boost and the BCLK; it will hover between 3.78 GHz and 3.99 GHz if done properly with a 105 MHz BCLK. AMD is always less efficient than Intel. Well, not always, but it has been that way ever since Core 2 came out. We compared the non-overclocked Intel to the overclocked AMD because the non-overclocked Intel STILL won anyway. Had the AMD caught up, the Intel system would have been overclocked far past the AMD and would still be more power efficient while being far faster.
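The 3.78-3.99 GHz range falls out of simple clock arithmetic. A rough sketch, where the turbo multipliers (36 for all-core, 38 for single-core, i.e. the stock turbo bins plus the limited +4 bump locked Sandy Bridge chips allow) are my assumptions for illustration:

```python
# Core clock = base clock (BCLK) x multiplier.
bclk = 105  # MHz, a mild bump over the stock 100 MHz

# Hypothetical turbo bins for an i5-2400 with the limited +4 multiplier headroom applied
all_core_multiplier = 36
single_core_multiplier = 38

print(bclk * all_core_multiplier / 1000)     # 3.78 GHz, all cores loaded
print(bclk * single_core_multiplier / 1000)  # 3.99 GHz, single-core turbo
```

Because the BCLK also feeds other system clocks on Sandy Bridge, going much past ~105 MHz tends to destabilize the platform, which is why the gain is modest compared to a multiplier-unlocked chip.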

This is not Intel bias; Intel is simply better here, and in most other places, when it comes to gaming. An A6 and a 6450 might be able to match the Intel system here in performance and price, but would still use more power. AMD's CPU architecture is a lot less efficient than Intel's. Intel not only has better processes, they also have a better design, better cache, and a better memory controller. A lot of applications use multiple threads, but many still don't, and games aren't yet using many threads. I realize that game software is behind the times due to older engines and such, but the end result is Intel's more efficient design winning over AMD's less efficient designs. If games could work with many threads, then Intel would do something about it, probably just start using more cores or more threads per core.

There are some PowerPC and SPARC CPUs that have 8 or 16 threads per core, so more is possible, although they are radically different from x86 and x86-64. It does show that Intel could do more if it became necessary. Sure, you'll need to replace that G620 eventually, but you would also need to replace that A8. The difference is that with Intel you get to replace just the CPU instead of the CPU and motherboard. If games start using more threads, all it takes is an i5 or i7 and an overclock. Intel's huge per-core performance allows an overclocked i5 or i7 to match the higher core count FX and Phenom II X6 chips.

An i5-2400 can go to just below 4 GHz; that is enough to match a six-core FX in performance. The K-edition i5 and i7 can go even farther. The i7 is already better, but it is pretty expensive. The i5-2500K can go past 4.5 GHz easily, and it generates less heat and uses less power than a comparably clocked FX because, as I said, Bulldozer is an inefficient architecture compared to Sandy Bridge. Also keep in mind that instead of a Sandy Bridge i5, this system could be upgraded to an Ivy Bridge i5, which will be even more efficient.

Differently priced motherboards have been proven not to make more than a minute difference in stock performance, and faster RAM would mean the AMD system has a higher budget, hence we'd give the Intel system a faster video card and any performance increase from the faster memory would be moot. I'll tell you right now that the performance difference from 1600 MHz to 1866 MHz memory on Llano is no more than 16%, no less than 10%. DDR3-1866 has about 16.6% more bandwidth than DDR3-1600, so it can't give more than a 16.6% increase in performance unless the A8 somehow increases in memory bandwidth efficiency (it doesn't).

Also, considering that the memory bottleneck at 1600 MHz is smaller than the bottleneck at 1333 MHz, the added bandwidth will be used less efficiently, so expect a performance difference of less than 16.6%, but not too much less. I hope this helps to answer your question. H61 Intel motherboards support up to DDR3-1333; H67 supports DDR3-1600 and is a little more expensive. However, Intel doesn't see much performance difference from faster memory because it isn't bottlenecked by memory. Its controller is more efficient than AMD's, so it doesn't need the same raw bandwidth to have the same usable bandwidth, and the Intel CPU doesn't need to share memory bandwidth with an integrated GPU, so it gets all of the bandwidth to itself.

Hence, faster RAM is unnecessary for an Intel system. 1333 MHz is better than 1066 MHz, but going beyond that doesn't show big benefits; 1600 MHz is only worth it because it is about the same price as 1333 MHz. The Intel system thus doesn't need 1600 MHz support, whereas the Llano system would be something like 20% slower without it. Faster memory on an Intel system helps its productivity in archiving considerably, but otherwise it isn't even a 3 or 4% difference, especially in gaming. I've heard that faster memory can help minimum frame rates in some games, but it isn't a big difference anyway.
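The 16.6% ceiling quoted above is just the bandwidth ratio of the two memory speeds, which bounds the best-case gain for a bandwidth-limited iGPU:

```python
# Best-case scaling from faster memory is capped by the raw bandwidth ratio
ddr3_1600 = 1600  # MT/s
ddr3_1866 = 1866  # MT/s

extra_bandwidth_pct = (ddr3_1866 / ddr3_1600 - 1) * 100
print(round(extra_bandwidth_pct, 1))  # ~16.6% more bandwidth at most
```

Real-world gains land below this ceiling because the iGPU is only partially bandwidth-bound at 1600 MHz, which is the point being made about diminishing returns.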

I hope this helps clear anything up.
 
The G620 will run any RAM at 533 MHz. Period. That's what the memory controller does, no exceptions. You can't change it, can't overclock it, can't fiddle with it. The G620 runs RAM at 533 MHz (DDR3-1066).
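The two numbers in that last sentence are the same speed stated two ways: DDR memory transfers data on both clock edges, so the effective transfer rate is double the I/O clock. A quick sketch:

```python
# DDR = double data rate: two transfers per I/O clock cycle
io_clock = 533            # MHz, the bus clock the G620's memory controller tops out at
effective_rate = io_clock * 2
print(effective_rate)     # 1066 MT/s, marketed as "DDR3-1066"
```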

I was building a system I call 'dufflecomp' (because it had to fit into a duffle bag, along with the two monitors, cables, keyboard, etc.) a few months ago, and had to decide between a Llano or a G620-class CPU + a 6570/6670 GPU. I picked and built a G620 with a GDDR5 6670. I'm extremely satisfied with my selection. It runs Skyrim, SC2, Diablo III, and Tribes: Ascend well.

Not blazing ultra detail at 1920x1200 well, but medium detail at 1680x1050 is smooth. Productivity is fine (especially since I went with a 90 GB SSD).

The computer is used when I'm on the road (I travel from L.A. to the bay area for 3-10 day installs of my custom software, and needed something I could set up in a hotel room that had better than laptop performance, and wanted moderate gaming ability).

Even if the Llano (this was just before the K models were released) were heavily overclocked, I'd have run into cooling issues in a cramped mini-ITX case, so I went Intel + AMD GPU.

I mean, look at those power draw numbers for the overclocked Llano. 269 watts? Wow. The Apex MI-008 case I picked has a 250 W SFX PSU and puts out, I believe, 208 W on the 12 V line. The G620, running Prime95, draws 41 W for me. Add in Unigine running, and the total system draw is under 130 W.

There's just no getting around it. The lure of the Llano, for me, was fitting it into a tiny little 3-liter case with some proprietary PSU (look at In Win's mini-ITX cases for examples), but from reviews (Tom's and others) of the Llano at the time, heat removal would have been a nightmare, and I'd have had PSU problems.

I'm running a 955BE in my main rig and am entirely satisfied, so I have no problem with AMD. It's just that in a tiny little starved-for-air case, a G620 + 6670 is going to be the best choice unless you simply cannot, for form-factor reasons, use a full-height card. Even then, PowerColor has a low-profile, double-width 6750 you could use with a G620, if your PSU can handle it.
 
@kinggraves - very good post, congrats !

@blazorthon - we don't need any more theoretical talk about AMD and Intel. In case you missed it, we want the small holes in this article filled (especially the RAM used and the memory controller set up correctly, with the actual settings it ran at written down), and to see some professional touch.

Maybe I have to say it more directly to be understood:
As I said before, when someone who understands a bit more than the masses looks at this article and sees that you tested this APU with DDR3-1600, he will say: one more piece of bull**** I don't need to read. To keep readers like me, please don't write articles like the other sites do (where they test an APU with DDR3-1333 vs. an i5 with DDR3-1600... WTF?).
I don't care how good Intel is and other empty talk - I want to see something finished. And I really hope I will find it on TOM'S, which is open in my browser all day long!
 
[citation][nom]peroludiarom[/nom]@kinggraves - very good post, congrats! @blazorthon - we don't need any more theoretical talk about AMD and Intel. In case you missed it, we want the small holes in this article filled (especially the RAM used and the memory controller set up correctly, with the actual settings it ran at written down), and to see some professional touch. Maybe I have to say it more directly to be understood: As I said before, when someone who understands a bit more than the masses looks at this article and sees that you tested this APU with DDR3-1600, he will say: one more piece of bull**** I don't need to read. To keep readers like me, please don't write articles like the other sites do (where they test an APU with DDR3-1333 vs. an i5 with DDR3-1600... WTF?). I don't care how good Intel is and other empty talk - I want to see something finished. And I really hope I will find it on TOM'S, which is open in my browser all day long![/citation]

And to be honest, if next week I still read the same theoretical empty talk here, I will go and get one of these 3870Ks, and I will post the test results here (the same tests) with the right settings for the APU.
 
Hurray, this is a good Tom's Hardware benchmark. Yeah, as usual, AMD products are crap. So I'm not surprised more of AMD's engineers are leaving this loser company, since they only get more idiotic working at AMD. Heck, even recently Abu Dhabi bought more Intel shares because it will be a better ROI than investing in a crappy manufacturer like GlobalFoundries. So die, AMD, die now.
 
@suryasans

If AMD died right now, then Intel would have a monopoly. Assuming that anti-trust laws and the like didn't do something to Intel, such as splitting the company, Intel would probably start increasing prices since they would no longer have any competition. Unless another company steps up and Intel decides to license x86 to them, Intel would never have competition again. Also, buying out AMD wouldn't matter, because the x86 license is non-transferable.

Due to those laws, Intel would sooner pay AMD to stay afloat, just to avoid any consequences.

Besides, the A8 is right behind the Intel system, so I wouldn't call it crap. I also assume you didn't know this, but GlobalFoundries doesn't only make AMD processors.
 
I really wish I could see a review of a Llano in the optimal setup it appears to be built for: something with 1866 or faster RAM and a 6670 discrete card in CrossFire, put up against other single-GPU builds in the same price range. With Trinity right around the corner, I doubt we'll see it, but hopefully it is something to consider when Trinity is released.

If Trinity supports onboard + multi-GPU setups, I'd like to see how that stacks up against other single high-end GPU builds and any CrossFire/SLI builds in the same price range. Based on the review on this website that illustrated how multi-GPU CrossFire setups are showing significant gains in performance, I'd like to see how the mid-range cards perform. If the gains are similar, Trinity could end up being a real budget workhorse, capable of some great mid-range gaming.

I priced out a complete Llano 3870K system with a 6670 and 8 GB of 1866 RAM at just over $450. Adding another 6670 to complete an onboard + multi-GPU CrossFire setup would only run me another $70 to $90, but sadly this type of setup can't be run. I'm hopeful that Trinity, and the 7000-series cards it can pair with, will allow something like this. If the price of Trinity and the FM2 boards isn't too much of an increase, and the technology is supported, you're potentially looking at a three-GPU (onboard + two discrete) CrossFire system in the $600-$700 range. I'd love to see how that stacks up against a high-end system built around a single $500 discrete card.
 
@Chip in a Box

That seems like it could match the Intel system fairly well in gaming performance, but it might use more power than the A8 system. It still leaves out an upgrade path worth mentioning, but it is a good comparison that I would like to see. Then let's see both overclocked as well.

Also, how about a Celeron G530 paired with a 6750 instead of a 6670? That should be enough gaming performance to win, although I admit it might shuffle the power usage charts for the Intel system. The Celeron should be right behind the Pentium, so gaming performance should go up significantly without sacrificing much general-purpose performance.

 