AMD Fusion: How It Started, Where It’s Going, And What It Means

[citation][nom]rds1220[/nom]LOL they act like they re-invented the PC, when Intel has had integrated graphics for generations now. Meanwhile after all that the APU is junk. The graphics are good but processing power is slow and shitty.[/citation]

They actually just might reinvent how CPUs and GPUs contribute to performance, so your comment will probably fall on deaf ears. Furthermore, having an IGP and making good use of it are two entirely different things, especially given the performance gap between Intel's IGPs and AMD's, and the fact that AMD was the first to offer true entry-level gaming performance from an IGP rather than a minuscule GPU that can hardly handle anything intensive. Intel and AMD took quite different paths with their IGPs until the Radeon 7000 series, where AMD added what might be competition for Quick Sync; if I remember correctly, it's on all of the Radeon 7000 GPUs, not just the IGPs.

Also, the CPU performance of the APUs is actually quite good, especially with Trinity, considering their prices, target markets, and purpose. Trinity is a big step toward Intel as far as CPU performance is concerned, and I'd take a quad-core Trinity over an i3 for pretty much any computer that I could. The same would not be true against Intel's quad-core CPUs, but against the dual-core Intel CPUs, I'd go with Trinity every time within practicality.

The APUs are most certainly not junk. AMD has been quite clever with them, although I still think they could have done better; then again, I think that about a lot of things that AMD and other companies do.
 

rds1220

Splendid
APUs are pointless at the moment. It's better to get a faster Phenom II 965 and a dedicated video card. APUs offer nothing more than onboard graphics; they just moved the chip from the motherboard to the CPU. IMO it's a waste of money buying a CPU with an integrated GPU when you will most likely have a dedicated GPU if you are into playing games. Also, as mentioned already, sure the graphics are better, but what's the point when in terms of sheer processing power the APU is slow and far behind most Intel CPUs and Phenom IIs? Also, Crossfiring an APU with a low-end GPU creates insane microstutter and doesn't even give the performance of a good mid-range 6850. I would much rather have a good CPU and a good dedicated GPU for what it would cost to get a crappy Athlon X4 equivalent and a crappy low-end Crossfire setup.
 
[citation][nom]rds1220[/nom]Apu's are pointless at the moment. Its better to get a faster phenom II 965 and a dedicated video card. APU's offer nothing more than onboard graphics, they just moved the chip from the motherboard, to the CPU. IMO its a waste of money buying a CPU with integrated GPU when you will most likely have a dedicated GPU if you are into playing games. Also as mentioned already sure the graphics are better but whats the point when in terms of sheer processing power the APU is slow and far behind most Intel CPU's and Phenom II's. Also crossfiring an APU with a low end gpu's creates insane microstutter, and doesnt even give the performance of a good mid-range 6850. I would much rather have a good CPU and good dedicated GPU for what it would cost to get a crappy athlon x4 equivelant and a crappy low end crossfire setup.[/citation]

That a Phenom II x4 965 and a dedicated video card are faster doesn't matter because they are also much more expensive and use much more power. APUs offer much more than just integrated graphics. They offer the superior thermal headroom of a 32nm process node and Crossfire with low-end dedicated GPUs. What's better, a Phenom II x4, or a Llano A8-3850, when paired with a Radeon 6670? They're about the same price and both are decent upper entry-level gaming solutions, but the Llano system is much faster for games, especially with 1600MHz memory or better.

The GPU has a very low-latency connection to the CPU, so the CPU can better offload latency-critical work to the GPU. Furthermore, Llano is not far behind Phenom II in performance; Llano is fairly close to Phenom II. Trinity is beyond Phenom II in performance. Both are beyond Phenom II in power efficiency, but Trinity especially so.

There isn't insane micro-stutter unless something is wrong with the computer. Furthermore, for tasks that scale well across four threads, an Athlon II X4 or a quad-core Llano A6/A8 can beat even the top current i3s in performance. Trinity can easily do so as well.
 

linuxlover

Honorable
Aug 20, 2012
This was a very well-written article, thank you! I have what some may consider a "stupid" question about AMD Fusion, but I have yet to hear it answered. In order to take advantage of GPGPU, do you have to have graphics drivers installed? I've heard about there being a separate OpenCL driver, but does it need Catalyst to run? I was thinking about the possibility of building a system with Trinity and Nvidia graphics (Linux setup running Wine), keeping the Radeon chip as a parallel processor only. Nvidia could drive the video, and one could perhaps leverage both OpenCL and CUDA if needed. Anyone know if this is possible?
 

technoholic

Distinguished
Feb 27, 2008
Intel's Haswell will supposedly be a 50-or-more-core processor. Perhaps this is Intel's approach to parallel computing? I mean, aren't GPUs better at parallelism because they naturally have hundreds of "cores"? And perhaps AMD's approach is to use the integrated GPU for parallelism? Will GPUs and CPUs change roles? Or will CPUs evolve to better handle parallel workloads? Even though I don't have expert knowledge in microchips and such, I am getting excited about the future.
 
[citation][nom]linuxlover[/nom]This was a very well-written article, thank you! I have what some may consider a "stupid" question about AMD Fusion, but I have yet to hear about it. In order to take advantage of GPGPU, do you have to have graphics drivers installed? I've heard about there being a separate OpenCL driver, but does it need Catalyst to run? I was thinking about the possiblity of building a system with Trinity and Nvidia graphics (Linux setup running Wine), keeping the Radeon chip as a parallel processor only. Nvidia could drive the video, and one could perhaps leverage both OpenCL and CUDA if needed. Anyone know if this is possible?[/citation]

You might be able to do something like that, but it seems a little odd.
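That kind of split (NVIDIA driving the display, the Radeon used purely for compute) hinges on being able to enumerate every installed OpenCL platform and pick a device explicitly instead of taking the default. Here's a minimal sketch of that selection logic in Python; the platform/device list is hypothetical stand-in data, though in practice you could build the same structure from pyopencl's get_platforms():

```python
# Pick a compute-only device from the installed OpenCL platforms.
# The sample data below is a hypothetical stand-in; with pyopencl you
# would build it from cl.get_platforms() / platform.get_devices().

def pick_compute_device(platforms, prefer_vendor="AMD"):
    """Return (platform, device) for the first device on a platform whose
    name matches prefer_vendor, falling back to the first device found."""
    fallback = None
    for platform, devices in platforms:
        for device in devices:
            if fallback is None:
                fallback = (platform, device)
            if prefer_vendor.lower() in platform.lower():
                return platform, device
    return fallback

# Hypothetical system: NVIDIA drives the display, the APU's Radeon computes.
installed = [
    ("NVIDIA CUDA", ["GeForce GTX 560"]),
    ("AMD Accelerated Parallel Processing", ["Devastator (HD 7660D)"]),
]

platform, device = pick_compute_device(installed)
print(platform, device)  # selects the AMD platform for compute work
```

The key point is that OpenCL lets multiple vendors' runtimes coexist, so driving the display and doing compute don't have to happen on the same GPU.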

[citation][nom]technoholic[/nom]Intel's haswell will supposedly be 50 or more core processor. Perhaps, this is intel's approach to parallel computing? I mean aren't GPUs better in parallelism because they naturally have hundreds of "cores"? And perhaps AMD's approach is to use the integrated GPU for parallelism? Will GPUs and CPUs change roles? Or will CPUs evolve to better handle parallel work loads? Even though i don't have expert knowledge in micro-chips and stuff, i am getting excited about the future[/citation]

Intel's 50+ core processors are projects such as Knights Corner. Haswell will be another boost in per-core performance, like Sandy Bridge, Nehalem, and Conroe were. We might also see an eight-core IB-E or eight-core Haswell-E setup succeed SB-E, but that's probably as parallel as Haswell will get any time soon.

Not all software can be made to run in a very parallel fashion (especially a lot of older software) for a few reasons. For example, some tasks simply can't run in parallel: each instruction might rely on data from the previous calculation, and such workloads are difficult or even seemingly impossible to parallelize.
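That dependency point can be made concrete: a recurrence where each step consumes the previous result offers no parallelism at all, while an element-wise operation over the same data could be split across hundreds of GPU "cores". A small illustrative sketch:

```python
data = list(range(8))

# Serial: each iteration depends on the previous result, so the steps
# form a dependency chain and cannot be distributed across cores.
acc = 0
for x in data:
    acc = acc * 2 + x   # needs the previous acc before it can run

# Parallel-friendly: every element is independent, so a GPU (or a
# multi-core CPU) could compute all of these at once.
squares = [x * x for x in data]

print(acc, squares)
```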

CPUs probably won't evolve to handle extremely parallel loads very well. That is, after all, something that GPUs are so good at that we may as well use them. CPUs can handle the less parallel loads far better than GPUs can, so we're probably going to move toward a much more heterogeneous computing model, with CPUs and GPUs either working together on a task or working on two separate tasks that have different performance characteristics.
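The limit on how much the parallel portion helps is Amdahl's law: if a fraction p of a task parallelizes across n processors, the overall speedup is 1/((1-p) + p/n). A quick sketch of why even a huge GPU can't rescue a mostly serial workload:

```python
def amdahl_speedup(p, n):
    """Overall speedup when a fraction p of the work runs on n processors."""
    return 1.0 / ((1.0 - p) + p / n)

# Only 50% parallel: even with hundreds of GPU "cores", the overall
# speedup is capped just under 2x by the serial half.
print(round(amdahl_speedup(0.5, 400), 2))

# 95% parallel: now heterogeneous offload really starts to pay off (~19x).
print(round(amdahl_speedup(0.95, 400), 2))
```

This is exactly why pairing a strong serial CPU with a wide GPU beats trying to make either one do both jobs.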
 
Guest
I'm still not convinced by this. Most stuff an average consumer does is single-threaded. The exception is gaming, which is getting more and more multi-threaded and actually requires a powerful GPU. The current APUs only have value in the laptop space, but even there, the best one (which is usually compared against Intel's HD 4000 graphics) directly competes on price with hyper-threaded Intel dual cores plus a low-end discrete GPU, which perform better than the APU.

On the desktop, the APU is either overpowered (e.g., for Facebooking) or underpowered for gaming at a reasonable resolution. I agree with the poster who said AMD is too early. Sure, there are use cases that will benefit hugely from this, but the mass market? Not really, at least for now.
 
[citation][nom]beginner_[/nom]I'm still not convinced by this. Most stuff an average consumer does is single-threaded. The exception being gaming which is getting more and more multi-threaded and actually requires a powerful GPU. The current APUs only have value in laptop space. but even there the best one (which usually is used to compared to intels HD4000 graphics) directly competes (on price) with hyper threaded intel dual cores + low end discrete which perform better than the APU. On desktop the APU is either overpowered (eg for Facebooking etc) or underpowered for gaming at a reasonable resolution. I agree with the poster that said AMD is too early. Sure there are use-cases that will have huge benefits from this but the mass-market? Not really, at least now.[/citation]

Many of Intel's laptops priced similarly to AMD's APU-based models (especially ultrabooks/sleekbooks) have i3s that are weaker per core than some of the APUs in the same price range, and far weaker than the APUs that have more cores and similar or better performance per core. For example, I was looking through some OEM web sites to compare laptop prices, and the two closest-priced ultrabooks at HP had an i3 at 1.4GHz and an A6 at 2.6GHz. The i3 simply doesn't compare. Sure, the situation isn't always this bad for Intel, and maybe sometimes it's reversed, but it shows that the APUs can be the better solution in both CPU and GPU power in some cases.
 

army_ant7

Distinguished
May 31, 2009


Don't forget to mention those i3s' lower power consumption there (i.e., longer battery life). But yeah, you did get your point across about them possibly being better in some cases. I myself would be happy to get an APU rather than an Intel laptop with no discrete graphics, generally speaking.

I'm wondering: you're probably referring to a dual-core A6 above, but how many times faster do you think that 2.6GHz A6 would generally be compared to that 1.4GHz i3? :)
 


I wouldn't say times faster, but it is a significant win. The i3 should have about 50-60% faster integer performance per Hz, plus Hyper-Threading, but that simply doesn't make up for the A6's clock frequency advantage. Also, it was a quad-core Llano A6, not a Trinity A6. To be honest, 2.6GHz was the top Turbo frequency, but its 2.1GHz base frequency was still more than enough to stay ahead even if Turbo Boost didn't work as well as it does. Hyper-Threading simply isn't nearly enough to make up for having half as many cores, cores that are merely similar in performance to Llano's in the worst case (Turbo Boost doing nothing) and considerably slower in the best case (the Llano Turbo maxing out its frequency).

Also, Llano tends to have better battery life than Sandy Bridge systems, although this might have been an exception (I didn't check the i3 ultrabook's battery life, but the A6's was pretty long) given the performance difference.
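A rough back-of-the-envelope model of that comparison: treat multi-threaded throughput as cores x clock x per-core IPC factor, with Hyper-Threading worth roughly a 30% bonus. The figures below are the ones quoted in this thread, and the model is only a sketch, not a benchmark:

```python
def rough_throughput(cores, ghz, ipc=1.0, smt_bonus=1.0):
    """Crude multi-threaded throughput estimate: cores x clock x IPC,
    scaled by an SMT bonus. Illustrative only, not a real benchmark."""
    return cores * ghz * ipc * smt_bonus

# Llano A6: four cores at the 2.1GHz base clock, baseline IPC.
a6 = rough_throughput(cores=4, ghz=2.1)

# i3: two cores at 1.4GHz, ~55% higher integer IPC, ~30% from HT.
i3 = rough_throughput(cores=2, ghz=1.4, ipc=1.55, smt_bonus=1.3)

print(round(a6 / i3, 2))  # the A6 comes out ahead in this crude model
```

Even granting the i3 its full per-clock and Hyper-Threading advantages, the extra cores and clock put the A6 roughly 50% ahead on well-threaded work under these assumptions.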
 

army_ant7

Distinguished
May 31, 2009


Well, since I see you as not one to make baseless claims, I'll take your word for it, and I guess you've seen benchmarks and reviews supporting this. Though that 2.1-2.6GHz quad-core A6 Llano would only "definitely" win against that dual-core 1.4GHz Core i3 if the application in question uses more than two threads, right? I mean, if the worst case for the A6 happens (staying at 2.1GHz), it would perform about on par in integer performance with the 1.4GHz i3 if the application only taxes two threads. I just thought of this because 2.1GHz is 50% "faster" than 1.4GHz, but the Core i3, like you said, has a 50-60% integer performance (architectural) advantage, which about balances their performance.
Oh, you did seem to imply this with this line:
having half the cores that are similar in performance to the Llano cores worst-case scenario with Turbo Boost doing nothing
I'm also curious about the overall floating-point performance of those two exact CPUs. :)

Though all of this still doesn't disprove your initial point that APUs can prove to be the better solution sometimes. I totally agree with it, on many grounds.
 


Well, a lot of software today can make use of at least two or three threads fairly well, even games, and having extra cores for background work and multi-tasking can be quite an advantage. With Turbo, the A6 probably won't sit at its base frequency on all cores during a CPU-intensive job, so it should be able to beat the i3 considerably. You could look at benchmarks between Sandy Bridge i3s and Athlon II or Llano CPUs to see the difference in performance per clock with single-threaded benchmarks. You could also do this with multi-threaded benchmarks if you divide the i3's results by 1.3 to account for Hyper-Threading (also adjusting for differing core counts if necessary), but this is less accurate.
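That normalization step can be written out: from a multi-threaded benchmark score, a rough per-core, per-GHz figure falls out by dividing away the SMT bonus, core count, and clock. The scores below are hypothetical placeholders, purely to show the method, not real benchmark results:

```python
def per_core_per_ghz(score, cores, ghz, smt_factor=1.0):
    """Normalize a multi-threaded benchmark score to a rough per-core,
    per-GHz figure; use smt_factor ~1.3 when Hyper-Threading is enabled."""
    return score / (cores * ghz * smt_factor)

# Hypothetical benchmark scores, purely to illustrate the method.
i3_norm = per_core_per_ghz(score=520, cores=2, ghz=3.3, smt_factor=1.3)
llano_norm = per_core_per_ghz(score=580, cores=4, ghz=2.9)

print(round(i3_norm / llano_norm, 2))  # rough per-clock advantage of the i3
```

As noted above, this is less accurate than single-threaded comparisons, since the 1.3 Hyper-Threading factor itself varies a lot by workload.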

With floating point, I'd have to look into it again (it's been a while since I compared the floating-point performance of the i3s to Llano's), but Llano probably has an advantage, given how the i3s are cut down in floating point compared to the i5s and i7s.
 

pcfan86

Honorable
Aug 18, 2012
The XBox One has a CPU/GPU chip that has 5 billion transistors. Built with 28nm process technology, it's 363 square millimeters. Even though it's already shipping, it never happened. Why? BECAUSE IT'S IMPOSSIBLE TO BUILD A 5 BILLION TRANSISTOR CHIP! That's why. Right Dave?
 