AMD Sampling Fusion 'Llano' Chips to Customers

AMD gives a superior product if you factor in price. There is no reason that Intel could not drop their prices to compete with AMD. The average person does not have the money to blow on a 500 dollar processor and a 200 dollar motherboard. Intel is just a cash cow that has money to burn on research; however, I would bet that quite a bit of their research is wasteful. AMD, at least, is innovative, does a very good job with far fewer resources, and thinks outside the box.
 
To be honest, I actually do think this could very well be the next step in CPU evolution. It DOES seem logical, as we've seen a steady "build-up" of units throughout the history of the CPU:

1980: Shortly after the release of the 8086 and 8088, Intel produces the 8087 FPU co-processor, giving the computer native floating-point capability. It was a separately installed chip, somewhat analogous to the modern-day video card's add-on role.
1989: With the introduction of the 80486, the FPU is integrated into the main core, rather than being a separate chip.
1993: Intel introduces the Pentium, the first superscalar CISC CPU for home use, with a pair of ALUs to work with.
1996: Intel introduces the Pentium MMX, which includes a special SIMD capability, allowing it to process two integers at once with the same instruction.
1998: AMD introduces, with the K6-2, their "3DNow!" extension, which follows up on MMX by processing two floating-point numbers with the same operation, immediately doubling its "FLOPS" rating.
1999: Intel introduces the Pentium III, bringing forth the "SSE" set, which goes even further, giving the CPU full-fledged vector capability: up to FOUR floating-point numbers in one pass.
2005: AMD introduces their "Toledo" Athlon64 X2, the first home-use dual-core desktop CPU.
2006: Intel introduces their "Kentsfield" Core2Quad, the first home-use quad-core desktop CPU.
2010: Intel introduces their "Gulftown" Core i7 CPU, the first home-use hexa-core desktop CPU.
2010/11: AMD introduces their "Llano" Fusion CPU, coupling traditional CPU cores with a large number of smaller, very-high-speed processing units, yielding the first desktop example of a "hybrid" design between traditional multi-core CPUs and super-wide, vector-based GPUs.

So, over time, we've watched the CPU go from a simple, basic ALU, to having an FPU on a separate chip, to integrating that chip into itself, to gaining a SECOND ALU, then SIMD ability on first the ALUs and then the FPU, and on to multi-core designs. Now, finally, we're stacking large numbers of extra stream processors alongside traditional cores.
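
To make the SIMD side of that concrete, here's a tiny C sketch using the SSE intrinsics from xmmintrin.h (the intrinsic names are Intel's; the toy program itself is just my own illustration): a single _mm_add_ps performs four floating-point additions in one pass.

[code]
/* Toy SSE demo: one instruction, four float additions.
 * Compile with e.g. gcc -msse sse_demo.c */
#include <xmmintrin.h>  /* SSE intrinsics */
#include <stdio.h>

int main(void)
{
    float a[4] = { 1.0f,  2.0f,  3.0f,  4.0f};
    float b[4] = {10.0f, 20.0f, 30.0f, 40.0f};
    float r[4];

    __m128 va = _mm_loadu_ps(a);    /* load four packed floats */
    __m128 vb = _mm_loadu_ps(b);
    __m128 vr = _mm_add_ps(va, vb); /* ONE instruction, FOUR adds */
    _mm_storeu_ps(r, vr);

    printf("%.0f %.0f %.0f %.0f\n", r[0], r[1], r[2], r[3]);
    return 0;
}
[/code]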

While this progress is thrilling, it also poses challenges, of course. As with any CPU design, there's the question of how to keep it adequately busy: extra care must be taken so that the control and branching units can properly distribute the load and keep all of the stream processors equally busy.
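
As a toy illustration of that balancing problem (real hardware schedulers are far more sophisticated, and the SP count here is a made-up number), the bare minimum is spreading N work items so the busiest and idlest units differ by at most one item:

[code]
/* Even-split sketch: divide N work items across P stream processors
 * so no unit gets more than one item beyond any other. */
#include <stdio.h>

int main(void)
{
    int N = 1000; /* work items (pixels, vertices, vector ops...) */
    int P = 48;   /* hypothetical stream processor count */

    int base = N / P;   /* minimum items per unit */
    int rem  = N % P;   /* leftovers handed out one each */

    for (int p = 0; p < P; p++) {
        int count = base + (p < rem ? 1 : 0);
        int start = p * base + (p < rem ? p : rem);
        printf("SP %2d: items %3d..%3d\n", p, start, start + count - 1);
    }
    return 0;
}
[/code]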

[citation][nom]future_fusion_owner[/nom]therefore a 70 gigaflop Core i7 might be competing with a 1 teraflop AMD processor.[/citation]
This is pretty much exactly my point, with my comment about "Cell on crack." (though a slight nitpick: IIRC, the Core i7 920 starts at ~128 gigaflops, with the i7 980X reaching 230) The Cell's SPEs, which are entirely responsible for its (previously) impressive floating-point throughput numbers, are not very different at all from the stream processors on a modern GPU.

Hence, it wouldn't take all that much to let the CPU issue instructions to those stream cores directly; it'd go beyond GPGPU and simply make them part of the CPU itself. And given that, as far as I understand, the clock-rate limit in GPUs lies with the texturing units and not the SPs, they could be clocked MUCH higher as part of a CPU, yielding a multi-teraflop CPU while other designs (hexa-core i7, Cell) are barely scraping the 200-gigaflop mark.
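
For anyone wondering where numbers like these come from, the back-of-envelope formula is just units x clock x FLOPs-per-cycle; the sample figures below are my own illustrative picks, not vendor-verified specs:

[code]
/* Peak-FLOPS arithmetic: units x GHz x FLOPs per unit per cycle. */
#include <stdio.h>

int main(void)
{
    /* CPU-style: a few wide cores, e.g. 4 cores at ~2.66 GHz doing
     * 8 single-precision FLOPs/cycle (4-wide SSE add + mul) */
    double cpu_gflops = 4 * 2.66 * 8;      /* ~85 GFLOPS   */

    /* GPU-style: many narrow units, e.g. 1600 SPs at 0.85 GHz doing
     * 2 FLOPs/cycle via a fused multiply-add */
    double gpu_gflops = 1600 * 0.85 * 2;   /* ~2720 GFLOPS */

    printf("CPU-style peak: %.0f GFLOPS\n", cpu_gflops);
    printf("GPU-style peak: %.0f GFLOPS\n", gpu_gflops);
    return 0;
}
[/code]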

Of course, it IS possible for others to still compete... External units are quite usable. The x87 FPUs were separate chips from their parent x86 CPUs, which issued instructions to them.
 
Buying ATI might be the best decision AMD has ever made if all this "theory" is correct.

I only hope we don't see 1000 dollar CPUs from AMD like when the Athlon was king of the hill.
 
[citation][nom]anamaniac[/nom]Rich? When I bought my i7 920 I was making $13/hour...[/citation]

And is there a single reason why you needed an i7 other than bragging rights? I bet you use 20% of its full potential, which would be a huge waste of money (especially with your wages).
 
[citation][nom]kartu[/nom]Guys, so what's the deal? How does it benefit "PC gamers"? Isn't it just a mediocre GPU slapped on CPU chip? Why would I buy it?[/citation]

Sure you can. It just depends on what point you're making: everything Intel has made since is faster than the Q6600 clock for clock, and overclocks to around the same speeds as the Phenom IIs, which are faster than the first Phenoms, and he's trying to say his Phenom is as fast as a Core i5. It illustrates my point nicely.

You just have to understand what you're looking for in the data, and you obviously didn't.
 
[citation][nom]babybeluga[/nom]And is there a single reason why you needed an i7 other than bragging rights? I bet you use 20% of its full potential, which would be a huge waste of money (especially with your wages).[/citation]

Transcoding and VMs are vastly faster with i7s, as are 3D rendering and many, many more apps. Hell, since you're trying to limit everything to gaming, tell me why you'd need anything more than a Q6600 at 3.0GHz.
 
Actually, if these work with current 790/890GX boards, you get built-in hybrid Xfire at no additional cost for a discrete board. Or, drop in a discrete board, and you get triple-GPU Xfire at an entry-level price.
 
[citation][nom]babybeluga[/nom]And is there a single reason why you needed an i7 other than bragging rights? I bet you use 20% of its full potential, which would be a huge waste of money (especially with your wages).[/citation]
It's all about your priorities. Some people may think a $50,000 car is worth it compared to a $15,000 car. Some people don't care.
I'm on a computer most of the time when I'm not working, so it only makes sense that's a decent place for me to invest (I have no need for $100 jeans and $200 shoes etc.).

For me, it was also partly a sense of liberation. I've owned many computers before, but every time they cost around $800 and I was never satisfied. I spent a year trying to save up for a gaming rig, but every time I had the funds, everything went to hell. When I finally had the money, I decided to go all or nothing, though I did consider a Phenom or Athlon build.
Only problem is, I have stopped PC gaming almost entirely these days...
I'm also not going to replace this for years.
$2,000 well spent.
 
I like AMD and I've literally put my money on it (I have a few AMD shares), but let's just wait before we say our last goodbyes to Intel and Nvidia. We all remember great chip designs that materialized into failures (Itanium, Phenom I, Larrabee, maybe even Fermi, etc.). Fusion is a great concept, and I hope AMD will manufacture great Fusion C/GPUs as soon as possible, but so many things can go wrong before we see any Fusion-powered systems. Even if everything goes according to plan, Intel and Nvidia are not just hanging around waiting to be smashed. Intel's Tick-Tock strategy is formidable, and the gap between Intel and AMD is not narrowing at all. Nvidia is in a very tough spot right now, but if the Radeon 4870 launch woke them up, they may be back in the game soon.
Fusion may give a huge boost to AMD but we can not be sure about it right now. Only time will tell.
 
If AMD could create a 'Llano' variant for the professional market (CAD/CAM), that would be amazing (that, and good drivers). Then I'd buy it without a second thought.
 