AMD's "Fusion" processor to merge CPU and GPU

chuckshissle

Splendid
Feb 2, 2006
AMD today announced that it has completed the acquisition of graphics chip developer ATI. The company is wasting no time putting the acquired knowledge to use: in 2007, AMD will upgrade its mobile platform and offer a Centrino-like platform, as well as integrated solutions for commercial and media systems. And there will be a processor with built-in graphics.

Integrated CPU/GPU chip, that sounds really amazing!

But what will the future hold for Nvidia and other graphics processor manufacturers? What would happen to Intel on the gaming turf?
 

zarooch

Distinguished
Apr 28, 2006
Well, good for AMD, but IIRC way back in 2000 Intel had a project for exactly this: a CPU+GPU on a single die. But they scrapped the project; I don't remember whether it was a money matter or something else. Maybe someone can provide the link (CNET news?). As far as Intel is concerned, I think they know the ABC of this technology; they just need to get to XYZ.
 

WR

Distinguished
Jul 18, 2006
Some historical lessons may be gleaned from the Cyrix MediaGX, which was a system-on-a-chip for entry-level PCs. But I'm sure AMD has considered that and understands what makes its situation different from Cyrix's.

On the gaming turf, I don't expect anyone to abandon mainstream discrete graphics cards (only the lower-end cards). Thus I think the Fusion chip would contain some unnecessary silicon real estate for gamers and enthusiasts, though that's not a detriment to performance. For the rest of the systems, it could save some money over integrated video or a low-end discrete solution.
 

lcandy

Distinguished
Jul 14, 2006
What would be good is a series of processors with integrated graphics plus add-in cards. If they worked it like Xfire (yes, with a completely different technical setup, I know it's not that easy), you could get your processor with a standard graphics core built in, then have the option to add a beefed-up card to work alongside it. I wonder if anything like that is feasible, or am I dense? :)
 

sailer

Splendid
Apr 9, 2006
In my opinion, this might be a good step up for those who now use the "integrated graphics" seen on cheap computers, mainly the Intel ones from what I've seen. It would offload the main CPU so that it could run at full power while the graphics part did its thing.

At the same time, I don't see it working with the enthusiast crowd. Can you imagine wanting to upgrade your video card and having to buy a whole new CPU/graphics chip, or upgrading your CPU and having to do the same? Not me, for one. I tend to get a good CPU and keep it for a long time, upgrading the video card as I go along.
 
Guest
The name was Timna

Mooly Eden: It was a huge risk for the IDC. Banias came just after Timna had been canceled [Timna was the codename for an integrated processor designed for the entry-level market and originally scheduled for the second half of 2000 - ed]. We had worked on Timna for two years and needed to make sure that we didn't get another project canceled. In such a case, the company may lose confidence in the development center. And worse than this, the people may lose confidence in themselves. But the biggest risk in this industry is not to take risks, because then you are doomed. If you want to play it safe, you are out of the game.
The Timna microprocessor family was announced by Intel in 1999. Timna was planned as a low-cost microprocessor with an integrated graphics unit and a memory controller designed to work with Rambus memory. The company anticipated that by the time the processor was released to market, that is, in the second half of 2000, the price of Rambus memory would fall to the level where it could be used in value computer systems. As the price of Rambus memory failed to drop, Intel decided to use a bridge chip (Memory Translation Hub, or MTH), already used with the Intel 820 chipset, to link the Rambus memory controller with less expensive SDRAM memory. When a serious bug was discovered in the MTH design in the first half of 2000, Intel recalled the MTH and delayed the Timna release until the first quarter of 2001. After that, the company started a redesign of the MTH component from scratch, but due to continuing problems with the newly redesigned MTH part, as well as a lack of interest from many vendors, the Timna family was finally cancelled on September 29, 2000.
I think it can be a good option. Just look at all the integration going on on motherboards; you need less and less expansion. I can see the trend moving onto the CPU if the lithography allows it!
 

gm0n3y

Distinguished
Mar 13, 2006
That sounds possible, but I think they would probably start with entry-level systems. The on-die graphics might be so slow that the overhead of doing the 'crossfire' would outweigh the benefits. Or it would be such a negligible increase in performance that they wouldn't bother selling it.
 

nobly

Distinguished
Dec 21, 2005
sailer said:
> In my opinion, this might be a good step up for those who now use the "integrated graphics" seen on cheap computers, mainly the Intel ones from what I've seen. It would offload the main CPU so that it could run at full power while the graphics part did its thing.
>
> At the same time, I don't see it working with the enthusiast crowd. Can you imagine wanting to upgrade your video card and having to buy a whole new CPU/graphics chip, or upgrading your CPU and having to do the same? Not me, for one. I tend to get a good CPU and keep it for a long time, upgrading the video card as I go along.
Totally agree.
I can see its potential in many office systems, etc., that don't need good graphics. However, I wonder whether the GPU will get separate RAM or not; it's not like they can stick much on the CPU+GPU chip (not much real estate).

Problem is that if you want to upgrade, you're kind of stuck. Either you keep the CPU+GPU chip, or you add an expansion card and you're stuck with half a chip that doesn't do anything. And if it's more expensive than a CPU by itself, it seems like a waste of money to get the CPU+GPU. A saving grace there might be strong floating-point performance from tapping the GPU.

Another thing I'd throw out is that I would think this kind of integrated chip would be hard to yield in the fabs. But I'm not a yield expert, so hopefully someone else can comment on it.
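That yield worry can be roughed out with the textbook first-order Poisson yield model, Y = exp(-D * A). The die areas and defect density below are purely hypothetical, chosen only to show the shape of the effect:

```python
import math

def poisson_yield(area_mm2: float, defects_per_cm2: float) -> float:
    """First-order Poisson yield model: Y = exp(-D * A)."""
    area_cm2 = area_mm2 / 100.0  # convert mm^2 to cm^2
    return math.exp(-defects_per_cm2 * area_cm2)

# Hypothetical numbers for illustration only.
cpu_only = poisson_yield(180, 0.5)       # standalone CPU die
cpu_gpu = poisson_yield(180 + 70, 0.5)   # same CPU plus an on-die GPU block

print(f"CPU-only yield: {cpu_only:.1%}")  # ~40.7%
print(f"CPU+GPU yield:  {cpu_gpu:.1%}")   # ~28.7%
```

The extra GPU area sits in the exponent, so a combined die always yields worse than the CPU alone on the same process; whether that loss is offset by saving a separate chip and package is the economic question.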
 
zarooch said:
> Well, good for AMD, but IIRC way back in 2000 Intel had a project for exactly this: a CPU+GPU on a single die. But they scrapped the project; I don't remember whether it was a money matter or something else. Maybe someone can provide the link (CNET news?). As far as Intel is concerned, I think they know the ABC of this technology; they just need to get to XYZ.
I think I remember something like that. It was around the time the P3 was still using the SEC (single edge cartridge) type of packaging. Intel also said around that time that these types of CPUs might someday contain all the system components of a home computer.
 

BaronMatrix

Splendid
Moderator
Dec 14, 2005
chuckshissle said:
> AMD today announced that it has completed the acquisition of graphics chip developer ATI. The company is wasting no time putting the acquired knowledge to use: in 2007, AMD will upgrade its mobile platform and offer a Centrino-like platform, as well as integrated solutions for commercial and media systems. And there will be a processor with built-in graphics.
>
> Integrated CPU/GPU chip, that sounds really amazing!
>
> But what will the future hold for Nvidia and other graphics processor manufacturers? What would happen to Intel on the gaming turf?
A CPU/GPU combo will NOT happen before 2008. I would think that AMD will first put a GPU in a Torrenza socket as an accelerator and then move to on-die at 45nm.

We may see HTX (slot) first, but by Q3'07 there will be a Torrenza chip from ATI. Imagine putting an X1950 next to Barcelona for FP or media streaming. Or how about two Barcelonas and two X1950s in a quad-socket box.
8O

I think that nVidia will be fine, as they can still sell to AMD and Intel. They will probably present the Havok physics engine for PCIe and HTX. The difference in slot shouldn't affect their price structure, while HTX will give LOTS more bandwidth for interconnects than even PCIe 2.0.

I also wouldn't be surprised if they use Torrenza and the Open Socket to create their CPU/GPU. Licensing Intel's bus may not provide the horsepower, and creating their own FSB (with enough BW to compete with HT2) would take too long.
 

CompTIA_Rep

Splendid
Moderator
Jul 13, 2006
I can't imagine putting an X1950 XT inside a processor die...

Can we say heat dissipation? With so little surface area, you're looking to remove quite a lot of heat very quickly. This would make water cooling mandatory... or possibly shuttle tiles??
 

kukito

Distinguished
May 17, 2006
Baron, off topic, but congratulations on the AMD/ATI marriage. I expect some really exciting product offerings from AMD and not just great CPUs anymore. I'm already drooling about the possibilities. :D Good luck, AMD!!
 

zarooch

Distinguished
Apr 28, 2006
Guest said:
> The name was Timna
>
> Mooly Eden: It was a huge risk for the IDC. Banias came just after Timna had been canceled [Timna was the codename for an integrated processor designed for the entry-level market and originally scheduled for the second half of 2000 - ed]. We had worked on Timna for two years and needed to make sure that we didn't get another project canceled. In such a case, the company may lose confidence in the development center. And worse than this, the people may lose confidence in themselves. But the biggest risk in this industry is not to take risks, because then you are doomed. If you want to play it safe, you are out of the game.
>
> The Timna microprocessor family was announced by Intel in 1999. Timna was planned as a low-cost microprocessor with an integrated graphics unit and a memory controller designed to work with Rambus memory. The company anticipated that by the time the processor was released to market, that is, in the second half of 2000, the price of Rambus memory would fall to the level where it could be used in value computer systems. As the price of Rambus memory failed to drop, Intel decided to use a bridge chip (Memory Translation Hub, or MTH), already used with the Intel 820 chipset, to link the Rambus memory controller with less expensive SDRAM memory. When a serious bug was discovered in the MTH design in the first half of 2000, Intel recalled the MTH and delayed the Timna release until the first quarter of 2001. After that, the company started a redesign of the MTH component from scratch, but due to continuing problems with the newly redesigned MTH part, as well as a lack of interest from many vendors, the Timna family was finally cancelled on September 29, 2000.
>
> I think it can be a good option. Just look at all the integration going on on motherboards; you need less and less expansion. I can see the trend moving onto the CPU if the lithography allows it!

Yes, exactly: Timna! Thanks a lot! If Torrenza can happen, then surely a Timna with a new name, enhanced technology, the latest manufacturing process, and the best architecture (Gesher, possibly) can happen too. Time will tell! Once again, thanks for the dig. I remembered it because this was already discussed in the "Torrenza" thread, so most of this technology has been covered before; nothing new.
 

Cabletwitch

Distinguished
Feb 3, 2006
Integrated CPU/GPU... it had to be tried again, didn't it?

I can't see it being all that amazing, despite what people say. 'Oh goody,' I hear, 'PROPER integrated graphics, about time!'

Eh, no. Not really. It's not going to set the world alight, or even create that much smoke. It's an interesting concept, but do we REALLY need it? After all, current integrated systems work perfectly fine for the users who require one. This is my take on the whole shebang...

(Disclaimer: These points are my own, and aren't based on anything other than my own thoughts and theories, which may be proven wrong at a later stage. If you feel the need to post derisory crap telling me I'm wrong, that's fine. Just remember, you're not actually doing anything constructive when you do so.)

1) Connectivity.

Pin density on CPUs is already getting rather crowded, and likewise on the GPU side of things. Now, while pin counts won't exactly double, HOW are you going to provide enough connection points without making the package much bigger? From 30 seconds of thought, it'll be a package similar to what current GPUs have. Which means these units will be soldered to the motherboard. Not good news for upgrades.

2) Die Size.

OK, not as much of an issue as a lot of people might think. I guess the popular first image is of a CPU with a GPU on the same silicon. Yes, in a way it is. But it'll be tightly integrated with the CPU, in a way that will possibly see the two devices sharing circuitry. This sharing model will reduce overall heat production and power consumption. The downside will be a larger die needed for the extra circuitry, and accordingly fewer dies per wafer. This could potentially increase costs. Still, I have to concede it would probably still be cheaper than a CPU plus a supporting external GPU/chipset.

3) Memory Control

Again, not so much of an issue, more a concern regarding control of the memory available to both parts. Will the IMC be responsible for allocating memory for both the CPU and the GPU? Can it cope? Certainly it will need a revision to handle roughly twice the workload, but this could very well be a good thing. After all, if the IMC becomes more efficient, it will benefit both devices. The downside is we're still using system RAM. But again, I guess for the people who will be using this setup, graphics performance is hardly a deciding factor.


4) Heat

I mentioned this back in #1: heat could be an issue for these combined units. Probably not as hot as a Prescott (or, say, the center of the sun, or a blast furnace), but it's not going to be trivial either. I say this because ultimately I can't see this CPU/GPU approach being used for the higher-end units, as that would be somewhat pointless. I'll wait and see what info we get before I decide any further.

I'm going to leave it there for a while, as I can't think of anything more right now. I'm interested to see what comes from this venture, as AMD is in no position to release a product that cannot perform well and gain significant market share. If anyone has any valid and useful insights or information, please feel free to post.

If you intend to flame, I'll set ActionMan and StrageStranger on you. And if you continue, I might have to send BM, 9NM and Sharikou round to pay you a visit... you have been warned :D
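Point 2 above (die size and dies per wafer) can be put in rough numbers. The formula below is a common back-of-the-envelope gross-die estimate with an edge-loss correction; the die areas are invented purely for illustration:

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300) -> int:
    """Back-of-the-envelope gross dies per wafer with an edge-loss term."""
    radius = wafer_diameter_mm / 2
    die_side = math.sqrt(die_area_mm2)  # treat the die as square
    return int(math.pi * radius ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / die_side)

cpu_dies = dies_per_wafer(180)    # hypothetical CPU-only die
fused_dies = dies_per_wafer(250)  # hypothetical CPU+GPU die
print(cpu_dies, fused_dies)       # 322 vs 223 candidates on a 300 mm wafer
```

Fewer candidate dies per wafer is exactly the cost pressure described above; whether one bigger die still beats two smaller chips plus extra packaging is then a judgment call.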
 

flasher702

Distinguished
Jul 7, 2006
A saving grace there might be strong floating point performance by tapping the GPU.
They did say it's not just for GFX, but also to
...leverage the floating point capabilities of graphics engines.
Seems like they intend to use it for more than just graphics. So for a gaming rig you would probably still get a dedicated graphics card (so it could sit close to large RAM caches on a dedicated bus), and the "GPU/CPU" would then be used as a "GPGPU/CPU" to bump up the floating-point power of the CPU for physics or something while the discrete card did the imaging. So it's not just to replace integrated graphics but also, basically, to bring back the math coprocessor we did away with so long ago, when CPUs were getting faster at such a rate that it was more cost-effective to get a new CPU than to add a math coprocessor if you needed more computational power.

The CPU companies are desperately trying to give us a reason to want to upgrade every year again, since they can't figure out how to make a CPU much faster without creating small furnaces. Unfortunately, without code to exploit it, the extra floating-point power of the extra logic in a GPU/CPU will go to waste, just like the extra cores in the dual-core processors they are trying to convince us are twice as fast all the time. Fortunately, the extra "GPU" logic won't make it just 2x as fast; it'll be more like 5x (when and if you actually use it, of course, but that's a much higher incentive to make the extra coding effort), and they can use the same logic and silicon for low-end desktops and high-end servers, which will streamline manufacturing. How they are going to convince the high-end server market to pay 10x as much for the exact same logic being put into budget laptops is my question.

A lot of things are up in the air about this. And with AMD borrowing 2.5 billion USD to invest in a market with razor-thin margins that is rapidly dropping in retail value, I'm worried this might be a shake-up that is bad for consumers. Intel, nVidia, and AMD/ATI are all treading on thin ice (it's a concept that has been repeatedly scrapped in the past) and racing each other to provide a solution to a problem that may not even exist. Intel may be the only one with enough financial clout to ride out the storm. Why are three completely different companies all working on the exact same "new" (very old) concept at the same time and trying to get consumers excited about it ~2 years before it even comes out? This makes no sense to me.
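The "goes to waste without code" point is Amdahl's law in disguise: the overall gain is capped by the fraction of work that can actually use the extra floating-point logic. A quick sketch, with fractions made up for illustration:

```python
def effective_speedup(fp_fraction: float, accel_speedup: float) -> float:
    """Amdahl's law: overall speedup when only fp_fraction of the work
    can run on the faster (GPU-style) floating-point logic."""
    return 1.0 / ((1.0 - fp_fraction) + fp_fraction / accel_speedup)

# A 5x-faster FP unit helps little if only 20% of the code can use it...
print(round(effective_speedup(0.2, 5.0), 2))  # 1.19
# ...but pays off for heavily FP-bound code:
print(round(effective_speedup(0.9, 5.0), 2))  # 3.57
```

Which is the incentive argument in one line: the bigger the per-use gain, the more worthwhile the extra coding effort becomes.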
 

flasher702

Distinguished
Jul 7, 2006
I would think that AMD will first get a GPU in a Torrenza socket as an accelerator and then move to on die at 45nm.
I've seen this speculation mentioned elsewhere, I think there was even a quote from AMD suggesting this was their plan.

We may see HTX(slot) first.
What is this "HTX slot" you speak of? Hyper Transport Xisasupercoollettertouseinanacronym Slot or something like that? I missed this one. Could you link some articles about it?

I also wouldn't be surprised if they use Torrenza ond the Open Socket to create their CPU/GPU. Licensing Intel's bus may not provide the horsepower and creating their own FSB (with enough BW to compete with HT2) would take too long.
Yay! Open Socket! This deserves way more attention than it's getting. I posted a thread about it not too long ago and it got zero replies :( Open Socket is way cooler, with far more potential to give consumers good products, than any of the other crap Intel, AMD/ATI, or nVidia have been talking about. Pimp the Open Socket!
 

jamesgoddard

Distinguished
Nov 12, 2005
There is a huge market for the CPU/GPU thing in the $100/$200 PC segment for emerging markets... two-thirds of the world's population are classed as emerging markets; that's huge potential. And who knows, these markets will probably not stay "emerging" forever.
 

BaronMatrix

Splendid
Moderator
Dec 14, 2005
CompTIA_Rep said:
> I can't imagine putting an X1950 XT inside a processor die...
>
> Can we say heat dissipation? With so little surface area, you're looking to remove quite a lot of heat very quickly. This would make water cooling mandatory... or possibly shuttle tiles??
Contextually it should be clear that I meant in a socket, not on the die. The on-die version will more than likely be just the pipelines, with the CPU controlling them.

If you look at the size of an IGP, you will see that at 65nm those would be 50 mm² or so.
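That ~50 mm² figure is roughly what ideal scaling predicts, since die area shrinks with the square of the feature size. A sketch, where the 90 nm starting area is a guess and real shrinks rarely hit the full ideal:

```python
def scaled_area(area_mm2: float, node_from_nm: float, node_to_nm: float) -> float:
    """Ideal optical shrink: area scales with the square of the feature size."""
    return area_mm2 * (node_to_nm / node_from_nm) ** 2

# Hypothetical IGP block of ~90 mm^2 at 90 nm:
print(round(scaled_area(90.0, 90.0, 65.0), 1))  # ~46.9 mm^2 at 65 nm
```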
 
Guest
Wiki is all knowledgeable

HTX and Co-processor interconnect
The issue of bandwidth between CPUs and co-processors has usually been the major stumbling block to their practical implementation. After years without an officially recognized one, a connector designed for such expansion using a HyperTransport interface was recently introduced and is known as HyperTransport eXpansion (HTX). Using the same mechanical connector as a 16-lane PCI-Express slot, HTX allows plug-in cards to be developed which support direct access to a CPU and DMA access to the system RAM. Recently, co-processors such as FPGAs have appeared which can access the HyperTransport bus and become first-class citizens on the motherboard. Current generation FPGAs from both of the main manufacturers (Altera and Xilinx) can directly support the HyperTransport interface and have IP Cores available.

However, the existing HTX specification allows Hypertransport devices attached through HTX connectors to communicate at only a quarter of Hypertransport's full throughput, as it uses PCI-E's 16-bit connector and is downclocked to a mere 1.4GHz in spite of an earlier Samtec connector [2] supporting 32-bit, 2.8GHz operation.
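The quarter-throughput figure follows directly from the widths and clocks quoted: HyperTransport transfers data on both clock edges, so raw link bandwidth is width x clock x 2. A quick check:

```python
def link_gbps(width_bits: int, clock_ghz: float) -> float:
    """Raw HyperTransport bandwidth per direction, in Gbit/s.
    HT is double data rate: two transfers per clock cycle."""
    return width_bits * clock_ghz * 2

htx_slot = link_gbps(16, 1.4)   # HTX connector: 16-bit at 1.4 GHz
full_link = link_gbps(32, 2.8)  # the 32-bit, 2.8 GHz Samtec connector
print(htx_slot, full_link, full_link / htx_slot)  # 44.8 179.2 4.0
```

Halving the width and halving the clock each cost a factor of two, which is where "a quarter of HyperTransport's full throughput" comes from.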
 

steve4king

Distinguished
Jul 4, 2006
I don't expect heat would be that big of a deal; the die would be larger, and perhaps the entire package would be a little larger.

This would remove the northbridge strain and shorten the video bus down to nothing. It could create a significantly faster integrated video solution while decreasing overall system heat and shrinking the motherboard significantly.

I'm a little worried about what AMD will do now that they are combined with ATI, however, depending on whether AMD decides to start isolating itself.

AMD could ensure that its chipsets only support its processors and its video cards, or provide decent updates and optimizations for its own parts only.

It could be very good, but it could be very bad for anyone using Intel processors or Nvidia video...
 
We all need to remember that even AMD/ATI admits that the graphics/CPU chip would be a low-end setup. It by no means would be for us people here at the THG Forumz who talk about overclocking everything. For example, it would NOT be:

FX-62 + Radeon 1950


It would more likely be:

Low-end graphics + Sempron
 

Scooby2

Distinguished
Jun 3, 2006
I have only read the first post, so apologies if I'm going over old ground.

I can't really see that it's much to get excited about. My guess is it will be aimed at budget systems with low graphics performance requirements, or at the ultra-high end for the few who like to burn money. Imagine trying to keep up with Nvidia. A new high-cost CPU every 3 months or so, anybody?
 

JBS181818

Distinguished
Oct 18, 2006
Eventually, though, they're talking about doing it for everything, just as they have combined certain pieces of hardware in the past.
 

turpit

Splendid
Feb 12, 2006
> We all need to remember that even AMD/ATI admits that the graphics/CPU chip would be a low-end setup. It by no means would be for us people here at the THG Forumz who talk about overclocking everything. For example, it would NOT be:
>
> FX-62 + Radeon 1950
>
> It would more likely be:
>
> Low-end graphics + Sempron
I would have to agree with you. IMO it will do little to measure up to another manufacturer's "fusion" product, made by Diamond way back when. Am I the only one who remembers the Diamond Fusion card (one of the first combined 2D/3D cards)?
 

steve4king

Distinguished
Jul 4, 2006
If you are referring to the Voodoo2 cards, then no, this is nothing like that. This is taking the video processor, which is normally integrated into the motherboard at the northbridge, and moving all of its processing into the same package as the main processor.

No, it will not likely offer high performance, but it has the potential to be significantly better than current onboard graphics while taking up less space on the mobo and decreasing power.
 
