AMD Responds to Intel's Larrabee Delay

AMD bought out ATI about 4 years ago... the Fusion concept is about 3 years old.

Sooner or later, AMD should be able to make an SoC (System on a Chip) - just the CPU, GPU and chipset - that is good enough for a netbook running Windows 7.

ik242:

You make a good point. People should google the Commodore 64, the 128, the Apple II or the Amiga to see what computers used to look like.

The system board in my OLD C=128 is bigger than my quad-core system's. And yet it's just a computer... but the C128 and the Amigas did have graphics & audio built in, unlike PCs or Apple IIs.

Some examples (wish this blog allowed image insertions):

This site has photos - click on the mobo image and it goes over the details:
http://www.old-computers.com/MUSEUM/photos.asp?t=1&c=96&st=1

Commodore 8-bit floppy drive (the power supply - almost the size of today's ATX units - has been removed; there is a CPU inside to control the drive): http://www.old-computers.com/MUSEUM/hardware/Commodore_128_1571FDD_s1.jpg

Amiga 1000 motherboard (about 2/3rds of it - sorry, the image IS HUGE): http://www.amiga-hardware.com/download_photos/a1000mbreva.jpg
The 3 gold-looking chips on the left are the audio, video, and memory & chipset controllers. The BIG chip on the right is the 7MHz CPU. The 2 smaller chips are various I/O controllers for the parallel/serial/joystick ports.
In 1986, this $1200 computer got you 256mb of RAM & an 880k floppy drive. It included 4096 colors & stereo sound - the best gaming PC, period. No HD controller or memory expansion. 2mb of RAM in 1986 = $1000, easy. 2 megabytes, not GIGABYTES!
 
Fixed link to A1000: http://www.amiga-hardware.com/download_photos/a1000mbreva.jpg

Other than SLI/CF... why do we need regular-size ATX boards? Any PC I work on uses onboard video (AMD). Only gamers add a card; otherwise it's onboard audio & LAN.

I think that in 2010, AMD is going to OWN the GPU market. Nvidia is playing games with us, and I am sick of it. Re-branding products is PLAIN stupid! It got old with the 8800 > 9800 > GT240 & the 9400 > G210 or whatever... now it's G210 > G310 and it's 100% the same part?!

Get a clue Nvidia! The G3xx series should ALL be DX11 parts. Makes things easy. When someone buys an ATI 4000 video card vs a 5000, they know what it can do.
 
I think it was too bold to expect that this design could mix it with the real GPUs in the conventional rendering arena, i.e., what GPUs are designed specifically to do! HOWEVER, I never expected it to be able to from the start, but was still, and still am, excited by the idea.

It was all about easy access to the kind of computing power previously seen only from GPUs, via x86 instructions. I saw it as basically a really powerful general co-processor, not a GPU - more of a really powerful, ultra-threaded CPU than a GPGPU. And I think there is still reason to release it as an alternative to GPGPU because, remember, GPGPU is all about doing anything OTHER than GRAPHICS, so not being great at graphics is irrelevant!
 
Intel shows off those X-core design CPUs because they can; there's no other way to show your might. Nonetheless, it will still be irrelevant until we are able to fully utilize even 4 cores. Kudos to AMD/ATI for making the CPU/GPU realm quite competitive.
 
Intel's idea (of making a general purpose chip into what would essentially be an overgrown shader unit) is not without merits.

However, the one they chose to use in Larrabee is probably the worst pick yet: the P54C is a 32-bit chip with 'good' 16-bit capabilities; it carries all the 16-bit and 20-bit-addressing 8086 and i286 compatibility modes, but it, well, sucks at this kind of job: its FPU is not up to speed any more (single-precision floating point, and several cycles per operation at that).

You can add that the 32-bit x86 instruction set is decried throughout the whole industry due to its ridiculously small number of general-purpose registers (complex calculations can't keep their operands in registers, so contents constantly have to be swapped out), something that AMD mitigated when they created the AMD64 instruction set (they doubled the number of general-purpose registers).
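For reference, the register counts behind that point (taken from the x86 architecture manuals, not from this thread):

    IA-32 general-purpose registers: EAX, EBX, ECX, EDX, ESI, EDI, EBP, ESP = 8 (and ESP/EBP are mostly spoken for)
    AMD64 general-purpose registers: the above plus R8-R15 = 16, all widened to 64 bits

AMD64 also doubles the SSE register file from 8 to 16 XMM registers, which matters at least as much for number crunching.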

Although the article does read like a huge advertisement for the AMD platform, one has to concede that AMD knows how to design a CPU (if you look at the i5/i7, all of its innovations are things AMD brought to the x86 world with the K8 back in 2003, albeit Intel did them better with the i5/i7) and that ATI knows how to design a GPU.

One design that could have been used in Larrabee would have been ARM: it's designed to be small, it's designed to be fast, it doesn't carry much compatibility baggage, it scales well, and programmers are used to it. But the P54C? What were they smoking at Intel's? Not Invented Here?
 
The decision to pull the plug on Larrabee makes for a sad, sad day.
It may not have been the most powerful part (1 teraflop is good enough for me), but it was special.
A CPU can brute-force its way through any equation, but at the expense of throughput.
A GPU can crunch hundreds of times faster than a CPU, but it's extremely limited in what it can do.
Larrabee had the functionality of an x86 CPU, yet nearly the power of a GPU.
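For anyone wondering where the "1 teraflop" ballpark comes from, a rough back-of-the-envelope works out, assuming the widely reported (but never shipped) configuration of 32 cores at about 1 GHz, each with a 16-lane single-precision vector unit doing fused multiply-adds:

    16 lanes x 2 flops (multiply + add) = 32 flops per cycle per core
    32 flops/cycle x 32 cores x 1 GHz ≈ 1,024 GFLOPS ≈ 1 TFLOPS single precision

which is in the same league as the single-precision peak numbers the GPU vendors were advertising at the time.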

Sad sad day.
 
Was anyone else completely surprised that Intel wouldn't be able to make a top-end graphics card? It's totally baffling considering how awesome their current graphics are.
 
I'd go with Intel and Nvidia combo over an AMD and ATI combo any day. If Intel ever gets their ducks in a row, I'd love to try and see what an Intel GPU with an Intel CPU can do.
 
[citation][nom]ik242[/nom]I don't see it that way - in fact I dare to call the "keep them separate" claims silly. Integration is what has brought low prices and high availability to any product (and especially electronics). Memory and a memory controller integrated in the CPU don't cost much, and since they are part of the CPU, they get replaced together with the CPU. Just because there is some cache on the CPU, or some flash memory in some new digital camera (just to make a point), it does not mean that you cannot add more RAM (on the computer) or larger storage (an SD card, for example, in the case of the camera). For those who don't remember, there was a time when cache was not integrated in the CPU; it was damn expensive and often cost more than the CPU. There was a time when a CD drive needed a dedicated controller (before they could attach to IDE, for example) and it would occupy a mobo slot. Needless to say, it was clutter, with slow performance and high cost. There was also a time when a chipset was just that -> a collection of a few dozen chips (a set) performing only a few very basic functions (it didn't include a modem, serial or parallel port, network card, sound card, HDD or FDD controller etc. - think about what comes on today's mobos or in the north and south bridge). My first network card, sound card, modem etc. each cost about the same as the CPU of the day. Nowadays those things are part of the chipset/motherboard, just like the video output, which may not be faster than a discrete card but is good enough for 95% of applications and - it's "free". And just because there is onboard video, nobody says that you can't add another graphics card (or two, or three...). Another thing is that with integration, many things can be resolved more efficiently, including size, power consumption, footprint, bandwidth etc. So AMD and Intel, please make my next PC small; the size of a dime sounds about right, as I would like to carry it around without straining my arm. Heck, integrate it into glasses that can double as a high-definition monitor.[/citation]

I completely agree; integration has advanced these systems so much over the years. Does anyone remember when computers had relays in them that were measured in inches? I remember working with Boeing on a project to update some old testing equipment in the military. The power module was a four-tiered 3' x 5' block connected to a 5.6 kHz three-phase power supply that was over six feet tall. That was old 1950s technology. The replacement power module was all-in-one, 2' x 3', with the three-phase supply built in - and that was just the power feed. The rest of the system went from being the size of a large freezer to the size of a small dorm fridge. Not to mention reducing end-to-end run times from 8 hours to 45 minutes.
 
[citation][nom]Belardo[/nom]Fixed link to A1000: http://www.amiga-hardware.com/down [...] mbreva.jpgOther than SLI/CF... why do we need regular size ATX boards? Any PC I work on, uses onboard video (AMD). Only gamers add on a card, otherwise its onboard audio & LAN.I think for 2010, AMD is going to OWN the GPU market. Nvidia is playing games with us which I am sick of. Re-branding products is PLAIN stupid! It got old with the 8800>9800>GT240 & the 9400>G210 or whatever... now its g 210 > g 310 and its 100% the same part?!Get a clue Nvidia! The G3xx series should ALL be DX11 parts. Makes things easy. When someone buys an ATI 4000 video card vs a 5000, they know what it can do.[/citation]
It's not just gamers that throw in cards, tool. Think of all the professional graphics workstations. So yes, there will still be a need for add-on graphics cards and ATX-sized boards. I really doubt that your integrated mess will be able to drive AutoCAD. Nice thought, though. Oh, and it had 256k of memory, not 256mb.
 
I agree with mitch074. If you want to put as many number-crunching cores onto a die as possible, you don't choose a core design that is inefficient in terms of size and power. You go RISC, not CISC, and you make sure that it incorporates double-precision FP calculations at the best possible speed.

Very few people touch machine code anymore. Driver writers who need to optimize for speed as much as possible code in C, C++ or a similar low-level "close-to-the-metal" language, perhaps with some assembler code thrown in for extremely critical sections. Everyone else uses higher-level languages.

A better core design for this GPGPU would probably be something like 3 or 4 integer/logic/branch units (ALUs) sharing an FP pipeline, combined in such a way that each ALU "thinks" that the FP pipeline is its own. FP cores are comparatively "expensive", so this approach ensures that you maximize the use of the available real estate, while providing near-single-clock FP calculation capabilities. The rest of the core would need to handle cache, communications and dependencies.

Come to think of it, this sounds suspiciously similar to ATI's current designs.
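To make the sharing idea concrete, here is a tiny toy model in C - purely my own sketch, not anything AMD or Intel have published - in which four integer "ALUs" compete for a single FP pipeline through a round-robin arbiter. The 30% request rate is an arbitrary assumption, and ungranted requests are simply dropped rather than queued, but it shows why one expensive FP unit shared by several cheap integer units stays busy.

    /* Toy model: 4 ALUs share one FP pipeline via round-robin arbitration. */
    #include <stdio.h>
    #include <stdlib.h>

    #define ALUS   4
    #define CYCLES 100000

    int main(void)
    {
        int fp_busy = 0;        /* cycles in which the FP pipeline did useful work */
        int served[ALUS] = {0}; /* FP ops completed per ALU */
        int next = 0;           /* round-robin pointer */

        srand(42);
        for (int cycle = 0; cycle < CYCLES; cycle++) {
            int want[ALUS];
            for (int i = 0; i < ALUS; i++)
                want[i] = (rand() % 100) < 30;  /* each ALU needs an FP op ~30% of cycles */

            /* Grant the first requester found, starting from the round-robin pointer. */
            for (int k = 0; k < ALUS; k++) {
                int i = (next + k) % ALUS;
                if (want[i]) {
                    served[i]++;
                    fp_busy++;
                    next = (i + 1) % ALUS;
                    break;
                }
            }
        }

        printf("FP pipeline utilization: %.1f%%\n", 100.0 * fp_busy / CYCLES);
        for (int i = 0; i < ALUS; i++)
            printf("ALU %d served %d FP ops\n", i, served[i]);
        return 0;
    }

With those numbers the shared pipeline stays busy on roughly three cycles out of four, whereas giving each ALU its own FP unit would leave each one idle about 70% of the time.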
 
[citation][nom]thearm[/nom]I'd go with Intel and Nvidia combo over an AMD and ATI combo any day. If Intel ever gets their ducks in a row, I'd love to try and see what an Intel GPU with an Intel CPU can do.[/citation]
Dude, this is already out!!! I have a sweet system featuring the amazing Prescott P4 combined with a badass i740. This is probably the best technology ever, with Intel discrete graphics and the awesome NetBurst CPU architecture that will be capable of 10+ GHz soon. AMD is all like x86-64, on-die memory controller, blah blah blah - what a bunch of idiots! I am going to get the Itaniums inside soon too, and then I will be the coolest Inteler ever!
/sarcasm...
 
Well, it could save money if you did not need to get another graphics card. Thinking cell phones, this would work great. Who says you still can't have 4 or more GPUs alongside this chip? I don't think this chip is really meant for gaming yet, but maybe servers. If everything is on one chip it is cheap, and the smaller the die, the cheaper it is to make. Maybe it could work slower and do more.
 
[citation][nom]nachowarrior[/nom]intel empties change jar to try and buy n\/idia....[/citation]
They could do it today if they chose to (and if no regulatory body stepped in). If Intel has shown anything, it's that if they can't make it in house, they will buy someone else's house and call it a day. Nvidia would be cheap, but then they would be modeled like their counterpart in the game.
Truthfully, I think Intel pulled the plug (at least for this planned release) because they knew it at least had to be close to existing tech in performance, and I would guess they weren't close in spite of the banter. They have the capital and resources to do it, so I wouldn't rule it out. They just found out that this isn't going to be an easy party to crash.
 
[citation][nom]Jenoin[/nom]Dude this is already out!!! I have a sweet system featuring the amazing Prescott-P4 combined with a Badass i740 this is probably the best technology ever with intel discrete graphics and the awesome CPU Netburst architecture that will be capable of 10+ Ghz soon. AMD is like x86-64, on die memory controller blah blah blah, what a bunch of idiots! I am going to get the itaniums in side soon too then I will be the coolest inteler ever!/sarcasm...[/citation]

I'm talking about a REAL GPU. Not an onboard GPU. And your sarcasm needs work.
 
[citation][nom]thearm[/nom]I'm talking about a REAL GPU. Not an onboard GPU. And you're sarcasm needs work.[/citation]
The i740 is a "REAL GPU"*. Not an onboard GPU. Unless you mean on a circuit board. It is on a circuit board. Did you want one that isn't on a circuit board?
I740 - here, this should help you use the interwebs to get less dumb!

*according to Intel. Similar to how Larrabee would have been a "REAL GPU", unless you happen to like functional drivers.
 
Graphics is not the only thing Larrabee is good for. Imagine a 16-core Larrabee with the other 16 cores replaced with an FPGA. It would be a universal embedded controller that could be made into a myriad of different controller types, all in firmware. So the same silicon could be a multi-core I/O processor for RAID, an MLC flash controller for SSDs, a wireless network controller, etc.
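To illustrate the "all in firmware" part, here is a minimal C sketch - everything in it (the personality names, the handler functions, the boot-time selection) is hypothetical, invented purely for illustration, and not any real product interface. The same binary just dispatches to a different service routine depending on which controller the chip has been told to be:

    /* Hypothetical sketch: one piece of silicon, several controller personalities. */
    #include <stdio.h>

    typedef void (*service_fn)(void);

    static void raid_service(void)  { printf("servicing RAID I/O queues\n"); }
    static void flash_service(void) { printf("running MLC flash wear-leveling\n"); }
    static void wifi_service(void)  { printf("handling wireless packets\n"); }

    enum personality { RAID_CTRL, SSD_CTRL, WIFI_CTRL };

    int main(void)
    {
        /* In real firmware this would come from strap pins or a config EEPROM. */
        enum personality boot_choice = SSD_CTRL;

        static const service_fn table[] = {
            [RAID_CTRL] = raid_service,
            [SSD_CTRL]  = flash_service,
            [WIFI_CTRL] = wifi_service,
        };

        /* A real service loop would run forever; three iterations keep the demo finite. */
        for (int i = 0; i < 3; i++)
            table[boot_choice]();

        return 0;
    }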
 
[citation][nom]Jenoin[/nom]The i740 is a "REAL GPU"*. Not an onboard GPU. Unless you mean on a circuit board. It is on a circuit board. Did you want one that isn't on a circuit board? I740Here this should help you use the interwebs to get less dumb!*according to Intel. Similar to how Larrabee would have been a "REAL GPU", unless you happen to like functional drivers.[/citation]

I don't really think of the i740 as a real GPU because of its age. I'm saying that when Intel comes out with a new GPU on a PCIe board, I'd be interested in it and would like to see it in action. I also don't consider an onboard GPU a real GPU, since they are not as good as a dedicated video card. When someone comes out with onboard video that can compete with PCIe video cards, I will pay more attention to onboard graphics.

And your sarcasm still needs work. The internet is a 'series of tubes.'
 
[citation][nom]zingam[/nom]Intel have displayed 80 cores CPU some years ago - so what? They are actually getting worse[/citation]

Actually, it's a different CPU. The 80-core chip was Terascale. The 48-core chip is for cloud computing, while Terascale was meant for anything, with interchangeable cores and a modular design.

[citation]"It really comes down to design philosophy," said Erskine. "GPUs are hard to design and you can’t design one with a CPU-centric approach that utilizes existing x86 cores."[/citation]

The end of this just goes to show how forward-thinking the current GPU companies are. Intel has a new idea that's not like their own, and of course they dismiss it and fall back on their age-old SP tech.

I wonder whether, if Intel had come out with SP-based GPUs back when ATI and Nvidia were still on pixel-pipeline tech, they would have said the same thing.

SPs changed the way games are made. They changed everything. Compare a game that utilizes SPs to one that used a pixel pipeline; the difference is amazing.

This, though, is what holds back change and innovation. I am sure that if LRB were coming out and were a major threat, they wouldn't be saying the same thing; they would be working on their own version to compete with it.
 