The chance to break into Dell's supplier chain has passed.

Archived from groups: comp.sys.ibm.pc.hardware.chips

Fellow AMD admirers ;-),

Googling to see what anybody had to say about intel and cis turned up
this bit on AMD

http://money.cnn.com/2005/02/28/technology/techinvestor/hellweg/

"AMD caught Intel pretty good with Opteron," says David Wu, an analyst
with Global Crown Partners. "If AMD can't beat Intel with Opteron, I
don't know if they ever will."

I'm going to get beaten up for it, but I don't think Opteron changed
the lowdown on AMD: very smart company, tries hard, never comes up
with anything really new.

Make Intel's life miserable with 64-bit x86? Score. Big win for end
users.

Break Intel's effective monopoly? Not that way. Okay, maybe not any
way, certainly not any way I can think of.

RM
 

Robert Myers wrote:
> Fellow AMD admirers ;-),
>
> Googling to see what anybody had to say about intel and cis turned up
> this bit on AMD
>
> http://money.cnn.com/2005/02/28/technology/techinvestor/hellweg/
>
> "AMD caught Intel pretty good with Opteron," says David Wu, an
analyst
> with Global Crown Partners. "If AMD can't beat Intel with Opteron, I
> don't know if they ever will."

You gotta really differentiate where your quote of the article ends and
where your own opinion starts. I was thinking the below quote was from
the article.

> I'm going to get beaten up for it, but I don't think Opteron changed
> the lowdown on AMD: very smart company, tries hard, never comes up
> with anything really new.

On purpose, it wants to create practical stuff that the market will
accept. Unlike hopeless science projects like Itanium.

> Make Intel's life miserable with 64-bit x86? Score. Big win for end
> users.

Well, it has managed to marginalize Itanium effectively. No way Itanium
will ever make it out of its niches now.


> Break Intel's effective monopoly? Not that way. Okay, maybe not any
> way, certainly not any way I can think of.

Well, it was never going to break into Dell no matter what. However,
AMD does need to spend some money on marketing itself. There's simply
no other way around it. Intel will always be able to sell more than AMD
with inferior products, simply on the power of marketing.

Yousuf Khan
 

Robert Myers wrote:
> >You gotta really differentiate where your quote of the article ends and
> >where your own opinion starts. I was thinking the below quote was from
> >the article.
> >
> As you know, I usually use html-like notation <quote>,</quote> to set
> off extended quotes. In this particular case, it was a short quote,
> and the quote itself was a quote. I won't do it again.

This is where Firefox and Thunderbird really make this stuff easier for
you. There are various extensions available for them that automate
this process.

> >> I'm going to get beaten up for it, but I don't think Opteron changed
> >> the lowdown on AMD: very smart company, tries hard, never comes up
> >> with anything really new.
> >
> >On purpose, it wants to create practical stuff that the market will
> >accept. Unlike hopeless science projects like Itanium.
> >
> Oh, hmmm. Was Itanium a science project? Intel certainly wanted to
> make a big score, and I applaud them for thinking they were doing the
> right science, no matter how inaccurate their prognostication turned
> out to be. The issue they thought they could see, the compiler
> problem, turned out to be harder than they thought. The biggest
> mistake I fault them on is that they seem to have lost control of the
> complexity of the architecture: way too many features, all of which
> had to be supported in hardware and, even more important, in exception
> and recovery code.
>
> As to practical stuff vs. science projects, that's why I admire intel.
> I admire their stubbornness. I'm an IBM admirer, too. To the extent
> that IBM has gotten more "practical," they've lost my respect, even if
> I understand that they've had very little choice.
>
> The industry, Yousuf, is going to choke on its own vomit. More, more,
> more x86? Same old bugs. Same old windoze. Same old creaky
> infrastructure. It takes an Intel or an IBM to break molds. AMD
> never.

X86's problems weren't really software, but hardware. Itanium did
nothing to make hardware any better. Itanium was continuing on with the
same old shared-bus architecture that Intel chips have always had, despite
the fact that they were starting with a brand new software
architecture.

Yousuf Khan
 

On 3 Mar 2005 13:15:55 -0800, "YKhan" <yjkhan@gmail.com> wrote:

>Robert Myers wrote:
>> Fellow AMD admirers ;-),
>>
>> Googling to see what anybody had to say about intel and cis turned up
>> this bit on AMD
>>
>> http://money.cnn.com/2005/02/28/technology/techinvestor/hellweg/
>>
>> "AMD caught Intel pretty good with Opteron," says David Wu, an
>analyst
>> with Global Crown Partners. "If AMD can't beat Intel with Opteron, I
>> don't know if they ever will."
>
>You gotta really differentiate where your quote of the article ends and
>where your own opinion starts. I was thinking the below quote was from
>the article.
>
As you know, I usually use html-like notation <quote>,</quote> to set
off extended quotes. In this particular case, it was a short quote,
and the quote itself was a quote. I won't do it again.

>> I'm going to get beaten up for it, but I don't think Opteron changed
>> the lowdown on AMD: very smart company, tries hard, never comes up
>> with anything really new.
>
>On purpose, it wants to create practical stuff that the market will
>accept. Unlike hopeless science projects like Itanium.
>
Oh, hmmm. Was Itanium a science project? Intel certainly wanted to
make a big score, and I applaud them for thinking they were doing the
right science, no matter how inaccurate their prognostication turned
out to be. The issue they thought they could see, the compiler
problem, turned out to be harder than they thought. The biggest
mistake I fault them on is that they seem to have lost control of the
complexity of the architecture: way too many features, all of which
had to be supported in hardware and, even more important, in exception
and recovery code.

As to practical stuff vs. science projects, that's why I admire intel.
I admire their stubbornness. I'm an IBM admirer, too. To the extent
that IBM has gotten more "practical," they've lost my respect, even if
I understand that they've had very little choice.

The industry, Yousuf, is going to choke on its own vomit. More, more,
more x86? Same old bugs. Same old windoze. Same old creaky
infrastructure. It takes an Intel or an IBM to break molds. AMD
never.

>> Make Intel's life miserable with 64-bit x86? Score. Big win for end
>> users.
>
>Well, it has managed to marginalize Itanium effectively. No way Itanium
>will ever make it out of its niches now.
>
Oh, who knows really. I have a hard time visualizing how Itanium will
survive in a niche, to be honest. If it does, it will eventually
break out of the niche. You think if the big boyz are using Power and
Itanium, your local bit-jockey won't want to be able to say he's doing
the same, if the price is right?

>
>> Break Intel's effective monopoly? Not that way. Okay, maybe not any
>> way, certainly not any way I can think of.
>
>Well, it was never going to break into Dell no matter what. However,
>AMD does need to spend some money on marketing itself. There's simply
>no other way around it. Intel will always be able to sell more than AMD
>with inferior products, simply on the power of marketing.
>
That whole deal is going to fall apart when one of the operatives
carrying messages written on flash paper back and forth between Santa
Clara and Round Rock is intercepted by AMD agents.

RM
 

YKhan wrote:
snip
>
>
> X86's problems weren't really software, but hardware. Itanium did
> nothing to make hardware any better. Itanium was continuing on with the
> same old shared-bus architecture that Intel chips have always had, despite
> the fact that they were starting with a brand new software
> architecture.
>
> Yousuf Khan
>

I would disagree. Getting rid of the shared FSB is not a problem, not that big
of a deal. Although getting the board manufacturers to stop using junk
board material and learn how to control impedance is a different story.
And high speed link boards need controlled impedance.

In my opinion the real problem with Itanium is that its objectives had
nothing to do with the customers'/users' objectives. They (customers) had
no reason to embrace Itanium.

Put on your customer hat, of whatever persuasion. Try to think of a
real reason any end user would be desirous of using Itanium, as actually
delivered at the time it was delivered.

del cecchi
 

On Fri, 04 Mar 2005 11:35:55 -0600, Del Cecchi
<cecchinospam@us.ibm.com> wrote:

<snip>

>
>In my opinion the real problem with Itanium is that its objectives had
>nothing to do with the customers/users objectives. They (customers) had
>no reason to embrace Itanium.
>
Intel pursued the VLIW-like architecture for the same reason IBM
worked on Daisy: the superchip to subsume all other chips. With
virtualization and whatever RAS needed to make it acceptable to IBM
and its mainframe-type customers, Itanium was to replace _everything_,
I think.

Opteron really has put Itanium into a no-man's-land: squeezed between
a very capable x86 and an actual mainframe manufacturer (ibm) that's
apparently not interested in abandoning its own architecture.

Had it worked, itanium would have satisfied customers' needs nicely: a
chip that would execute non-native binaries (including 360 and x86),
mainframe features, and a variety of vendors to choose from
("industry-standard architecture," in intel's code phrase).

>Put on your customer hat, of whatever persuasion. Try to think of a
>real reason any end user would be desirous of using Itanium, as actually
>delivered at the time it was delivered.
>
Oh, well, now that "as actually delivered" is a problem! x86
emulation never worked the way it was supposed to.

RM
 

On Sat, 05 Mar 2005 17:54:56 -0500, George Macdonald
<fammacd=!SPAM^nothanks@tellurian.com> wrote:

>On Sat, 05 Mar 2005 07:17:17 -0500, Robert Myers <rmyers1400@comcast.net>
>wrote:
>
>>On Sat, 05 Mar 2005 05:34:54 -0500, George Macdonald
>><fammacd=!SPAM^nothanks@tellurian.com> wrote:
>>

<snip>

>
>>Some part of me wondered whether AMD could break into Dell. All other
>>considerations aside, it's not like Dell to add unnecessary
>>complication to its life. I'm sure they looked at how many sales they
>>might lose vs. the engineering costs and decided it wasn't worth it.
>>However that calculation came out, I'm sure they used it to squeeze
>>Intel a little harder.
>
>Engineering?... Dell? Hell, they don't even have a Serverworks chipset to
>diddle with any longer - it's just Intel generic boxen top to bottom. I
>think we all knew -- it was discussed here at length -- that Dell was just
using AMD as a maneuvering device to "squeeze" Intel.
>
I've lost track of Intel server chipsets. Is _anybody_ but Intel
making Server chipsets for Intel processors?

>>The fact that Dell holds the line makes life much tougher for AMD, and
>>if Opteron with Intel scrambling in the dust didn't do it, I don't
>>know what would.
>
>IMO Dell is going to get slaughtered in the server space anyway... unless
>we have the unlikely situation that IBM decides to OEM Hurricane. Like
>every other business, Dell will go through a bad spell and its precarious
>business model could mean it will not weather the storm. Lack of depth
>eventually tells... and then we'll get a new pretender.:)
>
You think you can see that far into the future? Nothing would please
me more than to see Dell out of the dominant position. But they have
got it all worked out so smoothly.

The rules are all about to change with multicore chips. With
bandwidth requirements going through the roof, I think the day of the
motherboard is about to be at hand. How are they going to route all
that stuff, anyway? That sounds like a bad scene for Dell, except
that motherboards of requisite quality will be commodities.

AMD will make somebody else successful? Who? Just like the auto
business, the computer business is a business of vanishing margins,
and Dell is tops at that game.

How is Dell going to get slaughtered?

>>>Hey I thought we were supposed to get an
>>>official name for "Desktrino" this week. Did I miss it in all the
>>>excitement?:)
>>
>>I'm more interested in where Intel is headed with interconnect.
>>Mellanox is now selling 10Gb/s infiniband adapters for $69 in
>>quantity:
>>
>>http://www.mellanox.com/news/press/pr_030105.html
>
>That works through a PCI Express interconnect. Pathscale has a direct
>connect to Hypertransport 4x InfiniBand adapter
>http://www.pathscale.com/infinipath.html - dunno what "commodity priced"
>means... nor what the size of that market might turn out to be.

That's good to know about. That's a space in which AMD has a fighting
chance.

RM
 

Robert Myers wrote:
> Had it worked, itanium would have satisfied customers' needs nicely: a
> chip that would execute non-native binaries (including 360 and x86),
> mainframe features, and a variety of vendors to choose from
> ("industry-standard architecture," in intel's code phrase).

If Intel had done that, i.e. come up with an architecture that could
emulate many other architectures, then it would've guaranteed Itanium
100% success. A chip that could emulate both x86 and PA-RISC at full
speed, at the very least; possibly something that could translate
anything. But instead it came up with this braindead VLIW/EPIC concept
which was an answer to nobody's needs.

That would've meant a RISC-like native architecture, as RISC translates
to RISC very well, and CISC translates to RISC well too.

I'm not so confident about IBM's Cell processor either, for this same
reason: it's not really answering anybody's needs. It's not an
architecture that can take over from
anybody else's architecture, except for PowerPC itself which is its
native architecture.

The Transmeta concept held a lot of excitement for me at one time, not
because of its power savings but its code-morphing. But its internal
VLIW was really only meant for translating x86 and nothing else. They
might as well have not bothered with VLIW as the underlying
architecture.

> >Put on your customer hat, of whatever persuasion. Try to think of a
> >real reason any end user would be desirous of using Itanium, as
> >actually delivered at the time it was delivered.
> >
> Oh, well, now that "as actually delivered" is a problem! x86
> emulation never worked the way it was supposed to.

I think if somebody can come up with a code-morpher that can translate
anything with a small firmware upgrade at only a smallish 20% loss of
performance, they will finally have themselves a winner: something that
can replace anything. Buy the one processor and you get something that
can run PowerPC, Sparc, MIPS, and x86 on the same system.

Yousuf Khan
 

On Sat, 05 Mar 2005 19:37:36 -0500, Robert Myers <rmyers1400@comcast.net>
wrote:

>On Sat, 05 Mar 2005 17:54:56 -0500, George Macdonald
><fammacd=!SPAM^nothanks@tellurian.com> wrote:
>
>>On Sat, 05 Mar 2005 07:17:17 -0500, Robert Myers <rmyers1400@comcast.net>
>>wrote:
>>
>>>On Sat, 05 Mar 2005 05:34:54 -0500, George Macdonald
>>><fammacd=!SPAM^nothanks@tellurian.com> wrote:
>>>
>
><snip>
>
>>
>>>Some part of me wondered whether AMD could break into Dell. All other
>>>considerations aside, it's not like Dell to add unnecessary
>>>complication to its life. I'm sure they looked at how many sales they
>>>might lose vs. the engineering costs and decided it wasn't worth it.
>>>However that calculation came out, I'm sure they used it to squeeze
>>>Intel a little harder.
>>
>>Engineering?... Dell? Hell, they don't even have a Serverworks chipset to
>>diddle with any longer - it's just Intel generic boxen top to bottom. I
>>think we all knew -- it was discussed here at length -- that Dell was just
>>using AMD as a maneuvering device to "squeeze" Intel.
>>
>I've lost track of Intel server chipsets. Is _anybody_ but Intel
>making Server chipsets for Intel processors?

Well as mentioned, there's IBM's Hurricane - IBM *does* like to add some of
its own "value" and it does sound err, nice. SiS just got a license for
1066MHz FSB but I'm not sure whether they intend to go into server stuff.
AYK, traditionally, Intel server chipsets have been so-so.

>>>The fact that Dell holds the line makes life much tougher for AMD, and
>>>if Opteron with Intel scrambling in the dust didn't do it, I don't
>>>know what would.
>>
>>IMO Dell is going to get slaughtered in the server space anyway... unless
>>we have the unlikely situation that IBM decides to OEM Hurricane. Like
>>every other business, Dell will go through a bad spell and its precarious
>>business model could mean it will not weather the storm. Lack of depth
>>eventually tells... and then we'll get a new pretender.:)
>>
>You think you can see that far into the future? Nothing would please
>me more than to see Dell out of the dominant position. But they have
>got it all worked out so smoothly.

Just prognosticating.:) Hell I'm at least as good as your average
anal...yst and I'm quite sure the Dell model is fragile - every business
that has traded on paper-thin margins has gone down with a crash; ever hear
of Crazy Eddie? I just hope they don't take too many others down along the
way.

>The rules are all about to change with multicore chips. With
>bandwidth requirements going through the roof, I think the day of the
>motherboard is about to be at hand. How are they going to route all
>that stuff, anyway? That sounds like a bad scene for Dell, except
>that motherboards of requisite quality will be commodities.

Again, AMD is much better positioned from the POV of scalability here: add
>a Hypertransport link as necessary - it's already in the CPUs and the
chipset/mbrd companies are all clued up on implementing... easy stuff.
Current desktop mbrds have more than enough bandwidth -- you need to take a
look at the grass on the other side -- so adding a little won't be a big
deal. nForce3/4 are single chips!

>AMD will make somebody else successful? Who? Just like the auto
>business, the computer business is a business of vanishing margins,
>and Dell is tops at that game.

Yep there's some truth in that auto comparison and, like I've said here
before, the PC/Server business is, like the auto business, now pretty much
a cyclical replacement market - you just hope that everybody doesn't
synchronize on their cycles.:) Right about now, I'd think it's a fair bet
that Dell is taking a very close look at Lenovo's expansion strategy
options and monitoring their actual moves. Hell who knows?.... with
Carleton gone, HP may even get its hat on straight... and Sun has two
options, one of which is die.

>How is Dell going to get slaughtered?

Technology-wise, two directions in server-space that I see off-hand: IBM
will have a better widget with its Xeon MP chipset and Sun will have a
better mid to upper-scale server with Opteron. HP is sounding enthusiastic
about Opteron too, though they're obviously not going to throw the (Intel)
baby out with the bath water when it comes down to it.

--
Rgds, George Macdonald
 

On Sun, 06 Mar 2005 03:10:31 -0500, George Macdonald
<fammacd=!SPAM^nothanks@tellurian.com> wrote:

>On Sat, 05 Mar 2005 19:37:36 -0500, Robert Myers <rmyers1400@comcast.net>
>wrote:
>
>>On Sat, 05 Mar 2005 17:54:56 -0500, George Macdonald
>><fammacd=!SPAM^nothanks@tellurian.com> wrote:
>>
>>>On Sat, 05 Mar 2005 07:17:17 -0500, Robert Myers <rmyers1400@comcast.net>
>>>wrote:
>>>

<snip>

>>>
>>I've lost track of Intel server chipsets. Is _anybody_ but Intel
>>making Server chipsets for Intel processors?
>
>Well as mentioned, there's IBM's Hurricane - IBM *does* like to add some of
>its own "value" and it does sound err, nice. SiS just got a license for
>1066MHz FSB but I'm not sure whether they intend to go into server stuff.
>AYK, traditionally, Intel server chipsets have been so-so.
>
Intel does only as well as it has to, I'm sure. I'm sure that's what
infuriates many techies, but a business type would admire how Intel
plays its cards. They'll do just as well as they have to to stay at
the table... that's the Intel guarantee.

>>>>The fact that Dell holds the line makes life much tougher for AMD, and
>>>>if Opteron with Intel scrambling in the dust didn't do it, I don't
>>>>know what would.
>>>
>>>IMO Dell is going to get slaughtered in the server space anyway... unless
>>>we have the unlikely situation that IBM decides to OEM Hurricane. Like
>>>every other business, Dell will go through a bad spell and its precarious
>>>business model could mean it will not weather the storm. Lack of depth
>>>eventually tells... and then we'll get a new pretender.:)
>>>
>>You think you can see that far into the future? Nothing would please
>>me more than to see Dell out of the dominant position. But they have
>>got it all worked out so smoothly.
>
>Just prognosticating.:) Hell I'm at least as good as your average
>anal...yst and I'm quite sure the Dell model is fragile - every business
>that has traded on paper-thin margins has gone down with a crash; ever hear
>of Crazy Eddie? I just hope they don't take too many others down along the
>way.
>
I'll guess there are too many people watching Dell in a way that Crazy
Eddie never was, and even Enron never was.

You may be right about Lenovo, but that deal is surely structured so
that Lenovo can't touch the server space.

>>The rules are all about to change with multicore chips. With
>>bandwidth requirements going through the roof, I think the day of the
>>motherboard is about to be at hand. How are they going to route all
>>that stuff, anyway? That sounds like a bad scene for Dell, except
>>that motherboards of requisite quality will be commodities.
>
>Again, AMD is much better positioned from the POV of scalability here: add
>a Hypertransport link as necessary - it's already in the CPUs and the
>chipset/mbrd companies are all clued up on implementing... easy stuff.
>Current desktop mbrds have more than enough bandwidth -- you need to take a
>look at the grass on the other side -- so adding a little won't be a big
>deal. nForce3/4 are single chips!
>
I'm skeptical that it actually works that way above four processors.
Take a look at the TPC-C results (tpmC) sorted by raw performance

http://www.tpc.org/tpcc/results/tpcc_results.asp?print=false&orderby=tpm&sortby=desc

I think the first Opteron entry is a RackSaver QuatreX-64 Server 4P,
with a score of 82,226, with Power and Itanium up in the millions.
It's true, the $/tpmC is very attractive at $2.72, but the claim you
are making is about scalability. I think AMD has designed a sizzling
chip for the 4P space.
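
(For scale, my arithmetic, not a figure from the TPC report:
price/performance times throughput gives the total priced configuration,
so $2.72/tpmC x 82,226 tpmC puts that QuatreX-64 at roughly $224,000
all in.)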

>>AMD will make somebody else successful? Who? Just like the auto
>>business, the computer business is a business of vanishing margins,
>>and Dell is tops at that game.
>
>Yep there's some truth in that auto comparison and, like I've said here
>before, the PC/Server business is, like the auto business, now pretty much
>a cyclical replacement market - you just hope that everybody doesn't
>synchronize on their cycles.:) Right about now, I'd think it's a fair bet
>that Dell is taking a very close look at Lenovo's expansion strategy
>options and monitoring their actual moves. Hell who knows?.... with
>Carleton gone, HP may even get its hat on straight... and Sun has two
>options, one of which is die.
>
HP or Sun is going to save itself by becoming the king of low-priced
4P Opteron servers, the space that IBM and Dell have left open for
them? Just writing the sentence down would make me want to sell the
stock of either. I'm sure Lenovo is a cause for concern on Dell's
part.

>>How is Dell going to get slaughtered?
>
>Technology-wise, two directions in server-space that I see off-hand: IBM
>will have a better widget with its Xeon MP chipset and Sun will have a
>better mid to upper-scale server with Opteron. HP is sounding enthusiastic
>about Opteron too, though they're obviously not going to throw the (Intel)
>baby out with the bath water when it comes down to it.

HP's future is itanium. Sun doesn't have a future. If something
kills Dell, it won't be Dell's failure to adopt AMD that does it.

RM
 

On 5 Mar 2005 19:52:46 -0800, "YKhan" <yjkhan@gmail.com> wrote:

>Robert Myers wrote:
>> Had it worked, itanium would have satisfied customers' needs nicely: a
>> chip that would execute non-native binaries (including 360 and x86),
>> mainframe features, and a variety of vendors to choose from
>> ("industry-standard architecture," in intel's code phrase).
>
>If Intel had done that, i.e. come up with an architecture that could
>emulate many other architectures, then it would've guaranteed Itanium
>100% success. A chip that could emulate both x86 and PA-RISC at full
>speed, at the very least; possibly something that could translate
>anything. But instead it came up with this braindead VLIW/EPIC concept
>which was an answer to nobody's needs.
>
Intel thought it was taking the best ideas available at the time it
started the project. IBM had a huge investment in VLIW, and Elbrus
was making wild claims about what it could do.

Somebody who doesn't actually do computer architecture probably has a
very poor idea of all the constraints that operate in that universe,
but I'll stick with my notion that Intel/HP's mistake was that they
had a clean sheet of paper and let too much coffee get spilled on it
from too many different people.

>That would've meant a RISC-like native architecture, as RISC translates
>to RISC very well, and CISC translates to RISC well too.
>
>Part of the reason I'm not so confident about IBM's Cell processor
>either, is because of this same reason, it's not really answering
>anybody's needs. It's not an architecture that can take over from
>anybody else's architecture, except for PowerPC itself which is its
>native architecture.
>
Nobody needs a home computer, and worldwide demand for computers will
be five units. The advantages of streaming processors are low power
consumption and high throughput.

>The Transmeta concept held a lot of excitement for me at one time, not
>because of its power savings but its code-morphing. But its internal
>VLIW was really only meant for translating x86 and nothing else. They
>might as well have not bothered with VLIW as the underlying
>architecture.
>
The belief was (I think) that the front end part was sufficiently
repetitive that it could be massaged heavily to deliver a very clean
instruction stream to the back end. The concept isn't completely
wrong, just not sufficiently right. The DynamoRio people just
announced a new release, but I haven't had a chance to try it. That's
an optimizing front-end driving CISC. That project was motivated by
Itanium, I think.

>> >Put on your customer hat, of whatever persuasion. Try to think of a
>> >real reason any end user would be desirous of using Itanium, as
>> >actually delivered at the time it was delivered.
>> >
>> Oh, well, now that "as actually delivered" is a problem! x86
>> emulation never worked the way it was supposed to.
>
>I think if somebody can come up with a code-morpher that can translate
>anything with a small firmware upgrade at only a smallish 20% loss of
>performance, they will finally have themselves a winner: something that can
>replace anything. Buy the one processor and you get something that can
>run PowerPC, Sparc, MIPS, and x86 on the same system.
>
That's what IBM (and Intel and probably Transmeta, although they never
admitted it) probably wanted to do. For free, you should get runtime
feedback-directed optimization to make up for the overhead of
morphing. That's the theory, anyway. Exception and recovery may not
be the biggest problem, but it's one big problem I know about.
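
To sketch what that runtime feedback might look like (a toy Python
illustration; the function names and the threshold are invented, not
any real morpher's API):

    # Toy feedback-directed retranslation: translate a guest block
    # cheaply on first touch, count executions, and retranslate with
    # heavy optimization once the block proves hot, so the optimizer's
    # overhead is only paid where runtime feedback says it pays off.
    HOT_THRESHOLD = 50                      # invented tuning knob

    cache = {}                              # block addr -> (code, run count)

    def quick_translate(addr):
        return "naive-%x" % addr            # fast, unoptimized translation

    def optimize(addr):
        return "opt-%x" % addr              # slow, scheduled and optimized

    def run_block(addr):
        code, count = cache.get(addr, (None, 0))
        if code is None:
            code = quick_translate(addr)    # first touch
        count += 1
        if count == HOT_THRESHOLD:
            code = optimize(addr)           # proven hot: pay for quality
        cache[addr] = (code, count)
        return code                         # dispatch to native code here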

RM
 

Robert Myers wrote:
> Somebody who doesn't actually do computer architecture probably has a
> very poor idea of all the constraints that operate in that universe,
> but I'll stick with my notion that Intel/HP's mistake was that they
> had a clean sheet of paper and let too much coffee get spilled on it
> from too many different people.

I mean they achieved none of their original goals. Itanium didn't run
x86 at close to full speed. Nor did it simplify core design enough
to make the core very small, cheap to make, and/or fast to run. It
required massive amounts of cache to run, making it expensive. It was
complicated, making it hard to transition to the next miniaturization
process node. The x86 emulator was useless despite being put right into
silicon.

> Nobody needs a home computer, and worldwide demand for computers will
> be five units. The advantages of streaming processors are low power
> consumption and high throughput.

Five units of what?

They do need home electronics though. The sooner they can bring PC
technology into the realm of home electronics the better. I'm surprised
they can't get the cost of these things down any further. They were
making huge strides in reducing prices until now.

> That's what IBM (and Intel and probably Transmeta, although they never
> admitted it) probably wanted to do. For free, you should get runtime
> feedback-directed optimization to make up for the overhead of
> morphing. That's the theory, anyway. Exception and recovery may not
> be the biggest problem, but it's one big problem I know about.

What they really need is a kind of YACC (Yet Another Compiler Compiler)
for instruction sets: a maximally atomic instruction set that has as
much in common with other instruction sets as possible. Something that
can simply be table-based and do a simple lookup between emulated
instruction sets and its own native instruction set.
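
To make that concrete, here's a toy sketch of the table-driven idea in
Python; every opcode name and expansion below is invented for
illustration, not any real ISA or product:

    # Toy table-driven translator: one lookup table per emulated ISA,
    # mapping each guest opcode to a list of native micro-ops. All
    # names here are made up; a real translator would also have to map
    # registers, addressing modes, and (the hard part) processor state.
    PPC_TABLE = {"addi": ["ldimm", "add"], "lwz": ["ld32"]}
    X86_TABLE = {"add": ["add", "setflags"], "mov": ["mov"]}
    TABLES = {"ppc": PPC_TABLE, "x86": X86_TABLE}

    def translate(isa, guest_ops):
        # A "small firmware upgrade" amounts to installing a new table.
        table = TABLES[isa]
        native = []
        for op in guest_ops:
            native.extend(table[op])    # the simple lookup described above
        return native

    print(translate("x86", ["mov", "add"]))   # ['mov', 'add', 'setflags']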

Yousuf Khan
 

Robert Myers wrote:
> AMD will make somebody else successful? Who? Just like the auto
> business, the computer business is a business of vanishing margins,
> and Dell is tops at that game.
>
> How is Dell going to get slaughtered?

The Chinese are going to slaughter it. Dell might be able to convince
protectionist US congressmen to save it in the US for a little while,
but they can't save Dell in the rest of the world.

Yousuf Khan
 

Robert Myers wrote:
> Some part of me wondered whether AMD could break into Dell. All other
> considerations aside, it's not like Dell to add unnecessary
> complication to its life. I'm sure they looked at how many sales they
> might lose vs. the engineering costs and decided it wasn't worth it.
> However that calculation came out, I'm sure they used it to squeeze
> Intel a little harder.

Well, actually AMD has taken care of the systems engineering problem
completely for Dell. It created an ecosystem straight away for Opteron,
not just motherboards but complete barebones systems from Newisys. It
was so easy to make an Opteron system that people like IBM couldn't find
any excuse not to go with Opteron this time at all. Not that IBM
is thrilled to be having to sell Opterons; it would much rather
concentrate on Power and possibly Xeon, but it simply has no excuse not
to. So IBM is doing the most minimal job it can of selling Opterons.

So Dell has no excuse from a systems engineering point of view either.
But it does still have the marketing funds issue which I gather is much
more important to it.

> The fact that Dell holds the line makes life much tougher for AMD, and
> if Opteron with Intel scrambling in the dust didn't do it, I don't
> know what would.

AMD has been fine so far without it. AMD should really start asserting
itself and say that it is not expecting to sell anything to Dell. Even
when Dell says nice things about AMD, AMD should immediately put the
kibosh on the rumours. That'll really drive Dell nuts; it'll ruin their
negotiations with Intel. And it should continue doing that quarter after
quarter; that way Dell will only get regular discounts from Intel. When
Dell gets only regular discounts, that puts all of Dell's
competitors on a level playing field with them.

Yousuf Khan
 

Robert Myers wrote:
> You may be right about Lenovo, but that deal is surely structured so
> that Lenovo can't touch the server space.

Any servers that Lenovo sells won't be allowed to have the IBM name on
them, but they'll likely be able to sell Lenovo-branded servers nonetheless.

They'll be able to sell IBM-branded Lenovo products as add-ons to
server sales from both IBM and Lenovo.

>>>How is Dell going to get slaughtered?
>>
>>Technology-wise, two directions in server-space that I see off-hand: IBM
>>will have a better widget with its Xeon MP chipset and Sun will have a
>>better mid to upper-scale server with Opteron. HP is sounding enthusiastic
>>about Opteron too, though they're obviously not going to throw the (Intel)
>>baby out with the bath water when it comes down to it.
>
>
> HP's future is itanium. Sun doesn't have a future. If something
> kills Dell, it won't be Dell's failure to adopt AMD that does it.

The only thing that will kill Dell is Intel's inability to support them
anymore.

Yousuf Khan
 

On Sun, 06 Mar 2005 13:06:21 -0500, Yousuf Khan <bbbl67@ezrs.com>
wrote:

>Robert Myers wrote:

<unsnip>

>>Part of the reason I'm not so confident about IBM's Cell processor
>>either, is because of this same reason, it's not really answering
>>anybody's needs. It's not an architecture that can take over from
>>anybody else's architecture, except for PowerPC itself which is its
>>native architecture.
>>

</unsnip>

>> Nobody needs a home computer, and worldwide demand for computers will
>> be five units. The advantages of streaming processors are low power
>> consumption and high throughput.
>
>Five units of what?
>
>They do need home electronics though. The sooner they can bring PC
>technology into the realm of home electronics the better. I'm surprised
>they can't get the cost of these things down any further. They were
>making huge strides in reducing prices until now.
>
Oh, come on, Yousuf, I was making a joking reference to the comments
of Watson of IBM about the worldwide need for computers (about five
should do it, he opined), and Olsen of DEC on the need for computers
in the home (not needed at all). I unsnipped your comment, without
which the exchange makes no sense at all. Your dismissal of Cell may
be correct, but I don't think there's enough evidence anywhere for
anybody to draw any conclusions of any kind. I made reference to the
Watson and Olson opinions as a reminder of just how wrong people can
be. Olsen didn't think the home computer was meeting anybody's needs,
either.

>> That's what IBM (and Intel and probably Transmeta, although they never
>> admitted it) probably wanted to do. For free, you should get runtime
>> feedback-directed optimization to make up for the overhead of
>> morphing. That's the theory, anyway. Exception and recovery may not
>> be the biggest problem, but it's one big problem I know about.
>
>What they really need is a kind of YACC (Yet Another Compiler Compiler)
>for instruction sets. A most atomic of instruction sets that has as much
>in common with other instruction sets as possible. Something that can
>simply be table-based and do a simple lookup between emulated
>instruction sets and its own native instruction set.
>

But it's processor state, not instruction sets, that's the problem.

RM
 

On Sun, 06 Mar 2005 13:20:21 -0500, Yousuf Khan <bbbl67@ezrs.com>
wrote:

>Robert Myers wrote:
>> Some part of me wondered whether AMD could break into Dell. All other
>> considerations aside, it's not like Dell to add unnecessary
>> complication to its life. I'm sure they looked at how many sales they
>> might lose vs. the engineering costs and decided it wasn't worth it.
>> However that calculation came out, I'm sure they used it to squeeze
>> Intel a little harder.
>
>Well, actually AMD has taken care of the systems engineering problem
>completely for Dell. It created an ecosystem straight away for Opteron,
>not just motherboards but complete barebones systems from Newisys. It
>was so easy to make an Opteron system that people like IBM couldn't find
>any excuse not to go with Opteron this time at all. Not to say that IBM
>is thrilled to be having to sell Opterons, it would much rather
>concentrate on Power and possibly Xeon, but it simply has no excuse not
>to. So IBM is doing its most minimal job at selling Opterons.
>
You don't think IBM's involvement with the process technology has
something to do with it selling Opteron? They're in bed with AMD. I
look at it the other way around. When Intel looks at them fiercely,
they can just shrug their shoulders and say, "What can we do? We
gotta pay our process guys, you know."

>So Dell has no excuse from a systems engineering point of view either.
>But it does still have the marketing funds issue which I gather is much
>more important to it.
>
>> The fact that Dell holds the line makes life much tougher for AMD, and
>> if Opteron with Intel scrambling in the dust didn't do it, I don't
>> know what would.
>
>AMD has been fine so far without it. AMD should really start asserting
>itself and say that it is not expecting to sell anything to Dell. Even
>when Dell says nice things about AMD, AMD should immediately put the
>kibosh on the rumours. That'll really drive Dell nuts, it'll ruin their
>negotiations with Intel. And it should continue doing that quarter after
>quarter, that way Dell will only get regular discounts from Intel. When
>Dell gets only regular discounts, then that puts all of Dell's
>competitors at a level playing field against them.
>
That gets to a level of speculation about how the big boys play the
game that I wouldn't want to get to. I'll buy the China thing. If
AMD can crack that market and if (say) Lenovo can make decent inroads
in the server space, then maybe it would be something significant for
AMD. It works in China just like it works anywhere else, maybe worse,
because it's probably a little more tolerant of the business practices
of Intel, which is building plants in China.

I'm sure you think I'm out to sell diminished prospects for AMD. I'm
not. I just don't see a path for AMD to turn technical superiority
into significantly greater sales.

RM
 


On Sun, 06 Mar 2005 07:40:53 -0500, Robert Myers wrote:

> On 5 Mar 2005 19:52:46 -0800, "YKhan" <yjkhan@gmail.com> wrote:
>
>>Robert Myers wrote:
>>> Had it worked, itanium would have satisfied customers' needs nicely: a
>>> chip that would execute non-native binaries (including 360 and x86),
>>> mainframe features, and a variety of vendors to choose from
>>> ("industry-standard architecture," in intel's code phrase).
>>
>>If Intel had done that, i.e. come up with an architecture that could
>>emulate many other architectures, then it would've guaranteed Itanium
>>100% success. A chip that could emulate both x86 and PA-RISC at full
>>speed, at the very least; possibly something that could translate
>>anything. But instead it came up with this braindead VLIW/EPIC concept
>>which was an answer to nobody's needs.
>>
> Intel thought it was taking the best ideas available at the time it
> started the project. IBM had a huge investment in VLIW, and Elbrus
> was making wild claims about what it could do.

IBM never had a "huge investment" in VLIW. It was a research project, at
best. OTOH, Intel has a *huge* investment in VLIW, and it's a bus
that isn't going anywhere. It's too easy for us hardware folks to toss off
the hard problems to the compiler folk. History shows that this isn't a
good plan. Even if Intel *could* have pulled it off, where was the
incentive for the customers? They have a business to run and
processor technology isn't generally part of it.

> Somebody who doesn't actually do computer architecture probably has a
> very poor idea of all the constraints that operate in that universe, but
> I'll stick with my notion that Intel/HP's mistake was that they had a
> clean sheet of paper and let too much coffee get spilled on it from too
> many different people.

That was one, perhaps a big one. Intel's real problem, as I see it, is
that they didn't understand their customers. I've told the FS stories
here before. FS was doomed because the customers had no use for it and
they spoke *loudly*. Itanic is no different, except that Intel didn't
listen to their customers. They had a different agenda than their
customers; not a good position to be in.

>>That would've meant a RISC-like native architecture, as RISC translates
>>to RISC very well, and CISC translates to RISC well too.
>>
>>Part of the reason I'm not so confident about IBM's Cell processor
>>either, is because of this same reason, it's not really answering
>>anybody's needs. It's not an architecture that can take over from
>>anybody else's architecture, except for PowerPC itself which is its
>>native architecture.
>>
> Nobody needs a home computer, and worldwide demand for computers will be
> five units.

640Kb is enough for anyone, yada-yada-yada. It's all about missing the
point. Customers rule, architects don't.

> The advantages of streaming processors are low power consumption and high throughput.

You keep saying that, but so far you're alone in the woods. Maybe for the
codes you're interested in, you're right. ...but for most of us there are
surprises in life. We don't live it linearly.

>>The Transmeta concept held a lot of excitement for me at one time, not
>>because of its power savings but its code-morphing. But its internal
>>VLIW was really only meant for translating x86 and nothing else. They
>>might as well have not bothered with VLIW as the underlying
>>architecture.
>>
> The belief was (I think) that the front end part was sufficiently
> repetitive that it could be massaged heavily to deliver a very clean
> instruction stream to the back end. The concept isn't completely wrong,
> just not sufficiently right.

I worked (tangentially) on the original TMTA product. The "proof of
concept" was on MS Word. Let's call it "VC irrational exuberance". Yes
there was lots learned there, some of it interesting, but it came at a
time when Dr. Moore was still quite alive. Brute force won.

>>I think if somebody can come up with a code-morpher that can translate
>>anything with a small firmware upgrade at only a smallish 20% loss of
>>performance, they will finally have themselves a winner: something that can
>>replace anything. Buy the one processor and you get something that can
>>run PowerPC, Sparc, MIPS, and x86 on the same system.
>>
> That's what IBM (and Intel and probably Transmeta, although they never
> admitted it) probably wanted to do. For free, you should get runtime
> feedback-directed optimization to make up for the overhead of morphing.
> That's the theory, anyway. Exception and recovery may not be the
> biggest problem, but it's one big problem I know about.

As usual, theory says that it and reality are the same. Reality has a
different opinion.

--
Keith
 

Robert Myers wrote:
>>>Nobody needs a home computer, and worldwide demand for computers will
>>>be five units. The advantages of streaming processors are low power
>>>consumption and high throughput.
>>
>>Five units of what?
>>
>>They do need home electronics though. The sooner they can bring PC
>>technology into the realm of home electronics the better. I'm surprised
>>they can't get the cost of these things down any further. They were
>>making huge strides in reducing prices until now.
>>
>
> Oh, come on, Yousuf, I was making a joking reference to the comments
> of Watson of IBM about the worldwide need for computers (about five
> should do it, he opined), and Olson of DEC on the need for computers
> in the home (not needed at all). I unsnipped your comment, without
> which the exchange makes no sense at all.

And I snipped them again, because even with them in, it still makes no
sense whatsoever. How old do you think I am, to have gotten that
reference? Even if I were an old fogey, it's doubtful I would've gotten
that reference without at least a reminder about who said it. Or at
least quotes around it to say it's a quote.

> Your dismissal of Cell may
> be correct, but I don't think there's enough evidence anywhere for
> anybody to draw any conclusions of any kind. I made reference to the
> Watson and Olson opinions as a reminder of just how wrong people can
> be. Olson didn't think the home computer was meeting anybody's needs,
> either.

I think it's safe to assume it's going to fail to live up to its hype.
The hype being that it'll sweep the world in every field including PCs.

And likely the comments that Olsen and Watson made about the lack of
demand for home computers were completely right for the times they were
uttered. The first PC was still likely decades away at those points in
time. Even Bill Gates' infamous "640K oughta be enough" was probably
right on the money for that point in time.

However, the Cell is almost present-day technology now, and it's pretty
easy to see where it's going to go because it's not so far away.

>>What they really need is a kind of YACC (Yet Another Compiler Compiler)
>>for instruction sets. A most atomic of instruction sets that has as much
>>in common with other instruction sets as possible. Something that can
>>simply be table-based and do a simple lookup between emulated
>>instruction sets and its own native instruction set.
>>
>
>
> But it's processor state, not instruction sets, that's the problem.

What do you mean?

Yousuf Khan
 


On Sun, 06 Mar 2005 13:20:21 -0500, Yousuf Khan wrote:

> Robert Myers wrote:
>> Some part of me wondered whether AMD could break into Dell. All other
>> considerations aside, it's not like Dell to add unnecessary
>> complication to its life. I'm sure they looked at how many sales they
>> might lose vs. the engineering costs and decided it wasn't worth it.
>> However that calculation came out, I'm sure they used it to squeeze
>> Intel a little harder.
>
> Well, actually AMD has taken care of the systems engineering problem
> completely for Dell. It created an ecosystem straight away for Opteron,
> not just motherboards but complete barebones systems from Newisys. It
> was so easy to make an Opteron system that people like IBM couldn't find
> any excuse not to go with Opteron this time at all. Not to say that IBM
> is thrilled to be having to sell Opterons, it would much rather
> concentrate on Power and possibly Xeon, but it simply has no excuse not
> to. So IBM is doing its most minimal job at selling Opterons.

Kinda like OS/2? IBM isn't about doing what others easily can. It can be
described as a one-stop supermarket. "If you *really* want it, we have it!"

> So Dell has no excuse from a systems engineering point of view either.
> But it does still have the marketing funds issue which I gather is much
> more important to it.

Dell - systems engineering? Is that like "military intelligence"?

>> The fact that Dell holds the line makes life much tougher for AMD, and
>> if Opteron with Intel scrambling in the dust didn't do it, I don't
>> know what would.
>
> AMD has been fine so far without it. AMD should really start asserting
> itself and say that it is not expecting to sell anything to Dell. Even
> when Dell says nice things about AMD, AMD should immediately put the
> kibosh on the rumours. That'll really drive Dell nuts, it'll ruin their
> negotiations with Intel. And it should continue doing that quarter after
> quarter, that way Dell will only get regular discounts from Intel. When
> Dell gets only regular discounts, then that puts all of Dell's
> competitors at a level playing field against them.

I agree with the first few sentences. AMD should flat out tell the world
that they're not going after Dell, never! The second half I don't so much
agree with. Neither Intel nor Dell particularly cares.

--
Keith
 

Robert Myers wrote:
> You don't think IBM's involvement with the process technology has
> something to do with it selling Opteron? They're in bed with AMD. I
> look at it the other way around. When Intel looks at them fiercely,
> they can just shrug their shoulders and say, "What can we do? We
> gotta pay our process guys, you know."

The only thing that IBM's chip division tells its server division to
sell is Power, nothing else. Actually that's probably coming down from
the executive board of IBM, rather than one division to another.

IBM's chip division is in bed with AMD. IBM's server division is in bed
with Intel for Xeon.

>>AMD has been fine so far without it. AMD should really start asserting
>>itself and say that it is not expecting to sell anything to Dell. Even
>>when Dell says nice things about AMD, AMD should immediately put the
>>kibosh on the rumours. That'll really drive Dell nuts, it'll ruin their
>>negotiations with Intel. And it should continue doing that quarter after
>>quarter, that way Dell will only get regular discounts from Intel. When
>>Dell gets only regular discounts, then that puts all of Dell's
>>competitors at a level playing field against them.
>>
>
> That gets to a level of speculation about how the big boys play the
> game that I wouldn't want to get to. I'll buy the China thing. If
> AMD can crack that market and if (say) Lenovo can make decent inroads
> in the server space, then maybe it would be something significant for
> AMD. It works in China just like it works anywhere else, maybe worse,
> because it's probably a little more tolerant of the business practices
> of Intel, which is building plants in China.

Well, so is AMD. Neither is building anything like a full-fledged chip
plant in China, just packaging plants. It's likely that AMD will be the
first to build a full chip plant in China though, as the subsidies in
Europe are drying up. Ireland just had to withdraw a promise of
subsidies to Intel for its Irish plant, because the EU overruled it.

> I'm sure you think I'm out to sell diminished prospects for AMD. I'm
> not. I just don't see a path for AMD to turn technical superiority
> into significantly greater sales.

It's a matter of them playing dirty like Intel. It's the only way to do it.

Yousuf Khan
 

"Yousuf Khan" <bbbl67@ezrs.com> wrote in message
news:gOGdnf3mrJZBJrbfRVn-1g@rogers.com...
> Robert Myers wrote:
> > You don't think IBM's involvement with the process technology has
> > something to do with it selling Opteron? They're in bed with AMD. I
> > look at it the other way around. When Intel looks at them fiercely,
> > they can just shrug their shoulders and say, "What can we do? We
> > gotta pay our process guys, you know."
>
> The only thing that IBM's chip division tells its server division to
> sell is Power, nothing else. Actually that's probably coming down from
> the executive board of IBM, rather than one division to another.

You smoking that BC bud again, up there in canuckistan?

>
> IBM's chip division is in bed with AMD. IBM's server division is in bed
> with Intel for Xeon.
>
I don't even know what this sentence is supposed to mean. Maybe you
didn't notice IBM's last reorganization?
This is almost as funny as the stuff from "the sun never sets on ibm"
about how ibm deliberately made S/3 not be 360 compatible....

snipitee doo dah.

del
 

"Yousuf Khan" <bbbl67@ezrs.com> wrote in message
news:gOOdnfSgc9DZ5LbfRVn-3A@rogers.com...
>
> They do need home electronics though. The sooner they can bring PC
> technology into the realm of home electronics the better. I'm surprised
> they can't get the cost of these things down any further. They were
> making huge strides in reducing prices until now.
>
I snipped it all, although I can't believe that someone educated in
computers would be ignorant of both watson's and olson's remarks, along
with Gary Kildall flying and gates' 640k.

I was just out at sam's club the other day, and they were selling, for
550 bucks retail or the cost of a nice middle of the road TV, a Compaq
AMD system with a 17 inch flat CRT monitor (not lcd), 512MB, 180 GB
disk (might have been 250, don't remember for sure), XP, about 8 USB
ports, sound, etc etc. Even a little reader for the memory cards out of
cameras right on the front.

Computers are already in the realm of home electronics.

del cecchi
 

On Sun, 06 Mar 2005 20:24:16 -0500, keith <krw@att.bizzzz> wrote:

>On Sun, 06 Mar 2005 07:40:53 -0500, Robert Myers wrote:
>
>> On 5 Mar 2005 19:52:46 -0800, "YKhan" <yjkhan@gmail.com> wrote:
>>

<snip>

>>>
>>>If Intel had done that, i.e. come up with an architecture that could
>>>emulate many other architectures, then it would've guaranteed Itanium
>>>100% success. A chip that could emulate both x86 and PA-RISC at full
>>>speed, at the very least; possibly something that could translate
>>>anything. But instead it came up with this braindead VLIW/EPIC concept
>>>which was an answer to nobody's needs.
>>>
>> Intel thought it was taking the best ideas available at the time it
>> started the project. IBM had a huge investment in VLIW, and Elbrus
>> was making wild claims about what it could do.
>
>IBM never had a "huge investment" in VLIW. It was a research project, at
>best. OTOH, Intel has a *huge* investment in VLIW, and it's a bus
>that isn't going anywhere. It's too easy for us hardware folks to toss off
>the hard problems to the compiler folk. History shows that this isn't a
>good plan. Even if Intel *could* have pulled it off, where was the
>incentive for the customers? They have a business to run and
>processor technology isn't generally part of it.
>
You mean the work required to tune? People will optimize the hell out
of compute intensive code--to a point. The work required to get the
world-beating SpecFP numbers is probably beyond that point.

>> Somebody who doesn't actually do computer architecture probably has a
>> very poor idea of all the constraints that operate in that universe, but
>> I'll stick with my notion that Intel/HP's mistake was that they had a
>> clean sheet of paper and let too much coffee get spilled on it from too
>> many different people.
>
>That was one, perhaps a big one. Intel's real problem, as I see it, is
>that they didn't understand their customers. I've told the FS stories
>here before. FS was doomed because the customers had no use for it and
>they spoke *loudly*. Itanic is no different, except that Intel didn't
>listen to their customers. They had a different agenda than their
>customers; not a good position to be in.
>
If alpha and pa-risc hadn't been killed off, I might agree with you
about Itanium. No one is going to abandon the high-end to an IBM
monopoly. Never happen (again).

I gather that Future Systems eventually became AS/400. We'll never
know what might have become of Itanium if it hadn't been such a
committee enterprise. The 8080, after all, was not a particularly
superior processor design, and nobody needed *it*, either.

<snip>

>
>> The advantages of streaming processors are low power consumption and high throughput.
>
>You keep saying that, but so far you're alone in the woods. Maybe for the
>codes you're interested in, you're right. ...but for most of us there are
>surprises in life. We don't live it linearly.
>
I'm definitely not alone in the woods on this one, Keith. Go look at
Dally's papers on Brook and Stream. Take a minute and visit
gpgpu.org. I could dump you dozens of papers of people doing stuff
other than graphics on stream processors, and they are doing a helluva
lot of graphics, easily found with google, gpgpu, or by checking out
siggraph conferences. Network processors are just another version of
the same story. Network processors are right at the soul of
mainstream computing, and they're going to move right onto the die.

With everything having turned into point-to-point links, computers
have turned into packet processors already. Current processing is the
equivalent of loading a container ship by hand-loading everything into
containers, loading them onto the container ship, and hand-unloading
at the other end. Only a matter of time before people figure out how
to leave things in the container for more of the trip, as the world
already does with physical cargo.

Power consumption matters. That's one point about BlueGene I've
conceded repeatedly and loudly.

Stream processors have the disadvantage that it's a wildly different
computing paradigm. I'd be worried if *I* had to propose and work
through the new ways of coding. Fortunately, I don't. It's
happening.

The harder question is *why* any of this is going to happen. A lower
power data center would be a very big deal, but nobody's going to do a
project like that from scratch. PC's are already plenty powerful
enough, or so the truism goes. I don't believe it, but somebody has
to come up with the killer app, and Sony apparently thinks they have
it. We'll see.

>>>The Transmeta concept held a lot of excitement for me at one time, not
>>>because of its power savings but its code-morphing. But its internal
>>>VLIW was really only meant for translating x86 and nothing else. They
>>>might as well have not bothered with VLIW as the underlying
>>>architecture.
>>>
>> The belief was (I think) that the front end part was sufficiently
>> repetitive that it could be massaged heavily to deliver a very clean
>> instruction stream to the back end. The concept isn't completely wrong,
>> just not sufficiently right.
>
>I worked (tangentially) on the original TMTA product. The "proof of
>concept" was on MS Word. Let's call it "VC irrational exuberance". Yes
>there was lots learned there, some of it interesting, but it came at a
>time when Dr. Moore was still quite alive. Brute force won.
>
On the face of it, MS word doesn't seem like it should work because of
a huge number of unpredictable code paths. Turns out that even a word
processing program is fairly repetitive. Do you know if they included
exception and recovery in the analysis?

>>>I think if somebody can come up with a code-morpher that can translate
>>>anything with a small firmware upgrade at only a smallish 20% loss of
>>>performance, they will finally have themselves a winner: something that can
>>>replace anything. Buy the one processor and you get something that can
>>>run PowerPC, Sparc, MIPS, and x86 on the same system.
>>>
>> That's what IBM (and Intel and probably Transmeta, although they never
>> admitted it) probably wanted to do. For free, you should get runtime
>> feedback-directed optimization to make up for the overhead of morphing.
>> That's the theory, anyway. Exception and recovery may not be the
>> biggest problem, but it's one big problem I know about.
>
>As usual, theory says that it and reality are the same. Reality has a
>different opinion.

It's still worth understanding why. The only way to make things go
faster, beyond a certain point, is to make them predictable.

RM
 

On Sun, 06 Mar 2005 21:05:58 -0500, Yousuf Khan <bbbl67@ezrs.com>
wrote:

>Robert Myers wrote:

<snip>

> > Your dismissal of Cell may
>> be correct, but I don't think there's enough evidence anywhere for
>> anybody to draw any conclusions of any kind. I made reference to the
>> Watson and Olson opinions as a reminder of just how wrong people can
>> be. Olson didn't think the home computer was meeting anybody's needs,
>> either.
>
>I think it's safe to assume it's going to fail to live upto its hype.
>The hype being that it'll sweep the world in every field including PCs.
>
That's called knocking down a straw man. Sure, there are some game
players getting a little carried away. There is simply no way of
knowing, until it plays itself out, how big a deal this is going to
be. I hope somebody at Intel is paying attention.

>And likely the comments that Olsen and Watson made about the lack of
>demand for home computers was completely right for the times they were
>uttered.

Watson was closer to right than Olsen, and Olsen was completely wrong,
even for his time. The evidence was on the table, although he was two
years ahead of the release of VisiCalc (1977 vs. 1979).

>The first PC was still likely decades away at those points in
>time. Even Bill Gates' infamous, "640K oughta be enough", was probably
>right on the money for that point in time.
>
Several candidates for the "First PC" had been out for several years
by the time Olsen stuck his foot in his mouth. The Apple I was
released the year before. Gates was an idiot, if he ever said such a
thing, and I don't think he actually did. Think of a 1000x1000 color
bitmap.

>However, the Cell is almost present-day technology now, and it's pretty
>easy to see where it's going to go because it's not so far away.
>
Why don't you be a little more specific in your predictions, since
they're so easy to make?

>>>What they really need is a kind of YACC (Yet Another Compiler Compiler)
>>>for instruction sets. A most atomic of instruction sets that has as much
>>>in common with other instruction sets as possible. Something that can
>>>simply be table-based and do a simple lookup between emulated
>>>instruction sets and its own native instruction set.
>>>
>>
>> But it's processor state, not instruction sets, that's the problem.
>
>What do you mean?
>
For itanium, the actual effect of an instruction depends on a great
many past events that have to be kept track of (state). The op-code
appears to act on a few registers. The actual instruction operates on
a space of much larger dimensionality. x86 also has state that is
sufficiently scrambled that it's amazing that vmware can do what it
does. The problem is *much* harder than translating instructions,
especially if you want to take advantage of all of itanium's widgetry
to optimize performance. And for every interrupt, all that state has
to be kept track of and acted upon appropriately, perhaps involving
elaborate unwinding of provisional actions.
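
To put a toy example under that (invented Python, x86-flavored because
its flags are the familiar case; itanium carries vastly more of this):

    # 'add' itself is an easy table lookup; the trouble is its side
    # effect on hidden state (the zero flag here) that a later branch
    # reads, and that every interrupt must save and restore exactly.
    class GuestState:
        def __init__(self):
            self.regs = {"eax": 5}
            self.zf = 0                     # hidden state: zero flag

    def emulate_add(s, reg, val):
        s.regs[reg] = (s.regs[reg] + val) & 0xFFFFFFFF
        s.zf = 1 if s.regs[reg] == 0 else 0  # state update, not an opcode

    def emulate_jz(s):
        return s.zf == 1                    # reads state set earlier

    s = GuestState()
    emulate_add(s, "eax", -5 & 0xFFFFFFFF)  # result is 0, so ZF gets set
    # an interrupt taken here must preserve s.zf exactly, or the branch
    # below goes the wrong way when the guest resumes
    print(emulate_jz(s))                    # True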

RM