Archived from groups: comp.sys.ibm.pc.hardware.chips
On Sun, 06 Mar 2005 20:24:16 -0500, keith <krw@att.bizzzz> wrote:
>On Sun, 06 Mar 2005 07:40:53 -0500, Robert Myers wrote:
>
>> On 5 Mar 2005 19:52:46 -0800, "YKhan" <yjkhan@gmail.com> wrote:
>>
<snip>
>>>
>>>If Intel had done that, i.e. come up with an architecture that could
>>>emulate many other architectures, then it would've guaranteed Itanium
>>>100% success. A chip that could emulate both x86 and PA-RISC at full
>>>speed, at the very least; possibly something that could translate
>>>anything. But instead it came up with this braindead VLIW/EPIC concept
>>>which was an answer to nobody's needs.
>>>
>> Intel thought it was taking the best ideas available at the time it
>> started the project. IBM had a huge investment in VLIW, and Elbrus
>> was making wild claims about what it could do.
>
>IBM never had a "huge investment" in VLIW. It was a research project, at
>best. OTOH, Intel has a *huge* investment in VLIW, and it's a bus
>that isn't going anywhere. It's too easy for us hardware folks to toss off
>the hard problems to the compiler folk. History shows that this isn't a
>good plan. Even if Intel *could* have pulled it off, where was the
>incentive for the customers? They have a business to run and
>processor technology isn't generally part of it.
>
You mean the work required to tune? People will optimize the hell out
of compute-intensive code--to a point. The work required to get the
world-beating SpecFP numbers is probably beyond that point.
>> Somebody who doesn't actually do computer architecture probably has a
>> very poor idea of all the constraints that operate in that universe, but
>> I'll stick with my notion that Intel/HP's mistake was that they had a
>> clean sheet of paper and let too much coffee get spilled on it from too
>> many different people.
>
>That was one, perhaps a big one. Intel's real problem, as I see it, is
>that they didn't understand their customers. I've told the FS stories
>here before. FS was doomed because the customers had no use for it and
>they spoke *loudly*. Itanic is no different, except that Intel didn't
>listen to their customers. They had a different agenda than their
>customers; not a good position to be in.
>
If Alpha and PA-RISC hadn't been killed off, I might agree with you
about Itanium. No one is going to abandon the high end to an IBM
monopoly. That will never happen (again).
I gather that Future Systems eventually became the AS/400. We'll never
know what might have become of Itanium if it hadn't been such a
committee enterprise. The 8080, after all, was not a particularly
superior processor design, and nobody needed *it*, either.
<snip>
>
>> The advantages of streaming processors are low power consumption and high throughput.
>
>You keep saying that, but so far you're alone in the woods. Maybe for the
>codes you're interested in, you're right. ...but for most of us there are
>surprises in life. We don't live it linearly.
>
I'm definitely not alone in the woods on this one, Keith. Go look at
Dally's papers on Brook and Stream. Take a minute and visit
gpgpu.org. I could point you to dozens of papers by people doing stuff
other than graphics on stream processors, and they're doing a helluva
lot of graphics, too; all of it is easy to find with Google, on
gpgpu.org, or by checking out the SIGGRAPH proceedings. Network
processors are just another version of the same story: they're right
at the soul of mainstream computing, and they're going to move right
onto the die. With everything having turned into point-to-point
links, computers have turned into packet processors already. Current
processing is the equivalent of loading a container ship by
hand-loading everything into containers, loading them onto the ship,
and hand-unloading at the other end. It's only a matter of time
before people figure out how to leave things in the container for
more of the trip, as the world already does with physical cargo.
Power consumption matters. That's one point about BlueGene I've
conceded repeatedly and loudly.
Stream processors have the disadvantage of being a wildly different
computing paradigm. I'd be worried if *I* had to propose and work
through the new ways of coding. Fortunately, I don't; it's already
happening.
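To make the paradigm difference concrete, here's a rough sketch in plain
C (my own toy, not Brook syntax; the names are invented): the same saxpy
computation written as an ordinary serial loop and as a kernel mapped
over streams. The point is only that a kernel with no cross-element
dependencies is something a stream machine can spread across a lot of
ALUs and keep fed out of local register files.

#include <stddef.h>

/* Conventional formulation: one serial loop, evaluation order implied. */
void saxpy_loop(float a, const float *x, const float *y, float *out, size_t n)
{
    for (size_t i = 0; i < n; i++)
        out[i] = a * x[i] + y[i];
}

/* Stream formulation: a kernel that touches exactly one element, plus a
 * driver that maps it over the input streams.  Because the kernel has no
 * cross-element dependencies, a stream processor (or a Brook-style
 * compiler) is free to run it on many ALUs at once and to keep operands
 * in local register files instead of bouncing them through a cache.
 */
typedef struct { float a; } kernel_args;   /* hypothetical names */

static float saxpy_kernel(kernel_args k, float x, float y)
{
    return k.a * x + y;
}

void saxpy_stream(float a, const float *x, const float *y, float *out, size_t n)
{
    kernel_args k = { a };
    for (size_t i = 0; i < n; i++)   /* on real hardware this map runs in parallel */
        out[i] = saxpy_kernel(k, x[i], y[i]);
}

The code is the easy part; the different paradigm is in having to express
everything you care about as kernels and streams in the first place.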
The harder question is *why* any of this is going to happen. A
lower-power data center would be a very big deal, but nobody's going to
do a project like that from scratch. PCs are already plenty powerful
enough, or so the truism goes. I don't believe it, but somebody has
to come up with the killer app, and Sony apparently thinks they have
it. We'll see.
>>>The Transmeta concept held a lot of excitement for me at one time, not
>>>because of its power savings but its code-morphing. But its internal
>>>VLIW was really only meant for translating x86 and nothing else. They
>>>might as well have not bothered with VLIW as the underlying
>>>architecture.
>>>
>> The belief was (I think) that the front end part was sufficiently
>> repetitive that it could be massaged heavily to deliver a very clean
>> instruction stream to the back end. The concept isn't completely wrong,
>> just not sufficiently right.
>
>I worked (tangentially) on the original TMTA product. The "proof of
>concept" was on MS Word. Let's call it "VC irrational exuberance". Yes
>there was lots learned there, some of it interesting, but it came at a
>time when Dr. Moore was still quite alive. Brute force won.
>
On the face of it, MS Word doesn't seem like it should work well,
because of a huge number of unpredictable code paths. It turns out
that even a word processing program is fairly repetitive. Do you know
if they included exception and recovery in the analysis?
>>>I think if somebody can come up with a code-morpher that can translate
>>>anything with a small firmware upgrade at only a smallish 20% loss of
>>>performance, they will finally have themselves a winner: something that
>>>can replace anything. Buy the one processor and you get something that
>>>can run PowerPC, SPARC, MIPS, and x86 on the same system.
>>>
>> That's what IBM (and Intel and probably Transmeta, although they never
>> admitted it) probably wanted to do. For free, you should get runtime
>> feedback-directed optimization to make up for the overhead of morphing.
>> That's the theory, anyway. Exception and recovery may not be the
>> biggest problem, but it's one big problem I know about.
>
>As usual, theory says that it and reality are the same. Reality has a
>different opinion.
It's still worth understanding why. The only way to make things go
faster, beyond a certain point, is to make them predictable.
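For what it's worth, here's a toy sketch in C of the kind of thing I mean
(my own illustration, not Transmeta's actual code-morphing scheme; the
names are invented): interpret a guest block, profile it, and once it has
proven itself hot--that is, predictable--swap in a cached translation so
the per-op dispatch overhead goes away.

#include <stdio.h>

#define HOT_THRESHOLD 1000

typedef int (*translated_fn)(int);

typedef struct {
    long exec_count;        /* profile data gathered at run time    */
    translated_fn native;   /* non-NULL once the block is "morphed" */
} guest_block;

/* Slow path: emulate the block one guest op at a time. */
static int interpret(int x)
{
    x = x * 3;              /* pretend each line is a decoded guest op */
    x = x + 7;
    return x;
}

/* Fast path: what a translator would emit once the block is known hot. */
static int translated(int x)
{
    return x * 3 + 7;
}

static int run_block(guest_block *b, int x)
{
    if (b->native)
        return b->native(x);        /* predictable: use the translation */
    if (++b->exec_count == HOT_THRESHOLD)
        b->native = translated;     /* "morph" the hot block            */
    return interpret(x);            /* still warming up                 */
}

int main(void)
{
    guest_block b = { 0, NULL };
    long sum = 0;
    for (int i = 0; i < 10000; i++)
        sum += run_block(&b, i);
    printf("sum = %ld (translated after %d runs)\n", sum, HOT_THRESHOLD);
    return 0;
}

The part the sketch ignores is exactly the exception-and-recovery question
above: the translation has to be able to back out to a precise guest state
when something traps, and that's where the hard engineering is.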
RM