Might be a book that even R. Myers can love :-)

Archived from groups: comp.sys.ibm.pc.hardware.chips

Robert Myers wrote:

[SNIP]

> By comparison, we can do teraflop on a chip _now_ with streaming
> technology. That's really hard to ignore, and we do need those
> teraflops, and more.

Yes, but can you do anything *useful* with that streaming pile of
TeraFLOP ? :)

I still can't see what this Streaming idea is bringing to the table
that's fundamentally new. It still runs into the parallelisation
wall eventually, it's just Yet Another Coding Paradigm. :/

Cheers,
Rupert
 

Rupert Pigott wrote:

> Robert Myers wrote:
>
> [SNIP]
>
>> By comparison, we can do teraflop on a chip _now_ with streaming
>> technology. That's really hard to ignore, and we do need those
>> teraflops, and more.
>
>
> Yes, but can you do anything *useful* with that streaming pile of
> TeraFLOP ? :)
>

The long range forces part of the molecular dynamics calculation is
potentially a tight little loop where the fact that it takes many cycles
to compute a reciprocal square root wouldn't matter if the calculation
were streamed.

There are many such opportunities to do something useful. There are
circumstances where you can't do streaming parallelism naively because
of well-known pipeline hazards, but, as always, there are ways to cheat
the devil.
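For concreteness, here is a minimal sketch (my own illustration, not from any actual MD code) of that tight loop: the long-range, Coulomb-style force on each particle reduces to one streamed pass of reciprocal square roots over all the other particles.

```python
import numpy as np

def long_range_forces(pos, q):
    """Sketch of the long-range (Coulomb-style) force loop: for each
    particle, one vectorized pass over all the others. The dominant
    operation is the reciprocal square root; with many independent
    evaluations in flight, its multi-cycle latency is hidden."""
    f = np.zeros_like(pos)
    for i in range(len(pos)):
        d = pos[i] - pos                    # displacements from every j to i
        r2 = (d * d).sum(axis=1)
        r2[i] = np.inf                      # exclude self-interaction
        inv_r3 = (1.0 / np.sqrt(r2)) ** 3   # the many-cycle rsqrt, streamed
        f[i] = ((q[i] * q * inv_r3)[:, None] * d).sum(axis=0)
    return f
```

With unit charges a distance r apart this reproduces the 1/r^2 magnitude, and the forces on a pair come out equal and opposite.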

> I still can't see what this Streaming idea is bringing to the table
> that's fundamentally new. It still runs into the parallelisation
> wall eventually, it's just Yet Another Coding Paradigm. :/
>

In a conventional microprocessor, the movement of data and progress
toward the final answer are connected only in the most vaguely
conceptual way: out of memory, into the cache, into a register, into an
execution unit, into another register, back into cache,... blah, blah,
blah. All that chaotic movement takes time and, even more important,
energy. In a streaming processor, data physically move toward the exit
and toward a final answer.
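A toy way to picture that one-way flow (a software analogy of my own, not a claim about any particular chip): chain the stages so each datum passes through once, toward the sink, instead of bouncing in and out of storage between operations.

```python
# Toy illustration of the streaming contrast: stages are chained so
# every value flows one way, toward the result, rather than making
# memory -> cache -> register -> unit round trips between operations.

def scale(xs, a):
    for x in xs:          # stage 1: each value passes through once
        yield a * x

def shift(xs, b):
    for x in xs:          # stage 2: consumes stage 1's output directly
        yield x + b

def total(xs):
    s = 0.0
    for x in xs:          # sink: the "exit" the data move toward
        s += x
    return s

data = range(1000)
result = total(shift(scale(data, 2.0), 1.0))   # one pass, no round trips
```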

Too simple a view? By a country mile to be sure. Some part of almost
all problems will need a conventional microprocessor. For problems that
require long range data movement, getting the streaming paradigm to work
even in the crudest way above the chip level will be... challenging.

Fortunately, there is already significant experience from graphics
programming with what can be accomplished by way of streaming
parallelism, and we don't have to count on anybody with a big checkbook
waking up from their x86 hangover to see these ideas explored more
thoroughly: Playstation 3 and the associated graphics workstation will
make it happen.

Yet Another Coding Paradigm? I can live with that, but plainly I think
it's a more powerful paradigm than you do.

RM
 

K Williams wrote:

> Robert Myers wrote:
>

<snip>

>
> Gee, fantasy meets reality, once again. The reality is that what we
> have is "good enough". It's up to you softies to make your stuff
> fit within the hard realities of physics. That is, it's *all*
> about algorithms. Don't expect us hardware types to bail you out
> of your problems anymore. We're knocking on the door of hard
> physics, so complain to the guys across the Boneyard from MRL.
>

You seem to think that the complexity of the problems to be solved is
arbitrary, but it's not. It would be naive to assume that everything
possible has been wrung out of the algorithms, but it would be equally
naive to think that problems we want so badly to be able to solve will
ever be solved without major advances in hardware.

As to the physics...I wish I even had a clue.

>
>>It apparently didn't take too many poundings from clusters of
>>boxes at supercomputer shows to drive both the customers and the
>>manufacturers of
>>big iron into full retreat.
>
>
> Perhaps because *cheap* clusters could solve the "important"
> problems, given enough thought?

That's been the delusion, and that's exactly what it is: a delusion.

> Of course the others are deemed to
> be "unimportant", by definition. ...at least until there is a
> solution. ;-)
>

And that's why us "algorithm" types can't afford to ignore hardware: the
algorithms and even the problems we can solve are dictated by hardware.

<snip>

>
>>The possibilities for grand leaps just don't come from plugging
>>commodity boxes together, or even from plugging boards of
>>commodity
>>processors together. If you can't make a grand leap, it really
>>isn't worth the bother (that's the statement that makes enemies
>>for me--people may not know how to do much else, but they sure do
>>know how to run cable).
>
>
> IMHO, we're not going to see any grand leaps in hardware. We have
> some rather hard limits here. "186,000mi/sec isn't just a good
> idea, it's the *LAW*", sort of thing.
>

For the purpose of doing computational physics, the speed of light is a
limitation on how long it takes to satisfy data dependencies in a single
computational step. For the bogey protein-folding calculation in Allen
et al., we need to do 10^11 steps. One microsecond is 300 meters (3x10^8
m/s x 10^-6 s). If we can jam the computer into a 300 meter sphere,
then a calculation that took one crossing time per time step would take
10^5 seconds, or about 30 hours. The Blue Gene document estimates 3
years for such a calculation, thereby allowing for more like 1000 speed
of light crossings per time step. To make the calculation go faster, we
need to reduce the number of speed of light crossings required or to
reduce the size of the machine.
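The arithmetic is quick to check (the 10^11 steps and the 3-year figure are the ones quoted from the Blue Gene discussion; the rest follows):

```python
# Back-of-envelope: speed-of-light crossings per time step for the
# bogey protein-folding calculation described above.
C = 3.0e8                # speed of light, m/s
STEPS = 1e11             # number of time steps
SIZE = 300.0             # machine diameter, m

crossing = SIZE / C                    # one light-crossing: 1e-6 s
total = STEPS * crossing               # one crossing per step: 1e5 s
print(total / 3600)                    # ~28 hours, i.e. "about 30"

three_years = 3 * 365 * 24 * 3600.0    # Blue Gene's estimated run, s
print(three_years / total)             # ~950, i.e. "more like 1000"
```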

> No doubt we're currently running into what amounts to a technology
> speed bump, but there *are* some hard limits we're starting to see.
> It's up to you algorithm types now. ;-)
>

All previous predictions of the end of the road have turned out to be
premature, so I'm hesitant to join the chorus now, no matter how clear
the signs may seem to be.

<snip>

>
>>Processors with *Teraflop* capabilities are a reality, and not
>>just in
>>artificially inflated numbers for game consoles. Not only do
>>those teraflop chips wipe the floor with x86 and Itanium for the
>>problems you really need a breakthrough for, they don't need
>>warehouses full of routers, switches, and cable to get those
>>levels of performance.
>
> So buy them. I guess I don't understand your problem. They're
> reality, so...
>

Before silicon comes a simulation model, and there are, indeed, better
ways to be approaching that problem than to be chatting about it on csiphc.

<snip>

>
>>Streaming processors a slam dunk? Apparently not. They're hard
>>to
>>program and inflexible. IBM is the builder of choice for them at
>>the
>>moment. Somebody else, though, will have to come up with the
>>money.
>
>
> Builder, perhaps. Architect/proponent/financier? I don't think
> so. ...at least not the way this peon sees things. I've had many
> wishes over the years, This doesn't even come close to my list of
> "good ideas wasted on dumb management",
>

IBM, and those concerned with what might happen to the technical
capabilities it possesses, have more pressing concerns than whether IBM
should be going into supercomputers or not. I don't think it should, so
we seem to be agreed about that.

RM
 

In article <rfp3d0livl7lj5st0v2cj8bdho9u3aoejm@4ax.com>,
George Macdonald <fammacd=!SPAM^nothanks@tellurian.com> writes:
<snip>
> As for JCL, I once had a JCL evangelist explain to me how he could use JCL
> in ways which weren't possible on systems with simpler control statements -
> conditional job steps, substitution of actual file names for dummy
> parameters etc... "catalogued procedures"?[hazy again] The guy was stuck
> in his niche of "job steps" where data used to be massaged from one set of
> tapes to another and then on in another step to be remassaged into some
> other record format for storing on another set of tape... all those steps
> being necessary, essentially because of the sequential tape storage. We'd
> had disks for a while but all they did was emulate what they used to do
> with tapes - he just didn't get it.
>
I used to do JCL, back when I ran jobs on MVS. After getting used to it,
and the fact that you allocated or deleted files using the infamous
IEFBR14, there were things to recommend it. At the very least, you edited
your JCL, and it all stayed put. Then you submitted, and it was in the
hands of the gods. None (or very little, because there were ways to kill
a running job) of this Oops! and hit Ctrl-C.

I never had to deal with tapes, fortunately. It was also frustrating not
having dynamic filenames. There were ways to weasel around some of those
restrictions, though.

Dale Pontius
 

In article <9ji1d0htgdbqorgc3rkqbanrvm952l62sl@4ax.com>,
daytripper <day_trippr@REMOVEyahoo.com> writes:
> On Wed, 16 Jun 2004 18:19:39 GMT, Robert Redelmeier <redelm@ev1.net.invalid>
> wrote:
>
>>Robert Myers <rmyers1400@comcast.net> wrote:
>>> Leave out the technical issues. If Intel/HP have to climb down from the
>>> fortress they've built around Itanium, how will they ever pull it off?
>>> It would be like IBM admitting that maybe System 360 wasn't such a great
>>> idea, after all (which, who knows, maybe it wasn't).
>>
>>Whatever one thinks about the technical merits of S/360,
>>the commercial success was undeniable.
>
> I think the technical merits were right up there as well.
> What other system had a control store that required an air-pump to operate?;-)
>
When a former boss had a service anniversary, they brought him a 'gift'.
It was one of those thingies that needed an air pump to operate, also
known as CCROS. I suspect it meant Capacitive-Coupled Read-Only Storage.
The slick thing was that it was a ROM you could program with a keypunch.
Not very dense, though. 36KB in 2 or 3 cubic feet.

Dale Pontius
 

Dale Pontius wrote:

<snip>

>
> The question for IA64 becomes can it bring enough to the table on future
> revisions to make up for its obstacles. Will >8-way become compelling,
> and at what price? At this point, AMD is trying to push its Opteron ASPs
> up, but probably has more flex room than IA64 or Xeon.
>

At this point, Itanium is _still_ mostly expectation. My point in
commenting on the book that started the thread is that Intel seemed to
have no interest in lowering expectations about Itanium.

Intel will do _something_ to diminish the handicap that Itanium
currently has due to in-order execution. The least painful thing that
Intel can do, as far as I understand things, is to use speculative
slices as a prefetch mechanism. That gets a big piece of the advantages
of OoO without changing the main thread control logic at all. Whether
that strategy works at an acceptable cost in transistors and power is
another question.

That single change could rewrite the rules for Itanium, because it will
take much of the heat off compilation and allow people to see, much more
often, the kind of performance that Itanium now seems to produce mostly
in benchmarks.

As to cost, Intel have made it clear that they are prepared to do
whatever they have to do to make the chip competitive.

As to how the big (more than 8-way) boxes behave, that's up to the
people who build the big boxes, isn't it? The future big boxes will
depend on board level interconnect and switching infrastructure, and if
anybody knows what that is going to look like in Intel's PCI Express
universe, I wish they'd tell me.

It gets harder to stick with the position all the time, but you still
have to take a deep breath when betting against Intel. The message
Intel wants you to hear is: IA-64 for mission critical big stuff, IA-32
for not-so-critical, not-so-big stuff.

No marketing baloney for you and you don't care what Intel wants you to
hear? That's reasonable and to be expected from technical people.
Itanium is where they intend to put their resources and support for high
end applications, and they apparently have no intention of backing away
from that. Feel free to ignore what they're spending so much money to
tell you. It's your nickel.

RM
 

On Mon, 21 Jun 2004 19:50:55 -0400, dale@edgehp.invalid (Dale Pontius) wrote:
>One simple question about IA64...
>
>What and whose problem does it solve?
>
>As far as I can tell, its prime mission is to solve Intel's problem, and
>rid it of those pesky cloners from at least some segments of the CPU
>marketplace, hopefully an expanding portion.
>
>It has little to do with customers' problems, in fact it makes some
>problems for customers. (Replace ALL software? Why is this good for
>ME?)

I love the smell of irony in the evening...

The need for humongous non-segmented memory space is a driver for "wider
addressing than ia32 provided" architectures.

The real irony is, after years of pain for everyone involved, the ia64 may
just find itself in the dustbin of perpetual non-starters because the pesky
CLONER came up with a "painless" way to extend memory addressing!
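The underlying arithmetic, for anyone counting: a flat 32-bit space tops out at 4 GiB, which is the ceiling that both x86-64 and IA-64 remove (the function name here is just for illustration).

```python
# What a flat address space covers at a given pointer width
# (figures are exact powers of two).
def addressable_gib(bits):
    return 2**bits / 2**30     # bytes -> GiB

print(addressable_gib(32))     # 4.0 GiB: the ia32 flat-space ceiling
print(addressable_gib(64))     # ~1.7e10 GiB: "humongous" indeed
```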

/daytripper (simply delicious stuff ;-)
 

In article <T92Cc.71530$2i5.7652@attbi_s52>,
Robert Myers <rmyers1400@comcast.net> writes:
> Dale Pontius wrote:
>
> <snip>
>
>>
>> The question for IA64 becomes can it bring enough to the table on future
>> revisions to make up for its obstacles. Will >8-way become compelling,
>> and at what price? At this point, AMD is trying to push its Opteron ASPs
>> up, but probably has more flex room than IA64 or Xeon.
>>
>
> At this point, Itanium is _still_ mostly expectation. My point in
> commenting on the book that started the thread is that Intel seemed to
> have no interest in lowering expectations about Itanium.
>
> Intel will do _something_ to diminish the handicap that Itanium
> currently has due to in-order execution. The least painful thing that
> Intel can do, as far as I understand things, is to use speculative
> slices as a prefetch mechanism. That gets a big piece of the advantages
> of OoO without changing the main thread control logic at all. Whether
> that strategy works at an acceptable cost in transistors and power is
> another question.
>
> That single change could rewrite the rules for Itanium, because it will
> take much of the heat off compilation and allow people to see, much more
> often, the kind of performance that Itanium now seems to produce mostly
> in benchmarks.
>
Development cost is a different thing to Intel than to most of the rest
of us. I've heard of "Intellian Hordes" (my perversion of Mongolian),
and it sounds tough to me to coordinate the sheer number of people
they have working on a project. I contrast that with the small team we
have on projects, and our perpetual fervent wish for just a few more
people.

> As to cost, Intel have made it clear that they are prepared to do
> whatever they have to do to make the chip competitive.
>
> As to how the big (more than 8-way) boxes behave, that's up to the
> people who build the big boxes, isn't it? The future big boxes will
> depend on board level interconnect and switching infrastructure, and if
> anybody knows what that is going to look like in Intel's PCI Express
> universe, I wish they'd tell me.
>
Actually, it's none of my business, except as an interested observer. I
don't ever foresee that kind of hardware in my home, and I don't oversee
purchases of that kind of equipment.

> It gets harder to stick with the position all the time, but you still
> have to take a deep breath when betting against Intel. The message
> Intel wants you to hear is: IA-64 for mission critical big stuff, IA-32
> for not-so-critical, not-so-big stuff.
>
My one stake in the IA-64 vs X86-64/IA-32e debate is that I have some
wish to run EDA software on my home machine. I like to have dinner with
the family, and it's about a half-hour each way to/from work. Having
EDA on Linux at home means I can do O.T. after dinner without a drive.

I currently have IA-32 and run EDA software, but that stuff is moving to
64-bit. I can foresee having X86-64 in my own home in the near future,
which keeps me capable. I can't see the horizon where I'll have IA-64
in my home, at the moment. In addition to EDA software, my IA-32
machine also does Internet stuff, plays Quake3, and other clearly non-
work related things. Actually, the work is the extra mission.

> No marketing baloney for you and you don't care what Intel wants you to
> hear? That's reasonable and to be expected from technical people.
> Itanium is where they intend to put their resources and support for high
> end applications, and they apparently have no intention of backing away
> from that. Feel free to ignore what they're spending so much money to
> tell you. It's your nickel.
>
Marketing baloney or not, it's really irrelevant at the moment. I'm a
home user, and Intel's roadmap doesn't put IA-64 in front of me for the
visible horizon. Nor do I have anything to say about purchasing that
calibre of machines at work. I *have* expressed my preference about
seeing EDA software on X86-64 - for the purpose of running it on a home
machine. So not only is it my nickel, they're not even asking me for
it. Any ruminations about IA-64 vs X86-64 are merely that - technical
discussion and ruminations. Anything they're spending money telling me
now is simply cheerleading.

For that matter, since IA-64 isn't on the Intel roadmap for home users
yet, I could well buy an X86-64 machine in the next year or two. When
it's time to step up again, I can STILL examine the IA-64 decision vs
whatever else is on the market, then.

Put simply, at the moment my choices are IA-32, X86-64, and Mac.
Period. Any discussion of IA-64 is just that - discussion, *because* I'm
a technical person.

Dale Pontius
 

Dale Pontius wrote:

<snip>

>
> Development cost is a different thing to Intel than to most of the rest
> of us. I've heard of "Intellian Hordes," (my perversion of Mongolian)
> and that it sounds tough to me to coordinate the sheer number of people
> they have working on a project. I contrast that with the small team we
> have on projects, and our perpetual fervent wish for just a few more
> people.
>

No matter how it turns out, Itanium should be safely in the books for
case studies at schools of management. To my eye, the opportunities and
challenges resemble the opportunities and challenges of big aerospace.
NASA isn't the very best example, but it's the easiest to talk about.
If you have unlimited resources and you're damned and determined to put
a man on the moon, you can do it, no matter how many people you have to
manage to get there. In the aftermath of Apollo, though, with shrinking
budgets and a chronic need to oversell, NASA delivered a Shuttle program
that many see as poorly conceived and executed. Intel and Itanium are
still in the Apollo era in terms of resources.

>
> My one stake in the IA-64 vs X86-64/IA-32e debate is that I have some
> wish to run EDA software on my home machine. I like to have dinner with
> the family, and it's about a half-hour each way to/from work. Having
> EDA on Linux at home means I can do O.T. after dinner without a drive.
>
> I currently have IA-32 and run EDA software, but that stuff is moving to
> 64-bit. I can foresee having X86-64 in my own home in the near future,
> which keeps me capable. I can't see the horizon where I'll have IA-64
> in my home, at the moment. In addition to EDA software, my IA-32
> machine also does Internet stuff, plays Quake3, and other clearly non-
> work related things. Actually, the work is the extra mission.
>

<snip>

> For that matter, since IA-64 isn't on the Intel roadmap for home users
> yet, I could well buy an X86-64 machine in the next year or two. When
> it's time to step up again, I can STILL examine the IA-64 decision vs
> whatever else is on the market, then.
>
> Put simply, at the moment my choices are IA-32, X86-64, and Mac.
> Period. Any discussion of IA-64 is just that -discussion, *because* I'm
> a technical person.
>

The one thing you might care about would be the possibility that the
standard environment for EDA went from x86/Linux to ia64/Whatever. That
could still happen, but it seems like a distant prospect right now.
Itanium seems most plausible to prevail over x86-64 in proprietary
software with high license fees, but that kind of software isn't
generally running next to Quake3 now and probably won't ever be.

RM
 

Dale Pontius wrote:

> In article <rfp3d0livl7lj5st0v2cj8bdho9u3aoejm@4ax.com>,
> George Macdonald <fammacd=!SPAM^nothanks@tellurian.com> writes:
> <snip>
>> As for JCL, I once had a JCL evangelist explain to me how he
>> could use JCL in ways which weren't possible on systems with
>> simpler control statements - conditional job steps, substitution
>> of actual file names for dummy
>> parameters etc... "catalogued procedures"?[hazy again] The guy
>> was stuck in his niche of "job steps" where data used to be
>> massaged from one set of tapes to another and then on in another
>> step to be remassaged into some other record format for storing
>> on another set of tape... all those steps
>> being necessary, essentially because of the sequential tape
>> storage. We'd had disks for a while but all they did was emulate
>> what they used to do with tapes - he just didn't get it.
>>
> I used to do JCL, back when I ran jobs on MVS. After getting used
> to it, and the fact that you allocated or deleted files using the
> infamous IEFBR14, there were things to recommend it.

I didn't have much problem with JCL either, and found it rather
powerful. (and one only needed IEFBR14 for cleanup detail).

> At the very
> least, you edited your JCL, and it all stayed put. Then you
> submitted, and it was in the hands of the gods. None (or very
> little, because there were ways to kill a running job) of this
> Oops! and hit Ctrl-C.

If it was your job, it was rather easy to kill. Of course I
remember when even MVS was about as secure as MSDOS. I learned
much of my MVS stuff (including what initiators were "hot") by
walking through others' JCL and code. Even the "protection"
wasn't. Simply copy the file to another pack and delete it from
the VTOC where it was originally and re-catalog it. Of course RACF
ruined all my fun. ;-) Then there were ways of "hiding" who one
was (starting TSO in the background and submitting a job from there
hid one's identity). ...much more fun than the incomprehensible *ix
stuff. ;-)

> I never had to deal with tapes, fortunately. It was also
> frustrating not having dynamic filenames. There were ways to
> weasel around some of those restrictions, though.

Dynamic file names weren't a problem, AFAIR.

--
Keith
 

daytripper wrote:

> On Mon, 21 Jun 2004 19:50:55 -0400, dale@edgehp.invalid (Dale
> Pontius) wrote:
>>One simple question about IA64...
>>
>>What and whose problem does it solve?
>>
>>As far as I can tell, its prime mission is to solve Intel's
>>problem, and rid it of those pesky cloners from at least some
>>segments of the CPU marketplace, hopefully an expanding portion.
>>
>>It has little to do with customers' problems, in fact it makes
>>some
>>problems for customers. (Replace ALL software? Why is this good
>>for ME?)
>
> I love the smell of irony in the evening...

I rather like my wife doing that in the morning, so I have crisp
shirts to wear (and if you believe that...).
>
> The need for humongous non-segmented memory space is a driver for
> "wider addressing than ia32 provided" architectures.

But, but, bbbb, everyone *knows* there is no reason for 64b
processors on the desktop! Intel says so.

> The real irony is, after years of pain for everyone involved, the
> ia64 may just find itself in the dustbin of perpetual non-starters
> because the pesky CLONER came up with a "painless" way to extend
> memory addressing!

Are you implying that Intel dropped a big ball? ...or a little one,
BIG-TIME!

> /daytripper (simply delicious stuff ;-)

Indeed. ...though remember; no one needs 64bits. no one needs
64bits. no one needs 64bits. no one, no one, no...

--
Keith
 

Robert Myers wrote:

> Dale Pontius wrote:
>
> <snip>
>
>>
>> Development cost is a different thing to Intel than to most of
>> the rest
>> of us. I've heard of "Intellian Hordes," (my perversion of
>> Mongolian) and that it sounds tough to me to coordinate the sheer
>> number of people
>> they have working on a project. I contrast that with the small
>> team we have on projects, and our perpetual fervent wish for just
>> a few more people.
>>
>
> No matter how it turns out, Itanium should be safely in the books
> for
> case studies at schools of management.

Rather like the Tacoma Narrows Bridge movie is required viewing for
all freshmen engineers? ;-)

> To my eye, the
> opportunities and challenges resemble the opportunities and
> challenges of big aerospace. NASA isn't the very best example, but
> it's the easiest to talk about. If you have unlimited resources
> and you're damned and determined to put a man on the moon, you can
> do it, no matter how many people you have to
> manage to get there.

...but Intel hasn't gotten there yet, if they ever will.

> In the aftermath of Apollo, though, with
> shrinking budgets and a chronic need to oversell, NASA delivered a
> Shuttle program
> that many see as poorly conceived and executed. Intel and Itanium
> are still in the Apollo era in terms of resources.

No; IMHO, Intel missed the moon and the Shuttle, and went directly
to the politics of the International Space Station. ...A mission
without a requirement.

> <snip>
<ditto>

>> For that matter, since IA-64 isn't on the Intel roadmap for home
>> users
>> yet, I could well buy an X86-64 machine in the next year or two.
>> When it's time to step up again, I can STILL examine the IA-64
>> decision vs whatever else is on the market, then.
>>
>> Put simply, at the moment my choices are IA-32, X86-64, and Mac.
>> Period. Any discussion of IA-64 is just that -discussion,
>> *because* I'm a technical person.
>>
>
> The one thing you might care about would be the possibility that
> the
> standard environment for EDA went from x86/Linux to ia64/Whatever.
> That could still happen, but it seems like a distant prospect
> right now. Itanium seems most plausible to prevail over x86-64 in
> proprietary software with high license fees, but that kind of
> software isn't generally running next to Quake3 now and probably
> won't ever be.

I know several EDA folks have been reluctant to support Linux and
instead support Windows, for at least the low-end stuff (easier to
restrict licensing). I don't see anyone seriously going for IPF
though. It is *expensive* supporting new platforms. ...which is
why x86-64 is so attractive.

--
Keith
 

K Williams wrote:

> Robert Myers wrote:
>
>
>>K Williams wrote:
>>
>>
>>>Robert Myers wrote:
>>>

>>
>>I'm not sure what kind of complexity you are imagining. Garden
>>variety microprocessors are already implausibly complicated as far
>>as I'm concerned.
>
>
> I guess I'm trying to figure out exactly *what* you're driving at.
> Performance comes with arrays of processors or complex processors.
> Depending on the application, either may win, but there aren't any
> simple-uniprocessors at the high-end. We're long past that
> possibility.
>
>
>>I have some fairly aggressive ideas about what *might* be done
>>with computers, but they don't necessarily lead to greatly
>>complicated
>>machines. Complicated switching fabric--probably.
>
>
> Ok, now we're back to arrays. ...something which I thought you were
> whining about "last" week.
>

If by an array you mean a stream of data and instructions, I suppose
that's general enough.

As to what I want...I think Iain McClatchie did well enough in
presenting what I thought might have been done with Blue Gene in talking
about his "WIZZIER processor" on comp.arch. You can do it for certain
classes of problems...no one doubts that. You can do it with ASICs if
you've got the money...no one doubts that. Can you build a
general-purpose "supercomputer" that way? Not easily.

We are, in any case, a long way from exhausting the architectural
possibilities.

>
>>>>As to the physics...I wish I even had a clue.
>>>
>>>
>>>Gee, I thought you were plugged into that "physics" stuff too.
>>>Perhaps you just like busting concrete? ;-)
>>>
>>
>>No. I started out, in fact, in the building across the boneyard
>>from
>>MRL. I understand the physical limitations well enough. What I
>>don't know about is what might be done to get around those
>>limitations.
>
>
> ...and neither does anyone else. Many people are hard at work
> re-inventing physics. The last time I remember a significant
> speed-bump IBM invested ten-figures in a synchrotron for x-ray
> lithography.

I thought the grand illusion was e-beam lithography.

> Smarter people came up with the diffraction masks.
> Sure, some of these smarter people will come around again, but the
> problems go up exponentially as the feature size shrinks.
>

I'm looking for improvements from: low power operation (the basic
strategy of Blue Gene), improvements in packaging (Sun's slice of the
DARPA pie being one idea, albeit one I'm not crazy about), using
pipelines creatively and aggressively, and more efficient handling of
the movement of instructions and data. If we get better or even
acceptable power-frequency scaling with further scale shrinks,
naturally, I'll take it, but I'm not counting on it.

RM
 
Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

K Williams wrote:
> Robert Myers wrote:
>
>
>>Dale Pontius wrote:
>>
>><snip>
>>
>>>Development cost is a different thing to Intel than to most of
>>>the rest
>>>of us. I've heard of "Intellian Hordes," (my perversion of
>>>Mongolian) and that it sounds tough to me to coordinate the sheer
>>>number of people
>>>they have working on a project. I contrast that with the small
>>>team we have on projects, and our perpetual fervent wish for just
>>>a few more people.
>>>
>>
>>No matter how it turns out, Itanium should be safely in the books
>>for
>>case studies at schools of management.
>
>
> Rather like the Tacoma Narrows Bridge movie is required viewing for
> all freshmen engineers? ;-)
>
>
>>To my eye, the
>>opportunities and challenges resemble the opportunities and
>>challenges of big aerospace. NASA isn't the very best example, but
>>it's the easiest to talk about. If you have unlimited resources
>>and you're damned and determined to put a man on the moon, you can
>>do it, no matter how many people you have to
>>manage to get there.
>
>
> ...but Intel hasn't gotten there yet, if they ever will.
>
>
>>In the aftermath of Apollo, though, with
>>shrinking budgets and a chronic need to oversell, NASA delivered a
>>Shuttle program
>>that many see as poorly conceived and executed. Intel and Itanium
>>are still in the Apollo era in terms of resources.
>
>
> No IMHO, Intel missed the moon and the Shuttle, and went directly
> to the politics of the International Space Station. ...A mission
> without a requirement.
>

The comparison to the International Space Station doesn't seem
especially apt. I made the comparison to Apollo only to make the point
that neither ambitious objectives nor the need to bring enormous
resources to bear dooms an enterprise to failure. Who knows how the
Shuttle, which was not a well-conceived undertaking to begin with, might
have fared without the ruinous political and budgetary pressure to which
the program was subjected. By comparison, Intel seems not to have
followed the path of publicly-funded technology, which is to starve
troubled programs, thereby guaranteeing even more trouble.

One is tempted to make the comparison to hot fusion, a program that,
after decades of lavish funding, has entered an old-age pension phase.
Both hot fusion and Itanium had identifiable problems involving basic
science, and in neither case have those problems yet been solved. With
Itanium, the misconception (that static scheduling can do the job) may
be so severe that the problem can't be fixed in a satisfactory way. As
to hot fusion, who knows...the physics are infinitely more complicated
than the bare Navier-Stokes equations, which themselves are the subject
of one of the Clay Institute's Millennium Problems.

Both Itanium and hot fusion have been overtaken by events. Hot fusion
has become less compelling as other less Faustian schemes for energy
production have become ever more attractive. In the case of Itanium,
who would ever have imagined that x86 would become so good? In
retrospect, an easy call, but if it were so easy in prospect, lots of
things might have happened differently. Should one fault Intel for not
foreseeing the attack of the out-of-order x86? Quite possibly, but I
wouldn't claim to understand the history well enough to make that judgment.

<snip>

>
> I know several EDA folks have been reluctant to support Linux and
> instead support Windows, for at least the low-end stuff (easier to
> restrict licensing).

Right now, Linux is hostile territory for compiled binaries because of
shared libraries. Windows has an equivalent issue with "DLL hell," but
Microsoft never pretended it wasn't a problem (the Linux answer: What's
the problem? Just recompile from source.) and has been working at
solving it, not completely without success. I'm sure the Free Software
Foundation would be just as happy if the problem were never addressed,
and the biggest problems I've encountered have been with GLIBC, but
with Linux spending so much of its time playing a real OS on TV, it
seems inevitable that it will be addressed. For the moment, though,
companies like IBM can't be completely unhappy that professional
support or hacker status is almost a necessity for using proprietary
applications with Linux.
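For what it's worth, the glibc dependence is easy to poke at from a
script. A minimal sketch in Python (gnu_get_libc_version is a real
glibc entry point, but it's absent from musl and other C libraries, so
the probe is written to fail gracefully; the point is just how much a
"Linux binary" quietly assumes about which libc it will find):

```python
import ctypes
import ctypes.util

def glibc_version():
    """Return the glibc version string, or None if the system C
    library is not glibc (musl, a BSD libc, ...)."""
    path = ctypes.util.find_library("c")  # e.g. "libc.so.6" on Linux
    if path is None:
        return None
    libc = ctypes.CDLL(path)
    try:
        fn = libc.gnu_get_libc_version
    except AttributeError:
        # No such symbol: not glibc. Exactly the kind of variation
        # that makes shipping one compiled binary for "Linux" hard.
        return None
    fn.restype = ctypes.c_char_p
    return fn().decode()

print(glibc_version())
```

A proprietary application linked against one glibc version has no such
escape hatch: it either finds the versioned symbols it was built
against or it doesn't start.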

> I don't see anyone seriously going for IPF
> though. It is *expensive* supporting new platforms. ...which is
> why x86-64 is so attractive.
>

Intel's real mistake with Itanium, I think. It's a problem even for
PowerPC.

RM
 
Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

In article <hJKdnXalDN7JFEbdRVn-jA@adelphia.com>,
K Williams <krw@att.biz> writes:
> Robert Myers wrote:
>
<snip>
>> In the aftermath of Apollo, though, with
>> shrinking budgets and a chronic need to oversell, NASA delivered a
>> Shuttle program
>> that many see as poorly conceived and executed. Intel and Itanium
>> are still in the Apollo era in terms of resources.
>
> No IMHO, Intel missed the moon and the Shuttle, and went directly
> to the politics of the International Space Station. ...A mission
> without a requirement.
>
Every now and then, I have to pop up and defend the ISS.

I must agree that at the moment, the ISS has practically NO value to
science. But I must disagree that it has NO value, at all.

At one point it had, and perhaps may have again, value in diplomacy
and fostering international cooperation.

But IMHO the real value of the ISS is not as a SCIENCE experiment, but
as an ENGINEERING experiment. The fact that we're having such a tough
time with it indicates that it is a HARD problem. It's clearly a third
generation space station. The first generation was preassembled, like
Skylab and Salyut, perhaps with a little unfurling and maybe a gizmo or
two docked, but primarily ground-assembled, and sent up. The second
generation was Mir, with a bunch of ground-assembled pieces sent up and
docked. There's some on-orbit assembly, but it's still largely a thing
of the ground.

The ISS has modules all built on the ground, obviously. But the on-
orbit assembly is well beyond that of Mir. It's the next step of a
logical progression.

Some look and say it's hard, let's stop. I say that until we solve the
'minor' problems of the ISS, we're NEVER going to get to anything like
Von Braun's (or 2001: ASO) wheels. Zubrin's proposal, in order to avoid
requiring an expensive space station, went to the extreme of having
nothing to do with one, even if it already were to exist. But until we
get to some sort of on-orbit, or at least off-Earth assembly capability
we're going to be limited to something in the 30ft-or-less diameter
that practically everything we've ever sent up has had.

Oh, the ISS orbit is another terrible obstacle. But at the moment, it
clearly permits Russian launches, and the program would be in even
worse trouble without them.

But IMHO, the ENGINEERING we're learning, however reluctantly and
slowly, is ESSENTIAL to future steps in space.

Dale Pontius
 
Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

Dale Pontius wrote:

> In article <hJKdnXalDN7JFEbdRVn-jA@adelphia.com>,
> K Williams <krw@att.biz> writes:
>> Robert Myers wrote:
>>
> <snip>
>>> In the aftermath of Apollo, though, with
>>> shrinking budgets and a chronic need to oversell, NASA delivered
>>> a Shuttle program
>>> that many see as poorly conceived and executed. Intel and
>>> Itanium are still in the Apollo era in terms of resources.
>>
>> No IMHO, Intel missed the moon and the Shuttle, and went
>> directly
>> to the politics of the International Space Station. ...A mission
>> without a requirement.
>>
> Every now and then, I have to pop up and defend the ISS.

Ok, I'll play devil. ;-)

> I must agree that at the moment, the ISS has practically NO value
> to science. But I must disagree that it has NO value, at all.
>
> At one point it had, and perhaps may have again, value in
> diplomacy and fostering international cooperation.

Where's the beef? I *did* say "to the *politics* (emphasis added)
of the International Space Station". ;-)

> But IMHO the real value of the ISS is not as a SCIENCE experiment,
> but as an ENGINEERING experiment. The fact that we're having such
> a tough time with it indicates that it is a HARD problem. It's
> clearly a third generation space station. The first generation was
> preassembled, like Skylab and Salyut, perhaps with a little
> unfurling and maybe a gizmo or two docked, but primarily
> ground-assembled, and sent up. The second generation was Mir, with
> a bunch of ground-assembled pieces sent up and docked. There's
> some on-orbit assembly, but it's still largely a thing of the
> ground.

It's absolutely an engineering experiment. We already knew the
"science". Though there are problems, it went together more easily
than most erector-set projects (surprising all). The problems,
IMO, have been mostly political (and as a subset, financial).

> The ISS has modules all built on the ground, obviously. But the
> on- orbit assembly is well beyond that of Mir. It's the next step
> of a logical progression.

Progression to what? I see no grand-plan that requires ISS.
Freedom was cut down to "Fred", because of the massive costs, then
morphed into ISS when it turned into a political tool.

> Some look and say it's hard, let's stop. I say that until we solve
> the 'minor' problems of the ISS, we're NEVER going to get to
> anything like Von Braun's (or 2001: ASO) wheels. Zubrin's
> proposal, in order to avoid requiring an expensive space station,
> went to the extreme of having nothing to do with one, even if it
> already were to exist. But until we get to some sort of on-orbit,
> or at least off-Earth assembly capability we're going to be
> limited to something in the 30ft-or-less diameter that practically
> everything we've ever sent up has had.

I simply don't see ISS as interesting science or engineering. It's
a cut-down compromise done on the cheap with a very foggy mission
statement. It seems politics rules any possible science. There
was a good article (titled "1000 days", or some such) on this in
the last issue of _Air_and_Space_.

> Oh, the ISS orbit is another terrible obstacle. But at the moment,
> it clearly permits Russian launches, and would be in even worse
> trouble, without.

Sure. A 57 degree inclination is useful for other reasons, as well.
The 25 degree orbit out of the cape would save little, other than
fuel. A polar or even sun-synchronous orbit would be "interesting"
too, but for "other" reasons, which wouldn't be in the spirit of
the ISS. ;-)

> But IMHO, the ENGINEERING we're learning, however reluctantly and
> slowly, is ESSENTIAL to future steps in space.

I disagree, in that ISS isn't doing what was promised. It is not
providing anything essential to the progress, since we don't even
know what we're progressing to.

--
Keith

> Dale Pontius