Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel
Rupert Pigott wrote:
> Robert Myers wrote:
>
>> Rupert Pigott wrote:
>>
<snip>
>>
>> You are apparently arguing for the desirability of folding the
>> artificial computational boundaries of clusters into software. If
>
>
> That happens with SSI systems too. There is a load of information that
> has been published about scaling on SGI's Origin machines over the
> years. IIRC Altix is based on the same Origin 3000 design. You may
> remember that I quizzed Rob Warnock on this; he said that there were,
> in practice, little gotchas that tend to crop up at particular #'s of
> procs. He even noted that the gotcha processor counts tended to change
> with the particular generation of Origin.
>
>> that's a necessity of life, I can learn to live with it, but I'm
>> having a hard time seeing it as desirable. We are so fortunate as to
>> live in a universe that presents itself to us in midtower-sized
>> chunks? I'm worried. ;-).
>
>
> In my mind it's a question of fitting our computing effort to reality
> as opposed to living in an Ivory Tower. Some goals, while worthy,
> desirable, or even partially achievable, are basically impossible to
> achieve in reality. A genuinely *flat* address space is impossible
> right here and now. That SSI Altix box will *not* have a *flat* address
> space in terms of access time. It is a NUMA machine.
>
Well, yes, it is. The spread in latencies is more like half a
microsecond, as opposed to five microseconds for the latest and greatest
of the DoE build-to-order specials.
On the question of Ivory Towers vs. reality, I believe that I am on the
side of the angels, naturally. If you believe the right question really
is: "What's the least expensive way we can get a high Linpack score?",
then clusters are a slam dunk, but I don't think that anybody worth
talking to on the subject really thinks that's the right question to be
asking.
As to access to 1000-node and even bigger machines, I don't need them.
What I need is to know what kind of machine a code is likely to run on
when somebody decides an NCSA-type installation is required.
How you will _ever_ scale _anything_ to the kinds of memory and compute
requirements needed to do even some very pedestrian problems
properly is my real concern, and, from that point of view, no
architecture currently on the table, short of specialized hardware, is
even in the right universe.
Given that _nothing_ currently available can really do the physics
right--with the possible exception of things like the Cell-like chips
the Columbia QCD people are using--and that nothing currently available
really scales in a way that I can imagine, I'm inclined to give heavy
emphasis to usability.
>>> It's a
>>> matter of choice over the long run... If you use the unique features
>>> of a kilonode Itanium box then you're basically locked-in. Clearly
>>> this is not an issue for some establishments, Cray customers are a
>>> good example.
>>> :P
>>>
>>
>> Can you give an example of something that you think would happen?
>
>
> Depends on the app. Stuff like memory mapping one large file for read
> and occasional write could cause some fantastic locking + latency
> issues when it comes to porting.
>
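To make the pattern you're describing concrete, here is a toy sketch of
my own (the path and offsets are invented, error checking is left out):
one big file mapped shared, mostly read, occasionally written. Under a
single OS that's pretty much the whole program; porting it to a cluster
means reinventing it.

/* Toy sketch only: map one large file shared, mostly reads, with
   the occasional write.  Error checking omitted for brevity. */
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/data/bigfile.dat", O_RDWR);   /* path is invented */
    struct stat st;
    fstat(fd, &st);

    char *p = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);

    volatile char c = p[12345];   /* reads are cheap ...               */
    p[12345] = c + 1;             /* ... the occasional write is where
                                     the locking/latency fun starts    */

    munmap(p, st.st_size);
    close(fd);
    return 0;
}
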
I understand just enough about operating systems to know that building a
1000-node image that runs on realizable hardware is a real
tour-de-force. I also understand that you can take off-the-shelf copies
of, say, RedHat Linux, and some easily-obtainable clustering software
and (probably) get a thousand beige boxes to run like a kilonode
cluster. Someone else (Linus, SGI, et al.) wrote the Altix OS. Someone
else (Linus, RedHat, et al.) wrote the OS for the cluster nodes. I don't
want to fiddle with either one. You want me to believe that I am better
off synchronizing processes and exchanging data across Infiniband stacks,
with trips in and out of kernel and user space and heaven only knows how
many control handoffs for each exchange, than I am reading and writing to
my own user space under the control of a single OS, and I just don't.
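To put the contrast in code form--a toy sketch of my own, not anybody's
benchmark; the ring exchange and the imaginary shared_array are only
illustrations:

/* Toy sketch of the contrast, nothing more.
   Cluster style: every exchange is an explicit message, with the
   interconnect stack and its kernel/user handoffs in the loop.
   SSI style: the "exchange" is an ordinary load from an address that
   happens to live on another node; one OS and the hardware hide the
   distance (at NUMA cost). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, nprocs, right, left;
    double mine, theirs;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    mine = (double) rank;

    /* Cluster: shift a value around a ring by message passing. */
    right = (rank + 1) % nprocs;
    left  = (rank - 1 + nprocs) % nprocs;
    MPI_Sendrecv(&mine, 1, MPI_DOUBLE, right, 0,
                 &theirs, 1, MPI_DOUBLE, left, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    /* SSI: the equivalent would be something like
           theirs = shared_array[left];
       a plain read in my own user space under one OS. */

    printf("rank %d got %g\n", rank, theirs);
    MPI_Finalize();
    return 0;
}
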
<snip>
>
> I mentioned Opteron; if HT really does suffer from crash+burn on
> comms failure then it is holding itself back. If that ain't the
> case I'd have figured that a tiny form factor Opteron + DRAM +
> router cards would be a reasonable component for high-density
> clusters and beige SSI machines. You'd need some facility for
> driving some links for longer distances than HT currently allows
> too ($$$). The next thing holding you back is tuning the OS + Apps
> to a myriad of possible configurations...
> :(
I'm guessing that, the promise of Opteron for HPC notwithstanding, HT is
going to be marginalized by PCI Express/Infiniband.
> [SNIP]
>
>> The optimistic view is that the chaos we currently see is the HPC
>> equivalent of the Cambrian explosion and that natural selection
>> will eventually give us a mature and widely-adopted architecture. My
>> purpose in starting this discussion was simply to opine that single
>> image architectures have some features that make them seem promising
>> as a survivor--not a widely-held view, I think.
>
>
> I'm sure they'll have their place. But in the long run I think that
> PetaFLOP pressure will tend to push people towards message passing
> style machines. Consider this, though: the Internet is becoming more and
> more prominent in daily life. The Spooks must have a fair old time
> keeping up with the sheer volume of data flowing around the globe.
> Distributed processing is a natural fit here; SSI machines just would
> not make sense. More and more governments and their civil servants
> will want to make use of this surveillance resource too, check out
> the rate at which legislation is legitimising their intrusion on the
> individual's privacy. The War on Terror has added more fuel to that
> growth market too.
>
Nothing that _I_ say about distributed processing is going to slow it
down, that's for sure, and that isn't my intent. If you've got a
google-type task, you should use google-type hardware. Computational
physics is not a google-type task.
RM