Is Centrino brand all that strong?

Archived from groups: comp.sys.ibm.pc.hardware.chips

On Wed, 09 Feb 2005 21:56:08 -0500, keith <krw@att.bizzzz> wrote:

>On Tue, 08 Feb 2005 22:45:53 -0500, Robert Myers wrote:
>

<snip>

>
>Seeee, that's where we differ. I'm a "latency" bigot, and I understand
>that my problem is bigger than yours. Bandwidth is too easy.
>
The engineer's mistake: thinking a problem is important because it's
hard. The current memory latency to processor cycle time ratio is a
couple hundred. Did _anybody_ think we'd get away with that?

Latency is not the enemy. Unpredictability is the enemy. With
sufficiently predictable dataflow, you can fake latency, but you
_cannot_ fake bandwidth.

You have unpredictable data and need global access with low latency?
Where can I buy something that does that...cheap?

Hardware is what you understand. Hardware is the topic of the group.
The limits to what you can do with hardware to beat latency are
really...hard.

On the other hand, you have to work really, really hard even to fake
randomness. Most of the gains to be made in beating latency are in
software.
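That tradeoff can be put in a toy timing model (all numbers below are illustrative assumptions, not measurements): when the access pattern is predictable, prefetch overlaps the latency and cost approaches the bandwidth limit; when it isn't, every access eats the full round trip.

```python
# Toy timing model (illustrative assumptions, not measurements):
# a predictable stream lets a prefetcher overlap memory latency with
# transfers, so cost approaches the bandwidth limit; a pointer chase
# pays the full latency on every access.

LATENCY = 200      # cycles per uncovered miss (the ~200:1 ratio above)
TRANSFER = 2       # cycles per element at full bandwidth (assumed)
N = 1_000_000      # elements touched

# Predictable stream: latency paid once, then transfers pipeline.
streaming_cycles = LATENCY + N * TRANSFER      # 2,000,200

# Unpredictable chase: every access pays the full round trip.
random_cycles = N * LATENCY                    # 200,000,000

print(random_cycles / streaming_cycles)        # ~100x gap
```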

<snip>

>>
>> Oh, science is doing just fine these days. Aside from the oil
>> companies, I want to see if a company doing, say, drug discovery buys
>> one.
>
>What they're doing (or not) should be instructive. They have the bux to
>force the issue if they see some profit at the end of the tunnel. Since
>apparently they don't (correct me if I'm wrong)...
>
The people who are doing work where there's real money to be made
aren't necessarily advertising what they're doing. That leaves the
impression that the guys with all the color plots on the web are the
ones doing the real work. They aren't.

A clue to that reality came out on comp.arch when I took exception to
a DoE claim that Power was the leading architecture for HPC. That
exchange smoked out the existence of huge oil-company clusters of x86
that could show up on Top 500 but don't (why would they?). There is
custom hardware in use in biotech.

>> There is an interesting post on realworldtech by someone who authors
>> things like chess-playing software about the importance of having true
>> random access to memory for things like search (which is what much of AI
>> is coming down to). He also mentions the FFT. You can dismiss it as my
>> private obsession, if you like, but I prefer to think of it as a really
>> strong intuition as to what computing is really all about. Or, rather,
>> a strong intuition as to what a real measure of capability is.
>
>My *strong* intuition is opposite of yours, apparently. I really, really,
>believe we're latency bound, not bandwidth bound. All the work seems to
>be going into trying to excuse latency.
>
Latency is incredibly important for performance of big boxes where
unpredictable contention for shared objects is the bottleneck. Since
those big boxes are designed for such (commercial) applications,
that's where the money and the effort go.

>> You are absolutely right: the guy with the checkbook writes the order.
>> If the guy with the checkbook wants to keep doing what was already done
>> twenty years ago, only just more of it, there is not much I can do about
>> it.
>
>The guy with the checkbook wins. The guy with the biggest one can afford
>to dabble in new things like Itanic or Cell. At least the jury is still
>out on one of these. ;-)

If you stand _way_ back, some important technologies have been frozen
for a long time: the internal combustion engine, rockets, jet engines,
turbines, electric motors and generators: the mainsprings of
industrialized civilization. Microprocessors, which are mostly just
shrunk-down versions of what was pretty well developed by the sixties,
are going to be the same way? Maybe. Intel made a bad bet on Itanium
changing the rules. It didn't and it's not going to, although Intel
might still use it successfully to fence off part of the market. I'm
betting on the whole paradigm of microprocessors to change from
fetching of instructions and data from memory to cache and registers
to on the fly processing of packets. I could be just as wrong about
that as Intel was about VLIW.

RM
 

In article <lnum019pef50bhnlhc839dreqd66hfs5i8@4ax.com>, rmyers1400
@comcast.net says...
> On Wed, 09 Feb 2005 21:56:08 -0500, keith <krw@att.bizzzz> wrote:
>
> >On Tue, 08 Feb 2005 22:45:53 -0500, Robert Myers wrote:
> >
>
> <snip>
>
> >
> >Seeee, that's where we differ. I'm a "latency" bigot, and I understand
> >that my problem is bigger than yours. Bandwidth is too easy.
> >
> The engineer's mistake: thinking a problem is important because it's
> hard.

Bullshit.

> The current memory latency to processor cycle time ratio is a
> couple hundred. Did _anybody_ think we'd get away with that?

Irrelevant what "they" thought. "They" knew we'd have to because it's
a real problem.

> Latency is not the enemy. Unpredictability is the enemy. With
> sufficiently predictable dataflow, you can fake latency, but you
> _cannot_ fake bandwidth.

Again, bullshit. If you know the answer, why are you calculating it?
You can *buy* bandwidth. ...and no you can't fake latency. You can
guess at what you'll need, but when you guess wrong you still have to
pay the piper.

> You have unpredictable data and need global access with low latency?
> Where can I buy something that does that...cheap?

That's the point. One can't buy latency. It's not even expensive.
One *can* buy bandwidth, but once you have enough, more doesn't matter.
That's not true with latency. Lower is *always* better.
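One way to put numbers on "buying bandwidth but not latency" (a sketch, with assumed figures): by Little's law, sustaining bandwidth B at latency L needs roughly L x B bytes in flight, so bought bandwidth is only usable if the machine can keep enough requests outstanding.

```python
# Little's law sketch with assumed numbers: data in flight =
# latency x bandwidth. More bandwidth at fixed latency demands more
# outstanding misses, which is exactly what's hard to buy.

LATENCY_NS = 100    # memory round trip in ns (assumed)
LINE_BYTES = 64     # cache line size

for bw_gb_s in (10, 50):
    bytes_in_flight = bw_gb_s * LATENCY_NS        # GB/s * ns = bytes
    lines_in_flight = bytes_in_flight / LINE_BYTES
    print(bw_gb_s, lines_in_flight)  # 10 -> 15.625 lines, 50 -> 78.125
```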

> Hardware is what you understand. Hardware is the topic of the group.
> The limits to what you can do with hardware to beat latency are
> really...hard.

Exactly. 299,792,458 m/s isn't just a good idea. It's the law.
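A back-of-envelope check on that law (clock rates here are just sample figures): at a few GHz, light itself covers only centimeters per cycle, so any off-chip round trip costs many cycles no matter what you pay.

```python
# Distance light travels in one clock cycle at a few sample frequencies.
C = 299_792_458.0   # speed of light, m/s

for ghz in (1, 3, 5):
    meters_per_cycle = C / (ghz * 1e9)
    print(f"{ghz} GHz: {meters_per_cycle:.3f} m per cycle")
# ~0.30 m at 1 GHz, ~0.10 m at 3 GHz -- and a memory access is a round trip.
```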

> On the other hand, you have to work really, really hard even to fake
> randomness. Most of the gains to be made in beating latency are in
> software.

It's easy to come close enough to stall a pipe.

> <snip>
>
> >>
> >> Oh, science is doing just fine these days. Aside from the oil
> >> companies, I want to see if a company doing, say, drug discovery buys
> >> one.
> >
> >What they're doing (or not) should be instructive. They have the bux to
> >force the issue if they see some profit at the end of the tunnel. Since
> >apparently they don't (correct me if I'm wrong)...
> >
> The people who are doing work where there's real money to be made
> aren't necessarily advertising what they're doing. That leaves the
> impression that the guys with all the color plots on the web are the
> ones doing the real work. They aren't.

I don't see you forking over a couple of hundred $Megabux to solve your
problems. I'm sure I could direct you to the appropriate people if
your pockets are that deep (as Del has told you before).

> A clue to that reality came out on comp.arch when I took exception to
> a DoE claim that Power was the leading architecture for HPC. That
> exchange smoked out the existence of huge oil-company clusters of x86
> that could show up on Top 500 but don't (why would they?). There is
> custom hardware in use in biotech.

<shrug>

> >> There is an interesting post on realworldtech by someone who authors
> >> things like chess-playing software about the importance of having true
> >> random access to memory for things like search (which is what much of AI
> >> is coming down to). He also mentions the FFT. You can dismiss it as my
> >> private obsession, if you like, but I prefer to think of it as a really
> >> strong intuition as to what computing is really all about. Or, rather,
> >> a strong intuition as to what a real measure of capability is.
> >
> >My *strong* intuition is opposite of yours, apparently. I really, really,
> >believe we're latency bound, not bandwidth bound. All the work seems to
> >be going into trying to excuse latency.
> >
> Latency is incredibly important for performance of big boxes where
> unpredictable contention for shared objects is the bottleneck. Since
> those big boxes are designed for such (commercial) applications,
> that's where the money and the effort go.

Bingo! When you can convince the monkeys-with-money there's as much
money in Cray-1's, you'll have 'em coming out of your ears. I believe
strongly in the "existence theorem".

> >> You are absolutely right: the guy with the checkbook writes the order.
> >> If the guy with the checkbook wants to keep doing what was already done
> >> twenty years ago, only just more of it, there is not much I can do about
> >> it.
> >
> >The guy with the checkbook wins. The guy with the biggest one can afford
> >to dabble in new things like Itanic or Cell. At least the jury is still
> >out on one of these. ;-)
>
> If you stand _way_ back, some important technologies have been frozen
> for a long time: the internal combustion engine, rockets, jet engines,
> turbines, electric motors and generators: the mainsprings of
> industrialized civilization.

Sure. These things have already had the innovation squoze out of 'em.
When you're up against the Carnot efficiency, where's the money?

> Microprocessors, which are mostly just
> shrunk down versions of what was pretty well developed by the sixties
> are going to be the same way? Maybe.

IMO, yes. Note that "shrunk down" improves latency. ;-)

> Intel made a bad bet on Itanium
> changing the rules. It didn't and it's not going to, although Intel
> might still use it successfully to fence off part of the market.

Sure, and that's why I guessed it would fail six or seven years ago.
What does Itanic bring to the table that isn't already there? Why
would a money-monkey want to jump onto a proprietary platform? Indeed,
why would they ever consider the expense of *moving* to one?

> I'm
> betting on the whole paradigm of microprocessors to change from
> fetching of instructions and data from memory to cache and registers
> to on the fly processing of packets. I could be just as wrong about
> that as Intel was about VLIW.

<shrug> Could be, not that it matters much at the end of the day.

--
Keith
 

On Thu, 10 Feb 2005 11:58:21 -0500, Keith R. Williams <krw@att.bizzzz>
wrote:

>In article <lnum019pef50bhnlhc839dreqd66hfs5i8@4ax.com>, rmyers1400
>@comcast.net says...

<snip>

>
>That's the point. One can't buy latency. It's not even expensive.
>One *can* buy bandwidth, but once you have enough, more doesn't matter.
>That's not true with latency. Lower is *always* better.
>

Lower latency just means you have more slop in scheduling, that's all.

There _are_ circumstances, like transaction processing, where you
can't do much to beat latency. I don't do those kinds of
applications. Neither does anybody doing computation, as opposed to
transaction processing.

The enemy, to repeat, is unpredictability. If you're stuck with
unpredictable data flow, you're stuck with unpredictable data flow.
With a 200:1 ratio of memory access time to CPU cycle time, you're
going to spend much of your time stalled anyway, which, in fact, is
exactly the way that CPUs involved in transaction processing behave.
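The stall arithmetic behind that claim can be sketched with the standard average-memory-access-time formula (the miss rates below are assumed for illustration):

```python
# AMAT = hit_time + miss_rate * miss_penalty. With a 200-cycle penalty
# (the 200:1 ratio above), even small miss rates leave the CPU stalled
# most of the time. Miss rates are illustrative assumptions.

HIT = 1             # cycles on a cache hit
PENALTY = 200       # cycles on a miss

def amat(miss_rate):
    return HIT + miss_rate * PENALTY

for mr in (0.01, 0.05, 0.20):
    t = amat(mr)
    stalled = (t - HIT) / t   # fraction of time waiting on memory
    print(mr, t, round(stalled, 2))  # 0.01 -> 67% stalled, 0.05 -> 91%
```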

<snip>

>
>Bingo! When you can convince the monkeys-with-money there's as much
>money in Cray-1's, you'll have 'em coming out of your ears. I believe
>strongly in the "existence theorem".
>

Money, fortunately, isn't the answer. The national labs have a very
short time horizon, despite their visionary claims. Venture
capitalists aren't interested, generally speaking--same reason. DARPA
might be interested if you could do an autonomous vehicle, but the
problem there is algorithms, not hardware. The streaming hardware is
coming, anyway, thanks to ATI, and nVidia for sure, maybe
IBM/Sony/Toshiba if only I understood Cell better.

<snip>

>
>> Microprocessors, which are mostly just
>> shrunk down versions of what was pretty well developed by the sixties
>> are going to be the same way? Maybe.
>
>IMO, yes.

Well, I'm hoping you're wrong. Evidence coming in for GPU's is that
there is a revolution coming there. If there is a show-stopper, it
will be _bandwidth_.

RM
 

On Thu, 10 Feb 2005 18:33:42 -0500, Robert Myers wrote:

> On Thu, 10 Feb 2005 11:58:21 -0500, Keith R. Williams <krw@att.bizzzz>
> wrote:
>
>>In article <lnum019pef50bhnlhc839dreqd66hfs5i8@4ax.com>, rmyers1400
>>@comcast.net says...
>
> <snip>
>
>>
>>That's the point. One can't buy latency. It's not even expensive.
>>One *can* buy bandwidth, but once you have enough, more doesn't matter.
>>That's not true with latency. Lower is *always* better.
>>
>
> Lower latency just means you have more slop in scheduling, that's all.

Or like most of the world, you *cannot* schedule. If scheduling were easy
Itanic would still be floating. The existence theorem says that it's not.
Thus latency is the lynchpin to performance. Maybe *you* have some
"embarrassingly linear" data flow, but the real world doesn't. Were this
the general case we'd have a P-V with a hundred-stage pipe. We don't;
there's that ugly existence theorem at work again.

> There _are_ circumstances, like transaction processing, where you can't
> do much to beat latency. I don't do those kinds of applications.
> Neither does anybody doing computation, as opposed to transaction
> processing.

Really? You don't care about conditions? ...don't need precise
exceptions? You're by *far* in the minority, Robert. If your data is so
homogeneous, what's the interest in computing the answer? It'll always be
the same.

> The enemy, to repeat, is unpredictability. If you're stuck with
> unpredictable data flow, you're stuck with unpredictable data flow. With
> a 200:1 memory access:CPU cycle time, you're going to spend much of your
> time stalled, anyway, which, in fact, is exactly the way that CPU's
> involved in transaction processing behave.

Thank you. The processors doing this kind of work outnumber your
"workload" by 1E10:1, at least.
>
>>Bingo! When you can convince the monkeys-with-money there's as much
>>money in Cray-1's, you'll have 'em coming out of your ears. I believe
>>strongly in the "existence theorem".
>>
>>
> Money, fortunately, isn't the answer.

It is, if you want to make *your* widget. Otherwise others get to make
*their* widget: you lose.

> The national labs have a very
> short time horizon, despite their visionary claims. Venture capitalists
> aren't interested, generally speaking--same reason. DARPA might be
> interested if you could do an autonomous vehicle, but the problem there
> is algorithms, not hardware.

Hmm, seems Itanic hit that same 'berg. There's that ugly "existence
theorem" again.

> The streaming hardware is coming, anyway,
> thanks to ATI, and nVidia for sure, maybe IBM/Sony/Toshiba if only I
> understood Cell better.

Ah, but that pisses you off even more, because their market has no need
for a DP FPU. ...and you can't afford to have them make one for you that
does. There's that money thing again, and the dreaded "existence theorem".

> <snip>
>
>
>>> Microprocessors, which are mostly just shrunk down versions of what
>>> was pretty well developed by the sixties are going to be the same way?
>>> Maybe.
>>
>>IMO, yes.
>
> Well, I'm hoping you're wrong. Evidence coming in for GPU's is that
> there is a revolution coming there. If there is a show-stopper, it will
> be _bandwidth_.

Nah, it'll be latency. The pipes will still be starved. The bandwidth
will match the *number* of pipes. The pipes themselves will still be
latency limited.

--
Keith
 

On Thu, 10 Feb 2005 10:16:47 -0500, Robert Myers wrote:

> On Wed, 09 Feb 2005 23:31:30 -0500, George Macdonald
> <fammacd=!SPAM^nothanks@tellurian.com> wrote:
>
>>On Wed, 09 Feb 2005 09:33:47 -0500, Robert Myers <rmyers1400@comcast.net>
>>wrote:
>>
>>>On Wed, 09 Feb 2005 08:21:39 -0500, George Macdonald
>>><fammacd=!SPAM^nothanks@tellurian.com> wrote:
>>>
>>><snip>
>>>
>>>>
>>>>>I made an intense effort to understand what was going on with process
>>>>>technology when all the surprises came down at 90nm, but since then
>>>>>I've lost track of process technology. If Intel really has lost the
>>>>>playbook, that would be news, but I don't really believe it.
>>>>
>>>>What's going on with process tech does not really have to be understood at
>>>>the detail level to see the picture. IBM chief technologists, among
>>>>others, have told us of the "end of scaling" - Intel has demonstrated the
>>>>effect with 90nm P4. We know, as Keith has said right here, that the two
>>>>critical issues involved are power density and leakage. OTOH nobody is
>>>>talking of abandoning 65nm and lower, though they do talk of increasing
>>>>difficulty.
>>>>
>>>But I don't know whether to take "the end of scaling" seriously or
>>>not. What about nanotubes?
>>
>>I think it's serious OK -- we already have evidence -- and I'm not sure
>>which part of the problem nanotubes solves...
>
> Mobility. Faster gates at lower voltage, smallest possible voltage
> being the goal of low power operation.

We've been hearing the same tune from the optics guys for a quarter of a
century too. ...nothing interesting yet.

--
Keith
 

On Thu, 10 Feb 2005 08:36:20 -0600, chrisv wrote:

> keith wrote:
>
>>On Wed, 09 Feb 2005 08:10:04 -0600, chrisv wrote:
>>
>>> keith wrote:
>>>
>>>>On Mon, 07 Feb 2005 11:06:00 -0600, chrisv wrote:
>>>>>
>>>>> I still remember those ads, and Corinthian leather was only one of the
>>>>> awesome features that were touted. Others were "AM/FM cassette radio"
>>>>> (Wow! Those are a whole $20 at Best Buy.) and, my personal favorite,
>>>>> "optional wire wheel covers" (Wow! I can pay extra and get wire
>>>>> wheel covers? And I can't do that on other cars?)
>>>>
>>>>What do you pay for an auto radio today? Just because you can buy a
>>>>"transistor" radio for $20 at K-Mart doesn't mean the radio in the car is
>>>>the same thing.
>>>
>>> Yeah, the $20 K-Mart radio was probably better. I'm only being
>>> slightly facetious when I say that.
>>
>><serious mode> Tell me that again after 100K New England miles.
>>Automotive electronics is some pretty rugged stuff. The environment is
>>rather harsh.
>
> I was referring to aftermarket automotive radios being quite
> affordable.

$20?

> Is the OEM radio "better built" and thus more costly to
> make?

Generally. ...at least moreso than a $20 radio.

> Maybe, but the point is, it was not a great "feature" for a car
> to have a kool AM/FM cassette radio - they simply were not uncommon or
> expensive.

Sure, but not $20 either.

--
Keith
 

On Thu, 10 Feb 2005 20:39:58 -0500, keith <krw@att.bizzzz> wrote:

>On Wed, 09 Feb 2005 08:21:39 -0500, George Macdonald wrote:
>
>> On Tue, 08 Feb 2005 22:26:35 -0500, keith <krw@att.bizzzz> wrote:
>>

>>>...but I wouldn't lift a finger if France was run-over by the Germans,
>>>once again.
>>
>> Most of Europe is err, overrun by Germans now... in a slightly different
>> way of course but the effect is similar: they buy up companies that are
>> going through a weak spell,
>
>Like Chrysler? They're enough to put DB through a weak spell!
>...forever. I still don't understand *what*they*were*thinking*. Sorta
>like HP buying the 'Q' and then dumping Alpha. ...another of the
>great corporate *what*they*were*thinking* moments. At least the latter
>has been admitted now.

From what I hear, DB is learning how to produce even worse junk than
they've had a habit of. They couldn't believe that people would buy the
stuff so they've adopted the "strategy".

As for HP, now that Carleton has gone I wonder what Plan B is?

>> which have something they want, e.g. Bentley,
>> Rolls Royce et al. and move production to the err, Fatherland. Much of
>> this is against Euro-rules now of course, e.g. the Siemens division
>> transformation to Infineon and "move" from U.K. to Germany but
>> apparently there are "ways". Now the French car companies, after a
>> period of reasonable success, are showing signs of flagging a bit and
>> I'm just waiting for VW to put a move on Peugeot or Renault... talk
>> about putting the cat among the pigeons.... Sacre Bleu!!🙂
>
>;-) Then there's Ford buying Jag. Huh? Volvo was almost as bad.

Don't forget Aston Martin - with a back-order log of ~2 years, they're the
only Ford division which has been turning a profit recently. I hear they
have a guy with a sander/polisher scrubbing off the blue ovals on the
gear-knobs, door latch handles etc.🙂

>> BTW, completely OT here but if you haven't come across
>> http://diplomadic.blogspot.com/ yet it's well worth a visit, since it's
>> being wound up and has some hair-raising stuff on GW, Oil-for-Food and
>> yes, tsunami relief... much of it straight from first-hand witness. I
>> only found it recently so if you're already in the know......
>
>I looked briefly today, but didn't even have time to figure out how to
>navigate the site.

Just scroll down to "Tim Blair" and "UNHonesty" - it's a hoot!

--
Rgds, George Macdonald
 

On Thu, 10 Feb 2005 20:28:03 -0500, keith <krw@att.bizzzz> wrote:

>On Thu, 10 Feb 2005 10:16:47 -0500, Robert Myers wrote:
>
>> On Wed, 09 Feb 2005 23:31:30 -0500, George Macdonald
>> <fammacd=!SPAM^nothanks@tellurian.com> wrote:
>>
>>>On Wed, 09 Feb 2005 09:33:47 -0500, Robert Myers <rmyers1400@comcast.net>
>>>wrote:
>>>
>>>>On Wed, 09 Feb 2005 08:21:39 -0500, George Macdonald
>>>><fammacd=!SPAM^nothanks@tellurian.com> wrote:
>>>>
>>>><snip>
>>>>
>>>>>
>>>>>>I made an intense effort to understand what was going on with process
>>>>>>technology when all the surprises came down at 90nm, but since then
>>>>>>I've lost track of process technology. If Intel really has lost the
>>>>>>playbook, that would be news, but I don't really believe it.
>>>>>
>>>>>What's going on with process tech does not really have to be understood at
>>>>>the detail level to see the picture. IBM chief technologists, among
>>>>>others, have told us of the "end of scaling" - Intel has demonstrated the
>>>>>effect with 90nm P4. We know, as Keith has said right here, that the two
>>>>>critical issues involved are power density and leakage. OTOH nobody is
>>>>>talking of abandoning 65nm and lower, though they do talk of increasing
>>>>>difficulty.
>>>>>
>>>>But I don't know whether to take "the end of scaling" seriously or
>>>>not. What about nanotubes?
>>>
>>>I think it's serious OK -- we already have evidence -- and I'm not sure
>>>which part of the problem nanotubes solves...
>>
>> Mobility. Faster gates at lower voltage, smallest possible voltage
>> being the goal of low power operation.
>
>We've been hearing the same tune from the optics guys for a quarter of a
>century too. ...nothing interesting yet.

For the record, in light of the reply I just posted, let us stress that you
and I are not collaborating here.🙂

--
Rgds, George Macdonald
 

On Thu, 10 Feb 2005 20:25:59 -0500, keith <krw@att.bizzzz> wrote:

>On Thu, 10 Feb 2005 18:33:42 -0500, Robert Myers wrote:
>
>> On Thu, 10 Feb 2005 11:58:21 -0500, Keith R. Williams <krw@att.bizzzz>
>> wrote:
>>
>>>In article <lnum019pef50bhnlhc839dreqd66hfs5i8@4ax.com>, rmyers1400
>>>@comcast.net says...
>>
>> <snip>
>>
>>>
>>>That's the point. One can't buy latency. It's not even expensive.
>>>One *can* buy bandwidth, but once you have enough, more doesn't matter.
>>>That's not true with latency. Lower is *always* better.
>>>
>>
>> Lower latency just means you have more slop in scheduling, that's all.
>
>Or like most of the world, you *cannot* schedule. If scheduling were easy
>Itanic would still be floating. The existence theorem says that it's not.
>Thus latency is the lynchpin to performance. Maybe *you* have some
>"embarrassingly linear" data flow, but the real world doesn't. Were this
>the general case we'd have a P-V with a hundred-stage pipe. We don't;
>there's that ugly existence theorem at work again.
>
We have different attitudes. There is no theory, as far as I know,
and when you don't know that something can't be done, my instinct is
to assume that it can.

>> There _are_ circumstances, like transaction processing, where you can't
>> do much to beat latency. I don't do those kinds of applications.
>> Neither does anybody doing computation, as opposed to transaction
>> processing.
>
>Really? You don't care about conditions? ...don't need precise
>exceptions? You're by *far* in the minority, Robert. If your data is so
>homogeneous, what's the interest in computing the answer? It'll always be
>the same.

A great deal has been written about this. Large swaths of application
software show an astonishing level of predictability.
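A minimal sketch of the sort of predictability that can be exploited (the helper below is a hypothetical illustration, not any particular prefetcher): if recent addresses advance by a constant stride, the next one can be guessed and fetched before the program asks for it.

```python
# Stride detection sketch: given the last few addresses, guess the next.
# This is the simple pattern behind stream/stride prefetchers; the
# function name and addresses here are hypothetical illustrations.

def predict_next(addresses):
    """Predict the next address if the last three share a stride."""
    if len(addresses) < 3:
        return None
    s1 = addresses[-1] - addresses[-2]
    s2 = addresses[-2] - addresses[-3]
    return addresses[-1] + s1 if s1 == s2 else None

print(predict_next([100, 164, 228, 292]))  # 356 (stride of 64 detected)
print(predict_next([100, 517, 42]))        # None (no stable stride)
```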

Exceptions? That's a whole separate, ugly business for someone. Not
for me.

>
>> The enemy, to repeat, is unpredictability. If you're stuck with
>> unpredictable data flow, you're stuck with unpredictable data flow. With
>> a 200:1 memory access:CPU cycle time, you're going to spend much of your
>> time stalled, anyway, which, in fact, is exactly the way that CPU's
>> involved in transaction processing behave.
>
>Thank you. The processors doing this kind of work outnumber your
>"workload" by 1E10:1, at least.

Well, no. See comment above. Exploiting the known predictability is
another matter entirely.

<snip>

>
>> The streaming hardware is coming, anyway,
>> thanks to ATI, and nVidia for sure, maybe IBM/Sony/Toshiba if only I
>> understood Cell better.
>
>Ah, but that pisses you off even more, because their market has no need
>for a DP FPU. ...and you can't afford to have them make one for you that
>does. There's that money thing again, and the dreaded "existence theorem".
>
Apparently not. People are now doing even 128-bit floating point.
Cell is probably stuck in 32-bit land.

>> <snip>
>>
>>
>>>> Microprocessors, which are mostly just shrunk down versions of what
>>>> was pretty well developed by the sixties are going to be the same way?
>>>> Maybe.
>>>
>>>IMO, yes.
>>
>> Well, I'm hoping you're wrong. Evidence coming in for GPU's is that
>> there is a revolution coming there. If there is a show-stopper, it will
>> be _bandwidth_.
>
>Nah, it'll be latency. The pipes will still be starved. The bandwidth
>will match the *number* of pipes. The pipes themselves will still be
>latency limited.

There are some deep, deep, deep problems here. No doubt about it.
Why didn't dataflow machines take over the world?

Your proposed solution: wait for the information to become available
and then act really, really fast is the only possible answer for
transaction processing. That's why you think it's the only possible
answer. We could go around on this forever. ;-).

RM
 

On Fri, 11 Feb 2005 08:43:50 -0500, Robert Myers wrote:

> On Thu, 10 Feb 2005 20:25:59 -0500, keith <krw@att.bizzzz> wrote:
>
>>On Thu, 10 Feb 2005 18:33:42 -0500, Robert Myers wrote:
>>
>>> On Thu, 10 Feb 2005 11:58:21 -0500, Keith R. Williams <krw@att.bizzzz>
>>> wrote:
>>>
>>>>In article <lnum019pef50bhnlhc839dreqd66hfs5i8@4ax.com>, rmyers1400
>>>>@comcast.net says...
>>>
>>> <snip>
>>>
>>>>
>>>>That's the point. One can't buy latency. It's not even expensive.
>>>>One *can* buy bandwidth, but once you have enough, more doesn't matter.
>>>>That's not true with latency. Lower is *always* better.
>>>>
>>>
>>> Lower latency just means you have more slop in scheduling, that's all.
>>
>>Or like most of the world, you *cannot* schedule. If scheduling were easy
>>Itanic would still be floating. The existence theorem says that it's not.
>>Thus latency is the lynchpin to performance. Maybe *you* have some
>>"embarrassingly linear" data flow, but the real world doesn't. Were this
>>the general case we'd have a P-V with a hundred-stage pipe. We don't;
>>there's that ugly existence theorem at work again.
>>
> We have different attitudes. There is no theory, as far as I know,
> and when you don't know that something can't be done, my instinct is
> to assume that it can.

Perhaps not, but there is insurmountable evidence that shows that when
there is much money to be made such vacuums suddenly vanish. This fact
tells me your windmills aren't pulling their weight.

>>> There _are_ circumstances, like transaction processing, where you
>>> can't do much to beat latency. I don't do those kinds of
>>> applications. Neither does anybody doing computation, as opposed to
>>> transaction processing.
>>
>>Really? You don't care about conditions? ...don't need precise
>>exceptions? You're by *far* in the minority, Robert. If your data is
>>so homogeneous, what's the interest in computing the answer? It'll
>>always be the same.
>
> A great deal has been written about this. Large swaths of application
> software show an astonishing level of predictability.

"Large"? Why isn't Itanic cleaning house in these "swaths"? It seems to
be a natural for those who want to compute the known.

> Exceptions? That's a whole separate, ugly business for someone. Not
> for me.

Ah, the ugly face of reality shows. Without exceptions processors would
be simple, life would be great, and we'd have sunny days, even here in New
England. Except...

>>> The enemy, to repeat, is unpredictability. If you're stuck with
>>> unpredictable data flow, you're stuck with unpredictable data flow.
>>> With a 200:1 memory access:CPU cycle time, you're going to spend much
>>> of your time stalled, anyway, which, in fact, is exactly the way that
>>> CPU's involved in transaction processing behave.
>>
>>Thank you. The processors doing this kind of work outnumber your
>>"workload" by 1E10:1, at least.
>
> Well, no. See comment above. Exploiting the known predictability is
> another matter entirely.

It seems Intel and HP believed as you still do. Wanna job? There's one
open.

>
>>> The streaming hardware is coming, anyway, thanks to ATI, and nVidia
>>> for sure, maybe IBM/Sony/Toshiba if only I understood Cell better.
>>
>>Ah, but that pisses you off even more, because their market has no need
>>for a DP FPU. ...and you can't afford to have them make one for you
>>that does. There's that money thing again, and the dreaded "existence
>>theorem".
>>
> Apparently not. People are now doing even 128-bit floating point. Cell
> is probably stuck in 32-bit land.

Like I said, where's your checkbook? I'm sure for between six and ten
zeros someone can give you what you want. It's a simple matter of hardware.


>>>>> Microprocessors, which are mostly just shrunk down versions of what
>>>>> was pretty well developed by the sixties are going to be the same
>>>>> way?
>>>>> Maybe.
>>>>
>>>>IMO, yes.
>>>
>>> Well, I'm hoping you're wrong. Evidence coming in for GPU's is that
>>> there is a revolution coming there. If there is a show-stopper, it
>>> will be _bandwidth_.
>>
>>Nah, it'll be latency. The pipes will still be starved. The bandwidth
>>will match the *number* of pipes. The pipes themselves will still be
>>latency limited.
>
> There are some deep, deep, deep problems here. No doubt about it. Why
> didn't dataflow machines take over the world?

Because the problems that pay the bills don't match the costs of producing
the hardware. If that trend reverses the hardware *will* be there.
"Existence theorem", once again. Ok, perhaps the opposite; the
"non-existence theorem".

> Your proposed solution: wait for the information to become available and
> then act really, really fast is the only possible answer for transaction
> processing. That's why you think it's the only possible answer. We
> could go around on this forever. ;-).

I didn't say it was the only answer at all. I said that's the answer to
the problem that people really have, and thus will pay for.

BTW, the same answer goes for your other example; the oil industry. When
oil gets expensive we may switch to corn, but why pay the bastards to
produce corn for fuel when it's more expensive even *after* subsidies?

--
Keith
 
Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

On Fri, 11 Feb 2005 05:19:05 -0500, George Macdonald wrote:

> On Thu, 10 Feb 2005 20:28:03 -0500, keith <krw@att.bizzzz> wrote:
>
>>On Thu, 10 Feb 2005 10:16:47 -0500, Robert Myers wrote:
>>
>>> On Wed, 09 Feb 2005 23:31:30 -0500, George Macdonald
>>> <fammacd=!SPAM^nothanks@tellurian.com> wrote:
>>>
>>>>On Wed, 09 Feb 2005 09:33:47 -0500, Robert Myers <rmyers1400@comcast.net>
>>>>wrote:
>>>>
>>>>>On Wed, 09 Feb 2005 08:21:39 -0500, George Macdonald
>>>>><fammacd=!SPAM^nothanks@tellurian.com> wrote:
>>>>>
>>>>><snip>
>>>>>
>>>>>>
>>>>>>>I made an intense effort to understand what was going on with process
>>>>>>>technology when all the surprises came down at 90nm, but since then
>>>>>>>I've lost track of process technology. If Intel really has lost the
>>>>>>>playbook, that would be news, but I don't really believe it.
>>>>>>
>>>>>>What's going on with process tech does not really have to be understood at
>>>>>>the detail level to see the picture. IBM chief technologists, among
>>>>>>others, have told us of the "end of scaling" - Intel has demonstrated the
>>>>>>effect with 90nm P4. We know, as Keith has said right here, that the two
>>>>>>critical issues involved are power density and leakage. OTOH nobody is
>>>>>>talking of abandoning 65nm and lower, though they do talk of increasing
>>>>>>difficulty.
>>>>>>
>>>>>But I don't know whether to take "the end of scaling" seriously or
>>>>>not. What about nanotubes?
>>>>
>>>>I think it's serious OK -- we already have evidence -- and I'm not sure
>>>>which part of the problem nanotubes solves...
>>>
>>> Mobility. Faster gates at lower voltage, smallest possible voltage
>>> being the goal of low power operation.
>>
>>We've been hearing the same tune from the optics guys for a quarter of a
>>century too. ...nothing interesting yet.
>
> For the record, in light of the reply I just posted, let us stress that you
> and I are not collaborating here.🙂

At least not by any back-channel. If we agree, is that collaboration?

--
Keith
 
Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

On Fri, 11 Feb 2005 04:36:41 -0500, George Macdonald wrote:

> On Thu, 10 Feb 2005 10:16:47 -0500, Robert Myers <rmyers1400@comcast.net>
> wrote:
>
>>On Wed, 09 Feb 2005 23:31:30 -0500, George Macdonald
>><fammacd=!SPAM^nothanks@tellurian.com> wrote:
>>
>>>On Wed, 09 Feb 2005 09:33:47 -0500, Robert Myers <rmyers1400@comcast.net>
>>>wrote:
>>>
>>>>On Wed, 09 Feb 2005 08:21:39 -0500, George Macdonald
>>>><fammacd=!SPAM^nothanks@tellurian.com> wrote:
>>>>
>>>><snip>
>
>>>>But I don't know whether to take "the end of scaling" seriously or
>>>>not. What about nanotubes?
>>>
>>>I think it's serious OK -- we already have evidence -- and I'm not sure
>>>which part of the problem nanotubes solves...
>>
>>Mobility. Faster gates at lower voltage, smallest possible voltage
>>being the goal of low power operation.
>>
>>http://www.eetimes.com/at/news/OEG20031217S0020
>>
>>I've got a decent physics education, but I'm not a solid state
>>physicist and certainly not a device engineer. I am pretty quick with
>>google:
>>
>>nanotube transistor mobility electron OR carrier.
>>
>>Carbon nanotubes also have very attractive thermal properties. They
>>also currently cost about as much, pound for pound, as industrial
>>diamonds.
>
> Yeah, yeah I know what's being said. The fact that nanotubes are going to
> solve a multitude of other problems and bring us micro-motors and the like
> arouses suspicion, from my POV... one solution fits all?
>
>>>besides which major change in
>>>material like that is bound to have an extended development time.
>>>
>>Don't know how to evaluate that. There's a company nearby I could
>>walk to that thinks it's going to revolutionize memory (memory always
>>comes first, doesn't it?) surviving on venture capital. They'd better
>>come up with something pretty quick.
>
> Memory just looks easy I guess. What about organic cell memory which was
> supposed to be commercialized 5 years ago? I've been hearing about optical
> memory as a replacement for main memory for 30 years - the closest thing we
> have is the DVD.🙂

Memory is always first because its regular structure is easy to
manufacture and makes faults easy to diagnose. If a technology doesn't
make it past memory (bubbles, anyone?) it hasn't a chance at logic.
Since it hasn't done the simple part...

<snip the rest - too many subjects for one post>

--
Keith
 
Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

keith <krw@att.bizzzz> wrote: in small part:
> Nah, it'll be latency. The pipes will still be starved.
> The bandwidth will match the *number* of pipes. The pipes
> themselves still will be latency limited.

True for most apps (barring vector processing like imagery).

Over the past 7 years, CPU speed (clock*IPC) has increased
by at least 4x. Memory bandwidth (burst) has increased ~8x.
Latency has only improved ~1.5x and overall the machines seem
(subjectively) about that much faster.
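Those ratios can be sanity-checked with a simple two-term execution-time
model. This is only a sketch: the 4x and 1.5x figures are the ones above,
but the model and the assumed compute/stall split are illustrative, not
measurements.

```python
# Model T = compute + stall: compute time scales with core speed,
# stall time scales with memory latency.  Only the 4x / 1.5x ratios
# come from the figures above; the 80/20 split is an assumption.

def overall_speedup(compute_frac, cpu_speedup, latency_speedup):
    """Overall speedup when the compute part of execution time gets
    cpu_speedup but the stall part only gets latency_speedup."""
    stall_frac = 1.0 - compute_frac
    new_time = compute_frac / cpu_speedup + stall_frac / latency_speedup
    return 1.0 / new_time

# A workload that was 80% memory-stalled seven years ago:
print(overall_speedup(0.2, 4.0, 1.5))  # ~1.71x -- "about that much
                                       # faster", tracking latency
```

On a purely compute-bound workload (compute_frac=1.0) the same model
returns the full 4x, which is roughly what the vector/imagery caveat
above amounts to.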

-- Robert
 
Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

On Fri, 11 Feb 2005 22:50:15 GMT, Robert Redelmeier
<redelm@ev1.net.invalid> wrote:

>keith <krw@att.bizzzz> wrote: in small part:
>> Nah, it'll be latency. The pipes will still be starved.
>> The bandwidth will match the *number* of pipes. The pipes
>> themselves still will be latency limited.
>
>True for most apps (barring vector processing like imagery).
>
>Over the past 7 years, CPU speed (clock*IPC) has increased
>by at least 4x. Memory bandwidth (burst) has increased ~8x.
>Latency has only improved ~1.5x and overall the machines seem
>(subjectively) about that much faster.
>

There is a bizarre circular logic afoot. Part of what has killed off
vector processors is their requirement for bandwidth. Bandwidth is a
serious issue for stream processors, which we _are_ seeing in more
general use, thanks to the ambitions of GPU manufacturers.

How the bandwidth issue is going to be addressed is something of a
mystery to me, but the argument you present is an argument about past
design decisions, not about actual requirements. Millibytes per flop?
Of what possible use all those flops would be with so little bandwidth
is a mystery to me.

Would lower latency be nice? Sure. It's stupefyingly expensive, and,
your programmer-with-screwdriver-in-hand perspective notwithstanding,
it isn't necessary or even all that useful for applications where
stream processors are attractive. Bandwidth, on the other hand, is an
absolute requirement. Just how deeply stream processors will
penetrate general purpose computing remains to be seen. My stake is
in the ground: packet processors are going to all but push
conventional microprocessors off the stage.

To address your argument more substantively, it suggests that
processors are now routinely stalled a _much_ higher fraction of
the time than they were seven years ago (4x increase in CPU speed,
only ~1.5x increase in overall throughput). 'Taint so.
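That objection can be made quantitative by inverting the same naive
two-term model (a sketch; the model itself is an assumption, and only
the 4x / 1.5x figures come from the post above):

```python
# If old time is (1-m) compute + m stall, the new time is
# (1-m)/cpu_speedup + m/latency_speedup.  Solve for the old stall
# fraction m that the observed overall speedup would imply.

def implied_stall_fraction(cpu_speedup, latency_speedup, observed_speedup):
    return ((1.0 / observed_speedup - 1.0 / cpu_speedup)
            / (1.0 / latency_speedup - 1.0 / cpu_speedup))

# 4x faster cores, 1.5x better latency, machines "about 1.5x" faster:
print(implied_stall_fraction(4.0, 1.5, 1.5))  # 1.0 -- the model says
# processors would have to be stalled 100% of the time, which is
# implausible; the naive reading of the ratios doesn't hold up.
```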

RM
 
Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

On Fri, 11 Feb 2005 22:50:15 +0000, Robert Redelmeier wrote:

> keith <krw@att.bizzzz> wrote: in small part:
>> Nah, it'll be latency. The pipes will still be starved.
>> The bandwidth will match the *number* of pipes. The pipes
>> themselves still will be latency limited.
>
> True for most apps (barring vector processing like imagery).

Point given. Imagery has been done rather well in GPUs with far less than
128-bit precision FPUs.

> Over the past 7 years, CPU speed (clock*IPC) has increased
> by at least 4x. Memory bandwidth (burst) has increased ~8x.
> Latency has only improved ~1.5x and overall the machines seem
> (subjectively) about that much faster.

Exactly my point. Latency is the bitch that won't let go. Sure, caches
and prefetching have helped over the last couple of decades, but the
elephant hasn't left the living room.
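The standard average-memory-access-time arithmetic shows how big the
elephant still is. A sketch with assumed round numbers (2-cycle hit,
200-cycle miss, 98% hit rate), not measurements:

```python
# AMAT = hit_time + miss_rate * miss_penalty.  All cycle counts are
# assumed round numbers for illustration.

def amat(hit_time, miss_rate, miss_penalty):
    return hit_time + miss_rate * miss_penalty

# Even a 98%-hit cache in front of a 200-cycle memory leaves the
# average access costing three times the cache hit itself:
print(amat(2, 0.02, 200))  # 6.0 cycles
```

Halving the miss penalty (better latency) takes that to 4 cycles;
doubling bandwidth alone, in this model, takes it nowhere.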

--
Keith
 
Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

Robert Myers <rmyers1400@comcast.net> wrote:
> There is a bizarre circular logic afoot. Part of what
> has killed off vector processors is their requirement

No. What kills off anything in the market is lack of
demand--usually due to poor performance/price.

> Would lower latency be nice? Sure. It's stupefyingly
> expensive,

Really? DRAMs are obscenely inexpensive. Why not use 4x
the transistors and make SRAMs? Especially if they could
be interfaced into Northbridges as "standard" DDR DIMMs.
I know I'd rather have 256 MByte of 10 ns latency SRAM than
1 GByte of 100 ns latency DRAM. YMMV.

> your programmer-with-screwdriver-in-hand perspective

Why, thank you for the compliment!

> To address your argument more substantively, your argument
> suggests that processors are now routinely stalled a _much_
> higher fraction of the time than they were seven years ago
> (4x increase in speed, 1.5x increase in throughput). 'Taint so.

Surely you've noticed that benchmark scores are only very
rarely linear with CPU speed. Now even a faster bus doesn't
help hugely. But no sense arguing. At least Intel CPUs have
performance counters on stalls.

-- Robert
 
Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

On Sat, 12 Feb 2005 02:40:12 GMT, Robert Redelmeier <redelm@ev1.net.invalid>
wrote:

>Robert Myers <rmyers1400@comcast.net> wrote:
>> There is a bizarre circular logic afoot. Part of what
>> has killed off vector processors is their requirement
>
>No. What kills off anything in the market is lack of
>demand--usually due to poor performance/price.
>
>> Would lower latency be nice? Sure. It's stupefyingly
>> expensive,
>
>Really? DRAMs are obscenely inexpensive. Why not use 4x
>the transistors and make SRAMs? Especially if they could
>be interfaced into Northbridges as "standard" DDR DIMMs.
>I know I'd rather have 256 MByte of 10 ns latency SRAM than
>1 GByte of 100 ns latency DRAM. YMMV.

It may vary indeed, especially if your theoretical small-but-fast memory is
too small to avoid paging. It doesn't take too many page-ins to wipe out any
"speed" advantage that memory might have offered...
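How few page-ins it takes falls straight out of the numbers (a sketch;
the 10 ns / 100 ns figures are the ones quoted above, the ~5 ms disk
latency is an assumed round figure):

```python
# Per-access cost: SRAM wins only while
#   SRAM_NS + p * DISK_NS < DRAM_NS
# where p is the probability an access triggers an extra page-in
# because the SRAM machine has a quarter of the memory.

SRAM_NS, DRAM_NS, DISK_NS = 10.0, 100.0, 5e6  # 5 ms disk, assumed

breakeven_p = (DRAM_NS - SRAM_NS) / DISK_NS
print(breakeven_p)  # 1.8e-05 -- fewer than 2 extra page-ins per
                    # 100,000 accesses and the SRAM advantage is gone
```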

/daytripper
 
Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

On Fri, 11 Feb 2005 21:31:20 -0500, keith <krw@att.bizzzz> wrote:

>On Fri, 11 Feb 2005 08:43:50 -0500, Robert Myers wrote:
>
>> On Thu, 10 Feb 2005 20:25:59 -0500, keith <krw@att.bizzzz> wrote:
>>

<snip>

>>>
>>>Or like most of the world, you *cannot* schedule. If scheduling were easy
>>>Itanic would still be floating. The existence theorem says that it's not.
>>>Thus latency is the lynchpin to performance. Maybe *you* have some
>>>"embarrassingly linear" data flow, but the real world doesn't. Were this
>>>the general case we'd have a P-V with a hundred-stage pipe. We don't;
>>>there's that ugly existence theorem at work again.
>>>
>> We have different attitudes. There is no theory, as far as I know,
>> and when you don't know that something can't be done, my instinct is
>> to assume that it can.
>
>Perhaps not, but there is insurmountable evidence that shows that when
>there is much money to be made such vacuums suddenly vanish. This fact
>tells me your windmills aren't pulling their weight.
>
Your theory just doesn't wash. There's lots of money to be made in
all kinds of problems that have been understood for a long time and
are taking a long time to solve. The dataflow problem is like an open
problem in math: it stays open until either it is solved or it is
shown that it can't be done.

AI isn't done. There hasn't been all that much progress, really, but
even what AI is understood to be has changed dramatically since
researchers imagined that logical deduction is the way that people
think. Most people think we'll eventually get something more like
what was originally expected. You don't? There's lots of money to be
made.

>>>> There _are_ circumstances, like transaction processing, where you
>>>> can't do much to beat latency. I don't do those kinds of
>>>> applications. Neither does anybody doing computation, as opposed to
>>>> transaction processing.
>>>
>>>Really? You don't care about conditions? ...don't need precise
>>>exceptions? You're by *far* in the minority, Robert. If your data is
>>>so homogenous, what's the interest in computing the answer. It'll
>>>always be the same.
>>
>> A great deal has been written about this. Large swaths of application
>> software show an astonishing level of predictability.
>
>"Large"? Why isn't Itanic cleaning house in these "swaths"? It seems to
>be a natural for those who want to compute the known.
>
>> Exceptions? That's a whole separate, ugly business for someone. Not
>> for me.
>
>Ah, the ugly face of reality shows. Without exceptions processors would
>be simple, life would be great, and we'd have sunny days, even here in New
>England. Except...
>
Exceptions may explain a big part of the problem with Itanium. Could
be just poor design, or it could be inherent in a processor that has
to do so much speculation and recovery. I suspect poor design.

<snip>

>>
>> There are some deep, deep, deep problems here. No doubt about it. Why
>> didn't dataflow machines take over the world?
>
>Because the problems that pay the bills don't match the costs of producing
>the hardware. If that trend reverses the hardware *will* be there.
>"Existence theorem", once again. Ok, perhaps the opposite; the
>"non-existence theorem".
>
Just as with the AI problem, there's lots of fundamental stuff we
don't know about dataflow.

>> Your proposed solution: wait for the information to become available and
>> then act really, really fast is the only possible answer for transaction
>> processing. That's why you think it's the only possible answer. We
>> could go around on this forever. ;-).
>
>I didn't say it was the only answer at all. I said that's the answer to
>the problem that people really have, and thus will pay for.
>
>BTW, the same answer goes for your other example; the oil industry. When
>oil gets expensive we may switch to corn, but why pay the bastards to
>produce corn for fuel when it's more expensive even *after* subsidies?

The biofuel question is easier to answer than the microprocessor
question. In both cases, there is already a reliable, affordable
incumbent product that satisfies obvious needs.

The price of oil to U.S. (and actually to world) consumers is
subsidized by the cost of security arrangements to ensure the flow of
that oil. You don't seem to have a problem with that kind of subsidy.
Developing a biofuels industry as an alternative to that kind of
arrangement seems pretty attractive to me, and it makes sense to me to
get started before oil supplies have been disrupted.

What could you do with much faster microprocessors if they were
available? And how would you use processors that are much better at
streaming data than fetching it? I think we're going to find out, no
matter how little you think of the enterprise.

RM
 
Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

On Sat, 12 Feb 2005 08:21:02 -0500, George Macdonald
<fammacd=!SPAM^nothanks@tellurian.com> wrote:

>On Fri, 11 Feb 2005 20:58:28 -0500, Robert Myers <rmyers1400@comcast.net>
>wrote:
>
>>On Fri, 11 Feb 2005 19:42:22 -0500, George Macdonald
>><fammacd=!SPAM^nothanks@tellurian.com> wrote:
>

<snip>

>>>
>>Intel trashed infiniband because they lost control of it. If it
>>wasn't going to be their silicon, they didn't want to play. I got
>>walloped for saying that, too, but players who are much better
>>positioned than I am have discreetly nodded assent. You guys think
>>I'm a shill for Intel. Ha!
>
>Silicon? PCI Express is only "their silicon" in their chipsets... and
>they've ceded at least part of that to nVidia. Control is their thing of
>course - everybody wants to feel they're in control of their destiny.
>

M$ mysteriously withdrew native support for infiniband at about the
same time Intel dropped its own host channel adapter. That's one I
don't blame on Bill.

<snip>

>>
>>ORNL has scenarios in which biofuels contribute noticeably to
>>transportation fuels by 2020. Given the volatility of the sources of
>>oil, I do believe that developing a viable biofuels industry is
>>sensible. They will for sure beat hydrogen to market as a significant
>>player. The odor is a problem? We're talking about cooking
>>_cellulose_ here, George. Yes, the odor is a problem, as some
>>operators have already discovered.
>>
>>I mean, what other possibility is there? Canada has lots of
>>unconventional oil. You want us to invade Canada? ;-).
>
>Believe what you want - even with quite ideal fermentation input, e.g.
>corn, the energy balance is way off... it's the distillation that gets
>you... energy in. Oh and the petroleum guys actually have quite a bit of
>experience here: it is a bitch to maintain in storage and it does not mix
>well with petroleum in transportation, storage or even the customer fuel
>tank. Do you think the pipeline companies are going to say "ethanol?...
>sure shove it in, where do you want it to go?" There's no damned
>infrastructure.
>
There's plenty of oil. Maybe even plenty of oil without depending on the
Middle East, with some adjustments to usage patterns, but you have to
count on unproven, unconventional sources. Wind power is the only
underexploited renewable that works for sure, but, short of taking
over North and South Dakota, wind is not a big enough player, and it
is not, in any case, a transportation fuel. A case can be made for
nuclear power, but it isn't a transportation fuel, either. There is
always coal, but it isn't a transportation fuel, either.

If hydrogen works as a way to turn stationary energy into a
transportation fuel, so do biofuels. Both have serious infrastructure
problems. I think I'd rather figure out how to protect piping from
corrosion than how to protect piping from hydrogen migration and
embrittlement.

The lessons of this whole sorry saga--finding a replacement for
oil--should give any prospective true believer reason to pause. We've
been at it for thirty years and generated mostly polemic. One thing
that has changed is that the earliest econometric models were
inadequate because the cost of computer time was so high. That's not
a problem anymore, but the models aren't demonstrably any more useful.

I brought the whole business up as an example of how hard it is to
anticipate anything in technology. A tremendous amount of money and
brain power went into the energy problem, and we are still pretty
clueless. It will happen when it happens, whatever "it" is.


<snip>

>>
>>The better (more disturbing?) question is: how can you take the talent
>>that used to be DEC and so totally annihilate it?
>
>Oh-oh... didn't Intel have a hand in that too?

You don't mean to start the Alpha debate again, I'm sure. If you
leave out the Compaq intermediate episode, you had a technically
competent company (DEC) that didn't know how to market being fed into
a company whose focus was marketing (HP). One glaring example, I
suspect, of just how badly bean-counters and marketers manage
technical talent.

RM
 
Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

On Sat, 12 Feb 2005 09:52:06 -0500, Robert Myers <rmyers1400@comcast.net>
wrote:

>On Sat, 12 Feb 2005 08:21:02 -0500, George Macdonald
><fammacd=!SPAM^nothanks@tellurian.com> wrote:
>>Believe what you want - even with quite ideal fermentation input, e.g.
>>corn, the energy balance is way off... it's the distillation that gets
>>you... energy in. Oh and the petroleum guys actually have quite a bit of
>>experience here: it is a bitch to maintain in storage and it does not mix
>>well with petroleum in transportation, storage or even the customer fuel
>>tank. Do you think the pipeline companies are going to say "ethanol?...
>>sure shove it in, where do you want it to go?" There's no damned
>>infrastructure.
>>
>There's plenty of oil. Maybe even plenty of oil without depending on the
>Middle East, with some adjustments to usage patterns, but you have to
>count on unproven, unconventional sources. Wind power is the only
>underexploited renewable that works for sure, but, short of taking
>over North and South Dakota, wind is not a big enough player, and it
>is not, in any case, a transportation fuel. A case can be made for
>nuclear power, but it isn't a transportation fuel, either. There is
>always coal, but it isn't a transportation fuel, either.

The funny thing about windpower, to me, is that it has divided the greens.
It's more developed in Europe in general and especially in a few "pockets"
like Denmark and Netherlands; there are several *BIG* projects in the
discussion stages - the usual NIMBY and "nature-greens" are raising a
stink.

There are scientific studies which say that it's just not viable and one
does wonder about the economics of power generation in general - it *could*
be that the fill-in and peak coverage required, usually from gas turbines,
is actually more polluting/wasteful than just running as we are.
I know that it wouldn't work here in NJ and surrounding states - we get
maybe 10 days a year with enough wind to do anything useful. The only wind
farm I've actually seen had all its turbines sitting stationary for several
days - so wasteful... and fugly.

>If hydrogen works as a way to turn stationary energy into a
>transportation fuel, so do biofuels. Both have serious infrastructure
>problems. I think I'd rather figure out how to protect piping from
>corrosion than how to protect piping from hydrogen migration and
>embrittlement.

The corrosion is a minor part of the difficulties of ethanol - the stuff is
very hygroscopic and the mixing problems with petroleum are a nightmare.
As for hydrogen, I just don't see that it works at all as a transportation
fuel, without some major rule-breaking technology.

--
Rgds, George Macdonald
 
Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

On Sat, 12 Feb 2005 18:16:32 -0500, George Macdonald
<fammacd=!SPAM^nothanks@tellurian.com> wrote:

>On Sat, 12 Feb 2005 09:52:06 -0500, Robert Myers <rmyers1400@comcast.net>
>wrote:
>
>>On Sat, 12 Feb 2005 08:21:02 -0500, George Macdonald
>><fammacd=!SPAM^nothanks@tellurian.com> wrote:
>>>Believe what you want - even with quite ideal fermentation input, e.g.
>>>corn, the energy balance is way off... it's the distillation that gets
>>>you... energy in. Oh and the petroleum guys actually have quite a bit of
>>>experience here: it is a bitch to maintain in storage and it does not mix
>>>well with petroleum in transportation, storage or even the customer fuel
>>>tank. Do you think the pipeline companies are going to say "ethanol?...
>>>sure shove it in, where do you want it to go?" There's no damned
>>>infrastructure.
>>>
>>There's plenty of oil. Maybe even plenty of oil without depending on the
>>Middle East, with some adjustments to usage patterns, but you have to
>>count on unproven, unconventional sources. Wind power is the only
>>underexploited renewable that works for sure, but, short of taking
>>over North and South Dakota, wind is not a big enough player, and it
>>is not, in any case, a transportation fuel. A case can be made for
>>nuclear power, but it isn't a transportation fuel, either. There is
>>always coal, but it isn't a transportation fuel, either.
>
>The funny thing about windpower, to me, is that it has divided the greens.
>It's more developed in Europe in general and especially in a few "pockets"
>like Denmark and Netherlands; there are several *BIG* projects in the
>discussion stages - the usual NIMBY and "nature-greens" are raising a
>stink.
>
>There are scientific studies which say that it's just not viable and one
>does wonder about the economics of power generation in general - it *could*
>be that the fill-in and peak coverage required, usually from gas turbines,
>is actually more polluting/wasteful than just running a running as we are.
>I know that it wouldn't work here in NJ and surrounding states - we get
>maybe 10 days a year with enough wind to do anything useful. The only wind
>farm I've actually seen had all its turbines sitting stationary for several
>days - so wasteful... and fugly.
>
>>If hydrogen works as a way to turn stationary energy into a
>>transportation fuel, so do biofuels. Both have serious infrastructure
>>problems. I think I'd rather figure out how to protect piping from
>>corrosion than how to protect piping from hydrogen migration and
>>embrittlement.
>
>The corrosion is a minor part of the difficulties of ethanol - the stuff is
>very hygroscopic and the mixing problems with petroleum are a nightmare.
>As for hydrogen, I just don't see that it works at all as a transportation
>fuel, without some major rule-breaking technology.

I brought up the oil thing as an example of the futility of trying to
figure out the future of technology. I disagree with your assessment
of biofuels, but no one will really know until it is actually done.

Keith's attitude (smooth transitions only) has the advantage that you
don't get caught making foolish predictions, unless, of course, events
make an abrupt transition for you (ask all the people who were still
in the whale oil business when it was made obsolete by kerosene).

As to wind power, you can easily find maps of where the good stuff is.
Like I said, North and South Dakota. There's a thing going on here
about a wind farm near (gasp) Cape Cod. Not in front of my mansion!

The whole exercise makes Intel's planning around Itanium look cautious
and wise. The best and the brightest have repeatedly gotten
everything embarrassingly wrong. Meanwhile, the cost of finding new
oil is at a post-1974 low (about $5/bbl, if I recall correctly)--that
means people don't drill many dry holes, not that new oil is all that
easy to find.

RM
 
Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

daytripper <day_trippr@removeyahoo.com> wrote:
>>I know I'd rather have 256 MByte of 10 ns latency SRAM
>>than 1 GByte of 100 ns latency DRAM. YMMV.

> It may vary indeed, especially if your theoretical
> small-but-fast memory is too small to avoid paging. It
> doesn't take too many page-ins to wipe out any "speed"
> advantage that memory might have offered...

Of course thrashing must be avoided. But _none_ of my
problems have working sets approaching 256 MByte, and I
suspect that very few people have such large problems.

My point is that DRAM pricing has fallen and size risen
past the point of usability. Something more useful than
size needs to be done with those transistors.

-- Robert
 
Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

George Macdonald <fammacd=!SPAM^nothanks@tellurian.com> wrote:
> The funny thing about windpower, to me, is that it has
> divided the greens.

It _is_ ironic. Many windfarms are nuisances:
eyesores, noisy and lethal to birds.

> The corrosion is a minor part of the difficulties of
> ethanol - the stuff is very hygroscopic and the mixing
> problems with petroleum are a nightmare.

All can be dealt with. Brazil does, and we do in some
mid-West states. The real problem is that very little
of any biomass can be fermented to ethanol--only the starch.
We need more efficient cellulases that can hydrolyze the
β-glucosidic linkages in cellulose.

> As for hydrogen, I just don't see that it works at all as
> a transportation fuel, without some major rule-breaking
> technology.

There are already a fair number (~1000) hydrogen powered
vehicles world-wide. Most use high pressure storage. A few use
cryogenic. A bigger problem is moving & generating the stuff.
Normal industrial hydrogen generation (steam methane reforming)
releases _lots_ of CO2 to the atmosphere, and electrolysis is rather
inefficient and needs a clean source of electricity (nuclear?!?).

-- Robert
 
Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

On Sat, 12 Feb 2005 23:47:48 GMT, Robert Redelmeier
<redelm@ev1.net.invalid> wrote:

>George Macdonald <fammacd=!SPAM^nothanks@tellurian.com> wrote:
>> The funny thing about windpower, to me, is that it has
>> divided the greens.
>
>It _is_ ironic. Many windfarms are nuisances:
>eyesores, noisy and lethal to birds.
>
>> The corrosion is a minor part of the difficulties of
>> ethanol - the stuff is very hygroscopic and the mixing
>> problems with petroleum are a nightmare.
>
>All can be dealt with. Brazil does, and we do in some
>mid-West states. The real problem is that very little
>of any biomass can be fermented to ethanol--only the starch.
>We need more efficient cellulases that can hydrolyze the
>B-glucoside linkages in cellulose.
>

Brazil has gotten pretty shrewd about managing its sugar harvest.

The best friends that biofuels have are congressmen and senators from
agricultural states--another reason I think biofuels have a plausible
future.

>> As for hydrogen, I just don't see that it works at all as
>> a transportation fuel, without some major rule-breaking
>> technology.
>
>There are already a fair number (~1000) hydrogen powered
>vehicles world-wide. Most use high pressure storage. A few use
>cryogenic. A bigger problem is moving & generating the stuff.
>Normal industrial hydrogen generation (steam methane reforming)
>releases _lots_ of CO2 to the atmosphere, and electrolysis is rather
>inefficient and needs a clean source of electricity (nuclear?!?).
>

The rule-breaking technologies are nuclear power and a fuel-cell
membrane that we're going to have real soon now. The darling of the
national labs. As unkillable as an athlete's foot infection.

RM
 
Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

On Sat, 12 Feb 2005 23:47:48 GMT, Robert Redelmeier
<redelm@ev1.net.invalid> wrote:

>George Macdonald <fammacd=!SPAM^nothanks@tellurian.com> wrote:
>> The funny thing about windpower, to me, is that it has
>> divided the greens.
>
>It _is_ ironic. Many windfarms are nuisances:
>eyesores, noisy and lethal to birds.
>
>> The corrosion is a minor part of the difficulties of
>> ethanol - the stuff is very hygroscopic and the mixing
>> problems with petroleum are a nightmare.
>
>All can be dealt with. Brazil does, and we do in some
>mid-West states. The real problem is that very little
>of any biomass can be fermented to ethanol--only the starch.
>We need more efficient cellulases that can hydrolyze the
>B-glucoside linkages in cellulose.

Yes, I know it's being used fairly widely in some small percentage
currently, mainly to satisfy oxygenate content regulations - possibly you
are not aware of all the resulting problems: the mix cannot go in a
pipeline; it can't be stored for any length of time, mixing generally
being done into the final delivery vehicle; and the broken engines *are*
real - even small amounts of water and you lose octane *big* time. As
already mentioned, the mid-west FFT boondoggle is just Daschle's pork
barrel.

I've already mentioned scale and I hardly think the auto business in Brazil
can be compared with that of the U.S. There are a lot of inconsistent
studies/numbers floating around on the energy balance of fermentation fuels
- hard to know whom to believe but there are some fundamentals which can't
be ignored, like acreage required vs. car-population density, ultimate
efficiency (even an optimistic 50% loss seems on the high side for
viability to me) and what you do when you have a low yield of
corn/bio-mass, for whatever reason. I also don't think there are many
places in Brazil which have the temperature extremes we have in the
northern states, which contributes significantly to the storage problems.

>> As for hydrogen, I just don't see that it works at all as
>> a transportation fuel, without some major rule-breaking
>> technology.
>
>There are already a fair number (~1000) hydrogen powered
>vehicles world-wide. Most use high pressure storage. A few use
>cryogenic. A bigger problem is moving & generating the stuff.
>Normal industrial hydrogen generation (steam methane reforming)
>releases _lots_ of CO2 to the atmosphere, and electrolysis is rather
>inefficient and needs a clean source of electricity (nuclear?!?).

Of course they exist; the fact is that they are not economically viable and
unless we get the major breakthroughs already mentioned they never will
be... cryogenic storage in a vehicle (-240°C) is absurd. I've already
harped on about the storage and portability. There is also storage in an
adsorption slurry [magnesium hydride is one] but again not very
practical... the disposal problem as just one example. While the
fundamental research could be valuable, producing vehicles at this stage is
just a waste of the energy which is supposedly so precious - the pollution
balance is even less convincing.

--
Rgds, George Macdonald