Intel drops HyperThreading

Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel

In comp.sys.ibm.pc.hardware.chips Rob Stow <rob.stow@shaw.ca> wrote:
> keith wrote:
>> On Thu, 01 Sep 2005 07:56:36 +0000, Rob Stow wrote:
>>>Anyone from New Orleans who wants to play the blame game need
>>>only look in the mirror to find someone to accuse. It's like
>>>someone badly hurt after running a stop light and getting
>>>broadsided: it is impossible not to feel sympathy for him and
>>>equally impossible not to recognize that he has only himself
>>>to blame.
>>
>> Damn, Rob. You get it! ...or perhaps you don't want my endorsement. ;-)

While it is true that no-one can escape consequences for their
[in]actions, this is also a very limited view. Society is based
on division of labor, and trusting others to do their jobs
competently. People believed the levees would protect them.

> And here I was sitting in my flame-proof suit ...

... that gets hot without air conditioning :)

> Before today I had never gotten a single e-mail from anyone in
> this newsgroup, but today I got three e-mails that basically
> said "I agree with what you said but I never would have said
> it in public."

This is sad. The triumph of political correctness and
brainwashing. I may disagree with what you say, but I would
never try to stifle debate. That resolves nothing and merely
drives passions underground to erupt later.

>>>It has been known for hundreds, if not thousands, of years that
>>>that area is regularly battered by hurricanes, yet those people
>>>chose to build their homes and their businesses not only in a
>>>hurricane zone but in areas well below sea level in a hurricane

Yes, certainly. It is their choice how much risk to take.
Please remember that New Orleans survived the hurricane itself remarkably
well. It was the flooding from breached levees that caused
most of the damage.

>>>zone? It is one thing to gamble on a home or business on the
>>>gulf coast, but to compound the risk by choosing a location 20
>>>feet below sea level is stupid beyond all belief.

Had the levees held, it would not have seemed so stupid. Yes, it
is a bit of a gamble on stormwater pumps. Much of Holland makes
precisely the same gamble.

All places have site risks. Is it "stupid beyond belief" to live
in San Francisco, where a 1906 or bigger earthquake is coming?
Is it stupid to live in New York City, which is incredibly dependent
on transportation infrastructure? Is it stupid to use forced-air
heating, so that the predictable ice-storm power failures also cut heating?

>> It's a carnival town. It's very corrupt, by all accounts.

It might be, although I don't know of any objective corruption
scale to compare it against Chicago or NYC.

But that doesn't matter much. The levees are run by the US
Army Corps of Engineers, not locals. They knew the levees were
weak and were working on them near at least one of the failures.

>> Over the past couple of days they've certainly shown that
>> not all the snakes have scales. Looting TVs when there
>> is nothing to plug them in to? Raping kids?

Very unfortunate. But remember two things: First, the media
want to sell ink, photons and electrons. "If it bleeds, it leads."
The images are disturbing and likely unstaged, but not necessarily
representative. Second, those left behind in New Orleans are not
a cross section of society. Mostly, they are the underclass who
lacked the means to evacuate.

>>>I can empathize with but not fully agree with the idea that
>>>a higher degree of risk is acceptable to people tied to the
>>>oil industry in that area, but gambling with your life and
>>>everything you own in order to run a tourist trap or work in
>>>a Casino is simply unfathomable.

See above. People take risks every day.


>> People risk their lives on oil rigs and coal mines. I don't really
>> see much difference. I do see the difference when civilization
>> breaks down so easily though. Rabid animals need to be shot.

Well, police & military are neither the brightest nor the most
patient. They will make mistakes. How many non-rabid animals
would you accept being shot? 1 of 10? 1 of 4? Would you
shoot someone taking food? Or carrying their own rifle?

None of this is easy.

-- Robert
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel

Robert Redelmeier wrote:
> In comp.sys.ibm.pc.hardware.chips Rob Stow <rob.stow@shaw.ca> wrote:
>
>>keith wrote:
>>
>>>On Thu, 01 Sep 2005 07:56:36 +0000, Rob Stow wrote:
>>>
>>>>Anyone from New Orleans who wants to play the blame game need
>>>>only look in the mirror to find someone to accuse. It's like
>>>>someone badly hurt after running a stop light and getting
>>>>broadsided: it is impossible not to feel sympathy for him and
>>>>equally impossible not to recognize that he has only himself
>>>>to blame.
>>>
>>>Damn, Rob. You get it! ...or perhaps you don't want my endorsement. ;-)
>
>
> While it is true that no-one can escape consequences for their
> [in]actions, this is also a very limited view. Society is based
> on division of labor, and trusting others to do their jobs
> competently. People believed the levees would protect them.

Only the people who can't/don't read... there was a lot of publicity
showing that the levees were good for category three, and that the
hurricane was category five. The Corps of Engineers said it, the local
paper (something-Picayune?) won a prize for describing exactly what
happened some years ago.

People who said they couldn't afford to leave because gas was too high
clearly lack the ability to associate their feet with ten miles or less
to higher ground. Those who were physically unable to move or stayed to
care for them have my sympathy, and financial support, but healthy
people who failed to respond caused much of their own misery.

And *anyone* who doesn't stock food and water for a week is very
trusting in my opinion. I don't recommend a gun unless the owner is
willing to learn to use it well and without hesitation.
>
>
>>And here I was sitting in my flame-proof suit ...
>
>
> ... that gets hot without air conditioning :)
>
>
>>Before today I had never gotten a single e-mail from anyone in
>>this newsgroup, but today I got three e-mails that basically
>>said "I agree with what you said but I never would have said
>>it in public."
>
>
> This is sad. The triumph of political correctness and
> brainwashing. I may disagree with what you say, but I would
> never try to stifle debate. That resolves nothing and merely
> drives passions underground to erupt later.
>
>
>>>>It has been known for hundreds, if not thousands, of years that
>>>>that area is regularly battered by hurricanes, yet those people
>>>>chose to build their homes and their businesses not only in a
>>>>hurricane zone but in areas well below sea level in a hurricane
>
>
> Yes, certainly. It is their choice how much risk to take.
> Please remember that New Orleans survived the hurricane itself remarkably
> well. It was the flooding from breached levees that caused
> most of the damage.
>
>
>>>>zone? It is one thing to gamble on a home or business on the
>>>>gulf coast, but to compound the risk by choosing a location 20
>>>>feet below sea level is stupid beyond all belief.
>
>
> Had the levees held, it would not have seemed so stupid. Yes, it
> is a bit of a gamble on stormwater pumps. Much of Holland makes
> precisely the same gamble.
>
> All places have site risks. Is it "stupid beyond belief" to live
> in San Francisco, where a 1906 or bigger earthquake is coming?
> Is it stupid to live in New York City, which is incredibly dependent
> on transportation infrastructure? Is it stupid to use forced-air
> heating, so that the predictable ice-storm power failures also cut heating?

Steam. Needs only heat and gravity. No pumps, no fans, a very few
failure modes. Multifuel furnaces are available, allowing operation on
whatever is available (and/or cheap).
>
>
>>>It's a carnival town. It's very corrupt, by all accounts.
>
>
> It might be, although I don't know of any objective corruption
> scale to compare it against Chicago or NYC.
>
> But that doesn't matter much. The levees are run by the US
> Army Corps of Engineers, not locals. They knew the levees were
> weak and were working on them near at least one of the failures.
>
>
>>>Over the past couple of days they've certainly shown that
>>>not all the snakes have scales. Looting TVs when there
>>>is nothing to plug them in to? Raping kids?

I suspect that's related to not stopping the looting (of non-survival
items). When some people perceive that there is no law and order they do
whatever they please.
>
>
> Very unfortunate. But remember two things: First, the media
> want to sell ink, photons and electrons. "If it bleeds, it leads."
> The images are disturbing and likely unstaged, but not necessarily
> representative. Second, those left behind in New Orleans are not
> a cross section of society. Mostly, they are the underclass who
> lacked the means to evacuate.

See above. For the most part I don't buy the lack of means. The rest of
it is true AFAIK.
>
>
>>>>I can empathize with but not fully agree with the idea that
>>>>a higher degree of risk is acceptable to people tied to the
>>>>oil industry in that area, but gambling with your life and
>>>>everything you own in order to run a tourist trap or work in
>>>>a Casino is simply unfathomable.
>
>
> See above. People take risks every day.
>
>
>
>>>People risk their lives on oil rigs and coal mines. I don't really
>>>see much difference. I do see the difference when civilization
>>>breaks down so easily though. Rabid animals need to be shot.
>
>
> Well, police & military are neither the brightest nor the most
> patient. They will make mistakes. How many non-rabid animals
> would you accept being shot? 1 of 10? 1 of 4? Would you
> shoot someone taking food? Or carrying their own rifle?
>
> None of this is easy.
>
> -- Robert
>
>


--
bill davidsen
SBC/Prodigy Yorktown Heights NY data center
http://newsgroups.news.prodigy.com
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel

keith wrote:

> Few politicians know anything about economics or capitalism (Jack Kemp
> was one who did). They're by nature commissars living off the fat of the
> land.
>
One right, one wrong. Politicians live by giving the voters what they
want, which is often "something for nothing." No one wanted to pay for
better levees in New Orleans, so they didn't get upgraded. No one wanted
to make "mandatory evacuation" more than a term, and no one wanted to take
responsibility for shooting looters, etc.

And FEMA is so totally incompetent that I won't begin to talk about it
here. I wrote an essay for my newsletter and blog, and I got so mad that
I had to go for a walk in the middle. And if Homeland Security didn't
know there were people dying in the Convention Center, they should be
fired, one and all; it was on network TV for days! I personally think the
secretary was lying about "this is the first I've heard of it."

--
bill davidsen
SBC/Prodigy Yorktown Heights NY data center
http://newsgroups.news.prodigy.com
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel

In comp.sys.ibm.pc.hardware.chips Bill Davidsen <davidsen@deathstar.prodigy.com> wrote:
> Robert Redelmeier wrote:
> Only the people who can't/don't read... there was a lot of
> publicity showing that the levees were good for category
> three, and that the hurricane was category five. The Corps of

In the Gulf it most certainly was. After landfall, I've heard
numbers from 2-4. But no matter. How come the levees were only
breached in two places? I suspect some sort of local defect.
The levees may not have been designed to contain Cat5 storm surges,
but they shouldn't have been undermined or washed away by slopover.

> Engineers said it, the local paper (something-Picayune?) won
> a prize for describing exactly what happened some years ago.

Times-Picayune. http://www.nola.com

If it was widely known, then why wasn't it prevented?

> healthy people who failed to respond caused much of their
> own misery.

Well, if you mean "cause" in the sense of "fail to prevent",
then almost all of us cause our own misery, at least
in clear hindsight.

> Steam. Needs only heat and gravity. No pumps, no fans, a
> very few failure modes. Multifuel furnaces are available,
> allowing operation on whatever is available (and/or cheap).

Yes, steam is good (watch the burn potential) and gravity
hot water is also possible. But how many homes are built
that way?

> I suspect that's related to not stopping the looting (of
> non-survival items). When some people perceive that there
> is no law and order they do whatever they please.

True, and it's very difficult to stop a large number
of people doing anything.

-- Robert
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel

George Macdonald wrote:
> On 1 Sep 2005 04:53:29 -0700, "Robert Myers" <rbmyersusa@gmail.com> wrote:
>
> >George Macdonald wrote:
> >> On 31 Aug 2005 05:36:51 -0700, "Robert Myers" <rbmyersusa@gmail.com> wrote:

> >>
> >NetBurst may be one of the few things we agree on. Intel's architect
> >says that he expected eventually to improve the IPC performance of
> >NetBurst. He also admits that his marching orders were to deliver a
> >processor with a high clock rate, which he did.
>
> Sounds suspicious to me - the last iteration made IPC worse! By that point
> they still had not acknowledged that it was a err, bust?... quite late in
> the day it would appear.
>
By then, the architect cited was no longer working for Intel. Who
knows what really happened. His talk at Stanford (which was available
online and may still be) sounded candid, but who knows what expectation
went unfulfilled. Leakage, we all know, was one, but by then he was
out of the picture.

> >> >4. For many applications, performance per watt is the figure of merit
> >> >of greatest interest because that will determine how much muscle can be
> >> >packed into a given space. For those who need single-thread
> >> >performance, it isn't a figure of merit that's of interest. If you
> >> >really need single-thread performance, there will always be options, at
> >> >a price
> >> >
> >> >http://www.alienware.com/configurator_pages/Aurora_alx.aspx?SysCode=PC-AURORALX-SLI-D
> >> >
> >> >IOW, if it's *that* important to you right now, all you have to do is
> >> >to get out your checkbook.
> >>
> >> That is an extreme example and irrelevant here where most people DIY, among
> >> other reasons. A careful choice of processor performance slot and
> >> components can get very nice single-thread performance, with the obvious
> >> promise of more to come. Now that the heat is (temporarily) off AMD on
> >> single-thread CPUs there is no way of knowing how much they may be holding
> >> back but I still see performance ramps possible for me as prices drop as
> >> introduction of the next speed jump slots in at the top-end... given that
> >> anybody who pays the $$ for the top slot either really needs it or is just
> >> mad.
> >>
> >It's apparent that you don't really need it.
>
> I *know* what I need and it's not the ridiculous Alienware system you want
> to suggest as a bolster for your POV. Using extreme point examples only
> weakens your point.
>
I don't know what point of view you think I'm bolstering. What is
possible onesy-twosy with careful selection and maybe with some heroic
cooling is always well beyond what is available as mass production.
You don't have to buy your system from Alienware if you don't care to.
The system I'm working on right now is significantly overclocked (but
with the memory subsystem within spec), and I didn't buy *it* from
Alienware. I'm guessing that we *are* pretty near the end of
the road but that you or most anyone else will be able to get
significantly greater single thread performance, at least for a while,
almost no matter what AMD does. It's not a very profound point.

<snip>

> >>
> >While you are insisting that the world is staying single-threaded, most
> >of the rest of the world is going to be trying to figure out how to use
> >multiple threads.
>
> Quit attributing concepts to me which I have not proposed. Where
> multi-thread fits it will be fine, according to the effort required - just
> don't try to tell me that it fits all and that the effort will always be
> worth the gain.
>
Sorry if I misunderstood you. That's the first positive thing I can
remember you saying about multi-threading. As to the effectiveness of
multi-threading, there is no way that I know of to circumvent Amdahl's
law.
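
To make that concrete, Amdahl's law puts a hard ceiling on
multi-threaded speedup: with N processors and a parallelizable
fraction P of the runtime, speedup = 1/((1-P) + P/N). A minimal
sketch in C (the P = 0.9 below is an illustration, not a measurement
of any real workload):

    #include <stdio.h>

    /* Amdahl's law: speedup from n processors when only a fraction
       p of the runtime can run in parallel.  p = 0.9 is an assumed
       figure, purely for illustration. */
    int main(void)
    {
        double p = 0.90;
        int n;
        for (n = 1; n <= 16; n *= 2) {
            double speedup = 1.0 / ((1.0 - p) + p / n);
            printf("%2d cores: %.2fx\n", n, speedup);
        }
        return 0;
    }

Even with 90% of the work parallel, sixteen cores buy only about
6.4x, and infinitely many cores cap out at 1/(1-P) = 10x.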

> >> The way that Intel just can't seem to resist the spin bothers me - the
> >> (marketing) tail is wagging the (engineering) dog just a bit too much...
> >> there's a corporate sickness here. The way that you have apparently
> >> latched on to their latest religion from just this past week bothers me
> >> too.<shrug>
> >>
> >There is no reason for you to make this personal. I haven't latched
> >onto anybody's religion. Power consumption as an issue for HPC wasn't
> >invented by Intel and I didn't learn about it from Intel. I didn't
> >learn about power consumption as an issue for servers from Intel: I
> >learned about it from google and from people with power and space
> >constraints. And I have been posting for a long time now about the
> >inevitable move to more cores as a way to get more performance within
> >the constraints of what is possible with standard cooling.
>
> <sigh>Here we go again - every time you get nailed, you feign indignation.
> I said "apparently" because that's the way it appears to me - I don't
> recall you applauding AMD's dual-core intentions which preceded Intel's
> commitment by a long way (Opteron was designed from the start to accommodate
> dual) or their demo of a year ago but I do recall you lauding Intel's HT in
> your previous posts on multi-threading. From my POV the shoe fits.
>
There is nothing feigned about this, George. Your attacks are slanderous,
and I am offended. Is that direct enough? I'm not at all excited by
Intel's Dual core, and I don't think I've said anything laudatory about
it. In case you missed it, I've allowed as how hyperthreading, from
one point of view (performance per watt, in fact) is a marketing
gimmick, because it adds proportionally as many transistors and as many
watts as it gains on average in performance. And, you're right,
neither have I said anything about dual core Opteron because that
doesn't excite me, either. I am looking forward to the four-core
Whitefield, if indeed the four cores share L2 cache, because it opens
up some really interesting possibilities.
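
To put rough numbers on that claim -- purely for illustration, not
from any measurement: if HT adds ~15% to power draw and averages ~15%
more throughput on threaded code, then performance per watt is
1.15/1.15 = 1.0, exactly where you started. By that figure of merit
it is a wash.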

> >> >> >That's why you do profiling.
> >> >>
> >> >> It makes me wonder sometimes when you spout some buzzword like that as
> >> >> though it is known to work well for all general purpose code working on all
> >> >> possible data sets.<shrug> People who use compilers know this.
> >> >>
> >> >Would you have been happier if I had said "feedback-directed
> >> >optimization?" The compiler can't accurately infer dependencies from
> >> >software written in, say, c. If those ambiguities didn't exist, there
> >> >would still be the problem of determining the hot paths in the
> >> >software. Given an accurate control and data flow graph, it's pretty
> >> >easy to discover the hot paths, and the main reason feedback directed
> >> >optimization doesn't always work well is that there is no accurate
> >> >control and data flow graph to begin with. Even the most unlikely of
> >> >software, like Microsoft Word, turns out to be incredibly repetitive in
> >> >the paths taken through the software.
> >>
> >> You already tried the "feedback directed..." one a while back - thanks for
> >> reminding me.🙂 The thing is in a performance-oriented application -- not
> >> MS Word🙂 -- how many computer cycles can you afford to "waste" on
> >> "profiling" the problem data-set at hand before you end up losing out on
> >> the balance of any gain? I'm afraid your "hot paths" are just as likely to
> >> be transient anyway - calling it "easy" just doesn't wash for me.
> >>
> >I chose MS Word as an example of a particularly challenging
> >application, because it is. I'm sure there are examples where
> >profiling doesn't help or helps only if you profile on the actual
> >problem, but I haven't encountered such a problem. The difficulty,
> >which you either don't understand or don't want to acknowledge, is the
> >way that software is written.
>
> Hmm, should I now feign indignation? We've been over this before but it
> seems to me that you are the one who does not understand... in particular
> the software development cycle; your knowledge and experience of commercial
> software is limited if you don't know of instances of software which handle
> different types of problems with varying volumes/density of original *and*
> transient data. Have you ever written software to be sold as a commercial
> product? Assuming some one-time static profile would do, buyers are not
> expected to have compilers to re-profile for their current data set.
>
I'm basing my comments on papers that studied the behavior of
commercial software. I think you know that I have an interest in
Itanium. I haven't read all of the VLIW literature, but I've read a
good bit of it.

I'm still not certain that you get the fact that a compiler cannot
construct an unambiguous control and dataflow graph from c and not even
from Fortran, which is a little easier than c. If you don't have that
information, you can't build an auto-parallelizing compiler. If you do
have that information, there may still be cases that are resistant to
auto-parallelization, but if you can't get past the problem of control
and dataflow ambiguities, you can't auto-parallelize anything but the
most trivial of cases.
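
A tiny sketch of what I mean (hypothetical code, but it is the
canonical case):

    /* Given only this function, a C compiler cannot prove that dst
       and src do not overlap, so it cannot safely reorder or
       parallelize the loop without a runtime check. */
    void scale(double *dst, const double *src, double k, int n)
    {
        int i;
        for (i = 0; i < n; i++)
            dst[i] = k * src[i];  /* if dst == src + 1, iteration
                                     order changes the result */
    }

The programmer may know the arrays never overlap, but unless he can
say so explicitly, the compiler has to assume the worst.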

> This is also much more than a programming issue - it includes the whole
> system design/engineering from top to bottom. You're going to need a
> helluva lot more than a "profiling" compiler.
>
I think I'm aware of the history of the attempts to build heroic
autoparallelising compilers. I have very definite ideas of why those
efforts failed. I could be wrong in detail, but the failure had
nothing to do with some BS about the "software development cycle."

RM
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel

Bill Davidsen wrote:

> keith wrote:
>
>> Few politicians know anything about economics or capitalism (Jack Kemp
>> was one who did). They're by nature commissars living off the fat of the
>> land.
>>
> One right, one wrong. Politicians live by giving the voters what they
> want, which is often "something for nothing." No one wanted to pay for
> better levees in New Orleans, so they didn't get upgraded.

That's just false. People were begging for the money to make needed
repairs, and the White House cut the funds.


> No one wanted
> to make "mandatory evacuation" more than a term, and no one wanted to take
> responsibility for shooting looters, etc.
>
> And FEMA is so totally incompetent that I won't begin to talk about it
> here. I wrote an essay for my newsletter and blog, and I got so mad that
> I had to go for a walk in the middle. And if Homeland Security didn't
> know there were people dying in the Convention Center, they should be
> fired, one and all; it was on network TV for days! I personally think the
> secretary was lying about "this is the first I've heard of it."
>


--
The e-mail address in our reply-to line is reversed in an attempt to
minimize spam. Our true address is of the form che...@prodigy.net.
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel

"CJT" <abujlehc@prodigy.net> wrote in message
news:43189F94.1060903@prodigy.net...

> Bill Davidsen wrote:

>> One right, one wrong. Politicians live by giving the voters what they
>> want, which is often "something for nothing." No one wanted to pay for
>> better levees in New Orleans, so they didn't get upgraded.

> That's just false. People were begging for the money to make needed
> repairs, and the White House cut the funds.

In your world, begging for money and being willing to pay for something
are comparable? They are practically opposites in mine.

DS
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel

On 1 Sep 2005 04:53:29 -0700, "Robert Myers" <rbmyersusa@gmail.com> wrote:

>George Macdonald wrote:
>> On 31 Aug 2005 05:36:51 -0700, "Robert Myers" <rbmyersusa@gmail.com> wrote:
>>
>> >George Macdonald wrote:
>> >> On 29 Aug 2005 16:58:42 -0700, "Robert Myers" <rbmyersusa@gmail.com> wrote:
>> >>
>> >> >
>>
>> >> Hmm, and you said you didn't want to get into another "Intel/AMD"
>> >> round... and yet, there you go again. I was only stating a documented
>> >> acknowledged fact - your prognostications are not relevant.
>> >>
>> >> The game makers have already stated that they don't expect to get much out
>> >> of multi-core - it looks to me single high-speed core is what is needed
>> >> there for a (long) while yet. Hell, dual CPUs have been available for long
>> >> enough and they have not tweaked any gaming interest.
>> >>
>> >There are several different issues tangled up here:
>> >
>> >1. How much further single-thread performance can be pushed.
>> >
>> >2. How much those chips will cost.
>> >
>> >3. Whether single-thread chips will dominate gaming.
>> >
>> >4. How much of what Intel is doing is pure market-speak.
>> >
>> >5. The purported advantage AMD has with respect to "heat stress."
>> >
>> >Taking the issues in inverse order:
>> >
>> >5. The "heat stress" problems Intel has are the result of having to run
>> >NetBurst at a higher clock to get comparable performance. The P6 core
>> >derivatives have plenty of headroom.
>>
>> Well that's NetBust fer ya! How'd that happen? They didn't know this was
>> going to happen? They did know but decided to press ahead with a spin
>> angle on it anyway?... and add HT as a crutch? The "advantage AMD has" is
>> not purported - it is tangible... palpable even.
>>
>NetBurst may be one of the few things we agree on. Intel's architect
>says that he expected eventually to improve the IPC performance of
>NetBurst. He also admits that his marching orders were to deliver a
>processor with a high clock rate, which he did.

Sounds suspicious to me - the last iteration made IPC worse! By that point
they still had not acknowledged that it was a err, bust?... quite late in
the day it would appear.

>> >4. For many applications, performance per watt is the figure of merit
>> >of greatest interest because that will determine how much muscle can be
>> >packed into a given space. For those who need single-thread
>> >performance, it isn't a figure of merit that's of interest. If you
>> >really need single-thread performance, there will always be options, at
>> >a price
>> >
>> >http://www.alienware.com/configurator_pages/Aurora_alx.aspx?SysCode=PC-AURORALX-SLI-D
>> >
>> >IOW, if it's *that* important to you right now, all you have to do is
>> >to get out your checkbook.
>>
>> That is an extreme example and irrelevant here where most people DIY, among
>> other reasons. A careful choice of processor performance slot and
>> components can get very nice single-thread performance, with the obvious
>> promise of more to come. Now that the heat is (temporarily) off AMD on
>> single-thread CPUs there is no way of knowing how much they may be holding
>> back but I still see performance ramps possible for me as prices drop as
>> introduction of the next speed jump slots in at the top-end... given that
>> anybody who pays the $$ for the top slot either really needs it or is just
>> mad.
>>
>It's apparent that you don't really need it.

I *know* what I need and it's not the ridiculous Alienware system you want
to suggest as a bolster for your POV. Using extreme point examples only
weakens your point.

>> >3. It may take a while, but there really isn't anywhere else to go.
>> >The idea of having a separate, specialized physics engine is kind of
>> >silly because there's no reason why the physics can't be done by
>> >another CPU core (the solution I really like, actually, is a design
>> >like Cell, which seems to get the best of both worlds). You're going
>> >to accuse me of shilling for Intel, but you (or someone else reading
>> >this) might be interested in
>> >
>> >http://www.intel.com/cd/ids/developer/asmo-na/eng/strategy/multicore/index.htm
>> >
>> >Scroll down past the marketing bs to "Application Development and
>> >Performance Resources."
>>
>> Sorry but I have to hark back to my quote by Honda San: "I want to touch
>> and hold a better piston, not watch another concept presentation". While
>> AMD agrees with, and even pre-empted Intel on dual core, they do not appear
>> to be throwing the baby out with the bath water here... in quite the same
>> way.
>>
>While you are insisting that the world is staying single-threaded, most
>of the rest of the world is going to be trying to figure out how to use
>multiple threads.

Quit attributing concepts to me which I have not proposed. Where
multi-thread fits it will be fine, according to the effort required - just
don't try to tell me that it fits all and that the effort will always be
worth the gain.

>> The way that Intel just can't seem to resist the spin bothers me - the
>> (marketing) tail is wagging the (engineering) dog just a bit too much...
>> there's a corporate sickness here. The way that you have apparently
>> latched on to their latest religion from just this past week bothers me
>> too.<shrug>
>>
>There is no reason for you to make this personal. I haven't latched
>onto anybody's religion. Power consumption as an issue for HPC wasn't
>invented by Intel and I didn't learn about it from Intel. I didn't
>learn about power consumption as an issue for servers from Intel: I
>learned about it from google and from people with power and space
>constraints. And I have been posting for a long time now about the
>inevitable move to more cores as a way to get more performance within
>the constraints of what is possible with standard cooling.

<sigh>Here we go again - every time you get nailed, you feign indignation.
I said "apparently" because that's the way it appears to me - I don't
recall you applauding AMD's dual-core intentions which preceded Intel's
commitment by a long way (Opteron was designed from the start to accommodate
dual) or their demo of a year ago but I do recall you lauding Intel's HT in
your previous posts on multi-threading. From my POV the shoe fits.

>> >> >That's why you do profiling.
>> >>
>> >> It makes me wonder sometimes when you spout some buzzword like that as
>> >> though it is known to work well for all general purpose code working on all
>> >> possible data sets.<shrug> People who use compilers know this.
>> >>
>> >Would you have been happier if I had said "feedback-directed
>> >optimization?" The compiler can't accurately infer dependencies from
>> >software written in, say, c. If those ambiguities didn't exist, there
>> >would still be the problem of determining the hot paths in the
>> >software. Given an accurate control and data flow graph, it's pretty
>> >easy to discover the hot paths, and the main reason feedback directed
>> >optimization doesn't always work well is that there is no accurate
>> >control and data flow graph to begin with. Even the most unlikely of
>> >software, like Microsoft Word, turns out to be incredibly repetitive in
>> >the paths taken through the software.
>>
>> You already tried the "feedback directed..." one a while back - thanks for
>> reminding me.🙂 The thing is in a performance-oriented application -- not
>> MS Word🙂 -- how many computer cycles can you afford to "waste" on
>> "profiling" the problem data-set at hand before you end up losing out on
>> the balance of any gain? I'm afraid your "hot paths" are just as likely to
>> be transient anyway - calling it "easy" just doesn't wash for me.
>>
>I chose MS Word as an example of a particularly challenging
>application, because it is. I'm sure there are examples where
>profiling doesn't help or helps only if you profile on the actual
>problem, but I haven't encountered such a problem. The difficulty,
>which you either don't understand or don't want to acknowledge, is the
>way that software is written.

Hmm, should I now feign indignation? We've been over this before but it
seems to me that you are the one who does not understand... in particular
the software development cycle; your knowledge and experience of commercial
software is limited if you don't know of instances of software which handle
different types of problems with varying volumes/density of original *and*
transient data. Have you ever written software to be sold as a commercial
product? Assuming some one-time static profile would do, buyers are not
expected to have compilers to re-profile for their current data set.

This is also much more than a programming issue - it includes the whole
system design/engineering from top to bottom. You're going to need a
helluva lot more than a "profiling" compiler.

--
Rgds, George Macdonald
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel

David Schwartz wrote:

> "CJT" <abujlehc@prodigy.net> wrote in message
> news:43189F94.1060903@prodigy.net...
>
>
>>Bill Davidsen wrote:
>
>
>>>One right, one wrong. Politicians live by giving the voters what they
>>>want, which is often "something for nothing." No one wanted to pay for
>>>better levees in New Orleans, so they didn't get upgraded.
>
>
>>That's just false. People were begging for the money to make needed
>>repairs, and the White House cut the funds.
>
>
> In your world, begging for money and being willing to pay for something
> are comparable? They are practically opposites in mine.
>
> DS
>
>
Nothing of the sort, but the Corps of Engineers was responsible for
them. Navigable waterways and all that. I doubt N.O. could have
legally done it on their own even if they'd had the money to do so.

Grow up.

--
The e-mail address in our reply-to line is reversed in an attempt to
minimize spam. Our true address is of the form che...@prodigy.net.
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel

On 2 Sep 2005 17:22:29 -0700, "Robert Myers" <rbmyersusa@gmail.com> wrote:

>George Macdonald wrote:
>> On 1 Sep 2005 04:53:29 -0700, "Robert Myers" <rbmyersusa@gmail.com> wrote:
>>
>> >George Macdonald wrote:
>> >> On 31 Aug 2005 05:36:51 -0700, "Robert Myers" <rbmyersusa@gmail.com> wrote:
>
>> >>
>> >NetBurst may be one of the few things we agree on. Intel's architect
>> >says that he expected eventually to improve the IPC performance of
>> >NetBurst. He also admits that his marching orders were to deliver a
>> >processor with a high clock rate, which he did.
>>
>> Sounds suspicious to me - the last iteration made IPC worse! By that point
>> they still had not acknowledged that it was a err, bust?... quite late in
>> the day it would appear.
>>
>By then, the architect cited was no longer working for Intel. Who
>knows what really happened. His talk at Stanford (which was available
>online and may still be) sounded candid, but who knows what expectation
>went unfulfilled. Leakage, we all know, was one, but by then he was
>out of the picture.

As far as expectations, I'm a little surprised that Intel did not get more
out of Tracecache - it seemed like a great idea so I dunno whether it just
needs to be bigger or whether the bottom line is that average code is just
not repetitive enough to get the benefit of skipped decode stages.

Given how little was obtained from 90nm -- both Intel and AMD seemed to
need a second stab at it to get it working right and for little real clock
speed increment so far -- the increased power density of 65nm is going to
be "interesting".

>> >> >4. For many applications, performance per watt is the figure of merit
>> >> >of greatest interest because that will determine how much muscle can be
>> >> >packed into a given space. For those who need single-thread
>> >> >performance, it isn't a figure of merit that's of interest. If you
>> >> >really need single-thread performance, there will always be options, at
>> >> >a price
>> >> >
>> >> >http://www.alienware.com/configurator_pages/Aurora_alx.aspx?SysCode=PC-AURORALX-SLI-D
>> >> >
>> >> >IOW, if it's *that* important to you right now, all you have to do is
>> >> >to get out your checkbook.
>> >>
>> >> That is an extreme example and irrelevant here where most people DIY, among
>> >> other reasons. A careful choice of processor performance slot and
>> >> components can get very nice single-thread performance, with the obvious
>> >> promise of more to come. Now that the heat is (temporarily) off AMD on
>> >> single-thread CPUs there is no way of knowing how much they may be holding
>> >> back but I still see performance ramps possible for me as prices drop as
>> >> introduction of the next speed jump slots in at the top-end... given that
>> >> anybody who pays the $$ for the top slot either really needs it or is just
>> >> mad.
>> >>
>> >It's apparent that you don't really need it.
>>
>> I *know* what I need and it's not the ridiculous Alienware system you want
>> to suggest as a bolster for your POV. Using extreme point examples only
>> weakens your point.
>>
>I don't know what point of view you think I'm bolstering.

I thought it was pretty clear - a ridiculously priced "luxury" gamer
system, with SLI and a de luxe liquid cooled case, as an example of a
single core CPU.

> What is
>possible onesy-twosy with careful selection and maybe with some heroic
>cooling is always well beyond what is available as mass production.
>You don't have to buy your system from Alienware if you don't care to.

Do I have to explain DIY to you?;-)

>The system I'm working on right now is significantly overclocked (but
>with the memory subsystem within spec), and I didn't buy *it* from
>Alienware. I'm guessing that we *are* pretty near the end of
>the road but that you or most anyone else will be able to get
>significantly greater single thread performance, at least for a while,
>almost no matter what AMD does. It's not a very profound point.

If you look at what overclockers are getting out of Athlon64 systems, it
would appear that we still have some more to come yet from even current
technology/silicon. This has to do with the humble JEDEC DDR-SDRAM specs
as much as with CPU clock speed.

Maybe the "end of the road" is in sight but we've been hearing similar
stories since the beginning of computing... with the same theme: "future
advances are going to have to come from software and better algorithms".🙂

>> >While you are insisting that the world is staying single-threaded, most
>> >of the rest of the world is going to be trying to figure out how to use
>> >multiple threads.
>>
>> Quit attributing concepts to me which I have not proposed. Where
>> multi-thread fits it will be fine, according to the effort required - just
>> don't try to tell me that it fits all and that the effort will always be
>> worth the gain.
>>
>Sorry if I misunderstood you. That's the first positive thing I can
>remember you saying about multi-threading. As to the effectiveness of
>multi-threading, there is no way that I know of to circumvent Amdahl's
>law.

It's not like it's a new concept - I've used it where it was actually
desirable/necessary, in assembler (not x86) even. I don't relish the
experience of trying to bend an essentially serial algorithm into being
able to perform consistently better as a parallel implementation; obviously
a better compilation procedure is not going to help for those cases.
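
Think of something as simple as a recurrence -- a sketch, not code
from anything I've shipped:

    /* Logistic-map recurrence: each x depends on the previous x,
       so the iterations cannot be spread across cores without
       changing the algorithm itself.  Illustration only. */
    double iterate(double x, int n)
    {
        int i;
        for (i = 0; i < n; i++)
            x = 4.0 * x * (1.0 - x);
        return x;
    }

No compiler trick parallelises that; you have to find a different
method, if one exists at all.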

>> >> The way that Intel just can't seem to resist the spin bothers me - the
>> >> (marketing) tail is wagging the (engineering) dog just a bit too much...
>> >> there's a corporate sickness here. The way that you have apparently
>> >> latched on to their latest religion from just this past week bothers me
>> >> too.<shrug>
>> >>
>> >There is no reason for you to make this personal. I haven't latched
>> >onto anybody's religion. Power consumption as an issue for HPC wasn't
>> >invented by Intel and I didn't learn about it from Intel. I didn't
>> >learn about power consumption as an issue for servers from Intel: I
>> >learned about it from google and from people with power and space
>> >constraints. And I have been posting for a long time now about the
>> >inevitable move to more cores as a way to get more performance within
>> >the constraints of what is possible with standard cooling.
>>
>> <sigh>Here we go again - every time you get nailed, you feign indignation.
>> I said "apparently" because that's the way it appears to me - I don't
>> recall you applauding AMD's dual-core intentions which preceded Intel's
>> commitment by a long way (Opteron was designed from the start to accommodate
>> dual) or their demo of a year ago but I do recall you lauding Intel's HT in
>> your previous posts on multi-threading. From my POV the shoe fits.
>>
>There is nothing feigned about this, George. Your attacks are slanderous,
>and I am offended. Is that direct enough?

Oh it's certainly direct but it doesn't quite fit the evidence from my POV.
You answered me by pointing to a bunch of very recent Intel PR & docs from
Intel's latest spin machine. You seemed riled at the fact that I regard
Intel's IDF "dual-core for performance/watt" PR as nothing more than spin.

> I'm not at all excited by
>Intel's Dual core, and I don't think I've said anything laudatory about
>it. In case you missed it, I've allowed as how hyperthreading, from
>one point of view (performance per watt, in fact) is a marketing
>gimmick, because it adds proportionally as many transistors and as many
>watts as it gains on average in performance. And, you're right,
>neither have I said anything about dual core Opteron because that
>doesn't excite me, either. I am looking forward to the four-core
>Whitefield, if indeed the four cores share L2 cache, because it opens
>up some really interesting possibilities.

The bottom line for me is that if dual core means two down-clocked cores,
I'd rather have the err, up-clocked single core version... thanks.

>> >> >> >That's why you do profiling.
>> >> >>
>> >> >> It makes me wonder sometimes when you spout some buzzword like that as
>> >> >> though it is known to work well for all general purpose code working on all
>> >> >> possible data sets.<shrug> People who use compilers know this.
>> >> >>
>> >> >Would you have been happier if I had said "feedback-directed
>> >> >optimization?" The compiler can't accurately infer dependencies from
>> >> >software written in, say, c. If those ambiguities didn't exist, there
>> >> >would still be the problem of determining the hot paths in the
>> >> >software. Given an accurate control and data flow graph, it's pretty
>> >> >easy to discover the hot paths, and the main reason feedback directed
>> >> >optimization doesn't always work well is that there is no accurate
>> >> >control and data flow graph to begin with. Even the most unlikely of
>> >> >software, like Microsoft Word, turns out to be incredibly repetitive in
>> >> >the paths taken through the software.
>> >>
>> >> You already tried the "feedback directed..." one a while back - thanks for
>> >> reminding me.🙂 The thing is in a performance-oriented application -- not
>> >> MS Word🙂 -- how many computer cycles can you afford to "waste" on
>> >> "profiling" the problem data-set at hand before you end up losing out on
>> >> the balance of any gain? I'm afraid your "hot paths" are just as likely to
>> >> be transient anyway - calling it "easy" just doesn't wash for me.
>> >>
>> >I chose MS Word as an example of a particularly challenging
>> >application, because it is. I'm sure there are examples where
>> >profiling doesn't help or helps only if you profile on the actual
>> >problem, but I haven't encountered such a problem. The difficulty,
>> >which you either don't understand or don't want to acknowledge, is the
>> >way that software is written.
>>
>> Hmm, should I now feign indignation? We've been over this before but it
>> seems to me that you are the one who does not understand... in particular
>> the software development cycle; your knowledge and experience of commercial
>> software is limited if you don't know of instances of software which handle
>> different types of problems with varying volumes/density of original *and*
>> transient data. Have you ever written software to be sold as a commercial
>> product? Assuming some one-time static profile would do, buyers are not
>> expected to have compilers to re-profile for their current data set.
>>
>I'm basing my comments on papers that studied the behavior of
>commercial software. I think you know that I have an interest in
>Itanium. I haven't read all of the VLIW literature, but I've read a
>good bit of it.
>
>I'm still not certain that you get the fact that a compiler cannot
>construct an unambiguous control and dataflow graph from c and not even
>from Fortran, which is a little easier than c. If you don't have that
>information, you can't build an auto-parallelizing compiler. If you do
>have that information, there may still be cases that are resistant to
>auto-parallelization, but if you can't get past the problem of control
>and dataflow ambiguities, you can't auto-parallelize anything but the
>most trivial of cases.

If a human cannot find the parallelisation value in the algorithm/method at
hand, you can't really think a set of new language constructs and semantics
is going to help?

>> This is also much more than a programming issue - it includes the whole
>> system design/engineering from top to bottom. You're going to need a
>> helluva lot more than a "profiling" compiler.
>>
>I think I'm aware of the history of the attempts to build heroic
>autoparallelising compilers. I have very definite ideas of why those
>efforts failed. I could be wrong in detail, but the failure had
>nothing to do with some BS about the "software development cycle."

You need to think about it then. So far you've talked about graduate
students and researchers who write their own code - this is not how the
majority of business software is created and used. The very idea of an
end-user having access to a compiler, *and* even a portion of the source
code, to improve performance leads to a very rocky path.

--
Rgds, George Macdonald
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel

George Macdonald wrote:
> On 2 Sep 2005 17:22:29 -0700, "Robert Myers" <rbmyersusa@gmail.com> wrote:
>
> >George Macdonald wrote:
> >> On 1 Sep 2005 04:53:29 -0700, "Robert Myers" <rbmyersusa@gmail.com> wrote:
> >>
> >> >George Macdonald wrote:
> >> >>Robert Myers wrote:

> >
> >> >>
> >> >NetBurst may be one of the few things we agree on. Intel's architect
> >> >says that he expected eventually to improve the IPC performance of
> >> >NetBurst. He also admits that his marching orders were to deliver a
> >> >processor with a high clock rate, which he did.
> >>
> >> Sounds suspicious to me - the last iteration made IPC worse! By that point
> >> they still had not acknowledged that it was a err, bust?... quite late in
> >> the day it would appear.
> >>
> >By then, the architect cited was no longer working for Intel. Who
> >knows what really happened. His talk at Stanford (which was available
> >online and may still be) sounded candid, but who knows what expectation
> >went unfulfilled. Leakage, we all know, was one, but by then he was
> >out of the picture.
>
> As far as expectations, I'm a little surprised that Intel did not get more
> out of Tracecache - it seemed like a great idea so I dunno whether it just
> needs to be bigger or whether the bottom line is that average code is just
> not repetitive enough to get the benefit of skipped decode stages.
>
I don't know. There are a fair number of ideas that depend on software
being repetitive (trace cache, code morphing, dynamic optimization and
introspection to name a few). None of them seem to have had the
dramatic payoff that probably was anticipated when the light bulb first
went off.

> Given how little was obtained from 90nm -- both Intel and AMD seemed to
> need a second stab at it to get it working right and for little real clock
> speed increment so far -- the increased power density of 65nm is going to
> be "interesting".
>
Intel is advertising 3-D tri-gate transistors at 30nm, with one obvious
goal of reducing leakage. I'm assuming they're not introducing the
technology at 65nm because they don't have to, but maybe I'm just being
optimistic.

> >> >> >4. For many applications, performance per watt is the figure of merit
> >> >> >of greatest interest because that will determine how much muscle can be
> >> >> >packed into a given space. For those who need single-thread
> >> >> >performance, it isn't a figure of merit that's of interest. If you
> >> >> >really need single-thread performance, there will always be options, at
> >> >> >a price
> >> >> >
> >> >> >http://www.alienware.com/configurator_pages/Aurora_alx.aspx?SysCode=PC-AURORALX-SLI-D
> >> >> >
> >> >> >IOW, if it's *that* important to you right now, all you have to do is
> >> >> >to get out your checkbook.
> >> >>
> >> >> That is an extreme example and irrelevant here where most people DIY, among
> >> >> other reasons. A careful choice of processor performance slot and
> >> >> components can get very nice single-thread performance, with the obvious
> >> >> promise of more to come. Now that the heat is (temporarily) off AMD on
> >> >> single-thread CPUs there is no way of knowing how much they may be holding
> >> >> back but I still see performance ramps possible for me as prices drop as
> >> >> introduction of the next speed jump slots in at the top-end... given that
> >> >> anybody who pays the $$ for the top slot either really needs it or is just
> >> >> mad.
> >> >>
> >> >It's apparent that you don't really need it.
> >>
> >> I *know* what I need and it's not the ridiculous Alienware system you want
> >> to suggest as a bolster for your POV. Using extreme point examples only
> >> weakens your point.
> >>
> >I don't know what point of view you think I'm bolstering.
>
> I thought it was pretty clear - a ridiculously priced "luxury" gamer
> system, with SLI and a de luxe liquid cooled case, as an example of a
> single core CPU.
>
I posted a link to something that could be bought off the shelf.
Alienware isn't the only one who makes them, and, if off-the-shelf
single thread performance becomes more rare, the supply will just
increase.

> > What is
> >possible onesy-twosy with careful selection and maybe with some heroic
> >cooling is always well beyond what is available as mass production.
> >You don't have to buy your system from Alienware if you don't care to.
>
> Do I have to explain DIY to you?;-)
>
No, but most businesses probably wouldn't be pleased at the thought of
homebrew machines that are running out of spec.


<snip>

>
> >> >> The way that Intel just can't seem to resist the spin bothers me - the
> >> >> (marketing) tail is wagging the (engineering) dog just a bit too much...
> >> >> there's a corporate sickness here. The way that you have apparently
> >> >> latched on to their latest religion from just this past week bothers me
> >> >> too.<shrug>
> >> >>
> >> >There is no reason for you to make this personal. I haven't latched
> >> >onto anybody's religion. Power consumption as an issue for HPC wasn't
> >> >invented by Intel and I didn't learn about it from Intel. I didn't
> >> >learn about power consumption as an issue for servers from Intel: I
> >> >learned about it from google and from people with power and space
> >> >constraints. And I have been posting for a long time now about the
> >> >inevitable move to more cores as a way to get more performance within
> >> >the constraints of what is possible with standard cooling.
> >>
> >> <sigh>Here we go again - every time you get nailed, you feign indignation.
> >> I said "apparently" because that's the way it appears to me - I don't
> >> recall you applauding AMD's dual-core intentions which preceded Intel's
> >> commitment by a long way (Opteron was designed from the start to accommodate
> >> dual) or their demo of a year ago but I do recall you lauding Intel's HT in
> >> your previous posts on multi-threading. From my POV the shoe fits.
> >>
> >There is nothing feigned about this, George. Your attacks are slanderous,
> >and I am offended. Is that direct enough?
>
> Oh it's certainly direct but it doesn't quite fit the evidence from my POV.
> You answered me by pointing to a bunch of very recent Intel PR & docs from
> Intel's latest spin machine. You seemed riled at the fact that I regard
> Intel's IDF "dual-core for performance/watt" PR as nothing more than spin.
>
Those weren't entirely docs from Intel's spin machine. That was a page
for developers.

> > I'm not at all excited by
> >Intel's Dual core, and I don't think I've said anything laudatory about
> >it. In case you missed it, I've allowed as how hyperthreading, from
> >one point of view (performance per watt, in fact) is a marketing
> >gimmick, because it adds proportionally as many transistors and as many
> >watts as it gains on average in performance. And, you're right,
> >neither have I said anything about dual core Opteron because that
> >doesn't excite me, either. I am looking forward to the four-core
> >Whitefield, if indeed the four cores share L2 cache, because it opens
> >up some really interesting possibilities.
>
> The bottom line for me is that if dual core means two down-clocked cores,
> I'd rather have the err, up-clocked single core version... thanks.
>
I'm sure you won't be alone. It will be interesting to see how the
market copes with that kind of need.

> >> >> >> >That's why you do profiling.
> >> >> >>
> >> >> >> It makes me wonder sometimes when you spout some buzzword like that as
> >> >> >> though it is known to work well for all general purpose code working on all
> >> >> >> possible data sets.<shrug> People who use compilers know this.
> >> >> >>
> >> >> >Would you have been happier if I had said "feedback-directed
> >> >> >optimization?" The compiler can't accurately infer dependencies from
> >> >> >software written in, say, c. If those ambiguities didn't exist, there
> >> >> >would still be the problem of determining the hot paths in the
> >> >> >software. Given an accurate control and data flow graph, it's pretty
> >> >> >easy to discover the hot paths, and the main reason feedback directed
> >> >> >optimization doesn't always work well is that there is no accurate
> >> >> >control and data flow graph to begin with. Even the most unlikely of
> >> >> >software, like Microsoft Word, turns out to be incredibly repetitive in
> >> >> >the paths taken through the software.
> >> >>
> >> >> You already tried the "feedback directed..." one a while back - thanks for
> >> >> reminding me.🙂 The thing is in a performance-oriented application -- not
> >> >> MS Word🙂 -- how many computer cycles can you afford to "waste" on
> >> >> "profiling" the problem data-set at hand before you end up losing out on
> >> >> the balance of any gain? I'm afraid your "hot paths" are just as likely to
> >> >> be transient anyway - calling it "easy" just doesn't wash for me.
> >> >>
> >> >I chose MS Word as an example of a particularly challenging
> >> >application, because it is. I'm sure there are examples where
> >> >profiling doesn't help or helps only if you profile on the actual
> >> >problem, but I haven't encountered such a problem. The difficulty,
> >> >which you either don't understand or don't want to acknowledge, is the
> >> >way that software is written.
> >>
> >> Hmm, should I now feign indignation? We've been over this before but it
> >> seems to me that you are the one who does not understand... in particular
> >> the software development cycle; your knowledge and experience of commercial
> >> software is limited if you don't know of instances of software which handle
> >> different types of problems with varying volumes/density of original *and*
> >> transient data. Have you ever written software to be sold as a commercial
> >> product? Assuming some one-time static profile would do, buyers are not
> >> expected to have compilers to re-profile for their current data set.
> >>
> >I'm basing my comments on papers that studied the behavior of
> >commercial software. I think you know that I have an interest in
> >Itanium. I haven't read all of the VLIW literature, but I've read a
> >good bit of it.
> >
> >I'm still not certain that you get the fact that a compiler cannot
> >construct an unambiguous control and dataflow graph from c and not even
> >from Fortran, which is a little easier than c. If you don't have that
> >information, you can't build an auto-parallelizing compiler. If you do
> >have that information, there may still be cases that are resistant to
> >auto-parallelization, but if you can't get past the problem of control
> >and dataflow ambiguities, you can't auto-parallelize anything but the
> >most trivial of cases.
>
> If a human cannot find the parallelisation value in the algorithm/method at
> hand, you can't really think a set of new language constructs and semantics
> is going to help?
>
You've set up a false dichotomy: either a human being can't find
parallelism and better semantics won't help or the human being can find
the parallelism and you don't need the semantics.

Better semantics would help human beings write correct code. Subsets
of ada exist that allow for automatic checking for correctness. TLA+

http://research.microsoft.com/users/lamport/tla/tla.html

was designed from the ground up for the correct specification of
concurrent systems. Neither is what I would call a perfect solution,
but either is better than Java, C, Fortran, or any extension I know.

In an ideal world, a programmer would describe the properties of the
system (as in TLA+) and the compiler would implement the concurrency.
With a system like TLA+ available, I'm not sure why it's acceptable to
be guessing about what c code will do, but that's the state of the art,
apparently.
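
As a sketch of what "guessing" means in practice (a toy example of my
own, nothing from the TLA+ pages): the following compiles without a
murmur, yet what it prints depends on how the two threads happen to
interleave.

    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;            /* shared, unprotected */

    static void *bump(void *arg)
    {
        for (int i = 0; i < 1000000; i++)
            counter++;                  /* unsynchronized read-modify-write */
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, bump, NULL);
        pthread_create(&t2, NULL, bump, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("%ld\n", counter);       /* almost never 2000000 */
        return 0;
    }

The compiler accepts this happily; only a tool that models the possible
interleavings, which is what TLA+ does, can tell you it loses updates.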

> >> This is also much more than a programming issue - it includes the whole
> >> system design/engineering from top to bottom. You're going to need a
> >> helluva lot more than a "profiling" compiler.
> >>
> >I think I'm aware of the history of the attempts to build heroic
> >autoparallelising compilers. I have very definite ideas of why those
> >efforts failed. I could be wrong in detail, but the failure had
> >nothing to do with some BS about the "software development cycle."
>
> You need to think about it then. So far you've talked about graduate
> students and researchers who write their own code - this is not how the
> majority of business software is created and used. The very idea of an
> end-user having access to a compiler, *and* even a portion of the source
> code, to improve performance leads to a very rocky path.
>
I'm not expecting end users to tune code routinely. It's not my part
of the world, but at one time IBM gave source to at least some end
users, who tuned, compiled, and found and reported bugs. So it's not
unthinkable, but it's not really a solution to anything either.
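
Just to be concrete about what re-profiling would ask of a buyer,
here's a sketch of the feedback-directed cycle, assuming gcc's
profile-guided flags (the file and input names are invented):

    gcc -O2 -fprofile-generate app.c -o app    # instrumented build
    ./app representative-input                 # run(s) on typical data
    gcc -O2 -fprofile-use app.c -o app         # rebuild along the hot paths

Nobody ships that loop to customers, which I take to be George's real
point; it's an objection to deployment, not to the technique.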

The fundamental problem is the loss of information that occurs when
what the programmer knows about the problem is reduced to a language
like c. In most cases the compiler can't, even in theory, decide what
can be done concurrently because of the limitations of the language.
Nor can the compiler or any other tool check for correctness.
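
A small sketch of what gets lost, assuming nothing beyond C99: the
programmer may know the two arrays never overlap, but plain c gives
him no way to say so, and the compiler must assume they might.

    /* The compiler has to allow for dst[] and src[] overlapping,
       so it must keep the iterations in order. */
    void scale(double *dst, const double *src, int n)
    {
        for (int i = 0; i < n; i++)
            dst[i] = 2.0 * src[i];
    }

    /* C99's restrict hands that one fact back to the compiler,
       and the loop becomes trivially vectorizable. */
    void scale_r(double * restrict dst, const double * restrict src, int n)
    {
        for (int i = 0; i < n; i++)
            dst[i] = 2.0 * src[i];
    }

restrict recovers a single bit of what the programmer knew; the
control-flow and data-structure knowledge in his head is far harder
to hand over.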

Right now, there are places where better tools could be used (open
source, academic research, national security applications) but aren't.
The one place I know where appropriate tools are used is in fly-by-wire
applications, where a bulletproof subset of ada is used.

RM
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

George Macdonald wrote:
> On 5 Sep 2005 05:48:20 -0700, "Robert Myers" <rbmyersusa@gmail.com> wrote:
>
> >
> >George Macdonald wrote:
>
>
> >Better semantics would help human beings write correct code. Subsets
> >of ada exist that allow for automatic checking for correctness. TLA+
> >
> >http://research.microsoft.com/users/lamport/tla/tla.html
> >
> >was designed from the ground up for the correct specification of
> >concurrent systems. Neither is what I would call a perfect solution,
> >but either is better than Java, C, Fortran, or any extension I know.
> >
> >In an ideal world, a programmer would describe the properties of the
> >system (as in TLA+) and the compiler would implement the concurrency.
> >With a system like TLA+ available, I'm not sure why it's acceptable to
> >be guessing about what c code will do, but that's the state of the art,
> >apparently.
>
> I dunno what to make of this - the final YACC?🙂... or just another
> language for the scrap heap?
>
If it has anything to do with yacc, someone would have to explain it to
me. TLA+ is almost certain to have theoretical importance, if nothing
else.

<snip>

>
> >The fundamental problem is the loss of information that occurs when
> >what the programmer knows about the problem is reduced to a language
> >like c. In most cases the compiler can't, even in theory, decide what
> >can be done concurrently because of the limitations of the language.
> >Nor can the compiler or any other tool check for correctness.
>
> But that's wrapped up in an old problem: "correctness" in general - a
> solution to which would render programmers (coders ?) redundant eventually.
> It's all a bit too esoteric and futuristic from my POV.
>
A solution to the correctness problem would be far from rendering
programmers redundant. The formal systems I know about require a high
skill level,
and I assume that's the real reason managers complain about bugs while
continuing to use languages (like c) that invite problems.

Where being wrong could lead to damages in the hundreds of
millions of dollars or even to criminal charges, people use systems
that can guarantee correctness. It's not some unknown dark art. It's
just that, in most instances, speed is more important than accuracy, no
matter how much people claim otherwise.

One place (other than fly-by-wire airplanes) where formal systems have
been applied is in the temporal logic of distributed systems. That's
another application (bank transactions, etc.) where the cost of
mistakes is high.

Advocates of formal systems have been around for a long time (Dijkstra,
for example). They haven't made much of a dent on the world of
practical programming, and if you want to tell me that isn't likely to
change, you're probably right, but I can always hope.

RM
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

On Fri, 02 Sep 2005 03:51:57 -0700, Robert Myers wrote:

> keith wrote:
>
>>
>> Even our programming language(s) are inherently concurrent. Though I'm sure the programmers screw it up.
>
> I don't understand why something like systemc isn't used as a
> concurrent programming language (other than for hardware design).

Because programmers screw it all up by serializing explicitly parallel
processes? There have been/are parallel simulators for such things, but
they're expensive, thus not popular.

--
Keith
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

CJT wrote:
> Bill Davidsen wrote:
>
>> keith wrote:
>>
>>> Few politicians know anything about economics or capitalism (Jack Kemp
>>> was one who did). They're by nature commissars living off the fat of the
>>> land.
>>>
>> One right one wrong. Politicians live by giving the voters what they
>> want, which is often "something for nothing." No one wanted to pay for
>> better levees in New Orleans, so they didn't get upgraded.
>
>
> That's just false. People were begging for the money to make needed
> repairs, and the White House cut the funds.

That is absolutely what I said, yes. Everyone was willing to leave it
undone rather than pay for solving the problem.
>
>
>> No one wanted
>> to make mandatory evacuation more than a term, no one wanted to take
>> responsibility for shooting looters, etc.
>>
>> And FEMA is so totally incompetent that I won't begin to talk about it
>> here, I wrote an essay for my newsletter and blog, and I got so mad
>> that I had to go for a walk in the middle. And if homeland security
>> didn't know there were people dying in the Convention Center they
>> should be fired one and all, it was on network TV for days! I
>> personally think the secretary was lying about "this is the first I've
>> heard of it."


--
bill davidsen
SBC/Prodigy Yorktown Heights NY data center
http://newsgroups.news.prodigy.com
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

CJT wrote:
> David Schwartz wrote:
>
>> "CJT" <abujlehc@prodigy.net> wrote in message
>> news:43189F94.1060903@prodigy.net...
>>
>>
>>> Bill Davidsen wrote:
>>
>>
>>
>>>> One right one wrong. Politicians live by giving the voters what they
>>>> want, which is often "something for nothing." No one wanted to pay
>>>> for better levees in New Orleans, so they didn't get upgraded.
>>
>>
>>
>>> That's just false. People were begging for the money to make needed
>>> repairs, and the White House cut the funds.
>>
>>
>>
>> In your world, begging for money and being willing to pay for
>> something are comparable? They are practically opposites in mine.
>>
>> DS
>>
>>
> Nothing of the sort, but the Corps of Engineers was responsible for
> them. Navigable waterways and all that. I doubt N.O. could have
> legally done it on their own even if they'd had the money to do so.

Everybody has an excuse, don't they? Nobody could have done anything.
>
> Grow up.
>
Whiny excuses stopped cutting it in grade school. It's like a horrid
remake of _It Can't Happen Here_, if you took freshman English.

--
bill davidsen
SBC/Prodigy Yorktown Heights NY data center
http://newsgroups.news.prodigy.com
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

Robert Myers wrote:
> Bill Davidsen wrote:
>
>>Robert Myers wrote:
>>
>>
>>>As to the day when the average physics or biology graduate student
>>>writes software with meaningful concurrency without depending on
>>>libraries, that may take a while.
>>
>>And why should they? Those problems are often really linear, and
>>compilers can see where to optimize, vectorize, and parallelize far
>>better than the programmer. Particularly if the code is to be portable,
>>because the next machine may want something else done.
>>
>
>
> Because the day is coming when almost everyone will be using a box with
> multiple processors, I'm assuming that the day is coming when most code
> will be written with multiple processors in mind, and that that's how
> people will learn to code practically from day 1. That means I'm
> assuming that, out of the current chaos of competing schemes, a style
> of coding will emerge for multiple processors that is reasonably
> portable.
Far better done by computers than programmers.

--
bill davidsen
SBC/Prodigy Yorktown Heights NY data center
http://newsgroups.news.prodigy.com
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

George Macdonald wrote:
> On 2 Sep 2005 17:22:29 -0700, "Robert Myers" <rbmyersusa@gmail.com> wrote:

>>I'm still not certain that you get the fact that a compiler cannot
>>construct an unambiguous control and dataflow graph from c and not even
>>from Fortran, which is a little easier than c. If you don't have that
>>information, you can't build an auto-parallelizing compiler. If you do
>>have that information, there may still be cases that are resistant to
>>auto-parallelization, but if you can't get past the problem of control
>>and dataflow ambiguities, you can't auto-parallelize anything but the
>>most trivial of cases.
>
>
> If a human cannot find the parallelisation value in the algorithm/method at
> hand, you can't really think a set of new language constructs and semantics
> is going to help?
>
>
>>>This is also much more than a programming issue - it includes the whole
>>>system design/engineering from top to bottom. You're going to need a
>>>helluva lot more than a "profiling" compiler.
>>>
>>
>>I think I'm aware of the history of the attempts to build heroic
>>autoparallelising compilers. I have very definite ideas of why those
>>efforts failed. I could be wrong in detail, but the failure had
>>nothing to do with some BS about the "software development cycle."
>
>
> You need to think about it then. So far you've talked about graduate
> students and researchers who write their own code - this is not how the
> majority of business software is created and used. The very idea of an
> end-user having access to a compiler, *and* even a portion of the source
> code, to improve performance leads to a very rocky path.
>
Unfortunately there are whole classes of problems which simply can't be
parallelized, and they include some real-world problems. I thought this
was a case of "we don't know how yet," but it seems this can be
mathematically proven (not by me). I was working on some of these
problems 15-18 years ago, and had one of the researchers point out the
proof (from the late '50s or early '60s). So faster single threads are
still going to be useful.

Fortunately games don't seem to fall into that category.
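
A trivial sketch of the flavor (mine, not the proof itself): each
iteration feeds the next, so with nothing known about f the chain is
inherently sequential, no matter how many cores you have.

    /* Loop-carried dependence: x = f(f(f(...f(x0)...))).
       No iteration can start before the previous one finishes. */
    double iterate(double x0, int n, double (*f)(double))
    {
        double x = x0;
        for (int i = 0; i < n; i++)
            x = f(x);
        return x;
    }

For the special case of linear recurrences there are parallel-prefix
tricks, but the general chained case has no such escape.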

--
bill davidsen
SBC/Prodigy Yorktown Heights NY data center
http://newsgroups.news.prodigy.com
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

On Tue, 06 Sep 2005 22:45:52 GMT, Bill Davidsen
<davidsen@deathstar.prodigy.com> wrote:

>George Macdonald wrote:
>> On 2 Sep 2005 17:22:29 -0700, "Robert Myers" <rbmyersusa@gmail.com> wrote:
>
>>>I'm still not certain that you get the fact that a compiler cannot
>>>construct an unambiguous control and dataflow graph from c and not even
>>>from Fortran, which is a little easier than c. If you don't have that
>>>information, you can't build an auto-parallelizing compiler. If you do
>>>have that information, there may still be cases that are resistant to
>>>auto-parallelization, but if you can't get past the problem of control
>>>and dataflow ambiguities, you can't auto-parallelize anything but the
>>>most trivial of cases.
>>
>>
>> If a human cannot find the parallelisation value in the algorithm/method at
>> hand, you can't really think a set of new language constructs and semantics
>> is going to help?
>>
>>
>>>>This is also much more than a programming issue - it includes the whole
>>>>system design/engineering from top to bottom. You're going to need a
>>>>helluva lot more than a "profiling" compiler.
>>>>
>>>
>>>I think I'm aware of the history of the attempts to build heroic
>>>autoparallelising compilers. I have very definite ideas of why those
>>>efforts failed. I could be wrong in detail, but the failure had
>>>nothing to do with some BS about the "software development cycle."
>>
>>
>> You need to think about it then. So far you've talked about graduate
>> students and researchers who write their own code - this is not how the
>> majority of business software is created and used. The very idea of an
>> end-user having access to a compiler, *and* even a portion of the source
>> code, to improve performance leads to a very rocky path.
>>
>Unfortunately there are whole classes of problems which simply can't be
>parallelized, and they include some real-world problems. I thought this
>was a case of "we don't know how yet," but it seems this can
>be mathematically proven (not by me).

Certainly some of them are intractable... NP-hard. Even when you can get
some small parallelisation of numerical methods, the gain is often
inconsistent and often does not balance the extra work.
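
Amdahl's law puts a number on that imbalance: if a fraction p of the
work parallelises over n processors, the speedup is 1/((1-p) + p/n).
Parallelise half the work and even infinite processors cap you at 2x,
before counting the coordination overhead.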

> I was working on some of these
>problems 15-18 years ago, and had one of the researchers point out the
>proof (from the late '50s or early '60s). So faster single threads are
>still going to be useful.
>
>Fortunately games don't seem to fall into that category.

AFAIK the game makers have been making pessimistic noises.

--
Rgds, George Macdonald
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

George Macdonald wrote:
>
> AFAIK the game makers have been making pessimistic noises.
>

And why wouldn't they? Except for those who are interested in the
mathematical aspects of concurrency, who would want to deal with
concurrency rather than just getting a faster processor?

RM
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

"Robert Myers" <rbmyersusa@gmail.com> wrote in message
news:1126276395.189216.195100@g47g2000cwa.googlegroups.com...
> George Macdonald wrote:
>>
>> AFAIK the game makers have been making pessimistic noises.
>>
>
> And why wouldn't they? Except for those who are interested in the
> mathematical aspects of concurrency, who would want to deal with
> concurrency rather than just getting a faster processor?
>
> RM
>
oooh oooh I know this one.

Folks who aren't in denial about the future of computer architecture and
chip manufacturing?

What do I win?

🙂

del
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

Del Cecchi wrote:
> "Robert Myers" <rbmyersusa@gmail.com> wrote in message
> news:1126276395.189216.195100@g47g2000cwa.googlegroups.com...
> > George Macdonald wrote:
> >>
> >> AFAIK the game makers have been making pessimistic noises.
> >>
> >
> > And why wouldn't they? Except for those who are interested in the
> > mathematical aspects of concurrency, who would want to deal with
> > concurrency rather than just getting a faster processor?
> >
> >
> oooh oooh I know this one.
>
> Folks who aren't in denial about the future of computer architecture and
> chip manufacturing?
>
> What do I win?
>
> 🙂
>
Continuing admiration for your warm sense of humor?

I suspect that a fair number of people are still in denial.

RM
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

On Sat, 10 Sep 2005 03:12:20 -0700, Robert Myers wrote:

> Del Cecchi wrote:
>> "Robert Myers" <rbmyersusa@gmail.com> wrote in message
>> news:1126276395.189216.195100@g47g2000cwa.googlegroups.com...
>> > George Macdonald wrote:
>> >>
>> >> AFAIK the game makers have been making pessimistic noises.
>> >>
>> >
>> > And why wouldn't they? Except for those who are interested in the
>> > mathematical aspects of concurrency, who would want to deal with
>> > concurrency rather than just getting a faster processor?
>> >
>> >
>> oooh oooh I know this one.
>>
>> Folks who aren't in denial about the future of computer architecture and
>> chip manufacturing?
>>
>> What do I win?
>>
>> 🙂
>>
> Continuing admiration for your warm sense of humor?
>
> I suspect that a fair number of people are still in denial.
>
> RM

Who needs denial when they can blame it on someone else, or something else!
The day of taking responsibility for one's actions is about over. I guess
the greed is too strong; instead we all just get hype and more hype.

Gnu_Raiz
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

George Macdonald wrote:
> On 9 Sep 2005 07:33:15 -0700, "Robert Myers" <rbmyersusa@gmail.com> wrote:
>
> >George Macdonald wrote:
> >>
> >> AFAIK the game makers have been making pessimistic noises.
> >>
> >
> >And why wouldn't they? Except for those who are interested in the
> >mathematical aspects of concurrency, who would want to deal with
> >concurrency rather than just getting a faster processor?
>
> Where do you get that attitude from? It's my impression they are in a
> competitive market and they try to squeeze as much realism & "action" into
> a game as is possible with the tools available.

I don't know what attitude you're attributing to me. Concurrent
programming is much harder than sequential programming. What's
controversial about that? Nobody wants to program for concurrency, but
they're going to have to. They just don't want to. Of course they're
going to talk it down.

> As for the mathematical
> aspects of concurrency, we've just been through a err, discussion about
> that - there *are* methods which do not adapt! I don't know enough about
> game algorithms & methods to have a good opinion... certainly not enough to
> pour contempt on the experts in the field....you?

I know a lot about simulating physics, and I know a fair bit about the
nuts and bolts of graphics programming. I don't know much about the
nuts and bolts of game play, but I wasn't, in any case, pouring contempt
on anyone.

I mentioned the mathematics of concurrent programming only to indicate
that there might be some (like me) who would be interested in
concurrency in its own right, as a purely mathematical problem.

Game programmers would prefer faster sequential processors to more
cores. The future, though, is more cores.

RM
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

On 9 Sep 2005 07:33:15 -0700, "Robert Myers" <rbmyersusa@gmail.com> wrote:

>George Macdonald wrote:
>>
>> AFAIK the game makers have been making pessimistic noises.
>>
>
>And why wouldn't they? Except for those who are interested in the
>mathematical aspects of concurrency, who would want to deal with
>concurrency rather than just getting a faster processor?

Where do you get that attitude from? It's my impression they are in a
competitive market and they try to squeeze as much realism & "action" into
a game as is possible with the tools available. As for the mathematical
aspects of concurrency, we've just been through a err, discussion about
that - there *are* methods which do not adapt! I don't know enough about
game algorithms & methods to have a good opinion... certainly not enough to
pour contempt on the experts in the field....you?

--
Rgds, George Macdonald
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

On 10 Sep 2005 18:41:30 -0700, "Robert Myers" <rbmyersusa@gmail.com> wrote:

>
>George Macdonald wrote:
>> On 9 Sep 2005 07:33:15 -0700, "Robert Myers" <rbmyersusa@gmail.com> wrote:
>>
>> >George Macdonald wrote:
>> >>
>> >> AFAIK the game makers have been making pessimistic noises.
>> >>
>> >
>> >And why wouldn't they? Except for those who are interested in the
>> >mathematical aspects of concurrency, who would want to deal with
>> >concurrency rather than just getting a faster processor?
>>
>> Where do you get that attitude from? It's my impression they are in a
>> competitive market and they try to squeeze as much realism & "action" into
>> a game as is possible with the tools available.
>
>I don't know what attitude you're attributing to me. Concurrent
>programming is much harder than sequential programming. What's
>controversial about that? Nobody wants to program for concurrency, but
>they're going to have to. They just don't want to. Of course they're
>going to talk it down.

As you well know "concurrent programming" is not a general fit for all
computing methods. What's "controversial" is your non-expert opinion that
game designers/programmers are going to "talk it down", presumably because
they are just lazy... and have no competition?

>> As for the mathematical
>> aspects of concurrency, we've just been through a err, discussion about
>> that - there *are* methods which do not adapt! I don't know enough about
>> game algorithms & methods to have a good opinion... certainly not enough to
>> pour contempt on the experts in the field....you?
>
>I know a lot about simulating physics, and I know a fair bit about the
>nuts and bolts of graphics programming. I don't know much about the
>nuts and bolts of game play, but I wasn't, in any case, pouring contempt
>on anyone.

You also pretended to know something about a field which I've had a long
interest in, where you seemed to think all you needed was a textbook and a
compiler. You make a very good impersonation of contempt -- or is it just
disrespect? -- from my POV. The subject is *NOT* "game play" but game
design and programming - seems safe to assume you know very little. ;-)

>I mentioned the mathematics of concurrent programming only to indicate
>that there might be some (like me) who would be interested in
>concurrency in its own right, as a purely mathematical problem.
>
>Game programmers would prefer faster sequential processors to more
>cores. The future, though, is more cores.

Puuhhleeeeze - give it up. How many times do you want to do the same dance
to the same tune?

--
Rgds, George Macdonald