Archived from groups: comp.sys.ibm.pc.hardware.chips, comp.sys.intel
George Macdonald wrote:
> On 29 Aug 2005 16:58:42 -0700, "Robert Myers" <rbmyersusa@gmail.com> wrote:
>
> >
> >George Macdonald wrote:
> >> On 29 Aug 2005 07:33:07 -0700, "Robert Myers" <rbmyersusa@gmail.com> wrote:
> >>
> >> >George Macdonald wrote:
> >> >> On 27 Aug 2005 06:36:43 -0700, "Robert Myers" <rbmyersusa@gmail.com> wrote:
> >> >>
> >> >> >George Macdonald wrote:
> >> >> >> On 26 Aug 2005 07:37:06 -0700, "Robert Myers" <rbmyersusa@gmail.com> wrote:
> >> >> >>
> >>
> >> >> >No matter what power management trickery does for you most of the time,
> >> >> >you've got to be able to cool the thing when it's operating at peak
> >> >> >performance.
> >> >>
> >> >> Well we know that Intel stubbed its toes there at 4GHz and while the end of
> >> >> scaling seems to be accepted as imminent, it's not clear how far other mfrs
> >> >> can go, nor in what time scale. What I'm talking about is also more than
> >> >> what we normally think of as power management - more like distributed
> >> >> dynamic adaptive clocks - there may be a better term for that. 100% load
> >> >> is difficult to categorize there and of course "clock rate" becomes
> >> >> meaningless as a performance indicator.
> >> >>
> >> >> AMD has said that it intends to continue to push clock speeds on single
> >> >> core CPUs and its current offerings do not suffer anywhere near the same
> >> >> heat stress as Intel's even at "100% load"; if AMD can get to 4GHz, and
> >> >> maybe a bit beyond with 65nm, they are quite well positioned. All I'm
> >> >> saying is that I'm not ready to swallow all of Intel's latest market-speak
> >> >> on performance/watt as a new metric for measuring CPU effectiveness. They
> >> >> certainly tried to get too far up the slippery slope too quickly - it still
> >> >> remains to be seen where the real limits are and which technology makes a
> >> >> difference.
> >> >>
> >> >Let's not get into another Intel/AMD round. As it stands now, Intel is
> >> >likely to put its efforts at pushing single thread performance into
> >> >Itanium. Who knows how long that emphasis will last.
> >>
> >> It was an honest and *correct* comment on Intel's technology choices - no
> >> need to have any "round" of anything... and certainly not about Itanium.
> >>
> >Translation: Intel isn't likely to want to play. That may have no
> >bearing on AMD's decision-making whatsoever, but if AMD wants to go
> >after x86 users with a need for single-thread performance, I suspect
> >they will have the market all to themselves. The gamers who have
> >historically carried users hungry for single-threaded performance will
> >all have moved to multi-core machines, because that's where the best
> >performance will be once all of the software has been rewritten for
> >multiple cores. IBM will stay in the game because IBM wants to keep
> >Power ahead of Itanium on SpecFP, and the x86 chips you'll be looking
> >to buy, if they're available at all, will be priced like Power5, or
> >whatever it is by then. You know, that monopoly thing.
>
> Hmm, and you said you didn't want to get into another "Intel/AMD"
> round... and yet, there you go again. I was only stating a documented,
> acknowledged fact - your prognostications are not relevant.
>
> The game makers have already stated that they don't expect to get much out
> of multi-core - it looks to me like a single high-speed core is what's
> needed there for a (long) while yet. Hell, dual CPUs have been available
> for long enough and they have not piqued any gaming interest.
>
There are several different issues tangled up here:
1. How much further single-thread performance can be pushed.
2. How much those chips will cost.
3. Whether single-thread chips will dominate gaming.
4. How much of what Intel is doing is pure market-speak.
5. The purported advantage AMD has with respect to "heat stress."
Taking the issues in inverse order:
5. The "heat stress" problems Intel has are the result of having to run
NetBurst at a higher clock to get comparable performance. The P6 core
derivatives have plenty of headroom.
4. For many applications, performance per watt is the figure of merit
of greatest interest because that will determine how much muscle can be
packed into a given space. For those who need single-thread
performance, it isn't a figure of merit that's of interest. If you
really need single-thread performance, there will always be options, at
a price:
http://www.alienware.com/configurator_pages/Aurora_alx.aspx?SysCode=PC-AURORALX-SLI-D
IOW, if it's *that* important to you right now, all you have to do is
get out your checkbook.
3. It may take a while, but there really isn't anywhere else to go.
The idea of having a separate, specialized physics engine is kind of
silly because there's no reason why the physics can't be done by
another CPU core (the solution I really like, actually, is a design
like Cell, which seems to get the best of both worlds). You're going
to accuse me of shilling for Intel, but you (or someone else reading
this) might be interested in
http://www.intel.com/cd/ids/developer/asmo-na/eng/strategy/multicore/index.htm
Scroll down past the marketing bs to "Application Development and
Performance Resources."
2. Chips with high single-thread performance will continue to be
available because the market will be limited, and the easiest way to
achieve high performance is binning. Intel will have to have at least
one chip so that they can publish SPEC benchmarks, and I suppose a few
dozen will have to be on offer somewhere to keep it legit.
1. I have no idea how much further single-thread performance can be
pushed.
<snip>
>
> >> >> A different programming paradigm/style is not going to help them - its
> >> >> expectation of success is not obviously better than that of new
> >> >> semiconductor tweaks, or even new technology, coming along to allow
> >> >> another 100x speed ramp over 10 years or so. When I hear talk of new
> >> >> compiler technology to assist here, I'm naturally skeptical, based on
> >> >> past experiences.
> >> >
> >> >Well sure. The compiler first has to reverse engineer the control and
> >> >dataflow graph that's been obscured by the programmer and the
> >> >sequential language with bolted-on parallelism that was used. If you
> >> >could identify the critical path, you'd know what to do, but, even for
> >> >very repetitive calculations, the critical path that is optimized is at
> >> >best a guess.
> >>
> >> It's not static - compilers don't have the right info for the job... and
> >> compiler-compilers won't do it either.
> >>
> >That's why you do profiling.
>
> It makes me wonder sometimes when you spout some buzzword like that as
> though it is known to work well for all general-purpose code working on
> all possible data sets. <shrug> People who use compilers know this.
>
Would you have been happier if I had said "feedback-directed
optimization"? The compiler can't accurately infer dependencies from
software written in, say, C. If those ambiguities didn't exist, there
would still be the problem of determining the hot paths in the
software. Given an accurate control and data flow graph, it's pretty
easy to discover the hot paths; the main reason feedback-directed
optimization doesn't always work well is that there is no accurate
control and data flow graph to begin with. Even the most unlikely
software, like Microsoft Word, turns out to be incredibly repetitive in
the paths it takes.
RM