Archived from groups: comp.sys.ibm.pc.hardware.chips, nz.comp
GSV Three Minds in a Can wrote:
>>
>> What multi-CPU aware apps are you using then?
>
> None at the moment, because I can't afford a multi-CPU workstation to
> play with, which is why I've been drooling over the X2 chips for some
> time..
> Media encoding and rendering are the obvious apps which would soak up
> lots of cores/CPUs with ease.
>
My error. Just needs to be multi-threaded.
>
> Minor hassles, and certainly no worse than writing interrupt-driven OS
> code and trying to weave that around the regular applications. Some game
> (engine) designers are really smart people (some are just glorified
> graphics artists or authors or musicians, which is why the teams are now
> so huge).
>
> Yeah, debugging is an issue, but hey, these folks can't debug what they
> deliver today, so nothing new there.. 8>.
>
True.
>
> Nope, there (was) no other way of doing 80-bit maths on an x87 and
> keeping the (right) 80-bit values in the (right) x87 registers in the
> stack at that time. These days maybe you could do as well with SSEn,
> although I suspect not.
>
> If you want to 'fly through' a Julia set you can fly a lot deeper (in
> reasonable time - i.e. without getting into multi-word arithmetic) with
> 80-bit operands than you can with 64 bits, before you run into the
> pixellation limit (i.e. where adjacent pixels have the =same= floating
> point value to N bits, and your picture blacks out). All the C/C++
> compilers I looked at didn't really believe in 80-bit data values, and
> certainly didn't have a clue as to how to leave them in the FPU for the
> whole of a scan line.
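FWIW, gcc on x86 at least does map long double onto the x87 80-bit
format, even if keeping values in registers across a whole scan line is
another story. The extra precision is worth about 11 bits of significand
(64 vs 53 for a double), i.e. roughly 3 more decimal digits of zoom
before adjacent pixels collide. A minimal sketch of what I assume your
inner loop looks like (my code, not yours):

  /* classic escape-time loop, z = z^2 + c, all in 80-bit long double */
  static int julia_iters(long double zx, long double zy,
                         long double cx, long double cy, int max_iter)
  {
      int i;
      for (i = 0; i < max_iter && zx*zx + zy*zy < 4.0L; i++) {
          long double t = zx*zx - zy*zy + cx;
          zy = 2.0L*zx*zy + cy;
          zx = t;
      }
      return i;
  }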
>
> I've got some code sitting here which ought to be able to soak up however
> many cores I can afford to throw at it - one thread for the display
> (rate limited by how fast the user flies &/or the availability of frame
> buffers), one for the UI (may be mostly idle), and 1-N threads (1 per
> frame) doing the calculation, (allowing as how you may have to dump
> future frames if the user decides to scroll sideways). All one process,
> although nobody gets to play with anyone else's frame buffer until it is
> 'done', and there is no interaction between frame<N> and frame<N+1>
> during the calculation phase. Actually, there isn't any interaction
> between one scan line and the next either, IIRC, so I could toss
> 1024 cores at =each= frame buffer (but it isn't coded that way).
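That split maps straight onto pthreads, by the way. A minimal sketch of
the frame-per-thread part (the names and sizes are my invention):

  #include <pthread.h>

  #define NFRAMES 4            /* frames in flight at once */

  struct frame_job {
      int    frame_no;
      float *framebuf;         /* private to its worker until 'done' */
  };

  static void *calc_frame(void *arg)
  {
      struct frame_job *job = arg;
      /* ... render every scan line into job->framebuf; frames are
         independent, so no locking during the calculation phase ... */
      (void)job;
      return NULL;
  }

  static void render_batch(struct frame_job jobs[NFRAMES])
  {
      pthread_t tid[NFRAMES];
      for (int i = 0; i < NFRAMES; i++)
          pthread_create(&tid[i], NULL, calc_frame, &jobs[i]);
      for (int i = 0; i < NFRAMES; i++)
          pthread_join(tid[i], NULL);  /* buffer handed over only here */
  }

The display and UI threads just consume finished buffers; a frame the
user scrolls away from gets joined and discarded.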
>
> Chess plays pretty well on multi-CPU systems of course, and I don't see
> why an X2 is going to be any different from a two CPU workstation in
> that regard - Fritz<n> should be able to handle it right out of the box.
> Not that I'm very excited by that, except for analysis - I already can't
> beat Fritz on an XP3000+.
>
Yes. As long as problems can be segmented in a way that lets a
multi-core CPU operate on them efficiently, you get a significant
performance boost (e.g. GPUs with their multiple pipelines).
Game engines (for twitch games at least) revolve around very tight
loops. You can certainly offload a number of tasks to run in parallel;
the trick, however, is to do so without incurring a significant penalty
during execution (i.e. avoiding CPU cache misses as far as possible).
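The usual trick is to carve the data so each core stays inside its own
cache lines. A sketch (hypothetical worker, assuming pthreads):

  #include <stddef.h>

  struct slice {               /* one contiguous chunk per thread */
      float *base;
      size_t count;
  };

  static void *work(void *arg)
  {
      struct slice *s = arg;
      for (size_t i = 0; i < s->count; i++)
          s->base[i] *= 2.0f;  /* stand-in for the real tight loop */
      return NULL;
  }

Contiguous slices mean no two cores write to the same cache line, so
there's no false sharing to stall the loop.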
> For something like Morrowind, I guess you'd turn most of the processing
> power loose on the 'wandering monsters' (and sundry mobile bits of
> scenery/weather) which need animating, and where the interaction between
> the 'objects' is actually pretty small (and again you can play the 'next
> frame, frame after that' trick).
>
Yep. Indeed, when interaction between objects is minimal, processing
the surrounding environment in parallel isn't such a big deal.
For turn-based (strategy) games like chess, one would expect a
noticeable benefit from multiple cores.
The tricky situations are where there is a lot of interaction going on -
again, this is most likely in realtime games (team sports, FPS, RTS).
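Once objects interact heavily you end up serialising on shared state.
The blunt version (a sketch, hypothetical names) is a single world lock,
and it's exactly where the parallelism leaks away:

  #include <pthread.h>

  struct entity { float x, y; };

  static pthread_mutex_t world_lock = PTHREAD_MUTEX_INITIALIZER;

  /* every thread touching shared world state queues up here */
  static void move_entity(struct entity *e, float dx, float dy)
  {
      pthread_mutex_lock(&world_lock);
      e->x += dx;
      e->y += dy;
      pthread_mutex_unlock(&world_lock);
  }

Finer-grained per-region locks help, at the price of exactly the
debugging fun mentioned above.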
> Wait and see .. however history says that game designers have never let
> the complications of technology stand in the way of consuming all the PC
> they can find, and then some - and in 5(?) years' time I bet you'll have
> trouble buying a single-core desktop CPU chip.
>
As my work colleague says, "threads are evil", and life is sooo much
easier without them (Google the phrase - there are a few interesting links).
I have no doubt game developers will eventually learn to make the most
of multi-core CPUs. And I agree - I imagine that once more programs
appear which take advantage of multiple cores, people will wonder how we
ever made do with single-core CPUs.
However, I still disagree with your initial statement about developers'
current ability to handle multi-core CPUs.