Is Itanium the first 64-bit casualty?

Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel

The little lost angel wrote:

> On Tue, 06 Jul 2004 05:01:43 -0400, George Macdonald
> <fammacd=!SPAM^nothanks@tellurian.com> wrote:
>
>>Now if we could just get some agreement on what "firmware" means....
>>oops!
>
> It means hardware since firmware means it isn't soft ;PpPpPPp

Not hardware. ...more like Tupperware. ;-)

--
Keith
 
Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel

"Greg Lindahl" <lindahl@pbm.com> wrote in message
news:40ea4784$1@news.meer.net...
> In article <pan.2004.07.06.05.49.22.522052@zaitcev.lan>,
> Pete Zaitcev (OTID1) <ot16a6ca05878e44c0@comcast.net> wrote:
> >Greg, a number of PCI masters in the field do not support DAC.
> >This is a huge problem with EM64T right now. Now it might not
> >be such a big problem for an HPC weenie who only has to deal
> >with a very limited set of hardware (most of which is high end
> >anyway).
>
> Pete, this sub-sub-thread is about the fact that 32-bit PCI _can_ have
> 64-bit addressing. I was not asserting that there is no problem,
> I was laughing at absolute statements on comp.arch that happen
> to be absolutely wrong, a fairly common issue.

I am guilty of not knowing about DAC, but that doesn't change my answer,
which was based on the actual behavior of a popular x86 OS.

As long as a non-trivial number of PCI cards or bridges don't support DAC,
OSes will have to deal with the case where it's not available. Windows and
Linux both do a very sensible thing when this occurs, though obviously
buying all DAC-capable hw is the best solution.
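
The sensible thing both OSes do is, roughly, to honor a per-device DMA address mask and "bounce" any buffer the device can't reach through memory below 4GB. Here is a toy sketch of that decision; the names, masks, and addresses are mine for illustration, not any real kernel's API:

```c
#include <stdint.h>

#define SAC_MASK 0x00000000FFFFFFFFull  /* single address cycle: 32-bit reach */
#define DAC_MASK 0xFFFFFFFFFFFFFFFFull  /* dual address cycle: 64-bit reach  */

/* A buffer known to live below 4GB, reserved for bouncing. */
static const uint64_t bounce_buf = 0x0000000000100000ull;

/* If the buffer's bus address fits under the device's DMA mask, the
   device gets it directly; otherwise the OS must copy the data through
   the low bounce buffer and hand the device that address instead. */
uint64_t dma_map(uint64_t bus_addr, uint64_t dev_mask, int *bounced)
{
    if ((bus_addr & ~dev_mask) == 0) {
        *bounced = 0;
        return bus_addr;   /* device can reach it: no copy needed */
    }
    *bounced = 1;          /* caller must copy to/from bounce_buf  */
    return bounce_buf;
}
```

A DAC-capable card (64-bit mask) never bounces; a legacy card forces a copy for every buffer above 4GB, which is exactly the overhead under discussion.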

> I *am* an HPC weenie, but that fact has nothing to do with 32-bit PCI with
> or without 64-bit addressing.

No, but it means you probably have a limited view of the range of hardware
capabilities that a modern OS (and IT dept) has to deal with. Mandating
that all systems have DAC-capable hardware may work in the HPC world, but
the very concept is laughable to an IT manager or OS developer.

S

--
Stephen Sprunk "Those people who think they know everything
CCIE #3723 are a great annoyance to those of us who do."
K5SSS --Isaac Asimov
 
Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel

email4rjn@yahoo.com (Bob Niland) writes:

>> ... a number of PCI masters in the field do not support DAC.
>
> Does PCI Express fix this problem by mandating
> 64-bit compliance?

It does, but vendors just ignore it (case in point: popular
GPUs).

I fully expect there will even be devices with support for less than
32-bit addressing, as is common with many "PCI sound chips". Vendors will
just add a PCI-Express bridge, but not fix the core chip.


>> This is a huge problem with EM64T right now.
>
> Since we haven't seen 64-bit benchmarks yet, there
> could be "huger" problems. But in any case, not many
> EM64T systems will be run in 64-bit mode this year,

What makes you think so? A significant portion
of AMD Opterons seem to run 64-bit kernels; why
should it be different with Intel?

> and few of those with over 4GB. By year end, Intel
> will likely have fixed this oversight (along with
> some others they missed when they cloned AMD64).

It's 3.2+GB, not 4GB; see my other messages in
this thread.

-Andi
 
Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel

The article to which this is a response
never showed up on Google.

AK: > I fully expect there will even be devices with support
> for less than 32-bit addressing, as is common with many "PCI
> sound chips". Vendors will just add a PCI-Express
> bridge, but not fix the core chip.

I'd like to think that the bridges would be fully
compliant, and mask the legacy junk behind them,
but industries do have a way of defeating the goals
of their own standards initiatives.

>> Since we haven't seen 64-bit benchmarks yet, there
>> could be "huger" problems. But in any case, not many
>> EM64T systems will be run in 64-bit mode this year,

> What makes you think so? A significant portion
> of AMD Opterons seem to run 64-bit kernels; why
> should it be different with Intel?

Any one of these could significantly impair EM64T
adoption (in 64-bit mode):
- CPUs late or not available in quantity
- chipset problems that cause further slips
- system price uneconomic (even for 32-bit)
- desired clock speeds have major thermal issues
- CPUs run no faster in 64-bit mode
- incomplete AMD64 cloning delays software
- CPUs actually run slower in 64-bit mode (e.g. IOMMU)

It's been over a week since Nocona intro, and we're
still waiting for useful 64b test reports. I don't know
how many of the above speculations will turn out true,
but I just have a hunch that for end users needing to
run 64-bit this year, AMD64 chips will be more attractive
than the first generation of EM64T chips.

>> and few of those with over 4GB. By year end, Intel
>> will likely have fixed this oversight (along with
>> some others they missed when they cloned AMD64).

> It's 3.2+GB, not 4GB, ...

So a 4GB config would get tagged by the IOMMU lapse?

> ... see my other messages in this thread.

Not found on Google in the xpost groups of this header.
I did find some of your DMA remarks in Linux groups though.

--
Regards, Bob Niland mailto:name@ispname.tld
http://www.access-one.com/rjn email4rjn AT yahoo DOT com
NOT speaking for any employer, client or Internet Service Provider.
 
Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel

In comp.arch The little lost angel <a?n?g?e?l@lovergirl.lrigrevol.moc.com> wrote:
> On Tue, 06 Jul 2004 05:01:43 -0400, George Macdonald
> <fammacd=!SPAM^nothanks@tellurian.com> wrote:
>
> >Now if we could just get some agreement on what "firmware" means.... oops!
>
> It means hardware since firmware means it isn't soft ;PpPpPPp
>

it means software that has cement shoes on 😛

--
Sander

+++ Out of cheese error +++
 
Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel

"Sander Vesik" <sander@haldjas.folklore.ee> wrote in message
news:1089207460.830124@haldjas.folklore.ee...
> In comp.arch The little lost angel <a?n?g?e?l@lovergirl.lrigrevol.moc.com> wrote:
> > On Tue, 06 Jul 2004 05:01:43 -0400, George Macdonald
> > <fammacd=!SPAM^nothanks@tellurian.com> wrote:
> >
> > >Now if we could just get some agreement on what "firmware" means.... oops!
> >
> > It means hardware since firmware means it isn't soft ;PpPpPPp
> >
>
> it means software that has cement shoes on 😛
>
> --
> Sander
>
> +++ Out of cheese error +++

Software for lawyers, or maybe a cure for cellulite.

Peter
 
Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel

> If you find that you can't use any of the range- and type-checked languages,
> for whatever reason, then you probably wouldn't be happy with a non-flat
> memory space in hardware, either.

Agreed.

> If you can use those languages, then the segments that were being
> discussed will be completely invisible to you,

Agreed.

> other than for the fact that your software might possibly be a little
> faster, because said range checking and object relocation will be getting
> some hardware assistance.

Here, tho, I have to disagree: I can't think of any type-safe language where
the compiler would be able to make good use of segments. You might be able
to keep most objects in a flat space and then map every array to a segment
(thus benefitting from the segment's bounds check), but it's probably going
to be too much trouble considering the risk that it will suffer pathological
degradation on programs that use a large number of small arrays (because the
hardware would have a hard time managing efficiently thousands of actively
used segments).

> As I understand it, the contention was whether or not it was possible or
> useful to run C (or C++) on such hardware. I suspect that quite large
> chunks of application-level C (and C++) would be perfectly fine, since the
> restrictions involved are the same as those needed to avoid most compiler
> warning messages.

In theory, yes. In practice, it's very difficult for the compiler to
figure out how to compile the thing (I gather that one of the difficulties is
to figure out whether "foo *bar" is a pointer to an object `foo', a pointer
to an array of `foo's, or a pointer to one of the elements of an array of
`foo's).
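
The ambiguity is easy to see in a few lines of C (`foo` here is just an illustrative struct): all three cases carry the same static type, so a compiler trying to attach a distinct bounds-checked segment to each object has nothing in the declaration to go on.

```c
#include <stddef.h>

typedef struct foo { int x; } foo;

/* Same type `foo *` in all three roles; returns a checksum so the
   demo can be checked. */
int demo(void)
{
    foo single = { 1 };
    foo arr[4] = { {10}, {20}, {30}, {40} };

    foo *p1 = &single;   /* pointer to a single object              */
    foo *p2 = arr;       /* pointer to the start of an array        */
    foo *p3 = &arr[2];   /* pointer into the middle of an array     */

    /* p3[-1] is legal C precisely because p3 points mid-array;
       bounds derived from "one object of type foo" would reject it. */
    return p1->x + p2[3].x + p3[-1].x;   /* 1 + 40 + 20 */
}
```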


Stefan
 
Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel

In article <d97c4731.0407070747.2fb86f4a@posting.google.com>,
Bob Niland <email4rjn@yahoo.com> wrote:

> - CPUs run no faster in 64-bit mode

Given the additional registers and better calling sequence, there's
significant additional performance to be had. The IOMMU problem
doesn't affect apps that don't do very much I/O.

>It's been over a week since Nocona intro, and we're
>still waiting for useful 64b test reports.

Given the short timeframes and teething problems for the hardware
(does anyone have a PCI Express graphics card they can lend me?),
I'm not surprised at all...

Followups to a group that I read.

-- greg
 
Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel

"Bob Niland" <email4rjn@yahoo.com> wrote in message
news:d97c4731.0407070747.2fb86f4a@posting.google.com...
> It's been over a week since Nocona intro, and we're
> still waiting for useful 64b test reports. I don't know
> how many of the above speculations will turn out true,
> but I just have a hunch that for end users needing to
> run 64-bit this year, AMD64 chips will be more attractive
> than the first generation of EM64T chips.

The public trial of XP64 doesn't currently run on Intel's chips (though the
closed beta program's current build does -- and reporting performance is
banned):
http://www.infoworld.com/article/04/07/06/HNwindowsnocona_1.html

Very few AMD64 benchmarks have been run on Linux, despite that being the
majority of 64-bit x86 software currently in use. The XP64 trial version is
uniformly slower than XP32 in the few benchmarks that have been run (usually
by gaming sites), so there's not much reason to expect it to be adopted
before the final release in Q4, even among AMD owners.

S

--
Stephen Sprunk "Those people who think they know everything
CCIE #3723 are a great annoyance to those of us who do."
K5SSS --Isaac Asimov
 
Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel

In article <2776600ff278f743552a72e1d2433eb8@news.teranews.com>,
Stephen Sprunk <stephen@sprunk.org> wrote:

>Very few AMD64 benchmarks have been run on Linux, despite that being the
>majority of 64-bit x86 software currently in use.

You might want to be more specific as to what benchmarks you're
referring to, as I know of a lot of HPC benchmarks that have been run
on AMD64 Linux.

Examples: all Linux: http://www.pc.ibm.com/ww/eserver/opteron/benchmarks/,
mixed Solaris and Linux: http://www.sun.com/servers/entry/v20z/benchmarks.html

AMD and IBM make regular SPEC submissions on 64-bit Linux.

Followups to a group that I actually read.

-- greg
 
Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel

email4rjn@yahoo.com (Bob Niland) writes:

>> ... a number of PCI masters in the field do not support DAC.
>
> Does PCI Express fix this problem by mandating
> 64-bit compliance?

"Legacy Endpoints" (which are basically PCI v2.3 compliant devices) are
not required to be able to generate addresses above 4GB, according to
the PCI-E spec.

PCI Express Endpoints _are_ required to support >4GB addressing.

(PCI-Express base standard v1.0a, sections 1.3.2.1 & 1.3.2.2, page 32).

> I'm sure that PCI-E also fixes the voltage problem
> (5v-tolerant 3.3v universal PCI cards are common,
> but universal slots are uneconomical, with the result
> that 66MHz and faster PCI slots are rare in retail PCs,
> even though some of us could use the speed). And, without
> having seen the spec, I'll bet PCI-E fixes the clocking
> problem too (the max speed of shared slots is limited
> to the slowest installed card).

PCI-Express does not use a shared bus; it uses point-to-point links
and a central switch. So there you go.

Regards,


Kai
 
Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel

Stefan Monnier wrote:

[SNIP]

> Here, tho, I have to disagree: I can't think of any type-safe language where
> the compiler would be able to make good use of segments. You might be able
> to keep most objects in a flat space and then map every array to a segment
> (thus benefitting from the segment's bounds check), but it's probably going
> to be too much trouble considering the risk that it will suffer pathological
> degradation on programs that use a large number of small arrays (because the
> hardware would have a hard time managing efficiently thousands of actively
> used segments).

Perhaps no more than the risk posed by offloading the problem onto the
TLB/VM code. You have even less control over that as it's basically at
the mercy of the workload at run-time. 🙁

Cheers,
Rupert
 
Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel

Stefan Monnier wrote:
> Here, tho, I have to disagree: I can't think of any type-safe language where
> the compiler would be able to make good use of segments. You might be able
> to keep most objects in a flat space and then map every array to a segment
> (thus benefitting from the segment's bounds check), but it's probably going
> to be too much trouble considering the risk that it will suffer pathological
> degradation on programs that use a large number of small arrays (because the
> hardware would have a hard time managing efficiently thousands of actively
> used segments).

I admit that the possibility of pathological behaviour exists, but
it does on every platform in one way or another. Who would have
thought that database code could have such long runs of loopless
code that it trashed the decoded instruction caches of some
processors, putting the decoder on the critical path?

To sort-of answer the question, I know of at least one
language/compiler combination (Inria's SmartEiffel) that manages
all allocations of small objects through typed pools, so that
system memory requests are always at least a whole page. This is
for a language with strict bounds checking, so I assume that some
of the same issues must hold. I dare say that other strongly
typed languages could do the same. It wouldn't be hard to do
something similar for C, either, just that the only "type"
information available to the allocator at run time is the object size.

> In theory, yes. In practice, it's very difficult for the compiler to
> figure out how to compile the thing (I gather that one of the difficulties is
> to figure out whether "foo *bar" is a pointer to an object `foo' or
> a pointer to an array of `foo's or a pointer to one of the elements of an
> array of `foo's).

I think that the last of these is the only one that could be
tricky, and without too much thought that seems to fit the plan
too. There is no difference between a pointer to an object foo
and a pointer to an array of foos, just the first case has an
array length of one (which could be checked). If your pointers
are compound things containing base and index, then the pointer to
a specific element should still work too.

Cheers,

--
Andrew
 
Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel

Stephen Sprunk wrote:

> Very few AMD64 benchmarks have been run on Linux, despite that being the
> majority of 64-bit x86 software currently in use. The XP64 trial version is
> uniformly slower than XP32 in the few benchmarks that have been run (usually
> by gaming sites), so there's not much reason to expect it to be adopted
> before the final release in Q4, even among AMD owners.

Either they improved the beta-version, or there are some programs that can benefit
already (AMD reports 57% improvement with Win64 beta):
http://www.amd.com/us-en/Corporate/VirtualPressRoom/0,,51_104_543~87018,00.html

Regards,
Evgenij

--

__________________________________________________
*science&fiction*free programs*fine art*philosophy:
http://sudy_zhenja.tripod.com
----------remove hate_spam to answer--------------
 
Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel

> I admit that the possibility of pathological behaviour exists, but it does
> on every platform in one way or another.

But it's yet-another-source of yet-another-pathological behavior.
It had better give some substantial benefits. AFAIK, the only benefit is
array-bounds checking "for free". Whether that's substantial or not
depends on the circumstance: in many cases ABC can be optimized away.
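
A standard example of what "optimized away" means here: when the loop bound is known up front not to exceed the array length, one check before the loop subsumes all the per-element checks, so a hardware-assisted per-access check buys nothing. A minimal sketch (names mine):

```c
#include <stddef.h>

/* Per-access bounds checking: one compare on every element access,
   which is the check a segment limit would perform in hardware. */
long sum_checked(const int *a, size_t len, size_t n)
{
    long s = 0;
    for (size_t i = 0; i < n; i++) {
        if (i >= len)
            return -1;       /* out of bounds: caught per access */
        s += a[i];
    }
    return s;
}

/* Hoisted: one check up front proves every a[i] below is in bounds,
   so the loop body carries no checks at all. */
long sum_hoisted(const int *a, size_t len, size_t n)
{
    if (n > len)
        return -1;
    long s = 0;
    for (size_t i = 0; i < n; i++)
        s += a[i];           /* provably safe: i < n <= len */
    return s;
}
```

Both functions agree on every input; the compiler transformation from the first form to the second is the ABC elimination in question.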

> To sort-of answer the question, I know of at least one language/compiler
> combination (Inria's SmartEiffel) that manages all allocations of small
> objects through typed pools, so that system memory requests are always at
> least a whole page.

Sure, that's pretty common, but it has nothing to do with segments.
Allocating non-array objects in segments (grouped or not) is useless since
the bounds-checking is unnecessary: you might as well allocate it in
a flat space and save the cost of managing segment descriptors.

> I think that the last of these is the only one that could be tricky, and
> without too much thought that seems to fit the plan too. There is no
> difference between a pointer to an object foo and a pointer to an array of
> foos, just the first case has an array length of one (which could be
> checked).

Most type-safe implementations of arrays need to keep the array size
somewhere at run time, so a single element and an array of size 1 are not
represented the same way.

> If your pointers are compound things containing base and index,
> then the pointer to a specific element should still work too.

But such a representation of pointers is unnecessarily costly for the usual
case of a pointer to a single object. Some C compilers use such tricks to
get a "safe" implementation, but the runtime cost is very significant
(we're talking more than a factor 2 slowdown).
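
For concreteness, the kind of compound representation being costed here, sketched as a C "fat pointer" (layout and names are mine): carrying base, length, and index makes every pointer two to three times the size of a bare one, which is where the register and memory pressure comes from.

```c
#include <stddef.h>

/* A bounds-carrying "fat" pointer: three words instead of one. */
typedef struct {
    int    *base;  /* start of the underlying array                  */
    size_t  len;   /* number of elements                             */
    size_t  idx;   /* which element this pointer currently points at */
} fatptr;

/* Bounds-checked read: the failure is reported, not undefined. */
int fat_read(fatptr p, int *out)
{
    if (p.idx >= p.len)
        return 0;
    *out = p.base[p.idx];
    return 1;
}

/* Pointer arithmetic just moves the index; base and len ride along,
   which is exactly the extra data movement being paid for. */
fatptr fat_add(fatptr p, size_t k)
{
    p.idx += k;
    return p;
}
```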


Stefan
 
Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel

Stefan Monnier <monnier@iro.umontreal.ca> writes:
> > If your pointers are compound things containing base and index,
> > then the pointer to a specific element should still work too.
>
> But such a representation of pointers is unnecessarily costly for the usual
> case of a pointer to a single object. Some C compilers use such tricks to
> get a "safe" implementation, but the runtime cost is very significant
> (we're talking more than a factor 2 slowdown).

Pointers which are run-time distinguished from integers, and which
contain bounds information are essential to computers and languages
that are secure. We've known this for decades, and the ideas have
been tested thoroughly in implementations such as the lisp machine.

A more recent idea is the inclusion of small floating point length
descriptors within the pointer itself. The idea is that every pointer
contain, in addition to a full-address location in memory, a sixteen
bit or so origin and length descriptor. For example, such a
descriptor could consist of three fields, a block-size field (exponent
of the length), a count of the number of blocks in the object
(mantissa of the length) and a finger field (mechanism for identifying
the origin of the object in units of blocks).

With such a scheme, we can represent block-size aligned data
structures with high efficiency of memory use (1.5% wasted memory
space), guarantee that no references take place outside the object,
allow pointer arithmetic within the object, and do fine grained
allocation at the word level for objects less than 32 words.

Here's what an object identifier and pointer to a sub-word of the
object look like:


   6     5     5           42-64 bits
******|*****|*****|******** ---- *******************
   E     B     F        pointer within object


Let C = the pointer field with the bottom E+1 bits zeroed


This tells you that the object is of size B x 2**E words
That the object is word aligned on memory boundaries of size 2**E
That the beginning of the object is at location C - F x 2**E
That the end of the object is at location C + (B - F) x 2**E
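
To make the decode arithmetic concrete, here is a sketch in C of the formulas above. Field widths follow the layout; the sample values, the names, and the simplifying assumption that C clears only the low block-offset bits of a word address are mine. The end of the object falls at start + size, i.e. C plus (B - F) blocks.

```c
#include <stdint.h>

typedef struct {
    unsigned E;    /* block-size exponent: blocks of 2**E words         */
    unsigned B;    /* block count: the object is B blocks long          */
    unsigned F;    /* finger: C lies F blocks past the object's start   */
    uint64_t ptr;  /* full word address somewhere inside the object     */
} descr;

static uint64_t blk(descr d)    { return (uint64_t)1 << d.E; }
/* C: the pointer field with its low block-offset bits cleared. */
static uint64_t cfield(descr d) { return d.ptr & ~(blk(d) - 1); }

uint64_t obj_size(descr d)  { return (uint64_t)d.B * blk(d); }         /* B x 2**E     */
uint64_t obj_start(descr d) { return cfield(d) - (uint64_t)d.F * blk(d); } /* C - F x 2**E */
uint64_t obj_end(descr d)   { return obj_start(d) + obj_size(d); }     /* start + size */
int      in_bounds(descr d) { return d.ptr >= obj_start(d) && d.ptr < obj_end(d); }
```

E.g. E=3, B=4, F=2 describes a 32-word object; a pointer at word 0x1013 decodes to an object spanning [0x1000, 0x1020), and any reference outside that span is rejected.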


In my opinion, schemes such as this combine the best of flat address
space and object bounds checking.
 
Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel

>> Here, tho, I have to disagree: I can't think of any type-safe language where
>> the compiler would be able to make good use of segments. You might be able
>> to keep most objects in a flat space and then map every array to a segment
>> (thus benefitting from the segment's bounds check), but it's probably going
>> to be too much trouble considering the risk that it will suffer pathological
>> degradation on programs that use a large number of small arrays (because the
>> hardware would have a hard time managing efficiently thousands of actively
>> used segments).

> Perhaps no more than the risk posed by offloading the problem onto the
> TLB/VM code. You have even less control over that as it's basically at
> the mercy of the workload at run-time. 🙁

Maybe. I guess it could be pretty comparable (but I don't believe in the
"more control" because the user code would only control segment "pointers"
while the base-address, size and access rights would most likely not be
loaded/unloaded explicitly).

But would segments save you from using paging, really?


Stefan
 
Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel

"Evgenij Barsukov" <e-barsoukov2_hate_spam@ti.com> wrote in message
news:ccjn2c$g1p$1@home.itg.ti.com...
> Stephen Sprunk wrote:
> > Very few AMD64 benchmarks have been run on Linux, despite that being the
> > majority of 64-bit x86 software currently in use. The XP64 trial version is
> > uniformly slower than XP32 in the few benchmarks that have been run (usually
> > by gaming sites), so there's not much reason to expect it to be adopted
> > before the final release in Q4, even among AMD owners.
>
> Either they improved the beta-version, or there are some programs that can
> benefit already (AMD reports 57% improvement with Win64 beta):
> http://www.amd.com/us-en/Corporate/VirtualPressRoom/0,,51_104_543~87018,00.html

The beta version is several hundred builds ahead of the public trial
version, but other than that AMD press release I've never seen any
benchmarks of the beta; I assume AMD got a waiver of the NDA because it was
good news they wanted to report.

According to a closed MS newsgroup, the public trial version will be updated
Real Soon Now to catch up with the beta. That's good news, as the original
release was in Sep 2003 and there've been no updates so far; the most
popular topic on the newsgroup is how to uninstall XP64 because people are
frustrated with the lack of progress in performance and driver support.

S

--
Stephen Sprunk "Those people who think they know everything
CCIE #3723 are a great annoyance to those of us who do."
K5SSS --Isaac Asimov
 
Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel

Stefan Monnier wrote:

[SNIP]

>>Perhaps no more than the risk posed by offloading the problem onto the
>>TLB/VM code. You have even less control over that as it's basically at
>>the mercy of the workload at run-time. 🙁
>
>
> Maybe. I guess it could be pretty comparable (but I don't believe in the
> "more control" because the user code would only control segment "pointers"
> while the base-address, size and access rights would most likely not be
> loaded/unloaded explicitly).
>
> But would segments save you from using paging, really?

Stuff like sparse addressing would be "hard" without paging or
something akin to it. In most other respects I think a friendly
segmentation scheme would be adequate. :)

Cheers,
Rupert
 
Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel

In article <1089305308.466753@teapot.planet.gong>,
Rupert Pigott <roo@try-removing-this.darkboong.demon.co.uk> wrote:
>
>Stuff like sparse addressing would be "hard" without paging or
>something akin to it. In most other respects I think a friendly
>segmentation scheme would be adequate. :)

Can you give one SOLID reason why sparse addressing should be provided
by hardware and privileged code? Note that, as usual, I am not talking
about implementing the current methods in applications and libraries,
but about providing equivalent functionality and efficiency.


Regards,
Nick Maclaren.
 
Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel

>> > If your pointers are compound things containing base and index,
>> > then the pointer to a specific element should still work too.
>>
>> But such a representation of pointers is unnecessarily costly for the usual
>> case of a pointer to a single object. Some C compilers use such tricks to
>> get a "safe" implementation, but the runtime cost is very significant
>> (we're talking more than a factor 2 slowdown).

> Pointers which are run-time distinguished from integers, and which
> contain bounds information are essential to computers and languages
> that are secure.

Are they? Java's integers are difficult to distinguish from pointers at
run-time, unless you have sufficient context information. Same thing with
Modula-3 and many/most other statically type safe languages.

> We've known this for decades, and the ideas have been tested thoroughly in
> implementations such as the lisp machine.

Sure there's lots of funny ways to do runtime checks, but when compiling C,
it's very difficult to make them cost less than a factor of 2 slowdown,
because you're running against a "no check, no runtime info" base
performance and because there are many different situations you have to deal
with, such as "this int pointer might be pointing at the 3rd element of an
array which is itself the second field of a struct which is itself the 25th
element of an array". Given such constraints, it's difficult to pack all
the necessary runtime data into the usual space taken by a pointer, so you
typically end up at least doubling the size of a pointer, using up that much
more registers and memory (and instructions to move them around), ...

With a bit of help from the programmer, it's usually easier (tho by no
means trivial), see the Cyclone project for an example.


Stefan
 
Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel

>Can you give one SOLID reason why sparse addressing should be provided
>by hardware and privileged code?

er, could I give you a reason that was full of holes?

--
mac the naïf
 
Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel

In article <cck8rk$k3g$1@pcls4.std.com>, Alex Colvin <alexc@std.com> wrote:
>>Can you give one SOLID reason why sparse addressing should be provided
>>by hardware and privileged code?
>
>er, could I give you a reason that was full of holes?

Yes, but I will simply shoot it down through the holes 🙂


Regards,
Nick Maclaren.
 
Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel

In article <jwvhdsjn6cb.fsf-monnier+comp.arch@gnu.org>,
monnier@iro.umontreal.ca says...
>
>Here, tho, I have to disagree: I can't think of any type-safe language where
>the compiler would be able to make good use of segments. You might be able
>to keep most objects in a flat space and then map every array to a segment
>(thus benefitting from the segment's bounds check), but it's probably going
>to be too much trouble considering the risk that it will suffer pathological
>degradation on programs that use a large number of small arrays (because the
>hardware would have a hard time managing efficiently thousands of actively
>used segments).
>
After using Pascal & Algol on a Unisys NX (née A Series) machine, I really
have to disagree. Both make "segments" for every array dimension, and
record (structure) that is allocated. For the most part, all arrays and
structures are NOT allocated on the return stack. The OS and the languages
really don't have a problem with this. Array elements are allocated usually
only when they are "touched", which addresses some of the sparse array
questions.
>
>In theory, yes. In practice, it's very difficult for the compiler to
>figure out how to compile the thing (I gather that one of the difficulty is
>to figure out whether "foo *bar" is a pointer to an object `foo' or
>a pointer to an array of `foo's or a pointer to one of the elements of an
>array of `foo's).
>
Yes, for a C compiler, because C (and the programs written in it) assume
you can do differences on pointers, that a pointer is a single "word", that
it can fit into some kind of integer, that the address space is flat, etc,
etc.

Use a language that doesn't let you assume those things (Algol or Pascal),
and a pointer is a pointer.

For instance, if every pointer had to use a segment for the object it
addresses, you wouldn't have the above problem, if it addressed a single
object, the length of the segment would be enough to contain just one
object. Of course, this leads to a proliferation of segments, which is a
real shortcoming of the x86 architecture.

- Tim

NOT speaking for Unisys.
 
Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel

Paul Gunson <spammersrot@mybasement.com> writes:

> i was shocked to learn that it still required a floppy drive to
> install XP-64.

I've been wondering why all new PCs seem to have them the last five
years 🙂

-kzm
--
If I haven't seen further, it is by standing in the footprints of giants