Archived from groups: microsoft.public.windowsxp.general, microsoft.public.windowsxp.perform_maintain
In article <eCJf1XwIFHA.3332@TK2MSFTNGP15.phx.gbl>, David Candy <.> wrote:
>And the CPU was designed to page, the motherboard chipsets were designed
>to page. Windows was designed to suit the hardware.
Fire up Task Manager and pick View/Select Columns. You'll be
able to add counters for memory-related usage and see which programs
are doing what. There's a lot there I don't understand for Windows, but
to me the interesting data is "PF Delta", which is how many times in
the update interval an application needed a page that wasn't in its
cache.
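If you'd rather pull that number yourself, the PSAPI call
GetProcessMemoryInfo exposes a cumulative PageFaultCount per process;
sample it twice and the difference is the same delta Task Manager
shows. A minimal sketch in C (assuming a Win32 build linked with
psapi.lib; note the counter lumps soft and hard faults together):

/* Sketch: sample a process's cumulative page-fault count twice and
 * report the delta, roughly Task Manager's "PF Delta" column. */
#include <windows.h>
#include <psapi.h>
#include <stdio.h>

int main(void)
{
    PROCESS_MEMORY_COUNTERS pmc1, pmc2;
    HANDLE self = GetCurrentProcess();

    GetProcessMemoryInfo(self, &pmc1, sizeof(pmc1));
    Sleep(1000);                       /* one Task Manager update interval */
    GetProcessMemoryInfo(self, &pmc2, sizeof(pmc2));

    /* PageFaultCount is cumulative and counts soft + hard faults */
    printf("PF Delta over 1s: %lu\n",
           pmc2.PageFaultCount - pmc1.PageFaultCount);
    return 0;
}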
If there is a page fault it means that a page your application needs
isn't in the VM mapping tables, and the OS takes over and updates the
VM tables, bringing a page in if necessary. If you are short on real
memory that may mean forcing a physical write of some other page to
make room, so there were two disk I/O ops instead of zero. Even if a
page is in memory, a PF takes CPU time away from useful work and slows
down your app. A "soft PF" means that the page was in memory and no
I/O was necessary to resolve it; a "hard PF" means that I/O was
necessary. (Anyone who can correct my terminology for Windows, please
chip in.)
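You can watch the soft-fault case on demand: commit a buffer with
VirtualAlloc and the first touch of each page is a demand-zero fault
that needs no disk I/O. A rough sketch (same counter as above; exact
counts can vary a little with page size and read-ahead):

/* Sketch: first touch of freshly committed pages raises the fault
 * count by roughly one per page, with no disk I/O (soft faults). */
#include <windows.h>
#include <psapi.h>
#include <stdio.h>

int main(void)
{
    SYSTEM_INFO si;
    PROCESS_MEMORY_COUNTERS before, after;
    SIZE_T bytes = 64 * 1024 * 1024;   /* 64 MB */
    BYTE *buf;
    SIZE_T i;

    GetSystemInfo(&si);                /* si.dwPageSize is 4096 on x86 */
    buf = VirtualAlloc(NULL, bytes, MEM_COMMIT, PAGE_READWRITE);
    if (!buf) return 1;

    GetProcessMemoryInfo(GetCurrentProcess(), &before, sizeof(before));
    for (i = 0; i < bytes; i += si.dwPageSize)
        buf[i] = 1;                    /* first touch faults each page in */
    GetProcessMemoryInfo(GetCurrentProcess(), &after, sizeof(after));

    printf("pages touched: %lu, faults taken: %lu\n",
           (unsigned long)(bytes / si.dwPageSize),
           after.PageFaultCount - before.PageFaultCount);
    VirtualFree(buf, 0, MEM_RELEASE);
    return 0;
}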
As someone else described, each program has a "working set", the
minimum number of pages it needs to do its job with essentially zero
page faults (except at startup). The total size of the program is
frequently many times the working set size. As long as the total of
the working sets for all running processes is less than the total real
memory, you've got a system that is running efficiently.
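Windows will even let a process empty its own working set, which makes
the idea easy to see: after a trim the pages are still in RAM, just no
longer mapped into the working set, so on an unloaded machine the
re-touches resolve as soft faults with no disk I/O. A sketch, using
the documented trick of passing -1 for both limits to
SetProcessWorkingSetSize:

/* Sketch: build a working set, trim it, then re-touch the buffer;
 * the pages were still in RAM, so they come back as soft faults. */
#include <windows.h>
#include <psapi.h>
#include <stdio.h>

int main(void)
{
    PROCESS_MEMORY_COUNTERS pmc;
    SIZE_T bytes = 16 * 1024 * 1024;   /* 16 MB */
    BYTE *buf = VirtualAlloc(NULL, bytes, MEM_COMMIT, PAGE_READWRITE);
    SIZE_T i;
    DWORD before;

    if (!buf) return 1;
    for (i = 0; i < bytes; i += 4096) buf[i] = 1;   /* build a working set */

    GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc));
    printf("working set before trim: %lu KB\n",
           (unsigned long)(pmc.WorkingSetSize / 1024));

    /* passing (SIZE_T)-1 for both limits empties the working set */
    SetProcessWorkingSetSize(GetCurrentProcess(), (SIZE_T)-1, (SIZE_T)-1);

    GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc));
    printf("working set after trim:  %lu KB\n",
           (unsigned long)(pmc.WorkingSetSize / 1024));

    before = pmc.PageFaultCount;
    for (i = 0; i < bytes; i += 4096) buf[i] = 1;   /* re-touch: soft faults */
    GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc));
    printf("soft faults on re-touch: %lu\n", pmc.PageFaultCount - before);

    VirtualFree(buf, 0, MEM_RELEASE);
    return 0;
}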
I used to be able to quote microsecond figures for page fault handling
for certain mainframes. Soft faults were in microseconds; hard faults
were in milliseconds. (They still are.) In the day, I knew that 25 soft
faults per second meant we either had to tune our application mix
(might be expensive) or buy another chunk of memory (expensive).
At least one major mainframe operating system that was current in the
late 70's was even more tightly coupled to the VM architecture than
Windows is. The hardware and OS managed a data page of the file
system the same way they handled a memory page. All of memory was one
big cache. TOPS-20, fast as h**l for its day.
Many PhD papers were written in the 60's and 70's about memory
management strategies for virtual systems, and there were loud
arguments at professional meetings about how they worked with
different process scheduling algorithms. Something we take for granted
now.
Now I'll return your TV channel to the 21st century......
>
>--
>----------------------------------------------------------
>http://www.microscum.com/mscommunity/
>"Ron Martell" <ron.martell@gmail.com> wrote in message =
>news:867o21ttifskdf1akbrq8or316hgd18agt@4ax.com...
>> "axis" <nospam@nospam.org> wrote:
>>
>>>Why do I still need a page file, even when I have 1 Gig of RAM and my RAM
>>>usage normally hovers around 300-400 MB? If I set the page file to be very
>>>small, Windows XP goes nuts. I understand the need for page files in
>>>memory-constrained situations; I would appreciate some info as to why one
>>>needs it even in a situation where we shouldn't need to page any data out
>>>of memory to disk.
>>>
>>>thanks
>>>
>>
>> One big reason is that Windows uses the page file to satisfy the
>> memory address space requirements for the unused portions of memory
>> allocation requests.
>>
>> By design Windows must identify specific memory address space for all
>> of the memory allocation requests that are issued, whether by Windows
>> itself, device drivers, or application programs. And all of these
>> typically ask for allocations that are larger than what is usually
>> needed under normal circumstances. So what Windows does is to
>> allocate RAM only to those portions of these requests that are
>> actually used, and it uses space in the page file for the unused portions.
>>
>> Two points about this:
>> 1. Mapping these unused portions of memory requests to the page
>> file does not require any actual writing to the hard drive. All that
>> is needed is entries in the memory mapping tables maintained by the CPU.
>> 2. Windows Task Manager includes the swap file space allocated to
>> these unused portions as Page File Usage in the data reported on the
>> Performance tab.
>>
>> And if subsequent events result in the usage of previously requested
>> but unused memory then it can be instantaneously remapped from the
>> page file to an available location in RAM.
>>
>> The bottom line, insofar as the current topic is concerned, is that
>> the existence of the page file will make the actual usage of your RAM
>> more efficient. Without a page file it is quite possible, indeed even
>> likely, that you would have a couple of hundred megabytes of RAM tied
>> up for memory that was requested but never used.
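
To make Ron's point concrete: committing address space bumps the
page-file-backed commit charge right away, but real RAM is only
consumed as pages are actually touched. A rough sketch in C (again
assuming a Win32 build linked with psapi.lib; PagefileUsage is the
commit figure Task Manager reports as Page File Usage):

/* Sketch: committing a large block raises page-file-backed commit
 * charge immediately, but working set (real RAM) only grows as
 * pages are actually touched. */
#include <windows.h>
#include <psapi.h>
#include <stdio.h>

static void report(const char *label)
{
    PROCESS_MEMORY_COUNTERS pmc;
    GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc));
    printf("%-16s commit: %8lu KB  working set: %8lu KB\n", label,
           (unsigned long)(pmc.PagefileUsage / 1024),
           (unsigned long)(pmc.WorkingSetSize / 1024));
}

int main(void)
{
    SIZE_T bytes = 128 * 1024 * 1024;  /* 128 MB */
    BYTE *buf;
    SIZE_T i;

    report("at start:");
    buf = VirtualAlloc(NULL, bytes, MEM_COMMIT, PAGE_READWRITE);
    if (!buf) return 1;
    report("after commit:");           /* commit jumps, working set doesn't */

    for (i = 0; i < bytes; i += 4096)
        buf[i] = 1;                    /* now RAM is actually consumed */
    report("after touching:");

    VirtualFree(buf, 0, MEM_RELEASE);
    return 0;
}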
>>
>>
>> Also you need to be aware that Windows uses the page file for more
>> than just swapping memory content out of RAM. It is also used
>> for:
>> a: System Failure Memory Dumps, unless you have this option
>> configured as "no memory dump". And in order for this option to be
>> usable there must be an existing page file on the boot drive that is
>> at least as large as the dump size option selected.
>> b: If you have multiple users configured on the computer and if you
>> have the "fast user switching" option in effect then Windows will use
>> the page file to "roll out" the memory contents of the previous user
>> when the machine is switched to a new user.
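
(For what it's worth, the dump option Ron mentions lives under the
CrashControl key in the registry; here's a quick way to check it from
code, assuming the standard CrashDumpEnabled value where 0 = none,
1 = complete, 2 = kernel, and 3 = small memory dump:)

/* Sketch: read the crash-dump setting Windows keeps under
 * HKLM\SYSTEM\CurrentControlSet\Control\CrashControl. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HKEY key;
    DWORD value = 0, size = sizeof(value);

    if (RegOpenKeyEx(HKEY_LOCAL_MACHINE,
                     "SYSTEM\\CurrentControlSet\\Control\\CrashControl",
                     0, KEY_READ, &key) != ERROR_SUCCESS)
        return 1;
    if (RegQueryValueEx(key, "CrashDumpEnabled", NULL, NULL,
                        (LPBYTE)&value, &size) == ERROR_SUCCESS)
        printf("CrashDumpEnabled = %lu\n", value);
    RegCloseKey(key);
    return 0;
}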
>>
>> Hope this explains the situation.
>>
>> Good luck
>>
>>
>> Ron Martell Duncan B.C. Canada
>> --
>> Microsoft MVP
>> On-Line Help Computer Service
>> http://onlinehelp.bc.ca
>>
>> "The reason computer chips are so small is computers don't eat much."
--
a d y k e s @ p a n i x . c o m
Don't blame me. I voted for Gore.