How To Manage Virtual Memory (Pagefile) In Windows 10

  • Thread starter: Guest
Status
Not open for further replies.
Is the chance of getting a PAGE_FAULT_IN_NONPAGED_AREA or KERNEL_DATA_INPAGE_ERROR Blue Screen of Death higher if you use the system-managed setting?

Why would you adjust the size of the page file?


Why would the amount of RAM you have present dictate the size you should set your page file to?

That seems counterintuitive and may even make the system unstable.

For instance, suppose my computer had 8 gigabytes of RAM and I wanted to open a giant 20-gigabyte TIF picture.

My system might very well crash if we use the logic of an 8-gigabyte maximum of virtual memory:

8 GB RAM + 8 GB page file = 16 GB combined maximum
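The worry in this example can be written out explicitly; a minimal sketch, using the sizes from the example above (illustrative numbers, not measurements):

```python
# Commit limit = physical RAM + maximum pagefile size.
# With an 8 GB pagefile cap on an 8 GB machine, a workload needing
# ~20 GB of committed memory cannot be satisfied.
GIB = 1024 ** 3

ram = 8 * GIB
pagefile_max = 8 * GIB
commit_limit = ram + pagefile_max     # 16 GiB combined maximum

workload = 20 * GIB                   # e.g. the giant TIF from the example
print(workload <= commit_limit)       # False: allocations would start failing
```

Whether the application crashes or degrades gracefully depends on how it handles failed allocations, but the combined maximum itself is a hard ceiling.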
 
Adjusting the page file is not the first thing I'd recommend in the event of a BSOD. Reviewing the mini-dump file and logs will illuminate the root cause in most cases. It could be a faulty driver or a resource conflict. It could also be hardware related, such as bad RAM or incorrect memory timings; an uncorrectable error from a bit flip will cause a kernel panic (which is what a BSOD is).
 


The appropriate page file size I needed was the exact reason I clicked on the article. Thanks!
 

Windows requires that a page file be present, otherwise very nasty things will happen when the system runs low on RAM and there is no page file to back it up. If you have 16GB or more RAM then just set it to a static size of 2-4GB and leave it alone. Increase as/if necessary.
 
Can you not put a pagefile on a secondary (non-Windows) drive as a second pagefile? From what I understand, Windows forces you to have at least a small one on your primary Windows installation drive. The Virtual Memory manager lets you set one up, but it doesn't persist between restarts (at least for me). @Winterlord I would say no, unless you really need that drive space.
 


You can, but it serves no purpose.

You have to balance how much RAM you have vs the drive space, etc.

For instance...
Years ago, I had a 120GB SSD for the boot drive and 16GB RAM.
Letting Windows suck up 8GB or more for the pagefile was unacceptable, due to the small SSD.
So I set it at 1GB min/max. Worked just fine.

Now, I have 32GB RAM and a 500GB SSD C drive. Only recently did I change from that original 1GB to letting Windows manage it. The required drive space is no longer much of a concern.

If you have an SSD for the C drive, you really want the page file on that.
In the event the OS needs to use the pagefile, why cripple the system by having that on an HDD secondary drive?
 

Yes, better paging performance can be achieved by placing a paging file on each physical hard drive. The page file on the least busy drive at the time of the paging operation will be used.

Note: This applies to spinning drives. SSDs are fast enough that only one page file is needed.
 


The data that actually gets written to the pagefile because it doesn't fit in memory is just the data that has been modified, right? Any data identical to what's already on the drives just stays on the drive, where it causes no additional writes to the pagefile.

So if I opened programs worth about 20GB of committed memory on a 16GB physical memory system and didn't modify any data, that 4GB difference would just stay in its regular places on the drive and not be re-written to the pagefile?
What about if you have two drives? Does it behave the same as with just one in this scenario, or can that actually cause some writes in some circumstances?

Because if this is not the case, then I would be concerned about writes to the pagefile shortening the SSD lifespan (at least how I use my computer), because every time the total amount of committed memory can't fit in physical memory, you're writing those extra bytes to the pagefile.


 
The days of limited SSD lifespan due to too many write cycles are long gone in the consumer space.
Long gone.

For instance...
My current Samsung C drive - 500GB 850 EVO.
From Samsung Magician: 14.2 TB written over 14,472 running hours, almost 2 years.

The Samsung warranty on that drive is 5 years or 150TBW.
That TBW limit would not be reached for ~20 years at this rate.
This PC is SSD only...5 drives.

All of the SSDs in the house, 8 drives among 3 systems, do not reach 50 TB written in total, over several years of use going back to 2012.

And multiple independent endurance tests have shown typical consumer-grade SSDs lasting far beyond that warranty TBW number.
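The projection above is simple division; a quick sanity check using the figures quoted (14.2 TB written over 14,472 powered-on hours, against the 150 TBW warranty):

```python
# Rough SSD endurance projection from the figures quoted above.
HOURS_PER_YEAR = 24 * 365

tb_written = 14.2        # TB written, per Samsung Magician
hours = 14472            # powered-on hours ("almost 2 years")
warranty_tbw = 150.0     # Samsung 850 EVO 500GB warranty limit

years_elapsed = hours / HOURS_PER_YEAR
tb_per_year = tb_written / years_elapsed
years_to_warranty = warranty_tbw / tb_per_year
print(round(years_to_warranty, 1))   # comes out in the high teens of years
```

Depending on whether you count powered-on hours or calendar time, the result lands somewhere between roughly 17 and 20 years; either way, far beyond the 5-year warranty period.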
 


Point taken. I've had my 1TB 850 EVO drive for about 3-4 months and I'm at 1.84TB written, but even at 10TB a year that only gets to 50TB written in 5 years.

However, out of curiosity, is my understanding of the pagefile interactions with physical devices correct?
 


The pagefile (and RAM) work at the page level, not the file level.

And your 1.84TB used in 3-4 months is not straight-line usage. The majority of that happened in the first 45 days or so.
Installing the OS and your applications, getting everything tested and set up.
Usage probably flattened out after that.
 


Certainly 400GB of that is installed programs (which doesn't count extra writes done during download/installation, etc.). If I'm feeling really concerned, I'll check to see how it changes in a month.

Ah I see the link posted above answered some of the other questions at the very bottom of the post:
"Be aware that actual page file usage depends greatly on the amount of modified memory that the system is managing. This means that files that already exist on disk (such as .txt, .doc, .dll, and .exe) are not written to a page file. Only modified data that does not already exist on disk (for example, unsaved text in Notepad) is memory that could potentially be backed by a page file. After the unsaved data is saved to disk as a file, it is backed by the disk and not by a page file."

So in a lot of cases, the pagefile is a placeholder that isn't doing the storing. And I'm guessing the commit charge doesn't go over RAM + pagefile to avoid the edge case where none of that data is present on the disks in unmodified form.
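The file-backed vs. modified-data distinction in that quote isn't Windows-specific; a minimal Python sketch of the same idea using `mmap` (which behaves analogously on any modern OS): a read-only mapping of an existing file is backed by the file itself, while anonymous memory can only be backed by RAM or the pagefile/swap.

```python
import mmap
import os
import tempfile

# A read-only mapping of an existing file: the OS can always re-read
# these pages from the file, so no pagefile backing is needed.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"already on disk")
    path = f.name

with open(path, "rb") as f:
    m = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    file_backed_head = bytes(m[:7])
    m.close()

# Anonymous memory has no file behind it; modified pages like this can
# only live in RAM or be written out to the pagefile under pressure.
anon = mmap.mmap(-1, 4096)            # -1 selects an anonymous mapping
anon[:7] = b"unsaved"                 # like unsaved text in Notepad
anon_head = bytes(anon[:7])
anon.close()
os.unlink(path)

print(file_backed_head, anon_head)    # b'already' b'unsaved'
```

Only the anonymous mapping counts against the commit charge the quote describes; the file-backed one is "charged" to the file it was mapped from.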
 
If you all think the pagefile size is a problem for SSDs, try disabling hibernation first. The hiberfil.sys file is equal in size to the system RAM. Unless it's a laptop, just turn it off at the command prompt:

powercfg.exe /hibernate off
 
The sort of BSOD issues listed in this guide have nothing to do with the size of the swap file - this advice is flat-out wrong.

Additionally, Virtual Memory refers to the memory hierarchy of any modern OS. The swap file/swap device is typically the slowest and largest tier of this hierarchy. The statement that every time a program is opened, things get swapped out is also completely wrong - eviction of pages from one memory tier is driven by well-tuned algorithms within the memory management routines of the OS.
 
This guide is no longer correct for the most recent Windows 10 builds or those with the simplified Settings page. When you go to System in that right-click menu, as described in the How-To, you're no longer taken to the "System" page but rather to the Settings > System > About page.

There are three ways I know of to get to the Control Panel > All Control Panel Items > System page that is in this guide.
The first is to use the keyboard shortcut Windows key+Pause/Break.
The second is to click Start, type "Control Panel", click the "Control Panel / Desktop App" option given at the top of the results, and then click System.
The final way skips directly to the "System Properties" dialog box, which is the next image in the How-To: click Start, type "sysdm.cpl", and hit Enter.
 
Windows 10 handles the pagefile slightly differently, as far as I have found. For many iterations the pagefile was relegated to just handling crash dumps and little else.

However, I've found that setting your pagefile too small, even with 16+GB of RAM, can cause issues.

The thing that kicks in is commit size. I have 16GB of DDR4 3000 and had set a 500MB pagefile because I wanted my system to use as much of that DDR4 as possible. However, every so often Fallout 4 and some other software would crash out and warn me of low memory. I would check in Task Manager and I would only be using around 8GB. Hmmm, odd. On further investigation (Event Viewer, etc.), it turned out the pagefile was too small for the commit size (Task Manager > Details > add Commit Size to the table) of the software I was running. This had never caused an issue before (I used to run a 64MB pagefile just so legacy stuff was happy I had one), but with 10 it kicks in quite often. Windows 10 does a lot more RAM caching, which probably goes through the pagefile.

So I moved the pagefile to a fast 128GB NVMe SSD, set it to auto-configure, and it's been rock solid since.

It's all about the Commit Size.
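The symptom described above (low-memory errors while Task Manager shows free RAM) follows directly from commit accounting; a sketch with illustrative numbers matching the post:

```python
# Windows does not overcommit: allocations fail once total committed
# memory would exceed RAM + pagefile, regardless of RAM actually in use.
GIB = 1024 ** 3

ram = 16 * GIB
pagefile = GIB // 2          # the 500 MB static pagefile described above
commit_limit = ram + pagefile

in_use = 8 * GIB             # what Task Manager showed as "in use"
committed = 17 * GIB         # illustrative total Commit Size of all processes

print(in_use < ram)                 # True: plenty of free RAM...
print(committed > commit_limit)     # True: ...yet new allocations fail
```

Games that reserve large address ranges up front are a common way for committed memory to run far ahead of memory actually in use.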
 
Manually changing the page file was predominantly done to accommodate the usable-memory limit of around 3 GB on 32-bit operating systems (10+ years ago). Today this modification is not needed due to the amount of system memory that comes standard with 64-bit operating systems, which now tends to be anywhere from 8 GB to 16 GB. Because of this, "paging" is not needed. For the average user, upgrading the system memory will do more to reduce hard drive load (especially if the drive is a hard disk drive), as modifying the page file on new systems today just masks the need for more system memory.
 


This is true - Windows 10 now uses RAM compression as an alternate intermediate means of conserving free space prior to resorting to paging out. It does consume CPU cycles, but it's far less impactful on the system than paging directly to storage. This new change is most welcome.



Not entirely true - while yes, it provides a means of generating a crash dump for post-crash analysis, that was never the primary function. For the end user, the pagefile was a physical extension of addressable memory beyond the physical RAM modules installed. However, it's up to the OS (kernel) to decide which processes get paged out to the file via its memory management.

I remember that prior to OS X on an iMac, it was recommended to disable the page file if you had enough RAM installed. Sure enough, there was a major improvement in performance as the HDD never had to be touched. However, there was a brick wall of sorts just beyond the physical RAM. Once you hit that wall, more often than not the entire system would lock up. Of course, modern OSes are much better at handling memory and processes these days. But if you had to pick your poison, a system slowdown from paging out is much preferred to a system hard-lock / non-responsiveness.
 
This is totally false; the rule for calculating the page file was valid for the whole Windows 7 era. Since Windows Vista/8 and 10, new algorithms are in place for better automatic handling of the total available system resources.
If you really want to dig deep into the pagefile, have a look at Mark Russinovich's talk:

https://www.youtube.com/watch?v=TrFEgHr72Yg

Don't follow the wrong advice here.
 

Vista came out before Windows 7....
 
I am running a Threadripper 1950X with 64GB of RAM. I have never looked it up; what would be the recommended page file size for my system? I have a 1TB NVMe drive for the boot drive. I am getting those errors every so often, maybe once a month. I'm not at the computer at the moment, so I can't tell you how large the page file is right now. But going 4x your RAM size seems a little excessive; that would be a 256GB page file, when I am pretty sure I will never use even 32GB of RAM.
 