Question: SSD Wear Leveling and Pagefile - 1TB Pagefile?

Nov 1, 2023
I'm just curious about something, and I'm not getting clarity from my searches.
Firstly, I had an extra 1TB SSD, so I moved my pagefile from C: to that drive. This question is about wear leveling.

I know there are all these rules about 1.5x installed memory and other sizes. But putting that aside for a moment: what if I wanted to use an entire 1TB drive for the pagefile? The reasoning is that SSDs wear out over time, so by using the entire drive, I'd put less wear on the same cells in one area of the drive by spreading the pagefile across the whole drive.

But on the other hand, if I used a normal-size pagefile such as 1.5x installed memory, would the drive's wear leveling use different cells across the drive, or would it keep writing to the same location, only using the size specified for the pagefile? I just figured that if I used the entire drive as a swap drive, it would take longer for the drive to wear out than confining the pagefile to one size.
 
If I used a normal-size pagefile such as 1.5x installed memory, would the drive's wear leveling use different cells across the drive, or would it keep writing to the same location, only using the size specified for the pagefile? I just figured that if I used the entire drive as a swap drive, it would take longer for the drive to wear out than confining the pagefile to one size.
A swap of 1.5x RAM is not a normal size. That rule dates from when computers had 256MB or 512MB of total RAM.
Now, with 32GB of RAM, you want the swap as small as possible but as large as necessary, so the size doesn't get changed too often.

Set it to 4GB initial and 16GB max. Then monitor swap file size. If it grows, then increase initial size accordingly.
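To make that monitor-and-grow loop concrete, here is a hypothetical helper sketched in Python. The 4GB/16GB figures mirror the advice above, but the function name and the "grow when peak usage gets within 25% of the initial size" policy are my own invention, not anything Windows does:

```python
# Hypothetical sizing helper mirroring the advice above: start at 4GB
# initial / 16GB max, and only grow the initial size when the observed
# peak pagefile usage crowds the current initial size.
def recommend_pagefile_mb(observed_peak_mb: int,
                          initial_mb: int = 4096,
                          max_mb: int = 16384) -> int:
    """Double the initial size while the monitored peak usage is within
    25% of it, capped at max_mb."""
    while observed_peak_mb > initial_mb * 0.75 and initial_mb < max_mb:
        initial_mb = min(initial_mb * 2, max_mb)
    return initial_mb

print(recommend_pagefile_mb(1200))   # peak well under 4GB -> stays 4096
print(recommend_pagefile_mb(6000))   # peak crowding 4GB -> grows to 8192
```

The point is just that the initial size only ever needs to move when your observed peak says so.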

Having a 1TB swap file is just stupid. You'll never need that much virtual RAM, and a huge pagefile impacts performance negatively. You'll have minimal writes, but reads will take a lot of time: reading a huge file takes more time than reading a small file.
 

Colif

Win 11 Master
Moderator
If you have enough RAM, Windows rarely uses the pagefile anyway, so making it 1TB would be a waste of time: it would simply never be used. Windows is more likely to compress data into RAM than use the pagefile; it only uses it if you run out of RAM. And if you're doing that, it's better to just buy more RAM.

I could be wrong, but I don't think partition size or placement has any effect on wear levelling anyway. It's a process done at a level below Windows, handled automatically to make sure all cells are used equally. The SSD's controller looks after it.
 
Well, I had a 6GB pagefile and was getting freezes in ArmA 3, which is a memory hog and maybe has a memory leak. When I increased the pagefile to 24GB the game stopped freezing. I always read that a permanently sized pagefile is better than system managed, because Windows likes to constantly resize the pagefile, and that adds more wear on your drive than a permanently set swap size.

It's getting hard to get good memory for this system, it's about 7 years old.

Win 10 Pro
16GB DDR3 Corsair Vengeance
Nvidia RTX 3060 Ti / 8GB GDDR6
MSI Z87-G45 Gaming

Games
ArmA 3
Doom
Wildlands
 

Colif

Win 11 Master
Moderator
ArmA 3 does chew RAM, and has had a memory leak since launch.

Make sure you're using the 64-bit version of the game. Set it to 64-bit if it is not already through the launcher options, then set your maximum memory limit for the game to about 75% of your total memory, so 12GB. If other programs are using more than 3.5GB of the remaining 4GB, then something is off with your PC.

So regardless of the size of the pagefile, it will eventually eat it all if you play long enough.

Diablo 4 has one as well; it used to use 98% of my virtual memory. I don't know if they've fixed that.
 
because Windows likes to constantly resize the pagefile and it adds more wear on your drive than if you just use a permanently set swap size.
That's somewhat not true.
A swap file resize (dynamic/system managed) doesn't take any extra physical space; the file is only virtually resized (reserved in the mapping table, which is RAM cached) until it actually gets used by some data.
The same data gets written whether it's dynamic or not; it makes zero difference to wear level.

The only wear happens when you run out of RAM and need to use the pagefile.
 
@kerberos_20
I don't think my question is being understood. In the event the system did need to use the pagefile, if the file was large, or the size of the entire SSD, then all the cells across the entire drive would be available for writes. A smaller designated pagefile size constrains the system to the cells available within the pagefile's container.

@SkyNetRising
I'm not implying you're wrong when you said larger pagefiles have longer seek times. But I feel that would be a limitation of a conventional disk drive with read heads. SSDs use a master map proportional to the size of the volume, so read requests go straight to wherever on the drive the requested data lives. Seek times shouldn't change on SSDs based on pagefile size, because the map points right at the data; the drive doesn't have to seek and find it like a read head on older drives.

Third, having an OS-managed swap would use more I/O than a permanent swap, because Windows is constantly shrinking and growing the pagefile when it's being managed.
 

USAFRet

Titan
Moderator
I don't think my question is being understood. In the event the system did need to use the pagefile, if the file was large, or the size of the entire SSD, then all the cells across the entire drive would be available for writes. A smaller designated pagefile size constrains the system to the cells available within the pagefile's container.
Pagefile is used when there is no more physical RAM available.

And the pagefile is very much slower than actual RAM, even with an uber fast SSD.

Having the "entire drive" be the page file would only slow the system down.
 

Colif

Win 11 Master
Moderator
The entire point of the 1TB partition relies on the idea that the partition has any effect on wear levelling.
It doesn't. The drive will use all the cells equally; the drive doesn't know what a partition is. That is a software concept.

The simple answer is:
Partitioning of the storage drive is a software construct.
The partitioning scheme that the drive uses is stored on the drive itself (as data), typically in the first sector since that traditionally is an error-free sector.
Operating systems are expected to identify and respect the partitioning scheme implemented for each storage drive.


Partitioning of the storage drive is not supported by hardware.
The storage drive (HDD or SSD) at the hardware level has no concept of what a partition is.
The drive interface (ATA/ATAPI or SCSI) has no commands that reference or respect partition boundaries.
It is solely up to the OS and its filesystems (i.e. software) to respect partition boundaries.

There are storage devices that support partitioning at the hardware level, e.g. eMMC modules have boot partitions. But SSDs do not.
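The point that the partitioning scheme is just data in the first sector can be shown by parsing a partition table directly. A minimal Python sketch that builds and decodes a synthetic MBR sector - the 446-byte offset, 16-byte entries, and 0x55AA signature are the classic MBR layout, but the partition values themselves are made up:

```python
import struct

def parse_mbr(sector: bytes):
    """Decode the four primary partition entries of a 512-byte MBR."""
    assert len(sector) == 512 and sector[510:512] == b"\x55\xaa"
    parts = []
    for i in range(4):
        entry = sector[446 + 16 * i : 446 + 16 * (i + 1)]
        lba_start, num_sectors = struct.unpack_from("<II", entry, 8)
        if entry[4] != 0:  # partition type 0 means "unused entry"
            parts.append({"type": entry[4], "start_lba": lba_start,
                          "sectors": num_sectors,
                          "bootable": entry[0] == 0x80})
    return parts

# Synthetic sector: one bootable NTFS-type (0x07) partition at LBA 2048.
sector = bytearray(512)
sector[446:462] = struct.pack("<B3sB3sII", 0x80, b"\0\0\0", 0x07,
                              b"\0\0\0", 2048, 1 << 21)
sector[510:512] = b"\x55\xaa"
print(parse_mbr(bytes(sector)))
```

The drive itself never interprets those 16-byte entries; only the OS and its tools do.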


The technical answer is:
SSDs use a flash translation layer to implement wear leveling so that when a Logical Block Address (LBA aka sector) is (re)written, a different physical block is mapped to the LBA, and written with the new data.
The original physical block is marked as unused, and goes to the end of the available block list.
(Actually it's a bit more complicated because an erase operation is required before a block can be written, and the erase block is much larger than the sector block. Maintaining a supply of already-erased sectors available for remapping and writing is crucial for SSD performance.)
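That remapping can be sketched as a toy model in Python. This is deliberately simplified - erase-block granularity and garbage collection, mentioned above, are omitted, and the block counts are arbitrary:

```python
class ToyFTL:
    """Toy flash translation layer: every logical write lands on a fresh
    physical block, so wear spreads across the whole device even when the
    OS rewrites the same LBA (or the same partition) over and over."""
    def __init__(self, physical_blocks: int):
        self.free = list(range(physical_blocks))  # already-erased blocks
        self.map = {}                             # LBA -> physical block
        self.writes = [0] * physical_blocks       # per-block wear counter

    def write(self, lba: int):
        old = self.map.get(lba)
        if old is not None:
            self.free.append(old)  # old copy goes to end of free list
        phys = self.free.pop(0)    # take the least-recently-freed block
        self.map[lba] = phys
        self.writes[phys] += 1

ftl = ToyFTL(physical_blocks=64)
for _ in range(1000):
    ftl.write(0)  # hammer a single LBA, like a small fixed-size pagefile
print(max(ftl.writes) - min(ftl.writes))  # prints 1: wear stays even
```

Even though the host rewrote one sector 1000 times, every physical block absorbed 15-16 writes - which is why a small fixed pagefile does not grind down one patch of cells.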


Since the SSD has no concept of partitions and there's a flash translation layer that maps LBAs to physical sectors, the list of physical sectors used by a partition could be distributed across the entire drive (plus hidden/spare blocks not reported as part of the drive's capacity).
As LBAs are (re)written, that list of physical sectors used by the partition will change.


Wear leveling could be restricted to a partition, for instance, in an embedded Linux system that uses JFFS2 on a NAND chip. In that case, wear leveling is part of that particular filesystem.
A PC does not use raw NAND chips for mass storage.

The SSD with its integrated controller uses a flash translation layer to emulate a HDD so that the OS can use a filesystem that is (almost) oblivious to NAND issues.
Consequently the FTL is at a lower layer than the partitioning scheme.
Hence the SSD's controller and the FTL have no concept of partitions, and therefore cannot restrict a drive operation (e.g. wear leveling) to a partition.

 
Guys, there really isn't any reason not to leave the pagefile on system managed.

Once upon a time, when we had to live within 4GB, there were instances where you *had* to set the pagefile to a fixed size to get things working (especially certain games that were memory hogs). But speaking as someone who runs heavily modded games that literally eat 64+GB at a time, these days you can just let the system manage and grow it as necessary.
 

Colif

Win 11 Master
Moderator
Guys, there really isn't any reason not to leave the pagefile on system managed.

Once upon a time, when we had to live within 4GB, there were instances where you *had* to set the pagefile to a fixed size to get things working (especially certain games that were memory hogs). But speaking as someone who runs heavily modded games that literally eat 64+GB at a time, these days you can just let the system manage and grow it as necessary.
Mine is on auto. If it wasn't, I'd probably get out-of-memory errors playing Diablo 4.

The entire idea of over-provisioning wouldn't make any sense if wear were restricted to the partition the files are in.
 
@USAFRet

When you said "having the 'entire drive' be the pagefile would only slow the system down": it's not the OS drive, it's its own drive. But Colif pretty much answered my question when he said, "Hence the SSD's controller and the FTL have no concept of partitions, and therefore cannot restrict a drive operation (e.g. wear leveling) to a partition." So, in other words, you can have a static/permanent pagefile and the controller will still apply wear leveling internally across the entire drive.

@SkyNetRising
You mentioned that "windows doesn't shrink pagefile". I don't know if that's true, but I DO know they reset at boot. Everything I've read online says Windows will expand and shrink the pagefile when it's system managed. But it's only the container; the data written is still the same writes at the end of the day.

I also realize that having more RAM helps avoid using the page swap space altogether. I've actually ordered another 16GB, for a total of 32GB in the system. Colif also makes a good point for using managed: ArmA, with all its memory leaks, will keep eating memory as you play.

@kerberos_20
Yes, of course you'd have to run out of RAM for pagefile wear to kick in. I guess it doesn't really matter, it seems.

I've just been confused for a while about why, ten or so years ago, places like Tom's Hardware would say that a permanent swap file was more efficient for gaming because the OS didn't have to resize it. But I guess technology has changed. It always made me think that something was changing the container/partition, that it was constantly being resized even if you had enough memory.
 

Colif

Win 11 Master
Moderator
The NVMe drive has no idea about RAM, just as Windows has no control over wear levelling. The NVMe can't "see" the other parts of the system; it just looks after itself.

Setting a memory limit for ArmA 3 means you'll probably need to restart the game more often, as it will eat all the memory it can until it hits that limit. That could cause errors in game, but it beats crashing the entire PC. Memory leaks are like that.
 
So just for a test, this is a little off topic from my original post.
I wanted to test Performance Monitor. I don't know if I'm doing this correctly. The graph just shows nothing happening even though I added a pagefile usage counter. Obviously the system is not out of memory.

But to test it, I went into Photoshop and created a HUGE 50GB file to intentionally run my system out of memory, and still nothing shows up on the graph indicating that the pagefile is being used. Screenshot -> Example
 
I only have one pagefile, but it's on another SSD.
I was looking to see if there's a way to point Resource Monitor at that file. Are you saying the pagefile needs to be on the OS drive for the monitor to see it? I did change it from static to managed.
I'd think Resource Monitor would be smart enough to know where the pagefile is, since Windows allowed me to move it, so the system should have some kind of link to it.
 
So are you saying that the pagefile needs to be on the OS drive for the monitor to see it?
No.
There will be separate counters for each pagefile on each drive.
You have to add all of them.

Can you show screenshot from virtual memory settings on your pc?
(Settings/System/About/Advanced system settings/Advanced/Settings/Advanced/Virtual Memory/Change)

Upload screenshot to imgur.com and post link
 
That was my PC screenshot.

By the way, now that I'm looking at Resource Monitor in addition to Performance Monitor, I think the reason I'm not seeing activity on the pagefile in Performance Monitor is the one someone above mentioned: the data is being compressed internally in memory. In other words, I made five 7GB image files and opened each of them in Photoshop, which took forever. At the same time I was watching what Resource Monitor was doing. To my surprise, not as much memory was being used by Photoshop as I expected: 5 files @ 7GB should be 35GB used. But the Modified memory section in Resource Monitor was working hard, which makes me think it was compressing the data in memory to keep me from running out.

So I think with things like Photoshop-type programs, Windows can compress memory, just not with some types of programs like games, which need real-time memory transfers.

EDIT
I was able to get the Performance Monitor graph to move up for my pagefile after I opened a bunch of games at the same time. So it looks like it's working. But the behavior I'm seeing, where memory usage drops after a while even sitting idle, makes me think memory compression is happening, as the other user mentioned.
 

Colif

Win 11 Master
Moderator
Windows will only write information to the pagefile if it runs out of RAM. It will compress everything into RAM until you need that space. You won't see any usage before then.

If you close a program, it won't write the data to the pagefile then either; it will compress it into RAM in case you need it again, as it's much faster to decompress it than to fetch it from storage. It's designed to make Windows more responsive.
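The trade-off here (CPU-cheap decompression versus a slow storage read) can be illustrated with ordinary zlib in Python. This is not Windows's actual compression store, just the general idea; the page contents are made up:

```python
import zlib

# One 4 KiB "memory page" of repetitive data, as process memory often is.
page = (b"ArmA3 asset header " * 400)[:4096]

packed = zlib.compress(page)        # "compress into RAM"
print(len(page), "->", len(packed), "bytes")

restored = zlib.decompress(packed)  # far cheaper than a pagefile read
assert restored == page
```

Typical process memory compresses well, so the same data occupies a fraction of the RAM while staying orders of magnitude faster to recover than a trip to the SSD.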

Just an explanation: https://www.urtech.ca/2022/11/solved-memory-compression-explained/ - don't turn it off.
 