Old 07-19-2011, 07:33 PM   #13
karbomusic
Human being with feelings
 
Join Date: May 2009
Posts: 29,269
Page File Follow Up

So, to follow up on the page file/virtual memory thing. I'll try to keep it as simple as possible. First off, there are several challenges concerning DAW performance that I would like to rant about first.

1. The vast majority of information on the Internet concerning computers and performance is hearsay and/or just plain wrong, even from credible sources at times. A high percentage of it is nothing more than the blind leading the blind: "I did this and it seemed to work, you should try it". Arggg...

2. Most of us musicians are more than intelligent enough to understand all of this, but there is a time investment in the learning curve, and on top of that, measuring performance in a way that leads to valid conclusions can eat up lots of time better spent making music.

As far as performance goes, every situation is different and should be approached as such when possible. Outside of the no-brainer stuff, chasing symptoms by blindly making changes because "dude on the Internet had a similar issue", without actually being able to verify first, eventually leads to lots of unstable machines and then to blaming the OS. The long and short of it is: don't make the change unless you KNOW it applies to you and why.

==================
Virtual Memory
==================

Again, I'm trying to keep it as simple as possible for the time being. Virtual memory is managed by the VMM (Virtual Memory Manager). It is a layer of sorts between a process and physical memory/the page file. In the simplest terms, the VMM "maps" virtual memory to physical memory and the page file, and the process sees only the virtual memory, as one large piece of memory. Typically the process doesn't know or care what is physical and what is page file.
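The mapping idea above can be sketched as a toy model. To be clear, this is only an illustration of the concept, not how Windows actually implements it; all names, page numbers, and frame numbers below are made up:

```python
# Toy sketch of VMM-style mapping: the process sees one flat virtual
# address space; each virtual page is backed by either a physical RAM
# frame or a slot in the page file. (Illustration only, not Windows' code.)

PAGE_SIZE = 4096

# Hypothetical page table: virtual page number -> (backing store, location)
page_table = {
    0: ("RAM", 17),        # resident in physical frame 17
    1: ("RAM", 42),
    2: ("PAGEFILE", 5),    # paged out to page-file slot 5
}

def read(virtual_addr):
    """Resolve a virtual address; fault the page in if it lives in the page file."""
    vpn, offset = divmod(virtual_addr, PAGE_SIZE)
    store, loc = page_table[vpn]
    if store == "PAGEFILE":
        # Simulated page fault: the VMM transparently brings the page into RAM.
        # The process never notices which backing store was used.
        page_table[vpn] = ("RAM", 99)  # pretend frame 99 was free
        store, loc = page_table[vpn]
    return store, loc

print(read(2 * PAGE_SIZE + 100))  # the paged-out page comes back as RAM
```

The point of the sketch is the last line: the process just asks for an address, and whether that address was sitting in RAM or in the page file is the VMM's business.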

For those in the know, I purposely left kernel-mode memory, user-mode memory, 3GB, USERVA, PAE and AWE out of this post in order to get the basic idea across. Adding all of that minutiae right out of the gate is part of the problem. Moving on... here is a very simple diagram showing the memory manager mapping physical resources so that they appear as one piece of memory to a process. Why is another discussion, but it is basically a method that allows the computer to act as if it has more physical memory than it actually does:

Code:
==============      ==============
 Physical RAM         Page File
==============      ==============
        \                /
================================
         Virtual Memory
================================
               |
      ===================
          Process.exe
      ===================

================
Page File Size
================

So the heart of the matter in this thread was page file size. As I stated before, there is no one-size-fits-all rule or algorithm that will tell you what the proper page file size is. Why? Because every system has different performance and memory requirements. This is precisely why OSes tend to have dynamically adjusting page files out of the box. Thus, you need to do a few simple investigations on your own to calculate the size that's best for your system. I also mentioned that Task Manager doesn't always tell the right story: the numbers are somewhat accurate, but which ones it displays and what it calls them is misleading and has changed from OS version to OS version.

The best tool to use instead is Process Explorer, and there are three values it displays that are critical in determining the best page file size for your system. They are listed under "Commit Charge".

[Screenshot: Process Explorer's System Information view, with Commit Charge (Current, Limit, Peak) at the bottom left]
Notice on the bottom left of the picture above, under Commit Charge (Current, Limit, Peak). Commit charge is a true commitment by Windows that a process will have the memory it has been promised, meaning it must be available as virtual memory (even if it is a mix of page file and physical; see the diagram from before). Here are the definitions:

Current: The current amount of committed memory, i.e. the sum of committed memory across all processes.
Limit: The current hard limit on how much can safely be committed (approximately calculated as page file size + physical RAM).
Peak: The highest amount of memory that has been committed since the last reboot.
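The arithmetic behind the Limit value can be sketched in a few lines. The numbers here are purely illustrative (they match the example later in this post, not any real machine), and the formula is the approximation given above:

```python
GB = 1024 ** 3  # bytes per gigabyte

# Illustrative numbers only.
physical_ram = 3 * GB
page_file    = 4 * GB

# Per the definition above, the commit limit is approximately
# page file size + physical RAM.
commit_limit = physical_ram + page_file
print(commit_limit // GB)  # 7 (GB)
```

If Current ever approaches this limit and the page file cannot grow, allocations start failing, which is exactly why the headroom discussed below matters.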

It's important to note that if you do not set a fixed page file size, the commit limit can grow/shrink based on system needs. If you do have a fixed size, then it is pretty much a hard limit, because the page file cannot be expanded to meet unexpected demands in the future. Historically, DAW users have set fixed page file sizes so that growing/shrinking of the page file doesn't cause audio glitches. I will say that it is somewhat rare these days for this to happen once the machine is "settled in", you don't hit any sudden extremes, and you have a decent amount of RAM. However, fixed is perfectly fine so long as it is calculated with proper headroom to meet sudden demand.

Finally, the way to find the best "fixed" page file size for YOUR DAW (should you choose to use fixed) is to look at the peak AFTER the machine has run for a few days, a week, or however long it takes for you to push the system to the maximum performance limits it usually sees, without reboots. For example, if the peak after that period of time is 6GB and you have 3GB of RAM installed, then a safe page file setting might be 4GB, which includes 1GB of emergency memory headroom. IE: 3GB physical + 4GB page file = 7GB commit limit, which is decently above the 6GB peak you previously measured.
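The sizing rule above can be written as a small helper. This is just a sketch of the arithmetic described in this post; the function name, the 1GB default headroom, and the numbers are my own illustration, not an official formula:

```python
def fixed_page_file_size(observed_peak_gb, physical_ram_gb, headroom_gb=1):
    """Smallest fixed page file (in GB) that leaves `headroom_gb` of
    commit headroom above the observed commit peak.

    commit limit ~= physical RAM + page file, so we need:
        physical RAM + page file >= observed peak + headroom
    """
    needed = observed_peak_gb + headroom_gb - physical_ram_gb
    return max(needed, 0)  # never negative; 0 means RAM alone covers the peak

# Example from the text: 6GB measured peak, 3GB RAM, 1GB headroom
size = fixed_page_file_size(observed_peak_gb=6, physical_ram_gb=3)
print(size)        # 4  -> a 4GB fixed page file
print(3 + size)    # 7  -> 7GB commit limit, above the 6GB peak
```

Note that the measurement (the peak) is the hard part; the arithmetic is trivial once you have a trustworthy number from Process Explorer.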

Reminder: I purposely left out 32bit/64bit and 3GB switch discussions for now because the basic ideas above are more important to understand first. Even if the numbers I quoted don't align with 32 or 64bit boundaries, the basic concept is the same.

"I have a 64 bit machine and a huge amount of memory, I don't need no stinkin' page file."

True. Windows will run just fine without a page file, but there are a couple of things to be aware of:

1. If the system BSODs and creates a kernel dump, it must be written to the page file on the system drive, because the normal disk driver stack is bypassed when writing the dump. Since for all we know the BSOD was caused by a disk driver, the dump is literally written from memory directly to the page file, sector by sector. Later, when the system reboots, the .dmp file is extracted from the page file. If you have no page file on the system drive and you need that dump for troubleshooting by a vendor, or even Cockos, it's not going to get written. To cover for this, keep a page file of around 200-256MB.

2. It can actually hurt performance in some cases, because with no page file every single resource that uses memory on the system must remain in physical memory at all times, even if you aren't actively using it. Even processes and data structures that have nothing to do with your DAW, and won't ever be needed while running it, must now be kept in RAM at all times, because they cannot be paged out.

"I heard that a large page file hurts performance due to excessive paging"

False. Excessive paging is due to not having enough physical memory to meet demand. If the page file is large but not used much (the machine isn't memory starved), it's not hurting performance, outside of the chance that it gets fragmented over time (a different topic); but then again, just sitting there isn't causing fragmentation either. If fragmentation is ever a concern, just recreate the page file, reboot, and the problem is solved.

I hope this helps at least a little.
__________________
Music is what feelings sound like.

Last edited by karbomusic; 07-19-2011 at 08:02 PM.