Most IT professionals who work with Windows to any serious degree have heard terms like “virtual memory”, “physical memory”, “working set”, “reserving memory”, “committed memory”, “page file”, “swapping”, “regions”, etc. I can imagine that most of them don’t really know what it’s all about. Okay, the difference between physical and virtual memory and the meaning of a page file are probably reasonably well understood, but does everybody really know what they are talking about? Do they really know, for instance, what a page file is for, how it is organized, how it is used, how the system behaves if it exists (or doesn’t exist)? I seriously have my doubts. And I’m sure that those other dirty words like “working set”, “committed memory” and “data alignment” were, are and will be ignored whenever they come up. Well, it shouldn’t be like this: understanding how memory works, especially at the OS level, is important, whether you are a system administrator/engineer/architect, a developer or another kind of IT pro. You can improve performance, fine-tune applications and systems, troubleshoot so much easier and faster and control your environment so much better if you understand how this key part of your machine works and behaves. And it shouldn’t be that difficult: read, learn and, yes, have a better life (no, I promise you it won’t be the opposite!).
The concept of virtual memory, physical memory and memory mapping
I could give you an introduction of many, many pages, but I won’t do that. After all, this is a blog entry, an article. So let me whisper something in your ear right away: “every process has its own memory”. I guess most of you will nod silently, but some people probably have that “you don’t know what you’re saying, dude” look right now. Well, let me be a little bit arrogant: I know what I’m saying and yes, it’s correct. Okay, perhaps the sceptics are partly right too… It is indeed true that processes don’t each have their own RAM modules, those physical things that give you memory. That memory is shared and meant for the whole system (on some systems you should nuance this a little bit, but more about that later). Every process makes use of this same pool of memory. But despite this admission I still stress that my original statement was and is correct, although I should make myself clearer: every process has its own memory, though in a virtual way.
Memory that is meant for use by the OS and the applications running on top of it is delivered by physical (or sort-of-physical) components. This is the (quite) real, actual memory. The best-known way it is delivered is by literally physical hardware (normally of the Random Access Memory (RAM) type), but there is also another way of offering memory, which is semi-physical (we’ll talk about this in a few minutes). Both of these types of memory can be called physical memory (so “physical” should be taken here in its broadest meaning). The sum of all of them on a system is “the physical memory of a system”. Like I said before, this memory is meant for the OS, all the applications, all the processes. It is shared, nothing more, nothing less (again: on some systems you should nuance this a little bit, but more about that later).
On the other hand, there is also memory considered at a higher level; it is called virtual memory. It doesn’t represent the literally physical or semi-physical memory; it’s just a concept, something that doesn’t really exist, except in that weird, virtual world. And then, yes then, every process has its own (virtual) memory. The sum of the virtual memories of all processes on a system is “the virtual memory of a system”. The amount of virtual memory of a system depends on the number of running processes; in other words, it is very dynamic, as the number of processes on a system can change very rapidly. It doesn’t really matter though: it normally makes no sense to know the total amount of virtual memory on a system and you will rarely see this value appear in any list of memory values (for example in Task Manager, Process Explorer or whatever article about memory you’ve found). The only thing that is important is the virtual memory per process.
If only physical memory existed, all processes would have to access physical memory directly. Don’t forget that physical memory is shared, so security-wise this wouldn’t be a good thing… With virtual memory we have an extra memory layer that takes care of this: a process can only access its own virtual memory, not another process’ virtual memory. This means that a process’ virtual memory can be considered private to that process. And as a process will only directly access its virtual memory and not the physical memory behind it (exceptions exist for special purposes, but this isn’t the normal behavior at all), the whole memory architecture is secured this way. Most of the time the content behind a virtual address of a process, residing somewhere at a physical address behind the virtual layer because that’s where the content is actually located, is also private to that process; if it’s not, it’s because the content is shared between processes through sharing mechanisms. The following example illustrates this: suppose you have a process A and a different process B. Every address in the virtual memory of process A is different from every address in the virtual memory of process B, even if the addresses have the same number. For example, address 0x24675A4B of the virtual memory of process A is different from address 0x24675A4B of the virtual memory of process B. They have the same number (the hexadecimal 0x24675A4B) but they are valid in and refer to a different virtual memory. This also means that the content at these addresses can be different: for process A it can be 5 and for process B it can be 7. Notice that I said “can be different”: of course they can coincidentally be the same, both 5 for instance. It’s important to understand, though, that these two fives have nothing to do with each other: the technical context of this content will be different.
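Just to make this tangible, here is a tiny toy model in Python (nothing Windows-specific; all addresses and values are invented for illustration): two dictionaries play the role of per-process mapping tables over one shared “RAM”.

```python
# Toy model: each process has its own private mapping table from
# virtual addresses to physical addresses in one shared "RAM".
ram = {}  # shared physical memory: physical address -> content

# Per-process mapping tables (virtual address -> physical address).
# The SAME virtual address number maps to DIFFERENT physical addresses.
table_a = {0x24675A4B: 0x1000}
table_b = {0x24675A4B: 0x2000}

ram[0x1000] = 5  # content of process A behind virtual 0x24675A4B
ram[0x2000] = 7  # content of process B behind the same virtual address

def read(table, vaddr):
    """Resolve a virtual address through a process' own table only."""
    return ram[table[vaddr]]

print(read(table_a, 0x24675A4B))  # 5
print(read(table_b, 0x24675A4B))  # 7
```

Same virtual address number, two different physical locations, two unrelated contents: exactly the situation described above.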
Whether they are the same or not, both values will reside at different physical addresses. This means that not only the virtual addresses themselves are private, but also the content, although it resides in that shared physical memory (which is not directly accessed by processes in normal situations). There is one exception: it is possible that both values reside at the same physical address, and then we can say we are dealing with shared content, even though the virtual addresses themselves are and stay private.
There is a second reason for the existence of virtual memory: the amount of (virtual) memory addressable by a process doesn’t have to equal the total amount of physical memory, RAM or whatever sits behind the virtual memory. A small example illustrates this: on a 32 bit OS a process can address 2^32 addresses, which means 4 GiB of memory is addressable. The hardware could be able to address a bigger memory range, so without virtual memory all processes could only address a piece of it, and it would be the same piece! The conclusion is that virtual memory makes it possible to actually use more physical memory, and that’s a very good thing of course.
If you were alert, you have probably noticed that we have a strange situation: processes use their own virtual memory, but ultimately the memory content is stored in physical memory. This means there should be some kind of mapping, right? Indeed, and we could call this, very simply, a virtual-physical memory mapping, because that’s what it basically is. In a basic situation the only kind of “physical” memory in use is the truly physical one, present thanks to the installed RAM modules (let’s assume we use the RAM type of memory, which is normally the case). In this situation we could see the mappings as virtual-“really physical”; the mappings themselves, as well as the accompanying mapping tables, are normally handled by the CPU subsystem of a machine (its memory management unit).
But hey, what if we have 20 processes, all having their own 4 GiB of virtual memory? Then we have a total of 80 GiB of virtual memory on the system! Oh my, I don’t think I have this amount of RAM… I don’t even think my motherboard can support this! Even worse: my operating system doesn’t support this. Does this mean I’m limited to only a few processes? Or even none, because it seems I only have 2 GiB of RAM!! Relax, it’s not as bad as it sounds, really, believe me. The fact is that processes rarely need that much of their virtual memory. As long as they don’t need a certain part, well, we can ignore it. For example, if 15 of our 20 processes only use 10 MiB of their virtual memory and the other 5 only use 30 MiB, we have 15 * 10 + 5 * 30 = 150 + 150 = 300 MiB of actually used virtual memory. Ok, that 2 GiB of RAM I have installed still seems fair! Even if I run extra processes, or even when a few processes need more of their virtual memory, I still have enough RAM in my box! I must admit that my example is purely hypothetical, it doesn’t resemble a realistic scenario, but I think you get my point, right? Unused parts of virtual memory can be ignored as long as they aren’t needed, which means the virtual-physical mapping tables in practice are much, much smaller than they would be if those unused parts weren’t ignored. It also means you don’t need as much physical memory as you thought a few lines ago. Attention: it doesn’t mean any amount of physical memory is enough for any amount of used virtual memory! Of course there are limits: at a certain point so much virtual memory may be in use that all the physical memory is consumed. This is not an exception: a lot of people experience this scenario sooner or later. At this point we are sooo close to a problem: not enough physical memory. There are a few possible solutions:
- Perhaps you can fine-tune your system so less virtual memory is used and everything keeps working the way it should.
- Or perhaps not; well, then it’s time to add extra RAM to your beloved machine, isn’t it? Remember that operating systems, CPU architectures and motherboards impose limits on the maximum amount of RAM, so perhaps this isn’t an option anymore.
- Perhaps you cannot add more RAM, or perhaps your wife doesn’t want you to spend another wad of banknotes on that damned computer. In that case you could look for an alternative, a simulated physical memory solution. Something that’s not really physical, but in a way not completely virtual either. I’m talking about the semi-physical memory (remember?), which is usually just a file on a disk, called a page file, paging file or swap file. Yup, we can tell the system that extra physical memory is available, not in RAM this time, but in a file on a disk. Actually, we can use a number of these page files to add extra physical memory, although most of the time one will suffice for the physical memory needs (which doesn’t mean that using more than 1 page file has no extra advantages, but more about this in the next section).
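The back-of-the-envelope sum from the 20-process example above, written out (the numbers are the purely hypothetical ones from the text):

```python
# 15 processes each using 10 MiB of their virtual memory,
# 5 processes each using 30 MiB: total *used* virtual memory.
used_mib = 15 * 10 + 5 * 30
print(used_mib)  # 300 MiB -- comfortably below the 2 GiB of RAM
```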
Page files and paging
Like I’ve just said, physical memory can be extended with a semi-physical memory type, usually just one or more files on one or more disk partitions (but no more than 1 per disk partition), called page files, paging files or swap files. The total amount of physical memory is the sum of the RAM and the sizes of the page files. Suppose we have 3 GiB of RAM, a 1 GiB page file on the C partition and a 2 GiB page file on the D partition; then we have a total of 3 + 1 + 2 = 6 GiB of physical memory in our system (at the OS level; at the hardware level it is still 3 GiB (RAM)).
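As a trivial sanity check of that sum:

```python
ram_gib = 3
page_files_gib = [1, 2]  # the C and D partition page files
total_physical_gib = ram_gib + sum(page_files_gib)
print(total_physical_gib)  # 6 GiB of physical memory at the OS level
```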
Suppose the third option (see the end of the previous section) is the only one appropriate for your situation. What happens with the virtual-physical mapping? The CPU subsystem will only map virtual memory addresses to really physical memory addresses, memory addresses in RAM, that is. Well, if you try to access something which resides only in one of those page files, the CPU subsystem will see that it’s not available in RAM (called a page miss) and will generate a so-called page fault, which is actually a kind of exception. Oh no, don’t worry, this is not bad; in this case a fault doesn’t mean you need to look into your event logs, run that crazy (although useful IMHO) debugger or tell your wife you really, really, really need that extra RAM module. Actually the word “trigger” would have been a better choice in a way: with the “fault” the CPU subsystem tells the OS that the desired piece of data is not residing in RAM. Ignoring memory-mapped files for now (see later), the OS knows that there are a few of those special page files and assumes that the data is probably present in one of those files (it should be, and if it’s not (because of a bug, for example) a real error will be generated, the one you maybe need that debugger for). Of course the OS won’t check each page file sequentially till it has found the requested data. No way, that would make your system terribly slow! Actually the OS has its own mapping table system for this, so it can just map the virtual memory address to the physical one, present in a page file. This means that a complete virtual-physical mapping can consist of two sub-mappings:
- First of all the virtual-physical mapping performed by the CPU subsystem to map a virtual address to a physical one in RAM
- If the virtual address cannot be mapped to a physical address in RAM, a page fault is generated and the OS performs another virtual-physical mapping to map the virtual address to a physical one in one of the page files. In the end every memory access should terminate at the RAM level, so the page is copied from the page file to RAM, the first mapping tables are updated and the request is redone; the second attempt will then succeed at the first step (so no page fault anymore).
If the OS has found the right spot in a page file, the data should be copied to a free space in RAM, because ultimately every memory access should take place in RAM. This also has an extra reason: the data was recently accessed, so there is a good chance it will be accessed again soon. Because it is copied to RAM, the next access attempt will go much faster, as no disk access will be necessary (supposing the piece of data is still available in RAM by the next access attempt, which depends on a number of factors). The process of copying something from page file to RAM is called paging in. It is possible that no free space is available in RAM; in this case the system will remove something from RAM, so a free space is created. If the piece to be removed has been modified in RAM (we then call it dirty) it will be copied to a page file, which is called paging out. The concept of paging in and out is called paging or swapping. When paging happens, a page file is always involved somehow. In a way paging is a waste of resources, especially time, because it’s slower due to the required disk access. Ideally paging shouldn’t happen: we should have enough RAM, so a page file isn’t needed and paging never happens. But of course practice is different most of the time: we don’t always have enough RAM, so page files and paging have to help us a little bit. Still, it can be considered not-so-ideal. And when far too much paging happens, the system spends more time moving pages around than doing real work; this is, indeed, called thrashing.
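All of this can be sketched in a little toy model (my own heavily simplified illustration, not how Windows actually implements it: one process, a FIFO eviction policy, and dictionaries standing in for RAM and the page file):

```python
from collections import OrderedDict

RAM_FRAMES = 2                      # toy RAM: room for only 2 pages
ram = OrderedDict()                 # page number -> (content, dirty?)
page_file = {}                      # page number -> content

def access(page, write=False, value=None):
    """Access a page; page it in on a fault, evicting (and writing
    back dirty pages) when RAM is full."""
    if page not in ram:                       # page miss -> page fault
        if page not in page_file:
            raise MemoryError("access violation: unused address")
        if len(ram) >= RAM_FRAMES:            # no free frame: evict one
            victim, (content, dirty) = ram.popitem(last=False)
            if dirty:                         # dirty pages are paged out
                page_file[victim] = content
        ram[page] = (page_file[page], False)  # page in, then redo access
    content, dirty = ram[page]
    if write:
        ram[page] = (value, True)             # modified in RAM: dirty
        return value
    return content

page_file[0], page_file[1], page_file[2] = "a", "b", "c"
access(0); access(1)                 # both paged in, RAM now full
access(0, write=True, value="A")     # page 0 is now dirty in RAM
access(2)                            # RAM full: page 0 evicted, paged out
print(page_file[0])                  # "A" -- the dirty copy was written back
```

Notice that the clean page 1 could simply be dropped on eviction, while the dirty page 0 had to be paged out first; that is exactly the distinction described above.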
One can wonder why swapping is also called paging. The first term seems very logical (swapping of data between RAM and (a) special file(s)), but the second one doesn’t seem to make sense. Or does it…? The fact is that a piece being swapped is called a page (like in a book) and its size is fixed: the page size. Actually the whole memory (virtual as well as physical) is divided into such fixed-size pages. If data is needed from a page file, the page or pages this data belongs to are paged in to RAM. This means that if a piece of data of 8 bytes should be accessed and those 8 bytes reside in a page file, not only those 8 bytes will be paged in to RAM, but the whole page those bytes belong to, or, if they span more than 1 page, every one of those pages.
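That “whole pages only” rule is simple integer arithmetic; a small sketch (assuming the common 4 KiB small-page size):

```python
PAGE_SIZE = 4096  # 4 KiB small pages

def pages_touched(address, size):
    """Return the page numbers an access of `size` bytes at `address`
    falls into; paging always moves these whole pages, never just the
    bytes themselves."""
    first = address // PAGE_SIZE
    last = (address + size - 1) // PAGE_SIZE
    return list(range(first, last + 1))

print(pages_touched(100, 8))    # [0]    -> entirely inside page 0
print(pages_touched(4092, 8))   # [0, 1] -> 8 bytes straddling two pages
```

So an unlucky 8-byte access can pull in 8 KiB from the page file.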
In Windows two page sizes are supported, a small one (small page) and a large one (large page), both having their advantages and disadvantages. By default small pages are used in Windows. x86 and x64 Windows use a small page size of 4 KiB, but on IA-64 Windows this is 8 KiB. Large pages are considerably bigger: 2 MiB on x64 (and on x86 with PAE), 4 MiB on x86 without PAE and 16 MiB on IA-64.
How paging precisely works depends on the algorithm the OS uses: the paging algorithm. One aspect is which page(s) should be paged out to disk if free RAM is needed but no longer available. Paging is an interaction between RAM and paging files. It means that not every piece of content of a process’ memory is actually in RAM at a certain moment (with enough RAM this is theoretically possible though). The part of a process’ virtual memory that is present in RAM is called the working set of the process: the set of content the system can work with directly, without preparation such as paging in.
Above you see a graphical example of a possible situation. Remember that this is a theoretical example, based on what we have learned so far. We take a look at two different processes: process A and process B. They each have a virtual memory address range from 0x00000000 till 0xFFFFFFFF, containing 4 GiB of virtual memory space. This is typically the situation on a 32 bit OS. The lowest addresses are put at the top, the highest addresses at the bottom. In the middle we see a view of the RAM, which in our example is also 4 GiB. Let’s consider only the addresses mentioned in the schema. For process A only 2 of the 6 addresses considered have a mapping to RAM. For process B only 1 virtual address has a mapping to RAM. For those addresses no page fault will be generated. Process B, though, has 2 addresses which have a mapping to a page file (red lines), so the CPU subsystem will generate a page fault first, as the content is not available in RAM. Virtual addresses without an arrow to a physical address or the term “PAGE FAULT” are unused addresses (well, actually there is still another possibility (memory-mapped files), but we’ll talk about this in a few sections), which means addressing them would cause a memory access violation exception (a violation of the memory access rules has occurred). The RAM also has a few locations no virtual address is mapped to. This is unused or free RAM. This seems illogical: there is free RAM and still page files are used for some addresses. Well, this is possible… The thing is that Windows will always try to keep a certain amount of RAM free. In another article I will try to explain this. Next to this there is also another situation where stuff can be available in a page file but not in RAM, even if loads of RAM are still as free as they can be (supposing a page file is available, of course). This will be explained in the next article I will write about the Windows memory architecture.
Note that I haven’t written down the content of the physical addresses in the schema, so please don’t assume that the content at these addresses is 0.
This picture shows the page file settings of a system. The intention of this figure is not to show the best page file settings; it’s just an example. The E and F partitions don’t have a page file assigned, the C partition does. That page file is 16484 MiB large. The picture shows there are a few ways to configure a page file’s size:
- System managed: you don’t configure an explicit size, the system determines the necessary size. The system does this in a dynamic way: the size can change over time. This is a good way to let the size depend on the needs: there will never be too much page file, nor a shortage. Be aware, though, that you need enough free disk space to serve the needs of the moment, and you never know for sure where this could end. The dynamic nature of this concept can also easily introduce disk fragmentation for the page file.
- An explicit size (custom size): you define an initial value and a maximum value. The advantage of a maximum value is that the page file can’t eat all your free disk space (except when that’s no more than the maximum value, of course). With these values you can configure a small page file with a minimum size to begin with and let the system increase it when needed, up to the maximum limit you’ve defined. In the figure you see that both values are the same, though, and this has a reason: the page file is created with its maximum value right from the start to avoid disk fragmentation.
Each choice has its advantages and you have to determine the right setting for yourself. If you don’t care about disk space, it’s probably best to set a fixed value for the page file size (initial = maximum): you can define a size large enough, and disk fragmentation is avoided (which means paging should be faster). The system managed choice is only useful when you definitely prefer support for very extreme and very rare unexpected peaks over a fragmentation-free page file.
If you have multiple physical hard disks, it is a good idea to put page files on the different disks, because the disks can be accessed simultaneously (this is not the case for different partitions on the same physical disk!). Let’s assume you would like to have a page file of 4 GiB and you have 2 physical hard disks A and B. In this case the best thing you can do is assign 1 page file of 2 GiB to a partition on disk A and another one of 2 GiB to a partition on disk B. Of course this doesn’t apply to partitions/disks with a shortage of disk space. The following points are interesting to understand and remember:
- Divide the page file “system” equally over the different physical disks. Use one partition (with enough disk space) per physical disk to put a page file on. If there is no partition with enough disk space on a certain physical disk, you may skip this point for that particular disk. Note that there can be other reasons to skip a disk as well: a disk could have a very special, dedicated purpose, for example, and the company’s policy forbids putting something else (like a page file) on such a disk. You could also assign a slightly smaller or larger part to a certain disk because that disk is clearly slower or faster.
- Put a page file at least on the system partition (usually the C partition), with a size at least large enough for the “desired” memory dumps (more about this in a few seconds). Memory dumps not only require a page file which is large enough, but also the presence of that page file on the system partition. This means there is a reason to use page files even in situations where you have more than enough RAM installed… I hope you’re not too confused by now?
First define your desired page file size, independent of the dump requirements. Then apply the first point: use at least the system partition to put a page file on. Then calculate the size the system partition page file needs for the dump requirements. If the system partition page file was smaller than this calculated value, replace its size with the calculated value. This procedure is the best way to combine both points: for normal paging behavior multiple page files are simultaneously available if multiple disks are usable for paging purposes, and for dumps a correct page file is available too.
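The procedure can be sketched as a small helper function (the function and parameter names are my own invention; sizes are in MiB; it assumes a full memory dump needs RAM + 1 MiB, the rule discussed further on):

```python
def plan_page_files(desired_mib, disks, system_disk, ram_mib):
    """Split the desired total page file size equally over the given
    physical disks, then enlarge the system-disk page file to the
    full-dump minimum (RAM + 1 MiB) if it is smaller."""
    share = desired_mib // len(disks)
    plan = {disk: share for disk in disks}       # equal split per disk
    dump_min = ram_mib + 1                       # full dump requirement
    if plan.get(system_disk, 0) < dump_min:
        plan[system_disk] = dump_min             # bump system page file
    return plan

# 4 GiB desired over two disks, 4 GiB of RAM, system partition on disk A:
print(plan_page_files(4096, ["A", "B"], "A", 4096))
# {'A': 4097, 'B': 2048}
```

Real planning would of course also check free disk space per partition, which I left out for brevity.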
Page files extend the physical memory in a simulated way, but are still behind the virtual memory. If you need more physical memory, page files are a very interesting option in some cases. But if you can handle all the needs with real RAM, well, that’s so much better. Does this mean page files are no longer useful when you do have enough RAM? No. First of all they can still be used as an extra margin: suppose you have 8 GiB of RAM for a system which needs only 3 GiB in normal situations and 4 GiB in expected peak situations. Then you have 4 GiB extra to catch the unexpected situations where, for whatever reason, for example 6 or 7 GiB is needed. You never know if this can happen and if it does, well, you’re well equipped, aren’t you? But of course you cannot exclude very high unexpected peak moments where, for example, 10 GiB is needed. Unexpected/unexplained peaks are mostly not a good thing, but that doesn’t mean they can’t happen or that you shouldn’t think of them. If they happen, your system should be able to handle them (except perhaps for very, very extreme situations; you can’t take care of everything). Well, a page file of a few gigabytes can add that extra layer of robustness, even if you already have many gigabytes of “spare” RAM. I must admit, though, that there is always some kind of limit: in a practical situation where you have 4 GiB of RAM “too much”, it would be better to use those 4 GiB somewhere else where they are needed more and replace them with a page file of 4 GiB. But suppose you have too much money; OK, leave the 4 GiB of RAM in that box. In practice you then won’t add an extra page file for the very small possibility of an unexpected, extremely high peak where you need more than 8 GiB. You see, it’s always useful, but normally you cover the reasonably realistic cases and don’t take care of the really extreme ones.
You should determine that boundary yourself and perhaps it depends on the importance of a specific server… Anyway, my point is that extra physical memory, whether it is RAM or page file, always has some extra benefit, however little it may be.
Secondly, even if you don’t need a page file considering every aspect, you probably will need one. Sigh… The thing is you need one for memory dumps. If Windows crashes, a dump file will be generated. Different kinds of dumps exist and you can configure the type and a few other related settings. Every kind of dump needs a page file and demands a certain location and minimum size for that page file. So if you would like to get dumps after a crash, you need a page file, and depending on the type of dump you wish, you have to give that page file a minimum size. If you don’t know what a memory dump is, it’s something very simple: a copy/view of a part of or the whole physical memory in RAM (note that last word!), so a dump file is a “dump” of RAM content into a file. Why only RAM? Page files extend physical memory, so they do count, right? Right, but eventually something that’s needed and resides only in a page file will be paged in to RAM and then used from there. Residing only in a page file thus means it hasn’t been used recently. A crash is directly caused by something that happened recently, and the original cause is probably something that happened (quite) recently too, although that’s not always the case, not at all. Anyway, RAM is considered “(quite) recently used memory, so it probably contains enough information to analyze the crash”. Of course a full memory dump (one of the three dump types) contains the whole RAM content and can be very big; actually it contains a lot of information that isn’t that useful for post-mortem debugging (that’s what they call this kind of debugging). The surer you want to be of finding a cause and perhaps a solution, the bigger you want your dump, even if the chance of success grows by only 0.1%. If you don’t need that extra chance of success and would rather save disk space, well, then you can choose another dump type, but I won’t go into detail in this article.
One last thing about this: if you choose a full dump, your page file has to be at least the size of your RAM + 1 MiB. That extra MiB contains the header of the dump. Don’t forget you also need free disk space for the memory dump file itself. So if you want full memory dumps, not only does your page file have to be at least the size of your RAM + 1 MiB, but your free disk space should also be at least that same value, at any moment!
This information is an important factor in determining which page file size option to choose. Now there is an extra reason to choose a page file size with an initial value large enough for the “desired” dump type, at least for the page file on the system partition. System managed sizes don’t guarantee they will be large enough at any given time.
In the figure just shown we see that the system is configured to generate a complete memory dump when the system (not just a process) crashes. You can also see that the location for the dump file is %SystemRoot% and the file name will be MEMORY.DMP. If such a file already exists, it will be overwritten, but this behavior can be changed.
It is important to know that the concept of virtual and physical memory doesn’t imply the paging mechanism. Virtual-physical mapping is perfectly possible without any form of paging and is a good way to introduce memory-related security and to be able to use more physical memory. Paging is a mechanism with which you can extend the physical memory; it works smoothly with the virtual-physical mapping principle and is indeed used on many, many systems. A good and quite complete story about virtual and physical memory does mention and explain the concept of paging; nevertheless it is something different, although this seems to confuse many people into believing paging is part of the virtual memory concept itself.
What does this all mean in practice? A practical summary and an example
A lot of questions arise now: how much RAM do we need? How much physical memory do we need in page files? Do we only need page file memory when we have a shortage of RAM? Well, first of all we should realize that the use of page files slows down the system: accessing RAM is much faster than accessing files on disk. If we need physical memory, we should prefer real RAM; it’s as simple as that. But… perhaps you cannot add more RAM because of a certain RAM limit, perhaps you cannot buy another RAM module because it’s not supported anymore, perhaps more RAM is financially not an option,… In these cases we need to use those page files. Remember though that the use of page files is slower and also costs something else: disk space. Secondly, you only need as much physical memory as is actually used. If your system constantly consumes 3 GiB of memory (physically), then normally there is no need to install, for example, 8 GiB. Don’t forget that your system can sometimes show peaks, though. It does make sense to install 8 GiB of RAM if the system consumes 7 GiB in normal situations and 8 GiB at peak times. Even then you should look further, as these peaks can be normal and expected, but a system can sometimes peak unexpectedly. In my last example you should consider 9, 10 or perhaps even 12 GiB of physical memory. Theoretically the more the better, although in practice you limit this to normal proportions. If we combine the fact that we should define the right amount of physical memory to keep serving all the memory needs, even at unexpected peaks, with the fact that RAM is better than page files, we could say that ideally you should have as much RAM as the physical memory you could possibly need, and if that is not achievable, as much RAM as needed in expected situations, or at least in average expected situations. The difference between the physical memory you could possibly need and the available RAM should be compensated by (a) page file(s).
Let’s say that we have a system that needs 3 GiB of physical memory in normal operation mode and has peaks from time to time where 4 GiB is needed. I’m paranoid and don’t exclude possible unexpected peaks of 6 GiB (based on experience, for instance). Ideally I would like to install 6 GiB of RAM, but the problem is I only have 4 GiB and my wallet is as empty as it can be. The 6 GiB I wanted minus the 4 GiB I have gives me a shortage of 2 GiB of RAM. I compensate this with a page file of 2 GiB, so I can catch even unexpected peaks (as long as they are not too extreme), although in that case my system will handle them a little bit slower because of the use of the slower page file.
In my example I have 3 disk partitions: C (my system partition) and D on physical disk 1, and E on physical disk 2. For performance reasons I put a page file of 1 GiB on partition E and one of 1 GiB on partition D. I think this will suffice, and I would like to avoid disk fragmentation so paging doesn’t become unnecessarily slow. So for both page files I choose a custom size setting, where I give the initial and maximum settings the same value: 1024 MiB (which is 1 GiB). But hey, I have forgotten one thing: there must be a page file on the system partition to be able to get memory dumps, so I change my configuration, remove the page file from D and put one of 1 GiB on C (initial and maximum value: 1024 MiB). But wait, I have forgotten something else: the page file must be large enough for my memory dumps. Is 1 GiB enough? I want full memory dumps, so the page file has to be at least the size of RAM plus 1 MiB. As I don’t need more, I change the page file size on the C partition to 4 GiB + 1 MiB = 4096 MiB + 1 MiB = 4097 MiB (as initial and maximum value).
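The sizing arithmetic above can be sketched as follows (the figures are just this example’s, not general rules):

```python
# Sketch of the page file sizing reasoning from the example above.
GIB = 1024**3
MIB = 1024**2

ram_installed   = 4 * GIB   # what the machine actually has
worst_case_need = 6 * GIB   # paranoid estimate including unexpected peaks

# The shortage is compensated by page file(s):
page_file_needed = worst_case_need - ram_installed
print(page_file_needed // GIB)   # → 2 (GiB of page file)

# A full memory dump needs a page file on the system partition of at
# least RAM size + 1 MiB:
dump_page_file_mib = (ram_installed + 1 * MIB) // MIB
print(dump_page_file_mib)        # → 4097 (MiB, set as both initial and maximum)
```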
Virtual memory of a process
One thing you are probably asking yourself is how much virtual memory a process has… I cannot give you a very short answer, I’m sorry: it depends on the CPU architecture the OS is meant for (more precisely, on the pointer width of that architecture). A 16 bit Windows uses 16 bit memory address pointers, so 2^16 (65 536) different addresses can be accessed, which gives 64 KiB of addressable memory. Wow, that seems like nothing! Don’t forget, though, that 16 bit Windows lies far behind us; it is still used in some places, but that can be considered rare to very rare. More modern, and still used a lot these days, are 32 bit Windows systems. These use address pointers which are 32 bits wide, so 4 GiB of virtual memory can be addressed (2^32 = 4 294 967 296). That seems acceptable for modern times, no? Most of the time it is, but experience shows that even 4 GiB is not always enough anymore. Luckily there are 64 bit Windows systems, and on those systems 64 bit memory address pointers are used, which means a virtual memory address range of 16 EiB (2^64 = 18 446 744 073 709 551 616) is available. No, that is not a typo, I will even repeat it for you: 16 EiB! If you don’t know where in space you are right now: after giga comes tera, then peta, then exa. Yes, you are in exa world, my friend! To be complete I should say that this will suffice for a “very long” time (although that is quite relative in IT World)… Supposing you could back it with enough physical memory (theoretically), you would have an extremely big problem if that were not enough for you: either something is very wrong with your application or system, or you are the type of person who is too demanding and will never be happy in his/her whole life.
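A quick sanity check of those numbers:

```python
# Addressable virtual memory per pointer width, as discussed above.
for bits in (16, 32, 64):
    print(bits, 2 ** bits)
# 16 bit  →  65 536 addresses        (64 KiB)
# 32 bit  →  4 294 967 296 addresses (4 GiB)
# 64 bit  →  18 446 744 073 709 551 616 addresses (16 EiB)

assert 2**16 == 64 * 1024        # 64 KiB
assert 2**32 == 4 * 1024**3      # 4 GiB
assert 2**64 == 16 * 1024**6     # 16 EiB
```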
You should realize that not the whole of this virtual memory is available to normal applications on 32 and 64 bit Windows systems. To explain this we should start with a certain feature of many CPUs, including all the CPUs supported by 32 and 64 bit Windows. These CPUs support different modes that code can run in. The modes differ in what they allow the code to do, so we can actually speak of security modes. Windows uses the least restrictive and the most restrictive of these, called user mode and kernel mode respectively. Kernel mode allows everything, and it is obvious that certain OS components and certain third party software (some drivers, for example) should be able to do whatever they need, so these components must run in kernel mode. Other software components should definitely not run in this mode, because they would have too many rights, raising the risk that (potentially malicious) software corrupts system data, destroys system data integrity, causes system crashes, etc. For those components Windows uses user mode. The term says it all: the mode wherein the normal user stuff happens. The term “kernel mode” is a little misleading: the mode is not only meant for the kernel itself, but also for other lower level OS components and certain third party components like certain drivers. “Kernel mode” should thus be interpreted as “the mode wherein, among other things, the kernel runs”.
A process consists of different components, some of them running in user mode, others running in kernel mode. For example: if you start Word from Microsoft Office, the Word executable and a lot of supporting DLLs run in user mode, as these components perform normal user level stuff. But of course these components rely on OS components which should also be loaded into and run in the process. Some of these components can do their job in user mode too, others need to run in kernel mode. This means that a process does some stuff in user mode, but also in kernel mode. That’s why every process has some user CPU time and kernel CPU time (although more correct terms would be “user mode CPU time” and “kernel mode CPU time”).
One of the restrictions of user mode is that only part of the virtual memory is available. In kernel mode the whole range is right there, but in user mode only a piece of it can be used. The reason is simple: user mode components must not influence (write to) the more sensitive, delicate memory parts used by kernel mode components, and they cannot even read from them (privacy assured). I don’t even dare to think what would happen if the Word executable could corrupt a data structure used by a low level OS component and affect the whole system’s behavior… No, it’s better to deny it access to these parts, and if it absolutely wants to try in a way the OS cannot prevent, well, then the CPU stops it and raises an exception of the memory access violation type. This exception is seen by the OS and the process is terminated (crashed) (the way this works is outside the scope of this article; perhaps I will write something about exception handling later). Seems bad, eh? Yup, but it’s still better than corrupted data or a crashing system.
Okay, that was the theory. Let’s go on: which portion of the virtual memory is accessible in user mode? 1 GiB? 2? And where? In the middle? Well, it depends… First of all, it depends on the CPU architecture the OS is meant for. Secondly, in some cases we can configure these values. Before giving you the numbers, let me first point out that the virtual memory is divided into so-called partitions. In my opinion we can speak of 2 partitions, one of which can be further divided into what I personally call sub-partitions. One partition is accessible in user and kernel mode and is called the User-Mode partition. The second partition is called the Kernel-Mode partition and is only reachable from kernel mode. The user mode partition consists of the following:
- the “NULL-Pointer Assignment” partition. This is a very small sub-partition in the very beginning of the virtual memory address range.
- something I call the “Usable User-Mode” partition. Actually this is the user mode part that is accessible in practice.
- the “64-KB Off-Limits” partition. This is also a very small sub-partition between the Usable User-Mode partition and the kernel mode partition.
The user mode partition is situated before the kernel mode partition.
The following tables give you an overview of the different partitions and sub-partitions, their ranges and their sizes, based on the CPU architecture the OS was meant for and possible configuration values.
| Partition | x86 32 bit | x86 32 bit with custom sizes | x64 64 bit | IA-64 64 bit |
|---|---|---|---|---|
| User-Mode (total) | 2 GiB | 2–3 GiB | 8 TiB | 7 TiB |
| — NULL-Pointer Assignment | 64 KiB | 64 KiB | 64 KiB | 64 KiB |
| — Usable User-Mode | 2 097 024 KiB (< 2 GiB) | 2 097 024 KiB (< 2 GiB) – 3 145 600 KiB (< 3 GiB) | 8 589 934 464 KiB (< 8 TiB) | 7 516 192 640 KiB (< 7 TiB) |
| — 64-KB Off-Limits | 64 KiB | 64 KiB | 64 KiB | 64 KiB |
| Kernel-Mode | 2 GiB | 1–2 GiB | 16 777 208 TiB | 16 777 209 TiB |
| TOTAL | 4 GiB | 4 GiB | 16 EiB | 16 EiB |
| Partition | x86 32 bit (default) | x86 32 bit with custom sizes (3 GiB case) |
|---|---|---|
| NULL-Pointer Assignment | 0x00000000 – 0x0000FFFF | 0x00000000 – 0x0000FFFF |
| Usable User-Mode | 0x00010000 – 0x7FFEFFFF | 0x00010000 – 0xBFFEFFFF |
| 64-KB Off-Limits | 0x7FFF0000 – 0x7FFFFFFF | 0xBFFF0000 – 0xBFFFFFFF |
| Kernel-Mode | 0x80000000 – 0xFFFFFFFF | 0xC0000000 – 0xFFFFFFFF |

The 64 bit layouts follow the same pattern, only at far higher addresses.
The NULL-Pointer Assignment partition is a special one. In a way it is part of the user mode partition, but it is reserved by the system and any attempt to access it raises an exception (memory access violation). This partition contains the null address (address 0x00000000 on 32 bit systems). If functions that operate on pointers fail, those pointers should contain a value indicating the failure, implying that the pointer is invalid. The null address is used for this, and in that case we speak of the null pointer. A very logical consequence is that this address must not be used for valid purposes. That’s why the system reserves it and raises an exception on any attempt to actually access it. The partition includes more than just the null address itself though; still, it’s a very small piece (64 KiB). The reason why the 65 535 addresses after the null address are also included in this “forbidden” zone is data alignment: to respect the rules of data alignment, a whole block of 65 536 addresses needs to be handled the same way. Of course this looks like a serious loss: to deny access to 1 address, access to 65 536 addresses has to be denied. Compared to the total size of the user mode partition it is not a big deal though, so don’t worry about it too much… Don’t think about what data alignment is just yet, we will deal with it later; just remember this concept is the reason why 64 KiB has to be reserved. The name of this memory “zone” is very clear: it’s the zone containing the address the so-called null pointer is assigned to (i.e. the null address).
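As a small illustrative sketch (the function name is mine, not a Windows API), classifying a 32 bit address against this forbidden zone:

```python
# Hypothetical helper: is a 32-bit address inside the NULL-Pointer
# Assignment sub-partition (0x00000000 - 0x0000FFFF)?
NULL_REGION_SIZE = 64 * 1024   # 64 KiB reserved for data-alignment reasons

def in_null_region(addr: int) -> bool:
    return 0 <= addr < NULL_REGION_SIZE

assert in_null_region(0x00000000)       # the null pointer itself
assert in_null_region(0x0000FFFF)       # last forbidden address
assert not in_null_region(0x00010000)   # first usable user-mode address
```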
The Usable User-Mode partition contains all the user mode stuff: every piece of code and data is loaded here, as long as it’s destined for user mode. This implies that all user mode exe and DLL modules are loaded here (third party and/or some OS components), as well as memory-mapped data files (more about them later). Also normal runtime data is located in this part, including some data structures put here and perhaps even belonging to the operating system (in the broadest meaning). An example of the last mentioned case is the Process Environment Block (PEB); this is a data structure the OS creates when a process is created and is put in the user mode partition of that process, because it doesn’t contain any secrets. On the other hand the OS also creates another data structure for the process in the kernel mode partition: this structure is a secured one (for completeness: it’s called a process kernel object, but a description of kernel objects is outside the scope of this article, so don’t worry if you don’t really understand what it is or does).
In default situations on 32 bit Windows the Kernel-Mode partition is as large as the User-Mode partition, even bigger than the usable part of the User-Mode partition. Does this make sense? Oh yes, sure! Perhaps you don’t expect this, but there is a lot to be put here, including kernel code, other lower level OS code, kernel mode drivers (some from third parties) and many, many kinds of data (like, for example, the process kernel objects I was talking about). In fact 2 GiB is quite tight… Sometimes it just isn’t enough, although most of the time, especially after fine-tuning, you should be able to get the machine up and running smoothly enough. In situations where the sizes of the user mode and kernel mode partitions are configured differently, you have to watch out for a kernel mode partition that is really too small: the smaller, the more problematic, especially when only 1 GiB is left… On 64 bit Windows the situation is much better, as an immense number of TiBs is available for kernel mode usage. This is more than enough; it’s so much that mathematically the partition can be said to be almost empty. It feels like freedom, doesn’t it? Don’t worry about data structures needed to maintain all that free memory space: they are not needed.
On 64 bit Windows you shouldn’t worry about kernel mode space, and the same goes for user mode space. It’s not a big deal that the sizes are not in proportion, as 8 TiB is still extremely large for any partition. Microsoft simply chose 8 TiB for user mode and gave all the rest to kernel mode. The only difference between x64 and IA-64 is the precise assignment: x64 has 1 TiB more user mode space than IA-64, while IA-64 has 1 TiB more kernel mode space than x64. But these values are so high that this counts as neither an advantage nor a disadvantage in practice. In a way it’s unimportant.
Figure 4 illustrates the different partitions and sub-partitions of the virtual memory of a process called A. The schema speaks for itself and gives a good graphical image of the partition structure of virtual memory. This schema only refers to the default 32 bit Windows situation, but is similar for the other scenarios. One remark: the size of the blocks in the schema has no meaning, so same-sized blocks don’t necessarily mean they have the same memory size!
Configuring partition sizes
By the way, what is all that talk about configuring partition sizes? Well, on 32 bit Windows you have the possibility to increase the default size (2 GiB) of the user mode partition up to a maximum of 3 GiB, which implies that the default size (2 GiB) of the kernel mode partition is decreased, down to a minimum of 1 GiB. The reason for customizing this is a shortage of user mode memory space. Some applications really need more than 2 GiB to be able to work, especially when performance and scalability are important. A typical example is a database server (Microsoft’s SQL Server, for example). It is not always an option, however, because it can result in a situation with too little kernel mode memory space. Normally applications run decently with 2 GiB of user mode space, and 2 GiB of kernel mode space should be enough too. I would never change this by default, because most of the time it won’t make a (real) difference at the user mode level and it will probably be a bad thing at the kernel mode level. Only in the few cases where more user mode space is really necessary would I change this value, and only to the point where at the very least just enough kernel mode space is left (possibly after a lot of fine-tuning). If you can’t reach such a point you’re stuck: you should really upgrade to 64 bit Windows or find a solution using more than 1 machine (things you should probably have done already, especially in situations like these). Of course there are always a few exceptional situations where these are no options for whatever reason, and then you find yourself at a dead end… Don’t panic, these situations are extremely rare. If it’s really, really necessary you can normally (but perhaps not literally always) rewrite applications, restructure your IT infrastructure (or part of it), use another database server, etc. Very big and difficult changes, a lot of stress and a far from ideal situation, but possible most of the time…
On 64 bit Windows there is no need to change the partition sizes, as they are already more than large enough, so on 64 bit Windows (x64 or IA-64) the sizes are as fixed as they can be.
Changing a partition size is not something you can do on the fly: if you reconfigure this setting, the system has to reboot. Actually the setting is part of the boot configuration, which contains settings that determine the way the system boots but also settings that must already be known at boot time, like the partition sizes. The boot configuration is found in the file boot.ini; starting from Windows Vista the boot configuration is called the boot configuration data (BCD) and is stored in a file named “BCD”.
In a boot.ini scenario, enabling a user mode partition size larger than 2 GiB is done through the switch /3GB. In Windows Server 2003 another switch, /userva (meaning “the amount of user mode virtual addresses”), can be combined with /3GB for more precise configuration. If you only mention the first switch, the user mode partition size will be 3 GiB. Combined with /userva you can define the desired user mode partition size yourself: you add the equal sign (“=”) to /userva, followed by the desired user mode partition size in MiB. The value following /userva= must be between 2048 (2 GiB) and 3072 (3 GiB).
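Putting the two switches together, a boot.ini entry might look like this (the ARC path and the 2900 value are purely illustrative):

```ini
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003" /fastdetect /3GB /userva=2900
```

With this entry the user mode partition would be 2900 MiB, leaving the remaining 1196 MiB of the 4 GiB address space for kernel mode.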
The BCD can be managed using the tool BCDEdit.exe. For changes to the configuration you should use it with the /set switch. After this switch you tell the tool which setting you have in mind; for the partition sizes you use the setting IncreaseUserVA, standing for “increase the amount of user mode virtual addresses”. The setting is followed by a value in MiB, determining the final size of the user mode partition. If you would like to increase the user mode partition to its maximum, this means you must execute the following command and reboot:
Bcdedit /set IncreaseUserVA 3072
When you change the user mode partition size, the kernel mode partition size is automatically changed too; the sum of both partition sizes will always be 4 GiB. If you would like to reset the setting to its default value (2048) you can do this in 2 ways:
Bcdedit /set IncreaseUserVA 2048
Bcdedit /deletevalue IncreaseUserVA
In the first case the setting stays explicitly present, but it has the same value as the default. In the second case the setting is removed, so the default value becomes active. Same practical result, with a slightly different nuance. Personally I prefer the second scenario when I want the default value, whatever that default is or might become, and I would only use the first scenario when I explicitly want this particular value, independent of what the default is or might become.
If you would like to know the current value of this or other BCD settings, perform the following:
Bcdedit /enum
All the settings will be enumerated, including the user mode partition size setting.
It won’t surprise you when I tell you that increasing your too small user mode partition of 2 GiB to, for instance, 3 GiB to make your application run smoothly again is not always a good option… I’ve already told you that this can cause problems for the kernel mode partition, and even if that’s not the case, another problem is waiting for you. You have to realize that increasing the partition wasn’t possible in early versions of Windows. Back then some developers noticed that the highest bit of address pointers was effectively unused: a set highest bit refers to kernel mode address space, and that space is a no go zone for user mode code. So some developers thought it a good idea to use this “unused” bit for other purposes: their code stored something in the highest bit, cleared the bit (set it to 0) before dereferencing, and then used the pointer as a normal address (in the user mode partition). This gave no real problems, but… As the need for bigger user mode partitions grew and grew, Microsoft introduced the user mode partition size setting. With a user mode partition larger than 2 GiB the highest bit must not be used for other purposes anymore, as it can be needed to address locations within the process’s own user mode partition! Code that does otherwise can get into big trouble in such a situation. Okay, if the code simply never touches that extra space, it just wastes the extra memory locations, which is inefficient but not a big problem either. Unfortunately it’s not that simple: the OS itself can use this extra user mode space. For example: when a new thread is created, a Thread Environment Block (TEB, just like the PEB) is placed in the user mode partition, intended for use by any module in the process. If the OS places that structure in the new extra user mode space and the application tries to access the TEB because it’s programmed to do so, the big problem occurs: the access will fail completely and the application could crash.
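The “bad code” pattern can be sketched like this (a Python model of the pointer arithmetic, not real Windows code; the addresses are made up):

```python
# Model of misusing bit 31 of a 32-bit pointer as a private flag, then
# clearing it before use. Harmless while user-mode addresses stay below
# 2 GiB; destructive once they can exceed it.
FLAG_BIT = 1 << 31

def tag(ptr):    return ptr | FLAG_BIT    # stash a flag in the "unused" bit
def untag(ptr):  return ptr & ~FLAG_BIT   # clear it again before dereferencing

low_addr  = 0x7FFE0000   # below 2 GiB: survives the round trip
high_addr = 0x9A000000   # above 2 GiB: only valid with an enlarged partition

assert untag(tag(low_addr)) == low_addr   # no harm done in the old world
assert untag(high_addr) != high_addr      # "clearing" corrupts a real address
print(hex(untag(high_addr)))              # → 0x1a000000, a wrong location
```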
Luckily Microsoft found a solution for this; you see, those guys are not that bad. Technically Windows gives every process the new user mode partition size, but at the same time it takes that extra space back (in a way…) for applications which don’t explicitly tell Windows they support a user mode partition larger than 2 GiB. This is done through a linker switch: /LARGEADDRESSAWARE, with which the application declares it is aware of large (> 2 GiB) address spaces and supports them. If Windows sees this linker switch in the exe image used to start a process, it just creates the user mode partition with the size it should have and that’s it. If Windows doesn’t see this switch, the same user mode partition is created, but the space above the default 2 GiB is put off limits (by reserving it and then never using it; more about reserving in a following article), so in practice such an application still gets the typical old 2 GiB of user mode partition. The conclusion of all this is that applications have to support the extra user mode partition size in code AND explicitly declare this with the linker switch. So if an application supports it in code but isn’t (re)linked with this switch, it doesn’t actually support it in the end. That’s why many applications will never take advantage of that new space, but on the other hand applications with “bad code” will still be able to run, without crashes.
It’s important to understand that only the base image of a process, the exe image used to start it, is checked for this linker switch. DLLs are not checked this way, so if the exe tells Windows that it supports the new extra user mode space, Windows assumes the DLLs support it too. If they don’t, crashes can still occur. Also, having the linker switch doesn’t technically prove your code can deal with more than 2 GiB. Suppose your code doesn’t support more than 2 GiB of user mode virtual memory but you provide the linker switch anyway: Windows will just believe you, but you are fooling yourself, because a crash will probably occur later. The linker switch is a way to inform Windows, but there is no proof the information is correct. Normally, though, applications having this switch can be trusted to speak the truth.
Notice that there is another disadvantage to increasing the user mode virtual address space: the absolute maximum amount of RAM supported drops from 128 GiB to 64 GiB.
Porting 32 bit applications to 64 bit
In the previous section I told you about “bad code” “misusing” the highest bit (bit number 31) of 32 bit address pointers. Well, suppose you port your 32 bit application to 64 bit: address pointers are now 64 bits wide. If your code does bad things to these pointers, like truncating the upper 32 bits or still misusing bit number 31, your application has a big chance of failing in 64 bit mode. Again, those guys at Microsoft have a good heart from time to time and they took care of this as well: a very, very big user mode partition of 8 TiB is given to every process, but if the application doesn’t have the /LARGEADDRESSAWARE linker switch, everything above 2 GiB is reserved so that in practice only the bottom 2 GiB is used. If the switch is present, the full 8 TiB is usable. Again, Windows only checks this switch on the executable. DLLs are not checked, but of course they too have to use address pointers correctly or crashes can occur.
The linker switch doesn’t limit its meaning to “I don’t misuse bit number 31”; it actually means “I don’t misuse anything in those address pointers, so I am able to deal with every virtual address situation”. Perhaps it sounds weird, but this implies that even applications originally designed for 64 bit Windows need to state this switch explicitly, or they run as in an old default 32 bit environment, at least concerning the user mode partition that is usable in practice. I could easily state the following: every application needs this switch or it will run the old way, so support for every new address situation must be declared explicitly by an application, even in default 64 bit Windows environments.
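A minimal model of the truncation problem mentioned above (the address is made up):

```python
# Model of "bad" ported code truncating a 64-bit pointer to 32 bits.
ptr64 = 0x00000123456789AB        # a plausible 64-bit user-mode address
truncated = ptr64 & 0xFFFFFFFF    # keeping only the low 32 bits

assert truncated != ptr64         # the upper bits are gone for good
print(hex(truncated))             # → 0x456789ab, pointing somewhere else entirely
```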
You have read it a couple of times now: shared content. It seems impossible, as I’ve told you that every process has its own private virtual memory. Still, sharing is possible while, purely technically, the virtual memory stays private. Huh? Keep thinking of virtual memory as a completely private zone with private addresses, but notice that this privacy only exists at the virtual level. Shared content should be seen as using the same data from physical memory: it is shared at the physical level in such a way that it can be referred to from different (private) virtual memories. This mechanism is made possible by the mapping tables. If the mapping tables map virtual address 0x24675A4B in process A to physical address 0x11145C3E in RAM (for simplicity let’s suppose the content is available in RAM), but also map virtual address 0x529A00BB in process B (although theoretically it could again be process A) to this same physical address, then that physical address is referred to twice (it could be more) and one can say that the content at this physical address is shared among certain processes. The result is that both virtual addresses, although different and/or from different virtual memories, “have” the same, shared content. If the content is changed through the first virtual address, you can see the change by reading through the second virtual address. This way processes can share data or code, and this mechanism can also be used as a form of interprocess communication (IPC), communication between processes.
IPC is not the only purpose of sharing content between processes. It is also useful for saving physical memory. Suppose the same data (to keep it simple, read-only data) should be available in different processes. It could be loaded by every process in a non-shared way and everything would work fine, but we would waste physical memory: if 5 processes each load the same 1 MiB of data, that 1 MiB of data is present 5 times in physical memory. What a waste! With sharing, everything still seems private at the virtual level, but behind this layer the data is shared: only 1 MiB is needed in physical memory and we have saved 4 MiB for other purposes! Perhaps you’re not very impressed, because it’s only 4 MiB, but believe me, in a realistic scenario this mechanism is used so much that you save quite a lot of memory! Perhaps the most important example is all the kernel mode data which should be available from within all processes; yes, that process kernel object I was talking about is accessible from within every process, but it is found only once in physical memory.
The shared content concept isn’t limited to data created at runtime. It can also be used for data loaded from files (although after loading it’s technically the same as data created at runtime): the first process loads it the normal way and later processes simply use the same data from physical memory. And hey, it can be used for code too! Many executables (.exe files), dynamic-link libraries (DLLs, .dll files), drivers (.sys files),… are needed by different processes. If they have to be present in physical memory only once, we save a lot of space!
Do you have any idea where this ends? Let’s think about it: a big part of the kernel mode code and data should actually be available in different processes (because it is needed or could be needed), and we have learned that kernel mode space is protected from use by user mode code. So why hasn’t Microsoft shared the whole kernel mode partition? Well, actually they did! Yes, you will find, for example, the same content at virtual address 0x8444A09D (a location in the Kernel-Mode partition) in process A, at virtual address 0x8444A09D in process B, at virtual address 0x8444A09D in process C, and so on. Although virtually these are different Kernel-Mode partitions, they refer to the same locations (and thus content) in physical memory. We can say that the kernel mode content is shared. Strictly speaking this doesn’t apply to the (virtual) kernel mode address space itself, because that is virtual and thus private. You will read about a “shared kernel mode address space” in many places (articles, manuals, forums,…), but theoretically that is not completely correct: what is meant is that the kernel mode content is shared (at the physical level).
Content can be private or shared. The amount of a process’ virtual memory referring to private content is called private bytes. The amount of a process’ virtual memory referring to shared content is called shared bytes. These could be quite interesting values sometimes…
A page file is a file onto which virtual addresses can be mapped, so it is a file that (virtual) memory is mapped onto. We can call such a file a memory-mapped file. A page file is one example of this type, but there are others. For instance, we could have a file containing code and/or data that should be loaded into a process. With the knowledge you have right now, you would expect such a file to be loaded into RAM and/or page file, with virtual addresses mapped to the RAM/page file zones representing that file. This is perfectly possible, but it is not the only possibility. If that file exists on some medium, we can map the virtual addresses referring to the file’s content directly onto this file. This means that no RAM or page file is needed in the first place, although when the content is actually accessed, RAM is always the ultimate place where it must reside (so some kind of paging in from memory-mapped file to RAM happens). This is exactly the same concept as with a page file, with one difference: the file is still just a normal file; it is not officially considered an extension of physical memory. The memory-mapped mechanism is just a way of keeping some file content outside physical memory as long as it is not really needed (while still being loaded into a process and addressable in virtual memory). The page file, on the other hand, is conceptually considered a “simulated” extension of the physical memory and can be updated with all kinds of memory content, while other memory-mapped files only ever contain their own private code and/or data. The executable of Word, for instance, will never be used as a memory-like place for content that has nothing to do with the Word executable; the page file is used for such purposes.
All this means that not every piece of content in virtual memory is actually present in physical memory: memory-mapped files (excluding page files) form a second “back memory” next to the physical memory (including page files) towards the virtual memory. On the other hand we can call memory-mapped files, including page files, some kind of back memory towards the RAM, because the RAM is the memory type where content is ultimately located when accessed.
Files containing code, and perhaps also some data, are loaded by Windows as memory-mapped files by default (we’re talking about executables, DLLs,…). The first time such a file is loaded this way it is of course not shared: only 1 process refers to it. The second time, Windows again loads it as a memory-mapped file and automatically we have a shared situation, as described in the previous section. Starting Word, for instance, means that the virtual addresses for Word are mapped onto the exe image itself. Only when a certain part of it is needed is it “paged in” to RAM; this of course happens when starting the application, but normally not for the whole image. This means we save physical memory space, but it also means the application can start faster, because the whole image does not have to be copied to physical memory first.
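Windows implements this with its own file-mapping machinery; as a rough, portable analogue, Python’s mmap module shows the idea of addressing file content through a mapping instead of reading it all in up front:

```python
import mmap
import os
import tempfile

# Illustration only: map a file and read through the mapping. The file's
# bytes become addressable without copying the whole file into a buffer.
fd, path = tempfile.mkstemp()
os.write(fd, b"hello, mapped world")
os.close(fd)

with open(path, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
        # Slicing the mapping touches only the pages actually needed.
        print(m[:5])   # → b'hello'

os.remove(path)
```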
Data files (perhaps containing code, which should be considered here as a form of data) that are opened by your code in the ordinary way are loaded traditionally, so not as memory-mapped files, unless you explicitly ask for it (then we speak of memory-mapped data files).
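To make this concrete, here is a small sketch of explicitly memory-mapping a data file. On Windows this is done with the CreateFileMapping/MapViewOfFile API; I use Python's cross-platform `mmap` module here (which wraps that API on Windows) purely for illustration, and the file name is of course just a made-up example. The key point: no read call is issued for the file's content, yet its bytes are addressable, and pages are faulted into RAM only when the mapped region is actually touched.

```python
import mmap
import os
import tempfile

# Create a small data file to map (hypothetical example data).
path = os.path.join(tempfile.mkdtemp(), "data.bin")
with open(path, "wb") as f:
    f.write(b"Hello, memory-mapped world!")

# Map the whole file into the process's virtual address space.
with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as mm:
        # Touching bytes 7..13 pages in only the page(s) containing them.
        print(mm[7:13].decode())   # prints "memory"
        # Writes go into the mapping and are flushed back to the
        # file itself, not to the page file.
        mm[0:5] = b"HELLO"
        mm.flush()

with open(path, "rb") as f:
    print(f.read().decode())       # prints "HELLO, memory-mapped world!"
```

Note how the file is its own backing store: the modified bytes end up in `data.bin`, exactly the "private code and/or data" behavior described above.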
Notice that memory-mapping files and shared content are two different concepts: a file which is memory-mapped more than once automatically becomes shared content, but shared content can also exist without memory-mapped files. Also notice that shared content and the existence of memory-mapped files have the following consequence: the sum of all used virtual memory on a system can be larger than the used physical memory (RAM + page file(s)). Keep this in mind…
When images (executables and DLLs) are run from a floppy disk, the OS loads them in the traditional way, so in this situation no memory-mapping of the images occurs. This wasn't an arbitrary choice by Microsoft; there was a very good reason for it. Often a setup program runs from a floppy disk, but after a while another floppy disk has to be inserted to load other components. If the image were not present in physical memory, the chance would be very high that the first disk had to be re-inserted many, many times to read from it. Note that this isn't the case for images running from other removable (or similar) media like CD-ROMs or network drives, as these are usually large enough (though not always…). For CD-ROMs and network drives this default behavior can be overridden if the image is linked with the linker switch /SWAPRUN:CD or /SWAPRUN:NET, respectively. The switch tells the system that the image should be loaded into physical memory so it runs completely within the "swap system" (and if there are no page files, that means in RAM only).
On the Performance tab of Task Manager you can monitor a few memory-related values. Some of them should be understandable by now, so I describe them briefly:
- Physical Memory (K) – Total: amount of RAM (see figure below in red)
- Commit Charge (K) – Limit: amount of physical memory (RAM + page file sizes) (see figure below in blue)
- Physical Memory (K) – Available: amount of free RAM
- Commit Charge (K) – Total: amount of used physical memory
- Commit Charge (K) – Peak: maximum amount of used physical memory so far
The Kernel Memory (K) – Total value is not the full amount of kernel memory used; it is actually only a subset of it. The System Cache value is not completely accurate either: it contains more than just the system cache. The meaning of the values not listed above will be covered in later parts.
RAM of a system
The amount of RAM that can be installed depends on a number of factors:
- The address width of the CPU architecture
- Motherboard limits
- OS limits
- Special features and configurations
With a 16 bit CPU architecture 2^16 (65,536) addresses are available, which corresponds to 64 KiB of addressable RAM. On 32 bit systems an address range of 2^32 addresses is available, which means 4 GiB of addressable RAM. On 64 bit systems we get a real wow feeling: 16 EiB of RAM can be addressed! These values are only based on the theory behind the CPU architectures; motherboards and operating systems (including Windows) can lower these limits.
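The limits above follow directly from the address width: an n-bit address can distinguish 2^n bytes. A quick check, using Python purely as a calculator:

```python
# Binary unit sizes.
KiB, GiB, EiB = 2**10, 2**30, 2**60

# Addressable bytes for an n-bit address space: 2**n.
print(2**16 // KiB)  # 64  -> 64 KiB (16 bit)
print(2**32 // GiB)  # 4   -> 4 GiB (32 bit)
print(2**64 // EiB)  # 16  -> 16 EiB (64 bit)
```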
On 32 bit systems 4 GiB of RAM soon proved not to be enough. Later 32 bit systems therefore included an extra hardware feature called Physical Address Extension (PAE), which uses a few tricks to allow more than 4 GiB of RAM. This hardware feature is not enough on its own, though: Windows has to be configured to use it. If you have 4 GiB or less of physical memory installed you don't need to activate PAE in Windows; if you have more than 4 GiB of RAM installed you have to activate PAE in Windows to benefit from it. The PAE setting is a boot configuration setting, as its value must be known at boot time; if you reconfigure it, you have to reboot the system to (de)activate PAE. The setting can be controlled through the boot.ini file, but starting from Windows Vista boot settings are part of the boot configuration data (BCD).
Prior to Vista, open the boot.ini file, which contains the boot configuration settings, and configure PAE behavior as desired by using the switch /PAE, /NOPAE or no switch at all. More information about this will be covered in another article.
In the BCD scenario, use BCDEdit.exe to control PAE behavior with the /set switch. The setting is called pae and takes one of the following values:
- Default: the default behavior, depending on other factors
- ForceEnable: enables PAE
- ForceDisable: disables PAE
For instance, to enable PAE explicitly use the following command:
Bcdedit /set pae ForceEnable
Remember that the use of PAE requires some extra maintenance structures, but hey, you need to sacrifice something, right? Anyway, don’t worry, the system takes care of all this.
There are also situations where PAE is useful on a 32 bit system with 4 GiB of RAM or even less. This may sound like the statement of a fool, but actually the opposite is true: sometimes Windows doesn't see the whole 4 GiB (or even less) of RAM. For example, you can have 4 GiB of RAM installed and PAE disabled, yet the system only sees 3.25 GiB; with PAE enabled, Windows will see the full 4 GiB. The reason for this is memory-mapped I/O: I/O devices being addressed by using memory addresses. At boot time this memory-mapped I/O is assigned from the 4 GiB address downwards until the assignment for I/O devices is finished; the remaining addresses are then assigned to the RAM. If memory-mapped I/O takes the addresses from the 4 GiB address down to just above the 3.25 GiB address, then only the addresses from the null address up to the 3.25 GiB address are available for RAM. Addresses above the 4 GiB boundary are used for the remainder of the 4 GiB of RAM (0.75 GiB). Without PAE Windows cannot see above the 4 GiB boundary, so that 0.75 GiB of RAM is never seen or used. With PAE this problem is solved; this is why PAE is even useful on systems without more than 4 GiB of RAM. Now, who is the fool?
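The 3.25 GiB example boils down to simple arithmetic. The sketch below (Python as a calculator again) uses the article's example reservation of 0.75 GiB for memory-mapped I/O; the actual amount varies per machine depending on the installed devices:

```python
GiB = 2**30

installed_ram = 4 * GiB          # physical RAM modules
address_limit = 4 * GiB          # 32 bit limit without PAE
mmio_reserved = int(0.75 * GiB)  # example MMIO reservation below 4 GiB

# Without PAE, RAM can only occupy what MMIO leaves free below 4 GiB.
visible_without_pae = min(installed_ram, address_limit - mmio_reserved)
print(visible_without_pae / GiB)  # 3.25

# With PAE, the displaced 0.75 GiB is remapped above the 4 GiB boundary.
visible_with_pae = installed_ram
print(visible_with_pae / GiB)     # 4.0
```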
Memory-mapped I/O is not memory: it just means that memory addressing is used to address I/O devices. A part of the memory addresses is not used for memory purposes, just like in the "good" old days of DOS (remember that memory zone for devices?). Well, it's just the same; it hasn't disappeared! And you should be careful, because sometimes a lot of your memory can be "truncated" this way. Just think of graphics cards with 1 GiB of video memory… and then we haven't even dealt with the other I/O devices yet. So always check the amount of RAM installed, the amount of RAM Windows sees and your PAE settings to configure your system correctly.
The table below shows how much RAM a certain version/edition of Windows supports.
| Windows version/edition | Maximum RAM | Notes |
| --- | --- | --- |
| Windows NT | 4 GiB | |
| Windows 2000 Professional | 4 GiB | |
| Windows 2000 Server | 4 GiB | |
| Windows 2000 Advanced Server | 8 GiB | |
| Windows 2000 Datacenter Server | 32 GiB | |
| Windows Server 2003 Standard Edition (x86) | 4 GiB | |
| Windows Server 2003 Standard Edition (64 bit) | 16 GiB | |
| Windows Server 2003 Standard Edition (64 bit) – SP1/SP2/R2 | 32 GiB | |
| Windows Server 2003 Enterprise Edition (x86) | 32 GiB | |
| Windows Server 2003 Enterprise Edition (x86) – SP1/SP2/R2 | 64 GiB | |
| Windows Server 2003 Enterprise Edition (64 bit) | 64 GiB | |
| Windows Server 2003 Enterprise Edition (64 bit) – SP1/R2 | 1 TiB | |
| Windows Server 2003 Enterprise Edition (64 bit) – SP2 | 2 TiB | |
| Windows Server 2003 Datacenter Edition (x86) | 128 GiB | |
| Windows Server 2003 Datacenter Edition (64 bit) | 512 GiB | |
| Windows Server 2003 Datacenter Edition (64 bit) – SP1/R2 | 1 TiB | |
| Windows Server 2003 Datacenter Edition (64 bit) – SP2 | 2 TiB | |
| Windows Server 2003 Web Edition | 2 GiB | x86 only |
| Windows Small Business Server 2003 | 4 GiB | x86 only |
| Windows Storage Server 2003 | 4 GiB | x86 only |
| Windows Storage Server 2003 Enterprise Edition | 8 GiB | x86 only |
| Windows Compute Cluster Server 2003 | 32 GiB | 64 bit only |
| Windows XP (x86) | 4 GiB | |
| Windows XP Starter Edition (x86) | 512 MiB | |
| Windows XP (64 bit) | 128 GiB | |
| Windows Home Server | 4 GiB | x86 only |
| Windows Vista Ultimate (x86) | 4 GiB | |
| Windows Vista Ultimate (64 bit) | 128 GiB | |
| Windows Vista Enterprise (x86) | 4 GiB | |
| Windows Vista Enterprise (64 bit) | 128 GiB | |
| Windows Vista Business (x86) | 4 GiB | |
| Windows Vista Business (64 bit) | 128 GiB | |
| Windows Vista Home Premium (x86) | 4 GiB | |
| Windows Vista Home Premium (64 bit) | 16 GiB | |
| Windows Vista Home Basic (x86) | 4 GiB | |
| Windows Vista Home Basic (64 bit) | 8 GiB | |
| Windows Vista Starter | 1 GiB | x86 only |
| Windows Server 2008 Standard (x86) | 4 GiB | |
| Windows Server 2008 Standard (64 bit) | 32 GiB | |
| Windows Server 2008 Enterprise (x86) | 64 GiB | |
| Windows Server 2008 Enterprise (64 bit) | 2 TiB | |
| Windows Server 2008 Datacenter (x86) | 64 GiB | |
| Windows Server 2008 Datacenter (64 bit) | 2 TiB | |
| Windows Web Server 2008 (x86) | 4 GiB | |
| Windows Web Server 2008 (64 bit) | 32 GiB | |
| Windows Small Business Server 2008 (x86) | 4 GiB | |
| Windows Small Business Server 2008 (64 bit) | 32 GiB | |
| Windows Server 2008 HPC Edition | 128 GiB | 64 bit only |
| Windows Server 2008 for Itanium-Based Systems | 2 TiB | 64 bit only |
| Windows Server 2008 Foundation | 8 GiB | 64 bit only, R2 only |
| Windows 7 Ultimate (x86) | 4 GiB | |
| Windows 7 Ultimate (64 bit) | 192 GiB | |
| Windows 7 Enterprise (x86) | 4 GiB | |
| Windows 7 Enterprise (64 bit) | 192 GiB | |
| Windows 7 Business (x86) | 4 GiB | |
| Windows 7 Business (64 bit) | 192 GiB | |
| Windows 7 Home Premium (x86) | 4 GiB | |
| Windows 7 Home Premium (64 bit) | 16 GiB | |
| Windows 7 Home Basic (x86) | 4 GiB | |
| Windows 7 Home Basic (64 bit) | 8 GiB | |
| Windows 7 Starter (x86) | 2 GiB | |
| Windows 7 Starter (64 bit) | 2 GiB | |
Some last words…
It is also important to understand that the term physical memory is used in two ways: with a narrow meaning and with a broad meaning. In the first case it refers to the literally physical memory, the RAM. In the second case it refers to the RAM plus the semi-physical memory that simulates physical memory but still isn't part of the virtual memory concept. In other words: the combination of RAM and page files. Some tools, guides, articles,… (perhaps even most of them) mean the narrow case when they use the term physical memory; others mean the broad one. From a hardware perspective you should use the term in its narrow form; at the OS level, though, I think it's better to use it in its broad form: in the end it is the sum of RAM and page files that counts as the ("physical") memory behind the virtual memory for applications and the OS. One very concrete example: on the Performance tab of the Task Manager in Windows you see the Total value in the "Physical Memory (K)" section: this value refers to the RAM. In the "Commit Charge (K)" section the Limit value contains the size of the physical memory in its broad form: RAM size plus page file sizes.
You have probably noticed that memory is usually presented graphically in an unusual way: the lower addresses are at the top and the higher addresses at the bottom. This means that when partitions are shown, the user mode partition appears to be located above the kernel mode partition. There is indeed something illogical about this representation, but on the other hand it does make sense: we read from top to bottom, so the first things come first, and thus the lower addresses/partitions are shown first (= at the top). The only "problem" is that this is how we order things textually, not graphically. Still, I would advise everyone to accept that this kind of graphical representation is the default for memory schemas and not try to change the world; that would only confuse people more. Let's just get used to it. The figure below shows what I have just tried to explain.
At the beginning of this article I told you that physical memory is shared. There are systems where this is still the case, but where the sharing needs to be nuanced. You have more than likely heard of the term NUMA. NUMA systems consist of multiple CPUs and memories organized in nodes; a node is a subset of CPUs and memories that belong together. A node's memory will normally be used by that same node's CPUs; only when a node has a shortage of memory will the NUMA system allow it to use memory from another node. As this memory access is not uniform, it won't surprise you that NUMA stands for Non-Uniform Memory Access. You see: the physical memory is shared, but at the same time divided.
The following table shows a few Windows versions/editions which do or don’t support NUMA.
| Windows version/edition | NUMA support | Notes |
| --- | --- | --- |
| Windows Server 2003 Standard Edition | – | |
| Windows Server 2003 Enterprise Edition | YES | |
| Windows Server 2003 Datacenter Edition | YES | |
| Windows Server 2003 Web Edition | – | x86 only |
One last thing: memory access violations. Exceptions of this type will be raised in different situations, including:
- User mode code trying to access a kernel mode partition address
- Code trying to access an address in the NULL-Pointer Assignment, including the null address
- Code trying to access an unused address
- Code trying to perform an operation (for example a write) on an address that is not allowed by its protection settings (more about this in part 2)
Also remember that a process only accesses its own virtual memory and normally doesn’t access physical memory directly.
In this article I wanted to explain the basics of the memory architecture of Windows. This is of course just the beginning of the story, and for many of you it doesn't contain anything really new, although I still think some details should be clearer by now. For others this has hopefully been a good introduction before diving into more in-depth material, which I hope to provide in later parts. Those parts may deal with reserving memory, committing memory, protection attributes, regions, blocks and data alignment. This part will also be re-edited from time to time, as I already have some things in mind I'd like to add in future revisions; for example, I would like to tell you more about paging algorithms. In a way this means I don't consider the first version of my articles completely finished, only a first version to start with.
I hope you have learned a lot and that everything was clear. You are welcome to give feedback of every kind (as long as it's true, of course). Please let me know if you think something is incorrect, should be explained more or better, or should be added. Thanks for your time, and hopefully see you at the next version or in part 2.
Christophe Heereman aka Pedro