Start Lecture #19
The idea is to enable a program to execute even if only the active
portion of its address space is memory resident.
That is, we are to swap in and swap out portions of a program.
In a crude sense this could be called
The memory management unit is a piece of hardware in the processor that translates virtual addresses (i.e., the addresses in the program) into physical addresses (i.e., real hardware addresses in the memory). The memory management unit is normally referred to by its abbreviation, the MMU.
(The idea of an MMU and virtual to physical address translation applies equally well to non-demand paging, and in olden days the meaning of paging and virtual memory included that case as well. Sadly, in my opinion, modern usage of the terms paging and virtual memory is limited to fetch-on-demand memory systems, typically some form of demand paging.)
The idea is to fetch pages from disk to memory when they are referenced, hoping to get the most actively used pages in memory. The choice of page size is discussed below.
Demand paging is very common. More complicated variants, multi-level paging and paging plus segmentation (both of which we will discuss), have been used, and the former dominates modern operating systems.
Started by the Atlas system at Manchester University in the 60s (Fotheringham).
Each PTE continues to contain the frame number if the page is loaded. But what if the page is not loaded (i.e., the page exists only on disk)?
The PTE has a flag indicating if the page is loaded (can think of the X in the diagram on the right as indicating that this flag is not set). If the page is not loaded, the location on disk could be kept in the PTE, but normally it is not (discussed below).
When a reference is made to a non-loaded page (sometimes called a non-existent page, but that is a bad name), the system has a lot of work to do. We give more details below.
Really not done quite this way.
There is always a free frame because ...
pages them out (writing them back to disk if dirty).
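To make the sequence concrete, here is a minimal self-contained sketch of the work done on a fault. The frame pool, the in-memory "disk", and the always-evict-frame-0 victim choice are toy stand-ins invented for the sketch, not any real OS's code, and a real OS blocks the faulting process while the disk I/O completes.

/* Toy sketch of page-fault handling.  NPAGES, NFRAMES, the in-memory "disk",
 * and the always-evict-frame-0 victim choice are stand-ins for illustration. */
#include <stdio.h>
#include <string.h>
#include <stdbool.h>

#define NPAGES  8
#define NFRAMES 4
#define PAGESZ  16

typedef struct { int frame; bool present, dirty; } pte_t;

static pte_t pte[NPAGES];
static char  disk[NPAGES][PAGESZ];                         /* backing store  */
static char  memory[NFRAMES][PAGESZ];                      /* physical frames */
static int   frame_to_page[NFRAMES] = { -1, -1, -1, -1 };  /* -1 = frame free */

static int choose_victim(void) { return 0; }   /* stand-in for a real PRA    */

static void handle_page_fault(int page)
{
    int frame = -1;
    for (int f = 0; f < NFRAMES; f++)          /* is any frame free?         */
        if (frame_to_page[f] < 0) { frame = f; break; }

    if (frame < 0) {                           /* no: evict a victim         */
        frame = choose_victim();
        int victim = frame_to_page[frame];
        if (pte[victim].dirty)                 /* copy back only if dirty    */
            memcpy(disk[victim], memory[frame], PAGESZ);
        pte[victim].present = false;
    }
    memcpy(memory[frame], disk[page], PAGESZ); /* "fetch" the page from disk */
    frame_to_page[frame] = page;
    pte[page] = (pte_t){ .frame = frame, .present = true, .dirty = false };
    /* the faulting instruction is then restarted */
}

int main(void)
{
    for (int p = 0; p < NPAGES; p++)           /* touch more pages than frames */
        if (!pte[p].present) handle_page_fault(p);
    printf("page %d is now in frame %d\n", NPAGES - 1, pte[NPAGES - 1].frame);
    return 0;
}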
A discussion of page tables is also appropriate for (non-demand)
paging, but the issues are more acute with demand paging and the
tables can be much larger.
We must be able to access the page table very quickly since it is needed for every memory access.
Unfortunate laws of hardware.
So we can't just say: put the page table in fast processor registers, let it be huge, and sell the system for $1000.
The simplest solution is to put the page table in main memory. However, it seems to be both too slow and too big.
The fix is to use multiple levels of mapping. We will see two examples below: multilevel page tables and segmentation plus paging.
Each page has a corresponding page table entry (PTE). The information in a PTE is used by the hardware and its format is machine dependent; thus the OS routines that access PTEs are not portable. Information set by and used by the OS is normally kept in other OS tables.
(Actually some systems, those with software TLB reload, do not require hardware access to the page table.)
The page table is indexed by the page number; thus the page number is not stored in the table.
The following fields are often present in a PTE.
Why are the disk addresses of non-resident pages
not in the PTE?
On most systems the PTEs are accessed by the hardware automatically on a TLB miss (see immediately below). Thus the format of the PTEs is determined by the hardware and contains only information used on page hits. Hence the disk address, which is only used on page faults, is not present.
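As an illustration only (PTE formats are machine dependent and this matches no particular processor), the commonly cited fields might be packed as follows; note that a disk address is not among them.

/* Illustrative only: no real machine's PTE layout.  The disk address of a
 * non-resident page is kept in a separate OS table, not in the PTE. */
#include <stdio.h>
#include <stdint.h>

typedef struct {
    uint32_t frame      : 20;  /* frame number, meaningful only if present   */
    uint32_t present    : 1;   /* is the page loaded in memory?              */
    uint32_t protection : 3;   /* e.g., read / write / execute permissions   */
    uint32_t modified   : 1;   /* M (dirty): written since it was loaded     */
    uint32_t referenced : 1;   /* R (used): accessed recently                */
    uint32_t nocache    : 1;   /* disable caching, e.g., for device pages    */
    uint32_t unused     : 5;
} pte_t;

int main(void)
{
    printf("one PTE occupies %zu bytes\n", sizeof(pte_t));   /* 4 bytes here */
    return 0;
}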
As mentioned above, the simple scheme of storing the page table in its entirety in central memory alone appears to be both too slow and too big. We address both these issues here, but note that a second solution to the size question (segmentation) is discussed later.
Note: Tanenbaum suggests that associative memory and translation lookaside buffer are synonyms. This is wrong.
Associative memory is a general concept of which translation
lookaside buffer is a specific example.
An associative memory is a content addressable memory. That is, you access the memory by giving the value of some field (called the index) and the hardware searches all the records and returns the record whose index field contains the requested value.
Name  | Animal | Mood     | Color
======+========+==========+======
Moris | Cat    | Finicky  | Grey
Fido  | Dog    | Friendly | Black
Izzy  | Iguana | Quiet    | Brown
Bud   | Frog   | Smashed  | Green
If the index field is Animal and Iguana is given, the associative memory returns
Izzy | Iguana | Quiet | Brown
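In software the same lookup would be a (slow) linear scan over the records; the point of an associative memory is that the hardware makes all the comparisons at once. A tiny sketch of the idea:

/* Content-addressable lookup done by linear scan in software, just to show
 * the idea; an associative memory does all the comparisons in parallel. */
#include <stdio.h>
#include <string.h>

struct record { const char *name, *animal, *mood, *color; };

static const struct record table[] = {
    { "Moris", "Cat",    "Finicky",  "Grey"  },
    { "Fido",  "Dog",    "Friendly", "Black" },
    { "Izzy",  "Iguana", "Quiet",    "Brown" },
    { "Bud",   "Frog",   "Smashed",  "Green" },
};

/* Return the record whose index field (Animal) matches, or NULL. */
static const struct record *lookup(const char *animal)
{
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
        if (strcmp(table[i].animal, animal) == 0)
            return &table[i];
    return NULL;
}

int main(void)
{
    const struct record *r = lookup("Iguana");
    if (r) printf("%s | %s | %s | %s\n", r->name, r->animal, r->mood, r->color);
    return 0;
}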
A Translation Lookaside Buffer or TLB is an associative memory where the index field is the page number. The other fields include the frame number, dirty bit, valid bit, etc.
Note that, unlike the situation with the page table, the page number is stored in the TLB; indeed it is the index field.
A TLB is small and expensive but at least it is fast. When the page number is in the TLB, the frame number is returned very quickly.
On a miss, a TLB reload is performed. The page number is looked up in the page table. The record found is placed in the TLB and a victim is discarded (not really discarded, dirty and referenced bits are copied back to the PTE). There is no placement question since all TLB entries are accessed at the same time and hence are equally suitable. But there is a replacement question.
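Here is a small software model of that lookup-then-reload logic. It is not any real MMU: the TLB size, the round-robin victim choice, and the data layout are assumptions made for the sketch, and a hardware TLB probes every entry in parallel rather than looping.

/* Software model of a tiny fully-associative TLB; not any real MMU. */
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define TLB_SIZE 4

typedef struct { uint32_t frame; bool present, referenced, dirty; } pte_t;
typedef struct { uint32_t page, frame; bool valid, referenced, dirty; } tlb_entry_t;

static tlb_entry_t tlb[TLB_SIZE];
static int next_victim;                     /* trivial round-robin replacement */

static uint32_t tlb_translate(pte_t *page_table, uint32_t page)
{
    for (int i = 0; i < TLB_SIZE; i++)      /* hit: the fast, common path    */
        if (tlb[i].valid && tlb[i].page == page) {
            tlb[i].referenced = true;
            return tlb[i].frame;
        }

    /* miss: TLB reload from the page table */
    int v = next_victim;
    next_victim = (next_victim + 1) % TLB_SIZE;
    if (tlb[v].valid) {                     /* copy R and M back to the victim's PTE */
        page_table[tlb[v].page].referenced |= tlb[v].referenced;
        page_table[tlb[v].page].dirty      |= tlb[v].dirty;
    }
    tlb[v] = (tlb_entry_t){ .page = page, .frame = page_table[page].frame,
                            .valid = true, .referenced = true };
    return tlb[v].frame;
}

int main(void)
{
    pte_t pt[8] = { [3] = { .frame = 5, .present = true } };
    printf("page 3 -> frame %u (miss, TLB reloaded)\n", tlb_translate(pt, 3));
    printf("page 3 -> frame %u (hit)\n",                tlb_translate(pt, 3));
    return 0;
}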
As the size of the TLB has grown, some processors have switched from single-level, fully-associative, unified TLBs to multi-level, set-associative, separate instruction and data TLBs.
We are actually discussing caching, but using different terminology.
The words above assume that, on a TLB miss, the MMU (i.e., hardware and not the OS) loads the TLB with the needed PTE and then performs the virtual to physical address translation. This implies that the OS need not be concerned with TLB misses.
Some newer systems do this in software, i.e., the OS is involved.
Recall the diagram above showing the data and stack growing towards each other. Most of the virtual memory is the unused space between the data and stack regions. However, with demand paging this space does not waste real memory. But the single large page table does waste real memory.
The idea of multi-level page tables (a similar idea is used in Unix i-node-based file systems, which we study later when we do I/O) is to add a level of indirection and have a page table containing pointers to page tables.
This idea can be extended to three or more levels. The largest I know of has four levels. We will be content with two levels.
For a two level page table the virtual address is divided into three pieces:

+-----+-----+-------+
| P#1 | P#2 | Offset|
+-----+-----+-------+
Do an example on the board
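For instance (a made-up 32-bit example, split 10/10/12, i.e., 4KB pages and 1024-entry tables; no particular machine), the pieces are extracted like this:

/* Worked example with an assumed 32-bit address split 10 / 10 / 12:
 * 4KB pages, 1024 entries per table.  Purely illustrative, no real machine. */
#include <stdio.h>
#include <stdint.h>

#define OFFSET_BITS 12
#define INDEX_BITS  10

int main(void)
{
    uint32_t va = 0x00403ABC;                         /* some virtual address */

    uint32_t offset = va & ((1u << OFFSET_BITS) - 1); /* low 12 bits          */
    uint32_t p2     = (va >> OFFSET_BITS) & ((1u << INDEX_BITS) - 1);
    uint32_t p1     =  va >> (OFFSET_BITS + INDEX_BITS);

    printf("P#1 = %u, P#2 = %u, offset = 0x%x\n", p1, p2, offset);

    /* The lookup itself: P#1 indexes the top-level table, whose entry points
     * to a second-level table; P#2 indexes that table; the PTE found there
     * gives the frame, and the offset is appended unchanged:
     *
     *   frame = level2[ level1[p1] ][ p2 ].frame;
     *   physical_address = (frame << OFFSET_BITS) | offset;
     */
    return 0;
}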
The VAX used a 2-level page table structure, but with some wrinkles (see Tanenbaum for details).
Naturally, there is no need to stop at 2 levels. In fact the SPARC has 3 levels and the Motorola 68030 has 4 (and the number of bits of Virtual Address used for P#1, P#2, P#3, and P#4 can be varied). More recently, x86-64 also has 4 levels.
For many systems the virtual address range is much bigger than the size of physical memory. In particular, with 64-bit addresses, the range is 2^64 bytes, which is 16 million terabytes. If the page size is 4KB and a PTE is 4 bytes, a full page table would be 16 thousand terabytes.
A two level table would still need 16 terabytes for the first level table, which is stored in memory. A three level table reduces this to 16 gigabytes, which is still large and only a 4-level table gives a reasonable memory footprint of 16 megabytes.
An alternative is to instead keep a table indexed by frame number. The content of entry f contains the number of the page currently loaded in frame f. This is often called a frame table as well as an inverted page table.
Now there is one entry per frame. Again using 4KB pages and 4 byte PTEs, we see that the table would be a constant 0.1% of the size of real memory.
But on a TLB miss, the system must search the inverted page table, which would be hopelessly slow except that some tricks are employed. Specifically, hashing is used.
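A toy version of the trick: keep one entry per frame, plus a small hash table whose buckets chain together the frames holding pages that hash to the same value. The sizes and hash function here are arbitrary choices made for the sketch.

/* Toy hashed inverted page table: one entry per frame, plus a hash table of
 * chain heads so a page number can be found without scanning every frame. */
#include <stdio.h>
#include <stdint.h>

#define NFRAMES  8
#define NBUCKETS 8

typedef struct {
    uint32_t page;      /* page currently loaded in this frame          */
    int      next;      /* next frame on the same hash chain, or -1     */
    int      used;
} iframe_t;

static iframe_t frames[NFRAMES];
static int bucket[NBUCKETS];                /* head frame of each chain, or -1 */

static unsigned hash(uint32_t page) { return page % NBUCKETS; }

static void insert(uint32_t page, int frame)
{
    unsigned b = hash(page);
    frames[frame] = (iframe_t){ .page = page, .next = bucket[b], .used = 1 };
    bucket[b] = frame;
}

static int find_frame(uint32_t page)        /* -1 means page not resident */
{
    for (int f = bucket[hash(page)]; f >= 0; f = frames[f].next)
        if (frames[f].used && frames[f].page == page)
            return f;
    return -1;
}

int main(void)
{
    for (int b = 0; b < NBUCKETS; b++) bucket[b] = -1;
    insert(1234, 5);                        /* page 1234 loaded into frame 5 */
    printf("page 1234 is in frame %d\n", find_frame(1234));
    printf("page 999  is in frame %d\n", find_frame(999));
    return 0;
}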
Also it is often convenient to have an inverted table as we will see when we study global page replacement algorithms. Some systems keep both page and inverted page tables.
These are solutions to the replacement question. Good solutions take advantage of locality when choosing the victim page to replace.
Locality means that a recently referenced word, and words near it, are likely to be referenced again soon. So it is good to bring in the entire page on a miss and to keep the page in memory for a while.
When programs begin, there is no history, so there is nothing on which to base locality. At this point the paging system is said to be undergoing a cold start.
Programs can also have phase changes, in which the set of pages
referenced changes abruptly (similar to a cold start).
An example occurs in your linker lab when you finish pass 1
and start pass 2.
At the point of a phase change, many page faults occur because
locality is poor.
Pages belonging to processes that have terminated are of course perfect choices for victims.
Pages belonging to processes that have been blocked for a long time are good choices as well.
A lower bound on performance. Any decent scheme should do better.
Replace the page whose next reference will be furthest in the future.
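This choice is unrealizable (it requires knowing the future), but it is easy to state in code when the entire reference string is handed to us up front, which is what makes it a useful yardstick in simulations; a sketch:

/* Sketch of the unrealizable "furthest future reference" victim choice:
 * given the full future reference string, pick the resident page whose next
 * use is latest (or that is never used again). */
#include <stdio.h>

/* refs[pos..n-1] is the future; resident[] holds the currently loaded pages. */
static int choose_victim(const int *refs, int n, int pos,
                         const int *resident, int nres)
{
    int victim = 0, victim_dist = -1;
    for (int i = 0; i < nres; i++) {
        int dist = n - pos;                 /* "never used again" distance   */
        for (int j = pos; j < n; j++)
            if (refs[j] == resident[i]) { dist = j - pos; break; }
        if (dist > victim_dist) { victim_dist = dist; victim = i; }
    }
    return victim;                          /* index into resident[] to evict */
}

int main(void)
{
    int refs[]     = { 1, 2, 3, 4, 1, 2, 5 };
    int resident[] = { 1, 2, 3 };           /* 3 frames, about to reference 4 */
    /* at position 3 we need a frame for page 4; the optimal victim is page 3,
       which is never referenced again */
    printf("evict resident[%d]\n", choose_victim(refs, 7, 3, resident, 3));
    return 0;
}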
Divide the frames into four classes and make a random selection from the lowest nonempty class.
Assumes that in each PTE there are two extra flags R (for referenced; sometimes called U, for used) and M (for modified, often called D, for dirty).
NRU is based on the belief that a page in a lower priority class is a better victim.
We again have the prisoner problem: We do a good job of making little ones out of big ones, but not as good a job on the reverse. We need more resets. Therefore, every k clock ticks, the OS resets all R bits.
Why not reset M as well?
Answer: If a dirty page has a clear M, we will not copy the page back to disk when it is evicted, and thus the only accurate version of the page will be lost!
I suppose one could have two M bits one accurate and one reset, but I don't know of any system (or proposal) that does so.
What if the hardware doesn't set these bits?
Answer: The OS can use tricks. When the bits are reset, the PTE is made to indicate that the page is not resident (which is a lie). On the ensuing page fault, the OS sets the appropriate bit(s).
We ignore the tricks and assume the hardware does set the bits.
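A sketch of the selection itself, assuming the R and M bits are available in the PTE (class = 2R + M, then a random pick from the lowest nonempty class); the data layout is invented for the example.

/* Sketch of NRU: class = 2*R + M, pick a random resident page from the
 * lowest nonempty class.  Layout and sizes are toy choices. */
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

typedef struct { bool present, referenced, modified; } pte_t;

static int nru_victim(pte_t *pte, int npages)
{
    int candidates[4][64], count[4] = { 0 };  /* toy limit: 64 pages per class */

    for (int p = 0; p < npages; p++) {
        if (!pte[p].present) continue;
        int cls = 2 * pte[p].referenced + pte[p].modified;
        candidates[cls][count[cls]++] = p;
    }
    for (int c = 0; c < 4; c++)                /* lowest nonempty class wins   */
        if (count[c] > 0)
            return candidates[c][rand() % count[c]];
    return -1;                                 /* no resident page at all      */
}

int main(void)
{
    pte_t pte[4] = {
        { .present = true, .referenced = true,  .modified = true  },  /* class 3 */
        { .present = true, .referenced = true,  .modified = false },  /* class 2 */
        { .present = true, .referenced = false, .modified = true  },  /* class 1 */
        { .present = false },
    };
    printf("victim: page %d\n", nru_victim(pte, 4));   /* page 2 (class 1) */
    return 0;
}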
Simple but poor since the usage of the page is ignored.
Belady's Anomaly: Can have more frames yet generate more faults. An example is given later.
The natural implementation is to have a queue of nodes each pointing to a resident page (i.e., pointing to a frame).
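A minimal sketch of that queue as a circular buffer of frame numbers; the oldest entry is always the victim, whether or not the page is still in use:

/* Sketch of the FIFO queue of resident pages: a circular buffer of frames,
 * oldest at the head.  The victim is always the head, regardless of use. */
#include <stdio.h>

#define NFRAMES 3

static int queue[NFRAMES];                  /* frame numbers in load order */
static int head, count;

static int fifo_victim(void)                /* evict the oldest frame      */
{
    int frame = queue[head];
    head = (head + 1) % NFRAMES;
    count--;
    return frame;
}

static void fifo_loaded(int frame)          /* a page was just loaded here */
{
    queue[(head + count) % NFRAMES] = frame;
    count++;
}

int main(void)
{
    for (int f = 0; f < NFRAMES; f++) fifo_loaded(f);
    printf("victim: frame %d\n", fifo_victim());   /* frame 0, loaded first */
    return 0;
}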
Similar to the FIFO PRA, but altered so that a page recently referenced is given a second chance.
Same algorithm as 2nd chance, but a better implementation for the nodes: Use a circular list with a single pointer serving as both head and tail.
Let us begin by assuming that the number of pages loaded is constant.
On a page fault, we replace the oldest, unreferenced page by a new page.
Thus, when the number of loaded pages (i.e., frames) is constant,
the algorithm is just like 2nd chance except that only the one
pointer (the clock
hand) is updated.
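With a constant number of frames the whole algorithm is only a few lines. In this sketch the R bits are modeled as a plain array; in reality they live in the PTEs and are set by the MMU on each reference.

/* Sketch of the clock algorithm over a fixed set of frames. */
#include <stdio.h>
#include <stdbool.h>

#define NFRAMES 4

static bool referenced[NFRAMES];    /* R bit of the page in each frame       */
static int  hand;                   /* the clock hand: both head and tail    */

static int clock_victim(void)
{
    for (;;) {
        if (!referenced[hand]) {            /* oldest unreferenced page found */
            int victim = hand;
            hand = (hand + 1) % NFRAMES;    /* new page takes the victim slot */
            return victim;
        }
        referenced[hand] = false;           /* give it a second chance        */
        hand = (hand + 1) % NFRAMES;
    }
}

int main(void)
{
    referenced[0] = referenced[1] = true;   /* frames 0 and 1 recently used */
    printf("victim: frame %d\n", clock_victim());   /* frame 2 */
    return 0;
}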
How can the number of frames change for a fixed machine? Presumably we don't (un)plug DRAM chips while the system is running?
The number of frames can change when we use a so called
local algorithm—discussed later—where the victim
must come from the frames assigned to the faulting process.
In this case we have a different frame list for each process.
At times we want to change the number of frames assigned to a given
process, and hence the number of frames in a given frame list changes.
How does this affect 2nd chance?