Operating Systems

Start Lecture #7

Remark: Lab 3 (banker) assigned. It is due in 2 weeks.

Homework: What is the difference between a physical address and a virtual address?

When is address translation performed?

  1. At compile time
  2. At link-edit time (the linker lab)
  3. At load time
  4. At execution time

Extensions

  1. Dynamic Loading
  2. Dynamic Linking.
Note: I will place ** before each memory management scheme.

3.1 No Memory Management

The entire process remains in memory from start to finish and does not move.

The sum of the memory requirements of all jobs in the system cannot exceed the size of physical memory.

Monoprogramming

The good old days when everything was easy.

Running Multiple Programs Without a Memory Abstraction

This can be done via swapping if you have only one program loaded at a time. A more general version of swapping is discussed below.

One can also support a limited form of multiprogramming, similar to MFT (which is described next). In this limited version, the loader relocates all relative addresses, thus permitting multiple processes to coexist in physical memory the way your linker permitted multiple modules in a single process to coexist.

**Multiprogramming with Fixed Partitions

Two goals of multiprogramming are to improve CPU utilization, by overlapping CPU and I/O, and to permit short jobs to finish quickly.

3.2 A Memory Abstraction: Address Spaces

3.2.1 The Notion of an Address Space

Just as the process concept creates a kind of abstract CPU to run programs, the address space creates a kind of abstract memory for programs to live in.

This does for processes what you so kindly did for modules in the linker lab: permit each to believe it has its own memory starting at address zero.

Base and Limit Registers

Base and limit registers are additional hardware, invisible to the programmer, that supports multiprogramming by automatically adding the base address (i.e., the value in the base register) to every relative address when that address is accessed at run time.

In addition, the relative address is compared against the value in the limit register and, if larger, the process is aborted since it has exceeded its memory bound. Compare this to your error checking in the linker lab.

The base and limit registers are set by the OS when the job starts.
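
A minimal sketch of the translation and bound check, written in Python; the register values below are invented for illustration.

    def translate(rel_addr, base, limit):
        """Translate a relative (virtual) address using base/limit registers."""
        # The hardware checks the address against the limit and aborts the
        # process on a violation, then adds the base to form the physical address.
        if rel_addr >= limit:
            raise MemoryError("address exceeds memory bound; process aborted")
        return base + rel_addr

    # Example: a job loaded at physical address 50000 with a 20000-word region.
    print(translate(1234, base=50000, limit=20000))   # -> 51234
    # translate(25000, base=50000, limit=20000) would abort the process.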

3.2.2 Swapping

Moving an entire process between disk and memory is called swapping.

Multiprogramming with Variable Partitions

Both the number and size of the partitions change with time.

Homework: A swapping system eliminates holes by compaction. Assume a random distribution of holes and data segments, assume the data segments are much bigger than the holes, and assume a time to read or write a 32-bit memory word of 10ns. About how long does it take to compact 128 MB? For simplicity, assume that word 0 is part of a hole and the highest word in memory contains valid data.

3.2.3 Managing Free Memory

MVT Introduces the Placement Question

That is, which hole (partition) should one choose?

Homework: Consider a swapping system in which memory consists of the following hole sizes in memory order: 10K, 4K, 20K, 18K, 7K, 9K, 12K, and 15K. Which hole is taken for successive segment requests of

  1. 12K
  2. 10K
  3. 9K
for first fit? Now repeat the question for best fit, worst fit, and next fit.

Memory Management with Bitmaps

Divide memory into blocks and associate a bit with each block to indicate whether the corresponding block is free or allocated. To find a chunk of N blocks, one needs to find N consecutive bits indicating free blocks.

The only design question is how much memory one bit represents.
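
A minimal sketch of bitmap allocation in Python, assuming one bit per block; the helper names are made up.

    def find_run(bitmap, n):
        """Return the index of the first run of n consecutive free (0) bits, or None."""
        run_start, run_len = 0, 0
        for i, bit in enumerate(bitmap):
            if bit == 0:
                if run_len == 0:
                    run_start = i
                run_len += 1
                if run_len == n:
                    return run_start
            else:
                run_len = 0
        return None

    def allocate(bitmap, n):
        """Mark a run of n free blocks as allocated; return its starting block."""
        start = find_run(bitmap, n)
        if start is None:
            raise MemoryError("no hole of %d consecutive blocks" % n)
        for i in range(start, start + n):
            bitmap[i] = 1
        return start

    bitmap = [1, 1, 0, 0, 0, 1, 0, 0, 0, 0]   # 1 = allocated, 0 = free
    print(allocate(bitmap, 3))                # -> 2 (blocks 2-4 were free)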

Memory Management with Linked Lists

Instead of a bit map, use a linked list of nodes where each node corresponds to a region of memory either allocated to a process or still available (a hole).
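
Walking such a list is how the placement policies above (first fit, best fit, etc.) are implemented. Here is a sketch of first fit and best fit over a list of hole sizes; the sizes are invented, not those of the homework.

    def first_fit(holes, request):
        """Index of the first hole large enough for the request, or None."""
        for i, size in enumerate(holes):
            if size >= request:
                return i
        return None

    def best_fit(holes, request):
        """Index of the smallest hole large enough for the request, or None."""
        candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
        return min(candidates)[1] if candidates else None

    holes = [10, 4, 20, 18]          # hole sizes in KB, in memory order (invented)
    print(first_fit(holes, 12))      # -> 2 (the 20K hole)
    print(best_fit(holes, 12))       # -> 3 (the 18K hole)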

Memory Management using Boundary Tags

See Knuth, The Art of Computer Programming vol 1.

MVT also introduces the Replacement Question

That is, which victim should we swap out?

This is an example of the suspend arc mentioned in process scheduling.

We will study this question more when we discuss demand paging in which case we swap out only part of a process.

Considerations in choosing a victim

Notes:
  1. The schemes presented so far have had two properties:
    1. Each job is stored contiguously in memory. That is, the job is contiguous in physical addresses.
    2. Each job cannot use more memory than exists in the system. That is, the virtual address space cannot exceed the physical address space.

  2. Tanenbaum now attacks the second item. I wish to do both and start with the first.

  3. Tanenbaum (and most of the world) uses the term paging to mean what I call demand paging. This is unfortunate as it mixes together two concepts.
    1. Paging (dicing the address space) to solve the placement problem and essentially eliminate external fragmentation.
    2. Demand fetching, to permit the total memory requirements of all loaded jobs to exceed the size of physical memory.

  4. Most of the world uses the term virtual memory as a synonym for demand paging. Again I consider this unfortunate.
    1. Demand paging is a fine term and is quite descriptive.
    2. Virtual memory should be used in contrast with physical memory to describe any virtual to physical address translation.

** (non-demand) Paging

Simplest scheme to remove the requirement of contiguous physical memory.

Example: Assume a decimal machine with page size = frame size = 1000.
Assume PTE 3 contains 459.
Then virtual address 3372 corresponds to physical address 459372.
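
The same arithmetic as a tiny Python sketch for this invented decimal machine:

    PAGE_SIZE = 1000                  # decimal machine: page size = frame size = 1000
    page_table = {3: 459}             # PTE 3 contains frame number 459

    def translate(vaddr):
        page, offset = divmod(vaddr, PAGE_SIZE)
        frame = page_table[page]      # would fault if the page were not present
        return frame * PAGE_SIZE + offset

    print(translate(3372))            # -> 459372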

Properties of (non-demand) paging (without segmentation).

Address translation

Choice of page size is discussed below.

Homework: Using the page table of Fig. 3.9, give the physical address corresponding to each of the following virtual addresses.

  1. 20
  2. 4100
  3. 8300

3.3 Virtual Memory (meaning Fetch on Demand)

The idea is to enable a program to execute even if only the active portion of its address space is memory resident. That is, we are to swap in and swap out portions of a program. In a crude sense this could be called automatic overlays.

Advantages

Disadvantages

The Memory Management Unit and Virtual to Physical Address Translation

The memory management unit is a piece of hardware in the processor that translates virtual addresses (i.e., the addresses in the program) into physical addresses (i.e., real hardware addresses in the memory). The memory management unit is abbreviated as, and normally referred to as, the MMU.

(The idea of an MMU and virtual to physical address translation applies equally well to non-demand paging and in olden days the meaning of paging and virtual memory included that case as well. Sadly, in my opinion, modern usage of the terms paging and virtual memory is limited to fetch-on-demand memory systems, typically some form of demand paging.)

** 3.3.1 Paging (Meaning Demand Paging)

The idea is to fetch pages from disk to memory when they are referenced, hoping to get the most actively used pages in memory. The choice of page size is discussed below.

Demand paging is very common: More complicated variants, multilevel paging and paging plus segmentation (both of which we will discuss), have been used and the former dominates modern operating systems.

Started by the Atlas system at Manchester University in the 60s (Fotheringham).

Each PTE continues to contain the frame number if the page is loaded. But what if the page is not loaded (i.e., the page exists only on disk)?

The PTE has a flag indicating if the page is loaded (can think of the X in the diagram on the right as indicating that this flag is not set). If the page is not loaded, the location on disk could be kept in the PTE, but normally it is not (discussed below).

When a reference is made to a non-loaded page (sometimes called a non-existent page, but that is a bad name), the system has a lot of work to do. We give more details below.

  1. Choose a free frame, if one exists.
  2. What if there is no free frame?
    Make one!
    1. Choose a victim frame. This is the replacement question about which we will have more to say later.
    2. Write the victim back to disk if it is dirty.
    3. Update the victim PTE to show that it is not loaded.
    4. Now we have a free frame.
  3. Copy the referenced page from disk to the free frame.
  4. Update the PTE of the referenced page to show that it is loaded and give the frame number.
  5. Do the standard paging address translation (p#,off)→(f#,off).
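
A sketch of these steps in Python; the PTE class and the disk/victim helpers are simplified stand-ins, and (as noted next) real systems do not do it quite this way.

    from dataclasses import dataclass

    @dataclass
    class PTE:
        frame: int = -1
        valid: bool = False
        dirty: bool = False

    def write_page_to_disk(page):        # stand-in for the real disk write
        print("writing dirty page", page, "back to disk")

    def read_page_from_disk(page, frame):
        print("reading page", page, "from disk into frame", frame)

    def pick_victim(page_table):
        # The replacement question; here simply the first loaded page.
        return next(p for p, pte in page_table.items() if pte.valid)

    def handle_page_fault(page, page_table, free_frames):
        # 1-2. Choose a free frame, making one by evicting a victim if necessary.
        if free_frames:
            frame = free_frames.pop()
        else:
            victim = pick_victim(page_table)
            if page_table[victim].dirty:
                write_page_to_disk(victim)    # write back only if dirty
            page_table[victim].valid = False  # victim is no longer loaded
            frame = page_table[victim].frame
        # 3-4. Copy in the referenced page and update its PTE.
        read_page_from_disk(page, frame)
        page_table[page].frame = frame
        page_table[page].valid = True
        return frame                          # 5. (p#,off)->(f#,off) can now proceed

    page_table = {0: PTE(frame=7, valid=True, dirty=True), 1: PTE()}
    print(handle_page_fault(1, page_table, free_frames=[]))   # evicts page 0, then prints 7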

Really not done quite this way

Homework: 9.

3.3.2 Page tables

A discussion of page tables is also appropriate for (non-demand) paging, but the issues are more acute with demand paging and the tables can be much larger.
Why?

  1. The total size of the active processes is no longer limited to the size of physical memory. Since the total size of the processes is greater, the total size of the page tables is greater and hence concerns over the size of the page table are more acute.
  2. With demand paging an important question is the choice of a victim page to page out. Data in the page table can be useful in this choice.

We must be able to access the page table very quickly since it is needed for every memory access.

Unfortunate laws of hardware.

So we can't just say, put the page table in fast processor registers, and let it be huge, and sell the system for $1000.

The simplest solution is to put the page table in main memory. However it seems to be both too slow and too big.

  1. This solution seems too slow since all memory references now require two references.
  2. We will soon see how to speed up the references and, for many programs, eliminate the extra reference by using a TLB.
  3. This solution seems too big.
  4. A fix is to use multiple levels of mapping. We will see two examples below: multilevel page tables and segmentation plus paging.

Structure of a Page Table Entry

Each page has a corresponding page table entry (PTE). The information in a PTE is used by the hardware and its format is machine dependent; thus the OS routines that access PTEs are not portable. Information set by and used by the OS is normally kept in other OS tables.

(Actually some systems, those with software TLB reload, do not require hardware access to the page table.)

The page table is indexed by the page number; thus the page number is not stored in the table.

The following fields are often present in a PTE; a sketch of one possible layout follows the list.

  1. The valid bit. This tells if the page is currently loaded (i.e., is in a frame). If set, the frame number is valid. It is also called the presence or presence/absence bit. If a page is accessed whose valid bit is unset, a page fault is generated by the hardware.

  2. The frame number. This field is the main reason for the table. It gives the virtual to physical address translation.

  3. The Modified or Dirty bit. Indicates that some part of the page has been written since it was loaded. This is needed if the page is evicted so that the OS can tell if the page must be written back to disk.

  4. The Referenced or Used bit. Indicates that some word in the page has been referenced. Used to select a victim: unreferenced pages make good victims by the locality property (discussed below).

  5. Protection bits. For example one can mark text pages as execute only. This requires that boundaries between regions with different protection are on page boundaries. Normally many consecutive (in logical address) pages have the same protection so many page protection bits are redundant. Protection is more naturally done with segmentation, but in many current systems, it is done with paging (since the systems don't utilize segmentation, even though the hardware supports it).
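
A sketch of one possible PTE layout packing these fields into a 32-bit word; the bit positions are invented for illustration and are machine dependent in practice.

    # Hypothetical layout: bits 0-19 frame number, bit 20 valid,
    # bit 21 dirty (modified), bit 22 referenced, bits 23-25 protection.
    FRAME_MASK = (1 << 20) - 1
    VALID, DIRTY, REFERENCED = 1 << 20, 1 << 21, 1 << 22

    def make_pte(frame, valid=True, dirty=False, referenced=False, prot=0):
        pte = frame & FRAME_MASK
        if valid:      pte |= VALID
        if dirty:      pte |= DIRTY
        if referenced: pte |= REFERENCED
        return pte | (prot << 23)

    pte = make_pte(frame=459, dirty=True)
    print(pte & FRAME_MASK, bool(pte & VALID), bool(pte & DIRTY))   # -> 459 True True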

Why are the disk address of non-resident pages not in the PTE?
On most systems the PTEs are accessed by the hardware automatically on a TLB miss (see immediately below). Thus the format of the PTEs is determined by the hardware and contains only information used on page hits. Hence the disk address, which is only used on page faults, is not present.

3.3.3 Speeding Up Paging

As mentioned above the simple scheme of storing the page table in its entirety in central memory alone appears to be both too slow and too big. We address both these issues here, but note that a second solution to the size question (segmentation) is discussed later.

Translation Lookaside Buffers (and General Associative Memory)

Note: Tanenbaum suggests that associative memory and translation lookaside buffer are synonyms. This is wrong. Associative memory is a general concept of which translation lookaside buffer is a specific example.

An associative memory is a content addressable memory. That is, you access the memory by giving the value of some field (called the index) and the hardware searches all the records and returns the record whose index field contains the requested value.

For example

    Name  | Animal | Mood     | Color
    ======+========+==========+======
    Moris | Cat    | Finicky  | Grey
    Fido  | Dog    | Friendly | Black
    Izzy  | Iguana | Quiet    | Brown
    Bud   | Frog   | Smashed  | Green
  

If the index field is Animal and Iguana is given, the associative memory returns

    Izzy  | Iguana | Quiet    | Brown
  

A Translation Lookaside Buffer or TLB is an associative memory where the index field is the page number. The other fields include the frame number, dirty bit, valid bit, etc.

Note that, unlike the situation with the page table, the page number is stored in the TLB; indeed it is the index field.

A TLB is small and expensive but at least it is fast. When the page number is in the TLB, the frame number is returned very quickly.

On a miss, a TLB reload is performed. The page number is looked up in the page table. The record found is placed in the TLB and a victim is discarded (not really discarded, dirty and referenced bits are copied back to the PTE). There is no placement question since all TLB entries are accessed at the same time and hence are equally suitable. But there is a replacement question.
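
A sketch of the hit/miss/reload logic for a tiny, fully-associative TLB; the page table contents and the eviction choice are placeholders.

    TLB_SIZE = 4
    tlb = {}                                  # page number -> frame number
    page_table = {0: 10, 1: 11, 2: 12, 3: 13, 4: 14}   # invented frame numbers

    def lookup(page):
        if page in tlb:                       # hit: frame returned very quickly
            return tlb[page]
        frame = page_table[page]              # miss: TLB reload from the page table
        if len(tlb) >= TLB_SIZE:
            victim = next(iter(tlb))          # replacement question; arbitrary choice here
            del tlb[victim]                   # real hardware copies dirty/ref bits back first
        tlb[page] = frame
        return frame

    for p in [0, 1, 2, 0, 3, 4, 0]:
        print(p, lookup(p))   # page 0 hits the second time, but has been evicted by the third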

Homework: 15.

As the size of the TLB has grown, some processors have switched from single-level, fully-associative, unified TLBs to multi-level, set-associative, separate instruction and data TLBs.

We are actually discussing caching, but using different terminology.

Software TLB Management

The words above assume that, on a TLB miss, the MMU (i.e., hardware and not the OS) loads the TLB with the needed PTE and then performs the virtual to physical address translation. This implies that the OS need not be concerned with TLB misses.

Some newer systems do this in software, i.e., the OS is involved.

Multilevel Page Tables

Recall the diagram above showing the data and stack growing towards each other. Most of the virtual memory is the unused space between the data and stack regions. However, with demand paging this space does not waste real memory. But the single large page table does waste real memory.

The idea of multi-level page tables (a similar idea is used in Unix i-node-based file systems, which we study later when we do I/O) is to add a level of indirection and have a page table containing pointers to page tables.

This idea can be extended to three or more levels. The largest I know of has four levels. We will be content with two levels.

Address Translation With a 2-Level Page Table

For a two level page table the virtual address is divided into three pieces

    +-----+-----+-------+
    | P#1 | P#2 | Offset|
    +-----+-----+-------+
  

Do an example on the board
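
Here is one such example as a Python sketch: assume a 16-bit virtual address split 4/4/8 (4 bits of P#1, 4 bits of P#2, an 8-bit offset); the table contents are invented.

    P1_BITS, P2_BITS, OFFSET_BITS = 4, 4, 8
    PAGE_SIZE = 1 << OFFSET_BITS

    # top level maps P#1 to a second-level table; second level maps P#2 to a frame
    top_level = {1: {3: 0x2A}}

    def translate(vaddr):
        offset = vaddr & (PAGE_SIZE - 1)
        p2 = (vaddr >> OFFSET_BITS) & ((1 << P2_BITS) - 1)
        p1 = vaddr >> (OFFSET_BITS + P2_BITS)
        frame = top_level[p1][p2]             # two lookups; either can fault
        return frame * PAGE_SIZE + offset

    vaddr = (1 << 12) | (3 << 8) | 0x17       # P#1=1, P#2=3, offset=0x17
    print(hex(translate(vaddr)))              # -> 0x2a17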

The VAX used a 2-level page table structure, but with some wrinkles (see Tanenbaum for details).

Naturally, there is no need to stop at 2 levels. In fact the SPARC has 3 levels and the Motorola 68030 has 4 (and the number of bits of Virtual Address used for P#1, P#2, P#3, and P#4 can be varied). More recently, x86-64 also has 4 levels.

Inverted Page Tables

For many systems the virtual address range is much bigger than the size of physical memory. In particular, with 64-bit addresses, the range is 2^64 bytes, which is 16 million terabytes. If the page size is 4KB and a PTE is 4 bytes, a full page table would be 16 thousand terabytes.

A two level table would still need 16 terabytes for the first level table, which is stored in memory. A three level table reduces this to 16 gigabytes, which is still large and only a 4-level table gives a reasonable memory footprint of 16 megabytes.
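
The arithmetic behind these numbers, as a sketch (4KB pages, 4-byte PTEs, so 1024 PTEs fit in one page of page table):

    PAGE = 2 ** 12                        # 4KB pages
    PTE_SIZE = 4                          # 4-byte PTEs
    PER_TABLE = PAGE // PTE_SIZE          # 1024 entries per page of page table

    pages = 2 ** 64 // PAGE               # 2^52 pages in a 64-bit address space
    print(pages * PTE_SIZE)                    # full table: 2^54 bytes = 16 thousand TB
    print(pages // PER_TABLE * PTE_SIZE)       # 2-level top table: 2^44 bytes = 16 TB
    print(pages // PER_TABLE ** 2 * PTE_SIZE)  # 3-level top table: 2^34 bytes = 16 GB
    print(pages // PER_TABLE ** 3 * PTE_SIZE)  # 4-level top table: 2^24 bytes = 16 MB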

An alternative is to instead keep a table indexed by frame number. The content of entry f contains the number of the page currently loaded in frame f. This is often called a frame table as well as an inverted page table.

Now there is one entry per frame. Again using 4KB pages and 4 byte PTEs, we see that the table would be a constant 0.1% of the size of real memory.

But on a TLB miss, the system must search the inverted page table, which would be hopelessly slow except that some tricks are employed. Specifically, hashing is used.
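
A sketch of the hashed lookup, with entries keyed by (process id, page number); the structure and names are illustrative only.

    NUM_FRAMES = 8
    frames = [None] * NUM_FRAMES          # entry f holds the (pid, page) loaded in frame f
    buckets = {}                          # hash of (pid, page) -> list of candidate frames

    def insert(pid, page, frame):
        frames[frame] = (pid, page)
        buckets.setdefault(hash((pid, page)) % NUM_FRAMES, []).append(frame)

    def lookup(pid, page):
        for frame in buckets.get(hash((pid, page)) % NUM_FRAMES, []):
            if frames[frame] == (pid, page):   # search a short chain, not every frame
                return frame
        return None                            # not loaded: page fault

    insert(pid=1, page=3, frame=5)
    print(lookup(1, 3), lookup(1, 4))          # -> 5 None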

Also it is often convenient to have an inverted table as we will see when we study global page replacement algorithms. Some systems keep both page and inverted page tables.