Operating Systems


Chapter 3: Memory Management

Also called storage management or space management.

Memory management must deal with the storage hierarchy present in modern machines.

We will see in the next few lectures that there are three independent decisions:

  1. Segmentation (or no segmentation)
  2. Paging (or no paging)
  3. Fetch on demand (or no fetching on demand)

Memory management implements address translation.

Homework: 7.

When is address translation performed?

  1. At compile time
  2. At link-edit time (the ``linker lab'')
  3. At load time
  4. At execution time

Extensions

Note: I will place ** before each memory management scheme.

3.1: Memory management without swapping or paging

Entire process remains in memory from start to finish and does not move.

The sum of the memory requirements of all jobs in the system cannot exceed the size of physical memory.

** 3.1.1: Monoprogramming without swapping or paging (Single User)

The ``good old days'' when everything was easy.

3.1.2: Multiprogramming and Memory Usage

The goal is to improve CPU utilization by overlapping CPU and I/O.

Homework: 1, 3.

3.1.3: Multiprogramming with fixed partitions


================ Start Lecture #10 ================

Note: Typo on lab 2. The last input set has a process with a zero value for IO, which is an error. Specifically, you should replace
    5 (0 3 200 3) (0 9 500 3) (0 20 500 3) (100 1 100 3) (100 100 500 3)
with
    5 (0 3 200 3) (0 9 500 3) (0 20 500 3) (100 1 100 3) (100 100 500 3)
End of Note.

3.2: Swapping

Moving entire processes between disk and memory is called swapping.

3.2.1: Multiprogramming with variable partitions

Homework: 4

MVT introduces the ``Placement Question'': which hole (partition) to choose.

Homework: 2, 5.

MVT also introduces the ``Replacement Question'': which victim to swap out.

We will study this question more when we discuss demand paging.

Considerations in choosing a victim

NOTEs:
  1. So far the schemes presented have had two properties:
    1. Each job is stored contiguously in memory. That is, the job is contiguous in physical addresses.
    2. Each job cannot use more memory than exists in the system. That is, the virtual address space cannot exceed the physical address space.

  2. Tanenbaum now attacks the second item. I wish to do both and start with the first.
  3. Tanenbaum (and most of the world) uses the term ``paging'' to mean what I call demand paging. This is unfortunate as it mixes together two concepts.
    1. Paging (dicing the address space) to solve the placement problem and essentially eliminate external fragmentation.
    2. Demand fetching, to permit the total memory requirements of all loaded jobs to exceed the size of physical memory.

  4. Tanenbaum (and most of the world) uses the term virtual memory as a synonym for demand paging. Again I consider this unfortunate.
    1. Demand paging is a fine term and is quite descriptive
    2. Virtual memory ``should'' be used in contrast with physical memory to describe any virtual to physical address translation.

** (non-demand) Paging

Simplest scheme to remove the requirement of contiguous physical memory.


================ Start Lecture #11 ================

Example: Assume a decimal machine with page size = frame size = 1000.
Assume PTE 3 contains 459.
Then virtual address 3372 corresponds to physical address 459372.
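
A minimal sketch of this arithmetic in C; PAGE_SIZE and the page_table contents below merely restate the example above and are not part of any real system.

    #include <stdio.h>

    #define PAGE_SIZE 1000                  /* decimal machine from the example */

    /* Hypothetical page table for the example: PTE 3 contains 459. */
    static unsigned page_table[8] = { 0, 0, 0, 459, 0, 0, 0, 0 };

    /* Split the virtual address into page number and offset, look up
       the frame, and glue the frame number onto the offset. */
    unsigned translate(unsigned va)
    {
        unsigned page   = va / PAGE_SIZE;   /* 3372 / 1000 = 3   */
        unsigned offset = va % PAGE_SIZE;   /* 3372 % 1000 = 372 */
        unsigned frame  = page_table[page]; /* PTE 3 holds 459   */
        return frame * PAGE_SIZE + offset;  /* 459*1000 + 372 = 459372 */
    }

    int main(void)
    {
        printf("%u\n", translate(3372));    /* prints 459372 */
        return 0;
    }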

Properties of (non-demand) paging.

Homework: 13

Address translation

The choice of page size is discussed below.

Homework: 8.

3.3: Virtual Memory (meaning fetch on demand)

Idea is that a program can execute even if only the active portion of its address space is memory resident. That is, swap in and swap out portions of a program. In a crude sense this can be called ``automatic overlays''.

Advantages

3.3.1: Paging (meaning demand paging)

Fetch pages from disk to memory when they are referenced, with a hope of getting the most actively used pages in memory.

Homework: 11.

3.3.2: Page tables

A discussion of page tables is also appropriate for (non-demand) paging, but the issues are more acute with demand paging since the tables can be much larger. Why?

  1. The total size of the active processes is no longer limited to the size of physical memory. Since the total size of the processes is greater, the total size of the page tables is greater and hence concerns over the size of the page table are more acute.
  2. With demand paging an important question is the choice of a victim page to page out. Data in the page table can be useful in this choice.

We must be able to access the page table very quickly since it is needed for every memory access.

Unfortunate laws of hardware.

So we can't simply put the page table in fast processor registers, let it be huge, and still sell the system for $1500.




For now, put the (one-level) page table in main memory.


Contents of a PTE

Each page has a corresponding page table entry (PTE). The information in a PTE is for use by the hardware. Information set by and used by the OS is normally kept in other OS tables. The page table format is determined by the hardware so access routines are not portable. The following fields are often present.

  1. The valid bit. This tells if the page is currently loaded (i.e., is in a frame). If set, the frame pointer is valid. It is also called the presence or presence/absence bit. If a page is accessed with the valid bit zero, a page fault is generated by the hardware.
  2. The frame number. This is the main reason for the table. It is needed for virtual to physical address translation.
  3. The modified bit. Indicates that some part of the page has been written since it was loaded. This is needed when the page is evicted so the OS knows whether the page must be written back to disk.
  4. The referenced bit. Indicates that some word in the page has been referenced. Used to select a victim: unreferenced pages make good victims by the locality property.
  5. Protection bits. For example one can mark text pages as execute only. This requires that boundaries between regions with different protection are on page boundaries. Normally many consecutive (in logical address) pages have the same protection so many page protection bits are redundant. Protection is more naturally done with segmentation.
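
An illustrative C rendering of such a PTE; since the real format is dictated by the hardware (which is why access routines are not portable), the field names and widths below are assumptions for illustration only.

    #include <stdint.h>

    /* Illustrative 32-bit PTE using bit-fields; the real layout and
       widths are dictated by the hardware, so these are assumptions. */
    typedef struct {
        uint32_t valid      : 1;  /* page is in a frame; frame field valid */
        uint32_t modified   : 1;  /* written since loaded (dirty)          */
        uint32_t referenced : 1;  /* some word in the page was referenced  */
        uint32_t protection : 3;  /* e.g. read/write/execute permissions   */
        uint32_t frame      : 26; /* frame number, for address translation */
    } pte_t;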

Multilevel page tables (not on 202 exams)

Recall the previous diagram. Most of the virtual memory is the unused space between the data and stack regions. However, with demand paging this space does not waste real memory. But the single large page table does waste real memory.

The idea of multi-level page tables (a similar idea is used in Unix inode-based file systems) is to add a level of indirection and have a page table containing pointers to page tables.

Do an example on the board

The VAX used a 2-level page table structure, but with some wrinkles (see Tanenbaum for details).

Naturally, there is no need to stop at 2 levels. In fact the SPARC has 3 levels and the Motorola 68030 has 4 (and the number of bits of Virtual Address used for P#1, P#2, P#3, and P#4 can be varied).
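
As a sketch of how the indirection works, here is a two-level lookup in C, assuming a 32-bit virtual address split 10/10/12 (4KB pages); the name top_level and the fault convention are assumptions of this sketch.

    #include <stdint.h>

    /* Assumed split of a 32-bit virtual address: 10 bits of P#1 index
       the top-level table, 10 bits of P#2 index a second-level table,
       and a 12-bit offset selects the byte (4KB pages). */
    #define OFFSET_BITS 12
    #define LEVEL_BITS  10
    #define LEVEL_MASK  ((1u << LEVEL_BITS) - 1)
    #define OFFSET_MASK ((1u << OFFSET_BITS) - 1)

    /* Hypothetical top-level table: each entry points to a second-level
       page table, or is NULL if that 4MB region of the address space is
       unused (this is where the memory savings come from). */
    extern uint32_t *top_level[1 << LEVEL_BITS];

    /* Returns the physical address, or ~0u to signal a fault. */
    uint32_t translate2(uint32_t va)
    {
        uint32_t p1     = (va >> (OFFSET_BITS + LEVEL_BITS)) & LEVEL_MASK;
        uint32_t p2     = (va >> OFFSET_BITS) & LEVEL_MASK;
        uint32_t offset = va & OFFSET_MASK;

        uint32_t *second = top_level[p1];
        if (second == 0)
            return ~0u;                 /* no second-level table: fault */
        uint32_t frame = second[p2];
        return (frame << OFFSET_BITS) | offset;
    }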

3.3.4: Associative memory (TLBs)

Note: Tanenbaum suggests that ``associative memory'' and ``translation lookaside buffer'' are synonyms. This is wrong. Associative memory is a general structure and translation lookaside buffer is a special case.

An associative memory is a content addressable memory. That is, you access the memory by giving the value of some field; the hardware searches all the records and returns the record whose field contains the requested value.

For example

Name  | Animal | Mood     | Color
======+========+==========+======
Moris | Cat    | Finicky  | Grey
Fido  | Dog    | Friendly | Black
Izzy  | Iguana | Quiet    | Brown
Bud   | Frog   | Smashed  | Green
If the index field is Animal and Iguana is given, the associative memory returns
Izzy  | Iguana | Quiet    | Brown

A Translation Lookaside Buffer (TLB) is an associative memory where the index field is the page number. The other fields include the frame number, dirty bit, valid bit, and others.
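
A software rendering of a TLB lookup; a real TLB compares all entries in parallel in hardware, so the loop below only simulates the behavior, and all names are illustrative.

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical TLB entry: the page number is the index field. */
    typedef struct {
        bool     valid;
        uint32_t page;    /* the field searched associatively */
        uint32_t frame;   /* returned on a hit                */
        bool     dirty;
    } tlb_entry;

    #define TLB_SIZE 64   /* assumed size */
    extern tlb_entry tlb[TLB_SIZE];

    /* Hardware compares every entry in parallel; software can only
       simulate that with a loop. Fills *frame and returns true on a hit. */
    bool tlb_lookup(uint32_t page, uint32_t *frame)
    {
        for (int i = 0; i < TLB_SIZE; i++) {
            if (tlb[i].valid && tlb[i].page == page) {
                *frame = tlb[i].frame;
                return true;
            }
        }
        return false;     /* miss: consult the page table */
    }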

Homework: 15.

3.3.5: Inverted page tables

Keep a table indexed by frame number with the entry f containing the number of the page currently loaded in frame f.
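
A naive sketch of the resulting lookup; the names are hypothetical, and a real system would hash (process, page) rather than scan every frame on each miss.

    #include <stdint.h>

    /* Inverted page table: one entry per frame recording which
       (process, page) currently occupies it; names are illustrative. */
    typedef struct {
        int      pid;
        uint32_t page;
    } ipt_entry;

    #define NUM_FRAMES 4096   /* assumed size */
    extern ipt_entry ipt[NUM_FRAMES];

    /* Linear search; real systems hash (pid, page) instead, since
       scanning every frame on each reference would be far too slow. */
    int find_frame(int pid, uint32_t page)
    {
        for (int f = 0; f < NUM_FRAMES; f++)
            if (ipt[f].pid == pid && ipt[f].page == page)
                return f;
        return -1;            /* not resident: page fault */
    }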

3.4: Page Replacement Algorithms

These are solutions to the replacement question.

Good solutions take advantage of locality.

Pages belonging to processes that have terminated are of course perfect choices for victims.

Pages belonging to processes that have been blocked for a long time are good choices as well.

Random

A lower bound on performance. Any decent scheme should do better.

3.4.1: The optimal page replacement algorithm (opt PRA) (aka Belady's min PRA)

Replace the page whose next reference will be furthest in the future.


================ Start Lecture #12 ================

3.4.2: The not recently used (NRU) PRA

Divide the frames into four classes and make a random selection from the lowest nonempty class.

  1. Not referenced, not modified
  2. Not referenced, modified
  3. Referenced, not modified
  4. Referenced, modified

Assumes that in each PTE there are two extra flags R (sometimes called U, for used) and M (often called D, for dirty).

Also assumes that a page in a lower priority class is cheaper to evict.
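
A sketch of NRU victim selection under these assumptions; r_bit[], m_bit[], and MAX_FRAMES are hypothetical stand-ins for the hardware-maintained bits.

    #include <stdlib.h>

    #define MAX_FRAMES 4096     /* assumed bound, for this sketch */

    /* Hypothetical per-frame copies of the PTE's R and M bits. */
    extern int r_bit[MAX_FRAMES], m_bit[MAX_FRAMES];

    /* NRU class of a frame's page: 2*R + M gives 0..3, matching the
       four classes listed above (lowest class = best victim). */
    static int nru_class(int f) { return 2 * r_bit[f] + m_bit[f]; }

    /* Pick a random frame from the lowest nonempty class. */
    int nru_victim(int num_frames)
    {
        for (int cls = 0; cls < 4; cls++) {
            int candidates[MAX_FRAMES], n = 0;
            for (int f = 0; f < num_frames; f++)
                if (nru_class(f) == cls)
                    candidates[n++] = f;
            if (n > 0)
                return candidates[rand() % n];
        }
        return -1;              /* no frames at all */
    }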

We again have the prisoner problem: we do a good job of making little ones out of big ones, but not the reverse. More resets are needed.

Every k clock ticks, reset all R bits

What if the hardware doesn't set these bits?

3.4.3: FIFO PRA

Simple but poor since usage of the page is ignored.

Belady's Anomaly: Can have more frames yet generate more faults. Example given later.

3.4.4: Second chance PRA

Similar to the FIFO PRA, but when choosing a victim, if the page at the head of the queue has been referenced (R bit set), don't evict it. Instead, reset R and move the page to the rear of the queue (so it looks new). The page is given a second chance.

What if all frames have been referenced?
It becomes the same as FIFO (but takes longer).

Might want to turn off the R bit more often (say every k clock ticks).

3.4.5: Clock PRA

Same algorithm as 2nd chance, but a better (and I would say obvious) implementation: Use a circular list.

Do an example.
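
In place of the board example, a minimal sketch of clock; the referenced[] array stands in for the hardware R bits and NUM_FRAMES is an assumed size.

    #include <stdbool.h>

    #define NUM_FRAMES 64       /* assumed size, for this sketch */

    /* Hypothetical per-frame R bits; the circular list is just an
       index that wraps around, which is the whole point of clock. */
    extern bool referenced[NUM_FRAMES];
    static int hand = 0;        /* the clock hand */

    /* Sweep the hand forward, giving second chances (clear R and move
       on) until an unreferenced frame is found; that is the victim. */
    int clock_victim(void)
    {
        for (;;) {
            if (!referenced[hand]) {
                int victim = hand;
                hand = (hand + 1) % NUM_FRAMES;
                return victim;
            }
            referenced[hand] = false;   /* second chance */
            hand = (hand + 1) % NUM_FRAMES;
        }
    }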

LIFO PRA

This is terrible! Why?
Ans: All but the last frame are frozen once loaded so you can replace only one frame. This is especially bad after a phase shift in the program when it is using all new pages.

3.4.6: Least Recently Used (LRU) PRA

When a page fault occurs, choose as victim that page that has been unused for the longest time, i.e. that has been least recently used.

LRU is definitely implementable: the past is knowable. But a straightforward implementation is expensive, since a time stamp must be updated (or a list reordered) on every memory reference.

Homework: 19, 20

A hardware cutsie in Tanenbaum

3.4.7: Approximating LRU in Software

The Not Frequently Used (NFU) PRA

The Aging PRA

NFU doesn't distinguish between old references and recent ones. The following modification does distinguish.

R | Counter
==+=========
1 | 10000000
0 | 01000000
1 | 10100000
1 | 11010000
0 | 01101000
0 | 00110100
1 | 10011010
1 | 11001101
0 | 01100110
Homework: 21, 25

3.5: Modeling Paging Algorithms

3.5.1: Belady's anomaly

Consider a system that has no pages loaded and that uses the FIFO PRA.
Consider the following ``reference string'' (sequence of pages referenced).

 0 1 2 3 0 1 4 0 1 2 3 4

If we have 3 frames this generates 9 page faults (do it).

If we have 4 frames this generates 10 page faults (do it).

Theory has been developed showing that certain PRAs (so-called ``stack algorithms'') cannot suffer this anomaly for any reference string. FIFO is clearly not a stack algorithm. LRU is.

Repeat the above calculations for LRU.
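
A small C simulation that reproduces the FIFO counts above and can be adapted to repeat the calculation for LRU; the function name and array bounds are assumptions of this sketch.

    #include <stdio.h>

    /* Count page faults for FIFO page replacement with a given
       number of frames, starting with no pages loaded. */
    static int fifo_faults(const int *refs, int n, int frames)
    {
        int resident[16];           /* assumed bound: frames <= 16 */
        int next = 0, loaded = 0, faults = 0;
        for (int i = 0; i < n; i++) {
            int hit = 0;
            for (int j = 0; j < loaded; j++)
                if (resident[j] == refs[i]) { hit = 1; break; }
            if (hit)
                continue;
            faults++;
            if (loaded < frames)
                resident[loaded++] = refs[i];   /* free frame available  */
            else {
                resident[next] = refs[i];       /* evict the oldest page */
                next = (next + 1) % frames;
            }
        }
        return faults;
    }

    int main(void)
    {
        int refs[] = { 0, 1, 2, 3, 0, 1, 4, 0, 1, 2, 3, 4 };
        int n = sizeof refs / sizeof refs[0];
        printf("3 frames: %d faults\n", fifo_faults(refs, n, 3)); /* 9  */
        printf("4 frames: %d faults\n", fifo_faults(refs, n, 4)); /* 10 */
        return 0;
    }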

3.6: Design issues for (demand) Paging

3.6.1 & 3.6.2: The Working Set Model and Local vs Global Policies

I will do these in the reverse order (which makes more sense). Also Tanenbaum doesn't actually define the working set model, but I shall.

A local PRA is one in which the victim page is chosen from among the pages of the process that needs a new page. That is, the number of frames allocated to each process is fixed. So LRU means the page least recently used by this process.

If we apply global LRU indiscriminately with some sort of RR processor scheduling policy, and memory is somewhat over-committed, then by the time we get around to a process, all the others have run and have probably paged out this process.

If this happens each process will need to page fault at a high rate; this is called thrashing. It is therefore important to get a good idea of how many pages a process needs, so that we can balance the local and global desires.

The working set policy (Peter Denning)

The goal is to specify which pages a given process needs to have memory resident in order for that process to run without too many page faults.

The idea of the working set policy is to ensure that each process keeps its working set in memory.
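
To be precise: the working set w(t, Δ) of a process at virtual time t is the set of pages the process referenced during its last Δ memory references. Δ is a tunable parameter, and the policy is to run a process only if its entire working set can be kept memory resident (this is Denning's formulation).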

Interesting questions include:

Various approximations to the working set have been devised:

  1. Wsclock
  2. Page Fault Frequency (PFF)


================ Start Lecture #13 ================

3.6.3: Page size

3.6.4: Implementation Issues

Don't worry about instruction backup. It is very machine dependent, and modern implementations tend to get it right.

Locking (pinning) pages

We discussed pinning jobs already. The same (mostly I/O) considerations apply to pages.

Shared pages

Really should share segments.

Backing Store

The issue is where on disk do we put pages.

Paging Daemons

Done earlier

Page Fault Handling (Not on the 202 exams)

What happens when a process, say process A, gets a page fault?
  1. The hardware detects the fault and traps to the kernel (switches to supervisor mode and saves state).

  2. Some assembly language code saves more state, establishes the C-language (or another programming language) environment, and ``calls'' the OS.

  3. The OS determines that a page fault occurred and which page was referenced.

  4. If the virtual address is invalid, process A is killed. If the virtual address is valid, the OS must find a free frame. If there are no free frames, the OS selects a victim frame. Call the process owning the victim frame process B. (If the page replacement algorithm is local, process B is process A.)

  5. If the victim frame is dirty, the OS schedules an I/O write to copy the frame to disk. Thus, if the victim frame is dirty, process B is blocked (it might already be blocked for some other reason). Process A is also blocked since it needs to wait for this frame to be free. The process scheduler is invoked to perform a context switch.


  6. Now the O/S has a clean frame (this may be much later in wall clock time if a victim frame had to be written). The O/S schedules an I/O to read the desired page into this clean frame. Process A is blocked (perhaps for the second time) and hence the process scheduler is invoked to perform a context switch.

  7. A disk interrupt occurs when the I/O completes (trap / asm / OS determines I/O done). The PTE is updated.

  8. The O/S may need to fix up process A (e.g. reset the program counter to re-execute the instruction that caused the page fault).

  9. Process A is placed on the ready list and eventually is chosen by the scheduler to run. Recall that process A is executing O/S code.

  10. The OS returns to the first assembly language routine.

  11. The assembly language routine restores registers, etc. and ``returns'' to user mode.

Process A is unaware that all this happened.
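
A compressed C sketch of steps 3 through 9; every helper named below is hypothetical, standing in for machine-specific kernel code, and the blocking and context switching of steps 5 and 6 are hidden inside the I/O helpers.

    /* All helper names below are hypothetical, standing in for
       machine-specific kernel code. */
    extern int  va_is_valid(int pid, unsigned va);
    extern int  get_free_frame(void);        /* -1 if none free          */
    extern int  choose_victim(void);         /* the replacement question */
    extern int  frame_is_dirty(int frame);
    extern void write_frame_to_disk(int frame);                  /* blocks */
    extern void read_page_into_frame(int pid, unsigned page, int frame); /* blocks */
    extern void update_pte(int pid, unsigned page, int frame);
    extern void kill_process(int pid);
    extern void backup_instruction(int pid);

    void handle_page_fault(int pid, unsigned va, unsigned page)
    {
        if (!va_is_valid(pid, va)) {
            kill_process(pid);               /* step 4: invalid address  */
            return;
        }
        int frame = get_free_frame();
        if (frame < 0) {
            frame = choose_victim();         /* step 4: select a victim  */
            if (frame_is_dirty(frame))
                write_frame_to_disk(frame);  /* step 5: clean the frame  */
        }
        read_page_into_frame(pid, page, frame); /* step 6: schedule read */
        update_pte(pid, page, frame);        /* step 7: PTE is updated   */
        backup_instruction(pid);             /* step 8: fix up process A */
        /* step 9: process A goes on the ready list and eventually
           re-executes the faulting instruction without faulting. */
    }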

3.7: Segmentation

Up to now, the virtual address space has been contiguous.

Homework: 29.

** Two Segments

Late PDP-10s and TOPS-10

** Three Segments

Traditional Unix uses three segments:

  1. Shared text marked execute only.
  2. Data segment (global and static variables).
  3. Stack segment (automatic variables).

** Four Segments

Just kidding.

** General (not necessarily demand) Segmentation

** Demand Segmentation

Same idea as demand paging applied to segments.

The following table, mostly from Tanenbaum, compares demand paging with demand segmentation.

Consideration                            | Demand Paging      | Demand Segmentation
=========================================+====================+=========================
Programmer aware                         | No                 | Yes
How many addr spaces                     | 1                  | Many
VA size > PA size                        | Yes                | Yes
Protect individual procedures separately | No                 | Yes
Accommodate elements with changing sizes | No                 | Yes
Ease user sharing                        | No                 | Yes
Why invented                             | Let the VA size    | Sharing, protection,
                                         | exceed the PA size | independent addr spaces
Internal fragmentation                   | Yes                | No, in principle
External fragmentation                   | No                 | Yes
Placement question                       | No                 | Yes
Replacement question                     | Yes                | Yes

** 3.7.2: Segmentation with paging

Combines both segmentation and paging to get advantages of both at a cost in complexity. This is very common now.
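
A sketch of the combined translation: the segment number selects a per-segment page table, which is then used for ordinary paging. All widths and names below (a 12/8/12 split of a 32-bit address) are assumptions for illustration.

    #include <stdint.h>

    #define OFFSET_BITS 12      /* 4KB pages (assumed)        */
    #define PAGE_BITS   8       /* pages per segment (assumed) */
    #define SEG_BITS    12      /* number of segments (assumed) */

    typedef struct {
        uint32_t *page_table;   /* this segment's page table             */
        uint32_t  length;       /* segment length in pages, for checking */
    } seg_entry;

    extern seg_entry seg_table[1 << SEG_BITS];

    /* Returns the physical address, or ~0u on a segment or page fault. */
    uint32_t translate_sp(uint32_t va)
    {
        uint32_t seg    = va >> (PAGE_BITS + OFFSET_BITS);
        uint32_t page   = (va >> OFFSET_BITS) & ((1u << PAGE_BITS) - 1);
        uint32_t offset = va & ((1u << OFFSET_BITS) - 1);

        if (page >= seg_table[seg].length)
            return ~0u;                         /* beyond segment end */
        uint32_t frame = seg_table[seg].page_table[page];
        return (frame << OFFSET_BITS) | offset;
    }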

Homework: 30.

Some last words on memory management.


================ Start Lecture #14 ================

Midterm Exam


================ Start Lecture #15 ================

Reviewed Answers to Midterm Exam



Allan Gottlieb