================ Start Lecture #13 ================

3.6: Design issues for (demand) Paging

3.6.1 & 3.6.2: The Working Set Model and Local vs Global Policies

I will do these in the reverse order (which makes more sense). Also Tanenbaum doesn't actually define the working set model, but I shall.

A local PRA is one in which the victim page is chosen from among the pages of the same process that requires a new page. That is, the number of pages for each process is fixed. So local LRU means the page least recently used by this process.

If we apply global LRU indiscriminately with some sort of RR processor scheduling policy, and memory is somewhat over-committed, then by the time we get around to a process, all the others have run and have probably paged out this process.

If this happens each process will need to page fault at a high rate; this is called thrashing. It is therefore important to get a good idea of how many pages a process needs, so that we can balance the local and global desires.
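The local/global distinction above can be made concrete with a small sketch. The frame table, field names, and numbers below are all invented for illustration; the only point is where the candidate set comes from.

```python
# Hypothetical frame table: each resident frame records which process
# owns it, which page it holds, and when it was last used.
frames = [
    {"proc": "A", "page": 3, "last_use": 10},
    {"proc": "A", "page": 7, "last_use": 42},
    {"proc": "B", "page": 1, "last_use": 5},
    {"proc": "B", "page": 2, "last_use": 50},
]

def lru_victim(frames, faulting_proc=None):
    """Pick the least recently used frame.
    Local policy: restrict candidates to the faulting process's frames.
    Global policy: consider every frame (faulting_proc=None)."""
    candidates = [f for f in frames
                  if faulting_proc is None or f["proc"] == faulting_proc]
    return min(candidates, key=lambda f: f["last_use"])

lru_victim(frames)                     # global: B's page 1 (oldest overall)
lru_victim(frames, faulting_proc="A")  # local to A: A's page 3
```

Note that the two policies pick different victims from the same frame table: global LRU is free to take another process's oldest page, which is exactly how an over-committed system ends up paging out a process that hasn't run recently.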

The working set policy (Peter Denning)

The goal is to specify which pages a given process needs to have resident in memory in order for the given process to run without too many page faults.

The idea of the working set policy is to ensure that each process keeps its working set in memory.
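A minimal sketch of the working set itself: W(t, Δ) is the set of pages referenced in the last Δ references up to time t. The reference string below is made up.

```python
def working_set(refs, t, delta):
    """W(t, delta): the set of pages referenced in the last `delta`
    references ending at time t. refs is the reference string, 0-indexed."""
    start = max(0, t - delta + 1)
    return set(refs[start:t + 1])

refs = [1, 2, 1, 3, 2, 4, 4, 4]   # an invented page reference string
working_set(refs, t=5, delta=3)   # references at times 3,4,5 -> {3, 2, 4}
```

The policy then says: keep W(t, Δ) resident for each process, and only run as many processes as memory can hold working sets for.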

Interesting questions include:

Various approximations to the working set policy have been devised.

  1. WSClock
  2. Page Fault Frequency (PFF)
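The PFF idea can be sketched in a few lines: measure each process's fault rate over an interval and grow or shrink its allocation accordingly. The threshold values below are made-up illustrative numbers, not recommendations.

```python
def pff_adjust(frames_allocated, faults, interval, low=0.01, high=0.05):
    """Page Fault Frequency sketch: if a process faults too often it
    needs more frames; if it almost never faults it can give some up.
    `low` and `high` are invented threshold fault rates."""
    rate = faults / interval
    if rate > high:
        return frames_allocated + 1        # thrashing-ish: grow allocation
    if rate < low and frames_allocated > 1:
        return frames_allocated - 1        # plenty of room: shrink it
    return frames_allocated                # fault rate acceptable: no change

pff_adjust(10, faults=8, interval=100)     # rate 0.08 > high -> 11 frames
pff_adjust(10, faults=0, interval=100)     # rate 0.00 < low  -> 9 frames
```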

3.6.3: Page size

3.6.4: Implementation Issues

Don't worry about instruction backup. It is very machine dependent, and modern implementations tend to get it right.

Locking (pinning) pages

We discussed pinning jobs already. The same (mostly I/O) considerations apply to pages.

Shared pages

Really should share segments

Backing Store

The issue is where on disk we put pages.

Paging Daemons

Done earlier

Page Fault Handling (not on 202 exams)

  1. Hardware traps to the kernel (switches to supervisor mode; saves state)

  2. Assembly language code saves more state, establishes the C-language environment, calls the OS

  3. OS determines that a fault occurred and which page

  4. If virtual address is invalid, shoot process. If valid, seek a free frame. If no free frames, select a victim.

  5. If the victim frame is dirty, schedule an I/O write to copy the frame to disk. The process is blocked, so the process scheduler is invoked to perform a context switch.

  6. Now the frame is clean (this may be much later in wall clock time). Schedule an I/O to read the desired page into this clean frame. The process is again blocked and hence the process scheduler is invoked to perform a context switch.

  7. Disk interrupt occurs when the I/O completes (trap / asm / OS determines I/O done / process made ready / process starts running). The PTE is updated.

  8. Fix up process (e.g. reset PC)

  9. Process put in ready queue and eventually runs. The OS returns to the first asm routine.

  10. Asm routine restores registers, etc. and returns to user mode.

The process is unaware that all this happened.
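The OS portion of the steps above (roughly steps 3 through 9) can be sketched as code. This is a toy model, not a real kernel: the page table, free list, victim selection, and disk are all simplified stand-ins, and the blocking at each I/O is only noted in comments.

```python
class Disk:
    """Stand-in for the paging device; records the I/O it was asked to do."""
    def __init__(self):
        self.ops = []
    def write(self, frame, page):
        self.ops.append(("write", frame, page))
    def read(self, frame, page):
        self.ops.append(("read", frame, page))

def pick_victim(page_table):
    """Toy victim selection: evict the resident page least recently used."""
    victim = min((p for p, e in page_table.items() if "frame" in e),
                 key=lambda p: page_table[p]["last_use"])
    return victim, page_table[victim]["frame"]

def handle_page_fault(page, page_table, free_frames, disk):
    if page not in page_table:                 # step 4: invalid address,
        raise MemoryError("invalid address")   #   shoot the process
    if free_frames:
        frame = free_frames.pop()              # step 4: a free frame exists
    else:
        victim, frame = pick_victim(page_table)   # step 4: select a victim
        if page_table[victim]["dirty"]:        # step 5: write back a dirty
            disk.write(frame, victim)          #   frame (process blocks here)
        del page_table[victim]["frame"]
    disk.read(frame, page)                     # step 6: read the desired page
    page_table[page].update(frame=frame, dirty=False)  # step 7: update PTE
    return frame                               # steps 8-9: fix up and resume
```

As in the prose, the handler may block twice (once for the write-back, once for the read), and the faulting process never sees any of it.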

3.7: Segmentation

Up to now, the virtual address space has been contiguous.

The following table mostly from Tanenbaum compares demand paging with demand segmentation.

Consideration                    Demand Paging        Demand Segmentation
Programmer aware                 No                   Yes
How many addr spaces             1                    Many
VA size > PA size                Yes                  Yes
Protect individual
  procedures separately          No                   Yes
Accommodate elements
  with changing sizes            No                   Yes
Ease user sharing                No                   Yes
Why invented                     Let the VA size      Sharing, protection,
                                 exceed the PA size   independent addr spaces
Internal fragmentation           Yes                  No, in principle
External fragmentation           No                   Yes
Placement question               No                   Yes
Replacement question             Yes                  Yes

Homework: 29.

** Two Segments

Late PDP-10s and TOPS-10

** Three Segments

Traditional Unix uses three segments:

  1. Shared text segment (execute only)
  2. Data segment (global and static variables)
  3. Stack segment (automatic variables)

** Four Segments

Just kidding.

** General (not necessarily demand) Segmentation

** Demand Segmentation

Same idea as demand paging, applied to segments.

** 3.7.2: Segmentation with paging

Combines both segmentation and paging to get advantages of both at a cost in complexity. This is very common now.
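The translation in such a combined scheme goes through two levels: the segment number selects a per-segment page table, which then maps the page within the segment to a physical frame. A toy sketch, with invented tables and with all the fault and bounds checks omitted:

```python
def translate(va, seg_table, page_size=4096):
    """Segmentation-with-paging sketch: va is (segment #, offset within
    segment); each segment has its own page table mapping page # -> frame #.
    Segment-limit and page-fault checks are omitted for brevity."""
    seg, offset = va
    pages = seg_table[seg]                  # the segment's own page table
    page, page_off = divmod(offset, page_size)
    frame = pages[page]
    return frame * page_size + page_off     # physical address

# Invented example: segment 0 has page 0 in frame 7, page 1 in frame 2.
seg_table = {0: {0: 7, 1: 2}}
translate((0, 5000), seg_table)   # page 1, offset 904 -> 2*4096 + 904 = 9096
```

The cost in complexity is visible even in the sketch: two table lookups per reference, which is why real hardware caches translations in a TLB.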

Homework: 30.

Some last words