==== Start Lecture #6 ====

A Translation Lookaside Buffer or TLB is an associative memory where the index field is the page number. The other fields include the frame number, dirty bit, valid bit, and others.
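As a toy model (the class and field names here are illustrative, not from any real OS), a TLB can be sketched as a map from page number to an entry holding the frame number and flags:

```python
from dataclasses import dataclass

@dataclass
class TLBEntry:
    frame: int
    valid: bool = True
    dirty: bool = False

class TLB:
    """Toy associative memory indexed by page number."""
    def __init__(self):
        self.entries = {}              # page number -> TLBEntry

    def lookup(self, page):
        """Return the frame for `page`, or None on a TLB miss."""
        e = self.entries.get(page)
        return e.frame if e and e.valid else None

    def insert(self, page, frame):
        self.entries[page] = TLBEntry(frame)

tlb = TLB()
tlb.insert(page=7, frame=3)
print(tlb.lookup(7))   # 3 (hit)
print(tlb.lookup(8))   # None (miss: fall back to the page table)
```

On a miss, the hardware (or, on some machines, the OS) walks the page table and installs the translation in the TLB.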

Homework: 15.

3.3.5: Inverted page tables

Keep a table indexed by frame number, with entry f containing the number of the page currently loaded in frame f.
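A minimal sketch (names mine): since the table has one entry per frame, translating a (process, page) pair means searching it; real systems hash on (process, page) to avoid the linear scan.

```python
# Inverted page table: one entry per frame, holding the (pid, page)
# currently loaded in that frame.  Names are illustrative.
NFRAMES = 4
ipt = [None] * NFRAMES             # ipt[f] = (pid, page) in frame f

def load(pid, page, frame):
    ipt[frame] = (pid, page)

def translate(pid, page):
    """Linear search; real systems hash (pid, page) instead."""
    for f, entry in enumerate(ipt):
        if entry == (pid, page):
            return f
    return None                    # not loaded: page fault

load(pid=1, page=9, frame=2)
print(translate(1, 9))   # 2
print(translate(1, 5))   # None -> page fault
```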

3.4: Page Replacement Algorithms

These are solutions to the replacement question.

Good solutions take advantage of locality.

Pages belonging to processes that have terminated are of course perfect choices for victims.

Pages belonging to processes that have been blocked for a long time are good choices as well.

Random

A lower bound on performance. Any decent scheme should do better.

3.4.1: The optimal page replacement algorithm (opt PRA)

Replace the page whose next reference will be furthest in the future.
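Since this needs the future reference string, it cannot be implemented online, but it is easy to sketch for simulations (the helper name is mine):

```python
def opt_victim(frames, future_refs):
    """Pick the page whose next reference is furthest in the future;
    pages never referenced again are the best possible victims."""
    def next_use(page):
        try:
            return future_refs.index(page)
        except ValueError:
            return float('inf')        # never used again
    return max(frames, key=next_use)

# Frames hold pages 0, 1, 2; the upcoming references are 1, 0, 1.
print(opt_victim([0, 1, 2], [1, 0, 1]))   # 2 (never referenced again)
```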

3.4.2: The not recently used (NRU) PRA

Divide the frames into four classes, numbered 0 to 3, and make a random selection from the lowest-numbered nonempty class.

  0. Not referenced, not modified
  1. Not referenced, modified
  2. Referenced, not modified
  3. Referenced, modified

Assumes that in each PTE there are two extra flags R (sometimes called U, for used) and M (often called D, for dirty).

Also assumes that a page in a lower-numbered class is cheaper to evict.

We again have the prisoner problem: we do a good job of making little ones out of big ones (pages move to higher classes when referenced or modified), but not the reverse. We need to reset the bits periodically.

Every k clock ticks, reset all R bits.

What if the hardware doesn't set these bits?
Ans: The OS can simulate them in software: mark the page invalid (or read-only) in the PTE, and set R (or M) in the resulting fault handler.
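A sketch of the NRU victim choice, assuming we can read each page's R and M bits (class number = 2R + M, matching the list above):

```python
import random

def nru_victim(pages):
    """pages: dict mapping page number -> (R, M) bits.
    Evict a random page from the lowest nonempty class."""
    lowest = min(2 * r + m for r, m in pages.values())
    candidates = [p for p, (r, m) in pages.items() if 2 * r + m == lowest]
    return random.choice(candidates)

# Page 3 is the only unreferenced, unmodified page (class 0).
print(nru_victim({1: (1, 1), 2: (0, 1), 3: (0, 0)}))   # 3
```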

3.4.3: FIFO PRA

Simple but poor, since how recently a page was used is given no weight.

Belady's Anomaly: Can have more frames yet more faults. Example given later.

3.4.4: Second chance PRA

FIFO, but when it is time to choose a victim, if the page at the head of the queue has been referenced (R bit set), don't evict it; instead reset R and move the page to the rear of the queue (so it looks new). The page is being given a second chance.

What if all frames have been referenced?
Ans: It becomes the same as FIFO (but takes longer).

Might want to turn off the R bits more often (every k clock ticks).

3.4.5: Clock PRA

Same algorithm as 2nd chance, but a better (and I would say obvious) implementation: Use a circular list.

Do an example.
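For instance, here is a sketch of the clock hand sweep (the list-of-pairs representation is just for illustration):

```python
def clock_select(pages, hand):
    """pages: circular list of [page, R] pairs; hand: current index.
    Sweep the hand, clearing R bits, until a page with R == 0 is
    found.  Returns (victim_index, new_hand_position)."""
    while True:
        page, r = pages[hand]
        if r == 0:
            return hand, (hand + 1) % len(pages)
        pages[hand][1] = 0                 # give it a second chance
        hand = (hand + 1) % len(pages)

frames = [[10, 1], [11, 0], [12, 1]]       # [page, R] pairs
victim, hand = clock_select(frames, 0)
print(frames[victim][0])   # 11: the first page found with R == 0
```

Note that page 10's R bit was cleared as the hand swept past it, so it becomes a candidate on the next sweep.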

3.4.6: Least Recently Used (LRU) PRA

When a page fault occurs, choose as victim that page that has been unused for the longest time, i.e. that has been least recently used.
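A sketch of the LRU bookkeeping using an ordered map (class and method names are mine):

```python
from collections import OrderedDict

class LRUFrames:
    """Exact LRU over a fixed number of frames; the OrderedDict
    keeps pages in reference order, oldest first."""
    def __init__(self, nframes):
        self.nframes = nframes
        self.pages = OrderedDict()

    def reference(self, page):
        """Record a reference; return True if it faulted."""
        if page in self.pages:
            self.pages.move_to_end(page)    # now most recently used
            return False
        if len(self.pages) == self.nframes:
            self.pages.popitem(last=False)  # evict least recently used
        self.pages[page] = True
        return True

lru = LRUFrames(2)
print([lru.reference(p) for p in [0, 1, 0, 2, 1]])
# [True, True, False, True, True] -- referencing 2 evicts page 1
```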

LRU is definitely implementable, but a straightforward implementation is expensive: the ordering must be updated on every memory reference, not just on page faults.

Homework: 19, 20

A hardware cutsie is in Tanenbaum.

3.4.7: Simulating LRU in Software

The Not Frequently Used (NFU) PRA

Include a counter in each PTE. Every k clock ticks, add the R bit to the counter (and reset R). Choose as victim the page with the smallest count.

The Aging PRA

NFU doesn't distinguish between old references and recent ones. Modify NFU so that, every k clock ticks, for all PTEs:

  1. The counter is shifted right one bit.
  2. R is inserted as the new high-order bit (HOB).

R  counter
1  10000000
0  01000000
1  10100000
1  11010000
0  01101000
0  00110100
1  10011010
1  11001101
0  01100110
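The table above can be reproduced with a one-line aging step (shift right, insert R as the high-order bit):

```python
def age(counter, r, nbits=8):
    """One aging step: shift right, insert R as the new HOB."""
    return (counter >> 1) | (r << (nbits - 1))

# The R bits observed at successive ticks (left column of the table).
r_bits = [1, 0, 1, 1, 0, 0, 1, 1, 0]
counter = 0
history = []
for r in r_bits:
    counter = age(counter, r)
    history.append(format(counter, '08b'))
print(history[-1])   # 01100110, matching the last row of the table
```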

Homework: 21, 25

3.5: Modeling Paging Algorithms

3.5.1: Belady's anomaly

Consider the following ``reference string'' (sequence of pages referenced), which is assumed to occur on a system with no pages loaded initially that uses the FIFO PRA.

 0 1 2 3 0 1 4 0 4 1 2 3 4

If we have 3 frames this generates 9 page faults.

If we have 4 frames this generates 10 page faults.
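A small FIFO simulator (names mine) confirms the counts:

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults for FIFO replacement, starting empty."""
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.discard(queue.popleft())   # evict the oldest
            frames.add(page)
            queue.append(page)
    return faults

refs = [0, 1, 2, 3, 0, 1, 4, 0, 4, 1, 2, 3, 4]
print(fifo_faults(refs, 3), fifo_faults(refs, 4))   # 9 10
```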

Theory has been developed showing that certain PRAs (so-called ``stack algorithms'') cannot suffer this anomaly for any reference string. FIFO is clearly not a stack algorithm; LRU is.

Repeat the above for LRU.

3.6: Design issues for (demand) Paging

3.6.1 & 3.6.2: The Working Set Model and Local vs Global Policies

I will do these in the reverse order (which makes more sense). Also, Tanenbaum doesn't actually define the working set model, but I shall.

A local PRA is one in which the victim page is chosen from among the pages of the same process that requires a new page. That is, the number of frames each process holds is fixed. So local LRU means evicting the page least recently used by this process.

Of course we can't have a purely local policy. Why?
Ans: A new process has no pages, and even if we exempted the first page loaded, the process would remain forever with only one page.

Perhaps wait until a process has been running a while.

A global policy is one in which the choice of victim is made among all pages of all processes.

If we apply global LRU indiscriminately with some sort of RR processor scheduling policy, and memory is somewhat over-committed, then by the time we get around to a process, all the others have run and have probably paged this process out.

If this happens, each process will page fault at a high rate; this is called thrashing. It would therefore be good to get an idea of how many pages a process needs, so that we can balance the local and global desires.