Lots of email answers on the web about the banker's algorithm

---------------- Page Replacement Algorithms ----------------

(Called choosing a victim above)

The guiding principle is locality
    Temporal locality: a word referenced now is likely to be referenced again in the near future
    Spatial locality: words near the current reference are likely to be referenced in the near future
Locality suggests choosing "stale" pages as victims
Spatial locality also suggests using largish pages and prefetching
Clean pages are cheaper victims
    Why?  No need to write them back to disk
    Maybe not cheaper, because they might be heavily used (e.g., code)

HOMEWORK 9.4

Random (replacement)
    Ignores the principles
    Used for comparison purposes

Optimal (Belady min)
    Choose as victim the page whose next reference is furthest in the future
    Not implementable without a crystal ball or rerunning the program
    Provably optimal
    Used for comparison purposes

FIFO
    Victim = the page whose last loading into memory was furthest in the past
    Amazing anomaly (Belady's): can have MORE faults with MORE frames
        Try the reference string 1 2 3 4 1 2 5 1 2 3 4 5 with three frames and with four frames

LRU (Least Recently Used)
    Victim = the page whose last reference was furthest in the past

    HOMEWORK 9.11

    Works well
    Does not have Belady's anomaly (neither does Optimal)

    HOMEWORK 9.13 (ask me in class next time for the answer)

    Hard (i.e., expensive) to implement
        Store timestamps in the PTE and search for the oldest
            You must be kidding
        Doubly link the PTEs as a stack, newest on top
            Two extra pointers per PTE
            Pointer updating on MANY memory references
        Basically hopeless without hardware help

Simple approximation to LRU: Not Recently Used (NRU, also called NUR)
    A little hardware help: a ref bit in the PTE, set on each reference
    Choose a victim (at random) whose ref bit is NOT set
    Start the next victim search where this one left off (clock)
    When do you clear the ref bits?
        Periodically clear all
        On every page fault (or every kth fault) clear all
        When all ref bits are set, clear all
        When the clock hand passes a page, clear just that one bit

Enhanced NRU algorithms
    Can have two bits, ref and dirty
        View dirty as the high-order bit, ref as the low-order bit
        Choose a victim (at random) with the lowest value for the bits
    Can have k ref bits (plus a dirty bit if desired)
        On each reference, right shift the current ref bits and set the high-order bit
        Choose a victim (at random) with the lowest value for the bits

Second chance algorithm (see the sketch below)
    FIFO (clock), but check the ref bit
        If the bit is not set, evict
        If it is set, clear the bit and move on to the next page
    At worst you go all the way around; the original choice will then be selected since its bit is now off
    Enhanced version: two bits, one for ref and one for dirty
        Look for the best class (unreferenced, clean); if there is none, try the next class, etc.

HOMEWORK 9.6
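The second chance (clock) rules above translate almost line for line into code. Here is a minimal C sketch; NFRAMES, the frames[] table, its ref field, and choose_victim are made-up names for illustration, since a real kernel keeps the reference bit in the page table entry (set by hardware on each reference) and the rest in its frame table.

/* Minimal sketch of the second chance (clock) algorithm described above.
   All names here are hypothetical; this is not any particular kernel's code. */

#include <stdbool.h>
#include <stdio.h>

#define NFRAMES 64                     /* assumed number of physical frames */

struct frame {
    int  page;                         /* virtual page occupying this frame */
    bool ref;                          /* reference bit, set on each use    */
};

static struct frame frames[NFRAMES];
static int hand = 0;                   /* clock hand: where the last search stopped */

/* Return the index of the frame to evict. */
int choose_victim(void)
{
    for (;;) {
        if (!frames[hand].ref) {       /* ref bit off: evict this page      */
            int victim = hand;
            hand = (hand + 1) % NFRAMES;
            return victim;
        }
        /* Ref bit on: give the page a second chance.  Clear the bit and
           advance the hand.  At worst we sweep all the way around; by then
           the first page we examined has its bit cleared and is chosen. */
        frames[hand].ref = false;
        hand = (hand + 1) % NFRAMES;
    }
}

int main(void)
{
    /* Tiny demo: mark frames 0 and 1 as recently referenced; the clock
       should then pick frame 2 as the victim. */
    frames[0].ref = frames[1].ref = true;
    printf("victim = frame %d\n", choose_victim());
    return 0;
}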
Local is "fair" but global works better Working Set Model (Denning) Can/will thrash if multiprogramming level MPL too high To each according to its need w (omega) the working set window size W(t,w) = { pages refed from t-w to t } is the working set w*(t,w) = | W(t,W) | (w* is w distinguished from w=omega) Choose w (not very sensitive) Adjust MPL so that working set is in memory That is SUM over all (non-suspended) processes of w < # frames Medium term scheduling Believed to work well but expensive to implement (keeping track of W) Approximations to working set PFF (Page Fault Frequency) If the faulting freq is too high a process needs more frames If all processes need more frames, lower MPL WS clock Like NRU On a fault start circular scan If used, set unused and record time If unused and old (time set above), remove If unused and new, skip over If all pages unused and new, reduce MPL Demand segmentation Makes sense Harder to implement External Fragmentation Was used by OS2 on 286 ---------------- End Chapter 9 ---------------- ---------------- Start Chapter 10 File System Interface ---------------- File - Named collection of (hopefully) related data File attributes often called metadata Name Type Location (on secondary storage) Size Protection Timestamps, user id File Operations Create Reading Writing/appending Delete Truncate (remove data keep metadata) Seek (could be part of read/write) Open get handle for the file Close Get/Set attributes