Protection

  Put protection bits in each PTE (page table entry)
  Keep the length of the addr space, i.e. the size of the PT
    PTLR (PT length register)
    If p# >= PTLR, trap

Reducing the contiguous size of the PT

  The PTLR as above reduces the overall size
  Multilevel paging
    By paging the page table itself, the PT need not be contiguous
    Can have a bunch of levels
    Will see next chapter that this can actually reduce the overall
    memory consumed by the PT (not all of the PT is then memory
    resident)

Inverted page table

  It is really a page frame table: indexed by frame#, it gives the
  p# and PID (process ID)
  Subtle coding used in the IBM RT/PC
    More complicated than in the book
    We dealt with this in our research
    Not covered (beyond the definition)

Shared pages

  Two PTEs point to the same frame
  RISKY if read/write

  HOMEWORK 8.11

Segmentation

  User-VISIBLE division of the virtual addr space
  Variable size pieces
  Now an addr is (s#, offset)

  HOMEWORK 8.16

  Sample segments
    Global variables
    Procedures

  Segment table, indexed by s#
    Each entry contains the size (limit) and the starting phys addr
    (base)
    STBR points to the beginning of the ST
    STLR gives the length of the ST

  Implementation
    Naive is like naive paging: two memory refs, hopeless
    Figure 8.23 has a bug: the arrows from d and s are reversed
    Again use a TLB

  Protection and sharing
    More natural than for paging, since the segment boundaries are
    logical divisions in the program, not just wherever a 2K boundary
    happened to fall
    Sharing is not trivial, since the processes must agree on the
    segment number because the segment quite possibly refers to
    itself (code has jumps)
      PC-relative addresses are fine
      Addresses with the s# in a register are fine
      Addresses with the s# in the displacement are no good

    HOMEWORK 8.14

  Suffers from external fragmentation since segments are variable
  size

Segmentation + paging

  Each STE (segment table entry) contains a size and a PTBR (pointer
  to that segment's PT)
  An addr is still (s#, off)
  Figure 8.26
  Since the offset is itself paged, it is really (p#, off), so an
  addr is really (s#, p#, off)
  Three memory refs for the naive implementation; again use a TLB
  (In fact the ST itself could be paged, so the s# is really two
  components and we have a 4-part addr.)
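As a toy illustration of the multilevel-paging idea above, splitting a
virtual address into two PT indices plus an offset is just shifting and
masking.  All sizes here are hypothetical (32-bit addresses, 4K pages,
two 10-bit indices), not anything the notes specify:

```python
# Toy two-level address split.  Hypothetical machine: 32-bit virtual
# address, 4K pages, so the 20-bit p# is split into two 10-bit indices.
OFFSET_BITS = 12   # 4K pages
INNER_BITS  = 10   # index into a second-level PT
OUTER_BITS  = 10   # index into the top-level PT

def split(vaddr):
    """Return (p1, p2, offset): outer index, inner index, page offset."""
    offset = vaddr & ((1 << OFFSET_BITS) - 1)
    p2 = (vaddr >> OFFSET_BITS) & ((1 << INNER_BITS) - 1)
    p1 = vaddr >> (OFFSET_BITS + INNER_BITS)
    return p1, p2, offset
```

Only the top-level table (here 1024 entries) must exist in full; a
second-level table is needed only for the p1 values actually used,
which is why the PT need not be contiguous or fully resident.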
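A minimal sketch of the (s#, p#, off) translation under segmentation +
paging.  The class, function, and table contents are hypothetical
illustrations, not a real design; the limit check and the per-segment
page table are the two points from the notes:

```python
# Segmentation + paging, naive translation (hypothetical toy machine).
PAGE_SIZE = 4096

class SegEntry:
    def __init__(self, limit, page_table):
        self.limit = limit            # segment size in bytes
        self.page_table = page_table  # p# -> frame#, i.e. what PTBR points to

def translate(seg_table, s, offset):
    if s >= len(seg_table):           # like p# >= PTLR: trap
        raise MemoryError("segment number out of range")
    entry = seg_table[s]
    if offset >= entry.limit:         # offset checked against the limit
        raise MemoryError("offset beyond segment limit")
    p, off = divmod(offset, PAGE_SIZE)   # the offset is itself paged
    frame = entry.page_table[p]          # second memory ref (PT)
    return frame * PAGE_SIZE + off       # physical address
```

Done naively this really does cost a memory reference for the ST entry
and another for the PTE before the data itself, which is why a TLB is
used in practice.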
  HOMEWORK 8.12

---------------- Chapter 9 Virtual Memory ----------------

It is not an efficient use of memory to keep an entire job loaded
while it runs
  Some code is rarely (if ever) used
  Some data structures are larger than needed
  Some data is used only in certain phases of the program

Virtual memory: the separation of user logical memory from physical
memory
  Commonly implemented by demand paging
    Indeed, common usage is to equate demand paging and virtual
    memory
  Could instead have demand segmentation
    OS/2 on the 286 (not the 386) does this

---------------- Demand Paging ----------------

First used in the Atlas computer (Univ of Manchester)

All pages are assigned a disk block
  Only some are RESIDENT, i.e. assigned to a page frame

Add a "valid" bit to each PTE
  If the valid bit is set, treat the reference as in chapter 8
  If the valid bit is not set, the page is not resident; it is only
  on disk
    Must know the disk block
      Could store it in the PTE in place of the mem addr (if the
      entry is big enough)
      Could store all the pages contiguously on disk
      Could store all static mem (known at load time) contiguously
      on disk
    Find a free frame (what if none exists? later)
    Read the disk block into this frame
      Really, schedule the I/O and block the process
      Make the process ready after the I/O completes
    Now it is back to normal

HOMEWORK 9.1 9.3

Some instructions can generate MANY page faults
  An instruction could straddle a page boundary
  It can reference several memory operands
  The memory operands could straddle page boundaries or could be BIG
  RISC machines do not do much of this
  Potential DISASTER
    Restart (rather than resume) the instruction after the TLB miss
    is satisfied
    Have fewer TLB entries than the max number of misses
    Oops

Performance impact
  Cache hit < 10 nanoseconds
  Page hit  < 100 nanoseconds
  Page miss > 10 milliseconds
    > a million cache hits
    > 100 thousand page hits

Finding a free frame (from above)
  Good to keep a bunch free at all times so you don't have to wait
    When analyzing the number of misses, one generally assumes you
    do NOT keep a bunch free
    HOMEWORK 9.2
  When too few are free
    Choose a victim (how? later)
    Write the victim to disk if dirty
    Mark the victim's PTE as invalid
    Add the frame to the free list

HOMEWORK 9.9 (Ask me next time to give the answer)
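The performance figures above make a back-of-the-envelope effective
access time easy.  This sketch uses the rough bounds from the notes
(~100 ns per page hit, ~10 ms per page miss); the function name and the
exact constants are mine, not the chapter's:

```python
# Effective access time for demand paging, using the rough figures
# above: ~100 ns for a page hit, ~10 ms (10,000,000 ns) for a miss.
PAGE_HIT_NS  = 100
PAGE_MISS_NS = 10_000_000

def effective_access_ns(fault_rate):
    """Average memory access time in ns, given the page-fault probability."""
    return (1 - fault_rate) * PAGE_HIT_NS + fault_rate * PAGE_MISS_NS
```

Even a fault rate of 1 in 10,000 gives roughly 1100 ns per access,
about 11 times slower than a pure page hit, which is why faults must be
extremely rare and why keeping some frames free (so a fault need not
wait for a victim to be written out) matters.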