==== Start Lecture #5 ====

See announcements on course home page

Introduces the ``Placement Question'', which hole (partition) to choose

Homework: 2, 5.

Also introduces the ``Replacement Question'', which victim to swap out

We will study this question more when we discuss demand paging

Considerations in choosing a victim
NOTEs:
  1. So far the schemes have had two properties
    1. Each job is stored contiguously in memory. That is, the job is contiguous in physical addresses.
    2. Each job cannot use more memory than exists in the system. That is, the virtual address space cannot exceed the physical address space.

  2. Tanenbaum now attacks the second item. I wish to do both and start with the first.

  3. Tanenbaum (and most of the world) uses the term ``paging'' to mean what I call demand paging. This is unfortunate as it mixes together two concepts:
    1. Paging (dicing the address space) to solve the placement problem and essentially eliminate external fragmentation.
    2. On demand fetching, to permit the total memory requirements of all loaded jobs to exceed the size of physical memory.

  4. Tanenbaum (and most of the world) uses the term virtual memory as a synonym for demand paging. Again I consider this unfortunate.
    1. Demand paging is a fine term and is quite descriptive.
    2. Virtual memory ``should'' be used in contrast with physical memory to describe any virtual to physical address translation.

** (non-demand) Paging

Simplest scheme to remove the requirement of contiguous physical memory.

Example: Assume a decimal machine, with pagesize=framesize=1000.
Assume PTE 3 contains 459.
Then virtual address 3372 (page 3, offset 372) corresponds to physical address 459372 (frame 459, offset 372).
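The decimal example above can be sketched in a few lines of Python; the page table contents are just the single entry from the example.

```python
# Sketch of the decimal paging example: page size = frame size = 1000,
# so a virtual address splits into (page number, offset) by integer division.
PAGE_SIZE = 1000

# Page table from the example: PTE 3 contains frame number 459.
page_table = {3: 459}

def translate(vaddr):
    page, offset = divmod(vaddr, PAGE_SIZE)
    frame = page_table[page]          # raises KeyError on an unmapped page
    return frame * PAGE_SIZE + offset

print(translate(3372))  # 459 * 1000 + 372 = 459372
```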

Properties of (non-demand) paging.

Homework: 13

Address translation

Choice of page size is discussed below.

Homework: 8, 13, 15.

3.2: Virtual Memory (meaning fetch on demand)

Idea is that a program can execute if only the active portion of its address space is memory resident. That is, portions of a program are swapped in and out as needed. In a crude sense this can be called ``automatic overlays''.

Advantages

3.2.1: Paging (meaning demand paging)

Fetch pages from disk to memory when they are referenced, with a hope of getting the most actively used pages in memory.

Homework: 11.

3.3.2: Page tables

A discussion of page tables is also appropriate for (non-demand) paging, but the issues are more acute with demand paging since the tables can be much larger. Why?
Ans: The total size of the active processes is no longer limited to the size of physical memory.

Want access to the page table to be very fast since it is needed for every memory access.

Unfortunate laws of hardware

So we can't just say, put the page table in fast processor registers and let it be huge and sell the system for $1500.

Put the (one-level) page table in main memory.

Multilevel page tables

Idea, which is also used in Unix inode-based file systems, is to add a level of indirection and have a page table containing pointers to page tables.

Do an example on the board

The VAX used a 2-level page table structure, but with some wrinkles (see Tanenbaum for details).

Naturally, there is no need to stop at 2 levels. In fact the SPARC has 3 levels and the Motorola 68030 has 4 (and the number of bits of virtual address used for P#1, P#2, P#3, and P#4 can be varied).
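The two-level case can be sketched as follows; the field widths (10-bit P#1, 10-bit P#2, 12-bit offset) are invented for illustration, not taken from any particular machine.

```python
# Two-level translation sketch with assumed field widths:
# 10-bit P#1, 10-bit P#2, 12-bit offset (a 32-bit virtual address).
OFFSET_BITS, P2_BITS = 12, 10

def translate2(vaddr, top_table):
    p1 = vaddr >> (OFFSET_BITS + P2_BITS)          # index into top-level table
    p2 = (vaddr >> OFFSET_BITS) & ((1 << P2_BITS) - 1)
    offset = vaddr & ((1 << OFFSET_BITS) - 1)
    second_level = top_table[p1]                   # pointer to a 2nd-level table
    frame = second_level[p2]
    return (frame << OFFSET_BITS) | offset

# Example mapping: virtual page (P#1=1, P#2=2) -> frame 7.
top = {1: {2: 7}}
vaddr = (1 << 22) | (2 << 12) | 0x34
print(hex(translate2(vaddr, top)))                 # frame 7, offset 0x34 -> 0x7034
```

The indirection pays off because second-level tables for unused regions of the address space need never be allocated.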

3.3.4: Associative memory (TLBs)

Note: Tanenbaum suggests that ``associative memory'' and ``translation lookaside buffer'' are synonyms. This is wrong. Associative memory is a general structure and translation lookaside buffer is a special case.

An associative memory is a content addressable memory. That is, you access the memory by giving the value of some field, and the hardware searches all the records and returns the record whose field contains the requested value.

For example

Name  | Animal | Mood     | Color
======+========+==========+======
Moris | Cat    | Finicky  | Grey
Fido  | Dog    | Friendly | Black
Izzy  | Iguana | Quiet    | Brown
Bud   | Frog   | Smashed  | Green
If the index field is Animal and Iguana is given, the associative memory returns
Izzy  | Iguana | Quiet    | Brown
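The table lookup above can be modeled in software; note that real associative hardware compares all records in parallel, whereas this loop is sequential.

```python
# Model of content-addressable lookup: records are found by field value,
# not by position. (Hardware compares every record in parallel.)
records = [
    {"Name": "Moris", "Animal": "Cat",    "Mood": "Finicky",  "Color": "Grey"},
    {"Name": "Fido",  "Animal": "Dog",    "Mood": "Friendly", "Color": "Black"},
    {"Name": "Izzy",  "Animal": "Iguana", "Mood": "Quiet",    "Color": "Brown"},
    {"Name": "Bud",   "Animal": "Frog",   "Mood": "Smashed",  "Color": "Green"},
]

def associative_lookup(field, value):
    return [r for r in records if r[field] == value]

print(associative_lookup("Animal", "Iguana"))  # the Izzy record
```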

A Translation Lookaside Buffer or TLB is an associative memory where the index field is the page number. The other fields include the frame number, dirty bit, valid bit, and others.
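A minimal sketch of a TLB entry and lookup, under the same caveat as above (the loop stands in for a parallel hardware compare; the fields shown are the ones named in the text):

```python
# Sketch of a TLB: index on the page number; a hit returns the frame number,
# a miss means the page table must be consulted (not shown here).
from dataclasses import dataclass

@dataclass
class TLBEntry:
    page: int
    frame: int
    valid: bool = True
    dirty: bool = False

# Hypothetical contents; entry for page 3 matches the earlier paging example.
tlb = [TLBEntry(page=3, frame=459), TLBEntry(page=7, frame=12)]

def tlb_lookup(page):
    for e in tlb:                      # compared in parallel in real hardware
        if e.valid and e.page == page:
            return e.frame
    return None                        # TLB miss

print(tlb_lookup(3))   # 459 (hit)
print(tlb_lookup(5))   # None (miss)
```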