V22.0202: Operating Systems
1999-2000 Spring
Mon Wed 2-3:15
Ciww 109

Allan Gottlieb
gottlieb@nyu.edu
http://allan.ultra.nyu.edu/~gottlieb
715 Broadway, Room 1001
212-998-3344
609-951-2707
email is best

Administrivia

Web Pages

There is a web page for the course. You can find it from my home page.

Textbook

Text is Tanenbaum, "Modern Operating Systems".

A Grade of ``Incomplete''

It is university policy that a student's request for an incomplete be granted only in exceptional circumstances and only if applied for in advance. Of course the application must be before the final exam.

Computer Accounts and majordomo mailing list

Homework and Labs

I make a distinction between homework and labs.

Labs are

Homeworks are

Upper left board for assignments and announcements.

Homework: Read Chapter 1 (Introduction)

Chapter 1. Introduction

Levels of abstraction (virtual machines)

1.1: What is an operating system?

The kernel itself raises the level of abstraction and hides details. Can write to a file (a concept not present in hardware) and ignore whether it is a floppy or hard disk.

The kernel is a resource manager (so users don't conflict).

How is an OS fundamentally different from a compiler (say)?

Answer: Concurrency! Per Brinch Hansen, in Operating Systems Principles (Prentice Hall, 1973), writes:

The main difficulty of multiprogramming is that concurrent activities can interact in a time-dependent manner, which makes it practically impossible to locate programming errors by systematic testing. Perhaps, more than anything else, this explains the difficulty of making operating systems reliable.

1.2 History of Operating Systems

  1. Single user (no OS)
  2. Batch, uniprogrammed, run to completion
  3. Multiprogrammed
  4. Multiple computers
  5. Real time systems
Homework: 1, 2, 5 (unless otherwise stated, problem numbers are from the end of the chapter in Tanenbaum.)

1.3: Operating System Concepts

This will be brief. Much of the rest of the course will consist of ``filling in the details''.

1.3.1: Processes

A program in execution.

Often one distinguishes the state or context (memory image, open files) from the thread of control. Then if one has many threads running in the same task, the result is a ``multithreaded process''.

The OS keeps information about all processes in the process table. Indeed, the OS views the process as its entry in this table. An example of an active entity being viewed as a data structure (cf. discrete event simulations).

The set of processes forms a tree via the fork system call. The forker is the parent of the forkee.

A signal can be sent to a process to cause it to execute a predefined function (the signal handler). This can be tricky to program since the programmer does not know when in his ``main'' program the signal handler will be invoked.
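
Here is a minimal C sketch of installing a handler (not from the text; the handler name and message are mine). After signal() is called, typing ctrl-C delivers SIGINT and the handler runs at an unpredictable point in main's execution.

  #include <signal.h>
  #include <unistd.h>

  void handler(int sig)                 /* the signal handler */
  {
      (void)sig;
      /* only async-signal-safe calls belong here; write() is one */
      write(1, "caught SIGINT\n", 14);
  }

  int main(void)
  {
      signal(SIGINT, handler);          /* install the handler */
      for (;;)
          pause();                      /* sleep; ctrl-C now invokes handler() */
  }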


================ Start Lecture #2 ================

1.3.2: Files

Modern systems have a hierarchy of files. A file system tree.

Files and directories normally have permissions

Devices (mouse, tape drive, cdrom) are often viewed as ``special files''. In a unix system these are normally found in the /dev directory. Some utilities that are normally applied to (ordinary) files can also be applied to some special files. For example, when you are accessing a unix system and do not have anything serious going on (e.g., right after you log in), type the following command

    cat /dev/mouse
and then move the mouse. You kill the cat by typing cntl-C. I tried this on my linux box and no damage occurred. Your mileage may vary.

Many systems have standard files that are automatically made available to a process upon startup. These (initial) file descriptors are fixed

A convenience offered by some command interpreters is a pipe. The pipeline

  ls | wc
will give the number of files in the directory (plus other info).
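
Under the covers the command interpreter builds this with the pipe, fork, dup2, and exec system calls. A rough C sketch (error checking omitted; this is not the actual shell source) of how ``ls | wc'' could be set up:

  #include <sys/wait.h>
  #include <unistd.h>

  int main(void)
  {
      int fd[2];
      pipe(fd);                         /* fd[0] = read end, fd[1] = write end */

      if (fork() == 0) {                /* first child becomes "ls" */
          dup2(fd[1], 1);               /* its stdout feeds the pipe */
          close(fd[0]); close(fd[1]);
          execlp("ls", "ls", (char *)0);
          _exit(1);                     /* reached only if exec failed */
      }
      if (fork() == 0) {                /* second child becomes "wc" */
          dup2(fd[0], 0);               /* its stdin drains the pipe */
          close(fd[0]); close(fd[1]);
          execlp("wc", "wc", (char *)0);
          _exit(1);
      }
      close(fd[0]); close(fd[1]);       /* parent closes both ends */
      wait(NULL);
      wait(NULL);
      return 0;
  }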

Homework: 3

1.3.3: System Calls

System calls are the way a user (i.e. program) directly interfaces with the OS. Some textbooks use the term envelope for the component of the OS responsible for fielding system calls and dispatching them. Here is a picture showing some of the components and the external events for which they are the interface.

What happens when a user writes a function call such as read()?

  1. Normal function call (in C, ada, etc.)
  2. Library routine (in C)
  3. Small assembler routine
    1. Move arguments to predefined place (perhaps registers)
    2. Poof (a trap instruction) and then the OS proper runs in supervisor mode
    3. Fixup result (move to correct place)
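
At the C level the programmer sees only the ordinary function call of step 1; the library routine and the trap are invisible. A tiny sketch (the file name is made up):

  #include <fcntl.h>
  #include <unistd.h>

  int main(void)
  {
      char buf[100];

      int fd = open("somefile", O_RDONLY);     /* open is itself a system call */
      ssize_t n = read(fd, buf, sizeof buf);   /* looks like an ordinary function call ... */
      close(fd);                               /* ... but steps 2 and 3 happen underneath */
      return n < 0;
  }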

Homework: 6

1.3.4: The shell

Assumed knowledge

Homework: 9.

1.4: OS Structure

I must note that Tanenbaum is a big advocate of the so-called microkernel approach in which as much as possible is moved out of the (protected) microkernel into usermode components.

In the early 90s this was popular. Digital Unix and Windows NT were examples. Digital Unix was based on Mach, a research OS from Carnegie Mellon University. Lately, the growing popularity of Linux has called this into question.

1.4.1: Monolithic approach

The previous picture: one big program

The system switches from user mode to kernel mode during the poof and then back when the OS does a ``return''.

But of course we can structure the system better, which brings us to layered systems.

1.4.2: Layered Systems

Some systems have more layers and are more strictly structured.

An early layered system was ``THE'' operating system by Dijkstra. The layers were:

  1. The operator
  2. User programs
  3. I/O mgt
  4. Operator-process communication
  5. Memory and drum management

The layering was done by convention, i.e., there was no enforcement by hardware and the entire OS was linked together as one program. This is true of many modern systems as well (e.g., Linux).

The multics system was layered in a more formal manner. The hardware provided several protection layers and the OS used them. That is, arbitrary code could not jump to or access data in a more protected layer.

1.4.3: Virtual machines

Use a ``hypervisor'' (beyond supervisor) to switch between multiple Operating Systems

1.4.4: Client-Server

When implemented on one computer, a client-server OS uses the microkernel approach: the microkernel just supplies interprocess communication and the main OS functions are provided by a number of usermode processes.

This does have advantages. For example an error in the file server cannot corrupt memory in the process server. This makes errors easier to track down.

But it does mean that when a (real) user process makes a system call there are more switches from user to kernel mode and back. These are not free.

A distributed system can be thought of as an extension of the client server concept where the servers are remote.

Homework: 11


================ Start Lecture #3 ================

Interlude on Linkers

Originally called linkage editors by IBM.

This is an example of a utility program included with an operating system distribution. Like a compiler, it is not part of the operating system per se, i.e. it does not run in supervisor mode. Unlike a compiler it is OS dependent (what object/load file format is used) and is not (normally) language dependent.

What does a Linker Do?

Link of course.

When the assembler has finished it produces an object module that is almost runnable. There are two primary problems that must be solved for the object module to be runnable. Both are involved with linking (that word, again) together multiple object modules.

  1. Relocating relative addresses.


  2. Resolving external references.


The output of a linker is called a load module because it is now ready to be loaded and run.

To see how a linker works let's consider the following example, which is the first dataset from lab #1. The description in lab #1 is more detailed.

The target machine is word addressable and has a memory of 1000 words, each consisting of 4 decimal digits. The first (leftmost) digit is the opcode and the remaining three digits form an address.

Each object module contains three parts, a definition list, a use list, and the program text itself. Each definition is a pair (sym, loc). Each use is a pair (sym, loc). The address in loc points to the next use or is 999 to end the chain.

For those text entries that do not form part of a use chain a fifth (leftmost) digit is added. If it is 8, the address in the word is relocatable. If it is 0 (and hence omitted), the address is absolute.

Sample input
1 xy 2
1 z 4
5 81234 5678 2999 88888 7002
0
1 z 3
6 88888 1999 1001 3002 81002 1234
0
1 z 1
2 81234 4999
1 z 2
1 xy 2
3 8000 1999 2001

I will illustrate a two-pass approach: The first pass simply produces the symbol table giving the values for xy and z (2 and 15 respectively). The second pass does the real work (using the values in the symbol table).

It is faster (less I/O) to do a one-pass approach, but it is harder since you need ``fix-up code'' whenever a use occurs in a module that precedes the module with the definition.
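
Here is a rough C sketch of pass 1 only (my own variable names; it is not the lab solution and does no error checking). It reads the input format described above, relocates each definition by the base of its module, and prints the symbol table. Running it on the sample input produces the values shown next.

  #include <stdio.h>

  int main(void)
  {
      char sym[200][17];           /* defined symbols                */
      int  val[200], nsyms = 0;    /* their absolute values          */
      int  base = 0;               /* where the current module loads */
      int  n, loc, word;
      char s[17];

      while (scanf("%d", &n) == 1) {             /* definition count */
          for (int i = 0; i < n; i++) {          /* definitions: sym loc */
              scanf("%16s %d", sym[nsyms], &loc);
              val[nsyms++] = base + loc;         /* relocate the definition */
          }
          scanf("%d", &n);                       /* use count */
          for (int i = 0; i < n; i++)            /* uses are ignored in pass 1 */
              scanf("%16s %d", s, &loc);
          scanf("%d", &n);                       /* text length */
          for (int i = 0; i < n; i++)
              scanf("%d", &word);                /* skip the text words */
          base += n;                             /* next module starts here */
      }
      for (int i = 0; i < nsyms; i++)
          printf("%s=%d\n", sym[i], val[i]);
      return 0;
  }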

xy=2
z=15

+0
0:      81234           1234+0=1234
1:       5678           5678
2: xy:   2999   ->z     2015
3:      88888           8888+0=8888
4: ->z   7002           7015
+5
0       88888           8888+5=8893
1        1999   ->z     1015
2        1001   ->z     1015
3 ->z    3002           3015
4       81002           1002+5=1007
5        1234           1234
+11
0       81234           1245
1 ->z    4999           4015
+13
0        8000           8000
1        1999   ->xy    1002    
2 z:->xy 2001           2002

The linker on unix is mistakenly called ld (for loader), which is unfortunate since it links but does not load.

Lab #1: Implement a linker. The specific assignment is detailed on the sheet handed out in class and is due in three weeks, on 16 February. The content of the handout is available on the web as well (see the class home page).

End of Interlude on Linkers

Chapter 2: Process Management

Tanenbaum's chapter title is ``processes''. I prefer process management. The subject matter is processes, process scheduling, interrupt handling, and IPC (Interprocess communication--and coordination).

2.1: Processes

Definition: A process is a program in execution.

We are assuming a multiprogramming OS that automatically switches from one process to another. Sometimes this is called pseudoparallelism since one has the illusion of a parallel processor. The other possibility is real parallelism in which two or more processes are actually running at once because the computer system is a parallel processor, i.e., has more than one processor. We do not study real parallelism (parallel processing, distributed systems, multiprocessors, etc) in this course.


================ Start Lecture #4 ================

2.1.1: The Process Model

Even though in actuality there are many processes running at once, the OS gives each process the illusion that it is running alone.

Virtual time and virtual memory are examples of abstractions provided by the operating system to the user processes so that the latter ``sees'' a more pleasant virtual machine than actually exists.

Process Hierarchies

Modern general purpose operating systems permit a user to create and destroy processes. In unix this is done by the fork system call, which creates a child process, and the exit system call, which terminates the current process. After a fork both parent and child keep running (indeed they have the same program text) and each can fork off other processes. A process tree results. The root of the tree is a special process created by the OS during startup.
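
A minimal C illustration (not from the text): fork() returns 0 in the child and the child's pid in the parent, so the same program text takes both branches.

  #include <stdio.h>
  #include <sys/types.h>
  #include <sys/wait.h>
  #include <unistd.h>

  int main(void)
  {
      pid_t pid = fork();               /* create a child process */

      if (pid == 0) {                   /* child: same program text */
          printf("child %d, parent %d\n", (int)getpid(), (int)getppid());
          _exit(0);                     /* exit terminates the child */
      } else {                          /* parent keeps running too */
          printf("parent %d, child %d\n", (int)getpid(), (int)pid);
          wait(NULL);                   /* wait for the child to exit */
      }
      return 0;
  }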

MS-DOS is not multiprogrammed so when one process starts another, the first process is blocked and waits until the second is finished.

Process states and transitions

The above diagram contains a great deal of information.

One can organize an OS around the scheduler.

The above is called the client-server model and is one Tanenbaum likes. His ``Minix'' operating system works this way. Indeed, there was reason to believe that it would dominate. But that hasn't happened. Such an OS is sometimes called server based. Systems like traditional unix or linux would then be called self-service since the user process serves itself. That is, the user process switches to kernel mode and performs the system call. To repeat: the same process changes back and forth from/to user<-->system mode and services itself.

2.1.3: Implementation of Processes

The OS organizes the data about each process in a table naturally called the process table. Each entry in this table is called a process table entry or PTE.

An aside on Interrupts

In a well defined location in memory (specified by the hardware) the OS stores an interrupt vector, which contains the address of the (first level) interrupt handler.

Assume a process P is running and a disk interrupt occurs for the completion of a disk read previously issued by process Q, which is currently blocked. Note that interrupts are unlikely to be for the currently running process.

  1. The hardware stacks the program counter etc (possibly some registers)
  2. Hardware loads new program counter from the interrupt vector.
  3. Assembly language routine saves registers
  4. Assembly routine sets up new stack
  5. Assembly routine calls C procedure (Tanenbaum forgot this one)
  6. C procedure does the real work
  7. The C procedure (that did the real work in the interrupt processing) continues and returns to the assembly code.
  8. Assembly language restores P's state (e.g., registers) and starts P at the point it was when the interrupt occurred.

2.2: Interprocess Communication (IPC) and Process Coordination and Synchronization

2.2.1: Race Conditions

A race condition occurs when two processes can interact and the outcome depends on the order in which the processes execute.
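
A classic illustration in C with pthreads (names are mine): both threads execute x<--x+1, which compiles to a load, an add, and a store, so updates can be lost depending on the interleaving.

  #include <pthread.h>
  #include <stdio.h>

  long x = 0;                           /* the shared variable */

  void *adder(void *arg)
  {
      (void)arg;
      for (int i = 0; i < 1000000; i++)
          x = x + 1;                    /* the racy statement */
      return NULL;
  }

  int main(void)
  {
      pthread_t t1, t2;
      pthread_create(&t1, NULL, adder, NULL);
      pthread_create(&t2, NULL, adder, NULL);
      pthread_join(t1, NULL);
      pthread_join(t2, NULL);
      printf("x = %ld (often well below 2000000)\n", x);
      return 0;
  }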

Homework: 2

2.2.2: Critical sections

We must prevent interleaving sections of code that need to be atomic with respect to each other. That is, the conflicting sections need mutual exclusion. If process A is executing its critical section, it excludes process B from executing its critical section. Conversely, if process B is executing its critical section, it excludes process A from executing its critical section.

Goals for a critical section implementation.

  1. No two processes may be simultaneously inside their critical section
  2. No assumption may be made about the speeds or the number of CPUs
  3. No process outside its critical section may block other processes
  4. No process should have to wait forever to enter its critical section

2.2.3 Mutual exclusion with busy waiting

The operating system can choose not to preempt itself. That is, no preemption for system processes (if the OS is client-server) or for processes running in system mode (if the OS is self-service). Forbidding preemption for system processes would prevent the problem above, where the non-atomicity of x<--x+1 crashed the printer spooler, if the spooler is part of the OS.

But this is not adequate

Software solutions for two processes

Initially P1wants=P2wants=false

Code for P1                             Code for P2

Loop forever {                          Loop forever {
    P1wants <-- true         ENTRY          P2wants <-- true
    while (P2wants) {}       ENTRY          while (P1wants) {}
    critical-section                        critical-section
    P1wants <-- false        EXIT           P2wants <-- false
    non-critical-section }                  non-critical-section }

Explain why this works.

But it is wrong! Why?


================ Start Lecture #5 ================

Let's try again. The trouble was that setting want before the loop permitted us to get stuck. We had them in the wrong order!

Initially P1wants=P2wants=false

Code for P1                             Code for P2

Loop forever {                          Loop forever {
    while (P2wants) {}       ENTRY          while (P1wants) {}
    P1wants <-- true         ENTRY          P2wants <-- true
    critical-section                        critical-section
    P1wants <-- false        EXIT           P2wants <-- false
    non-critical-section }                  non-critical-section }

Explain why this works.

But it is wrong again! Why?

So let's be polite and really take turns. None of this wanting stuff.

Initially turn=1

Code for P1                      Code for P2

Loop forever {                   Loop forever {
    while (turn = 2) {}              while (turn = 1) {}
    critical-section                 critical-section
    turn <-- 2                       turn <-- 1
    non-critical-section }           non-critical-section }

This one forces alternation, so is not general enough. Specifically, it does not satisfy condition three, which requires that no process in its non-critical section can stop another process from entering its critical section. With alternation, if one process is in its non-critical section (NCS) then the other can enter the CS once but not again.

In fact, it took years (way back when) to find a correct solution. Many earlier ``solutions'' were found and several were published, but all were wrong. The first true solution was found by Dekker. It is very clever, but I am skipping it (I cover it when I teach OS II). Subsequently, algorithms with better fairness properties were found (e.g., no task has to wait for another task to enter the CS twice).

What follows is Peterson's solution. When it was published, it was a surprise to see such a simple solution. In fact Peterson gave a solution for any number of processes. A proof that the algorithm satisfies our properties (including a strong fairness condition) can be found in Operating Systems Review, Jan. 1990, pp. 18-22.

Initially P1wants=P2wants=false  and  turn=1

Code for P1                        Code for P2

Loop forever {                     Loop forever {
    P1wants <-- true                   P2wants <-- true
    turn <-- 2                         turn <-- 1
    while (P2wants and turn=2) {}      while (P1wants and turn=1) {}
    critical-section                   critical-section
    P1wants <-- false                  P2wants <-- false
    non-critical-section }            non-critical-section }
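
A C rendering of the same algorithm for two threads (0 and 1), using C11 atomics so that the compiler and hardware do not reorder the loads and stores; with plain variables the algorithm is not safe on modern machines. The function names are mine.

  #include <stdatomic.h>
  #include <stdbool.h>

  atomic_bool wants[2];                 /* P1wants, P2wants */
  atomic_int  turn;

  void enter_cs(int me)                 /* the ENTRY code */
  {
      int other = 1 - me;
      atomic_store(&wants[me], true);
      atomic_store(&turn, other);
      while (atomic_load(&wants[other]) && atomic_load(&turn) == other)
          ;                             /* busy wait */
  }

  void leave_cs(int me)                 /* the EXIT code */
  {
      atomic_store(&wants[me], false);
  }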

Hardware assist (test and set)

TAS(b) where b is a binary variable ATOMICALLY sets b<--true and returns the OLD value of b. Of course it would be silly to return the new value of b since we know the new value is true

Now implementing a critical section for any number of processes is trivial.

loop forever {
    while (TAS(s)) {}   ENTRY
    CS
    s<--false           EXIT
    NCS
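
C11 provides exactly this primitive as atomic_flag_test_and_set: it atomically sets the flag and returns the old value. A sketch of the entry and exit code above (function names are mine):

  #include <stdatomic.h>

  atomic_flag s = ATOMIC_FLAG_INIT;     /* plays the role of s; initially false */

  void entry(void)                      /* the ENTRY code */
  {
      while (atomic_flag_test_and_set(&s))
          ;                             /* spin until the old value was false */
  }

  void exit_cs(void)                    /* the EXIT code: s <-- false */
  {
      atomic_flag_clear(&s);
  }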

P and V and Semaphores

Note: Tanenbaum does both busy waiting (like above) and blocking (process switching) solutions. We will only do busy waiting.

Homework: 3

The entry code is often called P and the exit code V (Tanenbaum only uses P and V for blocking, but we use them for busy waiting as well). So the critical section problem is to write P and V so that

loop forever
    P
    critical-section
    V
    non-critical-section
satisfies
  1. Mutual exclusion
  2. No speed assumptions
  3. No blocking by processes in NCS
  4. Forward progress (my weakened version of Tanenbaum's last condition)

Note that I use indenting carefully and hence do not need (and sometimes omit) the braces {}

A binary semaphore abstracts the TAS solution we gave for the critical section problem.

The above code is not real, i.e., it is not an implementation of P. It is, instead, a definition of the effect P is to have.

To repeat: for any number of processes, the critical section problem can be solved by

loop forever
    P(S)
    CS
    V(S)
    NCS

The only specific solution we have seen for an arbitrary number of processes is the one just above with P(S) and V(S) implemented via test and set.

Remark: Peterson's solution requires each process to know its process number. The TAS solution does not.

To solve other coordination problems we want to extend binary semaphores.

The solution to both of these shortcomings is to remove the restriction to a binary variable and define a generalized or counting semaphore.

These counting semaphores can solve what I call the semi-critical-section problem, where you permit up to k processes in the section. When k=1 we have the original critical-section problem.

initially S=k

loop forever
    P(S)
    SCS   <== semi-critical-section
    V(S)
    NCS

Producer-consumer problem

Initially e=k, f=0 (counting semaphore); b=open (binary semaphore)

Producer                         Consumer

loop forever                     loop forever
    produce-item                     P(f)
    P(e)                             P(b); take item from buf; V(b)
    P(b); add item to buf; V(b)      V(e)
    V(f)                             consume-item
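
A sketch with POSIX semaphores and pthreads (buffer details and names are mine; error checking omitted). It follows the pseudocode above exactly: e counts empty slots, f counts full slots, and b protects the buffer.

  #include <pthread.h>
  #include <semaphore.h>

  #define K 10                          /* buffer capacity, the k above */
  int buf[K], in = 0, out = 0;          /* a trivial circular buffer */
  sem_t e, f, b;                        /* empty slots, full slots, binary semaphore */

  void *producer(void *arg)
  {
      (void)arg;
      for (;;) {
          int item = 1;                              /* produce-item */
          sem_wait(&e);                              /* P(e) */
          sem_wait(&b);                              /* P(b) */
          buf[in] = item; in = (in + 1) % K;         /* add item to buf */
          sem_post(&b);                              /* V(b) */
          sem_post(&f);                              /* V(f) */
      }
  }

  void *consumer(void *arg)
  {
      (void)arg;
      for (;;) {
          sem_wait(&f);                              /* P(f) */
          sem_wait(&b);                              /* P(b) */
          int item = buf[out]; out = (out + 1) % K;  /* take item from buf */
          sem_post(&b);                              /* V(b) */
          sem_post(&e);                              /* V(e) */
          (void)item;                                /* consume-item */
      }
  }

  int main(void)
  {
      sem_init(&e, 0, K);               /* initially e=k */
      sem_init(&f, 0, 0);               /* initially f=0 */
      sem_init(&b, 0, 1);               /* initially b=open */
      pthread_t p, c;
      pthread_create(&p, NULL, producer, NULL);
      pthread_create(&c, NULL, consumer, NULL);
      pthread_join(p, NULL);            /* never returns; the loops are forever */
      return 0;
  }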

================ Start Lecture #6 ================

You may use C++ for the labs.

The email addresses for the three e-tutors are

Robert Szarek szar9908@cs.nyu.edu
Aldo J Nunez ajn203@omicron.acf.nyu.edu
Franqueli Mendez fm201@omicron.acf.nyu.edu

Dining Philosophers

A classical problem from Dijkstra

What algorithm do you use for access to the shared resource (the forks)?

The point of mentioning this without giving the solution is to give a feel of what coordination problems are like. The book gives others as well. We are skipping these (again this material would be covered in a sequel course). If you are interested look, for example, at http://allan.ultra.nyu.edu/~gottlieb/courses/1997-98-spring/os/class-notes.html

Homework: 14,15 (these have short answers but are not easy).

Readers and writers

Quite useful in multiprocessor operating systems. The ``easy way out'' is to treat all processes as writers in which case the problem reduces to mutual exclusion (P and V). The disadvantage of the easy way out is that you give up reader concurrency. Again for more information see the web page referenced above.

2.4: Process Scheduling

Scheduling the processor is often called ``process scheduling'' or simply ``scheduling''.

The objectives of a good scheduling policy include

Recall the basic diagram describing process states

For now we are discussing short-term scheduling running <--> ready.

Medium term scheduling is discussed later.

Preemption

It is important to distinguish preemptive from non-preemptive scheduling algorithms.

Deadline scheduling

This is used for real time systems. The objective of the scheduler is to find a schedule for all the tasks (there are a fixed set of tasks) so that each meets its deadline. The run time of each task is known in advance.

Actually it is more complicated.

We do not cover deadline scheduling in this course.

The name game

There is an amazing inconsistency in naming the different (short-term) scheduling algorithms. Over the years I have used primarily 4 books: In chronological order they are Finkel, Deitel, Silberschatz, and Tanenbaum. The table just below illustrates the name game for these four books. After the table we discuss each scheduling policy in turn.

Finkel  Deitel  Silberschatz  Tanenbaum
----------------------------------------
FCFS    FIFO    FCFS          --      unnamed in Tanenbaum
RR      RR      RR            RR
PS      **      PS            PS
SRR     **      SRR           **      not in Tanenbaum
SPN     SJF     SJF           SJF
PSPN    SRT     PSJF/SRTF     --      unnamed in Tanenbaum
HPRN    HRN     **            **      not in Tanenbaum
**      **      MLQ           **      only in Silberschatz
FB      MLFQ    MLFQ          MQ

First Come First Served (FCFS, FIFO, FCFS, --)

If you ``don't'' schedule, you still have to store the PTEs somewhere. If it is a queue you get FCFS. If it is a stack (strange), you get LCFS. Perhaps you could get some sort of random policy as well.


================ Start Lecture #7 ================

Round Robin (RR, RR, RR, RR)

Homework: 9, 19, 20, 21, and the following problem
Consider the following set of processes, each of which performs no I/O (i.e., no process ever blocks). All times are in milliseconds. The CPU time is the total time required for the process. The creation time is the time when the process is created. So P1 is created when the problem begins and P2 is created 5 milliseconds later.

Process  CPU Time  Creation Time
P1       20        0
P2       3         3
P3       2         5

Processor Sharing (PS, **, PS, PS)

Merge the ready and running states and permit all ready jobs to be run at once. However, the processor slows down so that when n jobs are running at once, each progresses at a speed 1/n as fast as if it were running alone.

Homework: 18.

Variants of Round Robin

Priority Scheduling

Each job is assigned a priority (externally, perhaps by charging more for higher priority) and the highest priority ready job is run.

Priority aging

As a job is waiting, raise its priority so eventually it will have the maximum priority. Can apply this to many policies, in particular to priority scheduling described above

Homework: 22, 23

Selfish RR (SRR, **, SRR, **)

Shortest Job First (SPN, SJF, SJF, SJF)

Sort jobs by total execution time needed and run the shortest first.

Preemptive Shortest Job First (PSPN, SRT, PSJF/SRTF, --)

Preemptive version of above

Highest Penalty Ratio Next (HPRN, HRN, **, **)

Run job that has been ``hurt'' the most.

Multilevel Queues (**, **, MLQ, **)

Put different classes of jobs in different queues

Multilevel Feedback Queues (FB, MFQ, MLFBQ, MQ)

Many queues and processes move from queue to queue in an attempt to dynamically separate ``batch-like'' from interactive processes.

Theoretical Issues

Considerable theory has been developed

Medium Term scheduling

Decisions made at a coarser time scale.

Long Term Scheduling


================ Start Lecture #8 ================

See the web page for the command to submit lab1 and your grader's email address

Chapter 3: Memory Management

Also called storage management or space management.

Memory management must deal with the storage hierarchy present in modern machines.

We will see in the next few lectures that there are three independent decisions:

  1. Segmentation (or no segmentation)
  2. Paging (or no paging)
  3. Fetch on demand (or no fetching on demand)

Memory management implements address translation.

Homework: 7.

When is the address translation performed?

  1. At compile time
    • Primitive
    • Compiler generates physical addresses
    • Requires knowledge of where the compilation unit will be loaded
    • Rarely used (MSDOS .COM files)

  2. At link-edit time (the ``linker lab'')
    • Compiler
      • Generates relocatable addresses for each compilation unit
      • References external addresses
    • Linkage editor
      • Converts the relocatable addr to absolute
      • Resolves external references
      • Misnamed ld by unix
      • Also converts virtual to physical addresses by knowing where the linked program will be loaded. Unix ld does not do this.
    • Loader is simple
    • Hardware requirements are small
    • A program can be loaded only where specified and cannot move once loaded.
    • Not used much any more.

  3. At load time
    • Same as linkage editor but do not fix the starting address
    • Program can be loaded anywhere
    • Program can move but cannot be split
    • Need modest hardware: base/limit registers

  4. At execution time
    • Dynamically during execution
    • Hardware needed to perform the virtual to physical address translation quickly
    • Currently dominates
    • Much more information later

Extensions

Note: I will place ** before each memory management scheme.

3.1: Memory management without swapping or paging

Job remains in memory from start to finish

The sum of the memory requirements of all jobs in the system cannot exceed the size of physical memory.

** 3.1.1: Monoprogramming without swapping or paging (Single User)

The ``good old days'' when everything was easy.

3.1.2: Multiprogramming

Goal is to improve CPU utilization, by overlapping CPU and I/O

Homework: 1, 3.


================ Start Lecture #9 ================

3.1.3: Multiprogramming with fixed partitions

3.2: Swapping

Moving entire jobs between disk and memory is called swapping.

3.2.1: Multiprogramming with variable partitions

Homework: 4

MVT introduces the ``placement question'': which hole (partition) to choose.

Homework: 2, 5.

MVT also introduces the ``replacement question'': which victim to swap out.

We will study this question more when we discuss demand paging

Considerations in choosing a victim


================ Start Lecture #10 ================

NOTEs:
  1. So far the schemes have had two properties
    1. Each job is stored contiguously in memory. That is, the job is contiguous in physical addresses.
    2. Each job cannot use more memory than exists in the system. That is, the virtual address space cannot exceed the physical address space.

  2. Tanenbaum now attacks the second item. I wish to do both and start with the first

  3. Tanenbaum (and most of the world) uses the term ``paging'' to mean what I call demand paging. This is unfortunate as it mixes together two concepts
    1. Paging (dicing the address space) to solve the placement problem and essentially eliminate external fragmentation.
    2. Demand fetching, to permit the total memory requirements of all loaded jobs to exceed the size of physical memory.

  4. Tanenbaum (and most of the world) uses the term virtual memory as a synonym for demand paging. Again I consider this unfortunate.
    1. Demand paging is a fine term and is quite descriptive
    2. Virtual memory ``should'' be used in contrast with physical memory to describe any virtual to physical address translation.

** (non-demand) Paging

Simplest scheme to remove the requirement of contiguous physical memory.

Example: Assume a decimal machine with page size = frame size = 1000.
Assume PTE 3 contains 459.
Then virtual address 3372 corresponds to physical address 459372.
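
A toy C version of this translation (the decimal machine above; page size 1000, and only PTE 3 is filled in):

  #include <stdio.h>

  int page_table[8] = {0, 0, 0, 459, 0, 0, 0, 0};   /* PTE 3 contains 459 */

  int translate(int va)
  {
      int page   = va / 1000;                   /* which page            */
      int offset = va % 1000;                   /* where within the page */
      return page_table[page] * 1000 + offset;  /* frame, then offset    */
  }

  int main(void)
  {
      printf("%d\n", translate(3372));  /* prints 459372 */
      return 0;
  }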

Properties of (non-demand) paging.

Homework: 13

Address translation

Choice of page size is discussed below

Homework: 8, 13.

3.3: Virtual Memory (meaning fetch on demand)

Idea is that a program can execute even if only the active portion of its address space is memory resident. That is, swap in and swap out portions of a program. In a crude sense this can be called ``automatic overlays''.

Advantages

3.3.1: Paging (meaning demand paging)

Fetch pages from disk to memory when they are referenced, with a hope of getting the most actively used pages in memory.

Homework: 11.


================ Start Lecture #11 ================

3.3.2: Page tables

A discussion of page tables is also appropriate for (non-demand) paging, but the issues are more acute with demand paging since the tables can be much larger. Why?
Answer: The total size of the active processes is no longer limited to the size of physical memory.

Want access to the page table to be very fast since it is needed for every memory access.

Unfortunate laws of hardware

So we can't just say, put the page table in fast processor registers and let it be huge and sell the system for $1500. Put the (one-level) page table in main memory.

Protection bits

Can place protection bits on pages. For example, can mark pages as execute only. This requires that boundaries between regions with different protection be on page boundaries. Protection is more naturally done with segmentation.

Multilevel page tables

The idea, which is also used in Unix inode-based file systems, is to add a level of indirection and have a page table containing pointers to page tables. This topic will not be on the 202 midterm or final.

Do an example on the board

The VAX used a 2-level page table structure, but with some wrinkles (see Tanenbaum for details).

Naturally, there is no need to stop at 2 levels. In fact the SPARC has 3 levels and the Motorola 68030 has 4 (and the number of bits of Virtual Address used for P#1, P#2, P#3, and P#4 can be varied).
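
A sketch in C of a two-level lookup (the split is an assumption of mine: a 32-bit virtual address with 10 bits of P#1, 10 bits of P#2, and a 12-bit offset; validity checks omitted):

  #include <stdint.h>

  uint32_t *top_level[1024];            /* each entry points to a second-level table */

  uint32_t translate(uint32_t va)
  {
      uint32_t p1     = va >> 22;               /* top 10 bits  */
      uint32_t p2     = (va >> 12) & 0x3ff;     /* next 10 bits */
      uint32_t offset = va & 0xfff;             /* low 12 bits  */
      uint32_t *second = top_level[p1];         /* first-level entry  */
      uint32_t frame   = second[p2];            /* second-level entry */
      return (frame << 12) | offset;            /* physical address   */
  }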

3.3.4: Associative memory (TLBs)

Note: Tanenbaum suggests that ``associative memory'' and ``translation lookaside buffer'' are synonyms. This is wrong. Associative memory is a general structure and translation lookaside buffer is a special case.

An associative memory is a content addressable memory. That is, you access the memory by giving the value of some field and the hardware searches all the records and returns the record whose field contains the requested value.

For example

Name  | Animal | Mood     | Color
======+========+==========+======
Moris | Cat    | Finicky  | Grey
Fido  | Dog    | Friendly | Black
Izzy  | Iguana | Quiet    | Brown
Bud   | Frog   | Smashed  | Green
If the index field is Animal and Iguana is given, the associative memory returns
Izzy  | Iguana | Quiet    | Brown

A Translation Lookaside Buffer or TLB is an associative memory where the index field is the page number. The other fields include the frame number, dirty bit, valid bit, and others.

Homework: 15.

3.3.5: Inverted page tables

Keep a table indexed by frame number with the entry f containing the number of the page currently loaded in frame f.

3.4: Page Replacement Algorithms

These are solutions to the replacement question.

Good solutions take advantage of locality.

Pages belonging to processes that have terminated are of course perfect choices for victims.

Pages belonging to processes that have been blocked for a long time are good choices as well.

Random

A lower bound on performance. Any decent scheme should do better.

3.4.1: The optimal page replacement algorithm (opt PRA)

Replace the page whose next reference will be furthest in the future

3.4.2: The not recently used (NRU) PRA

Divide the frames into four classes and make a random selection from the lowest nonempty class.

  1. Not referenced, not modified
  2. Not referenced, modified
  3. Referenced, not modified
  4. Referenced, modified

Assumes that in each PTE there are two extra flags R (sometimes called U, for used) and M (often called D, for dirty).

Also assumes that a page in a lower priority class is cheaper to evict

We again have the prisoner problem: we do a good job of making little ones out of big ones, but not the reverse. Need more resets.

Every k clock ticks, reset all R bits

What if hardware doesn't set these bits?

3.4.3: FIFO PRA

Simple but poor since usage of the page is ignored.

Belady's Anomaly: Can have more frames yet more faults. Example given later.

3.4.4: Second chance PRA

FIFO, but when it is time to choose a victim, if the page at the head of the queue has been referenced (R bit set), don't evict it. Instead reset R and move the page to the rear of the queue (so it looks new). The page is being given a second chance.

What if all frames have been referenced?
Becomes the same as FIFO (but takes longer).

Might want to turn off the R bit more often (k clock ticks).

3.4.5: Clock PRA

Same algorithm as 2nd chance, but a better (and I would say obvious) implementation: Use a circular list.

Do an example.


================ Start Lecture #12 ================

LIFO PRA

This is terrible! Why?
All but the last frame are frozen once loaded so you essentially use only one frame.

3.4.6: Least Recently Used (LRU) PRA

When a page fault occurs, choose as victim that page that has been unused for the longest time, i.e. that has been least recently used.

LRU is definitely

Homework: 19, 20

A hardware cutsie in Tanenbaum (skipped in 202)

3.4.7: Approximating LRU in Software

The Not Frequently Used (NFU) PRA

The Aging PRA

NFU doesn't distinguish between old references and recent ones. The following modification does distinguish.

R   counter
-----------
1   10000000
0   01000000
1   10100000
1   11010000
0   01101000
0   00110100
1   10011010
1   11001101
0   01100110
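
The update shown in the table is one line of C per page per clock tick (assuming an 8-bit counter and an R bit already collected from the hardware):

  #include <stdint.h>

  /* shift the counter right one bit and put the R bit in the leftmost position */
  uint8_t age(uint8_t counter, int r)
  {
      return (uint8_t)((counter >> 1) | (r ? 0x80 : 0x00));
  }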

Homework: 21, 25

3.5: Modeling Paging Algorithms

3.5.1: Belady's anomaly

Consider a system that has no pages loaded and that uses the FIFO PRA.
Consider the following ``reference string'' (sequence of pages referenced).

 0 1 2 3 0 1 4 0 1 2 3 4

If we have 3 frames this generates 9 page faults (do it).

If we have 4 frames this generates 10 page faults (do it).

Theory has been developed and certain PRAs (so-called ``stack algorithms'') cannot suffer this anomaly for any reference string. FIFO is clearly not a stack algorithm. LRU is.

Repeat the above calculations for LRU.
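
A small C check of the FIFO counts above (my own code; run it with 3 and then 4 frames to see the anomaly):

  #include <stdio.h>

  int fifo_faults(const int *refs, int n, int nframes)
  {
      int frames[10], used = 0, next = 0, faults = 0;

      for (int i = 0; i < n; i++) {
          int present = 0;
          for (int j = 0; j < used; j++)
              if (frames[j] == refs[i]) present = 1;
          if (!present) {
              faults++;
              if (used < nframes)
                  frames[used++] = refs[i];       /* still a free frame */
              else {
                  frames[next] = refs[i];         /* evict the oldest (FIFO) */
                  next = (next + 1) % nframes;
              }
          }
      }
      return faults;
  }

  int main(void)
  {
      int refs[] = {0, 1, 2, 3, 0, 1, 4, 0, 1, 2, 3, 4};
      printf("3 frames: %d faults\n", fifo_faults(refs, 12, 3));   /* 9  */
      printf("4 frames: %d faults\n", fifo_faults(refs, 12, 4));   /* 10 */
      return 0;
  }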


================ Start Lecture #13 ================

3.6: Design issues for (demand) Paging

3.6.1 & 3.6.2: The Working Set Model and Local vs Global Policies

I will do these in the reverse order (which makes more sense). Also Tanenbaum doesn't actually define the working set model, but I shall.

A local PRA is one in which a victim page is chosen from among the pages of the same process that requires a new page. That is, the number of frames for each process is fixed. So LRU means the page least recently used by this process.

If we apply global LRU indiscriminately with some sort of RR processor scheduling policy, and memory is somewhat over-committed, then by the time we get around to a process, all the others have run and have probably paged out this process.

If this happens each process will need to page fault at a high rate; this is called thrashing. It is therefore important to get a good idea of how many pages a process needs, so that we can balance the local and global desires.

The working set policy (Peter Denning)

The goal is to specify which pages a given process needs to have memory resident in order for the given process to run without too many page faults.

The idea of the working set policy is to ensure that each process keeps its working set in memory.

Interesting questions include:

Various approximations to the working set policy have been devised.

  1. Wsclock
    • Use the aging algorithm above to maintain a counter for each PTE and declare a page whose counter is above a certain threshold to be part of the working set.
    • Apply the clock algorithm globally (i.e., to all pages) but refuse to page out any page in a working set; the resulting algorithm is called wsclock.
    • What if we find there are no pages we can page out?
      Answer: Reduce the multiprogramming level (MPL).
  2. Page Fault Frequency (PFF)
    • For each process keep track of the page fault frequency, which is the number of faults divided by the number of references.
    • Actually, must use a window or a weighted calculation since you are really interested in the recent page fault frequency
    • If the PFF is too high, allocate more frames to this process. Either
      1. Raise its number of frames and use local policy; or
      2. Bar its frames from eviction (for a while) and use a global policy.
    • What if there are not enough frames?
      Answer: Lower the MPL.

3.6.3: Page size

3.6.4: Implementation Issues

Don't worry about instruction backup. Very machine dependent and modern implementations tend to get it right.

Locking (pinning) pages

We discussed pinning jobs already. The same (mostly I/O) considerations apply to pages.

Shared pages

Really should share segments

Backing Store

The issue is where on disk do we put pages

Paging Daemons

Done earlier

Page Fault Handling (not on 202 exams)

  1. Hardware traps to the kernel (switches to supervisor mode; saves state)

  2. Assembly language code saves more state, establishes the C-language environment, and calls the OS

  3. OS determines that a fault occurred and which page

  4. If virtual address is invalid, shoot process. If valid, seek a free frame. If no free frames, select a victim.

  5. If the victim frame is dirty, schedule an I/O write to copy the frame to disk. This process is blocked so the process scheduler is invoked to perform a context switch.

    • Tanenbaum ``forgot'' some here
    • Disk interrupt occurs when I/O complete
    • Hardware trap / assembly code / OS determines I/O done
    • Process moved from blocked to ready
    • Some time later a context switch occurs to this ready process. Since this process is in kernel mode, perhaps it was scheduled to run as soon as it was ready. (I am using a ``self-service'' model where the process moves from user mode to kernel mode.)

  6. Now the frame is clean (this may be much later in wall clock time). Schedule an I/O to read the desired page into this clean frame. The process is again blocked and hence the process scheduler is invoked to perform a context switch.

  7. Disk interrupt occurs when I/O complete (trap / asm / OS determines I/O done / process made ready / process starts running). PTE updated

  8. Fix up process (e.g. reset PC)

  9. Process put in ready queue and eventually runs. The OS returns to the first asm routine.

  10. Asm routine restores registers, etc. and returns to user mode.

The process is unaware that all this happened.

3.7: Segmentation

Up to now, the virtual address space has been contiguous.

The following table mostly from Tanenbaum compares demand paging with demand segmentation.

Consideration                    Demand Paging        Demand Segmentation
--------------------------------------------------------------------------
Programmer aware                 No                   Yes
How many addr spaces             1                    Many
VA size > PA size                Yes                  Yes
Protect individual
  procedures separately          No                   Yes
Accommodate elements
  with changing sizes            No                   Yes
Ease user sharing                No                   Yes
Why invented                     let the VA size      Sharing, Protection,
                                 exceed the PA size   independent addr spaces
Internal fragmentation           Yes                  No, in principle
External fragmentation           No                   Yes
Placement question               No                   Yes
Replacement question             Yes                  Yes

Homework: 29.

** Two Segments

Late PDP-10s and TOPS-10

** Three Segments

Traditional Unix shown above.

  1. Shared text (execute only)
  2. Data segment (global and static variables)
  3. Stack segment (automatic variables)

** Four Segments

Just kidding.

** General (not necessarily demand) Segmentation

** Demand Segmentation

Same idea as demand paging applied to segments

** 3.7.2: Segmentation with paging

Combines both segmentation and paging to get advantages of both at a cost in complexity. This is very common now.

Homework: 30.

Some last words


================ Start Lecture #14 ================

Review material from last lecture for midterm

Chapter 4: File Systems

Requirements

  1. Size: Store very large amounts of data
  2. Persistence: Data survives the creating process
  3. Access: Multiple processes can access the data concurrently

Solution: Store data in files that together form a file system

4.1: Files

4.1.1: File Naming

Very important. A major function of the file system.


================ Start Lecture #15 ================

Midterm exam


================ Start Lecture #16 ================

Review midterm answers

Hand out lab2

4.1.2: File structure

A file is a

  1. Byte stream
    • Unix, DOS, Windows (I think)
    • Max flexibility
    • Min structure

  2. (fixed size) Record stream: Out of date

  3. Varied and complicated beast
    • Indexed sequential
    • B-trees
    • Supports rapidly finding a record with a specific key
    • Supports retrieving (varying size) records in key order.
    • Treated in depth in database courses

4.1.3: File types

Examples

  1. (Regular) files

  2. Directories: studied below

  3. Special files (for devices)
    • Uses the naming power of files to unify many actions
    • dir # prints on screen
    • dir > file # result put in a file
    • dir > /dev/tape # results written to tape

  4. ``Symbolic'' Links (similar to ``shortcuts''): Also studied below.

``Magic number'': Identifies an executable file.

Strongly typed files: the easy (hopefully normal) case becomes easier (and safer); the hard case becomes harder.

4.1.4: File access

Basically two possibilities, sequential access and random access (a.k.a. direct access). Previously, files were declared to be sequential or random. Modern systems do not do this.

  1. Sequential access where the bytes (or records) are accessed in order (i.e., n-1, n, n+1, ...) is most common and gives the highest performance. For some devices (e.g. tapes) access ``must'' be sequential.
  2. In random access, the bytes are accessed in any order. Thus each access must specify which bytes are desired.

4.1.5: File attributes

A laundry list of properties that can be specified for a file, e.g.