================ Start Lecture #1 ================
G22.2250 Operating Systems
2006-07 Spring
Allan Gottlieb
Wed 5-6:50pm Rm 109 Ciww

Chapter -1: Administrivia

I start at -1 so that when we get to chapter 1, the numbering will agree with the text.

(-1).1: Contact Information

(-1).2: Course Web Page

There is a web site for the course. You can find it from my home page, which is http://cs.nyu.edu/~gottlieb

(-1).3: Textbook

The course text is Tanenbaum, "Modern Operating Systems", 2nd Edition

(-1).4: Computer Accounts and Mailman Mailing List

(-1).5: Grades

Grades will be computed as 40%*LabAverage + 60%*FinalExam (but see homeworks below).

(-1).6: The Upper Left Board

I use the upper left board for lab/homework assignments and announcements. I should never erase that board. Viewed as a file it is group readable (the group is those in the room), appendable by just me, and (re-)writable by no one. If you see me start to erase an announcement, let me know.

I try very hard to remember to write all announcements on the upper left board and I am normally successful. If, during class, you see that I have forgotten to record something, please let me know. HOWEVER, if I forgot and no one reminds me, the assignment has still been given.

(-1).7: Homeworks and Labs

I make a distinction between homeworks and labs.

Labs are

Homeworks are

(-1).7.1: Homework Numbering

Homeworks are numbered by the class in which they are assigned. So any homework given today is homework #1. Even if I do not give homework today, the homework assigned next class will be homework #2. Unless I explicitly state otherwise, all homework assignments can be found in the class notes. So the homework present in the notes for lecture #n is homework #n (even if I inadvertently forgot to write it on the upper left board).

(-1).7.2: Doing Labs on non-NYU Systems

You may solve lab assignments on any system you wish, but ...

(-1).7.3: Obtaining Help with the Labs

Good methods for obtaining help include

  1. Asking me during office hours (see web page for my hours).
  2. Asking the mailing list.
  3. Asking another student, but ...
    Your lab must be your own.
    That is, each student must submit a unique lab. Naturally, simply changing comments, variable names, etc. does not produce a unique lab.

(-1).7.4: Computer Language Used for Labs

You may write your lab in Java, C, or C++.

(-1).8: A Grade of “Incomplete”

The rules for incompletes and grade changes are set by the school and not the department or individual faculty member. The rules set by GSAS state:

The assignment of the grade Incomplete Pass (IP) or Incomplete Fail (IF) is at the discretion of the instructor. If an incomplete grade is not changed to a permanent grade by the instructor within one year of the beginning of the course, Incomplete Pass (IP) lapses to No Credit (N), and Incomplete Fail (IF) lapses to Failure (F).

Permanent grades may not be changed unless the original grade resulted from a clerical error.

(-1).9: An Introductory OS Course with a Programming Prerequisite

(-1).9.1: This is an introductory course ...

I do not assume you have had an OS course as an undergraduate, and I do not assume you have had extensive experience working with an operating system.

If you have already had an operating systems course, this course is probably not appropriate. For example, if you can explain the following concepts/terms, the course is probably too elementary for you.

... with a Programming Prerequisite

I do assume you are an experienced programmer, at least to the extent that you are comfortable writing modest-size programs (a few hundred lines). You may write your programs in C, C++, or Java.

(-1).10 Academic Integrity Policy

Our policy on academic integrity, which applies to all graduate courses in the department, can be found here.

Chapter 0: Interlude on Linkers

Originally called a linkage editor by IBM.

A linker is an example of a utility program included with an operating system distribution. Like a compiler, the linker is not part of the operating system per se, i.e. it does not run in supervisor mode. Unlike a compiler it is OS dependent (what object/load file format is used) and is not (normally) language dependent.

0.1: What does a Linker Do?

Link of course.

When the compiler and assembler have finished processing a module, they produce an object module that is almost runnable. There are two remaining tasks to be accomplished before object modules can be run. Both are involved with linking (that word, again) together multiple object modules. The tasks are relocating relative addresses and resolving external references.

0.1.1: Relocating Relative Addresses


0.1.2: Resolving External References


The output of a linker is called a load module because it is now ready to be loaded and run.

To see how a linker works, let's consider the following example, which is the first dataset from lab #1. The description in lab1 is more detailed.

The target machine is word addressable and has a memory of 250 words, each consisting of 4 decimal digits. The first (leftmost) digit is the opcode and the remaining three digits form an address.

Each object module contains three parts, a definition list, a use list, and the program text itself. Each definition is a pair (sym, loc). Each entry in the use list is a symbol and a list of uses of that symbol.

The program text consists of a count N followed by N pairs (type, word), where word is a 4-digit instruction described above and type is a single character indicating if the address in the word is Immediate, Absolute, Relative, or External.

Input set #1

1 xy 2
1 z 4
5 R 1004  I 5678  E 2777  R 8002  E 7002
0
1 z 3
6 R 8001  E 1777  E 1001  E 3002  R 1002  A 1010
0
1 z 1
2 R 5001  E 4777
1 z 2
1 xy 2
3 A 8000  E 1777  E 2001

The first pass simply finds the base address of each module and produces the symbol table giving the values for xy and z (2 and 15 respectively). The second pass does the real work using the symbol table and base addresses produced in pass one.

              Symbol Table
                  xy=2
                  z=15

             Memory Map
 +0
 0:       R 1004      1004+0 = 1004
 1:       I 5678               5678
 2: xy:   E 2000 ->z           2015
 3:       R 8002      8002+0 = 8002
 4:       E 7001 ->z           7015
 +5
 0        R 8001      8001+5 = 8006
 1        E 1000 ->z           1015
 2        E 1000 ->z           1015
 3        E 3000 ->z           3015
 4        R 1002      1002+5 = 1007
 5        A 1010               1010
 +11
 0        R 5001      5001+11= 5012
 1        E 4000 ->z           4015
 +13
 0        A 8000               8000
 1        E 1001 ->z           1015
 2 z:     E 2000 ->xy          2002

The output above is more complex than I expect you to produce; it is there to help me explain what the linker is doing. All I would expect from you is the symbol table and the rightmost column of the memory map.

You must process each module separately, i.e., except for the symbol table and memory map your space requirements should be proportional to the largest module, not to the sum of the modules. This does NOT make the lab harder.

(Unofficial) Remark: It is faster (less I/O) to do a one pass approach, but is harder since you need “fix-up code” whenever a use occurs in a module that precedes the module with the definition.
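
To make the two-pass structure concrete, here is a minimal C sketch of pass one, assuming the input has already been parsed into an array of modules. All the names here (struct module, pass_one, etc.) are invented for illustration and are not part of the lab specification; the real lab must also parse the input, process the use lists in pass two, and detect errors. On input set #1 this pass would compute the bases 0, 5, 11, 13 and the symbol table xy=2, z=15.

    #include <string.h>

    /* A sketch of pass one only; all of these names are invented. */
    #define MAX_SYMS 256

    struct def    { char sym[17]; int loc; };        /* one (sym, loc) pair */
    struct module {
        int ndefs; struct def defs[16];              /* definition list */
        int ntext;                                   /* # of (type, word) pairs */
        /* use list and program text omitted from this sketch */
    };

    static struct { char sym[17]; int val; } symtab[MAX_SYMS];
    static int nsyms = 0;

    void pass_one(struct module *mods, int nmods) {
        int base = 0;                                /* base address of module m */
        for (int m = 0; m < nmods; m++) {
            for (int d = 0; d < mods[m].ndefs; d++) {
                /* a symbol's absolute value is its offset plus the base */
                strcpy(symtab[nsyms].sym, mods[m].defs[d].sym);
                symtab[nsyms].val = mods[m].defs[d].loc + base;
                nsyms++;
            }
            base += mods[m].ntext;                   /* next module follows this one */
        }
    }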

The linker on unix was mistakenly called ld (for loader), which is unfortunate since it links but does not load.

Historical remark: Unix was originally developed at Bell Labs; the seventh edition of unix was made publicly available (perhaps earlier ones were somewhat available). The 7th ed man page for ld begins (see http://cm.bell-labs.com/7thEdMan).

.TH LD 1
.SH NAME
ld \- loader
.SH SYNOPSIS
.B ld
[ option ] file ...
.SH DESCRIPTION
.I Ld
combines several
object programs into one, resolves external
references, and searches libraries.

By the mid-80s the Berkeley version (4.3BSD) man page referred to ld as a "link editor", and this more accurate name is now standard in unix/linux distributions.

During the 2004-05 fall semester a student wrote to me “BTW - I have meant to tell you that I know the lady who wrote ld. She told me that they called it loader, because they just really didn't have a good idea of what it was going to be at the time.”

Lab #1: Implement a two-pass linker. The specific assignment is detailed on the class home page.

End of Interlude on Linkers

Chapter 1: Introduction

Homework: Read Chapter 1 (Introduction)

Levels of abstraction (virtual machines)

1.1: What is an operating system?

The kernel itself raises the level of abstraction and hides details. For example a user (of the kernel) can write to a file (a concept not present in hardware) and ignore whether the file resides on a floppy, a CD-ROM, or a hard disk. The user can also ignore issues such as whether the file is stored contiguously or is broken into blocks.

The kernel is a resource manager (so users don't conflict).

How is an OS fundamentally different from a compiler (say)?

Answer: Concurrency! Per Brinch Hansen, in Operating Systems Principles (Prentice Hall, 1973), writes:

The main difficulty of multiprogramming is that concurrent activities can interact in a time-dependent manner, which makes it practically impossible to locate programming errors by systematic testing. Perhaps, more than anything else, this explains the difficulty of making operating systems reliable.

Homework: 1, 2. (Unless otherwise stated, problem numbers are from the end of the chapter in Tanenbaum.)

1.2 History of Operating Systems

  1. Single user (no OS).

  2. Batch, uniprogrammed, run to completion.

  3. Multiprogrammed
  4. Personal Computers

Homework: 3.

1.3: OS Zoo

There is not as much difference between mainframe, server, multiprocessor, and PC OSes as Tanenbaum suggests. For example, Windows NT/2000/XP, Unix, and Linux are used on all.

1.3.1: Mainframe Operating Systems

Used in data centers, these systems offer tremendous I/O capabilities and extensive fault tolerance.

1.3.2: Server Operating Systems

Perhaps the most important servers today are web servers. Again I/O (and network) performance are critical.

1.3.3: Multiprocessor Operating systems

These existed almost from the beginning of the computer age, but now are not exotic.

1.3.4: PC Operating Systems (client machines)

Some OSes (e.g. Windows ME) are tailored for this application. One could also say they are restricted to this application.

1.3.5: Real-time Operating Systems

1.3.6: Embedded Operating Systems

1.3.7: Smart Card Operating Systems

Very limited in power (both meanings of the word).

Multiple computers

Homework: 5.

1.4: Computer Hardware Review

Tanenbaum's treatment is very brief and superficial. Mine is even more so. The picture above is very simplified. (For one thing, today separate buses are used to memory and video.)

A bus is a set of wires that connect two or more devices. Only one message can be on the bus at a time. All the devices “receive” the message: There are no switches in between to steer the message to the desired destination, but often some of the wires form an address that indicates which devices should actually process the message.

1.4.1: Processors

We will ignore processor concepts such as program counters and stack pointers. We will also ignore computer design issues such as pipelining and superscalar execution. We do, however, need the notion of a trap, that is, an instruction that atomically switches the processor into privileged mode and jumps to a pre-defined physical address.

1.4.2: Memory

We will ignore caches, but will (later) discuss demand paging, which is very similar (although demand paging and caches use completely disjoint terminology). In both cases, the goal is to combine large slow memory with small fast memory to achieve the effect of large fast memory.

The central memory in a system is called RAM (Random Access Memory). A key point is that it is volatile, i.e. the memory loses its data if power is turned off.

Disk Hardware

I don't understand why Tanenbaum discusses disks here instead of in the next section entitled I/O devices, but he does. I don't.

ROM / PROM / EPROM / EEPROM / Flash Ram

ROM (Read Only Memory) is used to hold data that will not change, e.g. the serial number of a computer or the program used in a microwave. ROM is non-volatile. A modern, familiar ROM is CD-ROM (or the denser DVD).

But often this unchangeable data needs to be changed (e.g., to fix bugs). This gives rise first to PROM (Programmable ROM), which, like a CD-R, can be written once (as opposed to being mass produced already written, like a CD-ROM), and then to EPROM (Erasable PROM; not Erasable ROM as in Tanenbaum), which is like a CD-RW. An EPROM is especially convenient if it can be erased with a normal circuit (EEPROM, Electrically EPROM, or Flash RAM).

Memory Protection and Context Switching

As mentioned above when discussing OS/MFT and OS/MVT, multiprogramming requires that we protect one process from another. That is, we need to translate the virtual addresses of each program into distinct physical addresses. The hardware that performs this translation is called the MMU or Memory Management Unit.

When context switching from one process to another, the translation must change, which can be an expensive operation.

1.4.3: I/O Devices

When we do I/O for real, I will show a real disk opened up and illustrate the components.

Devices are often quite complicated to manage and a separate computer, called a controller, is used to translate simple commands (read sector 123456) into what the device requires (read cylinder 321, head 6, sector 765). Actually the controller does considerably more, e.g. calculates a checksum for error detection.

How does the OS know when the I/O is complete?

  1. It can busy wait, constantly asking the controller if the I/O is complete. This is the easiest (by far) but has low performance. It is also called polling or PIO (Programmed I/O); a sketch appears below.
  2. It can tell the controller to start the I/O and then switch to other tasks. The controller must then interrupt the OS when the I/O is done. Less waiting, but harder (concurrency!). Also on modern processors a single interrupt is rather costly. Much more than a single memory reference, but much, much less than a disk I/O.
  3. Some controllers can do DMA (Direct Memory Access) in which case they deal directly with memory after being started by the CPU. This takes work from the CPU and halves the number of bus accesses.

We discuss this more in chapter 5. In particular, we explain the last point about halving bus accesses.
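
To make the first option (polling) concrete, here is a minimal C sketch, assuming a hypothetical memory-mapped controller; the register addresses and the busy bit are invented for illustration:

    #include <stdint.h>

    /* Hypothetical memory-mapped controller registers; the addresses
       and the bit layout are invented for illustration. */
    #define CTRL_STATUS ((volatile uint32_t *)0xFFFF0000u)
    #define CTRL_DATA   ((volatile uint32_t *)0xFFFF0004u)
    #define STATUS_BUSY 0x1u                 /* controller still working */

    uint32_t pio_read_word(void) {
        while (*CTRL_STATUS & STATUS_BUSY)
            ;                                /* busy wait (poll) until done */
        return *CTRL_DATA;                   /* then fetch the result */
    }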

1.4.4: Buses

I don't care very much about the names of the buses, but the diagram given in the book doesn't show a modern design. The one below does. On the right is a figure showing the specifications for a chip set introduced in 2000. The chip set has two PCI buses of different widths, which is not shown below. Instead of having the chip set supply USB, a PCI USB controller may be used. Finally, the use of ISA is decreasing. Indeed my last desktop didn't have an ISA bus, and I had to replace my ISA sound card with a PCI version.

================ Start Lecture #2 ================

1.5: Operating System Concepts

This will be very brief. Much of the rest of the course will consist in “filling in the details”.

1.5.1: Processes

A program in execution. If you run the same program twice, you have created two processes. For example if you have two editors running in two windows, each instance of the editor is a separate process.

Often one distinguishes the state or context (memory image, open files) from the thread of control. Then if one has many threads running in the same task, the result is a “multithreaded process”.

The OS keeps information about all processes in the process table. Indeed, the OS views a process as its entry in this table. This is an example of an active entity being viewed as a data structure (cf. discrete event simulations), an observation made by Finkel in his (out of print) OS textbook.

The Process Tree

The set of processes forms a tree via the fork system call. The forker is the parent of the forkee, which is called a child. If the system blocks the parent until the child finishes, the “tree” is quite simple, just a line. But the parent (in many OSes) is free to continue executing and in particular is free to fork again producing another child.
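
As a minimal illustration of fork, here is a hedged C sketch using the standard POSIX calls, in which the parent blocks until its one child finishes (so the “tree” is just a line):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();                  /* the forker becomes the parent */
        if (pid < 0) {
            perror("fork");
            exit(1);
        }
        if (pid == 0) {                      /* child: fork() returned 0 */
            printf("child %d, parent %d\n", (int)getpid(), (int)getppid());
            exit(0);
        }
        /* parent: free to continue (and fork again), or to block, as here */
        waitpid(pid, NULL, 0);
        printf("parent %d reaped child %d\n", (int)getpid(), (int)pid);
        return 0;
    }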

A process can send a signal to another process to cause the latter to execute a predefined function (the signal handler). This can be tricky to program since the programmer does not know when in his “main” program the signal handler will be invoked.

Each user is assigned User IDentification (UID) and all processes created by that user have this UID. One UID is special (the superuser or administrator) and has extra privileges. A child has the same UID as its parent. It is sometimes possible to change the UID of a running process. A group of users can be formed and given a Group IDentification, GID.

Access to files and devices can be limited to a given UID or GID.

1.5.2: Deadlocks

A set of processes each of which is blocked by a process in the set. The automotive equivalent, shown at right, is gridlock.

1.5.3: Memory Management

Each process requires memory. The linker produces a load module that assumes the process is loaded at location 0. The operating system ensures that the processes are actually given disjoint memory. Current operating systems permit each process to be given more (virtual) memory than the total amount of (real) memory on the machine.

1.5.4: Input/Output

There are a wide variety of I/O devices that the OS must manage. For example, if two processes are printing at the same time, the OS must not interleave the output. The OS contains device specific code (drivers) for each device as well as device-independent I/O code.

1.5.5: Files

Modern systems have a hierarchy of files: a file-system tree.

You can name a file via an absolute path starting at the root directory or via a relative path starting at the current working directory.

In addition to regular files and directories, Unix also uses the file system namespace for devices (called special files), which are typically found in the /dev directory. Often utilities that are normally applied to (ordinary) files can be applied as well to some special files. For example, when you are accessing a unix system using a mouse and do not have anything serious going on (e.g., right after you log in), type the following command

    cat /dev/mouse
and then move the mouse. You kill the cat by typing cntl-C. I tried this on my linux box and no damage occurred. Your mileage may vary.

Before a file can be accessed, it must be opened and a file descriptor obtained. Subsequent I/O system calls (e.g., read and write) use the file descriptor rather than the file name. This is an optimization that enables the OS to find the file once and save the information in a file table accessed by the file descriptor. Many systems have standard files that are automatically made available to a process upon startup. These (initial) file descriptors are fixed.
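
A small, hedged example of the descriptor-based interface using the standard POSIX calls (the file name is just an example):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        char buffer[4096];
        /* the name is looked up once, here */
        int fd = open("/etc/motd", O_RDONLY);
        if (fd < 0) {
            perror("open");
            return 1;
        }
        /* subsequent calls use the small integer fd, not the name */
        ssize_t count = read(fd, buffer, sizeof buffer);
        printf("read %zd bytes\n", count);
        close(fd);
        return 0;
    }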

A convenience offered by some command interpretors is a pipe or pipeline. The pipeline

  dir | wc
which pipes the output of dir into a character/word/line counter, will give the number of files in the directory (plus other info).

1.5.6: Security

Files and directories have associated permissions.

Security has sadly become a very serious concern. I do not feel that the necessarily superficial coverage that time would permit is useful, so we are not covering the topic at all.

1.5.7: The Shell or Command Interpreter (DOS Prompt)

The command line interface to the operating system. The shell permits the user to

Homework: 8

1.6: System Calls

System calls are the way a user (i.e., a program) directly interfaces with the OS. Some textbooks use the term envelope for the component of the OS responsible for fielding system calls and dispatching them to the appropriate component of the OS. On the right is a picture showing some of the OS components and the external events for which they are the interface.

Note that the OS serves two masters. The hardware (below) asynchronously sends interrupts and the user synchronously invokes system calls and generates page faults.

Homework: 14

What happens when a user executes a system call such as read()? We show a more detailed picture below, but at a high level what happens is

  1. Normal function call (in C, Ada, Pascal, Java, etc.).
  2. Library routine (probably in C).
  3. Small assembler routine.
    1. Move arguments to predefined place (perhaps registers).
    2. Poof (a trap instruction) and then the OS proper runs in supervisor mode.
    3. Fix up result (move to correct place).

The following actions occur when the user executes the (Unix) system call

    count = read(fd,buffer,nbytes)
  
which reads up to nbytes from the file described by fd into buffer. The actual number of bytes read is returned (it might be less than nbytes if, for example, an eof was encountered).

  1. Push third parameter on to the stack.
  2. Push second parameter on to the stack.
  3. Push first parameter on to the stack.
  4. Call the library routine, which involves pushing the return address on to the stack and jumping to the routine.
  5. Machine/OS dependent actions. One is to put the system call number for read in a well defined place, e.g., a specific register. This requires assembly language.
  6. Trap to the kernel. This enters the operating system proper and shifts the computer to privileged mode. Assembly language is again used.
  7. The envelope uses the system call number to access a table of pointers to find the handler for this system call.
  8. The read system call handler processes the request (see below).
  9. Some magic instruction returns to user mode and jumps to the location right after the trap.
  10. The library routine returns (there is more; e.g., the count must be returned).
  11. The stack is popped (ending the function call read).

A major complication is that the system call handler may block. Indeed for read it is likely that a block will occur. In that case a switch occurs to another process. This is far from trivial and is discussed later in the course.

Process Management
    Posix      Win32                Description
    fork       CreateProcess        Clone current process
    exec(ve)   (none)               Replace current process
    waitpid    WaitForSingleObject  Wait for a child to terminate
    exit       ExitProcess          Terminate current process & return status

File Management
    Posix      Win32                Description
    open       CreateFile           Open a file & return descriptor
    close      CloseHandle          Close an open file
    read       ReadFile             Read from file to buffer
    write      WriteFile            Write from buffer to file
    lseek      SetFilePointer       Move file pointer
    stat       GetFileAttributesEx  Get status info

Directory and File System Management
    Posix      Win32                Description
    mkdir      CreateDirectory      Create new directory
    rmdir      RemoveDirectory      Remove empty directory
    link       (none)               Create a directory entry
    unlink     DeleteFile           Remove a directory entry
    mount      (none)               Mount a file system
    umount     (none)               Unmount a file system

Miscellaneous
    Posix      Win32                Description
    chdir      SetCurrentDirectory  Change the current working directory
    chmod      (none)               Change permissions on a file
    kill       (none)               Send a signal to a process
    time       GetLocalTime         Elapsed time since Jan 1, 1970

A Few Important Posix/Unix/Linux and Win32 System Calls

The table above shows some system calls; the descriptions are accurate for Unix and close for Win32. To show how the four process management calls enable much of process management, consider the following highly simplified shell. (The fork() system call returns a nonzero value, the child's pid, in the parent and zero in the child.)

while (true)
    display_prompt()
    read_command(command)

    if (fork() != 0)
        waitpid(...)
    else
        execve(command)
    endif
endwhile

Simply removing the waitpid(...) gives background jobs.
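
Here is a hedged C rendering of the same loop using the real POSIX signatures, in which fork() actually returns the child's pid (nonzero) in the parent and 0 in the child. Parsing is omitted, so the command must be typed as a full path with no arguments; execve does no PATH search.

    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        char command[256];
        while (1) {
            printf("$ ");                         /* display_prompt() */
            fflush(stdout);
            if (fgets(command, sizeof command, stdin) == NULL)
                break;                            /* read_command(command) */
            command[strcspn(command, "\n")] = '\0';
            pid_t pid = fork();
            if (pid != 0) {                       /* parent */
                waitpid(pid, NULL, 0);            /* foreground: wait for child */
            } else {                              /* child */
                char *argv[] = { command, NULL };
                execve(command, argv, NULL);      /* replace child with command */
                _exit(127);                       /* reached only if execve failed */
            }
        }
        return 0;
    }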

Homework: 18.

1.6A: Addendum on Transfer of Control

The transfer of control between user processes and the operating system kernel can be quite complicated, especially in the case of blocking system calls, hardware interrupts, and page faults. Before tackling these issues later, we begin with the familiar example of a procedure call within a user-mode process.

An important OS objective is that, even in the more complicated cases of page faults and blocking system calls requiring device interrupts, simple procedure call semantics are observed from a user process viewpoint. The complexity is hidden inside the kernel itself, yet another example of the operating system providing a more abstract, i.e., simpler, virtual machine to the user processes.

More details will be added when we study memory management (and know officially about page faults) and more again when we study I/O (and know officially about device interrupts).

A number of the points below are far from standardized. Such items as where to place parameters, which routine saves the registers, the exact semantics of trap, etc., vary as one changes language/compiler/OS. Indeed some of these are referred to as “calling conventions”, i.e., their implementation is a matter of convention rather than logical requirement. The presentation below is, we hope, reasonable, but must be viewed as a generic description of what could happen instead of an exact description of what does happen with, say, C compiled by the Microsoft compiler running on Windows XP.

1.6A.1: User-mode procedure calls

Procedure f calls g(a,b,c) in process P.

Actions by f prior to the call:

  1. Save the registers by pushing them onto the stack (in some implementations this is done by g instead of f).

  2. Push arguments c,b,a onto P's stack.
    Note: Stacks usually grow downward from the top of P's segment, so pushing an item onto the stack actually involves decrementing the stack pointer, SP.
    Note: Some compilers store arguments in registers not on the stack.

Executing the call itself

  3. Execute PUSHJ <start-address of g>.
    This instruction pushes the program counter PC onto the stack, and then jumps to the start address of g. The value pushed is actually the updated program counter, i.e., the location of the next instruction (the instruction to be executed by f when g returns).

Actions by g upon being called:

  4. Allocate space for g's local variables by suitably decrementing SP.

  5. Start execution from the beginning of the program, referencing the parameters as needed. The execution may involve calling other procedures, possibly including recursive calls to f and/or g.

Actions by g when returning to f:

  6. If g is to return a value, store it in the conventional place.

  7. Undo step 4: Deallocate local variables by incrementing SP.

  8. Undo step 3: Execute POPJ, i.e., pop the stack and set PC to the value popped, which is the return address pushed in step 3.

Actions by f upon the return from g:

  9. We are now at the step in f immediately following the call to g.
    Undo step 2: Remove the arguments from the stack by incrementing SP.

  10. Undo step 1: Restore the registers while popping them off the stack.

  11. Continue the execution of f, referencing the returned value of g, if any.

Properties of (user-mode) procedure calls:

1.6A.2: Kernel-mode procedure calls

We mean one procedure running in kernel mode calling another procedure, which will also be run in kernel mode. Later, we will discuss switching from user to kernel mode and back.

There is not much difference between the actions taken during a kernel-mode procedure call and during a user-mode procedure call. The procedures executing in kernel-mode are permitted to issue privileged instructions, but the instructions used for transferring control are all unprivileged so there is no change in that respect.

One difference is that often a different stack is used in kernel mode, but that simply means that the stack pointer must be set to the kernel stack when switching from user to kernel mode. But we are not switching modes in this section; the stack pointer already points to the kernel stack. Often there are two stack pointers: one for kernel mode and one for user mode.

1.6A.3: The Trap instruction

The trap instruction, like a procedure call, is a synchronous transfer of control: We can see where, and hence when, it is executed; there are no surprises. Although not surprising, the trap instruction does have an unusual effect: processor execution is switched from user-mode to kernel-mode. That is, the trap instruction itself is executed in user-mode (it is naturally an UNprivileged instruction), but the next instruction executed (which is NOT the instruction written after the trap) is executed in kernel-mode.

Process P, running in unprivileged (user) mode, executes a trap. The code being executed was written in assembler since there are no high level languages that generate a trap instruction. There is no need to name the function that is executing. Compare the following example to the explanation of “f calls g” given above.

Actions by P prior to the trap

  1. Save the registers by pushing them onto the stack.

  2. Store any arguments that are to be passed. The stack is not normally used to store these arguments since the kernel has a different stack. Often registers are used.

Executing the trap itself

  3. Execute TRAP <trap-number>.
    This instruction switches the processor to kernel (privileged) mode, jumps to a location in the OS determined by trap-number, and saves the return address. For example, the processor may be designed so that the next instruction executed after a trap is at physical address 8 times the trap-number. The trap-number should be thought of as the “name” of the code-sequence to which the processor will jump rather than as an argument to trap. Indeed arguments to trap are established before the trap is executed.

Actions by the OS upon being TRAPped into

  4. Jump to the real code.
    Recall that trap instructions with different trap numbers jump to locations very close to each other. There is not enough room between them for the real trap handler. Indeed one can think of the trap as having an extra level of indirection; it jumps to a location that then jumps to the real start address. If you learned about writing jump tables in assembler, this is very similar.

  5. Check all arguments passed. The kernel must be paranoid and assume that the user mode program is evil and written by a bad guy.

  6. Allocate space by decrementing the kernel stack pointer.
    The kernel and user stacks are separate.

  7. Start execution from the jumped-to location, referencing the parameters as needed.

Actions by the OS when returning to user mode

  8. Undo step 6: Deallocate space by incrementing the kernel stack pointer.

  9. Undo step 3: Execute (in assembler) another special instruction, RTI or ReTurn from Interrupt, which returns the processor to user mode and transfers control to the return location saved by the trap. The word interrupt is used because an RTI is also used when the kernel is returning from an interrupt, as well as in the present case of returning from a trap.

Actions by P upon the return from the OS

  10. We are now at the instruction right after the trap.
    Undo step 1: Restore the registers by popping the stack.

  11. Continue the execution of P, referencing the returned value(s) of the trap, if any.

Properties of TRAP/RTI:

Remark: A good way to use the material in the addendum is to compare the first case (user-mode f calls user-mode g) to the TRAP/RTI case line by line so that you can see the similarities and differences.

1.7: OS Structure

I must note that Tanenbaum is a big advocate of the so-called microkernel approach, in which as much as possible is moved out of the (supervisor mode) kernel into separate processes. The (hopefully small) portion left in supervisor mode is called a microkernel.

In the early 90s this was popular. Digital Unix (now called Tru64) and Windows NT/2000/XP/Vista? are examples. Digital Unix is based on Mach, a research OS from Carnegie Mellon University. Lately, the growing popularity of Linux has called into question the belief that “all new operating systems will be microkernel based”.

1.7.1: Monolithic approach

The previous picture: one big program

The system switches from user mode to kernel mode during the poof and then back when the OS does a “return” (an RTI or return from interrupt).

But of course we can structure the system better, which brings us to layered systems.

1.7.2: Layered Systems

Some systems have more layers and are more strictly structured.

An early layered system was “THE” operating system by Dijkstra. The layers were:

  1. The operator
  2. User programs
  3. I/O mgt
  4. Operator-process communication
  5. Memory and drum management

The layering was done by convention, i.e. there was no enforcement by hardware and the entire OS is linked together as one program. This is true of many modern OS systems as well (e.g., linux).

The multics system was layered in a more formal manner. The hardware provided several protection layers and the OS used them. That is, arbitrary code could not jump to or access data in a more protected layer.

1.7.3: Virtual Machines

Use a “hypervisor” (beyond supervisor, i.e., beyond a normal OS) to switch between multiple operating systems. Made popular by IBM's VM/CMS.

1.7.4: Exokernels (unofficial)

Similar to VM/CMS but the virtual machines have disjoint resources (e.g., distinct disk blocks) so less remapping is needed.

1.7.5: Client-Server

When implemented on one computer, a client-server OS uses the microkernel approach in which the microkernel just handles communication between clients and servers, and the main OS functions are provided by a number of separate processes.

This does have advantages. For example an error in the file server cannot corrupt memory in the process server. This makes errors easier to track down.

But it does mean that when a (real) user process makes a system call there are more process switches. These are not free.

A distributed system can be thought of as an extension of the client server concept where the servers are remote.

Today with plentiful memory, each machine would have all the different servers. So the only reason a message would go to another computer is if the originating process wished to communicate with a specific process on that computer (for example wanted to access a remote disk).

Homework: 23

Microkernels Not So Different In Practice

Dennis Ritchie, the inventor of the C programming language and co-inventor, with Ken Thompson, of Unix, was interviewed in February 2003. The following is from that interview.

What's your opinion on microkernels vs. monolithic?

Dennis Ritchie: They're not all that different when you actually use them. "Micro" kernels tend to be pretty large these days, and "monolithic" kernels with loadable device drivers are taking up more of the advantages claimed for microkernels.

================ Start Lecture #3 ================

Chapter 2: Process and Thread Management

Homework solutions posted, give password.

TAs assigned

Tanenbaum's chapter title is “Processes and Threads”. I prefer to add the word management. The subject matter is processes, threads, scheduling, interrupt handling, and IPC (InterProcess Communication--and Coordination).

2.1: Processes

Definition: A process is a program in execution.

2.1.1: The Process Model

Even though in actuality there are many processes running at once, the OS gives each process the illusion that it is running alone.

Virtual time and virtual memory are examples of abstractions provided by the operating system to the user processes so that the latter “sees” a more pleasant virtual machine than actually exists.

2.1.2: Process Creation

From the user's or external viewpoint there are several mechanisms for creating a process.

  1. System initialization, including daemon (see below) processes.
  2. Execution of a process creation system call by a running process.
  3. A user request to create a new process.
  4. Initiation of a batch job.

But looked at internally, from the system's viewpoint, the second method dominates. Indeed in unix only one process is created at system initialization (the process is called init); all the others are children of this first process.

Why have init? That is why not have all processes created via method 2?
Ans: Because without init there would be no running process to create any others.

Definition of daemon

Many systems have daemon processes lurking around to perform tasks when they are needed. I was pretty sure the terminology was related to mythology, but didn't have a reference until a student found “The {Searchable} Jargon Lexicon” at http://developer.syndetic.org/query_jargon.pl?term=demon

daemon: /day'mn/ or /dee'mn/ n. [from the mythological meaning, later rationalized as the acronym `Disk And Execution MONitor'] A program that is not invoked explicitly, but lies dormant waiting for some condition(s) to occur. The idea is that the perpetrator of the condition need not be aware that a daemon is lurking (though often a program will commit an action only because it knows that it will implicitly invoke a daemon). For example, under {ITS}, writing a file on the LPT spooler's directory would invoke the spooling daemon, which would then print the file. The advantage is that programs wanting (in this example) files printed need neither compete for access to nor understand any idiosyncrasies of the LPT. They simply enter their implicit requests and let the daemon decide what to do with them. Daemons are usually spawned automatically by the system, and may either live forever or be regenerated at intervals. Daemon and demon are often used interchangeably, but seem to have distinct connotations. The term `daemon' was introduced to computing by CTSS people (who pronounced it /dee'mon/) and used it to refer to what ITS called a dragon; the prototype was a program called DAEMON that automatically made tape backups of the file system. Although the meaning and the pronunciation have drifted, we think this glossary reflects current (2000) usage.

As is often the case, wikipedia.org proved useful. Here is the first paragraph of a more thorough entry. The wikipedia also has entries for other uses of daemon.

In Unix and other computer multitasking operating systems, a daemon is a computer program that runs in the background, rather than under the direct control of a user; they are usually instantiated as processes. Typically daemons have names that end with the letter "d"; for example, syslogd is the daemon which handles the system log.

2.1.3: Process Termination

Again from the outside there appear to be several termination mechanisms.

  1. Normal exit (voluntary).
  2. Error exit (voluntary).
  3. Fatal error (involuntary).
  4. Killed by another process (involuntary).

And again, internally the situation is simpler. In Unix terminology, there are two system calls kill and exit that are used. Kill (poorly named in my view) sends a signal to another process. If this signal is not caught (via the signal system call) the process is terminated. There is also an “uncatchable” signal. Exit is used for self termination and can indicate success or failure.

2.1.4: Process Hierarchies

Modern general purpose operating systems permit a user to create and destroy processes.

Old or primitive operating systems like MS-DOS are not fully multiprogrammed, so when one process starts another, the first process is automatically blocked and waits until the second is finished.

2.1.5: Process States and Transitions

The diagram on the right contains much information.


Homework: 1.

One can organize an OS around the scheduler.

2.1.6: Implementation of Processes

The OS organizes the data about each process in a table naturally called the process table. Each entry in this table is called a process table entry or process control block (PCB).

I normally refer to a process table entry as a PTE, but this is bad. I recently realized that I use PTE for two different things, Process Table Entry and Page Table Entry. Since the latter is very common, I must stop using the former. Please correct me if I slip up.

2.1.6A: An addendum on Interrupts

This should be compared with the addendum on transfer of control.

In a well defined location in memory (specified by the hardware) the OS stores an interrupt vector, which contains the address of the (first level) interrupt handler.

Assume a process P is running and a disk interrupt occurs for the completion of a disk read previously issued by process Q, which is currently blocked. Note that disk interrupts are unlikely to be for the currently running process (because the process that initiated the disk access is likely blocked).

Actions by P prior to the interrupt:

  1. Who knows??
    This is the difficulty of debugging code depending on interrupts, the interrupt can occur (almost) anywhere. Thus, we do not know what happened just before the interrupt.

Executing the interrupt itself:

  1. The hardware saves the program counter and some other registers (or switches to using another set of registers, the exact mechanism is machine dependent).

  2. Hardware loads new program counter from the interrupt vector.
  3. As with a trap, the hardware automatically switches the system into privileged mode. (It might have been in supervisor mode already; that is, an interrupt can occur in supervisor mode.)

Actions by the interrupt handler (et al) upon being activated

  1. An assembly language routine saves registers.

  2. The assembly routine sets up a new stack. (These last two steps are often called setting up the C environment.)

  3. The assembly routine calls a procedure in a high level language, often the C language (Tanenbaum forgot this step).

  4. The C procedure does the real work.
  5. The scheduler decides which process to run (P or Q or something else). This loosely corresponds to g calling other procedures in the simple f calls g case we discussed previously. Eventually the scheduler decides to run P.

Actions by P when control returns

  1. The C procedure (that did the real work in the interrupt processing) continues and returns to the assembly code.

  2. Assembly language restores P's state (e.g., registers) and starts P at the point it was when the interrupt occurred.

Properties of interrupts

2.2: Threads

Per process items              Per thread items
Address space                  Program counter
Global variables               Machine registers
Open files                     Stack
Child processes
Pending alarms
Signals and signal handlers
Accounting information

The idea is to have separate threads of control (hence the name) running in the same address space. An address space is a memory management concept. For now think of an address space as the memory in which a process runs together with the mapping from the virtual addresses (addresses in the program) to the physical addresses (addresses in the machine). Each thread is somewhat like a process (e.g., it is scheduled to run) but contains less state (e.g., the address space belongs to the process in which the thread runs).

2.2.1: The Thread Model

A process contains a number of resources such as address space, open files, accounting information, etc. In addition to these resources, a process has a thread of control, e.g., program counter, register contents, stack. The idea of threads is to permit multiple threads of control to execute within one process. This is often called multithreading and threads are often called lightweight processes. Because threads in the same process share so much state, switching between them is much less expensive than switching between separate processes.

Individual threads within the same process are not completely independent. For example there is no memory protection between them. This is typically not a security problem as the threads are cooperating and all are from the same user (indeed the same process). However, the shared resources do make debugging harder. For example one thread can easily overwrite data needed by another and if one thread closes a file other threads can't read from it.
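
A minimal pthreads sketch of two threads of control in one address space; both see the same global variable (the variable and function names are mine). It also previews the race condition of section 2.3.1, since the two increments may interleave:

    #include <pthread.h>
    #include <stdio.h>

    int shared = 0;                  /* global: visible to every thread */

    void *worker(void *arg) {
        shared++;                    /* no memory protection between threads */
        printf("thread %ld sees shared = %d\n", (long)arg, shared);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, (void *)1L);
        pthread_create(&t2, NULL, worker, (void *)2L);
        pthread_join(t1, NULL);      /* roughly analogous to waiting for a child */
        pthread_join(t2, NULL);
        return 0;
    }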

2.2.2: Thread Usage

Often, when a process A is blocked (say for I/O) there is still computation that can be done. Another process B can't do this computation since it doesn't have access to A's memory. But two threads in the same process do share memory, so that problem doesn't occur.

An important modern example is a multithreaded web server. Each thread is responding to a single WWW connection. While one thread is blocked on I/O, another thread can be processing another WWW connection.
Question: Why not use separate processes, i.e., what is the shared memory?
Ans: The cache of frequently referenced pages.

A common organization is to have a dispatcher thread that fields requests and then passes this request on to an idle thread.

Another example is a producer-consumer problem (c.f. below) in which we have 3 threads in a pipeline. One thread reads data from an I/O device into a buffer, the second thread performs computation on the input buffer and places results in an output buffer, and the third thread outputs the data found in the output buffer. Again, while one thread is blocked the others can execute.

Question: Why does each thread block?

Answer:

  1. The first thread blocks while waiting for the device to supply the data. It also blocks if the input buffer is full.

  2. The second thread blocks when either the input buffer is empty or the output buffer is full.

  3. The third thread blocks when the output device is busy (it might also block waiting for the output request to complete, but this is not necessary). It also blocks if the output buffer is empty.

Homework: 9.

A final (related) example is that an application that wishes to perform automatic backups can have a thread to do just this. In this way the thread that interfaces with the user is not blocked during the backup. However some coordination between threads may be needed so that the backup is of a consistent state.

2.2.3: Implementing threads in user space

Write a (threads) library that acts as a mini-scheduler and implements thread_create, thread_exit, thread_wait, thread_yield, etc. The central data structure maintained and used by this library is the thread table, the analogue of the process table in the operating system itself.
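
Here is a hedged sketch of such a mini-scheduler built on the (real, though now obsolescent) ucontext primitives getcontext, makecontext, and swapcontext. Only thread_create and thread_yield from the list above are implemented; everything else (table sizes, the round-robin policy) is invented, and a real library would also need thread_exit, thread_wait, and much more care:

    #include <stdio.h>
    #include <stdlib.h>
    #include <ucontext.h>

    #define MAX_THREADS 8
    #define STACK_SIZE  (64 * 1024)

    /* The thread table: the user-space analogue of the process table. */
    static ucontext_t thread_table[MAX_THREADS];
    static int nthreads = 1;                  /* slot 0 is the main thread */
    static int current = 0;

    void thread_create(void (*func)(void)) {
        ucontext_t *t = &thread_table[nthreads++];
        getcontext(t);                            /* initialize the context */
        t->uc_stack.ss_sp = malloc(STACK_SIZE);   /* private stack per thread */
        t->uc_stack.ss_size = STACK_SIZE;
        t->uc_link = &thread_table[0];            /* where to go if func returns */
        makecontext(t, func, 0);
    }

    void thread_yield(void) {                 /* the entire "mini-scheduler" */
        int prev = current;
        current = (current + 1) % nthreads;   /* round robin */
        swapcontext(&thread_table[prev], &thread_table[current]);
    }

    static void worker(void) {
        for (;;) {                            /* never returns; exiting would
                                                 need thread_exit machinery */
            printf("worker running as thread %d\n", current);
            thread_yield();                   /* cooperative: no preemption */
        }
    }

    int main(void) {
        thread_create(worker);
        thread_create(worker);
        for (int i = 0; i < 5; i++)
            thread_yield();                   /* let the workers run a bit */
        return 0;                             /* exiting the process ends all */
    }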

Advantages

Disadvantages

Possible methods of dealing with blocking system calls

2.2.4: Implementing Threads in the Kernel

Move the thread operations into the operating system itself. This naturally requires that the operating system itself be (significantly) modified and is thus not a trivial undertaking.

2.2.5: Hybrid Implementations

One can write a (user-level) thread library even if the kernel also has threads. This is sometimes called the M:N model since M user-mode threads run on each of N kernel threads. Then each kernel thread can switch between user-level threads. Thus switching between user-level threads within one kernel thread is very fast (no context switch), and we maintain the advantage that a blocking system call or page fault does not block the entire multithreaded application, since user threads running on this application's other kernel threads are still runnable.

2.2.6: Scheduler Activations

Skipped

2.2.7: Popup Threads

The idea is to automatically issue a thread-create system call upon message arrival. (The alternative is to have a thread or process blocked on a receive system call.) If implemented well, the latency between message arrival and thread execution can be very small since the new thread does not have state to restore.

Making Single-threaded Code Multithreaded

Definitely NOT for the faint of heart.

2.3: Interprocess Communication (IPC) and Coordination/Synchronization

2.3.1: Race Conditions

A race condition occurs when two (or more) processes are about to perform some action. Depending on the exact timing, one or the other goes first. If one of the processes goes first, everything works; but if another one goes first, an error, possibly fatal, occurs.

Imagine two processes both accessing x, which is initially 10.
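
A hedged sketch of how this goes wrong, using threads to stand in for the two processes: x++ compiles to a load, an add, and a store, so an increment and a decrement can interleave and one update is lost.

    #include <pthread.h>
    #include <stdio.h>

    int x = 10;                          /* x is initially 10 */

    void *incr(void *arg) {              /* x++ is really: load x; add 1; store x */
        for (int i = 0; i < 1000000; i++) x++;
        return NULL;
    }

    void *decr(void *arg) {              /* x-- is really: load x; subtract 1; store x */
        for (int i = 0; i < 1000000; i++) x--;
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, incr, NULL);
        pthread_create(&b, NULL, decr, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("x = %d (should be 10; often is not)\n", x);
        return 0;
    }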

Homework: 18.

================ Start Lecture #4 ================

2.3.2: Critical sections

We must prevent the interleaving of sections of code that need to be atomic with respect to each other. That is, the conflicting sections need mutual exclusion. If process A is executing its critical section, it excludes process B from executing its critical section. Conversely, if process B is executing its critical section, it excludes process A from executing its critical section.

Requirements for a critical section implementation.

  1. No two processes may be simultaneously inside their critical section.

  2. No assumption may be made about the speeds or the number of CPUs.

  3. No process outside its critical section (including the entry and exit code) may block other processes.

  4. No process should have to wait forever to enter its critical section.

2.3.3 Mutual exclusion with busy waiting

The operating system can choose not to preempt itself. That is, we do not preempt system processes (if the OS is client server) or processes running in system mode (if the OS is self service). Forbidding preemption for system processes would prevent the problem above, in which the non-atomicity of x<--x+1 crashed the printer spooler, provided the spooler is part of the OS.

But simply forbidding preemption while in system mode is not sufficient.

Software solutions for two processes

    Initially P1wants=P2wants=false

    Code for P1                             Code for P2

    Loop forever {                          Loop forever {
        P1wants <-- true         ENTRY          P2wants <-- true
        while (P2wants) {}       ENTRY          while (P1wants) {}
        critical-section                        critical-section
        P1wants <-- false        EXIT           P2wants <-- false
        non-critical-section }                  non-critical-section }

Explain why this works.

But it is wrong! Why?

Let's try again. The trouble was that setting want before the loop permitted us to get stuck. We had them in the wrong order!

Initially P1wants=P2wants=false

Code for P1                             Code for P2

Loop forever {                          Loop forever {
    while (P2wants) {}       ENTRY          while (P1wants) {}
    P1wants <-- true         ENTRY          P2wants <-- true
    critical-section                        critical-section
    P1wants <-- false        EXIT           P2wants <-- false
    non-critical-section }                  non-critical-section }

Explain why this works.

But it is wrong again! Why?

So let's be polite and really take turns. None of this wanting stuff.

Initially turn=1

Code for P1                      Code for P2

Loop forever {                   Loop forever {
    while (turn = 2) {}              while (turn = 1) {}
    critical-section                 critical-section
    turn <-- 2                       turn <-- 1
    non-critical-section }           non-critical-section }

This one forces alternation, so is not general enough. Specifically, it does not satisfy condition three, which requires that no process in its non-critical section can stop another process from entering its critical section. With alternation, if one process is in its non-critical section (NCS) then the other can enter the CS once but not again.

The first example violated rule 4 (the whole system blocked). The second example violated rule 1 (both processes could be in the critical section). The third example violated rule 3 (one process in the NCS stopped another from entering its CS).

In fact, it took years (way back when) to find a correct solution. Many earlier “solutions” were found and several were published, but all were wrong. The first correct solution was found by a mathematician named Dekker, who combined the ideas of turn and wants. The basic idea is that you take turns when there is contention, but when there is no contention, the requesting process can enter. It is very clever, but I am skipping it (I cover it when I teach distributed operating systems in V22.0480 or G22.2251). Subsequently, algorithms with better fairness properties were found (e.g., no task has to wait for another task to enter the CS twice).

What follows is Peterson's solution, which also combines turn and wants to force alternation only when there is contention. When Peterson's solution was published, it was a surprise to see such a simple solution. In fact Peterson gave a solution for any number of processes. A proof that the algorithm satisfies our properties (including a strong fairness condition) for any number of processes can be found in Operating Systems Review, Jan. 1990, pp. 18-22.

Initially P1wants=P2wants=false  and  turn=1

Code for P1                        Code for P2

Loop forever {                     Loop forever {
    P1wants <-- true                   P2wants <-- true
    turn <-- 2                         turn <-- 1
    while (P2wants and turn=2) {}      while (P1wants and turn=1) {}
    critical-section                   critical-section
    P1wants <-- false                  P2wants <-- false
    non-critical-section }             non-critical-section }
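
Below is a hedged C11 rendering of Peterson's algorithm, with the processes numbered 0 and 1 rather than the 1 and 2 above. The algorithm as written assumes loads and stores are sequentially consistent; plain variables do not guarantee that on modern hardware, so the sketch uses C11 atomics (whose operations default to sequential consistency):

    #include <stdatomic.h>
    #include <stdbool.h>

    atomic_bool wants[2];                /* wants[i]: process i wants in */
    atomic_int  turn;                    /* tie-breaker when both want in */

    void enter_cs(int self) {            /* self is 0 or 1 */
        int other = 1 - self;
        atomic_store(&wants[self], true);
        atomic_store(&turn, other);      /* politely offer the turn */
        while (atomic_load(&wants[other]) && atomic_load(&turn) == other)
            ;                            /* busy wait */
    }

    void leave_cs(int self) {
        atomic_store(&wants[self], false);
    }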

Hardware assist (test and set)

TAS(b), where b is a binary variable, ATOMICALLY sets b<--true and returns the OLD value of b.
Of course it would be silly to return the new value of b since we know the new value is true.

The word atomically means that the two actions performed by TAS(x), testing (i.e., returning the old value of x) and setting (i.e., assigning true to x) are inseparable. Specifically it is not possible for two concurrent TAS(x) operations to both return false (unless there is also another concurrent statement that sets x to false).

With TAS available implementing a critical section for any number of processes is trivial.

    initially s=false

    loop forever {
        while (TAS(s)) {}   ENTRY
        CS
        s<--false           EXIT
        NCS }
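
C11 happens to provide exactly this primitive as atomic_flag; the following hedged sketch is a direct rendering of the entry and exit code above:

    #include <stdatomic.h>

    atomic_flag s = ATOMIC_FLAG_INIT;    /* initially false (clear) */

    void entry(void) {
        /* atomic_flag_test_and_set IS TAS: atomically set s to true
           and return the OLD value */
        while (atomic_flag_test_and_set(&s))
            ;                            /* busy wait */
    }

    void exit_cs(void) {
        atomic_flag_clear(&s);           /* s <-- false */
    }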

2.3.4: Sleep and Wakeup

Remark: Tanenbaum does both busy waiting (as above) and blocking (process switching) solutions. We will only do busy waiting, which is easier. Sleep and Wakeup are the simplest blocking primitives. Sleep voluntarily blocks the process and wakeup unblocks a sleeping process. We will not cover these.

Homework: Explain the difference between busy waiting and blocking process synchronization.

2.3.5: Semaphores

Remark: Tanenbaum uses the term semaphore only for blocking solutions. I will use the term for our busy waiting solutions. Others call our solutions spin locks.

P and V and Semaphores

The entry code is often called P and the exit code V. Thus the critical section problem is to write P and V so that

loop forever
    P
    critical-section
    V
    non-critical-section
satisfies
  1. Mutual exclusion.
  2. No speed assumptions.
  3. No blocking by processes in NCS.
  4. Forward progress (my weakened version of Tanenbaum's last condition).

Note that I use indenting carefully and hence do not need (and sometimes omit) the braces {} used in languages like C or java.

A binary semaphore abstracts the TAS solution we gave for the critical section problem.

The above code is not real, i.e., it is not an implementation of P. It is, instead, a definition of the effect P is to have.

To repeat: for any number of processes, the critical section problem can be solved by

loop forever
    P(S)
    CS
    V(S)
    NCS

The only specific solution we have seen for an arbitrary number of processes is the one just above with P(S) implemented via test and set.

Remark: Peterson's solution requires each process to know its process number. The TAS solution does not. Moreover the definition of P and V does not permit use of the process number. Thus, strictly speaking, Peterson did not provide an implementation of P and V. He did solve the critical section problem.

To solve other coordination problems we want to extend binary semaphores.

Both of the shortcomings can be overcome by not restricting ourselves to a binary variable, but instead define a generalized or counting semaphore.

These counting semaphores can solve what I call the semi-critical-section problem, where you permit up to k processes in the section. When k=1 we have the original critical-section problem.

initially S=k

loop forever
    P(S)
    SCS   <== semi-critical-section
    V(S)
    NCS

Producer-consumer problem

Initially e=k, f=0 (counting semaphore); b=open (binary semaphore)

Producer                         Consumer

loop forever                     loop forever
    produce-item                     P(f)
    P(e)                             P(b); take item from buf; V(b)
    P(b); add item to buf; V(b)      V(e)
    V(f)                             consume-item
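
Here is a hypothetical, compilable rendering of this scheme (C11 atomics plus POSIX threads; the buffer size k=4, the item count, and all helper names are my own choices):

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    #define K 4
    static atomic_int e = K, f = 0;          /* counting: empty and full slots */
    static atomic_flag b = ATOMIC_FLAG_INIT; /* binary: guards the buffer */
    static int buf[K], in = 0, out = 0;

    static void Pc(atomic_int *s) {          /* P on a counting semaphore */
        for (;;) {
            int v = atomic_load(s);
            if (v > 0 && atomic_compare_exchange_weak(s, &v, v - 1)) return;
        }
    }
    static void Vc(atomic_int *s) { atomic_fetch_add(s, 1); }
    static void Pb(atomic_flag *s) { while (atomic_flag_test_and_set(s)) {} }
    static void Vb(atomic_flag *s) { atomic_flag_clear(s); }

    static void *producer(void *arg) {
        (void)arg;
        for (int item = 0; item < 20; item++) {  /* produce-item */
            Pc(&e);                              /* wait for an empty slot */
            Pb(&b); buf[in] = item; in = (in + 1) % K; Vb(&b);
            Vc(&f);                              /* announce a full slot */
        }
        return NULL;
    }

    static void *consumer(void *arg) {
        (void)arg;
        for (int i = 0; i < 20; i++) {
            Pc(&f);                              /* wait for a full slot */
            Pb(&b); int item = buf[out]; out = (out + 1) % K; Vb(&b);
            Vc(&e);                              /* announce an empty slot */
            printf("consumed %d\n", item);       /* consume-item */
        }
        return NULL;
    }

    int main(void) {
        pthread_t p, c;
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }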

2.3.6: Mutexes

Remark: Whereas we use the term semaphore to mean binary semaphore and explicitly say generalized or counting semaphore for the positive integer version, Tanenbaum uses semaphore for the positive integer solution and mutex for the binary version. Also, as indicated above, for Tanenbaum semaphore/mutex implies a blocking primitive; whereas I use binary/counting semaphore for both busy-waiting and blocking implementations. Finally, remember that in this course we are studying only busy-waiting solutions.

My Terminology

                 Busy wait             Block/switch
critical         (binary) semaphore    (binary) semaphore
semi-critical    counting semaphore    counting semaphore

Tanenbaum's Terminology

                 Busy wait             Block/switch
critical         enter/leave region    mutex
semi-critical    no name               semaphore

2.3.7: Monitors

Skipped.

2.3.8: Message Passing

Skipped. You can find some information on barriers in my lecture notes for a follow-on course (see in particular lecture #16).

2.4: Classical IPC Problems

2.4.0: The Producer-Consumer (or Bounded Buffer) Problem

We did this previously.

2.4.1: The Dining Philosophers Problem

A classical problem from Dijkstra.

What algorithm do you use for access to the shared resource (the forks)?

The purpose of mentioning the Dining Philosophers problem without giving the solution is to give a feel for what coordination problems are like. The book gives others as well. We are skipping these (again, this material would be covered in a sequel course). If you are interested, look, for example, here.

Homework: 31 and 32 (these have short answers but are not easy). Note that the problem refers to fig. 2-20, which is incorrect. It should be fig 2-33.

2.4.2: The Readers and Writers Problem

Quite useful in multiprocessor operating systems and database systems. The “easy way out” is to treat all processes as writers in which case the problem reduces to mutual exclusion (P and V). The disadvantage of the easy way out is that you give up reader concurrency. Again for more information see the web page referenced above.

2.4.3: The Sleeping Barber Problem

Skipped.

Critical Sections versus Transactions

Critical Sections have a form of atomicity, in some ways similar to transactions. But there is a key difference: With critical sections you have certain blocks of code, say A, B, and C, that are mutually exclusive (i.e., are atomic with respect to each other) and other blocks, say D and E, that are mutually exclusive; but blocks from different critical sections, say A and D, are not mutually exclusive.

The day after giving this lecture in 2006-07-spring, I found a modern reference to the same question. The quote below is from Subtleties of Transactional Memory Atomicity Semantics by Blundell, Lewis, and Martin in Computer Architecture Letters (volume 5, number 2, July-Dec. 2006, pp. 65-66). As mentioned above, busy-waiting (binary) semaphores are often called locks (or spin locks).

... conversion (of a critical section to a transaction) broadens the scope of atomicity, thus changing the program's semantics: a critical section that was previously atomic only with respect to other critical sections guarded by the same lock is not atomic with respect to all other critical sections.

2.4A: Summary of 2.3 and 2.4

We began with a problem (wrong answer for x++ and x--) and used it to motivate the Critical Section Problem for which we provided a (software) solution.

We then defined (binary) Semaphores and showed that a Semaphore easily solves the critical section problem and doesn't require knowledge of how many processes are competing for the critical section. We gave an implementation using Test-and-Set.

We then gave an operational definition of Semaphore (which is not an implementation) and morphed this definition to obtain a Counting (or Generalized) Semaphore, for which we gave NO implementation. I asserted that a counting semaphore can be implemented using 2 binary semaphores and gave a reference.

We defined the Producer-Consumer (or Bounded Buffer) Problem and showed that it can be solved using counting semaphores (and binary semaphores, which are a special case).

Finally we briefly discussed some classical problems, but did not give (full) solutions.

2.5: Process Scheduling

Scheduling processes on the processor is often called “process scheduling” or simply “scheduling”.

The objectives of a good scheduling policy include

Recall the basic diagram describing process states

For now we are discussing short-term scheduling, i.e., the arcs connecting running <--> ready.

Medium term scheduling is discussed later.

Preemption

It is important to distinguish preemptive from non-preemptive scheduling algorithms.

Deadline scheduling

This is used for real time systems. The objective of the scheduler is to find a schedule for all the tasks (there are a fixed set of tasks) so that each meets its deadline. The run time of each task is known in advance.

Actually it is more complicated.

We do not cover deadline scheduling in this course.

The name game

There is an amazing inconsistency in naming the different (short-term) scheduling algorithms. Over the years I have used primarily 4 books: In chronological order they are Finkel, Deitel, Silberschatz, and Tanenbaum. The table just below illustrates the name game for these four books. After the table we discuss each scheduling policy in turn.

Finkel  Deitel  Silberschatz  Tanenbaum
----------------------------------------
FCFS    FIFO    FCFS          FCFS
RR      RR      RR            RR
PS      **      PS            PS
SRR     **      SRR           **          not in Tanenbaum
SPN     SJF     SJF           SJF
PSPN    SRT     PSJF/SRTF     --          unnamed in Tanenbaum
HPRN    HRN     **            **          not in Tanenbaum
**      **      MLQ           **          only in Silberschatz
FB      MLFQ    MLFQ          MQ

Remark: For an alternate organization of the scheduling algorithms (due to Eric Freudenthal and presented by him Fall 2002) click here.

First Come First Served (FCFS, FIFO, FCFS, FCFS)

If the OS “doesn't” schedule, it still needs to store the list of ready processes in some manner. If it is a queue you get FCFS. If it is a stack (strange), you get LCFS. Perhaps you could get some sort of random policy as well.

Round Robin (RR, RR, RR, RR)

Homework: 26, 35, 38.

Homework: Give an argument favoring a large quantum; give an argument favoring a small quantum.

Process   CPU Time   Creation Time
P1        20         0
P2        3          3
P3        2          5
Homework: For the three processes in the table above, assume each process is ready from its creation time until it terminates, no process blocks, and round robin scheduling is used. Give the time at which each process finishes for quantum q=1 and for q=2.

Homework: Redo the previous homework for q=2 with the following change. After process P1 runs for 3ms (milliseconds), it blocks for 2ms. P1 never blocks again. P2 never blocks. After P3 runs for 1 ms it blocks for 1ms. Remind me to answer this one in class next lecture.

Processor Sharing (PS, **, PS, PS)

Merge the ready and running states and permit all ready jobs to be run at once. However, the processor slows down so that when n jobs are running at once, each progresses at a speed 1/n as fast as it would if it were running alone. For example, if two jobs that each need 10ms of CPU time start together, both finish at time 20ms.

Homework: 34.

Variants of Round Robin

================ Start Lecture #5 ================

Remark: Last time there was a question concerning whether critical sections made a section of code atomic with respect to just another section or to all code. The very next day I found a modern reference to the same question.

The reference is the Blundell, Lewis, and Martin quote given above under “Critical Sections versus Transactions”.

Priority Scheduling

Each job is assigned a priority (externally, perhaps by charging more for higher priority) and the highest priority ready job is run.

Priority aging

As a job is waiting, raise its priority so eventually it will have the maximum priority.
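
As a small illustration (names mine): each time the scheduler picks the highest priority job, every job passed over has its priority bumped, so a long waiter eventually wins.

    typedef struct { int priority; /* other fields omitted */ } Job;

    int pick_next(Job jobs[], int n) {
        int best = 0;
        for (int i = 1; i < n; i++)          /* highest priority wins */
            if (jobs[i].priority > jobs[best].priority)
                best = i;
        for (int i = 0; i < n; i++)          /* aging: every job that waited */
            if (i != best)                   /* creeps toward the maximum */
                jobs[i].priority++;
        return best;
    }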

Selfish RR (SRR, **, SRR, **)

Shortest Job First (SPN, SJF, SJF, SJF)

Sort jobs by total execution time needed and run the shortest first.

Homework: 39, 40. Note that when the book says RR with each process getting its fair share, it means Processor Sharing.

Preemptive Shortest Job First (PSPN, SRT, PSJF/SRTF, --)

Preemptive version of the above.

Highest Penalty Ratio Next (HPRN, HRN, **, **)

Run the process that has been “hurt” the most.

Remark: Recall that SJF/PSJF do a good job of minimizing the average waiting time. The problem with them is the difficulty in finding the job whose next CPU burst is minimal. We now learn two scheduling algorithms that attempt to do this (approximately). The first one does this statically, presumably with some manual help; the second is dynamic and fully automatic.

Multilevel Queues (**, **, MLQ, **)

Put different classes of processes in different queues.

Multilevel Feedback Queues (FB, MLFQ, MLFQ, MQ)

As with multilevel queues above we have many queues, but now processes move from queue to queue in an attempt to dynamically separate “batch-like” from interactive processes so that we can favor the latter.

Theoretical Issues

Considerable theory has been developed.

Medium-Term Scheduling

In addition to the short-term scheduling we have discussed, we add medium-term scheduling in which decisions are made at a coarser time scale.

Long Term Scheduling

2.5.4: Scheduling in Real Time Systems

Skipped

2.5.5: Policy versus Mechanism

Skipped.

2.5.6: Thread Scheduling

Skipped.

Research on Processes and Threads

Skipped.

Chapter 3: Deadlocks

A deadlock occurs when every member of a set of processes is waiting for an event that can only be caused by a member of the set.

Often the event waited for is the release of a resource.

In the automotive world deadlocks are called gridlocks.

Old Reward: I used to give one point extra credit on the final exam for anyone who brought a real (e.g., newspaper) picture of an automotive deadlock. Note that it must really be a gridlock, i.e., motion is not possible without breaking the traffic rules. A huge traffic jam is not sufficient. This was solved last semester so there is no reward any more. One of the winning pictures is on my office door.

For a computer science example consider two processes A and B that each want to print a file currently on tape.

  1. A has obtained ownership of the printer and will release it after printing one file.
  2. B has obtained ownership of the tape drive and will release it after reading one file.
  3. A tries to get ownership of the tape drive, but is told to wait for B to release it.
  4. B tries to get ownership of the printer, but is told to wait for A to release the printer.

Bingo: deadlock!

3.1: Resources

The resource is the object granted to a process.

3.1.1: Preemptable and Nonpreemptable Resources

3.1.2: Resource Acquisition

Simple example of the trouble you can get into.

Recall from the semaphore/critical-section treatment last chapter, that it is easy to cause trouble if a process dies or stays forever inside its critical section; we assume processes do not do this. Similarly, we assume that no process retains a resource forever. It may obtain the resource an unbounded number of times (i.e. it can have a loop forever with a resource request inside), but each time it gets the resource, it must release it eventually.

3.2: Introduction to Deadlocks

To repeat: A deadlock occurs when every member of a set of processes is waiting for an event that can only be caused by a member of the set.

Often the event waited for is the release of a resource.

3.2.1: (Necessary) Conditions for Deadlock

The following four conditions (Coffman; Havender) are necessary but not sufficient for deadlock. Repeat: They are not sufficient.

  1. Mutual exclusion: A resource can be assigned to at most one process at a time (no sharing).
  2. Hold and wait: A process holding a resource is permitted to request another.
  3. No preemption: Resources cannot be taken away from a process; the process itself must release them.
  4. Circular wait: There must be a chain of processes such that each member of the chain is waiting for a resource held by the next member of the chain.

The first three are characteristics of the system and resources. That is, for a given system with a fixed set of resources, the first three conditions are either true or false: They don't change with time. The truth or falsehood of the last condition does indeed change with time as the resources are requested/allocated/released.

3.2.2: Deadlock Modeling

On the right are several examples of a Resource Allocation Graph, also called a Reusable Resource Graph.

Homework: 5.

Consider two concurrent processes P1 and P2 whose programs are:

P1: request R1       P2: request R2
    request R2           request R1
    release R2           release R1
    release R1           release R2

On the board draw the resource allocation graph for various possible executions of the processes, indicating when deadlock occurs and when deadlock is no longer avoidable.

There are four strategies used for dealing with deadlocks.

  1. Ignore the problem
  2. Detect deadlocks and recover from them
  3. Avoid deadlocks by carefully deciding when to allocate resources.
  4. Prevent deadlocks by violating one of the 4 necessary conditions.

3.3: Ignoring the problem--The Ostrich Algorithm

The “put your head in the sand approach”.

3.4: Detecting Deadlocks and Recovering From Them

3.4.1: Detecting Deadlocks with Single Unit Resources

Consider the case in which there is only one instance of each resource.

Finding a directed cycle in a directed graph is not hard. The algorithm is in the book. The idea is simple (a C sketch follows the steps below).

  1. For each node in the graph do a depth first traversal to see if the graph is a DAG (directed acyclic graph), building a list as you go down the DAG (and pruning it as you backtrack back up).

  2. If you ever find the same node twice on your list, you have found a directed cycle, the graph is not a DAG, and deadlock exists among the processes in your current list.

  3. If you never find the same node twice, the graph is a DAG and no deadlock occurs.

  4. The searches are finite since there are a finite number of nodes.
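
A sketch in C of these steps (my illustration; it assumes a small adjacency matrix over all nodes, processes and resources alike):

    #define N 16
    int edge[N][N];                 /* edge[u][v] != 0 if there is an arc u -> v */
    int on_path[N];                 /* the list of nodes on the current path */

    static int dfs(int u) {
        if (on_path[u]) return 1;   /* same node twice on the list: a cycle */
        on_path[u] = 1;             /* build the list going down */
        for (int v = 0; v < N; v++)
            if (edge[u][v] && dfs(v)) return 1;
        on_path[u] = 0;             /* prune it backtracking up */
        return 0;
    }

    int deadlock_exists(void) {     /* do the traversal from each node */
        for (int u = 0; u < N; u++)
            if (dfs(u)) return 1;
        return 0;
    }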

================ Start Lecture #6 ================

3.4.2: Detecting Deadlocks with Multiple Unit Resources

This is more difficult.

3.4.3: Recovery from deadlock

Preemption

Perhaps you can temporarily preempt a resource from a process. Not likely.

Rollback

Database (and other) systems take periodic checkpoints. If the system does take checkpoints, one can roll back to a checkpoint whenever a deadlock is detected. One must somehow guarantee forward progress.

Kill processes

Can always be done but might be painful. For example some processes have had effects that can't be simply undone. Print, launch a missile, etc.

Remark: We are doing 3.6 before 3.5 since 3.6 is easier.

3.6: Deadlock Prevention

Attack one of the Coffman/Havender conditions.

3.6.1: Attacking Mutual Exclusion

The idea is to use spooling instead of mutual exclusion. This is not possible for many kinds of resources.

3.6.2: Attacking Hold and Wait

Require each process to request all resources at the beginning of the run. This is often called One Shot.

3.6.3: Attacking No Preempt

Normally not possible. That is, some resources are inherently preemptable (e.g., memory). For those deadlock is not an issue. Other resources are non-preemptable, such as a robot arm. It is normally not possible to find a way to preempt one of these latter resources.

3.6.4: Attacking Circular Wait

Establish a fixed ordering of the resources and require that they be requested in this order. So if a process holds resources #34 and #54, it can request only resources #55 and higher.

It is easy to see that a cycle is no longer possible.
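
As an illustration (names mine), the manager can enforce the ordering mechanically; release bookkeeping is omitted:

    int highest_held = -1;          /* per process: highest resource number held */

    int request_resource(int r) {
        if (r <= highest_held)
            return -1;              /* out of order: refuse, so no cycle can form */
        /* ... block until resource r is free, then acquire it ... */
        highest_held = r;
        return 0;
    }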

Homework: 7.

3.5: Deadlock Avoidance

Let's see if we can tiptoe through the tulips and avoid deadlock states even though our system does permit all four of the necessary conditions for deadlock.

An optimistic resource manager is one that grants every request as soon as it can. To avoid deadlocks with all four conditions present, the manager must be smart not optimistic.

3.5.1 Resource Trajectories

We plot the progress of each process along an axis. In the example we show, there are two processes, hence two axes, i.e., the diagram is planar. This procedure assumes that we know the entire request and release pattern of the processes in advance, so it is not a practical solution. I present it because it provides some motivation for the practical solution that follows, the Banker's Algorithm.

Homework: 10, 11, 12.

3.5.2: Safe States

Avoiding deadlocks given some extra knowledge.

Definition: A state is safe if there is an ordering of the processes such that: if the processes are run in this order, they will all terminate (assuming none exceeds its claim).

Recall the comparison made above between detecting deadlocks (with multi-unit resources) and the banker's algorithm (which stays in safe states).

In the definition of a safe state no assumption is made about the running processes; that is, for a state to be safe, termination must occur no matter what the processes do (provided they all terminate and do not exceed their claims). Making no assumption is the same as making the most pessimistic assumption.

Give an example of each of the four possibilities. A state that is

  1. Safe and deadlocked--not possible.

  2. Safe and not deadlocked--trivial (e.g., no arcs).

  3. Not safe and deadlocked--easy (any deadlocked state).

  4. Not safe and not deadlocked--interesting.

Is the figure on the right safe or not?

A manager can determine if a state is safe.

The manager then follows the following procedure, which is part of the Banker's Algorithm discovered by Dijkstra, to determine if the state is safe (a sketch in C follows the steps).

  1. If there are no processes remaining, the state is safe.

  2. Seek a process P whose max additional requests can be satisfied with what remains (for each resource type). If no such process exists, the state is unsafe.
  3. The banker now pretends that P has terminated (since the banker knows that it can guarantee this will happen). Hence the banker pretends that all of P's currently held resources are returned. This makes the banker richer and hence perhaps a process that was not eligible to be chosen as P previously, can now be chosen.

  4. Repeat these steps.
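
As a sketch (mine, for a single resource type), the procedure above amounts to the following; running it on Example 1 below reports safe, and on Example 2 unsafe:

    /* Repeatedly retire any process whose max additional request is at most
       what the banker has on hand; safe iff all processes can be retired. */
    int is_safe(int n, const int alloc[], const int max_addl[], int avail) {
        int done[16] = {0};
        int remaining = n, progress = 1;
        while (remaining > 0 && progress) {
            progress = 0;
            for (int p = 0; p < n; p++)
                if (!done[p] && max_addl[p] <= avail) {
                    avail += alloc[p];   /* pretend p terminates, repaying all */
                    done[p] = 1;
                    remaining--;
                    progress = 1;
                }
        }
        return remaining == 0;
    }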

Example 1

A safe state with 22 units of one resource

process     initial claim   current alloc   max add'l
X           3               1               2
Y           11              5               6
Z           19              10              9
Total                       16
Available                   6

Example 2

An unsafe state with 22 units of one resource

process     initial claim   current alloc   max add'l
X           3               1               2
Y           11              5               6
Z           19              12              7
Total                       18
Available                   4

Start with example 1 and assume that Z now requests 2 units and the manager grants them.

Remark: Discuss the diagram above (before the examples), explaining why one cannot determine safety without the initial claims.

Remark: Lab 3 (banker) assigned. It is due in 2 weeks.

Remark: An unsafe state is not necessarily a deadlocked state. Indeed, if one gets lucky all processes in an unsafe state may terminate successfully. A safe state means that the manager can guarantee that no deadlock will occur.

3.5.3: The Banker's Algorithm (Dijkstra) for a Single Resource

The algorithm is simple: Stay in safe states. Initially, we assume all the processes are present before execution begins and that all initial claims are given before execution begins. We will relax these assumptions very soon.

Homework: 13.

3.5.4: The Banker's Algorithm for Multiple Resources

At a high level the algorithm is identical: Stay in safe states.

Limitations of the banker's algorithm

Homework: 21, 27, and 20. There is an interesting typo in 20: A has claimed 3 units of resource 5, but there are only 2 units in the entire system. Change the problem by having B both claim and be allocated 1 unit of resource 5.

3.7: Other Issues

3.7.1: Two-phase locking

This is covered (MUCH better) in a database text. We will skip it.

3.7.2: Non-resource deadlocks

You can get deadlock from semaphores as well as resources. This is trivial. Semaphores can be considered resources. P(S) is request S and V(S) is release S. The manager is the module implementing P and V. When the manager returns from P(S), it has granted the resource S.

3.7.3: Starvation

As usual FCFS is a good cure. Often this is done by priority aging and picking the highest priority process to get the resource. One can also periodically stop accepting new processes until all old ones get their resources.

3.8: Research on Deadlocks

Skipped.

3.9: Summary

Read.

Chapter 4: Memory Management

Also called storage management or space management.

Memory management must deal with the storage hierarchy present in modern machines.

We will see in the next few weeks that there are three independent decisions:

  1. Segmentation (or no segmentation)
  2. Paging (or no paging)
  3. Fetch on demand (or no fetching on demand)

Memory management implements address translation.

Homework: 6.

When is address translation performed?

  1. At compile time
  2. At link-edit time (the “linker lab”)
  3. At load time
  4. At execution time

Extensions

  1. Dynamic Loading
  2. Dynamic Linking

================ Start Lecture #7 ================

Remark: Lab 3 (banker) is on the web. It is due in two NYU weeks (three calendar weeks), i.e., 21 March 2007. Note: I will place ** before each memory management scheme.

4.1: Basic Memory Management (Without Swapping or Paging)

Entire process remains in memory from start to finish and does not move.

The sum of the memory requirements of all jobs in the system cannot exceed the size of physical memory.

** 4.1.1: Monoprogramming without swapping or paging (Single User)

The “good old days” when everything was easy.

**4.1.2: Multiprogramming with fixed partitions

Two goals of multiprogramming are to improve CPU utilization, by overlapping CPU and I/O, and to permit short jobs to finish quickly.

4.1.3: Modeling Multiprogramming (crudely)

Homework: 1, 2 (typo in book; figure 4.21 seems irrelevant).

4.1.4: Analysis of Multiprogramming System Performance

Skipped

4.1.5: Relocation and Protection

Relocation was discussed as part of linker lab and at the beginning of this chapter. When done dynamically, a simple method is to have a base register whose value is added to every address by the hardware.

Similarly a limit register is checked by the hardware to be sure that the address (before the base register is added) is not bigger than the size of the program.

The base and limit registers are set by the OS when the job starts.
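
As a sketch (mine), the per-access check and relocation performed by the hardware amount to:

    #include <stdlib.h>

    unsigned base_reg, limit_reg;   /* loaded by the OS when the job starts */

    unsigned translate(unsigned va) {
        if (va >= limit_reg)
            abort();                /* real hardware raises an addressing exception */
        return va + base_reg;       /* dynamic relocation: add the base */
    }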

4.2: Swapping

Moving entire processes between disk and memory is called swapping.

Multiprogramming with Variable Partitions

Both the number and size of the partitions change with time.

Homework: 3

MVT Introduces the “Placement Question”

That is, which hole (partition) should one choose?

Homework: 5.

4.2.1: Memory Management with Bitmaps

Divide memory into blocks and associate a bit with each block, used to indicate if the corresponding block is free or allocated. To find a chunk of size N blocks need to find N consecutive bits indicating a free block.

The only design question is how much memory one bit should represent.
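
A sketch of the search (mine; one bit per block, 0 meaning free):

    /* Return the first block of a run of n consecutive free blocks, or -1. */
    int find_free_run(const unsigned char *bitmap, int nblocks, int n) {
        int run = 0;
        for (int b = 0; b < nblocks; b++) {
            int allocated = bitmap[b / 8] & (1 << (b % 8));
            run = allocated ? 0 : run + 1;
            if (run == n) return b - n + 1;
        }
        return -1;                  /* no hole is big enough */
    }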

4.2.2: Memory Management with Linked Lists

Memory Management using Boundary Tags

MVT also introduces the “Replacement Question”

That is, which victim should we swap out? Note that this is an example of the suspend arc mentioned in process scheduling.

We will study this question more when we discuss demand paging in which case we swap out part of a process.

Considerations in choosing a victim

================ Start Lecture #8 ================

NOTEs:
  1. The schemes presented so far have had two properties:
    1. Each job is stored contiguously in memory. That is, the job is contiguous in physical addresses.
    2. Each job cannot use more memory than exists in the system. That is, the virtual address space cannot exceed the physical address space.

  2. Tanenbaum now attacks the second item. I wish to do both and start with the first.

  3. Tanenbaum (and most of the world) uses the term “paging” to mean what I call demand paging. This is unfortunate as it mixes together two concepts.
    1. Paging (dicing the address space) to solve the placement problem and essentially eliminate external fragmentation.
    2. Demand fetching, to permit the total memory requirements of all loaded jobs to exceed the size of physical memory.

  4. Tanenbaum (and most of the world) uses the term virtual memory as a synonym for demand paging. Again I consider this unfortunate.
    1. Demand paging is a fine term and is quite descriptive.
    2. Virtual memory “should” be used in contrast with physical memory to describe any virtual to physical address translation.

** (non-demand) Paging

Simplest scheme to remove the requirement of contiguous physical memory.

Example: Assume a decimal machine with page size = frame size = 1000.
Assume PTE 3 contains 459.
Then virtual address 3372 corresponds to physical address 459372.
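
The arithmetic of the example as a tiny C program (the page table contents are just those of the example):

    #include <stdio.h>

    #define PAGE_SIZE 1000          /* the decimal machine of the example */

    int page_table[8] = { [3] = 459 };  /* PTE 3 contains 459 */

    long translate(long va) {
        long page   = va / PAGE_SIZE;
        long offset = va % PAGE_SIZE;
        return (long)page_table[page] * PAGE_SIZE + offset;
    }

    int main(void) {
        printf("%ld\n", translate(3372));   /* prints 459372 */
        return 0;
    }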

Properties of (non-demand) paging (without segmentation).

Homework: 16.

Address translation

The choice of page size is discussed below.

Homework: 8.

4.3: Virtual Memory (meaning fetch on demand)

Idea is that a program can execute even if only the active portion of its address space is memory resident. That is, we are to swap in and swap out portions of a program. In a crude sense this could be called “automatic overlays”.

Advantages

Disadvantages

** 4.3.1: Paging (meaning demand paging)

Fetch pages from disk to memory when they are referenced, with a hope of getting the most actively used pages in memory.

Homework: 12.

4.3.2: Page tables

A discussion of page tables is also appropriate for (non-demand) paging, but the issues are more acute with demand paging since the tables can be much larger. Why?

  1. The total size of the active processes is no longer limited to the size of physical memory. Since the total size of the processes is greater, the total size of the page tables is greater and hence concerns over the size of the page table are more acute.

  2. With demand paging an important question is the choice of a victim page to page out. Data in the page table can be useful in this choice.

We must be able to access the page table very quickly since it is needed for every memory access.

Unfortunate laws of hardware.

So we can't just say, put the page table in fast processor registers, and let it be huge, and sell the system for $1000.

The simplest solution is to put the page table in main memory. However, this seems to be both too slow and too big.

  1. It seems too slow since all memory references then require two references.
  2. The page table might be too big.

Contents of a PTE

Each page has a corresponding page table entry (PTE). The information in a PTE is for use by the hardware.
Why must it be tailored for the hardware and not the OS?
Because it is accessed frequently.
The page table format is determined by the hardware, so access routines are not portable. Information set by and used by the OS is normally kept in other OS tables.

(Actually some systems, those with software TLB reload, do not have hardware access.)

The following fields are often present in a PTE

  1. The valid bit. This tells if the page is currently loaded (i.e., is in a frame). If set, the frame number is valid. It is also called the presence or presence/absence bit. If a page is accessed with the valid bit unset, a page fault is generated by the hardware.

  2. The frame number. This field is the main reason for the table. It gives the virtual to physical address translation.

  3. The Modified bit. Indicates that some part of the page has been written since it was loaded. This is needed if the page is evicted so that the OS can tell if the page must be written back to disk.

  4. The referenced bit. Indicates that some word in the page has been referenced. Used to select a victim: unreferenced pages make good victims by the locality property (discussed below).

  5. Protection bits. For example one can mark text pages as execute only. This requires that boundaries between regions with different protection are on page boundaries. Normally many consecutive (in logical address) pages have the same protection so many page protection bits are redundant. Protection is more naturally done with segmentation, but in many current systems, it is done with paging (since the systems don't utilize segmentation, even though the hardware supports it).

Multilevel page tables

Recall the previous diagram. Most of the virtual memory is the unused space between the data and stack regions. However, with demand paging this space does not waste real memory. But the single large page table does waste real memory.

The idea of multi-level page tables (a similar idea is used in Unix i-node-based file systems, which we study later when we do I/O) is to add a level of indirection and have a page table containing pointers to page tables.

================ Start Lecture #9 ================

Remark: Lab 4 (the last lab) is assigned.

Address translation with a 2-level page table

For a two level page table the virtual address is divided into three pieces

+-----+-----+-------+
| P#1 | P#2 | Offset|
+-----+-----+-------+

Do an example on the board
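
For instance (a sketch of mine, assuming a 32-bit virtual address split 10/10/12 and both table levels already allocated):

    #include <stdint.h>

    uint32_t *top_level[1024];            /* each entry points to a 2nd-level table */

    uint32_t translate(uint32_t va) {
        uint32_t p1     = va >> 22;           /* top 10 bits index level 1 */
        uint32_t p2     = (va >> 12) & 0x3FF; /* next 10 bits index level 2 */
        uint32_t offset = va & 0xFFF;         /* low 12 bits pass through */
        uint32_t frame  = top_level[p1][p2];
        return (frame << 12) | offset;
    }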

The VAX used a 2-level page table structure, but with some wrinkles (see Tanenbaum for details).

Naturally, there is no need to stop at 2 levels. In fact the SPARC has 3 levels and the Motorola 68030 has 4 (and the number of bits of Virtual Address used for P#1, P#2, P#3, and P#4 can be varied).

4.3.3: TLBs--Translation Lookaside Buffers (and General Associative Memory)

Note: Tanenbaum suggests that “associative memory” and “translation lookaside buffer” are synonyms. This is wrong. Associative memory is a general concept of which the translation lookaside buffer is a specific example.

An associative memory is a content addressable memory. That is, you access the memory by giving the value of some (index) field and the hardware searches all the records and returns the record whose field contains the requested value.

For example

Name  | Animal | Mood     | Color
======+========+==========+======
Moris | Cat    | Finicky  | Grey
Fido  | Dog    | Friendly | Black
Izzy  | Iguana | Quiet    | Brown
Bud   | Frog   | Smashed  | Green

If the index field is Animal and Iguana is given, the associative memory returns

Izzy  | Iguana | Quiet    | Brown

A Translation Lookaside Buffer or TLB is an associative memory where the index field is the page number. The other fields include the frame number, dirty bit, valid bit, etc.
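
In code (an illustrative sketch of mine; real hardware searches all entries at once, which this loop merely simulates):

    typedef struct { int valid, page, frame, dirty; } TlbEntry;

    #define TLB_SIZE 64
    TlbEntry tlb[TLB_SIZE];

    int tlb_lookup(int page, int *frame) {
        for (int i = 0; i < TLB_SIZE; i++)
            if (tlb[i].valid && tlb[i].page == page) {
                *frame = tlb[i].frame;
                return 1;           /* hit */
            }
        return 0;                   /* miss: walk the page table instead */
    }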

Homework: 17.

4.3.4: Inverted page tables

Keep a table indexed by frame number. The content of entry f contains the number of the page currently loaded in frame f. This is often called a frame table as well as an inverted page table.

4.4: Page Replacement Algorithms (PRAs)

These are solutions to the replacement question.

Good solutions take advantage of locality.

Pages belonging to processes that have terminated are of course perfect choices for victims.

Pages belonging to processes that have been blocked for a long time are good choices as well.

Random PRA

A lower bound on performance. Any decent scheme should do better.

4.4.1: The optimal page replacement algorithm (opt PRA) (aka Belady's min PRA)

Replace the page whose next reference will be furthest in the future.

4.4.2: The not recently used (NRU) PRA

Divide the frames into four classes and make a random selection from the lowest nonempty class.

  1. Not referenced, not modified
  2. Not referenced, modified
  3. Referenced, not modified
  4. Referenced, modified

Assumes that in each PTE there are two extra flags R (sometimes called U, for used) and M (often called D, for dirty).

Also assumes that a page in a lower priority class is cheaper to evict.

We again have the prisoner problem: we do a good job of making little ones out of big ones, but not the reverse. We need more resets.

Every k clock ticks, reset all R bits
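
In code (an illustrative sketch of mine; the R and M bits would really live in the PTEs):

    #include <stdlib.h>

    #define NPAGES 64
    int Rbit[NPAGES], Mbit[NPAGES];

    int pick_victim(void) {
        for (int cls = 0; cls < 4; cls++) {     /* class = 2*R + M */
            int candidates[NPAGES], n = 0;
            for (int p = 0; p < NPAGES; p++)
                if (2 * Rbit[p] + Mbit[p] == cls)
                    candidates[n++] = p;
            if (n > 0)
                return candidates[rand() % n];  /* random within the class */
        }
        return -1;                              /* no pages loaded */
    }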

What if the hardware doesn't set these bits?

4.4.3: FIFO PRA

Simple but poor since usage of the page is ignored.

Belady's Anomaly: Can have more frames yet generate more faults. Example given later.

The natural implementation is to have a queue of nodes each pointing to a page.

4.4.4: Second chance PRA

Similar to the FIFO PRA, but altered so that a page recently referenced is given a second chance.

4.4.5: Clock PRA

Same algorithm as 2nd chance, but a better implementation for the nodes: Use a circular list with a single pointer serving as both head and tail.

Let us begin by assuming that the number of pages loaded is constant.

What if the number of pages is not constant?

LIFO PRA

This is terrible! Why?
Ans: All but the last frame are frozen once loaded so you can replace only one frame. This is especially bad after a phase shift in the program when it is using all new pages.

4.4.6: Least Recently Used (LRU) PRA

When a page fault occurs, choose as victim that page that has been unused for the longest time, i.e. that has been least recently used.

LRU is definitely implementable, but it is expensive: conceptually, a list of pages ordered by time of last use must be updated on every memory reference.

Page   Loaded   Last ref.   R   M
0      126      280         1   0
1      230      265         0   1
2      140      270         0   0
3      110      285         1   1
Homework: 29, 23.

Note: there is a typo in 29; the table should be as shown above.

A hardware cutsie in Tanenbaum

4.4.7: Simulating (Approximating) LRU in Software

The Not Frequently Used (NFU) PRA

The table below shows one page's R bit over nine successive clock ticks, together with its 8-bit counter after each tick (this is the counter of the aging algorithm discussed next):

R   counter
1   10000000
0   01000000
1   10100000
1   11010000
0   01101000
0   00110100
1   10011010
1   11001101
0   01100110

The Aging PRA

NFU doesn't distinguish between old references and recent ones. The following modification does distinguish.
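
The modification (the aging algorithm, as in the table above) shifts each page's counter right one bit every clock tick and inserts the R bit at the left, so recent references carry the most weight. A sketch (mine):

    #include <stdint.h>

    #define NPAGES 64
    uint8_t counter[NPAGES];
    int R[NPAGES];                  /* reference bits, set by the hardware */

    void clock_tick(void) {
        for (int p = 0; p < NPAGES; p++) {
            counter[p] = (uint8_t)((counter[p] >> 1) | (R[p] << 7));
            R[p] = 0;
        }
    }

    int pick_victim(void) {         /* evict the page with the smallest counter */
        int victim = 0;
        for (int p = 1; p < NPAGES; p++)
            if (counter[p] < counter[victim]) victim = p;
        return victim;
    }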

Homework: 25, 34

4.4.8: The Working Set Page Replacement Problem (Peter Denning)

The working set policy

The goal is to specify which pages a given process needs to have memory resident in order for the process to run without too many page faults.

The idea of the working set policy is to ensure that each process keeps its working set in memory.

Homework: Describe a process (i.e., a program) that runs for a long time (say hours) and always has w<10. Assume ω=100,000 and that the page size is 4KB. The program need not be practical or useful.

Homework: Describe a process that runs for a long time and (except for the very beginning of execution) always has w>1000. Assume ω=100,000 and that the page size is 4KB. The program need not be practical or useful.

The definition of Working Set is local to a process. That is, each process has a working set; there is no system wide working set other than the union of all the working sets of each process.

However, the working set of a single process has effects on the demand paging behavior and victim selection of other processes. If a process's working set is growing in size, i.e. w(t,ω) is increasing as t increases, then we need to obtain new frames from other processes. A process with a working set decreasing in size is a source of free frames. We will see below that this is an interesting amalgam of local and global replacement policies.

Interesting questions concerning the working set include:

... Various approximations to the working set have been devised. We will study two: using virtual time instead of memory references (immediately below) and Page Fault Frequency (section 4.6). In 4.4.9 we will see the popular WSClock algorithm that includes an approximation of the working set as well as several other ideas.

Using virtual time

4.4.9: The WSClock Page Replacement Algorithm

This treatment is based on one by Prof. Ernie Davis.

Tanenbaum suggests that the WSClock Page Replacement Algorithm is a natural outgrowth of the idea of a working set. However, reality is less clear cut. WSClock actually embodies several ideas, one of which is connected to the idea of a working set. As the name suggests, another of the ideas is the clock implementation of 2nd chance.

The actual implemented algorithm is somewhat complicated and not a clean elegant concept. It is important because

  1. It works well and is in common use.
  2. The embodied ideas are themselves interesting.
  3. Inelegant amalgamations of ideas are more commonly used in real systems than clean, elegant, one-idea algorithms.

Since the algorithm is complicated we present it in stages (a simplified C sketch follows the stages). As stated above this is an important algorithm since it works well and is used in practice. However, I certainly do not assume you remember all the details.

  1. We start by associating a node with every page loaded in memory (i.e., with every frame given to this process). In the node are stored R and M bits that we assume are set by the hardware. (Of course we don't design the hardware so really the R and M bits are set in a hardware defined table and the nodes reference the entries in that table.) Every k clock ticks the R bit is reset. So far this looks like NRU.

    To ease the explanation we will assume k=1, i.e., actions are done each clock tick.

  2. We now introduce an LRU aspect (with the virtual time approximation described above for working set): At each clock tick we examine all the nodes for the running process and store the current virtual time in all nodes for which R is 1.

    Thus, the time field is an approximation to the time of the most recent reference, accurate to the clock period. Note that this is done every clock tick (really every k ticks) and not every memory reference. That is why it is feasible.

    If we chose as victim the page with the smallest time field, we would be implementing a virtual time approximation to LRU. But in fact we do more.

  3. We now introduce some working set aspects into the algorithm by first defining a time constant τ (analogous to ω in the working set algorithm) and consider all pages older than τ (i.e., their stored time is earlier than the current time minus τ) as candidate victims. The idea is that these pages are not in the working set.

    The OS designer needs to tune τ just as one would need to tune ω and, like ω, τ is quite robust (the same value works well for a variety of job mixes).

    The advantage of introducing τ is that a victim search can stop (and the I/O begin) as soon as a page older than τ is found.

    If no pages have a reference time older than τ, then the page with the earliest time is the victim.

  4. Next we introduce the other aspect of NRU, preferring clean to dirty victims. We search until we find a clean page older than τ, if there is one; if not, we use a dirty page older than τ. As before, if there are no pages older than τ at all, we evict the page with the earliest time.

  5. Now we introduce an optimization similar to prefetching (i.e., speculatively fetching some data before it is known to be needed). Specifically, when we encounter a dirty page older than τ (while looking for a clean old page), we write the dirty page back to disk (and clear the M bit, which Tanenbaum forgot to mention) without evicting the page, on the presumption that, since the page is not in (our approximation to) the working set, this I/O will be needed eventually. The down side is that the page could become dirty again, rendering our speculative I/O redundant.

    Suppose we've decided to write out old dirty pages D1 through Dd and to replace old clean page C with new page N.

    We must block the current process P until N is completely read in, but P can run while D1 through Dd are being written. Hence we would desire the I/O read to be done before the writes, but we shall see later, when we study I/O, that there are other considerations for choosing the order to perform I/O operations.

    Similarly, suppose we can not find an old clean page and have decided to replace old dirty page D0 with new page N, and have detected additional old dirty pages D1 through Dd (recall that we were searching for an old clean page). Then P must block until D0 has been written and N has been read, but can run while D1 through Dd are being written.

  6. We throttle the previous optimization to prevent overloading the I/O subsystem. Specifically we set a limit on the number of dirty pages the previous optimization can request be written.

  7. Finally, as in the clock algorithm, we keep the data structure (nodes associated with pages) organized as a circular list with a single pointer (the hand of the clock). Hence we start each victim search where the previous one left off.
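
Putting the stages together, here is a simplified sketch (mine): a single pass of the hand, ignoring the write throttle of step 6 and pretending scheduled write-backs complete instantly.

    typedef struct Node {
        int R, M;                   /* reference and modified bits (hardware set) */
        long time_of_use;           /* virtual time of last observed reference */
        struct Node *next;          /* circular list: the clock */
    } Node;

    extern long current_virtual_time;
    extern long tau;                /* pages older than tau are outside the WS */
    extern void schedule_write_back(Node *n);

    Node *wsclock_victim(Node *hand) {
        Node *oldest = hand, *n = hand;
        do {
            if (n->R) {                         /* recently referenced: in the */
                n->R = 0;                       /* working set, so skip it */
                n->time_of_use = current_virtual_time;
            } else if (current_virtual_time - n->time_of_use > tau) {
                if (!n->M) return n;            /* old and clean: the victim */
                schedule_write_back(n);         /* old and dirty: start the */
                n->M = 0;                       /* write, keep searching */
            }
            if (n->time_of_use < oldest->time_of_use) oldest = n;
            n = n->next;
        } while (n != hand);
        return oldest;              /* nothing old and clean: evict the oldest */
    }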

4.4.10: Summary of Page Replacement Algorithms

Algorithm       Comment
Random          Poor, used for comparison
Optimal         Unimplementable, used for comparison
LIFO            Horrible, useless
NRU             Crude
FIFO            Not good; ignores frequency of use
Second Chance   Improvement over FIFO
Clock           Better implementation of Second Chance
LRU             Great but impractical
NFU             Crude LRU approximation
Aging           Better LRU approximation
Working Set     Good, but expensive
WSClock         Good approximation to working set

================ Start Lecture #10 ================

4.5: Modeling Paging Algorithms

4.5.1: Belady's anomaly

Consider a system that has no pages loaded and that uses the FIFO PRA.
Consider the following “reference string” (sequence of pages referenced).

    0 1 2 3 0 1 4 0 1 2 3 4
  

If we have 3 frames this generates 9 page faults (do it).

If we have 4 frames this generates 10 page faults (do it).
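
A short simulation (mine) confirms the counts:

    #include <stdio.h>

    /* FIFO paging: return the number of faults for a given number of frames. */
    int count_faults(const int *refs, int n, int nframes) {
        int frames[16], used = 0, next = 0, faults = 0;
        for (int i = 0; i < n; i++) {
            int hit = 0;
            for (int j = 0; j < used; j++)
                if (frames[j] == refs[i]) hit = 1;
            if (!hit) {
                faults++;
                if (used < nframes) frames[used++] = refs[i];
                else { frames[next] = refs[i]; next = (next + 1) % nframes; }
            }
        }
        return faults;
    }

    int main(void) {
        int refs[] = {0,1,2,3,0,1,4,0,1,2,3,4};
        printf("3 frames: %d faults\n", count_faults(refs, 12, 3));  /* 9 */
        printf("4 frames: %d faults\n", count_faults(refs, 12, 4));  /* 10 */
        return 0;
    }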

Theory has been developed and certain PRAs (so-called “stack algorithms”) cannot suffer this anomaly for any reference string. FIFO is clearly not a stack algorithm. LRU is. Tanenbaum has a few details, but we are skipping them.

Repeat the above calculations for LRU.

4.6: Design issues for (demand) Paging Systems

4.6.1: Local vs Global Allocation Policies

A local PRA is one in which a victim page is chosen from among the pages of the same process that requires a new page. That is, the number of frames for each process is fixed. So LRU for a local policy means the page least recently used by this process. A global policy is one in which the choice of victim is made among all pages of all processes.

If we apply global LRU indiscriminately with some sort of RR processor scheduling policy, and memory is somewhat over-committed, then by the time we get around to a process, all the others have run and have probably paged out this process.

If this happens each process will need to page fault at a high rate; this is called thrashing.

It is therefore important to get a good idea of how many pages a process needs, so that we can balance the local and global desires. The working set size w(t,ω) is good for this.

An approximation to the working set policy that is useful for determining how many frames a process needs (but not which pages) is the Page Fault Frequency (PFF) algorithm.

As mentioned above, a question arises: what should we do if the sum of the working set sizes exceeds the amount of physical memory available? This question is similar to the final point about PFF and brings us to consider controlling the load (or memory pressure).

4.6.2: Load Control

To reduce the overall memory pressure, we must reduce the multiprogramming level (or install more memory while the system is running, which is hardly practical). That is, we have a connection between memory management and process management. These are the suspend/resume arcs we saw way back when.

When the PFF (or another indicator) is too high, we choose a process and suspend it. When the frequency gets low, we can resume one or more suspended processes. We also need a policy to decide when a suspended process should be resumed even at the cost of suspending another.

This is called medium-term scheduling. Since suspending or resuming a process can take seconds, we clearly do not perform this scheduling decision every few milliseconds as we do for short-term scheduling. A time scale of minutes would be more appropriate.

4.6.3: Page size

Homework: Consider a 32-bit address machine using paging with 8KB pages and 4 byte PTEs. How many bits are used for the offset and what is the size of the largest page table? Repeat the question for 128KB pages.

4.6.4: Separate Instruction and Data (I and D) Spaces

Skipped.

4.6.5: Shared pages

Permit several processes to each have a page loaded in the same frame. Of course this can only be done if the processes are using the same program and/or data.

Homework: 33

4.6.6: Cleaning Policy (Paging Daemons)

Done earlier

4.6.7: Virtual Memory Interface

Skipped.

4.7: Implementation Issues

4.7.1: Operating System Involvement with Paging

  1. Process creation. OS must guess at the size of the process and then allocate a page table and a region on disk to hold the pages that are not memory resident. A few pages of the process must be loaded.
  2. Ready→Running transition by the scheduler. Real memory must be allocated for the page table if the table has been swapped out (which is permitted when the process is not running). Some hardware register(s) must be set to point to the page table. (There can be many page tables resident, but the hardware must be told the location of the page table for the running process--the "active" page table.)
  3. Page fault. Lots of work. See 4.7.2 just below.
  4. Process termination. Free the page table and the disk region for swapped out pages.

4.7.2: Page Fault Handling

What happens when a process, say process A, gets a page fault?
  1. The hardware detects the fault and traps to the kernel (switches to supervisor mode and saves state).

  2. Some assembly language code saves more state, establishes the C-language (or other programming language) environment, and “calls” the OS.

  3. The OS determines that a page fault occurred and which page was referenced.

  4. If the virtual address is invalid, process A is killed. If the virtual address is valid, the OS must find a free frame. If there are no free frames, the OS selects a victim frame. Call the process owning the victim frame process B. (If the page replacement algorithm is local, the victim is process A.)

  5. The PTE of the victim page is updated to show that the page is no longer resident.

  6. If the victim page is dirty, the OS schedules an I/O write to copy the frame to disk and blocks A waiting for this I/O to occur.

  7. Assuming process A needed to be blocked (i.e., the victim page is dirty) the scheduler is invoked to perform a context switch.
  8. Now the O/S has a free frame (this may be much later in wall clock time if a victim frame had to be written). The O/S schedules an I/O to read the desired page into this free frame. Process A is blocked (perhaps for the second time) and hence the process scheduler is invoked to perform a context switch.

  9. Again, another process is selected by the scheduler as above and eventually a Disk interrupt occurs when the I/O completes (trap / asm / OS determines I/O done). The PTE in process A is updated to indicate that the page is in memory.

  10. The O/S may need to fix up process A (e.g. reset the program counter to re-execute the instruction that caused the page fault).

  11. Process A is placed on the ready list and eventually is chosen by the scheduler to run. Recall that process A is executing O/S code.

  12. The OS returns to the first assembly language routine.

  13. The assembly language routine restores registers, etc. and “returns” to user mode.

The user's program running as process A is unaware that all this happened (except for the time delay).

4.7.3: Instruction Backup

A cute horror story. The 68000 was so bad in this regard that early demand paging systems for the 68000 used two processors, one running one instruction behind. If the first got a page fault, there wasn't always enough information to figure out what to do, so the system switched to the second processor after the fault. Don't worry about instruction backup; it is very machine dependent and modern implementations tend to get it right. The next generation machine, the 68010, provided extra information on the stack so the horrible 2-processor kludge was no longer necessary.

4.7.4: Locking (Pinning) Pages in Memory

We discussed pinning jobs already. The same (mostly I/O) considerations apply to pages.

4.7.5: Backing Store

The issue is where on disk do we put pages.

Homework: Assume every instruction takes 0.1 microseconds to execute providing it is memory resident. Assume a page fault takes 10 milliseconds to service providing the necessary disk block is actually on the disk. Assume a disk block fault takes 10 seconds service. So the worst case time for an instruction is 10.0100001 seconds. Finally assume the program requires that a billion instructions be executed.

  1. If the program is always completely resident, how long does it take to execute?
  2. If 0.1% of the instructions cause a page fault, but all the disk blocks are on the disk, how long does the program take to execute and what percentage of the time is the program waiting for a page fault to complete?
  3. If 0.1% of the instructions cause a page fault and 0.1% of the page faults cause a disk block fault, how long does the program take to execute, what percentage of the time is the program waiting for a disk block fault to complete?

4.7.6: Separation of Policy and Mechanism

Skipped.

4.8: Segmentation

Up to now, the virtual address space has been contiguous.

Homework: 37.

** Two Segments

Late PDP-10s and TOPS-10

** Three Segments

Traditional (early) Unix shown at right.

** Four Segments

Just kidding.

** General (not necessarily demand) Segmentation

** Demand Segmentation

Same idea as demand paging, but applied to segments.

The following table, mostly from Tanenbaum, compares demand paging with demand segmentation.

Consideration                 Demand Paging        Demand Segmentation
---------------------------------------------------------------------
Programmer aware              No                   Yes
How many addr spaces          1                    Many
VA size > PA size             Yes                  Yes
Protect individual
  procedures separately       No                   Yes
Accommodate elements
  with changing sizes         No                   Yes
Ease user sharing             No                   Yes
Why invented                  let the VA size      sharing, protection,
                              exceed the PA size   independent addr spaces
Internal fragmentation        Yes                  No, in principle
External fragmentation        No                   Yes
Placement question            No                   Yes
Replacement question          Yes                  Yes

** 4.8.2 and 4.8.3: Segmentation With (demand) Paging

(Tanenbaum gives two sections to explain the differences between Multics and the Intel Pentium. These notes cover what is common to all segmentation+paging systems).

Combines both segmentation and demand paging to get advantages of both at a cost in complexity. This is very common now.

Although it is possible to combine segmentation with non-demand paging, I do not know of any system that did this.

Homework: 38.

Homework: Consider a 32-bit address machine using paging with 8KB pages and 4 byte PTEs. How many bits are used for the offset and what is the size of the largest page table? Repeat the question for 128KB pages. So far this question has been asked before. Repeat both parts assuming the system also has segmentation with at most 128 segments.

4.9: Research on Memory Management

Skipped

4.10: Summary

Read

Some Last Words on Memory Management

Remark: I had second thoughts about an answer I gave at the end of last lecture and checked with the experts. Some modern hardware (especially x86-64/amd64) does not support segmentation, but does support multiple levels of page tables.

Many of the segmentation advantages/features are now supported by (in my judgment less clean) techniques using multiple levels of page tables.

Chapter 5: Input/Output

5.1: Principles of I/O Hardware

5.1.1: I/O Devices

5.1.2: Device Controllers

These are the “devices” as far as the OS is concerned. That is, the OS code is written with the controller spec in hand not with the device spec.

5.1.3: Memory-Mapped I/O

Think of a disk controller and a read request. The goal is to copy data from the disk to some portion of the central memory. How do we do this?

5.1.4: Direct Memory Access (DMA)

We now address the second question, moving data between the controller and the main memory.

Homework: 12

5.1.5: Interrupts Revisited

Skipped.

5.2: Principles of I/O Software

As with any large software system, good design and layering is important.

5.2.1: Goals of the I/O Software

Device independence

We want to have most of the OS unaware of the characteristics of the specific devices attached to the system. (This principle of device independence is not limited to I/O; we also want the OS to be largely unaware of the CPU type itself.)

This works quite well for files stored on various devices. Most of the OS, including the file system code, and most applications can read or write a file without knowing if the file is stored on a floppy disk, a hard disk, a tape, or (for reading) a CD-ROM.

This principle also applies to user programs reading or writing streams. A program reading from “standard input”, which is normally the user's keyboard, can be told to instead read from a disk file with no change to the application program. Similarly, “standard output” can be redirected to a disk file. However, the low-level OS code dealing with disks is rather different from that dealing with keyboards and (character-oriented) terminals.

One can say that device independence permits programs to be implemented as if they will read and write generic devices, with the actual devices specified at run time. Although writing to a disk has differences from writing to a terminal, Unix cp, DOS copy, and many programs we compose need not be aware of these differences.

However, there are devices that really are special. The graphics interface to a monitor (that is, the graphics interface presented by the video controller--often called a “video card”) does not resemble the “stream of bytes” we see for disk files.

Homework: 9

Uniform naming

Recall that we discussed the value of the name space implemented by file systems. There is no dependence between the name of the file and the device on which it is stored. So a file called IAmStoredOnAHardDisk might well be stored on a floppy disk.

Error handling

There are several aspects to error handling including: detection, correction (if possible) and reporting.

  1. Detection should be done as close to where the error occurred as possible before more damage is done (fault containment). This is not trivial.

  2. Correction is sometimes easy, for example ECC memory does this automatically (but the OS wants to know about the error so that it can schedule replacement of the faulty chips before unrecoverable double errors occur).

    Other easy cases include successful retries for failed ethernet transmissions. In this example, while logging is appropriate, it is quite reasonable for no action to be taken.


  3. Error reporting tends to be awful. The trouble is that the error occurs at a low level but by the time it is reported the context is lost. Unix/Linux in particular is horrible in this area.

Creating the illusion of synchronous I/O

Buffering

Sharable vs dedicated devices

For devices like printers and tape drives, only one user at a time is permitted. These are called serially reusable devices, and were studied in the deadlocks chapter. Devices like disks and Ethernet ports can be shared by processes running concurrently.

5.2.2: Programmed I/O

5.2.3: Interrupt-Driven (Programmed) I/O

5.2.4: I/O Using DMA

5.3: I/O Software Layers

Layers of abstraction as usual prove to be effective. Most systems are believed to use the following layers (but for many systems, the OS code is not available for inspection).

  1. User-level I/O routines.

  2. Device-independent (kernel-level) I/O software.

  3. Device drivers.

  4. Interrupt handlers.

We will give a bottom up explanation.

5.3.1: Interrupt Handlers

We discussed an interrupt handler before when studying page faults. Then it was called “assembly language code”.

In the present case, we have a process blocked on I/O and the I/O event has just completed. So the goal is to make the process ready. Possible methods are.

Once the process is ready, it is up to the scheduler to decide when it should run.

5.3.2: Device Drivers

The portion of the OS that is tailored to the characteristics of the controller.

The driver has two “parts” corresponding to its two access points. Recall the figure on the right, which we saw at the beginning of the course.

  1. Accessed by the main line OS via the envelope in response to an I/O system call. The portion of the driver accessed in this way is sometimes called the “top” part.
  2. Accessed by the interrupt handler when the I/O completes (this completion is signaled by an interrupt). The portion of the driver accessed in this way is sometimes called the “bottom” part.

Tanenbaum describes the actions of the driver assuming it is implemented as a process (which he recommends). I give both that viewpoint and the self-service paradigm in which the driver is invoked by the OS acting on behalf of a user process (more precisely, the process shifts into kernel mode). A code sketch of the self-service version follows the walkthrough.

Driver in a self-service paradigm

  1. The user (A) issues an I/O system call.

  2. The main line, machine independent, OS prepares a generic request for the driver and calls (the top part of) the driver.
    1. If the driver was idle (i.e., the controller was idle), the driver writes device registers on the controller ending with a command for the controller to begin the actual I/O.
    2. If the controller was busy (doing work the driver gave it previously), the driver simply queues the current request (the driver dequeues this request below).

  3. The driver jumps to the scheduler indicating that the current process should be blocked.

  4. The scheduler blocks A and runs (say) B.

  5. B starts running.

  6. An interrupt arrives (i.e., an I/O has been completed) and the handler is invoked.

  7. The interrupt handler invokes (the bottom part of) the driver.
    1. The driver informs the main line perhaps passing data and surely passing status (error, OK).
    2. The top part is called to start another I/O if the queue is nonempty. We know the controller is free. Why?
      Answer: We just received an interrupt saying so.

  8. The driver jumps to the scheduler indicating that process A should be made ready.

  9. The scheduler picks a ready process to run. Assume it picks A.

  10. A resumes in the driver, which returns to the main line, which returns to the user code.
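
Here is the sketch promised above: a skeleton, in C, of the two halves of a self-service driver. Every name is hypothetical (start_io, make_ready, etc. stand for services the rest of this imaginary kernel would provide); it illustrates the walkthrough, not any real driver.

    #include <stddef.h>

    struct process;                       /* process blocked on the request */

    struct request {
        struct request *next;
        struct process *owner;            /* whom to wake on completion */
        /* block number, buffer address, read/write flag, ... */
    };

    /* Provided elsewhere in this hypothetical kernel. */
    extern void start_io(struct request *r);   /* write controller registers */
    extern void pass_status_to_main_line(struct request *r, int status);
    extern void block_current_process(void);   /* jump to the scheduler */
    extern void make_ready(struct process *p);

    static struct request *active;        /* what the controller is doing */
    static struct request *qhead, *qtail; /* requests waiting their turn */

    void driver_top(struct request *r)    /* step 2 of the walkthrough */
    {
        r->next = NULL;
        if (active == NULL) {             /* controller idle: start now */
            active = r;
            start_io(r);
        } else {                          /* controller busy: just queue */
            if (qtail) qtail->next = r; else qhead = r;
            qtail = r;
        }
        block_current_process();          /* step 3: the caller sleeps */
    }

    void driver_bottom(int status)        /* step 7: called on interrupt */
    {
        pass_status_to_main_line(active, status);
        make_ready(active->owner);        /* step 8: unblock the process */
        active = qhead;                   /* controller free: dequeue next */
        if (active) {
            qhead = active->next;
            if (qhead == NULL) qtail = NULL;
            start_io(active);             /* start the next queued I/O */
        }
    }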

Driver as a process (Tanenbaum) (less detailed than above)

5.3.3: Device-Independent I/O Software

The device-independent code does most of the functionality, but not necessarily most of the code since there can be many drivers, all doing essentially the same thing in slightly different ways due to slightly different controllers.

5.3.4: User-Space Software

A good deal of I/O code is actually executed by unprivileged code running in user space. Some of this code consists of library routines linked into user programs, some is in standard utilities, and some is in daemon processes.

Homework: 10, 13.

5.4: Disks

The ideal storage device is

  1. Fast
  2. Big (in capacity)
  3. Cheap
  4. Impossible

When compared to central memory, disks are big and cheap, but slow.

5.4.1: Disk Hardware

Show a real disk opened up and illustrate the components.

Consider the following characteristics of a disk.

Overlapping I/O operations is important. Many controllers can do overlapped seeks, i.e. issue a seek to one disk while another is already seeking.

As technology advances, the space taken to store a bit decreases, i.e., the bit density increases. This changes the number of cylinders per inch of radius (the cylinders are closer together) and the number of bits per inch along a given track.

(Unofficial) Modern disks cheat and have more sectors on outer cylinders than on inner ones. For this course, however, we assume the number of sectors/track is constant. Thus for us there are fewer bits per inch on outer tracks and the transfer rate is the same for all cylinders. Modern disks have electronics and software (firmware) that hide the cheat and give the illusion of the same number of sectors on all tracks.

(Unofficial) Despite what Tanenbaum says later, it is not true that when one head is reading from cylinder C, all the heads can read from cylinder C with no penalty. It is, however, true that the penalty is very small.

Choice of block size

Homework: Consider a disk with an average seek time of 10ms, an average rotational latency of 5ms, and a transfer rate of 10MB/sec.

  1. If the block size is 1KB, how long would it take to read a block?
  2. If the block size is 100KB, how long would it take to read a block?
  3. If the goal is to read 1KB, the 1KB block size is better since, with 100KB blocks, the other 99KB transferred are wasted. If the goal is to read 100KB, the 100KB block size is better since the 1KB block size needs 100 seeks and 100 rotational latencies. What is the minimum size request for which a disk with a 100KB block size would complete faster than one with a 1KB block size?
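
For intuition, the timing model these questions assume is simply seek + rotational latency + transfer, as in this sketch (the constants are the ones quoted in the problem):

    /* Milliseconds to read one block of the given size. */
    double read_time_ms(double block_bytes)
    {
        const double seek_ms = 10.0;
        const double latency_ms = 5.0;
        const double bytes_per_ms = 10e6 / 1000.0;  /* 10MB/sec */

        return seek_ms + latency_ms + block_bytes / bytes_per_ms;
    }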

RAID (Redundant Array of Inexpensive Disks)

================ Start Lecture #13 ================

Remark: A practice final is available off the web page. Note the format and good luck.

5.4.2: Disk Formatting

Skipped.

5.4.3: Disk Arm Scheduling Algorithms

There are three components to disk response time: seek, rotational latency, and transfer time. Disk arm scheduling is concerned with minimizing seek time by reordering the requests.

These algorithms are relevant only if there are several I/O requests pending. For many PCs this is not the case. For most commercial applications, I/O is crucial and there are often many requests pending.

The algorithm might actually be implemented in the electronics on the disk itself. The disks I brought in were somewhat old so I suspect those didn't do this (but the OS definitely did).

  1. FCFS (First Come First Served): Simple but has long delays.

  2. Pick: Same as FCFS but pick up requests for cylinders that are passed on the way to the next FCFS request.

  3. SSTF or SSF (Shortest Seek (Time) First): Greedy algorithm. Can starve requests for outer cylinders and almost always favors middle requests.

  4. Scan (Look, Elevator): The method used by an old-fashioned jukebox (remember “Happy Days”) and by elevators. The disk arm proceeds in one direction picking up all requests until there are no more requests in this direction, at which point it goes back the other direction. This favors requests in the middle, but can't starve any requests (a sketch appears after this list).

  5. C-Scan (C-look, Circular Scan/Look): Similar to Scan but services requests only when moving in one direction. When going in the other direction, it goes directly to the furthest-away request. This doesn't favor any spot on the disk. Indeed, it treats the cylinders as though they were a clock, i.e., after the highest numbered cylinder comes cylinder 0.

  6. N-step Scan: This is what the natural implementation of Scan gives.
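
Here is the Scan sketch promised in item 4: a small, self-contained version that sorts the pending requests and sweeps up, then down. The request and head values are made up for illustration; a real driver would of course accept new requests while sweeping.

    #include <stdio.h>
    #include <stdlib.h>

    static int cmp(const void *a, const void *b)
    {
        return *(const int *)a - *(const int *)b;
    }

    /* Service all pending requests, sweeping away from head first. */
    static void scan(int req[], int n, int head)
    {
        int i, j;

        qsort(req, n, sizeof req[0], cmp);
        for (i = 0; i < n && req[i] < head; i++)
            ;                                  /* first request >= head */
        for (j = i; j < n; j++)                /* sweep upward */
            printf("service cylinder %d\n", req[j]);
        for (j = i - 1; j >= 0; j--)           /* reverse: sweep downward */
            printf("service cylinder %d\n", req[j]);
    }

    int main(void)
    {
        int req[] = { 98, 183, 37, 122, 14, 124, 65, 67 };

        scan(req, sizeof req / sizeof req[0], 53);
        return 0;
    }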

Minimizing Rotational Latency

Use Scan based on sector numbers, not cylinder numbers. For rotational latency, Scan is the same as C-Scan. Why?
Ans: Because the disk only rotates in one direction.

Homework: 24, 25

5.4.4: Error Handling

Disk error rates have dropped in recent years. Moreover, bad block forwarding is normally done by the controller (or disk electronics) so this topic is no longer as important for the OS.

5.5: Clocks

Also called timers.

5.5.1: Clock Hardware

5.5.2: Clock Software

  1. Time of day (TOD): Bump a counter each tick (clock interrupt). If the counter is only 32 bits, we must worry about overflow, so keep two counters: low order and high order (see the sketch after this list).

  2. Time quantum for RR: Decrement a counter at each tick. The quantum expires when the counter reaches zero. Load this counter when the scheduler runs a process (i.e., changes the state of the process from ready to running). This is what I (and I would guess you) did for the (processor) scheduling lab.

  3. Accounting: At each tick, bump a counter in the process table entry for the currently running process.

  4. Alarm system call and system alarms:
  5. Profiling
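
Items 1–3 fit in a few lines; here is the sketch mentioned above, with all names hypothetical:

    struct proc { unsigned long ticks_charged; /* ... */ };

    static unsigned long tod_low, tod_high; /* two words: one can overflow  */
    static int quantum;                     /* reloaded when a process runs */
    static int need_resched;                /* checked on interrupt return  */
    static struct proc *current;            /* the running process          */

    void clock_tick(void)                   /* the clock interrupt handler  */
    {
        if (++tod_low == 0)                 /* 1. TOD: carry to high word   */
            tod_high++;
        if (--quantum <= 0)                 /* 2. RR: quantum expired       */
            need_resched = 1;
        current->ticks_charged++;           /* 3. accounting                */
    }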

Homework: 27

5.6: Character-Oriented Terminals

5.6.1: RS-232 Terminal Hardware

Quite dated. It is true that modern systems can communicate with a hardwired ascii terminal, but most don't. Serial ports are used, but they are normally connected to modems, and then some protocol (SLIP, PPP) is used, not just a stream of ascii characters. So skip this section.

Memory-Mapped Terminals

Not as dated as the previous section, but it still discusses the character interface, not the graphics interface.

Keyboards

Tanenbaum's description of keyboards is correct.

5.6.2: Input Software

5.6.3: Output Software

Again too dated and the truth is too complicated to deal with in a few minutes.

5.7: Graphical User Interfaces (GUIs)

Skipped.

5.8: Network Terminals

Skipped.

5.9: Power Management

Skipped.

5.10: Research on Input/Output

Skipped.

5.11: Summary

Read.

Chapter 6: File Systems

Requirements

  1. Size: Store very large amounts of data.
  2. Persistence: Data survives the creating process.
  3. Access: Multiple processes can access the data concurrently.

Solution: Store data in files that together form a file system.

6.1: Files

6.1.1: File Naming

Very important. A major function of the file system.

6.1.2: File structure

A file is a

  1. Byte stream
  2. (fixed size) Record stream: Out of date
  3. Varied and complicated beast.

6.1.3: File types

Examples

  1. (Regular) files.

  2. Directories: studied below.

  3. Special files (for devices). Uses the naming power of files to unify many actions.
        dir             # prints on screen
        dir > file      # result put in a file
        dir > /dev/tape # results written to tape
        
  4. “Symbolic” Links (similar to “shortcuts”): Also studied below.

“Magic number”: Identifies the command interpreter for an executable file.

Strongly typed files:

================ Start Lecture #14 ================

6.1.4: File access

There are basically two possibilities, sequential access and random access (a.k.a. direct access). Previously, files were declared to be sequential or random. Modern systems do not do this. Instead all files are random and optimizations are applied when the system dynamically determines that a file is (probably) being accessed sequentially.

  1. With Sequential access the bytes (or records) are accessed in order (i.e., n-1, n, n+1, ...). Sequential access is the most common and gives the highest performance. For some devices (e.g. tapes) access “must” be sequential.
  2. With random access, the bytes are accessed in any order. Thus each access must specify which bytes are desired.

6.1.5: File attributes

A laundry list of properties that can be specified for a file. For example:

6.1.6: File operations

Homework: 6, 7.

6.1.7: An Example Program Using File System Calls

Homework: Read and understand “copyfile”.

Notes on copyfile

6.1.8: Memory mapped files (Unofficial)

Conceptually simple and elegant. Associate a segment with each file and then normal memory operations take the place of I/O.

Thus copyfile does not have fgetc/fputc (or read/write). Instead it is just like memcpy (a fuller sketch appears at the end of this section):

while ( *(dest++) = *(src++) );

The implementation is via segmentation with demand paging but the backing store for the pages is the file itself. This all sounds great but ...

  1. How do you tell the length of a newly created file? You know which pages were written but not which words in those pages, so a file with one byte and a file with ten bytes both look like a full page.
  2. What if the same file is accessed by both I/O and memory mapping?
  3. What if the file is bigger than the size of virtual memory? (This will not be a problem for systems built 3 years from now, as all will have enormous virtual memory sizes.)

These files may be making a comeback, at least in research.
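
As promised, here is what copyfile might look like with memory-mapped files, as a sketch using POSIX mmap (error handling abbreviated; assumes a nonempty source file):

    #include <fcntl.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(int argc, char *argv[])
    {
        struct stat sb;
        int src = open(argv[1], O_RDONLY);
        int dst = open(argv[2], O_RDWR | O_CREAT | O_TRUNC, 0666);

        fstat(src, &sb);              /* need the length for the mappings  */
        ftruncate(dst, sb.st_size);   /* destination must be large enough  */

        char *from = mmap(NULL, sb.st_size, PROT_READ,
                          MAP_SHARED, src, 0);
        char *to   = mmap(NULL, sb.st_size, PROT_READ | PROT_WRITE,
                          MAP_SHARED, dst, 0);

        memcpy(to, from, sb.st_size); /* ordinary memory ops do the I/O */

        munmap(from, sb.st_size);
        munmap(to, sb.st_size);
        return 0;
    }

Note how the length must be set explicitly with ftruncate; the mapping alone cannot grow the file, which is related to problem 1 above.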

6.2: Directories

Unit of organization.

6.2.1-6.2.3: Single-level, Two-level, and Hierarchical directory systems

Possibilities

These are not as wildly different as they sound.

6.2.4: Path Names

You can specify the location of a file in the file hierarchy by using either an absolute or a relative path to the file.

Homework: 1, 9.

6.2.5: Directory operations

  1. Create: Produces an “empty” directory. Normally the directory created actually contains . and .., so is not really empty.

  2. Delete: Requires the directory to be empty (i.e., to just contain . and ..). Commands are normally written that will first empty the directory (except for . and ..) and then delete it. These commands make use of file and directory delete system calls.

  3. Opendir: Same as for files (creates a “handle”)

  4. Closedir: Same as for files

  5. Readdir: In the old days (of unix) one could read directories as files so there was no special readdir (or opendir/closedir). It was believed that the uniform treatment would make programming (or at least system understanding) easier as there was less to learn.

    However, experience has taught that this was not a good idea since the structure of directories then becomes exposed. Early unix had a simple structure (and there was only one type of structure for all implementations). Modern systems have more sophisticated structures and, more importantly, they are not fixed across implementations. So if programs just used read() to read directories, the programs would have to be changed whenever the structure of a directory changed. Now we have a readdir() system call that knows the structure of directories. Therefore, if the structure is changed, only readdir() need be changed. (A sketch of readdir() usage follows this list.)

  6. Rename: As with files.

  7. Link: Add a second name for a file; discussed below.

  8. Unlink: Remove a directory entry. This is how a file is deleted. But if there are many links and just one is unlinked, the file remains. Discussed in more detail below.
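
The readdir() sketch promised in item 5, using the POSIX interface; the loop is oblivious to how the directory is stored on disk:

    #include <dirent.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        DIR *d = opendir(argc > 1 ? argv[1] : ".");
        struct dirent *e;

        if (d == NULL) {
            perror("opendir");
            return 1;
        }
        while ((e = readdir(d)) != NULL)  /* includes the . and .. entries */
            printf("%s\n", e->d_name);
        closedir(d);
        return 0;
    }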

6.3: File System Implementation

6.3.1: File System Layout

6.3.2: Implementing Files

Contiguous allocation

Homework: 12.

Linked allocation

Consider the following two code segments that store the same data but in a different order. The first is analogous to the linked list file organization above and the second is analogous to the MS-DOS FAT file system we study next.

/* First arrangement: each node packages its data with its next pointer. */
struct node_type {
    float data;
    int   next;
} node[100];

/* Second arrangement: the data and the next pointers in separate arrays. */
float node_data[100];
int   node_next[100];

With the second arrangement the data could be stored far away from the next pointers. In FAT this idea is taken to an extreme: The data, which is large (a disk block), is stored on disk; whereas, the next pointers which are small (each is an integer) are stored in memory in a File Allocation Table or FAT.

FAT (file allocation table)
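
To make the chain-following concrete, here is a minimal sketch with hypothetical names: fat[b] holds the number of the disk block that follows block b in its file, and a sentinel marks the end.

    #define END_OF_FILE (-1)          /* sentinel in the next-block table */

    /* Return the disk block holding file block n; the whole FAT is in
       memory, so this walk costs n memory references but no disk I/O. */
    int fat_lookup(const int fat[], int first_block, int n)
    {
        int b = first_block;          /* disk block holding file block 0 */

        while (n-- > 0 && b != END_OF_FILE)
            b = fat[b];               /* follow the chain one link */
        return b;
    }

The walk is cheap, but for a big file random access still costs a long chain of dependent references, which motivates what follows.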


Why don't we mimic the idea of paging and have a table giving, for each block of the file, where on the disk that file block is stored? In other words, a ``file block table'' mapping each file block to its corresponding disk block. This is the idea of (the first part of) the unix inode solution, which we study next.

I-Nodes


Algorithm to retrieve a block

Let's say that you want to find block N
(N=0 is the "first" block) and that
  There are D direct pointers in the inode numbered 0..(D-1)
  There are K pointers in each indirect block numbered 0..K-1

If N < D            // This is a direct block in the i-node
   use direct pointer N in the i-node
else if N < D + K   // This is one of the K blocks pointed to by indirect blk
   use pointer D in the inode to get the indirect block
   use pointer N-D in the indirect block to get block N
else   // This is one of the K*K blocks obtained via the double indirect block
   use pointer D+1 in the inode to get the double indirect block
   let P = (N-(D+K)) DIV K      // Which single indirect block to use
   use pointer P to get the indirect block B
   let Q = (N-(D+K)) MOD K      // Which pointer in B to use
   use pointer Q in B to get block N
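
The same algorithm as a C sketch; D, K, and the helper that fetches a block of pointers are hypothetical, and triple indirection is omitted as in the pseudocode.

    enum { D = 10, K = 256 };     /* direct ptrs; ptrs per indirect block */

    struct inode {
        int ptr[D + 2];           /* D direct, 1 indirect, 1 dbl indirect */
    };

    extern int *read_ptr_block(int disk_block);  /* fetch K pointers */

    int block_of(const struct inode *ip, int n)
    {
        int m, *dbl, *ind;

        if (n < D)                                /* direct */
            return ip->ptr[n];
        if (n < D + K)                            /* single indirect */
            return read_ptr_block(ip->ptr[D])[n - D];
        m = n - (D + K);                          /* double indirect */
        dbl = read_ptr_block(ip->ptr[D + 1]);
        ind = read_ptr_block(dbl[m / K]);         /* which indirect block */
        return ind[m % K];                        /* which pointer in it */
    }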

6.3.3: Implementing Directories

Recall that a directory is a mapping that converts file (or subdirectory) names to the files (or subdirectories) themselves.

Trivial File System (CP/M)

MS-DOS and Windows (FAT)

Unix/linux

Homework: 27

6.3.4: Shared files (links)

Hard Links

Start with an empty file system (i.e., just the root directory) and then execute:

    cd /
    mkdir /A; mkdir /B
    touch /A/X; touch /B/Y
  

We have the situation shown on the right.


Now execute

    ln /B/Y /A/New
  
This gives the new diagram to the right.

At this point there are two equally valid names for the right hand yellow file, /B/Y and /A/New. The fact that /B/Y was created first is NOT detectable.


Assume Bob created /B and /B/Y and Alice created /A, /A/X, and /A/New. Later Bob tires of /B/Y and removes it by executing

rm /B/Y

The file /A/New is still fine (see third diagram on the right). But it is owned by Bob, who can't find it! If the system enforces quotas, Bob will likely be charged (as the owner), but he can neither find nor delete the file (since Bob cannot unlink, i.e., remove, files from /A).

Since hard links are only permitted to files (not directories) the resulting file system is a dag (directed acyclic graph). That is, there are no directed cycles. We will now proceed to give away this useful property by studying symlinks, which can point to directories.

Symlinks

Again start with an empty file system and this time execute

cd /
mkdir /A; mkdir /B
touch /A/X; touch /B/Y
ln -s /B/Y /A/New

We now have an additional file /A/New, which is a symlink to /B/Y.

The bottom line is that, with a hard link, a new name is created for the file. This new name has equal status with the original name. This can cause some surprises (e.g., you create a link but I own the file). With a symbolic link a new file is created (owned by the creator naturally) that contains the name of the original file. We often say the new file points to the original file.

Question: Consider the hard link setup above. If Bob removes /B/Y and then creates another /B/Y, what happens to /A/New?
Answer: Nothing. /A/New is still a file with the same contents as the original /B/Y.

Question: What about with a symlink?
Answer: /A/New becomes invalid and then valid again, this time pointing to the new /B/Y. (It can't point to the old /B/Y as that is completely gone.)

Note:
Shortcuts in Windows 95/98/ME contain more than symlinks in unix do. In addition to the file name of the original file, they can contain arguments to pass to the file if it is executable. So a shortcut to

    netscape.exe
  
can specify
    netscape.exe //allan.ultra.nyu.edu/~gottlieb/courses/os/class-notes.html
  

Moreover, as was pointed out by students in my 2006-07 fall class, the shortcuts are not a feature of the FAT filesystem itself, but simply the actions of the command interpreter when encountering a file named *.lnk.
End of Note

What about symlinking a directory?

cd /
mkdir /A; mkdir /B
touch /A/X; touch /B/Y
ln -s /B /A/New

Is there a file named /A/New/Y ?
Yes.

What happens if you execute cd /A/New/.. ?

What did I mean when I said the pictures made it all clear?
Answer: From the file system perspective it is clear. It is not always so clear what programs will do.

6.3.5: Disk space management

All general purpose systems use a (non-demand) paging algorithm for file storage. Files are broken into fixed size pieces, called blocks, that can be scattered over the disk. Note that although this is paging, it is never called paging.

The file is completely stored on the disk, i.e., it is not demand paging.

Actually, it is more complicated:

  1. Various optimizations are performed to try to have consecutive blocks of a single file stored consecutively on the disk. Discussed below.

  2. One can imagine systems that store only parts of the file on disk with the rest on tertiary storage (some kind of tape).

  3. This would be just like demand paging.

  4. Perhaps NASA does this with their huge datasets.

  5. Caching (as done for example in microprocessors) is also the same as demand paging.

  6. We unify these concepts in the computer architecture course.

Choice of block size

We discussed this in the last chapter.

Storing free blocks

There are basically two possibilities (a sketch of the first follows the list).

  1. An in-memory bit map.
  2. Linked list with each free block pointing to next.
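
The sketch promised above, for the bit-map alternative (sizes hypothetical): bit b is 1 exactly when disk block b is free.

    #define NBLOCKS 8192
    static unsigned char map[NBLOCKS / 8];  /* set free bits at fs creation */

    int alloc_block(void)             /* returns a free block, or -1 */
    {
        int b;

        for (b = 0; b < NBLOCKS; b++)
            if (map[b / 8] & (1 << (b % 8))) {
                map[b / 8] &= ~(1 << (b % 8));    /* mark it in use */
                return b;
            }
        return -1;                    /* disk full */
    }

    void free_block(int b)
    {
        map[b / 8] |= 1 << (b % 8);   /* mark it free again */
    }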

6.3.6: File System reliability

Bad blocks on disks

Not so much of a problem now. Disks are more reliable and, more importantly, disks take care of the bad blocks themselves. That is, there is no OS support needed to map out bad blocks. But if a block goes bad, the data is lost (though not always).

Backups

All modern systems support full and incremental dumps.

Consistency

6.3.7 File System Performance

Buffer cache or block cache

An in-memory cache of disk blocks.
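
A minimal sketch of such a cache, with hypothetical names: hash the block number to a bucket, search the chain, and on a miss read the block from disk and insert it. A real cache adds eviction, dirty bits, and locking.

    #include <stdlib.h>

    enum { NBUCKET = 64, BLKSIZE = 4096 };

    struct buf { int blockno; struct buf *next; char data[BLKSIZE]; };
    static struct buf *bucket[NBUCKET];

    extern void disk_read(int blockno, char *data);  /* driver call */

    struct buf *bread(int blockno)    /* error handling omitted */
    {
        struct buf **h = &bucket[blockno % NBUCKET];
        struct buf *b;

        for (b = *h; b != NULL; b = b->next)
            if (b->blockno == blockno)
                return b;             /* hit: no disk I/O at all */

        b = malloc(sizeof *b);        /* miss: fetch and insert */
        b->blockno = blockno;
        disk_read(blockno, b->data);
        b->next = *h;
        *h = b;
        return b;
    }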

Homework: 29.

Block Read Ahead

When the access pattern “looks” sequential, read ahead is employed. This means that after completing a read() request for block n of a file, the system guesses that a read() request for block n+1 will shortly be issued, so it automatically fetches block n+1.

Reducing Disk Arm Motion

Try to place near each other blocks that are going to be read in succession.

  1. If the system uses a bitmap for the free list, it can allocate a new block for a file close to the previous block (guessing that the file will be accessed sequentially).

  2. The system can perform allocations in “super-blocks”, consisting of several contiguous blocks.
  3. For a unix-like file system, the i-nodes can be placed in the middle of the disk, instead of at one end, to reduce the seek time to access an i-node followed by a block of the file.

  4. The system can divide the disk into cylinder groups, each of which is a consecutive group of cylinders.

6.3.8: Log-Structured File Systems (unofficial)

A file system that tries to make all writes sequential. That is, writes are treated as if going to a log file. The original research project worked with a unix-like file system, i.e. was i-node based.

6.4: Example File Systems

6.4.1: CD-ROM File Systems (skipped)

6.4.2: The CP/M File System

This was done above.

6.4.3: The MS-DOS File System

This was done above.

6.4.4: The Windows 98 File System

Two changes were made: Long file names were supported and the allocation table was switched from FAT-16 to FAT-32.

  1. The only hard part was to keep compatibility with the old 8.3 naming rule. This is called “backwards compatibility”. A file has two names, a long one and an 8.3. If the long name fits the 8.3 format, only one name is kept. If the long name does not fit 8.3, an 8.3 version is produced via an algorithm that works, but the names produced are not lovely.

  2. FAT-32 used 32-bit words for the block numbers so the FAT table could be huge. Windows 98 kept only a portion of the FAT-32 table in memory at a time. (I do not know the replacement policy, number of blocks kept in memory, etc.)

6.4.5: The Unix V7 File System

This was done above.

6.5: Research on File Systems (skipped)

6.6 Summary (read)

The End: Good luck on the final