NOTE: These notes are adapted from those of Allan Gottlieb, and are reproduced here with his permission.


================ Start Lecture #2 (Jan. 27)

A Simple Example of a Multiprogramming OS

(Used in Homework 1 and Projects 1 and 2)

Memory model: Variable-length partitions

This is, at least conceptually, one of the simplest memory models. It was used in the CDC 6600. (Tanenbaum, pp. 26-27; p. 196)

Each process occupies a consecutive chunk of RAM. This chunk is assigned when the process is created, and does not change. The length of the chunk required is specified by the object file for the process.

An inherent drawback to this scheme is external fragmentation: the free space in memory gets divided into small unusable blocks. For example, in the above figure, after processes C and E exit, there are three chunks of free space, of sizes 20M, 16M, and 16M. If process G of size 30M enters, there is no place to put it, even though there is considerably more than 30M free in memory.

Address translation

There are two registers that deal with address translation. The base register holds the starting address of the active process's partition. The limit register holds the size of that partition. Translating a virtual address VA to a physical address PA involves the following two steps:

  1. If VA is greater than or equal to the limit register, the hardware traps (an addressing exception).
  2. Otherwise, PA = VA + the base register.

For example, when D is executing, the base register is 56M and the limit register is 24M. The virtual address 10M gets translated to the physical address 66M.
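
Here is a minimal sketch of this translation in C, using the register values from the example above. The translate() function and its names are illustrative only; real hardware performs the check and the add on every memory reference and traps rather than exiting.

    #include <stdio.h>
    #include <stdlib.h>

    #define M (1024UL * 1024UL)

    static unsigned long base_reg  = 56 * M;   /* start of D's partition */
    static unsigned long limit_reg = 24 * M;   /* size of D's partition  */

    /* Translate a virtual address to a physical address, or fail. */
    unsigned long translate(unsigned long va)
    {
        if (va >= limit_reg) {                 /* step 1: bounds check        */
            fprintf(stderr, "addressing exception: VA out of range\n");
            exit(1);                           /* hardware would trap instead */
        }
        return va + base_reg;                  /* step 2: relocate by base    */
    }

    int main(void)
    {
        printf("VA 10M -> PA %luM\n", translate(10 * M) / M);   /* prints 66 */
        return 0;
    }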

A drawback is that this requires an integer add (a relatively slow operation) for every address translation -- i.e. once or twice per machine instruction. Of course, this is built into the hardware, but even so it is a delay.

Another drawback is that each process has to predict, at loading time, the amount of memory it needs. If it overestimates, then space is wasted, and it may have to wait longer for a slot to open up. If it underestimates, then it will crash when it runs out of memory. Therefore everyone overestimates, leading to underuse of multiprogramming, and unnecessarily long turnaround times.

User processes certainly are not permitted to write to the base and limit registers, and probably not allowed to read them.

Memory management

There are two issues in memory management: (1) How do you keep track of free space? (2) How, when there is a choice, do you choose the placement of a new process? Here is one solution to these questions; we will discuss several others later in the course.

(Tanenbaum section 4.2.2, pp. 200-202). A simple data structure for keeping track of free space is the free list: a linked list of records recording the starting address and size of each free chunk of memory, sorted in increasing order of starting address. We will discuss later the algorithm for maintaining this list.

A simple criterion for allocating a partition to a process is "first-fit": go through the free list until reaching the first free chunk large enough to accommodate the process, and allocate the process at the bottom of that chunk.
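
As a minimal sketch, a free list with first-fit allocation might look like the following in C. The struct and function names are hypothetical, and the list-maintenance algorithm deferred above (e.g. coalescing adjacent free chunks on deallocation) is omitted. The main() rebuilds the 20M/16M/16M fragmentation example from the figure.

    #include <stdio.h>
    #include <stdlib.h>

    #define M (1024UL * 1024UL)

    struct free_chunk {
        unsigned long start;            /* starting address of the free chunk       */
        unsigned long size;             /* size of the free chunk                   */
        struct free_chunk *next;        /* next chunk, in increasing address order  */
    };

    /* First fit: scan the list in address order and carve the request out of
     * the bottom (lowest addresses) of the first chunk that is big enough.
     * Returns the start address of the allocation, or (unsigned long)-1 if
     * no chunk fits.                                                         */
    unsigned long first_fit(struct free_chunk **head, unsigned long request)
    {
        struct free_chunk **pp = head;
        for (struct free_chunk *c = *head; c != NULL; pp = &c->next, c = c->next) {
            if (c->size >= request) {
                unsigned long where = c->start;
                c->start += request;
                c->size  -= request;
                if (c->size == 0) {     /* exact fit: unlink the empty chunk */
                    *pp = c->next;
                    free(c);
                }
                return where;
            }
        }
        return (unsigned long)-1;       /* external fragmentation strikes */
    }

    int main(void)
    {
        /* build the three free chunks from the fragmentation example above
         * (the starting addresses are made up)                              */
        unsigned long starts[] = {20 * M, 60 * M, 100 * M};
        unsigned long sizes[]  = {20 * M, 16 * M, 16 * M};
        struct free_chunk *head = NULL, **tail = &head;
        for (int i = 0; i < 3; i++) {
            struct free_chunk *c = malloc(sizeof *c);
            c->start = starts[i]; c->size = sizes[i]; c->next = NULL;
            *tail = c; tail = &c->next;
        }
        printf("30M request -> %ld\n", (long)first_fit(&head, 30 * M));        /* -1: no fit */
        printf("16M request -> %ldM\n", (long)(first_fit(&head, 16 * M) / M)); /* 20         */
        return 0;
    }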

Scheduling: Round robin

(Tanenbaum, pp. 142-143). One of the most common and most important schedulers is round robin. This is not the simplest scheduler, but it is the simplest preemptive scheduler. It works as follows: the processes on the ready queue are run in turn, each for at most one time quantum; a process still running when its quantum expires is preempted and moved to the back of the queue. Suppose the time quantum is 50 msec, process P is executing, and it blocks after 20 msec. When it unblocks and gets through the ready queue, it gets the standard 50 msec again; it doesn't somehow "save" the 30 msec that it missed last time. (You could do things this way, but people don't.)
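
To make the preemption concrete, here is a minimal sketch in C that simulates round robin for three CPU-bound processes. The burst times are made up and blocking is ignored; the point is that every pass through the queue hands out a fresh 50 msec quantum, regardless of how much of the previous quantum was used.

    #include <stdio.h>

    #define QUANTUM 50   /* msec */

    int main(void)
    {
        int remaining[] = {120, 30, 70};   /* msec of CPU each process still needs */
        int n = 3, done = 0, clock = 0;

        while (done < n) {
            for (int i = 0; i < n; i++) {  /* one pass = one trip around the ready queue */
                if (remaining[i] <= 0)
                    continue;              /* already finished */
                int slice = remaining[i] < QUANTUM ? remaining[i] : QUANTUM;
                clock += slice;            /* run for at most one quantum */
                remaining[i] -= slice;
                if (remaining[i] == 0) {
                    printf("P%d finishes at t=%d msec\n", i, clock);
                    done++;
                }
                /* Whether or not the process used its whole quantum, it gets a
                 * fresh 50 msec next time around; unused time is not saved.   */
            }
        }
        return 0;
    }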

Chapter 2: Process and Thread Management

Tanenbaum's chapter title is ``Processes and Threads''. I prefer to add the word management. The subject matter is processes, threads, scheduling, interrupt handling, and IPC (InterProcess Communication--and Coordination).

2.1: Processes

Definition: A process is a program in execution.

2.1.1: The Process Model

Even though in actuality there are many processes running at once, the OS gives each process the illusion that it is running alone.

Virtual time and virtual memory are examples of abstractions provided by the operating system to the user processes so that the latter ``sees'' a more pleasant virtual machine than actually exists.

2.1.2: Process Creation

From the user's or external viewpoint, there are several mechanisms for creating a process.

  1. System initialization, including daemon processes.
  2. Execution of a process creation system call by a running process.
  3. A user request to create a new process.
  4. Initiation of a batch job.

But looked at internally, from the system's viewpoint, the second method dominates. Indeed, in Unix only one process is created at system initialization (the process is called init); all the others are children of this first process.

Why have init? That is, why not have all processes created via method 2?
Ans: Because without init there would be no running process to create any others.
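
To make method 2 concrete, here is a minimal sketch of the Unix process-creation system call, fork(): the calling process is cloned, and every process except init descends from an earlier one this way.

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();                   /* create a child process */
        if (pid == 0) {                       /* child's copy of the code */
            printf("child:  pid=%d, parent=%d\n", getpid(), getppid());
            _exit(0);
        } else if (pid > 0) {                 /* parent's copy of the code */
            waitpid(pid, NULL, 0);            /* wait for the child to finish */
            printf("parent: pid=%d created child %d\n", getpid(), pid);
        } else {
            perror("fork");                   /* creation failed */
        }
        return 0;
    }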

2.1.3: Process Termination

Again, from the outside there appear to be several termination mechanisms.

  1. Normal exit (voluntary).
  2. Error exit (voluntary).
  3. Fatal error (involuntary).
  4. Killed by another process (involuntary).

And again, internally the situation is simpler. In Unix terminology, there are two system calls, kill and exit, that are used. Kill (poorly named in my view) sends a signal to another process. If this signal is not caught (via the signal system call), the process is terminated. There is also an ``uncatchable'' signal. Exit is used for self-termination and can indicate success or failure.
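
A minimal sketch of these calls in C: the child catches SIGTERM via signal() and terminates itself with _exit(); the parent sends the signal with kill(). SIGKILL is the ``uncatchable'' signal mentioned above. The sleep() is a crude way to let the child install its handler first; a real program would synchronize properly.

    #include <signal.h>
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static void on_term(int sig)
    {
        (void)sig;                        /* the signal was caught, so the default
                                             "terminate" action is replaced       */
        _exit(0);                         /* self-termination, status 0 */
    }

    int main(void)
    {
        pid_t pid = fork();
        if (pid < 0) { perror("fork"); return 1; }
        if (pid == 0) {                   /* child */
            signal(SIGTERM, on_term);     /* catch the (catchable) signal */
            for (;;)
                pause();                  /* wait for a signal to arrive */
        }
        sleep(1);                         /* crude: give the child time to set up */
        kill(pid, SIGTERM);               /* send the signal to the child */
        waitpid(pid, NULL, 0);
        /* kill(pid, SIGKILL) would also terminate it; SIGKILL cannot be caught. */
        return 0;
    }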

2.1.4: Process Hierarchies

Modern general purpose operating systems permit a user to create and destroy processes.

Old or primitive operating systems like MS-DOS are not multiprogrammed, so when one process starts another, the first process is automatically blocked and waits until the second has finished.

2.1.5: Process States and Transitions

The process state diagram (with its states running, ready, and blocked, and the transitions among them) contains much information.


One can organize an OS around the scheduler.

2.1.6: Implementation of Processes

The OS organizes the data about each process in a table naturally called the process table. Each entry in this table is called a process table entry (PTE) or process control block.
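
As a minimal sketch, a process table entry for the simple OS above might contain fields like these. The names are hypothetical, and real entries (e.g. the Unix proc structure) hold much more: open files, signal state, accounting information, and so on.

    enum proc_state { READY, RUNNING, BLOCKED };

    struct process_table_entry {
        int             pid;          /* process identifier                          */
        enum proc_state state;        /* ready, running, or blocked                  */
        unsigned long   base;         /* loaded into the base register on dispatch   */
        unsigned long   limit;        /* loaded into the limit register on dispatch  */
        unsigned long   saved_pc;     /* program counter saved at the last switch    */
        unsigned long   saved_sp;     /* stack pointer saved at the last switch      */
        int             parent_pid;   /* creator, per the process hierarchy          */
    };

    /* One entry per process, indexed (say) by pid. */
    struct process_table_entry process_table[64];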

Transfer of Control: Version 1

More details will be added when we study memory management and more again when we study interrupts.

Procedure calls

Procedure f calls g(a,b,c) in process P.

Steps when f carries out the call:

1. Complete all previous instructions in f. Therefore, the only registers important for the state of f are the stack pointer (SP) and the program counter (PC).

2. Push arguments c,b,a onto P's stack. Note: Stacks usually grow downward from the top of P's segment, so pushing an item onto the stack actually involves decrementing SP.

3. Execute PUSHJ <start-address of g>. This instruction pushes PC onto the stack, and then jumps to the start address of g.

4. The first step in g is to allocate space for its own local variables by suitably decrementing SP.

g now starts its execution from the beginning. This may involve calling other procedures, possibly including recursive calls to f.

Steps when g returns control to f:

5. At the end of g: Undo step (4), deallocating its local variables by incrementing SP.

6. Last step of g: POPJ, which has the effect PC = pop(stack).

7. We are now at the step in f immediately following the call to g. Pop the arguments a,b,c off the stack and continue the execution of f.
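
Here is a minimal sketch in C that mimics steps 1-7 with a downward-growing stack in an array. The ``PC'' value, the argument values, and the two local variables are made up; real hardware does this with registers and machine instructions (PUSHJ/POPJ), not C code.

    #include <stdio.h>

    #define STACK_WORDS 64
    static long stack_[STACK_WORDS];
    static int  sp = STACK_WORDS;              /* stack grows downward */

    static void push(long v) { stack_[--sp] = v; }    /* decrement SP, then store */
    static long pop_(void)   { return stack_[sp++]; } /* load, then increment SP  */

    int main(void)
    {
        long pc_in_f = 100;        /* "address" of the instruction after the call */
        long a = 1, b = 2, c = 3;

        /* steps 2-3: f pushes the arguments c, b, a; PUSHJ pushes the return PC */
        push(c); push(b); push(a);
        push(pc_in_f);

        /* step 4: g allocates space for two local variables by decrementing SP */
        sp -= 2;

        /* ... g executes, possibly calling other procedures ... */

        /* step 5: g deallocates its locals by incrementing SP */
        sp += 2;

        /* step 6: POPJ has the effect PC = pop(stack) */
        long pc = pop_();

        /* step 7: back in f, pop the arguments and continue */
        sp += 3;
        printf("returned to PC %ld with SP restored to %d\n", pc, sp);
        return 0;
    }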

Features of procedure call