NOTE: These notes are adapted from those of
Allan Gottlieb, and are
reproduced here with his permission.
================ Start Lecture #2
A Simple Example of a Multiprogramming OS
(Used in Homework 1 and Projects 1 and 2)
Memory model: Variable-length partitions
This is, at least conceptually, one of the simplest memory models.
It was used in the CDC 6600.
(Tanenbaum, pp. 26-27; p. 196)
Each process occupies a consecutive chunk of RAM. This chunk is assigned
when the process is created, and does not change. The length of the chunk
required is specified by the object file for the process.
An inherent drawback to this scheme is external fragmentation:
the free space in memory gets divided into small unusable blocks.
For example, in the above figure, after processes C and E exit,
there are three chunks of free space open, of sizes 20M, 16M, and 16M.
If process G of size 30M enters, there is no place to put it, even though
there is considerably more than 30M free in memory.
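The fragmentation scenario above can be checked with a few lines of arithmetic, using the chunk sizes given in the text:

```python
# External fragmentation: total free memory exceeds the request,
# yet no single contiguous chunk can hold it.
free_chunks_mb = [20, 16, 16]   # free chunks after C and E exit (from the text)
request_mb = 30                 # size of incoming process G

total_free = sum(free_chunks_mb)                  # 52M free in aggregate
fits = any(c >= request_mb for c in free_chunks_mb)

print(total_free >= request_mb)  # True: more than enough memory overall
print(fits)                      # False: no single chunk is big enough for G
```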
There are two registers that deal with address translation. The base
register holds the starting address of the active process's chunk. The
limit register holds the size of the chunk.
Translating a virtual address VA to a physical address PA involves the
following two steps.
- Check that 0 <= VA < limit. If not, raise an error.
- PA = VA + base
For example, when D is executing, the base register is 56M and the limit
register is 24M. The virtual address 10M gets translated to the physical
address 66M.
A drawback is that this requires an integer add (a relatively slow operation)
for every address translation -- i.e. once or twice per machine instruction.
Of course, this is built into the hardware, but even so it is a delay.
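The two translation steps can be sketched as a small function (in hardware this is done by the MMU, not software; the function is purely illustrative):

```python
M = 2 ** 20   # 1M, as in the text's example

def translate(va, base, limit):
    """Translate virtual address va to a physical address using
    base/limit registers; raise on an out-of-range access."""
    if not (0 <= va < limit):            # step 1: bounds check
        raise MemoryError(f"address {va} outside [0, {limit})")
    return va + base                     # step 2: add the base register

# When D is running: base = 56M, limit = 24M (from the text)
print(translate(10 * M, 56 * M, 24 * M) // M)   # 66: VA 10M -> PA 66M
```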
Another drawback is that each process has to predict, at loading time,
the amount of memory it needs. If it overestimates, then space is wasted,
and it may have to wait longer for a slot to open up. If it underestimates,
then it will crash when it runs out of memory. Therefore everyone
overestimates, leading to underuse of multiprogramming, and unnecessarily
long turnaround times.
User processes certainly are not permitted to write to the base and
limit registers, and probably not allowed to read them.
There are two issues in memory management: (1) How do you keep track of
free space? (2) How, when there is a choice, do you choose the placement of
a new process? Here is one solution to these; we will discuss several
others later in the course.
(Tanenbaum section 4.2.2, pp. 200-202).
A simple data structure for keeping track of free space is the
free list: a linked list of records, each recording the starting address
and size of a free chunk of memory, sorted in increasing order of
starting address. We will discuss later the algorithm
for maintaining this list.
A simple criterion for allocating a partition to a process is "first-fit":
go through the free list until reaching the first free chunk large
enough to accommodate the process, and allocate the process at the
bottom of that chunk.
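A minimal sketch of first-fit, using a Python list of (start, length) pairs in place of a linked list (the chunk sizes reuse the 20M/16M/16M example from above; the starting addresses are made up for illustration):

```python
M = 2 ** 20

def first_fit(free_list, size):
    """Allocate `size` bytes from a free list sorted by starting address.
    Each entry is (start, length). Returns the allocated start address,
    or None if no chunk is large enough. The allocation is carved from
    the bottom (low end) of the first chunk that fits."""
    for i, (start, length) in enumerate(free_list):
        if length >= size:
            if length == size:
                del free_list[i]                       # chunk used exactly
            else:
                free_list[i] = (start + size, length - size)
            return start
    return None

free_list = [(36 * M, 20 * M), (80 * M, 16 * M), (112 * M, 16 * M)]
print(first_fit(free_list, 12 * M) // M)   # 36: carved from the first chunk
print(first_fit(free_list, 30 * M))        # None: external fragmentation
```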
Scheduling: Round robin
(Tanenbaum, pp. 142-143).
One of the most common and most important schedulers is
round robin. This is not the simplest scheduler, but it
is the simplest preemptive scheduler. It works as follows:
- The processes that are ready to run (i.e. not blocked) are
kept in a FIFO queue, called the "Ready" queue.
- There is a fixed time quantum (50 msec is a typical number) which
is the maximum length of time that any process runs at a stretch.
- The currently active process P runs until one of two things happens:
- P blocks (e.g. waiting for input). In that case, P is taken off the
ready queue; it is in the "blocked" state.
- P exhausts its time quantum. In this case, P is pre-empted, even though
it is still able to run. It is put at the end of the ready queue.
In either case, the process at the head of the ready queue is now made
the active process.
- When a process unblocks (e.g. the input it's waiting for is complete)
it is put at the end of the ready queue.
Suppose the time quantum is 50 msec, process P is executing, and it blocks
after 20 msec. When it unblocks, and gets through the ready queue, it
gets the standard 50 msec again; it doesn't somehow "save" the 30 msec that
it missed last time. (You could do things this way, but people
generally don't.)
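The rules above can be sketched as a simulation. This toy version ignores blocking and assumes CPU-only processes with known total burst times (the pids and burst lengths are invented for illustration):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate round-robin on CPU-only processes.
    `bursts` maps pid -> total CPU time needed; returns the order in
    which pids are dispatched (one entry per scheduling decision)."""
    ready = deque(bursts)            # FIFO ready queue
    remaining = dict(bursts)
    schedule = []
    while ready:
        pid = ready.popleft()        # head of the ready queue runs
        schedule.append(pid)
        run = min(quantum, remaining[pid])
        remaining[pid] -= run
        if remaining[pid] > 0:       # quantum exhausted: pre-empted,
            ready.append(pid)        # goes to the end of the queue
    return schedule

print(round_robin({"A": 120, "B": 50, "C": 80}, 50))
# ['A', 'B', 'C', 'A', 'C', 'A']
```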
Chapter 2: Process and Thread Management
Tanenbaum's chapter title is ``Processes and Threads''. I prefer to
add the word
management. The subject matter is processes, threads, scheduling,
interrupt handling, and IPC (InterProcess Communication).
Definition: A process is a program in execution.
- We are assuming a multiprogramming OS that
can switch from one process to another.
- Sometimes this is
called pseudoparallelism since one has the illusion of a
parallel processor.
- The other possibility is real
parallelism in which two or more processes are actually running
at once because the computer system is a parallel processor, i.e., has
more than one processor.
- We do not study real parallelism (parallel
processing, distributed systems, multiprocessors, etc) in this course.
2.1.1: The Process Model
Even though in actuality there are many processes running at once, the
OS gives each process the illusion that it is running alone.
Virtual time and virtual memory are examples of abstractions
provided by the operating system to the user processes so that the
latter ``sees'' a more pleasant virtual machine than actually exists.
From the user's or external viewpoint there are several mechanisms
for creating a process.
- System initialization, including daemon processes.
- Execution of a process creation system call by a running process.
- A user request to create a new process.
- Initiation of a batch job.
But looked at internally, from the system's viewpoint, the second
method dominates. Indeed in unix only one process is created at
system initialization (the process is called init); all the
others are children of this first process.
Why have init? That is, why not have all processes created via the
process creation system call?
Ans: Because without init there would be no running process to make
that call.
2.1.3: Process Termination
Again from the outside there appear to be several termination
mechanisms.
- Normal exit (voluntary).
- Error exit (voluntary).
- Fatal error (involuntary).
- Killed by another process (involuntary).
And again, internally the situation is simpler. In Unix
terminology, there are two system calls, kill and
exit, that are used. Kill (poorly named in my view) sends a
signal to another process. If this signal is not caught (via the
signal system call) the process is terminated. There
is also an ``uncatchable'' signal. Exit is used for self termination
and can indicate success or failure.
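A small Unix-only sketch of the catch-a-signal idea, using Python's signal module as a stand-in for the C signal interface (SIGUSR1 is chosen arbitrarily as a catchable signal):

```python
import os
import signal

# Installing a handler is the analogue of the signal system call;
# a caught signal runs the handler instead of terminating the process.
caught = []
signal.signal(signal.SIGUSR1, lambda signum, frame: caught.append(signum))

os.kill(os.getpid(), signal.SIGUSR1)   # "kill" just sends a signal
print(caught == [signal.SIGUSR1])      # True: handler ran, process survived
# SIGKILL, by contrast, is the "uncatchable" signal: attempting
# signal.signal(signal.SIGKILL, ...) raises OSError, and the signal
# always terminates the process.
```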
2.1.4: Process Hierarchies
Modern general purpose operating systems permit a user to create and
destroy processes.
- In unix this is done by the fork
system call, which creates a child process, and the
exit system call, which terminates the current
process.
- After a fork both parent and child keep running (indeed they
have the same program text) and each can fork off other
processes.
- A process tree results. The root of the tree is a special
process created by the OS during startup.
- A process can choose to wait for children to terminate.
For example, if C issued a wait() system call it would block until G
exited.
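The fork/exit/wait trio can be seen directly (Unix-only; the exit code 7 is arbitrary):

```python
import os

# fork() creates a child; both processes continue from the same point,
# distinguished by fork's return value (0 in the child, child's pid in
# the parent). wait() blocks the parent until a child terminates.
pid = os.fork()
if pid == 0:                    # child: same program text as the parent
    os._exit(7)                 # self-termination with an exit code
else:                           # parent
    done_pid, status = os.wait()          # block until the child exits
    print(done_pid == pid)                # True
    print(os.WEXITSTATUS(status))         # 7: the child's exit code
```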
Old or primitive operating systems like
MS-DOS are not multiprogrammed, so when one process starts another,
the first process is automatically blocked and waits until
the second is finished.
2.1.5: Process States and Transitions
The diagram on the right contains much information.
- Consider a running process P that issues an I/O request
- The process blocks
- At some later point, a disk interrupt occurs and the driver
detects that P's request is satisfied.
- P is unblocked, i.e. is moved from blocked to ready
- At some later time the operating system looks for a ready job
to run and picks P.
- A preemptive scheduler has the dotted line preempt;
A non-preemptive scheduler doesn't.
- The number of processes changes only for two arcs: create and
terminate.
- Suspend and resume are medium-term scheduling actions.
- Done on a longer time scale.
- Involves memory management as well.
- Sometimes called two-level scheduling.
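The transitions just described can be written down as a table of legal (state, event) moves; the event names here are my own labels, not standard terminology:

```python
# The state-transition diagram as a table of legal moves.
# "preempt" (running -> ready) exists only under a preemptive scheduler.
TRANSITIONS = {
    ("running", "block"):    "blocked",   # e.g. issuing an I/O request
    ("running", "preempt"):  "ready",     # quantum expires (preemptive only)
    ("blocked", "unblock"):  "ready",     # the I/O request is satisfied
    ("ready",   "dispatch"): "running",   # scheduler picks this process
}

def step(state, event):
    """Apply one event; raise if the transition is not in the diagram."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {event} from {state}")

# Walk the scenario from the text: P blocks on I/O, later unblocks,
# and is eventually dispatched again.
s = "running"
for e in ("block", "unblock", "dispatch"):
    s = step(s, e)
print(s)   # running
```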
One can organize an OS around the scheduler.
- Write a minimal ``kernel'' consisting of the scheduler, interrupt
handlers, and IPC (interprocess communication)
- The rest of the OS consists of kernel processes (e.g. memory,
filesystem) that act as servers for the user processes (which of
course act as clients).
- The system processes also act as clients (of other system processes).
The above is called the client-server model and is one Tanenbaum likes.
His ``Minix'' operating system works this way.
Indeed, there was reason to believe that the client-server model
would dominate OS design.
But that hasn't happened.
Such an OS is sometimes called server based.
Systems like traditional unix or linux would then be
called self-service since the user process serves itself.
That is, the user process switches to kernel mode and performs
the system call.
To repeat: the same process changes back and forth from/to
user<-->system mode and services itself.
2.1.6: Implementation of Processes
The OS organizes the data about each process in a table naturally
called the process table.
Each entry in this table is called a
process table entry (PTE) or
process control block.
One entry per process.
The central data structure for process management.
A process state transition (e.g., moving from blocked to ready) is
reflected by a change in the value of one or more
fields in the PTE.
We have converted an active entity (process) into a data structure
(PTE). Finkel calls this the level principle: ``an active
entity becomes a data structure when looked at from a lower level''.
The PTE contains a great deal of information about the process.
- Saved value of registers when process not running
- Stack pointer
- CPU time used
- Process id (PID)
- Process id of parent (PPID)
- User id (uid and euid)
- Group id (gid and egid)
- Pointer to text segment (memory for the program text)
- Pointer to data segment
- Pointer to stack segment
- UMASK (default permissions for new files)
- Current working directory
- Many others
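A stripped-down PTE holding a few of the fields listed above can be sketched as a record type; the field names and defaults are illustrative, not taken from any real kernel:

```python
from dataclasses import dataclass, field

@dataclass
class PTE:
    """Toy process table entry (process control block)."""
    pid: int
    ppid: int
    state: str = "ready"            # ready / running / blocked
    registers: dict = field(default_factory=dict)  # saved when not running
    sp: int = 0                     # stack pointer
    pc: int = 0                     # program counter
    cpu_time_used: int = 0
    uid: int = 0
    cwd: str = "/"

process_table = {}                  # pid -> PTE, one entry per process
process_table[42] = PTE(pid=42, ppid=1)

# A state transition is just a change to one or more PTE fields:
process_table[42].state = "blocked"
print(process_table[42].state)      # blocked
```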
Transfer of Control : Version 1
More details will be added when we study memory management
and more again when we study interrupts.
Procedure f calls g(a,b,c) in process P.
Steps when f carries out the call:
1. Complete all previous instructions in f. Therefore, the only
registers important for the state of f are the stack pointer (SP)
and the program counter (PC)
2. Push arguments c,b,a onto P's stack. Note: Stacks usually
grow downward from the top of P's segment, so pushing
an item onto the stack actually involves decrementing SP.
3. Execute PUSHJ < start-address of g >. This instruction
pushes PC onto the stack, and then jumps to the start address
of g.
4. The first step in g is to allocate space for its own local
variables by suitably decrementing SP.
g now starts its execution from the beginning. This may involve
calling other procedures, possibly including recursive calls to
g itself.
Steps when g returns control to f:
5. At the end of g: Undo step (4) and deallocate its
local variables by incrementing the SP.
6. Last step of g: POPJ has the effect PC = pop(stack)
7. We are now at the step in f immediately following the call
to g. Pop the arguments a,b,c off the stack and continue the
execution of f.
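Steps (2)-(7) can be simulated with a downward-growing stack held in a list, with SP as an index (the stack size, argument names, and two local variables are made up for the sketch):

```python
# Toy model of the call/return stack discipline. PUSHJ and POPJ are
# modeled directly; "return-PC" stands in for f's saved program counter.
STACK_TOP = 16
stack = [None] * STACK_TOP
sp = STACK_TOP                 # empty stack: SP sits at the top of the segment

def push(v):
    global sp
    sp -= 1                    # stacks grow downward: push decrements SP
    stack[sp] = v

def pop():
    global sp
    v = stack[sp]
    sp += 1                    # pop increments SP back up
    return v

# f calls g(a, b, c):
for arg in ("c", "b", "a"):    # step 2: push arguments c, b, a
    push(arg)
push("return-PC")              # step 3: PUSHJ pushes PC, then jumps to g
sp -= 2                        # step 4: g allocates two local variables

# g returns control to f:
sp += 2                        # step 5: deallocate g's locals
pc = pop()                     # step 6: POPJ sets PC = pop(stack)
sp += 3                        # step 7: f pops the arguments a, b, c
print(pc, sp == STACK_TOP)     # return-PC True
```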
Features of procedure call
- Predictable: f knows when the call is coming and can
make sure that it is in a good state for the transfer.
- LIFO structure of control: we can be sure that control
will return to f when this call to g exits. (Excluding language
features such as "throwing" and "catching" exceptions.)
- Recursive: g may (directly or indirectly) create a new
instance of f during the course of execution, but this must
end before g does.
- Entirely in user mode, user space.