Operating Systems
Start Lecture #3
One can organize an OS around the scheduler.
- Write a minimal kernel (a micro-kernel) consisting of the
scheduler, interrupt handlers, and IPC (interprocess
communication).
- The rest of the OS consists of kernel processes (e.g. memory,
filesystem) that act as servers for the user processes (which of
course act as clients).
- The system processes also act as clients (of other system
processes).
- The above is called the client-server model and is one Tanenbaum likes.
His Minix operating system works this way.
- Indeed, there was reason to believe that the client-server model
would dominate OS design.
But that hasn't happened.
- Such an OS is sometimes called server based.
- Systems like traditional unix or linux would then be
called self-service since the user process serves
itself.
That is, the user process switches to kernel mode (via the TRAP
instruction) and performs the system call itself without
transferring control to another process.
2.1.6 Implementation of Processes
The OS organizes the data about each process in a table naturally
called the process table.
Each entry in this table is called a
process table entry or
process control block (PCB).
(I have often referred to a process table entry as a PTE, but this
is bad since I also use PTE for Page Table Entry.
Because the latter usage is very common, I must stop using PTE to
abbreviate the former.
Please correct me if I slip up.)
Characteristics of the process table.
- One entry per process.
- The central data structure for process management.
- A process state transition (e.g., moving from blocked to ready) is
reflected by a change in the value of one or more
fields in the PCB.
- We have converted an active entity (process) into a data structure
(PCB).
Finkel calls this the level principle: an active entity becomes a
data structure when looked at from a lower level.
- The PCB contains a great deal of information about the process.
For example,
- Saved value of registers, including the program counter
(i.e., the address of the next instruction), when the process
is not running.
- Stack pointer
- CPU time used
- Process id (PID)
- Process id of parent (PPID)
- User id (uid and euid)
- Group id (gid and egid)
- Pointer to text segment (memory for the program text)
- Pointer to data segment
- Pointer to stack segment
- UMASK (default permissions for new files)
- Current working directory
- Many others
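To make this concrete, here is a minimal sketch in C of what a process
table entry might look like.  The field names and types are purely
illustrative (a real kernel's PCB, for example Linux's task_struct,
contains many more fields and organizes them differently).

    #include <stdint.h>

    /* Illustrative sketch of a PCB; all field names are hypothetical. */
    typedef enum { P_READY, P_RUNNING, P_BLOCKED } proc_state_t;

    struct pcb {
        uint64_t     saved_regs[16];   /* saved register values             */
        uint64_t     program_counter;  /* address of the next instruction   */
        uint64_t     stack_pointer;
        proc_state_t state;            /* ready, running, or blocked        */
        uint64_t     cpu_time_used;    /* accounting                        */
        int          pid, ppid;        /* process id and parent's id        */
        int          uid, euid;        /* real and effective user id        */
        int          gid, egid;        /* real and effective group id       */
        void        *text_segment;     /* memory for the program text       */
        void        *data_segment;
        void        *stack_segment;
        unsigned     umask;            /* default permissions for new files */
        char         cwd[256];         /* current working directory         */
        /* ... many others ... */
    };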
2.1.6A: An Addendum on Interrupts
This should be compared with the addenda on
transfer of control and
trap.
In a well defined location in memory (specified by the hardware) the
OS stores an interrupt vector, which contains the
address of the interrupt handler.
- Tanenbaum calls the interrupt handler the interrupt service
routine.
- Actually one can have different priorities of interrupts and
the interrupt vector then contains one pointer for each level.
This is why it is called a vector.
Assume a process P is running and a disk interrupt occurs for the
completion of a disk read previously issued by process Q, which is
currently blocked.
Note that disk interrupts are unlikely to be for the currently running
process (because the process that initiated the disk access is likely
blocked).
Actions by P Just Prior to the Interrupt:
- Who knows??
This is the difficulty of debugging code that depends on interrupts:
the interrupt can occur (almost) anywhere.
Thus, we do not know what happened just before the
interrupt.
Indeed, we do not even know which process P will be running when
the interrupt does occur.
We cannot (even for one specific execution) point to an
instruction and say this instruction caused the interrupt.
Executing the interrupt itself:
- The hardware saves the program counter and some other registers
(or switches to using another set of registers, the exact
mechanism is machine dependent).
- The hardware loads the new program counter from the interrupt vector.
- Loading the program counter causes a jump.
- Steps 2 and 3 are similar to a procedure call.
But the interrupt is asynchronous.
- As with a trap, the hardware automatically switches the system
into privileged mode.
(It might have been in supervisor mode already.
That is, an interrupt can occur in supervisor or user mode.)
Actions by the interrupt handler (et al) upon being activated
- An assembly language routine saves registers.
- The assembly routine sets up a new stack.
(These last two steps are often called setting up the C
environment.)
- The assembly routine calls a procedure in a high level language,
often the C language (Tanenbaum forgot this step).
- The C procedure does the real work.
- Determines what caused the interrupt (in this case a disk
completed an I/O).
- How does it figure out the cause?
- It might know the priority of the interrupt being activated.
- The controller might write information in memory
before the interrupt.
- The OS might read registers in the controller.
- Mark process Q as ready to run.
- That is, move Q to the ready list (note that again
we are viewing Q as a data structure).
- Q is now in ready state; it was in the blocked state
before.
- The code that Q needs to run initially is likely to be
OS code.
For example, the data just read is probably now in kernel
space and Q needs to copy it into user space.
- Now we have at least two processes ready to run, namely P and
Q.
There may be arbitrarily many others.
- The scheduler decides which process to run, P or Q or something
else.
(This very loosely corresponds to g calling other procedures in
the simple f calls g case we discussed previously).
Eventually the scheduler decides to run P.
Actions by P when control returns
- The C procedure (that did the real work in the interrupt
processing) continues and returns to the assembly code.
- Assembly language restores P's state (e.g., registers) and resumes
P at the point where it was when the interrupt occurred.
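The following is a hedged, C-like sketch of the sequence just
described, seen from the C part of the interrupt handler.  Every name
in it (disk_controller_status, request_owner, mark_ready, schedule) is
hypothetical; it is meant only to show where each action fits, not to
reproduce any real kernel's code.

    /* Hypothetical sketch of the C-language part of a disk-interrupt
       handler.  The assembly stub has already saved registers and set
       up a stack before calling this routine. */

    struct pcb;                                      /* process control block */
    extern int         disk_controller_status(void);
    extern struct pcb *request_owner(int status);
    extern void        mark_ready(struct pcb *p);    /* blocked -> ready      */
    extern void        schedule(void);               /* pick next process     */

    void disk_interrupt_handler(void)
    {
        /* Determine what caused the interrupt, e.g., by reading a
           status register in the disk controller. */
        int status = disk_controller_status();

        /* Find the blocked process Q that issued this request and move
           its PCB from the blocked list to the ready list. */
        struct pcb *q = request_owner(status);
        mark_ready(q);

        /* P and Q (at least) are now ready; let the scheduler decide
           which process runs next -- it need not be P. */
        schedule();

        /* On return, the assembly stub restores the chosen process's
           registers and resumes it. */
    }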
Properties of interrupts
- Phew.
- Unpredictable (to an extent).
We cannot tell what was executed just before the interrupt
occurred.
That is, the control transfer is asynchronous; it is difficult to
ensure that everything is always prepared for the transfer.
- The user code is unaware of the difficulty and cannot
(easily) detect that it occurred.
This is another example of the OS presenting the user with a
virtual machine environment that is more pleasant than reality (in
this case synchronous rather than asynchronous behavior).
- Interrupts can also occur when the OS itself is executing.
This can cause difficulties since both the main line code
and the interrupt handling code are from the same
program
, namely the OS, and hence might well be
using the same variables.
We will soon see how this can cause great problems even in what
appear to be trivial cases.
- The interprocess control transfer is neither stack-like
nor queue-like.
That is, if first P was running, then Q was running, then R was
running, then S was running, the next process to be run might be
any of P, Q, or R (or some other process).
- The system might have been in user-mode or supervisor mode when
the interrupt occurred.
The interrupt processing starts in supervisor mode.
2.1.7 Modeling Multiprogramming (Crudely)
Consider a job that is unable to compute (i.e., it is waiting for
I/O) a fraction p of the time.
- With monoprogramming, the CPU utilization is 1-p.
- Note that p is often > .5, so CPU utilization is poor.
- But, if n jobs are in memory, then the probability that all n
are waiting for I/O is approximately p^n.
So, with a multiprogramming level (MPL) of n,
the CPU utilization is approximately 1-p^n.
- If p=.5 and n=4, then the utilization 1-p^n = 15/16 is
much better than the monoprogramming (n=1) utilization of 1/2.
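As a tiny illustration of the formula, the following C fragment (plain
arithmetic, nothing OS-specific) prints the predicted utilization
1-p^n for p=.5 and several multiprogramming levels.

    /* Compile with:  cc util.c -lm */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double p = 0.5;                 /* fraction of time a job waits for I/O */
        for (int n = 1; n <= 6; n++)    /* multiprogramming level               */
            printf("n = %d   predicted utilization = %.3f\n",
                   n, 1.0 - pow(p, n)); /* n=1 gives .500, n=4 gives .938       */
        return 0;
    }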
There are at least two causes of inaccuracy in the above modeling
procedure.
- Some CPU time is spent by the OS in switching from one process
to another.
So the "useful utilization", i.e. the proportion of time the CPU
is executing user code, is lower than predicted.
- The model assumes that the probability
that one process is waiting for I/O is independent of the
probability that another process is waiting for I/O.
This assumption was used when we asserted that the probability
all n jobs are waiting for I/O is p^n.
Nonetheless, it is correct that increasing MPL does
increase CPU utilization up to a point.
An important limitation is memory.
That is, we assumed that we have many jobs loaded at once, which means we
must have enough memory for them.
There are other memory-related issues as well and we will discuss
them later in the course.
Homework: 5.
2.2 Threads
Process-Wide vs Thread-Specific Items
Per process items           | Per thread items
----------------------------+-------------------
Address space               | Program counter
Global variables            | Machine registers
Open files                  | Stack
Child processes             |
Pending alarms              |
Signals and signal handlers |
Accounting information      |
The idea behind threads is to have separate threads of
control (hence the name) running in the address space of a single
process as shown in the diagram to the right.
An address space is a memory management concept.
For now think of an address space as the memory in which a process
runs.
(In reality it also includes the mapping from virtual addresses,
i.e., addresses in the program, to physical addresses, i.e.,
addresses in the machine.)
The table above shows which properties are common to all
threads in a given process and which properties are thread specific.
Each thread is somewhat like a process
(e.g., it shares the processor with other threads) but a thread
contains less state than a process (e.g., the address space belongs
to the process in which the thread runs.)
2.2.1 Thread Usage
Often, when a process P executing an application is blocked (say
for I/O), there is still computation that can be done for the
application.
Another process can't do this computation since it doesn't have
access to P's memory.
But two threads in the same process do share memory so that problem
doesn't occur.
An important modern example is a multithreaded web server.
Each thread is responding to a single WWW connection.
While one thread is blocked on I/O, another thread can be processing
another WWW connection.
Question: Why not use separate processes, i.e.,
what is the shared memory?
Answer: The cache of frequently referenced pages.
A common organization for a multithreaded application is to have a
dispatcher thread that fields requests and then passes each request
on to an idle worker thread.
Since the dispatcher and worker share memory, passing the request is
very low overhead.
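A rough sketch of this dispatcher/worker organization appears below.
The routines and the request type are hypothetical placeholders; only
the structure matters.

    /* Sketch of a dispatcher/worker multithreaded web server.
       All routines and struct request are hypothetical. */

    struct request;
    extern struct request *get_next_request(void);     /* accept a WWW request   */
    extern void handoff_work(struct request *r);        /* wake an idle worker    */
    extern struct request *wait_for_work(void);         /* block until given work */
    extern int  page_in_cache(struct request *r);       /* shared page cache      */
    extern void read_page_from_disk(struct request *r);
    extern void send_reply(struct request *r);

    void dispatcher_thread(void)
    {
        for (;;)
            handoff_work(get_next_request());  /* only a pointer is passed since
                                                  the threads share memory       */
    }

    void worker_thread(void)
    {
        for (;;) {
            struct request *r = wait_for_work();
            if (!page_in_cache(r))             /* a cache hit avoids blocking    */
                read_page_from_disk(r);        /* may block this thread          */
            send_reply(r);
        }
    }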
Another example is a producer-consumer problem
(see below) in which we have 3
threads in a pipeline.
One thread reads data from an I/O device into an input buffer, the
second thread performs computation on the input buffer and places
results in an output buffer, and the third thread outputs the data
found in the output buffer.
Again, while one thread is blocked the others can execute.
Really you want 2 (or more) input buffers and 2 (or more) output
buffers.
Otherwise the middle thread would be using all the buffers and would
block both outer threads.
Question: When does each thread block?
Answer:
- The first thread blocks while waiting for the device to supply
the data.
It also blocks if all input buffers for the computational thread
are full.
- The second thread blocks when either all input buffers are
empty or all output buffers are full.
- The third thread blocks while waiting for the device to
complete the output (or at least indicate that it is ready for
another request).
It also blocks if all output buffers are empty.
A final (related) example is that an application wishing to perform
automatic backups can have a thread to do just this.
In this way the thread that interfaces with the user is not blocked
during the backup.
However some coordination between threads may be needed so that the
backup is of a consistent state.
2.2.2 The Classical Thread Model
A process contains a number of resources such as address space,
open files, accounting information, etc.
In addition to these resources, a process has a thread of control,
e.g., program counter, register contents, stack.
The idea of threads is to permit multiple threads of control to
execute within one process.
This is often called multithreading and threads are
sometimes called lightweight processes.
Because threads in the same process share so much state, switching
between them is much less expensive than switching between separate
processes.
Individual threads within the same process are not completely
independent.
For example there is no memory protection between them.
This is typically not a security problem as the threads are
cooperating and all are from the same user (indeed the same
process).
However, the shared resources do make debugging harder.
For example one thread can easily overwrite data needed by another
thread in the process and when the second thread fails, the cause
may be hard to determine because the tendency is to assume that the
failed thread caused the failure.
A new thread in the same process is created by a routine
named something like thread_create; similarly there
is thread_exit.
The analogue to waitpid is thread_join (the name comes
presumably from the fork-join model of parallel execution).
The routine thread_yield, which relinquishes the processor,
does not have a direct analogue for processes.
The corresponding system call (if it existed) would move the process
from running to ready.
Homework: 11.
Challenges and Questions
Assume a process has several threads.
What should we do if one of these threads
- Executes a fork?
- Closes a file?
- Requests more memory?
- Moves a file pointer via lseek?
2.2.3 POSIX Threads
POSIX threads (pthreads) is an IEEE standard specification that is
supported by many Unix and Unix-like systems.
Pthreads follows the classical thread model above and specifies routines
such as pthread_create, pthread_yield, etc.
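As a minimal example of the pthreads interface, the program below
creates two threads and waits for them with pthread_join, the thread
analogue of waitpid.  Compile with the -pthread flag; error checking
is omitted for brevity.

    #include <pthread.h>
    #include <stdio.h>

    static void *worker(void *arg)
    {
        printf("thread %ld running\n", (long)arg);
        return NULL;                       /* same effect as pthread_exit(NULL) */
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, (void *)1L);
        pthread_create(&t2, NULL, worker, (void *)2L);
        pthread_join(t1, NULL);            /* analogous to waitpid */
        pthread_join(t2, NULL);
        return 0;
    }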
An alternative to the classical model is the so-called Linux
threads (see section 10.3 in the 3e).
2.2.4 Implementing Threads in User Space
Write a (threads) library that acts as a mini-scheduler and
implements thread_create, thread_exit,
thread_wait, thread_yield, etc.
This library acts as a run-time system for the threads in this
process.
The central data structure maintained and used by this library is
a thread table, the analogue of the process table in the
operating system itself.
There is a thread table and an instance of the threads library in
each multithreaded process.
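To give a feel for how such a library can switch between threads, here
is a toy sketch built on the (real, if dated) ucontext routines
getcontext/makecontext/swapcontext.  It has no thread table or real
scheduler; it only shows that the control transfer is done by library
code running inside the process, not by the kernel's scheduler.

    #include <stdio.h>
    #include <stdlib.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, thr_ctx;

    static void thread_body(void)                /* the user-level "thread" */
    {
        for (int i = 0; i < 3; i++) {
            printf("user-level thread, iteration %d\n", i);
            swapcontext(&thr_ctx, &main_ctx);    /* thread_yield: back to the library */
        }
    }

    int main(void)                               /* plays the mini-scheduler */
    {
        getcontext(&thr_ctx);
        thr_ctx.uc_stack.ss_sp   = malloc(16384);  /* each thread needs its own stack  */
        thr_ctx.uc_stack.ss_size = 16384;
        thr_ctx.uc_link          = &main_ctx;      /* where to go if the thread returns */
        makecontext(&thr_ctx, thread_body, 0);

        for (int i = 0; i < 3; i++) {
            printf("mini-scheduler: dispatching the thread\n");
            swapcontext(&main_ctx, &thr_ctx);      /* "dispatch" the thread */
        }
        printf("mini-scheduler: done\n");
        return 0;
    }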
Advantages of User-Mode Threads:
- Requires no OS modification.
- Requires NO OS modification.
- Requires NO OS modification.
- Very fast since no context switching.
- Can customize the scheduler for each application.
Disadvantages
- Blocking system calls can't be executed directly since that
would block the entire process.
For example, consider the producer consumer example above
implemented in the natural manner with user-mode threads.
This implementation would not work well since, whenever an I/O
was issued that caused the process to block, all the threads
would be unable to run (but see just below).
- Similarly a page fault would block the entire process (i.e., all
the threads).
- In addition, a thread with an infinite loop prevents all other
threads in this process from running.
Possible Methods of Dealing With Blocking System Calls
- Perhaps the OS supplies a non-blocking version of the system call,
e.g. a non-blocking read.
- Perhaps the OS supplies another system call that tells if the
blocking system call will in fact block.
For example, a unix select() can be used to tell if a read would
block.
It might not block if, for example,
- The requested disk block is in the buffer cache (see the I/O
chapter).
- The request was for a keyboard or mouse or network event that
has already happened.
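A hedged sketch of the second approach: poll with select() using a
zero timeout before issuing the read.  This is meaningful for file
descriptors such as terminals, pipes, and sockets; the helper name
below is made up, not a standard routine.

    #include <sys/select.h>
    #include <sys/time.h>

    /* Returns 1 if a read on fd would not block right now,
       0 if it would block, and -1 on error. */
    int read_would_not_block(int fd)
    {
        fd_set readfds;
        struct timeval timeout = { 0, 0 };        /* zero timeout: just poll */

        FD_ZERO(&readfds);
        FD_SET(fd, &readfds);

        if (select(fd + 1, &readfds, NULL, NULL, &timeout) < 0)
            return -1;
        return FD_ISSET(fd, &readfds) ? 1 : 0;
    }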
Relevance to Multiprocessors/Multicore
For a uniprocessor, which is all we are officially considering,
there is little gain in splitting pure computation into pieces.
If the CPU is to be active all the time for all the threads, it is
simpler to just have one (unithreaded) process.
But this changes for multiprocessors/multicores.
Now it is very useful to split computation into
threads and have each executing on a separate processor/core.
In this case, user-mode threads are wonderful: there are no system
calls and the extremely low overhead is beneficial.
However, there are serious issues involved in programming
applications for this environment.
2.2.5 Implementing Threads in the Kernel
One can move the thread operations into the operating system
itself.
This naturally requires that the operating system itself be
(significantly) modified and is thus not a trivial undertaking.
- There is only one thread table for the entire system and it is
in the OS.
- Thread-create and friends are now system calls and hence much
slower than with user-mode threads.
They are, however, still much faster than creating/switching/etc
processes since there is so much shared state that does not need
to be recreated.
- A thread that blocks causes no particular problem.
The kernel can run another thread from this process (or can run
another process).
- Similarly a page fault, or infinite loop in one thread does not
automatically block the other threads in the process.
2.2.6 Hybrid Implementations
One can write a (user-level) thread library even if the kernel also
has threads.
This is sometimes called the N:M model since N user-mode threads run
on M kernel threads.
In this scheme, the kernel threads cooperate to execute the
user-level threads.
- Different kernel threads in the same process can have
differing numbers of user threads assigned to them.
- Switching between user-level threads within one kernel thread
is very fast (no context switch).
It is essentially the same as in the case of pure user-mode
threads.
- Switching between kernel threads of the same process requires
a system call and is essentially the same as in the case of pure
kernel-level threads.
- Since a blocking system call or page fault blocks only one
kernel thread, the multi-threaded application as a whole can
still run since user-level threads in other kernel-level threads
of this process are still runnable.
An offshoot of the N:M terminology is that kernel-level threading
(without user-level threading) is sometimes referred to as the 1:1
model since one can think of each thread as being a user level
thread executed by a dedicated kernel-level thread.
Homework: 12, 14.
2.2.7 Scheduler Activations
Skipped
2.2.8 Popup Threads
The idea is to automatically issue a thread-create system call upon
message arrival.
(The alternative is to have a thread or process
blocked on a receive system call.)
If implemented well, the latency between message arrival and thread
execution can be very small since the new thread does not have state
to restore.
Making Single-threaded Code Multithreaded
Definitely NOT for the faint of heart.
- There often is state that should not be shared.
A well-cited example is the unix errno variable that
contains the error number (zero means no error) of the error
encountered by the last system call.
Errno is hardly elegant (even in normal, single-threaded,
applications), but its use is widespread.
If multiple threads issue faulty system calls the errno value of
the second overwrites the first and thus the first errno value
may be lost.
- Much existing code, including many libraries, is not
re-entrant.
- Managing the shared memory inherent in multi-threaded applications
opens up the possibility of race conditions that we will be
studying next.
- What should be done with a signal sent to a process?
Does it go to all or one thread?
- How should stack growth be managed?
Normally the kernel grows the (single) stack automatically
when needed.
What if there are multiple stacks?
Remark: We shall do section 2.4 before section 2.3
for two reasons.
- Sections 2.3 and 2.5 are closely related; having 2.4 in
between seems awkward to me.
- Lab 2 uses material from 2.4 so I don't want to push 2.4
after 2.5.
2.4 Process Scheduling
Scheduling processes on the processor is often called
processor scheduling or process scheduling or simply scheduling.
As we shall see later in the course, a more descriptive name would
be short-term, processor scheduling.
For now we are discussing the arcs
connecting running↔ready in the diagram on the right
showing the various states of a process.
Medium term scheduling is discussed later as is disk-arm scheduling.
Naturally, the part of the OS responsible for (short-term, processor)
scheduling is called the (short-term, processor)
scheduler and the algorithm used is called the
(short-term, processor) scheduling algorithm.
2.4.1 Introduction to Scheduling
Importance of Scheduling for Various Generations and Circumstances
Early computer systems were monoprogrammed and, as a result,
scheduling was a non-issue.
For many current personal computers, which are definitely
multiprogrammed, there is in fact very rarely more than one runnable
process.
As a result, scheduling is not critical.
For servers (or old mainframes), scheduling is indeed important and
these are the systems you should think of.
Process Behavior
Processes alternate CPU bursts with I/O activity, as we shall see
in lab2.
The key distinguishing factor between compute-bound (aka CPU-bound)
and I/O-bound jobs is the length of the CPU bursts.
The trend over the past decade or two has been for more and more
jobs to become I/O-bound since the CPU rates have increased faster
than the I/O rates.
When to Schedule
An obvious point, which is often forgotten (I don't think 3e
mentions it) is that the scheduler cannot run when
the OS is not running.
In particular, for the uniprocessor systems we are considering, no
scheduling can occur when a user process is running.
(In the multiprocessor situation, no scheduling can occur when all
processors are running user jobs.)
Again we refer to the state transition diagram above.
- Process creation.
The running process has issued a fork() system call and hence
the OS runs; thus scheduling is possible.
Scheduling is also desirable at this time since
the scheduling algorithm might favor the new process.
- Process termination.
The exit() system call has again transferred control to the OS
so scheduling is possible.
Moreover, scheduling is necessary since the
previously running process has terminated.
- Process blocks.
Same as termination.
- Interrupt received.
Since the OS takes control, scheduling is possible.
When an I/O interrupt occurs, this normally means that a blocked
process is now ready and, with a new candidate for running,
scheduling is desirable.
- Clock interrupts are treated next when we discuss preemption
and discuss the dotted arc in the process state diagram.
Preemption
It is important to distinguish preemptive from non-preemptive
scheduling algorithms.
- Preemption means the operating system moves a process from running
to ready without the process requesting it.
- Without preemption, the system implements run until completion,
yield, or block.
- The preempt arc in the diagram is present for
preemptive scheduling algorithms.
- We do not emphasize yield (a solid arrow from running to
ready).
- Preemption needs a clock interrupt (or equivalent).
- Preemption is needed to guarantee fairness.
- Preemption is found in all modern general purpose operating
systems.
- Even non-preemptive systems can be multiprogrammed (remember
that processes do block for I/O).
- Preemption is not cheap.
Categories of Scheduling Algorithms
We distinguish three categories of scheduling algorithms with
regard to the importance of preemption.
- Batch.
- Interactive.
- Real Time.
For multiprogrammed batch systems (we don't consider uniprogrammed
systems, which don't need schedulers) the primary concern is
efficiency.
Since no user is waiting at a terminal, preemption is not crucial
and if it is used, each process is given a long time period before
being preempted.
For interactive systems (and multiuser servers), preemption is
crucial for fairness and rapid response time to short requests.
We don't study real time systems in this course, but can say that
preemption is typically not important since all the processes are
cooperating and are programmed to do their task in a prescribed time
window.
Scheduling Algorithm Goals
There are numerous objectives, several of which conflict, that a
scheduler tries to achieve.
These include:
- Fairness.
Treating users fairly, which must be balanced against ...
- Respecting priority.
That is, giving more important
processes higher priority.
For example, if my laptop is trying to fold proteins in the
background, I don't want that activity to appreciably slow down
my compiles and especially don't want it to make my system seem
sluggish when I am modifying these class notes.
In general, interactive
jobs should have higher
priority.
- Efficiency.
This has two aspects.
- Do not spend excessive time in the scheduler.
- Try to keep all parts of the system busy.
- Low turnaround time.
That is, minimize the time from the submission of a job to its
termination.
This is important for batch jobs.
- High throughput.
That is, maximize the number of jobs completed per day.
Not quite the same as minimizing the (average) turnaround time
as we shall see when we discuss shortest job first.
- Low response time.
That is, minimize the time from when an interactive user issues
a command to when the response is given.
This is very important for interactive jobs.
- Repeatability.
Dartmouth (DTSS) wasted cycles and limited logins for repeatability.
- Degrade gracefully under load.
Deadline scheduling
This is used for real time systems.
The objective of the scheduler is to find a schedule for all the
tasks (there are a fixed set of tasks) so that each meets its
deadline.
The run time of each task is known in advance.
Actually it is more complicated.
- Periodic tasks
- What if we can't schedule all tasks so that each meets its deadline
(i.e., what should be the penalty function)?
- What if the run-time is not constant but has a known probability
distribution?
The Name Game
There is an amazing inconsistency in naming the different
(short-term) scheduling algorithms.
Over the years I have used primarily 4 books: In chronological order
they are Finkel, Deitel, Silberschatz, and Tanenbaum.
The table just below illustrates the name game for these four books.
After the table we discuss several scheduling policies in some detail.
Finkel   Deitel   Silberschatz   Tanenbaum
-------------------------------------------
FCFS     FIFO     FCFS           FCFS
RR       RR       RR             RR
PS       **       PS             PS
SRR      **       SRR            **          (not in Tanenbaum)
SPN      SJF      SJF            SJF
PSPN     SRT      PSJF/SRTF      SRTF
HPRN     HRN      **             **          (not in Tanenbaum)
**       **       MLQ            **          (only in Silberschatz)
FB       MLFQ     MLFQ           MQ
Remark: For an alternate organization of the
scheduling algorithms (due to my former PhD student Eric Freudenthal
and presented by him Fall 2002) click
here.
2.4.2 Scheduling in Batch Systems
First Come First Served (FCFS, FIFO, FCFS, --)
If the OS doesn't
schedule, it still needs to store the list
of ready processes in some manner.
If it is a queue you get FCFS.
If it is a stack (strange), you get LCFS.
Perhaps you could get some sort of random policy as well.
- Only FCFS is considered.
- Non-preemptive.
- The simplest scheduling policy.
- In some sense the fairest since it is first come first served.
But perhaps that is not so fair.
Consider a 1 hour job submitted one second before a 3 second
job.
- An efficient usage of the CPU in the sense that the scheduler is
very fast.
- Does not favor interactive jobs.
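A minimal sketch of the point that FCFS is just a FIFO ready list:
make_ready appends at the tail and the scheduler dispatches from the
head (a stack here would give LCFS instead).  The pcb type and the
routine names are illustrative.

    struct pcb { struct pcb *next; /* ... fields as sketched in 2.1.6 ... */ };

    static struct pcb *ready_head, *ready_tail;   /* FIFO ready list */

    void make_ready(struct pcb *p)                /* append at the tail */
    {
        p->next = NULL;
        if (ready_tail)
            ready_tail->next = p;
        else
            ready_head = p;
        ready_tail = p;
    }

    struct pcb *fcfs_schedule(void)               /* dispatch from the head */
    {
        struct pcb *p = ready_head;
        if (p) {
            ready_head = p->next;
            if (ready_head == NULL)
                ready_tail = NULL;
        }
        return p;         /* runs until it blocks, yields, or terminates */
    }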