Start Lecture #3
The OS organizes the data about each process in a table naturally called the process table. Each entry in this table is called a process table entry or process control block (PCB).
(I have often referred to a process table entry as a PTE, but this is bad since I recently realized that I use PTE for two different things: Process Table Entry and Page Table Entry. Since the latter is very common, I must stop using the former. Please correct me if I slip up.)
An active entity becomes a data structure when looked at from a lower level.
This should be compared with the addendum on transfer of control.
In a well defined location in memory (specified by the hardware) the OS stores an interrupt vector, which contains the address of the (first level) interrupt handler.
Assume a process P is running and a disk interrupt occurs for the completion of a disk read previously issued by process Q, which is currently blocked. Note that disk interrupts are unlikely to be for the currently running process (because the process that initiated the disk access is likely blocked).
this instruction caused the interrupt.
Note that the interrupt handler and the rest of the OS are parts of the same program, namely the OS, and hence might well be using the same variables. We will soon see how this can cause great problems even in what appear to be trivial cases.
Homework: 5.
| Per process items | Per thread items |
|---|---|
| Address space | Program counter |
| Global variables | Machine registers |
| Open files | Stack |
| Child processes | |
| Pending alarms | |
| Signals and signal handlers | |
| Accounting information | |
The idea is to have separate threads of control (hence the name) running in the same address space. An address space is a memory management concept. For now think of an address space as having two components: the memory in which a process runs, and the mapping from the virtual addresses (addresses in the program) to the physical addresses (addresses in the machine). Each thread is somewhat like a process (e.g., it shares the processor with other threads) but a thread contains less state than a process (e.g., the address space belongs to the process in which the thread runs).
Often, when a process A is blocked (say for I/O) there is still computation that can be done. Another process B can't do this computation since it doesn't have access to A's memory. But two threads in the same process do share memory so that problem doesn't occur.
An important modern example is a multithreaded web server. Each thread responds to a single WWW connection. While one thread is blocked on I/O, another thread can be processing another WWW connection.

Question: Why not use separate processes, i.e., what is the shared memory?
Ans: The cache of frequently referenced pages.
A common organization is to have a dispatcher thread that fields requests and then passes each request on to an idle thread.
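Here is a hedged pthreads sketch of that organization; the bounded queue, its size, and the serve routine are illustrative inventions of mine, not details from the lecture.

```c
/* Sketch of a dispatcher/worker server in pthreads. The queue, its
   size, and serve() are illustrative inventions, not from the lecture. */
#include <pthread.h>

#define QSIZE 16

static int queue[QSIZE];                 /* pending connections */
static int head = 0, tail = 0, count = 0;
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t nonempty = PTHREAD_COND_INITIALIZER;

static void serve(int conn) { (void)conn; /* read request, consult page cache, reply */ }

/* Dispatcher thread: fields each request and hands it to an idle worker. */
void dispatch(int conn) {
    pthread_mutex_lock(&m);
    queue[tail] = conn;                  /* assume the queue is not full */
    tail = (tail + 1) % QSIZE;
    count++;
    pthread_cond_signal(&nonempty);      /* wake one idle worker */
    pthread_mutex_unlock(&m);
}

/* Worker thread: blocks until handed a connection, then serves it. */
void *worker(void *arg) {
    for (;;) {
        pthread_mutex_lock(&m);
        while (count == 0)
            pthread_cond_wait(&nonempty, &m);   /* idle: wait for work */
        int conn = queue[head];
        head = (head + 1) % QSIZE;
        count--;
        pthread_mutex_unlock(&m);
        serve(conn);                     /* may block on I/O; other workers keep going */
    }
}
```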
Another example is a producer-consumer problem (cf. below) in which we have 3 threads in a pipeline. One thread reads data from an I/O device into a buffer, the second thread performs computation on the input buffer and places results in an output buffer, and the third thread outputs the data found in the output buffer. Again, while one thread is blocked the others can execute.
Question: Why does each thread block?
Answer: The first thread blocks while waiting for the input device and also when the input buffer is full. The second thread blocks when the input buffer is empty and when the output buffer is full. The third thread blocks when the output buffer is empty and while waiting for the output device.
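A sketch of this pipeline in C, assuming POSIX semaphores for the slot/item counts; read_item, compute, and write_item are hypothetical stand-ins for the devices and the computation, and a main that initializes the semaphores (e.g., sem_init(&in_slots, 0, N)) is needed to run it.

```c
/* Sketch of the 3-thread pipeline using POSIX semaphores for the buffer
   counts. read_item, compute, and write_item are hypothetical stand-ins
   for the input device, the computation, and the output device. */
#include <semaphore.h>

#define N 8                              /* buffer capacity (my choice) */

int read_item(void);                     /* hypothetical device input  */
int compute(int x);                      /* hypothetical computation   */
void write_item(int y);                  /* hypothetical device output */

static int inbuf[N], outbuf[N];
static sem_t in_slots, in_items;         /* initialize to N and 0 */
static sem_t out_slots, out_items;       /* initialize to N and 0 */

void *input_thread(void *a) {
    for (int i = 0; ; i = (i + 1) % N) {
        int x = read_item();             /* blocks waiting for the device   */
        sem_wait(&in_slots);             /* blocks if input buffer is full  */
        inbuf[i] = x;
        sem_post(&in_items);
    }
}

void *compute_thread(void *a) {
    for (int i = 0; ; i = (i + 1) % N) {
        sem_wait(&in_items);             /* blocks if input buffer is empty */
        int y = compute(inbuf[i]);
        sem_post(&in_slots);
        sem_wait(&out_slots);            /* blocks if output buffer is full */
        outbuf[i] = y;
        sem_post(&out_items);
    }
}

void *output_thread(void *a) {
    for (int i = 0; ; i = (i + 1) % N) {
        sem_wait(&out_items);            /* blocks if output buffer is empty */
        write_item(outbuf[i]);           /* blocks while the device is busy  */
        sem_post(&out_slots);
    }
}
```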
A final (related) example is that an application that wishes to perform automatic backups can have a thread to do just this. In this way the thread that interfaces with the user is not blocked during the backup. However some coordination between threads may be needed so that the backup is of a consistent state.
A process contains a number of resources such as address space, open files, accounting information, etc. In addition to these resources, a process has a thread of control, e.g., program counter, register contents, stack. The idea of threads is to permit multiple threads of control to execute within one process. This is often called multithreading and threads are often called lightweight processes. Because threads in the same process share so much state, switching between them is much less expensive than switching between separate processes.
Individual threads within the same process are not completely independent. For example, there is no memory protection between them. This is typically not a security problem as the threads are cooperating and all are from the same user (indeed the same process). However, the shared resources do make debugging harder. For example, one thread can easily overwrite data needed by another, and if one thread closes a file, the other threads can no longer read from it.
A new thread in the same process is created by a library routine named something like thread_create; similarly there is thread_exit. The analogue to waitpid is thread_join (the name comes presumably from the fork-join model of parallel execution).
The routine thread_yield, which relinquishes the processor, does not have a direct analogue for processes. The corresponding system call (if it existed) would move the process from running to ready.
Homework: 11.
Assume a process has several threads. What should we do if one of these threads calls fork? What if one thread closes a file while another thread is still reading from it?
POSIX threads (pthreads) is an IEEE standard specification that is supported by many Unix and Unix-like systems. Pthreads follows the classical thread model above and specifies routines such as pthread_create, pthread_yield, etc.
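As a tiny illustration (the worker function and its argument are just placeholders), here is how the generic routines map onto the POSIX names:

```c
/* A tiny pthreads example mapping the generic routines above onto the
   POSIX names; the message and argument are placeholders. */
#include <pthread.h>
#include <stdio.h>

void *worker(void *arg) {
    printf("hello from thread %ld\n", (long)arg);
    pthread_exit(NULL);                  /* thread_exit */
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, worker, (void *)1L);   /* thread_create */
    pthread_join(t, NULL);               /* thread_join (cf. waitpid) */
    return 0;
}
```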
An alternative to the classical model (as implemented, e.g., by pthreads) is the so-called Linux threads model (see section 10.3 in the 3e).
Write a (threads) library that acts as a mini-scheduler and implements thread_create, thread_exit, thread_wait, thread_yield, etc. The central data structure maintained and used by this library is the thread table, the analogue of the process table in the operating system itself.
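The per-thread state such a library records is essentially the right-hand column of the table above. A minimal sketch of one thread-table entry, with field names of my own choosing, might be:

```c
/* A minimal sketch of one thread-table entry for such a user-level
   library; the fields mirror the per-thread items listed earlier
   (all names here are my own, not from any particular library). */
typedef struct thread {
    void *sp;                  /* saved stack pointer; the program counter
                                  and machine registers are saved on the stack */
    char *stack;               /* base of this thread's private stack */
    enum { READY, RUNNING, BLOCKED, FINISHED } state;
    struct thread *next;       /* link in the mini-scheduler's ready queue */
} thread_t;
```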
Advantages

- Thread switching stays in user mode (no kernel crossing), so it is very fast.
- The operating system need not be modified at all.
- Each application can use the scheduling policy best suited to it.

Disadvantages

- If one thread blocks in a system call, or takes a page fault, the entire process (i.e., all its threads) blocks.
- The threads cannot run in parallel on multiple processors, since the kernel sees only one schedulable entity.
Move the thread operations into the operating system itself. This naturally requires that the operating system itself be (significantly) modified and is thus not a trivial undertaking.
One can write a (user-level) thread library even if the kernel also has threads. This is sometimes called the N:M model since N user-mode threads run on M kernel threads. In this scheme, the kernel threads cooperate to execute the user-level threads. Switching between user-level threads within one kernel thread is very fast (no context switch) and maintains the advantage that a blocking system call or page fault does not block the entire multi-threaded application since user-level threads in other kernel-level threads of this process are still runnable.
An offshoot of the N:M terminology is that kernel-level threading (without user-level threading) is sometimes referred to as the 1:1 model since one can think of each thread as being a user level thread executed by a dedicated kernel-level thread.
Homework: 12, 14.
Skipped
The idea is to automatically issue a thread-create system call upon message arrival. (The alternative is to have a thread or process blocked on a receive system call.) If implemented well, the latency between message arrival and thread execution can be very small since the new thread does not have state to restore.
Definitely NOT for the faint of heart.
A race condition occurs when two (or more) processes are each about to perform some action and, depending on the exact timing, one or the other goes first. If one of them goes first, everything works correctly, but if another goes first, an error, possibly fatal, occurs.
Imagine two processes both accessing x, which is initially 10.
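To make this concrete, here is the race in compilable form; the pthreads scaffolding is mine, but the heart of it is just two unsynchronized executions of x <-- x+1.

```c
#include <pthread.h>
#include <stdio.h>

int x = 10;                        /* the shared variable */

void *inc(void *arg) {
    /* x = x + 1 is really three steps: load x, add 1, store x.
       If both threads load 10 before either stores, one update is lost. */
    x = x + 1;
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, inc, NULL);
    pthread_create(&b, NULL, inc, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("x = %d\n", x);         /* usually 12, but 11 is possible */
    return 0;
}
```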
We must prevent interleaving sections of code that need to be atomic with respect to each other. That is, the conflicting sections need mutual exclusion. If process A is executing its critical section, it excludes process B from executing its critical section. Conversely, if process B is executing its critical section, it excludes process A from executing its critical section.
Requirements for a critical section implementation:

1. No two processes may be simultaneously inside their critical sections.
2. No assumptions may be made about speeds or the number of processors.
3. No process outside its critical section may block other processes from entering theirs.
4. No process should have to wait forever to enter its critical section.
We will consider only busy waiting solutions in this class. Note that higher level solutions, e.g., having one process block (i.e., not execute) when it cannot enter its critical section, are implemented using busy waiting algorithms.
The operating system can choose not to preempt itself. That is, we do not preempt system processes (if the OS is client-server) or processes running in system mode (if the OS is self-service). Forbidding preemption for system processes would prevent the problem above, in which the non-atomicity of x<--x+1 crashed the printer spooler, provided the spooler is part of the OS.
But simply forbidding preemption while in system mode is not sufficient.
Initially P1wants = P2wants = false

    Code for P1                          Code for P2

    Loop forever {                       Loop forever {
        P1wants <-- true                     P2wants <-- true        ENTRY
        while (P2wants) {}                   while (P1wants) {}      ENTRY
        critical-section                     critical-section
        P1wants <-- false                    P2wants <-- false       EXIT
        non-critical-section                 non-critical-section
    }                                    }
Explain why this works.
But it is wrong!
Why?
Let's try again. The trouble was that setting wants before testing in the loop permitted us to get stuck. We had the two statements in the wrong order!
Initially P1wants = P2wants = false

    Code for P1                          Code for P2

    Loop forever {                       Loop forever {
        while (P2wants) {}                   while (P1wants) {}      ENTRY
        P1wants <-- true                     P2wants <-- true        ENTRY
        critical-section                     critical-section
        P1wants <-- false                    P2wants <-- false       EXIT
        non-critical-section                 non-critical-section
    }                                    }
Explain why this works.
But it is wrong again!
Why?
Now let's try being polite and really take turns. None of this wanting stuff.
Initially turn = 1

    Code for P1                          Code for P2

    Loop forever {                       Loop forever {
        while (turn = 2) {}                  while (turn = 1) {}
        critical-section                     critical-section
        turn <-- 2                           turn <-- 1
        non-critical-section                 non-critical-section
    }                                    }
This one forces alternation, so is not general enough. Specifically, it does not satisfy condition three, which requires that no process in its non-critical section can stop another process from entering its critical section. With alternation, if one process is in its non-critical section (NCS) then the other can enter the CS once but not again.
The first example violated rule 4 (the whole system blocked). The second example violated rule 1 (both processes were in the critical section). The third example violated rule 3 (one process in the NCS stopped another from entering its CS).
In fact, it took years (way back when) to find a correct solution. Many earlier solutions were found and several were published, but all were wrong. The first correct solution was found by a mathematician named Dekker, who combined the ideas of turn and wants. The basic idea is that you take turns when there is contention, but when there is no contention, the requesting process can enter. It is very clever, but I am skipping it (I cover it when I teach distributed operating systems in V22.0480 or G22.2251). Subsequently, algorithms with better fairness properties were found (e.g., no task has to wait for another task to enter the CS twice).
What follows is Peterson's solution, which also combines turn and wants to force alternation only when there is contention. When Peterson's algorithm was published, it was a surprise to see such a simple solution. In fact Peterson gave a solution for any number of processes. A proof that the algorithm satisfies our properties (including a strong fairness condition) for any number of processes can be found in Operating Systems Review Jan 1990, pp. 18-22.
Initially P1wants = P2wants = false and turn = 1

    Code for P1                              Code for P2

    Loop forever {                           Loop forever {
        P1wants <-- true                         P2wants <-- true
        turn <-- 2                               turn <-- 1
        while (P2wants and turn=2) {}            while (P1wants and turn=1) {}
        critical-section                         critical-section
        P1wants <-- false                        P2wants <-- false
        non-critical-section                     non-critical-section
    }                                        }
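For the curious, here is a compilable rendering of Peterson's algorithm. I use C11 sequentially consistent atomics to stand in for the lecture's implicit assumption that loads and stores occur in program order; with plain variables, modern compilers and hardware may reorder them and break the algorithm. The thread numbering (0/1 rather than 1/2) and all names are mine.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <pthread.h>

atomic_bool wants[2];
atomic_int turn;
int counter = 0;                         /* shared data the lock protects */

void lock(int self) {
    int other = 1 - self;
    atomic_store(&wants[self], true);
    atomic_store(&turn, other);          /* politely let the other go first */
    while (atomic_load(&wants[other]) && atomic_load(&turn) == other)
        ;                                /* busy wait */
}

void unlock(int self) {
    atomic_store(&wants[self], false);
}

void *run(void *arg) {
    int self = *(int *)arg;
    for (int i = 0; i < 100000; i++) {
        lock(self);
        counter++;                       /* critical section */
        unlock(self);
    }
    return NULL;
}

int main(void) {
    pthread_t t[2];
    int id[2] = {0, 1};
    pthread_create(&t[0], NULL, run, &id[0]);
    pthread_create(&t[1], NULL, run, &id[1]);
    pthread_join(t[0], NULL);
    pthread_join(t[1], NULL);
    printf("counter = %d\n", counter);   /* always 200000 */
    return 0;
}
```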
The next solution uses a hardware instruction. Tanenbaum calls this instruction test and set lock (TSL). I call it test and set (TAS) and define TAS(b), where b is a binary variable, to ATOMICALLY set b <-- true and return the OLD value of b.
Of course it would be silly to return the new value of b since we know the new value is true.
The word atomically means that the two actions performed by TAS(x), testing (i.e., returning the old value of x) and setting (i.e., assigning true to x) are inseparable. Specifically it is not possible for two concurrent TAS(x) operations to both return false (unless there is also another concurrent statement that sets x to false).
With TAS available, implementing a critical section for any number of processes is trivial.
Initially s = false

    loop forever {
        while (TAS(s)) {}      ENTRY
        CS
        s <-- false            EXIT
        NCS
    }
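C11's atomic_flag_test_and_set is essentially TAS: it atomically sets the flag and returns the old value. So the loop above can be written directly (enter and leave are my names for the ENTRY and EXIT code):

```c
#include <stdatomic.h>

atomic_flag s = ATOMIC_FLAG_INIT;        /* clear <=> false <=> lock free */

void enter(void) {                       /* ENTRY */
    while (atomic_flag_test_and_set(&s))
        ;                                /* busy wait until TAS returns false */
}

void leave(void) {                       /* EXIT */
    atomic_flag_clear(&s);               /* s <-- false */
}
```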
Remark: Tanenbaum does both busy waiting (as above) and blocking (process switching) solutions. We will only do busy waiting, which is easier. Sleep and Wakeup are the simplest blocking primitives. Sleep voluntarily blocks the process and wakeup unblocks a sleeping process. We will not cover these.
Homework: Explain the difference between busy waiting and blocking process synchronization.
Remark: Tanenbaum uses the term semaphore only for blocking solutions. I will use the term for our busy waiting solutions as well. Others call our solutions spin locks.
The entry code is often called P and the exit code V. Thus the critical section problem is to write P and V so that
    loop forever
        P
        critical-section
        V
        non-critical-section

satisfies the requirements listed above.
Note that I use indenting carefully and hence do not need (and sometimes omit) the braces {} used in languages like C or Java.
A binary semaphore abstracts the TAS solution we gave for the critical section problem.
It is a variable that takes on two values: open and closed. The entry operation P(S) is

    while (S = closed) {}
    S <-- closed          -- this is NOT the body of the while

where finding S = open and setting S <-- closed is atomic. The exit operation V(S) is simply S <-- open.
The above code is not real, i.e., it is not an implementation of P. It requires a sequence of two instructions to be atomic and that is after all what we are trying to implement in the first place. The above code is, instead, a definition of the effect P is to have.
To repeat: for any number of processes, the critical section problem can be solved by
    loop forever
        P(S)
        CS
        V(S)
        NCS
The only solution we have seen for an arbitrary number of processes is the one just above with P(S) implemented via test and set.
Remark: Peterson's solution requires each process to know its process number; the TAS solution does not. Moreover, the definition of P and V does not permit use of the process number. Thus, strictly speaking, Peterson did not provide an implementation of P and V. He did solve the critical section problem.
To solve other coordination problems we want to extend binary semaphores.
Both of the shortcomings can be overcome by not restricting ourselves to a binary variable, but instead defining a generalized or counting semaphore.
A counting semaphore takes on non-negative integer values. The entry operation P(S) is

    while (S = 0) {}
    S--

where finding S > 0 and decrementing S is atomic. The exit operation V(S) is simply S++.
Counting semaphores can solve what I call the semi-critical-section problem, where you permit up to k processes in the section. When k=1 we have the original critical-section problem.
Initially S = k

    loop forever
        P(S)
        SCS      -- semi-critical-section
        V(S)
        NCS
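A sketch using POSIX counting semaphores; note that POSIX semaphores block rather than busy wait, but the counting behavior is the same as P and V above. K and task are my placeholders.

```c
#include <semaphore.h>

#define K 3                              /* at most K threads inside */
sem_t S;

void *task(void *arg) {
    for (;;) {
        sem_wait(&S);                    /* P(S) */
        /* semi-critical-section: at most K threads are here at once */
        sem_post(&S);                    /* V(S) */
        /* non-critical-section */
    }
}

int main(void) {
    sem_init(&S, 0, K);                  /* initially S = K */
    /* ... create threads running task() ... */
    return 0;
}
```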