NOTE: These notes are by Allan Gottlieb, and are reproduced here, with superficial modifications, with his permission. "I" in this text generally refers to Prof. Gottlieb, except in regards to administrative matters.


================ Start Lecture #6 (Feb. 11) ================

P and V and Semaphores

Note:

Tanenbaum presents both busy waiting (like above) and blocking (process switching) solutions. We will only do busy waiting, which is easier. Some authors use the term semaphore only for blocking solutions and would call our solutions spin locks.

End of Note.

The entry code is often called P and the exit code V (Tanenbaum uses P and V only for blocking solutions, but we use them for busy waiting as well). So the critical section problem is to write P and V so that

loop forever
    P
    critical-section
    V
    non-critical-section
satisfies
  1. Mutual exclusion.
  2. No speed assumptions.
  3. No blocking by processes in NCS.
  4. Forward progress (my weakened version of Tanenbaum's last condition).

Note that I use indenting carefully and hence do not need (and sometimes omit) the braces {}.

A binary semaphore abstracts the TAS solution we gave for the critical section problem. A binary semaphore S takes on two values, open and closed, and supports two operations: P(S), which waits until S is open and then sets it to closed, with the test and the set forming one atomic action, and V(S), which simply sets S back to open.

This description is not real code, i.e., it is not an implementation of P. It is, instead, a definition of the effect P is to have.
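
For concreteness, here is a minimal sketch in C of how such a busy-waiting P and V could be built on an atomic test-and-set instruction. The type and function names are mine, and the sketch assumes the GCC/Clang builtins __atomic_test_and_set and __atomic_clear as the atomic operations.

#include <stdbool.h>

/* Sketch only (names are mine): a busy-waiting binary semaphore, i.e. a
   spin lock, built on an atomic test-and-set. */
typedef bool bsem_t;              /* false = open, true = closed */

void P(bsem_t *s)
{
    /* Atomically set *s to closed and learn its previous value; spin
       until that previous value was open. */
    while (__atomic_test_and_set(s, __ATOMIC_ACQUIRE))
        ;                         /* busy wait */
}

void V(bsem_t *s)
{
    __atomic_clear(s, __ATOMIC_RELEASE);  /* set *s back to open */
}

/* Usage: bsem_t S = false;  each process runs  P(&S); CS; V(&S); NCS;  */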

To repeat: for any number of processes, the critical section problem can be solved by

loop forever
    P(S)
    CS
    V(S)
    NCS

The only specific solution we have seen for an arbitrary number of processes is the one just above with P(S) implemented via test and set.

Remark: Peterson's solution requires each process to know its process number; the TAS solution does not. Moreover, the definition of P and V does not permit use of the process number. Thus, strictly speaking, Peterson did not provide an implementation of P and V. He did solve the critical section problem.
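
To make the remark concrete, here is a sketch of Peterson's two-process algorithm in C. The variable and function names are mine, the volatile qualifiers only discourage the compiler from optimizing away the busy wait, and memory-ordering issues on modern hardware are ignored. Note that both the entry and the exit code must be told the caller's process number i, which is exactly the information the definition of P and V does not supply.

/* Sketch of Peterson's algorithm for two processes (names are mine). */
volatile int interested[2] = {0, 0};  /* interested[i]: process i wants in */
volatile int turn;                    /* whose turn it is to defer */

void peterson_enter(int i)            /* entry code for process i (0 or 1) */
{
    int other = 1 - i;
    interested[i] = 1;
    turn = i;
    while (interested[other] && turn == i)
        ;                             /* busy wait */
}

void peterson_leave(int i)            /* exit code for process i */
{
    interested[i] = 0;
}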

To solve other coordination problems we want to extend binary semaphores. A binary semaphore has two shortcomings: two consecutive V's do not permit two subsequent P's to succeed (the gate cannot be doubly opened), and sometimes we want to permit up to k processes in the section rather than just one.

The solution to both of these shortcomings is to remove the restriction to a binary variable and define a generalized or counting semaphore, whose value is a nonnegative integer.

These counting semaphores can solve what I call the semi-critical-section problem, where you permit up to k processes in the section. When k=1 we have the original critical-section problem.

initially S=k

loop forever
    P(S)
    SCS   <== semi-critical-section
    V(S)
    NCS
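
As an illustration, here is how such a busy-waiting counting semaphore might be sketched in C. The struct and function names are mine, and the sketch again assumes the GCC/Clang builtins __atomic_test_and_set and __atomic_clear; the small flag just makes the test and the decrement one atomic step.

#include <stdbool.h>

/* Sketch only (names are mine): a busy-waiting counting semaphore. */
typedef struct {
    bool lock;                    /* spin-lock flag protecting value */
    int  value;                   /* initially k */
} csem_t;

void csem_P(csem_t *s)
{
    for (;;) {
        while (__atomic_test_and_set(&s->lock, __ATOMIC_ACQUIRE))
            ;                                     /* grab the flag */
        if (s->value > 0) {                       /* room in the section */
            s->value--;
            __atomic_clear(&s->lock, __ATOMIC_RELEASE);
            return;
        }
        __atomic_clear(&s->lock, __ATOMIC_RELEASE);  /* no room; try again */
    }
}

void csem_V(csem_t *s)
{
    while (__atomic_test_and_set(&s->lock, __ATOMIC_ACQUIRE))
        ;
    s->value++;
    __atomic_clear(&s->lock, __ATOMIC_RELEASE);
}

With value initialized to k, at most k processes are past csem_P at any moment; k=1 gives back the binary semaphore.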

Producer-consumer problem

Initially e=k, f=0 (counting semaphores); b=open (binary semaphore). Here k is the buffer size, e counts the empty slots in the buffer, and f counts the full slots.

Producer                         Consumer

loop forever                     loop forever
    produce-item                     P(f)
    P(e)                             P(b); take item from buf; V(b)
    P(b); add item to buf; V(b)      V(e)
    V(f)                             consume-item
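
For comparison, here is the same pattern sketched in C with POSIX semaphores; sem_wait plays the role of P and sem_post the role of V (these block rather than busy wait, but the arrangement of P's and V's is identical). The buffer layout, the capacity K, and the helpers produce_item and consume_item are my own inventions.

#include <semaphore.h>

#define K 10                          /* buffer capacity (my choice) */

int buf[K];                           /* the shared bounded buffer */
int in = 0, out = 0;                  /* next insert / next remove position */

sem_t e;                              /* empty slots; init to K (the e above) */
sem_t f;                              /* full slots;  init to 0 (the f above) */
sem_t b;                              /* binary guard on buf   (the b above) */

int  produce_item(void);              /* hypothetical helpers */
void consume_item(int item);

void *producer(void *arg)
{
    for (;;) {
        int item = produce_item();
        sem_wait(&e);                 /* P(e): wait for an empty slot */
        sem_wait(&b);                 /* P(b) */
        buf[in] = item; in = (in + 1) % K;
        sem_post(&b);                 /* V(b) */
        sem_post(&f);                 /* V(f): one more full slot */
    }
}

void *consumer(void *arg)
{
    for (;;) {
        sem_wait(&f);                 /* P(f): wait for a full slot */
        sem_wait(&b);                 /* P(b) */
        int item = buf[out]; out = (out + 1) % K;
        sem_post(&b);                 /* V(b) */
        sem_post(&e);                 /* V(e): one more empty slot */
        consume_item(item);
    }
}

/* In main: sem_init(&e,0,K); sem_init(&f,0,0); sem_init(&b,0,1);
   then create one producer thread and one consumer thread. */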

Dining Philosophers

A classical problem from Dijkstra: five philosophers sit at a round table, each with a plate of spaghetti, and there is one fork between each pair of neighbors; a philosopher needs both adjacent forks to eat.

What algorithm do you use for access to the shared resource (the forks)?

The purpose of mentioning the Dining Philosophers problem without giving the solution is to give a feel for what coordination problems are like. The book gives others as well. We are skipping these (again, this material would be covered in a sequel course). If you are interested, look, for example, here.

Readers and writers

Quite useful in multiprocessor operating systems. The "easy way out" is to treat all processes as writers, in which case the problem reduces to mutual exclusion (P and V). The disadvantage of the easy way out is that you give up reader concurrency. Again, for more information see the web page referenced above.
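
For the curious, here is a sketch (not part of these notes, which skip the details) of the classical readers-preference protocol built from semaphores, again using POSIX sem_t with the conventional variable names. Readers overlap with one another, which is exactly the concurrency that the easy way out gives up; a writer excludes everyone.

#include <semaphore.h>

sem_t mutex;                  /* binary; protects readcount; init to 1 */
sem_t wrt;                    /* binary; held by a writer or by the group
                                 of current readers; init to 1 */
int readcount = 0;            /* number of readers currently inside */

void reader(void)
{
    sem_wait(&mutex);
    if (++readcount == 1)     /* first reader locks out writers */
        sem_wait(&wrt);
    sem_post(&mutex);

    /* ... read the shared data (many readers may be here at once) ... */

    sem_wait(&mutex);
    if (--readcount == 0)     /* last reader lets writers back in */
        sem_post(&wrt);
    sem_post(&mutex);
}

void writer(void)
{
    sem_wait(&wrt);
    /* ... write the shared data (alone) ... */
    sem_post(&wrt);
}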

2.2: Threads

Per process items                Per thread items
-----------------                ----------------
Address space                    Program counter
Global variables                 Machine registers
Open files                       Stack
Child processes
Pending alarms
Signals and signal handlers
Accounting information

The idea is to have separate threads of control (hence the name) running in the same address space. Each thread is somewhat like a process (e.g., it is scheduled to run) but contains less state (the address space belongs to the process in which the thread runs).

2.2.1: The Thread Model

A process contains a number of resources such as address space, open files, accounting information, etc. In addition to these resources, a process has a thread of control, e.g., program counter, register contents, stack. The idea of threads is to permit multiple threads of control to execute within one process. This is often called multithreading and threads are often called lightweight processes. Because the threads in a process share so much state, switching between them is much less expensive than switching between separate processes.

Individual threads within the same process are not completely independent. For example, there is no memory protection between them. This is typically not a security problem, as the threads are cooperating and all are from the same user (indeed the same process). However, the shared resources do make debugging harder. For example, one thread can easily overwrite data needed by another, and if one thread closes a file, other threads can't read from it.
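
A small pthreads sketch (the names and the loop count are mine) of what is shared and what is per thread: the global counter lives in the single address space and is seen by both threads, while each thread's local variable lives on its own stack. Without the mutex, the two threads could overwrite each other's updates to counter, the sort of interference just described.

#include <pthread.h>
#include <stdio.h>

int counter = 0;                          /* shared: one copy per process */
pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg)
{
    int mine = 0;                         /* private: on this thread's stack */
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&m);           /* without this, updates can be lost */
        counter++;
        pthread_mutex_unlock(&m);
        mine++;
    }
    printf("thread %ld did %d increments\n", (long)arg, mine);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_create(&t2, NULL, worker, (void *)2L);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", counter);    /* 200000: both threads see the
                                             same memory */
    return 0;
}

Compiled with -pthread, the final count is 200000; remove the mutex and it usually is not.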

2.2.2: Thread Usage

Often, when a process A is blocked (say for I/O) there is still computation that can be done. Another process B can't do this computation since it doesn't have access to A's memory. But two threads in the same process do share the memory, so there is no problem.

An important example is a multithreaded web server. Each thread is responding to a single WWW connection. While one thread is blocked on I/O, another thread can be processing another WWW connection. Why not use separate processes, i.e., what is the shared memory?
Ans: The cache of frequently referenced pages.

A common organization is to have a dispatcher thread that fields requests and then passes each request on to an idle worker thread.
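
Here is a sketch in C of that dispatcher/worker structure, reusing the producer-consumer pattern from the semaphore section. The request queue, its size, and the helpers get_next_request and handle_request are my own inventions, and POSIX semaphores again stand in for P and V: the dispatcher enqueues a request and does a V, and an idle worker sleeps in a P until a request arrives.

#include <pthread.h>
#include <semaphore.h>

#define QSIZE 64

int queue[QSIZE];                 /* pending requests (e.g., connection fds) */
int head = 0, tail = 0;
pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;
sem_t slots;                      /* empty queue slots; init to QSIZE */
sem_t full;                       /* queued requests;   init to 0 */

int  get_next_request(void);      /* hypothetical: accept the next connection */
void handle_request(int req);     /* hypothetical: build and send the reply */

void *dispatcher(void *arg)
{
    for (;;) {
        int req = get_next_request();
        sem_wait(&slots);                     /* P: wait for queue space */
        pthread_mutex_lock(&qlock);
        queue[tail] = req; tail = (tail + 1) % QSIZE;
        pthread_mutex_unlock(&qlock);
        sem_post(&full);                      /* V: wake an idle worker */
    }
}

void *worker(void *arg)
{
    for (;;) {
        sem_wait(&full);                      /* P: sleep until work arrives */
        pthread_mutex_lock(&qlock);
        int req = queue[head]; head = (head + 1) % QSIZE;
        pthread_mutex_unlock(&qlock);
        sem_post(&slots);                     /* V: free the queue slot */
        handle_request(req);                  /* may block on I/O while other
                                                 workers keep running */
    }
}

/* In main: sem_init(&slots,0,QSIZE); sem_init(&full,0,0); then create one
   dispatcher thread and a pool of worker threads. */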

Another example is the producer-consumer problem (cf. above) in which we have 3 threads in a pipeline. One reads data, the second processes the data read, and the third outputs the processed data. Again, while one thread is blocked the others can execute.
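
Finally, a sketch of such a three-thread pipeline in C; the stage names and the choice of toupper as the "processing" step are mine. Two ordinary pipes connect the stages, all three threads see the same file descriptors because they share one process, and whichever stage has data available can run while the others are blocked in read.

#include <pthread.h>
#include <unistd.h>
#include <ctype.h>

int p12[2], p23[2];               /* pipe from stage 1 to 2, and from 2 to 3 */

void *read_stage(void *arg)       /* stage 1: read the input */
{
    char c;
    while (read(0, &c, 1) == 1)
        write(p12[1], &c, 1);
    close(p12[1]);                /* EOF for stage 2 */
    return NULL;
}

void *process_stage(void *arg)    /* stage 2: transform the data */
{
    char c;
    while (read(p12[0], &c, 1) == 1) {
        c = toupper((unsigned char)c);
        write(p23[1], &c, 1);
    }
    close(p23[1]);                /* EOF for stage 3 */
    return NULL;
}

void *output_stage(void *arg)     /* stage 3: write the result */
{
    char c;
    while (read(p23[0], &c, 1) == 1)
        write(1, &c, 1);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2, t3;
    pipe(p12); pipe(p23);
    pthread_create(&t1, NULL, read_stage, NULL);
    pthread_create(&t2, NULL, process_stage, NULL);
    pthread_create(&t3, NULL, output_stage, NULL);
    pthread_join(t1, NULL); pthread_join(t2, NULL); pthread_join(t3, NULL);
    return 0;
}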