Operating System

================ Start Lecture #5 ================

Hardware assist (test and set)

TAS(b), where b is a binary variable, ATOMICALLY sets b<--true and returns the OLD value of b.
Of course it would be silly to return the new value of b since we know the new value is true.

The word atomically means that the two actions performed by TAS(x) (testing, i.e., returning the old value of x, and setting, i.e., assigning true to x) are inseparable. Specifically, it is not possible for two concurrent TAS(x) operations to both return false (unless there is also another concurrent statement that sets x to false).

With TAS available, implementing a critical section for any number of processes is trivial.

initially s=false

loop forever
    while (TAS(s)) {}   ENTRY
    CS
    s<--false           EXIT
    NCS
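
For concreteness, here is a minimal sketch of the same spin lock in C11, using the standard atomic_flag type, whose test-and-set operation returns the old value exactly as TAS does. The function names enter and leave are invented for this example.

#include <stdatomic.h>

atomic_flag s = ATOMIC_FLAG_INIT;   /* initially clear, i.e., s = false */

void enter(void) {                  /* ENTRY */
    /* spin until test-and-set returns the old value false */
    while (atomic_flag_test_and_set(&s)) {}
}

void leave(void) {                  /* EXIT */
    atomic_flag_clear(&s);          /* s <-- false */
}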

2.3.4: Sleep and Wakeup

Remark: Tanenbaum presents both busy waiting (as above) and blocking (process switching) solutions. We will only do busy waiting, which is easier. Sleep and Wakeup are the simplest blocking primitives: sleep voluntarily blocks the calling process and wakeup unblocks a sleeping process. We will not cover these.

Homework: Explain the difference between busy waiting and blocking.

2.3.5: Semaphores

Remark: Tanenbaum uses the term semaphore only for blocking solutions. I will use the term for our busy waiting solutions. Others call our solutions spin locks.

P and V and Semaphores

The entry code is often called P and the exit code V. Thus the critical section problem is to write P and V so that

loop forever
    P
    critical-section
    V
    non-critical-section
satisfies
  1. Mutual exclusion.
  2. No speed assumptions.
  3. No blocking by processes in NCS.
  4. Forward progress (my weakened version of Tanenbaum's last condition).

Note that I use indenting carefully and hence do not need (and sometimes omit) the braces {} used in languages like C or Java.

A binary semaphore abstracts the TAS solution we gave for the critical section problem. A binary semaphore S takes on the two values open and closed, and supports two operations.

P(S) is
    while (S=closed) {}
    S<--closed
where finding S=open and setting S<--closed is a single atomic action.

V(S) is simply
    S<--open

The above code is not real, i.e., it is not an implementation of P. It is, instead, a definition of the effect P is to have.

To repeat: for any number of processes, the critical section problem can be solved by

loop forever
    P(S)
    CS
    V(S)
    NCS

The only specific solution we have seen for an arbitrary number of processes is the one just above with P(S) implemented via test and set.

Remark: Peterson's solution requires each process to know its process number. The TAS solution does not. Moreover, the definition of P and V does not permit use of the process number. Thus, strictly speaking, Peterson did not provide an implementation of P and V. He did, however, solve the critical section problem.
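
To illustrate the remark, here is a hedged sketch of Peterson's two-process solution in C11; the names wants, turn, enter, and leave are invented for this example. Note that enter and leave must receive the caller's process number, which the definition of P and V does not allow.

#include <stdatomic.h>
#include <stdbool.h>

atomic_bool wants[2];              /* wants[i]: process i wants to enter */
atomic_int  turn;                  /* which process defers */

void enter(int me) {               /* me is this process's number, 0 or 1 */
    int other = 1 - me;
    atomic_store(&wants[me], true);
    atomic_store(&turn, other);    /* politely defer to the other process */
    while (atomic_load(&wants[other]) && atomic_load(&turn) == other) {}
}

void leave(int me) {
    atomic_store(&wants[me], false);
}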

To solve other coordination problems we want to extend binary semaphores. Binary semaphores have two shortcomings: two consecutive Vs do not permit two subsequent Ps to succeed (the gate cannot be doubly opened), and they cannot permit up to k>1 processes in a section.

Both of these shortcomings can be overcome by not restricting ourselves to a binary variable, but instead defining a generalized or counting semaphore. A counting semaphore S takes on non-negative integer values: P(S) atomically waits until S>0 and then decrements S; V(S) simply increments S.

These counting semaphores can solve what I call the semi-critical-section problem, where you permit up to k processes in the section. When k=1 we have the original critical-section problem.

initially S=k

loop forever
    P(S)
    SCS   <== semi-critical-section
    V(S)
    NCS
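
A minimal sketch of a busy-waiting counting semaphore, written with C11 atomics; the type spin_sem and the functions sem_P and sem_V are invented names for this example (this is not the POSIX semaphore API).

#include <stdatomic.h>

typedef struct { atomic_int count; } spin_sem;   /* initialize count to k */

void sem_P(spin_sem *s) {
    for (;;) {
        int c = atomic_load(&s->count);
        /* find S>0 and decrement it in one atomic step; retry if we race */
        if (c > 0 && atomic_compare_exchange_weak(&s->count, &c, c - 1))
            return;
    }
}

void sem_V(spin_sem *s) {
    atomic_fetch_add(&s->count, 1);              /* S++ */
}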

Producer-consumer problem

The counting semaphore e counts the empty slots of the k-slot shared buffer, the counting semaphore f counts the full slots, and the binary semaphore b guards the buffer itself.

Initially e=k, f=0 (counting semaphores); b=open (binary semaphore)

Producer                         Consumer

loop forever                     loop forever
    produce-item                     P(f)
    P(e)                             P(b); take item from buf; V(b)
    P(b); add item to buf; V(b)      V(e)
    V(f)                             consume-item
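
The same pattern in C11, reusing the hypothetical spin_sem sketched above; the buffer buf, its capacity K, and the indices in and out are names invented for this example.

#define K 8                        /* buffer capacity k */

int buf[K];
int in = 0, out = 0;               /* next slot to fill / to empty */

spin_sem e = { K };                /* counts empty slots */
spin_sem f = { 0 };                /* counts full slots */
spin_sem b = { 1 };                /* binary semaphore guarding buf; 1 = open */

void produce(int item) {
    sem_P(&e);                                        /* wait for an empty slot */
    sem_P(&b); buf[in] = item; in = (in + 1) % K; sem_V(&b);
    sem_V(&f);                                        /* announce a full slot */
}

int consume(void) {
    sem_P(&f);                                        /* wait for a full slot */
    sem_P(&b); int item = buf[out]; out = (out + 1) % K; sem_V(&b);
    sem_V(&e);                                        /* announce an empty slot */
    return item;
}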

2.3.6: Mutexes

Remark: Whereas we use the term semaphore to mean binary semaphore and explicitly say generalized or counting semaphore for the positive integer version, Tanenbaum uses semaphore for the positive integer solution and mutex for the binary version. Also, as indicated above, for Tanenbaum semaphore/mutex implies a blocking primitive.

My Terminology

                   Busy wait             block/switch
    critical       (binary) semaphore    (binary) semaphore
    semi-critical  counting semaphore    counting semaphore

Tanenbaum's Terminology

                   Busy wait             block/switch
    critical       enter/leave region    mutex
    semi-critical  no name               semaphore

2.3.7: Monitors

Skipped.

2.3.8: Message Passing

Skipped. You can find some information on barriers in my lecture notes for a follow-on course (see in particular lecture #16).

2.4: Classical IPC Problems

2.4.1: The Dining Philosophers Problem

A classical problem from Dijkstra: five philosophers sit around a table, each with a plate of spaghetti; a single fork lies between each pair of neighbors, and a philosopher needs both adjacent forks to eat.

What algorithm do you use for access to the shared resource (the forks)?

The purpose of mentioning the Dining Philosophers problem without giving the solution is to give a feel for what coordination problems are like. The book gives others as well. We are skipping these (again, this material would be covered in a sequel course). If you are interested, look, for example, here.

Homework: 31 and 32 (these have short answers but are not easy). Note that the problem refers to fig. 2-20, which is incorrect; it should be fig. 2-33, as noticed by Liang Chen.

2.4.2: The Readers and Writers Problem

Quite useful in multiprocessor operating systems and database systems. The ``easy way out'' is to treat all processes as writers, in which case the problem reduces to mutual exclusion (P and V). The disadvantage of the easy way out is that you give up reader concurrency: many readers could safely be in the section at once. Again, for more information see the web page referenced above.
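
For concreteness, here is a hedged sketch of the classical readers-writers pattern, which these notes do not develop, again reusing the hypothetical spin_sem from above; rc, mutex, and w are names invented for this example. The first reader locks out writers and the last reader readmits them, so readers can run concurrently.

spin_sem mutex = { 1 };            /* guards the reader count rc */
spin_sem w     = { 1 };            /* held by writers, and by the readers as a group */
int rc = 0;                        /* number of readers currently reading */

void reader(void) {
    sem_P(&mutex);
    if (++rc == 1) sem_P(&w);      /* first reader locks out writers */
    sem_V(&mutex);
    /* ... read the shared data ... */
    sem_P(&mutex);
    if (--rc == 0) sem_V(&w);      /* last reader readmits writers */
    sem_V(&mutex);
}

void writer(void) {
    sem_P(&w);                     /* writers need exclusive access */
    /* ... write the shared data ... */
    sem_V(&w);
}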

2.4.3: The Sleeping Barber Problem

Skipped.

2.5: Process Scheduling

Scheduling processes on the processor is often called ``process scheduling'' or simply ``scheduling''.

The objectives of a good scheduling policy include fairness, efficiency (keeping the processor busy doing useful work), low response time for interactive jobs, low turnaround time for batch jobs, and high throughput.

Recall the basic diagram describing process states (running, ready, and blocked).

For now we are discussing short-term scheduling, i.e., the arcs connecting running <--> ready.

Medium term scheduling is discussed later.

Preemption

It is important to distinguish preemptive from non-preemptive scheduling algorithms. A preemptive algorithm may suspend a running process, for example when a clock interrupt indicates that the process's quantum has expired; a non-preemptive algorithm runs each process until it voluntarily blocks or terminates.