Operating Systems

================ Start Lecture #6 ================

3.1.2: Resource Acquisition

Simple example of the trouble you can get into.

Recall from the semaphore/critical-section treatment in the last chapter that it is easy to cause trouble if a process dies or stays forever inside its critical section; we assume processes do not do this. Similarly, we assume that no process retains a resource forever. It may obtain the resource an unbounded number of times (i.e., it can loop forever with a resource request inside the loop), but each time it gets the resource, it must eventually release it.

3.2: Introduction to Deadlocks

To repeat: A deadlock occurs when every member of a set of processes is waiting for an event that can only be caused by another member of the set.

Often the event waited for is the release of a resource.

3.2.1: (Necessary) Conditions for Deadlock

The following four conditions (Coffman; Havender) are necessary but not sufficient for deadlock. Repeat: They are not sufficient.

  1. Mutual exclusion: A resource can be assigned to at most one process at a time (no sharing).
  2. Hold and wait: A process holding a resource is permitted to request another.
  3. No preemption: A process must release its resources; they cannot be taken away.
  4. Circular wait: There must be a chain of processes such that each member of the chain is waiting for a resource held by the next member of the chain.

The first three are characteristics of the system and resources. That is, for a given system with a fixed set of resources, the first three conditions are either true or false: They don't change with time. The truth or falsehood of the last condition does indeed change with time as the resources are requested/allocated/released.

3.2.2: Deadlock Modeling

On the right are several examples of a Resource Allocation Graph, also called a Reusable Resource Graph.

Homework: 5.

Consider two concurrent processes P1 and P2 whose programs are:

P1: request R1       P2: request R2
    request R2           request R1
    release R2           release R1
    release R1           release R2

On the board draw the resource allocation graph for various possible executions of the processes, indicating when deadlock occurs and when deadlock is no longer avoidable.
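
To make this concrete, here is a small Python sketch (my own, not from the text) using two threading.Lock objects to stand in for R1 and R2. If the scheduler interleaves the threads so that each acquires its first lock before requesting the second, both block forever, which is exactly the deadlock the graph exhibits.

    import threading

    R1 = threading.Lock()   # stands in for resource R1
    R2 = threading.Lock()   # stands in for resource R2

    def p1():
        with R1:            # request R1
            with R2:        # request R2
                pass        # use both resources
            # release R2 (automatic on leaving the inner with)
        # release R1

    def p2():
        with R2:            # request R2
            with R1:        # request R1
                pass
            # release R1
        # release R2

    t1 = threading.Thread(target=p1)
    t2 = threading.Thread(target=p2)
    t1.start(); t2.start()
    t1.join(); t2.join()    # may hang forever if each thread got its first lock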

There are four strategies used for dealing with deadlocks.

  1. Ignore the problem
  2. Detect deadlocks and recover from them
  3. Avoid deadlocks by carefully deciding when to allocate resources.
  4. Prevent deadlocks by violating one of the 4 necessary conditions.

3.3: Ignoring the problem--The Ostrich Algorithm

The “put your head in the sand” approach.

3.4: Detecting Deadlocks and Recovering From Them

3.4.1: Detecting Deadlocks with Single Unit Resources

Consider the case in which there is only one instance of each resource.

To find a directed cycle in a directed graph is not hard. The algorithm is in the book. The idea is simple.

  1. For each node in the graph do a depth first traversal to see if the graph is a DAG (directed acyclic graph), building a list of the nodes on the current path as you go down (and pruning it as you backtrack back up).

  2. If you ever find the same node twice on your list, you have found a directed cycle, the graph is not a DAG, and deadlock exists among the processes in your current list.

  3. If you never find the same node twice, the graph is a DAG and no deadlock occurs.

  4. The searches are finite since there are a finite number of nodes.
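
Here is a minimal sketch of that idea in Python (the graph representation and names are mine): the resource allocation graph is a dictionary mapping each node to its successors, and a depth-first search keeps the current path as the "list", reporting a cycle the moment it meets a node that is already on the path.

    def find_cycle(graph):
        """Return a directed cycle as a list of nodes, or None if the graph is a DAG.

        graph: dict mapping each node to an iterable of its successors
        (process -> resource arcs are requests; resource -> process arcs are assignments).
        """
        done = set()                      # nodes known not to lie on any cycle

        def dfs(node, path, on_path):
            if node in on_path:           # same node twice on the list: a cycle
                return path[path.index(node):]
            if node in done:
                return None
            path.append(node)
            on_path.add(node)
            for succ in graph.get(node, ()):
                cycle = dfs(succ, path, on_path)
                if cycle is not None:
                    return cycle
            path.pop()                    # prune the list while backtracking
            on_path.discard(node)
            done.add(node)
            return None

        for start in graph:               # try every node as a starting point
            cycle = dfs(start, [], set())
            if cycle is not None:
                return cycle
        return None

    # P1 holds R1 and wants R2; P2 holds R2 and wants R1: deadlock.
    g = {"P1": ["R2"], "R2": ["P2"], "P2": ["R1"], "R1": ["P1"]}
    print(find_cycle(g))                  # ['P1', 'R2', 'P2', 'R1']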

3.4.2: Detecting Deadlocks with Multiple Unit Resources

This is more difficult.

3.4.3: Recovery from Deadlock

Preemption

Perhaps you can temporarily preempt a resource from a process. Not likely.

Rollback

Database (and other) systems take periodic checkpoints. If the system does take checkpoints, one can roll back to a checkpoint whenever a deadlock is detected. One must somehow guarantee forward progress, i.e., that the same deadlock does not simply recur.

Kill processes

This can always be done but might be painful. For example, some processes have had effects that cannot simply be undone, such as printing a document or launching a missile.

Remark: We are doing 3.6 before 3.5 since 3.6 is easier.

3.6: Deadlock Prevention

Attack one of the Coffman/Havender conditions.

3.6.1: Attacking Mutual Exclusion

The idea is to use spooling instead of mutual exclusion. This is not possible for many kinds of resources.

3.6.2: Attacking Hold and Wait

Require each process to request all resources at the beginning of the run. This is often called one shot.
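
A rough sketch of one-shot allocation in Python, assuming the resources are represented by locks (the helper below is hypothetical, not a real resource manager): the process obtains everything it will ever need before doing any work, so it never holds one resource while waiting for another.

    import threading

    def run_one_shot(needed_locks, work):
        """Acquire every needed resource up front, run, then release everything."""
        # A real manager would grant the whole request atomically; acquiring in a
        # fixed order here approximates that for the purpose of the sketch.
        ordered = sorted(needed_locks, key=id)
        for lock in ordered:
            lock.acquire()
        try:
            work()                        # the entire run holds all its resources
        finally:
            for lock in reversed(ordered):
                lock.release()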

3.6.3: Attacking No Preemption

This is normally not possible. Some resources are inherently preemptable (e.g., memory); for those, deadlock is not an issue. Other resources are inherently non-preemptable, such as a robot arm, and it is normally not possible to find a way to preempt them.

3.6.4: Attacking Circular Wait

Establish a fixed ordering of the resources and require that they be requested in this order. So if a process holds resources #34 and #54, it can request only resources #55 and higher.

It is easy to see that a cycle is no longer possible.
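
In practice this is the familiar rule of lock ordering. A minimal sketch in Python (the class and numbering are mine): give every lock a fixed rank and always acquire in increasing rank; then no process ever waits for a lower-numbered lock while holding a higher-numbered one, so a cycle cannot form.

    import threading

    class OrderedLock:
        """A lock with a fixed global rank; acquire locks in increasing rank only."""
        def __init__(self, rank):
            self.rank = rank
            self.lock = threading.Lock()

    def acquire_in_order(*locks):
        for lk in sorted(locks, key=lambda l: l.rank):   # e.g., #34, then #54, then #55
            lk.lock.acquire()

    def release_all(*locks):
        for lk in sorted(locks, key=lambda l: l.rank, reverse=True):
            lk.lock.release()

    r34, r54, r55 = OrderedLock(34), OrderedLock(54), OrderedLock(55)
    acquire_in_order(r34, r54, r55)       # every process uses the same order
    release_all(r34, r54, r55)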

Homework: 7.

3.5: Deadlock Avoidance

Let's see if we can tiptoe through the tulips and avoid deadlock states even though our system does permit all four of the necessary conditions for deadlock.

An optimistic resource manager is one that grants every request as soon as it can. To avoid deadlocks with all four conditions present, the manager must be smart, not merely optimistic.

3.5.1 Resource Trajectories

We plot the progress of each process along an axis. In the example we show, there are two processes, hence two axes, i.e., the plot is planar. This procedure assumes that we know the entire request and release pattern of the processes in advance, so it is not a practical solution. I present it because it provides some motivation for the practical solution that follows, the Banker's Algorithm.

Homework: 10, 11, 12.

3.5.2: Safe States

Avoiding deadlocks given some extra knowledge.

Definition: A state is safe if there is an ordering of the processes such that: if the processes are run in this order, they will all terminate (assuming none exceeds its claim).

Recall the comparison made above between detecting deadlocks (with multi-unit resources) and the banker's algorithm.

In the definition of a safe state no assumption is made about the running processes; that is, for a state to be safe, termination must occur no matter what the processes do (provided they all terminate and do not exceed their claims). Making no assumption is the same as making the most pessimistic assumption.

Give an example of each of the four possibilities. A state that is

  1. Safe and deadlocked--not possible.

  2. Safe and not deadlocked--trivial (e.g., no arcs).

  3. Not safe and deadlocked--easy (any deadlocked state).

  4. Not safe and not deadlocked--interesting.

Is the figure on the right safe or not?

A manager can determine if a state is safe.

The manager then follows the procedure below, which is part of the Banker's Algorithm discovered by Dijkstra, to determine if the state is safe.

  1. If there are no processes remaining, the state is safe.

  2. Seek a process P whose maximum additional requests are no greater than what remains (for each resource type).

  3. The banker now pretends that P has terminated (since the banker knows that it can guarantee this will happen). Hence the banker pretends that all of P's currently held resources are returned. This makes the banker richer, and hence perhaps a process that was not previously eligible to be chosen as P can now be chosen.

  4. Repeat these steps.
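
A minimal sketch of this safety check in Python, for a single resource type (the function name and representation are mine): alloc[p] is what process p currently holds, max_additional[p] is the most it may still request, and available is what the banker has on hand. The numbers in the demo are those of Examples 1 and 2 below.

    def is_safe(alloc, max_additional, available):
        """True if some order lets every process run to completion (one resource type)."""
        remaining = dict(alloc)            # processes not yet (pretend-)terminated
        on_hand = available
        while remaining:
            # Step 2: seek a process whose max additional request fits what remains.
            runnable = [p for p in remaining if max_additional[p] <= on_hand]
            if not runnable:
                return False               # no process is guaranteed to finish: unsafe
            p = runnable[0]
            on_hand += remaining.pop(p)    # step 3: pretend P terminates, reclaim its units
        return True                        # step 1: nothing remains, so the state is safe

    # Example 1: safe (run X, then Y, then Z).
    print(is_safe({"X": 1, "Y": 5, "Z": 10}, {"X": 2, "Y": 6, "Z": 9}, 6))   # True
    # Example 2: unsafe (after X finishes only 5 units are free; Y needs 6, Z needs 7).
    print(is_safe({"X": 1, "Y": 5, "Z": 12}, {"X": 2, "Y": 6, "Z": 7}, 4))   # False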

Example 1

A safe state with 22 units of one resource:

    process     initial claim   current alloc   max add'l
    X                 3               1              2
    Y                11               5              6
    Z                19              10              9
    Total                            16
    Available                         6

Example 2

An unsafe state with 22 units of one resource:

    process     initial claim   current alloc   max add'l
    X                 3               1              2
    Y                11               5              6
    Z                19              12              7
    Total                            18
    Available                         4

Start with example 1 and assume that Z now requests 2 more units and the manager grants them. The result is the state shown in example 2, which is unsafe.