================ Start Lecture #10 ================

Theoretical Issues

Considerable theory of processor scheduling (e.g., queueing theory) has been developed.

Medium-Term Scheduling

In addition to the short-term scheduling we have discussed, we add medium-term scheduling, in which decisions are made at a coarser time scale: for example, suspending (swapping out) some processes when memory is overcommitted and resuming (swapping in) them later.

Long Term Scheduling

Long-term (job) scheduling works at an even coarser time scale: it decides which jobs to admit into the system in the first place.

2.5.4: Scheduling in Real Time Systems

Skipped.

2.5.5: Policy versus Mechanism

Skipped.

2.5.6: Thread Scheduling

Skipped.

Research on Processes and Threads

Skipped.

Note on Date of Midterm (vote)

The midterm will cover chapters 0-3 of the notes, i.e., linkers and Tanenbaum chapters 1-3. We will finish chapter 3 a week from today, and the exam must be given before the break so that it can be graded and returned by the deadline.

Thus the only two possibilities are Tuesday 5 March and Thursday 7 March. The exam would be the same on either day. I will be in on Wednesday afternoon, 6 March.

The exam is closed book and closed notes. I will hand out some sample questions or a review sheet on Tuesday.

The vote was held and the winner was Thursday 7 March. End of Note

Notes on lab (scheduling)

  1. If several processes are waiting on I/O, you may assume noninterference. For example, assume that on cycle 100 process A flips a coin and decides its wait is 6 units (i.e., during cycles 101-106 A will be blocked). Assume B begins running at cycle 101 for a burst of 1 cycle, so during 101 process B flips a coin and decides its wait is 3 units. You do NOT alter process A. That is, process A becomes ready after cycle 106 (100+6) and so enters the ready list at cycle 107, while process B becomes ready after cycle 104 (101+3) and enters the ready list at cycle 105.

  2. For processor sharing (PS), which is part of the extra credit:
    Every cycle you see how many jobs are ready (or running). Say there are 7. Then during this cycle (an exception is described below) each process gets 1/7 of a cycle.
    EXCEPTION: Assume there are exactly 2 ready jobs, one needing 1/3 cycle and one needing 1/2 cycle. Since each gets half the CPU while both are ready, the process needing only 1/3 receives it after 2/3 of a cycle and finishes. During that first 2/3 cycle the other process has received only 1/3 cycle; it then gets all the CPU and finishes after 2/3 + 1/6 = 5/6 cycle. The last 1/6 of the cycle is not used by any process. (A small simulation sketch of this exception appears below.)
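The following is only a sketch of that exception, not the lab code; the job sizes and the small time step are assumed values. It advances time in small steps, giving each ready job an equal share of the CPU, and should report that both jobs finish after roughly 5/6 = 0.8333 of a cycle.

/* Sketch of the PS exception above (not the required lab code).
 * Two ready jobs need 1/3 and 1/2 cycle of CPU; while both are ready
 * each gets half the CPU.  The time step dt is an assumed value. */
#include <stdio.h>

int main(void) {
    double need[2] = {1.0 / 3.0, 1.0 / 2.0};  /* CPU time each job still needs */
    double t = 0.0;                           /* elapsed fraction of the cycle */
    const double dt = 1.0 / 6000.0;           /* small time step (assumption)  */
    const double eps = 1e-9;                  /* "finished" threshold          */

    while (need[0] > eps || need[1] > eps) {
        int ready = (need[0] > eps) + (need[1] > eps);
        for (int i = 0; i < 2; i++)
            if (need[i] > eps)
                need[i] -= dt / ready;        /* each ready job gets 1/ready of the CPU */
        t += dt;
    }
    /* Expect about 0.8333: the 1/3 job finishes at 2/3 of the cycle, the
     * 1/2 job then gets the whole CPU and finishes 1/6 later, at 5/6.     */
    printf("both jobs finished after %.4f of a cycle\n", t);
    return 0;
}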
End of Notes

Chapter 3: Deadlocks

A deadlock occurs when every member of a set of processes is waiting for an event that can only be caused by a member of the set.

Often the event waited for is the release of a resource.

In the automotive world deadlocks are called gridlocks.

Reward: One point extra credit on the final exam for anyone who brings a real (e.g., newspaper) picture of an automotive deadlock. You must bring the clipping to the final and it must be in good condition. Hand it in with your exam paper. Note that it must really be a gridlock, i.e., motion is not possible without breaking the traffic rules. A huge traffic jam is not sufficient.

For a computer science example consider two processes A and B that each want to print a file currently on tape.

  1. A has obtained ownership of the printer and will release it after printing one file.
  2. B has obtained ownership of the tape drive and will release it after reading one file.
  3. A tries to get ownership of the tape drive, but is told to wait for B to release it.
  4. B tries to get ownership of the printer, but is told to wait for A to release the printer.

Bingo: deadlock!
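The same four steps can be reproduced with two threads and two locks. Below is a minimal sketch (illustrative only, not from the text) using POSIX threads, with mutexes standing in for the printer and the tape drive; with the sleeps shown, each thread grabs its first lock and then blocks forever waiting for the other, so the program hangs. Compile with cc -pthread.

/* Sketch of the printer/tape deadlock using POSIX threads (illustrative only). */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t printer = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t tape    = PTHREAD_MUTEX_INITIALIZER;

static void *proc_A(void *arg) {
    (void)arg;
    pthread_mutex_lock(&printer);            /* step 1: A owns the printer       */
    printf("A: has printer, wants tape\n");
    sleep(1);                                /* give B time to grab the tape     */
    pthread_mutex_lock(&tape);               /* step 3: blocks; B holds the tape */
    printf("A: prints its file\n");          /* never reached */
    pthread_mutex_unlock(&tape);
    pthread_mutex_unlock(&printer);
    return NULL;
}

static void *proc_B(void *arg) {
    (void)arg;
    pthread_mutex_lock(&tape);               /* step 2: B owns the tape drive    */
    printf("B: has tape, wants printer\n");
    sleep(1);                                /* give A time to grab the printer  */
    pthread_mutex_lock(&printer);            /* step 4: blocks; A holds printer  */
    printf("B: prints its file\n");          /* never reached */
    pthread_mutex_unlock(&printer);
    pthread_mutex_unlock(&tape);
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, proc_A, NULL);
    pthread_create(&b, NULL, proc_B, NULL);
    pthread_join(a, NULL);                   /* never returns: deadlock */
    pthread_join(b, NULL);
    return 0;
}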

3.1: Resources

The resource is the object granted to a process.

3.1.1: Preemptable and Nonpreemptable Resources

3.1.2: Resource Acquisition

Here is a simple example of the trouble you can get into.

Recall from the semaphore/critical-section treatment in the last chapter that it is easy to cause trouble if a process dies or stays forever inside its critical section. Similarly, we assume that no process holds a resource forever. It may obtain the resource an unbounded number of times (i.e., it can loop forever with a resource request inside the loop), but each time it gets the resource, it must eventually release it.
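As a small sketch of what well behaved means here (assumed code, in the style of last chapter's semaphore treatment; use_resource is a hypothetical stand-in), the process below may acquire the resource any number of times, but every acquisition is followed by a release:

/* A well-behaved process: every request is eventually followed by a release. */
#include <semaphore.h>
#include <stdio.h>

static sem_t resource;                  /* one-unit semaphore guarding the resource */

static void use_resource(void) {        /* hypothetical, bounded work while holding it */
    puts("using the resource");
}

int main(void) {
    sem_init(&resource, 0, 1);          /* exactly one unit of the resource */
    for (int i = 0; i < 3; i++) {       /* could loop forever; 3 iterations here */
        sem_wait(&resource);            /* request: block until the resource is free */
        use_resource();                 /* hold it only for a bounded time */
        sem_post(&resource);            /* release: never keep it forever */
    }
    sem_destroy(&resource);
    return 0;
}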

3.2: Introduction to Deadlocks

To repeat: A deadlock occurs when every member of a set of processes is waiting for an event that can only be caused by a member of the set.

Often the event waited for is the release of a resource.

3.2.1: (Necessary) Conditions for Deadlock

The following four conditions (Coffman; Havender) are necessary but not sufficient for deadlock. Repeat: They are not sufficient.

  1. Mutual exclusion: A resource can be assigned to at most one process at a time (no sharing).
  2. Hold and wait: A process holding a resource is permitted to request another.
  3. No preemption: A resource can be released only voluntarily by the process holding it; it cannot be taken away.
  4. Circular wait: There must be a chain of processes such that each member of the chain is waiting for a resource held by the next member of the chain.

The first three are characteristics of the system and its resources. For a given system and a fixed set of resources they are either true or false, i.e., they do not change with time. The truth or falsehood of the last condition does indeed change with time as resources are requested, allocated, and released.

3.2.2: Deadlock Modeling

The Resource Allocation Graph, also called the Reusable Resource Graph, models such situations: it has a node for each process and each resource, an arc from a resource to a process when the resource is assigned to that process, and an arc from a process to a resource when the process is blocked waiting for that resource.

Homework: 5.

Consider two concurrent processes P1 and P2 whose programs are:

P1: request R1       P2: request R2
    request R2           request R1
    release R2           release R1
    release R1           release R2

On the board draw the resource allocation graph for various possible executions of the processes, indicating when deadlock occurs and when deadlock is no longer avoidable.
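When does deadlock occur, and when is it no longer avoidable? Once P1 holds R1 and P2 holds R2, each will next request the resource the other holds, so every continuation from that state deadlocks. The sketch below (an illustration, not from the book) enumerates every interleaving of the two programs and counts how many end in deadlock versus run to completion.

/* Enumerate all interleavings of P1 and P2 above (illustrative sketch only).
 * R1 and R2 are numbered 0 and 1. */
#include <stdio.h>

enum { REQ, REL };
struct act { int op, res; };

static const struct act prog[2][4] = {
    { {REQ,0}, {REQ,1}, {REL,1}, {REL,0} },   /* P1: request R1, R2; release R2, R1 */
    { {REQ,1}, {REQ,0}, {REL,0}, {REL,1} },   /* P2: request R2, R1; release R1, R2 */
};

static int deadlocks, completions;

/* pc[i] = next step of process i; holder[r] = -1 if free, else owning process */
static void explore(const int pc[2], const int holder[2]) {
    int moved = 0;
    for (int i = 0; i < 2; i++) {
        if (pc[i] == 4) continue;                     /* process i has finished  */
        struct act a = prog[i][pc[i]];
        if (a.op == REQ && holder[a.res] != -1)
            continue;                                 /* blocked: cannot move    */
        int npc[2] = { pc[0], pc[1] };
        int nh[2]  = { holder[0], holder[1] };
        nh[a.res] = (a.op == REQ) ? i : -1;           /* grant or release        */
        npc[i]++;
        explore(npc, nh);
        moved = 1;
    }
    if (!moved) {                                     /* no process can move     */
        if (pc[0] == 4 && pc[1] == 4) completions++;
        else deadlocks++;                             /* neither will ever move  */
    }
}

int main(void) {
    const int pc[2] = {0, 0}, holder[2] = {-1, -1};
    explore(pc, holder);
    printf("interleavings ending in deadlock: %d, completing: %d\n",
           deadlocks, completions);
    return 0;
}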

There are four strategies used for dealing with deadlocks.

  1. Ignore the problem.
  2. Detect deadlocks and recover from them.
  3. Avoid deadlocks by carefully deciding when to allocate resources.
  4. Prevent deadlocks by violating one of the 4 necessary conditions.

3.3: Ignoring the problem--The Ostrich Algorithm

The "put your head in the sand" approach.

3.4: Detecting Deadlocks and Recovering From Them

3.4.1: Detecting Deadlocks with Single Unit Resources

Consider the case in which there is only one instance of each resource.

Finding a directed cycle in a directed graph is not hard. The algorithm is in the book; the idea is simple (a code sketch follows the outline below).

  1. For each node in the graph do a depth-first traversal (hoping the graph is a DAG, i.e., a directed acyclic graph), building a list of the nodes on the current path as you go down.
  2. If you ever find the same node twice on your list, you have found a directed cycle: the graph is not a DAG, and deadlock exists among the processes on your current list.
  3. If you never find the same node twice, the graph is a DAG and no deadlock occurs.
  4. The searches are finite since the list size is bounded by the number of nodes.
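Here is a minimal sketch of that idea (not the book's code): a depth-first search that keeps the nodes of the current path in a boolean array and reports a cycle if it ever reaches a node already on the path. The four-node example graph in main is an assumed cycle P0 -> R0 -> P1 -> R1 -> P0.

/* Directed-cycle (deadlock) detection by depth-first search (sketch only). */
#include <stdbool.h>
#include <stdio.h>

#define N 4                      /* number of nodes (processes and resources)  */

static bool edge[N][N];          /* edge[u][v]: an arc from node u to node v   */
static bool on_path[N];          /* nodes on the current DFS path (the "list") */

static bool dfs(int u) {
    if (on_path[u]) return true;       /* same node seen twice: directed cycle */
    on_path[u] = true;
    for (int v = 0; v < N; v++)
        if (edge[u][v] && dfs(v))
            return true;
    on_path[u] = false;                /* backtrack: remove u from the list    */
    return false;
}

static bool has_cycle(void) {
    for (int start = 0; start < N; start++)   /* start a search from each node */
        if (dfs(start))
            return true;
    return false;                             /* no cycle anywhere: a DAG      */
}

int main(void) {
    /* Assumed example: nodes 0..3 are P0, R0, P1, R1 forming a cycle. */
    edge[0][1] = edge[1][2] = edge[2][3] = edge[3][0] = true;
    printf("deadlock (directed cycle)? %s\n", has_cycle() ? "yes" : "no");
    return 0;
}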

3.4.2: Detecting Deadlocks with Multiple Unit Resources

This is more difficult.