Lab2 is on the web page and is due in 3 NYU weeks, 25 March.
End of Note
Run the process that has been ``hurt'' the most.
- For each process, let r = T/t, where T is the wall-clock time this
process has been in the system and t is the running time of the
process to date.
- If r = 5, the job has been running only 1/5 of the time it has been
in the system.
- We call r the penalty ratio and run the process having
the highest r value (a selection sketch follows this list).
- HPRN is normally defined to be non-preemptive (i.e., the system
only checks r when a burst ends), but there is a preemptive analogue.
- Do not worry about a process that has just entered the system (its
ratio is undefined since t = 0).
- When putting a process into the run state, compute the time at
which it will no longer have the highest ratio and set a timer.
- When a process is moved into the ready state, compute its ratio
and preempt if needed.
- HRN stands for highest response ratio next and means the same thing.
- This policy is yet another example of priority scheduling
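Below is a minimal sketch, in C, of the selection step for non-preemptive
HPRN. The process-table layout (the ready, in_system, and cpu_used fields)
is an illustrative assumption, and treating a just-arrived process as the
``most hurt'' is just one convention, not part of any particular system.

#include <stddef.h>

struct proc {
    int    ready;      /* 1 if the process is ready to run              */
    double in_system;  /* T: wall-clock time the process has been here  */
    double cpu_used;   /* t: CPU time (running time) received so far    */
};

/* Return the index of the ready process with the highest penalty ratio
 * r = T/t, or -1 if nothing is ready.  The notes say not to worry about
 * a just-arrived process (t = 0); here we simply give it the highest
 * possible ratio, which is only one possible convention. */
int hprn_pick(const struct proc *p, size_t n)
{
    int best = -1;
    double best_r = -1.0;
    for (size_t i = 0; i < n; i++) {
        if (!p[i].ready)
            continue;
        double r = (p[i].cpu_used == 0.0) ? 1e18
                                          : p[i].in_system / p[i].cpu_used;
        if (r > best_r) {
            best_r = r;
            best = (int)i;
        }
    }
    return best;
}

The preemptive analogue would wrap this selection in the timer logic
described above; only the selection itself is shown here.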
Multilevel Queues (**, **, MLQ, **)
Put different classes of processes in different queues.
- Processes do not move from one queue to another.
- Can have different policies on the different queues.
For example, might have a background (batch) queue that is FCFS and one or
more foreground queues that are RR.
- Must also have a policy among the queues.
For example, might have two queues, foreground and background, and give
the first absolute priority over the second (a sketch of this follows the list).
- Might apply aging to prevent background starvation.
- But might not, i.e., no guarantee of service for background
processes. View a background process as a ``cycle soaker''.
- Might have 3 queues, foreground, background, cycle soaker
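A minimal sketch of the policy among the queues for the two-queue case:
the foreground (RR) queue has absolute priority over the background (FCFS)
queue. The fixed-size integer queues of PIDs are an illustrative
assumption, not any real kernel structure.

#include <stdio.h>

#define QSIZE 64                /* power of two; no overflow check in this sketch */

struct queue { int pid[QSIZE]; int head, tail; };

static int  q_empty(struct queue *q)          { return q->head == q->tail; }
static void q_push (struct queue *q, int pid) { q->pid[q->tail++ & (QSIZE - 1)] = pid; }
static int  q_pop  (struct queue *q)          { return q->pid[q->head++ & (QSIZE - 1)]; }

/* Foreground (RR) has absolute priority; background runs only when the
 * foreground queue is empty.  Returns a PID, or -1 if nothing is ready. */
static int mlq_pick(struct queue *fg, struct queue *bg)
{
    if (!q_empty(fg)) return q_pop(fg);
    if (!q_empty(bg)) return q_pop(bg);    /* the background "cycle soaker" */
    return -1;
}

int main(void)
{
    struct queue fg = {0}, bg = {0};
    q_push(&bg, 10);                       /* a batch process              */
    q_push(&fg, 20); q_push(&fg, 21);      /* two interactive processes    */
    for (int i = 0; i < 3; i++)
        printf("%d\n", mlq_pick(&fg, &bg));  /* prints 20, 21, then 10     */
    return 0;
}

Note that with absolute priority and no aging the background queue can
starve, which is exactly the point made above.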
Multilevel Feedback Queues (FB, MFQ, MLFBQ, MQ)
Many queues and processes move from queue to queue in an attempt to
dynamically separate ``batch-like'' from interactive processes so that
we can favor the latter.
- Run processes from the highest-priority nonempty queue in an RR manner.
- When a process uses its full quantum (looks like a batch process),
move it to a lower-priority queue.
- When a process doesn't use its full quantum (looks like an interactive
process), move it to a higher-priority queue (this feedback rule is
sketched after the list).
- A long process with frequent (perhaps spurious) I/O will remain
in the upper queues.
- Might have the bottom queue FCFS.
- Many variants
For example, might let a process stay in the top queue for 1 quantum, the
next queue for 2 quanta, the next for 4 quanta, etc. (i.e., while its
allotment for a queue lasts, a process whose quantum expires is returned
to the rear of that same queue rather than demoted).
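A minimal sketch of the basic feedback rule (ignoring the variants just
mentioned), assuming a hypothetical struct proc with a level field where
0 is the highest-priority queue. Only the queue-movement decision is
shown; the per-queue RR dispatching is omitted.

#define NLEVELS 4                 /* number of priority queues; bottom may be FCFS */

struct proc { int level; };       /* 0 = highest priority, NLEVELS-1 = lowest      */

/* Called when a process leaves the CPU.
 * used_full_quantum: 1 if its quantum expired (batch-like behavior),
 *                    0 if it blocked early   (interactive behavior). */
void mfq_feedback(struct proc *p, int used_full_quantum)
{
    if (used_full_quantum) {
        if (p->level < NLEVELS - 1)
            p->level++;           /* demote toward the batch queues     */
    } else {
        if (p->level > 0)
            p->level--;           /* promote toward the interactive top */
    }
}

The process would then be appended to the rear of the queue for its new
level.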
Theoretical Issues
Considerable theory has been developed.
- NP completeness results abound.
- Much work in queuing theory to predict performance.
- Not covered in this course.
Medium-Term Scheduling
In addition to the short-term scheduling we have discussed, we add
medium-term scheduling in which
decisions are made at a coarser time scale.
- Called memory scheduling by Tanenbaum (part of three level scheduling).
- Suspend (swap out) some process if memory is over-committed.
- Criteria for choosing a victim (one way to combine them is sketched
after this list).
- How long since previously suspended.
- How much CPU time used recently.
- How much memory it uses.
- External priority (pay more, get swapped out less).
- We will discuss medium term scheduling again when we study memory
management.
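One possible way to combine these victim-selection criteria into a single
number is sketched below; the field names and the weights are purely
illustrative assumptions, not taken from any real system.

/* Score a process as a swap-out victim; higher score = better victim.
 * All fields and weights are made up for illustration. */
struct mt_proc {
    double secs_resident;   /* time since it was last suspended/swapped in   */
    double recent_cpu;      /* CPU time used recently                        */
    double mem_used;        /* memory it currently occupies                  */
    double ext_priority;    /* external priority: pay more, get swapped less */
};

double swapout_score(const struct mt_proc *p)
{
    return  1.0 * p->secs_resident   /* has had its turn in memory            */
          - 2.0 * p->recent_cpu      /* actively computing: prefer to keep it */
          + 0.5 * p->mem_used        /* evicting it frees more memory         */
          - 4.0 * p->ext_priority;   /* paying customers stay resident        */
}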
Long Term Scheduling
- ``Job scheduling''. Decide when to start jobs, i.e., do not
necessarily start them when submitted.
- Force user to log out and/or block logins if over-committed.
- CTSS (an early time sharing system at MIT) did this to ensure
decent interactive response time.
- Unix does this if out of processes (i.e., out of process table entries).
- ``LEM jobs during the day'' (Grumman).
- Called admission scheduling by Tanenbaum (part of three level scheduling).
- Many supercomputer sites.
2.5.4: Scheduling in Real Time Systems
Skipped
2.5.5: Policy versus Mechanism
Skipped.
2.5.6: Thread Scheduling
Skipped.
Research on Processes and Threads
Skipped.
Chapter 3: Deadlocks
A deadlock occurs when every member of a set of
processes is waiting for an event that can only be caused
by a member of the set.
Often the event waited for is the release of a resource.
In the automotive world deadlocks are called gridlocks.
- The processes are the cars.
- The resources are the spaces occupied by the cars.
Reward: One point extra credit on the final exam
for anyone who brings a real (e.g., newspaper) picture of an
automotive deadlock. You must bring the clipping to the final and it
must be in good condition. Hand it in with your exam paper.
Note that it must really be a gridlock, i.e., motion is not possible
without breaking the traffic rules. A huge traffic jam is not
sufficient.
For a computer science example consider two processes A and B that
each want to print a file currently on tape.
- A has obtained ownership of the printer and will release it after
printing one file.
- B has obtained ownership of the tape drive and will release it after
reading one file.
- A tries to get ownership of the tape drive, but is told to wait
for B to release it.
- B tries to get ownership of the printer, but is told to wait for
A to release the printer.
Bingo: deadlock!
3.1: Resources
A resource is an object granted to a process.
3.1.1: Preemptable and Nonpreemptable Resources
- Resources come in two types
- Preemptable, meaning that the resource can be
taken away from its current owner (and given back later). An
example is memory.
- Non-preemptable, meaning that the resource
cannot be taken away. An example is a printer.
- The interesting issues arise with non-preemptable resources so
those are the ones we study.
- Life history of a resource is a sequence of
- Request
- Allocate
- Use
- Release
- Processes make requests, use the resource, and release the
resource. The allocate decisions are made by the system and we will
study policies used to make these decisions.
3.1.2: Resource Acquisition
Simple example of the trouble you can get into.
- Two resources and two processes.
- Each process wants both resources.
- Use a semaphore for each. Call them S and T.
- If both processes execute P(S); P(T); --- V(T); V(S),
all is well.
- But if one of them instead executes P(T); P(S); --- V(S); V(T),
disaster! This is the printer/tape example from just above (a runnable
sketch follows).
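Below is a minimal runnable sketch of that bad ordering using POSIX
threads and unnamed semaphores (P corresponds to sem_wait, V to sem_post);
the thread names and the choice of POSIX primitives are just for
illustration. If thread_a grabs S and thread_b grabs T before either gets
its second semaphore, both block forever.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t S, T;

static void *thread_a(void *arg)
{
    (void)arg;
    sem_wait(&S);               /* P(S) */
    sem_wait(&T);               /* P(T) */
    puts("A holds both");       /* --- use both resources --- */
    sem_post(&T);               /* V(T) */
    sem_post(&S);               /* V(S) */
    return NULL;
}

static void *thread_b(void *arg)
{
    (void)arg;
    sem_wait(&T);               /* P(T)  <- opposite order: the bug */
    sem_wait(&S);               /* P(S) */
    puts("B holds both");
    sem_post(&S);               /* V(S) */
    sem_post(&T);               /* V(T) */
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    sem_init(&S, 0, 1);         /* both semaphores start at 1 */
    sem_init(&T, 0, 1);
    pthread_create(&a, NULL, thread_a, NULL);
    pthread_create(&b, NULL, thread_b, NULL);
    pthread_join(a, NULL);      /* may never return on the bad interleaving */
    pthread_join(b, NULL);
    return 0;
}

Whether the program hangs depends on the interleaving; it may well run to
completion on any given try, which is part of what makes such bugs hard to
find.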
Recall from the semaphore/critical-section treatment last
chapter that it is easy to cause trouble if a process dies or stays
forever inside its critical section.
Similarly, we assume that no process keeps a resource forever.
It may obtain the resource an unbounded number of times (i.e., it can
loop forever with a resource request inside the loop), but each time it
gets the resource, it must eventually release it.
3.2: Introduction to Deadlocks
To repeat: A deadlock occurs when every member of a set of
processes is waiting for an event that can only be caused
by a member of the set.
Often the event waited for is the release of
a resource.
3.2.1: (Necessary) Conditions for Deadlock
The following four conditions (Coffman; Havender) are
necessary but not sufficient for deadlock. Repeat:
They are not sufficient.
- Mutual exclusion: A resource can be assigned to at most one
process at a time (no sharing).
- Hold and wait: A process holding a resource is permitted to
request another.
- No preemption: A process must release its resources; they cannot
be taken away.
- Circular wait: There must be a chain of processes such that each
member of the chain is waiting for a resource held by the next member
of the chain.
The first three are characteristics of the system and resources.
For a given system and fixed set of resources they are either true or
false, i.e., they don't change with time.
The truth or falsehood of the last condition does indeed change with
time as the resources are requested/allocated/released.
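The changing condition is the circular wait, which can be pictured as a
cycle in a ``wait-for'' graph: an edge from i to j means process i is
waiting for a resource held by process j. A minimal sketch of checking for
such a cycle with a depth-first search is below; the fixed-size adjacency
matrix is an illustrative assumption.

#define NPROC 8                         /* illustrative fixed process count   */

static int waits_for[NPROC][NPROC];     /* waits_for[i][j] != 0: i waits on j */

/* Depth-first search; state[i] is 0 = unvisited, 1 = on the current
 * path, 2 = finished.  A back edge to a node on the current path means
 * there is a cycle, i.e., a circular wait. */
static int dfs(int i, int *state)
{
    state[i] = 1;
    for (int j = 0; j < NPROC; j++) {
        if (!waits_for[i][j])
            continue;
        if (state[j] == 1)
            return 1;                   /* cycle found */
        if (state[j] == 0 && dfs(j, state))
            return 1;
    }
    state[i] = 2;
    return 0;
}

/* Returns 1 if some set of processes is currently in a circular wait. */
int circular_wait(void)
{
    int state[NPROC] = {0};
    for (int i = 0; i < NPROC; i++)
        if (state[i] == 0 && dfs(i, state))
            return 1;
    return 0;
}

Since the graph changes as resources are requested, allocated, and
released, the answer can change from moment to moment, which is exactly
the time-varying behavior noted above.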