================ Start Lecture #11 ================
Recall that SJF/PSJF do a good job of minimizing the average waiting time.
The problem with them is the difficulty in finding the job whose next
CPU burst is minimal.
We now learn two scheduling algorithms that attempt to do this.
The first one does this statically, presumably with some manual help;
the second is dynamic and fully automatic.
Multilevel Queues (MLQ)
Put different classes of processes in different queues.
- Processes do not move from one queue to another.
Can have different policies on the different queues.
For example, might have a background (batch) queue that is FCFS and one or
more foreground queues that are RR.
Must also have a policy among the queues.
For example, might have two queues, foreground and background, and give
the first absolute priority over the second.
Might apply aging to prevent background starvation.
But might not, i.e., no guarantee of service for background
processes. View a background process as a “cycle soaker”.
Might have 3 queues, foreground, background, cycle soaker.
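The two-queue example above can be sketched in a few lines. This is a minimal illustration, not a real scheduler: the queue contents and process names are made up, and "running" a process is reduced to popping its name.

```python
from collections import deque

# Two-level multilevel queue: a foreground RR queue with absolute
# priority over a background FCFS queue (no movement between queues).
foreground = deque()   # interactive processes, served round-robin
background = deque()   # batch processes, served FCFS

def pick_next():
    """Always serve foreground first; background runs only when foreground is empty."""
    if foreground:
        return foreground.popleft()   # RR: a preempted job would be re-appended
    if background:
        return background.popleft()   # FCFS: jobs run in arrival order
    return None

foreground.extend(["editor", "shell"])
background.append("payroll")
order = []
while (p := pick_next()) is not None:
    order.append(p)
print(order)   # both foreground jobs drain before the background job runs
```

With absolute priority and no aging, "payroll" is never chosen while any foreground process is ready, which is exactly the starvation risk noted above.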
Multilevel Feedback Queues (FB, MFQ, MLFBQ, MQ)
As with multilevel queues above we have many queues, but now processes
move from queue to queue in an attempt to
dynamically separate “batch-like” from interactive processes so that
we can favor the latter.
Remember that the minimal average waiting time is achieved by SJF; this
is an attempt to determine dynamically those processes that are
interactive, i.e., that have very short CPU bursts.
Run process from the highest priority nonempty queue in a RR manner.
When a process uses its full quantum (looks like a batch process),
move it to a lower priority queue.
When a process doesn't use its full quantum (looks like an interactive
process), move it to a higher priority queue.
A long process with frequent (perhaps spurious) I/O will remain
in the upper queues.
Might have the bottom queue FCFS.
For example, might let a process stay in the top queue 1 quantum, the next
queue 2 quanta, the next queue 4 quanta (i.e., sometimes return a process
to the rear of the same queue it was in if the quantum expires).
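The three-level example with doubling quanta (1, 2, 4) can be sketched as follows. All process names and burst lengths are invented for illustration, and one policy detail is an assumption: a process whose burst ends exactly at quantum expiry is treated here as having blocked early (and is promoted).

```python
from collections import deque

# Feedback queue with quanta doubling per level, as in the example above.
QUANTA = [1, 2, 4]

def mlfq(procs):
    """procs: dict name -> list of remaining CPU-burst lengths (in ticks).
    Returns a trace of (name, queue level, ticks run) scheduling decisions."""
    queues = [deque() for _ in QUANTA]
    for name in procs:
        queues[0].append(name)              # everyone starts at top priority
    trace = []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)
        name = queues[level].popleft()
        run = min(procs[name][0], QUANTA[level])
        trace.append((name, level, run))
        procs[name][0] -= run
        if procs[name][0] == 0:             # burst done: process blocks (or exits)
            procs[name].pop(0)
            if procs[name]:                 # looked interactive: promote
                queues[max(level - 1, 0)].append(name)
        else:                               # used full quantum: demote
            queues[min(level + 1, len(queues) - 1)].append(name)
    return trace

# One interactive process (two 1-tick bursts) vs. one 6-tick batch process.
trace = mlfq({"interactive": [1, 1], "batch": [6]})
print(trace)
```

In the resulting trace, "interactive" stays at the top level and finishes quickly, while "batch" sinks to the bottom queue, which is the dynamic separation the algorithm is after.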
Considerable theory has been developed.
NP completeness results abound.
Much work in queuing theory to predict performance.
Not covered in this course.
Medium-Term Scheduling
In addition to the short-term scheduling we have discussed, we add
medium-term scheduling, in which decisions are made at a coarser time
scale.
Called memory scheduling by Tanenbaum (part of three level scheduling).
Suspend (swap out) some process if memory is over-committed.
Criteria for choosing a victim.
How long since previously suspended.
How much CPU time used recently.
How much memory does it use.
External priority (pay more, get swapped out less).
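The four criteria above could be combined into a single score for choosing a victim. This is a hypothetical sketch: the field names and weights are invented, and a real system would tune (or structure) them very differently.

```python
# Hypothetical swap-out scoring combining the four criteria above.
def swap_out_score(proc):
    """Higher score -> better candidate for suspension (swap-out)."""
    return (
        proc["ticks_since_suspended"]      # long since last suspended: fair game
        - 2 * proc["recent_cpu_ticks"]     # recently active processes are spared
        + proc["memory_pages"]             # big processes free more memory
        - 10 * proc["external_priority"]   # pay more, get swapped out less
    )

procs = [
    {"name": "A", "ticks_since_suspended": 50, "recent_cpu_ticks": 40,
     "memory_pages": 10, "external_priority": 0},
    {"name": "B", "ticks_since_suspended": 500, "recent_cpu_ticks": 1,
     "memory_pages": 80, "external_priority": 0},
]
victim = max(procs, key=swap_out_score)
print(victim["name"])   # B: long idle, little recent CPU, large memory use
```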
We will discuss medium-term scheduling again when we study memory
management.
Long Term Scheduling
- “Job scheduling”. Decide when to start jobs, i.e., do not
necessarily start them when submitted.
Force user to log out and/or block logins if over-committed.
CTSS (an early time-sharing system at MIT) did this to ensure
decent interactive response time.
Unix does this if out of processes (i.e., out of process-table entries).
Called admission scheduling by Tanenbaum (part of three level scheduling).
Many supercomputer sites.
2.5.4: Scheduling in Real Time Systems
2.5.5: Policy versus Mechanism
2.5.6: Thread Scheduling
Research on Processes and Threads