Operating System

================ Start Lecture #6 ================

Deadline scheduling

This is used for real-time systems. The objective of the scheduler is to find a schedule for all the tasks (there is a fixed set of tasks) so that each meets its deadline. The run time of each task is known in advance.

Actually it is more complicated.

We do not cover deadline scheduling in this course.

The name game

There is an amazing inconsistency in naming the different (short-term) scheduling algorithms. Over the years I have used primarily four books; in chronological order they are Finkel, Deitel, Silberschatz, and Tanenbaum. The table just below illustrates the name game for these four books (** means the policy is not discussed in that book; -- means it is discussed but not given a name). After the table we discuss each scheduling policy in turn.

Finkel  Deitel  Silberschatz  Tanenbaum
---------------------------------------
FCFS    FIFO    FCFS          --          unnamed in Tanenbaum
RR      RR      RR            RR
PS      **      PS            PS
SRR     **      SRR           **          not in Tanenbaum
SPN     SJF     SJF           SJF
PSPN    SRT     PSJF/SRTF     --          unnamed in Tanenbaum
HPRN    HRN     **            **          not in Tanenbaum
**      **      MLQ           **          only in Silberschatz
FB      MLFQ    MLFQ          MQ

Remark: For an alternate organization of the scheduling algorithms (due to Eric Freudenthal and presented by him in Fall 2002) click here.

First Come First Served (FCFS, FIFO, FCFS, --)

If the OS ``doesn't'' schedule, it still needs to store the process table entries (PTEs) of the ready processes somewhere. If that data structure is a queue you get FCFS. If it is a stack (strange), you get LCFS (last come first served). Perhaps you could get some sort of random policy as well.
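
As a toy illustration (the process names are made up), the policy is determined entirely by the data structure holding the ready processes:

    from collections import deque

    arrivals = ["P1", "P2", "P3"]        # order in which processes become ready

    # FCFS: the ready list is a queue; serve from the front.
    queue = deque(arrivals)
    fcfs = [queue.popleft() for _ in range(len(arrivals))]

    # LCFS: the ready list is a stack; serve from the top.
    stack = list(arrivals)
    lcfs = [stack.pop() for _ in range(len(arrivals))]

    print("FCFS order:", fcfs)   # ['P1', 'P2', 'P3']
    print("LCFS order:", lcfs)   # ['P3', 'P2', 'P1']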

Round Robin (RR, RR, RR, RR)

Homework: 26, 35, 38.

Homework: Give an argument favoring a large quantum; give an argument favoring a small quantum.

Homework: (Remind me to discuss this last one in class next time.) Consider the set of processes in the table below. When does each process finish if RR scheduling is used with q=1, q=2, q=3, and q=100? First assume (unrealistically) that context switch time is zero; then assume it is 0.1. Each process performs no I/O (i.e., no process ever blocks). All times are in milliseconds. The CPU time is the total time required for the process (excluding any context switch time). The creation time is the time when the process is created; so P1 is created when the problem begins and P3 is created 5 milliseconds later. If two processes have equal priority (in RR this means they both enter the ready state at the same cycle), we give priority (in RR this means place first on the queue) to the process with the earlier creation time. If they also have the same creation time, we give priority to the process with the lower number.

Process  CPU Time  Creation Time
--------------------------------
P1       20        0
P2       3         3
P3       2         5
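
Here is a sketch of how one could compute these finish times by simulation. It is only an illustration of the RR mechanics, not the official solution: the function name is mine, and it makes one specific (and debatable) choice about overhead, namely that the context switch time is charged before every quantum, including the first.

    def rr_finish_times(procs, q, switch=0.0):
        # procs: list of (name, cpu_time, creation_time).
        # Returns {name: finish_time}. Assumed conventions: a context switch
        # of length `switch` is charged before every quantum (even the first);
        # processes entering the ready queue at the same instant are ordered
        # by creation time, then by name.
        remaining = {name: cpu for name, cpu, _ in procs}
        created = {name: c for name, _, c in procs}
        future = sorted(procs, key=lambda p: (p[2], p[0]))   # not yet created
        ready, finish, t = [], {}, 0.0

        def enqueue(batch):
            # all of `batch` enter the ready queue at the same instant
            ready.extend(sorted(batch, key=lambda n: (created[n], n)))

        def arrivals(now):
            batch = []
            while future and future[0][2] <= now:
                batch.append(future.pop(0)[0])
            return batch

        enqueue(arrivals(t))
        while remaining:
            if not ready:                    # CPU idle: jump to next creation
                t = future[0][2]
                enqueue(arrivals(t))
                continue
            p = ready.pop(0)
            t += switch                      # context-switch overhead
            run = min(q, remaining[p])
            t += run
            remaining[p] -= run
            newly = arrivals(t)
            enqueue([n for n in newly if created[n] < t])   # became ready mid-burst
            at_end = [n for n in newly if created[n] == t]
            if remaining[p] == 0:
                del remaining[p]
                finish[p] = t
                enqueue(at_end)
            else:
                enqueue(at_end + [p])        # tie at time t: creation time decides
        return finish

    procs = [("P1", 20, 0), ("P2", 3, 3), ("P3", 2, 5)]
    for q in (1, 2, 3, 100):
        print(q, rr_finish_times(procs, q), rr_finish_times(procs, q, 0.1))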

Processor Sharing (PS, **, PS, PS)

Merge the ready and running states and permit all ready jobs to run at once. However, the processor slows down so that when n jobs are running at once, each progresses at 1/n of the speed it would have if it were running alone.

Homework: 34.

Variants of Round Robin

Priority Scheduling

Each job is assigned a priority (externally, perhaps by charging more for higher priority) and the highest priority ready job is run.

Priority aging

As a job waits, raise its priority; eventually it will have the maximum priority.
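
A small sketch of one way aging could work (the numbers, the +1 per cycle, and the reset-on-run rule are all my own illustrative choices, not from any of the texts): the low-priority job's priority creeps up while it waits, so it cannot starve.

    base = {"interactive": 10, "batch": 1}     # externally assigned priorities
    current = dict(base)                       # priorities after aging

    def pick_next():
        # Run the highest-priority ready job; age everyone who waited.
        chosen = max(current, key=current.get)
        current[chosen] = base[chosen]         # its aging credit is used up
        for name in current:
            if name != chosen:
                current[name] += 1             # waiting raises priority
        return chosen

    for cycle in range(25):
        print(cycle, pick_next())
    # "interactive" runs most of the time, but roughly every ten cycles
    # "batch" has aged enough to overtake it and gets a turn.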

Selfish RR (SRR, **, SRR, **)

Shortest Job First (SPN, SJF, SJF, SJF)

Sort jobs by total execution time needed and run the shortest first.
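
A tiny worked example (the burst times are made up) of why this helps: with all jobs present at time 0, running the shortest first minimizes the average waiting time.

    def avg_wait(bursts):
        # All jobs arrive at time 0 and run to completion in the given order.
        wait = elapsed = 0
        for b in bursts:
            wait += elapsed          # this job waited `elapsed` before starting
            elapsed += b
        return wait / len(bursts)

    jobs = [8, 1, 2]                                     # CPU bursts in arrival order
    print("FCFS average wait:", avg_wait(jobs))          # (0 + 8 + 9)/3 = 5.67
    print("SJF  average wait:", avg_wait(sorted(jobs)))  # (0 + 1 + 3)/3 = 1.33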

Homework: 39, 40 (note that when he says RR with each process getting its fair share, he means PS).

Preemptive Shortest Job First (PSPN, SRT, PSJF/SRTF, --)

The preemptive version of the above: whenever the set of ready jobs changes, run the one with the shortest remaining time.

Highest Penalty Ratio Next (HPRN, HRN, **, **)

Run the process that has been ``hurt'' the most.
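
The usual way to quantify the ``hurt'' is a penalty (or response) ratio. One common formulation, used in the sketch below, is (time spent waiting + service time) / (service time); the exact definition varies from book to book, and the job names and times here are made up.

    def hprn_pick(jobs, now):
        # jobs: list of (name, arrival_time, expected_service_time).
        # Pick the job with the highest penalty ratio:
        #   (time waited so far + expected service) / expected service.
        def ratio(job):
            name, arrival, service = job
            return ((now - arrival) + service) / service
        return max(jobs, key=ratio)[0]

    # The short job that has been waiting beats the long one,
    # even though the long one arrived first.
    jobs = [("long", 0, 20), ("short", 2, 3)]
    print(hprn_pick(jobs, now=10))   # "short": (8+3)/3 = 3.67 vs (10+20)/20 = 1.5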

Multilevel Queues (**, **, MLQ, **)

Put different classes of processes in different queues.

Multilevel Feedback Queues (FB, MLFQ, MLFQ, MQ)

Many queues, and processes move from queue to queue in an attempt to dynamically separate ``batch-like'' from interactive processes so that we can favor the latter.
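
A rough sketch of the feedback idea (my own simplification, not the lab specification; the number of levels and the quanta are arbitrary): a job that uses up its whole quantum looks batch-like and is demoted to a lower, longer-quantum queue, while a job that gives up the CPU early stays near the top.

    from collections import deque

    class Job:
        # A toy CPU-only job; `need` is the total CPU time it still wants.
        def __init__(self, name, need):
            self.name, self.need = name, need
        def run(self, quantum):
            used = min(quantum, self.need)
            self.need -= used
            return used == quantum        # True if it used the whole quantum

    LEVELS = 3
    QUANTUM = [1, 2, 4]                   # lower levels get longer quanta
    queues = [deque() for _ in range(LEVELS)]

    def admit(job):
        queues[0].append(job)             # new jobs start at the top level

    def schedule_one():
        # Run one quantum from the highest non-empty queue.
        for level, q in enumerate(queues):
            if not q:
                continue
            job = q.popleft()
            used_whole_quantum = job.run(QUANTUM[level])
            if job.need == 0:
                return job.name + " finished"
            if used_whole_quantum:
                # Looks CPU-bound ("batch-like"): demote it one level.
                queues[min(level + 1, LEVELS - 1)].append(job)
            else:
                # Gave up the CPU early (a real interactive job would
                # block for I/O here): keep it at the same high level.
                queues[level].append(job)
            return job.name
        return None

    admit(Job("batch", 10))
    admit(Job("edit", 2))
    for _ in range(8):
        print(schedule_one())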

Theoretical Issues

Considerable theory has been developed.

Medium-Term Scheduling

In addition to the short-term scheduling we have discussed, we add medium-term scheduling in which decisions are made at a coarser time scale.

Long Term Scheduling

2.5.4: Scheduling in Real Time Systems

Skipped.

2.5.5: Policy versus Mechanism

Skipped.

2.5.6: Thread Scheduling

Skipped.

Research on Processes and Threads

Skipped.

Notes on lab (scheduling)

  1. If several processes are waiting on I/O, you may assume noninterference. For example, assume that on cycle 100 process A flips a coin and decides its wait is 6 units (i.e., A will be blocked during cycles 101-106). Assume B begins running at cycle 101 for a burst of 1 cycle, so during cycle 101 process B flips a coin and decides its wait is 3 units. You do NOT alter process A. That is, process A becomes ready after cycle 106 (100+6) and enters the ready list at cycle 107, while process B becomes ready after cycle 104 (101+3) and enters the ready list at cycle 105.

  2. For processor sharing (PS), which is part of the extra credit: every cycle you see how many jobs are ready (or running). Say there are 7. Then during this cycle (an exception is described below) each process gets 1/7 of a cycle.
    EXCEPTION: Assume there are exactly 2 ready jobs, one needing 1/3 cycle of CPU time and one needing 1/2 cycle. While both are running each gets half the CPU, so the process needing only 1/3 finishes after 2/3 of a cycle. By then the other process has also received 1/3 cycle of CPU time; it then gets the whole CPU and finishes 1/6 later, i.e., after 2/3 + 1/6 = 5/6 of a cycle. The last 1/6 of the cycle is not used by any process. (A small sketch of this computation appears just after these notes.)
End of Notes
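
Here is a small sketch (mine, not part of the lab handout) that reproduces the arithmetic in the PS exception above by advancing time from one job completion to the next: while n jobs are ready, each runs at speed 1/n.

    def ps_finish_times(needs):
        # needs: dict name -> CPU time required (all jobs ready at time 0).
        # Returns dict name -> finish time under ideal processor sharing.
        remaining = dict(needs)
        t = 0.0
        finish = {}
        while remaining:
            n = len(remaining)
            nxt = min(remaining, key=remaining.get)   # next job to finish
            span = remaining[nxt] * n                 # elapsed time until it does
            for name in remaining:
                remaining[name] -= span / n           # everyone runs at speed 1/n
            t += span
            finish[nxt] = t
            del remaining[nxt]
        return finish

    # The example from the note: needs of 1/3 and 1/2 of a cycle.
    print(ps_finish_times({"X": 1/3, "Y": 1/2}))      # X at 2/3, Y at 5/6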

Chapter 3: Deadlocks

A deadlock occurs when every member of a set of processes is waiting for an event that can only be caused by another member of the set.

Often the event waited for is the release of a resource.

In the automotive world deadlocks are called gridlocks.

Reward: One point extra credit on the final exam for anyone who brings a real (e.g., newspaper) picture of an automotive deadlock. You must bring the clipping to the final and it must be in good condition. Hand it in with your exam paper. Note that it must really be a gridlock, i.e., motion is not possible without breaking the traffic rules. A huge traffic jam is not sufficient.

For a computer science example consider two processes A and B that each want to print a file currently on tape.

  1. A has obtained ownership of the printer and will release it after printing one file.
  2. B has obtained ownership of the tape drive and will release it after reading one file.
  3. A tries to get ownership of the tape drive, but is told to wait for B to release it.
  4. B tries to get ownership of the printer, but is told to wait for A to release the printer.

Bingo: deadlock!
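
The same four-step structure can be reproduced in a few lines of code (purely illustrative; the lock names stand in for the two devices): each thread grabs one resource and then blocks forever waiting for the other's.

    import threading, time

    printer = threading.Lock()        # stands in for ownership of the printer
    tape = threading.Lock()           # stands in for ownership of the tape drive

    def process_a():
        with printer:                 # step 1: A owns the printer
            time.sleep(0.1)           # give B time to grab the tape drive
            with tape:                # step 3: A waits for B to release the tape
                print("A done")       # never reached

    def process_b():
        with tape:                    # step 2: B owns the tape drive
            time.sleep(0.1)
            with printer:             # step 4: B waits for A to release the printer
                print("B done")       # never reached

    threading.Thread(target=process_a, daemon=True).start()
    threading.Thread(target=process_b, daemon=True).start()
    time.sleep(1)
    print("Neither A nor B finished: deadlock.")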