Start of Lecture 3

Producer-consumer problem

initially e=k, f=0 (counting semaphores); b=open (binary semaphore)

Producer                         Consumer

loop forever                     loop forever
    produce-item                     P(f)
    P(e)                             P(b); take item from buf; V(b)
    P(b); add item to buf; V(b)      V(e)
    V(f)                             consume-item
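
Below is one way the pseudocode above might be rendered in C with pthreads and POSIX semaphores. The buffer size K, the use of int items, and the fixed loop counts are illustrative assumptions, not part of the pseudocode.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define K 10                     /* buffer capacity (the k above) */

static int buf[K];
static int in = 0, out = 0;      /* circular-buffer indices */

static sem_t e;                  /* counts empty slots, initially K */
static sem_t f;                  /* counts full slots,  initially 0 */
static sem_t b;                  /* binary semaphore guarding the buffer */

static void *producer(void *arg) {
    (void)arg;
    for (int item = 0; item < 100; item++) {   /* produce-item */
        sem_wait(&e);                          /* P(e) */
        sem_wait(&b);                          /* P(b) */
        buf[in] = item;                        /* add item to buf */
        in = (in + 1) % K;
        sem_post(&b);                          /* V(b) */
        sem_post(&f);                          /* V(f) */
    }
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < 100; i++) {
        sem_wait(&f);                          /* P(f) */
        sem_wait(&b);                          /* P(b) */
        int item = buf[out];                   /* take item from buf */
        out = (out + 1) % K;
        sem_post(&b);                          /* V(b) */
        sem_post(&e);                          /* V(e) */
        printf("consumed %d\n", item);         /* consume-item */
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&e, 0, K);          /* e = k */
    sem_init(&f, 0, 0);          /* f = 0 */
    sem_init(&b, 0, 1);          /* b = open */
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}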

Dining Philosophers

A classical problem from Dijkstra

What algorithm do you use for access to the shared resource (the forks)?

The point of mentioning this without giving the solution is to give a feel for what coordination problems are like. The book gives other classical problems as well. We are skipping these (they may appear in the 2nd semester, depending on the instructor).

Homework: 14,15

Readers and writers

Quite useful in multiprocessor operating systems. The ``easy way out'' is to treat all as writers (i.e., give up reader concurrency).
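
A tiny sketch of that ``easy way out'' in C: one binary semaphore protects every access, so readers serialize just like writers. The data and function names here are made up for illustration.

#include <semaphore.h>
#include <stdio.h>

static sem_t mutex;              /* binary semaphore: 1 = open */
static int shared_data;

int read_data(void) {            /* a reader, treated exactly like a writer */
    sem_wait(&mutex);
    int v = shared_data;
    sem_post(&mutex);
    return v;
}

void write_data(int v) {         /* a writer */
    sem_wait(&mutex);
    shared_data = v;
    sem_post(&mutex);
}

int main(void) {
    sem_init(&mutex, 0, 1);
    write_data(42);
    printf("%d\n", read_data());
    return 0;
}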

2.4: Process Scheduling

Scheduling the processor is often called just scheduling or process scheduling.

The objectives of a good scheduling policy include fairness, efficiency (keeping the processor busy), low response time for interactive jobs, low turnaround time for batch jobs, and high throughput.

Recall the basic diagram about process states

For now we are discussing short-term scheduling, i.e., the running <--> ready transitions.

Medium term scheduling is discussed a little later.

Preemption

This is an important distinction: a preemptive scheduler can take the processor away from a running process (e.g., when a clock interrupt ends its quantum), whereas a non-preemptive (run-to-completion) scheduler lets a process keep the processor until it blocks or terminates.

Deadline scheduling

This is used for real-time systems. The objective of the scheduler is to find a schedule for all the tasks (there is a fixed set of tasks) so that each meets its deadline. You know how long each task executes.

Actually it is more complicated.

We do not cover deadline scheduling in this course.

The name game

There is an amazing inconsistency in naming the different (short-term) scheduling algorithms. Over the years I have used primarily four books; in chronological order they are Finkel, Deitel, Silberschatz, and Tanenbaum. The table just below illustrates the name game for these four books. After the table we discuss each scheduling policy in turn.

Finkel  Deitel  Silberschatz  Tanenbaum
---------------------------------------
FCFS    FIFO    FCFS          --    unnamed in Tanenbaum
RR      RR      RR            RR
SRR     **      SRR           **    not in Tanenbaum
PS      **      PS            PS
SPN     SJF     SJF           SJF
PSPN    SRT     PSJF/SRTF     --    unnamed in Tanenbaum
HPRN    HRN     **            **    not in Tanenbaum
**      **      MLQ           **    only in Silberschatz
FB      MLFQ    MLFQ          MQ

First Come First Served (FCFS, FIFO, FCFS, --)

If you ``don't'' schedule, you still have to store the process table entries (PTEs) somewhere. If it is a queue you get FCFS. If it is a stack (strange), you get LCFS. Perhaps you could get some sort of random policy as well.
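
A small sketch of FCFS in C: jobs are served strictly in arrival order. The burst times are made-up numbers and all jobs are assumed to arrive at time 0.

#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};           /* CPU time each job needs */
    int n = 3;
    int finish = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {       /* serve in arrival order */
        total_wait += finish;           /* job i waits until all earlier jobs finish */
        finish += burst[i];
        printf("job %d finishes at %d\n", i, finish);
    }
    printf("average wait = %.2f\n", (double)total_wait / n);   /* 17.00 here */
    return 0;
}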

Round Robin (RR, RR, RR, RR)

Homework: 9, 19, 20, 21
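
A small sketch of round robin in C: cycle through the ready jobs, running each for at most one quantum. The quantum and burst times are made-up numbers.

#include <stdio.h>

#define Q 4                             /* time quantum */

int main(void) {
    int remaining[] = {24, 3, 3};       /* CPU time still needed by each job */
    int n = 3, done = 0, clock = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {   /* visit the ready jobs in cyclic order */
            if (remaining[i] == 0) continue;
            int run = remaining[i] < Q ? remaining[i] : Q;
            clock += run;
            remaining[i] -= run;
            if (remaining[i] == 0) {
                done++;
                printf("job %d finishes at %d\n", i, clock);
            }
        }
    }
    return 0;
}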

Selfish RR (SRR, **, SRR, **)

Processor Sharing (PS, **, PS, PS)

All n processes are running, each on a processor 1/n as fast as the real processor. For example, a job needing 6 seconds of CPU time finishes after 18 seconds of wall-clock time if two other jobs are present the whole time.

Homework: 18.

Shortest Job First (SPN, SJF, SJF, SJF)

Sort jobs by total execution time needed and run the shortest first.
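
A small sketch in C, using the same made-up burst times as the FCFS sketch above: sorting by burst time before serving drops the average wait from 17 to 3.

#include <stdio.h>
#include <stdlib.h>

static int cmp(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;
}

int main(void) {
    int burst[] = {24, 3, 3};
    int n = 3, finish = 0, total_wait = 0;

    qsort(burst, n, sizeof burst[0], cmp);   /* shortest job first */
    for (int i = 0; i < n; i++) {
        total_wait += finish;
        finish += burst[i];
    }
    printf("average wait = %.2f\n", (double)total_wait / n);   /* 3.00 vs 17.00 for FCFS */
    return 0;
}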

Preemptive Shortest Job First (PSPN, SRT, PSJF/SRTF, --)

Preemptive version of the above: if a job arrives whose service time is less than the remaining time of the running job, preempt the running job.

Priority aging

As a job waits, raise its priority; when it is time to choose, pick the job with the highest priority.

Homework: 22, 23
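
A sketch of the aging idea in C: each waiting job's effective priority grows with its waiting time, and the scheduler picks the largest. The base priorities, the aging rate of one unit per tick, and the rule that a job's accumulated age is cleared when it runs are all illustrative choices.

#include <stdio.h>

#define N 3

int main(void) {
    int base[N]   = {5, 1, 3};    /* static base priorities (higher = better) */
    int waited[N] = {0, 0, 0};    /* ticks each job has waited since it last ran */

    for (int tick = 0; tick < 10; tick++) {
        int best = 0;
        for (int i = 1; i < N; i++)
            /* effective priority = base + age accumulated while waiting */
            if (base[i] + waited[i] > base[best] + waited[best])
                best = i;
        printf("tick %d: run job %d\n", tick, best);
        waited[best] = 0;                 /* winner's age is cleared (one possible rule) */
        for (int i = 0; i < N; i++)
            if (i != best) waited[i]++;   /* everyone else ages */
    }
    return 0;
}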

Highest Penalty Ratio Next (HPRN, HRN, **, **)

Run the job that has been ``hurt'' the most, i.e., the one with the highest penalty ratio (roughly, time spent in the system divided by the running time it needs).

Multilevel Queues (**, **, MLQ, **)

Put different classes of jobs (e.g., system, interactive, batch) in different queues; a job stays in the queue it is assigned to (contrast with the feedback queues below).

Multilevel Feedback Queues (FB, MLFQ, MLFQ, MQ)

Many queues and jobs move from queue to queue in an attempt to dynamically separate ``batch-like'' from interactive jobs.
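
One common (simplified) feedback rule, sketched in C: a job that uses its whole quantum looks batch-like and is demoted; a job that blocks for I/O early looks interactive and is promoted. The number of queues and the function names are made up; real algorithms (and the book's) differ in detail.

#include <stdio.h>

#define NQUEUES 4

struct job {
    int queue;                   /* 0 = highest priority, NQUEUES-1 = lowest */
};

/* Quantum expired: the job behaved like a batch job, so demote it. */
static void on_quantum_expired(struct job *j) {
    if (j->queue < NQUEUES - 1)
        j->queue++;
}

/* Blocked for I/O before the quantum ended: looks interactive, promote it. */
static void on_blocked_for_io(struct job *j) {
    if (j->queue > 0)
        j->queue--;
}

int main(void) {
    struct job j = { .queue = 1 };
    on_quantum_expired(&j);      /* CPU-bound behavior pushes it down */
    on_quantum_expired(&j);
    on_blocked_for_io(&j);       /* an I/O burst pulls it back up */
    printf("job ends up in queue %d\n", j.queue);
    return 0;
}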

Theoretical Issues

Much theory has been developed (NP-completeness results abound).
Queuing theory has been developed to predict performance.

Medium Term Scheduling

Decisions are made at a coarser time scale: processes are suspended (swapped out) and later resumed, which controls the degree of multiprogramming.

Long Term Scheduling