================ Start Lecture #6 ================

You may use C++ for the labs.

The email addresses for the three e-tutors are

Robert Szarek szar9908@cs.nyu.edu
Aldo J Nunez ajn203@omicron.acf.nyu.edu
Franqueli Mendez fm201@omicron.acf.nyu.edu

Dining Philosophers

A classical problem from Dijkstra

What algorithm do you use for access to the shared resource (the forks)?

The point of mentioning this without giving the solution is to give a feel for what coordination problems are like. The book gives others as well. We are skipping these (again, this material would be covered in a sequel course). If you are interested, look, for example, at http://allan.ultra.nyu.edu/~gottlieb/courses/1997-98-spring/os/class-notes.html
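To give a flavor of what a solution looks like (this is one classical approach, not the only one, and it is not part of this course's material): deadlock can be avoided by ordering the forks and having every philosopher pick up the lower-numbered fork first. A minimal sketch in Python, with a made-up number of rounds:

```python
import threading

N = 5                            # five philosophers, five forks
forks = [threading.Lock() for _ in range(N)]
meals = [0] * N                  # how many times each philosopher has eaten

def philosopher(i, rounds=10):
    # Deadlock avoidance: always acquire the lower-numbered fork first,
    # so no cycle of philosophers each holding one fork can form.
    left, right = i, (i + 1) % N
    first, second = (left, right) if left < right else (right, left)
    for _ in range(rounds):
        with forks[first]:
            with forks[second]:
                meals[i] += 1    # "eat" while holding both forks

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(meals)
```

Every philosopher finishes all its rounds; with the naive "left fork first for everybody" rule, the program could instead deadlock with each philosopher holding one fork.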

Homework: 14,15 (these have short answers but are not easy).

Readers and writers

Quite useful in multiprocessor operating systems. The ``easy way out'' is to treat all processes as writers in which case the problem reduces to mutual exclusion (P and V). The disadvantage of the easy way out is that you give up reader concurrency. Again for more information see the web page referenced above.
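For contrast with the easy way out, here is a sketch of one classical readers-writers solution (the details are my illustration, in the spirit of the notes referenced above): a mutex protects a reader count; the first reader in locks writers out, and the last reader out lets them back in.

```python
import threading

room = threading.Lock()       # held by one writer, or collectively by readers
mutex = threading.Lock()      # protects the reader count
readers = 0

def start_read():
    global readers
    with mutex:
        readers += 1
        if readers == 1:      # first reader locks out writers
            room.acquire()

def end_read():
    global readers
    with mutex:
        readers -= 1
        if readers == 0:      # last reader lets writers back in
            room.release()

def write(update):
    with room:                # a writer gets the room exclusively
        update()

# demo: two concurrent readers, then a write
start_read(); start_read()
assert readers == 2           # readers proceed concurrently
end_read(); end_read()
write_done = []
write(lambda: write_done.append(True))
```

Note the design trade-off: this version recovers reader concurrency, but a steady stream of readers can starve writers; fancier versions fix that.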

2.4: Process Scheduling

Scheduling the processor is often called ``process scheduling'' or simply ``scheduling''.

The objectives of a good scheduling policy include

Recall the basic diagram describing process states

For now we are discussing short-term scheduling running <--> ready.

Medium term scheduling is discussed later.

Preemption

It is important to distinguish preemptive from non-preemptive scheduling algorithms.

Deadline scheduling

This is used for real-time systems. The objective of the scheduler is to find a schedule for all the tasks (there is a fixed set of tasks) so that each meets its deadline. The run time of each task is known in advance.
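For the simplest version of this model (one processor, every task available at time zero, known run times), running tasks in earliest-deadline-first order works: if the EDF order misses a deadline, no order meets them all. A sketch, with hypothetical task data:

```python
# Each task is (run_time, deadline); all tasks are ready at time 0.
# Earliest deadline first: run tasks in order of increasing deadline
# and check that each finishes by its deadline.

def edf_feasible(tasks):
    time = 0
    for run, deadline in sorted(tasks, key=lambda t: t[1]):
        time += run
        if time > deadline:
            return False
    return True

# made-up task sets for illustration
print(edf_feasible([(2, 3), (1, 5), (3, 9)]))   # schedulable
print(edf_feasible([(2, 2), (2, 3)]))           # second task misses
```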

Actually it is more complicated.

We do not cover deadline scheduling in this course.

The name game

There is an amazing inconsistency in naming the different (short-term) scheduling algorithms. Over the years I have used primarily 4 books: In chronological order they are Finkel, Deitel, Silberschatz, and Tanenbaum. The table just below illustrates the name game for these four books. After the table we discuss each scheduling policy in turn.

Finkel  Deitel  Silberschatz  Tanenbaum
---------------------------------------
FCFS    FIFO    FCFS          --        unnamed in Tanenbaum
RR      RR      RR            RR
PS      **      PS            PS
SRR     **      SRR           **        not in Tanenbaum
SPN     SJF     SJF           SJF
PSPN    SRT     PSJF/SRTF     --        unnamed in Tanenbaum
HPRN    HRN     **            **        not in Tanenbaum
**      **      MLQ           **        only in Silberschatz
FB      MLFQ    MLFQ          MQ

First Come First Served (FCFS, FIFO, FCFS, --)

If you ``don't'' schedule, you still have to store the PTEs (process table entries) somewhere. If they are kept in a queue you get FCFS. If they are kept in a stack (strange), you get LCFS. Perhaps you could get some sort of random policy as well.
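A tiny sketch (my illustration, not from the text) of FCFS for jobs that all arrive at time zero: serve them in list order and record when each finishes. With a stack instead of a queue, serving the same jobs last-in-first-out would give LCFS.

```python
# FCFS with all jobs present at time 0: run them to completion in
# arrival (list) order; return each job's finishing time.

def fcfs_finish_times(run_times):
    time, finish = 0, []
    for run in run_times:
        time += run
        finish.append(time)
    return finish

# hypothetical run times for illustration
print(fcfs_finish_times([3, 1, 2]))   # [3, 4, 6]
```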

Round Robin (RR, RR, RR, RR)

Homework: 9, 19, 20, 21 (not assigned until lecture 7)

Priority Scheduling

Each job is assigned a priority (externally, perhaps by charging more for higher priority) and the highest priority ready job is run.
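A nonpreemptive sketch using a heap (my illustration; the convention that a lower number means higher priority, and that all jobs are ready at time zero, are assumptions of the sketch):

```python
import heapq

# Nonpreemptive priority scheduling: always run the highest-priority
# ready job to completion. Lower number = higher priority here.

def priority_order(jobs):
    # jobs: list of (priority, name); returns the names in the order run
    heap = list(jobs)
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]

# made-up jobs for illustration
print(priority_order([(3, "c"), (1, "a"), (2, "b")]))   # ['a', 'b', 'c']
```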

Processor Sharing (PS, **, PS, PS)

Merge the ready and running states and permit all ready jobs to be run at once. However, the processor slows down so that when n jobs are running at once, each progresses 1/n as fast as if it were running alone.
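The 1/n slowdown makes PS completion times easy to compute for jobs that all arrive at time zero. A sketch (my illustration): while k jobs remain, the one with the least remaining work r finishes after k*r more wall-clock time, and every survivor has received r units of service.

```python
# Processor sharing, all jobs arriving at time 0: with k jobs left, each
# gets 1/k of the processor, so the job with least remaining work r
# finishes after k*r more wall-clock time.

def ps_finish_times(run_times):
    remaining = sorted((r, i) for i, r in enumerate(run_times))
    finish = [0.0] * len(run_times)
    time, done = 0.0, 0.0    # `done` = service already given to each survivor
    k = len(remaining)
    for r, i in remaining:
        time += k * (r - done)   # k jobs share the CPU until job i finishes
        done = r
        finish[i] = time
        k -= 1
    return finish

# made-up run times for illustration
print(ps_finish_times([1, 2, 3]))   # [3.0, 5.0, 6.0]
```

Note that the shortest job (run time 1) finishes at time 3, three times its stand-alone run time, since it shared the processor with two other jobs throughout.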

Homework: 18.