NOTE: These notes are adapted from those of Allan Gottlieb, and are reproduced here with his permission.


================ Start Lecture #3 (Jan. 29)

Transfer of Control : Version 1

More details will be added when we study memory management, and more again when we study interrupts.

Procedure calls

Procedure f calls g(a,b,c) in process P.

Steps when f carries out the call:

1. Complete all previous instructions in f. Hence, the only registers relevant to the state of f are the stack pointer (SP) and the program counter (PC).

2. Push arguments c,b,a onto P's stack. Note: Stacks usually grow downward from the top of P's segment, so pushing an item onto the stack actually involves decrementing SP.

3. Execute PUSHJ < start-address of g >. This instruction pushes PC onto the stack, and then jumps to the start address of g.

4. The first step in g is to allocate space for its own local variables by suitably decrementing SP.

g now starts its execution from the beginning. This may involve calling other procedures, possibly including recursive calls to f.

Steps when g returns control to f:

5. At the end of g: Undo step (4), i.e., deallocate the local variables by incrementing SP.

6. Last step of g: POPJ, which has the effect PC = pop(stack).

7. We are now at the step in f immediately following the call to g. Pop the arguments a,b,c off the stack and continue the execution of f.
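The seven steps above can be traced with a toy simulation. The addresses, stack size, and the two "instructions" are illustrative, not any real ISA; the point is only that pushing decrements SP and that PUSHJ/POPJ save and restore PC.

```python
# Toy model of the call sequence: a downward-growing stack in a small
# "memory" array. All addresses here are invented for illustration.

STACK_TOP = 16
memory = [None] * STACK_TOP
sp = STACK_TOP          # stack pointer (stack grows downward)
pc = 100                # program counter (arbitrary address in f)

def push(value):
    global sp
    sp -= 1             # pushing decrements SP (the note in step 2)
    memory[sp] = value

def pop():
    global sp
    value = memory[sp]
    sp += 1
    return value

# Step 2: f pushes the arguments c, b, a.
for arg in ("c", "b", "a"):
    push(arg)

# Step 3: PUSHJ -- push the return address (PC), then jump to g.
push(pc)
pc = 200                # start address of g (illustrative)

# Step 4: g allocates two local variables by decrementing SP.
sp -= 2

# Steps 5-6: g deallocates its locals, then POPJ pops the return address.
sp += 2
pc = pop()

# Step 7: back in f; pop the three arguments.
for _ in range(3):
    pop()

print(pc, sp)           # back at f's return address, stack empty again
```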

Features of procedure call

System Call: non-blocking.

Function f in process P calls a non-blocking system routine.

Steps:

1. Procedure call to library routine in user space; just as above.

2. In the library routine: put the system call number in a specified location (e.g., a register).

3. Put PC in a specified location LPC (e.g., a register or the top of the user stack).

4. TRAP: Jump to a fixed location in the kernel and switch to kernel mode.

5. In the kernel: read the system call number and dispatch to the corresponding routine.

6. Execute body of system call. Kernel does not encounter any need to block P.

7. Jump to location saved in LPC (next step in library routine) and switch to user mode.

8. Library routine: Library procedure returns to f, as above.
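The non-blocking path can be sketched as follows. The syscall number, the routine name sys_getpid, and its return value are all invented; only the shape of the sequence (stub sets up registers, trap switches mode and dispatches, return switches back) follows the steps above.

```python
# Toy sketch of the non-blocking system-call path (steps 1-8 above).

registers = {"syscall_no": None, "LPC": None}
mode = "user"
log = []

def sys_getpid():                 # hypothetical non-blocking kernel routine
    log.append("kernel: ran getpid body")
    return 42

syscall_table = {7: sys_getpid}   # 7 is an invented syscall number

def trap():
    """Step 4: switch to kernel mode; steps 5-6: dispatch and run."""
    global mode
    mode = "kernel"
    handler = syscall_table[registers["syscall_no"]]
    return handler()

def library_getpid():
    """Steps 1-3: the user-space library stub."""
    global mode
    registers["syscall_no"] = 7           # step 2
    registers["LPC"] = "return-point"     # step 3
    result = trap()                       # step 4
    mode = "user"                         # step 7: back via LPC, user mode
    return result                         # step 8: return to f

pid = library_getpid()
print(pid, mode)
```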

Features:

System call: Blocking

Function f in process P calls a blocking system routine.

Steps when P makes call:

Steps 1-5 are the same as for a non-blocking system call. (Necessarily, as there is no way to know whether P will block until step 6.)

6. Execute body of system call. Kernel realizes that P has to block.

7. Kernel determines that process P is running from the corresponding flag in the kernel.

8. Save the next address for P from LPC into the process table entry (PTE) for P.

9. Save SP in PTE for P.

10. Save other dynamic information in PTE for P. Some of this may be in registers, some in kernel space, some in P's user space.

11. Mark P as blocked in PTE.

12. Call the scheduler to choose a ready process Q.

13. Load Q's dynamic information from PTE for Q into wherever it goes.

14. Mark Q as the running process.

15. Jump to the next address of Q as recorded in Q's PTE. Switch to user mode.

16. In Q: Continue.
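Steps 6-16 amount to a context switch, which can be sketched as a toy simulation. The field names in the process-table entries and all the addresses are invented; "PTE" means process table entry, as in the notes, and the scheduler here is the simplest possible one.

```python
# Toy context switch following steps 7-15 above.

process_table = {
    "P": {"state": "running", "next_addr": None,  "sp": None},
    "Q": {"state": "ready",   "next_addr": 0x300, "sp": 0x9000},
}
running = "P"

def scheduler():
    """Step 12: pick any ready process (simplest possible policy)."""
    return next(name for name, pte in process_table.items()
                if pte["state"] == "ready")

def block_current(lpc, sp):
    global running
    p = running                           # step 7: the running process
    process_table[p]["next_addr"] = lpc   # step 8: save next address
    process_table[p]["sp"] = sp           # step 9: save SP
    process_table[p]["state"] = "blocked" # step 11: mark P blocked
    q = scheduler()                       # step 12: choose a ready Q
    process_table[q]["state"] = "running" # steps 13-14: load and mark Q
    running = q
    # Step 15: "jump" to Q's saved next address.
    return process_table[q]["next_addr"], process_table[q]["sp"]

pc, sp = block_current(lpc=0x123, sp=0x7F00)
print(running, hex(pc), hex(sp))
```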

When P unblocks

When an event occurs to unblock P, mark P as ready.

When the scheduler chooses to run P

Carry out steps 12-15 as above for P.

16. We are at the next step of the library routine in P. Return to f.

Features:

Interrupts (including clock interrupts)

P is tootling along when suddenly an interrupt occurs.

Steps when interrupts occur

1. I/O device notifies interrupt controller.

2. Interrupt controller places the index of the device and event on the address lines.

3. Interrupt controller issues interrupt to CPU.

4. Hardware saves PC (the next address in P) in LPC and switches to kernel mode.

5. Hardware loads PC from the interrupt vector [index of device, event].

6. Assembly language procedure saves registers (expressing state of P) into PTE of P.

7. Assembly language procedure sets up new stack.

8. C interrupt service procedure runs.

Continue as in steps 7-16 of "call to blocking routine," except that P will generally be marked as "ready" rather than blocked.
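Steps 1-8 can be sketched as follows. The device name, the vector contents, and the handler are all invented; the point is the hardware's save-and-vector sequence.

```python
# Toy sketch of the interrupt path (steps 4-8 above): hardware saves PC,
# switches mode, and vectors through a table indexed by (device, event).

interrupt_vector = {("disk", "done"): "disk_done_handler"}
saved = {}
mode = "user"
log = []

def disk_done_handler():
    # Step 8: the C interrupt service procedure.
    log.append("C interrupt service procedure ran")

handlers = {"disk_done_handler": disk_done_handler}

def interrupt(device, event, pc):
    global mode
    # Step 4: hardware saves PC in LPC and switches to kernel mode.
    saved["LPC"] = pc
    mode = "kernel"
    # Step 5: hardware loads PC from interrupt_vector[device, event].
    entry = interrupt_vector[(device, event)]
    # Steps 6-8: (registers saved, stack set up,) handler runs.
    handlers[entry]()

interrupt("disk", "done", pc=0x456)
print(mode, hex(saved["LPC"]), log)
```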

When the scheduler decides to run P again

Same as in call to blocking routine, except that registers have to be reloaded from PTE for P.

Features:

Page faults

Very similar to interrupts, but activated by P itself, and always block P. We'll talk about these when we do memory management.

2.4: Process Scheduling

Scheduling processes on the processor is often called ``process scheduling'' or simply ``scheduling''.

The objectives of a good scheduling policy include

Recall the basic diagram describing process states

For now we are discussing short-term scheduling, i.e., the arcs connecting running <--> ready.

Medium term scheduling is discussed later.

Preemption

It is important to distinguish preemptive from non-preemptive scheduling algorithms.

Deadline scheduling

This is used for real-time systems. The objective of the scheduler is to find a schedule for all the tasks (there is a fixed set of tasks) so that each meets its deadline. The run time of each task is known in advance.

Actually it is more complicated.

We do not cover deadline scheduling in this course.

The name game

There is an amazing inconsistency in naming the different (short-term) scheduling algorithms. Over the years I have used primarily 4 books: In chronological order they are Finkel, Deitel, Silberschatz, and Tanenbaum. The table just below illustrates the name game for these four books. After the table we discuss each scheduling policy in turn.

Finkel  Deitel  Silberschatz  Tanenbaum
---------------------------------------
FCFS    FIFO    FCFS          --    unnamed in Tanenbaum
RR      RR      RR            RR
PS      **      PS            PS
SRR     **      SRR           **    not in Tanenbaum
SPN     SJF     SJF           SJF
PSPN    SRT     PSJF/SRTF     --    unnamed in Tanenbaum
HPRN    HRN     **            **    not in Tanenbaum
**      **      MLQ           **    only in Silberschatz
FB      MLFQ    MLFQ          MQ

First Come First Served (FCFS, FIFO, FCFS, --)

Even if the OS ``doesn't'' schedule, it still needs to store the PTEs somewhere. If it uses a queue, you get FCFS. If it uses a stack (strange), you get LCFS. Perhaps you could get some sort of random policy as well.
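The queue-versus-stack remark can be made concrete in a few lines (the process names are illustrative):

```python
from collections import deque

# FCFS: the "scheduler" is just the arrival order of the ready queue.
ready = deque()
for p in ("P1", "P2", "P3"):
    ready.append(p)          # arrival order

run_order = [ready.popleft() for _ in range(len(ready))]
print(run_order)             # same as arrival order: that is FCFS
# Using a stack (pop the most recent arrival) would give LCFS instead.
```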

Round Robin (RR, RR, RR, RR)
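As a rough sketch of RR: each process runs for at most one quantum and, if not finished, goes to the back of the ready queue. The burst times and quantum below are illustrative.

```python
from collections import deque

# Round robin: run for one quantum, requeue if work remains.
QUANTUM = 2
remaining = {"P1": 5, "P2": 2, "P3": 3}   # illustrative burst times
ready = deque(remaining)
timeline = []                             # (process, time actually run)

while ready:
    p = ready.popleft()
    slice_ = min(QUANTUM, remaining[p])
    timeline.append((p, slice_))
    remaining[p] -= slice_
    if remaining[p] > 0:
        ready.append(p)                   # preempted: back of the queue

print(timeline)
```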

Processor Sharing (PS, **, PS, PS)

Merge the ready and running states and permit all ready jobs to be run at once. However, the processor slows down so that when n jobs are running at once each progresses at a speed 1/n as fast as it would if it were running alone.
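The 1/n rate can be checked with a small time-stepped simulation (the work amounts are illustrative): with two jobs sharing, each progresses at rate 1/2 until one finishes, after which the survivor runs at full speed.

```python
# Processor sharing: with n active jobs, each advances at rate 1/n.
work = {"A": 1.0, "B": 2.0}   # remaining work, in seconds of dedicated CPU
t, dt = 0.0, 0.001
finish = {}

while work:
    n = len(work)
    for job in list(work):
        work[job] -= dt / n            # each job gets a 1/n share
        if work[job] <= 1e-9:
            finish[job] = round(t + dt, 3)
            del work[job]
    t += dt

print(finish)   # A finishes at t=2 (shared), B at t=3 (alone after A leaves)
```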

Variants of Round Robin

Priority Scheduling

Each job is assigned a priority (externally, perhaps by charging more for higher priority) and the highest priority ready job is run.

Priority aging

As a job is waiting, raise its priority so eventually it will have the maximum priority.
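A minimal sketch of aging (priorities, jobs, and the aging step are all illustrative; higher number = higher priority): even a low-priority job eventually outranks a high-priority one that keeps running.

```python
# Priority aging: every tick, raise the priority of each job that waited.
jobs = {"big": 10, "small": 1}    # illustrative priorities
AGE_STEP = 1
schedule = []

for tick in range(12):
    run = max(jobs, key=jobs.get)  # run the highest-priority ready job
    schedule.append(run)
    for j in jobs:
        if j != run:
            jobs[j] += AGE_STEP    # everyone who waited gets older/higher

print(schedule)   # "small" eventually gets a turn despite starting at 1
```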

Selfish RR (SRR, **, SRR, **)

Shortest Job First (SPN, SJF, SJF, SJF)

Sort jobs by total execution time needed and run the shortest first.
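With all jobs present at once, SJF is just a sort; the burst times below are illustrative. Running the shortest first minimizes the average waiting time, since each job's wait is the sum of the bursts scheduled before it.

```python
# SJF: run the job with the smallest total time first.
jobs = {"P1": 6, "P2": 8, "P3": 7, "P4": 3}   # illustrative burst times
order = sorted(jobs, key=jobs.get)

wait, elapsed = {}, 0
for p in order:
    wait[p] = elapsed       # each job waits for everything before it
    elapsed += jobs[p]

print(order, sum(wait.values()) / len(wait))
```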

Preemptive Shortest Job First (PSPN, SRT, PSJF/SRTF, --)

The preemptive version of SJF: whenever a job arrives whose run time is shorter than the remaining time of the running job, preempt.
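A sketch of shortest-remaining-time scheduling with staggered arrivals (the arrival and burst times are illustrative): on each arrival the job with the least remaining time runs.

```python
import heapq

# Preemptive SJF / SRT: always run the job with the least remaining time.
arrivals = [(0, "P1", 8), (1, "P2", 4), (2, "P3", 2)]  # (time, name, burst)
heap = []            # (remaining_time, name) -- heap orders by remaining time
t, i = 0, 0
finish = {}

while i < len(arrivals) or heap:
    if i < len(arrivals) and (not heap or arrivals[i][0] <= t):
        at, name, burst = arrivals[i]
        t = max(t, at)
        heapq.heappush(heap, (burst, name))
        i += 1
        continue
    # Run the shortest remaining job until the next arrival or completion.
    rem, name = heapq.heappop(heap)
    horizon = arrivals[i][0] if i < len(arrivals) else t + rem
    ran = min(rem, horizon - t)
    t += ran
    if ran < rem:
        heapq.heappush(heap, (rem - ran, name))   # preempted
    else:
        finish[name] = t

print(finish)   # shorter late arrivals finish before the long early job
```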