Operating Systems

Start Lecture #7

Possible Methods of Dealing With Blocking System Calls

Relevance to Multiprocessors/Multicore

For a uniprocessor, which is all we are officially considering, there is little gain in splitting pure computation into pieces: if the CPU is to be active all the time anyway, it is simpler to just have one (unithreaded) process.

But this changes for multiprocessors/multicores. Now it is very useful to split the computation into threads and have each execute on a separate processor/core. In this case, user-mode threads are wonderful: there are no system calls, and their extremely low overhead is a real benefit.

However, there are serious issues involved in programming applications for this environment.

2.2.4 Implementing Threads in the Kernel

One can move the thread operations into the operating system itself. This naturally requires that the kernel be (significantly) modified and is thus not a trivial undertaking.

2.2.5 Hybrid Implementations

One can write a (user-level) thread library even if the kernel also has threads. This is sometimes called the N:M model since N user-mode threads run on M kernel threads. In this scheme, the kernel threads cooperate to execute the user-level threads.

An offshoot of the N:M terminology is that kernel-level threading (without user-level threading) is sometimes referred to as the 1:1 model since one can think of each thread as being a user-level thread executed by a dedicated kernel-level thread.
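
As a very rough sketch of the N:M idea, consider M kernel threads cooperating to run N user-level units of work from a shared run queue. This is essentially just a thread pool written with POSIX threads; real implementations must also handle blocking, priorities, and per-thread stacks, so treat it only as an illustration of the cooperation:

    #include <pthread.h>
    #include <stdio.h>

    #define N 8   /* user-level units of work ("user threads") */
    #define M 3   /* kernel threads that execute them */

    /* A trivial run queue: user-level "threads" are just numbered tasks.
       A real user-level thread would carry a stack and saved registers. */
    static int next_task = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void run_user_thread(int id, long kernel_thread) {
        printf("user thread %d run by kernel thread %ld\n", id, kernel_thread);
    }

    /* Each kernel thread repeatedly grabs the next runnable user-level
       task; this cooperation is the essence of the N:M model. */
    static void *worker(void *arg) {
        long me = (long)arg;
        for (;;) {
            pthread_mutex_lock(&lock);
            int task = next_task < N ? next_task++ : -1;
            pthread_mutex_unlock(&lock);
            if (task < 0) return NULL;   /* nothing left to run */
            run_user_thread(task, me);
        }
    }

    int main(void) {
        pthread_t kt[M];
        for (long i = 0; i < M; i++)
            pthread_create(&kt[i], NULL, worker, (void *)i);
        for (int i = 0; i < M; i++)
            pthread_join(kt[i], NULL);
        return 0;
    }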

Homework: 12, 14.

2.2.6 Scheduler Activations

Skipped

2.2.7 Popup Threads

The idea is to automatically issue a thread-create system call upon message arrival. (The alternative is to have a thread or process blocked on a receive system call.) If implemented well, the latency between message arrival and thread execution can be very small since the new thread does not have state to restore.
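
As a sketch of the mechanism (the message type and the simulated blocking receive below are invented for illustration; a real system would receive from the network or an IPC port), a dispatcher using POSIX threads might look like:

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    typedef struct { int id; } msg_t;      /* hypothetical message */

    /* Simulated blocking receive; a stand-in for the real one. */
    static msg_t *receive_message(void) {
        static int next_id = 0;
        sleep(1);                          /* pretend to wait for arrival */
        msg_t *m = malloc(sizeof *m);
        m->id = next_id++;
        return m;
    }

    /* The popup thread: it starts fresh, with no saved state to
       restore, which is why the arrival-to-execution latency is low. */
    static void *handle_message(void *arg) {
        msg_t *m = arg;
        printf("handling message %d\n", m->id);
        free(m);                           /* the handler owns the message */
        return NULL;
    }

    int main(void) {
        for (;;) {                         /* dispatcher loop */
            msg_t *m = receive_message();
            pthread_t tid;
            if (pthread_create(&tid, NULL, handle_message, m) != 0) {
                free(m);                   /* creation failed; drop it */
                continue;
            }
            pthread_detach(tid);           /* fire and forget; no join */
        }
    }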

Making Single-threaded Code Multithreaded

Definitely NOT for the faint of heart.

Remark: Since the subject matter of lab 2 will be processor scheduling, we will do 2.4 before 2.3.

2.4 Process Scheduling

Scheduling processes on the processor is often called processor scheduling or process scheduling or simply scheduling. As we shall see later in the course, a more descriptive name would be short-term, processor scheduling.

For now we are discussing the arcs connecting running↔ready in the diagram on the right showing the various states of a process. Medium-term scheduling is discussed later, as is disk-arm scheduling.

Naturally, the part of the OS responsible for (short-term, processor) scheduling is called the (short-term, processor) scheduler and the algorithm used is called the (short-term, processor) scheduling algorithm.

2.4.1 Introduction to Scheduling

Importance of Scheduling for Various Generations and Circumstances

Early computer systems were monoprogrammed and, as a result, scheduling was a non-issue.

For many current personal computers, which are definitely multiprogrammed, there is in fact very rarely more than one runnable process. As a result, scheduling is not critical.

For servers (or old mainframes), scheduling is indeed important and these are the systems you should think of.

Process Behavior

Processes alternate CPU bursts with I/O activity, as we shall see in lab 2. The key distinguishing factor between compute-bound (aka CPU-bound) and I/O-bound jobs is the length of the CPU bursts.

The trend over the past decade or two has been for more and more jobs to become I/O-bound since the CPU rates have increased faster than the I/O rates.

When to Schedule

An obvious point, which is often forgotten (I don't think 3e mentions it), is that the scheduler cannot run when the OS is not running. In particular, for the uniprocessor systems we are considering, no scheduling can occur when a user process is running. (In the multiprocessor situation, no scheduling can occur when all processors are running user jobs.)

Again we refer to the state transition diagram above.

  1. Process creation.
    The running process has issued a fork() system call and hence the OS runs; thus scheduling is possible. Scheduling is also desirable at this time since the scheduling algorithm might favor the new process.
  2. Process termination.
    The exit() system call has again transferred control to the OS so scheduling is possible. Moreover, scheduling is necessary since the previously running process has terminated.
  3. Process blocks.
    Same as termination.
  4. Interrupt received.
    Since the OS takes control, scheduling is possible. When an I/O interrupt occurs, this normally means that a blocked process is now ready and, with a new candidate for running, scheduling is desirable.
  5. Clock interrupt received.
    This case is treated next, when we discuss preemption and the dotted arc in the process state diagram.

Preemption

It is important to distinguish preemptive from non-preemptive scheduling algorithms. A preemptive scheduler can remove a process from the processor even though that process has neither blocked nor terminated; this requires clock interrupts, since otherwise the OS would never regain control from a process that computes indefinitely.

Categories of Scheduling Algorithms

We distinguish three categories of scheduling algorithms with regard to the importance of preemption.

  1. Batch.
  2. Interactive.
  3. Real Time.

For multiprogrammed batch systems (we don't consider uniprogrammed systems, which don't need schedulers), the primary concern is efficiency. Since no user is waiting at a terminal, preemption is not crucial and, if it is used, each process is given a long time period before being preempted.

For interactive systems (and multiuser servers), preemption is crucial for fairness and rapid response time to short requests.

We don't study real time systems in this course, but can say that preemption is typically not important since all the processes are cooperating and are programmed to do their task in a prescribed time window.

Scheduling Algorithm Goals

There are numerous objectives, several of which conflict, that a scheduler tries to achieve. These include:

  1. Fairness.
    Treating users fairly, which must be balanced against ...

  2. Respecting priority.
    That is, giving more important processes higher priority. For example, if my laptop is trying to fold proteins in the background, I don't want that activity to appreciably slow down my compiles and especially don't want it to make my system seem sluggish when I am modifying these class notes. In general, interactive jobs should have higher priority.

  3. Efficiency.
    This has two aspects.
  4. Low turnaround time.
    That is, minimize the time from the submission of a job to its termination. This is important for batch jobs.

  5. High throughput.
    That is, maximize the number of jobs completed per day. This is not quite the same as minimizing the (average) turnaround time, as we shall see when we discuss shortest job first; the sketch just after this list makes both metrics concrete.

  6. Low response time.
    That is, minimize the time from when an interactive user issues a command to when the response is given. This is very important for interactive jobs.

  7. Repeatability. Dartmouth (DTSS) wasted cycles and limited logins for repeatability.

  8. Degrade gracefully under load.
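
To make the turnaround and throughput definitions concrete, here is a minimal sketch (the arrival and finish times are made up) that computes both for a set of completed jobs:

    #include <stdio.h>

    /* A completed job; all times are in seconds from system start. */
    struct job { double arrival, finish; };

    int main(void) {
        struct job jobs[] = { {0, 10}, {2, 4}, {3, 19}, {5, 7} };
        int n = sizeof jobs / sizeof jobs[0];

        double total_turnaround = 0, last_finish = 0;
        for (int i = 0; i < n; i++) {
            total_turnaround += jobs[i].finish - jobs[i].arrival;
            if (jobs[i].finish > last_finish)
                last_finish = jobs[i].finish;
        }
        /* Turnaround: submission to termination, averaged over jobs. */
        printf("average turnaround: %.2f s\n", total_turnaround / n);
        /* Throughput: jobs completed per unit time. */
        printf("throughput: %.3f jobs/s\n", n / last_finish);
        return 0;
    }

Note that the two can conflict: finishing many short jobs quickly raises throughput, yet may lengthen the turnaround time of a long job that keeps getting passed over.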

Deadline scheduling

This is used for real time systems. The objective of the scheduler is to find a schedule for all the tasks (there is a fixed set of tasks) so that each meets its deadline. The run time of each task is known in advance.
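
One classical approach, not developed further in these notes, is earliest-deadline-first (EDF): always run the ready task whose deadline is nearest. A minimal sketch, with invented task data and all tasks released at time 0:

    #include <stdio.h>

    /* A task with a known run time and a deadline (invented numbers). */
    struct task { const char *name; double runtime, deadline; };

    int main(void) {
        struct task t[] = { {"A", 2, 5}, {"B", 1, 2}, {"C", 3, 10} };
        int n = sizeof t / sizeof t[0];

        /* Sort by deadline, earliest first (insertion sort is fine). */
        for (int i = 1; i < n; i++)
            for (int j = i; j > 0 && t[j].deadline < t[j-1].deadline; j--) {
                struct task tmp = t[j]; t[j] = t[j-1]; t[j-1] = tmp;
            }

        /* Run each task to completion in deadline order and check it. */
        double clock = 0;
        for (int i = 0; i < n; i++) {
            clock += t[i].runtime;
            printf("%s finishes at %.1f (deadline %.1f)%s\n",
                   t[i].name, clock, t[i].deadline,
                   clock <= t[i].deadline ? "" : "  ** MISSED **");
        }
        return 0;
    }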

Actually it is more complicated.

The Name Game

There is an amazing inconsistency in naming the different (short-term) scheduling algorithms. Over the years I have used primarily four books: in chronological order they are Finkel, Deitel, Silberschatz, and Tanenbaum. The table just below illustrates the name game for these four books. After the table we discuss several scheduling policies in some detail.

    Finkel  Deitel  Silberschatz  Tanenbaum
    ----------------------------------------
    FCFS    FIFO    FCFS          FCFS
    RR      RR      RR            RR
    PS      --      PS            PS
    SRR     --      SRR           --
    SPN     SJF     SJF           SJF
    PSPN    SRT     PSJF/SRTF     SRTF
    HPRN    HRN     --            --
    --      --      MLQ           --
    FB      MLFQ    MLFQ          MQ

    (A -- entry means the book does not name that policy.)

Remark: An alternate organization of the scheduling algorithms is due to my former PhD student Eric Freudenthal, who presented it in Fall 2002.

2.4.2 Scheduling in Batch Systems

First Come First Served (FCFS, FIFO, FCFS, FCFS)

Even if the OS did no real scheduling, it would still need to store the list of ready processes in some manner. If that structure is a queue, you get FCFS; if it is a stack (strange), you get LCFS. Perhaps you could get some sort of random policy as well.
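
A minimal sketch of the point: the same ready list gives FCFS or LCFS depending only on which end processes are dispatched from (process ids here are just integers):

    #include <stdio.h>

    #define MAX 100
    static int ready[MAX];          /* the ready list, holding process ids */
    static int head = 0, tail = 0;  /* tail is the next free slot */

    static void make_ready(int pid) { ready[tail++] = pid; }

    /* FCFS: dispatch from the front of the list (a queue). */
    static int dispatch_fcfs(void) { return ready[head++]; }

    /* LCFS: dispatch from the back of the list (a stack) -- strange. */
    static int dispatch_lcfs(void) { return ready[--tail]; }

    int main(void) {
        make_ready(1); make_ready(2); make_ready(3);
        printf("FCFS dispatches %d first\n", dispatch_fcfs());  /* 1 */
        printf("LCFS dispatches %d next\n",  dispatch_lcfs());  /* 3 */
        return 0;
    }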

Shortest Job First (SPN, SJF, SJF, SJF)

Sort jobs by execution time needed and run the shortest first.

This is a non-preemptive algorithm.

First consider a static situation where all jobs are available at the beginning and we know how long each one takes to run. For simplicity let's consider run-to-completion, also called uniprogrammed (i.e., we don't even switch to another process on I/O). In this situation, uniprogrammed SJF has the shortest average waiting time: if a longer job ever ran immediately before a shorter one, swapping the two would decrease the shorter job's waiting time by more than it increases the longer job's, so any schedule that is not shortest-first can be improved.

The above argument illustrates an advantage of favoring short jobs (e.g., RR with small quantum): The average waiting time is reduced.
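
To see the effect numerically, here is a small sketch (the run times are invented) comparing the average waiting time of submission order against shortest-first order:

    #include <stdio.h>

    /* Average waiting time when jobs run to completion in the given
       order: job i waits for the total run time of the jobs before it. */
    static double avg_wait(const double *t, int n) {
        double elapsed = 0, total_wait = 0;
        for (int i = 0; i < n; i++) {
            total_wait += elapsed;   /* time job i spent waiting to start */
            elapsed += t[i];
        }
        return total_wait / n;
    }

    int main(void) {
        double submission_order[] = { 8, 4, 1, 2 };
        double sjf_order[]        = { 1, 2, 4, 8 };   /* same jobs, sorted */
        printf("FCFS average wait: %.2f\n", avg_wait(submission_order, 4));
        printf("SJF  average wait: %.2f\n", avg_wait(sjf_order, 4));
        return 0;
    }

Running this gives an average wait of 8.25 for the submission order but only 2.75 for SJF.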

In the more realistic case of true SJF, where the scheduler switches to a new process when the currently running process blocks (say, for I/O), we could also consider the policy shortest next-CPU-burst first.

The difficulty is predicting the future (i.e., knowing in advance the time required for the job or the job's next-CPU-burst).
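
A standard way to make such a prediction (used in several real schedulers, though the constants and burst lengths below are invented) is exponential averaging: estimate the next burst as a weighted average of the last observed burst and the previous estimate.

    #include <stdio.h>

    /* estimate(n+1) = ALPHA * burst(n) + (1 - ALPHA) * estimate(n).
       ALPHA near 1 reacts quickly; ALPHA near 0 has a long memory. */
    #define ALPHA 0.5

    int main(void) {
        double bursts[] = { 6, 4, 6, 4, 13, 13, 13 };  /* invented data */
        int n = sizeof bursts / sizeof bursts[0];
        double estimate = 10;                          /* initial guess */
        for (int i = 0; i < n; i++) {
            printf("predicted %5.2f, observed %5.2f\n", estimate, bursts[i]);
            estimate = ALPHA * bursts[i] + (1 - ALPHA) * estimate;
        }
        return 0;
    }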

SJF can starve a process that requires a long burst.