================ Start Lecture #8 ================
2.3.6: Mutexes
Remark:
Whereas we use the term semaphore to mean binary semaphore and
explicitly say generalized or counting semaphore for the positive
integer version, Tanenbaum uses semaphore for the positive integer
solution and mutex for the binary version.
Also, as indicated above, for Tanenbaum semaphore/mutex implies a
blocking primitive.
My Terminology

                Busy wait            block/switch
critical        (binary) semaphore   (binary) semaphore
semi-critical   counting semaphore   counting semaphore
Tanenbaum's Terminology

                Busy wait            block/switch
critical        enter/leave region   mutex
semi-critical   no name              semaphore
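To make the tables concrete, here is a minimal POSIX-semaphore sketch
(my own code, not Tanenbaum's). A semaphore initialized to 1 gives the
critical (mutex) rows; initialized to K > 1 it gives the semi-critical
rows, admitting up to K processes at once.

  #include <pthread.h>
  #include <semaphore.h>
  #include <stdio.h>

  #define K 3                  /* capacity of the semi-critical section */
  #define NTHREADS 10

  sem_t sem;                   /* counting semaphore, initialized to K */

  void *worker(void *arg)
  {
      long id = (long)arg;
      sem_wait(&sem);          /* P: blocks once K workers are inside */
      printf("worker %ld inside (at most %d at once)\n", id, K);
      sem_post(&sem);          /* V: admits one blocked worker */
      return NULL;
  }

  int main(void)
  {
      pthread_t tid[NTHREADS];
      sem_init(&sem, 0, K);    /* K = 1 would give a binary semaphore (mutex) */
      for (long i = 0; i < NTHREADS; i++)
          pthread_create(&tid[i], NULL, worker, (void *)i);
      for (long i = 0; i < NTHREADS; i++)
          pthread_join(tid[i], NULL);
      sem_destroy(&sem);
      return 0;
  }

Note that sem_wait blocks rather than busy waits, so this sketch lands
in the block/switch column of both tables.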
2.3.7: Monitors
Skipped.
2.3.8: Message Passing
Skipped.
You can find some information on barriers in my
lecture notes for a follow-on course
(see in particular lecture #16).
2.4: Classical IPC Problems
2.4.1: The Dining Philosophers Problem
A classical problem from Dijkstra.
- 5 philosophers sitting at a round table.
- Each has a plate of spaghetti.
- There is one fork between each pair of adjacent philosophers.
- Each philosopher needs two forks to eat.
What algorithm do you use for access to the shared resource (the
forks)?
- The obvious solution (pick up the right fork, then the left)
  deadlocks; see the sketch after this list.
- A big lock around everything serializes.
- Good code in the book.
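To see the deadlock concretely, here is a minimal pthreads sketch of
the naive protocol (my own code; the book's correct solution is not
reproduced here). Each fork is a mutex and each philosopher grabs
right then left:

  #include <pthread.h>
  #include <stdio.h>

  #define N 5

  pthread_mutex_t fork_mutex[N];      /* one mutex per fork */

  /* Naive protocol: grab the right fork, then the left.  If all five
     philosophers grab their right fork at the same moment, each then
     waits forever for a left fork held by a neighbor: a circular
     wait, i.e., deadlock. */
  void *philosopher(void *arg)
  {
      long i = (long)arg;
      int right = i;
      int left  = (i + 1) % N;

      for (int round = 0; round < 1000; round++) {
          pthread_mutex_lock(&fork_mutex[right]);
          pthread_mutex_lock(&fork_mutex[left]);
          /* eat */
          pthread_mutex_unlock(&fork_mutex[left]);
          pthread_mutex_unlock(&fork_mutex[right]);
          /* think */
      }
      printf("philosopher %ld finished without deadlocking\n", i);
      return NULL;
  }

  int main(void)
  {
      pthread_t tid[N];
      for (int i = 0; i < N; i++)
          pthread_mutex_init(&fork_mutex[i], NULL);
      for (long i = 0; i < N; i++)
          pthread_create(&tid[i], NULL, philosopher, (void *)i);
      for (int i = 0; i < N; i++)
          pthread_join(tid[i], NULL); /* may hang forever if the deadlock strikes */
      return 0;
  }

One standard fix breaks the circular order: have one philosopher (or
every odd-numbered one) pick up the left fork first.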
The purpose of mentioning the Dining Philosophers problem without giving
the solution is to give a feel for what coordination problems are like.
The book gives others as well. We are skipping these (again, this
material would be covered in a sequel course). If you are interested,
look, for example,
here.
Homework: 31 and 32 (these have short answers but are
not easy).
2.4.2: The Readers and Writers Problem
- Two classes of processes.
  - Readers, which can work concurrently.
  - Writers, which need exclusive access.
- Must prevent 2 writers from being concurrent.
- Must prevent a reader and a writer from being concurrent.
- Must permit readers to be concurrent when no writer is active.
- Perhaps want fairness (e.g., freedom from starvation).
- Variants
  - Writer-priority readers/writers.
  - Reader-priority readers/writers.
Quite useful in multiprocessor operating systems and database systems.
The ``easy way out'' is to treat all processes as writers, in which
case the problem reduces to mutual exclusion (P and V). The
disadvantage of the easy way out is that you give up reader
concurrency.
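By contrast, the classic reader-priority solution keeps reader
concurrency. A minimal sketch with POSIX semaphores (my code,
following the standard pattern, not a particular book's listing):

  #include <semaphore.h>

  sem_t mutex;           /* protects readcount */
  sem_t wrt;             /* writer (and first/last reader) exclusion */
  int readcount = 0;     /* number of readers currently reading */

  void reader(void)
  {
      sem_wait(&mutex);
      if (++readcount == 1)     /* first reader locks out writers */
          sem_wait(&wrt);
      sem_post(&mutex);

      /* ... read the shared data; other readers may be here too ... */

      sem_wait(&mutex);
      if (--readcount == 0)     /* last reader readmits writers */
          sem_post(&wrt);
      sem_post(&mutex);
  }

  void writer(void)
  {
      sem_wait(&wrt);           /* exclusive access */
      /* ... write the shared data ... */
      sem_post(&wrt);
  }

  int main(void)
  {
      sem_init(&mutex, 0, 1);
      sem_init(&wrt, 0, 1);
      /* ... spawn reader and writer threads here ... */
      return 0;
  }

This is the reader-priority variant listed above: a steady stream of
readers can starve a writer, which is why the writer-priority variant
exists.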
Again for more information see the web page referenced above.
2.4.3: The Sleeping Barber Problem
Skipped.
2.5: Process Scheduling
Scheduling processes on the processor is often called ``process
scheduling'' or simply ``scheduling''.
The objectives of a good scheduling policy include
- Fairness.
- Efficiency.
- Low response time (important for interactive jobs).
- Low turnaround time (important for batch jobs).
- High throughput [the above are from Tanenbaum].
- Repeatability. Dartmouth (DTSS) ``wasted cycles'' and limited
logins for repeatability.
- Fair across projects.
  - ``Cheating'' in unix by using multiple processes.
  - TOPS-10.
  - Fair share research project.
- Degrade gracefully under load.
Recall the basic diagram describing process states (running, ready,
blocked).
For now we are discussing short-term scheduling, i.e., the arcs
connecting running <--> ready.
Medium term scheduling is discussed later.
Preemption
It is important to distinguish preemptive from non-preemptive
scheduling algorithms.
- Preemption means the operating system moves a process from running
to ready without the process requesting it.
- Without preemption, the system implements ``run to completion (or
yield or block)''.
- The ``preempt'' arc in the diagram.
- We do not consider yield (a solid arrow from running to ready).
- Preemption needs a clock interrupt (or equivalent).
- Preemption is needed to guarantee fairness.
- Found in all modern general purpose operating systems.
- Even non-preemptive systems can be multiprogrammed (e.g., when
  processes block for I/O).
Deadline scheduling
This is used for real-time systems. The objective of the scheduler is
to find a schedule for all the tasks (there is a fixed set of tasks)
so that each meets its deadline. The run time of each task is known
in advance.
Actually it is more complicated.
- Periodic tasks
- What if we can't schedule all tasks so that each meets its deadline
  (i.e., what should the penalty function be)?
- What if the run-time is not constant but has a known probability
distribution?
We do not cover deadline scheduling in this course.
The name game
There is an amazing inconsistency in naming the different
(short-term) scheduling algorithms. Over the years I have used
primarily 4 books: In chronological order they are Finkel, Deitel,
Silberschatz, and Tanenbaum. The table just below illustrates the
name game for these four books. After the table we discuss each
scheduling policy in turn.
Finkel   Deitel   Silberschatz   Tanenbaum
--------------------------------------------
FCFS     FIFO     FCFS           --   (unnamed in Tanenbaum)
RR       RR       RR             RR
PS       **       PS             PS
SRR      **       SRR            **   (not in Tanenbaum)
SPN      SJF      SJF            SJF
PSPN     SRT      PSJF/SRTF      --   (unnamed in Tanenbaum)
HPRN     HRN      **             **   (not in Tanenbaum)
**       **       MLQ            **   (only in Silberschatz)
FB       MLFQ     MLFQ           MQ

(** = not in that book)
First Come First Served (FCFS, FIFO, FCFS, --)
If the OS ``doesn't'' schedule, it still needs to store the PTEs
(process table entries) somewhere. If it is a queue, you get FCFS.
If it is a stack (strange), you get LCFS. Perhaps you could get some
sort of random policy as well.
- Only FCFS is considered.
- The simplest scheduling policy.
- Non-preemptive.
Round Robin (RR, RR, RR, RR)
- An important preemptive policy.
- Essentially the preemptive version of FCFS.
- The key parameter is the quantum size q.
- When a process is put into the running state a timer is set to q.
- If the timer goes off and the process is still running, the OS
preempts the process.
- This process is moved to the ready state (the
preempt arc in the diagram), where it is placed at the
rear of the ready list (a queue).
- The process at the front of the ready list is removed from
the ready list and run (i.e., moves to state running).
- When a process is created, it is placed at the rear of the ready list.
- As q gets large, RR approaches FCFS.
- As q gets small, RR approaches PS (Processor Sharing, described next).
- What value of q should we choose? It is a tradeoff (see the
  simulation sketch below).
  - Small q makes the system more responsive.
  - Large q makes the system more efficient, since there is less
    process switching.
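To make the tradeoff concrete, here is a tiny RR simulation (my own
sketch; the burst times are made up). With every process ready at
time 0 and none arriving later, a circular scan over the array visits
processes in the same order as the FIFO ready queue:

  #include <stdio.h>

  #define N 3

  int main(void)
  {
      int burst[N]  = {5, 3, 8}; /* remaining CPU need per process (made-up data) */
      int finish[N] = {0};
      int q = 2;                 /* the quantum */
      int time = 0, remaining = N;

      while (remaining > 0) {
          for (int i = 0; i < N; i++) {
              if (burst[i] == 0)
                  continue;      /* already finished */
              int run = burst[i] < q ? burst[i] : q; /* a quantum, or until done */
              time += run;
              burst[i] -= run;
              if (burst[i] == 0) {
                  finish[i] = time;
                  remaining--;
              }
          }
      }
      for (int i = 0; i < N; i++)
          printf("process %d finishes at t=%d\n", i, finish[i]);
      return 0;
  }

With q=2 the finish times are 12, 9, and 16. Raising q to 8 or more
gives the FCFS times 5, 8, and 16, illustrating that RR approaches
FCFS as q grows. (Process-switch overhead is ignored here; modeling
it would show why small q also costs efficiency.)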
Homework: 26, 35, 38.