Operating Systems
2000-01 Fall
M 5:00-6:50
Ciww 109
Allan Gottlieb
gottlieb@nyu.edu
http://allan.ultra.nyu.edu/~gottlieb
715 Broadway, Room 1001
212-998-3344
609-951-2707
email is best
================ Start Lecture #2 ================
Note:
Lab 1 is assigned and due 2 October.
1.3.3: System Calls
System calls are the way a user (i.e. a program)
directly interfaces with the OS. Some textbooks use the term
envelope for the component of the OS responsible for fielding
system calls and dispatching them. Here is a picture showing some of
the OS components and the external events for which they are the
interface.
Note that the OS serves two masters. The hardware (below)
asynchronously sends interrupts and the user makes system
calls and generates page faults.
What happens when a user executes a system call such as read()?
We discuss this in much more detail later but briefly what happens is
...
- Normal function call (in C, Ada, etc.).
- Library routine (in C).
- Small assembler routine.
- Move arguments to predefined place (perhaps registers).
- Poof (a trap instruction) and then the OS proper runs in
supervisor mode.
- Fixup result (move to correct place).
Homework: 6
1.3.4: The shell
Assumed knowledge
Homework: 9.
1.4: OS Structure
I must note that Tanenbaum is a big advocate of the so-called
microkernel approach in which as much as possible is moved out of the
(protected) microkernel into separate processes.
In the early 90s this was popular. Digital Unix (now called Tru64)
and Windows NT were examples. Digital Unix is based on Mach, a
research OS from Carnegie Mellon University. Lately, the growing
popularity of Linux has called into question the belief that ``all new
operating systems will be microkernel based''.
1.4.1: Monolithic approach
The previous picture: one big program
The system switches from user mode to kernel mode during the poof and
then back when the OS does a ``return''.
But of course we can structure the system better, which brings us to the next topic.
1.4.2: Layered Systems
Some systems have more layers and are more strictly structured.
An early layered system was ``THE'' operating system by Dijkstra. The
layers were:
- The operator
- User programs
- I/O mgt
- Operator-process communication
- Memory and drum management
The layering was done by convention, i.e. there was no enforcement by
hardware, and the entire OS was linked together as one program. This is
true of many modern operating systems as well (e.g., Linux).
The Multics system was layered in a more formal manner. The hardware
provided several protection layers and the OS used them. That is,
arbitrary code could not jump to or access data in a more protected layer.
1.4.3: Virtual machines
Use a ``hypervisor'' (beyond supervisor, i.e. beyond a normal OS) to
switch between multiple operating systems.
- Each App/CMS runs on a virtual 370
- CMS is a single user OS
- A system call in an App traps to the corresponding CMS
- CMS believes it is running on the machine so issues I/O
instructions but ...
- ... I/O instructions in CMS trap to VM/370
1.4.4: Client Server
When implemented on one computer, a client server OS is the
microkernel approach in which the microkernel just supplies
interprocess communication and the main OS functions are provided by a
number of separate processes.
This does have advantages. For example an error in the file server
cannot corrupt memory in the process server. This makes errors easier
to track down.
But it does mean that when a (real) user process makes a system call
there are more process switches. These are
not free.
A distributed system can be thought of as an extension of the
client server concept where the servers are remote.
Homework: 11
Chapter 2: Process Management
Tanenbaum's chapter title is ``processes''. I prefer process
management. The subject matter is processes, process scheduling,
interrupt handling, and IPC (Interprocess communication--and
coordination).
2.1: Processes
Definition: A process is a program
in execution.
- We are assuming a multiprogramming OS that
can switch from one process to another.
- Sometimes this is
called pseudoparallelism since one has the illusion of a
parallel processor.
- The other possibility is real
parallelism in which two or more processes are actually running
at once because the computer system is a parallel processor, i.e., has
more than one processor.
- We do not study real parallelism (parallel
processing, distributed systems, multiprocessors, etc) in this course.
2.1.1: The Process Model
Even though in actuality there are many processes running at once, the
OS gives each process the illusion that it is running alone.
- Virtual time: The time used by just this
process. Virtual time progresses at
a rate independent of other processes. Actually, this is false: the
virtual time is
typically incremented a little during the system calls used for process
switching, so if there are more other processes, more ``overhead''
virtual time occurs.
- Virtual memory: The memory as viewed by the
process. Each process typically believes it has a contiguous chunk of
memory starting at location zero. Of course this can't be true of all
processes (or they would be using the same memory) and in modern
systems it is actually true of no processes (the memory assigned is
not contiguous and does not include location zero).
Virtual time and virtual memory are examples of abstractions
provided by the operating system to the user processes so that the
latter ``sees'' a more pleasant virtual machine than actually exists.
Process Hierarchies
- Modern general purpose operating systems permit a user to create and
destroy processes.
- In Unix this is done by the fork
system call, which creates a child process, and the
exit system call, which terminates the current
process.
- After a fork both parent and child keep running (indeed they
have the same program text) and each can fork off other
processes.
- A process tree results. The root of the tree is a special
process created by the OS during startup.
MS-DOS is not multiprogrammed so when one process starts another,
the first process is blocked and waits until the second is finished.
Process states and transitions
The above diagram contains a great deal of information.
- Consider a running process P that issues an I/O request
- The process blocks
- At some later point, a disk interrupt occurs and the driver
detects that P's request is satisfied.
- P is unblocked, i.e. is moved from blocked to ready
- At some later time the operating system looks for a ready job
to run and picks P.
- A preemptive scheduler has the dotted line preempt;
A non-preemptive scheduler doesn't.
- The number of processes changes only for two arcs: create and
terminate.
- Suspend and resume are medium term scheduling
- Done on a longer time scale.
- Involves memory management as well.
- Sometimes called two level scheduling.
One can organize an OS around the scheduler.
- Write a minimal ``kernel'' consisting of the scheduler, interrupt
handlers, and IPC (interprocess communication)
- The rest of the OS consists of kernel processes (e.g. memory,
filesystem) that act as servers for the user processes (which of
course act as clients).
- The system processes also act as clients (of other system processes).
- The above is called the client-server model and is one Tanenbaum likes.
His ``Minix'' operating system works this way.
- Indeed, there was reason to believe that it would dominate. But that
hasn't happened.
- Such an OS is sometimes called server
based.
- Systems like traditional Unix or Linux would then be
called self-service since the user process serves itself.
- That is, the user process switches to kernel mode and performs
the system call.
- To repeat: the same process changes back and forth from/to
user<-->system mode and services itself.
2.1.3: Implementation of Processes
The OS organizes the data about each process in a table naturally
called the process table. Each entry in this table
is called a process table entry or PTE.
- One entry per process.
- The central data structure for process management.
- A process state transition (e.g., moving from blocked to ready) is
reflected by a change in the value of one or more
fields in the PTE.
- We have converted an active entity (process) into a data structure
(PTE). Finkel calls this the level principle ``an active
entity becomes a data structure when looked at from a lower level''.
- The PTE contains a great deal of information about the process.
For example,
- Saved value of registers when process not running
- Stack pointer
- CPU time used
- Process id (PID)
- Process id of parent (PPID)
- User id (uid and euid)
- Group id (gid and egid)
- Pointer to text segment (memory for the program text)
- Pointer to data segment
- Pointer to stack segment
- UMASK (default permissions for new files)
- Current working directory
- Many others
An aside on Interrupts
In a well defined location in memory (specified by the hardware) the
OS stores an interrupt vector, which contains the
address of the (first level) interrupt handler.
- Tanenbaum calls the interrupt handler the interrupt service routine.
- Actually one can have different priorities of interrupts and the
interrupt vector contains one pointer for each level. This is why it is
called a vector.
Assume a process P is running and a disk interrupt occurs for the
completion of a disk read previously issued by process Q, which is
currently blocked. Note that interrupts are unlikely to be for the
currently running process (because the process waiting for the
interrupt is likely blocked).
- The hardware stacks the program counter etc (possibly some
registers)
- Hardware loads new program counter from the interrupt vector.
- Loading the program counter causes a jump
- Steps 1 and 2 are similar to a procedure call. But the
interrupt is asynchronous
- Assembly language routine saves registers
- Assembly routine sets up new stack
- These last two steps can be called setting up the C environment
- Assembly routine calls C procedure (Tanenbaum forgot this one)
- C procedure does the real work
- Determines what caused the interrupt (in this case a disk
completed an I/O)
- How does it figure out the cause?
- Which priority interrupt was activated.
- The controller can write data in memory before the
interrupt
- The OS can read registers in the controller
- Mark process Q as ready to run.
- That is, move Q to the ready list (note that again
we are viewing Q as a data structure).
- The state of Q is now ready (it was blocked before).
- The code that Q needs to run initially is likely to be OS
code. For example, Q probably needs to copy the data just
read from a kernel buffer into user space.
- Now we have at least two processes ready to run: P and Q
- The scheduler decides which process to run (P or Q or
something else). Let's assume that the decision is to run P.
- The C procedure (that did the real work in the interrupt
processing) continues and returns to the assembly code.
- Assembly language restores P's state (e.g., registers) and starts
P at the point it was when the interrupt occurred.
2.2: Interprocess Communication (IPC) and Process Coordination and
Synchronization
2.2.1: Race Conditions
A race condition occurs when two processes can
interact and the outcome depends on the order in which the processes
execute.
- Imagine two processes both accessing x, which is initially 10.
- One process is to execute x <-- x+1
- The other is to execute x <-- x-1
- When both are finished x should be 10
- But we might get 9 and might get 11!
- Show how this can happen (x <-- x+1 is not atomic)
- Tanenbaum shows how this can lead to disaster for a printer
spooler
Homework: 2
2.2.2: Critical sections
We must prevent interleaving sections of code that need to be atomic with
respect to each other. That is, the conflicting sections need
mutual exclusion. If process A is executing its
critical section, it excludes process B from executing its critical
section. Conversely, if process B is executing its critical section, it
excludes process A from executing its critical section.
Requirements for a critical section implementation.
- No two processes may be simultaneously inside their critical
section
- No assumption may be made about the speeds or the number of CPUs
- No process outside its critical section may block other processes
- No process should have to wait forever to enter its critical
section
- I do NOT make this last requirement.
- I just require that the system as a whole make progress (so not
all processes are blocked)
- I refer to solutions that do not satisfy Tanenbaum's last
condition as unfair, but nonetheless correct, solutions
- Stronger fairness conditions can also be defined
2.2.3 Mutual exclusion with busy waiting
The operating system can choose not to preempt itself. That is, no
preemption for system processes (if the OS is client server) or for
processes running in system mode (if the OS is self service).
Forbidding preemption for system processes would prevent the problem
above, in which x<--x+1 not being atomic broke the printer spooler,
provided the spooler is part of the OS.
But this is not adequate
- Does not work for user programs. So the Unix printer spooler would
not be helped.
- Does not prevent conflicts between the main line OS and interrupt
handlers
- This conflict could be prevented by blocking interrupts while the main
line is in its critical section.
- Indeed, blocking interrupts is often done for exactly this reason.
- Do not want to block interrupts for too long or the system
will seem unresponsive
- Does not work if the system has several processors
- Both main lines can conflict
- One processor cannot block interrupts on the other
Software solutions for two processes
Initially P1wants=P2wants=false
Code for P1 Code for P2
Loop forever { Loop forever {
P1wants <-- true ENTRY P2wants <-- true
while (P2wants) {} ENTRY while (P1wants) {}
critical-section critical-section
P1wants <-- false EXIT P2wants <-- false
non-critical-section } non-critical-section }
Explain why this works.
But it is wrong! Why?
Let's try again. The trouble was that setting want before the
loop permitted us to get stuck. We had them in the wrong order!
Initially P1wants=P2wants=false
Code for P1 Code for P2
Loop forever { Loop forever {
while (P2wants) {} ENTRY while (P1wants) {}
P1wants <-- true ENTRY P2wants <-- true
critical-section critical-section
P1wants <-- false EXIT P2wants <-- false
non-critical-section } non-critical-section }
Explain why this works.
But it is wrong again! Why?
So let's be polite and really take turns. None of this wanting stuff.
Initially turn=1
Code for P1 Code for P2
Loop forever { Loop forever {
while (turn = 2) {} while (turn = 1) {}
critical-section critical-section
turn <-- 2 turn <-- 1
non-critical-section } non-critical-section }
This one forces alternation, so is not general enough. Specifically,
it does not satisfy condition three, which requires that no process in
its non-critical section can stop another process from entering its
critical section. With alternation, if one process is in its
non-critical section (NCS) then the other can enter the CS once but
not again.
In fact, it took years (way back when) to find a correct solution.
Many earlier ``solutions'' were found and several were published, but
all were wrong.
The first true solution was found by Dekker. It is very clever, but I am
skipping it (I cover it when I teach OS II). Subsequently, algorithms with
better fairness properties were found (e.g. no task has to wait for
another task to enter the CS twice).