Operating Systems

Start Lecture #2

Deadlocks

A set of processes is deadlocked if each of the processes is blocked by a process in the set. The automotive equivalent, shown below, is called gridlock. (The photograph below was sent to me by Laurent Laor.)



[photograph: gridlock]

1.5.2 Address Spaces

Clearly, each process requires memory, but there are other issues as well. For example, your linker (the one you will write) produces a load module that assumes the process is loaded at location 0. The result is that every load module has the same (virtual) address space. The operating system must ensure that the address spaces of concurrently executing processes are assigned disjoint real memory.

For another example, note that current operating systems permit each process to be given more (virtual) memory than the total amount of (real) memory on the machine.

1.5.3 Files

Modern systems have a hierarchy of files: a file system tree.

You can name a file via an absolute path starting at the root directory or via a relative path starting at the current working directory.
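For example, if the current working directory happens to be /home/alice (a made-up name, just for illustration), the following two commands name the same file, the first via an absolute path and the second via a relative path.

    cat /home/alice/notes/todo.txt
    cat notes/todo.txt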

In Unix, one file system can be mounted on (attached to) another. When this is done, an existing directory on the second file system is temporarily replaced by the entire first file system; the directory's previous contents are hidden until the unmount. Most often the directory chosen is empty before the mount, so no files become temporarily invisible.
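As a concrete (and hypothetical) illustration, an administrator might attach the file system on the partition /dev/sdb1 to the existing, preferably empty, directory /mnt and later detach it; the device and directory names are made up for the example.

    mount /dev/sdb1 /mnt
    umount /mnt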

In addition to regular files and directories, Unix also uses the file system namespace for devices (called special files), which are typically found in the /dev directory. Often utilities that are normally applied to (ordinary) files can be applied as well to some special files. For example, when you are accessing a unix system using a mouse and do not have anything serious going on (e.g., right after you log in), type the following command

    cat /dev/mouse
  
and then move the mouse. You kill the cat (sorry) by typing cntl-C. I tried this on my linux box (using a text console) and no damage occurred. Your mileage may vary.

Before a file can be accessed, it must be opened and a file descriptor obtained. Subsequent I/O system calls (e.g., read and write) use the file descriptor rather than the file name. This is an optimization that enables the OS to find the file once and save the information in a file table accessed by the file descriptor. Many systems have standard files that are automatically made available to a process upon startup. These (initial) file descriptors are fixed (in Unix, 0, 1, and 2 for standard input, standard output, and standard error).
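A minimal C sketch of this open/read/write/close pattern follows; the file name is made up and error handling is abbreviated. Note that the loop uses the descriptor returned by open, not the file name, and that descriptor 1 (standard output) is one of the fixed, automatically supplied descriptors.

    #include <fcntl.h>      /* open */
    #include <unistd.h>     /* read, write, close */

    int main(void)
    {
        char buffer[1024];
        ssize_t count;
        int fd = open("data.txt", O_RDONLY);   /* the OS finds the file once */
        if (fd < 0)
            return 1;
        while ((count = read(fd, buffer, sizeof buffer)) > 0)
            write(1, buffer, count);           /* 1 is the standard output descriptor */
        close(fd);
        return 0;
    }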

A convenience offered by some command interpreters is a pipe or pipeline. The pipeline

    dir | wc
  
which pipes the output of dir into a character/word/line counter, will give the number of files in the directory (plus other info).

1.5.4 Input/Output

There are a wide variety of I/O devices that the OS must manage. For example, if two processes are printing at the same time, the OS must not interleave the output.

The OS contains device-specific code (drivers) for each device (really each controller) as well as device-independent I/O code.

1.5.6 Protection

Files and directories have associated permissions.

Memory assigned to a process, i.e., an address space, must also be protected.

Security

Security has, of course, sadly become a very serious concern. The topic is quite deep, and I do not feel that the necessarily superficial coverage that time would permit would be useful, so we are not covering the topic at all.

1.5.7 The Shell or Command Interpreter (DOS Prompt)

The shell is the command line interface to the operating system. It permits the user to enter and run commands (for example, connecting them with pipes or running them in the background).

Instead of a shell, one can have a more graphical interface.

Homework: 7.

Ontogeny Recapitulates Phylogeny

Some concepts become obsolete and then reemerge, in both cases due to technology changes. Several examples follow. Perhaps the cycle will repeat with smart card operating systems.

Large Memories (and Assembly Language)

The use of assembly language greatly decreases when memories get larger. When minicomputers and microcomputers (early PCs) were first introduced, each had a small memory, and for a while assembly language again became popular.

Protection Hardware (and Monoprogramming)

Multiprogramming requires protection hardware. Once the hardware becomes available, monoprogramming becomes obsolete. Again, when minicomputers and microcomputers were introduced, they had no such hardware, so monoprogramming was revived.

Disks (and Flat File Systems)

When disks are small, they hold few files and a flat (single directory) file system is adequate. Once disks get large, a hierarchical file system is necessary. When minicomputers and microcomputers were introduced, they had tiny disks and the corresponding file systems were flat.

Virtual Memory (and Dynamically Linked Libraries)

Virtual memory, among its other advantages, permits dynamically linked libraries; so as virtual memory hardware appears, so does dynamic linking.

1.6 System Calls

System calls are the way a user (i.e., a program) directly interfaces with the OS. Some textbooks use the term envelope for the component of the OS responsible for fielding system calls and dispatching them to the appropriate component of the OS. On the right is a picture showing some of the OS components and the external events for which they are the interface.

Note that the OS serves two masters. The hardware (at the bottom) asynchronously sends interrupts and the user (at the top) synchronously invokes system calls and generates page faults.

Homework: 14.

What happens when a user executes a system call such as read()? We show a more detailed picture below, but at a high level what happens is

  1. Normal function call (in C, Ada, Pascal, Java, etc.).
  2. Library routine (probably written in C or a similar language).
  3. Small assembler routine.
    1. Move arguments to predefined place (perhaps registers).
    2. Poof (a trap instruction) and then the OS proper runs in supervisor mode.
    3. Fix up result (move to correct place).

The following actions occur when the user executes the (Unix) system call

    count = read(fd,buffer,nbytes)
  
which reads up to nbytes from the file described by fd into buffer. The actual number of bytes read is returned (it might be less than nbytes if, for example, an eof was encountered).
  1. Push the third parameter onto the stack.
  2. Push the second parameter onto the stack.
  3. Push the first parameter onto the stack.
  4. Call the library routine, which involves pushing the return address onto the stack and jumping to the routine.
  5. Machine/OS dependent actions. One is to put the system call number for read in a well defined place, e.g., a specific register. This requires assembly language.
  6. Trap to the kernel. This enters the operating system proper and shifts the computer to privileged mode. Assembly language is again used.
  7. The envelope uses the system call number to access a table of pointers to find the handler for this system call.
  8. The read system call handler processes the request (see below).
  9. Some magic instruction returns to user mode and jumps to the location right after the trap.
  10. The library routine returns (there is more; e.g., the count must be returned).
  11. The stack is popped (ending the function invocation of read).

A major complication is that the system call handler may block. Indeed, the read system call handler is likely to block. In that case a context switch is likely to occur to another process. This is far from trivial and is discussed later in the course.
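As a concrete illustration of steps 5 and 6 in the list above (necessarily machine and OS dependent; this is a sketch of the idea, not the actual library source), on 32-bit x86 Linux the assembler portion of the read wrapper is roughly:

    movl  $3, %eax        # the system call number for read on 32-bit x86 Linux
    movl  fd, %ebx        # first argument
    movl  buffer, %ecx    # second argument
    movl  nbytes, %edx    # third argument
    int   $0x80           # the trap: poof, the kernel now runs in supervisor mode
                          # on return, %eax holds the count (or a negative error code)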

A Few Important Posix/Unix/Linux and Win32 System Calls
    Posix       Win32                  Description

    Process Management
    fork        CreateProcess          Clone current process
    exec(ve)    (none)                 Replace current process
    waitpid     WaitForSingleObject    Wait for a child to terminate
    exit        ExitProcess            Terminate process & return status

    File Management
    open        CreateFile             Open a file & return descriptor
    close       CloseHandle            Close an open file
    read        ReadFile               Read from file to buffer
    write       WriteFile              Write from buffer to file
    lseek       SetFilePointer         Move file pointer
    stat        GetFileAttributesEx    Get status info

    Directory and File System Management
    mkdir       CreateDirectory        Create new directory
    rmdir       RemoveDirectory        Remove empty directory
    link        (none)                 Create a directory entry
    unlink      DeleteFile             Remove a directory entry
    mount       (none)                 Mount a file system
    umount      (none)                 Unmount a file system

    Miscellaneous
    chdir       SetCurrentDirectory    Change the current working directory
    chmod       (none)                 Change permissions on a file
    kill        (none)                 Send a signal to a process
    time        GetLocalTime           Elapsed time since Jan. 1, 1970

1.6.1 System Calls for Process Management

We describe the unix (Posix) system calls. A short description of the Windows interface is in the book.

To show how the four process management calls enable much of process management, consider the following highly simplified shell. The fork() system call duplicates the process, so immediately afterwards both the parent and the child are executing the same code; fork() returns a nonzero value (the child's process id) in the parent and zero in the child.

    while (true)
      display_prompt()
      read_command(command)
      if (fork() != 0)
        waitpid(...)
      else
        execve(command)
      endif
    endwhile
  

Simply removing the waitpid(...) gives background jobs.
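For the curious, here is a runnable C approximation of the same loop. It is only a sketch: it handles commands without arguments, does almost no error checking, and uses execvp (which searches the PATH) rather than a bare execve.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        char line[256];
        char *argv[2];

        while (1) {
            printf("$ ");                          /* display_prompt() */
            fflush(stdout);
            if (fgets(line, sizeof line, stdin) == NULL)
                break;                             /* end of input: leave the shell */
            line[strcspn(line, "\n")] = '\0';      /* read_command() */
            argv[0] = line;
            argv[1] = NULL;
            if (fork() != 0) {                     /* nonzero: we are the parent */
                waitpid(-1, NULL, 0);              /* wait for the child to terminate */
            } else {                               /* zero: we are the child */
                execvp(line, argv);                /* replace ourselves with the command */
                exit(1);                           /* reached only if the exec failed */
            }
        }
        return 0;
    }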

1.6.2 System Calls for File Management

Most files are accessed sequentially from beginning to end. In this case the operations performed are

open -- possibly creating the file
multiple reads and writes
close

For non-sequential access, lseek is used to move the File Pointer, which is the location in the file where the next read or write will take place.
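A small C sketch of non-sequential access (the file name and offset are invented for the example):

    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        char rec[100];
        int fd = open("records.dat", O_RDONLY);   /* a hypothetical file */
        if (fd < 0)
            return 1;
        lseek(fd, 500, SEEK_SET);                 /* move the file pointer to byte 500 */
        read(fd, rec, sizeof rec);                /* the read now starts at byte 500 */
        close(fd);
        return 0;
    }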

1.6.3 System Calls for Directory Management

Directories are created and destroyed by mkdir and rmdir. Directories are changed by the creation and deletion of files. As mentioned, open creates files. Files can have several names: link is used to give a file another name and unlink to remove a name. When the last name is gone (and the file is no longer open by any process), the file data is destroyed. This description is approximate; we give the details later in the course, when we explain Unix i-nodes. See the sketch below.
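To make this concrete, the following C sketch (with invented file names) gives a file a second name and then removes the names one at a time; the data survives until the last name is unlinked.

    #include <unistd.h>

    int main(void)
    {
        link("report.txt", "final.txt");   /* a second directory entry for the same file */
        unlink("report.txt");              /* remove the first name; the data remains */
        unlink("final.txt");               /* the last name is gone; the data is destroyed */
        return 0;
    }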

Homework: 18.

1.6.4 Miscellaneous System Calls

Skipped

1.6.5 The Windows Win32 API

Skipped

1.6A Addendum on Transfer of Control

The transfer of control between user processes and the operating system kernel can be quite complicated, especially in the case of blocking system calls, hardware interrupts, and page faults. Before tackling these issues later, we begin with the familiar example of a procedure call within a user-mode process.

An important OS objective is that, even in the more complicated cases of page faults and blocking system calls requiring device interrupts, simple procedure call semantics are observed from a user process viewpoint. The complexity is hidden inside the kernel itself, yet another example of the operating system providing a more abstract, i.e., simpler, virtual machine to the user processes.

More details will be added when we study memory management (and know officially about page faults) and more again when we study I/O (and know officially about device interrupts).

A number of the points below are far from standardized. Such items as where to place parameters, which routine saves the registers, the exact semantics of trap, etc., vary as one changes language/compiler/OS. Indeed some of these are referred to as calling conventions, i.e., their implementation is a matter of convention rather than logical requirement. The presentation below is, we hope, reasonable, but must be viewed as a generic description of what could happen instead of an exact description of what does happen with, say, C compiled by the Microsoft compiler running on Windows XP.

1.6A.1 User-mode procedure calls

Procedure f calls g(a,b,c) in process P. An example is above where a user program calls read(fd,buffer,nbytes).

Actions by f Prior to the Call

  1. Save the registers by pushing them onto the stack (in some implementations this is done by g instead of f).

  2. Push arguments c,b,a onto P's stack.
    Note: Stacks usually grow downward from the top of P's segment, so pushing an item onto the stack actually involves decrementing the stack pointer, SP.
    Note: Some compilers store arguments in registers not on the stack.

Executing the Call Itself

  3. Execute PUSHJUMP <start-address of g>.
    This instruction pushes the program counter PC (a.k.a. the instruction pointer IP) onto the stack, and then jumps to the start address of g. The value pushed is actually the updated program counter, i.e., the location of the next instruction (the instruction to be executed by f when g returns).

Actions by g upon Being Called

  4. Allocate space for g's local variables by suitably decrementing SP.

  5. Start execution from the beginning of g, referencing the parameters as needed. The execution may involve calling other procedures, possibly including recursive calls to f and/or g.

Actions by g When Returning to f

  6. If g is to return a value, store it in the conventional place.

  7. Undo step 4: Deallocate the local variables by incrementing SP.

  8. Undo step 3: Execute POPJUMP, i.e., pop the stack and set PC to the value popped, which is the return address pushed in step 3.

Actions by f upon the Return from g:

  9. (We are now at the instruction in f immediately following the call to g.)
    Undo step 2: Remove the arguments from the stack by incrementing SP.

  10. Undo step 1: Restore the registers while popping them off the stack.

  11. Continue the execution of f, referencing the returned value of g, if any.
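Putting the steps together, here is a generic sketch of f calling g(a,b,c), written with the invented PUSHJUMP/POPJUMP instructions used above (it is not any real instruction set; the step numbers refer to the list just given).

    # in f, before and during the call
    PUSH      registers     # step 1: save the registers
    PUSH      c             # step 2: push the arguments
    PUSH      b
    PUSH      a
    PUSHJUMP  g             # step 3: push the updated PC and jump to g

    # in g
    SUB       SP, #locals   # step 4: allocate the local variables
    ...body of g...         # step 5
    STORE     result        # step 6: return value to the conventional place
    ADD       SP, #locals   # step 7: undo step 4
    POPJUMP                 # step 8: undo step 3 (pop the return address into PC)

    # in f, after the return
    ADD       SP, #args     # step 9: undo step 2 (remove the arguments)
    POP       registers     # step 10: undo step 1 (restore the registers)
    ...continue, using the result...   # step 11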

Properties of (User-Mode) Procedure Calls

1.6A.2 Kernel-mode procedure calls

We mean one procedure running in kernel mode calling another procedure, which will also be run in kernel mode. Later, we will discuss switching from user to kernel mode and back.

There is not much difference between the actions taken during a kernel-mode procedure call and during a user-mode procedure call. The procedures executing in kernel-mode are permitted to issue privileged instructions, but the instructions used for transferring control are all unprivileged so there is no change in that respect.

One difference is that often a different stack is used in kernel mode, but that simply means that the stack pointer must be set to the kernel stack when switching from user to kernel mode. But we are not switching modes in this section; the stack pointer already points to the kernel stack. Often there are two stack pointers: one for kernel mode and one for user mode.

1.6A.3 The Trap instruction

The trap instruction, like a procedure call, is a synchronous transfer of control: we can see where, and hence when, it is executed. In this respect, there are no surprises. Nonetheless, the trap instruction does have an unusual effect: processor execution is switched from user mode to kernel mode. That is, the trap instruction itself is normally executed in user mode (it is naturally an UNprivileged instruction), but the next instruction executed (which is NOT the instruction written after the trap) is executed in kernel mode.

Process P, running in unprivileged (user) mode, executes a trap. The code being executed is written in assembler since there are no high level languages that generate a trap instruction. There is no need to name the function that is executing. Compare the following example to the explanation of f calls g given above.

Actions by P prior to the trap

  1. Save the registers by pushing them onto the stack.

  2. Store any arguments that are to be passed. The stack is not normally used to store these arguments since the kernel has a different stack. Often registers are used.

Executing the trap itself

  3. Execute TRAP <trap-number>.
    This instruction switches the processor to kernel (privileged) mode, jumps to a location in the OS determined by trap-number, and saves the return address. For example, the processor may be designed so that the next instruction executed after a trap is at physical address 8 times the trap-number.
    The trap-number can be thought of as the name of the code-sequence to which the processor will jump rather than as an argument to trap.

Actions by the OS upon being TRAPped into

  4. Jump to the real code.
    Recall that trap instructions with different trap numbers jump to locations very close to each other. There is not enough room between them for the real trap handler. Indeed one can think of the trap as having an extra level of indirection; it jumps to a location that then jumps to the real start address. If you learned about writing jump tables in assembler, this is very similar.

  5. Check all arguments passed. The kernel must be paranoid and assume that the user mode program is evil and written by a bad guy.

  6. Allocate space by decrementing the kernel stack pointer.
    The kernel and user stacks are separate.

  7. Start execution from the jumped-to location.

Actions by the OS when returning to user mode

  8. Undo step 6: Deallocate space by incrementing the kernel stack pointer.

  9. Undo step 3: Execute (in assembler) another special instruction, RTI or ReTurn from Interrupt, which returns the processor to user mode and transfers control to the return location saved by the trap. The word interrupt appears because an RTI is also used when the kernel returns from an interrupt, not just (as here) when it returns from a trap.

Actions by P upon the return from the OS

  10. We are now at the instruction right after the trap.
    Undo step 1: Restore the registers by popping the stack.

  11. Continue the execution of P, referencing the returned value(s) of the trap, if any.
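For comparison with the procedure call sketch above, here is the same kind of generic sketch for TRAP/RTI (again using invented notation, with the step numbers from the list just given; KSP denotes the kernel stack pointer).

    # in P, user mode
    PUSH   registers        # step 1: save the registers
    MOVE   R1, arg1         # step 2: arguments go in registers, not on the user stack
    TRAP   #N               # step 3: switch to kernel mode, save the return address,
                            #         and jump to the location determined by N

    # in the OS, kernel mode
    JUMP   real_handler     # step 4: the extra level of indirection
    ...check the arguments...           # step 5
    SUB    KSP, #locals     # step 6: allocate space on the kernel stack
    ...body of the handler...           # step 7
    ADD    KSP, #locals     # step 8: undo step 6
    RTI                     # step 9: undo step 3 (back to user mode, after the trap)

    # in P again, user mode
    POP    registers        # step 10: undo step 1
    ...continue, using any returned values...   # step 11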

Properties of TRAP/RTI

Remark: A good way to use the material in the addendum is to compare the first case (user-mode f calls user-mode g) to the TRAP/RTI case line by line so that you can see the similarities and differences.

1.7 Operating System Structure

I must note that Tanenbaum is a big advocate of the so-called microkernel approach, in which as much as possible is moved out of the (supervisor mode) kernel into separate processes. The (hopefully small) portion left in supervisor mode is called a microkernel.

In the early 90s this approach was popular. Digital Unix (now called Tru64) and Windows NT/2000/XP/Vista are examples. Digital Unix is based on Mach, a research OS from Carnegie Mellon University. Lately, the growing popularity of Linux has called into question the belief that all new operating systems will be microkernel based.

1.7.1 Monolithic approach

The previous picture: one big program

The system switches from user mode to kernel mode during the poof and then back when the OS does a return (an RTI or return from interrupt).

But of course we can structure the system better, which brings us to layered systems.

1.7.2 Layered Systems

Some systems have more layers and are more strictly structured.

An early layered system was THE operating system by Dijkstra and his students at Technische Hogeschool Eindhoven. This was a simple batch system so the operator was the user.

  Layer 5: The operator process
  Layer 4: User programs
  Layer 3: I/O management
  Layer 2: Operator console-process communication
  Layer 1: Memory and drum management
  Layer 0: Processor allocation and multiprogramming (scheduling)

The layering was done by convention, i.e., there was no enforcement by hardware, and the entire OS was linked together as one program. This is true of many modern operating systems as well (e.g., Linux).

The Multics system was layered in a more formal manner. The hardware provided several protection layers and the OS used them. That is, arbitrary code could not jump into or access data in a more protected layer.

1.7.3 Microkernels

The idea is to have the kernel, i.e., the portion running in supervisor mode, as small as possible and to have most of the operating system functionality provided by separate processes. The microkernel provides just enough to implement processes.

This does have advantages. For example, an error in the file server cannot corrupt memory in the process server, since they have separate address spaces (they are, after all, separate processes). Confining the effect of errors makes them easier to track down. Also, an error in the ethernet driver can corrupt or stop network communication, but it cannot crash the system as a whole.

But the microkernel approach does mean that when a (real) user process makes a system call, there are more process switches. These are not free.

Related to microkernels is the idea of putting the mechanism in the kernel, but not the policy. For example, the kernel would know how to select the highest priority process and run it, but some user-mode process would assign the priorities. One could envision changing the priority scheme being a relatively minor event compared to the situation in monolithic systems where the entire kernel must be relinked and rebooted.

Microkernels Not So Different In Practice

Dennis Ritchie, the inventor of the C programming language and co-inventor, with Ken Thompson, of Unix, was interviewed in February 2003. The following is from that interview.

What's your opinion on microkernels vs. monolithic?

Dennis Ritchie: They're not all that different when you actually use them. "Micro" kernels tend to be pretty large these days, and "monolithic" kernels with loadable device drivers are taking up more of the advantages claimed for microkernels.

I should note, however, that the Minix microkernel (excluding the processes) is quite small, about 4000 lines.

1.7.4 Client-Server

When implemented on one computer, a client-server OS often uses the microkernel approach in which the microkernel just handles communication between clients and servers, and the main OS functions are provided by a number of separate processes.

A distributed system can be thought of as an extension of the client server concept where the servers are remote.

Today, with plentiful memory, each machine would have all the different servers. So the only reason an OS-internal message would go to another computer is if the originating process wished to communicate with a specific process on that computer (for example, to access a remote disk).

Distributed systems are becoming increasingly important for application programs. Perhaps the program needs data found only on certain machines (no one machine has all the data). For example, think of (legal, of course) file sharing programs.

Homework: 24

1.7.5 Virtual Machines

Use a hypervisor (i.e., beyond a supervisor, i.e., beyond a normal OS) to switch between multiple operating systems. A more modern name for a hypervisor is a Virtual Machine Monitor (VMM).

VM/370

The hypervisor idea was made popular by IBM's CP/CMS (now VM/370). CMS stood for Cambridge Monitor System since it was developed at IBM's Cambridge (MA) Scientific Center. It was later renamed, with the same acronym (an IBM specialty, cf. RAID), to Conversational Monitor System.

Virtual Machines Rediscovered

Recently, virtual machine technology has moved to machines (notably x86) that are not fully virtualizable. Recall that when CMS executed a privileged instruction, the hardware trapped to the real operating system. On x86, privileged instructions are ignored when executed in user mode, so running the guest OS in user mode won't work. Bye-bye (traditional) hypervisor. But a new style emerged in which the hypervisor runs, not on the hardware, but on the host operating system. See the text for a sketch of how this (and another idea, paravirtualization) works. An important academic advance was Disco from Stanford, which led to the successful commercial product VMware.

Sanity Restored

Both AMD and Intel have extended the x86 architecture to better support virtualization. The newest processors produced today (2008) by both companies now support an additional (higher) privilege mode for the VMM. The guest OS now runs in the old privileged mode (for which it was designed) and the hypervisor/VMM runs in the new higher privileged mode from which it is able to monitor the usage of hardware resources by the guest operating system(s).

The Java Virtual Machine

The idea is that a new (rather simple) computer architecture called the Java Virtual Machine (JVM) was invented but not built (in hardware). Instead, interpreters for this architecture are implemented in software on many different hardware platforms. Each interpreter is also called a JVM. The Java compiler transforms Java into instructions for this new architecture, which can then be interpreted on any machine for which a JVM exists.

This has portability as well as security advantages, but at a cost in performance.

Of course Java can also be compiled to native code for a particular hardware architecture, and other languages can be compiled into instructions for a software-implemented virtual machine (e.g., Pascal with its P-code).

1.7.6 Exokernels

Similar to VM/CMS but the virtual machines have disjoint resources (e.g., distinct disk blocks) so less remapping is needed.

1.8 The World According to C

1.8.1 The C Language

Assumed knowledge.

1.8.2 Header Files

Assumed knowledge.

1.8.3 Large Programming Projects

Mostly assumed knowledge. Linkers are very briefly discussed. Our earlier discussion was much more detailed.

1.8.4 The Model of Run Time

Extremely brief treatment, with only a few points made about the running of the operating system itself.

1.9 Research on Operating Systems

Skipped

1.10 Outline of the Rest of this Book

Skipped

1.11 Metric Units

Assumed knowledge. Note that what is covered is just the prefixes, i.e. the names and abbreviations for various powers of 10.

1.12 Summary

Skipped, but you should read and be sure you understand it (about 2/3 of a page).

Chapter 2 Process and Thread Management

Tanenbaum's chapter title is Processes and Threads. I prefer to add the word management. The subject matter is processes, threads, scheduling, interrupt handling, and IPC (InterProcess Communication—and Coordination).

2.1 Processes

Definition: A process is a program in execution.

2.1.1 The Process Model

Even though in actuality there are many processes running at once, the OS gives each process the illusion that it is running alone.

Virtual time and virtual memory are examples of abstractions provided by the operating system to the user processes so that they experience a more pleasant virtual machine than actually exists.

2.1.2 Process Creation

From the users' or external viewpoint there are several mechanisms for creating a process.

  1. System initialization, including daemon (see below) processes.
  2. Execution of a process creation system call by a running process.
  3. A user request to create a new process.
  4. Initiation of a batch job.

But looked at internally, from the system's viewpoint, the second method dominates. Indeed, in early Unix only one process was created at system initialization (the process is called init); all the others are descendants of this first process.

Why have init? That is, why not have all processes created via method 2?
Ans: Because without init there would be no running process to create any others.

Definition of Daemon

Many systems have daemon processes lurking around to perform tasks when they are needed. I was pretty sure the terminology was related to mythology, but didn't have a reference until a student found The {Searchable} Jargon Lexicon at http://developer.syndetic.org/query_jargon.pl?term=demon

daemon: /day'mn/ or /dee'mn/ n. [from the mythological meaning, later rationalized as the acronym `Disk And Execution MONitor'] A program that is not invoked explicitly, but lies dormant waiting for some condition(s) to occur. The idea is that the perpetrator of the condition need not be aware that a daemon is lurking (though often a program will commit an action only because it knows that it will implicitly invoke a daemon). For example, under {ITS}, writing a file on the LPT spooler's directory would invoke the spooling daemon, which would then print the file. The advantage is that programs wanting (in this example) files printed need neither compete for access to nor understand any idiosyncrasies of the LPT. They simply enter their implicit requests and let the daemon decide what to do with them. Daemons are usually spawned automatically by the system, and may either live forever or be regenerated at intervals. Daemon and demon are often used interchangeably, but seem to have distinct connotations. The term `daemon' was introduced to computing by CTSS people (who pronounced it /dee'mon/) and used it to refer to what ITS called a dragon; the prototype was a program called DAEMON that automatically made tape backups of the file system. Although the meaning and the pronunciation have drifted, we think this glossary reflects current (2000) usage.

As is often the case, wikipedia.org proved useful. Here is the first paragraph of a more thorough entry. The wikipedia also has entries for other uses of daemon.

In Unix and other computer multitasking operating systems, a daemon is a computer program that runs in the background, rather than under the direct control of a user; they are usually instantiated as processes. Typically daemons have names that end with the letter "d"; for example, syslogd is the daemon which handles the system log.

2.1.3 Process Termination

Again, from the outside there appear to be several termination mechanisms.

  1. Normal exit (voluntary).
  2. Error exit (voluntary).
  3. Fatal error (involuntary).
  4. Killed by another process (involuntary).

And again, internally the situation is simpler. In Unix terminology, there are two system calls, kill and exit, that are used. Kill (poorly named in my view) sends a signal to another process. If this signal is not caught (via the signal system call), the process is terminated. There is also an uncatchable signal. Exit is used for self termination and can indicate success or failure.
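A small, runnable C sketch of the Unix side (the signal names are standard; the scenario is invented): the process below catches SIGINT via the signal system call, so typing cntl-C does not terminate it, but another process could still terminate it with kill(pid, SIGKILL), since SIGKILL is the uncatchable signal.

    #include <signal.h>
    #include <stdlib.h>
    #include <unistd.h>

    static void handler(int sig)
    {
        (void)sig;   /* the signal was caught, so the process is not terminated */
    }

    int main(void)
    {
        signal(SIGINT, handler);   /* catch SIGINT (cntl-C) */
        pause();                   /* sleep until some signal arrives */
        exit(0);                   /* self termination, reporting success */
    }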

2.1.4 Process Hierarchies

Modern general purpose operating systems permit a user to create and destroy processes.

Old or primitive operating systems like MS-DOS are not fully multiprogrammed, so when one process starts another, the first process is automatically blocked and waits until the second is finished. This implies that the process tree degenerates into a line.


2.1.5 Process States and Transitions

The diagram on the right contains much information. I often include it on exams.

Homework: 1.