Start Lecture #3
The final exam will be during finals week. Specifically it will be in Room 102 WWH on Thursday 18 December at 5 PM. Note that this is NOT the room in which the class meets, but it is in the same building.
Tanenbaum's chapter title is Processes and Threads. I prefer to add the word management.
The subject matter is processes, threads, scheduling, interrupt
handling, and IPC (InterProcess Communication—and
Coordination).
Definition: A process is a program in execution.
Even though in actuality there are many processes running at once, the OS gives each process the illusion that it is running alone.
Virtual time: the time used by just this process; it progresses at a rate independent of the other processes. (This is a slight exaggeration: virtual time is typically incremented a little during the system calls used for process switching, so some overhead virtual time occurs.)
Virtual time and virtual memory are examples of abstractions provided by the operating system to the user processes so that the latter experiences a more pleasant virtual machine than actually exists.
From the users' or external viewpoint there are several mechanisms for creating a process.
But looked at internally, from the system's viewpoint, the second method (a process-creation system call executed by an already running process) dominates. Indeed, in early Unix only one process is created at system initialization (the process is called init); all the others are descendants of this first process.
Why have init?
That is, why not have all processes created via method 2?
Ans: Because without init there would be no running process to create
any others.
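To make the system's viewpoint concrete, here is a minimal C sketch of method 2 on a Unix-like system: the parent creates a child with fork, the child replaces itself with another program via execlp (running ls is just an illustrative choice), and the parent waits for the child to finish.

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();          /* create a new process; the child is a copy of the parent */

    if (pid < 0) {               /* fork failed: no child was created */
        perror("fork");
        exit(EXIT_FAILURE);
    }

    if (pid == 0) {              /* child: replace the copied image with a new program */
        execlp("ls", "ls", "-l", (char *)0);
        perror("execlp");        /* reached only if the exec failed */
        _exit(EXIT_FAILURE);
    }

    /* parent: wait for the child to terminate and report its exit status */
    int status;
    waitpid(pid, &status, 0);
    if (WIFEXITED(status))
        printf("child exited with status %d\n", WEXITSTATUS(status));
    return 0;
}
```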
Many systems have daemon processes lurking around to perform tasks when they are needed.
I was pretty sure the terminology was related to mythology, but
didn't have a reference until a student found
The {Searchable} Jargon Lexicon
at http://developer.syndetic.org/query_jargon.pl?term=demon
daemon: /day'mn/ or /dee'mn/ n. [from the mythological meaning, later rationalized as the acronym `Disk And Execution MONitor'] A program that is not invoked explicitly, but lies dormant waiting for some condition(s) to occur. The idea is that the perpetrator of the condition need not be aware that a daemon is lurking (though often a program will commit an action only because it knows that it will implicitly invoke a daemon). For example, under {ITS}, writing a file on the LPT spooler's directory would invoke the spooling daemon, which would then print the file. The advantage is that programs wanting (in this example) files printed need neither compete for access to nor understand any idiosyncrasies of the LPT. They simply enter their implicit requests and let the daemon decide what to do with them. Daemons are usually spawned automatically by the system, and may either live forever or be regenerated at intervals. Daemon and demon are often used interchangeably, but seem to have distinct connotations. The term `daemon' was introduced to computing by CTSS people (who pronounced it /dee'mon/) and used it to refer to what ITS called a dragon; the prototype was a program called DAEMON that automatically made tape backups of the file system. Although the meaning and the pronunciation have drifted, we think this glossary reflects current (2000) usage.
As is often the case, wikipedia.org proved useful. Here is the first paragraph of a more thorough entry. The wikipedia also has entries for other uses of daemon.
In Unix and other computer multitasking operating systems, a daemon is a computer program that runs in the background, rather than under the direct control of a user; they are usually instantiated as processes. Typically daemons have names that end with the letter "d"; for example, syslogd is the daemon which handles the system log.
Again, from the outside there appear to be several termination mechanisms.
And again, internally the situation is simpler.
In Unix terminology, the two system calls used are kill and exit. Kill (poorly named in my view) sends a signal to another process.
If this signal is not caught (via the
signal system call) the process is terminated.
There is also an uncatchable
signal.
Exit is used for self-termination and can indicate success or failure.
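A small C sketch of these calls (the handler and the use of SIGTERM here are purely illustrative): the process installs a handler with signal, sends itself a signal with kill (the target of kill is normally some other process), and terminates with exit. SIGKILL is the uncatchable signal.

```c
#include <signal.h>
#include <stdlib.h>
#include <unistd.h>

static void on_sigterm(int sig)    /* runs when SIGTERM is delivered and caught     */
{
    (void)sig;
    write(1, "caught SIGTERM\n", 15);   /* write() is safe to call inside a handler */
}

int main(void)
{
    signal(SIGTERM, on_sigterm);   /* catch SIGTERM; SIGKILL can never be caught    */

    /* Send a signal; here the target is the process itself, but normally it would
       be some other process identified by its pid.                                 */
    kill(getpid(), SIGTERM);

    exit(EXIT_SUCCESS);            /* self-termination, indicating success          */
}
```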
Modern general purpose operating systems permit a user to create and destroy processes.
Old or primitive operating systems like MS-DOS are not fully multiprogrammed, so when one process starts another, the first process is automatically blocked and waits until the second is finished. This implies that the process tree degenerates into a line.
The diagram on the right contains much information.
Homework: 1.
One can organize an OS around the scheduler: write a minimal kernel (a micro-kernel) consisting of the scheduler, interrupt handlers, and IPC (interprocess communication), and run the rest of the OS as separate processes. The Minix operating system works this way.
The OS organizes the data about each process in a table naturally called the process table. Each entry in this table is called a process table entry or process control block (PCB).
(I have often referred to a process table entry as a PTE, but this is bad since I recently realized that I use PTE for two different things: Process Table Entry and Page Table Entry. Since the latter is very common, I must stop using PTE to abbreviate the former. Please correct me if I slip up.)
Characteristics of the process table. Note in particular that an active entity (a process) becomes a data structure when looked at from a lower level.
This should be compared with the addendum on transfer of control.
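To make that "data structure" point concrete, here is a rough sketch of what a process table entry might contain. The field names are invented for illustration; real systems (struct proc in classic Unix, task_struct in Linux) differ considerably in detail.

```c
/* A hypothetical process control block (PCB); every field name is illustrative only. */
enum proc_state { RUNNING, READY, BLOCKED };

struct pcb {
    int             pid;            /* process identifier                    */
    enum proc_state state;          /* running, ready, or blocked            */
    unsigned long   pc;             /* saved program counter                 */
    unsigned long   sp;             /* saved stack pointer                   */
    unsigned long   registers[16];  /* saved general registers               */
    void           *page_table;     /* memory-management information         */
    int             open_files[20]; /* file-management information           */
    long            cpu_time_used;  /* accounting information                */
    struct pcb     *parent;         /* process-hierarchy information         */
};

struct pcb process_table[64];       /* one entry per process                 */
```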
In a well defined location in memory (specified by the hardware) the OS stores an interrupt vector, which contains the address of the interrupt handler.
Assume a process P is running and a disk interrupt occurs for the completion of a disk read previously issued by process Q, which is currently blocked. Note that disk interrupts are unlikely to be for the currently running process (because the process that initiated the disk access is likely blocked).
Note that the interrupt is asynchronous: it is not the case that the instruction P was executing caused the interrupt (contrast this with a trap).
Note also that the interrupt handler and the rest of the kernel are parts of the same program, namely the OS, and hence might well be using the same variables. We will soon see how this can cause great problems even in what appear to be trivial cases.
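The following toy simulation illustrates only the bookkeeping the C part of a disk-interrupt handler performs: mark Q ready and invoke the scheduler, which need not resume P. Every name here is hypothetical; the real work (saving registers, switching stacks, talking to the disk controller) is done by assembly code and hardware.

```c
#include <stdio.h>

enum proc_state { RUNNING, READY, BLOCKED };
struct pcb { int pid; enum proc_state state; };

static struct pcb P = { 1, RUNNING };   /* the process that happened to be running */
static struct pcb Q = { 2, BLOCKED };   /* blocked awaiting its disk read          */

static void schedule(void)
{
    /* A real scheduler chooses among all ready processes; the point is only that
       once Q is ready, the next process to run need not be P.                     */
    printf("after the interrupt: P is %s, Q is %s\n",
           P.state == READY ? "ready" : "not ready",
           Q.state == READY ? "ready" : "not ready");
}

static void disk_interrupt_handler(void)
{
    P.state = READY;    /* P was running; it remains runnable                      */
    Q.state = READY;    /* Q's disk read has completed, so Q is no longer blocked  */
    schedule();         /* decide who runs next                                    */
}

int main(void)
{
    disk_interrupt_handler();   /* pretend the disk interrupt has just arrived     */
    return 0;
}
```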
Consider a job that is unable to compute (i.e., it is waiting for I/O) a fraction p of the time. With only that one job loaded, the CPU is utilized just a fraction 1-p of the time. If n such jobs are loaded (a multiprogramming level, or MPL, of n) and their I/O waits are independent, the probability that all n are waiting for I/O at once is p^n, so the CPU utilization rises to 1-p^n.
This is a crude model: it assumes that the probability that process A is waiting for I/O is independent of the probability that B is waiting for I/O. That is, it assumes those events are independent. Nonetheless, it is correct that increasing MPL does increase CPU utilization up to a point.
The limitation is memory, which is why the 2e discussed it in the memory management chapter instead of here. That is, we must have many jobs loaded at once, which means we must have enough memory for them. There are other memory-related issues as well and we will discuss them later in the course.
Some of the CPU utilization is time spent in the OS executing context switches, so the gains are not as great as the crude model predicts.
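A few lines of C tabulate 1-p^n as the MPL grows; p = 0.8 (a job that waits for I/O 80% of the time) is just an example value.

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    double p = 0.8;                       /* fraction of time a job waits for I/O */

    for (int n = 1; n <= 6; n++)          /* n = multiprogramming level (MPL)     */
        printf("MPL %d: CPU utilization = %.0f%%\n", n, 100.0 * (1.0 - pow(p, n)));

    return 0;
}
```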
Homework: 5.
| Per process items | Per thread items |
| --- | --- |
| Address space | Program counter |
| Global variables | Machine registers |
| Open files | Stack |
| Child processes | |
| Pending alarms | |
| Signals and signal handlers | |
| Accounting information | |
The idea behind threads is to have separate threads of control (hence the name) running in the address space of a single process as shown in the diagram to the right. An address space is a memory management concept. For now think of an address space as having two components: the memory in which a process runs, and the mapping from the virtual addresses (addresses in the program) to the physical addresses (addresses in the machine). The table on the left shows which properties are common to all threads in a given process and which properties are thread specific.
Each thread is somewhat like a process (e.g., it shares the processor with other threads) but a thread contains less state than a process (e.g., the address space belongs to the process in which the thread runs.)
Often, when a process A is blocked (say for I/O) there is still computation that can be done. Another process B can't do this computation since it doesn't have access to A's memory. But two threads in the same process do share memory so that problem doesn't occur.
An important modern example is a multithreaded web server.
Each thread is responding to a single WWW connection.
While one thread is blocked on I/O, another thread can be processing
another WWW connection.
Question: Why not use separate processes, i.e.,
what is the shared memory?
Answer: The cache of frequently referenced pages.
A common organization is to have a dispatcher thread that fields requests and then passes each request on to an idle worker thread. Since the dispatcher and worker share memory, passing the request is very low overhead.
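Here is a minimal pthread sketch of the dispatcher/worker organization; the shared queue of integer "requests" stands in for both the incoming connections and the shared cache, and all names are invented for the illustration (a real web server would, of course, read from sockets).

```c
#include <pthread.h>
#include <stdio.h>

#define NWORKERS  4
#define NREQUESTS 6      /* pretend requests                     */
#define QSIZE     16     /* capacity of the shared request queue */

/* Shared state the dispatcher and workers communicate through;
   separate processes could not share it so cheaply.             */
static int queue[QSIZE];
static int head = 0, tail = 0;
static pthread_mutex_t lock     = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  nonempty = PTHREAD_COND_INITIALIZER;

static void enqueue(int req)                 /* called by the dispatcher        */
{
    pthread_mutex_lock(&lock);
    queue[tail++ % QSIZE] = req;
    pthread_cond_signal(&nonempty);          /* wake one idle worker, if any    */
    pthread_mutex_unlock(&lock);
}

static void *worker(void *arg)
{
    long id = (long)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (head == tail)                 /* nothing to do: block, releasing the lock */
            pthread_cond_wait(&nonempty, &lock);
        int req = queue[head++ % QSIZE];
        pthread_mutex_unlock(&lock);

        if (req < 0)                         /* sentinel value: shut down       */
            return NULL;
        printf("worker %ld handling request %d\n", id, req);
    }
}

int main(void)
{
    pthread_t workers[NWORKERS];
    for (long i = 0; i < NWORKERS; i++)
        pthread_create(&workers[i], NULL, worker, (void *)i);

    for (int r = 1; r <= NREQUESTS; r++)     /* the dispatcher loop              */
        enqueue(r);
    for (int i = 0; i < NWORKERS; i++)       /* one shutdown sentinel per worker */
        enqueue(-1);

    for (int i = 0; i < NWORKERS; i++)
        pthread_join(workers[i], NULL);
    return 0;
}
```

Note that the mutex and condition variable are needed precisely because the threads share memory; this foreshadows the coordination problems discussed later.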
Another example is a producer-consumer problem (c.f. below) in which we have 3 threads in a pipeline. One thread reads data from an I/O device into a buffer, the second thread performs computation on the input buffer and places results in an output buffer, and the third thread outputs the data found in the output buffer. Again, while one thread is blocked the others can execute.
Really you want 2 (or more) input buffers and 2 (or more) output buffers. Otherwise the middle thread would be using all the buffers and would block both outer threads.
Question: Why does each thread block?
Answer: The input thread blocks when it must wait for the device or has no empty input buffer to fill; the computing thread blocks when it has no full input buffer to process or no empty output buffer to fill; and the output thread blocks when it has no full output buffer to write or must wait for the device.
A final (related) example is that an application wishing to perform automatic backups can have a thread to do just this. In this way the thread that interfaces with the user is not blocked during the backup. However some coordination between threads may be needed so that the backup is of a consistent state.
A process contains a number of resources such as address space, open files, accounting information, etc. In addition to these resources, a process has a thread of control, e.g., program counter, register contents, stack. The idea of threads is to permit multiple threads of control to execute within one process. This is often called multithreading and threads are sometimes called lightweight processes. Because threads in the same process share so much state, switching between them is much less expensive than switching between separate processes.
Individual threads within the same process are not completely independent. For example there is no memory protection between them. This is typically not a security problem as the threads are cooperating and all are from the same user (indeed the same process). However, the shared resources do make debugging harder. For example one thread can easily overwrite data needed by another and if one thread closes a file other threads can't read from it.
A new thread in the same process is created by a library routine named something like thread_create; similarly there is thread_exit. The analogue to waitpid is thread_join (the name comes presumably from the fork-join model of parallel execution).
The routine thread_yield, which relinquishes the processor, does not have a direct analogue for processes. The corresponding system call (if it existed) would move the process from running to ready.
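In POSIX threads, for example, these routines are pthread_create, pthread_exit, and pthread_join, and sched_yield plays the role of thread_yield. A minimal sketch:

```c
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *hello(void *arg)
{
    printf("thread %ld running\n", (long)arg);
    sched_yield();              /* politely give up the processor: running -> ready */
    pthread_exit((void *)0);    /* analogue of exit, but for a single thread        */
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, hello, (void *)1L);   /* create a new thread           */
    pthread_join(t, NULL);                         /* analogue of waitpid           */
    printf("joined\n");
    return 0;
}
```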
Homework: 11.
Assume a process has several threads. What should we do if one of these threads