Describe two ways in which the virtual machine that the OS presents
to a user process differs from the actual machine. (Your answer
should be brief -- one or two sentences for each difference --
but more than a single phrase. "I/O" or "Memory management" is
not an acceptable answer.)
- In the virtual machine, each process "thinks" it has all
of memory; in the actual machine, several processes share memory.
- In the virtual machine, each process has the CPU all the time;
in the actual machine, the CPU switches between processes.
- Processes in the virtual machine interact with files; in actuality,
files are an abstract organization of the disk (or other storage
device) by the OS.
- In the virtual machine, system calls such as I/O appear simple;
in reality, the operations they invoke are complex.
In UNIX, what does the "fork" system call do? What value does it return?
"fork" spawns a new process, which is a clone of the parent process.
"fork" returns 0 to the child process. It returns the process ID of the new
child to the parent process.
Suppose that process P is ready, and the scheduler decides to resume
running it. How does it determine the address of the next instruction
to run in P?
Answer: The address will have been saved in the process table
entry for P when P last stopped running.
Threads may be implemented in user space or in kernel space.
Describe one advantage of each approach.
The major advantages of a user space implementation are:
- It may be possible to implement threads on an existing OS, with
few or no changes to the kernel.
- The process can switch between threads without incurring the
overhead of a kernel trap.
- The user can tailor the scheduling algorithm to the particular
application.
The major disadvantages of a user space implementation are:
- It is difficult to block a thread without blocking the entire process.
- The user is obliged to write a scheduler, thus to some extent repeating
work already done in the kernel. Some scheduling algorithms, such as
round robin, are difficult to implement at the user level.
Consider the following situation: Semaphore s has an initial value of 2.
There are four active processes, A, B, C, and D. The following
events happen in order:
A executes P(s)
B executes P(s)
C executes P(s)
D executes P(s)
A executes V(s)
Which processes are now blocked and which are unblocked? There can
be more than one possible scenario, so I am looking for an answer of
the following kind: "Either A,D are blocked and B,C are unblocked or
B is blocked and A,C,D are unblocked."
Answer: A and B decrement s to 0, and C and D block on s.
When A executes V(s), either C or D may unblock. Therefore at the end
either A,B,C are unblocked and D is blocked, or A,B,D are unblocked and C
is blocked.
Consider the following situation. The OS uses a round robin scheduler.
The FIFO queue of ready processes holds three processes A,B,C in that order.
The time quantum is 18 msec. A context switch takes 2 msec. After running
for 13 msec, B will block to do a disk read, which will take 30 msec
to complete. Trace what will happen over the first 100 msec. What
is the CPU efficiency over the first 100 msec?
Time Activity Ready queue
0-18 Run A B,C
18-20 Context switch B,C,A
20-33 Run B. B blocks at 33 C,A
33-35 Context switch C,A
35-53 Run C. A
53-55 Context switch A,C
55-63 Run A. C
63 B unblocks C,B
63-73 Continue to run A. C,B
73-75 Context switch C,B,A
75-93 Run C B,A
93-95 Context switch B,A,C
95-100 Run B A,C
In the first 100 msec, the CPU runs user processes for 90 msec and
performs context switches for 10 msec. The CPU efficiency is therefore 90%.
What is starvation? Give an example of a scheduling algorithm that
is susceptible to starvation. How can this algorithm be modified
to avoid the problem?
Starvation occurs when one process is never run because the scheduler
always prefers some other process. Starvation can occur in such
scheduling algorithms as:
- Priority scheduling. If there is always a higher priority process
ready, a lower priority process will never be run.
- Shortest job first, or shortest remaining time first. If there
is a continual stream of shorter jobs being submitted to the system,
then a longer job will never be run.
All such schedulers can be modified to prevent starvation by using
priority aging. This requires that the age of the process
-- that is, the time that it has been ready but not run -- be combined
with whatever other criterion is used for scheduling in such a way
that eventually the age of the process outweighs all other considerations.
Suppose you have the following deadlocked situation. There are four
processes A, B, C, D, and four resources W, X, Y, Z.
A holds W and is requesting X and Y.
B holds X and Z and is requesting W.
C is requesting X.
D holds Y.
Which of the following actions, taken singly, will suffice to break the
deadlock? (There may be more than one correct answer; list all correct
answers. You need not explain your answer.)
- Kill process A.
- Kill process B.
- Kill process C.
- Kill process D.
- Preempt W and reassign it to B.
- Preempt X and reassign it to A.
- Preempt X and reassign it to C.
- Preempt Y and reassign it to A.
Answer: You can either kill A, kill B, reassign W to B, reassign
X to A, or reassign X to C. Reassigning X to C gives a state that is
unsafe, because if C completes and X is then given back to B, the
system will deadlock again; but it is not deadlocked, because if
X is given to A after C completes, the system will run to completion.