Suppose two nodes, A and B, communicate via messages and that the probability of receiving any message that is sent is P (0 < P < 1). You need not consider any other types of failures.
Solution:
Under round-robin scheduling the utilization increases
with the time quantum, so let us assume that the time quantum is larger
than 50 msec. After loading (and running) the first two processes, running
the third and fourth requires swapping out the first process to make room
in memory. After the first round, we need to bring the first
process back in from the swap space. Because process 4 is still performing I/O,
it cannot be swapped out; the second process is the one whose eviction
frees enough memory for the first process, so the second process
gets swapped out. But it will soon be swapped back in, in place
of processes 3 and 4 (process 1 cannot be swapped out because its address
space has to remain in memory while its I/O operation takes place). So,
in steady state a time window will look like:
Process 2 swapped out, process 1 swapped in, process 1 runs; processes 3 & 4 swapped out, process 2 swapped in, process 2 runs; process 1 swapped out, process 3 swapped in, process 3 runs; process 4 swapped in (no one needs to be swapped out to make room for process 4), process 4 runs; and the cycle repeats. The memory will look like:
(Figure omitted: memory layout over one steady-state cycle; Process 1 shown in red, Process 2 in blue, Process 3 in yellow, Process 4 in green.)
Now, to compute the utilization under this model: in each iteration each
process runs for 50 msec, for 4 × 50 = 200 msec of useful work. The time
wasted on swapping is:

64 × 20/8 (2 out) + 64 × 20/8 (1 in) + (32 + 16) × 20/8 (3 & 4 out) + 64 × 20/8 (2 in) + 64 × 20/8 (1 out) + 32 × 20/8 (3 in) + 16 × 20/8 (4 in) = 880 msec.

Utilization is 200 / (200 + 880) ≈ 18.5%.
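The arithmetic above can be checked with a short script (the process sizes of 64, 64, 32, and 16 MB and the 20/8 msec-per-MB swap cost are taken from the calculation above; the variable names are ours):

```python
# Swap cost per MB, as used in the calculation above: 20/8 msec per MB.
COST_PER_MB = 20 / 8

# Process sizes in MB (P1..P4), as implied by the figures above.
SIZE = {1: 64, 2: 64, 3: 32, 4: 16}

# Steady-state swap sequence for one iteration: (process, direction).
sequence = [
    (2, "out"), (1, "in"),             # evict P2, bring P1 back
    (3, "out"), (4, "out"), (2, "in"), # evict P3 & P4, bring P2 back
    (1, "out"), (3, "in"),             # evict P1, bring P3 back
    (4, "in"),                         # P4 fits without evicting anyone
]

swap_msec = sum(SIZE[p] * COST_PER_MB for p, _ in sequence)
run_msec = 4 * 50                      # each process runs 50 msec per iteration

print(f"swap time: {swap_msec:.0f} msec")                       # → 880 msec
print(f"utilization: {run_msec / (run_msec + swap_msec):.1%}")  # → 18.5%
```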
A better solution is to run the available processes
in memory for a few time slices, then swap them out and swap in other
processes, run them for a while, and so on.
For example:
Micro-level scheduling: round-robin. Macro-level
scheduling: each process is swapped out after 10 time slices. In this case,
the steady-state scenario would be: processes 3 & 4 swapped out,
processes 1 & 2 swapped in, processes 1 & 2 run for 10 time slices
each and get swapped out, processes 3 & 4 swapped in, processes 3 &
4 run for 10 time slices each, and so on.
Time wasted on swapping per full cycle: 48 × 20/8 (3 & 4 out) + 128 × 20/8
(1 & 2 in) + 128 × 20/8 (1 & 2 out) + 48 × 20/8 (3 & 4 in) = 880 msec,
while each of the four processes runs 10 slices of 50 msec per cycle, so
the utilization would be:
4 × 10 × 50 / (4 × 10 × 50 + 880) = 2000 / 2880 ≈ 69.4%.
This scheme could be optimized further by swapping out a process as soon as it finishes its 10th time slice, but the difference in utilization is not substantial.
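The per-cycle accounting for this scheme can also be checked with a short script (a sketch using the process sizes above; note that the 880 msec swap total spans one full cycle, in which each of the four processes runs ten 50 msec slices):

```python
COST_PER_MB = 20 / 8           # swap cost per MB, as above
SLICE_MSEC = 50                # one time slice
SLICES_PER_RESIDENCY = 10      # macro-level: swap out after 10 slices

# One full cycle alternates {P1, P2} (64 + 64 = 128 MB) with
# {P3, P4} (32 + 16 = 48 MB): 3 & 4 out, 1 & 2 in, 1 & 2 out, 3 & 4 in.
swap_msec = (48 + 128 + 128 + 48) * COST_PER_MB       # 880 msec
run_msec = 4 * SLICES_PER_RESIDENCY * SLICE_MSEC      # 4 processes x 10 slices

print(f"utilization: {run_msec / (run_msec + swap_msec):.1%}")  # → 69.4%
```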
Solution:
This is a nice trick, but it ignores a small problem. What happens if
the binary file changes while the program is running in main memory? If
the operating system evicts the code pages (during swapping out) but does
not save them on the swap space, and just brings them from the binary file
(during swapping in), then we are loading a different program into the
process. We could then have pages belonging to two different
programs in the process's text segment. The results of such a disastrous
mix are undefined at best. Operating systems that wish to play this trick
lock the binary file so that it cannot be modified while there is a process
that runs the code. A user who is trying to recompile a program that is
currently running in a process will get a message like "text file busy"
or some such.
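On Linux this lock shows up as the ETXTBSY errno: a write-open of a file that is currently being executed fails with "Text file busy". A minimal sketch (the helper name is ours):

```python
import errno

def try_overwrite(path):
    """Attempt to open `path` for writing, reporting 'text file busy'
    when the kernel refuses because the file is a running executable."""
    try:
        with open(path, "r+b"):
            return "writable"
    except OSError as e:
        if e.errno == errno.ETXTBSY:
            return "text file busy"
        raise
```

This is exactly the error a compiler (or `cp`) hits when writing over a running binary. Note that unlinking and recreating the file still succeeds: the old inode survives until the last process using it exits, which is why `install` and most build tools replace binaries that way.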