Note: I will send mail to the class list when the exam and course grades are ready -- probably Monday or Tuesday next week. Send me email if you want to know your grades. Solutions to the exam will be posted on the Web some time next week.
That being said: The first two events must be H and G, in that order. These are followed by J and A -- almost certainly in that order, but possibly in the order A, J. If control returns to P, then the next events are E and F, in either order, followed by B. If control is passed to Q, then E, F, and I occur in any order.
C and D do not occur. A very common error was to include C, but P is not blocked. It is interrupted by the kernel, and it may be preempted in favor of Q.
Answer: The twelve low-order bits are the offset O; the higher-order bits are the page number P. The value of PageTable[P] is the frame number F. The physical address PA has its high-order bits set to F and its low-order bits set to O.
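The translation above can be sketched in a few lines of Python. This is a hypothetical illustration, assuming 4 KB pages (hence the 12-bit offset) and a simple one-level page table represented as a dictionary:

```python
# Sketch of virtual-to-physical translation with 4 KB pages.
PAGE_OFFSET_BITS = 12
PAGE_SIZE = 1 << PAGE_OFFSET_BITS  # 4096 bytes

def translate(virtual_address, page_table):
    offset = virtual_address & (PAGE_SIZE - 1)   # low 12 bits: O
    page = virtual_address >> PAGE_OFFSET_BITS   # remaining high bits: P
    frame = page_table[page]                     # F = PageTable[P]
    # PA: high-order bits F, low-order bits O
    return (frame << PAGE_OFFSET_BITS) | offset
```

For example, if page 3 maps to frame 7, then virtual address (3 << 12) + 0x2A translates to (7 << 12) + 0x2A.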
Answer: Both algorithms are based on the assumption that pages that have not been recently referenced are likely not to be referenced again soon. The LRU measures the time of the most recent reference exactly. The NRU measures the time of the most recent reference only crudely, dividing it into two categories: since the last clock tick and before the last clock tick. Thus, much less information is available for choosing a page to replace, so the quality of choice is inferior.
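The difference in available information can be made concrete with a small sketch. This is an illustrative simplification (the page names and data structures are hypothetical, and full NRU also consults the modified bit, which is omitted here):

```python
def lru_victim(last_ref_time):
    # LRU: an exact timestamp per page -> a unique least-recently-used page.
    return min(last_ref_time, key=last_ref_time.get)

def nru_victim(referenced_bit):
    # NRU: one bit per page, cleared at each clock tick. Pages fall into
    # just two classes -- referenced since the last tick or not -- so ties
    # within a class must be broken arbitrarily.
    not_recent = [p for p, r in referenced_bit.items() if not r]
    return (not_recent or list(referenced_bit))[0]
```

LRU always finds the single oldest page; NRU can only pick some page from the "not recently referenced" class.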
Answer: Implementing LRU requires that the page be time-stamped at each memory reference (i.e., two such time-stamps per machine instruction). The hardware to support this is certainly expensive and probably non-existent.
Answer: It can be used, after the fact, to establish a lower bound on the minimum number of page faults, which can be used to gauge how well the actual page replacement algorithm is working.
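The optimal ("Belady") policy evicts the page whose next use lies farthest in the future; run after the fact on a recorded reference string, its fault count is the lower bound described above. Here is a sketch (the reference-string format is an assumption for illustration):

```python
def optimal_faults(refs, frames):
    """Count page faults under optimal replacement for a recorded
    reference string `refs` with `frames` page frames."""
    resident, faults = set(), 0
    for i, page in enumerate(refs):
        if page in resident:
            continue
        faults += 1
        if len(resident) >= frames:
            # Evict the resident page whose next use is farthest away
            # (or that is never used again).
            def next_use(p):
                try:
                    return refs.index(p, i + 1)
                except ValueError:
                    return float('inf')
            resident.remove(max(resident, key=next_use))
        resident.add(page)
    return faults
```

Comparing a real policy's fault count against this number shows how much room for improvement remains.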
Almost everyone got part (C) correct, which surprised me a little, as it is a comparatively abstract and minor point.
Mostly, errors on this problem were of two types. One was to confuse starvation with deadlock. The other was to confuse starvation with long waits of one kind or another. Long waits are sometimes inevitable. Starvation is different: it is a situation, such as those above, in which the system logically could eventually run process P (unlike deadlock) but never does, because it always prefers to run something else.
Answer: Each block holds 2^8 = 256 addresses (= 1 KByte / 4 Bytes). Therefore the third-level indirect block points to 2^8 second-level indirect blocks. Each of these points to 2^8 indirect blocks. Each of these points to 2^8 data blocks. Each of these holds 2^10 bytes. So the total size of the data indexed under the third-level indirect block is 2^8 * 2^8 * 2^8 * 2^10 = 2^34 bytes = 16 GBytes. The total size of all the other blocks is comparatively negligible (about 1/256 the size). So the answer is "Between 2^34 and 2^35 bytes."
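The arithmetic above can be checked directly, assuming the same parameters (1 KB blocks, 4-byte block addresses):

```python
BLOCK_SIZE = 1 << 10                 # 2^10 bytes per block
ADDRS_PER_BLOCK = BLOCK_SIZE // 4    # 2^8 = 256 addresses per block

# Data reachable through the triple-indirect block:
# 2^8 * 2^8 * 2^8 data blocks, each of 2^10 bytes.
triple = ADDRS_PER_BLOCK ** 3 * BLOCK_SIZE   # 2^34 bytes = 16 GBytes
```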
Only one student got this correct, though three or four other students were within a factor of 100 or so.
Answer: If one creates a hard link from the path name "/B/Y" to the file /A/X, then the directory B holds, under the name Y, the actual disk address of this same file. That is, the relation between the directory B and the file is exactly the same as the relation between the directory A and the file. If the file is deleted under the name /A/X it is still linked under the name /B/Y.
By contrast, if one creates a symbolic link from path name "/B/Y" to file /A/X, all this does is create a new file /B/Y which holds the name of the path "/A/X" plus some flag that this is a symbolic link. If file /A/X is now deleted, this link fails to refer. If a new file /A/X is created, this link now points to the new file.
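On a POSIX system the contrast is easy to demonstrate. This sketch (file names are made up for illustration) deletes the original name and then observes that the hard link still reaches the data while the symbolic link dangles:

```python
import os
import tempfile

d = tempfile.mkdtemp()
orig = os.path.join(d, "X")
with open(orig, "w") as f:
    f.write("data")

hard = os.path.join(d, "Y_hard")
soft = os.path.join(d, "Y_soft")
os.link(orig, hard)     # hard link: a second directory entry for the same inode
os.symlink(orig, soft)  # symbolic link: a file that merely holds the path to X

os.remove(orig)                # delete the original name
print(open(hard).read())       # prints "data": the file itself survives
print(os.path.exists(soft))    # prints False: the symlink now points at nothing
```

If a new file were created at the original path, the symbolic link would silently refer to the new file, while the hard link would still name the old one.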
One of the quantities that Amy is measuring is the number of blocks read from disk. When N=1, the process reads M blocks from disk. Amy expects, therefore, that in general N processes will read N*M blocks.
What she finds is quite different. For small values of N, the number of blocks read is much smaller than N*M; in fact, it's barely larger than M, regardless of N. For large values of N, the number of blocks read is much larger than N*M.
Explain these findings. (2 or 3 sentences is fine; in fact, I'm basically looking for two key words.)
Answer: The two keywords are "cache" and "thrash". Since all the processes are executing the same code at pretty much the same time, they will end up reading the same blocks of the same files at the same time. Hence, after the block has been first read by one process, it will almost certainly be available in the cache for all the other processes, so each block read done for a single process suffices for all the processes, as long as there are few processes.
However, if there are so many processes that the working sets for the processes exceed the size of physical memory, then the system will have to do a large amount of paging in and out (or swapping) to run the processes. This involves a large number of disk reads that were not needed when a single instance of the process was running.
Answer: Though physically separate, video RAM is, from the point of view of the CPU, just part of the same physical address space as ordinary RAM. That is, there is a fixed range of physical addresses that refers to video RAM.
Answer: For each pixel in the display there is a corresponding set of 3 (or 4) bytes in video RAM. These hold 3 8-bit integers between 0 and 255, expressing the intensity of the three colors red, blue, and green.
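The pixel-to-byte correspondence can be sketched as follows. This assumes a hypothetical linear framebuffer laid out row by row, with 4 bytes per pixel (three color bytes plus one unused byte):

```python
BYTES_PER_PIXEL = 4  # red, green, blue, plus one unused padding byte

def pixel_offset(x, y, width):
    # Byte offset of pixel (x, y) in a row-major framebuffer
    # of `width` pixels per row.
    return (y * width + x) * BYTES_PER_PIXEL

def set_pixel(vram, x, y, width, r, g, b):
    # Write the three 0..255 intensity bytes for one pixel.
    off = pixel_offset(x, y, width)
    vram[off:off + 3] = bytes((r, g, b))
```

Writing to these addresses changes what appears on the screen, since the video hardware continuously reads this memory to refresh the display.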
Answer: In a download/upload model, the entire file is downloaded from server to client when the file is opened. When the file is closed, if it has been modified, it is uploaded from client to server. In a remote access model, blocks are downloaded as needed from server to client and uploaded when modified from client to server.
Advantages of download/upload: (1) Easier to implement. (2) More efficient if the client is actually going to read the entire file. (3) The client can continue to read the file if the server or the network connection subsequently crashes.
Advantages of remote access: (1) More efficient if the client is only going to read a small fraction of the file. (2) Much easier to maintain reasonable consistency if several clients are simultaneously working on the same file.