NOTE: These notes are by Allan Gottlieb, and are reproduced here, with superficial modifications, with his permission. "I" in this text generally refers to Prof. Gottlieb, except in regards to administrative matters.


================ Start Lecture #15 (Apr. 1) ================

5.2.2: Programmed I/O

5.2.3: Interrupt-Driven I/O

I/O Using DMA

5.3: I/O Software Layers (skipped with a heavy heart, but see picture)

Layers of abstraction as usual prove to be effective. Most systems are believed to use the following layers (but for many systems, the OS code is not available for inspection).

  1. User level I/O routines.
  2. Device independent I/O software.
  3. Device drivers.
  4. Interrupt handlers.

We will give a bottom-up explanation.

5.3.1: Interrupt Handlers

We discussed an interrupt handler before when studying page faults. Then it was called ``assembly language code''.

In the present case, we have a process blocked on I/O and the I/O event has just completed. So the goal is to make the process ready. There are several ways this can be done.

Once the process is ready, it is up to the scheduler to decide when it should run.
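Below is a minimal sketch in C (not from the notes; all names such as pcb, device_table, and the state constants are hypothetical) of what the handler must accomplish: find the process blocked on the device that interrupted, mark it ready, and leave the rest to the scheduler.

    #include <stddef.h>

    enum state { RUNNING, READY, BLOCKED };

    struct pcb {                        /* process control block */
        int pid;
        enum state state;
    };

    struct device {
        struct pcb *waiting;            /* process blocked on this device, if any */
    };

    struct device device_table[16];     /* hypothetical table, one entry per device */

    void io_interrupt_handler(int dev)
    {
        struct pcb *p = device_table[dev].waiting;
        if (p != NULL) {
            p->state = READY;           /* unblock: the process becomes ready */
            device_table[dev].waiting = NULL;
        }
        /* The scheduler, not the handler, decides when the process actually runs. */
    }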

5.3.2: Device Drivers

A device driver is the portion of the OS that ``knows'' the characteristics of the controller.

The driver has two ``parts'' corresponding to its two access points. Recall the following figure from the beginning of the course.

  1. Accessed by the main line OS via the envelope in response to an I/O system call. The portion of the driver accessed in this way is sometimes called the ``top'' part.
  2. Accessed by the interrupt handler when the I/O completes (this completion is signaled by an interrupt). The portion of the driver accessed in this way is sometimes called the ``bottom'' part.
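A minimal sketch, with hypothetical names, of these two access points: the ``top'' part invoked from the main line OS on an I/O system call, and the ``bottom'' part invoked from the interrupt handler when the I/O completes.

    struct io_request;                          /* generic request built by the OS */

    struct driver {
        void (*top)(struct io_request *req);    /* start or queue an I/O           */
        void (*bottom)(int status);             /* run when the I/O completes      */
    };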

Tanenbaum describes the actions of the driver assuming it is implemented as a process (which he recommends). I give both that viewpoint and the self-service paradigm, in which the driver is invoked by the OS acting on behalf of a user process (more precisely, the process shifts into kernel mode).

Driver in a self-service paradigm

  1. The user (A) issues an I/O system call.

  2. The main line, machine-independent OS prepares a generic request for the driver and calls (the top part of) the driver.
    1. If the driver was idle (i.e., the controller was idle), the driver writes device registers on the controller ending with a command for the controller to begin the actual I/O.
    2. If the controller was busy (doing work the driver gave it previously), the driver simply queues the current request (the driver dequeues this request below).

  3. The driver jumps to the scheduler indicating that the current process should be blocked.

  4. The scheduler blocks A and runs (say) B.

  5. B starts running.

  6. An interrupt arrives (i.e., an I/O has been completed).

  7. The interrupt handler invokes (the bottom part of) the driver.
    1. The driver informs the main line, perhaps passing data and certainly passing status (error or OK).
    2. The top part is called to start another I/O if the queue is nonempty. We know the controller is free. Why?
      Answer: We just received an interrupt saying so.

  8. The driver jumps to the scheduler indicating that process A should be made ready.

  9. The scheduler picks a ready process to run. Assume it picks A.

  10. A resumes in the driver, which returns to the main line, which returns to the user code.
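The following C sketch ties the steps above together. It is not from the notes or from any particular OS; all names (io_request, pcb, make_ready, start_io) are hypothetical, and the actual device-register writes are elided.

    #include <stddef.h>

    struct pcb;                             /* process control block, defined elsewhere */
    extern void make_ready(struct pcb *p);  /* hypothetical scheduler entry point */

    struct io_request {
        struct pcb *proc;                   /* process that issued the request */
        int block;                          /* which block to transfer         */
        struct io_request *next;            /* queue link                      */
    };

    static struct io_request *queue_head, *queue_tail;
    static int controller_busy;

    static void start_io(struct io_request *req)
    {
        controller_busy = 1;
        /* write the controller's device registers, ending with the "go" command */
    }

    /* Top part: called (in kernel mode) from the main line OS -- step 2. */
    void driver_top(struct io_request *req)
    {
        if (!controller_busy) {             /* step 2.1: controller idle, start the I/O */
            start_io(req);
        } else {                            /* step 2.2: controller busy, just queue it */
            req->next = NULL;
            if (queue_tail) queue_tail->next = req; else queue_head = req;
            queue_tail = req;
        }
        /* step 3: the caller then asks the scheduler to block the requesting process */
    }

    /* Bottom part: called from the interrupt handler -- step 7. */
    void driver_bottom(int status, struct io_request *finished)
    {
        (void)status;                       /* step 7.1: report status/data to the main line (elided) */
        controller_busy = 0;
        if (queue_head) {                   /* step 7.2: controller is now free, start the next request */
            struct io_request *next = queue_head;
            queue_head = next->next;
            if (queue_head == NULL) queue_tail = NULL;
            start_io(next);
        }
        make_ready(finished->proc);         /* step 8: make the original process ready */
    }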

Driver as a process (Tanenbaum) (less detailed than above)

5.3.3: Device-Independent I/O Software

The device-independent code provides most of the functionality, but not necessarily most of the code, since there can be many drivers, all doing essentially the same thing in slightly different ways due to slightly different controllers.
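One classic way to organize this (roughly the device switch of old UNIX kernels; the names here are illustrative, not from the notes) is a table of function pointers indexed by major device number, so the device-independent code can call any driver through the same interface.

    #define MAXDEV 16

    struct dev_ops {
        int (*open)(int minor);
        int (*read)(int minor, char *buf, int n);
        int (*write)(int minor, const char *buf, int n);
    };

    static struct dev_ops *devsw[MAXDEV];   /* indexed by major device number */

    /* Each driver registers its operations once, at boot time. */
    void dev_register(int major, struct dev_ops *ops)
    {
        devsw[major] = ops;
    }

    /* Device-independent read: the same code path works for every device. */
    int dev_read(int major, int minor, char *buf, int n)
    {
        return devsw[major]->read(minor, buf, n);
    }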

5.3.4: User-Space Software

A good deal of I/O code is actually executed in user space. Some is in library routines linked into user programs and some is in daemon processes.
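For example, the buffering done by stdio-style library routines is I/O code that runs entirely in user space; a toy version (assuming only the POSIX write() system call, with an arbitrary buffer size) might look like this.

    #include <unistd.h>

    #define BUFSIZE 4096

    static char buf[BUFSIZE];
    static int nbuf;                    /* bytes currently buffered in user space */

    void buffered_putc(int fd, char c)
    {
        buf[nbuf++] = c;
        if (nbuf == BUFSIZE) {          /* buffer full: one system call flushes it */
            write(fd, buf, nbuf);
            nbuf = 0;
        }
    }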

Homework: 10, 13.

5.4: Disks

The ideal storage device is

  1. Fast
  2. Big (in capacity)
  3. Cheap
  4. Impossible

Disks are big and cheap, but slow.

5.4.1: Disk Hardware

Show a real disk opened up and illustrate the components

Overlapping I/O operations is important. Many controllers can do overlapped seeks, i.e. issue a seek to one disk while another is already seeking.

Modern disks cheat and do not have the same number of sectors on outer cylinders as on inner ones. However, the disk's electronics and software (firmware) hide the cheat and give the illusion of the same number of sectors on all cylinders.
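A sketch of the illusion: the OS can pretend every cylinder has the same number of sectors and convert a logical block number to (cylinder, head, sector) with simple arithmetic. The geometry constants below are made up for illustration.

    struct chs { int cylinder, head, sector; };

    #define HEADS             16        /* surfaces, i.e. tracks per cylinder    */
    #define SECTORS_PER_TRACK 63        /* the "same on every cylinder" fiction  */

    struct chs block_to_chs(long block)
    {
        struct chs a;
        a.cylinder = block / (HEADS * SECTORS_PER_TRACK);
        a.head     = (block / SECTORS_PER_TRACK) % HEADS;
        a.sector   = block % SECTORS_PER_TRACK; /* real disks often number sectors from 1 */
        return a;
    }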

(Unofficial) Despite what Tanenbaum says later, it is not true that when one head is reading from cylinder C, all the heads can read from cylinder C with no penalty. It is, however, true that the penalty is very small.

Choice of block size

Homework: Consider a disk with an average seek time of 10ms, an average rotational latency of 5ms, and a transfer rate of 10MB/sec.

  1. If the block size is 1KB, how long would it take to read a block?
  2. If the block size is 100KB, how long would it take to read a block?
  3. If the goal is to read 1KB, a 1KB block size is better since with a 100KB block the remaining 99KB transferred are wasted. If the goal is to read 100KB, the 100KB block size is better since the 1KB block size needs 100 seeks and 100 rotational latencies. What is the minimum size request for which a disk with a 100KB block size would complete faster than one with a 1KB block size? (The access-time arithmetic is sketched below.)
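A sketch of the arithmetic needed for this homework: the time to read one block is the average seek time, plus the average rotational latency, plus the transfer time for the block. The function and its parameter names are mine, not the book's.

    /* Time (in ms) to read one block of block_bytes bytes. */
    double block_read_time(double seek_ms, double rot_ms,
                           double block_bytes, double bytes_per_ms)
    {
        return seek_ms + rot_ms + block_bytes / bytes_per_ms;
    }

    /* For the parameters above: 10 MB/sec is 10,000 bytes per millisecond, so a
       call would look like block_read_time(10.0, 5.0, block_bytes, 10000.0). */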

RAID (Redundant Array of Inexpensive Disks)

5.4.3: Disk Arm Scheduling Algorithms

There are three components to disk response time: seek, rotational latency, and transfer time. Disk arm scheduling is concerned with minimizing seek time by reordering the requests.

These algorithms are relevant only if there are several I/O requests pending. For many PCs this is not the case. For most commercial applications, I/O is crucial and there are often many requests pending.

  1. FCFS (First Come First Served): Simple but has long delays.

  2. Pick: Same as FCFS but pick up requests for cylinders that are passed on the way to the next FCFS request.

  3. SSTF or SSF (Shortest Seek (Time) First): Greedy algorithm. Can starve requests for outer cylinders and almost always favors middle requests.

  4. Scan (Look, Elevator): The method used by an old-fashioned jukebox (remember ``Happy Days'') and by elevators. The disk arm proceeds in one direction picking up all requests until there are no more requests in this direction, at which point it goes back the other direction. This favors requests in the middle, but can't starve any requests. (A code sketch of this algorithm follows the list.)

  5. C-Scan (C-look, Circular Scan/Look): Similar to Scan, but requests are serviced only when the arm is moving in one direction. When going in the other direction, the arm moves directly to the farthest pending request. This doesn't favor any spot on the disk. Indeed, it treats the cylinders as though they were a clock, i.e. after the highest numbered cylinder comes cylinder 0.

  6. N-step Scan: This is what the natural implementation of Scan gives.
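A small C sketch of item 4, the Scan (elevator) algorithm: the arm keeps moving in its current direction, servicing the nearest pending cylinder ahead of it, and reverses only when nothing remains in that direction. The pending-request array and all names are made up for illustration.

    #include <limits.h>

    #define NREQ 8

    static int pending[NREQ] = { 98, 183, 37, 122, 14, 124, 65, 67 };  /* cylinders */
    static int done[NREQ];          /* 1 once a request has been serviced */

    /* Index of the nearest unserviced request at or beyond pos in direction dir
       (+1 = toward higher cylinders, -1 = toward lower), or -1 if there is none. */
    static int next_in_direction(int pos, int dir)
    {
        int best = -1, best_dist = INT_MAX;
        for (int i = 0; i < NREQ; i++) {
            if (done[i]) continue;
            int delta = (pending[i] - pos) * dir;
            if (delta >= 0 && delta < best_dist) { best = i; best_dist = delta; }
        }
        return best;
    }

    void scan(int start)
    {
        int pos = start, dir = 1;           /* start moving toward higher cylinders */
        for (int served = 0; served < NREQ; ) {
            int i = next_in_direction(pos, dir);
            if (i < 0) { dir = -dir; continue; }  /* nothing ahead: reverse, like an elevator */
            pos = pending[i];                     /* seek there and service the request */
            done[i] = 1;
            served++;
        }
    }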

Minimizing Rotational Latency

Use Scan based on sector numbers rather than cylinder numbers. For rotational latency, Scan is the same as C-Scan. Why?
Ans: Because the disk only rotates in one direction.

Homework: 24, 25

5.4.4: Error Handling

Disk error rates have dropped in recent years. Moreover, bad-block forwarding is normally done by the controller (or the disk electronics), so this topic is no longer as important for the OS.