# Operating Systems

================ Start Lecture #13 ================

## 5.4: Disks

The ideal storage device is

1. Fast
2. Big (in capacity)
3. Cheap
4. Impossible

When compared to central memory, disks are big and cheap, but slow.

### 5.4.1: Disk Hardware

Show a real disk opened up and illustrate the components (Done last week).

• Platter
• Surface
• Track
• Sector
• Cylinder
• Seek time
• Rotational latency
• Transfer rate

Consider the following characteristics of a disk (Done last week).

• RPM (revolutions per minute)
• Seek time. This is actually quite complicated to calculate since you have to worry about acceleration, travel time, deceleration, and "settling time".
• Rotational latency. The average value is the time for approximately half a revolution.
• Transfer rate, determined by RPM and bit density.
• Sectors per track, determined by bit density
• Tracks per surface (i.e., number of cylinders), determined by bit density.
• Tracks per cylinder (i.e, the number of surfaces)

Overlapping I/O operations is important. Many controllers can do overlapped seeks, i.e. issue a seek to one disk while another is already seeking.

As technology improves, the space needed to store a bit decreases, i.e., the bit density increases. This changes the number of cylinders per inch of radius (the cylinders are closer together) and the number of bits per inch along a given track.

(Unofficial) Modern disks cheat and have more sectors on outer cylinders than on inner ones. For this course, however, we assume the number of sectors/track is constant. Thus for us there are fewer bits per inch on outer tracks and the transfer rate is the same for all cylinders. Modern disks have electronics and software (firmware) that hide the cheat and give the illusion of the same number of sectors on all tracks.

(Unofficial) Despite what Tanenbaum says later, it is not true that when one head is reading from cylinder C, all the heads can read from cylinder C with no penalty. It is, however, true that the penalty is very small.

Notes:

1. The final exam is definitely here at 5pm on 16 Dec 2003.
2. The last lecture, 2 Dec, will be given by Ernie Davis (I will be out of town).
3. The following Tuesday (9 Dec) is an NYU Thursday so we do not meet.
4. I return on the 8th and will have extensive office hours during that week.

#### Choice of block size

• We discussed this before when studying page size.
• Current commodity disk characteristics (not for laptops) result in about 15ms to transfer the first byte and 10K bytes per ms for subsequent bytes (if contiguous).
• Rotation rate is often 5400 or 7200 RPM, with 10K, 15K, and (just now) 20K RPM drives available.
• Recall that 6000 RPM is 100 rev/sec, or one rev per 10ms. So half a rev (the average time to rotate to a given point) is 5ms.
• Transfer rates around 10MB/sec = 10KB/ms.
• Seek time around 10ms.
• This favors large blocks, 100KB or more.
• But the internal fragmentation would be severe since many files are small.
• Multiple block sizes have been tried as have techniques to try to have consecutive blocks of a given file near each other.
• Typical block sizes are 4KB-8KB.
• Unofficial:
  • Systems that contain multiple block sizes have been tried (i.e., the system uses blocks of size A for some files and blocks of size B for other files).
  • Some systems use techniques to try to have consecutive blocks of a given file near each other, as well as blocks of "related" files (e.g., files in the same directory).
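As a sanity check on the numbers above (10ms seek, 5ms average rotational latency, 10KB/ms transfer — all ballpark figures from the bullets, not measurements), a few lines of Python show how little of the total read time goes to actually transferring a typical block:

```python
def read_time_ms(block_kb, seek_ms=10, half_rev_ms=5, rate_kb_per_ms=10):
    """Time to read one block: seek + average rotational latency + transfer."""
    return seek_ms + half_rev_ms + block_kb / rate_kb_per_ms

t4 = read_time_ms(4)   # 15.4 ms for a 4KB block
t8 = read_time_ms(8)   # 15.8 ms for an 8KB block
# Doubling the block size costs only 0.4ms more yet moves twice the data:
# positioning dominates, which is why large blocks are favored
# (until internal fragmentation from small files bites).
```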

Homework: Consider a disk with an average seek time of 10ms, an average rotational latency of 5ms, and a transfer rate of 10MB/sec.

1. If the block size is 1KB, how long would it take to read a block?
2. If the block size is 100KB, how long would it take to read a block?
3. If the goal is to read 1KB, a 1KB block size is better since with a 100KB block the remaining 99KB are wasted. If the goal is to read 100KB, the 100KB block size is better since the 1KB block size needs 100 seeks and 100 rotational latencies. What is the minimum size request for which a disk with a 100KB block size would complete faster than one with a 1KB block size?

#### RAID (Redundant Array of Inexpensive Disks)

• The name RAID is from Berkeley.

• IBM changed the name to Redundant Array of Independent Disks. I wonder why?

• A simple form is mirroring, where two disks contain the same data.

• Another simple form is striping (interleaving) where consecutive blocks are spread across multiple disks. This helps bandwidth, but is not redundant. Thus it shouldn't be called RAID, but it sometimes is.

• One of the normal RAID methods is to have N (say 4) data disks and one parity disk. Data is striped across the data disks and the bitwise parity of these sectors is written in the corresponding sector of the parity disk.

• On a read if the block is bad (e.g., if the entire disk is bad or even missing), the system automatically reads the other blocks in the stripe and the parity block in the stripe. Then the missing block is just the bitwise exclusive or of all these blocks.

• For reads this is very good. The failure free case has no penalty (beyond the space overhead of the parity disk). The error case requires N+1 (say 5) reads.

• A serious concern is the small write problem. Writing a single data sector requires 4 I/Os: read the old data sector, read the old parity sector, compute the new parity (old parity XOR old data XOR new data), then write the new data sector and the new parity sector. Hence one sector I/O became 4, which is a 300% penalty.

• Writing a full stripe is not bad. Compute the parity of the N (say 4) data sectors to be written and then write the data sectors and the parity sector. Thus 4 sector I/Os become 5, which is only a 25% penalty and is smaller for larger N, i.e., larger stripes.

• A variation is to rotate the parity. That is, for some stripes disk 1 has the parity, for others disk 2, etc. The purpose is to not have a single parity disk since that disk is needed for all small writes and could become a point of contention.
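The parity arithmetic above is just bitwise exclusive or, which is its own inverse. A short Python sketch (the sector contents are made up for illustration) shows parity generation, reconstruction after a disk failure, and the small-write update:

```python
def xor_blocks(*blocks):
    """Bitwise XOR of equal-length byte blocks (the parity operation)."""
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

# a stripe of N = 4 data sectors plus its parity sector
data = [b'\x0f\xf0', b'\xaa\x55', b'\x12\x34', b'\xff\x00']
parity = xor_blocks(*data)

# disk 2 fails: XOR the surviving data sectors with the parity sector
recovered = xor_blocks(data[0], data[1], data[3], parity)
assert recovered == data[2]

# small write to disk 2: new parity = old parity XOR old data XOR new data
new_d2 = b'\x77\x77'
new_parity = xor_blocks(parity, data[2], new_d2)
assert new_parity == xor_blocks(data[0], data[1], new_d2, data[3])
```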

Skipped.

### 5.4.3: Disk Arm Scheduling Algorithms

There are three components to disk response time: seek, rotational latency, and transfer time. Disk arm scheduling is concerned with minimizing seek time by reordering the requests.

These algorithms are relevant only if there are several I/O requests pending. For many PCs this is not the case. For most commercial applications, I/O is crucial and there are often many requests pending.

1. FCFS (First Come First Served): Simple but has long delays.

2. Pick: Same as FCFS but pick up requests for cylinders that are passed on the way to the next FCFS request.

3. SSTF or SSF (Shortest Seek (Time) First): Greedy algorithm. Can starve requests for outer cylinders and almost always favors middle requests.

4. Scan (Look, Elevator): The method used by an old-fashioned jukebox (remember "Happy Days") and by elevators. The disk arm proceeds in one direction, picking up all requests, until there are no more requests in that direction, at which point it goes back the other way. This favors requests in the middle, but can't starve any requests.

5. C-Scan (C-look, Circular Scan/Look): Similar to Scan but only service requests when moving in one direction. When going in the other direction, go directly to the furthest away request. This doesn't favor any spot on the disk. Indeed, it treats the cylinders as though they were a clock, i.e. after the highest numbered cylinder comes cylinder 0.

6. N-step Scan: This is what the natural implementation of Scan gives.
• While the disk is servicing a Scan direction, the controller gathers up new requests and sorts them.
• At the end of the current sweep, the new list becomes the next sweep.
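Three of these policies can be sketched in a few lines of Python (the head position and cylinder numbers below are made up for illustration):

```python
def sstf(head, requests):
    """Shortest Seek First: repeatedly pick the closest pending cylinder."""
    pending, order = list(requests), []
    while pending:
        nxt = min(pending, key=lambda c: abs(c - head))
        pending.remove(nxt)
        order.append(nxt)
        head = nxt
    return order

def look(head, requests, up=True):
    """Scan/Look (elevator): sweep in one direction, then reverse."""
    lo = sorted(c for c in requests if c < head)
    hi = sorted(c for c in requests if c >= head)
    return (hi + lo[::-1]) if up else (lo[::-1] + hi)

def clook(head, requests):
    """C-Look: service only while moving up; then jump to the lowest request."""
    lo = sorted(c for c in requests if c < head)
    hi = sorted(c for c in requests if c >= head)
    return hi + lo

reqs = [98, 183, 37, 122, 14, 124, 65, 67]       # pending cylinder requests
print(sstf(53, reqs))   # [65, 67, 37, 14, 98, 122, 124, 183]
print(look(53, reqs))   # [65, 67, 98, 122, 124, 183, 37, 14]
print(clook(53, reqs))  # [65, 67, 98, 122, 124, 183, 14, 37]
```

Note how SSTF's greedy choice leaves the far request (183) for last, while Look sweeps past it before reversing.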

#### Minimizing Rotational Latency

Use Scan based on sector numbers, not cylinder numbers. For rotational latency, Scan is the same as C-Scan. Why?
Ans: Because the disk only rotates in one direction.

Homework: 24, 25

### 5.4.4: Error Handling

Disk error rates have dropped in recent years. Moreover, bad block forwarding is normally done by the controller (or disk electronics), so this topic is no longer as important for the OS.

## 5.5: Clocks

Also called timers.

### 5.5.1: Clock Hardware

• Generates an interrupt when the counter reaches zero.
• Counter reload can be automatic or under software (OS) control.
• If done automatically, the interrupt occurs periodically and thus is perfect for generating a clock interrupt at a fixed period.

### 5.5.2: Clock Software

1. Time of day (TOD): Bump a counter at each tick (clock interrupt). If the counter is only 32 bits, you must worry about overflow, so keep two counters: low order and high order.

2. Time quantum for RR: Decrement a counter at each tick. The quantum expires when counter is zero. Load this counter when the scheduler runs a process. This is presumably what you did for the (processor) scheduling lab.

3. Accounting: At each tick, bump a counter in the process table entry for the currently running process.

4. Alarm system call and system alarms:
• Users can request an alarm at some future time.
• The system also on occasion needs to schedule some of its own activities to occur at specific times in the future (e.g. turn off the floppy motor).
• The conceptually simplest solution is to have one timer for each event.
• Instead, we simulate many timers with just one.
• A linked list of pending alarms, kept in time order, works well.
• The time in each list entry is the time after the preceding entry that this entry's alarm is to ring.
• For example, if the time is zero, this event occurs at the same time as the previous event.
• The other entry is a pointer to the action to perform.
• At each tick, decrement next-signal.
• When next-signal goes to zero, process the first entry on the list and any others following immediately after with a time of zero (which means they are to be simultaneous with this alarm). Then set next-signal to the value in the next alarm.

5. Profiling
• Want a histogram giving how much time was spent in each 1KB (say) block of code.
• At each tick check the PC and bump the appropriate counter.
• A user-mode program can determine the software module associated with each 1K block.
• If we use finer granularity (say 10B instead of 1KB), we get increased accuracy but more memory overhead.
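The delta list of alarms described in item 4 above can be sketched in Python (the class and method names are my own, chosen for illustration):

```python
class DeltaTimerList:
    """Simulate many timers with one hardware counter. Each entry stores the
    number of ticks AFTER the preceding entry at which its alarm fires."""

    def __init__(self):
        self.entries = []                    # list of [delta, action]

    def insert(self, ticks, action):
        """Schedule `action` to fire `ticks` ticks from now."""
        i = 0
        # walk past entries that fire no later than this one, converting
        # the absolute offset into a delta from the preceding entry
        while i < len(self.entries) and self.entries[i][0] <= ticks:
            ticks -= self.entries[i][0]
            i += 1
        if i < len(self.entries):
            self.entries[i][0] -= ticks      # successor's delta shrinks
        self.entries.insert(i, [ticks, action])

    def tick(self):
        """Clock interrupt: decrement next-signal; fire all due alarms
        (a zero delta means simultaneous with the preceding alarm)."""
        if not self.entries:
            return
        self.entries[0][0] -= 1
        while self.entries and self.entries[0][0] == 0:
            _, action = self.entries.pop(0)
            action()
```

For example, scheduling alarms 3, 5, and 3 ticks out yields the internal deltas 3, 0, 2, and ticking five times fires them in that order.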

Homework: 27

## 5.6: Character-Oriented Terminals

### 5.6.1: RS-232 Terminal Hardware

Quite dated. It is true that modern systems can communicate with a hardwired ASCII terminal, but most don't. Serial ports are used, but they are normally connected to modems, and then some protocol (SLIP, PPP) is used, not just a stream of ASCII characters. So skip this section.

#### Memory-Mapped Terminals

Not as dated as the previous section, but it still discusses the character interface, not the graphics interface.

• Today, software writes into video memory the bits that are to be put on the screen and then the graphics controller converts these bits to analog signals for the monitor (actually laptop displays and some modern monitors are digital).
• But it is much more complicated than this. The graphics controllers can do a great deal of video themselves (like filling).
• This is a subject that would take many lectures to do well.
• I believe some of this is covered in 201.

#### Keyboards

Tanenbaum's description of keyboards is correct.

• At each key press and key release a code is written into the keyboard controller and the computer is interrupted.
• By remembering which keys have been depressed and not released the software can determine Cntl-A, Shift-B, etc.
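This remember-what-is-held logic can be sketched in Python (the event encoding and function name are made up; only the ctrl modifier is handled, and shift-to-uppercase is omitted for brevity):

```python
MODIFIERS = {'ctrl', 'shift'}

def decode(events):
    """Turn key press/release events into characters, tracking which
    modifier keys are currently depressed (as the keyboard driver does)."""
    held, out = set(), []
    for action, key in events:               # action is 'down' or 'up'
        if key in MODIFIERS:
            if action == 'down':
                held.add(key)
            else:
                held.discard(key)
        elif action == 'down':               # ordinary key pressed
            out.append(('ctrl-' if 'ctrl' in held else '') + key)
    return out
```

With this sketch, down-ctrl down-x up-x up-ctrl decodes to ctrl-x, while down-ctrl up-ctrl down-x up-x decodes to plain x, matching the raw-mode examples below.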

### 5.6.2: Input Software

• We are just looking at keyboard input. Once again graphics is too involved to be treated well.
• There are two fundamental modes of input, sometimes called raw and cooked.
• In raw mode the application sees every “character” the user types. Indeed, raw mode is character oriented.
• All the OS does is convert the keyboard "scan codes" to "characters" and pass these characters to the application.
• Some examples
1. down-cntl down-x up-x up-cntl is converted to cntl-x
2. down-cntl up-cntl down-x up-x is converted to x
3. down-cntl down-x up-cntl up-x is converted to cntl-x (I just tried it to be sure).
4. down-x down-cntl up-x up-cntl is converted to x
• Full screen editors use this mode.
• Cooked mode is line oriented. The OS delivers lines to the application program.
• Special characters are interpreted as editing characters (erase-previous-character, erase-previous-word, kill-line, etc).
• Erased characters are not seen by the application but are erased by the keyboard driver.
• Need an escape character so that the editing characters can be passed to the application if desired.
• The cooked characters must be echoed (what should one do if the application is also generating output at this time?)
• The (possibly cooked) characters must be buffered until the application issues a read (and an end-of-line EOL has been received for cooked mode).
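The editing and buffering described above can be sketched in Python (the erase and kill characters chosen here are common defaults, but they are assumptions, and echoing is omitted):

```python
ERASE = '\b'      # erase-previous-character (often DEL in practice)
KILL  = '\x15'    # kill-line (often Ctrl-U)
EOL   = '\n'

def cooked(stream):
    """Minimal sketch of a cooked-mode line discipline: buffer characters,
    apply the editing characters, and deliver only complete lines."""
    line, lines = [], []
    for ch in stream:
        if ch == ERASE:
            if line:
                line.pop()       # the erased char never reaches the application
        elif ch == KILL:
            line.clear()         # discard the whole partial line
        elif ch == EOL:
            lines.append(''.join(line))
            line = []
        else:
            line.append(ch)
    return lines                 # an unterminated partial line stays buffered
```

For example, the keystrokes c-a-t-erase-r followed by EOL deliver the single line "car" to the application.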

### 5.6.3: Output Software

Again too dated and the truth is too complicated to deal with in a few minutes.

Skipped.
