================ Start Lecture #22 ================
4.9: Research on Memory Management
Some Last Words on Memory Management
- Segmentation / Paging / Demand Loading (fetch-on-demand)
- Each is a yes or no alternative.
- Gives 8 possibilities.
- Placement and Replacement.
- Internal and External Fragmentation.
- Page Size and locality of reference.
- Multiprogramming level and medium term scheduling.
Chapter 5: Input/Output
5.1: Principles of I/O Hardware
5.1.1: I/O Devices
- Not much to say. Devices are varied.
- Block versus character devices:
- Devices, such as disks and CDROMs, with addressable chunks
(sectors in this case) are called block devices.
These devices support seeking.
- Devices, such as Ethernet and modem connections, that are a
stream of characters are called character devices.
These devices do not support seeking (see the sketch after this list).
- Some cases, like tapes, are not so clear.
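A small user-level C sketch makes the seekable/non-seekable distinction
concrete: lseek() succeeds on a descriptor backed by a block device (a regular
file here), but fails with ESPIPE on a pure character stream such as a pipe.
The file name /etc/passwd is just an example.

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Try to seek on a descriptor and report whether the device supports it. */
    static void try_seek(const char *name, int fd)
    {
        if (lseek(fd, 0, SEEK_SET) == -1)
            printf("%s: not seekable (%s)\n", name, strerror(errno));
        else
            printf("%s: seekable\n", name);
    }

    int main(void)
    {
        int fd = open("/etc/passwd", O_RDONLY);    /* a file on a block device */
        if (fd >= 0) {
            try_seek("/etc/passwd", fd);
            close(fd);
        }

        int p[2];
        if (pipe(p) == 0) {                        /* a pure character stream  */
            try_seek("pipe", p[0]);                /* lseek fails with ESPIPE  */
            close(p[0]);
            close(p[1]);
        }
        return 0;
    }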
5.1.2: Device Controllers
These are the “devices” as far as the OS is concerned. That
is, the OS code is written with the controller spec in hand, not with
the device spec.
Also called adaptors.
The controller abstracts away some of the low level features of the device.
For disks, the controller does error checking and buffering.
(Unofficial) In the old days it handled interleaving of sectors.
(Sectors are interleaved if the
controller or CPU cannot handle the data rate and would otherwise have
to wait a full revolution. This is not a concern with modern systems
since the electronics have increased in speed faster than the disks have.)
For analog monitors (CRTs) the controller does
a great deal. Analog video is very far from a bunch of ones and zeros.
5.1.3: Memory-Mapped I/O
Think of a disk controller and a read request. The goal is to copy
data from the disk to some portion of the central memory. How do we
accomplish this?
- The controller contains a microprocessor and memory and is
connected to the disk (by a cable).
When the controller asks the disk to read a sector, the contents
come to the controller via the cable and are stored by the controller
in its memory.
The question is: how does the OS, which is running on another
processor, let the controller know that a disk read is desired, and how
is the data eventually moved from the controller's memory to the
general system memory?
Typically the interface the OS sees consists of some device
registers located on the controller.
- These are memory locations into which the OS writes
information such as sector to access, read vs. write, length,
where in system memory to put the data (for a read) or from where
to take the data (for a write).
- There is also typically a device register that acts as a
“go button” to start the operation.
- There are also device registers that the OS reads, such as the
status of the controller, errors found, etc.
- So now the question is how does the OS read and write the device
registers.
With Memory-mapped I/O the device registers
appear as normal memory. All that is needed is to know at which
address each device register appears. Then the OS uses normal
load and store instructions to read and write the registers.
Some systems instead have a special “I/O space” into which
the registers are mapped and require the use of special I/O space
instructions to accomplish the load and store.
From a conceptual point of view there is no difference between
the two models.
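To make this concrete, here is a minimal C sketch of a driver talking to a
hypothetical disk controller through memory-mapped registers. The register
layout, the mapped address 0xFEE10000, and the command values are invented
for illustration; a real driver would take them from the controller spec.

    #include <stdint.h>

    /* Hypothetical register block of a disk controller.  The layout, the
       mapped address, and the command values are invented for this example. */
    struct disk_regs {
        volatile uint32_t sector;    /* which sector to access                  */
        volatile uint32_t count;     /* how many sectors                        */
        volatile uint32_t mem_addr;  /* where in system memory to put/take data */
        volatile uint32_t command;   /* e.g. 1 = read, 2 = write                */
        volatile uint32_t status;    /* set by the controller: busy, error, ... */
    };

    #define DISK_REGS ((struct disk_regs *)0xFEE10000u)  /* assumed mapping */

    /* Ask the controller to read 'count' sectors starting at 'sector' into
       the physical memory address 'buf_phys'. */
    static void start_disk_read(uint32_t sector, uint32_t count, uint32_t buf_phys)
    {
        DISK_REGS->sector   = sector;    /* ordinary store instructions ...       */
        DISK_REGS->count    = count;
        DISK_REGS->mem_addr = buf_phys;
        DISK_REGS->command  = 1;         /* ... the last store is the "go button" */
    }

With a separate I/O space, the stores above would be replaced by special
port-output instructions (e.g., the x86 out instruction), but conceptually
nothing changes.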
5.1.4: Direct Memory Access (DMA)
- With or without DMA, the disk controller pulls the desired data
from the disk to its buffer (and pushes data from the buffer to the
disk when writing).
- Without DMA, i.e., with programmed I/O (PIO), the
cpu then does loads and stores (or I/O instructions) to copy the data
from the buffer to the desired memory location.
- With a DMA controller, the controller writes the memory without
intervention of the CPU (see the sketch after this list).
- Clearly DMA saves CPU work. But this might not be important if
the CPU is limited by the memory or by system buses.
- Very important: with DMA there is less data movement, so the buses
are used less and the entire operation takes less time.
- Since PIO is pure software it is easier to change, which is an
advantage.
- DMA does need a number of bus transfers from the CPU to the
controller to specify the DMA. So DMA is most effective for large
transfers where the setup is amortized.
- Why have the buffer? Why not just go from the disk straight to
memory?
Answer: Speed matching. The disk supplies data at a fixed rate, which might
exceed the rate the memory can accept it. In particular the memory
might be busy servicing a request from the processor or from another
controller.
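The contrast between programmed I/O and DMA can be sketched in C. The register
names, addresses, and command values below are invented for illustration, in
the spirit of the memory-mapped sketch in 5.1.3, not taken from any real
controller.

    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical controller registers; addresses and meanings are made up. */
    #define CTRL_DATA   ((volatile uint32_t *)0xFEE10020u) /* data window        */
    #define CTRL_DMADST ((volatile uint32_t *)0xFEE10024u) /* DMA destination    */
    #define CTRL_DMALEN ((volatile uint32_t *)0xFEE10028u) /* DMA length (bytes) */
    #define CTRL_GO     ((volatile uint32_t *)0xFEE1002Cu) /* start transfer     */

    /* Programmed I/O: the CPU itself loads each word from the controller's
       buffer and stores it into memory, so the CPU is busy the whole time. */
    static void pio_copy(uint32_t *dst, size_t nwords)
    {
        for (size_t i = 0; i < nwords; i++)
            dst[i] = *CTRL_DATA;          /* every word crosses the bus via the CPU */
    }

    /* DMA: the CPU only tells the controller where the data goes and how much;
       the controller then writes memory without further CPU intervention. */
    static void dma_copy(uint32_t dst_phys, size_t nbytes)
    {
        *CTRL_DMADST = dst_phys;          /* a few setup transfers ...             */
        *CTRL_DMALEN = (uint32_t)nbytes;
        *CTRL_GO     = 1;                 /* ... then the controller does the rest,
                                             typically interrupting when finished  */
    }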
5.1.5: Interrupts Revisited
5.2: Principles of I/O Software
As with any large software system, good design and layering are
important.
5.2.1: Goals of the I/O Software
We want to have most of the OS unaware of the characteristics of
the specific devices attached to the system. Indeed we also want the
OS to be largely unaware of the CPU type itself.
Due to this device independence, programs are
written to read and write generic devices and then at run time
specific devices are assigned. Writing to a disk has differences from
writing to a terminal, but Unix cp and DOS copy do not see these
differences. Indeed, most of the OS, including the file system code,
is unaware of whether the device is a floppy or hard disk.
Recall that we discussed the value
of the name space implemented by file systems. There is no dependence
between the name of the file and the device on which it is stored. So
a file called IAmStoredOnAHardDisk might well be stored on a floppy disk.
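Unix makes this device independence visible at the system-call level: the
same open/read/write loop works whether the name refers to a file on a hard
disk, a floppy, or a device special file such as /dev/tty. A minimal
copy-to-stdout sketch (the function name cat_to_stdout is mine):

    #include <unistd.h>
    #include <fcntl.h>

    /* Copy whatever 'path' names to standard output.  The code neither knows
       nor cares what kind of device is behind the name. */
    int cat_to_stdout(const char *path)
    {
        char buf[4096];
        ssize_t n;
        int fd = open(path, O_RDONLY);
        if (fd < 0)
            return -1;
        while ((n = read(fd, buf, sizeof buf)) > 0)
            write(STDOUT_FILENO, buf, (size_t)n);
        close(fd);
        return 0;
    }

The same function copies IAmStoredOnAHardDisk or /dev/tty without change; the
file system and the device drivers below it hide the differences.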
There are several aspects to error handling including: detection,
correction (if possible) and reporting.
- Detection should be done as close to where the error occurred as
possible before more damage is done (fault containment). This is not
always easy.
- Correction is sometimes easy, for example ECC memory does this
automatically (but the OS wants to know about the error so that it can
schedule replacement of the faulty chips before unrecoverable double
errors occur).
Other easy cases include successful retries for failed ethernet
transmissions. In this example, while logging is appropriate, it is
quite reasonable for no action to be taken.
- Error reporting tends to be awful. The trouble is that the error
occurs at a low level but by the time it is reported the
context is lost. Unix/Linux in particular is horrible in this area.
Creating the illusion of synchronous I/O
- I/O must be asynchronous for good performance. That is
the OS cannot simply wait for an I/O to complete. Instead, it
proceeds with other activities and responds to the notification when
the I/O has finished.
- Users (mostly) want no part of this. The code sequence
    Read X
    Y <-- X+1
    Print Y
should print a value one greater than that read. But if the
assignment is performed before the read completes, the wrong value is
printed.
- Performance junkies sometimes do want the asynchrony so that they
can have another portion of their program executed while the I/O is
underway. That is, they implement a mini-scheduler in their
own program.
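The blocking read() system call is what provides the synchronous illusion:
the process does not run again until X has been filled in. A program that
wants the asynchrony can instead use, for example, POSIX AIO and overlap its
own work with the I/O. A sketch (error handling omitted; the file name X.dat
is made up; on Linux link with -lrt):

    #include <aio.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        int x = 0;
        int fd = open("X.dat", O_RDONLY);  /* file name is made up for the example */

        struct aiocb cb;
        memset(&cb, 0, sizeof cb);
        cb.aio_fildes = fd;
        cb.aio_buf    = &x;
        cb.aio_nbytes = sizeof x;
        cb.aio_offset = 0;

        aio_read(&cb);                     /* start the read; do NOT wait for it    */

        /* ... other useful work goes here (the user-level "mini-scheduler") ...    */

        while (aio_error(&cb) == EINPROGRESS)
            ;                              /* only now wait for the I/O to complete */
        aio_return(&cb);

        int y = x + 1;                     /* safe: the read has finished           */
        printf("%d\n", y);
        close(fd);
        return 0;
    }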
Buffering
- Buffering is often needed to hold data for examination prior to sending it
to its desired destination.
- But this involves copying and takes time.
- Modern systems try to avoid as much buffering as possible. This
is especially noticeable in network transmissions, where the data
could conceivably be copied many times, for example:
1. User space --> kernel space, as part of the write system call.
2. Kernel space --> kernel I/O buffer.
3. Kernel I/O buffer --> buffer on the network adapter/controller.
4. Adapter on the source machine --> adapter on the destination machine.
5. Adapter --> kernel I/O buffer.
6. Kernel I/O buffer --> kernel space.
7. Kernel space --> user space, as part of the read system call.
I don't know if any systems actually do all seven.
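One concrete mechanism modern systems use to cut out some of these copies is
Linux's sendfile() system call, which moves data from a file to a socket
entirely inside the kernel, so the bytes never visit user space at all. A
sketch (assuming sock_fd is an already-connected socket and file_fd an open
file):

    #include <sys/sendfile.h>
    #include <sys/stat.h>
    #include <sys/types.h>

    /* Send the whole file out the socket without copying it through a
       user-space buffer; the kernel moves the data internally. */
    ssize_t send_whole_file(int sock_fd, int file_fd)
    {
        struct stat st;
        if (fstat(file_fd, &st) < 0)
            return -1;
        off_t off = 0;
        return sendfile(sock_fd, file_fd, &off, (size_t)st.st_size);
    }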
Sharable vs dedicated devices
For devices like printers and tape drives, only one user at a time
is permitted. These are called serially reusable
devices, and were studied in the deadlocks chapter.
Devices like disks and Ethernet ports can be shared by processes
running concurrently.