NOTE: These notes are by Allan Gottlieb, and are
reproduced here, with superficial modifications, with his permission.
"I" in this text generally refers to Prof. Gottlieb, except
in regards to administrative matters.
================ Start Lecture #12
Suspending: Taking an active process entirely out of
memory and saving its state on disk.
Resuming: Loading a suspended process from disk back into memory.
A suspended process may be blocked (waiting for some other event to
complete) or it may be suspended and ready (i.e. ready to run
as soon as it is swapped in.)
In an OS that uses variable-length partitions, swapping can be done
when some waiting process can't fit in memory, especially when
the memory requirements of all active processes together exceed total
memory. In an OS that uses paging, swapping can be done when the
working set sizes of the active processes are together larger than
physical memory, so that keeping all processes active leads to
thrashing.
Unlike previous similar issues, such as short-term scheduling and
page replacement, I am not going to give a list of a half-dozen
simple swapping algorithms. Swapping is more complicated and
doesn't lend itself to simple algorithms. (Also, it is done
comparatively infrequently, so one can afford to spend time on a complicated
algorithm.) Rather, there are a number of considerations that must be
taken into account.
First, note that there are two decisions to be made: which,
if any, jobs to swap in and which, if any, to swap out. With variable
length partitioning, these are always combined. The only reason to swap
one job out is to swap another in. You can only swap a job in if
either an active job terminates or if you swap some other job out.
With a paging system, it may become desirable to swap one job out and not
bring any in, if the working sets of the active jobs grow; or it may
become feasible to swap a job in without any other job terminating or
being suspended, if the working sets shrink.
Criteria for choosing jobs to swap in / out
The bottom line is that if you have to do a lot of swapping, you're
in trouble: the system spends its time moving jobs in and out rather
than running them.
- Fairness. You want to play fair among the various jobs. It is
debatable whether you want jobs to have equal time in memory, equal
time waiting suspended, equal time running on the CPU, etc.
- Size. It's always easier and less costly to swap in a small job.
In choosing a job to swap out, there's a trade-off: Swapping out a large
job takes longer (to write to disk); on the other hand, it clears up
more space that you can use for other jobs. In particular, swapping
in a large job may require swapping out several small jobs.
- Blockedness. If the job is blocked anyway, waiting for some
event that may be slow, why not swap it out?
- Real-time/interactive. If a job is supposed to run in real time
or if it is supposed to be interactive, it had better not be swapped out.
- Page faulting. If all the processes are page-faulting a lot, then
you had better swap one or more of them out.
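The criteria above don't reduce to one clean rule, but a crude
victim-selection heuristic can illustrate how they might be combined.
This is a minimal sketch only: the `struct proc` fields and the
weights are made up for illustration, not taken from any real OS.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical process descriptor; field names are illustrative. */
struct proc {
    int  blocked;      /* 1 if waiting on a possibly slow event   */
    int  interactive;  /* 1 if interactive or real-time           */
    long size_kb;      /* resident size: bigger frees more memory */
    long resident_ms;  /* time in memory, for fairness            */
};

/* Score a candidate for swap-out: prefer blocked, non-interactive,
   long-resident, large processes.  The weights are arbitrary. */
long swap_out_score(const struct proc *p)
{
    if (p->interactive)        /* never a good victim */
        return -1;
    long score = p->resident_ms;
    if (p->blocked)
        score += 10000;        /* blocked jobs lose little by leaving */
    score += p->size_kb;       /* big jobs clear more space */
    return score;
}

/* Pick the best victim among n candidates, or NULL if none qualifies. */
const struct proc *choose_victim(const struct proc *ps, size_t n)
{
    const struct proc *best = NULL;
    long best_score = -1;
    for (size_t i = 0; i < n; i++) {
        long s = swap_out_score(&ps[i]);
        if (s > best_score) { best_score = s; best = &ps[i]; }
    }
    return best;
}
```

A real swapper would also have to weigh how well the freed space fits
the job waiting to come in, which is why no simple scoring function
suffices in practice.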
** Segmentation
Up to now, the virtual address space has been contiguous.
- Among other issues, this makes memory management difficult when
there are more than two dynamically growing regions.
- With two regions you start them at opposite ends of the virtual
address space, as we did before.
- Better is to have many virtual address spaces, each starting at zero.
- This split-up is user-visible.
- Without segmentation (equivalently said with just one segment) all
procedures are packed together so if one changes in size all the virtual
addresses following are changed and the program must be re-linked.
- Eases flexible protection and sharing (share a segment). For
example, can have a shared library.
** Two Segments
Late PDP-10s and TOPS-10
- One shared text segment, that can also contain shared
(normally read only) data.
- One (private) writable data segment.
- Permission bits on each segment.
- Which kind of segment is better to evict?
- Swapping out the shared segment hurts many tasks.
- The shared segment is (probably) read only, so no write-back
is needed.
- (``One segment'' is the OS/MVT scheme discussed above.)
** Three Segments
Traditional (early) Unix, shown at right, used three segments.
- Shared text marked execute only.
- Data segment (global and static variables).
- Stack segment (automatic variables).
- (In reality, since the text doesn't grow, this was sometimes
treated as 2 segments.)
** General (not necessarily demand) Segmentation
- Permits fine-grained sharing and protection. For a simple example,
the text segment can be shared, as in early Unix.
- Visible division of program.
- Variable size segments.
- Virtual Address = (seg#, offset).
- Does not mandate how stored in memory.
- One possibility is that the entire program must be in memory
in order to run it.
Use whole process swapping.
Very early versions of Unix did this.
- Can also implement demand segmentation.
- Can combine with demand paging (done below).
- Requires a segment table with a base and limit value for each
segment. Similar to a page table. Why is there no limit value in a
page table entry?
Ans: All pages are the same size, so the limit is obvious.
- Entries are called STEs, Segment Table Entries.
- (seg#, offset) --> if (offset<limit) base+offset else error.
- Segmentation exhibits external fragmentation, just as whole-program
swapping does.
Since segments are smaller than programs (several segments make up one
program), the external fragmentation is not as bad.
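The (seg#, offset) check-and-translate step described above can be
sketched in C. The structure and the error sentinel are illustrative;
real hardware performs the limit check in the MMU and traps to the OS
on a violation.

```c
#include <assert.h>
#include <stdint.h>

/* One segment-table entry: a base and a limit, as described above. */
struct ste {
    uint32_t base;   /* physical address where the segment starts */
    uint32_t limit;  /* size of the segment in bytes */
};

#define BAD_ADDR 0xFFFFFFFFu  /* illustrative error sentinel */

/* (seg#, offset) -> base+offset if offset < limit, else error. */
uint32_t seg_translate(const struct ste *segtab, uint32_t nsegs,
                       uint32_t seg, uint32_t offset)
{
    if (seg >= nsegs || offset >= segtab[seg].limit)
        return BAD_ADDR;             /* would trap to the OS */
    return segtab[seg].base + offset;
}
```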
** Demand Segmentation
Same idea as demand paging applied to segments.
- If a segment is loaded, its base and limit are stored in the STE and
the valid bit in the STE is set.
- The STE is accessed for each memory reference (not really: the TLB
usually short-circuits this).
- If the segment is not loaded, the valid bit is unset.
The base and limit, as well as the disk
address of the segment, are stored in an OS table.
- A reference to a non-loaded segment generates a segment fault
(analogous to a page fault).
- To load a segment, we must solve both the placement question and the
replacement question (for demand paging, there is no placement question).
- I believe demand segmentation was once implemented by Burroughs,
but am not sure.
It is not used in modern systems.
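The valid-bit check just described can be sketched as follows. The
field names are illustrative, and where a real machine would trap into
a fault handler, this sketch just returns an enum.

```c
#include <assert.h>
#include <stdint.h>

/* STE for demand segmentation: the valid bit says whether the
   segment is in memory; if not, the OS keeps base/limit and the
   segment's disk address in its own tables. */
struct ste {
    int      valid;
    uint32_t base, limit;
};

enum access { ACCESS_OK, SEGMENT_FAULT, LIMIT_FAULT };

/* A reference to (seg, off): segment fault if not loaded, limit
   fault if past the end, otherwise fine. */
enum access reference(const struct ste *st, uint32_t seg, uint32_t off)
{
    if (!st[seg].valid)
        return SEGMENT_FAULT;  /* OS must solve placement + replacement */
    if (off >= st[seg].limit)
        return LIMIT_FAULT;
    return ACCESS_OK;
}
```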
The following table, mostly from Tanenbaum, compares demand
paging with demand segmentation.

    Consideration                   Demand Paging       Demand Segmentation
    How many addr spaces?           One                 Many
    Can VA size exceed PA size?     Yes                 Yes
    Accommodate elements
      with changing sizes?          No                  Yes
    Ease user sharing?              No                  Yes
    Why invented?                   Let the VA size     Permit logically
                                    exceed the PA size  independent addr spaces
    Internal fragmentation?         Yes                 No, in principle
** 4.8.2 and 4.8.3: Segmentation With Paging
(Tanenbaum gives two sections to explain the differences between
Multics and the Intel Pentium. These notes cover what is common to
both.)
a cost in complexity. This is very common now.
- A virtual address becomes a triple: (seg#, page#, offset).
- Each segment table entry (STE) points to the page table for that
segment. Compare this with a multilevel page table.
- The size of each segment is a multiple of the page size (since the
segment consists of pages). Perhaps not: one can instead keep the
exact size in the STE (limit value) and shoot the process if it
references beyond the limit. In this case the last page of each
segment is only partially valid.
- The page# field in the address gives the entry in the chosen page
table and the offset gives the offset in the page.
- From the limit field, one can easily compute the size of the
segment in pages (which equals the size of the corresponding page
table in PTEs). Implementations may require the size of a segment to
be a multiple of the page size in which case the STE would store the
number of pages in the segment.
- A straightforward implementation of segmentation with paging
would require 3 memory references (STE, PTE, referenced word), so a
TLB is crucial.
- Some books carelessly say that segments are of fixed size.
This is wrong.
They are of variable size with a fixed maximum and possibly with the
requirement that the size of a segment is a multiple of the page size.
- The first example of segmentation with paging was Multics.
- Keep protection and sharing information on segments.
This works well for a number of reasons.
- A segment is variable size.
- Segments and their boundaries are user (i.e., linker) visible.
- Segments are shared by sharing their page tables. This
eliminates the problem mentioned above with sharing individual pages.
- Do replacement on pages so there is no placement question and
no external fragmentation.
- Do fetch-on-demand with pages (i.e., do demand paging).
- In general, segmentation with demand paging works well and is
widely used. The only problems are the complexity and the resulting 3
memory references for each user memory reference. The complexity is
real, but can be managed. The three memory references would be fatal
were it not for TLBs, which considerably ameliorate the problem. TLBs
have high hit rates and for a TLB hit there is essentially no penalty.
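The (seg#, page#, offset) translation described in this section can be
sketched as follows. The structures are illustrative; a real MMU walks
the tables in hardware, and a TLB caches the final mapping so the two
table lookups are usually skipped.

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE 4096u
#define BAD_ADDR  0xFFFFFFFFu   /* illustrative error sentinel */

struct pte { uint32_t frame; }; /* physical frame number */

/* Each STE points at the page table for its segment. */
struct ste {
    uint32_t limit;             /* segment size in bytes */
    const struct pte *page_table;
};

/* (seg, page, offset): check the segment limit, index the segment's
   page table to get a frame, then add the offset within the page. */
uint32_t translate(const struct ste *segtab, uint32_t seg,
                   uint32_t page, uint32_t offset)
{
    uint32_t va_in_seg = page * PAGE_SIZE + offset;
    if (offset >= PAGE_SIZE || va_in_seg >= segtab[seg].limit)
        return BAD_ADDR;        /* limit fault */
    return segtab[seg].page_table[page].frame * PAGE_SIZE + offset;
}
```

Note the two table lookups per reference (STE, then PTE), which is
exactly why the TLB is crucial.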
Chapter 5: Input/Output
5.1: Principles of I/O Hardware
5.1.1: I/O Devices
Old division between block vs. character devices seems not very useful
to me. My taxonomy:
- Information storage (support file system)
- Read only, random access: CD-ROM, DVD ..
- Read/write, random access: Disk, diskette, CD-RW, ...
- Input device
- Character oriented: Keyboard, punch card ...
- Other: Mouse, Microphone, Scanner, Sensors ...
- Output device
- 2D Image (or sequence of images): Monitor, Printer ...
- Other: Speaker, physical control (robot, manipulator, machine ..)
- Input & Output: Touchscreen, physical control with feedback
5.1.2: Device Controllers
These are the ``devices'' as far as the OS is concerned. That
is, the OS code is written with the controller spec in hand not with
the device spec.
- Also called adaptors.
- The controller abstracts away some of the low-level features of
the device.
- For disks, the controller does error checking and buffering.
A block of data is followed by a checksum, e.g. the total number of 1
bits in the data block, used to verify that the data was not corrupted.
- (Unofficial) In the old days it handled interleaving of sectors.
(Sectors are interleaved if the
controller or CPU cannot handle the data rate and would otherwise have
to wait a full revolution. This is not a concern with modern systems
since the electronics have increased in speed faster than the
mechanical components.)
- For analog monitors (CRTs) the controller does
a great deal. Analog video is far from a bunch of ones and zeros.
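The count-the-1-bits checksum mentioned above can be sketched as
follows. Real controllers use stronger codes such as CRCs; this only
illustrates the idea of redundant data written alongside the block.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Total number of 1 bits in a data block.  The controller appends
   this value when writing and recomputes it when reading; a mismatch
   indicates corruption. */
uint32_t ones_checksum(const uint8_t *data, size_t len)
{
    uint32_t ones = 0;
    for (size_t i = 0; i < len; i++)
        for (uint8_t b = data[i]; b; b >>= 1)
            ones += b & 1u;
    return ones;
}
```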
5.1.3: Memory-Mapped I/O
Think of a disk controller and a read request. The goal is to copy
data from the disk to some portion of the central memory. How is this
done?
- The controller contains a microprocessor and memory and is
connected to the disk (by a cable).
- When the controller asks the disk to read a sector, the contents
come to the controller via the cable and are stored by the controller
in its memory.
- The question is: how does the OS, which is running on another
processor, let the controller know that a disk read is desired, and
how is the data eventually moved from the controller's memory to the
general system memory?
- Typically the interface the OS sees consists of some device
registers located on the controller.
- These are memory locations into which the OS writes
information such as sector to access, read vs. write, length,
where in system memory to put the data (for a read) or from where
to take the data (for a write).
- There is also typically a device register that acts as a
``go button'': writing it tells the controller to start the operation.
- There are also device registers that the OS reads, such as the
status of the controller, errors found, etc.
- So now the question is: how does the OS read and write the device
registers?
- With Memory-mapped I/O the device registers
appear as normal memory. All that is needed is to know at which
address each device register appears. Then the OS uses normal
load and store instructions to write the registers.
- Some systems instead have a special ``I/O space'' into which
the registers are mapped and require the use of special I/O space
instructions to accomplish the load and store. From a conceptual
point of view there is no difference between the two models.
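A sketch of what memory-mapped device registers look like to OS code.
The register layout, the command encoding, and the physical address
are all made up for illustration; `volatile` keeps the compiler from
reordering or eliding the stores, which matters because each store has
a side effect on the device.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical register layout of a disk controller. */
struct disk_regs {
    volatile uint32_t sector;    /* which sector to read/write   */
    volatile uint32_t mem_addr;  /* where in system memory       */
    volatile uint32_t length;    /* how many bytes               */
    volatile uint32_t command;   /* the "go button" register     */
    volatile uint32_t status;    /* read by the OS: busy, errors */
};

#define DISK_REGS_ADDR 0xFEE00000u  /* made-up physical address */
#define CMD_READ 1u

/* With memory-mapped I/O, ordinary stores program the device.
   On real hardware the pointer would come from the fixed address:
       struct disk_regs *regs = (struct disk_regs *)DISK_REGS_ADDR;  */
void start_disk_read(struct disk_regs *regs,
                     uint32_t sector, uint32_t buf, uint32_t len)
{
    regs->sector   = sector;
    regs->mem_addr = buf;
    regs->length   = len;
    regs->command  = CMD_READ;   /* this write starts the I/O */
}
```

On a system with a separate I/O space, the four assignments would
instead be special I/O instructions, but the structure of the code is
the same.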
5.1.4: Direct Memory Access (DMA)
- With or without DMA, the disk controller pulls the desired data
from the disk to its buffer (and pushes data from the buffer to the
disk for a write).
- Without DMA, i.e., with programmed I/O (PIO), the
cpu then does loads and stores (or I/O instructions) to copy the data
from the buffer to the desired memory location.
- With a DMA controller, the controller writes the memory without
intervention of the CPU.
- Clearly DMA saves CPU work. But this might not be important if
the CPU is limited by the memory or by system buses.
- Very important is that there is less data movement so the buses
are used less and the entire operation takes less time.
- Since PIO is pure software it is easier to change, which is an
advantage.
- DMA does need a number of bus transfers from the CPU to the
controller to specify the DMA. So DMA is most effective for large
transfers where the setup is amortized.
- Why have the buffer? Why not just go from the disk straight to
memory?
Answer: Speed matching. The disk supplies data at a fixed rate, which
might exceed the rate at which the memory can accept it. In particular
the memory might be busy servicing a request from the processor or
from another device.
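The PIO alternative described above amounts to the CPU running a copy
loop itself; with DMA, the controller performs these stores and the
CPU only sets up the transfer. A minimal sketch:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Programmed I/O: the CPU copies each word from the controller's
   buffer into system memory, one load and one store per word.
   `volatile` models reads from device memory. */
void pio_copy(volatile const uint32_t *dev_buf,
              uint32_t *mem, size_t nwords)
{
    for (size_t i = 0; i < nwords; i++)
        mem[i] = dev_buf[i];
}
```

This is exactly the work DMA saves: each word crosses the bus twice
(controller to CPU, CPU to memory) instead of once.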
5.1.5: Interrupts Revisited