NOTE: These notes are by Allan Gottlieb, and are reproduced here, with superficial modifications, with his permission. "I" in this text generally refers to Prof. Gottlieb, except in regards to administrative matters.

================ Start Lecture #12 (Mar. 26) ================

4.2: Swapping

Medium-term scheduling:

Suspending: Taking an active process entirely out of memory and saving its state on disk.

Resuming: Loading a suspended process from disk to memory.

A suspended process may be blocked (waiting for some event to complete), or it may be suspended and ready (i.e., ready to run as soon as it is swapped in).

In an OS that uses variable-length partitions, swapping can be done when some waiting process can't fit in memory, especially when the memory requirements of all the active processes together exceed total memory. In an OS that uses paging, swapping can be done when the working-set sizes of the active processes are together larger than physical memory, so that keeping all processes active leads to thrashing.

Unlike previous similar issues, such as short-term scheduling and page replacement, I am not going to give a list of a half-dozen simple swapping algorithms. Swapping is more complicated and doesn't lend itself to simple algorithms. (Also, it is done comparatively infrequently, so one can afford to spend time on a complicated algorithm.) Rather, there are a number of considerations that must be combined.

First, note that there are two decisions to be made: which, if any, jobs to swap in, and which, if any, to swap out. With variable-length partitioning, these are always combined: the only reason to swap one job out is to swap another in, and you can only swap a job in if either an active job terminates or you swap some other job out.

With a paging system, it may become desirable to swap one job out and not bring any in, if the working sets of the active jobs grow; or it may become feasible to swap a job in without any other job terminating or being suspended, if the working sets shrink.
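
The paging-system case above can be sketched in code. This is a hedged illustration, not an algorithm from the notes: it checks whether the working sets of the active processes together exceed physical memory, and if so nominates a victim to swap out. The victim policy (largest working set) is one invented choice among many.

```c
/* Sketch: should a paging system swap a process out?  The notes say
 * swapping becomes desirable when the working sets of the active
 * processes together exceed physical memory.  The victim-selection
 * policy here (largest working set) is illustrative only. */
#include <stddef.h>

struct proc {
    int    id;
    size_t working_set;   /* estimated working-set size in frames */
};

/* Return the index of a process to swap out, or -1 if the working
 * sets all fit and no swap-out is needed. */
int pick_swap_victim(struct proc *p, int n, size_t phys_frames)
{
    size_t total = 0;
    int victim = -1;
    for (int i = 0; i < n; i++) {
        total += p[i].working_set;
        if (victim < 0 || p[i].working_set > p[victim].working_set)
            victim = i;   /* candidate: largest working set */
    }
    return (total > phys_frames) ? victim : -1;
}
```

Note that the same check run after working sets shrink can answer the opposite question (is there now room to swap a job in?), which is the asymmetry with variable-length partitions described above.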

Criteria for choosing jobs to swap in / out

The bottom line is that if you have to do a lot of swapping, you're in trouble.

4.8: Segmentation

Up to now, the virtual address space has been contiguous.

** Two Segments

Late PDP-10s and TOPS-10

** Three Segments

Traditional (early) Unix used three segments:

  1. Shared text, marked execute only.
  2. Data segment (global and static variables).
  3. Stack segment (automatic variables).

(In reality, since the text doesn't grow, this was sometimes treated as 2 segments.)
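
A short C program can make the three segments visible. This is a hedged sketch: the addresses printed are platform-dependent, and the point is only that code, globals, and automatic variables live in different regions of the address space.

```c
/* Sketch: where a C program's objects land in the three classic
 * Unix segments.  Exact addresses vary by platform and are not
 * the point; the three kinds of object live in distinct regions. */
#include <stdio.h>

int global_counter = 42;           /* data segment (global variable) */

void in_text(void) { }             /* code lives in the shared text segment */

void show_segments(void)
{
    int automatic = 7;             /* stack segment (automatic variable) */
    printf("text : %p\n", (void *)in_text);
    printf("data : %p\n", (void *)&global_counter);
    printf("stack: %p\n", (void *)&automatic);
}
```

On a typical Unix the text addresses are lowest, data above them, and the stack near the top of the address space, but none of that is guaranteed by C.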

** General (not necessarily demand) Segmentation

** Demand Segmentation

Same idea as demand paging applied to segments.

The following table, mostly from Tanenbaum, compares demand paging with demand segmentation.

Consideration                  Demand Paging        Demand Segmentation
Programmer aware               No                   Yes
How many addr spaces           1                    Many
VA size > PA size              Yes                  Yes
Protect individual
  procedures separately        No                   Yes
Accommodate elements
  with changing sizes          No                   Yes
Ease user sharing              No                   Yes
Why invented                   Let the VA size      Sharing, protection,
                               exceed the PA size   independent addr spaces
Internal fragmentation         Yes                  No, in principle
External fragmentation         No                   Yes
Placement question             No                   Yes
Replacement question           Yes                  Yes

** 4.8.2 and 4.8.3: Segmentation With Paging

(Tanenbaum gives two sections to explain the differences between Multics and the Intel Pentium. These notes cover what is common to all segmentation).

Combining segmentation with paging obtains the advantages of both, at a cost in complexity. This is very common now.
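
The combined scheme can be sketched as a two-level lookup: the segment number selects a page table (and a limit for protection), and the offset within the segment is then translated by ordinary paging. The field widths and table layout below are invented for illustration; real designs (Multics, the Pentium) differ in the details.

```c
/* Sketch of address translation under segmentation with paging.
 * Assumed layout: 4 KB pages, a per-segment page table holding a
 * frame number per page, and a byte limit for protection. */
#include <stdint.h>

#define PAGE_SIZE 4096u            /* assumed: 12-bit page offset */

struct seg_entry {
    uint32_t *page_table;          /* frame number for each page */
    uint32_t  limit;               /* segment length in bytes */
};

/* Translate (segment, offset) to a physical address; return
 * (uint32_t)-1 on a segment-limit violation. */
uint32_t translate(struct seg_entry *segtab, uint32_t seg, uint32_t off)
{
    if (off >= segtab[seg].limit)
        return (uint32_t)-1;                       /* protection fault */
    uint32_t page  = off / PAGE_SIZE;
    uint32_t frame = segtab[seg].page_table[page];
    return frame * PAGE_SIZE + off % PAGE_SIZE;
}
```

The limit check gives segmentation's per-segment protection, while the page-table indirection gives paging's fixed-size allocation, which is exactly the "advantages of both" claim above.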

Chapter 5: Input/Output

5.1: Principles of I/O Hardware

5.1.1: I/O Devices

Old division between block vs. character devices seems not very useful to me. My taxonomy:

(Taxonomy tree, rooted at "I/O Device", omitted.)

5.1.2: Device Controllers

These are the "devices" as far as the OS is concerned. That is, the OS code is written with the controller spec in hand, not with the device spec.

5.1.3: Memory-Mapped I/O

Think of a disk controller and a read request. The goal is to copy data from the disk to some portion of the central memory. How do we do this?
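With memory-mapped I/O, the controller's registers appear at fixed memory addresses and the driver accesses them with ordinary loads and stores (declared volatile so the compiler does not optimize the accesses away). Below is a hedged sketch: an ordinary struct stands in for the real register block, whose address and layout are of course controller-specific and invented here.

```c
/* Sketch of memory-mapped I/O for a disk read.  The register
 * layout is invented; a real driver would map the controller's
 * actual register block at its hardware-defined address. */
#include <stdint.h>

struct disk_regs {
    volatile uint32_t sector;      /* which sector to read */
    volatile uint32_t command;     /* 1 = start read */
    volatile uint32_t status;      /* 0 = busy, 1 = done */
};

void start_read(struct disk_regs *r, uint32_t sector)
{
    r->sector  = sector;           /* ordinary store = register write */
    r->command = 1;
}

int read_done(const struct disk_regs *r)
{
    return r->status == 1;         /* ordinary load = register read */
}
```

The driver could then poll read_done(), or (better) let the controller raise an interrupt on completion, as discussed in 5.1.5.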

5.1.4: Direct Memory Access (DMA)
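
A hedged sketch of DMA from the CPU's point of view: the driver gives the controller a memory address and a byte count, starts the transfer, and the controller then moves the data itself, with no CPU work per byte, notifying the CPU (normally via an interrupt) when done. The register names and the simulated "controller side" below are invented for illustration.

```c
/* Sketch of DMA.  The CPU programs address + count and starts the
 * transfer; the controller copies the data directly into memory.
 * Here the controller side is simulated with memcpy. */
#include <stdint.h>
#include <string.h>

struct dma_regs {                  /* invented register layout */
    uint8_t *mem_addr;             /* where in memory to deposit data */
    uint32_t count;                /* how many bytes to transfer */
    int      start;                /* 1 = transfer in progress */
};

/* Simulated controller: copies device data straight into memory,
 * then clears start (a real controller would raise an interrupt). */
void dma_transfer(struct dma_regs *r, const uint8_t *device_buf)
{
    if (r->start)
        memcpy(r->mem_addr, device_buf, r->count);
    r->start = 0;
}
```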

5.1.5: Interrupts Revisited