Operating Systems

Start Lecture #7

Remark: Lab 3 is available

6.7 Other Issues

6.7.1 Two-phase locking

This is covered (MUCH better) in a database text. We will skip it.

6.7.2 Communication Deadlocks

We have mostly considered actual hardware resources such as printers, but we have also considered more abstract resources such as semaphores.

There are other possibilities. For example, a server often waits for a client to make a request. But if the request message is lost, the server is still waiting for the client, and the client is waiting for the server to respond to the (lost) request. Each will wait for the other forever: a deadlock.

A solution to this communication deadlock would be to use a timeout so that the client eventually determines that the message was lost and sends another.

But it is not nearly that simple: the message might have been greatly delayed, in which case the server will now receive two requests, which could be bad, and is likely to send two replies, which also might be bad.

This gives rise to the serious subject of communication protocols.
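
To make the retry idea concrete, here is a minimal sketch in C. It is not a real protocol: the "network" is simulated inside one process, the drop rate is invented, and the server simply remembers the last sequence number it served, so a duplicate caused by a client retry is answered but the work is not executed twice. All names are made up for this illustration.

    /* Timeout-and-retry with sequence numbers over a toy, lossy "network". */
    #include <stdio.h>
    #include <stdlib.h>

    #define DROP_RATE 40            /* percent of requests "lost" in transit   */
    #define MAX_TRIES 5             /* client gives up after this many retries */

    static int last_seq_seen = -1;  /* server state: last sequence number done */

    /* Server: performs the work only once per sequence number; a duplicate
     * caused by a client retry is answered but not re-executed. */
    static int server_handle(int seq) {
        if (seq != last_seq_seen) {
            printf("  server: performing work for request %d\n", seq);
            last_seq_seen = seq;
        } else {
            printf("  server: duplicate request %d, resending old reply\n", seq);
        }
        return seq;                 /* the "reply" */
    }

    /* Unreliable channel: loses the request DROP_RATE percent of the time. */
    static int unreliable_request(int seq, int *reply) {
        if (rand() % 100 < DROP_RATE) {
            printf("  network: request %d lost\n", seq);
            return 0;               /* no reply; the client will time out */
        }
        *reply = server_handle(seq);
        return 1;
    }

    int main(void) {
        srand(1);
        for (int seq = 0; seq < 3; seq++) {      /* client sends 3 requests */
            int reply, tries;
            for (tries = 0; tries < MAX_TRIES; tries++) {
                printf("client: sending request %d (try %d)\n", seq, tries + 1);
                if (unreliable_request(seq, &reply)) {
                    printf("client: got reply %d\n", reply);
                    break;          /* success; move on to the next request */
                }
                /* A real client would block on a timer here before retrying. */
            }
            if (tries == MAX_TRIES)
                printf("client: giving up on request %d\n", seq);
        }
        return 0;
    }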

6.7.3 Livelock

Instead of blocking when a resource is not available, a process may (wait and then) try again to obtain it. Now assume process A has the printer, and B the CD-ROM, and each process wants the other resource as well. A will repeatedly request the CD-ROM and B will repeatedly request the printer. Neither can ever succeed since the other process holds the desired resource. Since no process is blocked, this is not technically deadlock, but a related concept called livelock.
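
The scenario above can be sketched with POSIX threads: each thread holds one mutex and busy-polls for the other with pthread_mutex_trylock. Neither thread is ever blocked, yet neither makes progress. The resources, names, and polling interval are invented for the illustration, and the program spins forever, so run it with a time limit.

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static pthread_mutex_t printer = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t cdrom   = PTHREAD_MUTEX_INITIALIZER;

    struct pair { pthread_mutex_t *held; pthread_mutex_t *wanted; const char *name; };

    static void *proc(void *arg) {
        struct pair *p = arg;
        pthread_mutex_lock(p->held);                 /* grab the first resource */
        for (;;) {                                   /* poll for the second one */
            if (pthread_mutex_trylock(p->wanted) == 0) {
                printf("%s got both resources\n", p->name);   /* never happens */
                break;
            }
            usleep(100000);                          /* wait, then try again    */
        }
        return NULL;
    }

    int main(void) {
        struct pair a = { &printer, &cdrom, "A" };
        struct pair b = { &cdrom, &printer, "B" };
        pthread_t ta, tb;
        pthread_create(&ta, NULL, proc, &a);
        pthread_create(&tb, NULL, proc, &b);
        pthread_join(ta, NULL);                      /* never returns: livelock */
        pthread_join(tb, NULL);
        return 0;
    }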

6.7.4 Starvation

As usual, FCFS is a good cure. Often this is implemented by priority aging and granting the resource to the highest-priority waiting process. One can also periodically stop accepting new processes until all old ones have received their resources.
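
A minimal sketch of priority aging, assuming effective priority = base priority + time spent waiting (the formula and names are illustrative assumptions): an old low-priority request eventually outranks any newly arrived high-priority request, so it cannot starve.

    #include <stdio.h>

    struct waiter { const char *name; int base_priority; int ticks_waiting; };

    /* Effective priority = base priority plus one point per tick waited. */
    static int effective(const struct waiter *w) {
        return w->base_priority + w->ticks_waiting;
    }

    /* When the resource frees up, grant it to the highest effective priority. */
    static const char *pick(const struct waiter w[], int n) {
        int best = 0;
        for (int i = 1; i < n; i++)
            if (effective(&w[i]) > effective(&w[best]))
                best = i;
        return w[best].name;
    }

    int main(void) {
        struct waiter w[] = {
            { "old low-priority job",  1, 50 },   /* waiting a long time */
            { "new high-priority job", 20, 0 },   /* just arrived        */
        };
        printf("grant resource to the %s\n", pick(w, 2));
        return 0;
    }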

6.8 Research on Deadlocks

6.9 Summary

Read.

Chapter 3 Memory Management

Also called storage management or space management.

The memory manager must deal with the storage hierarchy present in modern machines.

We will see in the next few weeks that there are three independent decisions:

  1. Should we have segmentation?
  2. Should we have paging?
  3. Should we employ fetch on demand?

Memory management implements address translation.

Homework: What is the difference between a physical address and a virtual address?

When is address translation performed?

  1. At compile time
  2. At link-edit time (the linker lab)
  3. At load time
  4. At execution time

Extensions

  1. Dynamic Loading
  2. Dynamic Linking
Note: I will place ** before each memory management scheme.

3.1 No Memory Management

The entire process remains in memory from start to finish and does not move.

The sum of the memory requirements of all jobs in the system cannot exceed the size of physical memory.

Monoprogramming

The good old days when everything was easy.

Running Multiple Programs Without a Memory Abstraction

This can be done via swapping if you have only one program loaded at a time. A more general version of swapping is discussed below.

One can also support a limited form of multiprogramming, similar to MFT (which is described next). In this limited version, the loader relocates all relative addresses, thus permitting multiple processes to coexist in physical memory the way your linker permitted multiple modules in a single process to coexist.
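
A minimal sketch of such a relocating loader, assuming a toy object format in which each word carries a flag saying whether it is a relative address that must be adjusted (the format and names are invented): the same program can then be loaded at two different physical addresses.

    #include <stdio.h>

    #define PROG_LEN 4

    struct word { int value; int is_relative_addr; };

    /* The loader copies the program into memory starting at load_base and adds
     * load_base to every word flagged as a relative address. */
    static void load(const struct word prog[], int len, int mem[], int load_base) {
        for (int i = 0; i < len; i++)
            mem[load_base + i] = prog[i].value +
                                 (prog[i].is_relative_addr ? load_base : 0);
    }

    int main(void) {
        int memory[100] = {0};
        /* A tiny "program": two data words and two relative addresses (0 and 2). */
        struct word prog[PROG_LEN] = { {42, 0}, {0, 1}, {7, 0}, {2, 1} };

        load(prog, PROG_LEN, memory, 10);   /* process 1 loaded at address 10 */
        load(prog, PROG_LEN, memory, 50);   /* process 2 loaded at address 50 */

        for (int i = 10; i < 14; i++) printf("mem[%d] = %d\n", i, memory[i]);
        for (int i = 50; i < 54; i++) printf("mem[%d] = %d\n", i, memory[i]);
        return 0;
    }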

**Multiprogramming with Fixed Partitions

Two goals of multiprogramming are to improve CPU utilization, by overlapping CPU and I/O, and to permit short jobs to finish quickly.

3.2 A Memory Abstraction: Address Spaces

3.2.1 The Notion of an Address Space

Just as the process concept creates a kind of abstract CPU to run programs, the address space creates a kind of abstract memory for programs to live in.

This does for processes what you so kindly did for modules in the linker lab: it permits each to believe it has its own memory starting at address zero.

Base and Limit Registers

Base and limit registers are additional hardware, invisible to the programmer, that supports multiprogramming by automatically adding the base address (i.e., the value in the base register) to every relative address when that address is accessed at run time.

In addition, the relative address is compared against the value in the limit register and, if it is larger, the process is aborted since it has exceeded its memory bound. Compare this to your error checking in the linker lab.

The base and limit registers are set by the OS when the job starts.
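
A sketch of what the hardware does on every memory reference (in reality this happens inside the MMU, not in software; the register values below are made up):

    #include <stdio.h>
    #include <stdlib.h>

    static unsigned base_register  = 0x4000;   /* set by the OS at job start */
    static unsigned limit_register = 0x1000;   /* size of the job's memory   */

    /* Every relative (virtual) address is checked against the limit and then
     * has the base added to form the physical address. */
    static unsigned translate(unsigned relative_addr) {
        if (relative_addr >= limit_register) {
            fprintf(stderr, "memory bound exceeded: aborting process\n");
            exit(1);
        }
        return base_register + relative_addr;
    }

    int main(void) {
        printf("virtual 0x0123 -> physical 0x%04x\n", translate(0x0123));
        printf("virtual 0x2000 -> ");           /* exceeds the limit: aborts */
        printf("physical 0x%04x\n", translate(0x2000));
        return 0;
    }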

3.2.2 Swapping

Moving an entire process between disk and memory is called swapping.

Multiprogramming with Variable Partitions

Both the number and size of the partitions change with time.

Homework: A swapping system eliminates holes by compaction. Assume a random distribution of holes and data segments, assume the data segments are much bigger than the holes, and assume a time to read or write a 32-bit memory word of 10ns. About how long does it take to compact 128 MB? For simplicity, assume that word 0 is part of a hole and the highest word in memory contains valid data.

3.2.3 Managing Free Memory

MVT Introduces the Placement Question

That is, which hole (partition) should one choose?

Homework: Consider a swapping system in which memory consists of the following hole sizes in memory order: 10K, 4K, 20K, 18K, 7K, 9K, 12K, and 15K. Which hole is taken for successive segment requests of

  1. 12K
  2. 10K
  3. 9K
for first fit? Now repeat the question for best fit, worst fit, and next fit.

Memory Management with Bitmaps

Divide memory into blocks and associate a bit with each block, used to indicate whether the corresponding block is free or allocated. To find a chunk of N blocks, one needs to find N consecutive bits indicating free blocks.

The only design question is how much memory one bit should represent.
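
A sketch of the search, assuming one entry per block (a byte stands in for a bit here to keep the code short; block size and names are illustrative) and a scan for the first run of N consecutive free blocks:

    #include <stdio.h>
    #include <string.h>

    #define NBLOCKS 32

    static unsigned char bitmap[NBLOCKS];   /* 0 = free, 1 = allocated */

    /* Return the index of the first run of n consecutive free blocks and mark
     * them allocated; return -1 if no such run exists. */
    static int alloc_blocks(int n) {
        int run = 0;
        for (int i = 0; i < NBLOCKS; i++) {
            run = bitmap[i] ? 0 : run + 1;
            if (run == n) {
                int start = i - n + 1;
                memset(&bitmap[start], 1, n);
                return start;
            }
        }
        return -1;
    }

    int main(void) {
        memset(bitmap, 0, sizeof bitmap);
        bitmap[3] = bitmap[4] = 1;          /* pretend some blocks are in use */
        printf("5 blocks allocated at block %d\n", alloc_blocks(5));
        printf("3 blocks allocated at block %d\n", alloc_blocks(3));
        return 0;
    }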

Memory Management with Linked Lists

Instead of a bit map, use a linked list of nodes where each node corresponds to a region of memory either allocated to a process or still available (a hole).
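
A sketch of first-fit allocation over such a list, under the assumption that a hole bigger than the request is split, with the remainder left as a smaller hole (the node layout and units are illustrative):

    #include <stdio.h>
    #include <stdlib.h>

    struct region {
        int start, length;          /* region covers [start, start + length) */
        int is_hole;                /* 1 = free, 0 = allocated to a process  */
        struct region *next;
    };

    /* First fit: walk the list, take the first hole big enough, and split off
     * the unused remainder as a new (smaller) hole. */
    static int first_fit(struct region *list, int size) {
        for (struct region *r = list; r != NULL; r = r->next) {
            if (r->is_hole && r->length >= size) {
                if (r->length > size) {            /* split the hole */
                    struct region *rest = malloc(sizeof *rest);
                    rest->start   = r->start + size;
                    rest->length  = r->length - size;
                    rest->is_hole = 1;
                    rest->next    = r->next;
                    r->next = rest;
                }
                r->length  = size;
                r->is_hole = 0;
                return r->start;                   /* address of the new region */
            }
        }
        return -1;                                 /* no hole large enough      */
    }

    int main(void) {
        struct region all = { 0, 1000, 1, NULL };  /* one big hole             */
        printf("allocated at %d\n", first_fit(&all, 300));   /* 0   */
        printf("allocated at %d\n", first_fit(&all, 500));   /* 300 */
        printf("allocated at %d\n", first_fit(&all, 400));   /* -1  */
        return 0;
    }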

Memory Management using Boundary Tags

See Knuth, The Art of Computer Programming vol 1.

MVT also introduces the Replacement Question

That is, which victim should we swap out?

This is an example of the suspend arc mentioned in process scheduling.

We will study this question more when we discuss demand paging in which case we swap out only part of a process.

Considerations in choosing a victim

NOTEs:
  1. The schemes presented so far have had two properties:
    1. Each job is stored contiguously in memory. That is, the job is contiguous in physical addresses.
    2. Each job cannot use more memory than exists in the system. That is, the virtual address space cannot exceed the physical address space.

  2. Tanenbaum now attacks the second item. I wish to do both and start with the first.

  3. Tanenbaum (and most of the world) uses the term paging to mean what I call demand paging. This is unfortunate as it mixes together two concepts.
    1. Paging (dicing the address space) to solve the placement problem and essentially eliminate external fragmentation.
    2. Demand fetching, to permit the total memory requirements of all loaded jobs to exceed the size of physical memory.

  4. Most of the world uses the term virtual memory as a synonym for demand paging. Again I consider this unfortunate.
    1. Demand paging is a fine term and is quite descriptive.
    2. Virtual memory should be used in contrast with physical memory to describe any virtual to physical address translation.

** (non-demand) Paging

Simplest scheme to remove the requirement of contiguous physical memory.
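
A sketch of the idea, assuming a tiny page table mapping each virtual page to an arbitrary physical frame (page size, frame numbers, and names are made up): the frames holding a process need not be contiguous or in order.

    #include <stdio.h>

    #define PAGE_SIZE 4096
    #define NPAGES    4

    /* page_table[p] gives the physical frame holding virtual page p.
     * Note the frames are neither contiguous nor in order. */
    static unsigned page_table[NPAGES] = { 7, 2, 9, 0 };

    static unsigned translate(unsigned virtual_addr) {
        unsigned page   = virtual_addr / PAGE_SIZE;
        unsigned offset = virtual_addr % PAGE_SIZE;
        return page_table[page] * PAGE_SIZE + offset;
    }

    int main(void) {
        /* An address in virtual page 2 maps into physical frame 9. */
        printf("virtual %u -> physical %u\n", 2 * PAGE_SIZE + 100,
               translate(2 * PAGE_SIZE + 100));
        return 0;
    }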