Operating Systems
2000-01 Fall
M 5:00-6:50
CIWW 109

Allan Gottlieb
gottlieb@nyu.edu
http://allan.ultra.nyu.edu/~gottlieb
715 Broadway, Room 1001
212-998-3344
609-951-2707
email is best

================ Start Lecture #5 ================

Chapter 3: Memory Management

Also called storage management or space management.

Memory management must deal with the storage hierarchy present in modern machines.

We will see in the next few lectures that there are three independent decisions:

  1. Segmentation (or no segmentation)
  2. Paging (or no paging)
  3. Fetch on demand (or no fetching on demand)

Memory management implements address translation.

Homework: 7.

When is address translation performed?

  1. At compile time
  2. At link-edit time (the ``linker lab'')
  3. At load time
  4. At execution time
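
A minimal sketch of execution-time translation, assuming a hypothetical base/limit (relocation) register pair loaded by the OS when a job is dispatched; the names and values below are illustrative, not from the notes.

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical base/limit pair, loaded by the OS on each dispatch. */
    typedef struct {
        unsigned base;    /* where this job was placed in physical memory */
        unsigned limit;   /* size of this job's address space             */
    } relocation_regs;

    /* Execution-time translation: every virtual address is checked
     * against the limit and then offset by the base.  Real hardware
     * does this on every memory reference; the C here only shows the
     * arithmetic.                                                     */
    unsigned translate(relocation_regs r, unsigned vaddr)
    {
        if (vaddr >= r.limit) {
            fprintf(stderr, "addressing exception: %u\n", vaddr);
            exit(1);
        }
        return r.base + vaddr;
    }

    int main(void)
    {
        relocation_regs r = { 40000, 10000 };  /* job loaded at 40000, 10000 bytes long */
        printf("virtual 1234 -> physical %u\n", translate(r, 1234));  /* prints 41234 */
        return 0;
    }

With compile-time, link-edit-time, or load-time binding the same addition is done once, in software, before the job runs; only execution-time translation permits the job to be moved afterwards.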

Extensions

Note: I will place ** before each memory management scheme.

3.1: Memory management without swapping or paging

Entire process remains in memory from start to finish.

The sum of the memory requirements of all jobs in the system cannot exceed the size of physical memory.

** 3.1.1: Monoprogramming without swapping or paging (Single User)

The ``good old days'' when everything was easy.

3.1.2: Multiprogramming and Memory Usage

The goal is to improve CPU utilization by overlapping CPU and I/O.
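
A back-of-the-envelope way to see the benefit, assuming the simple probabilistic model Tanenbaum uses for this section: if each job spends a fraction p of its time waiting for I/O and the n resident jobs wait independently, the CPU is idle only when all n are waiting at once, so utilization is roughly 1 - p^n. The sketch below just tabulates that formula; the 80% I/O-wait figure is an assumed example value.

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double p = 0.80;   /* assumed fraction of time each job waits for I/O */

        /* Utilization = 1 - p^n: the CPU is idle only when all n jobs wait. */
        for (int n = 1; n <= 8; n++)
            printf("n = %d   utilization = %.2f\n", n, 1.0 - pow(p, n));
        return 0;
    }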

Homework: 1, 3.

3.1.3: Multiprogramming with fixed partitions

3.2: Swapping

Moving entire processes between disk and memory is called swapping.

3.2.1: Multiprogramming with variable partitions

Homework: 4.

MVT introduces the ``Placement Question'': which hole (partition) to choose.
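
One common answer to the placement question is first fit: scan the list of free holes and take the first one large enough (best fit instead searches for the smallest adequate hole). The C below is only a sketch over a hypothetical free-hole list, not code from the notes.

    #include <stddef.h>

    /* Hypothetical free-hole list for variable partitions (MVT). */
    struct hole {
        size_t       start;   /* starting physical address of the hole */
        size_t       size;    /* length of the hole                    */
        struct hole *next;
    };

    /* First fit: return the first hole big enough for the request.
     * The caller would then carve the job out of this hole, leaving
     * any remainder on the free list.                                */
    struct hole *first_fit(struct hole *free_list, size_t request)
    {
        for (struct hole *h = free_list; h != NULL; h = h->next)
            if (h->size >= request)
                return h;
        return NULL;          /* no hole is big enough */
    }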

Homework: 2, 5.

MVT also introduces the ``Replacement Question'': which victim to swap out.

We will study this question more when we discuss demand paging.

Considerations in choosing a victim

NOTEs:
  1. So far the schemes presented have had two properties:
    1. Each job is stored contiguously in memory. That is, the job is contiguous in physical addresses.
    2. Each job cannot use more memory than exists in the system. That is, the virtual address space cannot exceed the physical address space.

  2. Tanenbaum now attacks the second item. I wish to do both and will start with the first.

  3. Tanenbaum (and most of the world) uses the term ``paging'' to mean what I call demand paging. This is unfortunate as it mixes together two concepts:
    1. Paging (dicing the address space) to solve the placement problem and essentially eliminate external fragmentation.
    2. Demand fetching, to permit the total memory requirements of all loaded jobs to exceed the size of physical memory.

  4. Tanenbaum (and most of the world) uses the term virtual memory as a synonym for demand paging. Again I consider this unfortunate.
    1. Demand paging is a fine term and is quite descriptive.
    2. Virtual memory ``should'' be used in contrast with physical memory to describe any virtual to physical address translation.

** (non-demand) Paging

The simplest scheme that removes the requirement that each job occupy contiguous physical memory.

Example: Assume a decimal machine with page size = frame size = 1000.
Assume PTE 3 contains 459.
Then virtual address 3372 corresponds to physical address 459372.
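
The same arithmetic as a C sketch. Only PTE 3 is given above; the other entries below are placeholder zeros, and the decimal page size is modeled with division and remainder (a real binary machine would use shifts and masks).

    #include <stdio.h>

    #define PAGE_SIZE 1000                 /* decimal machine from the example */

    /* Hypothetical page table: entry i holds the frame number of page i.
     * Only entry 3 comes from the example; the rest are placeholders.   */
    static unsigned page_table[] = { 0, 0, 0, 459 };

    unsigned translate(unsigned vaddr)
    {
        unsigned page   = vaddr / PAGE_SIZE;   /* 3372 / 1000 = 3   */
        unsigned offset = vaddr % PAGE_SIZE;   /* 3372 % 1000 = 372 */
        return page_table[page] * PAGE_SIZE + offset;
    }

    int main(void)
    {
        printf("%u\n", translate(3372));       /* prints 459372 */
        return 0;
    }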

Properties of (non-demand) paging.

Homework: 13.