================ Start Lecture #21 ================
NOTE: Lab4 is available.
Up to now, the virtual address space has been contiguous.
Among other issues, this makes memory management difficult when
there are more than two dynamically growing regions.
With two regions you start them on opposite sides of the virtual
space as we did before.
Better is to have many virtual address spaces, each starting at zero.
This split-up is user visible.
Without segmentation (equivalently said with just one segment) all
procedures are packed together so if one changes in size all the
virtual addresses following are changed and the program must be
re-linked. With multiple segments this relinking would be limited
to the symbols defined or used in the modified procedure.
Eases flexible protection and sharing (share a segment). For
example, can have a shared library.
** Two Segments
Late PDP-10s and TOPS-10
One shared text segment, that can also contain shared
(normally read only) data.
One (private) writable data segment.
Permission bits on each segment.
Which kind of segment is better to evict?
Swapping out the shared segment hurts many tasks.
The shared segment is read only (probably) so no writeback is needed.
“One segment” is OS/MVT done above.
** Three Segments
Traditional (early) Unix shown at right.
- Shared text marked execute only.
- Data segment (global and static variables).
- Stack segment (automatic variables).
- In reality, since the text doesn't grow, this was sometimes
treated as 2 segments by combining text and data into one segment.
** Four Segments
** General (not necessarily demand) Segmentation
Permits fine-grained sharing and protection. For a simple example,
one can share the text segment as in early Unix.
Visible division of program.
Variable size segments.
Virtual Address = (seg#, offset).
Does not mandate how stored in memory.
One possibility is that the entire program must be in memory
in order to run it.
Use whole process swapping.
Very early versions of Unix did this.
Can also implement demand segmentation.
Can combine with demand paging (done below).
Requires a segment table with a base and limit value for each
segment. Similar to a page table. Why is there no limit value in a
page table entry?
Ans: All pages are the same size so the limit is obvious.
Entries are called STEs, Segment Table Entries.
(seg#, offset) --> if (offset<limit) base+offset else error.
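The translation rule above can be sketched as follows; the segment-table layout and names are illustrative, not from any particular system.

```python
# Minimal sketch of segmentation address translation.
# Each segment table entry (STE) holds a (base, limit) pair.

class SegError(Exception):
    """Raised when an offset falls outside the segment."""

def translate(seg_table, seg, offset):
    base, limit = seg_table[seg]   # the STE for this segment
    if offset >= limit:            # reference past the end of the segment
        raise SegError(f"offset {offset} >= limit {limit} in segment {seg}")
    return base + offset           # physical address

# Example: segment 0 at base 1000 with limit 400; segment 1 at 5000 with limit 100.
seg_table = {0: (1000, 400), 1: (5000, 100)}
```

An offset of 399 in segment 0 translates to physical address 1399, while an offset of 400 is rejected.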
Segmentation exhibits external fragmentation, just as whole-program
swapping did.
Since segments are smaller than programs (several segments make up one
program), the external fragmentation is not as bad.
** Demand Segmentation
Same idea as demand paging, but applied to segments.
- If a segment is loaded, base and limit are stored in the STE and
the valid bit is set in the STE.
- The STE is accessed for each memory reference (not really, TLB).
- If the segment is not loaded, the valid bit is unset.
The base and limit, as well as the disk
address of the segment, are stored in an OS table.
- A reference to a non-loaded segment generates a segment fault
(analogous to a page fault).
- To load a segment, we must solve both the placement question and the
replacement question (for demand paging, there is no placement question).
- I believe demand segmentation was once implemented by Burroughs,
but am not sure.
It is not used in modern systems.
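The bullets above can be sketched as a lookup routine; the STE fields and names here are illustrative, not from any real implementation.

```python
# Sketch of a demand-segmentation lookup.  A cleared valid bit means the
# segment is not loaded; the reference generates a segment fault, after
# which the OS would find the segment's disk address in its own table,
# answer the placement and replacement questions, and load the segment.

class SegmentFault(Exception):
    """Analogous to a page fault, but for a whole segment."""

def ste_lookup(ste):
    if not ste["valid"]:
        raise SegmentFault   # handled by the OS, not the hardware
    return ste["base"], ste["limit"]
```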
The following table mostly from Tanenbaum compares demand
paging with demand segmentation.

| Consideration                            | Demand Paging                      | Demand Segmentation                           |
|------------------------------------------|------------------------------------|-----------------------------------------------|
| Programmer aware of it                   | No                                 | Yes                                           |
| How many addr spaces                     | 1                                  | Many                                          |
| VA size > PA size                        | Yes                                | Yes                                           |
| Protect procedures separately            | No                                 | Yes                                           |
| Accommodate elements with changing sizes | No                                 | Yes                                           |
| Ease user sharing                        | No                                 | Yes                                           |
| Why invented                             | let the VA size exceed the PA size | sharing, protection, independent addr spaces  |
| Internal fragmentation                   | Yes                                | No, in principle                              |
| External fragmentation                   | No                                 | Yes                                           |
| Placement question                       | No                                 | Yes                                           |
| Replacement question                     | Yes                                | Yes                                           |
** 4.8.2 and 4.8.3: Segmentation With (demand) Paging
(Tanenbaum gives two sections to explain the differences between
Multics and the Intel Pentium. These notes cover what is common to
all segmentation+paging systems).
Combines both segmentation and demand paging to get advantages of
both at a cost in complexity. This is very common now.
Although it is possible to combine segmentation with non-demand
paging, I do not know of any system that did this.
A virtual address becomes a triple: (seg#, page#, offset).
Each segment table entry (STE) points to the page table for that
segment. Compare this with a multilevel page table.
The physical size of each segment is a multiple of the page size
(since the segment consists of pages). The logical size is not;
instead we keep the exact size in the STE (limit value) and shoot
the process if it references beyond the limit. In this case the
last page of each segment is partially valid (internal
fragmentation).
The page# field in the address gives the entry in the chosen page
table and the offset gives the offset in the page.
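The full translation of the triple can be sketched as follows; the table layouts are illustrative, not from any real hardware.

```python
# Sketch of (seg#, page#, offset) translation for segmentation with paging.
PAGE_SIZE = 4096  # assumed page size

def translate(seg_table, seg, page, offset):
    ste = seg_table[seg]                # STE points to this segment's page table
    vaddr = page * PAGE_SIZE + offset   # position within the segment
    if vaddr >= ste["limit"]:           # past the logical end: shoot the process
        raise MemoryError("segment limit exceeded")
    frame = ste["page_table"][page]     # PTE chosen by the page# field
    return frame * PAGE_SIZE + offset   # offset gives the offset in the page

# Example: a 5000-byte segment occupying frames 7 and 3.
seg_table = {0: {"limit": 5000, "page_table": {0: 7, 1: 3}}}
```

Note that the last page is only partially valid: page 1 covers bytes 4096-8191, but only offsets up to 5000 within the segment pass the limit check.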
From the limit field, one can easily compute the size of the
segment in pages (which equals the size of the corresponding page
table in PTEs).
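That computation is just a ceiling division (the page size below is an assumed value):

```python
PAGE_SIZE = 4096  # assumed page size

def pages_in_segment(limit):
    # A segment whose logical size is `limit` bytes occupies this many
    # pages (and hence this many PTEs), the last page possibly only
    # partially valid.
    return (limit + PAGE_SIZE - 1) // PAGE_SIZE
```

For example, a 10000-byte segment needs 3 pages with 4096-byte pages.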
A straightforward implementation of segmentation with paging
would require 3 memory references (STE, PTE, referenced word), so a
TLB is crucial.
Some books carelessly say that segments are of fixed size. This
is wrong. They are of variable size with a fixed maximum and with
the requirement that the physical size of a segment is a multiple
of the page size.
The first example of segmentation with paging was Multics.
Keep protection and sharing information on segments.
This works well for a number of reasons.
A segment is variable size.
Segments and their boundaries are user (i.e., linker) visible.
Segments are shared by sharing their page tables. This
eliminates the problem mentioned above with sharing individual pages.
- Since we have paging, there is no placement question and
no external fragmentation.
Do fetch-on-demand with pages (i.e., do demand paging).
In general, segmentation with demand paging works well and is
widely used. The only problems are the complexity and the resulting 3
memory references for each user memory reference. The complexity is
real, but can be managed. The three memory references would be fatal
were it not for TLBs, which considerably ameliorate the problem. TLBs
have high hit rates and for a TLB hit there is essentially no penalty.
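To make the last point concrete, a back-of-the-envelope calculation (the 98% hit rate below is an assumed figure, not from the text):

```python
def avg_refs_per_access(hit_rate, miss_cost=3):
    # TLB hit: the translation is essentially free, so 1 memory reference.
    # TLB miss: STE + PTE + the referenced word = miss_cost references.
    return hit_rate * 1 + (1 - hit_rate) * miss_cost

# With a 98% hit rate, each user memory reference costs on average
# 0.98 * 1 + 0.02 * 3 = 1.04 memory references, far from the fatal 3.
```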