Operating Systems
Start Lecture #7
Remark: Lab 3 (banker) assigned.
It is due in 2 weeks.
Homework:
What is the difference between a physical address and a virtual address?
When is address translation performed?
- At compile time
- Compiler generates physical addresses.
- Requires knowledge of where the compilation unit will be loaded.
- No linker.
- Loader is trivial.
- Primitive.
- Rarely used (MSDOS .COM files).
- At link-edit time (the linker lab)
- Compiler
- Generates relative (a.k.a. relocatable) addresses for each
compilation unit.
- References external addresses.
- Linkage editor
- Converts relocatable addresses to absolute.
- Resolves external references.
- Must also convert virtual to physical addresses by
knowing where the linked program will be loaded.
The linker lab does this, but it is trivial since we
assume the linked program will be loaded at 0.
- Loader is still trivial.
- Hardware requirements are small.
- A program can be loaded only where specified and
cannot move once loaded.
- Not used much any more.
- At load time
- Similar to at link-edit time, but do not fix
the starting address.
- Program can be loaded anywhere.
- Program can move but cannot be split.
- Need modest hardware: base/limit registers.
- Loader sets the base/limit registers.
- No longer common.
- At execution time
- Addresses translated dynamically during execution.
- Hardware needed to perform the virtual to physical address
translation quickly.
- Currently dominates.
- Much more information later.
Extensions
- Dynamic Loading
- When executing a call, check if the module is loaded.
- If it is not loaded, have a linking loader load it and
update the tables to indicate that it now is loaded and where
it is.
- This procedure slows down all calls to the routine (not just
the first one that must load the module) unless you rewrite
code dynamically.
- Not used much.
- Dynamic Linking.
- This is covered later.
- Commonly used.
Note: I will place ** before each memory management
scheme.
3.1 No Memory Management
The entire process remains in memory from start to finish and does
not move.
The sum of the memory requirements of all jobs in the system cannot
exceed the size of physical memory.
Monoprogramming
The good old days, when everything was easy.
- No address translation done by the OS (i.e., address translation is
not performed dynamically during execution).
- Either reload the OS for each job (or don't have an OS, which is almost
the same), or protect the OS from the job.
- One way to protect (part of) the OS is to have it in ROM.
- Of course, must have the OS (read-write) data in RAM.
- Can have a separate OS address space that is accessible
only in supervisor mode.
- Might put just some drivers in ROM (BIOS).
- The user employs overlays if the memory needed
by a job exceeds the size of physical memory.
- Programmer breaks program into pieces.
- A root piece is always memory resident.
- The root contains calls to load and unload various pieces.
- Programmer's responsibility to ensure that a piece is already
loaded when it is called.
- No longer used, but we couldn't have gotten to the moon in the
60s without it (I think).
- Overlays have been replaced by dynamic address translation and
other features (e.g., demand paging) that have the system support
logical address sizes greater than physical address sizes.
- Fred Brooks (leader of IBM's OS/360 project and author of
The Mythical Man-Month) remarked that the OS/360 linkage editor was
terrific, especially in its support for overlays, but by the time
it came out, overlays were no longer used.
Running Multiple Programs Without a Memory Abstraction
This can be done via swapping if you have only one program loaded
at a time.
A more general version of swapping is discussed below.
One can also support a limited form of multiprogramming, similar to
MFT (which is described next).
In this limited version, the loader relocates all relative
addresses, thus permitting multiple processes to coexist in physical
memory the way your linker permitted multiple modules in a single
process to coexist.
**Multiprogramming with Fixed Partitions
Two goals of multiprogramming are to improve CPU utilization, by
overlapping CPU and I/O, and to permit short jobs to finish quickly.
- This scheme was used by IBM for system 360 OS/MFT
(multiprogramming with a fixed number of tasks).
- An alternative would have a single input queue instead
of one for each partition.
- With this alternative, if there are no big jobs, one can
use the big partition for little jobs.
- Not a true queue since one would want to remove the
first job suitable for each partition.
- I don't think IBM did this.
- You can think of the input queue(s) as the ready list(s)
with a scheduling policy of FCFS in each partition.
- Each partition was monoprogrammed; the multiprogramming
occurred across partitions.
- The partition boundaries are not movable
(must reboot to move a job).
- So the partitions are of fixed size.
- MFT can have large internal fragmentation,
i.e., wasted space inside a region of memory assigned
to a process.
- Each process has a single segment (i.e., the virtual
address space is contiguous).
We will discuss segments later.
- The physical address space is also contiguous (i.e., the program
is stored as one piece).
- No sharing of memory between processes.
- No dynamic address translation.
- OS/MFT is an example of address translation during load time.
- The system must establish addressability.
- That is, the system must set a register to the location at
which the process was loaded (the bottom of the partition).
Actually this is done with a user-mode instruction, so it could
be called execution time, but it is only done once at the
very beginning.
- This register (often called a base register by IBM) is
part of the programmer-visible register set.
Soon we will meet base/limit registers, which, although
related to the IBM base register above, have the important
difference of being outside the programmer's control or
view.
- Also called relocation.
- In addition, since the linker/assembler allow the use of
addresses as data, the loader itself relocates these at load
time.
- Storage keys are adequate for protection (the IBM method).
- An alternative protection method is base/limit registers,
which are discussed below.
- An advantage of the base/limit scheme is that it is easier to
move a job.
- But MFT didn't move jobs so this disadvantage of storage keys
is moot.
3.2 A Memory Abstraction: Address Spaces
The Notion of an Address Space
Just as the process concept creates a kind of abstract CPU to run
programs, the address space creates a kind of abstract memory for
programs to live in.
This does for processes what you so kindly did for modules in the
linker lab: permit each to believe it has its own memory starting at
address zero.
Base and Limit Registers
Base and limit registers are additional hardware, invisible to the
programmer, that supports multiprogramming by automatically adding
the base address (i.e., the value in the base register) to every
relative address when that address is accessed at run time.
In addition, the relative address is compared against the value in
the limit register; if it is larger, the process is aborted since it
has exceeded its memory bound.
Compare this to your error checking in the linker lab.
The base and limit registers are set by the OS when the job starts.
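The base/limit mechanism just described can be sketched in a few lines of Python. This is only an illustration, not real hardware; the particular base and limit values are made up.

```python
# Sketch of base/limit address translation.  The hardware adds the base
# register to every relative address and aborts the process (modeled
# here by raising an exception) if the address exceeds the limit.

def translate(rel_addr, base, limit):
    """Return the physical address for a relative (virtual) address."""
    if rel_addr >= limit:
        raise MemoryError("address %d exceeds limit %d" % (rel_addr, limit))
    return base + rel_addr

# Suppose the OS loads a 10000-word job at physical address 40000 and
# sets base=40000, limit=10000.
print(translate(1234, 40000, 10000))   # 41234
```

Compare this check against the error checking you did in the linker lab.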
3.2.2 Swapping
Moving an entire process between disk and memory is called
swapping.
Multiprogramming with Variable Partitions
Both the number and size of the partitions change with
time.
Homework: A swapping system eliminates holes by
compaction.
Assume a random distribution of holes and data segments, assume the
data segments are much bigger than the holes, and assume a time to
read or write a 32-bit memory word of 10ns.
About how long does it take to compact 128 MB?
For simplicity, assume that word 0 is part of a hole and the highest
word in memory contains valid data.
3.2.3 Managing Free Memory
MVT Introduces the Placement Question
That is, which hole (partition) should one choose?
- There are various algorithms for choosing a hole including best
fit, worst fit, first fit, circular first fit, quick fit, and
Buddy.
- Best fit doesn't waste big holes, but does leave slivers and
is expensive to run.
- Worst fit avoids slivers, but eliminates all big holes so
a big job will require compaction.
Even more expensive than best fit (best fit stops if it
finds a perfect fit).
- Quick fit keeps lists of some common sizes (but has other
problems, see Tanenbaum).
- Buddy system
- Round request to next highest power of two (causes
internal fragmentation).
- Look in list of blocks this size (as with quick fit).
- If list empty, go higher and split into buddies.
- When returning coalesce with buddy.
- Do splitting and coalescing recursively, i.e., keep
coalescing until you can't and keep splitting until successful.
- See Tanenbaum (look in the index) or an algorithms
book for more details.
- A current favorite is circular first fit, also known as next fit.
- Use the first hole that is big enough (first fit) but start
looking where you left off last time.
- Doesn't waste time constantly trying to use small holes that
have failed before, but does tend to use many of the big holes,
which can be a problem.
- Buddy comes with its own implementation.
How about the others?
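The placement algorithms above can be sketched directly, assuming the free holes are kept as a list of sizes in memory order (a simplification; a real allocator tracks addresses too). Each function returns the index of the chosen hole, or None if no hole is big enough.

```python
# Hole-selection (placement) sketches.  `holes` is a list of hole sizes
# in memory order; `request` is the size needed.

def first_fit(holes, request):
    for i, h in enumerate(holes):
        if h >= request:
            return i
    return None

def best_fit(holes, request):
    # Smallest hole that is big enough (expensive: scans all holes).
    candidates = [i for i, h in enumerate(holes) if h >= request]
    return min(candidates, key=lambda i: holes[i]) if candidates else None

def worst_fit(holes, request):
    # Biggest hole, to avoid leaving slivers.
    candidates = [i for i, h in enumerate(holes) if h >= request]
    return max(candidates, key=lambda i: holes[i]) if candidates else None

def next_fit(holes, request, start):
    # First fit, but resume the scan where the previous search stopped.
    n = len(holes)
    for k in range(n):
        i = (start + k) % n
        if holes[i] >= request:
            return i
    return None
```

For example, with holes of sizes [5, 12, 8] and a request of 6, first fit picks index 1, best fit picks index 2, and worst fit picks index 1. After choosing a hole, the allocator would shrink it by the request size (or remove it if the fit is exact).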
Homework:
Consider a swapping system in which memory consists of the following
hole sizes in memory order: 10K, 4K, 20K, 18K, 7K, 9K, 12K, and 15K.
Which hole is taken for successive segment requests of
- 12K
- 10K
- 9K
for first fit?
Now repeat the question for best fit, worst fit, and next fit.
Memory Management with Bitmaps
Divide memory into blocks and associate a bit with each block, used
to indicate if the corresponding block is free or allocated.
To find a chunk of size N blocks need to find N consecutive
bits indicating a free block.
The only design question is how much memory one bit should represent.
- Big: Serious internal fragmentation.
- Small: Many bits to store and process.
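The search for N consecutive free blocks can be sketched as a single scan of the bitmap. This is an illustration only; a real implementation would operate on packed words, not a Python list.

```python
# Bitmap allocation sketch: bit i is 0 if block i is free, 1 if
# allocated.  To satisfy a request of n blocks, find a run of n
# consecutive 0 bits.

def find_free_run(bitmap, n):
    """Return the start index of the first run of n free blocks, or None."""
    run_start, run_len = 0, 0
    for i, bit in enumerate(bitmap):
        if bit == 0:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len == n:
                return run_start
        else:
            run_len = 0
    return None

bitmap = [1, 1, 0, 1, 0, 0, 0, 1]
print(find_free_run(bitmap, 3))  # 4
```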
Memory Management with Linked Lists
Instead of a bit map, use a linked list of nodes where each node
corresponds to a region of memory either allocated to a process or
still available (a hole).
- Each item on the list gives the length and starting location of
the corresponding region of memory and says whether it is a hole
or a process.
- The items on the list are not taken from the memory to be
used by processes.
- The list is kept in order of starting address.
- Merge adjacent holes when freeing memory.
- Use either a singly or doubly linked list.
Memory Management using Boundary Tags
See Knuth, The Art of Computer Programming vol 1.
- Use the same memory for list items as for processes.
- Don't need an entry in linked list for the blocks in use, just
the avail blocks are linked.
- The avail blocks themselves are linked, not a node that points to
an avail block.
- When a block is returned, we can look at the boundary tag of the
adjacent blocks and see if they are avail.
If so they must be merged with the returned block.
- For the blocks currently in use, just need a hole/process bit at
each end and the length.
Keep this in the block itself.
- We do not need to traverse the list when returning a block; we
can use boundary tags to find the predecessor.
MVT also introduces the Replacement Question
That is, which victim should we swap out?
This is an example of the suspend arc mentioned in process
scheduling.
We will study this question more when we discuss
demand paging in which case
we swap out only part of a process.
Considerations in choosing a victim
- Cannot replace a job that is pinned,
i.e. whose memory is tied down. For example, if Direct Memory
Access (DMA) I/O is scheduled for this process, the job is pinned
until the DMA is complete.
- Victim selection is a medium term scheduling decision
- A job that has been in a wait state for a long time is a good
candidate.
- Often choose as a victim a job that has been in memory for a long
time.
- Another question is how long should it stay swapped out.
- For demand paging, where swapping out a page is not as drastic
as swapping out a job, choosing the victim is an important
memory management decision and we shall study several policies.
Notes:
- The schemes presented so far have had two properties:
- Each job is stored contiguously in memory.
That is, the job is contiguous in physical addresses.
- Each job cannot use more memory than exists in the system.
That is, the virtual address space cannot exceed the
physical address space.
- Tanenbaum now attacks the second item.
I wish to do both and start with the first.
- Tanenbaum (and most of the world) uses the term paging
to mean what I call demand paging.
This is unfortunate as it mixes together two concepts.
- Paging (dicing the address space) to solve the placement
problem and essentially eliminate external fragmentation.
- Demand fetching, to permit the total memory requirements of
all loaded jobs to exceed the size of physical memory.
- Most of the world uses the term virtual memory as a synonym for
demand paging.
Again I consider this unfortunate.
- Demand paging is a fine term and is quite descriptive.
- Virtual memory should be used in contrast with
physical memory to describe any virtual to physical address
translation.
** (non-demand) Paging
Simplest scheme to remove the requirement of contiguous physical
memory.
- Chop the program into fixed size pieces called
pages, which are invisible to the user.
Tanenbaum sometimes calls pages
virtual pages.
- Chop the real memory into fixed size pieces called
page frames or
simply frames.
- The size of a page (the page size) = size of a frame (the frame
size).
- Sprinkle the pages into the frames.
- Keep a table (called the page table) having
an entry for each page.
The page table entry or PTE for page p contains
the number of the frame f that contains page p.
Example: Assume a decimal machine with
page size = frame size = 1000.
Assume PTE 3 contains 459.
Then virtual address 3372 corresponds to physical address 459372.
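The decimal example above can be checked with a short sketch. The page table here holds just the single PTE from the example.

```python
# Non-demand paging translation for the decimal example: page size =
# frame size = 1000, so virtual address v splits into page number
# v // 1000 and offset v % 1000.

PAGE_SIZE = 1000
page_table = {3: 459}   # PTE 3 contains frame number 459

def translate(vaddr):
    page, offset = divmod(vaddr, PAGE_SIZE)
    frame = page_table[page]   # a missing entry would be a fault (not handled)
    return frame * PAGE_SIZE + offset

print(translate(3372))  # 459372
```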
Properties of (non-demand) paging (without segmentation).
- The entire process must be memory resident to run.
- No holes, i.e., no external fragmentation.
- If there are 50 frames available and the page size is 4KB, then a
job requiring ≤ 200KB will fit, even if the available frames are
scattered over memory.
- Hence (non-demand) paging is useful.
- Introduces internal fragmentation approximately equal to 1/2 the
page size for every process (really every segment).
- Can have a job unable to run due to insufficient memory and
have some (but not enough) memory available.
This is not called external fragmentation since it is
not due to memory being fragmented.
- Eliminates the placement question. All pages are equally
good since we don't have external fragmentation.
- The replacement question remains.
- Since page boundaries occur at random points and can
change from run to run (the page size can change with no effect
on the program, other than performance), pages are not
appropriate units of memory to use for protection and sharing.
But if all you have is a hammer, everything looks like a nail.
Segmentation, which is discussed later, is sometimes more
appropriate for protection and sharing.
- Virtual address space remains contiguous.
Address translation
- Each memory reference turns into 2 memory references
- Reference the page table
- Reference central memory
- This would be a disaster!
- Hence the MMU caches page#→frame# translations.
This cache is kept near the processor and can be accessed
rapidly.
- This cache is called a translation lookaside buffer (TLB) or
translation buffer (TB).
- For the above example, after referencing virtual address 3372,
there would be an entry in the TLB containing the mapping
3→459.
- Hence a subsequent access to virtual address 3881 would be
translated to physical address 459881 without an extra memory
reference.
Naturally, a memory reference for location 459881 itself would be
required.
The choice of page size is discussed below.
Homework:
Using the page table of Fig. 3.9, give the physical address
corresponding to each of the following virtual addresses.
- 20
- 4100
- 8300
3.3 Virtual Memory (meaning Fetch on Demand)
The idea is to enable a program to execute even if only the active
portion of its address space is memory resident.
That is, we are to swap in and swap out portions of
a program.
In a crude sense this could be called automatic overlays.
Advantages
- Can run a program larger than the total physical memory.
- Can increase the multiprogramming level since the total size of
the active, i.e. loaded, programs (running + ready + blocked) can
exceed the size of the physical memory.
- Since some portions of a program are rarely if ever used, it
is an inefficient use of memory to have them loaded all the
time. Fetch on demand will not load them if not used and will
(hopefully) unload them during replacement if they are not used
for a long time.
- Simpler for the user than overlays or aliasing variables
(older techniques to run large programs using limited memory).
Disadvantages
- More complicated for the OS.
- Execution time less predictable (depends on other jobs).
- Can over-commit memory.
The Memory Management Unit and Virtual to Physical Address Translation
The memory management unit is a piece of hardware in the processor
that translates virtual addresses (i.e., the addresses in the
program) into physical addresses (i.e., real hardware addresses in
the memory).
The memory management unit is normally referred to by its
abbreviation, the MMU.
(The idea of an MMU and virtual to physical address translation
applies equally well to non-demand paging and in olden days the
meaning of paging and virtual memory included that case as well.
Sadly, in my opinion, modern usage of the term paging and virtual
memory are limited to fetch-on-demand memory systems, typically some
form of demand paging.)
** 3.3.1 Paging (Meaning Demand Paging)
The idea is to fetch pages from disk to memory when they are
referenced, hoping to get the most actively used pages in memory.
The choice of page size is discussed below.
Demand paging is very common: more complicated variants,
multilevel paging and paging plus segmentation (both of which
we will discuss), have been used and the former dominates modern
operating systems.
Started by the Atlas system at Manchester University in the 60s
(Fotheringham).
Each PTE continues to contain the frame number if the page is
loaded.
But what if the page is not loaded (i.e., the page exists only on disk)?
The PTE has a flag indicating if the page is loaded (can think of
the X in the diagram on the right as indicating that this flag is
not set).
If the page is not loaded, the location on disk could be kept
in the PTE, but normally it is not
(discussed below).
When a reference is made to a non-loaded page (sometimes
called a non-existent page, but that is a bad name), the system
has a lot of work to do.
We give more details below.
- Choose a free frame, if one exists.
- What if there is no free frame?
Make one!
- Choose a victim frame.
This is the replacement question, about
which we will have more to say later.
- Write the victim back to disk if it is dirty.
- Update the victim's PTE to show that it is not loaded.
- Now we have a free frame.
- Copy the referenced page from disk to the free frame.
- Update the PTE of the referenced page to show that it is
loaded and give the frame number.
- Do the standard paging address translation
(p#,off)→(f#,off).
Really not done quite this way.
- There is always a free frame because ...
- ... there is a daemon active that checks the number of free frames
and, if this is too low, chooses victims and pages them out
(writing them back to disk if dirty).
- The daemon is reactivated when the low water mark is passed and
suspended when the high water mark is passed.
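The fault-handling steps listed above can be sketched as follows. Everything here is a stand-in: the frame pool, the reverse map used to pick a victim, and the `read_page`/`write_page` callbacks modeling disk I/O are all invented for the illustration, and a real handler runs in the kernel with a real replacement policy.

```python
# Minimal sketch of the page-fault steps: find (or make) a free frame,
# copy the page in from disk, and update the PTE.

import collections

free_frames = collections.deque([7])   # frames kept free by the paging daemon
page_table = {}                        # page# -> frame# (PTE, loaded pages only)
dirty = set()                          # pages written since being loaded
resident = {}                          # frame# -> page#, used to pick a victim

def handle_fault(page, read_page, write_page):
    """Load `page` into a frame, evicting a victim if necessary."""
    if not free_frames:                          # no free frame: make one
        victim_frame, victim_page = next(iter(resident.items()))
        if victim_page in dirty:                 # write back only if dirty
            write_page(victim_page)
            dirty.discard(victim_page)
        del page_table[victim_page]              # victim PTE: not loaded
        del resident[victim_frame]
        free_frames.append(victim_frame)
    frame = free_frames.popleft()
    read_page(page)                              # copy the page in from disk
    page_table[page] = frame                     # PTE: loaded, in this frame
    resident[frame] = page
    return frame

print(handle_fault(3, read_page=lambda p: None, write_page=lambda p: None))  # 7
```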
Homework: 9.
3.3.2 Page tables
A discussion of page tables is also appropriate for (non-demand)
paging, but the issues are more acute with demand paging and the
tables can be much larger.
Why?
- The total size of the active processes is no longer limited to
the size of physical memory.
Since the total size of the processes is greater, the total size
of the page tables is greater and hence concerns over the size of
the page table are more acute.
- With demand paging an important question is the choice of a
victim page to page out.
Data in the page table can be useful in this choice.
We must be able to access the page table very quickly since it is
needed for every memory access.
Unfortunate laws of hardware.
- Big and fast are essentially incompatible.
- Big and fast and low cost is hopeless.
So we can't just say, put the page table in fast processor registers,
and let it be huge, and sell the system for $1000.
The simplest solution is to put the page table in main memory.
However, it seems to be both too slow and too big.
- This solution seems too slow since all memory references now
require two references.
- We will soon see how to speed up the references and, for many
programs, eliminate the extra reference by using a TLB.
- This solution seems too big.
- Currently we are considering contiguous virtual
address ranges (i.e., the virtual addresses have no holes).
- One often puts the stack at one end of the virtual address
space and the global (or static) data at the other end and
lets them grow towards each other.
- The virtual memory in between is unused.
- That does not sound so bad.
Why should we care about wasted virtual memory?
- This unused virtual memory can be huge (in address range) and
hence the page table (which is stored in real memory)
will mostly contain unneeded PTEs.
- Works fine if the maximum virtual address size is small, which
was once true (e.g., the PDP-11 of the 1970s) but is no longer the
case.
- A fix is to use multiple levels of mapping.
We will see two examples below:
multilevel page tables and
segmentation plus paging.
Structure of a Page Table Entry
Each page has a corresponding page table entry (PTE).
The information in a PTE is used by the hardware and its format is
machine dependent; thus the OS routines that access PTEs are not portable.
Information set by and used by the OS is normally kept in other OS tables.
(Actually some systems, those with software
TLB reload, do not require hardware access to
the page table.)
The page table is indexed by the page number; thus the page number
is not stored in the table.
The following fields are often present in a PTE.
- The valid bit.
This tells if the page is currently loaded (i.e., is in a
frame).
If set, the frame number is valid.
It is also called the presence or
presence/absence bit.
If a page is accessed whose valid bit is unset, a
page fault is generated by the hardware.
- The frame number.
This field is the main reason for the table.
It gives the virtual to physical address translation.
- The Modified or Dirty bit.
Indicates that some part of the page has been written since it
was loaded.
This is needed if the page is evicted so that the OS can tell if
the page must be written back to disk.
- The Referenced or Used bit.
Indicates that some word in the page has been referenced.
Used to select a victim: unreferenced pages make good victims by
the locality property (discussed below).
- Protection bits.
For example one can mark text pages as execute only.
This requires that boundaries between regions with different
protection are on page boundaries.
Normally many consecutive (in logical address) pages have the
same protection so many page protection bits are redundant.
Protection is more naturally done with
segmentation, but in many
current systems, it is done with paging (since the systems don't
utilize segmentation, even though the hardware supports it).
Why are the disk addresses of non-resident pages not
in the PTE?
On most systems the PTEs are accessed by the hardware automatically
on a TLB miss (see immediately below).
Thus the format of the PTEs is determined by the hardware and
contains only information used on page hits.
Hence the disk address, which is only used on page faults, is not
present.
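As the notes say, the PTE format is machine dependent. The following sketch assumes a completely made-up 32-bit layout (frame number in the high 20 bits, then valid, dirty, and referenced bits, with 3 protection bits at the bottom) just to show how the fields pack together; no real machine is being described.

```python
# Hypothetical 32-bit PTE layout:
#   bits 12-31: frame number    bit 11: valid
#   bit 10: dirty               bit 9: referenced
#   bits 0-2: protection (r/w/x)

VALID, DIRTY, REF = 1 << 11, 1 << 10, 1 << 9

def make_pte(frame, valid=True, dirty=False, referenced=False, prot=0):
    pte = (frame << 12) | (prot & 0x7)
    if valid:
        pte |= VALID
    if dirty:
        pte |= DIRTY
    if referenced:
        pte |= REF
    return pte

def frame_of(pte):
    """Return the frame number; a clear valid bit means a page fault."""
    if not pte & VALID:
        raise LookupError("page fault: valid bit unset")
    return pte >> 12

pte = make_pte(459, dirty=True)
print(frame_of(pte))  # 459
```

Note that there is no disk-address field, matching the discussion above: the hardware never uses a disk address on a page hit.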
3.3.3 Speeding Up Paging
As mentioned above the simple scheme of storing the page table in
its entirety in central memory alone appears to be both too slow and
too big.
We address both these issues here, but note that a second solution
to the size question (segmentation) is discussed later.
Translation Lookaside Buffers (and General Associative Memory)
Note: Tanenbaum suggests that associative memory
and translation lookaside buffer are synonyms.
This is wrong.
Associative memory is a general concept of which translation
lookaside buffer is a specific example.
An associative memory is a content addressable memory.
That is, you access the memory by giving the value of some
field (called the index) and the hardware searches all the records
and returns the record whose index field contains the requested value.
For example
Name | Animal | Mood | Color
======+========+==========+======
Moris | Cat | Finicky | Grey
Fido | Dog | Friendly | Black
Izzy | Iguana | Quiet | Brown
Bud | Frog | Smashed | Green
If the index field is Animal and Iguana is given, the associative
memory returns
Izzy | Iguana | Quiet | Brown
A Translation Lookaside Buffer or TLB is an
associative memory where the index field is the page number.
The other fields include the frame number, dirty bit, valid bit,
etc.
Note that, unlike the situation with the page table, the page
number is stored in the TLB; indeed it is the index field.
A TLB is small and expensive but at least it
is fast.
When the page number is in the TLB, the frame number is returned
very quickly.
On a miss, a TLB reload is performed.
The page number is looked up in the page table.
The record found is placed in the TLB and a victim is discarded (not
really discarded, dirty and referenced bits are copied back to the
PTE).
There is no placement question since all TLB entries are accessed at
the same time and hence are equally suitable.
But there is a replacement question.
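The hit/miss/reload behavior just described can be sketched with a tiny fully-associative TLB. The FIFO replacement policy here is only a stand-in (real TLBs vary), and the sizes and page-table contents are invented.

```python
# Sketch of a tiny fully-associative TLB.  On a hit the frame number is
# returned without touching the page table; on a miss the TLB is
# reloaded from the page table, discarding a victim (FIFO here).

import collections

TLB_SIZE = 4
tlb = collections.OrderedDict()        # page# -> frame#, insertion-ordered
page_table = {3: 459, 4: 460, 9: 17}   # made-up PTEs

def lookup(page):
    if page in tlb:                    # TLB hit: no page-table reference
        return tlb[page]
    frame = page_table[page]           # TLB miss: reload from the page table
    if len(tlb) >= TLB_SIZE:
        tlb.popitem(last=False)        # discard the oldest entry
    tlb[page] = frame
    return frame

print(lookup(3))   # miss, reload, returns 459
print(lookup(3))   # hit, returns 459 with no page-table reference
```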
Homework: 15.
As the size of the TLB has grown, some processors have switched
from single-level, fully-associative, unified TLBs to multi-level,
set-associative, separate instruction and data TLBs.
We are actually discussing caching, but using different
terminology.
- Page frames are a cache for pages (one could say that
central memory is a cache of the disk).
- The TLB is a cache of the page table.
- Also the processor almost surely has a cache (most likely
several) of central memory.
- In all the cases, we have small-and-fast acting as a cache
of big-and-slow.
However, what is big-and-slow in one level of caching can be
small-and-fast in another level.
Software TLB Management
The words above assume that, on a TLB miss, the MMU (i.e., hardware
and not the OS) loads the TLB with the needed PTE and then performs
the virtual to physical address translation.
This implies that the OS need not be concerned with TLB misses.
Some newer systems do this in software, i.e., the
OS is involved.
Multilevel Page Tables
Recall the diagram above showing
the data and stack growing towards each other.
Most of the virtual memory is the unused space between the data and
stack regions.
However, with demand paging this space does not waste real
memory.
But the single large page table does waste real
memory.
The idea of multi-level page tables (a similar idea is used in Unix
i-node-based file systems, which we study later when we do I/O) is
to add a level of indirection and have a page table containing
pointers to page tables.
- Imagine one big page table, which we will (eventually) call
the second level page table.
- We want to apply paging to this large table, viewing it as
simply memory not as a page table.
So we (logically) cut it into pieces each the size of a page.
Note that many (typically 1024 or 2048) PTEs fit in one page so
there are far fewer of these pages than PTEs.
- Now construct a first level page table containing PTEs
that point to the pages produced in the previous bullet.
- This first level PT is small enough to store in memory.
It contains one PTE for every page of PTEs in the 2nd level PT,
which reduces space by a factor of one or two thousand.
- But since we still have the 2nd level PT, we have made the
world bigger not smaller!
- Don't store in memory those 2nd level page tables all of whose
PTEs refer to unused memory.
That is, use demand paging on the (second level) page table!
This idea can be extended to three or more levels.
The largest I know of has four levels.
We will be content with two levels.
Address Translation With a 2-Level Page Table
For a two level page table the virtual address is divided into
three pieces
+-----+-----+-------+
| P#1 | P#2 | Offset|
+-----+-----+-------+
- P#1, page number sub 1, gives the index into the first level
page table.
- Follow the pointer in the corresponding PTE to reach the frame
containing the relevant 2nd level page table.
- P#2 gives the index into this 2nd level page table.
- Follow the pointer in the corresponding PTE to reach the frame
containing the (originally) requested page.
- Offset gives the offset in this frame where the originally
requested word is located.
Do an example on the board
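A board-style example can be sketched as follows. The 10/10/12 bit split of a 32-bit address (4KB pages, 1024 PTEs per table) is an assumption made for the example, and the table contents are invented.

```python
# Two-level translation sketch for an assumed 32-bit virtual address:
# 10 bits P#1, 10 bits P#2, 12 bits offset (4KB pages).

top_level = {1: {2: 459}}   # P#1 -> second-level table (P#2 -> frame#)

def translate(vaddr):
    p1 = vaddr >> 22            # index into the first level page table
    p2 = (vaddr >> 12) & 0x3FF  # index into the chosen 2nd level table
    offset = vaddr & 0xFFF
    frame = top_level[p1][p2]   # two table references on a TLB miss
    return (frame << 12) | offset

vaddr = (1 << 22) | (2 << 12) | 5   # P#1=1, P#2=2, offset=5
print(translate(vaddr))             # 459*4096 + 5 = 1880069
```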
The VAX used a 2-level page table structure, but with some wrinkles
(see Tanenbaum for details).
Naturally, there is no need to stop at 2 levels.
In fact the SPARC has 3 levels and the Motorola 68030 has 4 (and the
number of bits of Virtual Address used for P#1, P#2, P#3, and P#4
can be varied).
More recently, x86-64 also has 4 levels.
Inverted Page Tables
For many systems the virtual address range is much bigger than the
size of physical memory.
In particular, with 64-bit addresses, the range is 2^64
bytes, which is 16 million terabytes.
If the page size is 4KB and a PTE is 4 bytes, a full page table
would be 16 thousand terabytes.
A two level table would still need 16 terabytes for the first level
table, which is stored in memory.
A three level table reduces this to 16 gigabytes, which is still
large and only a 4-level table gives a reasonable memory footprint
of 16 megabytes.
An alternative is to instead keep a table indexed by frame number.
The content of entry f contains the number of the page currently
loaded in frame f.
This is often called a frame table as well as an
inverted page table.
Now there is one entry per frame.
Again using 4KB pages and 4 byte PTEs, we see that the table would
be a constant 0.1% of the size of real memory.
But on a TLB miss, the system must search the
inverted page table, which would be hopelessly slow except that some
tricks are employed.
Specifically, hashing is used.
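The hashing trick can be sketched as follows. The structures are invented for the illustration: one entry per frame, plus a hash table mapping a page number to the short chain of candidate frames, so a lookup examines a few entries instead of scanning the whole frame table.

```python
# Inverted page table sketch: frame_table has one entry per frame and
# hash chains make the page# -> frame# search fast.

NFRAMES = 8
frame_table = [None] * NFRAMES     # frame# -> page# currently loaded there
hash_chain = {}                    # hash(page#) -> list of candidate frames

def load(page, frame):
    """Record that `page` now occupies `frame`."""
    frame_table[frame] = page
    hash_chain.setdefault(page % NFRAMES, []).append(frame)

def frame_of(page):
    """Search only the short hash chain, not the whole frame table."""
    for f in hash_chain.get(page % NFRAMES, []):
        if frame_table[f] == page:
            return f
    raise LookupError("page fault")

load(3, 5)
load(11, 2)           # 11 % 8 == 3, so this collides with page 3's chain
print(frame_of(11))   # 2
```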
Also it is often convenient to have an inverted table as we will
see when we study global page replacement algorithms.
Some systems keep both page and inverted page tables.