char a[100];

main(int argc, char **argv)
{
    int d;
    static double b;
    char *s = "boo", *p;

    p = malloc(300);
    return 0;
}
Identify the segment in which each variable resides and indicate whether the variable is private to the thread or shared among threads. Be careful.
The array a, the static variable b, and the string constant "boo" are all in the data segment and are shared across threads. The arguments argc and argv are in the stack segment and are private to the thread. The automatic variables d, s, and p are also in the stack segment and are private to the thread.
Note that the variable p itself is in the stack segment (private), but the object it points to is in the data segment, which is a shared region of the address space (hence the "be careful" warning). The contents of s consist of the address of the string "boo", which happens to be in the data segment (shared).
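For reference, here is the same snippet annotated with the placements described above (following this answer's convention of grouping the malloc'd object with the data segment):

char a[100];                  /* data segment, shared                   */

main(int argc, char **argv)   /* argc, argv: stack segment, private     */
{
    int d;                    /* stack segment, private                 */
    static double b;          /* data segment, shared                   */
    char *s = "boo", *p;      /* s and p themselves: stack, private;
                                 the string "boo": data segment, shared */
    p = malloc(300);          /* the 300-byte object: data segment,
                                 shared (p itself remains private)      */
    return 0;
}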
False. One of the advantages of paging is that it does not result in external fragmentation: physical memory is parceled out, and the address space grows accordingly (e.g., for the stack and heap), at the granularity of pages.
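For example, with 4 KB pages a request to grow the heap by 10 KB is satisfied by any three free page frames (3 * 4 KB = 12 KB), wherever they happen to sit in physical memory, so no unusable external holes are created; the only waste is the unused 2 KB inside the last page, which is internal fragmentation.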
This solution is after Andreas Reifert, a distinguished student in the class of Fall 1999.
(x) A yK  - means: in step x we allocate yK of memory
(x) D (y) - means: in step x we deallocate the memory allocated in step y
Best fit outperforms:
(1) A 5K
(2) A 8K
(3) A 3K - buddy cannot do this
(4) D (1)
(5) D (3)
(6) A 3K - first and worst take the 5K part, best the 3K part
(7) A 5K - first and worst cannot do this, best can
Worst fit outperforms:
(1) A 3K
(2) A 8K
(3) A 5K - buddy cannot do this
(4) D (1)
(5) D (3)
(6) A 2K - first and best take the 3K part, worst the 5K part
(7) A 3K - first and best take the 5K part, worst a 3K part
(8) A 3K - first and best cannot do this, worst can
First fit outperforms:
(1) A 4K
(2) A 2K
(3) A 2K
(4) A 3K
(5) A 5K - buddy cannot do this
(6) D (1)
(7) D (3)
(8) D (5)
(9) A 1K - best takes the 2K part, worst the 5K part, first the 4K part
(10) A 3K - best takes the 4K part, worst a 4K part, first the 3K part
(11) A 2K - best takes the 5K part, worst the 4K part, first the 2K part
(12) A 5K - best and worst cannot do this, first can
Buddy outperforms:
(1) A 2K
(2) A 4K
(3) A 8K
(4) D (1) - only buddy can merge the 2K with the neighbouring 2K to a 4K part
(5) A 4K - best, worst and first cannot do this, buddy can
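To make the policy differences in the traces above concrete, here is a minimal illustrative sketch (not part of the original solution): it shows how first, best, and worst fit choose a hole from the current free list, and the power-of-two rounding that explains why buddy cannot satisfy requests such as the 3K one in step (3).

/* Illustrative only: pick a free hole under each policy.
   holes[] holds the current free-hole sizes in address order; each
   function returns the index of the chosen hole, or -1 if none fits. */
int first_fit(int holes[], int n, int req)
{
    for (int i = 0; i < n; i++)
        if (holes[i] >= req)
            return i;                 /* first hole that is large enough */
    return -1;
}

int best_fit(int holes[], int n, int req)
{
    int best = -1;
    for (int i = 0; i < n; i++)
        if (holes[i] >= req && (best < 0 || holes[i] < holes[best]))
            best = i;                 /* smallest hole that fits */
    return best;
}

int worst_fit(int holes[], int n, int req)
{
    int worst = -1;
    for (int i = 0; i < n; i++)
        if (holes[i] >= req && (worst < 0 || holes[i] > holes[worst]))
            worst = i;                /* largest hole that fits */
    return worst;
}

/* Buddy rounds every request up to the next power of two, so a 3K
   request actually consumes a 4K block. */
unsigned buddy_size(unsigned req)
{
    unsigned s = 1;
    while (s < req)
        s <<= 1;
    return s;
}

For example, with the free holes {5K, 3K} left after steps (4) and (5) of the best-fit trace, a 3K request is placed in the 3K hole by best fit but in the 5K hole by first and worst fit, which is exactly why only best fit can still serve the 5K request in step (7).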
Each partition will need the following data structure:
typedef struct
{
    unsigned base_address;         // starting address of the partition
    unsigned size;                 // size of the partition in bytes
    enum {free, in_use} status;    // status of the partition
} PartitionRecord;
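As a baseline illustration (an assumption of this write-up, using the PartitionRecord type above), a plain array kept in base-address order can already answer the queries discussed below with linear scans; the point of the discussion that follows is to do better than this.

/* Hypothetical baseline: t[] holds all partitions in base-address order. */

/* Best fit: index of the smallest free partition with size >= req, or -1. */
int find_best_fit(PartitionRecord t[], int n, unsigned req)
{
    int best = -1;
    for (int i = 0; i < n; i++)
        if (t[i].status == free && t[i].size >= req &&
            (best < 0 || t[i].size < t[best].size))
            best = i;
    return best;
}

/* Because the array is in base-address order, the neighbors of t[i] are
   t[i-1] and t[i+1]; a freed partition can be merged with whichever of
   them is free and physically adjacent. */
int mergeable_with_left(PartitionRecord t[], int i)
{
    return i > 0 && t[i-1].status == free &&
           t[i-1].base_address + t[i-1].size == t[i].base_address;
}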
Then we need to implement a data structure that contains the partition records such that:
* partition records are sorted by size
* partition records can be inserted into the data structure efficiently
* partition records can be removed from the data structure efficiently
* the smallest partition larger than a particular size can be located efficiently
* a partition can be checked against its adjacent neighbors so that it can be merged with them (or a subset thereof) if they are free
There is no known data structure that can achieve all these feats simultaneously. There are, however, two reasonable alternatives:
4-bit segment number | 12-bit page number | 16-bit offset
Here are the relevant tables (all values in hexadecimal):
Segment Table:
  0 -> Page Table A
  1 -> Page Table B
  x    (rest invalid)

Page Table A:
  0 -> CAFE
  1 -> DEAD
  2 -> BEEF
  3 -> BA11
  x    (rest invalid)

Page Table B:
  0 -> F000
  1 -> D8BF
  x    (rest invalid)
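To illustrate how the tables are used (these example addresses are made up for illustration; they are not necessarily the ones the original question asks about): virtual address 0x00021F00 has segment 0, page 2, and offset 0x1F00; segment 0 selects Page Table A, page 2 maps to frame BEEF, so the physical address is 0xBEEF1F00. Likewise, 0x1001ABCD has segment 1, page 1, and offset 0xABCD; segment 1 selects Page Table B, page 1 maps to frame D8BF, giving physical address 0xD8BFABCD.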
We need to analyze memory and time requirements of paging schemes in order to make a decision. Average process size is considered in the calculations below.
1 Level Paging
Since we have 2^23 pages in each virtual address space and we use 4 bytes per page table entry, the size of the page table will be 2^23 * 2^2 = 2^25 bytes (32 MB). This is 1/256 of the process's own memory space, so it is quite costly.
2 Level Paging
The address would be divided up as 12 | 11 | 13, since we want page-table pages to fit into one page and we also want to divide the bits roughly equally.
Since the process' size is 8GB = 2^33 B, I assume what this means is that the total size of all the distinct pages that the process accesses is 2^33 B. Hence, this process accesses 2^33 / 2^13 = 2^20 pages. The bottom level of the page table then holds 2^20 references. We know the size of each bottom level chunk of the page table is 2^11 entries. So we need 2^20 / 2^11 = 2^9 of those bottom level chunks.
The total size of the page table is then:
    1 * 2^12 * 4         // size of the outer page table
  + 2^9 * 2^11 * 4       // total size of the inner (bottom-level) tables
  = 2^20 * (2^-6 + 4)    ~4 MB
3 Level Paging
For 3-level paging we can divide up the address as follows:
8 | 8 | 7 | 13
Again using the same reasoning as above we need 2^20/2^7 = 2^13 level 3 page table chunks. Each level 2 page table chunk references 2^8 level 3 page table chunks. So we need 2^13/2^8 = 2^5 level-2 tables. And, of course, one level-1 table.
The total size of the page table is then:
    1 * 2^8 * 4          // size of the outer page table
  + 2^5 * 2^8 * 4        // total size of the level-2 tables
  + 2^13 * 2^7 * 4       // total size of the innermost (level-3) tables
  ~4 MB
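As a quick sanity check (illustrative only, not part of the original solution), the following small program recomputes the three totals from the splits above, using the same assumptions: 4-byte entries, a 2^23-page virtual address space, and a process that touches 2^33 bytes, i.e. 2^20 pages.

#include <stdio.h>

int main(void)
{
    const unsigned long long entry = 4;            /* bytes per page-table entry */
    const unsigned long long touched = 1ULL << 20; /* pages the process uses     */

    /* 1-level: one flat table covering all 2^23 virtual pages. */
    unsigned long long one = (1ULL << 23) * entry;

    /* 2-level, split 12 | 11 | 13: one outer table (2^12 entries) plus
       2^20 / 2^11 = 2^9 bottom-level tables of 2^11 entries each. */
    unsigned long long two = (1ULL << 12) * entry
                           + (touched >> 11) * (1ULL << 11) * entry;

    /* 3-level, split 8 | 8 | 7 | 13: one level-1 table, 2^5 level-2
       tables, and 2^20 / 2^7 = 2^13 level-3 tables. */
    unsigned long long three = (1ULL << 8) * entry
                             + (1ULL << 5) * (1ULL << 8) * entry
                             + (touched >> 7) * (1ULL << 7) * entry;

    printf("1-level: %llu bytes (~%llu MB)\n", one, one >> 20);
    printf("2-level: %llu bytes (~%llu MB)\n", two, two >> 20);
    printf("3-level: %llu bytes (~%llu MB)\n", three, three >> 20);
    return 0;
}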
8-bit | 4-bit | 8-bit | 12-bit
We use a 3-level page table, such that the first 8 bits index the first level, and so on. Physical addresses are 44 bits and there are 4 protection bits per page.
Since physical addresses are 44 bits and page size is 4K, the page frame number occupies 32 bits. Taking the 4 protection bits into account, each entry of the level-3 page table takes (32+4) = 36 bits. Rounding up to make entries byte (word) aligned would make each entry consume 40 (64) bits or 5 (8) bytes. For a 256 entry table, we need 1280 (2048) bytes.
The top-level page table should not assume that 2nd level page tables are page-aligned. So, we store full physical addresses there. Fortunately, we do not need control bits. So, each entry is at least 44 bits (6 bytes for byte-aligned, 8 bytes for word-aligned). Each top-level page table is therefore 256*6 = 1536 bytes (256 * 8 = 2048 bytes).
Trying to take advantage of the 256-entry alignment to reduce entry size is probably not worth the trouble. Doing so would be complex; you would need to write a new memory allocator that guarantees such alignment. Further, we cannot quite fit a table into a 1024-byte aligned region (44-10 = 34 bits per address, which would require more than 4 bytes per entry), and rounding the size up to the next power of 2 would not save us any size over just storing pointers and using the regular allocator.
Similarly, each entry in the 2nd level page table is a 44-bit physical pointer, i.e. 6 bytes (8 bytes) under byte (word) alignment. A 16-entry table is therefore 96 (128) bytes. So the space required is 1536 (2048) bytes for the top-level page table + 96 (128) bytes for one second-level page table + 1280 (2048) bytes for one third-level page table = 2912 (4224) bytes. Since the process fits exactly into 16 pages, there is no memory wasted by internal fragmentation.
So the space required is 1536 (2048) bytes for the top-level page table + 3 * 96 (3 * 128) bytes for 3 second-level page tables + 3 * 1280 (3 * 2048) bytes for 3 third-level page tables = 5664 (8576) bytes.
As the code, data, and stack segments of the process fit exactly into 12, 150, and 16 pages respectively, there is no memory wasted by internal fragmentation.
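For a quick check of the arithmetic, here is a tiny illustrative program (not part of the original solution) that recomputes both totals from the entry sizes given above.

#include <stdio.h>

/* Illustrative recomputation of the totals above. Entry sizes come from
   the text: 5 (8) bytes per level-3 entry and 6 (8) bytes per level-1
   and level-2 pointer entry, for byte (word) alignment respectively. */
int main(void)
{
    for (int word = 0; word <= 1; word++) {
        int l1 = 256 * (word ? 8 : 6);  /* one 256-entry top-level table    */
        int l2 = 16  * (word ? 8 : 6);  /* one 16-entry second-level table  */
        int l3 = 256 * (word ? 8 : 5);  /* one 256-entry third-level table  */
        printf("%s: single region %d bytes, code/data/stack %d bytes\n",
               word ? "word-aligned" : "byte-aligned",
               l1 + l2 + l3,            /* one table at each level          */
               l1 + 3 * l2 + 3 * l3);   /* three level-2 and level-3 tables */
    }
    return 0;
}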
Solution TBD.
Solution TBD.