Therefore, to get the 16K bytes out of the disk, it takes 32 sectors * ~21.7 microseconds/sector ≈ 694 microseconds. Note that the bandwidth of the disk itself is therefore about 23 MB/s. But we also have to pay the DMA overhead, which is 16 KB / (4 MB/s) = 3.9 ms.
So, the total time is 3.9 ms + 0.694 ms = 4.59 ms. The transfer time is dominated by the DMA overhead, and the throughput is slightly less than the 4 MB/s rate of the DMA controller (16 KB / 4.59 ms ≈ 3.49 MB/s).
Moral of the story: When the disk head does not have to move, we transfer data at the maximum speed that the disk controller electronics allow.
Minimum throughput: This will occur when the disk head has to travel the maximum distance to get to the data. In this case, at worst, it has to seek across the entire platter and then wait for an entire rotation before it gets to the beginning of the file. The time to do this is the worst-case seek plus a full rotation (together about 61.4 ms here), plus the 4.59 ms transfer-plus-DMA cost computed above, for a total of roughly 66 ms.
The DMA overhead is the same as above, so the total time is dominated by the cost of seeking and waiting for the head to reach the right sector on the track. The result is 16 KB / 66 ms ≈ 247 KB/sec (about 7% of the sequential throughput).
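A minimal C++ sketch of both calculations. The 61.4 ms figure for the worst-case seek plus rotation is simply backed out of the 66 ms total, since the individual seek and rotation times are not restated here; small rounding differences from the figures above are expected.

#include <cstdio>

int main() {
    const double transferMs = 0.694;             // 32 sectors * ~21.7 us/sector (16 KB)
    const double dmaMs = 16.0 / 4096.0 * 1000;   // 16 KB at the DMA's 4 MB/s = 3.9 ms
    const double seekRotMs = 61.4;               // assumed worst-case seek + full rotation

    double bestMs  = dmaMs + transferMs;         // ~4.59 ms (head does not move)
    double worstMs = bestMs + seekRotMs;         // ~66 ms (maximum seek + rotation)

    // 16 KB / time, printed in the same decimal units the text uses.
    std::printf("sequential: %.2f ms -> %.2f MB/s\n", bestMs,  16.0 / bestMs);
    std::printf("random:     %.2f ms -> %.0f KB/s\n", worstMs, 16.0 / worstMs * 1000);
    return 0;
}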
Moral of the story: When the head has to move, the throughput drops considerably; the effect here leaves only 7% of the controller bandwidth on a read or write (and with a better DMA controller the peak bandwidth would be higher, so a long seek and rotation would reduce the bandwidth to about 1% of peak).
Exercise: Compute the same quantities if the sectors were not contiguous. You will see that even 7% of the maximum bandwidth may not be reachable if the sectors are scattered throughout the disk. Sequential allocation is good!!
Solution:
The best place would be the middle track of the disk. The reason is that under an elevator scheduling policy, the middle track is visited by the disk head twice in one round trip, with the two visits spaced half a sweep apart, the most even spacing any track can get. This minimizes the longest wait between successive visits, which is the most desirable condition for a track that will be heavily accessed.
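A small sketch of this argument in C++, assuming the head sweeps across the tracks at constant speed with positions normalized to [0, 1], so one round trip takes 2 time units: a track at position x is passed at times x and 2 - x, and the middle track (x = 0.5) has the smallest worst-case wait between visits.

#include <cstdio>
#include <algorithm>

int main() {
    // Head sweeps 0 -> 1 -> 0 at constant speed; one round trip = 2 time units.
    // A track at position x is visited at times x and 2 - x in each cycle.
    for (double x : {0.0, 0.1, 0.25, 0.5, 0.75, 0.9, 1.0}) {
        double gap1 = (2 - x) - x;   // wait from the first visit to the second
        double gap2 = 2 - gap1;      // wait from the second visit to the next cycle
        double worst = std::max(gap1, gap2);
        std::printf("track %.2f: gaps %.2f / %.2f, worst wait %.2f\n",
                    x, gap1, gap2, worst);
    }
    return 0;
}

Running this shows the worst wait is 2 (a full round trip) for the edge tracks and only 1 (half a round trip) for the middle track.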
Solution:
Using 16-bit numbers to express the clusters allows only up to 32K clusters per disk (because negative numbers are needed to express the end-of-file and end-of-table symbols). For a large disk the cluster size, which is the unit of allocation for files, must become excessively large (e.g., a cluster must be 64 KB in a 2 GB disk partition). This results in tremendous internal fragmentation for files. It also limits the size of a disk partition using FAT to only 2 GB (32K clusters * 64 KB).
To lift these restrictions, we use 32-bit numbers to express the clusters. However, one must be careful, because this arrangement potentially allows up to 2G entries in the FAT table, each consisting of 4 bytes! Therefore, we need to store the actual size of the FAT table somewhere on disk, and use only the minimum number of entries that the desired cluster size requires. For example, a 4 GB disk can be subdivided into 8M clusters of 512 bytes each. The table required would then be 8M * 4 bytes = 32 MB, less than 1% of the disk size.
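A small C++ sketch of this sizing arithmetic, covering both the 16-bit limit and the 32-bit example above (the function name and parameters are illustrative, not part of any real FAT implementation):

#include <cstdio>
#include <cstdint>

// One FAT entry per cluster; table size = number of clusters * entry width.
static void fatSize(uint64_t partBytes, uint64_t clusterBytes, uint64_t entryBytes) {
    uint64_t clusters   = partBytes / clusterBytes;
    uint64_t tableBytes = clusters * entryBytes;
    std::printf("%llu MB partition, %llu-byte clusters -> %llu clusters, %llu KB FAT\n",
                (unsigned long long)(partBytes >> 20),
                (unsigned long long)clusterBytes,
                (unsigned long long)clusters,
                (unsigned long long)(tableBytes >> 10));
}

int main() {
    fatSize(2ULL << 30, 64 << 10, 2); // 16-bit FAT at its limit: 32K clusters of 64 KB
    fatSize(4ULL << 30, 512, 4);      // 32-bit FAT: 8M clusters of 512 B, 32 MB table
    return 0;
}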
Solution:
Both! A file is still allocated in disk block units, which means the last block in a file would be (on average) half empty: internal fragmentation. Additionally, contiguous allocation results in external fragmentation, because there may be many small unallocated fragments between files that cannot be used to allocate a file whose size is greater than the size of any single fragment, but less than the total size of the free space.
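A small C++ illustration of both effects (all sizes are made up for the example):

#include <cstdio>
#include <algorithm>
#include <numeric>
#include <vector>

int main() {
    // Internal fragmentation: a 10,300-byte file in 4 KB blocks wastes the
    // unused tail of its last block.
    int blockSize = 4096, fileSize = 10300;
    int blocks = (fileSize + blockSize - 1) / blockSize;  // 3 blocks
    std::printf("internal waste: %d bytes in the last block\n",
                blocks * blockSize - fileSize);           // 1988 bytes

    // External fragmentation: 24 KB free in total, but no single free extent
    // can hold a contiguous 10 KB file.
    std::vector<int> freeExtents = {8192, 8192, 8192};    // bytes, non-adjacent
    int request = 10240;
    int total   = std::accumulate(freeExtents.begin(), freeExtents.end(), 0);
    int largest = *std::max_element(freeExtents.begin(), freeExtents.end());
    std::printf("free: %d bytes total, largest extent %d -> %d-byte file %s\n",
                total, largest, request,
                largest >= request ? "fits" : "does not fit");
    return 0;
}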
Solution 1:
List Game's synchronization and state variables here:
Mutex lock;
Cond donePlaying;
public:
    static const int RED = 0;
    static const int GREEN = 1;
    static const int BLUE = 2;
    static const int NCOLORS = 3;
private:
    int waiting[NCOLORS]; // number of threads of each color waiting for a turn
    int previousColor;    // the color that played last
    bool busy;            // true while someone is taking a turn
public:
Game::Game() // constructor
{
    waiting[RED] = waiting[GREEN] = waiting[BLUE] = 0;
    previousColor = BLUE; // (BLUE + 1) % NCOLORS == RED, so RED may start
    busy = false;
}
public:
void Game::myTurn(int color)
{
    lock.acquire();
    waiting[color]++;
    while (iShouldWait(color)) {
        donePlaying.wait(&lock);
    }
    busy = true;
    waiting[color]--;
    previousColor = color;
    lock.release();
}
public:
void Game::doneTurn()
{
    lock.acquire();
    busy = false;
    donePlaying.broadcast(&lock); // wake everyone; each waiter re-checks iShouldWait()
    lock.release();
}
private:
bool Game::iShouldWait(int color)
{
    // Caller must hold lock.
    if (busy) {
        return true;
    }
    if (color == (previousColor + 1) % NCOLORS) {
        return false; // I am the next color in the rotation
    }
    if ((color == (previousColor + 2) % NCOLORS) &&
        (waiting[(previousColor + 1) % NCOLORS] == 0)) {
        return false; // the color ahead of me has no one waiting
    }
    if ((color == (previousColor + 3) % NCOLORS) &&
        (waiting[(previousColor + 1) % NCOLORS] == 0) &&
        (waiting[(previousColor + 2) % NCOLORS] == 0)) {
        assert(color == previousColor); // (previousColor + 3) % NCOLORS == previousColor
        return false; // no other color is waiting, so the same color may go again
    }
    return true;
}
Solution 2:
List Game's synchronization and state variables here:
Mutex lock;
Cond No_One_Play;
bool Busy;          // true while someone is taking a turn
Queue q[3];         // one FIFO queue of ticket ids per color
int Id_Count;       // next ticket id to hand out
int nextId;         // id of the thread designated to go next (-1: anyone)
int previous_color; // the color that played last (-1: no one yet)
Game::Game() // constructor
{
    Busy = false;
    // q[0], q[1], q[2] start out as empty queues
    Id_Count = 0;
    nextId = -1;         // -1 means no one in particular is designated next
    previous_color = -1; // -1 means no one has played yet
}
void Game::myTurn(int color)
{
    int myId;
    lock.acquire();
    myId = Id_Count; // take a ticket
    Id_Count++;
    q[color].enque(myId);
    while (Busy || ((nextId >= 0) && (myId != nextId))) {
        No_One_Play.wait(&lock);
    }
    q[color].deque(); // remove my ticket from my color's queue
    Busy = true;
    previous_color = color;
    lock.release();
}
void Game::doneTurn()
{
    lock.acquire();
    Busy = false;
    // Compute who should go next, if anyone is waiting: first the color
    // after mine, then the one after that, and finally my own color again.
    nextId = -1;
    int next_color = (previous_color + 1) % 3;
    if (!q[next_color].isEmpty()) {
        nextId = q[next_color].front();
    } else {
        next_color = (next_color + 1) % 3;
        if (!q[next_color].isEmpty()) {
            nextId = q[next_color].front();
        } else if (!q[previous_color].isEmpty()) {
            nextId = q[previous_color].front();
        }
    }
    No_One_Play.broadcast(&lock);
    lock.release();
}
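Either solution is used the same way: each player thread brackets its turn with myTurn()/doneTurn(). The driver below is a hypothetical sketch, not part of the original handout; it assumes the Game class above compiles against a thread library (std::thread is used here purely for illustration).

#include <thread>
#include <vector>

Game game; // either solution's Game; both expose myTurn()/doneTurn()

void player(int color) {
    for (int round = 0; round < 5; round++) {
        game.myTurn(color); // blocks until this color is allowed to play
        // ... play one turn ...
        game.doneTurn();    // hand the turn to the next color in the rotation
    }
}

int main() {
    std::vector<std::thread> players;
    for (int color = 0; color < 3; color++) // RED, GREEN, BLUE
        players.emplace_back(player, color);
    for (auto& t : players)
        t.join();
    return 0;
}

The two solutions differ mainly in how they pick the next player: Solution 1 re-derives eligibility from the waiting counts each time a waiter wakes, while Solution 2 has the finishing thread designate a specific ticket id, which keeps threads of the same color in FIFO order.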