Start Lecture #25
Many seemingly simple I/O operations are actually composed of sub-actions. For example, deleting a file on an i-node based system (really this means deleting the last link to the i-node) requires removing the entry from the directory, placing the i-node on the free list, and placing the file blocks on the free list.
What happens if the system crashes during a delete and some, but not all three, of the above actions occur?
A journaling file system prevents these problems by using an idea from database theory, namely transaction logs. To ensure that the multiple sub-actions are all performed, the larger I/O operation (delete in the example) is broken into 3 steps: first write a log entry describing the sub-actions to be performed, then perform the sub-actions themselves, and finally mark the log entry as complete.
After a crash, the log (called a journal) is examined and if there are pending sub-actions, they are done before the system is made available to users.
Since sub-actions may be repeated (once before the crash, and once after), it is required that they all be idempotent (applying the action twice is the same as applying it once).
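The ordering can be sketched in C as follows; this is a minimal illustration, not any real file system's code, and every type and helper name is hypothetical.

```c
/* A minimal sketch, not any real file system's code; all types and
 * helpers are hypothetical.  The essential idea: log first, then do the
 * idempotent sub-actions, then mark the transaction complete. */
struct inode;
struct dirent;

void journal_write(const char *tag, struct inode *ip);   /* append a log record */
void remove_directory_entry(struct dirent *de);
void free_inode(struct inode *ip);
void free_file_blocks(struct inode *ip);

void journaled_delete(struct inode *ip, struct dirent *de)
{
    journal_write("begin delete", ip);   /* step 1: record the intended sub-actions */
    remove_directory_entry(de);          /* step 2: the three idempotent sub-actions */
    free_inode(ip);
    free_file_blocks(ip);
    journal_write("end delete", ip);     /* step 3: mark the transaction complete */
}
```

After a crash, any "begin delete" record without a matching "end delete" causes the sub-actions to be redone, which is safe precisely because they are idempotent.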
Some history.
A single operating system needs to support a variety of file systems. The software support for each file system would have to handle all the I/O system calls the OS defines.
Not surprisingly the various file systems often have a great deal in common and large parts of the implementations would be essentially the same. Thus for software engineering reasons one would like to abstract out the common part.
This was done by Sun Microsystems when they introduced NFS, the Network File System, for Unix, and by now most unix-like operating systems have adopted this idea. The common code is called the VFS layer and is illustrated on the right.
I consider the idea of VFS to be good software engineering, rather than OS design. The details are naturally OS specific.
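The flavor of the common-code idea can be sketched in C with a table of function pointers; this is only an illustration, not the actual VFS interface of any particular OS, and the operation names are invented.

```c
#include <stddef.h>
#include <sys/types.h>

/* One table of operations per file system type; the names here are
 * invented and much smaller than a real VFS interface. */
struct vfs_ops {
    int     (*open)(const char *path, int flags);
    ssize_t (*read)(int fd, void *buf, size_t count);
    ssize_t (*write)(int fd, const void *buf, size_t count);
    int     (*unlink)(const char *path);
};

/* Hypothetical tables that concrete file systems would register. */
extern const struct vfs_ops ext2_ops;
extern const struct vfs_ops fat_ops;

/* The system-call layer is written once, against the generic interface,
 * and simply dispatches to whichever file system owns the file. */
ssize_t vfs_read(const struct vfs_ops *ops, int fd, void *buf, size_t count)
{
    return ops->read(fd, buf, count);
}
```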
Since I/O operations can dominate the time required for complete user processes, considerable effort has been expended to improve the performance of these operations.
All general purpose file systems use a (non-demand) paging algorithm for file storage (read-only systems, which often use contiguous allocation, are the major exception). Files are broken into fixed-size pieces, called blocks, that can be scattered over the disk. Note that although this is paging, it is not called paging (and may not have an explicit page table).
Actually, it is more complicated since various optimizations are performed to try to have consecutive blocks of a single file stored consecutively on the disk. This is discussed below.
Note that all the blocks of the file are stored on the disk, i.e., it is not demand paging.
One can imagine systems that do utilize demand-paging-like algorithms for disk block storage. In such a system only some of the file blocks would be stored on disk with the rest on tertiary storage (some kind of tape). Perhaps NASA does this with their huge datasets.
We discussed a similar question before when studying page size.
There are two conflicting goals, performance and efficiency.
Every disk access incurs a startup time (seek plus rotational latency) required before any bytes are transferred. This fixed cost favors a large block size. On the other hand, a large block size wastes space (internal fragmentation) when files are small, which favors a small block size.
For some systems, the vast majority of the space used is consumed by the very largest files. For example, it would be easy to have a few hundred gigabytes of video. In that case the space efficiency of small files is largely irrelevant since most of the disk space is used by very large files.
Typical block sizes today are 4-8KB.
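A back-of-the-envelope calculation shows why startup time favors large blocks. The numbers below (10 ms startup, 100 MB/s transfer rate) are assumptions for illustration, not measurements.

```c
#include <stdio.h>

int main(void)
{
    const double startup_ms = 10.0;                /* assumed seek + rotational latency */
    const double rate_bytes_per_ms = 100e6 / 1e3;  /* assumed 100 MB/s transfer rate    */
    const int sizes[] = { 512, 4096, 65536 };

    for (int i = 0; i < 3; i++) {
        double transfer_ms = sizes[i] / rate_bytes_per_ms;
        double total_ms = startup_ms + transfer_ms;
        printf("%6d-byte block: %7.3f ms total, %.1f%% of it startup\n",
               sizes[i], total_ms, 100.0 * startup_ms / total_ms);
    }
    return 0;
}
```

Even with a 64KB block the startup time still dominates the transfer, which is why performance pushes toward large blocks; the opposing pressure is the space wasted when many small files each occupy a whole block.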
There are basically two possibilities, a bit map and a linked list.
A region of kernel memory is dedicated to keeping track of the free blocks. One bit is assigned to each block of the file system. The bit is 1 if the block is free.
If the block size is 8KB the bitmap uses 1 bit for every 64 kilobits of disk space. Thus a 64GB disk would require 1MB of RAM to hold its bitmap.
One can break the bitmap into (fixed size) pieces and apply demand paging. This saves RAM at the cost of increased I/O.
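A minimal sketch of such a bitmap in C, using the convention from above that a 1 bit means the block is free; the sizes and names are illustrative only.

```c
#include <stdint.h>
#include <stddef.h>

#define NBLOCKS (1u << 20)               /* hypothetical number of disk blocks */
static uint8_t freemap[NBLOCKS / 8];     /* one bit per block, 1 = free        */

static int  block_is_free(size_t b) { return (freemap[b / 8] >> (b % 8)) & 1; }
static void mark_free(size_t b)     { freemap[b / 8] |=  (uint8_t)(1u << (b % 8)); }
static void mark_in_use(size_t b)   { freemap[b / 8] &= (uint8_t)~(1u << (b % 8)); }

/* Linear scan for a free block; a real system would search near a hint
 * (e.g., the previous block of the same file) to improve locality. */
static long bitmap_alloc(void)
{
    for (size_t b = 0; b < NBLOCKS; b++)
        if (block_is_free(b)) { mark_in_use(b); return (long)b; }
    return -1;                           /* disk full */
}
```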
A naive implementation would simply link the free blocks together and just keep a pointer to the head of the list. This simple scheme has poor performance since it requires an extra I/O for every acquisition or return of a free block.
In the naive scheme a free disk block contains just one pointer, whereas it could hold around a thousand of them. The improved scheme, shown on the right, has only a small number of the blocks on the list. Those blocks point not only to the next block on the list, but also to many other free blocks that are not directly on the list.
As a result only one in about 1000 requests for a free block requires an extra I/O, a great improvement.
Unfortunately, a bad case still remains. Assume the head block on the list is exhausted, i.e., it points only to the next block on the list. A request for a free block will receive this block, and the next block on the list is brought in. It is full of pointers to free blocks not on the list (so far so good).
If a free block is now returned, the full in-memory block cannot hold the extra pointer, so it is written out (into the newly freed block), leaving the in-memory block exhausted again. This can repeat forever, with one extra I/O per request.
Tanenbaum shows an improvement where you try to keep the one in-memory free block half full of pointers. Similar considerations apply when splitting and coalescing nodes in a B-tree.
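Here is a rough sketch in C of the grouped free list just described; the layout and constants are assumptions, not any real on-disk format. The common case for both allocation and freeing touches only the one in-memory index block.

```c
#define NPTRS 1023                      /* pointers that fit in one index block (assumed) */

struct index_block {
    unsigned next;                      /* next index block on the free list     */
    unsigned nfree;                     /* how many entries of ptrs[] are valid  */
    unsigned ptrs[NPTRS];               /* free blocks not otherwise on the list */
};

static struct index_block inmem;        /* the single index block kept in memory */

void read_block_into(unsigned blkno, struct index_block *ib);        /* hypothetical disk read  */
void write_block_from(unsigned blkno, const struct index_block *ib); /* hypothetical disk write */

unsigned alloc_block(void)
{
    if (inmem.nfree > 0)
        return inmem.ptrs[--inmem.nfree];   /* common case: no I/O at all */
    /* Exhausted: the next index block is read in and its disk block,
     * no longer needed as an index, is handed out as the free block. */
    unsigned b = inmem.next;
    read_block_into(b, &inmem);             /* the roughly 1-in-NPTRS extra I/O */
    return b;
}

void free_block(unsigned b)
{
    if (inmem.nfree < NPTRS) {
        inmem.ptrs[inmem.nfree++] = b;      /* common case: no I/O at all */
        return;
    }
    /* Full: spill the in-memory index into the block being freed, which
     * becomes the new head of the on-disk list. */
    write_block_from(b, &inmem);
    inmem.next  = b;
    inmem.nfree = 0;
}
```

The bad case above corresponds to the in-memory block oscillating between full and exhausted; the half-full improvement would move only half the pointers when spilling or refilling, so an alternation of allocations and frees stays in memory.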
Two limits can be placed on the disk blocks owned by a given user, the so-called soft and hard limits.
A user is never permitted to exceed the hard limit.
This limitation is enforced by having system calls such
as write return failure if the user is already at the hard
limit.
A user is permitted to exceed the soft limit during a login session provided it is corrected prior to logout. This limitation is enforced by forbidding logins (or issuing a warning) if the user is above the soft limit.
Often files in directories such as /tmp are not counted towards either limit since the system is permitted to delete these files when needed.
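A minimal sketch of how the two limits might be enforced; the structure and function names are hypothetical.

```c
struct quota {
    unsigned long blocks_used;
    unsigned long soft_limit;           /* may be exceeded during a session */
    unsigned long hard_limit;           /* may never be exceeded            */
};

/* Called from the write path: refuse any allocation that would put the
 * user over the hard limit (the real write() would fail, e.g. with EDQUOT). */
int charge_blocks(struct quota *q, unsigned long nblocks)
{
    if (q->blocks_used + nblocks > q->hard_limit)
        return -1;
    q->blocks_used += nblocks;
    return 0;
}

/* Called at login: a user over the soft limit is refused (or merely warned). */
int login_allowed(const struct quota *q)
{
    return q->blocks_used <= q->soft_limit;
}
```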
A physical backup simply copies every block in order onto a tape (or other backup media). It is simple and useful for disaster protection, but not useful for retrieving individual files.
We will study logical backups, i.e., dumps that are file and directory based, not simply block based.
Tanenbaum describes the (four phase) unix dump algorithm.
All modern systems support full and incremental dumps.
An interesting problem is that tape densities are increasing more slowly than disk densities, so an ever larger number of tapes is needed to dump a full disk. This has led to disk-to-disk dumps; another possibility is to utilize RAID, which we study in the next chapter.
Modern systems have utility programs that check the consistency of
a file system.
A different utility is needed for each file system type in the
system, but a wrapper
program is often created so that the
user is unaware of the different utilities.
The unix utility is called fsck (file system check) and the windows utility is called chkdsk (check disk).
These utilities can also fix the errors found (for most errors).
Bad blocks are not so much of a problem now. Disks are more reliable and, more importantly, disks and disk controllers take care of most bad blocks themselves.
Demand paging again!
Demand paging is a form of caching: conceptually, the process resides on disk (the big and slow medium) and only a portion of the process (hopefully a small portion that is heavily accessed) resides in memory (the small and fast medium).
The same idea can be applied to files. The file resides on disk but a portion is kept in memory. The area in memory used for those file blocks is called the buffer cache or block cache.
Some form of LRU replacement is used.
The buffer cache is clearly good and simple for reads.
What about writes?
A write-allocate policy is used: on a write, the block is brought into the cache (if not already present) and modified there. Although no-write-allocate is possible and sometimes used for memory caches, it performs poorly for disk caching. The modified (dirty) blocks are written back to the disk later, when needed.
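A toy sketch of a block cache with LRU replacement and write-back of dirty blocks; the sizes, names, and the hypothetical disk_read/disk_write helpers are all assumptions.

```c
#define BLKSIZE 4096
#define NCACHE  64                        /* number of cached blocks (tiny here) */

struct cache_entry {
    unsigned blkno;                       /* which disk block this slot holds */
    int      valid, dirty;
    unsigned last_used;                   /* LRU timestamp                    */
    char     data[BLKSIZE];
};

static struct cache_entry cache[NCACHE];
static unsigned ticks;

void disk_read(unsigned blkno, char *buf);        /* hypothetical disk driver */
void disk_write(unsigned blkno, const char *buf);

struct cache_entry *get_block(unsigned blkno)
{
    struct cache_entry *victim = &cache[0];
    for (int i = 0; i < NCACHE; i++) {
        if (cache[i].valid && cache[i].blkno == blkno) {  /* hit: no disk I/O */
            cache[i].last_used = ++ticks;
            return &cache[i];
        }
        if (!victim->valid)
            continue;                                     /* already found an empty slot */
        if (!cache[i].valid || cache[i].last_used < victim->last_used)
            victim = &cache[i];                           /* prefer empty, else LRU */
    }
    if (victim->valid && victim->dirty)
        disk_write(victim->blkno, victim->data);          /* write-back on eviction */
    disk_read(blkno, victim->data);                       /* miss: fetch from disk  */
    victim->blkno = blkno;
    victim->valid = 1;
    victim->dirty = 0;                                    /* caller sets dirty on write */
    victim->last_used = ++ticks;
    return victim;
}
```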
Homework: 27.
When the access pattern looks sequential, read ahead is employed.
This means that after completing a read() request for block n of a file,
the system guesses that a read() request for block n+1 will shortly be
issued and hence automatically fetches block n+1.
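A sketch of the bookkeeping involved, reusing the hypothetical get_block() from the cache sketch above; prefetch_block() is an assumed asynchronous fetch.

```c
struct open_file {
    unsigned last_block_read;            /* used to detect sequential access */
};

struct cache_entry;                      /* from the cache sketch above */
struct cache_entry *get_block(unsigned blkno);
void prefetch_block(unsigned blkno);     /* assumed: start an asynchronous fetch */

void file_read_block(struct open_file *f, unsigned n)
{
    get_block(n);                        /* the block actually requested         */
    if (n == f->last_block_read + 1)     /* access pattern looks sequential      */
        prefetch_block(n + 1);           /* guess that block n+1 is wanted next  */
    f->last_block_read = n;
}
```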
The idea is to try to place near each other blocks that are likely to be accessed sequentially.
One technique is to allocate storage in larger units, sometimes called super-blocks, consisting of several contiguous blocks.
If clustering is not done, files can become spread out all over the disk and a utility (defrag on windows) should be run to make the files contiguous on the disk.
CP/M was a very early and simple OS. It ran on primitive hardware with very little RAM and disk space. CP/M had only one directory in the entire system. The directory entry for a file contained pointers to the disk blocks of the file. If the file contained more blocks than could fit in a directory entry, a second entry was used.
File systems on cdroms do not need to support file addition or deletion and as a result have no need for free blocks. A CD-R (recordable) does permit files to be added, but they are always added at the end of the disk. The space allocated to a file is not recovered even when the file is deleted, so the (implicit) free list is simply the blocks after the last file recorded.
The result is that the file systems for these devices are quite simple.
This international standard (ISO 9660) forms the basis for essentially all file systems on data cdroms (music cdroms are different and are not discussed). Most Unix systems use iso9660 with the Rock Ridge extensions, and most windows systems use iso9660 with the Joliet extensions.
The ISO9660 standard permits a single physical CD to be partitioned and permits a cdrom file system to span many physical CDs. However, these features are rarely used and we will not discuss them.
Since files do not change, they are stored contiguously and each directory entry need only give the starting location and file length.
File names are 8+3 characters (directory names just 8) for iso9660-level-1 and 31 characters for -level-2. There is also a -level-3 in which a file is composed of extents which can be shared among files and even shared within a single file (i.e. a single physical extent can occur multiple times in a given file).
Directories can be nested only 8 deep.
The Rock Ridge extensions were designed by a committee from the unix community to permit a unix file system to be copied to a cdrom without information loss.
These extensions included, among other things, support for
special files, i.e. including devices in the file system name structure.
The Joliet extensions were designed by Microsoft to permit a windows file system to be copied to a cdrom without information loss.
These extensions included, most notably, long file names using Unicode characters.
We discussed this linked-list, File-Allocation-Table-based file system previously. Here we add a little history.
The FAT file system has been supported since the first IBM PC (1981) and is still widely used. Indeed, considering the number of cameras and MP3 players, it is very widely used.
Unlike CP/M, MS-DOS always had support for subdirectories and metadata such as date and size.
File names were restricted in length to 8+3.
As described above, the directory entries point to the first block of each file and the FAT contains pointers to the remaining blocks.
The free list was supported by using a special code in the FAT for free blocks. You can think of this as a bitmap with a wide "bit".
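A sketch of how the FAT is used, with illustrative (not the real on-disk) codes for "free" and "end of file".

```c
#include <stdint.h>
#include <stdio.h>

#define FAT_FREE 0x0000u                 /* special code: this block is free     */
#define FAT_EOF  0xFFFFu                 /* special code: last block of its file */
#define NBLOCKS  4096

static uint16_t fat[NBLOCKS];            /* one FAT entry per disk block */

/* Follow the chain from a file's first block (found in its directory entry). */
void list_file_blocks(uint16_t first)
{
    for (uint16_t b = first; b != FAT_EOF; b = fat[b])
        printf("block %u\n", (unsigned)b);
}

/* The free list is implicit: any entry holding FAT_FREE is a free block. */
long fat_alloc_block(void)
{
    for (unsigned b = 0; b < NBLOCKS; b++)
        if (fat[b] == FAT_FREE) { fat[b] = FAT_EOF; return (long)b; }
    return -1;
}
```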
The first version, FAT-12, used 12-bit block numbers, so a partition could not exceed 2^12 blocks. A subsequent release went to FAT-16.
Two changes were made: Long file names were supported and the file allocation table was switched from FAT-16 to FAT-32. These changes first appeared in the second release of Windows 95.
The hard part of supporting long names was keeping compatibility
with the old 8+3 naming rule.
That is, new file systems created with windows 98 using long file
names must be accessible if the file system is subsequently used
with an older version of windows that supported only 8+3 file names.
The ability of old systems to read data from new systems was
important since users often had both new and old systems and kept
many files on floppy disks that were used on both systems.
This ability is called backwards compatibility.
The solution was to permit a file to have two names: a long one and an 8+3 one. The primary directory entry for a file in windows 98 is the same format as it was in MS-DOS and contains the 8+3 file name. If the long name fits the 8+3 format, the story ends here.
If the long name does not fit in 8+3, an 8+3 version is produced
via an algorithm that works but produces names with severely
limited aesthetic value.
The long name is stored in one or more auxiliary
directory
entries adjacent to the main entry.
These auxiliary entries are set up to appear invalid to the old OS,
which therefore ignores them.
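The flavor of the name mangling can be sketched as follows. This is not Microsoft's actual algorithm (which differs in character-set handling and in how the numeric tail is chosen); it just illustrates truncating and appending ~1, ~2, ... for uniqueness.

```c
#include <ctype.h>
#include <stdio.h>
#include <string.h>

/* Produce an 8+3 alias such as MYLONG~1.TXT for a long name; seq is the
 * small integer used to keep aliases distinct. */
void make_short_alias(const char *longname, int seq, char alias[13])
{
    char base[7] = "", ext[4] = "";
    const char *dot = strrchr(longname, '.');
    int bi = 0, ei = 0;

    for (const char *p = longname; *p && p != dot && bi < 6; p++)
        if (isalnum((unsigned char)*p))
            base[bi++] = (char)toupper((unsigned char)*p);
    if (dot)
        for (const char *p = dot + 1; *p && ei < 3; p++)
            if (isalnum((unsigned char)*p))
                ext[ei++] = (char)toupper((unsigned char)*p);
    base[bi] = '\0';
    ext[ei] = '\0';

    if (ei)
        snprintf(alias, 13, "%s~%d.%s", base, seq, ext);
    else
        snprintf(alias, 13, "%s~%d", base, seq);
}
```

For example, make_short_alias("My Long Document.txt", 1, a) yields MYLONG~1.TXT, which is a valid name under the old 8+3 rule.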
FAT-32 used 32-bit words for the block numbers (actually, it used 28 bits) so the FAT could be huge (2^28 entries). Windows 98 kept only a portion of the FAT-32 table in memory at a time.
I presented the inode system in some detail above. Here we just describe a few properties of the filesystem beyond the inode structure itself.
File names can be up to 255 characters: touch 255-char-name is OK, but touch 256-char-name is not.
Skipped
Read
The most noticeable characteristic of the current ensemble of I/O devices is their great diversity.
An output-only device such as a printer sends very little data to the computer (perhaps an out-of-paper indication) but receives voluminous data from it. Again it is better thought of as a transducer, converting electronic data from the computer to paper data for humans.
These are the devices as far as the OS is concerned. That is, the OS code is written with the controller specification in hand not with the device specification.
Consider a disk controller processing a read request. The goal is to copy data from the disk to some portion of the central memory. How is this to be accomplished?
The controller contains a microprocessor and memory, and is connected to the disk (by wires). When the controller requests a sector from the disk, the sector is transmitted to the controller via the wires and is stored by the controller in its memory.
The separate processor and memory on the controller gives rise to two questions.
Typically the interface the OS sees consists of several registers located on the controller.
One of these registers typically acts as the go button: writing it tells the controller to start the operation.
So the first question above becomes, how does the OS read and write the device register?
One possibility is a separate I/O space into which the registers are mapped. In this case special I/O space instructions are used to accomplish the loads and stores.
The alternative is memory-mapped I/O, in which the registers appear at addresses in the ordinary memory space and are accessed with ordinary loads and stores. Memory-mapped I/O is the more elegant solution in that it uses an existing mechanism to accomplish a second objective.
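A sketch of memory-mapped register access in C; the register layout, addresses, and bit values are invented for illustration.

```c
#include <stdint.h>

struct disk_regs {                        /* invented register layout */
    volatile uint32_t status;             /* controller reports its state here */
    volatile uint32_t command;            /* the "go button"                   */
    volatile uint32_t block_number;       /* which disk block to transfer      */
    volatile uint32_t memory_addr;        /* where in main memory (for DMA)    */
};

#define DISK_REGS_ADDR 0xFEE00000u        /* assumed physical address of the registers */
#define CMD_READ       1u
#define STATUS_BUSY    1u

void start_read(uint32_t blkno, uint32_t mem_addr)
{
    struct disk_regs *r = (struct disk_regs *)(uintptr_t)DISK_REGS_ADDR;
    r->block_number = blkno;              /* ordinary stores ...                */
    r->memory_addr  = mem_addr;
    r->command      = CMD_READ;           /* ... the last one presses "go"      */
    while (r->status & STATUS_BUSY)       /* poll; an interrupt is more typical */
        ;
}
```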
We now address the second question, moving data between the controller and the main memory. Recall that (independent of the issue with respect to DMA) the disk controller, when processing a read request, pulls the desired data from the disk to its own buffer (and pushes data from the buffer to the disk when processing a write).
Without DMA, i.e., with programmed I/O (PIO), the CPU then does loads and stores (if the controller buffer is memory mapped) or uses I/O instructions (if it is not) to copy the data from the buffer to the desired memory locations.
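A sketch of the PIO case, assuming the controller's buffer is memory-mapped at an invented address: the CPU itself moves every byte.

```c
#include <stdint.h>
#include <stddef.h>

#define SECTOR_BYTES     512
#define CTRL_BUFFER_ADDR 0xFEE01000u      /* assumed address of the controller's buffer */

void pio_copy_sector(uint8_t *dst)
{
    volatile const uint8_t *src = (volatile const uint8_t *)(uintptr_t)CTRL_BUFFER_ADDR;
    for (size_t i = 0; i < SECTOR_BYTES; i++)
        dst[i] = src[i];                  /* the CPU does every byte; with DMA it would not */
}
```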
A DMA controller instead writes the main memory itself, without intervention of the CPU.
Clearly DMA saves CPU work. But this might not be important if the CPU is limited by the memory or by system buses.
An important point is that there is less data movement with DMA so the buses are used less and the entire operation takes less time. Compare the two blue arrows vs. the single red arrow.