NOTE: These notes are by Allan Gottlieb, and are
reproduced here, with superficial modifications, with his permission.
"I" in this text generally refers to Prof. Gottlieb, except
in regards to administrative matters.
================ Start Lecture #17
5.5: Clocks
Also called timers.
5.5.1: Clock Hardware
- Generates an interrupt when timer goes to zero
- Counter reload can be automatic or under software (OS) control.
- If done automatically, the interrupt occurs periodically and thus
is perfect for generating a clock interrupt at a fixed period.
5.5.2: Clock Software
- TOD: Bump a counter each tick (clock interrupt). If the counter is
only 32 bits, we must worry about overflow, so keep two counters: low order
and high order.
- Time quantum for RR: Decrement a counter at each tick. The quantum
expires when counter is zero. Load this counter when the scheduler
runs a process.
- Accounting: At each tick, bump a counter in the process table
entry for the currently running process.
- Alarm system call and system alarms:
- Users can request an alarm at some future time.
- The system also on occasion needs to schedule some of its own
activities to occur at specific times in the future (e.g. turn off
the floppy motor).
- The conceptually simplest solution is to have one timer for each
pending alarm. Instead, we simulate many timers with just one.
- The data structure on the right (a linked list of pending alarms) works well.
- The time in each list entry is the time after the
preceding entry that this entry's alarm is to ring.
- For example, if the time is zero, this event occurs at the
same time as the previous event.
- The other entry is a pointer to the action to perform.
- At each tick, decrement next-signal.
- When next-signal goes to zero,
process the first entry on the list and any others following
immediately after with a time of zero (which means they are to be
simultaneous with this alarm). Then set next-signal to the value
in the next alarm.
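The delta list described above can be sketched in C. Everything below (the names, the malloc-based list) is illustrative rather than actual kernel code; a real kernel would run `clock_tick` from the clock interrupt handler and would not allocate inside it.

```c
#include <stdlib.h>

/* One pending alarm: 'time' is the delta from the PRECEDING entry,
   not an absolute time.  'action' is what to run when it fires. */
struct alarm {
    int time;                       /* ticks after the preceding entry */
    void (*action)(void);
    struct alarm *next;
};

static struct alarm *alarm_list = NULL;  /* head->time is next-signal */

/* Insert an alarm 'ticks' ticks from now (ticks >= 1),
   keeping the stored deltas consistent. */
void set_alarm(int ticks, void (*action)(void))
{
    struct alarm **pp = &alarm_list;
    while (*pp && (*pp)->time <= ticks) {
        ticks -= (*pp)->time;            /* convert to delta form */
        pp = &(*pp)->next;
    }
    struct alarm *a = malloc(sizeof *a);
    a->time = ticks;
    a->action = action;
    a->next = *pp;
    if (a->next)
        a->next->time -= ticks;          /* successor's delta shrinks */
    *pp = a;
}

/* Called once per clock tick: decrement next-signal; when it reaches
   zero, fire the head and any following entries with delta 0
   (simultaneous alarms), then the new head's delta is next-signal. */
void clock_tick(void)
{
    if (!alarm_list)
        return;
    if (--alarm_list->time > 0)
        return;
    while (alarm_list && alarm_list->time == 0) {
        struct alarm *a = alarm_list;
        alarm_list = a->next;
        a->action();
        free(a);
    }
}
```

Note how an entry with delta 0 is placed after an existing entry with the same expiry, which is exactly the "simultaneous alarms" case in the text.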
- Profiling: we want a histogram giving how much time was spent in each 1KB
(say) block of code.
- At each tick check the PC and bump the appropriate counter.
- A user-mode program can determine the software module
associated with each 1K block.
- If we use finer granularity (say 10B instead of 1KB), we get
increased accuracy but more memory overhead.
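The profiling bullet above amounts to one array update per tick. A minimal sketch, assuming a 64 KB profiled text segment and hypothetical names of my own:

```c
#include <stdint.h>

#define BLOCK_SIZE 1024u            /* granularity: 1 KB per bucket  */
#define TEXT_SIZE  (64u * 1024u)    /* assumed size of profiled code */
#define NBUCKETS   (TEXT_SIZE / BLOCK_SIZE)

static unsigned long histogram[NBUCKETS];

/* Called from the clock interrupt with the PC that was interrupted:
   bump the counter for the 1 KB block containing that PC. */
void profile_tick(uint32_t pc)
{
    uint32_t bucket = pc / BLOCK_SIZE;
    if (bucket < NBUCKETS)
        histogram[bucket]++;
}
```

Shrinking BLOCK_SIZE is exactly the accuracy/memory trade-off mentioned above: NBUCKETS, and hence the table, grows proportionally.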
Old-style (1980's) terminal: keyboard and monitor.
The computer received a sequence of characters from the keyboard
and sent a sequence of characters to the monitor. Keyboard and
monitor have separate drivers.
Input is a sequence of ordinary characters plus control characters.
One process is the foreground process; it receives the characters typed
at the keyboard.
Raw mode vs. cooked mode. In raw mode, characters are delivered to the process
one at a time (e.g. screen editor). In cooked mode, data is delivered
one line at a time; i.e. the process blocks until "Return" is hit,
and characters like "BackSpace" perform an editing function.
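On UNIX systems the raw/cooked distinction is controlled through the POSIX termios interface: the ICANON flag selects canonical (cooked) mode. A sketch, with helper names of my own; a real program would first read the current settings with tcgetattr() and apply the modified ones with tcsetattr():

```c
#include <termios.h>

/* Put a termios configuration into raw mode: each character is
   delivered to the process immediately, unedited and unechoed. */
void set_raw(struct termios *t)
{
    t->c_lflag &= ~(ICANON | ECHO);  /* no line editing, no echo   */
    t->c_cc[VMIN]  = 1;              /* read returns after 1 char  */
    t->c_cc[VTIME] = 0;              /* ...with no timeout         */
}

/* Put it back into cooked (canonical) mode: the driver collects a
   whole line, handling erase/kill characters, before delivering it. */
void set_cooked(struct termios *t)
{
    t->c_lflag |= ICANON | ECHO;
}
```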
Reasonably similar to reading from a text file. Main difference is that
there is no case in which the system waits for a block-sized
buffer to be filled, as with disk reads.
Collection of dull issues:
- Carriage return and line feed.
- Line-based editing
- End of file symbol
24x80 character display
Boolean pixels. (Monochromatic, one level of "on" intensity.)
Sequence of ordinary characters plus control characters sent to monitor,
displayed in sequence left-to-right, top-to-bottom. Cursor
at position for next character.
Fixed character sets. The device controller is responsible for translating
them into a CRT image. Control characters did things like:
- New line
- Erase character
- Erase screen
- Put cursor at top-left corner
Output is essentially identical to writing to a text file, except that there
is no buffering and no need to maintain a file structure.
More dull issues:
- Line wrap-around when line is too long.
- Scrolling when bottom of screen is reached.
A terminal consists of a keyboard, a mouse (or mouse-equivalent, such as
a touch-pad), and a display. At a low level
these are separate I/O devices, with separate controllers. At a higher
level, it's arguably better to think of the terminal as a single I/O device.
The user interface is essentially a device driver for the terminal as a
whole. There are no device drivers of any importance for the separate devices.
Keyboard: the keyboard controller reports each pressing of a key and each
release of a key as an "event" to the OS; everything else is done in OS software.
Mouse: the mouse reports the distance moved in units of 0.1 mm in each
direction (X,Y), plus button press and button release events.
Software keeps track of the position by dead reckoning. This is not very
accurate -- e.g. you can't use a mouse for tracing a picture; you need
a device like a wand that reports actual coordinates. But it doesn't
matter, because the user judges the position not by the position
of the mouse, but by the position of the mouse icon (or surrogate) on the
display. What matters is that the position of the icon accurately reflects
the software's idea of the position of the mouse, and that is guaranteed,
since the software itself draws the icon. The inaccuracy of dead reckoning
is OK, because of monitor feedback.
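The dead reckoning above is just delta accumulation clamped to the screen. A minimal sketch (screen size and names are my own assumptions):

```c
#define SCREEN_W 1024
#define SCREEN_H 768

/* Cursor position maintained by dead reckoning: the mouse reports
   only relative motion, which the software accumulates. */
static int cur_x = SCREEN_W / 2;
static int cur_y = SCREEN_H / 2;

static int clamp(int v, int lo, int hi)
{
    return v < lo ? lo : (v > hi ? hi : v);
}

/* Called for each mouse movement report (dx, dy in mouse units). */
void mouse_moved(int dx, int dy)
{
    cur_x = clamp(cur_x + dx, 0, SCREEN_W - 1);
    cur_y = clamp(cur_y + dy, 0, SCREEN_H - 1);
}
```

The clamping is why errors never accumulate visibly at the edges: the icon stops at the border no matter how far the physical mouse travels.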
A monitor has M x N (e.g. 1024 x 768) pixels, each of which has three color
components (Red, Green, Blue) of intensity 0 ... 255 (8 bits each).
Thus, a complete screen image is about 2-3 MBytes, depending on resolution
(1024 x 768 x 3 is about 2.4 MBytes).
Each pixel is refreshed 50 or more times per second.
Monitors are characterized by:
- 1. Always on.
- 2. Real time. Each frame must follow at regular intervals of about 20 msec,
otherwise the display will flicker.
- 3. Ephemeral.
- 4. High data rate: 1024 x 768 pixels x 3 bytes of color x 50 times / sec
= about 118 MBytes per sec.
- 5. Changes gradually. Generally each frame is very similar to the previous one.
- 6. Loss tolerant.
Image compression (e.g. JPEG) exploits loss tolerance (6) and the
prevalence of large patches of constant or near-constant color. Video
compression (e.g. MPEG) additionally exploits (5); you represent only the
change from one frame to the next.
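Property (5), gradual change, is easy to see numerically: between consecutive frames only a small fraction of pixel values differ, and a video codec encodes only those differences. A toy illustration (not a real codec):

```c
#include <stddef.h>
#include <stdint.h>

/* Count how many pixel bytes actually changed between two
   consecutive frames of n bytes each.  A real encoder (e.g. MPEG)
   would transmit only these differences, not the whole frame. */
size_t changed_pixels(const uint8_t *prev, const uint8_t *cur, size_t n)
{
    size_t changed = 0;
    for (size_t i = 0; i < n; i++)
        if (prev[i] != cur[i])
            changed++;
    return changed;
}
```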
CPU writes to video RAM; Video controller scans video RAM continuously.
Video RAM is addressable like ordinary RAM.
- 24-bit (also called 32-bit) -- eight bits (0..255) for each of the primary
colors; in 32-bit mode the fourth byte is unused.
- 16-bit -- 5 bits (0..31) for each of the primary colors.
- Color palette. A table of 256 entries specifying 256 different colors;
1 byte for each pixel, used as an index into the palette table.
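Palette mode trades color range for memory: the frame buffer holds one byte per pixel and the controller looks the color up on the way out. A minimal sketch, with hypothetical names:

```c
#include <stdint.h>

/* A 24-bit color: one byte per primary. */
struct rgb { uint8_t r, g, b; };

/* The 256-entry palette table the video controller indexes into. */
static struct rgb palette[256];

/* Resolve the displayed color of pixel (x, y) in a palette-mode
   frame buffer of the given width: the frame buffer byte is an
   index into the palette, not a color itself. */
struct rgb pixel_color(const uint8_t *framebuf, int x, int y, int width)
{
    return palette[framebuf[y * width + x]];
}
```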
The user interface is in effect a (very complicated) device driver for
the terminal as a whole. Some features:
To some degree, each window is a virtual output device. E.g. "Standard
output" in a C or a Java program writes to one window, and the
"device independence" of such programs -- i.e. the equivalence of writing
to a window with writing to a file --- applies to this virtual device.
(This is more important in UNIX systems, where most windows simulate
a character-based monitor, than in WINDOWS or Macintosh systems.)
However, this is purely a software construction, not an actual I/O device.
- The shell integrates the three actual devices in a way that seems natural
to the user. The user has many different options in communicating with the
shell and in adjusting the display.
- Likewise, applications programs need to interact with the terminal
in many different ways.
- Application programs typically need to be event driven, responding
to keyboard and mouse when relevant.
- The issue of who deals with a given input on keyboard or mouse --
shell? process? which process? shell and process? -- is complex. E.g.
if you move a window, the window manager has to deal with that, but the
process does not have to deal with it. If you resize a window, both
the window manager and (in general) the process have to deal with it.
From the point of view of each individual node, network communication is
just another I/O device.
Processes generate messages to be sent out. Device driver packages them
appropriately and sends them out on the network.
Messages arrive from the network. The device driver interprets the packaging,
reassembles the original message, and figures out what to do with it:
- Pass it on to a user process -- e.g. web browser.
- Pass it on to a server process (demon) -- e.g. file server, web server, etc.
The more interesting standpoint is looking at the distributed system
as a whole. There are varying degrees of integration:
- Messages passed back and forth. At the lowest level this is always
the basis for any distributed system, but often it is also the user's view;
e.g. email, ftp, etc.
- WWW. A web browser allows access to documents on other machines.
- Shared file system. The files across all machines constitute a single
file system.
- Remote processes. A process on one machine can create and communicate
with a process on a different machine.
- Migrating processes. A process may be moved from one machine to
another, according to the judgment of either the system or the user.
- Parallel processes. A process may be divided into pieces that
run in parallel on separate machines.
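The "network as just another I/O device" view is concrete in UNIX: once a connection exists, sending and receiving messages is ordinary read/write on a file descriptor. A sketch using a local socket pair in place of a real network connection (helper names are mine):

```c
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Send a message: network I/O looks like writing to any other
   device once the driver/protocol stack hands us a descriptor. */
ssize_t send_message(int fd, const char *msg)
{
    return write(fd, msg, strlen(msg));
}

/* Receive a message into buf, NUL-terminating it for convenience. */
ssize_t recv_message(int fd, char *buf, size_t cap)
{
    ssize_t n = read(fd, buf, cap - 1);
    if (n >= 0)
        buf[n] = '\0';
    return n;
}
```

In a real system the descriptor would come from socket()/connect() or accept(); socketpair() gives the same read/write semantics on one machine, which is exactly the point of device independence.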
Local Area Network (LAN) vs. Wide Area Network (WAN)
LAN (e.g. Ethernet)
- Spatially close.
- Often homogeneous (all computers the same).
- Single owner and administration
- Presumably cooperative. Therefore extreme security not required.
- High data rate, low delay.
WAN (e.g. the Internet)
- The only things under central authority are:
  - The assignment of IP addresses to machines. (High-level)
  - DNS: The assignment of domain names to IP addresses. (High-level)
  - Communication protocols
All other aspects of administration are distributed.
- There are malicious users on the network. Therefore, strong security
is required.
- Variable data rate and delay.