Computer Systems Org I - Prof. Grishman

Lecture 28 - Dec. 13, 2005

Computer logic:  pushing the limits

The time to execute an instruction = (number of clock cycles for an instruction) * (clock period)
How can we reduce this?
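The formula above can be tried with some illustrative numbers (the cycle counts below are made up for the example, not measurements of any real machine):

```python
def instruction_time(cycles_per_instruction, clock_hz):
    """Time for one instruction = cycles * clock period, where period = 1 / frequency."""
    return cycles_per_instruction / clock_hz

# A modern 3 GHz machine at one cycle per instruction:
t_fast = instruction_time(1, 3e9)       # about a third of a nanosecond
# An early 4.77 MHz PC needing, say, 4 cycles per instruction:
t_slow = instruction_time(4, 4.77e6)    # well over 800 nanoseconds
print(t_fast, t_slow, t_slow / t_fast)
```

Either factor -- fewer cycles or a shorter period -- shrinks the product, which is why both are attacked below.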

Raising the clock frequency

Clock frequency has been going up for a long time: the original PCs (early 1980s) ran at 4.77 MHz, while a typical machine now runs at 3 GHz.  But clock rates are now rising much more slowly.

One problem is heat dissipation.  A CPU requires a certain amount of energy every time it turns a switch on or off;  the faster it runs, the more energy it uses and dissipates as heat.  A modern CPU may dissipate 80 or 90 watts.  See the curves at

To keep the CPU from melting, new processors require large heat sinks.  Remove the heat sink, and the processor burns up.  See pictures at   Somewhat faster clocks are possible with elaborate cooling (water or freon), but these are not widely used.
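The link between clock rate and heat can be sketched with the standard approximation for dynamic power in CMOS, P = C * V^2 * f (switched capacitance, supply voltage, frequency).  The constants below are illustrative, not measurements of any real chip:

```python
def dynamic_power(c_farads, v_volts, f_hz):
    """Approximate CMOS dynamic power: P = C * V^2 * f."""
    return c_farads * v_volts**2 * f_hz

p_1ghz = dynamic_power(30e-9, 1.2, 1e9)   # hypothetical chip at 1 GHz
p_3ghz = dynamic_power(30e-9, 1.2, 3e9)   # same chip clocked 3x faster
print(p_1ghz, p_3ghz, p_3ghz / p_1ghz)    # power grows linearly with clock rate
```

Since power to be dissipated grows at least linearly with frequency, tripling the clock triples the heat, and the heat sink has to keep up.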

Reducing cycles per instruction

A simple processor needs several clock cycles per instruction.  For example, for a LD we have to fetch the instruction, compute the (PC-) relative address, and then actually do the load.  Even fairly simple processors will overlap instruction fetch with execution of the previous instruction, saving one clock cycle.  Fancier machines will pipeline instructions, so that while one instruction is in one stage of execution, the next instruction is in the preceding stage.  Even with fancy pipelining, however, modern machines still require around one clock cycle per instruction.
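The payoff from pipelining can be seen with a simple cycle count.  The sketch below assumes a 3-stage pipeline (fetch, address, execute, following the LD example above) and ignores hazards and stalls:

```python
def total_cycles(n_instructions, n_stages, pipelined):
    """Cycles to run n instructions through an idealized n_stages-stage processor."""
    if pipelined:
        # The first instruction takes n_stages cycles; after that, one
        # instruction finishes every cycle.
        return n_stages + (n_instructions - 1)
    # Without pipelining, every instruction takes the full n_stages cycles.
    return n_instructions * n_stages

print(total_cycles(1000, 3, pipelined=False))  # 3000 cycles: 3 per instruction
print(total_cycles(1000, 3, pipelined=True))   # 1002 cycles: about 1 per instruction
```

For long instruction streams the pipelined count approaches one cycle per instruction, but never goes below it -- which is the limit mentioned above.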

Another problem is that main memory is much slower than the CPU, with access times in the tens of nanoseconds.  If the CPU had to wait for main memory for each instruction, it would be much slower.  So a modern processor puts a fast memory -- a cache -- between the CPU and the main memory, and keeps the most recently referenced instructions and data in the cache.  The number of transistors on a chip keeps increasing, so it is now possible to put quite a large cache on the same chip with the CPU.
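The idea of keeping recently referenced items close by can be sketched as a tiny direct-mapped cache model (one of several cache organizations; the sizes here are illustrative):

```python
class Cache:
    """Toy direct-mapped cache: each address maps to exactly one line."""

    def __init__(self, n_lines=8):
        self.n_lines = n_lines
        self.lines = [None] * n_lines   # each line remembers the tag it holds

    def access(self, address):
        """Return True on a hit; on a miss, fill the line and return False."""
        index = address % self.n_lines  # which line this address maps to
        tag = address // self.n_lines   # which block of memory is in that line
        if self.lines[index] == tag:
            return True                 # hit: served from the fast cache
        self.lines[index] = tag         # miss: fetch from slow main memory
        return False

cache = Cache()
print(cache.access(100))  # first touch misses, goes to main memory
print(cache.access(100))  # repeat reference hits in the cache
```

Because programs reuse the same instructions and data heavily (loops, local variables), most references hit in the cache and run at CPU speed rather than main-memory speed.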


All of these factors together mean that processors are not getting faster at the same rate as they used to.  What to do?  One answer is to build multiprocessors -- sets of connected machines.  Multiprocessors have been around for a long time, but have become particularly attractive now that CPUs are so cheap (single chips).

To take advantage of the increasing number of transistors on a chip, some current CPUs act in some ways like multiprocessors.  "Hyperthreading" provides a machine with two complete sets of registers, so it can run two programs (threads) at the same time, at some gain in speed.  "Dual core" chips (e.g., Intel 800 series Pentiums) have two full processors on a single chip.
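The kind of workload these designs speed up -- one program split into independent threads -- looks like the sketch below.  Python threads here illustrate only the programming model, not the hardware speedup:

```python
import threading

results = {}

def worker(name, n):
    # Each thread does some independent computation.
    results[name] = sum(range(n))

t1 = threading.Thread(target=worker, args=("a", 1000))
t2 = threading.Thread(target=worker, args=("b", 2000))
t1.start(); t2.start()   # both threads now run concurrently
t1.join(); t2.join()     # wait for both to finish
print(results["a"], results["b"])
```

On a hyperthreaded or dual-core chip, the two threads can genuinely execute at the same time, one per set of registers or per core.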