V22.0436 - Prof. Grishman
Lecture 18: Pipelining (Chap. 6)
If we had designed the multi-cycle processor with a shorter clock cycle
(e.g., 50 ps), allowing 4 cycles for memory access and 2 for ALU
operations, we could have gotten a modest speed-up. Much greater
speed-ups are possible, however, by overlapping the execution of
successive instructions.
The simplest such overlap is instruction fetch overlap: fetch the next
instruction while executing the current instruction. Even relatively
early processors employed such overlap.
Greater gain can be achieved by overlapping the execution (register
fetch, ALU operation, ...) of successive instructions. A full pipelining
scheme overlaps such operations completely, resulting ideally in a CPI
(cycles per instruction) of 1. However, machines which employ such overlap
must deal with data and branch hazards: instructions which influence later
instructions in the pipeline. This makes the design of pipelined processors
much more complex.
The benefits of pipelining increase when we have instructions which are
relatively uniform in execution time and which can be finely divided
into pipeline stages. We can then have a relatively short clock
cycle and issue one instruction each clock cycle (Fig. 6.3). Under ideal conditions,
throughput is multiplied by the number of pipeline stages.
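The throughput claim above can be checked with a small calculation. This
sketch uses hypothetical numbers (a 200 ps clock, 5 stages, one million
instructions); the function names are ours, not the text's:

```python
# Compare total execution time for n instructions on a non-pipelined
# machine vs. an ideally pipelined one (no hazards, no stalls).

def unpipelined_time(n, stages, cycle_ps):
    # Each instruction occupies the whole datapath for 'stages' cycles.
    return n * stages * cycle_ps

def pipelined_time(n, stages, cycle_ps):
    # The first instruction takes 'stages' cycles to fill the pipe;
    # thereafter one instruction completes every cycle.
    return (stages + (n - 1)) * cycle_ps

n, stages, cycle = 1_000_000, 5, 200  # hypothetical values
speedup = unpipelined_time(n, stages, cycle) / pipelined_time(n, stages, cycle)
print(round(speedup, 3))  # 5.0 -- approaches the stage count for large n
```

For large n the fill time of the pipeline is negligible, so the speedup
approaches the number of stages, as stated above.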
Instruction sets for pipelining (text, p. 374)
RISC machines like MIPS are well suited for pipelining.
Instruction format is simple and execution relatively uniform.
Pipelining is more complex for CISC machines, because the instructions
may take different lengths of time to execute. However, RISC-style execution
is now incorporated into high-performance CISC processors (such as the
Pentium and Core 2) by translating most instructions into a series of
simpler, RISC-like internal operations.
Pipelined Data Path (text, section 6.2)
The basic idea is to introduce a set of pipeline registers which hold
all the information required to complete execution of the
instruction. This includes portions of the instruction, control
signals which have been decoded from the instruction, computed
effective addresses and data. Starting with the single-cycle
machine, we can build a system with a 5-stage pipeline; the basic
design is shown in Figure 6.11 and the
details, including control signals, in a later figure in the text.
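The role of the pipeline registers can be illustrated with a toy model
(this is not the MIPS datapath itself; the stage names follow the text,
everything else is our own simplification):

```python
# Each clock edge, every pipeline register passes its contents to the
# next stage; a new instruction is fetched into the first stage.
STAGES = ["IF", "ID", "EX", "MEM", "WB"]

def run(instructions, cycles):
    # pipe[i] holds the instruction currently in stage i (None = empty)
    pipe = [None] * len(STAGES)
    trace = []
    fetch = iter(instructions)
    for _ in range(cycles):
        pipe = [next(fetch, None)] + pipe[:-1]  # shift right, fetch new
        trace.append(list(pipe))
    return trace

trace = run(["lw", "add", "sub"], 5)
print(trace[-1])  # [None, None, 'sub', 'add', 'lw'] -- 'lw' reaches WB
```

After five cycles the first instruction has traversed all five stages,
while the two behind it occupy earlier stages, which is exactly the
overlap the pipeline registers make possible.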
Pipeline hazards: overlapping instruction execution can give rise to
several kinds of hazards:
- structural hazards: two instructions want to use the same hardware resource
(e.g., the ALU) in the same clock cycle. This problem is reduced
in machines with a very uniform instruction set, such as MIPS. Given
that logic is cheap, we may duplicate some components to avoid such
conflicts.
- for load instructions, memory is needed at two points in
execution (instruction fetch and data fetch); this is addressed here
by having two separate memories. In machines with a single memory,
we may still have separate caches for instructions and data.
- in the pipelined MIPS, the registers are needed at two points
in instruction execution: reading the registers and writing results
back into them. This is not a problem, as both can be done in a
single clock cycle (write in the first half, read in the second).
- data hazards: one instruction uses the result of the previous
instruction before that result has been written to the register file.
The simplest solution is to "stall" ... to hold up the current instruction
until the prior one has finished. A more efficient solution is data
forwarding ... to send the result of one instruction directly to the ALU
input for the next instruction, as well as putting it in the register.
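The forwarding decision above can be sketched as a simple test, modeled
on the classic EX-hazard condition (the function and register numbers
here are illustrative, not from the text):

```python
# Forward if the instruction now in EX/MEM writes a register that the
# instruction now in ID/EX is about to read in the ALU.
def forward_a(ex_mem_regwrite, ex_mem_rd, id_ex_rs):
    # Register $0 is hardwired to zero in MIPS, so never forward it.
    return ex_mem_regwrite and ex_mem_rd != 0 and ex_mem_rd == id_ex_rs

# add $t0, $t1, $t2   writes $t0 (register 8), now in EX/MEM
# sub $t3, $t0, $t4   reads  $t0 (register 8), now in ID/EX
print(forward_a(True, 8, 8))  # True: forward the ALU result
print(forward_a(True, 8, 9))  # False: no dependence, read the register
```

When the test succeeds, a multiplexer selects the forwarded ALU result
instead of the (stale) register-file value, so no stall is needed.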
- branch hazards: we must wait until a conditional branch is resolved
before we know whether the following instructions should be executed.
Again, we could stall the instruction after the branch, but this is wasteful.
Alternatively, we can guess whether or not the branch is taken, start executing
instructions based upon our guess, but wait to store their results until
we know the outcome of the branch. If our guess is correct, we lose no time;
if it is wrong, we invalidate the instructions we issued following the
branch, and try again. Modern CPUs use a branch prediction table,
which keeps track of recent branches and whether they were taken, in order
to "guess" more accurately.
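A common form of such a table uses a 2-bit saturating counter per entry;
this sketch assumes that scheme (the table size, indexing, and initial
state are our own illustrative choices):

```python
# A 2-bit-counter branch prediction table: counters 0-1 predict
# not-taken, 2-3 predict taken; each outcome nudges the counter.
class BranchPredictor:
    def __init__(self, entries=1024):
        self.entries = entries
        self.counters = [1] * entries  # start weakly not-taken

    def _index(self, pc):
        # Low-order bits of the branch address index the table.
        return (pc >> 2) % self.entries

    def predict(self, pc):
        return self.counters[self._index(pc)] >= 2  # True = taken

    def update(self, pc, taken):
        i = self._index(pc)
        if taken:
            self.counters[i] = min(3, self.counters[i] + 1)
        else:
            self.counters[i] = max(0, self.counters[i] - 1)

bp = BranchPredictor()
for _ in range(3):                # a loop branch is taken repeatedly...
    bp.update(0x400100, True)
print(bp.predict(0x400100))       # True: the table now predicts taken
```

The 2-bit counter means a single anomalous outcome (such as a loop exit)
does not immediately flip the prediction, which is why this scheme guesses
loop branches well.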
Multiple issue and Superscalar (text, section 6.9)
Some machines now try to go beyond pipelining to execute more than one
instruction per clock cycle, producing an effective CPI < 1. This is
possible if we duplicate some of the functional parts of the processor
(e.g., have two ALUs, or a register file with 4 read ports and 2 write ports)
and have logic to issue several instructions concurrently. There
are two general approaches to multiple issue: static multiple issue (where the
scheduling is done at compile time) and dynamic multiple issue (where the
scheduling is done at execution time), also known as superscalar. Intel Core
2 processors are superscalar and can issue up to 4 instructions per
clock cycle.
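The effect of issue width on the ideal cycle count can be seen with a
small calculation (a best-case bound only; the function and the
instruction count are our own illustration, ignoring hazards and
dependences):

```python
# Best case: an issue-width-k machine retires k instructions per cycle,
# so n instructions need ceil(n / k) cycles.
def ideal_cycles(n_instructions, issue_width):
    return -(-n_instructions // issue_width)  # ceiling division

n = 1_000_000
print(ideal_cycles(n, 1))  # 1000000 cycles -> CPI = 1.0 (single issue)
print(ideal_cycles(n, 4))  # 250000 cycles  -> effective CPI = 0.25
```

Real superscalar machines fall short of this bound, since dependences and
branch mispredictions limit how often a full group of instructions can be
issued together.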