V22.0436 - Prof. Grishman


Begin discussion of Assignment #7

Lecture 19: Pipelining (Chap. 6)

The benefits of pipelining increase when we have instructions which are relatively uniform in execution time and which can be finely divided into pipeline stages.  We can then have a relatively short clock cycle and issue one instruction each clock cycle (Fig. 6.3).  Under ideal conditions, instruction throughput is multiplied by the number of pipeline stages.
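As a rough illustration of the ideal case (the timings below are made up for the example, not taken from the text), the arithmetic looks like this:

# Ideal-case pipeline arithmetic with hypothetical numbers (not from the text).
num_stages = 5                 # IF, ID, EX, MEM, WB
stage_time_ps = 200            # the slowest stage sets the pipelined clock period
single_cycle_time_ps = num_stages * stage_time_ps   # unpipelined clock period

# Once the pipeline is full, one instruction completes every (short) cycle,
# so ideal throughput improves by a factor equal to the number of stages.
ideal_speedup = single_cycle_time_ps / stage_time_ps
print(ideal_speedup)           # 5.0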

Instruction sets for pipelining (text, p. 374)

RISC machines like MIPS are well suited for pipelining.  The instruction format is simple and execution is relatively uniform.  Pipelining is more complex for CISC machines, because the instructions may take different lengths of time to execute.  However, RISC-style pipelining is now incorporated into high-performance CISC processors (such as the Pentium) by translating most instructions into a series of RISC-like operations.
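As a hypothetical illustration (the instruction, operation names, and registers below are invented for this example; real processors use internal micro-operation formats of their own), a memory-to-register CISC add might be broken into RISC-like operations such as:

# Hypothetical decomposition of a CISC-style "add a memory operand into a
# register" instruction into RISC-like operations a pipeline can handle uniformly.
cisc_instruction = "add eax, [ebx]"        # register <- register + memory

risc_like_ops = [
    ("load", "tmp", "[ebx]"),              # fetch the memory operand
    ("add",  "eax", "eax", "tmp"),         # plain register-register add
]

for op in risc_like_ops:
    print(op)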

Pipelined Data Path (text, section 6.2)

The basic idea is to introduce a set of pipeline registers which hold all the information required to complete execution of the instruction.  This includes portions of the instruction, control signals which have been decoded from the instruction, computed effective addresses, and data.  Starting with the single-cycle machine, we can build a system with a 5-stage pipeline; the basic design is shown in Figure 6.11 and the details, including control signals, in Figure 6.27.
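A minimal sketch of the idea, tracking only which instruction occupies each stage every cycle (none of the actual datapath or control signals from Figures 6.11 and 6.27 are modeled):

# Minimal sketch of a 5-stage pipeline: each slot stands in for the pipeline
# register feeding that stage (here it holds just the instruction name).
STAGES = ["IF", "ID", "EX", "MEM", "WB"]

def simulate(program):
    # pipeline[i] holds the instruction currently in stage i (None = bubble)
    pipeline = [None] * len(STAGES)
    fetch_queue = list(program)
    cycle = 0
    while fetch_queue or any(pipeline[:-1]):
        cycle += 1
        # Every instruction advances one stage; the one in WB completes and leaves.
        pipeline = [fetch_queue.pop(0) if fetch_queue else None] + pipeline[:-1]
        occupancy = ", ".join(f"{s}:{i if i else '-'}" for s, i in zip(STAGES, pipeline))
        print(f"cycle {cycle}: {occupancy}")

simulate(["lw", "add", "sub", "sw"])   # 4 instructions finish in 4 + 5 - 1 = 8 cycles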

Pipeline hazards:  overlapping instruction execution can give rise to problems, situations in which the next instruction cannot execute in the following clock cycle.  The text distinguishes structural hazards (the hardware cannot support the combination of instructions in the same cycle), data hazards (an instruction needs a result an earlier instruction has not yet produced), and control hazards (the outcome of a branch is not yet known).
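As a rough sketch of one common case, the load-use data hazard: an instruction tries to read a register in the cycle immediately after a load writes it.  The instruction encoding below is invented for illustration; it is not the textbook's notation.

# Minimal load-use hazard check: if an instruction reads a register that the
# immediately preceding load writes, the hardware must stall (insert a bubble)
# or the compiler must reorder code / insert a nop.
def needs_stall(prev, curr):
    """prev and curr are (opcode, dest_reg, source_regs) tuples."""
    prev_op, prev_dest, _ = prev
    _, _, curr_sources = curr
    return prev_op == "lw" and prev_dest in curr_sources

lw  = ("lw",  "$t0", ["$s0"])         # lw  $t0, 0($s0)
add = ("add", "$t2", ["$t0", "$t1"])  # add $t2, $t0, $t1   uses $t0 right away

print(needs_stall(lw, add))  # True: one bubble is needed even with forwarding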

Multiple issue and Superscalar (text, section 6.9)

Some machines now try to go beyond pipelining and execute more than one instruction per clock cycle, producing an effective CPI < 1. This is possible if we duplicate some of the functional parts of the processor (e.g., have two ALUs or a register file with 4 read ports and 2 write ports), and have logic to issue several instructions concurrently.  There are two general approaches to multiple issue:  static multiple issue, where the scheduling is done at compile time, and dynamic multiple issue, where the scheduling is done at execution time; the latter is also known as superscalar.
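A rough sketch of static two-issue scheduling, assuming a simplified (hypothetical) rule that each cycle can pair one ALU/branch instruction with one load/store when the two are independent:

# Simplified static dual issue: at most one ALU/branch plus one load/store per
# cycle, and the pair must have no register dependence between its two members.
ALU_OPS = {"add", "sub", "and", "or", "beq"}
MEM_OPS = {"lw", "sw"}

def can_pair(a, b):
    """a, b are (opcode, dest, sources); pair one ALU op with one memory op."""
    classes = {"ALU" if a[0] in ALU_OPS else "MEM",
               "ALU" if b[0] in ALU_OPS else "MEM"}
    independent = a[1] not in b[2] and b[1] not in a[2]
    return classes == {"ALU", "MEM"} and independent

def schedule(program):
    """Greedily issue adjacent pairs; return cycles used and effective CPI."""
    cycles, i = 0, 0
    while i < len(program):
        if i + 1 < len(program) and can_pair(program[i], program[i + 1]):
            i += 2          # dual issue this cycle
        else:
            i += 1          # single issue
        cycles += 1
    return cycles, cycles / len(program)

prog = [
    ("lw",  "$t0", ["$s0"]),
    ("add", "$t1", ["$s1", "$s2"]),   # independent of the lw, so the two pair up
    ("sub", "$t2", ["$t1", "$s4"]),   # no memory op it can pair with here
    ("sw",  "",    ["$t2", "$s3"]),   # reads $t2 from the sub, so it cannot pair with it
]
print(schedule(prog))  # (3, 0.75): 4 instructions issued in 3 cycles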