V22.0436 - Prof. Grishman

Lecture 19: Performance Improvements

Text: Section 5.4, Chapter 6

MIPS Implementations: multiple clock cycles / instruction (cont'd)

We can modify the design of the MIPS machine to use a faster clock (10 ns) and multiple clock cycles per instruction. In the design given in section 5.4, instructions require up to 5 clock cycles:

  1. instruction fetch (for all instructions)
  2. instruction decode and register fetch (for all instructions)
  3. ALU operation (for all instructions)
  4. for R-type instructions, register store; for lw/sw, data memory operation
  5. for lw, register store
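
For concreteness, these cycle counts can be written as a small table in code. The sketch below is a rough illustration using the 10 ns clock assumed above; the branch and jump figures are not in the list and are assumptions following the usual multi-cycle breakdown.

    # Hypothetical cycle counts for the multi-cycle design (10 ns clock assumed).
    CLOCK_NS = 10

    CYCLES = {
        "R-type": 4,   # steps 1-4
        "lw":     5,   # steps 1-5
        "sw":     4,   # steps 1-4 (step 4 writes data memory)
        "beq":    3,   # assumed: branch decided during the ALU step
        "j":      3,   # assumed
    }

    def time_ns(instr_class):
        """Time, in ns, to execute one instruction of the given class."""
        return CYCLES[instr_class] * CLOCK_NS

    print(time_ns("lw"))   # 50 ns for a load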

This revised design enables a single memory to be used for instructions and data, but requires an additional register to hold the instruction.

How do we compute the net effect on performance? We need to compute cycle time * average CPI. Average CPI depends in turn on the relative frequency of the different instructions: in computing the average, we weight the CPI of each instruction by its relative frequency. These relative frequencies are determined by simulating a variety of programs and counting the frequency of each instruction. Some examples are shown in P&H, figure 4.46, page 248.
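
As a worked (and invented) example of the weighting, the sketch below computes the average CPI and the average time per instruction from a hypothetical instruction mix; the frequencies are made up for illustration and are not the ones in figure 4.46.

    # Hypothetical instruction mix: (fraction of instructions, CPI) per class.
    # CPI values follow the multi-cycle breakdown above; frequencies are invented.
    mix = {
        "R-type": (0.44, 4),
        "lw":     (0.24, 5),
        "sw":     (0.12, 4),
        "beq":    (0.18, 3),
        "j":      (0.02, 3),
    }

    clock_ns = 10
    avg_cpi = sum(freq * cpi for freq, cpi in mix.values())
    print(avg_cpi)              # 4.04 cycles per instruction for this mix
    print(avg_cpi * clock_ns)   # about 40 ns per instruction on average

If the single-cycle design needed, say, a 40 ns clock (one cycle long enough for lw), the average time per instruction here is about the same, which is why the net gain is small for this instruction set.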

The net gains from this design are small or negative for this limited instruction set. They would be greater if memory operations took much longer (relative to register and ALU times), or if the instruction set included more complex instructions such as a multiply.

The control unit must be more complex in order to handle the sequential execution: we would create a finite-state machine in which the transition between steps is determined by the opcode. This control unit can be optimized down to individual gates, as was done for the combinational control unit of the single-cycle design. Alternatively, we can employ a microprogrammed design, in which the tables for the control unit (the state transition table and the output table) are stored directly in a microprogram memory. This provides a more uniform structure and a design which is easier to change.
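
One way to picture this finite-state control is as a pair of tables: an output table giving the control signals asserted in each state, and a transition table giving the next state as a function of the current state and the opcode. The fragment below is a hypothetical sketch in that spirit; the state names and signal names are invented and do not come from the text. A microprogrammed design would store essentially these two tables in the microprogram memory.

    # Hypothetical sketch of the finite-state control as two tables.
    # State and signal names are invented for illustration.

    # Output table: control signals asserted in each state.
    outputs = {
        "FETCH":     ["MemRead", "IRWrite", "PCWrite"],
        "DECODE":    [],                      # register fetch
        "EXEC_R":    ["ALUOp=R"],
        "WB_R":      ["RegWrite"],            # R-type register store
        "MEM_ADDR":  ["ALUSrcB=imm"],         # compute effective address
        "MEM_READ":  ["MemRead"],
        "MEM_WRITE": ["MemWrite"],
        "WB_LW":     ["RegWrite", "MemtoReg"],
    }

    # Transition table: next state from (current state, opcode).
    # None means the transition does not depend on the opcode.
    next_state = {
        ("FETCH",     None): "DECODE",
        ("DECODE",    "R"):  "EXEC_R",
        ("DECODE",    "lw"): "MEM_ADDR",
        ("DECODE",    "sw"): "MEM_ADDR",
        ("EXEC_R",    None): "WB_R",
        ("MEM_ADDR",  "lw"): "MEM_READ",
        ("MEM_ADDR",  "sw"): "MEM_WRITE",
        ("MEM_READ",  None): "WB_LW",
        ("WB_R",      None): "FETCH",
        ("WB_LW",     None): "FETCH",
        ("MEM_WRITE", None): "FETCH",
    }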

Pipelining

Much greater speed-ups are possible by overlapping the execution of successive instructions.

The simplest such overlap is instruction fetch overlap: fetch the next instruction while executing the current instruction. Even relatively simple processors employed such overlap.

Greater gain can be achieved by overlapping the execution (register fetch, ALU operation, ...) of successive instructions. A full pipelining scheme overlaps these operations completely, resulting ideally in a CPI (cycles per instruction) of 1. However, machines which employ such overlap must deal with data and branch hazards: situations in which an instruction needs a result, or a branch outcome, that an earlier instruction still in the pipeline has not yet produced. This makes the design of pipelined machines much more complex.
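
A rough way to see the ideal gain (made-up numbers): with a 5-stage pipeline and no hazards, one instruction completes every clock once the pipeline is full, so the speed-up over unoverlapped execution approaches the number of stages.

    # Idealized pipeline timing with no hazards; numbers are hypothetical.
    stages = 5            # e.g., fetch, decode, execute, memory, write-back
    clock_ns = 10
    n = 1_000_000         # instructions executed

    unpipelined_ns = n * stages * clock_ns         # one instruction at a time
    pipelined_ns = (stages + (n - 1)) * clock_ns   # fill once, then one per cycle

    print(unpipelined_ns / pipelined_ns)   # close to 5; effective CPI close to 1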

Some machines now try to go beyond pipelining and execute more than one instruction per clock cycle, producing an effective CPI < 1. This is possible if we duplicate some of the functional parts of the processor (e.g., have two ALUs, or a register file with 4 read ports and 2 write ports), but it requires even more complex logic to guard against hazards. Such designs are called superscalar.
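
As a rough illustration (hypothetical numbers), a 2-issue superscalar that manages to pair 60% of its instructions, issuing the rest one at a time, spends 0.6/2 + 0.4 = 0.7 cycles per instruction on average.

    # Hypothetical 2-issue machine: 60% of instructions issue in pairs, 40% alone.
    paired = 0.6
    effective_cpi = paired / 2 + (1 - paired)
    print(effective_cpi)   # 0.7 cycles per instruction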