G22.2233 - Prof. Grishman

Lecture 11:  Memory Technology

Early technologies: delay line; magnetic core (dominant mid-50's to mid-70's) (text, section 7.9)

Primary technology today: transistor RAM (text, pages B-26 to B-33)

static RAM (SRAM): each bit is held in a flip-flop-like circuit (several transistors per bit), so it retains its value as long as power is applied; fast, but larger and more expensive per bit than DRAM.

dynamic RAM (DRAM): each bit is stored as charge on a capacitor (one transistor per cell), so it must be periodically refreshed; denser and cheaper per bit than SRAM, but slower. Recent DRAMs aim to improve the streaming rate -- the rate at which successive bytes can be read out -- through interfaces such as SDRAMs (synchronous DRAMs, which operate synchronously with the CPU clock) and DDR DRAMs (double data rate DRAMs).

For some applications, memory does not have to change --- use ROM (read-only memory).

Static and dynamic RAM are both volatile: data disappears when power is lost. For some applications, data must be preserved, so a special (slow) non-volatile RAM is used.


Memory Hierarchy (Text, Section 7.1)

There is a trade-off between memory speed, cost, and capacity: fast memory is expensive per bit, and cheap memory is slow. A cost-effective system must therefore mix several of these memory technologies, which means the system must manage its data so that it is rapidly available when needed. In earlier machines, all of this memory management was done explicitly by the programmer; now most of it is done automatically and invisibly by the system. The ideal is to create a system with the cost and capacity of the cheapest technology along with the speed of the fastest.


If memory access were entirely random, automatic memory management would not be possible. Management relies on locality of reference: temporal locality (an item referenced recently is likely to be referenced again soon) and spatial locality (items near a recently referenced item are likely to be referenced soon).

Cache (Text, Section 7.2)

A cache is an (automatically-managed) level of memory between main memory and the CPU (registers). The goal with a cache is to get the speed of a small SRAM with the capacity of main memory.

Each entry in the cache includes the data, the memory address (or a partial address, called the tag), and a valid bit.

One issue in cache design is cache addressing:  determining where a word from main memory may be placed in the cache (Fig. 7.15).

Spring 2002