A Whirlwind Tour through Computer Architecture:  Part IV

Honors Computer Systems Organization (Prof. Grishman)


Executing machine instructions involves a lot of memory references, both for instruction fetches and for data reads and writes.  However, large main memories (64 MB and up) are much slower than current processors -- roughly two orders of magnitude slower, with random access times measured in tens of nanoseconds.  If the processor had to issue a request and then wait for memory for each instruction, it would spend almost all its time waiting for memory.
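To see how bad this is, here is a back-of-the-envelope calculation.  The numbers below (a 1 ns processor cycle, a 60 ns memory access, one memory reference per instruction) are illustrative assumptions, not measurements, but they match the "two orders of magnitude" gap described above:

```python
# Illustrative (assumed) numbers: a 1 ns processor cycle vs. a 60 ns
# memory access, with one memory reference per instruction.
cycle_ns = 1.0   # assumed processor cycle time
mem_ns = 60.0    # assumed memory random access time

time_per_instr = cycle_ns + mem_ns       # compute, then wait for memory
wait_fraction = mem_ns / time_per_instr  # share of time spent stalled
print(f"{wait_fraction:.0%} of the time is spent waiting")  # → 98% ...
```

With these numbers the processor is idle about 98% of the time -- which is exactly why the cache described next is needed.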

This effect is greatly reduced by having a cache -- a small memory placed between the main memory and the processor.  Basically, the smaller the memory, the faster it can operate.  The cache controller keeps in the cache the most recently referenced data, on the assumption that data which was recently referenced is very likely to be referenced again in the near future ("temporal locality").  Most memory references can be (quickly) satisfied from the cache;  only if the processor asks for a word which it has not requested recently will the system be forced to get the word from main memory, and wait until it is returned.

Modern processors have taken this a step further by having several levels of cache -- a very small Level 1 cache and a larger Level 2 cache.  The Level 1 cache is small enough to operate at processor speed;  on a Level 1 miss, the Level 2 cache is checked before going all the way to main memory.
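The benefit of a cache hierarchy is usually summarized as the average memory access time (AMAT): the Level 1 access time, plus the fraction of accesses that miss in Level 1 times the cost of going to Level 2, and so on.  The latencies and miss rates below are assumed, round numbers for illustration:

```python
# Average memory access time for a two-level cache hierarchy,
# using illustrative (assumed) latencies and miss rates.
l1_ns, l2_ns, mem_ns = 1.0, 10.0, 60.0   # access times for L1, L2, memory
l1_miss, l2_miss = 0.05, 0.20            # fraction of accesses that miss

amat = l1_ns + l1_miss * (l2_ns + l2_miss * mem_ns)
print(f"AMAT = {amat:.2f} ns")  # → AMAT = 2.10 ns
```

Even though memory itself takes 60 ns here, the average reference costs only about 2 ns, because the two cache levels satisfy the overwhelming majority of requests.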