Saturday, September 4, 2021

Direct Mapping Cache Memory

 

What is Cache Memory?

Cache memory is a small section of fast SRAM placed between the processor and main memory to speed up program execution. It acts as a special high-speed buffer between the CPU and main memory.

It increases the effective processing speed of the CPU by quickly supplying the data and instructions of the program currently in execution. A cache memory system combines a small amount of fast memory (SRAM) with a large amount of slower memory (DRAM).

Direct Mapping Cache Memory

Direct mapping is the simplest technique for mapping main memory blocks to the cache. In this technique, each block of main memory has only one possible location in the cache. In this example, block i of main memory maps onto block j (j = i modulo 128) of the cache, as shown in the figure. Therefore, whenever one of memory blocks 0, 128, 256, … is loaded into the cache, it is stored in cache block 0.


Figure: Direct mapping Cache memory

Blocks 1, 129, 257, … are stored in cache block 1, and so on. In general, the mapping expression is,

     j = i modulo m

      Where,

       i = block number in main memory

       j = block number in the cache

      m = number of blocks (lines) in the cache
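The mapping expression above can be sketched in a few lines of code. This is a minimal illustration, assuming the 128-block cache used in the figure; the function name is chosen here for clarity and is not from the original text.

```python
# Direct-mapped placement: j = i modulo m, with m = 128 cache blocks.
CACHE_BLOCKS = 128  # m in the formula

def cache_block(i: int) -> int:
    """Return the cache block j that main-memory block i maps onto."""
    return i % CACHE_BLOCKS

# Memory blocks 0, 128, 256, ... all land in cache block 0:
print(cache_block(0), cache_block(128), cache_block(256))  # 0 0 0
# Memory blocks 1, 129, 257, ... all land in cache block 1:
print(cache_block(1), cache_block(129), cache_block(257))  # 1 1 1
```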

To implement such a cache system, the main memory address is divided into three fields, as shown in the diagram. The lower-order 4 bits select one of the 16 words in a block; this field is known as the word field. The second field, known as the block field, is used to distinguish a block from other blocks. Its length is 7 bits, since 2^7 = 128.

When a new block enters the cache, the 7-bit cache block field determines the cache position in which this block must be stored. The third field is the tag field. It stores the higher-order 5 bits of the memory address of the block, which identify which of the main memory blocks mapping to that position is currently in the cache.
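The three-field split described above can be shown with simple bit operations. This is a sketch assuming the 16-bit address layout from the text (5-bit tag, 7-bit block, 4-bit word); the example address is an arbitrary value chosen for illustration.

```python
# Split a 16-bit main-memory address into tag, block, and word fields
# (5-bit tag | 7-bit block | 4-bit word), matching the layout in the text.
WORD_BITS, BLOCK_BITS = 4, 7

def split_address(addr: int):
    word = addr & ((1 << WORD_BITS) - 1)                    # lowest 4 bits
    block = (addr >> WORD_BITS) & ((1 << BLOCK_BITS) - 1)   # next 7 bits
    tag = addr >> (WORD_BITS + BLOCK_BITS)                  # highest 5 bits
    return tag, block, word

# Example: tag 0b10110, block 0b0000011, word 0b1010
tag, block, word = split_address(0b10110_0000011_1010)
print(tag, block, word)  # 22 3 10
```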

The main drawback of a direct-mapped cache is that each block of main memory maps to one fixed location in the cache. If two different blocks map to the same cache location and are referenced repeatedly, they will be continually swapped in and out (known as thrashing). Direct mapping is therefore easy to implement but not flexible.





