Cache memory

Dr. Abeer Saber


Mapping Functions

Because there are fewer cache lines than main memory blocks, an algorithm is needed for mapping main memory blocks into cache lines.

Three techniques can be used:

• Direct
• Associative
• Set associative
Mapping Functions

Direct
• The simplest technique
• Maps each block of main memory into only one possible cache line

Associative
• Permits each main memory block to be loaded into any line of the cache
• The cache control logic interprets a memory address simply as a Tag and a Word field
• To determine whether a block is in the cache, the cache control logic must simultaneously examine every line’s Tag for a match

Set Associative
• A compromise that exhibits the strengths of both the direct and associative approaches while reducing their disadvantages
Direct Mapping

[Figure: the s+w-bit memory address is split into a Tag (s–r bits), a Line (r bits), and a Word (w bits) field. The Line field selects one cache line; that line’s stored tag is compared with the address Tag: a match is a hit in cache, no match is a miss.]

Figure 4.9 Direct-Mapping Cache Organization
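The address decomposition in Figure 4.9 can be sketched in a few lines. This is a minimal illustration, not hardware: the field widths (24-bit address, 4 words per block so w = 2, 16K cache lines so r = 14) are assumptions chosen for the example, not values given on the slides.

```python
# Hypothetical field widths (assumptions for illustration):
W_BITS = 2    # w: 4 words per block
R_BITS = 14   # r: 16K cache lines

def decompose(addr):
    """Split an address into (tag, line, word) as in Figure 4.9."""
    word = addr & ((1 << W_BITS) - 1)
    block = addr >> W_BITS              # s-bit block address
    line = block & ((1 << R_BITS) - 1)  # low r bits select the cache line
    tag = block >> R_BITS               # remaining s - r bits are the tag
    return tag, line, word

tag, line, word = decompose(0x16339C)   # tag 0x16, line 0x0CE7, word 0
```

Because the line number is just the low bits of the block address, many blocks share each line, which is why the stored tag must be compared on every access.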


Associative mapping

❑ With associative mapping, there is flexibility as to which block to replace when a new block is read into the cache.

❑ The principal disadvantage of associative mapping is the complex circuitry required to examine the tags of all cache lines in parallel.
[Figure: the s+w-bit memory address is split into a Tag (s bits) and a Word (w bits) field. The address Tag is compared simultaneously with the tag stored in every cache line: any match is a hit in cache, no match is a miss.]

Figure 4.11 Fully Associative Cache Organization
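The parallel tag search in Figure 4.11 can be sketched as follows. In hardware every comparison happens simultaneously; this sketch loops over the lines instead, and the cache contents and field widths are illustrative assumptions.

```python
W_BITS = 2  # assumption: 4 words per block

def lookup(cache, addr):
    """Fully associative lookup.

    cache maps line number -> (tag, block_data). In fully associative
    mapping the tag is the entire block address, so any block can sit
    in any line; hardware compares all tags at once, this loop does
    the same check sequentially.
    """
    tag = addr >> W_BITS
    word = addr & ((1 << W_BITS) - 1)
    for line_tag, data in cache.values():
        if line_tag == tag:        # match -> hit in cache
            return True, data[word]
    return False, None             # no match -> miss in cache

# Hypothetical contents: line 0 holds the block whose tag is 0x58CE7.
cache = {0: (0x58CE7, [10, 11, 12, 13])}
hit, value = lookup(cache, 0x16339C)
```

The loop is exactly the "complex circuitry" the slide mentions: one comparator per line is needed to do it in parallel, which is what makes large fully associative caches expensive.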


Set Associative

As with associative mapping, each word maps into multiple cache lines. For set-associative mapping, each word maps into all the cache lines in a specific set, so that main memory block B0 maps into set 0, and so on. Thus, with v sets of k lines each, the set-associative cache can be physically implemented as v associative caches. It is also possible to implement the set-associative cache as k direct-mapped caches.
[Figure: the s+w-bit memory address is split into a Tag (s–d bits), a Set (d bits), and a Word (w bits) field. The Set field selects one set of k lines; the tags of those k lines are compared in parallel with the address Tag: a match is a hit in cache, no match is a miss.]

Figure 4.14 k-Way Set Associative Cache Organization
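The set-associative address split differs from the direct-mapped one only in how many bits select a set rather than a line. A minimal sketch, with field widths that are assumptions for illustration (4 words per block, 8K sets so d = 13):

```python
W_BITS = 2    # assumption: 4 words per block
SET_BITS = 13 # assumption: d = 13, i.e. 8K sets

def decompose_set(addr):
    """Split an address into (tag, set, word) as in Figure 4.14."""
    word = addr & ((1 << W_BITS) - 1)
    block = addr >> W_BITS
    set_no = block & ((1 << SET_BITS) - 1)  # d bits select the set
    tag = block >> SET_BITS                  # remaining s - d bits
    return tag, set_no, word

tag, set_no, word = decompose_set(0x16339C)  # tag 0x2C, set 0x0CE7, word 0
```

Note that for the same address the set-associative tag is wider than the direct-mapped tag would be with the same total cache size, because fewer bits are spent selecting a set than a line.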


Replacement algorithms

 What if the cache is full?

 Replace an old line with the new one

In direct mapping, there is only one possible line for each block, so no choice is needed
Replacement algorithms

 Least recently used (LRU)

❑ For a two-way set-associative cache, each line has a USE bit; referencing a line sets its USE bit to 1 and clears the other line’s

❑ Replace the line whose USE bit is 0 (the least recently used)
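The USE-bit scheme for a two-way set can be sketched directly. This is an illustrative model of one set, not any particular cache's implementation:

```python
class TwoWaySet:
    """One set of a two-way set-associative cache with LRU via USE bits."""

    def __init__(self):
        self.tags = [None, None]
        self.use = [0, 0]   # USE bit per line

    def access(self, tag):
        """Return True on hit. On a miss, replace the line with USE == 0."""
        if tag in self.tags:
            i = self.tags.index(tag)
            hit = True
        else:
            i = self.use.index(0)   # the least recently used line
            self.tags[i] = tag      # replace it with the new block
            hit = False
        self.use[i] = 1             # mark referenced line as recently used
        self.use[1 - i] = 0         # the other line becomes the LRU
        return hit
```

With only two lines per set, a single bit is enough to track full LRU order, which is why two-way LRU is cheap to build in hardware.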
Replacement algorithms

 First-in-First-out (FIFO)

 Round-robin (Circular Buffer)


Replacement algorithms

 Least frequently used (LFU)

❑ Each line has a counter recording how often it is referenced

❑ Replace the line with the smallest count
Replacement algorithms

Random
Write policy

 Write back
❑ Writes go only to the cache; the block is written to memory only when the line is replaced
❑ Each line has a DIRTY bit marking whether it has been modified

 Write through
❑ Every write updates both the cache and main memory
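The two policies can be contrasted with a toy model. This sketch uses a hypothetical one-line cache and a dict for main memory purely to show when memory gets written; real caches track a DIRTY bit per line exactly as below:

```python
class WriteBackLine:
    """A single write-back cache line with a DIRTY bit."""

    def __init__(self, memory):
        self.memory = memory
        self.addr = None
        self.data = None
        self.dirty = 0

    def write(self, addr, value):
        if self.addr != addr:              # replacing the line
            if self.dirty:                 # flush only if modified
                self.memory[self.addr] = self.data
            self.addr = addr
            self.dirty = 0
        self.data = value
        self.dirty = 1                     # memory is NOT updated here

def write_through(cache, memory, addr, value):
    """Write-through: every write updates both cache and memory."""
    cache[addr] = value
    memory[addr] = value
```

Write back reduces memory traffic when a line is written many times before replacement; write through keeps memory always consistent, which matters when other devices (or processors) read memory directly.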
