
+

William Stallings
Computer Organization
and Architecture
9th Edition
+
Chapter 4
Cache Memory
Key Characteristics of Computer
Memory Systems

Table 4.1 Key Characteristics of Computer Memory Systems


+
Characteristics of Memory
Systems
◼ Location
◼ Refers to whether memory is internal or external to the computer
◼ Internal memory is often equated with main memory
◼ Processor requires its own local memory, in the form of registers
◼ Cache is another form of internal memory
◼ External memory consists of peripheral storage devices that are
accessible to the processor via I/O controllers

◼ Capacity
◼ Memory capacity is typically expressed in terms of bytes (or words)

◼ Unit of transfer
◼ For internal memory, the unit of transfer is equal to the number of
electrical lines into and out of the memory module (continued)
+
Characteristics of Memory
Systems
This may equal the word length, but is often larger (64, 128, or 256 bits)

◼ Related concepts for internal memory:-


◼ Word: the natural unit of memory organization. The size of the word is
typically equal to the number of bits used to represent an integer or an instruction
◼ Addressable units: in some systems it is the word; many systems
allow addressing at the byte level
◼ Unit of transfer
◼ For main memory, it is the number of bits read or written at a time;
this need not equal a word or an addressable unit
◼ For external memory, often much larger than a word (blocks)
Method of Accessing Units of Data
Sequential access
• Memory is organized into units of data called records
• Access must be made in a specific linear sequence
• Access time is variable

Direct access
• Involves a shared read-write mechanism
• Individual blocks or records have a unique address based on physical location
• Access time is variable

Random access
• Each addressable location in memory has a unique, physically wired-in addressing mechanism
• The time to access a given location is independent of the sequence of prior accesses and is constant
• Any location can be selected at random and directly addressed and accessed
• Main memory and some cache systems are random access

Associative
• This is a random-access type that enables one to make a comparison for a specific match (in parallel)
• A word is retrieved based on a portion of its contents rather than its address
• Each location has its own addressing mechanism, and retrieval time is constant independent of location or prior access patterns
• Cache memories may employ associative access
Capacity and Performance:

The two most important characteristics of memory

Three performance parameters are used:

Access time (latency)
• For random-access memory it is the time it takes to perform a read or write operation
• For non-random-access memory it is the time it takes to position the read-write mechanism at the desired location

Memory cycle time
• Access time plus any additional time required before a second access can commence
• Additional time may be required for transients to die out on signal lines or to regenerate data if they are read destructively
• Concerned with the system bus, not the processor

Transfer rate
• The rate at which data can be transferred into or out of a memory unit
• For random-access memory it is equal to 1/(cycle time)
• For non-random-access memory the following relationship holds: see book (a small sketch follows)
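The relationship the slide defers to the book is commonly written as T_N = T_A + N/R: the time to read or write N bits equals the average access (positioning) time plus N divided by the transfer rate in bits per second. A minimal sketch of that arithmetic; the device numbers below are made up purely for illustration:

```python
def transfer_time_non_random(t_access_s, n_bits, rate_bps):
    """T_N = T_A + N/R: positioning (access) time plus the streaming time."""
    return t_access_s + n_bits / rate_bps

# Illustrative only: a disk-like device with 5 ms access time,
# streaming a 4 KB block at 100 Mbit/s.
print(transfer_time_non_random(5e-3, 4 * 1024 * 8, 100e6))  # ~0.0053 s
```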
+ Memory
◼ Physical Type: The most common forms are:
◼ Semiconductor memory
◼ Magnetic surface memory
◼ Optical
◼ Magneto-optical

◼ Physical characteristics: Several physical characteristics of data storage are important:


◼ Volatile memory
◼ Information decays naturally or is lost when electrical power is switched off
◼ Nonvolatile memory
◼ Once recorded, information remains without deterioration until deliberately changed
◼ No electrical power is needed to retain information
◼ Magnetic-surface memories
◼ Are nonvolatile
◼ Semiconductor memory
◼ May be either volatile or nonvolatile
◼ Nonerasable memory
◼ Cannot be altered, except by destroying the storage unit
◼ Semiconductor memory of this type is known as read-only memory (ROM)

◼ Organization: For random-access memory the organization is a key design issue


◼ Organization refers to the physical arrangement of bits to form words
+
Memory Hierarchy

◼ Design constraints on a computer’s memory can be summed up by three questions:
◼ How much? How fast? How expensive?

◼ There is a trade-off among capacity, access time, and cost


◼ Faster access time, greater cost per bit
◼ Greater capacity, smaller cost per bit
◼ Greater capacity, slower access time

◼ The way out of the memory dilemma is not to rely on a single memory
component or technology, but to employ a memory hierarchy (next page)
+
Memory Hierarchy
A typical hierarchy is
illustrated in Fig 4.1.
As one goes down the hierarchy, the following occur:-
A. Decreasing cost/bit
B. Increasing capacity
C. Increasing access time
D. Decreasing frequency of access by the processor

◼ The basis for the validity of condition (D) is a principle known as locality of reference
+
Memory Hierarchy (locality of reference)
Locality of reference: during execution of a program, memory references
tend to cluster (e.g. loops, tables), and these clusters are referenced
repeatedly. In other words, it is the tendency of the CPU to access the
same set of memory locations repeatedly over a short period of time.
There are two forms of locality:
◼ Spatial (sequential locality): when a given address has been referenced,
◼ it is most likely that addresses near it will be referenced within a
short period of time (e.g. consecutive instructions in a program, arrays)

◼ Temporal: once a particular memory item has been referenced,
◼ it is most likely that it will be referenced again soon (e.g. loops)

◼ In general, over a short period of time, the processor is primarily
working with fixed clusters of memory references
+
Memory Hierarchy (locality of reference)
It is possible to organize data across the hierarchy such that:-
◼ The percentage of accesses to each successively lower level is
substantially less than that of the level above.

◼ Consider the two-level Example 4.1 (explained in class)
◼ Two levels of memory, level 1 (L1) and level 2 (L2)
◼ L1 contains 1,000 words with an access time of 0.01 µs and L2 contains
100,000 words with an access time of 0.1 µs

◼ The current cluster can be temporarily placed in L1. From time to
time, clusters are swapped between the two levels.
◼ On average, most references will be to instructions in L1.

◼ In Example 4.1, suppose 95% of memory accesses are found in L1. Then the
average time to access a word is (a small sketch of this calculation follows):-
(0.95)(0.01 µs) + (0.05)(0.01 µs + 0.1 µs) = 0.0095 + 0.0055 = 0.015 µs (much
closer to the L1 access time)
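A minimal sketch of the Example 4.1 arithmetic, assuming the same simple model as the slide (a miss costs the L1 access time plus the L2 access time):

```python
def avg_access_time_two_level(hit_ratio_l1, t_l1_us, t_l2_us):
    """Average access time for a two-level hierarchy: hits are served at the
    L1 time; misses pay the L1 lookup plus the L2 access (as in Example 4.1)."""
    return hit_ratio_l1 * t_l1_us + (1 - hit_ratio_l1) * (t_l1_us + t_l2_us)

# Example 4.1 numbers: 95% of accesses hit L1 (0.01 us); L2 takes 0.1 us.
print(avg_access_time_two_level(0.95, 0.01, 0.1))  # 0.015 us
```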
+
Memory Hierarchy (locality of reference)

This principle can be applied to more than two levels (Fig. 4.1)

◼ When processor makes a request for an item, then

1. The item is sought in L1 of memory hierarchy


◼ Probability of finding the requested item is called hit ratio (h1).
And probability of not finding is called miss ratio (1 – h1)

2. When the requested item causes a “miss”, it is sought in L2

◼ Probability of finding the requested item in L2 is called hit ratio


(h2). And probability of not finding is called miss ratio (1 – h2)

3. The process is repeated until the item is found; it is then brought to the
first level and sent to the processor (a small sketch generalizing the
average access time follows)
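A small sketch generalizing this search to several levels, using the per-level hit ratios h1, h2, ... described above. The model (each miss pays that level's access time before trying the next level, and the last level always hits) is an assumption for illustration only:

```python
def avg_access_time(levels):
    """levels: list of (hit_ratio, access_time) pairs, ordered L1, L2, ...
    An access probes each level in turn, paying its access time, until a hit.
    The last level is assumed to always hit (hit_ratio 1.0)."""
    expected = 0.0
    p_reach = 1.0          # probability the access reaches this level
    cost_so_far = 0.0      # time already spent probing the levels above
    for hit_ratio, t_access in levels:
        cost_so_far += t_access
        expected += p_reach * hit_ratio * cost_so_far
        p_reach *= (1 - hit_ratio)
    return expected

# Example 4.1 again, treating L2 as the last level (always hits):
print(avg_access_time([(0.95, 0.01), (1.0, 0.1)]))  # 0.015 us
```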
+
Cache Memory Principles
◼ The concept is illustrated in Figure 4.3 a.
◼ Relatively large and slow main memory together with a smaller,
fast cache memory. The cache contains a copy of a portion of MM
◼ When the CPU attempts to read a word, it checks the cache,
◼ If found, the word is delivered to the processor (fast access).
◼ If not, a block of MM (containing the required word) is read into the
cache and then the word is delivered to the processor.
◼ Because of the phenomenon of locality of reference, it is likely that future
references will be to the same location or to other words in the block
◼ Figure 4.3b depicts the use of multiple levels of cache. L2 is
slower than L1, and L3 is slower than L2
Cache and Main Memory
+
Cache/Main Memory Structure
◼ Fig. 4.4 (next page) depicts the structure of cache/main
memory system
◼ MM consists of up to 2^n addressable words (with each word
having a unique n-bit address).
◼ For mapping purposes, MM consists of M blocks, each with K words
◼ The cache consists of m blocks, called lines (each line contains K
words plus a tag and control bits (not shown))
◼ The number of lines is much less than the number of blocks (m << M)
◼ At any time, some subset of the blocks of MM resides in lines in the cache
◼ If a word in a block of MM is read, that block is transferred to
one of the lines of the cache.
◼ Each line includes a tag that identifies which block is currently
being stored (the tag is usually a portion of the MM address; more on this later)
Cache/Main Memory Structure
+
Cache Read Operation
◼ Fig. 4.5 illustrates the read operation. The CPU generates the read
address (RA) of a word. If the word is in the cache, it is delivered to the
CPU. If not, the block containing that word is loaded into the cache and
the word is delivered to the processor

◼ Fig. 4.6 depicts a typical cache organization (see the book)
Elements of Cache Design

Table 4.2 Elements of Cache Design


+
Cache Addresses
Virtual Memory
◼ Virtual memory
◼ Facility that allows programs to address memory from a logical
point of view, without regard to the amount of main memory
physically available
◼ When used, the address fields of machine instructions contain
virtual addresses
◼ For reads from and writes to main memory, a hardware memory
management unit (MMU) translates each virtual address into a
physical address in main memory

◼ When virtual addresses are used, the designer may choose:


◼ To place the cache between the processor and the MMU (fig. 4.7a), or
◼ Between the MMU and the main memory (fig. 4.7b). Next page

◼ The issue of logical cache versus physical cache is a complex one,
and is beyond the scope of our book
+

Logical
and
Physical
Caches
Cache Size
◼ We would like the size to be small enough so that the
◼ Overall average cost per bit is close to that of main memory alone

◼ OR, large enough so that the

◼ Overall average access time is close to that of the cache alone

◼ Other motivations for minimizing cache size


◼ The larger the cache, the larger the number of gates involved in
addressing the cache
◼ Large caches tend to be slightly slower than small caches
◼ The available chip and board area also limits cache size

◼ Because the performance of the cache is very sensitive to


the nature of the workload, it is impossible to arrive at a
single “optimum” cache size
Mapping Function
◼ Because there are fewer cache lines than main memory
blocks,
◼ An algorithm is needed for mapping main memory blocks into
cache lines

◼ Three techniques can be used:

Direct
• The simplest technique
• Maps each block of main memory into only one possible cache line

Associative
• Permits each main memory block to be loaded into any line of the cache
• The cache control logic interprets a memory address simply as a Tag and a Word field
• To determine whether a block is in the cache, the cache control logic must simultaneously examine every line’s Tag for a match

Set Associative
• A compromise that exhibits the strengths of both the direct and associative approaches while reducing their disadvantages
Direct Mapping
◼ Maps each block of main memory into only one possible
cache line

◼ The mapping is expressed as: i=j modulo m ( i= j%m), where


◼ i = cache line number, j = main memory block number, m=
number of lines in cache

◼ Fig. 4.8a shows the mapping for the first m blocks of MM


◼ Each block of MM maps into one unique line of the cache (B0 maps
into L0, B1 maps into L1, ..., Bm-1 maps into Lm-1)
◼ The next m blocks map into the cache in the same fashion; that is,
block Bm maps into L0, Bm+1 maps into L1, and so on

+

Direct

Mapping
Direct Mapping (mechanism)
◼ The mapping function is easily implemented using the main
memory address
◼ Fig. 4.9 illustrates the general mechanism; each MM address is
divided into 3 fields
◼ The least significant w bits identify a unique word or byte within a block
of MM
◼ The remaining s bits specify one of the 2^s blocks of main memory
◼ Cache logic interprets the s bits as:
◼ A tag of s – r bits (the most significant portion), which identifies which
particular MM block is currently being stored in that cache line
◼ A line field of r bits, which identifies one of the m = 2^r lines of the cache
◼ Direct mapping summary
◼ Address length = (s + w) bits
◼ Number of addressable units = 2^(s+w) words or bytes
◼ Block size = line size = 2^w words or bytes
◼ Number of blocks in main memory = 2^(s+w) / 2^w = 2^s
◼ Number of lines in cache = m = 2^r
◼ Size of tag = (s – r) bits
Direct Mapping Cache Organization
Example 4.2 for all 3 cases
◼ The example includes the following elements:
◼ The cache can hold 64 Kbytes
◼ Data are transferred between MM and the cache in blocks of 4
bytes each (i.e. Block size = Line size = 4 bytes). So the cache is
organized as 16K = 2^14 lines of 4 bytes each
◼ The main memory consists of 16 Mbytes, with each byte directly
addressable by a 24-bit address (2^24 = 16M).
◼ Thus for mapping purposes, we can consider MM to consist of 4M
blocks of 4 bytes each

◼ Fig. 4.10 shows the direct mapping using i = j modulo 2^14
(remember i = cache line number, and j = MM block number)
◼ The main memory address = 24 bits (above example), divided into:
◼ 2 bits for the word (each block = 4 bytes = 2^2)
◼ 14 bits for the line (16K = 2^14 lines)
◼ 8 bits for the tag (22 – 14 = 8), or = MM blocks / cache lines = 4M/16K (a small sketch of this address split follows)
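A minimal sketch of the Example 4.2 direct-mapped address split (8-bit tag, 14-bit line, 2-bit word); the address value used below is arbitrary and purely for illustration:

```python
WORD_BITS, LINE_BITS, TAG_BITS = 2, 14, 8   # Example 4.2: 24-bit address

def direct_map_split(address):
    """Split a 24-bit main-memory address into (tag, line, word) fields."""
    word = address & ((1 << WORD_BITS) - 1)
    line = (address >> WORD_BITS) & ((1 << LINE_BITS) - 1)
    tag = address >> (WORD_BITS + LINE_BITS)
    return tag, line, word

# Equivalent view: block number j maps to line i = j mod 2^14.
addr = 0x16339C                      # arbitrary 24-bit address for illustration
block = addr >> WORD_BITS            # j = address without the word field
tag, line, word = direct_map_split(addr)
assert line == block % (1 << LINE_BITS)
print(f"tag={tag:#x} line={line:#x} word={word}")
```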
+
Direct

Mapping

Example
+
Direct Mapping Advantages / Disadvantages
+ Victim Cache
◼ Advantages of direct mapping
◼ Simple and inexpensive to implement
◼ Disadvantage of direct mapping
◼ Thrashing:- if a program repeatedly accesses two blocks that map to the
same line, the blocks are continually swapped in and out, which lowers the
hit ratio (many cache misses)

Victim Cache
◼ Originally proposed as an approach to reduce the conflict misses of
direct-mapped caches without affecting their fast access time
◼ Fully associative cache
◼ Typical size is 4 to 16 cache lines
◼ Residing between direct mapped L1 cache and the next level of
memory
+
Fully Associative Mapping
◼ Each memory block can be loaded into any line of the cache
◼ The memory address is interpreted as a Tag and a Word field
◼ The Tag field uniquely identifies a block of main memory
◼ The Word field identifies the word within the requested block
◼ To determine whether a block is in the cache, the cache control
logic must simultaneously examine every line’s tag for a match
◼ Fully associative mapping summary
◼ Address length = (s + w) bits
◼ Number of addressable units = 2^(s+w) words or bytes
◼ Block size = line size = 2^w words or bytes
◼ Number of blocks in main memory = 2^(s+w) / 2^w = 2^s
◼ Number of lines in cache = undetermined
◼ Size of tag = s bits
◼ Fig. 4.11 illustrates the logic of the fully associative cache organization
Fully Associative Cache Organization
+
Fully Associative Mapping Example
◼ Remember our example
◼ The cache can hold 64 Kbytes
◼ Data are transferred between MM and the cache in blocks of 4
bytes each (i.e. Block size = Line size = 4 bytes). So the cache is
organized as 16K = 2^14 lines of 4 bytes each
◼ The main memory consists of 16 Mbytes, with each byte directly
addressable by a 24-bit address (2^24 = 16M).
◼ Thus for mapping purposes, we can consider MM to consist of 4M
blocks of 4 bytes each

◼ Fig. 4.11 shows our example using associative mapping

◼ Tag = 22 bits, stored with each 32-bit block (4 bytes) of data
◼ 2-bit byte number (a small sketch of this address split follows)
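A minimal sketch of the fully associative split for the same example (22-bit tag, 2-bit word); the address value is made up for illustration:

```python
WORD_BITS = 2            # block = line = 4 bytes
TAG_BITS = 22            # 24-bit address minus the 2-bit word field

def assoc_split(address):
    """Split a 24-bit address into (tag, word) for fully associative mapping."""
    word = address & ((1 << WORD_BITS) - 1)
    tag = address >> WORD_BITS
    return tag, word

# With no line field, a lookup must compare the tag against every line's tag.
tag, word = assoc_split(0x16339C)
print(f"tag={tag:#x} word={word}")   # the tag is simply the block number
```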
+
Associative

Mapping

Example
+
Set Associative Mapping
◼ Compromise that exhibits the strengths of both the direct and
associative approaches while reducing their disadvantages
◼ Cache consists of (divided into) a number of sets
◼ Each set contains a number of lines
◼ A given block maps to any line in a given set
◼ e.g. 2 lines per set
◼ This is referred to as 2-way set-associative mapping
◼ A given block can be in one of 2 lines in only one set

◼ The relationships are m = v × k and i = j modulo v, where

◼ i = cache set number, j = main memory block number,
m = number of lines in cache, v = number of sets,
k = number of lines in each set
+
Set Associative Mapping
Mapping
◼ Each block maps to any line in a specific set. So B0 maps to
set 0 (inside the set maps into any line), B1 maps to set 1, and
so on.
◼ Thus a set-associative cache can be physically implemented as v
associative caches (v = # of sets)
◼ Fig. 4.13a illustrates this mapping for the first v blocks
◼ As with fully associative mapping, each word maps into multiple
cache lines. For set associative, each word maps into all the cache lines
in a specific set

◼ Finally
◼ Each block maps into a specific set (as in direct mapping)
◼ And maps into any line within that set (as in fully associative mapping)
+

Mapping From
Main Memory
to Cache:

k-Way
Set Associative
+
Set Associative Mapping
Set associative cache organization
◼ Here the cache control interprets a memory address as three
fields:- Tag, Set, and Word
◼ Fig. 4.14 illustrates the cache control logic.
◼ Note that the Tag size with fully associative mapping is quite large, while with set
associative it is much smaller
◼ Set Associative Mapping Summary
◼ Address length = (s + w) bits
◼ Number of addressable units = 2^(s+w) words or bytes
◼ Block size = line size = 2^w words or bytes
◼ Number of blocks in main memory = 2^(s+w) / 2^w = 2^s
◼ Number of lines in set = k
◼ Number of sets = v = 2^d
◼ Number of lines in cache = m = kv = k × 2^d
◼ Size of cache = k × 2^(d+w) words or bytes
◼ Size of tag = (s – d) bits
k-Way
Set
Associative
Cache
Organization
Example 4.2c for set associative
◼ Remember the example includes the following elements:
◼ The cache can hold 64 Kbytes, MM consists of 16 Mbytes
◼ Block size = Line size = 4 bytes, assume 2-way set associative

◼ Fig. 4.14 shows the set associative mapping and the address
format
◼ The main memory address length = s + w = 24 bits (2^(s+w) = 16 Mbytes
= 2^24 bytes), divided into:
◼ w = 2 bits for word identity (line = block = 4 bytes = 2^w = 2^2) and s =
24 – 2 = 22 bits (s + w = 24)
◼ Number of lines in a set = k = 2
◼ Number of sets v = 2^d = 16K lines / 2 (# of lines in a set) = 8K sets = 2^13
sets, so d = 13 (set number field)
◼ Tag = (22 – 13 = 9) bits, or = MM blocks / cache sets = 4M/8K (a small sketch of this address split follows)
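A minimal sketch of the 2-way set-associative split for Example 4.2c (9-bit tag, 13-bit set, 2-bit word); the address value is made up for illustration:

```python
WORD_BITS, SET_BITS, TAG_BITS = 2, 13, 9    # Example 4.2c: 24-bit address

def set_assoc_split(address):
    """Split a 24-bit address into (tag, set, word) for 2-way set associative."""
    word = address & ((1 << WORD_BITS) - 1)
    set_no = (address >> WORD_BITS) & ((1 << SET_BITS) - 1)
    tag = address >> (WORD_BITS + SET_BITS)
    return tag, set_no, word

# Block j maps to set i = j mod 2^13; within the set it may occupy either line.
addr = 0x16339C
tag, set_no, word = set_assoc_split(addr)
print(f"tag={tag:#x} set={set_no:#x} word={word}")
```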

In the extreme case v (# of sets) = m (# of lines in cache) and k = 1,
set-associative mapping reduces to direct mapping; for v = 1 and k = m,
it reduces to fully associative mapping
+ Varying Associativity Over Cache Size. Fig. 4.16 shows
results of one simulation of set-associative performance
as a function of cache size. (Study in text)
+
Replacement Algorithms
◼ Once the cache has been filled, when a new block is brought
into the cache, one of the existing blocks must be replaced

◼ For direct mapping there is only one possible line for any
particular block and no choice is possible

◼ For the associative and set-associative techniques, a


replacement algorithm is needed

◼ To achieve high speed, an algorithm must be implemented in


hardware
◼ A number of algorithms have been tried. The four most common
algorithms are described on the next page
+
The four most common replacement
algorithms are:
◼ Least recently used (LRU)
◼ Most effective one
◼ Replace the block in the set that has been in the cache longest with no
reference. For 2-way set associative, each line includes a USE bit; when a
line is referenced, its USE bit is set to 1 and the other line’s is set to 0 (the
line whose USE bit is 0 is the one replaced)
◼ Because of its simplicity of implementation, LRU is the most popular
replacement algorithm, and it gives the best hit ratio (a small sketch follows the list below)

◼ First-in-first-out (FIFO)
◼ Replace that block in the set that has been in the cache longest
◼ Easily implemented as a round-robin or circular buffer technique

◼ Least frequently used (LFU)


◼ Replace that block in the set that has the fewest references
◼ Could be implemented by associating a counter with each line

◼ Random: pick a line at random; gives slightly lower performance
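A minimal sketch of LRU bookkeeping for one set, assuming a simple recency-ordered list rather than the hardware USE bit described above; names and sizes are illustrative only:

```python
class LRUSet:
    """One cache set: keeps resident block tags ordered from LRU to MRU."""
    def __init__(self, num_lines):
        self.num_lines = num_lines
        self.tags = []                     # index 0 = least recently used

    def access(self, tag):
        """Reference a block; return True on hit, False on miss (with replace)."""
        if tag in self.tags:
            self.tags.remove(tag)          # hit: move tag to MRU position
            self.tags.append(tag)
            return True
        if len(self.tags) == self.num_lines:
            self.tags.pop(0)               # miss: evict the LRU line
        self.tags.append(tag)              # bring the new block in as MRU
        return False

# 2-way set: A and B are brought in, A is re-used, then C evicts the LRU block (B).
s = LRUSet(2)
for t in ["A", "B", "A", "C"]:
    print(t, "hit" if s.access(t) else "miss")
```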


Write Policy
◼ When a block that is resident in the cache is to be replaced, there are two cases to consider:
◼ If the old block in the cache has not been altered, then it may be overwritten with a new block without first writing out the old block
◼ If at least one write operation has been performed on a word in that line of the cache, then main memory must be updated by writing the line of cache out to the block of memory before bringing in the new block

◼ There are two problems to contend with:
◼ More than one device may have access to main memory
◼ A more complex problem occurs when multiple processors are attached to the same bus and each processor has its own local cache - if a word is altered in one cache it could conceivably invalidate a word in other caches
+ Write Policy:- Write Through and
Write Back
◼ Write through
◼ Simplest technique
◼ All write operations are made to main memory as well as to the cache
◼ The main disadvantage of this technique is that it generates substantial
memory traffic and may create a bottleneck

◼ Write back
◼ Minimizes memory writes; updates are made only in the cache
◼ When an update occurs, a dirty bit associated with the line is set
◼ If a block is replaced, it is written back to MM if the dirty bit is set (a small sketch follows)
◼ Problem: Portions of main memory are invalid and hence accesses by
I/O modules can be allowed only through the cache, and this
◼ Makes for complex circuitry and a potential bottleneck
◼ See example 4.3 in text
◼ Cache coherency (read in text)
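A minimal sketch contrasting the two write policies for a single cache line, assuming a toy memory model; the class and field names are illustrative, not from the text:

```python
class Line:
    def __init__(self, block_no, data):
        self.block_no, self.data, self.dirty = block_no, data, False

def write(line, word_index, value, memory, write_through):
    """Write one word in a cached line under either policy."""
    line.data[word_index] = value
    if write_through:
        memory[line.block_no][word_index] = value   # MM updated on every write
    else:
        line.dirty = True                           # write-back: defer, mark dirty

def evict(line, memory):
    """On replacement, write-back lines go to MM only if they were altered."""
    if line.dirty:
        memory[line.block_no] = list(line.data)
        line.dirty = False

memory = {7: [0, 0, 0, 0]}                 # one 4-word block of "main memory"
line = Line(7, list(memory[7]))
write(line, 2, 99, memory, write_through=False)
print(memory[7])                           # [0, 0, 0, 0]  (MM still stale)
evict(line, memory)
print(memory[7])                           # [0, 0, 99, 0] (updated at eviction)
```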
+
Line size
◼ When a block of data is retrieved and placed in the cache
not only the desired word but also some number of adjacent
words are retrieved

◼ As the block size increases more useful data are brought into
the cache. Two specific effects come into play:
◼ Larger blocks reduce the number of blocks that fit into a cache
◼ As a block becomes larger each additional word is farther from
the requested word

◼ As the block size increases the hit ratio will at first increase
because of the principle of locality

◼ The hit ratio will begin to decrease as the block becomes


bigger and the probability of using the newly fetched
information becomes less than the probability of reusing the
information that has to be replaced
+
Number of caches (Multilevel Caches)
◼ As logic density has increased, it has become possible to have a
cache on the same chip as the processor
◼ On-chip cache reduces the processor’s external bus activity and
speeds up execution time and increases overall performance
◼ When the requested instruction or data is found in the on-chip cache,
the bus access is eliminated
◼ On-chip cache accesses will complete appreciably faster than would
even zero-wait state bus cycles (bus cycles at full bus speed, with no waiting)
◼ During this period the bus is free to support other transfers

◼ Two-level cache:
◼ Internal cache designated as level 1 (L1)
◼ External cache designated as level 2 (L2)
◼ Potential savings due to the use of an L2 cache depends on the
hit rates in both the L1 and L2 caches
◼ The use of multilevel caches complicates all of the design issues
related to caches, including size, replacement algorithm, write
policy, line size, and set size (continued)
+
Number of caches (Multilevel Caches)
◼ Question: is the inclusion of an off-chip cache desirable? Yes, and
most designs include both on-chip and off-chip caches
◼ If there is no off-chip cache (L2), the processor will access DRAM or ROM
across the bus whenever the requested word is not in the on-chip cache (L1)
◼ Typically the bus speed is slow and the MM access time is slow
◼ On the other hand, if an L2 (static RAM) cache is used, the missing
information can be quickly retrieved

◼ Two features note-worthy for multilevel caches:


◼ Many designers do not use the system bus for transfers between
the L2 cache and the CPU (a separate bus is used, freeing the system bus)
◼ A number of CPUs incorporate the L2 cache on-chip, improving performance

◼ Fig. 4.17 shows the results of one simulation of two-level


cache performance as a function of cache size. (study in text)
Hit Ratio (L1 & L2)
For 8 Kbyte and 16Kbyte L1
+
Unified Versus Split Caches
◼ Has become common to split cache:
◼ One dedicated to instructions and One dedicated to data
◼ Both exist at the same level, typically as two L1 caches

◼ Advantages of unified cache:


◼ Higher hit rate, why:-
◼ Balances load of instruction and data fetches automatically
◼ Only one cache needs to be designed and implemented

◼ Trend is toward split caches at the L1 and unified caches for


higher levels

◼ Advantages of split cache:


◼ Eliminates cache contention between instruction fetch/decode
unit and execution unit
◼ Important in pipelining
Pentium
4
Cache

Table 4.4 Intel Cache Evolution


Pentium 4 Block Diagram (No)
Pentium 4 Cache Operating Modes (No)

Note: CD = 0; NW = 1 is an invalid combination.

Table 4.5 Pentium 4 Cache Operating Modes


ARM Cache Features

Table 4.6 ARM Cache Features


ARM Cache and Write Buffer Organization (No)
+ Summary
Chapter 4 Cache Memory
◼ Characteristics of Memory Systems
◼ Location
◼ Capacity
◼ Unit of transfer
◼ Memory Hierarchy
◼ How much?
◼ How fast?
◼ How expensive?
◼ Cache memory principles
◼ Elements of cache design
◼ Cache addresses
◼ Cache size
◼ Mapping function
◼ Replacement algorithms
◼ Write policy
◼ Line size
◼ Number of caches
◼ Pentium 4 cache organization
◼ ARM cache organization
