William Stallings
Computer Organization
and Architecture
10th Edition

© 2016 Pearson Education, Inc., Hoboken, NJ. All rights reserved.

Chapter 4
Cache Memory
Table 4.1 Key Characteristics of Computer Memory Systems

Location
  Internal (e.g., processor registers, cache, main memory)
  External (e.g., optical disks, magnetic disks, tapes)
Capacity
  Number of words
  Number of bytes
Unit of Transfer
  Word
  Block
Access Method
  Sequential
  Direct
  Random
  Associative
Performance
  Access time
  Cycle time
  Transfer rate
Physical Type
  Semiconductor
  Magnetic
  Optical
  Magneto-optical
Physical Characteristics
  Volatile/nonvolatile
  Erasable/nonerasable
Organization
  Memory modules


Characteristics of Memory Systems

• Location
  • Refers to whether memory is internal or external to the computer
  • Internal memory is often equated with main memory
  • Processor requires its own local memory, in the form of registers
  • Cache is another form of internal memory
  • External memory consists of peripheral storage devices that are accessible to the processor via I/O controllers

• Capacity
  • Memory is typically expressed in terms of bytes

• Unit of transfer
  • For internal memory the unit of transfer is equal to the number of electrical lines into and out of the memory module


Method of Accessing Units of Data

Sequential access
• Memory is organized into units of data called records
• Access must be made in a specific linear sequence
• Access time is variable

Direct access
• Involves a shared read-write mechanism
• Individual blocks or records have a unique address based on physical location
• Access time is variable

Random access
• Each addressable location in memory has a unique, physically wired-in addressing mechanism
• The time to access a given location is independent of the sequence of prior accesses and is constant
• Any location can be selected at random and directly addressed and accessed
• Main memory and some cache systems are random access

Associative
• A word is retrieved based on a portion of its contents rather than its address
• Each location has its own addressing mechanism and retrieval time is constant independent of location or prior access patterns
• Cache memories may employ associative access


Capacity and Performance: the two most important characteristics of memory

Three performance parameters are used:

Access time (latency)
• For random-access memory, the time it takes to perform a read or write operation
• For non-random-access memory, the time it takes to position the read-write mechanism at the desired location

Memory cycle time
• Access time plus any additional time required before a second access can commence
• Additional time may be required for transients to die out on signal lines or to regenerate data if they are read destructively
• Concerned with the system bus, not the processor

Transfer rate
• The rate at which data can be transferred into or out of a memory unit
• For random-access memory it is equal to 1/(cycle time)
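For non-random-access memory, these parameters are tied together by a standard relation (stated here as a worked equation; it is not on the slide itself):

$$T_n = T_A + \frac{n}{R}$$

where $T_n$ is the average time to read or write $n$ bits, $T_A$ is the average access time, and $R$ is the transfer rate in bits per second.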


Memory

• The most common forms are:
  • Semiconductor memory
  • Magnetic surface memory
  • Optical
  • Magneto-optical

• Several physical characteristics of data storage are important:
  • Volatile memory
    • Information decays naturally or is lost when electrical power is switched off
  • Nonvolatile memory
    • Once recorded, information remains without deterioration until deliberately changed
    • No electrical power is needed to retain information
  • Magnetic-surface memories
    • Are nonvolatile
  • Semiconductor memory
    • May be either volatile or nonvolatile
  • Nonerasable memory
    • Cannot be altered, except by destroying the storage unit
    • Semiconductor memory of this type is known as read-only memory (ROM)

• For random-access memory the organization is a key design issue
  • Organization refers to the physical arrangement of bits to form words


Memory Hierarchy

• Design constraints on a computer’s memory can be summed up by three questions:
  • How much? How fast? How expensive?

• There is a trade-off among capacity, access time, and cost
  • Faster access time, greater cost per bit
  • Greater capacity, smaller cost per bit
  • Greater capacity, slower access time

• The way out of the memory dilemma is not to rely on a single memory component or technology, but to employ a memory hierarchy


Figure 4.1 The Memory Hierarchy



Figure 4.2 Performance of a Simple Two-Level Memory
(Plot of average access time against the fraction of accesses involving only Level 1, i.e. the hit ratio: the curve falls from T1 + T2 at a hit ratio of 0 toward T1 at a hit ratio of 1.)
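The shape of the curve follows directly from the two-level average access time (a standard relation, written out explicitly here):

$$T_s = H \, T_1 + (1 - H)(T_1 + T_2)$$

where $H$ is the hit ratio, $T_1$ is the access time of level 1, and $T_2$ is the access time of level 2; on a miss the word must first be brought into level 1 and then accessed, hence the $T_1 + T_2$ term.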


Memory

• The use of three levels exploits the fact that semiconductor memory comes in a variety of types, which differ in speed and cost

• Data are stored more permanently on external mass storage devices

• External, nonvolatile memory is also referred to as secondary memory or auxiliary memory

• Disk cache
  • A portion of main memory can be used as a buffer to temporarily hold data that are to be read out to disk
  • A few large transfers of data can be used instead of many small transfers of data
  • Data can be retrieved rapidly from the software cache rather than slowly from the disk


Figure 4.3 Cache and Main Memory
(a) Single cache: word transfers between the CPU and the fast cache; block transfers between the cache and the slow main memory.
(b) Three-level cache organization: CPU, L1 cache, L2 cache, L3 cache, and main memory, ordered from fastest to slowest.


Figure 4.4 Cache/Main-Memory Structure
(a) Cache: C lines numbered 0 to C–1, each holding a tag and a block of K words.
(b) Main memory: 2^n addressable words, viewed as M blocks of K words each.


Figure 4.5 Cache Read Operation
(Flowchart: receive address RA from the CPU; if the block containing RA is in the cache, fetch the RA word and deliver it to the CPU; otherwise access main memory for the block containing RA, allocate a cache line for it, load the block into the line, and deliver the RA word to the CPU.)
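The same flow, written out as a minimal sketch for a direct-mapped cache; all names here (BLOCK_WORDS, NUM_LINES, main_memory, read) are illustrative stand-ins, not anything defined on the slides:

    BLOCK_WORDS = 4        # K words per block (illustrative)
    NUM_LINES = 16         # C lines in the cache (illustrative)

    main_memory = {}       # word address -> value, a stand-in for real memory
    cache = [None] * NUM_LINES   # each entry: (tag, [words]) or None

    def read(ra):
        block = ra // BLOCK_WORDS        # block number containing RA
        line = block % NUM_LINES         # direct mapping: one candidate line
        tag = block // NUM_LINES
        entry = cache[line]
        if entry is not None and entry[0] == tag:      # hit: deliver the word
            return entry[1][ra % BLOCK_WORDS]
        # miss: fetch the whole block, allocate the line, deliver the word
        words = [main_memory.get(block * BLOCK_WORDS + i, 0)
                 for i in range(BLOCK_WORDS)]
        cache[line] = (tag, words)
        return words[ra % BLOCK_WORDS]

Note that on a miss the entire block is loaded, not just the requested word; this is what makes the principle of locality pay off on subsequent accesses.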


Figure 4.6 Typical Cache Organization
(The cache connects to the processor over address, control, and data lines, and to the system bus through an address buffer and a data buffer; control lines also run between the processor, the cache, and the bus.)


Table 4.2 Elements of Cache Design

Cache Addresses
  Logical
  Physical
Cache Size
Mapping Function
  Direct
  Associative
  Set associative
Replacement Algorithm
  Least recently used (LRU)
  First in first out (FIFO)
  Least frequently used (LFU)
  Random
Write Policy
  Write through
  Write back
Line Size
Number of Caches
  Single or two level
  Unified or split
Cache Addresses
Virtual Memory

• Virtual memory
  • Facility that allows programs to address memory from a logical point of view, without regard to the amount of main memory physically available
  • When used, the address fields of machine instructions contain virtual addresses
  • For reads from and writes to main memory, a hardware memory management unit (MMU) translates each virtual address into a physical address in main memory


Figure 4.7 Logical and Physical Caches
(a) Logical cache: the cache sits between the processor and the MMU, so data are stored and looked up using logical addresses.
(b) Physical cache: the cache sits between the MMU and main memory, so data are stored and looked up using physical addresses.


Table 4.3 Cache Sizes of Some Processors
(Table can be found on page 134 in the textbook.)

Processor | Type | Year of Introduction | L1 Cache(a) | L2 Cache | L3 Cache
IBM 360/85 | Mainframe | 1968 | 16 to 32 kB | — | —
PDP-11/70 | Minicomputer | 1975 | 1 kB | — | —
VAX 11/780 | Minicomputer | 1978 | 16 kB | — | —
IBM 3033 | Mainframe | 1978 | 64 kB | — | —
IBM 3090 | Mainframe | 1985 | 128 to 256 kB | — | —
Intel 80486 | PC | 1989 | 8 kB | — | —
Pentium | PC | 1993 | 8 kB/8 kB | 256 to 512 kB | —
PowerPC 601 | PC | 1993 | 32 kB | — | —
PowerPC 620 | PC | 1996 | 32 kB/32 kB | — | —
PowerPC G4 | PC/server | 1999 | 32 kB/32 kB | 256 kB to 1 MB | 2 MB
IBM S/390 G6 | Mainframe | 1999 | 256 kB | 8 MB | —
Pentium 4 | PC/server | 2000 | 8 kB/8 kB | 256 kB | —
IBM SP | High-end server/supercomputer | 2000 | 64 kB/32 kB | 8 MB | —
CRAY MTA(b) | Supercomputer | 2000 | 8 kB | 2 MB | —
Itanium | PC/server | 2001 | 16 kB/16 kB | 96 kB | 4 MB
Itanium 2 | PC/server | 2002 | 32 kB | 256 kB | 6 MB
IBM POWER5 | High-end server | 2003 | 64 kB | 1.9 MB | 36 MB
CRAY XD-1 | Supercomputer | 2004 | 64 kB/64 kB | 1 MB | —
IBM POWER6 | PC/server | 2007 | 64 kB/64 kB | 4 MB | 32 MB
IBM z10 | Mainframe | 2008 | 64 kB/128 kB | 3 MB | 24 to 48 MB
Intel Core i7 EE 990 | Workstation/server | 2011 | 6 × 32 kB/32 kB | 1.5 MB | 12 MB
IBM zEnterprise 196 | Mainframe/server | 2011 | 24 × 64 kB/128 kB | 24 × 1.5 MB | 24 MB L3, 192 MB L4

(a) Two values separated by a slash refer to instruction and data caches.
(b) Both caches are instruction only; no data caches.
Mapping Function

• Because there are fewer cache lines than main memory blocks, an algorithm is needed for mapping main memory blocks into cache lines

• Three techniques can be used:

Direct
• The simplest technique
• Maps each block of main memory into only one possible cache line

Associative
• Permits each main memory block to be loaded into any line of the cache
• The cache control logic interprets a memory address simply as a Tag and a Word field
• To determine whether a block is in the cache, the cache control logic must simultaneously examine every line’s Tag for a match

Set Associative
• A compromise that exhibits the strengths of both the direct and associative approaches while reducing their disadvantages


Figure 4.8 Mapping From Main Memory to Cache: Direct and Associative
(a) Direct mapping: each of the first m blocks of main memory (equal to the size of the cache) maps to the cache line with the same number; b = length of block in bits, t = length of tag in bits.
(b) Associative mapping: any one block of main memory can map into any line of the cache.


Figure 4.9 Direct-Mapping Cache Organization
(The s+w bit memory address is interpreted as a Tag of s–r bits, a Line number of r bits, and a Word of w bits; the Line field selects one cache line, whose stored tag is compared with the address Tag to signal a hit or a miss.)


Figure 4.10 Direct Mapping Example
(A 16-MByte main memory and a 16K-line cache: the 24-bit main memory address is divided into an 8-bit tag, a 14-bit line number, and a 2-bit word field. For example, the address 000101100011001110011100 maps to line 0CE7 with tag 16 and data FEDCBA98. Memory address values are in binary; other values are in hexadecimal.)
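As a quick check of the figure’s field widths, here is a minimal sketch of the direct-mapping address split; the constants mirror the figure, while the function name is invented:

    TAG_BITS, LINE_BITS, WORD_BITS = 8, 14, 2

    def split_direct(addr):
        # peel the fields off a 24-bit address, low-order bits first
        word = addr & ((1 << WORD_BITS) - 1)
        line = (addr >> WORD_BITS) & ((1 << LINE_BITS) - 1)
        tag = addr >> (WORD_BITS + LINE_BITS)
        return tag, line, word

    # 0x16339C is 000101100011001110011100 in binary; it splits into
    # tag 0x16, line 0x0CE7, word 0, matching the figure.
    print(split_direct(0x16339C))   # (22, 3303, 0) = (0x16, 0x0CE7, 0)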


Victim Cache

• Originally proposed as an approach to reduce the conflict misses of direct-mapped caches without affecting their fast access time

• Fully associative cache

• Typical size is 4 to 16 cache lines

• Resides between the direct-mapped L1 cache and the next level of memory


Figure 4.11 Fully Associative Cache Organization
(The s+w bit memory address is interpreted as a Tag of s bits and a Word of w bits; the Tag is compared simultaneously with the tag of every cache line to signal a hit or a miss.)


Figure 4.12 Associative Mapping Example
(A 16-MByte main memory and a 16K-line cache: the 24-bit address is divided into a 22-bit tag and a 2-bit word field. For example, the address 000101100011001110011100 has tag 058CE7 and data FEDCBA98, which may reside in any cache line. Memory address values are in binary; other values are in hexadecimal.)
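A minimal sketch of the associative lookup, with a sequential scan standing in for the hardware’s parallel tag comparison (cache and lookup are invented names):

    WORD_BITS = 2

    cache = []   # list of (tag, [four words]); any block may occupy any line

    def lookup(addr):
        tag = addr >> WORD_BITS
        word = addr & ((1 << WORD_BITS) - 1)
        for stored_tag, words in cache:   # hardware compares all tags at once
            if stored_tag == tag:
                return words[word]        # hit
        return None                       # miss

    cache.append((0x058CE7, [0xFEDCBA98, 0, 0, 0]))
    print(hex(lookup(0x16339C)))          # 0xfedcba98: tag 058CE7, word 0

The cost of this flexibility is exactly what the slide states: every line’s tag must be examined, which in hardware means one comparator per line.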


Set Associative Mapping

• Compromise that exhibits the strengths of both the direct and associative approaches while reducing their disadvantages

• Cache consists of a number of sets

• Each set contains a number of lines

• A given block maps to any line in a given set

• e.g., with 2 lines per set:
  • 2-way set associative mapping
  • A given block can be in one of 2 lines in only one set


Figure 4.13 Mapping From Main Memory to Cache: k-Way Set Associative
(a) Viewed as v associative-mapped caches: each of the first v blocks of main memory (equal to the number of sets) maps to a set of k lines.
(b) Viewed as k direct-mapped caches: each "way" holds v lines, one per set.


Figure 4.14 k-Way Set Associative Cache Organization
(The s+w bit memory address is interpreted as a Tag of s–d bits, a Set number of d bits, and a Word of w bits; the Set field selects one set and the Tag is compared with the k tags within that set to signal a hit or a miss.)


Figure 4.15 Two-Way Set Associative Mapping Example
(A 16-MByte main memory and a 16K-line cache organized as 8K sets of two lines each: the 24-bit address is divided into a 9-bit tag, a 13-bit set number, and a 2-bit word field. For example, the address 000101100011001110011100 maps to set 0CE7 with tag 02C and data FEDCBA98. Memory address values are in binary; other values are in hexadecimal.)
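A minimal sketch of the two-way lookup using the figure’s 9/13/2 field widths; sets and lookup are invented names:

    TAG_BITS, SET_BITS, WORD_BITS = 9, 13, 2
    WAYS = 2

    # each set holds up to WAYS (tag, words) pairs
    sets = [[] for _ in range(1 << SET_BITS)]

    def lookup(addr):
        word = addr & ((1 << WORD_BITS) - 1)
        set_no = (addr >> WORD_BITS) & ((1 << SET_BITS) - 1)
        tag = addr >> (WORD_BITS + SET_BITS)
        for stored_tag, words in sets[set_no]:   # at most WAYS comparisons
            if stored_tag == tag:
                return words[word]               # hit
        return None     # miss; a replacement policy must pick a victim line

Only the k tags within one set are compared, so the hardware needs k comparators rather than one per line as in a fully associative cache.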


Figure 4.16 Varying Associativity over Cache Size
(Hit ratio, from 0.0 to 1.0, plotted against cache size from 1k to 1M bytes for direct-mapped and 2-, 4-, 8-, and 16-way set-associative organizations.)


Replacement Algorithms

• Once the cache has been filled, when a new block is brought into the cache, one of the existing blocks must be replaced

• For direct mapping there is only one possible line for any particular block and no choice is possible

• For the associative and set-associative techniques a replacement algorithm is needed

• To achieve high speed, an algorithm must be implemented in hardware


The most common replacement algorithms are:

• Least recently used (LRU)
  • Most effective
  • Replace that block in the set that has been in the cache longest with no reference to it
  • Because of its simplicity of implementation, LRU is the most popular replacement algorithm (a sketch follows this list)

• First-in-first-out (FIFO)
  • Replace that block in the set that has been in the cache longest
  • Easily implemented as a round-robin or circular buffer technique

• Least frequently used (LFU)
  • Replace that block in the set that has experienced the fewest references
  • Could be implemented by associating a counter with each line
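A hedged sketch of LRU for one cache set, using an OrderedDict as the recency list (LRUSet and load_block are invented names; a real cache tracks recency with hardware use bits, not a software dictionary):

    from collections import OrderedDict

    class LRUSet:
        def __init__(self, ways):
            self.ways = ways
            self.lines = OrderedDict()         # tag -> block, oldest first

        def access(self, tag, load_block):
            if tag in self.lines:              # hit: mark most recently used
                self.lines.move_to_end(tag)
                return self.lines[tag]
            if len(self.lines) >= self.ways:   # set full: evict the LRU line
                self.lines.popitem(last=False)
            self.lines[tag] = load_block(tag)  # miss: bring the block in
            return self.lines[tag]

FIFO differs only in skipping the move_to_end step, so eviction order is arrival order; LFU would keep a reference counter per line instead of a recency order.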



Write Policy

When a block that is resident in the cache is to be replaced there are two cases to consider:
• If the old block in the cache has not been altered then it may be overwritten with a new block without first writing out the old block
• If at least one write operation has been performed on a word in that line of the cache then main memory must be updated by writing the line of cache out to the block of memory before bringing in the new block

There are two problems to contend with:
• More than one device may have access to main memory
• A more complex problem occurs when multiple processors are attached to the same bus and each processor has its own local cache; if a word is altered in one cache it could conceivably invalidate a word in other caches


Write Through and Write Back

• Write through
  • Simplest technique
  • All write operations are made to main memory as well as to the cache
  • The main disadvantage of this technique is that it generates substantial memory traffic and may create a bottleneck

• Write back
  • Minimizes memory writes
  • Updates are made only in the cache
  • Portions of main memory are invalid and hence accesses by I/O modules can be allowed only through the cache
  • This makes for complex circuitry and a potential bottleneck
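The difference between the two policies, as a minimal sketch with a toy one-line cache (all names are invented; a real cache operates on whole lines over a bus, not Python dicts):

    main_memory = {}

    class WriteThroughLine:
        def __init__(self):
            self.data = {}
        def write(self, addr, value):
            self.data[addr] = value
            main_memory[addr] = value    # every write also goes to memory

    class WriteBackLine:
        def __init__(self):
            self.data = {}
            self.dirty = False
        def write(self, addr, value):
            self.data[addr] = value      # update only the cache...
            self.dirty = True            # ...and remember the line is dirty
        def evict(self):
            if self.dirty:               # write back only on replacement
                main_memory.update(self.data)
                self.dirty = False

Write through keeps memory always current at the cost of a memory access per write; write back touches memory only when a dirty line is replaced.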


Line Size

• When a block of data is retrieved and placed in the cache, not only the desired word but also some number of adjacent words are retrieved

• As the block size increases, more useful data are brought into the cache

• As the block size increases, the hit ratio will at first increase because of the principle of locality

• The hit ratio will begin to decrease as the block becomes bigger and the probability of using the newly fetched information becomes less than the probability of reusing the information that has to be replaced

• Two specific effects come into play:
  • Larger blocks reduce the number of blocks that fit into a cache
  • As a block becomes larger, each additional word is farther from the requested word


Multilevel Caches

• As logic density has increased it has become possible to have a cache on the same chip as the processor

• The on-chip cache reduces the processor’s external bus activity, speeding up execution and increasing overall system performance
  • When the requested instruction or data is found in the on-chip cache, the bus access is eliminated
  • On-chip cache accesses will complete appreciably faster than would even zero-wait state bus cycles
  • During this period the bus is free to support other transfers

• Two-level cache:
  • Internal cache designated as level 1 (L1)
  • External cache designated as level 2 (L2)

• Potential savings due to the use of an L2 cache depend on the hit rates in both the L1 and L2 caches

• The use of multilevel caches complicates all of the design issues related to caches, including size, replacement algorithm, and write policy
Figure 4.17 Total Hit Ratio (L1 and L2) for 8-Kbyte and 16-Kbyte L1
(Total hit ratio, from 0.78 to 0.98, plotted against L2 cache size from 1k to 2M bytes, for L1 = 8k and L1 = 16k.)
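One way to read the figure is through a straightforward extension of the Figure 4.2 relation to three levels; this is an illustrative derivation under stated assumptions, not a formula from the slides:

$$T_s = H_1 T_1 + (H_2 - H_1)(T_1 + T_2) + (1 - H_2)(T_1 + T_2 + T_m)$$

where $H_1$ is the L1 hit ratio, $H_2$ is the total (L1 and L2) hit ratio plotted in the figure, and $T_1$, $T_2$, $T_m$ are the access times of L1, L2, and main memory. The larger the gap between $H_2$ and $H_1$, the more misses the L2 cache absorbs before they reach main memory.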


Unified Versus Split Caches

• It has become common to split the cache:
  • One dedicated to instructions
  • One dedicated to data
  • Both exist at the same level, typically as two L1 caches

• Advantages of unified cache:
  • Higher hit rate
  • Balances load of instruction and data fetches automatically
  • Only one cache needs to be designed and implemented

• Advantages of split cache:
  • Eliminates cache contention between instruction fetch/decode unit and execution unit
  • Important in pipelining

• Trend is toward split caches at the L1 level and unified caches for higher levels


Table 4.4 Intel Cache Evolution
(Table is on page 150 in the textbook.)

Problem | Solution | Processor on which Feature First Appears
External memory slower than the system bus. | Add external cache using faster memory technology. | 386
Increased processor speed results in external bus becoming a bottleneck for cache access. | Move external cache on-chip, operating at the same speed as the processor. | 486
Internal cache is rather small, due to limited space on chip. | Add external L2 cache using faster technology than main memory. | 486
Contention occurs when both the Instruction Prefetcher and the Execution Unit simultaneously require access to the cache. In that case, the Prefetcher is stalled while the Execution Unit’s data access takes place. | Create separate data and instruction caches. | Pentium
Increased processor speed results in external bus becoming a bottleneck for L2 cache access. | Create separate back-side bus that runs at higher speed than the main (front-side) external bus. The BSB is dedicated to the L2 cache. | Pentium Pro
(same problem as above) | Move L2 cache on to the processor chip. | Pentium II
Some applications deal with massive databases and must have rapid access to large amounts of data. The on-chip caches are too small. | Add external L3 cache. | Pentium III
(same problem as above) | Move L3 cache on-chip. | Pentium 4
Figure 4.18 Pentium 4 Block Diagram
(The instruction fetch/decode unit and out-of-order execution logic work with the L1 instruction cache, which holds 12K µops. The integer and FP register files feed the execution units: load address, store address, simple and complex integer ALUs, and FP/MMX and FP move units. The L1 data cache (16 kB) connects over a 256-bit path to the L2 cache (512 kB), which connects to the L3 cache (1 MB) and, over a 64-bit path, to the system bus.)


Table 4.5 Pentium 4 Cache Operating Modes

Control Bits | Operating Mode
CD | NW | Cache Fills | Write Throughs | Invalidates
0 | 0 | Enabled | Enabled | Enabled
1 | 0 | Disabled | Enabled | Enabled
1 | 1 | Disabled | Disabled | Disabled

Note: CD = 0; NW = 1 is an invalid combination.


Summary: Cache Memory
Chapter 4

• Computer memory system overview
  • Characteristics of memory systems
  • Memory hierarchy

• Cache memory principles

• Elements of cache design
  • Cache addresses
  • Cache size
  • Mapping function
  • Replacement algorithms
  • Write policy
  • Line size
  • Number of caches

• Pentium 4 cache organization