The following sources were used in preparing these slides: Lecture 14 of the course Computer Architecture (ECE 201) by Professor Mike Schulte; Lecture 4 from William Stallings, Computer Organization and Architecture, 6th edition, Prentice Hall, July 15, 2002; and Lecture 6 of the course Systems Architecture II by Professors Jeremy R. Johnson and Anatole D. Ruslanov. Some figures are from Computer Organization and Design: The Hardware/Software Approach, Third Edition, by David Patterson and John Hennessy, and are copyrighted material (Copyright 2004 Morgan Kaufmann Publishers, Inc. All rights reserved).
[Figure: processor-memory performance gap, 1980-2000. Processor performance grows at 60%/yr (Moore's Law, 2X/1.5 yr) while DRAM performance grows at 9%/yr (2X/10 yrs); the gap grows roughly 50% per year.]
Memory Hierarchy
Memory technology    $ per GB in 2004
SRAM                 $4000 - $10,000
DRAM                 $100 - $200
Magnetic disk        $0.50 - $2
[Figure: the memory hierarchy. The processor sits at the top; Level 1, Level 2, ..., Level n memories lie below it, with access time increasing with distance from the CPU.]
SRAM:
- Value is stored on a pair of inverting gates
- Very fast, but takes up more space than DRAM (4 to 6 transistors per bit)

DRAM:
- Value is stored as a charge on a capacitor (must be refreshed)
- Very small, but slower than SRAM (by a factor of 5 to 10)
Dynamic RAM
- Bits stored as charge in capacitors; the level of charge determines the value
- Charges leak, so cells need refreshing even when powered
- Simpler construction, smaller per bit, less expensive
- Needs refresh circuits
- Slower; used for main memory
- Essentially analogue
DRAM Operation
- The address line is activated when the bit is read or written; the transistor switch closes (current flows)

Write:
- Apply voltage to the bit line: high for 1, low for 0
- Then signal the address line, transferring charge to the capacitor

Read:
- The address line is selected and the transistor turns on
- Charge from the capacitor is fed via the bit line to a sense amplifier, which compares it with a reference value to determine 0 or 1
- The capacitor charge must then be restored (the read is destructive)
Static RAM
- Bits stored as on/off switches (flip-flops)
- No charges to leak; no refreshing needed while powered
- More complex construction, larger per bit, more expensive
- Does not need refresh circuits
- Faster; used for cache
- Digital

State 0:
- C2 high, C1 low
- T2 and T3 off, T1 and T4 on

Addressing:
- The address line operates transistors T5 and T6 as switches
- Write: apply the value to line B and its complement to line B̄
- Read: the value appears on line B
SRAM vs DRAM
- Both are volatile: power is needed to preserve the data
- Dynamic cell: simpler to build, smaller, denser, less expensive; needs refresh; suits larger memory units (main memory)
- Static cell: faster; suits cache
Organisation in detail
- A 16-Mbit chip can be organised as 1M 16-bit words
- A one-bit-per-chip system uses 16 chips of 1 Mbit each, with bit 1 of each word in chip 1, and so on
- A 16-Mbit chip can instead be organised as a 2048 x 2048 x 4-bit array
  - This reduces the number of address pins: the row address and column address are multiplexed over 11 pins (2^11 = 2048)
  - Adding one more pin doubles the range of values in each dimension, so capacity quadruples
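As a quick illustration of the multiplexed addressing above, the following sketch (Python, my own illustration rather than anything from the cited lectures) splits a full cell address into the row and column halves that share the 11 address pins:

```python
# Sketch: how an 11-pin multiplexed address selects a cell
# in a 2048 x 2048 DRAM array.

ROWS = COLS = 2048          # 2^11 rows and 2^11 columns
ADDR_PINS = 11              # log2(2048) pins, reused for row then column

def split_address(addr):
    """Split a full 22-bit cell address into the row and column values
    driven onto the same 11 pins on successive strobes (RAS, then CAS)."""
    assert 0 <= addr < ROWS * COLS
    row = addr // COLS      # sent first, latched by the row-address strobe
    col = addr % COLS       # sent second, latched by the column-address strobe
    return row, col

print(split_address(5000))   # (2, 904): 5000 = 2*2048 + 904
```

With 22 cells' worth of address space served by 11 pins, the pin count stays low at the cost of two address-transfer phases per access.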
Refreshing
- A refresh circuit is included on the chip
- It disables the chip, counts through the rows, and reads and writes back each one
- This takes time and slows the apparent performance
[Figure: block transfer between the processor and the upper and lower memory levels.]
- Miss: the data must be retrieved from a block in the lower level (Block Y)
- Miss Rate = 1 - Hit Rate
- Miss Penalty: time to replace a block in the upper level + time to deliver the block to the processor
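These definitions combine into the standard average memory access time formula, AMAT = hit time + miss rate x miss penalty. A minimal sketch (Python; the numbers are illustrative, not from the slides):

```python
# Average memory access time follows directly from the hit/miss definitions.
def amat(hit_time, miss_rate, miss_penalty):
    # hit_time and miss_penalty in cycles, miss_rate as a fraction
    return hit_time + miss_rate * miss_penalty

# Illustrative numbers: 1-cycle hit, 5% miss rate, 40-cycle miss penalty.
print(amat(1, 0.05, 40))   # 3.0 cycles on average
```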
[Figure: levels of the memory hierarchy: the processor (control, datapath, registers), on-chip cache, second-level cache (SRAM), main memory (DRAM), and secondary storage (disk). Capacity grows from tens of registers through K-scale caches to M-scale memories and beyond, while speed falls at each level.]
Definitions
- Upper level: memory closer to the processor
- Block: the minimum unit that is either present or not present
- Block address: the location of the block in memory
- Hit: the data is found in the desired location
- Hit time: the time to access the upper level
- Miss rate: the fraction of accesses for which the item is not found in the upper level
Cache
- A small amount of fast memory
- Sits between normal main memory and the CPU
- May be located on the CPU chip or module
Cache Design
- Size
- Mapping function
- Replacement algorithm
- Write policy
- Block size
- Number of caches
[Figure: classic five-stage pipelined datapath (IF/ID, ID/EX, EX/MEM, MEM/WB) with separate instruction (I-$) and data (D-$) caches feeding the register file, ALU, and data memory.]
Cache/memory structure

[Figure: direct-mapped cache. Memory addresses with the same low-order bits (e.g. 00001, 00101, 01001, 11001, 11101) map to the same cache entry. For a 32-bit address, the upper 20 bits form the tag and the next 10 bits the index; on a hit the selected data is driven out.]
A valid bit indicates whether an entry contains a valid address. The number of tag bits is the address size minus the index and byte-offset bits: e.g. 32 - (10 + 2) = 20.
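The field split described above can be sketched in Python (the 10-bit index and 2-bit byte offset match the figure; the address value is a hypothetical example of mine):

```python
# Decompose a 32-bit byte address for a direct-mapped cache with
# 1024 one-word blocks: 2-bit byte offset, 10-bit index, 20-bit tag.
INDEX_BITS = 10
OFFSET_BITS = 2

def decompose(addr):
    offset = addr & ((1 << OFFSET_BITS) - 1)
    index = (addr >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)
    tag = addr >> (OFFSET_BITS + INDEX_BITS)
    return tag, index, offset

tag, index, offset = decompose(0x12345678)
print(hex(tag), index, offset)   # 0x12345 414 0
```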
Example
32-word memory, 8-word cache (the addresses below are word addresses).

Initial cache state (all entries invalid):
Index: 000 001 010 011 100 101 110 111

Address  Binary  Cache block  Hit or miss
22       10110   110          miss
26       11010   010          miss
22       10110   110          hit
26       11010   010          hit
16       10000   000          miss
3        00011   011          miss
16       10000   000          hit
18       10010   010          miss (replaces 26)
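A short Python sketch (my own, not part of the original slides) reproduces this trace for the 8-block direct-mapped cache:

```python
# Direct-mapped cache simulation for the worked example: 32-word memory,
# 8-block cache, one word per block. Index = address mod 8, tag = address / 8.
def simulate(addresses, num_blocks=8):
    valid = [False] * num_blocks
    tags = [None] * num_blocks
    results = []
    for addr in addresses:
        index = addr % num_blocks
        tag = addr // num_blocks
        if valid[index] and tags[index] == tag:
            results.append("hit")
        else:
            results.append("miss")
            valid[index], tags[index] = True, tag
    return results

trace = [22, 26, 22, 26, 16, 3, 16, 18]
print(simulate(trace))
# ['miss', 'miss', 'hit', 'hit', 'miss', 'miss', 'hit', 'miss']
```

The final miss occurs because 18 and 26 share index 010, so 18 evicts 26.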
Example - Bits in Cache
How many total bits are required for a direct-mapped cache with 16 KB of data and 4-word blocks, assuming a 32-bit address?
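A worked sketch of the standard calculation (assuming 32-bit words and one valid bit per block; the answer itself is not spelled out on the slide):

```python
# 16 KB of data in 4-word blocks under a 32-bit address.
ADDRESS_BITS = 32
data_bits_per_block = 4 * 32                          # 128 data bits per block
num_blocks = (16 * 1024 * 8) // data_bits_per_block   # 1024 blocks
index_bits = num_blocks.bit_length() - 1              # 10
offset_bits = 2 + 2                   # 2 bits word-in-block + 2 bits byte-in-word
tag_bits = ADDRESS_BITS - index_bits - offset_bits    # 18
bits_per_entry = data_bits_per_block + tag_bits + 1   # +1 valid bit = 147
total_bits = num_blocks * bits_per_entry
print(total_bits)   # 150528 bits, i.e. 147 Kbits, about 18.4 KB
```

So the cache needs roughly 1.15 times as many bits as the 16 KB of data it holds.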
In part (a), the miss penalty is 1 + (4 x 15) + (4 x 1) = 65 clock cycles, giving (4 x 4)/65 = 0.25 bytes per clock cycle.
In part (b), the miss penalty is 1 + (2 x 15) + (2 x 1) = 33 clock cycles, giving (4 x 4)/33 = 0.48 bytes per clock cycle (assuming a memory width of two words); the wider bus may increase cache access time.
In part (c), the miss penalty is 1 + (1 x 15) + (4 x 1) = 20 clock cycles, giving (4 x 4)/20 = 0.80 bytes per clock cycle (assuming 4 interleaved banks).
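The three miss penalties and bandwidths can be checked with a small Python sketch (assuming, as in the text above, 1 cycle to send the address, 15 cycles per DRAM access, and 1 cycle per word transferred, with a 4-word block of 16 bytes):

```python
# Miss penalty = address send + DRAM accesses + word transfers (in cycles).
def penalty(accesses, transfers):
    return 1 + accesses * 15 + transfers * 1

one_word_wide = penalty(4, 4)        # (a) 1 + 60 + 4 = 65 cycles
two_word_wide = penalty(2, 2)        # (b) 1 + 30 + 2 = 33 cycles
interleaved = penalty(1, 4)          # (c) 1 + 15 + 4 = 20 cycles: the four
                                     #     banks overlap their accesses

for cycles in (one_word_wide, two_word_wide, interleaved):
    print(round(16 / cycles, 2))     # bytes per cycle: 0.25, 0.48, 0.8
```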
Associativity Examples
- Cache size is 8 blocks. Where does block 12 from memory go?
- Fully associative: block 12 can go anywhere
- Direct mapped: cache block no. = (block address) mod (no. of blocks in cache); block 12 can go only into cache block 4 (12 mod 8 = 4) => access the block using the lower 3 address bits
- 2-way set associative: set no. = (block address) mod (no. of sets in cache); block 12 can go anywhere in set 0 (12 mod 4 = 0) => access the set using the lower 2 address bits
Each block has a valid bit that tells whether the block's contents are valid: the block is in the cache only if the tags match and the valid bit is set.
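The mod-based placement rules above can be sketched as follows (Python, illustrative only):

```python
# Where block 12 goes in an 8-block cache under each placement policy.
NUM_BLOCKS = 8

def direct_mapped(block):              # exactly one possible cache block
    return block % NUM_BLOCKS

def set_associative(block, ways):      # one possible set, 'ways' slots in it
    num_sets = NUM_BLOCKS // ways
    return block % num_sets

print(direct_mapped(12))               # 4  -> only cache block 4
print(set_associative(12, 2))          # 0  -> anywhere in set 0
# Fully associative: any of the 8 cache blocks, no index bits needed.
```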
Miss rates for caches with different size, associativity and replacement algorithm:

          2-way            4-way            8-way
Size      LRU     Random   LRU     Random   LRU     Random
16 KB     5.18%   5.69%    4.67%   5.29%    4.39%   4.96%
64 KB     1.88%   2.01%    1.54%   1.66%    1.39%   1.53%
256 KB    1.15%   1.17%    1.13%   1.13%    1.12%   1.12%
For caches with low miss rates, random is almost as good as LRU.
Example I
Assume an I-miss rate of 2% and a D-miss rate of 4% (gcc). Assume CPI = 2 (without stalls) and a miss penalty of 40 cycles. Assume 36% of instructions are loads/stores. What is the CPI with memory stalls? How much faster would a machine with a perfect cache run? What happens if the processor is made faster but the memory system stays the same (e.g. base CPI reduced to 1)? How does Amdahl's law come into play?
Calculation I
- Instruction miss cycles = I x 100% x 2% x 40 = 0.80 x I
- Data miss cycles = I x 36% x 4% x 40 = 0.58 x I
- Total miss cycles = 0.80 x I + 0.58 x I = 1.38 x I
- CPI = 2 + 1.38 = 3.38
- Perf_perfect / Perf_stall = 3.38 / 2 = 1.69
- For a processor with base CPI = 1: CPI = 1 + 1.38 = 2.38, and Perf_perfect / Perf_stall = 2.38
- Time spent on stalls for the slower processor: 1.38/3.38 = 41%
- Time spent on stalls for the faster processor: 1.38/2.38 = 58%
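The same arithmetic, as a Python sketch (the function and parameter names are mine):

```python
# CPI with memory stalls = base CPI + instruction-miss stalls + data-miss stalls.
def cpi_with_stalls(base_cpi, i_miss=0.02, d_miss=0.04,
                    ls_frac=0.36, penalty=40):
    stalls = i_miss * penalty + ls_frac * d_miss * penalty
    return base_cpi + stalls

print(round(cpi_with_stalls(2), 2))       # 3.38
print(round(cpi_with_stalls(1), 2))       # 2.38
# Speedup of a perfect-cache machine over the base CPI = 2 machine:
print(round(cpi_with_stalls(2) / 2, 2))   # 1.69
```

Note that as the base CPI shrinks, the fixed 1.38 cycles of stalls per instruction dominate a growing share of execution time, which is Amdahl's law at work.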
Example II
Suppose the performance of the machine in the previous example is improved by doubling the clock speed (main memory speed remains the same). Hint: since the clock rate is doubled and the memory speed remains the same, the miss penalty becomes twice as large (80 cycles). How much faster will the machine be, assuming the same miss rates as in the previous example?
Calculation II
If the clock speed is doubled but the memory speed remains the same:
- Instruction miss cycles = I x 100% x 2% x 80 = 1.60 x I
- Data miss cycles = I x 36% x 4% x 80 = 1.16 x I
- Total miss cycles = 1.60 x I + 1.16 x I = 2.76 x I
- CPI = 2 + 2.76 = 4.76
- Speedup = (3.38 x I x T) / (4.76 x I x T/2) = 3.38 / 2.38 = 1.42, so the machine is only about 1.4 times faster, not twice as fast.
[Figure: tag search under the three placement policies: direct mapped (one tag checked), set associative (the tags in one set checked), fully associative (all tags checked).]
Example
Three small caches, each of four one-word blocks: direct mapped, two-way set associative, and fully associative. How many misses occur for the sequence of block addresses 0, 8, 0, 6, 8? How does this change with 8 words, or 16 words?
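A compact LRU cache simulator (a sketch of mine, assuming one-word blocks as in the example) confirms the miss counts for the three organisations:

```python
# Miss counts for the three 4-block caches on the trace 0, 8, 0, 6, 8.
# Direct-mapped and set-associative use block-address-mod-sets indexing;
# LRU replacement within a set (fully associative = one big set).
def count_misses(trace, ways, num_blocks=4):
    num_sets = num_blocks // ways
    sets = [[] for _ in range(num_sets)]   # each list ordered LRU -> MRU
    misses = 0
    for block in trace:
        s = sets[block % num_sets]
        if block in s:
            s.remove(block)                # hit: refresh LRU position
        else:
            misses += 1
            if len(s) == ways:
                s.pop(0)                   # evict the least recently used
        s.append(block)
    return misses

trace = [0, 8, 0, 6, 8]
print(count_misses(trace, ways=1))   # 5 (direct mapped: 0 and 8 conflict)
print(count_misses(trace, ways=2))   # 4 (two-way set associative)
print(count_misses(trace, ways=4))   # 3 (fully associative: compulsory only)
```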
[Figure: four-way set-associative cache with 256 sets. Address bits 31-10 form the 22-bit tag and bits 9-2 the 8-bit index; the four tags in the indexed set are compared in parallel and a 4-to-1 multiplexor selects the 32-bit data word on a hit.]
I miss rate   D miss rate   Combined rate
2.0%          1.7%          1.9%
1.6%          1.4%          1.5%
1.6%          1.4%          1.5%

[Figure: miss rate versus cache size, 2 KB to 128 KB.]
Design Tradeoffs
As with everything in engineering, memory hierarchies involve multiple design tradeoffs. Many more factors are involved, but those presented here are the most important and accessible.
Change                  Positive effect                                   Negative effect
Increase size           Decreases capacity misses                         May increase access time
Increase associativity  Decreases miss rate due to conflict misses        May increase access time
Increase block size     Decreases miss rate for a wide range of sizes     May increase miss penalty
Example
A computer system contains a main memory of 32K 16-bit words. It also has a 4K-word cache divided into 4-line sets with 64 words per line. The processor fetches words from locations 0, 1, 2, …, 4351 in that order, sequentially, 10 times. The cache is 10 times faster than main memory. Assume an LRU policy.

- With no cache: fetch time = (10 passes) x (68 blocks/pass) x (10T/block) = 6800T
- With cache: fetch time = …
- Improvement = 2.15
Modern Systems
Questions
- What is the difference between DRAM and SRAM in terms of applications?
- What is the difference between DRAM and SRAM in terms of characteristics such as speed, size and cost?
- Explain why one type of RAM is considered to be analog and the other digital.
- What is the distinction between spatial and temporal locality?
- What are the strategies for exploiting spatial and temporal locality?
- What is the difference among direct mapping, associative mapping and set-associative mapping?
- List the fields of the direct-mapped cache. List the fields of associative and set-associative caches.