
Computer Architecture

Lecture 4
Memory
• Memory comes in many types

• No single memory technology can fulfil all the requirements of a computer system

• The memory of a computer system can be classified into two main categories

• Internal Memory
• External Memory
Memory System
Characteristics

• Memory exhibits the widest range of
• Technology
• Organization
• Performance
• Cost
Memory System
Characteristics

• LOCATION:
• Refers to physical placement (internal or external)
• Internal memory is often equated with main memory
• The processor requires its own local memory (registers)
• Cache is another form of internal memory

• CAPACITY:
• Capacity is expressed in terms of bytes or words
• Common word lengths are 8, 16 and 32 bits
Memory System
Characteristics
• UNIT OF TRANSFER:
• For internal memory, the unit of transfer equals the number of
electrical lines into and out of the memory module
• For external memory, data are transferred in blocks (much larger than
a word)
Memory System
Characteristics
Sequential Access
• Memory is organized into units of data called records
• Access must be made in a specific linear sequence
• Access time is variable
• Example: tape

Direct Access
• Involves a shared read-write mechanism
• Individual blocks have a unique address based on physical location
• Access time is variable
• Example: disk

Random Access
• Each addressable location in memory has a unique physical address
• Time to access a location is independent of the sequence of prior accesses
• Any location can be selected at random and directly accessed
• Example: RAM

Associative Access
• A word is retrieved based on a portion of its content rather than its address
• Each location has its own addressing mechanism
• Retrieval time is constant and independent of location
• Example: cache


Memory System
Characteristics

(Figure: sequential, direct, and random access.)
Memory System
Characteristics

• CAPACITY & PERFORMANCE:


• Two most important parameters in memory
• Three parameters are used to judge

• Access Time (Latency) • Memory Cycle: • Transfer Rate:


• RAM: Time takes to perform a • Access time + additional time for • Rate at which data can be
read or write operation second access transferred in or out of memory
• Non RAM: Time takes to position • Additional time due to transient
read-write mechanism in desired signa
location
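
As a rough illustration of how these parameters combine (not from the slides; the latency, transfer-rate and block-size figures below are assumed), this C sketch estimates the time to read one block as access time plus block size divided by transfer rate.

/* A minimal sketch: estimating the time to read a block of N bytes from a
 * memory whose access time (latency) and transfer rate are known.
 * The numbers are illustrative assumptions, not measurements. */
#include <stdio.h>

int main(void)
{
    double latency_ns   = 50.0;    /* access time: assumed 50 ns     */
    double rate_bytes_s = 8.0e9;   /* transfer rate: assumed 8 GB/s  */
    double block_bytes  = 64.0;    /* size of the block being read   */

    /* total time = latency + (bytes transferred / transfer rate) */
    double total_s = latency_ns * 1e-9 + block_bytes / rate_bytes_s;

    printf("estimated read time: %.1f ns\n", total_s * 1e9);
    return 0;
}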
Memory System
Characteristics

• PHYSICAL TYPE
• Semiconductor (RAM)
• Magnetic surface (HDD)
• Optical (CD/DVD)

• PHYSICAL CHARACTERISTICS OF DATA STORAGE


• Volatility
• Erasability

• ORGANIZATION
• Physical arrangement of bits to form words
Memory System
Hierarchy

• How much (Capacity)
• How fast (Speed)
• How expensive (Cost)

• Trade-offs
• Faster access time = greater cost per bit
• Greater capacity = smaller cost per bit
• Greater capacity = slower access time

• Solution: Rely on more than one memory technology and employ a memory hierarchy
Memory System
Levels
• Two-Level Memory:
• Two levels of memory are used to reduce the average access time

• Based on the principle of locality of reference (a numeric sketch follows below)

• Three-Level Memory:

• Data are stored permanently on external devices
• Disk Cache
• A portion of main memory is used as a buffer for temporary storage of disk data
• A few large data transfers are more efficient than many small ones
• Retrieval from the software cache in main memory is faster than a physical disk access
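
A minimal numeric sketch in C (access times and hit ratio are assumed values) of why a two-level memory helps: thanks to locality of reference, most accesses are satisfied by the fast level, so the average access time stays close to the fast level's time.

/* Average access time of a two-level memory, where H is the fraction of
 * accesses satisfied by the faster level (the hit ratio). Values assumed. */
#include <stdio.h>

int main(void)
{
    double t1 = 1.0;     /* level-1 (cache) access time in ns, assumed    */
    double t2 = 10.0;    /* level-2 (main memory) access time in ns       */
    double h  = 0.95;    /* hit ratio, assumed; locality keeps it near 1  */

    /* on a hit only level 1 is accessed; on a miss both levels are */
    double t_avg = h * t1 + (1.0 - h) * (t1 + t2);

    printf("average access time = %.2f ns\n", t_avg);
    return 0;
}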
Memory System
Levels
• Cache memory is designed to combine
the speed of expensive, fast memory
with the capacity of larger, less
expensive memory

• Sits between main memory and the CPU

• Commonly located on the same chip as
the CPU
Memory System
Levels

• The cache contains copies of portions of main memory. When the processor
attempts to read a word of memory, a check is made to determine
whether the word is in the cache
• If yes, the word is delivered to the processor
• If no, the block of main memory containing the word is read into the cache and then the word is delivered

• Locality of Reference: when a block of data is fetched into the cache to satisfy a
single reference, it is likely that future references will be to the same
memory location or to other words in the block
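
A minimal, self-contained C sketch of this read flow, using an assumed toy configuration (4-word blocks, a single cache line, a 64-word main memory). It only illustrates the check-then-fetch behaviour and the benefit of locality, not a real cache.

/* Check the cache first; on a miss, copy the whole block from main memory
 * into the cache line, then deliver the requested word. Sizes assumed. */
#include <stdio.h>
#include <string.h>

#define BLOCK_WORDS 4                 /* K words per block (assumed)         */
#define MEM_WORDS   64                /* tiny main memory for the demo       */

static int memory[MEM_WORDS];         /* "main memory"                       */
static int cache_block[BLOCK_WORDS];  /* a single cached block (one line)    */
static int cached_block_no = -1;      /* which block the line holds, -1=none */

static int read_word(int address)
{
    int block_no = address / BLOCK_WORDS;

    if (block_no != cached_block_no) {
        /* miss: the whole block of main memory is read into the cache line */
        memcpy(cache_block, &memory[block_no * BLOCK_WORDS], sizeof cache_block);
        cached_block_no = block_no;
        printf("miss on address %d (block %d)\n", address, block_no);
    } else {
        printf("hit  on address %d (block %d)\n", address, block_no);
    }
    /* the word is delivered to the "processor" */
    return cache_block[address % BLOCK_WORDS];
}

int main(void)
{
    for (int i = 0; i < MEM_WORDS; i++) memory[i] = i * 10;
    printf("value %d\n", read_word(5));   /* miss: block 1 is fetched        */
    printf("value %d\n", read_word(6));   /* hit: same block (locality)      */
    printf("value %d\n", read_word(20));  /* miss: block 5 replaces block 1  */
    return 0;
}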
Memory System
Cache/Main Memory Structure
• Main memory consists of 2^n addressable
words, each with a unique address

• For mapping purposes, this memory is divided into blocks of K
words each (total blocks M = 2^n / K)

• The cache has m blocks, called lines. Each line contains K words plus
tag bits
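
A quick numeric sketch of these quantities in C, for assumed values of n = 16 address bits and K = 4 words per block.

/* Compute the number of words (2^n) and blocks (M = 2^n / K). Values assumed. */
#include <stdio.h>

int main(void)
{
    int n = 16;                        /* address bits (assumed)   */
    int K = 4;                         /* words per block (assumed) */

    long words  = 1L << n;             /* main memory holds 2^n words */
    long blocks = words / K;           /* M = 2^n / K blocks          */

    printf("words  = %ld\n", words);   /* 65536 */
    printf("blocks = %ld\n", blocks);  /* 16384 */
    return 0;
}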
Memory System
Cache Organization

• The cache connects to the processor via data,
control and address lines.
Memory System
Cache Design Parameters
Memory System
Cache Address

• Logical (virtual) cache:
• Stores data using virtual addresses
• The processor accesses the cache directly, without going through the MMU
• Faster cache access
• Must be flushed on each process switch

• Physical cache:
• Stores data using physical addresses translated by the Memory
Management Unit (MMU)
Memory System
Cache Size

• Cost
• More cache is more expensive

• Speed
• More cache gives faster effective speed
• But searching a larger cache takes more time
Memory System
Mapping Function

• An algorithm is needed to determine which main memory block occupies which
cache line

• Direct
• Simplest technique
• Maps each block of main memory into only one possible cache line

• Associative
• Permits each main memory block to be loaded into any line of the cache
• To determine whether a block is in the cache, the cache control
logic must examine every line

• Set Associative
• A compromise between the previous two techniques
Memory System
Direct Mapping

• Each block of main memory maps to
only one cache line

• The address is interpreted in two parts

• The least significant w bits identify a unique
word within a block
• The most significant s bits specify one
memory block
• The s bits are split into a cache line field of
r bits and a tag of s - r bits
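
A minimal C sketch of this address split, using assumed field widths (w = 2 word bits, r = 14 line bits, and the remaining bits of a 24-bit address forming the tag).

/* Extract the word, line and tag fields from an address. Widths assumed. */
#include <stdio.h>

int main(void)
{
    unsigned address = 0x16339C;   /* example 24-bit address (assumed) */
    unsigned w = 2, r = 14;        /* word and line field widths       */

    unsigned word = address & ((1u << w) - 1);         /* lowest w bits  */
    unsigned line = (address >> w) & ((1u << r) - 1);  /* next r bits    */
    unsigned tag  = address >> (w + r);                /* remaining bits */

    printf("tag=0x%X line=0x%X word=%u\n", tag, line, word);
    return 0;
}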
Memory System
Direct Mapping

• Pros:
• Simple
• Inexpensive

• Cons:
• A given block always maps to a fixed cache line
• If a program repeatedly accesses two blocks that map to the same line,
cache misses are very high
• Victim Cache
• A small buffer that remembers recently evicted lines so they can be used again
Memory System
Associative Mapping

• Associative mapping allows each main memory block to be loaded
into any cache line
• The memory address is interpreted as a tag and a word field
• The tag uniquely identifies a block of main memory
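
A minimal C sketch of an associative lookup under assumed sizes (a 4-line cache, 2 word bits): the tag is compared against every line (in hardware these comparisons happen in parallel).

/* Split the address into tag and word, then compare the tag against every
 * cache line. Cache contents and sizes are assumed for illustration. */
#include <stdio.h>

#define LINES 4                        /* tiny cache for illustration */

struct line { int valid; unsigned tag; };

static struct line cache[LINES] = {
    {1, 0x3A2}, {1, 0x058}, {0, 0x000}, {1, 0x1FF}
};

/* returns the matching line index, or -1 on a miss */
static int lookup(unsigned address, unsigned word_bits)
{
    unsigned tag = address >> word_bits;  /* everything above the word field */
    for (int i = 0; i < LINES; i++)       /* every line must be examined      */
        if (cache[i].valid && cache[i].tag == tag)
            return i;
    return -1;
}

int main(void)
{
    printf("%d\n", lookup(0x058 << 2, 2));  /* hit in line 1 */
    printf("%d\n", lookup(0x111 << 2, 2));  /* miss -> -1    */
    return 0;
}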
Memory System
Set Associative Mapping

• The cache is divided into a number of
sets
• Each set contains a number of
lines
• A block maps to any line within a
given set
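
A minimal C sketch of the set-associative address interpretation, with assumed field widths (2 word bits, 4 set bits, 2 ways per set): the set index selects the set, and the tag is compared against each line (way) in that set.

/* Split the block number into set index and tag. Widths assumed. */
#include <stdio.h>

int main(void)
{
    unsigned word_bits = 2;    /* w: word-within-block bits (assumed) */
    unsigned set_bits  = 4;    /* d: set index bits -> 16 sets        */
    unsigned ways      = 2;    /* lines per set (2-way, assumed)      */

    unsigned address = 0x1234;
    unsigned block   = address >> word_bits;            /* block number     */
    unsigned set     = block & ((1u << set_bits) - 1);  /* which set        */
    unsigned tag     = block >> set_bits;               /* compared against */
                                                        /* each way in set  */

    printf("set=%u tag=0x%X (any of %u ways)\n", set, tag, ways);
    return 0;
}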
Memory System
Replacement Algorithms

• Space needs to be created for a new data block in a cache that is already full

• Direct mapping: only one possible line for any particular block, so no choice exists
• For associative and set-associative mapping, a replacement algorithm is needed
• Implemented in hardware for speed
• Least Recently Used (LRU)
• First In First Out (FIFO)
• Least Frequently Used (LFU)
• Random
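
A minimal C sketch of the LRU idea for one set of a set-associative cache (4 ways and simple counter timestamps are assumptions): each line records when it was last used, and the line with the oldest timestamp is chosen as the victim.

/* Track last-use time per line; the victim is the least recently used line. */
#include <stdio.h>

#define WAYS 4

struct line { unsigned tag; unsigned long last_used; };

static struct line set[WAYS];
static unsigned long clock_ticks = 0;

static void touch(int way)            /* called on every reference to a line */
{
    set[way].last_used = ++clock_ticks;
}

static int pick_victim(void)          /* called on a miss when the set is full */
{
    int victim = 0;
    for (int i = 1; i < WAYS; i++)
        if (set[i].last_used < set[victim].last_used)
            victim = i;               /* least recently used so far */
    return victim;
}

int main(void)
{
    touch(0); touch(1); touch(2); touch(3);
    touch(1);                                     /* way 1 used again            */
    printf("victim = way %d\n", pick_victim());   /* way 0 is least recently used */
    return 0;
}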
Memory System
Write Policy

• Write through
• Simplest technique
• All write operations are made to main memory as well as to the cache
• Generates substantial memory traffic and may create a bottleneck

• Write back
• Minimizes memory writes
• Updates are made only in the cache; a modified line is written to main memory when it is replaced
• Requires more complex circuitry
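
A minimal C sketch contrasting the two policies for a single cached word (the structure is an assumption for illustration): write-through touches main memory on every write, while write-back defers the memory update until the line is evicted.

/* Write-through updates cache and memory together; write-back marks the
 * cached copy dirty and writes memory only when the line is replaced. */
#include <stdio.h>

static int main_memory_word;    /* the word's copy in main memory */
static int cached_word;         /* the word's copy in the cache   */
static int dirty;               /* write-back bookkeeping         */

static void write_through(int value)
{
    cached_word      = value;   /* update cache ...                */
    main_memory_word = value;   /* ... and main memory every time  */
}

static void write_back(int value)
{
    cached_word = value;        /* update only the cache           */
    dirty = 1;                  /* remember that memory is stale   */
}

static void evict_line(void)    /* called when the line is replaced */
{
    if (dirty) {
        main_memory_word = cached_word;  /* single deferred write */
        dirty = 0;
    }
}

int main(void)
{
    write_back(1); write_back(2); write_back(3);  /* three cache writes ... */
    evict_line();                                 /* ... one memory write   */
    printf("memory holds %d\n", main_memory_word);
    write_through(4);                             /* memory updated at once */
    printf("memory holds %d\n", main_memory_word);
    return 0;
}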
Memory System
Line Size

• Block of data is retrieved & placed in cache (it contains required word
+ near by location data also)
• Large block size means more useful data
• Larger block size mean increase in hit ratio
• Even larger block size will decrease hit ratio due to less probability of
reusing information

• Two ways
• Larger block size with reduce number of blocks
• Block become bigger each additional word become farther
Memory System
Multilevel Caches

• High logic density enables caches on the processor chip

• Faster than bus access
• Frees the bus for other transfers

• It is common to use both on-chip and off-chip caches

• L1 on chip, L2 off chip in SRAM
• L2 access is faster than DRAM or ROM access
• L2 often uses a separate data path

• The design is complicated by placement, size, replacement algorithm and write
policy
Memory System
Unified vs Split Caches

• A dedicated cache each for instructions and data (split), or one cache shared by
both instructions and data (unified)

• Advantages of a unified cache
• Higher hit rate
• Automatically balances the load between instruction and data fetches
• Simpler design (only one cache)

• Advantages of split caches

• Eliminates cache contention between the instruction fetch unit and the execution unit
• Assists pipelining
Memory System
Cache Evolution
Memory System
Intel vs ARM

