
TUTORIAL NO :- 5

Computer Architecture

A review of a research article in the area of COA (Computer Organization and Architecture): cache memory and main
memory mapping

PRESENTED BY :-
Krishna Tikarya – A208
Niraj Mitalia – A209
Siddharth Mathur – A225
A COMPARATIVE STUDY OF SET ASSOCIATIVE MEMORY MAPPING
ALGORITHMS AND THEIR USE FOR CACHE AND MAIN MEMORY
[ALAN JAY SMITH, MEMBER, IEEE]

The research paper explains that set associative page mapping algorithms have become widespread for the operation
of cache memories for reasons of cost and efficiency. It shows how to calculate analytically the effectiveness of
standard bit-selection set associative page mapping or random mapping relative to fully associative mapping, and it
argues that algorithms currently used only for cache paging will be applied to main memory for the same reasons of
efficiency and implementation ease.

The objectives of the study are:


 To calculate analytically the effectiveness of standard bit-selection set associative page mapping or
random mapping relative to fully associative (unconstrained mapping) paging
 To discuss cache architecture and examine its implementation.
 To use main memory as a high speed buffer for some form of "gap filler" technology.
Introduction

 CACHE memories were proposed early in the 1960s as high-speed memory buffers used to hold the contents of
recently accessed main memory locations.
 It was already known at that time that recently used information (instructions and data) is likely to be used again
in the near future.
 The idea was that although the cache (buffer) memory would hold only a small fraction of the contents of main
memory, a disproportionate fraction of all memory references would be satisfied by information contained within
the buffer.
 That this does happen is attested to by the prevalence of machines, such as those from IBM, that use cache memory.
Cache Memory

 Cache memory is a special, very high-speed memory.
 It is used to speed up memory access and to synchronize with the high-speed CPU.
 Cache memory is costlier than main memory or disk memory but more economical than CPU registers.
 Cache memory is an extremely fast memory type that acts as a buffer between RAM and the CPU.
 It holds frequently requested data and instructions so that they are immediately available to the CPU when
needed.
 Cache memory is used to reduce the average time to access data from the main memory.
 The cache is a smaller and faster memory which stores copies of the data from frequently used main
memory locations.
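As a concrete illustration of the bit-selection set associative mapping the paper studies, the sketch below splits a byte address into tag, set index, and block offset fields, taking the set index directly from the address bits. The block size, number of sets, and associativity are hypothetical values chosen for the example, not parameters from the paper.

```python
# Bit-selection set mapping sketch (hypothetical cache geometry).
BLOCK_SIZE = 64      # bytes per cache block (assumed)
NUM_SETS = 128       # number of sets (assumed)
ASSOCIATIVITY = 4    # blocks per set (assumed)

OFFSET_BITS = BLOCK_SIZE.bit_length() - 1   # log2(64) = 6
INDEX_BITS = NUM_SETS.bit_length() - 1      # log2(128) = 7

def split_address(addr):
    """Return (tag, set_index, offset) for a byte address."""
    offset = addr & (BLOCK_SIZE - 1)                 # low offset bits
    set_index = (addr >> OFFSET_BITS) & (NUM_SETS - 1)  # next index bits
    tag = addr >> (OFFSET_BITS + INDEX_BITS)         # remaining high bits
    return tag, set_index, offset

tag, set_index, offset = split_address(0x12345)  # -> (9, 13, 5)
```

Because the set index is read straight out of the address, no computation or search is needed to locate the set, which is the implementation advantage that makes bit selection attractive for hardware.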
Main memory

 Main memory refers to physical memory that is internal to the computer.
 The word main is used to distinguish it from external mass storage devices such as disk drives.
 Other terms used to mean main memory include RAM and primary storage.
 The computer can manipulate only data that is in main memory.
 Therefore, every program you execute and every file you access must be copied from a storage
device into main memory.
 The amount of main memory on a computer is crucial because it determines how many programs
can be executed at one time and how much data can be readily available to a program.
 Main memory is the primary, internal workspace in the computer, commonly known as RAM
(random access memory). Specifications such as 4GB, 8GB, 12GB and 16GB almost always refer
to the capacity of RAM. In contrast, disk or solid state storage capacities in a computer are typically
128GB or 256GB and higher.
Techniques

 We experiment with LRU and FIFO dynamic mapping and find that they often perform significantly better
than either of the two static algorithms.
 In the First-In, First-Out (FIFO) page replacement algorithm, the frames are treated as a circular list
and the oldest (longest-resident) page is replaced. In the Least Recently Used (LRU) page replacement
algorithm, the frame whose contents have not been used for the longest time is replaced.
 Comparisons indicating the performance penalty to be expected from decreases in the degree of
associativity are also presented.
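The two replacement policies described above can be sketched as follows; each function simulates a single pool of page frames, and the `trace` reference string is an invented example, not data from the paper.

```python
from collections import deque, OrderedDict

def fifo_misses(trace, frames):
    """FIFO replacement: on a miss, evict the longest-resident page."""
    resident = deque()
    misses = 0
    for page in trace:
        if page not in resident:
            misses += 1
            if len(resident) == frames:
                resident.popleft()   # oldest page out
            resident.append(page)
    return misses

def lru_misses(trace, frames):
    """LRU replacement: on a miss, evict the least recently used page."""
    resident = OrderedDict()
    misses = 0
    for page in trace:
        if page in resident:
            resident.move_to_end(page)        # mark as most recently used
        else:
            misses += 1
            if len(resident) == frames:
                resident.popitem(last=False)  # evict LRU page
            resident[page] = True
    return resident and misses or misses

trace = [1, 2, 3, 1, 4, 1, 2, 5, 1, 2]
f = fifo_misses(trace, 3)  # 7 misses
l = lru_misses(trace, 3)   # 6 misses
```

On this particular reference string LRU incurs fewer misses than FIFO because it keeps the recently reused pages 1 and 2 resident; neither policy dominates on all traces.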
Research Methodology

 This paper will present both mathematical and experimental analysis of some memory mapping algorithms.
 Experiments with dynamic mapping algorithms indicated that most of the penalty associated with static mapping over
fully associative mapping was eliminated for dynamic mapping.
 From both our measurements and calculations, we draw the conclusion that there is only a small miss ratio penalty for set
associative bit-selection mapping.
 We believe that the implementation advantages of set associative mapping will result in its use when electronic third-level memories become fully integrated into computer designs.
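A minimal trace-driven simulation in the spirit of this methodology might look like the sketch below, which counts misses for a bit-selection set associative cache and for a fully associative cache of the same total capacity, both managed with LRU. The synthetic trace generator, cache sizes, and locality parameters are all assumptions for illustration, not the traces or configurations used in the paper.

```python
import random
from collections import OrderedDict

def lru_cache_misses(trace, num_sets, assoc, index_of):
    """Miss count for an LRU-managed cache; index_of maps a block
    number to its set (bit selection below)."""
    sets = [OrderedDict() for _ in range(num_sets)]
    misses = 0
    for block in trace:
        s = sets[index_of(block)]
        if block in s:
            s.move_to_end(block)          # refresh LRU position
        else:
            misses += 1
            if len(s) == assoc:
                s.popitem(last=False)     # evict LRU block in the set
            s[block] = True
    return misses

random.seed(1)
# Synthetic trace with locality: mostly revisit a small hot working set.
trace = [random.randrange(64) if random.random() < 0.9
         else random.randrange(4096)
         for _ in range(20000)]

TOTAL_BLOCKS = 256
# 64 sets x 4-way, bit selection: low-order block bits pick the set.
sa = lru_cache_misses(trace, 64, 4, lambda b: b & 63)
# Fully associative = one set holding all blocks.
fa = lru_cache_misses(trace, 1, TOTAL_BLOCKS, lambda b: 0)

miss_ratio_sa = sa / len(trace)
miss_ratio_fa = fa / len(trace)
```

Running the same trace through both configurations and comparing miss ratios is the essence of trace-driven evaluation; the paper's finding is that the set associative penalty relative to full associativity is small.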
Conclusion

To sum up, the paper experiments with two (infeasible to implement) dynamic mapping
algorithms, in which pages are assigned to sets in either an LRU or FIFO manner at fault
times, and finds that they often yield significantly lower miss ratios than static
algorithms such as bit selection. Trace-driven simulations are used to generate
experimental results and to verify the accuracy of the calculations.
THANK YOU
