MEMORY
WHAT IS MEMORY HIERARCHY?
The memory hierarchy is the arrangement of the
various types of storage in a computing system,
ordered by access speed.
1. Cache Memory
It is the fastest type of memory and is located
closest to the CPU.
It stores frequently accessed data and
instructions, making it faster to retrieve them
than from the main memory.
There are typically two or three levels of cache
memory, with each level having a larger capacity
and slower access speed than the previous level.
2. MAIN MEMORY
It is the primary memory of a computer system.
It is slower than cache memory but faster than
secondary storage.
The data and instructions stored in the main
memory can be accessed directly by the CPU.
3. SECONDARY STORAGE
It is the slowest level of the hierarchy, but it retains
data permanently, even when power is off; hard disks and
solid-state drives are common examples.
2D MEMORY ORGANIZATION:
Disadvantages:
Limited Bandwidth: 2D memory organization has limited
bandwidth due to the sequential access pattern of memory chips,
which can lead to slower data transfer rates.
Limited Capacity: 2D memory organization has limited capacity
since it requires memory chips to be arranged in a two-dimensional
grid, limiting the number of memory chips that can be used.
Limited Scalability: 2D memory organization does not scale well;
capacity or performance can be increased only by adding more
memory chips side by side, which consumes board space.
2.5D MEMORY ORGANIZATION:
Advantages:
Higher Bandwidth: 2.5D memory organization has higher bandwidth since it
uses a high-speed interconnect between memory chips, enabling faster data
transfer rates.
Higher Capacity: 2.5D memory organization has higher capacity since
stacked memory dies can be placed alongside the processor on an interposer,
enabling more memory to be packed into a smaller space.
Scalability: 2.5D memory organization is highly scalable, making it easier to
increase memory capacity or performance without adding more discrete chips
to the board.
Disadvantages:
Complexity: 2.5D memory organization is more complex than 2D memory
organization since it requires additional interconnects and packaging
technologies.
Higher Cost: 2.5D memory organization is generally more expensive than 2D
memory organization due to the additional interconnects and packaging
technologies required.
Higher Power Consumption: 2.5D memory organization has higher power
consumption due to the additional interconnects and packaging technologies,
making it less ideal for use in mobile devices and other low-power electronics.
WHAT IS CACHE MAPPING?
Cache memory bridges the speed mismatch between the
main memory and the processor.
Whenever a cache hit occurs:
The required word is present in the cache memory.
The required word is delivered from the cache memory
to the CPU.
And whenever a cache miss occurs:
The required word is not present in the cache memory.
The block containing the required word must be mapped
(brought in) from the main memory.
Such mapping can be performed using various cache
mapping techniques.
CHARACTERISTICS OF CACHE MEMORY
Cache memory is an extremely fast memory type
that acts as a buffer between RAM and the CPU.
Cache Memory holds frequently requested data and
instructions so that they are immediately available
to the CPU when needed.
Cache memory is costlier than main memory or disk
memory but more economical than CPU registers.
Cache Memory is used to speed up and synchronize
with a high-speed CPU.
Typical capacities by cache level:
Level 1 (L1): 8 KB – 64 KB
Level 2 (L2): 64 KB – 4 MB
Level 3 (L3): more than 4 MB
LEVELS OF MEMORY
Level 1 – Registers
This is the memory directly inside the CPU: the data
being worked on at that instant is held here. Commonly
used registers include the accumulator, the program
counter, and the address register.
Level 2 – Cache Memory
It is an extremely fast memory with a very short access
time, where data is temporarily stored for faster
access.
Level 3 – Main Memory
It is the memory on which the computer works currently.
It is small in size, and once power is off the data no
longer stays in this memory (it is volatile).
Level 4 – Secondary Memory
It is external memory which is not as fast as main
memory, but data stays in it permanently.
BASIC OPERATIONS OF CACHE MEMORY
Its basic operations are as follows:
The CPU first checks the cache for the required data;
if the data is present in the cache, it does not
access the main memory.
If the data is not present in the cache, the CPU
accesses the main memory instead.
The block of words that the CPU is currently accessing
is transferred from the main memory to the cache for
quick access in the future.
The hit ratio defines the performance of the cache
memory.
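This lookup-then-fetch behaviour can be sketched in a few lines of Python. The sketch is purely illustrative: the class name SimpleCache is invented here, the cache is modelled as a plain dictionary with no capacity limit or replacement policy, and individual words stand in for whole blocks.

class SimpleCache:
    def __init__(self, main_memory):
        self.main_memory = main_memory  # backing store: address -> word
        self.lines = {}                 # cache contents: address -> word
        self.hits = 0
        self.misses = 0

    def read(self, address):
        if address in self.lines:       # hit: serve directly from the cache
            self.hits += 1
            return self.lines[address]
        self.misses += 1                # miss: go to the main memory
        word = self.main_memory[address]
        self.lines[address] = word      # keep a copy for future accesses
        return word

memory = {addr: addr * 2 for addr in range(16)}
cache = SimpleCache(memory)
for addr in [1, 2, 1, 1, 3]:
    cache.read(addr)
print(cache.hits, cache.misses)         # 2 hits, 3 misses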
CACHE PERFORMANCE
The performance of the cache is measured in terms of the hit ratio.
The CPU searches the cache whenever it needs to read or write any
data from the main memory. Two cases may occur:
If the CPU finds the data in the cache, a cache hit occurs, and it
reads the data from the cache.
If it does not find the data in the cache, a cache miss occurs.
During a cache miss, the data is first brought into the cache, and
the CPU then reads it from there.
Therefore, we can define the hit ratio as the number of hits divided
by the sum of hits and misses:
Hit ratio = hits / (hits + misses)
          = number of hits / total accesses
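For instance, if the CPU makes 100 accesses and 80 of them are found
in the cache, the hit ratio is 80 / (80 + 20) = 0.8, i.e. 80%.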
Also, we can improve cache performance by:
using a larger cache block size.
higher associativity.
reducing the miss rate.
reducing the time to hit in the cache.
PROCESS OF CACHE MAPPING
The process of cache mapping defines how a block that
is present in the main memory gets mapped into the
cache memory when a cache miss occurs.
In simpler words, cache mapping is the technique by
which the contents of the main memory are brought into
the cache memory.
[Diagram: main memory blocks being mapped to cache lines]
TECHNIQUES OF CACHE MAPPING
DIRECT MAPPING
In direct mapping, a particular block of the main
memory can map only to one particular line of the
cache. The cache line to which a given block maps is
given by:
Cache line number = (Main memory block address) mod
(Total number of lines in the cache)
FOR EXAMPLE,
Let us consider that a particular cache memory is
divided into a total of 'n' lines.
Then, block 'j' of the main memory can map only to
line number (j mod n) of the cache.
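The modulo rule above translates directly into code. A small Python
sketch (the function name direct_mapped_line is invented for this
illustration):

def direct_mapped_line(block_address, num_cache_lines):
    # In direct mapping, block j can live only in line (j mod n).
    return block_address % num_cache_lines

# With a cache of n = 8 lines, blocks 0, 8, 16, ... all compete for line 0:
for j in [0, 5, 8, 13, 16]:
    print(f"block {j} -> cache line {direct_mapped_line(j, 8)}")

Note how blocks 0, 8, and 16 all map to line 0: in direct mapping they
evict one another even if other lines are free.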
DIVISION OF PHYSICAL ADDRESS
In direct mapping, the physical address is divided
into three fields, from most to least significant bits:
| Tag | Line Number | Block/Word Offset |
The offset selects a word within a block, the line
number selects the cache line, and the tag is stored
with the line and compared to identify the block.
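As an illustration, this field extraction can be written with shifts
and masks. The function name split_direct_address and the field widths
(6 line bits, 8 offset bits, chosen purely as example values) are
assumptions for this sketch:

def split_direct_address(address, line_bits=6, offset_bits=8):
    # Lowest bits: word/byte offset within a block.
    offset = address & ((1 << offset_bits) - 1)
    # Middle bits: the cache line the block must map to.
    line = (address >> offset_bits) & ((1 << line_bits) - 1)
    # Remaining high bits: the tag stored alongside the line.
    tag = address >> (offset_bits + line_bits)
    return tag, line, offset

print(split_direct_address(0b101_110010_00001111))  # (5, 50, 15)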
FULLY ASSOCIATIVE MAPPING
In fully associative mapping, a block of the main
memory can be mapped to any line of the cache that is
freely available at that moment.
This makes fully associative mapping considerably more
flexible than direct mapping.
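A minimal sketch of a fully associative lookup in Python, assuming the
cache is held as a fixed-size list of (tag, data) entries with None
marking a free line; the names assoc_lookup and assoc_insert are
invented for this illustration. Because the block may sit anywhere,
every line has to be compared, which is why real fully associative
caches need one comparator per line:

def assoc_lookup(cache_lines, tag):
    for entry in cache_lines:                      # the block may be in any line,
        if entry is not None and entry[0] == tag:  # so every tag is compared
            return entry[1]                        # hit
    return None                                    # miss

def assoc_insert(cache_lines, tag, data):
    for i, entry in enumerate(cache_lines):
        if entry is None:                          # use any free line
            cache_lines[i] = (tag, data)
            return
    cache_lines[0] = (tag, data)                   # cache full: evict (replacement policy omitted)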
FOR EXAMPLE,
Let us consider the scenario given as follows:
Solution-
Given-
• Cache memory size = 16 KB
• Block size = Frame size = Line size = 256 bytes
• Main memory size = 128 KB
We consider that the memory is byte addressable.
Number of Bits in Physical Address-
We have, size of main memory = 128 KB = 2^17 bytes.
Thus, number of bits in physical address = 17 bits.
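The remaining quantities follow from the same givens. A short Python
sketch of the arithmetic (variable names invented for this illustration):

import math

cache_size  = 16 * 1024    # 16 KB
block_size  = 256          # bytes per block / line
memory_size = 128 * 1024   # 128 KB, byte-addressable

address_bits = int(math.log2(memory_size))  # 17 bits of physical address
offset_bits  = int(math.log2(block_size))   # 8 bits select a byte within a block
num_lines    = cache_size // block_size     # 16 KB / 256 B = 64 cache lines

# Fully associative: everything above the offset is the tag.
tag_bits = address_bits - offset_bits       # 17 - 8 = 9 bits

print(address_bits, offset_bits, num_lines, tag_bits)  # 17 8 64 9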
Disadvantages of fully associative mapping:
It is quite expensive, since every cache line needs its
own tag comparator.
The practical storage capacity is therefore limited.
AUXILIARY MEMORY
It is a reusable memory: data stays in it permanently
until it is deliberately erased and rewritten. It is
the slowest and least expensive level of the hierarchy;
magnetic disks and tapes are common examples.