
MEMORY

What is Computer Memory?

• Computer memory, like the human brain, is used to store data/information and instructions. It is a data storage unit or device where the data to be processed and the instructions required for processing are kept. It can store both the input and the output.
Memory Hierarchy Design
• The memory hierarchy is one of the most important concepts in computer memory, as it helps in optimizing the memory available in the computer. There are multiple levels in the hierarchy, each with a different size, cost, and access speed.
Types of Memory Hierarchy: The memory hierarchy design is divided into 2 main types:
External Memory or Secondary Memory: Secondary memory is the backup memory of a computer system that stores a huge amount of data on a permanent basis. Examples of secondary memories are the hard disk, compact disc, pen drive, SD card, DVD, etc.

Internal Memory or Primary Memory: Primary memory is the core memory of the computer system, where data and information in active use are stored. Primary memory is internal memory that resides inside the computer, close to the processor.
Memory Hierarchy Design
1. Registers
Registers are small, high-speed memory units located in the CPU. They are used to store the most frequently used data and
instructions. Registers have the fastest access time and the smallest storage capacity, typically ranging from 16 to 64 bits.
2. Cache Memory
Cache memory is a small, fast memory unit located close to the CPU. It stores frequently used data and instructions that
have been recently accessed from the main memory. Cache memory is designed to minimize the time it takes to access
data by providing the CPU with quick access to frequently used data.
3. Main Memory
Main memory, also known as RAM (Random Access Memory), is the primary memory of a computer system. It has a
larger storage capacity than cache memory, but it is slower. Main memory is used to store data and instructions that are
currently in use by the CPU.
• Types of Main Memory
• Static RAM: Static RAM stores binary information in flip-flops, and the information remains valid as long as power is supplied. It has a faster access time and is used in implementing cache memory.
• Dynamic RAM: It stores binary information as a charge on a capacitor. It requires refresh circuitry to maintain the charge on the capacitors, which must be recharged every few milliseconds. It contains more memory cells per unit area than SRAM.

4. Secondary Storage
• Secondary storage, such as hard disk drives (HDD) and solid-state drives (SSD), is a non-volatile memory
unit that has a larger storage capacity than main memory. It is used to store data and instructions that are not
currently in use by the CPU. Secondary storage has the slowest access time and is typically the least
expensive type of memory in the memory hierarchy.

5. Magnetic Disk
• Magnetic disks are circular plates fabricated from metal or plastic and coated with a magnetizable material. Magnetic disks operate at high speed inside the computer and are frequently used.

6. Magnetic Tape
• Magnetic tape is a magnetic recording medium: a plastic film coated with a magnetizable layer. It is generally used for backing up data. With magnetic tape, access is sequential and therefore slower, since the tape must be wound to the required position before the data can be read.
Primary memory vs. Secondary memory:

• Primary memory is temporary; secondary memory is permanent.

• Primary memory is directly accessible by the processor/CPU; secondary memory is not directly accessible by the CPU.

• The nature of primary memory varies by part: RAM is volatile, ROM is non-volatile. Secondary memory is always non-volatile.

• Primary memory devices are more expensive than secondary storage devices.

• Primary memory uses semiconductor memories; secondary memory uses magnetic and optical memories.

• Primary memory is also known as main memory or internal memory; secondary memory is also known as external memory or auxiliary memory.

• Examples of primary memory: RAM, ROM, cache memory, PROM, EPROM, registers, etc. Examples of secondary memory: hard disk, floppy disk, magnetic tapes, etc.
ROM Memories
• Types of Read-Only Memory (ROM) :

1. PROM (Programmable Read-Only Memory): It can be programmed once by the user. Once programmed, the data and instructions in it cannot be changed.
2. EPROM (Erasable Programmable Read-Only Memory): It can be reprogrammed. To erase its data, the chip is exposed to ultraviolet light; reprogramming erases all the previous data.
3. EEPROM (Electrically Erasable Programmable Read-Only Memory): The data can be erased by applying an electric field, with no need for ultraviolet light, and only selected portions of the chip can be erased.
4. MROM (Mask ROM): Mask ROM is a kind of read-only memory that is programmed (masked) at the time of production. Like other types of ROM, mask ROM does not allow the user to change the stored data; where modification is possible at all, the process is difficult and slow.
2D and 2.5D Memory organization

• The internal structure of memory, whether RAM or ROM, is made up of memory cells, each storing one bit. A group of 8 bits makes a byte. The memory takes the form of a multidimensional array of rows and columns, in which each cell stores a bit and a complete row contains a word. The size of a memory can be expressed as

2^n = N

where n is the number of address lines and N is the total number of addressable locations. There will be 2^n words.
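The relationship above can be checked with a short sketch (the line counts used here are illustrative examples, not values from the text):

```python
# Number of addressable words N for n address lines: N = 2**n.
def addressable_words(n_address_lines: int) -> int:
    """Return the number of words a memory with n address lines can address."""
    return 2 ** n_address_lines

# Example: 10 address lines -> 1024 words (1K); 16 lines -> 65536 words (64K).
print(addressable_words(10))  # 1024
print(addressable_words(16))  # 65536
```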
2D Memory organization –
• In 2D organization, memory is divided in
the form of rows and columns(Matrix).
• Each row contains a word, now in this
memory organization, there is a decoder.
• A decoder is a combinational circuit with n input lines and 2^n output lines.
• One of the output lines selects the row by
the address contained in the MAR and
the word which is represented by that
row gets selected and is either read or
written through the data lines.
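The decoder-driven row selection described above can be sketched in a few lines; the memory contents and sizes below are made-up examples, not a model of any particular chip:

```python
# Sketch of an n-to-2^n address decoder selecting one row (word) of a
# 2D-organized memory. Exactly one output line is active for a given address.
def decode(address: int, n: int) -> list[int]:
    """Return the 2**n decoder output lines; exactly one line is 1."""
    outputs = [0] * (2 ** n)
    outputs[address] = 1
    return outputs

# Toy 2D memory: one word per row, selected by the active decoder output.
memory = ["word0", "word1", "word2", "word3"]   # 2 address lines -> 4 rows
mar = 2                                         # address held in the MAR
select = decode(mar, 2)                         # [0, 0, 1, 0]
word = memory[select.index(1)]                  # the selected row's word
print(word)  # word2
```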
2.5D Memory organization –
• In 2.5D Organization the scenario is the
same but we have two different
decoders one is a column decoder and
another is a row decoder.
• Column decoder is used to select the
column and a row decoder is used to
select the row.
• The address from the MAR is split and fed to the two decoders as input.
• The decoders together select the addressed cell. For a read, the data at that location is placed on the bit lines and read out; for a write, the data on the bit lines is written into that memory location.
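The split of the MAR address between the row and column decoders can be sketched as follows; the 8-bit address and the 4/4 split are assumed examples, not a fixed rule:

```python
# In a 2.5D organization the address from the MAR is divided between a row
# decoder and a column decoder. Here an 8-bit address is split into 4 row
# bits (high-order) and 4 column bits (low-order).
def split_address(addr: int, col_bits: int = 4) -> tuple[int, int]:
    row = addr >> col_bits               # high-order bits drive the row decoder
    col = addr & ((1 << col_bits) - 1)   # low-order bits drive the column decoder
    return row, col

row, col = split_address(0b1010_0011)
print(row, col)  # 10 3
```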
• Read and Write Operations –
1. If the select line is in read mode, the word/bit addressed by the MAR is placed on the data lines and read.
2. If the select line is in write mode, the data from the memory data register (MDR) is sent to the cell addressed by the memory address register (MAR).
3. With the help of the select line, we can select the desired data and perform read and write operations on it.

• Comparison between 2D & 2.5D Organizations –


1. In the 2D organization the hardware is fixed, but in 2.5D the hardware changes.
2. The 2D organization requires more gates, while 2.5D requires fewer.
3. 2D is more complex than the 2.5D organization.
4. Error correction is not possible in the 2D organization, but in 2.5D it can be done easily.
5. 2D is more difficult to fabricate than the 2.5D organization.
• 2D Memory Organization:

• Advantages:
• Simplicity: 2D memory organization is a simple and straightforward approach, with
memory chips arranged in a two-dimensional grid.
• Cost-Effective: 2D memory organization is cost-effective, making it a popular choice for
many low-power and low-cost devices.
• Low Power: 2D memory organization has low power consumption, making it ideal for
use in mobile devices and other portable electronics.
• Disadvantages:
• Limited Bandwidth: 2D memory organization has limited bandwidth due to the
sequential access pattern of memory chips, which can lead to slower data transfer rates.
• Limited Capacity: 2D memory organization has limited capacity since it requires
memory chips to be arranged in a two-dimensional grid, limiting the number of memory
chips that can be used.
• Limited Scalability: 2D memory organization is not scalable, making it difficult to
increase memory capacity or performance without adding more memory chips.
• 2.5D Memory Organization:

• Advantages:
• Higher Bandwidth: 2.5D memory organization has higher bandwidth since it uses a high-
speed interconnect between memory chips, enabling faster data transfer rates.
• Higher Capacity: 2.5D memory organization has higher capacity since it can stack multiple
memory chips on top of each other, enabling more memory to be packed into a smaller
space.
• Scalability: 2.5D memory organization is highly scalable, making it easier to increase
memory capacity or performance without adding more memory chips.
• Disadvantages:
• Complexity: 2.5D memory organization is more complex than 2D memory organization
since it requires additional interconnects and packaging technologies.
• Higher Cost: 2.5D memory organization is generally more expensive than 2D memory
organization due to the additional interconnects and packaging technologies required.
• Higher Power Consumption: 2.5D memory organization has higher power consumption
due to the additional interconnects and packaging technologies, making it less ideal for use
in mobile devices and other low-power electronics.
Semiconductor RAM Memories:
• Semiconductor memories are available in a wide range of speeds; their cycle times range from about 100 ns down to 10 ns.

• Memory cells are usually organized in the form of an array, in which each cell is capable of storing one bit of information.
• Each row of cells constitutes a memory word, and all cells of a row are connected to a common line called the word line.
• The cells in each column are connected to a Sense/Write circuit by two bit lines.
• The Sense/Write circuits are connected to the data input/output lines of the chip.
• During a write operation, the Sense/Write circuits receive input information and store it in the cells of the selected word.
• As shown in the figure:
• R/W → specifies the required operation (read or write).
• CS → the Chip Select input selects a given chip in a multi-chip memory system.
SRAM and SRAM Memory Cell

• SRAM memories consist of circuits capable of retaining the stored information as long as power is applied; that means this type of memory requires constant power.
• SRAM memories are used to build cache memory.
SRAM Memory Cell
• The figure shows a cell diagram of SRAM.
• A latch is formed by two inverters connected as shown in the figure.
• Two transistors, T1 and T2, connect the latch to two bit lines.
• These transistors act as switches that can be opened or closed under the control of the word line, which is controlled by the address decoder.
• When the word line is at the 0 level, the transistors are turned off and the latch retains its information.
• SRAM does not require refreshing. For example, the cell is in state 1 if the logic value at point A is 1 and at point B is 0; this state is retained as long as the word line is not activated.
• Read operation: The word line is activated by the address input to the address decoder. The activated word line closes both transistors (switches) T1 and T2. The bit values at points A and B are then transmitted to their respective bit lines, and the Sense/Write circuit at the end of the bit lines sends the output to the processor.
• Write operation: The address provided to the decoder activates the word line to close both switches. The bit value to be written into the cell is then provided through the Sense/Write circuit, and the signals on the bit lines are stored in the cell.
The figure shows a 1-bit SRAM memory cell and an array of SRAM cells.
• DRAM stores binary information in the form of electric charges on capacitors. The charge stored on the capacitors tends to leak away over time, so the capacitors must be periodically recharged to retain the stored data. DRAM therefore requires refreshing. Main memory is generally made up of DRAM chips.

DRAM Memory Cell


• Though SRAM is very fast, it is expensive because each of its cells requires several transistors.
• DRAM is relatively less expensive because it uses one transistor and one capacitor per cell, as shown in the figure below, where C is the capacitor and T is the transistor.
• Information is stored in a DRAM cell in the form of a charge on the capacitor, and this charge needs to be periodically refreshed.
• To store information in the cell, transistor T is turned on and an appropriate voltage is applied to the bit line. This causes a known amount of charge to be stored in the capacitor.
• After the transistor is turned off, the capacitor begins to discharge.
• Hence, the information stored in the cell can be read correctly only if it is read before the charge on the capacitor drops below some threshold value.
Types of DRAM
There are mainly 5 types of DRAM.

•Asynchronous DRAM (ADRAM): The DRAM described above is the asynchronous type of DRAM. The timing of the memory
device is controlled asynchronously. A specialized memory controller circuit generates the necessary control signals to control the
timing. The CPU must take into account the delay in the response of the memory.

•Synchronous DRAM (SDRAM): These RAM chips’ access speed is directly synchronized with the CPU’s clock. For this, the
memory chips remain ready for operation when the CPU expects them to be ready. These memories operate at the CPU-memory bus
without imposing wait states. SDRAM is commercially available as modules incorporating multiple SDRAM chips and forming the
required capacity for the modules.

•Double-Data-Rate SDRAM (DDR SDRAM): This faster version of SDRAM performs its operations on both edges of the clock
signal; whereas a standard SDRAM performs its operations on the rising edge of the clock signal. Since they transfer data on both
edges of the clock, the data transfer rate is doubled. To access the data at a high rate, the memory cells are organized into two groups.
Each group is accessed separately.
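The doubling of the transfer rate can be checked with a quick calculation; the 100 MHz clock and 64-bit bus below are illustrative figures, not specifications of any particular module:

```python
# DDR SDRAM transfers data on both clock edges, doubling the transfer rate
# of a single-data-rate SDRAM at the same clock frequency.
def transfer_rate_bytes_per_s(clock_hz: float, bus_width_bits: int,
                              transfers_per_cycle: int) -> float:
    return clock_hz * transfers_per_cycle * bus_width_bits / 8

sdr = transfer_rate_bytes_per_s(100e6, 64, 1)  # SDRAM: one transfer per cycle
ddr = transfer_rate_bytes_per_s(100e6, 64, 2)  # DDR: both clock edges
print(sdr / 1e6, ddr / 1e6)  # 800.0 1600.0 (MB/s)
```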

•Rambus DRAM (RDRAM): The RDRAM provides a very high data transfer rate over a narrow CPU-memory bus. It uses various
speedup mechanisms, like synchronous memory interface, caching inside the DRAM chips and very fast signal timing. The Rambus
data bus width is 8 or 9 bits.

•Cache DRAM (CDRAM): This memory is a special type of DRAM memory with an on-chip cache memory (SRAM) that acts as a
high-speed buffer for the main DRAM.
Cache Memory in Computer Organization

• Cache Memory is a special very high-speed memory.


• The cache is a smaller and faster memory that stores copies of the data from frequently used main memory
locations.
• The most important use of cache memory is that it is used to reduce the average time to access data from the main
memory.

Characteristics of Cache Memory :

•Cache memory is an extremely fast memory


type that acts as a buffer between RAM and the
CPU.
Fig : Cache Memory
•Cache Memory holds frequently requested data
and instructions so that they are immediately
available to the CPU when needed.

•Cache Memory is used to speed up and synchronize with a high-speed CPU.

•Cache memory is costlier than main memory or disk memory but more economical than CPU registers.
Cache Memory Design Issues:
Cache Memory design represents the following categories:
Block size, Cache size, Mapping function, Replacement
algorithm, and Write policy.

These are as follows,

1. Block Size
• Block size is the unit of data transferred between the cache and main memory. On a storage system, all volumes share the same cache space, so the volumes can have only one cache block size.
• As the block size increases from small to larger sizes, the cache hit ratio initially increases as a result of the principle of locality, because more useful data is brought into the cache with each transfer.
• As the block becomes even larger, however, the hit ratio begins to decrease, because newly fetched data displaces previously cached data that may still be needed.
2. Cache Size
• A small cache keeps access times short, which benefits performance, but if the cache is too small the hit ratio suffers; cache size is therefore a trade-off between speed, cost, and hit ratio.
Fig : Cache Read Operation
3. Mapping Function
• Cache lines (or cache blocks) are the fixed-size blocks in which data is transferred between memory and the cache. When a cache line is copied into the cache from memory, a cache entry is created.
• There are fewer cache lines than memory blocks, which is why we need an algorithm for mapping memory blocks onto cache lines.
• The mapping function determines which memory block goes into which cache line: whenever a cache entry is created, it determines the cache location the block will occupy.
• It also determines, when one block of data is read in, which existing block may be replaced.

For example, if the cache is 64 KB and a cache block is 4 bytes, the cache has 16K lines.
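The 64 KB / 4-byte example works out as follows. The direct-mapped rule shown is one common mapping function, used here purely as an illustration:

```python
# A 64 KB cache with 4-byte blocks has 64*1024 / 4 = 16384 (16K) lines.
cache_size = 64 * 1024   # bytes
block_size = 4           # bytes
num_lines = cache_size // block_size
print(num_lines)         # 16384

# Direct mapping: memory block b goes into cache line (b mod num_lines).
def direct_mapped_line(block_number: int) -> int:
    return block_number % num_lines

print(direct_mapped_line(5))      # 5
print(direct_mapped_line(16384))  # 0: block 16384 wraps back to line 0
```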
4. Replacement Algorithm
• If all the cache slots are already full and we want to read a line from memory, some line already in the cache must be replaced. The replacement algorithm chooses, when a new block is to be loaded into the cache, which existing block to replace. Ideally, we replace the block that will not be needed in the near future.
• A common replacement policy is the least-recently-used (LRU) algorithm, which replaces the block that has gone unused the longest, on the assumption that a recently used block is likely to be used again. Other replacement algorithms include FIFO (First In, First Out) and least-frequently-used (LFU).
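The LRU policy described above can be sketched with a small simulation; the cache capacity and access sequence are made-up examples:

```python
from collections import OrderedDict

# Minimal LRU replacement sketch: the cache holds `capacity` blocks; on a
# miss with a full cache, the least-recently-used block is evicted.
class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.blocks = OrderedDict()  # block_number -> (placeholder) data

    def access(self, block: int) -> bool:
        """Access a block; return True on hit, False on miss."""
        if block in self.blocks:
            self.blocks.move_to_end(block)   # mark as most recently used
            return True
        if len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)  # evict least recently used
        self.blocks[block] = None
        return False

cache = LRUCache(2)
hits = [cache.access(b) for b in [1, 2, 1, 3, 2]]
print(hits)  # [False, False, True, False, False]
```

Accessing block 1 again before loading block 3 makes block 2 the least recently used, so block 2 is the one evicted, and the final access to it misses.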
5. Write Policy
• When we write data to the cache, it must at some point also be written to main memory; the timing of this write is referred to as the write policy.
• In a write-through cache, every write to the cache is immediately followed by a write to main memory.
• In a write-back (or copy-back) cache, writes are not immediately propagated to main memory: writing is done only to the cache, and the cache tracks which locations have been written over, marking them as dirty. Dirty blocks are written back to main memory when they are evicted.
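The contrast between the two write policies can be sketched on a toy one-block cache; the data structures and addresses are illustrative, not a model of real hardware:

```python
# Toy contrast of write-through vs. write-back policies.
main_memory = {0x10: 0}

class WriteThroughCache:
    def __init__(self):
        self.data = {}
    def write(self, addr: int, value: int) -> None:
        self.data[addr] = value
        main_memory[addr] = value        # every write goes straight to memory

class WriteBackCache:
    def __init__(self):
        self.data = {}
        self.dirty = set()
    def write(self, addr: int, value: int) -> None:
        self.data[addr] = value
        self.dirty.add(addr)             # memory is updated only on eviction
    def evict(self, addr: int) -> None:
        if addr in self.dirty:
            main_memory[addr] = self.data[addr]  # write back the dirty block
            self.dirty.discard(addr)
        del self.data[addr]

wb = WriteBackCache()
wb.write(0x10, 42)
print(main_memory[0x10])  # 0: memory is stale until the dirty block is evicted
wb.evict(0x10)
print(main_memory[0x10])  # 42
```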

Cache Performance
• When the processor needs to read or write a location in the main memory, it first checks for a
corresponding entry in the cache.
• If the processor finds that the memory location is in the cache, a Cache Hit has occurred and data
is read from the cache.
• If the processor does not find the memory location in the cache, a cache miss has occurred. For a
cache miss, the cache allocates a new entry and copies in data from the main memory, then the
request is fulfilled from the contents of the cache.
• The performance of cache memory is frequently measured in terms of a quantity called Hit ratio.

Hit Ratio(H) = hit / (hit + miss) = no. of hits/total accesses


Miss Ratio = miss / (hit + miss) = no. of misses/total accesses = 1 - Hit Ratio(H)

We can improve cache performance by using a larger cache block size and higher associativity, and by reducing the miss rate, the miss penalty, and the time to hit in the cache.
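The hit and miss ratio formulas above translate directly into code; the counts used are an invented example:

```python
# Hit ratio H = hits / (hits + misses); miss ratio = 1 - H.
def hit_ratio(hits: int, misses: int) -> float:
    return hits / (hits + misses)

hits, misses = 950, 50        # example counts: 1000 total accesses
h = hit_ratio(hits, misses)
print(h)                      # 0.95
print(round(1 - h, 2))        # miss ratio: 0.05
```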
