
Group Members:

1. Natnael Wondimu
2. Rediet Abayneh
3. Tamrat Hordofa
Classification of memory systems according to their key characteristics
Memory is an essential component for the normal functioning of any computer system. The computer system categorizes memory for different purposes and uses. In this slide, we discuss the classification of memory in detail: types of memory, features of memory, RAM, ROM, SRAM, DRAM, and their advantages and disadvantages.
Characteristics of memory systems
A memory system can be characterized by its location, capacity, unit of transfer, access method, performance, physical type, physical characteristics, and organization.
1. Location:
● It represents the internal or external location of the memory in a computer.
Internal memory is built into the computer and is also known as primary memory. It is used to store data that the system needs at start-up and to run programs such as the operating system. Examples of primary memory are registers, cache, and main memory.
External memory, also known as secondary memory or backing store, is used to store large amounts of data because of its large capacity. At present, its capacity is measured in hundreds of gigabytes or even terabytes.
The important property of external memory is that stored information is not lost when the computer switches off. Examples are disk, tape, and USB pen drives.
2. Capacity:

 It is the most important feature of computer memory.
 Memory capacity is the amount of memory available to an electronic device such as a computer, laptop, smartphone, or other smart device. Every hardware device or computer has a minimum and maximum amount of memory.
 The performance of a device and the efficiency of its input/output operations depend on memory capacity.
 The memory capacity of a device is commonly expressed in bytes, kilobytes, megabytes, gigabytes, or terabytes. A device's memory capacity can be obtained from either the operating system or the motherboard. Storage capacity can vary between external and internal memory.
3. Access Methods:
The access method defines the technique used to store and retrieve data. Access methods have their own data set structures to organize data, system-provided programs (or macros) to define data sets, and utility programs to process data sets.
Memory can be accessed through four methods:
▪ Direct Access Method
▪ Sequential Access Method
▪ Random Access Method
▪ Associative Access Method
Direct Access Method:
In direct access, each block or record has a unique address; the read-write mechanism moves directly to the vicinity of the desired data and then performs a short sequential search. Magnetic disks use this method. (This is distinct from Direct Memory Access, DMA, which is a technique that lets input/output devices transfer data to or from main memory without involving the CPU.)
Sequential Access Method:
The sequential access method reads stored data in a specific linear order, like traversing a singly linked list; magnetic tape is a typical example. The access time depends on the location of the data.
Random Access Method:
This method is the opposite of the sequential access method: any location in memory can be accessed directly, like indexing into an array. Physical locations are independent in this access method. Main memory is accessed this way.
Associative Access Method:
In this method, a word is retrieved based on (a portion of) its contents rather than its address. It is a special type of random access method. A common application of associative access is cache memory.
4. Unit of transfer:
The unit of transfer is the number of bits that can be read from or written into memory at a time. It can differ between internal and external memory.
For internal memory, the unit of transfer equals the number of electrical lines into and out of the memory module. This may equal the word length but is often larger, such as 64, 128, or 256 bits.
For external memory, data are usually transferred in units much larger than a word; these units are referred to as blocks.
5. Performance:
 Performance refers to the speed and efficiency at which a computer system can execute tasks and process data.
 A high-performing computer system is one that performs tasks quickly and efficiently while minimizing the time and resources required to complete them.
The performance of memory is mainly characterized by three parameters:
▪ Access Time
▪ Memory Cycle Time
▪ Transfer Rate
Access Time:
Access time is the elapsed time between the initiation of a request for data and the receipt of the first bit or byte of that data. Direct access devices require varying times to position a read-write head over a particular record. For random-access memory, it is the time from the instant an address is presented to the memory to the instant the read or write operation completes.
Memory Cycle Time:
 Memory cycle time is the minimum time between the start of one memory access and the start of the next.
Memory cycle time = access time + transient time (any additional time required before a second access can commence).
Transfer Rate:
This is the rate at which data can be transferred into or out of a memory unit.
For random access: R = 1 / cycle time.
For non-random access: Tn = Ta + N / R, where
Tn – average time to read or write N bits,
Ta – average access time,
N – number of bits,
R – transfer rate in bits per second (bps).
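The formulas above can be checked with a short worked example. The numbers used here (a 10 ns cycle time, a 5 ms access time, a 1 Gbps transfer rate) are illustrative assumptions, not figures from the slides.

```python
# Worked example of the two transfer-rate formulas above.

def random_access_rate(cycle_time_s):
    """R = 1 / cycle time, in transfers per second."""
    return 1.0 / cycle_time_s

def non_random_access_time(ta_s, n_bits, r_bps):
    """Tn = Ta + N / R: average time to read or write N bits."""
    return ta_s + n_bits / r_bps

# Random access with an assumed 10 ns cycle time: ~1e8 transfers per second.
print(random_access_rate(10e-9))

# Non-random access: assumed 5 ms average access time, 4096 bits at 1 Gbps.
# Tn = 0.005 s + 4096 / 1e9 s = 0.005004096 s.
print(non_random_access_time(5e-3, 4096, 1e9))
```

Note how for a device such as a disk, the access time Ta dominates the total time unless N is very large.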
6. Physical types:
It defines the physical type of memory used in a computer, such as semiconductor, magnetic, magneto-optical, and optical:
Semiconductor: RAM
Magnetic: disk and tape
Optical: CD and DVD
7. Organization:
It defines the physical arrangement of the bits used in memory, i.e., how bits are organized into words.
8. Physical characteristics:
It specifies the physical behavior of the memory, such as volatile, non-volatile, or non-erasable.
 Volatile memory (for example, RAM) requires power to retain stored information; if power is lost, the stored data is lost.
 Non-volatile memory is permanent storage that retains stored information even when the power is off.
 The primary memory is further divided into two parts:
▪ RAM (Random Access Memory)
▪ ROM (Read only Memory)
Random Access Memory (RAM): is one of the fastest types of main memory and is accessed directly by the CPU. It is the hardware in a computer that temporarily stores data, programs, or program results. It is used to read and write data while the machine is running. It is volatile: if a power failure occurs or the computer is turned off, the information stored in RAM is lost. All data stored in RAM can be read or accessed randomly at any time.
There are two types of RAM:
1. Static Random-Access Memory
2. Dynamic Random-Access Memory
Static Random-Access Memory: is a type of RAM that retains stored data as long as the computer system has a power supply, without needing to be refreshed. However, data in SRAM is lost when a power failure occurs.
Dynamic Random-Access Memory: is a type of RAM used for the dynamic storage of data. In DRAM, each cell holds one bit of information and is made up of two parts: a capacitor and a transistor. The capacitor and transistor are so small that millions of cells fit on a single chip. Hence, a DRAM chip can hold more data than an SRAM chip of the same size. However, each capacitor must be continuously refreshed to retain its information. DRAM is volatile: if the power is switched off, the data stored in memory is lost.
Read Only Memory (ROM):
ROM is a memory device or storage medium used to permanently store information inside a chip. It is a read-only memory: stored information, data, or programs can be read but not written or modified. A ROM contains important instructions or program data required to start or boot a computer. It is non-volatile, which means the stored information is not lost even when the power is turned off or the system is shut down.
Types of ROM:
There are five types of Read Only Memory:
1. MROM (Masked Read Only Memory): the oldest type of read-only memory, whose program or data is pre-configured by the integrated circuit manufacturer at the time of manufacturing.
2. PROM (Programmable Read Only Memory): a type of digital read-only memory that the user can program with information or a program only once.
3. EPROM (Erasable and Programmable Read Only Memory):
● A type of read-only memory whose stored data can be erased (by exposure to ultraviolet light) and re-programmed many times.
● It is a non-volatile memory chip that holds data without a power supply and can retain data for a minimum of 10 to 20 years.
4. EEPROM (Electrically Erasable and Programmable
Read Only Memory):
● EEPROM is an electrically erasable and programmable read-only memory; stored data is erased using an electrical charge and can then be re-programmed.
● It is also a non-volatile memory: its data is retained even when the power is turned off.
5. Flash ROM:
● Flash memory is a non-volatile storage chip that can be written or programmed in small units called blocks or sectors.
● Flash memory is a form of EEPROM; its contents are not lost when the power source is turned off. It is also used to transfer data between computers and digital devices.
CACHE MEMORY:
Cache memory is a small, fast type of volatile computer memory that provides high-speed data access to a processor and stores frequently used programs, applications, and data.
 Caching is also a technique in which computer applications temporarily store data in a computer's main memory (i.e., random access memory, or RAM) to enable fast retrieval of that data.
Cache memory is placed between two levels of the memory hierarchy to bridge the gap in access times between the processor and main memory.
Caching Principle:
 The intent of cache memory is to provide the fastest possible access to resources without compromising on the size and price of the memory.
 A processor attempting to read a byte of data first looks in the cache memory. If the byte is not in the cache, the processor looks for it in main memory. Once the byte is found, the block containing a fixed number of bytes is read into the cache memory and then passed on to the processor.
 The probability of finding subsequent bytes in the cache increases, because the block read into the cache earlier contains bytes relevant to the process. This phenomenon is called Locality of Reference or the Principle of Locality.
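The read sequence described above can be sketched in a few lines of Python. The block size and memory contents are made-up values, and a real cache controller implements this in hardware, but the principle is the same.

```python
# Sketch of the caching principle: on a miss, the whole block containing
# the requested byte is copied into the cache, so later requests for
# nearby bytes hit (locality of reference).

BLOCK_SIZE = 4                    # assumed block size in bytes
main_memory = list(range(64))     # fake main memory: byte i holds value i
cache = {}                        # block number -> list of bytes

def read_byte(address):
    block_no = address // BLOCK_SIZE
    if block_no not in cache:                       # miss: fetch whole block
        start = block_no * BLOCK_SIZE
        cache[block_no] = main_memory[start:start + BLOCK_SIZE]
    return cache[block_no][address % BLOCK_SIZE]    # hit: serve from cache

print(read_byte(13))   # miss: block 3 (bytes 12-15) is loaded first
print(read_byte(14))   # hit: byte 14 is already cached, thanks to locality
```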
Cache Memory Design:
How Does a Cache Work?
The details of how cache memory works vary depending on the cache controller and processor.
 The most recently accessed information or instructions help the controller guess which RAM locations may be accessed next, and these are stored in the cache.
 Since there are many elements of cache design, we are going to look at an important one:
Mapping Function:
When a block of data is read from main memory, the mapping function decides which location in the cache is occupied by the read-in block.
There are three mapping functions that can be used to map a block of memory to a cache line:
▪ Direct Mapping
▪ Associative Mapping
▪ Set-associative Mapping
Direct Mapping:
The simplest mapping technique is direct mapping. Direct mapping maps every block of main memory to only a single possible cache line; in simpler words, every memory block is assigned to a certain line in the cache. The line number of the cache to which a given block maps is given by:
Cache line number = (main memory block address) mod (total number of lines in the cache)
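As a quick illustration of this formula (the 4-line cache below is an assumed size for the demo, not a real design):

```python
# Direct mapping: cache line number = block address mod number of cache lines.
NUM_LINES = 4   # illustrative cache size

def cache_line(block_address):
    """Return the single cache line a main-memory block maps to."""
    return block_address % NUM_LINES

# Blocks 0, 4, 8, ... all compete for line 0; blocks 1, 5, 9, ... for line 1.
print([cache_line(b) for b in range(8)])   # [0, 1, 2, 3, 0, 1, 2, 3]
```

This also shows why direct mapping suffers conflict misses: blocks 0 and 4 map to the same line, so they can never be cached at the same time.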
In the direct mapping scheme, each block of main memory maps to only one specific cache location, using an indexing (simple hashing) mechanism. When accessing a main memory address, we consider its three components: the tag, the index, and the offset. The number of bits allocated to each component depends on the sizes of the cache, main memory, and blocks. The tag holds the high-order bits of the memory address and uniquely identifies the memory block; the index bits determine the cache line to which the block is mapped; and the offset bits specify the position of the data within the line.
For example, with a cache of four lines, each main memory address maps to one of the four cache lines using the modulus operation. In general, if there are N lines in the cache, direct mapping computes the index as (block address) mod N, and the index takes log2(N) bits.
To look up a block, the tag stored in the selected cache line is compared with the tag derived from the address. If the tags match, a cache hit occurs and the requested data is retrieved from the cache. If the tags don't match, it's a cache miss.
In this case, the requested memory block isn't present in the cache. The cache controller fetches the data from main memory and stores it in the selected cache line, evicting the existing block from that line and replacing it with the new block. Once the data is fetched from the cache (cache hit) or main memory (cache miss), it's returned to the requesting processor or core.
Let's consider an 8-bit memory address, 01000011, and a cache with 8 lines, each holding a block of 8 words. We use three bits for the index (8 = 2^3 lines) and three for the offset (each line holds 8 = 2^3 words). The remaining two bits are the tag:
 The cache controller retrieves the tag stored in the cache line at the first index, because that's where the index bits 000 point. It then compares it to the tag derived from the memory address, which is 01 in this example. Suppose the tags don't match: the requested memory block is not in the cache, so we have a cache miss.
 The cache controller then fetches the requested data from main memory address 01000011 and stores it in the cache line corresponding to the first index. Any data already present in that cache line is replaced with the new block. If the tags had matched, the controller would have retrieved the word at offset 3 within the first cache line, because that's what the offset bits (011) specify.
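The 2/3/3 tag/index/offset split used in this example can be computed directly with shifts and masks; the following sketch just mirrors the numbers above.

```python
# Decompose the 8-bit address 01000011 into tag | index | offset fields
# (2 + 3 + 3 bits, matching the example above).

OFFSET_BITS = 3   # 8 = 2^3 words per line  -> 3 offset bits
INDEX_BITS = 3    # 8 = 2^3 cache lines     -> 3 index bits

def split_address(addr):
    offset = addr & ((1 << OFFSET_BITS) - 1)
    index = (addr >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)
    tag = addr >> (OFFSET_BITS + INDEX_BITS)
    return tag, index, offset

print(split_address(0b01000011))   # (1, 0, 3): tag 01, index 000, offset 011
```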
Associative Mapping:
"Every memory block can be mapped to any cache line." Fully associative mapping resolves the problem of conflict misses: any block of main memory can be placed in any line of cache memory. For instance, with four cache lines L1-L4, block B0 may be placed in L1, L2, L3, or L4, and similarly for all other blocks. This way, the chances of a cache hit increase significantly.
Set-Associative Mapping:
Set-associative mapping combines direct mapping with fully associative mapping by arranging the lines of a cache into sets. The set is chosen using a direct-mapping scheme, but the lines within each set are treated as a small fully associative cache: any block that maps to the set can be stored in any line within it.
A set-associative cache with k lines per set is known as a k-way set-associative cache. Because the mapping approach uses the memory address just as direct mapping does, the number of lines per set is an integer power of two, for example two, four, eight, or sixteen.
Example: Consider a cache with 2^9 = 512 lines, a block of memory containing 2^3 = 8 words, and a full memory space of 2^30 = 1G words. In a direct mapping scheme, this leaves 30 - 9 - 3 = 18 bits for the tag. By moving from direct mapping to set-associative mapping with two lines per set, the number of sets equals half the number of lines: with 512 lines, we get 256 sets of two lines each, which requires eight bits of the memory address to identify the set. This leaves 30 - 8 - 3 = 19 bits for the tag. Moving to four lines per set reduces the number of sets to 128, needing 7 bits to identify the set and leaving 20 bits for the tag.
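The tag-bit arithmetic in this example can be reproduced programmatically:

```python
# Tag bits for the example above: 2^30-word memory, 2^3-word blocks,
# a 512-line cache, and varying associativity (ways per set).
import math

ADDRESS_BITS = 30   # 1G = 2^30 words of memory
OFFSET_BITS = 3     # 8 = 2^3 words per block
NUM_LINES = 512     # 2^9 cache lines

def tag_bits(ways):
    num_sets = NUM_LINES // ways              # direct mapping: ways = 1
    set_bits = int(math.log2(num_sets))       # bits needed to pick the set
    return ADDRESS_BITS - set_bits - OFFSET_BITS

print(tag_bits(1))   # direct mapped: 30 - 9 - 3 = 18
print(tag_bits(2))   # 2-way set-associative: 30 - 8 - 3 = 19
print(tag_bits(4))   # 4-way set-associative: 30 - 7 - 3 = 20
```

Each doubling of associativity halves the number of sets, trading one set bit for one extra tag bit.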
 Computer memory is generally classified as:
1. Internal Memory and
2. External Memory
Internal Memory:
Internal memory is memory that the processor can access directly, without going through the computer's input-output channels. It is used to store data that the system needs at start-up and to run programs such as the operating system. Internal memory is mostly contained on small microchips that are attached or connected to the computer's motherboard.
The internal memories of a computer are made of semiconductor material, usually silicon. This memory is costlier and usually smaller in size than external memory.
External Memory:
External memory refers to storage devices that are
separate from the main device, and they can be
connected to it externally. These devices are used to
expand the storage capacity or facilitate the transfer of
data between devices.
