Module 4: Memory System
Memory system: basic concepts, semiconductor RAM, ROM, cache memories,
improving cache performance, virtual memory, memory management
requirements, associative memories, secondary storage devices
Memory System
Basic Concepts
Primary Memory
+ Primary memory, also known as main memory, is the memory that
is directly accessible to the CPU. It is the fastest type of memory,
but it is also the most expensive. Primary memory is typically
made up of semiconductor RAM (random access memory) and
ROM (read-only memory).
+ RAM is volatile memory, which means that it loses its data when
the power is turned off. In its common dynamic form, the memory
cells must also be refreshed constantly in order to retain their data.
RAM is used to store the operating system, applications, and data
that is currently being used by the CPU.
+ ROM is non-volatile memory, which means that it retains its data
even when the power is turned off. ROM is typically used to store
the computer's BIOS (basic input/output system) and other
firmware.
Secondary Memory
+ Secondary memory, also known as auxiliary memory, is slower
than primary memory, but it is much cheaper and can store much
more data. Secondary memory devices are used to store data that
is not currently being used by the CPU, such as files, applications,
and databases.
Common secondary memory devices include:

+ Hard disk drives (HDDs): HDDs are the most common type of
secondary storage device. They use magnetic platters to store
data. HDDs are relatively slow, but they are also very durable and
can store large amounts of data.
+ Optical discs (CDs, DVDs, and Blu-ray discs): Optical discs use
lasers to read and write data. They are slower than HDDs, but they
are more portable and can be removed from the computer.
+ Solid state drives (SSDs): SSDs use flash memory to store data.
They are much faster than HDDs, but they are also more
expensive. SSDs are typically used as the primary storage device
in modern laptops and smartphones.
Memory Hierarchy
The memory hierarchy is a system of organizing memory that takes
advantage of the different speeds and costs of different types of
memory. The memory hierarchy is typically divided into three levels:
+ Level 1 (L1) cache: L1 cache is the smallest and fastest type of
memory. It is located on the CPU chip and stores frequently
accessed data and instructions.
+ Level 2 (L2) cache: L2 cache is larger and slower than L1 cache. It
was historically located on the motherboard; in modern processors
it sits on the CPU chip alongside the cores.
+ Main memory: Main memory is the largest and slowest type of
memory. It is typically made up of semiconductor RAM.
When the CPU needs to access data, it first checks the L1 cache. If the
data is not in the L1 cache, the CPU checks the L2 cache. If the data is
not in the L2 cache, the CPU accesses main memory.
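The lookup order described above can be sketched in Python. This is an illustrative model, not real hardware: the caches are plain dictionaries, and the cycle costs are assumed round numbers rather than measured figures.

```python
# Hypothetical sketch of the hierarchy lookup order described above.
# Cache levels are modelled as dicts mapping address -> value;
# the access costs (in cycles) are illustrative assumptions.

def read(address, l1, l2, main_memory):
    """Return (value, cycles): check L1, then L2, then main memory."""
    if address in l1:
        return l1[address], 1          # L1 hit: fastest path
    if address in l2:
        l1[address] = l2[address]      # promote the line into L1
        return l2[address], 10         # L2 hit: slower than L1
    value = main_memory[address]       # miss in both caches
    l1[address] = value                # fill L1 on the way back
    return value, 100                  # main-memory access: slowest

l1, l2 = {}, {0x40: "cached in L2"}
main = {0x40: "cached in L2", 0x80: "only in main memory"}

print(read(0x40, l1, l2, main))   # L2 hit, costs 10 cycles
print(read(0x40, l1, l2, main))   # now an L1 hit, costs 1 cycle
print(read(0x80, l1, l2, main))   # goes all the way to main memory
```

The second access to the same address is cheap precisely because the first access copied the data into L1, which is the behaviour the hierarchy relies on.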
The memory hierarchy allows the computer to achieve good
performance without having to use expensive, fast memory for all of its
data.

Memory Management
+ Memory management is the process of allocating and managing
memory resources. The operating system is responsible for
memory management.
+ When a program starts, the operating system allocates it a block of
memory. The operating system also tracks which programs are
using which blocks of memory.
+ If a program needs more memory than is available, the operating
system can swap some of the program's memory to secondary
storage. When the program needs to access the swapped-out
memory, the operating system swaps it back into main memory.
+ Memory management is a complex task, but it is essential for the
efficient operation of a computer system.
Electronic Disk
Magnetic Disk
Optical Disk
Magnetic Tapes

Semiconductor Memories
Semiconductor RAM:
Dynamic RAM
+ Dynamic RAM (DRAM) is the most common type of semiconductor
RAM used for primary memory. It is made up of semiconductor
chips that contain millions of tiny memory cells, each built from a
transistor and a capacitor. Each cell can store a single bit of data.
+ DRAM is volatile memory, which means that it loses its data when
the power is turned off. In addition, the capacitors in DRAM leak
charge, so the cells need to be constantly refreshed in order to
retain their data.
How DRAM works
+ DRAM works by storing data in the form of an electrical charge on
a capacitor. Each transistor in a DRAM chip has a capacitor
associated with it. The capacitor can be charged or discharged to
represent a binary 1 or 0, respectively.
+ In order to read data from DRAM, the voltage on the capacitor is
sensed. If the voltage is above a certain threshold, the bit is
interpreted as a 1. If the voltage is below the threshold, the bit is
interpreted as a 0.
+ To write data to DRAM, the capacitor is charged or discharged,
depending on the value of the bit to be written.
DRAM refreshing
+ As mentioned above, DRAM is volatile memory, which means that
it loses its data when the power is turned off. This is because the
capacitors in DRAM gradually lose their charge over time.
+ To prevent data loss, DRAM chips need to be refreshed
periodically. Refreshing involves recharging the capacitors in
DRAM to their original voltage levels.
+ DRAM chips are refreshed by the memory controller, which is a
chip on the motherboard that is responsible for managing the
memory system. The memory controller refreshes all of the DRAM
chips on the motherboard at regular intervals.
DRAM types
+ There are two main families of DRAM in common use: synchronous
DRAM (SDRAM) and its double data rate (DDR) variants.
+ Single data rate SDRAM is the older type. It is slower than DDR
DRAM, but it is also cheaper.
+ DDR DRAM is a newer type that transfers data on both the rising
and falling edges of the clock, so it moves data twice as fast as
single data rate SDRAM at the same clock frequency. DDR DRAM
is also more expensive.
+ DDR DRAM is the most common type of DRAM used in modern
computers.

Static RAM
Static RAM (SRAM) is a type of semiconductor RAM that uses a flip-flop
circuit to store each bit of data. The flip-flop circuit has two stable
states, which are read as 1 or 0. To support these states, the circuit
typically requires six transistors: four to store the bit and two to control
access to the cell.
SRAM Cell
SRAM is volatile memory like DRAM: it loses its data when the power is
turned off. Unlike DRAM, however, it does not need to be refreshed,
because the flip-flop circuit in each SRAM cell holds the data in a
stable state for as long as power is supplied.
SRAM is faster than DRAM, but it is also more expensive and takes up
more chip area per bit. SRAM is typically used for cache memory and
other applications where high performance is required.
Here is a more detailed explanation of how SRAM works:
+ Reading data from SRAM: When the CPU needs to read data from
SRAM, it sends a read signal to the SRAM controller. The SRAM
controller then selects the desired SRAM cell and reads the data
from the flip-flop circuit.
+ Writing data to SRAM: When the CPU needs to write data to
SRAM, it sends a write signal to the SRAM controller along with
the data to be written. The SRAM controller then writes the data to
the flip-flop circuit in the desired SRAM cell.
SRAM is a versatile type of memory that can be used in a variety of
applications. It is typically used for cache memory, but it can also be
used for main memory, frame buffers, and other applications where high
performance is required.
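The bistable behaviour of the SRAM cell's cross-coupled inverters can be sketched in a few lines. This is a deliberately simplified logical model, not a circuit simulation: each inverter is just a Boolean `not`, and "settling" means iterating the pair until the state is self-consistent.

```python
# Toy model of the bistable storage element in an SRAM cell: two
# cross-coupled inverters hold one bit for as long as power is applied.
# This is an illustrative logical sketch, not a circuit-level simulation.

def settle(q, q_bar):
    """Iterate the two cross-coupled inverters until the pair is stable."""
    for _ in range(4):                 # a few passes are enough to settle
        q, q_bar = (not q_bar), (not q)
    return q, q_bar

def write_bit(bit):
    """Force the cell to a state, then let the inverters latch it."""
    return settle(bit, not bit)

q, q_bar = write_bit(True)
print(q, q_bar)    # (True, False): the cell holds a 1 without refreshing
q, q_bar = settle(q, q_bar)
print(q, q_bar)    # still (True, False): the state is self-reinforcing
```

The key property the sketch shows is that a complementary state feeds back on itself: each inverter's output keeps the other's input where it is, which is why SRAM needs no refresh.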
Here are some of the advantages of SRAM:
+ Fast access times
+ No refresh required
+ Low standby power consumption
+ High reliability
Here are some of the disadvantages of SRAM:
+ Expensive
+ Requires more chip area per bit than DRAM
+ Volatile: loses its data when power is removed
Overall, SRAM is a high-performance memory that is well-suited for
applications where speed and reliability are critical.
Semiconductor ROM (Read-Only Memory)
Semiconductor ROM (read-only memory) is a type of non-volatile
memory that is used to store data that needs to be permanent, such as
the BIOS (basic input/output system) and firmware. ROM is made up of
semiconductor chips that contain millions of tiny transistors. Each
transistor can store a single bit of data.
There are several types of ROM:

+ Masked ROM: Masked ROM is programmed during the
manufacturing process. Once the ROM is programmed, the data
cannot be changed.
Example: boot code, BIOS data, security credentials in SIM cards
+ Programmable ROM (PROM): PROM is programmed using a
special device called a PROM programmer. Once the PROM is
programmed, the data cannot be changed.
Example:
Storage of Game Data: In video game cartridges, PROMs were
used to store the game's code, graphics, and other data necessary
for the game to run. This data is programmed into the PROM
during the manufacturing process.
Non-Volatile Storage: PROMs are non-volatile, meaning that the
data programmed into them remains intact even when the power is
turned off. This property is essential for video game cartridges
because players expect their saved progress to be preserved, and
the game code to remain intact between sessions.
Read-Only Data: PROMs are read-only memory, which means that
players couldn't alter the game code or data on the cartridge. This
ensured the integrity of the game and prevented tampering.
+ Erasable programmable ROM (EPROM): EPROM can be erased
using ultraviolet (UV) light and then reprogrammed using a PROM
programmer.
+ Electrically erasable programmable ROM (EEPROM): EEPROM
can be erased and reprogrammed electrically.
EEPROM, together with its flash-memory variant, is the most common
type of rewritable ROM used in modern computers.

ROM is used in a variety of applications, including:
+ Computers: ROM is used to store the BIOS and firmware.
+ Microcontrollers: ROM is used to store the program code that
controls the microcontroller.
+ Consumer electronics: ROM is used to store the firmware that
controls devices such as TVs, DVD players, and microwave ovens.
ROM is a reliable and durable type of memory. It is also relatively
inexpensive to manufacture. However, ROM is slower than RAM and it is
not possible to change the data stored in ROM without using a special
device or erasing the ROM.
Cache Memory
Cache memory is a small, high-speed memory that is located between
the CPU and primary memory. It stores frequently accessed data and
instructions so that the CPU can access them quickly.
[Figure: the CPU, cache memory, primary memory, and secondary
memory, in order of increasing distance from the processor]
Cache memory
Cache memory is typically made up of SRAM, which is faster than
DRAM but also more expensive. The cache stores copies of the most
frequently accessed data and instructions from primary memory. When
the CPU needs to access data or instructions, it first checks the cache. If
the data or instructions are in the cache, the CPU can access them very
quickly. If the data or instructions are not in the cache, the CPU must
access them from primary memory, which is much slower.
Improving Cache Performance
There are a number of ways to improve cache performance, including:
+ Increasing the size of the cache: A larger cache can store more
frequently accessed data and instructions, which can improve
performance.
+ Using multiple levels of cache: Multiple levels of cache can
reduce the number of times that the CPU needs to access primary
memory. For example, a computer might have two levels of
cache: L1 cache and L2 cache. L1 cache is smaller and faster than
L2 cache. The CPU first checks the L1 cache for data and
instructions. If the data or instructions are not in the L1 cache, the
CPU checks the L2 cache.
+ Using sophisticated cache algorithms: Sophisticated cache
algorithms are advanced techniques used in computer systems to
manage and optimize the performance of cache memory. Cache
memory is a small, high-speed storage component that stores
frequently accessed data to reduce the latency of accessing that
data from slower main memory or other storage devices.
Sophisticated cache algorithms are designed to make efficient and
intelligent decisions about what data to store in the cache and
when to evict or update that data. Some of these advanced cache
algorithms include:
LRU (Least Recently Used): LRU is a classic cache
replacement policy that removes the least recently accessed
item when the cache is full. While LRU is simple and
intuitive, it may not always be the most efficient choice for all
workloads.
LFU (Least Frequently Used): LFU removes the item from
the cache that has been accessed the least number of times.
LFU aims to keep items that are accessed frequently in the
cache.
LRU-K: LRU-K is an extension of the LRU algorithm that
considers the last K accesses to determine which item to
evict. By taking into account a longer history of accesses, it
can be more effective than basic LRU in some cases.
MRU (Most Recently Used): MRU removes the most recently
accessed item when the cache is full. MRU has its use
cases, especially when you want to prioritize keeping the
most recent data in the cache.
ARC (Adaptive Replacement Cache): ARC combines
elements of LRU and LFU to provide a more adaptive cache
management algorithm. It dynamically adjusts the size of the
LRU and LFU segments based on the access patterns of the
data.
CLOCK (or Second Chance): CLOCK is a simplified
algorithm that maintains a circular buffer of cache items. It
considers both recency and a reference bit to make eviction
decisions. Items with a reference bit set are given a "second
chance" before eviction.
MQ (Multi-Queue): MQ is a multi-level caching algorithm that
divides the cache into multiple queues or tiers, each with a
different eviction policy. This approach allows for more
fine-grained control over cache management.
Random Replacement: This simple cache replacement
strategy selects items to evict at random. While not as
sophisticated as some other algorithms, it can be surprisingly
effective in certain scenarios and can be computationally
efficient.
Two-Queue: This algorithm divides the cache into two
queues, a frequently used queue and a not-so-frequently
used queue. Data is promoted from the latter to the former
based on access patterns.
2Q (Two Queues): 2Q is an improved version of the Two-
Queue algorithm that uses a probationary cache and a
protected cache to differentiate between new and long-term
residents of the cache.
+ The choice of a cache algorithm depends on the specific use
case, workload, and system requirements. Sophisticated
cache algorithms aim to maximize cache hit rates and
minimize cache miss penalties to improve overall system
performance.
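As a concrete example, the classic LRU policy from the list above can be sketched in a few lines using the standard library's `OrderedDict`; the capacity of 2 is chosen only to make the eviction visible.

```python
# Minimal LRU cache sketch using the standard library's OrderedDict,
# whose insertion order doubles as a recency order.

from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None                     # cache miss
        self.data.move_to_end(key)          # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)   # evict the least recently used

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")          # "a" is now the most recently used
cache.put("c", 3)       # evicts "b", the least recently used
print(cache.get("b"))   # None: "b" was evicted
print(cache.get("a"))   # 1: "a" survived because it was touched recently
```

Note how the intermediate `get("a")` changes which item gets evicted: recency, not insertion order, decides the victim.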
Cache Hit Rate
The cache hit rate is the percentage of times that the CPU can access
data or instructions from the cache. A high cache hit rate can improve
performance significantly.
Cache Miss Rate
The cache miss rate is the percentage of times that the CPU cannot
access data or instructions from the cache. A high cache miss rate can
reduce performance significantly.
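These two rates, combined with an assumed hit time and miss penalty, give the standard average memory access time (AMAT) formula. The access counts and cycle figures below are illustrative, not measurements of any real system.

```python
# Worked example of cache hit rate, miss rate, and average memory
# access time (AMAT). Counts and cycle times are assumed values.

hits, misses = 950, 50
total = hits + misses

hit_rate = hits / total           # fraction of accesses served by the cache
miss_rate = misses / total        # fraction that fall through to memory

# AMAT = hit time + miss rate * miss penalty
hit_time, miss_penalty = 1, 100   # cycles (illustrative figures)
amat = hit_time + miss_rate * miss_penalty

print(f"hit rate {hit_rate:.0%}, miss rate {miss_rate:.0%}, AMAT {amat:.1f} cycles")
# prints: hit rate 95%, miss rate 5%, AMAT 6.0 cycles
```

Even with a 95% hit rate, the large miss penalty dominates the average: cutting the miss rate in half helps far more than shaving a cycle off the hit time.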
Cache Replacement Algorithms
When the cache is full and the CPU needs to store new data or
instructions, the cache must replace some of the existing data or
instructions. The cache replacement algorithm determines which data or
instructions to replace.
Common cache replacement algorithms include:
+ Least recently used (LRU): The LRU algorithm replaces the data
or instructions that have been accessed least recently.
+ Least frequently used (LFU): The LFU algorithm replaces the data
or instructions that have been accessed least frequently.
+ Random replacement algorithm: The random replacement
algorithm randomly replaces data or instructions from the cache.
The best cache replacement algorithm depends on the specific
application.

Conclusion
Cache memory is an important component of modern computer
systems. It can improve performance significantly by storing frequently
accessed data and instructions in high-speed memory. There are a
number of ways to improve cache performance, including increasing the
size of the cache, using multiple levels of cache, and using sophisticated
cache algorithms.
Virtual Memory:
+ Virtual memory is a technique that allows a computer to use more
memory than is physically available. This is done by storing part of
the memory on a secondary storage device, such as a hard disk
drive or solid-state drive. When the CPU needs to access data that
is not in primary memory, the operating system transfers it from
secondary storage to primary memory.
+ Virtual memory is implemented using a technique called paging.
Paging divides the virtual memory space into pages, which are
typically 4 kilobytes (KB) in size. The operating system maintains a
page table, which maps each virtual page to a physical page in
primary memory.
+ When the CPU needs to access a virtual page, it first checks the
page table to see if the page is in primary memory. If the page is in
primary memory, the CPU can access it directly. If the page is not
in primary memory, the operating system generates a page fault.
+ When a page fault occurs, the operating system transfers the page
from secondary storage to primary memory. The operating system
then updates the page table to map the virtual page to the physical
page in primary memory.
+ Virtual memory allows computers to run programs that are larger
than the amount of physical memory available. It also allows
multiple programs to run at the same time, even if the total
memory requirements of the programs exceed the amount of
physical memory available.
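The page-table lookup and page-fault path described above can be sketched as follows. The page size matches the 4 KB figure in the text, but the addresses, frame numbers, and the `disk` dict are invented for illustration.

```python
# Sketch of the paging translation and page-fault handling described
# above. Addresses, frame numbers, and the "disk" contents are made up.

PAGE_SIZE = 4096                       # 4 KB pages, as in the text

page_table = {0: 7}                    # virtual page 0 -> physical frame 7
disk = {1: 3}                          # virtual page 1 is on disk; frame 3 is free

def translate(virtual_address):
    page = virtual_address // PAGE_SIZE
    offset = virtual_address % PAGE_SIZE
    if page not in page_table:         # page fault: bring the page in
        page_table[page] = disk.pop(page)
        print(f"page fault on virtual page {page}")
    return page_table[page] * PAGE_SIZE + offset

print(translate(100))     # page 0 is resident: 7 * 4096 + 100
print(translate(5000))    # page 1 faults, then maps to frame 3
```

The second access to any address on page 1 would no longer fault, because the fault handler updated the page table, exactly as the bullet points describe.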
[Figure: virtual memory pages mapped onto physical memory]
Advantages of virtual memory:
+ Virtual memory allows computers to run programs that are larger
than the amount of physical memory available.
+ Virtual memory allows multiple programs to run at the same
time, even if the total memory requirements of the programs
exceed the amount of physical memory available.
+ Virtual memory can improve performance by reducing the number
of times that the CPU has to access secondary storage.
Disadvantages of virtual memory:
+ Virtual memory can reduce performance if the page fault rate is
high.
+ Virtual memory requires additional hardware support, such as a
memory management unit (MMU).
Overall, virtual memory is a valuable technique that allows computers to
run more powerful and flexible programs.
Memory Allocation
The operating system is responsible for allocating memory to processes.
When a process is started, the operating system allocates it a block of
memory. The size of the memory block depends on the needs of the
process.
The operating system uses a variety of algorithms to allocate memory to
processes. Some common memory allocation algorithms include:
+ First-fit: The first-fit algorithm allocates the first memory block that
is large enough to hold the process.
+ Best-fit: The best-fit algorithm allocates the smallest memory block
that is large enough to hold the process.
+ Worst-fit: The worst-fit algorithm allocates the largest memory
block that is available.
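The three policies can be sketched as functions that pick a free block from a list of free block sizes. The sizes and the request size below are illustrative; each function returns the index of the chosen block, or `None` if nothing fits.

```python
# Sketch of the three allocation policies over a list of free block sizes.
# Block sizes and the request are illustrative; each function returns the
# index of the chosen free block, or None if no block is large enough.

def first_fit(free_blocks, size):
    for i, block in enumerate(free_blocks):
        if block >= size:
            return i                   # first block that fits wins
    return None

def best_fit(free_blocks, size):
    candidates = [i for i, b in enumerate(free_blocks) if b >= size]
    return min(candidates, key=lambda i: free_blocks[i], default=None)

def worst_fit(free_blocks, size):
    candidates = [i for i, b in enumerate(free_blocks) if b >= size]
    return max(candidates, key=lambda i: free_blocks[i], default=None)

free = [100, 500, 200, 300, 600]
print(first_fit(free, 212))   # 1: the 500 KB block is the first that fits
print(best_fit(free, 212))    # 3: the 300 KB block is the tightest fit
print(worst_fit(free, 212))   # 4: the 600 KB block is the largest
```

The same request lands in a different block under each policy, which is the whole point of the comparison: first-fit is fastest to decide, best-fit minimizes leftover space per allocation, worst-fit keeps the leftover fragments large.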
Memory Swapping
If a process needs more memory than is available in primary memory,
the operating system can swap the process to secondary storage.
Swapping involves moving a process from primary memory to secondary
storage and then moving it back to primary memory when it is needed.
Swapping can reduce performance, but it is necessary to allow multiple
processes to run at the same time.

Memory Protection
The operating system is responsible for preventing processes from
accessing each other's memory. This is important to ensure that
processes cannot interfere with each other or corrupt each other's data.
The operating system uses a variety of techniques to protect memory,
including:
+ Memory segmentation: Memory segmentation divides the memory
space of a process into segments. Each segment has its own
permissions, which control which processes can access the
segment and how they can access it.
+ Memory paging: Memory paging divides the memory space of a
process into pages. Each page has its own permissions, which
control which processes can access the page and how they can
access it.
+ Memory management unit (MMU): The MMU is a hardware device
that helps the operating system to manage memory. The MMU
translates virtual addresses (addresses used by processes) into
physical addresses (addresses used by the hardware). The MMU
can also be used to enforce memory protection permissions.
Conclusion
Memory management is an important task that the operating system
performs. By carefully managing memory, the operating system can
ensure that processes have the memory they need to run and that
processes cannot interfere with each other.
Associative memories are a type of memory that stores and retrieves
data based on its content, rather than its address. This means that
associative memories can be used to search for data without knowing its
physical location in memory, which makes them very fast for search
operations.

In hardware, associative memories are typically implemented as
content-addressable memory (CAM): every stored word is compared
against the search key in parallel by dedicated comparison circuitry. The
term is also used for neural-network models of memory, which are
trained on a set of data so that the network can retrieve stored patterns
from partial or noisy inputs.
Associative memories are used in a variety of applications, including:
+ Content-addressable memory (CAM): CAM is a type of memory
that is used to search for data based on its content. CAM is
typically used in networking applications, such as routing tables
and packet filtering.
+ Translation lookaside buffers (TLBs): TLBs are used to translate
virtual addresses to physical addresses. TLBs are typically used in
computer processors to improve performance.
+ Pattern matching: Associative memories can be used to match
patterns in data. This is useful for applications such as image
recognition and speech recognition.
+ Natural language processing: Associative memories can be used
to store and process natural language data. This is useful for
applications such as machine translation and text summarization.
Here are some of the advantages of associative memories:
+ Very fast search operations
+ Can be used to search for data without knowing its physical
location in memory
+ Can be used to match patterns in data
+ Can be used to store and process natural language data

Here are some of the disadvantages of associative memories:
+ Can be complex to implement
+ Can be expensive to implement
+ Can be slow to train
+ Can be susceptible to noise
Overall, associative memories are a powerful type of memory that can
be used for a variety of applications.
Here is an example of how an associative memory could be used in a
real-world application:
A company that sells books could use an associative memory to store
information about its books, such as the title, author, genre, and
publication date. The company could then use the associative memory
to search for books based on different criteria, such as the title, author,
or genre. This would allow the company to quickly find the books that its
customers are looking for.
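The bookstore scenario can be sketched as content-based lookup in Python. A sequential scan stands in for the parallel comparison that CAM hardware performs in a single step, and the book records are invented for illustration.

```python
# Sketch of content-based lookup: every stored record is matched
# against the search criteria (a simple scan stands in for the
# hardware's parallel comparators). The book data is made up.

books = [
    {"title": "Dune", "author": "Frank Herbert", "genre": "sci-fi"},
    {"title": "Emma", "author": "Jane Austen", "genre": "classic"},
    {"title": "Hyperion", "author": "Dan Simmons", "genre": "sci-fi"},
]

def search_by_content(records, **criteria):
    """Return every record whose fields match all the given criteria."""
    return [r for r in records
            if all(r.get(k) == v for k, v in criteria.items())]

print(search_by_content(books, genre="sci-fi"))          # both sci-fi books
print(search_by_content(books, author="Jane Austen"))    # found by author
```

The caller never supplies an address or index, only content: that is the defining property of associative lookup, and what real CAM hardware accelerates by comparing all entries simultaneously.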
Secondary Storage Devices
Secondary storage devices are used to store large amounts of data that
are not frequently accessed. Common secondary storage devices
include:
+ Hard disk drives (HDDs)
+ Optical disks (CDs, DVDs, and Blu-ray discs)
+ Magnetic tape
Hard disk drives (HDDs):

[Figure: a computer hard drive with its cover removed]
+ HDDs are the most common type of secondary storage device.
They use magnetic platters to store data. The platters are coated
with a magnetic material, and data is stored by magnetizing and
demagnetizing the material.
+ HDDs are relatively slow, but they are also very durable and can
store large amounts of data. HDDs are typically used to store data
such as operating systems, applications, and files.
Optical disks (CDs, DVDs, and Blu-ray discs)
Optical Disks Optical disks, such as CDs, DVDs, and Blu-ray discs, store
data using lasers. The laser burns pits into the surface of the disk, and
data is stored in the pattern of the pits.
Optical disks are relatively slow, but they are also portable and can be
easily removed from the computer. Optical disks are typically used to
store data such as music, movies, and software.

Magnetic tape
Magnetic tape is a long strip of plastic that is coated with a magnetic
material. Data is stored on the tape by magnetizing and demagnetizing
the material.
Magnetic tape is the slowest type of secondary storage device, but it is
also very durable and can store very large amounts of data cheaply.
Magnetic tape is typically used to back up data or to store data that is
not frequently accessed.
Other Secondary Storage Devices
Other secondary storage devices include:

+ Solid state drives (SSDs): SSDs use flash memory to store data.
SSDs are much faster than HDDs, but they are also more
expensive. SSDs are typically used as the primary storage device
in modern laptops and smartphones.
+ Cloud storage: Cloud storage is secondary storage hosted on
remote servers rather than on a local device. It is accessed over
the internet and can be used to store data from anywhere in the
world. Cloud storage is typically used to store data such as files,
photos, and videos.

Conclusion
Secondary storage devices are an important part of any computer
system. They allow users to store large amounts of data that are not
frequently accessed. The best type of secondary storage device for a
particular application depends on the specific requirements of the
application.