
GROUP 03 Optimal Memory

Magnetic Disks

EXTERNAL MEMORY: Flash Memories


RAID Levels
EXTERNAL MEMORY

● Also known as secondary memory or backing store.
● Consists of storage devices not directly accessible by the CPU.
● Used to store huge amounts of data because of its huge capacity.
● An important property of external memory is that stored information is not lost when the computer switches off.
● External memory can be categorized into four parts: magnetic disks, optical memory, flash memories, and RAID arrays.
NAME / REG. NUMBER / PROGRAM
1. Celistine Chipangura R223844F HCS
2. Lister Chabaya R223968Y HCS
3. Ropafadzo K Gobvu R227926M HCS
4. Daniel Maonga R201473H CTHSC
5. Tadiwanashe Magaracha R223880K HCS
6. Gilbert T Mashawi R223989Z HCS
7. Kudzai Mamutse R223828U HCS
8. Henry Panashe Sithole R223902P HCS
9. Phylosipy Betera R223886G HCS
10. Craig Mazorodze R217022N HCS
11. Tawanda Mudamburi R224004T HCS
12. Mcdonald A Mpofu R199178D HCS
13. Panashe P Tiriboyi R223944L HCS
14. Panashe A Chinyerere R223948U HCS
15. Tafadzwa Sigauke R223898J HCS
16. Brandon K Mhako R223931W HCS
17. Simon Mkaro R223872N HCS
18. Tinashe Chinhamo R205996A HCS
19. Mudyahope Takudzwa J R228160L HCS
20. Stanley B Mberengwa R223916R HCS
21. Gashirai Kamucha R228657J HCS
22. Mutsawashe Mubvakure R195817P CTHSC
23. Emmanuel Mutangiri R223912V HCS
24. Ruvimbo Utahwarova R227582A HCS
25. Solomon Mupona R223993C HCS
26. Timothy T Muparuri R223875T HCS
27. Phillip M Gamunorwa R223846B HCS
28. Tadiwa A Machiri R223826H HCS
29. Samuel Sithole R223866E HCS
30. Wisdom Marore R207521B HCS
31. Denzel T. Mashozhera R223840K HCS
32. Sithole Wilbert R223832Y HCS
MAGNETIC DISK
In modern computers, most of the secondary storage is in the
form of magnetic disks. Hence, knowing the structure of a
magnetic disk is necessary to understand how the data in the
disk is accessed by the computer.

A magnetic disk contains several platters. Each platter is divided into circular tracks. The tracks near the centre are shorter than the tracks farther from the centre. Each track is further divided into sectors, as shown in the figure.

Tracks at the same distance from the centre form a cylinder. A read-write head is used to read data from a sector of the magnetic disk.
DISK STRUCTURE
Seek time – the time taken by the R-W head to reach the desired track from its current position.
Rotational latency – the time taken for the desired sector to rotate under the R-W head.
Data transfer time – the time taken to transfer the required amount of data; it depends on the rotational speed.
Controller time – the processing time taken by the disk controller.
Average access time = seek time + average rotational latency + data transfer time + controller time.
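The access-time formula above can be sketched directly in code. The drive parameters below (seek time, RPM, transfer and controller overheads) are illustrative assumptions, not figures from any real datasheet.

```python
def avg_rotational_latency_ms(rpm):
    """Average rotational latency = half of one full revolution."""
    ms_per_revolution = 60_000 / rpm
    return ms_per_revolution / 2

def avg_access_time_ms(seek_ms, rpm, transfer_ms, controller_ms):
    """Average access time = seek + avg rotational latency + transfer + controller."""
    return seek_ms + avg_rotational_latency_ms(rpm) + transfer_ms + controller_ms

# Example: a 7200 RPM drive with 9 ms average seek time,
# 0.5 ms data transfer time and 0.1 ms controller overhead.
latency = avg_rotational_latency_ms(7200)       # ~4.17 ms
total = avg_access_time_ms(9, 7200, 0.5, 0.1)   # ~13.77 ms
```

Note that the rotational latency term alone (about 4 ms here) dominates everything except the seek, which is why rotational speed matters so much for random access.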
MAGNETIC DISK STRUCTURE
The speed of the disk is measured by two parameters:

Transfer rate: the rate at which data moves from the disk to the computer.
Random access time: the sum of the seek time and the rotational latency.

Seek time is the time taken by the arm to move to the required track. Rotational latency is the time taken for the required sector in the track to rotate under the head.

Even though the disk is physically arranged as sectors and tracks, the data is logically arranged and addressed as an array of fixed-size blocks. The size of a block can be 512 or 1024 bytes. Each logical block is mapped sequentially to a sector on the disk. In this way, each sector on the disk has a logical address.
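A minimal sketch of this logical-to-physical mapping: converting a logical block address (LBA) into a (cylinder, head, sector) triple, assuming an idealized geometry with a fixed number of heads and sectors per track. The geometry values used in the example are illustrative only.

```python
def lba_to_chs(lba, heads, sectors_per_track):
    """Standard LBA -> CHS conversion: logical blocks are numbered
    sequentially across sectors, then heads, then cylinders."""
    cylinder = lba // (heads * sectors_per_track)
    head = (lba // sectors_per_track) % heads
    sector = (lba % sectors_per_track) + 1   # sectors are 1-based by convention
    return cylinder, head, sector

# With 16 heads and 63 sectors per track, logical block 2000 maps to:
c, h, s = lba_to_chs(2000, 16, 63)   # (1, 15, 48)
```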
DISK SCHEDULING

Operating systems use disk scheduling to decide the order in which pending I/O requests for the disk are served. Disk scheduling is also called I/O scheduling.

Disk scheduling is important because:

1. Multiple I/O requests may arrive from different processes, and only one I/O request can be served at a time by the disk controller. The other I/O requests must wait in the waiting queue and be scheduled.

2. Two or more requests may be far apart on the disk, resulting in greater disk arm movement.

3. Hard drives are among the slowest parts of the computer system and thus need to be accessed in an efficient manner.
DISK SCHEDULING ALGORITHMS INCLUDE

First Come First Serve (FCFS)

FCFS is the most straightforward of all the disk scheduling algorithms. Requests in the disk queue are handled in the order they are received.

Advantages
● Every request gets a fair chance
● No indefinite postponement

Disadvantages
● Does not try to optimize seek time
● May not provide the best possible service

Shortest Seek Time First (SSTF)

In SSTF, requests with the shortest seek times are carried out first. Each request's seek time in the queue is computed beforehand, and requests are scheduled in accordance with that seek time. Hence, the request closest to the disk arm is carried out first. SSTF is generally superior to FCFS since it reduces the system's average response time and boosts system throughput.

Advantages
● Average response time decreases
● Throughput increases

Disadvantages
● Overhead to calculate seek times in advance
● Can cause starvation for a request if it has a higher seek time than incoming requests
● High variance of response time, as SSTF favours only some requests

Last In First Out (LIFO)

The LIFO algorithm prioritizes serving newer jobs over older ones: the most recent or most recently entered jobs are served first, followed by the remaining jobs in the same sequence.

Advantages
● Maximizes locality and resource utilization

Disadvantages
● Can seem unfair to other requests; if new requests keep coming in, it causes starvation of the old and existing ones
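The difference between FCFS and SSTF can be sketched by summing the head movement each policy produces. The request queue and starting head position below are the classic textbook example, chosen only for illustration.

```python
def fcfs_movement(requests, head):
    """Serve requests in arrival order; sum absolute track distances."""
    total = 0
    for track in requests:
        total += abs(track - head)
        head = track
    return total

def sstf_movement(requests, head):
    """Always serve the pending request closest to the current head position."""
    pending = list(requests)
    total = 0
    while pending:
        nearest = min(pending, key=lambda t: abs(t - head))
        total += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]
fcfs = fcfs_movement(queue, 53)   # 640 tracks of head movement
sstf = sstf_movement(queue, 53)   # 236 tracks of head movement
```

SSTF cuts total head movement from 640 tracks to 236 here, which is exactly the average-response-time benefit described above; the greedy `min` step is also where its starvation risk comes from.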
DISK SCHEDULING ALGORITHMS INCLUDE

SCAN

In the SCAN algorithm, the disk arm moves in one direction and serves the requests coming in its path. Whenever it reaches the disk's end, it changes direction and again serves the requests coming in its path. Because it functions like an elevator, this algorithm is also known as the elevator algorithm. As a result, requests arriving just behind the disk arm are delayed, while those near the middle of the disk receive more attention.

Advantages
● High throughput
● Low variance of response time
● Average response time

Disadvantages
● Long waiting time for requests for locations just visited by the disk arm

C-SCAN

After changing direction, the SCAN algorithm scans the previously scanned path once more. So it is possible that there are no, or very few, outstanding requests in the just-scanned region, while too many requests are waiting at the other end. The C-SCAN algorithm avoids these circumstances by directing the disk arm to jump to the opposite end of the disk and begin processing requests there, rather than serving requests on the return sweep. The disk arm thus moves in a circular fashion, and since this technique is similar to the SCAN algorithm, it is known as C-SCAN (Circular SCAN).

Advantages
● Provides more uniform wait time compared to SCAN
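SCAN and C-SCAN can be sketched the same way. The sketch assumes the head initially moves toward higher track numbers, tracks run from 0 to `max_track`, and the travel to the disk's end (and the wrap-around jump for C-SCAN) is counted in the total; conventions for counting that travel vary between textbooks.

```python
def scan_movement(requests, head, max_track):
    """Elevator: sweep toward the last track, then reverse and sweep down."""
    up = sorted(t for t in requests if t >= head)
    down = sorted((t for t in requests if t < head), reverse=True)
    total, pos = 0, head
    for t in up:
        total += abs(t - pos); pos = t
    if down:                        # reverse only if requests remain below
        total += max_track - pos    # travel to the disk's end first
        pos = max_track
        for t in down:
            total += abs(t - pos); pos = t
    return total

def cscan_movement(requests, head, max_track):
    """Circular SCAN: sweep up to the end, jump to track 0, sweep up again."""
    up = sorted(t for t in requests if t >= head)
    wrapped = sorted(t for t in requests if t < head)
    total, pos = 0, head
    for t in up:
        total += abs(t - pos); pos = t
    if wrapped:
        total += (max_track - pos) + max_track   # to the end, then wrap to 0
        pos = 0
        for t in wrapped:
            total += abs(t - pos); pos = t
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]
scan = scan_movement(queue, 53, 199)     # 331 tracks
cscan = cscan_movement(queue, 53, 199)   # 382 tracks
```

C-SCAN moves the head slightly more here, but every request waits at most one full sweep, which is the "more uniform wait time" advantage noted above.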
DISK MANAGEMENT
The range of services and add-ons provided by modern operating systems is constantly expanding, but four basic management functions are implemented by all operating systems. These management functions are briefly described below to give the following overall context. The four main operating system management functions (each of which is dealt with in more detail in different places) are:
● Process Management
● Memory Management
● File and Disk Management
● I/O System Management
DISK MANAGEMENT OF THE OPERATING
SYSTEM INCLUDES:
● Disk Format

● Booting from disk

● Bad block recovery


THE LOW-LEVEL FORMAT OR PHYSICAL FORMAT:
Divides the disk into sectors before storing data so that the disk controller can read and write them. Each sector typically holds a header, a data area (usually 512 bytes of data), and a trailer with error-correction code (ECC).

To use a disk to hold files, the operating system records its own data structures on the disk. This is conducted in two stages:

1. Partition the disk into one or more groups of cylinders. Each group is treated as a logical disk.

2. Logical format, or "create a file system": the OS stores the initial file-system data structures on the disk, including maps of free and allocated space.

For efficiency, most file systems group blocks into clusters. Disk I/O runs in blocks; file I/O runs in clusters.
BOOT BLOCK
● When the computer is turned on or restarted, the initial bootstrap program stored in ROM finds the location of the OS kernel on the disk, loads the kernel into memory, and starts the OS.
● Changing the bootstrap code would require changing the ROM hardware chip, so only a small bootstrap loader program is stored in ROM instead.
● The full bootstrap code is stored in the “boot block” of the disk.
● A disk with a boot partition is called a boot disk or system disk.
BAD BLOCKS
● Disks are error-prone because their moving parts have small tolerances.
● Most disks even leave the factory with bad blocks, which are handled in a variety of ways.
● The controller maintains a list of bad blocks.
● The controller can instruct each bad sector to be logically replaced with one of the spare sectors. This scheme is known as sector sparing or forwarding.
● A soft error triggers the data recovery process.
● However, unrecoverable hard errors may result in data loss and require manual intervention.
There is no guarantee that files, especially large files, will be stored in contiguous locations on a physical disk drive. It depends greatly on the amount of space available: when the disk is nearly full, new files are more likely to be recorded in multiple locations. As far as the user is concerned, however, the file abstraction provided by the operating system hides the fact that the file is fragmented into multiple parts.

The operating system needs to track the location on disk of every part of every file. In some cases, this means tracking hundreds of thousands of files and file fragments on a single physical disk. Additionally, the operating system must be able to locate each file and perform read and write operations on it whenever it needs to. Therefore, the operating system is responsible for configuring the file system, ensuring the safety and reliability of read and write operations to secondary storage, and maintaining access times (the time required to write data to or read data from secondary storage).
SWAP SPACE MANAGEMENT
Swapping is a mechanism in which a process can be swapped temporarily out of main memory (or
move) to secondary storage (disk) and make that memory available to other processes. At some
later time, the system swaps back the process from the secondary storage to main memory.

Swap space is a space on a hard disk where the swapped-out processes are stored.

Though swapping usually affects performance, it helps in running multiple large processes in parallel.
SWAP SPACE
● Whenever the computer runs short of physical memory, it uses its virtual memory and stores information on disk.
● Virtual memory is a combination of RAM and disk space that running processes
can use. Swap space is the portion of virtual memory on the hard disk, used
when RAM is full.
● The total time taken by swapping process includes the time it takes to move the
entire process to a secondary disk and then to copy the process back to memory,
as well as the time the process takes to regain main memory.
● Note that it may be safer to overestimate than to underestimate the amount of
swap space required because if a system runs out of swap space, it may be forced
to abort processes or may crash entirely. Overestimation wastes disk space that
could otherwise be used for files, but it does no other harm.
● Throughput is a measure of how many units of information a system can process
in a given amount of time.
SWAP SPACE
Swap-Space Management:

● Swap-space management is another low-level task of the operating system.
● Disk space is used as an extension of main memory by the virtual memory.
● Since disk access is much slower than memory access, heavy swapping significantly decreases system performance.
● We want the best possible throughput from our systems, so the goal of the swap-space implementation is to provide the virtual memory with the best throughput.

Swap-Space Use:

● Swap space is used by different operating systems in various ways.
● Systems that implement swapping may use swap space to hold the entire process image, including code and data segments.
● Paging systems may simply store pages that have been pushed out of main memory.
● The swap space needed on a system can vary from megabytes to gigabytes, depending on the amount of physical memory, the amount of virtual memory it is backing, and the way the virtual memory is used.
WHERE DOES THE SWAP SPACE RESIDE?
Swap space can reside in one of two places: the normal file system, or a separate disk partition.

Normal File System

● If the swap space is simply a large file within the file system, normal file routines can be used to create it, name it, and allocate its space.
● This technique is easy but inefficient.
● Suffers from external fragmentation.

Separate Disk Partition

● No file system or directory structure is placed in this space.
● A separate swap-space manager is used to allocate and deallocate blocks from the raw partition.
● This manager uses algorithms optimized for speed rather than storage efficiency.
● Suffers from internal fragmentation.
DISK RELIABILITY
● Disk reliability refers to the ability of the disk system to accommodate a single or multi-disk failure and still remain available to the users
● Disk failures can be caused by a variety of factors, including physical damage, manufacturing defects, and wear and tear.
● There are several types of disk failures, including logical failures, physical failures, and firmware failures.
● Predicting disk failures can be done through methods such as SMART monitoring, maintenance, and failure rate analysis.
● Improving disk reliability can be achieved by using high-quality disks, monitoring disk health and implementing redundancy.
● Backup and recovery are crucial in ensuring disk reliability, as they allow for data recovery in the event of a disk failure.
● Different types of backups include full backups, incremental backups, and differential backups.
● Testing backups and recovery procedures is important to ensure they are effective and reliable.
● Overall, disk reliability is a critical issue for businesses and individuals who rely on data storage and access. By understanding the common causes and types of disk failures, and implementing strategies for improving reliability, backup, and recovery, it is possible to reduce the risk of data loss and ensure that data is always available when needed.
OPTICAL MEMORY

● Optical memory is an electronic storage medium that uses a laser beam to


store and retrieve digital (binary) data
● In optical storage technology, a laser beam encodes digital data on an
optical disc or laser disc in the form of tiny pits arranged in a spiral pattern
on the surface of the disc.
● Digital Versatile Disk (DVD), Compact Disk (CD), and Blu-Ray Disk are examples of optical storage devices.
OPTICAL STORAGE SYSTEM
● An optical-disk storage system consists of a rotating disk, which is
coated with a thin metal or any other material that is highly
reflective.
● Laser beam technology is used for recording/reading data on a disk.
● Due to this, optical disks are also known as laser disks or optical
laser disks.
TYPES OF OPTICAL MEMORY
● The optical memory can be classified into three categories:

1. CD (Compact Disk)
2. DVD (Digital Video Disk)
3. Blu-Ray Disk
COMPACT DISK

● CD has a diameter of 12 cm. The track on the CD is spiral-shaped, with around 20,000
windings.
● It is affordable, portable, and has a storage capacity of approximately 700 MB
● It is classified into three types:
1. CD-ROM: Compact Disk Read-Only Memory, on which the manufacturer has pre-stored data.
2. CD-R: Compact Disk-Recordable; data is written onto it only once by the user and cannot be rewritten.
3. CD-RW: Compact Disk-ReWritable; data can be modified multiple times, i.e., it can be read, erased, and written again and again.
DIGITAL VERSATILE DISK
● DVD is also known as digital versatile disk.
● This is a different kind of optical disk that holds data and multimedia
content. It is the second form of optical disc to be introduced, following
the compact disk.
● A DVD provides approximately 4.7 GB to 17 GB of storage space.
● DVD is also classified into mainly three parts.
1. DVD-ROM: DVD-Read Only Memory permits the manufacturer to store
data/information within it at the time of disk manufacturing and does not
allow the disk to be rewritten.
2. DVD-R: DVD-Recordable is supplied to the user blank, and the user can store data on it only once.
3. DVD-RW: DVD-ReWritable provides users with the ability to read and write data many times.
BLU-RAY DISK
● It is regarded as the third generation of compact disk technology,
following the launch of CDs and DVDs.
● It provides up to about 128 GB of storage space.
● Unlike CDs and DVDs, which employ a red laser beam for read/write operations, Blu-ray disks use a blue-violet laser with a wavelength of 405 nm. Because of this shorter wavelength, the tracks on the disk can be wound more closely than with the longer-wavelength beam used by CDs and DVDs. As a result, it has a larger storage capacity and can hold more data than CDs and DVDs.
ADVANTAGES AND DISADVANTAGES
ADVANTAGES
• The cost-per-bit of storage for optical disks is very low.
• The use of a single spiral track makes optical disks an ideal storage medium for reading large blocks of sequential data such as music.
• Due to their compact size and light weight, optical disks are easy to handle, store, and port from one place to another.

DISADVANTAGES
• Some optical disks are read-only (permanent) storage media: data, once recorded, cannot be erased, and hence those disks cannot be reused.
• The data access speed for optical disks is slower than for magnetic disks.
• They require a more complicated drive mechanism than magnetic disks.
FLASH MEMORIES
• Flash memory is a type of non-volatile computer storage that retains data even when the power
is turned off. It is widely used in computer architecture, ranging from USB drives and solid-
state drives (SSDs) to embedded systems and mobile devices.

• Flash memory works on the principle of EEPROM. EEPROM stands for Electrically Erasable Programmable Read-Only Memory.
• Plain ROM can be written only once and read many times, and it cannot be erased. Flash memory, by contrast, can be erased multiple times, so the data or program stored in it can be updated.
• This gives flexibility for updating programs, a feature that ROM does not have.
FEATURES OF FLASH MEMORY
 Non-volatile: There is no loss of data when the electricity supply is removed.
 Solid-state: Being solid-state technology, it is faster than HDD-type storage.
 Fast access times: As solid-state storage, it has fast access times.
 Large storage capacity: Flash memory devices can store large amounts of data, from a few GB (gigabytes) to several TB (terabytes).
 Low power consumption: Unlike an HDD, flash memory has no read/write head or other mechanical components, so it uses less electricity to read data.
 Flexibility of erase and write operations: Flash memory can be electrically erased multiple times and read multiple times, so it offers more flexibility for read/write operations.
APPLICATIONS OF FLASH MEMORY
● Used in SSDs: Flash memory is used in SSDs to increase the speed of read/write operations.
● Embedded systems: Flash memory is used in embedded systems. Examples: digital cameras,
camcorders, MP3 players etc.
● Smartphones and tablets: Flash memory is used in smartphones and tablets.
● USB drives: Flash memory is commonly used in USB drives.
STRUCTURE OF FLASH MEMORY:
 Flash memory is composed of memory cells organized into blocks and
sectors.
 Each memory cell in flash memory consists of a floating-gate transistor
that stores charge to represent binary data (0s and 1s).
 The cells are arranged in a grid-like structure, with each cell capable of
holding multiple bits of data (typically 1, 2, or 3 bits per cell).
 Blocks are a higher-level organization of memory cells, each usually consisting of multiple pages.
 A page (sometimes called a sector) is the smallest writable unit and typically contains a few kilobytes of data.
OPERATION OF FLASH MEMORY:
 Writing: To write data to flash memory, an electrical charge is
applied to the floating gate of a memory cell, trapping electrons
inside. The presence or absence of charge determines the stored
data.
 Erasing: Unlike writing, erasing flash memory is performed at a
block level. The entire block is erased by removing the charge from
all memory cells within the block, resetting them to an erased state
(typically all 1s).
 Reading: Reading data from flash memory involves sensing the
electrical charge of each memory cell to determine the stored data.
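The write/erase asymmetry described above can be illustrated with a toy model of a flash block. This is a deliberately simplified sketch: programming can only drive a cell from the erased state (1) to 0, and only a whole-block erase can restore cells to 1. The page and block sizes are made-up illustrative values.

```python
class FlashBlock:
    """Toy model of one NAND-style flash block."""

    def __init__(self, pages=4, page_bits=8):
        # An erased cell reads as 1, so a fresh block is all 1s.
        self.pages = [[1] * page_bits for _ in range(pages)]

    def write(self, page, data):
        """Programming can only drive cells from 1 to 0."""
        current = self.pages[page]
        if any(c == 0 and d == 1 for c, d in zip(current, data)):
            raise ValueError("cannot set a 0 cell back to 1 without an erase")
        self.pages[page] = list(data)

    def erase(self):
        """Erase works on the whole block, resetting every cell to 1."""
        self.pages = [[1] * len(p) for p in self.pages]

block = FlashBlock()
block.write(0, [1, 0, 1, 0, 1, 1, 0, 0])   # OK: only 1 -> 0 transitions
block.erase()                               # whole-block erase: all cells back to 1
```

Trying to rewrite a programmed page with a pattern that turns any 0 back into a 1 raises an error in this model, which is exactly why real flash controllers erase a block before reusing it.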
TYPES OF FLASH MEMORY
● NAND flash memory: NAND flash memory has a high memory-cell density, so it has a high capacity to store data. Memory cards, USB drives, and SSDs (Solid State Drives) use this type of flash memory. It uses little power to store data, so it has low power consumption. A NAND flash memory cell is made up of two gates:
Control gate
Floating gate
● NOR flash memory: NOR flash memory is made up of FGMOS transistors (Floating-Gate Metal-Oxide-Semiconductor Field-Effect Transistors). An FGMOS is an electronic component that can hold a 0 or a 1, so the memory cell in NOR flash memory is the FGMOS. Cells of NOR flash memory are attached in parallel, so the read speed of this type of flash memory is faster than NAND flash.
● 3D flash memory: This is a newer type of flash memory with a memory-cell density even greater than NAND flash memory. It is used in high-capacity SSDs.
DIFFERENCE BETWEEN NAND AND
NOR FLASH MEMORY
KEY FEATURES AND ADVANTAGES:
 Non-Volatile: Flash memory retains data even when the power supply is disconnected,
making it suitable for storage applications where data persistence is crucial.
 High-Speed Access: Flash memory provides fast read access times, allowing for quick
retrieval of data.
 Durability: Unlike mechanical hard drives, flash memory has no moving parts, making it
resistant to shock, vibration, and mechanical failures.
 Low Power Consumption: Flash memory operates at low power levels, making it
suitable for battery-powered devices.
 Compact Size: Flash memory is available in small form factors, making it ideal for
portable devices with limited physical space.
BENEFITS OF FLASH MEMORY

Large storage capacity: Flash memory has a high memory density, so it can store a high volume of data.
High speed: Some flash memory has a parallel architecture of memory cells, giving faster read and write operations.
Persistent data: Like HDDs, it persists data without a supply of electricity.
Low power consumption: Flash memory has no mechanical components like an HDD's, so it consumes less power than HDDs.
LIMITATIONS OF FLASH MEMORY
● Limited lifespan: Each electrical write cycle slightly wears the hardware, so flash memory supports only a limited number of program/erase cycles.
● Slower write speeds: Writing data with an electrical pulse each time can take longer than writing to RAM.
● Limited storage capacity: Flash memory has a high storage density, but less than some other storage devices such as HDDs or tape drives.
● Data corruption: If the power supply is accidentally cut off while writing to flash memory, data can be lost or corrupted.
RAID LEVELS
Redundant Array of Independent Disks (RAID) is a setup consisting of multiple disks for data storage.

• RAID's two primary underlying concepts are:
• Distributing data over multiple hard drives improves performance.
• Using multiple drives properly allows for any one drive to fail without loss of data and without system downtime.
• There are several ways to implement a RAID array, using a combination of mirroring, striping, duplexing, and parity technologies.
RAID 0(STRIPING)
● data is evenly distributed (stripes data ) across multiple
disks
● the focus is solely on speed and performance
● multiple disks do reading and writing operations
simultaneously
● No capacity overhead (the total capacity of all drives is usable) and allows expanding storage
● relatively easy to set up
● any drive failure will cause complete data loss across all
drives
● less reliable than a single drive.
● Not suitable for critical data
● No redundancy, fault tolerance, or parity in its composition
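RAID 0's round-robin striping can be sketched in a few lines. The disk count and stripe size below are illustrative choices, not fixed by the RAID 0 definition.

```python
def stripe(data, num_disks, stripe_size):
    """Split data into fixed-size stripes and deal them round-robin across disks."""
    disks = [bytearray() for _ in range(num_disks)]
    for i in range(0, len(data), stripe_size):
        chunk = data[i:i + stripe_size]
        disks[(i // stripe_size) % num_disks].extend(chunk)
    return [bytes(d) for d in disks]

# 12 bytes across 3 disks with 2-byte stripes:
disks = stripe(b"ABCDEFGHIJKL", num_disks=3, stripe_size=2)
# disks == [b"ABGH", b"CDIJ", b"EFKL"]
```

Because consecutive stripes land on different disks, a large sequential read can pull from all three disks at once, which is the performance gain described above; losing any one disk, however, destroys every file.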
RAID 1(MIRRORING)
● Data is duplicated (mirrored) across two or more disks.
● primary goal of RAID 1 is to provide redundancy
● It reduces the risk of data loss and downtime, since a failed drive can be replaced by its replica.
● Provides fault tolerance
● provides a high level of data protection
● Increased read performance
● no increase in storage capacity
● Uses only half of the storage capacity
● More expensive (needs twice as many drives)
● Used for mission-critical storage that requires minimal risk of data loss, e.g. accounting systems
RAID 5(STRIPING WITH
DISTRIBUTED PARITY)
● Data is striped across multiple disks, with parity
information distributed across all drives.
● Offers a balance of performance, capacity, and
redundancy.
● high performance rates due to fast and reliable read
data transactions which can be done simultaneously
by different drives in the array.
● Parity bits are distributed evenly on all disks after
each sequence of data has been saved.
● Tolerates single drive failure without data loss.
● One drive's worth of capacity is consumed by parity.
● More complex to implement.
● often used for file and application servers because
of its high efficiency and optimized storage
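The parity that makes single-drive failure survivable is a plain XOR across the data stripes: XOR-ing the surviving stripes with the parity stripe reproduces the missing one. A minimal sketch with made-up two-byte stripes:

```python
def xor_bytes(a, b):
    """Byte-wise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def parity(stripes):
    """Parity stripe = XOR of all data stripes."""
    p = bytes(len(stripes[0]))
    for s in stripes:
        p = xor_bytes(p, s)
    return p

def rebuild(surviving_stripes, parity_stripe):
    """Recover the single missing stripe from the survivors plus parity."""
    return parity(surviving_stripes + [parity_stripe])

data = [b"\x0f\x33", b"\xf0\x55", b"\xaa\x99"]
p = parity(data)

# Lose the second stripe, then rebuild it from the other two plus parity:
recovered = rebuild([data[0], data[2]], p)
assert recovered == data[1]
```

This also explains the RAID 5 write penalty: every small write must update its parity stripe too, and why losing a second drive is fatal (one XOR equation cannot recover two unknowns, which is what RAID 6's second parity addresses).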
RAID 6(STRIPING WITH DOUBLE
PARITY)
● Similar to RAID 5, but with additional
parity information for enhanced fault
tolerance.
● Can tolerate the failure of two drives
without data loss
● Good storage efficiency when more than four drives are used.
● Provides better protection against data
loss compared to RAID 5, especially in
large capacity arrays.
● Fast read operations.
● Slow write performance.
● Complex to implement.
RAID 10(STRIPING AND
MIRRORING)
● A combination of two different RAID levels: RAID 0
and RAID 1.
● Data is striped across mirrored pairs of drives
● Offers high performance and redundancy, requiring a
minimum of four drives.
● Provides both speed and fault tolerance, but at the
expense of usable capacity (half of the total drive
capacity).
● Fast read and write operations.
● Fast rebuild time.
● Costly (compared to other RAID levels).
● used in use cases that require storing high volumes of
data, fast read and write times, and high fault tolerance
e.g. email servers, web hosting servers, and databases.
RAID 50 AND RAID 60

These are combinations of RAID 5 and RAID 0 (RAID 50) or RAID 6 and RAID 0 (RAID 60). They provide a balance of performance, redundancy, and capacity by striping data across multiple RAID 5 or RAID 6 arrays.

The choice of RAID level depends on factors such as performance requirements, fault tolerance, and budget.

There are also less common RAID levels such as RAID 2, RAID 3, and RAID 4.
COMPARISONS OF RAID LEVELS
THE END
