
Reliability and availability have become increasingly important in today's computer-dependent world. In
many applications where computers are used, outages or malfunctions can be expensive, or even disastrous.
Just imagine the computer system in a nuclear plant malfunctioning, or the computer systems in a space
shuttle rebooting just as the shuttle is about to land... These are the more exotic examples. Closer to
everyday life are the telecommunications switching systems and the bank transaction systems.

To achieve the needed reliability and availability, we need fault-tolerant computers. They have the ability
to tolerate faults by detecting failures and isolating defective modules, so that the rest of the system can
operate correctly. Reliability techniques have also become of increasing interest to general-purpose computer
systems. Four trends contribute to this:

• The first is that computers now have to operate in harsher environments. Earlier, computers
operated in clean computer rooms, with a stable climate and clean air. Now computers have
moved out into industrial environments, with widely varying temperatures, dust, humidity and
unstable power supply. Any of these factors alone could make a computer fail.

• Second, the users have changed. Earlier, computer operators were trained personnel. Now, with
an increasing number of users, the typical user knows less about proper operation of the system.
The consequence is that computers have to be able to tolerate more. Haven't we all seen users
swearing over a document that disappeared in a text editor (Backup? What is that?), or heard about
people who accidentally poured coffee into the computer?

• Third, service costs have increased relative to hardware costs. Earlier, the average machine was a
very expensive, big monster. At that time, it was common to have one or several dedicated operators
keeping the system up and running. Today, a computer is cheap, and the user has the job of being
the "operator". The user cannot afford frequent calls for field service.

• The fourth and last trend is larger systems. As systems become larger, there are more components
that can fail. This means that, to keep reliability at an acceptable level, designs have to tolerate
faults resulting from component failures.

So, what can cause outages of equipment, making fault-tolerance techniques necessary? We can split
them into outages caused by:

• Environment: These are facility failures, e.g. dust, fire in the machine room, problems with the cooling,
earthquakes or sabotage.

• Operations: Procedures and activities of normal system administration, system configuration and system
operation. This can be installation of a new operating system (which requires rebooting the machine), or
installation of new application programs (which requires exiting and restarting the programs in use).

• Maintenance: This does not include software maintenance, but could be hardware upgrading.

• Hardware: Hardware device faults.

• Software: Faults in the software.


• Process: Outages due to something else, e.g. a strike.

It is interesting to note that, contrary to common assumptions, few outages are caused by hardware faults. In
a modern system, fault tolerance masks most hardware faults, and the percentage of outages caused by
hardware faults is decreasing. On the other hand, outages caused by software faults are increasing.
According to a study on Tandem systems [4], the percentage of outages caused by hardware faults was
30% in 1985, but had decreased to 10% in 1989. Outages caused by software faults increased in the same
period, from 43% to over 60%!

Operating system

An Operating System (OS) is an interface between a computer user and the computer hardware. An operating
system is software that performs all the basic tasks such as file management, memory management,
process management, handling input and output, and controlling peripheral devices such as disk drives
and printers.

An operating system is software that enables applications to interact with a computer's hardware.
The software that contains the core components of the operating system is called the kernel. The primary
purposes of an operating system are to enable applications (software) to interact with a computer's
hardware and to manage a system's hardware and software resources. Some popular operating systems
include Linux, Windows, VMS, OS/400, AIX, z/OS, etc. Today, operating systems are found in almost every
device: mobile phones, personal computers, mainframe computers, automobiles, TVs, toys, etc.

Operating System Generations

Operating systems have been evolving over the years. We can categorise this evolution into different
generations, which are briefly described below:

0th Generation

The term 0th generation refers to the early period of computing, from Charles Babbage's invention of the
Analytical Engine to John Atanasoff's computer of 1940. The hardware component technology of the electronic
machines of this period was the vacuum tube. No operating system was available for the computers of this
generation, and programs were written in machine language. The computers of this generation were
inefficient and dependent on the varying competencies of the individual programmers, who also acted as
operators.

First Generation (1951-1956)

The first generation marked the beginning of commercial computing, including the introduction of Eckert
and Mauchly's UNIVAC I in early 1951 and, a bit later, the IBM 701. System operation was performed
with the help of expert operators and, for a time, without the benefit of an operating system, even though
programs began to be written in higher-level, procedure-oriented languages, and thus the operator's
routine expanded. Later, mono-programmed operating systems were developed, which eliminated some of
the human intervention in running jobs and provided programmers with a number of desirable functions.
These systems still operated under the control of a human operator, who followed a number of steps to
execute a program. The FORTRAN programming language was developed by John W. Backus in 1956.

Second Generation (1956-1964)

The second generation of computer hardware was most notably characterised by transistors replacing
vacuum tubes as the hardware component technology. One of the first operating systems, GMOS, was
developed by General Motors for its IBM computer. GMOS was a single-stream batch processing system:
it collected all similar jobs into groups or batches and then submitted them to the operating system on
punch cards, to be run one after another. After one job completed, the system was cleaned up and the next
job was read in from its punch cards and initiated. Researchers also began to experiment with
multiprogramming and multiprocessing in their computing services, called time-sharing systems. A noteworthy
example is the Compatible Time-Sharing System (CTSS), developed at MIT during the early 1960s.

Third Generation (1964-1979)

The third generation officially began in April 1964 with IBM’s announcement of its System/360 family of
computers. Hardware technology began to use integrated circuits (ICs) which yielded significant
advantages in both speed and economy. Operating system development continued with the introduction
and widespread adoption of multiprogramming. The idea of taking fuller advantage of the computer’s
data channel I/O capabilities continued to develop. Another advance, which later led to the personal
computers of the fourth generation, was the development of minicomputers, beginning with the DEC PDP-1.
The third generation was an exciting time, indeed, for the development of both computer hardware and the
accompanying operating systems.

Fourth Generation (1979 – Present)

The fourth generation is characterised by the appearance of the personal computer and the workstation.
The component technology of the third generation was replaced by very large scale integration (VLSI).
Many of the operating systems we use today, such as Windows, Linux and macOS, were developed in the
fourth generation.

Following are some of the important functions of an operating system:

Memory Management

Processor Management

Device Management

File Management

Network Management

Security

Control over system performance

Job accounting
Error detecting aids

Coordination between other software and users

Memory Management

Memory management refers to management of Primary Memory or Main Memory. Main memory is a
large array of words or bytes where each word or byte has its own address.

Main memory provides fast storage that can be accessed directly by the CPU. For a program to be
executed, it must be in main memory. An Operating System does the following activities for memory
management −

• Keeps track of primary memory, i.e., which parts of it are in use and by whom, and which parts are not
in use.

• In multiprogramming, the OS decides which process will get memory, when, and how much.

• Allocates memory when a process requests it.

• De-allocates memory when a process no longer needs it or has been terminated.

Processor Management

In a multiprogramming environment, the OS decides which process gets the processor, when, and for how
much time. This function is called process scheduling. An Operating System does the following activities
for processor management −

• Keeps track of the processor and the status of processes. The program responsible for this task is known
as the traffic controller.

• Allocates the processor (CPU) to a process.

• De-allocates the processor when a process no longer requires it.

Device Management

An Operating System manages device communication via their respective drivers. It does the following
activities for device management −

• Keeps track of all devices. The program responsible for this task is known as the I/O controller.

• Decides which process gets the device, when, and for how much time.

• Allocates devices in an efficient way.

• De-allocates devices.

File Management
A file system is normally organized into directories for easy navigation and usage. These directories may
contain files and other directories. An Operating System does the following activities for file management −

• Keeps track of information, location, usage, status, etc. The collective facilities are often known as the
file system.

• Decides who gets the resources.

• Allocates the resources.

• De-allocates the resources.

Other Important Activities

Following are some of the important activities that an Operating System performs −

• Security − By means of passwords and other similar techniques, it prevents unauthorized access to
programs and data.

• Control over system performance − Recording delays between requests for a service and responses
from the system.

• Job accounting − Keeping track of time and resources used by various jobs and users.

• Error detecting aids − Production of dumps, traces, error messages, and other debugging and error
detecting aids.

• Coordination between other software and users − Coordination and assignment of compilers,
interpreters, assemblers and other software to the various users of the computer system.

Components of an operating system

An Operating System has various components that perform well-defined tasks. Though most operating
systems differ in structure, logically they have similar components. Each component must be a
well-defined portion of the system that appropriately describes its functions, inputs, and outputs.

The following are the eight components of an Operating System:

Process Management

I/O Device Management

File Management

Network Management

Main Memory Management

Secondary Storage Management


Security Management

Command Interpreter System

Process Management

A process is a program, or a fraction of a program, that is loaded into main memory. A process needs certain
resources, including CPU time, memory, files, and I/O devices, to accomplish its task. The process
management component manages the multiple processes running simultaneously on the Operating
System.

A program in running state is called a process. The operating system is responsible for the following
activities in connection with process management:

• Create, load, execute, suspend, resume, and terminate processes.

• Switch the system among multiple processes in main memory.

• Provides communication mechanisms so that processes can communicate with each other.

• Provides synchronization mechanisms to control concurrent access to shared data, to keep shared
data consistent.

• Allocate/de-allocate resources properly to prevent or avoid deadlock situations.
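
As a minimal sketch of some of these activities (creating a process, executing a new program in it, and waiting for it to terminate), the following C fragment uses the POSIX fork(), execlp() and waitpid() calls. The program being run ("ls") is just an arbitrary example; this illustrates the Unix-style interface, not the internal mechanism of any particular operating system named above.

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();                  /* create a new process            */
        if (pid < 0) {
            perror("fork");                  /* creation failed                 */
            return 1;
        }
        if (pid == 0) {
            /* Child: replace its image with an arbitrary program ("ls"). */
            execlp("ls", "ls", "-l", (char *)NULL);
            perror("execlp");                /* reached only if exec fails      */
            _exit(127);
        }
        /* Parent: suspend (wait) until the child terminates. */
        int status;
        waitpid(pid, &status, 0);
        printf("child %d terminated\n", (int)pid);
        return 0;
    }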

I/O Device Management

One of the purposes of an operating system is to hide the peculiarities of specific hardware devices from
the user. I/O Device Management provides an abstraction level over hardware devices and hides the details
from applications, to ensure proper use of devices, to prevent errors, and to provide users with a convenient
and efficient programming environment.

Following are the tasks of I/O Device Management component:

• Hide the details of hardware devices.

• Manage main memory for the devices using caching, buffering, and spooling.

• Maintain and provide custom drivers for each device.

File Management

File management is one of the most visible services of an operating system. Computers can store
information in several different physical forms; magnetic tape, disk, and drum are the most common
forms. A file is defined as a set of correlated information and it is defined by the creator of the file.
Mostly files represent data, source and object forms, and programs. Data files can be of any type like
alphabetic, numeric, and alphanumeric.

A file is a sequence of bits, bytes, lines or records whose meaning is defined by its creator and
user. The operating system implements the abstract concept of the file by managing mass storage devices,
such as tapes and disks. Files are also normally organized into directories to ease their use. These
directories may contain files and other directories, and so on.

The operating system is responsible for the following activities in connection with file management:

• File creation and deletion

• Directory creation and deletion

• The support of primitives for manipulating files and directories

• Mapping files onto secondary storage

• File backup on stable (nonvolatile) storage media

Network Management

The definition of network management is often broad, as network management involves several different
components. Network management is the process of managing and administering a computer network. A
computer network is a collection of various types of computers connected with each other. Network
management comprises fault analysis, maintaining the quality of service, provisioning of networks, and
performance management. In short, it is the process of keeping the network healthy for efficient
communication between different computers.

Following are the features of network management:

• Network administration

• Network maintenance

• Network operation

• Network provisioning

• Network security

Main Memory Management

Memory is a large array of words or bytes, each with its own address. It is a repository of quickly
accessible data shared by the CPU and I/O devices. Main memory is a volatile storage device which
means it loses its contents in the case of system failure or as soon as system power goes down. The main
motivation behind memory management is to maximize memory utilization on the computer system. The
operating system is responsible for the following activities in connection with memory management:

• Keep track of which parts of memory are currently being used and by whom.

• Decide which processes to load when memory space becomes available.

• Allocate and deallocate memory space as needed.

Secondary Storage Management

The main purpose of a computer system is to execute programs. These programs, together with the data
they access, must be in main memory during execution. Since main memory is too small to
permanently accommodate all data and programs, the computer system must provide secondary storage to
back up main memory. Most modern computer systems use disks as the principal online storage medium,
for both programs and data. Most programs, like compilers, assemblers, sort routines, editors, formatters,
and so on, are stored on the disk until loaded into memory, and then use the disk as both the source and
destination of their processing.

The operating system is responsible for the following activities in connection with disk management:

• Free space management

• Storage allocation

• Disk scheduling

Security Management

The operating system is primarily responsible for all tasks and activities that happen in the computer system.
The various processes in an operating system must be protected from each other's activities. For that
purpose, various mechanisms can be used to ensure that the files, memory segments, CPU and other
resources can be operated on only by those processes that have gained proper authorization from the
operating system. Security management refers to the mechanisms for controlling the access of programs,
processes, or users to the resources defined by the computer system, specifying the controls to be imposed,
together with some means of enforcement.

For example, memory addressing hardware ensures that a process can only execute within its own address
space. The timer ensures that no process can gain control of the CPU without eventually relinquishing it.
Finally, no process is allowed to do its own I/O, to protect the integrity of the various peripheral devices.

Command Interpreter System

One of the most important components of an operating system is its command interpreter. The command
interpreter is the primary interface between the user and the rest of the system. The Command Interpreter
System executes a user command by calling one or more underlying system programs or system calls. It
allows human users to interact with the Operating System and provides a convenient programming
environment to the users.
Many commands are given to the operating system by control statements. A program that reads
and interprets control statements is automatically executed. This program is called the shell; a few
examples are the Windows DOS command window, Bash on Unix/Linux, and the C shell on Unix/Linux.

Other Important Activities

An Operating System is a complex software system. Apart from the above mentioned components and
responsibilities, there are many other activities performed by the Operating System. A few of them are
listed below:

Security − By means of passwords and other similar techniques, it prevents unauthorized access to
programs and data.

Control over system performance − Recording delays between requests for a service and responses from the
system.

Job accounting − Keeping track of time and resources used by various jobs and users.

Error detecting aids − Production of dumps, traces, error messages, and other debugging and error
detecting aids.

Coordination between other software and users − Coordination and assignment of compilers,
interpreters, assemblers and other software to the various users of the computer systems.

2.3 Design issues (efficiency, robustness, flexibility, portability, security, compatibility)

Efficiency: An OS allows the computer system resources to be used efficiently. Ability to evolve: An OS
should be constructed in such a way as to permit the effective development, testing, and introduction of
new system functions without at the same time interfering with service.

Efficiency: Most I/O devices are slow compared to main memory (and the CPU)

▪ Use of multiprogramming allows for some processes to be waiting on I/O while another process
executes

▪ Often, I/O still cannot keep up with processor speed

▪ Swapping may be used to bring in additional Ready processes, at the cost of more I/O operations

✓ Optimize I/O efficiency especially Disk & Network I/O

✓ The quest for generality/uniformity:

o Ideally, handle all I/O devices in the same way; Both in the OS and in user applications

o Problem:
▪ Diversity of I/O devices

▪ Especially, different access methods (random access versus stream based) as well as vastly different
data rates.

▪ Generality often compromises efficiency!

o Hide most of the details of device I/O in lower-level routines so that processes and upper levels see
devices in general terms such as read, write, open, close, lock, unlock

Robustness

In computer science, robustness is the ability of a computer system to cope with errors during execution
and cope with erroneous input. Robustness can encompass many areas of computer science, such as
robust programming, robust machine learning, and Robust Security Network. Formal techniques, such as
fuzz testing, are essential to showing robustness since this type of testing involves invalid or unexpected
inputs. Alternatively, fault injection can be used to test robustness. Various commercial products perform
robustness testing of software.

A distributed system may suffer from various types of hardware failure. The failure of a link, the failure
of a site, and the loss of a message are the most common types. To ensure that the system is robust, we
must detect any of these failures, reconfigure the system so that computation can continue, and recover
when a site or a link is repaired.

In general, building robust systems that encompass every point of possible failure is difficult because of
the vast quantity of possible inputs and input combinations. Since all inputs and input combinations
would require too much time to test, developers cannot run through all cases exhaustively. Instead, the
developer will try to generalize such cases. For example, imagine inputting some integer values. Some
selected inputs might consist of a negative number, zero, and a positive number. When using these numbers
to test software in this way, the developer generalizes the set of all reals into three numbers. This is a more
efficient and manageable method, but more prone to failure. Generalizing test cases is an example of just
one technique to deal with failure, specifically failure due to invalid user input. Systems may also fail for
other reasons, such as disconnecting from a network.

Regardless, complex systems should still handle any errors encountered gracefully. There are many
examples of such successful systems. Some of the most robust systems are evolvable and can be easily
adapted to new situations.

Portability

Portability is the ability of an application to run properly on a platform different from the one it was designed
for, with little or no modification. Portability in high-level computer programming is the usability of the
same software in different environments. When software with the same functionality is produced for
several computing platforms, portability is the key issue for development cost reduction.

Compatibility
Compatibility is the capacity of two systems to work together without having to be altered to do so.
Compatible software applications use the same data formats. For example, if word processor applications
are compatible, the user should be able to open their document files in either product. Compatibility
issues come up when users are using the same type of software for a task, such as word processors, that
cannot communicate with each other. This could be due to a difference in their versions or because they
are made by different companies. The huge variety of application software available and all the versions
of the same software mean there are bound to be compatibility issues, even when people are using the
same kind of software. Compatibility issues can be small, for example certain features not working
properly in older versions of the same software, but they can also be problematic, such as when a newer
version of the software cannot open a document created in an older version. In Microsoft Word, for
example, documents created in Word 2016 or 2013 can be opened in Word 2010 or 2007, but some of the
newer features (such as collapsed headings or embedded videos) will not work in the older versions. If
someone using Word 2016 opens a document created in Word 2010, the document will open in
Compatibility Mode. Microsoft Office does this to make sure that documents created in older versions
still work properly.

The feature in figure 7 is an example of something called backwards compatibility, which is the ability of
newer software to interact with files (or programs or systems) made with older versions of that software.
It is usually built into the software and is a way to avoid compatibility issues. Another way to avoid this is
to update your software.

Flexibility

Flexible operating systems are taken to be those whose designs have been motivated to some degree by
the desire to allow the system to be tailored, either statically or dynamically, to the requirements of
specific applications or application domains.

Process synchronization

Introduction:

Process Synchronization is the coordination of execution of multiple processes in a multi-process system
to ensure that they access shared resources in a controlled and predictable manner. It aims to resolve the
problem of race conditions and other synchronization issues in a concurrent system.

The main objective of process synchronization is to ensure that multiple processes access shared
resources without interfering with each other, and to prevent the possibility of inconsistent data due to
concurrent access. To achieve this, various synchronization techniques such as semaphores, monitors, and
critical sections are used.

In a multi-process system, synchronization is necessary to ensure data consistency and integrity, and to
avoid the risk of deadlocks and other synchronization problems. Process synchronization is an important
aspect of modern operating systems, and it plays a crucial role in ensuring the correct and efficient
functioning of multi-process systems.
On the basis of synchronization, processes are categorized as one of the following two types:

Independent Process: The execution of one process does not affect the execution of other processes.

Cooperative Process: A process that can affect or be affected by other processes executing in the system.
The process synchronization problem arises in the case of cooperative processes, because resources are
shared among them.

Race Condition:

When more than one process is executing the same code, or accessing the same memory or any shared
variable, there is a possibility that the output or the value of the shared variable is wrong; the processes
effectively race with each other, and whichever finishes last determines the result. This condition is known
as a race condition. When several processes access and manipulate the same data concurrently, the outcome
depends on the particular order in which the accesses take place. A race condition is a situation that may
occur inside a critical section. It happens when the result of multiple threads executing in the critical
section differs according to the order in which the threads execute. Race conditions in critical sections can
be avoided if the critical section is treated as an atomic instruction. Also, proper thread synchronization
using locks or atomic variables can prevent race conditions.
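
As a minimal sketch (assuming POSIX threads), the following C program exhibits a race condition: two threads increment a shared counter without synchronization, so the final value is usually less than the expected 2,000,000 because the read-modify-write updates interleave.

    #include <stdio.h>
    #include <pthread.h>

    static long counter = 0;                 /* shared variable */

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 1000000; i++)
            counter++;                       /* read-modify-write: not atomic */
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        /* Expected 2000000, but usually less: the two increments interleave. */
        printf("counter = %ld\n", counter);
        return 0;
    }

Protecting the increment with a pthread mutex, or making the counter an atomic variable, removes the race.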

Critical Section Problem:

A critical section is a code segment that can be accessed by only one process at a time. The critical
section contains shared variables that need to be synchronized to maintain the consistency of data
variables. So the critical section problem means designing a way for cooperative processes to access
shared resources without creating data inconsistencies.

(Figure: structure of the critical section problem)

In the entry section, the process requests entry into the critical section.

Any solution to the critical section problem must satisfy three requirements:

Mutual Exclusion: If a process is executing in its critical section, then no other process is allowed to
execute in the critical section.

Progress: If no process is executing in the critical section and other processes are waiting outside the
critical section, then only those processes that are not executing in their remainder section can participate
in deciding which will enter the critical section next, and the selection cannot be postponed
indefinitely.

Bounded Waiting: A bound must exist on the number of times that other processes are allowed to enter
their critical sections after a process has made a request to enter its critical section and before that request
is granted.

Peterson’s Solution:

Peterson’s Solution is a classical software-based solution to the critical section problem. In Peterson’s
solution, we have two shared variables:
boolean flag[2]: initialized to FALSE; initially no process is interested in entering the critical section.

int turn: indicates whose turn it is to enter the critical section.
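
Sketched in C-like pseudocode for process i (where j = 1 - i is the other process), the algorithm looks as follows; the busy-wait loop while(flag[j] && turn == j) mentioned below is the entry section.

    /* Shared variables (i is this process's index, 0 or 1; j = 1 - i) */
    bool flag[2] = { false, false };   /* flag[i] is true when process i wants to enter */
    int turn = 0;                      /* whose turn it is when both want to enter      */

    /* Structure of process i */
    do {
        flag[i] = true;                /* announce interest                    */
        turn = j;                      /* give the other process priority      */
        while (flag[j] && turn == j)
            ;                          /* entry section: busy wait             */

        /* ---- critical section ---- */

        flag[i] = false;               /* exit section                         */

        /* ---- remainder section ---- */
    } while (true);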


Peterson’s Solution preserves all three conditions:

Mutual Exclusion is assured as only one process can access the critical section at any time.

Progress is also assured, as a process outside the critical section does not block other processes from
entering the critical section.

Bounded Waiting is preserved as every process gets a fair chance.

Disadvantages of Peterson’s solution:

It involves busy waiting. (In Peterson's solution, the code statement "while(flag[j] && turn == j);" is
responsible for this. Busy waiting is not favored because it wastes CPU cycles that could be used to
perform other tasks.)

It is limited to 2 processes.

Peterson's solution is not guaranteed to work on modern CPU architectures, which may reorder memory
operations.

Semaphores:

A semaphore is a signaling mechanism: a thread that is waiting on a semaphore can be signaled by
another thread. This is different from a mutex, which can be released only by the thread that called the
wait (lock) function.

A semaphore uses two atomic operations, wait and signal, for process synchronization.

A semaphore is an integer variable which can be accessed only through the two operations wait() and
signal().

There are two types of semaphores: Binary Semaphores and Counting Semaphores.

Binary Semaphores: They can only be either 0 or 1. They are also known as mutex locks, as such locks can
provide mutual exclusion. All the processes can share the same mutex semaphore, which is initialized to 1.
A process has to wait until the semaphore value is 1 (the lock is free); it then sets the semaphore to 0 and
starts its critical section. When it completes its critical section, it resets the value of the semaphore to 1,
so that some other process can enter its critical section.

Counting Semaphores: They can have any value and are not restricted over a certain domain. They can be
used to control access to a resource that has a limitation on the number of simultaneous accesses. The
semaphore can be initialized to the number of instances of the resource. Whenever a process wants to use
that resource, it checks if the number of remaining instances is more than zero, i.e., the process has an
instance available. Then, the process can enter its critical section thereby decreasing the value of the
counting semaphore by 1. After the process is over with the use of the instance of the resource, it can
leave the critical section thereby adding 1 to the number of available instances of the resource.
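
The following is a minimal, simplified C sketch of the wait() and signal() operations of a counting semaphore. It assumes that some lower-level mechanism (e.g. disabling interrupts or a hardware atomic instruction) makes each operation atomic, and it busy-waits, whereas real operating system semaphores normally block the waiting process instead.

    /* A counting semaphore: "value" is the number of free resource instances. */
    typedef struct {
        int value;
    } semaphore;

    void semaphore_wait(semaphore *s)    /* the wait() / P() operation */
    {
        while (s->value <= 0)
            ;                            /* busy-wait until an instance is free */
        s->value--;                      /* take one instance                   */
    }

    void semaphore_signal(semaphore *s)  /* the signal() / V() operation */
    {
        s->value++;                      /* release one instance                */
    }

    /* Usage sketch: semaphore printers = { 3 };
       semaphore_wait(&printers);  ... use a printer ...  semaphore_signal(&printers); */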

Advantages and Disadvantages:

Advantages of Process Synchronization:

Ensures data consistency and integrity

Avoids race conditions

Prevents inconsistent data due to concurrent access

Supports efficient and effective use of shared resources

Disadvantages of Process Synchronization:

Adds overhead to the system

Can lead to performance degradation

Increases the complexity of the system

Can cause deadlocks if not implemented properly.

I/O techniques

I/O operations are accomplished through a wide assortment of external devices that provide a means of
exchanging data between the external environment and the computer. Inputs are the signals or data
received by the system, and outputs are the signals or data sent from it. The term can also be used as part
of an action; to "perform I/O" is to perform an input or output operation. I/O devices are used by a human
(or another system) to communicate with a computer. For instance, a keyboard or mouse is an input device
for a computer, while monitors and printers are output devices. In computer architecture, the combination
of the CPU and main memory, to which the CPU can read or write directly using individual instructions,
is considered the brain of the computer. Any transfer of information to or from the CPU/memory combination,
for example by reading data from a disk drive, is considered I/O. An I/O interface is required whenever
an I/O device is driven by the processor. The interface must have the necessary logic to interpret the device
address generated by the processor. An external device attaches to the computer by a link to an I/O
module, as shown in Figure 1.

The link is used to exchange control, status, and data between the I/O module and the external device. As
shown in the figure, the external device connects through an I/O module. The module actually performs a
few functions, which are:

Control and timing

Processor communication

Device Communication

Data buffering
Error detection

I/O operations deal with the exchange of data between memory and the external devices, either in the
direction towards memory (READ) or in the direction from memory (WRITE). The problem is how the
processor will manage the flow of data to and from the external devices in terms of transfer speed,
processor idle time, complexity, etc. In general, there are three techniques for I/O operation, which are:

Programmed I/O

Interrupt-driven I/O

Direct Memory Access (DMA)

With programmed I/O, data are exchanged between the processor and the I/O module. The processor
executes a program that gives it direct control of the I/O operation, including sensing device status,
sending a read or write command, and transferring the data. When the processor issues a command to the
I/O module, it must wait until the I/O operation is complete.

With interrupt-driven I/O, the processor issues an I/O command, continues to execute other instructions,
and is interrupted by the I/O module when the latter has completed its work.

With direct memory access (DMA), the I/O module and main memory exchange data directly, without
processor involvement.
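
A minimal sketch of programmed I/O may make the contrast clearer. The device registers and addresses below are hypothetical; the point is simply that the CPU polls the status register and moves every byte itself.

    /* Hypothetical memory-mapped device registers (the addresses are made up). */
    #define DEV_STATUS (*(volatile unsigned char *)0x40000000)
    #define DEV_DATA   (*(volatile unsigned char *)0x40000004)
    #define DEV_READY  0x01

    /* Programmed I/O: the CPU polls the device and transfers every byte itself. */
    void read_block(unsigned char *buf, int n)
    {
        for (int i = 0; i < n; i++) {
            while ((DEV_STATUS & DEV_READY) == 0)
                ;                        /* busy-wait until the device has data */
            buf[i] = DEV_DATA;           /* transfer one byte to memory         */
        }
    }

With interrupt-driven I/O the polling loop disappears (the device interrupts the CPU when ready), and with DMA even the per-byte transfer is done by the DMA controller rather than the CPU.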

Buffering


The buffer is an area in main memory used to store or hold data temporarily. In other words, a buffer
temporarily stores data transmitted from one place to another, either between two devices or between a
device and an application. The act of storing data temporarily in the buffer is called buffering.

A buffer may be used when moving data between processes within a computer. Buffers can be
implemented in a fixed memory location in hardware or by using a virtual data buffer in software,
pointing at a location in the physical memory. In all cases, the data in a data buffer are stored on a
physical storage medium.

Most buffers are implemented in software, which typically uses the faster RAM to store temporary data
due to the much faster access time than hard disk drives. Buffers are typically used when there is a
difference between the rate of received data and the rate of processed data, for example, in a printer
spooler or online video streaming.

A buffer often adjusts timing by implementing a queue or FIFO algorithm in memory, simultaneously
writing data into the queue at one rate and reading it at another rate.

Purpose of Buffering
You encounter buffering while watching videos on YouTube or live streams. In a video stream, a buffer
represents the amount of data required to be downloaded before the video can play to the viewer in real
time. A buffer in a computer environment means that a set amount of data will be stored in order to preload
the required data before it gets used by the CPU.

Computers have many different devices that operate at varying speeds, and a buffer is needed to act as a
temporary placeholder for everything interacting. This is done to keep everything running efficiently and
without issues between all the devices, programs, and processes running at that time. There are three
reasons behind the buffering of data:

It helps in matching speeds between the two devices between which the data is transmitted. For example, a
hard disk has to store a file received from a modem. As we know, the transmission speed of a modem is slow
compared to the hard disk. So the bytes coming from the modem are accumulated in the buffer space, and
when all the bytes of the file have arrived at the buffer, the entire data is written to the hard disk in a single
operation.

It helps the devices with different sizes of data transfer to get adapted to each other. It helps devices to
manipulate data before sending or receiving it. In computer networking, the large message is fragmented
into small fragments and sent over the network. The fragments are accumulated in the buffer at the
receiving end and reassembled to form a complete large message.

It also supports copy semantics. With copy semantics, the version of data in the buffer is guaranteed to be
the version of data at the time of system call, irrespective of any subsequent change to data in the buffer.
Buffering increases the performance of the device. It overlaps the I/O of one job with the computation of
the same job.

Types of Buffering

There are three main types of buffering in the operating system, such as:


1. Single Buffer

In single buffering, only one buffer is used to transfer the data between two devices. The producer
produces one block of data into the buffer, and then the consumer consumes the buffer. Only when the
buffer is empty does the producer produce the next block of data.


Block oriented device: The following operations are performed in the block-oriented device,

System buffer takes the input.

After taking the input, the block gets transferred to the user space and then requests another block.
Two blocks work simultaneously. When the user processes one block of data, the next block is being read
in.

OS can swap the processes.

OS can record the data of the system buffer to user processes.

Stream oriented device: It performs the following operations:

Line-at-a-time operation is used for scroll-mode terminals. The user inputs one line at a time, with a
carriage return signaling the end of a line.

Byte-at-a-time operation is used on forms-mode terminals, where each keystroke is significant.

2. Double Buffer

In double buffering, two buffers are used in place of one. In this scheme, the producer fills one buffer
while the consumer consumes the other buffer simultaneously. So the producer does not need to wait for
the buffer to be emptied. Double buffering is also known as buffer swapping.


Block oriented: This is how a double buffer works. There are two buffers in the system.

The driver or controller uses one buffer to store data while waiting for it to be taken by a higher hierarchy
level.

Another buffer is used to store data from the lower-level module.

A major disadvantage of double buffering is that the complexity of the process is increased.

If the process performs rapid bursts of I/O, then double buffering may be insufficient.

Stream oriented: It performs these operations, such as:

For line-at-a-time I/O, the user process does not need to be suspended for input or output unless the process
runs ahead of the double buffer.

For byte-at-a-time operation, the double buffer offers no advantage over a single buffer of twice the length.

3. Circular Buffer

When more than two buffers are used, the collection of buffers is called a circular buffer. Each individual
buffer is one unit in the circular buffer. The data transfer rate will increase using a circular buffer rather
than double buffering.


In this scheme, the data does not pass directly from the producer to the consumer, because the data would
otherwise change due to buffers being overwritten before they are consumed.
The producer can only fill up to buffer x-1 while data in buffer x is waiting to be consumed.
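
A minimal sketch of such a bounded circular buffer (single producer, single consumer, synchronization omitted) might look as follows; the indices wrap around modulo the number of slots, and the producer refuses to fill the last free slot so that unconsumed data is never overwritten.

    #define NSLOTS 8                     /* number of slots in the circular buffer */

    static int slots[NSLOTS];
    static int in = 0, out = 0;          /* next slot to fill / next slot to empty */

    /* Producer: returns 0 if the buffer is full (only one free slot remains). */
    int produce(int item)
    {
        if ((in + 1) % NSLOTS == out)
            return 0;                    /* full: never overwrite unconsumed data  */
        slots[in] = item;
        in = (in + 1) % NSLOTS;
        return 1;
    }

    /* Consumer: returns 0 if the buffer is empty. */
    int consume(int *item)
    {
        if (in == out)
            return 0;                    /* empty: nothing to consume              */
        *item = slots[out];
        out = (out + 1) % NSLOTS;
        return 1;
    }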

How Buffering Works

In an operating system, buffer works in the following way:


Buffering is done to deal effectively with a speed mismatch between the producer and consumer of the
data stream.

A buffer is created in main memory to accumulate the bytes received from the modem.

After receiving the data in the buffer, the data get transferred to a disk from the buffer in a single
operation.

This process of data transfer is not instantaneous. Therefore the modem needs another buffer to store
additional incoming data.

When the first buffer is filled, a request is made to transfer its data to disk.

The modem then fills the additional incoming data in the second buffer while the data in the first buffer
gets transferred to the disk.

When both buffers have completed their tasks, the modem switches back to the first buffer while the data
from the second buffer gets transferred to the disk.

The two buffers decouple the producer and the consumer of the data, thus relaxing the timing requirements
between them.

Buffering also provides adaptations for devices that have different data transfer sizes.

Advantages of Buffer

Buffering plays a very important role in any operating system during the execution of any process or task.
It has the following advantages.

The use of buffers allows uniform disk access. It simplifies system design.

The system places no data alignment restrictions on user processes doing I/O. By copying data from user
buffers to system buffers and vice versa, the kernel eliminates the need for special alignment of user
buffers, making user programs simpler and more portable.

The use of the buffer can reduce the amount of disk traffic, thereby increasing overall system throughput
and decreasing response time.

The buffer algorithms help ensure file system integrity.

Disadvantages of Buffer

Buffers are not better in all respects. Therefore, there are a few disadvantages as follows, such as:
It is costly and impractical to have the buffer be the exact size required to hold the number of elements.
Thus, the buffer is slightly larger most of the time, with the rest of the space being wasted.

Buffers have a fixed size at any point in time. When the buffer is full, it must be reallocated with a larger
size, and its elements must be moved. Similarly, when the number of valid elements in the buffer is
significantly smaller than its size, the buffer must be reallocated with a smaller size and elements be
moved to avoid too much waste.

Use of the buffer requires an extra data copy when reading from and writing to user processes. When
transmitting large amounts of data, the extra copy slows down performance.

File system

A file system is a method an operating system uses to store, organize, and manage files and directories on
a storage device. Some common types of file systems include:

FAT (File Allocation Table): An older file system used by older versions of Windows and other operating
systems.

NTFS (New Technology File System): A modern file system used by Windows. It supports features such
as file and folder permissions, compression, and encryption.

ext (Extended File System): A file system commonly used on Linux and Unix-based operating systems.

HFS (Hierarchical File System): A file system used by macOS.

APFS (Apple File System): A new file system introduced by Apple for their Macs and iOS devices.

The advantages of using a file system include the following:

Organization: A file system allows files to be organized into directories and subdirectories, making it
easier to manage and locate files.

Data protection: File systems often include features such as file and folder permissions, backup and
restore, and error detection and correction, to protect data from loss or corruption.

Improved performance: A well-designed file system can improve the performance of reading and writing
data by organizing it efficiently on disk.

Disadvantages of using a file system include:

Compatibility issues: Different file systems may not be compatible with each other, making it difficult to
transfer data between different operating systems.

Disk space overhead: File systems may use some disk space to store metadata and other overhead
information, reducing the amount of space available for user data.

Vulnerability: File systems can be vulnerable to data corruption, malware, and other security threats,
which can compromise the stability and security of the system.
A file is a collection of related information that is recorded on secondary storage; or, a file is a collection of
logically related entities. From the user's perspective, a file is the smallest allotment of logical secondary
storage.

The name of a file is divided into two parts, separated by a period: a name and an extension.

File attributes, types and operations:

Attributes        Types    Operations
Name              doc      Create
Type              exe      Open
Size              jpg      Read
Creation Date     xls      Write
Author            c        Append
Last Modified     java     Truncate
Protection        class    Delete
                           Close

File type        Usual extension        Function
Executable       exe, com, bin          Ready-to-run machine-language program
Object           obj, o                 Compiled machine language, not linked
Source Code      c, java, pas, asm, a   Source code in various languages
Batch            bat, sh                Commands to the command interpreter
Text             txt, doc               Textual data, documents
Word Processor   wp, tex, rtf, doc      Various word-processor formats
Archive          arc, zip, tar          Related files grouped into one compressed file
Multimedia       mpeg, mov, rm          Files containing audio/video information
Markup           xml, html, tex         Textual data and documents
Library          lib, a, so, dll        Libraries of routines for programmers
Print or View    gif, pdf, jpg          Format for printing or viewing an ASCII or binary file

FILE DIRECTORIES:

A file directory is a collection of files. The directory contains information about the files, including
attributes, location and ownership. Much of this information, especially that concerned with storage, is
managed by the operating system. The directory is itself a file, accessible by various file management
routines.

Information contained in a device directory includes:

Name

Type

Address

Current length

Maximum length

Date last accessed

Date last updated

Owner id

Protection information

Operations performed on a directory are:

Search for a file

Create a file

Delete a file

List a directory

Rename a file

Traverse the file system

Advantages of maintaining directories are:

Efficiency: A file can be located more quickly.


Naming: It becomes convenient for users, as two users can have the same name for different files or may
have different names for the same file.

Grouping: Logical grouping of files can be done by properties e.g. all java programs, all games etc.

SINGLE-LEVEL DIRECTORY

In this scheme, a single directory is maintained for all users.

Naming problem: users cannot use the same name for two files.

Grouping problem: users cannot group files according to their needs.


TWO-LEVEL DIRECTORY

In this scheme, a separate directory is maintained for each user.

Path name: due to the two levels, every file has a path name used to locate it.

Now, different users can use the same file name.

Searching is efficient in this method.


TREE-STRUCTURED DIRECTORY :

The directory is maintained in the form of a tree. Searching is efficient, and there is also grouping capability.
We have an absolute or relative path name for each file.


FILE ALLOCATION METHODS :

1. Contiguous Allocation –

A single continuous set of blocks is allocated to a file at the time of file creation. Thus, this is a pre-
allocation strategy, using variable size portions. The file allocation table needs just a single entry for each
file, showing the starting block and the length of the file. This method is best from the point of view of
the individual sequential file. Multiple blocks can be read in at a time to improve I/O performance for
sequential processing. It is also easy to retrieve a single block. For example, if a file starts at block b, and
the ith block of the file is wanted, its location on secondary storage is simply b+i-1.
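
For example, if a file starts at block b = 100, its 1st block is block 100 and its 7th block is at block 100 + 7 – 1 = 106, so any block of the file can be located with a single calculation.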


Disadvantage –

External fragmentation will occur, making it difficult to find contiguous blocks of space of sufficient
length. A compaction algorithm will be necessary to free up additional space on the disk.
Also, with pre-allocation, it is necessary to declare the size of the file at the time of creation.

2. Linked Allocation(Non-contiguous allocation) –

Allocation is on an individual block basis. Each block contains a pointer to the next block in the chain.
Again the file table needs just a single entry for each file, showing the starting block and the length of the
file. Although pre-allocation is possible, it is more common simply to allocate blocks as needed. Any free
block can be added to the chain. The blocks need not be continuous. Increase in file size is always
possible if free disk block is available. There is no external fragmentation because only one block at a
time is needed but there can be internal fragmentation but it exists only in the last disk block of file.

Disadvantage –

Internal fragmentation exists in the last disk block of the file.

There is an overhead of maintaining the pointer in every disk block.

If the pointer of any disk block is lost, the file will be truncated.

It supports only the sequential access of files.

3. Indexed Allocation –

It addresses many of the problems of contiguous and chained allocation. In this case, the file allocation
table contains a separate one-level index for each file: The index has one entry for each block allocated to
the file. Allocation may be on the basis of fixed-size blocks or variable-sized blocks. Allocation by blocks
eliminates external fragmentation, whereas allocation by variable-size blocks improves locality. This
allocation technique supports both sequential and direct access to the file and thus is the most popular
form of file allocation.


Disk Free Space Management :

Just as the space that is allocated to files must be managed, so the space that is not currently allocated to
any file must be managed. To perform any of the file allocation techniques, it is necessary to know which
blocks on the disk are available. Thus we need a disk allocation table in addition to a file allocation
table. The following are the approaches used for free space management.

Bit Tables : This method uses a vector containing one bit for each block on the disk. Each entry for a 0
corresponds to a free block and each 1 corresponds to a block in use.

For example: 00011010111100110001

In this vector, every bit corresponds to a particular block: 0 implies that the block is free and 1 implies that
the block is already occupied. A bit table has the advantage that it is relatively easy to find one free block
or a contiguous group of free blocks. Thus, a bit table works well with any of the file allocation
methods. Another advantage is that it is as small as possible.
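
As a small sketch (assuming a byte array in which bit k describes block k, with 0 = free and 1 = in use, matching the convention above), finding and allocating the first free block could look like this:

    #include <limits.h>

    /* bitmap[] holds one bit per disk block: 0 = free, 1 = in use. */
    int alloc_first_free(unsigned char bitmap[], int nblocks)
    {
        for (int b = 0; b < nblocks; b++) {
            int byte = b / CHAR_BIT;
            int bit  = b % CHAR_BIT;
            if ((bitmap[byte] & (1u << bit)) == 0) {   /* block b is free   */
                bitmap[byte] |= (1u << bit);           /* mark it in use    */
                return b;                              /* return its number */
            }
        }
        return -1;                                     /* no free block     */
    }
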
Free Block List : In this method, each block is assigned a number sequentially and the list of the numbers
of all free blocks is maintained in a reserved block of the disk.

Process scheduling

What is a process?

In computing, a process is the instance of a computer program that is being executed by one or many
threads. It contains the program code and its activity. Depending on the operating system (OS), a process
may be made up of multiple threads of execution that execute instructions concurrently.

How is process memory used for efficient operation?

The process memory is divided into four sections for efficient operation:

The text section is composed of the compiled program code, which is read in from non-volatile storage when
the program is launched.

The data section is made up of global and static variables, allocated and initialized before the main action.

The heap is used for flexible, or dynamic, memory allocation and is managed by calls to new, delete, malloc,
free, etc.

The stack is used for local variables. Space on the stack is reserved for local variables when they are declared.
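
A small C fragment can help map these four sections onto concrete program objects; the placement shown in the comments is the typical layout, not a guarantee made by every operating system.

    #include <stdio.h>
    #include <stdlib.h>

    int global_count = 0;                  /* data section: global variable      */

    int main(void)                         /* the code of main(): text section   */
    {
        static int calls = 0;              /* data section: static variable      */
        int local = 42;                    /* stack: local (automatic) variable  */
        int *p = malloc(100 * sizeof *p);  /* heap: dynamic allocation           */

        printf("%d %d %d %p\n", global_count, calls, local, (void *)p);
        free(p);                           /* heap memory is released explicitly */
        return 0;
    }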


What is Process Scheduling?

Process scheduling is the activity of the process manager that handles the removal of a running process
from the CPU and the selection of another process on the basis of a particular strategy. Process scheduling
is an integral part of multiprogramming operating systems. Such operating systems allow more than one
process to be loaded into executable memory at a time, and the loaded processes share the CPU using time
multiplexing.

There are three types of process schedulers:

Long term or Job Scheduler

Short term or CPU Scheduler

Medium-term Scheduler

Why do we need to schedule processes?


Scheduling is important in many different computer environments. One of the most important areas is
scheduling which programs will run on the CPU. This task is handled by the Operating System (OS) of
the computer, and there are many different ways in which we can choose to configure programs. Process
scheduling allows the OS to allocate CPU time to each process. Another important reason to use a
process scheduling system is that it keeps the CPU busy at all times. This results in lower response times
for programs.

Considering that there may be hundreds of programs that need to run, the OS must launch a program,
stop it, switch to another program, etc. The way the OS switches the CPU from one program to another is
called "context switching". If the OS keeps context-switching programs in and out of the available CPUs,
it can give the user the impression that he or she can run any programs he or she wants to run, all at once.

So now that we know we can run one program at a time on a given CPU, and that we can swap one
program out and another in using a context switch, how do we choose which programs to run, and in what
order?

That's where scheduling comes in! First, you determine a metric, saying something like "the amount of
time until completion". We will define this metric as "the time interval between a task entering the system
and its completion". Second, you decide on a scheduling policy that minimises that metric. We want our
tasks to end as soon as possible.

What is the need for CPU scheduling algorithm?

CPU scheduling is the process of deciding which process will own the CPU while another process is
suspended. The main function of CPU scheduling is to ensure that whenever the CPU would otherwise
remain idle, the OS has selected at least one of the processes available in the ready queue.

In multiprogramming, if the long-term scheduler selects multiple I/O-bound processes then, most of the
time, the CPU remains idle. The function of an effective scheduler is to improve resource utilization.

If most of the running processes change their state from running to waiting, then there may always be a
chance of deadlock in the system. So, in order to minimize this, the OS needs to schedule tasks in order to
make full use of the CPU and avoid the possibility of deadlock.

Objectives of Process Scheduling Algorithm:

Utilization of CPU at maximum level. Keep CPU as busy as possible.

Allocation of CPU should be fair.

Throughput should be Maximum. i.e. Number of processes that complete their execution per time unit
should be maximized.

Minimum turnaround time, i.e. time taken by a process to finish execution should be the least.
There should be a minimum waiting time and the process should not starve in the ready queue.

Minimum response time. It means that the time at which a process produces its first response should be as
small as possible.

What are the different terminologies to take care of in any CPU Scheduling algorithm?

Arrival Time: Time at which the process arrives in the ready queue.

Completion Time: Time at which process completes its execution.

Burst Time: Time required by a process for CPU execution.

Turn Around Time: Time Difference between completion time and arrival time.

Turn Around Time = Completion Time – Arrival Time

Waiting Time(W.T): Time Difference between turn around time and burst time.

Waiting Time = Turn Around Time – Burst Time
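As a quick illustration of these two formulas, the short Python sketch below computes turnaround and waiting time for a few hypothetical processes (the process names, times, and completion times are invented for the example; completion times would normally come from running a scheduling algorithm):

    # Hypothetical processes: (name, arrival time, burst time, completion time).
    processes = [
        ("P1", 0, 5, 5),
        ("P2", 1, 3, 8),
        ("P3", 2, 8, 16),
    ]

    for name, arrival, burst, completion in processes:
        turnaround = completion - arrival   # Turn Around Time = Completion Time - Arrival Time
        waiting = turnaround - burst        # Waiting Time = Turn Around Time - Burst Time
        print(f"{name}: turnaround={turnaround}, waiting={waiting}")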

Things to take care of while designing a CPU scheduling algorithm

Different CPU scheduling algorithms have different structures, and the choice of a particular algorithm depends on a variety of factors. Many criteria have been proposed for comparing CPU scheduling algorithms.

The criteria include the following:

CPU utilization: The main purpose of any CPU scheduling algorithm is to keep the CPU as busy as possible. Theoretically, CPU utilization can range from 0 to 100 percent, but in a real system it varies from 40 to 90 percent depending on the system load.

Throughput: The number of processes performed and completed per unit of time is called throughput. Throughput may vary depending on the length or duration of the processes.

Turnaround Time: For a particular process, an important criterion is how long it takes to complete that process. The time elapsed from the submission of a process to its completion is known as the turnaround time. Turnaround time is the total of the time spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and waiting for I/O.

Waiting Time: The scheduling algorithm does not affect the time required to complete a process once it has started executing. It only affects the waiting time of the process, i.e. the time the process spends waiting in the ready queue.

Response Time: In an interactive system, turnaround time is not the best criterion. A process may produce some output early and continue computing new results while previous results are being shown to the user. Therefore another measure is the time from the submission of a request until the first response is produced. This measure is called response time.
What are the different types of CPU Scheduling Algorithms?

There are mainly two types of scheduling methods:

Preemptive Scheduling: Preemptive scheduling is used when a process switches from the running state to the ready state or from the waiting state to the ready state.

Non-Preemptive Scheduling: Non-preemptive scheduling is used when a process terminates, or when a process switches from the running state to the waiting state.

Different types of CPU Scheduling Algorithms

Let us now learn about these CPU scheduling algorithms in operating systems one by one:

1. First Come First Serve:

FCFS is considered to be the simplest of all operating system scheduling algorithms. The first come first serve scheduling algorithm states that the process that requests the CPU first is allocated the CPU first; it is implemented using a FIFO queue.

Characteristics of FCFS:

FCFS is a non-preemptive CPU scheduling algorithm.

Tasks are always executed on a First-come, First-serve concept.

FCFS is easy to implement and use.

This algorithm is not much efficient in performance, and the wait time is quite high.

Advantages of FCFS:

Easy to implement

First come, first serve method

Disadvantages of FCFS:

FCFS suffers from Convoy effect.

The average waiting time is much higher than the other algorithms.

FCFS is very simple and easy to implement, but it is not very efficient.
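The following is a minimal sketch of how FCFS could be simulated, assuming each process is described only by an arrival time and a burst time (the process list is invented for illustration):

    # First Come First Serve: processes run to completion in order of arrival (FIFO).
    processes = [("P1", 0, 4), ("P2", 1, 3), ("P3", 2, 1)]   # (name, arrival, burst)

    time = 0
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        time = max(time, arrival)    # the CPU may sit idle until the process arrives
        start = time
        time += burst                # run the whole burst without preemption
        print(f"{name}: start={start}, completion={time}, waiting={start - arrival}")

Note how P3, despite needing only one time unit, must wait behind the longer jobs ahead of it; this is the convoy effect mentioned above.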

2. Shortest Job First(SJF):

Shortest job first (SJF) is a scheduling policy that selects the waiting process with the smallest execution time to execute next. This scheduling method may or may not be preemptive. It significantly reduces the average waiting time for the other processes waiting to be executed.

Characteristics of SJF:

Shortest job first has the advantage of giving the minimum average waiting time among all operating system scheduling algorithms.

Each task must be associated with an estimate of the CPU time it needs to complete.

It may cause starvation if shorter processes keep coming. This problem can be solved using the concept of
ageing.

Advantages of Shortest Job first:

As SJF reduces the average waiting time, it is better than the first come first serve scheduling algorithm.

SJF is generally used for long-term scheduling.

Disadvantages of SJF:

One of the demerits of SJF is starvation.

It is often difficult to predict the length of the upcoming CPU burst.

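A minimal sketch of non-preemptive SJF, assuming burst times are known in advance (the process list is invented for illustration):

    # Non-preemptive Shortest Job First: among the processes that have already
    # arrived, pick the one with the smallest burst time and run it to completion.
    processes = [("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]  # (name, arrival, burst)

    time, finished = 0, []
    remaining = list(processes)
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                             # nothing has arrived yet: advance the clock
            time = min(p[1] for p in remaining)
            continue
        job = min(ready, key=lambda p: p[2])      # shortest job among the ready ones
        remaining.remove(job)
        time += job[2]
        finished.append((job[0], time))

    print(finished)   # completion order and completion times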

3. Longest Job First(LJF):

Longest Job First (LJF) scheduling is just the opposite of shortest job first (SJF). As the name suggests, this algorithm processes first whichever process has the largest burst time. Longest Job First is non-preemptive in nature.

Characteristics of LJF:

Among all the processes waiting in the ready queue, the CPU is always assigned to the process having the largest burst time.

If two processes have the same burst time, the tie is broken using FCFS, i.e. the process that arrived first is processed first.

LJF CPU scheduling has both preemptive and non-preemptive variants; the preemptive variant is Longest Remaining Time First (LRTF), discussed below.

Advantages of LJF:

No other task can be scheduled until the longest job or process executes completely.

All the jobs or processes finish at the same time approximately.

Disadvantages of LJF:

Generally, the LJF algorithm gives a very high average waiting time and average turnaround time for a given set of processes.

This may lead to the convoy effect.


4. Priority Scheduling:

Preemptive priority CPU scheduling is a pre-emptive method of CPU scheduling that works based on the priority of a process. In this algorithm, the scheduler assigns each process a priority, and the most important process must be executed first. In the case of a conflict, that is, when there is more than one process with equal priority, the algorithm falls back on FCFS (First Come First Serve) to break the tie.

Characteristics of Priority Scheduling:

Schedules tasks based on priority.

When higher-priority work arrives while a lower-priority task is executing, the higher-priority work takes the place of the lower-priority one, and the latter is suspended until the former completes its execution.

The lower the number assigned, the higher the priority level of a process.

Advantages of Priority Scheduling:


The average waiting time is less than FCFS

Less complex

Disadvantages of Priority Scheduling:

One of the most common demerits of the preemptive priority CPU scheduling algorithm is the starvation problem: a low-priority process may have to wait a very long time before it gets scheduled onto the CPU.


5. Round robin:

Round Robin is a CPU scheduling algorithm where each process is cyclically assigned a fixed time slot. It
is the preemptive version of First come First Serve CPU Scheduling algorithm. Round Robin CPU
Algorithm generally focuses on Time Sharing technique.

Characteristics of Round robin:

It’s simple, easy to use, and starvation-free, as all processes get a fair share of the CPU.

It is one of the most widely used methods in CPU scheduling.

It is considered preemptive as the processes are given to the CPU for a very limited time.

Advantages of Round robin:

Round robin seems to be fair as every process gets an equal share of CPU.

The newly created process is added to the end of the ready queue.

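A minimal sketch of round robin with a fixed time quantum; to keep it short, all processes are assumed to arrive at time 0, and the quantum and burst times are invented for illustration:

    from collections import deque

    quantum = 2
    ready_queue = deque([("P1", 5), ("P2", 3), ("P3", 2)])   # (name, remaining burst time)

    time = 0
    while ready_queue:
        name, remaining = ready_queue.popleft()
        run = min(quantum, remaining)               # run for at most one time slice
        time += run
        remaining -= run
        if remaining > 0:
            ready_queue.append((name, remaining))   # not finished: back to the end of the queue
        else:
            print(f"{name} finished at time {time}")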

6. Shortest Remaining Time First:

Shortest remaining time first is the preemptive version of the Shortest job first which we have discussed
earlier where the processor is allocated to the job closest to completion. In SRTF the process with the
smallest amount of time remaining until completion is selected to execute.

Characteristics of Shortest remaining time first:

The SRTF algorithm makes the processing of jobs faster than the SJF algorithm, provided its overhead is not counted.
Context switches occur far more often in SRTF than in SJF and consume valuable CPU time; this overhead adds to the processing time and diminishes SRTF’s advantage of fast processing.

Advantages of SRTF:

In SRTF the short processes are handled very fast.

The system also requires very little overhead since it only makes a decision when a process completes or
a new process is added.

Disadvantages of SRTF:

Like the shortest job first, it also has the potential for process starvation.

Long processes may be held off indefinitely if short processes are continually added.


7. Longest Remaining Time First:

The longest remaining time first is a preemptive version of the longest job first scheduling algorithm. The operating system uses it to schedule incoming processes in a systematic way. This algorithm schedules first those processes which have the longest processing time remaining before completion.

Characteristics of longest remaining time first:

Among all the processes waiting in the ready queue, the CPU is always assigned to the process having the largest burst time.

If two processes have the same burst time, the tie is broken using FCFS, i.e. the process that arrived first is processed first.

LRTF is the preemptive counterpart of LJF.

Advantages of LRTF:

No other process can execute until the longest task executes completely.

All the jobs or processes finish at the same time approximately.

Disadvantages of LRTF:

This algorithm gives a very high average waiting time and average turnaround time for a given set of processes.

This may lead to a convoy effect.


8. Highest Response Ratio Next:

Highest Response Ratio Next is a non-preemptive CPU Scheduling algorithm and it is considered as one
of the most optimal scheduling algorithms. The name itself states that we need to find the response ratio
of all available processes and select the one with the highest Response Ratio. A process once selected will
run till completion.

Characteristics of Highest Response Ratio Next:

The criteria for HRRN is Response Ratio, and the mode is Non-Preemptive.

HRRN is considered a modification of Shortest Job First designed to reduce the problem of starvation.

In comparison with SJF, under HRRN the CPU is allotted to the process with the highest response ratio rather than simply to the process with the smallest burst time.

Response Ratio = (W + S)/S

Here, W is the waiting time of the process so far and S is the Burst time of the process.
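A minimal sketch of how the response ratio can be used to pick the next process (the ready-queue contents are invented for illustration):

    # Highest Response Ratio Next: Response Ratio = (W + S) / S,
    # where W is the waiting time so far and S is the burst (service) time.
    def response_ratio(waiting, burst):
        return (waiting + burst) / burst

    ready = [("P1", 9, 3), ("P2", 2, 5), ("P3", 5, 2)]   # (name, waiting so far, burst)

    chosen = max(ready, key=lambda p: response_ratio(p[1], p[2]))
    print(chosen[0], round(response_ratio(chosen[1], chosen[2]), 2))   # -> P1 4.0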

Advantages of HRRN:

The HRRN scheduling algorithm generally gives better performance than Shortest Job First scheduling.

It reduces the waiting time for longer jobs while still encouraging shorter jobs.

Disadvantages of HRRN:

HRRN cannot be implemented exactly in practice, because the burst time of every job cannot be known in advance.
In this scheduling, there may occur an overload on the CPU.


9. Multiple Queue Scheduling:

Processes in the ready queue can be divided into different classes where each class has its own scheduling
needs. For example, a common division is a foreground (interactive) process and a background (batch)
process. These two classes have different scheduling needs. For this kind of situation Multilevel Queue
Scheduling is used.

The typical process classes are as follows:

System Processes: The operating system itself has its own processes to run, generally termed system processes.

Interactive Processes: An interactive process is a process that interacts with the user and therefore needs short response times.

Batch Processes: Batch processing is generally a technique in the Operating system that collects the
programs and data together in the form of a batch before the processing starts.

Advantages of multilevel queue scheduling:

The main merit of the multilevel queue is that it has a low scheduling overhead.

Disadvantages of multilevel queue scheduling:

Starvation problem

It is inflexible in nature


10. Multilevel Feedback Queue Scheduling:

Multilevel Feedback Queue (MLFQ) CPU scheduling is like multilevel queue scheduling, but here processes can move between the queues. This makes it much more efficient than plain multilevel queue scheduling.

Characteristics of Multilevel Feedback Queue Scheduling:

In a plain multilevel queue scheduling algorithm, processes are permanently assigned to a queue on entry to the system and are not allowed to move between queues.

Because processes are permanently assigned to their queue, that setup has the advantage of low scheduling overhead, but the disadvantage of being inflexible. MLFQ lifts this restriction.

Advantages of Multilevel feedback queue scheduling:

It is more flexible

It allows different processes to move between different queues

Disadvantages of Multilevel feedback queue scheduling:

It also produces CPU overheads

It is the most complex algorithm.

Memory Management in Operating System

The term Memory can be defined as a collection of data in a specific format. It is used to store
instructions and process data. The memory comprises a large array or group of words or bytes, each with
its own location. The primary motive of a computer system is to execute programs. These programs,
along with the information they access, should be in the main memory during execution. The CPU fetches
instructions from memory according to the value of the program counter.

To achieve a degree of multiprogramming and proper utilization of memory, memory management is


important. Many memory management methods exist, reflecting various approaches, and the
effectiveness of each algorithm depends on the situation.

What is Main Memory:

The main memory is central to the operation of a modern computer. Main Memory is a large array of
words or bytes, ranging in size from hundreds of thousands to billions. Main memory is a repository of
rapidly available information shared by the CPU and I/O devices. Main memory is the place where
programs and information are kept when the processor is effectively utilizing them. Main memory is
associated with the processor, so moving instructions and information into and out of the processor is
extremely fast. Main memory is also known as RAM (Random Access Memory). This memory is volatile; RAM loses its data when a power interruption occurs.

What is Memory Management :

In a multiprogramming computer, the operating system resides in a part of memory and the rest is used by
multiple processes. The task of subdividing the memory among different processes is called memory
management. Memory management is a method in the operating system to manage operations between
main memory and disk during process execution. The main aim of memory management is to achieve
efficient utilization of memory.

Why Memory Management is required:

To allocate and de-allocate memory before and after process execution.

To keep track of the memory space used by processes.

To minimize fragmentation issues.

To ensure proper utilization of main memory.

To maintain data integrity during process execution.

Let us now discuss the concepts of logical address space and physical address space:

Logical and Physical Address Space:

Logical Address space: An address generated by the CPU is known as a “Logical Address”. It is also
known as a Virtual address. Logical address space can be defined as the size of the process. A logical
address can be changed.

Physical Address space: An address seen by the memory unit (i.e. the one loaded into the memory address register) is commonly known as a “Physical Address”. A physical address is also known as a real address. The set of all physical addresses corresponding to a process’s logical addresses is known as its physical address space. A physical address is computed by the MMU; the run-time mapping from virtual to physical addresses is done by a hardware device called the Memory Management Unit (MMU). The physical address always remains constant.

Static and Dynamic Loading:

Loading a process into main memory is done by a loader. There are two different types of loading:

Static loading: The entire program is loaded into memory at a fixed address. It requires more memory space.

Dynamic loading: Normally, the entire program and all data of a process must be in physical memory for the process to execute, so the size of a process is limited to the size of physical memory. To gain better memory utilization, dynamic loading is used. In dynamic loading, a routine is not loaded until it is called. All routines reside on disk in a relocatable load format. One of the advantages of dynamic loading is that a routine that is never used is never loaded. This is useful when large amounts of code (for example, rarely used error-handling routines) are needed only occasionally.

Static and Dynamic linking:

To perform a linking task a linker is used. A linker is a program that takes one or more object files
generated by a compiler and combines them into a single executable file.

Static linking: In static linking, the linker combines all necessary program modules into a single
executable program. So there is no runtime dependency. Some operating systems support only static
linking, in which system language libraries are treated like any other object module.

Dynamic linking: The basic concept of dynamic linking is similar to dynamic loading. In dynamic
linking, “Stub” is included for each appropriate library routine reference. A stub is a small piece of code.
When the stub is executed, it checks whether the needed routine is already in memory or not. If not
available then the program loads the routine into memory.
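The stub behaviour can be illustrated loosely in Python: the first call "loads" the routine and remembers it, so later calls go straight to the loaded routine. This is only an analogy for what a dynamic-linking stub does; real stubs are generated by the linker and operate at the machine-code level.

    # A toy stand-in for a dynamic-linking stub: the routine is "loaded" only on
    # the first call, then reused for all later calls.
    _loaded_routine = None

    def library_routine_stub(x):
        global _loaded_routine
        if _loaded_routine is None:
            print("loading routine into memory...")   # a real stub would load the library code here
            _loaded_routine = lambda v: v * 2         # pretend this is the real library routine
        return _loaded_routine(x)

    print(library_routine_stub(21))   # first call triggers the load
    print(library_routine_stub(10))   # routine is already in memory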
Swapping:

For a process to execute, it must reside in main memory. Swapping is the act of temporarily moving a process out of main memory (which is fast) into secondary memory, and bringing it back later. Swapping allows more processes to be run than can fit into memory at one time. The major part of swap time is transfer time, and the total transfer time is directly proportional to the amount of memory swapped. Swapping is also known as roll-out, roll-in: if a higher-priority process arrives and wants service, the memory manager can swap out a lower-priority process and then load and execute the higher-priority process. After the higher-priority work finishes, the lower-priority process is swapped back into memory and continues its execution.


Memory management with monoprogramming (without swapping):

This is the simplest memory management approach. Memory is divided into two sections:

one part for the operating system

one part for the user program

A fence register separates the operating system from the user program.

In this approach the operating system keeps track of the first and last locations available for allocation to the user program.

The operating system is loaded either at the bottom or at the top of memory.

Interrupt vectors are often located in low memory, therefore it makes sense to load the operating system in low memory.

Sharing of data and code does not make much sense in a single-process environment.

The operating system can be protected from the user program with the help of the fence register.

Advantage

It is a simple management approach.

Disadvantage

It does not support multiprogramming.

Memory is wasted.

Multiprogramming with fixed partitions (without swapping):

A memory partitioning scheme with a fixed number of partitions was introduced to support multiprogramming. This scheme is based on contiguous allocation.

Each partition is a block of contiguous memory.

Memory is partitioned into a fixed number of partitions.

Each partition is of a fixed size.

Partition table:

Once partitions are defined, the operating system keeps track of the status of the memory partitions. This is done through a data structure called the partition table.

Sr. No.   Starting address of partition   Size of partition   Status
1         0K                              200K                Allocated
2         200K                            100K                Free
3         300K                            150K                Free
4         450K                            250K                Allocated

Sample partition table

Logical versus physical address:

An address generated by the CPU is commonly referred to as a logical address. The address seen by the memory unit is known as the physical address.

A logical address can be mapped to a physical address by hardware with the help of a base register; this is known as dynamic relocation of memory references.
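A minimal sketch of this base-register mapping; the limit check is the usual companion of the base register and is included here as an assumption, and the numeric values are invented:

    # Dynamic relocation: physical address = base register + logical address.
    # A limit register (assumed here) keeps the process inside its own partition.
    BASE = 300_000     # start of the partition, illustrative value
    LIMIT = 120_000    # size of the partition, illustrative value

    def translate(logical_address):
        if logical_address < 0 or logical_address >= LIMIT:
            raise MemoryError("addressing error: logical address outside the partition")
        return BASE + logical_address

    print(translate(1234))   # -> 301234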

Contiguous Memory Allocation:

The main memory must accommodate both the operating system and the various user processes. Therefore, the allocation of memory becomes an important task in the operating system. Memory is usually divided into two partitions: one for the resident operating system and one for the user processes. We normally need several user processes to reside in memory simultaneously. Therefore, we need to consider how to allocate available memory to the processes that are in the input queue waiting to be brought into memory. In contiguous memory allocation, each process is contained in a single contiguous segment of memory.


Memory allocation:

To gain proper memory utilization, memory must be allocated in an efficient manner. One of the simplest methods for allocating memory is to divide memory into several fixed-sized partitions, where each partition contains exactly one process. Thus, the degree of multiprogramming is determined by the number of partitions.

Multiple partition allocation: In this method, a process is selected from the input queue and loaded into a free partition. When the process terminates, the partition becomes available for other processes.

Variable partition allocation: In this method, the operating system maintains a table that indicates which parts of memory are available and which are occupied by processes. Initially, all memory is available for user processes and is considered one large block of available memory, known as a “hole”. When a process arrives and needs memory, we search for a hole that is large enough to hold it. If one is found, we allocate only as much memory as is needed, keeping the rest available to satisfy future requests. While allocating memory in this way, the dynamic storage allocation problem arises: how to satisfy a request of size n from a list of free holes.
There are some solutions to this problem:

First fit:

In first fit, the first free hole that is large enough to satisfy the request is allocated to the process.

For example, if the first two free holes are too small and the third free hole is 40 KB, the 40 KB block is the first available hole that can store process A (size 25 KB).

Best fit:

In best fit, we allocate the smallest hole that is big enough for the process. For this, we must search the entire list, unless the list is kept ordered by size.

For example, after traversing the complete list we may find that the last hole, of size 25 KB, is the best-suited hole for process A (size 25 KB).

With this method, memory utilization is maximized compared to the other allocation techniques.

Worst fit:

In worst fit, we allocate the largest available hole to the process. This method produces the largest leftover hole.

For example, if the largest available block is 60 KB, process A (size 25 KB) is allocated to that 60 KB block. Inefficient memory utilization is the major issue with worst fit.
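The three placement strategies can be compared directly in a short sketch. The hole sizes below are invented, but chosen so that the three strategies pick the 40 KB, 25 KB, and 60 KB holes respectively, matching the examples above:

    # First fit, best fit and worst fit over a list of free holes (sizes in KB).
    holes = [10, 15, 40, 60, 25]
    request = 25   # size of process A in KB

    first = next((h for h in holes if h >= request), None)          # first hole that is large enough
    best  = min((h for h in holes if h >= request), default=None)   # smallest hole that fits
    worst = max((h for h in holes if h >= request), default=None)   # largest hole that fits

    print(first, best, worst)   # -> 40 25 60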

Fragmentation:

Fragmentation occurs when processes are loaded into and removed from memory, leaving behind small free holes. These holes cannot be assigned to new processes because they are not contiguous or do not satisfy the memory requirements of the process. To achieve a good degree of multiprogramming, we must reduce this waste of memory. Operating systems distinguish two types of fragmentation:

Internal fragmentation:
Internal fragmentation occurs when a memory block allocated to a process is larger than the requested size. The unused space left over inside the block creates the internal fragmentation problem.

Example: Suppose fixed partitioning is used for memory allocation and there are blocks of 3 MB, 6 MB, and 7 MB in memory. Now a new process P4 of size 2 MB arrives and requests a block of memory. It gets the 3 MB block, but 1 MB of that block is wasted and cannot be allocated to any other process. This is called internal fragmentation.

External fragmentation:

In external fragmentation, enough free memory exists in total, but it cannot be assigned to a process because the free blocks are not contiguous.

Example: Continuing the example above, suppose three processes P1, P2, and P3 arrive with sizes 2 MB, 4 MB, and 7 MB respectively and are allocated the blocks of size 3 MB, 6 MB, and 7 MB. After allocation, P1 leaves 1 MB free and P2 leaves 2 MB free. Now suppose a new process P4 arrives and demands a 3 MB block of memory; that much memory is available in total, but we cannot assign it because the free space is not contiguous. This is called external fragmentation.

Both the first fit and best fit strategies for memory allocation are affected by external fragmentation. To overcome the external fragmentation problem, compaction is used: all free memory is combined into one large block, so this space can be used effectively by other processes.

Another possible solution to external fragmentation is to allow the logical address space of processes to be non-contiguous, thus permitting a process to be allocated physical memory wherever it is available.

Paging:

Paging is a memory management scheme that eliminates the need for contiguous allocation of physical
memory. This scheme permits the physical address space of a process to be non-contiguous.

Logical Address or Virtual Address (represented in bits): An address generated by the CPU

Logical Address Space or Virtual Address Space (represented in words or bytes): The set of all logical
addresses generated by a program

Physical Address (represented in bits): An address actually available on a memory unit

Physical Address Space (represented in words or bytes): The set of all physical addresses corresponding
to the logical addresses

Example:

If Logical Address = 31 bits, then Logical Address Space = 2^31 words = 2 G words (1 G = 2^30)

If Logical Address Space = 128 M words = 2^7 * 2^20 words, then Logical Address = log2(2^27) = 27 bits

If Physical Address = 22 bits, then Physical Address Space = 2^22 words = 4 M words (1 M = 2^20)

If Physical Address Space = 16 M words = 2^4 * 2^20 words, then Physical Address = log2(2^24) = 24 bits

The mapping from virtual to physical address is done by the memory management unit (MMU) which is a
hardware device and this mapping is known as the paging technique.

The Physical Address Space is conceptually divided into several fixed-size blocks, called frames.

The Logical Address Space is also split into fixed-size blocks, called pages.

Page Size = Frame Size

Let us consider an example:

Physical Address = 12 bits, then Physical Address Space = 4 K words

Logical Address = 13 bits, then Logical Address Space = 8 K words

Page size = frame size = 1 K words (assumption)

The address generated by the CPU is divided into

Page number (p): The number of bits required to represent the pages in the Logical Address Space, i.e. the page number.

Page offset (d): The number of bits required to represent a particular word within a page, i.e. the page size of the Logical Address Space, also called the word number within a page or the page offset.

Physical Address is divided into

Frame number (f): The number of bits required to represent a frame of the Physical Address Space, i.e. the frame number.

Frame offset (d): The number of bits required to represent a particular word within a frame, i.e. the frame size of the Physical Address Space, also called the word number within a frame or the frame offset.
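Using the example above (13-bit logical address, page size = frame size = 1 K words, so the offset needs 10 bits), the address split and the page-table lookup can be sketched as follows; the page-table contents are invented for illustration:

    OFFSET_BITS = 10                                  # 1 K words = 2**10

    def split_logical_address(addr):
        page_number = addr >> OFFSET_BITS             # upper bits select the page
        offset = addr & ((1 << OFFSET_BITS) - 1)      # lower bits select the word inside the page
        return page_number, offset

    page_table = {0: 5, 1: 2, 2: 7}                   # hypothetical page -> frame mapping

    page, offset = split_logical_address(2051)        # logical address 2051 = page 2, offset 3
    frame = page_table[page]
    physical_address = (frame << OFFSET_BITS) | offset
    print(page, offset, physical_address)             # -> 2 3 7171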

The hardware implementation of the page table can be done by using dedicated registers. But using registers for the page table is satisfactory only if the page table is small. If the page table contains a large number of entries, then we can use a TLB (Translation Look-aside Buffer), a special, small, fast-lookup hardware cache.

The TLB is an associative, high-speed memory.

Each entry in TLB consists of two parts: a tag and a value.

When this memory is used, then an item is compared with all tags simultaneously. If the item is found,
then the corresponding value is returned.
Let the main memory access time be m.

If the page table is kept in main memory,

Effective access time = m (to access the page table) + m (to access the word in the page)
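As a small numeric sketch: with the page table in main memory, every reference costs two memory accesses, while a TLB hit avoids the page-table access. The TLB lookup time and hit ratio below are assumed values, and the hit-ratio formula is the standard textbook one rather than something stated in this text:

    m = 100   # main memory access time in ns (illustrative)
    t = 10    # TLB lookup time in ns (assumed)
    h = 0.9   # TLB hit ratio (assumed)

    eat_without_tlb = m + m                                # page-table access + word access
    eat_with_tlb = h * (t + m) + (1 - h) * (t + m + m)     # standard formula with a TLB
    print(eat_without_tlb, eat_with_tlb)                   # -> 200 120.0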


Job scheduling

Job scheduling, also known as batch scheduling, is a process that allocates system resources to control the
execution of unattended background programs. The scheduler decides which jobs to execute, at which
time, and the central processing unit (CPU) resources needed to complete the job. It ensures that all jobs
are completed according to set priorities.

Job scheduling software can perform scheduling and monitor jobs or batches in real-time. Modern job
schedulers have a graphical user interface (GUI) with a single point control system. Many companies use
workload automation software to automate error-prone tasks related to job scheduling, processing, and
warehousing.

Workload automation software helps businesses reduce manual interaction, enabling the IT department to
focus on tasks with higher priority. IT professionals can quickly address central logging and reporting
issues and make use of other capabilities such as auto-remediation, alerts, and notifications.

Job schedulers use some standard parameters to decide which job to run. These parameters are as follows:

Job priority

Job dependency

Computer resource availability

File dependency

Operator prompt dependency

Estimated execution time

Elapsed execution time

Execution time allocated to a user

Simultaneous jobs allowed for a user

Peripheral device availability

Prescribed events’ occurrence

Availability of license key when a job is using a licensed software


Types of job scheduling

Companies schedule jobs or batches through multiple types of scheduling processes. Below are three
common job scheduling types that IT teams use to optimize their environment.

Long-term scheduling: When new processes are created, a long list of items becomes ready for processing. Maintaining such a long list requires substantial processing power and adds overhead on the operating system, with more context switching and dispatching. Long-term scheduling exists to manage this list: a long-term scheduler decides which jobs go into the short-term or medium-term schedulers' processing queue, and it limits the processes that enter the queue based on different processing algorithms.

Medium-term scheduling: For some operating systems, a new process begins in a swapped-out condition.
A swap-out happens when a process is removed from the random access memory (RAM) and is added to
the hard disk. This type is a part of the swapping function. When there’s free space in the main memory,
the scheduler decides which process can be swapped in. This depends on the memory, priority, and other
required resources. A medium-term scheduler often performs the swapping-in function for swapped-out
processes.

Short-term scheduling: A short-term scheduler, also called a dispatcher, starts when a new event occurs.
This occurs more frequently and might interrupt a running process. Short-term schedulers are fast and
select new processes ready for execution, allocating CPU to one of them, which happens very frequently.

Job scheduling algorithms

Short-term scheduling primarily uses job scheduling algorithms to allocate processes and optimize system
behavior. Below are some common scheduling algorithms or policies that impact which processes should
be assigned to the CPU.

FCFS scheduling algorithm

The first-come, first-serve (FCFS) job scheduling algorithm follows the first-in, first-out method. As
processes join the ready queue, the scheduler picks the oldest job in the queue and sends it for processing.
The average processing time for these jobs is comparatively long.

Below are the advantages and disadvantages of FCFS algorithms.

Advantage: FCFS adds minimum overhead on the processor and is better for lengthy processes.

Disadvantage: Convoy effects occur when even a tiny job waits for a long time to move into processing,
resulting in lower CPU utilization.

SJF scheduling

Shortest job first (SJF), also known as shortest job next (SJN), selects a job that would require the shortest
processing time and allocates it to the CPU. This algorithm associates each process with the length of its next CPU burst, i.e. the period during which the process uses the CPU before it next blocks or terminates.
Suppose two jobs have the same CPU burst. The scheduler would then use the FCFS algorithm to resolve
the tie and move one of them to execution.

Below are the advantages and disadvantages of the shortest job first scheduling.

Advantage: The throughput is high as the shortest jobs are preferred over a long-run process.

Disadvantage: It must track burst lengths, which adds overhead on the CPU. Furthermore, it can result in starvation, as long processes may sit in the queue for a long time.

Priority scheduling

Priority scheduling associates a priority (an integer) to each process. The one with the highest priority
gets executed first. Usually, the smallest integer is assigned to a job with the highest priority. If there are
two jobs with similar priority, the algorithm uses FCFS to determine which would move into processing.

Below is an advantage and disadvantage of priority scheduling.

Advantage: Priority jobs have a good response time.

Disadvantage: Longer jobs may experience starvation.

Round robin scheduling

Round robin scheduling is designed for time-sharing systems. It’s a preemptive scheduler based on the
clock and is often called a time-slicing scheduler. Whenever a periodic clock interval occurs, the
scheduler moves a currently processing job to the ready queue. It takes the next job in the queue for
processing on a first-come, first-serve basis.

Deciding a time quantum or a time slice is tricky in this scheduling algorithm. If the time slice is short,
small jobs get processed faster.

Below are some advantages and disadvantages of round-robin scheduling.

Advantages: Provides fair treatment to all processes, and the processor overhead is low.

Disadvantages: Throughput can be low if the time slice is too short.

How does job scheduling software work?

Enterprise job scheduling software consists of a job scheduling interface and an execution agent. These
elements play a vital role in the overall function of a job scheduling system.

Below are a few primary responsibilities of a job or batch scheduler:

Define tasks to execute with the help of the drag and drop feature

Create a queue and schedule jobs to prioritize task execution

Allocate jobs to the right agent based on multiple factors such as priority, frequency, and more
On the other hand, an execution agent looks after the following processes:

Submitting tasks to execution

Monitoring tasks during execution

An execution agent refers to technical information such as CPU availability, projected execution time,
and dependencies during execution.
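As a rough sketch of how a scheduler might order jobs using one of the parameters listed earlier (job priority), the snippet below keeps a priority queue of pending jobs; the job names and priority values are invented for the example:

    import heapq

    # Each entry: (priority, submission order, job name). A lower number means higher priority.
    job_queue = []
    heapq.heappush(job_queue, (2, 0, "nightly-backup"))
    heapq.heappush(job_queue, (1, 1, "payroll-batch"))
    heapq.heappush(job_queue, (3, 2, "log-rotation"))

    while job_queue:
        priority, _, name = heapq.heappop(job_queue)
        print(f"dispatching {name} (priority {priority})")   # hand the job to an execution agent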

Companies can automate various tasks with workload scheduling software.

Below are some of the common tasks that job schedulers automate.

Event triggering: Job schedulers can detect triggering events such as emails, file modifications, system
updates, file transfers, and user-defined events. They can be connected to different APIs to detect such
triggers.

File processing: Job scheduling tools monitor file movements. As soon as a triggering file enters the
system, it informs the execution agent to process the preset task.

File transferring: Job scheduling programs can trigger a file transfer protocol (FTP) to initiate a secure
transfer from the server to the internet or pull data from the internet to the server.

Event logging: Job scheduling systems generate and record event logs for regulatory compliance.

Job scheduling vs. CPU scheduling vs. workload automation

Both job scheduling and CPU scheduling are associated with process execution. Job scheduling is the
mechanism that decides which process should be moved to the ready queue. Usually, long-term
schedulers perform job scheduling.

On the other hand, CPU scheduling is a mechanism that determines which process should be executed
next and allocates the CPU accordingly. Short-term schedulers usually perform CPU scheduling.

Traditional job scheduling tools automate tasks for specific platforms or applications. On the flip side,
workload automation software centralizes job controls over multiple platforms, increasing coordination
between operating systems and reducing conflicts.

Resource allocation

The operating system allocates resources when a program needs them. When the program terminates, the resources are de-allocated and can be allocated to other programs that need them. Now the question is: what strategy does the operating system use to allocate these resources to user programs?

There are two Resource allocation techniques:


Resource partitioning approach –

In this approach, the operating system decides beforehand which resources should be allocated to which user program. It divides the resources in the system into many resource partitions, where each partition may include various resources, for example 1 MB of memory, some disk blocks, and a printer.

Then, it allocates one resource partition to each user program before the program’s initiation. A resource
table records the resource partition and its current allocation status (Allocated or Free).

Advantages:

Easy to Implement

Less Overhead

Disadvantages:

Lacks flexibility – if a resource partition contains more resources than what a particular process requires,
the additional resources are wasted.

If a program needs more resources than a single resource partition, it cannot execute (Though free
resources are present in other partitions).

An example resource table would record, for each resource partition, the resources it contains and its current allocation status (Allocated or Free).

Pool based approach –

In this approach, there is a common pool of resources. The operating system checks the allocation status in the resource table whenever a program makes a request for a resource. If the resource is free, it allocates the resource to the program.

Advantages:

Allocated resources are not wasted.

Any resource requirement can be fulfilled if the resource is free (unlike Partitioning approach)

Disadvantages:

Overhead of allocating and de-allocating the resources on every request and release
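A minimal sketch of the pool-based bookkeeping described above, with the resource table kept as a simple dictionary (the resource names are invented for the example):

    # Pool-based allocation: one shared resource table tracks which resources are free.
    resource_table = {"printer-1": "Free", "disk-blocks-A": "Free", "tape-drive": "Allocated"}

    def request_resource(name):
        if resource_table.get(name) == "Free":
            resource_table[name] = "Allocated"
            return True
        return False                     # resource busy or unknown: the request must wait

    def release_resource(name):
        resource_table[name] = "Free"

    print(request_resource("printer-1"))    # True: the printer was free
    print(request_resource("tape-drive"))   # False: already allocated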
