
OVERVIEW OF OPERATING SYSTEMS

What is an Operating System?


A program that acts as an intermediary between a user of a computer and the computer hardware
Operating system goals:
 Execute user programs and make solving user problems easier
 Make the computer system convenient to use
 Use the computer hardware in an efficient manner
Computer System Structure
Computer system can be divided into four components
 Hardware – provides basic computing resources
CPU, memory, I/O devices
 Operating system
Controls and coordinates use of hardware among various applications and users
 Application programs – define the ways in which the system resources are used to solve the computing
problems of the users
Word processors, compilers, web browsers, database systems, video games
 Users
People, machines, other computers
Four Components of a Computer System

Operating System Definition


 OS is a resource allocator
 Manages all resources
 Decides between conflicting requests for efficient and fair resource use
 OS is a control program
 Controls execution of programs to prevent errors and improper use of the computer
 No universally accepted definition
 “Everything a vendor ships when you order an operating system” is a good approximation
But varies wildly
 “The one program running at all times on the computer” is the kernel. Everything else is either a system
program (ships with the operating system) or an application program
Operating System Structure

o Simple Structure
o Monolithic Structure
o Layered Approach Structure
o Micro-Kernel Structure
o Exo-Kernel Structure
o Virtual Machines

SIMPLE STRUCTURE
It is the most straightforward operating system structure, but it lacks definition and is only appropriate for small and limited systems. Because the interfaces and levels of functionality in this structure are not well separated, application programs are able to access basic I/O routines directly, which may result in unauthorized access to I/O procedures.
This organizational structure is used by the MS-DOS operating system:
o There are four layers that make up the MS-DOS operating system, and each has its own set of features.
o These layers include ROM BIOS device drivers, MS-DOS device drivers, application programs, and system
programs.
o The MS-DOS operating system benefits from layering because each level can be defined independently and,
when necessary, can interact with one another.
o If the system is built in layers, it will be simpler to design, manage, and update. Because of this, simple structures
can be used to build constrained systems that are less complex.
o When a user program fails, the operating system as a whole crashes.
o Because MS-DOS systems have a low level of abstraction, programs and I/O procedures are visible to end users,
giving them the potential for unwanted access.

The following figure illustrates layering in simple structure:

Advantages of Simple Structure:


o Because there are only a few interfaces and levels, it is simple to develop.
o Because there are fewer layers between the hardware and the applications, it offers superior performance.

Disadvantages of Simple Structure:


o The entire operating system breaks if just one user program malfunctions.
o Since the layers are interconnected, and in communication with one another, there is no abstraction or data
hiding.
o The operating system's operations are accessible to layers, which can result in data tampering and system
failure.

MONOLITHIC STRUCTURE
The monolithic operating system controls all aspects of the operating system's operation, including file management, memory management, device management, and process management.
The core of an operating system is called the kernel. The kernel provides fundamental services to all other system components, and it serves as the main interface between the operating system and the hardware. Because the entire operating system runs as a single program in kernel mode, the kernel can directly access all of the machine's resources.
The monolithic operating system is often referred to as the monolithic kernel. Programming techniques such as batch processing and time-sharing increase a processor's usability. Running directly on the hardware and in complete command of it, the monolithic kernel acts as a virtual machine for the programs above it. This is an old style of operating system that was used in banks to carry out simple tasks like batch processing and time-sharing, which allows numerous users at different terminals to access the operating system.
The following diagram represents the monolithic structure:

Advantages of Monolithic Structure:


o Because layering is unnecessary and the kernel alone is responsible for managing all operations, it is easy to design and implement.
o Because functions like memory management, file management, and process scheduling are implemented in the same address space, the monolithic kernel runs rather quickly compared to other structures. Keeping everything in one address space avoids the overhead of crossing protection boundaries between components.

Disadvantages of Monolithic Structure:


o The monolithic kernel's services are interconnected in address space and have an impact on one another, so if
any of them malfunctions, the entire system does as well.
o It is not adaptable. Therefore, launching a new service is difficult.

LAYERED STRUCTURE
The OS is separated into layers or levels in this kind of arrangement. Layer 0 (the lowest layer) contains the hardware, and layer N (the highest layer) contains the user interface. These layers are organized hierarchically, with the top-level layers making use of the capabilities of the lower-level ones.
The functionality of each layer is kept separate in this method, and abstraction is also available. Because layered structures are hierarchical, debugging is simpler: all lower-level layers are debugged before the upper layer is examined, so only the current layer has to be reviewed, since all the lower layers have already been verified.
The image below shows how OS is organized into layers:
Advantages of Layered Structure:
o Work duties are separated since each layer has its own functionality, and there is some amount of abstraction.
o Debugging is simpler because the lower layers are examined first, followed by the top layers.

Disadvantages of Layered Structure:


o Performance is compromised in layered structures due to layering.
o Construction of the layers requires careful design because upper layers only make use of lower layers'
capabilities.

MICRO-KERNEL STRUCTURE
The operating system is created using a micro-kernel framework that strips the kernel of any non-essential parts. These optional components are implemented as system and user-level programs, and kernels designed this way are called micro-kernels.
Each service is implemented separately and kept apart from the others. As a result, the system is more trustworthy and secure. If one service malfunctions, the rest of the operating system is unaffected and continues to function normally.
The image below shows Micro-Kernel Operating System Structure:
Advantages of Micro-Kernel Structure:
o It enables portability of the operating system across platforms.
o Due to the isolation of each Micro-Kernel, it is reliable and secure.
o The reduced size of Micro-Kernels allows for successful testing.
o The remaining operating system remains unaffected and keeps running properly even if a component or Micro-
Kernel fails.

Disadvantages of Micro-Kernel Structure:


o The performance of the system is decreased by increased inter-module communication.
o The construction of a system is complicated.

EXOKERNEL
An operating system called Exokernel was created at MIT with the goal of offering application-level management of hardware resources. The exokernel architecture's goal is to enable application-specific customization by separating resource management from protection. Exokernels tend to be small because their functionality is limited.
Because the OS sits between the programs and the actual hardware, it will always have an effect on the functionality, performance, and scope of the applications that are developed on it. The exokernel operating system attempts to solve this issue by rejecting the idea that an operating system must offer abstractions upon which to base applications. The goal is to impose as few restrictions on developers as possible while still allowing them to use abstractions when necessary. In the exokernel architecture, a single tiny kernel moves all hardware abstractions into untrusted libraries known as library operating systems. Exokernels differ from micro- and monolithic kernels in that their primary objective is to avoid forced abstraction.
Exokernel operating systems have a number of features, including:
o Enhanced support for application control.
o Separates resource management from protection.
o Abstractions are moved securely into untrusted library operating systems.
o Exposes a low-level hardware interface.
o Library operating systems provide compatibility and portability.

Advantages of Exokernel Structure:


o Application performance is enhanced by it.
o Accurate resource allocation and revocation enable more effective utilisation of hardware resources.
o New operating systems can be tested and developed more easily.
o Every user-space program is permitted to utilise its own customised memory management.

Disadvantages of Exokernel Structure:


o A decline in consistency
o Exokernel interfaces have a complex architecture.

VIRTUAL MACHINES (VMs)


The hardware of our personal computer, including the CPU, disk drives, RAM, and NIC (Network Interface Card), is abstracted by a virtual machine into several separate execution contexts based on our needs, giving us the impression that each execution environment is a separate computer. VirtualBox is an example of this.
Using CPU scheduling and virtual memory techniques, an operating system allows us to execute multiple processes
simultaneously while giving the impression that each one is using a separate processor and virtual memory. System calls
and a file system are examples of extra functionalities that a process can have that the hardware is unable to give.
Instead of offering these extra features, the virtual machine method just offers an interface that is similar to that of the
most fundamental hardware. A virtual duplicate of the computer system underneath is made available to each process.
We can develop a virtual machine for a variety of reasons, all of which are fundamentally connected to the capacity to
share the same underlying hardware while concurrently supporting various execution environments, i.e., various
operating systems.
Disk systems are the fundamental problem with the virtual machine technique. Imagine that the physical machine has only three disk drives but needs to host seven virtual machines. Clearly, it is impossible to assign a disk drive to every virtual machine, especially since the virtual machine software itself requires a sizable amount of disk space to provide virtual memory and spooling. The solution is to provide virtual disks.
The result is that users get their own virtual machines. They can then use any of the operating systems or software
programs that are installed on the machine below. Virtual machine software is concerned with multiplexing numerous virtual machines onto a single physical machine; it does not need to consider any user-support software. With this configuration, the challenge of building an interactive system for several users can be broken into two manageable pieces.
Advantages of Virtual Machines:
o Due to total isolation between each virtual machine and every other virtual machine, there are no issues with
security.
o A virtual machine may offer an architecture for the instruction set that is different from that of actual
computers.
o Simple availability, accessibility, and recovery convenience.

Disadvantages of Virtual Machines:


o Depending on the workload, operating numerous virtual machines simultaneously on a host computer may have
an adverse effect on one of them.
o When it comes to hardware access, virtual machines are less efficient than physical ones.

Operating System Services


 One set of operating-system services provides functions that are helpful to the user:
 User interface - Almost all operating systems have a user interface (UI)
 Varies between Command-Line Interface (CLI), Graphical User Interface (GUI), and Batch
 Program execution - The system must be able to load a program into memory and to run that program, end
execution, either normally or abnormally (indicating error)
 I/O operations - A running program may require I/O, which may involve a file or an I/O device
 File-system manipulation - The file system is of particular interest. Obviously, programs need to read and write files and directories, create and delete them, search them, list file information, and manage permissions.

A View of Operating System Services

Operating System Services


 One set of operating-system services provides functions that are helpful to the user (cont.):
 Communications – Processes may exchange information, on the same computer or between computers over a network
Communications may be via shared memory or through message passing (packets moved by the OS)
 Error detection – OS needs to be constantly aware of possible errors
May occur in the CPU and memory hardware, in I/O devices, in user program
For each type of error, OS should take the appropriate action to ensure correct and consistent computing
Debugging facilities can greatly enhance the user’s and programmer’s abilities to efficiently use the system
 Another set of OS functions exists for ensuring the efficient operation of the system itself via resource sharing
 Resource allocation - When multiple users or multiple jobs are running concurrently, resources must be allocated to each of them
 Many types of resources - Some (such as CPU cycles, main memory, and file storage) may have special
allocation code, others (such as I/O devices) may have general request and release code
 Accounting - To keep track of which users use how much and what kinds of computer resources
 Protection and security - The owners of information stored in a multiuser or networked computer system may
want to control use of that information, concurrent processes should not interfere with each other
 Protection involves ensuring that all access to system resources is controlled
 Security of the system from outsiders requires user authentication, extends to defending external I/O devices
from invalid access attempts
 If a system is to be protected and secure, precautions must be instituted throughout it. A chain is only as strong
as its weakest link.
User Operating System Interface - CLI
 Command Line Interface (CLI) or command interpreter allows direct command entry
Sometimes implemented in kernel, sometimes by systems program
Sometimes multiple flavors implemented – shells
Primarily fetches a command from the user and executes it
 Sometimes commands built-in, sometimes just names of programs
 If the latter, adding new features doesn’t require shell modification
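As a rough sketch of that fetch-and-execute loop (assuming a POSIX system; the prompt string, buffer size, and single-word command handling are simplifications, not how any particular shell is written), a toy command interpreter might look like this in C++:

#include <cstdio>
#include <cstring>
#include <sys/wait.h>
#include <unistd.h>

// Toy command interpreter: fetch a command, run it as a program, repeat.
// A real shell adds argument parsing, pipes, redirection, built-ins, etc.
int main() {
    char line[256];
    while (true) {
        std::printf("mysh> ");
        std::fflush(stdout);                                // show the prompt
        if (!std::fgets(line, sizeof(line), stdin)) break;  // EOF ends the shell
        line[std::strcspn(line, "\n")] = '\0';              // strip the newline
        if (line[0] == '\0') continue;                      // empty input: re-prompt
        if (fork() == 0) {                  // child: run the command by name
            execlp(line, line, (char*)nullptr);
            std::perror("exec");            // reached only if the command failed
            _exit(1);
        }
        wait(nullptr);                      // shell waits, then prompts again
    }
    return 0;
}

Note that because commands here are just names of programs found on the PATH, adding a new command requires no change to the shell itself, as the bullet above describes.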
User Operating System Interface - GUI

 User-friendly desktop metaphor interface


 Usually mouse, keyboard, and monitor
 Icons represent files, programs, actions, etc
 Various mouse buttons over objects in the interface cause various actions (provide information, options, execute a function, open a directory (known as a folder))
 Invented at Xerox PARC
 Many systems now include both CLI and GUI interfaces
 Microsoft Windows is GUI with CLI “command” shell
 Apple Mac OS X as “Aqua” GUI interface with UNIX kernel underneath and shells available
 Solaris is CLI with optional GUI interfaces
(Java Desktop, KDE)

System Calls
What is a System Call
 Programming interface to the services provided by the OS
 Typically written in a high-level language (C or C++)
 Mostly accessed by programs via a high-level Application Program Interface (API) rather than direct system call use
 The three most common APIs are the Win32 API for Windows, the POSIX API for POSIX-based systems (including virtually all versions of UNIX, Linux, and Mac OS X), and the Java API for the Java virtual machine (JVM)
 Why use APIs rather than system calls? (Note that the system-call names used throughout this text are generic)

Here is the difference between an API and a system call:


What is an API
Different devices and applications share data with each other; online reservation and booking systems are common examples. An API (Application Programming Interface) helps to establish connectivity among devices and applications. It is an interface that takes requests from the user, tells the system what should be done, and returns the response back to the user.

For example, assume an online travel service that aggregates information from multiple airlines. The travel service interacts with each airline's API. The API carries requests to book seats and select meals from the travel service to the airline system, then delivers the airline's responses back to the online travel service, which displays the details to the users. This is a real-world application of an API.

Fig-API – System Call – OS Relationship


Example of System Calls

Example of Standard API


Consider the ReadFile() function in the Win32 API—a function for reading from a file
A description of the parameters passed to ReadFile()
 HANDLE file—the file to be read
 LPVOID buffer—a buffer into which the data will be read
 DWORD bytesToRead—the number of bytes to be read into the buffer
 LPDWORD bytesRead—the number of bytes read during the last read
 LPOVERLAPPED ovl—indicates if overlapped I/O is being used
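As a rough illustration (assuming a Windows build environment; the file name below is made up), a call to ReadFile() might look like this:

#include <windows.h>
#include <cstdio>

int main() {
    // Open an existing file for reading (the file name is hypothetical).
    HANDLE file = CreateFileA("demo.txt", GENERIC_READ, FILE_SHARE_READ,
                              nullptr, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL,
                              nullptr);
    if (file == INVALID_HANDLE_VALUE) return 1;

    char buffer[128];
    DWORD bytesRead = 0;
    // Fill `buffer` with up to sizeof(buffer) bytes; bytesRead reports how
    // many were actually read. No overlapped I/O is used, so ovl is null.
    if (ReadFile(file, buffer, sizeof(buffer), &bytesRead, nullptr)) {
        std::printf("read %lu bytes\n", static_cast<unsigned long>(bytesRead));
    }
    CloseHandle(file);
    return 0;
}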
System Call Implementation
 Typically, a number is associated with each system call
 The system-call interface maintains a table indexed according to these numbers
 The system call interface invokes intended system call in OS kernel and returns status of the system call and any
return values
 The caller need know nothing about how the system call is implemented
 Just needs to obey the API and understand what the OS will do as a result of the call
 Most details of OS interface hidden from programmer by API
Managed by run-time support library (set of functions built into libraries included with compiler)
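A toy sketch of that table-driven dispatch (every call number, name, and handler below is made up; a real kernel dispatches from a privileged trap handler, not from ordinary user code):

#include <cstdio>
#include <cstdint>

// A "system call" handler takes up to three integer-sized arguments.
using Handler = std::intptr_t (*)(std::intptr_t, std::intptr_t, std::intptr_t);

// Fake handlers standing in for kernel services.
std::intptr_t sys_write(std::intptr_t fd, std::intptr_t buf, std::intptr_t len) {
    return std::fwrite(reinterpret_cast<const char*>(buf), 1, len,
                       fd == 1 ? stdout : stderr);
}
std::intptr_t sys_getpid(std::intptr_t, std::intptr_t, std::intptr_t) {
    return 4242;  // a fake process ID
}

// The table: each call's number is its index.
Handler syscall_table[] = { sys_write, sys_getpid };

// What the system-call interface conceptually does with the number.
std::intptr_t dispatch(int number, std::intptr_t a, std::intptr_t b, std::intptr_t c) {
    return syscall_table[number](a, b, c);
}

int main() {
    const char msg[] = "hello via syscall 0\n";
    dispatch(0, 1, reinterpret_cast<std::intptr_t>(msg), sizeof(msg) - 1);
    std::printf("pid = %ld\n", static_cast<long>(dispatch(1, 0, 0, 0)));
}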

Standard C Library Example


Here a C program invokes the printf() library call, which in turn calls the write() system call.
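A minimal sketch of that relationship on an assumed POSIX system; both lines below end up in write(), the first by way of the C library:

#include <cstdio>    // printf(): a C library call
#include <unistd.h>  // write(): the underlying POSIX system call

int main() {
    // Library route: printf() formats and buffers the text, then the
    // library eventually hands it to the kernel through write().
    std::printf("hello via printf -> write()\n");

    // Direct route: ask the kernel to write the bytes ourselves.
    const char msg[] = "hello via write() directly\n";
    write(STDOUT_FILENO, msg, sizeof(msg) - 1);
    return 0;
}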

System Call Parameter Passing


 Often, more information is required than simply the identity of the desired system call
 Exact type and amount of information vary according to OS and call
 Three general methods used to pass parameters to the OS
 (1) Simplest: pass the parameters in registers
 In some cases, there may be more parameters than registers
 Parameters are accessed much faster in registers
 (2) Parameters stored in a block, or table, in memory, and the address of the block passed as a parameter in a register
This approach is taken by Linux and Solaris
 (3) Parameters placed, or pushed, onto the stack by the program and popped off the stack by the operating system to get the parameters
 Block and stack methods do not limit the number or length of parameters being passed
Parameter Passing via Table
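As a conceptual sketch of method (2) only — the structure, the function, and the "kernel" below are all made up to show the shape of the idea, not a real ABI:

#include <cstdio>

// Hypothetical parameter block: with the block method, user code fills a
// structure like this in memory and passes only its address to the kernel
// (for example, in one register). The names are illustrative only.
struct OpenParams {
    const char* path;   // file to open
    int flags;          // open mode
};

// Stand-in for the kernel side: it receives a single address and unpacks
// the parameters from the block.
int fake_kernel_open(const OpenParams* p) {
    std::printf("open(path=%s, flags=%d)\n", p->path, p->flags);
    return 3;  // pretend file descriptor
}

int main() {
    OpenParams params{"/tmp/demo.txt", 0};  // fill the block in user memory
    int fd = fake_kernel_open(&params);     // pass only the block's address
    std::printf("got fd %d\n", fd);
}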

Types of System Calls


 Process control
 File management
 Device management
 Information maintenance
 Communications
 Protection
Process control system calls – Create, execute, terminate processes, set process attributes, etc.

File management system calls – Create, read, write, delete files, open and close files, set file attributes, etc.

Device management system calls – Request and release devices, set device attributes, etc.

Information maintenance system calls – Get and set system data, get and set time and date, etc.

Communication system calls – Send and receive messages, transfer status information, create and delete
communication connections, etc.
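A hedged sketch touching three of these categories, assuming a POSIX system (the file name is arbitrary):

#include <cstdio>
#include <fcntl.h>     // open()
#include <sys/types.h>
#include <sys/wait.h>  // wait()
#include <unistd.h>    // fork(), execlp(), write(), close(), getpid()

int main() {
    // Information maintenance: report this process's ID.
    std::printf("pid=%d\n", static_cast<int>(getpid()));

    // File management: create a file, write to it, close it.
    int fd = open("demo.txt", O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd >= 0) {
        write(fd, "hi\n", 3);
        close(fd);
    }

    // Process control: create a child, replace its image, wait for it.
    pid_t pid = fork();
    if (pid == 0) {
        execlp("ls", "ls", "-l", "demo.txt", (char*)nullptr);
        _exit(1);  // reached only if exec failed
    }
    wait(nullptr);
    return 0;
}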

Types of Operating Systems


There are several types of Operating Systems which are mentioned below.
 Batch Operating System
 Multi-Programming System
 Multi-Processing System
 Multi-Tasking Operating System
 Time-Sharing Operating System
 Distributed Operating System
 Network Operating System
 Real-Time Operating System

1. Batch Operating System

This type of operating system does not interact with the computer directly. There is an operator which takes similar
jobs having the same requirement and groups them into batches. It is the responsibility of the operator to sort jobs
with similar needs.
Batch Operating System
Advantages of Batch Operating System
 Although it is very difficult to guess the time required for any job to complete, the processors of batch systems know how long a job will be as soon as it is in the queue.
 Multiple users can share the batch systems.
Disadvantages of Batch Operating System
 The computer operators should be well acquainted with batch systems.
 Batch systems are hard to debug.
 It is sometimes costly.
 The other jobs will have to wait for an unknown time if any job fails.
Examples of Batch Operating Systems: Payroll Systems, Bank Statements, etc.

2. Multi-Programming Operating System

In a multiprogramming operating system, more than one program is present in main memory, and any one of them can be kept in execution at a time. This is basically used for better utilization of resources.

MultiProgramming
Advantages of Multi-Programming Operating System
 Multi Programming increases the Throughput of the System.
 It helps in reducing the response time.
Disadvantages of Multi-Programming Operating System
 There is no facility for user interaction with the system while programs are executing.

3. Multi-Processing Operating System

A Multi-Processing Operating System is a type of operating system in which more than one CPU is used for the execution of processes. It improves the throughput of the system.

Multiprocessing
Advantages of Multi-Processing Operating System
 It increases the throughput of the system.
 As it has several processors, if one processor fails, we can proceed with another processor.
Disadvantages of Multi-Processing Operating System
 Due to the multiple CPUs, it can be more complex and somewhat difficult to understand.

4. Multi-Tasking Operating System

A Multitasking Operating System is simply a multiprogramming operating system with the added facility of a Round-Robin scheduling algorithm. It can run multiple programs simultaneously.

Multitasking
Advantages of Multi-Tasking Operating System
 Multiple Programs can be executed simultaneously in Multi-Tasking Operating System.
 It comes with proper memory management.
Disadvantages of Multi-Tasking Operating System
 The system can overheat when several heavy programs are run repeatedly.
5. Time-Sharing Operating Systems

Each task is given some time to execute so that all the tasks work smoothly. Each user gets the time of the CPU as
they use a single system. These systems are also known as Multitasking Systems. The task can be from a single user or
different users also. The time that each task gets to execute is called quantum. After this time interval is over OS
switches over to the next task.

Time-Sharing OS
Advantages of Time-Sharing OS
 Each task gets an equal opportunity.
 Fewer chances of duplication of software.
 CPU idle time can be reduced.
 Resource Sharing: Time-sharing systems allow multiple users to share hardware resources such as the CPU,
memory, and peripherals, reducing the cost of hardware and increasing efficiency.
Disadvantages of Time-Sharing OS
 Reliability problem.
 One must have to take care of the security and integrity of user programs and data.
 Data communication problem.
 High Overhead: Time-sharing systems have a higher overhead than other operating systems due to the need for
scheduling, context switching, and other overheads that come with supporting multiple users.

6. Distributed Operating System

These types of operating systems are a recent advancement in the world of computer technology and are being widely accepted all over the world, at a great pace. Various autonomous interconnected computers communicate with each other using a shared communication network. Independent systems possess their own memory unit and CPU. These are referred to as loosely coupled systems or distributed systems. These systems' processors differ in size and function. The major benefit of working with these types of operating systems is that a user can always access files or software that are not actually present on his own system but on some other system connected within the network; i.e., remote access is enabled within the devices connected to that network.
Distributed OS
Examples of Distributed Operating Systems are LOCUS, etc.

7. Network Operating System:

These systems run on a server and provide the capability to manage data, users, groups, security, applications, and
other networking functions. These types of operating systems allow shared access to files, printers, security,
applications, and other networking functions over a small private network. One more important aspect of Network
Operating Systems is that all the users are well aware of the underlying configuration, of all other users within the
network, their individual connections, etc. and that’s why these computers are popularly known as tightly coupled
systems.

Network Operating System


Examples of Network Operating Systems are Microsoft Windows Server 2003, Microsoft Windows Server 2008, UNIX,
Linux, Mac OS X, Novell NetWare, BSD, etc.

8. Real-Time Operating System

These types of OSs serve real-time systems. The time interval required to process and respond to inputs is very small.
This time interval is called response time.
Real-time systems are used when there are time requirements that are very strict like missile systems, air traffic
control systems, robots, etc.

PROCESS MANAGEMENT
A program does nothing unless its instructions are executed by a CPU. A program in execution is called a process. In order to accomplish its task, a process needs computer resources.
There may exist more than one process in the system that requires the same resource at the same time. Therefore, the operating system has to manage all the processes and resources in a convenient and efficient way.
Some resources may need to be used by only one process at a time to maintain consistency; otherwise the system can become inconsistent and deadlock may occur.
The operating system is responsible for the following activities in connection with Process Management
1. Scheduling processes and threads on the CPUs.
2. Creating and deleting both user and system processes.
3. Suspending and resuming processes.
4. Providing mechanisms for process synchronization.
5. Providing mechanisms for process communication.

Attributes of a process (PCB)


The Attributes of the process are used by the Operating System to create the process control block (PCB) for each of
them. This is also called context of the process. Attributes which are stored in the PCB are described below.
1. Process ID
When a process is created, a unique id is assigned to the process which is used for unique identification of the process in
the system.
2. Program counter
The program counter stores the address of the next instruction of the process to be executed, saved at the point where the process was suspended. The CPU uses this address when the execution of this process is resumed.
3. Process State
The Process, from its creation to the completion, goes through various states which are new, ready, running and waiting.
We will discuss about them later in detail.
4. Priority
Every process has its own priority. The process with the highest priority among the processes gets the CPU first. This is
also stored on the process control block.
5. General Purpose Registers
Every process has its own set of registers which are used to hold the data which is generated during the execution of the
process.
6. List of open files
During the Execution, Every process uses some files which need to be present in the main memory. OS also maintains a
list of open files in the PCB.
7. List of open devices
The OS also maintains the list of all open devices which are used during the execution of the process.
Process States
State Diagram

The process, from its creation to completion, passes through various states. The minimum number of states is five.
The names of the states are not standardized although the process may be in one of the following states during
execution.
1. New
A program which is going to be picked up by the OS into the main memory is called a new process.
2. Ready
Whenever a process is created, it directly enters the ready state, in which it waits for the CPU to be assigned. The OS picks new processes from secondary memory and puts them in main memory.
The processes which are ready for the execution and reside in the main memory are called ready state processes. There
can be many processes present in the ready state.
3. Running
One of the processes from the ready state will be chosen by the OS depending upon the scheduling algorithm. Hence, if
we have only one CPU in our system, the number of running processes for a particular time will always be one. If we
have n processors in the system then we can have n processes running simultaneously.

4. Block or wait
From the Running state, a process can make the transition to the block or wait state depending upon the scheduling
algorithm or the intrinsic behavior of the process.
When a process waits for a certain resource to be assigned or for input from the user, the OS moves this process to the block or wait state and assigns the CPU to other processes.
5. Completion or termination
When a process finishes its execution, it comes to the termination state. All the context of the process (Process Control Block) will also be deleted, and the process will be terminated by the operating system.
Operations on the Process
1. Creation
Once the process is created, it will be ready and come into the ready queue (main memory) and will be ready for the
execution.
2. Scheduling
Out of the many processes present in the ready queue, the operating system chooses one process and starts executing it. Selecting the process which is to be executed next is known as scheduling.
3. Execution
Once the process is scheduled for execution, the processor starts executing it. The process may move to the blocked or wait state during execution; in that case the processor starts executing other processes.
4. Deletion/killing
Once the purpose of the process gets over then the OS will kill the process. The Context of the process (PCB) will be
deleted and the process gets terminated by the Operating system.
Process Scheduling in OS (Operating System)
Operating system uses various schedulers for the process scheduling described below.
1. Long term scheduler
Long term scheduler is also known as job scheduler. It chooses the processes from the pool (secondary memory) and
keeps them in the ready queue maintained in the primary memory.
Long Term scheduler mainly controls the degree of Multiprogramming. The purpose of long term scheduler is to choose
a perfect mix of IO bound and CPU bound processes among the jobs present in the pool.
If the job scheduler chooses more IO bound processes then all of the jobs may reside in the blocked state all the time
and the CPU will remain idle most of the time. This will reduce the degree of Multiprogramming. Therefore, the Job of
long term scheduler is very critical and may affect the system for a very long time.
2. Short term scheduler
The short term scheduler is also known as the CPU scheduler. It selects one of the jobs from the ready queue and dispatches it to the CPU for execution.
A scheduling algorithm is used to select which job is going to be dispatched for execution. The job of the short term scheduler can be very critical: if it selects a job whose CPU burst time is very high, then all the jobs after it will have to wait in the ready queue for a very long time.
This problem is called starvation which may arise if the short term scheduler makes some mistakes while selecting the
job.
3. Medium term scheduler
The medium term scheduler takes care of the swapped-out processes. If a running process needs some I/O time to complete, its state must be changed from running to waiting.
Medium term scheduler is used for this purpose. It removes the process from the running state to make room for the
other processes. Such processes are the swapped out processes and this procedure is called swapping. The medium
term scheduler is responsible for suspending and resuming the processes.
It reduces the degree of multiprogramming. The swapping is necessary to have a perfect mix of processes in the ready
queue.
Process Queues
The Operating system manages various types of queues for each of the process states. The PCB related to the process is
also stored in the queue of the same state. If the Process is moved from one state to another state then its PCB is also
unlinked from the corresponding queue and added to the other state queue in which the transition is made.

There are the following queues maintained by the Operating system.


1. Job Queue
Initially, all the processes are stored in the job queue. It is maintained in secondary memory. The long term scheduler (job scheduler) picks some of the jobs and puts them in primary memory.
2. Ready Queue
The ready queue is maintained in primary memory. The short term scheduler picks a job from the ready queue and dispatches it to the CPU for execution.
3. Waiting Queue
When a process needs some I/O operation in order to complete its execution, the OS changes the state of the process from running to waiting. The context (PCB) associated with the process is stored in the waiting queue and will be used by the processor when the process finishes its I/O.
CPU Scheduling Criteria:

1. Arrival Time
The time at which the process enters into the ready queue is called the arrival time.
2. Burst Time
The total amount of time required by the CPU to execute the whole process is called the burst time. This does not include the waiting time. Since it is difficult to estimate the execution time of a process before executing it, scheduling algorithms based purely on burst time are hard to implement in practice.
3. Completion Time
The Time at which the process enters into the completion state or the time at which the process completes its
execution, is called completion time.
4. Turnaround time
The total amount of time spent by the process from its arrival to its completion, is called Turnaround time.
5. Waiting Time
The Total amount of time for which the process waits for the CPU to be assigned is called waiting time.
6. Response Time
The difference between the arrival time and the time at which the process first gets the CPU is called Response Time.
Process Table and Process Control Block (PCB)

While creating a process the operating system performs several operations. To identify the processes, it assigns a
process identification number (PID) to each process. As the operating system supports multi-programming, it needs to
keep track of all the processes. For this task, the process control block (PCB) is used to track the process’s execution
status. Each block of memory contains information about the process state, program counter, stack pointer, status of
opened files, scheduling algorithms, etc. All this information is required and must be saved when the process is switched
from one state to another. When the process makes a transition from one state to another, the operating system must
update information in the process’s PCB. A process control block (PCB) contains information about the process, i.e.
registers, quantum, priority, etc. The process table is an array of PCBs; that means it logically contains a PCB for each of the current processes in the system.

 Pointer – It is a stack pointer which is required to be saved when the process is switched from one state to another
to retain the current position of the process.
 Process state – It stores the respective state of the process.
 Process number – Every process is assigned with a unique id known as process ID or PID which stores the process
identifier.
 Program counter – It stores the counter which contains the address of the next instruction that is to be executed for
the process.
 Register – These are the CPU registers, which include the accumulator, base and index registers, and general-purpose registers.
 Memory limits – This field contains the information about memory management system used by operating system.
This may include the page tables, segment tables etc.
 Open files list – This information includes the list of files opened for a process.
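To make these fields concrete, here is an illustrative C++ sketch of a PCB as a plain struct; every field name is hypothetical, and a real kernel's PCB (for example, Linux's task_struct) contains far more:

#include <cstdint>
#include <cstdio>
#include <vector>

// Hypothetical states matching the five process states described earlier.
enum class ProcState { New, Ready, Running, Waiting, Terminated };

// Illustrative PCB layout; the field names are made up for this sketch.
struct PCB {
    int              pid;             // unique process identifier
    ProcState        state;           // current process state
    std::uint64_t    programCounter;  // next instruction to execute on resume
    std::uint64_t    stackPointer;    // saved stack position
    int              priority;        // scheduling priority
    std::uint64_t    registers[16];   // saved general-purpose registers
    std::vector<int> openFiles;       // descriptors of the open files
};

int main() {
    PCB p{101, ProcState::New, 0, 0, 5, {}, {0, 1, 2}};
    std::printf("created PCB for pid %d\n", p.pid);
}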

Inter Process Communication:


A process can be of two types:
 Independent process.
 Co-operating process.
An independent process is not affected by the execution of other processes while a co-operating process can be affected
by other executing processes. Though one can think that those processes, which are running independently, will execute
very efficiently, in reality, there are many situations when co-operative nature can be utilized for increasing
computational speed, convenience, and modularity. Inter-process communication (IPC) is a mechanism that allows
processes to communicate with each other and synchronize their actions. The communication between these processes
can be seen as a method of co-operation between them. Processes can communicate with each other through both:

1. Shared Memory
2. Message passing
Figure 1 below shows a basic structure of communication between processes via the shared memory method and via
the message passing method.
An operating system can implement both methods of communication. First, we will discuss the shared memory methods
of communication and then message passing. Communication between processes using shared memory requires
processes to share some variable, and it completely depends on how the programmer will implement it. One way of
communication using shared memory can be imagined like this: Suppose process1 and process2 are executing
simultaneously, and they share some resources or use some information from another process. Process1 generates
information about certain computations or resources being used and keeps it as a record in shared memory. When
process2 needs to use the shared information, it will check in the record stored in shared memory and take note of the
information generated by process1 and act accordingly. Processes can use shared memory for extracting information as

a record from another process as well as for delivering any specific information to other processes.

i) Shared Memory Method


Ex: Producer-Consumer problem
There are two processes: Producer and Consumer. The producer produces some items and the Consumer consumes that
item. The two processes share a common space or memory location known as a buffer where the item produced by the
Producer is stored and from which the Consumer consumes the item if needed. There are two versions of this problem:
the first one is known as the unbounded buffer problem in which the Producer can keep on producing items and there is
no limit on the size of the buffer, the second one is known as the bounded buffer problem in which the Producer can
produce up to a certain number of items before it starts waiting for Consumer to consume it. We will discuss the
bounded buffer problem. First, the Producer and the Consumer will share some common memory, then the producer
will start producing items. If the total produced item is equal to the size of the buffer, the producer will wait to get it
consumed by the Consumer. Similarly, the consumer will first check for the availability of the item. If no item is available,
the Consumer will wait for the Producer to produce it. If there are items available, Consumer will consume them. The
pseudo-code to demonstrate is provided below:
Shared Data between the two Processes
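A minimal C++ sketch of the bounded-buffer scheme, using two threads to stand in for the two processes; the free_index/full_index atomics, the mutex, and sleep_for match the note that follows, while the buffer size, item counts, and delays are arbitrary choices for illustration:

#include <atomic>
#include <chrono>
#include <iostream>
#include <mutex>
#include <thread>

const int BUFFER_SIZE = 5;
int buffer[BUFFER_SIZE];
std::atomic<int> free_index(0);  // next slot the producer writes
std::atomic<int> full_index(0);  // next slot the consumer reads
std::atomic<int> count(0);       // number of filled slots
std::mutex buffer_mutex;         // protects the shared buffer

void producer() {
    for (int item = 1; item <= 10; ++item) {
        while (count.load() == BUFFER_SIZE)   // buffer full: producer waits
            std::this_thread::yield();
        {
            std::lock_guard<std::mutex> lock(buffer_mutex);
            buffer[free_index] = item;
            free_index = (free_index + 1) % BUFFER_SIZE;
        }
        count++;
        std::this_thread::sleep_for(std::chrono::milliseconds(50));  // simulate production
    }
}

void consumer() {
    for (int i = 0; i < 10; ++i) {
        while (count.load() == 0)             // buffer empty: consumer waits
            std::this_thread::yield();
        int item;
        {
            std::lock_guard<std::mutex> lock(buffer_mutex);
            item = buffer[full_index];
            full_index = (full_index + 1) % BUFFER_SIZE;
        }
        count--;
        std::cout << "consumed " << item << '\n';
        std::this_thread::sleep_for(std::chrono::milliseconds(80));  // simulate consumption
    }
}

int main() {
    std::thread p(producer), c(consumer);
    p.join();
    c.join();
}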
Note that the atomic class is used to make sure that the shared variables free_index and full_index are updated
atomically. The mutex is used to protect the critical section where the shared buffer is accessed. The sleep_for
function is used to simulate the production and consumption of items.
ii) Messaging Passing Method
Now, we will start our discussion of communication between processes via message passing. In this method,
processes communicate with each other without using any kind of shared memory. If two processes p1 and p2 want
to communicate with each other, they proceed as follows:

 Establish a communication link (if a link already exists, no need to establish it again.)
 Start exchanging messages using basic primitives.
We need at least two primitives:
– send(message, destination) or send(message)
– receive(message, host) or receive(message)
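A minimal sketch of these two primitives, using a POSIX pipe as the communication link between a parent and a child process (an assumption made for illustration; real message-passing APIs differ):

#include <cstdio>
#include <sys/wait.h>
#include <unistd.h>

int main() {
    int link[2];
    if (pipe(link) != 0) { std::perror("pipe"); return 1; }

    if (fork() == 0) {                      // child plays the sender
        close(link[0]);                     // sender does not read
        const char msg[] = "hello from p1";
        write(link[1], msg, sizeof(msg));   // send(message)
        return 0;
    }
    close(link[1]);                           // receiver does not write
    char buf[64];
    if (read(link[0], buf, sizeof(buf)) > 0)  // receive(message): blocks
        std::printf("received: %s\n", buf);
    wait(nullptr);                            // reap the child
    return 0;
}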

Message Passing through Communication Link.


Direct and Indirect Communication link
Now, we will discuss the methods of implementing communication links. While implementing the link, there are some questions that need to be kept in mind:

1. How are links established?


2. Can a link be associated with more than two processes?
3. How many links can there be between every pair of communicating processes?
4. What is the capacity of a link? Is the size of a message that the link can accommodate fixed or variable?
5. Is a link unidirectional or bi-directional?
A link has some capacity that determines the number of messages that can reside in it temporarily for which every
link has a queue associated with it which can be of zero capacity, bounded capacity, or unbounded capacity. In zero
capacity, the sender waits until the receiver informs the sender that it has received the message. In non-zero capacity
cases, a process does not know whether a message has been received or not after the send operation. For this, the
sender must communicate with the receiver explicitly. Implementation of the link depends on the situation; it can be either a direct communication link or an indirect communication link.
Direct communication links are implemented when the processes use a specific process identifier for the communication, but it is hard to identify the sender ahead of time. A print server, which cannot know in advance which client will send the next request, is an example.
Indirect communication is done via a shared mailbox (port), which consists of a queue of messages. The sender places messages in the mailbox and the receiver picks them up.

Message Passing through Exchanging the Messages.


Synchronous and Asynchronous Message Passing:
A process that is blocked is one that is waiting for some event, such as a resource becoming available or the
completion of an I/O operation. IPC is possible between the processes on same computer as well as on the processes
running on different computer i.e. in networked/distributed system. In both cases, the process may or may not be
blocked while sending a message or attempting to receive a message so message passing may be blocking or non-
blocking. Blocking is considered synchronous: a blocking send means the sender is blocked until the message is received by the receiver, and a blocking receive has the receiver block until a message is available. Non-blocking is considered asynchronous: a non-blocking send has the sender send the message and continue, and a non-blocking receive has the receiver receive either a valid message or null. After careful analysis, we can conclude that it is more natural for a sender to be non-blocking after message passing, as it may need to send messages to different processes; however, the sender expects an acknowledgment from the receiver in case the send fails. Similarly, it is more natural for a receiver to block after issuing a receive, as the information from the received message may be needed for further execution. At the same time, if sends keep failing, the receiver may have to wait indefinitely.
Threads:
Within a program, a Thread is a separate execution path. It is a lightweight process that the operating system can
schedule and run concurrently with other threads. The operating system creates and manages threads, and they
share the same memory and resources as the program that created them. This enables multiple threads to
collaborate and work efficiently within a single program.
A thread is a single sequence stream within a process. Threads are also called lightweight processes as they possess
some of the properties of processes. Each thread belongs to exactly one process. In an operating system that supports
multithreading, the process can consist of many threads. But threads can run truly in parallel only if the machine has more than one CPU; otherwise the threads must context-switch on the single CPU.
Why Do We Need Thread?
 Threads run in parallel improving the application performance. Each such thread has its own CPU state and stack,
but they share the address space of the process and the environment.
 Threads can share common data so they do not need to use interprocess communication. Like the processes,
threads also have states like ready, executing, blocked, etc.
 Priority can be assigned to the threads just like the process, and the highest priority thread is scheduled first.
 Each thread has its own Thread Control Block (TCB) . Like the process, a context switch occurs for the thread, and
register contents are saved in (TCB). As threads share the same address space and resources, synchronization is
also required for the various activities of the thread.
Why Multi-Threading?
A thread is also known as a lightweight process. The idea is to achieve parallelism by dividing a process into multiple
threads. For example, in a browser, multiple tabs can be different threads. MS Word uses multiple threads: one
thread to format the text, another thread to process inputs, etc. More advantages of multithreading are discussed
below.
Multithreading is a technique used in operating systems to improve the performance and responsiveness of computer
systems. Multithreading allows multiple threads (i.e., lightweight processes) to share the same resources of a single
process, such as the CPU, memory, and I/O devices.
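A small sketch of the idea using C++'s standard thread library (the worker function and data below are invented for illustration); all four threads share the process's address space, so they can write into the same vector without any interprocess communication:

#include <iostream>
#include <thread>
#include <vector>

// Each thread runs this function; they share the process's memory, so
// `results` is visible to all of them.
void worker(int id, std::vector<int>& results) {
    results[id] = id * id;   // each thread writes only its own slot
}

int main() {
    std::vector<int> results(4, 0);
    std::vector<std::thread> pool;
    for (int i = 0; i < 4; ++i)
        pool.emplace_back(worker, i, std::ref(results));
    for (auto& t : pool) t.join();      // wait for all threads to finish
    for (int r : results) std::cout << r << ' ';
    std::cout << '\n';
}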
Difference Between Process and Thread
The primary difference is that threads within the same process run in a shared memory space, while processes run in
separate memory spaces. Threads are not independent of one another like processes are, and as a result, threads
share with other threads their code section, data section, and OS resources (like open files and signals). But, like a
process, a thread has its own program counter (PC), register set, and stack space.
Advantages of Thread
 Responsiveness: If the process is divided into multiple threads and one thread completes its execution, its output can be returned immediately.
 Faster context switch: Context switch time between threads is lower compared to the process context switch.
Process context switching requires more overhead from the CPU.
 Effective utilization of multiprocessor system: If we have multiple threads in a single process, then we can
schedule multiple threads on multiple processors. This will make process execution faster.
 Resource sharing: Resources like code, data, and files can be shared among all threads within a process. Note:
Stacks and registers can’t be shared among the threads. Each thread has its own stack and registers.
 Communication: Communication between multiple threads is easier, as the threads share a common address space, while between two processes we have to follow specific communication techniques.
 Enhanced throughput of the system: If a process is divided into multiple threads, and each thread function is
considered as one job, then the number of jobs completed per unit of time is increased, thus increasing the
throughput of the system.
Types of Threads
Threads are of two types. These are described below.
 User Level Thread
 Kernel Level Thread

User Level Thread and Kernel Level Thread

User Level Threads

A user-level thread is a type of thread that is not created using system calls. The kernel plays no part in the management of user-level threads; they can be implemented entirely by the user. To the kernel, a process built on user-level threads appears to be a single-threaded process and is managed as such. Let's look at the advantages and disadvantages of user-level threads.
Advantages of User-Level Threads
 Implementation of the User-Level Thread is easier than Kernel Level Thread.
 Context Switch Time is less in User Level Thread.
 User-Level Thread is more efficient than Kernel-Level Thread.
 Because of the presence of only Program Counter, Register Set, and Stack Space, it has a simple representation.
Disadvantages of User-Level Threads
 There is a lack of coordination between Thread and Kernel.
 In case of a page fault, the whole process can be blocked.

Kernel Level Threads

A kernel-level thread is a type of thread that the operating system can recognize directly. Kernel-level threads have their own thread table, where the kernel keeps track of them. Kernel-level threads have somewhat longer context-switching times. The kernel helps in the management of threads.
The concept of multi-threading needs proper understanding of these two terms – a process and a thread. A process
is a program being executed. A process can be further divided into independent units known as threads. A thread is
like a small light-weight process within a process. Or we can say a collection of threads is what is known as a process.

Applications – Threading is used widely in almost every field, most visibly on the internet in transaction processing of every type: recharges, online transfers, banking, and so on. Threading divides code into small, lightweight parts that put less burden on CPU and memory, so the work can be carried out easily and the goal achieved in the desired field. The concept of threading was developed to keep up with fast, constant changes in technology and to enhance the capabilities of programming.
CPU Scheduling
In uniprogramming systems like MS-DOS, when a process waits for any I/O operation to be done, the CPU remains idle. This is an overhead since it wastes time and causes the problem of starvation. However, in multiprogramming systems, the CPU doesn't remain idle during the waiting time of a process; it starts executing other processes.
The operating system has to decide which process the CPU will be given to.
In Multiprogramming systems, the Operating system schedules the processes on the CPU to have the maximum
utilization of it, and this procedure is called CPU scheduling. The operating system uses various scheduling algorithms to schedule the processes.
Why do we need Scheduling?
In multiprogramming, if the long term scheduler picks more I/O-bound processes, then most of the time the CPU remains idle. The task of the operating system is to optimize the utilization of resources.
If most of the running processes change their state from running to waiting, then there may always be a possibility of deadlock in the system. Hence, to reduce this overhead, the OS needs to schedule the jobs to get the optimal utilization of the CPU and to avoid the possibility of deadlock.
What are the different types of CPU Scheduling Algorithms?
There are mainly two types of scheduling methods:
 Preemptive Scheduling: Preemptive scheduling is used when a process switches from running state to ready state
or from the waiting state to the ready state.
 Non-Preemptive Scheduling: Non-Preemptive scheduling is used when a process terminates , or when a process
switches from running state to waiting state.

Different types of CPU Scheduling Algorithms


Scheduling Algorithms in OS (Operating System)
There are various algorithms which are used by the Operating System to schedule the processes on the processor in an
efficient way.
The Purpose of a Scheduling algorithm
1. Maximum CPU utilization
2. Fair allocation of CPU
3. Maximum throughput
4. Minimum turnaround time
5. Minimum waiting time
6. Minimum response time

There are the following algorithms which can be used to schedule the jobs.
1. First Come First Serve
It is the simplest algorithm to implement. The process with the minimal arrival time will get the CPU first. The earlier the arrival time, the sooner the process gets the CPU. It is a non-preemptive type of scheduling.
2. Shortest Job First
The job with the shortest burst time will get the CPU first. The shorter the burst time, the sooner the process gets the CPU. It is a non-preemptive type of scheduling.
3. Shortest remaining time first
It is the preemptive form of SJF. In this algorithm, the OS schedules the Job according to the remaining time of the
execution.
4. Round Robin
In the Round Robin scheduling algorithm, the OS defines a time quantum (slice). All the processes get executed in a cyclic way. Each process gets the CPU for a small amount of time (the time quantum) and then goes back to the ready queue to wait for its next turn. It is a preemptive type of scheduling.
5. Priority based scheduling
In this algorithm, a priority is assigned to each process. The higher the priority, the sooner the process gets the CPU. If two processes have the same priority, they are scheduled according to their arrival time.

1. First Come First Serve:

FCFS is considered the simplest of all operating system scheduling algorithms. The first come, first served scheduling algorithm states that the process that requests the CPU first is allocated the CPU first; it is implemented using a FIFO queue.
Characteristics of FCFS:
 FCFS is a non-preemptive CPU scheduling algorithm.
 Tasks are always executed on a First-come, First-serve concept.
 FCFS is easy to implement and use.
 This algorithm is not much efficient in performance, and the wait time is quite high.
Advantages of FCFS:
 Easy to implement
 First come, first serve method
Disadvantages of FCFS:
 FCFS suffers from Convoy effect.
 The average waiting time is much higher than the other algorithms.
 Because FCFS is so simple, it is not very efficient.
First-Come, First-Served (FCFS) Scheduling
Process Burst Time
P1 24
P2 3
P3 3
Suppose that the processes arrive in the order: P1 , P2 , P3
The Gantt Chart for the schedule is:

| P1                     | P2 | P3 |
0                       24   27   30
Waiting time for P1 = 0; P2 = 24; P3 = 27
Average waiting time: (0 + 24 + 27)/3 = 17
Suppose that the processes arrive in the order
P2 , P3 , P1
The Gantt chart for the schedule is:
| P2 | P3 | P1                     |
0    3    6                        30

Waiting time for P1 = 6; P2 = 0; P3 = 3
Average waiting time: (6 + 0 + 3)/3 = 3
Much better than the previous case
Convoy effect: short processes stuck waiting behind a long process

Example 2:

S. No  Process ID  Process Name  Arrival Time  Burst Time
1      P1          A             0             9
2      P2          B             1             3
3      P3          C             1             2
4      P4          D             1             4
5      P5          E             2             3
6      P6          F             3             2

Now, let us solve this problem with the help of the Scheduling Algorithm named First Come First Serve.
Gantt chart for the above Example 2 is:

| P1 | P2 | P3 | P4 | P5 | P6 |
0    9   12   14   18   21   23

Turn Around Time = Completion Time - Arrival Time


Waiting Time = Turn Around Time - Burst Time

Solution to the Above Question Example 2

S. No  Process ID  Process Name  Arrival Time  Burst Time  Completion Time  Turn Around Time  Waiting Time
1      P1          A             0             9           9                9                 0
2      P2          B             1             3           12               11                8
3      P3          C             1             2           14               13                11
4      P4          D             1             4           18               17                13
5      P5          E             2             3           21               19                16
6      P6          F             3             2           23               20                18

The Average Completion Time is:


Average CT = ( 9 + 12 + 14 + 18 + 21 + 23 ) / 6
Average CT = 97 / 6
Average CT = 16.16667
The Average Waiting Time is:

Average WT = ( 0 + 8 + 11 + 13 + 16 + 18 ) /6
Average WT = 66 / 6
Average WT = 11
The Average Turn Around Time is:
Average TAT = ( 9 + 11 + 13 + 17 + 19 +20 ) / 6
Average TAT = 89 / 6
Average TAT = 14.83334
This is how the FCFS is solved.
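The table above can also be reproduced mechanically. Here is a short C++ sketch that replays the FCFS schedule for Example 2 (it assumes the six processes are served strictly in arrival order):

#include <cstdio>

int main() {
    int arrival[] = {0, 1, 1, 1, 2, 3};
    int burst[]   = {9, 3, 2, 4, 3, 2};
    int n = 6, time = 0;
    double totalTAT = 0, totalWT = 0;
    for (int i = 0; i < n; ++i) {
        if (time < arrival[i]) time = arrival[i];  // CPU idles until arrival
        time += burst[i];                          // completion time
        int tat = time - arrival[i];               // turnaround = CT - AT
        int wt  = tat - burst[i];                  // waiting = TAT - BT
        std::printf("P%d: CT=%d TAT=%d WT=%d\n", i + 1, time, tat, wt);
        totalTAT += tat;
        totalWT  += wt;
    }
    std::printf("Avg TAT=%.2f  Avg WT=%.2f\n", totalTAT / n, totalWT / n);
}

Running it reproduces the completion, turnaround, and waiting times in the table, along with the averages of 14.83 and 11.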
2. Shortest Job First (SJF) Scheduling
Till now, we were scheduling processes according to their arrival time (in FCFS scheduling). The SJF scheduling
algorithm, however, schedules processes according to their burst time. In SJF scheduling, the process with the lowest
burst time, among the list of available processes in the ready queue, is scheduled next. However, it is very difficult
to predict the burst time a process will need, so this algorithm is difficult to implement exactly in a real system.
Advantages of SJF
1. Maximum throughput
2. Minimum average waiting and turnaround time

Disadvantages of SJF
1. May suffer from the problem of starvation
2. It is not directly implementable, because the exact burst time of a process cannot be known in advance


There are different techniques by which the CPU burst time of a process can be estimated. We will discuss
them later in detail.
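As a preview, one standard technique (described in common OS texts) is exponential averaging of the lengths of previous CPU bursts: the next prediction is tau(n+1) = alpha * t(n) + (1 - alpha) * tau(n), where t(n) is the most recently measured burst. The Python sketch below is illustrative only; the initial guess and the smoothing factor alpha are assumed values.

# Illustrative sketch: predict the next CPU burst by exponential averaging,
# tau_next = alpha * last_burst + (1 - alpha) * tau_prev.
def predict_next_burst(last_burst, tau_prev, alpha=0.5):
    return alpha * last_burst + (1 - alpha) * tau_prev

tau = 10.0                         # initial guess (assumed value)
for observed in [6, 4, 6, 4]:      # hypothetical measured burst lengths
    tau = predict_next_burst(observed, tau)
    print(tau)                     # 8.0, 6.0, 6.0, 5.0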
Example
In the following example, there are five jobs named P1, P2, P3, P4 and P5. Their arrival time and burst time are given
in the table below.

PID | Arrival Time | Burst Time | Completion Time | Turn Around Time | Waiting Time
1   | 1            | 7          | 8               | 7                | 0
2   | 3            | 3          | 13              | 10               | 7
3   | 6            | 2          | 10              | 4                | 2
4   | 7            | 10         | 31              | 24               | 14
5   | 9            | 8          | 21              | 12               | 4

Since no process arrives at time 0, there will be an empty slot in the Gantt chart from time 0 to 1 (the time at
which the first process arrives).
According to the algorithm, the OS schedules the process with the lowest burst time among the available
processes in the ready queue.
Till time 1, there is only one process in the ready queue (P1), so the scheduler assigns it to the processor
regardless of its burst time.
P1 executes until time 8. By then, three more processes have arrived in the ready queue, so the scheduler
chooses the process with the lowest burst time among them.
Among the processes given in the table, P3 is executed next, since it has the lowest burst time among all the
available processes.
The procedure continues in this way in the shortest job first (SJF) scheduling algorithm.

Avg Waiting Time = (0 + 7 + 2 + 14 + 4) / 5 = 27/5 = 5.4
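The same schedule can be replayed with a minimal non-preemptive SJF simulation in Python (an illustrative sketch, not part of the original notes), using the arrival and burst times from the table above: at each step it picks the shortest burst among the processes that have already arrived.

# Minimal non-preemptive SJF sketch in Python (illustrative, not from the notes).
# Each process is (pid, arrival_time, burst_time), from the table above.
procs = [(1, 1, 7), (2, 3, 3), (3, 6, 2), (4, 7, 10), (5, 9, 8)]

clock, done = 0, []
remaining = list(procs)
while remaining:
    ready = [p for p in remaining if p[1] <= clock]
    if not ready:                              # CPU idles until the next arrival
        clock = min(p[1] for p in remaining)
        continue
    pid, arrival, burst = min(ready, key=lambda p: p[2])   # lowest burst time wins
    clock += burst                             # runs to completion (non-preemptive)
    done.append((pid, clock, clock - arrival, clock - arrival - burst))
    remaining.remove((pid, arrival, burst))

for pid, ct, tat, wt in done:
    print(pid, ct, tat, wt)
print("Avg WT =", sum(d[3] for d in done) / len(done))     # 27/5 = 5.4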


3. Shortest Remaining Time First (SRTF) Scheduling Algorithm
This algorithm is the preemptive version of SJF scheduling. In SRTF, the execution of a process can be stopped after
a certain amount of time. At the arrival of every process, the short-term scheduler picks, from among the available
processes and the running process, the one with the least remaining burst time.
Once all the processes are available in the ready queue, no further preemption is done and the algorithm works as SJF
scheduling. The context of a process is saved in its Process Control Block when the process is removed from
execution, and this PCB is accessed again on the next execution of the process.
Refer to the examples in the notes for all scheduling algorithms.
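Below is a minimal Python sketch of SRTF (illustrative; the four processes and their arrival/burst times are hypothetical, not from the notes). For clarity it advances time one unit per loop and re-selects the process with the least remaining burst at every tick; a real scheduler would only re-evaluate on arrivals and completions.

# Minimal SRTF sketch in Python (illustrative; the processes are hypothetical).
procs = {"P1": (0, 8), "P2": (1, 4), "P3": (2, 9), "P4": (3, 5)}   # pid: (AT, BT)
remaining = {pid: bt for pid, (at, bt) in procs.items()}
clock, finished = 0, {}

while remaining:
    ready = {pid: r for pid, r in remaining.items() if procs[pid][0] <= clock}
    if not ready:
        clock += 1
        continue
    pid = min(ready, key=ready.get)    # least remaining burst time runs next
    remaining[pid] -= 1                # run for one time unit
    clock += 1
    if remaining[pid] == 0:
        del remaining[pid]
        finished[pid] = clock          # record completion time

for pid, ct in sorted(finished.items()):
    at, bt = procs[pid]
    print(pid, "CT =", ct, "TAT =", ct - at, "WT =", ct - at - bt)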
4.Round Robin (RR)

 Each process gets a small unit of CPU time (time quantum), usually 10-100 milliseconds. After this time has
elapsed, the process is preempted and added to the end of the ready queue.
 If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU
time in chunks of at most q time units at once. No process waits more than (n-1)q time units.
 Performance:
 q large ⇒ RR degenerates to FIFO (FCFS)
 q small ⇒ overhead is too high; q must be large relative to the context-switch time
Example of RR with Time Quantum = 4
Process Burst Time
P1 24
P2 3
P3 3

The Gantt chart is:

| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
0    4    7    10   14   18   22   26   30
Typically, RR gives higher average turnaround time than SJF, but better response time.
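The Round Robin example above can be replayed with a simple queue-based simulation in Python (an illustrative sketch, not part of the original notes): a process runs for at most one quantum and, if unfinished, rejoins the tail of the ready queue.

from collections import deque

# Minimal Round Robin sketch in Python (illustrative), replaying the example
# above: time quantum = 4, and P1/P2/P3 all arrive at time 0.
quantum = 4
queue = deque([("P1", 24), ("P2", 3), ("P3", 3)])
clock = 0

while queue:
    pid, remaining = queue.popleft()
    run = min(quantum, remaining)             # run for at most one quantum
    clock += run
    if remaining - run > 0:
        queue.append((pid, remaining - run))  # rejoin the tail of the ready queue
    else:
        print(pid, "completes at", clock)     # P2 at 7, P3 at 10, P1 at 30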

5. Priority Scheduling:

The preemptive priority CPU scheduling algorithm is a pre-emptive method of CPU scheduling that works based on
the priority of a process: the scheduler ensures that the most important (highest-priority) process runs first. In the
case of a conflict, i.e., when more than one process has the same priority, the tie is broken using the FCFS (First
Come First Serve) algorithm.
Characteristics of Priority Scheduling:
 Schedules tasks based on priority.
 When higher-priority work arrives while a lower-priority task is executing, the higher-priority work takes the
place of the lower-priority one, and
 the latter is suspended until the higher-priority execution is complete.
 By the convention used here, the lower the number assigned, the higher the priority level of the process.
Advantages of Priority Scheduling:
 The average waiting time is less than FCFS
 Less complex
Disadvantages of Priority Scheduling:
 The most common demerit of the preemptive priority CPU scheduling algorithm is the starvation problem: a
low-priority process may have to wait an arbitrarily long time to be scheduled onto the CPU while higher-priority
processes keep arriving.
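Below is a minimal Python sketch of preemptive priority scheduling (illustrative; the three processes, priorities, and burst times are hypothetical). It uses a heap keyed by (priority, arrival time), so a lower number means higher priority and ties fall back to FCFS order, as described above.

import heapq

# Minimal preemptive priority scheduling sketch (illustrative; data hypothetical).
procs = [(0, 3, "P1", 5), (1, 1, "P2", 4), (2, 2, "P3", 2)]   # (AT, priority, pid, BT)
procs.sort()                                  # order by arrival time
remaining = {pid: bt for _, _, pid, bt in procs}
clock, i, ready = 0, 0, []

while ready or i < len(procs):
    while i < len(procs) and procs[i][0] <= clock:
        at, prio, pid, bt = procs[i]
        heapq.heappush(ready, (prio, at, pid))   # a new arrival may preempt
        i += 1
    if not ready:
        clock = procs[i][0]                   # jump ahead to the next arrival
        continue
    prio, at, pid = ready[0]                  # highest-priority ready process
    remaining[pid] -= 1                       # run for one time unit
    clock += 1
    if remaining[pid] == 0:
        heapq.heappop(ready)
        print(pid, "completes at", clock)     # P2 at 5, P3 at 7, P1 at 11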

6. Multilevel Queue
 Ready queue is partitioned into separate queues:
foreground (interactive)
background (batch)
 Each queue has its own scheduling algorithm
 foreground – RR
 background – FCFS
 Scheduling must be done between the queues
 Fixed priority scheduling (i.e., serve all from foreground, then from background). Possibility of starvation.
 Time slice – each queue gets a certain amount of CPU time which it can schedule amongst its processes; i.e.,
80% to foreground in RR
20% to background in FCFS

Multilevel Queue Scheduling
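A minimal Python sketch of the fixed-priority multilevel queue policy described above (illustrative; the process names and burst times are hypothetical): the foreground queue is served with Round Robin and the background queue with FCFS, and background work runs only when the foreground queue is empty, which is exactly how starvation can arise.

from collections import deque

# Illustrative fixed-priority multilevel queue sketch (data hypothetical):
# foreground uses Round Robin (q = 2), background uses FCFS.
foreground = deque([("FG1", 3), ("FG2", 5)])   # interactive processes
background = deque([("BG1", 4)])               # batch processes
quantum, clock = 2, 0

while foreground or background:
    if foreground:                             # fixed priority: foreground first
        pid, rem = foreground.popleft()
        run = min(quantum, rem)
        clock += run
        if rem - run > 0:
            foreground.append((pid, rem - run))
        else:
            print(pid, "done at", clock)
    else:
        pid, rem = background.popleft()        # FCFS: runs to completion
        clock += rem
        print(pid, "done at", clock)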

Multiple-Processor Scheduling
In multiple-processor scheduling, multiple CPUs are available and load sharing becomes possible. However,
multiple-processor scheduling is more complex than single-processor scheduling. When the processors are
identical (HOMOGENEOUS) in terms of their functionality, any available processor can be used to run any
process in the queue.
Why is multiple-processor scheduling important?
Multiple-processor scheduling is important because it enables a computer system to perform multiple tasks
simultaneously, which can greatly improve overall system performance and efficiency.
How does multiple-processor scheduling work?
Multiple-processor scheduling works by dividing tasks among multiple processors in a computer system, which allows
tasks to be processed simultaneously and reduces the overall time needed to complete them.

Approaches to Multiple-Processor Scheduling –

One approach is for all scheduling decisions and I/O processing to be handled by a single processor, called the
master server, while the other processors execute only user code. This is simple and reduces the need for data
sharing; this scenario is called Asymmetric Multiprocessing. A second approach uses Symmetric Multiprocessing
(SMP), where each processor is self-scheduling. All processes may be in a common ready queue, or each
processor may have its own private queue of ready processes. Scheduling then proceeds by having the
scheduler for each processor examine the ready queue and select a process to execute.
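A minimal Python sketch of the symmetric-multiprocessing idea with a common ready queue (illustrative; the job list and CPU count are hypothetical): each job is handed to whichever identical CPU becomes free first.

import heapq

# Illustrative SMP sketch: identical (homogeneous) CPUs pull work from a
# single common ready queue in FCFS order. Jobs and CPU count are hypothetical.
jobs = [("P1", 6), ("P2", 2), ("P3", 4), ("P4", 3), ("P5", 1)]   # (pid, burst)
num_cpus = 2

cpus = [(0, cpu_id) for cpu_id in range(num_cpus)]   # (time CPU becomes free, id)
heapq.heapify(cpus)

for pid, burst in jobs:
    free_at, cpu_id = heapq.heappop(cpus)     # earliest-available processor
    finish = free_at + burst
    print(pid, "on CPU", cpu_id, "runs", free_at, "->", finish)
    heapq.heappush(cpus, (finish, cpu_id))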
