
MODULE II

PROCESS MANAGEMENT
PROCESS
 A process is basically a program in execution.
 The execution of a process must progress in a sequential fashion.
 A process is the basic unit of execution in an operating system.
 A program is a passive entity stored on a secondary storage device, while a process is an active entity that resides in main memory.
 A process is defined as an entity which represents the basic unit of work to be implemented in the system.
 When a program is loaded into memory and becomes a process, it can be divided into four sections ─ stack, heap, text and data.
 The following figure shows structure of a process inside main memory –

Stack
The process Stack contains the temporary data such as method/function parameters, return address
and local variables.
Heap
This is dynamically allocated memory to a process during its run time.
Text
This section contains the compiled program code. The current activity of the process is represented by the value of the program counter and the contents of the processor's registers.
Data
This section contains the global and static variables.

PROCESS STATE
The state of a process is defined in part by the current activity of that process. The change
the state of process during execution of process.
Each process may be in one of the following states
Start/New
This is the initial state when a process is first started/created.
Ready
The process is waiting to be assigned to a processor. Ready processes are waiting to have the processor allocated to them by the operating system so that they can run. A process may come into this state after the Start state, or after being interrupted by the scheduler so that the CPU can be assigned to some other process.
Running
Once the process has been assigned to a processor by the OS scheduler, the process state is set to
running and the processor executes its instructions.
Waiting
Process moves into the waiting state if it needs to wait for a resource, such as waiting for user
input, or waiting for a file to become available.
Terminated or Exit
Once the process finishes its execution, or it is terminated by the operating system, it is moved to
the terminated state where it waits to be removed from main memory.

Figure shows process state diagram
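The allowed transitions in the state diagram can also be sketched as a small lookup table. This is only an illustration, with hypothetical state names taken from the list above:

```python
# A minimal sketch of the process state machine described above.
# State names are hypothetical labels matching the five states listed.
VALID_TRANSITIONS = {
    "new":        {"ready"},                           # admitted by the OS
    "ready":      {"running"},                         # dispatched by the scheduler
    "running":    {"ready", "waiting", "terminated"},  # interrupt, I/O wait, or exit
    "waiting":    {"ready"},                           # awaited event completes
    "terminated": set(),                               # no further transitions
}

def can_transition(src: str, dst: str) -> bool:
    """Return True if the state diagram allows moving from src to dst."""
    return dst in VALID_TRANSITIONS.get(src, set())
```

Note, for example, that a waiting process cannot be dispatched directly: it must first return to the ready state.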

PROCESS CONTROL BLOCK


 A process control block (PCB) is a data structure maintained by the operating systems for
every process.
 Each process is represented in the operating system by a PCB.
 It is also called a task control block.
 The process control block (PCB) is used to track the process’s execution status.
 Each PCB contains information about the process state, program counter, stack pointer, status of opened files, scheduling information, etc.
The PCB includes,

 Pointer – A stack pointer, which must be saved when the process is switched from one state to another so that the current position of the process is retained.
 Process state – It stores the respective state of the process i.e. new, ready, running,
waiting or terminated.
 Process number – Every process is assigned with a unique id known as process ID or
PID which stores the process identifier.
 Program counter – It stores the counter which contains the address of the next
instruction that is to be executed for the process.
 Register – These are the CPU registers, which include the accumulator, base and index registers, and general-purpose registers.
 CPU Scheduling Information
The process priority, pointers to scheduling queues etc. is the CPU scheduling information
that is contained in the PCB. This may also include any other scheduling parameters.
 Memory Management Information
The memory management information includes the page tables or the segment tables
depending on the memory system used. It also contains the value of the base registers,
limit registers etc.
 I/O Status Information
This information includes the list of I/O devices used by the process, the list of files etc.
 Accounting information
The time limits, account numbers, amount of CPU used, process numbers etc. are all a part
of the PCB accounting information.
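The fields above can be gathered into a single record. The sketch below is a hypothetical PCB layout for illustration only, not the structure used by any real kernel (Linux, for instance, keeps this information in a C struct called task_struct):

```python
from dataclasses import dataclass, field

# A hypothetical PCB record mirroring the fields listed above.
@dataclass
class PCB:
    pid: int                                        # process number (PID)
    state: str = "new"                              # new/ready/running/waiting/terminated
    program_counter: int = 0                        # address of the next instruction
    stack_pointer: int = 0                          # saved to retain the process position
    registers: dict = field(default_factory=dict)   # saved CPU registers
    priority: int = 0                               # CPU-scheduling information
    page_table: dict = field(default_factory=dict)  # memory-management information
    open_files: list = field(default_factory=list)  # I/O status information
    cpu_time_used: float = 0.0                      # accounting information

pcb = PCB(pid=42)
pcb.state = "ready"
```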
PROCESS SCHEDULING
 The process scheduling is the activity of the process manager that handles the removal of
the running process from the CPU and the selection of another process on the basis of a
particular strategy.
 Process scheduling is an essential part of a Multiprogramming operating systems.
 Multiprogramming operating systems allow more than one process to be loaded into the
executable memory at a time.
Categories of Scheduling
There are two categories of scheduling:

1. Non-preemptive:

 Non-preemptive scheduling is used when a process terminates, or a process switches from the running state to the waiting state.
 In non-preemptive scheduling, a resource cannot be taken from a process until the process completes its execution.
 In this scheduling, once the resources (CPU cycles) are allocated to a process,
the process holds the CPU till it gets terminated or reaches a waiting state.
 Algorithms based on non-preemptive scheduling include First Come First Serve (FCFS), Shortest Job First (SJF), and Priority (non-preemptive version).

Advantages
1. It has a minimal scheduling burden.
2. It is a very simple method.
3. It uses fewer computational resources.
Disadvantages
1. It has poor response time for processes.
2. Bugs can cause a computer to freeze up.

2. Preemptive:

 Preemptive scheduling is used when a process switches from the running state to
the ready state or from the waiting state to the ready state.
 Here the OS allocates the resources to a process for a fixed amount of time.
 This switching occurs because the CPU may give priority to other processes, replacing the currently running process with one of higher priority.
 Algorithms based on preemptive scheduling include Round Robin (RR), Shortest Remaining Time First (SRTF), and Priority (preemptive version).
Advantages
1. Because a process may not monopolize the processor, it is a more reliable method.
2. Higher-priority processes can be served without waiting for ongoing tasks to complete.
3. The average response time is improved.
Disadvantages
1. It consumes more computational resources, since frequent context switches add overhead.

Difference between pre-emptive and non-preemptive scheduling

Process Scheduling Queues


 The OS maintains all Process Control Blocks (PCBs) in Process Scheduling Queues.
 The OS maintains a separate queue for each of the process states and PCBs of all processes
in the same execution state are placed in the same queue.
The Operating System maintains the following important process scheduling queues −
 Job queue − This queue keeps all the processes in the system.
 Ready queue − This queue keeps a set of all processes residing in main memory, ready
and waiting to execute. A new process is always put in this queue.
 Device queues − The processes which are blocked due to unavailability of an I/O device
constitute this queue.
Schedulers
 Schedulers are special system software which handle process scheduling in various ways.
 Their main task is to select the jobs to be submitted into the system and to decide which
process to run.
Schedulers are of three types −

1. Long-Term Scheduler
2. Short-Term Scheduler
3. Medium-Term Scheduler
1. Long-Term Scheduler
 It is also called a job scheduler.
 A long-term scheduler determines which programs are admitted to the system for
processing.
 It selects processes from the job queue and loads them into memory for execution and CPU scheduling.
 The primary objective of the job scheduler is to provide a balanced mix of jobs, such as
I/O bound and processor bound.
 It also controls the degree of multiprogramming.
 If the degree of multiprogramming is stable, then the average rate of process creation must
be equal to the average departure rate of processes leaving the system.
 On some systems, the long-term scheduler may not be available or minimal.
 Time-sharing operating systems have no long-term scheduler.
 When a process changes the state from new to ready, then there is use of long-term
scheduler.
2. Short-Term Scheduler
 It is also called as CPU scheduler.
 Its main objective is to increase system performance in accordance with the chosen set of
criteria.
 It effects the change of a process from the ready state to the running state.
 CPU scheduler selects a process among the processes that are ready to execute and allocates
CPU to one of them.
 Short-term schedulers, also known as dispatchers, make the decision of which process to
execute next.
 Short-term schedulers are faster than long-term schedulers.
3. Medium-Term Scheduler
 Medium-term scheduling is a part of swapping.
 It removes the processes from the memory.
 It reduces the degree of multiprogramming.
 The medium-term scheduler is in-charge of handling the swapped out-processes.
 A running process may become suspended if it makes an I/O request.
 A suspended process cannot make any progress towards completion.
 In this condition, to remove the process from memory and make space for other processes,
the suspended process is moved to the secondary storage. This process is called swapping,
and the process is said to be swapped out or rolled out.

Scheduling algorithms

A Process Scheduler schedules different processes to be assigned to the CPU based on particular
scheduling algorithms. They are,
1. First-Come, First-Served (FCFS) Scheduling

2. Shortest-Job-Next (SJN) Scheduling


3. Priority Scheduling
4. Round Robin (RR) Scheduling

 Arrival Time: Time at which the process arrives in the ready queue.
 Completion Time: Time at which process completes its execution.
 Burst Time: Time required by a process for CPU execution.
 Turn Around Time: Time Difference between completion time and arrival time.
Turn Around Time = Completion Time – Arrival Time
 Waiting Time (W.T): Time Difference between turn-around time and burst time.
Waiting Time = Turn Around Time – Burst Time
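The two formulas above can be expressed directly in code. A tiny sketch (the numbers used are an invented example, not from any table in this module):

```python
def turnaround_time(completion: int, arrival: int) -> int:
    # Turn Around Time = Completion Time - Arrival Time
    return completion - arrival

def waiting_time(turnaround: int, burst: int) -> int:
    # Waiting Time = Turn Around Time - Burst Time
    return turnaround - burst

# Example: a process arrives at t=2, needs 3 units of CPU, completes at t=9.
tat = turnaround_time(9, 2)   # 7
wt = waiting_time(tat, 3)     # 4
```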
Gantt chart
A Gantt chart (named after Henry Gantt) is a type of chart that shows the amount of work completed in a given period of time.
Process   Burst time
P1        24
P2        3
P3        3
Suppose that the processes arrive in the order P1, P2, P3.
The Gantt chart for the schedule is:

| P1 | P2 | P3 |
0    24   27   30

First Come First Serve (FCFS) Scheduling.


 It is the simplest CPU scheduling algorithm; it schedules processes according to their arrival times.
 The FCFS algorithm states that the process that requests the CPU first is allocated the CPU first.
 It is implemented using a FIFO queue.
 Jobs are executed on first come, first serve basis.
 Easy to understand and implement.

Advantage

 It is simple to implement
 Easy to understand
Disadvantage

 It is poor in performance, as the average waiting time is high.
 It is not suitable for time-sharing systems.

Process no   Arrival Time   Burst time   Completion Time   Turnaround time   Waiting time   Response time
P1           0              3
P2           2              1
P3           3              2
P4           3              1
P5           8              4
Gantt Chart
Gantt Chart
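One way to fill in the table above is to simulate FCFS directly. The sketch below uses the arrival/burst pairs from the exercise; the completion, turnaround, and waiting values it produces are my own computation, so verify them by hand:

```python
# A sketch of FCFS: run processes strictly in arrival order.
def fcfs(processes):
    """processes: list of (name, arrival, burst) tuples.
    Returns {name: (completion, turnaround, waiting)}."""
    schedule, clock = {}, 0
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        clock = max(clock, arrival)   # the CPU may sit idle until arrival
        clock += burst                # run the process to completion
        tat = clock - arrival         # turnaround = completion - arrival
        schedule[name] = (clock, tat, tat - burst)
    return schedule

table = [("P1", 0, 3), ("P2", 2, 1), ("P3", 3, 2), ("P4", 3, 1), ("P5", 8, 4)]
results = fcfs(table)
# e.g. P1 completes at t=3 with zero waiting; the CPU idles from t=7 to t=8
# until P5 arrives
```

Note that under FCFS the response time of each process equals its waiting time, since a process runs to completion once dispatched.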

Shortest-Job-First (SJF) Scheduling


 Shortest Job First (SJF) is a scheduling algorithm in which the processor executes first the job that has the smallest execution time.
 In the Shortest Job First algorithm, processes are scheduled according to their burst times.
 Each job is associated with an estimate of the time it needs to complete.
 This algorithm is helpful for workloads in which the completion time of a job can be estimated easily, such as batch processing.
 This algorithm can improve CPU throughput, as executing the shorter jobs first leads to a short turnaround time.

Advantages

 It is frequently used for long-term (batch) scheduling.
 This algorithm reduces the average waiting time.
 This algorithm is useful for batch-type processing.
 SJF can be considered an optimal algorithm, because it gives the minimum average waiting time.

Disadvantages

 This algorithm can cause starvation: if processes with short burst times keep arriving, longer processes are forced to wait in the queue indefinitely.
 This algorithm can lead to a long turnaround time.

 It is not an easy task to get knowledge about the length of future jobs.

Process no   Arrival Time   Burst Time   CT   TAT   WT   RT
P1           21             2
P2           3              1
P3           6              4
P4           2              3
Gantt Chart
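A non-preemptive SJF schedule can be computed with a small simulation: at every decision point, pick the shortest job among those that have already arrived. The process data below is hypothetical, not taken from the exercise table:

```python
# A sketch of non-preemptive Shortest Job First.
def sjf(processes):
    """processes: list of (name, arrival, burst). Returns {name: completion}."""
    pending = sorted(processes, key=lambda p: p[1])   # order by arrival
    clock, completion = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= clock]
        if not ready:                 # CPU idle: jump to the next arrival
            clock = pending[0][1]
            ready = [p for p in pending if p[1] <= clock]
        job = min(ready, key=lambda p: p[2])   # shortest burst wins
        clock += job[2]
        completion[job[0]] = clock
        pending.remove(job)
    return completion

jobs = [("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]
done = sjf(jobs)
# P1 runs first (only arrival at t=0); at t=7 the shortest waiting job is P3
```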

Priority Scheduling
 Priority is assigned for each process.
 Process with highest priority is executed first and so on
 Processes with same priority are executed in FCFS manner.
 Priority can be decided based on memory requirements, time requirements or any
other resource requirement.
 Priority scheduling can be either non-preemptive or preemptive.
Question: Solve the following problem using the non-preemptive priority scheduling algorithm.

Process no   Arrival Time   Burst Time   Priority   CT   TAT   WT   RT
P1           0              5            2
P2           1              3            1
P3           2              8            4
P4           3              6            3

Gantt Chart (assuming a lower priority number means higher priority):
P1 | P2 | P4 | P3
0    5    8    14    22
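The schedule for this exercise can be reproduced by simulation. The sketch below assumes, as is common, that a lower priority number means higher priority; if your course uses the opposite convention, flip the comparison:

```python
# A sketch of non-preemptive priority scheduling for the exercise above.
def priority_np(processes):
    """processes: list of (name, arrival, burst, priority).
    Returns (gantt_order, {name: completion_time})."""
    pending = sorted(processes, key=lambda p: p[1])
    clock, order, completion = 0, [], {}
    while pending:
        ready = [p for p in pending if p[1] <= clock]
        if not ready:                        # CPU idle until the next arrival
            ready = [pending[0]]
        job = min(ready, key=lambda p: p[3])  # lowest number = highest priority
        clock = max(clock, job[1]) + job[2]
        order.append(job[0])
        completion[job[0]] = clock
        pending.remove(job)
    return order, completion

procs = [("P1", 0, 5, 2), ("P2", 1, 3, 1), ("P3", 2, 8, 4), ("P4", 3, 6, 3)]
gantt, done = priority_np(procs)
# P1 runs first because it is alone at t=0; being non-preemptive, it keeps
# the CPU even after the higher-priority P2 arrives at t=1
```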

Round Robin (RR) Scheduling


 Round Robin is a preemptive process scheduling algorithm.
 A fixed time, called the time quantum, is allotted to each process for execution.
 Once a process has executed for the given time period, it is preempted and another process executes for its own time period.
 Context switching is used to save the states of preempted processes.

Process no   Arrival Time   Burst Time   CT   TAT   WT   RT
P1           21             2
P2           3              1
P3           6              4
P4           2              3

Gantt Chart

Advantage
1. Every process gets an equal share of the CPU.
2. RR is cyclic in nature, so there is no starvation.
Disadvantage

1. The average waiting time under the RR policy is often long.
2. If the time quantum is very large, RR degrades to FCFS.
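Round Robin can be sketched with a FIFO queue of remaining burst times. The quantum of 2 and the process data below are hypothetical, chosen only to illustrate the mechanism:

```python
from collections import deque

# A sketch of Round Robin; all processes are assumed to arrive at t=0.
def round_robin(processes, quantum):
    """processes: list of (name, burst). Returns {name: completion_time}."""
    queue = deque(name for name, _ in processes)
    remaining = dict(processes)
    clock, completion = 0, {}
    while queue:
        name = queue.popleft()
        run = min(quantum, remaining[name])   # one quantum, or less to finish
        clock += run
        remaining[name] -= run
        if remaining[name] == 0:
            completion[name] = clock          # process is done
        else:
            queue.append(name)                # preempted: back of the queue
    return completion

done = round_robin([("P1", 5), ("P2", 3), ("P3", 2)], quantum=2)
# schedule: P1(0-2) P2(2-4) P3(4-6) P1(6-8) P2(8-9) P1(9-10)
```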

Resource Allocation Graph (RAG)


 The resource allocation graph (RAG) is a pictorial representation of the state of a system.
 The RAG is a tool for recognizing deadlock, and it is also used for deadlock detection.
 It gives complete information about the state of the system, such as:
the number of processes in the system,
how many resources are available,
how many resources are allocated, etc.

Components of RAG (Resource Allocation Graph)


There are two components of the resource allocation graph:

1. Vertices
2. Edges

1. Vertices: - In the resource allocation graph, we use two kinds of vertices.

 Process Vertices
 Resource Vertices

Process Vertices: - To represent a process, we use process vertices. We draw the process
vertices by using a circle, and inside the circle, we mention the name of the process.
Resource Vertices: - To represent a resource, we use resource vertices. We draw the resource vertices by using a rectangle, and we use dots inside the rectangle to indicate the number of instances of that resource.

Resource vertices are of two types:
 Single instance resource type: - In single instance resource type, we use only a
single dot inside the box.
 Multiple instance resource type: - In multiple instance resource type, we use
multiple dots inside the box.

2. Edges: - There are two types of edges we use in the resource allocation graph:

 Assign Edges
 Request Edges

Assign Edges: - We use an assign edge to represent the allocation of a resource to a process.
Request Edges: - We use a request edge to signify the waiting state of a process.
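For single-instance resources, deadlock detection on a RAG reduces to cycle detection: request edges point from a process to a resource, assign edges from a resource to a process, and a cycle means deadlock. A sketch using depth-first search (the node names are hypothetical):

```python
# Detect a cycle in a directed graph (adjacency-list form) via DFS.
def has_cycle(graph):
    """graph: {node: [successor, ...]}. True if any directed cycle exists."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / in progress / finished
    color = {node: WHITE for node in graph}

    def visit(node):
        color[node] = GRAY
        for succ in graph.get(node, []):
            if color.get(succ, WHITE) == GRAY:   # back edge: cycle found
                return True
            if color.get(succ, WHITE) == WHITE and visit(succ):
                return True
        color[node] = BLACK
        return False

    return any(color[node] == WHITE and visit(node) for node in graph)

# P1 holds R1 and requests R2; P2 holds R2 and requests R1 -> deadlock.
deadlocked = {"P1": ["R2"], "R2": ["P2"], "P2": ["R1"], "R1": ["P1"]}
safe = {"P1": ["R1"], "R1": ["P2"], "P2": []}   # a simple chain, no cycle
```

For multi-instance resources a cycle is necessary but not sufficient for deadlock, so a dedicated detection algorithm is used instead.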

Deadlock

Deadlock is a situation where a set of process are blocked because each process is holding a
resource and waiting for another resource acquired by some other process.

Causes of deadlock
1. Mutual Exclusion

A resource can only be shared in a mutually exclusive manner: two processes cannot use the same resource at the same time.

2. Hold and wait

A process waits for some resources while holding another resource at the same time. This
deadlock condition in OS requires a process to wait for an occupied resource.

3. No preemption

A resource cannot be forcibly taken away (preempted) from the process that holds it. A resource is released only voluntarily by the holding process, after that process has finished using it.

4. Circular wait

All the processes must be waiting for resources in a cyclic manner, so that the last process is waiting for a resource held by the first process. Every process in the set waits in this cyclic manner, which keeps them waiting forever; none of them can make progress. This is why deadlock is also described as a "circular wait": it leaves each process stuck in a circular chain.
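Breaking any one of the four conditions prevents deadlock. A common practical target is circular wait: if every process acquires resources in one agreed global order, no cycle can form. A sketch with two hypothetical locks standing in for resources:

```python
import threading

# Both workers acquire lock_a strictly before lock_b. A cycle such as
# "T1 holds A and wants B while T2 holds B and wants A" is then impossible.
lock_a, lock_b = threading.Lock(), threading.Lock()
results = []

def worker(tag):
    with lock_a:          # always first in the global order
        with lock_b:      # always second
            results.append(tag)

threads = [threading.Thread(target=worker, args=(t,)) for t in ("w1", "w2")]
for t in threads:
    t.start()
for t in threads:
    t.join()
```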

Process Synchronization.

 Process synchronization is the coordination of the execution of multiple processes in a multi-process system, to ensure that they access shared resources in a controlled and predictable manner.
 It aims to resolve the problem of race conditions and other synchronization issues in a
concurrent system.

 The main objective of process synchronization is to ensure that multiple processes can access shared resources without interfering with each other.

On the basis of synchronization, processes are categorized as one of the following two types:
 Independent Process: The execution of one process does not affect the execution
of other processes.
 Cooperative Process: A process that can affect or be affected by other processes
executing in the system.

Race Condition:

 When more than one process executes the same code or accesses the same memory or a shared variable, there is a possibility that the output or the value of the shared variable will be wrong. Because the processes are effectively racing to have their result take effect, this situation is known as a race condition.
 When several processes access and manipulate the same data concurrently, the outcome depends on the particular order in which the accesses take place.
 A race condition is a situation that may occur inside a critical section.
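A race condition is easy to provoke with two threads incrementing a shared counter. The statement `counter += 1` is a read-modify-write sequence, so increments from the two threads can interleave and be lost; the final value can be less than 200000 and may differ from run to run:

```python
import threading

counter = 0  # shared variable, deliberately unprotected

def work():
    global counter
    for _ in range(100_000):
        counter += 1   # NOT atomic: load, add, store can interleave

threads = [threading.Thread(target=work) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter is at most 200000, and may be less when updates are lost
```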

Critical Section Problem:

A critical section is a code segment that can be accessed by only one process at a time. The
critical section contains shared variables that need to be synchronized to maintain the
consistency of data variables. So the critical section problem means designing a way for
cooperative processes to access shared resources without creating data inconsistencies.

In the entry section, the process requests entry into the critical section.
Any solution to the critical section problem must satisfy three requirements:
 Mutual Exclusion: If a process is executing in its critical section, then no other
process is allowed to execute in the critical section.
 Progress: If no process is executing in the critical section and other processes are
waiting outside the critical section, then only those processes that are not executing
in their remainder section can participate in deciding which will enter in the critical
section next, and the selection cannot be postponed indefinitely.
 Bounded Waiting: A bound must exist on the number of times that other processes
are allowed to enter their critical sections after a process has made a request to enter
its critical section and before that request is granted.
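The structure above (entry section, critical section, exit section) maps directly onto a mutex. Guarding a shared counter with a lock restores mutual exclusion and allows progress; bounded waiting then depends on the fairness of the lock implementation. A minimal sketch:

```python
import threading

counter = 0
mutex = threading.Lock()   # guards the critical section

def work():
    global counter
    for _ in range(100_000):
        with mutex:        # entry section: acquire the lock
            counter += 1   # critical section: one thread at a time
        # exit section: the lock is released when the 'with' block ends

threads = [threading.Thread(target=work) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# with the lock in place, no increment is lost: counter == 200000
```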
