
UNIT-II

PROCESS MANAGEMENT

2. Process Management: A process can be thought of as a program in execution. A process will


need certain resources—such as CPU time, memory, files, and I/O devices—to accomplish its task.
These resources are allocated to the process either when it is created or while it is executing. A
process is the unit of work in most systems. Systems consist of a collection of processes: operating-
system processes execute system code, and user processes execute user code. All these processes may
execute concurrently. Although traditionally a process contained only a single thread of control as it
ran, most modern operating systems now support processes that have multiple threads.
The operating system is responsible for several important aspects of process and
thread management: the creation and deletion of both user and system processes; the scheduling of
processes; and the provision of mechanisms for synchronization, communication, and deadlock
handling for processes.

2.1 Process Concept:-

Process: A process is a program in execution.

For example, when we write a program in C or C++ and compile it, the compiler creates binary
code. The original code and binary code are both programs. When we actually run the binary code,
it becomes a process.

A process is more than the program code, which is sometimes known as the text section.
It also includes the current activity, as represented by the value of the program counter and the
contents of the processor’s registers. A process generally also includes the process stack, which
contains temporary data (such as function parameters, return addresses, and local variables), and a
data section, which contains global variables. A process may also include a heap, which is memory
that is dynamically allocated during process run time.
The structure of a process in memory is shown in Figure 3.1

A program by itself is not a process. A program is a passive entity, such as a file containing a list of
instructions stored on disk (often called an executable file). In contrast, a process is an active entity,
with a program counter specifying the next instruction to execute and a set of associated resources.
Process Vs Program

Process: A process is an instance of a computer program that is being executed.
Program: A program is a collection of instructions that performs a specific task when executed by the computer.

Process: A process has a shorter lifetime.
Program: A program has a longer lifetime.

Process: A process requires resources such as memory, CPU, and input-output devices.
Program: A program is stored on the hard disk and does not require any resources.

Process: A process has a dynamic instance of code and data.
Program: A program has static code and static data.

Process: A process is the running instance of the code.
Program: A program is the executable code.

2.2 Process State:

A process is a program in execution and it is more than a program code called as text section and this
concept works under all the operating system because all the task perform by the operating system
needs a process to perform the task.

As a process executes, it changes state. The state of a process is defined by the current activity of
that process.

A process may be in one of the following states −

 New − The process is being created.


 Running − In this state the instructions are being executed.
 Waiting − The process is in waiting state until an event occurs like I/O operation completion
or receiving a signal.
 Ready − The process is waiting to be assigned to a processor.
 Terminated − The process has finished execution.
It is important to know that only one process can be running on any processor at any instant. Many

processes may be ready and waiting.

Now let us see the state diagram of these process states −

FIG:Diagram of process state.

Explanation:

Step 1 − Whenever a new process is created, it is admitted into ready state.

Step 2 − If no other process is present at running state, it is dispatched to running based on scheduler
dispatcher.

Step 3 − If the running process needs to wait for an I/O operation or some other event, it moves from
the running state to the waiting state. (If a higher-priority process becomes ready, the running process
is instead preempted and returned to the ready state.)

Step 4 − Whenever the I/O operation or event completes, the process is moved back to the ready
state, triggered by the corresponding interrupt signal.

Step 5 − When a process completes its execution in the running state, it exits to the terminated state,
which marks the completion of the process.

Seven state Process Model:-

New (Create) – In this step, the process is about to be created but not yet created, it is the program
which is present in secondary memory that will be picked up by OS to create the process.
Ready – New -> Ready to run. After the creation of a process, the process enters the ready state i.e.
the process is loaded into the main memory. The process here is ready to run and is waiting to get
the CPU time for its execution. Processes that are ready for execution by the CPU are maintained in
a queue for ready processes.
Run – The process is chosen by CPU for execution and the instructions within the process are
executed by any one of the available CPU cores.
Blocked or wait – Whenever the process requests access to I/O or needs input from the user or
needs access to a critical region(the lock for which is already acquired) it enters the blocked or wait
state. The process continues to wait in the main memory and does not require CPU. Once the I/O
operation is completed the process goes to the ready state.
Terminated or completed – Process is killed as well as PCB is deleted.
Suspend ready – Process that was initially in the ready state but was swapped out of main
memory(refer Virtual Memory topic) and placed onto external storage by scheduler is said to be in
suspend ready state. The process will transition back to ready state whenever the process is again
brought onto the main memory.
Suspend wait or suspend blocked – Similar to suspend ready, but applies to a process that was
performing I/O (blocked) and was moved to secondary memory because of a shortage of main
memory. When its I/O completes, it may move to the suspend ready state.

2.3 PROCESS CONTROL BLOCK:-

A Process Control Block (PCB) is a data structure that contains information about a process. The
process control block is also known as a task control block or an entry of the process table.

It is very important for process management, as the data structuring for processes is done in terms of
the PCB. It also holds the information that defines the current state of a process.

Structure of the Process Control Block


The process control block stores many data items that are needed for efficient process management. Some
of these data items are explained with the help of the given diagram −

The following are the data items −

Process State: This specifies the process state i.e. new, ready, running, waiting or terminated.
Process Number: This shows the number of the particular process.
Program Counter: This contains the address of the next instruction that needs to be executed in the
process.
CPU Registers: This specifies the registers that are used by the process. They may include
accumulators, index registers, stack pointers, general purpose registers etc.
Along with the program counter, this state information must be saved when an interrupt
occurs, to allow the process to be continued correctly afterward

List of Open Files:
These are the different files that are associated with the process

CPU Scheduling Information:


The process priority, pointers to scheduling queues, etc. make up the CPU scheduling information
contained in the PCB. This may also include any other scheduling parameters.

Memory Management Information:


The memory management information includes the page tables or the segment tables depending on
the memory system used. It also contains the value of the base registers, limit registers etc.

I/O Status Information:


This information includes the list of I/O devices used by the process, the list of files etc.

Accounting information:
The time limits, account numbers, amount of CPU used, process numbers etc. are all a part of the
PCB accounting information.
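The PCB fields above can be sketched as a C structure. The following is only an illustrative layout; the field names and sizes are teaching assumptions, not taken from any real kernel:

/* Illustrative sketch of a PCB as a C structure. Field names and
   sizes are teaching assumptions, not from any real kernel. */
typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

typedef struct pcb {
    int            pid;               /* process number                  */
    proc_state_t   state;             /* current process state           */
    unsigned long  program_counter;   /* address of the next instruction */
    unsigned long  registers[16];     /* saved CPU registers             */
    int            priority;          /* CPU scheduling information      */
    unsigned long  base, limit;       /* memory-management registers     */
    int            open_files[16];    /* list of open file descriptors   */
    unsigned long  cpu_time_used;     /* accounting information          */
    struct pcb    *next;              /* link for a scheduling queue     */
} pcb_t;

On a context switch, the kernel saves the program counter and registers into the running process's PCB and reloads them from the PCB of the process selected to run next.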

Location of the Process Control Block:


The process control block is kept in a memory area that is protected from the normal user access.
This is done because it contains important process information. Some of the operating systems place
the PCB at the beginning of the kernel stack for the process as it is a safe location.
2.4 Scheduling Queues:-

The processes that are entering the system are stored in the Job Queue. Processes that are in the
Ready state are placed in the Ready Queue.

The processes waiting for a device are placed in Device Queues. Each I/O device has its own device
queue.

A new process is first placed in the Ready queue, where it waits until it is selected for execution.

Once the process is assigned to the CPU and is executing, one of the following events may occur −

 The process issues an I/O request, and is then placed in an I/O queue.
 The process may create a new subprocess and wait for its termination.
 The process may be removed forcibly from the CPU as a result of an interrupt, and be put back
in the ready queue.

In the first two cases, the process eventually switches from the waiting state to the ready state and is
put back in the ready queue. A process continues this cycle until it terminates, at which time it is
removed from all queues and has its PCB and resources deallocated.

2.5 Process Scheduling:

The objective of multiprogramming is to have some process running at all times, to maximize CPU
utilization. The objective of time sharing is to switch the CPU among processes so frequently that
users can interact with each program while it is running.

To meet these objectives, the process scheduler selects an available process (possibly from a set of
several available processes) for program execution on the CPU. On a single-processor system, there
will never be more than one running process. If there are more processes, the rest will have to wait
until the CPU is free and can be rescheduled.

Type of Process Schedulers

A scheduler is a type of system software that allows you to handle process scheduling.

There are mainly three types of Process Schedulers:

1. Long Term Scheduler


2. Short Term Scheduler
3. Medium Term Scheduler

Long Term Scheduler

The long-term scheduler is also known as the job scheduler. It selects processes from the job queue
and loads them into memory for execution. It also regulates the degree of multiprogramming.

However, the main goal of this type of scheduler is to offer a balanced mix of jobs, such as
processor-bound and I/O-bound jobs, which keeps multiprogramming manageable.

Medium Term Scheduler

Medium-term scheduling is an important part of swapping. It handles the swapped-out processes.

A running process can become suspended if it makes an I/O request. A suspended process can’t
make any progress towards completion. In order to remove the process from memory and make space
for other processes, the suspended process should be moved to secondary storage.

Short Term Scheduler

The short-term scheduler is also known as the CPU scheduler. The main goal of this scheduler is to boost
the system performance according to set criteria. This helps you to select from a group of processes
that are ready to execute and allocates CPU to one of them. The dispatcher gives control of the CPU
to the process selected by the short term scheduler.

2.6 Scheduling Algorithms:-
CPU scheduling is the process of determining which process will own the CPU for execution while
another process is on hold. The main task of CPU scheduling is to make sure that whenever the CPU
is idle, the OS selects one of the processes available in the ready queue for execution. The selection
is carried out by the CPU scheduler, which selects one of the processes in memory that are ready for
execution.

Types of CPU Scheduling

Here are two kinds of Scheduling methods:

Preemptive Scheduling

In Preemptive Scheduling, the tasks are mostly assigned with their priorities. Sometimes it is
important to run a task with a higher priority before another lower priority task, even if the lower
priority task is still running. The lower priority task holds for some time and resumes when the higher
priority task finishes its execution.

Non-Preemptive Scheduling

In this type of scheduling method, once the CPU has been allocated to a specific process, the process
keeps the CPU busy and releases it either by switching context or by terminating. It is the only
method that can be used for various hardware platforms. That’s because it doesn’t need special
hardware (for example, a timer) like preemptive scheduling.

Important CPU scheduling Terminologies

 Burst Time/Execution Time: It is a time required by the process to complete execution. It is


also called running time.
 Arrival Time: the time when a process enters the ready state.
 Finish Time: the time when a process completes and exits the system.
 Multiprogramming: A number of programs which can be present in memory at the same
time.
 Jobs: It is a type of program without any kind of user interaction.
 User: It is a kind of program having user interaction.
 Process: the term used to refer to both jobs and user programs.
 CPU/IO burst cycle: Characterizes process execution, which alternates between CPU and
I/O activity. CPU times are usually shorter than the time of I/O.

CPU Scheduling Criteria:-

A CPU scheduling algorithm tries to maximize CPU utilization and throughput, and to minimize
turnaround time, waiting time, and response time.

Types of CPU scheduling Algorithm

There are mainly six types of process scheduling algorithms

1. First Come First Serve (FCFS)


2. Shortest-Job-First (SJF) Scheduling
3. Shortest Remaining Time
4. Priority Scheduling
5. Round Robin Scheduling
6. Multilevel Queue Scheduling

Scheduling Algorithms
First Come First Serve

FCFS stands for First Come First Serve. It is the easiest and simplest CPU scheduling algorithm. In
this type of algorithm, the process which requests the CPU first gets the CPU allocation first.
This scheduling method can be managed with a FIFO queue.

As the process enters the ready queue, its PCB (Process Control Block) is linked with the tail of the
queue. So, when CPU becomes free, it should be assigned to the process at the beginning of the
queue.

Characteristics of FCFS method:

 It is a non-preemptive scheduling algorithm.


 Jobs are always executed on a first-come, first-serve basis
 It is easy to implement and use.
 However, this method is poor in performance, and the general wait time is quite high.

Example

Let's take an example of the FCFS scheduling algorithm. In the following schedule, there are 5
processes with process ID P0, P1, P2, P3 and P4. P0 arrives at time 0, P1 at time 1, P2 at time 2, P3
arrives at time 3 and Process P4 arrives at time 4 in the ready queue. The processes and their
respective Arrival and Burst time are given in the following table.

The Turnaround time and the waiting time are calculated by using the following formula.

1. Turn Around Time = Completion Time - Arrival Time


2. Waiting Time = Turnaround time - Burst Time

The average waiting time is determined by summing the waiting times of all the processes and
dividing the sum by the total number of processes.
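These formulas can be checked mechanically. Here is a minimal C sketch (an illustration; the hard-coded arrival and burst times are those of the table below, and the arrays are assumed to be sorted by arrival time):

#include <stdio.h>

/* FCFS: compute completion, turnaround and waiting times.
   Illustrative sketch; arrays assumed sorted by arrival time. */
int main(void) {
    int arrival[] = {0, 1, 2, 3, 4};
    int burst[]   = {2, 6, 4, 9, 12};
    int n = 5, time = 0, total_tat = 0, total_wt = 0;

    for (int i = 0; i < n; i++) {
        if (time < arrival[i])          /* CPU idles until arrival */
            time = arrival[i];
        time += burst[i];               /* completion time of Pi   */
        int tat = time - arrival[i];    /* turnaround = completion - arrival */
        int wt  = tat - burst[i];       /* waiting = turnaround - burst      */
        total_tat += tat;
        total_wt  += wt;
        printf("P%d: completion=%d TAT=%d WT=%d\n", i, time, tat, wt);
    }
    printf("avg TAT=%.2f avg WT=%.2f\n",
           (double)total_tat / n, (double)total_wt / n);
    return 0;
}

Running this reproduces the columns of the table below.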

Process ID Arrival Time Burst Time Completion Time Turn Around Time Waiting Time

0 0 2 2 2 0

1 1 6 8 7 1

2 2 4 12 10 6
3 3 9 21 18 9

4 4 12 33 29 17

Avg Waiting Time = (0 + 1 + 6 + 9 + 17) / 5 = 33/5 = 6.6

(Gantt chart)

Convoy Effect in FCFS

FCFS may suffer from the convoy effect if the burst time of the first job is the highest among all. As
in real life, if a convoy is passing along the road, other people may get blocked until it passes
completely. The same can happen in an operating system.

If processes with very high burst times occupy the front of the ready queue, the processes with lower
burst times behind them get blocked, and may effectively never get the CPU while the job in
execution has a very high burst time. This is called the convoy effect, and the resulting indefinite
waiting is starvation.

Advantages of FCFS

Here, are pros/benefits of using FCFS scheduling algorithm:

 The simplest form of a CPU scheduling algorithm


 Easy to program
 First come first served

Disadvantages of FCFS

Here, are cons/ drawbacks of using FCFS scheduling algorithm:

 It is a Non-Preemptive CPU scheduling algorithm, so after the process has been allocated to
the CPU, it will never release the CPU until it finishes executing.
 The Average Waiting Time is high.
 Short processes that are at the back of the queue have to wait for the long process at the front
to finish.
 Not an ideal technique for time-sharing systems.
 Because of its simplicity, FCFS is not very efficient.

PRACTICE PROBLEMS BASED ON FCFS SCHEDULING-


Problem-01:

Consider the set of 5 processes whose arrival time and burst time are given below-

Process Id Arrival time Burst time

P1 3 4

P2 5 3

P3 0 2

P4 5 1

P5 4 3

If the CPU scheduling policy is FCFS, calculate the average waiting time and average turn around
time.

Solution-
Gantt Chart-

Here, black box represents the idle time of CPU.

Now, we know-

 Turn Around time = Exit time – Arrival time


 Waiting time = Turn Around time – Burst time

Process Id Exit time Turn Around time Waiting time

P1 7 7–3=4 4–4=0

P2 13 13 – 5 = 8 8–3=5

P3 2 2–0=2 2–2=0

P4 14 14 – 5 = 9 9–1=8

P5 10 10 – 4 = 6 6–3=3

Now,
 Average Turn Around time = (4 + 8 + 2 + 9 + 6) / 5 = 29 / 5 = 5.8 unit
 Average waiting time = (0 + 5 + 0 + 8 + 3) / 5 = 16 / 5 = 3.2 unit

Problem-02:

Consider the set of 3 processes whose arrival time and burst time are given below-

Process Id Arrival time Burst time

P1 0 2

P2 3 1

P3 5 6

If the CPU scheduling policy is FCFS, calculate the average waiting time and average turn around
time.

Solution-

Gantt Chart-

Here, black box represents the idle time of CPU.

Now, we know-

 Turn Around time = Exit time – Arrival time


 Waiting time = Turn Around time – Burst time

Process Id Exit time Turn Around time Waiting time

P1 2 2–0=2 2–2=0

P2 4 4–3=1 1–1=0

P3 11 11 – 5 = 6 6 – 6 = 0

Now,

Average Turn Around time = (2 + 1 + 6) / 3 = 9 / 3 = 3 unit


Average waiting time = (0 + 0 + 0) / 3 = 0 / 3 = 0 unit

Shortest Job First (SJF) :-


It is an algorithm in which the process having the smallest execution time is chosen for the next
execution. This scheduling method can be preemptive or non-preemptive. It significantly reduces the
average waiting time for other processes awaiting execution. The full form of SJF is Shortest Job
First.
There are basically two types of SJF methods:

 Non-Preemptive SJF
 Preemptive SJF

Characteristics of SJF Scheduling

 Each job has an associated unit of time it takes to complete.
 This algorithm method is helpful for batch-type processing, where waiting for jobs to
complete is not critical.
 It can improve process throughput by making sure that shorter jobs are executed first, and hence
possibly have a short turnaround time.

In the following example, there are five jobs named as P1, P2, P3, P4 and P5. Their arrival time and
burst time are given in the table below.

PID Arrival Time Burst Time Completion Time Turn Around Time Waiting Time

1 1 7 8 7 0

2 3 3 13 10 7

3 6 2 10 4 2

4 7 10 31 24 14

5 9 8 21 12 4

So that's how the procedure will go on in shortest job first (SJF) scheduling algorithm.

Avg Waiting Time = 27/5

Non Pre-emptive Shortest Job First

Consider the below processes available in the ready queue for execution, with arrival time as 0 for all
and given burst times.
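A minimal C sketch of non-preemptive SJF follows (an illustration; it handles arbitrary arrival times and is loaded with the data of Problem-01 below, so its output can be checked against that worked solution):

#include <stdio.h>

/* Non-preemptive SJF: at each scheduling decision, run the arrived,
   unfinished process with the smallest burst time to completion.
   Illustrative sketch using the data of Problem-01 below. */
int main(void) {
    int arrival[]   = {3, 1, 4, 0, 2};   /* P1..P5 */
    int burst[]     = {1, 4, 2, 6, 3};
    int finished[5] = {0};
    int n = 5, done = 0, time = 0;

    while (done < n) {
        int pick = -1;
        for (int i = 0; i < n; i++)      /* shortest arrived job */
            if (!finished[i] && arrival[i] <= time &&
                (pick < 0 || burst[i] < burst[pick]))
                pick = i;
        if (pick < 0) { time++; continue; }   /* CPU idle */
        time += burst[pick];                  /* run to completion */
        finished[pick] = 1;
        done++;
        printf("P%d exits at %d, TAT=%d, WT=%d\n", pick + 1, time,
               time - arrival[pick], time - arrival[pick] - burst[pick]);
    }
    return 0;
}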

PRACTICE PROBLEMS BASED ON SJF SCHEDULING-

Problem-01:

Consider the set of 5 processes whose arrival time and burst time are given below-

Process Id Arrival time Burst time

P1 3 1

P2 1 4

P3 4 2

P4 0 6

P5 2 3

If the CPU scheduling policy is SJF non-preemptive, calculate the average waiting time and average
turn around time.

Solution-

Gantt Chart-

Now, we know-

 Turn Around time = Exit time – Arrival time


 Waiting time = Turn Around time – Burst time

Process Id Exit time Turn Around time Waiting time

P1 7 7–3=4 4–1=3

P2 16 16 – 1 = 15 15 – 4 = 11

P3 9 9 – 4 = 5 5 – 2 = 3
P4 6 6–0=6 6–6=0

P5 12 12 – 2 = 10 10 – 3 = 7

Now,

 Average Turn Around time = (4 + 15 + 5 + 6 + 10) / 5 = 40 / 5 = 8 unit


 Average waiting time = (3 + 11 + 3 + 0 + 7) / 5 = 24 / 5 = 4.8 unit

Problem-02:

Consider the set of 5 processes whose arrival time and burst time are given below-

Process Id Arrival time Burst time

P1 3 1

P2 1 4

P3 4 2

P4 0 6

P5 2 3

If the CPU scheduling policy is SJF preemptive, calculate the average waiting time and average turn
around time.
Solution-
Gantt Chart-

Now, we know-

 Turn Around time = Exit time – Arrival time


 Waiting time = Turn Around time – Burst time

Process Id Exit time Turn Around time Waiting time

P1 4 4–3=1 1–1=0

P2 6 6–1=5 5–4=1

P3 8 8–4=4 4–2=2

P4 16 16 – 0 = 16 16 – 6 = 10

P5 11 11 – 2 = 9 9–3=6

Now,

 Average Turn Around time = (1 + 5 + 4 + 16 + 9) / 5 = 35 / 5 = 7 unit


 Average waiting time = (0 + 1 + 2 + 10 + 6) / 5 = 19 / 5 = 3.8 unit

Problem-03:
Consider the set of 6 processes whose arrival time and burst time are given below-

Process Id Arrival time Burst time

P1 0 7

P2 1 5

P3 2 3

P4 3 1

P5 4 2

P6 5 1

If the CPU scheduling policy is shortest remaining time first, calculate the average waiting time and
average turn around time.

Solution-

Gantt Chart-

Now, we know-

 Turn Around time = Exit time – Arrival time


 Waiting time = Turn Around time – Burst time

Process Id Exit time Turn Around time Waiting time

P1 19 19 – 0 = 19 19 – 7 = 12

P2 13 13 – 1 = 12 12 – 5 = 7

P3 6 6–2=4 4–3=1

P4 4 4–3=1 1–1=0

P5 9 9–4=5 5–2=3

P6 7 7–5=2 2–1=1

Now,

 Average Turn Around time = (19 + 12 + 4 + 1 + 5 + 2) / 6 = 43 / 6 = 7.17 unit


 Average waiting time = (12 + 7 + 1 + 0 + 3 + 1) / 6 = 24 / 6 = 4 unit

Problem-04:

Consider the set of 3 processes whose arrival time and burst time are given below-
Process Id Arrival time Burst time

P1 0 9

P2 1 4

P3 2 9

If the CPU scheduling policy is SRTF, calculate the average waiting time and average turn around
time.

Solution-
Gantt Chart-

Now, we know-

 Turn Around time = Exit time – Arrival time


 Waiting time = Turn Around time – Burst time

Process Id Exit time Turn Around time Waiting time

P1 13 13 – 0 = 13 13 – 9 = 4
P2 5 5–1=4 4–4=0

P3 22 22 – 2 = 20 20 – 9 = 11

Now,

 Average Turn Around time = (13 + 4 + 20) / 3 = 37 / 3 = 12.33 unit


 Average waiting time = (4 + 0 + 11) / 3 = 15 / 3 = 5 unit
Advantages of SJF

Here are the benefits/pros of using SJF method:

 SJF is frequently used for long term scheduling.


 It reduces the average waiting time over FIFO (First in First Out) algorithm.
 SJF method gives the lowest average waiting time for a specific set of processes.
 It is appropriate for the jobs running in batch, where run times are known in advance.
 For the batch system of long-term scheduling, a burst time estimate can be obtained from the
job description.
 For Short-Term Scheduling, we need to predict the value of the next burst time.
 Probably optimal with regard to average turnaround time.

Disadvantages/Cons of SJF

Here are some drawbacks/cons of SJF algorithm:

 Job completion time must be known earlier, but it is hard to predict.
 It is often used in a batch system for long term scheduling.
 SJF can’t be implemented for CPU scheduling for the short term. It is because there is no
specific method to predict the length of the upcoming CPU burst.
 This algorithm may cause very long turnaround times or starvation.
 Requires knowledge of how long a process or job will run.
 It can lead to starvation of long processes.
 It is hard to know the length of the upcoming CPU request.
 Elapsed time must be recorded, which results in more overhead on the processor.

Round-Robin Scheduling Algorithm:-


The name of this algorithm comes from the round-robin principle, where each person gets an equal
share of something in turns. It is the oldest, simplest scheduling algorithm, which is mostly used for
multitasking.

In Round-robin scheduling, each ready task runs turn by turn only in a cyclic queue for a limited time
slice. This algorithm also offers starvation free execution of processes

Characteristics of Round-Robin Scheduling

Here are the important characteristics of Round-Robin Scheduling:

 Round robin is a pre-emptive algorithm


 The CPU is shifted to the next process after a fixed interval of time, called the time
quantum/time slice.
 The process that is preempted is added to the end of the queue.
 Round robin is a hybrid model which is clock-driven
 The time slice should be the minimum that is needed for a specific task to be processed;
however, it may differ from OS to OS.
 It is a real time algorithm which responds to the event within a specific time limit.
 Round robin is one of the oldest, fairest, and easiest algorithms.
 Widely used scheduling method in traditional OS.
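As a concrete illustration, here is a minimal C simulation of round robin with a circular ready queue (a sketch under the usual assumptions: processes sorted by arrival time, and newly arrived processes are queued before a preempted process re-joins; the data and quantum are those of Problem-01 below):

#include <stdio.h>

/* Round-robin simulation with a FIFO ready queue.
   Illustrative sketch; data and quantum from Problem-01 below. */
int main(void) {
    int arrival[] = {0, 1, 2, 3, 4};
    int burst[]   = {5, 3, 1, 2, 3};
    int n = 5, quantum = 2;
    int remaining[5], queue[64];
    int head = 0, tail = 0, time = 0, next = 0;

    for (int i = 0; i < n; i++) remaining[i] = burst[i];
    while (next < n && arrival[next] <= time)   /* enqueue arrivals at t=0 */
        queue[tail++] = next++;

    while (head < tail) {
        int p = queue[head++];
        int run = remaining[p] < quantum ? remaining[p] : quantum;
        time += run;
        remaining[p] -= run;
        /* processes that arrived while p was running join first... */
        while (next < n && arrival[next] <= time)
            queue[tail++] = next++;
        if (remaining[p] > 0)
            queue[tail++] = p;          /* ...then the preempted process */
        else
            printf("P%d exits at %d, TAT=%d, WT=%d\n", p + 1, time,
                   time - arrival[p], time - arrival[p] - burst[p]);
        if (head == tail && next < n) { /* CPU idle: jump to next arrival */
            if (time < arrival[next]) time = arrival[next];
            queue[tail++] = next++;
        }
    }
    return 0;
}

Its output matches the exit times of the worked solution that follows.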

PRACTICE PROBLEMS BASED ON ROUND ROBIN SCHEDULING-


Problem-01:

Consider the set of 5 processes whose arrival time and burst time are given below-

Process Id Arrival time Burst time


P1 0 5

P2 1 3

P3 2 1

P4 3 2

P5 4 3

If the CPU scheduling policy is Round Robin with time quantum = 2 unit, calculate the average
waiting time and average turn around time.

Solution-

Gantt Chart-

Ready Queue-

P5, P1, P2, P5, P4, P1, P3, P2, P1

Now, we know-

 Turn Around time = Exit time – Arrival time


 Waiting time = Turn Around time – Burst time

Process Id Exit time Turn Around time Waiting time

P1 13 13 – 0 = 13 13 – 5 = 8

P2 12 12 – 1 = 11 11 – 3 = 8

P3 5 5–2=3 3–1=2

P4 9 9–3=6 6–2=4

P5 14 14 – 4 = 10 10 – 3 = 7

Now,

 Average Turn Around time = (13 + 11 + 3 + 6 + 10) / 5 = 43 / 5 = 8.6 unit


 Average waiting time = (8 + 8 + 2 + 4 + 7) / 5 = 29 / 5 = 5.8 unit

Problem-02:
Consider the set of 6 processes whose arrival time and burst time are given below-

Process Id Arrival time Burst time

P1 0 4

P2 1 5

P3 2 2

P4 3 1
P5 4 6

P6 6 3

If the CPU scheduling policy is Round Robin with time quantum = 2, calculate the average waiting
time and average turn around time.

Solution-

Gantt chart-

Ready Queue-

P5, P6, P2, P5, P6, P2, P5, P4, P1, P3, P2, P1

Now, we know-

 Turn Around time = Exit time – Arrival time


 Waiting time = Turn Around time – Burst time

Process Id Exit time Turn Around time Waiting time

P1 8 8–0=8 8–4=4

P2 18 18 – 1 = 17 17 – 5 = 12

P3 6 6–2=4 4–2=2

P4 9 9–3=6 6–1=5

P5 21 21 – 4 = 17 17 – 6 = 11

P6 19 19 – 6 = 13 13 – 3 = 10

Now,

 Average Turn Around time = (8 + 17 + 4 + 6 + 17 + 13) / 6 = 65 / 6 = 10.83 unit


 Average waiting time = (4 + 12 + 2 + 5 + 11 + 10) / 6 = 44 / 6 = 7.33 unit

Problem-03:

Consider the set of 6 processes whose arrival time and burst time are given below-

Process Id Arrival time Burst time

P1 5 5

P2 4 6

P3 3 7

P4 1 9

P5 2 2

P6 6 3

If the CPU scheduling policy is Round Robin with time quantum = 3, calculate the average waiting
time and average turn around time.

Solution-

Gantt chart-

Ready Queue-

P3, P1, P4, P2, P3, P6, P1, P4, P2, P3, P5, P4

Now, we know-

 Turn Around time = Exit time – Arrival time


 Waiting time = Turn Around time – Burst time

Process Id Exit time Turn Around time Waiting time

P1 32 32 – 5 = 27 27 – 5 = 22

P2 27 27 – 4 = 23 23 – 6 = 17

P3 33 33 – 3 = 30 30 – 7 = 23

P4 30 30 – 1 = 29 29 – 9 = 20

P5 6 6–2=4 4–2=2

P6 21 21 – 6 = 15 15 – 3 = 12

Now,

 Average Turn Around time = (27 + 23 + 30 + 29 + 4 + 15) / 6 = 128 / 6 = 21.33 unit


 Average waiting time = (22 + 17 + 23 + 20 + 2 + 12) / 6 = 96 / 6 = 16 unit

Advantage of Round-robin Scheduling

Here, are pros/benefits of Round-robin scheduling method:

 It doesn’t face the issues of starvation or convoy effect.


 All the jobs get a fair allocation of CPU.
 It deals with all process without any priority
 If you know the total number of processes on the run queue, then you can also estimate the
worst-case response time for a process.
 This scheduling method does not depend upon burst time. That’s why it is easily
implementable on the system.
 Once a process has executed for the given time period, it is preempted, and another process
executes for its own time period.
 Allows OS to use the Context switching method to save states of preempted processes.
 It gives the best performance in terms of average response time.

Disadvantages of Round-robin Scheduling

Here, are drawbacks/cons of using Round-robin scheduling:

 If the time slice of the OS is small, processor throughput is reduced.


 This method spends more time on context switching
 Its performance heavily depends on time quantum.
 Priorities cannot be set for the processes.
 Round-robin scheduling doesn’t give special priority to more important tasks.
 Decreases comprehension
 A lower time quantum results in higher context-switching overhead in the system.
 Finding a correct time quantum is a quite difficult task in this system.

2.7 Multi-Threading:-

Thread is an execution unit that consists of its own program counter, a stack, and a set of registers
where the program counter mainly keeps track of which instruction to execute next, a set of registers
mainly hold its current working variables, and a stack mainly contains the history of execution.

Threads are also known as lightweight processes. Threads are a popular way to improve the
performance of an application through parallelism. Threads represent a software approach to
improving operating-system performance: the overhead of a thread is much lower than that of an
otherwise equivalent classical process.

The CPU switches rapidly back and forth among the threads giving the illusion that the threads are
running in parallel.

As each thread has its own independent set of execution state, multiple parts of a task can be
executed in parallel by increasing the number of threads.

It is important to note here that each thread belongs to exactly one process and outside a process no
threads exist. Each thread basically represents the flow of control separately. In the implementation of
network servers and web servers threads have been successfully used. Threads provide a suitable
foundation for the parallel execution of applications on shared-memory multiprocessors.

The given below figure shows the working of a single-threaded and a multithreaded process:

Before moving on further let us first understand the difference between a process and a thread.

Process: A process simply means any program in execution.
Thread: A thread simply means a segment of a process.

Process: The process consumes more resources.
Thread: A thread consumes fewer resources.

Process: The process requires more time for creation.
Thread: A thread requires comparatively less time for creation than a process.

Process: The process is a heavyweight process.
Thread: A thread is known as a lightweight process.

Process: The process takes more time to terminate.
Thread: A thread takes less time to terminate.

Process: Processes have independent data and code segments.
Thread: A thread mainly shares the data segment, code segment, files, etc. with its peer threads.

Process: The process takes more time for context switching.
Thread: A thread takes less time for context switching.

Process: Communication between processes needs more time as compared to threads.
Thread: Communication between threads needs less time as compared to processes.

Process: If a process gets blocked, the remaining processes can continue their execution.
Thread: If a user-level thread gets blocked, all of its peer threads also get blocked.

Advantages of Thread

Some advantages of threads are given below:

1. Responsiveness

2. Resource sharing, hence allowing better utilization of resources.

3. Economy. Creating and managing threads becomes easier.

4. Scalability. One thread runs on one CPU. In Multithreaded processes, threads can be
distributed over a series of processors to scale.

5. Context Switching is smooth. Context switching refers to the procedure followed by the CPU
to change from one task to another.

6. Enhanced Throughput of the system. Let us take an example for this: suppose a process is
divided into multiple threads, and the function of each thread is considered as one job, then
the number of jobs completed per unit of time increases which then leads to an increase in
the throughput of the system.
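A minimal POSIX threads example in C makes this concrete (an illustrative sketch; pthread_create and pthread_join are the standard POSIX calls for creating and waiting on threads):

#include <pthread.h>
#include <stdio.h>

/* Minimal POSIX threads example: one process creates two threads
   that run concurrently in the same address space. */
void *say_hello(void *arg) {
    printf("hello from thread %ld\n", (long)arg);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, say_hello, (void *)1L);
    pthread_create(&t2, NULL, say_hello, (void *)2L);
    pthread_join(t1, NULL);   /* wait for both threads to finish */
    pthread_join(t2, NULL);
    return 0;
}

Both threads share the process's data section, so any shared variables they touch would need the synchronization techniques of the next sections.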

2.8 CONCURRENCY AND SYNCHRONIZATION:

Process Synchronization:-

On the basis of synchronization, processes are categorized as one of the following two types:
Independent Process: The execution of one process does not affect the execution of other processes.
Cooperative Process: A process that can affect or be affected by other processes executing in the
system.
The process synchronization problem arises in the case of cooperative processes, because resources
are shared among cooperative processes.

Race Condition:

When more than one process executes the same code or accesses the same memory or a shared
variable, there is a possibility that the output or the value of the shared variable is wrong; the
processes are effectively racing, each claiming its output is the correct one. This condition is known
as a race condition. When several processes access and manipulate the same data concurrently, the
outcome depends on the particular order in which the accesses take place. A race condition is a
situation that may occur inside a critical section: the result of executing multiple threads in the
critical section differs according to the order in which the threads execute. Race conditions in
critical sections can be avoided if the critical section is treated as an atomic instruction. Proper
thread synchronization using locks or atomic variables can also prevent race conditions.

2.9 Critical Section Problem:

A critical section is a code segment that can be accessed by only one process at a time. The critical
section contains shared variables that need to be synchronized to maintain the consistency of data
variables. So the critical section problem means designing a way for cooperative processes to access
shared resources without creating data inconsistencies.

 Entry Section: The entry section decides which process may enter the critical section.
 Critical Section: Critical section allows and makes sure that only one process is modifying
the shared data.
 Exit Section: The entry of other processes in the shared data after the execution of one
process is handled by the Exit section.
 Remainder Section: The remaining part of the code which is not categorized as above is
contained in the Remainder section.

In the entry section, the process requests for entry in the Critical Section.
Any solution to the critical section problem must satisfy three requirements:

Mutual Exclusion: If a process is executing in its critical section, then no other process is
allowed to execute in the critical section.
Progress: If no process is executing in the critical section and other processes are waiting outside
the critical section, then only those processes that are not executing in their remainder section can
participate in deciding which will enter in the critical section next, and the selection can not be
postponed indefinitely.
Bounded Waiting: A bound must exist on the number of times that other processes are allowed
to enter their critical sections after a process has made a request to enter its critical section and
before that request is granted.
2.10 Petersons Solution:-

Peterson's approach to the critical section problem is widely used. It is a classical software-based
solution.

The solution is based on the idea that when a process is executing in a critical section, then the other
process executes the rest of the code and vice-versa is also possible, i.e., this solution makes sure that
only one process executes the critical section at any point in time.

In Peterson's solution, we have two shared variables that are used by the processes.

 A boolean array Flag[]: initialized to FALSE; Flag[i] indicates whether process i wants to
enter the critical section.
 An integer variable Turn: indicates the number of the process whose turn it is to enter the
critical section.

do{
    // Process Pi wants to enter the critical section:
    // it raises its own flag and gives the turn to the other process Pj
    Flag[i] = True;
    Turn = j;

    // wait while Pj also wants to enter and it is Pj's turn
    while(Flag[j] && Turn == j);

    { Critical Section };

    // Pi leaves; another process can go to the critical section
    Flag[i] = False;

    Remainder Section

} while ( True);

Some disadvantages of Peterson's solution are:

 Peterson's solution involves busy waiting.
 The solution is limited to only 2 processes.
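For reference, Peterson's algorithm can also be written in real C. On modern hardware, plain loads and stores may be reordered by the compiler and CPU, so a faithful sketch uses C11 sequentially consistent atomics (illustrative code, not part of the classical pseudocode above):

#include <stdatomic.h>
#include <stdbool.h>

/* Peterson's algorithm for two processes (i = 0 or 1) in C11.
   Sequentially consistent atomics prevent the reordering that
   would break the algorithm on modern hardware. */
atomic_bool flag[2];
atomic_int  turn;

void enter_critical(int i) {
    int j = 1 - i;                        /* the other process    */
    atomic_store(&flag[i], true);         /* I want to enter      */
    atomic_store(&turn, j);               /* but you may go first */
    while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
        ;                                 /* busy wait            */
}

void exit_critical(int i) {
    atomic_store(&flag[i], false);        /* I am done            */
}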

2.11 Synchronization Hardware:-


Process synchronization problems occur when two processes running concurrently share the same
data or the same variable. The value of that variable may not be updated correctly before it is used
by a second process; such a condition is a race condition. There are software as well as hardware
solutions to this problem. This section covers the common hardware solutions to the process
synchronization problem and their implementation.
There are three algorithms in the hardware approach to solving the process synchronization problem:

1. Test and Set
2. Swap
3. Unlock and Lock

Hardware instructions in many operating systems help in the effective solution of critical section
problems.

1. Test and Set:


Here, the shared variable is lock, which is initialized to false. TestAndSet(lock) works as follows: as
one atomic action, it returns the current value of lock and sets lock to true. The first process will
enter the critical section at once, as TestAndSet(lock) will return false and it will break out of the while
loop. The other processes cannot enter now as lock is set to true and so the while loop continues to be
true. Mutual exclusion is ensured. Once the first process gets out of the critical section, lock is
changed to false. So, now the other processes can enter one by one. Progress is also ensured.
However, after the first process, any process can go in. There is no queue maintained, so any new
process that finds the lock to be false again can enter. So bounded waiting is not ensured.

Test and Set Pseudocode –

//Shared variable lock initialized to false
boolean lock;

boolean TestAndSet (boolean &target){
    boolean rv = target;    // save the old value of the lock
    target = true;          // set the lock
    return rv;              // return the old value
}

while(1){
    while (TestAndSet(lock));   // spin until the old value was false
    critical section
    lock = false;
    remainder section
}
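Real hardware exposes test-and-set through atomic instructions, and C11 wraps it as atomic_flag. A minimal spinlock sketch equivalent to the pseudocode above (illustrative):

#include <stdatomic.h>

/* C11 spinlock built on the hardware test-and-set primitive. */
atomic_flag lock_flag = ATOMIC_FLAG_INIT;   /* starts out "false" */

void acquire(void) {
    while (atomic_flag_test_and_set(&lock_flag))
        ;                         /* spin until the old value was false */
}

void release(void) {
    atomic_flag_clear(&lock_flag);            /* set lock back to false */
}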
2. Swap:
Swap algorithm is a lot like the TestAndSet algorithm. Instead of directly setting lock to true in the
swap function, key is set to true and then swapped with lock. First process will be executed, and in
while(key), since key=true , swap will take place and hence lock=true and key=false. Again next
iteration takes place while(key) but key=false , so while loop breaks and first process will enter in
critical section. Now another process will try to enter in Critical section, so again key=true and hence
while(key) loop will run and swap takes place so, lock=true and key=true (since lock=true in first
process). Again on next iteration while(key) is true so this will keep on executing and another process
will not be able to enter the critical section. Therefore mutual exclusion is ensured. Again, on leaving
the critical section, lock is changed to false, so any process finding it false gets to enter the critical
section. Progress is ensured. However, again bounded waiting is not ensured, for the very same reason.

Swap Pseudocode –

// Shared variable lock initialized to false
// and individual key initialized to false
boolean lock;
boolean key;

void swap(boolean &a, boolean &b){
    boolean temp = a;
    a = b;
    b = temp;
}

while (1){
    key = true;
    while(key)
        swap(lock, key);    // spin until lock was observed false
    critical section
    lock = false;
    remainder section
}

3. Unlock and Lock:


Unlock and Lock Algorithm uses TestAndSet to regulate the value of lock but it adds another
value, waiting[i], for each process which checks whether or not a process has been waiting. A
ready queue is maintained with respect to the process in the critical section. All the processes
coming in next are added to the ready queue with respect to their process number, not necessarily
sequentially. Once the ith process gets out of the critical section, it does not turn lock to false so
that any process can avail the critical section now, which was the problem with the previous
algorithms. Instead, it checks if there is any process waiting in the queue. The queue is taken to be
a circular queue. j is considered to be the next process in line and the while loop checks from jth
process to the last process and again from 0 to (i-1)th process if there is any process waiting to
access the critical section. If there is no process waiting then the lock value is changed to false
and any process which comes next can enter the critical section. If there is, then that process’
waiting value is turned to false, so that the first while loop becomes false and it can enter the
critical section. This ensures bounded waiting. So the problem of process synchronization can be
solved through this algorithm.

Unlock and Lock Pseudocode –

// Shared variable lock initialized to false,
// with a per-process key and one waiting[] entry per process
boolean lock;
boolean key;
boolean waiting[n];

while(1){
    waiting[i] = true;
    key = true;
    while(waiting[i] && key)
        key = TestAndSet(lock);
    waiting[i] = false;             // Pi has entered the critical section
    critical section
    j = (i+1) % n;
    while(j != i && !waiting[j])    // scan for the next waiting process
        j = (j+1) % n;
    if(j == i)
        lock = false;               // nobody waiting: release the lock
    else
        waiting[j] = false;         // pass the critical section to Pj
    remainder section
}

2.12 Semaphores:-

Semaphores are integer variables that are used to solve the critical section problem by using two
atomic operations, wait and signal that are used for process synchronization.

The definitions of wait and signal are as follows −

 Wait
The wait operation waits while the value of its argument S is not positive, and then decrements it.

wait(S){
    while (S <= 0);   // busy wait
    S--;
}

 Signal
The signal operation increments the value of its argument S.

signal(S){
    S++;
}
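These operations map directly onto POSIX semaphores (sem_wait and sem_post), except that POSIX implementations block the caller instead of busy waiting. A minimal sketch:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t s;                     /* counting semaphore */

void *worker(void *arg) {
    sem_wait(&s);            /* wait(S): blocks while the count is 0 */
    printf("thread %ld in critical section\n", (long)arg);
    sem_post(&s);            /* signal(S): increments the count */
    return NULL;
}

int main(void) {
    pthread_t t[3];
    sem_init(&s, 0, 1);      /* initial value 1: behaves as a mutex */
    for (long i = 0; i < 3; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 3; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&s);
    return 0;
}

With an initial value of 1 the semaphore acts as a binary semaphore; a larger initial value would let that many threads into the section at once, as with a counting semaphore.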
Types of Semaphores

There are two main types of semaphores i.e. counting semaphores and binary semaphores. Details
about these are given as follows −

 Counting Semaphores
These are integer value semaphores and have an unrestricted value domain. These semaphores
are used to coordinate the resource access, where the semaphore count is the number of
available resources. If the resources are added, semaphore count automatically incremented
and if the resources are removed, the count is decremented.
 Binary Semaphores
The binary semaphores are like counting semaphores but their value is restricted to 0 and 1.
The wait operation only works when the semaphore is 1 and the signal operation succeeds
when semaphore is 0. It is sometimes easier to implement binary semaphores than counting
semaphores.
Advantages of Semaphores:
 Semaphores allow only one process into the critical section. They follow the mutual exclusion
principle strictly and are much more efficient than some other methods of synchronization.
 There is no resource wastage because of busy waiting in semaphores as processor time is not
wasted unnecessarily to check if a condition is fulfilled to allow a process to access the critical
section.
 Semaphores are implemented in the machine independent code of the microkernel. So they are
machine independent.

Disadvantages of Semaphores:

 Semaphores are complicated so the wait and signal operations must be implemented in the
correct order to prevent deadlocks.
 Semaphores are impractical for large-scale use, as their use leads to loss of modularity. This
happens because the wait and signal operations prevent the creation of a structured layout for
the system.
 Semaphores may lead to a priority inversion where low priority processes may access the
critical section first and high priority processes later.

2.13 Classic Problems of Synchronization:-

Producer-Consumer Problem:-
The Producer-Consumer problem is a classical multi-process synchronization problem, that is we are
trying to achieve synchronization between more than one process.

There is one Producer in the producer-consumer problem, Producer is producing some items, whereas
there is one Consumer that is consuming the items produced by the Producer. The same memory
buffer is shared by both producers and consumers which is of fixed-size.

The task of the Producer is to produce the item, put it into the memory buffer, and again start
producing items. Whereas the task of the Consumer is to consume the item from the memory buffer.

Below are a few points that are considered as the constraints of the Producer-Consumer problem:

o The producer should produce data only when the buffer is not full. In case it is found that the
buffer is full, the producer is not allowed to store any data into the memory buffer.
o Data can only be consumed by the consumer if and only if the memory buffer is not empty. In
case it is found that the buffer is empty, the consumer is not allowed to use any data from the
memory buffer.
o Accessing memory buffer should not be allowed to producer and consumer at the same time.

To solve this problem, we need two counting semaphores – Full and Empty. “Full” keeps track
of number of items in the buffer at any given time and “Empty” keeps track of number of
unoccupied slots.

Initialization of semaphores –
mutex = 1
Full = 0 // Initially, all slots are empty. Thus full slots are 0
Empty = n // All slots are empty initially
Solution for Producer –
do{

//produce an item

wait(empty);

wait(mutex);
//place in buffer

signal(mutex);

signal(full);

}while(true)

When producer produces an item then the value of “empty” is reduced by 1 because one slot
will be filled now. The value of mutex is also reduced to prevent consumer to access the buffer.
Now, the producer has placed the item and thus the value of “full” is increased by 1. The value
of mutex is also increased by 1 because the task of producer has been completed and consumer
can access the buffer.

Solution for Consumer –


do{

wait(full);

wait(mutex);

// remove item from buffer

signal(mutex);

signal(empty);

// consumes item

}while(true)

As the consumer is removing an item from buffer, therefore the value of “full” is reduced by 1
and the value is mutex is also reduced so that the producer cannot access the buffer at this
moment. Now, the consumer has consumed the item, thus increasing the value of “empty” by 1.
The value of mutex is also increased so that producer can access the buffer now.
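Putting the two halves together, here is a runnable C sketch with POSIX threads and semaphores (the buffer size and item counts are illustrative choices, not from the pseudocode):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 5                 /* number of buffer slots */

int buffer[N];
int in = 0, out = 0;        /* next free slot / next full slot */

sem_t empty_slots;          /* "empty": counts free slots, starts at N */
sem_t full_slots;           /* "full":  counts filled slots, starts at 0 */
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

void *producer(void *arg) {
    for (int item = 1; item <= 10; item++) {
        sem_wait(&empty_slots);          /* wait(empty) */
        pthread_mutex_lock(&mutex);      /* wait(mutex) */
        buffer[in] = item;               /* place in buffer */
        in = (in + 1) % N;
        pthread_mutex_unlock(&mutex);    /* signal(mutex) */
        sem_post(&full_slots);           /* signal(full) */
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < 10; i++) {
        sem_wait(&full_slots);           /* wait(full) */
        pthread_mutex_lock(&mutex);      /* wait(mutex) */
        int item = buffer[out];          /* remove from buffer */
        out = (out + 1) % N;
        pthread_mutex_unlock(&mutex);    /* signal(mutex) */
        sem_post(&empty_slots);          /* signal(empty) */
        printf("consumed %d\n", item);   /* consume the item */
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&empty_slots, 0, N);        /* all slots empty initially */
    sem_init(&full_slots, 0, 0);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}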

2.14 Readers-Writers Problem:-

There is a shared resource which should be accessed by multiple processes. There are two types of
processes in this context. They are reader and writer. Any number of readers can read from the
shared resource simultaneously, but only one writer can write to the shared resource. When
a writer is writing data to the resource, no other process can access the resource. A writer cannot
write to the resource if there is a non-zero number of readers accessing the resource at that time.
The Solution:

From the above problem statement, it is evident that readers have higher priority than writer. If a
writer wants to write to the resource, it must wait until there are no readers currently accessing that
resource.

Here, we use one mutex m and a semaphore w. An integer variable read_count is used to maintain
the number of readers currently accessing the resource. The variable read_count is initialized to 0. A
value of 1 is given initially to m and w.

Instead of having the process to acquire lock on the shared resource, we use the mutex m to make the
process to acquire and release lock whenever it is updating the read_count variable.

The code for the writer process looks like this:

while(TRUE)
{
    wait(w);

    /* perform the write operation */

    signal(w);
}
And, the code for the reader process looks like this:

while(TRUE)
{
    //acquire lock
    wait(m);
    read_count++;
    if(read_count == 1)
        wait(w);        // first reader locks the resource against writers

    //release lock
    signal(m);

    /* perform the reading operation */

    // acquire lock
    wait(m);
    read_count--;
    if(read_count == 0)
        signal(w);      // last reader releases the resource

    // release lock
    signal(m);
}

Here is the code explained:

 As seen above in the code for the writer, the writer just waits on the w semaphore until it gets
a chance to write to the resource.

 After performing the write operation, it increments w so that the next writer can access the
resource.

 On the other hand, in the code for the reader, the lock is acquired whenever the read_count is
updated by a process.

 When a reader wants to access the resource, first it increments the read_count value, then
accesses the resource and then decrements the read_count value.

 The semaphore w is used by the first reader which enters the critical section and the last
reader which exits the critical section.

 The reason for this is that when the first reader enters the critical section, the writer is blocked
from the resource; after that, only new readers can access the resource.

 Similarly, when the last reader exits the critical section, it signals the writer using
the w semaphore because there are zero readers now and a writer can have the chance to
access the resource.
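The same reader-priority scheme can be written as runnable C with POSIX semaphores (a sketch mirroring the pseudocode above; the thread bodies stand in for real read/write work):

#include <pthread.h>
#include <semaphore.h>

sem_t m, w;                  /* both initialized to 1 in main */
int read_count = 0;

void *writer_thread(void *arg) {
    sem_wait(&w);            /* exclusive access to the resource */
    /* ... perform the write operation ... */
    sem_post(&w);
    return NULL;
}

void *reader_thread(void *arg) {
    sem_wait(&m);            /* protect read_count */
    if (++read_count == 1)
        sem_wait(&w);        /* first reader locks out writers */
    sem_post(&m);

    /* ... perform the reading operation ... */

    sem_wait(&m);
    if (--read_count == 0)
        sem_post(&w);        /* last reader lets writers in */
    sem_post(&m);
    return NULL;
}

int main(void) {
    pthread_t r, wr;
    sem_init(&m, 0, 1);
    sem_init(&w, 0, 1);
    pthread_create(&r, NULL, reader_thread, NULL);
    pthread_create(&wr, NULL, writer_thread, NULL);
    pthread_join(r, NULL);
    pthread_join(wr, NULL);
    return 0;
}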



2.15 Dining-Philosophers Problem:-
The dining philosophers problem states that there are 5 philosophers sharing a circular table and they
eat and think alternatively. There is a bowl of rice for each of the philosophers and 5 chopsticks. A
philosopher needs both their right and left chopsticks to eat. A hungry philosopher may eat only if
both chopsticks are available. Otherwise the philosopher puts down the chopstick and begins
thinking again.

The dining philosopher is a classic synchronization problem as it demonstrates a large class of


concurrency control problems.

Solution of Dining Philosophers Problem

A solution of the Dining Philosophers Problem is to use a semaphore to represent a chopstick. A


chopstick can be picked up by executing a wait operation on the semaphore and released by
executing a signal operation on it.

The structure of the chopstick is shown below −

semaphore chopstick [5];

Initially the elements of the chopstick are initialized to 1 as the chopsticks are on the table and not
picked up by a philosopher.

The structure of a random philosopher i is given as follows −

do {
    wait( chopstick[i] );
    wait( chopstick[ (i+1) % 5] );
    ...
    /* EATING THE RICE */

    signal( chopstick[i] );
    signal( chopstick[ (i+1) % 5] );

    /* THINKING */
} while(1);

In the above structure, first wait operation is performed on chopstick[i] and chopstick[ (i+1) % 5].
This means that the philosopher i has picked up the chopsticks on his sides. Then the eating function
is performed.

After that, signal operation is performed on chopstick[i] and chopstick[ (i+1) % 5]. This means that
the philosopher i has eaten and put down the chopsticks on his sides. Then the philosopher goes back
to thinking.

Difficulty with the solution

The above solution makes sure that no two neighboring philosophers can eat at the same time. But
this solution can lead to a deadlock. This may happen if all the philosophers pick their left chopstick
simultaneously. Then none of them can eat and deadlock occurs.

Some of the ways to avoid deadlock are as follows −

 There should be at most four philosophers on the table.


 An even-numbered philosopher should pick up the right chopstick first and then the left
chopstick, while an odd-numbered philosopher should pick up the left chopstick first and then
the right chopstick (as sketched below).
 A philosopher should only be allowed to pick up chopsticks if both are available at the same
time.

2.16 MONITORS IN OS:-

Monitors are a programming language component that aids in the regulation of shared data access.
The Monitor is a package that contains shared data structures, operations, and synchronization
between concurrent procedure calls. Therefore, a monitor is also known as a synchronization
tool. Java, C#, Visual Basic, Ada, and concurrent Euclid are among some of the languages that
allow the use of monitors. Processes operating outside the monitor can't access the monitor's internal
variables, but they can call the monitor's procedures.

For example, synchronization methods like the wait() and notify() constructs are available in the Java
programming language.

Syntax of monitor in OS

Monitor in os has a simple syntax similar to how we define a class, it is as follows:

Monitor monitorName{
    variables_declaration;
    condition_variables;

    procedure p1{ ... };
    procedure p2{ ... };
    ...
    procedure pn{ ... };

    {
        initializing_code;
    }
}

Monitor in an operating system is simply a class containing variable_declarations,


condition_variables, various procedures (functions), and an initializing_code block that is used for
process synchronization.

Characteristics of Monitors in OS

A monitor in os has the following characteristics:

 We can only run one program at a time inside the monitor.


 Monitors in an operating system are defined as a group of methods and fields that are
combined with a special type of package in the os.
 A program cannot access the monitor's internal variable if it is running outside the monitor.
Although, a program can call the monitor's functions.
 Monitors were created to make synchronization problems less complicated.
 Monitors provide a high level of synchronization between processes.

Components of Monitor in an operating system

The monitor is made up of four primary parts:

1. Initialization: The code for initialization is included in the package, and we just need it once
when creating the monitors.
2. Private Data: It is a feature of the monitor in an operating system to make the data private. It
holds all of the monitor's secret data, which includes private functions that may only be
utilized within the monitor. As a result, private fields and functions are not visible outside of
the monitor.
3. Monitor Procedure: Procedures or functions that can be invoked from outside of the monitor
are known as monitor procedures.
4. Monitor Entry Queue: Another important component of the monitor is the Monitor Entry
Queue. It contains all of the threads, which are commonly referred to as procedures only.

Condition Variables

There are two sorts of operations we can perform on the monitor's condition variables:

1. Wait
2. Signal

Consider a condition variable (y) is declared in the monitor:

y.wait(): The activity/process that applies the wait operation on a condition variable will be
suspended, and the suspended process is located in the condition variable's block queue.

y.signal(): If an activity/process applies the signal action on the condition variable, then one of the
blocked activity/processes in the monitor is given a chance to execute
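Languages without built-in monitors can emulate one with a mutex plus condition variables. A C sketch of a monitor-style bounded counter (names are illustrative; pthread_cond_wait atomically releases the lock while waiting, behaving like y.wait()):

#include <pthread.h>

/* A monitor-style counter emulated with a mutex and a condition
   variable. Names are illustrative. */
pthread_mutex_t monitor_lock = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  not_zero     = PTHREAD_COND_INITIALIZER;
int count = 0;

void increment(void) {                  /* a "monitor procedure" */
    pthread_mutex_lock(&monitor_lock);  /* one thread inside at a time */
    count++;
    pthread_cond_signal(&not_zero);     /* like y.signal(): wake a waiter */
    pthread_mutex_unlock(&monitor_lock);
}

void decrement(void) {
    pthread_mutex_lock(&monitor_lock);
    while (count == 0)                  /* like y.wait(): sleep until signaled */
        pthread_cond_wait(&not_zero, &monitor_lock);
    count--;
    pthread_mutex_unlock(&monitor_lock);
}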

Advantages of Monitor in OS

 Monitors offer the benefit of making concurrent or parallel programming easier and less
error-prone than semaphore-based solutions.
 It helps in process synchronization in the operating system.
 Monitors have built-in mutual exclusion.
 Monitors are easier to set up than semaphores.
 Monitors may be able to correct for the timing faults that semaphores cause.

Disadvantages of Monitor in OS

 Monitors must be implemented with the programming language.


 Monitor increases the compiler's workload.
 The monitor requires the programmer to understand what operating system features are
available for controlling critical sections in the parallel procedures.

2.17 Synchronization Examples:

 Windows XP
 Linux

Windows XP Synchronization:

 Uses interrupt masks to protect access to global resources on uniprocessor systems

 Uses spinlocks on multiprocessor systems
 Also provides dispatcher objects, which may act as either mutexes or semaphores
 Dispatcher objects may also provide events – an event acts much like a condition variable

Linux Synchronization:

 Linux disables interrupts to implement short critical sections

Linux also provides:

– semaphores

– spin locks

