
Operating System BIT253CO

By: Er Suvash Chandra Gautam (ME Computer)


(System Development and Implementation Expert, NBCC-JICA)
December, 2023
Processes and Threads
Introduction
• A process is an instance of a program in execution. When we run a program, it does not execute directly; the system follows a series of steps to execute it, and this sequence of execution steps is known as a process.
• A process can create other processes to perform multiple tasks at a time; the created processes are known as child (or clone) processes, and the creating process is known as the parent process. Each process has its own memory space and does not share it with other processes. A process is an active entity. A typical process is laid out in memory as shown below.

02/25/2024 Process and Thread 3


Process
• Process memory is divided into four sections for efficient working:
• The Text section holds the compiled program code, read in from non-volatile storage when the program is launched.
• The Data section holds the global and static variables, allocated and initialized before main() executes.
• The Heap is used for dynamic memory allocation and is managed via calls to new, delete, malloc, free, etc.
• The Stack is used for local variables; space on the stack is reserved for local variables when they are declared.



Process
• A process is an 'active' entity as opposed to the program which is
considered to be a 'passive' entity. Attributes held by the process
include hardware state, memory, CPU, etc.





Process State
• NEW: A new process is being created.
• READY: A process is ready and waiting to be allocated to a processor.
• RUNNING: The program is being executed.
• WAITING: Waiting for some event to happen or occur.
• TERMINATED: Execution finished.



Features of Process

• Each time we create a process, a separate system call must be made to the OS for each process. On Unix-like systems, the fork() system call creates the new process.
• Each process exists within its own address (memory) space.
• Each process is independent and treated as an isolated entity by the OS.
• Processes need IPC (Inter-Process Communication) in order to communicate with each other.
• Explicit synchronization between processes is generally not required, since they do not share memory.
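The one-system-call-per-process idea can be sketched portably with Python's multiprocessing module (the helper names `child_task` and `spawn_children` are illustrative, not from the slides; on Unix this is implemented on top of fork()):

```python
from multiprocessing import Process, Queue
import os

def child_task(q):
    # Each child runs in its own address space with its own PID.
    q.put(os.getpid())

def spawn_children(n=2):
    """Create n child processes (one creation call each) and collect their PIDs."""
    q = Queue()
    children = [Process(target=child_task, args=(q,)) for _ in range(n)]
    for p in children:
        p.start()              # one process-creation call per child
    for p in children:
        p.join()               # the parent waits for each child
    return [q.get() for _ in range(n)]

if __name__ == "__main__":
    print(spawn_children(2))   # two distinct child PIDs
```

Because each child has its own PID and address space, the parent's PID never appears in the returned list.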



Process Model
• A process is a program under execution that consists of a number of
elements including, program code and a set of data. To execute a
program, a process has to be created for that program. Here the
process may or may not run but if it is in a condition of running then
that has to be maintained by the OS for appropriate progress of the
process to be gained.



Process Model
1. Running: The process is currently being executed. Assuming a single processor, at most one process can be in the running state at a time.
2. Ready: The process is prepared to execute when given the opportunity by the OS.
3. Blocked/Waiting: The process cannot continue executing until some event occurs, for example the completion of an input-output operation.
4. New: The process has been created but has not yet been admitted by the OS for execution. A new process is not loaded into main memory, but its process control block (PCB) has been created.
5. Exit/Terminate: The process or job has been released by the OS, either because it completed or because it was aborted due to some issue.
Possible State Transitions

1. Null -> New: A new process is created to execute a program.
2. New -> Ready: The system moves the process from the new state to the ready state, where it waits for execution. A system may limit the number of processes it admits, since too many concurrent processes can cause performance issues.
3. Ready -> Running: The OS selects exactly one process in the ready state and dispatches it to the CPU.
4. Running -> Exit: The system terminates a process when the process indicates that it has completed, or when it is aborted.



Possible State Transitions

1. Running -> Ready: This transition occurs when the running process has used up its maximum allowed time for uninterrupted execution (its time slice). An example is a background process that performs maintenance or other functions periodically.
2. Running -> Blocked: A process is put in the blocked state when it must wait for something it has requested. For example, it may request a resource that is not available at the time, wait for an I/O operation, or wait for another process to finish before it can continue.
3. Blocked -> Ready: A process moves from the blocked state to the ready state when the event it has been waiting for occurs.
4. Ready -> Exit: This transition exists only in some systems; for example, a parent may terminate a child process at any time.
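The legal transitions of the five-state model above can be captured in a small lookup table (a sketch; the state names follow these slides, and only the transitions listed above are allowed):

```python
# Allowed transitions in the five-state process model.
VALID_TRANSITIONS = {
    ("Null", "New"),
    ("New", "Ready"),
    ("Ready", "Running"),
    ("Running", "Exit"),
    ("Running", "Ready"),     # time slice expired
    ("Running", "Blocked"),   # waiting for an event / I/O
    ("Blocked", "Ready"),     # awaited event occurred
    ("Ready", "Exit"),        # e.g. parent terminates a child
}

def can_transition(src, dst):
    """Return True if the process model permits moving from src to dst."""
    return (src, dst) in VALID_TRANSITIONS
```

For example, a blocked process cannot be dispatched directly: it must first become ready when its event occurs.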
Process Control Block
• While creating a process, the operating system performs several operations. To identify processes, it assigns a process identification number (PID) to each process. Because the operating system supports multiprogramming, it needs to keep track of all processes. For this task, the process control block (PCB) is used to track each process's execution status. Each block contains information about the process state, program counter, stack pointer, status of opened files, scheduling information, etc.



Process Control Block
• All this information must be saved when the process switches from one state to another; on each such transition, the operating system updates the information in the process's PCB. A PCB contains information about the process, i.e. registers, quantum, priority, etc. The process table is an array of PCBs: logically, it contains a PCB for every current process in the system.



Process Control Block
1. Pointer: The stack pointer, which must be saved when the process switches from one state to another so that the current position of the process is retained.
2. Process state: Stores the current state of the process.
3. Process number: Every process is assigned a unique ID, known as the process ID or PID, which is stored here.
4. Program counter: Stores the address of the next instruction to be executed for the process.
5. Registers: The CPU registers, which include the accumulator, base and index registers, and general-purpose registers.
6. Memory limits: Information about the memory-management structures used by the operating system for this process. This may include page tables, segment tables, etc.
7. Open files list: The list of files opened by the process.
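The PCB fields above can be sketched as a simple record type (a Python dataclass used purely as illustration; the field names are hypothetical and do not correspond to any real kernel structure):

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int                        # 3. process number (unique ID)
    state: str = "New"              # 2. process state
    program_counter: int = 0        # 4. address of the next instruction
    stack_pointer: int = 0          # 1. saved/restored on every state switch
    registers: dict = field(default_factory=dict)   # 5. CPU registers
    memory_limits: tuple = (0, 0)   # 6. e.g. a base/limit pair
    open_files: list = field(default_factory=list)  # 7. open-files list

# The process table is then logically a mapping from PID to PCB:
process_table = {1: PCB(pid=1), 2: PCB(pid=2, state="Ready")}
```

On a context switch, the OS would fill in `program_counter`, `stack_pointer`, and `registers` for the outgoing process and restore them for the incoming one.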
Introduction to Thread
• A thread is a subset of a process and is also known as a lightweight process.
• A thread is often described as a 'process within a process'.
• A process can have more than one thread, and these threads are managed independently by the scheduler. All the threads within one process are interrelated: they share common information such as the data segment, code segment, and open files with their peer threads, but each thread has its own registers, stack, and program counter.



Features of Thread

• Threads share data, memory, resources, files, etc., with their peer
threads within a process.
• One system call is capable of creating more than one thread.
• Each thread has its own stack and registers.
• Threads can directly communicate with each other as they share the
same address space.
• Threads need to be synchronized in order to avoid unexpected
scenarios.
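Both points, the shared address space and the need for synchronization, can be seen in a short sketch (Python's threading module is used here only as an illustration; without the lock, concurrent increments could be lost):

```python
import threading

counter = 0                      # shared: all threads see the same variable
lock = threading.Lock()          # synchronization primitive

def worker(n):
    global counter
    for _ in range(n):
        with lock:               # avoid lost updates on the shared counter
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter is exactly 4 * 10_000 because the threads share one address space
# and the lock serializes the read-modify-write on it.
```

No IPC mechanism is needed here: the threads communicate simply by reading and writing the same variable, which is exactly what peer processes cannot do.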



How does thread work?

• When a process starts, the OS assigns memory and resources to it. Each thread within the process shares the memory and resources of that process only.
• Threads are mainly used to improve the processing of an application. On a single processor only one thread actually executes at a time, but fast context switching between threads gives the illusion that the threads are running in parallel.
• If a process has a single thread of execution, it is known as single-threaded; if multiple threads execute concurrently, it is known as multithreading.



Types of Thread
• 1. User Level Thread:
• As the name suggests, user-level threads are managed entirely in user space, and the kernel has no knowledge of them.
• They are fast and easy to create and manage.
• The kernel treats all these threads as a single process and handles them as one process only.
• User-level threads are implemented by user-level libraries, not by system calls.



Advantages of User-level threads
1. User-level threads are easier to implement than kernel-level threads.
2. User-level threads can be used on operating systems that do not support threads at the kernel level.
3. They are faster and more efficient.
4. Context-switch time is shorter than for kernel-level threads.
5. They do not require modifications to the operating system.
6. The user-level thread representation is very simple: the registers, PC, stack, and a small thread control block are stored in the address space of the process.
7. It is simple to create, switch, and synchronize threads without kernel intervention.
Process vs Thread

• Process: A process is an instance of a program that is being executed.
  Thread: A thread is a segment of a process (a lightweight process) that is managed independently by the scheduler.
• Process: Processes are independent of each other and hence do not share memory or other resources.
  Thread: Threads are interdependent and share memory.
• Process: Each process is treated as a separate process by the operating system.
  Thread: The operating system treats all the user-level threads of a process as a single process.
• Process: If one process gets blocked by the operating system, the other processes can continue execution.
  Thread: If any user-level thread gets blocked, all of its peer threads also get blocked, because the OS treats them as a single process.
• Process: Context switching between two processes takes more time, as processes are heavyweight compared to threads.
  Thread: Context switching between threads is fast because threads are very lightweight.
• Process: The data segment and code segment of each process are independent of other processes.
  Thread: Threads share the data segment and code segment with their peer threads.
• Process: The operating system takes more time to terminate a process.
  Thread: Threads can be terminated in very little time.
• Process: New process creation takes more time, as each new process acquires all its own resources.
  Thread: A thread needs less time for creation.


Types of Thread
• 2. Kernel-Level Thread:
• The kernel-level threads are handled by the Operating system and
managed by its kernel. These threads are slower than user-level
threads because context information is managed by the kernel. To
create and implement a kernel-level thread, we need to make a system
call.



Advantages of Kernel-level threads
1. The kernel is fully aware of all threads.
2. The scheduler may decide to give more CPU time to a process that has a large number of threads.
3. Kernel-level threads are good for applications that block frequently.



User-Level Threads vs Kernel-Level Threads

• Implemented by: User-level threads are implemented by a user library; kernel-level threads are implemented by the OS.
• Context switch time: Less for user-level threads; more for kernel-level threads.
• Multithreading: Multithreaded applications cannot employ multiprocessing with user-level threads; kernel-level threads can be scheduled on multiple processors.
• Implementation: User-level threads are easy to implement; kernel-level threads are complicated to implement.
• Blocking operation: If a user-level thread blocks, it blocks all other threads in the same process; if a kernel-level thread blocks, the other threads in the same process can continue.
• Recognition: The OS does not recognize user-level threads; kernel-level threads are recognized by the OS.
• Thread management: A user-level thread library includes the code for thread creation, data transfer, thread destruction, message passing, and thread scheduling; with kernel-level threads the application contains no thread-management code and simply uses an API to kernel mode.
• Hardware support: User-level threads need no hardware support; kernel-level threads require hardware support.
• Creation and management: User-level threads can be created and managed much faster; kernel-level threads take more time to create and manage.
• Examples: Java threads and POSIX threads are instances of user-level thread libraries; Windows and Solaris threads are instances of kernel-level threads.
• Operating system: Any OS can support user-level threads; kernel-level threads require specific OS support.


Components of Threads

1.Program counter
2.Register set
3.Stack space



Benefits of Threads

• Enhanced throughput of the system: When a process is split into many threads and each thread is treated as a job, the number of jobs completed per unit time increases, so the throughput of the system increases.
• Effective utilization of multiprocessor systems: When a process has more than one thread, the threads can be scheduled on more than one processor.
• Faster context switch: The context-switch time between threads is less than for processes; a process context switch means more overhead for the CPU.
• Responsiveness: When a process is split into several threads, the process can respond as soon as any one thread completes its part of the work.
Benefits of Threads
• Communication: Communication between multiple threads is simple because the threads share the same address space, whereas processes must use dedicated inter-process communication mechanisms to communicate with each other.
• Resource sharing: Resources such as code, data, and files can be shared among all threads within a process. Note: the stack and registers cannot be shared; each thread has its own stack and registers.



Process vs Program

• Process: A process is an instance of a computer program that is being executed.
  Program: A program is a collection of instructions that performs a specific task when executed by the computer.
• Process: A process has a shorter lifetime.
  Program: A program has a longer lifetime.
• Process: A process requires resources such as memory, CPU, and input-output devices.
  Program: A program is stored on disk and does not require any resources until it runs.
• Process: A process is a dynamic instance of code and data.
  Program: A program has static code and static data.
• Process: In short, a process is the running instance of the code.
  Program: In short, the program is the executable code.


Process Scheduling

• When there are two or more runnable processes, the operating system decides which one to run first; this decision is referred to as process scheduling.
• A scheduler makes this decision using some scheduling algorithm.
• Given below are the properties of a good scheduling algorithm:



• Response time should be minimum for the users.
• The number of jobs processed per hour should be maximum, i.e. a good scheduling algorithm should give maximum throughput.
• The utilization of the CPU should be as close to 100% as possible.
• Each process should get a fair share of the CPU.



Process scheduling (FCFS, SJF, RR, Priority, Real-time
scheduling)

• The purpose of a scheduling algorithm:
1. Maximum CPU utilization
2. Fair allocation of CPU
3. Maximum throughput
4. Minimum turnaround time
5. Minimum waiting time
6. Minimum response time



CPU Scheduling
• CPU scheduling is the process of determining which process will own the CPU for execution while other processes are on hold. The main task of CPU scheduling is to make sure that whenever the CPU becomes idle, the OS selects one of the processes available in the ready queue for execution. The selection is carried out by the CPU scheduler, which picks one of the processes in memory that are ready to run.
• Types of CPU Scheduling:
a) Preemptive Scheduling
b) Non-Preemptive Scheduling



CPU Scheduling
• CPU scheduling is the basis of multiprogrammed operating systems.
• By switching the CPU among processes, the operating system can make the computer more productive.
• In a single-processor system, only one process runs at a time.
• All other processes must wait until the CPU is free and can be rescheduled.



Scheduling Criteria
• CPU utilization: We want to keep the CPU as busy as possible at all times. Conceptually, CPU utilization can range from 0 to 100%. In a real system, it typically ranges from 40% (for a lightly loaded system) to 90% (for a heavily used system).
• Throughput: The number of processes completed per unit time. If the CPU is busy executing processes, then work is being done.
• Turnaround time: The interval from the time of submission of a process to the time of its completion.
• Waiting time: The CPU scheduling algorithm does not affect the amount of time during which a process executes or does I/O; it affects only the amount of time that a process spends waiting in the ready queue. Waiting time is the sum of the periods spent waiting in the ready queue.





Preemptive Scheduling

• In preemptive scheduling, tasks are usually assigned priorities. Sometimes it is necessary to run a higher-priority task before a lower-priority one, even if the lower-priority task is still running: the lower-priority task is suspended for some time and resumes when the higher-priority task finishes its execution.



Non-Preemptive Scheduling

• In this scheduling method, once the CPU has been allocated to a specific process, that process keeps the CPU until it releases it, either by switching context or by terminating. It is the only method that can be used across arbitrary hardware platforms, because it does not need special hardware (for example, a timer) the way preemptive scheduling does.
• When is scheduling preemptive or non-preemptive? Consider the circumstances under which a scheduling decision is made:
1. A process switches from the running state to the waiting state.
2. A process switches from the running state to the ready state.
3. A process switches from the waiting state to the ready state.
4. A process finishes its execution and terminates.
When is scheduling Preemptive or Non-Preemptive?

• If scheduling decisions are made only under conditions 1 and 4, the scheduling is non-preemptive.
• All other scheduling is preemptive.



Important CPU Scheduling Terminologies

• Burst time / execution time: The time required by a process to complete execution; also called running time.
• Arrival time: The time when a process enters the ready state.
• Finish time: The time when a process completes and exits the system.
• Multiprogramming: The number of programs that can be present in memory at the same time.
• Job: A program that runs without any user interaction.
• User: A program that involves user interaction.
• Process: A general term that covers both jobs and users.
• CPU/IO burst cycle: Characterizes process execution, which alternates between CPU activity and I/O activity. CPU bursts are usually shorter than I/O times.
Process Scheduling Algorithm
• There are six popular process scheduling algorithms, which we discuss in this chapter:
• First-Come, First-Served (FCFS) Scheduling
• Shortest-Job-Next (SJN) Scheduling
• Priority Scheduling
• Shortest Remaining Time
• Round Robin (RR) Scheduling
• Multiple-Level Queues Scheduling



First Come First Serve (FCFS)

• Jobs are executed on a first come, first served basis.
• It is a non-preemptive scheduling algorithm.
• It is the simplest CPU scheduling algorithm, easy to understand and implement.
• Its implementation is based on a FIFO queue.
• It performs poorly, as the average waiting time is high.
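The FIFO behaviour above can be sketched in a few lines (the process data in the example are hypothetical, chosen only for illustration; CT = completion time, TAT = turnaround time, WT = waiting time):

```python
def fcfs(procs):
    """procs: list of (pid, arrival, burst). Returns {pid: (ct, tat, wt)}."""
    t, out = 0, {}
    for pid, at, bt in sorted(procs, key=lambda p: p[1]):  # FIFO by arrival
        start = max(t, at)            # the CPU may sit idle until the job arrives
        ct = start + bt
        out[pid] = (ct, ct - at, ct - at - bt)
        t = ct
    return out

# Hypothetical example: P1 arrives first, so P2 and P3 must wait behind it.
res = fcfs([("P1", 0, 4), ("P2", 1, 3), ("P3", 2, 1)])
```

Here P3 needs only 1 unit of CPU but waits 5 units behind the earlier arrivals, which is exactly the long-average-wait (convoy) weakness of FCFS.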



Convoy Effect



FCFS
• The shaded box represents the idle time of CPU.



Shortest Job First (SJF) /Shortest Job Next



Shortest Job First (SJF)
• SJF (Shortest Job First) is a scheduling algorithm in which the process with the shortest execution time is selected for execution next. This scheduling method can be preemptive or non-preemptive. It significantly reduces the average waiting time for other processes awaiting execution.



Shortest job first
• Each job is associated with the time it needs to complete, as a unit of time.
• In this method, when the CPU is available, the process or job with the shortest completion time is executed first.
• It is commonly implemented with a non-preemptive policy.
• This algorithm is useful for batch-type processing, where waiting for jobs to complete is not critical.
• It improves throughput by executing shorter jobs first, which mostly have a shorter turnaround time.



Example for Shortest Job First

Process ID | Arrival Time | Burst Time | Completion Time | TAT = CT - AT | WT = TAT - BT
P1         | 0            | 8          | 8               | 8 - 0 = 8     | 8 - 8 = 0
P2         | 1            | 4          | 12              | 12 - 1 = 11   | 11 - 4 = 7
P3         | 2            | 9          | 32              | 32 - 2 = 30   | 30 - 9 = 21
P4         | 3            | 5          | 17              | 17 - 3 = 14   | 14 - 5 = 9
P5         | 4            | 6          | 23              | 23 - 4 = 19   | 19 - 6 = 13



• Mode: Non-preemptive (no interruption)
• Criteria: burst time
• Each process runs to completion.
• In the figure above, at time zero only one process, P1, is available in the ready queue.
• Total burst time: 8 + 4 + 9 + 5 + 6 = 32
• Turnaround Time = Completion Time - Arrival Time ............ (1)
• Waiting Time (WT) = TAT - Burst Time ........................ (2)



• Average Turnaround Time: (8 + 11 + 30 + 14 + 19)/5 = 82/5 = 16.4 ms
• Average Waiting Time: (0 + 7 + 21 + 9 + 13)/5 = 50/5 = 10 ms
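The averages above can be checked mechanically with a small non-preemptive SJF solver (a sketch; the process data are taken from the example, and ties on burst time are broken by lowest process ID as these slides state later):

```python
def sjf_nonpreemptive(procs):
    """procs: {pid: (arrival, burst)}. Returns {pid: (ct, tat, wt)}."""
    remaining = dict(procs)
    t, out = 0, {}
    while remaining:
        ready = [p for p, (at, bt) in remaining.items() if at <= t]
        if not ready:                      # CPU idle: jump to the next arrival
            t = min(at for at, bt in remaining.values())
            continue
        # pick the shortest burst; break ties by lowest process ID
        pid = min(ready, key=lambda p: (remaining[p][1], p))
        at, bt = remaining.pop(pid)
        t += bt                            # runs to completion (non-preemptive)
        out[pid] = (t, t - at, t - at - bt)
    return out

procs = {"P1": (0, 8), "P2": (1, 4), "P3": (2, 9), "P4": (3, 5), "P5": (4, 6)}
res = sjf_nonpreemptive(procs)
avg_tat = sum(v[1] for v in res.values()) / len(res)   # 16.4
avg_wt  = sum(v[2] for v in res.values()) / len(res)   # 10.0
```

Running this reproduces the completion times in the table (8, 12, 32, 17, 23) and the 16.4 ms / 10 ms averages.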
• Question: Calculate the average TAT and WT. [Class Work]

Process ID | Arrival Time | Burst Time | Completion Time | TAT = CT - AT | WT = TAT - BT
P1         | 1            | 21         | 22              | ?             | ?
P2         | 2            | 3          | 27              | ?             | ?
P3         | 3            | 6          | 33              | ?             | ?
P4         | 4            | 2          | 24              | ?             | ?


• Solution:
• Total TAT = 96; Average TAT = 96/4 = 24 ms
• Total Waiting Time = 64; Average WT = 64/4 = 16 ms
• Let us construct the Gantt chart:



SHORTEST JOB FIRST (SJF)
• Mode: Preemptive
• Criteria: burst time
• There can be an interruption: if a process is being executed by the CPU and another process with a lower remaining burst time arrives, the executing process is sent back to the ready state and the CPU is given to the shorter process. This is called preemption.
• Concept: the job with the lower (remaining) burst time is executed first.



Example 1

Process ID | Arrival Time | Burst Time | Completion Time | TAT = CT - AT | WT = TAT - BT
P1         | 0            | 8          | 20              | 20 - 0 = 20   | 20 - 8 = 12
P2         | 1            | 1          | 2               | 2 - 1 = 1     | 1 - 1 = 0
P3         | 2            | 3          | 5               | 5 - 2 = 3     | 3 - 3 = 0
P4         | 3            | 2          | 7               | 7 - 3 = 4     | 4 - 2 = 2
P5         | 4            | 6          | 13              | 13 - 4 = 9    | 9 - 6 = 3

(After P1 has run for 1 second, its remaining burst time is 8 - 1 = 7; similarly, the remaining burst times shrink as the other processes run, e.g. P2: 1 -> 0 and P3: 3 -> 2 -> 0.)


SHORTEST JOB FIRST (SJF)
• At time zero, only process P1 is available in the ready queue.
• After 1 ms, P2 arrives in the ready state.
• P2 is then allocated the CPU, because it has the lowest burst time (1 ms).
• Next, P3 is allocated the CPU, having the lowest remaining burst time (3 ms).
• If two processes have the same burst time, select the process with the lowest ID.
• What are the ATAT and AWT?
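The question above can be answered by simulating preemptive SJF (shortest remaining time first) one time unit at a time. The sketch below uses the arrival/burst data of Example 1 and the lowest-ID tie-break just stated:

```python
def srtf(procs):
    """Preemptive SJF: procs {pid: (arrival, burst)} -> {pid: (ct, tat, wt)}."""
    rem = {p: bt for p, (at, bt) in procs.items()}   # remaining burst times
    t, out = 0, {}
    while rem:
        ready = [p for p in rem if procs[p][0] <= t]
        if not ready:                # no one has arrived yet: advance the clock
            t += 1
            continue
        # shortest remaining time; ties go to the lowest process ID
        pid = min(ready, key=lambda p: (rem[p], p))
        rem[pid] -= 1                # run the chosen process for one time unit
        t += 1
        if rem[pid] == 0:
            del rem[pid]
            at, bt = procs[pid]
            out[pid] = (t, t - at, t - at - bt)
    return out

procs = {"P1": (0, 8), "P2": (1, 1), "P3": (2, 3), "P4": (3, 2), "P5": (4, 6)}
res = srtf(procs)
avg_tat = sum(v[1] for v in res.values()) / len(res)   # (20+1+3+4+9)/5 = 7.4
avg_wt  = sum(v[2] for v in res.values()) / len(res)   # (12+0+0+2+3)/5 = 3.4
```

The simulation reproduces the completion times of Example 1 (20, 2, 5, 7, 13), giving ATAT = 7.4 ms and AWT = 3.4 ms.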



Gantt Chart





PRIORITY SCHEDULING
• The lower the priority value, the higher the priority.



PRIORITY SCHEDULING
• Each process has its own priority.
• Of all the available processes, the highest-priority process gets the CPU.
• Two types:
• Static priority (does not change throughout the execution of the program)
• Dynamic priority (changes after some interval of time)
• Versions: non-preemptive and preemptive
• Remember: the smaller the number, the higher the priority.
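A minimal sketch of the non-preemptive version, using the smaller-number-is-higher-priority convention above (the process data are hypothetical, chosen only for illustration):

```python
def priority_nonpreemptive(procs):
    """procs: {pid: (arrival, burst, priority)}; lower number = higher priority.
    Returns the order in which processes get the CPU."""
    remaining = dict(procs)
    t, order = 0, []
    while remaining:
        ready = [p for p, (at, bt, pr) in remaining.items() if at <= t]
        if not ready:                      # CPU idle until the next arrival
            t = min(at for at, bt, pr in remaining.values())
            continue
        # smallest priority number wins; ties broken by lowest process ID
        pid = min(ready, key=lambda p: (remaining[p][2], p))
        at, bt, pr = remaining.pop(pid)
        t += bt                            # non-preemptive: runs to completion
        order.append(pid)
    return order

# All three arrive at t=0; P2 has the smallest priority number, so it runs first.
order = priority_nonpreemptive({"P1": (0, 4, 3), "P2": (0, 3, 1), "P3": (0, 2, 2)})
```

With static priorities as here, a low-priority process like P1 simply waits; a dynamic-priority variant would adjust the priority numbers over time (e.g. aging) to prevent starvation.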



Round Robin Algorithm (Mode: Preemptive)
• Used in time-sharing systems.
• Time sharing is the main emphasis of the algorithm; each step is carried out cyclically. The system defines a specific time slice, known as a time quantum.
• Similar to FCFS, but with a time quantum.
• Criteria: time quantum
• Processes move from the ready queue (in RAM) to the running state (on the CPU).
• Processes cycle through the ready queue in sequence via context switching.
• Two queues: the ready queue and the running queue.



Round Robin Algorithm
• Context Switching: Context switching occurs when the time quantum
expires or when a process voluntarily yields the CPU.
• The operating system saves the current state (context) of the running
process, including register values, program counter, and other relevant
information.
• The next process in the ready queue is selected, and its saved context
is loaded.
• Control is transferred to the newly loaded process, and it continues its
execution from where it was last interrupted.
• Assign a fixed time unit, known as a time quantum or time slice.
Summary of Round Robin Algorithm
• Round Robin scheduling is a preemptive scheduling algorithm.
• We assign a fixed time to all processes for execution; this time is called the time quantum.
• Each process executes only for its time quantum, then leaves the CPU and gives the other processes a chance to execute for their time quantum.
• Context-switching mechanisms store the states of the preempted processes.



Formula
• The formula of Round robin Turn Around Time =
Completion Time – Arrival Time
The formula of Round robin Waiting Time(W.T): Time
Difference between the turnaround and the burst
time.
The formula of Round robin Waiting Time = Turn
Around Time – Burst Time



Advantage of Round-robin Scheduling
1.Round-robin Scheduling avoids starvation or convoy effect.
2.In Round-robin Scheduling , all the jobs get a fair and efficient allocation of
CPU.
3.Round-robin Scheduling don’t care about priority of the processes.
4.Round-robin Scheduling gives the best performance in terms of average
response time.
5.We can assume the worst case response time for the process, If we know
the total number of processes on the running queue.
6.Round-robin Scheduling is easily implementable on the system because it
does not depend upon burst time.
7.Round-robin Scheduling promotes Context switching to save the states of
the preempted processes.
Disadvantages of Round-robin Scheduling
1.Round-robin Scheduling utilize more time on context
switching which is not good in some cases.
2.The processor output will be reduced in Round-robin
Scheduling, If slicing time of OS is low.
3.If the time quantum is low, then it increases the context
switching time in Round-robin Scheduling .
4.Round-robin Scheduling can leads to decreases the
comprehension
5.In Round-robin Scheduling , it’s a difficult task to decide a
correct time quantum to increase the efficiency and speed.
6.In Round-robin Scheduling , We can’t set priorities for the
processes.
Practice Session
• Look The Videos:
https://www.youtube.com/watch?v=6PEyXwdxeIc



Round Robin Algorithm (Numerical)

PID | AT | BT | CT | TAT | WT | RT
P1  | 0  | 5  | ?  | ?   | ?  | ?
P2  | 1  | 6  | ?  | ?   | ?  | ?
P3  | 2  | 3  | ?  | ?   | ?  | ?
P4  | 3  | 1  | ?  | ?   | ?  | ?
P5  | 4  | 5  | ?  | ?   | ?  | ?
P6  | 6  | 4  | ?  | ?   | ?  | ?

If the CPU scheduling policy is Round Robin with time quantum = 2 units, calculate the average waiting time (AWT) and the average TAT.



• In the following example, there are six processes, named P1 through P6. Their arrival times and burst times are given in the table. The time quantum of the system is 4 units.
• According to the algorithm, we have to maintain the ready queue and the Gantt chart; both data structures change after every scheduling decision.
• Ready Queue:
• Initially, at time 0, process P1 arrives and is scheduled for a time slice of 4 units. Hence the ready queue initially holds only process P1, with a CPU burst time of 5 units.



• GANTT chart
• P1 is executed for 4 units first.

• Ready Queue
• During the execution of P1, four more processes, P2, P3, P4 and P5, arrive in the ready queue. P1 has not completed yet; it needs another 1 unit of time, so it is added back to the ready queue.



• GANTT chart
• After P1, P2 will be executed for 4 units of time which is shown in the
Gantt chart.

• Ready Queue
• During the execution of P2, one more process, P6, arrives in the ready queue. Since P2 has not completed yet, P2 is also added back to the ready queue with a remaining burst time of 2 units.



GANTT chart
• After P1 and P2, P3 is executed for 3 units of time, since its CPU burst time is only 3 units.

Ready Queue
• Since P3 has completed, it is terminated and not added back to the ready queue. The next process to be executed is P4.



GANTT chart
• After P1, P2 and P3, P4 is executed. Its burst time is only 1 unit, which is less than the time quantum, so it runs to completion.

Ready Queue
• The next process in the ready queue is P5, with 5 units of burst time. Since P4 has completed, it is not added back to the queue.



GANTT chart
P5 is executed for the whole time slice, because its remaining burst time (5 units) is higher than the time slice.

Ready Queue
P5 has not completed yet; it is added back to the queue with a remaining burst time of 1 unit.



Gantt chart
The process P1 is given the next turn to complete its execution. Since it requires only 1 unit of burst time, it completes.

Ready Queue
P1 has completed and is not added back to the ready queue. The next process, P6, requires only 4 units of burst time and will be executed next.



Gantt chart
• P6 will be executed for 4 units of time, till completion.

Ready Queue
Since P6 has completed, it is not added back to the queue. Only two processes remain in the ready queue. The next process, P2, requires only 2 units of time.



Gantt chart
• P2 is executed again; since it requires only 2 units of time, it completes.

Ready Queue
• Now, the only process left in the queue is P5, which requires 1 unit of burst time. Since the time slice is 4 units, it will complete in the next burst.



Gantt chart
• P5 is executed till completion.

• The completion time, turnaround time and waiting time are calculated as shown in the table below.
• As we know:
1. TURNAROUND TIME = COMPLETION TIME - ARRIVAL TIME
2. WAITING TIME = TURNAROUND TIME - BURST TIME


Process ID | Arrival Time | Burst Time | Completion Time | Turnaround Time | Waiting Time
    1      |      0       |     5      |       17        |       17        |      12
    2      |      1       |     6      |       23        |       22        |      16
    3      |      2       |     3      |       11        |        9        |       6
    4      |      3       |     1      |       12        |        9        |       8
    5      |      4       |     5      |       24        |       20        |      15
    6      |      6       |     4      |       21        |       15        |      11

Avg Waiting Time = (12+16+6+8+15+11)/6 = 68/6 ≈ 11.33 units
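The walkthrough above can be reproduced with a short simulation. The sketch below (in Python; the function and variable names are illustrative, not from the slides) maintains the ready queue exactly as described: processes that arrive during a time slice join the queue before the preempted process is added back.

```python
from collections import deque

def round_robin(processes, quantum):
    """Simulate Round Robin scheduling.

    processes: list of (pid, arrival, burst) tuples, sorted by arrival time.
    Returns {pid: completion_time}.
    """
    remaining = {pid: burst for pid, _, burst in processes}
    completion, ready, time, i = {}, deque(), 0, 0

    while len(completion) < len(processes):
        # Admit every process that has arrived by now.
        while i < len(processes) and processes[i][1] <= time:
            ready.append(processes[i][0])
            i += 1
        if not ready:                       # CPU idle: jump to next arrival
            time = processes[i][1]
            continue
        pid = ready.popleft()
        run = min(quantum, remaining[pid])  # run for one time slice (or less)
        time += run
        remaining[pid] -= run
        # Arrivals during the slice join the queue BEFORE the preempted
        # process is re-queued, exactly as in the walkthrough above.
        while i < len(processes) and processes[i][1] <= time:
            ready.append(processes[i][0])
            i += 1
        if remaining[pid] == 0:
            completion[pid] = time          # finished: do not re-queue
        else:
            ready.append(pid)               # preempted: back of the queue

    return completion

procs = [(1, 0, 5), (2, 1, 6), (3, 2, 3), (4, 3, 1), (5, 4, 5), (6, 6, 4)]
ct = round_robin(procs, quantum=4)
print(ct)  # P3 finishes at 11, P4 at 12, P1 at 17, P6 at 21, P2 at 23, P5 at 24
```

The completion times produced by the simulation match the table above, and the waiting times derived from them sum to 68, giving the average of 68/6 ≈ 11.33 units.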



Round Robin Algorithm
[Figures: additional Round Robin scheduling illustrations]
Inter-process communication
• "Inter-process communication is used for exchanging useful information between numerous threads in one or more processes (or programs)."
• The main goal of this mechanism is to provide communication between several processes. In short, inter-process communication allows one process to let another process know that some event has occurred.



Inter-process communication
• Cooperating processes or threads need inter-process communication for:
• Data transfer
• Data sharing
• Event notification
• Resource sharing
• Process control
• Processes executing concurrently in the operating system may be either independent processes or cooperating processes.



Inter-process communication
• Independent processes: they cannot affect or be affected by the other processes executing in the system.
• Cooperating processes: they can affect or be affected by the other processes executing in the system.
• A process that shares data with other processes is called a cooperating process.



APPROACHES TO IPC
• Pipes
• Shared Memory
• Message Queue
• Direct Communication
• Indirect communication
• Message Passing
• FIFO



Pipe:-
• A pipe is a type of data channel that is unidirectional in nature: data in this type of channel can move in only a single direction at a time.
• Still, one can use two channels of this type, so that two processes can both send and receive data. Typically, a pipe uses the standard methods for input and output.
• Pipes are used in all types of POSIX systems and in different versions of the Windows operating system as well.
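As a minimal sketch of unidirectional pipe use, the Python snippet below creates a pipe with `os.pipe()` and moves bytes through it in one direction. In a real program the read and write ends would typically be split between a parent and a child process; here both ends stay in one process to keep the example self-contained.

```python
import os

# os.pipe() returns two file descriptors: r (read end) and w (write end).
# Data flows in one direction only: from w to r.
r, w = os.pipe()

os.write(w, b"hello through the pipe")
os.close(w)                 # closing the write end signals end-of-data

data = os.read(r, 1024)     # read back what was written
os.close(r)
print(data.decode())        # hello through the pipe
```

For two-way communication, two such pipes are used, one per direction, which is exactly the "two channels" arrangement described above.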



Shared Memory:-
• Shared memory is a region of memory that can be used or accessed by multiple processes simultaneously.
• It is primarily used so that the processes can communicate with each other.
• Shared memory is supported by almost all POSIX (Portable Operating System Interface) and Windows operating systems.



Message Queue:-
• In general, several different processes are allowed to read and write data to the message queue.
• In the message queue, messages are stored in the queue until their recipients retrieve them. In short, the message queue is very helpful in inter-process communication and is used by all operating systems.
• To understand the concept of the message queue and shared memory in more detail, let's take a look at the diagram given below:



[Figure: message queue and shared memory communication models]
Message Passing:-
• Message passing is a mechanism that allows processes to synchronize and communicate with each other. By using message passing, processes can communicate without resorting to shared variables.
• Usually, the inter-process communication mechanism provides two operations, which are as follows:
• send (message)
• receive (message)



Message Passing
• Direct Communication: in this type of communication, a link is created or established between the two communicating processes, and for every pair of communicating processes only one link can exist.
• Indirect Communication: indirect communication can only be established when processes share a common mailbox; each pair of processes may share multiple communication links. These shared links can be unidirectional or bi-directional.



ASSIGNMENT NO: 2
1. Define an operating system. Explain its role and importance in computer systems.
2. Describe the process model in operating systems. What are the different states a process can be in? Explain the transitions between these states.
3. Explain the concept of the Process Control Block (PCB). What information does it contain, and how is it used during context switching?
4. Define a thread. What is the difference between a process and a thread? Explain the advantages of using threads in a multitasking environment.
5. Compare and contrast kernel-level threads and user-level threads. What are the advantages and disadvantages of each approach?
6. Discuss the concepts of multiprogramming and parallel processing. How do they enhance system performance and resource utilization?
7. Define critical sections. Explain the challenges of race conditions in concurrent programming. How can mutual exclusion be achieved using busy waiting?
8. Explain the concept of semaphores in operating systems. How do they provide synchronization between processes?


9. Define monitors and describe how they address the issues of semaphores in concurrent programming.
10. Compare and contrast preemptive and non-preemptive scheduling algorithms. Discuss the advantages and disadvantages of each approach in a multitasking environment.
11. Define First-Come-First-Serve (FCFS) scheduling. What are its strengths and weaknesses? Provide an example to illustrate.


12. Explain Shortest Job First (SJF) scheduling. How does it minimize waiting time, and what challenges may arise in its implementation?
13. Discuss Round Robin (RR) scheduling. What is the significance of the time quantum, and how does it affect system performance?
14. Define Priority scheduling. How does it work, and what considerations should be taken into account when assigning priorities to processes?
15. Briefly explain Real-Time scheduling. What distinguishes it from other scheduling algorithms, and what are the key requirements for real-time systems?
Critical Section
• A critical section is the part of a program that accesses shared resources. The resource may be any resource in a computer, such as a memory location, a data structure, the CPU, or an I/O device.
• The critical section cannot be executed by more than one process at the same time; the operating system faces difficulty in deciding which processes to allow into the critical section and which to keep out.
• The critical section problem is about designing a set of protocols that ensure a race condition among the processes never arises.



Critical Section
• The critical section is a code segment where shared variables can be accessed. Atomic action is required in a critical section, i.e. only one process can execute in its critical section at a time; all other processes have to wait to execute in their critical sections.

do {
    Entry Section
    Critical Section
    Exit Section
    Remainder Section
} while (TRUE);

Figure: General Structure of a Typical Process



Critical Section
• Each process must request permission to enter its critical section.
• The section of code implementing this request is the entry section.
• The critical section is followed by an exit section.
• The remaining code is the remainder section.
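In a thread library, the entry and exit sections are typically realized by acquiring and releasing a lock. A sketch in Python (the `worker` function and the iteration counts are illustrative):

```python
import threading

counter = 0                     # shared variable
lock = threading.Lock()

def worker(iterations):
    global counter
    for _ in range(iterations):
        lock.acquire()          # entry section: request permission
        counter += 1            # critical section: access the shared variable
        lock.release()          # exit section: let another thread enter
        # remainder section: code that does not touch shared data

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000: with the lock in place, no update is lost
```

Because at most one thread holds the lock, the increment is effectively atomic, which is exactly what the general structure above requires.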



Requirements of Synchronization Mechanisms (the solution to the critical section problem)

1. Primary:
a. Mutual Exclusion: the solution must provide mutual exclusion. By mutual exclusion, we mean that if one process is executing inside its critical section, then no other process may enter the critical section.
b. Progress: progress means that if a process does not need to enter the critical section, it must not stop other processes from getting into the critical section. The job of progress is to ensure that when processes want to enter the critical section, one of them is selected to execute in it, so that some work is always being done by the processor. This decision cannot be "postponed indefinitely"; in other words, it should take a limited amount of time to select which process is allowed to enter the critical section. If the decision cannot be taken in finite time, it leads to a deadlock.



Progress
[Figures: illustrations of the progress requirement]




Requirements of Synchronization Mechanisms
Secondary:
a) Bounded Waiting: there must be a bound on the time each process waits to get into the critical section; no process should wait endlessly to enter it.
b) Architectural Neutrality: the mechanism must be architecture-neutral. It means that if the solution works fine on one architecture, it should also run on the other ones as well.



Semaphore
• A semaphore, proposed by Edsger Dijkstra, is a technique to manage concurrent processes by using a simple integer value, which is known as a semaphore.
• A semaphore is simply a variable that is non-negative and shared between threads. This variable is used to solve the critical section problem and to achieve process synchronization in a multiprocessing environment.
• A semaphore S is an integer variable that, apart from initialization, is accessed only through two standard atomic operations: wait( ) and signal( ).



Semaphore
• wait( ) is also written P (from the Dutch word proberen, which means "to test").
• signal( ) is also written V (from the Dutch word verhogen, which means "to increment").
• A semaphore is a synchronization tool that does not require busy waiting.
• The least value of a semaphore is zero (0); the maximum value of a semaphore can be anything.



Semaphore Operations
• A semaphore usually has two operations, which together determine the value of the semaphore.
• The two semaphore operations are:
a) wait( )
b) signal( )



wait( ) Operation
• The wait operation works on the basis of the semaphore (or mutex) value.
• If the semaphore value is greater than zero, the process can enter the critical section area.

P(Semaphore S) {
    while (S <= 0)
        ;   // no operation (busy wait)
    S--;
}



wait( ) Operation
• A process is not allowed in if the value of the semaphore is less than or equal to zero. The wait operation decrements the value of the semaphore.
• The wait operation decides whether a process enters the critical section or must wait. The wait operation goes by many different names:
• the P operation is also called the wait, sleep, decrease or down operation.



signal( ) Operation
• The signal operation is used to update the value of the semaphore. The semaphore value is incremented when a process leaves the critical section, so that waiting processes are ready to enter.
• The signal operation is also known as the wake-up operation, up operation, increase operation, or V function (the most important alias for the signal operation).

V(Semaphore S) {
    S++;
}



signal( ) Operation
• All modifications to the integer value of the semaphore in the wait( ) and signal( ) operations must be executed indivisibly.
• That is, when one process modifies the semaphore value, no other process can simultaneously modify that same semaphore value.
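One way to make wait( ) and signal( ) both indivisible and free of busy waiting is to implement them with a condition variable, so a blocked process sleeps instead of spinning. A sketch in Python (the class name is illustrative; Python's standard library already provides a ready-made `threading.Semaphore`):

```python
import threading

class SimpleSemaphore:
    def __init__(self, value=1):
        self._value = value
        self._cond = threading.Condition()  # gives both atomicity and blocking

    def wait(self):          # P / down: executed indivisibly under the lock
        with self._cond:
            while self._value <= 0:
                self._cond.wait()           # sleep; no busy waiting
            self._value -= 1

    def signal(self):        # V / up: executed indivisibly under the lock
        with self._cond:
            self._value += 1
            self._cond.notify()             # wake one waiting process, if any

s = SimpleSemaphore(1)
s.wait()       # enter the critical section (value goes 1 -> 0)
s.signal()     # leave the critical section (value goes 0 -> 1)
```

The condition variable's internal lock guarantees that no two threads modify the semaphore value at the same time, which is exactly the indivisibility requirement stated above.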



Types of Semaphores

1. Binary Semaphore [0 and 1 only]
• There are only two possible values of the semaphore in the binary semaphore concept: 1 and 0.
• If the value of the binary semaphore is 1, a process can enter the critical section area. If the value of the binary semaphore is 0, the process cannot enter the critical section area.

2. Counting Semaphore (-∞ to ∞)
• In the counting semaphore concept there are two sets of semaphore values: values greater than or equal to one, and the value zero.
• If the value of the counting semaphore is greater than or equal to 1, a process can enter the critical section area. If the value of the counting semaphore is 0, the process cannot enter the critical section area.
• Its value can range over an unrestricted domain. It is used to control access to a resource that has multiple instances.
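A sketch of a counting semaphore guarding a resource with three identical instances, using Python's built-in `threading.Semaphore` (the `active`/`peak` bookkeeping variables are only there to observe the behaviour and are not part of the mechanism):

```python
import threading
import time

pool = threading.Semaphore(3)   # a resource with 3 identical instances
active = 0                      # threads currently holding an instance
peak = 0                        # highest number of simultaneous holders seen
meter = threading.Lock()        # protects the bookkeeping counters

def use_resource():
    global active, peak
    with pool:                  # wait(): blocks once all 3 instances are taken
        with meter:
            active += 1
            peak = max(peak, active)
        time.sleep(0.01)        # pretend to use the resource
        with meter:
            active -= 1
    # leaving the outer with-block performs signal(), freeing one instance

threads = [threading.Thread(target=use_resource) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)  # never exceeds 3
```

Even with ten competing threads, the semaphore admits at most three at a time; a binary semaphore is simply the special case initialized to 1.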



Race Condition
• A race condition is a situation that may occur inside a critical section. It happens when the result of multiple threads executing in the critical section differs according to the order in which the threads execute.
• Race conditions in critical sections can be avoided if the critical section is treated as an atomic instruction. Proper thread synchronization using locks or atomic variables can also prevent race conditions.
• A race condition occurs when two or more processes execute at the same time, are not scheduled in the proper sequence, and do not execute their critical sections correctly, which causes data inconsistency and data loss.
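The classic lost-update race can be seen by hand-simulating one unlucky interleaving of two processes that each read, increment, and write back a shared balance (the values here are illustrative):

```python
balance = 100   # shared variable

# Process A and process B each want to deposit 50.
# Unlucky interleaving: a context switch occurs after both have read.
a_read = balance           # A reads 100
b_read = balance           # switch to B: B also reads 100
balance = a_read + 50      # A writes back 150
balance = b_read + 50      # B overwrites with 150: A's deposit is lost

print(balance)             # 150, not the expected 200
```

With proper synchronization (a lock or semaphore held across the whole read-modify-write), B would be forced to read only after A's write, read 150, and write 200, so no update would be lost.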



Monitor

