
Introduction to Operating Systems - 2
Process Management
PROF. K. ADISESHA
Process Management
Process Concept

Inter process Communication

Threads

CPU Scheduling Algorithm

Process Synchronization
Introduction

Introduction to Process:
A process is basically a program in execution. The execution of a process must
progress in a sequential fashion.
➢ When a program is loaded into main memory, it becomes a process.
➢ A process in memory can be divided into four sections:
❖ Stack: contains temporary data such as function parameters, return
addresses, and local variables.
❖ Heap: memory dynamically allocated to the process during its run time.
❖ Text: the compiled program code; the current activity is represented by the
value of the program counter and the contents of the processor's registers.
❖ Data: contains the global and static variables.
Prof. K. Adisesha (Ph. D)
Introduction

Process Life Cycle:


When a process executes, it passes through different states. These states may differ
between operating systems, and their names are also not standardized.
➢ A process can be in one of the following five states at a time.
❖ Start
❖ Ready
❖ Running
❖ Waiting
❖ Terminated or Exit
Introduction

Process Life Cycle:


➢ Start: This is the initial state when a process is first started/created.
➢ Ready: The process is waiting to be assigned to a processor. Ready processes are
waiting to have the processor allocated to them by the OS so that they can run.
➢ Running: Once the process has been assigned to a processor by the OS scheduler, the
process state is set to running and the processor executes its instructions.
➢ Waiting: Process moves into the waiting state if it needs to wait for a resource, such as
waiting for user input, or waiting for a file to become available.
➢ Terminated or Exit: Once the process finishes its execution, or it is terminated by the
operating system, it is moved to the terminated state where it waits to be removed from
main memory.
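The five states above form a small state machine. As a minimal sketch (state names follow the slides; real kernels use richer, OS-specific state sets), the legal transitions can be written as a table:

```python
# The five-state process life cycle as a transition table.
# "start" is admission, "exit" is the terminal state.
TRANSITIONS = {
    "start":   {"ready"},                      # admitted by the OS
    "ready":   {"running"},                    # dispatched by the scheduler
    "running": {"ready", "waiting", "exit"},   # preempted, blocked, or finished
    "waiting": {"ready"},                      # I/O or event completed
    "exit":    set(),                          # nothing follows termination
}

def can_move(src, dst):
    """Return True if a process may move directly from src to dst."""
    return dst in TRANSITIONS.get(src, set())
```

Note that a waiting process never goes straight back to running: it must first become ready and be dispatched again by the scheduler.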
Process Control Block

Process Control Block (PCB):


A Process Control Block is a data structure maintained by the Operating System for
every process.
➢ The PCB is identified by an integer process ID (PID).
➢ A PCB keeps all the information needed to keep track of
a process.
➢ The PCB is maintained for a process throughout its
lifetime, and is deleted once the process terminates.

Process Control Block

Process Control Block (PCB):


A Process Control Block is a data structure maintained by the Operating System for
every process.
➢ Information associated with each process
❖ Process state
❖ Program counter
❖ CPU registers
❖ CPU scheduling information
❖ Memory-management information
❖ Accounting information
❖ I/O status information
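The fields above can be mirrored in a toy PCB structure. The field names below are illustrative only; real PCB layouts are kernel-specific:

```python
from dataclasses import dataclass, field

# A toy Process Control Block holding the information listed above.
@dataclass
class PCB:
    pid: int                        # integer process ID
    state: str = "start"            # process state
    program_counter: int = 0        # address of the next instruction
    registers: dict = field(default_factory=dict)   # saved CPU registers
    priority: int = 0               # CPU-scheduling information
    memory_limits: tuple = (0, 0)   # memory-management info (base, limit)
    cpu_time_used: float = 0.0      # accounting information
    open_files: list = field(default_factory=list)  # I/O status information

pcb = PCB(pid=42)
pcb.state = "ready"   # the OS updates the PCB as the process changes state
```

Because the PCB holds everything needed to resume a process, saving one PCB and loading another is exactly what a context switch does.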
Process Control Block

CPU Switch From Process to Process:
(figure: the CPU switches between two processes by saving state into one PCB and reloading the saved state from the other)
Process Communication

Context Switch:
When CPU switches to another process, the system must save the state of the old
process and load the saved state for the new process via a context switch.
➢ Context of a process represented in the PCB
➢ Context-switch time is overhead; the system does no useful work while switching
❖ The more complex the OS and the PCB -> longer the context switch
➢ Time dependent on hardware support
❖ Some hardware provides multiple sets of registers per CPU -> multiple contexts
loaded at once

Process Communication

Process Creation:
A parent process creates child processes, which in turn create other processes,
forming a tree of processes.
➢ Generally, a process is identified and managed via a process identifier (pid).
➢ Resource-sharing options:
❖ Parent and children share all resources
❖ Children share a subset of the parent’s resources
❖ Parent and child share no resources
➢ Execution
❖ Parent and children execute concurrently
❖ Parent waits until children terminate
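The "parent waits until children terminate" pattern can be sketched with the POSIX fork/wait calls (POSIX-only; the exit code 7 is an arbitrary illustration):

```python
import os

# A POSIX-only sketch of process creation: the parent forks a child,
# then waits until the child terminates and collects its exit status.
def run_child_and_wait():
    pid = os.fork()                 # child sees 0, parent sees the child's pid
    if pid == 0:
        # child: do some work, then terminate
        os._exit(7)
    # parent: block until the child terminates
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)   # exit code reported back to the parent
```

The returned value is the output the child passed to the parent via wait, as described above.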
Process Communication

Process Termination:
Process executes last statement and asks the operating system to delete it (exit).
➢ Output data from child to parent (via wait)
➢ Process’ resources are deallocated by operating system
➢ Parent may terminate execution of children processes (abort)
❖ Child has exceeded allocated resources
❖ Task assigned to child is no longer required
❖ If parent is exiting
▪ Some operating systems do not allow child to continue if its parent terminates
▪ All children terminated - cascading termination

Process Communication

Inter process Communication (IPC):


Processes within a system may be independent or cooperating. A cooperating process
can affect or be affected by other processes, including by sharing data.
➢ Reasons for cooperating processes:
❖ Information sharing
❖ Computation speedup
❖ Modularity
❖ Convenience
➢ Cooperating processes need interprocess communication (IPC)
➢ Two models of IPC
❖ Shared memory
❖ Message passing
Process Communication

Inter process Communication (IPC):


Inter process communication is a mechanism which allows processes to communicate
with each other and synchronize their actions.
➢ The communication between these processes can be seen as a method of co-operation
between them.
➢ Some of the methods to provide IPC:
❖ Message Queue.
❖ Shared Memory.
❖ Signal.
❖ Shared Files and Pipe
❖ Socket
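Of the mechanisms above, the pipe is the simplest to sketch: a POSIX pipe carries bytes from a write end to a read end. For brevity both ends live in one process here; normally the pipe is created before fork() so that parent and child share it:

```python
import os

# A minimal sketch of pipe-based IPC using the POSIX pipe() call.
def pipe_roundtrip(message: bytes) -> bytes:
    r, w = os.pipe()        # r: read end, w: write end
    os.write(w, message)    # one process would write here...
    os.close(w)             # closing the write end signals end-of-data
    data = os.read(r, 4096) # ...and the other would read here
    os.close(r)
    return data
```

Message queues, shared memory, signals, and sockets follow the same idea of an OS-mediated channel, each with different semantics.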
Threading
Thread:
A thread is a single sequential flow of execution within a process; it is also known
as a thread of execution or thread of control.
➢ Each thread of the same process has its own program counter and its own stack of
activation records, while sharing the process's address space.
➢ A thread is often referred to as a lightweight process.
➢ Need for threads:
❖ It takes far less time to create a new thread in an existing process than to create
a new process.
❖ Threads share common data, so they do not need interprocess communication to
exchange it.
❖ Context switching is faster between threads than between processes.
❖ It takes less time to terminate a thread than a process.
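The "shared data, no IPC needed" point can be shown in a few lines: worker threads write results straight into memory the main thread also sees (each thread writes a distinct key, so no lock is needed here):

```python
import threading

# Threads of one process share its data directly: results computed by
# worker threads land in a dict visible to the main thread, with no IPC.
results = {}

def square(i):
    results[i] = i * i   # writes to memory the main thread also sees

threads = [threading.Thread(target=square, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# results now holds {0: 0, 1: 1, 2: 4, 3: 9}
```

Two cooperating processes would need a pipe, socket, or shared-memory segment to achieve the same hand-off.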
Threading

Thread:
Threads are implemented in following two ways :
➢ User Level Threads − threads managed in user space.
❖ The operating system is not aware of user-level threads.
❖ User-level threads are easy to implement, typically via a user-level thread library.
❖ If a user-level thread performs a blocking operation, the whole process is
blocked.
➢ Kernel Level Threads − threads managed directly by the operating system.
❖ The kernel keeps a thread control block for each thread, in addition to a
process control block for each process.
❖ Kernel-level threads are implemented by the operating system.
❖ The kernel knows about all the threads and manages them.
Threading

Multithreading Models:
Some operating systems provide a combined user-level thread and kernel-level thread
facility.
➢ In a combined system, multiple threads within the same application can run in parallel on
multiple processors and a blocking system call need not block the entire process.
➢ The idea is to achieve parallelism by dividing a process into multiple threads.
➢ For example, in a browser, multiple tabs can be different threads.
➢ There are three multithreading models:
❖ Many to many relationship
❖ Many to one relationship
❖ One to one relationship
Threading
Thread:
Difference between User-Level & Kernel-Level Thread.
❖ User-level threads are faster to create and manage; kernel-level threads are slower
to create and manage.
❖ User-level threads are implemented by a thread library at the user level; kernel
threads are created with operating-system support.
❖ User-level threads are generic and can run on any operating system; kernel-level
threads are specific to the operating system.
❖ With user-level threads, multithreaded applications cannot take advantage of
multiprocessing; kernel routines themselves can be multithreaded.
Threading

Advantages of Thread:
The various advantages of using Thread are:
➢ Threads minimize the context switching time.
➢ Use of threads provides concurrency within a process.
➢ Efficient communication.
➢ It is more economical to create and context-switch threads than processes.
➢ Threads allow utilization of multiprocessor architectures to a greater scale and
efficiency.

Swapping

Swapping:
Swapping is a mechanism in which a process can be swapped temporarily out of main
memory to secondary storage and make that memory available to other processes.
➢ Swapping is also known as a technique for memory compaction.
➢ The total time taken by the swapping process includes the time
to move the entire process to secondary storage and the time
to copy it back into memory.

Process Scheduling

Process Scheduling:
Maximize CPU use, quickly switch processes onto CPU for time sharing.
➢ Process scheduler selects among available processes for next execution on CPU
➢ Maintains scheduling queues of processes
❖ Job queue – set of all processes in the system
❖ Ready queue – set of all processes residing in main memory, ready and waiting to
execute
❖ Device queues – set of processes waiting for an I/O device
➢ Processes migrate among the various queues

Process Scheduling

Process Scheduling:
The process scheduling is the activity of the process manager that handles the removal
of the running process from the CPU on the basis of a particular strategy.
➢ Process scheduling is an essential part of multiprogramming operating systems; it is
used to select among available processes for next execution on the CPU.
➢ The OS maintains all PCBs in Process Scheduling Queues.
❖ Job queue: This queue keeps all the processes in the
system.
❖ Ready queue: This queue keeps a set of all processes
residing in main memory, ready and waiting to execute.
❖ Device queues: The processes which are blocked due to
unavailability of an I/O device constitute this queue
Process Scheduling

Representation of Process Scheduling:


Processes can be described as either:
➢ I/O-bound process – spends more time
doing I/O than computations, many short
CPU bursts
➢ CPU-bound process – spends more time
doing computations; few very long CPU
bursts

Process Scheduling
Schedulers:
Schedulers are special system software which handle process scheduling in various ways.
➢ The main task is to select the jobs to be submitted into the system and to decide which process to run.
➢ Schedulers are of three types:
❖ Long-Term Scheduler:
▪ It is also called a job scheduler.
▪ A long-term scheduler determines which programs are admitted to the system for processing.
❖ Short-Term Scheduler:
▪ It is also called the CPU scheduler.
▪ Its main objective is to increase system performance in accordance with the chosen set of criteria.
❖ Medium-Term Scheduler:
▪ Medium-term scheduling is a part of swapping.
▪ It removes the processes from the memory. It reduces the degree of multiprogramming.
Process Scheduling

Scheduling Criteria:
The scheduler selects from the processes in memory that are ready to execute, and
allocates the CPU based on certain scheduling criteria.
➢ Scheduling Criteria are based on:
❖ CPU utilization – keep the CPU as busy as possible
❖ Throughput – No. of processes that complete their execution per time unit
❖ Turnaround time – amount of time to execute a particular process
❖ Waiting time – amount of time a process has been waiting in the ready queue
❖ Response time – amount of time it takes from when a request was submitted until the
first response is produced, not output (for time-sharing environment)

Process Scheduling

Scheduling algorithms:
A Process Scheduler schedules different processes to be assigned to the CPU based on
particular scheduling algorithms.
➢ These algorithms are either non-preemptive or preemptive.
➢ Popular process scheduling algorithms include:
❖ First-Come, First-Served (FCFS) Scheduling
❖ Shortest-Job-Next (SJN) Scheduling
❖ Priority Scheduling
❖ Round Robin(RR) Scheduling
❖ Multiple-Level Queues Scheduling.

Scheduling Algorithms

First Come First Serve (FCFS):


In First Come First Serve (FCFS) scheduling, Jobs are executed on first come, first
serve basis.
➢ It is a non-preemptive scheduling algorithm.
➢ Easy to understand and implement.
➢ Its implementation is based on a FIFO queue.
➢ Poor in performance, as the average waiting time is high.
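As a worked sketch (burst times are illustrative), the waiting time of each process under FCFS is simply the sum of all earlier bursts:

```python
# FCFS with all processes arriving at t=0: each process waits for
# every burst that arrived before it.
def fcfs(bursts):
    """Return per-process waiting times and their average,
    given CPU burst times in arrival order."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)   # time spent queued behind earlier jobs
        elapsed += burst
    return waits, sum(waits) / len(waits)

# e.g. bursts [24, 3, 3] give waits [0, 24, 27], average 17.0
```

A long first burst (the "convoy effect") drags up the average wait for everyone behind it, which is why FCFS performs poorly.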

Scheduling Algorithms

Shortest Job Next (SJN):


This is also known as shortest job first (SJF): associate with each process the length
of its next CPU burst, and use these lengths to schedule the process with the shortest
time first.
➢ This is a non-preemptive algorithm; the preemptive variant is known as Shortest
Remaining Time First (SRTF).
➢ Best approach to minimize waiting time.
➢ Easy to implement in batch systems where the required CPU time is known in advance.
➢ Impossible to implement in interactive systems where the required CPU time is not
known.
➢ The processor must know in advance how much time the process will take.
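A sketch of non-preemptive SJF with all jobs available at t=0 (burst times are illustrative): sorting by burst length minimizes the average waiting time:

```python
# Non-preemptive SJF: run jobs in order of increasing burst length.
def sjf(bursts):
    """Return per-process waiting times (indexed by arrival order)
    and their average."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waits, elapsed = [0] * len(bursts), 0
    for i in order:
        waits[i] = elapsed      # shorter jobs jump ahead of longer ones
        elapsed += bursts[i]
    return waits, sum(waits) / len(waits)

# e.g. bursts [6, 8, 7, 3] run in order 3, 6, 7, 8: average wait 7.0
```

Compare this with FCFS on the same bursts to see why SJF is called optimal for average waiting time.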

Scheduling Algorithms

Priority Scheduling:
Priority scheduling assigns each process a priority and is one of the most common
scheduling algorithms in batch systems.
➢ Each process is assigned a priority. Process with highest priority is to be executed first
and so on.
➢ Processes with same priority are executed on first come first served basis.
➢ Priority can be decided based on memory requirements, time requirements or any other
resource requirement.
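The rules above can be sketched directly (jobs and priorities are illustrative; this uses the common convention that a smaller number means higher priority, with ties broken first-come first-served):

```python
# Non-preemptive priority scheduling, all jobs arriving at t=0.
def priority_schedule(jobs):
    """jobs: list of (burst, priority) in arrival order; smaller
    priority number = higher priority. Returns per-job waiting times."""
    # sort by priority, breaking ties by arrival order (index)
    order = sorted(range(len(jobs)), key=lambda i: (jobs[i][1], i))
    waits, elapsed = [0] * len(jobs), 0
    for i in order:
        waits[i] = elapsed
        elapsed += jobs[i][0]
    return waits
```

Note that a steady stream of high-priority jobs can starve a low-priority one; aging (gradually raising the priority of waiting jobs) is the usual remedy.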

Scheduling Algorithms

Round Robin Scheduling:


Each process gets a small unit of CPU time (time quantum), after this time has elapsed,
the process is preempted and added to the end of the ready queue.
➢ Round Robin is a preemptive process scheduling algorithm.
➢ Each process is provided a fixed time to execute, called a quantum.
➢ Once a process has executed for a given time period, it is preempted and another
process executes for a given time period.
➢ Context switching is used to save the states of preempted processes.
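The quantum-and-requeue behavior can be simulated in a few lines (burst times and quantum are illustrative):

```python
from collections import deque

# Round Robin with all processes arriving at t=0: each process runs for
# at most one quantum, then rejoins the tail of the ready queue.
def round_robin(bursts, quantum):
    """Return each process's completion time."""
    queue = deque((i, b) for i, b in enumerate(bursts))
    finish, clock = [0] * len(bursts), 0
    while queue:
        i, remaining = queue.popleft()
        run = min(quantum, remaining)
        clock += run
        if remaining - run > 0:
            queue.append((i, remaining - run))  # preempted, requeued at the tail
        else:
            finish[i] = clock                   # process is done
    return finish
```

With bursts [24, 3, 3] and quantum 4, the short jobs finish at times 7 and 10 instead of waiting behind the 24-unit job, which is the responsiveness Round Robin buys at the cost of extra context switches.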

Scheduling Algorithms

Multiple-Level Queues Scheduling:


Multiple-level queues are not an independent scheduling algorithm.
➢ They make use of other existing algorithms to group and schedule jobs with common
characteristics.
❖ Multiple queues are maintained for processes with common characteristics.
❖ Each queue can have its own scheduling algorithms.
❖ Priorities are assigned to each queue.

Process Synchronization

Process Synchronization:
Process synchronization means sharing system resources among processes in such a way
that concurrent access to shared data is handled safely, minimizing the chance of
inconsistent data.
➢ Process synchronization ensures proper coordination among processes.
➢ Maintaining data consistency demands mechanisms to ensure synchronized execution
of cooperating processes.
➢ Process Synchronization can be provided by using several different tools like:
❖ Semaphores
❖ Mutual Exclusion or Mutex
❖ Monitor
Process Synchronization

Process Synchronization:
➢ The synchronization problem also arises with cooperative processes, because
resources are shared among them.
➢ On the basis of synchronization, processes are categorized as one of the following two types:
❖ Independent Process: execution of one process does not affect the execution of
other processes.
❖ Cooperative Process: Execution of one process affects the execution of other
processes.
Process Synchronization

Process Synchronization:
Race Condition:
➢ When several processes access and manipulate the same data at the same time, they may enter a
race condition.
➢ Race conditions occur among processes that share common storage for reads and writes.
➢ Race conditions occur due to improper synchronization of shared-memory access.
Critical section problem:
➢ Critical section is a code segment that can be accessed by only one process at a time.
➢ Critical section contains shared variables which need to be synchronized to maintain consistency of data
variables.
➢ Any solution to the critical section problem must satisfy three requirements:
❖ Mutual Exclusion
❖ Progress
❖ Bounded Waiting.
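A concrete critical section: `counter += 1` is a read-modify-write, so concurrent updates can be lost without synchronization. Guarding the section with a lock gives mutual exclusion (thread counts are illustrative):

```python
import threading

# Four threads increment a shared counter; the lock makes the
# read-modify-write a critical section that only one thread at a
# time may execute, so no update is lost.
counter = 0
lock = threading.Lock()

def add(n):
    global counter
    for _ in range(n):
        with lock:          # enter/leave the critical section
            counter += 1

threads = [threading.Thread(target=add, args=(50_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# with the lock, counter is exactly 200_000; without it, updates may be lost
```

The lock provides mutual exclusion and, because it is held only for one increment at a time, progress and bounded waiting are preserved in practice.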
Process Synchronization

Semaphores:
A semaphore is a signaling mechanism: a thread that is waiting on a semaphore can
be signaled by another thread.
➢ A semaphore uses two atomic operations, wait and signal for process synchronization.
➢ Classical problems of Synchronization with Semaphore Solution:
❖ Bounded-buffer (or Producer-Consumer) Problem
❖ Dining- Philosophers Problem
❖ Readers and Writers Problem
❖ Sleeping Barber Problem
Process Synchronization

Bounded-buffer (or Producer-Consumer) Problem:


The bounded-buffer problem is also called the producer-consumer problem.
➢ A solution to this problem is to create two counting semaphores, “full” and “empty”,
to keep track of the current number of full and empty buffer slots respectively.
➢ Producers produce items and consumers consume them, but both use one of the
buffer slots each time.

Process Synchronization

Bounded-buffer (or Producer-Consumer) Problem:


One solution of this problem is to use semaphores. The semaphores which will be used
here are:
➢ mutex, a binary semaphore which is used to acquire and release the lock.
➢ empty, a counting semaphore whose initial value is the number of slots in the buffer,
since, initially all slots are empty.
➢ full, a counting semaphore whose initial value is 0.

Process Synchronization

Bounded-buffer (or Producer-Consumer) Problem:


The Producer operation:

do {
    // wait until empty > 0, then decrement 'empty'
    wait(empty);
    // acquire lock
    wait(mutex);
    /* perform the insert operation in a slot */
    // release lock
    signal(mutex);
    // increment 'full'
    signal(full);
} while(TRUE);

The Consumer operation:

do {
    // wait until full > 0, then decrement 'full'
    wait(full);
    // acquire the lock
    wait(mutex);
    /* perform the remove operation in a slot */
    // release the lock
    signal(mutex);
    // increment 'empty'
    signal(empty);
} while(TRUE);
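The pseudocode above maps directly onto Python's semaphores; here is a runnable sketch (buffer size and item count are illustrative):

```python
import threading
from collections import deque

N = 5                            # number of buffer slots
buffer = deque()
mutex = threading.Semaphore(1)   # binary semaphore guarding the buffer
empty = threading.Semaphore(N)   # counts empty slots
full = threading.Semaphore(0)    # counts full slots
consumed = []

def producer():
    for item in range(10):
        empty.acquire()          # wait(empty)
        mutex.acquire()          # wait(mutex)
        buffer.append(item)      # insert into a slot
        mutex.release()          # signal(mutex)
        full.release()           # signal(full)

def consumer():
    for _ in range(10):
        full.acquire()           # wait(full)
        mutex.acquire()          # wait(mutex)
        consumed.append(buffer.popleft())   # remove from a slot
        mutex.release()          # signal(mutex)
        empty.release()          # signal(empty)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
```

The counting semaphores block the producer when the buffer is full and the consumer when it is empty, while the mutex protects the buffer itself.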
Process Synchronization

Dining- Philosophers Problem:


The dining philosophers problem states that K philosophers are seated around a circular
table with one chopstick between each pair of philosophers.
➢ There is one chopstick between each pair of philosophers.
➢ A philosopher may eat if he can pick up the two chopsticks
adjacent to him.
➢ A chopstick may be picked up by either of its adjacent
philosophers, but not by both at once.
➢ This problem involves the allocation of limited resources to a
group of processes in a deadlock-free and starvation-free
manner.
Process Synchronization

Dining- Philosophers Problem:


Semaphore Solution to Dining Philosopher.
➢ There are three states of the philosopher: THINKING, HUNGRY, and EATING.
➢ Here there are two semaphores: a mutex and a semaphore array for the philosophers.
➢ The mutex is used so that no two philosophers may pick up or put down chopsticks
at the same time.
➢ The array is used to control the behavior of each philosopher.

process P[i]:
    while true do {
        THINK;
        PICKUP(CHOPSTICK[i], CHOPSTICK[i+1 mod 5]);
        EAT;
        PUTDOWN(CHOPSTICK[i], CHOPSTICK[i+1 mod 5]);
    }

Process Synchronization

Readers and Writers Problem:


The reader-writer problem is a classic synchronization problem in operating systems
where multiple processes require access to a shared resource.
➢ In this problem, some processes may only read the resource while others may write to it.
➢ Problem parameters:
❖ One set of data is shared among a number of processes.
❖ Once a writer is ready, it performs its write. Only one writer
may write at a time.
❖ If a process is writing, no other process can read it.
❖ If at least one reader is reading, no other process can write.
❖ Readers may not write and only read.

Process Synchronization

Readers and Writers Problem:


The solution for readers and writers can be implemented using binary semaphores.
➢ We use two binary semaphores, "w" and "m" (a mutex), each an integer variable
operated on by wait() and signal().
➢ Functions on a semaphore:
❖ wait(): decrements the semaphore value.
❖ signal(): increments the semaphore value.
➢ Writer process:
❖ The writer requests entry to the critical section.
❖ If allowed, i.e. wait(w) succeeds, it enters and performs the write; if not
allowed, it keeps on waiting.
❖ It exits the critical section with signal(w).

Writer process:

do {
    // writer requests entry to the critical section
    wait(w);
    // performs the write
    // leaves the critical section
    signal(w);
} while(true);
Process Synchronization

Readers and Writers Problem:
Reader process:
➢ The reader requests entry to the critical section.
➢ If allowed:
❖ It increments the count of readers inside the critical section. If this is the
first reader entering, it locks the "w" semaphore to restrict the entry of writers.
❖ It then signals "m", since any other reader is allowed to enter while others
are already reading.
❖ After reading, it exits the critical section. When exiting, if no more readers
are inside, it signals the "w" semaphore so that a writer can enter.
➢ If not allowed, it keeps on waiting.

Reader process:

while(TRUE) {
    // acquire lock
    wait(m);
    read_count++;
    if(read_count == 1)
        wait(w);
    // release lock
    signal(m);
    /* perform the reading operation */
    // acquire lock
    wait(m);
    read_count--;
    if(read_count == 0)
        signal(w);
    // release lock
    signal(m);
}
Process Synchronization

Sleeping Barber Problem:


The analogy is based upon a hypothetical barber shop with one barber, one barber
chair, and N chairs for waiting customers.
➢ When there are no customers, the barber goes to sleep in the barber
chair and must be woken when a customer comes in.
➢ While the barber is cutting hair, new customers take empty
seats to wait, or leave if there is no vacancy.

Process Synchronization

Sleeping Barber Problem:


In the solution, we use three semaphores:
➢ One for customers, which counts the number of waiting
customers, excluding the customer in the barber chair,
since he is not waiting.
➢ Another for the state of the barber, indicating whether he is
idle or busy.
➢ And the last one for mutual exclusion, since there are
some critical sections that must be protected.

Process Synchronization

Sleeping Barber Problem:


In the solution, we use three semaphores:
➢ We can start with the semaphores and some constants,
such as the number of chairs in the waiting room.
➢ When the barber comes to the barbershop, he executes
the barber routine. Since there are no customers at
first, he must wait until a customer arrives: the barber
waits on the customers semaphore, which provides one of
the important synchronization conditions.
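A runnable sketch of the three-semaphore scheme described above (names, chair count, and customer count are illustrative; the driver serves exactly three customers so the run terminates):

```python
import threading

CHAIRS = 3
customers = threading.Semaphore(0)   # counts waiting customers
barber = threading.Semaphore(0)      # barber ready to cut hair
mutex = threading.Semaphore(1)       # protects the waiting counter
waiting = 0
haircuts, turned_away = [], []

def barber_thread(total):
    global waiting
    done = 0
    while done < total:
        customers.acquire()          # sleep until a customer arrives
        mutex.acquire()
        waiting -= 1                 # take one customer from the waiting room
        mutex.release()
        barber.release()             # signal that the barber is ready
        done += 1
        haircuts.append(done)

def customer_thread(cid):
    global waiting
    mutex.acquire()
    if waiting < CHAIRS:
        waiting += 1                 # take a seat in the waiting room
        mutex.release()
        customers.release()          # wake the barber if he is asleep
        barber.acquire()             # wait until the barber is ready
    else:
        mutex.release()
        turned_away.append(cid)      # no free chair: leave the shop

b = threading.Thread(target=barber_thread, args=(3,))
cs = [threading.Thread(target=customer_thread, args=(i,)) for i in range(3)]
b.start()
for c in cs:
    c.start()
for c in cs:
    c.join()
b.join()
```

The customers semaphore puts the barber to sleep when the shop is empty, and the mutex keeps the chair count consistent when arrivals and departures race.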
Deadlocks

Deadlock:
Deadlock is a situation where a set of processes are blocked because each process is
holding a resource and waiting for another resource acquired by some other process.
➢ In operating systems, deadlock occurs when two or more processes each hold some
resources and wait for resources held by the other(s).
➢ Example:
❖ Process 1 is holding Resource 1 and waiting for resource 2
which is acquired by process 2, and process 2 is waiting
for resource 1.

Deadlocks

Deadlock:
Necessary conditions for Deadlocks:
➢ Mutual Exclusion: A resource can only be used in a mutually exclusive manner; that is,
two processes cannot use the same resource at the same time.
➢ Hold and Wait: A process waits for some resources while holding other resources at the
same time.
➢ No Preemption: A resource cannot be forcibly taken away from the process holding it;
it is released only voluntarily by that process.
➢ Circular Wait: The processes wait for resources in a cyclic manner, so that the last
process is waiting for a resource held by the first process.
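The circular-wait condition is what deadlock detectors actually look for: model "process A waits for a resource held by process B" as an edge A → B in a wait-for graph and search for a cycle. A sketch (process names are illustrative):

```python
# Detect circular wait: depth-first search for a cycle in the
# wait-for graph.
def has_cycle(wait_for):
    """wait_for: dict mapping each process to the processes it waits on."""
    visited, on_path = set(), set()

    def dfs(p):
        if p in on_path:
            return True              # returned to a process on this path: cycle
        if p in visited:
            return False
        visited.add(p)
        on_path.add(p)
        if any(dfs(q) for q in wait_for.get(p, ())):
            return True
        on_path.remove(p)
        return False

    return any(dfs(p) for p in wait_for)

# P1 waits for P2 and P2 waits for P1: a cycle, hence deadlock is possible
```

If all four conditions hold and the graph has a cycle (with single-instance resources), the processes on the cycle are deadlocked.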

Deadlocks

Deadlock:
Methods for handling deadlock .
➢ There are three ways to handle deadlock:
❖ Deadlock prevention or avoidance (e.g. Banker's Algorithm): The idea is to not let
the system enter a deadlock state. The “avoidance” strategy requires an assumption:
each process must declare its maximum resource needs in advance.
❖ Deadlock detection and recovery: Let deadlock occur, then do preemption to
handle it once occurred.
❖ Deadlock Ignorance (Ostrich Method): If deadlock is very rare, then let it happen
and reboot the system. This is the approach that both Windows and UNIX take.

Deadlocks

Banker’s Algorithm:
The Banker's algorithm is used to avoid deadlock and to allocate resources safely to
each process in the computer system.
➢ It is also known as the deadlock avoidance algorithm in the operating system.
➢ The algorithm is named after the way a banker decides whether a loan can safely be
granted:
❖ Suppose the number of account holders in a particular bank is 'n', and the total
money in the bank is 'T'.
❖ When an account holder applies for a loan, the bank first subtracts the loan amount
from its total cash, and approves the loan only if the remaining cash is still
enough to satisfy the maximum needs of its other customers.
❖ This lets the bank operate without ever being unable to meet the needs of its
customers.
Deadlocks

Banker’s Algorithm:
The characteristics of Banker's algorithm are as follows:
❖ If any process requests a resource, it may have to wait.
❖ This algorithm consists of advanced features for maximum resource allocation.
❖ There are limited resources in the system we have.
❖ In this algorithm, if any process gets all the resources it needs, then it must
return them within a restricted period.
❖ Various resources are maintained in this algorithm that can fulfill the needs of at
least one client.
➢ The Banker’s algorithm comprises two algorithms:
❖ Safety algorithm
❖ Resource request algorithm
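A sketch of the safety algorithm (matrix values in the test are illustrative): a state is safe if the processes can finish in some order, each using the currently available units plus whatever earlier finishers release:

```python
# Banker's safety algorithm: decide whether the current allocation
# state is safe, and if so, produce one safe completion order.
def is_safe(available, allocation, maximum):
    """available: free units per resource type;
    allocation/maximum: per-process vectors of held / maximum demand."""
    n, m = len(allocation), len(available)
    need = [[maximum[i][j] - allocation[i][j] for j in range(m)]
            for i in range(n)]
    work, finished, order = list(available), [False] * n, []
    while len(order) < n:
        progressed = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(m)):
                # process i can run to completion, then releases its allocation
                for j in range(m):
                    work[j] += allocation[i][j]
                finished[i] = True
                order.append(i)
                progressed = True
        if not progressed:
            return False, []     # some processes can never finish: unsafe
    return True, order
```

The resource-request algorithm simply pretends to grant a request, runs this safety check on the resulting state, and rolls the grant back if the state would be unsafe.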
Discussion

Queries ?
Prof. K. Adisesha
9449081542
