
OPERATING SYSTEM

PROCESS MANAGEMENT
BY: Aniruddha Halder

Women’s Polytechnic, Chandannagar

Courtesy: https://www.tutorialspoint.com/operating_system/
https://www.geeksforgeeks.org/
https://www.javatpoint.com/
PROCESS MANAGEMENT
• What is a Process?
A process is the execution of a program that performs the actions specified in that program. It can be defined as an execution unit where a program runs. A process is a program that is under execution, which is an important part of modern-day operating systems.

Process management involves various tasks like the creation, scheduling and termination of processes, and the handling of deadlock. The OS must allocate resources that enable processes to share and exchange information. It also protects the resources of each process from other processes and allows synchronization among processes.
• Process States:
The process, from its creation to completion, passes through various states. The minimum number of states is five.

1. NEW: A program which is going to be picked up by the OS into the main memory is called a new process.

2. READY: Whenever a process is created, it directly enters the ready state, in which it waits for the CPU to be assigned. The OS picks the new processes from secondary memory and puts all of them in the main memory. The processes which are ready for execution and reside in the main memory are called ready state processes. There can be many processes present in the ready state.

3. RUNNING: One of the processes from the ready state will be chosen by the OS depending upon the scheduling algorithm. Hence, if we have only one CPU in our system, the number of running processes at a particular time will always be one. If we have n processors in the system, then we can have n processes running simultaneously.
PROCESS MANAGEMENT
• Process States (continued):

4. WAIT OR BLOCK: From the Running state, a process can make the transition to the block or wait state depending upon the scheduling algorithm or the intrinsic behavior of the process. When a process waits for a certain resource to be assigned or for input from the user, the OS moves the process to the block or wait state and assigns the CPU to the other processes.

5. TERMINATION / COMPLETION: When a process finishes its execution, it enters the termination state. All the context of the process (the Process Control Block) will also be deleted and the process will be terminated by the Operating system.

6. SUSPEND READY: A process in the ready state which is moved to secondary memory from the main memory due to lack of resources (mainly primary memory) is said to be in the suspend ready state.

7. SUSPENDED WAIT: Instead of removing a process from the ready queue, it is better to remove the blocked process, which is waiting for some resource, from the main memory. Since it is already waiting for some resource to become available, it is better if it waits in secondary memory and makes room for a higher priority process. These processes complete their execution once the main memory becomes available and their wait is finished.

A small illustrative sketch of these state transitions is given below.
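The state life cycle above can be pictured as a small state machine. The following minimal C sketch is purely illustrative: the enum labels, the transition() helper and the printed messages are hypothetical assumptions, not part of any real operating system.

#include <stdio.h>

/* Hypothetical labels for the process states described above. */
typedef enum {
    STATE_NEW,
    STATE_READY,
    STATE_RUNNING,
    STATE_WAIT,            /* wait / block                */
    STATE_TERMINATED,
    STATE_SUSPEND_READY,   /* swapped out while ready     */
    STATE_SUSPEND_WAIT     /* swapped out while waiting   */
} proc_state;

/* Print a transition; a real kernel would also update the PCB and queues. */
static void transition(proc_state *s, proc_state next, const char *why)
{
    printf("state %d -> %d (%s)\n", *s, next, why);
    *s = next;
}

int main(void)
{
    proc_state s = STATE_NEW;
    transition(&s, STATE_READY,      "admitted by the OS");
    transition(&s, STATE_RUNNING,    "picked by the scheduler");
    transition(&s, STATE_WAIT,       "waiting for I/O");
    transition(&s, STATE_READY,      "I/O finished");
    transition(&s, STATE_RUNNING,    "picked again");
    transition(&s, STATE_TERMINATED, "execution finished");
    return 0;
}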
PROCESS MANAGEMENT
• Process Control Block (PCB):
The Operating system maintains a process control block during the lifetime of the process. The process control block is deleted when the process is terminated or killed. The information saved in the process control block (PCB) changes with the state of the process, as shown in the figure.

 Process ID – Every process is assigned a unique id known as the process ID or PID, which stores the process identifier.
 Process state – It stores the respective state of the process.
 Pointer – It is a stack pointer which is required to be saved when the process is switched from one state to another, to retain the current position of the process.
 Priority – This includes the process priority, pointers to scheduling queues, etc. It is the CPU scheduling information contained in the PCB. This may also include any other scheduling parameters.
 Program Counter – It stores the counter which contains the address of the next instruction that is to be executed for the process.
 CPU Registers – This specifies the registers that are used by the process. They may include accumulators, index registers, stack pointers, general purpose registers, etc.
 Memory Limits – This field contains information about the memory management system used by the operating system. This may include the page tables, segment tables, etc.
 Open files list – This information includes the list of files opened for the process.
 Miscellaneous accounting and status data – This field includes information about the amount of CPU used, time constraints, job or process number, etc.

The process control block also stores the register content, known as the execution content of the processor when it was blocked from running. This execution content architecture enables the operating system to restore a process’s execution context when the process returns to the running state. When the process makes a transition from one state to another, the operating system updates its information in the process’s PCB. The operating system maintains pointers to each process’s PCB in a process table so that it can access the PCB quickly. A simplified sketch of such a structure is given below.
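To make the list of PCB fields concrete, here is a hypothetical, heavily simplified C structure that mirrors them. Real kernels keep far more information (for example, Linux’s task_struct); the field names and sizes below are illustrative assumptions only.

#include <stdio.h>
#include <stdint.h>

/* Hypothetical, simplified PCB mirroring the fields listed above. */
struct pcb {
    int        pid;                 /* Process ID                          */
    int        state;               /* NEW, READY, RUNNING, ...            */
    int        priority;            /* CPU scheduling information          */
    uintptr_t  program_counter;     /* address of the next instruction     */
    uintptr_t  stack_pointer;       /* saved stack pointer                 */
    uintptr_t  registers[16];       /* saved CPU registers                 */
    uintptr_t  mem_base, mem_limit; /* memory limits                       */
    int        open_files[16];      /* open files list (descriptors)       */
    long       cpu_time_used;       /* accounting information              */
    struct pcb *next;               /* link used by the scheduling queues  */
};

int main(void)
{
    printf("sizeof(struct pcb) = %zu bytes\n", sizeof(struct pcb));
    return 0;
}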
PROCESS SCHEDULING
Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy.

Process scheduling is an essential part of multiprogramming operating systems. Such operating systems allow more than one process to be loaded into the executable memory at a time, and the loaded processes share the CPU using time multiplexing.
• Process Scheduling Queues:
The OS maintains all PCBs in process scheduling queues. The OS maintains a separate queue for each of the process states, and the PCBs of all processes in the same execution state are placed in the same queue. When the state of a process is changed, its PCB is unlinked from its current queue and moved to its new state queue (see the sketch after this list).

• Job queue − This queue keeps all the processes in the system.
• Ready queue − This queue keeps the set of all processes residing in main memory, ready and waiting to execute. A new process is always put in this queue.
• Device queues − The processes which are blocked due to the unavailability of an I/O device constitute this queue.
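As referenced above, the sketch below shows in C how a PCB could be unlinked from one state queue and appended to another. The types and helper names are hypothetical and only illustrate the queue manipulation, not any particular operating system.

#include <stdio.h>
#include <stddef.h>

/* Hypothetical PCB and FIFO queue, only to illustrate moving a PCB
 * from one state queue to another when its state changes. */
struct pcb   { int pid; struct pcb *next; };
struct queue { struct pcb *head, *tail; };

static void enqueue(struct queue *q, struct pcb *p)
{
    p->next = NULL;
    if (q->tail) q->tail->next = p; else q->head = p;
    q->tail = p;
}

static struct pcb *dequeue(struct queue *q)
{
    struct pcb *p = q->head;
    if (!p) return NULL;
    q->head = p->next;
    if (!q->head) q->tail = NULL;
    return p;
}

int main(void)
{
    struct queue ready = {0}, running = {0};
    struct pcb p1 = { .pid = 1 };
    enqueue(&ready, &p1);                 /* new process joins the ready queue     */
    enqueue(&running, dequeue(&ready));   /* dispatched: PCB moves to "running"    */
    printf("running head pid = %d\n", running.head->pid);
    return 0;
}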
PROCESS SCHEDULING
• Schedulers:
Schedulers are special system software which handle process scheduling in various ways. Their main task is to select the jobs to be submitted into the system and to decide which process to run. Schedulers are of three types:

• Long-Term Scheduler: It is also called a job scheduler. A long-term scheduler determines which programs are admitted to the system for processing. It selects processes from the queue and loads them into memory for execution. The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O bound and processor bound. It also controls the degree of multiprogramming. If the degree of multiprogramming is stable, then the average rate of process creation must be equal to the average departure rate of processes leaving the system.

• Short-Term Scheduler: It is also called a CPU scheduler. Its main objective is to increase system performance in accordance with the chosen set of criteria. It handles the change of a process from the ready state to the running state. The CPU scheduler selects a process from among the processes that are ready to execute and allocates the CPU to it. Short-term schedulers, also known as dispatchers, make the decision of which process to execute next. Short-term schedulers are faster than long-term schedulers.

• Medium-Term Scheduler: Medium-term scheduling is a part of swapping. It removes processes from the memory and so reduces the degree of multiprogramming. The medium-term scheduler is in charge of handling the swapped-out processes. A running process may become suspended if it makes an I/O request; a suspended process cannot make any progress towards completion. In this condition, to remove the process from memory and make space for other processes, the suspended process is moved to secondary storage. This is called swapping, and the process is said to be swapped out or rolled out. Swapping may be necessary to improve the process mix.
• Home Task: Comparative Study between different Schedulers.
CONTEXT SWITCH
A context switch is the mechanism of storing and restoring the state or context of a CPU in the Process Control Block so that a process’s execution can be resumed from the same point at a later time. Using this technique, a context switcher enables multiple processes to share a single CPU. Context switching is an essential feature of a multitasking operating system.

When the scheduler switches the CPU from executing one process to executing another, the state of the currently running process is stored into its process control block. After this, the state of the process to run next is loaded from its own PCB and used to set the PC, registers, etc. At that point, the second process can start executing.

Context switches are computationally intensive, since register and memory state must be saved and restored. To reduce the amount of context switching time, some hardware systems employ two or more sets of processor registers. When the process is switched, the following information is stored for later use (a minimal sketch of the idea follows this list):

 Program Counter
 Scheduling information
 Currently used registers
 Changed state
 I/O state information
 Base and limit register values
 Accounting information
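The sketch referenced above simulates this save/restore in plain C. A real context switch is architecture-specific assembly inside the kernel; here the “CPU registers” are simulated by a global struct so the idea can be run as an ordinary program, and all names are hypothetical.

#include <stdio.h>
#include <string.h>

/* Conceptual sketch only: the "live registers" below are simulated. */
struct cpu_context { unsigned long pc, sp, regs[4]; };
struct pcb { int pid; struct cpu_context ctx; };

static struct cpu_context cpu;                 /* pretend these are the live registers */

static void context_switch(struct pcb *curr, struct pcb *next)
{
    memcpy(&curr->ctx, &cpu, sizeof cpu);      /* save outgoing state into its PCB     */
    memcpy(&cpu, &next->ctx, sizeof cpu);      /* restore incoming state from its PCB  */
}

int main(void)
{
    struct pcb a = { .pid = 1, .ctx = { .pc = 0x1000 } };
    struct pcb b = { .pid = 2, .ctx = { .pc = 0x2000 } };
    memcpy(&cpu, &a.ctx, sizeof cpu);          /* process 1 is currently on the CPU    */
    context_switch(&a, &b);                    /* scheduler switches to process 2      */
    printf("CPU now at pc=0x%lx (process %d)\n", cpu.pc, b.pid);
    return 0;
}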
OPERATION ON PROCESSES
1. Creation: This is the initial step of the process execution activity. Process creation means the construction of a new process for execution. This might be performed by the system, by a user, or by an old process itself. There are several events that lead to process creation; some of them are the following (a minimal fork() example is sketched after this list):

 When we start the computer, the system creates several background processes.
 A user may request the creation of a new process.
 A process can create a new process itself while executing.
 A batch system takes up the initiation of a batch job.
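As an example of the third event in the list, a running process can create another process with the POSIX fork() system call. The sketch below assumes a POSIX system and simply prints which process is which.

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

/* A running process creates a new process with fork() (POSIX). */
int main(void)
{
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");                          /* creation failed            */
        return 1;
    } else if (pid == 0) {
        printf("child:  pid=%d\n", getpid());    /* this is the new process    */
    } else {
        printf("parent: created child %d\n", pid);
        wait(NULL);                              /* wait for the child to end  */
    }
    return 0;
}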
2. Scheduling/Dispatching: The event or activity in which the state of the process is changed from ready to running. It means the operating system puts the process from the ready state into the running state. Dispatching is done by the operating system when the resources are free or the process has a higher priority than the ongoing process. There are various other cases in which the process in the running state is preempted and a process in the ready state is dispatched by the operating system.

3. Blocking: When a process invokes an input-output system call, that call blocks the process, and the operating system puts it in block mode. Block mode is basically a mode where the process waits for input-output. Hence, on the demand of the process itself, the operating system blocks the process and dispatches another process to the processor. In the process blocking operation, the operating system puts the process in the ‘waiting’ state. A small example of a blocking system call is sketched below.
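The blocking operation described above can be observed with any blocking system call. The short POSIX sketch below blocks on read() from standard input; while it waits, the OS keeps the process in the waiting state and dispatches other processes to the CPU.

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[64];
    printf("waiting for input...\n");
    /* read() on standard input blocks until input arrives;
     * the process sits in the waiting/blocked state meanwhile. */
    ssize_t n = read(STDIN_FILENO, buf, sizeof buf - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("woken up, read: %s", buf);
    }
    return 0;
}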
OPERATION ON PROCESSES
4. Preemption: When a timeout occurs, meaning the process has not terminated in the allotted time interval and the next process is ready to execute, the operating system preempts the process. This operation is only valid where CPU scheduling supports preemption. Basically this happens in priority scheduling, where on the arrival of a high priority process the ongoing process is preempted. Hence, in the process preemption operation, the operating system puts the process in the ‘ready’ state.

5. Termination: Process termination is the activity of ending the process. In other words, process termination is the release of the computer resources taken by the process for its execution. Like creation, there may be several events that lead to process termination. Some of them are:

 The process completes its execution fully and indicates to the OS that it has finished.
 The operating system itself terminates the process due to service errors.
 There may be a problem in hardware that terminates the process.
 One process can be terminated by another process.
INTER PROCESS COMMUNICATION
A process can be of two types: • Independent process. • Co-operating process.
An independent process is not affected by the execution of other processes while a co-operating process
can be affected by other executing processes.
There are many situations when the co-operative nature of processes can be utilized for increasing computational speed, convenience, and modularity.
Inter-process communication (IPC) is a mechanism that allows processes to communicate with each other and synchronize their actions. The communication between these processes can be seen as a method of co-operation between them.
Approaches to Interprocess Communication (a minimal pipe example is sketched below):
• Pipes
• Shared Memory
• Message Queue
• Direct Communication
• Indirect Communication
• Message Passing
• FIFO
Advantages of IPC (H/T)
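As a concrete illustration of the first approach, the following minimal POSIX sketch creates a pipe: the parent writes a message and the child reads it. This is only one way of using pipes, shown here for illustration.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];
    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    if (fork() == 0) {                       /* child: reading end of the pipe  */
        char buf[32];
        close(fd[1]);
        ssize_t n = read(fd[0], buf, sizeof buf - 1);
        if (n > 0) { buf[n] = '\0'; printf("child received: %s\n", buf); }
        return 0;
    }

    close(fd[0]);                            /* parent: writing end of the pipe */
    const char *msg = "hello via pipe";
    write(fd[1], msg, strlen(msg));
    close(fd[1]);
    wait(NULL);
    return 0;
}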
CLASSICAL PROBLEMS
Role of Synchronization in Inter Process Communication:
Synchronization is one of the essential parts of inter-process communication. Typically, it is provided by the interprocess communication control mechanisms, but sometimes it can also be controlled by the communicating processes.

The following are different methods used to provide synchronization:

• Mutual Exclusion: Mutual exclusion implies that only one process can be inside the critical section at any time. If any other process requires the critical section, it must wait until it is free. Progress means that if a process is not using the critical section, then it should not stop any other process from accessing it.

• Semaphore: The semaphore was proposed by Dijkstra in 1965 and is a very significant technique to manage concurrent processes by using a simple integer value, which is known as a semaphore. A semaphore is simply an integer variable that is shared between threads. This variable is used to solve the critical section problem and to achieve process synchronization in a multiprocessing environment.

The P semaphore function signals that the task requires a resource and, if it is not available, waits for it.
The V semaphore function signals to the OS that the task has finished with the resource, which is now free for the other users.

Now, let us see how a semaphore implements mutual exclusion. Let there be two processes P1 and P2 and a semaphore S (semaphore value) initialized to 1. Now suppose P1 enters its critical section; then the value of semaphore S becomes 0 (through the P operation). Now if P2 wants to enter its critical section, it will wait until S > 0, which can only happen when P1 finishes its critical section and calls the V operation on semaphore S. This way mutual exclusion is achieved. A small POSIX sketch of this idea follows.
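The same P/V behaviour can be tried with POSIX semaphores, where sem_wait() plays the role of the P operation and sem_post() the role of the V operation. The sketch below guards a shared counter between two threads rather than two processes; compile with -pthread.

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

/* Mutual exclusion with a semaphore initialized to 1. */
static sem_t s;
static int shared = 0;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        sem_wait(&s);     /* P: S becomes 0, other threads must wait     */
        shared++;         /* critical section                            */
        sem_post(&s);     /* V: S back to 1, another thread may enter    */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    sem_init(&s, 0, 1);                 /* semaphore S initialized to 1 */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %d\n", shared);    /* 200000 thanks to mutual exclusion */
    sem_destroy(&s);
    return 0;
}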
THREADS IN OS
THREADS: A thread is a single sequential flow of execution of tasks of a process, so it is also known as a thread of execution or a thread of control.

There is a way of thread execution inside the process of any operating system. Apart from this, there can be more than one thread inside a process. Each thread of the same process makes use of a separate program counter and a stack of activation records and control blocks. A thread is often referred to as a lightweight process. Threads provide a way to improve application performance through parallelism.
Benefits (a minimal POSIX threads sketch follows this list):
• It takes far less time to create a new thread in an existing process than to create a new process.
• Threads can share common data, so they do not need to use inter-process communication.
• Context switching is faster when working with threads.
• It takes less time to terminate a thread than a process.
• Enhanced throughput of the system.
• Effective utilization of a multiprocessor system.
• Resource sharing.
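As noted in the list above, threads of one process share data directly. The minimal POSIX threads sketch below has two threads updating a shared counter under a mutex; compile with -pthread.

#include <stdio.h>
#include <pthread.h>

/* Two threads of the same process share this global data directly,
 * so no inter-process communication is needed. */
static int counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *work(void *name)
{
    pthread_mutex_lock(&lock);
    counter++;
    printf("%s sees counter = %d\n", (const char *)name, counter);
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, work, (void *)"thread A");
    pthread_create(&b, NULL, work, (void *)"thread B");
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}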
THREADS IN OS
Types of Threads
• User-level thread: In this case, the thread management kernel is not aware of the existence of threads. The thread library contains code for creating and destroying threads, for passing messages and data between threads, for scheduling thread execution and for saving and restoring thread contexts. The application starts with a single thread.
Advantages:
• Thread switching does not require Kernel mode privileges.
• User-level threads can run on any operating system.
• Scheduling can be application specific in the user-level thread.
• User-level threads are fast to create and manage.
Examples: Java threads, POSIX threads, etc.
Disadvantages:
• In a typical operating system, most system calls are blocking.
• A multithreaded application cannot take advantage of multiprocessing.

• Kernel-level thread: In this case, thread management is done by the Kernel. There is no thread management code in the application area. Kernel threads are supported directly by the operating system. Any application can be programmed to be multithreaded. All of the threads within an application are supported within a single process. The Kernel maintains context information for the process as a whole and for individual threads within the process. Scheduling by the Kernel is done on a thread basis. The Kernel performs thread creation, scheduling and management in Kernel space. Kernel threads are generally slower to create and manage than user-level threads.
Example: Windows, Solaris.
THREADS IN OS
Multithreading Models
Some operating systems provide a combined user-level thread and Kernel-level thread facility. Solaris is a good example of this combined approach. In a combined system, multiple threads within the same application can run in parallel on multiple processors, and a blocking system call need not block the entire process. There are three multithreading models:

• Many to many relationship. • Many to one relationship. • One to one relationship.

Many to many relationship:
The many-to-many model multiplexes any number of user threads onto an equal or smaller number of kernel threads.

In this model, developers can create as many user threads as necessary, and the corresponding Kernel threads can run in parallel on a multiprocessor machine. This model provides the best accuracy on concurrency, and when a thread performs a blocking system call, the kernel can schedule another thread for execution.
THREADS IN OS
Multithreading Models (continued)
• Many to one relationship.
The many-to-one model maps many user-level threads to one Kernel-level thread. Thread management is done in user space by the thread library. When a thread makes a blocking system call, the entire process will be blocked. Only one thread can access the Kernel at a time, so multiple threads are unable to run in parallel on multiprocessors.

• One to one relationship.
There is a one-to-one relationship of user-level threads to kernel-level threads. This model provides more concurrency than the many-to-one model. It also allows another thread to run when a thread makes a blocking system call. It supports multiple threads executing in parallel on multiprocessors.

Disadvantage: Creating a user thread requires creating the corresponding Kernel thread.
Example: OS/2, Windows NT and Windows 2000.