PROCESS MANAGEMENT
PROCESS
A process is a program in execution.
The execution of a process must progress in a sequential fashion.
A process is the basic unit of execution in an operating system.
A program is a passive entity stored on a secondary storage device, while a process is an
active entity that resides in main memory.
A process is defined as an entity which represents the basic unit of work to be
implemented in the system.
When a program is loaded into the memory and it becomes a process, it can be divided
into four sections ─ stack, heap, text and data.
The following four sections make up the structure of a process inside main memory:
Stack
The process Stack contains the temporary data such as method/function parameters, return address
and local variables.
Heap
This is dynamically allocated memory to a process during its run time.
Text
This section contains the compiled program code, i.e. the executable instructions, read in
from secondary storage when the program is launched.
Data
This section contains the global and static variables.
PROCESS STATE
The state of a process is defined in part by the current activity of that process, and it
changes as the process executes.
Each process may be in one of the following states
Start/New
This is the initial state when a process is first started/created.
Ready
The process is waiting to be assigned to a processor. Ready processes are waiting to have the
processor allocated to them by the operating system so that they can run. A process may enter
this state after the Start state, or after running, when the scheduler interrupts it to assign
the CPU to some other process.
Running
Once the process has been assigned to a processor by the OS scheduler, the process state is set to
running and the processor executes its instructions.
Waiting
Process moves into the waiting state if it needs to wait for a resource, such as waiting for user
input, or waiting for a file to become available.
Terminated or Exit
Once the process finishes its execution, or it is terminated by the operating system, it is moved to
the terminated state where it waits to be removed from main memory.
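These states and the allowed transitions between them can be sketched in a few lines (a minimal illustration, not part of the original notes):

```python
# The five process states and the legal transitions between them,
# modeled as a dictionary: state -> set of states it may move to.
TRANSITIONS = {
    "new":        {"ready"},                          # admitted by the long-term scheduler
    "ready":      {"running"},                        # dispatched by the CPU scheduler
    "running":    {"ready", "waiting", "terminated"}, # preempted, blocked, or finished
    "waiting":    {"ready"},                          # the awaited I/O or event completes
    "terminated": set(),
}

def can_move(src, dst):
    """Return True if a process may change state from src to dst."""
    return dst in TRANSITIONS.get(src, set())
```

For example, `can_move("waiting", "running")` is False: a blocked process must first return to the ready queue and be dispatched again.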
1. Non-preemptive:
In non-preemptive scheduling, once the CPU has been allocated to a process, the process
keeps it until it terminates or switches to the waiting state.
Algorithms based on non-preemptive scheduling are First-Come, First-Served (FCFS),
Shortest Job First (SJF), and Priority (non-preemptive version).
Advantages
1. It has a minimal scheduling burden.
2. It is a very simple method.
3. Fewer computational resources are used.
Disadvantages
1. Its response time to processes can be poor, since a long job cannot be interrupted.
2. Bugs can cause a computer to freeze up, because a misbehaving process never gives up the CPU.
2. Preemptive:
Preemptive scheduling is used when a process switches from the running state to
the ready state or from the waiting state to the ready state.
Here the OS allocates the resources to a process for a fixed amount of time.
This switching occurs because the CPU may give priority to other processes, replacing the
running process with one of higher priority.
Algorithms based on preemptive scheduling are Round Robin (RR), Shortest Remaining Time
First (SRTF), and Priority (preemptive version).
Advantages
1. It is a more reliable method, because a process cannot monopolize the processor.
2. The average response time is improved.
Disadvantages
1. It uses more computational resources, because of the overhead of frequent context switches.
2. A running process can be interrupted at any time, which delays the completion of ongoing tasks.
SCHEDULERS
Schedulers are of three types:
1. Long-Term Scheduler
2. Short-Term Scheduler
3. Medium-Term Scheduler
Long-Term Scheduler
It is also called a job scheduler.
A long-term scheduler determines which programs are admitted to the system for
processing.
It selects processes from the job queue and loads them into memory for execution; once in
memory, they become available for CPU scheduling.
The primary objective of the job scheduler is to provide a balanced mix of jobs, such as
I/O bound and processor bound.
It also controls the degree of multiprogramming.
If the degree of multiprogramming is stable, then the average rate of process creation must
be equal to the average departure rate of processes leaving the system.
On some systems, the long-term scheduler may be absent or minimal.
Time-sharing operating systems have no long-term scheduler.
The long-term scheduler is responsible for moving processes from the new state to the ready
state.
Short-Term Scheduler
It is also called the CPU scheduler.
Its main objective is to increase system performance in accordance with the chosen set of
criteria.
It manages the transition of processes from the ready state to the running state.
CPU scheduler selects a process among the processes that are ready to execute and allocates
CPU to one of them.
Short-term schedulers, also known as dispatchers, make the decision of which process to
execute next.
Short-term schedulers are faster than long-term schedulers.
Medium-Term Scheduler
Medium-term scheduling is a part of swapping.
It removes the processes from the memory.
It reduces the degree of multiprogramming.
The medium-term scheduler is in charge of handling the swapped-out processes.
A running process may become suspended if it makes an I/O request.
A suspended process cannot make any progress towards completion.
In this condition, to remove the process from memory and make space for other processes,
the suspended process is moved to the secondary storage. This process is called swapping,
and the process is said to be swapped out or rolled out.
Scheduling algorithms
A process scheduler assigns processes to the CPU based on a particular scheduling
algorithm. Commonly used algorithms are:
1. First-Come, First-Served (FCFS) Scheduling
Arrival Time: Time at which the process arrives in the ready queue.
Completion Time: Time at which process completes its execution.
Burst Time: Time required by a process for CPU execution.
Turn Around Time: Time Difference between completion time and arrival time.
Turn Around Time = Completion Time – Arrival Time
Waiting Time (W.T): Time Difference between turn-around time and burst time.
Waiting Time = Turn Around Time – Burst Time
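As a quick worked example of the two formulas above (with made-up numbers): a process arrives at time 2, needs 5 units of CPU, and completes at time 12.

```python
# Applying the definitions:
#   Turn Around Time = Completion Time - Arrival Time
#   Waiting Time     = Turn Around Time - Burst Time
arrival_time    = 2
burst_time      = 5
completion_time = 12

turn_around_time = completion_time - arrival_time   # 12 - 2 = 10
waiting_time     = turn_around_time - burst_time    # 10 - 5 = 5
```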
Gantt chart
A Gantt chart is a bar chart, named after Henry Gantt, that shows the amount of work
completed over a given period of time; in CPU scheduling, it shows which process occupies
the CPU during each time interval.
Process   Burst time
P1        24
P2        3
P3        3
Suppose that the processes arrive in the order P1, P2, P3.
The Gantt chart for the schedule is:
|     P1     |  P2  |  P3  |
0            24     27     30
Advantage
It is simple to implement
Easy to understand
Disadvantage
Poor in performance, as the average waiting time is high.
It is not suitable for time-sharing systems.
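The schedule above can be reproduced in a few lines; a minimal FCFS sketch, assuming all three processes arrive at time 0:

```python
# FCFS: processes are served strictly in arrival order.
def fcfs(bursts):
    """bursts: list of burst times, in arrival order (all arriving at time 0).
    Returns (completion_times, waiting_times)."""
    completion, waiting = [], []
    clock = 0
    for burst in bursts:
        waiting.append(clock)        # time already spent in the ready queue
        clock += burst
        completion.append(clock)
    return completion, waiting

completion, waiting = fcfs([24, 3, 3])
# completion -> [24, 27, 30], matching the Gantt chart
# waiting    -> [0, 24, 27]
avg_wait = sum(waiting) / len(waiting)   # (0 + 24 + 27) / 3 = 17
```

The long first burst (24) is what drives the average waiting time up to 17, illustrating the "poor in performance" disadvantage.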
2. Shortest Job First (SJF) Scheduling
Among the processes that have arrived, the one with the shortest burst time is executed first.
Process   Arrival time   Burst time
P1        0              3
P2        2              1
P3        3              2
P4        3              1
P5        8              4
Gantt Chart
|  P1  | P2 | P4 |  P3  | idle |   P5   |
0      3    4    5      7      8        12
Advantages
SJF gives the minimum average waiting time for a given set of processes.
Disadvantages
This algorithm may face starvation: if processes with shorter burst times keep arriving,
longer processes are forced to wait in the queue indefinitely.
This algorithm can lead to a long turnaround time.
It is difficult to know the length of the next CPU burst in advance.
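The SJF order for the example table above can be sketched as follows (non-preemptive: once picked, a process runs its whole burst):

```python
# Non-preemptive SJF for (arrival, burst): P1(0,3) P2(2,1) P3(3,2) P4(3,1) P5(8,4).
def sjf(procs):
    """procs: dict name -> (arrival, burst). Returns the execution order."""
    remaining = dict(procs)
    clock, order = 0, []
    while remaining:
        ready = {n: ab for n, ab in remaining.items() if ab[0] <= clock}
        if not ready:                      # CPU idle until the next arrival
            clock = min(ab[0] for ab in remaining.values())
            continue
        # shortest burst first; ties broken by earlier arrival
        name = min(ready, key=lambda n: (ready[n][1], ready[n][0]))
        order.append(name)
        clock += remaining.pop(name)[1]
    return order

order = sjf({"P1": (0, 3), "P2": (2, 1), "P3": (3, 2),
             "P4": (3, 1), "P5": (8, 4)})
# -> ['P1', 'P2', 'P4', 'P3', 'P5'] (CPU idle from 7 to 8, then P5 runs)
```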
Priority Scheduling
A priority is assigned to each process.
The process with the highest priority is executed first, and so on.
Processes with the same priority are executed in FCFS order.
Priority can be decided based on memory requirements, time requirements, or any
other resource requirement.
Priority scheduling can be either preemptive or non-preemptive.
Ques: Solve the problem using the non-preemptive priority scheduling algorithm.
Gantt Chart
|    P3    | P4 |         P1         | P2 |
0          6    8                    29   31
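The original process table for this question is not reproduced above, so the sketch below uses hypothetical arrival times, burst times and priorities chosen to match the order shown in the Gantt chart (here a lower number means a higher priority):

```python
# Non-preemptive priority scheduling: when the CPU becomes free, pick the
# ready process with the best (lowest-numbered) priority.
def priority_np(procs):
    """procs: name -> (arrival, burst, priority). Returns the execution order."""
    remaining = dict(procs)
    clock, order = 0, []
    while remaining:
        ready = {n: v for n, v in remaining.items() if v[0] <= clock}
        if not ready:                      # CPU idle until the next arrival
            clock = min(v[0] for v in remaining.values())
            continue
        name = min(ready, key=lambda n: (ready[n][2], ready[n][0]))
        order.append(name)
        clock += remaining.pop(name)[1]
    return order

# Hypothetical workload, all arriving at time 0:
order = priority_np({"P1": (0, 21, 2), "P2": (0, 2, 3),
                     "P3": (0, 6, 0), "P4": (0, 2, 1)})
# -> ['P3', 'P4', 'P1', 'P2']
```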
Round Robin (RR) Scheduling
Each process is assigned the CPU for a fixed time slice (the time quantum); when the
quantum expires, the running process is preempted and moved to the back of the ready queue.
Advantage
1. Every process gets an equal share of the CPU.
2. RR is cyclic in nature, so there is no starvation.
Disadvantage
1. Performance depends heavily on the time quantum: a very large quantum makes RR behave
like FCFS, while a very small one causes excessive context-switching overhead.
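A round-robin schedule can be sketched as follows, reusing the earlier FCFS burst times (24, 3, 3) with a hypothetical time quantum of 4:

```python
from collections import deque

# Round robin: each process runs for at most one quantum, then goes to the
# back of the ready queue if it still has work left.
def round_robin(bursts, quantum):
    """bursts: dict name -> burst time (all arriving at time 0).
    Returns a list of (name, start, end) CPU slices."""
    queue = deque(bursts.items())
    clock, slices = 0, []
    while queue:
        name, left = queue.popleft()
        run = min(quantum, left)
        slices.append((name, clock, clock + run))
        clock += run
        if left > run:
            queue.append((name, left - run))   # rejoin at the back of the queue
    return slices

slices = round_robin({"P1": 24, "P2": 3, "P3": 3}, quantum=4)
# P1 runs 0-4, P2 4-7, P3 7-10, then P1 alone finishes at 30
```

Note how P2 and P3 finish by time 10 instead of waiting behind all of P1, which is why RR improves response time over FCFS for short jobs.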
Resource Allocation Graph
A resource allocation graph is a directed graph that describes the current state of resource
allocation to processes. It has two components:
1. Vertices
2. Edges
1. Vertices: - There are two types of vertices we use in the resource allocation graph:
Process Vertices
Resource Vertices
Process Vertices: - To represent a process, we use process vertices. We draw the process
vertices by using a circle, and inside the circle, we mention the name of the process.
Resource Vertices: - To represent a resource, we use resource vertices. We draw the resource
vertices by using a rectangle, and we use dots inside the rectangle to indicate the number of
instances of that resource.
Resource vertices are of two types:
Single instance resource type: - In single instance resource type, we use only a
single dot inside the box.
Multiple instance resource type: - In multiple instance resource type, we use
multiple dots inside the box.
2. Edges: - There are two types of edges we use in the resource allocation graph:
Assign Edges
Request Edges
Assign Edges: - We use an assign edge to represent the allocation of a resource to a process.
Request Edges: - We use a request edge to signify that a process is waiting for a resource.
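The graph can be represented as a dictionary of directed edges: a request edge points from a process to a resource, and an assign edge from a resource to a process. With single-instance resources, a cycle in this graph means the processes involved are blocked. A sketch with hypothetical process and resource names:

```python
# Depth-first search cycle detection over a directed graph given as
# node -> list of neighbour nodes.
def has_cycle(graph):
    """Return True if the directed graph contains a cycle."""
    visiting, done = set(), set()

    def dfs(node):
        if node in visiting:
            return True          # back edge: we returned to a node on the path
        if node in done:
            return False
        visiting.add(node)
        for nxt in graph.get(node, []):
            if dfs(nxt):
                return True
        visiting.remove(node)
        done.add(node)
        return False

    return any(dfs(n) for n in graph)

# P1 holds R1 and requests R2; P2 holds R2 and requests R1.
rag = {"P1": ["R2"], "R2": ["P2"], "P2": ["R1"], "R1": ["P1"]}
# has_cycle(rag) -> True: the two processes deadlock
```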
Deadlock
Deadlock is a situation where a set of process are blocked because each process is holding a
resource and waiting for another resource acquired by some other process.
Causes of deadlock
1. Mutual Exclusion
A resource can only be shared in a mutually exclusive manner: two processes cannot use the
same resource at the same time.
2. Hold and Wait
A process holds one resource while waiting for another at the same time. This deadlock
condition requires a process to wait for an occupied resource.
3. No preemption
A resource cannot be forcibly taken away from the process holding it. The resource is
released only voluntarily, once the process holding it has finished using it.
4. Circular wait
All the processes must be waiting for resources in a cyclic manner, so that the last process
is waiting for a resource held by the first process. Every process in the set waits forever
and never gets executed; for this reason, deadlock is also called "circular wait", since it
gets the processes stuck in a circular fashion.
Process Synchronization
On the basis of synchronization, processes are categorized as one of the following two types:
Independent Process: The execution of one process does not affect the execution
of other processes.
Cooperative Process: A process that can affect or be affected by other processes
executing in the system.
Race Condition:
When more than one process is executing the same code or accessing the same memory
or any shared variable in that condition there is a possibility that the output or the value
of the shared variable is wrong so for that all the processes doing the race to say that
my output is correct this condition known as a race condition.
When several processes access and manipulate the same data concurrently, the outcome
depends on the particular order in which the accesses take place.
A race condition is a situation that may occur inside a critical section.
A critical section is a code segment that can be accessed by only one process at a time. The
critical section contains shared variables that need to be synchronized to maintain the
consistency of data variables. So the critical section problem means designing a way for
cooperative processes to access shared resources without creating data inconsistencies.
A cooperating process is typically structured into four parts: an entry section, the critical
section itself, an exit section, and a remainder section. In the entry section, the process
requests entry into the critical section.
Any solution to the critical section problem must satisfy three requirements:
Mutual Exclusion: If a process is executing in its critical section, then no other
process is allowed to execute in the critical section.
Progress: If no process is executing in the critical section and other processes are
waiting outside the critical section, then only those processes that are not executing
in their remainder section can participate in deciding which will enter in the critical
section next, and the selection cannot be postponed indefinitely.
Bounded Waiting: A bound must exist on the number of times that other processes
are allowed to enter their critical sections after a process has made a request to enter
its critical section and before that request is granted.
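As a sketch of how mutual exclusion is enforced in practice, Python's `threading.Lock` can play the role of the entry and exit sections, so that only one thread at a time executes the critical section:

```python
import threading

counter = 0                      # shared variable
lock = threading.Lock()

def worker(iterations):
    global counter
    for _ in range(iterations):
        with lock:               # entry section: acquire the lock (blocks if held)
            counter += 1         # critical section: update the shared variable
        # exit section: the lock is released automatically by 'with'

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter is exactly 400000; without the lock, interleaved read-modify-write
# sequences could lose updates and leave counter below 400000
```

The lock satisfies mutual exclusion directly, and because every holder eventually releases it, waiting threads also make progress.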