
UNIT 2: Process Management

Introduction of Process Management

A process is a program in execution. For example, when we write a program in C or C++ and compile
it, the compiler creates binary code. The original code and the binary code are both programs. When
we actually run the binary code, it becomes a process. A process is an 'active' entity, while a
program is a 'passive' entity. A single program can create many processes when run multiple times;
for example, when we open a .exe or binary file multiple times, multiple instances begin (multiple
processes are created).

Process management includes various tools and techniques such as process mapping, process
analysis, process improvement, process automation, and process control. By applying these tools and
techniques, organizations can streamline their processes, eliminate waste, and improve productivity.
Overall, process management is a critical aspect of modern business operations and can help
organizations achieve their goals and stay competitive in today’s rapidly changing marketplace.

What is Process Management?

If the operating system supports multiple users, then these services are very important. In this
regard, the operating system has to keep track of all the running processes, schedule them, and
dispatch them one after another; however, each user should feel that they have full control of the CPU.
Process management refers to the techniques and strategies used by organizations to design,
monitor, and control their business processes to achieve their goals efficiently and effectively. It
involves identifying the steps involved in completing a task, assessing the resources required for each
step, and determining the best way to execute the task.

Process management can help organizations improve their operational efficiency, reduce costs,
increase customer satisfaction, and maintain compliance with regulatory requirements. It involves
analyzing the performance of existing processes, identifying bottlenecks, and making changes to
optimize the process flow.

Some of the system calls in this category are as follows.

• Create a child process identical to the parent

• Terminate a process

• Wait for a child process to terminate

• Change the priority of the process

• Block the process

• Ready the process

• Dispatch a process

• Suspend a process

• Resume a process

• Delay a process

• Fork a process
What Does a Process Look Like in Memory?

[Figure: layout of a process in memory, divided into the text, data, heap, and stack sections]

Explanation of Process

• Text Section: Contains the compiled program code. The current activity is represented by
the value of the Program Counter.

• Stack: The stack contains temporary data, such as function parameters, return addresses,
and local variables.

• Data Section: Contains the global and static variables.

• Heap Section: Memory dynamically allocated to the process during its run time.

Key Components of Process Management

Below are some key components of process management.

• Process mapping: Creating visual representations of processes to understand how tasks flow,
identify dependencies, and uncover improvement opportunities.

• Process analysis: Evaluating processes to identify bottlenecks, inefficiencies, and areas for
improvement.

• Process redesign: Making changes to existing processes or creating new ones to optimize
workflows and enhance performance.

• Process implementation: Introducing the redesigned processes into the organization and
ensuring proper execution.

• Process monitoring and control: Tracking process performance, measuring key metrics, and
implementing control mechanisms to maintain efficiency and effectiveness.
Importance of Process Management System

Understanding the significance of process management is critical for any manager overseeing a
firm. It does more than just make workflows smooth: process management makes sure that every
part of business operations runs as efficiently as possible.

By implementing business process management, we can avoid errors caused by inefficient manual
work and cut down on time lost to repetitive operations. It also keeps data loss and process-step
errors at bay. Additionally, process management guarantees that resources are employed effectively,
increasing the cost-effectiveness of the company. Process management not only improves business
operations, but also ensures that procedures meet the needs of clients, which raises income and
improves customer satisfaction.

Advantages of Process Management

• Improved Efficiency: Process management can help organizations identify bottlenecks and
inefficiencies in their processes, allowing them to make changes to streamline workflows and
increase productivity.

• Cost Savings: By identifying and eliminating waste and inefficiencies, process management
can help organizations reduce costs associated with their business operations.

• Improved Quality: Process management can help organizations improve the quality of their
products or services by standardizing processes and reducing errors.

• Increased Customer Satisfaction: By improving efficiency and quality, process management
can enhance the customer experience and increase satisfaction.

• Compliance with Regulations: Process management can help organizations comply with
regulatory requirements by ensuring that processes are properly documented, controlled,
and monitored.

Disadvantages of Process Management

• Time and Resource Intensive: Implementing and maintaining process management
initiatives can be time-consuming and require significant resources.

• Resistance to Change: Some employees may resist changes to established processes, which
can slow down or hinder the implementation of process management initiatives.

• Overemphasis on Process: Overemphasis on the process can lead to a lack of focus on
customer needs and other important aspects of business operations.

• Risk of Standardization: Standardizing processes too much can limit flexibility and creativity,
potentially stifling innovation.

• Difficulty in Measuring Results: Measuring the effectiveness of process management
initiatives can be difficult, making it challenging to determine their impact on organizational
performance.

GATE-CS-Questions on Process Management

Q.1: Which of the following need not necessarily be saved on a context switch between processes?
(GATE-CS-2000)
(A) General purpose registers

(B) Translation lookaside buffer

(C) Program counter

(D) All of the above

Answer: (B)

In a process context switch, the state of the first process must be saved somehow, so that when the
scheduler gets back to the execution of the first process, it can restore this state and continue. The
state of the process includes all the registers that the process may be using, especially the program
counter, plus any other operating system-specific data that may be necessary. A translation look-
aside buffer (TLB) is a CPU cache that memory management hardware uses to improve virtual
address translation speed. A TLB has a fixed number of slots that contain page table entries, which
map virtual addresses to physical addresses. On a context switch, some TLB entries can become
invalid, since the virtual-to-physical mapping is different. The simplest strategy to deal with this is to
completely flush the TLB.

Q.2: The time taken to switch between user and kernel modes of execution is t1 while the time
taken to switch between two processes is t2. Which of the following is TRUE? (GATE-CS-2011)

(A) t1 > t2

(B) t1 = t2

(C) t1 < t2

(D) nothing can be said about the relation between t1 and t2.

Answer: (C)

Process switching involves a mode switch plus saving and restoring the full process context, so
t1 < t2. Context switching can occur only in kernel mode.

FAQs on Process Management

Q.1: Why is process management important?

Answer:

Process management is crucial for organizations as it enables them to identify inefficiencies,
eliminate waste, and enhance productivity. By standardizing processes, eliminating bottlenecks, and
implementing continuous improvement practices, organizations can achieve better results, meet
customer expectations, and gain a competitive advantage.

Q.2: What is the main difference between process manager and memory manager?

Answer:

Processes in the system are managed by the process manager, which is also responsible for sharing
the CPU, whereas memory in the system is managed by the memory manager, which is responsible
for the allocation and deallocation of memory, virtual memory management, etc.

Q.3: What are process management examples?

Answer:
Some process management examples are: eliminating redundant processes, automating workflows, and
improving communication between multiple processes with appropriate business tools.

Process Table and Process Control Block (PCB)

While creating a process, the operating system performs several operations. To identify the
processes, it assigns a process identification number (PID) to each process. As the operating system
supports multi-programming, it needs to keep track of all the processes. For this task, the process
control block (PCB) is used to track the process’s execution status. Each block of memory contains
information about the process state, program counter, stack pointer, status of opened
files, scheduling algorithms, etc.

All this information is required and must be saved when the process is switched from one state to
another. When the process makes a transition from one state to another, the operating system must
update the information in the process’s PCB. A process control block (PCB) contains information
about the process, i.e., registers, quantum, priority, etc. The process table is an array of PCBs;
that is, it logically contains a PCB for each of the current processes in the system.

1. Pointer: It is a stack pointer that is required to be saved when the process is switched from
one state to another to retain the current position of the process.

2. Process state: It stores the respective state of the process.

3. Process number: Every process is assigned a unique id known as process ID or PID which
stores the process identifier.

4. Program counter: It stores the counter, which contains the address of the next instruction
that is to be executed for the process.
5. Registers: These are the CPU registers, which include the accumulator, base and index
registers, and general-purpose registers.

6. Memory limits: This field contains information about the memory-management system used
by the operating system. This may include page tables, segment tables, etc.

7. Open files list : This information includes the list of files opened for a process.
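The fields above can be sketched as a C structure; this is an illustrative layout only, with hypothetical field names and sizes, not the PCB of any real kernel:

```c
/* An illustrative PCB mirroring the seven fields listed above. */
#include <stdint.h>

enum proc_state { P_NEW, P_READY, P_RUNNING, P_WAITING, P_TERMINATED };

struct pcb {
    struct pcb     *next;                /* 1. pointer linking PCBs in scheduler queues */
    enum proc_state state;               /* 2. process state */
    int             pid;                 /* 3. process number (PID) */
    uintptr_t       program_counter;     /* 4. address of the next instruction */
    uintptr_t       registers[16];       /* 5. saved CPU register contents */
    uintptr_t       mem_base, mem_limit; /* 6. memory limits */
    int             open_files[16];      /* 7. open-files list (descriptors) */
};

/* The process table is then simply an array of PCBs. */
struct pcb process_table[64];
```

A real kernel's equivalent structure (for example, Linux's task_struct) holds far more state, but the shape is the same.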

Additional Points to Consider for Process Control Block (PCB)

• Interrupt handling: The PCB also contains information about the interrupts that a process
may have generated and how they were handled by the operating system.

• Context switching: The process of switching from one process to another is called context
switching. The PCB plays a crucial role in context switching by saving the state of the current
process and restoring the state of the next process.

• Real-time systems: Real-time operating systems may require additional information in the
PCB, such as deadlines and priorities, to ensure that time-critical processes are executed in a
timely manner.

• Virtual memory management: The PCB may contain information about a process’s virtual
memory management, such as page tables and page fault handling.

• Inter-process communication: The PCB can be used to facilitate inter-process
communication by storing information about shared resources and communication channels
between processes.

• Fault tolerance: Some operating systems may use multiple copies of the PCB to provide fault
tolerance in case of hardware failures or software errors.

Advantages

1. Efficient process management: The process table and PCB provide an efficient way to
manage processes in an operating system. The process table contains all the information
about each process, while the PCB contains the current state of the process, such as the
program counter and CPU registers.

2. Resource management: The process table and PCB allow the operating system to manage
system resources, such as memory and CPU time, efficiently. By keeping track of each
process’s resource usage, the operating system can ensure that all processes have access to
the resources they need.

3. Process synchronization: The process table and PCB can be used to synchronize processes in
an operating system. The PCB contains information about each process’s synchronization
state, such as its waiting status and the resources it is waiting for.

4. Process scheduling: The process table and PCB can be used to schedule processes for
execution. By keeping track of each process’s state and resource usage, the operating system
can determine which processes should be executed next.

Disadvantages
1. Overhead: The process table and PCB can introduce overhead and reduce system
performance. The operating system must maintain the process table and PCB for each
process, which can consume system resources.

2. Complexity: The process table and PCB can increase system complexity and make it more
challenging to develop and maintain operating systems. The need to manage and
synchronize multiple processes can make it more difficult to design and implement system
features and ensure system stability.

3. Scalability: The process table and PCB may not scale well for large-scale systems with many
processes. As the number of processes increases, the process table and PCB can become
larger and more difficult to manage efficiently.

4. Security: The process table and PCB can introduce security risks if they are not implemented
correctly. Malicious programs can potentially access or modify the process table and PCB to
gain unauthorized access to system resources or cause system instability.

Note on miscellaneous accounting and status data: this PCB field includes information about the
amount of CPU used, time constraints, job or process numbers, etc. The process control block also
stores the register contents, known as the execution context of the processor, saved when the
process was blocked from running. This saved execution context enables the operating system to
restore a process’s execution context when the process returns to the running state. When the
process makes a transition from one state to another, the operating system updates the
information in the process’s PCB. The operating system maintains pointers to each process’s
PCB in a process table so that it can access the PCB quickly.

Process Table and Process Control Block – FAQ’s

1: What is a Process Control Block (PCB)?

A process control block (PCB) is a data structure used by operating systems to store important
information about running processes. It contains information such as the unique identifier of the
process (Process ID or PID), current status, program counter, CPU registers, memory allocation, open
file descriptors, and accounting information. The PCB is critical to context switching because it
allows the operating system to efficiently manage and control multiple processes.
2: What information does a Process Control Block (PCB) contain?

A process control block (PCB) stores various information about a process so that the operating
system can manage it properly.

A typical PCB contains the following components:

• Process ID (PID): a unique identifier for each process.

• Process Status: indicates whether the process is currently running, ready, waiting, or stopped.

• Program Counter: stores the address of the next instruction to be executed.

• CPU Registers: stores the current CPU register values for context switching.

• Memory Management Information: information about memory allocation and usage for the process.

• I/O Information: tracks the status of the I/O devices assigned to the process.

• Accounting Information: resource usage statistics such as CPU time, I/O time, etc.

3: How does the Process Control Block (PCB) facilitate context switching?

Context switching is the process of saving the current state of a running process and loading the state
of another process so that the CPU can switch its execution from one process to another. The process
control block (PCB) plays a key role in context switching because it contains all relevant information
about the process. When the operating system decides to switch to another process, it saves the
current process’s state, including CPU registers and program counter, into its PCB. It then loads the
PCB of the next process, restores its state, and resumes execution from where it left off. This
seamless switching between processes allows the operating system to create the illusion of
simultaneous execution, even though each processor core can run only one process at a time.

Context Switching in OS

The process of context switching involves the storage of the context/state of a given process in a
way that it can be reloaded whenever required, and its execution can be then resumed from the
very same point as earlier. It is basically a feature of the multitasking OS, and it allows the
sharing of just a single CPU by multiple processes.

What is Context Switching in OS?

Context switching refers to a technique/method used by the OS to switch processes from a given
state to another one for the execution of its function using the CPUs present in the system.
When switching is performed in the system, the old running process’s status is stored as
registers, and the CPU is assigned to a new process for the execution of its tasks. While new
processes are running in a system, the previous ones must wait in the ready queue. The old
process’s execution begins at that particular point at which another process happened to stop it.
It describes the features of a multitasking OS where multiple processes share a similar CPU to
perform various tasks without the requirement for further processors in the given system.

Why Do We Need Context Switching?

Context switching helps in sharing a single CPU among all processes so as to complete their
execution and store the status of the system’s tasks. Whenever a process is reloaded, its
execution resumes from the very point at which it was previously stopped.

Here are some of the reasons why an OS would need context switching:
1. A process cannot be switched to another one directly. Context switching lets an OS
switch between multiple processes so that each can use the CPU to accomplish its tasks
while its context is stored; the process can then be resumed from the same point later
on. If we do not store the context of the currently running process, that information
is lost when switching between processes.

2. If a high-priority process enters the ready queue, the currently running process is
stopped (preempted) so that the high-priority process can complete its tasks in the
system.

3. If a running process needs I/O resources, another process is switched in to use the
CPU. When the I/O requirement is met, the previous process moves to the ready state to
wait for CPU execution. Context switching stores the process’s state so it can resume
its tasks later; otherwise, the process would have to restart its execution from the
very beginning.
4. If an interrupt occurs while a process runs, the status of the process is saved (its
registers stored) using context switching. After the interrupt is handled, the process
switches from the waiting to the ready state so it can resume its execution later from
the very point at which the interrupt occurred.

5. Using context switching, a single CPU can handle multiple process requests seemingly
simultaneously, without requiring any additional processors.

Examples of Context Switching

Suppose that numerous processes are stored in PCBs (Process Control Blocks), and one process is in
its running state, executing its task on the CPU. As this process runs, another one arrives in the
ready queue with a comparatively higher priority for completing its assigned task. In this case,
context switching replaces the currently running process with the new one that requires the CPU to
finish its assigned tasks. When a process is switched out, the context switch saves the old
process’s status in its PCB. Whenever that process is later reloaded onto the CPU, it resumes
execution from the saved point. If we did not save the process’s state, we would have to begin its
execution from the very beginning. This is how context switching helps an OS switch between
processes and store or reload them.

Context Switching Triggers

Here are the triggers that lead to context switches in a system:

Interrupts: When an interrupt occurs (for example, after the CPU has requested data to be read
from a disk), the system context-switches to the interrupt handler, saving only the part of the
context that is needed so that the interrupt can be handled quickly.
Multitasking: Context switching is the defining characteristic of multitasking. It allows one
process to be switched off the CPU so that another process can run. When a process is switched
out, its old state is saved so that its execution can later resume at the very same point.
Kernel/User Switch: This occurs when the OS switches between kernel mode and user mode.

What is the PCB?

PCB refers to a data structure used in the OS to store all the information related to a process.
For instance, whenever a process is created in the OS, its information is recorded in the PCB;
the PCB is then updated on process switches and when the process terminates.

Steps of Context Switching

Several steps are involved in the context switching of a process. The diagram given below
represents context switching between two processes, P1 and P2, in case of an interrupt, I/O
need, or the occurrence of a priority-based process in the PCB’s ready queue.

As you can see in the illustration above, the process P1 is initially running on the CPU for the
execution of its task. At the very same time, P2, another process, is in its ready state. If an
interruption or error has occurred or if the process needs I/O, the P1 process would switch the
state from running to waiting.

Before the change of the state of the P1 process, context switching helps in saving the context of
the P1 process as registers along with the program counter (to PCB1). Then it loads the P2
process state from its ready state (of PCB2) to its running state.

Here are the steps taken to switch from P1 to P2:

1. The context switch saves P1’s state, i.e., the program counter and registers, into its
PCB (PCB1) while P1 is in the running state.

2. It then updates PCB1 and moves process P1 to the appropriate queue, such as the ready
queue, a waiting queue, or an I/O queue.

3. Then, another process enters the running state. A new process is selected from the ready
queue, for example the process that needs to be executed next or one with a higher
priority.

4. Now the PCB for the selected process P2 must be updated. This involves switching the
process’s state to running from whichever state it was in, such as ready, blocked, or
suspended.

5. If the CPU has executed process P2 before, its previously saved status must be retrieved
so that its execution resumes at the very point at which the earlier interrupt occurred.
In a similar manner, process P2 is later switched off the CPU so that process P1 can resume its
execution. Process P1 is reloaded from PCB1 into the running state to resume its assigned task at
the very same point. Otherwise the data would be lost, and when the process ran again it would
have to start execution from the very beginning.
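The steps above can be sketched in C as data movement between PCBs. This is purely conceptual: a real kernel performs the switch in architecture-specific assembly, and every name below is invented for illustration.

```c
/* A conceptual sketch of the five context-switch steps described above. */
struct context { unsigned long pc; unsigned long regs[16]; };
struct pcb { int pid; int state; struct context ctx; };  /* state: 0=ready, 1=running */

struct context cpu;   /* stands in for the real CPU's program counter and registers */

void context_switch(struct pcb *old, struct pcb *next) {
    old->ctx = cpu;        /* steps 1-2: save old process's PC and registers into its PCB */
    old->state = 0;        /* ...and move it to the ready (or waiting) queue */
    next->state = 1;       /* steps 3-4: the selected process becomes the running one */
    cpu = next->ctx;       /* step 5: restore its saved context so it resumes where it stopped */
}
```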

Process State

As a process executes, it changes state. The state of a process is defined in part by the current
activity of that process. A process may be in one of the following states:

• New. The process is being created.

• Running. Instructions are being executed.

• Waiting. The process is waiting for some event to occur (such as an I/O completion or
reception of a signal).

• Ready. The process is waiting to be assigned to a processor.

• Terminated. The process has finished execution.

It is important to realize that only one process can be running on any processor at any instant. Many
processes may be ready and waiting, however. The state diagram corresponding to these states is
presented in Figure 3.2.
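These states and the transitions of the state diagram can be sketched as a small C helper; the transition set follows the description above, and the names are our own:

```c
/* The five process states and the legal transitions between them. */
#include <stdbool.h>

enum pstate { S_NEW, S_READY, S_RUNNING, S_WAITING, S_TERMINATED };

/* Returns true if the state diagram allows moving from 'from' to 'to'. */
bool can_transition(enum pstate from, enum pstate to) {
    switch (from) {
    case S_NEW:     return to == S_READY;          /* admitted by the OS */
    case S_READY:   return to == S_RUNNING;        /* scheduler dispatch */
    case S_RUNNING: return to == S_READY           /* interrupted / preempted */
                        || to == S_WAITING         /* waits for I/O or an event */
                        || to == S_TERMINATED;     /* exit */
    case S_WAITING: return to == S_READY;          /* event completed */
    default:        return false;                  /* terminated is final */
    }
}
```

Note that a waiting process cannot go straight back to running: it must first become ready and be dispatched again.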

Thread in Operating System

Within a program, a Thread is a separate execution path. It is a lightweight process that the
operating system can schedule and run concurrently with other threads. The operating system
creates and manages threads, and they share the same memory and resources as the program that
created them. This enables multiple threads to collaborate and work efficiently within a single
program.

A thread is a single sequence stream within a process. Threads are also called lightweight processes,
as they possess some of the properties of processes. Each thread belongs to exactly one process. In
an operating system that supports multithreading, a process can consist of many threads. However,
threads execute truly in parallel only when more than one CPU is available; on a single CPU, the
threads must take turns via context switches.

Why Do We Need Thread?

• Threads run in parallel, improving application performance. Each such thread has its own
CPU state and stack, but they share the address space of the process and the environment.

• Threads can share common data so they do not need to use interprocess communication.
Like the processes, threads also have states like ready, executing, blocked, etc.

• Priority can be assigned to the threads just like the process, and the highest priority thread is
scheduled first.

• Each thread has its own Thread Control Block (TCB). Like the process, a context switch occurs
for the thread, and register contents are saved in (TCB). As threads share the same address
space and resources, synchronization is also required for the various activities of the thread.

Why Multi-Threading?

A thread is also known as a lightweight process. The idea is to achieve parallelism by dividing a
process into multiple threads. For example, in a browser, multiple tabs can be different threads. MS
Word uses multiple threads: one thread to format the text, another thread to process inputs, etc.
More advantages of multithreading are discussed below.

Multithreading is a technique used in operating systems to improve the performance and
responsiveness of computer systems. Multithreading allows multiple threads (i.e., lightweight
processes) to share the same resources of a single process, such as the CPU, memory, and I/O
devices.

Difference Between Process and Thread

The primary difference is that threads within the same process run in a shared memory space, while
processes run in separate memory spaces. Threads are not independent of one another like
processes are, and as a result, threads share with other threads their code section, data section, and
OS resources (like open files and signals). But, like a process, a thread has its own program counter
(PC), register set, and stack space.
Advantages of Thread

• Responsiveness: If a process is divided into multiple threads and one thread completes its
execution, its output can be returned immediately.
• Faster context switch: Context switch time between threads is lower compared to the
process context switch. Process context switching requires more overhead from the CPU.

• Effective utilization of multiprocessor systems: If we have multiple threads in a single
process, then we can schedule them on multiple processors. This makes process execution
faster.

• Resource sharing: Resources like code, data, and files can be shared among all threads
within a process. Note: Stacks and registers can’t be shared among the threads. Each thread
has its own stack and registers.

• Communication: Communication between multiple threads is easier, as the threads share a
common address space, while between processes we must follow specific inter-process
communication techniques.

• Enhanced throughput of the system: If a process is divided into multiple threads, and each
thread function is considered as one job, then the number of jobs completed per unit of time
is increased, thus increasing the throughput of the system.

Types of Threads

Threads are of two types. These are described below.

• User Level Thread

• Kernel Level Thread

User Level Thread and Kernel Level Thread


User Level Threads

User Level Thread is a type of thread that is not created using system calls. The kernel plays no
part in the management of user-level threads: they are implemented entirely in user space by a
thread library, and the kernel sees the whole process as a single-threaded process. Let’s look at
the advantages and disadvantages of user-level threads.

Example – user thread libraries include POSIX threads (Pthreads) and Mach C-threads.

Advantages of User-Level Threads

• Implementation of the User-Level Thread is easier than Kernel Level Thread.

• Context Switch Time is less in User Level Thread.

• User-Level Thread is more efficient than Kernel-Level Thread.

• Because of the presence of only Program Counter, Register Set, and Stack Space, it has a
simple representation.

Disadvantages of User-Level Threads

• There is a lack of coordination between Thread and Kernel.

• In case of a page fault, the whole process can be blocked.

Kernel Level Threads

A Kernel Level Thread is a type of thread that the operating system recognizes and manages
directly. The kernel maintains its own thread table to keep track of all threads in the system,
and the operating system kernel handles thread management. Kernel threads therefore have somewhat
longer context-switching times.

Example – examples of kernel-level threads are Java threads, POSIX threads, etc.

Advantages of Kernel-Level Threads

• The kernel has up-to-date information on all threads.

• Applications that block frequently are handled better by kernel-level threads.

• Whenever a process requires more time to run, the kernel can allocate more time to it.

Disadvantages of Kernel-Level threads

• Kernel-Level Thread is slower than User-Level Thread.

• Implementation of this type of thread is a little more complex than a user-level thread.
Difference Between User-Level and Kernel-Level Threads

1. Implemented by – User threads are implemented by users; kernel threads are implemented by the operating system (OS).

2. Recognition – The operating system does not recognize user-level threads; kernel threads are recognized by the operating system.

3. Implementation – Implementing user threads is easy; implementing kernel-level threads is complicated.

4. Context switch time – Context switch time is less for user-level threads and more for kernel-level threads.

5. Hardware support – A user-level context switch requires no hardware support; kernel-level threads need hardware support.

6. Blocking operation – If one user-level thread performs a blocking operation, the entire process is blocked. If one kernel thread performs a blocking operation, another thread can continue execution.

7. Multithreading – Multithreaded applications built on user-level threads cannot take advantage of multiprocessing; kernels themselves can be multithreaded.

8. Creation and management – User-level threads can be created and managed more quickly; kernel-level threads take more time to create and manage.

9. Operating system – Any operating system can support user-level threads; kernel-level threads are operating-system-specific.

10. Thread management – For user-level threads, the thread library contains the code for thread creation, message passing, thread scheduling, data transfer, and thread destruction. For kernel-level threads, the application code contains no thread-management code; it is merely an API to the kernel mode. The Windows operating system makes use of this feature.

11. Example – User-level: Java threads, POSIX threads. Kernel-level: Windows, Solaris.

12. Advantages – User-level threads are simple and quick to create, can run on any operating system, perform better than kernel threads (since no system calls are needed to create them), and switching between them does not require kernel-mode privileges. Kernel-level threads allow multiple threads of the same process to be scheduled on different processors, allow kernel routines themselves to be multithreaded, and when one kernel-level thread is blocked, the kernel can schedule another thread of the same process.

13. Disadvantages – Multithreaded applications on user-level threads cannot benefit from multiprocessing, and if a single user-level thread performs a blocking operation, the entire process is halted. For kernel-level threads, transferring control from one thread to another within a process requires a mode switch to kernel mode, and they take more time to create and manage than user-level threads.

14. Memory management – In user-level threads, each thread has its own stack, but all threads share the same address space. Kernel-level threads also have their own stacks and, because the kernel manages them individually, they are better isolated from each other.

15. Fault tolerance – User-level threads are less fault-tolerant than kernel-level threads: if a user-level thread crashes, it can bring down the entire process. Kernel-level threads are managed independently, so if one thread crashes, it does not necessarily affect the others.

16. Resource utilization – User-level threads do not take full advantage of system resources, as they have no direct access to system-level features such as I/O operations. Kernel-level threads can access system-level features like I/O operations, so they can take full advantage of system resources.

17. Portability – User-level threads are more portable than kernel-level threads; kernel-level threads are less portable.

Multiprogramming

We have many processes ready to run. There are two types of multiprogramming:

1. Preemption – The process is forcefully removed from the CPU. Preemption is also called time sharing or multitasking.

2. Non-preemption – Processes are not removed from the CPU until they complete their execution. Once the CPU is given to a process, control cannot be taken back forcibly until the process releases it.

Preemptive and Non-Preemptive Scheduling

You will discover the distinction between preemptive and non-preemptive scheduling in this article.
But first, you need to understand preemptive and non-preemptive scheduling before going over the
differences.
Preemptive Scheduling

Preemptive scheduling is used when a process switches from the running state to the ready state or
from the waiting state to the ready state. The resources (mainly CPU cycles) are allocated to the
process for a limited amount of time and then taken away, and the process is again placed back in
the ready queue if that process still has CPU burst time remaining. That process stays in the ready
queue till it gets its next chance to execute.

Algorithms based on preemptive scheduling are Round Robin (RR), Shortest Remaining Time First
(SRTF), Priority (preemptive version), etc.
Preemptive scheduling has a number of advantages and disadvantages. The following are preemptive
scheduling’s benefits and drawbacks:

Advantages

1. Because a process cannot monopolize the processor, it is a more reliable method.

2. An arriving event can interrupt an ongoing task, so higher-priority work is attended to promptly.

3. The average response time is improved.

4. This method is more advantageous in a multiprogramming environment.

5. The operating system ensures that every process gets a fair share of CPU time.

Disadvantages

1. It consumes additional computational resources.

2. Suspending the running process, switching the context, and dispatching the new incoming process all take extra time.

3. A low-priority process may have to wait a long time if multiple high-priority processes keep arriving.
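The preemptive behaviour described above can be sketched with a tiny Round Robin simulation. The process names, burst times, and time quantum below are hypothetical, and all processes are assumed to arrive at time 0:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate Round Robin; return the completion time of each process.

    bursts: {pid: burst_time}; all processes are assumed to arrive at time 0.
    """
    remaining = dict(bursts)
    ready = deque(bursts)                    # ready queue, FIFO order
    clock, completion = 0, {}
    while ready:
        pid = ready.popleft()
        run = min(quantum, remaining[pid])   # run for at most one time slice
        clock += run
        remaining[pid] -= run
        if remaining[pid] == 0:
            completion[pid] = clock          # process finished its burst
        else:
            ready.append(pid)                # preempted: back of the queue
    return completion

# Three hypothetical processes, time quantum of 2 units.
print(round_robin({"P1": 5, "P2": 3, "P3": 1}, quantum=2))
# → {'P3': 5, 'P2': 8, 'P1': 9}
```

Note how the shortest process (P3) finishes first even though it was last in the queue: each longer process is forcibly preempted after its quantum expires, which is exactly the behaviour non-preemptive schedulers forbid.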

Non-Preemptive Scheduling

Non-preemptive scheduling is used when a process terminates, or when a process switches from the running state to the waiting state. In this scheduling, once the resources (CPU cycles) are allocated to a process, the process holds the CPU until it terminates or reaches a waiting state. Non-preemptive scheduling does not interrupt a process running on the CPU in the middle of its execution; instead, it waits until the process completes its CPU burst and only then allocates the CPU to another process.

Algorithms based on non-preemptive scheduling are Shortest Job First (SJF, in its non-preemptive form), Priority (non-preemptive version), etc.
Non-preemptive scheduling has both advantages and disadvantages. The following are non-
preemptive scheduling’s benefits and drawbacks:

Advantages

1. It has a minimal scheduling burden.

2. It is a very easy procedure.

3. Less computational resources are used.

4. It has a high throughput rate.

Disadvantages

1. Its response time to processes is high.

2. Bugs can cause a computer to freeze up.
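A minimal sketch of this non-preemptive behaviour, using Shortest Job First with all processes assumed to arrive at time 0 (the process names and burst times are hypothetical):

```python
def sjf_nonpreemptive(bursts):
    """Non-preemptive SJF for processes that all arrive at time 0.

    Returns {pid: waiting_time}. Once dispatched, a process keeps
    the CPU until its entire burst finishes (no preemption).
    """
    clock, waiting = 0, {}
    # Shortest burst first; ties broken by pid for determinism.
    for pid, burst in sorted(bursts.items(), key=lambda kv: (kv[1], kv[0])):
        waiting[pid] = clock        # time spent sitting in the ready queue
        clock += burst              # runs to completion, uninterrupted
    return waiting

waits = sjf_nonpreemptive({"P1": 6, "P2": 8, "P3": 7, "P4": 3})
print(waits)                              # → {'P4': 0, 'P1': 3, 'P3': 9, 'P2': 16}
print(sum(waits.values()) / len(waits))   # average waiting time → 7.0
```

Because the scheduler never interrupts a running burst, the only decision point is at each process completion, which keeps the scheduling overhead minimal, at the cost of making later short jobs wait behind whatever is currently running.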

Key Differences Between Preemptive and Non-Preemptive Scheduling

1. In preemptive scheduling, the CPU is allocated to the processes for a limited time whereas,
in Non-preemptive scheduling, the CPU is allocated to the process till it terminates or
switches to the waiting state.

2. The executing process in preemptive scheduling is interrupted in the middle of execution when a higher-priority process arrives, whereas the executing process in non-preemptive scheduling is not interrupted in the middle of execution and runs until its execution completes.

3. Preemptive scheduling has the overhead of switching processes between the ready state and the running state (and vice versa) and of maintaining the ready queue, whereas non-preemptive scheduling has no overhead of switching a process from the running state to the ready state.

4. In preemptive scheduling, if high-priority processes frequently arrive in the ready queue, then a low-priority process has to wait for a long time and may starve. In non-preemptive scheduling, if the CPU is allocated to a process with a large burst time, then processes with small burst times may starve.

5. Preemptive scheduling attains flexibility by allowing the critical processes to access the CPU
as they arrive in the ready queue, no matter what process is executing currently. Non-
preemptive scheduling is called rigid as even if a critical process enters the ready queue the
process running CPU is not disturbed.

6. Preemptive Scheduling has to maintain the integrity of shared data that’s why it is cost
associative which is not the case with Non-preemptive Scheduling.

Comparison Chart

• Basic – Preemptive: resources (CPU cycles) are allocated to a process for a limited time. Non-preemptive: once resources (CPU cycles) are allocated to a process, the process holds them until it completes its burst time or switches to the waiting state.

• Interrupt – Preemptive: a process can be interrupted in between. Non-preemptive: a process cannot be interrupted until it terminates itself or its time is up.

• Starvation – Preemptive: if a high-priority process frequently arrives in the ready queue, a low-priority process may starve. Non-preemptive: if a process with a long burst time is running on the CPU, a later-arriving process with a shorter CPU burst may starve.

• Overhead – Preemptive: higher overhead, due to scheduling and frequent context switching. Non-preemptive: lower overhead, since context switching is infrequent.

• Flexibility – Preemptive: flexible. Non-preemptive: rigid.

• Cost – Preemptive: cost associated. Non-preemptive: no cost associated.

• CPU utilization – Preemptive: high. Non-preemptive: low.

• Waiting time – Preemptive: less. Non-preemptive: high.

• Response time – Preemptive: less. Non-preemptive: high.

• Decision making – Preemptive: decisions are made by the scheduler, based on priority and time-slice allocation. Non-preemptive: decisions are made by the process itself, and the OS just follows the process’s instructions.

• Process control – Preemptive: the OS has greater control over the scheduling of processes. Non-preemptive: the OS has less control over the scheduling of processes.

• Examples – Preemptive: Round Robin and Shortest Remaining Time First. Non-preemptive: First Come First Serve and Shortest Job First.

Conclusion

Preemptive scheduling is not better than non-preemptive scheduling, and vice versa. It all depends
on how a scheduling algorithm increases CPU utilization while decreasing average process waiting
time.

Frequently Asked Questions

Q.1: How is priority determined in preemptive scheduling?

Answer:

Preemptive scheduling systems often assign priority levels to tasks or processes. The priority can be
determined based on factors like the nature of the task, its importance, or its deadline. Higher-
priority tasks are given precedence and are allowed to execute before lower-priority tasks.

Q.2: What happens in non-preemptive scheduling if a task does not yield the CPU?

Answer:

In non-preemptive scheduling, if a task does not voluntarily yield the CPU, it can lead to a situation
called a “starvation” or “deadlock” where other tasks are unable to execute. To avoid such scenarios,
it’s important to ensure that tasks have mechanisms to release the CPU when necessary, such as
waiting for I/O operations or setting maximum execution times.

Operating System Scheduling algorithms

A Process Scheduler schedules different processes to be assigned to the CPU based on particular
scheduling algorithms. There are six popular process scheduling algorithms which we are going to
discuss in this part −

• First-Come, First-Served (FCFS) Scheduling


• Shortest-Job-Next (SJN) Scheduling

• Priority Scheduling

• Shortest Remaining Time

• Round Robin(RR) Scheduling

• Multiple-Level Queues Scheduling
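As a concrete illustration of the first policy in the list, here is a minimal First-Come, First-Served sketch; the process names, arrival times, and burst times are hypothetical:

```python
def fcfs(processes):
    """First-Come, First-Served scheduling.

    processes: list of (pid, arrival_time, burst_time) tuples,
    served strictly in order of arrival. Returns {pid: (start, finish)}.
    """
    clock, schedule = 0, {}
    for pid, arrival, burst in sorted(processes, key=lambda p: p[1]):
        clock = max(clock, arrival)        # CPU may sit idle until arrival
        schedule[pid] = (clock, clock + burst)
        clock += burst                     # runs to completion (non-preemptive)
    return schedule

print(fcfs([("P1", 0, 4), ("P2", 1, 3), ("P3", 2, 1)]))
# → {'P1': (0, 4), 'P2': (4, 7), 'P3': (7, 8)}
```

FCFS is trivially simple, but note the "convoy effect" visible even here: the short process P3 must wait behind every earlier arrival, which is what motivates the shortest-job policies discussed next.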


Shortest-Job-Next (SJN) Scheduling / Shortest Request Next (SRN)
Least Completed Next (LCN) Scheduling

LCN is a preemptive scheduling policy that always schedules the request that has consumed the least amount of processor time among all requests existing in the system. Thus, the nature of a job, whether CPU-bound or I/O-bound, does not influence its progress in the system. All requests make approximately equal progress in terms of the processor time consumed by them, and short jobs are guaranteed to finish ahead of long ones. This policy, however, has the drawback of starving long jobs of CPU attention.
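The LCN policy described above can be sketched as a tick-by-tick simulation; the job names, burst times, and tick size below are hypothetical:

```python
def lcn(bursts, tick=1):
    """Least Completed Next: at every tick, run the process that has
    consumed the least CPU time so far (ties broken by pid)."""
    remaining = dict(bursts)
    consumed = {pid: 0 for pid in bursts}
    clock, completion = 0, {}
    while remaining:
        # Pick the live process with the least CPU time consumed so far.
        pid = min(remaining, key=lambda p: (consumed[p], p))
        run = min(tick, remaining[pid])
        clock += run
        consumed[pid] += run
        remaining[pid] -= run
        if remaining[pid] == 0:
            del remaining[pid]
            completion[pid] = clock
    return completion

# A short job finishes well before a long one, as the text predicts.
print(lcn({"long": 5, "short": 2}))  # → {'short': 4, 'long': 7}
```

The short job completes at time 4 while the long job drags on to time 7: every process makes roughly equal progress per unit of elapsed time, which favours short requests and starves long ones of sustained CPU attention.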
Process Scheduler in Operating System

Process scheduling is responsible for selecting a process for the processor based on a scheduling method, as well as removing a process from the processor. It is a crucial component of a multiprogramming operating system. Process scheduling makes use of a variety of scheduling queues. The scheduler’s purpose is to implement the virtual machine so that each process appears to be running on its own computer to the user.
What is a Process Scheduler in an Operating System?

The process manager’s activity is process scheduling, which involves removing the running process
from the CPU and selecting another process based on a specific strategy. The scheduler’s purpose is
to implement the virtual machine so that each process appears to be running on its own computer to
the user.

Process scheduling is critical in a multiprogramming OS. In such an OS, multiple processes can be loaded into executable memory at the same time, and the loaded processes share the CPU using temporal multiplexing.

Types of Schedulers

1. Long-term – performance: Decides how many processes should be made to stay in the
ready state. This decides the degree of multiprogramming. Once a decision is taken it lasts
for a long time which also indicates that it runs infrequently. Hence it is called a long-term
scheduler.

2. Short-term – Context switching time: The short-term scheduler decides which process is to be executed next and then calls the dispatcher. The dispatcher is the software module that moves a process from the ready state to the running state and vice versa; in other words, it performs context switching. It runs frequently. The short-term scheduler is also called the CPU scheduler.

3. Medium-term – Swapping time: Suspension decision is taken by the medium-term


scheduler. The medium-term scheduler is used for swapping which is moving the process
from main memory to secondary and vice versa. The swapping is done to reduce degree of
multiprogramming.

Types of Process Schedulers

Process schedulers are divided into three categories.


1. Long-Term Scheduler or Job Scheduler

The job scheduler is another name for Long-Term scheduler. It selects processes from the pool (or
the secondary memory) and then maintains them in the primary memory’s ready queue.

The Multiprogramming degree is mostly controlled by the Long-Term Scheduler. The goal of the
Long-Term scheduler is to select the best mix of IO and CPU bound processes from the pool of jobs.

If the job scheduler selects more IO bound processes, all of the jobs may become stuck, the CPU will
be idle for the majority of the time, and multiprogramming will be reduced as a result. Hence, the
Long-Term scheduler’s job is crucial and could have a Long-Term impact on the system.

2. Short-Term Scheduler or CPU Scheduler

CPU scheduler is another name for Short-Term scheduler. It chooses one job from the ready queue
and then sends it to the CPU for processing.

To determine which job will be dispatched for execution, a scheduling method is utilised. The Short-Term scheduler’s task can be critical in the sense that if it chooses a job with a long CPU burst time, all subsequent jobs will have to wait in the ready queue for a long period. This is known as starvation, and it can occur if the Short-Term scheduler makes a mistake when selecting the job.

3. Medium-Term Scheduler

The switched-out processes are handled by the Medium-Term scheduler. If the running state
processes require some IO time to complete, the state must be changed from running to waiting.

This is accomplished using a Medium-Term scheduler. It stops the process from executing in order to
make space for other processes. Swapped out processes are examples of this, and the operation is
known as swapping. The Medium-Term scheduler here is in charge of stopping and starting
processes.

Swapping reduces the degree of multiprogramming; it is required in order to maintain a good blend of processes in the ready queue.

Comparison among Schedulers

• Type of scheduler – Long-Term: a job scheduler. Short-Term: a CPU scheduler. Medium-Term: a process-swapping scheduler.

• Speed – Long-Term: comparatively slower than the Short-Term scheduler. Short-Term: the fastest of the three. Medium-Term: in between the Long-Term and Short-Term schedulers.

• Purpose – Long-Term: helps in controlling the overall degree of multiprogramming. Short-Term: provides much less control over the degree of multiprogramming. Medium-Term: reduces the overall degree of multiprogramming.

• Presence in a time-sharing system – Long-Term: almost absent. Short-Term: minimal. Medium-Term: present.

• Function – Long-Term: selects processes from the pool and loads them into memory for execution. Short-Term: selects from among the processes that are ready to be executed. Medium-Term: can re-introduce a swapped-out process into memory so that its execution can be continued.
