o Simple Structure
o Monolithic Structure
o Layered Approach Structure
o Micro-Kernel Structure
o Exo-Kernel Structure
o Virtual Machines
SIMPLE STRUCTURE
It is the most straightforward operating system structure, but it lacks definition and is only appropriate for small and limited systems. Because the interfaces and levels of functionality in this structure are not well separated, application programs are able to access I/O routines directly, which may result in unauthorized access to I/O procedures.
This organizational structure is used by the MS-DOS operating system:
o There are four layers that make up the MS-DOS operating system, and each has its own set of features.
o These layers include ROM BIOS device drivers, MS-DOS device drivers, application programs, and system
programs.
o The MS-DOS operating system benefits from layering because each level can be defined independently and,
when necessary, can interact with one another.
o If the system is built in layers, it will be simpler to design, manage, and update. Because of this, simple structures
can be used to build constrained systems that are less complex.
o When a user program fails, the operating system as whole crashes.
o Because MS-DOS systems have a low level of abstraction, programs and I/O procedures are visible to end users,
giving them the potential for unwanted access.
MONOLITHIC STRUCTURE
The monolithic operating system controls all aspects of the operating system's operation, including file management, memory management, device management, and process management.
The kernel is the core of a computer's operating system (OS). It provides fundamental services to all other system components and serves as the main interface between the operating system and the hardware. Because the entire operating system runs as a single program in kernel mode, the kernel can directly access all hardware resources.
The monolithic operating system is often referred to as the monolithic kernel. Multiprogramming techniques such as batch processing and time-sharing increase a processor's utilization. Sitting directly on top of the hardware and in complete command of it, the monolithic kernel presents a virtual machine to the programs above it. This is an old style of operating system that was used in banks to carry out simple tasks like batch processing and time-sharing, which allows numerous users at different terminals to access the operating system.
The following diagram represents the monolithic structure:
LAYERED STRUCTURE
The OS is separated into layers or levels in this kind of arrangement. Layer 0 (the lowest layer) is the hardware, and layer N (the highest layer) is the user interface. These layers are organized hierarchically, with the top-level layers making use of the capabilities of the lower-level ones.
The functionalities of each layer are separated in this approach, and abstraction is also available. Because layered structures are hierarchical, debugging is simpler: all lower-level layers are debugged before the upper layer is examined. As a result, only the present layer has to be reviewed, since all the lower layers have already been verified.
The image below shows how OS is organized into layers:
Advantages of Layered Structure:
o Work duties are separated since each layer has its own functionality, and there is some amount of abstraction.
o Debugging is simpler because the lower layers are examined first, followed by the top layers.
MICRO-KERNEL STRUCTURE
The operating system is created using a micro-kernel framework that strips the kernel of all nonessential components. These optional components are implemented as system and user-level programs. Systems developed this way are called micro-kernels.
Each Micro-Kernel is created separately and is kept apart from the others. As a result, the system is now more
trustworthy and secure. If one Micro-Kernel malfunctions, the remaining operating system is unaffected and continues
to function normally.
The image below shows Micro-Kernel Operating System Structure:
Advantages of Micro-Kernel Structure:
o It enables portability of the operating system across platforms.
o Due to the isolation of each Micro-Kernel, it is reliable and secure.
o The reduced size of Micro-Kernels allows for successful testing.
o The remaining operating system remains unaffected and keeps running properly even if a component or Micro-
Kernel fails.
EXOKERNEL
An operating system called Exokernel was created at MIT with the goal of offering application-level management of
hardware resources. The exokernel architecture's goal is to enable application-specific customization by separating
resource management from protection. Exokernel size tends to be minimal due to its limited operability.
Because the OS sits between the programs and the actual hardware, it will always have an effect on the functionality,
performance, and breadth of the apps that are developed on it. By rejecting the idea that an operating system must
offer abstractions upon which to base applications, the exokernel operating system makes an effort to solve this issue.
The goal is to impose as few restrictions on developers' use of abstractions as possible while still allowing them the freedom to use abstractions when necessary. In the exokernel architecture, a single small kernel moves all hardware abstractions into untrusted libraries known as library operating systems. Exokernels differ from micro- and monolithic kernels in that their primary objective is to avoid forced abstraction.
Exokernel operating systems have a number of features, including:
o Enhanced application control support.
o Separates resource management from protection.
o Abstractions are moved safely into an untrusted library operating system.
o Exposes a low-level interface.
o Library operating systems offer portability and compatibility.
System Calls
What is a System Call
Programming interface to the services provided by the OS
Typically written in a high-level language (C or C++)
Mostly accessed by programs via a high-level Application Program Interface (API) rather than direct system call use. The three most common APIs are the Win32 API for Windows, the POSIX API for POSIX-based systems (including virtually all versions of UNIX, Linux, and Mac OS X), and the Java API for the Java virtual machine (JVM).
Why use APIs rather than system calls? (Note that the system-call names used throughout this text are generic.)
For example, assume an online travel service that aggregates information from multiple airlines. The travel service
interacts with the airline’s API. The API takes the requests to book seats and select meals from the travel service to the
airline system. Then it delivers the airline's responses back to the online travel service, and the travel service displays the
details to the users. This is a real-world application for an API.
File management system calls – Create, read, write, delete files, open and close files, set file attributes, etc.
Device management system calls – Request and release devices, set device attributes, etc.
Information management system calls – Get and set system data, get and set time and date, etc.
Communication system calls – Send and receive messages, transfer status information, create and delete
communication connections, etc.
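The file-management category above can be sketched with Python's os module, whose functions are thin wrappers around the underlying OS system calls (open, read, write, close, unlink on POSIX). The file name and contents are illustrative only.

```python
import os
import tempfile

# File-management system calls via Python's os module, which wraps the
# underlying OS calls. The file name and contents are arbitrary examples.
path = os.path.join(tempfile.mkdtemp(), "demo.txt")

fd = os.open(path, os.O_CREAT | os.O_WRONLY)  # create and open a file
os.write(fd, b"hello")                        # write to it
os.close(fd)                                  # close it

fd = os.open(path, os.O_RDONLY)               # open for reading
data = os.read(fd, 100)                       # read up to 100 bytes
os.close(fd)

os.unlink(path)                               # delete the file
print(data)
```

Each call here crosses into the kernel once; an API such as Python's higher-level `open()` typically issues the same system calls on the program's behalf.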
Batch Operating System
This type of operating system does not interact with the computer directly. There is an operator which takes similar jobs having the same requirements and groups them into batches. It is the responsibility of the operator to sort jobs with similar needs.
Advantages of Batch Operating System
Although it is otherwise very difficult to guess the time required for a job to complete, the processors of batch systems know how long a job will take while it is in the queue.
Multiple users can share the batch systems.
Disadvantages of Batch Operating System
The computer operators must be familiar with batch systems.
Batch systems are hard to debug.
It is sometimes costly.
The other jobs will have to wait for an unknown time if any job fails.
Examples of Batch Operating Systems: Payroll Systems, Bank Statements, etc.
Multiprogramming
A Multiprogramming Operating System can be simply described as one in which more than one program is present in main memory and any one of them can be kept in execution. This is basically used for better utilization of resources.
Advantages of Multi-Programming Operating System
Multi Programming increases the Throughput of the System.
It helps in reducing the response time.
Disadvantages of Multi-Programming Operating System
There is no facility for user interaction with the system while a program is executing.
Multiprocessing
A Multi-Processing Operating System is a type of operating system in which more than one CPU is used for the execution of processes. It improves the throughput of the system.
Advantages of Multi-Processing Operating System
It increases the throughput of the system.
As it has several processors, so, if one processor fails, we can proceed with another processor.
Disadvantages of Multi-Processing Operating System
Due to the multiple CPUs, it can be more complex and somewhat difficult to understand.
Multitasking
Advantages of Multi-Tasking Operating System
Multiple Programs can be executed simultaneously in Multi-Tasking Operating System.
It comes with proper memory management.
Disadvantages of Multi-Tasking Operating System
The system may get heated when heavy programs are run multiple times.
Time-Sharing Operating Systems
Each task is given some time to execute so that all the tasks work smoothly. Each user gets the time of the CPU as
they use a single system. These systems are also known as Multitasking Systems. The task can be from a single user or
different users also. The time that each task gets to execute is called quantum. After this time interval is over OS
switches over to the next task.
Advantages of Time-Sharing OS
Each task gets an equal opportunity.
Fewer chances of duplication of software.
CPU idle time can be reduced.
Resource Sharing: Time-sharing systems allow multiple users to share hardware resources such as the CPU,
memory, and peripherals, reducing the cost of hardware and increasing efficiency.
Disadvantages of Time-Sharing OS
Reliability problem.
One must have to take care of the security and integrity of user programs and data.
Data communication problem.
High Overhead: Time-sharing systems have a higher overhead than other operating systems due to the need for
scheduling, context switching, and other overheads that come with supporting multiple users.
Distributed Operating System
These types of operating systems are a recent advancement in the world of computer technology and are being widely accepted all over the world, and that too at a great pace. Various autonomous interconnected computers communicate with each other using a shared communication network. Independent systems possess their own memory unit and CPU, and are referred to as loosely coupled or distributed systems. These systems' processors differ in size and function. The major benefit of working with these types of operating systems is that a user can access files or software which are not actually present on his system but on some other system connected within this network, i.e., remote access is enabled within the devices connected in that network.
Examples of Distributed Operating Systems are LOCUS, etc.
Network Operating System
These systems run on a server and provide the capability to manage data, users, groups, security, applications, and other networking functions. These types of operating systems allow shared access to files, printers, security, applications, and other networking functions over a small private network. One more important aspect of Network Operating Systems is that all the users are well aware of the underlying configuration, of all other users within the network, their individual connections, etc., and that's why these computers are popularly known as tightly coupled systems.
Real-Time Operating System
These types of OSs serve real-time systems, in which the time interval required to process and respond to inputs is very small. This time interval is called the response time. Real-time systems are used when the time requirements are very strict, as in missile systems, air traffic control systems, robots, etc.
PROCESS MANAGEMENT
A program does nothing unless its instructions are executed by a CPU. A program in execution is called a process. In order to accomplish its task, a process needs computer resources.
There may exist more than one process in the system which may require the same resource at the same time.
Therefore, the operating system has to manage all the processes and the resources in a convenient and efficient way.
Some resources may be usable by only one process at a time to maintain consistency; otherwise the system can become inconsistent and deadlock may occur.
The operating system is responsible for the following activities in connection with Process Management
1. Scheduling processes and threads on the CPUs.
2. Creating and deleting both user and system processes.
3. Suspending and resuming processes.
4. Providing mechanisms for process synchronization.
5. Providing mechanisms for process communication.
The process, from its creation to completion, passes through various states. The minimum number of states is five.
The names of the states are not standardized although the process may be in one of the following states during
execution.
1. New
A program which is about to be brought by the OS into main memory is called a new process.
2. Ready
Whenever a process is created, it directly enters the ready state, in which it waits for the CPU to be assigned. The OS picks new processes from secondary memory and puts them all in main memory.
The processes which are ready for the execution and reside in the main memory are called ready state processes. There
can be many processes present in the ready state.
3. Running
One of the processes from the ready state will be chosen by the OS depending upon the scheduling algorithm. Hence, if we have only one CPU in our system, the number of running processes at any particular time will always be one. If we have n processors in the system, then we can have n processes running simultaneously.
4. Block or wait
From the Running state, a process can make the transition to the block or wait state depending upon the scheduling
algorithm or the intrinsic behavior of the process.
When a process waits for a certain resource to be assigned or for input from the user, the OS moves this process to the block or wait state and assigns the CPU to other processes.
5. Completion or termination
When a process finishes its execution, it comes to the termination state. All the context of the process (its Process Control Block) is deleted, and the process is terminated by the operating system.
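The five states and their legal transitions can be sketched as a small table-driven state machine. This is an illustration of the model above, not any real kernel's implementation; the state names and the `move` helper are invented for the example.

```python
# Sketch of the five-state process model:
# New -> Ready -> Running -> {Waiting -> Ready, Terminated}.
TRANSITIONS = {
    "new": {"ready"},                    # admitted by the long-term scheduler
    "ready": {"running"},                # dispatched by the CPU scheduler
    "running": {"ready", "waiting", "terminated"},
    "waiting": {"ready"},                # I/O or event completes
    "terminated": set(),                 # no way out of termination
}

def move(state, new_state):
    """Apply one transition, rejecting any move the model forbids."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

s = "new"
for nxt in ["ready", "running", "waiting", "ready", "running", "terminated"]:
    s = move(s, nxt)
print(s)  # terminated
```

Trying `move("new", "running")` raises an error, mirroring the rule that a new process must pass through the ready state before being dispatched.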
Operations on the Process
1. Creation
Once the process is created, it will be ready and come into the ready queue (main memory) and will be ready for the
execution.
2. Scheduling
Out of the many processes present in the ready queue, the operating system chooses one process and starts executing it. Selecting the process which is to be executed next is known as scheduling.
3. Execution
Once the process is scheduled for execution, the processor starts executing it. The process may move to the blocked or wait state during execution, in which case the processor starts executing other processes.
4. Deletion/killing
Once the purpose of the process gets over then the OS will kill the process. The Context of the process (PCB) will be
deleted and the process gets terminated by the Operating system.
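The creation-execution-deletion cycle can be observed from user space with Python's subprocess module: the parent asks the OS to create a child process, waits for its execution, and the OS reclaims the child's context when it terminates. The child's one-line script is an arbitrary example.

```python
import subprocess
import sys

# Creation: ask the OS to create a child process running a tiny script.
child = subprocess.Popen(
    [sys.executable, "-c", "print('child running')"],
    stdout=subprocess.PIPE, text=True,
)

# Execution and termination: wait for the child to finish.
out, _ = child.communicate()
print(out.strip())        # output produced by the child
print(child.returncode)   # 0 indicates normal termination
```

A non-zero return code would indicate abnormal termination, after which the OS still deletes the child's context (its PCB).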
Process Scheduling in OS (Operating System)
Operating system uses various schedulers for the process scheduling described below.
1. Long term scheduler
Long term scheduler is also known as job scheduler. It chooses the processes from the pool (secondary memory) and
keeps them in the ready queue maintained in the primary memory.
Long Term scheduler mainly controls the degree of Multiprogramming. The purpose of long term scheduler is to choose
a perfect mix of IO bound and CPU bound processes among the jobs present in the pool.
If the job scheduler chooses more IO bound processes then all of the jobs may reside in the blocked state all the time
and the CPU will remain idle most of the time. This will reduce the degree of Multiprogramming. Therefore, the Job of
long term scheduler is very critical and may affect the system for a very long time.
2. Short term scheduler
Short term scheduler is also known as CPU scheduler. It selects one of the jobs from the ready queue and dispatches it to the CPU for execution.
A scheduling algorithm is used to select which job is going to be dispatched for execution. The job of the short term scheduler can be very critical: if it selects a job whose CPU burst time is very high, then all the jobs after it will have to wait in the ready queue for a very long time. This problem is called starvation, and it may arise if the short term scheduler makes mistakes while selecting the job.
3. Medium term scheduler
Medium term scheduler takes care of the swapped-out processes. If a running process needs some I/O time for its completion, its state must be changed from running to waiting.
Medium term scheduler is used for this purpose. It removes the process from the running state to make room for the
other processes. Such processes are the swapped out processes and this procedure is called swapping. The medium
term scheduler is responsible for suspending and resuming the processes.
It reduces the degree of multiprogramming. The swapping is necessary to have a perfect mix of processes in the ready
queue.
Process Queues
The Operating system manages various types of queues for each of the process states. The PCB related to the process is
also stored in the queue of the same state. If the Process is moved from one state to another state then its PCB is also
unlinked from the corresponding queue and added to the other state queue in which the transition is made.
1. Arrival Time
The time at which the process enters into the ready queue is called the arrival time.
2. Burst Time
The total amount of time required by the CPU to execute the whole process is called the burst time. This does not include waiting time. It is difficult to know the execution time of a process before executing it; hence scheduling algorithms based on burst time are hard to implement in practice.
3. Completion Time
The Time at which the process enters into the completion state or the time at which the process completes its
execution, is called completion time.
4. Turnaround time
The total amount of time spent by the process from its arrival to its completion, is called Turnaround time.
5. Waiting Time
The Total amount of time for which the process waits for the CPU to be assigned is called waiting time.
6. Response Time
The difference between the arrival time and the time at which the process first gets the CPU is called Response Time.
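The definitions above reduce to three subtractions. The arrival, burst, first-run, and completion values below are hypothetical sample numbers chosen only to exercise the formulas.

```python
# Hedged sketch: computing the metrics defined above for one process.
# All four input values are made-up sample data.
arrival, burst = 2, 5
first_run, completion = 4, 9          # assume it runs 4..9 without preemption

turnaround = completion - arrival     # total time in the system
waiting = turnaround - burst          # time spent waiting for the CPU
response = first_run - arrival        # delay until first CPU allocation

print(turnaround, waiting, response)  # 7 2 2
```

For a non-preemptive schedule like this one, response time equals waiting time; under preemption the two generally differ.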
Process Table and Process Control Block (PCB)
While creating a process the operating system performs several operations. To identify the processes, it assigns a
process identification number (PID) to each process. As the operating system supports multi-programming, it needs to
keep track of all the processes. For this task, the process control block (PCB) is used to track the process’s execution
status. Each block of memory contains information about the process state, program counter, stack pointer, status of
opened files, scheduling algorithms, etc. All this information is required and must be saved when the process is switched
from one state to another. When the process makes a transition from one state to another, the operating system must
update information in the process’s PCB. A process control block (PCB) contains information about the process, i.e.
registers, quantum, priority, etc. The process table is an array of PCBs, meaning it logically contains a PCB for each of the current processes in the system.
Pointer – It is a stack pointer which is required to be saved when the process is switched from one state to another
to retain the current position of the process.
Process state – It stores the respective state of the process.
Process number – Every process is assigned with a unique id known as process ID or PID which stores the process
identifier.
Program counter – It stores the counter which contains the address of the next instruction that is to be executed for
the process.
Register – These are the CPU registers, which include the accumulator, base and index registers, and general-purpose registers.
Memory limits – This field contains the information about memory management system used by operating system.
This may include the page tables, segment tables etc.
Open files list – This information includes the list of files opened for a process.
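A real PCB is a kernel data structure, but its fields can be sketched as a Python dataclass mirroring the list above; the field names and defaults here are illustrative, not any OS's actual layout.

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Illustrative process control block mirroring the fields listed above."""
    pid: int                                   # process number
    state: str = "new"                         # process state
    program_counter: int = 0                   # address of next instruction
    stack_pointer: int = 0                     # saved on every state switch
    registers: dict = field(default_factory=dict)   # CPU register contents
    memory_limits: tuple = (0, 0)              # e.g. base/limit of the image
    open_files: list = field(default_factory=list)  # open files list

# The process table is logically an array of PCBs, one per current process.
process_table = [PCB(pid=1), PCB(pid=2, state="ready")]
print(process_table[1].state)  # ready
```

On a context switch, the kernel would save the running process's registers and counters into its PCB and reload them from the PCB of the process being dispatched.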
Interprocess Communication
Processes can communicate with each other in two ways:
1. Shared Memory
2. Message passing
Figure 1 below shows a basic structure of communication between processes via the shared memory method and via
the message passing method.
An operating system can implement both methods of communication. First, we will discuss the shared memory methods
of communication and then message passing. Communication between processes using shared memory requires
processes to share some variable, and it completely depends on how the programmer will implement it. One way of
communication using shared memory can be imagined like this: Suppose process1 and process2 are executing
simultaneously, and they share some resources or use some information from another process. Process1 generates
information about certain computations or resources being used and keeps it as a record in shared memory. When
process2 needs to use the shared information, it will check in the record stored in shared memory and take note of the
information generated by process1 and act accordingly. Processes can use shared memory for extracting information as
a record from another process as well as for delivering any specific information to other processes.
Establish a communication link (if a link already exists, no need to establish it again.)
Start exchanging messages using basic primitives.
We need at least two primitives:
– send(message, destination) or send(message)
– receive(message, host) or receive(message)
User-Level Thread is a type of thread that is not created using system calls. The kernel plays no part in the management of user-level threads, which can be easily implemented by the user. To the kernel, a process containing user-level threads appears as a single-threaded process. Let's look at the advantages and disadvantages of user-level threads.
Advantages of User-Level Threads
Implementation of the User-Level Thread is easier than Kernel Level Thread.
Context Switch Time is less in User Level Thread.
User-Level Thread is more efficient than Kernel-Level Thread.
Because of the presence of only Program Counter, Register Set, and Stack Space, it has a simple representation.
Disadvantages of User-Level Threads
There is a lack of coordination between Thread and Kernel.
In case of a page fault, the whole process can be blocked.
A Kernel-Level Thread is a type of thread that the operating system recognizes directly. The kernel has its own thread table where it keeps track of these threads, and the operating system kernel helps in managing them. Kernel threads have a somewhat longer context switching time.
The concept of multi-threading needs proper understanding of these two terms – a process and a thread. A process
is a program being executed. A process can be further divided into independent units known as threads. A thread is
like a small light-weight process within a process. Or we can say a collection of threads is what is known as a process.
Applications – Threading is used widely in almost every field, and most visibly on the internet, where transaction processing of every kind (recharges, online transfers, banking, etc.) relies on it. Threading divides code into small, lightweight parts that place less burden on the CPU and memory, so work can be carried out easily and goals achieved quickly. The concept arose from the need to keep up with fast, regular changes in technology; as the saying goes, "necessity is the mother of invention," and threads were developed to enhance the capability of programming.
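The transaction-processing idea above can be sketched with Python's threading module: one process divided into several threads that share the same address space, each handling one (hypothetical) transaction. The transaction IDs and the shared results list are invented for the example.

```python
import threading

# Threads in one process share memory, so the results list must be
# protected by a lock while threads append to it.
results = []
lock = threading.Lock()

def handle(txn_id):
    """Process one hypothetical transaction in its own thread."""
    with lock:
        results.append(f"processed txn {txn_id}")

threads = [threading.Thread(target=handle, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(results))  # 3
```

Contrast this with the multiprocessing examples earlier: threads share `results` directly, whereas separate processes would need shared memory or message passing to exchange it.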
CPU Scheduling
In uniprogramming systems like MS-DOS, when a process waits for any I/O operation to be done, the CPU remains idle. This is an overhead, since it wastes time and causes the problem of starvation. However, in multiprogramming systems, the CPU doesn't remain idle during the waiting time of a process; it starts executing other processes. The operating system has to decide which process the CPU will be given to.
In Multiprogramming systems, the Operating system schedules the processes on the CPU to have the maximum
utilization of it and this procedure is called CPU scheduling. The Operating System uses various scheduling algorithm to
schedule the processes.
Why do we need Scheduling?
In multiprogramming, if the long term scheduler picks more I/O bound processes, then most of the time the CPU remains idle. The task of the operating system is to optimize the utilization of resources.
If most of the running processes change their state from running to waiting, then there may always be a possibility of deadlock in the system. Hence, to reduce this overhead, the OS needs to schedule the jobs to get the optimal utilization of the CPU and to avoid the possibility of deadlock.
What are the different types of CPU Scheduling Algorithms?
There are mainly two types of scheduling methods:
Preemptive Scheduling: Preemptive scheduling is used when a process switches from running state to ready state
or from the waiting state to the ready state.
Non-Preemptive Scheduling: Non-preemptive scheduling is used when a process terminates, or when a process switches from running state to waiting state.
There are the following algorithms which can be used to schedule the jobs.
1. First Come First Serve
It is the simplest algorithm to implement. The process with the earliest arrival time will get the CPU first. The lesser the arrival time, the sooner the process gets the CPU. It is a non-preemptive type of scheduling.
2. Shortest Job First
The job with the shortest burst time will get the CPU first. The lesser the burst time, the sooner the process gets the CPU. It is a non-preemptive type of scheduling.
3. Shortest remaining time first
It is the preemptive form of SJF. In this algorithm, the OS schedules the Job according to the remaining time of the
execution.
4. Round Robin
In the Round Robin scheduling algorithm, the OS defines a time quantum (slice). All the processes will get executed in
the cyclic way. Each of the process will get the CPU for a small amount of time (called time quantum) and then get back
to the ready queue to wait for its next turn. It is a preemptive type of scheduling.
5. Priority based scheduling
In this algorithm, a priority is assigned to each of the processes. The higher the priority, the sooner the process gets the CPU. If the priorities of two processes are the same, they will be scheduled according to their arrival time.
FCFS is considered to be the simplest of all operating system scheduling algorithms. The first come first serve scheduling algorithm states that the process that requests the CPU first is allocated the CPU first, and it is implemented using a FIFO queue.
Characteristics of FCFS:
FCFS is a non-preemptive CPU scheduling algorithm.
Tasks are always executed on a First-come, First-serve concept.
FCFS is easy to implement and use.
This algorithm is not much efficient in performance, and the wait time is quite high.
Advantages of FCFS:
Easy to implement
First come, first serve method
Disadvantages of FCFS:
FCFS suffers from Convoy effect.
The average waiting time is much higher than the other algorithms.
FCFS is very simple and easy to implement, and hence not very efficient.
First-Come, First-Served (FCFS) Scheduling
Process Burst Time
P1 24
P2 3
P3 3
Suppose that the processes arrive in the order: P1 , P2 , P3
The Gantt chart for the schedule is:
  | P1 (0-24) | P2 (24-27) | P3 (27-30) |
Waiting time for P1 = 0; P2 = 24; P3 = 27
Average waiting time: (0 + 24 + 27)/3 = 17
Suppose that the processes arrive in the order
P2 , P3 , P1
The Gantt chart for the schedule is:
  | P2 (0-3) | P3 (3-6) | P1 (6-30) |
Waiting time for P1 = 6; P2 = 0; P3 = 3
Average waiting time: (6 + 0 + 3)/3 = 3
Much better than previous case
Convoy effect: short processes stuck behind a long process
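The two FCFS runs above can be reproduced with a short simulation. This is a minimal sketch assuming all processes arrive at time 0 and run in list order; the helper name `fcfs` is invented for the example.

```python
def fcfs(bursts):
    """Return per-process waiting times; arrival order == list order."""
    waits, clock = [], 0
    for b in bursts:
        waits.append(clock)   # each process waits for all earlier bursts
        clock += b            # then occupies the CPU for its whole burst
    return waits

print(fcfs([24, 3, 3]))  # [0, 24, 27] -> order P1, P2, P3; average 17
print(fcfs([3, 3, 24]))  # [0, 3, 6]   -> order P2, P3, P1; average 3
```

The second ordering reproduces the improvement shown above: moving the long burst to the back eliminates the convoy effect for the two short processes.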
Example 2:
Now, let us solve this problem with the help of the scheduling algorithm named First Come First Serve. The process table and the resulting times are:

S.No  Process  Job  Arrival Time  Burst Time  Completion Time  Turn Around Time  Waiting Time
1     P1       A    0             9           9                9                 0
2     P2       B    1             3           12               11                8
3     P3       C    1             2           14               13                11
4     P4       D    1             4           18               17                13
5     P5       E    2             3           21               19                16
6     P6       F    3             2           23               20                18
Average WT = ( 0 + 8 + 11 + 13 + 16 + 18 ) /6
Average WT = 66 / 6
Average WT = 11
The Average Turn Around Time is:
Average TAT = ( 9 + 11 + 13 + 17 + 19 +20 ) / 6
Average TAT = 89 / 6
Average TAT = 14.83334
This is how the FCFS is solved.
2. Shortest Job First (SJF) Scheduling
Till now, we were scheduling the processes according to their arrival time (in FCFS scheduling). The SJF scheduling algorithm, however, schedules the processes according to their burst time. In SJF scheduling, the process with the lowest burst time, among the list of available processes in the ready queue, is going to be scheduled next. However, it is very difficult to predict the burst time needed for a process, hence this algorithm is very difficult to implement in the system.
Advantages of SJF
1. Maximum throughput
2. Minimum average waiting and turnaround time
Disadvantages of SJF
1. May suffer from the problem of starvation
2. It is not implementable because the exact burst time for a process can't be known in advance.
There are different techniques available by which, the CPU burst time of the process can be determined. We will discuss
them later in detail.
Example
In the following example, there are five jobs named as P1, P2, P3, P4 and P5. Their arrival time and burst time are given
in the table below.
PID Arrival Time Burst Time Completion Time Turn Around Time Waiting Time
1 1 7 8 7 0
2 3 3 13 10 7
3 6 2 10 4 2
4 7 10 31 24 14
5 9 8 21 12 4
Since no process arrives at time 0, there will be an empty slot in the Gantt chart from time 0 to 1 (the time at which the first process arrives).
According to the algorithm, the OS schedules the process which is having the lowest burst time among the available
processes in the ready queue.
Till now, we have only one process in the ready queue, hence the scheduler will schedule it to the processor no matter what its burst time is.
This will be executed till 8 units of time. By then, three more processes have arrived in the ready queue, hence the scheduler will choose the process with the lowest burst time.
Among the processes given in the table, P3 will be executed next since it has the lowest burst time among all the available processes.
So that's how the procedure will go on in shortest job first (SJF) scheduling algorithm.
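The walkthrough above can be checked with a small non-preemptive SJF simulation. The function name `sjf` and the dict-based process table are invented for the example; the arrival and burst times are the ones from the table.

```python
def sjf(procs):
    """Non-preemptive SJF. procs: {pid: (arrival, burst)}.
    Returns completion time per pid, in order of completion."""
    remaining = dict(procs)
    clock, completion = 0, {}
    while remaining:
        ready = [p for p, (a, _) in remaining.items() if a <= clock]
        if not ready:                     # CPU idle until the next arrival
            clock = min(a for a, _ in remaining.values())
            continue
        # pick the ready process with the lowest burst time
        p = min(ready, key=lambda p: remaining[p][1])
        clock += remaining.pop(p)[1]      # run it to completion
        completion[p] = clock
    return completion

procs = {1: (1, 7), 2: (3, 3), 3: (6, 2), 4: (7, 10), 5: (9, 8)}
print(sjf(procs))  # {1: 8, 3: 10, 2: 13, 5: 21, 4: 31}
```

The completion times match the table above, including the idle slot from time 0 to 1 and the P3-before-P2 decision at time 8.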
Each process gets a small unit of CPU time (time quantum), usually 10-100 milliseconds. After this time has
elapsed, the process is preempted and added to the end of the ready queue.
If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU
time in chunks of at most q time units at once. No process waits more than (n-1)q time units.
Performance
q large ⇒ FIFO
q small ⇒ q must be large with respect to context switch time, otherwise overhead is too high
Example of RR with Time Quantum = 4
Process Burst Time
P1 24
P2 3
P3 3
The Gantt chart is:
  | P1 (0-4) | P2 (4-7) | P3 (7-10) | P1 (10-14) | P1 (14-18) | P1 (18-22) | P1 (22-26) | P1 (26-30) |
Typically, higher average turnaround than SJF, but better response
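The round-robin example above can be reproduced with a ready queue modeled as a deque. This sketch assumes all processes arrive at time 0; the function name `round_robin` and the returned (pid, start, end) slices are invented for the example.

```python
from collections import deque

def round_robin(bursts, quantum):
    """All processes arrive at t=0. Returns (pid, start, end) CPU slices."""
    q = deque((pid, b) for pid, b in bursts.items())
    clock, schedule = 0, []
    while q:
        pid, left = q.popleft()           # take the head of the ready queue
        run = min(quantum, left)          # run for at most one quantum
        schedule.append((pid, clock, clock + run))
        clock += run
        if left > run:                    # not finished: back to the tail
            q.append((pid, left - run))
    return schedule

print(round_robin({"P1": 24, "P2": 3, "P3": 3}, quantum=4))
```

The printed slices match the Gantt chart above: P2 and P3 finish within their first quantum, after which P1 holds the CPU in 4-unit slices until time 30.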
5. Priority Scheduling:
The Preemptive Priority CPU Scheduling Algorithm is a preemptive method of CPU scheduling that works based on the priority of a process. In this algorithm, the scheduler treats each process's assigned priority as its importance, meaning that the most important process must be done first. In the case of a conflict, that is, where there is more than one process with equal priority, the algorithm falls back on FCFS (First Come First Serve) order.
Characteristics of Priority Scheduling:
Schedules tasks based on priority.
When higher priority work arrives while a task with lower priority is being executed, the higher priority work takes the place of the lower priority one, and
the latter is suspended until the former's execution is complete.
The lower the number assigned, the higher the priority level of a process.
Advantages of Priority Scheduling:
The average waiting time is less than FCFS
Less complex
Disadvantages of Priority Scheduling:
One of the most common demerits of the preemptive priority CPU scheduling algorithm is the starvation problem, in which a process has to wait a long
time to get scheduled onto the CPU.
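Preemptive priority scheduling with FCFS tie-breaking can be sketched by simulating one time unit at a time. The workload below (arrival, burst, priority, with lower numbers meaning higher priority) is a hypothetical example, and the function name `preemptive_priority` is invented.

```python
def preemptive_priority(procs):
    """procs: {pid: (arrival, burst, priority)}; lower number = higher
    priority. Ties broken by earlier arrival (FCFS). Returns finish times."""
    left = {p: b for p, (a, b, _) in procs.items()}   # remaining burst
    clock, finish = 0, {}
    while left:
        ready = [p for p in left if procs[p][0] <= clock]
        if not ready:                 # CPU idle: no process has arrived yet
            clock += 1
            continue
        # re-evaluate every tick, so a newly arrived higher-priority
        # process preempts the one currently running
        p = min(ready, key=lambda p: (procs[p][2], procs[p][0]))
        left[p] -= 1
        clock += 1
        if left[p] == 0:
            finish[p] = clock
            del left[p]
    return finish

procs = {"A": (0, 4, 2), "B": (1, 2, 1), "C": (2, 3, 3)}
print(preemptive_priority(procs))  # {'B': 3, 'A': 6, 'C': 9}
```

Here B preempts A at time 1 because of its higher priority, and C (lowest priority) runs last; if C's arrivals kept being outranked by new work, it would starve exactly as described above.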
6. Multilevel Queue
Ready queue is partitioned into separate queues:
foreground (interactive)
background (batch)
Each queue has its own scheduling algorithm
foreground – RR
background – FCFS
Scheduling must be done between the queues
Fixed priority scheduling; (i.e., serve all from foreground then from background). Possibility of starvation.
Time slice – each queue gets a certain amount of CPU time which it can schedule amongst its processes; i.e.,
80% to foreground in RR
20% to background in FCFS
Multiple-Processor Scheduling
In multiple-processor scheduling, multiple CPUs are available and hence load sharing becomes possible; however, it is more complex than single-processor scheduling. When the processors are identical (homogeneous) in terms of their functionality, we can use any available processor to run any process in the queue.
Why is multiple-processor scheduling important?
Multiple-processor scheduling is important because it enables a computer system to perform multiple tasks
simultaneously, which can greatly improve overall system performance and efficiency.
How does multiple-processor scheduling work?
Multiple-processor scheduling works by dividing tasks among multiple processors in a computer system, which allows
tasks to be processed simultaneously and reduces the overall time needed to complete them.
One approach is for all scheduling decisions and I/O processing to be handled by a single processor, called the master server, while the other processors execute only user code. This is simple and reduces the need for data sharing; this entire scenario is called asymmetric multiprocessing. A second approach uses symmetric multiprocessing, where each processor is self-scheduling. All processes may be in a common ready queue, or each processor may have its own private queue of ready processes. Scheduling proceeds by having each processor's scheduler examine the ready queue and select a process to execute.