
DEPARTMENT OF COMPUTATIONAL STUDIES

(JUNE - DEC 2022)

OPERATING SYSTEMS CONCEPTS


SEMESTER -IV
(Unit –I)
DEPARTMENT OF COMPUTATIONAL STUDIES
SUBJECT NAME: OPERATING SYSTEM CONCEPTS, OPERATING SYSTEMS

SUBJECT CODE: A20CAT407, A20CPT407

Prepared By:
S.DIVYA AP/BCA, B.Sc(CS)

Topics common to both BCA and B.Sc(CS):

Introduction to OS
OS services
Process concepts
Thread concepts
Multiprocessor scheduling
Deadlock
Process control block
Scheduling algorithms
Interprocess Communication
System calls
Introduction
An Operating System (OS) is an interface between a computer user and computer hardware.
An operating system is software that performs all the basic tasks like file management,
memory management, process management, handling input and output, and controlling
peripheral devices such as disk drives and printers.
Some popular Operating Systems include Linux Operating System, Windows Operating
System, VMS, OS/400, AIX, z/OS, etc.
Definition
An operating system is a program that acts as an interface between the user and the computer
hardware and controls the execution of all kinds of programs.
Following are some of the important functions of an operating system −

 Memory Management
 Processor Management
 Device Management
 File Management
 Security
 Control over system performance
 Job accounting
 Error detecting aids
 Coordination between other software and users
Memory Management
Memory management refers to management of Primary Memory or Main Memory. Main
memory is a large array of words or bytes where each word or byte has its own address.
Main memory provides fast storage that can be accessed directly by the CPU. For a
program to be executed, it must be in the main memory. An Operating System does the
following activities for memory management −
 Keeps track of primary memory, i.e., what parts of it are in use and by whom, and what parts are not in use.
 In multiprogramming, the OS decides which process will get memory when and how
much.
 Allocates the memory when a process requests it to do so.
 De-allocates the memory when a process no longer needs it or has been terminated.
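To make the allocate/de-allocate cycle above concrete, here is a minimal C sketch, assuming a Unix-like system (an illustration only, not part of the original notes): the process requests memory through malloc(), which the C library typically satisfies by asking the OS (e.g., via brk() or mmap()), and returns it with free().

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        /* The process requests memory; the C library obtains it from
           the OS (e.g., via brk()/mmap() on Unix-like systems). */
        int *buffer = malloc(100 * sizeof *buffer);
        if (buffer == NULL) {           /* allocation can fail */
            perror("malloc");
            return 1;
        }
        buffer[0] = 42;
        printf("first element: %d\n", buffer[0]);

        /* The process no longer needs the memory, so it is released. */
        free(buffer);
        return 0;
    }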
Processor Management
In a multiprogramming environment, the OS decides which process gets the processor when
and for how much time. This function is called process scheduling. An Operating System
does the following activities for processor management −
 Keeps track of the processor and the status of processes. The program responsible for this task is known as the traffic controller.
 Allocates the processor (CPU) to a process.
 De-allocates processor when a process is no longer required.
Device Management
An Operating System manages device communication via their respective drivers. It does the
following activities for device management −
 Keeps track of all devices. The program responsible for this task is known as the I/O controller.
 Decides which process gets the device when and for how much time.
 Allocates devices in an efficient way.
 De-allocates devices.
File Management
A file system is normally organized into directories for easy navigation and usage. These
directories may contain files and other directories.
An Operating System does the following activities for file management −
 Keeps track of information, location, uses, status, etc. These collective facilities are often known as the file system.
 Decides who gets the resources.
 Allocates the resources.
 De-allocates the resources.
Other Important Activities
Following are some of the important activities that an Operating System performs −
 Security − By means of password and similar other techniques, it prevents
unauthorized access to programs and data.
 Control over system performance − Recording delays between request for a service
and response from the system.
 Job accounting − Keeping track of time and resources used by various jobs and users.
 Error detecting aids − Production of dumps, traces, error messages, and other
debugging and error detecting aids.
 Coordination between other software and users − Coordination and assignment of
compilers, interpreters, assemblers and other software to the various users of the
computer systems.

An Operating System provides services to both the users and to the programs.

 It provides programs an environment to execute.


 It provides users the services to execute the programs in a convenient manner.
Following are a few common services provided by an operating system −

 Program execution
 I/O operations
 File System manipulation
 Communication
 Error Detection
 Resource Allocation
 Protection
Program execution
Operating systems handle many kinds of activities from user programs to system programs
like printer spooler, name servers, file server, etc. Each of these activities is encapsulated as a
process.
A process includes the complete execution context (code to execute, data to manipulate,
registers, OS resources in use). Following are the major activities of an operating system with
respect to program management −

 Loads a program into memory.


 Executes the program.
 Handles program's execution.
 Provides a mechanism for process synchronization.
 Provides a mechanism for process communication.
 Provides a mechanism for deadlock handling.
I/O Operation
An I/O subsystem comprises I/O devices and their corresponding driver software. Drivers
hide the peculiarities of specific hardware devices from the users.
An Operating System manages the communication between user and device drivers.

 I/O operation means read or write operation with any file or any specific I/O device.
 Operating system provides the access to the required I/O device when required.
File system manipulation
A file represents a collection of related information. Computers can store files on the disk
(secondary storage), for long-term storage purpose. Examples of storage media include
magnetic tape, magnetic disk and optical disk drives like CD, DVD. Each of these media has
its own properties like speed, capacity, data transfer rate and data access methods.
A file system is normally organized into directories for easy navigation and usage. These
directories may contain files and other directories. Following are the major activities of an
operating system with respect to file management −
 A program needs to read a file or write a file.
 The operating system gives the permission to the program for operation on file.
 Permission varies from read-only, read-write, denied and so on.
 Operating System provides an interface to the user to create/delete files.
 Operating System provides an interface to the user to create/delete directories.
 Operating System provides an interface to create the backup of file system.
Communication
In case of distributed systems which are a collection of processors that do not share memory,
peripheral devices, or a clock, the operating system manages communications between all the
processes. Multiple processes communicate with one another through communication lines in
the network.
The OS handles routing and connection strategies, and the problems of contention and
security. Following are the major activities of an operating system with respect to
communication −

 Two processes often require data to be transferred between them.


 Both the processes can be on one computer or on different computers, but are
connected through a computer network.
 Communication may be implemented by two methods, either by Shared Memory or by Message Passing, as sketched below.
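As a small, hedged illustration of the Message Passing method just mentioned, the C sketch below connects a parent and child process on one computer through a Unix pipe(); Shared Memory (e.g., via shmget() or mmap()) would be the alternative method.

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int fd[2];                   /* fd[0]: read end, fd[1]: write end */
        char buf[32] = {0};

        if (pipe(fd) == -1) { perror("pipe"); return 1; }

        if (fork() == 0) {           /* child: sends a message */
            close(fd[0]);
            write(fd[1], "hello", 6);
            close(fd[1]);
            _exit(0);
        }
        close(fd[1]);                /* parent: receives the message */
        read(fd[0], buf, sizeof buf - 1);
        close(fd[0]);
        wait(NULL);
        printf("parent received: %s\n", buf);
        return 0;
    }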
Error handling
Errors can occur anytime and anywhere. An error may occur in CPU, in I/O devices or in the
memory hardware. Following are the major activities of an operating system with respect to
error handling −

 The OS constantly checks for possible errors.


 The OS takes an appropriate action to ensure correct and consistent computing.
Resource Management
In a multi-user or multi-tasking environment, resources such as main memory, CPU
cycles and file storage are to be allocated to each user or job. Following are the major
activities of an operating system with respect to resource management −

 The OS manages all kinds of resources using schedulers.


 CPU scheduling algorithms are used for better utilization of CPU.
Protection
Considering a computer system having multiple users and concurrent execution of multiple
processes, the various processes must be protected from each other's activities.
Protection refers to a mechanism or a way to control the access of programs, processes, or
users to the resources defined by a computer system. Following are the major activities of an
operating system with respect to protection −
 The OS ensures that all access to system resources is controlled.
 The OS ensures that external I/O devices are protected from invalid access attempts.
 The OS provides authentication features for each user by means of passwords.

System Calls in Operating System
A system call is a way for a user program to interface with the operating system. The
program requests several services, and the OS responds by invoking a series of system calls
to satisfy the request. A system call can be written in assembly language or in a high-level
language like C or Pascal. If a high-level language is used, system calls are available as
predefined functions that the program may invoke directly.
What is a System Call?
A system call is a method for a computer program to request a service from the kernel of
the operating system on which it is running; in other words, it is a request from a program
to the operating system's kernel.

The Application Program Interface (API) connects the operating system's functions to user
programs. It acts as a link between the operating system and a process, allowing user-level
programs to request operating system services. The kernel can only be accessed using
system calls, and system calls are required for any program that uses resources.

How are system calls made?


When a computer program needs to access the operating system's kernel, it makes a system
call. The system call uses an API to expose the operating system's services to user programs.
It is the only method of accessing the kernel. All programs or processes that require
resources for execution must use system calls, as they serve as an interface between the
operating system and user programs.
Below are some examples of how a system call varies from a user function.
1. A system call function may create and use kernel processes to execute asynchronous processing.
2. A system call has greater authority than a standard subroutine. A system call with
kernel-mode privilege executes in the kernel protection domain.
3. System calls are not permitted to use shared libraries or any symbols that are not
present in the kernel protection domain.
4. The code and data for system calls are stored in global kernel memory.
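To make the user/kernel boundary above concrete, here is a hedged, Linux-specific C sketch (an illustration, not part of the original notes): it obtains the process ID both through the usual libc wrapper getpid() and through the raw syscall() interface with the system-call number SYS_getpid; both routes trap into the kernel and return the same value.

    #include <stdio.h>
    #include <sys/syscall.h>   /* SYS_getpid (Linux-specific) */
    #include <unistd.h>

    int main(void) {
        /* The usual route: a libc wrapper that traps into the kernel. */
        pid_t via_wrapper = getpid();

        /* The raw route: invoke the kernel directly by call number. */
        long via_syscall = syscall(SYS_getpid);

        printf("getpid() = %d, syscall(SYS_getpid) = %ld\n",
               (int)via_wrapper, via_syscall);
        return 0;
    }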
Why do you need system calls in Operating System?
There are various situations in which system calls are required in the operating system.
Some of these situations are as follows:
1. A system call is required when a file system wants to create or delete a file.
2. Network connections require system calls for sending and receiving data packets.
3. If you want to read or write a file, you need system calls.
4. If you want to access hardware devices, such as a printer or scanner, you need a system call.
5. System calls are used to create and manage new processes.
5. System calls are used to create and manage new processes.
How System Calls Work
Applications run in an area of memory known as user space. A system call connects to
the operating system's kernel, which executes in kernel space. When an application makes a
system call, it must first obtain permission from the kernel. It achieves this using an interrupt
request, which pauses the current process and transfers control to the kernel.
If the request is permitted, the kernel performs the requested action, such as creating or
deleting a file. When the operation is finished, the kernel returns the results to the
application, moving data from kernel space to user space in memory, and the application
then resumes execution.
A simple system call may take a few nanoseconds to provide the result, such as retrieving the
system date and time. A more complicated system call, such as connecting to a network
device, may take a few seconds. Most operating systems launch a distinct kernel thread for
each system call to avoid bottlenecks. Modern operating systems are multi-threaded, which
means they can handle multiple system calls at the same time.
Types of System Calls
There are commonly five types of system calls. These are as follows:
1. Process Control
2. File Management
3. Device Management
4. Information Maintenance
5. Communication
The different types of system calls are described one by one below.

Process Control
Process control is the system call that is used to direct processes. Some process control
examples include end, abort, load, execute, create process, terminate process, etc.

File Management
File management is a system call that is used to handle files. Some file management
examples include create file, delete file, open, close, read, write, etc.

Device Management
Device management is a system call that is used to deal with devices. Some examples of
device management include request device, release device, read, write, get device attributes, etc.

Information Maintenance
Information maintenance is a system call that is used to maintain information. Some
examples of information maintenance include get system data, set system data, get time or
date, and set time or date.
Communication
Communication is a system call that is used for communication. Some examples of
communication include creating and deleting communication connections, and sending and
receiving messages.
Examples of Windows and Unix system calls
There are various examples of Windows and Unix system calls. These are as listed below in
the table:

Category                    Windows                          Unix

Process Control             CreateProcess()                  fork()
                            ExitProcess()                    exit()
                            WaitForSingleObject()            wait()

File Manipulation           CreateFile()                     open()
                            ReadFile()                       read()
                            WriteFile()                      write()
                            CloseHandle()                    close()

Device Management           SetConsoleMode()                 ioctl()
                            ReadConsole()                    read()
                            WriteConsole()                   write()

Information Maintenance     GetCurrentProcessID()            getpid()
                            SetTimer()                       alarm()
                            Sleep()                          sleep()

Communication               CreatePipe()                     pipe()
                            CreateFileMapping()              shmget()
                            MapViewOfFile()                  mmap()

Protection                  SetFileSecurity()                chmod()
                            InitializeSecurityDescriptor()   umask()
                            SetSecurityDescriptorGroup()     chown()

Some of these calls are described briefly below:


open()
The open() system call allows you to access a file on a file system. It allocates resources to
the file and provides a handle that the process may refer to. A file may be opened by many
processes at once or restricted to a single process, depending on the file system and its structure.

read()
It is used to obtain data from a file on the file system. It accepts three arguments in general:
o A file descriptor.
o A buffer to store read data.
o The number of bytes to read from the file.
The file to be read is identified by its file descriptor, which is obtained by opening the file
using open() before reading.
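A hedged C sketch of this open/read/close sequence follows; the file name notes.txt is hypothetical and used only for illustration.

    #include <fcntl.h>      /* open() */
    #include <stdio.h>
    #include <unistd.h>     /* read(), close() */

    int main(void) {
        char buf[128];
        int fd = open("notes.txt", O_RDONLY);   /* hypothetical file */
        if (fd == -1) { perror("open"); return 1; }

        /* read() takes the descriptor, a buffer, and a byte count. */
        ssize_t n = read(fd, buf, sizeof buf - 1);
        if (n >= 0) {
            buf[n] = '\0';
            printf("read %zd bytes: %s\n", n, buf);
        }
        close(fd);          /* release the descriptor */
        return 0;
    }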

wait()
In some systems, a process may have to wait for another process to complete its execution
before proceeding. When a parent process creates a child process, the parent's execution may
be suspended until the child process finishes. The wait() system call is used to suspend the
parent process. Once the child process has completed its execution, control is returned to the
parent process.

write()
It is used to write data from a user buffer to a device like a file. This system call is one way
for a program to generate data. It takes three arguments in general:
o A file descriptor.
o A pointer to the buffer in which data is saved.
o The number of bytes to be written from the buffer.

fork()
Processes generate clones of themselves using the fork() system call. It is one of the most
common ways to create processes in operating systems. After a fork, the parent and child can
run concurrently; typically the parent then calls wait(), suspending itself until the child
process completes, after which control is returned to the parent process.

close()
It is used to end file system access. When this system call is invoked, it signifies that the
program no longer requires the file; the buffers are flushed, the file metadata is updated,
and the file's resources are de-allocated as a result.

exec()
When an executable file replaces an earlier executable file in an already executing process,
this system call is invoked. As a new process is not built, the old process identifier stays,
but the new program replaces the old one's data, stack, heap, etc.
exit()
The exit() is a system call that is used to end program execution. This call indicates that the
thread execution is complete, which is especially useful in multi-threaded environments. The
operating system reclaims resources spent by the process following the use of
the exit() system function.
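The process-control calls above are commonly used together: the parent fork()s, the child exec()s a new program and exit()s, and the parent wait()s. Here is a hedged C sketch of that pattern, running /bin/ls purely as an example program.

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();              /* clone the calling process */
        if (pid == -1) { perror("fork"); return 1; }

        if (pid == 0) {
            /* child: replace its image with a new program */
            execl("/bin/ls", "ls", "-l", (char *)NULL);
            perror("execl");             /* reached only if exec fails */
            _exit(1);
        }
        int status;
        wait(&status);                   /* parent: suspend until child ends */
        printf("child %d finished\n", (int)pid);
        return 0;
    }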

Process Management in OS
A program does nothing unless its instructions are executed by a CPU. A program in execution is called a process.
In order to accomplish its task, a process needs computer resources.
There may exist more than one process in the system which may require the same resource at the same time.
Therefore, the operating system has to manage all the processes and the resources in a convenient and efficient way.
Some resources may need to be used by only one process at a time to maintain consistency; otherwise the
system can become inconsistent and deadlock may occur.
The operating system is responsible for the following activities in connection with Process Management
1. Scheduling processes and threads on the CPUs.
2. Creating and deleting both user and system processes.
3. Suspending and resuming processes.
4. Providing mechanisms for process synchronization.
5. Providing mechanisms for process communication.

Attributes of a process
The Attributes of the process are used by the Operating System to create the process control
block (PCB) for each of them. This is also called context of the process. Attributes which are
stored in the PCB are described below.

1. Process ID
When a process is created, a unique id is assigned to the process which is used for unique
identification of the process in the system.

2. Program counter
The program counter stores the address of the instruction at which the process was
suspended. The CPU uses this address when the execution of this process is resumed.

3. Process State
The Process, from its creation to the completion, goes through various states which are new,
ready, running and waiting. We will discuss about them later in detail.

4. Priority
Every process has its own priority. The process with the highest priority among the processes
gets the CPU first. This is also stored on the process control block.

5. General Purpose Registers


Every process has its own set of registers which are used to hold the data which is generated
during the execution of the process.

6. List of open files


During execution, every process uses some files which need to be present in the main
memory. The OS also maintains a list of open files in the PCB.

7. List of open devices


The OS also maintains a list of all open devices which are used during the execution of the
process.
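A PCB is essentially a record holding the attributes above. As a hedged, toy illustration only (real kernels keep far more state, e.g., Linux's task_struct), a simplified PCB might be declared in C as follows:

    #include <stdio.h>

    enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

    /* Toy PCB: a simplified sketch of the attributes listed above. */
    struct pcb {
        int             pid;              /* process ID                 */
        unsigned long   program_counter;  /* address to resume at       */
        enum proc_state state;            /* current process state      */
        int             priority;         /* scheduling priority        */
        unsigned long   registers[16];    /* saved general registers    */
        int             open_files[16];   /* open file descriptors      */
        int             open_devices[8];  /* open devices               */
    };

    int main(void) {
        struct pcb p = { .pid = 1, .state = READY, .priority = 5 };
        printf("pid=%d state=%d priority=%d\n", p.pid, p.state, p.priority);
        return 0;
    }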

Process States
The process, from its creation to completion, passes through various states. The minimum
number of states is five.
The names of the states are not standardized although the process may be in one of the
following states during execution.

1. New
A program which is going to be picked up by the OS into the main memory is called a new
process.

2. Ready
Whenever a process is created, it directly enters the ready state, in which it waits for the
CPU to be assigned. The OS picks new processes from the secondary memory and puts them
all in the main memory.
The processes which are ready for the execution and reside in the main memory are called
ready state processes. There can be many processes present in the ready state.
3. Running
One of the processes from the ready state will be chosen by the OS depending upon the
scheduling algorithm. Hence, if we have only one CPU in our system, the number of running
processes for a particular time will always be one. If we have n processors in the system then
we can have n processes running simultaneously.

4. Block or wait
From the Running state, a process can make the transition to the block or wait state
depending upon the scheduling algorithm or the intrinsic behavior of the process.
When a process waits for a certain resource to be assigned or for input from the user, the
OS moves this process to the block or wait state and assigns the CPU to other processes.

5. Completion or termination
When a process finishes its execution, it comes to the termination state. All the context of the
process (Process Control Block) will also be deleted, and the process will be terminated by
the operating system.

6. Suspend ready
A process in the ready state which is moved from the main memory to secondary memory
due to lack of resources (mainly primary memory) is said to be in the suspend ready state.
If the main memory is full and a higher priority process comes for execution, the OS has to
make room for the process in the main memory by moving a lower priority process out into
the secondary memory. The suspend ready processes remain in the secondary memory until
the main memory becomes available.

7. Suspend wait
Instead of removing a process from the ready queue, it is better to remove a blocked process
which is waiting for some resource in the main memory. Since it is already waiting for some
resource to become available, it is better if it waits in the secondary memory and makes room
for a higher priority process. These processes complete their execution once the main
memory becomes available and their wait is finished.

Operations on the Process

1. Creation
Once the process is created, it will be ready and come into the ready queue (main memory)
and will be ready for the execution.

2. Scheduling
Out of the many processes present in the ready queue, the operating system chooses one
process and starts executing it. Selecting the process which is to be executed next is known as
scheduling.
3. Execution
Once the process is scheduled for execution, the processor starts executing it. The process
may move to the blocked or wait state during execution; in that case the processor starts
executing other processes.

4. Deletion/killing
Once the purpose of the process is served, the OS kills the process. The context of the
process (PCB) is deleted and the process is terminated by the operating system.
Process Schedulers
The operating system uses various schedulers for process scheduling, as described below.

1. Long term scheduler


Long term scheduler is also known as job scheduler. It chooses the processes from the pool
(secondary memory) and keeps them in the ready queue maintained in the primary memory.
The long term scheduler mainly controls the degree of multiprogramming. The purpose of the
long term scheduler is to choose a good mix of I/O-bound and CPU-bound processes from
among the jobs present in the pool.
If the job scheduler chooses more I/O-bound processes, then all of the jobs may reside in the
blocked state most of the time and the CPU will remain idle most of the time. This will
reduce the degree of multiprogramming. Therefore, the job of the long term scheduler is very
critical and may affect the system for a very long time.

2. Short term scheduler


The short term scheduler is also known as the CPU scheduler. It selects one of the jobs from
the ready queue and dispatches it to the CPU for execution.
A scheduling algorithm is used to select which job is going to be dispatched for execution.
The job of the short term scheduler can be very critical in the sense that if it selects a job
whose CPU burst time is very high, then all the jobs after that will have to wait in the ready
queue for a very long time.
This problem is called starvation, which may arise if the short term scheduler makes some
mistakes while selecting the job.

3. Medium term scheduler


The medium term scheduler takes care of the swapped-out processes. If a running process
needs some I/O time to complete, its state must be changed from running to waiting.
Medium term scheduler is used for this purpose. It removes the process from the running
state to make room for the other processes. Such processes are the swapped out processes and
this procedure is called swapping. The medium term scheduler is responsible for suspending
and resuming the processes.
It reduces the degree of multiprogramming. The swapping is necessary to have a perfect mix
of processes in the ready queue.
Definition
The process scheduling is the activity of the process manager that handles the removal of the
running process from the CPU and the selection of another process on the basis of a particular
strategy.
Process scheduling is an essential part of multiprogramming operating systems. Such
operating systems allow more than one process to be loaded into executable memory at a
time, and the loaded processes share the CPU using time multiplexing.
Process Scheduling Queues
The OS maintains all PCBs in Process Scheduling Queues. The OS maintains a separate
queue for each of the process states and PCBs of all processes in the same execution state are
placed in the same queue. When the state of a process is changed, its PCB is unlinked from
its current queue and moved to its new state queue.
The Operating System maintains the following important process scheduling queues −
 Job queue − This queue keeps all the processes in the system.
 Ready queue − This queue keeps a set of all processes residing in main memory,
ready and waiting to execute. A new process is always put in this queue.
 Device queues − The processes which are blocked due to unavailability of an I/O
device constitute this queue.

The OS can use different policies to manage each queue (FIFO, Round Robin, Priority, etc.).
The OS scheduler determines how to move processes between the ready and run queues,
which can only have one entry per processor core on the system.
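Because each queue is simply a linked list of PCBs, moving a process between state queues amounts to relinking its PCB. The following hedged C sketch (toy types, not a real kernel structure) shows a PCB being unlinked from a ready queue and relinked onto a device queue when it blocks on I/O:

    #include <stdio.h>

    struct pcb { int pid; struct pcb *next; };      /* toy PCB with a link */
    struct queue { struct pcb *head, *tail; };

    static void enqueue(struct queue *q, struct pcb *p) {
        p->next = NULL;
        if (q->tail) q->tail->next = p; else q->head = p;
        q->tail = p;
    }

    static struct pcb *dequeue(struct queue *q) {
        struct pcb *p = q->head;
        if (p) { q->head = p->next; if (!q->head) q->tail = NULL; }
        return p;
    }

    int main(void) {
        struct queue ready = {0}, device = {0};
        struct pcb a = { .pid = 1 }, b = { .pid = 2 };

        enqueue(&ready, &a);
        enqueue(&ready, &b);

        /* pid 1 blocks on I/O: unlink from ready, relink onto device. */
        enqueue(&device, dequeue(&ready));

        printf("ready head: pid %d\n", ready.head->pid);    /* pid 2 */
        printf("device head: pid %d\n", device.head->pid);  /* pid 1 */
        return 0;
    }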
Two-State Process Model
The two-state process model refers to the running and non-running states, which are described below −

1. Running
When a new process is created, it enters the system in the running state.

2. Not Running
Processes that are not running are kept in a queue, waiting for their turn to execute. Each
entry in the queue is a pointer to a particular process. The queue is implemented using a
linked list. The dispatcher works as follows: when a process is interrupted, it is transferred
to the waiting queue. If the process has completed or aborted, it is discarded. In either case,
the dispatcher then selects a process from the queue to execute.

Schedulers
Schedulers are special system software which handle process scheduling in various ways.
Their main task is to select the jobs to be submitted into the system and to decide which
process to run. Schedulers are of three types −

 Long-Term Scheduler
 Short-Term Scheduler
 Medium-Term Scheduler
Long Term Scheduler
It is also called a job scheduler. A long-term scheduler determines which programs are
admitted to the system for processing. It selects processes from the queue and loads them into
memory, where they become available for CPU scheduling.
The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O
bound and processor bound. It also controls the degree of multiprogramming. If the degree of
multiprogramming is stable, then the average rate of process creation must be equal to the
average departure rate of processes leaving the system.
On some systems, the long-term scheduler may be unavailable or minimal. Time-sharing
operating systems have no long term scheduler. When a process changes state from new
to ready, the long-term scheduler is used.
Short Term Scheduler
It is also called the CPU scheduler. Its main objective is to increase system performance in
accordance with the chosen set of criteria. It carries out the change of a process from the
ready state to the running state. The CPU scheduler selects a process from among the
processes that are ready to execute and allocates the CPU to it.
Short-term schedulers, also known as dispatchers, make the decision of which process to
execute next. Short-term schedulers are faster than long-term schedulers.
Medium Term Scheduler
Medium-term scheduling is a part of swapping. It removes processes from memory. It
reduces the degree of multiprogramming. The medium-term scheduler is in charge of
handling the swapped-out processes.
A running process may become suspended if it makes an I/O request. A suspended process
cannot make any progress towards completion. In this condition, to remove the process from
memory and make space for other processes, the suspended process is moved to the
secondary storage. This process is called swapping, and the process is said to be swapped out
or rolled out. Swapping may be necessary to improve the process mix.
Comparison among Schedulers

1. Type − The long-term scheduler is a job scheduler; the short-term scheduler is a CPU
scheduler; the medium-term scheduler is a process-swapping scheduler.

2. Speed − The long-term scheduler is slower than the short-term scheduler; the short-term
scheduler is the fastest of the three; the medium-term scheduler's speed lies in between the
other two.

3. Degree of multiprogramming − The long-term scheduler controls it; the short-term
scheduler provides lesser control over it; the medium-term scheduler reduces it.

4. Time-sharing systems − The long-term scheduler is almost absent or minimal; the
short-term scheduler is also minimal; the medium-term scheduler is a part of time-sharing
systems.

5. Role − The long-term scheduler selects processes from the pool and loads them into
memory for execution; the short-term scheduler selects from those processes which are
ready to execute; the medium-term scheduler can re-introduce a process into memory so
that its execution can be continued.

What is Thread?
A thread is a flow of execution through the process code, with its own program counter that
keeps track of which instruction to execute next, system registers which hold its current
working variables, and a stack which contains the execution history.
A thread shares with its peer threads information such as the code segment, data segment and
open files. When one thread alters a code segment memory item, all other threads see that.
A thread is also called a lightweight process. Threads provide a way to improve application
performance through parallelism. Threads represent a software approach to improving
operating system performance by reducing overhead; a thread is equivalent to a classical
process.
Each thread belongs to exactly one process and no thread can exist outside a process. Each
thread represents a separate flow of control. Threads have been successfully used in
implementing network servers and web servers. They also provide a suitable foundation for
parallel execution of applications on shared-memory multiprocessors.

Difference between Process and Thread


1. A process is heavyweight or resource intensive; a thread is lightweight, taking fewer
resources than a process.

2. Process switching needs interaction with the operating system; thread switching does not
need to interact with the operating system.

3. In multiple processing environments, each process executes the same code but has its own
memory and file resources; all threads can share the same set of open files and child
processes.

4. If one process is blocked, then no other process can execute until the first process is
unblocked; while one thread is blocked and waiting, a second thread in the same task can
run.

5. Multiple processes without using threads use more resources; multithreaded processes use
fewer resources.

6. In multiple processes, each process operates independently of the others; one thread can
read, write or change another thread's data.

Advantages of Thread
 Threads minimize the context switching time.
 Use of threads provides concurrency within a process.
 Efficient communication.
 It is more economical to create and context switch threads.
 Threads allow utilization of multiprocessor architectures to a greater scale and
efficiency.
Types of Thread
Threads are implemented in following two ways −
 User Level Threads − User managed threads.
 Kernel Level Threads − Operating System managed threads acting on kernel, an
operating system core.
User Level Threads
In this case, the thread management kernel is not aware of the existence of threads. The
thread library contains code for creating and destroying threads, for passing messages and
data between threads, for scheduling thread execution, and for saving and restoring thread
contexts. The application starts with a single thread.
Advantages
 Thread switching does not require Kernel mode privileges.
 User level thread can run on any operating system.
 Scheduling can be application specific in the user level thread.
 User level threads are fast to create and manage.
Disadvantages
 In a typical operating system, most system calls are blocking, so when a user-level thread makes a blocking call, the entire process is blocked.
 Multithreaded application cannot take advantage of multiprocessing.
Kernel Level Threads
In this case, thread management is done by the Kernel. There is no thread management code
in the application area. Kernel threads are supported directly by the operating system. Any
application can be programmed to be multithreaded. All of the threads within an application
are supported within a single process.
The Kernel maintains context information for the process as a whole and for individual
threads within the process. Scheduling by the Kernel is done on a thread basis. The Kernel
performs thread creation, scheduling and management in Kernel space. Kernel threads are
generally slower to create and manage than the user threads.
Advantages
 Kernel can simultaneously schedule multiple threads from the same process on multiple processors.
 If one thread in a process is blocked, the Kernel can schedule another thread of the
same process.
 Kernel routines themselves can be multithreaded.
Disadvantages
 Kernel threads are generally slower to create and manage than the user threads.
 Transfer of control from one thread to another within the same process requires a
mode switch to the Kernel.
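On Linux, POSIX threads (pthreads) are implemented as kernel-level threads, so the kernel can schedule them independently and across processors. Here is a minimal hedged sketch using pthread_create() and pthread_join(); compile with -lpthread.

    #include <pthread.h>
    #include <stdio.h>

    /* Each thread gets its own stack and registers while sharing the
       process's code, data and open files. */
    static void *worker(void *arg) {
        int id = *(int *)arg;
        printf("thread %d running\n", id);
        return NULL;
    }

    int main(void) {
        pthread_t t[2];
        int ids[2] = {1, 2};

        for (int i = 0; i < 2; i++)
            pthread_create(&t[i], NULL, worker, &ids[i]);

        for (int i = 0; i < 2; i++)
            pthread_join(t[i], NULL);    /* wait for both threads */

        printf("all threads finished\n");
        return 0;
    }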
Multithreading Models
Some operating systems provide a combined user level thread and Kernel level thread facility.
Solaris is a good example of this combined approach. In a combined system, multiple threads
within the same application can run in parallel on multiple processors, and a blocking system
call need not block the entire process. There are three types of multithreading models −

 Many to many relationship.


 Many to one relationship.
 One to one relationship.
Many to Many Model
The many-to-many model multiplexes any number of user threads onto an equal or smaller
number of kernel threads.
In this model, developers can create as many user threads as necessary, and the
corresponding Kernel threads can run in parallel on a multiprocessor machine. This model
provides the best concurrency, and when a thread performs a blocking system call, the kernel
can schedule another thread for execution.
Many to One Model
The many-to-one model maps many user level threads to one Kernel-level thread. Thread
management is done in user space by the thread library. When a thread makes a blocking
system call, the entire process is blocked. Only one thread can access the Kernel at a time,
so multiple threads are unable to run in parallel on multiprocessors.
If the user-level thread libraries are implemented in the operating system in such a way that
the system does not support them, then the Kernel threads use the many-to-one relationship
mode.
One to One Model
There is a one-to-one relationship of user-level threads to kernel-level threads. This model
provides more concurrency than the many-to-one model. It also allows another thread to run
when a thread makes a blocking system call. It supports multiple threads executing in
parallel on multiprocessors.
The disadvantage of this model is that creating a user thread requires creating the
corresponding Kernel thread. OS/2, Windows NT and Windows 2000 use the one-to-one
relationship model.
Scheduling Algorithms
A Process Scheduler schedules different processes to be assigned to the CPU based on
particular scheduling algorithms. There are five popular process scheduling algorithms which
we are going to discuss in this chapter −

 First-Come, First-Served (FCFS) Scheduling


 Shortest-Job-Next (SJN) Scheduling
 Priority Scheduling
 Shortest Remaining Time (SRT) Scheduling
 Round Robin (RR) Scheduling
These algorithms are either non-preemptive or preemptive. Non-preemptive algorithms are
designed so that once a process enters the running state, it cannot be preempted until it
completes its allotted time, whereas preemptive scheduling is priority based: a scheduler may
preempt a low priority running process at any time a high priority process enters the ready
state.
First Come First Serve (FCFS)
 Jobs are executed on first come, first serve basis.
 It is a non-preemptive scheduling algorithm.
 Easy to understand and implement.
 Its implementation is based on FIFO queue.
 Poor in performance as average wait time is high.

Given processes P0, P1, P2 and P3 with arrival times 0, 1, 2, 3 and execution times 5, 3, 8, 6
(the same set used in the tables below), the processes run in arrival order, so their service
(start) times are 0, 5, 8 and 16. Wait time of each process is as follows −

Process    Wait Time : Service Time - Arrival Time

P0         0 - 0 = 0
P1         5 - 1 = 4
P2         8 - 2 = 6
P3         16 - 3 = 13

Average Wait Time: (0 + 4 + 6 + 13) / 4 = 23 / 4 = 5.75
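The arithmetic above can be reproduced mechanically. Here is a hedged C sketch that computes the FCFS start and waiting times for the four processes, using the arrival and burst values from this section:

    #include <stdio.h>

    int main(void) {
        int arrival[] = {0, 1, 2, 3};      /* values from this section */
        int burst[]   = {5, 3, 8, 6};
        int n = 4, clock = 0, total_wait = 0;

        for (int i = 0; i < n; i++) {      /* FCFS: run in arrival order */
            if (clock < arrival[i]) clock = arrival[i];
            int wait = clock - arrival[i]; /* service time - arrival time */
            printf("P%d wait = %d\n", i, wait);
            total_wait += wait;
            clock += burst[i];
        }
        printf("average wait = %.2f\n", (double)total_wait / n);
        return 0;
    }

Running it prints waits of 0, 4, 6 and 13 and an average of 5.75, matching the table.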


Shortest Job Next (SJN)
 This is also known as shortest job first, or SJF
 This is a non-preemptive scheduling algorithm.
 Best approach to minimize waiting time.
 Easy to implement in Batch systems where required CPU time is known in advance.
 Impossible to implement in interactive systems where required CPU time is not
known.
 The processor should know in advance how much time a process will take.
Given: Table of processes, and their Arrival time, Execution time

Process    Arrival Time    Execution Time    Service Time

P0         0               5                 0
P1         1               3                 5
P2         2               8                 14
P3         3               6                 8

Waiting time of each process is as follows −

Process    Waiting Time

P0         0 - 0 = 0
P1         5 - 1 = 4
P2         14 - 2 = 12
P3         8 - 3 = 5
Average Wait Time: (0 + 4 + 12 + 5)/4 = 21 / 4 = 5.25


Priority Based Scheduling
 Priority scheduling is a non-preemptive algorithm and one of the most common
scheduling algorithms in batch systems.
 Each process is assigned a priority. Process with highest priority is to be executed first
and so on.
 Processes with same priority are executed on first come first served basis.
 Priority can be decided based on memory requirements, time requirements or any
other resource requirement.
Given: Table of processes, and their Arrival time, Execution time, and priority. Here we are
considering 1 is the lowest priority.

Process    Arrival Time    Execution Time    Priority    Service Time

P0         0               5                 1           0
P1         1               3                 2           11
P2         2               8                 1           14
P3         3               6                 3           5
Waiting time of each process is as follows −

Process    Waiting Time

P0         0 - 0 = 0
P1         11 - 1 = 10
P2         14 - 2 = 12
P3         5 - 3 = 2

Average Wait Time: (0 + 10 + 12 + 2)/4 = 24 / 4 = 6


Shortest Remaining Time
 Shortest remaining time (SRT) is the preemptive version of the SJN algorithm.
 The processor is allocated to the job closest to completion but it can be preempted by a
newer ready job with shorter time to completion.
 Impossible to implement in interactive systems where required CPU time is not
known.
 It is often used in batch environments where short jobs need to be given preference.
Round Robin Scheduling
 Round Robin is a preemptive process scheduling algorithm.
 Each process is provided a fixed time to execute, called a quantum.
 Once a process has executed for the given time period, it is preempted and another process executes for its time period.
 Context switching is used to save the states of preempted processes.

With a time quantum of 3 and the same four processes, the wait time of each process is as
follows −

Process    Wait Time : Service Time - Arrival Time

P0         (0 - 0) + (12 - 3) = 9
P1         (3 - 1) = 2
P2         (6 - 2) + (14 - 9) + (20 - 17) = 12
P3         (9 - 3) + (17 - 12) = 11

Average Wait Time: (9 + 2 + 12 + 11) / 4 = 34 / 4 = 8.5


Difference between User-Level & Kernel-Level Thread
1. User-level threads are faster to create and manage; kernel-level threads are slower to
create and manage.

2. User-level threads are implemented by a thread library at the user level; kernel-level
threads are created and supported by the operating system.

3. A user-level thread is generic and can run on any operating system; a kernel-level thread
is specific to the operating system.

4. Multi-threaded applications built on user-level threads cannot take advantage of
multiprocessing; kernel routines themselves can be multithreaded.

What is a Process Control Block?


A process control block is a data structure maintained by the operating system for every
process.

Some operating systems maintain the PCBs of all processes in a single process table, with
one entry per process.

A PCB keeps all the information needed to keep track of a process:

1) Process State: the current state of the process, i.e., ready, running, or waiting, etc.

2) Process Privileges: allow/disallow access to system resources.

3) Process ID: unique identification for each process in the operating system.

4) Pointer: a pointer to the parent process.

5) Program Counter: a pointer to the address of the next instruction to be executed for this
process.

6) CPU Registers: the various CPU registers whose contents must be saved when the process
leaves the running state, so that it can resume execution later.

7) CPU Scheduling Information: process priority and other scheduling information.

8) Memory Management Information: page table, memory limits, and segment table
information.

9) Accounting Information: the amount of CPU time used for process execution, time limits,
execution ID, etc.

10) Input/Output Status Information: the list of I/O devices allocated to the process.

The PCB is maintained for a process throughout its lifetime, and is deleted once the process
terminates.

What is thread and thread scheduling?


Threads are scheduled for execution based on their priority. Even though threads are
executing within the runtime, all threads are assigned processor time slices by the operating
system. The details of the scheduling algorithm used to determine the order in which threads
are executed varies with each operating system.

Multiprocessor Scheduling
Multiple processor scheduling, or multiprocessor scheduling, focuses on designing the
scheduling function for a system that consists of more than one processor. In multiprocessor
scheduling, multiple CPUs share the load (load sharing) so that various processes run
simultaneously.
Threads in Operating System
A thread is a single sequential flow of execution of the tasks of a process, so it is also known
as a thread of execution or thread of control. There is a way of thread execution inside the
process of any operating system, and there can be more than one thread inside a process.
Each thread of the same process makes use of a separate program counter, a stack of
activation records, and control blocks. A thread is often referred to as a lightweight process.
A process can be split into many threads. For example, in a browser, each tab can be viewed
as a thread. MS Word uses many threads: formatting text in one thread, processing input in
another thread, and so on.
Need of Thread:
o It takes far less time to create a new thread in an existing process than to create a new process.
o Threads can share common data, so they do not need to use inter-process communication.
o Context switching is faster when working with threads.
o It takes less time to terminate a thread than a process.
Types of Threads
In the operating system, there are two types of threads.
1. Kernel level thread.
2. User-level thread.

User-level thread
The operating system does not recognize user-level threads. User threads can be easily
implemented, and they are implemented by the user. If a user performs a user-level thread
blocking operation, the whole process is blocked. The kernel knows nothing about user-level
threads and manages them as if they were single-threaded processes. Examples: Java threads,
POSIX threads, etc.
Advantages of User-level threads
1. User threads can be implemented more easily than kernel threads.
2. User-level threads can be applied to operating systems that do not support threads at the
kernel level.
3. They are faster and more efficient.
4. Context switch time is shorter than for kernel-level threads.
5. They do not require modifications to the operating system.
6. The user-level thread representation is very simple: the registers, PC, stack, and mini
thread control block are stored in the address space of the user-level process.
7. It is simple to create, switch, and synchronize threads without the intervention of the kernel.
Disadvantages of User-level threads
1. User-level threads lack coordination between the thread and the kernel.
2. If a thread causes a page fault, the entire process is blocked.

Kernel level thread


The operating system recognizes kernel-level threads. In this model there is a thread control
block and a process control block in the system for each thread and each process. The
kernel-level thread is implemented by the operating system, which knows about all the
threads and manages them. Kernel-level threads offer a system call to create and manage
threads from user space. The implementation of kernel threads is more difficult than that of
user threads, and context switch time is longer. If a kernel thread performs a blocking
operation, another thread's execution can continue. Examples: Windows, Solaris.
Advantages of Kernel-level threads
1. The kernel is fully aware of all threads.
2. The scheduler may decide to give more CPU time to a process having a large number of threads.
3. Kernel-level threads are good for applications that block frequently.
Disadvantages of Kernel-level threads
1. The kernel must manage and schedule all threads.
2. The implementation of kernel threads is more difficult than that of user threads.
3. Kernel-level threads are slower than user-level threads.
Components of Threads
Any thread has the following components.
1. Program counter
2. Register set
3. Stack space
Benefits of Threads
o Enhanced throughput of the system: When a process is split into many threads and each thread is treated as a job, the number of jobs completed per unit time increases, so the throughput of the system increases.
o Effective utilization of a multiprocessor system: When a process has more than one thread, the threads can be scheduled on more than one processor.
o Faster context switch: Context switching between threads takes less time than context switching between processes; a process context switch means more overhead for the CPU.
o Responsiveness: When a process is split into several threads, the application can respond as soon as any one thread completes its work.
o Communication: Communication between threads is simple because they share the same address space, whereas communication between two processes requires dedicated interprocess communication mechanisms.
o Resource sharing: Resources such as code, data, and files can be shared among all threads within a process. Note: the stack and registers cannot be shared; each thread has its own stack and register set.
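
To make the communication and resource-sharing benefits above concrete, the following sketch (illustrative names; compile with -pthread) lets a worker thread store a result in a global variable that the main thread then reads directly; no separate IPC mechanism is needed because the threads share one address space:

    #include <pthread.h>
    #include <stdio.h>

    static int shared_result;          /* visible to every thread in the process */

    static void *producer(void *arg)
    {
        (void)arg;
        shared_result = 42;            /* ordinary store into shared data */
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, producer, NULL);
        pthread_join(t, NULL);         /* joining also synchronizes memory */
        printf("result = %d\n", shared_result);
        return 0;
    }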
Deadlock
A deadlock in an OS is a situation in which two or more processes are blocked because each is holding a resource while also waiting for a resource acquired by some other process. The four necessary conditions for a deadlock to occur are mutual exclusion, hold and wait, no preemption, and circular wait.

A process in an operating system uses resources in the following way.

1) Requests a resource
2) Uses the resource
3) Releases the resource
Deadlock is a situation where a set of processes are blocked because each process is holding
a resource and waiting for another resource acquired by some other process. 
Consider an example when two trains are coming toward each other on the same track and
there is only one track, none of the trains can move once they are in front of each other. A
similar situation occurs in operating systems when there are two or more processes that hold
some resources and wait for resources held by other(s). For example, suppose Process 1 is holding Resource 1 and waiting for Resource 2, which is acquired by Process 2, while Process 2 is waiting for Resource 1.
 
 
Deadlock can arise if the following four conditions hold simultaneously (Necessary
Conditions) 
Mutual Exclusion: Two or more resources are non-shareable (Only one process can use at a
time) 
Hold and Wait: A process is holding at least one resource and waiting for additional resources held by other processes.
No Preemption: A resource cannot be taken from a process unless the process releases the
resource. 
Circular Wait: A set of processes are waiting for each other in circular form. 
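
These four conditions can be reproduced with two POSIX mutexes locked in opposite order. The sketch below uses illustrative names and may hang (deadlock) depending on thread timing, which is exactly the point; compile with -pthread.

    #include <pthread.h>
    #include <unistd.h>

    static pthread_mutex_t r1 = PTHREAD_MUTEX_INITIALIZER;  /* Resource 1 */
    static pthread_mutex_t r2 = PTHREAD_MUTEX_INITIALIZER;  /* Resource 2 */

    static void *p1(void *arg)        /* thread 1: holds r1, wants r2 */
    {
        (void)arg;
        pthread_mutex_lock(&r1);
        sleep(1);                     /* widen the window for the deadlock */
        pthread_mutex_lock(&r2);      /* blocks forever if p2 holds r2 */
        pthread_mutex_unlock(&r2);
        pthread_mutex_unlock(&r1);
        return NULL;
    }

    static void *p2(void *arg)        /* thread 2: holds r2, wants r1 */
    {
        (void)arg;
        pthread_mutex_lock(&r2);
        sleep(1);
        pthread_mutex_lock(&r1);      /* circular wait completes here */
        pthread_mutex_unlock(&r1);
        pthread_mutex_unlock(&r2);
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, p1, NULL);
        pthread_create(&b, NULL, p2, NULL);
        pthread_join(a, NULL);        /* never returns once deadlocked */
        pthread_join(b, NULL);
        return 0;
    }

Making both threads always lock r1 before r2 negates the circular-wait condition and removes the deadlock; this is the prevention strategy described in the next section.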
Methods for handling deadlock 
There are three ways to handle deadlock 
1) Deadlock prevention or avoidance: The idea is to never let the system enter a deadlock state.
Prevention works by negating one of the four necessary conditions listed above.
Avoidance, by contrast, looks ahead: it assumes that all information about the resources a process will need is known before the process executes. The Banker's algorithm (due to Dijkstra) is used to avoid deadlock; a sketch of its safety check appears after this list.
2) Deadlock detection and recovery: Let deadlock occur, then do preemption to handle it
once occurred. 
3) Ignore the problem altogether: If deadlock is very rare, then let it happen and reboot the
system. This is the approach that both Windows and UNIX take.
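
Below is a hedged sketch of the safety check at the heart of the Banker's algorithm mentioned under avoidance. The matrices hold made-up sample data (5 processes, 3 resource types) and the function name is illustrative; the check repeatedly looks for a process whose remaining need can be satisfied by the currently available resources.

    #include <stdio.h>
    #include <stdbool.h>

    #define P 5   /* processes (sample size) */
    #define R 3   /* resource types (sample size) */

    /* Returns true if the system is in a safe state. */
    static bool is_safe(int avail[R], int max[P][R], int alloc[P][R])
    {
        int work[R];
        bool finished[P] = {false};

        for (int j = 0; j < R; j++) work[j] = avail[j];

        for (int round = 0; round < P; round++) {
            bool progressed = false;
            for (int i = 0; i < P; i++) {
                if (finished[i]) continue;
                bool can_run = true;
                for (int j = 0; j < R; j++)
                    if (max[i][j] - alloc[i][j] > work[j]) { can_run = false; break; }
                if (can_run) {
                    /* Pretend process i runs to completion and releases all. */
                    for (int j = 0; j < R; j++) work[j] += alloc[i][j];
                    finished[i] = true;
                    progressed = true;
                }
            }
            if (!progressed) break;   /* no process can proceed: stop early */
        }
        for (int i = 0; i < P; i++)
            if (!finished[i]) return false;
        return true;
    }

    int main(void)
    {
        int avail[R]    = {3, 3, 2};                 /* sample data */
        int max[P][R]   = {{7,5,3},{3,2,2},{9,0,2},{2,2,2},{4,3,3}};
        int alloc[P][R] = {{0,1,0},{2,0,0},{3,0,2},{2,1,1},{0,0,2}};
        printf("safe: %s\n", is_safe(avail, max, alloc) ? "yes" : "no");
        return 0;
    }

With this sample data the check finds the safe sequence P1, P3, P4, P0, P2 and prints "safe: yes"; a request is granted only if the state after granting it would still pass this check.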
INTERPROCESS COMMUNICATION 
Interprocess communication is the mechanism provided by the operating system that
allows processes to communicate with each other. This communication could involve a
process letting another process know that some event has occurred or the transferring of data
from one process to another.

Synchronization in Interprocess Communication
Synchronization is a necessary part of interprocess communication. It is either provided by
the interprocess control mechanism or handled by the communicating processes. Some of the
methods to provide synchronization are as follows −

 Semaphore
A semaphore is a variable that controls the access to a common resource by multiple
processes. The two types of semaphores are binary semaphores and counting
semaphores.
 Mutual Exclusion
Mutual exclusion requires that only one process thread can enter the critical section at
a time. This is useful for synchronization and also prevents race conditions.
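
As a minimal sketch of the binary semaphore described above (POSIX unnamed semaphores between two threads, illustrative names; compile with -pthread), sem_wait and sem_post mark the entry and exit of a critical section around a shared counter:

    #include <semaphore.h>
    #include <pthread.h>
    #include <stdio.h>

    static sem_t sem;                 /* binary semaphore guarding the counter */
    static int counter;

    static void *bump(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            sem_wait(&sem);           /* enter critical section */
            counter++;
            sem_post(&sem);           /* leave critical section */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        sem_init(&sem, 0, 1);         /* initial value 1 => binary semaphore */
        pthread_create(&a, NULL, bump, NULL);
        pthread_create(&b, NULL, bump, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("counter = %d\n", counter);   /* 200000 with the semaphore */
        sem_destroy(&sem);
        return 0;
    }

Initializing the semaphore to a value greater than 1 would make it a counting semaphore, allowing that many processes or threads into the guarded region at once.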
Approaches to Interprocess Communication
 Shared Memory
Shared memory is the memory that can be simultaneously accessed by multiple
processes. This is done so that the processes can communicate with each other. All
POSIX systems, as well as Windows operating systems use shared memory.
 Message Queue
Multiple processes can read and write data to the message queue without being
connected to each other. Messages are stored in the queue until their recipient retrieves
them. Message queues are quite useful for interprocess communication and are used
by most operating systems.
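
As a sketch of the shared-memory approach, assuming a POSIX system (the segment name /demo_shm is made up; on older glibc, link with -lrt), a parent and a child process map the same object, so the child's write becomes visible to the parent:

    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *name = "/demo_shm";              /* illustrative name */
        int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
        ftruncate(fd, 4096);                         /* size the segment */
        char *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);

        if (fork() == 0) {                           /* child: writer */
            strcpy(buf, "hello via shared memory");
            _exit(0);
        }
        wait(NULL);                                  /* parent: reader */
        printf("%s\n", buf);

        munmap(buf, 4096);
        shm_unlink(name);                            /* remove the object */
        return 0;
    }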
Multiple Processors Scheduling in Operating System
Multiple processor scheduling, or multiprocessor scheduling, focuses on designing the scheduling function for a system that consists of more than one processor. Multiple CPUs share the load (load sharing) so that various processes run simultaneously. In general, multiprocessor scheduling is complex compared to single-processor scheduling.
The multiple CPUs in the system are in close communication and share a common bus, memory, and peripheral devices, so the system is said to be tightly coupled. These systems are used to process bulk amounts of data and are common in applications such as satellite control and weather forecasting.
When the processors are identical, i.e., homogeneous, in terms of their functionality, any available processor can run any process in the queue. Multiprocessor systems may also be heterogeneous (different kinds of CPUs), and even with homogeneous processors there may be special scheduling constraints, such as a device connected via a private bus to only one CPU.
Just as there is no single best scheduling solution for a system with a single processor, there is no best scheduling solution for a system with multiple processors.

Approaches to Multiple Processor Scheduling


There are two approaches to multiple processor scheduling in the operating system: symmetric multiprocessing and asymmetric multiprocessing.

1. Symmetric Multiprocessing: Each processor is self-scheduling. All processes may be in a common ready queue, or each processor may have its own private queue of ready processes. Scheduling proceeds by having the scheduler of each processor examine the ready queue and select a process to execute.
2. Asymmetric Multiprocessing: All scheduling decisions and I/O processing are handled by a single processor called the master server, and the other processors execute only user code. This approach is simple and reduces the need for data sharing.

Processor Affinity
Processor affinity means a process has an affinity for the processor on which it is currently running. When a process runs on a specific processor, the data most recently accessed by the process populates that processor's cache, so successive memory accesses by the process are often satisfied from cache memory.
Now suppose the process migrates to another processor: the contents of the first processor's cache must be invalidated, and the cache of the second processor must be repopulated. Because of the high cost of invalidating and repopulating caches, most SMP (symmetric multiprocessing) systems try to avoid migrating processes between processors and instead keep each process running on the same processor. This is known as processor affinity. There are two types:
1. Soft affinity: The operating system has a policy of trying to keep a process running on the same processor, but does not guarantee it.
2. Hard affinity: The process can specify a subset of processors on which it may run. Linux implements soft affinity and also provides system calls such as sched_setaffinity() that support hard affinity; a minimal sketch follows.
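
Since sched_setaffinity() is mentioned above, here is a minimal Linux-specific sketch that pins the calling process to CPU 0, i.e., hard affinity (error handling kept deliberately small):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        cpu_set_t mask;
        CPU_ZERO(&mask);              /* start with an empty CPU set */
        CPU_SET(0, &mask);            /* allow only CPU 0 */

        /* pid 0 means "the calling process". */
        if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {
            perror("sched_setaffinity");
            return 1;
        }
        printf("now pinned to CPU 0\n");
        return 0;
    }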

Load Balancing
Load balancing keeps the workload evenly distributed across all processors in an SMP system. It is needed only on systems where each processor has its own private queue of processes eligible to execute; on systems with a common run queue, load balancing is unnecessary because an idle processor immediately extracts a runnable process from the common queue. Where private queues are used, it is important to keep the workload balanced among all processors to fully utilize the benefit of having more than one processor; otherwise, one or more processors may sit idle while others are overloaded, with lists of processes still awaiting the CPU. There are two general approaches to load balancing:

1. Push Migration: A specific task periodically checks the load on each processor. If it finds an imbalance, it evenly distributes the load by moving (pushing) processes from overloaded processors to idle or less busy ones; a toy sketch appears after this list.
2. Pull Migration: An idle processor pulls a waiting task from a busy processor and executes it.
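
The following toy sketch of push migration is not a real kernel interface; the per-CPU queue lengths are made-up sample data, and a real scheduler would move actual task structures rather than counters.

    #include <stdio.h>

    #define CPUS 4

    /* Illustrative per-CPU run-queue lengths. */
    static int queue_len[CPUS] = {6, 1, 3, 0};

    /* One push-migration pass: move tasks from the busiest CPU to the
       idlest until their loads differ by at most one. */
    static void push_migrate(void)
    {
        int busiest = 0, idlest = 0;
        for (int c = 1; c < CPUS; c++) {
            if (queue_len[c] > queue_len[busiest]) busiest = c;
            if (queue_len[c] < queue_len[idlest])  idlest  = c;
        }
        while (queue_len[busiest] - queue_len[idlest] > 1) {
            queue_len[busiest]--;          /* "move" one task */
            queue_len[idlest]++;
            printf("migrated a task from CPU %d to CPU %d\n", busiest, idlest);
        }
    }

    int main(void)
    {
        push_migrate();
        for (int c = 0; c < CPUS; c++)
            printf("CPU %d: %d tasks\n", c, queue_len[c]);
        return 0;
    }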

Multi-core Processors
In multi-core processors, multiple processor cores are placed on the same physical chip. Each core has its own register set to maintain its architectural state, so it appears to the operating system as a separate physical processor. SMP systems that use multi-core processors are faster and consume less power than systems in which each processor sits on its own physical chip.
However, multi-core processors complicate scheduling. When a processor accesses memory, it spends a significant amount of time waiting for the data to become available; this situation is called a memory stall. It occurs for various reasons, such as a cache miss (accessing data that is not in the cache). In such cases the processor can spend up to 50% of its time waiting for data from memory. To solve this problem, recent hardware designs implement multithreaded processor cores in which two or more hardware threads are assigned to each core, so that if one thread stalls while waiting for memory, the core can switch to another thread. There are two ways to multithread a processor: coarse-grained multithreading, in which a thread executes on a core until a long-latency event such as a memory stall occurs, and fine-grained multithreading, in which the core switches between threads at a much finer granularity, typically at the boundary of an instruction cycle.

Symmetric Multiprocessor
Symmetric multiprocessing (SMP) is the third model. There is one copy of the OS in memory, and any central processing unit can run it. When a system call is made, the CPU on which it was made traps to the kernel and processes that system call. This model balances processes and memory dynamically, and each processor is self-scheduling.
Scheduling proceeds by having the scheduler of each processor examine the ready queue and select a process to execute. All processes may be in a common ready queue, or each processor may have its own private queue of ready processes. There are mainly three sources of contention in a multiprocessor operating system.
o Locking system: Since resources are shared in a multiprocessor system, they must be protected for safe access by the multiple processors. The main purpose of a locking scheme is to serialize access to resources by the multiple processors.
o Shared data: When multiple processors access the same data at the same time, the data may become inconsistent, so protocols or locking schemes must be used to protect it.
o Cache coherence: When the same data is held in the caches of several processors, an update made by one processor must be propagated (or the stale copies invalidated) so that all processors continue to see a consistent value.

Master-Slave Multiprocessor
In this multiprocessor model there is a single data structure that keeps track of the ready processes. One central processing unit works as the master and the others as slaves: all the processors are handled by a single processor, called the master server.

The master server runs the operating system process, and the slave servers run user processes. The memory and input-output devices are shared among all the processors, and all the processors are connected to a common bus. This system is simple and reduces data sharing, so it is called asymmetric multiprocessing.

Starvation and Aging in Operating Systems


Starvation generally occurs in priority scheduling or shortest-job-first scheduling. In the priority scheduling technique, every process is assigned a priority, and the CPU is allocated based on that priority: the process with the highest priority gets the CPU, even if another waiting process has a lower burst time.
Starvation is very bad for a process in an operating system, but we can overcome this
starvation problem with the help of Aging.

What is Starvation?
Starvation, or indefinite blocking, is a phenomenon associated with priority scheduling algorithms. A low-priority process in the ready state can keep waiting for CPU allocation because processes with higher priority keep arriving; in this way, higher-priority processes can prevent a low-priority process from ever getting the CPU.

What is Aging in OS?


In operating systems, aging is a scheduling technique used to avoid starvation. Fixed-priority scheduling is a discipline in which each task queued for a system resource is assigned a priority, and a task with a higher priority is allowed to access the resource before a task with a lower priority. Aging gradually increases the priority of processes that wait in the system for a long time, so that, as time passes, a lower-priority process becomes a higher-priority process.
A disadvantage of plain fixed-priority scheduling is that low-priority tasks may starve when many high-priority tasks are queued; aging counters this by raising a task's priority in proportion to its waiting time in the ready queue, as the sketch below illustrates.
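
The following sketch illustrates aging with an invented process structure and increment (real schedulers differ in both). Under plain priority scheduling the two permanently high-priority processes would starve the low-priority one, but because every waiting process gains one priority point per tick, the low-priority process eventually runs:

    #include <stdio.h>

    #define NPROC 3

    struct proc {
        const char *name;
        int base;       /* assigned priority (higher = more important) */
        int effective;  /* base priority plus aging bonus */
    };

    static struct proc p[NPROC] = {
        {"low", 1, 1}, {"high1", 9, 9}, {"high2", 9, 9}
    };

    int main(void)
    {
        for (int tick = 1; tick <= 12; tick++) {
            /* Dispatch the process with the highest effective priority. */
            int run = 0;
            for (int i = 1; i < NPROC; i++)
                if (p[i].effective > p[run].effective) run = i;

            printf("tick %2d: running %s\n", tick, p[run].name);

            /* Aging: every process that waited gains one priority point;
               the dispatched process drops back to its base priority. */
            for (int i = 0; i < NPROC; i++)
                p[i].effective = (i == run) ? p[i].base : p[i].effective + 1;
        }
        return 0;
    }

With these sample values, "low" accumulates enough aging bonus to be dispatched around tick 10, demonstrating how aging bounds the waiting time of a low-priority process.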
