(JUNE 2022-DEC)
Prepared By:
S.DIVYA AP/BCA,B.SC(CS)
Topics common to both BCA and B.Sc (CS):
Introduction to OS
OS services
Process concepts
Thread concepts
Multiprocessor scheduling
Deadlock
Process control block
Scheduling algorithms
Interprocess Communication
System calls
Introduction
An Operating System (OS) is an interface between a computer user and computer hardware.
An operating system is a software which performs all the basic tasks like file management,
memory management, process management, handling input and output, and controlling
peripheral devices such as disk drives and printers.
Some popular Operating Systems include Linux Operating System, Windows Operating
System, VMS, OS/400, AIX, z/OS, etc.
Definition
An operating system is a program that acts as an interface between the user and the computer
hardware and controls the execution of all kinds of programs.
Following are some of important functions of an operating System.
Memory Management
Processor Management
Device Management
File Management
Security
Control over system performance
Job accounting
Error detecting aids
Coordination between other software and users
Memory Management
Memory management refers to management of Primary Memory or Main Memory. Main
memory is a large array of words or bytes where each word or byte has its own address.
Main memory provides fast storage that can be accessed directly by the CPU. For a
program to be executed, it must be in the main memory. An Operating System does the
following activities for memory management −
Keeps track of primary memory, i.e., which parts of it are in use by whom and which
parts are not in use.
In multiprogramming, the OS decides which process will get memory when and how
much.
Allocates the memory when a process requests it to do so.
De-allocates the memory when a process no longer needs it or has been terminated.
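The bookkeeping listed above can be sketched in a few lines of Python. This is a toy model, not a real OS interface: `MemoryManager`, its frame list, and its method names are all illustrative assumptions, standing in for the kernel's tracking of which parts of primary memory are in use and by whom.

```python
# Toy sketch of OS memory bookkeeping: track which frames of primary
# memory are in use and by which process, allocate on request, and
# de-allocate when a process terminates. Illustrative names only.
class MemoryManager:
    def __init__(self, num_frames):
        # None means the frame is free; otherwise it holds the owner's PID.
        self.frames = [None] * num_frames

    def allocate(self, pid, count):
        """Give `count` free frames to process `pid`; return their indices."""
        free = [i for i, owner in enumerate(self.frames) if owner is None]
        if len(free) < count:
            raise MemoryError("not enough free frames")
        for i in free[:count]:
            self.frames[i] = pid
        return free[:count]

    def release(self, pid):
        """De-allocate every frame owned by `pid` (process terminated)."""
        for i, owner in enumerate(self.frames):
            if owner == pid:
                self.frames[i] = None

    def in_use(self):
        """How many frames are currently in use."""
        return sum(owner is not None for owner in self.frames)
```

For example, allocating three frames to process 1 and two to process 2 leaves five frames in use; releasing process 1 frees its three frames again.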
Processor Management
In multiprogramming environment, the OS decides which process gets the processor when
and for how much time. This function is called process scheduling. An Operating System
does the following activities for processor management −
Keeps track of the processor and the status of processes. The program responsible for
this task is known as the traffic controller.
Allocates the processor (CPU) to a process.
De-allocates processor when a process is no longer required.
Device Management
An Operating System manages device communication via their respective drivers. It does the
following activities for device management −
Keeps track of all devices. The program responsible for this task is known as the I/O
controller.
Decides which process gets the device when and for how much time.
Allocates each device in an efficient way.
De-allocates devices.
File Management
A file system is normally organized into directories for easy navigation and usage. These
directories may contain files and other directories.
An Operating System does the following activities for file management −
Keeps track of information, location, uses, status, etc. These collective facilities are often
known as the file system.
Decides who gets the resources.
Allocates the resources.
De-allocates the resources.
Other Important Activities
Following are some of the important activities that an Operating System performs −
Security − By means of passwords and other similar techniques, it prevents
unauthorized access to programs and data.
Control over system performance − Recording delays between request for a service
and response from the system.
Job accounting − Keeping track of time and resources used by various jobs and users.
Error detecting aids − Production of dumps, traces, error messages, and other
debugging and error detecting aids.
Coordination between other software and users − Coordination and assignment of
compilers, interpreters, assemblers and other software to the various users of the
computer systems.
An Operating System provides services to both the users and to the programs.
Program execution
I/O operations
File System manipulation
Communication
Error Detection
Resource Allocation
Protection
Program execution
Operating systems handle many kinds of activities from user programs to system programs
like printer spooler, name servers, file server, etc. Each of these activities is encapsulated as a
process.
A process includes the complete execution context (code to execute, data to manipulate,
registers, OS resources in use).
I/O Operation
An I/O operation means a read or write operation on a file or a specific I/O device. The
operating system provides access to the required I/O device when required.
File system manipulation
A file represents a collection of related information. Computers can store files on the disk
(secondary storage), for long-term storage purpose. Examples of storage media include
magnetic tape, magnetic disk and optical disk drives like CD, DVD. Each of these media has
its own properties like speed, capacity, data transfer rate and data access methods.
A file system is normally organized into directories for easy navigation and usage. These
directories may contain files and other directories. Following are the major activities of an
operating system with respect to file management −
A program needs to read a file or write a file.
The operating system gives the program permission to operate on the file.
Permission varies from read-only, read-write, denied, and so on.
Operating System provides an interface to the user to create/delete files.
Operating System provides an interface to the user to create/delete directories.
Operating System provides an interface to create the backup of file system.
Communication
In case of distributed systems which are a collection of processors that do not share memory,
peripheral devices, or a clock, the operating system manages communications between all the
processes. Multiple processes communicate with one another through communication lines in
the network.
The OS handles routing and connection strategies, and the problems of contention and
security.
Types of System Calls
Process Control
Process control system calls are used to direct processes. Examples include creating and
terminating processes, loading and executing programs, and aborting or ending a process.
File Management
File management system calls are used to handle files. Examples include creating,
deleting, opening, closing, reading, and writing files.
Device Management
Device management system calls are used to deal with devices. Examples include reading
from and writing to a device, getting and setting device attributes, and releasing a device.
Information Maintenance
Information maintenance system calls are used to maintain information. Examples include
getting or setting the time and date and getting or setting system data.
Communication
Communication system calls are used for interprocess communication. Examples include
creating and deleting communication connections and sending and receiving messages.
Examples of Windows and Unix system calls
There are various Windows and Unix system calls; several common ones are described
below.
read()
It is used to obtain data from a file on the file system. It accepts three arguments in general:
o A file descriptor.
o A buffer to store read data.
o The number of bytes to read from the file.
The file to be read is identified by its file descriptor, which is obtained by opening the file
using open() before reading.
wait()
In some systems, a process may have to wait for another process to complete its execution
before proceeding. When a parent process makes a child process, the parent process
execution is suspended until the child process is finished. The wait() system call is used to
suspend the parent process. Once the child process has completed its execution, control is
returned to the parent process.
write()
It is used to write data from a user buffer to a device like a file. This system call is one way
for a program to generate data. It takes three arguments in general:
o A file descriptor.
o A pointer to the buffer in which data is saved.
o The number of bytes to be written from the buffer.
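The three arguments of read() and write() can be seen directly in Python, whose `os.read` and `os.write` are thin wrappers over the same system calls. The sketch below sends a payload through a pipe: `pipe_roundtrip` is an illustrative helper, not part of any standard API.

```python
import os

# os.write/os.read wrap the write()/read() system calls: each takes a
# file descriptor plus a buffer (for write) or a byte count (for read).
def pipe_roundtrip(payload: bytes) -> bytes:
    r, w = os.pipe()                 # two file descriptors: read end, write end
    os.write(w, payload)             # write(fd, buffer) -> number of bytes written
    os.close(w)
    data = os.read(r, len(payload))  # read(fd, nbytes) -> bytes from the descriptor
    os.close(r)
    return data
```

A pipe is used here instead of a disk file only to keep the example self-contained; the descriptor/buffer/byte-count pattern is the same for regular files.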
fork()
Processes generate clones of themselves using the fork() system call. It is one of the most
common ways to create processes in operating systems. When a parent process spawns a
child process, execution of the parent process is interrupted until the child process completes.
Once the child process has completed its execution, control is returned to the parent process.
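The fork-then-wait pattern described above can be demonstrated with Python's `os` module on a POSIX system (this sketch assumes `os.fork` is available, e.g. on Linux; the function name and the exit status 7 are arbitrary choices for illustration).

```python
import os

def fork_and_wait() -> int:
    """Parent forks a child, then blocks in wait() until the child exits.
    Returns the child's exit status as observed by the parent."""
    pid = os.fork()                  # clone the calling process
    if pid == 0:
        os._exit(7)                  # child: terminate with status 7
    _, status = os.waitpid(pid, 0)   # parent: suspended until the child finishes
    return os.WEXITSTATUS(status)    # recover the child's exit status
```

Calling `fork_and_wait()` returns 7: control comes back to the parent only after the child has terminated, exactly as the text describes.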
close()
It is used to end file system access. When this system call is invoked, it signifies that the
program no longer requires the file, and the buffers are flushed, the file information is altered,
and the file resources are de-allocated as a result.
exec()
When an executable file replaces an earlier executable file in an already executing process,
this system call is invoked. As a new process is not built, the old process identifier
stays, but the new program replaces the old one's data, stack, and heap.
exit()
The exit() is a system call that is used to end program execution. This call indicates that the
thread execution is complete, which is especially useful in multi-threaded environments. The
operating system reclaims resources spent by the process following the use of
the exit() system function.
Process Management in OS
A program does nothing unless its instructions are executed by a CPU. A program in execution is called a process.
In order to accomplish its task, a process needs computer resources.
There may exist more than one process in the system which may require the same resource at the same time.
Therefore, the operating system has to manage all the processes and the resources in a convenient and efficient
way.
Some resources may need to be used by only one process at a time to maintain consistency; otherwise the
system can become inconsistent and deadlock may occur.
The operating system is responsible for the following activities in connection with Process Management
1. Scheduling processes and threads on the CPUs.
2. Creating and deleting both user and system processes.
3. Suspending and resuming processes.
4. Providing mechanisms for process synchronization.
5. Providing mechanisms for process communication.
Attributes of a process
The Attributes of the process are used by the Operating System to create the process control
block (PCB) for each of them. This is also called context of the process. Attributes which are
stored in the PCB are described below.
1. Process ID
When a process is created, a unique id is assigned to the process which is used for unique
identification of the process in the system.
2. Program counter
A program counter stores the address of the last instruction of the process on which the
process was suspended. The CPU uses this address when the execution of this process is
resumed.
3. Process State
The Process, from its creation to the completion, goes through various states which are new,
ready, running and waiting. We will discuss about them later in detail.
4. Priority
Every process has its own priority. The process with the highest priority among the processes
gets the CPU first. This is also stored on the process control block.
Process States
State Diagram
The process, from its creation to completion, passes through various states. The minimum
number of states is five.
The names of the states are not standardized although the process may be in one of the
following states during execution.
1. New
A program which is going to be picked up by the OS into the main memory is called a new
process.
2. Ready
Whenever a process is created, it directly enters the ready state, in which it waits for the
CPU to be assigned. The OS picks new processes from secondary memory and puts them
in main memory.
The processes which are ready for the execution and reside in the main memory are called
ready state processes. There can be many processes present in the ready state.
3. Running
One of the processes from the ready state will be chosen by the OS depending upon the
scheduling algorithm. Hence, if we have only one CPU in our system, the number of running
processes for a particular time will always be one. If we have n processors in the system then
we can have n processes running simultaneously.
4. Block or wait
From the Running state, a process can make the transition to the block or wait state
depending upon the scheduling algorithm or the intrinsic behavior of the process.
When a process waits for a certain resource to be assigned or for input from the user,
the OS moves this process to the block or wait state and assigns the CPU to other
processes.
5. Completion or termination
When a process finishes its execution, it enters the termination state. All the context of the
process (its Process Control Block) is deleted, and the process is terminated by the
operating system.
6. Suspend ready
A process in the ready state which is moved to secondary memory from main memory
due to a lack of resources (mainly primary memory) is said to be in the suspend ready state.
If the main memory is full and a higher-priority process arrives for execution, the OS
has to make room for it in main memory by moving a lower-priority process out to
secondary memory. Suspend ready processes remain in secondary memory until main
memory becomes available.
7. Suspend wait
Instead of removing a process from the ready queue, it is better to remove a blocked
process which is waiting for some resource in main memory. Since it is already waiting
for a resource to become available, it is better for it to wait in secondary memory and
make room for a higher-priority process. These processes complete their execution once
main memory becomes available and their wait is finished.
1. Creation
Once a process is created, it enters the ready queue (in main memory) and is ready for
execution.
2. Scheduling
Out of the many processes present in the ready queue, the operating system chooses one
process and starts executing it. Selecting the process to be executed next is known as
scheduling.
3. Execution
Once the process is scheduled for execution, the processor starts executing it. The process
may enter the blocked or wait state during execution; in that case, the processor
starts executing other processes.
4. Deletion/killing
Once the process has served its purpose, the OS kills it. The context of the process (its
PCB) is deleted and the process is terminated by the operating system.
Process Schedulers
Operating system uses various schedulers for the process scheduling described below.
The OS can use different policies to manage each queue (FIFO, Round Robin, Priority, etc.).
The OS scheduler determines how to move processes between the ready and run queues;
the run queue can have only one entry per processor core on the system.
Two-State Process Model
Two-state process model refers to running and non-running states which are described below
−
Running: the process that is currently being executed on the CPU.
Not Running: processes that are not running are kept in a queue, waiting for their turn to
execute. Each entry in the queue is a pointer to a particular process, and the queue is
implemented using a linked list. The dispatcher works as follows: when a process is
interrupted, it is transferred to the waiting queue; if the process has completed or aborted,
it is discarded. In either case, the dispatcher then selects a process from the queue to execute.
Schedulers
Schedulers are special system software which handle process scheduling in various ways.
Their main task is to select the jobs to be submitted into the system and to decide which
process to run. Schedulers are of three types −
Long-Term Scheduler
Short-Term Scheduler
Medium-Term Scheduler
Long Term Scheduler
It is also called a job scheduler. A long-term scheduler determines which programs are
admitted to the system for processing. It selects processes from the queue and loads them
into memory for execution, where they become candidates for CPU scheduling.
The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O
bound and processor bound. It also controls the degree of multiprogramming. If the degree of
multiprogramming is stable, then the average rate of process creation must be equal to the
average departure rate of processes leaving the system.
On some systems, the long-term scheduler may be absent or minimal; time-sharing
operating systems have no long-term scheduler. The long-term scheduler acts when a
process changes state from new to ready.
Short Term Scheduler
It is also called the CPU scheduler. Its main objective is to increase system performance in
accordance with the chosen set of criteria. It manages the transition of processes from the
ready state to the running state: the CPU scheduler selects one process from among those
ready to execute and allocates the CPU to it.
Short-term schedulers, also known as dispatchers, make the decision of which process to
execute next. Short-term schedulers are faster than long-term schedulers.
Medium Term Scheduler
Medium-term scheduling is a part of swapping. It removes processes from memory and
thereby reduces the degree of multiprogramming. The medium-term scheduler is in charge
of handling the swapped-out processes.
A running process may become suspended if it makes an I/O request. A suspended process
cannot make any progress towards completion. In this condition, to remove the process from
memory and make space for other processes, the suspended process is moved to the
secondary storage. This process is called swapping, and the process is said to be swapped out
or rolled out. Swapping may be necessary to improve the process mix.
Comparison among Schedulers
Speed: the short-term scheduler is the fastest of the three, the long-term scheduler is
slower than the short-term scheduler, and the medium-term scheduler's speed lies
between the two.
What is Thread?
A thread is a flow of execution through the process code, with its own program counter that
keeps track of which instruction to execute next, system registers which hold its current
working variables, and a stack which contains the execution history.
A thread shares with its peer threads some information, such as the code segment, the data
segment, and open files. When one thread alters a code-segment memory item, all other
threads see the change.
A thread is also called a lightweight process. Threads provide a way to improve application
performance through parallelism; they represent a software approach to improving
operating-system performance by reducing the overhead of a full process.
Each thread belongs to exactly one process and no thread can exist outside a process. Each
thread represents a separate flow of control. Threads have been successfully used in
implementing network servers and web server. They also provide a suitable foundation for
parallel execution of applications on shared memory multiprocessors. The following figure
shows the working of a single-threaded and a multithreaded process.
In multiple processing environments, each process executes the same code but has its own
memory and file resources; all threads of a process can share the same set of open files and
child processes.
If one process is blocked, then no other process can execute until the first process is
unblocked; while one thread is blocked and waiting, a second thread in the same task can
run.
In multiple processes, each process operates independently of the others; one thread can
read, write, or change another thread's data.
Advantages of Thread
Threads minimize the context switching time.
Use of threads provides concurrency within a process.
Efficient communication.
It is more economical to create and context switch threads.
Threads allow utilization of multiprocessor architectures to a greater scale and
efficiency.
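These advantages can be illustrated with Python's `threading` module: several threads of one process update the same shared dictionary, something separate processes could not do without explicit IPC. The names `totals` and `worker` are illustrative; the lock is included because shared data needs mutual exclusion.

```python
import threading

# Threads of one process share the same data (the `totals` dict here),
# which makes communication between them cheap compared to processes.
totals = {"count": 0}
lock = threading.Lock()

def worker(n):
    for _ in range(n):
        with lock:                   # serialize updates to shared data
            totals["count"] += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()                         # wait for all peer threads to finish
```

After all four threads join, the shared counter holds 4000: every thread saw and modified the same object, with the lock preventing lost updates.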
Types of Thread
Threads are implemented in following two ways −
User Level Threads − User managed threads.
Kernel Level Threads − Operating System managed threads acting on kernel, an
operating system core.
User Level Threads
In this case, the thread management kernel is not aware of the existence of threads. The
thread library contains code for creating and destroying threads, for passing message and data
between threads, for scheduling thread execution and for saving and restoring thread
contexts. The application starts with a single thread.
Advantages
Thread switching does not require Kernel mode privileges.
User level thread can run on any operating system.
Scheduling can be application specific in the user level thread.
User level threads are fast to create and manage.
Disadvantages
In a typical operating system, most system calls are blocking.
Multithreaded application cannot take advantage of multiprocessing.
Kernel Level Threads
In this case, thread management is done by the Kernel. There is no thread management code
in the application area. Kernel threads are supported directly by the operating system. Any
application can be programmed to be multithreaded. All of the threads within an application
are supported within a single process.
The Kernel maintains context information for the process as a whole and for individual
threads within the process. Scheduling by the Kernel is done on a thread basis. The Kernel
performs thread creation, scheduling and management in Kernel space. Kernel threads are
generally slower to create and manage than the user threads.
Advantages
The Kernel can simultaneously schedule multiple threads from the same process on
multiple processors.
If one thread in a process is blocked, the Kernel can schedule another thread of the
same process.
Kernel routines themselves can be multithreaded.
Disadvantages
Kernel threads are generally slower to create and manage than the user threads.
Transfer of control from one thread to another within the same process requires a
mode switch to the Kernel.
Multithreading Models
Some operating systems provide a combined user-level thread and kernel-level thread
facility; Solaris is a good example of this combined approach. In a combined system,
multiple threads within the same application can run in parallel on multiple processors,
and a blocking system call need not block the entire process. There are three
multithreading models: many-to-many, many-to-one, and one-to-one.
User-level threads are faster to create and manage; kernel-level threads are slower to
create and manage.
User-level threads are generic and can run on any operating system; kernel-level threads
are specific to the operating system.
First Come First Serve (FCFS) — waiting time of each process (wait time = service time - arrival time):
P0: 0 - 0 = 0
P1: 5 - 1 = 4
P2: 8 - 2 = 6
P3: 16 - 3 = 13
Shortest Job Next (SJN) — the same processes, with arrival, execution and service times:
Process Arrival Time Execution Time Service Time
P0 0 5 0
P1 1 3 5
P2 2 8 14
P3 3 6 8
Waiting time of each process is as follows −
P0: 0 - 0 = 0
P1: 5 - 1 = 4
P2: 14 - 2 = 12
P3: 8 - 3 = 5
Priority Based Scheduling — the same processes with priorities (a larger number means a higher priority):
Process Arrival Time Execution Time Priority Service Time
P0 0 5 1 0
P1 1 3 2 11
P2 2 8 1 14
P3 3 6 3 5
Waiting time of each process is as follows −
P0: 0 - 0 = 0
P1: 11 - 1 = 10
P2: 14 - 2 = 12
P3: 5 - 3 = 2
Average Wait Time: (0 + 10 + 12 + 2) / 4 = 24 / 4 = 6
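The waiting times in the tables above can be recomputed in a few lines. This sketch assumes the four processes with arrival times 0–3 and execution times 5, 3, 8, 6; for non-preemptive algorithms the run order fully determines each service (start) time, and wait = service time − arrival time. The run orders passed in encode each algorithm's decisions by hand; `waits` is an illustrative helper, not a general scheduler.

```python
# Process name -> (arrival time, execution/burst time), as in the tables.
procs = {"P0": (0, 5), "P1": (1, 3), "P2": (2, 8), "P3": (3, 6)}

def waits(run_order):
    """Given a non-preemptive run order, return each process's wait time
    (service time - arrival time)."""
    clock, out = 0, {}
    for name in run_order:
        arrival, burst = procs[name]
        out[name] = clock - arrival  # waited from arrival until it started
        clock += burst               # runs to completion before the next one
    return out

fcfs = waits(["P0", "P1", "P2", "P3"])  # first come, first served
sjn = waits(["P0", "P1", "P3", "P2"])   # shortest job next among the ready
prio = waits(["P0", "P3", "P1", "P2"])  # priority (larger number = higher)
```

The priority run reproduces the average wait of (0 + 10 + 12 + 2) / 4 = 6 stated above.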
Shortest Remaining Time
Shortest remaining time (SRT) is the preemptive version of the SJN algorithm.
The processor is allocated to the job closest to completion but it can be preempted by a
newer ready job with shorter time to completion.
It is impossible to implement in interactive systems where the required CPU time is not
known.
It is often used in batch environments where short jobs need to be given preference.
Round Robin Scheduling
Round Robin is a preemptive process scheduling algorithm.
Each process is given a fixed time to execute, called a quantum.
Once a process has executed for the given time period, it is preempted and another process
executes for its time period.
Context switching is used to save states of preempted processes.
Waiting time of each process is as follows (with a time quantum of 3) −
P0: (0 - 0) + (12 - 3) = 9
P1: (3 - 1) = 2
P2: (6 - 2) + (14 - 9) + (20 - 17) = 12
P3: (9 - 3) + (17 - 12) = 11
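A round-robin schedule can be simulated directly. This sketch assumes the same four processes (arrival 0–3, bursts 5, 3, 8, 6) and a time quantum of 3, which is consistent with the waits shown above; `rr_waits` is an illustrative helper. For a preemptive algorithm, wait = completion time − arrival time − burst time.

```python
from collections import deque

def rr_waits(procs, quantum):
    """procs: name -> (arrival, burst). Simulate round robin and return
    each process's total waiting time."""
    arrival = {p: a for p, (a, _) in procs.items()}
    burst = {p: b for p, (_, b) in procs.items()}
    left = dict(burst)                       # remaining burst time
    order = sorted(procs, key=lambda p: arrival[p])
    ready, clock, done, i = deque(), 0, {}, 0
    while len(done) < len(procs):
        while i < len(order) and arrival[order[i]] <= clock:
            ready.append(order[i])           # admit newly arrived processes
            i += 1
        if not ready:
            clock = arrival[order[i]]        # idle until the next arrival
            continue
        p = ready.popleft()
        run = min(quantum, left[p])          # run for one quantum (or less)
        clock += run
        left[p] -= run
        while i < len(order) and arrival[order[i]] <= clock:
            ready.append(order[i])           # arrivals during the slice go first
            i += 1
        if left[p] == 0:
            done[p] = clock                  # completed
        else:
            ready.append(p)                  # preempted: back of the queue
    return {p: done[p] - arrival[p] - burst[p] for p in procs}
```

Running it with quantum 3 reproduces the waits 9, 2, 12, 11 from the table above.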
The operating system maintains a PCB for every process; together, the PCBs form the
process table.
• 1) Process State: the current state of the process, i.e., ready, running, waiting, etc.
• 2) Process ID: a unique identifier for each process in the operating system.
• 3) Program Counter: a pointer to the address of the next instruction to be executed
for this process.
• 4) CPU Registers: the various CPU registers whose contents must be saved when the
process leaves the running state so that it can resume execution.
• 5) Accounting Information: the amount of CPU time used for process execution,
time limits, execution ID, etc.
• 6) Input/Output Status Information: the list of I/O devices allocated
to the process.
• The PCB is maintained for a process throughout its lifetime and is deleted once the
process terminates.
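The PCB fields listed above can be sketched as a record type. This is a toy model: the field names mirror the list above, and `process_table`, `create_process`, and `terminate_process` are illustrative names, not a real kernel interface.

```python
from dataclasses import dataclass, field

# Toy Process Control Block holding the attributes listed above.
@dataclass
class PCB:
    pid: int
    state: str = "new"           # new / ready / running / waiting / terminated
    program_counter: int = 0     # address of the next instruction
    registers: dict = field(default_factory=dict)  # saved CPU registers
    priority: int = 0
    cpu_time_used: int = 0       # accounting information
    open_devices: list = field(default_factory=list)  # I/O status information

# The process table maps each PID to its PCB.
process_table = {}

def create_process(pid):
    process_table[pid] = PCB(pid, state="ready")   # PCB created with the process

def terminate_process(pid):
    del process_table[pid]       # PCB deleted when the process terminates
```

Creating a process adds its PCB to the table; terminating it removes the PCB, matching the lifetime rule stated above.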
User-level thread
The operating system does not recognize user-level threads. User threads can be easily
implemented, and they are implemented entirely by the user. If a thread performs a blocking
operation at the user level, the whole process is blocked. The kernel knows nothing about
user-level threads and manages them as if they were single-threaded processes.
Examples: Java threads, POSIX threads, etc.
Advantages of User-level threads
1. User threads are easier to implement than kernel threads.
2. User-level threads can be used on operating systems that do not support threads at the
kernel level.
3. They are fast and efficient.
4. Context-switch time is shorter than for kernel-level threads.
5. They do not require modifications to the operating system.
6. The user-level thread representation is very simple: the registers, PC, stack, and mini
thread control block are stored in the address space of the user-level process.
7. It is simple to create, switch, and synchronize threads without the intervention of the
kernel.
Disadvantages of User-level threads
1. User-level threads lack coordination between the thread and the kernel.
2. If a thread causes a page fault, the entire process is blocked.
Interprocess communication is the mechanism provided by the operating system that allows
processes to communicate with each other. This communication could involve a process
letting another process know that some event has occurred or the transferring of data from
one process to another.
Synchronization in Interprocess Communication
Synchronization is a necessary part of interprocess communication. It is either provided by
the interprocess control mechanism or handled by the communicating processes. Some of the
methods to provide synchronization are as follows −
Semaphore
A semaphore is a variable that controls the access to a common resource by multiple
processes. The two types of semaphores are binary semaphores and counting
semaphores.
Mutual Exclusion
Mutual exclusion requires that only one process thread can enter the critical section at
a time. This is useful for synchronization and also prevents race conditions.
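Both mechanisms just described have direct counterparts in Python's `threading` module; the sketch below is illustrative (the counter dict and thread count are arbitrary). A counting semaphore initialized to 2 admits at most two threads to the resource at once, while a Lock gives mutual exclusion on the shared counter inside the critical section.

```python
import threading
import time

slots = threading.Semaphore(2)       # counting semaphore: at most 2 inside
lock = threading.Lock()              # mutual exclusion for the shared counter
state = {"inside": 0, "max_inside": 0}

def use_resource():
    with slots:                      # wait (P) on entry, signal (V) on exit
        with lock:                   # critical section: one thread at a time
            state["inside"] += 1
            state["max_inside"] = max(state["max_inside"], state["inside"])
        time.sleep(0.01)             # pretend to use the resource
        with lock:
            state["inside"] -= 1

threads = [threading.Thread(target=use_resource) for _ in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

After all six threads finish, no thread is still inside, and the semaphore guarantees the occupancy never exceeded two, preventing the race condition the text mentions.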
Approaches to Interprocess Communication
Shared Memory
Shared memory is the memory that can be simultaneously accessed by multiple
processes. This is done so that the processes can communicate with each other. All
POSIX systems, as well as Windows operating systems use shared memory.
Message Queue
Multiple processes can read and write data to the message queue without being
connected to each other. Messages are stored in the queue until their recipient retrieves
them. Message queues are quite useful for interprocess communication and are used
by most operating systems.
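The message-queue approach can be sketched with Python's `multiprocessing` module (this assumes a POSIX-style system where child processes are forked; `producer` and `consume_all` are illustrative names). The sender and receiver are separate processes that are never directly connected: messages sit in the queue until the recipient retrieves them.

```python
from multiprocessing import Process, Queue

def producer(q):
    # The sender just deposits messages in the queue and exits.
    for msg in ["ping", "pong", "done"]:
        q.put(msg)

def consume_all():
    q = Queue()
    p = Process(target=producer, args=(q,))
    p.start()
    # q.get() blocks until a message is available in the queue.
    received = [q.get() for _ in range(3)]
    p.join()
    return received
```

Messages from a single sender are delivered in order, so the consumer receives "ping", "pong", "done" even though it never interacts with the producer directly.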
Multiple processor scheduling or multiprocessor scheduling focuses on designing the
system's scheduling function, which consists of more than one processor. Multiple CPUs
share the load (load sharing) in multiprocessor scheduling so that various processes run
simultaneously.
Multiple Processor Scheduling in Operating System
In general, multiprocessor scheduling is complex compared to single-processor scheduling.
In multiprocessor scheduling there are many identical processors, so any process can be
run at any time.
The multiple CPUs in the system are in close communication and share a common bus,
memory, and peripheral devices, so we can say the system is tightly coupled. Such systems
are used when we want to process a bulk amount of data, mainly in applications such as
satellite control and weather forecasting.
There are cases when the processors are identical, i.e., homogeneous, in terms of their
functionality; then we can use any available processor to run any process in the queue.
Multiprocessor systems may also be heterogeneous (different kinds of CPUs) rather than
homogeneous (the same kind of CPU), and there may be special scheduling constraints,
such as devices connected via a private bus to only one CPU.
There is no policy or rule that can be declared the best scheduling solution for a system
with a single processor, and likewise there is no single best scheduling solution for a
system with multiple processors.
1. Symmetric Multiprocessing: It is used where each processor is self-scheduling. All processes may be in a common ready queue, or each processor may have its own private queue of ready processes. Scheduling proceeds by having the scheduler for each processor examine the ready queue and select a process to execute.
2. Asymmetric Multiprocessing: It is used when all the scheduling decisions and I/O processing are handled by a single processor called the master server. The other processors execute only user code. This is simple and reduces the need for data sharing; this entire scenario is called Asymmetric Multiprocessing.
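As a rough sketch of the difference, the following simulation (hypothetical names, not a real kernel API) contrasts symmetric self-scheduling from a shared ready queue with asymmetric scheduling, where a single master makes every dispatch decision:

```python
from collections import deque

def schedule_symmetric(ready_queue, processors):
    """Each processor self-schedules: whenever it is idle, it pops
    the next process from the shared ready queue itself."""
    log = []
    while ready_queue:
        for cpu in processors:
            if ready_queue:
                proc = ready_queue.popleft()
                log.append((cpu, proc))
    return log

def schedule_asymmetric(ready_queue, master, workers):
    """The master makes all scheduling decisions and hands processes
    to the workers, which only execute user code."""
    log = []
    turn = 0
    while ready_queue:
        proc = ready_queue.popleft()          # decision made on the master
        cpu = workers[turn % len(workers)]    # worker only runs the process
        turn += 1
        log.append((cpu, proc))
    return log

sym = schedule_symmetric(deque(["P1", "P2", "P3", "P4"]), ["CPU0", "CPU1"])
asym = schedule_asymmetric(deque(["P1", "P2", "P3", "P4"]), "CPU0", ["CPU1", "CPU2"])
```

In the symmetric run every CPU takes part in dispatching; in the asymmetric run the master ("CPU0") never executes user processes at all.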
Processor Affinity
Processor affinity means a process has an affinity for the processor on which it is currently running. When a process runs on a specific processor, there are certain effects on the cache memory: the data most recently accessed by the process populates the cache for that processor, so successive memory accesses by the process are often satisfied from the cache memory.
Now, suppose the process migrates to another processor. In that case, the contents of the cache memory must be invalidated for the first processor, and the cache for the second processor must be repopulated. Because of the high cost of invalidating and repopulating caches, most SMP (symmetric multiprocessing) systems try to avoid migrating processes from one processor to another and keep a process running on the same processor. This is known as processor affinity. There are two types of processor affinity:
1. Soft Affinity: When an operating system has a policy of keeping a process running on the same processor but does not guarantee that it will do so, this situation is called soft affinity.
2. Hard Affinity: Hard affinity allows a process to specify a subset of processors on which it may run. Linux systems implement soft affinity and provide system calls such as sched_setaffinity() that also support hard affinity.
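On Linux, Python's standard library wraps the sched_getaffinity(2) and sched_setaffinity(2) system calls directly, so hard affinity can be demonstrated as follows (Linux-only; a pid of 0 means "the calling process"):

```python
import os

# The set of CPU indices this process is currently allowed to run on.
allowed = os.sched_getaffinity(0)
print("current affinity:", sorted(allowed))

# Pin this process to a single CPU (hard affinity)...
one_cpu = {min(allowed)}
os.sched_setaffinity(0, one_cpu)
assert os.sched_getaffinity(0) == one_cpu

# ...and restore the original mask afterwards.
os.sched_setaffinity(0, allowed)
```

After the `sched_setaffinity` call the scheduler will never migrate the process off the chosen CPU, which is exactly the hard-affinity guarantee described above.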
Load Balancing
Load balancing keeps the workload evenly distributed across all processors in an SMP system. Load balancing is necessary only on systems where each processor has its own private queue of processes that are eligible to execute.
On systems with a common run queue, load balancing is unnecessary, because an idle processor immediately extracts a runnable process from the common queue. On SMP (symmetric multiprocessing) systems, it is important to keep the workload balanced among all processors to fully utilize the benefits of having more than one processor; otherwise, one or more processors may sit idle while other processors have high workloads with lists of processes awaiting the CPU. There are two general approaches to load balancing:
1. Push Migration: In push migration, a specific task routinely checks the load on each processor. If it finds an imbalance, it evenly distributes the load by moving processes from overloaded processors to idle or less busy processors.
2. Pull Migration: Pull migration occurs when an idle processor pulls a waiting task from a busy processor for its own execution.
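A push-migration balancer can be sketched as follows (illustrative names only): a periodic task moves work from the busiest run queue to the idlest one until the imbalance is small.

```python
def push_migrate(run_queues):
    """run_queues: dict mapping CPU id -> list of task names.
    Repeatedly move one task from the busiest CPU to the idlest CPU
    while doing so still reduces the imbalance."""
    while True:
        busiest = max(run_queues, key=lambda c: len(run_queues[c]))
        idlest = min(run_queues, key=lambda c: len(run_queues[c]))
        if len(run_queues[busiest]) - len(run_queues[idlest]) <= 1:
            return run_queues              # already balanced
        run_queues[idlest].append(run_queues[busiest].pop())

# One overloaded CPU, one idle CPU: after balancing, each holds two tasks.
queues = {"CPU0": ["T1", "T2", "T3", "T4"], "CPU1": []}
push_migrate(queues)
```

Pull migration is the mirror image: instead of a balancer pushing work out, the idle CPU itself would take a task from the busiest queue when its own queue runs dry.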
Multi-core Processors
In multi-core processors, multiple processor cores are placed on the same physical chip. Each core has a register set to maintain its architectural state and thus appears to the operating system as a separate physical processor. SMP systems that use multi-core processors are faster and consume less power than systems in which each processor has its own physical chip.
However, multi-core processors may complicate the scheduling problem. When a processor accesses memory, it spends a significant amount of time waiting for the data to become available. This situation is called a memory stall. It occurs for various reasons, such as a cache miss, i.e., accessing data that is not in the cache memory. In such cases, the processor can spend up to 50% of its time waiting for data to become available from memory. To solve this problem, recent hardware designs have implemented multithreaded processor cores in which two or more hardware threads are assigned to each core. Therefore, if one thread stalls while waiting for memory, the core can switch to another thread. There are two ways to multithread a processor:
1. Coarse-grained multithreading: a thread executes on a core until a long-latency event, such as a memory stall, occurs; the core then switches to another thread.
2. Fine-grained multithreading: the core switches between threads at a much finer level of granularity, typically at the boundary of an instruction cycle.
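The idea of switching hardware threads on a stall can be shown with a toy model (illustrative only): each string is an instruction stream, and "M" stands for an instruction that causes a memory stall.

```python
def run_core(threads, cycles):
    """threads: two instruction streams; 'M' marks a memory stall.
    Returns the sequence of (thread_id, instruction) the core executed."""
    trace = []
    current = 0
    streams = [list(t) for t in threads]
    for _ in range(cycles):
        if not streams[current]:           # current thread finished
            current = 1 - current
        if not streams[current]:           # both finished
            break
        instr = streams[current].pop(0)
        trace.append((current, instr))
        if instr == "M":                   # stall: switch hardware threads
            current = 1 - current
    return trace

# Thread 0 stalls after 'a', so the core runs thread 1 instead of idling.
trace = run_core(["aMb", "xy"], cycles=10)
```

Without the switch on "M", the core would simply sit idle during the stall; with it, thread 1's instructions fill those otherwise wasted cycles.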
Symmetric Multiprocessor
Symmetric Multiprocessing (SMP) is the third model. There is one copy of the OS in memory in this model, and any central processing unit can run it. When a system call is made, the central processing unit on which the system call was made traps to the kernel and processes that system call. This model balances processes and memory dynamically. In this approach each processor is self-scheduling.
Scheduling proceeds by having the scheduler for each processor examine the ready queue and select a process to execute. In this system, it is possible that all processes are in a common ready queue, or that each processor has its own private queue of ready processes. There are mainly three sources of contention that can be found in a multiprocessor operating system.
o Locking system: Because resources are shared in a multiprocessor system, there is a need to protect these resources for safe access among the multiple processors. The main purpose of a locking scheme is to serialize access to resources by the multiple processors.
o Shared data: When multiple processors access the same data at the same time, there may be a chance of data inconsistency, so to protect against this we have to use some protocols or locking schemes.
o Cache coherence: When several processors keep cached copies of the same shared data, an update made by one processor must be propagated to (or invalidated in) the other caches so that all processors see a consistent value.
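The locking scheme can be illustrated with ordinary threads standing in for processors (an illustrative sketch, not kernel code): a single lock serializes access to the shared ready queue so that no two "CPUs" ever dispatch the same process.

```python
import threading
from collections import deque

ready_queue = deque(f"P{i}" for i in range(100))
queue_lock = threading.Lock()
dispatched = []

def cpu_worker():
    """Repeatedly take the next process from the shared ready queue.
    The lock serializes access to the shared data."""
    while True:
        with queue_lock:
            if not ready_queue:
                return
            proc = ready_queue.popleft()
            dispatched.append(proc)

cpus = [threading.Thread(target=cpu_worker) for _ in range(4)]
for t in cpus:
    t.start()
for t in cpus:
    t.join()
# Every process is dispatched exactly once, regardless of thread timing.
```

Without the lock, two threads could pop and record the same process or corrupt the deque's bookkeeping, which is exactly the shared-data inconsistency the bullet points describe.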
Master-Slave Multiprocessor
In this multiprocessor model, there is a single data structure that keeps track of the ready processes. In this model, one central processing unit works as the master and the others as slaves. All the processors are handled by a single processor, which is called the master server.
The master server runs the operating system process, and the slave servers run the user processes. The memory and input-output devices are shared among all the processors, and all the processors are connected to a common bus. This system is simple and reduces data sharing, so this system is called asymmetric multiprocessing.
What is Starvation?
Starvation, or indefinite blocking, is a phenomenon associated with priority scheduling algorithms. A process that is in the ready state and has low priority keeps waiting for CPU allocation because processes with higher priority keep arriving over time. Higher-priority processes can thus prevent a low-priority process from ever getting the CPU.
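A common remedy for starvation is aging: gradually raising the priority of processes that have waited a long time. The sketch below (illustrative names, lower number = higher priority) bumps every waiting process's priority each scheduling round, so even a very low-priority process is eventually chosen.

```python
def schedule_with_aging(processes, rounds):
    """processes: dict mapping name -> priority (lower = higher priority).
    Each round, run the highest-priority process and age the rest."""
    order = []
    waiting = dict(processes)
    for _ in range(rounds):
        if not waiting:
            break
        chosen = min(waiting, key=lambda p: waiting[p])
        order.append(chosen)
        del waiting[chosen]
        for p in waiting:          # aging: every process left gets a boost
            waiting[p] -= 1
    return order

# P3 starts with the worst priority (9) but cannot be starved forever:
# its priority improves every round it spends waiting.
order = schedule_with_aging({"P1": 1, "P2": 2, "P3": 9}, rounds=3)
```

Without aging, a steady stream of new high-priority arrivals could keep P3 waiting indefinitely; with aging, its effective priority eventually surpasses theirs.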