
Unit II

Subject: Operating System


Subject Code : CAL 817

Department of Computer Application

Dr. Amit Sharma


Associate Professor
School of Computer Application

CSL-628 Operating System
4 credits (3-1-0)
Unit-1:

Introduction to the Operating System (OS); Types of OS: Batch Systems, Time-Sharing Systems, Real-Time
Systems, Multiprogramming Systems, Distributed Systems; Functions and Services of an OS.

Unit-2:

Process Management: Process Concept, Process State, Process Control Block, Process Scheduling; CPU
Scheduling: Scheduling Criteria, Scheduling Algorithms, Preemptive & Non-Preemptive Scheduling.

Unit-3:

Deadlocks-System model, Characterization, Deadlock Prevention, Deadlock Avoidance and Detection, Recovery
from deadlock.

Unit-4:

Memory Management: Logical Address, Physical Address, Contiguous Allocation, External and Internal
Fragmentation

Virtual Memory: Demand paging, page replacement, allocation of frames, thrashing.

Unit-5:

Information Management: File Concept, Access Methods, Directory Structure. Device Management: Disk
Structure, Disk Scheduling Algorithms.

Text books:

1. Silberschatz and Galvin, "Operating System Concepts", Addison Wesley.

Reference books:

1. A. S. Tanenbaum, "Modern Operating Systems", Prentice Hall.

Unit-2:

Process Management: Process Concept, Process State, Process Control Block, Process Scheduling; CPU
Scheduling: Scheduling Criteria, Scheduling Algorithms, Preemptive & Non-Preemptive Scheduling.

Introduction to Processes and Process Management


A process is a program in execution. In this module we shall explain how a process comes into existence
and how processes are managed.
A process in execution needs resources such as CPU time, memory and I/O devices. Current
machines allow several processes to share resources. In reality, one processor is shared amongst many
processes. In the first module we indicated that the human-computer interface provided by an OS
involves supporting many concurrent processes, such as a clock, icons and one or more windows. A system
like a file server may even support processes from multiple users. And yet the owner of every process gets an
illusion that the server (read: processor) is available to their process without any interruption. This
requires clever management and allocation of the processor as a resource. In this module we shall study
the basic mechanism for sharing the processor amongst processes.

What is a Process

As we know, a process is a program in execution. To understand the importance of this definition, let's
imagine that we have written a program called my_prog.c in C. On execution, this program may read in
some data and output some data. Note that when a program is written and a file is prepared, it is still a
script. It has no dynamics of its own, i.e., it cannot cause any input, processing or output to happen. Once
we compile it, and later when we run it, the intended operations take place. In other words,
a program is a text script with no dynamic behavior. When a program is in execution, the script is acted
upon. It can result in engaging a processor for some processing, and it can also engage in I/O operations.
It is for this reason that a process is differentiated from a program. While the program is a text script, a
program in execution is a process.

A process is basically a program in execution. The execution of a process must progress in a sequential
fashion.

A process is defined as an entity which represents the basic unit of work to be implemented in the system.
To put it in simple terms, we write our computer programs in a text file and when we execute this
program, it becomes a process which performs all the tasks mentioned in the program.

When a program is loaded into memory and becomes a process, it can be divided into four sections:
stack, heap, text and data. A simplified layout of a process inside main memory consists of the following
components –

Stack
The process stack contains temporary data such as function parameters, return addresses and local
variables.

Heap
This is memory that is dynamically allocated to the process during its run time.

Text
This is the compiled program code, loaded into memory when the program is launched.

Data
This section contains the global and static variables.

Process Life Cycle


When a process executes, it passes through different states. These stages may differ between operating
systems, and the names of these states are also not standardized.
In general, a process can be in one of the following five states at a time.
Start
This is the initial state, when a process is first created.

Ready
The process is waiting to be assigned to a processor. Ready processes are waiting for the operating
system to allocate the processor to them so that they can run. A process may come into this state after
the Start state, or after being interrupted by the scheduler while running so that the CPU can be assigned
to some other process.

Running
Once the process has been assigned to a processor by the OS scheduler, the process state is set to running
and the processor executes its instructions.

Waiting
The process moves into the waiting state if it needs to wait for a resource, such as waiting for user input
or waiting for a file to become available.

Terminated or Exit
Once the process finishes its execution, or is terminated by the operating system, it is moved to the
terminated state, where it waits to be removed from main memory.
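The five states above and the legal moves between them can be sketched as a small transition table. This is a teaching sketch, not any real kernel's state machine; the state names and the can_move helper below are illustrative.

```python
# A minimal sketch of the five-state process life cycle described above.
# State names follow the text; real operating systems use different names
# and often more states.
TRANSITIONS = {
    "start":      {"ready"},        # admitted into main memory by the OS
    "ready":      {"running"},      # picked by the CPU scheduler
    "running":    {"ready",         # preempted by the scheduler
                   "waiting",       # blocked on I/O or another resource
                   "terminated"},   # finished, or killed by the OS
    "waiting":    {"ready"},        # the awaited resource became available
    "terminated": set(),            # final state: awaiting removal from memory
}

def can_move(src: str, dst: str) -> bool:
    """Return True if a process may move directly from state src to dst."""
    return dst in TRANSITIONS.get(src, set())
```

Note that a waiting process cannot go straight back to running: when its I/O completes it re-enters the ready state and must be scheduled again.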

Process Control Block (PCB)

A Process Control Block is a data structure maintained by the operating system for every process. The
PCB is identified by an integer process ID (PID). A PCB keeps all the information needed to keep track
of a process, as listed below −
Process State
The current state of the process, i.e., whether it is ready, running, waiting, or something else.

Process Privileges
This is required to allow or disallow access to system resources.

Process ID
Unique identification for each process in the operating system.

Pointer
A pointer to the parent process.

Program Counter
The program counter is a pointer to the address of the next instruction to be executed for this process.

CPU Registers
The contents of the various CPU registers, which must be saved when the process leaves the running
state and restored when it runs again.

CPU Scheduling Information
Process priority and other scheduling information required to schedule the process.

Memory Management Information
This includes information such as the page table, memory limits and segment table, depending on the
memory system used by the operating system.

Accounting Information
The amount of CPU time used for process execution, time limits, execution ID, etc.

I/O Status Information
The list of I/O devices allocated to the process.

The architecture of a PCB is completely dependent on the operating system and may contain different
information in different operating systems.

The PCB is maintained for a process throughout its lifetime, and is deleted once the process terminates.
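The fields listed above translate naturally into a record type. The sketch below is purely illustrative; the field names and types are assumptions for teaching purposes, not any real operating system's PCB layout.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PCB:
    """Teaching sketch of a Process Control Block; fields mirror the list above."""
    pid: int                                          # unique process ID
    state: str = "start"                              # current process state
    parent_pid: Optional[int] = None                  # pointer to the parent process
    program_counter: int = 0                          # address of the next instruction
    registers: dict = field(default_factory=dict)     # saved CPU registers
    priority: int = 0                                 # CPU-scheduling information
    page_table: dict = field(default_factory=dict)    # memory-management information
    cpu_time_used: float = 0.0                        # accounting information
    open_devices: list = field(default_factory=list)  # I/O status information

pcb = PCB(pid=42, parent_pid=1)
pcb.state = "ready"   # the OS updates the PCB as the process changes state
```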

Process Scheduling

Definition

Process scheduling is the activity of the process manager that handles the removal of the running
process from the CPU and the selection of another process on the basis of a particular strategy.
Process scheduling is an essential part of multiprogramming operating systems. Such operating systems
allow more than one process to be loaded into executable memory at a time, and the loaded processes
share the CPU using time multiplexing.

Process Scheduling Queues


The OS maintains all PCBs in Process Scheduling Queues. The OS maintains a separate queue for each
of the process states and PCBs of all processes in the same execution state are placed in the same queue.
When the state of a process is changed, its PCB is unlinked from its current queue and moved to its new
state queue.
The Operating System maintains the following important process scheduling queues −

 Job queue − This queue keeps all the processes in the system.

 Ready queue − This queue keeps a set of all processes residing in main memory, ready and waiting
to execute. A new process is always put in this queue.
 Device queues − The processes which are blocked due to unavailability of an I/O device constitute
this queue.

The OS can use different policies to manage each queue (FIFO, Round Robin, Priority, etc.). The OS
scheduler determines how to move processes between the ready queue and the run queue; the run queue
can have only one entry per processor core on the system.
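The queue bookkeeping described above can be sketched in a few lines. The queue names and PIDs below are illustrative, and a real OS links whole PCBs rather than bare PIDs:

```python
from collections import deque

# One queue per process state, as described above.
queues = {"ready": deque(), "waiting": deque()}

def change_state(pid: int, old_state: str, new_state: str) -> None:
    """Unlink a PID from its current state queue and link it into the new one."""
    if old_state in queues:
        queues[old_state].remove(pid)
    if new_state in queues:
        queues[new_state].append(pid)

queues["ready"].extend([101, 102, 103])   # three processes ready to run
change_state(101, "ready", "waiting")     # process 101 blocks on an I/O request
# ready queue now holds 102 and 103; waiting queue holds 101
```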

Schedulers
Schedulers are special system software which handle process scheduling in various ways. Their main task
is to select the jobs to be submitted into the system and to decide which process to run. Schedulers are of
three types −
 Long-Term Scheduler
 Short-Term Scheduler
 Medium-Term Scheduler

Long Term Scheduler


It is also called a job scheduler. A long-term scheduler determines which programs are admitted to the
system for processing. It selects processes from the job queue and loads them into memory, where they
become available for CPU scheduling.
The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O bound and
processor bound. It also controls the degree of multiprogramming. If the degree of multiprogramming is

stable, then the average rate of process creation must be equal to the average departure rate of processes
leaving the system.
On some systems, the long-term scheduler may be absent or minimal. Time-sharing operating
systems, for example, often have no long-term scheduler. It is the long-term scheduler that moves a
process from the new state to the ready state.

Short Term Scheduler


It is also called the CPU scheduler. Its main objective is to increase system performance in accordance
with a chosen set of criteria. It carries out the change of a process from the ready state to the running
state: the CPU scheduler selects a process from among the processes that are ready to execute and
allocates the CPU to it.
The short-term scheduler, together with the dispatcher, makes the decision of which process to execute
next. Short-term schedulers are invoked much more frequently than long-term schedulers and must
therefore be fast.

Medium Term Scheduler


Medium-term scheduling is a part of swapping. It removes processes from memory and thereby reduces
the degree of multiprogramming. The medium-term scheduler is in charge of handling the swapped-out
processes.
A running process may become suspended if it makes an I/O request. A suspended process cannot
make any progress towards completion. In this situation, to remove the process from memory and make
space for other processes, the suspended process is moved to secondary storage. This is
called swapping, and the process is said to be swapped out or rolled out. Swapping may be necessary to
improve the process mix.

Comparison between Schedulers

Sr. No. | Long-Term Scheduler | Short-Term Scheduler | Medium-Term Scheduler
1 | It is a job scheduler. | It is the CPU scheduler. | It is a swapping scheduler.
2 | Its speed is less than that of the short-term scheduler. | Its speed is very fast. | Its speed is in between the other two.
3 | It controls the degree of multiprogramming. | It has less control over the degree of multiprogramming. | It reduces the degree of multiprogramming.
4 | It is absent or minimal in time-sharing systems. | It is minimal in time-sharing systems. | Time-sharing systems use a medium-term scheduler.
5 | It selects processes from the pool and loads them into memory for execution. | It selects from among the processes that are ready to execute. | It can reintroduce a process into memory so that its execution can be continued.
6 | Process state change: new to ready. | Process state change: ready to running. | -
7 | It selects a good mix of I/O-bound and CPU-bound processes. | It selects a new process for the CPU quite frequently. | -

Scheduling Algorithms

A process scheduler assigns different processes to the CPU based on particular
scheduling algorithms. There are six popular process scheduling algorithms, which we discuss
in this chapter −
 First-Come, First-Served (FCFS) Scheduling
 Shortest-Job-First (SJF) Scheduling
 Priority Scheduling
 Shortest Remaining Time
 Round Robin (RR) Scheduling
 Multiple-Level Queues Scheduling
These algorithms are either non-preemptive or preemptive. Non-preemptive algorithms are
designed so that once a process enters the running state, it keeps the CPU until it terminates or
blocks, whereas preemptive scheduling is priority-based: the scheduler may preempt a low-priority
running process at any time when a high-priority process enters the ready state.
First Come First Serve (FCFS)
 Jobs are executed on a first-come, first-served basis.
 It is a non-preemptive scheduling algorithm.
 It is easy to understand and implement.
 Its implementation is based on a FIFO queue.
 It is poor in performance, as the average wait time is high.

By far the simplest CPU-scheduling algorithm is the first-come, first-served (FCFS) scheduling
algorithm. With this scheme, the process that requests the CPU first is allocated the CPU first. The
implementation of the FCFS policy is easily managed with a FIFO queue. When a process enters the
ready queue, its PCB is linked onto the tail of the queue. When the CPU is free, it is allocated to the
process at the head of the queue. The running process is then removed from the queue. The code for
FCFS scheduling is simple to write and understand. On the negative side, the average waiting time
under the FCFS policy is often quite long. Consider the following set of processes, all arriving at time
0, with the length of the CPU burst given in milliseconds:

Process  Burst Time
P1       24
P2       3
P3       3

If the processes arrive in the order P1, P2, P3, and are served in FCFS order, we get the result shown
in the following Gantt chart, a bar chart that illustrates a particular schedule, including the
start and finish times of each of the participating processes:

| P1 (0-24) | P2 (24-27) | P3 (27-30) |

The waiting time is 0 milliseconds for process P1, 24 milliseconds for process P2, and 27
milliseconds for process P3. Thus, the average waiting time is (0 + 24 + 27)/3 = 17 milliseconds. If
the processes arrive in the order P2, P3, P1, however, the results will be as shown in the following
Gantt chart:

| P2 (0-3) | P3 (3-6) | P1 (6-30) |

The average waiting time is now (6 + 0 + 3)/3 = 3 milliseconds. This reduction is substantial. Thus,
the average waiting time under an FCFS policy is generally not minimal and may vary substantially
if the processes' CPU burst times vary greatly.
In addition, consider the performance of FCFS scheduling in a dynamic situation. Assume we have
one CPU-bound process and many I/O-bound processes. As the processes flow around the system,
the following scenario may result. The CPU-bound process will get and hold the CPU. During this
time, all the other processes will finish their I/O and will move into the ready queue, waiting for the
CPU. While the processes wait in the ready queue, the I/O devices are idle. Eventually, the CPU-bound
process finishes its CPU burst and moves to an I/O device. All the I/O-bound processes, which
have short CPU bursts, execute quickly and move back to the I/O queues. At this point, the CPU sits
idle. The CPU-bound process will then move back to the ready queue and be allocated the CPU.
Again, all the I/O processes end up waiting in the ready queue until the CPU-bound process is done.
There is a convoy effect as all the other processes wait for the one big process to get off the CPU.
This effect results in lower CPU and device utilization than might be possible if the shorter processes
were allowed to go first.
Note also that the FCFS scheduling algorithm is nonpreemptive. Once the CPU has been allocated to
a process, that process keeps the CPU until it releases the CPU, either by terminating or by requesting
I/O. The FCFS algorithm is thus particularly troublesome for time-sharing systems, where it is
important that each user get a share of the CPU at regular intervals. It would be disastrous to allow
one process to keep the CPU for an extended period.
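The FCFS waiting times above can be reproduced with a short calculation. With all processes arriving at time 0, each process simply waits for the sum of the bursts ahead of it in the queue:

```python
def fcfs_waiting_times(bursts):
    """Waiting time of each process under FCFS, all arriving at time 0,
    served in list order: each waits for the sum of the earlier bursts."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)
        elapsed += burst
    return waits

# Order P1, P2, P3 with bursts of 24, 3 and 3 ms:
print(fcfs_waiting_times([24, 3, 3]))   # [0, 24, 27] -> average 17 ms
# Order P2, P3, P1:
print(fcfs_waiting_times([3, 3, 24]))   # [0, 3, 6]   -> average 3 ms
```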

Shortest-Job-First Scheduling (SJF)

 This is also known as shortest-job-next (SJN) scheduling.
 It is a non-preemptive scheduling algorithm.
 It is the best approach to minimize waiting time.
 It is easy to implement in batch systems, where the required CPU time is known in advance.
 It is impossible to implement in interactive systems, where the required CPU time is not known.
 The processor should know in advance how much time the process will take.

A different approach to CPU scheduling is the shortest-job-first (SJF) scheduling algorithm. This
algorithm associates with each process the length of the process's next CPU burst. When the CPU is
available, it is assigned to the process that has the smallest next CPU burst. If the next CPU bursts of
two processes are the same, FCFS scheduling is used to break the tie. Note that a more appropriate
term for this scheduling method would be the shortest-next-CPU-burst algorithm, because scheduling
depends on the length of the next CPU burst of a process, rather than its total length. We use the term
SJF because most people and textbooks use this term to refer to this type of scheduling.

As an example of SJF scheduling, consider the following set of processes, with the length of the
CPU burst given in milliseconds:

Process  Burst Time
P1       6
P2       8
P3       7
P4       3

Using SJF scheduling, we would schedule these processes according to the following Gantt chart:

| P4 (0-3) | P1 (3-9) | P3 (9-16) | P2 (16-24) |

The waiting time is 3 milliseconds for process P1, 16 milliseconds for process P2, 9 milliseconds for
process P3, and 0 milliseconds for process P4. Thus, the average waiting time is (3 + 16 + 9 + 0)/4 = 7
milliseconds. By comparison, if we were using the FCFS scheduling scheme, the average waiting time
would be 10.25 milliseconds.
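The SJF figures can be checked with a small sketch, taking the bursts of 6, 8, 7 and 3 ms implied by the waiting times quoted above. For processes that all arrive at time 0, non-preemptive SJF simply serves them in order of increasing burst length; Python's sort is stable, so equal bursts keep their FCFS order, matching the tie-break rule described above.

```python
def sjf_waiting_times(bursts):
    """Non-preemptive SJF, all processes arriving at time 0.
    bursts maps process name -> CPU burst (ms); returns name -> waiting time."""
    waits, elapsed = {}, 0
    for name in sorted(bursts, key=bursts.get):   # shortest next burst first
        waits[name] = elapsed
        elapsed += bursts[name]
    return waits

waits = sjf_waiting_times({"P1": 6, "P2": 8, "P3": 7, "P4": 3})
print(waits)                              # {'P4': 0, 'P1': 3, 'P3': 9, 'P2': 16}
print(sum(waits.values()) / len(waits))   # 7.0 ms average
```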

Priority Based Scheduling

 Priority scheduling is a non-preemptive algorithm and one of the most common
scheduling algorithms in batch systems.
 Each process is assigned a priority. The process with the highest priority is executed first, and
so on.
 Processes with the same priority are executed on a first-come, first-served basis.
 Priority can be decided based on memory requirements, time requirements or any
other resource requirement.

As an example, consider the following set of processes, all arriving at time 0, with the CPU burst (in
milliseconds) and priority shown (a lower number means a higher priority):

Process  Burst Time  Priority
P1       10          3
P2       1           1
P3       2           4
P4       1           5
P5       5           2

Using priority scheduling, we would schedule these processes according to the following Gantt chart:

| P2 (0-1) | P5 (1-6) | P1 (6-16) | P3 (16-18) | P4 (18-19) |

The average waiting time is (6 + 0 + 16 + 18 + 1)/5 = 41/5 = 8.2 milliseconds.
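The 8.2 ms average can be reproduced with a short sketch. The process set used here (bursts of 10, 1, 2, 1 and 5 ms with priorities 3, 1, 4, 5 and 2, where a lower number means a higher priority) is assumed from the waiting times summed above:

```python
def priority_waiting_times(procs):
    """Non-preemptive priority scheduling, all processes arriving at time 0.
    procs maps name -> (burst_ms, priority); a lower number runs first."""
    waits, elapsed = {}, 0
    for name in sorted(procs, key=lambda n: procs[n][1]):
        waits[name] = elapsed
        elapsed += procs[name][0]
    return waits

procs = {"P1": (10, 3), "P2": (1, 1), "P3": (2, 4), "P4": (1, 5), "P5": (5, 2)}
waits = priority_waiting_times(procs)
print(waits)                              # {'P2': 0, 'P5': 1, 'P1': 6, 'P3': 16, 'P4': 18}
print(sum(waits.values()) / len(waits))   # 8.2 ms average
```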

Round Robin Scheduling

 Round Robin is a preemptive process scheduling algorithm.
 Each process is given a fixed time to execute, called a quantum.
 Once a process has executed for the given time period, it is preempted and another process
executes for its time period.
 Context switching is used to save the states of preempted processes.

Consider three processes P1, P2 and P3 with CPU bursts of 24, 3 and 3 milliseconds, all arriving at
time 0. If we use a time quantum of 4 milliseconds, then process P1 gets the first 4 milliseconds. Since it
requires another 20 milliseconds, it is preempted after the first time quantum, and the CPU is given to the
next process in the queue, process P2. Process P2 does not need 4 milliseconds, so it quits before its time
quantum expires. The CPU is then given to the next process, process P3. Once each process has received
1 time quantum, the CPU is returned to process P1 for an additional time quantum. The resulting RR
schedule is as follows:

| P1 (0-4) | P2 (4-7) | P3 (7-10) | P1 (10-14) | P1 (14-18) | P1 (18-22) | P1 (22-26) | P1 (26-30) |

Let's calculate the average waiting time for this schedule. P1 waits for 6 milliseconds (10 - 4), P2 waits
for 4 milliseconds, and P3 waits for 7 milliseconds. Thus, the average waiting time is 17/3 = 5.66
milliseconds.
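The RR schedule can be simulated directly. The burst set used below (24, 3 and 3 ms with a quantum of 4) is the one implied by the waiting times in the text:

```python
from collections import deque

def rr_waiting_times(bursts, quantum):
    """Round Robin, all processes arriving at time 0.
    Waiting time = completion time - burst time (since arrival is 0)."""
    remaining = dict(bursts)
    queue = deque(bursts)           # ready queue in arrival order
    time, finish = 0, {}
    while queue:
        name = queue.popleft()
        run = min(quantum, remaining[name])
        time += run
        remaining[name] -= run
        if remaining[name] > 0:
            queue.append(name)      # quantum expired: back of the queue
        else:
            finish[name] = time     # process completed
    return {n: finish[n] - bursts[n] for n in bursts}

waits = rr_waiting_times({"P1": 24, "P2": 3, "P3": 3}, quantum=4)
print(waits)                              # {'P1': 6, 'P2': 4, 'P3': 7}
print(sum(waits.values()) / len(waits))   # 5.666... ms average
```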

