
UNIT: 02

PROCESS AND CPU SCHEDULING


OPERATING SYSTEM SEM-IV DBATU, LONERE
COMPUTER SCIENCE AND ENGG

Dr. Mithun B PATIL


M.Tech, Ph.D (CSE)

Contents
►Process concepts
►Process scheduling
►Operation on process
►Cooperating process
►Threads
►Interprocess Communication.
►Scheduling Criteria
►Scheduling Algorithm
►Multi-processor scheduling
►Real time Scheduling
►Scheduling Algorithms & Performance Evaluation
Process concepts

► Process: A program under execution is called a process.

► Ex: sending output to a printer is a process.

► A process requires certain resources, such as CPU time, memory, files, and I/O devices, which are
allocated by the Operating System.

► The OS allocates resources:

►When the process is created

►While the process is executing

► The OS reclaims all resources when the process terminates/completes.

► A program becomes a process when it is loaded into memory.


Program vs. Process
►The Operating System is responsible for:
►Creating/deleting processes

►Suspending or resuming processes

►Mechanisms for process synchronization

►Mechanisms for process communication

►Mechanisms for handling deadlock

►Earlier, a single process executed with all system resources;
now multiple processes execute, sharing system resources.
Process in memory
►Stack section: temporary data such as function parameters,
return addresses, and local variables.

►Heap: memory that is dynamically allocated during process run time.

►Data section: global variables.

►Text section: program code, and the current activity as represented by the value of the
program counter and the contents of the processor's registers.
Process State
►A process changes state as it executes.

[State diagram]
new --(admitted)--> ready --(scheduler dispatch)--> running --(exit)--> terminated
running --(interrupt)--> ready
running --(I/O or event wait)--> waiting --(I/O or event completion)--> ready
Process State

►A process, in its lifetime, undergoes several states:

►New: The process is being created

►Running: Instructions are being executed (CPU is allocated)

►Waiting: The process is waiting for some event, such as I/O completion or the reception of a signal

►Ready: The process has all required resources except the CPU

►Terminated: The process has finished its execution


Process Control Block (PCB)

Process state

Process ID

Program counter

Scheduling information

Registers

Memory limits

List of open files


Process Control Block

►The OS represents each process by a Process Control Block (PCB)


►This data structure contains the information listed below:
1. Process State: The state may be New, Ready, Running, Waiting, or Terminated
2. Program Counter: Indicates the address of the next instruction to be executed
3. CPU Registers: e.g. accumulators, index registers, stack pointers, general
purpose registers, and any condition-code information. On an interrupt or process
switch, this info along with the PC is stored and forms the process' context.
4. CPU Scheduling Information: The process' priority, pointers to scheduling
queues, & any other scheduling parameters (more next).
5. Memory Management Information: Value of base & limit registers, Page
Table etc. (more in unit 5)
6. Accounting Information: Information like amount of CPU & real time
used, time limits, account numbers, process numbers etc.
7. I/O Status Information: Information includes the list of I/O devices allocated
to the process, list of open files etc.
►The contents of the PCB are collectively called the context of the process.
►Saving the context of the currently running process and loading that of the
next process is called context switching.
Operation on Process
► There are two major operations on processes:
► Process creation
► Process termination
► Process creation:
► Processes are created/deleted dynamically
► fork() system call
► Parent-child relationship
► Forming a tree of processes
► PID
► ps command
► A process requires certain resources to accomplish its task
► Obtained directly from the OS, or shared with the parent
► Input data may be passed from parent to child
►When a process creates a new process, two possibilities exist in
terms of execution:
1) The parent continues to execute concurrently with its children.

2) The parent waits until some or all of its children have terminated.

►Address space of the new process:


1. The child is a duplicate of the parent process.

2. The child has a new program loaded into it.


int pid;
pid = fork();   /* returns 0 in the child, the child's PID in the parent */
exec();         /* exec() family: replace the process image with a new program */
2. Process termination
► Final statement: exit()
► Returns an int status to the parent via wait()
► Resources are deallocated
► Termination may also be invoked by the parent
►A parent may terminate the execution of one of its children for a variety of reasons:
1. The child has exceeded its usage of some resource beyond what it was allocated.
2. The task assigned to the child is no longer needed.
3. The parent is exiting.
NOTE: If the parent terminates, all its children are terminated; this is called cascading
termination.
Process scheduling

►The objective of a multiprogramming operating system is to obtain maximum CPU


utilization; processes run concurrently.

►The process scheduler selects an available process for execution.

Scheduling Queues:

As processes enter the system, they are put into a Job Queue, which
consists of all processes in the system.
► Ready Queue/List:
► Processes in main memory & are ready to execute

► Usually a linked list of PCBs

► Ready queue header contains pointers to the first and final PCB in the queue

► Device Queue:
► Separate queue for each device

► Processes waiting for the device are kept in the list


Schedulers
►A process migrates among various queues in its lifetime
►For scheduling purpose, OS selects processes from these
queues
►Selection process is carried out by appropriate schedulers
►Three types of schedulers for three types of situations
1. Short Term Scheduler
2. Medium Term Scheduler
3. Long Term Scheduler
Short Term Scheduler
►This scheduler selects a process from the Ready List for CPU allocation for a
very small time period (called a CPU burst) of around 100 ms

►Processes from the Ready list are selected in Round Robin fashion

►Since the CPU burst is very small:


► The algorithm itself should not take much time

► The frequency of scheduler invocation is high (every ~100 ms)

►Due to these two requirements, the scheduler is called the short-term


scheduler
Medium Term Scheduler
► Consider a situation:
► All RAM is allocated to existing processes, a few of which have been in the waiting state for a
considerable period

► A new process that needs RAM is to be created, OR an existing process in the Ready or Running state
needs RAM

► The OS selects process(es) to temporarily move to secondary memory so as to free RAM

► Such processes are called swapped-out processes

► When the event for which a swapped-out process was waiting occurs, the process is
swapped in

► The Medium Term Scheduler does this job


Long Term Scheduler

►Used for processes submitted in batch

►Batch processes are spooled onto secondary storage for later execution (NOT


immediate execution)

►Batch processes are selected based on the availability of required resources like


CPU, RAM, and I/O

►The Long Term Scheduler does this job of selection


► Usually batch processes are classified into one of three categories:

► CPU Bound: more time is used for computation than for generating I/O requests, e.g. scientific programs doing
lots of computation

► I/O Bound: spends more time doing I/O than computation, e.g. a messaging system

► Memory Bound: data-intensive processes, e.g. database applications requiring RAM in GB/TB

► The Long Term Scheduler carefully selects a good process mix so that all three resources are used properly
Cooperating Process
►Processes executed by the OS fall into two categories:
1. Independent Processes:
1. Cannot affect or be affected by other processes.
2. Do not share any data with other processes.

2. Cooperating Processes:
1. Can affect or be affected by other processes.
2. Share data with other processes.
Reasons for Cooperating Processes:

1. Information Sharing: e.g. shared files (app like Railway reservation, WhatsApp,
Facebook).
2. Computation Speedup: A task is broken into a number of subtasks that run in parallel
(the system needs multiple processing units like CPUs and I/O channels).
3. Modularity: Dividing system functions into separate processes to run in parallel
(e.g. Schedulers, File I/O, Device & bus drivers).
4. Convenience: An individual user may wish to work on many tasks simultaneously.
Inter-process Communication
► IPC provides a communication mechanism for cooperating processes
► Two standard mechanisms available
1. Shared Memory
2. Message Passing
Shared-Memory Systems
►Requires communicating processes to establish a region of shared memory (typically in the address
space of one process).

► Other processes sharing this region attach it to their own address space.

►Communicating processes exchange information by reading and writing in this shared memory.

►The form of the data, its location, format etc. are decided by the communicating processes.

►The shared memory is not under the control of the OS.

►The communicating processes are responsible for concurrency control.

►Example: Producer-consumer problem


Message-Passing Systems
► OS provides this mechanism
► No shared memory; every process uses its own address space. However, a communication link is necessary between
the processes
► Useful in distributed environment
► Uses Send-Receive mechanism
► Messages can be of fixed size (e.g. a packet) or of variable size (e.g. eMail, WhatsApp messages)
► Several methods available for logical implementations
1. Direct or indirect communication
2. Synchronous or Asynchronous communication
3. Automatic or explicit buffering
► Direct Communication:: Symmetric
►Each sender process explicitly names the recipient
►send (P, message) : Send a message to process P
►receive (Q, message) : Receive message from process Q
►Communication link properties:
►Link is established automatically
►Link is associated with exactly two processes
►Between each pair of processes, there exists exactly one link
►Direct Communication:: Asymmetric
►Only the sender process explicitly names the recipient; the receiver need not
name the sender
►send (P, message) : Send a message to process P
►receive (id, message) : Receive message from any process (variable
id)
►Disadvantages of Direct Communication:
►Limited modularity of the resulting process definitions
►Changing the name of a process requires changing all other process
definitions
►Indirect Communication
►Messages are sent to or received from mailboxes, or ports
►Mailbox abstract view: An object where messages can be placed or removed
►Two processes can communicate only if they have a shared mailbox
►send (A, message) – Send message to mailbox A
►receive (A, message) – Receive a message from mailbox A
►Consider processes p1, p2, & p3 are sharing a mailbox
►p1 sends a message to the mailbox and both p2 & p3 execute receive()
►Which one of p2 or p3 will receive the message? Three possibilities:
►Allow a mailbox to be associated with at most two processes
►Allow at most one process to execute receive() at a time
►Choose the recipient process randomly
►Mailbox may be owned either by a process or OS
►Process-Owned Mailbox:
► The mailbox is part of the process' address space
► Processes are distinguished as the owner process (can only receive messages) and user processes (can only send messages)
► The mailbox disappears when the owner process terminates

►OS Owned Mailbox:


► Mailbox is independent and not attached to any process
► OS provides following mechanism
1. Create a Mailbox
2. Send to or receive messages from Mailbox
3. Delete a Mailbox
Synchronization

►Message passing may be Blocking (Synchronous) or Non-Blocking (Asynchronous):
1. Blocking send: The sender is blocked until the message is received by the
recipient or mailbox

2. Non-blocking send: The sender sends the message and resumes its execution

3. Blocking receive: The recipient is blocked until a message is available

4. Non-blocking receive: The receiver retrieves either a message or null


Buffering

►Whether communication is direct or indirect, messages exchanged by processes reside in a

temporary queue

►Three ways of queue implementation:

1. Zero Capacity: The queue cannot hold any messages. If the receiver is not ready, the sender is blocked.

2. Bounded Capacity: The queue has limited capacity to store messages. The sender may be blocked if the queue is full;

the receiver may be blocked if the queue is empty.

3. Unbounded Capacity: The queue has infinite capacity, so the sender is never blocked.
Threads

► Thread: the smallest sequence of programmed instructions that can be
scheduled independently by a scheduler.
► Thread implementation differs between operating systems, but in most
cases a thread is a component of a process.
► Multiple threads can exist within one process, executing concurrently
and sharing resources such as memory.
► In particular, the threads of a process share its executable code and
the values of its variables at any given time.
[Figure: A process with two threads]
PROCESS SCHEDULING
CPU Scheduler
► The short-term scheduler selects a process from the ready queue and allocates the CPU to it
• The queue may be ordered in various ways

► CPU scheduling decisions may take place when a process:


1. Switches from running to waiting state

2. Switches from running to ready state

3. Switches from waiting to ready

4. Terminates

► Scheduling under 1 and 4 is non-preemptive


► All other scheduling is preemptive

• Consider access to shared data


• Consider preemption while in kernel mode
• Consider interrupts occurring during crucial OS activities
Dispatcher

►Dispatcher module gives control of the CPU to the process selected by


the short-term scheduler; this involves:
►switching context (Kernel mode)
►switching to user mode
►jumping to the proper location in the user program to restart that program

►Dispatch latency – time it takes for the dispatcher to stop one process
and start another running
Scheduling Criteria

► CPU utilization – keep the CPU as busy as possible


► Throughput – Number of processes that complete their execution per time unit
► Turnaround time – amount of time to execute a particular process
► Waiting time – amount of time a process has been waiting in the ready queue
► Response time – amount of time it takes from when a request was submitted until
the first response is produced
► Arrival time – the time at which the process arrives at the ready queue
Scheduling Algorithm Optimization Criteria

1. Max CPU utilization


2. Max throughput
3. Min turnaround time
4. Min waiting time
5. Min response time
Scheduling Algorithms

►Deals with selecting a process from Ready Queue (list) for CPU
allocation
►Many different CPU-Scheduling Algorithms
►First Come, First Served (FCFS)
►Shortest Job First (SJF)
►Priority
►Round Robin (RR)
First- Come, First-Served (FCFS) Scheduling

Process   Burst Time
P1        24
P2        3
P3        3

► Suppose that the processes arrive at time 0 in the order: P1, P2, P3

► The Gantt chart for the schedule is:

| P1                     | P2 | P3 |
0                        24   27   30

► Waiting time for P1 = 0; P2 = 24; P3 = 27

► Average waiting time: (0 + 24 + 27)/3 = 17
FCFS Scheduling (Cont.)

► Suppose that the processes arrive in the order: P2, P3, P1

►The Gantt chart for the schedule is:

| P2 | P3 | P1                     |
0    3    6                        30

► Waiting time for P1 = 6; P2 = 0; P3 = 3

► Average waiting time: (6 + 0 + 3)/3 = 3
► Much better than the previous case
► Convoy effect – short processes wait behind a long process
Process with different arrival time

► Arrival time: the time at which the process arrives at the ready queue

► Completion time: the time at which the process completes execution
► Turnaround time = completion time − arrival time
► Waiting time = turnaround time − burst time

[Figure: a process enters the ready queue (main memory) at its arrival time, executes on the CPU, and finishes at its completion time]
Example
Process  Burst time  Arrival time  Completion   T.A.T     Wait time
         (BT)        (AT)          time (CT)    (CT-AT)   (T.A.T-BT)
P1       8           0             8            8         0
P2       1           2             9            7         6
P3       3           2             12           10        7
P4       6           3             18           15        9
AVG                                             10        5.5

| P1 | P2 | P3 | P4 |
0    8    9    12   18
First Come, First Served (FCFS)

►Advantages:
► Simple to implement

► Minimum overhead

► Disadvantages:
► Unpredictable turnaround time

► Average wait time is high


2. Shortest-Job-First (SJF) Scheduling

► Associate with each process the length of its next CPU burst
► Use these lengths to schedule the process with the shortest
time
► SJF is optimal – gives minimum average waiting time for a
given set of processes
► Difficulty: how to know the length of the next CPU request?
Example of SJF
Process  Burst Time  Arrival Time
P1       6           0
P2       8           0
P3       7           0
P4       3           0
► SJF scheduling chart:

| P4 | P1 | P3 | P2 |
0    3    9    16   23

► Average waiting time =


(3 + 16 + 9 + 0) / 4 = 7
Determining Length of Next CPU Burst

► Ask the user – then pick the process with the shortest predicted next
CPU burst
► Automatic:
► Can be done by using the lengths of previous CPU bursts

►The preemptive version is called Shortest-Remaining-Time-First


(SRTF)
Example of Shortest-remaining-time-first
► Now we add the concept of varying arrival times and preemption to the analysis

Process Arrival Time Burst Time

P1 0 8

P2 1 6

P3 2 4

P4 3 2

► Preemptive SJF Gantt Chart


P1 P2 P3 P4 P3 P2 P1
0 1 2 3 5 8 13 20

► Average waiting time = [12 + 6 + 2 + 0] / 4 = 20/4 = 5
