
Chapter 2: Process Management

Process Concept

 An operating system executes a variety of programs:

 Batch system – jobs

 Time-shared systems – user programs or tasks

 Process – a program in execution; process execution must progress in a sequential fashion

 A process will need certain resources – such as CPU time, memory, files, and I/O devices – to accomplish its task.

 These resources are allocated to the process either when it is created or while it is executing.
Process Concept

 Although traditionally a process contained only a single thread of control as it ran, most modern operating systems now support processes that have multiple threads.

 The operating system is responsible for the following activities in connection with process and thread management:

 creation and deletion of both user and system processes;

 scheduling of processes;

 synchronization and communication of processes;

 deadlock handling for processes.


Process Concept

 A process includes:

 program counter (current activity)

 stack, which contains temporary data (such as function parameters, return addresses, and local variables)

 data section, which contains global variables

 heap, which is memory that is dynamically allocated during process run time
Process State

 As a process executes, it changes state

 new: The process is being created

 running: Instructions are being executed

 waiting: The process is waiting for some event to occur

 ready: The process is waiting to be assigned to a processor

 terminated: The process has finished execution

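The transitions between these states can be sketched as a small table; a minimal Python sketch (state names are taken from the list above, the transition labels are the usual textbook ones):

```python
# Allowed transitions in the five-state process model above.
# State names follow the slide; transition labels are the usual textbook ones.
TRANSITIONS = {
    ("new", "ready"),           # admitted
    ("ready", "running"),       # scheduler dispatch
    ("running", "ready"),       # interrupt / time slice expired
    ("running", "waiting"),     # I/O or event wait
    ("waiting", "ready"),       # I/O or event completion
    ("running", "terminated"),  # exit
}

def can_transition(src, dst):
    """Return True if the five-state model allows src -> dst."""
    return (src, dst) in TRANSITIONS
```

Note that a waiting process cannot go straight back to running: it must first become ready and be dispatched again.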

Diagram of Process State
Process Control Block (PCB)

Each process is represented in the operating system by a process control block (PCB) – also called a task control block. Information associated with each process includes:

 Process state

 Program counter

 CPU registers

 CPU scheduling information

 Memory-management information

 Accounting information

 I/O status information


Process Control Block (PCB)

Diagram of a process control block (PCB).

Diagram showing CPU switch from process to process.
Process Scheduling Queues

 Job queue – set of all processes in the system

 Ready queue – set of all processes residing in main memory, ready and waiting to execute

 Device queues – set of processes waiting for an I/O device

 Processes migrate among the various queues


Schedulers

 Long-term scheduler (or job scheduler) – selects which processes should be brought into the ready queue

 Short-term scheduler (or CPU scheduler) – selects which process should be executed next and allocates the CPU


Schedulers (Cont.)

 Short-term scheduler is invoked very frequently (milliseconds), so it must be fast

 Long-term scheduler is invoked very infrequently (seconds, minutes), so it may be slow

 The long-term scheduler controls the degree of multiprogramming

 Processes can be described as either:

 I/O-bound process – spends more time doing I/O than computations; many short CPU bursts

 CPU-bound process – spends more time doing computations; few very long CPU bursts
Context Switch

 When the CPU switches to another process, the system must save the state of the old process and load the saved state for the new process

 The context is represented in the PCB of the process; it includes the value of the CPU registers, the process state, and memory-management information.
Interprocess Communication (IPC)

 Processes executing concurrently in the operating system may be either independent processes or cooperating processes.

 A process is independent if it cannot affect or be affected by the other processes executing in the system.

 A process is cooperating if it can affect or be affected by the other processes executing in the system.
Interprocess Communication (IPC)

 Cooperating processes require an interprocess communication (IPC) mechanism that will allow them to exchange data and information.

 There are two fundamental models of interprocess communication:

(1) shared memory

(2) message passing


Interprocess Communication (IPC)
 In the message-passing model, communication takes place by means of messages exchanged between the cooperating processes.

 In the shared-memory model, a region of memory that is shared by the cooperating processes is established. Processes can then exchange information by reading and writing data to the shared region.
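Both models are available in Python's multiprocessing module; the sketch below (the names producer, increment, and demo are invented for illustration) passes a message through a Pipe and updates a shared Value:

```python
from multiprocessing import Pipe, Process, Value

def producer(conn):
    # Message passing: the string is serialized and copied to the receiver.
    conn.send("ping")
    conn.close()

def increment(counter):
    # Shared memory: parent and child operate on the same memory region.
    with counter.get_lock():
        counter.value += 1

def demo():
    parent_end, child_end = Pipe()
    p = Process(target=producer, args=(child_end,))
    p.start()
    msg = parent_end.recv()       # blocks until the child's message arrives
    p.join()

    counter = Value("i", 0)       # one shared 32-bit integer, starts at 0
    c = Process(target=increment, args=(counter,))
    c.start()
    c.join()
    return msg, counter.value
```

The trade-off mirrors the slide: message passing copies data through the kernel on every exchange, while shared memory is set up once and then accessed at memory speed (but requires explicit synchronization, here the Value's lock).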
Threads

 A thread is a basic unit of CPU utilization; it comprises a thread ID, a program counter, a register set, and a stack.

 It shares with other threads belonging to the same process its code section, data section, and other operating-system resources, such as open files and signals.

 A traditional (or heavyweight) process has a single thread of control. If a process has multiple threads of control, it can perform more than one task at a time.
Single and Multithreaded Processes
CPU Scheduling

 In a single-processor system, only one process can run at a time; any others must wait until the CPU is free and can be rescheduled.

 The objective of multiprogramming is to have some process running at all times, to maximize CPU utilization.
CPU Scheduling

 CPU-I/O Burst Cycle

 Process execution begins with a CPU burst. That is followed by an I/O burst, which is followed by another CPU burst, then another I/O burst, and so on.


CPU Scheduler

 Selects from among the processes in memory that are ready to execute, and allocates the CPU to one of them

 CPU scheduling decisions may take place when a process:

1. Switches from running to waiting state (nonpreemptive)

2. Switches from running to ready state (preemptive)

3. Switches from waiting to ready (preemptive)

4. Terminates (nonpreemptive)
Dispatcher

 Dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves:

 switching context

 switching to user mode

 jumping to the proper location in the user program to restart that program

 Dispatch latency – the time it takes for the dispatcher to stop one process and start another running


Scheduling Criteria

 CPU utilization – keep the CPU as busy as possible

 Throughput – # of processes that complete their execution per time unit

 Turnaround time – amount of time to execute a particular process

 Waiting time – amount of time a process has been waiting in the ready queue

 Response time – amount of time it takes from when a request was submitted until the first response is produced, not output (for time-sharing environments)
Optimization Criteria

 Max CPU utilization

 Max throughput

 Min turnaround time

 Min waiting time

 Min response time


Scheduling Algorithms

 First Come First Served Scheduling (FCFS)

 Shortest Job First Scheduling (SJF)

 Priority Scheduling

 Round Robin Scheduling


First-Come, First-Served (FCFS) Scheduling

Process   Burst Time
P1        24
P2        3
P3        3

 Suppose that the processes arrive in the order: P1, P2, P3

The Gantt chart for the schedule is:

P1: 0–24 | P2: 24–27 | P3: 27–30

 Waiting time for P1 = 0; P2 = 24; P3 = 27

 Average waiting time: (0 + 24 + 27)/3 = 17


FCFS Scheduling (Cont.)

Suppose that the processes arrive in the order: P2, P3, P1

 The Gantt chart for the schedule is:

P2: 0–3 | P3: 3–6 | P1: 6–30

 Waiting time for P1 = 6; P2 = 0; P3 = 3

 Average waiting time: (6 + 0 + 3)/3 = 3

 Much better than the previous case

 Convoy effect: short processes stuck waiting behind a long process
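FCFS is simple enough to check by hand, but both cases above can also be verified with a short sketch (assuming all processes arrive at time 0, as in the example):

```python
def fcfs_waits(order, burst):
    """Waiting time of each process under FCFS for a given arrival order
    (all processes assumed to arrive at time 0)."""
    time, waits = 0, {}
    for p in order:
        waits[p] = time     # waits until every earlier arrival finishes
        time += burst[p]
    return waits

burst = {"P1": 24, "P2": 3, "P3": 3}
w_a = fcfs_waits(["P1", "P2", "P3"], burst)   # waits 0, 24, 27 -> average 17
w_b = fcfs_waits(["P2", "P3", "P1"], burst)   # waits 6, 0, 3  -> average 3
```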


Shortest-Job-First (SJF) Scheduling

 Associate with each process the length of its next CPU burst. Use these lengths to schedule the process with the shortest time

 Two schemes:

 nonpreemptive – once the CPU is given to the process, it cannot be preempted until it completes its CPU burst

 preemptive – if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt. This scheme is known as Shortest-Remaining-Time-First (SRTF)

 SJF is optimal – it gives the minimum average waiting time for a given set of processes
Example of Non-Preemptive SJF

Process   Burst Time
P1        6
P2        8
P3        7
P4        3

 SJF (non-preemptive)

P4: 0–3 | P1: 3–9 | P3: 9–16 | P2: 16–24

 Average waiting time = (3 + 16 + 9 + 0)/4 = 7


Example of Non-Preemptive SJF

Process   Arrival Time   Burst Time
P1        0.0            7
P2        2.0            4
P3        4.0            1
P4        5.0            4

 SJF (non-preemptive)

P1: 0–7 | P3: 7–8 | P2: 8–12 | P4: 12–16

 Average waiting time = (0 + 6 + 3 + 7)/4 = 4
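The non-preemptive rule (whenever the CPU becomes free, pick the arrived process with the shortest burst) can be sketched as follows; running it on the table above reproduces the waiting times:

```python
def sjf_nonpreemptive(procs):
    """procs: {name: (arrival, burst)}. Returns each process's waiting time.
    At each completion, pick the arrived process with the shortest burst."""
    time, done, waits = 0, set(), {}
    while len(done) < len(procs):
        ready = [p for p, (a, b) in procs.items()
                 if p not in done and a <= time]
        if not ready:                       # CPU idle until the next arrival
            time = min(a for p, (a, b) in procs.items() if p not in done)
            continue
        p = min(ready, key=lambda q: procs[q][1])   # shortest burst first
        waits[p] = time - procs[p][0]
        time += procs[p][1]
        done.add(p)
    return waits

waits = sjf_nonpreemptive({"P1": (0.0, 7), "P2": (2.0, 4),
                           "P3": (4.0, 1), "P4": (5.0, 4)})
```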


Example of Preemptive SJF

Process   Arrival Time   Burst Time
P1        0.0            7
P2        2.0            4
P3        4.0            1
P4        5.0            4

 SJF (preemptive)

P1: 0–2 | P2: 2–4 | P3: 4–5 | P2: 5–7 | P4: 7–11 | P1: 11–16

 Average waiting time = ((11 − 2) + (5 − 4) + (4 − 4) + (7 − 5))/4 = 3
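The preemptive (SRTF) variant can be simulated one time unit at a time: at each tick, run the arrived process with the least remaining work. A sketch, using waiting time = finish − arrival − burst:

```python
def srtf_waits(procs):
    """procs: {name: (arrival, burst)}. Simulate SRTF one time unit at a
    time; waiting time = finish - arrival - burst."""
    remaining = {p: b for p, (a, b) in procs.items()}
    finish, time = {}, 0
    while remaining:
        ready = [p for p in remaining if procs[p][0] <= time]
        if not ready:                  # no arrived work: idle one tick
            time += 1
            continue
        p = min(ready, key=lambda q: remaining[q])   # shortest remaining time
        remaining[p] -= 1
        time += 1
        if remaining[p] == 0:
            finish[p] = time
            del remaining[p]
    return {p: finish[p] - a - b for p, (a, b) in procs.items()}

waits = srtf_waits({"P1": (0, 7), "P2": (2, 4), "P3": (4, 1), "P4": (5, 4)})
```

This reproduces the slide's chart: P1 is preempted at time 2 by P2 (4 < 5 remaining), P2 at time 4 by P3 (1 < 2 remaining), and the average waiting time comes out to 3.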


Priority Scheduling

 A priority number (integer) is associated with each process

 The CPU is allocated to the process with the highest priority (smallest integer = highest priority)

 Priority scheduling can be either preemptive or nonpreemptive

 SJF is priority scheduling where the priority is the predicted next CPU burst time

 Problem: starvation – low-priority processes may never execute

 Solution: aging – as time progresses, increase the priority of the process


Priority Scheduling
 As an example, consider the following set of processes, assumed to have arrived at time 0, in the order P1, P2, …, P5, with the length of the CPU burst given in milliseconds:

Process   Burst Time   Priority
P1        10           3
P2        1            1
P3        2            4
P4        1            5
P5        5            2

The processes run in priority order P2, P5, P1, P3, P4, giving waiting times of 0, 1, 6, 16, and 18 ms. The average waiting time is 8.2 milliseconds.
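Since all processes arrive at time 0, nonpreemptive priority scheduling is just FCFS over the priority-sorted order; a sketch that reproduces the 8.2 ms figure:

```python
def priority_nonpreemptive(procs):
    """procs: {name: (burst, priority)}; all processes arrive at time 0.
    Smallest priority number runs first; returns waiting times."""
    time, waits = 0, {}
    for p in sorted(procs, key=lambda q: procs[q][1]):
        waits[p] = time
        time += procs[p][0]
    return waits

waits = priority_nonpreemptive({"P1": (10, 3), "P2": (1, 1), "P3": (2, 4),
                                "P4": (1, 5), "P5": (5, 2)})
# waits: P2=0, P5=1, P1=6, P3=16, P4=18 -> average 8.2
```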


Round Robin (RR)

 Each process gets a small unit of CPU time (time quantum), usually 10–100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue.

Process   Burst Time
P1        53
P2        17
P3        68
P4        24

 The Gantt chart (with a time quantum of 20) is:

P1: 0–20 | P2: 20–37 | P3: 37–57 | P4: 57–77 | P1: 77–97 | P3: 97–117 | P4: 117–121 | P1: 121–134 | P3: 134–154 | P3: 154–162

 Typically, RR gives higher average turnaround than SJF, but better response
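Round robin is naturally modeled with a FIFO ready queue; the sketch below (all processes assumed to arrive at time 0) reproduces the Gantt chart above with a quantum of 20:

```python
from collections import deque

def round_robin(bursts, quantum):
    """bursts: ordered {name: burst}, all arriving at time 0.
    Returns (gantt, waits); gantt entries are (name, start, end)."""
    remaining = dict(bursts)
    queue = deque(bursts)
    time, gantt, waits = 0, [], {}
    while queue:
        p = queue.popleft()
        run = min(quantum, remaining[p])
        gantt.append((p, time, time + run))
        time += run
        remaining[p] -= run
        if remaining[p] > 0:
            queue.append(p)               # back to the end of the ready queue
        else:
            waits[p] = time - bursts[p]   # finish - burst (arrival = 0)
    return gantt, waits

gantt, waits = round_robin({"P1": 53, "P2": 17, "P3": 68, "P4": 24}, 20)
```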


The Deadlock Problem

 A set of blocked processes, each holding a resource and waiting to acquire a resource held by another process in the set.

 Example:

 The system has 2 disk drives.

 P1 and P2 each hold one disk drive and each needs the other one.

 Example:

 semaphores A and B, initialized to 1

P0: wait(A); wait(B);
P1: wait(B); wait(A);
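The two-semaphore pattern can deadlock because each process holds one lock while waiting for the other. A common fix, sketched here with Python locks (the task name and id-based ordering are illustrative assumptions, not from the slide), is to impose a single global acquisition order so that a circular wait cannot form:

```python
import threading

A = threading.Lock()
B = threading.Lock()

def task(first, second, log, name):
    # Fix for the deadlock above: acquire locks in one global order
    # (here, sorted by id), so no circular wait can form.
    ordered = sorted((first, second), key=id)
    for lock in ordered:
        lock.acquire()
    try:
        log.append(name)           # critical section using both resources
    finally:
        for lock in reversed(ordered):
            lock.release()

log = []
t0 = threading.Thread(target=task, args=(A, B, log, "P0"))  # like wait(A); wait(B)
t1 = threading.Thread(target=task, args=(B, A, log, "P1"))  # like wait(B); wait(A)
t0.start(); t1.start()
t0.join(); t1.join()
# Both threads finish; with the naive acquisition order they could block forever.
```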
Bridge Crossing Example

 Traffic only in one direction.

 Each section of a bridge can be viewed as a resource.

 If a deadlock occurs, it can be resolved if one car backs up (preempt resources and roll back).

 Several cars may have to be backed up if a deadlock occurs.

 Starvation is possible.
System Model

 Resource types R1, R2, …, Rm

e.g. CPU cycles, memory space, I/O devices

 Each resource type Ri has Wi instances.

 Each process utilizes a resource as follows:

 request

 use

 release
Deadlock Characterization

Deadlock can arise if four conditions hold simultaneously.


 Mutual exclusion: only one process at a time can use a resource.

 Hold and wait: a process holding at least one resource is waiting to acquire
additional resources held by other processes.

 No preemption: a resource can be released only voluntarily by the process


holding it, after that process has completed its task.

 Circular wait: there exists a set {P0, P1, …, Pn} of waiting processes such that P0 is waiting for a resource that is held by P1, P1 is waiting for a resource that is held by P2, …, Pn–1 is waiting for a resource that is held by Pn, and Pn is waiting for a resource that is held by P0.
Resource-Allocation Graph

A set of vertices V and a set of edges E.

 V is partitioned into two types:

 P = {P1, P2, …, Pn}, the set consisting of all the processes in the system.

 R = {R1, R2, …, Rm}, the set consisting of all resource types in the system.

 request edge – directed edge Pi → Rj

 assignment edge – directed edge Rj → Pi


Resource-Allocation Graph (Cont.)

 Process: vertex Pi

 Resource type Rj, shown with 4 instances

 Pi requests an instance of Rj: request edge Pi → Rj

 Pi is holding an instance of Rj: assignment edge Rj → Pi
Example of a Resource Allocation Graph

 If the graph contains no cycles, then no deadlock.

 If the graph contains a cycle:

 if there is only one instance per resource type, then deadlock.

 if there are several instances per resource type, then there is the possibility of deadlock.
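The cycle test above can be sketched as a depth-first search over the graph's directed edges (a generic illustration; the adjacency-dictionary format is an assumption):

```python
def has_cycle(edges):
    """edges: {vertex: [neighbors]} mixing request edges (P -> R)
    and assignment edges (R -> P). True iff the graph has a cycle."""
    WHITE, GRAY, BLACK = 0, 1, 2       # unvisited / on DFS stack / finished
    color = {}

    def visit(v):
        color[v] = GRAY
        for w in edges.get(v, ()):
            c = color.get(w, WHITE)
            if c == GRAY:              # back edge: a cycle exists
                return True
            if c == WHITE and visit(w):
                return True
        color[v] = BLACK
        return False

    return any(color.get(v, WHITE) == WHITE and visit(v) for v in edges)

# One instance per resource type, cycle present => deadlock:
deadlocked = has_cycle({"P1": ["R1"], "R1": ["P2"], "P2": ["R2"], "R2": ["P1"]})
# No cycle => no deadlock:
safe = not has_cycle({"P1": ["R1"], "R1": ["P2"], "P2": []})
```

With several instances per resource type, a cycle found this way only signals possible deadlock; a full check would need the banker's-style accounting of instance counts.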
Graph With A Cycle But No Deadlock

Diagrams: a resource-allocation graph with a cycle but no deadlock, and one with a deadlock.
