
Difference between a process and a program

 A program is a passive entity, such as the contents of a file stored on disk.
 The term process refers to program code that has been loaded
into a computer's memory so that it can be executed by the
central processing unit (CPU).
 In a multiprogramming system, there will be a number of
processes in memory waiting to be executed on the CPU.
 The process management function of the operating system must
ensure that each process gets a fair share of the CPU's time.
 The operating system must then determine when the CPU can be
made available to the process, and for how long. Once a process
controls the CPU, its instructions will be executed until its
allotted time expires, or until it terminates, or until it requests an
input or output operation.
 Process execution is a series of bursts:
CPU burst, I/O burst, CPU burst, I/O burst, etc.
For example:
Start                ------ CPU cycle
Read A,B             ------ I/O cycle
C=A+B
D=(A*B)-C            ------ CPU cycle
E=A-B
F=D/E
Write A,B,C,D,E,F    ------ I/O cycle
Stop                 ------ CPU cycle
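The CPU-burst portion of the example above can be sketched in C. This is only an illustration: the variable names A–F follow the example, the `compute` helper is made up here, and the Read/Write I/O bursts would be `scanf`/`printf` calls around it.

```c
/* CPU-burst portion of the example: compute C..F from A and B.
   (Sketch only; the Read A,B / Write ... lines are the I/O bursts.) */
void compute(int a, int b, int *c, int *d, int *e, int *f)
{
    *c = a + b;                     /* C = A + B     */
    *d = (a * b) - *c;              /* D = (A*B) - C */
    *e = a - b;                     /* E = A - B     */
    *f = (*e != 0) ? *d / *e : 0;   /* F = D / E, guarding E == 0 (added assumption) */
}
```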
 As a process executes, it changes state.
 Each process may be in one of the following states:
 New: a new process has not yet been loaded into main memory, although its process control block (PCB) has been created.
 Ready: The process is in memory and waiting to be assigned to a processor.
 Running: The process is currently being executed. Assume a computer with a single processor, so at most one process at a time can be in this state.
 Waiting: The process is waiting for some event, such as I/O.
 Terminated: The process has finished execution.

Note: Only one process can be running (single-processor system);
many processes may be in the ready and waiting states.
The OS keeps track of processes with a set of queues:
 Job queue – set of all processes in the system
 Ready queue – set of all processes residing in main memory, ready and waiting to execute
 Device queues – set of processes waiting for an I/O device
 Processes migrate between the various queues as they execute

 The OS must select, for scheduling purposes, processes from these queues in some fashion.
 The selection process is carried out by the appropriate scheduler.
 The processes are spooled to a mass storage device like a disk.
 The long-term scheduler (LTS), or job scheduler, selects from this pool and loads processes into memory for execution.
 Hence the LTS controls the degree of multiprogramming.
 The short-term scheduler (STS), or CPU scheduler, selects from among the processes that are ready to execute, and allocates the CPU to one of them.
 The difference between the LTS and STS is their frequency of execution. The STS must select a new process for the CPU frequently; hence the STS executes more frequently than the LTS.
 Processes can be described as either:
◦ I/O-bound process – spends more time doing I/O than
computations, many short CPU bursts (e.g., surfing the
web, copying large files)
◦ CPU-bound process – spends more time doing
computations; few very long CPU bursts (e.g., processes
doing intensive mathematical computation)
 It is important for the long-term scheduler to select a good
process mix of I/O-bound and CPU-bound processes
◦ What will happen to the ready queue and I/O queue:
 If all processes are I/O bound?
Then ready queue will be almost empty.
 If all processes are CPU bound?
Then I/O queue will be almost empty
 Another component involved in the CPU scheduling function
(Function of assigning the CPU to one of the processes in the
ready queue) is the dispatcher.
 The dispatcher is the module that actually gives control of the
CPU to the process selected by the STS.
 This function involves loading the registers of the process,
jumping to the proper location in the program to restart it.

 Another scheduler involved is the mid-term scheduler (MTS).

 The MTS removes processes from main memory and thus reduces the degree of multiprogramming. At some later time, the process can be reintroduced into memory and its execution continued from where it left off.
Addition of medium term scheduler to the
queuing diagram
Medium Term Scheduling
• Possibility – All processes in memory need I/O, i.e., all processes are blocked waiting for an event (I/O) and no process is under execution.
• Requirement – Bring in some process from the disk which is ready for
execution.
• If no space in memory – Free the space by swapping out a blocked
process.
• The swapped out processes are known as the suspended processes.
 There is another queue on the disk, called the blocked-suspend queue, for all the processes which are suspended in the blocked state.
 The task of swapping the process from blocked queue to
blocked suspend queue is performed by the medium term
scheduler.
 When there is a signal for the completion of an I/O operation for which a process is blocked and presently in the suspend queue, the state of the process is changed to ready-suspend and the process is moved to the ready-suspend queue (another queue on the disk). This task of moving a process from the blocked-suspend queue to the ready-suspend queue is also performed by the medium-term scheduler.
 Whenever a suspended process is swapped out to disk, there are two choices for bringing in a process that is ready for execution.
 First: pick a process from the ready-suspend queue and send it to the ready queue.
 Second: a new process from the job queue can be sent to the ready queue.
 The second choice increases the load on the system (by increasing the list of unfinished jobs).
 Hence, generally, the first option is preferred.
 Ready: The process is in main memory and available for execution.
 Blocked: The process is in main memory and awaiting an event (e.g., I/O).
 Blocked suspend: The process is in secondary memory and awaiting an event.
 Ready suspend: The process is in secondary memory but is available for execution as soon as it is loaded into main memory.
 Blocked > Blocked suspend
If there are no ready processes, then at least
one blocked process is swapped out to make
room for another process that is not blocked.
 Blocked suspend > Ready suspend
A process in the Blocked suspend state is
moved to the Ready suspend state when the
event for which it has been waiting occurs.
 Ready suspend > Ready
When there are no ready processes in main
memory, the operating system will need to
bring one in to continue execution.
 It might be the case that a process in the Ready suspend state has a higher priority than any of the processes in the Ready state. In that case, the operating system designer may decide that it is more important to get in the higher-priority process than to minimize swapping.
 Ready > Ready suspend
It may be necessary to suspend a ready
process if that is the only way to free a
sufficiently large block of main memory.
 New > Ready suspend and New > Ready
When a new process is created, it can either
be added to the Ready queue or the Ready,
suspend queue.
 There will often be insufficient room in main memory for a new process; hence the use of the New > Ready suspend transition.
 Blocked suspend > Blocked
A process terminates, freeing some main
memory. There is a process in the Blocked
suspend queue with a higher priority than
any of the processes in the Ready suspend
queue and the operating system has reason
to believe that the blocking event for that
process will occur soon.
 Running > Ready suspend
If the operating system is preempting the
process because a higher-priority process on
the Blocked suspend queue has just become
unblocked, the operating system could move
the running process directly to the Ready
suspend queue and free some main memory.
7-STATE PROCESS STATE TRANSITION DIAGRAM
[Figure: seven-state diagram with states New, Ready, Running, Blocked, Ready suspend, Blocked suspend, and Exit, connected by the transitions Dispatch, Timeout, Event Wait, Event Occurs, Suspend, Activate, and Release.]
Context Switch
 When the CPU switches to another process, the system must save the state of the old process and load the saved state for the new process via a context switch.
 The context of a process is represented in the Process Control Block (PCB).
 Context-switch time is overhead; the system does no useful work while switching.
Process Control Block (PCB)
Information associated with each process
is stored in the PCB
(also called task control block)
• Process state – running, waiting, etc.
• Program counter – location of the next instruction to execute.
• CPU registers – contents of all process-centric registers.
• CPU scheduling information – priorities, the amount of time the process has been waiting, and the amount of time the process executed the last time it was running.
• Memory-management information – memory
allocated to the process
• Accounting information – CPU time used, clock time elapsed since start.
• I/O status information – I/O devices allocated to
process.
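The PCB fields listed above can be sketched as a C struct. The field names, types, and sizes below are illustrative assumptions for this note, not the layout of any real kernel.

```c
#define MAX_OPEN_FILES 16

/* Illustrative PCB layout; real kernels differ. */
enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int pid;                          /* process identifier */
    enum proc_state state;            /* process state: running, waiting, etc. */
    unsigned long program_counter;    /* location of the next instruction */
    unsigned long registers[16];      /* saved CPU registers */
    int priority;                     /* CPU-scheduling information */
    unsigned long mem_base, mem_limit;/* memory-management information */
    unsigned long cpu_time_used;      /* accounting information */
    int open_files[MAX_OPEN_FILES];   /* I/O status information */
};
```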
 How does the CPU manage the execution of simultaneously ready processes?
i.e., when the CPU becomes idle, the OS must select one of the processes in the ready queue to be executed.
 The selection process is carried out by the STS (CPU scheduler).
 The scheduler selects from among the processes that are ready to execute, and allocates the CPU to one of them.


 Which process to select?
 When to select?
 Non-preemptive
Process runs until it voluntarily relinquishes the CPU
– process blocks on an event (e.g., I/O)
– process terminates
 Preemptive
The scheduler actively interrupts and deschedules an executing
process and schedules another process to run on the CPU.
 Process/CPU scheduling algorithms decide which process to execute from the list of processes in the ready queue.
 There are a number of algorithms for CPU scheduling. All these algorithms have certain properties.
 These properties can be used as criteria to compare different algorithms and find out the best one.
1. CPU utilization: The CPU should be kept as busy as possible. CPU utilization ranges from 0% to 100%. Typically it ranges from 40% (for a lightly loaded system) to 90% (for a heavily loaded system).
2. Throughput : If the CPU is busy executing processes, then work is
being done.
One measure of work is the number of processes completed per time
unit, called throughput.
Varies from 1 process per hour to 10 processes per second.
3. Turnaround time: How long does it take to execute a process?
Measured as the interval from the time of submission of a process to the time of completion.
4. Waiting time : The amount of time that a process spends in the ready
queue. Sum of the periods spent waiting in the ready queue.
5. Response time : Time from the submission of a request until the first
response is produced. NOT THE TIME IT TAKES TO OUTPUT THE
RESPONSE
Ideally the Scheduling algorithm should :
Maximize : CPU utilization , Throughput.
Minimize : Turnaround time, Waiting time, Response time
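Two of these criteria follow directly from a process's arrival, burst, and completion times. A minimal sketch (assuming a single CPU burst and no I/O, which is how the examples below are computed):

```c
/* Turnaround time = completion - arrival.
   Waiting time    = turnaround - total CPU burst.
   (Single-burst assumption, as in the worked examples.) */
int turnaround_time(int completion, int arrival)
{
    return completion - arrival;
}

int waiting_time(int completion, int arrival, int burst)
{
    return turnaround_time(completion, arrival) - burst;
}
```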

Type of Scheduling Algorithms


 Non-preemptive Scheduling
 Preemptive Scheduling
 Deals with the problem of deciding which of the processes
in the ready queue is to be allocated the CPU.

1. FCFS - First-Come, First-Served


◦ CPU is allocated to the process that requested it
first.
◦ Non-preemptive
◦ Ready queue is a FIFO queue
◦ Jobs arriving are placed at the end of queue
◦ Dispatcher selects first job in queue and this job
runs to completion of CPU burst

Process Burst Time
P1 24 ms
P2 3 ms
P3 3 ms
 Suppose that the processes arrive in the order: P1 , P2 , P3 , all at
the same time. The Gantt Chart for the schedule is:
P1 P2 P3

0 24 27 30
 Waiting time for P1 = 0; P2 = 24; P3 = 27 ms
 Average waiting time: (0 + 24 + 27)/3 = 17 ms

Operating System Concepts


 Compute Average turnaround time
P1 P2 P3
0 24 27 30

Compute each process’s turnaround time

T (p1) = 24ms
T (p2) = 3ms + 24ms = 27ms
T (p3) = 3ms + 27ms = 30ms

Average turnaround time = (24 + 27 + 30)/3 = 81 / 3 = 27ms
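The FCFS arithmetic above can be checked with a short helper. All processes are assumed to arrive at time 0, as in this example, so each process simply waits for the sum of the bursts queued ahead of it; the function name is ours.

```c
/* FCFS with all arrivals at t = 0: each process waits for the total
   burst time of the jobs ahead of it.  Returns the average waiting time. */
double fcfs_avg_wait(const int *burst, int n, int *wait)
{
    int t = 0, total = 0;
    for (int i = 0; i < n; i++) {
        wait[i] = t;        /* time consumed by earlier jobs in the queue */
        total += t;
        t += burst[i];      /* this job now runs to completion */
    }
    return (double)total / n;
}
```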


Suppose that the processes arrive in the order
P2 , P3 , P1 .
 The Gantt chart for the schedule is:

P2 P3 P1

0 3 6 30
 Waiting time for P1 = 6; P2 = 0; P3 = 3
 Average waiting time: (6 + 0 + 3)/3 = 3
 Much better than previous case.
 Convoy effect: all small processes have to wait for one big process to get off the CPU.
 Alternative: allow shorter processes to go first.



 Associate with each process the length of its next CPU burst.
Use these lengths to schedule the process with the shortest
time.
 Two schemes:
◦ Non-preemptive – Once the CPU is given to the process it cannot be preempted until it completes its CPU burst, i.e., once the CPU has been allocated to a process, it keeps the CPU until it wants to release it.
◦ Preemptive – If a new process arrives with CPU burst length less than the remaining time of the currently executing process, then preempt it. This scheme is known as Shortest-Remaining-Time-First (SRTF).



Process Arrival Time Burst Time
P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4
 SJF (non-preemptive)
P1 P3 P2 P4

0 7 8 12 16

 Average waiting time = (0 + (8-2) + (7-4) + (12-5))/4 = 4ms


 Average turnaround time = ((7-0) + (12-2) + (8-4) + (16-5))/4 = 8 ms
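Non-preemptive SJF can be sketched as: whenever the CPU becomes free, run to completion the arrived, unfinished process with the shortest burst. The O(n²) helper below is our own sketch (capacity limited to 32 processes for illustration) and reproduces the table above.

```c
#include <stdbool.h>

/* Non-preemptive SJF: when the CPU is free, pick the arrived process
   with the shortest next CPU burst and run it to completion. */
void sjf_nonpreemptive(const int *arrival, const int *burst, int n,
                       int *completion)
{
    bool done[32] = { false };
    int t = 0;
    for (int finished = 0; finished < n; ) {
        int pick = -1;
        for (int i = 0; i < n; i++)
            if (!done[i] && arrival[i] <= t &&
                (pick < 0 || burst[i] < burst[pick]))
                pick = i;
        if (pick < 0) { t++; continue; }   /* CPU idle: wait for an arrival */
        t += burst[pick];                  /* run to completion, no preemption */
        completion[pick] = t;
        done[pick] = true;
        finished++;
    }
}
```

With arrivals {0, 2, 4, 5} and bursts {7, 4, 1, 4}, the completion times come out as {7, 12, 8, 16}, matching the Gantt chart above.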



Process Arrival Time Burst Time
P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4
 SJF (preemptive)
P1 P2 P3 P2 P4 P1

0 2 4 5 7 11 16

 Average waiting time = ((11-2) + 1 + 0 +2)/4 = 3ms


 Average turnaround time =(16+(7-2)+(5-4)+(11-5))/4= 7ms



Example

• Shortest-Remaining-Time-First Scheduling(SRTF)

Process Arrival time Burst Time


P1 0 8 ms
P2 1 4 ms
P3 2 9 ms
P4 3 5 ms

P1 P2 P4 P1 P3

0 1 5 10 17 26

Average WT = ((10-1) + (1-1) + (17-2) + (5-3))/4 = 6.5 ms

Average TT = ((17-0) + (5-1) + (26-2) + (10-3))/4 = 13 ms

 A priority number (integer) is associated with each
process.
 CPU is allocated to the process with the highest priority (smallest integer → highest priority).
 SJF is a priority algorithm, with the shorter CPU burst having higher priority.
 Drawback : Can leave some low priority processes
waiting indefinitely for the CPU. This is called
starvation.
 Solution to the problem of indefinite blockage of
low priority jobs is aging.
 Aging is a technique of gradually increasing
the priority of processes that wait in the
system for a long time.
 For example, if priorities range from 127

(low) to 0 (high), we could increase the


priority of a waiting process by 1 every 15
minutes.
 Eventually, even a process with an initial

priority of 127 would have the highest


priority in the system and would be executed.
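The aging rule in the example above (priorities 127 = low to 0 = high, one step per 15 minutes waited) fits in a one-line helper; the function name is ours.

```c
/* Aging: raise (numerically lower) a waiting process's priority by one
   step per 15 minutes waited, never going past 0 (the highest). */
int aged_priority(int priority, int minutes_waited)
{
    int boost = minutes_waited / 15;
    return (priority - boost > 0) ? priority - boost : 0;
}
```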
 Priorities can be defined either internally or
externally.
 Internally defined priorities use some measurable
quantity or quantities to compute the priority of a
process. For example, memory requirements of
each process.
 External priorities are set by criteria outside the OS,
such as the importance of the process, the type of
process (system/user) etc.
 Priority scheduling can be either pre-emptive or non
preemptive.
 When a process arrives at the ready queue, its priority is
compared with the priority of the currently running
process.
 A pre-emptive priority scheduling algorithm will
preempt the CPU if the priority of the newly arrived
process is higher than the priority of the currently
running process.
 A non preemptive priority scheduling algorithm will
simply continue with the current process
Process Duration Priority Arrival Time
P1 6 4 0
P2 8 1 0
P3 7 3 0
P4 3 2 0
P2 (8) P4 (3) P3 (7) P1 (6)

0 8 11 18 24
P2 waiting time: 0
P4 waiting time: 8
P3 waiting time: 11
P1 waiting time: 18
The average waiting time (AWT) = (0+8+11+18)/4 = 9.25 ms
Priority algorithm
Process Burst Time Priority
P1 10 3
P2 1 1
P3 2 3
P4 1 4
P5 5 2
P2 P5 P1 P3 P4

0 1 6 16 18 19

Average wait time = (6+0+16+18+1)/5= 8.2 ms


Average turnaround time = (16 + 1 + 18 + 19 + 6)/5 = 12 ms

Arrival Time Burst Time Priority
P1 0 5 3
P2 2 6 1
P3 3 3 2

| p1 | p2 | p3 | p1 |
0 2 8 11 14
Average waiting time = (9+0+5)/3 = 4.66 ms
Average turnaround time = (14+6+8)/3 = 9.33 ms
 The round-robin (RR) scheduling algorithm is
designed especially for time-sharing systems.
 It is similar to FCFS scheduling, but preemption is added to switch between processes.
 A small unit of time, called a time quantum or
time slice, is defined.
 A time quantum is generally from 10 to 100
milliseconds.
 The ready queue is treated as a circular
queue.
To implement RR scheduling
 We keep the ready queue as a FIFO queue of processes.
 New processes are added to the tail of the ready queue.
 The CPU scheduler picks the first process from the ready queue, sets a
timer to interrupt after 1 time quantum, and dispatches the process.
 The process may have a CPU burst of less than 1 time quantum.
◦ In this case, the process itself will release the CPU voluntarily.
◦ The scheduler will then proceed to the next process in the ready
queue.
 Otherwise, if the CPU burst of the currently running process is longer
than 1 time quantum,
◦ the timer will go off and will cause an interrupt to the OS.
◦ A context switch will be executed, and the process will be put at the
tail of the ready queue.
◦ The CPU scheduler will then select the next process in the ready
queue.
 The performance of the RR algorithm depends
heavily on the size of the time quantum. If the time
quantum is extremely large, the RR policy is the
same as the FCFS policy.
 If the time quantum is extremely small the RR
approach is called processor sharing and (in theory)
creates the appearance that each of the n users has
its own processor running at 1/n th the speed of
the real processor.
◦ Shorter response time
◦ Fair sharing of CPU
Process Burst Time
P1 53
P2 17
P3 68
P4 24
 The Gantt chart (time quantum = 20 ms) is:
P1 P2 P3 P4 P1 P3 P4 P1 P3 P3

0 20 37 57 77 97 117 121 134 154 162

 Average turnaround time = (134 + 37 + 162 + 121)/4 = 113.5 ms


Process Burst Time Wait Time
P1 53 57 +24 = 81
P2 17 20
P3 68 37 + 40 + 17= 94
P4 24 57 + 40 = 97

 Average wait time = (81+20+94+97)/4 = 73 ms
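The four-process example above can be reproduced with a small simulation. As in the example, all processes are assumed to arrive at time 0; the queue capacity is an arbitrary illustration limit and the function name is ours.

```c
#include <string.h>

/* Round robin with all arrivals at t = 0: run each process for at most
   one quantum, requeueing it at the tail if it is unfinished. */
void rr_simulate(const int *burst, int n, int quantum, int *completion)
{
    int rem[32], queue[4096];
    int head = 0, tail = 0, t = 0;
    memcpy(rem, burst, n * sizeof(int));
    for (int i = 0; i < n; i++) queue[tail++] = i;  /* initial FIFO order */
    while (head < tail) {
        int p = queue[head++];
        int run = rem[p] < quantum ? rem[p] : quantum;
        t += run;
        rem[p] -= run;
        if (rem[p] > 0) queue[tail++] = p;  /* timer expired: back to the tail */
        else completion[p] = t;             /* burst finished: record the time */
    }
}
```

Since every process arrives at time 0, each completion time here equals the turnaround time, giving the 113.5 ms average above.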


 EXAMPLE DATA:
Process Arrival Time CPU Time
◦ 1 0 8
◦ 2 1 4
◦ 3 2 9
◦ 4 3 5
◦ Round Robin, quantum = 4

P1 P2 P3 P4 P1 P3 P4 P3

0 4 8 12 16 20 24 25 26
Average TAT = ((20-0) + (8-1) + (26-2) + (25-3))/4 = 73/4 = 18.25 ms
Average WT = ((16-4) + (4-1) + ((8-2)+(20-12)+(25-24)) + ((12-3)+(24-16)))/4 = 47/4 = 11.75 ms
1. FCFS
Process Arrival Time Exec. Time
P1 0 5
P2 2 4
P3 3 7
P4 5 6

P1 P2 P3 P4

0 5 9 16 22
Avg WT= (0+ (5-2) + (9-3) + (16-5))/4=5 time units
Avg TAT =((5-0) +(9-2) + (16-3) + (22-5))/4= 10.5 time units
2. Priority preemptive
Process Arrival Time Exec. Time Priority
P1 0 5 2
P2 2 4 1
P3 3 7 3
P4 5 6 4

P1 P2 P1 P3 P4

0 2 6 9 16 22

Avg WT= ((0+4) + 0 + (9-3) + (16-5) )/ 4 = 5.25 time units


Avg TAT =((9-0) +(6-2) + (16-3) + (22-5))/4=10.75 time units
3. SRTF
Process Arrival Time Exec. Time
P1 0 9
P2 1 5
P3 2 3
P4 3 4

P1 P2 P3 P2 P4 P1

0 1 2 5 9 13 21
Avg WT= (12+ 3+ 0 + 6)/4=5.25 time units
Avg TAT =(21+8 + 3 + 10 )/4= 10.5 time units
 General class of algorithms involving multiple ready queues
 Appropriate for situations where processes are easily
classified into different groups (e.g., foreground and
background)
 Processes permanently assigned to one ready queue
depending on some property of process.
 Each queue has its own scheduling algorithm
foreground – RR
background – FCFS
 Scheduling must also be done between the queues – often priority preemptive scheduling. For example, the foreground queue could have absolute priority over the background queue. (New foreground jobs displace running background jobs; serve all from foreground, then from background.)
Three queues:
 Q0 – RR with time quantum 8 milliseconds
 Q1 – RR time quantum 16 milliseconds
 Q2 – FCFS

Scheduling
A new job enters queue Q0, which is served FCFS. When it gains the CPU, the job receives 8 milliseconds. If it does not finish in 8 milliseconds, the job is moved to queue Q1.

At Q1 the job is again served FCFS and receives 16 additional milliseconds. If it still does not complete, it is preempted and moved to queue Q2.

Long jobs automatically sink to Q2 and are served FCFS


 For example, Windows NT/XP/Vista uses a multilevel feedback queue, a combination of fixed-priority preemptive scheduling, round-robin, and first-in first-out.
Central to the design of modern Operating
Systems is managing multiple processes
Multiprogramming
Multiprocessing
Distributed Processing
 Big issue is concurrency:
managing the interaction of all of these processes
 Interleaving of processes on a uniprocessor system
 Not only interleaved but overlapped on multiprocessors
Difficulties in concurrency
E.g., if two processes both make use of the same global variable and both perform reads and writes on that variable, then the order in which the reads and writes are executed is critical.
char chin, chout;        // shared global variables

void echo()
{
    chin = getchar();    // get a character from the keyboard and store it in chin
    chout = chin;        // transfer it to variable chout
    putchar(chout);      // print the character to the display
}
 Input is obtained from a keyboard one keystroke at
a time.
 Each input character is stored in variable chin. It is
then transferred to variable chout and sent to the
display.
 Instead of each application having its own copy of this procedure, the procedure may be shared: only one copy is loaded into memory, global to all applications, thus saving space.
 Any program can call this procedure repeatedly to
accept user input and display it on the user's
screen.
 Consider a single processor multiprogramming
system supporting a single user.
 The user can be working on a number of
applications simultaneously.
 Assume each of the applications needs to use the procedure echo, which is shared by all the applications.
 However this sharing can lead to problems.
 The sharing of main memory among processes is useful but could lead to
problems.
 Consider the following sequence:
P1 invokes echo and is interrupted after executing
chin = getchar(); // assume x is entered
P2 is activated and invokes echo, which runs to completion
// assume y is entered and displayed
P1 is resumed and executes
chout = chin;
putchar(chout);
// What is the result?
 The result is that the character input to P1 is lost before being displayed,
 and the character input to P2 is displayed by both P1 and P2.
Problem
 The shared global variables chin and chout are used by multiple processes.
 If one process updates a global variable and is then interrupted, another process may alter this variable before the first process can use its value.
 This is called a Race Condition.
 A race condition occurs when two or more processes access shared data and the outcome depends upon which process precisely runs when.
 Suppose that we permit only one process at
a time to be in that procedure.
 So, the following sequence may result:
1. P1 invokes the echo procedure and is interrupted
immediately after the conclusion of the input
function.
• x is stored in variable chin.
2. P2 is activated and invokes the echo procedure. However, because P1 is still in the echo procedure, P2 is blocked from entering it. Therefore P2 is suspended, awaiting the availability of the echo procedure.

3. P1 resumes and completes execution of echo. The proper character x is displayed.
4. When P1 exits echo, this removes the block on P2. When P2 is later resumed, echo is successfully invoked.
 It is necessary to protect shared global variables.
 How?
◦ Control the code that accesses the variable.

 Previous Example : Assumption-Single processor,
multiprogramming operating system.
 Same problem could occur on a multiprocessor system.
 Processes P1 and P2 are both executing each on a separate
processor. Both processes invoke the echo procedure.
Process P1                  Process P2
.                           .
chin = getchar();           .
.                           chin = getchar();
chout = chin;               chout = chin;
putchar(chout);             .
.                           putchar(chout);
 The result is that the character input to P1 is lost
before being displayed, and the character input to
P2 is displayed by both P1 and P2.
 Suppose that we permit only one process at a time to be in that procedure.
 So, the following sequence may result:
1. P1 & P2 run on separate processors
2. P1 enters the echo procedure first
3. P2 tries to enter but is blocked – P2 is suspended
4. P1 completes execution
5. P2 resumes and executes the echo procedure
For example, consider the "too much milk" problem.
 Suppose you're sharing your room with a roommate.
 You realize there's no milk in the fridge, and being a good roommate, you decide to go and buy some.
 Your roommate isn't in the room when you go.
 A little while later, while you're still outside, the roommate, also being a nice guy, decides to go buy milk as well.
 Now both of you buy milk and there's too much milk.
How do we fix this? Well, you could leave a note on the fridge before going, in which case both threads (you and your roommate) would have this code:
if (noMilk) {
if(noNote) {
leaveNote();
buyMilk();
removeNote();
}
}
But this has a problem.
 Suppose the first process (task A) is executing; it checks that there's no milk, and before it checks the other if condition, the scheduler decides to switch to task B.
 Task B checks that there's no milk and that there's no note, and before it can execute the code inside the inner if, the scheduler switches again.
 Now task A checks that there's no note, executes the statements, and buys milk.
 Now the scheduler switches to B, which continues executing, and now both of them buy milk.
It's a fail.
 The portion of any program which accesses a shared resource (such as a shared variable in memory) is called the CRITICAL SECTION or CRITICAL REGION.

Solution to the race condition:
When a process is in its critical section, disallow any other process from entering its critical section,
i.e., no two processes should be allowed to be inside their critical regions at the same time.
This is called MUTUAL EXCLUSION.
To summarize
 Each communicating process has a sequence of instructions that modify shared data. This sequence of instructions is called a CRITICAL SECTION.
 If 2 processes are executing in their respective critical sections at the same time, then they interfere with each other and the result of execution depends on the order in which they execute. This is called a RACE CONDITION and should be avoided.
 The way to avoid a race condition is to allow only one process at a time to be executing in its critical section. This is called MUTUAL EXCLUSION.
 So each process must first request permission to enter its critical section.
 The section of code implementing this request is called the Entry Section (ES).
 The critical section (CS) might be followed by a Leave/Exit Section (LS).
 General structure of a process:

entry section
critical section
exit section
remainder section
The solutions for the CS problem should satisfy the following
characteristics :
1. Mutual Exclusion :

At any time, at most one process can be in its critical section (CS)
2. Progress:
If some processes are waiting to get into their CS and currently no process is inside the CS, then one of the waiting processes must be granted permission to enter,
i.e., the CS will not be reserved for a process that is currently in a non-critical section.
3. Bounded wait :
A process in its CS remains there only for a finite time because
generally the execution does not take much time, so the waiting
processes in the queue need not wait for a long time.
 E.g., a process using a CS:

do {
    Entry section
    CRITICAL SECTION
    Exit section
    Remainder section
} while (true);

In this example the CS is situated in a loop. In each iteration of the loop, the process uses the CS and also performs other computations, called the Remainder section.
Approaches to achieving Mutual Exclusion
Two Process Solution
Attempt 1
• Processes enter their CSs according to the value of turn (a shared variable) and in strict alternation

Process 1:                          Process 2:
do {                                do {
    while (turn == 2) ; /* wait */      while (turn == 1) ; /* wait */
    CRITICAL SECTION                    CRITICAL SECTION
    turn = 2;                           turn = 1;
    Remainder Section                   Remainder Section
} while (true);                     } while (true);
 turn is a shared variable.
 It is initialized to 1 before processes P1 and P2 are created.
 Each of these processes contains a CS for

some shared data d.


 The shared variable turn is used to indicate

which process can enter the CS next.


 Let turn = 1; P1 enters the CS.
 After completing the CS, it sets turn to 2 so that P2 can enter the CS.
Adv:
 Achieves Mutual Exclusion

Drawback
 Suffers from busy waiting (a technique in which a process repeatedly checks a condition), i.e., the entire time slice allocated to the waiting process is wasted.
 Let P1 be in its CS and process P2 be in its remainder section. If P1 exits from the CS, finishes its remainder section and wishes to enter the CS again, it will encounter a busy wait until P2 uses the CS.
Here the PROGRESS condition is violated, since P1 is currently the only process interested in using the CS, yet it is not able to enter.
Attempt 2
 The drawback of the first algorithm is that it does not know the state of the executing process (whether in its CS or RS).
 To store the state of each process, 2 more variables are introduced: state_flag_P1 and state_flag_P2.
 If a process enters its CR, it first sets its state flag to 0, and after exiting it sets it to 1.
 The use of two shared variables eliminates the progress problem of the first algorithm.
 Does not require strict alternation of entry into the CR.
 Attempt 2
Process 1:                              Process 2:
{                                       {
state_flag_P1 = 1;                      state_flag_P2 = 1;
do {                                    do {
    while (state_flag_P2 == 0) ;            while (state_flag_P1 == 0) ;
    state_flag_P1 = 0;                      state_flag_P2 = 0;
    CR                                      CR
    state_flag_P1 = 1;                      state_flag_P2 = 1;
    Remainder section                       Remainder section
} while (true);                         } while (true);
}                                       }
 Violates mutual exclusion property.
 Assume currently state_flag_P2 = 1.
 P1 wants to enter the CS.
 P1 skips the while loop.
 Before it can set state_flag_P1 = 0, it is interrupted.
 P2 is scheduled, it finds state_flag_P1=1 and

enters CS.
 Hence both processes are in CS
 Hence violates mutual exclusion property.
Peterson’s algorithm
 Uses a boolean array process_flag[] which contains one flag for each process. These flags are equivalent to the status variables state_flag_P1 and state_flag_P2 of the earlier algorithm.
 process_flag[0] is for P1 and process_flag[1] is for P2.
 If process P1 wants to enter the CR, it sets process_flag[0] to true. When it exits the CR, it sets process_flag[0] to false.
 Similarly, if process P2 wants to enter the CR, it sets process_flag[1] to true. When it exits the CR, it sets process_flag[1] to false.
 Peterson's Algorithm
Process P1:                             Process P2:
{                                       {
do {                                    do {
    process_flag[0] = true;                 process_flag[1] = true;
    process_turn = 1;                       process_turn = 0;
    while (process_flag[1] &&               while (process_flag[0] &&
           process_turn == 1) ;                    process_turn == 0) ;
    CR                                      CR
    process_flag[0] = false;                process_flag[1] = false;
    Remainder section                       Remainder section
} while (true);                         } while (true);
}                                       }
 In addition there is one more shared variable process_turn,
which takes the value of 0 for P1 and 1 for P2.
 The variable process_turn maintains mutual exclusion and
process_flag [] maintains the state of the process.
 Assume both processes want to enter the CR, so both make their flags true; but to maintain mutual exclusion, each process, before entering the CR, allows the other process to run.
 The process that satisfies both the criteria, that its
process_flag is true and it is its process turn will enter the
CR.
 After exiting the CR, the process makes its flag false, so
that the other can enter CR, if it wants
Eg:1
 Assume initially both process_flag[0] and process_flag[1]

are false.
 Assume P1 is about to enter the CR hence it makes

process_flag[0]=true; process_turn=1;
 It executes the while statement, but since process_flag[1] = false, P1 will enter the CR, execute it, and after exiting it will make process_flag[0] = false.
 Now if P2 wants to enter CR, it makes
process_flag[1]=true; process_turn=0.
 It executes while statement , but since
process_flag[0]=false , P2 will enter the CR,
 P2executes CR and after exiting it will make
process_flag[1] =false.
Eg: 2
 Assume initially both process_flag[0] and process_flag[1]
are false.
 P1 wants to enter the CR: process_flag[0]=true;
process_turn=1;
 It executes the while statement, but since
process_flag[1]=false, P1 enters the CR.
 Now assume control switches to P2 and P2 wants to enter
the CR.
 P2 sets process_flag[1]=true; process_turn=0.
 It executes the while loop, and since process_flag[0] is true
and process_turn is 0, it waits in the while loop and is not
allowed to enter the CR.
 Now if the CPU switches back to P1, P1 continues with the
CR and after exiting sets process_flag[0] to false.
 And now when the CPU switches back to P2, P2 can enter
the CR.
 Hence mutual exclusion is achieved.
Eg: 3
 Assume P1 starts and executes process_flag[0]=true and
process_turn=1;
 At this time P2 interrupts, gets the CPU, and executes
process_flag[1]=true;
 And continues with process_turn=0;
 At this point, if P1 gets the CPU back, it can continue
because P2 has given P1 another chance to execute by setting
process_turn=0.
 Hence P1 will not busy-wait in the while loop and enters the
CR.
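The behaviour traced in these examples can be sketched in Python, with two threads standing in for P1 and P2. This is a demo, not part of the slides: it relies on CPython executing the individual assignments sequentially, whereas on real hardware Peterson's algorithm additionally needs memory barriers.

```python
import threading

# Sketch of Peterson's algorithm with two threads standing in for P1 and P2.
# Variable names (process_flag, process_turn) follow the slides.
process_flag = [False, False]   # process_flag[0] is for P1, [1] is for P2
process_turn = 0
counter = 0                     # shared data updated inside the CR
ITERATIONS = 10000

def process(me):
    global process_turn, counter
    other = 1 - me
    for _ in range(ITERATIONS):
        process_flag[me] = True      # I want to enter the CR
        process_turn = other         # but offer the turn to the other process
        while process_flag[other] and process_turn == other:
            pass                     # busy-wait while the other is in/entering CR
        counter += 1                 # critical region
        process_flag[me] = False     # exit CR

p1 = threading.Thread(target=process, args=(0,))
p2 = threading.Thread(target=process, args=(1,))
p1.start(); p2.start()
p1.join(); p2.join()
print(counter)                       # 2 * ITERATIONS when mutual exclusion holds
```

If mutual exclusion holds, the two increments never interleave, so the final count is exactly 2 * ITERATIONS.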
1. TSL instruction
 Many computers have a special instruction called "Test and
Set Lock" (TSL).
 The instruction has the format:
TSL ACC, IND
where ACC is the accumulator register and IND is the name
of a memory location which holds a character ("F" or "N").
 The following actions are taken when the instruction is
executed:
1. Copy the contents of IND to ACC.
2. Set the contents of IND to "N".
 This instruction is indivisible, which means that it cannot be
interrupted during the execution of these 2 steps.
 Hence a process switch cannot take place during the
execution of the TSL instruction.
 It will either be fully executed or not executed at all.
 How can we use this TSL instruction to implement mutual
exclusion?
1. IND can take on the values:
   N ---- Not free (CR)
   F ---- Free (CR)
If IND = "N", no process can enter its CR because some
process is in the CR.
2. There are 2 routines: ENTER-CRITICALREGION and
EXIT-CRITICALREGION.
The CR is encapsulated between these 2 routines.
ENTER-CRITICALREGION
EN.0  TSL ACC,IND     IND -> ACC, "N" -> IND
      CMP ACC,F       Check if CR is free
      BU  EN.0        Branch to EN.0 if unequal
      RTN             Return to the caller and enter CR

EXIT-CRITICALREGION
      MOV IND,"F"
      RTN
Begin
  Initial Section
  Call ENTER-CRITICALREGION
  CR
  Call EXIT-CRITICALREGION
  Remainder Section
End
 Let IND=“F”
 PA is scheduled. PA executes ENTER-CR routine.
 ACC becomes “F” and IND becomes “N”.
 Contents of ACC are compared with “F”.
 Since it is equal process PA enters its CR.
 Assume process PA loses the control of CPU due to
context switch to PB when in CR.
 PB executes ENTER-CR routine. PB executes EN.0.
 IND which is ”N” is copied to ACC and IND becomes
“N”. ACC=N.
 The comparison fails and loops back to EN.0 and
therefore does not execute its CR.
 Assume that PA is rescheduled and it
completes its execution of CR and then
executes the Exit-CR routine where IND
becomes “F”.
 Hence now since IND becomes "F" and eventually ACC
becomes "F", the next process, say PB, if scheduled again can
execute its CR.
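The ENTER/EXIT routines can be sketched in Python. This is an assumption-laden translation: real TSL is a single hardware instruction, so the internal lock below only simulates its indivisibility (and the atomicity of the exit store).

```python
import threading

# Sketch (not the slides' assembly) of mutual exclusion via TSL.
# _hardware stands in for the hardware guarantee that the two TSL
# steps (read IND, set IND to "N") cannot be interrupted.
_hardware = threading.Lock()
IND = "F"                          # "F" = CR free, "N" = CR not free

def TSL():
    """Copy IND into ACC and set IND to 'N' as one indivisible action."""
    global IND
    with _hardware:                # models the indivisibility of TSL
        acc = IND
        IND = "N"
    return acc

def enter_critical_region():
    while TSL() != "F":            # branch back (EN.0) while CR is not free
        pass

def exit_critical_region():
    global IND
    with _hardware:                # the store must also be atomic w.r.t. TSL
        IND = "F"                  # MOV IND,"F"

counter = 0
def worker():
    global counter
    for _ in range(3000):
        enter_critical_region()
        counter += 1               # critical region
        exit_critical_region()

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)
```

Because only one spinner can observe IND == "F" before TSL overwrites it with "N", at most one thread is ever inside the critical region.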
2. Interrupt Disabling/Enabling
 When a process enters a CR, it should complete the CR
without interruption.
 A process switch after a certain time slice happens due to an
interrupt generated by the timer hardware.
 One solution is to disable all interrupts before any process
enters the CR.
 If interrupts are disabled, the time slice of the process which
has entered its CR will never expire until it has come out of
its CR completely.
 Hence no other process will be able to enter the CR
simultaneously.
DI   (Disable Interrupts)
CR
EI   (Enable Interrupts)
Remaining Sections
Drawbacks:
1. This is a dangerous approach, since it gives user processes
the power to turn off interrupts. Consider a process in the CR
executing an infinite loop: interrupts will never be re-enabled
and no other process can ever proceed.
2. The approach fails for a multiprocessor system, i.e. if there are 2
or more CPUs, disabling interrupts affects only the CPU that
executed the disable instruction.
3. Semaphores
 Used for synchronization.
 Used to protect any resource such as shared global
memory that needs to be accessed and updated by
many processes simultaneously.
 Semaphore acts as a guard or lock on that resource.
 Whenever a process needs to access the resource, it first
needs to take permission from the semaphore.
 If the resource is free , ie. no process is accessing or
updating it, the process will be allowed , otherwise
permission is denied.
 In case of denial the requesting process needs to wait,
until semaphore permits it.
 Semaphore can be a “Counting Semaphore” where it
can take any integer value or a “Binary semaphore”
where it can take on values 0 or 1
 The semaphore is accessed by only 2 indivisible operations,
known as the wait and signal operations, denoted P and V
respectively.
 P - proberen (Dutch: to test) / wait / down
V - verhogen (Dutch: to increment) / signal / up
 P and V form the mutual exclusion primitives for any
process.
 Hence if a process has a CS, it has to be encapsulated
between these 2 operations
 The general structure of such a process becomes --
Initial routine
P(S)
CS
V(S)
Remainder section
 The P and V primitives ensure that only one process is in its
CS.

P(S)
{
  while (S <= 0);   // busy-wait until S becomes positive
  S--;
}

V(S)
{
  S++;
}
 P and V routine must be executed indivisibly.
Drawback
 Requires busy waiting. If a process (PA) is in its CS and is
interrupted by another process (PB), then when PB tries to
execute P(S) it loops continuously in the entry code (busy
waiting).
 Busy waiting is undesirable since it wastes CPU time.
 To overcome busy waiting, P and V are modified: when a
process executes P(S) and finds the semaphore value 0,
rather than busy waiting, the process goes to the blocked
state.
 i.e. the process goes from the running state to the blocked
state and is put in a queue of waiting processes called the
semaphore queue.
 Semaphore queue
The queue of all processes wanting to enter the CS.
 The semaphore queue is a chain of the PCBs of all processes
waiting for the CS to become free.
 Only when a process which is in its CS comes out of it should
the OS allow a new process to be released from the
semaphore queue.
P(S)
Begin
  LOCK
  if S > 0 then
    S = S - 1
  else
    move the current PCB from the Running state to the
    Semaphore Queue
  endif
  UNLOCK
End

V(S)
Begin
  LOCK
  S = S + 1
  if SQ not empty then
    move the first PCB from the SQ to the Ready Queue
  endif
  UNLOCK
End
 Algorithms

P(S)                           V(S)
P.0  Disable interrupts        V.0  Disable interrupts
P.1  If S > 0                  V.1  S = S + 1
P.2    then S = S - 1          V.2  If SQ NOT empty
P.3    else wait on S          V.3    then release a process
P.4  Endif                     V.4  Endif
P.5  Enable interrupts         V.5  Enable interrupts
     End                            End
Principles of operation
 Assume S=1 and 4 processes PA, PB, PC, PD in the Ready
Queue. Each of these processes has a CS encapsulated
between P(S) and V(S).
 Assume PA gets scheduled (gets control of the CPU).
1. PA executes the initial routine, if it has one.
2. It executes the P(S) routine.
3. It disables interrupts and checks if S>0. As S=1, it
decrements S, so S becomes 0, and enables interrupts.
4. PA enters the CS.
5. Assume the time slice for PA gets over while it is in the CS,
and PA is moved from the 'running' state to the 'ready' state.
6. Next PB is scheduled.
7. PB executes P.0.
8. It checks for S>0. But S=0, therefore the check fails. It
skips P.2 and the PCB of PB is added to the semaphore
queue. It is no longer running.
9. Next if PC is scheduled, it will undergo the same steps
as PB and its PCB will also be added to the SQ because
still S=0.
10. When will S become 1?
When PA is rescheduled it will complete its CS and
then call the V(S) routine.
11. Assume PA is rescheduled. It disables interrupt at V.0
12. Increments S by 1, checks SQ and finds it is not
empty.
13. It releases PB, i.e. moves it from the SQ to the RQ. Hence
PB can now enter the CS after it executes P(S).
Wait(s):
Begin
  s.count := s.count - 1;
  if s.count < 0 then
  begin
    place the process in s.queue;
    block this process
  end;
end;

Signal(s):
Begin
  s.count := s.count + 1;
  if s.count <= 0 then
  begin
    remove a process from s.queue;
    place this process on the ready list
  end;
end;
Note:
 If s.count >= 0, s.count is the number of processes that can
execute wait(s) without blocking.
 If s.count <= 0, the magnitude of s.count is the number of
processes blocked, waiting in s.queue.
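The Wait/Signal pair above can be sketched as a Python class. This is only an illustration: threading.Condition stands in for the LOCK/UNLOCK steps and for the OS moving PCBs between the semaphore queue and the ready list.

```python
import threading
from collections import deque

# Sketch of the blocking semaphore: s.count may go negative, and the
# magnitude of a negative s.count equals the number of blocked threads
# waiting in s.queue.
class BlockingSemaphore:
    def __init__(self, count):
        self.count = count
        self.queue = deque()               # "PCBs" of blocked threads
        self._cond = threading.Condition()

    def wait(self):
        me = threading.get_ident()
        with self._cond:                   # LOCK
            self.count -= 1
            if self.count < 0:
                self.queue.append(me)      # place the process in s.queue
                while me in self.queue:    # block this process
                    self._cond.wait()
        # UNLOCK on leaving the with-block

    def signal(self):
        with self._cond:                   # LOCK
            self.count += 1
            if self.count <= 0:
                self.queue.popleft()       # move the first PCB to the ready list
                self._cond.notify_all()    # the released thread proceeds

# Usage: three threads contend for a semaphore initialised to 1.
s = BlockingSemaphore(1)
order = []
def worker(name):
    s.wait()
    order.append(name)     # critical section
    s.signal()

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads: t.start()
for t in threads: t.join()
print(s.count, sorted(order))
```

Releases are FIFO: notify_all wakes every blocked thread, but only the one popped from the front of the queue passes its wait predicate; the rest go back to sleep.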
Time  Current value  Name of the process         Modified value  Current status
      of semaphore                               of semaphore
  1        1         P1 needs to access CS            0          P1 enters CS
  2        0         P1 interrupted,                 -1          P2 is waiting in SQ: P2
                     P2 is scheduled
  3       -1         P3 is scheduled                 -2          P2 and P3 in SQ: P2, P3
  4       -2         P1 scheduled, P1 exits CS       -1          P2 is moved from SQ to RQ; SQ: P3
  5       -1         P2 is scheduled,                 0          P3 is moved from SQ to RQ;
                     enters CS, exits CS                         SQ is empty
  6        0         P3 is scheduled                  1          No process in CS
Producer-Consumer problem (Bounded buffer problem)
Eg-1: A compiler can be considered a producer process and an
assembler a consumer process: the compiler produces the
object code and the assembler consumes the object code.
Eg-2: A process that gives a command for printing a file is a
producer process, and the process that prints the file on the
printer is a consumer process.
 There is a buffer in an application maintained by 2 processes.
 One process is called a producer that produces some data and
fills the buffer.
 Another process is called a consumer that needs data produced
in the buffer and consumes it.
Producer Consumer problem solution using SEMAPHORE
To solve the producer-consumer problem using semaphores, the
following requirements should be met:
1. The producer process should not produce an item when the buffer is
full.
2. The consumer process should not consume an item when the buffer
is empty.
3. The producer and consumer process should not try to access and
update the buffer at the same time.
4. When a producer process is ready to produce an item and the buffer
is full, the item should not be lost i.e the producer must be blocked
and wait for the consumer to consume an item.
5. When a consumer process is ready to consume an item and the
buffer is empty, consumer must be blocked and wait for the
producer to produce item.
6. When a consumer process consumes an item i.e a slot in the
buffer is created, the blocked producer process must be signaled
about it.
7. When a producer process produces an item in the empty buffer,
the blocked consumer process must be signaled about it.
 In the Producer-Consumer problem, semaphores are used for two
purposes:
◦ mutual exclusion, and
◦ synchronization.
 In the following example there are three semaphores:
1. full, used for counting the number of slots that are full;
2. empty, used for counting the number of slots that are empty; and
3. Buffer_access, used to enforce mutual exclusion.
 Let BufferSize = 3;
semaphore Buffer_access = 1; // Controls access to critical section
semaphore empty = BufferSize; // counts number of empty buffer slots
semaphore full = 0; // counts number of full buffer slots
Producer()
{
  while (TRUE) {
    Produce an item;
    wait(empty);            // the producer must wait for an empty slot in the buffer
    wait(Buffer_access);    // enter critical section
    Add item to the buffer; // buffer access
    signal(Buffer_access);  // leave critical section
    signal(full);           // increment the full semaphore; wakes up a blocked consumer
  }
}
Note: the producer and the consumer must make changes to the
shared buffer in a mutually exclusive manner.
Consumer()
{
  while (TRUE) {
    wait(full);               // the consumer must wait for a full slot in the buffer
    wait(Buffer_access);      // enter critical section
    Consume item from buffer;
    signal(Buffer_access);    // leave critical section
    signal(empty);            // increment the empty semaphore; wakes up a blocked producer
  }
}
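The solution above maps directly onto Python's threading.Semaphore. The sketch below reproduces it with BufferSize = 3; the item values and counts are assumptions for the demo.

```python
import threading
from collections import deque

# Sketch of the bounded-buffer solution using Python semaphores.
BufferSize = 3
buffer = deque()
Buffer_access = threading.Semaphore(1)   # controls access to the buffer (CS)
empty = threading.Semaphore(BufferSize)  # counts empty slots
full = threading.Semaphore(0)            # counts full slots
ITEMS = 20
consumed = []

def producer():
    for i in range(ITEMS):
        item = i                           # produce an item
        empty.acquire()                    # wait(empty)
        Buffer_access.acquire()            # wait(Buffer_access): enter CS
        buffer.append(item)                # add item to the buffer
        Buffer_access.release()            # signal(Buffer_access): leave CS
        full.release()                     # signal(full)

def consumer():
    for _ in range(ITEMS):
        full.acquire()                     # wait(full)
        Buffer_access.acquire()            # wait(Buffer_access): enter CS
        consumed.append(buffer.popleft())  # consume item from the buffer
        Buffer_access.release()            # signal(Buffer_access): leave CS
        empty.release()                    # signal(empty)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(len(buffer), consumed[:5])
```

The producer blocks inside empty.acquire() when all 3 slots are full, and the consumer blocks inside full.acquire() when the buffer is empty, exactly as in the traces that follow.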
Eg: let empty=3, full=0, Buffer_access=1.
Assume the producer is scheduled (allocated the CPU).
 It produces an item.
 Executes wait(empty); empty becomes 2.
 Executes wait(Buffer_access); Buffer_access becomes 0.
 Adds the item to the buffer.
 Executes signal(Buffer_access); Buffer_access becomes 1.
 Executes signal(full); full becomes 1.
Assuming the time slice of the producer process is not yet
over:
 Producer produces the next item.
 empty=1
 Buffer_access=0
 Producer adds the item to the buffer.
 Buffer_access=1
 full=2
Continuing…………
 Producer produces the next item.
 Empty=0
 Buffer_access=0
 Producer adds item to the buffer.
 Buffer_access =1
 full=3
Continuing…………
 Producer produces the next item.
 Empty=-1
PRODUCER PROCESS GETS BLOCKED
 Assume the consumer is scheduled to run ….
 Consumer executes wait(full) , full becomes 2
 Executes wait(Buffer_access), Buffer_access
becomes 0.
 Consumes item from the buffer.
 Executes signal(Buffer_access), Buffer_access=1.
 Executes signal(empty), empty becomes 0 and it
wakes up the producer process(removes it from the
blocked state).
 After this if the producer process is scheduled…
 It will start from wait(Buffer_access),
Buffer_access=0.
 It add an item to the buffer.
 Buffer_access=1.
 full becomes 3.
Eg: 2
Assume empty=3, full=0, Buffer_access=1, and the consumer
process is scheduled.
 Consumer executes wait(full); full becomes -1.
CONSUMER PROCESS GETS BLOCKED
Reader-Writer problem
 Assume a data item is shared by a number of processes.
 Some processes read the data item -reader processes
 Some processes write/update the data item -writer
processes
 Eg- In an airline reservation system, there is a shared
data where the status of seats is maintained.
 If a person needs to enquire about the reservation
status, then the reader process will read the shared data
and get the information.
 On the other hand, if a person wishes to reserve a seat, then
the writer process will update the shared data.
 Can multiple readers access the shared data simultaneously?
YES
 Can multiple writers access the shared data simultaneously?
NO
i.e. if one writer is writing, other writers must wait.
Also, when a writer is writing, a reader is not allowed to access
the data item.
 This synchronization problem cannot be solved by simply
providing mutual exclusion on the shared data area, because
it would lead to a situation where a reader also waits while
another reader is reading.
 i.e. the people enquiring about reservation status should be
given simultaneous access, otherwise there will be
unnecessary delays.
Key features of the readers-writers problem are:
 Many readers can read simultaneously.
 Only one writer can write at a time. Readers and writers
must wait if a writer is writing. When the writer exits,
either all waiting readers should be activated or one
waiting writer should be activated.
 A writer must wait if a reader is reading. It must be
activated when the last reader exits.
Reader-writer problem using semaphores
 Shared data:
semaphore mutex, wrt;
int readcount;
 Initially:
mutex = 1, wrt = 1, readcount = 0
Classic synchronization problem (Reader-Writer problem)

Note: a writer performs P(wrt) before writing and V(wrt)
after, so it waits if either another writer is currently writing or
one or more readers are currently reading.

Reader process:
do {
  P(mutex);            // update the shared variable readcount
  readcount++;         //   in a mutually exclusive manner
  if (readcount == 1)  // readcount == 1 means no reader is currently
    P(wrt);            //   reading: only then must the new reader make
                       //   sure no writer is writing; if readcount > 1,
                       //   at least one reader is already reading, so
                       //   the new reader does not have to wait
  V(mutex);
  ...
  reading is performed
  ...
  P(mutex);
  readcount--;
  if (readcount == 0)  // the last reader lets writers in
    V(wrt);
  V(mutex);
} while(TRUE);
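The reader and writer code can be sketched in Python with threading.Semaphore standing in for mutex and wrt; the thread counts and the shared integer are assumptions for the demo.

```python
import threading

# Sketch of the first readers-writers solution using Python semaphores.
mutex = threading.Semaphore(1)   # protects readcount
wrt = threading.Semaphore(1)     # held by a writer, or by the reader group
readcount = 0
shared_data = 0
observed = []

def reader():
    global readcount
    mutex.acquire()              # P(mutex)
    readcount += 1
    if readcount == 1:
        wrt.acquire()            # first reader locks out writers
    mutex.release()              # V(mutex)
    observed.append(shared_data) # reading is performed
    mutex.acquire()
    readcount -= 1
    if readcount == 0:
        wrt.release()            # last reader readmits writers
    mutex.release()

def writer():
    global shared_data
    wrt.acquire()                # P(wrt)
    shared_data += 1             # writing is performed
    wrt.release()                # V(wrt)

threads = [threading.Thread(target=writer) for _ in range(4)]
threads += [threading.Thread(target=reader) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(shared_data, readcount)
```

Readers overlapping in time share a single P(wrt)/V(wrt) pair, so they proceed concurrently, while every writer holds wrt exclusively.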
The Dining-Philosophers Problem
 The dining philosophers problem is useful for modeling
processes that are competing for exclusive access to a
limited number of resources, such as I/O devices.
 Consider five philosophers who spend their lives thinking
and eating.
 The philosophers share a circular table surrounded by five
chairs, each belonging to one philosopher.
 In the center of the table is a bowl of rice, and the table is
laid with five single chopsticks.
 When a philosopher thinks, she does not interact with her
colleagues.
 From time to time, a philosopher gets hungry and tries to
pick up the two chopsticks that are closest to her (the
chopsticks that are between her and her left and right
neighbors).
 A philosopher may pick up only one chopstick at a time.
Obviously, she cannot pick up a chopstick that is already
in the hand of a neighbor.
 When a hungry philosopher has both her chopsticks at the
same time, she eats without releasing her chopsticks.
 When she is finished eating, she puts down both of her
chopsticks and starts thinking again.
 The dining-philosophers problem is an example of a large
class of concurrency-control problems. It is a simple
representation of the need to allocate several resources
among several processes in a deadlock-free and
starvation-free manner.
The Dining-Philosophers Problem
• Solution 1
Shared: semaphore fork[5];
Init: fork[i] = 1 for all i = 0 .. 4

Philosopher i:
do {
  P(fork[i]);
  P(fork[(i+1) % 5]);

  /* eat */

  V(fork[i]);
  V(fork[(i+1) % 5]);

  /* think */
} while(true);

Oops! Subject to deadlock if they all pick up their "right" fork
simultaneously!
The Dining-Philosophers Problem
• Another solution, given by Tanenbaum:
• Uses binary semaphores.
• He defines the states of a philosopher. The possible states are
thinking, eating and hungry.
• There is an array state[5] which stores the state of each philosopher.
• This array is used by all the philosophers, hence it has to be
accessed in a mutually exclusive manner. Hence a semaphore
Sem_state is defined.
• A semaphore is taken for each philosopher (philosopher[5]), on
which they wait to start eating.
• A philosopher can start eating if neither of their neighbours is eating.
• The state of the philosopher is checked and updated.
 Solution
void Philosopher(int n)
{
  do {
    think();
    get_spoons(n);
    eat();
    put_spoons(n);
  } while(true);
}

void get_spoons(int n)
{
  wait(Sem_state);
  state[n] = Hungry;
  test_state(n);
  signal(Sem_state);
  wait(philosopher[n]);
}

void put_spoons(int n)
{
  wait(Sem_state);
  state[n] = Thinking;
  test_state(LEFT);
  test_state(RIGHT);
  signal(Sem_state);
}

void test_state(int n)
{
  if (state[n] == Hungry && state[LEFT] != Eating &&
      state[RIGHT] != Eating)
  {
    state[n] = Eating;
    signal(philosopher[n]);
  }
}
 Mutual exclusion is also required in file processing, where
several users access the same file (databases).
 Example: in an airline booking system with one seat left, two
agents access the file at the same time before it is updated,
and the seat is double-booked.
 The solution is file locking: preventing access to the file
while it is being updated.