Module 6: Process Synchronization

• Background
• The Critical-Section Problem
• Synchronization Hardware
• Semaphores
• Classical Problems of Synchronization
• Critical Regions
• Monitors
• Synchronization in Solaris 2
• Atomic Transactions

Background

• Concurrent access to shared data may result in data inconsistency.


• Maintaining data consistency requires mechanisms to ensure the orderly
execution of cooperating processes.

• The shared-memory solution to the bounded-buffer problem (Chapter 4) allows at most N – 1 items in the buffer at the same time. A solution in which all N buffers are used is not simple.
– Suppose that we modify the producer-consumer code by adding a variable counter, initialized to 0, incremented each time a new item is added to the buffer, and decremented each time an item is removed.

Code for Producer Process

while (1)
{
    /* produce an item in nextProduced */
    while (counter == BUFFER_SIZE)
        ;   /* do nothing */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}

Code for Consumer Process

while (1)
{
    while (counter == 0)
        ;   /* do nothing */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
    /* consume the item in nextConsumed */
}

Bounded-Buffer (Cont.)

• The statements:
– counter++;
– counter--;
must be executed atomically.

• Although both the producer and consumer routines are correct separately, they may not function correctly when executed concurrently.

• For example, suppose that the value of the variable counter is currently
5 and that the producer and consumer processes execute the statements
“counter++” and “counter- -” concurrently.

• Then what will happen?

A possible interleaving (counter++ and counter-- each consist of a load, an update, and a store):

T0: producer executes register1 = counter       [register1 = 5]
T1: producer executes register1 = register1 + 1 [register1 = 6]
T2: consumer executes register2 = counter       [register2 = 5]
T3: consumer executes register2 = register2 - 1 [register2 = 4]
T4: producer executes counter = register1       [counter = 6]
T5: consumer executes counter = register2       [counter = 4]

The final value of counter is 4, although the correct result is 5; a different interleaving could just as easily leave counter at 6.

Race Condition

• A situation where several processes access and manipulate the same data concurrently, and the outcome of the execution depends on the particular order in which the accesses take place, is called a race condition.

The Critical-Section Problem

• n processes all competing to use some shared data


• Each process has a segment of code, called the critical section, in which the shared data is accessed.

• Problem – to develop a protocol to ensure that when one process is executing in its critical section, no other process is allowed to execute in its critical section.

• Structure of process Pi
repeat
entry section
critical section
exit section
remainder section
until false;
Solution to Critical-Section Problem

1. Mutual Exclusion. If process Pi is executing in its critical section, then no other process can be executing in its critical section.
2. Progress. If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in the decision of which will enter its critical section next, and this selection cannot be postponed indefinitely.
3. Bounded Waiting. A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
• Assume that each process executes at a nonzero speed.
• No assumption is made concerning the relative speed of the n processes.

Initial Attempts to Solve Problem

• Only 2 processes, P0 and P1

• General structure of process Pi (other process Pj)
repeat
entry section
critical section
exit section
remainder section
until false;

• Processes may share some common variables to synchronize their actions.

Peterson’s solution

• Restricted to two processes that alternate execution between their critical sections and remainder sections.

• The processes are numbered Pi and Pj, respectively.


• Peterson’s solution requires two data items to be shared between the
two processes:
– int turn
– boolean flag[2]

• The variable turn indicates whose turn it is to enter its critical section.
• The flag array is used to indicate if a process is ready to enter its critical
section.
– flag[i] = true indicates that Pi is ready to enter its critical section.

Continue…

• Process Pi
repeat
flag [i] := true;
turn := j;
while (flag [j] and turn = j) do no-op;
critical section
flag [i] := false;
remainder section
until false;

• Meets all three requirements; solves the critical-section problem for two
processes.
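For illustration only, the same protocol can be written in C-like form. This is a sketch rather than code from the text: flag, turn, and the critical_section()/remainder_section() routines are assumed, and a real implementation would also need atomic/volatile accesses and memory barriers.

int turn;                     /* whose turn it is to enter the critical section */
int flag[2] = {0, 0};         /* flag[i] = 1 means Pi is ready to enter */

void process(int i)           /* i is 0 or 1; j denotes the other process */
{
    int j = 1 - i;
    while (1) {
        flag[i] = 1;          /* announce interest */
        turn = j;             /* give the other process priority */
        while (flag[j] && turn == j)
            ;                 /* busy wait */
        critical_section();
        flag[i] = 0;          /* no longer interested */
        remainder_section();
    }
}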

• To prove property 1, we note that each Pi enters its critical section only
if either flag[j]==false or turn==i.
– If both processes were executing in their critical sections at the same
time, then flag[i]==flag[j]==true; each could have passed its while test
only if turn favored it, but turn can hold a single value (i or j), not
both, which is a contradiction.

• To prove properties 2 and 3, we note that a process Pi can be prevented
from entering the critical section only if it is stuck in the while loop
with the condition flag[j]==true and turn==j.
– If Pj is not ready to enter the critical section, then flag[j]==false, and Pi
can enter its critical section.
– If Pj has set flag[j] to true and is also executing in its while statement, then
either turn==i or turn==j. If turn==i, then Pi will enter its critical section;
if turn==j, then Pj will enter its critical section.
– However, once Pj exits its critical section, it will reset flag[j] to false,
allowing Pi to enter its critical section.

Bakery Algorithm

Critical section for n processes

• Before entering its critical section, a process receives a number. The holder of the smallest number enters the critical section.

• If processes Pi and Pj receive the same number, then if i < j, Pi is served first; else Pj is served first.

• The numbering scheme always generates numbers in increasing order of enumeration; i.e., 1,2,3,3,3,3,4,5...

Bakery Algorithm (Cont.)

• Notation: < denotes lexicographical order on (ticket #, process id #) pairs
– (a,b) < (c,d) if a < c, or if a = c and b < d
– max(a0, …, an–1) is a number k such that k ≥ ai for i = 0, …, n – 1

• Shared data
var choosing: array [0..n – 1] of boolean;
      number: array [0..n – 1] of integer;
The data structures are initialized to false and 0, respectively.

Bakery Algorithm (Cont.)

repeat
    choosing[i] := true;
    number[i] := max(number[0], number[1], …, number[n – 1]) + 1;
    choosing[i] := false;
    for j := 0 to n – 1 do begin
        while choosing[j] do no-op;
        while number[j] ≠ 0 and (number[j], j) < (number[i], i) do no-op;
    end;
    critical section
    number[i] := 0;
    remainder section
until false;
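For concreteness, the same algorithm can be sketched in C (assuming N processes and ignoring compiler and CPU reordering, which a real implementation would have to handle with barriers). Here bakery_lock(i) plays the role of the entry section for process i and bakery_unlock(i) the exit section; both names are introduced only for this sketch.

#define N 5                          /* number of processes (assumed) */

volatile int choosing[N];            /* all initially 0 (false) */
volatile int number[N];              /* all initially 0 */

void bakery_lock(int i)
{
    choosing[i] = 1;
    int max = 0;
    for (int j = 0; j < N; j++)      /* take a ticket larger than any seen */
        if (number[j] > max)
            max = number[j];
    number[i] = max + 1;
    choosing[i] = 0;

    for (int j = 0; j < N; j++) {
        while (choosing[j])
            ;                        /* wait until Pj has finished choosing */
        while (number[j] != 0 &&
               (number[j] < number[i] ||
                (number[j] == number[i] && j < i)))
            ;                        /* wait while Pj holds a smaller (ticket, id) pair */
    }
}

void bakery_unlock(int i)
{
    number[i] = 0;                   /* give up the ticket */
}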

Synchronization Hardware

• Many computers, especially those designed with multiple processors in mind, have an instruction TEST AND SET LOCK (TSL) that works as follows:
– It reads the contents of the memory word into a register.
– It then stores a nonzero value at that memory address.
– The operations of reading the word and storing into it are guaranteed to be indivisible; no other processor can access the memory word until the instruction is finished.
– The CPU executing the TSL instruction locks the memory bus to
prohibit other CPUs from accessing memory until it is done.

Continue…

• To use the TSL instruction, we will use a shared variable, lock, to coordinate access to shared memory.

• When lock is 0, any process may set it to 1 using the TSL instruction and then read or write the shared memory.

• When it is done, the process sets lock back to 0 using an ordinary MOVE instruction.
How can this instruction be used to prevent two processes from
simultaneously entering their critical regions?

Solution

• Entering the critical region takes a four-instruction subroutine, shown on the following slide.
– The first instruction copies the old value of lock to the register and then sets lock to 1.
– Then the old value is compared with 0.
– If it is nonzero, the lock was already set, so the program just goes back to the beginning and tests it again.
– Sooner or later it will become 0 (when the process currently in its critical region is done with its critical region), and the subroutine returns with the lock set.

• Clearing the lock is simple.
– The program just stores a 0 in lock.

The TSL Instruction

• Figure 2-12. Entering and leaving a critical region using the TSL instruction (figure not reproduced here; a C-style sketch follows).
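The figure itself is a short assembly routine. As a rough stand-in, here is a C sketch of the same idea, assuming the GCC/Clang __atomic_test_and_set builtin as a software substitute for the hardware TSL instruction; enter_region and leave_region are names introduced for this sketch.

char lock = 0;                              /* 0 = free, nonzero = taken */

void enter_region(void)
{
    /* atomically read the old value of lock and store a nonzero value,
       mimicking what TSL does in one indivisible step */
    while (__atomic_test_and_set(&lock, __ATOMIC_ACQUIRE))
        ;                                   /* lock was already set: try again */
}

void leave_region(void)
{
    lock = 0;                               /* an ordinary store, as described in the text;
                                               a production version would add a release barrier */
}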



Mutual Exclusion with Test-and-Set

• Shared data: var lock: boolean (initially false)


• Process Pi
repeat
while Test-and-Set (lock) do no-op;
critical section
lock := false;
remainder section
until false;

Sleep and Wakeup

• Both Peterson’s solution and the solution using TSL are correct but
both have the defect of requiring busy waiting.

• What these solutions do is this: when a process wants to enter its critical
region, it checks to see if the entry is allowed. If it is not, the process
just sits in a tight loop waiting until it is.

• This approach wastes CPU time.


• Solution: SLEEP and WAKEUP pair.
• SLEEP is a system call that causes the caller to block, that is, be suspended, until another process wakes it up.

• The WAKEUP call has one parameter, the process to be awakened.

• As an alternative, SLEEP and WAKEUP can each take one parameter: a memory address used to match up SLEEPs with the corresponding WAKEUPs.
The Producer-Consumer Problem

• Let us consider the producer-consumer problem again.


• Here, two processes share a common, fixed-sized buffer.
• Trouble arises when the producer wants to put a new item in the buffer
but it is already full.

• The solution is for the producer to go to sleep, to be awakened when the consumer has removed one or more items.

• Similarly, if the consumer wants to remove an item from the buffer and sees that the buffer is empty, it goes to sleep until the producer puts something in the buffer and wakes it up.

• This approach is simple, but it can also lead to a race condition.

Continue…

• To keep track of the number of items in the buffer, we will need a variable, count.

• If the maximum number of items in the buffer is N, the producer's code will first check to see if count is N. If it is, the producer will go to sleep; if it is not, the producer will add an item and increment count.

• The consumer's code is similar: first test count to see if it is 0. If it is, go to sleep; if it is nonzero, remove an item and decrement count.

The Producer-Consumer Problem

• Figure 2-13. The producer-consumer problem with a fatal race condition (the original figure spans two slides and is not reproduced here; a sketch follows).
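A sketch of this flawed design in C-style pseudocode, along the lines of the missing figure. The produce_item, insert_item, remove_item, and consume_item helpers and the SLEEP/WAKEUP primitives described above are assumed, and wakeup is shown taking the process to be awakened.

#define N 100                         /* number of slots in the buffer (assumed) */
int count = 0;                        /* number of items currently in the buffer */

void producer(void)
{
    int item;
    while (1) {
        item = produce_item();        /* generate the next item */
        if (count == N)
            sleep();                  /* buffer full: block */
        insert_item(item);
        count = count + 1;
        if (count == 1)
            wakeup(consumer);         /* buffer was empty: consumer may be asleep */
    }
}

void consumer(void)
{
    int item;
    while (1) {
        if (count == 0)
            sleep();                  /* buffer empty: block */
        item = remove_item();
        count = count - 1;
        if (count == N - 1)
            wakeup(producer);         /* buffer was full: producer may be asleep */
        consume_item(item);
    }
}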




Continue…

• Now the race condition can occur because access to count is unconstrained. The following situation may occur:
– The buffer is empty and the consumer has just read count to see if it is 0. At that moment, the scheduler decides to stop running the consumer temporarily and start running the producer.
– The producer enters an item in the buffer, increments count, and notices that it is now 1. Reasoning that count was just 0 and thus the consumer must be sleeping, the producer calls wakeup to wake the consumer up.
– Unfortunately, the consumer is not yet logically asleep, so the
wakeup signal is lost. When the consumer next runs, it will test
the value of count it previously read, find it to be 0 and go to
sleep.
– Sooner or later the producer will fill up the buffer and also go to
sleep. Both will sleep forever.

Continue…

• The problem here is that a wakeup sent to a process that is not (yet)
sleeping is lost.

• If it were not lost, everything would work.


• A quick fix is to add a wakeup waiting bit to the picture. When a
wakeup is sent to a process that is still awake, this bit is set.

• Later when the process tries to go to sleep, if the wakeup waiting bit is
on, it will be turned off, but the process will stay awake.

Semaphores
• In 1965, E. W. Dijkstra suggested using an integer variable to count the number of wakeups saved for future use.

• In his proposal, a new variable type, called a semaphore, was introduced.

• A semaphore could have the value 0, indicating that no wakeups were saved, or some positive value if one or more wakeups were pending.

• Dijkstra proposed having two operations, DOWN and UP (generalizations of SLEEP and WAKEUP, respectively).

• The DOWN operation on a semaphore checks to see if the value is greater than 0. If so, it decrements the value (i.e., uses up one stored wakeup) and just continues.

• If the value is 0, the process is put to sleep without completing the DOWN for
the moment.

• Checking the value, changing it and possibly going to sleep is all done as a
single, indivisible, atomic action.

• It is guaranteed that once a semaphore operation has started, no other process can
access the semaphore until the operation has completed or blocked.
Continue…

• The UP operation increments the value of the semaphore addressed.


• If one or more processes were sleeping on that semaphore, unable to
complete an earlier DOWN operation, one of them is chosen by the
system and is allowed to complete its DOWN.

• Thus, after an UP on a semaphore with processes sleeping on it, the semaphore will still be 0, but there will be one fewer process sleeping on it.

• The operation of incrementing the semaphore and waking up one process is also indivisible.
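The text above describes DOWN and UP informally. The following is a minimal sketch of one possible implementation, assuming the whole body of each operation runs atomically (e.g., with interrupts disabled) and assuming hypothetical queue_t, add_to_queue, remove_from_queue, queue_empty, current_process, sleep, and wakeup primitives.

typedef struct {
    int value;                /* number of stored wakeups */
    queue_t waiting;          /* processes blocked in DOWN on this semaphore */
} semaphore;

void down(semaphore *s)
{
    /* assumed to execute atomically, e.g., with interrupts disabled */
    if (s->value > 0) {
        s->value--;                               /* use up one stored wakeup */
    } else {
        add_to_queue(&s->waiting, current_process());
        sleep();                                  /* block until a later up() wakes us */
    }
}

void up(semaphore *s)
{
    /* assumed to execute atomically */
    if (!queue_empty(&s->waiting))
        wakeup(remove_from_queue(&s->waiting));   /* let one sleeper complete its down() */
    else
        s->value++;                               /* nobody waiting: store the wakeup */
}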

Solving the Producer-Consumer Problem
using Semaphore
• Semaphores solve the lost-wakeup problem.

• It is essential that they be implemented in an indivisible way: implement UP and DOWN as system calls, with the operating system briefly disabling all interrupts while it is testing the semaphore, updating it, and putting the process to sleep, if necessary.

• This solution uses three semaphores:
– One called full, for counting the number of slots that are full. Initially its value is 0.
– One called empty, for counting the number of slots that are empty. Initially it is equal to the number of slots in the buffer.
– One called mutex, to make sure that the producer and consumer do not access the buffer at the same time. The value of mutex is initially 1.
– The mutex semaphore is used for mutual exclusion. It is designed to guarantee that only one process at a time will be reading or writing the buffer.
– The full and empty semaphores ensure that the producer stops running when the buffer is full and the consumer stops running when the buffer is empty.
The Producer-Consumer Problem using Semaphores

• Figure 2-14. The producer-consumer problem using semaphores (the original figure spans two slides and is not reproduced here; a sketch follows).
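A C-style sketch of this solution, in the spirit of the missing figure. Here down and up are the semaphore operations described earlier, and the produce_item/insert_item/remove_item/consume_item helpers are assumed as in the previous sketch.

#define N 100                     /* number of slots in the buffer (assumed) */
typedef int semaphore;            /* semaphores are a special kind of int */

semaphore mutex = 1;              /* controls access to the critical region */
semaphore empty = N;              /* counts empty buffer slots */
semaphore full = 0;               /* counts full buffer slots */

void producer(void)
{
    int item;
    while (1) {
        item = produce_item();
        down(&empty);             /* one fewer empty slot; block if buffer full */
        down(&mutex);             /* enter critical region */
        insert_item(item);
        up(&mutex);               /* leave critical region */
        up(&full);                /* one more full slot; wake consumer if asleep */
    }
}

void consumer(void)
{
    int item;
    while (1) {
        down(&full);              /* one fewer full slot; block if buffer empty */
        down(&mutex);             /* enter critical region */
        item = remove_item();
        up(&mutex);               /* leave critical region */
        up(&empty);               /* one more empty slot; wake producer if asleep */
        consume_item(item);
    }
}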


Classical IPC Problem (Dining Philosopher
Problem)

• In 1965, Dijkstra proposed and solved a synchronization problem he called the dining philosophers problem.

• The problem can be stated quite simply as follows:
– Five philosophers are seated around a circular table.
– Each philosopher has a plate of spaghetti.
– Between each pair of plates is one fork.
– The life of a philosopher consists of alternate periods of eating and thinking.
– When a philosopher gets hungry, he tries to acquire his left and right forks, one at a time, in either order. If successful in acquiring two forks, he eats for a while, then puts down the forks and continues to think.

• Can you write a program for each philosopher that does what it is
supposed to do and never gets stuck?


The Dining Philosophers Problem

• Figure 2-19. A nonsolution to the dining philosophers problem (from Tanenbaum & Woodhull, Operating Systems: Design and Implementation; figure not reproduced here, a sketch follows).
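A C-style sketch of the nonsolution, assuming take_fork blocks until the requested fork is free and that think, eat, and put_fork helpers exist.

#define N 5                       /* number of philosophers */

void philosopher(int i)           /* i: philosopher number, 0 to N-1 */
{
    while (1) {
        think();
        take_fork(i);             /* take left fork */
        take_fork((i + 1) % N);   /* take right fork */
        eat();
        put_fork(i);              /* put left fork back on the table */
        put_fork((i + 1) % N);    /* put right fork back on the table */
    }
}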
Continue..

• The solution shown in the previous slide is wrong.


• For example, if all five philosophers take their left forks simultaneously,
none will be able to take their right forks, and there will be a deadlock.

• We could modify the program so that after taking the left fork, the
program checks to see if the right fork is available. If it is not, the
philosopher puts down the left one, waits for some time and then
repeats the whole process.

• This proposal, too, fails. With a little bit of bad luck, all the philosophers could start the algorithm simultaneously, picking up their left forks, seeing that their right forks were not available, putting down their left forks, waiting, picking up their left forks again simultaneously, and so on, forever, resulting in starvation.

Solution of Dining Philosopher Problem

• The solution presented in the following slides is correct and also allows
the maximum parallelism for an arbitrary number of philosophers.

• It uses an array, state, to keep track of whether a philosopher is eating, thinking, or hungry. A philosopher may move into the eating state only if neither neighbor is eating.

• Philosopher i's neighbors are defined by the macros LEFT and RIGHT.
• If i is 2, LEFT is 1 and RIGHT is 3.

The Dining Philosophers Problem

• A solution to the dining philosophers problem (the original figure spans three slides and is not reproduced here; a sketch follows).
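A C-style sketch along the lines of the classic solution, using the down/up semaphore operations from before and assumed think/eat helpers. Each philosopher i has a semaphore s[i] on which it blocks when it is hungry but cannot yet eat; the LEFT/RIGHT macros expand in terms of the enclosing function's parameter i, as on the original slides.

#define N        5                   /* number of philosophers */
#define LEFT     ((i + N - 1) % N)   /* i's left neighbor */
#define RIGHT    ((i + 1) % N)       /* i's right neighbor */
#define THINKING 0
#define HUNGRY   1
#define EATING   2

typedef int semaphore;
int state[N];                        /* state of each philosopher */
semaphore mutex = 1;                 /* mutual exclusion for updating state[] */
semaphore s[N];                      /* one per philosopher, all initially 0 */

void test(int i)                     /* let philosopher i eat if possible */
{
    if (state[i] == HUNGRY && state[LEFT] != EATING && state[RIGHT] != EATING) {
        state[i] = EATING;
        up(&s[i]);                   /* wake i, or pre-approve its pending down */
    }
}

void take_forks(int i)
{
    down(&mutex);                    /* enter critical region */
    state[i] = HUNGRY;
    test(i);                         /* try to acquire both forks */
    up(&mutex);                      /* leave critical region */
    down(&s[i]);                     /* block if the forks were not acquired */
}

void put_forks(int i)
{
    down(&mutex);                    /* enter critical region */
    state[i] = THINKING;
    test(LEFT);                      /* see if left neighbor can now eat */
    test(RIGHT);                     /* see if right neighbor can now eat */
    up(&mutex);                      /* leave critical region */
}

void philosopher(int i)
{
    while (1) {
        think();
        take_forks(i);
        eat();
        put_forks(i);
    }
}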
