
Operating System

B.Tech
Process Synchronization
Delhi Technological University
Instructor: Divya Sethia
divyashikha@dce.edu

Module 6: Process Synchronization


• Background
• The Critical-Section Problem
• Peterson’s Solution
• Synchronization Hardware
• Semaphores
• Classic Problems of
Synchronization
• Monitors
• Synchronization Examples
• Atomic Transactions
Background
• Concurrent access to shared data may result
in data inconsistency
• Maintaining data consistency requires
mechanisms to ensure the orderly execution
of cooperating processes
• Suppose that we wanted to provide a
solution to the consumer-producer problem
that fills all the buffers. We can do so by
having an integer count that keeps track of
the number of full buffers. Initially, count is
set to 0. It is incremented by the producer
after it produces a new buffer and is
decremented by the consumer after it
consumes a buffer.

Producer
while (true)
{
/* produce an item and
put in nextProduced */
while (count == BUFFER_SIZE)
; // do nothing
buffer[in] = nextProduced;
in = (in + 1) % BUFFER_SIZE;
count++;
}
Consumer
while (true)
{
while (count == 0)
; // do nothing
nextConsumed = buffer[out];
out = (out + 1) % BUFFER_SIZE;
count--;
/* consume the item in
nextConsumed */
}
Race Condition
• count++ could be implemented in machine language as

register1 = count
register1 = register1 + 1
count = register1
• count-- could be implemented as

register2 = count
register2 = register2 - 1
count = register2
Concurrent execution of count++ and count-- is equivalent to a sequential
execution of the lower-level statements interleaved in some arbitrary order
Order within each high-level statement is preserved

• Consider this execution interleaving with “count = 5” initially:
S0: producer execute register1 = count {register1 = 5}
S1: producer execute register1 = register1 + 1 {register1 = 6}
S2: consumer execute register2 = count {register2 = 5}
S3: consumer execute register2 = register2 - 1 {register2 = 4}
S4: producer execute count = register1 {count = 6 }
S5: consumer execute count = register2 {count = 4}
• Incorrect state count = 4, indicating 4 buffers are full,
whereas 5 buffers are full
• If S4 and S5 are interchanged, count = 6
• We allowed both processes to manipulate count
concurrently
Race Condition: several processes access and manipulate
same data concurrently and outcome of execution
depends on particular order in which access takes
place.
Guard against race condition: only one process at a time is
allowed to manipulate the variable count (requires
synchronization)
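The S0–S5 schedule above can be replayed deterministically in plain C. This is only an illustration of the interleaving (the function name and register variables are illustrative, not from the slides):

```c
#include <assert.h>

/* Deterministic replay of the S0-S5 interleaving from the slide.
   register1/register2 model the per-process CPU registers. */
int replay_interleaving(void) {
    int count = 5;             /* "count = 5" initially        */
    int register1, register2;
    register1 = count;         /* S0: producer reads count (5) */
    register1 = register1 + 1; /* S1: producer increments  (6) */
    register2 = count;         /* S2: consumer reads count (5) */
    register2 = register2 - 1; /* S3: consumer decrements  (4) */
    count = register1;         /* S4: producer writes back (6) */
    count = register2;         /* S5: consumer writes back (4) */
    return count;              /* 4, although 5 buffers are full */
}
```

Running the six steps in this order always ends with count = 4, the incorrect state described above.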

Critical Section
Solution to Critical-Section Problem
1. Mutual Exclusion - If process Pi is executing in its critical
section, then no other processes can be executing in their
critical sections
2. Progress - If no process is executing in its critical section
and there exist some processes that wish to enter their
critical section, then the selection of the processes that
will enter the critical section next cannot be postponed
indefinitely
3. Bounded Waiting - A bound must exist on the number of
times that other processes are allowed to enter their
critical sections after a process has made a request to
enter its critical section and before that request is granted
 Assume that each process executes at a nonzero speed
 No assumption concerning relative speed of the N processes

Kernel race conditions


• Several kernel-mode processes may be active, making kernel code prone to race
conditions
Eg: the kernel data structure maintaining the list of open files must be modified when a
new file is opened or closed.
If two processes simultaneously open files and update this list, a race condition
results
Other structures prone to race conditions: structures for memory allocation, for
maintaining process lists, and for interrupt handling
Two approaches for handling critical sections in the OS:
i) Preemptive kernels (allow a process to be preempted while it is in kernel
mode)
ii) Nonpreemptive kernels (preemption not allowed while a process is in kernel
mode – it will run until it exits kernel mode, blocks, or voluntarily gives up the
CPU – hence it is free from race conditions on kernel data structures)
Linux – preemptive?
• A preemptive kernel is suitable for real-time
programming since it allows preemption of a
process running in the kernel.
• It is also more responsive, since there is less risk that a
kernel-mode process will run for a long period
before it gives up the CPU to waiting processes.
• Windows XP, Windows 2000 – nonpreemptive
• Linux – preemptive

Critical section problem solution:


Peterson’s Solution
• Two-process solution to the critical-section problem
• Assume that the LOAD and STORE instructions are
atomic; that is, cannot be interrupted.
• The two processes share two variables:
– int turn;
– Boolean flag[2]
• The variable turn indicates whose turn it is to enter
the critical section.
• The flag array is used to indicate if a process is
ready to enter the critical section. flag[i] = true
implies that process Pi is ready!
Algorithm for Process Pi
do {
flag[i] = TRUE;
turn = j;
while ( flag[j] && turn == j);

CRITICAL SECTION

flag[i] = FALSE;

REMAINDER SECTION

} while (TRUE);
To enter the critical section, Pi sets flag[i] = true and sets turn = j, asserting that if the other
process wishes to enter its CS it can do so.
If both processes try to enter at the same time, turn will be set to both i and j at roughly
the same time. But only one of the assignments will last – the other will occur but will
be overwritten immediately.
The eventual value of turn decides which of the two processes is allowed to enter its CS first.
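As a sketch, Peterson's algorithm can be written with C11 sequentially consistent atomics, which stand in for the slide's assumption that LOAD and STORE are atomic and not reordered (on modern hardware, plain variables would not be safe). POSIX threads are assumed; the function names and iteration counts are illustrative:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <pthread.h>

/* Peterson's two-process solution with C11 seq_cst atomics. */
static atomic_bool flag[2];
static atomic_int turn;
static long shared_count;               /* protected by the lock */

static void peterson_lock(int i) {
    int j = 1 - i;
    atomic_store(&flag[i], true);       /* flag[i] = TRUE */
    atomic_store(&turn, j);             /* turn = j       */
    while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
        ;                               /* busy wait      */
}

static void peterson_unlock(int i) {
    atomic_store(&flag[i], false);      /* flag[i] = FALSE */
}

static void *peterson_worker(void *arg) {
    int i = *(int *)arg;
    for (int k = 0; k < 100000; k++) {
        peterson_lock(i);
        shared_count++;                 /* critical section */
        peterson_unlock(i);
    }
    return NULL;
}

long peterson_demo(void) {
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    shared_count = 0;
    pthread_create(&t0, NULL, peterson_worker, &id0);
    pthread_create(&t1, NULL, peterson_worker, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    return shared_count;                /* 200000 iff mutual exclusion held */
}
```

If mutual exclusion were violated, some increments would be lost and the final count would fall short of 200000.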

Correctness of Peterson Solution


• 1. Mutual Exclusion: Pi enters its CS only if flag[j] == false or
turn == i
If both entered the CS simultaneously, then flag[i] and flag[j] would both be true.
But since turn can be only 0 or 1, P0 and P1 could not both have
passed their while statements at the same time.
Hence one of the processes, say Pj, must have been successful
in passing its while statement, while Pi had to wait:
Pi waits in the loop while flag[j] == true and turn == j.
Mutual exclusion is preserved.
Correctness of Peterson Solution
2. Progress requirement satisfied: if no process is in its CS,
then the selection, among the processes not in their remainder
sections, of which will enter the CS next cannot
be postponed indefinitely.
Pi is prevented from entering the CS only if flag[j] == true and turn
== j.
If Pj is not ready to enter its CS, then flag[j] == false and Pi can enter its CS
If Pj has set flag[j] = true, then it is also executing its while loop and
turn is either i or j.
If turn == i then Pi will enter the CS
If turn == j then Pj will enter the CS

Correctness of Peterson Solution


• 3. Bounded waiting:
Suppose Pj is executing in its CS and Pi is waiting in the while loop
to enter the CS.
When Pj exits the CS it resets flag[j] = false, allowing Pi to enter the CS.
If Pj then resets flag[j] = true, it must also set turn = i.
Pi, waiting in the while loop, does not change the value
of turn.
Hence one of the two conditions in Pi's while loop becomes false,
and Pi will enter the CS after at most one entry by Pj
(bounded waiting)
CS solution: Synchronization Hardware



• Many systems provide hardware support for critical
section code
• Uniprocessors – could disable interrupts
– Currently running code would execute without
preemption
– Generally too inefficient on multiprocessor systems
• Operating systems using this approach are not broadly scalable
• Modern machines provide special atomic hardware
instructions
• Atomic = non-interruptable
– Either test memory word and set value
– Or swap contents of two memory words
TestAndSet Instruction
• Definition:

boolean TestAndSet (boolean *target)


{
boolean rv = *target;
*target = TRUE;
return rv;
}
TestAndSet is executed atomically:
i.e. if two TestAndSet instructions execute simultaneously,
they will be executed sequentially in some arbitrary order.
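C11 exposes exactly this primitive as atomic_flag_test_and_set, which atomically sets the flag and returns its old value. A sketch of the lock built on it (POSIX threads assumed; names and counts are illustrative):

```c
#include <stdatomic.h>
#include <pthread.h>

/* Spinlock built on the hardware test-and-set primitive. */
static atomic_flag lock_flag = ATOMIC_FLAG_INIT;
static long tas_count;

static void *tas_worker(void *arg) {
    (void)arg;
    for (int k = 0; k < 100000; k++) {
        while (atomic_flag_test_and_set(&lock_flag))
            ;                           /* spin until old value was false */
        tas_count++;                    /* critical section */
        atomic_flag_clear(&lock_flag);  /* lock = FALSE     */
    }
    return NULL;
}

long tas_demo(void) {
    pthread_t t0, t1;
    tas_count = 0;
    pthread_create(&t0, NULL, tas_worker, NULL);
    pthread_create(&t1, NULL, tas_worker, NULL);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    return tas_count;                   /* 200000 iff mutual exclusion held */
}
```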

Solution using TestAndSet


• Shared boolean variable lock, initialized to false.
• Solution:
do {
while ( TestAndSet (&lock ))
; /* do nothing */

// critical section

lock = FALSE;

// remainder section

} while ( TRUE);
Test and Set evaluation (lock = FALSE initially)
P0                                  P1
1. TestAndSet(&lock)                2. TestAndSet(&lock) – by now lock = TRUE
   rv = *target = FALSE                rv = *target = TRUE
   *target/lock = TRUE                 *target/lock stays TRUE
   returns FALSE                       returns TRUE
3. Breaks out of while loop         5. while loop continues; breaks
   and enters CS                       only after lock = FALSE
4. lock = FALSE

Swap Instruction
• Definition:

void Swap (boolean *a, boolean *b)


{
boolean temp = *a;
*a = *b;
*b = temp;
}
Solution using Swap
• Shared Boolean variable lock
initialized to FALSE; Each process
has a local Boolean variable key.
• Solution:
do {
key = TRUE;
while ( key == TRUE)
Swap (&lock, &key );

// critical section

lock = FALSE;

// remainder section

} while ( TRUE);
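The Swap-based lock maps naturally onto C11's atomic_exchange: swapping key (TRUE) into lock returns the old lock value in one atomic step. A sketch (POSIX threads assumed; names and counts are illustrative):

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <pthread.h>

/* Lock built on atomic exchange, mirroring the Swap-based solution. */
static atomic_bool swap_lock;
static long swap_count;

static void *swap_worker(void *arg) {
    (void)arg;
    for (int k = 0; k < 100000; k++) {
        bool key = true;
        while (key == true)             /* Swap(&lock, &key) until key == FALSE */
            key = atomic_exchange(&swap_lock, true);
        swap_count++;                   /* critical section */
        atomic_store(&swap_lock, false);
    }
    return NULL;
}

long swap_demo(void) {
    pthread_t t0, t1;
    swap_count = 0;
    atomic_store(&swap_lock, false);
    pthread_create(&t0, NULL, swap_worker, NULL);
    pthread_create(&t1, NULL, swap_worker, NULL);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    return swap_count;                  /* 200000 iff mutual exclusion held */
}
```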

CS satisfaction
• The previous synchronization techniques provide mutual
exclusion but do not satisfy the bounded-waiting
requirement
• Use boolean waiting[n] and boolean lock, both initialized to false
• Pi can enter its CS only if either waiting[i] == false or key
== false
• key is a local variable
Bounded Wait Mutual Exclusion with Test and Set

Semaphore
• The TestAndSet and Swap hardware solutions are not easy for
programmers to use.

• A semaphore is a synchronization tool that does not require busy
waiting.

• A semaphore S is an integer variable that, apart from initialization, is
accessed only through two standard atomic operations: wait() and
signal() – originally called P (from Dutch proberen, "to test") and
V (from verhogen, "to increment")

• Less complicated
Semaphore
• Definition of wait():
wait (S) {
while (S <= 0)
; // no-op
S--;
}
• Definition of signal():
signal (S) {
S++;
}

Semaphore(Cont.)
• All modifications to integer value of semaphore in wait() and
signal() operations must be executed indivisibly.

• i.e. when one process modifies a semaphore value, no other process
can simultaneously modify that same semaphore value.

• In case of wait(S), testing of integer value of S(S<=0), and its


possible modification(S--), must also be executed without
interruption.
Semaphore Usage
• Counting semaphore – integer value can range over an
unrestricted domain
• Binary semaphore – integer value can range only between 0 and 1;
can be simpler to implement. Also known as mutex locks.
• Binary semaphores can be used to deal with the critical section
problem for multiple processes. The n processes share a
semaphore, mutex, initialized to 1. Each process is organized as:
do {
wait(mutex);
// critical section
signal(mutex);
//remainder section
} while (TRUE);
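POSIX counting semaphores (sem_wait/sem_post) can play the role of wait()/signal(). A sketch of the pattern above with two threads sharing a semaphore initialized to 1 (POSIX assumed; the function names and counts are illustrative):

```c
#include <semaphore.h>
#include <pthread.h>

/* wait(mutex) / signal(mutex) around a critical section,
   using a POSIX semaphore as the binary semaphore mutex. */
static sem_t mutex_sem;
static long sem_count;

static void *sem_worker(void *arg) {
    (void)arg;
    for (int k = 0; k < 100000; k++) {
        sem_wait(&mutex_sem);           /* wait(mutex)      */
        sem_count++;                    /* critical section */
        sem_post(&mutex_sem);           /* signal(mutex)    */
    }
    return NULL;
}

long sem_demo(void) {
    pthread_t t0, t1;
    sem_count = 0;
    sem_init(&mutex_sem, 0, 1);         /* binary semaphore, value 1 */
    pthread_create(&t0, NULL, sem_worker, NULL);
    pthread_create(&t1, NULL, sem_worker, NULL);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    sem_destroy(&mutex_sem);
    return sem_count;                   /* 200000 iff mutual exclusion held */
}
```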

Semaphore Usage(Cont.)

• Counting semaphores can be used to control access to a given resource
consisting of a finite number of instances.
• Semaphore is initialized to number of resources available.
• Each process that wishes to use resource performs wait()
operation on semaphore(thereby decrementing the count).
• When process releases resource, it performs signal() operation
(incrementing the count).
• Count for semaphore = 0, => all resources are being used.
- processes that wish to use resource will block until count
becomes greater than 0.
Semaphore Implementation
• Disadvantage of this semaphore definition: it requires busy waiting.

• While a process is in its critical section, any other process that tries
to enter its critical section must loop continuously in the entry code.
Continual looping is clearly a problem in a real multiprogramming
system, where a single CPU is shared among many processes.

• Busy waiting wastes CPU cycles that some other process might be
able to use productively. This type of semaphore is also called a
spinlock because process “spins” while waiting for the lock.

• Note that applications may spend lots of time in critical sections


and therefore this is not a good solution.

Semaphore Implementation with no Busy waiting


• When process executes wait() operation, and finds that
semaphore value is not positive, it must wait.

• But rather than engaging in busy waiting, process can block itself.

• block operation places process into waiting queue associated with


semaphore, and state of process is switched to waiting state.

• Control is transferred to scheduler, which selects another process


to execute.

• Process that is blocked, waiting on semaphore S, should be


restarted when some other process executes a signal() operation.

• Process is restarted by wakeup() operation, which changes


process from waiting state to ready state. The process is then
placed in the ready queue.
Semaphore Implementation with no Busy waiting
(Cont.)
• To implement semaphores under this definition, we define it as:

typedef struct{
int value;
struct process *list;
}semaphore;

• Each semaphore has an integer value and a list of waiting processes.


• When a process must wait on a semaphore, it is added to the list
of processes.
• A signal() operation removes one process from list of waiting
processes and awakens that process.

Semaphore Implementation with no Busy waiting (Cont.)


• Implementation of wait:
wait (S) {
S->value--;
if (S->value < 0) {
add this process to S->list;
block();
}
}

• Implementation of signal:
signal (S) {
S->value++;
if (S->value <= 0) {
remove a process P from S->list;
wakeup(P);
}
}
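As a hypothetical sketch, this blocking semaphore can be written with a pthread mutex and condition variable standing in for block()/wakeup(). Note that a production version must re-check its predicate in a loop after pthread_cond_wait returns (spurious wakeups); this minimal version mirrors the slide instead:

```c
#include <pthread.h>

/* Blocking semaphore: value may go negative, and |value|
   then counts the processes waiting on the semaphore. */
typedef struct {
    int value;
    pthread_mutex_t m;                  /* protects value            */
    pthread_cond_t  cv;                 /* plays the waiting queue   */
} ksem;

void ksem_init(ksem *s, int v) {
    s->value = v;
    pthread_mutex_init(&s->m, NULL);
    pthread_cond_init(&s->cv, NULL);
}

void ksem_wait(ksem *s) {
    pthread_mutex_lock(&s->m);
    s->value--;
    if (s->value < 0)
        pthread_cond_wait(&s->cv, &s->m);   /* block() */
    pthread_mutex_unlock(&s->m);
}

void ksem_signal(ksem *s) {
    pthread_mutex_lock(&s->m);
    s->value++;
    if (s->value <= 0)
        pthread_cond_signal(&s->cv);        /* wakeup(P) */
    pthread_mutex_unlock(&s->m);
}

/* Single-threaded exercise: value 2 -> wait twice -> 0 -> signal -> 1. */
int ksem_demo(void) {
    ksem s;
    ksem_init(&s, 2);
    ksem_wait(&s);
    ksem_wait(&s);
    ksem_signal(&s);
    return s.value;
}
```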
Semaphore
• If the semaphore value is negative, its magnitude is the number of
processes waiting on that semaphore, since in wait() the decrement
is done before blocking

• The list of waiting processes can be implemented by a link field in each
PCB. For bounded waiting, use a FIFO queue (though any queuing strategy may be used)

• It is important that wait and signal operations on the same semaphore
not execute simultaneously. This can be ensured by disabling
interrupts during the wait and signal operations.

• This works in a single-processor environment since, with interrupts
disabled, instructions from different processes cannot be interleaved.
• Only the currently running process executes until interrupts are
reenabled and the scheduler can regain control.

Semaphore
• In a multiprocessor environment, interrupts must
be disabled on every processor; otherwise
instructions from different processes may be
interleaved. This is difficult and also
diminishes performance.
• Alternative: on SMPs, use spinlocks to ensure
wait and signal are performed atomically.
Deadlock and Starvation
• Deadlock – two or more processes are waiting indefinitely for an
event that can be caused by only one of the waiting processes.
• Let P0 and P1 be the two processes, each accessing two
semaphores, S and Q, initialized to 1
P0 P1
wait (S); wait (Q);
wait (Q); wait (S);
. .
signal (S); signal (Q);
signal (Q); signal (S);
• Suppose that P0 executes wait(S) and then P1 execute wait(Q).
When P0 executes wait(Q), it must wait until P1 executes signal(Q).
Similarly, when P1 executes wait(S), it must wait until P0 executes
signal(S). Since these signal() operations cannot be executed, P0
and P1 are deadlocked.
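The deadlock arises because P0 and P1 acquire S and Q in opposite orders. One standard remedy (not from the slides) is to impose a single global acquisition order on both processes, which breaks the cycle of waits. A sketch with POSIX semaphores (names are illustrative):

```c
#include <semaphore.h>
#include <stdatomic.h>
#include <pthread.h>

/* Both threads acquire S then Q: with a single global order,
   no circular wait can form and neither thread deadlocks. */
static sem_t S, Q;
static atomic_int order_done;

static void *ordered_worker(void *arg) {
    (void)arg;
    sem_wait(&S);                       /* wait(S) first in BOTH threads */
    sem_wait(&Q);                       /* then wait(Q)                  */
    atomic_fetch_add(&order_done, 1);   /* work under both semaphores    */
    sem_post(&Q);
    sem_post(&S);
    return NULL;
}

int ordering_demo(void) {
    pthread_t t0, t1;
    sem_init(&S, 0, 1);
    sem_init(&Q, 0, 1);
    atomic_store(&order_done, 0);
    pthread_create(&t0, NULL, ordered_worker, NULL);
    pthread_create(&t1, NULL, ordered_worker, NULL);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    return atomic_load(&order_done);    /* 2: both completed, no deadlock */
}
```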

Deadlock and Starvation


• Starvation – indefinite blocking. A process may never be removed
from the semaphore queue in which it is suspended. Eg: if processes
are added to and removed from a semaphore's waiting list in LIFO order
Deadlock vs Starvation
• Starvation
• A thread may wait indefinitely because other threads keep coming
in and getting the requested resources before this thread does.
• The resource is being actively used, and the thread will stop waiting if
other threads stop coming in.
• Deadlocks
• A group of threads are waiting for resources held by others in the
group.
• None of them will ever make progress.

Classical Problems of
Synchronization
• Bounded-Buffer Problem

• Readers and Writers Problem

• Dining-Philosophers Problem
Bounded-Buffer Problem

• The pool consists of N buffers, each can hold one item


• Semaphore mutex provides mutual exclusion for accesses to the
buffer pool and is initialized to the value 1.
• Semaphore full counts the number of full buffers, initialized to the
value 0.
• Semaphore empty counts the number of empty buffers, initialized
to the value N.

Bounded Buffer Problem (Cont.)


• The structure of the producer process

do {
// produce an item

wait (empty);
wait (mutex);
// add the item to the buffer
signal (mutex);
signal (full);
} while (true);
Bounded Buffer Problem (Cont.)
• The structure of the consumer process

do {
wait (full);
wait (mutex);

// remove an item from buffer


signal (mutex);
signal (empty);

// consume the removed item


} while (true);
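The producer and consumer structures above can be assembled into a runnable sketch with POSIX semaphores (the buffer size, item count, and function names are illustrative):

```c
#include <semaphore.h>
#include <pthread.h>
#define BUFFER_SIZE 5
#define N_ITEMS 100

/* Bounded buffer: empty starts at N, full at 0, mutex at 1. */
static int buffer[BUFFER_SIZE];
static int in_idx, out_idx;
static sem_t empty_s, full_s, mutex_s;

static void *bb_producer(void *arg) {
    (void)arg;
    for (int i = 0; i < N_ITEMS; i++) {
        sem_wait(&empty_s);             /* wait(empty)  */
        sem_wait(&mutex_s);             /* wait(mutex)  */
        buffer[in_idx] = i;             /* add item     */
        in_idx = (in_idx + 1) % BUFFER_SIZE;
        sem_post(&mutex_s);             /* signal(mutex)*/
        sem_post(&full_s);              /* signal(full) */
    }
    return NULL;
}

static void *bb_consumer(void *arg) {
    long *sum = arg;
    for (int i = 0; i < N_ITEMS; i++) {
        sem_wait(&full_s);              /* wait(full)   */
        sem_wait(&mutex_s);             /* wait(mutex)  */
        *sum += buffer[out_idx];        /* remove item  */
        out_idx = (out_idx + 1) % BUFFER_SIZE;
        sem_post(&mutex_s);             /* signal(mutex)*/
        sem_post(&empty_s);             /* signal(empty)*/
    }
    return NULL;
}

long bounded_buffer_demo(void) {
    pthread_t p, c;
    long sum = 0;
    in_idx = out_idx = 0;
    sem_init(&empty_s, 0, BUFFER_SIZE);
    sem_init(&full_s, 0, 0);
    sem_init(&mutex_s, 0, 1);
    pthread_create(&p, NULL, bb_producer, NULL);
    pthread_create(&c, NULL, bb_consumer, &sum);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return sum;                         /* 0+1+...+99 = 4950 */
}
```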

Readers-Writers Problem
• A data set is shared among a number of concurrent processes
–Readers: only read the data set; they do not perform any updates
–Writers: can both read and write.
• Problem – allow multiple readers to read at the same time. Only
one single writer can access the shared data at the same time.
The readers-writers problem has several variations, all involving priorities.
-First readers-writers problem: no reader will be kept waiting unless
a writer has already obtained permission to use the shared object.
-Second readers-writers problem: once the writer is ready, that
writer performs its write as soon as possible.
• A solution to either problem may result in starvation. In the first
case, writers may starve; in the second case, readers may starve.
Readers-Writers Problem (Cont.)
• Shared Data
– Data set
– Semaphore mutex initialized to 1.
– Semaphore wrt initialized to 1.
– Integer readcount initialized to 0.
semaphore mutex, wrt; int readcount;
• mutex semaphore is used to ensure mutual exclusion when
variable readcount is updated. The readcount variable keeps track
of how many processes are currently reading the object.
• The semaphore wrt functions as a mutual exclusion semaphore for
writers. It is common to both reader and writer processes. Used by
first and last reader that enters or exits the critical section. It is not
used by readers who enter/exit while other readers are in their
critical sections.

Readers-Writers Problem (Cont.)


• The structure of a writer process

do {
wait (wrt) ;

// writing is performed

signal (wrt) ;
} while (true)
Readers-Writers Problem (Cont.)
• The structure of a reader process
do {
wait (mutex) ;
readcount ++ ;
if (readcount == 1) wait (wrt) ;
signal (mutex) ;

// reading is performed

wait (mutex) ;
readcount -- ;
if (readcount == 0) signal (wrt) ;
signal (mutex) ;
} while (true) ;
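The reader and writer structures transcribe directly to POSIX semaphores. A sketch with one writer and one reader (function names and iteration counts are illustrative):

```c
#include <semaphore.h>
#include <pthread.h>

/* First readers-writers solution: mutex guards readcount,
   wrt excludes writers; only the first/last reader touches wrt. */
static sem_t rw_mutex_s, wrt_s;
static int readcount;
static int shared_data;

static void *rw_writer(void *arg) {
    (void)arg;
    for (int k = 0; k < 1000; k++) {
        sem_wait(&wrt_s);
        shared_data++;                  /* writing is performed */
        sem_post(&wrt_s);
    }
    return NULL;
}

static void *rw_reader(void *arg) {
    long *last = arg;
    for (int k = 0; k < 1000; k++) {
        sem_wait(&rw_mutex_s);
        readcount++;
        if (readcount == 1) sem_wait(&wrt_s);   /* first reader locks out writers */
        sem_post(&rw_mutex_s);
        *last = shared_data;            /* reading is performed */
        sem_wait(&rw_mutex_s);
        readcount--;
        if (readcount == 0) sem_post(&wrt_s);   /* last reader lets writers in */
        sem_post(&rw_mutex_s);
    }
    return NULL;
}

int rw_demo(void) {
    pthread_t r, w;
    long last = 0;
    shared_data = 0;
    readcount = 0;
    sem_init(&rw_mutex_s, 0, 1);
    sem_init(&wrt_s, 0, 1);
    pthread_create(&w, NULL, rw_writer, NULL);
    pthread_create(&r, NULL, rw_reader, &last);
    pthread_join(w, NULL);
    pthread_join(r, NULL);
    return shared_data;                 /* all 1000 writes completed */
}
```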

Readers-Writers Problem (Cont.)


• The readers-writers problem and its solutions have been generalized to
provide reader-writer locks on some systems. Acquiring a
reader-writer lock requires specifying the mode of the lock: either read or
write access.
• When a process wishes to read shared data, it requests the
reader-writer lock in read mode; a process wishing to modify the
shared data must request the lock in write mode.
• Multiple processes are permitted to concurrently acquire a reader-
writer lock in read mode; only one process may acquire the lock
for writing.
• Reader-writer locks are most useful in the following situations:
- where it is easy to identify which processes only read shared data
and which processes only write shared data.
- where there are more readers than writers.
Dining-Philosophers Problem

• Five philosophers who spend their lives thinking and eating, share
a circular table surrounded by five chairs, each belonging to one
philosopher. In the center of the table is a bowl of rice, and the
table is laid with five single chopsticks.
• When a philosopher thinks, she does not interact with her
colleagues. From time to time, a philosopher gets hungry and tries
to pick up two chopsticks that are closest to her. A philosopher
may pick up only one chopstick at a time.

Dining-Philosophers Problem (Cont.)


• This problem is considered a classic synchronization problem
because it is an example of a large class of concurrency-control
problems.
• Solution: represent each chopstick with a semaphore. A
philosopher tries to grab a chopstick by executing a wait()
operation on that semaphore; she releases her chopsticks by
executing the signal() operation on the appropriate semaphore.
• Shared data
– Bowl of rice (data set)
– Semaphore chopstick [5] initialized to 1
Dining-Philosophers Problem (Cont.)
• The structure of Philosopher i:
do {
wait ( chopstick[i] );
wait ( chopStick[ (i + 1) % 5] );

// eat

signal ( chopstick[i] );
signal (chopstick[ (i + 1) % 5] );

// think

} while (true) ;

Dining-Philosophers Problem (Cont.)


• Although solution guarantees that no two neighbors are eating
simultaneously, it nevertheless must be rejected because it could
create deadlock. Suppose that all five philosophers become hungry
simultaneously & each grabs her left chopstick. All elements of
chopstick will now be equal to 0. When each philosopher tries to
grab her right chopstick, she will be delayed forever.
• Possible remedies to the deadlock problem:
- Allow at most four philosophers to be sitting simultaneously at the
table.
- Allow a philosopher to pick up her chopsticks only if both
chopsticks are available.
- Use an asymmetric solution; that is, an odd philosopher picks up
first her left chopstick and then her right chopstick, whereas an
even philosopher picks up first her right chopstick and then her
left chopstick.
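The asymmetric remedy can be sketched with POSIX semaphores: odd philosophers take the left chopstick first, even philosophers the right, so the circular wait can never form (names and iteration counts are illustrative):

```c
#include <semaphore.h>
#include <pthread.h>

/* Asymmetric dining philosophers: acquisition order depends on parity. */
static sem_t chopstick[5];
static int meals[5];

static void *philosopher(void *arg) {
    int i = *(int *)arg;
    int left = i, right = (i + 1) % 5;
    int first  = (i % 2) ? left  : right;   /* odd: left first, even: right */
    int second = (i % 2) ? right : left;
    for (int k = 0; k < 100; k++) {
        sem_wait(&chopstick[first]);
        sem_wait(&chopstick[second]);
        meals[i]++;                     /* eat */
        sem_post(&chopstick[second]);
        sem_post(&chopstick[first]);
    }
    return NULL;
}

int philosophers_demo(void) {
    pthread_t t[5];
    int ids[5], total = 0;
    for (int i = 0; i < 5; i++) {
        sem_init(&chopstick[i], 0, 1);
        meals[i] = 0;
    }
    for (int i = 0; i < 5; i++) {
        ids[i] = i;
        pthread_create(&t[i], NULL, philosopher, &ids[i]);
    }
    for (int i = 0; i < 5; i++)
        pthread_join(t[i], NULL);
    for (int i = 0; i < 5; i++)
        total += meals[i];
    return total;                       /* 500: everyone ate, no deadlock */
}
```

With the purely symmetric version (everyone left-first), the same program could hang forever if all five grabbed their left chopstick at once.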
Drawbacks of Semaphores
• Semaphores can be used to solve any of the traditional
synchronization problems
• However, they have some drawbacks
- They are essentially shared global variables
- Can potentially be accessed anywhere in program
- No connection between the semaphore and the data being
controlled by the semaphore
- Used both for critical sections (mutual exclusion) and
coordination (scheduling)
- No control or guarantee of proper usage
• Sometimes hard to use and prone to bugs
• Another approach: Use programming language support

Semaphore problems
• If used incorrectly, semaphores can lead to problems
• Interchanging the order of wait and signal can cause several
processes to run in the CS simultaneously.

• Replacing signal with wait can lead to deadlock


Monitors
• A high-level abstraction that provides a convenient and effective
mechanism for process synchronization.
• A type, or abstract data type, encapsulates private data with public
methods to operate on that data. A monitor type presents a set of
programmer-defined operations that are provided with mutual
exclusion within the monitor.
• The monitor type also contains declaration of variables whose
value define the state of an instance of that type, along with the
bodies of procedures or functions that operate on those variables.
• The representation of the monitor type cannot be used directly by
the various processes. Thus a procedure defined within a monitor
can access only those variables declared locally within the monitor
and its formal parameters. Similarly, the local variables of a
monitor can be accessed by only the local procedures.

Monitors Cont.
• Syntax of a monitor:
monitor monitor-name {
// shared variable declarations
procedure P1 (…) { …. }
…
procedure Pn (…) { …… }

initialization code (….) { … }
}
• Only one process may be active within the monitor at a
time. Thus, the programmer does not need to code this
synchronization constraint explicitly.
Schematic view of a Monitor

Condition Variables
• Synchronization mechanisms are provided by the condition
construct
condition x, y;
• Two operations can be invoked on a condition variable:
– x.wait () – a process that invokes the operation is
suspended.
– x.signal () – resumes one of processes (if any) that
invoked x.wait ()

• If no process is suspended, then the signal() operation has no


effect; that is, the state of x is the same as if the operation had
never been executed.
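POSIX condition variables are a practical analogue of the monitor's x.wait()/x.signal() (with signal-and-continue semantics, discussed next). Unlike monitor condition variables they admit spurious wakeups, so the wait is wrapped in a predicate loop and the mutex plays the monitor lock. A sketch (names are illustrative):

```c
#include <pthread.h>

/* One thread waits on condition x until ready is set; another signals it. */
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  x = PTHREAD_COND_INITIALIZER;
static int ready;

static void *waiter(void *arg) {
    long *seen = arg;
    pthread_mutex_lock(&m);             /* "enter the monitor"            */
    while (!ready)                      /* x.wait(), guarded by predicate */
        pthread_cond_wait(&x, &m);
    *seen = 1;
    pthread_mutex_unlock(&m);
    return NULL;
}

long cond_demo(void) {
    pthread_t t;
    long seen = 0;
    ready = 0;
    pthread_create(&t, NULL, waiter, &seen);
    pthread_mutex_lock(&m);
    ready = 1;
    pthread_cond_signal(&x);            /* x.signal(): resume one waiter */
    pthread_mutex_unlock(&m);
    pthread_join(t, NULL);
    return seen;
}
```

Because the state change (ready = 1) is made under the mutex, the waiter either never blocks or is woken by the signal; no wakeup is lost.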
Monitor with Condition Variables

• Suppose that, when the x.signal() operation is invoked by a process P, there is
a suspended process Q associated with condition x. If the suspended
process Q is allowed to resume its execution, the signaling process P must
wait. Otherwise both P and Q would be active simultaneously within the
monitor. Conceptually, however, both processes can continue with their
execution
• Two possibilities exist:
– Signal and wait: P either waits until Q leaves the monitor or waits for
another condition.
– Signal and continue: Q either waits until P leaves the monitor or waits
for another condition.
• There are reasonable arguments in favor of adopting either option. On
one hand, since P was already executing in the monitor, the signal-and-
continue method seems more reasonable. On the other hand, if we allow
process P to continue, then by the time Q is resumed, the logical condition
for which Q was waiting may no longer hold.
Dining Philosophers Solution Using
Monitors
• The solution imposes a restriction that a philosopher may pick up her
chopsticks only if both of them are available. To code this solution, we
need to distinguish among the three states in which we may find a
philosopher.
• Data structure:
enum{thinking, hungry, eating} state[5];
• Philosopher i can set the variable state[i]= eating only if her two
neighbors are not eating: (state[(i+4)%5]!= eating) and (state[(i+1)%5]!=
eating).
• We need to declare that
condition self[5];
where philosopher i can delay herself when she is hungry but is unable to
obtain the chopsticks she needs.

Solution to Dining Philosophers (cont.)


monitor DP
{
enum { THINKING, HUNGRY, EATING } state [5] ;
condition self [5];

void pickup (int i) {


state[i] = HUNGRY;
test(i);
if (state[i] != EATING) self[i].wait();
}

void putdown (int i) {


state[i] = THINKING;
// test left and right neighbors
test((i + 4) % 5);
test((i + 1) % 5);
}
Solution to Dining Philosophers (cont)
void test (int i) {
if ( (state[(i + 4) % 5] != EATING) &&
(state[i] == HUNGRY) &&
(state[(i + 1) % 5] != EATING) ) {
state[i] = EATING ;
self[i].signal () ;
}
}

initialization_code() {
for (int i = 0; i < 5; i++)
state[i] = THINKING;
}
}

End of Chapter 6
