
Inter Process Communication and

Synchronization

 Background
 Concurrent Processes
 Interprocess Communication
 Producer-Consumer Problem
 The Critical-Section Problem
 Semaphores
 Classical Problems of Synchronization

Operating System Concepts – 8th Edition 6.1 Silberschatz, Galvin and Gagne ©2009
Background

 In a multiprogramming or multiprocessor system, various processes share data.
 In a single-CPU multiprogramming system, many processes execute in an interleaved fashion, whereas in a multiprocessor system they execute in an overlapped fashion.
 Both can be viewed as examples of concurrent processing, and both give rise to similar problems.
 Concurrent access to shared data may result in data inconsistency.
 Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes.

Concurrent Processing
 Concurrent processes are processes that exist at the same time.
 Concurrent processes may be
  independent of each other, or
  cooperating with each other
 An independent process cannot affect or be affected by the execution of another process.
 A cooperating process can affect or be affected by the execution of another process.

Interleaving of Concurrent Processes on a
Single CPU

Interleaving and Overlapping of Concurrent Processes
with Multiprocessing

Need for Process Cooperation
 Information sharing: concurrent cooperating processes may need to share information (e.g. a shared file)
 Computation speedup: a task can be broken into a number of subtasks that run in parallel
 Modularity: the software system can be divided into modules/functions, each of which may run as a separate process or thread
 Convenience: a user has many tasks to do, such as compiling, printing, and editing. It is convenient if these tasks can be managed by cooperating processes.

Interprocess Communication(IPC)
• Interprocess communication (IPC) is a mechanism for processes to synchronize and communicate with each other.

• Two approaches for IPC
• Shared memory
• Message passing

• Why do we need synchronization for shared-memory methods?
[Ans: To avoid race conditions]

• Methods for shared-memory synchronization
[Ans: Mutex, Semaphore, Monitor]

• What message-passing methods do we have?
[Ans: In Linux: Pipe, Message queue, Socket, …]

Courtesy: (https://kelvin.ink/2018/10/27/IPC_and_Synchronization/)
Approaches for IPC
a) Shared memory
b) Message passing

Interprocess Communication – Message Passing

 Mechanism for processes to communicate and to synchronize their actions
 Message system – processes communicate with each other without resorting to shared variables
 IPC facility provides two operations:
send(message) – message size fixed or variable
receive(message)
 If P and Q wish to communicate, they need to:
establish a communication link between them
exchange messages via send/receive
 Implementation of communication link
physical (e.g., shared memory, hardware bus)
logical (e.g., logical properties)
Direct Communication

Processes must name each other explicitly:
send(P, message) – send a message to process P
receive(Q, message) – receive a message from process Q

Properties of communication link:
Links are established automatically
A link is associated with exactly one pair of communicating processes
Between each pair there exists exactly one link
The link may be unidirectional, but is usually bi-directional
Indirect Communication
Messages are directed and received from mailboxes (also referred to
as ports)
Each mailbox has a unique id
Processes can communicate only if they share a mailbox
Operations
create a new mailbox
send and receive messages through mailbox
destroy a mailbox
Primitives are defined as:
send(A, message) – send a message to mailbox A
receive(A, message) – receive a message from mailbox A
Properties of communication link
Link established only if processes share a common mailbox
A link may be associated with many processes
Each pair of processes may share several communication links
Link may be unidirectional or bi-directional
Indirect Communication

Mailbox sharing
P1, P2, and P3 share mailbox A
P1, sends; P2 and P3 receive
Who gets the message?

Solutions
Allow a link to be associated with at most two processes
Allow only one process at a time to execute a receive operation
Allow the system to select arbitrarily the receiver. Sender is notified who
the receiver was.
Synchronization

 Message passing may be either blocking or non-blocking
 Blocking is considered synchronous
  Blocking send: the sender blocks until the message is received
  Blocking receive: the receiver blocks until a message is available
 Non-blocking is considered asynchronous
  Non-blocking send: the sender sends the message and continues
  Non-blocking receive: the receiver receives a valid message or null
Interprocess Synchronization(IPS)
 IPS allows processes to share system resources (such as shared memory) in such a way that concurrent access to them is handled systematically, avoiding inconsistency in the state of these shared resources.
 IPS is useful for
  Avoiding race conditions
  Preserving precedence relationships among cooperating processes

Producer-Consumer Problem

 Paradigm for cooperating processes: a producer process produces information that is consumed by a consumer process.
 unbounded buffer: places no practical limit on the size of the buffer.
 bounded buffer: assumes that there is a fixed buffer size.

Bounded-Buffer
Buffer empty -> counter == 0 (in == out)
Buffer full  -> counter == BUFFER_SIZE

// Initially the pointers to buffer slots, in and out, are set to zero
// Buffer is implemented as a circular array

in = (in + 1) % BUFFER_SIZE;
out = (out + 1) % BUFFER_SIZE;

Bounded-Buffer
The shared buffer is implemented as a circular array with two pointers : in and
out.
#define BUFFER_SIZE 10
typedef struct {
..
} item;

item buffer[BUFFER_SIZE];
int in = 0;      // points to the next free position in the buffer
int out = 0;     // points to the first full position in the buffer
int counter = 0;

 A shared variable counter is initialized to 0, incremented each time a new item is added to the buffer (produced), and decremented each time an item is deleted from the buffer (consumed).

Producer
 Producer process :
item nextProduced;
while (1) {
    while (counter == BUFFER_SIZE)
        ; /* buffer full, do nothing */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}

Consumer
 Consumer process :

item nextConsumed;
while (1) {
    while (counter == 0)
        ; /* buffer empty, do nothing */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
}

Where is the problem ?
 The statements

counter++; /* executed by producer */
counter--; /* executed by consumer */

must be performed atomically.

 An atomic operation is an operation that completes in its entirety without interruption.

 The statement “counter++” may be implemented in machine language as:

register1 = counter
register1 = register1 + 1
counter = register1

 The statement “counter--” may be implemented as:

register2 = counter
register2 = register2 - 1
counter = register2

Data inconsistency
 If both the producer and consumer attempt to update the buffer concurrently, the machine-language statements may get interleaved.
 The interleaving depends upon how the producer and consumer processes are scheduled.
 Example: let the present value of counter be 5. What is the value of counter after one execution each of the producer and the consumer?

Race Condition

 Race condition: a situation where several processes access and manipulate shared data concurrently, and the final value of the shared data depends upon which process finishes last.

 To prevent race conditions, concurrent processes must be synchronized so that access to the critical section is provided in a mutually exclusive manner.

The Critical-Section Problem
 n processes all compete to use some shared data.
 Each process has a code segment, called the critical section, in which the shared data (common variables/data structures or a common file) is accessed.
 E.g. the machine-language statements for counter++ and counter-- are the critical sections of the producer and consumer processes, respectively.
 Ensure that when one process is executing in its critical section, no other process is allowed to execute in its critical section.

Banking Example

Consider a banking system with two functions: deposit(amount) and withdraw(amount). These two functions take an amount that is to be deposited into or withdrawn from a bank account. Assume a husband and wife share a bank account, and concurrently the husband calls withdraw() and the wife calls deposit().
Describe how a race condition is possible and what might be done to prevent the race condition from occurring.

Critical Section
" General structure of process Pi (other process Pj)
do {
entry section
critical section
--negotiation protocol
exit section --release
protocol

remainder section
} while (1);

Solution to Critical-Section Problem
Must satisfy the following three conditions:
1. Mutual exclusion. If process Pi is executing in its critical section, then no other process can be executing in its critical section.
2. Progress. No process running outside its critical section may block other processes from entering the critical section.
3. Bounded waiting. No process should have to wait forever to enter its critical section, i.e. the solution to the critical-section problem should be free of starvation.

Hardware Support for Accessing Critical Section

 Pessimistic Approaches
 Disable/Enable Interrupts
 Test and Set Instruction
 Optimistic Approach
 Compare and Swap Instruction

Disable/Enable Interrupts

Initial Attempts to Solve Problem
 Only 2 processes, P0 and P1
 General structure of process Pi (other process Pj):
do {
    entry section        -- negotiation protocol
        critical section
    exit section         -- release protocol
        remainder section
} while (1);
Processes may share some common variables to synchronize their actions.

Software Solutions

 Algorithms for a two-process solution:
  Algorithm 1 (using a turn variable)
  Algorithm 2 (using flags)
  Peterson’s Algorithm (using flags and a turn variable)
  Dekker’s Algorithm
 Algorithm for n concurrent processes:
  Eisenberg and McGuire Algorithm – first known correct solution to the critical-section problem for n processes

Algorithm 1
 Shared variables:
  int turn; initially turn = 0
  turn == i  Pi can enter its critical section
 Process Pi:
do {
    while (turn != i)
        ;
    critical section
    turn = j;
    remainder section
} while (1);
 Satisfies mutual exclusion, but not progress: the processes must strictly alternate, so if it is Pj's turn and Pj does not wish to enter, Pi is blocked.
Algorithm 2
 Shared variables:
  boolean flag[2]; initially flag[0] = flag[1] = false
  flag[i] = true  Pi ready to enter its critical section
 Process Pi:
do {
    flag[i] = true;
    while (flag[j])
        ;
    critical section
    flag[i] = false;
    remainder section
} while (1);
 Satisfies mutual exclusion
 Removes the strict turn-taking restriction
 Progress requirement is not satisfied (e.g. Pi sets its flag and is preempted; Pj is scheduled and sets its flag as well; then both loop forever – deadlock)
Peterson’s Algorithm for two processes

• Structure of process Pi:
do {
    // entry section (negotiation protocol)
    flag[i] = true;               // process Pi wants to enter
    turn = j;                     // next favored process is Pj
    while (flag[j] && turn == j)
        ;                         // wait till both agree
    critical section
    // exit section (release protocol)
    flag[i] = false;
    remainder section
} while (1);

• Structure of process Pj is symmetric: it sets flag[j] and turn = i, and waits while (flag[i] && turn == i).

 Meets all three requirements:
» Mutual exclusion is preserved (via the turn variable)
» Progress requirement is satisfied (if Pi is preempted outside its CS, progress of Pj is still ensured)
» Bounded-waiting requirement is met (if Pi is in its CS, Pj waits a bounded time)

Semaphore
 Synchronization tool proposed by Dijkstra in 1968
 Used to achieve mutual exclusion among n processes
 Semaphore S – an integer variable (0: busy, 1: free)
 Two standard operations modify S:
  wait() and signal()
  Originally called P() and V()
 A semaphore may be provided as a programming-language construct or as an OS service invoked via system calls or APIs.

Semaphore operations
 Can only be accessed via two indivisible (atomic) operations:

wait(S) {
    while (S <= 0)
        ;   // no-op: busy waiting while S is busy
    S--;
}

signal(S) {
    S++;
}

Semaphore as General Synchronization Tool

 Counting semaphore – integer value can range over an unrestricted domain
 Binary semaphore – integer values 0 and 1
  Can be simpler to implement
  Also known as a mutex lock
  Provides mutual exclusion:

Semaphore mutex;    // initialized to 1 (FREE)
do {
    wait(mutex);    // negotiation protocol
    Critical Section
    signal(mutex);  // release protocol
    Remainder section
} while (TRUE);

Disadvantage

 Busy waiting wastes CPU cycles (a semaphore with busy waiting is called a ‘spinlock’)
 Note that applications may spend lots of time in critical sections, so this is not a good solution.

Queuing implementation of Semaphore

 Semaphore implementation with no busy waiting:
  With each semaphore there is an associated waiting queue
  Two operations:
   block – place the process invoking the operation on the appropriate waiting queue (the PCB is moved from the running to the blocked state)
   wakeup – remove one of the processes in the waiting queue and place it in the ready queue (move the PCB from the semaphore queue to the ready queue)

 A FIFO service discipline may be used to serve the processes waiting on a semaphore
 Semaphore definition (queuing implementation):
  Each semaphore contains an integer value and a pointer to a list of PCBs representing the waiting processes

typedef struct {
    int value;
    struct process *list;  // list of processes waiting on the semaphore
} semaphore;
Semaphore Implementation with
no Busy waiting (Cont.)
 This implementation allows the semaphore to take on negative values
 If the semaphore value is negative, its magnitude is the number of processes waiting on that semaphore

 Implementation of wait:
wait(semaphore *S) {
    S->value--;
    if (S->value < 0) {   // S is busy: suspend the invoking process
        add this process to S->list;
        block();
    }
    // else directly enter the critical section that follows wait()
}

 Implementation of signal:
signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {  // semaphore queue not empty
        remove a process P from S->list;
        wakeup(P);        // move P from the semaphore queue to the ready queue
    }
}
Classical Problems of Synchronization
 Classical problems used to test newly-proposed synchronization
schemes

 Bounded-Buffer Problem
 Readers and Writers Problem
 Sleeping Barber Problem
 Dining-Philosophers Problem

Dining-Philosophers Problem

 Philosophers spend their lives thinking and eating
  They don’t interact with their neighbors; occasionally a philosopher tries to pick up 2 chopsticks (one at a time) to eat from the bowl
  A philosopher needs both chopsticks to eat, and releases both when done
 In the case of 5 philosophers:
  Shared data
   Semaphore chopstick[5], initialized to 1 (FREE)

Dining-Philosophers Problem Algorithm
 The structure of philosopher i:

do {
    // think_for_a_while;
    wait(chopstick[i]);             // wait for left chopstick
    wait(chopstick[(i + 1) % 5]);   // wait for right chopstick
    // eat (critical section)
    signal(chopstick[i]);           // put down left chopstick
    signal(chopstick[(i + 1) % 5]); // put down right chopstick
} while (TRUE);

 What is the problem with this algorithm?
If all philosophers pick up their left chopsticks simultaneously … deadlock

Deadlock free solutions
 Allow a philosopher to pick up his chopsticks only if both chopsticks are available (this must be done in a critical section)
 Use an asymmetric solution:
  An odd philosopher picks up first his left chopstick and then his right chopstick, whereas an even philosopher picks up his right chopstick first and then his left chopstick.

Problems with Semaphores
 Incorrect use of semaphore operations:
  signal(mutex) …. wait(mutex) – violates mutual exclusion
  wait(mutex) … wait(mutex) – causes deadlock
  Omitting wait(mutex) or signal(mutex) (or both)
 Deadlock and starvation

Asymmetric Solution (Deadlock and Starvation Free)

Thank You

