
Process Synchronization

Operating System Concepts – 9th Edition Silberschatz, Galvin and Gagne ©2013
Inter-Process Communication
• Processes within a system may be independent or cooperating
• A cooperating process can affect or be affected by other processes, including by sharing data
• Cooperating processes need interprocess communication (IPC) and synchronization
• Two models of IPC:
  • Shared memory
  • Message passing

Shared memory and Message passing
Inter-Process Communication
• Reasons for cooperating processes:
  • Information sharing
  • Computation speedup
  • Modularity
  • Convenience

Communications Models
Figure: (a) Message passing. (b) Shared memory.
Interprocess Communication – Shared Memory

• An area of memory shared among the processes that wish to communicate
• The communication is under the control of the user processes, not the operating system
• The major issue is to provide a mechanism that allows the user processes to synchronize their actions when they access the shared memory
• Paradigm for cooperating processes: a producer process produces information that is consumed by a consumer process
  • unbounded buffer – places no practical limit on the size of the buffer
  • bounded buffer – assumes that there is a fixed buffer size (sketched below)
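A minimal sketch of the bounded-buffer case, using two threads to stand in for two processes sharing a memory region (the buffer size, item type, and iteration counts are illustrative choices, not from the slides). With a single producer and a single consumer, this circular buffer needs no locks, because each index is written by only one side:

#include <pthread.h>
#include <stdio.h>

#define BUFFER_SIZE 10

int buffer[BUFFER_SIZE];
volatile int in = 0;    /* next free slot; written only by the producer */
volatile int out = 0;   /* next full slot; written only by the consumer */

void *producer(void *arg) {
    for (int item = 0; item < 20; item++) {
        while (((in + 1) % BUFFER_SIZE) == out)
            ;                           /* buffer full: busy-wait */
        buffer[in] = item;              /* produce the next item */
        in = (in + 1) % BUFFER_SIZE;
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int n = 0; n < 20; n++) {
        while (in == out)
            ;                           /* buffer empty: busy-wait */
        printf("consumed %d\n", buffer[out]);
        out = (out + 1) % BUFFER_SIZE;
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}

Note that with this in/out encoding the buffer holds at most BUFFER_SIZE - 1 items: the slot before out is kept empty to distinguish "full" from "empty".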
Message Passing

• If processes P and Q wish to communicate, they need to:
  • Establish a communication link between them
  • Exchange messages via send/receive
• Implementation issues:
  • How are links established?
  • Can a link be associated with more than two processes?
  • How many links can there be between every pair of communicating processes?
  • What is the capacity of a link?
  • Is the size of a message that the link can accommodate fixed or variable?
  • Is a link unidirectional or bidirectional?

Message Passing (Cont.)

• Implementation of communication link:
  • Physical:
    • Shared memory
    • Hardware bus
    • Network
  • Logical:
    • Direct or indirect
    • Synchronous or asynchronous
    • Automatic or explicit buffering

Message Passing – Direct or Indirect

• Message system – processes communicate with each other without resorting to shared variables
• The IPC facility provides two operations:
  • send(receiver_process, message)
  • receive(sender_process, message)
• The message size is either fixed or variable
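send() and receive() here are abstract operations. As one concrete realization (an assumption for illustration, not the slides' API), a POSIX pipe gives a unidirectional link between a parent and a child process, with write() playing the role of send and read() the role of receive:

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    char msg[32];

    pipe(fd);                          /* create a one-way link */
    if (fork() == 0) {                 /* child acts as the receiver */
        close(fd[1]);                  /* not sending on this end */
        read(fd[0], msg, sizeof msg);  /* receive(parent, msg) */
        printf("child received: %s\n", msg);
        return 0;
    }
    close(fd[0]);                      /* parent acts as the sender */
    write(fd[1], "hello", 6);          /* send(child, "hello") */
    close(fd[1]);
    wait(NULL);                        /* reap the child */
    return 0;
}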

Synchronization
• Message passing may be either blocking or non-blocking
• Blocking is considered synchronous
  • Blocking send – the sender is blocked until the message is received
  • Blocking receive – the receiver is blocked until a message is available
• Non-blocking is considered asynchronous
  • Non-blocking send – the sender sends the message and continues
  • Non-blocking receive – the receiver receives:
    • A valid message, or
    • A null message
• Different combinations are possible
  • If both send and receive are blocking, we have a rendezvous
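The difference can be sketched with a POSIX pipe again (an illustrative choice): read() on an empty pipe normally blocks the receiver, but with O_NONBLOCK set it returns immediately with errno EAGAIN, which corresponds to receiving a null message:

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    char msg[16];

    pipe(fd);
    fcntl(fd[0], F_SETFL, O_NONBLOCK);     /* make receives non-blocking */

    ssize_t n = read(fd[0], msg, sizeof msg);
    if (n < 0 && errno == EAGAIN)          /* nothing queued on the link */
        printf("no message available (null message)\n");

    write(fd[1], "hi", 3);                 /* a message arrives */
    n = read(fd[0], msg, sizeof msg);      /* now returns a valid message */
    if (n > 0)
        printf("received: %s\n", msg);
    return 0;
}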

Buffering

• Queue of messages attached to the link
• Implemented in one of three ways:
  1. Zero capacity – no messages are queued on the link; the sender must wait for the receiver (rendezvous)
  2. Bounded capacity – finite length of n messages; the sender must wait if the link is full
  3. Unbounded capacity – infinite length; the sender never waits

Differences between message passing and shared memory

Race Conditions
• Issues related to inter-process communication include:
  • How a process can pass information to another process
  • Controlling access to critical sections
• Synchronization is needed when a process produces data that is consumed by another process
• Race conditions happen when two or more processes read and write shared data and the final result (of this shared data) depends on who runs when
• To avoid race conditions we should prevent more than one process from accessing the shared data at the same time
• If a process is accessing shared data, other processes should be prevented from accessing it; this is called mutual exclusion. By enforcing mutual exclusion we avoid race conditions

Race Conditions
The part of the program where the shared memory is accessed is called the critical section.

• A solution that achieves mutual exclusion should satisfy four conditions:
  1. No two processes may be simultaneously inside their critical sections
  2. The solution should work regardless of the speeds and number of the CPUs
  3. No process running outside its critical section may block other processes
  4. No process should wait forever to enter its critical section

Race Condition
• counter++ could be implemented as

      register1 = counter
      register1 = register1 + 1
      counter = register1

• counter-- could be implemented as

      register2 = counter
      register2 = register2 - 1
      counter = register2

• Consider this execution interleaving with counter = 5 initially:

      S0: producer executes register1 = counter       {register1 = 5}
      S1: producer executes register1 = register1 + 1 {register1 = 6}
      S2: consumer executes register2 = counter       {register2 = 5}
      S3: consumer executes register2 = register2 - 1 {register2 = 4}
      S4: producer executes counter = register1       {counter = 6}
      S5: consumer executes counter = register2       {counter = 4}

Either final value (4 here, or 6 if S4 and S5 are swapped) is wrong: one increment and one decrement should have left counter at 5.
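A minimal sketch that reproduces this race with two threads (the iteration count is an illustrative choice): because counter++ and counter-- expand to the three-step sequences above, interleavings like S0–S5 make the final value drift away from the expected 0:

#include <pthread.h>
#include <stdio.h>

#define N 1000000

volatile int counter = 0;        /* shared and deliberately unprotected */

void *producer(void *arg) {
    for (int i = 0; i < N; i++)
        counter++;               /* read, add 1, write back */
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < N; i++)
        counter--;               /* read, subtract 1, write back */
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    printf("counter = %d (expected 0)\n", counter);   /* rarely 0 */
    return 0;
}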

Example-1

Critical Section Problem
• Consider a system of n processes {P0, P1, …, Pn-1}
• A critical section is a code segment that accesses shared variables and has to be executed as an atomic action. In a group of cooperating processes, at a given point in time only one process may be executing its critical section; if any other process also wants to execute its critical section, it must wait until the first one finishes.
• Each process has a critical-section segment of code
  • The process may be changing common variables, updating a table, writing a file, etc.
  • When one process is in its critical section, no other may be in its own critical section
• The critical-section problem is to design a protocol to solve this
• Each process must ask permission to enter its critical section in an entry section, may follow the critical section with an exit section, and then executes its remainder section

Critical Section

• General structure of process Pi (shown below)
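The slide's figure did not survive extraction; in outline, the structure it shows is:

do {
    entry section        /* ask permission to enter */

        critical section

    exit section         /* announce that we have left */

        remainder section

} while (true);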

Solution to Critical Section Problem
• A solution to the critical-section problem must satisfy the following three conditions:
  • Mutual exclusion – out of a group of cooperating processes, only one process can be in its critical section at a given point in time
  • Progress – if no process is in its critical section and one or more processes want to execute their critical sections, then one of them must be allowed to enter its critical section
  • Bounded waiting – after a process makes a request to enter its critical section, there is a limit on how many other processes can enter their critical sections before this process's request is granted; once the limit is reached, the system must grant the process permission to enter

Algorithm for Process Pi

do {
    while (turn == j)
        ;                   /* busy-wait: not Pi's turn yet */

    /* critical section */

    turn = j;               /* hand the turn to Pj */

    /* remainder section */
} while (true);

This strict-alternation algorithm guarantees mutual exclusion, but the processes can only enter in turn, so a process delayed in its remainder section blocks the other one: the progress condition is violated.
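A runnable rendering of the turn-based algorithm for two threads (a sketch: volatile keeps the busy-wait visible to the compiler, though production code would use atomics):

#include <pthread.h>
#include <stdio.h>

volatile int turn = 0;           /* whose turn it is to enter */
int counter = 0;                 /* the shared data being protected */

void *proc(void *arg) {
    int i = (int)(long)arg, j = 1 - i;
    for (int k = 0; k < 100000; k++) {
        while (turn == j)
            ;                    /* busy-wait for our turn */
        counter++;               /* critical section */
        turn = j;                /* hand the turn over */
    }
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    pthread_create(&t0, NULL, proc, (void *)0L);
    pthread_create(&t1, NULL, proc, (void *)1L);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %d (expected 200000)\n", counter);
    return 0;
}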

Exercise

Q1) The situation where two or more processes read/write a shared resource and the final result depends on who reads/writes last is called
1. Race conditions
2. Mutual exclusion
3. Inter-process communication
Q2) To prevent race conditions we have to prevent more than one process from being in its critical section at the same time; this is called
1. Race conditions
2. Mutual exclusion
3. Inter-process communication
Q3) The part of the program where a shared variable is accessed is called
1. Race conditions
2. Mutual exclusion
3. Inter-process communication
4. Critical section
Q4) Which of the following is an example of inter-process communication?
1. A system creates a new process
2. A process produces output to be consumed by another process
3. Two processes read from the same file
4. A process wants to send a value to another process on a different machine

Let P0 and P1 be two processes that can access a shared integer variable y whose value is 5. Assume that P0's code is

    w = 2
    w = w + y
    print(w)

and P1's code is

    y = y - 1
    z = y
    print(z)

What are the possible values that will be printed by P0?
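(A sketch of the reasoning, assuming each statement executes atomically: P0 reads y exactly once, in w = w + y. If that read happens before P1's y = y - 1, P0 prints 2 + 5 = 7; if it happens after, P0 prints 2 + 4 = 6. So the possible values are 7 and 6.)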

Semaphores
• An integer variable that saves the number of wakeups for future use
• A semaphore can be 0 or positive
• Two atomic operations, P and V, can be applied to a semaphore:
  • P(sem) (wait/down) – block until sem > 0, then subtract 1 from sem and proceed
  • V(sem) (signal/up) – add 1 to sem
• Atomic means that once a process has started an operation on a semaphore, no other process can access that semaphore until the current process has finished the operation or blocked (this prevents race conditions on the semaphore itself)
• To implement mutual exclusion with semaphores, we define a semaphore s = 1
  • When a process wants to enter its critical section it calls down(s)
  • When a process wants to exit its critical section it calls up(s)
• The down and up operations can be made atomic by implementing them as system calls and briefly disabling all interrupts while these operations are executed
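A sketch of this mutual-exclusion recipe using POSIX unnamed semaphores, where sem_wait() is down/P and sem_post() is up/V (the thread count and iteration count are illustrative):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t s;                      /* the mutual-exclusion semaphore */
int shared = 0;               /* data protected by s */

void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        sem_wait(&s);         /* down(s): enter critical section */
        shared++;             /* critical section */
        sem_post(&s);         /* up(s): exit critical section */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&s, 0, 1);       /* s = 1: at most one thread inside */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %d (expected 200000)\n", shared);
    return 0;
}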

Properties of Semaphores
• Simple
• Works with many processes
• Can have many different critical sections, each guarded by a different semaphore
• Each critical section has its own unique access semaphore
• Can permit multiple processes into the critical section at once, if desirable

Types of Semaphores
• Semaphores are mainly of two types:
  • Binary semaphore – a special form of semaphore used for implementing mutual exclusion, hence often called a mutex. A binary semaphore is initialized to 1 and only takes the values 0 and 1 during execution of a program.
  • Counting semaphore – used to implement bounded concurrency, i.e. to allow at most a fixed number of processes into a section at once (sketched below)
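A sketch of bounded concurrency with a counting semaphore, again assuming POSIX semaphores; the limit of 3 slots and the 10 worker threads are illustrative:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

#define MAX_ACTIVE 3          /* at most 3 threads use the resource at once */

sem_t slots;

void *worker(void *arg) {
    sem_wait(&slots);         /* acquire one of the MAX_ACTIVE slots */
    printf("thread %ld using the resource\n", (long)arg);
    sleep(1);                 /* stand-in for real work */
    sem_post(&slots);         /* release the slot */
    return NULL;
}

int main(void) {
    pthread_t t[10];
    sem_init(&slots, 0, MAX_ACTIVE);   /* counting semaphore = 3 */
    for (long i = 0; i < 10; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 10; i++)
        pthread_join(t[i], NULL);
    return 0;
}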

Down and Up operations

Down operation:

down(s) {
    if (s > 0)
        s--;            /* consume one saved wakeup */
    else
        sleep();        /* block until an up(s) wakes this process */
}

Up operation:

up(s) {
    if (one or more processes are sleeping on s)
        wakeup(one of the blocked processes);
    else
        s++;            /* save the wakeup for future use */
}
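One way to make down and up atomic in user space is to build the semaphore from a mutex and a condition variable (a sketch, not the interrupt-disabling system-call implementation the slide mentions). Here up() increments and signals; a woken sleeper re-checks the value and decrements it, which has the same net effect as the slide's "wake one instead of incrementing":

#include <pthread.h>

typedef struct {
    int value;                      /* the saved wakeups */
    pthread_mutex_t lock;           /* makes down/up atomic */
    pthread_cond_t  nonzero;        /* where processes sleep */
} semaphore;

void semaphore_init(semaphore *s, int value) {
    s->value = value;
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->nonzero, NULL);
}

void down(semaphore *s) {
    pthread_mutex_lock(&s->lock);
    while (s->value == 0)                          /* sleep() until up() */
        pthread_cond_wait(&s->nonzero, &s->lock);
    s->value--;
    pthread_mutex_unlock(&s->lock);
}

void up(semaphore *s) {
    pthread_mutex_lock(&s->lock);
    s->value++;
    pthread_cond_signal(&s->nonzero);              /* wakeup one sleeper, if any */
    pthread_mutex_unlock(&s->lock);
}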

