
Interprocess Communication

 Processes executing concurrently in the operating system may be either independent processes or
cooperating processes. A process is independent if it cannot affect or be affected by the other
processes executing in the system.
 Any process that does not share data with any other process is independent. A process is cooperating
if it can affect or be affected by the other processes executing in the system. Clearly, any process that
shares data with other processes is a cooperating process.
 There are several reasons for providing an environment that allows process cooperation:
o Information sharing- Since several users may be interested in the same piece of information
(for instance, a shared file), we must provide an environment to allow concurrent access to
such information.
o Computation speedup- If we want a particular task to run faster, we must break it into
subtasks, each of which will execute in parallel with the others.
o Modularity- We may want to construct the system in a modular fashion, dividing the system
functions into separate processes or threads.
o Convenience- Even an individual user may work on many tasks at the same time. For
instance, a user may be editing, printing, and compiling in parallel.
 Cooperating processes require an inter process communication (IPC) mechanism that will allow
them to exchange data and information.
 There are two fundamental models of inter process communication: (1) shared memory and (2)
message passing.

1. Shared-Memory Systems:
 A shared-memory region resides in the address space of the process creating the shared
memory segment. Other processes that wish to communicate using this shared memory
segment must attach it to their address space.
 Shared memory allows maximum speed and convenience of communication.
 In shared memory systems, system calls are required only to establish shared-memory
regions. Once shared memory is established, all accesses are treated as routine memory
accesses, and no assistance from the kernel is required.
 They can then exchange information by reading and writing data in the shared areas.
 Shared memory is faster than message passing, as message passing systems are typically
implemented using system calls and thus require the more time-consuming task of kernel
intervention.
 To illustrate the concept of cooperating processes, let's consider the producer-consumer
problem, which is a common paradigm for cooperating processes. A producer process
produces information that is consumed by a consumer process. For example, a compiler
may produce assembly code, which is consumed by an assembler.
 One solution to the producer-consumer problem uses shared memory. To allow producer
and consumer processes to run concurrently, we must have available a buffer of items that
can be filled by the producer and emptied by the consumer.
 Two types of buffers can be used. The unbounded buffer places no practical limit on the
size of the buffer. The consumer may have to wait for new items, but the producer can
always produce new items. The bounded buffer assumes a fixed buffer size. In this case, the
consumer must wait if the buffer is empty, and the producer must wait if the buffer is full.
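 As an illustration (a minimal C sketch, not part of the original text), the program below first establishes a POSIX shared-memory region with system calls (shm_open, ftruncate, mmap) and then lets a producer and a consumer share a bounded circular buffer in that region through ordinary memory accesses. The object name "/pc_buffer", the buffer size, and the busy-wait loops are illustrative assumptions; error checking is omitted, and a real solution would add proper synchronization such as semaphores.

/* Producer-consumer over a POSIX shared-memory bounded buffer (illustrative sketch).
 * Compile with: gcc prodcons.c -lrt                                              */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

#define BUFFER_SIZE 8                 /* bounded buffer: holds at most BUFFER_SIZE-1 items */

struct shared_buffer {
    int buffer[BUFFER_SIZE];
    volatile int in;                  /* next free slot, advanced by the producer          */
    volatile int out;                 /* next full slot, advanced by the consumer          */
};

int main(void) {
    /* System calls are needed only to establish the shared-memory region.        */
    int fd = shm_open("/pc_buffer", O_CREAT | O_RDWR, 0666);
    ftruncate(fd, sizeof(struct shared_buffer));
    struct shared_buffer *b = mmap(NULL, sizeof(*b), PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, 0);
    b->in = 0;
    b->out = 0;

    if (fork() == 0) {                /* child process: the consumer                       */
        for (int i = 0; i < 20; i++) {
            while (b->in == b->out)
                ;                     /* buffer empty: wait for the producer               */
            printf("consumed %d\n", b->buffer[b->out]);
            b->out = (b->out + 1) % BUFFER_SIZE;
        }
        _exit(0);
    }

    for (int i = 0; i < 20; i++) {    /* parent process: the producer                      */
        while ((b->in + 1) % BUFFER_SIZE == b->out)
            ;                         /* buffer full: wait for the consumer                */
        b->buffer[b->in] = i;         /* ordinary memory access, no kernel involvement     */
        b->in = (b->in + 1) % BUFFER_SIZE;
    }

    wait(NULL);
    shm_unlink("/pc_buffer");
    return 0;
}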

2. Message-Passing Systems:


 Message passing is useful for exchanging smaller amounts of data, because no conflicts need be
avoided. Message passing is also easier to implement than is shared memory for intercomputer
communication.
 Message passing provides a mechanism to allow processes to communicate and to synchronize
their actions without sharing the same address space and is particularly useful in a distributed
environment, where the communicating processes may reside on different computers connected
by a network.
 A message-passing facility provides at least two operations: send(message) and
receive(message). Messages sent by a process can be of either fixed or variable size.
 If only fixed-sized messages can be sent, the system-level implementation is straightforward.
This restriction, however, makes the task of programming more difficult. Conversely, variable-
sized messages require a more complex system-level implementation, but the programming
task becomes simpler.
 If processes P and Q want to communicate, they must send messages to and receive messages
from each other; a communication link must exist between them.
Here are several methods for logically implementing a link and the send()/receive()
operations:
a) Direct or indirect communication
b) Synchronous or asynchronous communication
c) Automatic or explicit buffering
Naming:
Processes that want to communicate must have a way to refer to each other. They can use either
direct or indirect communication.
Under direct communication, each process that wants to communicate must explicitly name the
recipient or sender of the communication.
Symmetry in addressing:
In this scheme, the send() and receive() primitives are defined as follows:
o send(P, message) - Send a message to process P.
o receive(Q, message) - Receive a message from process Q.
A communication link in this scheme has the following properties:
 A link is established automatically between every pair of processes that want to
communicate. The processes need to know only each other's identity to communicate.
 A link is associated with exactly two processes.
 Between each pair of processes, there exists exactly one link.
Asymmetry in addressing:
Here, only the sender names the recipient; the recipient is not required to name the sender.
In this scheme, the send() and receive() primitives are defined as follows:
o send(P, message) -Send a message to process P.
o receive (id, message) -Receive a message from any process; the variable id is set to the
name of the process with which communication has taken place.
With indirect communication, the messages are sent to and received from mailboxes, or ports. A
mailbox can be viewed abstractly as an object into which messages can be placed by processes and
from which messages can be removed. Each mailbox has a unique identification. In this scheme, a
process can communicate with some other process via a number of different mailboxes.
Two processes can communicate only if the processes have a shared mailbox, however. The send()
and receive() primitives are defined as follows:
o send (A, message) -Send a message to mailbox A.
o receive (A, message)-Receive a message from mailbox A.
In this scheme, a communication link has the following properties:
 A link is established between a pair of processes only if both members of the pair have a shared
mailbox.
 A link may be associated with more than two processes.
 Between each pair of communicating processes, there may be a number of different links, with
each link corresponding to one mailbox.
Now suppose that processes P1, P2, and P3 all share mailbox A. Process P1 sends a message to A,
while both P2 and P3 execute a receive() from A. Which process will receive the message sent by
P1? The answer depends on which of the following methods we choose:
 Allow a link to be associated with two processes at most.
 Allow at most one process at a time to execute a receive() operation.
 Allow the system to select arbitrarily which process will receive the message (that is, either P2 or
P3, but not both, will receive the message).The system may identify the receiver to the sender.
A mailbox that is owned by the operating system has an existence of its own. It is independent and is
not attached to any particular process. The operating system then must provide a mechanism that
allows a process to do the following:
 Create a new mailbox.
 Send and receive messages through the mailbox.
 Delete a mailbox.
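A rough analogue of such operating-system-owned mailboxes is a POSIX message queue: mq_open creates (or opens) a named mailbox, mq_send and mq_receive place and remove messages, and mq_unlink deletes it. The sketch below is illustrative only; the name "/mbox_A" and the attribute values are assumptions, and error checking is omitted.

/* Mailbox-style indirect communication sketched with POSIX message queues.
 * Compile with: gcc mailbox.c -lrt                                          */
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 128 };

    /* Create a new mailbox (or open it if it already exists).               */
    mqd_t mbox = mq_open("/mbox_A", O_CREAT | O_RDWR, 0666, &attr);

    /* send(A, message): place a message in mailbox A.                       */
    const char *msg = "hello";
    mq_send(mbox, msg, strlen(msg) + 1, 0);

    /* receive(A, message): remove a message from mailbox A.                 */
    char buf[128];
    mq_receive(mbox, buf, sizeof(buf), NULL);
    printf("received: %s\n", buf);

    /* Delete the mailbox when it is no longer needed.                       */
    mq_close(mbox);
    mq_unlink("/mbox_A");
    return 0;
}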
Synchronization:
Communication between processes takes place through calls to send() and receive() primitives. There
are different design options for implementing each primitive. Message passing may be either blocking
or nonblocking, also known as synchronous and asynchronous.
 Blocking send- The sending process is blocked until the message is received by the receiving
process or by the mailbox.
 Nonblocking send- The sending process sends the message and resumes operation.
 Blocking receive- The receiver blocks until a message is available.
 Nonblocking receive- The receiver retrieves either a valid message or a null.
Different combinations of send() and receive() are possible. When both send() and receive() are
blocking, we have a rendezvous between the sender and the receiver. The solution to the producer-
consumer problem becomes trivial when we use blocking send() and receive() statements.
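As a small illustration (not part of the original text), the C sketch below uses an ordinary pipe: read() on an empty pipe blocks, so the consumer simply waits until the producer has sent its data, which is the blocking-receive behaviour described above. Error checking is omitted for brevity.

/* Producer and consumer that rendezvous through a pipe (illustrative sketch). */
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    pipe(fd);                               /* fd[0] = read end, fd[1] = write end */

    if (fork() == 0) {                      /* child acts as the consumer          */
        char buf[64];
        ssize_t n = read(fd[0], buf, sizeof(buf));   /* blocking receive           */
        printf("consumer received %zd bytes: %s\n", n, buf);
        _exit(0);
    }

    sleep(1);                               /* producer works for a while...       */
    const char *msg = "produced item";
    write(fd[1], msg, strlen(msg) + 1);     /* send: this unblocks the consumer    */
    wait(NULL);
    return 0;
}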
Buffering
Whether communication is direct or indirect, messages exchanged by communicating processes reside
in a temporary queue. Basically, such queues can be implemented in three ways:
 Zero capacity- The queue has a maximum length of zero; thus, the link cannot have any messages
waiting in it. In this case, the sender must block until the recipient receives the message.
 Bounded capacity- The queue has finite length n; thus, at most n messages can reside in it. If the
queue is not full when a new message is sent, the message is placed in the queue and the sender can
continue execution without waiting.
 Unbounded capacity- The queue's length is potentially infinite; thus, any number of messages can
wait in it. The sender never blocks.
The zero-capacity case is sometimes referred to as a message system with no buffering; the other cases are
referred to as systems with automatic buffering.
Deadlock
In a multiprogramming environment several processes may compete for a finite number of resources.
A process requests resources; if the resources are not available at that time, the process enters a
waiting state. Sometimes a waiting process is never again able to change state, because the resources
it has requested are held by other waiting processes. This situation is known as deadlock.

Necessary conditions for Deadlocks

1. Mutual Exclusion: Only one process at a time can use a resource. If another process requests
that resource, the requesting process must wait until the resource has been released.
2. Hold and wait: A process must be holding at least one resource and waiting to acquire additional
resources that are currently held by other processes.
3. No preemption: A resource cannot be forcibly taken away from the process holding it; it can be
released only voluntarily by that process once it has finished with it.
4. Circular Wait: The processes must be waiting for resources in a cyclic manner, so that the
last process is waiting for a resource that is held by the first process.
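A minimal sketch (in C with POSIX threads, not taken from the text) in which all four conditions can hold at once: each thread holds one mutex (mutual exclusion, hold and wait), a mutex cannot be taken back from its holder (no preemption), and each thread waits for the lock the other holds (circular wait), so with the timing forced by the sleep() calls the program usually hangs.

/* Two threads that can deadlock: each holds one lock and waits for the other.
 * Compile with: gcc deadlock.c -lpthread                                      */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;   /* resource A */
pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;   /* resource B */

void *worker1(void *arg) {
    pthread_mutex_lock(&lock_a);       /* hold A ...                          */
    sleep(1);                          /* give the other thread time to run   */
    pthread_mutex_lock(&lock_b);       /* ... and wait for B (circular wait)  */
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return NULL;
}

void *worker2(void *arg) {
    pthread_mutex_lock(&lock_b);       /* hold B ...                          */
    sleep(1);
    pthread_mutex_lock(&lock_a);       /* ... and wait for A                  */
    pthread_mutex_unlock(&lock_a);
    pthread_mutex_unlock(&lock_b);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker1, NULL);
    pthread_create(&t2, NULL, worker2, NULL);
    pthread_join(t1, NULL);            /* with the sleeps, this usually never returns */
    pthread_join(t2, NULL);
    return 0;
}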

Methods for handling deadlock:

Generally speaking, we can deal with the deadlock problem in one of three ways:
i. We can use a protocol to prevent or avoid deadlocks, ensuring that the system will never enter a
deadlocked state.
ii. We can allow the system to enter a deadlocked state, detect it, and recover.
iii. We can ignore the problem altogether and pretend that deadlocks never occur in the system.

Deadlock prevention
Think of a deadlock as a table standing on four legs, where the four legs are the four necessary
conditions that must occur simultaneously for a deadlock to arise.

If we break one of the legs, the table falls. The same holds for deadlock: if we can violate one of
the four necessary conditions so that they never hold together, we can prevent the deadlock.

Let's see how we can prevent each of the conditions.


1. Mutual Exclusion

Mutual exclusion, from the resource point of view, means that a resource can never be used by more
than one process simultaneously. This is reasonable for many resources, but it is also the root of the
problem: if a resource could be used by more than one process at the same time, no process would
ever have to wait for it.

So if we could make resources behave in a non-mutually-exclusive (shareable) manner, deadlock
could be prevented.
Spooling
For a device such as a printer, spooling can work. A buffer (spool) associated with the printer stores
the jobs submitted by each process, and the printer later takes the jobs from the spool and prints them
one by one, typically in FCFS order. With this mechanism a process does not have to wait for the
printer; it can continue whatever it was doing and collect the output once it has been produced.
Although spooling can be an effective way to relax mutual exclusion, it suffers from two kinds of
problems:

 It cannot be applied to every resource.

 After some time, a race may arise between processes for space in the spool.

We cannot, in general, force a resource to be used by more than one process at the same time, since
this would not be fair and could cause serious performance problems. Therefore, in practice, we
cannot violate mutual exclusion for most resources.
2. Hold and Wait

The hold-and-wait condition arises when a process holds one resource while waiting for another
resource in order to complete its task. Deadlock can occur because several processes may each hold
one resource and wait for another in a cyclic order.

To prevent this, we need a mechanism by which a process either does not hold any resource or does
not wait. That means a process must be assigned all the resources it needs before its execution
starts, and it must not request any further resource once execution has begun.

This could be implemented if every process declared all of its resources in advance. Although this
sounds simple, it cannot really be done in a computer system, because a process usually cannot
determine all the resources it will need before it runs.

A process is a set of instructions executed by the CPU, and each instruction may demand multiple
resources at multiple times; the operating system cannot fix this need in advance.

The problems with this approach are:

 It is practically not possible.

 The possibility of starvation increases, because some processes may hold resources for a
very long time.

3. No Preemption

Deadlock also relies on the fact that a resource cannot be taken away from a process once it has been
allocated. However, if we preempt resources from the processes involved in the deadlock, the
deadlock can be prevented.

This is generally not a good approach, because if we take away a resource that a process is using,
all the work it has done so far may become inconsistent.

Consider a printer being used by some process. If we take the printer away from that process and
assign it to another process, the output printed so far may become inconsistent and useless, and the
first process cannot simply resume printing from where it left off, which causes performance
inefficiency.

4. Circular Wait

To violate circular wait, we can assign an ordering number to each resource and require every
process to request resources only in increasing order of these numbers; a process holding a
higher-numbered resource cannot then request a lower-numbered one. Because all processes acquire
resources in the same global order, no cycle of waiting processes can form (a lock-ordering sketch
follows below). Among all the methods, violating circular wait is the only approach that can be
implemented practically.
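The resource-ordering idea, sketched with the same two POSIX mutexes used in the deadlock example above: if every thread must acquire lock_a (order number 1) before lock_b (order number 2), no circular wait can arise. The numbering and the thread body are illustrative assumptions.

/* Circular-wait prevention by resource ordering (illustrative sketch).
 * Compile with: gcc ordering.c -lpthread                                    */
#include <pthread.h>

pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;   /* order number 1 */
pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;   /* order number 2 */

/* Every thread requests the resources in the same increasing order.        */
void *worker(void *arg) {
    pthread_mutex_lock(&lock_a);       /* always take the lower-numbered lock first */
    pthread_mutex_lock(&lock_b);
    /* ... use both resources ... */
    pthread_mutex_unlock(&lock_b);     /* release in reverse order                  */
    pthread_mutex_unlock(&lock_a);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);            /* both joins complete: no cycle is possible */
    pthread_join(t2, NULL);
    return 0;
}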

Deadlock avoidance

Deadlock avoidance can be done with Banker’s Algorithm.

Banker’s Algorithm

The Banker’s Algorithm is a resource-allocation and deadlock-avoidance algorithm that tests every
resource request made by a process: it checks whether the system would remain in a safe state after
granting the request. If so, the request is allowed; if no safe state would result, the request is not
allowed and the process must wait.

Inputs to the Banker’s Algorithm:

i. Maximum need of resources by each process.

ii. Resources currently allocated to each process.
iii. Resources currently available (free) in the system.

A request will be granted only under the following conditions:

i. The request made by the process is less than or equal to the maximum need of that process.
ii. The request made by the process is less than or equal to the freely available resources in the
system.
Example: Total resources: A=10, B=5, C=7

Process    Allocation    Maximum Need    Current Availability    Remaining Need
           A  B  C       A  B  C         A  B  C                 A  B  C
P1         0  1  0       7  5  3         3  3  2                 7  4  3
P2         2  0  0       3  2  2                                 1  2  2
P3         3  0  2       9  0  2                                 6  0  0
P4         2  1  1       4  2  2                                 2  1  1
P5         0  0  2       5  3  3                                 5  3  1

Total allocated = 7 2 5
[NOTE: If the total resources are not given and only the CURRENT AVAILABILITY in the P1 row
is given, you can find the TOTAL RESOURCES by adding TOTAL ALLOCATED and CURRENT
AVAILABILITY, e.g. in the above example A = 7 + 3 = 10, B = 2 + 3 = 5, C = 5 + 2 = 7,
i.e. A=10, B=5, C=7.]

ANSWER

Total resources: A=10, B=5, C=7

Process    Allocation    Maximum Need    Remaining Need
           A  B  C       A  B  C         A  B  C
P1         0  1  0       7  5  3         7  4  3
P2         2  0  0       3  2  2         1  2  2
P3         3  0  2       9  0  2         6  0  0
P4         2  1  1       4  2  2         2  1  1
P5         0  0  2       5  3  3         5  3  1

Total allocated = 7 2 5, so Current Availability = (10-7, 5-2, 7-5) = 3 3 2.

Applying the safety check, we repeatedly pick a process whose Remaining Need fits within the
current availability and add its allocation back when it finishes:

 Available = 3 3 2: P2's need (1 2 2) fits; P2 finishes and releases 2 0 0, giving 5 3 2.
 Available = 5 3 2: P4's need (2 1 1) fits; P4 finishes and releases 2 1 1, giving 7 4 3.
 Available = 7 4 3: P5's need (5 3 1) fits; P5 finishes and releases 0 0 2, giving 7 4 5.
 Available = 7 4 5: P1's need (7 4 3) fits; P1 finishes and releases 0 1 0, giving 7 5 5.
 Available = 7 5 5: P3's need (6 0 0) fits; P3 finishes and releases 3 0 2, giving 10 5 7, the total resources.

SAFE STATE: the safe sequence is P2 → P4 → P5 → P1 → P3.
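The same computation can be sketched in code. The C program below is an illustrative sketch (process and resource data are hard-coded from the example above): it derives the Remaining Need matrix, shows the two request-granting conditions in request_ok(), and runs the safety algorithm to print the safe sequence P2 P4 P5 P1 P3.

/* Banker's safety algorithm, sketched with the data from the example above. */
#include <stdbool.h>
#include <stdio.h>

#define P 5                       /* processes P1..P5      */
#define R 3                       /* resource types A,B,C  */

int alloc[P][R] = { {0,1,0}, {2,0,0}, {3,0,2}, {2,1,1}, {0,0,2} };
int max[P][R]   = { {7,5,3}, {3,2,2}, {9,0,2}, {4,2,2}, {5,3,3} };
int avail[R]    = { 3, 3, 2 };    /* total (10,5,7) minus allocated (7,2,5)  */
int need[P][R];                   /* Remaining Need = Maximum Need - Allocation */

/* A request by process p may be granted only if it is within the process's
 * remaining need and within the currently available resources.              */
bool request_ok(int p, int req[R]) {
    for (int j = 0; j < R; j++)
        if (req[j] > need[p][j] || req[j] > avail[j])
            return false;
    return true;
}

int main(void) {
    for (int i = 0; i < P; i++)
        for (int j = 0; j < R; j++)
            need[i][j] = max[i][j] - alloc[i][j];

    bool done[P] = { false };
    int order[P], count = 0;

    while (count < P) {
        bool progress = false;
        for (int i = 0; i < P; i++) {
            if (done[i]) continue;
            bool fits = true;                 /* does need_i fit within avail?           */
            for (int j = 0; j < R; j++)
                if (need[i][j] > avail[j]) { fits = false; break; }
            if (fits) {
                for (int j = 0; j < R; j++)   /* P(i+1) finishes, releasing its allocation */
                    avail[j] += alloc[i][j];
                done[i] = true;
                order[count++] = i + 1;
                progress = true;
            }
        }
        if (!progress) {                      /* no process can proceed: unsafe state    */
            printf("unsafe state\n");
            return 1;
        }
    }

    printf("safe sequence:");
    for (int i = 0; i < P; i++)
        printf(" P%d", order[i]);
    printf("\n");                             /* prints: safe sequence: P2 P4 P5 P1 P3   */
    return 0;
}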
