
Interprocess Communication
• Processes executing concurrently in the operating
system may be either independent processes or
cooperating processes.
Independent processes - They cannot affect or be affected by the other processes executing in the system.
Cooperating processes - They can affect or be affected by the other processes executing in the system.
Example: IRCTC ticket reservation.
Note: Cooperating processes can share variables, memory, code, or resources such as the CPU, a printer, or a scanner.
 Basics of inter process communication (IPC) (or)
Advantages of process cooperation (or) Why IPC
• There are several reasons for providing an
environment that allows process cooperation :
• Information sharing: Since several users may be
interested in the same piece of information (for
example, a shared file), the system must provide an
environment that allows concurrent access to that information.
• Computation speed-up: If we want a particular task
to run faster, we must break it into sub-tasks, each
of which executes in parallel with the others. Note
that such a speed-up can be attained only when the
computer has multiple processing elements such as
CPUs or I/O channels.
• Modularity: We may want to construct the system
in a modular fashion by dividing the system
functions into separate processes or threads.
• Convenience: Even a single user may work on
many tasks at a time. For example, a user may
be editing, formatting, printing, and compiling
in parallel.
Dangers of process cooperation
– Data corruption, deadlocks, increased complexity
– Requires processes to synchronize their processing
• Inter process communication (IPC) is a mechanism
which allows processes to communicate with each
other and synchronize their actions.
• The OS provides facilities for IPC.
• There are two primary models of inter process
communication (or)IPC Mechanisms:
1. Shared memory
2. Message passing
• In the shared memory model, a region of memory
that is shared by cooperating processes is
established.
• Processes can then exchange information by reading
and writing data to the shared region.
• In the message-passing model, communication
takes place by means of messages exchanged
between the cooperating processes.
• The two communication models are contrasted
in the figure below:
Producer Consumer Problem
• A producer process produces information that is
consumed by a consumer process.
• For example, a compiler may produce assembly
code, which is consumed by an assembler. The
assembler, in turn, may produce object modules,
which are consumed by the loader.
• One solution to the producer consumer problem
uses shared memory.
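The shared-memory solution can be sketched with a bounded buffer. The following Python sketch (names such as BUFFER_SIZE, producer, and consumer are illustrative, not from the original notes) uses a condition variable so the producer waits while the buffer is full and the consumer waits while it is empty:

```python
import threading

# Bounded-buffer sketch: the producer waits while the buffer is full;
# the consumer waits while it is empty.
BUFFER_SIZE = 4
buffer = []
consumed = []
cond = threading.Condition()

def producer(items):
    for item in items:
        with cond:
            while len(buffer) == BUFFER_SIZE:   # buffer full: wait
                cond.wait()
            buffer.append(item)
            cond.notify_all()                   # wake a waiting consumer

def consumer(count):
    for _ in range(count):
        with cond:
            while not buffer:                   # buffer empty: wait
                cond.wait()
            consumed.append(buffer.pop(0))
            cond.notify_all()                   # wake a waiting producer

p = threading.Thread(target=producer, args=(list(range(10)),))
c = threading.Thread(target=consumer, args=(10,))
p.start(); c.start()
p.join(); c.join()
print(consumed)   # items are consumed in production order
```

With one producer and one consumer the buffer acts as a FIFO queue, so the items come out in the order they were produced.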
Process Synchronization
Introduction:
• When two or more processes cooperate with each
other, their order of execution must be preserved;
otherwise there can be conflicts in their execution
and incorrect outputs can be produced.
• A cooperating process is one that can affect the
execution of another process or be affected by the
execution of another process. Such processes need
to be synchronized so that their order of execution
can be guaranteed.
What is Process Synchronization?
• The procedure involved in preserving the
appropriate order of execution of cooperative
processes is known as Process
Synchronization. 
(or)
• Process Synchronization is the task of
coordinating the execution of processes in such a
way that no two processes can access the same
shared data and resources at the same time.
• There are various synchronization mechanisms
that are used to synchronize the processes.
Example: int shared = 5 (uniprocessor)

Process P1          Process P2
int x = shared;     int y = shared;
x++;                y--;
sleep(1);           sleep(1);
shared = x;         shared = y;

• The final value of shared may be either 4 or 6, whereas the
correct result should be 5.
• Here the processes are not synchronized. This problem is
called a race condition.
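The same read-modify-write pattern can be reproduced with Python threads. This is only an illustrative sketch, but it shows that once the two updates are serialized with a lock, the +1 and -1 cancel and shared always ends at 5:

```python
import threading, time

shared = 5
lock = threading.Lock()

def p1():
    global shared
    with lock:               # serialize the read-modify-write
        x = shared
        x += 1
        time.sleep(0.05)     # widen the window where a race could occur
        shared = x

def p2():
    global shared
    with lock:
        y = shared
        y -= 1
        time.sleep(0.05)
        shared = y

t1 = threading.Thread(target=p1)
t2 = threading.Thread(target=p2)
t1.start(); t2.start()
t1.join(); t2.join()
print(shared)   # always 5: the two locked updates run one after the other
```

Removing the `with lock:` blocks reintroduces the race, and the result can then be 4 or 6 depending on the interleaving.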
Race Condition
• A race condition typically occurs when two or
more threads read, write, and possibly make
decisions based on memory that they are accessing
concurrently, so the outcome depends on the order
in which the accesses happen.
Critical Section
• The regions of a program that access shared
resources and may cause race conditions are
called critical sections.
• To avoid race condition among the processes, we
need to assure that only one process at a time can
execute within the critical section.
Example of a race condition:
Two processes P1 and P2 share a variable b with initial value 2.
If their statements can be executed in any interleaved order, how
many different final values of b are possible?

P1()                P2()
{                   {
    c = b - 1;          d = 2 * b;
    b = 2 * c;          b = d - 1;
}                   }

a) 3   b) 2   c) 5   d) 4

Ans: 3 different values (b can end up as 2, 3, or 4).
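One way to check the answer is to enumerate every legal interleaving of the four statements. In this small Python sketch the statement labels A1/A2/B1/B2 are ours, standing for P1's and P2's two statements in program order:

```python
from itertools import combinations

# Run one interleaving of P1 (A1, A2) and P2 (B1, B2) on b = 2.
def run(order):
    b, c, d = 2, 0, 0
    for step in order:
        if step == "A1":
            c = b - 1
        elif step == "A2":
            b = 2 * c
        elif step == "B1":
            d = 2 * b
        elif step == "B2":
            b = d - 1
    return b

finals = set()
# Choose which 2 of the 4 execution slots hold P1's statements;
# per-process order (A1 before A2, B1 before B2) is preserved.
for slots in combinations(range(4), 2):
    order = [None] * 4
    order[slots[0]], order[slots[1]] = "A1", "A2"
    rest = [i for i in range(4) if i not in slots]
    order[rest[0]], order[rest[1]] = "B1", "B2"
    finals.add(run(order))

print(sorted(finals))   # [2, 3, 4] -> 3 distinct final values
```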
Critical Section Problem
• Critical Section is the part of a program which tries to
access shared resources. That resource may be any
resource in a computer like a memory location, Data
structure, CPU or any IO device.
• The critical section cannot be executed by more than
one process at the same time, so the operating system
faces the difficulty of deciding when to allow and when
to disallow processes from entering the critical section.
• The critical section problem is to design a set of
protocols which ensure that a race condition among the
processes will never arise.
• A critical section environment contains:
1. Entry section: code that requests permission to enter the critical section.
2. Critical section: code that accesses the shared resource.
3. Exit section: code that marks the end of the critical section.
4. Remainder section: the remaining code of the program.
• In order to synchronize the cooperating processes,
our main task is to solve the critical section problem.
• We need to provide a solution in such a way that the
following conditions can be satisfied.
Solution to Critical Section Problem (or) Requirements of Synchronization
mechanisms
● Any solution to the critical section problem must satisfy three
requirements:
● Mutual Exclusion
• If process P1 is executing in its critical section, then no
other processes can be executing in their critical sections.
● Progress
• If no process is executing in its critical section, then a
process that does not need the critical section must not
stop other processes from getting into it.
● Bounded Waiting
• There must be a bound on how long each process waits
to get into the critical section. A process must not wait
endlessly to enter its critical section.
Semaphore
• Semaphore was proposed by Dijkstra in 1965
which is a very significant technique to manage
concurrent processes.
• A semaphore is an integer variable which is accessed
in a mutually exclusive manner by various concurrent
cooperating processes in order to achieve
synchronization.
• It uses two atomic operations, 1) wait (also called
down or P()) and 2) signal (also called up or V()),
for process synchronization.
• There are two types of semaphores : Binary
Semaphores and Counting Semaphores
1. Binary Semaphore – This is also known as
mutex lock. It can have only two values – 0 and
1. Its value is initialized to 1. It is used to
implement the solution of critical section
problem with multiple processes.
2. Counting Semaphore – Its value can range
over an unrestricted domain. It is used to
control access to a resource that has multiple
instances.
Counting Semaphore
• There are scenarios in which more than one
process needs to execute in the critical section
simultaneously. A counting semaphore can be used
when we need to have more than one process in the
critical section at the same time.
• The programming code of semaphore
implementation is shown below which includes
the structure of semaphore and the logic using
which the entry and the exit can be performed in
the critical section.
struct Semaphore
{
    int value;   // number of processes that can enter the critical section simultaneously
    Queue L;     // L contains the set of processes which get blocked
};

Down(Semaphore S)
{
    S.value = S.value - 1;   // the semaphore's value is decreased when a
                             // process tries to enter the critical section
    if (S.value < 0)
    {
        put the process (PCB) in L;   // if the value is negative, the
        sleep();                      // process goes into the blocked state
    }
    else
        return;   // the process enters the critical section
}

Up(Semaphore S)
{
    S.value = S.value + 1;   // the semaphore's value is increased when a
                             // process exits the critical section
    if (S.value <= 0)
    {
        select a process from L;   // if blocked processes remain, wake
        wakeup();                  // one of them from the blocked queue
    }
}
• In this mechanism, the entry and exit in the
critical section are performed on the basis of
the value of counting semaphore.
• The value of counting semaphore at any point
of time indicates the maximum number of
processes that can enter in the critical section
at the same time.
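As an illustrative sketch (not from the original notes), Python's `threading.Semaphore` behaves like the counting semaphore above: initialized to 2, it never lets more than two threads into the critical section at once. The worker function and counters here are our own names:

```python
import threading, time

sem = threading.Semaphore(2)   # counting semaphore: at most 2 slots
active = 0                     # threads currently inside the critical section
max_seen = 0                   # highest number of threads seen inside at once
guard = threading.Lock()       # protects the two counters themselves

def worker():
    global active, max_seen
    sem.acquire()              # Down / P / wait
    with guard:
        active += 1
        max_seen = max(max_seen, active)
    time.sleep(0.05)           # simulate work inside the critical section
    with guard:
        active -= 1
    sem.release()              # Up / V / signal

threads = [threading.Thread(target=worker) for _ in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(max_seen)   # never exceeds 2
```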
Problem on Counting Semaphore
• A counting semaphore is initialized to 12. Then 10 P
(wait) and 4 V (signal) operations are performed on this
semaphore. What is the result?
Ans:
S = 12 (initial)
After 10 P (wait): S = 12 - 10 = 2
After 4 V (signal): S = 2 + 4 = 6
• Hence, the final value of counting semaphore is 6.
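The arithmetic can be checked directly: each P decrements the semaphore value and each V increments it.

```python
# Simulate the semaphore value through 10 P and 4 V operations.
s = 12
for _ in range(10):   # 10 P (wait) operations
    s -= 1
for _ in range(4):    # 4 V (signal) operations
    s += 1
print(s)   # 6
```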
Binary Semaphore or Mutex
• A counting semaphore does not provide mutual
exclusion, because it allows a set of processes to
execute in the critical section simultaneously.
• However, Binary Semaphore strictly provides
mutual exclusion. Here, instead of having more
than 1 slots available in the critical section, we can
only have at most 1 process in the critical section.
The semaphore can have only two values, 0 or 1.
• Let's see the programming implementation of
Binary Semaphore.
struct Bsemaphore
{
    enum value {0, 1};   // value is an enumerated type which can only
                         // hold the two values 0 and 1
    Queue L;             // L contains the PCBs of all processes blocked
                         // by an unsuccessful down operation
};

Down(Bsemaphore S)
{
    if (S.value == 1)   // the slot in the critical section is available
    {
        S.value = 0;    // take it, so no other process reads the value as 1
    }
    else
    {
        put the process (PCB) in the suspend list;   // no slot available: the
        sleep();                                     // process waits in the blocked queue
    }
}

Up(Bsemaphore S)
{
    if (the suspend list is empty)   // no process is waiting to enter
    {                                // the critical section
        S.value = 1;
    }
    else
    {
        select a process from the suspend list;
        wakeup();   // wake the first process in the blocked queue
    }
}
Example: binary semaphore S = 1

P1          P2
Down(S)     Down(S)
CS          CS
Up(S)       Up(S)
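A binary semaphore initialized to 1 is enough to make a shared counter safe. In this Python sketch (the counter and thread counts are illustrative) each increment is wrapped in Down/Up, so the read-modify-write is mutually excluded and every increment survives:

```python
import threading

mutex = threading.Semaphore(1)   # binary semaphore: value 0 or 1
counter = 0

def increment(n):
    global counter
    for _ in range(n):
        mutex.acquire()   # Down(S): enter the critical section
        counter += 1      # critical section: read-modify-write
        mutex.release()   # Up(S): leave the critical section

threads = [threading.Thread(target=increment, args=(10_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # 40000: no increment was lost
```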
Counting Semaphore vs. Binary Semaphore
• Here, are some major differences between counting and
binary semaphore:

Counting Semaphore                        Binary Semaphore

No mutual exclusion                       Mutual exclusion

Any integer value                         Value only 0 and 1

More than one slot                        Only one slot

Provides access to a set of processes     Has a mutual exclusion mechanism
Thread
• A thread is a basic unit of CPU execution. It consists
of a thread ID, a program counter, a set of registers, and
a stack.
• A thread is a lightweight process.
• A process can be split into many threads. For
example, in a browser, each tab can be viewed as a
thread. MS Word uses many threads: typing the text is
one thread, formatting the text is another thread, the
spell checker is another thread, and so on.
• As shown in Figure, multi-threaded applications have
multiple threads within a single process, each having
their own program counter, stack and set of registers, but
sharing common code, data, and certain structures such as
open files.
Single-threaded process and multithreaded process

If a process has multiple threads of control, it can perform more than
one task at a time.
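A small Python sketch (the squaring example is ours, not from the notes) shows several threads of one process sharing the same data, each running with its own stack while writing into a common list:

```python
import threading

# Threads in one process share the same global data.
results = []
lock = threading.Lock()

def square(n):
    with lock:                 # the list is shared: guard the append
        results.append(n * n)

threads = [threading.Thread(target=square, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))   # [0, 1, 4, 9, 16]
```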
Difference between Process and Thread
Process                                      Thread
A process is called a heavyweight            A thread is called a lightweight
process.                                     process.
Process switching needs interaction          Thread switching does not need to
with the OS.                                 interact with the OS.
In multiple processing environments,         All threads can share the same set of
each process executes the same code          open files and child processes.
but has its own memory and file
resources.
If one process is blocked, then no           While one thread is blocked and
other process can execute until the          waiting, a second thread in the same
first process is unblocked.                  task can run.
Each process operates independently          One thread can read, write, or change
of the others.                               another thread's data.
A process takes more time to                 A thread takes less time to
terminate.                                   terminate.
Life cycle or States of a Thread
• The life cycle of a thread consists of the Born, Ready,
Running, Blocked, Sleep, and Dead states.
1. Born State: A thread that has just been created.
2. Ready State: The thread is waiting for the
processor (CPU).
3. Running: The System assigns the processor to
the thread means that the thread is being
executed.
4. Blocked State: The thread is waiting for an
event to occur or waiting for an I/O device.
5. Sleep: A sleeping thread becomes ready after
the designated sleep time expires.
6. Dead: The execution of the thread is finished
Types of Thread
There are two types of thread in operating systems:
1. User Level Thread- User managed thread.
2. Kernel Level Thread – Support and managed
directly by the OS.
1. User Level Thread:- In this case, the kernel is not
aware of the existence of threads; thread management
is done by a thread library in user space.
Advantages:
1. Thread switching does not require kernel mode
privileges.
2. It can run on any OS.
3. User level threads are faster to create and manage.
Disadvantages:
1. In a typical OS, most system calls are blocking, so
one blocking call can block the entire process.
2. A multithreaded application cannot take advantage
of multiprocessing.
2. Kernel Level Thread
• In this case, thread management is done by kernel.
• Kernel threads are supported directly by the OS.
• Kernel maintains context information for the process as a whole
and for individual threads with in the process.
• Kernel performs operation like thread creation , scheduling and
management in kernel space.
Advantages :
1. The kernel can simultaneously schedule multiple threads from the
same process on multiple processors.
2. If one thread is blocked, the kernel can schedule another thread
of the same process.
Disadvantages:
1. These are slower to create and manage than the user thread.
2. Requires mode switch.
Multithreading Models
The user threads must be mapped to kernel
threads, by one of the following strategies:
1. Many to One Model
2. One to One Model
3. Many to Many Model
1. Many-to-One Model
• In the many-to-one model, many user-level
threads are all mapped onto a single kernel thread.
• Thread management is handled by the thread
library in user space, which is very efficient.
Limitations:
• The entire process will block if a thread makes a
blocking system call.
• Because only one thread can access the kernel at a
time, multiple threads are unable to run in parallel
on multiprocessors.
2. One-to-One Model
• Maps each user thread to a kernel thread.
• Provides more concurrency than the many-to-one model by
allowing another thread to run when a thread makes a blocking
system call.
• Also allows multiple threads to run in parallel on
multiprocessors.
• Limitations:
• Creating a user thread requires creating the corresponding
kernel thread.
• Because the overhead of creating kernel threads can burden the
performance of an application, most implementations of this
model restrict the number of threads supported by the system.
• Linux and Windows from 95 to XP implement the one-to-one
model for threads.
Many-To-Many Model
• Multiplexes many user-level threads to a smaller or equal
number of kernel threads.
• The number of kernel threads may be specific to either a
particular application or a particular machine.
• Developers can create as many user threads as necessary,
and the corresponding kernel threads can run in parallel
on a multiprocessor.
• Also, when a thread performs a blocking system call, the
kernel can schedule another thread for execution.
• This is the model that is implemented in most of the
systems.
