
UNIT-3

IPC & SYNCHRONIZATION


Inter Process Communication (IPC)

- Exchange of data between two or more separate, independent processes/threads.

- Operating systems provide facilities/resources for inter-process communication (IPC), such as message queues, semaphores, and shared memory.

- Distributed computing systems make use of these facilities/resources to provide an application programming interface (API) which allows IPC to be programmed at a higher level of abstraction (e.g., send and receive).

- Distributed computing requires information to be exchanged among independent processes.
IPC – unicast and multicast

• In distributed computing, two or more processes engage in IPC using a protocol agreed upon by the processes. A process may be a sender at some points during a protocol, a receiver at other points.

• When communication is from one process to a single other process, the IPC is said to be a unicast, e.g., socket communication.

• When communication is from one process to a group of processes, the IPC is said to be a multicast, e.g., the Publish/Subscribe message model.
Unicast vs. Multicast

[Figure: in unicast, P1 sends a message m to a single process P2; in multicast, P1 sends m to each process in a group P2, P3, ..., P4.]

Interprocess Communications in Distributed Computing

[Figure: Process 1 (the sender) transmits data to Process 2 (the receiver).]
Operations provided in Interprocess Communications

• Connect (sender address, receiver address), for connection-oriented communication.

• Send ( [receiver], message )

• Receive ( [sender], message )

• Disconnect (connection identifier), for connection-oriented communication.
Interprocess Communication in basic HTTP

[Figure: a Web browser process sends an HTTP request to a Web server process, which sends back an HTTP response.]

Server operations:
S1: accept connection
S2: receive (request)
S3: send (response)
S4: disconnect

Client operations:
C1: make connection
C2: send (request)
C3: receive (response)
C4: disconnect

Processing order: C1, S1, C2, S2, S3, C3, C4, S4


Event Synchronization

• Interprocess communication may require that the two processes synchronize their operations: one side sends, then the other receives, until all data has been sent and received.

• Ideally, the send operation starts before the receive operation commences.

• In practice, the synchronization requires system support.


Synchronous vs. Asynchronous Communication

• The IPC operations may provide the necessary synchronization using blocking. A blocking operation issued by a process will block further processing of the process until the operation is fulfilled.

• Alternatively, IPC operations may be asynchronous or nonblocking. An asynchronous operation issued by a process will not block further processing of the process. Instead, the process is free to proceed with its processing, and may optionally be notified by the system when the operation is fulfilled.
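The blocking/nonblocking distinction can be sketched with Java's BlockingQueue standing in for the IPC channel (an analogy, not a real network facility): take() is a blocking receive that suspends the caller until data arrives, while poll() is a nonblocking receive that returns immediately.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class BlockingDemo {
    // A shared queue stands in for the IPC channel between two "processes".
    public static BlockingQueue<String> channel = new ArrayBlockingQueue<>(10);

    // Nonblocking receive: returns immediately, null if no message has arrived.
    public static String nonblockingReceive() {
        return channel.poll();
    }

    // Blocking receive: suspends the caller until a message arrives.
    public static String blockingReceive() throws InterruptedException {
        return channel.take();
    }

    public static void main(String[] args) throws Exception {
        // Nothing sent yet: the nonblocking receive does not suspend the caller.
        System.out.println(nonblockingReceive()); // null

        // A sender in another thread delivers a message after a short delay.
        new Thread(() -> {
            try {
                TimeUnit.MILLISECONDS.sleep(50);
                channel.put("hello"); // the send operation
            } catch (InterruptedException ignored) { }
        }).start();

        // The blocking receive suspends this thread until the send completes.
        System.out.println(blockingReceive()); // hello
    }
}
```

The optional notification mentioned above corresponds to callback- or future-based APIs; this sketch shows only the polling form of a nonblocking receive.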
Synchronous Send and Receive

[Figure: process 1 on host 1 issues a blocking send; process 2 on host 2 issues a blocking receive. The sender is suspended until an acknowledgement of data received, provided by the IPC facility, arrives, at which point the blocking send returns; the blocking receive ends when the data has arrived.]
Asynchronous Send and Synchronous Receive

[Figure: process 1 issues a nonblocking send and continues its execution immediately; process 2 issues a blocking receive and is suspended until the data arrives, when the blocking receive returns.]
Synchronous Send and Asynchronous Receive (Scenario A)

[Figure: process 1 issues a blocking send and is suspended until a transparent acknowledgement provided by the IPC facility arrives; process 2 issues a nonblocking receive and continues its execution.]
Asynchronous Send and Asynchronous Receive (Scenario C)

[Figure: process 2 issues a nonblocking receive, which returns immediately; process 1 later issues a send, and process 2 is notified by the system of the arrival of the data.]
Data Representation

• Data transmitted on the network is a binary stream.

• An interprocess communication system may provide the capability to allow a data representation to be imposed on the raw data.

• Because different computers may have different internal storage formats for the same data type, an external representation of the data may be necessary – a standard format.

• Data marshalling is the process of (i) flattening a data structure, and (ii) converting the data to an external representation.

• Some well-known external data representation schemes are:
  Sun XDR (External Data Representation)
  ASN.1 (Abstract Syntax Notation One)
  XML (Extensible Markup Language)
Data Marshalling

[Figure: on host A, the structured data "This is a test.", 1.2, 7.3, -1.5 is marshalled by (1) flattening the structured data items and (2) converting the data to an external (network) representation, e.g. 110011... 10000100... On host B, unmarshalling (1) converts the data to the internal representation and (2) rebuilds the data structures. Conversion between external and internal representation is not required if the two sides are of the same host type, or if the two sides negotiate the representation at connection time.]
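A minimal marshalling sketch (an illustration, not Sun XDR or ASN.1 itself) can use Java's DataOutputStream, which writes multi-byte values big-endian, the conventional network byte order; the record flattened here is the figure's example data.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;

public class Marshal {
    // Marshalling: (i) flatten the structured data, (ii) convert it to an
    // external representation (DataOutputStream writes big-endian).
    public static byte[] marshal(String text, double[] values) {
        try {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(buf);
            out.writeUTF(text);
            out.writeInt(values.length);
            for (double v : values) out.writeDouble(v);
            return buf.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Unmarshalling: convert back to the internal representation and
    // rebuild the data structure.
    public static Object[] unmarshal(byte[] data) {
        try {
            DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
            String text = in.readUTF();
            double[] values = new double[in.readInt()];
            for (int i = 0; i < values.length; i++) values[i] = in.readDouble();
            return new Object[] { text, values };
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        // The figure's example record: a string and three doubles.
        byte[] wire = marshal("This is a test.", new double[] { 1.2, 7.3, -1.5 });
        Object[] back = unmarshal(wire);
        System.out.println(back[0]);                 // This is a test.
        System.out.println(((double[]) back[1])[2]); // -1.5
    }
}
```

Because both sides agree on this external format, the internal layouts of the two hosts never need to match.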
Text-based protocols

• Data marshalling is at its simplest when the data exchanged is a stream of characters, or text.

• Exchanging data as text has the additional advantage that the data can be easily parsed in a program and displayed for human perusal. Hence it is a popular practice for protocols to exchange requests and responses in the form of character strings. Such protocols are said to be text-based.

• Many popular network protocols, including FTP (File Transfer Protocol), HTTP, and SMTP (Simple Mail Transfer Protocol), are text-based.
Group Communication

• Objective: each of a group of processes must receive copies of the messages sent to the group.

• Group communication requires:
 Coordination
 Agreement: on the set of messages that is received and on the delivery ordering
Group Communication

 System: contains a collection of processes, which can communicate reliably over one-to-one channels

 Processes: members of groups, may fail only by crashing

 Groups: closed groups and open groups
Group Communication

 Primitives:
 multicast(g, m): sends the message m to all members of group g
 deliver(m): delivers the message m to the calling process
 sender(m): unique identifier of the process that sent the message m
 group(m): unique identifier of the group to which the message m was sent
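The four primitives can be sketched with an in-memory group table (a toy model; real group communication runs over network channels, and all names here are illustrative):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

public class GroupComm {
    // A message carries its sender and group, so sender(m) and group(m)
    // can be answered from the message itself.
    public static class Message {
        public final String content, sender, group;
        Message(String content, String sender, String group) {
            this.content = content; this.sender = sender; this.group = group;
        }
    }

    // group id -> (member process id -> that member's mailbox)
    public static final Map<String, Map<String, Deque<Message>>> groups = new HashMap<>();

    public static void join(String g, String pid) {
        groups.computeIfAbsent(g, k -> new HashMap<>()).put(pid, new ArrayDeque<>());
    }

    // multicast(g, m): every member of group g receives a copy of m.
    public static void multicast(String g, String from, String content) {
        Message m = new Message(content, from, g);
        for (Deque<Message> mailbox : groups.get(g).values()) mailbox.add(m);
    }

    // deliver(m): hand the next pending message to the calling process.
    public static Message deliver(String g, String pid) {
        return groups.get(g).get(pid).poll();
    }

    public static void main(String[] args) {
        join("g1", "p1"); join("g1", "p2"); join("g1", "p3");
        multicast("g1", "p1", "hello");
        Message m = deliver("g1", "p2");
        System.out.println(m.content + " from " + m.sender + " to " + m.group);
        // hello from p1 to g1
    }
}
```

The agreement and ordering requirements above are exactly what this toy model does not address: a real implementation must ensure all members deliver the same messages in an agreed order despite crashes.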
Remote Procedure Call

• A Remote Procedure Call (RPC) is an inter-process communication that allows a computer program to cause a procedure to execute in another address space (commonly on another computer on a shared network) without the programmer explicitly coding the details of this remote interaction.

• RPC allows programs to call procedures located on other machines. The bare 'send' and 'receive' procedures do not conceal the communication; RPC hides the message passing, which leads to achieving access transparency in distributed systems.
Remote Procedure Call

• Information can be transported in the form of parameters and can come back in the procedure result. No message passing is visible to the programmer.

• Basic idea of RPC (Remote Procedure Call):
– define a server as a module that exports a set of procedures that can be called by client programs.

[Figure: the client issues a call to the server; the server returns the result.]
Remote Procedure Call

• We would like to do the same if the called procedure or function is on a remote server.
Remote Method Invocation (RMI)

• RPC/RMI allows the programmer to execute a remote function or method using the same semantics as local function calls.

Local Machine (Client):

    SampleServer remoteObject;
    int s;
    s = remoteObject.sum(1, 2);  // the arguments 1, 2 travel to the server
    System.out.println(s);       // prints the returned result, 3

Remote Machine (Server):

    public int sum(int a, int b) {
        return a + b;
    }
RPC Operations

• Call by value: the parameters are copied onto the stack. A value parameter is just an initialized local variable. The called procedure may modify the variable, but such changes do not affect the original value at the calling side.

• Call by reference: the parameter is a pointer to the variable. In a call to Read, for example, the second parameter is a reference parameter, so changes the callee makes through it do modify the array in the calling procedure.

• Call-by-copy/restore: the caller copies the variable onto the stack, and the callee's final value is copied back after the call, overwriting the caller's original value.
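Java itself passes everything by value (for objects, the reference is what gets copied), which makes both effects easy to observe; this small demo shows why a copied value insulates the caller, while a copied reference still lets the callee mutate shared data, giving the observable behavior of call by reference:

```java
public class ParamPassing {
    // The callee gets a copy of the value; changes stay local (call by value).
    public static void bumpValue(int n) {
        n = n + 1;
    }

    // The callee gets a copy of the reference, so mutations made through it
    // are visible to the caller -- the observable effect of call by reference.
    public static void bumpArray(int[] a) {
        a[0] = a[0] + 1;
    }

    public static void main(String[] args) {
        int x = 5;
        bumpValue(x);
        System.out.println(x);      // 5 -- the caller is unaffected

        int[] arr = { 5 };
        bumpArray(arr);
        System.out.println(arr[0]); // 6 -- the caller sees the change
    }
}
```

This is exactly why reference parameters are awkward in RPC: the server runs in a different address space, so a raw pointer from the client is meaningless there, and copy/restore is the usual workaround.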
TRANSPARENCY OF RPC

• A major issue in the design of an RPC facility is its transparency property. A transparent RPC mechanism is one in which local procedures and remote procedures are (effectively) indistinguishable to programmers. This requires the following two types of transparency:

1. Syntactic transparency means that a remote procedure call has exactly the same syntax as a local procedure call. Syntax is about the structure or grammar of the language.

2. Semantic transparency means that the semantics of a remote procedure call are identical to those of a local procedure call. Semantics is about the meaning of the statement.

• For example:

    x++;                  // increment
    foo(xyz, --b, &qrs);  // call foo
IMPLEMENTING RPC MECHANISM

• Implementation of an RPC mechanism usually involves the following five elements:
1. The client
2. The client stub
3. The RPCRuntime
4. The server stub
5. The server
RPC MECHANISM

• Client:
The client is a user process that initiates a remote procedure call. To make a remote procedure call, the client makes a perfectly normal local call that invokes a corresponding procedure in the client stub.

• Client Stub:
The client stub is responsible for carrying out the following two tasks:
- On receipt of a call request from the client, it packs a specification of the target procedure and the arguments into a message and then asks the local RPCRuntime to send it to the server stub.
- On receipt of the result of procedure execution, it unpacks the result and passes it to the client.
RPC MECHANISM

• RPCRuntime:
The RPCRuntime handles transmission of messages across the network between client and server machines. It is responsible for retransmissions, acknowledgements, packet routing, and encryption.

The RPCRuntime on the client machine receives the call request message from the client stub and sends it to the server machine. It also receives the message containing the result of procedure execution from the server machine and passes it to the client stub.
RPC MECHANISM

• Server Stub:
The job of the server stub is very similar to that of the client stub. It performs the following two tasks:
- On receipt of the call request message from the local RPCRuntime, the server stub unpacks it and makes a perfectly normal call to invoke the appropriate procedure in the server.
- On receipt of the result of procedure execution from the server, the server stub packs the result into a message and then asks the local RPCRuntime to send it to the client stub.

• Server:
On receiving a call request from the server stub, the server executes the appropriate procedure and returns the result of procedure execution to the server stub.
RPC MECHANISM

• Summary of the process:
1) The client procedure calls the client stub in the normal way.
2) The client stub builds a message and calls the local operating system.
3) The client's OS sends the message to the remote OS.
4) The remote OS gives the message to the server stub.
5) The server stub unpacks the parameters and calls the server.
6) The server does the work and returns the result to the stub.
7) The server stub packs it in a message and calls its local OS.
8) The server's OS sends the message to the client's OS.
9) The client's OS gives the message to the client stub.
10) The stub unpacks the result and returns it to the client.
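The ten steps can be compressed into a toy simulation, hedged as a sketch: comma-separated strings stand in for the packed parameter messages, and a direct method call stands in for the OS/network transport that the RPCRuntime would provide.

```java
public class RpcSimulation {
    // --- server side -------------------------------------------------------
    // The real procedure the client wants executed.
    public static int sum(int a, int b) {
        return a + b;
    }

    // Server stub: unpack the request, call the procedure, pack the result
    // (steps 5-7 of the summary).
    public static String serverStub(String request) {
        String[] parts = request.split(",");          // e.g. "sum,1,2"
        if (!parts[0].equals("sum")) return "error,unknown procedure";
        int result = sum(Integer.parseInt(parts[1]), Integer.parseInt(parts[2]));
        return "result," + result;
    }

    // --- client side -------------------------------------------------------
    // Client stub: pack the procedure name and arguments, "send" the message,
    // then unpack the reply (steps 1-2 and 9-10). The direct call below stands
    // in for steps 3-4 and 8, the OS-level message transport.
    public static int clientStubSum(int a, int b) {
        String request = "sum," + a + "," + b;
        String reply = serverStub(request);
        return Integer.parseInt(reply.split(",")[1]);
    }

    public static void main(String[] args) {
        // The client sees an ordinary local call.
        System.out.println(clientStubSum(1, 2)); // 3
    }
}
```

The point of the stubs is visible here: neither sum() nor the caller of clientStubSum() ever touches a message.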
STUB GENERATION

• Stubs can be generated in one of the following two ways:

• 1. Manually: In this method, the RPC implementer provides a set of translation functions from which a user can construct his or her own stubs. This method is simple to implement and can handle very complex parameter types.

• 2. Automatically: This is the more commonly used method for stub generation. It uses an Interface Definition Language (IDL) to define the interface between a client and a server. An interface definition is mainly a list of procedure names supported by the interface, together with the types of their arguments and results.

• For example, an interface definition has information to indicate whether each argument is input, output, or both – only input arguments need be copied from client to server, and only output arguments need be copied from server to client.
RPC MESSAGES

• Any remote procedure call involves a client process and a server process that are possibly located on different computers. The mode of interaction between the client and server is that the client asks the server to execute a remote procedure and the server returns the result of execution of the concerned procedure to the client. Based on this mode of interaction, the two types of messages involved in the implementation of an RPC system are as follows:
• Call messages
• Reply messages
RPC MESSAGES

• Call Messages:
• Since a call message is used to request execution of a particular remote procedure, the five basic components necessary in a call message are as follows:
• 1. The identification information of the remote procedure to be executed.
• 2. The arguments necessary for the execution of the procedure.
• In addition to these two fields, a call message normally has the following fields:
• a) A message identification field that consists of a sequence number. This field is useful in two ways – for identifying lost messages and duplicate messages in case of system failures, and for properly matching reply messages to outstanding call messages, especially in those cases when the replies of several outstanding call messages arrive out of order.
• b) A message type field that is used to distinguish call messages from reply messages. For example, in an RPC system, this field may be set to 0 for all call messages and set to 1 for all reply messages.
RPC MESSAGES

• c) A client identification field that may be used for two purposes – to allow the server of the RPC to identify the client to whom the reply message has to be returned, and to allow the server to check the authentication of the client process for executing the concerned procedure.

• Reply Messages:
When the server of an RPC receives a call message from a client, it could be faced with one of several conditions. (For each condition, it is assumed that no problem was detected by the server for any of the previously listed conditions.)
COMMUNICATION PROTOCOLS FOR RPCS

• In the RRA (request/reply/acknowledge-reply) protocol (fig. 4.1) there is a possibility that the acknowledgement message may itself get lost. Therefore implementation of the RRA protocol requires that the unique message identifiers associated with request messages must be ordered. Each reply message contains the message identifier of the corresponding request message, and each acknowledgement message also contains the same message identifier.

• This helps in matching a reply with its corresponding request and an acknowledgement with its corresponding reply.
COMMUNICATION PROTOCOLS FOR RPCS

• A client acknowledges a reply message only if it has received the replies to all the requests previous to the request corresponding to this reply.

• Thus an acknowledgement message is interpreted as acknowledging the receipt of all reply messages corresponding to the request messages with lower message identifiers. Therefore, the loss of an acknowledgement message is harmless.
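The RRA acknowledgement rule amounts to simple bookkeeping over ordered message identifiers: an ack for identifier n implicitly acknowledges every reply with a lower identifier, so the client only ever needs to ack the highest n for which replies 1..n have all arrived. A toy sketch of that client-side rule (illustrative only):

```java
import java.util.HashSet;
import java.util.Set;

public class RraAck {
    // Reply identifiers received so far (identifiers are ordered: 1, 2, 3, ...).
    private final Set<Long> received = new HashSet<>();

    // Record an arriving reply and return the identifier to acknowledge:
    // the highest n such that replies 1..n have all been received (0 means
    // nothing can be acknowledged yet). Because an ack for n covers every
    // reply with a lower id, a lost ack is harmless -- the next one repeats
    // the information.
    public long onReply(long id) {
        received.add(id);
        long n = 0;
        while (received.contains(n + 1)) n++;
        return n;
    }

    public static void main(String[] args) {
        RraAck client = new RraAck();
        System.out.println(client.onReply(2)); // 0: reply 1 is still missing
        System.out.println(client.onReply(1)); // 2: acks replies 1 and 2 at once
        System.out.println(client.onReply(3)); // 3
    }
}
```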
RPC Binding

• Binding is the process of connecting the client to the server.
– The server, when it starts up, exports its interface:
  • identifies itself to a network name server
  • tells the RPC runtime that it is alive and ready to accept calls
– The client, before issuing any calls, imports the server:
  • the RPC runtime uses the name server to find the location of the server and establish a connection
• The import and export operations are explicit in the server and client programs.
Assignment 2

• Using architectures, differentiate between RPC and RMI.
• Explain mutual exclusion and election algorithms with examples.
• What is meant by deadlock? Explain detection and prevention mechanisms of deadlock.
• Explain types of load distribution algorithms.
• Explain distributed database design techniques.
• What is distributed multimedia? Explain the characteristics of multimedia.

• Write short notes on:
1. Amoeba, Mach, Chorus
2. Fragmentation and types
3. Load balancing & load sharing
4. Leaky Bucket & Token Bucket
5. Data marshalling
6. Clock synchronization

Submission date: on or before 30-10-2015

Sockets

• A socket is defined as an endpoint for communication.
• It is the concatenation of an IP address and a port.
• The socket 161.25.19.8:1625 refers to port 1625 on host 161.25.19.8.
• Communication takes place between a pair of sockets.
• Sockets allow only an unstructured stream of bytes to be exchanged. It is the responsibility of the client or server application to impose a structure on the data.
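A minimal sketch of socket communication in Java, run entirely on the loopback interface (port 0 lets the OS pick a free port; the line-oriented "protocol" is imposed by the application, exactly as the last bullet says):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class SocketDemo {
    // One request/response exchange over a TCP socket pair.
    public static String roundTrip(String msg) {
        try (ServerSocket server =
                 new ServerSocket(0, 1, InetAddress.getLoopbackAddress())) {
            // Server thread: accept one connection and echo one line back.
            Thread t = new Thread(() -> {
                try (Socket s = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(s.getInputStream()));
                     PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                    out.println("echo: " + in.readLine());
                } catch (IOException ignored) { }
            });
            t.start();

            // Client: connect to ip-address:port; this pair of sockets
            // identifies the conversation.
            String reply;
            try (Socket s = new Socket(InetAddress.getLoopbackAddress(),
                                       server.getLocalPort());
                 PrintWriter out = new PrintWriter(s.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(s.getInputStream()))) {
                out.println(msg);
                reply = in.readLine();
            }
            t.join();
            return reply;
        } catch (IOException | InterruptedException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTrip("hello")); // echo: hello
    }
}
```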
Socket Communication
Remote Method Invocation
•Remote Method Invocation (RMI) is a Java mechanism similar to RPCs.
•RMI allows a Java program on one machine to invoke a method on a
remote object.
Remote and local method invocations

remote local C
invocation invocation local E
remote
invocation invocation F
A B local
invocation D

Remote method invocation: Method invocation


between objects in different processes, whether in
the same computer or not
•Remote object reference: identifier to refer to a certain remote object in a
distributed system. E.g. B’s must be made available to A.
•Remote interface: every remote object has one that specifies which methods
can be invoked remotely. E.g. B and F specify what methods in remote
interface.
•Server interface: for RPC. Server provides a set of procedure that are
42
available for use by client. File server provide reading and writing files.
A remote object and its remote interface

[Figure: a remote object holds its data and the implementation of methods m1–m6; only m1, m2, and m3 belong to its remote interface.]

• Remote interface: the class of a remote object implements the methods of its remote interface. Objects in other processes can only invoke the methods that belong to the remote interface. However, a local object can invoke the remote-interface methods as well as the other methods implemented by the remote object.

• In Java RMI, the remote interface is defined by extending an interface named Remote.
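A sketch of such a remote interface in Java RMI (SampleServer-style naming is illustrative; a real server would additionally export the object with UnicastRemoteObject and bind it in the registry, whereas here the implementation is invoked locally):

```java
import java.rmi.Remote;
import java.rmi.RemoteException;

public class RmiInterfaceDemo {
    // The remote interface: only these methods may be invoked remotely.
    // Remotely invocable methods must declare RemoteException.
    public interface Sample extends Remote {
        int sum(int a, int b) throws RemoteException;
    }

    // The remote object's class implements the remote interface.
    public static class SampleImpl implements Sample {
        public int sum(int a, int b) {
            return a + b;
        }

        // A method outside the remote interface (like the figure's m4-m6):
        // callable only by local objects, never remotely.
        public int localHelper() {
            return 42;
        }
    }

    public static void main(String[] args) throws RemoteException {
        Sample s = new SampleImpl();
        System.out.println(s.sum(1, 2)); // 3
    }
}
```

Clients that hold only a Sample reference cannot see localHelper() at all, which is precisely how the remote interface restricts what other processes may invoke.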
The Stub and Skeleton

[Figure: the RMI client's call passes through a stub; on the RMI server side a skeleton handles it and returns the result.]

• When a client invokes a remote method, the call is first forwarded to the stub.
• The stub is responsible for sending the remote call over to the server-side skeleton.
• The stub opens a socket to the remote server, marshals the object parameters, and forwards the data stream to the skeleton.
• The skeleton contains a method that receives the remote calls, unmarshals the parameters, and invokes the actual remote object implementation.
RMI and the OSI reference model

[Figure omitted. Copyright © 2001 Qusay H. Mahmoud]

Marshalling Parameters
The General RMI Architecture

[Figure: on the remote machine, the RMI server binds its name in the registry; on the local machine, the RMI client looks the name up; calls then flow client → stub → skeleton → server, and results flow back along the same path.]

• The server must first bind its name to the registry.
• The client looks up the server name in the registry to establish remote references.
• The stub serializes the parameters to the skeleton; the skeleton invokes the remote method and serializes the result back to the stub.
IMPLEMENTATION USING RMI

[Figure: (1) the client queries the registry, (2) the registry responds, (3) the client invokes the service on the server, (4) the server returns the results.]

1. The server implements the method and registers, with a registry (a database that keeps track of all services and their locations), an object that can be used by a client. The client is aware of the remote object; it contacts the registry to obtain information about the location of the server node, the port number where the process resides, and the object id number. (Many clients can request the service simultaneously.)
2. The registry responds with information about the server's IP address, the port number for the service, the object id number, etc.
3. The client sends a request to invoke the service on the server.
4. The server executes the service and returns the results to the client.
5. The communication between the client and the server is handled through the stub and skeleton.
The Differences between RMI and RPC

• RMI is similar to Remote Procedure Calls (RPC) in the sense that both RMI and RPC enable you to invoke methods, but there are some important differences. With RPC, you call a standalone procedure. With RMI, you invoke a method within a specific object. RMI can be viewed as object-oriented RPC.
The Differences between RMI and the Traditional Client/Server Approach

• An RMI component can act as both a client and a server, depending on the scenario.

• An RMI system can pass functionality from a client to a server and vice versa. A client/server system typically only passes data back and forth between server and client.
CLOCK SYNCHRONIZATION

• Clock synchronization is a problem from computer science and engineering which deals with the idea that the internal clocks of several computers may differ. Even when initially set accurately, real clocks will differ after some amount of time due to clock drift, caused by clocks counting time at slightly different rates.

• Every computer needs a timer mechanism (called a computer clock) to keep track of the current time and also for various accounting purposes, such as calculating the time spent by a process on CPU utilization, disk I/O, and so on, so that the corresponding user can be charged properly.

• An application may have processes that concurrently run on multiple nodes of the system. For correct results, several such distributed applications require that the clocks of the nodes are synchronized with each other.

• For example, for an online reservation system to be fair, the only remaining seat, booked almost simultaneously from two different nodes, should be offered to the client who booked first, even if the time difference between the two bookings is very small. It may not be possible to guarantee this if the clocks of the nodes of the system are not synchronized.
HOW COMPUTER CLOCKS ARE IMPLEMENTED

• A computer clock usually consists of three components – a quartz crystal that oscillates at a well-defined frequency, a counter register, and a holding register.

• The holding register is used to store a constant value that is decided based on the frequency of oscillation of the quartz crystal. The value in the counter register is decremented by 1 for each oscillation of the quartz crystal.

• When the value of the counter register becomes zero, an interrupt is generated and its value is reinitialized to the value in the holding register. Each interrupt is called a clock tick.

• The value in the holding register is chosen so that 60 clock ticks occur in a second.

• CMOS RAM (complementary metal-oxide semiconductor – typically a battery-powered memory chip in the computer that stores startup information) is also present in most machines and keeps the clock of the machine up to date even when the machine is switched off. When we consider one machine and one clock, a slight drift in the clock ticks over a period of time does not matter. But when we consider n computers having n crystals, all running at slightly different rates, the clocks get out of sync over a period of time. This difference in time values is called clock skew.

• Clock drift rate: the relative amount by which a computer clock differs from a perfect clock.
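The relationship between the crystal frequency and the holding register can be made concrete; the 32,768 Hz figure below is an assumed example (a common watch-crystal frequency), not something the slides specify.

```java
public class ClockTick {
    // The holding register stores the number of crystal oscillations per
    // clock tick: the counter is reloaded with this value each time it
    // reaches zero.
    public static long holdingRegisterValue(long crystalHz, long ticksPerSecond) {
        return crystalHz / ticksPerSecond;
    }

    public static void main(String[] args) {
        // Assumed example: a 32,768 Hz crystal and 60 clock ticks per second.
        long reload = holdingRegisterValue(32768, 60);
        System.out.println(reload); // 546 oscillations per tick (integer part)
        // Each second the counter hits zero 60 times, so 60 interrupts fire,
        // and the interrupt service procedure adds 1 to the time in memory
        // at each tick.
    }
}
```

The truncation in the integer division is itself a tiny source of drift, illustrating why two machines with nominally identical hardware still diverge.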

 
Clocks Drifting

[Figure: the relation between clock time and UTC when clocks tick at different rates – a fast clock gains on UTC, a perfect clock tracks it exactly, and a slow clock falls behind.]
How Clocks Work in a Computer

• A quartz crystal oscillates at a well-defined frequency.
• Each crystal oscillation decrements the counter register by 1.
• When the counter reaches 0, an interrupt is generated – a clock tick – and the counter's value is reloaded from the holding register.
• At each clock tick, an interrupt service procedure adds 1 to the time stored in memory.
Clock Skew Problem

[Figure: computer 1's clock reads 2146–2154 over an interval in which computer 2's clock reads only 2142–2150. output.o is created on computer 1; output.c is created and later changed on computer 2, whose slower clock can stamp the change with a time earlier than output.o's creation time, so the change appears older than the object file.]

What's Different in Distributed Systems

 In centralized systems, where processes can share a clock and memory, implementation of synchronization primitives relies on shared memory and the times at which events happened.

 In distributed systems, processes can run on different machines.
- No shared memory physically exists in a multi-computer system.
- There is no global clock to judge which event happens first.
Synchronization With Physical Clocks

- TAI (International Atomic Time): based on the Cs-133 atomic clock.
- UTC (Universal Coordinated Time): modern civil time; can be received from WWV (a shortwave radio station), satellites, or a network time server.
- ITS (Internet Time Service)
- NTP (Network Time Protocol)

 How do we synchronize clocks with each other?
- Decentralized algorithm: Lamport's algorithm
- Centralized algorithm: Cristian's algorithm
- Distributed algorithm: the Berkeley algorithm
Decentralized Algorithm: Lamport's Algorithm

Lamport developed a "happens before" notation to express this: a → b means that a happens before b. If a is the sending of a message and b is the receipt of that message, then a → b must be true: a message cannot be received before it is sent. This relationship is transitive: if a → b and b → c, then a → c.

The importance of measuring time is to assign a time value to each event on which everyone will agree on the final order of events. That is, if a → b then clock(a) < clock(b), since the clock must never run backwards. If a and b occur on different processes that do not exchange messages (even through third parties), then a → b is not true; these events are said to be concurrent.

Consider the sequence of events depicted in Figures 1 and 2, taking place between three machines whose clocks tick at different rates.

In three of the six messages, we get the appearance of moving back in time. Because of this, future messages from those sources appear to have originated earlier than they really have. If we are to sort messages by the timestamps placed upon them when they were sent, the sequence of messages would appear to be {a, b, e, d, c, f} rather than {a, b, c, d, e, f}. Lamport's algorithm remedies the situation as follows:

Each message carries a timestamp of the sending time (according to the sender's clock). When a message arrives and the receiver's clock is less than the timestamp on the received message, the system's clock is forwarded to the message's timestamp + 1. Otherwise nothing is done.

If we apply this algorithm to the same sequence of messages, we can see that message ordering is now preserved (Figures 3 and 4). Note that between every two events, the clock must tick at least once.

In summary, Lamport's algorithm requires a monotonically increasing software counter for a "clock" that has to be incremented at least when events that need to be timestamped take place. These events will have this "Lamport timestamp" attached to them. For any two events where a → b, L(a) < L(b), where L(x) represents the Lamport timestamp of event x.
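The rule above fits in a few lines; this sketch implements it exactly as stated (forward the clock to timestamp + 1 only when the local clock is behind, otherwise do nothing):

```java
public class LamportClock {
    private long time = 0;

    // A local event: the clock ticks at least once between any two
    // timestamped events.
    public long tick() {
        return ++time;
    }

    // Sending: the message carries the sender's clock as its timestamp.
    public long send() {
        return ++time;
    }

    // Receiving, per the rule in the text: if the local clock is less than
    // the message timestamp, forward it to timestamp + 1; otherwise nothing
    // is done.
    public long receive(long msgTimestamp) {
        if (time < msgTimestamp) time = msgTimestamp + 1;
        return time;
    }

    public static void main(String[] args) {
        LamportClock p1 = new LamportClock();
        LamportClock p2 = new LamportClock();
        p1.tick();
        p1.tick();
        long ts = p1.send();      // message stamped 3 by the sender
        long at = p2.receive(ts); // p2 was at 0, so it jumps to 4
        System.out.println(ts + " -> " + at); // 3 -> 4
        // L(send) < L(receive): the happens-before order is preserved.
    }
}
```

Note that the converse does not hold: L(a) < L(b) does not imply a → b, since concurrent events may carry any relative timestamps.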
Cristian's Algorithm

Assumption: there is a machine with a WWV receiver, which receives precise UTC (Universal Coordinated Time). It is called the time server.

Algorithm:
1. A machine sends a request to the time server at least every d/2 seconds, where d is the maximum difference allowed between a clock and the UTC.
2. The time server sends a reply message with the current UTC when it receives the request.
3. The machine measures the time delay between the time server's sending of the message and the machine's receipt of the message. Then it uses the measurement to adjust the clock.
Cristian's Algorithm

[Figure: getting the current time from a time server – the client records T0 when the request is sent and T1 when the reply arrives.]
Cristian's Algorithm

A major problem: if the client clock is fast, the arriving value of CUTC will be smaller than the client's current time C. What to do? One needs to gradually slow down the client clock by adding less time per tick.

A minor problem: the one-way delay from the server to the client is "significant" and may vary considerably. What to do? Measure this delay and add it to CUTC. The best estimate of the delay is (T1 – T0)/2. In cases when T1 – T0 is above a threshold, ignore the measurement. The new time can be set to the time returned by the server plus the time that elapsed since the server generated the timestamp.
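The delay estimate reduces to one line of arithmetic; the millisecond values in the example are assumed for illustration, not taken from the slides:

```java
public class Cristian {
    // The best estimate of the one-way server-to-client delay is half the
    // measured round trip, (T1 - T0)/2; the client sets its clock to the
    // server's timestamp plus that estimate.
    public static long adjustedTime(long serverTime, long t0, long t1) {
        return serverTime + (t1 - t0) / 2;
    }

    public static void main(String[] args) {
        // Assumed numbers: request sent at T0 = 100 ms on the client clock,
        // reply received at T1 = 140 ms, server stamped the reply with 520 ms.
        long newClock = adjustedTime(520, 100, 140);
        System.out.println(newClock); // 540: server time + 20 ms estimated delay
    }
}
```

When T1 − T0 exceeds the chosen threshold, a real client would simply discard the sample instead of calling adjustedTime.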
Cristian's Algorithm

Adjusting the clock gradually:
- If the local clock is faster than the UTC, add less to the time memory for each clock tick.
- If the local clock is slower than the UTC, add more to the time memory for each clock tick.
The Berkeley Algorithm
(synchronization without a time server)

a) The time daemon asks all the other machines for their clock values.
b) The machines answer.
c) The time daemon tells everyone how to adjust their clock to the average.

I.e., the daemon averages the three timestamps – the two it received and its own: (3:00 + 3:25 + 2:50)/3 = 3:05. It then sends each machine an offset so that the machine's time will match the average once the offset is applied. The machine with a time of 3:25 is sent an offset of −0:20, the machine with a time of 2:50 an offset of +0:15, and the daemon has to adjust its own time by +0:05.
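The averaging step above, reproduced with the slide's numbers expressed as minutes past midnight (3:00 = 180, 3:25 = 205, 2:50 = 170):

```java
import java.util.Arrays;

public class Berkeley {
    // The time daemon averages all reported clock values (including its own)
    // and computes, for each machine, the offset it must apply to reach
    // that average.
    public static int[] offsets(int[] clocks) {
        int avg = (int) Math.round(Arrays.stream(clocks).average().orElse(0));
        int[] out = new int[clocks.length];
        for (int i = 0; i < clocks.length; i++) out[i] = avg - clocks[i];
        return out;
    }

    public static void main(String[] args) {
        // Daemon at 3:00 (180), the other machines at 3:25 (205) and 2:50 (170).
        int[] offs = offsets(new int[] { 180, 205, 170 });
        System.out.println(Arrays.toString(offs)); // [5, -20, 15]
        // i.e. +0:05 for the daemon, -0:20 and +0:15 for the others
        // (average = 185 minutes = 3:05).
    }
}
```

Sending relative offsets rather than absolute times matters: it keeps the correction independent of the message delay back to each machine.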
Mutual Exclusion

• Mutual exclusion (often abbreviated to mutex) algorithms are used in concurrent programming to avoid the simultaneous use of a common resource, such as a global variable, by pieces of computer code called critical sections.

• A critical section is a piece of code in which a process or thread accesses a common resource. The critical section by itself is not a mechanism or algorithm for mutual exclusion. A program, process, or thread can have a critical section in it without any mechanism or algorithm which implements mutual exclusion.
Centralized Algorithm

• One of the processes in the system is chosen to coordinate entry to the critical section.
• A process that wants to enter its critical section sends a request message to the coordinator.
• The coordinator decides which process can enter the critical section next, and it sends that process a reply message.
• When the process receives a reply message from the coordinator, it enters its critical section.
• After exiting its critical section, the process sends a release message to the coordinator and proceeds with its execution.
• This scheme requires three messages per critical-section entry: request, reply, release.
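The coordinator's bookkeeping can be sketched as a queue of waiting processes (a single-process toy model; real messages would travel over the network):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class Coordinator {
    private String holder = null;                           // process in its CS
    private final Deque<String> queue = new ArrayDeque<>(); // waiting processes

    // Handle a request message. Returns true if a reply (grant) is sent to
    // the requester immediately; otherwise the requester is queued and
    // blocks, since no reply arrives.
    public boolean request(String pid) {
        if (holder == null) {
            holder = pid;
            return true;
        }
        queue.add(pid);
        return false;
    }

    // Handle a release message: grant the critical section to the next
    // waiting process, if any. Returns the process that now gets a reply,
    // or null if the section becomes free.
    public String release() {
        holder = queue.poll();
        return holder;
    }

    public static void main(String[] args) {
        Coordinator c = new Coordinator();
        System.out.println(c.request("p1")); // true: p1 enters
        System.out.println(c.request("p2")); // false: p2 queued, no reply yet
        System.out.println(c.release());     // p2: granted when p1 releases
    }
}
```

The FIFO queue is what makes the scheme fair and starvation-free; it is also why the coordinator is both a single point of failure and a potential bottleneck, as noted below.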
Mutual Exclusion:
critical regions in distributed systems
A Centralized Algorithm
(to simulate a single processor system, needs a coordinator)

a) Process 1 asks the coordinator for permission to enter a critical region. Permission is
granted
b) Process 2 then asks permission to enter the same critical region. The coordinator does
not reply.
c) When process 1 exits the critical region, it tells the coordinator, which then replies to 2
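The coordinator's behaviour in steps (a)–(c) can be sketched as a queue of pending requests. This is a minimal illustrative sketch (class and method names are invented; a real system would exchange request/reply/release messages over the network):

```python
from collections import deque

class Coordinator:
    """Centralized mutual-exclusion coordinator (sketch)."""
    def __init__(self):
        self.holder = None       # process currently in the critical section
        self.waiting = deque()   # queued requesters awaiting a reply

    def request(self, pid):
        """A process asks to enter; True means permission granted now."""
        if self.holder is None:
            self.holder = pid    # reply immediately
            return True
        self.waiting.append(pid) # no reply yet: the requester blocks
        return False

    def release(self, pid):
        """The holder exits; grant entry to the next waiter, if any."""
        assert pid == self.holder
        self.holder = self.waiting.popleft() if self.waiting else None
        return self.holder       # pid now granted entry (or None)

c = Coordinator()
c.request(1)        # process 1 enters immediately (figure step a)
c.request(2)        # process 2 is queued, no reply (step b)
nxt = c.release(1)  # coordinator now replies to process 2 (step c)
```

The FIFO queue is what makes the "fair sharing without starvation" property on the next slide hold.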
Centralized Algorithm

• Advantages
– Mutual exclusion guaranteed by coordinator
– “Fair” sharing possible without starvation
– Simple to implement
• Disadvantages
– Single point of failure (coordinator crashes)
– Performance bottleneck
Distributed Algorithm
• When a process wants to enter a critical region, it builds a message
containing:
the name of the critical region
its process number
its current time
• The process sends this message to all the processes in the network. When
another process receives this message, it takes the action pertaining on its
state and the critical region mentioned. Three cases are possible here:
1) If the message-receiving process is not in the critical region and does
not wish to enter it, it sends an OK message back to the sender.
2) Receiver is already in the critical region and does not reply
3) Receiver wants to enter the same critical region but has not done so,
it compares the “time stamp” of the incoming message with the one it has
sent to others for permission. The lowest one wins and can enter the
critical region.
• When the process exits the critical region, it sends an OK message
to inform everyone.
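The receiver's three cases can be written as a single decision function. This is a sketch of the rule only (names are invented); the "time stamps" would come from logical clocks, and a DEFER means the reply is queued until the receiver leaves the region:

```python
def on_request(my_state, my_timestamp, my_pid, req_timestamp, req_pid):
    """Receiver's decision in the distributed mutual-exclusion algorithm.
    my_state is 'RELEASED', 'HELD', or 'WANTED'.
    Returns 'OK' (reply now) or 'DEFER' (queue the reply)."""
    if my_state == 'RELEASED':
        return 'OK'        # case 1: not interested, reply at once
    if my_state == 'HELD':
        return 'DEFER'     # case 2: already in the region, queue the reply
    # case 3: both want the region -- the lower (timestamp, pid) pair wins
    if (req_timestamp, req_pid) < (my_timestamp, my_pid):
        return 'OK'        # the requester's stamp is lower: it wins
    return 'DEFER'         # our stamp is lower: reply only after we exit
```

Breaking ties with the process number is what guarantees a total ordering of requests, which the next slide notes is required.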
A Distributed Algorithm
requires a total ordering of all events in the system
A message contains the critical region name, the process number and the current time

a) Two processes (0,2) want to enter the same critical region at the same moment.
b) Process 0 has the lowest timestamp, so it wins and enters the critical region.
c) When process 0 is done, it sends an OK also, so 2 can now enter the critical region.
• This algorithm is worse than the centralized one (n points of failure, scaling,
multiple messages…)
Distributed Algorithm

• Advantage
– No central bottleneck
– Fewer messages than Decentralized
• Disadvantage
– n points of failure
– i.e., failure of one node to respond locks up the system
Token Ring Algorithm
• In software, a logical ring is constructed in which each process is assigned
a position in the ring. The ring positions may be allocated in numerical
order of network addresses or some other means. It does not matter
what the ordering is. All that matters is that each process knows who is
next in line after itself.

• When the ring is initialized, process 0 is given a token. The token circulates
around the ring. It is passed from process k to process k +1 in point-to-
point messages. When a process acquires the token from its neighbor, it
checks to see if it is attempting to enter a critical region. If so, the process
enters the region, does all the work it needs to, and leaves the region.
After it has exited, it passes the token along the ring.

• It is not permitted to enter a second critical region using the same token.
If a process is handed the token by its neighbor and is not interested in
entering a critical region, it just passes it along. As a consequence, when
no processes want to enter any critical regions, the token just circulates at
high speed around the ring.
A Token Ring Algorithm
when the process acquires the token, it accesses the critical region (if needed)

start

a) An unordered group of processes on a network.


b) A logical, ordered, ring constructed in software. Each process knows who
is the next in line
Token Ring Algorithm

• Advantages
• Fairness, no starvation
• Recovery from crashes if token is not lost
• Disadvantage
• Crash of process holding token
• Difficult to detect; difficult to regenerate exactly one
token
Mutual Exclusion

A comparison of the three mutual exclusion algorithms:

Algorithm     Messages per entry/exit   Delay before entry (in message times)   Problems
Centralized   3                         2                                       Coordinator crash
Distributed   2(n – 1)                  2(n – 1)                                Crash of any process
Token ring    1 to ∞                    0 to n – 1                              Lost token, process crash


Election algorithms

• Election algorithms are meant for electing a coordinator process
from among the currently running processes in such a manner that at
any instant of time there is a single coordinator for all processes in
the system.

• Election algorithms are based on the following assumptions:
1. Each process in the system has a unique priority number.
2. Whenever an election is held, the process having the highest
priority number among the currently active processes is elected as
the coordinator.
3. On recovery, a failed process can take appropriate actions to
rejoin the set of active processes.
Bully Algorithm

• When any process notices that the coordinator is no longer responding
to requests, it initiates an election.
Example: A process P holds an election as follows:
1) P sends an ELECTION message to all processes with higher
numbers.
2) If no one responds, P wins the election and becomes the
coordinator.
3) If a higher-numbered process answers, it takes over the election and
P’s job is done.

• At any moment, an ELECTION message can arrive at a process from one
of its lower-numbered colleagues. The receiving process replies with
an OK to say that it is alive and can take over as coordinator.

• That receiver then holds an election of its own. In the end, all
processes give up except one, and that one is the new coordinator. The
new coordinator announces its new post by sending all the processes a
message saying that it is starting immediately as the new coordinator
of the system.
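The outcome of a bully election can be sketched recursively: whoever starts it, the highest-numbered live process always ends up coordinator. An illustrative sketch (the message exchange is abstracted away; the function name is invented):

```python
def bully_election(initiator, alive):
    """One bully election round. `alive` is the set of live process
    numbers. Returns the elected coordinator (sketch)."""
    higher = [p for p in alive if p > initiator]
    if not higher:
        return initiator          # no one responds: the initiator wins
    # Some higher-numbered process answers OK and takes over; recursing
    # from any responder eventually reaches the highest live process.
    return bully_election(min(higher), alive)

# Figure scenario: processes 0..6 are alive, 7 has crashed,
# and process 4 notices and holds an election. Process 6 wins.
coordinator = bully_election(4, alive={0, 1, 2, 3, 4, 5, 6})
# coordinator == 6
```

This matches the figure: 4 starts, 5 and 6 answer, 6 finally tells everyone it has won.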
Election Algorithms
The Bully Algorithm
Selecting a coordinator

• The bully election algorithm


a) Process 4 holds an election
b) Process 5 and 6 respond, telling 4 to stop
c) Now 5 and 6 each hold an election
d) Process 6 tells 5 to stop
e) Process 6 wins and tells everyone
Ring Algorithm
• It is based on the use of a ring as the name suggests. But this does not use a token.
Processes are physically ordered in such a way that every process knows its
successor.
• When any process notices that the coordinator is no longer functioning, it builds
an ELECTION message containing its own number and passes it along to its
successor. If the successor is down, the sender skips that member and passes the
message to the next working process along the ring.
• At each step, the sender adds its own process number to the list in the message
effectively making itself a candidate to be elected as the coordinator.

• When the message returns to P, it sees its own process ID in the list and knows
that the circuit is complete.

• P circulates a COORDINATOR message with the new high number.


• Here, both 2 and 5 elect 6:
[5,6,0,1,2,3,4]
[2,3,4,5,6,0,1]
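The circulation of the ELECTION message can be sketched as below. This is an illustrative sketch (names are invented): the message walks the ring from the initiator, skipping dead processes, collecting candidates until it comes back around; the highest number collected becomes coordinator.

```python
def ring_election(start, alive, n):
    """Ring election (no token). Processes 0..n-1 are ordered in a ring;
    `alive` is the set of live ones. Returns (coordinator, candidates)."""
    members, p = [start], (start + 1) % n
    while p != start:
        if p in alive:           # a dead successor is simply skipped
            members.append(p)    # each hop adds the hop's process number
        p = (p + 1) % n
    return max(members), members # COORDINATOR carries the highest number

# The example from the text: 7 processes, elections started at 5 and at 2.
coord, lst = ring_election(5, alive=set(range(7)), n=7)
# lst == [5, 6, 0, 1, 2, 3, 4] and coord == 6, as in the slide
```

Both circulations in the slide, [5,6,0,1,2,3,4] and [2,3,4,5,6,0,1], yield the same coordinator, 6, which is why duplicate elections are harmless here.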
A Ring Algorithm
Each process knows who its successor is.

• Election algorithm using a ring (without token).


UNIT-3 Assignment Questions

• Why and how does stub generation take place in RPC?


• Differentiate between RPC and RMI architecture.
• Explain Mutual Exclusion and Election algorithms with examples.
• Write short notes on:
1. RPC messages
2. Data Marshalling
3. Clock synchronization
4. RRA protocol

• Note: don’t write answers from shivani
