
D. M. Akbar Hussain

Operating Systems Course

Overview
Traditionally, the concept is centralized data processing by a single computer or a set (cluster) of computers. Conceptually, each user has a terminal connected by some kind of communication facility to a central processing computer.

"Centralized" can describe several different things:

Centralized Computers: One or more sets of computers located in a central facility.

Centralized Processing: All applications run on the central data processing facility.

Centralized Data: All data files and databases are stored at one particular facility, and the central computer or computers have control of, and access to, that data.

In contrast to the centralized processing concept, a distributed data processing concept has a number of (small) computers scattered throughout the organization. Such dispersion has the potential to provide processing in a very effective way, since it is based on operational need, economics, and geography.
Distributed Data Processing (DDP)

Responsiveness
Availability
Resource Sharing
Incremental Growth
Increased User Involvement & Control
End-User Productivity
Communications Architecture: Software to support a network of computers.

Network Operating System: A network of application computers with one or more servers.

Distributed Operating System: A common operating system shared by the networked computers.
Client/Server Computing
Client: Provides a highly user-friendly interface to the user. Clients are basically aimed at providing ease of use.

Server: Provides a set of shared user services to the clients. For example, a database server.

Network: The third essential part of client/server computing is the network.
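As a minimal sketch of the three parts (invented for illustration, not from the slides), the Python snippet below plays all roles in one process: a socket pair stands in for the network, a thread plays the server, and the main thread is the client.

```python
import socket
import threading

def server(sock):
    # Server: provides a shared service to clients (here, upper-casing text).
    data = sock.recv(1024)
    sock.sendall(data.upper())
    sock.close()

# The "network" joining client and server: a local socket pair stands in
# for a real LAN connection.
client_sock, server_sock = socket.socketpair()

t = threading.Thread(target=server, args=(server_sock,))
t.start()

# Client: presents the user-facing side and delegates work to the server.
client_sock.sendall(b"hello")
reply = client_sock.recv(1024)
t.join()
client_sock.close()

print(reply)  # b'HELLO'
```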
Client/Server Configuration
More user-friendly applications become available to individuals, and department-level managers can provide such applications depending on local requirements.

Although applications are scattered, the emphasis is on centralization of corporate databases and of many network management and utility functions.

Also, management can economically share a specialized computer facility among different departments.

The user has greater choice in selecting the appropriate product, as there is more to select from.

Networking is fundamental to the operation, so network management and security have high priority in organizing and operating information systems.
Client/Server Applications

Database Applications
Searching is a massive job that requires a large disk, a high-speed processor, and fast I/O, none of which is required at every client.

It would do no good to move all the records to the client; some logic in addition to searching is required to locate the actual record on behalf of the client.

In this situation, some application logic may need to be on the server side.
Client/Server Classes

- Host-Based Processing
This is not a true client/server model; rather, it refers to the traditional mainframe environment, in which virtually all processing is done by the central host. The user interface is generally through a dumb terminal.
- Server-Based Processing
This is the most basic class of the client/server concept: the client just provides a GUI, and all processing is performed at the server site. It does not provide the functionality one would expect from a client/server configuration, except a user-friendly interface at the user end.
- Client-Based Processing (Fat Client)
This is the opposite of server-based processing: all application processing is performed at the client end, with the exception of data validation and database logic functions, which are best performed at the server site. The client normally houses sophisticated database logic functions. This is the most commonly used class, and it also enables the user to customize locally.
- Cooperative Processing
This is a balanced, optimized approach in which processing is allocated by looking at the strengths of the client and the server individually and at the distribution of data. This kind is not easy to implement and maintain.
Three-Tier Client/Server Architecture
- The basic client/server model is two-level: there is a client tier and a server tier.

A new three-level (three-tier) architecture is becoming popular. The application software is distributed over a user machine (thin client), a middle server (a gateway that serves as both client and server), and a backend server (data server). The middle tier can convert protocols, map database queries, and merge results from different data sources.
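A toy sketch of the middle tier's role (all names and data invented for illustration): it accepts one kind of query from the thin client, consults two differently shaped backend stores, and returns one uniform result.

```python
# Hypothetical backend "data servers": two stores with different shapes.
sql_backend = {"alice": 30, "bob": 25}   # stands in for a relational database
kv_backend = {"carol": 41}               # stands in for a key-value store

def middle_tier(name):
    """Gateway: acts as server to the thin client and as client to the
    backends; it maps one query onto both data sources and merges the
    answers into a single uniform result."""
    if name in sql_backend:
        return {"name": name, "age": sql_backend[name], "source": "sql"}
    if name in kv_backend:
        return {"name": name, "age": kv_backend[name], "source": "kv"}
    return None

# Thin client: only issues requests and displays answers.
print(middle_tier("carol"))  # {'name': 'carol', 'age': 41, 'source': 'kv'}
```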
File Cache Consistency
Middleware
- Client/server computing benefits from modularity and the ability to mix and match platforms and applications.

- The true benefit is only possible if there is a uniform means and style of access to system resources across all platforms.

- A standard way to achieve that is to use an interface (middleware) between the application and the communications software.

- Suppose a distributed system keeps some information in one type of database and part of that information in a different database. A client who wants to access that information is not concerned with which vendor's database contains it, only with having uniform access to it.
Middleware in Client/Server

Middleware in a Logical Sense
- Middleware provides real distributed client/server computing.

- The entire distributed system can be viewed as a set of applications and resources available to the user.

- The user is not concerned with the location of data, nor indeed with the location of applications.

- All applications operate over a uniform applications programming interface; the middleware cuts across all client and server platforms and is responsible for routing client requests to the appropriate servers.
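The routing idea can be sketched in a few lines (vendor names and calls are hypothetical stubs): the client sees one query() call, and the middleware hides which vendor-specific API actually answers it.

```python
class OracleStub:
    def select(self, key):            # hypothetical vendor-specific call
        return {"x": 1}.get(key)

class MySQLStub:
    def fetch(self, key):             # a different vendor-specific call
        return {"y": 2}.get(key)

class Middleware:
    """Uniform API: the client calls query() and never learns which
    vendor's database holds the data; the middleware routes the request."""
    def __init__(self):
        oracle, mysql = OracleStub(), MySQLStub()
        # Route table: key -> adapter hiding the vendor-specific call.
        self.routes = {"x": oracle.select, "y": mysql.fetch}

    def query(self, key):
        return self.routes[key](key)

mw = Middleware()
print(mw.query("x"), mw.query("y"))  # 1 2
```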
Middleware Functionality
Distributed Message Passing
A true distributed system comprises individual machines with no shared memory, so interprocess techniques that rely on shared memory, such as semaphores, cannot be used.

These systems basically rely on message-passing techniques:
Straightforward application of messages, as used on a single machine
Remote procedure call

In the first case, in a client/server model, a client that needs some service (say, reading a file) sends a message containing a request for the service to the server process. The server honors it and sends a message containing the reply. Two functions, send and receive, are used: send specifies the destination and the message content; receive tells from whom a message is wanted and provides a buffer for the incoming message.
Reliability Vs. Unreliability
A reliable message-passing facility guarantees delivery of messages, based on:
A reliable transport protocol
Error checking
Acknowledgement
Retransmission
Reordering of mis-ordered messages

If delivery is guaranteed, there is no need to inform the sending process, but an acknowledgement may still be useful, and repeated failure of delivery can be reported to the sender.

In another scheme, the sender just sends the message onto the network. This simplifies most things: communication overhead, complexity, and processing. Applications that require confirmation can implement it with individual request/reply exchanges.
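The acknowledgement/retransmission loop can be sketched as follows (the lossy channel is scripted for the demo, not a real network):

```python
# A scripted "lossy network": the first two transmissions are lost.
drops = iter([True, True])

def unreliable_channel(message):
    # Returns None when the network drops the message.
    return None if next(drops, False) else message

def reliable_send(message, max_retries=10):
    """Retransmit until the acknowledgement arrives, or give up and
    report repeated delivery failure to the sender."""
    for attempt in range(1, max_retries + 1):
        if unreliable_channel(message) is None:
            continue                        # lost: retransmit
        if unreliable_channel("ACK") is not None:
            return attempt                  # delivery confirmed
    raise TimeoutError("repeated failure of delivery")

attempts = reliable_send("payload")
print("delivered on attempt", attempts)  # delivered on attempt 3
```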
Blocking Vs. Nonblocking
With nonblocking (asynchronous) primitives, a process is not suspended on send and receive. When a send primitive is issued, the operating system returns control to the process as soon as the message has been queued for transmission; later, the process is interrupted to be informed that the message has been sent. With a nonblocking receive, the process keeps executing after issuing the receive primitive until it is interrupted by an arriving message, or it can poll for status periodically. This is the more efficient technique.

With blocking (synchronous) send or receive, control is not returned to the process until the message has been transmitted or received.
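The two receive styles can be contrasted with a queue standing in for the message channel (timings are invented for the demo):

```python
import queue
import threading
import time

inbox = queue.Queue()

def remote_sender():
    time.sleep(0.05)               # the message arrives a little later
    inbox.put("reply")

t = threading.Thread(target=remote_sender)
t.start()

# Nonblocking receive: keep control and poll for status periodically.
polls = 0
while True:
    try:
        msg = inbox.get_nowait()   # returns immediately; raises if empty
        break
    except queue.Empty:
        polls += 1                 # the process keeps executing meanwhile
        time.sleep(0.005)
t.join()
print("got", msg, "after", polls, "poll(s)")

# Blocking receive: the process is suspended until a message is present.
inbox.put("second reply")
blocking_msg = inbox.get()         # does not return until data arrives
print("got", blocking_msg)
```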
Remote Procedure Calls (RPC)
This method is widely accepted and used.

It enables remote interfaces to be specified as a set of named operations with designated types.

Communication code can be generated automatically because of the precise, standard interface specification.

Because of the standard, precise interface, client and server modules can be ported to different operating systems.
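Python's standard xmlrpc module gives a compact demonstration of the idea (the add operation and addresses are invented for the example): the server registers a named operation, and the client calls it as if it were local, with marshalling generated automatically.

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def add(a, b):
    return a + b

# Server: exposes a named operation with typed parameters.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
port = server.server_address[1]
server.register_function(add, "add")
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client: calls the remote procedure as if it were local; the
# marshalling of arguments and result is handled automatically.
proxy = ServerProxy(f"http://127.0.0.1:{port}")
result = proxy.add(2, 3)
server.shutdown()
print(result)  # 5
```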
Design Issues in RPC
Parameter Passing: It is simple to pass parameters by value: the parameters are copied into the message and transmitted to the remote system. It is much more complex to support call-by-reference (pointer) parameters: a unique system-wide pointer may be required for each object, and the expected overhead may be high.

Parameter Representation: If the calling and called programs are written in the same programming language and run on the same type of machine, there is no problem. If not, the issue can be resolved in the presentation layer; otherwise, it has to be integrated into the remote procedure call facility itself. A simple option is to use a standardized format for objects, integers, etc.
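A standardized wire format can be sketched with Python's struct module (the operation code and field layout are invented for the example): parameters are copied by value into a fixed, big-endian representation that unlike machines can agree on.

```python
import struct

# Call-by-value marshalling: parameters are copied into the message.
# A standardized external format (big-endian, fixed sizes) lets machines
# with different native representations agree on what the bytes mean.
def marshal(op_id, a, b):
    return struct.pack("!Hii", op_id, a, b)    # network byte order

def unmarshal(message):
    return struct.unpack("!Hii", message)

wire = marshal(7, 2, -3)
print(len(wire))          # 10 bytes: 2-byte op code + two 4-byte ints
print(unmarshal(wire))    # (7, 2, -3)
```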
Client/Server Binding: Binding specifies the relationship between a remote procedure and the calling program. A binding is formed when two applications have made a connection and are ready to exchange information:

Non-persistent Binding: A logical connection is established and is dismantled as soon as the values are returned. The connection requires state information to be maintained on both sides, which consumes resources, so dismantling it quickly is good in that regard, but creating the connection every time has its own overhead.

Persistent Binding: The connection remains established for some time; if there is no activity for a certain period, it can be dismantled.
Synchronous Vs. Asynchronous: This is similar to the blocking/nonblocking concept. Traditionally, a remote procedure call is synchronous: the calling procedure waits until the value is returned, so it behaves like a local procedure call. To improve efficiency, asynchronous RPC can be used. For synchronization between client and server:

A higher-layer application in the client and server can initiate the exchange and then check at the end that all requested actions have been performed.

A client can issue a string of asynchronous RPCs followed by a final synchronous RPC; the server responds to the synchronous RPC only after completing all the work requested in the preceding asynchronous RPCs.

In both options, the asynchronous RPCs need no reply from the server.
Object-Oriented Mechanism: In this scheme, client and server ship messages back and forth between objects. Object communication may rely on an underlying message or RPC structure, or be developed on top of object-oriented capabilities in the operating system. A client that needs a service sends a request to an object request broker, which acts as a directory of all the remote services available on the network. The broker calls the appropriate object and passes along the relevant data. The remote object then services the request and replies to the broker, which responds to the client. Success depends upon the standardization of objects.
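The broker's directory-and-forward role can be sketched in miniature (the clock service is an invented example): the client addresses the broker by service name, never the remote object directly.

```python
class Broker:
    """Directory of remote services: clients ask the broker, which
    locates the object, forwards the request, and relays the reply."""
    def __init__(self):
        self.registry = {}

    def register(self, name, obj):
        self.registry[name] = obj

    def request(self, name, method, *args):
        obj = self.registry[name]             # locate the remote object
        result = getattr(obj, method)(*args)  # forward the request
        return result                         # relay the reply

class ClockObject:
    def add_hours(self, start, hours):
        return (start + hours) % 24

broker = Broker()
broker.register("clock", ClockObject())
print(broker.request("clock", "add_hours", 22, 5))  # 3
```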
Clusters
A cluster is a group of interconnected computers working together as a unified computing resource, providing the illusion of a single machine (each computer in a cluster is also referred to as a node).

An alternative to SMP

High performance and high availability

Benefits of clusters:

Absolute Scalability: It is possible to create very large clusters.

Incremental Scalability: Because of its configuration, it is possible to add more nodes as needed.

High Availability: Because each node is a standalone machine, failure of one node does not stop the service. Most such systems have automatic, built-in software fault-tolerance capability.

Superior Price/Performance: The cost of creating a cluster is much less than that of a single large machine of equal computing capability.
Cluster Configuration
Functional Classification

Clustering Method: Passive Standby (not really a cluster by definition)
Description: One primary computer handles all processing; the other remains inactive until the primary fails. Failure is detected through heartbeat messages sent periodically to the standby.
Benefits: Easy implementation.
Limitations: The secondary computer is unavailable for processing.

Clustering Method: Active Secondary
Description: The secondary computer is also used in processing; the classifications below are possible.
Benefits: Processing is performed by all computers.
Limitations: Complexity.

Clustering Method: 1. Separate Servers
Description: Each server has its own disks; data is continuously copied from primary to secondary. Management software assigns incoming client requests to servers for load balance and high utilization.
Benefits: High availability, at the cost of copying overhead.
Limitations: High network overhead.

Clustering Method: 2. Servers Connected to Disks
Description: To reduce overhead, the cluster consists of servers connected to common disks.
Benefits: Reduced overhead.
Limitations: Requires RAID technology to compensate for disk failure.

Clustering Method: 3. Servers Share Disks
Description: Multiple servers simultaneously share access to the disks.
Benefits: Reduced overhead, minimum downtime.
Limitations: Requires lock management for data access.
OS Design Issues
Failure Management:
Depends on the clustering method:
High-availability clusters
Fault-tolerant clusters

In the first case, all services are available with high probability; on a failure, the query is retried and executed by another server. Recovery of partially completed transactions is not guaranteed and should be handled at the application level.

In the second case, fault tolerance is achieved through redundant shared disks and by backing out uncommitted transactions and committing completed transactions. Switching an application and its data resources over to another system in the cluster is called failover; restoring them when the failed system is available again is called failback, which can be automatic if the problem is truly fixed.

Load Balancing:
The system should automatically rebalance load when new resources are added.
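A round-robin dispatcher gives a minimal sketch of the load-balancing requirement (node names are invented): once a new node is added, subsequent requests are spread over it automatically.

```python
class LoadBalancer:
    """Round-robin dispatcher: spreads incoming requests over the
    cluster's nodes and automatically includes newly added ones."""
    def __init__(self, nodes):
        self.nodes = list(nodes)
        self.next = 0

    def add_node(self, node):
        self.nodes.append(node)    # new resource joins the rotation

    def assign(self, request):
        node = self.nodes[self.next % len(self.nodes)]
        self.next += 1
        return node

lb = LoadBalancer(["node1", "node2"])
before = [lb.assign(r) for r in ("a", "b", "c")]
lb.add_node("node3")
after = [lb.assign(r) for r in ("d", "e", "f")]
print(before)  # ['node1', 'node2', 'node1']
print(after)   # ['node1', 'node2', 'node3']
```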
Parallelizing Computation:
Effective use of a cluster requires executing software from a single application in parallel. Three schemes are possible:

Parallelizing Compiler: The compiler determines at compile time which parts of an application can execute in parallel. Success depends mostly on the nature of the problem and, of course, on the compiler design.

Parallelized Application: The programmer writes the application specifically for the cluster configuration and uses message passing for data exchange between nodes. Possibly the best approach, but it puts a lot of load on programmers.

Parametric Computing: This approach is used for applications that run the same program with different sets of starting conditions, for example in a simulation study.
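Parametric computing can be sketched as a parameter sweep (the simulation kernel is a made-up toy model, and a thread pool stands in for the cluster's nodes): the runs share no data, so each parameter set can execute on a different node in parallel.

```python
from concurrent.futures import ThreadPoolExecutor

def simulate(initial_value):
    """Hypothetical simulation kernel: the same program, run once per
    set of starting conditions (a toy relaxation model)."""
    value = initial_value
    for _ in range(3):
        value = value * 0.5 + 1.0
    return value

conditions = [0.0, 10.0, 20.0]   # one independent run per parameter set
# Independent runs, so a pool of workers (standing in for cluster
# nodes) executes them in parallel.
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(simulate, conditions))
print(results)  # [1.75, 3.0, 4.25]
```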
Cluster Computer Architecture
Typical cluster architecture:
Individual nodes are connected through a LAN
Each node can work independently
Middleware software is installed on each node
The middleware provides a unified system image to the user (single system image)
The middleware is responsible for high availability
The middleware is responsible for load balancing and failure management
Desirable Middleware Services
Single Entry Point: The user logs on to the cluster rather than to an individual computer.
Single File Hierarchy: The user sees a single hierarchy of file directories under the same root directory.
Single Control Point: A default workstation is used for cluster management.
Single Virtual Networking: Any node can access any point in the cluster; even though the actual cluster may consist of multiple interconnected networks, there is a single virtual network.
Single Memory Space: Distributed shared memory enables programs to share variables.
Single Job Management: There is no need to specify a host computer for the execution of a job.
Single User Interface: A common GUI for all users.
Single I/O Space: Any node can remotely access any I/O device without knowing its physical location.
Single Process Space: A uniform process-identification scheme; a process on one node can create and communicate with any process on a remote node.
Checkpointing: Periodically saves process state for roll-back recovery.
Process Migration: For load balancing.
Clusters Vs. SMP
Both provide support for high-demand applications.
Solutions for both are commercially available; SMP has been on the market a bit longer.
SMP's main plus is that it is easy to manage and configure.
SMP is much closer to the original single-processor model.
The major change an SMP requires is in the scheduler function.
SMPs typically take less physical space.
SMP products, because of their longer history, are established and stable.
Clusters are far superior in scalability and will probably win the future race.
Windows 2000 Cluster Server

Code-named Wolfpack.
Cluster Service: The collection of software that manages cluster activity.
Resource: An item/object managed by the cluster service.
Online: A resource is online at a node when it is providing service on that node.
Group: A collection of resources managed as a single unit.
Windows 2000 Cluster
co.v{ 1)+\ (v.c.
D. M. Akbar Hussain
co.v{ 1)+\ (v.c.
Beowulf Distributed Process Space (BPROC)
Beowulf Ethernet Channel Bonding
Pvmsync
EnFuzion
Summary