
Mr. Siva Sandeep, sandeepchintamaneni@gmail.com, B.Tech. (IT)

Mr. Naveen, ynavn06@gmail.com, B.Tech. (IT)

Guided by Prof. Mr. R. Praveen Kumar, Information Technology (Asst. Professor)

V.R. Siddhartha Engineering College, Vijayawada.

Index



1. Introduction
2. Grid
3. Types of Grid
4. How it works
5. QoS (Quality of Service) guided scheduling algorithm
6. Architecture
7. Grid Components
8. Grid Software Components
9. Example:- United Devices Cancer Research Project
10. Business & Technological Benefits from Grid Computing (with a comparison against other distributed computing systems)
11. Limitations of Grid Computing
12. Globus Toolkit
13. The Future
14. Conclusion

GRID COMPUTING

1.1 Introduction:-

"When the network is as fast as the computer's internal links, the machine disintegrates across the net into a set of special-purpose appliances." -- Gilder Technology Report, June 2000

"United we stand, divided we fall" is the key idea behind grid computing. A grid is a type of parallel and distributed system that enables the sharing, selection, and aggregation of geographically distributed "autonomous" resources dynamically at runtime, depending on their availability, capability, performance, cost, and users' quality-of-service requirements. Grid computing is a critical shift in thinking about how to maximize the value of computing resources. It allows us to unite pools of servers, storage systems and networks into a single large system, so we can deliver the power of multiple systems' resources to a single user point for a specific purpose. To a user, data file, or application, the system appears to be a single, enormous virtual computing system. Grid computing is the next logical step in distributed networking. Just as the Internet allows users to share ideas and files as the seeds of projects, grid computing lets us share the resources of disparate computer systems. The major purpose of a grid is to virtualize resources to solve problems. So, rather than using a network of computers simply to communicate and transfer data, grid computing taps the unused processor cycles of numerous, i.e. thousands of, computers.

1.2 Definition:- A means of network computing that harnesses the unused processing cycles of numerous computers to solve intensive problems that are often too large for a single computer to handle, such as in life sciences or climate modeling.

1.3 Origin:-

Its roots lie in the early distributed computing projects that date back to the 1980s. Grid has since become a centerpiece of the "utility computing" marketing drive taken up by nearly every vendor.

1.4 Phases (history):-

a) First generation:- FAFNER (Factoring via Network-Enabled Recursion) and I-WAY are the two first-generation grid computing efforts. FAFNER, built around a public-key encryption algorithm, was used to factorize extremely large numbers.

b) Second generation core technologies:- GLOBUS:


It is based on the Globus Toolkit resource allocation manager and an extended version of the FTP protocol, which enables data access and partial file access management at high speed.

c) Third generation systems:- These systems follow properties of autonomy such as:
1. Detailed knowledge of their components and status
2. The ability to configure and reconfigure themselves dynamically
3. The ability to recover from malfunction
4. The ability to protect themselves against attack
Third generation grid computing involves the use of different techniques such as XML, Java, J2EE etc.

Different Types:- Distributed object systems are based on the Common Object Request Broker Architecture (CORBA). We can use Java with CORBA up to a certain extent, as Java provides distributed objects through RMI, but Java has drawbacks in terms of computation speed.

Jini and RMI:- Jini is designed to provide a software infrastructure that can form a distributed computing environment. In this model, applications are usually written in Java and communicate using RMI. Key concepts of Jini are lookup, discovery, leasing, remote events and transactions.
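To make the distributed-object idea concrete, here is a minimal sketch of an RMI-style remote call using Python's standard xmlrpc library as a stand-in for CORBA/RMI; the factorize service is hypothetical, chosen only because it echoes FAFNER's workload.

# Server: exposes a method that remote clients can invoke by name,
# much like looking up and calling a remote object in RMI or Jini.
from xmlrpc.server import SimpleXMLRPCServer

def factorize(n):
    """Trial-division factoring -- the kind of work FAFNER farmed out."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

server = SimpleXMLRPCServer(("localhost", 8000))
server.register_function(factorize)   # publish the remote method
server.serve_forever()

A client then invokes it as if it were local: xmlrpc.client.ServerProxy("http://localhost:8000").factorize(8051) returns [83, 97].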

1.5 Need:- The best candidates for grid are applications that run the same or similar computations on thousands or millions of pieces of data, with no single calculation dependent on those that came before.

2. Grid:-

Grids are usually heterogeneous networks. Grid nodes, generally individual computers, consist of different hardware and use a variety of operating systems, and the networks connecting them vary in bandwidth. These resources are shared among the various projects. The system thus forms an aggregation of resources for a particular task, i.e. a virtual organization.

[Figure: Simple Grid Diagram]

3. Types of Grid:-


3.1 Computational Grid:- It is focused on setting aside resources specifically for computing power. In this type of grid most of the machines are high-performance servers.

3.2 Scavenging Grid:- It is most commonly used with large numbers of desktop machines. Machines are scavenged for available CPU cycles and other resources. Owners of desktop machines are usually given control over when their resources are available to the grid.

3.3 Data Grid:- It is responsible for housing and providing access to data across multiple organizations. Users are not concerned with where this data is located as long as they have access to it.

4. How it works:-

Grid computing uses networked clusters of CPUs connected over the Internet, a company intranet or a corporate WAN. The resulting network of CPUs acts as a foundation for a set of grid-enabling software tools. These tools let the grid accept a large computing job and break it down into tens, hundreds or thousands of independent tasks. The tools then search the grid for available resources, assign tasks to processors, aggregate the work and produce one final result. A grid user installs the provided grid software (for using the grid as well as donating to it) on his machine and connects to the Internet. The user establishes his identity with a certificate authority. This software may be automatically reconfigured by the grid management system so that it knows the communication addresses of the management nodes in the grid and the user or machine identification information. To use the grid, most grid systems require the user to log on with a user ID that is enrolled in the grid. Once logged on, the user can query the grid and submit jobs. Grid systems usually provide command-line tools as well as graphical user interfaces (GUIs) for queries. Command-line tools are especially useful when the user wants to write a script. Job submission usually consists of three parts, even if there is only one command required: 1. Some input data, and possibly the executable program or execution script file, are sent to the machine that will execute the job; sending the input is called "staging the input data." 2. The job is executed on the grid machine.

3. The results of the job are sent back to the submitter. When there are a large number of sub-jobs, the work required to collect the results and produce the final result is usually accomplished by a single program.
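This split/execute/collect cycle can be sketched in a few lines of Python; here local worker processes stand in for grid donor machines, and the sub_job function is purely illustrative.

# Sketch of the submit -> execute -> collect cycle: a job is broken into
# independent tasks, "staged" to workers, and a single program gathers results.
from concurrent.futures import ProcessPoolExecutor

def sub_job(chunk):
    # Each independent task works on its own staged slice of the input data.
    return sum(i * i for i in chunk)

if __name__ == "__main__":
    chunks = [range(k, k + 250_000) for k in range(0, 1_000_000, 250_000)]
    with ProcessPoolExecutor() as pool:        # stand-in for donor machines
        partials = pool.map(sub_job, chunks)   # assign tasks to processors
    print(sum(partials))                       # aggregate into one final result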

5.a QoS (Quality of Service) guided scheduling algorithm:-


The scheduler is the main part of grid computing. Scheduling has three main phases: the first is resource discovery; the second involves gathering information about resources and choosing the best match for the application requirement; in the third phase the job is executed. Scheduling algorithms are mainly divided into two categories: online mode and batch mode. In Minimum Completion Time (MCT), the grid system assigns the task to the machine that will have the earliest completion time, while in Minimum Execution Time (MET) it assigns the task to the machine that performs the task in the least execution time.

Min-min algorithm:-

While there are tasks to schedule
    For all tasks to schedule
        For all hosts
            Compute the expected completion time
        end For
    end For
    Compute the metric (minimum completion time per task)
    Schedule the task with the minimum of these completion times
end While

QoS-guided Min-min:-

for all tasks ti in meta-task Mv (in an arbitrary order)
    for all hosts mj (in a fixed arbitrary order)
        compute the completion time CTij
do until all tasks with a high QoS request in Mv are mapped
    for each task with high QoS in Mv, find a host in the QoS-qualified
        host set that gives the earliest completion time
    find the task tk with the minimum earliest completion time
    assign task tk to the host ml that gives it the earliest completion time
    delete task tk from Mv
    update CTil for all i
end do
do until all tasks with a low QoS request in Mv are mapped
    for each task in Mv, find the earliest completion time and the
        corresponding host
    find the task tk with the minimum earliest completion time
    assign task tk to the host ml that gives it the earliest completion time
    delete task tk from Mv
    update CTil for all i
end do

The QoS (priority-based) algorithm finds the minimum earliest completion time and assigns the task to the host that gives it the least completion time. Finally, the low-QoS tasks are mapped in the same way. This algorithm improves efficiency by about 11%.
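As a concrete reading of the pseudocode, the following is a minimal Python sketch of the QoS-guided Min-min mapping. The inputs (an execution-time table et, the set of high-QoS tasks, and the QoS-qualified host set) are illustrative, and the sketch assumes every high-QoS task has at least one qualified host.

# Hedged sketch of QoS-guided Min-min: map high-QoS tasks onto qualified
# hosts first, then map the remaining tasks onto any host.
from collections import defaultdict

def qos_min_min(et, high_qos, qos_hosts):
    """et: {task: {host: execution time}}; high_qos: tasks with a high-QoS
    request; qos_hosts: hosts that can satisfy a high-QoS request."""
    ready = defaultdict(float)              # when each host becomes free
    schedule, unmapped = {}, set(et)
    for tasks, allowed in ((high_qos & unmapped, qos_hosts), (unmapped, None)):
        while tasks & unmapped:
            # Earliest completion time CT = ready(host) + execution time.
            best = {}
            for t in tasks & unmapped:
                best[t] = min((ready[h] + c, h) for h, c in et[t].items()
                              if allowed is None or h in allowed)
            tk = min(best, key=lambda t: best[t][0])   # the Min-min choice
            ct_k, host = best[tk]
            schedule[tk], ready[host] = host, ct_k     # update CT for that host
            unmapped.discard(tk)
    return schedule

et = {"t1": {"m1": 4, "m2": 6}, "t2": {"m1": 3, "m2": 5}, "t3": {"m1": 9, "m2": 2}}
print(qos_min_min(et, high_qos={"t3"}, qos_hosts={"m2"}))
# {'t3': 'm2', 't2': 'm1', 't1': 'm1'} -- the high-QoS task is placed first.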

5.b Tree load balancing algorithm:-


The TLBA (Tree Load Balancing Algorithm) creates a virtual interconnecting tree (a non-cyclic connected graph) among the computers of the system. On this tree, each computer at level N sends its updated load information to the level N-1 computers. The selection of the best computer to execute a process received by the system works as a depth-first search on the interconnecting tree.
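A toy sketch of that idea follows: load information flows from children to parents, and placing a process walks down toward the least-loaded subtree. The Node class and the unit-load counter are illustrative assumptions, not TLBA's actual message protocol.

# Toy TLBA-style tree: min_load() propagates load info up the tree,
# place() searches down it for the best computer to run a new process.
class Node:
    def __init__(self, name, children=()):
        self.name, self.children, self.load = name, list(children), 0

    def min_load(self):
        # Level-N nodes report upward: the lightest load in this subtree.
        return min([self.load] + [c.min_load() for c in self.children])

    def place(self):
        best_child = min(self.children, key=Node.min_load, default=None)
        if best_child is None or self.load <= best_child.min_load():
            self.load += 1          # run the process here
            return self.name
        return best_child.place()   # descend into the lightest subtree

root = Node("root", [Node("a", [Node("a1"), Node("a2")]), Node("b")])
for _ in range(3):
    print(root.place())   # root, then a, then a1: the load spreads out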

6. ARCHITECTURE:- We will study the architecture in terms of the Open Grid Services Architecture (OGSA), developed by the members of the Global Grid Forum (GGF). Building on existing Web services standards, the OGSA defines a grid service as a Web service that conforms to a particular set of conventions. Working groups in organizations like the Global Grid Forum and OASIS are busy defining an array of grid standards in areas like:

Applications and programming models, Architecture, Data management, Security, Performance, Scheduling and resource management.

7. Grid Components:-

7.1 Portal/User Interface:- A grid user should not see all of the complexities of the computing grid. From this perspective, the user sees the grid as a virtual computing resource, just as the consumer of power sees the receptacle as an interface to a virtual generator.

7.2 Security:- At the base of any grid environment, there must be mechanisms to provide security, including authentication, authorization, data encryption, and so on. The Grid Security Infrastructure (GSI) component of the Globus Toolkit provides robust security mechanisms. The GSI includes an OpenSSL implementation. It provides a single sign-on mechanism, so that once a user is authenticated, a proxy certificate is created and used when performing actions within the grid.
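To illustrate the single sign-on idea only (real GSI uses X.509 proxy certificates, not HMAC tokens), here is a toy sketch in which one authentication yields a short-lived signed credential that is reused for later actions; the secret and token layout are invented for the example.

# Toy single sign-on: authenticate once, then reuse a short-lived signed
# "proxy" credential for every subsequent action within the grid.
import hashlib, hmac, time

CA_SECRET = b"demo-only-secret"    # stands in for the certificate authority

def issue_proxy(user, lifetime_s=3600):
    expires = str(int(time.time()) + lifetime_s)
    payload = f"{user}|{expires}"
    sig = hmac.new(CA_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def verify_proxy(token):
    user, expires, sig = token.rsplit("|", 2)
    expect = hmac.new(CA_SECRET, f"{user}|{expires}".encode(),
                      hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expect) and time.time() < int(expires)

token = issue_proxy("alice")   # created once, after authentication
assert verify_proxy(token)     # presented for each action within the grid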

7.3 Broker:- Once authenticated, the user will be launching an application. Based on the application, and possibly on other parameters provided by the user, the next step is to identify the available and appropriate resources to use within the grid. This task could be carried out by a broker function.

7.4 Scheduler:- Once the resources have been identified, the next logical step is to schedule the individual jobs to run on them. If a set of stand-alone jobs are to be executed with no interdependencies, then a specialized scheduler may not be required. However, if you want to reserve a specific resource or ensure that different jobs within the application run concurrently, then a job scheduler should be used to coordinate the execution of the jobs. The Globus Toolkit does not include such a scheduler, but there are several schedulers available that have been tested with, and can be used in, a Globus grid environment.

7.5 Data Management:- If any data, including application modules, must be moved or made accessible to the nodes where an application's jobs will execute, then there needs to be a secure and reliable method for moving files and data to various nodes within the grid. The Globus Toolkit contains a data management component, Grid Access to Secondary Storage (GASS), with facilities like GridFTP.

7.6 Job and Resource Management:- The Grid Resource Allocation Manager (GRAM) provides the services to actually launch a job on a particular resource, check its status, and retrieve its results when it is complete.
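The broker/scheduler division of labour above can be sketched as follows; the Resource attributes and the shortest-queue policy are illustrative assumptions, not GRAM's interface.

# Sketch of broker -> scheduler: the broker filters appropriate resources,
# the scheduler picks one (here: shortest queue) for the job manager to use.
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    queue_length: int
    has_gpu: bool = False

def broker(resources, needs_gpu):
    # Identify the available and appropriate resources for this application.
    candidates = [r for r in resources if r.has_gpu or not needs_gpu]
    # Scheduler policy: run the job where the queue is shortest.
    return min(candidates, key=lambda r: r.queue_length, default=None)

pool = [Resource("nodeA", 3), Resource("nodeB", 0, has_gpu=True)]
target = broker(pool, needs_gpu=True)
print(f"launch job on {target.name}")   # the job manager then tracks status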

8. Grid Software Components:-

8.1 Management Component:- There is a component that keeps track of the resources available to the grid and which users are members of the grid. There are also measurement components and advanced grid management software.

8.2 Donor Software:- It must be able to receive the executable file, or select the proper one from copies pre-installed on the donor machine. The software is executed and the output is sent back to the requester.

8.3 Submission Software:- Usually any member machine of a grid can be used to submit jobs to the grid and initiate grid queries. However, in some grid systems, this function is implemented as a separate component installed on submission nodes or submission clients.

8.4 Schedulers:- This software locates a machine on which to run a grid job that has been submitted by a user. Schedulers usually react to the immediate grid load and can be arranged hierarchically.

8.5 Communications:- A grid system may include software to help jobs communicate with each other. The application may implement an algorithm that requires the sub-jobs to communicate some information among themselves. The open standard Message Passing Interface (MPI), and any of its several variations, is often included as part of the grid system for just this kind of communication.

8.6 Observation, Management and Measurement:- This software is referred to as a sensor, and allows implementing custom load sensors for resources other than CPU or storage. It forms the basis for grid resource brokering, or for managing priorities more fairly.
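As a minimal example of sub-jobs communicating through MPI, the sketch below assumes the third-party mpi4py package; each rank does some local work and rank 0 combines the results.

# Sub-jobs exchanging data via MPI (launch with: mpiexec -n 4 python subjobs.py).
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()     # this sub-job's identity within the grid job

partial = sum(range(rank * 1000, (rank + 1) * 1000))   # local computation
total = comm.reduce(partial, op=MPI.SUM, root=0)       # communicate results

if rank == 0:
    print("combined result:", total)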

9. Example of Grid Computing:- United Devices Cancer Research Project

The United Devices Cancer Research Project conducts research to uncover new cancer drugs through the combination of chemistry, computers, specialized software, and organizations and individuals who are committed to fighting cancer. The research centers on proteins that have been determined to be possible targets for cancer therapy. These go through a process called "virtual screening", in which special analysis software (LigandFit) identifies molecules that interact with these proteins and determines which of the molecular candidates has a high likelihood of being developed into a drug. The process is similar to finding the right key to open a special lock by looking at millions upon millions of molecular keys.

Participants in the United Devices Cancer Research Project are sent a ligand library over the Internet. Their PC analyzes the molecules using docking software called LigandFit by Accelrys. The LigandFit software analyzes the molecular data by using a three-dimensional model to attempt to interact with a protein binding site. When a ligand docks successfully with a protein, the resulting interaction is scored, and the interactions that generate the highest scores are recorded and filed for further evaluation.
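The score-and-shortlist loop can be pictured with a toy sketch; the docking_score function below is a placeholder stand-in, not LigandFit's actual chemistry, and the ligand strings are invented.

# Toy screening loop: score every molecular "key" against the target
# binding site and keep only the highest-scoring interactions.
def docking_score(ligand, binding_site):
    # Placeholder: shared characters stand in for geometric/chemical fit.
    return len(set(ligand) & set(binding_site)) / len(set(binding_site))

library = ["CCO", "CCN", "c1ccccc1O", "CC(=O)O"]   # invented ligand library
site = "c1ccc(O)cc1"                               # invented binding site
scores = {mol: docking_score(mol, site) for mol in library}
shortlist = sorted((m for m, s in scores.items() if s > 0.5),
                   key=lambda m: -scores[m])
print(shortlist)   # recorded and filed for further evaluation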

10. Business & Technological Benefits from Grid Computing:-

Increase productivity by providing users the resources they need on demand; respond quickly to changing business and market demands; create virtual organizations that can share resources and data; exploit underutilized resources; harness parallel CPU capacity; balance load across resources; etc.

Comparing with other distributed computing systems:-

Resource Management (i.e. memory, objects, storage, network access, etc.):
- Cluster: Centralized
- Grid: Distributed
- P2P: Distributed

Resource Ownership:
- Cluster: Singular (often locked to a single node to prevent data corruption)
- Grid: Singular or multiple, varies from platform to platform
- P2P: Singular, multiple, or distributed, depending on circumstance and architecture

Method of Resource Allocation / Scheduling:
- Cluster: Centralized, allocated according to configuration
- Grid: Decentralized
- P2P: N/A; there is no single permanent host for centralized data or resource management, everything is transient

External Representation:
- Cluster: Single image
- Grid: Single or multiple image(s)
- P2P: Unknown; it is circumstantial

Inter-Operability:
- Cluster: Guaranteed within a cluster
- Grid: Enforced within a framework
- P2P: Multiple competing standards

Suggested Equipment:
- Cluster: Mostly high-end, high-capability systems
- Grid: High-end or commodity systems
- P2P: Any type, including wireless devices and embedded systems

Scaling:
- Cluster: 2- to 16-way (although theoretically 128-way is possible)
- Grid: Two to thousands of units connected
- P2P: Theoretically infinite (in actuality, it depends on network backbone transmission speed, number of clients, and type of transmission protocol)

Discovery Mechanism:
- Cluster: Defined membership (static or dynamic)
- Grid: Centralized index, as well as multiple decentralized mechanisms
- P2P: Always a decentralized discovery mechanism

11. Limitations of Grid Computing:-

Grid systems are less dynamic, scalable and fault-tolerant as compared to P2P systems.

Not every application is suitable or enabled for running on a grid.

Cellular wireless networks are more constrained than traditional wired networks because of the limitations of bandwidth, processing power and memory.

12. Globus Grid Toolkit:- The open source Globus Toolkit is a fundamental enabling technology for the "Grid", which includes software services and libraries for resource monitoring, discovery and management, plus security and file management. In addition to being a central part of science and engineering projects that total nearly a half-billion dollars internationally, the Globus Toolkit is a substrate on which leading IT companies are building significant commercial Grid products.

13. The Future:- Before grid computing moves into the commercial mainstream, CEOs need to learn more about the technology and its possibilities, and identify ways they can use it. But other problems, like security, standardization, and new protocols for bringing together a number of operating systems, vendor platforms and applications, need to be solved before grid computing becomes truly widespread, particularly in the context of inter-enterprise grids, utility models and, ultimately, a global grid. A number of nonprofit groups such as the Global Grid Forum, the Globus Project and the New Productivity Initiative are working on security and standardization issues.

14. Conclusion:- So we see, grid computing is the ultimate "killer" technology. Grid computing can be implemented using different technologies like XML, Java and CORBA, and it dynamically allocates resources. The basic part is the scheduler of the grid computing system. The grid can be implemented on an intranet or on the Internet. The grid is used for large computations in fields such as biotechnology, the automobile industry, medicine and many other fields. The grid can be implemented depending on the requirements of the application, security, and the priority of the task to be done. The scheduler is the core item of the grid; its algorithm can depend on priority, on the resources available, or on the type of network available. Although there are some limitations in wireless environments, grid computing is the future of computing and will explore new ways of computing in the enterprise environment.
