
Information Realization with Statistical Predictive Inferences and Coding Form

D. Mukherjee
Sir Padampat Singhania University, Udaipur-313601, Rajasthan, India
debasismukherjee1@gmail.com

P. Chakrabarti*, A. Khanna, V. Gupta
Sir Padampat Singhania University, Udaipur-313601, Rajasthan, India
*prasun9999@rediffmail.com

Abstract — The paper deals with information realization in the case of grid topology. Nodal communication strategies with clusters have also been cited. Information prediction has been pointed out with a relevant statistical method, forward sensing, backward sensing and cumulative frequency form. Binary tree classifier theory has been applied for information grouping. The paper also deals with a comparison analysis of information coding.

Keywords- grid topology, forward sensing, backward sensing, binary tree classifier, information coding

I. INFORMATION MERGING IN GRID IN DIAGONAL APPROACH

In order to solve complex problems in artificial intelligence, one needs both a large amount of knowledge and some mechanism for manipulating that knowledge to create solutions to new problems. Basically, knowledge is a mapping of different facts with the help of appropriate functions; e.g. "Earth is a planet" can be realized as the function planet(Earth). Information merging can be realized as combining different pieces of information to arrive at a conclusion. The different information elements can be related in different ways, i.e. either in a hierarchy, in the form of a graph, or even a mesh. Consider a mesh of size m × n, i.e. m rows and n columns; if each intersection point has an information element placed on it, then one way of merging element A with B can be covering a path of length (5×N) (here m = 8 and n = 9). If the weight of covering each path is considered the same, then in the diagonal approach we can find a path of diagonal nature of length 5√2 and then travel a length (N−5) in linear fashion, thus finding a shortest path; the same can also be determined by graph algorithms like Dijkstra's algorithm, or Kruskal's algorithm for a minimum spanning tree. If each path is considered to be of zero weight, then interestingly there is no sense in travelling a path from A to B, i.e. we can directly merge the two points: we take point A and directly merge it with point B. In such a case we need some stack-like mechanism to determine the order in which the nodes arrive and are merged.
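The diagonal-then-linear shortest path described above can be checked with Dijkstra's algorithm. The sketch below is an illustration (not from the paper): it runs Dijkstra on an 8 × 9 grid of information elements, with straight moves costing 1 and diagonal moves costing √2.

```python
import heapq
import math

def grid_shortest_path(m, n, src, dst):
    """Dijkstra over an m x n grid; straight moves cost 1, diagonals sqrt(2)."""
    SQRT2 = math.sqrt(2.0)
    moves = [(-1, 0, 1.0), (1, 0, 1.0), (0, -1, 1.0), (0, 1, 1.0),
             (-1, -1, SQRT2), (-1, 1, SQRT2), (1, -1, SQRT2), (1, 1, SQRT2)]
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == dst:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc, w in moves:
            nr, nc = r + dr, c + dc
            if 0 <= nr < m and 0 <= nc < n and d + w < dist.get((nr, nc), float("inf")):
                dist[(nr, nc)] = d + w
                heapq.heappush(pq, (d + w, (nr, nc)))
    return float("inf")
```

For corner-to-corner traversal of an 8 × 9 grid this yields 7√2 + 1: a diagonal run followed by a straight run, exactly the shape of path the diagonal approach describes.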

Fig. 1: Information merging in mesh/grid. * shows the path of traversal between A and B; M rows give (m−1) paths, N columns give (n−1) paths.

The above concept can be realized in DDM (Distributed Data Mining), where a large amount of geographically scattered knowledge is merged and mined to derive conclusions and make decisions, e.g. GIS, the Geographical Information System, which uses cartography (the art of making maps) with various information elements (sources) to derive decision-support results such as which route to choose for a given destination.

II. INFORMATION MERGING IN CLUSTER NETWORKS

This section mainly focuses on the nodal communication between the farthest nodes in an N*N structure [1]; here information realization indicates a nodal message. Let us assume each cluster consists of 16 nodes and then try to communicate between the source and the destination node as described in Fig. 1. The point to be noted here is that to establish the communication link between adjacent elements or units of the cluster, the communication must run in just the reverse order in the two adjacent elements.

(IJCSIS) International Journal of Computer Science and Information Security, Vol. 8, No. 6, September 2010. http://sites.google.com/site/ijcsis/ ISSN 1947-5500

The condition can be visually imagined as follows. Let us first consider the case when there is only one element, i.e. a 1*1 matrix. If we want to communicate between the farthest nodes, there will be only 1 node between the source and the destination; denoting this by the function f(x), the value is f(x) = 1. The intermediate node is l1. Now consider the 2*2 matrix: the value here will be f(x) = 1 + 2 = 3; the intermediate nodes are 1(2,3), 2(4). For the 3*3 matrix the value of the function is f(x) = 1 + 2 + 2 = 5, and similarly for the 4*4 matrix we get f(x) = 1 + 2 + 2 + 2 = 7. Here we had only 4 elements in a ring. Suppose we have 8 elements in the ring; in that case we have to compute the number of nodes required to establish the connection between the farthest nodes. Justification: for the 1*1 matrix, to communicate between the farthest nodes we need 3 nodes, i.e. f(x) = 3; for the 2*2 matrix we need 7 nodes, i.e. f(x) = 3 + 4; for the 3*3 matrix we need 11 nodes, i.e. f(x) = 3 + 4 + 4; and for the 4*4 matrix we need 15 nodes, i.e. f(x) = 3 + 4 + 4 + 4. In the case of 16 elements in a ring we can proceed similarly: for the 1*1 matrix we need 7 nodes, i.e. f(x) = 7; for the 2*2 matrix 15 nodes, i.e. f(x) = 7 + 8; for the 3*3 matrix 23 nodes, i.e. f(x) = 7 + 8 + 8; and for the 4*4 matrix 31 nodes, i.e. f(x) = 7 + 8 + 8 + 8. The total number of nodes can thus be derived by the general formula (N/2 − 1) + (M − 1)·(N/2), where N = number of nodes present in the unit or element and M = dimension of the square matrix. The data can be represented in tabular form as follows:

No. of nodes | 1*1 | 2*2 | 3*3 | 4*4
4            |  1  |  3  |  5  |  7
8            |  3  |  7  | 11  | 15
16           |  7  | 15  | 23  | 31
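The general formula and the table can be cross-checked in a few lines; this is a sketch, and the function name is ours:

```python
def comm_nodes(ring_size, m):
    """Optimum number of communication nodes between the farthest nodes of an
    m x m cluster matrix whose elements are rings of `ring_size` nodes,
    per the formula f = (N/2 - 1) + (M - 1) * (N/2)."""
    return (ring_size // 2 - 1) + (m - 1) * (ring_size // 2)

# reproduce the rows of the table above
rows = {n: [comm_nodes(n, m) for m in (1, 2, 3, 4)] for n in (4, 8, 16)}
```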


Fig.2: Nodal communication in cluster

The x-axis represents the M*M matrix, where M varies from 1 to 4. The y-axis represents the number of optimum communication nodes required in establishing the path between the source node and the farthest node. The number of nodes per element is indicated by the three colors.

III. STATISTICAL INFERENCE OF FUTURISTIC VALUES

In statistical inference the input and output of a situation are related by a certain relation or function, based on which we infer futuristic values. Consider a real-time situation in which a given input parameter is observed over time between instants T1 and T2, given the relation [2]

M_t = a·e^t , then M_avg = √(M_t1 · M_t2)
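Since M_t = a·e^t, the geometric mean √(M_t1 · M_t2) equals a·e^((t1+t2)/2), i.e. the value of M at the midpoint instant. A quick numerical check, with illustrative values assumed (a = 2, t1 = 1, t2 = 3):

```python
import math

def m_value(a, t):
    # exponential trend M_t = a * e^t
    return a * math.exp(t)

def m_avg(m1, m2):
    # geometric mean of two observed values of M
    return math.sqrt(m1 * m2)

# geometric mean of M at t = 1 and t = 3 equals M at the midpoint t = 2
avg = m_avg(m_value(2.0, 1.0), m_value(2.0, 3.0))
```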


Case 1: If we take observations at equal instants of time, then
M_t1 = a·e^t1
M_t2 = a·e^(t1+k)
M_t3 = a·e^(t1+2k)
General term: M_tn = a·e^(t1+(n−1)k)
i.e. the values of the output M form a G.P. series of increasing order, with common ratio e^k.
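The geometric-progression behaviour under equal spacing can be verified numerically; a, t1 and k below are arbitrary illustrative constants:

```python
import math

a, t1, k = 1.5, 0.2, 0.3  # illustrative constants, not from the paper
# observations at equally spaced instants t1, t1+k, t1+2k, ...
series = [a * math.exp(t1 + i * k) for i in range(5)]
# consecutive ratios are all e^k, so the outputs form a G.P.
ratios = [series[i + 1] / series[i] for i in range(4)]
```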

Case 2: If we take observations at unequal timing intervals, then
T1 = t1 ⇒ M_t1 = a·e^t1
T2 = t1 + k1 ⇒ M_t2 = a·e^(t1+k1)
T3 = t2 + k2 = t1 + (k1 + k2) ⇒ M_t3 = a·e^(t2+k2) = a·e^(t1+k1+k2)
General term: Tn = tn−1 + kn−1 = t1 + (k1 + k2 + k3 + … + kn−1) ⇒ M_tn = a·e^(tn−1+kn−1) = a·e^(t1+k1+k2+k3+…+kn−1) = a·e^(t1+Ktotal)
i.e. any futuristic value, say at instant tn, is M_tn = a·e^t1 · e^Ktotal (observed value)
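Because ln(M_t) = ln(a) + t is linear in t, the constants can be recovered from observations at unequal instants by an ordinary least-squares fit on (t, ln M) pairs, and futuristic values predicted from the fit. A sketch with assumed data (a = 2; instants chosen arbitrarily):

```python
import math

# hypothetical observations of M_t = a * e^t at unequal instants, with a = 2
times = [0.5, 1.2, 2.0, 3.7]
obs = [2.0 * math.exp(t) for t in times]

# least-squares fit of the log-linear model ln(M) = C + m * t
n = len(times)
ys = [math.log(m) for m in obs]
xbar, ybar = sum(times) / n, sum(ys) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(times, ys))
         / sum((x - xbar) ** 2 for x in times))
a_est = math.exp(ybar - slope * xbar)  # recovered constant a

def predict(t):
    # futuristic value at instant t from the fitted model
    return a_est * math.exp(slope * t)
```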

Given M_t = a·e^t, taking log on both sides we have
ln(M_t) = ln(a) + t
i.e. ln(M_tn) = ln(a) + tn
ln(M_tn) = ln(a) + t1 + k1 + k2 + k3 + … + kn−1
Thus we have obtained a log-linear model, of the form Y = m·X + C, for the above function M_t = a·e^t, using which we can calculate or predict the futuristic values for increased ranges. If we try to minimize the value of Ktotal, we can do so by making k1 = k2 = k3 = … = kn−1, which is the same as Case 1.

IV. PROJECTION OF SENSED INFORMATION

Let I = {i1, i2, …, in} be the set of sensed information. In the process of feature-appropriate observation, forward selection, backward elimination and decision-based induction methods are applied.

A. Forward selection based information sensing

Let I = {i1, i2, …, in} be the set of information estimates of various trends noted after observation at the respective timing instants Y = {y1, y2, …, yn}. The accuracy measurement is to be calculated first, based on comparison analysis. The minimum deviation reflects a high accuracy level of prediction, and that information will be selected. In this manner, { }, {best information}, {first two}, … will be selected.

B. Backward elimination based information sensing

Using backward elimination, at each stage one information element is eliminated, and thereby after the final screening stage the projected set reveals the final optimum information space.
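A minimal sketch of the two greedy procedures. The deviation criterion, function names, and stopping size are our assumptions; the paper does not fix them:

```python
def forward_select(deviation, items, k):
    """Greedy forward selection: repeatedly add the item whose inclusion
    keeps deviation lowest, yielding {}, {best}, {first two}, ..."""
    chosen, remaining = [], list(items)
    while remaining and len(chosen) < k:
        best = min(remaining, key=lambda i: deviation(chosen + [i]))
        chosen.append(best)
        remaining.remove(best)
    return chosen

def backward_eliminate(deviation, items, k):
    """Greedy backward elimination: repeatedly drop the item whose removal
    keeps deviation lowest, until k items remain."""
    kept = list(items)
    while len(kept) > k:
        worst = min(kept, key=lambda i: deviation([j for j in kept if j != i]))
        kept.remove(worst)
    return kept

# toy criterion: deviation of the subset's sum from a target of 5
dev = lambda subset: abs(sum(subset) - 5)
fwd = forward_select(dev, [1, 2, 3, 4], 2)
bwd = backward_eliminate(dev, [1, 2, 3, 4], 2)
```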

C. Cumulative frequency based information sensing

OBSERVATION | INFORMATION INVOLVED
g1          | i1, i3, i4, i6
g2          | i3, i5
g3          | i4, i5, i6
g4          | i2, i3, i5
g5          | i1, i2
g6          | i1, i2, i3, i6

Table 1: Association of information against each observation

Feature | Initial value | Count | Value | (Value)^2
i1      | 0.1           | 3     | 0.3   | 0.09
i2      | 0.2           | 3     | 0.6   | 0.36
i3      | 0.3           | 4     | 1.2   | 1.44
i4      | 0.4           | 2     | 0.8   | 0.64
i5      | 0.5           | 3     | 1.5   | 2.25
i6      | 0.6           | 3     | 1.8   | 3.24

Table 2: Determination of count and value

Now CF = (x, y, z), where x = number of elements, y = linear sum of the elements, and z = sum of the squares of the elements [3].

V. BINARY TREE BASED GAIN CLASSIFIER

In this section information represents gain analysis. A search [4] can be formed based on the initial search term and its gradual sub-terms during the process of matching. Thereby the level is increased: the initial search term is the root, and the final term fully matching the context of the user's desire is a leaf node. In Fig. 3, G0 is the root, that is, the initial search term. If a user wants to analyze further gain classification, then identify each search term as a binary code; by giving the code number, he can analyze the position of the gain estimate in the model. The concept of coding is as follows:

LEVEL 0:  G0
LEVEL 1:  G1,1  G1,2
LEVEL 2:  G2,1  G2,2  G2,3  G2,4
LEVEL 3:  G3,1  G3,2  G3,3  G3,4  G3,5  G3,6  G3,7  G3,8

Fig. 3: Binary tree based gain classifier
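One way to realize the binary coding of the gain nodes — an assumed convention, since the paper does not spell the code out: give node G(l, p), the p-th node at level l, the binary string of p − 1 padded to l bits, so the root G0 gets the empty code and, for example, G2,3 gets "10".

```python
def gain_code(level, pos):
    # binary code of node G(level, pos): (pos - 1) written in `level` bits
    assert 1 <= pos <= 2 ** level, "level l holds 2**l nodes"
    return format(pos - 1, "b").zfill(level) if level else ""

def locate(code):
    # invert the coding: a code number identifies (level, position)
    if not code:
        return (0, 1)
    return (len(code), int(code, 2) + 1)
```

With this convention a user can quote a code such as "111" and immediately locate the gain estimate as G3,8 in the tree.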
