Information realization with statistical predictive inferences and coding form

D. Mukherjee
Sir Padampat Singhania University, Udaipur 313601, Rajasthan, India
debasismukherjee1@gmail.com

P. Chakrabarti*, A. Khanna, V. Gupta
Sir Padampat Singhania University, Udaipur 313601, Rajasthan, India
*prasun9999@rediffmail.com
Abstract—The paper deals with information realization in the case of grid topology. Nodal communication strategies with clusters have also been cited. Information prediction has been pointed out with a relevant statistical method, forward sensing, backward sensing and a cumulative frequency form. Binary tree classifier theory has been applied for information grouping. The paper also deals with a comparative analysis of information coding.
Keywords—grid topology, forward sensing, backward sensing, binary tree classifier, information coding

I. INFORMATION MERGING IN GRID IN DIAGONAL APPROACH

[Figure: an M x N grid with node A at the top and node B at the bottom; M rows give (m-1) paths and N columns give (n-1) paths; * marks the path of traversal.]
In order to solve complex problems in artificial intelligence one needs both a large amount of knowledge and some mechanism for manipulating that knowledge to create solutions to new problems. Basically, knowledge is a mapping of different facts with the help of appropriate functions; e.g. "Earth is a planet" can be realized as a function planet(Earth). Information merging can be realized as combining different pieces of information to arrive at a conclusion. The different information elements can be related in different ways, i.e. either in a hierarchy, in the form of a graph, or even a mesh. Consider a mesh of size m x n, i.e. m rows and n columns; if each intersection point has an information element placed on it, then one way of merging element A with B is covering a rectilinear path of length (5 + N) (here m = 8 and n = 9). If the weight of covering each path segment is considered the same, then in the diagonal approach we can instead find a path with a diagonal stretch of length 5√2 followed by a linear stretch of length (N − 5), thus obtaining a shorter path; the same can also be determined by graph algorithms such as Dijkstra's shortest-path algorithm or Kruskal's algorithm for a minimum spanning tree. If each path is considered to be of zero weight, then interestingly there is no sense in travelling a path from A to B at all: we can directly merge the two points, i.e. we take point A and directly merge it with point B; in such a case we need some stack-like mechanism to determine the order in which the nodes arrive and are merged.
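The route comparison above can be checked numerically. A minimal sketch, assuming unit-weight edges, the 5-step diagonal of the example, and the m = 8, n = 9 grid (the rectilinear length 5 + N is a reconstruction of the garbled original figure):

```python
import math

# Rectilinear route of the example: 5 row-steps plus N column-steps (N = 9).
N = 9
straight = 5 + N

# Diagonal approach: 5 diagonal steps of length sqrt(2) each,
# then (N - 5) linear steps, as described in the text.
diagonal = 5 * math.sqrt(2) + (N - 5)

# The diagonal route is strictly shorter, since 5*sqrt(2) < 10.
print(round(straight, 3), round(diagonal, 3))
```
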
Fig. 1: Information merging in mesh/grid

The above concept can be realized in DDM (Distributed Data Mining), where a large amount of geographically scattered knowledge is merged and mined to derive conclusions and make decisions, e.g. GIS, the Geographical Information System, which combines cartography (the art of making maps) with various information elements (sources) to derive decision-support results such as which route to choose for a given destination.

II. INFORMATION MERGING IN CLUSTER NETWORKS

This section mainly focuses on the nodal communication between the farthest nodes in an N*N structure [1]; here information realization indicates nodal messages. Let us assume each cluster consists of 16 nodes and then try to communicate between the source and the destination node as described in Fig. 1. The point to be noted here is that to establish the communication link between adjacent elements or units of the cluster, we have to carry out the communication in just the reverse order in the two adjacent elements. The order of the communication is as follows.
http://sites.google.com/site/ijcsis/ ISSN 1947-5500
(IJCSIS) International Journal of Computer Science and Information Security, Vol. 8, No. 6, September 2010
The condition can be visually imagined as follows. Let us first take the case when there is only one element, i.e. a 1*1 structure. In this particular case, if we want to communicate between the farthest nodes, there will be only one node between the source and the destination. If we denote the count by the function f(x), then f(x) = 1; the intermediate node is node 1. Now let us consider the case of the 2*2 matrix: here f(x) = 1 + 2 = 3; the intermediate nodes are 1(2,3), 2(4). For the 3*3 matrix, f(x) = 1 + 2 + 2 = 5, and similarly for the 4*4 matrix, f(x) = 1 + 2 + 2 + 2 = 7. In these cases we had only 4 elements in a ring.

Suppose instead we have 8 elements in the ring; then we have to compute the number of nodes required to establish the connection between the farthest nodes. Justification: for the 1*1 matrix, to communicate between the farthest nodes we need 3 nodes, i.e. f(x) = 3; for the 2*2 matrix we need 7 nodes, i.e. f(x) = 3 + 4; for the 3*3 matrix we need 11 nodes, i.e. f(x) = 3 + 4 + 4; for the 4*4 matrix we need 15 nodes, i.e. f(x) = 3 + 4 + 4 + 4. In the case of 16 elements in a ring we can proceed likewise: for the 1*1 matrix we need 7 nodes, i.e. f(x) = 7; for the 2*2 matrix, f(x) = 7 + 8 = 15; for the 3*3 matrix, f(x) = 7 + 8 + 8 = 23; for the 4*4 matrix, f(x) = 7 + 8 + 8 + 8 = 31. The total number of nodes can therefore be derived from the general formula

f(x) = (N/2 − 1) + (M − 1)·(N/2)

where N = number of nodes present in the unit or element and M = dimension of the square matrix. The data can be represented in tabular form as follows:

Matrix   N = 4   N = 8   N = 16
1*1        1       3       7
2*2        3       7      15
3*3        5      11      23
4*4        7      15      31

[Figure: bar chart of the table above; x-axis: the M*M matrix from 1*1 to 4*4; y-axis: number of communication nodes (0 to 35); three colours for N = 4, 8 and 16 nodes per element.]

Fig. 2: Nodal communication in cluster

The x-axis represents the M*M matrix where M varies from 1 to 4. The y-axis represents the optimum number of communication nodes required in establishing the path between the source node and the farthest node. The number of nodes per element is indicated by the three colours.

III. STATISTICAL INFERENCE OF FUTURISTIC VALUES

In statistical inference the input and output of a situation are related by a certain relation or function, based on which we infer futuristic values. Consider a real-time situation in which a given input parameter is observed over time between instants T1 and T2. Given the relation [2] Mt = a.e^t, then Mavg = √(Mt1 . Mt2).
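For the model Mt = a.e^t, the geometric mean of two observations equals the model value at the midpoint instant, which motivates Mavg = √(Mt1 . Mt2). A minimal numerical sketch (the values a = 2.0, t1 = 1.0, t2 = 3.0 are illustrative assumptions, not from the paper):

```python
import math

# Illustrative (assumed) parameters for the model M_t = a * e^t.
a, t1, t2 = 2.0, 1.0, 3.0

m_t1 = a * math.exp(t1)
m_t2 = a * math.exp(t2)

# Geometric mean of the two observations ...
m_avg = math.sqrt(m_t1 * m_t2)

# ... equals the model evaluated at the midpoint time (t1 + t2) / 2,
# since sqrt(a^2 * e^(t1 + t2)) = a * e^((t1 + t2) / 2).
midpoint_value = a * math.exp((t1 + t2) / 2)

print(math.isclose(m_avg, midpoint_value))
```
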
Case 1: If we take observations at equal instants of time, then
Mt1 = a.e^t1
Mt2 = a.e^(t1+k)
Mt3 = a.e^(t1+2k)
General term: Mtn = a.e^(t1+(n−1)k)
i.e. the values of the output M form a G.P. series of increasing order with common ratio e^k.

Case 2: If we take observations at unequal timing intervals, then
T1 = t1 => Mt1 = a.e^t1
T2 = t1 + k1 => Mt2 = a.e^(t1+k1)
T3 = t2 + k2 = t1 + (k1 + k2) => Mt3 = a.e^(t1+k1+k2)
General term: Tn = t(n−1) + k(n−1) = t1 + (k1 + k2 + … + k(n−1))
=> Mtn = a.e^(t1+k1+k2+…+k(n−1)) = a.e^(t1+Ktotal)
i.e. any futuristic value, say at instant tn, is Mtn = a.e^t1 . e^Ktotal (observed value).

Given Mt = a.e^t, taking logarithms on both sides we have ln(Mt) = ln(a) + t, i.e. ln(Mtn) = ln(a) + tn = ln(a) + t1 + k1 + k2 + … + k(n−1). Thus we have obtained a log-linear model of the form Y = m.X + C for the function Mt = a.e^t, using which we can calculate or predict futuristic values over increased ranges. If we try to minimize the value of Ktotal we can do so by making k1 = k2 = … = k(n−1), which is the same as Case 1.

IV. PROJECTION OF SENSED INFORMATION

Let I = {i1, i2, …, in} be the set of sensed information. In the process of appropriate feature observation, forward selection, backward elimination and decision-based induction methods are applied.

A. Forward selection based information sensing
Let I = {i1, i2, …, in} be the set of information estimates of various trends noted after observation at the respective timing instants Y = {y1, y2, …, yn}. The accuracy measurement is to be calculated first, based on comparison analysis. The minimum deviation reflects a high accuracy level of prediction, and that information will be selected. In this manner { }, {best information}, {first two}, … will be selected.

B. Backward elimination based information sensing
Using backward elimination, at each stage one information element is eliminated, and thereby after the final screening stage the projected set reveals the final optimum information space.

C. Cumulative frequency based information sensing
Consider the following observations and the information involved in each:

Observation   Information involved
g1            i1, i3, i4, i6
g2            i3, i5
g3            i4, i5, i6
g4            i2, i3, i5
g5            i1, i2
g6            i1, i2, i3, i6

Table 1: Association of information against each observation

Features        i1     i2     i3     i4     i5     i6
Initial value   0.1    0.2    0.3    0.4    0.5    0.6
Count           3      3      4      2      3      3
Value           0.3    0.6    1.2    0.8    1.5    1.8
(Value)^2       0.09   0.36   1.44   0.64   2.25   3.24

Table 2: Determination of count and value

Now CF = (x, y, z), where x = number of elements, y = linear sum of the elements and z = sum of the squares of the elements [3].

V. BINARY TREE BASED GAIN CLASSIFIER

In this section information represents gain analysis. A search tree [4] can be formed based on the initial search term and its gradual sub-terms during the process of matching. Thereby the level is increased: the initial search term is the root, and the final term fully matching the context of the user's desire is a leaf node. In Fig. 3, G0 is the root, i.e. the initial search term. If a user wants to analyze a gain classification further, then each search term is identified by a binary code, and by giving the code number the user can analyze the position of the gain estimate in the model. The concept of coding is as follows.
[Figure: binary tree with root G0 at Level 0; nodes G1,1 and G1,2 at Level 1; G2,1 to G2,4 at Level 2; leaf nodes G3,1 to G3,8 at Level 3.]

Fig. 3: Binary tree based gain classifier
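The leaf codes of the tree in Fig. 3 satisfy the equality ∑ 1/2^Li = 1 over the leaves, as proved in the theorem that follows. A minimal sketch that checks it for this particular tree:

```python
# Leaf codes of the Fig. 3 tree: 0 for a left branch, 1 for a right branch,
# giving the eight 3-bit codes 000 ... 111 for G3,1 ... G3,8.
leaf_codes = [format(i, "03b") for i in range(8)]

# Kraft-style sum over the leaves: each code of length L contributes 1/2^L.
kraft_sum = sum(1 / 2 ** len(code) for code in leaf_codes)

print(leaf_codes[0], leaf_codes[-1], kraft_sum)  # 000 111 1.0
```
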
Value = 0 if the search term is a left child of its parent node, and 1 otherwise.

Theorem: In the process of coding, ∑(i=1 to N) 1/2^Li = 1, where Li is the length of the code of the i-th leaf node in the tree, N is the total number of leaf nodes and 1 ≤ i ≤ N.

Proof: From Fig. 3 the codes of the leaf nodes are as follows:

Node   G3,1  G3,2  G3,3  G3,4  G3,5  G3,6  G3,7  G3,8
Code   000   001   010   011   100   101   110   111

So N = 8 and each leaf node has an identical code length of 3. Therefore 1/2^Li = 1/2^3 = 1/8 for each leaf, and the sum over the eight leaves is 1.

We now design a binary tree based classifier taking some parameters for examination purposes, represent each point by a code generated by arithmetic coding, and finally represent the same on the basis of set theory. We assume that the available gain set is G = {g1, g2, g3, g4} and that the parameters on which the examination is to be carried out are the elements of the set P = {p1, p2, p3}. The results of the examination are denoted as Boolean variables, with NO = 0 and YES = 1.

At the initial timing instant the parameter p1 is applied for testing purposes. Hence, in the initial stage, there will be at least one class and at most two classes. In the second level the parameter p2 is applied and the classes are defined accordingly. In the final stage the parameter p3 is applied. If we view the classifier as a binary tree, we can assign a code to each class such that a 'NO' for a particular examination is denoted by '0' and a 'YES' by '1'. In the initial stage, the class containing the elements with a negative result for p1 is C1 = {d1,d2}, while C2 = {d2,d4}. In this manner the tree is constructed such that the code word for each class is denoted by ijk, where i ∈ {0,1}, j ∈ {0,1} and k ∈ {0,1}.

For p1: C1 = {d1,d2}, C2 = {d2,d4}
For p2: C1 = {00}, C2 = {01}, C3 = {10}, C4 = {11}
For p3:

Class  ijk  False       True
C1     000  p1, p2, p3  —
C2     001  p1, p2      p3
C3     010  p1, p3      p2
C4     011  p1          p2, p3
C5     100  p2, p3      p1
C6     101  p2          p1, p3
C7     110  p3          p1, p2
C8     111  —           p1, p2, p3

In the initial stage the classes are C1, C2, based on the parameter p1. In the second stage the classes are C1, C2, C3, C4, based on p2. In the last stage the classes are C1, C2, …, C8, based on p3. This means that if 'n' is the number of parameters involved in the system for examination purposes, then the maximum length of a code word for a particular class is 'n' and the number of classes is 2^n, provided the classes are distinct in nature.

VI. CODED INFORMATION SENSING

Let the original message be "FATHER". For the first character, µ-value = 1/((position of that character in the alphabet) + π/100). Its offset value = ceiling of (µ-value * 10). The weight is given by the character's position in the alphabet string [5]. Therefore total_value = offset value * weight. From the next character onwards, µ-value_next = 1/(|position of next − position of previous| + π/100), and the total_value is calculated in a similar manner. The bias value equals the total number of characters in the message. Compute net_value as the accumulated sum of all total_values minus the bias value, and let it be x (say).

Mode           Operation
0 ≤ x < 100    Reverse the message.
100 ≤ x < 150  Circular left shift of the message by n/2 bits, where n = bias value.
150 ≤ x < 200  Circular right shift of the message by n/2 bits.

Iteration 1: µF = 1/((position of 'F' in the alphabet) + π/100) = 1/(6 + π/100) = 0.165798547. Offset value = ceiling of (0.165798547 * 10) = 2. Weight = position of 'F' = 6. Thus total_value = 2 * 6 = 12.
Iteration 2: µA = 1/(|position of 'A' − position of 'F'| + π/100) = 1/(|1 − 6| + π/100) = 1/5.031415927 = 0.198751209. Offset value = ceiling of (0.198751209 * 10) = 2. Weight = 1. Thus total_value = 2 * 1 = 2.
Iteration 3: µT = 1/(|position of 'T' − position of 'A'| + π/100) = 1/(|20 − 1| + π/100) = 1/19.03141593 = 0.052544697. Offset value = ceiling of (0.052544697 * 10) = 1. Weight = 20. Thus total_value = 1 * 20 = 20.
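The per-character procedure above (continued through iteration 6 further on) can be sketched end to end. A minimal sketch, with the function name encode_message being illustrative; the shifts are implemented at character rather than bit level, and any x outside the tabulated ranges is assumed (not stated in the paper) to fall through to the right-shift branch:

```python
import math

def encode_message(msg: str) -> str:
    """Mu-value / offset / weight coding scheme for an uppercase message."""
    positions = [ord(ch) - ord('A') + 1 for ch in msg]  # A=1 ... Z=26
    totals = []
    for i, pos in enumerate(positions):
        if i == 0:
            mu = 1 / (pos + math.pi / 100)
        else:
            mu = 1 / (abs(pos - positions[i - 1]) + math.pi / 100)
        offset = math.ceil(mu * 10)
        totals.append(offset * pos)      # total_value = offset value * weight
    n = len(msg)                         # bias value
    x = sum(totals) - n                  # net_value
    s = (n // 2) % len(msg)              # shift amount n/2 (character-level)
    if 0 <= x < 100:                     # mode 1: reverse the message
        return msg[::-1]
    if 100 <= x < 150:                   # mode 2: circular left shift
        return msg[s:] + msg[:s]
    return msg[-s:] + msg[:-s] if s else msg  # mode 3: circular right shift

print(encode_message("FATHER"))  # REHTAF (net_value = 74, so: reverse)
```
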
[Figure: coding model — offset values c1, …, cn are multiplied by weights w1, …, wn, summed (+) together with the bias wb, passed through a comparator to the encryption system, which produces the output cipher.]

Fig. 4: Coding model (ci = offset value for i = 1 to n, wi = weight, wb = bias value)

Iteration 4: µH = 1/(|position of 'H' − position of 'T'| + π/100) = 1/(|8 − 20| + π/100) = 1/12.03141593 = 0.083115736. Offset value = ceiling of (0.083115736 * 10) = 1. Weight = 8. Thus total_value = 1 * 8 = 8.
Iteration 5: µE = 1/(|position of 'E' − position of 'H'| + π/100) = 1/(|5 − 8| + π/100) = 1/3.031415927 = 0.32987885. Offset value = ceiling of (0.32987885 * 10) = 4. Weight = 5. Thus total_value = 4 * 5 = 20.
Iteration 6: µR = 1/(|position of 'R' − position of 'E'| + π/100) = 1/(|18 − 5| + π/100) = 1/13.03141593 = 0.076737632. Offset value = ceiling of (0.076737632 * 10) = 1. Weight = 18. Thus total_value = 1 * 18 = 18.

Now wb = bias value = number of characters in "FATHER" = 6. So net_value = accumulated sum of all total_values − wb = (12 + 2 + 20 + 8 + 20 + 18) − 6 = 74. This falls in the range 0 ≤ x < 100, so "FATHER" is reversed. The resultant cipher is therefore "REHTAF".

VII. CONCLUSION

The paper points out information merging in grid and cluster network models. Statistical means of information prediction, as well as forward, backward and cumulative frequency based schemes, have been analyzed. Binary tree based information classification and coded information have been justified with relevant mathematical analysis.

REFERENCES

[1] A. Kumar, P. Chakrabarti, P. Saini, V. Gupta, "Proposed techniques of random walk, statistical and cluster based node realization", communicated to IEEE conf. Advances in Computer Science ACS 2010, India, Dec. 2010.
[2] P. Chakrabarti, S. K. De, S. C. Sikdar, "Statistical Quantification of Gain Analysis in Strategic Management", IJCSNS, Korea, Vol. 9, No. 11, pp. 315-318, 2009.
[3] P. Chakrabarti, "Data mining - A Mathematical Realization and cryptic application using variable key", Advances in Information Mining, Vol. 2, No. 1, pp. 18-22, 2010.
[4] P. Chakrabarti, P. S. Goswami, "Approach towards realizing resource mining and secured information transfer", IJCSNS, Korea, Vol. 8, No. 7, pp. 345-350, 2008.
[5] P. Chakrabarti, "Attacking Attackers in Relevant to Information Security", Proceedings of RIMT-IET, Mandi Gobindgarh, pp. 69-71, March 29, 2008.

About authors:

Debasis Mukherjee (20/08/80) has been pursuing a Ph.D. at USIT, GGSIPU, Delhi, India since 2010. He received the M.Tech. degree in VLSI Design from CDAC Noida in 2008 and a bachelor degree in Electronics and Instrumentation Engineering from BUIE, Bankura, West Bengal, India in 2003. He achieved first place in the district in the "Science Talent Search Test" 1991. He has publications of repute in IEEE conferences.

Dr. P. Chakrabarti (09/03/81) is currently serving as Associate Professor in the Department of Computer Science and Engineering of Sir Padampat Singhania University, Udaipur. Previously he worked at Bengal Institute of Technology and Management, Oriental Institute of Science and Technology, Dr. B. C. Roy Engineering College, Heritage Institute of Technology, and Sammilani College. He obtained his Ph.D. (Engg) degree from Jadavpur University in September 2009, an M.E. in Computer Science and Engineering in 2005, an Executive MBA in 2008 and a B.Tech. in Computer Science and Engineering in 2003. He is a life member of the Indian Science Congress Association, Calcutta Mathematical Society, Calcutta
Statistical Association, Indian Society for Technical Education, Cryptology Research Society of India and IAENG (Hong Kong), a member of CSTA (USA), an annual member of the Computer Society of India, VLSI Society of India and IEEE (USA), a senior member of IACSIT (Singapore), and a selected member of the IAENG Societies of Artificial Intelligence, Computer Science and Data Mining. He is a reviewer for the International Journal of Information Processing and Management (Elsevier), the International Journal of Computers and Applications, Canada, and the International Journal of Computer Science and Information Security (IJCSIS, USA), and an editorial board member of the International Journal of Engineering and Technology, Singapore and the International Journal of Computer and Electrical Engineering. He has about 100 papers in national and international journals and conferences to his credit and two patents (filed). He has held several visiting assignments at BHU Varanasi, IIT Kharagpur, Amity University, Kolkata, et al.
A. Khanna and V. Gupta are third-year students of the Information Technology and Computer Science & Engineering branches, respectively, of Sir Padampat Singhania University.