I. INTRODUCTION
[Figure 2. 101x10 matrix data]

PCA is a method of modern data analysis. When we observe a complex system, as in neuroscience, web indexing, meteorology, or oceanography, it is tough to figure out what is happening because the data appear clouded, unclear, and even redundant.

The goal of PCA is to find a new, more meaningful basis in which to re-express the data set. The new basis filters out noise and reveals hidden structure [1]. PCA is useful when a large number of variables has to be computed and we wish to reduce them by creating artificial variables [2].

Let us take an example of how PCA is used. A microarray experiment yields data for 101 genes of 101 people; this forms a 101x101 matrix with one axis per gene, so the result is a cloud of values in a multidimensional array. In Fig. 1 the data are unclear and clouded; after applying PCA the data are reduced to the 101x10 graph shown in Fig. 2, which describes the characteristics of the 101 people and shows where the most disease-related genes lie.
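The reduction described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the 101x101 microarray matrix is simulated with random values, and all variable names are my own.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the 101x101 microarray matrix
# (101 people x 101 genes); real data would be loaded instead.
X = rng.normal(size=(101, 101))

# Center each gene (column) on its mean.
Xc = X - X.mean(axis=0)

# Eigendecomposition of the genes' covariance matrix.
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)

# Keep the 10 eigenvectors with the largest eigenvalues
# and project the data onto this new basis.
top10 = eigvecs[:, np.argsort(eigvals)[::-1][:10]]
reduced = Xc @ top10   # 101 people x 10 principal components

print(reduced.shape)   # (101, 10)
```

Projecting onto the ten leading eigenvectors is exactly the 101x101 to 101x10 reduction the example describes.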
ICA is based on statistical computation over random variables. The data are assumed to be a linear or nonlinear mixture of some unknown latent variables. The latent variables are non-Gaussian and mutually independent, and they are called the independent components of the observed data [3].
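For the linear case, this generative model is often written x = A s, where s holds the independent non-Gaussian components and A is an unknown mixing matrix. A minimal sketch with synthetic signals (the Laplace sources and the particular mixing matrix are illustrative assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two mutually independent, non-Gaussian (Laplace) latent components.
s = rng.laplace(size=(2, 1000))

# Unknown mixing matrix: the observed data is the linear mixture x = A s.
A = np.array([[1.0, 0.5],
              [0.3, 1.0]])
x = A @ s

# ICA sees only x and must recover s up to ordering and scaling.
print(x.shape)   # (2, 1000)
```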
A. PCA Algorithm

[Flowchart: compute the eigenvalues and eigenvectors of the covariance matrix; create the feature vector; Final data = Row feature vector transpose x Row data adjust]

B. ICA Algorithm

[Flowchart: whiten the data; set k = 1 initially; let w(k) = E{x (w(k-1)^T x)^3} - 3w(k-1); divide w(k) by its norm]
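The kurtosis-based fixed-point update w(k) = E{x (w(k-1)^T x)^3} - 3w(k-1), followed by normalization, can be sketched for whitened data as below. The synthetic Laplace sources, the mixing matrix, and the iteration cap are assumptions made for the sketch:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data: mix two independent Laplace sources, then whiten.
s = rng.laplace(size=(2, 5000))
x = np.array([[1.0, 0.6], [0.4, 1.0]]) @ s
x = x - x.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(x))
x = E @ np.diag(d ** -0.5) @ E.T @ x   # whitened: covariance ~ identity

# k = 1 initially: random starting weight vector of unit norm.
w = rng.normal(size=2)
w /= np.linalg.norm(w)

for _ in range(200):
    w_prev = w
    # w(k) = E{ x (w(k-1)^T x)^3 } - 3 w(k-1)
    w = (x * (w @ x) ** 3).mean(axis=1) - 3 * w
    # Divide w(k) by its norm.
    w /= np.linalg.norm(w)
    if abs(abs(w @ w_prev) - 1.0) < 1e-10:   # converged up to sign
        break

print(round(np.linalg.norm(w), 6))   # 1.0
```

Each pass applies exactly the two boxed ICA steps: the expectation update and the division by the norm; convergence is declared when w stops changing direction (up to sign).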
In step 2, the variance is measured first. The variance of a random variable measures the statistical dispersion of the data, i.e., how far the data lie from the expected value:

Variance(x) = Σ(x - μ)² / n
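A quick numeric check of the variance definition above (the sample values are arbitrary):

```python
import numpy as np

x = np.array([2.0, 4.0, 6.0, 8.0])
mu = x.mean()   # expected value: 5.0

# Variance(x) = sum((x - mu)^2) / n : mean squared distance from the mean.
variance = ((x - mu) ** 2).sum() / len(x)

print(variance)   # 5.0
```

The deviations are (-3, -1, 1, 3), their squares sum to 20, and 20/4 = 5.0, matching NumPy's built-in np.var.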
The finite-dimensional spectral theorem says that for every real symmetric matrix A there exists a real orthogonal matrix Q such that D = Q^T A Q, where D is a diagonal matrix.

The third step determines the eigenvalues and eigenvectors of the covariance matrix. Eigenvalues are used in the linear transformation of vectors. An eigenvalue is a property of a matrix: the matrix acts on a certain vector so that only the vector's magnitude changes while its direction remains the same (the principal vector). An eigenvalue matrix is a scalar matrix; its effect on a matrix is scalar multiplication. For a module M over a ring, with the endomorphism algebra End(M) replacing the algebra of matrices, the analog of scalar matrices is scalar transformations. Eigenvectors are orthogonal to each other, and an orthogonal matrix preserves the inner product of vectors u, v in an n-dimensional real inner product space [4].

The most common process for whitening the data is eigenvalue decomposition:

x̃ = E D^(-1/2) E^T x

where E and D are the eigenvectors and eigenvalues of the covariance of x [5].

Step 4 assumes a vector of norm 1. A norm function assigns a strictly positive length to every nonzero vector in a vector space. A norm-one vector here corresponds to a diagonal matrix (d1, d2, d3, ...) such that (d1² + d2² + d3² + ...)^(1/2) = 1.

In step 5, kurtosis is measured. Kurtosis is a measurement of non-Gaussianity. Independent signals have the highest kurtosis; if we mix one independent signal with others, the mixture tends toward a more Gaussian distribution, which means a lower kurtosis value.
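The whitening transform x̃ = E D^(-1/2) E^T x can be verified numerically: after applying it, the covariance of the data becomes the identity matrix. A sketch on synthetic correlated data (the 3x3 mixing matrix and sample count are assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

# Correlated synthetic observations: 3 channels x 10000 samples.
A = rng.normal(size=(3, 3))
x = A @ rng.normal(size=(3, 10_000))
x = x - x.mean(axis=1, keepdims=True)

# Eigenvalue decomposition of the covariance of x.
D, E = np.linalg.eigh(np.cov(x))

# x_tilde = E D^(-1/2) E^T x
x_tilde = E @ np.diag(D ** -0.5) @ E.T @ x

print(np.round(np.cov(x_tilde), 2))   # ~ 3x3 identity matrix
```

Because the same sample covariance is used to build and to check the transform, the result is the identity up to floating-point error, which is exactly what "whitened" means.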
III. EXPERIMENT AND RESULT
REFERENCES