Ashwini Maurya
Covariance Matrix
Let $X = (X_1, \ldots, X_n)^T$ be an observation matrix whose rows are drawn from a p-dimensional multivariate distribution with mean vector $\mu = (\mu_1, \ldots, \mu_p)^T$ and covariance matrix $\Sigma$. The sample covariance matrix is defined by:
$$S_{i,j} = \frac{1}{n} \sum_{m=1}^{n} (X_{m,i} - \bar{X}_i)(X_{m,j} - \bar{X}_j), \qquad i, j = 1, 2, \ldots, p.$$
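For concreteness, here is a minimal NumPy sketch of this definition (the function name is our own, and we use the $1/n$ normalization shown above):

```python
import numpy as np

def sample_covariance(X):
    """Sample covariance of an (n, p) observation matrix X:
    S[i, j] = (1/n) * sum_m (X[m, i] - Xbar_i) * (X[m, j] - Xbar_j)."""
    n = X.shape[0]
    Xc = X - X.mean(axis=0)   # center each column at its sample mean
    return (Xc.T @ Xc) / n
```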
Penalization methods
It is well known that $\ell_1$ minimization leads to sparse solutions (the sketch after this list illustrates why). Some of the popular methods for estimating a sparse inverse covariance matrix based on $\ell_1$ minimization include:
Graphical Lasso (Friedman et al., 2008)
Sparse Permutation Invariant Covariance Matrix Estimation (Rothman et al., 2008)
High Dimensional Inverse Covariance Matrix Estimation (Yuan, 2010)
Sparse Inverse Covariance Matrix Estimation using Quadratic Approximation (Hsieh et al., 2011)
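To illustrate the mechanism by which $\ell_1$ penalties produce sparsity, here is a minimal NumPy sketch (a generic helper, not taken from any of the packages above) of entrywise soft-thresholding, the proximal operator of $t\|\cdot\|_1$:

```python
import numpy as np

def soft_threshold(A, t):
    """Proximal operator of t * ||.||_1: shrinks every entry toward zero
    and sets entries with |a| <= t exactly to zero, which is where the
    sparsity of l1-penalized solutions comes from."""
    return np.sign(A) * np.maximum(np.abs(A) - t, 0.0)
```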
Beyond $\ell_1$ Penalization
[Figure: three histograms (Frequency on the vertical axis, 0–200; horizontal axes spanning roughly 0.0–1.0 and 0.0–2.5).]
$$\widehat{W} = \operatorname*{arg\,min}_{W \succ 0} \Big\{ -\log\det W + \operatorname{tr}(SW) + \lambda \|W\|_1 + \tau \|W\|_* \Big\} \qquad (2.1)$$
where W is the inverse of the covariance matrix, S is the sample covariance matrix, and $\lambda, \tau$ are non-negative real constants.
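As a sketch, the objective in (2.1) can be evaluated as below. The $\ell_1$ + nuclear-norm form of the joint penalty is our reading of the slides (only $\lambda, \tau \ge 0$ is stated here), so treat it as an assumption:

```python
import numpy as np

def jcp_objective(W, S, lam, tau):
    """Negative Gaussian log-likelihood plus a joint convex penalty.
    The l1 + nuclear-norm combination is an assumption about (2.1)."""
    sign, logdet = np.linalg.slogdet(W)
    if sign <= 0:
        return np.inf                      # W must be positive definite
    nll = -logdet + np.trace(S @ W)        # smooth likelihood term
    l1 = lam * np.abs(W).sum()             # sparsity-promoting penalty
    nuclear = tau * np.linalg.svd(W, compute_uv=False).sum()  # eigenvalue penalty
    return nll + l1 + nuclear
```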
The proximal gradient algorithm generates iterates
$$W_{k+1} = \operatorname{prox}_{t_k g}\big(W_k - t_k \nabla f(W_k)\big), \qquad k = 1, 2, \ldots \qquad (3.1)$$
where f is the smooth part of the objective and g the non-smooth penalty.
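A minimal sketch of one iteration of (3.1), assuming the smooth part is $f(W) = -\log\det W + \operatorname{tr}(SW)$ (so $\nabla f(W) = S - W^{-1}$) and, for simplicity, that only the $\ell_1$ term is handled by the prox; the paper's actual update for the joint penalty may differ:

```python
import numpy as np

def prox_grad_step(W, S, lam, step):
    """One proximal gradient step: gradient step on the smooth term,
    followed by the l1 prox (soft-thresholding). Positive definiteness
    of the iterate is not enforced in this simplified sketch."""
    grad = S - np.linalg.inv(W)   # gradient of -logdet W + tr(SW)
    Z = W - step * grad           # forward (gradient) step
    return np.sign(Z) * np.maximum(np.abs(Z) - step * lam, 0.0)  # backward (prox) step
```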
Convergence Analysis
Theorem
Let $\{W_n, n = 1, 2, \ldots\}$ be the sequence generated by the proximal gradient algorithm. Let $M < \infty$ be a constant such that
$$\sum_{n=1}^{\infty} \big| \langle W^* - W_n, \nabla \|W_n\|_1 \rangle \big| < M.$$
Then
$$F(W_k) - F(W^*) = O(1/k),$$
where $W^*$ is an optimal solution.
Simulation
Extensive simulations were carried out; here we report results for two types of inverse covariance matrix:
(i) Hub Graph: The rows/columns are partitioned into J equally-sized disjoint groups $V_1 \cup V_2 \cup \cdots \cup V_J = \{1, 2, \ldots, p\}$, and each group is associated with a pivotal row k. Let $|V_1| = s$. We set $w_{i,j} = w_{j,i} = \rho$ for $i \in V_k$ and $w_{i,j} = w_{j,i} = 0$ otherwise. In our experiment, $J = [p/s]$, $k = 1, s+1, 2s+1, \ldots$, and we always set $\rho = 1/(s+1)$ with $J = 20$ (a code sketch of this construction follows the list).
(ii) Neighborhood Graph: We first uniformly sample $y_1, y_2, \ldots, y_p$ from a unit square. We then set $w_{i,j} = w_{j,i} = \rho$ with probability $(\sqrt{2\pi})^{-1} \exp(-4\|y_i - y_j\|^2)$; the remaining entries of W are set to zero. The number of nonzero off-diagonal elements of each row or column is restricted to be smaller than $[1/\rho]$, and $\rho$ is set to 0.245.
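Here is a small NumPy sketch of the hub-graph construction in (i); setting the diagonal of W to 1 is our assumption, since the slide specifies only the off-diagonal entries:

```python
import numpy as np

def hub_precision(p, s):
    """Hub-graph precision matrix: groups of size s with pivotal rows
    k = 1, s+1, 2s+1, ... (0-based below) and rho = 1/(s + 1)."""
    rho = 1.0 / (s + 1)
    W = np.eye(p)                      # unit diagonal (assumption)
    for k in range(0, p, s):           # pivotal row of each group
        for i in range(k, min(k + s, p)):
            if i != k:
                W[i, k] = W[k, i] = rho
    return W
```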
Simulation
We compare the proposed method (JCP) with the graphical lasso and SPICE in terms of Kullback-Leibler loss (KL-loss) for varying sample size (n) and number of covariates (p). We choose n = 50, 100 and p = 50, 100, 200.
For both the Hub and Neighborhood type inverse covariance matrices, JCP outperforms SPICE in KL-loss for all n and p.
JCP also performs better than the graphical lasso for p = 50 and p = 100, whereas it performs similarly to the graphical lasso for p = 200.
The extensive simulation analysis (see the paper) further shows that the loss attained by the estimated inverse covariance matrix also depends on the underlying structure of the matrix.
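For reference, one common definition of the KL-loss used to compare such estimators, sketched in NumPy (the paper's exact convention, e.g. scaling constants, may differ):

```python
import numpy as np

def kl_loss(Sigma, W_hat):
    """KL divergence between N(0, Sigma) and the Gaussian implied by the
    estimated inverse covariance W_hat: tr(Sigma W) - log det(Sigma W) - p."""
    p = Sigma.shape[0]
    M = Sigma @ W_hat
    sign, logdet = np.linalg.slogdet(M)
    return np.trace(M) - logdet - p
```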
[Figures: KL-loss versus number of variables p (50, 100, 200), comparing JCP, Graphical Lasso, and SPICE across the simulation settings.]
Conclusion
We impose a joint convex penalty, which shows better performance than the other methods in our simulation study.
In practice the underlying inverse covariance matrix may have additional structure beyond sparsity; in that case a suitable penalty can be used to estimate the inherent structure of the matrix.
The proposed method uses a joint convex penalty, which is more flexible: entries of the inverse covariance matrix can be penalized differently, rather than all by the same amount of a single chosen regularization parameter as in the graphical lasso and SPICE.
Under mild conditions, the proposed proximal gradient algorithm achieves a sub-linear rate of convergence for this problem, which can further be improved to a linear rate; this makes it a suitable choice for large-scale optimization problems.
Selected References
Friedman, J., Hastie, T., and Tibshirani, R. (2008). Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 9(3), 432–441.
Rothman, A. J., Bickel, P. J., Levina, E., and Zhu, J. (2008). Sparse permutation invariant covariance estimation. Electronic Journal of Statistics, 2, 494–515.
Yuan, M. (2010). High dimensional inverse covariance matrix estimation via linear programming. Journal of Machine Learning Research, 11, 2261–2286.
Hsieh, C.-J., Sustik, M. A., Dhillon, I. S., and Ravikumar, P. (2011). Sparse inverse covariance matrix estimation using quadratic approximation. Advances in Neural Information Processing Systems, 24.
Thank You!
Ashwini Maurya