
A Joint Convex Penalty for Inverse Covariance Matrix Estimation

Ashwini Maurya
Department of Statistics and Probability, Michigan State University

March 27, 2014


Outline

Introduction and Scope

Joint Convex Penalty

Solving the optimization problem

Simulation Study

Conclusion

References


Covariance Matrix

Let $X = (X_1, \ldots, X_n)^T$ be an observation matrix from a p-dimensional multivariate distribution with mean vector $\mu = (\mu_1, \ldots, \mu_p)^T$ and covariance matrix $\Sigma$. The sample covariance matrix is defined by

    $S_{i,j} = \frac{1}{n} \sum_{m=1}^{n} (X_{m,i} - \bar{X}_i)(X_{m,j} - \bar{X}_j), \qquad i, j = 1, 2, \ldots, p.$

When p > n, the sample covariance matrix is singular, and therefore estimation of $\Sigma$, and hence of $\Sigma^{-1}$, is ill-posed.

We assume that $\Sigma^{-1}$ is sparse, i.e.,

    $\#\{(i, j) : \Sigma^{-1}_{i,j} \neq 0\} := g \ll p(p + 1)/2.$

How can we estimate the non-zero entries of the inverse covariance matrix?
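A quick numerical check of the p > n problem described above (a sketch in Python/NumPy, not part of the original slides; the dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 200                       # high-dimensional regime: p > n

# n observations from a p-variate distribution (identity covariance here)
X = rng.standard_normal((n, p))

# Sample covariance: S_ij = (1/n) * sum_m (X_mi - Xbar_i)(X_mj - Xbar_j)
Xc = X - X.mean(axis=0)
S = (Xc.T @ Xc) / n                  # same as np.cov(X, rowvar=False, bias=True)

print("rank(S) =", np.linalg.matrix_rank(S), "but p =", p)
# rank(S) <= n - 1 < p, so S is singular and cannot be inverted directly.
```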



Scope of Inverse Covariance Matrix

Estimation of the inverse covariance matrix is important in a number of statistical analyses, including:

Gaussian Graphical Modeling: a zero entry in the inverse covariance matrix corresponds to conditional independence between the corresponding pair of variables.

Linear or Quadratic Discriminant Analysis: when the features are assumed to have a multivariate Gaussian density, the resulting discriminant rule requires an estimate of the inverse covariance matrix.

Principal Component Analysis (PCA): in high-dimensional multivariate data it is often desirable to transform the high-dimensional feature space to a lower dimension without losing much information; PCA is commonly based on an estimate of the covariance matrix.



Penalization methods

It is well known that $\ell_1$ minimization leads to sparse solutions. Some popular methods for estimating a sparse inverse covariance matrix based on $\ell_1$ penalization include (a usage sketch follows the list):

Graphical Lasso (Friedman et al., 2007)

Sparse Permutation Invariant Covariance Estimation (SPICE; Rothman et al., 2008)

High-Dimensional Inverse Covariance Matrix Estimation (Yuan, 2010)

Sparse Inverse Covariance Matrix Estimation using Quadratic Approximation (Hsieh et al., 2011)
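For reference, a minimal usage sketch of $\ell_1$-penalized precision estimation using scikit-learn's GraphicalLasso implementation of the graphical lasso; the regularization level alpha below is an arbitrary illustrative choice, not a value from the paper:

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(1)
X = rng.multivariate_normal(mean=np.zeros(20), cov=np.eye(20), size=100)

# alpha is the l1 penalty strength: larger alpha -> sparser estimated precision matrix
model = GraphicalLasso(alpha=0.1).fit(X)

Omega_hat = model.precision_          # estimated inverse covariance matrix
print("fraction of (near-)zero entries:",
      np.mean(np.abs(Omega_hat) < 1e-8))
```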




Beyond $\ell_1$ Penalization

It is well known that the eigenvalues of the sample covariance matrix are over-dispersed relative to those of the true covariance matrix [Marcenko-Pastur 1967, Johnstone 2001].

In the high-dimensional setting (where $n \ll p$), an estimate of the covariance matrix based on the sample covariance matrix will be singular: most of its eigenvalues will be zero or close to zero.

Consequently, the eigenvalues of the corresponding inverse covariance matrix will be very large.
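The over-dispersion is easy to see numerically; a sketch under the simplest possible setting (true covariance equal to the identity, so every population eigenvalue is 1):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 100, 200                        # n << p
X = rng.standard_normal((n, p))        # true covariance = identity

S = np.cov(X, rowvar=False, bias=True)
eigvals = np.linalg.eigvalsh(S)

print("largest sample eigenvalue :", round(eigvals.max(), 3))   # well above 1
print("smallest sample eigenvalue:", round(eigvals.min(), 3))   # essentially 0
print("eigenvalues below 1e-10   :", int((eigvals < 1e-10).sum()))
# Near-zero eigenvalues of S become huge eigenvalues of any inverse, if it exists at all.
```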




Beyond $\ell_1$ Penalization

[Figure: histograms of eigenvalues (vertical axis: frequency). Panels: true eigenvalues; sample eigenvalues for n = 500, p = 200; sample eigenvalues for n = 100, p = 200.]


Beyond $\ell_1$ Penalization and A Joint Convex Penalty

Motivated by this observation, we include an additional penalty which shrinks the eigen-spectrum of the inverse covariance matrix.

Let $X = (X_1, \ldots, X_n)^T$ be an observation matrix from a p-dimensional multivariate normal distribution with mean vector $\mu = (\mu_1, \ldots, \mu_p)^T$ and covariance matrix $\Sigma$.

The constrained maximum likelihood estimate of the inverse covariance matrix is given by

    $\operatorname*{argmin}_{W \succ 0} \; F(W) = -\log\big(\det(W)\big) + \operatorname{tr}(SW) + \lambda \|W\|_1 + \tau \|W\|_*$    (2.1)

where W is the inverse of the covariance matrix, S is the sample covariance matrix, and $\lambda$ and $\tau$ are non-negative real constants.
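Written out as code, the objective in (2.1) is straightforward to evaluate; a sketch in which `lam` and `tau` stand for the two non-negative tuning constants $\lambda$ and $\tau$:

```python
import numpy as np

def jcp_objective(W, S, lam, tau):
    """F(W) = -log det(W) + tr(SW) + lam*||W||_1 + tau*||W||_* for symmetric W > 0."""
    vals = np.linalg.eigvalsh(W)              # eigenvalues of the symmetric matrix W
    if vals.min() <= 0:
        return np.inf                         # W must be positive definite
    return (-np.log(vals).sum()               # -log det(W)
            + np.trace(S @ W)                 # tr(SW)
            + lam * np.abs(W).sum()           # lam * ||W||_1 (elementwise)
            + tau * vals.sum())               # tau * ||W||_* (= sum of eigenvalues for W > 0)
```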



Beyond $\ell_1$ Penalization and A Joint Convex Penalty

The first penalty term $\lambda\|W\|_1$ in equation (2.1) is the sum of the absolute values of the entries of the inverse covariance matrix W.

The second penalty term $\tau\|W\|_*$ is the trace norm, defined as the sum of the singular values of W (for positive definite matrices, eigenvalues and singular values coincide).

The $\ell_1$ norm is convex but non-smooth (non-differentiable wherever an entry is zero), and the trace norm is also a non-smooth convex function. Consequently, (2.1) is a convex optimization problem with non-smooth penalty terms.

We implement a proximal gradient method to solve this optimization problem.
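Both penalties admit closed-form proximal (shrinkage) operators, which is what makes a proximal gradient approach attractive. A sketch of the standard forms (soft-thresholding for the $\ell_1$ norm, and eigenvalue soft-thresholding for the trace norm combined with the positive semidefinite constraint); these are textbook operators, not code from the paper:

```python
import numpy as np

def prox_l1(A, t):
    """Prox of t*||.||_1: elementwise soft-thresholding."""
    return np.sign(A) * np.maximum(np.abs(A) - t, 0.0)

def prox_trace_psd(A, t):
    """Prox of t*||.||_* over symmetric positive semidefinite matrices:
    soft-threshold the eigenvalues of the symmetric part of A."""
    vals, vecs = np.linalg.eigh((A + A.T) / 2)
    return (vecs * np.maximum(vals - t, 0.0)) @ vecs.T
```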






Proximal Gradient Algorithm

Let h(x) be a convex function bounded below by a finite constant. The proximal gradient method generates a sequence of solutions by solving

    $x_k = \operatorname*{argmin}_{x} \; h(x) + \mu_k \|x - x_{k-1}\|^2, \qquad k = 1, 2, \ldots$    (3.1)

for some $\mu_k > 0$. The resulting sequence converges weakly to $\operatorname*{argmin}_x h(x)$ [Rockafellar, 1976].
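A minimal end-to-end sketch of such a proximal gradient iteration for (2.1): take a gradient step on the smooth part $-\log\det(W) + \operatorname{tr}(SW)$, then apply the shrinkage operators. The fixed step size, the sequential application of the two shrinkage steps, and the small eigenvalue floor are simplifying assumptions for illustration, not the exact scheme analyzed in the talk:

```python
import numpy as np

def jcp_prox_gradient(S, lam, tau, step=0.1, n_iter=200):
    """Sketch: minimize F(W) = -log det W + tr(SW) + lam*||W||_1 + tau*||W||_*, W > 0."""
    p = S.shape[0]
    W = np.eye(p)                                    # feasible, well-conditioned start
    for _ in range(n_iter):
        grad = S - np.linalg.inv(W)                  # gradient of -log det W + tr(SW)
        W = W - step * grad                          # gradient step on the smooth part
        W = np.sign(W) * np.maximum(np.abs(W) - step * lam, 0.0)   # l1 shrinkage
        vals, vecs = np.linalg.eigh((W + W.T) / 2)   # symmetrize against round-off
        vals = np.maximum(vals - step * tau, 1e-8)   # trace-norm shrinkage, kept invertible
        W = (vecs * vals) @ vecs.T
    return W

# Usage sketch: S = sample covariance (p x p); W_hat = jcp_prox_gradient(S, lam=0.1, tau=0.05)
```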


Convergence Analysis

Theorem
Let $\{W_k, k = 1, 2, \ldots\}$ be the sequence generated by the proximal gradient algorithm, and let $M < \infty$ be a constant such that $\sum_{n=1}^{\infty} |\langle W - W_n, \nabla \|W_n\|_1 \rangle| < M$. Then

    $F(W_k) - F(W) \le \dfrac{L \|W^0 - W\|_F^2 + 18\, c^2 \gamma + M}{4k},$

where $L > 0$ is the least upper Lipschitz constant of the gradient of the negative log-likelihood function, and $\gamma > 1$ and $c > 0$ are constants.



Simulation

Extensive simulations were carried out; here we report results for two types of inverse covariance matrix:

(i) Hub Graph: the rows/columns are partitioned into J equally-sized disjoint groups $\{V_1 \cup V_2 \cup \cdots \cup V_J\} = \{1, 2, \ldots, p\}$, and each group is associated with a pivotal row k. Let $|V_1| = s$. We set $w_{i,j} = w_{j,i} = \rho$ for $i \in V_k$ and $w_{i,j} = w_{j,i} = 0$ otherwise. In our experiment, $J = [p/s]$, $k = 1, s+1, 2s+1, \ldots$, and we always set $\rho = 1/(s+1)$ with $J = 20$ (see the construction sketch below).

(ii) Neighborhood Graph: we first uniformly sample $y_1, y_2, \ldots, y_p$ from a unit square. We then set $w_{i,j} = w_{j,i} = \rho$ with probability $(\sqrt{2\pi})^{-1} \exp(-4\|y_i - y_j\|^2)$; the remaining entries of W are set to zero. The number of nonzero off-diagonal elements of each row or column is restricted to be smaller than $[1/\rho]$, and $\rho$ is set to 0.245.
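A sketch of the hub-type construction in (i); the unit diagonal and the 0-based indexing of pivotal rows are illustrative assumptions, since the slide specifies only the off-diagonal pattern:

```python
import numpy as np

def hub_precision(p=200, s=10):
    """Hub-type inverse covariance matrix: J = p // s groups of size s, each with a
    pivotal row; entries between a pivot and its group members are rho = 1/(s + 1)."""
    rho = 1.0 / (s + 1)
    W = np.eye(p)
    for k in range(0, p, s):                 # pivotal rows k = 0, s, 2s, ... (0-based)
        for i in range(k, min(k + s, p)):    # members of the group containing pivot k
            if i != k:
                W[i, k] = W[k, i] = rho
    return W

W = hub_precision()
print("min eigenvalue:", round(np.linalg.eigvalsh(W).min(), 3))  # > 0: valid precision matrix
```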


Simulation

We compare the proposed method (JCP) to the graphical lasso and SPICE in terms of Kullback-Leibler loss (KL loss, sketched below) for varying sample size (n) and number of covariates (p). We choose n = 50, 100 and p = 50, 100, 200.

For both the Hub and the Neighborhood type inverse covariance matrix, JCP performs better than SPICE for all n and p in terms of KL loss.

JCP also performs better than the graphical lasso for p = 50 and p = 100, and performs similarly to the graphical lasso for p = 200.

The extensive simulation analysis (see the paper) also shows that the loss attained by the estimated inverse covariance matrix depends on the underlying structure of the matrix.
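One standard form of the Kullback-Leibler loss between Gaussians, written in terms of the true precision matrix W and an estimate W_hat; the exact convention (e.g. scaling) used in the paper may differ, so this is only a sketch:

```python
import numpy as np

def kl_loss(W_true, W_hat):
    """KL( N(0, W_true^{-1}) || N(0, W_hat^{-1}) )
       = 0.5 * [ tr(W_hat @ Sigma_true) - log det(W_hat @ Sigma_true) - p ]."""
    p = W_true.shape[0]
    M = W_hat @ np.linalg.inv(W_true)        # W_hat times the true covariance
    sign, logdet = np.linalg.slogdet(M)      # det(M) > 0 when both matrices are PD
    return 0.5 * (np.trace(M) - logdet - p)
```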




Simulation (Hub type Inverse Covariance Matrix)

Left graph: n = 50; right graph: n = 100, for varying p.

[Figure: KL loss versus number of variables p in {50, 100, 200} for JCP, Graphical Lasso, and SPICE.]


Simulation (Neighborhood type Inverse Covariance Matrix)

Left graph: n = 50; right graph: n = 100, for varying p.

[Figure: KL loss versus number of variables p in {50, 100, 200} for JCP, Graphical Lasso, and SPICE.]



Conclusion

We impose a joint convex penalty which, in our simulation study, shows better performance than the competing methods.

In practice the underlying inverse covariance matrix may have additional structure beyond sparsity; in that case a suitable penalty can be used to estimate the inherent structure of the matrix.

The proposed joint convex penalty is more flexible in that entries of the inverse covariance matrix can be penalized differently, rather than all by the same regularization parameter as in the graphical lasso and SPICE.

Under mild conditions, the proposed proximal gradient algorithm achieves a sub-linear rate of convergence for this problem, which can be further improved to a linear rate, making it a suitable choice for large-scale optimization problems.




Selected References

Maurya, A. (2014). A joint convex penalty for inverse covariance matrix estimation. Computational Statistics and Data Analysis.

Friedman, J., Hastie, T., and Tibshirani, R. (2008). Sparse inverse covariance estimation with the graphical lasso. Biostatistics.

Rothman, A. J., Bickel, P. J., Levina, E., and Zhu, J. (2008). Sparse permutation invariant covariance estimation. Electronic Journal of Statistics, 2, 494-515.


Thank You!
