
Matrix Completion Literature

10/04/2016

Recht, B., Fazel, M., & Parrilo, P. A. (2010). Guaranteed minimum-rank solutions
of linear matrix equations via nuclear norm minimization. SIAM review, 52(3),
471-501.
Give conditions where nuclear norm relaxation gives the optimal solution
to the problem of minimizing matrix rank subject to linear equalities
o Also referred to as matrix sensing
o Specifically, restricted isometry property
o Essentially means that applying the measurement operator does not
radically change the Frobenius norm of low-rank matrices
o (1-d_r(A))*|X|_F <= |A(X)| <= (1+d_r(A))*|X|_F for all matrices
of rank at most r
o Show that if d_5r < 0.1 where r = rank of optimal matrix, then
nuclear norm relaxation gives exact solution
o Extension of work by Candes & Tao on compressed sensing
Critically, they do not specify a sampling operator, but use random linear measurements
o Given m equations <A_k, M>=b_k, where M is matrix to be
reconstructed and entries of A_k are iid Gaussian or Bernoulli r.v.s
o But proofs don't seem to rely on this randomness?
Do not extend notion of incoherence
Give examples of cases where RIP holds with high probability
o In particular, show RIP holds with high probability for the matrix
sensing problem when # samples > C·n·r·log n
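The near-isometry above is easy to sanity-check numerically for the Gaussian ensemble: scaling each measurement <A_k, X> by 1/sqrt(m) makes |A(X)| concentrate around |X|_F. A minimal NumPy sketch (dimensions and seed are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, m = 20, 2, 2000  # matrix size, rank, number of measurements

# Random rank-r test matrix X
X = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))

# Gaussian measurement operator: A(X)_k = <A_k, X>/sqrt(m), A_k entries iid N(0,1)
A = rng.standard_normal((m, n * n))
y = A @ X.ravel() / np.sqrt(m)

# Near-isometry: the ratio should concentrate around 1 as m grows
ratio = np.linalg.norm(y) / np.linalg.norm(X, "fro")
print(f"||A(X)|| / ||X||_F = {ratio:.3f}")
```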
Candès, E. J., & Recht, B. (2009). Exact matrix completion via convex
optimization. Foundations of Computational Mathematics, 9(6), 717-772.
Solve low-rank matrix completion exactly using convex optimization
o Build on Recht, Fazel & Parrilo's nuclear norm relaxation
o Can solve matrix completion uniquely and exactly with high
probability given O(n^1.2·r·log n) samples for low r
o O(n^1.25·r·log n) for any r
o Require a sample in each row & column
o Requires solving an SDP
Also extend notion of incoherence from Candes & Tao
o Effectively, a measure of how concentrated the singular vectors are on a few coordinates
o Use this to abstract away from the Restricted Isometry Property
Here, the sampling operator (which entries are observed) is specified, rather than random linear measurements
Can handle noise corruption?
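As a toy illustration of recovery from sampled entries (using a simple iterative rank-r SVD projection rather than the nuclear-norm SDP the paper actually analyzes), a small low-rank matrix can be completed from a random subset of entries; all parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 30, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-r target
mask = rng.random((n, n)) < 0.7  # observe ~70% of entries at random

# Iterative rank-r projection: refill observed entries, then truncate SVD to rank r
X = np.where(mask, M, 0.0)
for _ in range(500):
    U, s, Vt = np.linalg.svd(np.where(mask, M, X), full_matrices=False)
    X = (U[:, :r] * s[:r]) @ Vt[:r]

rel_err = np.linalg.norm(X - M) / np.linalg.norm(M)
print(f"relative error: {rel_err:.1e}")
```

At this oversampling rate the heuristic typically recovers M to small relative error, though it carries none of the paper's guarantees.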
Candès, E. J., & Tao, T. (2010). The power of convex relaxation: Near-optimal
matrix completion. IEEE Transactions on Information Theory, 56(5), 2053-2080.
Prove an information-theoretic lower bound of order n·r·log n on the number
of samples necessary for matrix completion
Also prove trace-norm minimization provides exact solution given
O(nr(logn)^2) samples
Jain, P., Netrapalli, P., & Sanghavi, S. (2013, June). Low-rank matrix completion
using alternating minimization. In Proceedings of the forty-fifth annual ACM
symposium on Theory of computing (pp. 665-674). ACM.
Provide first theoretical analysis of performance of alternating
minimization techniques for matrix completion and sensing
o Show that it guarantees convergence under same assumptions
used for other methods
Specifically, the assumptions of Candes & Recht (2009)
o Show that, under assumptions, converges faster, i.e. geometrically
o Need 2·log(|M|_F/ε) iterations to get |M - UV^T|_F <= ε for matrix sensing
o Need O(log(|M|_F/ε)) iterations to get |M - UV^T|_F <= ε for matrix
completion, given per-entry sampling probability O(k^7 (σ_1/σ_k)^6 μ log(k|M|)/n)
Critically, assumes that rank is given and does not minimize it
o So does not solve the rank minimization problem
Given rank r, can decompose matrix into bilinear factors X = UV^T
o If X is m x m, U and V are m x r
o Then use alternating minimization of an error metric over U and V
Alternating minimization has certain advantages over other methods
o Parallelizable, low memory footprint, and flexible modeling
o But still needs to calculate SVD at beginning
Adapted to noisy matrix completion by Gunasekar et al. (2013)
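A minimal NumPy sketch of alternating minimization for completion, using an SVD-based initialization as the paper prescribes; the dimensions, sampling rate, and sweep count are illustrative choices, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
n, r = 30, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-r target
mask = rng.random((n, n)) < 0.7  # observed entries

# Initialize U from the SVD of the zero-filled observed matrix
U = np.linalg.svd(np.where(mask, M, 0.0), full_matrices=False)[0][:, :r].copy()
V = np.zeros((n, r))

for _ in range(50):
    # Fix U; least-squares fit each row of V on that column's observed entries
    for j in range(n):
        rows = mask[:, j]
        V[j] = np.linalg.lstsq(U[rows], M[rows, j], rcond=None)[0]
    # Fix V; least-squares fit each row of U on that row's observed entries
    for i in range(n):
        cols = mask[i]
        U[i] = np.linalg.lstsq(V[cols], M[i, cols], rcond=None)[0]

rel_err = np.linalg.norm(U @ V.T - M) / np.linalg.norm(M)
print(f"relative error: {rel_err:.1e}")
```

Each inner step is a small least-squares problem per row/column, which is what makes the method parallelizable and memory-light.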
Hardt, M. (2013). On the provable convergence of alternating minimization for
matrix completion. arXiv preprint arXiv:1312.0925.
Improve results of Jain et al. (2013) by a factor of k^5·(σ_1/σ_k)^4
De Lathauwer, L., De Moor, B., & Vandewalle, J. (2000). A multilinear singular
value decomposition. SIAM journal on Matrix Analysis and Applications, 21(4),
1253-1278.
Generalize concept of SVD for matrices to tensors; called HOSVD
o Always exists and is unique for real and complex tensors
o Returns SVD when applied to matrices
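The construction can be sketched in NumPy under the standard definition (factor matrices from the left singular vectors of each mode-n unfolding, core tensor via mode products); the reconstruction at the end is a consistency check:

```python
import numpy as np

rng = np.random.default_rng(3)

def unfold(T, mode):
    """Mode-n unfolding: mode-n fibers become the columns."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_multiply(T, U, mode):
    """Multiply tensor T by matrix U along the given mode."""
    return np.moveaxis(np.tensordot(U, T, axes=(1, mode)), 0, mode)

def hosvd(T):
    """HOSVD: factor matrices from each unfolding's left singular vectors, plus core."""
    factors = [np.linalg.svd(unfold(T, k), full_matrices=False)[0]
               for k in range(T.ndim)]
    core = T
    for k, U in enumerate(factors):
        core = mode_multiply(core, U.T, k)
    return core, factors

T = rng.standard_normal((3, 4, 5))
core, factors = hosvd(T)

# Reconstruct by multiplying the core by each factor along its mode
R = core
for k, U in enumerate(factors):
    R = mode_multiply(R, U, k)

print(np.allclose(R, T))  # prints True: the full HOSVD is exact
```

Truncating the columns of each factor matrix gives the low-multilinear-rank approximations that the next paper studies.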
De Lathauwer, L., De Moor, B., & Vandewalle, J. (2000). On the best rank-1 and
rank-(R_1, R_2, ..., R_N) approximation of higher-order tensors. SIAM Journal on
Matrix Analysis and Applications, 21(4), 1324-1342.
