2.1. Random Variables on the Riemannian Manifold Hm

Let Hm be a complete and simply connected m-dimensional Riemannian manifold, and let Y ∈ Hm be a random variable. The definition (6) underlines a direct analogy to the generic Mahalanobis distance. The quantities ρ1 and ρ2 can be seen as functions of the dispersion parameters of the random variables Y1 and Y2 away from their modes ȳ1 and ȳ2, respectively, within their submanifolds. Due to their independence, the Riemannian generic Mahalanobis distance is expressed as

\[ d^2(y, \bar{y}) = \rho_1^2(\sigma_1)\, d^2_{H_1^n}(y_1, \bar{y}_1) + \rho_2^2(\sigma_2)\, d^2_{H_2^p}(y_2, \bar{y}_2) \tag{7} \]

where the quantity d^2_{H_i}(·, ·) depends directly on the metric considered on H_i.

Remark. In the deterministic case, it is noticeable that, if we consider the pair (Y1, Y2) = (µ, Σ), the Wasserstein distance between two multivariate Gaussian laws N1(µ1, Σ1) and N2(µ2, Σ2) is clearly a Riemannian distance within the general class of distances associated with equation (7). From optimal transport theory [21], the Wasserstein distance is given by

\[ W^2(N_1, N_2) = \| \mu_1 - \mu_2 \|_2^2 + \| \Sigma_1^{1/2} - \Sigma_2^{1/2} \|_F^2 \tag{8} \]

where \| A \|_F^2 = \mathrm{trace}(AA^T) is the Frobenius norm.

2.3. Product-spaces Gaussian Distribution from Gaussian Measures

Let us now pay attention to the random pair Y = (µ, Σ), which lives on Rm × SPD(m), of dimension m + m(m+1)/2. This section provides the definition of the product-spaces Riemannian Gaussian distributions. We choose the following form:

\[ (\mu, \Sigma) \sim p(\mu \,|\, \bar{\mu}, \Gamma) \times p(\Sigma \,|\, \bar{\Sigma}, \sigma) \tag{9} \]

• For the variable µ, the conventional Euclidean multivariate Gaussian law is selected:

\[ p(\mu \,|\, \bar{\mu}, \Gamma) = \frac{1}{(2\pi)^{m/2} |\Gamma|^{1/2}} \exp\left( -\frac{1}{2} (\mu - \bar{\mu})^T \Gamma^{-1} (\mu - \bar{\mu}) \right) \]

3. MIXTURES OF PRODUCT-SPACES GAUSSIAN DISTRIBUTIONS

In the context of visual content classification or indexation, various works have focused on dictionary learning in the descriptor space, which can be Euclidean or a more constrained Riemannian manifold [2, 3, 4, 11, 12, 13]. For this purpose, mixture models have shown good performance as a tool for statistical learning.

3.1. Definition

From the definition (9) of our product-spaces Gaussian distributions on Rm × SPD(m), the mixture is defined as:

\[ p(\mu, \Sigma \,|\, (\varpi_k, \bar{\mu}_k, \Gamma_k, \bar{\Sigma}_k, \sigma_k)_{1 \le k \le K}) = \sum_{k=1}^{K} \varpi_k \, p(\mu \,|\, \bar{\mu}_k, \Gamma_k) \, p(\Sigma \,|\, \bar{\Sigma}_k, \sigma_k) \tag{10} \]

where the weights ϖk ∈ (0, 1) sum to one and each Gaussian factor is given by (9).

3.2. EM algorithm for mixture parameter estimation

Let (µ1, Σ1), ..., (µN, ΣN) be independent samples from (10). Based on these samples, the maximum likelihood estimate (MLE) of ϑ = (ϑk)_{1≤k≤K} = ({ϖk, µ̄k, Γk, Σ̄k, σk})_{1≤k≤K} can be computed using an EM algorithm. For this, let us consider, for a given ϑ,

\[ \omega_{n,k}(\vartheta) = \frac{\varpi_k \, p(\mu_n, \Sigma_n \,|\, \vartheta_k)}{\sum_{s=1}^{K} \varpi_s \, p(\mu_n, \Sigma_n \,|\, \vartheta_s)}, \qquad N_k(\vartheta) = \sum_{n=1}^{N} \omega_{n,k}(\vartheta) \]

The EM algorithm iteratively updates ϑ̂ = (ϑ̂k)_{1≤k≤K}, which is an approximation of the MLE of ϑ. Based on the current value of ϑ̂, it computes the updated parameters ϑ̂k^new as follows:
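The E-step quantities ω_{n,k}(ϑ) and N_k(ϑ) defined above can be sketched in a few lines of NumPy. This is an illustration, not the authors' implementation: the Riemannian Gaussian density on SPD(m) from [10] is evaluated with the affine-invariant distance, and its normalizing constant ζ(σ), which has no elementary closed form, is assumed to be supplied as a function `log_zeta`; all helper names are hypothetical.

```python
import numpy as np

def spd_dist(S1, S2):
    # Affine-invariant Riemannian distance on SPD(m):
    # d(S1, S2) = || log(S1^{-1/2} S2 S1^{-1/2}) ||_F
    w, V = np.linalg.eigh(S1)
    S1_inv_sqrt = (V / np.sqrt(w)) @ V.T
    lam = np.linalg.eigvalsh(S1_inv_sqrt @ S2 @ S1_inv_sqrt)
    return np.sqrt(np.sum(np.log(lam) ** 2))

def log_p_mu(mu, mu_bar, Gamma):
    # Log-density of the Euclidean Gaussian factor p(mu | mu_bar, Gamma).
    m = mu.size
    diff = mu - mu_bar
    _, logdet = np.linalg.slogdet(Gamma)
    return -0.5 * (m * np.log(2 * np.pi) + logdet
                   + diff @ np.linalg.solve(Gamma, diff))

def log_p_Sigma(Sigma, Sigma_bar, sigma, log_zeta):
    # Riemannian Gaussian on SPD(m) [10]: exp(-d^2 / (2 sigma^2)) / zeta(sigma).
    return -spd_dist(Sigma, Sigma_bar) ** 2 / (2 * sigma ** 2) - log_zeta(sigma)

def e_step(samples, params, log_zeta):
    # samples: list of pairs (mu_n, Sigma_n)
    # params:  list of tuples (w_k, mu_bar_k, Gamma_k, Sigma_bar_k, sigma_k)
    # Returns responsibilities omega[n, k] and counts N_k = sum_n omega[n, k].
    logp = np.array([[np.log(w) + log_p_mu(mu, mb, G)
                      + log_p_Sigma(S, Sb, s, log_zeta)
                      for (w, mb, G, Sb, s) in params]
                     for (mu, S) in samples])
    logp -= logp.max(axis=1, keepdims=True)   # stabilize before exponentiating
    omega = np.exp(logp)
    omega /= omega.sum(axis=1, keepdims=True)
    return omega, omega.sum(axis=0)
```

Working in the log domain and subtracting the row maximum before exponentiating keeps the responsibilities numerically stable even when the component densities differ by many orders of magnitude.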
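As a numerical illustration of the Remark, the closed form (8) can be checked directly. Note that (8) as written agrees with the general optimal-transport expression of [21] when Σ1 and Σ2 commute (the general form involves trace(Σ1 + Σ2 − 2(Σ1^{1/2} Σ2 Σ1^{1/2})^{1/2})); the sketch below, with illustrative function names, implements the formula exactly as stated in (8).

```python
import numpy as np

def spd_sqrt(S):
    # Matrix square root of a symmetric positive definite matrix via eigh.
    w, V = np.linalg.eigh(S)
    return (V * np.sqrt(w)) @ V.T

def w2_gaussian(mu1, S1, mu2, S2):
    # Squared Wasserstein distance of Eq. (8):
    # W^2 = ||mu1 - mu2||_2^2 + ||S1^{1/2} - S2^{1/2}||_F^2
    d_mean = np.sum((mu1 - mu2) ** 2)
    d_cov = np.linalg.norm(spd_sqrt(S1) - spd_sqrt(S2), 'fro') ** 2
    return d_mean + d_cov
```

For N1 = N(0, I2) and N2 = N((1, 0)^T, 4 I2), the mean term contributes 1 and the covariance term ||I − 2I||_F^2 = 2, giving W^2 = 3.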
[3] B. C. Lovell, S. Shirazi, C. Sanderson, and M. T. Harandi, "Graph embedding discriminant analysis on Grassmannian manifolds for improved image set matching," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011, pp. 2705–2712.

[4] M. Faraki, M. T. Harandi, and F. Porikli, "More about VLAD: A leap from Euclidean to Riemannian manifolds," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015, pp. 4951–4960.

[5] S. Jayasumana, R. Hartley, M. Salzmann, H. Li, and M. Harandi, "Kernel methods on Riemannian manifolds with Gaussian RBF kernels," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 12, pp. 2464–2477, Dec. 2015.

[6] J. Zhang, L. Wang, L. Zhou, and W. Li, "Learning discriminative Stein kernel for SPD matrices and its applications," IEEE Transactions on Neural Networks and Learning Systems, vol. 27, no. 5, pp. 1020–1033, May 2016.

[7] O. Tuzel, F. Porikli, and P. Meer, "Pedestrian detection via classification on Riemannian manifolds," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 10, pp. 1713–1727, 2008.

[8] Y. Pang, Y. Yuan, and X. Li, "Gabor-based region covariance matrices for face recognition," IEEE Transactions on Circuits and Systems for Video Technology, vol. 18, no. 7, pp. 989–993, 2008.

[9] B. Ma, Y. Su, and F. Jurie, "BiCov: A novel image representation for person re-identification and face verification," in British Machine Vision Conference (BMVC), 2012.

[10] S. Said, L. Bombrun, Y. Berthoumieu, and J. Manton, "Riemannian Gaussian distributions on the space of symmetric positive definite matrices," to appear in IEEE Transactions on Information Theory, https://arxiv.org/abs/1507.01760, 2016.

[13] M. T. Harandi, M. Salzmann, and R. I. Hartley, "Dimensionality reduction on SPD manifolds: The emergence of geometry-aware methods," CoRR, vol. abs/1605.06182, 2016.

[14] M. Lovric, M. Min-Oo, and E. A. Ruh, "Multivariate normal distributions parametrized as a Riemannian symmetric space," Journal of Multivariate Analysis, vol. 74, no. 1, pp. 36–48, 2000.

[15] R. Hosseini and S. Sra, "Matrix manifold optimization for Gaussian mixtures," in Advances in Neural Information Processing Systems 28 (NIPS 2015), Montreal, Quebec, Canada, Dec. 2015, pp. 910–918.

[16] X. Pennec, "Intrinsic statistics on Riemannian manifolds: Basic tools for geometric measurements," Journal of Mathematical Imaging and Vision, vol. 25, no. 1, pp. 127–154, July 2006.

[17] M. Moakher, "On the averaging of symmetric positive-definite tensors," Journal of Elasticity, vol. 82, no. 3, pp. 273–296, 2006.

[18] V. Arsigny, P. Fillard, X. Pennec, and N. Ayache, "Geometric means in a novel vector space structure on symmetric positive-definite matrices," SIAM Journal on Matrix Analysis and Applications, vol. 29, no. 1, pp. 328–347, 2007.

[19] G. Cheng and B. C. Vemuri, "A novel dynamic system in the space of SPD matrices with applications to appearance tracking," SIAM Journal on Imaging Sciences, vol. 6, no. 1, pp. 592–615, 2013.

[20] B.-Y. Chen, "Warped products in real space forms," Rocky Mountain Journal of Mathematics, vol. 34, no. 2, pp. 551–563, 2004.

[21] A. Takatsu, "Wasserstein geometry of Gaussian measures," Osaka Journal of Mathematics, vol. 48, no. 4, pp. 1005–1026, 2011.