To cite this article: Koenraad M.R. Audenaert & Fumio Hiai (2015): Reciprocal Lie–Trotter
formula, Linear and Multilinear Algebra, DOI: 10.1080/03081087.2015.1082957
Download by: [Florida State University] Date: 13 September 2015, At: 08:17
Linear and Multilinear Algebra, 2015
http://dx.doi.org/10.1080/03081087.2015.1082957
1. Introduction
For any matrices X and Y the well-known Lie–Trotter formula expresses the convergence
$$\lim_{n\to\infty}\bigl(e^{X/n}e^{Y/n}\bigr)^n = e^{X+Y}.$$
The symmetric form with a continuous parameter is also well known for positive semidefinite matrices A, B ≥ 0 as
$$\lim_{p\searrow0}\bigl(A^{p/2}B^pA^{p/2}\bigr)^{1/p} = P_0\exp(\log A \,\dot{+}\, \log B), \tag{1.1}$$
where $P_0$ is the orthogonal projection onto the intersection of the supports of A, B and $\log A \,\dot{+}\, \log B$ is defined as $P_0(\log A)P_0 + P_0(\log B)P_0$.
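The classical formula above is easy to test numerically; the following is a minimal sketch (an illustration, not from the paper), using two random Hermitian matrices:

```python
import numpy as np
from scipy.linalg import expm

# Hedged numerical sketch: check lim_n (e^{X/n} e^{Y/n})^n = e^{X+Y}
# for two small random symmetric matrices.
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 3)); X = (X + X.T) / 2
Y = rng.standard_normal((3, 3)); Y = (Y + Y.T) / 2

target = expm(X + Y)
errs = []
for n in (10, 100, 1000):
    approx = np.linalg.matrix_power(expm(X / n) @ expm(Y / n), n)
    errs.append(np.linalg.norm(approx - target))
# the error shrinks roughly like 1/n
assert errs[0] > errs[1] > errs[2]
```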
When σ is an operator mean [1] corresponding to an operator monotone function f on (0, ∞) such that α := f′(1) is in (0, 1), the operator mean version of the Lie–Trotter formula holds as
$$\lim_{p\searrow0}(A^p\,\sigma\,B^p)^{1/p} = P_0\exp\bigl((1-\alpha)\log A \,\dot{+}\, \alpha\log B\bigr) \tag{1.2}$$
for matrices A, B ≥ 0. (This version follows from [2, Theorem 4.11] if one of A, B is positive definite, but it can be shown for general positive semidefinite matrices based on [3, Section 4].) In particular, let σ be the geometric mean A # B (introduced first in [4] and further discussed in [1]), corresponding to the operator monotone function $f(x) = x^{1/2}$ (hence α = 1/2). Then (1.2) yields
$$\lim_{p\searrow0}(A^p \,\#\, B^p)^{2/p} = P_0\exp(\log A \,\dot{+}\, \log B), \tag{1.3}$$
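As a hedged numerical illustration of (1.3) (assuming A, B positive definite, so that $P_0 = I$), the p ↘ 0 limit can be checked with SciPy's matrix functions:

```python
import numpy as np
from scipy.linalg import expm, logm, sqrtm, fractional_matrix_power as fmp

# Hedged sketch: for positive definite A, B, (A^p # B^p)^(2/p) should
# approach exp(log A + log B) as p -> 0, using the standard formula
# A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}.
def geo_mean(A, B):
    R = sqrtm(A)
    Ri = np.linalg.inv(R)
    return R @ sqrtm(Ri @ B @ Ri) @ R

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3)); A = M @ M.T + np.eye(3)
M = rng.standard_normal((3, 3)); B = M @ M.T + np.eye(3)

target = np.real(expm(logm(A) + logm(B)))
errs = [np.linalg.norm(np.real(fmp(geo_mean(fmp(A, p), fmp(B, p)), 2 / p)) - target)
        for p in (1.0, 0.1, 0.01)]
assert errs[0] > errs[1] > errs[2]  # error shrinks as p decreases
```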
For d-dimensional nonnegative vectors (in particular, for eigenvalue sequences of positive semidefinite matrices X, Y), the log-majorization $X \prec_{(\log)} Y$ means that
$$\prod_{i=1}^k \lambda_i(X) \le \prod_{i=1}^k \lambda_i(Y), \qquad k = 1,\dots,d,$$
with equality for k = d, where $\lambda_1(X) \ge \cdots \ge \lambda_d(X)$ are the eigenvalues of X sorted in decreasing order and counting multiplicities. The Araki–Lieb–Thirring inequality can be written in terms of log-majorization as
$$\bigl(A^{p/2}B^pA^{p/2}\bigr)^{1/p} \prec_{(\log)} \bigl(A^{q/2}B^qA^{q/2}\bigr)^{1/q} \quad\text{if } 0 < p < q, \tag{1.4}$$
for matrices A, B ≥ 0; see [5, pp. 301–302] and [6,7]. One can also consider the complementary version of (1.4) in terms of the geometric mean. Indeed, for A, B ≥ 0 we have [7]
$$\bigl(A^{q}\,\#\,B^{q}\bigr)^{2/q} \prec_{(\log)} \bigl(A^{p}\,\#\,B^{p}\bigr)^{2/p} \quad\text{if } 0 < p < q.$$

We can prove the existence of the limit of $G_p$ as p → ∞ only when A, B are 2 × 2 matrices, and the general case must be left unsettled. The appendix contains a proof of a technical lemma stated in Section 2.
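A quick numerical sanity check of the log-majorization (1.4) (an illustration, not a proof):

```python
import numpy as np
from scipy.linalg import fractional_matrix_power as fmp

# Hedged sketch: verify the Araki-Lieb-Thirring log-majorization (1.4)
# for random positive definite A, B with p = 1 < q = 2.
def sorted_eigs(A, B, p):
    Ap2 = fmp(A, p / 2)
    M = np.real(fmp(Ap2 @ fmp(B, p) @ Ap2, 1 / p))
    return np.sort(np.linalg.eigvalsh(M))[::-1]

rng = np.random.default_rng(2)
M = rng.standard_normal((4, 4)); A = M @ M.T + np.eye(4)
M = rng.standard_normal((4, 4)); B = M @ M.T + np.eye(4)

lam_p, lam_q = sorted_eigs(A, B, 1.0), sorted_eigs(A, B, 2.0)
for k in range(1, 5):
    # partial products of eigenvalues are ordered as in (1.4)
    assert np.prod(lam_p[:k]) <= np.prod(lam_q[:k]) * (1 + 1e-9)
# full products agree (both equal det A * det B)
assert np.isclose(np.prod(lam_p), np.prod(lam_q))
```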
$$B = W\,\mathrm{diag}(b_1,\dots,b_d)\,W^* = \sum_{i=1}^d b_i w_i w_i^*. \tag{2.2}$$
exists, and $a_1b_1 \ge \lambda_1 \ge \cdots \ge \lambda_d \ge a_db_d$.
$$\bigl(V^*A^{p/2}B^pA^{p/2}V\bigr)_{ij} = \sum_{k=1}^d u_{ik}\,\overline{u_{jk}}\,a_i^{p/2}a_j^{p/2}\,b_k^p.$$
In particular,
$$\bigl(V^*A^{p/2}B^pA^{p/2}V\bigr)_{ii} = \sum_{k=1}^d |u_{ik}|^2\,a_i^p\,b_k^p, \qquad \mathrm{Tr}\,V^*A^{p/2}B^pA^{p/2}V = \sum_{i=1}^d\sum_{k=1}^d |u_{ik}|^2\,a_i^p\,b_k^p,$$
so that
$$\lambda_1(p) \ge \bigl(\min\{|u_{ik}|^2 : u_{ik}\ne0\}\bigr)^{1/p}\,\max\{a_i b_k : u_{ik}\ne0\}. \tag{2.7}$$
Estimates (2.6) and (2.7) give the desired expression immediately. In fact, they prove the existence of the limit $\lambda_1$ in (2.4) as well, independently of Lemma 2.1.
In what follows, for each k = 1, …, d we write $I_d(k)$ for the set of all subsets I of {1, …, d} with |I| = k. For $I, J \in I_d(k)$, we denote by $(V^*W)_{I,J}$ the k × k submatrix of $V^*W$ corresponding to rows in I and columns in J; hence $\det(V^*W)_{I,J}$ denotes the corresponding minor of $V^*W$. We also write $a_I := \prod_{i\in I} a_i$ and $b_I := \prod_{i\in I} b_i$. Since $\det(V^*W) \ne 0$, note that for any k = 1, …, d and any $I \in I_d(k)$ we have $\det(V^*W)_{I,J} \ne 0$ for some $J \in I_d(k)$, and that for any $J \in I_d(k)$ we have $\det(V^*W)_{I,J} \ne 0$ for some $I \in I_d(k)$.
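As an illustration of this notation (a sketch, not from the paper): the k-th compound matrix collects exactly the minors $\det U_{I,J}$ over $I, J \in I_d(k)$, and represents the action of U on the antisymmetric power used below:

```python
import numpy as np
from itertools import combinations

# Hedged sketch: the k-th compound of U has entries det U_{I,J} for
# I, J in I_d(k), taken in lexicographic order; here d = 3, k = 2.
def compound(U, k):
    idx = list(combinations(range(U.shape[0]), k))
    return np.array([[np.linalg.det(U[np.ix_(I, J)]) for J in idx] for I in idx])

rng = np.random.default_rng(3)
U = rng.standard_normal((3, 3))
V2 = rng.standard_normal((3, 3))
# compounds are multiplicative (Cauchy-Binet)
assert np.allclose(compound(U @ V2, 2), compound(U, 2) @ compound(V2, 2))
# Sylvester-Franke: det of the k-th compound is det(U)^binom(d-1, k-1)
assert np.isclose(np.linalg.det(compound(U, 2)), np.linalg.det(U) ** 2)
```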
Proof For each k = 1, …, d the antisymmetric tensor powers $A^{\wedge k}$ and $B^{\wedge k}$ (see [9]) are given in the form of diagonalizations as
$$A^{\wedge k} = V^{\wedge k}\,\mathrm{diag}(a_I)_{I\in I_d(k)}\,(V^{\wedge k})^*, \qquad B^{\wedge k} = W^{\wedge k}\,\mathrm{diag}(b_I)_{I\in I_d(k)}\,(W^{\wedge k})^*.$$

So, for the convenience of the reader, we present a sketch of its proof in Appendix 1 based on [10].
Lemma 2.4 There are constants α, β > 0 (depending only on d and k) such that
$$\alpha\,\|P-Q\| \le \inf_{\theta\in\mathbb{R}}\bigl\|u_1\wedge\cdots\wedge u_k - e^{\sqrt{-1}\,\theta}\,v_1\wedge\cdots\wedge v_k\bigr\| \le \beta\,\|P-Q\|$$
for all orthonormal sets $\{u_1,\dots,u_k\}$ and $\{v_1,\dots,v_k\}$ and the respective orthogonal projections P and Q onto $\mathrm{span}\{u_1,\dots,u_k\}$ and $\mathrm{span}\{v_1,\dots,v_k\}$, where $\|P-Q\|$ is the operator norm of P − Q and $\|\cdot\|$ inside the infimum is the norm on $\mathcal{H}^{\wedge k}$.
The main result of the paper is the next theorem, showing the existence of the limit for the reciprocal version of (1.1).
Theorem 2.5 For all d × d positive semidefinite matrices A and B, the matrix $Z_p$ in (2.3) converges as p → ∞ to a positive semidefinite matrix.
$$A = \mathrm{diag}(a_1,\dots,a_d), \qquad B = W\,\mathrm{diag}(b_1,\dots,b_d)\,W^*.$$
$$\lim_{p\to\infty}\lambda_1(Z_p^{\wedge k}) = \lim_{p\to\infty}\lambda_1(p)\cdots\lambda_{k-1}(p)\lambda_k(p) = \lambda_1\cdots\lambda_{k-1}\lambda_k > \lambda_1\cdots\lambda_{k-1}\lambda_{k+1} = \lim_{p\to\infty}\lambda_2(Z_p^{\wedge k}). \tag{2.9}$$
$$\begin{aligned}
(Z_p^{\wedge k})^p &= (A^{\wedge k})^{p/2}\,W^{\wedge k}\bigl((\mathrm{diag}(b_1,\dots,b_d))^{\wedge k}\bigr)^p(W^{\wedge k})^*(A^{\wedge k})^{p/2}\\
&= \mathrm{diag}\bigl(a_I^{p/2}\bigr)_I\,\bigl[w_{I,J}\bigr]_{I,J}\,\mathrm{diag}\bigl(b_I^p\bigr)_I\,\bigl[\overline{w_{J,I}}\bigr]_{I,J}\,\mathrm{diag}\bigl(a_I^{p/2}\bigr)_I\\
&= \Bigl[\sum_{K\in I_d(k)} w_{I,K}\,\overline{w_{J,K}}\,a_I^{p/2}a_J^{p/2}b_K^p\Bigr]_{I,J}.
\end{aligned}$$
Then we have
$$\frac{(Z_p^{\wedge k})^p}{\eta_k^p} = \Bigl[\sum_{K\in I_d(k)} w_{I,K}\,\overline{w_{J,K}}\Bigl(\frac{a_I^{1/2}a_J^{1/2}b_K}{\eta_k}\Bigr)^p\Bigr]_{I,J} \longrightarrow Q := \Bigl[\sum_{K\in I_d(k)} w_{I,K}\,\overline{w_{J,K}}\,\delta_{I,J,K}\Bigr]_{I,J},$$
where
$$\delta_{I,J,K} := \begin{cases} 1 & \text{if } (I,K),\,(J,K)\in\Delta_k,\\ 0 & \text{otherwise.}\end{cases}$$
Since $Q_{I,I} \ge |w_{I,K}|^2 > 0$ when $(I,K)\in\Delta_k$, note that $Q \ne 0$. Furthermore, since the eigenvalue $\lambda_1(Z_p^{\wedge k})$ is simple (for p large), it follows from (2.9) that the limit Q of $(Z_p^{\wedge k})^p/\eta_k^p$ must be a rank one projection $\psi\psi^*$ up to a positive scalar multiple, where ψ is a unit vector in $(\mathbb{C}^d)^{\wedge k}$. Since the unit eigenvector $u_1(p)\wedge\cdots\wedge u_k(p)$ of $Z_p^{\wedge k}$ corresponding to the largest (simple) eigenvalue coincides with that of $(Z_p^{\wedge k})^p/\eta_k^p$, we conclude that $u_1(p)\wedge\cdots\wedge u_k(p)$ converges to ψ up to a scalar multiple $e^{\sqrt{-1}\,\theta}$. Therefore, by Lemma 2.4, the orthogonal projection onto $\mathrm{span}\{u_1(p),\dots,u_k(p)\}$ converges as p → ∞.
Assume now that
$$\lambda_1 = \cdots = \lambda_{k_1} > \lambda_{k_1+1} = \cdots = \lambda_{k_2} > \cdots > \lambda_{k_{s-1}+1} = \cdots = \lambda_{k_s} \quad (k_s = d).$$
From the fact proved above, the orthogonal projection onto $\mathrm{span}\{u_1(p),\dots,u_{k_r}(p)\}$ converges for any r = 1, …, s − 1, and this is trivial for r = s. Therefore, the orthogonal projection onto $\mathrm{span}\{u_{k_{r-1}+1}(p),\dots,u_{k_r}(p)\}$ converges to a projection $P_r$ for any r = 1, …, s, and thus $Z_p$ converges to $\sum_{r=1}^s \lambda_{k_r} P_r$.
For 1 ≤ k ≤ d define $\eta_k$ by the right-hand side of (2.8). Then Lemma 2.3 (see also the proof of Lemma 2.1) implies that, for k = 1, …, d,
$$\lambda_k = \frac{\eta_k}{\eta_{k-1}} \quad\text{if } \eta_k > 0$$
(where $\eta_0 := 1$), and $\lambda_k = 0$ if $\eta_k = 0$. So one can effectively compute the eigenvalues of $Z := \lim_{p\to\infty} Z_p$; however, it does not seem that there is a simple algebraic method to compute the limit matrix Z.
We end the section with a brief exposition on the generalization of Theorem 2.5 to the case of more than two matrices, without proof. Let $A_1,\dots,A_m$ be d × d positive semidefinite matrices with diagonalizations
$$A_l = V_l D_l V_l^*, \qquad D_l = \mathrm{diag}\bigl(a_1^{(l)},\dots,a_d^{(l)}\bigr), \qquad 1 \le l \le m.$$
where
$$W_l := V_l^*V_{l+1} = \bigl[w_{ij}^{(l)}\bigr]_{i,j=1}^d, \qquad 1 \le l \le m-1.$$
where
$$w(i_1,i_2,\dots,i_m) := \sum\Bigl\{\, w_{i_1 j_2}^{(1)}\, w_{j_2 j_3}^{(2)} \cdots w_{j_{m-1} i_m}^{(m-1)} : 1 \le j_2,\dots,j_{m-1} \le d,\ a_{j_2}^{(2)}\cdots a_{j_{m-1}}^{(m-1)} = a_{i_2}^{(2)}\cdots a_{i_{m-1}}^{(m-1)} \,\Bigr\}.$$
This can be applied to the antisymmetric tensor powers of the $A_l$ to show that the limit $\lambda_i := \lim_{p\to\infty}\lambda_i(p)$ exists for every i = 1, …, d. Then the next theorem can be shown by extending the proof of Theorem 2.5.
in the log-majorization order in (3.2). To do this, let $0 = i_0 < i_1 < \cdots < i_{l-1} < i_l = d$ and $0 = j_0 < j_1 < \cdots < j_{m-1} < j_m = d$ be taken so that
$$a_1 = \cdots = a_{i_1} > a_{i_1+1} = \cdots = a_{i_2} > \cdots > a_{i_{l-1}+1} = \cdots = a_{i_l},$$
$$b_1 = \cdots = b_{j_1} > b_{j_1+1} = \cdots = b_{j_2} > \cdots > b_{j_{m-1}+1} = \cdots = b_{j_m}.$$
Theorem 3.1 In the above situation, the following conditions are equivalent:
It follows (see the proof of Lemma 2.1) that this is equivalent to (i). (ii) ⇒ (iii) is trivial.
(iii) ⇒ (i). By Lemma 2.3 again, condition (iii) means that
$$\prod_{i=1}^h \lambda_i = \prod_{i=1}^h a_i b_i \quad\text{for all } h \in \{i_1,\dots,i_{l-1},\, j_1,\dots,j_{m-1}\}. \tag{3.3}$$
This holds also for h = d thanks to (3.2). We need to prove that $\prod_{i=1}^k \lambda_i = \prod_{i=1}^k a_ib_i$ for all k = 1, …, d. Now, let $i_{r-1} < k \le i_r$ and $j_{s-1} < k \le j_s$ as in condition (ii). If $k = i_r$ or $k = j_s$, then the conclusion has already been stated in (3.3). So assume that $i_{r-1} < k < i_r$ and $j_{s-1} < k < j_s$. Set $h_0 := \max\{i_{r-1}, j_{s-1}\}$ and $h_1 := \min\{i_r, j_s\}$ so that $h_0 < k < h_1$. By (3.3), for $h = h_0, h_1$, we have
$$\prod_{i=1}^{h_0}\lambda_i = \prod_{i=1}^{h_0}a_ib_i > 0, \qquad \prod_{i=1}^{h_1}\lambda_i = \prod_{i=1}^{h_1}a_ib_i.$$
Since $a_i = a_{h_1}$ and $b_i = b_{h_1}$ for $h_0 < i \le h_1$, we have $\prod_{i=h_0+1}^{h_1}\lambda_i = (a_{h_1}b_{h_1})^{h_1-h_0}$. By (3.2) we furthermore have $\prod_{i=1}^{h_0+1}\lambda_i \le \prod_{i=1}^{h_0+1}a_ib_i$ and hence
$$a_{h_1}b_{h_1} \ge \lambda_{h_0+1} \ge \lambda_{h_0+2} \ge \cdots \ge \lambda_{h_1}.$$
Therefore, $\lambda_i = a_{h_1}b_{h_1}$ for all i with $h_0 < i \le h_1$, from which $\prod_{i=1}^k\lambda_i = \prod_{i=1}^k a_ib_i$ follows for $h_0 < k < h_1$.
Proposition 3.2 Assume that the equivalent conditions of Theorem 3.1 hold. Then, for each r = 1, …, l, the spectral projection of Z corresponding to the set of eigenvalues $\{a_{i_{r-1}+1}b_{i_{r-1}+1},\dots,a_{i_r}b_{i_r}\}$ is equal to the spectral projection $\sum_{i=i_{r-1}+1}^{i_r} v_iv_i^*$ of A corresponding to $a_{i_r}$. Hence Z is of the form
$$Z = \sum_{i=1}^d a_ib_i\,u_iu_i^*$$
for some orthonormal basis $\{u_1,\dots,u_d\}$ such that $\sum_{i=i_{r-1}+1}^{i_r} u_iu_i^* = \sum_{i=i_{r-1}+1}^{i_r} v_iv_i^*$ for r = 1, …, l.
Proof In addition to Theorem 2.5 we may prove that, for each $k \in \{i_1,\dots,i_{l-1}\}$, the spectral projection of $Z_p$ corresponding to $\{\lambda_1(p),\dots,\lambda_k(p)\}$ converges to $\sum_{i=1}^k v_iv_i^*$. Assume that $k = i_r$ with $1 \le r \le l-1$. When $j_{s-1} < k < j_s$, by condition (iii) of Theorem 3.1, we have $\det(V^*W)_{\{1,\dots,k\},\,\{1,\dots,j_{s-1}\}\cup J'} \ne 0$ for some $J' \subset \{j_{s-1}+1,\dots,j_s\}$ with $|J'| = k - j_{s-1}$. By exchanging the $w_j$ with $j \in J'$ with $w_{j_{s-1}+1},\dots,w_k$ we may assume that $\det(V^*W)_{\{1,\dots,k\},\{1,\dots,k\}} \ne 0$. Furthermore, by replacing A and B with $V^*AV$ and $V^*BV$, respectively, we may assume that V = I. So we end up assuming that
$$A = \mathrm{diag}(a_1,\dots,a_d), \qquad B = W\,\mathrm{diag}(b_1,\dots,b_d)\,W^*,$$
and $\det W(1,\dots,k) \ne 0$, where $W(1,\dots,k)$ denotes the principal k × k submatrix in the top-left corner. Let $\{e_1,\dots,e_d\}$ be the standard basis of $\mathbb{C}^d$. By Theorem 3.1, we have
$$\lim_{p\to\infty}\lambda_1(Z_p^{\wedge k}) = \prod_{i=1}^k a_ib_i > \Bigl(\prod_{i=1}^{k-1}a_ib_i\Bigr)a_{k+1}b_{k+1} = \lim_{p\to\infty}\lambda_2(Z_p^{\wedge k})$$
so that the largest eigenvalue of $Z_p^{\wedge k}$ is simple for every sufficiently large p. Let $\{u_1(p),\dots,u_d(p)\}$ be an orthonormal basis of $\mathbb{C}^d$ for which $Z_pu_i(p) = \lambda_i(p)u_i(p)$ for $1 \le i \le d$. Then $u_1(p)\wedge\cdots\wedge u_k(p)$ is the unit eigenvector of $Z_p^{\wedge k}$ corresponding to the eigenvalue $\lambda_1(Z_p^{\wedge k})$. We now show that $u_1(p)\wedge\cdots\wedge u_k(p)$ converges to $e_1\wedge\cdots\wedge e_k$ in $(\mathbb{C}^d)^{\wedge k}$. We observe that
$$(A^{\wedge k})^{p/2} = \mathrm{diag}\bigl(a_I^{p/2}\bigr)_I = a_{\{1,\dots,k\}}^{p/2}\,\mathrm{diag}\bigl(1,\alpha_2^{p/2},\dots,\alpha_{\binom{d}{k}}^{p/2}\bigr)$$
with respect to the basis $\bigl\{e_{i_1}\wedge\cdots\wedge e_{i_k} : I = \{i_1,\dots,i_k\}\in I_d(k)\bigr\}$, where the first diagonal entry 1 corresponds to $e_1\wedge\cdots\wedge e_k$ and $0 \le \alpha_h < 1$ for $2 \le h \le \binom{d}{k}$. Similarly,
$$\bigl(\mathrm{diag}(b_1,\dots,b_d)^{\wedge k}\bigr)^p = b_{\{1,\dots,k\}}^p\,\mathrm{diag}\bigl(1,\beta_2^p,\dots,\beta_{\binom{d}{k}}^p\bigr),$$
where $0 \le \beta_h \le 1$ for $2 \le h \le \binom{d}{k}$. Moreover, $W^{\wedge k}$ is given as
$$W^{\wedge k} = \bigl[w_{I,J}\bigr]_{I,J} = \begin{bmatrix} w_{11} & \cdots & w_{1\binom{d}{k}} \\ \vdots & \ddots & \vdots \\ w_{\binom{d}{k}1} & \cdots & w_{\binom{d}{k}\binom{d}{k}} \end{bmatrix},$$
where $w_{I,J} = \det W_{I,J}$ and so $w_{11} = \det W(1,\dots,k) \ne 0$. As in the proof of Theorem 2.5 we now compute
$$(Z_p^{\wedge k})^p = (A^{\wedge k})^{p/2}\,W^{\wedge k}\bigl((\mathrm{diag}(b_1,\dots,b_d))^{\wedge k}\bigr)^p(W^{\wedge k})^*(A^{\wedge k})^{p/2} = a_{\{1,\dots,k\}}^p b_{\{1,\dots,k\}}^p\Bigl[\sum_{h=1}^{\binom{d}{k}} w_{ih}\,\overline{w_{jh}}\,\alpha_i^{p/2}\alpha_j^{p/2}\beta_h^p\Bigr]_{i,j=1}^{\binom{d}{k}},$$
where $\alpha_1 = \beta_1 = 1$. As p → ∞ we have
$$\Bigl[\sum_{h=1}^{\binom{d}{k}} w_{ih}\,\overline{w_{jh}}\,\alpha_i^{p/2}\alpha_j^{p/2}\beta_h^p\Bigr]_{i,j} \longrightarrow \mathrm{diag}\Bigl(\sum_{h:\,\beta_h=1}|w_{1h}|^2,\ 0,\dots,0\Bigr).$$
Since the unit eigenvector of $Z_p^{\wedge k}$ corresponding to the largest eigenvalue coincides with that of $\bigl[\sum_{h=1}^{\binom{d}{k}} w_{ih}\overline{w_{jh}}\alpha_i^{p/2}\alpha_j^{p/2}\beta_h^p\bigr]_{i,j}$, it follows that $u_1(p)\wedge\cdots\wedge u_k(p)$ converges to $e_1\wedge\cdots\wedge e_k$ up to a scalar multiple $e^{\sqrt{-1}\,\theta}$, θ ∈ ℝ. By Lemma 2.4, this implies the desired assertion.
Corollary 3.3 If the eigenvalues a1 , . . . , ad of A are all distinct and the conditions of
Theorem 3.1 hold, then
In particular, when the eigenvalues of A are all distinct and so are those of B, the
conditions of Theorem 3.1 mean that all the leading principal minors of V ∗ W are non-zero.
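As a hedged numerical check of this situation (a sketch under the assumption that the randomly generated $V^*W$ has non-zero leading principal minors, which holds almost surely):

```python
import numpy as np
from scipy.linalg import fractional_matrix_power as fmp

# Hedged sketch: for generic A, B with distinct eigenvalues, the leading
# principal minors of V*W are non-zero almost surely, and then the
# eigenvalues of Z_p = (A^{p/2} B^p A^{p/2})^{1/p} tend to a_i b_i (both
# sequences in decreasing order). Eigenvalues are kept near 1 so that the
# large matrix powers stay within floating-point range.
a = np.array([1.1, 1.0, 0.9])
b = np.array([1.05, 1.0, 0.95])
rng = np.random.default_rng(5)
V, _ = np.linalg.qr(rng.standard_normal((3, 3)))
W, _ = np.linalg.qr(rng.standard_normal((3, 3)))
A = V @ np.diag(a) @ V.T
B = W @ np.diag(b) @ W.T

p = 100.0
Ap2 = fmp(A, p / 2)
Zp = np.real(fmp(Ap2 @ fmp(B, p) @ Ap2, 1 / p))
lam = np.sort(np.linalg.eigvalsh(Zp))[::-1]
assert np.allclose(lam, a * b, rtol=0.1)         # approaches the limit eigenvalues
assert np.isclose(np.prod(lam), np.prod(a * b))  # determinants agree for every p
```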
Let $\mathcal{E}$ be a k-dimensional subspace of $\mathbb{C}^d$ and E the orthogonal projection onto $\mathcal{E}$. Let B be given with the diagonalization in (2.2). In [13, Theorem 1.2] Bourin proved that if $\mathcal{E}\cap\mathrm{span}\{w_i : i > k\} = \{0\}$ then

Since $EB^pE$ is of rank at most k, it is obvious that $\lambda_i((EB^pE)^{1/p}) = 0$ for all i > k and all p > 0. The following is a slight refinement of Bourin's result.
Corollary 3.4 Let E and B be as stated above. Then the following conditions are
equivalent:
it follows that FE is the adjoint of EF. Since the kernel of FE is $\mathcal{E}\cap F^{\perp}$, (b) means that FE is injective. This is equivalent to EF being surjective, which means (c).
4. Limit of ( A p # B p )1/ p as p → ∞
Another problem, seemingly more interesting, is to know what can be said on the convergence of $(A^p\,\sigma\,B^p)^{1/p}$ as p → ∞, the reciprocal version of (1.2). For example, when $\sigma = \nabla$, the arithmetic mean, the increasing limit of $(A^p\,\nabla\,B^p)^{1/p} = \bigl((A^p+B^p)/2\bigr)^{1/p}$ as 1 ≤ p → ∞ exists and
$$A \vee B := \lim_{p\to\infty}\bigl(A^{-p}\,!\,B^{-p}\bigr)^{-1/p} = \lim_{p\to\infty}\bigl(A^p+B^p\bigr)^{1/p} \tag{4.1}$$
is the supremum of A, B with respect to a spectral order among Hermitian matrices; see [14] and [15, Lemma 6.15]. When σ = !, the harmonic mean, we have the infimum counterpart $A\wedge B := \lim_{p\to\infty}(A^p\,!\,B^p)^{1/p}$, the decreasing limit as 1 ≤ p → ∞.
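A hedged numerical sketch of (4.1); the eigenvalues are chosen close to 1 so the large matrix powers stay in floating-point range:

```python
import numpy as np
from scipy.linalg import fractional_matrix_power as fmp

# Hedged sketch of (4.1): (A^p + B^p)^{1/p} converges as p grows, and for
# p >= 1 it dominates both A and B in the Loewner order, since
# A^p + B^p >= A^p and t -> t^{1/p} is operator monotone.
rng = np.random.default_rng(6)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
A = Q @ np.diag([1.3, 1.0, 0.7]) @ Q.T
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
B = Q @ np.diag([1.2, 0.95, 0.8]) @ Q.T

def sup_p(p):
    return np.real(fmp(fmp(A, p) + fmp(B, p), 1 / p))

S = sup_p(64)
# convergence: later Cauchy differences are smaller than early ones
assert np.linalg.norm(S - sup_p(32)) < np.linalg.norm(sup_p(4) - sup_p(2))
assert np.linalg.eigvalsh(S - A).min() > -1e-8
assert np.linalg.eigvalsh(S - B).min() > -1e-8
```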
In this section we are interested in the case where σ = #, the geometric mean. For each p > 0 and d × d positive semidefinite matrices A, B with the diagonalizations in (2.1) and (2.2), we define
$$G_p := (A^p \,\#\, B^p)^{2/p}, \tag{4.2}$$
which is given as $\bigl(A^{p/2}(A^{-p/2}B^pA^{-p/2})^{1/2}A^{p/2}\bigr)^{2/p}$ if A > 0. The eigenvalues of $G_p$ are denoted as $\lambda_1(G_p) \ge \cdots \ge \lambda_d(G_p)$ in decreasing order.
exists, and $a_1b_1 \ge \widetilde\lambda_1 \ge \cdots \ge \widetilde\lambda_d \ge a_db_d$. Furthermore,
$$\bigl(a_ib_{d+1-i}\bigr)_{i=1}^d \prec_{(\log)} \bigl(\widetilde\lambda_i\bigr)_{i=1}^d \prec_{(\log)} \bigl(a_ib_i\bigr)_{i=1}^d. \tag{4.3}$$
so that (4.3) follows by letting p → ∞. To prove (4.5), we may by continuity assume that A > 0. By [7, Corollary 2.3] and (3.1) we have
$$\bigl(\lambda_i(G_1)\bigr)_{i=1}^d \prec_{(\log)} \bigl(\lambda_i(A^{1/2}BA^{1/2})\bigr)_{i=1}^d \prec_{(\log)} \bigl(a_ib_i\bigr)_{i=1}^d.$$
Since $G_1A^{-1}G_1 = B$, there exists a unitary matrix V such that $A^{-1/2}G_1A^{-1/2} =$

proving (4.5) seems much more difficult, and we can currently settle the special case of 2 × 2 matrices only.
Proposition 4.2 Let A and B be 2 × 2 positive semidefinite matrices with the diagonalizations (2.1) and (2.2) with d = 2. Then $G_p$ in (4.2) converges as p → ∞ to a positive semidefinite matrix whose eigenvalues are
$$\bigl(\widetilde\lambda_1,\, \widetilde\lambda_2\bigr) = \begin{cases} (a_1b_1,\ a_2b_2) & \text{if } (V^*W)_{12} = 0, \\ \bigl(\max\{a_1b_2, a_2b_1\},\ \min\{a_1b_2, a_2b_1\}\bigr) & \text{if } (V^*W)_{12} \ne 0. \end{cases}$$
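As a hedged numerical sketch of the generic case of the proposition, using the known 2 × 2 identity $A \,\#\, B = (A+B)/\sqrt{\det(A+B)}$ for determinant-one matrices [16] (the rotation angles below are arbitrary choices):

```python
import numpy as np
from scipy.linalg import fractional_matrix_power as fmp

# Hedged sketch of Proposition 4.2: when (V*W)_{12} != 0 the eigenvalues of
# G_p should approach (max{a1 b2, a2 b1}, min{a1 b2, a2 b1}).
def rot(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s], [s, c]])

a1, a2 = 2.0, 0.5        # det A = 1
b1, b2 = 1.5, 1 / 1.5    # det B = 1
A = rot(0.3) @ np.diag([a1, a2]) @ rot(0.3).T
B = rot(1.0) @ np.diag([b1, b2]) @ rot(1.0).T

p = 64.0
S = fmp(A, p) + fmp(B, p)
# G_p = (A^p + B^p)^{2/p} / det(A^p + B^p)^{1/p} in the determinant-one case
Gp = np.real(fmp(S, 2 / p)) / np.linalg.det(S) ** (1 / p)
lam = np.sort(np.linalg.eigvalsh(Gp))[::-1]
assert np.allclose(lam, [max(a1*b2, a2*b1), min(a1*b2, a2*b1)], rtol=0.1)
```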
Proof Since
$$G_p = V\Bigl(\bigl(\mathrm{diag}(a_1,a_2)\bigr)^p \,\#\, \bigl((V^*W)\,\mathrm{diag}(b_1,b_2)\,(V^*W)^*\bigr)^p\Bigr)^{2/p}V^*,$$
we may assume without loss of generality that V = I (then $V^*W = W$).
First, when $W_{12} = 0$ (hence W is diagonal), we have for every p > 0
$$G_p = \mathrm{diag}(a_1b_1,\ a_2b_2).$$
Next, when $W_{11} = 0$ (hence $W = \begin{pmatrix} 0 & w_1 \\ w_2 & 0\end{pmatrix}$ with $|w_1| = |w_2| = 1$), we have for every p > 0
$$G_p = \mathrm{diag}(a_1b_2,\ a_2b_1).$$
In the rest it suffices to consider the case where $W = \begin{pmatrix} w_{11} & w_{12} \\ w_{21} & w_{22}\end{pmatrix}$ with $w_{ij} \ne 0$ for all i, j = 1, 2. First, assume that det A = det B = 1 so that $a_1a_2 = b_1b_2 = 1$. For every p > 0, since $\det A^p = \det B^p = 1$, it is known [16, Proposition 3.11] (also [17, Proposition 4.1.12]) that
$$A^p \,\#\, B^p = \frac{A^p + B^p}{\sqrt{\det(A^p+B^p)}}$$
so that
$$G_p = \frac{(A^p+B^p)^{2/p}}{\det(A^p+B^p)^{1/p}}.$$
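This determinant-one identity is easy to verify numerically (a hedged sketch; the rotation angles are arbitrary choices):

```python
import numpy as np
from scipy.linalg import sqrtm

# Hedged check of the 2x2 identity: if det A = det B = 1 then
# A # B = (A + B)/sqrt(det(A + B)), where
# A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}.
def geo_mean(A, B):
    R = sqrtm(A)
    Ri = np.linalg.inv(R)
    return R @ sqrtm(Ri @ B @ Ri) @ R

def rot(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s], [s, c]])

A = rot(0.4) @ np.diag([2.0, 0.5]) @ rot(0.4).T       # det A = 1
B = rot(1.1) @ np.diag([1.5, 1 / 1.5]) @ rot(1.1).T   # det B = 1

lhs = np.real(geo_mean(A, B))
rhs = (A + B) / np.sqrt(np.linalg.det(A + B))
assert np.allclose(lhs, rhs)
```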
Compute
$$A^p + B^p = \begin{pmatrix} a_1^p + |w_{11}|^2b_1^p + |w_{12}|^2b_2^p & w_{11}\overline{w_{21}}\,b_1^p + w_{12}\overline{w_{22}}\,b_2^p \\ \overline{w_{11}}w_{21}\,b_1^p + \overline{w_{12}}w_{22}\,b_2^p & a_2^p + |w_{21}|^2b_1^p + |w_{22}|^2b_2^p \end{pmatrix} \tag{4.6}$$
and
$$\det(A^p+B^p) = 1 + |w_{21}|^2(a_1b_1)^p + |w_{22}|^2(a_1b_2)^p + |w_{11}|^2(a_2b_1)^p + |w_{12}|^2(a_2b_2)^p + |w_{11}w_{22} - w_{12}w_{21}|^2. \tag{4.7}$$
Hence, we have
$$\lim_{p\to\infty}\bigl(\det(A^p+B^p)\bigr)^{1/p} = a_1b_1, \qquad \lim_{p\to\infty}\bigl(\mathrm{Tr}\,(A^p+B^p)\bigr)^{1/p} = \max\{a_1, b_1\}.$$
Furthermore, $\widetilde\lambda_2 = \min\{a_1b_2,\ a_2b_1\}$ follows since $\widetilde\lambda_1\widetilde\lambda_2 = 1$.
For general A, B > 0 let $\alpha := \sqrt{\det A}$ and $\beta := \sqrt{\det B}$. Since
$$G_p = \alpha\beta\,\bigl((\alpha^{-1}A)^p \,\#\, (\beta^{-1}B)^p\bigr)^{2/p},$$
we see from the above case that $G_p$ converges as p → ∞ and
$$\widetilde\lambda_1 = \alpha\beta\max\{(\alpha^{-1}a_1)(\beta^{-1}b_2),\ (\alpha^{-1}a_2)(\beta^{-1}b_1)\} = \max\{a_1b_2,\ a_2b_1\},$$
and similarly for $\widetilde\lambda_2$.
The remaining case is when $a_2 = 0$ and/or $b_2 = 0$. We may assume that $a_1, b_1 > 0$ since the case A = 0 or B = 0 is trivial. When $a_2 = b_2 = 0$, since $a_1^{-1}A$ and $b_1^{-1}B$ are non-commuting rank one projections, we have $G_p = 0$ for all p > 0 by [1, (3.11)]. Finally, assume that $a_2 = 0$ and B > 0. Then we may assume that $a_1 = 1$ and det B = 1. For ε > 0

so that
$$\lim_{p\to\infty} G_p = \mathrm{diag}(b_1^{-1}, 0) = \mathrm{diag}(b_2, 0),$$
which is the desired assertion in this final situation.
Disclosure statement
No potential conflict of interest was reported by the authors.
Funding
The work of FH was supported in part by Grant-in-Aid for Scientific Research (C)26400103.
References
[1] Kubo F, Ando T. Means of positive linear operators. Math. Ann. 1980;246:205–224.
[2] Hiai F. Log-majorizations and norm inequalities for exponential operators. In: Janas J, Szafraniec
FH, Zemánek J, editors. Linear operators. Banach Center Publications 38. Warsaw: Polish Acad.
Sci.; 1997. p. 119–181.
[3] Hiai F, Petz D. The Golden–Thompson trace inequality is complemented. Linear Algebra Appl.
1993;181:153–185.
[4] Pusz W, Woronowicz SL. Functional calculus for sesquilinear forms and the purification map.
Rep. Math. Phys. 1975;8:159–170.
[5] Lieb EH, Thirring W. Inequalities for the moments of the eigenvalues of the Schrödinger Hamiltonian and their relation to Sobolev inequalities. In: Lieb EH, Simon B, Wightman AS, editors. Studies in mathematical physics. Princeton (NJ): Princeton University Press; 1976. p. 269–303.
[6] Araki H. On an inequality of Lieb and Thirring. Lett. Math. Phys. 1990;19:167–170.
[7] Ando T, Hiai F. Log majorization and complementary Golden–Thompson type inequalities.
Linear Algebra Appl. 1994;197:113–131.
[8] Audenaert KMR, Datta N. α-z-Rényi relative entropies. J. Math. Phys. 2015;56:022202.
[9] Bhatia R. Matrix analysis. New York (NY): Springer; 1996.
[10] Ferrer J, García MI, Puerta F. Differentiable families of subspaces. Linear Algebra Appl.
1994;199:229–252.
[11] Marshall AW, Olkin I, Arnold BC. Inequalities: theory of majorization and its applications. 2nd
ed. New York (NY): Springer; 2011.
[12] Hiai F. Matrix analysis: matrix monotone functions, matrix means, and majorization.
Interdisciplinary Inf. Sci. 2010;16:139–248.
[13] Bourin J-C. Convexity or concavity inequalities for Hermitian operators. Math. Ineq. Appl.
2004;7:607–620.
[14] Kato T. Spectral order and a matrix limit theorem. Linear Multilinear Algebra. 1979;8:15–19.
[15] Ando T. Majorizations, doubly stochastic matrices, and comparison of eigenvalues. Linear
Algebra Appl. 1989;118:163–248.
[16] Moakher M. A differential geometric approach to the geometric mean of symmetric positive-
definite matrices. SIAM J. Matrix Anal. Appl. 2005;26:735–747.
[17] Bhatia R. Positive definite matrices. Princeton (NJ): Princeton University Press; 2007.
where π and $\widetilde\pi$ are surjective maps defined for $u = (u_1,\dots,u_k) \in O_{k,d}$ as
$$\pi(u) := \mathrm{span}\{u_1,\dots,u_k\}, \qquad \widetilde\pi(u) := [u_1\wedge\cdots\wedge u_k], \text{ the equivalence class of } u_1\wedge\cdots\wedge u_k,$$
and φ is the canonical representation of G(k, d) by the kth antisymmetric tensors (or the kth exterior products).
As shown in [10], the standard Grassmannian topology on G(k, d) is the final topology (the quotient topology) from the map π, and it coincides with the topology induced by the gap metric
$$d_{\mathrm{gap}}(\mathcal{U},\mathcal{V}) := \|P_{\mathcal{U}} - P_{\mathcal{V}}\|$$
for k-dimensional subspaces $\mathcal{U},\mathcal{V}$ of $\mathcal{H}$ and the orthogonal projections $P_{\mathcal{U}}, P_{\mathcal{V}}$ onto them. On the other hand, consider the quotient topology on $\widetilde H_{k,d}$ induced from the norm on $H_{k,d} \subset \mathcal{H}^{\wedge k}$, which is determined by the metric
$$\widetilde d\,\bigl(\widetilde\pi(u),\,\widetilde\pi(v)\bigr) := \inf_{\theta\in\mathbb{R}}\bigl\|u_1\wedge\cdots\wedge u_k - e^{\sqrt{-1}\,\theta}\,v_1\wedge\cdots\wedge v_k\bigr\|, \qquad u, v \in O_{k,d}.$$