
HANNER'S INEQUALITY FOR POSITIVE SEMIDEFINITE MATRICES

arXiv:2110.08312v1 [math.FA] 15 Oct 2021

Victoria M. Chayes
Department of Mathematics
Rutgers University
Piscataway, NJ 08854
vc362@math.rutgers.edu

October 19, 2021

ABSTRACT

We prove an analogue of Hanner's inequality of L^p spaces for positive semidefinite matrices. Let ||X||_p = Tr[(X*X)^{p/2}]^{1/p} denote the p-Schatten norm of a matrix X ∈ M_{n×n}(C). We show that the inequality ||X + Y||_p^p + ||X − Y||_p^p ≥ (||X||_p + ||Y||_p)^p + | ||X||_p − ||Y||_p |^p holds for 1 ≤ p ≤ 2 and reverses for p ≥ 2 when X, Y ∈ M_{n×n}(C)^+. This was previously known in the 1 ≤ p ≤ 4/3, p = 2, and p ≥ 4 cases, or under additional special assumptions. We outline these previous methods, and comment on their failure to extend to the general case. With the general inequality, it is confirmed that the unit ball in C^p_+ has the same moduli of smoothness and convexity as the unit ball in L^p.

Keywords: Hanner's Inequality · Non-Commutative Inequality · Uniform Convexity · Uniform Smoothness

1 Introduction
Hanner's inequalities for L^p function spaces are the well-known pair of inequalities for f, g ∈ L^p,

    ||f + g||_p^p + ||f − g||_p^p ≥ (||f||_p + ||g||_p)^p + | ||f||_p − ||g||_p |^p        (1.1)

for 1 ≤ p ≤ 2, with the reverse inequality for p ≥ 2. Hanner first established that it sufficed to prove the inequality for non-negative f, g, and that the function

    ζ(u, v) := (u^{1/p} + v^{1/p})^p + |u^{1/p} − v^{1/p}|^p,   u ≥ 0, v ≥ 0        (1.2)

is convex in the range 1 ≤ p ≤ 2, in order to write the desired inequality

    ∫ |f(t) + g(t)|^p + |f(t) − g(t)|^p dt ≥ (||f||_p + ||g||_p)^p + | ||f||_p − ||g||_p |^p        (1.3)

as

    ∫ ζ(x(t), y(t)) dt ≥ ζ(∫ x(t) dt, ∫ y(t) dt)        (1.4)

for x(t) = f(t)^p, y(t) = g(t)^p, which of course follows from the established convexity of ζ. He likewise deduced the inequality for p ≥ 2 from the concavity of ζ for p ≥ 2.
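Hanner's reduction rests on this convexity/concavity of ζ, which is easy to probe numerically. The following sketch (Python with NumPy; not part of the original argument) samples random pairs of points and checks midpoint convexity of ζ at p = 1.5 and midpoint concavity at p = 3.

```python
import numpy as np

def zeta(u, v, p):
    # Hanner's function: zeta(u, v) = (u^{1/p} + v^{1/p})^p + |u^{1/p} - v^{1/p}|^p
    return (u**(1/p) + v**(1/p))**p + abs(u**(1/p) - v**(1/p))**p

def midpoint_gap(u1, v1, u2, v2, p):
    # zeta at the midpoint minus the chord average; <= 0 everywhere iff
    # zeta is midpoint convex, >= 0 everywhere iff midpoint concave.
    mid = zeta((u1 + u2) / 2, (v1 + v2) / 2, p)
    return mid - (zeta(u1, v1, p) + zeta(u2, v2, p)) / 2

rng = np.random.default_rng(0)
pts = rng.uniform(0.01, 10.0, size=(500, 4))

gap_convex = max(midpoint_gap(*row, 1.5) for row in pts)   # convex regime, 1 <= p <= 2
gap_concave = min(midpoint_gap(*row, 3.0) for row in pts)  # concave regime, p >= 2
```

Midpoint checks on random samples of course prove nothing, but they make the switch in behavior at p = 2 visible.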
Hanner's inequalities are directly related to the concept of uniform convexity in Banach spaces: a Banach space is uniformly convex if for each 0 ≤ ε ≤ 2, there is some δ(ε) ≥ 0 such that

    ||x|| = ||y|| = 1,  ||x − y|| ≥ ε   imply   ||(x + y)/2|| ≤ 1 − δ(ε)        (1.5)

originally defined by Clarkson in 1936 [5]. Hanner's inequalities are of note because they establish the ideal constant δ(ε) for L^p [7]. One can in fact define the modulus of convexity of a space X as

    δ_X(ε) := inf{1 − (1/2)||x + y|| : ||x|| = ||y|| = 1, ||x − y|| ≥ 2ε}        (1.6)

which gives a bound of the form

    δ_{L^p}(ε) ≥ (ε/K_{p,r})^r        (1.7)

for r = p when p ≥ 2, and 1/p + 1/r = 1 for 1 ≤ p ≤ 2. One can also consider general r-uniform convexity by setting r to different values than listed, so long as a K exists such that the inequality holds; see [1] for more detail. L^p spaces are p-uniformly convex for p ≥ 2. There is then a dual notion of modulus of smoothness

    ρ_X(τ) := sup{(||u + v|| + ||u − v||)/2 − 1 : ||u|| = 1, ||v|| = τ}        (1.8)

related to the modulus of convexity by

    ρ_{X*}(τ) = sup{τε − δ_X(ε) : 0 ≤ ε ≤ 1}        (1.9)

as proven in [12]. Then t-uniform smoothness can be defined if there exists some K such that

    (||x + y||^t + ||x − y||^t)/2 ≤ ||x||^t + ||Ky||^t        (1.10)

for all x, y ∈ X and t ∈ (1, 2].
It was shown in [1] that p-uniform smoothness implies q-uniform convexity for dual spaces and dual indices p and q, with the same ideal constant K. Hanner's inequality for L^p spaces provided all of these ideal constants in the L^p setting, but they are unknown in the non-commutative case.

This generalization to C^p convexity was first addressed in [17], and the optimal coefficients of 2-uniform smoothness and convexity were determined in [1]. In all, the progress that has been made on the problem is as follows:

1. In [14] (McCarthy, 1967): for all X, Y ∈ M_{n×n}(C) such that ||X||_p = ||Y||_p, and all ranges of p. Note the proof for 1 ≤ p ≤ 2 given in [14] is incorrect and corrected in [10].
2. In [17] (Tomczak-Jaegermann, 1974): for all X, Y ∈ M_{n×n}(C) and p = 2k.
3. In [1] (Ball, Carlen, Lieb, 1994): for all X, Y ∈ M_{n×n}(C) and 1 ≤ p ≤ 4/3 and p ≥ 4; and for X + Y, X − Y ≥ 0 and all ranges of p.
4. In [4] (Chayes, 2020): for all self-adjoint X, Y ∈ M_{n×n}(C) such that the anticommutator {X, Y} = XY + YX = 0, and all ranges of p.

The primary roadblock to establishing the inequalities in C^p is that operator concave and operator convex functions are a far more limited class than concave and convex functions, so Hanner's original strategy was no longer considered a viable method. Previous methods sidestepped this question of operator concavity in one of two ways: either using singular value rearrangement inequalities, or via establishing operator convexity of simpler related trace functions in the limited case of X + Y, X − Y ≥ 0. No attempts have been made in the past to establish operator concavity of Equation (1.2), which is the method this paper employs.
Our main theorem is the extension of Hanner's inequalities to all positive semidefinite matrices:

Theorem 1.1. Let X, Y ∈ M_{n×n}(C)^+. Then

    ||X + Y||_p^p + ||X − Y||_p^p ≥ (||X||_p + ||Y||_p)^p + | ||X||_p − ||Y||_p |^p        (1.11)

for 1 ≤ p ≤ 2, with the inequality reversing for p ≥ 2.
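As a numerical sanity check of Theorem 1.1 (a Python/NumPy sketch, not part of the proof; the matrix size, sample count, and values of p are illustrative assumptions), one can sample random positive semidefinite matrices and compare the two sides of (1.11) in both regimes:

```python
import numpy as np

def schatten(M, p):
    # p-Schatten norm of a Hermitian matrix via its eigenvalues
    return np.sum(np.abs(np.linalg.eigvalsh(M))**p)**(1/p)

def hanner_gap(X, Y, p):
    # lhs - rhs of (1.11); >= 0 when 1 <= p <= 2, <= 0 when p >= 2
    lhs = schatten(X + Y, p)**p + schatten(X - Y, p)**p
    nx, ny = schatten(X, p), schatten(Y, p)
    return lhs - ((nx + ny)**p + abs(nx - ny)**p)

def random_psd(n, rng):
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return A @ A.conj().T  # positive semidefinite by construction

rng = np.random.default_rng(1)
pairs = [(random_psd(4, rng), random_psd(4, rng)) for _ in range(100)]
min_low = min(hanner_gap(X, Y, 1.5) for X, Y in pairs)   # expect >= 0
max_high = max(hanner_gap(X, Y, 3.0) for X, Y in pairs)  # expect <= 0
```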

This is significant progress towards establishing Hanner's inequalities for C^p in general, and much as Hanner's inequalities for L^p do, it characterizes the optimal coefficients of uniform smoothness and convexity for C^p_+.

This paper is arranged in the following manner: Section 2 addresses in more detail the previous methods used to make progress on the general inequality, and why they are unable to extend to it. Section 3 presents the proof of Theorem 1.1. Section 4 comments on the challenges of extending this proof beyond positive semidefinite matrices to the general case. Appendix A provides a brief background on the technique of majorization and lists the results used, and Appendix B provides the rigorous numerical proof of operator concavity.


2 Commentary On Previous Methods


A function f : R^+ → R is operator convex if for all A, B ≥ 0 and 0 ≤ t ≤ 1,

    f((1 − t)A + tB) ≤ (1 − t)f(A) + tf(B),        (2.1)

where A ≥ B indicates A − B is positive semidefinite. A function is operator concave if −f is operator convex. An operator convex function will satisfy Jensen's Operator Inequality:

    f(Σ_k A_k* X_k A_k) ≤ Σ_k A_k* f(X_k) A_k        (2.2)

for all sets of self-adjoint operators {X_k} and operators {A_k} such that Σ_k A_k* A_k = I. This is often used to establish inequalities between f(A) and f(A_Diag), the diagonal of a matrix A in a particular basis. Many functions that are convex or concave are not operator convex or concave; for example, x ↦ x^p is not operator convex for p > 2. The primary known cases of operator convex and concave functions are x ↦ x^p for −1 ≤ p ≤ 2, x ↦ log(x), and x ↦ x log(x). The Löwner-Heinz Theorem provides the full characterization of operator monotone functions, and given that f is operator convex if and only if x ↦ f(x)/x is operator monotone, this characterizes operator convex functions as well.
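Jensen's Operator Inequality (2.2) can be illustrated numerically. The sketch below (Python with NumPy; the choice f(x) = x² and the coordinate projections are illustrative assumptions, not taken from the text) checks that the square of a pinching is dominated by the pinching of the square in the positive semidefinite order:

```python
import numpy as np

rng = np.random.default_rng(2)

# Coordinate projections with P1*P1 + P2*P2 = I, a valid choice of {A_k} in (2.2).
P1 = np.diag([1.0, 1.0, 0.0, 0.0])
P2 = np.diag([0.0, 0.0, 1.0, 1.0])

H = rng.standard_normal((4, 4))
X = (H + H.T) / 2  # self-adjoint

pinch = P1 @ X @ P1 + P2 @ X @ P2
# f(x) = x^2 is operator convex, so f(pinch(X)) <= pinch(f(X)) in the PSD order,
# i.e. the difference below is positive semidefinite.
gap = (P1 @ (X @ X) @ P1 + P2 @ (X @ X) @ P2) - pinch @ pinch
min_gap_eig = np.linalg.eigvalsh(gap).min()
```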
Clearly, if f is jointly operator convex, then (X, Y) ↦ Tr[f(X, Y)] is convex. However, there are more functions X ↦ Tr[f(X, Y)], considered in the basis where Y is diagonal, that are convex in X than the class of jointly operator convex functions: it suffices to show that

    (d²/ds²) Tr[f(X + sA, Y)] |_{s=0} ≥ 0        (2.3)

for all self-adjoint A. Once more, though, this is a smaller class of functions than standard convex functions.
As mentioned in the Introduction, Hanner originally proved his inequalities by reducing the problem to that of the joint concavity of

    ζ(u, v) := (u^{1/p} + v^{1/p})^p + |u^{1/p} − v^{1/p}|^p,   u ≥ 0, v ≥ 0        (2.4)

which in turn follows from the concavity of ζ(1, v). We will consider the same function directly, and prove that it is jointly concave for positive semidefinite operators.

Previous work to extend Hanner's inequality to matrices has fallen into two main categories to sidestep operator convexity of the original function Hanner considered. The first considers inequalities regarding singular values, and the second leverages trace operator convexity of simpler functions on the convex cone M_Y := {X : X + Y, X − Y ≥ 0} for fixed Y.
The first method, employed in [17], was effectively a singular value rearrangement method. It is known that

    Tr[A_1 A_2 ⋯ A_k] ≤ Σ_{i=1}^n σ_i(A_1) σ_i(A_2) ⋯ σ_i(A_k)        (2.5)

(this is proven in [17] for operators as a general Horn inequality; however, it is equally possible to prove it for matrices with majorization, as is done in Appendix A). Then for Hermitian X, Y, one can expand

    ||X + Y||_{2k}^{2k} + ||X − Y||_{2k}^{2k} = Tr[(X + Y)^{2k} + (X − Y)^{2k}]        (2.6)
        ≤ Σ_{i=1}^n Σ_{j=0}^k 2 C(2k, 2j) σ_i(X)^{2j} σ_i(Y)^{2k−2j}        (2.7)
        = Σ_{i=1}^n (σ_i(X) + σ_i(Y))^{2k} + (σ_i(X) − σ_i(Y))^{2k}        (2.8)
        ≤ (||X||_{2k} + ||Y||_{2k})^{2k} + (||X||_{2k} − ||Y||_{2k})^{2k}.        (2.9)

Also falling under the umbrella of this method, both the extension of known cases of the inequality to the anticommutator {X, Y} = XY + YX = 0 in [4] and the corrected proof of the ||X||_p = ||Y||_p case in [10] used singular value rearrangement arguments with majorization.
It was conjectured in [2] that the inequality of Line (2.7) might extend beyond p = 2k, which is notably stronger than Hanner's inequality for matrices. This conjecture arose from the fact that it is possible to show that certain singular value rearrangement inequalities hold when X + Y, X − Y ≥ 0. However, counterexamples to these singular value rearrangement inequalities were found in [4] even for X, Y ≥ 0, closing that line of research as a potential way to prove Hanner's inequalities for matrices.
In [1], the most significant progress in establishing Hanner's inequalities for matrices was made. First, a vital duality lemma was proven:

Lemma 2.1. (Ball, Carlen, Lieb 1994 [1], Lemma 6) Let X be a normed space with dual X*. Then for all f, g ∈ X,

    ||f + g||^p + ||f − g||^p ≥ (||f|| + ||g||)^p + | ||f|| − ||g|| |^p        (2.10)

for 1 ≤ p ≤ 2 if and only if

    ||u + v||^q + ||u − v||^q ≤ (||u|| + ||v||)^q + | ||u|| − ||v|| |^q        (2.11)

for all u, v ∈ X* and p^{−1} + q^{−1} = 1.

This lemma will be used in our proof as well, to extend the range from 2 ≤ p ≤ 4 to 4/3 ≤ p ≤ 2.
To establish Hanner's inequality when X + Y, X − Y ≥ 0, the authors noted that Hanner's inequality can be addressed through a variational inequality: for all x, y ∈ R,

    |x + y|^p + |x − y|^p = sup/inf {α(r)|x|^p + α(r^{−1})|y|^p : 0 < r < ∞}        (2.12)

where α(r) := α_p(r) is defined as

    α(r) := (1 + r)^{p−1} + |1 − r|^{p−1} sign(1 − r)        (2.13)

and the sup and inf are taken respectively for 1 ≤ p ≤ 2 and p ≥ 2.
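The variational formula is easy to check on a grid of r values. In the Python/NumPy sketch below (the particular x, y and the grid are illustrative assumptions), the bracketed family stays below |x + y|^p + |x − y|^p for p = 1.5 with the supremum attained, and stays above it for p = 3 with the infimum attained, in both cases at r = |y/x|:

```python
import numpy as np

def alpha(r, p):
    # alpha_p(r) = (1 + r)^(p-1) + |1 - r|^(p-1) sign(1 - r)
    return (1 + r)**(p - 1) + np.abs(1 - r)**(p - 1) * np.sign(1 - r)

x, y = 1.0, 0.4
rs = np.linspace(1e-3, 50.0, 500001)   # grid over 0 < r < infinity (truncated)

# 1 <= p <= 2: the family lies below the left-hand side, sup attained.
vals_lo = alpha(rs, 1.5) * abs(x)**1.5 + alpha(1 / rs, 1.5) * abs(y)**1.5
sup_gap = vals_lo.max() - (abs(x + y)**1.5 + abs(x - y)**1.5)

# p >= 2: the family lies above the left-hand side, inf attained.
vals_hi = alpha(rs, 3.0) * abs(x)**3.0 + alpha(1 / rs, 3.0) * abs(y)**3.0
inf_gap = vals_hi.min() - (abs(x + y)**3.0 + abs(x - y)**3.0)
```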
Then for X + Y, X − Y ≥ 0, one can establish the convexity of

    G(X) := ||X + Y||_p^p + ||X − Y||_p^p − 2||X||_p^p        (2.14)

for 1 ≤ p ≤ 2; as α(r) ≤ 2 for 1 ≤ p ≤ 2, and with the application of Hanner's inequality for sequences once X has been replaced by X_Diag in the basis of Y, one establishes the desired relationship to α(r^{−1})||Y||_p^p:

    ||X + Y||_p^p + ||X − Y||_p^p − 2||X||_p^p ≥ ||X_Diag + Y||_p^p + ||X_Diag − Y||_p^p − 2||X_Diag||_p^p ≥ α(r^{−1})||Y||_p^p        (2.15)

from which Hanner's inequality follows. However, one can find numerical counterexamples to the convexity of G(X) when X, Y ≥ 0:

Counterexample 2.2. The matrices

    X_1 := [  4.27204    −0.341558 ]        Y_1 := [ 0.15   0    ]        (2.16)
           [ −0.341558    0.127963 ],              [ 0      0.35 ]

violate convexity of G(X) for p ≳ 1.33.

It should be noted that the inequality is trivially true taking ||Y || → ∞ for fixed ||X||, but that Counterexample 2.2
further has the property ||X||p ≥ ||Y ||p for 1 ≤ p ≤ 2, closing off the avenue that additional restrictions might allow
for convexity.
Finally, the author introduced a Taylor series method in [3] to prove Hanner-like inequalities both for L^s with s < 1 and for their counterparts for matrices when X + Y, X − Y ≥ 0; the paper shows that Hanner-like inequalities that hold for L^s in fact require that X + Y, X − Y ≥ 0 and do not hold for arbitrary X, Y ≥ 0. The same method can be used to establish the equality case for matrices in known cases of inequality from [1], which might lead one to think it is an additional category of method of proof for Hanner's inequalities. However, the method also fundamentally relies on a convexity argument of functions of the form Ψ(A, B, K)_{q,r,s} := Tr[(B^{q/2} K A^p K* B^{q/2})^s] fed into a diagonalization argument, leading to its classification in the diagonalization category. While numerical counterexamples were not found for general matrices using the expansion when p ≥ 1, the complicated nature of the expressions impeded further progress.


3 Proof of Theorem 1.1


We begin with the proof of operator concavity for the function of interest:

Theorem 3.1. The function f : R^+ → R^+ defined by

    f(x) = (1 + x^{1/p})^p + |1 − x^{1/p}|^p        (3.1)

is operator concave for 2 ≤ p ≤ 4.

Proof. We will first need the following lemma:

Lemma 3.2. Let s ∈ (1, 2), and let z_1 be a complex number in the first quadrant. Then

    g(z) = (z_1 + z)^s + (z_1 − z)^s        (3.2)

has non-negative imaginary value on the quarter disk D_1 := {z = x + iy : |z| ≤ |z_1|, x, y ≥ 0}, where z ↦ z^s is defined with the standard branch cut θ ∈ (−π, π].

Proof. We note that g is analytic on D_1, so Im(g(z)) is harmonic and must attain its maximum and minimum on the boundary of D_1. Given that the sign of the imaginary part is scale-invariant, it in fact suffices to check that

    g̃(z) = (e^{iθ_1} + z)^s + (e^{iθ_1} − z)^s        (3.3)

has non-negative imaginary part on D_1 for θ_1 ∈ [0, π/2].

At z = 0, we have g̃(0) = 2e^{isθ_1}. Given e^{iθ_1} is in the first quadrant, and z ↦ z^s maps the first quadrant to the upper half plane, Im g̃(0) ≥ 0. At z = 1,

    Im(g̃(1)) = ((cos(θ_1) + 1)² + sin(θ_1)²)^{s/2} sin(s arctan(sin(θ_1)/(cos(θ_1) + 1)))
              + ((cos(θ_1) − 1)² + sin(θ_1)²)^{s/2} sin(s(π + arctan(sin(θ_1)/(cos(θ_1) − 1))))        (3.4)

At θ_1 = 0, the above is equal to 0, and at θ_1 = π/2 it is non-negative (and in fact only 0 when s = 2). Seen as a function of θ_1 for θ_1 ∈ [0, π/2], the only critical point is a maximum; thus Im g̃(1) ≥ 0.

We now examine

    (d/dx) g̃(x) = s((e^{iθ_1} + x)^{s−1} − (e^{iθ_1} − x)^{s−1})        (3.5)

to note that there are no critical points for x ∈ [0, 1]. Therefore, as the endpoints are non-negative, Im(g̃(x)) ≥ 0.
For Im(g̃(ix)), we can consider the derivative

    (d/dx) g̃(ix) = is((e^{iθ_1} + ix)^{s−1} − (e^{iθ_1} − ix)^{s−1})        (3.6)

directly. In the regime 0 ≤ x ≤ cos(θ_1),

    Im((d/dx) g̃(ix)) = s((cos(θ_1) + x)² + sin(θ_1)²)^{(s−1)/2} cos((s − 1) arctan(sin(θ_1)/(cos(θ_1) + x)))
                     − s((cos(θ_1) − x)² + sin(θ_1)²)^{(s−1)/2} cos((s − 1) arctan(sin(θ_1)/(cos(θ_1) − x)))        (3.7)

Equation (3.7) is obviously non-negative, as the first component is non-negative with both the R^{(s−1)/2} and cos terms increasing. For the second term, the R^{(s−1)/2} is smaller than that of the first term, and the cos term is decreasing (and in fact sometimes negative). Given Im((d/dx) g̃(ix))|_{x=0} = 0, Im((d/dx) g̃(ix)) ≥ 0 in this regime. In the regime cos(θ_1) ≤ x ≤ 1,

    Im((d/dx) g̃(ix)) = s((cos(θ_1) + x)² + sin(θ_1)²)^{(s−1)/2} cos((s − 1) arctan(sin(θ_1)/(cos(θ_1) + x)))
                     − s((cos(θ_1) − x)² + sin(θ_1)²)^{(s−1)/2} cos((s − 1)(π + arctan(sin(θ_1)/(cos(θ_1) − x))))        (3.8)

Once more, both the R^{(s−1)/2} and cos terms of the first component are each positive, increasing, and greater than their counterparts in the second component, confirming Im((d/dx) g̃(ix)) ≥ 0 and hence Im(g̃(ix)) ≥ 0 for x ∈ [0, 1]. This also implies Im(g̃(i)) ≥ 0.
Finally, we wish to examine

    g̃(e^{iθ}) = (e^{iθ_1} + e^{iθ})^s + (e^{iθ_1} − e^{iθ})^s        (3.9)

on the compact set A := {(s, θ_1, θ) ∈ [1, 2] × [0, π/2]²}. Given that this is a far more complicated expression, we turn to a rigorous numeric proof to show the imaginary part is non-negative. We note that

    |(d/dθ) g̃(e^{iθ})| = |ise^{iθ}(e^{iθ_1} + e^{iθ})^{s−1} − ise^{iθ}(e^{iθ_1} − e^{iθ})^{s−1}| < 16,        (3.10)

    |(d/dθ_1) g̃(e^{iθ})| = |ise^{iθ_1}(e^{iθ_1} + e^{iθ})^{s−1} + ise^{iθ_1}(e^{iθ_1} − e^{iθ})^{s−1}| < 16,        (3.11)

and similarly, considering the derivative of z^s in s, z ↦ Im(log(z)z^s) on A_1 := {z = x + iy : −2 ≤ x ≤ 2, 0 ≤ y ≤ 2} and A_2 := {z = x + iy : −2 ≤ x ≤ 2, −2 ≤ y ≤ 0} is bounded by

    Im(log(z)z^s) = ln(√(x² + y²))(√(x² + y²))^s sin(sθ) + θ(√(x² + y²))^s cos(sθ)        (3.12)
        ≤ ln(√(x² + y²))(x² + y²) + 16π        (3.13)
        ≤ 67 + 16π        (3.14)
        ≤ 125.        (3.15)

We then perform series expansions around the known zeroes of Im(g̃(e^{iθ})) on A: {(s, 0, 0), (s, 0, π/2), (2, π/2, 0), (2, π/2, π/2), (1, 0, θ)}; we let A′ be the region within ε = 1/10 of those points, show Im(g̃(e^{iθ})) ≥ 0 on A′, then calculate Im(g̃(e^{iθ})) on A∖A′ using a grid search of step size [min_{A∖A′} Im(g̃(e^{iθ}))]/1000. The full expansions and code used can be found in Appendix B.
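The claim on the boundary arc can be spot-checked directly. The following Python/NumPy sketch (a coarse grid, not the rigorous certificate of Appendix B; the grid resolutions are assumptions) samples Im g̃(e^{iθ}) over the compact set A and records the minimum:

```python
import numpy as np

ss = np.linspace(1.0, 2.0, 41)
t1s = np.linspace(0.0, np.pi / 2, 81)
ths = np.linspace(1e-9, np.pi / 2 - 1e-9, 79)  # offset so w - z never vanishes exactly

S, T1, TH = np.meshgrid(ss, t1s, ths, indexing="ij")
W = np.exp(1j * T1)   # e^{i theta_1}, first-quadrant unit vectors
Z = np.exp(1j * TH)   # boundary arc of the quarter disk D_1
# NumPy's complex power uses the principal branch, matching the branch cut above.
min_im = ((W + Z)**S + (W - Z)**S).imag.min()
```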

Returning to Theorem 3.1, it suffices to prove that

    f(z) = ((1 + z^{1/p})²)^{p/2} + ((1 − z^{1/p})²)^{p/2}        (3.16)

with the standard branch cut θ ∈ (−π, π] is a Herglotz (alternatively, Nevanlinna) function, i.e., a function analytic on the upper half plane and whose range is a subset of the upper half plane. Then it has the representation

    F(z) = C + Bz + ∫_{−∞}^{∞} (1/(λ − z) − λ/(1 + λ²)) dµ(λ)        (3.17)

for some constant C, positive constant B, and positive Borel measure µ [11], and hence is operator concave.
We note that for r > 0,

    Im(((1 + (re^{iθ})^{1/p})²)^{p/2} + ((1 − (re^{iθ})^{1/p})²)^{p/2}) ≥ 0        (3.18)

if and only if

    Im(f̃(r̃, p)) := Im(((r̃ + (e^{iθ})^{1/p})²)^{p/2} + ((r̃ − (e^{iθ})^{1/p})²)^{p/2}) ≥ 0        (3.19)

for r̃ = r^{−1/p}, by scaling by r. Therefore, we will focus on the sign of the latter.

Let z_1 := r̃² + e^{i2θ/p}, z_2 := 2r̃e^{iθ/p}. We can re-write the relationship of Equation (3.19) as

    Im((z_1 + z_2)^{p/2} + (z_1 − z_2)^{p/2}) ≥ 0.        (3.20)

When r̃ ≥ √|cos(2θ/p)|, then z_1 is in the upper half plane, z_2 is in D_1, and 1 ≤ p/2 ≤ 2; therefore by Lemma 3.2 we see the inequality of Equation (3.19) holds. We now only have the finite area r̃ < √|cos(2θ/p)| on which to determine the minimum imaginary value.


Once more, by analyticity, the minimum value of the expression in Equation (3.19) must take place on the edge of the set. We know that along the curve r̃ = √|cos(2θ/p)| the expression is non-negative by continuity. Therefore, it is the negative real axis with r̃ < √|cos(2π/p)| that we must restrict our attention to. We will return to searching a grid of {r̃, p} values by bounding the derivative, and using a Taylor expansion to exclude known zeroes along the lines p = 2 and r̃ = 0. The rigorous numeric proof of this is found in Appendix B.
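Operator concavity itself is straightforward to test at the midpoint on random inputs. The sketch below (Python/NumPy; p = 3 and the random ensemble are illustrative assumptions) evaluates f of Theorem 3.1 through the eigendecomposition and checks that f((A+B)/2) − (f(A)+f(B))/2 stays positive semidefinite:

```python
import numpy as np

p = 3.0  # illustrative choice in the proven range 2 <= p <= 4

def f(x):
    # f(x) = (1 + x^{1/p})^p + |1 - x^{1/p}|^p
    return (1 + x**(1/p))**p + np.abs(1 - x**(1/p))**p

def matfunc(g, A):
    # apply a scalar function to a symmetric PSD matrix via eigendecomposition
    w, V = np.linalg.eigh(A)
    return (V * g(np.clip(w, 0.0, None))) @ V.T   # clip guards roundoff negatives

def random_psd(n, rng):
    G = rng.standard_normal((n, n))
    return G @ G.T

rng = np.random.default_rng(3)
min_eig = np.inf
for _ in range(50):
    A, B = random_psd(4, rng), random_psd(4, rng)
    gap = matfunc(f, (A + B) / 2) - (matfunc(f, A) + matfunc(f, B)) / 2
    min_eig = min(min_eig, np.linalg.eigvalsh(gap).min())
```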

With operator concavity of f, we have all the tools we need to prove the remaining unknown range of Hanner's inequality in C^p_+:

Proof of Theorem 1.1.

By Lemma 2.1, it suffices to show that the inequality holds for 2 ≤ p ≤ 4; then duality extends the range to 4/3 ≤ p ≤ 2, and all other cases are known.
Let f be a real-valued function defined on a convex set C ⊆ R^n. The perspective function associated to f is a function of two variables

    P_f(t, s) = f(t/s)s,   (t, s) ∈ {s > 0, t/s ∈ C} ⊆ R^{n+1}        (3.21)

Similarly, for commuting positive operators {L, R}, one can define an operator perspective

    P_f(L, R) = f(L/R)R.        (3.22)

If f(t) is operator convex and [L, R] = 0, then the perspective P_f(L, R) is jointly convex in L and R [6]; and likewise for operator concave functions.
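A concrete finite-dimensional model of such commuting operators is given by Kronecker products acting on vectorized matrices. The sketch below (Python/NumPy; the row-major vec convention is an assumption of this sketch, not notation used in the paper) verifies that left and right multiplication commute and compose to X ↦ AXB:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3
A = rng.standard_normal((n, n)); A = A @ A.T + 0.1 * np.eye(n)  # positive definite
B = rng.standard_normal((n, n)); B = B @ B.T + 0.1 * np.eye(n)

# With row-major vec(X), left multiplication L_A is A (x) I and right
# multiplication R_B is I (x) B^T; the two always commute.
L = np.kron(A, np.eye(n))
R = np.kron(np.eye(n), B.T)
commute_err = np.abs(L @ R - R @ L).max()

X = rng.standard_normal((n, n))
action_err = np.abs((L @ R @ X.reshape(-1)).reshape(n, n) - A @ X @ B).max()
```

In this picture fractional powers such as L_A^{1/p} = L_{A^{1/p}} can be taken directly on the Kronecker factors, which is what makes the perspective construction below computable.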
We define the left and right multiplication operators by L_A(X) = AX and R_B(X) = XB for A, B ≥ 0. Then L_A and R_B are commuting operators. They are positive operators on M_{n×n}(C) under the inner product ⟨A, B⟩ = Tr[AB*]. We also note that L_A^s = L_{A^s} for all s ∈ R, and similarly for R_B. By Theorem 3.1, the operator perspective of

    f(x) := (1 + x^{1/p})^p + |1 − x^{1/p}|^p        (3.23)

given by

    P(X̃, Ỹ) = (L_{X̃}^{1/p} + R_{Ỹ}^{1/p})^p + |L_{X̃}^{1/p} − R_{Ỹ}^{1/p}|^p        (3.24)

is jointly concave in X̃, Ỹ. Therefore the mapping

    (X̃, Ỹ) ↦ Tr[((L_{X̃}^{1/p} + R_{Ỹ}^{1/p})^p + |L_{X̃}^{1/p} − R_{Ỹ}^{1/p}|^p)(K) K*]        (3.25)

is jointly concave, and choosing K = I we get that

    (X̃, Ỹ) ↦ Tr[(X̃^{1/p} + Ỹ^{1/p})^p + |X̃^{1/p} − Ỹ^{1/p}|^p]        (3.26)

is jointly concave. We will let X̃ = X^p and Ỹ = Y^p going forward.


We note that

    ||X + Y||_p^p + ||X − Y||_p^p = Tr[(X + Y)^p + |X − Y|^p]        (3.27)

and X = (X^p)^{1/p}, Y = (Y^p)^{1/p}. Considering the basis where Y is diagonal, we thus have

    Tr[((X^p)^{1/p} + (Y^p)^{1/p})^p + |(X^p)^{1/p} − (Y^p)^{1/p}|^p]
        ≤ Tr[((X^p)_Diag^{1/p} + (Y^p)^{1/p})^p + |(X^p)_Diag^{1/p} − (Y^p)^{1/p}|^p]        (3.28)

We can then apply Hanner's inequality for sequences to the term on the right,

    Tr[((X^p)_Diag^{1/p} + (Y^p)^{1/p})^p + |(X^p)_Diag^{1/p} − (Y^p)^{1/p}|^p]
        ≤ (||(X^p)_Diag^{1/p}||_p + ||Y||_p)^p + | ||(X^p)_Diag^{1/p}||_p − ||Y||_p |^p.        (3.29)


Finally, we notice that as the positive semidefinite matrix X^p conforms to the majorization relationship (X^p)_Diag ≺ X^p (see Appendix A), we can expand

    ||(X^p)_Diag^{1/p}||_p^p = Σ_{i=1}^n λ_i((X^p)_Diag^{1/p})^p = Σ_{i=1}^n λ_i((X^p)_Diag) ≤ Σ_{i=1}^n λ_i(X^p) = Σ_{i=1}^n λ_i(X)^p = ||X||_p^p        (3.30)

so ||(X^p)_Diag^{1/p}||_p ≤ ||X||_p. The function

    g(x) = (x + y)^p + |x − y|^p        (3.31)

is increasing in x for fixed y. Therefore,

    (||(X^p)_Diag^{1/p}||_p + ||Y||_p)^p + | ||(X^p)_Diag^{1/p}||_p − ||Y||_p |^p ≤ (||X||_p + ||Y||_p)^p + | ||X||_p − ||Y||_p |^p.        (3.32)

Stringing inequalities Eq. (3.28)-(3.32) together proves the theorem.
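The pinching step (3.28) can also be spot-checked on its own. In the Python/NumPy sketch below (p = 3 and the random ensembles are illustrative assumptions), replacing X̃ by its diagonal can only increase the trace functional of (3.26) in the 2 ≤ p ≤ 4 regime:

```python
import numpy as np

p = 3.0  # illustrative choice in 2 <= p <= 4

def tr_h(Xt, yt, p):
    # Tr[(Xt^{1/p} + Yt^{1/p})^p + |Xt^{1/p} - Yt^{1/p}|^p] with Yt = diag(yt),
    # i.e. the trace functional of Equation (3.26) evaluated in Y's eigenbasis.
    w, V = np.linalg.eigh(Xt)
    Xr = (V * np.clip(w, 0.0, None)**(1/p)) @ V.T
    Yr = np.diag(yt**(1/p))
    s = np.linalg.eigvalsh(Xr + Yr)
    d = np.linalg.eigvalsh(Xr - Yr)
    return np.sum(np.clip(s, 0.0, None)**p) + np.sum(np.abs(d)**p)

rng = np.random.default_rng(5)
worst = np.inf
for _ in range(50):
    G = rng.standard_normal((4, 4))
    Xt = G @ G.T                          # random PSD X~
    yt = rng.uniform(0.1, 2.0, 4)         # diagonal PSD Y~
    full = tr_h(Xt, yt, p)
    pinched = tr_h(np.diag(np.diag(Xt)), yt, p)
    worst = min(worst, pinched - full)    # Equation (3.28): pinched >= full
```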

4 Commentary On Extension To The General Case


To prove Hanner's inequality for general matrices, it suffices to consider self-adjoint matrices by noting that the inequality holds for general X, Y if and only if it holds for

    X̂ := [ 0    X ]        Ŷ := [ 0    Y ]        (4.1)
          [ X*   0 ],             [ Y*   0 ].
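The doubling trick works because X̂ is self-adjoint with spectrum {±σ_i(X)}, so ||X̂||_p^p = 2||X||_p^p. A quick Python/NumPy check of this spectral fact:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 3
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# X^ = [[0, X], [X*, 0]] is self-adjoint; its eigenvalues are +-sigma_i(X).
Xhat = np.block([[np.zeros((n, n)), X], [X.conj().T, np.zeros((n, n))]])
eigs = np.sort(np.linalg.eigvalsh(Xhat))
sv = np.linalg.svd(X, compute_uv=False)
expected = np.sort(np.concatenate([sv, -sv]))
spec_err = np.abs(eigs - expected).max()
```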

The naive hope for a direct extension would be that some form of direct relationship could be found between the inequality holding for {X, Y} and for {|X|, |Y|}. Much like Hanner showed for complex z_1, z_2 that

    |z_1 + z_2|^p + |z_1 − z_2|^p ≥ | |z_1| + |z_2| |^p + | |z_1| − |z_2| |^p        (4.2)

for 1 ≤ p ≤ 2 with the inequality reversing for p ≥ 2, we would want to examine

    ||X + Y||_p^p + ||X − Y||_p^p ≥? || |X| + |Y| ||_p^p + || |X| − |Y| ||_p^p        (4.3)

for 1 ≤ p ≤ 2, with the desired inequality reversing for 2 ≤ p ≤ 4. Numerical counterexamples for non-Hermitian matrices in the 2×2 case are easily found for this entire range, meaning that the inequality does not hold in general, and by our doubling trick it does not hold for Hermitian matrices of dimension 4×4 or higher. Numerical counterexamples can be constructed in the 2×2 case for 1 ≤ p ≤ 3 for Hermitian matrices. Counterexamples were not found for the 3 ≤ p ≤ 4 range in a search of up to twenty digits of precision, leaving open the possibility that this could be used to extend to a general Hanner's inequality for Hermitian matrices for 4/3 ≤ p ≤ 3/2 and 3 ≤ p ≤ 4 in the 2×2 and 3×3 cases. It should be noted that different behavior for 2 ≤ p ≤ 3 and 3 ≤ p ≤ 4 is also seen in related singular value rearrangement inequalities [4].
Counterexample 4.1. The matrices

    X_1 := [ 0.2862 + 0.3497i   0.3443 + 0.4049i ]        Y_1 := [ 0.2995 + 0.0213i   0.4823 + 0.2727i  ]        (4.4)
           [ 0.1491 + 0.1834i   0.3116 + 0.1588i ],              [ 0.1012 + 0.4232i   0.3171 + 0.04277i ]

have the opposite desired relationship between ||X_1 + Y_1||_p^p + ||X_1 − Y_1||_p^p and || |X_1| + |Y_1| ||_p^p + || |X_1| − |Y_1| ||_p^p for 1 ≤ p ≤ 4.

Counterexample 4.2. The matrices

    X_2 := [ 1.48    0   ]        Y_2(t) := [ cos(t)   −sin(t) ] [ 3.6   0   ] [ cos(t)    sin(t) ]        (4.5)
           [ 0     −3.9 ],                  [ sin(t)    cos(t) ] [ 0     0.4 ] [ −sin(t)   cos(t) ]

have no universal relationship between ||X_2 + Y_2||_p^p + ||X_2 − Y_2||_p^p and || |X_2| + |Y_2| ||_p^p + || |X_2| − |Y_2| ||_p^p for 1 ≤ p ≤ 2, shifting 0 ≤ t ≤ π/2, including the opposite desired relationship for t = 1.16867.

Counterexample 4.3. The matrices

    X_3 := [ 0.52    0    ]        Y_3 := [ −3.98248     7.44316 ]        (4.6)
           [ 0     −1.54 ],               [  7.44316   −16.2975  ]

have the opposite desired relationship between ||X_3 + Y_3||_p^p + ||X_3 − Y_3||_p^p and || |X_3| + |Y_3| ||_p^p + || |X_3| − |Y_3| ||_p^p for 2 ≤ p ≤ 3.


The next potential method for an extension would be to take advantage of the structure of {X̂, Ŷ}; namely, noting that σ(X̂_+) = σ(X̂_−) = σ(X) and likewise for Ŷ. Therefore, if it is possible to show either

    ||X̂ + Ŷ||_p^p + ||X̂ − Ŷ||_p^p ≥? ||X̂_+ + Ŷ_+||_p^p + ||X̂_+ − Ŷ_+||_p^p + ||X̂_− + Ŷ_−||_p^p + ||X̂_− − Ŷ_−||_p^p        (4.7)

for 1 ≤ p ≤ 2 or

    ||X̂ + Ŷ||_p^p + ||X̂ − Ŷ||_p^p ≤? ||X̂_+ + Ŷ_+||_p^p + ||X̂_+ − Ŷ_+||_p^p + ||X̂_− + Ŷ_−||_p^p + ||X̂_− − Ŷ_−||_p^p        (4.8)

for 2 ≤ p ≤ 4, then the structure of the doubling argument, combined with the fact that Theorem 1.1 gives the desired inequality for both {X̂_+, Ŷ_+} and {X̂_−, Ŷ_−}, would give it for {X, Y}. Unfortunately, counterexamples for this can also be found in the 2×2 case:

Counterexample 4.4. The matrices

    X_4 := [ 1.12    0    ]        Y_4 := [ 2.28089    1.12009 ]        (4.9)
           [ 0     −4.57 ],               [ 1.12009   −3.82089 ]

have the opposite desired relationship between ||X̂_4 + Ŷ_4||_p^p + ||X̂_4 − Ŷ_4||_p^p and ||X̂_4+ + Ŷ_4+||_p^p + ||X̂_4+ − Ŷ_4+||_p^p + ||X̂_4− + Ŷ_4−||_p^p + ||X̂_4− − Ŷ_4−||_p^p for 1 ≤ p ≲ 2.2.

Counterexample 4.5. The matrices

    X_5 := [ 1.871    0     ]        Y_5 := [ 2.28089    1.12009 ]        (4.10)
           [ 0      −0.354 ],               [ 1.12009   −3.82089 ]

have the opposite desired relationship for 2 ≤ p ≲ 3.6.

Without the structure of {X̂, Ŷ}, comparing (||X||_p ± ||Y||_p)^p to (||X_+||_p ± ||Y_+||_p)^p + (||X_−||_p ± ||Y_−||_p)^p will depend on X and Y. Therefore, the author sees no obvious way to extend the results from C^p_+ to C^p for the entire range 4/3 ≤ p ≤ 4. Still, the full result for C^p_+ marks a significant extension of the known cases of Hanner's inequality for matrices.

Appendix A
Let a, b ∈ R^n with components labeled in descending order a_1 ≥ ⋯ ≥ a_n and b_1 ≥ ⋯ ≥ b_n. Then b weakly majorizes a, written a ≺_w b, when

    Σ_{i=1}^k a_i ≤ Σ_{i=1}^k b_i,   1 ≤ k ≤ n        (5.11)

and majorizes it, a ≺ b, when the final inequality is an equality. Majorization is an incredibly powerful tool in matrix analysis because of its relationship to convexity: suppose a ≺_w b. Then for any function φ : R → R that is increasing and convex on the domain containing all elements of a and b,

    Σ_{i=1}^n φ(a_i) ≤ Σ_{i=1}^n φ(b_i).        (5.12)

If a ≺ b, the 'increasing' requirement can be dropped [8] [9] [18] [20].


In the proof of Theorem 1.1, we made use of a single well-known majorization relationship between a Hermitian matrix's diagonal elements and its eigenvalues:

Theorem 5.6 (Schur [16]; Mirsky [15]). Let X ∈ M_{n×n}(C) be a self-adjoint matrix with diagonal elements x := (x_11, ..., x_nn). Then

    x ≺ λ(X)        (5.13)
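Theorem 5.6 is easy to verify numerically. The Python/NumPy sketch below compares partial sums of the sorted diagonal against the sorted spectrum of a random self-adjoint matrix:

```python
import numpy as np

rng = np.random.default_rng(7)
H = rng.standard_normal((5, 5))
X = (H + H.T) / 2  # self-adjoint

d = np.sort(np.diag(X))[::-1]                 # diagonal entries, descending
lam = np.sort(np.linalg.eigvalsh(X))[::-1]    # eigenvalues, descending

partial_gap = np.cumsum(lam) - np.cumsum(d)   # >= 0 for every k (majorization)
max_violation = -partial_gap.min()
trace_gap = abs(partial_gap[-1])              # equality for k = n (the trace)
```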

The inequality of Equation (2.5) in Section 2 comes from stringing together the famous inequality of Von Neumann,

Theorem 5.7 (Von Neumann 1937 [19]). Let A, B ∈ M_{n×n}(C). Then

    Tr[AB] ≤ Σ_{i=1}^n σ_i(A) σ_i(B)        (5.14)

with a majorization identity proven by the author in [4]:

Lemma 5.8. Let x ≺_w y and a ≺_w b be non-negative vectors labeled in descending order. Then x ∘ a ≺_w y ∘ b.

Taking A = A_1, B = A_2 ⋯ A_k, one then applies Lemma 5.8 (k − 1) times.
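Combining the two as described recovers Equation (2.5). As a sanity check, the Python/NumPy sketch below tests the k = 3 instance Re Tr[A_1A_2A_3] ≤ Σ_i σ_i(A_1)σ_i(A_2)σ_i(A_3) on random complex matrices (restricting to the real part of the trace is an assumption of this sketch, since the trace of a product of general matrices need not be real):

```python
import numpy as np

def sv(M):
    return np.linalg.svd(M, compute_uv=False)  # singular values, descending

rng = np.random.default_rng(8)
worst = np.inf
for _ in range(100):
    A1, A2, A3 = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
                  for _ in range(3))
    lhs = np.trace(A1 @ A2 @ A3).real
    rhs = np.sum(sv(A1) * sv(A2) * sv(A3))
    worst = min(worst, rhs - lhs)   # expect rhs >= lhs on every sample
```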
A full overview of majorization and its application to matrix theory can be found in [13].

Appendix B
In this Appendix we provide additional detail and the code used to rigorously prove Theorem 3.1 and Lemma 3.2.

For Lemma 3.2, we want to define the area A′ around the points {(s, 0, 0), (s, 0, π/2), (2, π/2, 0), (2, π/2, π/2), (1, 0, θ)}. Numeric evidence indicates that looking for series expansions in θ_1 around 0 for (s, 0, 0), (s, 0, π/2), and (1, 0, θ), and expansions in θ_1 around π/2 for (2, π/2, 0) and (2, π/2, π/2), will produce the desired region.

For (s, 0, 0), we consider a Taylor expansion of the first term

    (e^{iθ_1} + 1)^s = 2^s + is2^{s−1}θ_1 + O(θ_1²)        (6.15)

with second derivative with respect to θ_1 bounded in modulus by

    |s(−e^{iθ_1})(1 + e^{iθ_1})^{s−2}(1 + se^{iθ_1})| ≤ 6        (6.16)

for s ∈ [1, 2]. We directly expand the second term

    Im((e^{iθ_1} − 1)^s) = ((cos(θ_1) − 1)² + sin(θ_1)²)^{s/2} sin(s(π + arctan(sin(θ_1)/(cos(θ_1) − 1))))        (6.17)
        ≥ −((cos(θ_1) − 1)² + sin(θ_1)²)        (6.18)
        = −(2 − 2cos(θ_1))        (6.19)
        ≥ −θ_1².        (6.20)

As Im(2^s) = 0 and min_{s∈[1,2]} Im(is2^{s−1}) = 1, we conclude that any choice of ε < 1/7 will produce the desired positivity on A′.
For (s, 0, π/2), we first expand

    g̃(i) = (e^{iθ_1} + i)^s + (e^{iθ_1} − i)^s        (6.21)
         = ((1 − i)^s + (1 + i)^s) + (1/2 + i/2)(i(1 − i)^s + (1 + i)^s)sθ_1 + O(θ_1²)        (6.22)

and calculate

    Im((1 − i)^s + (1 + i)^s) = 0,   min_{s∈[1,2]} Im((1/2 + i/2)(i(1 − i)^s + (1 + i)^s)s) = 2,        (6.23)

with second derivative with respect to θ_1 bounded in modulus by

    |se^{iθ_1}((s − 1)(−e^{iθ_1})(e^{iθ_1} − i)^{s−2} − (e^{iθ_1} − i)^{s−1} − (e^{iθ_1} + i)^{s−1} − (s − 1)e^{iθ_1}(e^{iθ_1} + i)^{s−2})| < 16        (6.24)

for θ_1 < 1/7 and s ∈ [1, 2]. Stringing Equation (6.23) and Equation (6.24) together produces the requirement ε < 1/8.
For (1, 0, θ), we note that

    (e^{iθ_1} + e^{iθ})^1 + (e^{iθ_1} − e^{iθ})^1 = 2e^{iθ_1}        (6.25)

which clearly has strictly positive imaginary part for any choice of θ_1 = ε > 0.

For (2, π/2, 0) and (2, π/2, π/2) we can once again directly expand

    (e^{iθ_1} + e^{iθ})² + (e^{iθ_1} − e^{iθ})² = 2(e^{2iθ_1} + e^{2iθ}) = 2(e^{2iθ_1} + 1) for θ = 0,  2(e^{2iθ_1} − 1) for θ = π/2        (6.26)

which has strictly positive imaginary part for any choice of θ_1 = π/2 − ε with 0 < ε < 1/8. We will therefore choose a maximum value of ε = 1/10.

Minimization software indicates a numeric minimum of Im(g̃(e^{iθ})) on A∖A′ = {(s, θ_1, θ) ∈ [1, 2] × [1/10, π/2 − 1/10] × [0, π/2]} of approximately 0.199667, occurring at s = 1, θ_1 = 1/10, θ = π/2, and we recall derivatives in the s, θ_1, and θ directions are all bounded by 500. We run the following Maple code with 50 digits of precision:


gridtest1:=proc(a,step,e) local i,j,k:


for i from 1 to ceil(1/(2*a)) do
for j from 1 to ceil(Pi/(4*a)) do
for k from 1 to ceil((Pi/2-2*e)/(2*a)) do
if Im((exp(I*(Pi/4+k*a))+exp(I*(Pi/4+j*a)))^(3/2+i*a)
+(exp(I*(Pi/4+k*a))-exp(I*(Pi/4+j*a)))^(3/2+i*a))<a*step then
print(‘smaller minimum found‘):
return([i,j,k,1,1,1]):
elif Im((exp(I*(Pi/4+k*a))+exp(I*(Pi/4+j*a)))^(3/2-i*a)
+(exp(I*(Pi/4+k*a))-exp(I*(Pi/4+j*a)))^(3/2-i*a))<a*step then
print(‘smaller minimum found‘):
return([i,j,k,-1,1,1]):
elif Im((exp(I*(Pi/4+k*a))+exp(I*(Pi/4-j*a)))^(3/2+i*a)
+(exp(I*(Pi/4+k*a))-exp(I*(Pi/4-j*a)))^(3/2+i*a))<a*step then
print(‘smaller minimum found‘):
return([i,j,k,1,-1,1]):
elif Im((exp(I*(Pi/4-k*a))+exp(I*(Pi/4+j*a)))^(3/2+i*a)
+(exp(I*(Pi/4-k*a))-exp(I*(Pi/4+j*a)))^(3/2+i*a))<a*step then
print(‘smaller minimum found‘):
return([i,j,k,1,1,-1]):
elif Im((exp(I*(Pi/4-k*a))+exp(I*(Pi/4-j*a)))^(3/2+i*a)
+(exp(I*(Pi/4-k*a))-exp(I*(Pi/4-j*a)))^(3/2+i*a))<a*step then
print(‘smaller minimum found‘):
return([i,j,k,1,-1,-1]):
elif Im((exp(I*(Pi/4-k*a))+exp(I*(Pi/4+j*a)))^(3/2-i*a)
+(exp(I*(Pi/4-k*a))-exp(I*(Pi/4+j*a)))^(3/2-i*a))<a*step then
print(‘smaller minimum found‘):
return([i,j,k,-1,1,-1]):
elif Im((exp(I*(Pi/4+k*a))+exp(I*(Pi/4-j*a)))^(3/2-i*a)
+(exp(I*(Pi/4+k*a))-exp(I*(Pi/4-j*a)))^(3/2-i*a))<a*step then
print(‘smaller minimum found‘):
return([i,j,k,-1,-1,1]):
elif Im((exp(I*(Pi/4-k*a))+exp(I*(Pi/4-j*a)))^(3/2-i*a)
+(exp(I*(Pi/4-k*a))-exp(I*(Pi/4-j*a)))^(3/2-i*a))<a*step then
print(‘smaller minimum found‘):
return([i,j,k,-1,-1,-1]):
fi: od: od: od:
print(‘The grid search was successful!‘):
end:

with the values e=1/10, a=0.18/1000, step=500 to rigorously prove Im(g̃(e^{iθ})) ≥ 0.


For the numeric proof of Theorem 3.1, we first must bound the derivatives

    |(∂/∂r̃)f̃(r̃, p)| = |p((r̃ − e^{iπ/p})((−r̃ + e^{iπ/p})²)^{(p−2)/2} + (r̃ + e^{iπ/p})((r̃ + e^{iπ/p})²)^{(p−2)/2})| < 32        (6.27)

for r̃ ∈ [0, 1], p ∈ [2, 4], and

    |(∂/∂p)f̃(r̃, p)| = |((−r̃ + e^{iπ/p})²)^{p/2}((1/2)log((−r̃ + e^{iπ/p})²) − iπe^{iπ/p}/(p(−r̃ + e^{iπ/p})))
        + ((r̃ + e^{iπ/p})²)^{p/2}((1/2)log((r̃ + e^{iπ/p})²) − iπe^{iπ/p}/(p(r̃ + e^{iπ/p})))| < 10        (6.28)

for r̃ ∈ [0, √|cos(2π/p)|] and p ∈ [2, 4], to give us a derivative step size of 50. We note that known zeroes for B = {(r̃, p) ∈ [0, √|cos(2π/p)|] × [2, 4]} exist along the lines r̃ = 0 and p = 2.


........We first bound p away from 2. To do so, we consider the Talor expansion
    X∞  
2π (p)(2n) 2nπ
Im f˜(r̃, p) = p(p − 1) sin 2
r̃ + 2 sin r̃2n ≥ 0 (6.29)
p n=2
(2n)! p

to attempt to show
\[
p(p-1)\sin\left(\frac{2\pi}{p}\right)\tilde{r}^{2} \ge 2\left|\sum_{n=2}^{\infty}\frac{(p)_{(2n)}}{(2n)!}\sin\left(\frac{2n\pi}{p}\right)\tilde{r}^{2n}\right| \tag{6.30}
\]
at the maximum r̃ = √|cos(2π/p)| for 2 < p ≤ 2 + ǫ; and we note that if the inequality holds at r̃ = √|cos(2π/p)|, it will hold for all r̃ ≤ √|cos(2π/p)|.
We observe
\[
2\sum_{n=2}^{\infty}\frac{(p)_{(2n)}}{(2n)!}\sin\left(\frac{2n\pi}{p}\right)\left|\cos\left(\frac{2\pi}{p}\right)\right|^{n} \le 2\sum_{n=2}^{\infty}\frac{(p)_{(2n)}}{(2n)!}\left|\cos\left(\frac{2\pi}{p}\right)\right|^{n}, \tag{6.31}
\]
and given that the Taylor series for
\[
(x+ry)^{p} + (x-ry)^{p} = 2\sum_{n=0}^{\infty}\frac{(p)_{(2n)}}{(2n)!}\,y^{2n}x^{p-2n}r^{2n} \tag{6.32}
\]
converges at r = 1 for x > y ≥ 0, we can write


\[
2\sum_{n=2}^{\infty}\frac{(p)_{(2n)}}{(2n)!}\left|\cos\left(\frac{2\pi}{p}\right)\right|^{n} = \left(1+\sqrt{\left|\cos\left(\frac{2\pi}{p}\right)\right|}\right)^{p} + \left(1-\sqrt{\left|\cos\left(\frac{2\pi}{p}\right)\right|}\right)^{p} - 2 - p(p-1)\left|\cos\left(\frac{2\pi}{p}\right)\right|. \tag{6.33}
\]
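Identity (6.32) is straightforward to sanity-check numerically; the helper below is my own illustration (the name `even_binomial_series` and the sample values are mine), truncating the series at 60 terms:

```python
import math

def even_binomial_series(x, y, r, p, terms=60):
    """Partial sum of 2 * sum_{n>=0} (p)_(2n)/(2n)! * y^(2n) * x^(p-2n) * r^(2n),
    where (p)_(2n) is the falling factorial p(p-1)...(p-2n+1)."""
    total = 0.0
    coeff = 1.0  # (p)_(2n) / (2n)! at n = 0
    for n in range(terms):
        total += 2 * coeff * y ** (2 * n) * x ** (p - 2 * n) * r ** (2 * n)
        # advance the falling-factorial-over-factorial ratio from n to n+1
        coeff *= (p - 2 * n) * (p - 2 * n - 1) / ((2 * n + 1) * (2 * n + 2))
    return total

x, y, r, p = 1.0, 0.7, 1.0, 2.5
direct = (x + r * y) ** p + (x - r * y) ** p
assert abs(even_binomial_series(x, y, r, p) - direct) < 1e-9
```

Subtracting the n = 0 and n = 1 terms from the closed form is exactly the rearrangement used in (6.33).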

........Consider the function
\[
h(p) = -p(p-1)\sin\left(\frac{2\pi}{p}\right)\cos\left(\frac{2\pi}{p}\right) + 2 - p(p-1)\cos\left(\frac{2\pi}{p}\right) - \left(1+\sqrt{-\cos\left(\frac{2\pi}{p}\right)}\right)^{p} - \left(1-\sqrt{-\cos\left(\frac{2\pi}{p}\right)}\right)^{p}. \tag{6.34}
\]

Then h(2) = 0 and h′(2) = 3 + π − 4 log(2) ≈ 3.369. We bound the second derivative, which forms a rather complicated expression, by |h′′(p)| ≤ 7 for p ∈ [2, 3]. Therefore,
\[
h(p) \ge (3+\pi-4\log(2))(p-2) - 7(p-2)^{2}, \tag{6.35}
\]
so for ǫ = 2/5 < (3 + π − 4 log(2))/7 we have h(p) ≥ 0 for p ∈ [2, 2 + ǫ]; then Im(f̃(r̃, p)) ≥ h(p) for (r̃, p) ∈ [0, √|cos(2π/p)|] × [2, 2 + ǫ] establishes the desired relationship.
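The claims h(2) = 0, h′(2) = 3 + π − 4 log(2), and nonnegativity of h on [2, 2 + 2/5] can be spot-checked in floating point. This is a sketch of mine (using a finite difference for h′), not a rigorous verification:

```python
import math

def h(p):
    """The function h(p) of (6.34); note cos(2*pi/p) <= 0 for p in [2, 4]."""
    c = math.cos(2 * math.pi / p)
    s = math.sin(2 * math.pi / p)
    y = math.sqrt(-c)
    return (-p * (p - 1) * s * c + 2 - p * (p - 1) * c
            - (1 + y) ** p - (1 - y) ** p)

assert abs(h(2.0)) < 1e-12
# finite-difference slope at p = 2 vs. 3 + pi - 4 log 2 ~ 3.369
slope = (h(2.0 + 1e-6) - h(2.0)) / 1e-6
assert abs(slope - (3 + math.pi - 4 * math.log(2))) < 1e-3
# h stays nonnegative on [2, 2 + 2/5]
assert all(h(2 + k * 0.004) >= 0 for k in range(101))
```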
........We can now return to the Taylor expansion in r̃ near r̃ = 0:
\[
\operatorname{Im}\left(\tilde{f}(\tilde{r}, p)\right) = p(p-1)\sin\left(\frac{2\pi}{p}\right)\tilde{r}^{2} + O\left(\tilde{r}^{4}\right) \tag{6.36}
\]
for leading order imaginary term p(p − 1) sin(2π/p) ≥ 42/25. We bound the fourth derivative
\[
\left|(p-3)(p-2)(p-1)p\left(\frac{\left(\left(-r+e^{\frac{i\pi}{p}}\right)^{2}\right)^{p/2}}{\left(-r+e^{\frac{i\pi}{p}}\right)^{4}} + \frac{\left(\left(r+e^{\frac{i\pi}{p}}\right)^{2}\right)^{p/2}}{\left(r+e^{\frac{i\pi}{p}}\right)^{4}}\right)\right| < 50 \tag{6.37}
\]

for (r, p) ∈ B. We therefore conclude that for r̃ < ǫ′ = √21/25, Im(f̃(r̃, p)) ≥ 0. We combine both of these bounds away from r̃ = 0 and p = 2 to form B′.
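The constants here can be checked directly: at p = 12/5 the leading coefficient equals 42/25 exactly, and ǫ′ = √21/25 is precisely where a crude remainder bound of 50·r̃⁴ (my reading of how ǫ′ is chosen; the 1/4! factor is dropped) matches the leading term. A small Python sketch of mine:

```python
import math

p0 = 12 / 5
lead = p0 * (p0 - 1) * math.sin(2 * math.pi / p0)  # = (84/25) * sin(5*pi/6)
assert abs(lead - 42 / 25) < 1e-12

eps = math.sqrt(21) / 25
# (42/25) * eps^2 = 50 * eps^4, so the quadratic term dominates for r < eps
assert abs((42 / 25) * eps ** 2 - 50 * eps ** 4) < 1e-12

# the leading coefficient only grows as p increases toward 4
assert all(p * (p - 1) * math.sin(2 * math.pi / p) >= 42 / 25 - 1e-9
           for p in [12 / 5, 2.5, 3.0, 3.5, 4.0])
```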
........Minimization software indicates that the numeric minimum of Im(f̃(r̃, p)) on B\B′ = {(r̃, p) ∈ [√21/25, √|cos(2π/p)|] × [12/5, 4]} is approximately 0.0565134, attained at the corner r̃ = √21/25, p = 12/5, and we recall that the derivatives in the r̃ and p directions are both bounded by 50. We run the following Maple code with 50 digits of precision:


gridtest2:=proc(a,step,e1,e2) local i,j:

for i from 1 to ceil((1-e2)/(2*a)) do
for j from 1 to ceil((2-e1)/(2*a)) do
if Im(((1/2-e2/2+i*a+exp(I*Pi/(3+e1/2+j*a)))^2)^((3+e1/2+j*a)/2)
+((1/2-e2/2+i*a-exp(I*Pi/(3+e1/2+j*a)))^2)^((3+e1/2+j*a)/2))<a*step then
print(`smaller minimum found`):
return([i,j,1,1]):
elif Im(((1/2-e2/2-i*a+exp(I*Pi/(3+e1/2+j*a)))^2)^((3+e1/2+j*a)/2)
+((1/2-e2/2-i*a-exp(I*Pi/(3+e1/2+j*a)))^2)^((3+e1/2+j*a)/2))<a*step then
print(`smaller minimum found`):
return([i,j,-1,1]):
elif Im(((1/2-e2/2+i*a+exp(I*Pi/(3+e1/2-j*a)))^2)^((3+e1/2-j*a)/2)
+((1/2-e2/2+i*a-exp(I*Pi/(3+e1/2-j*a)))^2)^((3+e1/2-j*a)/2))<a*step then
print(`smaller minimum found`):
return([i,j,1,-1]):
elif Im(((1/2-e2/2-i*a+exp(I*Pi/(3+e1/2-j*a)))^2)^((3+e1/2-j*a)/2)
+((1/2-e2/2-i*a-exp(I*Pi/(3+e1/2-j*a)))^2)^((3+e1/2-j*a)/2))<a*step then
print(`smaller minimum found`):
return([i,j,-1,-1]):
fi: od: od:
print(`The grid search was successful!`):
end:

with the values e1 = 2/5, e2 = 21/25, a = 0.05/100, step = 50 to rigorously prove Im(f̃(r̃, p)) ≥ 0.
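A rough floating-point scan (mine, not the paper's 50-digit Maple computation; the names and the coarse grid are illustrative) reproduces the reported corner minimum and confirms that every sample clears the certification threshold a·step = 0.0005 · 50 = 0.025:

```python
import cmath
import math

def im_f(r, p):
    """Im of f~(r,p) = ((r+w)^2)^(p/2) + ((-r+w)^2)^(p/2), w = exp(i*pi/p),
    using principal complex powers."""
    w = cmath.exp(1j * math.pi / p)
    return (((r + w) ** 2) ** (p / 2) + ((-r + w) ** 2) ** (p / 2)).imag

r0 = math.sqrt(21) / 25
# reported minimum ~ 0.0565134 at the corner (sqrt(21)/25, 12/5)
assert abs(im_f(r0, 12 / 5) - 0.0565134) < 1e-3

# Coarse scan of B \ B', skipping p where the r-interval is empty
# (near p = 4, where |cos(2*pi/p)| vanishes):
for i in range(21):
    p = 12 / 5 + i * (4 - 12 / 5) / 20
    rmax = math.sqrt(abs(math.cos(2 * math.pi / p)))
    if rmax <= r0:
        continue
    for j in range(21):
        r = r0 + j * (rmax - r0) / 20
        assert im_f(r, p) > 0.025
```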

Acknowledgements
This research was partially funded by the NDSEG Fellowship, Class of 2017. Thank you to my advisor, Professor Eric Carlen, for bringing the problem to my attention and providing me with background on the subject.

References
[1] Ball, K., Carlen, E.A., Lieb, E.H.: Sharp uniform convexity and smoothness inequalities for trace
norms. Inventiones mathematicae 115(1), 463–482 (1994). DOI 10.1007/BF01231769. URL
https://doi.org/10.1007/BF01231769
[2] Carlen, E., Lieb, E.H.: Some matrix rearrangement inequalities. Annali di Matemat-
ica Pura ed Applicata 185(5), S315–S324 (2006). DOI 10.1007/s10231-004-0147-z. URL
https://doi.org/10.1007/s10231-004-0147-z
[3] Chayes, V.: Reverse Hölder, Minkowski, and Hanner inequalities for matrices. arXiv preprint arXiv:2103.09915
(2021)
[4] Chayes, V.M.: Matrix rearrangement inequalities revisited. Mathematical Inequalities and Applications 24(2),
431–444 (2021). DOI dx.doi.org/10.7153/mia-2021-24-30
[5] Clarkson, J.A.: Uniformly convex spaces. Transactions of the American Mathematical Society 40(3), 396–414
(1936). URL http://www.jstor.org/stable/1989630
[6] Effros, E.G.: A matrix convexity approach to some celebrated quantum inequalities. Proceedings of
the National Academy of Sciences 106(4), 1006–1008 (2009). DOI 10.1073/pnas.0807965106. URL
https://www.pnas.org/content/106/4/1006
[7] Hanner, O.: On the uniform convexity of L^p and ℓ^p. Ark. Mat. 3(3), 239–244 (1956). DOI 10.1007/BF02589410.
URL https://doi.org/10.1007/BF02589410
[8] Hardy, G.H., Littlewood, J.E., Pólya, G.: Some simple inequalities satisfied by convex functions. Messenger
Math. 58, 145–152 (1929). URL https://ci.nii.ac.jp/naid/10009422169/en/
[9] Hardy, G.H., Littlewood, J.E., Pólya, G.: Inequalities. Cambridge University Press, Cambridge (1934)
[10] Hirzallah, O., Kittaneh, F.: Non-commutative Clarkson inequalities for unitarily invariant norms. Pacific J. Math
202(2), 363–369 (2002)


[11] Hoffman, S.: The American Mathematical Monthly 75(1), 100–100 (1968). URL
http://www.jstor.org/stable/2315170
[12] Lindenstrauss, J.: On the modulus of smoothness and divergent series in Banach spaces. Michi-
gan Mathematical Journal 10(3), 241 – 252 (1963). DOI 10.1307/mmj/1028998906. URL
https://doi.org/10.1307/mmj/1028998906
[13] Marshall, A.W., Olkin, I., Arnold, B.C.: Inequalities: Theory of Majorization and Its Applications, 2 edn.
Springer, New York (2011)
[14] McCarthy, C.A.: c_p. Isr. J. Math. 5, 249–271 (1967)
[15] Mirsky, L.: Inequalities for normal and hermitian matrices. Duke Math. J. 24(4), 591–599 (1957). DOI 10.1215/
S0012-7094-57-02467-5. URL https://doi.org/10.1215/S0012-7094-57-02467-5
[16] Schur, I.: Über eine Klasse von Mittelbildungen mit Anwendungen auf die Determinantentheorie. Sitzungsberichte der Berliner Mathematischen Gesellschaft 22, 9–20 (1923)
[17] Tomczak-Jaegermann, N.: The moduli of smoothness and convexity and the Rademacher averages of the trace classes S_p (1 ≤ p < ∞). Studia Mathematica 50(2), 163–182 (1974). URL http://eudml.org/doc/217886
[18] Tomić, M.: Théorème de Gauss relatif au centre de gravité et son application. Bull. Soc. Math. Phys. Serbie 1, 31–40 (1949)
[19] Von Neumann, J.: Some matrix-inequalities and metrization of matrix-space (1937)
[20] Weyl, H.: Inequalities between two types of eigenvalues of a linear transformation. Proceedings of the National
Academy of Sciences of the United States of America 35(7), 408–411 (1949)

