Editors:
J.-M. Morel, Cachan
F. Takens, Groningen
B. Teissier, Paris
Jay Jorgenson · Serge Lang
Posn(R) and
Eisenstein Series
Authors
Jay Jorgenson
City College of New York
138th and Convent Avenue
New York, NY 10031
USA
e-mail: jjorgenson@mindspring.com
Serge Lang
Department of Mathematics
Yale University
10 Hillhouse Avenue
PO Box 208283
New Haven, CT 06520-8283
USA
DOI 10.1007/b136063
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is
concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting,
reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication
or parts thereof is permitted only under the provisions of the German Copyright Law of September 9,
1965, in its current version, and permission for use must always be obtained from Springer. Violations
are liable for prosecution under the German Copyright Law.
Springer is a part of Springer Science+Business Media
springeronline.com
© Springer-Verlag Berlin Heidelberg 2005
Printed in Germany
The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply,
even in the absence of a specific statement, that such names are exempt from the relevant protective laws
and regulations and therefore free for general use.
Typesetting: TEX output by the author
Cover design: design & production GmbH, Heidelberg
Printed on acid-free paper SPIN: 11422372 41/3142/du 543210
Preface
Acknowledgements
Jorgenson thanks PSC-CUNY and the NSF for grant support. Lang thanks
Tony Petrello for his support of the Yale Mathematics Department. Both of
us thank him for support of our joint work. Lang also thanks the Max Planck
Institut for productive yearly visits. We thank Mel DelVecchio for her patience
in setting the manuscript in TEX, in a victory of person over machine.
Bibliography . . . 163
Index . . . 167
1
GLn (R) Action on Posn (R)
Let G = GLn(R) or SLn(R) and Γn = GLn(Z). Let Posn(R) be the space
of positive symmetric real n × n matrices. Recall that symmetric real n × n
matrices Z have an ordering, defined by Z ≥ 0 if and only if ⟨Zx, x⟩ ≥ 0 for
all x ∈ Rⁿ. We write Z1 ≥ Z2 if and only if Z1 − Z2 ≥ 0. If Z ≥ 0 and Z is
non-singular, then Z > 0, and in fact Z ≥ λI if λ is the smallest, necessarily
positive, eigenvalue.
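The ordering just recalled can be checked numerically. The following sketch (ours, not part of the text; the function names are hypothetical) verifies, for one positive 2 × 2 matrix, that Z − λI ≥ 0 holds for λ the smallest eigenvalue and fails for anything larger, testing positivity through leading minors.

```python
# Numerical illustration (not from the text): for a positive symmetric
# 2 x 2 matrix Z, the relation Z >= lambda*I holds exactly up to lambda
# equal to the smallest eigenvalue.

def eigenvalues_sym2(z11, z12, z22):
    """Eigenvalues of a symmetric 2x2 matrix via the quadratic formula."""
    tr, det = z11 + z22, z11 * z22 - z12 * z12
    disc = (tr * tr / 4 - det) ** 0.5
    return tr / 2 - disc, tr / 2 + disc

def is_psd2(a11, a12, a22, tol=1e-12):
    """A >= 0 for symmetric 2x2: nonnegative diagonal and determinant."""
    return a11 >= -tol and a22 >= -tol and a11 * a22 - a12 * a12 >= -tol

z11, z12, z22 = 2.0, 1.0, 3.0           # a positive matrix Z
lam, lam_max = eigenvalues_sym2(z11, z12, z22)

assert lam > 0                           # Z > 0, so smallest eigenvalue positive
assert is_psd2(z11 - lam, z12, z22 - lam)                         # Z - lam*I >= 0
assert not is_psd2(z11 - (lam + 1e-6), z12, z22 - (lam + 1e-6))   # lam is sharp
```
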
The group G acts on Posn (R) by associating with each g ∈ G the auto-
morphism (for the C ∞ or real analytic structure) of Posn given by
  [g]Z = gZ ᵗg .
We are interested in Γn \Posn (R), and we are especially interested in its
topological structure, coordinate representations, and compactifications which
then allow effective computations of volumes, spectral analysis, differential
geometric invariants such as curvature, and heat kernels, and whatever else
comes up.
The present chapter deals with finding inductively a nice fundamental do-
main and establishing coordinates which are immediately applied to describe
Grenier’s compactification, following Satake.
Quite generally, let X be a locally compact topological space, and let Γ
be a discrete group acting on X. Let Γ0 be the kernel of the representation
Γ → Aut(X). A strict fundamental domain F for Γ is a Borel measurable
subset of X such that X is the disjoint union of the translates γF for γ ∈ Γ/Γ0 .
In most practices, X is also a C ∞ manifold of finite dimension. We define a
fundamental domain F to be a measurable subset of X such that X is
the union of the translates γF , and if γx ∈ F for some γ ∈ Γ, and γ does
not act trivially on X, then x and γx are on the boundary of F . In practice,
this boundary will be reasonable, and in particular, in the cases we look at,
this boundary will consist of a finite union of hypersurfaces. By resolution of
singularities, the boundary can then be parametrized by C ∞ maps defined on
cubes of Euclidean space of dimension ≤ dim X − 1. Thus the boundary has
n-dimensional measure 0.
Jay Jorgenson: Posn (R) and Eisenstein Series, Lect. Notes Math. 1868, 1–22 (2005)
www.springerlink.com © Springer-Verlag Berlin Heidelberg 2005
1 Iwasawa-Jacobi Decomposition
Let:
G = Gn = GLn (R)
Posn = Posn (R) = space of symmetric positive real matrices
K = O(n) = Unin (R) = group of real unitary n × n matrices
U = group of real unipotent upper triangular matrices, i.e. matrices of the form
  u(X) = u = I + X ,
where X = (x_ij) is strictly upper triangular, with entries x_ij for 1 ≤ i < j ≤ n
(so u has 1's on the diagonal and 0's below it).
A = group of diagonal matrices a = diag(a1, . . . , an) with positive components, ai > 0 for all i.
The product map
  U × A × K → UAK = G
is a differential isomorphism.
Proof. Let {e1, . . . , en} be the standard unit vectors of Rⁿ, and let x ∈ GLn(R).
Let vi = xei. We orthogonalize {v1, . . . , vn} by the standard Gram-Schmidt
process, so we let
  w1 = v1 ,
  w2 = v2 − c21 w1 ,  so that w2 ⊥ w1 ,
  w3 = v3 − c32 w2 − c31 w1 ,  so that w3 ⊥ w1 and w2 ,
and so on. Then e′i = wi/‖wi‖ is a unit vector, and the matrix a having
‖wi‖^{−1} for its diagonal elements is in A. Let k = aux so x = u^{−1}a^{−1}k. Then
k is unitary, which proves that G = UAK. To show uniqueness, note that
g = uak gives g ᵗg = u a² ᵗu, so it suffices to show: if
  u1 a ᵗu1 = u2 b ᵗu2  with u1, u2 ∈ U and a, b ∈ A,
then u1 = u2 and a = b. Putting u = u2^{−1}u1, we get
  ua = b ᵗu^{−1} ;
the left side is upper triangular and the right side lower triangular, so both
are diagonal, whence u = I and a = b. We shall also use the involution
  g ↦ ᵗg^{−1} .
We write the transpose on the left to balance the inverse on the right. We
have a surjective mapping
  G → Posn given by g ↦ g ᵗg .
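The Gram–Schmidt argument of the proof can be made concrete. The sketch below (our own illustration, 2 × 2 case only; all names are ours) recovers u, a, k from an invertible g by using g ᵗg = u a² ᵗu, which is the route the uniqueness argument suggests.

```python
# Hedged numerical sketch of the Iwasawa decomposition g = u a k for n = 2:
# from Y = g tg = u a^2 tu we read off the unipotent coordinate x, then
# a = diag(sqrt(w), sqrt(v)) and k = a^{-1} u^{-1} g, which must be orthogonal.

def iwasawa_2x2(g):
    (g11, g12), (g21, g22) = g
    # Y = g tg is positive symmetric
    y11 = g11*g11 + g12*g12
    y12 = g11*g21 + g12*g22
    y22 = g21*g21 + g22*g22
    x = y12 / y22                  # unipotent coordinate of u
    v = y22
    w = y11 - x*x*v                # Y = [u(x)] diag(w, v)
    a1, a2 = w**0.5, v**0.5        # a = diag(a1, a2), positive
    # k = a^{-1} u^{-1} g, with u^{-1} = [[1, -x], [0, 1]]
    k = [[(g11 - x*g21)/a1, (g12 - x*g22)/a1],
         [g21/a2,           g22/a2]]
    u = [[1.0, x], [0.0, 1.0]]
    a = [[a1, 0.0], [0.0, a2]]
    return u, a, k

u, a, k = iwasawa_2x2([[1.0, 2.0], [3.0, 4.0]])
# k is orthogonal: rows are unit and perpendicular
assert abs(k[0][0]**2 + k[0][1]**2 - 1) < 1e-12
assert abs(k[1][0]**2 + k[1][1]**2 - 1) < 1e-12
assert abs(k[0][0]*k[1][0] + k[0][1]*k[1][1]) < 1e-12
# u a k reconstructs g in the (1,1) entry
m11 = u[0][0]*a[0][0]*k[0][0] + u[0][1]*a[1][1]*k[1][0]
assert abs(m11 - 1.0) < 1e-12
```
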
  ϕ : G/K → Posn .
We write g ∈ G in block form
  g = ( A  b ; ᵗc  d ) ,
where A is (n−1) × (n−1), b is a column vector with components b1, . . . , b_{n−1},
ᵗc = (c1, . . . , c_{n−1}) is a row vector, and d is a scalar; we put
  d = dn(g) .
  [g]M = gM ᵗg ,
An expression
  Z = [u(x)] ( W  0 ; 0  v ) = ( W + [x]v   xv ; v ᵗx   v )
(2) dn (Z) = v .
so that, with g in the block form above,
  [g]Z = ( [A]W + [Ax+b]v   AWc + (Ax+b)(ᵗcx+d)v ; ᵗcW ᵗA + (ᵗcx+d)v ᵗ(Ax+b)   [ᵗc]W + (ᵗcx+d)²v ) .
In particular,
  [u(−Ax)] = [u(Ax)]^{−1} ,
2 Inductive Construction
of the Grenier Fundamental Domain
This section is taken from [Gre 88].
Throughout the section, we let:
G = GLn (R)
Γ = GLn (Z)
Posn = Posn (R) = space of symmetric positive n × n real matrices.
We write Z > 0 for positivity. We use the action of G on Posn given by
(1)  v = λ .
  v ≤ [ᵗc]W + (ᵗcx + d)²v ,
  w = v(1 − x²) ,
Proposition 2.2. Let Z ∈ Posn and let r > 0. Then the inequalities Fun 1
and Fun 3 can fail for only finitely many conditions; more precisely, there is
only a finite number of bottom rows (ᵗc, d) of elements g ∈ Γn such that
  dn([g]Z) ≤ r .
Proof. Since W ∈ Pos_{n−1} we have W ≥ λI_{n−1} for some λ > 0, and hence
[ᵗc]W ≥ λ ᵗcc. Hence there is only a finite number of c ∈ Z^{n−1} such that
[ᵗc]W ≤ r. Then from the inequality
  (ᵗcx + d)²v ≤ r ,
we conclude that there is only a finite number of d ∈ Z satisfying this inequality, as was to be shown.
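The finiteness argument can be illustrated in the simplest case n = 2, where W = (w) is 1 × 1. The enumeration below (ours, with hypothetical helper names) lists the integer pairs (c, d) with c²w + (cx + d)²v ≤ r and confirms there are only finitely many.

```python
# Hedged illustration of the finiteness statement for n = 2:
# the solutions (c, d) in Z^2 of
#     [tc]W + (tc x + d)^2 v = c^2 w + (c x + d)^2 v <= r
# form a finite set, since |c| <= sqrt(r/w) and then d lies in a
# bounded interval around -c x.

import math

def solutions(w, x, v, r):
    sols = []
    cmax = int(math.floor(math.sqrt(r / w)))
    for c in range(-cmax, cmax + 1):
        # need (c x + d)^2 <= (r - c^2 w) / v
        slack = (r - c * c * w) / v
        half = math.sqrt(slack)
        dmin = math.ceil(-c * x - half)
        dmax = math.floor(-c * x + half)
        sols.extend((c, d) for d in range(dmin, dmax + 1))
    return sols

sols = solutions(w=1.0, x=0.25, v=1.0, r=2.0)
assert (0, 0) in sols and (1, 0) in sols
assert all(c*c + (c*0.25 + d)**2 <= 2.0 + 1e-9 for c, d in sols)
assert len(sols) < 20          # finitely many, and in fact very few
```
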
We next prove that every element of Posn may be translated by some
element of Γ into a point of Fn . Without loss of generality, in light of the
above finiteness, we may assume that dn (Z) = v is minimal for all elements
in the Γ-orbit [Γ]Z. By induction, there exists a matrix A ∈ GL_{n−1}(Z) such
that [A]W ∈ F_{n−1}. We let
  g = ( A  0 ; 0  1 ) ∈ Γn .
Then by (6) of Sect. 1,
  [g]Z = [ ( I_{n−1}  Ax ; 0  1 ) ] ( [A]W  0 ; 0  v ) .
Thus we have at least satisfied the condition Fun 2. Without loss of generality,
we may now assume that W ∈ F_{n−1}, since dn does not change under the
action of a semidiagonalized element g as above, with dn(g) = 1.
Now by acting with g = u(b) with b ∈ Z^{n−1} and using the homomorphic
property u(x + b) = u(b)u(x), we may assume without loss of generality that
|xj| ≤ 1/2 for all j. Finally, using (10) of Sect. 2, we may change the sign of x
if necessary so that 0 ≤ x1, thus concluding the proof that some element in
the orbit [Γ]Z satisfies the three Fun conditions.
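For n = 2 the reduction just described is the classical one in the upper half-plane z = x + iy, where u(b) acts by x ↦ x + b, the inversion by z ↦ −1/z, and the extra sign available in GL2(Z) by x ↦ −x. The following sketch (ours, under that identification) reduces a point into the region 0 ≤ x ≤ 1/2, x² + y² ≥ 1.

```python
# Hedged sketch of the reduction step for n = 2 in the upper half-plane
# model: translate until |x| <= 1/2, invert while inside the unit circle,
# and finally flip the sign of x if needed.

def reduce_point(z):
    for _ in range(1000):                      # plenty for any sane input
        x, y = z.real, z.imag
        z = complex(x - round(x), y)           # |x| <= 1/2   (Fun 2)
        if abs(z) < 1 - 1e-15:                 # x^2 + y^2 >= 1   (Fun 1)
            z = -1 / z
        else:
            break
    if z.real < 0:                             # sign change   (Fun 3)
        z = complex(-z.real, z.imag)
    return z

z = reduce_point(complex(3.7, 0.1))
assert 0 <= z.real <= 0.5 + 1e-12
assert abs(z) >= 1 - 1e-12
```
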
There remains to prove that if Z and [g]Z ∈ Fn with g ∈ Γn, then Z and
[g]Z are on the boundary, or [g] = id on X, that is g = ±In. We again prove
this by induction, it being true for n = 2 by Proposition 2.2, so we assume the
result for F_{n−1}, and we suppose [g]Z, Z are both in Fn. Then from Fun 1,
if c ≠ 0, then Z and [g]Z are on the boundary, because the boundary is defined
among other conditions by this hypersurface equality coming from Fun 1.
Since det(g) = ±1 because g ∈ GLn (Z), it follows that det A = ±1, or, in
other words, A ∈ GLn−1 (Z). We have
" #Ã !
In−1 ±Ax + b [A]W 0
[g]Z = .
0 1 0 v
where |x′| is the sup norm of x′. By induction, there is only a finite number
of solutions (c′, d′) of the inequality
  v′ ≤ [ᵗc′]W′ + (ᵗc′x′ + d′)²v′ .
By straight matrix multiplication, one expands [ᵗc]W accordingly, where
ᵗc^{(n−2)} = (c1, . . . , c_{n−2}). Thus there is only a finite number of vectors
ᵗc = (ᵗc^{(n−2)}, c_{n−1}), because we may take c^{(n−2)} among the choices for c′.
Then with the bounds on the coordinates xj , there is only a finite number
of d ∈ Z which will satisfy the inequalities Fun 1. This concludes the proof
of the general finiteness statements. In addition, as Grenier remarks, the fi-
nite number of inequalities can be determined explicitly. For this and other
purposes, one uses:
Lemma 2.5. Let Z ∈ Fn,
  Z = [ ( I_{n−1}  x ; 0  1 ) ] ( W  0 ; 0  v )
as before. Let zi = z_ii be the i-th diagonal element of Z, and wi = w_ii the i-th
diagonal element of W. Then for i = 1, . . . , n − 1,
  v ≤ zi ≤ (4/3) wi .
Taking c = ei, we get
  [ᵗc]W = wi ,  ᵗcx = xi ,
To get the explicit finite number of inequalities for the Grenier fundamental
domain, one simply follows through the inductive procedure using Lemma 2.5,
cf. [Gre 88], pp. 301-302.
In the sequel we use the above notation systematically.
We conclude this section with further inequalities which are usually stated
and proved in the context of so-called “reduction theory”, for elements of
Posn in the Minkowski fundamental domain. These inequalities, as well as
their applications, hold for the Grenier fundamental domain, cf. [Gre 88],
Theorem 2, which we reproduce.
Theorem 2.6. For Z ∈ Posn we have |Z| ≤ z1 · · · zn, and for Z ∈ Fn,
  |Z| ≤ z1 · · · zn ≤ (4/3)^{n(n−1)/2} |Z| .
So
  |Z| ≥ (3/4)^{n(n−1)/2} zn^n .
Proof. We prove the first (universal) inequality by induction. As before, we use
the first order Iwasawa decomposition of Z with the matrix W. The theorem
is trivial for n = 1. Assume it for n − 1. Then zi = wi + xi²v (i = 1, . . . , n − 1),
so by induction,
  |W| ≤ w1 · · · w_{n−1} ≤ z1 · · · z_{n−1} .
Hence
  |Z| = |W|v = |W|zn ≤ z1 · · · zn ,
which is the desired universal inequality.
For n = 2, we have
  z12² ≤ (1/4) z1 z2 ,
whence
  |Z| ≥ (3/4) z1 z2 ,  or  z1 z2 ≤ (4/3)|Z| ,
which takes care of n = 2. Assume the inequality for n − 1. In the first order
Iwasawa decomposition, we have W ∈ F_{n−1}. Then
  z1 · · · zn / |Z| = z1 · · · zn / (|W|v) = z1 · · · z_{n−1} / |W|
    ≤ (4/3)^{n−1} (w1 · · · w_{n−1}) / |W|      by Lemma 2.5
    ≤ (4/3)^{n−1 + (n−1)(n−2)/2}               by induction and W ∈ F_{n−1}
    = (4/3)^{n(n−1)/2} ,
thus proving the desired inequality z1 · · · zn ≤ (4/3)^{n(n−1)/2} |Z|. The final
inequality then follows at once from Lemma 2.5, that is zi ≥ v = zn for all i.
This concludes the proof.
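The universal inequality |Z| ≤ z1 · · · zn is easy to test numerically; the check below (our own example, not from the text) does so for one positive 3 × 3 matrix.

```python
# Numerical check (ours) of |Z| <= z_1 ... z_n for a positive symmetric
# matrix, here n = 3, using a hand-rolled 3x3 determinant.

def det3(m):
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

Z = [[4.0, 1.0, 0.5],
     [1.0, 3.0, 1.0],
     [0.5, 1.0, 2.0]]           # positive: leading minors 4, 11, det > 0
diag_product = Z[0][0] * Z[1][1] * Z[2][2]
assert det3(Z) > 0
assert det3(Z) <= diag_product
```
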
We give an application following Maass [Maa 71], Sect. 9, formula (8),
which is used in proving the convergence of certain Eisenstein series. In order
to simplify the notation, we write
  c1 = c1(n) = (4/3)^{n(n−1)/2} .
Theorem 2.7. Let Z ∈ Fn. Let Zdia be the diagonal matrix whose diagonal
components are the same as those of Z. Then as operators on Rⁿ,
  (1/(n^{n−1} c1)) Zdia ≤ Z ≤ n Zdia .

Proof. Let r1, . . . , rn be the eigenvalues of Z[Zdia^{−1/2}]. Then
  r1 + · · · + rn = tr(Z[Zdia^{−1/2}]) = tr(Z Zdia^{−1}) = n
and
  r1 · · · rn = |Z| · |Zdia|^{−1} ≥ c1^{−1}
by Theorem 2.6. Hence for all i = 1, . . . , n,
  ri < n  and  ri ≥ 1/(n^{n−1} c1) .
Therefore
  (1/(n^{n−1} c1)) In ≤ Zdia^{−1/2} Z Zdia^{−1/2} ≤ n In .
If A > 0 and C is invertible symmetric, then CAC > 0, so if A ≤ B then
CAC ≤ CBC, and we use C = Zdia^{1/2} to conclude the proof.
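For n = 2 one has c1 = 4/3, so Theorem 2.7 asserts (3/8) Zdia ≤ Z ≤ 2 Zdia. The check below (ours) verifies this, together with the determinant estimate used in the proof, at the reduced point z = 0.3 + i, i.e. Z = ( 1  0.3 ; 0.3  1.09 ) with |Z| = 1.

```python
# Hedged numerical check of the operator bounds of Theorem 2.7 for n = 2,
# where the constant is c1 = 4/3 and the claim is (3/8) Zdia <= Z <= 2 Zdia.

def is_psd2(a11, a12, a22, tol=1e-12):
    return a11 >= -tol and a22 >= -tol and a11*a22 - a12*a12 >= -tol

z11, z12, z22 = 1.0, 0.3, 1.09
c1 = 4.0 / 3.0
lower = 1.0 / (2 * c1)                      # 1/(n^{n-1} c1) with n = 2
assert is_psd2(z11 - lower*z11, z12, z22 - lower*z22)   # Z - (3/8) Zdia >= 0
assert is_psd2(2*z11 - z11, -z12, 2*z22 - z22)          # 2 Zdia - Z >= 0
# determinant estimate r1 r2 = |Z| / |Zdia| >= 1/c1 from the proof:
assert (z11*z22 - z12*z12) / (z11*z22) >= 1.0/c1 - 1e-12
```
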
Γn = GLn(Z),
and in the partial Iwasawa decomposition we put
  dn(Z) = v = an^{−1} ,  W = an^{1/(n−1)} Z^{(n−1)} ,  x = x^{(n−1)} ,
so that
(1)  Z = Z^{(n)} = [ ( I_{n−1}  x^{(n−1)} ; 0  1 ) ] ( an^{1/(n−1)} Z^{(n−1)}  0 ; 0  an^{−1} ) .
At the next step one uses
  u(x^{(n−2)}) = ( I_{n−2}  x^{(n−2)}  0 ; 0  1  0 ; 0  0  1 ) ,
where
  C = u(x^{(n−1)}) u(x^{(n−2)}) .
Factoring out an^{−1}, putting u2 = u(x^{(n−1)}) u(x^{(n−2)}), and
  y_{n−1}² = an^{n/(n−1)} a_{n−1}^{−1} ,
we get
(2a)  Z = [u2] an^{−1} diag( y_{n−1}² a_{n−1}^{(n−1)/(n−2)} Z^{(n−2)} ,  y_{n−1}² ,  1 ) .
(3)  Z = [u3] diag( an^{1/(n−1)} a_{n−1}^{1/(n−2)} a_{n−2}^{1/(n−3)} Z^{(n−3)} ,  an^{1/(n−1)} a_{n−1}^{1/(n−2)} a_{n−2}^{−1} ,  an^{1/(n−1)} a_{n−1}^{−1} ,  an^{−1} ) ,
where a_{n−i}^{−1} = d_{n−i}(Z^{(n−i)}). We factor out an^{−1}. This gives rise to a factor
an^{n/(n−1)} on each diagonal component. Then we rewrite (3) in the form:
(3a)  Z = [u3] an^{−1} diag( y_{n−1}² y_{n−2}² a_{n−2}^{(n−2)/(n−3)} Z^{(n−3)} ,  y_{n−1}² y_{n−2}² ,  y_{n−1}² ,  1 ) ,
and therefore
(8)  Z = [u(x^{(n−1)})] an^{−1} ( Z_{n−1}  0 ; 0  1 ) .
Of course, Z_{n−1} does not have determinant 1, contrary to Z^{(n−1)}. From the
definition of y_{n−1}² and (7) we obtain
(9)  y_{n−1}^{−2} Z_{n−1} = a_{n−1} Z^{(n−1)} .
This formula then remains valid inductively, replacing n − 1 by n − i for
i = 1, . . . , n − 1. Note that a_{n−1} = a_{n−1}(Z^{(n−1)}), similar to
  an = an(Z^{(n)}) = an(Z) .
Remark. What we call the standard coordinates are actually standard in the
literature, dating back to Minkowski, Jacobi, Siegel, etc. Actually, if one puts
  qi = yi²
then Siegel calls (q1 , . . . , qn−1 ) the normal coordinates, see, for instance,
the references [Sie 45], [Sie 59].
Formulas for dn .
Formulas (11) and (12) are set up so that they are valid replacing n − 1 by
n − i, with c′ of dimension n − i − 1, d′ equal to a scalar, and x′ of dimension
n − i − 1.
The formulas are set up for immediate application in the next section
where we consider Z in a fundamental domain. The first condition defining
such a domain will specify that the expression on the right of (11), or in
parentheses on the right of (12), is ≥ 1.
  0 ≤ x ≤ 1/2 .
  x² + y² ≥ 1 and 0 ≤ x ≤ 1/2 .
Thus we get half the usual fundamental domain, because we took the discrete
group to be GL2(Z) rather than SL2(Z).
That the above conditions define a fundamental domain follows at once
from the case for GLn . In the rest of the section, we give further inequalities
which will be used for the compactification subsequently. The first inequality
generalizes the inequality x² + y² ≥ 1 from n = 2.
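A membership test for this half fundamental domain is immediate; the helper below is our own illustration, not part of the text.

```python
# Small helper (ours) matching the n = 2 description in the text:
# membership in the half fundamental domain of GL2(Z),
# x^2 + y^2 >= 1 and 0 <= x <= 1/2, for a point x + iy with y > 0.

def in_half_domain(x, y, tol=1e-12):
    return y > 0 and -tol <= x <= 0.5 + tol and x*x + y*y >= 1 - tol

assert in_half_domain(0.0, 1.0)        # the corner point i
assert in_half_domain(0.5, 3**0.5/2)   # the corner e^{i pi/3}
assert not in_half_domain(-0.3, 2.0)   # negative x: only in the SL2 domain
assert not in_half_domain(0.2, 0.9)    # inside the unit circle
```
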
  an(Z) ≥ (3/4)^{(n−1)/2} .
Proof. We choose d = 0 and c = e_{n−1} (the standard unit vector with all
components 0 except for the (n − 1)-component, which is 1). Then SFun 1
yields
  x_{n−1}² + y_{n−1}² ≥ 1 ,
and since |x_{n−1}| ≤ 1/2, we get y_{n−1}² ≥ 3/4. The coordinates y_{n−i} are designed
in such a way that this argument can be applied step by step, thus proving
the first statement of the lemma. The Hermite inequality then follows from
Sect. 3, (6).
Lemma 4.2. Let Z ∈ SFn. For all c^{(n−i)} ∈ Z^{n−i} (i = 1, . . . , n − 1) for which
c^{(n−i)} ≠ 0 we have
  [ᵗc^{(n−i)}] Z_{n−i} ≥ y_{n−i}² .

Proof. This comes from the fact that in Sect. 3, (12) we get y_{n−1}² times a
number ≥ 1 according to SFun 1.
Proof. This comes inductively from Sect. 3, (12), where on the right we obtain
the product y_{n−1}² · · · y_{n−k}² times a number ≥ 1, plus a number ≥ 0.
Grenier carried out an idea of Satake for the Siegel modular group to the
case of GLn , and we continue to follow Grenier.
There are actually several compactifications, and we begin with the simplest
inductive one. It is not clear to what extent this simplest one suffices, and
for which purposes.
Since the present discussion deals with SLn , we shall write Fn instead of
SFn for simplicity. In case both GLn and SLn are considered simultane-
ously, then of course a distinction has to be preserved.
We shall first define a compactification of Fn . Quite simply, we let
Fn∗ = Fn ∪ Fn−1 ∪ . . . ∪ F1 .
We shall put a topology on this union and show that Fn* then provides a
compactification of Fn. The topology is defined inductively.
For n = 1, F1 = {∞} is a single point.
For n = 2, F2 is the usual fundamental domain, as we have seen, and its
compactification is F2 ∪ {∞} = F2 ∪ F1 .
Let n ≥ 2.
Let P ∈ F_{n−k} with 1 ≤ k ≤ n − 1, so P ∈ F*_{n−1}. Let U be a neighborhood
of P in F*_{n−1}. Let M > 0. Let:
Proof. There are three conditions to be met. We start with SFun 1. We need
to show that for all primitive (ᵗc, d) we have
5 Siegel Sets
We follow [Gre 93]. Let Dn be the group of diagonal matrices with ±1 as
diagonal elements. We let
  Fn^± = ⋃_{γ ∈ Dn} [γ]Fn .
[Figure: the Siegel set for n = 2, the vertical strip −1/2 ≤ x ≤ 1/2 with height parameter T.]
Note that the largest value of T such that the Siegel set contains the funda-
mental domain is 3/4. The shaded portion just reaches the two corners. We
then have the following rather precise theorem of Grenier.
Theorem 5.1. Sie_1^{(n)} ⊂ Fn^± ⊂ Sie_{3/4}^{(n)} .
Proof. The inclusion on the right is a special case of Lemma 4.1.
Now for the inclusion on the left, note that condition SFun 2^± follows at
once by induction, and SFun 3^± is met by definition, so the main thing is to
prove SFun 1^±, for which we give Grenier's proof. The statement being true
for n = 2, we give the inductive step, so we index things by n, with Siegel sets
being denoted by Sie_T^{(n)} in SPosn, for instance. So suppose
  Sie_1^{(n−1)} ⊂ F_{n−1}^± .
Given Z ∈ Sie_1^{(n)}, and writing ᵗc = (ᵗc′, d′), we have
  [ᵗc]Z_{n−1} + (ᵗcx + d)² = y_{n−1}² [ᵗc′]Z_{n−2} + y_{n−1}² (ᵗc′x′ + d′)² + (ᵗcx + d)² .
But Z^{(n−1)} ∈ Sie_1^{(n−1)} ⊂ F_{n−1}^±, so Lemma 4.2 implies that for
  c′ ∈ Z^{n−2} ,  c′ ≠ 0 ,
we have
  [ᵗc′]Z_{n−2} ≥ y_{n−2}² ,
and hence we get the inequality
  y_{n−1}² [ᵗc′]Z_{n−2} ≥ y_{n−1}² y_{n−2}² ≥ 1 ,
2
Measures and Integration
We shall give various formulas related to measures on GLn and its subgroups.
We also compute the volume of a fundamental domain, a computation which
was originally carried out by Minkowski. Essentially we follow Siegel’s proof
[Sie 45]. We note historically that people used to integrate over fundamental
domains, until Weil pointed out the existence of a Haar (invariant) measure
on homogeneous spaces with respect to unimodular subgroups in his book
[We 40], and observed that Siegel’s arguments could be cast in the formalism
of this measure [We 46].
Siegel’s historical comments [Sie 45] are interesting. He first refers to a
result obtained by Hlawka the year before [Hla 44], proving a statement by
Minkowski which had been left unproved for 50 years. However, as Siegel says,
Hlawka’s proof “does not make clear the relation to the fundamental domain
of the unimodular group which was in Minkowski’s mind. This relation will
become obvious in the theorem” which Siegel proves in his paper, and which
we reproduce here.
The Siegel formula is logically independent of most of the computations
that precede it. For the overall organization, and ease of reference, we have
treated each aspect of the Haar measures systematically before passing on to
the next, but we recommend that readers read the section on Siegel’s formula
early, without wading through the other computations.
The present chapter can be viewed as a chapter of examples, both for this
volume and subsequent ones. The discrete subgroups GLn (Z) and SLn (Z) will
not reappear for quite some time, and in particular, they will not reappear in
the present volume which is concerned principally with analysis on the univer-
sal covering space G/K with G = SLn (R) and K = Unin (R) (the real unitary
group). Still, we thought it worthwhile to give appropriate examples jumping
ahead to illustrate various concepts and applications. The next chapter will
continue in the same spirit, with a different kind of application. Readers in a
hurry to get to the extension of Fourier analysis can omit both chapters, with
the exception of Sect. 1 in the present chapter. Even Sect. 1 will be redone in
a different spirit when the occasion arises.
Theorem 1.1. A Siegel set in SLn (R) has finite Haar measure.
Proof. [Sie 59] Since K is compact and Uc has bounded (euclidean) measure,
it follows that
  ∫_{Uc At K} dg = C ∫_{At} δ(a)^{−1} d*a .
Hence it suffices to prove that this integral over At is finite. Using the coordinates
q1, . . . , q_{n−1}, the fact that Haar measure on each factor of R^{+(n−1)} is
dqi/qi, and the fact that
  δ(a) = ∏_{i=1}^{n−1} qi^{mi}  with mi ≥ 1 ,
we find that
  ∫_{At} δ(a)^{−1} d*a = ∫_t^∞ · · · ∫_t^∞ ∏ qi^{−mi} ∏ dqi/qi ,
which is finite because each mi ≥ 1 and t > 0.
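Each factor of the last integral is ∫_t^∞ q^{−m} dq/q = t^{−m}/m for m ≥ 1, which is what makes the whole product finite. The crude numerical check below (ours, with an arbitrary truncation of the upper limit) confirms one such factor.

```python
# Hedged numerical check (ours) that int_t^infty q^{-m} dq/q = t^{-m}/m
# for m >= 1, approximated by a midpoint Riemann sum with a finite
# upper cutoff; the cutoff and step count are arbitrary choices.

def tail_integral(m, t, upper=2000.0, steps=200_000):
    h = (upper - t) / steps
    s = 0.0
    for i in range(steps):
        q = t + (i + 0.5) * h          # midpoint rule
        s += q ** (-m - 1) * h
    return s

m, t = 1, 0.5
approx = tail_integral(m, t)
exact = t ** (-m) / m                  # = 2.0
assert abs(approx - exact) < 1e-3
```
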
In [Sie 59] Siegel used the above result to show that SLn (Z)\SLn (R) has
finite measure. He also used the normal coordinates to construct a compact-
ification of SLn (Z)\SLn (R). By Theorem 5.1 of Chap. 1, we know that a
fundamental domain for SLn (Z) is contained in a Siegel set, and hence we
have given one proof of
Theorem 1.2. The quotient space SLn (Z)\SLn (R) has finite measure.
2 Decompositions of Haar Measure on Posn(R)
Next we shall deal with formulas for integration on the space Posn = Posn(R).
It is a homogeneous space, so has a unique Haar measure with respect to the
action of G = GLn (R), up to a constant factor.
We follow some notation systematically as follows. If Y = (y_ij) is a system
of coordinates from euclidean space, we write
  dµ_euc(Y) = ∏_{1 ≤ i ≤ j ≤ n} dy_ij ,
and the product expression for dY is thus taken over this range of indices.
Deviations from Lebesgue measure will be denoted by dµ(Y ), with µ to be
specified.
If ϕ is a local C ∞ isomorphism, we let J(ϕ) be the Jacobian factor of the
induced map on the measure, so it is the absolute value of the determinant
of the Jacobian matrix, when expressed in terms of local coordinates. Often
the determinant will be positive. If g is a square matrix, we let |g| denote its
determinant, and kgk is then the absolute value of the determinant.
The exposition of the computation of various Jacobians and measures follows
Maass [Maa 71], who based himself on Minkowski and Siegel, in particular
[Sie 59].
Proposition 2.1. A GLn(R)-bi-invariant measure on Posn is given by
  dµn(Y) = |Y|^{−(n+1)/2} dµ_euc(Y) .
For g ∈ GLn(R), the Jacobian determinant J(g) of the transformation [g] is
  J(g) = ‖g‖^{n+1} .
Proof. We prove the second assertion first. Note that g 7→ J(g) is multiplica-
tive, that is J(g1 g2 ) = J(g1 )J(g2 ), and it is continuous, so it suffices to prove
the formula for a dense set of matrices g. We pick the set of matrices of the
form gDg^{−1}, with D = diag(d1, . . . , dn) diagonal. Then [D]Y is the
matrix (di yij dj). Hence
  J(gDg^{−1}) = J(D) = ∏_{i ≤ j} |di dj| = ‖D‖^{n+1} = ‖gDg^{−1}‖^{n+1} ,
and the measure transforms back to dµn(Y), thus concluding the proof of left
invariance. Right invariance follows because J(g) = J(ᵗg).
Finally, the invariance under Y ↦ Y^{−1} can be seen as follows. If we let
S(Y) = Y^{−1}, then for a tangent vector H ∈ Symn,
  S′(Y)H = −Y^{−1} H Y^{−1} ,
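The Jacobian formula J(g) = ‖g‖^{n+1} can be verified directly for n = 2: the map Y ↦ gY ᵗg is linear in the coordinates (y11, y12, y22), and its determinant should be |det g|³. The computation below is our own check, not part of the text.

```python
# Numerical verification (ours) of J(g) = ||g||^{n+1} for n = 2:
# we build the 3x3 matrix of the linear map Y -> g Y tg in the
# coordinates (y11, y12, y22) and compare its determinant with |det g|^3.

def apply_g(g, y11, y12, y22):
    (a, b), (c, d) = g
    # g Y tg for Y = [[y11, y12], [y12, y22]]
    m11 = a*(a*y11 + b*y12) + b*(a*y12 + b*y22)
    m12 = c*(a*y11 + b*y12) + d*(a*y12 + b*y22)
    m22 = c*(c*y11 + d*y12) + d*(c*y12 + d*y22)
    return m11, m12, m22

def det3(m):
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

g = [[1.0, 2.0], [0.5, 3.0]]
cols = [apply_g(g, *e) for e in ((1, 0, 0), (0, 1, 0), (0, 0, 1))]
J = abs(det3([[cols[j][i] for j in range(3)] for i in range(3)]))
detg = g[0][0]*g[1][1] - g[0][1]*g[1][0]        # = 2
assert abs(J - abs(detg)**3) < 1e-9
```
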
Let Tri^+_n(R) = Tri^+_n be the group of upper triangular matrices with positive
diagonal coefficients. Then in the notation of Sect. 1, we have the direct
decomposition
  AU = Tri^+_n ,
and the map
  ϕ^+ : Tri^+_n → Posn given by T ↦ T ᵗT ,
where
  dµ_euc(T) = ∏_{i ≤ j} dt_ij
is the ordinary euclidean measure. Note that we are following systematic notation
where we use a symbol µ to indicate deviation from euclidean measure.
For the triangular group Tri^+, the variables i and j range over 1 ≤ i ≤ j ≤ n.
We shall usually abbreviate
  t_ii = ti .
First we decompose the Iwasawa coordinates stepwise, going down one
step at a time. We write an element Y ∈ Posn in inductive coordinates
  Y = ( y  ᵗz ; z  Y_{n−1} )  with y ∈ R^+, z ∈ R^{n−1}, Y_{n−1} ∈ Pos_{n−1} .
Thus
  Y = Y(y, z, Y_{n−1}) .
We have the first decomposition of an element T ∈ Tri^+_n:
  T = ( t1  ᵗx ; 0  T_{n−1} ) = T(t1, x, T_{n−1}) ,
whence
(1)  Y = ϕ^+(T) = ϕ^+(t1, x, T_{n−1}) = ( t1² + ᵗxx   ᵗx ᵗT_{n−1} ; T_{n−1} x   Y_{n−1} )
whence
(2)  ∂(Y)/∂(T) = ∂(y, z, Y_{n−1})/∂(t1, x, T_{n−1}) = ( 2t1  *  0 ; 0  T_{n−1}  * ; 0  0  ∂(Y_{n−1})/∂(T_{n−1}) ) .
Thus we obtain
(3)  J(ϕ^+) = 2t1 |T_{n−1}| |∂(Y_{n−1})/∂(T_{n−1})|
           = 2^n (t1 · · · tn)(t2 · · · tn) · · · tn
           = 2^n ∏_{i=1}^n ti^i .
Thus not only do we get the inductive expression (3), but we can state the
full transformation formula:
or in terms of integration,
  ∫_{Posn} f(Y) dµ_euc(Y) = ∫_{Tri^+_n} f(T ᵗT) J(ϕ^+)(T) dµ_euc(T) .
Then for the Haar measures of Propositions 1.4 and 2.1, we have
  dµn(Y) = 2^n dµ_Tri(T) ,
where the integrals over t1 , . . . , tn are from 0 to ∞, and those over tij (with
i < j) are from −∞ to ∞.
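The Jacobian J(ϕ⁺) = 2^n ∏ ti^i can be checked numerically for n = 2, where the formula gives 4 t1 t2². The finite-difference computation below is our own verification.

```python
# Hedged numerical check (ours) of J(phi^+) = 2^n prod t_ii^i for n = 2:
# phi^+(T) = T tT in the coordinates (t1, t12, t2) -> (y11, y12, y22),
# with the Jacobian determinant computed by central finite differences.

def phi_plus(t1, t12, t2):
    return (t1*t1 + t12*t12, t12*t2, t2*t2)

def jacobian_det(f, p, h=1e-6):
    cols = []
    for k in range(3):
        up = list(p); dn = list(p)
        up[k] += h; dn[k] -= h
        fu, fd = f(*up), f(*dn)
        cols.append([(fu[i] - fd[i]) / (2*h) for i in range(3)])
    (a, d, g), (b, e, hh), (c, f_, i) = cols   # columns of the Jacobian
    return a*(e*i - hh*f_) - b*(d*i - g*f_) + c*(d*hh - g*e)

t1, t12, t2 = 0.7, 0.4, 1.3
J = abs(jacobian_det(phi_plus, [t1, t12, t2]))
assert abs(J - 4 * t1 * t2**2) < 1e-6          # 2^2 * t1^1 * t2^2
```
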
There is a corresponding map ϕ^− given by
  ϕ^− : Tri^+_n → Posn given by ϕ^−(T) = ᵗT T .
We have
  Y = ϕ^−(T) = ᵗT T = ( t1²   t1 ᵗx ; t1 x   x ᵗx + ᵗT_{n−1} T_{n−1} ) ,
whence
(2^−)  ∂(Y)/∂(T) = ( 2t1  *  * ; 0  t1 I_{n−1}  * ; 0  0  ∂(Y_{n−1})/∂(T_{n−1}) ) ,
so we obtain
(3^−)  J(ϕ^−) = 2 t1^n J(ϕ^−_{1,n−1}) = 2^n ∏_{i=1}^n ti^{n−i+1} = 2^n δ(T) β(T) ,
or in terms of integration,
  ∫_{Posn} f(Y) dµ_euc(Y) = ∫_{Tri^+_n} f(ᵗT T) J(ϕ^−)(T) dµ_euc(T)
                         = 2^n ∫_{Tri^+_n} f(ᵗT T) δ(T) β(T) dµ_euc(T) .
Then
  δ_Pos = (1/2) δ_Iw = ρ .
Next we give similar formulas for the partial decomposition in blocks of arbitrary
size. Let 0 < p < n and p + q = n. For X ∈ R^{p×q} let
  u(X) = ( Ip  X ; 0  Iq ) .
We get a map
  ϕ^+_{p,q} : Posp × R^{p×q} × Posq → Posn
as above. A direct check of dimensions shows that they add up properly, that
is
  p(p+1)/2 + (n−p)(n−p+1)/2 + p(n−p) = n(n+1)/2 .
Direct multiplication in (4) yields the explicit expression
(5)  ϕ^+_{p,q}(W, X, V) = ( W + [X]V   XV ; V ᵗX   V ) .
From (5), one sees that ϕ^+_{p,q} is bijective, because first, V uniquely determines
the lower right square Y3. Then X is uniquely determined to give Y2 = XV,
and finally W is uniquely determined to give Y1.
Note that aside from this formal matrix multiplication, one has Y > 0 if
and only if W > 0 and V > 0, and X is arbitrary.
  J(ϕ^+_{p,q}) = |V|^p .
For Y = ϕ^+_{p,q}(W, X, V) we have the change of variable formula,
and similarly with n and q, combined with the value for the Jacobian. The
formula comes out as stated.
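In the smallest case p = q = 1 the formula J(ϕ⁺_{p,q}) = |V|^p reads J = V for the map (W, x, V) ↦ (W + x²V, xV, V); the numerical check below is our own.

```python
# Hedged check (ours) of J(phi^+_{p,q}) = |V|^p in the case p = q = 1:
# phi(W, x, V) = (W + x^2 V, x V, V), whose Jacobian determinant is V,
# computed here by central finite differences.

def phi(W, x, V):
    return (W + x*x*V, x*V, V)

def jacobian_det(f, p, h=1e-6):
    cols = []
    for k in range(3):
        up = list(p); dn = list(p)
        up[k] += h; dn[k] -= h
        fu, fd = f(*up), f(*dn)
        cols.append([(fu[i] - fd[i]) / (2*h) for i in range(3)])
    (a, d, g), (b, e, hh), (c, f_, i) = cols
    return a*(e*i - hh*f_) - b*(d*i - g*f_) + c*(d*hh - g*e)

W, x, V = 2.0, 0.6, 1.7
assert abs(abs(jacobian_det(phi, [W, x, V])) - V) < 1e-6
```
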
One may carry out the similar analysis with lower triangular matrices. Thus
we let Tri^−_n be the space of lower triangular matrices, with the map
  ϕ^− : Tri^+_n → Posn defined by ϕ^−(T) = ᵗT T .
Then, in the block decomposition,
  J(ϕ^−_{p,q}) = |W|^q .
For Y = ϕ^−_{p,q}(W, X, V) the change of variable formula is analogous.
The proofs are exactly the same as for the other case carried out previously,
and will therefore be omitted.
Polar Coordinates
Every Y ∈ Posn can be written in the form Y = k a ᵗk = [k]a with k ∈ K
and a ∈ A, where as before A is the group of diagonal matrices with positive
diagonal elements. For those matrices with distinct eigenvalues (the regular
elements) this decomposition is unique up to a permutation of the diagonal
elements, and elements of K which are diagonal and preserve orthonormality,
in other words, diagonal elements consisting of ±1. Hence the map
  p : K × A → Posn given by p(k, a) = k a ᵗk
is a covering of degree 2^n n! over the regular elements. This map is called the
polar coordinate representation of Posn, and (k, a) are called the polar
coordinates of a point. As mentioned above, these coordinates are unique
up to the above mentioned 2^n n! changes over the regular elements.
We want to give an expression for the Haar measure on Posn in terms of
the polar coordinates, and for this we need to compute the Jacobian J(p).
For k ∈ K, we have ᵗk k = I, ᵗk = k^{−1}, and dk = ((dk)_ij). Then we let
  ω(k) = k^{−1} dk  so that  ᵗω = −ω .
Thus ω is a skew symmetric matrix of 1-forms,
  ((k^{−1} dk)_ij) = (ω_ij(k)) ,
with components being 1-forms ω_ij. Observe that each such form is necessarily
left K-invariant, that is, for any fixed k1 ∈ K, we have
  ω(k1 k) = ω(k)
directly by substitution. Taking the wedge product
  ⋀_{i<j} ω_ij
in some definite order yields a volume form on K, which is left invariant, and
therefore right invariant. The absolute value of this form then represents a
Haar measure on K, which we denote by νn, or dνn(k) if we use it inside an
integral sign. We call νn the polar Haar measure, so
  dνn = | ⋀_{i<j} ω_ij | .
Then
dνn (k) = νn (K)dµn (k)
where νn (K) is the total νn -measure of K, which we have to compute. For
the moment, we go on with the differential of the polar coordinate map.
Formula 2.8.
  dp(k, a) = k (da + ωa − aω) k^{−1} = [k](da + ωa − aω)
and
  (ωa − aω)_ij = (aj − ai) ω_ij .
In the above formula, da is the Euclidean diag(da1, . . . , dan).
Proof. Matrix multiplication being bilinear, we have
  dp(k, a) = d(k a ᵗk) = (dk) a ᵗk + k (da) ᵗk + k a (dᵗk) .
But also ᵗk k = I implies (dᵗk) k + ᵗk (dk) = 0 as we have seen, so
  dᵗk = −ᵗk (dk) ᵗk .
Substituting this value for dᵗk = d(k^{−1}) into the previous formula and using
the skew symmetry ᵗω = −ω yields the formula.
Taking the wedge product will allow us to determine various relations between
previous measures.
Letting
  γ(a) = ∏_{i<j} |ai − aj| ∏_{i=1}^n ai^{−(n−1)/2} ,
Proof. We take the wedge product of the forms in Formula 2.8, that is
  ± ⋀_i dai ∧ ∏_{i<j} (aj − ai) ⋀_{i<j} ω_ij .
Taking absolute values yields the first formula. The second formula concern-
ing the Haar measures follows from the definition of the Haar measure in
Proposition 2.1. This concludes the proof.
We shall now compute the constant νn (K), following Muirhead and the
computation reproduced in Terras [Ter 88].
First we need a remark on determinants. Let V = (v_ij) be an n × n matrix
of vectors in a vector space, and C = (c_ij) a scalar matrix. Let
  W = V C = (w_ij) .
Then
  ⋀_{i,j} w_ij = ± (det C)^n ⋀_{i,j} v_ij .
This is immediate from the usual rule, when one applies a matrix to a row
(v1, . . . , vn) of vectors, in which case one gets a factor of det C. Here we
perform this operation n times, whence the power (det C)^n.
The product of the ω_ij over i > j is a volume form (maximal degree form) on K.
Hence wedging it with any one of these 1-forms yields 0. Hence we find
  dµ_euc(X) = | ⋀_{i≤j} (dT T^{−1})_ij | | ⋀_{i>j} ω_ij | (det T)^n
            = ∏_{i=1}^n t_ii^{n−i} ∏_{i<j} dt_ij | ⋀_{i<j} ω_ij | ,
as was to be shown.
Theorem 2.11. We have
  νn(K) = 2^n (√π)^{n(n+1)/2} ∏_{j=1}^n Γ(j/2)^{−1} .
Proof. Let f(X) = exp(−tr(ᵗX X)) for X ∈ Matn(R), so f(X) splits:
  f(X) = ∏_{i,j} exp(−x_ij²) .
Then
  π^{n²/2} = ∫_{Matn} f(X) dµ_euc(X) = ∫_{GLn} f(X) dµ_euc(X)
          = ∫_K ∫_{Tri^+} f(kT) ∏ t_ii^{n−i} dµ_euc(T) dνn(k)    [by Lemma 2.10]
          = νn(K) ∫_{Tri^+} exp(−tr(ᵗT T)) ∏ t_ii^{n−i} ∏ dt_ii ∏_{i<j} dt_ij .
Now the integral over Tri^+ splits into a product of single integrals. The
integrals over the variables t_ii, for i = 1, . . . , n, are of the form
  ∫_0^∞ e^{−t²} t^{n−i} dt = (1/2) Γ((n−i+1)/2) .
The other integrals over the t_ij (i < j) are individually of the form
  ∫_{−∞}^∞ e^{−t²} dt = √π ,
and there are n(n−1)/2 of them, thus giving rise to the power (√π)^{n(n−1)/2}.
Multiplying these two types of products and solving for νn(K) gives the stated
value.
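As a sanity check (ours, not in the text): for n = 2 the theorem gives ν₂(K) = 4(√π)³/Γ(1/2) = 4π, which is the expected total measure of O(2), two circles each of measure 2π in this normalization.

```python
# Sanity check (ours) of Theorem 2.11 at n = 2:
# nu_2(K) = 2^2 (sqrt(pi))^3 / (Gamma(1/2) Gamma(1)) = 4 pi.

import math

def nu(n):
    value = 2**n * math.pi**(n*(n+1)/4)    # (sqrt(pi))^{n(n+1)/2}
    for j in range(1, n+1):
        value /= math.gamma(j/2)
    return value

assert abs(nu(2) - 4*math.pi) < 1e-12
```
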
In the first place, every Y ∈ Posn can be written uniquely in the form
Y = rZ with r ∈ R^+ and Z ∈ SPosn, so that
  Posn = R^+ × SPosn
in terms of the coordinates (r, Z). There exists a unique SLn(R)-invariant
measure on SPosn, denoted by µn^{(1)}, such that for the above product decomposition,
we have
  dµn(Y) = (dr/r) dµn^{(1)}(Z) .
Warning: This decomposition does not come from the Jacobian of the coordinate mapping!
Immediately from the above decomposition, we obtain:
so the formula is clear. The second one is immediate from Fubini’s theorem.
Note that the finiteness of the volume was proved in Theorem 1.6.
From the isomorphism in (2) and the fact that Rn /Zn is compact, if we
suppose inductively that Γn−1 \Gn−1 has finite measure, we then obtain:
Proposition 3.4. The measure of Γn,1 \Gn,1 is finite.
One more formula:
Proposition 3.5.
  ∫_{GLn(R)} h(ᵗg g) dg = an ∫_{Posn} h(Y) |Y|^{−1/2} dµn(Y) ,
where
  an = ∏_{j=1}^n π^{j/2} / Γ(j/2) .
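A similar sanity check (ours) for the constant an: for n = 1 it equals √π/Γ(1/2) = 1, and for n = 2 it equals π.

```python
# Sanity check (ours) of the constant a_n = prod_{j=1}^n pi^{j/2}/Gamma(j/2):
# a_1 = sqrt(pi)/Gamma(1/2) = 1 and a_2 = a_1 * pi/Gamma(1) = pi.

import math

def a(n):
    value = 1.0
    for j in range(1, n+1):
        value *= math.pi**(j/2) / math.gamma(j/2)
    return value

assert abs(a(1) - 1.0) < 1e-12          # Gamma(1/2) = sqrt(pi)
assert abs(a(2) - math.pi) < 1e-12
```
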
4 Siegel’s Formula
Throughout this section we use the notation of Sect. 3 concerning subgroups
of Gn = SLn (R), but when we fix n for parts of the discussion, we abbreviate:
(2)  Z^{n−1}\R^{n−1} → Γ_{n,1}\G_{n,1} → Γ_{n−1}\G_{n−1} = SL_{n−1}(Z)\SL_{n−1}(R)
has finite measure under the inductive assumption of finite measure for the
quotient space SL_{n−1}(Z)\SL_{n−1}(R).
Formula (1) allows us to transport Lebesgue measure from Rn to G1 \G.
We use x for the variable on Rn , sometimes identified with the variable in
G1 \G. In an integral, we write Lebesgue measure as dx. We let µG1 \G be the
corresponding measure on G1 \G, under the isomorphism (1).
We continue to use the fact that a homogeneous space with a closed uni-
modular subgroup has an invariant measure, unique up to a constant factor.
We consider the lattice of subgroups

      G
     /  \
   G1    Γ
     \  /
      Γ1
Fix a Haar measure dg on G. On the discrete groups Γ, Γ1 let the Haar
measure be the counting measure (measure 1 at each point). Then dg de-
termines unique measures on Γ\G and Γ1 \G, since a measure is determined
locally. We can denote the induced measures by dḡ without fear of confusion.
In addition to that, going on the other side, having fixed dg on G and
dµ_{G_1\G} = dx on G_1\G, these determine the remaining invariant measure on Γ_1\G_1.
Putting in the variables, this means that for a function f on Γ1 \G, say con-
tinuous with compact support, we have:
(4) ∫_{G_1\G} ∫_{Γ_1\G_1} f(g_1 g) dḡ_1 dµ_{G_1\G}(ḡ) = ∫_{Γ_1\G} f(g) dḡ

    = ∫_{Γ\G} ∫_{Γ_1\Γ} f(γg) dγ̄ dḡ .
Of course, the formula for f ∈ Cc (G1 \G) determines the measures, but is valid
for a much wider class of functions, namely the class for which the integrals
converge absolutely, say f ∈ L1 (Γ1 \G).
Lemma 4.1. Let f ∈ L1 (Rn ) ≈ L1 (G1 \G). Let cn = vol(Γ1 \G1 ). Then
c_n ∫_{R^n} f(x) dx = ∫_{Γ\G} ∫_{Γ_1\Γ} f(γg) dγ̄ dḡ .
Proof. The right side is just the right side of (4). For the left side, note that
f (g1 g) = f (g) for g1 ∈ G1 by the current hypothesis on f , and hence the inside
integral on the left of (4) just yields the volume vol(Γ1 \G1 ). The lemma then
follows by the definition of cn .
Denote by prim(t Zn ) the set of primitive row vectors, i.e. integral vectors
such that the g.c.d. of the components is 1. Then
(5) ^t e_1 Γ = ^t e_1 SL_n(Z) = prim(^t Z^n) ,
since any primitive vector can be extended to a matrix in SLn (Z). From (5),
it follows that the totality of all non-zero vectors in ^t Z^n is the set of vectors

^t Z^n − {0} = {kℓ with ℓ primitive and k = 1, 2, 3, . . .} ,
Replacing f (x) by f (kx) with a positive integer k, and using the chain rule
on the right, we find
∫_{Γ\G} Σ_{ℓ prim} f(kℓg) dḡ = c_n k^{−n} ∫_{R^n} f(x) dx .
Summing over all k ∈ Z+ , the elements k` with ` primitive range over all
non-zero elements of t Zn . On the left side, we obtain the factor cn ζ(n), so we
may rewrite that last expression in the form
(6) ∫_{Γ\G} Σ_{ℓ≠0} f(ℓg) dḡ = c_n ζ(n) ∫_{R^n} f(x) dx ,
Assuming that V_n is finite, we shall now prove that V_n = c_n ζ(n). For this
we can take a function f which is continuous, ≧ 0, with positive integral and
compact support. We note that for any g ∈ SLn (R),
(7) lim_{N→∞} (1/N^n) Σ_{ℓ≠0} f(ℓg/N) = ∫_{R^n} f(x) dx .
This is just a property of the Riemann integral, passing to the limit, because
translation by g preserves the measure of a parallelotope. We integrate this
formula over Γ\G and find:
V_n ∫_{R^n} f(x) dx = lim_{N→∞} (1/N^n) ∫_{Γ\G} Σ_{ℓ≠0} f(ℓg/N) dḡ

    = lim_{N→∞} (1/N^n) c_n ζ(n) ∫_{R^n} f(x/N) dx   by (6)

    = c_n ζ(n) ∫_{R^n} f(x) dx .

Since the integral of f is positive, we may divide by it, and we find V_n = c_n ζ(n).
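The Riemann-sum property (7) can be illustrated numerically in the case n = 2, with a Gaussian f and a shear g ∈ SL_2(R); the sample values below are chosen purely for illustration.

```python
import math

# Property (7): (1/N^n) * sum over nonzero lattice points l of f(l g / N)
# tends to the integral of f over R^n, for g of determinant 1.
# Here n = 2, f(x) = exp(-|x|^2), whose integral over R^2 is pi,
# and g = [[1, 1], [0, 1]] is a unimodular shear.
def lattice_sum(N):
    total = 0.0
    for l1 in range(-6 * N, 6 * N + 1):
        for l2 in range(-12 * N, 12 * N + 1):
            if l1 == 0 and l2 == 0:
                continue
            # row vector (l1, l2) times the shear g gives (l1, l1 + l2)
            x1, x2 = l1 / N, (l1 + l2) / N
            r2 = x1 * x1 + x2 * x2
            if r2 < 36.0:                  # truncate the rapidly decaying tail
                total += math.exp(-r2)
    return total / N**2

approx = lattice_sum(40)
print(approx, math.pi)
```

The sum is independent of the shear, as the proof asserts, because translation by g preserves the measure of a parallelotope.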
Remark. From Theorem 1.4 of the present chapter and Theorem 5.1 of Chap.
1, we know that Γ\G has finite measure. However, as does Siegel, one can give
another proof by induction, not based on the use of Siegel sets. The first part
of the proof of the preceding theorem is valid no matter whether Vn is finite
or not, and it yielded (6). Here we use the induction which made Proposition
3.4 valid, so c_n is finite. Let f be a function with compact support, equal to
1 on some given compact set, and f ≧ 0 everywhere. Define
f_N(x) = (1/N^n) f(x/N) .
Apply (6) to the function fN instead of f , and take the limit for N → ∞. By
(7) it follows that the measure of every compact subset of Γ\G is bounded by
cn ζ(n), and therefore that Γ\G has finite measure. Of course, this argument
and the argument in the second part of the theorem can be joined to give
proofs of the finiteness and the value of Vn simultaneously. We preferred the
present arrangement, separating both considerations.
Another Proof
Because of its intrinsic interest, and because we want to emphasize how the
additive Poisson formula mingles with the multiplicative Haar measures, we
shall give another proof for the computation of the volume, originally due to
Weil [We 46]. We add one term involving f (0) to both sides of (6) to get
(7) c_n ζ(n) ∫_{R^n} f(x) dx + V_n f(0) = ∫_{Γ\G} Σ_ℓ f(ℓg) dḡ

where the sum is taken over all ℓ ∈ ^t Z^n. Normalize the Fourier transform by

f^∨(y) = ∫_{R^n} f(x) e^{−2πi x·y} dx .
Then the Poisson formula gives for any function ϕ in the Schwartz space:
Σ_ℓ ϕ(ℓ) = Σ_ℓ ϕ^∨(ℓ) .
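With this normalization the Gaussian ϕ(x) = e^{−πtx²} has ϕ^∨(y) = t^{−1/2} e^{−πy²/t}, so in one variable the Poisson formula becomes the theta transformation Σ e^{−πtℓ²} = t^{−1/2} Σ e^{−πℓ²/t}; a quick numerical check (sample value of t only):

```python
import math

# Poisson summation with phi_hat(y) = int phi(x) e^{-2 pi i x y} dx:
# for phi(x) = e^{-pi t x^2} one has phi_hat(y) = t^{-1/2} e^{-pi y^2 / t},
# so the two lattice sums over Z must agree.
def theta(t, terms=200):
    return sum(math.exp(-math.pi * t * l * l) for l in range(-terms, terms + 1))

t = 0.37                                # arbitrary positive test value
lhs = theta(t)                          # sum of phi over Z
rhs = theta(1.0 / t) / math.sqrt(t)     # sum of phi_hat over Z
print(lhs, rhs)
```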
If ϕ(x) = f (xg), then ϕ∨ (y) = f ∨ (y t g −1 ) for g ∈ SLn (R). From (7) we find
(8) c_n ζ(n) f^∨(0) + V_n f(0) = ∫_{Γ\G} Σ_ℓ f^∨(ℓ ^t g^{−1}) dḡ

    = ∫_{Γ\G} Σ_ℓ f^∨(ℓg) dḡ .
‖ℓg‖ ≧ A_n .

A_n^n µ_euc(B_n) = (1/V_n) ∫_{Γ\G} Σ_{ℓ≠0} f(ℓg) dḡ .
Hence for this value of g, all the points `g lie outside the ball of radius An ,
which concludes the proof.
Remark. Since µ_euc(B_n) = π^{n/2}/Γ(1 + n/2), one can use Stirling's formula
to get the asymptotic behavior of µ_euc(B_n), which immediately allows one to give
an explicit expression for A_n when n is sufficiently large. Stirling's formula
shows that µeuc (Bn ) tends to 0 fairly rapidly, and hence An can be selected
to tend to infinity accordingly. For instance, An = (n/2πe)1/2 will do for n
sufficiently large.
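A numerical sketch of this remark (sample dimensions chosen here for illustration):

```python
import math

# mu_euc(B_n) = pi^(n/2) / Gamma(1 + n/2); it tends to 0 rapidly, and with
# A_n = (n / (2 pi e))^(1/2) the product A_n^n * mu_euc(B_n) stays bounded,
# consistent with Stirling's formula.
def ball_volume(n):
    return math.pi ** (n / 2) / math.gamma(1 + n / 2)

def An(n):
    return math.sqrt(n / (2 * math.pi * math.e))

dims = (10, 20, 40, 80)
vols = [ball_volume(n) for n in dims]
prods = [An(n) ** n * ball_volume(n) for n in dims]
print(vols)     # rapidly decreasing
print(prods)    # all below 1
```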
We reformulate Siegel's theorem on SPos_n. We use the measure µ_n^(1) of
Sect. 3. There exists a unique Haar measure dg on G such that

(9) ∫_{Γ\SPos_n} f(Z) dµ_n^(1)(Z) = ∫_{Γ\G} f(g ^t g) dḡ .
Proof. Let f(x) = ϕ(^t x x) and apply Siegel's formula to f. Then

V_n ∫_{R^n} f(x) dx = ∫_{Γ\G} Σ_{ℓ≠0} f(ℓg) dḡ   (by Theorem 4.2)

    = ∫_{Γ\G} Σ_{ℓ≠0} ϕ([ℓ] g ^t g) dḡ

    = ∫_{Γ\SPos_n} Σ_{ℓ≠0} ϕ([ℓ]Z) dµ_n^(1)(Z)

by the normalization (9), thus concluding the proof of the first version, summing
over all ℓ ≠ 0. The second version is done in exactly the same way from
the second version of Theorem 4.2.
∫_{Γ_n\SPos_n} Σ_{ℓ prim} ϕ([ℓ]Z) dµ_n^(1)(Z) = V_{n−1} ∫_{R^+} ϕ(r) r^{n/2} dr/r .
Proof. We first note that the sum inside the integral, viewed as a function
of Z ∈ SPosn , is Γn -invariant because action by Γn (on the right side of `)
simply permutes the primitive integral vectors. In any case, we may rewrite
the left side in the form:
46 2 Measures and Integration
left side = ∫_{Γ_n\SPos_n} Σ_{γ∈Γ_{n,1}\Γ_n} ϕ([^t e_1][γ]Z) dµ_n^(1)(Z)

    = ∫_{Γ_{n,1}\SPos_n} ϕ(z_11) dµ_n^(1)(Z)

    = ∫_{Γ_{n,1}\SPos_n} f(Z) dµ_n^(1)(Z)   putting f(Z) = ϕ(z_11)

    = ∫_{Γ_{n−1}\SPos_{n−1}} ∫_{R^{n−1}/Z^{n−1}} ∫_0^∞ f(w, x, V) w^{n/2} (dw/w) dx dµ_{n−1}^(1)(V)

    = V_{n−1} ∫_0^∞ ϕ(w) w^{n/2} dw/w ,
with the use of Proposition 3.3 in the penultimate step. This concludes the
proof.
µ_euc(B_n) = π^{n/2} / Γ(1 + n/2) = π^{n/2} / ((n/2) Γ(n/2))

and therefore

(11) µ_euc(S^{n−1}) = n µ_euc(B_n) .
From (10) and (11) it follows trivially that for a function ϕ on R^+ one has
the formula

(12) (π^{n/2} / Γ(n/2)) ∫_{R^+} ϕ(r) r^{n/2} dr/r = ∫_{R^n} ϕ(^t x x) dx ,
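Formula (12) can be checked with ϕ(r) = e^{−r}, where both sides reduce to π^{n/2}, since the radial integral is Γ(n/2). A numerical sketch for n = 3:

```python
import math

# Polar-coordinate formula (12) with phi(r) = e^{-r}, n = 3:
# (pi^{n/2} / Gamma(n/2)) * int_0^inf phi(r) r^{n/2} dr/r
# should equal int_{R^n} e^{-|x|^2} dx = pi^{n/2}.
def simpson(f, a, b, steps=20000):
    h = (b - a) / steps
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, steps))
    return s * h / 3

n = 3
radial = simpson(lambda r: math.exp(-r) * r ** (n / 2 - 1), 0.0, 60.0)
lhs = math.pi ** (n / 2) / math.gamma(n / 2) * radial
print(lhs, math.pi ** (n / 2))
```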
Theorem 4.6. (Minkowski) Let G = SLn (R) and Γ = SLn (Z). Let
Proof. We start with Corollary 4.4, to which we apply Proposition 4.5 and
follow up by formula (12). The inductive relation drops out, and the case
n = 1 is trivial. Readers who don’t like n = 1 can check for themselves the
case n = 2 (the standard upper half plane).
3
Special Functions on Posn
Classical functions such as the gamma function and the Bessel function have
analogues on symmetric spaces, as do certain classical integral transforms. For
the generalization of gamma function to Posn , the idea goes back to Siegel
[Sie 35], and for the Bessel function it goes back to Bochner [Boc 52], Herz
[Her 55] and Selberg [Sel 56]. We shall give further bibliographical comments
later. Among the integral transforms is the generalized Mellin transform. Cf.
Gindikin [Gin 64], who provides a beautiful survey of special functions on
spaces like, but more general than, Pos_n. Thus large portions of harmonic
analysis, as well as the theory of Dirichlet and Bessel series, carry over to
such spaces. Here we are concerned with the most standard of all symmetric
spaces, the space Posn of symmetric positive definite real matrices. As the
reader will see, one replaces the invariant measure dy/y on the multiplicative
group by the measure |Y |−(n+1)/2 dµeuc (Y ) on the space of positive definite
matrices. We develop systematically the theory of some special functions on
this space, namely, the gamma and K-Bessel functions, as a prototype of other
special functions, and also as a prototype for more general symmetric spaces. No
matter what, it is useful to have tabulated the formulas in this special case,
for various applications.
We note that Terras has a section dealing with the gamma and Bessel
functions [Ter 88], Chap. 4, Sect. 4. For many reasons, we thought it was
worthwhile to include here a new exposition of the material. For one thing,
she leaves too many exercises for the reader. Aside from that, in the tradition
of the quadratic forms people, she uses right action of the group on Posn , and
we use left action in the tradition of the Lie industry. We have also used the
basic notion of characters on Posn systematically, associating both the gamma
and Bessel transforms with characters on the homogeneous space.
Jay Jorgenson: Posn (R) and Eisenstein Series, Lect. Notes Math. 1868, 49–74 (2005)
www.springerlink.com © Springer-Verlag Berlin Heidelberg 2005
1 Characters of Posn
The space Posn is a homogeneous space for GLn (R), but it is a principal ho-
mogeneous space for the space Tri+ of upper triangular matrices with positive
diagonal components, as we have seen in Chap. 1. We can write an element
Y ∈ Posn uniquely in the form
Y = [T ]I with T ∈ Tri+ , so Y = T t T .
d(Y ) = |Y | .
it is less cumbersome to leave out the complex variables. Note that for any
α ∈ C,
dα ρs = ρs+α where s + α = (s1 + α, . . . , sn + α) .
Observe that the transpose maps Tri+ to Tri− and preserves the diagonal
elements. We could have carried out the same construction with Tri− instead
of Tri+ . To relate the two constructions, it is useful to introduce the reversing
matrix

ω = antidiag(1, . . . , 1) ,

the matrix with 1's on the antidiagonal and 0's elsewhere.
So we define

ψ*(T) = ψ([ω] ^t T^{−1}) .
Proposition 1.1. The map ρ 7→ ρ∗ is an involution on the group of charac-
ters. The character associated with ρ∗ is ψ ∗ . For α ∈ C,
(dα )∗ = d−α .
52 3 Special Functions on Posn
Proof. Immediate from (3) and (4) above, and the definition of ρ∗ .
The above involution gives one mechanism to go back and forth between
the upper triangular action and the lower triangular action on the left. The
non-commutativity forces the development of some sort of formalism to deal
with it.
The advantage of dealing with [ω] is that it allows us to define an involution
on the group of characters. However, it stems from the fact that Posn is
actually both a left G-space and a right G-space. The map S : Y 7→ Y −1
leaves Tri+ and Tri− stable, and interchanges left characters (for Tri+ ) to
right characters. For a left character ρ, we may denote by ρ′ the right character
such that

ρ′(Y) = ρ(Y^{−1}) , so ρ′′ = ρ .
On the other hand, [ω] interchanges Tri+ and Tri− , as well as it interchanges
left and right characters. Indeed, for T ∈ Tri+ and abbreviating [ω]X = X ω ,
we have
(ρ ∘ [ω])(Y[T]) = ρ( ^t T^ω Y^ω T^ω ) = ψ( ^t T^ω ) ρ([ω]Y) ,

so ψ_{ρ∘[ω]}(T) = ψ(^t T^ω) is the character on Tri+ associated with ρ ∘ [ω]. Taking
the composite
ρ* = ρ′ ∘ [ω] = (ρ ∘ [ω])′
then yields the involution on left characters.
Next, let us define an involution on the space of n-complex variables, that
is,
s* = (−s_n, . . . , −s_1) .

Thus s*_i = −s_{n−i+1} for i = 1, . . . , n. We want a character h_s = ρ_{s#} with a
suitable vector s# such that
h∗s = hs∗ .
Trivial fooling around with the index and solving a linear equation shows:
Proposition 1.2. Let

(4) h_s(Y) = ∏_{i=1}^{n} (t_{n−i+1})^{2s_i + i − (n+1)/2} .

Then

(5) h_s = ρ_{s#}

where

s#_i = s_{n−i+1} + (n−i)/2 − (n−1)/4 .
as well as

(7) h_s(Y) = |Y|^{−(n+1)/4} ∏_{i=1}^{n} (t_{n−i+1})^{2s_i + i} .
i=1
Subdeterminants
Let Y ∈ Pos_n. For each j with j = 1, . . . , n let Sub^j Y be the j × j upper left
square submatrix of Y, and let Sub_j Y be the j × j lower right square submatrix of
Y, as shown on the figure.

[Figure: Y with the upper left j × j block Sub^j Y and the lower right j × j block Sub_j Y.]
Authors dealing with the right action of Tri+ then consider Sub^j Y and
corresponding characters. With our left action, we consider Sub_j Y. The reason
is given in the next proposition.
Write T ∈ Tri+ in block form T = ( A  B ; 0  D ). Then

^t T T = ( ^t A A   ^t A B ; ^t B A   ∗ )   and   T ^t T = ( ∗   B ^t D ; D ^t B   D ^t D ) ,

so the proposition is clear.
Remark on Proposition 1.3. Characters are of course meant in the sense we have
fixed, for the left action of Tri+. If one considers right action, then Sub_j Y has
to be replaced by Sub^j Y to make the assertion valid.
Proposition 1.5. Index [ω] as [ωn ] for its action on n × n matrices. Then
Proof. Check it out directly for n = 2, then n = 3, and then do what you
want with induction and matrix multiplication.
q_{−z} = q_z^{−1} .
Both power functions are left characters. For right characters (as in other
authors), one defines the power function pz by taking the product with Subj
instead of Subj .
When n = 1 and ρ = ρs , then the above value coincides with the usual gamma
function of a single variable s. Actually, the integral can be expressed in terms
of the usual gamma function as follows.
Γ_n(ρ_s) = √π^{n(n−1)/2} ∏_{i=1}^{n} Γ( s_i − (n−i)/2 ) .
It follows that the integral splits into a product of single integrals as follows.
For indices i < j, we have the product

∏_{i<j} ∫_{−∞}^{∞} exp(−t_{ij}²) dt_{ij} = √π^{n(n−1)/2} .
For indices i = j, so over the variables t_11, . . . , t_nn, we have the product of
i-th terms, i = 1, . . . , n; using a single variable s for simplicity, with y = t² and
dy/y = 2 dt/t,

∫_0^∞ e^{−t²} t^{2(s+(i−n)/2)} 2 dt/t = ∫_0^∞ e^{−y} y^{s+(i−n)/2} dy/y

    = Γ( s + (i−n)/2 ) .
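The substitution used for the diagonal factors can be tested numerically at a sample exponent (chosen here only for illustration):

```python
import math

# Diagonal factor of the gamma integral: the substitution y = t^2,
# dy/y = 2 dt/t, gives int_0^inf e^{-t^2} t^{2s'} 2 dt/t = Gamma(s').
def simpson(f, a, b, steps=20000):
    h = (b - a) / steps
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, steps))
    return s * h / 3

s_prime = 2.25   # sample value playing the role of s + (i - n)/2
val = simpson(lambda t: 2 * math.exp(-t * t) * t ** (2 * s_prime - 1), 0.0, 12.0)
print(val, math.gamma(s_prime))
```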
Remark. Mellin transforms and gamma functions as above and more gener-
ally hypergeometric functions occur notably in Gindikin [Gin 64]. In the early
days, as in [Sie 35] and [Her 55], only the determinant character was used to
define a gamma function. Selberg saw further with his power function [Sel 56].
We shall now extend systematically the standard formalism of the gamma
integral and subsequently the K-Bessel integral to the matricial case. Proofs
which used only the invariance of the measure dy/y on the positive multi-
plicative group, and an interchange of integration, go over systematically.
We start with the standard result that for a > 0, we have

(5) ∫_0^∞ e^{−ay} y^s dy/y = Γ(s) a^{−s} .
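Formula (5) is easy to test numerically; for instance with a = 2, s = 3 the right side is Γ(3)·2^{−3} = 1/4. A sketch:

```python
import math

# (5): int_0^inf e^{-a y} y^s dy/y = Gamma(s) a^{-s}, tested at a = 2, s = 3.
def simpson(f, a, b, steps=20000):
    h = (b - a) / steps
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, steps))
    return s * h / 3

a_, s_ = 2.0, 3.0
val = simpson(lambda y: math.exp(-a_ * y) * y ** (s_ - 1), 0.0, 40.0)
print(val, math.gamma(s_) * a_ ** (-s_))
```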
Proof. Write Z = T ^t T with T ∈ Tri+. Then starting with the measure invariance,
we find:

(Γ_n # ρ)(Z) = ∫_{Pos_n} e^{−tr([T]Y Z^{−1})} ρ([T]Y) dµ_n(Y)

    = ∫_{Pos_n} e^{−tr(T Y ^t T ^t T^{−1} T^{−1})} ψ(T) ρ(Y) dµ_n(Y)

    = ψ(T) ∫_{Pos_n} e^{−tr(T Y T^{−1})} ρ(Y) dµ_n(Y)

    = ρ(Z) Γ_n(ρ) ,
which proves the first formula. The second follows by putting Z = A−1 and
using the fact that tr(AY ) = tr(Y A).
Remark. The above result joins two properties. First, the gamma function
was viewed as a function on the space of characters, whence the notation Γ(ρ).
Second, we view the exponential function involving two variables (Z, Y ) as a
“kernel function” giving rise to an integral operator, which we may apply to
a space of functions which make the integral absolutely convergent, certainly
including functions with “polynomial growth” in some sense, and certainly
including a half plane of characters on the symmetric space Posn . A function h
may be an eigenfunction for this integral transform, i.e. the gamma transform,
and in this case, the corresponding eigenvalue may be denoted by λΓ (h). Then
Proposition 2.2 may be formulated by saying that for a character ρ, we have
λΓ (ρ) = Γn (ρ) .
Y 7→ Y −1 and Y 7→ [ω]Y .
The desired formula then follows from Proposition 2.2 and the definitions.
Proof. We note that ρ = ρ′′, and ρ′ is a left character. We then make the
measure-preserving change of variables Y 7→ Y −1 and apply Propositions 2.2
and 2.3 to conclude the proof.
and the next. The basic principle is that, once the original definition is given
with matrices, then almost all the formulas for the ordinary K-Bessel func-
tion are valid, with essentially the same proofs as in the one-dimensional case,
using the invariant measure dµn (Y ) on Posn instead of the invariant measure
dy/y on the positive multiplicative group, which is Pos1 . We have found it
convenient to adopt the convention that for n = 1, s ∈ C, a, b > 0,
K_s(a, b) = ∫_0^∞ e^{−(ay + b y^{−1})} y^s dy/y .
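One instance of such measure-invariance arguments: the substitution y ↦ y^{−1} preserves dy/y and gives K_s(a, b) = K_{−s}(b, a). A numerical sketch (sample values only):

```python
import math

# K_s(a,b) = int_0^inf e^{-(a y + b/y)} y^s dy/y, computed via y = e^u so that
# dy/y = du and the integrand decays doubly exponentially in both directions.
def simpson(f, a, b, steps=20000):
    h = (b - a) / steps
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, steps))
    return s * h / 3

def K(s, a, b):
    return simpson(lambda u: math.exp(-(a * math.exp(u) + b * math.exp(-u)) + s * u),
                   -30.0, 30.0)

# y -> 1/y preserves dy/y, hence K_s(a, b) = K_{-s}(b, a).
k1 = K(1.5, 2.0, 3.0)
k2 = K(-1.5, 3.0, 2.0)
print(k1, k2)
```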
Directly from the fact that for g ∈ GLn (R), the map [g] preserves the measure,
we get the transformation formula
It is very easy to show that in a fixed bounded range for r, one has uniformly
Cf. [La73/87], Chap. XX, Sect. 3, K7. Of course, as x → 0, the integral blows
up.
The next theorem will show not only that the higher dimensional Bengtson-
Bessel function is entire in the n complex variables s1 , . . . , sn , but it also gives
uniform estimates for the absolute convergence of the integral, in terms of the
eigenvalues of A and B as these approach 0 (which is bad) or ∞ (which is
good).
Theorem 3.1. Let λ > 0 be such that A and B ≧ λI. Put

σ_j = Re(s_j) .
Then the integral representing K_{ρ_s}(A, B) is absolutely convergent and satisfies

|K_{ρ_s}(A, B)| ≦ √(π/λ)^{n(n−1)/2} ∏_{j=1}^{n} K_{σ_j−(n−j)/2}(λ) .
Proof. We write down the integral representing the Bessel function just with
the real part, since the imaginary part does not contribute to the absolute
value estimate. For X ∈ R^n, we have A[X] ≧ λ ^t X X and hence for X ∈ R^{n×n}
we have

tr(A[X]) ≧ λ tr([X]I) .

In the Bessel integral, we change the variable as in Chap. 2, Sect. 2, putting
Y = T ^t T with T ∈ Tri+, so

tr(AY) ≧ λ Σ_{i≦j} t_{ij}² = λ Σ_{j=1}^{n} t_{jj}² + λ Σ_{i<j} t_{ij}² .
In the product over i < j, we omit the term with (t_{ij})², which only makes the
estimate worse, and then the integral gives precisely the factor √(π/λ)^{n(n−1)/2}.
For the product with the diagonal variables tj = tjj , we change the variable
putting u = t2 , du/u = 2dt/t. Then one gets just the Bessel integral in one
variable, giving the other factor in the desired estimate. This concludes the
proof.
Then the integral is absolutely convergent only in the half plane of convergence
of the gamma integral, but we can substitute Z = 0.
We then find
We use the commutativity of the trace to see that this integral is the same as
∫_{Pos_n} e^{−tr(A[T]Y + B([T]Y)^{−1})} ρ(Y) dµ_n(Y) .
(K4) K_ρ(A, B) = K_{ρ′}(B, A) = K_{ρ*∘[ω]}(B, A) = K_{ρ*}([ω]B, [ω]A) .

ρ′(Y) = ρ(Y^{−1}) = ρ*([ω]Y) .
Inductive Formulas
We conclude this section with inductive formulas for the Bessel function start-
ing with the degenerate case as in Bengtson. We fix the notation for the rest
of the section
Proposition 3.1. Let 0 < p < n and p + q = n. Let P ∈ Posn , Q, D ∈ Posq .
For X ∈ Rp×q , we let
u(X) = ( I_p  X ; 0  I_q ) .
A variable Y ∈ Pos_n has a unique expression in partial Iwasawa coordinates

Y = Iw^+(W, X, V) = [u(X)] ( W  0 ; 0  V ) .
We specify each time which expression is used. A left, resp. a right, character
ρ can be expressed uniquely as a product

ρ( ( W  0 ; 0  V ) ) = ρ_1(W) ρ_2(V) ,
where
3 The Bengtson Bessel Function 63
M(Y) = AY + BY^{−1}

and A, B are either in partial Iwasawa coordinates or degenerate, for instance

A = Iw^+(P, C, Q)   and   B = ( 0  0 ; 0  D ) .
In the proof, we then use the coordinates Y = Iw− (W, X, V ) with Iw− . Many
combinations can occur, as in the next four propositions, with the alternate
Iwasawa decomposition or upper left hand corner blocks instead of lower right.
The four propositions allow one to determine similar answers for Bessel integrals
formed by permuting variables, e.g. using ^t u(C) instead of u(C) in Proposition
3.2 below. For example, in that proposition, we let
M(Y) = [u(C)] ( P  0 ; 0  Q ) Y + ( 0  0 ; 0  D ) Y^{−1} = AY + BY^{−1} .

Let

M′(Y) = ( 0  0 ; 0  D ) Y + [u(C)] ( P  0 ; 0  Q ) Y^{−1} = BY + AY^{−1} .
M ∼N
Proof. The pattern of the present proof will be repeated several times after-
wards. We are computing Kρ (A, B), where A = Iw+ (P, C, Q). For certain
matrix multiplications to come out, we use Y = Iw− (W, X, V ), i.e. we use the
alternate partial Iwasawa decomposition for the variable Y .
so that
The translation by C in the dµeuc (X)-integral does not change the integral,
so we can omit C from the integrand. Multiplication by Q on the right and W
on the left introduce linear changes of coordinates in this dµeuc (X)-integral,
in fact the change is by Q^{1/2} and W^{1/2} since the variable X is "squared". The
situation is the same as in Chap. 3, Sect. 2. Formally, we are dealing with the
change of variables

Z = W^{1/2} X Q^{1/2} ,   dZ = |W|^{q/2} |Q|^{p/2} dX .
Making this change of variables shows that the dµ_euc(X)-integral has the
standard value

∫_{R^{p×q}} e^{−tr(^t X X)} dµ_euc(X) = √π^{pq} ,
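Both facts are easy to test numerically in a small case; the sketch below takes p = 2, q = 1 with diagonal W and scalar Q (simplifying assumptions made here so the square roots are elementary):

```python
import math

# Jacobian of X -> W^{1/2} X Q^{1/2} on R^{p x q}: for W = diag(w1, w2) (p = 2)
# and scalar Q = q0 (q = 1), the linear map on R^2 is
# v -> diag(sqrt(w1*q0), sqrt(w2*q0)) v, whose determinant must equal
# |W|^{q/2} |Q|^{p/2} = (w1*w2)^{1/2} * q0.
w1, w2, q0 = 2.0, 5.0, 3.0
det_map = math.sqrt(w1 * q0) * math.sqrt(w2 * q0)
predicted = math.sqrt(w1 * w2) * q0
print(det_map, predicted)

# Gaussian matrix integral: tr(tX X) is the sum of squares of the pq entries,
# so the integral over R^{p x q} factors into pq copies of int e^{-t^2} dt = sqrt(pi).
def simpson(f, a, b, steps=4000):
    h = (b - a) / steps
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, steps))
    return s * h / 3

one_dim = simpson(lambda t: math.exp(-t * t), -10.0, 10.0)
print(one_dim ** 2, math.sqrt(math.pi) ** 2)   # pq = 2
```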
The variables are separated in the double integral. From Proposition 2.2, the
W-integral yields

(7) ρ_1(P^{−1}) Γ_p(ρ_1) ;

while directly from the definition of the K-Bessel function, the V-integral
yields

(8) K_{ρ_2 d_q^{−p/2}}(Q, D) .

Putting these last two factors together with the power of √π and the factor
|Q|^{−p/2} proves the proposition.
We tabulate the variation with [t u(C)] instead of [u(C)].
Proof. The point of the method of Proposition 3.2 was to combine the ex-
pressions with (C, P, Q) and (X, W, V ). To do so in the present case, we have
to use the alternative coordinates Y = Iw+ (W, X, V ). Then matrix multipli-
cation now gives
(10) M(Y) ∼ ( PW  ∗ ; ∗  ([^t(C+X)]P + Q)V ) + ( 0  ∗ ; ∗  DV^{−1} ) ,
whence
then yields Kρ2 (Q, D). Putting all factors together gives the stated answer.
Proof. We let
Y = [^t u(X)] ( W  0 ; 0  V ) = Iw^−(W, X, V) .
and

(15) [^t u(C)] ( P  0 ; 0  Q ) Y^{−1} ∼ ( P W^{−1}  ∗ ; ∗  ([^t(C − X)]P + Q)V^{−1} ) .
Then
Since

dµ_n(Y) = |W|^{q/2} |V|^{−p/2} dµ_euc(X) dµ_p(W) dµ_q(V) ,

we find that

(17) ∫_{Pos_n} e^{−tr M(Y)} ρ(Y) dµ_n(Y) =

    ∫∫∫ e^{−tr M(W,X,V)} ρ_1(W) ρ_2(V) |W|^{q/2} |V|^{−p/2} dµ_euc(X) dµ_p(W) dµ_q(V) .
leaves the dµ_q(V)-integral, times the inverse |P|^{−q/2}, while there is a cancellation
of |V|^{−p/2}, so the V-integral is

(19) ∫_{Pos_q} e^{−tr(Q V^{−1})} ρ_2(V) dµ_q(V) .
Proof. The situation can use some of the computations of Proposition 3.2, but
in evaluating tr M(Y) the term with D has to be replaced. A new computation
shows that

(20) ( A  0 ; 0  0 ) Y^{−1} = ( A(W^{−1} + [X]V^{−1})  ∗ ; 0  0 ) .
Hence
Then we first perform the W -integral and V -integral, and the stated answer
drops out.
and we can apply Proposition 3.4 to find a value for the right side.
K_ρ(A, I_n)
    = ∫_{R^{p×q}} K_{ρ_1 d_p^{q/2}}(P + [X + C]Q, I_p) K_{ρ_2 d_q^{−p/2}}(Q, I_q + ^t X X) dµ_euc(X) .
with
(16) f(Y) = e^{−tr M(Y)} ρ_1(W) ρ_2(V) .

The expression for M(Y) = M(W, X, V) = AY + Y^{−1} is the same as in (3),
except that the matrix with DV^{−1} is replaced by

(17) Y^{−1} ∼ ( W^{−1}  ∗ ; ∗  (^t X X + I)V^{−1} ) .
Therefore we obtain

K_ρ(A, I_n)
    = ∫_{R^{p×q}} K_{ρ_1 d^{−q/2}}(P, I + X ^t X) K_{ρ_2 d^{p/2}}(P[C + X] + Q, I) dµ_euc(X) .
We let M (Y ) = AY + Y −1 . Then
Then ψ(T_X) cancels ψ(T_X)^{−1}. Now interchange the integrals again. We have
so (2) becomes

(3) ∫_{Pos_q} ∫_{R^{p×q}} e^{−⟨X Y^{1/2}, X Y^{1/2}⟩} e^{−i⟨X, R⟩} dν(X) e^{−⟨A Y^{1/2}, A Y^{1/2}⟩} ρ(Y) dµ_q(Y) .
The next theorem gives the Mellin transform of the Bessel function,
and extends the one variable formula

∫_0^∞ K_z(1, y) y^s dy/y = Γ(s) Γ(s + z) .
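The one-variable formula can be checked numerically at sample values; with s = 2, z = 1 the right side is Γ(2)Γ(3) = 2. A rough quadrature sketch (illustration only; the substitution y = e^u makes dy/y = du):

```python
import math

def simpson(f, a, b, steps):
    h = (b - a) / steps
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, steps))
    return s * h / 3

def K(z, a, b, steps=1200):
    # K_z(a,b) = int_0^inf e^{-(a y + b/y)} y^z dy/y, via y = e^u
    return simpson(lambda u: math.exp(-(a * math.exp(u) + b * math.exp(-u)) + z * u),
                   -25.0, 25.0, steps)

# Mellin transform: int_0^inf K_z(1, y) y^s dy/y = Gamma(s) Gamma(s + z),
# tested at s = 2, z = 1 where the right side is Gamma(2) Gamma(3) = 2.
s, z = 2.0, 1.0
mellin = simpson(lambda v: K(z, 1.0, math.exp(v)) * math.exp(s * v), -20.0, 8.0, 700)
print(mellin)
```

The check mirrors the proof of the general theorem: integrating first in y produces Γ(s) t^s inside the remaining gamma integral.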
Y ↦ [T_Z^{−1}]Y

so that

β^∧_{σ,Y}(R) = ∫_{R^{p×q}} β_{σ,Y}(X) e^{−i⟨X,R⟩} dν(X) .
Rp×q
The first thing we remark about this Fourier transform, also called the Bengt-
son function, is its eigenfunction property for the action of Rp×q on Posn .
Proposition 4.3. With the above notation, for Z ∈ R^{p×q}, we have

β^∧_{σ,[u(Z)]Y}(R) = e^{i⟨Z,R⟩} β^∧_{σ,Y}(R) .
Then

(8) β^∧_{σ,W,V}(R) = ∫_{R^{p×q}} σ*((A² + XB ^t(XB))^{−1}) e^{−i⟨X,R⟩} dν(X) .
Proof. This is actually the content of (8), together with the change of variables
X 7→ XB −1 , before we apply Theorem 4.1.
Proof. By definition

β^∧_{σ,W,I_q}(R) = ∫_{R^{p×q}} σ*((W + X ^t X)^{−1}) e^{−i⟨X,R⟩} dν(X) .
Then

β^∧_{σ,T,I_q}(R) = |W|^{q/2} ∫_{R^{p×q}} σ([^t T^ω](I + X ^t X)^ω) e^{−i⟨X, TR⟩} dν(X)

    = |W|^{q/2} σ([ω] ^t T) ∫_{R^{p×q}} σ([ω](I + X ^t X)) e^{−i⟨X, TR⟩} dν(X)

    = |W|^{q/2} σ*(W^{−1}) ∫_{R^{p×q}} σ*((I + X ^t X)^{−1}) e^{−i⟨X, TR⟩} dν(X) ,

which yields the theorem by definition of β^∧_{σ,I_p,I_q}(TR).
Having given the two reduction formulas above in separate cases, we can
combine them into one statement for the record.
Theorem 4.8. Let the situation be as in Theorems 4.6 and 4.7, so σ is a left
character on Pos_p, W ∈ Pos_p, W = ^t T T with T ∈ Tri+_p, and V ∈ Pos_q. Then

β^∧_{σ,W,V}(R) = |V|^{−p/2} |W|^{q/2} σ*(W^{−1}) β^∧_{σ,I_p,I_q}(T R V^{−1/2}) .
1 Invariant Polynomials
Let:
Jay Jorgenson: Posn (R) and Eisenstein Series, Lect. Notes Math. 1868, 75–94 (2005)
www.springerlink.com © Springer-Verlag Berlin Heidelberg 2005
76 4 Invariant Differential Operators on Posn (R)
Let Eij be the matrix with ij-component equal to 1, and all other com-
ponents equal to 0. Then Sym has a basis (actually orthogonal) consisting of
the elements
v_ii = E_ii   and   v_ij = (1/2)(E_ij + E_ji)   for i < j .
Then the algebra Pol(Sym) can be viewed as the algebra of polynomials in the coordinates (x_ij).
The coordinate functions (xij ) form the dual basis of (vij ), and hi = xii . Let
K be the usual compact group of real unitary matrices. We let:
Pol(Sym)K = subalgebra consisting of the elements invariant under
the conjugation action by K.
Proof. Every element of Sym can be diagonalized with respect to some ortho-
normal basis. This means that
Sym = [K]a ,
so that every element of Sym is of the form kv t k = kvk−1 for some v ∈ a and
k ∈ K. Thus the restriction map is injective. We have to prove that it is sur-
jective. For this we recall that a symmetric polynomial in variables h1 , . . . , hn
can be expressed uniquely as a polynomial in the elementary symmetric func-
tions s1 , . . . , sn . Furthermore, these symmetric functions are the coefficients
of the characteristic polynomial of elements v ∈ a:

det(tI + v) = t^n + s_1 t^{n−1} + . . . + s_n .
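The relation between det(tI + v) and the elementary symmetric functions can be verified concretely; the short sketch below (a sample diagonal v chosen here for illustration) multiplies out ∏(t + a_i) and compares coefficients:

```python
import math
from itertools import combinations

# det(tI + v) = t^n + s1 t^{n-1} + ... + sn for v = diag(a_1, ..., a_n) in a,
# where s_k is the k-th elementary symmetric function of the a_i.
a = [2.0, 3.0, 5.0]          # sample diagonal entries

# expand prod(t + a_i); coefficients listed from t^n down to the constant term
coeffs = [1.0]
for ai in a:
    new = [0.0] * (len(coeffs) + 1)
    for k, c in enumerate(coeffs):
        new[k] += c            # contribution of c * t
        new[k + 1] += ai * c   # contribution of c * a_i
    coeffs = new

def esym(vals, k):
    # k-th elementary symmetric function of vals
    return sum(math.prod(c) for c in combinations(vals, k))

print(coeffs)                            # [1.0, 10.0, 31.0, 30.0]
print([esym(a, k) for k in (1, 2, 3)])   # [10.0, 31.0, 30.0]
```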
Remark. The above result was proved by Chevalley for semisimple Lie al-
gebras. Cf. Wallach [Wal 88], Theorem 3.1.2 and Helgason [Hel 84], Chap. 2,
The sum is here taken over all permutations σ of {1, . . . , d}. Given a non-
degenerate bilinear map between V and another vector space V ∨ , the same
formula defines a duality on their algebras of polynomial functions. In prac-
tice, one is usually given some non-degenerate symmetric bilinear form on
V itself, identifying V with its dual space. Note that the sum defining the
scalar product on monomials is the same as the sum defining determinants,
except that the alternating signs are replaced by all plus signs, thus making
the sum symmetric rather than skew symmetric in the two sets of variables
(λ1 , . . . , λd ) and (v1 , . . . , vd ). If {v1 , . . . , vn } is a basis for V and {λ1 , . . . , λn }
is the dual basis, then the value of the above pairing on their monomials is
1 or 0. Thus the distinct monomials of given degree d form dual bases for
Pold (V ) and Pold (V ∨ ).
Let K be a group acting on V. Then K also acts functorially on the dual
space V^∨. For a functional λ ∈ V^∨, we have by definition (kλ)(v) = λ(k^{−1}v).
Proposition 1.2. The pairing described above between Pol(V ) and Pol(V ∨ )
is K-invariant, in the sense that for P ∈ Pol(V ) and Q ∈ Pol(V ∨ ), we have
0 → Pol(a∨ ) → Pol(V ∨ ) .
Pol(a∨ )W = S(a)W .
Pol(V )K → Pol(a)W
Pol(a∨ )W → Pol(V ∨ )K
is an isomorphism.
There are two natural charts on Pos_n: first as an open subset of Sym; and second
as the image of the exponential map, giving a differential isomorphism with Sym.
Each one of these charts
gives rise to a description of the invariant differential operators, serving differ-
ent purposes. The algebra of invariant differential operators is isomorphic to
a polynomial algebra, and each one of these charts gives natural algebraically
independent generators for this algebra. The first set of generators is due to
Maass-Selberg [Maa 55], [Maa 56], [Sel 56], see also [Maa 71], which we follow
more or less.
We let DO(M ) denote the algebra of C ∞ differential operators on a mani-
fold M . Let G be a Lie group acting on M . As mentioned in the introduction,
we let DO(M )G be the subalgebra of G-invariant differential operators, and
0
similarly DO(M )G for any subgroup G0 . When the subgroup is G itself, we
often omit the reference to G, and speak simply of invariant differential oper-
ators. In the present chapter, we take M = Posn , and G = GLn (R).
We let Y = (y_ij) be the symmetric matrix of variables on Pos_n with
y_ij = y_ji for all i, j = 1, . . . , n. We let dY = (dy_ij). We also let
(1) ∂/∂Y = the symmetric matrix of operators with diagonal entries ∂/∂y_ii
and off-diagonal (i, j)-entries (1/2) ∂/∂y_ij, i.e.

∂/∂Y = ( ∂_11 . . . (1/2)∂_ij ; . . . ; (1/2)∂_ij . . . ∂_nn ) .
The notation with partial derivatives ∂ij on the right is useful when we do
not want to specify the variables. Note that the matrix of partial derivatives
has a factor 1/2 in its components off the diagonal. We let tr be the trace.
For any function f on Pos_n, we have

(2) tr( dY ∂/∂Y ) f = df = Σ_{i≦j} (∂_ij f)(Y) dy_ij .
This follows at once from the multiplication of matrices dY and ∂/∂Y . When
summing over all indices i, j, the factors 1/2 add up to 1, as desired to get
the df . This justifies the notation ∂/∂Y .
Next we consider a change of variables under the action of the group
G = GL_n(R). Let g ∈ G. Let

(3) Z = g Y ^t g   so   dZ = g dY ^t g .

Hence

(4) ∂/∂Y = ^t g (∂/∂Z) g   and   ∂/∂Z = ^t g^{−1} (∂/∂Y) g^{−1} .
Example. For any positive integer r,

(5) (Z ∂/∂Z)^r = ( g Y (∂/∂Y) g^{−1} )^r = g (Y ∂/∂Y)^r g^{−1} .

Consequently

(6) tr( (Z ∂/∂Z)^r ) = tr( g (Y ∂/∂Y)^r g^{−1} ) = tr( (Y ∂/∂Y)^r ) ,
from which we see that tr((Y ∂/∂Y )r ) is invariant for all positive integers r.
Thus we have exhibited a sequence of invariant differential operators. For a
positive integer r, we define the Maass-Selberg operators
δ_r = tr( (Y ∂/∂Y)^r ) .
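As a concrete illustration of the 1/2 convention in ∂/∂Y: one has (∂/∂Y) det = |Y| Y^{−1} for symmetric Y, hence δ_1 det = tr(Y · |Y| Y^{−1}) = n |Y|. A finite-difference sketch for n = 2 (sample entries chosen here):

```python
# delta_1 = tr(Y d/dY) applied to f(Y) = det(Y) gives n * det(Y):
# with the factor 1/2 off the diagonal, (d/dY) det = |Y| Y^{-1}.
# Finite-difference check for n = 2.
def det2(y11, y12, y22):
    return y11 * y22 - y12 * y12

y11, y12, y22 = 3.0, 1.0, 2.0
h = 1e-6
d11 = (det2(y11 + h, y12, y22) - det2(y11 - h, y12, y22)) / (2 * h)
d22 = (det2(y11, y12, y22 + h) - det2(y11, y12, y22 - h)) / (2 * h)
# off-diagonal entry of d/dY carries the factor 1/2
d12 = 0.5 * (det2(y11, y12 + h, y22) - det2(y11, y12 - h, y22)) / (2 * h)

# delta_1 det = tr(Y M) with M = [[d11, d12], [d12, d22]]
delta1 = y11 * d11 + 2 * y12 * d12 + y22 * d22
print(delta1, 2 * det2(y11, y12, y22))
```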
Thus the invariance of D, namely that [Lg ]D = D for all g, means that for
all f ,
(8) (D(Lg f ))(Y ) = (Df )([g −1 ]Y ) = (Df )(g −1 Y t g −1 )
or equivalently
Now we put Z = g Y ^t g. Then the right side becomes (Df)(Y). The left side is

(D(L_g f))(Z) = P( Z, ∂/∂Z )( f(g^{−1} Z ^t g^{−1}) ) = P( g Y ^t g, ^t g^{−1} (∂/∂Y) g^{−1} )( f(Y) ) .

Therefore the invariance formula can be expressed as

P( g Y ^t g, ^t g^{−1} (∂/∂Y) g^{−1} )( f(Y) ) = P( Y, ∂/∂Y )( f(Y) ) ,

from which we can omit the expression f(Y) at the end. We obtain
from which we can omit the expression f (Y ) at the end. We obtain
Proposition 2.1. Given P(Y, X) ∈ Fu[X], the operator P(Y, ∂/∂Y) is invariant
if and only if for all g ∈ G,

P( g Y ^t g, ^t g^{−1} X g^{−1} ) = P(Y, X)

or also

D( g Y ^t g, ^t g^{−1} (∂/∂Y) g^{−1} ) = D( Y, ∂/∂Y ) .
We are now finished with the general remarks on invariant differential
operators, and we relate them with operators with constant coefficients at the
origin. The origin is the unit matrix, which we denote by I. We let
P(I, X) = Σ_{(m)} ϕ_{(m)}(I) ∏_{i≦j} X_ij^{m(i,j)} = P_{D,I}(X) .
DO(Posn )G → Pol(Sym)K .
Proof. We have already seen above that PD,I is K-invariant, and that the
association D 7→ PD,I is injective on DO(Posn )G . There remains only to
prove the surjectivity. Given P (X) ∈ Pol(Sym)K and Z ∈ Posn , we may
write Z = [g]I with some g ∈ G. Define Df by the formula
(Df)(Z) = P(∂/∂Y)( f([g]Y) )|_{Y=I} = P(∂/∂Y)( (f ∘ [g])(Y) )|_{Y=I} ,
so (Df )(Z) is well defined, and D is defined in such a way that its G-invariance
is then obvious. Local charts show that it is a differential operator, thereby
concluding the proof.
to the function etr(Y ) , and showing that we don’t get 0, by a degree argument.
Define the weight w of P(x_1, . . . , x_n) to be the degree of the polynomial
P(x_1, x_2², . . . , x_n^n). Then w is also the degree of the polynomial

P(tr(Y), . . . , tr(Y^n))

in the variables (Y) = (y_ij). To see this, let P_w be the sum of all the monomial
terms of weight w occurring in P(x_1, . . . , x_n). We suppose P ≠ 0 so P_w ≠ 0.
Then P_w(tr(Y), . . . , tr(Y^n)) is homogeneous of degree w in (y_ij), and ≠ 0
since tr(Y), . . . , tr(Y^n) are algebraically independent. All other monomials
occurring in P have lower weight, and hence lower degree in (y_ij), thus proving
the assertion about w being the degree in (Y).
Suppose that

P(δ_1, . . . , δ_n) = 0 , i.e. P( tr(Y ∂/∂Y), . . . , tr((Y ∂/∂Y)^n) ) = 0 .

By the remark preceding the lemma, without loss of generality we may replace
all ∂_ij by 0 whenever i ≠ j, to evaluate the effect of a polynomial in
tr(Y ∂/∂Y) on e^{tr(Y)}. More precisely, let ∆ be the diagonal matrix operator

∆ = diag(∂_11, . . . , ∂_nn) .
Then

Q( tr(Y ∂/∂Y), . . . , tr((Y ∂/∂Y)^n) ) e^{tr(Y)}
    = Q( tr(Y∆), . . . , tr((Y∆)^n) ) e^{tr(Y)} + R(Y) e^{tr(Y)} ,

where R(Y) has degree smaller than Q(tr(Y), . . . , tr(Y^n)). Thus we are reduced
to proving the formula for
duced to proving the formula for
Suppose, by induction, that the lemma is proved for xd11 −1 xd22 . . . xdnn . Apply-
ing Y ∆ to the inductive expression immediately yields the desired result. The
general case follows the same way.
Here we describe the invariant differential operators via the exponential chart
exp : Sym → Posn . To each K-invariant polynomial we associate a differential
operator as follows.
$$f_G(g) = f([g]I).$$
Observe that by definition of the Newton polynomial, one can also write this definition in the form
$$(D_P f_G)(g) = P\!\left(\mathrm{tr}\!\left(\frac{\partial}{\partial X}\right), \dots, \mathrm{tr}\!\left(\left(\frac{\partial}{\partial X}\right)^{\!n}\right)\right) f([g][\exp X]I)\Big|_{X=O}.$$
Lemma 3.1. The function DP fG depends only on cosets G/K, and is there-
fore a function on Posn , denoted DP f .
$$\mathrm{Pol}(\mathrm{Sym})^K \longrightarrow \mathrm{DO}(\mathrm{Pos}_n)^G.$$
Proof. First we prove the injectivity of the map. It suffices to prove injectivity at g = e (the unit element of G). Let $F(X) = f(\exp 2X)$ be the pullback of f to $\mathrm{Sym} = T_I\,\mathrm{Pos}_n$ (the tangent space at the origin). The function f locally near I, and so F locally near 0, can be chosen arbitrarily, for instance to be a monomial in the variables. If $D_P f = 0$, then for every monomial $F((x_{ij}))$ we have $P(\partial/\partial X)F((x_{ij})) = 0$, whence $P = 0$, thus proving injectivity.
$$D = \gamma\,\partial_{j_1}\cdots\partial_{j_m},$$
$${}^{t}D\psi = (-1)^m\,\frac{1}{\beta}\,\partial_{j_1}\cdots\partial_{j_m}(\gamma\beta\psi).$$
Using a partition of unity, this formula also applies under conditions of ab-
solute convergence.
Apply this to the volume form corresponding to the measure on Posn :
We get:
and $m = \sum m_{ij}$. Then
$${}^{t}D_Y = (-1)^m\,|Y|^{(n+1)/2} \prod_{i \le j}\left(\frac{\partial}{\partial y_{ij}}\right)^{\!m_{ij}} \circ\ \big(\alpha(Y)\,|Y|^{-(n+1)/2}\big).$$
$$= \lambda_\rho\, c(\rho)\rho.$$
Hence $\tilde{D}$ and ${}^{t}D$ have the same eigenvalue $\lambda_\rho$ on $\rho$, for all $\rho$. Hence they are equal by [JoL 01], Theorem 1.3 of Chap. 3. This concludes the proof.
$$A \approx \mathbf{R}^{+} \times \dots \times \mathbf{R}^{+} \quad\text{with}\quad a = \mathrm{diag}(a_1, \dots, a_n) \in A.$$
$$D_P = P(D_1, \dots, D_n).$$
Proof. This is a result in calculus, because of the special nature of the group A. Indeed, the ordinary exponential map gives a Lie group isomorphism of $\mathbf{R} \times \dots \times \mathbf{R}$ with A. Then invariant differential operators on $\mathfrak{a} = \mathbf{R} \times \dots \times \mathbf{R}$ are simply the differential operators with constant coefficients. One sees this at once from the invariance, namely if we let $f \in \mathrm{Fu}(\mathfrak{a})$, then for $v \in \mathfrak{a}$,
are translation invariant, and so constant. Note that the partial derivatives $\partial/\partial x_i$ on $\mathfrak{a}$ correspond to $a_i\,\partial/\partial a_i$ on A, with the change of variables
$$a_i = e^{x_i} \quad\text{or}\quad x_i = \log a_i.$$
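This change of variables is easy to confirm symbolically (our own check, with an arbitrary test function g of one variable):

```python
import sympy as sp

# Our one-variable check (not from the text) that a*d/da on A corresponds
# to d/dx on the additive coordinate under a = e^x.
x = sp.symbols('x')
a = sp.symbols('a', positive=True)
g = a**3 + sp.log(a)                             # arbitrary test function

lhs = sp.diff(g.subs(a, sp.exp(x)), x)           # d/dx of g(e^x)
rhs = (a * sp.diff(g, a)).subs(a, sp.exp(x))     # (a dg/da) at a = e^x
assert sp.simplify(lhs - rhs) == 0
```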
$$\mathfrak{a}^{\perp} = \mathrm{Sym}^{(0)}$$
be its orthogonal complement under the trace form, so $\mathfrak{a}^{\perp}$ is the space (not algebra) of matrices with zero diagonal components. For $a \in A$, the multiplicative translate $[a]\mathrm{Sym}^{(0)}$ is the normal space to the tangent space of $[a]I$, but in the present situation,
$$[a]\mathrm{Sym}^{(0)} = \mathrm{Sym}^{(0)}.$$
We view $T_I\,\mathrm{Pos}_n = \mathrm{Sym}$ as the tangent space of $\mathrm{Pos}_n$ at I, and $T_{[a]I}\,\mathrm{Pos}_n$ is its image under [a]. So at each $[a]I \in A$ we have the splitting
which, by our choice of chart can be viewed as simply Sym again, but one has
to be careful about the action in making identifications.
Let $N_{\mathrm{Pos}_n}A$ be the normal bundle of A in $\mathrm{Pos}_n$. Then the fibers of $N_{\mathrm{Pos}_n}A$ are simply the normal spaces $[a]\mathfrak{a}^{\perp}$, with $a \in A$, so may be identified with $\mathfrak{a}^{\perp}$.
The exponential map (varying at each point)
is a differential isomorphism.
Proof. This is a special case of a result given by Loos [Loo 69], pp. 161–162, in
the context of semisimple Lie groups and symmetric spaces. However, it is also
a special case of a much more general theorem about Cartan-Hadamard spaces
in differential geometry, see [La 99], Chap. X, Theorem 2.5 for an exposition
and further historical comments. Helgason [Hel 78], Chap. 1, Theorem 14.6
and Chap. 6, Theorem 1.4 gives a topological version without differentiability,
extending theorems of Mostow [Mos 53], also stated without differentiability.
The product decomposition of Theorem 5.4 will be called the A-normal
decomposition of Posn .
The geometry is illustrated in the following figure, with $w \in \mathrm{Sym}^{(0)}$.
[Figure: the exponential curves $t \mapsto \exp(tw)$ and $t \mapsto [a]\exp(tw)$, emanating from I and from [a]I respectively, normal to the line A.]
We have drawn the exponential curves emanating from the unit matrix, and from some translation [a]I, in the geodesic normal direction, a situation we now discuss more extensively. Given a function f on A, we can define its normal extension to $\mathrm{Pos}_n$, namely we let $f_{\mathrm{Pos}_n}$ be the function on $\mathrm{Pos}_n$ which is constant on every translation $[a]\exp(\mathfrak{a}^{\perp})$, for all $a \in A$. From Theorem 5.4 we have two differential isomorphisms
$$\mathfrak{a} + \mathfrak{a}^{\perp} = \mathfrak{a} + \mathrm{Sym}^{(0)} \longrightarrow A \times \mathfrak{a}^{\perp} \xrightarrow{\ e_{NA}\ } \mathrm{Pos}_n,$$
where the left arrow is simply the exponential map on $\mathfrak{a}$ and the identity on $\mathfrak{a}^{\perp}$. Thus the function f on A not only can be extended normally to $\mathrm{Pos}_n$ via $e_{NA}$, but can be pulled back to a function F on $\mathfrak{a}$, and then be extended to $F_{\mathrm{Sym}}$, constant on each coset $h + \mathfrak{a}^{\perp}$, with $h \in \mathfrak{a}$. Thus for $X \in \mathfrak{a}^{\perp} = \mathrm{Sym}^{(0)}$ and $a = \exp h$, we have by definition
$$r_A^{\perp}(D)f = D_A^{\perp} f = (Df_{\mathrm{Pos}_n})_A .$$
Lemma 5.5. The map $D \mapsto D_A^{\perp}$ is a linear isomorphism
$$r_A^{\perp} : \mathrm{DO}(\mathrm{Pos}_n)^{K} \xrightarrow{\ \approx\ } \mathrm{IDO}(A)^{W}.$$
For $D = D_P$ we have
$$D_{P_a} = r_A^{\perp}(D_P).$$
Proof. This just puts Proposition 5.1 and the previous lemma together.
5
Poisson Duality and Zeta Functions
This chapter recalls some standard facts about the Poisson summation formula
on a euclidean space. It will be applied when the euclidean space is the space of
matrices, with the trace scalar product, so we make all the standard formalism
explicit in this case. We give two classical applications to the Epstein zeta
function, which serve as prototypes. In the extension of the theory to Posn ,
we have to generalize the definition of the Bessel functions to this higher
dimensional case, and this will be done in the next chapter. Then we can put
all these results together in the study of Eisenstein series.
1 Poisson Duality
Duality on Vector Spaces Over R
$$d\mu(by) = b^{N}\, d\mu(y).$$
Jay Jorgenson: Posn (R) and Eisenstein Series, Lect. Notes Math. 1868, 95–106 (2005)
www.springerlink.com © Springer-Verlag Berlin Heidelberg 2005
More generally, let $A : V \to V$ be an invertible linear map, and denote by $\|A\|$ the absolute value of the determinant of A. Then for the composite $f \circ A$ we have
FT 3. $(f \circ A)^{\vee} = \|A\|^{-1}\, f^{\vee} \circ {}^{t}A^{-1}.$
In particular, if A is symmetric (which will be the case in practice) we find
FT 4. $(f \circ A)^{\vee} = \|A\|^{-1}\, f^{\vee} \circ A^{-1}.$
For example, if A is represented by a diagonal matrix $\mathrm{diag}(a_1, \dots, a_N)$ with respect to a basis, and $a_j > 0$ for all j, then
$$\|A\| = a_1 \cdots a_N.$$
Next, let $z \in V$, and define the additive translation $f_z$ by
$$f_z(x) = f(x - z).$$
Then
FT 5. $(f_z)^{\vee}(x) = e^{-2\pi i\langle x, z\rangle}\, f^{\vee}(x).$
This comes at once from the invariance of µ under additive translations.
FT 6. If the measure of the unit cube for an orthonormal basis is 1, and we define $f^{-}(x) = f(-x)$, then
$$f^{\vee\vee} = f^{-}.$$
One can either repeat the usual proof with the present normalization, or de-
duce it from the same formula for the otherwise normalized Fourier transform.
That is, if we define
$$f^{\wedge}(x) = \int_{\mathbf{R}^N} f(y)\, e^{-i\langle x, y\rangle}\, d\nu(y) \quad\text{with}\quad d\nu(y) = (\sqrt{2\pi}\,)^{-N}\, dy,$$
The function
$$f(x) = e^{-\pi\langle x, x\rangle}$$
is self-dual, i.e. $f^{\vee} = f$. For the other normalization, the function
$$g(x) = e^{-\langle x, x\rangle/2}$$
is self-dual, that is $g^{\wedge} = g$.
These relations are elementary from calculus.
$$\sum_{\alpha \in L} f(\alpha) = \mu(V/L)^{-1} \sum_{\alpha' \in L'} f^{\vee}(\alpha').$$
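As a quick numerical illustration (ours, not in the text), take $V = \mathbf{R}$, $L = \mathbf{Z}$, and $f(x) = e^{-\pi t x^2}$, whose Fourier transform under this normalization is $t^{-1/2} e^{-\pi y^2/t}$; the formula then reduces to the classical theta inversion $\theta(t) = t^{-1/2}\theta(1/t)$:

```python
import math

# Our numerical check of the Poisson formula for L = Z in R with
# f(x) = exp(-pi t x^2), whose transform is t^{-1/2} exp(-pi y^2 / t):
# theta(t) = sum_n exp(-pi t n^2) must equal t^{-1/2} theta(1/t).
def theta(t, N=60):
    return sum(math.exp(-math.pi * t * n * n) for n in range(-N, N + 1))

t = 0.7
assert abs(theta(t) - theta(1.0 / t) / math.sqrt(t)) < 1e-12
```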
Proof. The Poisson formula can be seen from our point of view as a special
case of the heat kernel relation on a torus. However, we give the usual proof
via Fourier inversion. Normalize µ so that µ(V /L) = 1. Let
$$g(x) = \sum_{\alpha \in L} f(x + \alpha).$$
Then
$$\sum_{\alpha \in L} f(\alpha) = g(0) = \sum_{\alpha' \in L'} f^{\vee}(\alpha'),$$
Then FT 5 yields
$$(4)\qquad \theta(P, Z, Q) = |P|^{-q/2}\,|Q|^{-p/2} \sum_{A \in L} e^{-2\pi i\langle A, Z\rangle}\, e^{-\pi\,\mathrm{tr}(P^{-1}[A]Q^{-1})}.$$
The sum on the right may be called a twist of the theta series by a character, namely the character
$$X \mapsto e^{-2\pi i\langle X, Z\rangle}.$$
It is the only change in the Poisson formula of Theorem 2.2, but has to be incorporated in the notation. Thus one may define the twisted theta function
$$\theta_Z(P, Q) = \sum_{A \in L} e^{-2\pi i\langle A, Z\rangle}\, e^{-\pi\,\mathrm{tr}(P[A]Q)}.$$
Then we get:
Theorem 2.3. From the definitions and (4),
Note that the sum over $A \in L$ involves each non-zero element of L twice, namely A and −A. Thus one could write the contribution of the character in the series for $\theta_Z(P, Q)$ without the minus sign, since $P^{-1}[A]Q^{-1}$ is even as a function of A.
Lemma 3.1. The Epstein series converges absolutely for all s ∈ C with
Re(s) > n/2.
because Y is positive definite and $Y[a] \gg |Y^{1/2}a|^2 \gg |a|^2$ for $|a| \to \infty$, so
$$\Lambda(Y, s) = \pi^{-s}\,\Gamma(s)\, E(Y, s).$$
This simply comes by integrating the theta series term by term, taking the Mellin transform of each term. Having subtracted the term with a = 0 guarantees absolute convergence and justifies the interchange of the series and the Mellin integral.
We shall now give the Riemann type expression for the analytic continu-
ation of the Epstein zeta series.
We define the incomplete gamma integral for c > 0 by
$$\Gamma_1^{\infty}(s, c) = \int_1^{\infty} e^{-ct}\, t^{s}\, \frac{dt}{t}.$$
We note that $\Gamma_1^{\infty}(s, c)$ is entire in s.
Theorem 3.2. The function $s \mapsto \Lambda(Y, s)$ has the following meromorphic extension and functional equation:
$$\Lambda(Y, s) - \left(\frac{|Y|^{-1/2}}{s - n/2} - \frac{1}{s}\right) = \sum_{a \ne 0}\left(\Gamma_1^{\infty}(s, \pi Y[a]) + |Y|^{-1/2}\,\Gamma_1^{\infty}\!\left(\frac{n}{2} - s,\ \pi Y^{-1}[a]\right)\right)$$
$$= \int_1^{\infty} (\theta(Y, t) - 1)\, t^{s}\, \frac{dt}{t} + |Y|^{-1/2} \int_1^{\infty} (\theta(Y^{-1}, t) - 1)\, t^{n/2 - s}\, \frac{dt}{t}.$$
The series and truncated integrals converge uniformly on every compact set to entire functions, and the other two terms exhibit the only poles of $\Lambda(Y, s)$, with the residues. The function satisfies the functional equation
$$\Lambda(Y, s) = |Y|^{-1/2}\, \Lambda\!\left(Y^{-1},\ \frac{n}{2} - s\right).$$
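For n = 1 the theorem specializes to the completed Riemann zeta function, since $\Lambda(y, s) = 2\pi^{-s}\Gamma(s)\zeta(2s)\,y^{-s}$; the functional equation can then be checked numerically (our own specialization, using mpmath):

```python
from mpmath import mp, pi, gamma, zeta, mpf

# Our numerical check of the functional equation in the case n = 1:
# Lambda(y, s) = 2 pi^{-s} Gamma(s) zeta(2s) y^{-s}, and the theorem
# asserts Lambda(y, s) = y^{-1/2} Lambda(1/y, 1/2 - s).
mp.dps = 30

def Lam(y, s):
    return 2 * pi**(-s) * gamma(s) * zeta(2 * s) * y**(-s)

y, s = mpf('1.3'), mpf('0.37')
lhs = Lam(y, s)
rhs = y**(-mpf('0.5')) * Lam(1 / y, mpf('0.5') - s)
assert abs(lhs - rhs) < mpf('1e-25')
```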
Proof. In (2), we write
$$\int_0^{\infty} = \int_0^1 + \int_1^{\infty}.$$
The integral over $[1, \infty)$ yields the sum $\sum \Gamma_1^{\infty}(s, \pi Y[a])$ taken over $a \ne 0$. For the other integral, we get
$$\int_0^1 (\theta(Y, t) - 1)\, t^{s}\, \frac{dt}{t} = -\int_0^1 t^{s-1}\, dt + \int_0^1 \theta(Y, t)\, t^{s}\, \frac{dt}{t}$$
$$= -\frac{1}{s} + |Y|^{-1/2} \int_0^1 \theta(Y^{-1}, t^{-1})\, t^{s - n/2}\, \frac{dt}{t}.$$
We subtract 1 and add 1 to $\theta(Y^{-1}, t^{-1})$. Integrating the term with 1 yields the second polar term $|Y|^{-1/2}/(s - n/2)$. In the remaining integral with $\theta(Y^{-1}, t^{-1}) - 1$, we change variables, putting $u = t^{-1}$, $du/u = dt/t$. Then the interval of integration changes from [0, 1] to $[1, \infty)$, and the remaining terms in the desired formula come out. This concludes the proof of the formula in the theorem.
By Theorem 2.2, we know the functional equation for $\theta(Y, t)$. The functional equation in Theorem 3.2 is then immediate, because except for the factor $|Y|^{-1/2}$, under the change $s \mapsto n/2 - s$, the first two terms are interchanged, and the two terms in the sum are interchanged. The factor $|Y|^{-1/2}$ is then verified to behave exactly as stated in the functional equation for $\Lambda(Y, s)$. This concludes the proof.
The integral expressions allow us to estimate $\Lambda(Y, s)$ in vertical strips, as is usually done in such situations, away from the poles at s = 0, s = n/2.
Corollary 3.3. Let $\sigma_0 > 0$, $\sigma_1 > n/2$, and let S be the part of the strip $-\sigma_0 \le \mathrm{Re}(s) \le \sigma_1$ with $|s| \ge 1$ and $|s - n/2| \ge 1$. Then for $s \in S$ we have
$$|\Lambda(Y, s)| \le |Y|^{-1/2} + 1 + \Lambda(Y, \sigma_1) + |Y|^{-1/2}\, \Lambda\!\left(Y^{-1},\ \frac{n}{2} + \sigma_0\right).$$
Proof. We merely estimate the three terms in Theorem 3.2. The polar terms give the stated estimate since we are estimating outside the discs of radius 1 around the poles. We make the first integral larger by replacing s by $\sigma_1$, and then by replacing the limits of integration, making them from 0 to $\infty$, which gives $\Lambda(Y, \sigma_1)$ as an upper bound for the first integral. As to the second integral, we perform the similar change, but use the value $s = -\sigma_0$, to end up with the stated estimate. This concludes the proof.
$$\Lambda(Y, s) = \Lambda(\tilde{Y}, s).$$
so that
$$[b, c]Y = [c, -b]\tilde{Y}.$$
The map $(b, c) \mapsto (c, -b)$ permutes the non-zero elements of $\mathbf{Z}^2$, so the proposition follows.
For later use, we insert one more consequence, a special case of Corol-
lary 3.3.
Corollary 3.8. In the domain $-2 < \mathrm{Re}(s) < 3$, with s outside the discs of radius 1 around 0 and 1, we have the estimate:
$$|\Lambda(Y, s)| \le 1 + |Y|^{-1/2} + \Lambda(Y, 3)\,|Y|^{5/2} + \Lambda(Y, 3).$$
The Bessel-Fourier series for the Epstein-Eisenstein function can still be done
with the ordinary Bessel function, so we carry it out here separately, as an in-
troduction to the more general result in the matrix case, when a generalization
of the Bessel function will have to be taken into account.
We write n = p + q with integers $p, q \ge 1$. We have a partial Iwasawa decomposition of $Y \in \mathrm{Pos}_n$, given by $W \in \mathrm{Pos}_p$, $V \in \mathrm{Pos}_q$, $X \in \mathbf{R}^{p \times q}$ with
$$Y = [u(X)]\begin{pmatrix} W & 0 \\ 0 & V \end{pmatrix} \quad\text{where}\quad u(X) = \begin{pmatrix} I_p & X \\ 0 & I_q \end{pmatrix}.$$
Theorem 5.1. Let $Y \in \mathrm{Pos}_n$ have the above partial Iwasawa decomposition. Then

Then
$$Y[a] = W[b] + V[{}^{t}Xb + c].$$
We decompose the sum for θ(Y, t) accordingly:
$$\sum_{a \ne 0} = \sum_{\substack{b = 0 \\ c \ne 0}} + \sum_{b \ne 0}\ \sum_{c \in \mathbf{Z}^q}.$$
The sum with b = 0 gives the term $\Lambda_q(V, s)$. The sum over all c for each $b \ne 0$ is then a theta series to which the Poisson summation formula applies as in Theorem 2.3, to yield
$$\sum_{c \in \mathbf{Z}^q} e^{-\pi W[b]t}\, e^{-\pi V[{}^{t}Xb + c]t} = e^{-\pi W[b]t}\, |V|^{-1/2}\, t^{-q/2} \sum_{c \in \mathbf{Z}^q} e^{-2\pi i\,{}^{t}bXc}\, e^{-\pi V^{-1}[c]t^{-1}}.$$
The term with c = 0 summed over all $b \ne 0$ yields $|V|^{-1/2}\,\Lambda(W, s - q/2)$. The remaining sum is a double sum
$$|V|^{-1/2} \sum_{\substack{b \ne 0 \\ c \ne 0}}\ \int_0^{\infty} e^{2\pi i\,{}^{t}bXc}\, e^{-\pi(W[b]t + V^{-1}[c]t^{-1})}\, t^{s - q/2}\, \frac{dt}{t}.$$
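The inner t-integral is a classical K-Bessel transform. The standard identity $\int_0^{\infty} e^{-\pi(At + B/t)}\, t^{s}\, dt/t = 2(B/A)^{s/2} K_s(2\pi\sqrt{AB})$ — quoted here for orientation; it is not displayed in the text at this point — can be verified with mpmath:

```python
from mpmath import mp, quad, besselk, exp, sqrt, pi, inf, mpf

# Our numerical check of the classical K-Bessel integral identity
# int_0^oo exp(-pi(A t + B/t)) t^s dt/t = 2 (B/A)^{s/2} K_s(2 pi sqrt(AB)).
mp.dps = 20
A, B, s = mpf('1.3'), mpf('0.8'), mpf('0.25')
lhs = quad(lambda t: exp(-pi * (A * t + B / t)) * t**(s - 1), [0, inf])
rhs = 2 * (B / A)**(s / 2) * besselk(s, 2 * pi * sqrt(A * B))
assert abs(lhs - rhs) < mpf('1e-12')
```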
1 Adjointness Relations
Let:
U = Uni⁺ = group of upper unipotent n × n matrices,
$\Gamma = \mathrm{GL}_n(\mathbf{Z})$,
$\Gamma_{\infty} = \Gamma_U = \Gamma \cap U$.
We let $\rho$ be a character on $\mathrm{Pos}_n$. The most classical Selberg primitive Eisenstein series is the series
$$E^{\mathrm{pr}}(Y, \rho) = \sum_{\gamma \in \Gamma_U\backslash\Gamma} \rho([\gamma]Y).$$
it follows that the value ρ([γ]Y ) depends only on the coset ΓU γ in ΓU \Γ,
whence the sum was taken over such cosets to define the Eisenstein series. If
ρ = ρ−s , then the sum is the usual
$$E^{\mathrm{pr}}([\gamma]Y, \rho) = E^{\mathrm{pr}}(Y, \rho).$$
defined by
$$\mathrm{Tr}_{\Gamma_U\backslash\Gamma}\,\varphi(Y) = \sum_{\gamma \in \Gamma_U\backslash\Gamma} \varphi([\gamma]Y),$$
defined by
$$\mathrm{Tr}_{\Gamma_U\backslash U}\, f(Y) = \int_{\Gamma_U\backslash U} f([u]Y)\, du.$$
We shall give two essentially formal properties of the ΓU \Γ-trace. For the
first one, see already SL2 (R), Chap. XIII, Sect. 1.
where the scalar product is given by the usual hermitian integral, or also by
the bilinear integral without the complex conjugation.
Proof. For simplicity, we carry out the computation without the complex
conjugation. We write formally dy instead of dµ(Y ). The factor 2 appears
because Γ does not act faithfully on Posn , but with kernel ±I. So from the
point of view of the measure, terms get counted twice in the third step of the
following proof. We have:
$$\langle \mathrm{Tr}_{\Gamma_U\backslash\Gamma}\,\varphi,\ f\rangle_{\Gamma\backslash P} = \int_{\Gamma\backslash P} \mathrm{Tr}_{\Gamma_U\backslash\Gamma}\,\varphi(y)\, f(y)\, dy$$
$$= \int_{\Gamma\backslash P}\ \sum_{\Gamma_U\backslash\Gamma} \varphi([\gamma]y)\, f(y)\, dy$$
$$= 2\int_{\Gamma_U\backslash P} \varphi(y)\, f(y)\, dy$$
$$= 2\int_{U\backslash P}\int_{\Gamma_U\backslash U} \varphi([u]y)\, f([u]y)\, du\, dy$$
$$= 2\int_{U\backslash P} \varphi(y) \int_{\Gamma_U\backslash U} f([u]y)\, du\, dy$$
$$= 2\,\langle \varphi,\ \mathrm{Tr}_{\Gamma_U\backslash U}\, f\rangle_{U\backslash P}.$$
This concludes the proof.
Next, we give a second adjointness relation, with a twist from left to right. Indeed, note how the $\Gamma_U\backslash\Gamma$-trace is a sum over $\gamma \in \Gamma_U\backslash\Gamma$, with $\Gamma_U$ on the left, whereas the sum on the right side of the equation in the next proposition is over $\gamma \in \Gamma/\Gamma_U$, with $\Gamma_U$ on the right. Furthermore, the sum as written cannot be taken inside the integral sign, because the integral over $\mathrm{Pos}_n$ is needed to make the term involving $\gamma$ independent of the coset $\gamma\Gamma_U$. Cf. step (4) in the proof.
Proposition 1.2. Suppose the function ϕ on Posn is U -invariant. Let f be
a function on Posn . Under conditions of absolute convergence, we have the
adjointness relation
$$\int_{\mathrm{Pos}_n} \mathrm{Tr}_{\Gamma_U\backslash\Gamma}(\varphi)(Y)\, f(Y)\, d\mu(Y) = \sum_{\gamma \in \Gamma/\Gamma_U}\ \int_{\mathrm{Pos}_n} \varphi(Y)\, f([\gamma]Y)\, d\mu(Y).$$
$$(6)\qquad = \sum_{\gamma \in \Gamma/\Gamma_U}\ \int_{\mathrm{Pos}_n} \varphi(Y)\, f([\gamma]Y)\, d\mu(Y),$$
the present section, we deal with a subcase where the proof for the Epstein
zeta function has a direct analogue, the main difference lying in the use of
the general Bessel function of Chap. 3, rather than the classical one-variable
Bessel function.
We fix positive integers p, q such that p + q = n, and $p \ge q$, so that $n \ge 2q$. We then decompose an element $A \in \mathbf{Z}^{n \times q}$ in two components
$$A = \begin{pmatrix} B \\ C \end{pmatrix} \quad\text{with}\quad B \in \mathbf{Z}^{p \times q} \text{ and } C \in \mathbf{Z}^{q \times q}.$$
The sum is over all integral A ∈ Zn×q such that the B-component as above
has rank q. Thus the sum can be taken separately over all such B, and for each
B over all C ∈ Zq×q without any restriction on C. Thus the singular part of
the theta series would correspond to the part with b = 0 in the Epstein zeta
function, but the higher dimension complicates the simpler condition b = 0.
We combine the above (p, q) splitting with corresponding partial Iwasawa coordinates, that is
$$Y = \mathrm{Iw}^{+}(W, X, V) = [u(X)]\begin{pmatrix} W & 0 \\ 0 & V \end{pmatrix}$$
$\Gamma = \Gamma_q$,
$\mathcal{M} = \mathcal{M}(q) = \mathbf{Z}^{q \times q}$. Elements of $\mathcal{M}$ are denoted by C.
$\mathcal{M}^{*} = \mathcal{M}^{*}(p, q)$ = the set of elements $B \in \mathbf{Z}^{p \times q}$ of rank q.
$$\sum_{C \in \mathcal{M}} \cdots \ = |V|^{-q/2}\,|Z|^{-q/2} \sum_{C \in \mathcal{M}} e^{-2\pi i\,\mathrm{tr}({}^{t}BXC)}\, e^{-\pi\,\mathrm{tr}(V^{-1}[C]Z^{-1})}.$$
Define
$$h_1(C, Z) = \sum_{B \in \mathcal{M}^{*}} e^{-\pi\,\mathrm{tr}(W[B]Z)}\, e^{-2\pi i\,\mathrm{tr}({}^{t}BXC)},$$
$$h_2(C, Z) = e^{-\pi\,\mathrm{tr}(V^{-1}[C]Z^{-1})}\, |Z|^{-q/2}.$$
$$B \mapsto B\,{}^{t}\gamma^{-1}$$
Note that each term in the sum satisfies the above equations. We then take the convolution of $\theta^{*}_{n,q}$ with a test function $\Phi$ on $\Gamma_q\backslash\mathrm{Pos}_q$, namely
$$(6)\qquad (\theta^{*}_{n,q} * \Phi)(Y) = \int_{\Gamma_q\backslash\mathrm{Pos}_q} \theta^{*}_{n,q}(Y, Z)\, \Phi(Z)\, d\mu_q(Z).$$
$$\Gamma\backslash P = \Gamma_q\backslash\mathrm{Pos}_q.$$
Note that $\Gamma$ acts on $\mathcal{M}$ on the right, and thus gives rise to right cosets of $\mathcal{M}$. We shall deal with sums
$$\sum_{C \in \mathcal{M}/\Gamma},$$
Then
$$\int_{\Gamma\backslash P} \sum_{C \in \mathcal{M}} h(C, Z)\, d\mu(Z) = 2 \sum_{C \in \mathcal{M}/\Gamma}\ \int_{P} h(C, Z)\, d\mu(Z).$$
Proof.
$$\int_{\Gamma\backslash P} \sum_{C \in \mathcal{M}} h(C, Z)\, d\mu(Z) = \int_{\Gamma\backslash P} \sum_{C \in \mathcal{M}/\Gamma}\ \sum_{\gamma \in \Gamma} h(C\gamma, Z)\, d\mu(Z)$$
$$= \int_{\Gamma\backslash P} \sum_{C \in \mathcal{M}/\Gamma}\ \sum_{\gamma \in \Gamma} h(C, [{}^{t}\gamma^{-1}]Z)\, d\mu(Z)$$
$$= 2 \sum_{C \in \mathcal{M}/\Gamma}\ \int_{P} h(C, Z)\, d\mu(Z)$$
Note that if the function h(C, Z) satisfies (4), then so does the function $\Phi(Z)h(C, Z)$, directly from the invariance of $\Phi$. We shall now assume that $\Phi$ is a $\Gamma_U\backslash\Gamma$-trace, to get a Fourier expansion for $\theta^{*}_{n,q} * \Phi$.
with coefficients
$$a_{B,C} = 2\,|V|^{-q/2} \sum_{\gamma \in \Gamma/\Gamma_U} K\varphi_{d-q/2}\big(\pi W[B][\gamma],\ \pi[\gamma^{-1}]V^{-1}[C]\big).$$
Proof. First remark that the expression on the right of the formula to be proved makes sense. Indeed, if we replace C by $C\gamma$ with some element $\gamma \in \Gamma$, then in $\mathrm{tr}({}^{t}BXC)$ we can move $\gamma$ next to ${}^{t}B$, and the sum over $B \in \mathcal{M}^{*}$ then allows us to cancel $\gamma$. Hence the sum over $B \in \mathcal{M}^{*}$ depends only on the coset of C in $\mathcal{M}/\Gamma$. Next we recall that $\Phi_{d-q/2}$ is also a $\Gamma_U\backslash\Gamma$-trace, namely trivially
$$\Phi_{d-q/2} = \mathrm{Tr}_{\Gamma_U\backslash\Gamma}(\varphi_{d-q/2}).$$
Now:
$$|V|^{q/2}\,(\theta^{*}_{n,q} * \Phi)(Y) = \int_{\Gamma\backslash P} \Phi(Z)\, |V|^{q/2}\, \theta^{*}_{n,q}(Y, Z)\, d\mu(Z)$$
$$= \int_{\Gamma\backslash P} \Phi(Z) \sum_{C \in \mathcal{M}} h_1 h_2(C, Z)\, d\mu(Z) \qquad\text{by equation (5)}$$
$$= 2 \sum_{C \in \mathcal{M}/\Gamma}\ \int_{P} \Phi(Z)\, h_1 h_2(C, Z)\, d\mu(Z) \qquad\text{(by Lemma 2.1)}$$
$$= 2 \sum_{C \in \mathcal{M}/\Gamma}\ \sum_{B \in \mathcal{M}^{*}} e^{-2\pi i\,\mathrm{tr}({}^{t}BXC)} \int_{P} \Phi(Z)\,|Z|^{-q/2}\, e^{-\pi\,\mathrm{tr}(W[B]Z + V^{-1}[C]Z^{-1})}\, d\mu(Z)$$
$$= |V|^{q/2} \sum_{C \in \mathcal{M}/\Gamma}\ \sum_{B \in \mathcal{M}^{*}} a_{B,C}\, e^{-2\pi i\,\mathrm{tr}({}^{t}BXC)} \qquad\text{(by Proposition 1.2)}.$$
$$\Phi(Z) = E^{\mathrm{pr}}(Z, \rho)$$
is an Eisenstein series, in which case the Fourier expansion comes from Terras [Ter 85], Theorem 1.
Let f be a function on Posn , invariant under the group Γ = GLn (Z). What
we shall actually need is invariance of f under the two subgroups:
The following theorem is valid under this weaker invariance, but we may as
well assume the simpler hypothesis which implies both of these conditions.
Under the invariance, f has a Fourier series expansion
$$f(Y) = \sum_{N \in \mathbf{Z}^{p \times q}} a_N(W, V)\, e^{2\pi i\langle N, X\rangle},$$
Now:
$$a_N(V, [\gamma]W) = \int f\!\left(\left[\begin{pmatrix} I_p & X \\ 0 & I_q \end{pmatrix}\right]\begin{pmatrix} [\gamma]W & 0 \\ 0 & V \end{pmatrix}\right) e^{-2\pi i\langle N, X\rangle}\, dX$$
$$= \int f\!\left(\left[\begin{pmatrix} I_p & X \\ 0 & I_q \end{pmatrix}\begin{pmatrix} \gamma & 0 \\ 0 & I_q \end{pmatrix}\right]\begin{pmatrix} W & 0 \\ 0 & V \end{pmatrix}\right) e^{-2\pi i\langle N, X\rangle}\, dX$$
$$= \int f\!\left(\left[\begin{pmatrix} \gamma & X \\ 0 & I_q \end{pmatrix}\right]\begin{pmatrix} W & 0 \\ 0 & V \end{pmatrix}\right) e^{-2\pi i\langle N, X\rangle}\, dX.$$
Now make the change of variables $X \mapsto \gamma X$, so $d(\gamma X) = dX$. Using (2) and the invariance of f under the action of
$$\begin{pmatrix} \gamma & 0 \\ 0 & I_q \end{pmatrix}$$
shows that the last expression obtained is equal to $a_{{}^{t}\gamma N}(W, V)$, because
$$f(Y) = f(v, Y^{(n-1)}, X)$$
Then
$$a_m(v, [\gamma]Y^{(n-1)}) = \int f\!\left(\left[\begin{pmatrix} I_{n-1} & x \\ 0 & 1 \end{pmatrix}\right]\begin{pmatrix} v^{1/(n-1)}\,[\gamma]Y^{(n-1)} & 0 \\ 0 & v^{-1} \end{pmatrix}\right) e^{-2\pi i\langle m, x\rangle}\, dx$$
$$= \int f\!\left(\left[\begin{pmatrix} I_{n-1} & x \\ 0 & 1 \end{pmatrix}\begin{pmatrix} \gamma & 0 \\ 0 & 1 \end{pmatrix}\right]\begin{pmatrix} v^{1/(n-1)}\,Y^{(n-1)} & 0 \\ 0 & v^{-1} \end{pmatrix}\right) e^{-2\pi i\langle m, x\rangle}\, dx$$
$$= \int f\!\left(\left[\begin{pmatrix} \gamma & x \\ 0 & 1 \end{pmatrix}\right]\begin{pmatrix} v^{1/(n-1)}\,Y^{(n-1)} & 0 \\ 0 & v^{-1} \end{pmatrix}\right) e^{-2\pi i\langle m, x\rangle}\, dx.$$
assuming $\lambda_\Gamma(f) \ne 0$.
Then d(Z)−1 and d(Z) cancel, and the formula of the theorem drops out.
with
$$b_n = (\sqrt{\pi}\,)^{\,n(n-1)/2} \quad\text{and}\quad \alpha_i = (n - i)/2.$$
Proof. Use the value $s^{\#}$ found in Chap. 3, Proposition 1.2, and plug it into Corollary 5.3. We note that in this particular case, there is a symmetry and a cancellation which gets rid of the reversal of the variables $s_1, \dots, s_n$.
$$s^{*} = (-s_n, \dots, -s_1)$$
We have $\tilde{Q} = [S]Q$ and $h_{s^{*}} = [S][\omega]h_s$. Directly from its definition, $[\omega]Q = Q$. The theorem is then immediate (canceling $[S][\omega]$, as it were).
7
Geometric and Analytic Estimates
The basic differential geometry of the space $\mathrm{Pos}_n$ is given in Chap. XI of [La 99] and will not be reproduced here. We merely recall the basic definition. We view $\mathrm{Sym}_n$ (the vector space of real symmetric n × n matrices) as the tangent space at every point Y of $\mathrm{Pos}_n$. The Riemannian metric is defined at the point Y by the formula
$$\|Y'(t)\|_Y^2 = \mathrm{tr}\big((Y^{-1}Y'(t))^2\big),$$
where $Y'(t)$ is the naive derivative of the map of a real interval into $\mathrm{Pos}_n$, viewed as an open subset of $\mathrm{Sym}_n$.
The two basic properties of this Riemannian metric are:
Theorem 1.1. Let Symn have the positive definite scalar product given by
hM, M1 i = tr(M M1 ). Then the exponential map exp : Symn → Posn is metric
semi-increasing, and is metric preserving on lines from the origin.
Theorem 1.2. The Riemannian distance between any two points $Y, Z \in \mathrm{Pos}_n$ is given by the formula
$$\mathrm{dist}(Y, Z)^2 = \sum (\log a_i)^2,$$
$$(2)\qquad Y^{-1} = \begin{pmatrix} W^{-1} & -W^{-1}X \\ -{}^{t}XW^{-1} & V^{-1} + [{}^{t}X]W^{-1} \end{pmatrix}.$$
Theorem 1.3. The metric on $\mathrm{Pos}_n$ admits the decomposition
$$\mathrm{tr}(V^{-1}\,dV)^2 \le \mathrm{tr}(Y^{-1}\,dY)^2,$$
is metric decreasing.
$$(3)\qquad dY = \begin{pmatrix} dW + [X]dV + dX \cdot V\,{}^{t}X + XV \cdot d\,{}^{t}X & \ dX \cdot V + X\,dV \\ dV \cdot {}^{t}X + V \cdot d\,{}^{t}X & \ dV \end{pmatrix}.$$
we have
$$(5)\qquad \begin{aligned} L_0 &= dW \cdot W^{-1} + XV \cdot d\,{}^{t}X \cdot W^{-1} \\ L_1 &= -dW \cdot W^{-1}X - XV \cdot d\,{}^{t}X \cdot W^{-1}X + dX + X \cdot dV \cdot V^{-1} \\ L_2 &= V \cdot d\,{}^{t}X \cdot W^{-1} \\ L_3 &= dV \cdot V^{-1} - V \cdot d\,{}^{t}X \cdot W^{-1}X. \end{aligned}$$
which shows that the third quadratic form is positive definite and concludes
the proof.
Let G = GLn (R) as usual. It is easily verified that the action of G on
Posn is metric preserving, so G has a representation as a group of Riemannian
automorphisms of Posn . Again cf. [La 99] Chap. XI, Theorem 1.1. Here we are
interested in the behavior of the determinant |Y | as a function of distance.
Consider first a special case, taking distances from the origin I = In . By
Theorem 1.2, we know that if Y ∈ Br (I) (Riemannian ball of radius r centered
at I), then
$$\mathrm{dist}(Y, I)^2 = \sum (\log a_i)^2 < r^2.$$
It then follows that there exists a number $c_n(r)$ such that for $Y \in B_r(I)$, we have
$$(6)\qquad \frac{1}{c_n(r)} < |Y| < c_n(r).$$
$$|Y| = a_1 \cdots a_n.$$
With the Schwarz inequality, we may take $c_n(r) = e^{\sqrt{n}\,r}$. Note that from an upper bound for |Y| we get a lower bound automatically, because $Y \mapsto Y^{-1}$ is an isometry. From another point of view, we also have $(\log a_i)^2 = (\log a_i^{-1})^2$.
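The Schwarz step can be illustrated numerically (our own check): scaling random log-vectors into the ball of radius r forces |Y| between $e^{-\sqrt{n}\,r}$ and $e^{\sqrt{n}\,r}$:

```python
import math
import random

# Our Monte Carlo check of the Schwarz step: if sum_i (log a_i)^2 < r^2,
# then |log |Y|| = |sum_i log a_i| <= sqrt(n) r by Cauchy-Schwarz, so
# 1/c_n(r) < |Y| < c_n(r) with c_n(r) = exp(sqrt(n) r).
random.seed(1)
n, r = 4, 1.5
c = math.exp(math.sqrt(n) * r)
for _ in range(1000):
    logs = [random.uniform(-1, 1) for _ in range(n)]
    norm = math.sqrt(sum(u * u for u in logs))
    logs = [u * 0.99 * r / norm for u in logs]   # now sum of squares < r^2
    detY = math.exp(sum(logs))
    assert 1 / c < detY < c
```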
In the above estimate, we took a ball around I. But the transitive action
of G on Posn gives us more uniformity. Indeed:
Lemma 1.4. For any pair $Y, Z \in \mathrm{Pos}_n$ with $\mathrm{dist}(Y, Z) < r$, we have
$$c_n(r)^{-1} < \frac{|Z|}{|Y|} < c_n(r).$$
Proof. We have
$$|tZ - Y| = |Y|\,|t\,Y^{-1}Z - I|.$$
The roots of this polynomial are the same as the roots of the polynomial $|t\,[Y^{-1/2}]Z - I|$, and $[Y^{-1/2}]Z \in B_r(I)$, so the lemma follows from the corresponding statement translated to the origin I.
$C_{ij}$, $C_{ii}$
Lemma 1.5. For $g \in \mathrm{GL}_n(\mathbf{R})$ and all pairs $Y, Z \in \mathrm{Pos}_n$ with $\mathrm{dist}(Y, Z) \le r$, and all $j = 1, \dots, n$, we have
Next, let
$$D_r = \left\{\, Y \in \mathrm{Pos}_n \ \text{such that}\ |Y| < c_n(r) \ \text{and}\ |\mathrm{Sub}_j\, Y| > \frac{1}{c_n(r)} \ \text{for } j = 1, \dots, n \,\right\}.$$
$$B_r([\gamma]I) \subset D_r.$$
Proof. Let $Y \in B_r([\gamma]I)$. Then $[\gamma^{-1}]Y \in B_r(I)$, and we can apply (6), as well as $|[\gamma^{-1}]I| = 1$, to prove the inequality $|Y| < c_n(r)$. For the other inequality, by the distance decreasing property, we have
and
$$A = \begin{pmatrix} a_{11} & \dots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \dots & a_{nn} \end{pmatrix}, \qquad a_{ii} > 0,$$
and $X = (x_{ij})$ is strictly upper triangular. We call (X, A) the full Iwasawa coordinates for Y on $\mathrm{Pos}_n$.
D(c) ∩ FU .
To test absolute convergence, it suffices to do so when all zj are real. The next
lemma will prove absolute convergence when Re(zj ) > 1.
The effect of intersecting D(c) with $F_U$ is to bound the $x_{ij}$-coordinates. Thus the convergence of the integral depends only on the $a_i$-coordinates. To concentrate on them, we let
$$d\mu_{n,A} = \prod_{i=1}^{n} a_i^{\,i - (n+1)/2}\ \prod_{i=1}^{n} \frac{da_i}{a_i}.$$
Just to see what’s going on, suppose n = 2 and the variables are
a1 = u and a2 = v .
$$v^{\varepsilon + (n+1)/2},$$
which cancels the similar expression in the outer v-integral. Thus finally the convergence is reduced to
$$\int_{1/c}^{\infty} \frac{dv}{v^{1+\varepsilon}} < \infty,$$
which is true. Having n variables only complicates the notation but not the idea, which is to integrate successively with respect to $da_n$, then $da_{n-1}$, and so forth down to $da_1$. We leave the details to the reader; this concludes the proof of Lemma 2.1.
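The final one-variable integral can be confirmed symbolically (our own verification):

```python
import sympy as sp

# Our symbolic check that int_{1/c}^oo v^{-1-eps} dv = c^eps / eps,
# which is finite precisely because eps > 0.
v, eps, c = sp.symbols('v epsilon c', positive=True)
val = sp.integrate(v**(-1 - eps), (v, 1 / c, sp.oo))
assert sp.simplify(val - c**eps / eps) == 0
```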
Next we combine the metric estimates from the last section with the mea-
sure estimates which we have just considered. Let r be a radius of discreteness
for Γ, defined at the end of the last section. Then
$$D_r = D(c_n(r)),$$
is not empty for each m, k. The set D defined above is stable under the action
of ΓU . Hence translating the sets Smk back into FU we conclude that
$$(4)\qquad [\tau_{mk}^{-1}]S_{mk} \subset D_r \cap F_U \quad\text{for all } m, k.$$
By Lemma 1.7, the sets $[\tau_{mk}^{-1}]S_{mk}$ are disjoint, for pairs (m, k) defined as above.
We are now ready to apply the geometry to estimate certain series.
Let ρ be a character. The primitive Eisenstein series is defined by
$$E_U^{\mathrm{pr}}(Y, \rho) = \sum_{\gamma \in \Gamma_U\backslash\Gamma} \rho([\gamma]Y).$$
We shall be concerned with the character equal to the Selberg power function, that is $q_{-z}^{(n-1)}$, so that by definition,
$$E_U^{\mathrm{pr}(n-1)}(Y, z) = \sum_{\gamma \in \Gamma_U\backslash\Gamma}\ \prod_{j=1}^{n-1} |\mathrm{Sub}_j\,[\gamma]Y|^{-z_j}.$$
First, note that any Y ∈ Posn lies in some ball Br (I), and by Lemma 1.5,
we see that the convergence of the series for any given Y is equivalent to the
convergence with Y = I. We also have uniformity of convergence in a ball of
fixed radius. In addition, we note that
$$|\mathrm{Sub}_n\,[\gamma]Y| = |[\gamma]Y| = |Y| \quad\text{for all } \gamma \in \Gamma.$$
Thus the convergence of the above Eisenstein series is equivalent with the
convergence of
$$E_U^{\mathrm{pr}(n)}(Y, z) = \sum_{\gamma \in \Gamma_U\backslash\Gamma}\ \prod_{j=1}^{n} |\mathrm{Sub}_j\,[\gamma]Y|^{-z_j}.$$
$$\ll \sum_{\gamma \in \Gamma_U\backslash\Gamma}\ \int_{B_r([\gamma]I)}\ \prod_{j=1}^{n} |\mathrm{Sub}_j\, Y|^{-b}\, d\mu(Y).$$
We combine the inclusion (4) with the estimate in (5). We use the fact that
$$|\mathrm{Sub}_j\,[\tau]Y| = |\mathrm{Sub}_j\, Y| \quad\text{for } \tau \in \Gamma_U,$$
and we translate each integral back into FU . We then obtain from (5)
$$E(I, b) \ll_n \sum_{m=1}^{\infty}\sum_{k=1}^{d_m}\ \int_{[\tau_{mk}^{-1}]S_{mk}}\ \prod_{j=1}^{n} |\mathrm{Sub}_j\, Y|^{-b}\, d\mu(Y) \ll_n \int_{D_r \cap F_U}\ \prod_{j=1}^{n} |\mathrm{Sub}_j\, Y|^{-b}\, d\mu_n(Y).$$
The sign $\ll_n$ means that the left side is less than the right side times a constant depending only on n. We have used here the fact, already established, that the sets $[\tau_{mk}^{-1}]S_{mk}$ are disjoint and contained in $D_r \cap F_U$. The finiteness of the integral was proved in Lemma 2.1, which thereby concludes the proof of Theorem 2.2.
converges absolutely for $\mathrm{Re}(z_2) > 3/2$ and $\mathrm{Re}(z_j) > 1$ for $j \ge 3$.
The proof is the same as the proof of Theorem 2.2. One uses the same set
D(c). Lemma 2.1 has its analogue for the product with one term omitted. The
calculus computation comes out as stated. For instance, for n = 3, the region
D(c) is defined by the inequalities
$$\frac{1}{c} < uvw < c, \qquad vw > \frac{1}{c}, \qquad w > \frac{1}{c}.$$
The series is dominated by the repeated integral
$$\int_{1/c}^{\infty}\int_{1/wc}^{\infty}\int_{1/vwc}^{c/vw} (vw)^{-3/2-\varepsilon}\, u^{(n+1)/2}\, v^{1-(n+1)/2}\, w^{2-(n+1)/2}\, du\, dv\, dw,$$
For various reasons, including the above specific application, Maass ex-
tends the convergence theorem still further as follows [Maa 71].
Let
$$0 = k_0 < k_1 < \dots < k_m < k_{m+1} = n$$
be a sequence of integers which we call an integral partition P of n. Let
$$n_i = k_i - k_{i-1}, \qquad i = 1, \dots, m + 1.$$
Theorem 3.2. ([Maa 71], Sect. 7) This Eisenstein series is absolutely convergent for
$$\mathrm{Re}(z_i) > \frac{1}{2}(n_{i+1} + n_i) = \frac{1}{2}(k_{i+1} - k_{i-1}), \qquad i = 1, \dots, m.$$
Proof. One has to go through the same steps as in the preceding section, with the added complications of the more elaborate partition. One needs the Iwasawa-Jacobi coordinates with blocks,
$$Y = [u(X)]\begin{pmatrix} W_1 & \dots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \dots & W_{m+1} \end{pmatrix} \quad\text{and}\quad u(X) = \begin{pmatrix} I_{n_1} & \dots & X_{ij} \\ \vdots & \ddots & \vdots \\ 0 & \dots & I_{n_{m+1}} \end{pmatrix}.$$
Note that Theorem 3.1 is a special case of Theorem 3.2. However, the notation of Theorem 3.1 is simpler, and we thought it worthwhile to state it and indicate its proof separately, using the easier notation for the Eisenstein series.
The subgroup ΓP is usually called a parabolic subgroup. Such sub-
groups play an essential role in the compactification of Γn \Posn , and in the
subsequent spectral eigenfunction decomposition.
8
Eisenstein Series Second Part
In Chap. 5, we already saw the Epstein zeta function, actually two zeta func-
tions, one primitive and the other one completed by a Riemann zeta function.
Indeed, let Y ∈ Posn . We may form the two series
$$E^{\mathrm{pr}}(Y, s) = \sum_{a\ \mathrm{prim}} ([a]Y)^{-s} \qquad\text{and}\qquad E(Y, s) = \sum_{a \ne 0} ([a]Y)^{-s},$$
where the first sum is taken over $a \in {}^{t}\mathbf{Z}^n$, $a \ne 0$ and a primitive, while the second sum is taken over all $a \in {}^{t}\mathbf{Z}^n$, $a \ne 0$. Any non-zero $a \in {}^{t}\mathbf{Z}^n$ can be written uniquely in the form
$$a = d a_1 \quad\text{with } d \in \mathbf{Z}^{+} \text{ and } a_1 \text{ primitive}.$$
Therefore
$$E(Y, s) = \zeta_{\mathbf{Q}}(2s)\, E^{\mathrm{pr}}(Y, s).$$
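This identity is easy to test numerically (our example: n = 2, Y = I, s = 2, truncating the lattice sum), since the ratio of the two series must be $\zeta(4) = \pi^4/90$:

```python
import math

# Our numerical check (n = 2, Y = I, s = 2) of E(Y, s) = zeta(2s) E^pr(Y, s):
# sum (a^2 + b^2)^{-2} over all nonzero integer pairs versus over primitive
# (gcd 1) pairs; the ratio should approximate zeta(4) = pi^4 / 90.
N = 300
E = Epr = 0.0
for a in range(-N, N + 1):
    for b in range(-N, N + 1):
        if a == 0 and b == 0:
            continue
        term = float(a * a + b * b) ** -2
        E += term
        if math.gcd(abs(a), abs(b)) == 1:
            Epr += term
zeta4 = math.pi**4 / 90
assert abs(E / Epr - zeta4) < 1e-4
```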
We have to extend this property to the more general Selberg Eisenstein series on $\mathrm{Pos}_n$. This requires a more elaborate combinatorial formalism, about integral matrices in $\mathbf{Z}^{j \times (j+1)}$ with $j = 1, \dots, n - 1$. Thus the first section is devoted to the linear algebra formalism of such integral matrices and their decompositions. After that, we define the general Eisenstein series and obtain various expressions for them, which are used subsequently in deriving the analytic continuation and functional equations. For all this, we follow Maass [Maa 71], after [Maa 55], [Maa 56]. He did a great service to the mathematical community in providing us with a careful and detailed account. However, we have had to rethink all the formulas, because we use left characters instead of right characters as in Maass-Selberg, and also we introduce the Selberg variables $s = (s_1, \dots, s_n)$ as late as possible. Indeed, we work with more general functions than characters, for application to more general types of Eisenstein series constructed with automorphic forms, or beyond, with the heat kernel.
We note here one important feature about the structure of various fudge
factors occurring in functional equations: they are eigenvalues of certain
$\Gamma_n = \mathrm{GL}_n(\mathbf{Z})$;
$\mathcal{M}^{*}_n$ = set of integral n × n matrices of rank n;
$\mathcal{M}^{*}(p, q)$ = set of integral p × q matrices of rank min(p, q);
$\Delta_n$ = set of upper triangular integral n × n matrices of rank n;
$T_n = \Gamma_n \cap \Delta_n$ = group of upper triangular integral matrices of determinant ±1.
We note that $\mathcal{M}^{*}_n$ and $\Delta_n$ are just sets of matrices, not groups. The diagonal components of an element in $\Delta_n$ are arbitrary integers $\ne 0$, so elements of $\Delta_n$ are not necessarily unipotent. On the other hand, the elements of $T_n$ necessarily have ±1 on the diagonal, so differ from unipotent elements precisely by such diagonal elements. Note that $\Delta_n$ is stable under the action of $T_n$ on both sides, but we shall usually consider the left action. Thus we consider coset representatives in $\Gamma_n$ for the coset space $T_n\backslash\Gamma_n$, and also coset representatives $D \in \Delta_n$ of the coset $T_n D$, which is a subset of $\Delta_n$. Similarly, $\mathcal{M}^{*}_n$ is stable under the action of $\Gamma_n$ on both sides, and we can consider the coset space $\Gamma_n\backslash\mathcal{M}^{*}_n$.
$$T_n\backslash\Delta_n \to \Gamma_n\backslash\mathcal{M}^{*}_n$$
Proof. By induction, and left to the reader. We shall work out formally a more
complicated variation below.
satisfies the inequalities in the lemma. We start at the top, so we first solve for $x_{12}$ such that
$$0 \le y_{12} + x_{12}d_{22} < d_{22}.$$
This inequality has a unique integral solution $x_{12}$. We then solve inductively for $x_{13}, \dots, x_{1n}$; then we go down the rows to conclude the proof.
Lemma 1.3. Given integers $d_{jj} > 0$ $(j = 1, \dots, n)$, the number of cosets $T_n D$ with D having the given diagonal elements is
$$\prod_{j=1}^{n} d_{jj}^{\,j-1}.$$
Proof. Immediate.
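For n = 2 the count can also be verified by brute force (our script, not from the text): left multiplication by the unipotent elements of $T_2$ shifts the off-diagonal entry modulo $d_{22}$:

```python
import random

# Our brute-force check of the coset count for n = 2: left multiplication
# by [[1, k], [0, 1]] in T_2 shifts the off-diagonal entry of
# D = [[d11, x], [0, d22]] by k*d22, so each coset T_2 D with fixed positive
# diagonal has a unique representative with 0 <= x < d22, giving
# d11^0 * d22^1 = d22 cosets, as Lemma 1.3 predicts.
def canonical(d11, x, d22):
    return (d11, x % d22, d22)

random.seed(0)
for _ in range(200):
    d11, d22 = random.randint(1, 9), random.randint(1, 9)
    x, k = random.randint(-50, 50), random.randint(-5, 5)
    assert canonical(d11, x + k * d22, d22) == canonical(d11, x, d22)

num_cosets = len({canonical(5, x, 3) for x in range(-30, 30)})
assert num_cosets == 3 == 5**0 * 3**1
```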
Remark. The previous lemmas have analogues for the right action of Γn on
M∗n . First, Lemma 1.1 is valid without change for the right action of Γn on
M∗n and the right action of Tn on ∆n . On the other hand, the inequalities
defining coset representatives in Lemma 1.2 for the right have to read:
Then the number of cosets DTn with D having given d11 , . . . , dnn > 0 is
\[
\prod_{j=1}^{n} d_{jj}^{\,n-j} .
\]
We consider the coset space Tn−1 \M∗ (n − 1, n). Given a coset Tn−1 C, by
Lemma 1.4 we can find a coset representative of the form (0, D)γ with γ ∈ Γn .
We use such representatives to describe a fibration of Tn−1 \M∗ (n − 1, n) over
Tn \Γn as follows.
Lemma 1.5. Let π : Tn−1 \M∗ (n − 1, n) → Tn \Γn be the map which to each
coset Tn−1 C with representative (0, D)γ associates the coset Tn γ. This map π
is a surjection on Tn \Γn , and the fibers are Tn−1 \∆n−1 .
Proof. Implicit in the statement of the lemma is that the association π as de-
scribed is well defined, i.e. independent of the chosen representative. Suppose
the second column, third column, etc. Thus γ, γ′ are in the same coset of
Tn \Γn , showing the map is well defined. We note that the surjectivity of π is
immediate.
As to the fibers, if τ ∈ Tn and D ∈ ∆n−1 , then (0, D)τ again has the
form (0, D′) with D′ ∈ ∆n−1 . Thus by definition, the fiber above a coset Tn γ
consists precisely of cosets
where Ij is the unit j × j matrix as usual. Note the operation on the left,
and the fact that 0 denotes the j × (n − 1) zero matrix, so that (0, Ij ) is a
j × n matrix. If Y = T ᵗT with an upper triangular matrix T , then Yj = Tj ᵗTj ,
where Tj is the lower right j × j submatrix of T .
From a given Y we obtain a sequence (Yn , Yn−1 , . . . , Y1 ) by the operation
indicated in (1), starting with Yn = Y . We call this sequence the Selberg
sequence of Y . Given γ ∈ Γn , we shall also form the Selberg sequence
with Yn = [γ]Y . In some sense (to be formalized below) this procedure gives
rise to “primitive” sequences. It will be necessary to deal with non-primitive
sequences, and thus we are led to make more general definitions as follows.
By an integral chain (more precisely n-chain) we mean a finite sequence
It’s obvious that (2) implies (3). Conversely, suppose EQU 2 and (3). We
then let γn = γ 0 γ −1 , and it follows inductively that (2) is satisfied.
A sequence (γ, Cn−1 , . . . , C1 ) will be said to be triangularized if we have
that Cj = (0, Dj ) with Dj ∈ ∆j for j = 1, . . . , n − 1. Thus the first column of
Cj is zero.
The next lemmas give special representatives for equivalence classes.
Lemma 1.6. Let Cj ∈ Zj,j+1 (j = 1, . . . , n − 1) be integral matrices. There
exist elements γj ∈ Γj (j = 1, . . . , n) such that for j = 1, . . . , n − 1 we have
\[
\gamma_j C_j \gamma_{j+1}^{-1} = (0, T_j) =
\begin{pmatrix}
0 & * & \cdots & * \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & *
\end{pmatrix} ,
\]
that is, the first column on the right is 0, and the rest is upper triangular, with
Tj ∈ Tri⁺j . Thus every chain is equivalent to a triangularized one.
\[
\begin{pmatrix}
0 & H_{j-1} \\
0 & 0 \cdots *
\end{pmatrix}
\]
where the matrix on the right has first column 0, and the rest upper triangular.
We let
\[
\gamma_j = \begin{pmatrix} 1 & 0 \\ 0 & \eta_{j-1} \end{pmatrix}
\quad \text{for } j = 1, \ldots, n .
\]
1 Integral Matrices and Their Chains 139
This last matrix has the desired form (0, Tj ), thereby concluding the proof.
Lemma 1.7. To each sequence (γ, Dn−1 , . . . , D1 )
whose components are among the fixed representatives, associate the chain
Then this association gives a bijection from the set of representative sequences
to equivalence classes of chains, i.e. every chain is equivalent to exactly one
formed as above, with the fixed representatives.
\[
(\gamma', (0, D'_{n-1}), \ldots, (0, D'_1))
\]
then Dn−1 is the fixed representative of the coset Tn−1 Dn−1 . We can then
continue by induction. This shows that the stated association maps bijectively
on the families of equivalence classes and proves the lemma.
Lemma 1.8. The map γ ↦ (γ, (0, In−1 ), . . . , (0, I1 )) induces a bijection
\[
T_n \backslash \Gamma_n \to \text{primitive equivalence classes of chains} .
\]
In particular, we may work with q^{(n−1)}_z on Posn , or also with q^{(n)}_z on Posn ,
depending on circumstances. In any case, we see that we may also write
Implicit in this definition is the assumption that the series involved converges
absolutely. The next lemma gives a first example.
For any positive integer n, we make the general definition of the Riemann
zeta fudge factor at level n,
\[
\Phi_{Q,n}(z) = \prod_{i=1}^{n} \zeta_Q\bigl( 2(z_i + \cdots + z_n) - (n - i) \bigr) .
\]
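For orientation, the two lowest cases of this definition unwind as follows (our own expansion, not displayed in the text):

```latex
\Phi_{Q,1}(z) = \zeta_Q(2z_1), \qquad
\Phi_{Q,2}(z) = \zeta_Q\bigl(2(z_1+z_2)-1\bigr)\,\zeta_Q(2z_2).
```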
In other words,
\[
\lambda_{HZ}\bigl( q^{(n)}_{-z} \bigr) = \Phi_{Q,n}(z) .
\]
This relationship holds for Re(zi + . . . + zn ) > (n − i + 1)/2, i = 1, . . . , n, which
is the domain of absolute convergence of the Hecke-zeta operator on q^{(n)}_{−z} .
Proof. Directly from the definition of q^{(n)}_{−z} , we find
\[
(1) \quad q^{(n)}_{-z}([(0, D)]S)
= \prod_{i=1}^{n} \bigl| [(0, I_i)(0, D)]S \bigr|^{-z_i}
= \prod_{i=1}^{n} |\mathrm{Sub}_i(D)|^{-2z_i}\, |\mathrm{Sub}_i(S)|^{-z_i}
= \prod_{i=1}^{n} (d_{n-i+1} \cdots d_n)^{-2z_i}\, q^{(n)}_{-z}(S) ,
\]
where d1 , . . . , dn are the diagonal elements of D. Next we take the sum over all
integral non-singular triangular D, from the set of representatives of Lemma
1.2, so
\[
D = \begin{pmatrix}
d_1 & \cdots & * \\
\vdots & \ddots & \vdots \\
0 & \cdots & d_n
\end{pmatrix} .
\]
The sum over D can be replaced by a sum
\[
\sum_{d_1, \ldots, d_n = 1}^{\infty} \prod_{k=1}^{n} d_k^{\,k-1}
\]
by Lemma 1.3. With the substitution k = n − i + 1, the factor of q^{(n)}_{−z}(S) in
(1) can thus be expressed as
\[
(2) \quad \sum_{D} \prod_{i=1}^{n} (d_{n-i+1} \cdots d_n)^{-2z_i}
= \sum_{d_1=1}^{\infty} \cdots \sum_{d_n=1}^{\infty} \prod_{k=1}^{n} d_k^{-2(z_{n-k+1} + \cdots + z_n) + k - 1}
= \Phi_{Q,n}(z) .
\]
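The computation in (2) can be checked numerically. The sketch below (our own illustration, with ad hoc truncation bounds) compares the double sum for n = 2 against the corresponding product of truncated zeta values.

```python
def zeta(s, terms=2000):
    # Truncated Riemann zeta sum; adequate here since s > 3.
    return sum(n ** -s for n in range(1, terms))

z1, z2 = 2.0, 2.0   # arbitrary test values in the convergence domain
N = 400

# Left side of (2) for n = 2: exponent -2*z2 on d1, -2*(z1+z2)+1 on d2.
lhs = sum(d1 ** (-2 * z2) * d2 ** (-2 * (z1 + z2) + 1)
          for d1 in range(1, N) for d2 in range(1, N))

# Right side: Phi_{Q,2}(z) = zeta(2(z1+z2) - 1) * zeta(2*z2).
rhs = zeta(2 * (z1 + z2) - 1) * zeta(2 * z2)
```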
Next we deal with a similar but more involved situation, for which we
make a general definition of the Riemann zeta fudge factors, namely
\[
\Phi_{Q,j}(z) = \Phi_{Q,j}(z_1, \ldots, z_j)
= \prod_{i=1}^{j} \zeta_Q\bigl( 2(z_i + \cdots + z_j) - (j - i) \bigr)
\]
and
\[
\Phi_Q^{(n)}(z_1, \ldots, z_n) = \prod_{j=1}^{n} \Phi_{Q,j}(z) .
\]
These products will occur as factors in relations among Eisenstein series later.
In the next lemma, we let {Dj } range over the representatives of Tj \∆j (j =
1, . . . , n) as given in Lemma 1.2. We let d^{(j)}_{νν} denote the diagonal elements of
Dj , with the indexing j − k + 1 ≤ ν ≤ j, which will fit the indexing in
the literature. The indexing also fits our viewing Dj as a lower right square
submatrix.
Lemma 2.3.
\[
\sum_{D_n} \cdots \sum_{D_1} \prod_{k=1}^{n} \prod_{j=k}^{n} \prod_{\nu=j-k+1}^{j} \bigl( d^{(j)}_{\nu\nu} \bigr)^{-2z_k}
= \Phi_Q^{(n)}(z)
= \prod_{1 \le i \le j \le n} \zeta_Q\bigl( 2(z_i + \cdots + z_j) - (j - i) \bigr) .
\]
Proof. For a fixed index j, we consider the sum on the left over the represen-
tatives {Dj }. The products inside the sum which are indexed by this value j
then can be written
\[
\sum_{D_j} \prod_{k=1}^{j} \prod_{\nu=j-k+1}^{j} \bigl( d^{(j)}_{\nu\nu} \bigr)^{-2z_k} .
\]
This is precisely the term evaluated in (2), and seen to be equal to ΦQ,j (z).
Taking the product over j = 1, . . . , n concludes the proof of the lemma.
3 Eisenstein Series
Next we shall apply chains as in Sect. 1 to elements of Posn . Let Y ∈ Posn .
Let C be a chain, C = (γ, Cn−1 , . . . , C1 ). For each j = 1, . . . , n − 1 define
One may also define q^{(n)}_C with one more variable, namely
\[
q^{(n)}_{C,z}(Y) = \prod_{j=1}^{n} |C_j(Y)|^{z_j} .
\]
\[
q^{(n-1)}_{C',z}(Y) = q^{(n-1)}_{C,z}(Y) ;
\]
in other words, q^{(n−1)}_{C,z} depends only on the equivalence class of C. Hence
the power function can be determined by using the representatives given by
Lemma 1.7.
As in Sect. 1, we let Tn be the group of integral upper triangular n × n
matrices with ±1 on the diagonal. We define the Selberg Eisenstein series
\[
E^{(n-1)}_{T,n}(Y, z) = \sum_{C} q^{(n-1)}_{C,-z}(Y) ,
\]
where the sum is taken over all equivalence classes of chains. We define the
primitive Selberg Eisenstein series by the same sum taken only over the
primitive equivalence classes, that is
\[
E^{\mathrm{pr}(n-1)}_{T,n}(Y, z) = \sum_{C\ \mathrm{primitive}} q^{(n-1)}_{C,-z}(Y) .
\]
\[
q_{C,z}(Y) = q_z([\gamma]Y) .
\]
This is essentially the Eisenstein series we have defined previously, except that
we are summing mod Tn instead of mod ΓU . However, we note that for any
character ρ, and τ ∈ Tn we have the invariance property
Since (Tn : ΓU ) = 2^n , denoting the old Eisenstein series by E^{pr}_U (Y, q−z ), we
get
Then
here d^{(j)}_{νν} are the diagonal elements of Dj .
Theorem 3.1. The Eisenstein series E^{(n−1)}_{U,n}(Y, z) converges absolutely for
Re(zj ) > 1 (j = 1, . . . , n − 1) and satisfies the relation
\[
E^{(n-1)}_{U,n}(Y, z) = \Phi_Q^{(n-1)}(z_1, \ldots, z_{n-1})\, E^{\mathrm{pr}(n-1)}_{U}(Y, z) .
\]
Proof. Both the relation and the convergence follow from (6) and Lemma 2.3
applied to n − 1 instead of n, and Theorem 2.2 of Chap. 7.
\[
\varphi^*(Y) = \varphi([\omega]Y^{-1}) = \varphi(\omega Y^{-1} \omega) .
\]
Proposition 3.2. Let ϕ be any U -invariant function such that its ΓU \Γ-trace
converges absolutely. Then
This proves the second statement. Then the first formula comes out, namely:
\[
\mathrm{Tr}_{\Gamma_U \backslash \Gamma}\, \varphi(Y^{-1})
= \sum_{\gamma \in \Gamma_U \backslash \Gamma} \varphi([\gamma] Y^{-1})
= \sum_{\gamma} \varphi(\gamma Y^{-1}\, {}^t\gamma)
= \sum_{\gamma} \varphi^*\bigl( \omega\, ({}^t\gamma^{-1} Y \gamma^{-1})\, \omega \bigr)
= \mathrm{Tr}_{\Gamma_U \backslash \Gamma}\, \varphi^*(Y) .
\]
The next two lemmas deal with similar identities with sums taken over
cosets of matrices modulo the triangular group.
Lemma 3.3. Let ϕ be a Tn -invariant function such that the following sums
are absolutely convergent, i.e. a left character on Posn . Let S ∈ Posn+1 . Then
\[
\sum_{A \in M^*(n+1,\,n)/T_n} \varphi^*\bigl( (S[A])^{-1} \bigr)
= \sum_{C \in T_n \backslash M^*(n,\,n+1)} \varphi([C]S) .
\]
Proof. Inserting an ω inside the left side and using the definition of ϕ∗ , to-
gether with ϕ∗∗ = ϕ, we see that the left side is equal to
\[
\sum_{A \in M^*(n+1,\,n)/T_n} \varphi(S[A][\omega]) = \sum_{A} \varphi(S[A\omega]) .
\]
By definition, M∗ (n + 1, n) = ⋃_A A Tn , with a family {A} of coset representa-
tives. Since M∗ (n + 1, n) = M∗ (n + 1, n)ω, we also have
\[
\bigcup_{A} A T_n = \bigcup_{A\omega} A\omega\, \omega T_n \omega = \bigcup_{A\omega} A\omega\, T_n^- ,
\]
where Tn− is the lower integral triangular group. Thus the family {Aω} is a
family of coset representatives for M∗ (n + 1, n)/Tn− . Writing
\[
S[A\omega] = [\omega\, {}^t A]\, S ,
\]
we see that we can sum over the transposed matrices, and thus that the desired
sum is equal to
\[
\sum_{C \in T_n \backslash M^*(n,\,n+1)} \varphi([C]S) ,
\]
Normalizing the series by taking sums mod Tn or mod ΓU only introduces the
simple factor 2^n each time.
We shall now develop further the series on the right in Lemma 3.3, by
using the eigenvalue property EF HZ stated in Sect. 2.
Then the inner sum is just the Hecke operator of ϕ, when evaluated at
Subn [γ]S. The result then falls out.
In particular, we may apply the lemma to the case when ϕ = q^{(n)}_{−z} , and we
obtain:
Proof. Special case of Lemma 3.4, after applying Lemma 2.1 which determines
the eigenvalue of the Hecke-zeta operator.
where the sum is taken over A ∈ Zn+1,n . This is the standard theta series. We
can differentiate term by term. By (1) and the subsequent remark, we note
that
\[
D_Y\, \theta(S, Y) = \sum_{\mathrm{rk}(A)=n} D_Y\, e^{-\pi\,\mathrm{tr}(S[A]Y)}
= \sum_{\mathrm{rk}(A)=n} \beta_{A,S}(Y)\, e^{-\pi\,\mathrm{tr}(S[A]Y)} ,
\]
From (3), we then see that Dθ satisfies the same functional equation, that is
For functions ϕ such that the ΓU \Γ-trace and the following integral are ab-
solutely convergent, we can form the convolution on Γn \Posn :
\[
(D\theta * \mathrm{Tr}_{\Gamma_U \backslash \Gamma}\, \varphi)(S)
= \int_{\Gamma_n \backslash \mathrm{Pos}_n} (D_Y \theta)(S, Y)\,
\mathrm{Tr}_{\Gamma_U \backslash \Gamma}(\varphi)(Y)\, d\mu_n(Y) .
\]
We abbreviate as before
P = Posn , Γ = Γn
using formula (2), and then transposing Q̃Y from the exponential term to the
ϕ ◦ [γ](Y ) term. Now we make the translation Y ↦ [γ −1 ]Y in the integral
over P. Under this change, ΓU \Γ ↦ Γ/ΓU , and the expression is equal to
\[
= 2(-1)^n \sum_{A \in M^*/\Gamma}\ \sum_{\gamma^{-1} \in \Gamma/\Gamma_U} |\pi S[A]|
\int_{\mathcal{P}} e^{-\pi\,\mathrm{tr}(S[A\gamma^{-1}]Y)}\,
|Y|^{k+1}\, Q_Y\bigl( |Y|^{-k} \varphi(Y) \bigr)\, d\mu(Y) .
\]
The two sums over Γ/ΓU and over M∗ /Γ can be combined into a single sum
with A ∈ M∗ /ΓU , which yields the formula proving the lemma.
\[
= 2^n\, \lambda_{HZ}(\varphi)\, E^{\mathrm{pr}}_T(\pi S, \varphi \circ \mathrm{Sub}_n)
\]
by Lemmas 3.3 and 3.4.
The Eisenstein series here is on Posn+1 , and going back to ΓUn+1 instead of
Tn+1 introduces the factor 1/2^{n+1} , which multiplied by 2^n leaves 1/2. This
1/2 cancels the factor 2 occurring in Theorem 4.2. The relationship asserted
in the theorem then falls out, thus concluding the proof.
\[
(D\theta * \mathrm{Tr}_{\Gamma_U \backslash \Gamma_n}\, \varphi^*)(S)
= \pi^{w}\, \Lambda_n(\varphi^*)\, \lambda_{HZ}(\varphi)\,
\mathrm{Tr}_{\Gamma_{U_{n+1}} \backslash \Gamma_{n+1}}(\varphi \circ \mathrm{Sub}_n)(S) .
\]
Proof. We just pull out the homogeneity factor from inside the expression in
Theorem 4.3.
Remark. Immediately from the definitions, one sees that for the
Selberg power character, we have
\[
\deg q_z^{(n)} = w_n(z) = \sum_{j=1}^{n} j z_j .
\]
namely
\[
(1) \quad |Y|^{s_n + (n-1)/4}\, q^{(n-1)}_{-z}(Y) = h_s(Y)
= \prod_{i=1}^{n} (t_{n-i+1})^{2 s_i + i - (n+1)/2} ,
\]
where
\[
z_j = s_{j+1} - s_j + \tfrac{1}{2} \quad \text{for } j = 1, \ldots, n-1 ,
\]
or also
\[
(2) \quad q^{(n-1)}_{-z}(Y) = |Y|^{-s_n - (n-1)/4}\, h_s(Y) .
\]
5 Changing to the (s1 , . . . , sn )-variables 153
Proposition 5.1. We have in the appropriate domain (see the remark be-
low):
\[
\zeta^{\mathrm{pr}}(Y^{-1}, s) = |Y|^{s_n - s_1 + (n-1)/2}\, \zeta^{\mathrm{pr}}(Y, s^*) ,
\]
where s∗ = (−sn , . . . , −s1 ), so s∗j = −sn−j+1 .
Proof. We have
\[
s^*_{k+1} - s^*_k + \tfrac{1}{2} = s_j - s_{j-1} + \tfrac{1}{2}
\quad \text{with } j = n - k + 1 .
\]
Thus the domains of convergence in terms of the s∗ and s variables are “the
same” half planes.
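This index bookkeeping is easy to confirm numerically; the sketch below (our own illustration with arbitrary test values) checks the displayed identity for every k.

```python
from math import isclose

n = 5
s = [0.3, 1.1, 2.0, 2.7, 3.5]        # test values s_1, ..., s_n (0-indexed)
s_star = [-x for x in reversed(s)]   # s*_j = -s_{n-j+1}

for k in range(1, n):                # k = 1, ..., n-1
    j = n - k + 1
    lhs = s_star[k] - s_star[k - 1] + 0.5   # s*_{k+1} - s*_k + 1/2
    rhs = s[j - 1] - s[j - 2] + 0.5         # s_j - s_{j-1} + 1/2
    assert isclose(lhs, rhs)
```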
Proof. By definition,
\[
\Phi_{Q,n-1}(z) = \prod_{i=1}^{n-1} \zeta_Q\bigl( 2(z_i + \cdots + z_{n-1}) - (n - i - 1) \bigr) .
\]
Since Z_Q^{(n)}(s∗ ) = Z_Q^{(n)}(s) by Lemma 5.2, it follows that Proposition 5.1 is valid
if we replace the primitive Eisenstein series ζ pr (Y, s) by ζ(Y, s).
In connection with using Posn+1 via Theorem 4.3 and Corollary 4.4, it is
natural to consider q^{(n)}_{−z} as well as q^{(n−1)}_{−z} .
Lemma 5.4. Put zn = sn+1 − sn + 1/2, and let ϕs,sn+1 be the character on
Posn defined by
Proof. By definition,
\[
q^{(n)}_{-z}(Y) = |Y|^{-z_n}\, q^{(n-1)}_{-z}(Y) .
\]
This is just the formulation of Lemma 2.1 in the (s, sn+1 ) variables. Further-
more,
\[
(7) \quad w_n(z) = \deg \varphi_{s, s_{n+1}}
= \sum_{i=1}^{n} \Bigl( s_i - s_{n+1} - \frac{n+1}{4} \Bigr) .
\]
This is immediate from (3) and the homogeneity degree of the determinant.
We define various elementary functions from which we build others, and
relate them to eigenvalues found in the preceding section. We let
These are standard fudge factors in one variable u. Following the previous
general pattern, we define
\[
g_n(s) = g_n(s_1, \ldots, s_n) = \prod_{i=1}^{n-1} g(s_n - s_i + 1/2) ,
\]
\[
F_n(s) = F_n(s_1, \ldots, s_n) = \prod_{i=1}^{n-1} F(s_n - s_i + 1/2) .
\]
Finally, we define
\[
g^{(n)}(s) = \prod_{j=1}^{n} g_j(s)
\quad \text{and} \quad
F^{(n)}(s) = \prod_{j=1}^{n} F_j(s) .
\]
These definitions follow the same pattern that we used with the fudge factor
involving the Riemann zeta function, i.e. Z_{Q,n}(s) and Z_Q^{(n)}(s). In particular,
and
\[
F^{(n+1)}(s, s_{n+1}) = \prod_{j=1}^{n} F_{j+1}(s_1, \ldots, s_{j+1}) .
\]
The next lemma is the analogue of Proposition 5.3 for the fudge factor that
we are now dealing with.
Lemma 5.5. We have the explicit determination
\[
F_{n+1}(s, s_{n+1}) = \pi^{w_n}\, \Lambda_n(\varphi^*_{s, s_{n+1}}) .
\]
The exponent wn is the degree in (7), as a function of s1 , . . . , sn+1 .
Proof. This is a tedious verification.
We apply Corollary 4.4 to the character q^{(n)}_{−z} = ϕs,sn+1 . We note that
\[
(8) \quad q^{(n)*}_{-z} = \varphi^*_{s, s_{n+1}} = d^{\,s_{n+1} + (n+1)/4}\, h_{s^*} .
\]
Proof. This result is proved by the Riemann method. The integral over Γn \Pn
is decomposed into a sum
\[
\int_{\Gamma_n \backslash \mathcal{P}_n}
= \int_{(\Gamma_n \backslash \mathcal{P}_n)(\geq 1)}
+ \int_{(\Gamma_n \backslash \mathcal{P}_n)(\leq 1)} ,
\]
where the parentheses (≥ 1) and (≤ 1) signify the subdomain where the de-
terminant is ≥ 1 resp. ≤ 1. On the second integral, we make the change of
variables Y ↦ Y −1 . Then letting Fn = Γn \Pn , we get:
We now use two previous functional equations. One is the functional equation
for the regularized theta functions, namely Sect. 4, formulas (4) and (5), which
read:
The other equation is stated in Proposition 5.1, which is valid with ζ(Y, s)
(n) (n)
instead of ζ pr (Y, s), because ZQ (s∗ ) = ZQ (s) is the same factor needed
to change the primitive Eisenstein series into the non-primitive one. Ap-
plying this proposition and the functional equation for the theta function
shows directly and immediately that the two terms under the integral for
ξ(S −1 ; s∗ , sn+1 ) are changed precisely into the two terms which occur in the
integral expression for ξ(S; s, sn+1 ) multiplied by |S|n/2 . This concludes the
proof.
Theorem 6.2. Let S ∈ Posn+1 and let
\[
\eta(S; s_1, \ldots, s_{n+1})
= F^{(n+1)}(s_{n+1}, s_1, \ldots, s_n)\, |S|^{s_n}\, \zeta(S; s_{n+1}, s_1, \ldots, s_n) .
\]
by Proposition 5.1, valid in the domain Re(sj+1 − sj + 1/2) > 1 for each index
j = 1, . . . , n − 1, that is in the domain B. On the other hand,
Let pr_{R^{n+1}} (D) = DR be the projection on the real part. Since the inequalities
defining D involve only the real part, it follows that
\[
D = D_R + i\,\mathbf{R}^{n+1} ,
\]
under a transposition, and even under the transposition between the special
variables s1 and s2 . Then we shall obtain Selberg’s theorem:
Proof. The following proof follows Selberg's lines and is the one given in
Maass [Maa 71]. We have ζ(Y ; s) = E^{(n−1)}_U(Y, z) (the non-primitive Eisen-
stein series). The essential part of the proof will be to show that the function
\[
\pi^{-z_1}\, \Gamma(z_1)\, E^{(n-1)}_U(Y, z)
= \pi^{-(s_2 - s_1 + 1/2)}\, \Gamma(s_2 - s_1 + 1/2)\, \zeta(Y, s) ,
\]
where we set
\[
g(u) = \pi^{-u}\, \Gamma(u) .
\]
The sum over (Cn , . . . , C2 ) is over equivalence classes, whose definition for
such truncated sequences is the same as for (Cn , . . . , C1 ), except for disre-
garding the condition on C1 . The theorem was proved in Chap. 5, Theorem
4.1 in the case n = 2, so we assume n ≥ 3. We write the Eisenstein series with
one further splitting, that is
\[
E^{(n-1)}_T(Y, z) = \sum_{(C_n, \ldots, C_2)}
\prod_{j=3}^{n-1} |C_j(Y)|^{-z_j}\, |C_2(Y)|^{-z_2}\, E^{(1)}_T(C_2(Y), z_1) .
\]
Although the notation with the chains was the clearest previously, it now
becomes a little cumbersome, so we abbreviate
Cj (Y ) = Yj for j = 1, . . . , n .
Then we rewrite the above expressions for the Eisenstein series in the form
\[
(1) \quad E^{(n-1)}_T(Y, z) = \sum_{(Y_n, \ldots, Y_2)}
\prod_{j=2}^{n-1} |Y_j|^{-z_j}\, E^{(1)}_T(Y_2, z_1)
\]
7 Invariance under All Permutations 161
\[
(2) \quad = \sum_{(Y_n, \ldots, Y_2)}
\prod_{j=3}^{n-1} |Y_j|^{-z_j}\, |Y_2|^{-z_2}\, E^{(1)}_T(Y_2, z_1) .
\]
is invariant under the permutation of s1 and s2 . The only thing to watch for
is that this permutation can be done while preserving the convergence of the
series expression (2) for E^{(n−1)}(Y, z). Thus one has to select an appropriate
domain of absolute convergence, so that all the above expressions make sense.
Maass does this as follows. We start with the inductive lowest dimensional
piece,
\[
\Lambda_2(Y, z_1) = \pi^{-z_1}\, \Gamma(z_1)\, E^{(1)}(Y, z_1) ,
\]
which is the first case studied in Chap. 5, Sect. 3. We gave an estimate for
this function in the strip Str(−2, 3), that is
away from 0 and 1, specifically outside the discs of radius 1 centered at 0 and
1, as in Corollary 3.8 of Chap. 5.
Next, we consider the series
\[
(3) \quad \pi^{-z_1}\, \Gamma(z_1)\, E^{(n-1)}(Y, z)
= \sum_{(Y_n, \ldots, Y_2)} \prod_{j=2}^{n-1} |Y_j|^{-z_j}\, \Lambda_2(Y_2, z_1) .
\]
By Theorem 3.1, mostly Theorem 2.2 of Chap. 7, the series in (3) converges
absolutely for Re(zj ) > 1, j = 1, . . . , n − 1. Similarly, by Chap. 7, Theorem
3.1, we also know that the series
\[
(4) \quad \sum_{(Y_n, \ldots, Y_2)} \prod_{j=2}^{n-1} |Y_j|^{-z_j}
\]
power of |Y2 |, in the above strip outside the unit discs around 0, 1, it follows
that the Eisenstein series from (1) converges absolutely in the domain
D1 = points in C^n with z1 in the strip Str(−2, 3) outside the discs of
radius 1 around 0, 1; and
Let
D2 = subdomain of D1 satisfying the further inequality Re(z2 ) > 6.
In terms of the variables z, we want to prove the functional equation
\[
\sum_{(Y_n, \ldots, Y_2)} \prod_{j=3}^{n-1} |Y_j|^{-z_j}\, |Y_2|^{-z_2}\, \Lambda_2(Y_2, z_1)
= \sum_{(Y_n, \ldots, Y_2)} \prod_{j=3}^{n-1} |Y_j|^{-z_j}\,
|Y_2|^{-z_1 - z_2 + 1/2}\, \Lambda_2(Y_2, 1 - z_1) .
\]
The series on both sides are convergent in D2 , so the formal argument is now
justified, and we have proved that
\[
z_1 \mapsto 1 - z_1 , \quad
z_2 \mapsto z_1 + z_2 - \tfrac{1}{2} , \quad
z_j \mapsto z_j \ (j = 3, \ldots, n-1) , \quad
s_n \mapsto s_n ,
\]
or the transposition of s1 and s2 .
This concludes the proof of Theorem 7.1.
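The equivalence between the z-substitution and the s-transposition is a two-line computation from z1 = s2 − s1 + 1/2 and z2 = s3 − s2 + 1/2; a quick numerical sketch (our own illustration, arbitrary test values):

```python
from math import isclose

s1, s2, s3 = 0.7, 1.9, 3.2          # arbitrary test values
z1 = s2 - s1 + 0.5
z2 = s3 - s2 + 0.5

# Transposing s1 and s2 produces new variables:
z1_new = s1 - s2 + 0.5              # should equal 1 - z1
z2_new = s3 - s1 + 0.5              # should equal z1 + z2 - 1/2

assert isclose(z1_new, 1 - z1)
assert isclose(z2_new, z1 + z2 - 0.5)
```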
Remark. Just as Maass gave convergence criteria for Eisenstein series with
more general parabolic groups [Maa 71], Sect. 7, he also gave the analytic
continuation and functional equation for these more general groups at the
end of Sect. 17, pp. 279–299.
Bibliography
[Hel 84] HELGASON, S.: Groups and Geometric Analysis. Academic Press
(1984).
[Her 55] HERZ, C.: Bessel functions of matrix arguments. Ann. Math. 61 (1955)
474–523.
[Hla 44] HLAWKA, E.: Zur Geometrie der Zahlen. Math. Zeitschr. 49 (1944)
285–312.
[Hör 66] HÖRMANDER, L.: An introduction to complex analysis in several variables. Van Nostrand, Princeton (1966).
[ImT 82] IMAI, K., and TERRAS, A.: Fourier expansions of Eisenstein series for
GL(3, Z). Trans. AMS 273 (1982) 679–694.
[JoL 99] JORGENSON, J., and LANG, S.: Hilbert-Asai Eisenstein series, regularized products, and heat kernels. Nagoya Math. J. 153 (1999) 155–188.
[JoL 01] JORGENSON, J., and LANG, S.: Spherical Inversion on SLn (R).
Springer-Verlag (2001).
[Kar 65] KARPELEVIC, F. I.: The geometry of geodesics and the eigenfunctions
of the Beltrami-Laplace operator on symmetric spaces. Trans. Moscow
Math. Obsc. 14 (1965) 48–185; Trans. Moscow Math. Soc. (1965) 51–
199.
[La 75/85] LANG, S.: SL2 (R). Addison-Wesley (1975); Springer-Verlag (1985).
[La 93] LANG, S.: Real and Functional Analysis. Graduate Texts in Mathemat-
ics 142 Springer-Verlag (1993).
[La 99] LANG, S.: Fundamentals of Differential Geometry. Springer-Verlag
(1999).
[Llds 76] LANGLANDS, R. P.: On the Functional Equations Satisfied by Eisenstein Series. Lecture Notes in Math. 544, Springer-Verlag (1976).
[Loo 69] LOOS, O.: Symmetric Spaces I and II. Benjamin (1969).
[Maa 55] MAASS, H.: Die Bestimmung der Dirichletreihen mit Grössencharakteren zu den Modulformen n-ten Grades. J. Indian Math. Soc. 19 (1955) 1–23.
[Maa 71] MAASS, H.: Siegel’s Modular Forms and Dirichlet Series. Lecture Notes
in Math. 216 Springer Verlag (1971).
[Min 1884] MINKOWSKI, H.: Grundlagen für eine Theorie der quadratischen Formen mit ganzzahligen Koeffizienten. Mémoire Académie des Sciences (1884). Collected Works I 3–144.
[Min 05] MINKOWSKI, H.: Diskontinuitätsbereich für arithmetische Äquivalenz.
J. reine angew. Math. 129 (1905) 270–274. Collected Works II 53–100.
[Moo 64] MOORE, C.: Compactifications of symmetric spaces II: The Cartan
domains. Amer. J. Math. 86 (1964) 358–378.
[Mos 53] MOSTOW, D.: Some new decomposition theorems for semi-simple
groups. Memoirs AMS (1953).
[Nar 68] NARASIMHAN, R.: Analysis on Real and Complex Manifolds. North
Holland (1968).
[Sat 56] SATAKE, I.: Compactification des espaces quotients de Siegel I. Séminaire Cartan 1957–58, 3 March 1958, 12–01.
[Sat 60] SATAKE, I.: On compactifications of the quotient spaces for arithmeti-
cally defined discontinuous groups. Ann. Math. 72 (1960) 555–580.
[Sel 56] SELBERG, A.: Harmonic analysis and discontinuous groups. J. Indian
Math. Soc. 20 (1956) 47–87.
SIEGEL, C. L.: Über die analytische Theorie der quadratischen Formen.
[Sie 35] Ann. Math. 36 (1935) 527–606.
[Sie 36] Ann. Math. 37 (1936) 230–263.
[Sie 37] Ann. Math. 38 (1937) 212–291.
SIEGEL, C. L.: Über die Zetafunktionen indefiniter quadratischer Formen.
[Sie 38] Ann. Math. 43 (1938) 682–708.
[Sie 39] Ann. Math. 44 (1939) 398–426.
[Sie 40] SIEGEL, C. L.: Einheiten quadratischer Formen. Abh. Math. Sem. Han-
sische Univ. 13 (1940) 209–239.
[Sie 41] SIEGEL, C. L.: Equivalence of quadratic forms. Amer. J. Math. 63
(1941) 658–680.
[Sie 43] SIEGEL, C. L.: Discontinuous groups. Ann. Math. 44 (1943) 674–689.
[Sie 44a] SIEGEL, C. L.: On the theory of indefinite quadratic forms. Ann. Math.
45 (1944) 577–622.
[Sie 44b] SIEGEL, C. L.: The average measure of quadratic forms with given
determinant and signature. Ann. Math. 45 (1944) 667–685.
[Sie 45] SIEGEL, C. L.: Some remarks on discontinuous groups. Ann. Math. 46
(1945) 708–718.
[Sie 48] SIEGEL, C. L.: Indefinite quadratische Formen und Modulfunktionen.
Courant Anniv. Volume (1948) 395–406.
[Sie 51] SIEGEL, C. L.: Indefinite quadratische Formen und Funktionentheorie,
I. Math. Ann. 124 (1951) 17–54; II, 364–387.
[Sie 55/56] SIEGEL, C. L.: Lectures on Quadratic Forms. Tata Institute, Bombay
(1955–56).
[Sie 59] SIEGEL, C. L.: Zur Reduktionstheorie quadratischer Formen. Publ. Math. Soc. Japan (1959). Collected Papers #72, Volume III, 275–327.
[Ter 80] TERRAS, A.: Integral formulas and integral tests for series of positive matrices. Pacific J. Math. 89 (1980) 471–490.
[Ter 85a] TERRAS, A.: The Chowla Selberg method for Fourier expansion of
higher rank Eisenstein series. Canad. Math. Bull. 28 (1985) 280–294.
[Ter 85b] TERRAS, A.: Harmonic Analysis on Symmetric Spaces and Applica-
tions, I. Springer-Verlag (1985).
[Ter 88] TERRAS, A.: Harmonic Analysis on Symmetric Spaces and Applica-
tions, II. Springer-Verlag (1988).
[ViT 82] VINOGRADOV, A., and TAKHTAZHAN, L.: Theory of Eisenstein se-
ries for the group SL(3, R) and its applications to a binary problem.
J. Soviet Math. 18 (1982) 293–324.
[Wal 73] WALLACH, N.: Harmonic Analysis on Homogeneous Spaces. Marcel
Dekker (1973).
[We 46] WEIL, A.: Sur quelques résultats de Siegel. Summa Braz. Math. 1
(1946) 21–39; Collected Papers I, Springer-Verlag (1979) 339–357.
Index