
Lecture Notes in Mathematics 1868

Editors:
J.-M. Morel, Cachan
F. Takens, Groningen
B. Teissier, Paris
Jay Jorgenson · Serge Lang

Posn(R) and
Eisenstein Series

Authors
Jay Jorgenson
City College of New York
138th and Convent Avenue
New York, NY 10031
USA
e-mail: jjorgenson@mindspring.com

Serge Lang
Department of Mathematics
Yale University
10 Hillhouse Avenue
PO Box 208283
New Haven, CT 06520-8283
USA

Library of Congress Control Number: 2005925188

Mathematics Subject Classification (2000): 43A85, 14K25, 32A50

ISSN print edition: 0075-8434


ISSN electronic edition: 1617-9692
ISBN-10 3-540-25787-X Springer Berlin Heidelberg New York
ISBN-13 978-3-540-25787-5 Springer Berlin Heidelberg New York

DOI 10.1007/b136063

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is
concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting,
reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication
or parts thereof is permitted only under the provisions of the German Copyright Law of September 9,
1965, in its current version, and permission for use must always be obtained from Springer. Violations
are liable for prosecution under the German Copyright Law.
Springer is a part of Springer Science+Business Media
springeronline.com
© Springer-Verlag Berlin Heidelberg 2005
Printed in Germany
The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply,
even in the absence of a specific statement, that such names are exempt from the relevant protective laws
and regulations and therefore free for general use.
Typesetting: TEX output by the author
Cover design: design & production GmbH, Heidelberg
Printed on acid-free paper SPIN: 11422372 41/3142/du 543210
Preface

We are engaged in developing a systematic theory of theta and zeta functions,
to be applied simultaneously to geometric and number theoretic situations in
a more extensive setting than has been done up to now. To carry out our
program, we had to learn some classical material in several areas, and it wasn't
clear to us what would simultaneously provide enough generality to show the
effectiveness of some new methods (involving the heat kernel, among other
things), while at the same time keeping knowledge of some background (e.g.
Lie theory) to a minimum. Thus we experimented with the quadratic model
of G/K in the simplest case G = GLn (R). Ultimately, we gave up on the
quadratic model, and reverted to the G/K framework used systematically by
the Lie industry. However, the quadratic model still serves occasionally to ver-
ify some things explicitly and concretely for instance in elementary differential
geometry.
The quadratic forms people see the situation on K\G, with right G-action.
We retabulated all the formulas with left G-action. Just this may be useful
for readers since the shift from right to left is ongoing, but not yet universal.
Some other people have found our notes useful. For instance, we in-
clude some reduction theory and Siegel’s formula (after Hlawka’s work). We
carry out with some variations material in Maass [Maa 71], dealing with
GLn (R), but also include more material than Maass. We have done some
things hinted at in Terras [Ter 88]. Her inclusion of proofs is very sporadic,
and she leaves too many “exercises” for the reader. Our exposition is self-
contained and can be used as a naive introduction to Fourier analysis and
special functions on spaces of type G/K, making it easier to get into more
sophisticated treatments.

Acknowledgements

Jorgenson thanks PSC-CUNY and the NSF for grant support. Lang thanks
Tony Petrello for his support of the Yale Mathematics Department. Both of
us thank him for support of our joint work. Lang also thanks the Max Planck
Institut for productive yearly visits. We thank Mel DelVecchio for her patience
in setting the manuscript in TEX, in a victory of person over machine.

February, 2005 J. Jorgenson


S. Lang
Contents

1 GLn (R) Action on Posn (R) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1


1 Iwasawa-Jacobi Decomposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
2 Inductive Construction
of the Grenier Fundamental Domain . . . . . . . . . . . . . . . . . . . . . . . . . 6
3 The Inductive Coordinates on SPosn . . . . . . . . . . . . . . . . . . . . . . . . . 14
4 The Grenier Fundamental Domain
and the Minimal Compactification of Γn \SPosn . . . . . . . . . . . . . . . 17
5 Siegel Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20

2 Measures, Integration and Quadratic Model . . . . . . . . . . . . . . . 23


1 Siegel Sets and Finiteness of Measure Mod SLn (Z) . . . . . . . . . . . . 24
2 Decompositions of Haar Measure on Posn (R) . . . . . . . . . . . . . . . . . 25
3 Decompositions of Haar Measure on SPosn . . . . . . . . . . . . . . . . . . . 36
4 Siegel’s Formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39

3 Special Functions on Posn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49


1 Characters of Posn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
2 The Gamma Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
3 The Bengtson Bessel Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
4 Mellin and Fourier Transforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69

4 Invariant Differential Operators on Posn (R) . . . . . . . . . . . . . . . 75


1 Invariant Polynomials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
2 Invariant Differential Operators
on Posn the Maass-Selberg Generators . . . . . . . . . . . . . . . . . . . . . . . 78
3 The Lie Algebra Generators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
4 The Transpose of an Invariant Differential Operator . . . . . . . . . . . 87
5 Invariant Differential Operators
on A and the Normal Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90

5 Poisson Duality and Zeta Functions . . . . . . . . . . . . . . . . . . . . . . . . 95


1 Poisson Duality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
2 The Matrix Scalar Product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
3 The Epstein Zeta Function: Riemann’s Expression . . . . . . . . . . . . . 99
4 Epstein Zeta Function: A Change of Variables . . . . . . . . . . . . . . . . . 104
5 Epstein Zeta Function: Bessel-Fourier Series . . . . . . . . . . . . . . . . . . 105

6 Eisenstein Series First Part . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107


1 Adjointness Relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
2 Fourier Expansion Determined
by Partial Iwasawa Coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
3 Fourier Coefficients from Partial Iwasawa Coordinates . . . . . . . . . . 114
4 A Fourier Expansion on SPosn (R) . . . . . . . . . . . . . . . . . . . . . . . . . . . 116

5 The Regularizing Operator Q_Y = |Y| |∂_Y| . . . . . . . . . . . . . . . . . . . . 118

7 Geometric and Analytic Estimates . . . . . . . . . . . . . . . . . . . . . . . . . 121


1 The Metric and Iwasawa Coordinates . . . . . . . . . . . . . . . . . . . . . . . . 121
2 Convergence Estimates for Eisenstein Series . . . . . . . . . . . . . . . . . . . 125
3 A Variation and Extension . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130

8 Eisenstein Series Second Part . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133


1 Integral Matrices and Their Chains . . . . . . . . . . . . . . . . . . . . . . . . . . 134
2 The ζQ Fudge Factor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
3 Eisenstein Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
4 Adjointness and the ΓU \Γ-trace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
5 Changing to the (s1 , . . . , sn )-variables . . . . . . . . . . . . . . . . . . . . . . . . 152
6 Functional Equation: Invariance
under Cyclic Permutations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
7 Invariance under All Permutations . . . . . . . . . . . . . . . . . . . . . . . . . . . 159

Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
1 GLn(R) Action on Posn(R)

Let G = GLn(R) or SLn(R) and Γn = GLn(Z). Let Posn(R) be the space
of positive symmetric real n × n matrices. Recall that symmetric real n × n
matrices Z have an ordering, defined by Z ≥ 0 if and only if ⟨Zx, x⟩ ≥ 0 for
all x ∈ Rn. We write Z1 ≥ Z2 if and only if Z1 − Z2 ≥ 0. If Z ≥ 0 and Z is
non-singular, then Z > 0, and in fact Z ≥ λI if λ is the smallest, necessarily
positive, eigenvalue.
The group G acts on Posn(R) by associating with each g ∈ G the automorphism
(for the C∞ or real analytic structure) of Posn given by

[g]Z = gZ ^t g .

We are interested in Γn\Posn(R), and we are especially interested in its
topological structure, coordinate representations, and compactifications, which
then allow effective computations of volumes, spectral analysis, differential
geometric invariants such as curvature, heat kernels, and whatever else
comes up.
The present chapter deals with finding inductively a nice fundamental do-
main and establishing coordinates which are immediately applied to describe
Grenier’s compactification, following Satake.
Quite generally, let X be a locally compact topological space, and let Γ
be a discrete group acting on X. Let Γ0 be the kernel of the representation
Γ → Aut(X). A strict fundamental domain F for Γ is a Borel measurable
subset of X such that X is the disjoint union of the translates γF for γ ∈ Γ/Γ0 .
In most practices, X is also a C ∞ manifold of finite dimension. We define a
fundamental domain F to be a measurable subset of X such that X is
the union of the translates γF , and if γx ∈ F for some γ ∈ Γ, and γ does
not act trivially on X, then x and γx are on the boundary of F . In practice,
this boundary will be reasonable, and in particular, in the cases we look at,
this boundary will consist of a finite union of hypersurfaces. By resolution of
singularities, the boundary can then be parametrized by C ∞ maps defined on
cubes of Euclidean space of dimension ≤ dim X − 1. Thus the boundary has
n-dimensional measure 0.

Jay Jorgenson: Posn(R) and Eisenstein Series, Lect. Notes Math. 1868, 1–22 (2005)
www.springerlink.com © Springer-Verlag Berlin Heidelberg 2005

In this chapter, we have essentially reproduced aspects of Grenier's
papers [Gre 88] and [Gre 93]. He carried out on GLn(R) and SLn(R) Satake's
compactification of the Siegel upper half space [Sat 56], [Sat 58], see also Sa-
take’s general results [Sat 60]. It was useful to have Grenier’s special case
worked out in the literature, especially Grenier’s direct inductive method.
Note that to a large extent, this chapter proves results compiled in Borel
[Bor 69], with a variation of language and proofs. These are used systemati-
cally in treatments of Eisenstein series, partly later in this book, and previ-
ously for instance in Harish-Chandra [Har 68].

1 Iwasawa-Jacobi Decomposition
Let:
G = Gn = GLn(R)
Posn = Posn(R) = space of symmetric positive real matrices
K = O(n) = Unin(R) = group of real unitary n × n matrices
U = group of real unipotent upper triangular matrices, i.e. of the form

u(X) = u = \begin{pmatrix} 1 & & x_{ij} \\ & \ddots & \\ 0 & & 1 \end{pmatrix}, so u(X) = I + X,

with
X = (x_{ij}), 1 ≤ i < j ≤ n .

A = group of diagonal matrices with positive components,

a = \begin{pmatrix} a_1 & & 0 \\ & \ddots & \\ 0 & & a_n \end{pmatrix}, a_i > 0 for all i .

Theorem 1.1. The product mapping

U × A × K → U AK = G

is a differential isomorphism. Actually, the map

U × A → Posn(R) given by (u, a) ↦ u a ^t u

is a differential isomorphism.

Proof. Let {e1, . . . , en} be the standard unit vectors of Rn, and let x ∈ GLn(R).
Let vi = x ei. We orthogonalize {v1, . . . , vn} by the standard Gram-Schmidt
process, so we let

w1 = v1,
w2 = v2 − c21 w1, with w2 ⊥ w1,
w3 = v3 − c32 w2 − c31 w1, with w3 ⊥ w1 and w2,

and so on. Then e′_i = w_i/‖w_i‖ is a unit vector, and the matrix a having
‖w_i‖^{−1} for its diagonal elements is in A. Let k = aux, so x = u^{−1} a^{−1} k. Then
k is unitary, which proves that G = UAK. To show uniqueness, suppose that

u1 a ^t u1 = u2 b ^t u2 with u1, u2 ∈ U and a, b ∈ A;

then putting u = u2^{−1} u1 we find

u a ^t u = b .

Since u and ^t u are triangular in opposite directions, they must be diagonal,
and finally a = b. That the decomposition is differentially a product is proved
by computing the Jacobian of the product map, done in Chap. 2.
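Theorem 1.1 can also be checked numerically. The following NumPy sketch (the function name `iwasawa_uau` is ours) computes the factorization Z = u a ^t u of a positive symmetric matrix via a reversed Cholesky factorization; this is an implementation shortcut of our own choosing, not the Jacobian argument of the text.

```python
import numpy as np

def iwasawa_uau(Z):
    """Factor a positive symmetric Z as Z = u a u^T with u unipotent
    upper triangular and a a positive diagonal matrix (Theorem 1.1)."""
    n = Z.shape[0]
    J = np.fliplr(np.eye(n))            # antidiagonal permutation, J = J^T = J^{-1}
    L = np.linalg.cholesky(J @ Z @ J)   # J Z J = L L^T with L lower triangular
    U = J @ L @ J                       # upper triangular, and Z = U U^T
    d = np.diag(U)                      # positive diagonal of U
    u = U / d                           # divide column j by d[j]: unipotent upper
    a = np.diag(d ** 2)                 # then u a u^T = U U^T = Z
    return u, a
```

Uniqueness of the factorization corresponds to the uniqueness of the Cholesky factor with positive diagonal.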

The group K is the subset of elements of G fixed under the involution

g ↦ ^t g^{−1} .

We write the transpose on the left to balance the inverse on the right. We
have a surjective mapping

G → Posn given by g ↦ g ^t g .

This mapping gives a bijection of the coset space

ϕ : G/K → Posn ,

and this bijection is a real analytic isomorphism. Furthermore, the group G
acts on Posn by a homomorphism g ↦ [g] ∈ Aut(Posn), where [g] is given by
the formula

[g]p = g p ^t g .

This action is on the left, contrary to right wing action by some people. On
the other hand, there is an action of G on the coset space G/K by translation

τ : G → Aut(G/K) such that τ(g) g1 K = g g1 K .

Under the bijection ϕ, a translation τ(g) corresponds precisely to the action
[g].
Next we tabulate some results on partial (inductive) Iwasawa decompositions.
These results are purely algebraic, and do not depend on real matrices
or positivity. They depend only on routine matrix computations, and it will
prove useful to have gotten them out of the way systematically.
Let G = GLn denote the general linear group, wherever it has its components.
Vectors are column vectors. An element g ∈ GLn can be written

g = \begin{pmatrix} A & b \\ {}^t c & d \end{pmatrix}

where b and c are (n − 1)-vectors, so ^t c is a row vector of dimension n − 1, A
is an (n − 1) × (n − 1) matrix, and d is a scalar. We write

d = dn(g)

for this lower right corner of g.


For induction purposes, we do not deal with fully diagonal matrices but
with an inductive decomposition

\begin{pmatrix} W & 0^{(n-1)} \\ {}^t 0^{(n-1)} & v \end{pmatrix} = \begin{pmatrix} W & 0 \\ 0 & v \end{pmatrix} with W ∈ GLn−1 and v a scalar ≠ 0 .

We have the left action of GL on Matn given on a matrix M by

[g]M = g M ^t g,

so g ↦ [g] is a representation. For an (n − 1)-vector x, we denote

u(x) = \begin{pmatrix} I_{n-1} & x \\ 0 & 1 \end{pmatrix} .

We write ^t x = (x1, . . . , xn−1). Then x ↦ u(x) is an injective homomorphism.
In particular,

u(x)^{−1} = u(−x) .

The usual matrix multiplication works to yield

(1)   g u(x) = \begin{pmatrix} A & Ax + b \\ {}^t c & {}^t c x + d \end{pmatrix} .

An expression

Z = [u(x)] \begin{pmatrix} W & 0 \\ 0 & v \end{pmatrix} = \begin{pmatrix} W + [x]v & xv \\ v\,{}^t x & v \end{pmatrix}

for a matrix Z will be called a first order Iwasawa decomposition of Z.
We note that with such a decomposition, we have

(2)   dn(Z) = v .

Straightforward matrix multiplication yields the expression:

(3)   [g]Z = [g][u(x)] \begin{pmatrix} W & 0 \\ 0 & v \end{pmatrix}
          = \begin{pmatrix} [A]W + [Ax+b]v & AWc + (Ax+b)v({}^t c x + d) \\ {}^t c W {}^t A + ({}^t c x + d)v\,{}^t(Ax+b) & [{}^t c]W + [{}^t c x + d]v \end{pmatrix} .

In particular,

(4)   dn([g]Z) = [{}^t c]W + [{}^t c x + d]v = [{}^t c]W + ({}^t c x + d)^2 v .

Indeed, ^t c x is a scalar, so is ^t c x + d, so [{}^t c x + d]v = ({}^t c x + d)^2 v. Note that
directly from matrix multiplication, one has also

(5)   dn([g]Z) = [{}^t c, d]Z .
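Formulas (4) and (5) are purely algebraic and hold for any g; here is a quick numerical sanity check (NumPy, all variable names ours):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

# Z = [u(x)] diag(W, v) with W positive symmetric, v > 0
B = rng.standard_normal((n - 1, n - 1))
W = B @ B.T + np.eye(n - 1)
v = 2.0
x = rng.standard_normal(n - 1)
ux = np.eye(n)
ux[:-1, -1] = x                              # the matrix u(x)
D = np.block([[W, np.zeros((n - 1, 1))],
              [np.zeros((1, n - 1)), np.array([[v]])]])
Z = ux @ D @ ux.T

# arbitrary integral g with bottom row (^t c, d)
g = rng.integers(-3, 4, size=(n, n)).astype(float)
c, d = g[-1, :-1], g[-1, -1]

lhs = (g @ Z @ g.T)[-1, -1]                  # d_n([g]Z)
rhs4 = c @ W @ c + (c @ x + d) ** 2 * v      # formula (4)
row = np.append(c, d)
rhs5 = row @ Z @ row                         # formula (5)
```

The three quantities agree up to floating-point roundoff.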

For later purposes, we record the action of a semidiagonalized matrix:

(6)   \left[\begin{pmatrix} A & 0 \\ 0 & 1 \end{pmatrix}\right] Z = \left[\begin{pmatrix} A & 0 \\ 0 & 1 \end{pmatrix}\right]\left[\begin{pmatrix} I_{n-1} & x \\ 0 & 1 \end{pmatrix}\right] \begin{pmatrix} W & 0 \\ 0 & v \end{pmatrix}
        = \left[\begin{pmatrix} I_{n-1} & Ax \\ 0 & 1 \end{pmatrix}\right] \begin{pmatrix} [A]W & 0 \\ 0 & v \end{pmatrix} .

One way to see this is to multiply both sides of (6) by

[u(−Ax)] = [u(Ax)]^{−1} ,

and to verify directly the identity

(7)   \begin{pmatrix} I_{n-1} & -Ax \\ 0 & 1 \end{pmatrix} \begin{pmatrix} A & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} I_{n-1} & x \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} A & 0 \\ 0 & 1 \end{pmatrix} .

We have a trivial action

(8)   \left[\begin{pmatrix} I_{n-1} & 0 \\ 0 & -1 \end{pmatrix}\right] \begin{pmatrix} W & 0 \\ 0 & v \end{pmatrix} = \begin{pmatrix} W & 0 \\ 0 & v \end{pmatrix} .

In other words, \begin{pmatrix} I_{n-1} & 0 \\ 0 & -1 \end{pmatrix} acts as the identity on a semidiagonalized matrix
\begin{pmatrix} W & 0 \\ 0 & v \end{pmatrix}. On the other hand, on u(x) we can effect a change of sign by the
transformation

(9)   \left[\begin{pmatrix} I_{n-1} & 0 \\ 0 & -1 \end{pmatrix}\right] \begin{pmatrix} I_{n-1} & x \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} I_{n-1} & -x \\ 0 & 1 \end{pmatrix} .

We then derive the identity

(10)   \left[\begin{pmatrix} I_{n-1} & 0 \\ 0 & -1 \end{pmatrix} \begin{pmatrix} I_{n-1} & x \\ 0 & 1 \end{pmatrix}\right] \begin{pmatrix} W & 0 \\ 0 & v \end{pmatrix} = \left[\begin{pmatrix} I_{n-1} & -x \\ 0 & 1 \end{pmatrix}\right] \begin{pmatrix} W & 0 \\ 0 & v \end{pmatrix} .

Indeed, in the left side of (10) we insert

\begin{pmatrix} I_{n-1} & 0 \\ 0 & -1 \end{pmatrix} \begin{pmatrix} I_{n-1} & 0 \\ 0 & -1 \end{pmatrix} = \begin{pmatrix} I_{n-1} & 0 \\ 0 & 1 \end{pmatrix}

just before \begin{pmatrix} W & 0 \\ 0 & v \end{pmatrix}, and use (8), (9) to obtain the right side, and thus prove
(10).
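Identities (8), (9), (10) are pure matrix bookkeeping and can be verified numerically; a small NumPy check (all names ours):

```python
import numpy as np

n = 4
rng = np.random.default_rng(2)
B = rng.standard_normal((n - 1, n - 1))
W = B @ B.T + np.eye(n - 1)                 # symmetric block W
v = 1.5
x = np.array([0.3, -1.2, 0.7])

def u(vec):
    """The matrix u(vec) = (I_{n-1}, vec; 0, 1)."""
    m = np.eye(n)
    m[:-1, -1] = vec
    return m

g0 = np.diag([1.0] * (n - 1) + [-1.0])      # diag(I_{n-1}, -1)
D = np.block([[W, np.zeros((n - 1, 1))],
              [np.zeros((1, n - 1)), np.array([[v]])]])

left = (g0 @ u(x)) @ D @ (g0 @ u(x)).T      # left side of (10)
right = u(-x) @ D @ u(-x).T                 # right side of (10)
```

The bracket action [g]M = gM ^t g is written out explicitly since g0 and u(x) are not orthogonal.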

2 Inductive Construction
of the Grenier Fundamental Domain
This section is taken from [Gre 88].
Throughout the section, we let:
G = GLn(R)
Γ = GLn(Z)
Posn = Posn(R) = space of symmetric positive n × n real matrices.
We write Z > 0 for positivity. We use the action of G on Posn given by

g ↦ [g], where [g]Z = g Z ^t g .

Thus g ↦ [g] is now viewed as a representation of G in Aut(Posn). We note
that the kernel of this representation in Γn is ±In; in other words, if g ∈ Γn
and [g] = id then g = ±In.
We use the notation of Sect. 1. An element Z ∈ Posn has a first order
Iwasawa decomposition

Z = \left[\begin{pmatrix} I_{n-1} & x \\ 0 & 1 \end{pmatrix}\right] \begin{pmatrix} W & 0 \\ 0 & v \end{pmatrix}

with W ∈ Posn−1 and v ∈ R+.


Since we shall deal with the discrete group Γn, the following fact from
algebra is useful to remember.
Let R be a principal ideal ring. A vector in ^t R^n is primitive, i.e. has
relatively prime components, if and only if this vector can be completed as the
first (or any) row of a matrix in GLn(R).
This fact is immediately proved by induction. In dealing with dn([g]Z) and
g ∈ Γn, we note that this lower right component depends only on the integral
row vector (^t c, d) ∈ ^t Z^n. Here we continue to use the notation of Sect. 1, that
is

g = \begin{pmatrix} A & b \\ {}^t c & d \end{pmatrix} .
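For R = Z and n = 2, the completion of a primitive vector can be made explicit with the extended Euclidean algorithm; a sketch (the function name is ours, and we restrict to the SL2 case):

```python
from math import gcd

def complete_bottom_row(c, d):
    """Given coprime integers (c, d), return (a, b) with a*d - b*c = 1,
    so that [[a, b], [c, d]] lies in SL2(Z) with bottom row (c, d)."""
    if gcd(c, d) != 1:
        raise ValueError("(c, d) must be primitive")
    # extended Euclid on (d, c): maintain old_s*d + old_t*c = old_r
    old_r, r = d, c
    old_s, s = 1, 0
    old_t, t = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
        old_t, t = t, old_t - q * t
    # old_r = ±1; multiply through by old_r to normalize the sign
    return old_s * old_r, -old_t * old_r
```

For general n the same fact follows by induction, completing one coordinate at a time.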
Note that we have an obvious lower bound for v. If λ is the smallest
eigenvalue of Z (necessarily > 0), then using the n-th unit vector e_n and the
inequality [{}^t e_n]Z ≥ λ‖e_n‖^2 we find

(1)   v ≥ λ .

For n ≥ 2, we define the set Fn to consist of those Z ∈ Posn such that:

Fun 1. dn(Z) ≤ dn([g]Z) for all g ∈ Γn, or in terms of coordinates,

v ≤ [{}^t c]W + ({}^t c x + d)^2 v

for all (^t c, d) primitive in Z^n.

Fun 2. W ∈ Fn−1.
Fun 3. 0 ≤ x1 ≤ 1/2 and |xj| ≤ 1/2 for j = 2, . . . , n − 1.
Minkowski had defined a fundamental domain Minn by the following
conditions on matrices Z = (zij) ∈ Posn:

Min 1. For all a ∈ Z^n with (ai, . . . , an) = 1 we have [{}^t a]Z ≥ zii.
Min 2. z_{i,i+1} ≥ 0 for i = 1, . . . , n − 1.

Minkowski's method is essentially that followed by Siegel in numerous works,
for instance [Sie 40], [Sie 55/56]. Grenier's induction (following Satake) is simpler
in several respects, and we shall not use Minkowski's in general. Grenier
followed a recursive idea of Hermite. However, we shall now see that for n = 2,
F2 is the same as Minkowski's Min2.
The case n = 2.
We tabulate the conditions in this case. The positivity conditions imply
at once that v, w > 0, so Fun 2 doesn't amount to anything more. We have
with x ∈ R:

Z = \begin{pmatrix} 1 & x \\ 0 & 1 \end{pmatrix} \begin{pmatrix} w & 0 \\ 0 & v \end{pmatrix} \begin{pmatrix} 1 & 0 \\ x & 1 \end{pmatrix} = \left[\begin{pmatrix} 1 & x \\ 0 & 1 \end{pmatrix}\right] \begin{pmatrix} w & 0 \\ 0 & v \end{pmatrix} .

The remaining Fun conditions read:

Fun 1. v ≤ c^2 w + (cx + d)^2 v for all primitive vectors (c, d).
Fun 3. 0 ≤ x ≤ 1/2.
Proposition 2.1. For v, w > 0 the above conditions are equivalent to the
conditions

v ≤ w + x^2 v and 0 ≤ x ≤ 1/2 .

Under these conditions, we have

w ≥ (3/4) v .

Proof. The inequality v ≤ w + x^2 v comes by taking c = 1, d = 0. Then

w ≥ v(1 − x^2),

and since 0 ≤ x ≤ 1/2, the inequality w ≥ 3v/4 follows. Then it also follows
that for all primitive pairs (c, d) of integers, we have Fun 1 (immediate
verification), thus proving the proposition.
Write Z in terms of its coordinates:

Z = \begin{pmatrix} z_{11} & z_{12} \\ z_{12} & z_{22} \end{pmatrix} = \begin{pmatrix} w + x^2 v & xv \\ xv & v \end{pmatrix} .

Proposition 2.2. The inequalities Fun 1 and Fun 3 are equivalent to:

0 ≤ 2 z12 ≤ z22 ≤ z11 .

Thus F2 is the same as the Minkowski fundamental domain Min2. If z12 = 0,
then the inequalities are equivalent to 0 < z22 ≤ z11.

Proof. The equivalence is immediate in light of the explicit determination of
the zij in terms of v, w and x, coming from the equality of matrices above.
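The equivalence of Proposition 2.2 can be spot-checked numerically on random coordinates (x, w, v); the helper names below are ours:

```python
import numpy as np

def fun_conditions(x, w, v):
    # Fun 1 in the reduced form of Proposition 2.1, together with Fun 3
    return (v <= w + x * x * v) and (0 <= x <= 0.5)

def mink_conditions(x, w, v):
    # Minkowski: 0 <= 2 z12 <= z22 <= z11 for Z = [u(x)] diag(w, v)
    z11, z12, z22 = w + x * x * v, x * v, v
    return 0 <= 2 * z12 <= z22 <= z11

rng = np.random.default_rng(3)
agree = all(
    fun_conditions(x, w, v) == mink_conditions(x, w, v)
    for x, w, v in zip(rng.uniform(-1, 1, 500),
                       rng.uniform(0.1, 3, 500),
                       rng.uniform(0.1, 3, 500))
)
```

Since z12 = xv and z22 = v with v > 0, the two sets of inequalities translate into each other term by term.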
After tabulating the case n = 2, we return to the general case. We shall
prove by induction:

Theorem 2.3. The set Fn is a fundamental domain for GLn(Z) on Posn(R).

Proof. The case n = 2 follows from the special tabulation in Proposition 2.2.
So let n ≥ 3 and assume Fn−1 is a fundamental domain. Let Z ∈ Posn and
let g ∈ GLn(Z) have the matrix expression of Sect. 1, so A, b, c, d are integral
matrices. We begin by showing:

Given a positive number r and Z ∈ Posn, there is only a finite number of
primitive (^t c, d) ∈ ^t Z^n (bottom row of some g ∈ GLn(Z)) such that

dn([g]Z) ≤ r .

Proof. Since W ∈ Posn−1 we have W ≥ λ In−1 for some λ > 0, and hence

[{}^t c]W ≥ λ {}^t c c for all c ∈ Z^{n−1} .

Hence there is only a finite number of c ∈ Z^{n−1} such that [{}^t c]W ≤ r. Then
from the inequality

({}^t c x + d)^2 v ≤ r,

we conclude that there is only a finite number of d ∈ Z satisfying this
inequality, as was to be shown.
We next prove that every element of Posn may be translated by some
element of Γ into a point of Fn. Without loss of generality, in light of the
above finiteness, we may assume that dn(Z) = v is minimal for all elements
in the Γ-orbit [Γ]Z. By induction, there exists a matrix

A ∈ GLn−1(Z) = Γn−1 such that [A]W ∈ Fn−1 .

We let

g = \begin{pmatrix} A & 0 \\ 0 & 1 \end{pmatrix} ∈ Γn .

Then by (6) of Sect. 1,

[g]Z = \left[\begin{pmatrix} I_{n-1} & Ax \\ 0 & 1 \end{pmatrix}\right] \begin{pmatrix} [A]W & 0 \\ 0 & v \end{pmatrix} .

Thus we have at least satisfied the condition Fun 2. Without loss of generality,
we may now assume that W ∈ Fn−1, since dn does not change under the
action of a semidiagonalized element g as above, with dn(g) = 1.
Now by acting with g = u(b) with b ∈ Z^{n−1} and using the homomorphic
property u(x + b) = u(b)u(x), we may assume without loss of generality that
|xj| ≤ 1/2 for all j. Finally, using (10) of Sect. 1, we may change the sign of x
if necessary so that 0 ≤ x1, thus concluding the proof that some element in
the orbit [Γ]Z satisfies the three Fun conditions.
There remains to prove that if Z and [g]Z ∈ Fn with g ∈ Γn, then Z and
[g]Z are on the boundary, or [g] = id on X, that is g = ±In. We again prove
this by induction, it being true for n = 2 by Proposition 2.2, so we assume the
result for Fn−1, and we suppose Z, [g]Z are both in Fn. Then from Fun 1,

dn(Z) = dn([g]Z), that is, v = [{}^t c]W + ({}^t c x + d)^2 v .

If c ≠ 0, then Z and [g]Z are on the boundary, because the boundary is defined
among other conditions by this hypersurface equality coming from Fun 1.

If c = 0, then this equality reads v = v d^2, so d = ±1 and

g = \begin{pmatrix} A & b \\ 0 & ±1 \end{pmatrix} .

Since det(g) = ±1 because g ∈ GLn(Z), it follows that det A = ±1, or, in
other words, A ∈ GLn−1(Z). We have

[g]Z = \left[\begin{pmatrix} I_{n-1} & ±Ax + b \\ 0 & 1 \end{pmatrix}\right] \begin{pmatrix} [A]W & 0 \\ 0 & v \end{pmatrix} .

Then [A]W ∈ Fn−1, so by induction:

either W, [A]W ∈ boundary of Fn−1,
or A = ±In−1.

If W, [A]W ∈ boundary of Fn−1, then Z and [g]Z ∈ boundary of Fn. On the
other hand, if A = ±In−1, then

g = \begin{pmatrix} ±I_{n-1} & b \\ 0 & ±1 \end{pmatrix} ,

and therefore

[g]Z = \left[\begin{pmatrix} I_{n-1} & ±x ± b \\ 0 & 1 \end{pmatrix}\right] \begin{pmatrix} W & 0 \\ 0 & v \end{pmatrix} .

That Z, [g]Z ∈ Fn implies that

0 ≤ x1 ≤ 1/2 and 0 ≤ ±x1 ± b1 ≤ 1/2 ;
|xj|, |±xj ± bj| ≤ 1/2 .
Since b ∈ Z^{n−1}, we find:

either xj = ±1/2, bj = ±1 for j = 2, . . . , n − 1 and x1 = 1/2, b1 = 1;
or xj ≠ ±1/2 and bj = 0 for all j.

It follows that either Z and [g]Z are on the boundary of Fn determined by
the x-coordinate, or b = 0, in which case

g = ±\begin{pmatrix} I_{n-1} & 0 \\ 0 & ±1 \end{pmatrix} .

If g = \begin{pmatrix} I_{n-1} & 0 \\ 0 & -1 \end{pmatrix}, then by (10) of Sect. 1, we have
" #Ã !
In−1 −x W 0
[g]Z = ,
0 1 0 v

and both 0 5 x1 5 12 and 0 5 −x1 5 12 , so x1 = 0 and Z, [g]Z are in the


boundary. This concludes the proof of the theorem.
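For n = 2, the steps of the proof (minimize the corner d2, translate x into [0, 1/2], fix the sign) yield an explicit reduction algorithm, essentially Gauss reduction of binary quadratic forms. A NumPy sketch, with the function name and the termination guard being our own choices:

```python
import numpy as np

def reduce_to_F2(Z, max_iter=1000):
    """Move a 2x2 positive symmetric Z into the domain
    0 <= 2 z12 <= z22 <= z11 by the action Z -> g Z g^T, g in GL2(Z)."""
    Z = Z.astype(float)
    for _ in range(max_iter):
        # translate: bring x = z12/z22 into [-1/2, 1/2] with g = u(b)
        b = -int(np.rint(Z[0, 1] / Z[1, 1]))
        g = np.array([[1.0, b], [0.0, 1.0]])
        Z = g @ Z @ g.T
        # change of sign: make z12 >= 0, using g = diag(1, -1)
        if Z[0, 1] < 0:
            Z[0, 1] = Z[1, 0] = -Z[0, 1]
        # minimality of the corner: swap if z11 < z22, else done
        if Z[0, 0] < Z[1, 1]:
            Z = Z[::-1, ::-1].copy()    # conjugate by the flip matrix
        else:
            return Z
    raise RuntimeError("reduction did not terminate")
```

Each swap strictly decreases z22, and the values of the form at integer vectors are discrete, so the loop terminates.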
Theorem 2.4. The fundamental domain Fn can be defined by a finite number
of inequalities, which can be determined inductively explicitly. Its boundary
consists of a finite number of hypersurfaces.
Proof. Proposition 2.2 gives the result for n = 2, so let n ≥ 3, and assume
the result for Fn−1. Conditions Fun 2 and Fun 3 clearly consist only of a
finite number of inequalities, with equalities defining the boundary. So we
are concerned whether the conditions Fun 1 involve only a finite number of
conditions, that is

v ≤ [{}^t c]W + ({}^t c x + d)^2 v for all primitive ({}^t c, d) .

Since W ∈ Fn−1, we may write

W = \left[\begin{pmatrix} I_{n-2} & x' \\ 0 & 1 \end{pmatrix}\right] \begin{pmatrix} W' & 0 \\ 0 & v' \end{pmatrix} with v' > 0, W' ∈ Fn−2, |x'| ≤ 1/2,

where |x'| is the sup norm of x'. By induction, there is only a finite number
of inequalities

v' ≤ [{}^t c']W' + ({}^t c' x' + d')^2 v' .

By straight matrix multiplication,

[{}^t c]W = [{}^t c^{(n−2)}]W' + ({}^t c^{(n−2)} x' + c_{n−1})^2 v'

where ^t c^{(n−2)} = (c1, . . . , cn−2). Thus there is only a finite number of vectors
^t c = (^t c^{(n−2)}, cn−1), because we may take c^{(n−2)} among the choices for c'.
Then with the bounds on the coordinates xj, there is only a finite number
of d ∈ Z which will satisfy the inequalities Fun 1. This concludes the proof
of the general finiteness statements. In addition, as Grenier remarks, the
finite number of inequalities can be determined explicitly. For this and other
purposes, one uses:
Lemma 2.5. Let Z ∈ Fn,

Z = \left[\begin{pmatrix} I_{n-1} & x \\ 0 & 1 \end{pmatrix}\right] \begin{pmatrix} W & 0 \\ 0 & v \end{pmatrix}

as before. Let zi = zii be the i-th diagonal element of Z, and wi = wii the i-th
diagonal element of W. Then for i = 1, . . . , n − 1,

v ≤ zi ≤ (4/3) wi .

Proof. From Fun 1, with (^t c, d) primitive, we consider the values d = 0 and
c = ei (the i-th unit vector). Then

[{}^t c]W = wi ,   {}^t c x = xi ,

so Fun 1 yields v ≤ wi + xi^2 v = zi. Then also

v ≤ wi/(1 − xi^2) ≤ (4/3) wi , whence zi ≤ wi + (1/4) v ≤ (4/3) wi ,

thus concluding the proof.

To get the explicit finite number of inequalities for the Grenier fundamental
domain, one simply follows through the inductive procedure using Lemma 2.5,
cf. [Gre 88], pp. 301-302.
In the sequel we use systematically the above notation:

zi = zii = i-th diagonal component of the matrix Z .

We conclude this section with further inequalities which are usually stated
and proved in the context of so-called "reduction theory", for elements of
Posn in the Minkowski fundamental domain. These inequalities, as well as
their applications, hold for the Grenier fundamental domain, cf. [Gre 88],
Theorem 2, which we reproduce.

Theorem 2.6. For Z ∈ Posn we have |Z| ≤ z1 . . . zn, and for Z ∈ Fn,

|Z| ≤ z1 . . . zn ≤ (4/3)^{n(n−1)/2} |Z| .

So

|Z| ≥ (3/4)^{n(n−1)/2} z_n^n .
Proof. We prove the first (universal) inequality by induction. As before, we use
the first order Iwasawa decomposition of Z with the matrix W. The theorem
is trivial for n = 1. Assume it for n − 1. Then zi = wi + xi^2 v (i = 1, . . . , n − 1),
so by induction,

|W| ≤ w1 . . . wn−1 ≤ z1 . . . zn−1 .

Hence

|Z| = |W| v = |W| zn ≤ z1 . . . zn ,

which is the desired universal inequality.

Next suppose Z ∈ Fn. Again, we use induction for the right inequality.
For n = 2, from Proposition 2.2 we get 2 z12 ≤ z2 ≤ z1. Therefore

z12^2 ≤ (1/4) z1 z2 ,

whence

|Z| ≥ (3/4) z1 z2 , or z1 z2 ≤ (4/3) |Z| ,
which takes care of n = 2. Assume the inequality for n − 1. In the first order
Iwasawa decomposition, we have W ∈ Fn−1. Then

z1 . . . zn / |Z| = z1 . . . zn / (|W| v) = z1 . . . zn−1 / |W|
    ≤ (4/3)^{n−1} w1 . . . wn−1 / |W|        by Lemma 2.5
    ≤ (4/3)^{(n−1) + (n−1)(n−2)/2}          by induction and W ∈ Fn−1
    ≤ (4/3)^{n(n−1)/2} ,

thus proving the desired inequality z1 . . . zn ≤ (4/3)^{n(n−1)/2} |Z|. The final
inequality then follows at once from Lemma 2.5, that is zi ≥ v = zn for all i.
This concludes the proof.
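Both inequalities of Theorem 2.6 are easy to test numerically: the universal one on random positive matrices, and the reduced one on elements of F2 built directly from coordinates via Proposition 2.1 (all names ours):

```python
import numpy as np

rng = np.random.default_rng(5)

# universal inequality |Z| <= z_1 ... z_n for any positive symmetric Z
B = rng.standard_normal((5, 5))
Z = B @ B.T + np.eye(5)
universal_ok = np.linalg.det(Z) <= np.prod(np.diag(Z))

# reduced inequality z_1 z_2 <= (4/3)|Z| on F_2, built from coordinates:
# 0 <= x <= 1/2, v > 0, w >= v(1 - x^2)  (Proposition 2.1)
reduced_ok = True
for x, v, s in zip(rng.uniform(0, 0.5, 200),
                   rng.uniform(0.1, 2, 200),
                   rng.uniform(0, 2, 200)):
    w = v * (1 - x * x) + s
    z1, z2, detZ = w + x * x * v, v, w * v    # |Z| = wv for Z = [u(x)] diag(w, v)
    reduced_ok = reduced_ok and (z1 * z2 <= (4.0 / 3.0) * detZ + 1e-12)
```

The universal inequality is the positive-definite case of Hadamard's inequality.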
We give an application following Maass [Maa 71], Sect. 9, formula (8),
which is used in proving the convergence of certain Eisenstein series. In order
to simplify the notation, we write

c1 = c1(n) = (4/3)^{n(n−1)/2} .
Theorem 2.7. Let Z ∈ Fn. Let Zdia be the diagonal matrix whose diagonal
components are the same as those of Z. Then as operators on R^n,

(1/(n^{n−1} c1)) Zdia ≤ Z ≤ n Zdia .

Proof. Let r1, . . . , rn be the eigenvalues of [Zdia^{−1/2}]Z. Then

r1 + . . . + rn = tr([Zdia^{−1/2}]Z) = tr(Z Zdia^{−1}) = n

and

r1 . . . rn = |Z| · |Zdia|^{−1} ≥ c1^{−1}

by Theorem 2.6. Hence for all i = 1, . . . , n,

ri < n and ri ≥ 1/(n^{n−1} c1) .

Therefore

(1/(n^{n−1} c1)) In ≤ Zdia^{−1/2} Z Zdia^{−1/2} ≤ n In .

If A > 0 and C is invertible symmetric, then CAC > 0, so if A ≤ B then
CAC ≤ CBC, and we use C = Zdia^{1/2} to conclude the proof.
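Theorem 2.7 can be illustrated for n = 2, where c1 = 4/3 and the bounds are 3/8 and 2; the sketch below builds elements of F2 from coordinates as in Proposition 2.1 (names ours):

```python
import numpy as np

n = 2
c1 = (4.0 / 3.0) ** (n * (n - 1) / 2)          # c_1(2) = 4/3
lo, hi = 1.0 / (n ** (n - 1) * c1), float(n)   # bounds 3/8 and 2

rng = np.random.default_rng(6)
ok = True
for x, v, s in zip(rng.uniform(0, 0.5, 100),
                   rng.uniform(0.1, 2, 100),
                   rng.uniform(0, 2, 100)):
    w = v * (1 - x * x) + s                    # Z in F_2 by Proposition 2.1
    Z = np.array([[w + x * x * v, x * v], [x * v, v]])
    Dh = np.diag(1.0 / np.sqrt(np.diag(Z)))    # the matrix Z_dia^{-1/2}
    r = np.linalg.eigvalsh(Dh @ Z @ Dh)        # eigenvalues r_1, r_2
    ok = ok and lo - 1e-9 <= r.min() and r.max() <= hi + 1e-9
```

The eigenvalue bounds on Z_dia^{-1/2} Z Z_dia^{-1/2} are exactly the operator inequalities of the theorem.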

3 The Inductive Coordinates on SPosn


In some important and especially classical contexts, one considers the special
linear group SLn(R) and the special symmetric space:
SPosn(R) = space of positive matrices with determinant 1.
For the rest of this chapter, we consider only this case, and we follow Grenier
[Gre 93]. Observe that we keep the same discrete group

Γn = GLn(Z),

because the operation of GLn(Z) leaves SPosn(R) stable.
We now take Z ∈ SPosn(R), with the same first order Iwasawa
decomposition as before, so that with some v ∈ R+, W ∈ Posn−1(R), we have

Z = \left[\begin{pmatrix} I_{n-1} & x \\ 0 & 1 \end{pmatrix}\right] \begin{pmatrix} W & 0 \\ 0 & v \end{pmatrix} ,

but to take account of the additional property of determinant 1, we put

dn(Z) = v = a_n^{−1} ,   W = a_n^{1/(n−1)} Z^{(n−1)} ,   x = x^{(n−1)}

so that

(1)   Z = Z^{(n)} = \left[\begin{pmatrix} I_{n-1} & x \\ 0 & 1 \end{pmatrix}\right] \begin{pmatrix} a_n^{1/(n-1)} Z^{(n-1)} & 0 \\ 0 & a_n^{-1} \end{pmatrix} ,

with Z^{(n−1)} ∈ SPosn−1(R), in particular, det Z^{(n−1)} = 1. This particular
choice of coordinates is useful for some inductive purposes.
In case n = 2, this decomposition corresponds to expressing z = x + iy in
the upper half plane h2, and the part going to infinity corresponds to y → ∞,
y = an, Z^{(n−1)} = 1.
In (1), if we factor out a_n^{−1} we can write the decomposition in the form

(1.a)   Z = Z^{(n)} = [u(x)]\, a_n^{-1} \begin{pmatrix} a_n^{n/(n-1)} Z^{(n-1)} & 0 \\ 0 & 1 \end{pmatrix} ,

with u(x) = \begin{pmatrix} I_{n-1} & x \\ 0 & 1 \end{pmatrix}.
We may then iterate the above construction, applied to Z^{(n−1)}, which has
determinant 1. We let n ↦ n − 1. Let x^{(n−2)} be the x-coordinate in the first
order Iwasawa decomposition of Z^{(n−1)}. Identify

u(x^{(n-2)}) = \begin{pmatrix} I_{n-2} & x^{(n-2)} & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} .

Then using (1), and letting a_{n−1}^{−1} = d_{n−1}(Z^{(n−1)}), we get
 
(2)   Z = [C] \begin{pmatrix} a_n^{1/(n-1)} a_{n-1}^{1/(n-2)} Z^{(n-2)} & & \\ & a_n^{1/(n-1)} a_{n-1}^{-1} & \\ & & a_n^{-1} \end{pmatrix}

where

C = u(x^{(n-1)}) u(x^{(n-2)}) .

Factoring out a_n^{−1}, putting u2 = u(x^{(n−1)}) u(x^{(n−2)}), and

y_{n-1}^2 = a_n^{n/(n-1)} a_{n-1}^{-1} ,

we get

(2a)   Z = [u2]\, a_n^{-1} \begin{pmatrix} y_{n-1}^2 a_{n-1}^{(n-1)/(n-2)} Z^{(n-2)} & & \\ & y_{n-1}^2 & \\ & & 1 \end{pmatrix} .

We may continue inductively. Let u3 = u(x^{(n−1)}) u(x^{(n−2)}) u(x^{(n−3)}) for
instance. Then

(3)   Z = [u3] \begin{pmatrix} a_n^{1/(n-1)} a_{n-1}^{1/(n-2)} a_{n-2}^{1/(n-3)} Z^{(n-3)} & & & \\ & a_n^{1/(n-1)} a_{n-1}^{1/(n-2)} a_{n-2}^{-1} & & \\ & & a_n^{1/(n-1)} a_{n-1}^{-1} & \\ & & & a_n^{-1} \end{pmatrix}

where a_{n−i}^{−1} = d_{n−i}(Z^{(n−i)}). We factor out a_n^{−1}. This gives rise to a factor
a_n^{n/(n−1)} on each diagonal component. Then we rewrite (3) in the form:

(3a)   Z = [u3]\, a_n^{-1} \begin{pmatrix} y_{n-1}^2 y_{n-2}^2 a_{n-2}^{(n-2)/(n-3)} Z^{(n-3)} & & & \\ & y_{n-1}^2 y_{n-2}^2 & & \\ & & y_{n-1}^2 & \\ & & & 1 \end{pmatrix} .
Thus we define inductively the standard coordinates. Letting n ↦ n − 1 ↦
n − 2 and so forth:

y_{n-1}^2 = a_n^{n/(n-1)} a_{n-1}^{-1}
y_{n-2}^2 = a_{n-1}^{(n-1)/(n-2)} a_{n-2}^{-1}
y_{n-3}^2 = a_{n-2}^{(n-2)/(n-3)} a_{n-3}^{-1}

and so forth. Then we obtain inductively the partial Iwasawa decomposition
with u_i = u(x^{(n−1)}) u(x^{(n−2)}) . . . u(x^{(n−i)}):
(4)   Z = [u_i]\, a_n^{-1} \begin{pmatrix} y_{n-1}^2 \cdots y_{n-i+1}^2\, a_{n-i+1}^{(n-i+1)/(n-i)} Z^{(n-i)} & & & & \\ & y_{n-1}^2 \cdots y_{n-i+1}^2 & & & \\ & & \ddots & & \\ & & & y_{n-1}^2 & \\ & & & & 1 \end{pmatrix} .

We carry this out to the end, and put X = the upper triangular nilpotent
matrix whose columns above the diagonal are the vectors x^{(1)}, . . . , x^{(n−1)}.
Thus

X = \begin{pmatrix} 0 & x^{(1)} & \cdots & x^{(n-1)} \end{pmatrix} = \begin{pmatrix} 0 & & x_{ij} \\ & \ddots & \\ 0 & & 0 \end{pmatrix}

and

u(X) = I_n + X .
We obtain the full Iwasawa decomposition:

(5)   Z = [u(X)] a_n^{−1} \begin{pmatrix} y_{n−1}^2 y_{n−2}^2 ⋯ y_1^2 & & & & \\ & \ddots & & & \\ & & y_{n−1}^2 y_{n−2}^2 & & \\ & & & y_{n−1}^2 & \\ & & & & 1 \end{pmatrix}

with a full diagonal matrix formed with the standard coordinates.
From the property that det(Z) = 1, we conclude

(6)   a_n^n = ∏_{j=1}^{n−1} y_{n−j}^{2(n−j)} .
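Relation (6) is easy to check numerically (a sketch of ours, not from the text): the diagonal D of the full decomposition Z = u(X) D u(X)^t can be computed from a "reversed" Cholesky factorization, and by (5) the entries of D give a_n and the y-coordinates via successive quotients.

```python
import numpy as np

def iwasawa_diagonal(Z):
    """Diagonal D of the full decomposition Z = u(X) D u(X)^t, with u(X)
    upper unipotent, obtained from a reversed Cholesky factorization."""
    m = Z.shape[0]
    J = np.eye(m)[::-1]                    # exchange (antidiagonal) matrix
    L = np.linalg.cholesky(J @ Z @ J)      # J Z J = L L^t, L lower triangular
    T = J @ L @ J                          # upper triangular with Z = T T^t
    return np.diag(T) ** 2                 # Z = u D u^t with D = diag(T)^2

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))
Y = A @ A.T + np.eye(n)
Z = Y / np.linalg.det(Y) ** (1.0 / n)      # det Z = 1

D = iwasawa_diagonal(Z)
an = 1.0 / D[-1]                           # by (5), the last diagonal entry is a_n^{-1}
y2 = D[:-1] / D[1:]                        # y2[i-1] = y_i^2 = D_ii / D_{i+1,i+1}
lhs = an ** n                              # relation (6)
rhs = np.prod([y2[n - j - 1] ** (n - j) for j in range(1, n)])
assert np.isclose(lhs, rhs)
```

The quotient pattern comes from reading the diagonal of (5) from bottom to top: a_n^{−1}, a_n^{−1} y_{n−1}^2, a_n^{−1} y_{n−1}^2 y_{n−2}^2, and so on.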

One may also use another convenient normalization, by letting

(7)   Z_{n−1} = a_n^{n/(n−1)} Z^{(n−1)}, so that W = a_n^{−1} Z_{n−1},

and therefore

(8)   Z = [u(x^{(n−1)})] a_n^{−1} \begin{pmatrix} Z_{n−1} & 0 \\ 0 & 1 \end{pmatrix} .

Of course, Z_{n−1} does not have determinant 1, contrary to Z^{(n−1)}. From the
definition of y_{n−1}^2 and (7) we obtain

(9)   y_{n−1}^{−2} Z_{n−1} = a_{n−1} Z^{(n−1)} .

This formula then remains valid inductively, replacing n − 1 by n − i for
i = 1, …, n − 1. Note that a_{n−1} = a_{n−1}(Z^{(n−1)}), similar to

a_n = a_n(Z^{(n)}) = a_n(Z) .

Remark. What we call the standard coordinates are actually standard in the
literature, dating back to Minkowski, Jacobi, Siegel, etc. Actually, if one puts

q_i = y_i^2

then Siegel calls (q_1, …, q_{n−1}) the normal coordinates; see, for instance,
the references [Sie 45], [Sie 59].

Formulas for d_n.

Because the fundamental domain is partly defined by a minimality condition
on the lowest right corner of matrices, we record formulas for d_n in the present
context, where d_n denotes the lower right corner of an n × n matrix.

For any Z ∈ Pos_n and any row vector (^t c, d) ∈ ^t R^n, just by matrix
multiplication we have

(10)   a_n [^t c, d] Z = [^t c] Z_{n−1} + (^t c x + d)^2 .

Note that a_n = d_n(Z)^{−1}. Formula (10) is set up so that it holds inductively,
say for the first step, and any row vector (^t c′, d′) of dimension n − 1:

(11)   a_{n−1} [^t c′, d′] Z^{(n−1)} = [^t c′] Z_{n−2} + (^t c′ x′ + d′)^2 .

In light of (9), the formula can be rewritten

(12)   [^t c′, d′] Z_{n−1} = y_{n−1}^2 ([^t c′] Z_{n−2} + (^t c′ x′ + d′)^2) .

Formulas (11) and (12) are set up so that they are valid replacing n − 1 by
n − i, with c′ of dimension n − i − 1, d′ equal to a scalar, and x′ of dimension
n − i − 1.

The formulas are set up for immediate application in the next section,
where we consider Z in a fundamental domain. The first condition defining
such a domain will specify that the expression on the right of (11), or in
parentheses on the right of (12), is ≥ 1.
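Formula (10) is an algebraic identity, so it can be verified mechanically. Below is a small numpy check of ours: we extract a_n, x and Z_{n−1} from a random Z ∈ SPos_n as in (1.a) and (7), and test (10) against random integer row vectors (^t c, d).

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
A = rng.standard_normal((n, n))
Y = A @ A.T + np.eye(n)
Z = Y / np.linalg.det(Y) ** (1.0 / n)    # Z in SPos_n

an = 1.0 / Z[-1, -1]                     # a_n = d_n(Z)^{-1}
x = an * Z[:-1, -1]
# Z_{n-1} = a_n^{n/(n-1)} Z^{(n-1)} = a_n * (top-left block - x x^t / a_n)
Zn1 = an * (Z[:-1, :-1] - np.outer(x, x) / an)

for _ in range(5):
    c = rng.integers(-3, 4, size=n - 1)
    d = int(rng.integers(-3, 4))
    v = np.concatenate([c, [d]]).astype(float)
    lhs = an * v @ Z @ v                 # a_n [(c, d)] Z
    rhs = c @ Zn1 @ c + (c @ x + d) ** 2
    assert np.isclose(lhs, rhs)
```

Here [^t c] Z means the value of the quadratic form, ^t c Z c, in accordance with the bracket notation [g]Y = g Y ^t g.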

4 The Grenier Fundamental Domain
and the Minimal Compactification of Γ_n\SPos_n

We define the fundamental domain SF_n of Γ_n acting on SPos_n to be the
set of all matrices Z ∈ SPos_n(R) satisfying, for all primitive (^t c, d) ∈ ^t Z^n and
notation as in Sect. 3, (1), (7):

SFun 1. a_n [^t c, d] Z ≥ 1, or equivalently

a_n^{n/(n−1)} [^t c] Z^{(n−1)} + (^t c x + d)^2 ≥ 1, or equivalently

[^t c] Z_{n−1} + (^t c x + d)^2 ≥ 1.

SFun 2. Z^{(n−1)} ∈ SF_{n−1}.

SFun 3. 0 ≤ x_1 ≤ 1/2, and |x_j| ≤ 1/2 for j = 2, …, n − 1.

Special Case n = 2. In this case, we set a_n = y. Then SFun 1 amounts to

x^2 + y^2 ≥ 1,

which is the usual condition defining the lower part of the fundamental do-
main. Condition SFun 3 when n = 2 states that

0 ≤ x ≤ 1/2 .

Thus F_2 corresponds to the elements x + iy in h_2 such that

x^2 + y^2 ≥ 1 and 0 ≤ x ≤ 1/2 .

Thus we get half the usual fundamental domain, because we took the discrete
group to be GL_2(Z) rather than SL_2(Z).
That the above conditions define a fundamental domain follows at once
from the case for GLn . In the rest of the section, we give further inequalities
which will be used for the compactification subsequently. The first inequality
generalizes the inequality x^2 + y^2 ≥ 1 from n = 2.
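For n = 2 the condition SFun 1 is the classical condition |cz + d|^2 ≥ 1 in disguise: with z = x + iy and Z = u(x) diag(y, 1/y) u(x)^t one computes a_2 [(c, d)] Z = (cx + d)^2 + c^2 y^2. A quick check of ours at a reduced point:

```python
import numpy as np

def a2_form(x, y, c, d):
    """a_2 [(c, d)] Z for Z = u(x) diag(y, 1/y) u(x)^t, where a_2 = y."""
    u = np.array([[1.0, x], [0.0, 1.0]])
    Z = u @ np.diag([y, 1.0 / y]) @ u.T
    v = np.array([c, d], dtype=float)
    return y * v @ Z @ v

x, y = 0.3, 1.1          # x^2 + y^2 >= 1 and 0 <= x <= 1/2: a point of F_2
for c in range(-3, 4):
    for d in range(-3, 4):
        if (c, d) == (0, 0):
            continue
        val = a2_form(x, y, c, d)
        # the identity a_2 [(c, d)] Z = (cx + d)^2 + (cy)^2 = |cz + d|^2
        assert np.isclose(val, (c * x + d) ** 2 + (c * y) ** 2)
        assert val >= 0.99   # SFun 1 holds at this reduced point
```

The scan over small (c, d) is of course only a finite sample of the primitive vectors; for |c| ≥ 1 the term c^2 y^2 ≥ y^2 already forces the inequality once y^2 ≥ 1.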

Lemma 4.1. Let Z ∈ SF_n. For i = 1, …, n − 1 we have

x_{n−i}^2 + y_{n−i}^2 ≥ 1   and   y_{n−i}^2 ≥ 3/4 .

Hence we get Hermite's inequality

a_n(Z) ≥ (3/4)^{(n−1)/2} .

Proof. We choose d = 0 and c = e_{n−1} (the standard unit vector with all
components 0 except for the (n − 1)-component, which is 1). Then SFun 1
yields

x_{n−1}^2 + y_{n−1}^2 ≥ 1,

and since |x_{n−1}| ≤ 1/2, we get y_{n−1}^2 ≥ 3/4. The coordinates y_{n−i} are designed
in such a way that this argument can be applied step by step, thus proving
the first statement of the lemma. The Hermite inequality then follows from
Sect. 3, (6).

Lemma 4.2. Let Z ∈ SF_n. For all c^{(n−i)} ∈ Z^{n−i} (i = 1, …, n − 1) for which
c^{(n−i)} ≠ 0, we have

[^t c^{(n−i)}] Z_{n−i} ≥ y_{n−i}^2 .

Proof. This comes from the fact that in Sect. 3, (12) we get y_{n−1}^2 times a
number ≥ 1 according to SFun 1.

Lemma 4.3. Let Z ∈ SF_n. For c^{(n−1)} ∈ Z^{n−1} having c_{n−j} ≠ 0 if j ≤ k + 1,
we have

[^t c^{(n−1)}] Z_{n−1} ≥ y_{n−1}^2 ⋯ y_{n−k}^2 .

Proof. This comes inductively from Sect. 3, (12), where on the right we obtain
the product y_{n−1}^2 ⋯ y_{n−k}^2 times a number ≥ 1, plus a number ≥ 0.

Grenier carried an idea of Satake for the Siegel modular group over to the
case of GL_n, and we continue to follow Grenier.

There are actually several compactifications, and we begin with the sim-
plest inductive one. It is not clear to what extent this simplest one suffices, and
for which purposes.

Since the present discussion deals with SL_n, we shall write F_n instead of
SF_n for simplicity. In case both GL_n and SL_n are considered simultane-
ously, then of course a distinction has to be preserved.

We shall first define a compactification of F_n. Quite simply, we let

F_n^* = F_n ∪ F_{n−1} ∪ … ∪ F_1 .

We shall put a topology on this union and show that F_n^* then provides a
compactification of F_n. The topology is defined inductively.

For n = 1, F_1 = {∞} is a single point.

For n = 2, F_2 is the usual fundamental domain, as we have seen, and its
compactification is F_2 ∪ {∞} = F_2 ∪ F_1.

Let n ≥ 2.

Let P ∈ F_{n−k} with 1 ≤ k ≤ n − 1, so P ∈ F_{n−1}^*. Let U be a neighborhood
of P in F_{n−1}^*. Let M > 0. Let:

V(U, M) = set of Z ∈ SPos_n(R) such that a_n(Z) > M, Z^{(n−1)} ∈ U,
0 ≤ x_1, and |x_j| ≤ 1/2 for all j = 1, …, n − 1.
Lemma 4.4. For M sufficiently large, V(U, M) is contained in F_n.

Proof. There are three conditions to be met. We start with SFun 1. We need
to show that for all primitive (^t c, d) we have

a_n(Z)^{n/(n−1)} [^t c] Z^{(n−1)} + (^t c x + d)^2 ≥ 1 .

We are given Z^{(n−1)} ∈ U ⊂ F_{n−1}^*. If c = 0, then d = ±1, so the above
inequality is clear. Assume c ≠ 0. Then we shall prove the stronger inequality
with the term (^t c x + d)^2 deleted. From Z^{(n−1)} ∈ F_{n−1}, we get

[^t c] Z^{(n−1)} ≥ a_{n−1}(Z^{(n−1)}) ≥ (3/4)^{(n−2)/2}

by Hermite's inequality (Lemma 4.1), and hence

a_n(Z)^{n/(n−1)} [^t c] Z^{(n−1)} ≥ M^{n/(n−1)} (3/4)^{(n−2)/2} .
Hence as soon as M is sufficiently large, we have the desired inequality, which
proves SFun 1.

As to SFun 2, the inductive condition on the x-coordinate is met by
definition of V(U, M), so we have to verify SFun 1 in lower dimensions, or in
other words

a_{n−i} [^t c^{(n−i)}] Z^{(n−i)} ≥ 1   for i = 1, …, n − 1,

where c^{(n−i)} is primitive in Z^{n−i}. This follows from Sect. 3, (9) and Lemma
4.1.

Finally, SFun 3 holds by the definition of V(U, M). This proves the lemma.
We define the topology on F_n^* inductively. A neighborhood of a point
P ∈ F_{n−1}^* is defined to be V(U, M) ∪ U as above, i.e. it is the union of the
part in F_n and the part in F_{n−1}^*. If P ∈ F_n, then a fundamental system of
neighborhoods is given by the standard topology on F_n.
Theorem 4.5. The space F_n^* is compact.

Proof. It suffices to show that any sequence {Z(ν)} in F_n has a subsequence
with a limit in F_n^*. By induction, without loss of generality we may assume
that {Z(ν)^{(n−1)}} converges to some element of F_{n−1} ∪ … ∪ F_1. From Lemma
4.1 (Hermite inequality), {a_n(ν)} is bounded away from 0, actually bounded
from below by (3/4)^{(n−1)/2}, but the precise value is irrelevant here. If {a_n(ν)}
has a subsequence bounded away from ∞, then it has a subsequence which
converges to a real number, and then the corresponding subsequence {Z(ν)}
converges in a natural way. If on the other hand a_n(ν) → ∞, then Z(ν)
converges to the above limit by definition of the topology on F_n^*. This
concludes the proof.
Remark. Satake has told us that actually the compactification of Γ\SPos_n(R)
can be described as follows. Since SPos_n(R) is contained in Sym_n(R), one sim-
ply takes the closure F̄_n of the fundamental domain F_n in the projective
space PSym_n(R). Then the compactification is Γ\ΓF̄_n, which is the union of
the F_k for k = 1, …, n.

5 Siegel Sets

We follow [Gre 93]. Let D_n be the group of diagonal matrices with ±1 as
diagonal elements. We let

F_n^± = ⋃_{γ ∈ D_n} [γ] F_n .

This is a domain defined inductively by the conditions:



SFun 1^±. Same as SFun 1.

SFun 2^±. Z^{(n−1)} ∈ F_{n−1}^±.

SFun 3^±. |x_i| ≤ 1/2 for i = 1, …, n − 1.

Thus F_n^± has symmetry about 0 for all the x-coordinates.

For T > 0 we define the Siegel set Sie_{T,1/2}^{(n)} = Sie_T^{(n)} (since we don't deal
with another bound on the x-coordinates) by:

Sie_T^{(n)} = set of Z ∈ SPos_n such that |x_{ij}| ≤ 1/2 and y_i^2 ≥ T for all
i = 1, …, n − 1.

Remark. For n = 2, a Siegel set is just a rectangle going to infinity inside the
usual vertical half strip −1/2 ≤ x ≤ 1/2, y > 0.

[Figure: for n = 2, the Siegel set is the shaded part of the strip −1/2 ≤ x ≤ 1/2 lying above the height determined by T.]
Note that the largest value of T such that the Siegel set contains the funda-
mental domain is 3/4. The shaded portion just reaches the two corners. We
then have the following rather precise theorem of Grenier.

Theorem 5.1. Sie_1^{(n)} ⊂ F_n^± ⊂ Sie_{3/4}^{(n)}.
Proof. The inclusion on the right is a special case of Lemma 4.1.

Now for the inclusion on the left, note that condition SFun 2^± follows at
once by induction, and SFun 3^± is met by definition, so the main thing is to
prove SFun 1^±, for which we give Grenier's proof. The statement being true
for n = 2, we give the inductive step, so we index things by n, with Siegel sets
being denoted by Sie_T^{(n)} in SPos_n, for instance. So suppose

Sie_1^{(n−1)} ⊂ F_{n−1}^± .

Given Z ∈ Sie_1^{(n)}, and writing ^t c = (^t c′, d′), we have

[^t c] Z_{n−1} + (^t c x + d)^2 = y_{n−1}^2 [^t c′] Z_{n−2} + y_{n−1}^2 (^t c′ x′ + d′)^2 + (^t c x + d)^2 .
But Z^{(n−1)} ∈ Sie_1^{(n−1)} ⊂ F_{n−1}^±, so Lemma 4.2 implies that for

c′ ∈ Z^{n−2}, c′ ≠ 0,

we have

[^t c′] Z_{n−2} ≥ y_{n−2}^2 ,

and hence we get the inequality

y_{n−1}^2 [^t c′] Z_{n−2} ≥ y_{n−1}^2 y_{n−2}^2 ≥ 1,

which proves SFun 1^± in the case c′ ≠ 0. If c′ = 0, then it is easy to show
that either

[^t c] Z_{n−1} + (^t c x + d)^2 ≥ y_{n−1}^2 ≥ 1,

or

[^t c] Z_{n−1} + (^t c x + d)^2 ≥ d^2 ≥ 1,

which proves SFun 1^±, and concludes the proof of the theorem.
2 Measures, Integration and Quadratic Model

We shall give various formulas related to measures on GLn and its subgroups.
We also compute the volume of a fundamental domain, a computation which
was originally carried out by Minkowski. Essentially we follow Siegel’s proof
[Sie 45]. We note historically that people used to integrate over fundamental
domains, until Weil pointed out the existence of a Haar (invariant) measure
on homogeneous spaces with respect to unimodular subgroups in his book
[We 40], and observed that Siegel’s arguments could be cast in the formalism
of this measure [We 46].
Siegel’s historical comments [Sie 45] are interesting. He first refers to a
result obtained by Hlawka the year before [Hla 44], proving a statement by
Minkowski which had been left unproved for 50 years. However, as Siegel says,
Hlawka’s proof “does not make clear the relation to the fundamental domain
of the unimodular group which was in Minkowski’s mind. This relation will
become obvious in the theorem” which Siegel proves in his paper, and which
we reproduce here.
The Siegel formula is logically independent of most of the computations
that precede it. For the overall organization, and ease of reference, we have
treated each aspect of the Haar measures systematically before passing on to
the next, but we recommend that readers read the section on Siegel’s formula
early, without wading through the other computations.
The present chapter can be viewed as a chapter of examples, both for this
volume and subsequent ones. The discrete subgroups GLn (Z) and SLn (Z) will
not reappear for quite some time, and in particular, they will not reappear in
the present volume which is concerned principally with analysis on the univer-
sal covering space G/K with G = SLn (R) and K = Unin (R) (the real unitary
group). Still, we thought it worthwhile to give appropriate examples jumping
ahead to illustrate various concepts and applications. The next chapter will
continue in the same spirit, with a different kind of application. Readers in a
hurry to get to the extension of Fourier analysis can omit both chapters, with
the exception of Sect. 1 in the present chapter. Even Sect. 1 will be redone in
a different spirit when the occasion arises.

Jay Jorgenson: Pos_n(R) and Eisenstein Series, Lect. Notes Math. 1868, 23–47 (2005)
www.springerlink.com © Springer-Verlag Berlin Heidelberg 2005

1 Siegel Sets and Finiteness of Measure Mod SLn(Z)


We assume that the reader is acquainted with the basic computations of Haar
measure in an Iwasawa decomposition, as in [JoL 01], Chap. 1, Sect. 2. In this
section, we give an application of the basic Haar measure formulas. We recall
that in Iwasawa coordinates G = UAK, for Haar measures dx, du, da, dk,

∫_G f(x) dx = C ∫_U ∫_A ∫_K f(uak) δ(a)^{−1} du da dk .

For t, c > 0 we define the following subsets of U and A in SL_n(R):

U_c = subset of u ∈ U with |u_{ij}| ≤ c
A_t = subset of a ∈ A with a_i ≥ t a_{i+1} for i = 1, …, n − 1.

We then define the Siegel set

Sie_{t,c} = U_c A_t K in SL_n(R) ,

or we may work just on the subgroup UA, in which case we would specify
that Sie_{t,c} = U_c A_t. On SL_n(R), Sie_{t,c} thus consists of all elements uak with
|u_{ij}| ≤ c and a_i ≥ t a_{i+1} for i = 1, …, n − 1. Since we are on SL_n(R), we take
the quotients

q_i = a_i / a_{i+1}   (i = 1, …, n − 1)

for coordinates, which are called normal coordinates by Siegel [Sie 45] and
[Sie 59]. Then the coordinates (q_1, …, q_{n−1}) give an isomorphism

(q_1, …, q_{n−1}) : A → R^{+(n−1)} .

Theorem 1.1. A Siegel set in SL_n(R) has finite Haar measure.

Proof. [Sie 59] Since K is compact and U_c has bounded (euclidean) measure,
it follows that

∫_{U_c A_t K} dg = C ∫_{A_t} δ(a)^{−1} d^*a .

Hence it suffices to prove that this integral over A_t is finite. Using the coor-
dinates q_1, …, q_{n−1}, the fact that Haar measure on each factor of R^{+(n−1)} is
dq_i/q_i, and the fact that

δ(a) = ∏_{i=1}^{n−1} q_i^{m_i}   with m_i ≥ 1 ,

we find that

∫_{A_t} δ(a)^{−1} d^*a = ∫_t^∞ ⋯ ∫_t^∞ ∏ q_i^{−m_i} ∏ dq_i/q_i ,

which is finite, thus proving the theorem.



In [Sie 59] Siegel used the above result to show that SLn (Z)\SLn (R) has
finite measure. He also used the normal coordinates to construct a compact-
ification of SLn (Z)\SLn (R). By Theorem 5.1 of Chap. 1, we know that a
fundamental domain for SLn (Z) is contained in a Siegel set, and hence we
have given one proof of

Theorem 1.2. The quotient space SLn (Z)\SLn (R) has finite measure.

2 Decompositions of Haar Measure on Posn(R)

Next we shall deal with formulas for integration on the space Posn = Posn (R).
It is a homogeneous space, so has a unique Haar measure with respect to the
action of G = GLn (R), up to a constant factor.
We follow some notation systematically as follows. If Y = (y_ij) is a system
of coordinates from euclidean space, we write

dµ_euc(Y) = ∏ dy_ij

for the corresponding euclidean (Lebesgue) measure. Here we shall reserve
the letter Y for a variable in Pos_n, so the indices range over

1 ≤ i ≤ j ≤ n ,

and the product expression for dY is thus taken over this range of indices.
Deviations from Lebesgue measure will be denoted by dµ(Y), with µ to be
specified.
Deviations from Lebesgue measure will be denoted by dµ(Y ), with µ to be
specified.
If ϕ is a local C ∞ isomorphism, we let J(ϕ) be the Jacobian factor of the
induced map on the measure, so it is the absolute value of the determinant
of the Jacobian matrix, when expressed in terms of local coordinates. Often
the determinant will be positive. If g is a square matrix, we let |g| denote its
determinant, and kgk is then the absolute value of the determinant.
The exposition of the computation of various Jacobians and measures fol-
lows Maass [Maa 71], who based himself on Minkowski and Siegel, in partic-
ular [Sie 59].
Proposition 2.1. A GL_n(R)-bi-invariant measure on Pos_n is given by

dµ_n(Y) = |Y|^{−(n+1)/2} dµ_euc(Y) .

For g ∈ GL_n(R), the Jacobian determinant J(g) of the transformation [g] is

J(g) = ‖g‖^{n+1} .

The invariant measure satisfies dµ_n(Y^{−1}) = dµ_n(Y), i.e. it is also invariant
under Y ↦ Y^{−1}.

Proof. We prove the second assertion first. Note that g ↦ J(g) is multiplica-
tive, that is J(g_1 g_2) = J(g_1) J(g_2), and it is continuous, so it suffices to prove
the formula for a dense set of matrices g. We pick the set of matrices of the
form g D g^{−1}, with D = diag(d_1, …, d_n) diagonal. Then [D]Y is the
matrix (d_i y_ij d_j). Hence

J(g D g^{−1}) = J(D) = ∏_{i≤j} |d_i d_j| = ‖D‖^{n+1} = ‖g D g^{−1}‖^{n+1} ,

which proves the formula for J(g). Then

dµ_n([g]Y) = ‖[g]Y‖^{−(n+1)/2} J([g]) ∏_{i≤j} dy_ij
          = ‖g‖^{−(n+1)} ‖Y‖^{−(n+1)/2} ‖g‖^{n+1} ∏_{i≤j} dy_ij
          = dµ_n(Y) ,

thus concluding the proof of left invariance. Right invariance follows because
J(g) = J(^t g).

Finally, the invariance under Y ↦ Y^{−1} can be seen as follows. If we let
S(Y) = Y^{−1}, then for a tangent vector H ∈ Sym_n,

S′(Y)H = −Y^{−1} H Y^{−1} ,

so det S′(Y) = J(Y^{−1}) = |Y|^{−(n+1)}. Then

dµ_n(Y^{−1}) = |Y|^{(n+1)/2} |Y|^{−(n+1)} dµ_euc(Y) = dµ_n(Y) ,

thus concluding the proof of the proposition.
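The Jacobian claim J(g) = ‖g‖^{n+1} can also be tested without any density argument: Y ↦ [g]Y is linear on Sym_n, so its determinant in the coordinates y_ij (i ≤ j) is exactly computable. The following is our own numerical sketch; the helper name is hypothetical.

```python
import numpy as np

def jacobian_of_conjugation(g):
    """|det| of Y -> g Y g^t as a linear map on Sym_n in the y_ij (i <= j) coords."""
    n = g.shape[0]
    idx = [(i, j) for i in range(n) for j in range(i, n)]
    M = np.zeros((len(idx), len(idx)))
    for col, (k, l) in enumerate(idx):
        S = np.zeros((n, n))
        S[k, l] = S[l, k] = 1.0            # basis symmetric matrix for coord (k, l)
        im = g @ S @ g.T
        for row, (i, j) in enumerate(idx):
            M[row, col] = im[i, j]         # coordinates of the image
    return abs(np.linalg.det(M))

rng = np.random.default_rng(3)
g = rng.standard_normal((3, 3))
J = jacobian_of_conjugation(g)
assert np.isclose(J, abs(np.linalg.det(g)) ** 4)   # n + 1 = 4 for n = 3
```

For diagonal g = D this matrix is diagonal with entries d_i d_j over i ≤ j, recovering the product ∏ |d_i d_j| used in the proof.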

Full Triangular Coordinates

Let Tri_n^+(R) = Tri_n^+ be the group of upper triangular matrices with posi-
tive diagonal coefficients. Then in the notation of Sect. 1, we have the direct
decomposition

AU = Tri_n^+ .

We also have the C^∞ isomorphism

Tri_n^+ → Pos_n given by T ↦ T ^t T .

In Sect. 1 we recalled that a Haar measure on Tri_n^+ is given by

dµ_Tri(T) = δ(T)^{−1} β(T)^{−1} dµ_euc(T) ,



where

dµ_euc(T) = ∏_{i≤j} dt_ij

is the ordinary euclidean measure. Note that we are following systematic no-
tation where we use a symbol µ to indicate deviation from euclidean measure.
For the triangular group Tri^+, the variables i and j range over 1 ≤ i ≤ j ≤ n.
We shall usually abbreviate

t_ii = t_i .
First we decompose the Iwasawa coordinates stepwise, going down one
step at a time. We write an element Y ∈ Pos_n in inductive coordinates

Y = \begin{pmatrix} y & ^t z \\ z & Y_{n−1} \end{pmatrix}   with y ∈ R^+, z ∈ R^{n−1}, Y_{n−1} ∈ Pos_{n−1} .

Thus
Y = Y(y, z, Y_{n−1}) .

We have the first decomposition of an element T ∈ Tri_n^+:

T = \begin{pmatrix} t_1 & ^t x \\ 0 & T_{n−1} \end{pmatrix} = T(t_1, x, T_{n−1}) ,

so (t_1, x, T_{n−1}) are coordinates for T, and we have the mapping

ϕ_{1,n−1}^+ : Tri_n^+ → Pos_n given by ϕ_{1,n−1}^+(T) = T ^t T .

Direct matrix multiplication gives

(1)   Y = ϕ^+(T) = ϕ^+(t_1, x, T_{n−1}) = \begin{pmatrix} t_1^2 + ^t x x & ^t x ^t T_{n−1} \\ T_{n−1} x & Y_{n−1} \end{pmatrix}

whence

(2)   ∂(Y)/∂(T) = ∂(y, z, Y_{n−1})/∂(t_1, x, T_{n−1}) = \begin{pmatrix} 2t_1 & * & 0 \\ 0 & T_{n−1} & * \\ 0 & 0 & ∂(Y_{n−1})/∂(T_{n−1}) \end{pmatrix} .

Thus we obtain

(3)   J(ϕ^+) = 2 t_1 |T_{n−1}| |∂(Y_{n−1})/∂(T_{n−1})|
           = 2^n (t_1 ⋯ t_n)(t_2 ⋯ t_n) ⋯ t_n
           = 2^n ∏_{i=1}^n t_ii^i .

Thus not only do we get the inductive expression (3), but we can state the
full transformation formula:

Proposition 2.2. Let ϕ^+ : Tri_n^+ → Pos_n be the map ϕ^+(T) = T ^t T. Let
t_i = t_ii be the diagonal elements of T. Then

J(ϕ^+) = 2^n ∏_{i=1}^n t_ii^i = 2^n β(T) ;

or in terms of integration,

∫_{Pos_n} f(Y) dµ_euc(Y) = ∫_{Tri_n^+} f(T ^t T) J(ϕ^+)(T) dµ_euc(T) .

Then for the Haar measures of Propositions 1.4 and 2.1, we have

dµ_n(Y) = 2^n dµ_Tri(T) .

Written in full, this means

∫_{Pos_n} f(Y) dµ_n(Y) = ∫ ⋯ ∫ f(T ^t T) ∏_{i=1}^n t_i^{i−n} ∏_{i=1}^n (2 dt_i / t_i) ∏_{i<j} dt_ij ,

where the integrals over t_1, …, t_n are from 0 to ∞, and those over t_ij (with
i < j) are from −∞ to ∞.
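The Jacobian of ϕ^+ can be confirmed by crude numerical differentiation of T ↦ T ^t T in the triangular coordinates; with diagonal entries t_1, …, t_n one should find 2^n t_1 t_2^2 ⋯ t_n^n, in accordance with (3). A sketch of ours, with a forward-difference step h:

```python
import numpy as np

def numeric_jacobian_phi_plus(T, h=1e-6):
    """Numeric |det| of (t_ij, i<=j) -> (Y_ij, i<=j) for Y = T T^t."""
    n = T.shape[0]
    idx = [(i, j) for i in range(n) for j in range(i, n)]
    def flat(Y):
        return np.array([Y[i, j] for (i, j) in idx])
    base = flat(T @ T.T)
    M = np.zeros((len(idx), len(idx)))
    for col, (k, l) in enumerate(idx):
        Tp = T.copy()
        Tp[k, l] += h                      # perturb one triangular coordinate
        M[:, col] = (flat(Tp @ Tp.T) - base) / h
    return abs(np.linalg.det(M))

rng = np.random.default_rng(4)
n = 3
T = np.triu(rng.standard_normal((n, n)))
T[np.diag_indices(n)] = [1.3, 0.7, 2.1]    # positive diagonal, T in Tri_n^+
J = numeric_jacobian_phi_plus(T)
t = np.diag(T)
expected = 2 ** n * np.prod(t ** np.arange(1, n + 1))   # 2^n t_1 t_2^2 t_3^3
assert np.isclose(J, expected, rtol=1e-3)
```

Since Y is quadratic in T, the forward differences are exact up to an O(h) term, hence the loose relative tolerance.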
There is a corresponding map ϕ^− given by

ϕ^− : Tri_n^+ → Pos_n given by ϕ^−(T) = ^t T T .

Similarly, we have the map

ϕ_{1,n−1}^−(T) = ^t T T   when   ^t T = \begin{pmatrix} t_1 & 0 \\ x & ^t T_{n−1} \end{pmatrix} .

Direct multiplication gives

(1^−)   Y = ϕ^−(T) = ϕ^−(t_1, x, T_{n−1}) = \begin{pmatrix} t_1^2 & t_1 ^t x \\ t_1 x & ^t T_{n−1} T_{n−1} + x ^t x \end{pmatrix}

whence

(2^−)   ∂(Y)/∂(T) = \begin{pmatrix} 2t_1 & * & * \\ 0 & t_1 I_{n−1} & * \\ 0 & 0 & ∂(Y_{n−1})/∂(T_{n−1}) \end{pmatrix}

so we obtain

(3^−)   J(ϕ^−) = 2 t_1^n J(ϕ_{1,n−1}^−) = 2^n ∏_{i=1}^n t_i^{n−i+1} = 2^n δ(T) β(T) .

Thus we can state the analogous transformation formula:

Proposition 2.3. Let ϕ^− : Tri_n^+ → Pos_n be the map ϕ^−(T) = ^t T T. Then

J(ϕ^−) = 2^n ∏_{i=1}^n t_i^{n−i+1} = 2^n δ(T) β(T) ;

or in terms of integration,

∫_{Pos_n} f(Y) dµ_euc(Y) = ∫_{Tri_n^+} f(^t T T) J(ϕ^−)(T) dµ_euc(T)
                       = 2^n ∫_{Tri_n^+} f(^t T T) δ(T) β(T) dµ_euc(T) .

The triangularization has a variation depending on how we write it. Let
us write a positive diagonal matrix

A = \begin{pmatrix} a_{11} & ⋯ & 0 \\ ⋮ & ⋱ & ⋮ \\ 0 & ⋯ & a_{nn} \end{pmatrix}   so   A^{1/2} = \begin{pmatrix} a_{11}^{1/2} & ⋯ & 0 \\ ⋮ & ⋱ & ⋮ \\ 0 & ⋯ & a_{nn}^{1/2} \end{pmatrix} .

A full Iwasawa decomposition for Y can be written in the form

Y = [u(X)]A = T ^t T   with   T = u(X) A^{1/2} .

Then t_ii = a_ii^{1/2} and 2 dt_ii / t_ii = da_ii / a_ii. Furthermore, t_ij = x_ij a_jj^{1/2} for i < j.
Then

dt_ij = a_jj^{1/2} dx_ij + a term with da_jj .

Plugging in the formula for dµ_n(Y) in Proposition 2.2, we find:
Proposition 2.4. Let Y = [u(X)]A. Then

dµ_n(Y) = ∏_{i=1}^n a_ii^{i−(n+1)/2} ∏_{i=1}^n (da_ii / a_ii) ∏_{i<j} dx_ij .

Similarly, on the other side:



Proposition 2.5. Let Y = A[u(X)]. Then

dµ_n(Y) = ∏_{i=1}^n a_ii^{−i+(n+1)/2} ∏_{i=1}^n (da_ii / a_ii) ∏_{i<j} dx_ij .

Remark. We meet here a density related to the Iwasawa density of Sect. 1.
Let us define

δ_Pos(a) = ∏_{i=1}^n a_ii^{−i+(n+1)/2} .

Then

δ_Pos = δ_Iw^{1/2} = ρ .

Block Decomposition of Triangular Coordinates

Next we give similar formulas for the partial decomposition in blocks of arbi-
trary size. Let 0 < p < n and p + q = n. For x ∈ Rp×q let
µ ¶
Ip X
u(X) = .
0 Iq

We can write an element Y ∈ Posn in the form


µ ¶
W 0
(4) Y = [u(X)] = ϕ+p,q (W, X, V )
0 V

with W ∈ Posp , V ∈ Posq , and X ∈ Rp×q . Then Y is decomposed into


rectangular blocks illustrated on the following figure:
 p×p p×q 
 Y1 Y2 
 
 
Y =



 t 
 Y2 Y3 

This decomposition gives partial coordinates Y = Y (W, X, V ), with the map

ϕ+
p,q : Posp × R
p×q
× Posq → Posn

as above. A direct check of dimensions shows that they add up properly, that
is
p(p + 1) (n − p)(n − p + 1) n(n + 1)
+ + p(n − p) = .
2 2 2
Direct multiplication in (4) yields the explicit expression
 
(5)   ϕ_{p,q}^+(W, X, V) = \begin{pmatrix} W + [X]V & XV \\ V ^t X & V \end{pmatrix} .

From (5), one sees that ϕ_{p,q}^+ is bijective: first, V is uniquely determined as
the lower right square Y_3; then X is uniquely determined to give Y_2 = XV;
and finally W is uniquely determined to give Y_1.

Note that aside from this formal matrix multiplication, one has Y > 0 if
and only if W > 0 and V > 0, and X is arbitrary.
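The bijectivity argument is exactly a recovery recipe, which we can run numerically (our own sketch): build Y from (W, X, V) as in (4) and (5), then read the three coordinates back off the blocks of Y.

```python
import numpy as np

rng = np.random.default_rng(5)
p, q = 2, 3
n = p + q
Aw = rng.standard_normal((p, p)); W = Aw @ Aw.T + np.eye(p)   # W in Pos_p
Av = rng.standard_normal((q, q)); V = Av @ Av.T + np.eye(q)   # V in Pos_q
X = rng.standard_normal((p, q))

u = np.eye(n)
u[:p, p:] = X
Y = u @ np.block([[W, np.zeros((p, q))], [np.zeros((q, p)), V]]) @ u.T

# recover (W, X, V) from the blocks of Y, as in the bijectivity argument
V2 = Y[p:, p:]                             # Y_3 = V
X2 = Y[:p, p:] @ np.linalg.inv(V2)         # Y_2 = X V
W2 = Y[:p, :p] - X2 @ V2 @ X2.T            # Y_1 = W + X V X^t
assert np.allclose(V2, V) and np.allclose(X2, X) and np.allclose(W2, W)
```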

Proposition 2.6. The Jacobian is given by

J(ϕ_{p,q}^+) = |V|^p .

For Y = ϕ_{p,q}^+(W, X, V) we have the change of variable formula

dµ_n(Y) = |W|^{−q/2} |V|^{p/2} dµ_euc(X) dµ_p(W) dµ_q(V) .

Proof. We compute the Jacobian matrix, and find

∂(Y)/∂(W, X, V) = \begin{pmatrix} I_r & * & ⋯ & * & * \\ 0 & V & ⋯ & 0 & * \\ ⋮ & ⋮ & ⋱ & ⋮ & ⋮ \\ 0 & 0 & ⋯ & V & * \\ 0 & 0 & ⋯ & 0 & I_s \end{pmatrix}

with V occurring p times as blocks on the diagonal, r = p(p + 1)/2 and
s = q(q + 1)/2. Taking the determinant yields the stated value. For the change
of variable formula, we just plug in using the definitions

dµ_p(W) = |W|^{−(p+1)/2} dµ_euc(W) ,

and similarly with n and q, combined with the value for the Jacobian. The
formula comes out as stated.

One may carry out a similar analysis with lower triangular matrices.
Thus we let Tri_n^− be the space of lower triangular matrices, with the map

ϕ^− : Tri_n^+ → Pos_n defined by ϕ^−(T) = ^t T T .

Then we have the partial map

(6)   Y = ϕ_{p,q}^−(W, X, V) = [^t u(X)] \begin{pmatrix} W & 0 \\ 0 & V \end{pmatrix} = \begin{pmatrix} W & WX \\ ^t X W & W[X] + V \end{pmatrix} ,

where W[X] = ^t X W X.

Proposition 2.7. The Jacobian is given by

J(ϕ_{p,q}^−) = |W|^q .

For Y = ϕ_{p,q}^−(W, X, V) the change of variable formula is

dµ_n(Y) = |W|^{q/2} |V|^{−p/2} dµ_euc(X) dµ_p(W) dµ_q(V) .

The proofs are exactly the same as in the other case carried out previously,
and will therefore be omitted.

Polar Coordinates

There is another decomposition besides the Iwasawa decomposition, giving


other types of information about the invariant measure, which we shall present
below, namely polar coordinates. These have been considered in the general
context of semisimple Lie groups and symmetric spaces (cf. Harish-Chandra
[Har 58a,b] and Helgason’s book [Hel 84]). We learned from Terras [Ter
88] that statisticians dealt with certain special cases and computations on
GLn (R), notably Muirhead, and we found her book helpful in writing up the
rest of this section. See notably 4.1 Exercise 24, and 4.2 Proposition 2.
A standard theorem from linear algebra states that given a positive definite
symmetric operator on a finite dimensional real vector space with a positive
definite scalar product, there exists an orthonormal basis with respect to which
the operator is diagonalized. Since the eigenvalues are necessarily positive, this
means in our set up that every element Y ∈ Posn can be expressed in the form

Y = [k]a with k ∈ K and a ∈ A ,

where as before A is the group of diagonal matrices with positive diagonal el-
ements. For those matrices with distinct eigenvalues (the regular elements)
this decomposition is unique up to a permutation of the diagonal elements,
and elements of k which are diagonal and preserve orthonormality, in other
words, diagonal elements consisting of ±1. Hence the map

p : K × A → Posn given by (k, a) 7→ kat k = kak−1 = Y

is a covering of degree 2n n! over the regular elements. This map is called the
polar coordinate representation of Posn , and (k, a) are called the polar
coordinates of a point. As mentioned above, these coordinates are unique
up to the above mentioned 2n n! changes over the regular elements.
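Numerically, polar coordinates are just an orthogonal eigendecomposition; in numpy, `numpy.linalg.eigh` returns one of the 2^n n! admissible choices (eigenvalues in ascending order). A quick illustration of ours:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 4
A = rng.standard_normal((n, n))
Y = A @ A.T + np.eye(n)                    # a point of Pos_n

vals, k = np.linalg.eigh(Y)                # Y = k a k^t with k orthogonal
a = np.diag(vals)
assert np.allclose(k @ k.T, np.eye(n))     # k lies in K
assert np.all(vals > 0)                    # eigenvalues positive, a in A
assert np.allclose(Y, k @ a @ k.T)         # polar coordinates (k, a) of Y
```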
We want to give an expression for the Haar measure on Pos_n in terms of
the polar coordinates, and for this we need to compute the Jacobian J(p).
For k ∈ K, we have ^t k k = I, ^t k = k^{−1}, dk = ((dk)_ij), so

(6)   (d ^t k) k + ^t k dk = 0   and so   d ^t k = −k^{−1} dk k^{−1} .

Then we let

ω(k) = k^{−1} dk   so   ^t ω = −ω .
Thus ω is a skew symmetric matrix of 1-forms,

((k^{−1} dk)_ij) = (ω_ij(k)) ,

with components the 1-forms ω_ij. Observe that each such form is necessarily
left K-invariant, that is, for any fixed k_1 ∈ K, we have

ω(k_1 k) = ω(k)

directly by substitution. Taking the wedge product

⋀_{i<j} ω_ij

in some definite order yields a volume form on K, which is left invariant, and
therefore right invariant. The absolute value of this form then represents a
Haar measure on K, which we denote by ν_n, or dν_n(k) if we use it inside an
integral sign. We call ν_n the polar Haar measure, so

dν_n = |⋀_{i<j} ω_ij| .

Of course we can compare ν_n with the universally normalized Haar mea-
sure on compact groups, written µ_n, such that

∫_K dµ_n = 1 = µ_n(K) .

Then

dν_n(k) = ν_n(K) dµ_n(k)

where ν_n(K) is the total ν_n-measure of K, which we have to compute. For
the moment, we go on with the differential of the polar coordinate map.

Formula 2.8.

dp(k, a) = k (da + ωa − aω) k^{−1} = [k](da + ωa − aω)

and

(ωa − aω)_ij = (a_j − a_i) ω_ij .

In the above formula, da is the euclidean diag(da_1, …, da_n).
Proof. Matrix multiplication being bilinear, we have

dp(k a ^t k) = (dk) a ^t k + k da ^t k + k a d ^t k .

But ^t k k = I implies (d ^t k) k + ^t k dk = 0 as we have seen, so

d ^t k = −^t k (dk) ^t k .

Substituting this value for d ^t k = d(k^{−1}) into the previous formula and using
the skew symmetry ^t ω = −ω yields the formula.

We note that the computation of Formula 2.8 can also be written

dY = [k](da + ωa − aω) = matrix of 1-forms (dy_ij) .

Taking the wedge product will allow us to determine various relations between
previous measures.

Proposition 2.9. These measures satisfy the infinitesimal relation

dµ_euc(Y) = ∏_{i<j} |a_i − a_j| ∏ da_i dν_n(k) .

Letting

γ(a) = ∏_{i<j} |a_i − a_j| ∏_{i=1}^n a_i^{−(n−1)/2} ,

this yields the comparison of Haar measures locally, where p is bijective:

dµ_n(Y) = γ(a) d^*a dν_n(k)   where d^*a = ∏ da_i / a_i .

For a function f with compact support on Pos_n, we thus have globally

∫_{Pos_n} f(Y) dµ_n(Y) = (2^n n!)^{−1} ∫_A ∫_K f([k]a) γ(a) d^*a dν_n(k) .

Proof. We take the wedge product of the forms in Formula 2.8, that is

± ⋀ da_i ∧ ∏_{i<j}(a_j − a_i) ⋀_{i<j} ω_ij .

Taking absolute values yields the first formula. The second formula concern-
ing the Haar measures follows from the definition of the Haar measure in
Proposition 2.1. This concludes the proof.

We shall now compute the constant ν_n(K), following Muirhead and the
computation reproduced in Terras [Ter 88].

First we need a remark on determinants. Let V = (v_ij) be an n × n matrix
of vectors in a vector space, and C = (c_ij) be a scalar matrix. Let

W = VC = (w_ij) .

Then

⋀_{i,j} w_ij = ± ⋀_{i,j} v_ij (det C)^n .

This is immediate from the usual rule, when one applies a matrix to a row
(v_1, …, v_n) of vectors, in which case one gets a factor of det C. Here we
perform this operation n times, whence the power (det C)^n.

Lemma 2.10. Let

K × Tri^+ → G = GL_n(R) be the product map (k, T) ↦ kT .

Put X = kT, and dX = (dx_ij). Then

dµ_euc(X) = ∏_{i=1}^n t_ii^{n−i} dµ_euc(T) dν_n(k) .

Proof. By definition, dµ_euc(X) = ∏ dx_ij taken over all i, j = 1, …, n. We
have

dX = (dk) T + k dT

so

k^{−1} dX = (dT T^{−1} + ω) T .

Hence taking the wedge product of all the components, up to sign, we get

⋀_{i,j=1}^n dx_ij = ⋀_{i,j} ((dT T^{−1})_ij + ω_ij) (det T)^n .

But dT T^{−1} is an upper triangular matrix (with components 0 unless i ≤ j),
and we have seen that ω is skew symmetric. Hence separating the wedge
products over i ≤ j and i > j, we find up to sign

⋀_{i,j} dx_ij = ⋀_{i≤j} ((dT T^{−1})_ij + ω_ij) ∧ ⋀_{i>j} ω_ij (det T)^n .

The product of the ω_ij over i > j is a volume form (maximal degree form) on K.
Hence wedging it with any one of these 1-forms yields 0. Hence we find

dµ_euc(X) = |⋀_{i≤j} (dT T^{−1})_ij| |⋀_{i>j} ω_ij| (det T)^n = ∏_{i=1}^n t_ii^{n−i} ∏_{i≤j} dt_ij |⋀_{i<j} ω_ij| ,

as was to be shown.
Theorem 2.11. We have

ν_n(K) = 2^n (√π)^{n(n+1)/2} ∏_{j=1}^n Γ(j/2)^{−1} .

Proof. Let f(X) = exp(−tr(^t X X)) for X ∈ Mat_n(R), so f(X) splits:

f(X) = ∏_{i,j} exp(−x_ij^2) .

Then

π^{n²/2} = ∫_{Mat_n} f(X) dµ_euc(X) = ∫_{GL_n} f(X) dµ_euc(X)

[by Lemma 2.10]
= ∫_K ∫_{Tri^+} f(kT) ∏_{i=1}^n t_ii^{n−i} dµ_euc(T) dν_n(k)

= ν_n(K) ∫_{Tri^+} exp(−tr(^t T T)) ∏_{i=1}^n t_ii^{n−i} ∏_i dt_ii ∏_{i<j} dt_ij .

Now the integral over Tri^+ splits into a product of several single integrals.
The integrals over the variables t_ii are from 0 to ∞,

∫_0^∞ e^{−t²} t^{n−i} dt = (1/2) Γ((n − i + 1)/2)

after we let u = t², du = 2t dt. The product of these integrals over i = 1, …, n
is therefore precisely 2^{−n} ∏ Γ(i/2).

The other integrals over the t_ij (i < j) are individually of the form

∫_{−∞}^∞ e^{−t²} dt = √π ,

and there are n(n − 1)/2 of them, thus giving rise to the power (√π)^{n(n−1)/2}.
Multiplying these two types of products and solving for ν_n(K) gives the stated
value.
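Both ingredients of the proof are easy to sanity-check with the standard library alone (our own sketch): the gamma integral by crude quadrature, and the final formula at n = 2, where it evaluates to 4π, consistent with |ω_{12}| being arclength on the two circles of O(2).

```python
import math

def nu_K(n):
    """nu_n(K) = 2^n (sqrt(pi))^{n(n+1)/2} / prod_{j=1}^n Gamma(j/2)."""
    val = 2 ** n * math.pi ** (n * (n + 1) / 4.0)
    for j in range(1, n + 1):
        val /= math.gamma(j / 2.0)
    return val

def gauss_moment(m, steps=200000, upper=10.0):
    """Crude right-endpoint quadrature of int_0^inf exp(-t^2) t^m dt."""
    h = upper / steps
    return sum(math.exp(-(i * h) ** 2) * (i * h) ** m * h
               for i in range(1, steps + 1))

# the gamma integral used in the proof
for m in range(0, 4):
    assert abs(gauss_moment(m) - 0.5 * math.gamma((m + 1) / 2.0)) < 1e-4

# for n = 2 the formula gives 4*pi
assert abs(nu_K(2) - 4 * math.pi) < 1e-12
```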

Corollary 2.12. For f ∈ C_c(Pos_n) we have

∫_{Pos_n} f(Y) dµ_n(Y) = (2^n n!)^{−1} ν_n(K) ∫_A ∫_K f([k]a) γ(a) d^*a dk ,

where dk is the Haar measure on K with total measure 1, and

(2^n n!)^{−1} ν_n(K) = (1/n!) (√π)^{n(n+1)/2} / ∏ Γ(i/2) .

3 Decompositions of Haar Measure on SPosn


This section complements the preceding one by giving a measure decomposi-
tion via SLn , and also working out some inductive block decompositions on
SLn itself.

In the first place, every Y ∈ Pos_n can be written uniquely in the form

Y = r^{1/n} Z   with r > 0 and Z ∈ SPos_n .

Thus we have a product decomposition

Pos_n = R^+ × SPos_n

in terms of the coordinates (r, Z). There exists a unique SL_n(R)-invariant
measure on SPos_n, denoted by µ_n^{(1)}, such that for the above product decom-
position, we have

dµ_n(Y) = (dr/r) dµ_n^{(1)}(Z) .

Warning: This decomposition does not come from the Jacobian of the coor-
dinate mapping!
Immediately from the above decomposition, we obtain:

Proposition 3.1. For a function f on SPosn,
$$\int_{|Y| \le 1} f(|Y|^{-1/n} Y)\, |Y|\, d\mu_n(Y) = \int_{SPos_n} f(Z)\, d\mu_n^{(1)}(Z) .$$
On the other hand, say for a continuous function g on R+,
$$\int_{\substack{\Gamma_n \backslash Pos_n \\ 0 < a \le |Y| \le b}} g(|Y|)\, d\mu_n(Y) = \mathrm{Vol}_n \int_a^b g(r)\, \frac{dr}{r} ,$$
where Voln is the finite volume of Γn\SPosn.

Proof. As to the first integral, the left side is equal to
$$\int_0^1 \int_{SPos_n} f(Z)\, r\, d\mu_n^{(1)}(Z)\, \frac{dr}{r} ,$$
so the formula is clear. The second one is immediate from Fubini's theorem.
Note that the finiteness of the volume was proved in Theorem 1.6.

Example of the second formula. We have
$$\int_{\substack{\Gamma_n \backslash Pos_n \\ a \le |Y| \le b}} |Y|^s\, d\mu(Y) = \frac{1}{s}\, (b^s - a^s)\, \mathrm{Vol}_n .$$
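Dropping the volume factor, the radial computation ∫_a^b r^s dr/r = (b^s − a^s)/s behind this example can be checked numerically (a quadrature sketch; the helper name is ours):

```python
import math

def radial(a, b, s, steps=200000):
    # Trapezoidal approximation of ∫_a^b r^s dr/r = ∫_a^b r^{s-1} dr.
    h = (b - a) / steps
    f = lambda r: r ** (s - 1)
    total = 0.5 * (f(a) + f(b))
    for k in range(1, steps):
        total += f(a + k * h)
    return total * h

a, b, s = 0.5, 3.0, 2.5
assert abs(radial(a, b, s) - (b ** s - a ** s) / s) < 1e-6
```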

Proposition 3.2. From the first order decomposition of Z ∈ SPosn:
$$Z = \left[\begin{pmatrix} 1 & 0 \\ x & I_{n-1} \end{pmatrix}\right] \begin{pmatrix} w & 0 \\ 0 & w^{-1/(n-1)}\, V \end{pmatrix}$$
with w ∈ R+, x ∈ R^{n−1}, V ∈ SPosn−1, we have the measure decomposition
$$d\mu_n^{(1)}(Z) = w^{n/2}\, \frac{dw}{w}\, dx\, d\mu_{n-1}^{(1)}(V) .$$
Proof. This comes from the same type of partial coordinates Jacobian
computation as in Sect. 2.
Next we tabulate the subgroups of SLn(R) which will allow inductive
decompositions of various things on SLn(R). We let:

Gn = SLn(R)
Γn = SLn(Z)
Gn,1 = subgroup of Gn leaving the first unit vector ᵗe1 fixed, so
Gn,1 consists of all matrices
$$\begin{pmatrix} 1 & 0 \\ x & g' \end{pmatrix} \quad \text{with } x \in \mathbf{R}^{n-1} \text{ and } g' \in SL_{n-1}(\mathbf{R}) .$$
It is immediately verified that Gn,1 is unimodular.

Γn,1 = Γn ∩ Gn,1 = subgroup of Γn consisting of all matrices
$$\begin{pmatrix} 1 & 0 \\ m & \gamma' \end{pmatrix} \quad \text{with } m \in \mathbf{Z}^{n-1} \text{ and } \gamma' \in \Gamma_{n-1} .$$
Hn = subgroup of Gn,1 consisting of all matrices
$$\begin{pmatrix} 1 & 0 \\ x & \gamma' \end{pmatrix} \quad \text{with } x \in \mathbf{R}^{n-1} \text{ and } \gamma' \in \Gamma_{n-1} = SL_{n-1}(\mathbf{Z}) .$$
These subgroups give rise to a fibration

Γn,1\SPosn → Γn−1\SPosn−1 with fiber R+ × R^{n−1}/Z^{n−1},

and to isomorphisms

(1) Hn\Gn,1 → Γn−1\Gn−1 as homogeneous spaces

(2) Γn,1\Hn ≈ R^{n−1}/Z^{n−1} as groups.


The measure decomposition of Proposition 3.2 gives rise to a corresponding
measure decomposition on the fibration. In terms of integration over the fibers,
we get the following integral formula.

Proposition 3.3. For a function f on Γn,1\SPosn, in terms of the
coordinates of Proposition 3.2, we have
$$\int_{\Gamma_{n,1}\backslash SPos_n} f(Z)\, d\mu_n^{(1)}(Z) = \int_{\Gamma_{n-1}\backslash SPos_{n-1}} \int_{\mathbf{R}^{n-1}/\mathbf{Z}^{n-1}} \int_{\mathbf{R}^+} f(w, x, V)\, w^{n/2}\, \frac{dw}{w}\, dx\, d\mu_{n-1}^{(1)}(V) .$$

From the isomorphism in (2) and the fact that R^{n−1}/Z^{n−1} is compact, if we
suppose inductively that Γn−1\Gn−1 has finite measure, we then obtain:
Proposition 3.4. The measure of Γn,1\Gn,1 is finite.
One more formula:
Proposition 3.5.
$$\int_{GL_n(\mathbf{R})} h({}^t g\, g)\, dg = a_n \int_{Pos_n} h(Y)\, |Y|^{-1/2}\, d\mu_n(Y)$$
where
$$a_n = \prod_{j=1}^{n} \frac{\pi^{j/2}}{\Gamma(j/2)} .$$

4 Siegel’s Formula
Throughout this section we use the notation of Sect. 3 concerning subgroups
of Gn = SLn(R), but when we fix n for parts of the discussion, we abbreviate:

G = Gn, Γ = Γn, G1 = Gn,1, Γ1 = Γn,1 = Γn ∩ Gn,1 .

The group G = SLn(R) acts on the right of ᵗR^n (simultaneously as it
acts on the left on SPosn). We may interpret Γ\G as the set of all lattices of
determinant 1, because SLn(Z) = Γ maps ᵗZ^n to itself, and all such lattices
are of the form ᵗZ^n g with g ∈ SLn(R). The group G1 = Gn,1 is the isotropy
group of the unit vector ᵗe1. As in Sect. 3, we write an element of G1 in the
form
$$\begin{pmatrix} 1 & 0 \\ x & g' \end{pmatrix} \quad \text{with } x \in \mathbf{R}^{n-1} \text{ and } g' \in SL_{n-1}(\mathbf{R}) .$$
There is a natural isomorphism of G-homogeneous spaces

(1) G1\G → ᵗR^n − {0} given by g ↦ ᵗe1 g .

We shall use the fibering

(2) Zn−1 \Rn−1 → Γn,1 \Gn,1 → Γn−1 \Gn−1 = SLn−1 (Z)\SLn−1 (R)

induced by the above “coordinates” (x, g 0 ), showing that

Γn,1 \Gn,1 = Γ1 \G1

has finite measure under the inductive assumption of finite measure for the
quotient space SLn−1 (Z)\SLn−1 (R).
Formula (1) allows us to transport Lebesgue measure from Rn to G1 \G.
We use x for the variable on Rn , sometimes identified with the variable in
G1 \G. In an integral, we write Lebesgue measure as dx. We let µG1 \G be the
corresponding measure on G1 \G, under the isomorphism (1).
We continue to use the fact that a homogeneous space with a closed uni-
modular subgroup has an invariant measure, unique up to a constant factor.
We consider the lattice of subgroups
          G
        /   \
      G1     Γ
        \   /
         Γ1
Fix a Haar measure dg on G. On the discrete groups Γ, Γ1 let the Haar
measure be the counting measure (measure 1 at each point). Then dg de-
termines unique measures on Γ\G and Γ1 \G, since a measure is determined
locally. We can denote the induced measures by dḡ without fear of confusion.
In addition to that, going on the other side, having fixed dg on G and

dµ_{G1\G} = dx on G1\G ,

there is a unique measure dg1 on G1 such that
$$(3) \qquad \int_{G_1\backslash G} \int_{\Gamma_1\backslash G_1} = \int_{\Gamma_1\backslash G} = \int_{\Gamma\backslash G} \int_{\Gamma_1\backslash\Gamma} .$$
Putting in the variables, this means that for a function f on Γ1\G, say
continuous with compact support, we have:
$$(4) \qquad \int_{G_1\backslash G} \int_{\Gamma_1\backslash G_1} f(g_1 g)\, d\bar g_1\, d\mu_{G_1\backslash G}(\bar g) = \int_{\Gamma_1\backslash G} f(g)\, d\bar g = \int_{\Gamma\backslash G} \int_{\Gamma_1\backslash\Gamma} f(\gamma g)\, d\bar\gamma\, d\bar g .$$

Of course, the formula for f ∈ Cc (G1 \G) determines the measures, but is valid
for a much wider class of functions, namely the class for which the integrals
converge absolutely, say f ∈ L1 (Γ1 \G).
Lemma 4.1. Let f ∈ L¹(R^n) ≈ L¹(G1\G). Let cn = vol(Γ1\G1). Then
$$c_n \int_{\mathbf{R}^n} f(x)\, dx = \int_{\Gamma\backslash G} \int_{\Gamma_1\backslash\Gamma} f(\gamma g)\, d\bar\gamma\, d\bar g .$$

Proof. The right side is just the right side of (4). For the left side, note that
f (g1 g) = f (g) for g1 ∈ G1 by the current hypothesis on f , and hence the inside
integral on the left of (4) just yields the volume vol(Γ1 \G1 ). The lemma then
follows by the definition of cn .

Denote by prim(ᵗZ^n) the set of primitive row vectors, i.e. integral vectors
such that the g.c.d. of the components is 1. Then

(5) ᵗe1 Γ = ᵗe1 SLn(Z) = prim(ᵗZ^n) ,

since any primitive vector can be extended to a matrix in SLn(Z). From (5),
it follows that the totality of all non-zero vectors in ᵗZ^n is the set of vectors

ᵗZ^n − {0} = {kℓ with ℓ primitive and k = 1, 2, 3, . . .} ,

so k ranges over the positive integers.
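The decomposition of ᵗZ^n − {0} into multiples of primitive vectors is what later produces the factor ζ(n) in Siegel's formula, via summing k^{−n} over the multiples of each primitive vector. For n = 2 one can see the corresponding density 1/ζ(2) = 6/π² of primitive vectors by brute-force counting (a sketch; the box size N = 400 is an arbitrary choice for speed):

```python
from math import gcd, pi

# Every nonzero vector of t Z^2 is uniquely k * (primitive vector), so the
# density of primitive vectors among nonzero integer vectors is 1/ζ(2) = 6/π².
N = 400
total = prim = 0
for a in range(-N, N + 1):
    for b in range(-N, N + 1):
        if a == 0 and b == 0:
            continue
        total += 1
        if gcd(abs(a), abs(b)) == 1:
            prim += 1
ratio = prim / total
assert abs(ratio - 6 / pi ** 2) < 1e-2
```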


We let
Vn = vol(Γ\G) = vol(SLn (Z)\SLn (R)) .
If we change the Haar measure on G by a constant factor, then the volume
changes by this same constant. The volume is with respect to our fixed dg. In
(9) we shall fix a normalization of dg.
Theorem 4.2. (Siegel [Sie 45]) Let G = SLn(R) and Γ = SLn(Z). Choose
any f ∈ L¹(R^n). Let dx be Lebesgue measure on R^n. Then
$$V_n \int_{\mathbf{R}^n} f(x)\, dx = \int_{\Gamma\backslash G} \sum_{\ell \ne 0} f(\ell g)\, d\bar g = \zeta(n) \int_{\Gamma\backslash G} \sum_{\ell\ \mathrm{prim}} f(\ell g)\, d\bar g .$$

Furthermore Vn = cn ζ(n). Theorem 4.6 will determine Vn , cn .

Proof. On the right of (4) we use Lemma 4.1 to obtain
$$\int_{\Gamma\backslash G} \sum_{\ell\ \mathrm{prim}} f(\ell g)\, d\bar g = c_n \int_{\mathbf{R}^n} f(x)\, dx .$$

Replacing f(x) by f(kx) with a positive integer k, and using the chain rule
on the right, we find
$$\int_{\Gamma\backslash G} \sum_{\ell\ \mathrm{prim}} f(k\ell g)\, d\bar g = c_n\, k^{-n} \int_{\mathbf{R}^n} f(x)\, dx .$$
Summing over all k ∈ Z+, the elements kℓ with ℓ primitive range over all
non-zero elements of ᵗZ^n on the left side, and on the right side we obtain the
factor cn ζ(n), so we may rewrite the last expression in the form
$$(6) \qquad \int_{\Gamma\backslash G} \sum_{\ell \ne 0} f(\ell g)\, d\bar g = c_n\, \zeta(n) \int_{\mathbf{R}^n} f(x)\, dx ,$$
where the sum is taken over all ℓ ≠ 0 in ᵗZ^n.

Assuming that Vn is finite, we shall now prove that Vn = cn ζ(n). For this
we can take a function f which is continuous ≧ 0, with positive integral and
compact support. We note that for any g ∈ SLn(R),
$$(7) \qquad \lim_{N\to\infty} \frac{1}{N^n} \sum_{\ell \ne 0} f\Big(\frac{1}{N}\,\ell g\Big) = \int_{\mathbf{R}^n} f(x)\, dx .$$

This is just a property of the Riemann integral, passing to the limit, because
translation by g preserves the measure of a parallelotope. We integrate this
formula over Γ\G and find:
$$V_n \int_{\mathbf{R}^n} f(x)\, dx = \lim_{N\to\infty} \int_{\Gamma\backslash G} \frac{1}{N^n} \sum_{\ell \ne 0} f\Big(\frac{1}{N}\,\ell g\Big)\, d\bar g$$
$$= \lim_{N\to\infty} c_n\, \zeta(n)\, \frac{1}{N^n} \int_{\mathbf{R}^n} f\Big(\frac{1}{N}\, x\Big)\, dx \qquad \text{by (6)}$$
$$= c_n\, \zeta(n) \int_{\mathbf{R}^n} f(x)\, dx$$
by letting u = x/N, du = dx/N^n. This concludes Siegel's proof, based on a
Riemann sum argument. For another proof, see below.

Remark. From Theorem 1.4 of the present chapter and Theorem 5.1 of Chap.
1, we know that Γ\G has finite measure. However, as does Siegel, one can give
another proof by induction, not based on the use of Siegel sets. The first part
of the proof of the preceding theorem is valid no matter whether Vn is finite
or not, and it yielded (6). Here we use the induction which made Proposition
3.4 valid, so cn is finite. Let f be a function with compact support, equal to
1 on some given compact set, and f ≧ 0 everywhere. Define
$$f_N(x) = \frac{1}{N^n}\, f\Big(\frac{1}{N}\, x\Big) .$$

Apply (6) to the function fN instead of f , and take the limit for N → ∞. By
(7) it follows that the measure of every compact subset of Γ\G is bounded by
cn ζ(n), and therefore that Γ\G has finite measure. Of course, this argument
and the argument in the second part of the theorem can be joined to give
proofs of the finiteness and the value of Vn simultaneously. We preferred the
present arrangement, separating both considerations.

Another Proof

Because of its intrinsic interest, and because we want to emphasize how the
additive Poisson formula mingles with the multiplicative Haar measures, we
shall give another proof for the computation of the volume, originally due to
Weil [We 46]. We add one term involving f(0) to both sides of (6) to get
$$(7) \qquad c_n\, \zeta(n) \int_{\mathbf{R}^n} f(x)\, dx + V_n f(0) = \int_{\Gamma\backslash G} \sum_{\ell} f(\ell g)\, d\bar g$$
where the sum is taken over all ℓ ∈ ᵗZ^n. Normalize the Fourier transform by
$$f^{\vee}(y) = \int_{\mathbf{R}^n} f(x)\, e^{-2\pi i x\cdot y}\, dx .$$
Then the Poisson formula gives for any function ϕ in the Schwartz space:
$$\sum_{\ell} \varphi(\ell) = \sum_{\ell} \varphi^{\vee}(\ell) .$$
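With this normalization the Gaussian e^{−πax²} transforms to a^{−1/2} e^{−πy²/a}, and in one variable the Poisson formula becomes the classical theta transformation; a quick numerical check (truncating the sums at |ℓ| ≤ 50, which is far beyond machine precision here):

```python
import math

# Poisson summation for f(x) = exp(-pi a x^2), a > 0, with the normalization
# f^(y) = ∫ f(x) exp(-2 pi i x y) dx, so that f^(y) = a^{-1/2} exp(-pi y^2 / a).
a = 0.7
L = 50
lhs = sum(math.exp(-math.pi * a * l * l) for l in range(-L, L + 1))
rhs = sum(math.exp(-math.pi * l * l / a) / math.sqrt(a) for l in range(-L, L + 1))
assert abs(lhs - rhs) < 1e-12
```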

If ϕ(x) = f(xg), then ϕ^∨(y) = f^∨(y ᵗg^{−1}) for g ∈ SLn(R). From (7) we find
$$(8) \qquad c_n\, \zeta(n)\, f^{\vee}(0) + V_n f(0) = \int_{\Gamma\backslash G} \sum_{\ell} f^{\vee}(\ell\, {}^t g^{-1})\, d\bar g = \int_{\Gamma\backslash G} \sum_{\ell} f^{\vee}(\ell g)\, d\bar g .$$

To justify this last identity, note that the map g ↦ ᵗg^{−1} is an automorphism


of G as a topological group, and has order 2, so it preserves Haar measure.
Furthermore, this automorphism maps Γ onto itself, so it induces an auto-
morphism of Γ\G as homogeneous space, and preserves the Haar measure on
this space, as desired.
Now let f be in the Schwartz space, and such that f^∨(0) ≠ f(0). We apply
(7) to f^∨. Using (8) we get

cn ζ(n) f^∨(0) + Vn f(0) = cn ζ(n) f^{∨∨}(0) + Vn f^∨(0) .

Since f^{∨∨}(0) = f(0), we may rewrite the above as

cn ζ(n) (f^∨(0) − f(0)) = Vn (f^∨(0) − f(0)) .

Dividing by f^∨(0) − f(0) ≠ 0 gives Vn = cn ζ(n). This concludes the other proof.


We give the application of Siegel's formula to Hlawka's theorem, which
actually Siegel improved by showing that an epsilon in Hlawka's estimate was
unnecessary.
Corollary 4.3. (Minkowski-Hlawka) Let An be a positive number such
that

An^n µeuc(Bn) < 1 ,

where µeuc(Bn) is the euclidean volume of the unit ball Bn. Then for some
g ∈ SLn(R) and all ℓ ∈ ᵗZ^n, ℓ ≠ 0,

‖ℓg‖ ≧ An .

Proof. Let f be the characteristic function of the euclidean ball of radius An,
and apply Siegel's formula (Theorem 4.2) to get
$$A_n^n\, \mu_{euc}(B_n) = \frac{1}{V_n} \int_{\Gamma\backslash G} \sum_{\ell \ne 0} f(\ell g)\, d\bar g .$$
By assumption, for some g ∈ SLn(R), we must have
$$\sum_{\ell \ne 0} f(\ell g) < 1 .$$

Hence for this value of g, all the points ℓg lie outside the ball of radius An,
which concludes the proof.

Remark. Since µeuc(Bn) = π^{n/2}/Γ(1 + n/2), one can use Stirling's formula
to get the asymptotic behavior of µeuc(Bn), which immediately allows one to
give an explicit expression for An when n is sufficiently large. Stirling's formula
shows that µeuc(Bn) tends to 0 fairly rapidly, and hence An can be selected
to tend to infinity accordingly. For instance, An = (n/2πe)^{1/2} will do for n
sufficiently large.
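The closing claim about A_n = (n/2πe)^{1/2} can be checked directly with µ_euc(B_n) = π^{n/2}/Γ(1 + n/2); the Python sketch below (the helper name log_criterion is ours) works with logarithms to avoid overflow and finds the criterion already satisfied throughout a moderate range of n:

```python
import math

def log_criterion(n):
    # log of A_n^n * mu_euc(B_n) with A_n = sqrt(n / (2 pi e)) and
    # mu_euc(B_n) = pi^{n/2} / Gamma(1 + n/2).
    A = math.sqrt(n / (2 * math.pi * math.e))
    log_ball = (n / 2) * math.log(math.pi) - math.lgamma(1 + n / 2)
    return n * math.log(A) + log_ball

# The Minkowski-Hlawka criterion A_n^n mu_euc(B_n) < 1, i.e. log < 0,
# holds for every n tried here, with more room as n grows.
for n in range(2, 200):
    assert log_criterion(n) < 0, n
```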
We reformulate Siegel's theorem on SPosn. We use the measure µn^(1) of
Sect. 3. There exists a unique Haar measure dg on G such that
$$(9) \qquad \int_{\Gamma\backslash SPos_n} f(Z)\, d\mu_n^{(1)}(Z) = \int_{\Gamma\backslash G} f(g\, {}^t g)\, d\bar g .$$

Since we give Γ the counting measure, such a measure determines an invariant
measure on G/K, as left homogeneous space, and a Haar measure on K giving
K measure 1, because of the product decomposition U AK. The Haar measure
dḡ on Γ\G satisfying (9) will be called the symmetrically normalized
measure. It says that the natural isomorphism

Γ\G/K → Γ\SPosn = [Γ]\SPosn

preserves the naturally given measure µn^(1).
Corollary 4.4. Suppose dḡ is the symmetrically normalized measure. For ϕ
continuous (say) and in L¹(R+), we have:
$$V_n \int_{\mathbf{R}^n} \varphi({}^t x\, x)\, dx = \int_{\Gamma\backslash SPos_n} \sum_{\ell \ne 0} \varphi([\ell] Z)\, d\mu_n^{(1)}(Z) = \zeta(n) \int_{\Gamma_n\backslash SPos_n} \sum_{\ell\ \mathrm{prim}} \varphi([\ell] Z)\, d\mu_n^{(1)}(Z) .$$

Proof. Let f(x) = ϕ(ᵗx x) and apply Siegel's formula to f. Then
$$V_n \int_{\mathbf{R}^n} f(x)\, dx = \int_{\Gamma\backslash G} \sum_{\ell \ne 0} f(\ell g)\, d\bar g \qquad \text{(by Theorem 4.2)}$$
$$= \int_{\Gamma\backslash G} \sum_{\ell \ne 0} \varphi([\ell]\, g\, {}^t g)\, d\bar g = \int_{\Gamma\backslash SPos_n} \sum_{\ell \ne 0} \varphi([\ell] Z)\, d\mu_n^{(1)}(Z)$$
by the normalization (9), thus concluding the proof of the first version,
summing over all ℓ ≠ 0. The second version is done in exactly the same way from
the second version of Theorem 4.2.

Proposition 4.5. For ϕ on R+ guaranteeing convergence (for example,
assume that ϕ ∈ Cc(R+)),
$$\int_{\Gamma_n\backslash SPos_n} \sum_{\ell\ \mathrm{prim}} \varphi([\ell] Z)\, d\mu_n^{(1)}(Z) = V_{n-1} \int_{\mathbf{R}^+} \varphi(r)\, r^{n/2}\, \frac{dr}{r} .$$

Proof. We first note that the sum inside the integral, viewed as a function
of Z ∈ SPosn, is Γn-invariant, because action by Γn (on the right side of ℓ)
simply permutes the primitive integral vectors. In any case, we may rewrite
the left side in the form:
$$\text{left side} = \int_{\Gamma_n\backslash SPos_n} \sum_{\gamma\in\Gamma_{n,1}\backslash\Gamma_n} \varphi([{}^t e_1][\gamma] Z)\, d\mu_n^{(1)}(Z) = \int_{\Gamma_{n,1}\backslash SPos_n} \varphi(z_{11})\, d\mu_n^{(1)}(Z)$$
$$= \int_{\Gamma_{n,1}\backslash SPos_n} f(Z)\, d\mu_n^{(1)}(Z) \qquad \text{putting } f(Z) = \varphi(z_{11})$$
$$= \int_{\Gamma_{n-1}\backslash SPos_{n-1}} \int_{\mathbf{R}^{n-1}/\mathbf{Z}^{n-1}} \int_0^\infty f(w, x, V)\, w^{n/2}\, \frac{dw}{w}\, dx\, d\mu_{n-1}^{(1)}(V)$$
$$= V_{n-1} \int_0^\infty \varphi(w)\, w^{n/2}\, \frac{dw}{w} ,$$

with the use of Proposition 3.3 in the penultimate step. This concludes the
proof.

We shall apply the above results as in Siegel to determine the volume of
SLn(Z)\SLn(R), but we need to recall some formulas from euclidean space.
We still let dx denote ordinary Lebesgue measure on R^n. We let S^{n−1} be the
unit sphere and Bn be the unit ball. Then we recall from calculus that
$$\mu_{euc}(B_n) = \frac{\pi^{n/2}}{\Gamma(1 + n/2)} = \frac{\pi^{n/2}}{(n/2)\, \Gamma(n/2)}$$

is the volume of the unit ball. We use polar coordinates in Rn , so there is a


unique decomposition
$$(10) \qquad dx = r^{n-1}\, dr\, d\mu_{euc}^{(1)}(\theta)$$
where dµ^(1)_euc represents a uniquely determined measure on S^{n−1}, equal to dθ
when n = 2. For arbitrary n, θ = (θ1, . . . , θn−1) has n − 1 coordinates. We
then find
$$\mu_{euc}(B_n) = \int_{B_n} dx = \mu_{euc}^{(1)}(S^{n-1}) \int_0^1 r^{n-1}\, dr ,$$
and therefore
$$(11) \qquad \mu_{euc}^{(1)}(S^{n-1}) = n\, \mu_{euc}(B_n) .$$
From (10) and (11) it follows trivially that for a function ϕ on R+ one has
the formula
$$(12) \qquad \frac{\pi^{n/2}}{\Gamma(n/2)} \int_{\mathbf{R}^+} \varphi(r)\, r^{n/2}\, \frac{dr}{r} = \int_{\mathbf{R}^n} \varphi({}^t x\, x)\, dx ,$$

say for ϕ continuous and in L1 (R+ ).



Theorem 4.6. (Minkowski) Let G = SLn(R) and Γ = SLn(Z). Let

Λζ(s) = π^{−s/2} Γ(s/2) ζ(s) .

Then with respect to the symmetrically normalized measure on G, the volume
Vn of Γ\G is given inductively by Vn = Λζ(n) Vn−1, which yields
$$V_n = \prod_{k=2}^{n} \Lambda\zeta(k) .$$

Proof. We start with Corollary 4.4, to which we apply Proposition 4.5 and
follow up by formula (12). The inductive relation drops out, and the case
n = 1 is trivial. Readers who don’t like n = 1 can check for themselves the
case n = 2 (the standard upper half plane).
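The theorem makes V_n numerically computable. A small Python sketch (zeta here is a truncated Dirichlet series with an integral tail correction; all function names are ours) evaluates the product and checks the directly computable case V_2 = Λζ(2) = π^{−1} Γ(1) ζ(2) = π/6:

```python
import math

def zeta(s, terms=200000):
    # Truncated Dirichlet series plus the integral tail estimate ∫_N^∞ x^{-s} dx.
    return sum(k ** (-s) for k in range(1, terms + 1)) + terms ** (1 - s) / (s - 1)

def Lambda_zeta(s):
    return math.pi ** (-s / 2) * math.gamma(s / 2) * zeta(s)

def V(n):
    # V_n = product of Lambda_zeta(k) for k = 2, ..., n (Theorem 4.6).
    prod = 1.0
    for k in range(2, n + 1):
        prod *= Lambda_zeta(k)
    return prod

# V_2 = Lambda_zeta(2) = pi/6
assert abs(V(2) - math.pi / 6) < 1e-9
```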
3 Special Functions on Posn

Classical functions such as the gamma function and the Bessel function have
analogues on symmetric spaces, as do certain classical integral transforms. For
the generalization of gamma function to Posn , the idea goes back to Siegel
[Sie 35], and for the Bessel function it goes back to Bochner [Boc 52], Herz
[Her 55] and Selberg [Sel 56]. We shall give further bibliographical comments
later. Among the integral transforms is the generalized Mellin transform. Cf.
Gindikin [Gin 64], who provides a beautiful survey of special functions on
spaces like, but more general than, Posn. Thus large portions of harmonic
analysis, as well as the theory of Dirichlet and Bessel series, carry over to
such spaces. Here we are concerned with the most standard of all symmetric
spaces, the space Posn of symmetric positive definite real matrices. As the
reader will see, one replaces the invariant measure dy/y on the multiplicative
group by the measure |Y|^{−(n+1)/2} dµeuc(Y) on the space of positive definite
matrices. We develop systematically the theory of some special functions on
this space, namely the gamma and K-Bessel functions, as a prototype of other
special functions, and also a prototype of more general symmetric spaces. No
matter what, it is useful to have tabulated the formulas in this special case,
for various applications.
We note that Terras has a section dealing with the gamma and Bessel
functions [Ter 88], Chap. 4, Sect. 4. For many reasons, we thought it was
worth while to include here a new exposition of the material. For one thing,
she leaves too many exercises for the reader. Aside from that, in the tradition
of the quadratic forms people, she uses right action of the group on Posn , and
we use left action in the tradition of the Lie industry. We have also used the
basic notion of characters on Posn systematically, associating both the gamma
and Bessel transforms with characters on the homogeneous space.

Jay Jorgenson: Posn(R) and Eisenstein Series, Lect. Notes Math. 1868, 49–74 (2005)
www.springerlink.com © Springer-Verlag Berlin Heidelberg 2005

1 Characters of Posn

The space Posn is a homogeneous space for GLn(R), but it is a principal
homogeneous space for the space Tri+ of upper triangular matrices with positive
diagonal components, as we have seen in Chap. 1. We can write an element
Y ∈ Posn uniquely in the form

Y = [T]I with T ∈ Tri+, so Y = T ᵗT .

By a character ρ on Posn, we mean a complex valued and nowhere zero
function ρ : Posn → C* such that there exists a character ψ = ψρ of Tri+,
trivial on the unipotent subgroup (with diagonal elements equal to 1), and
such that

ρ(Y) = ψ(T), so in particular ρ(I) = 1 .

It follows that for any T1 ∈ Tri+, if A = T1 ᵗT1, then

(1) ρ([T1]Y) = ψ(T1) ρ(Y) and ρ([A]Y) = ρ(A) ρ(Y[T1]) .

Example. First, we have the determinant d defined by

d(Y) = |Y| .

The verification that the determinant is indeed a character is immediate.
It is equal to the product of the squares of the T-diagonal elements in the
representation Y = T ᵗT. If ρ is any character and α is a complex number,
then d^α ρ is a character.
In general, we can define all characters as follows. For T ∈ Tri+, write
$$T = \begin{pmatrix} t_{11} & \dots & t_{1n} \\ & \ddots & \vdots \\ 0 & & t_{nn} \end{pmatrix}, \qquad t_{ii} > 0 \text{ for all } i .$$

For an n-tuple s = (s1, . . . , sn) of complex numbers, we define
$$\chi_s(T) = \prod_{j=1}^{n} t_{jj}^{\,s_j}$$
and we define the basic character ρs on Posn to be
$$\rho_s(Y) = \chi_{2s}(T) = \prod_{j=1}^{n} t_{jj}^{\,2s_j} \qquad \text{if } Y = T\,{}^t T = [T] I .$$

Since a character on Posn(R) amounts to a character on the n-fold product of
the multiplicative group R+, it follows that all characters are parametrized
by such n-tuples of complex numbers. However, for much of the formalism,
it is less cumbersome to leave out the complex variables. Note that for any
α ∈ C,

d^α ρs = ρ_{s+α} where s + α = (s1 + α, . . . , sn + α) .
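The basic character and its defining property can be checked numerically. The pure-Python sketch below (helper names such as upper_factor are ours, not the text's) recovers T from Y = T ᵗT by solving from the bottom-right corner, then verifies ρs([T1]Y) = χ2s(T1) ρs(Y) for a random Y ∈ Pos3 and T1 ∈ Tri+:

```python
import math, random

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def transpose(A):
    return [list(col) for col in zip(*A)]

def upper_factor(Y):
    # Solve Y = T tT with T upper triangular and positive diagonal,
    # working column by column from the bottom-right corner.
    n = len(Y)
    T = [[0.0] * n for _ in range(n)]
    for j in range(n - 1, -1, -1):
        T[j][j] = math.sqrt(Y[j][j] - sum(T[j][k] ** 2 for k in range(j + 1, n)))
        for i in range(j):
            T[i][j] = (Y[i][j] - sum(T[i][k] * T[j][k] for k in range(j + 1, n))) / T[j][j]
    return T

def chi2s(T, s):
    return math.prod(T[j][j] ** (2 * s[j]) for j in range(len(s)))

def rho(Y, s):
    return chi2s(upper_factor(Y), s)

random.seed(1)
n, s = 3, [0.4, -1.1, 2.0]
A = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
Y = matmul(A, transpose(A))
for i in range(n):
    Y[i][i] += n  # make Y safely positive definite
T1 = [[random.uniform(0.5, 2.0) if i == j else (random.uniform(-1, 1) if i < j else 0.0)
       for j in range(n)] for i in range(n)]
Z = matmul(matmul(T1, Y), transpose(T1))   # [T1]Y = T1 Y tT1
lhs = rho(Z, s)
rhs = chi2s(T1, s) * rho(Y, s)             # psi(T1) rho(Y)
assert abs(lhs - rhs) < 1e-9 * abs(rhs)
```

The factorization is unique, which is exactly why [T1]Y = (T1 T) ᵗ(T1 T) forces the character relation.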
Observe that the transpose maps Tri+ to Tri− and preserves the diagonal
elements. We could have carried out the same construction with Tri− instead
of Tri+. To relate the two constructions, it is useful to introduce the reversing
matrix ω, consisting of 1 on the antidiagonal and 0 elsewhere. Then ω² = I,
and for T ∈ Tri+ we have
$$[\omega] T = \omega T \omega^{-1} = \begin{pmatrix} t_{nn} & & 0 \\ & \ddots & \\ * & & t_{11} \end{pmatrix} \in Tri^- .$$
Furthermore, the diagonal elements are permuted by the action of [ω]. The
operation [ω] is multiplicative, and is simply conjugation by ω, defined on all
of G. We note that

[ω] : Tri+ → Tri−

is a group isomorphism. It allows us to deal with the map Y ↦ Y^{−1}. If we
write Y = T_Y ᵗT_Y, then

[ω]Y^{−1} = ([ω] ᵗT_Y^{−1})([ω] T_Y^{−1}) ,

so we obtain the formula

(2) T_{[ω]Y^{−1}} = [ω] ᵗT_Y^{−1} .

Define ρ*(Y) by the formula

(3) ρ*(Y) = ρ([ω]Y^{−1}) = ψ([ω] ᵗT_Y^{−1}), so ρ(Y^{−1}) = ρ*([ω]Y) .

If ψ = χ2s as above, and T ∈ Tri+, then
$$(4) \qquad \psi([\omega]\, {}^t T^{-1}) = \prod_i t_{ii}^{\,-2s_{n-i+1}} = t_{nn}^{\,-2s_1} \cdots t_{11}^{\,-2s_n} .$$

So we define

ψ*(T) = ψ([ω] ᵗT^{−1}) .

Proposition 1.1. The map ρ ↦ ρ* is an involution on the group of characters.
The character associated with ρ* is ψ*. For α ∈ C,

(d^α)* = d^{−α} .

Proof. Immediate from (3) and (4) above, and the definition of ρ∗ .

The above involution gives one mechanism to go back and forth between
the upper triangular action and the lower triangular action on the left. The
non-commutativity forces the development of some sort of formalism to deal
with it.
The advantage of dealing with [ω] is that it allows us to define an involution
on the group of characters. However, it stems from the fact that Posn is
actually both a left G-space and a right G-space. The map S : Y ↦ Y^{−1}
leaves Tri+ and Tri− stable, and carries left characters (for Tri+) to
right characters. For a left character ρ, we may denote by ρ′ the right character
such that

ρ′(Y) = ρ(Y^{−1}), so ρ′′ = ρ .

On the other hand, [ω] interchanges Tri+ and Tri−, as well as it interchanges
left and right characters. Indeed, for T ∈ Tri+ and abbreviating [ω]X = X^ω,
we have

ρ ∘ [ω](Y[T]) = ρ(ᵗT^ω Y^ω T^ω) = ψ(ᵗT^ω) ρ([ω]Y) ,

so ψ_{ρ∘[ω]}(T) = ψ(ᵗT^ω) is the character on Tri+ associated with ρ ∘ [ω]. Taking
the composite

ρ* = ρ′ ∘ [ω] = (ρ ∘ [ω])′

then yields the involution on left characters.
Next, let us define an involution on the space of n complex variables, that
is,

s* = (−sn, . . . , −s1) .

Thus s*_i = −s_{n−i+1} for i = 1, . . . , n. We want a character hs = ρ_{s#} with a
suitable vector s# such that

h*_s = h_{s*} .

Trivial fooling around with the index and solving a linear equation shows:

Proposition 1.2. Let
$$(4) \qquad h_s(Y) = \prod_{i=1}^{n} (t_{n-i+1})^{\,2s_i + i - (n+1)/2} .$$
Then h*_s = h_{s*}. Furthermore, we have the identity

(5) hs = ρ_{s#}

where
$$s_i^{\#} = s_{n-i+1} + \frac{n-i}{2} - \frac{n-1}{4} .$$

For the record, we note that we can also write
$$(6) \qquad h_s(Y) = \prod_{i=1}^{n} t_i^{\,2s_{n-i+1} - i + (n+1)/2} ,$$
as well as
$$(7) \qquad h_s(Y) = |Y|^{-(n+1)/4} \prod_{i=1}^{n} (t_{n-i+1})^{\,2s_i + i} .$$

Subdeterminants

Let Y ∈ Posn. For each j with j = 1, . . . , n let Sub^j Y be the j × j upper left
square submatrix of Y, and let Sub_j Y be the j × j lower right square submatrix
of Y. (In the figure, Sub^j Y occupies the upper left corner and Sub_j Y the
lower right corner.)
Authors dealing with the right action of Tri+ then consider Sub^j Y and
corresponding characters. With our left action, we consider Sub_j Y. The reason
is given in the next proposition.

Proposition 1.3. The map Y ↦ Sub_j Y is a homomorphism for the left
action of Tri+, and the map Y ↦ Sub^j Y is a homomorphism for the right
action of Tri+ (left action of Tri−).

Proof. Let T be a square matrix decomposed into blocks
$$T = \begin{pmatrix} A & B \\ 0 & D \end{pmatrix} \quad \text{so} \quad {}^t T = \begin{pmatrix} {}^t A & 0 \\ {}^t B & {}^t D \end{pmatrix} .$$
Then
$${}^t T\, T = \begin{pmatrix} {}^t A A & {}^t A B \\ {}^t B A & * \end{pmatrix} \quad \text{and} \quad T\, {}^t T = \begin{pmatrix} * & B\, {}^t D \\ D\, {}^t B & D\, {}^t D \end{pmatrix} ,$$
so the proposition is clear.

Proposition 1.4. Let j be a positive integer, 1 ≦ j ≦ n. Let σ be a character
on Posj. Then

Y ↦ σ(Sub_j Y)

is a character on Posn.

Proof. Proposition 1.3. Characters are of course meant in the sense we have
fixed, for the left action of Tri+. If one considers right action, then Sub_j Y has
to be replaced by Sub^j Y to make the assertion valid.

Proposition 1.5. Index [ω] as [ωn] for its action on n × n matrices. Then

Sub_j [ωn]Y = [ωj] Sub^j Y .

Proof. Check it out directly for n = 2, then n = 3, and then do what you
want with induction and matrix multiplication.

The Selberg Power Function

For application to a later chapter, we shall consider a special character which
plays a role in the theory of Eisenstein series. Readers may omit the following
considerations until they are applied, because for the general theory, the
particular choice of variables we are going to make is at best irrelevant, and
at worst it obscures the general latent structure.
Two sets of variables are going to play a role, so we consider new complex
variables z = (z1, . . . , zn). We define the Selberg power function
$$q_z^{(n)}(Y) = \prod_{j=1}^{n} |\mathrm{Sub}_j Y|^{z_j} \qquad \text{and} \qquad q_z^{(n-1)}(Y) = \prod_{j=1}^{n-1} |\mathrm{Sub}_j Y|^{z_j} .$$
Thus the power function on the right depends only on the n − 1 variables
z1, . . . , zn−1. Note that for either function

q_{−z} = q_z^{−1} .

Both power functions are left characters. For right characters (as in other
authors), one defines the power function pz by taking the product with Sub^j
instead of Sub_j.

Proposition 1.6. Write Y = T ᵗT with T ∈ Tri+. Let ti (i = 1, . . . , n) be the
diagonal elements of T, so ti = ti(Y). Then
$$(5) \qquad |\mathrm{Sub}_j Y| = \prod_{i=n-j+1}^{n} t_i^2 ;$$
$$(6) \qquad q_{-z}^{(n)}(Y) = \prod_{j=1}^{n} \prod_{k=1}^{j} (t_{n-k+1}^2)^{-z_j} = \prod_{i=1}^{n} t_{n+1-i}^{\,-2(z_i + \dots + z_n)} .$$
Proof. Immediate from the definitions.
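Identity (5) is easy to test numerically. A small pure-Python sketch (the helper det, a pivot-free Gaussian elimination, is ours) does so for an explicit 3 × 3 example:

```python
import math

# An explicit T in Tri+ and Y = T tT; compare |Sub_j Y| (determinant of the
# lower right j x j block) with t_{n-j+1}^2 ... t_n^2.
T = [[1.5, -0.3, 0.8],
     [0.0,  2.0, 0.4],
     [0.0,  0.0, 0.7]]
n = 3
Y = [[sum(T[i][k] * T[j][k] for k in range(n)) for j in range(n)] for i in range(n)]

def det(M):
    # Gaussian elimination without pivoting (fine for these positive definite blocks).
    M = [row[:] for row in M]
    d = 1.0
    for c in range(len(M)):
        d *= M[c][c]
        for r in range(c + 1, len(M)):
            f = M[r][c] / M[c][c]
            for k in range(c, len(M)):
                M[r][k] -= f * M[c][k]
    return d

for j in range(1, n + 1):
    sub = [row[n - j:] for row in Y[n - j:]]          # lower right j x j block
    expected = math.prod(T[i][i] ** 2 for i in range(n - j, n))
    assert abs(det(sub) - expected) < 1e-9
```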


A change of variables relates the Selberg power function to the function hs
defined previously. Let
$$z_j = s_{j+1} - s_j + \frac{1}{2} .$$
Proposition 1.7. On Posn, we have
$$q_{-z}^{(n-1)} = h_s, \qquad \text{with } s = (s_1, \dots, s_n) .$$
Proof. Immediate from the definitions.

Remark. For q_{−z}^{(n)} we are at the edge where the introduction of the additional
variable s_{n+1} raises the need for a boundary condition. The relationship on
Posn still reads
$$q_{-z}^{(n)} = h_s \qquad \text{with } s = (s_1, \dots, s_n) ,$$
with the provision that in the relation z_n = s_{n+1} − s_n + 1/2 we take the special
value s_{n+1} = −(n + 1)/4. With this provision, the variables (z1, . . . , zn) are still
related to the variables (s1, . . . , sn) by an invertible affine map.

2 The Gamma Function

Classical integral transforms on the positive multiplicative group Pos1 and
some of their properties extend to Posn, as we shall see. First we define the
Mellin transform of a function f to be
$$Mf(\rho) = \int_{Pos_n} f(Y)\, \rho(Y)\, d\mu_n(Y) .$$
Note that the transform is a function of characters (left or right). If we express
ρ as ρs, then one may write Mf(s), in which case we view Mf as a function
of the n complex variables s1, . . . , sn.
As a first example, we define the gamma function Γn to be
$$\Gamma_n(\rho) = \int_{Pos_n} e^{-tr(Y)}\, \rho(Y)\, d\mu_n(Y) .$$
When n = 1 and ρ = ρs, the above value coincides with the usual gamma
function of a single variable s. Actually, the integral can be expressed in terms
of the usual gamma function as follows.

Proposition 2.1. For ρ = ρs, the above integral is absolutely convergent for
Re(si) > (n − i)/2, and has the value
$$\Gamma_n(\rho_s) = \pi^{n(n-1)/4} \prod_{i} \Gamma\Big(s_i - \frac{n-i}{2}\Big) .$$
In the normalization of Proposition 1.2,
$$\Gamma_n(h_s) = \pi^{n(n-1)/4} \prod_{i} \Gamma\Big(s_i - \frac{n-1}{4}\Big) .$$

Proof. We use the change of variables formula from Y to T in Chap. 2,
Proposition 2.2, which yields
$$\int_{Pos_n} e^{-tr(Y)}\, \rho_s(Y)\, d\mu_n(Y) = \int_{Tri^+} \exp(-tr(T\,{}^t T)) \prod_i t_{ii}^{\,2s_i} \cdot 2^n \prod_i t_{ii}^{\,i-n-1} \prod_{i \le j} dt_{ij} .$$
The matrix multiplication shows that the diagonal elements of T ᵗT are
$$(T\,{}^t T)_{ii} = t_{ii}^2 + t_{i,i+1}^2 + \dots + t_{in}^2 \qquad (i = 1, \dots, n) .$$
It follows that the integral splits into a product of single integrals as follows.
For indices i < j, we have the product
$$\prod_{i<j} \int_{-\infty}^{\infty} \exp(-t_{ij}^2)\, dt_{ij} = \pi^{n(n-1)/4} .$$
For indices i = j, so over the variables t11, . . . , tnn, we have the product of
i-th terms, i = 1, . . . , n, using a single variable s for simplicity, y = t², and
dy/y = 2dt/t:
$$\int_0^\infty e^{-t^2}\, t^{\,2(s + (i-n)/2)}\, \frac{2\, dt}{t} = \int_0^\infty e^{-y}\, y^{\,s + (i-n)/2}\, \frac{dy}{y} = \Gamma\Big(s + \frac{i-n}{2}\Big) .$$
This concludes the proof.
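After the substitution u = t², the diagonal integrals in this proof are ordinary gamma integrals ∫₀^∞ e^{−y} y^α dy/y = Γ(α). A quadrature sketch in Python (function names are ours) confirms this numerically for a few real α > 1:

```python
import math

def gamma_via_integral(alpha, upper=50.0, steps=400000):
    # Trapezoidal approximation of ∫_0^∞ e^{-y} y^alpha dy/y = Γ(alpha);
    # assumes alpha > 1 so the integrand vanishes at y = 0.
    h = upper / steps
    f = lambda y: math.exp(-y) * y ** (alpha - 1)
    total = 0.5 * (f(0.0) + f(upper))
    for k in range(1, steps):
        total += f(k * h)
    return total * h

# The diagonal integrals are exactly of this shape, with alpha = s + (i - n)/2.
for alpha in (1.5, 2.0, 3.25):
    assert abs(gamma_via_integral(alpha) - math.gamma(alpha)) < 1e-5
```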



Remark. Mellin transforms and gamma functions as above, and more generally
hypergeometric functions, occur notably in Gindikin [Gin 64]. In the early
days, as in [Sie 35] and [Her 55], only the determinant character was used to
define a gamma function. Selberg saw further with his power function [Sel 56].
We shall now extend systematically the standard formalism of the gamma
integral and subsequently the K-Bessel integral to the matricial case. Proofs
which used only the invariance of the measure dy/y on the positive
multiplicative group, and an interchange of integration, go over systematically.
We start with the standard result that for a > 0, we have
$$(5) \qquad \int_0^\infty e^{-ay}\, y^s\, \frac{dy}{y} = \Gamma(s)\, a^{-s} .$$

We may also cast the generalization in the formal context of a convolution:
for any function h, under conditions of absolute convergence,
we define its gamma transform to be its convolution with the kernel
(Z, Y) ↦ e^{−tr(Z^{−1}Y)}, for which we use the notation
$$(\Gamma_n \# h)(Z) = \int_{Pos_n} e^{-tr(Y Z^{-1})}\, h(Y)\, d\mu_n(Y) .$$
The integral is absolutely convergent in a half plane of characters ρ.


Proposition 2.2. In the domain of absolute convergence, a left character ρ
is an eigenfunction of the above integral operator, namely

(Γn # ρ)(Z) = Γn(ρ) ρ(Z) .

Or, in direct generalization of (5), for A ∈ Posn,
$$\int_{Pos_n} e^{-tr(AY)}\, \rho(Y)\, d\mu_n(Y) = \Gamma_n(\rho)\, \rho(A^{-1}) .$$
Proof. Write Z = T ᵗT with T ∈ Tri+. Then starting with the measure
invariance, we find:
$$(\Gamma_n \# \rho)(Z) = \int_{Pos_n} e^{-tr([T]Y \cdot Z^{-1})}\, \rho([T]Y)\, d\mu_n(Y)$$
$$= \int_{Pos_n} e^{-tr(T Y\, {}^t T\, {}^t T^{-1} T^{-1})}\, \psi(T)\, \rho(Y)\, d\mu_n(Y)$$
$$= \psi(T) \int_{Pos_n} e^{-tr(T Y T^{-1})}\, \rho(Y)\, d\mu_n(Y) = \rho(Z)\, \Gamma_n(\rho) ,$$
since tr(T Y T^{−1}) = tr(Y) and ψ(T) = ρ(Z),

which proves the first formula. The second follows by putting Z = A−1 and
using the fact that tr(AY ) = tr(Y A).

Remark. The above result joins two properties. First, the gamma function
was viewed as a function on the space of characters, whence the notation Γ(ρ).
Second, we view the exponential function involving two variables (Z, Y ) as a
“kernel function” giving rise to an integral operator, which we may apply to
a space of functions which make the integral absolutely convergent, certainly
including functions with “polynomial growth” in some sense, and certainly
including a half plane of characters on the symmetric space Posn . A function h
may be an eigenfunction for this integral transform, i.e. the gamma transform,
and in this case, the corresponding eigenvalue may be denoted by λΓ (h). Then
Proposition 2.2 may be formulated by saying that for a character ρ, we have

λΓ (ρ) = Γn (ρ) .

We recall Chap. 2, Proposition 2.1, that Y ↦ Y^{−1} and the action of


GLn (R), both left and right, preserve the measure dµn (Y ) on Posn . In par-
ticular, [ω] is measure preserving.
Proposition 2.3. For a left character ρ and A ∈ Posn, we have
$$\int_{Pos_n} e^{-tr(A Y^{-1})}\, \rho(Y)\, d\mu_n(Y) = \Gamma_n(\rho^*)\, \rho(A) .$$
Proof. We make the two measure preserving transformations

Y ↦ Y^{−1} and Y ↦ [ω]Y .

The desired formula then follows from Proposition 2.2 and the definitions.

Corollary 2.4. For a right character ρ, we have
$$(\Gamma_n \# \rho)(Z) = \int_{Pos_n} e^{-tr(Y Z^{-1})}\, \rho(Y)\, d\mu_n(Y) = \Gamma_n(\rho \circ [\omega])\, \rho(Z) .$$
Proof. We note that ρ = ρ′′, and ρ′ is a left character. We then make the
measure-preserving change of variables Y ↦ Y^{−1} and apply Propositions 2.2
and 2.3 to conclude the proof.

3 The Bengtson Bessel Function

Following an idea of Bochner [Boc 52], Herz [Her 55] defined a Bessel function
using the determinant character. Bengtson [Be 83] extended this Bessel
function to all characters, and extended the proofs of various classical formulas
from Pos1 to Posn. We shall give the main results of his paper in this section
and the next. The basic principle is that, once the original definition is given
with matrices, then almost all the formulas for the ordinary K-Bessel function
are valid, with essentially the same proofs as in the one-dimensional case,
using the invariant measure dµn(Y) on Posn instead of the invariant measure
dy/y on the positive multiplicative group, which is Pos1. We have found it
convenient to adopt the convention that for n = 1, s ∈ C, a, b > 0,
$$K_s(a, b) = \int_0^\infty e^{-(ay + b y^{-1})}\, y^s\, \frac{dy}{y} .$$

Although we shall deal a lot with characters, we find it worth while to
give a general definition for the Bessel transform of a function h on Posn. Under
conditions of absolute convergence, we define the K-Bessel transform of h
for A, B ∈ Posn by the integral
$$K_h(A, B) = \int_{Pos_n} e^{-tr(AY + BY^{-1})}\, h(Y)\, d\mu_n(Y) .$$
Thus we may also write
$$K_h(A^2, B^2) = \int_{Pos_n} e^{-tr([A]Y + [B]Y^{-1})}\, h(Y)\, d\mu_n(Y) .$$
Directly from the fact that for g ∈ GLn(R) the map [g] preserves the measure,
we get the transformation formula

K_{h∘[g]}(A, B) = K_h(A[g^{−1}], [g]B) .

We just let Y ↦ [g^{−1}]Y in the defining integral.


We now specialize to the case when h = ρ is a character. We then call Kρ a
Bessel function. By a character, we mean a left character unless otherwise
specified. Let ρ be a left or right character on Posn . If one parameterizes ρ
by n complex variables s = (s1 , . . . , sn ), then the function Kρ (A, B) is entire
in s, or as we shall also say, Kρ is entire in ρ. We shall prove that this is
the case by estimating the higher dimensional Bessel function in terms of the
classical Bessel function in one variable. For r ∈ R and x > 0, this classical
function is defined by the integral
\[
K_r(x) = \int_0^\infty e^{-x(t + 1/t)}\, t^r\, \frac{dt}{t} .
\]

It is very easy to show that in a fixed bounded range for r, one has uniformly

Kr (x) = O(e−2x ) for x → ∞ .


60 3 Special Functions on Posn

Cf. [La73/87], Chap. XX, Sect. 3, K7. Of course, as x → 0, the integral blows
up.
The next theorem will show not only that the higher dimensional Bengtson-
Bessel function is entire in the n complex variables s1 , . . . , sn , but it also gives
uniform estimates for the absolute convergence of the integral, in terms of the
eigenvalues of A and B as these approach 0 (which is bad) or ∞ (which is
good).
Theorem 3.1. Let λ > 0 be such that A ≥ λI and B ≥ λI. Put
\[
\sigma_j = \mathrm{Re}(s_j) .
\]
Then the integral representing K_{ρ_s}(A, B) is absolutely convergent and satisfies
\[
|K_{\rho_s}(A, B)| \le \left(\sqrt{\pi/\lambda}\,\right)^{n(n-1)/2} \prod_{j=1}^{n} K_{\sigma_j - (n-j)/2}(\lambda) .
\]

Proof. We write down the integral representing the Bessel function just with
the real part, since the imaginary part does not contribute to the absolute
value estimate. For X ∈ Rⁿ, we have A[X] ≥ λ ᵗXX, and hence for X ∈ R^{n×n}
we have
\[
\mathrm{tr}(A[X]) \ge \lambda\, \mathrm{tr}([X]I) .
\]
In the Bessel integral, we change the variable as in Chap. 2, Sect. 2, putting
Y = T ᵗT with T ∈ Tri⁺, so
\[
\mathrm{tr}(AY) \ge \sum_{i \le j} \lambda t_{ij}^2 = \sum_{j=1}^{n} \lambda t_{jj}^2 + \sum_{i<j} \lambda t_{ij}^2 .
\]

We have similar inequalities with BY⁻¹. Let (t^{ij}) = T⁻¹. Put t_j = t_{jj}. By
Proposition 2.3 of Chap. 2, and as in Proposition 1.4, we find the estimate
\[
|K_{\rho_s}(A, B)| \le \prod_{j=1}^{n} \int_0^\infty e^{-\lambda(t_j^2 + t_j^{-2})}\, t_j^{2\sigma_j}\, t_j^{\,j-n}\, \frac{2\,dt_j}{t_j} \;\cdot\; \prod_{i<j} \int_{-\infty}^{\infty} e^{-\lambda(t_{ij}^2 + (t^{ij})^2)}\, dt_{ij} .
\]

In the product over i < j, we omit the term with (t^{ij})², which only makes the
estimate worse, and then the integral gives precisely the factor (√(π/λ))^{n(n-1)/2}.
For the product with the diagonal variables t_j = t_{jj}, we change the variable
putting u = t², du/u = 2dt/t. Then one gets just the Bessel integral in one
variable, giving the other factor in the desired estimate. This concludes the
proof.

Next we deal systematically with an extensive formalism satisfied by the


Bessel function. Instead of taking both variables A, B positive, we allow some
degeneracy in one of them, say B, and with Z semipositive, we define
3 The Bengtson Bessel Function 61
Z
−1
Kρ (A, Z) = e−tr(AY +ZY )
ρ(Y )dµn (Y ) .
Posn

Then the integral is absolutely convergent only in the half plane of convergence
of the gamma integral, but we can substitute Z = 0.

Through K 4 we assume that ρ is a left character.

We then find

(K1) Kρ (A, O) = ρ(A−1 )Γn (ρ), so Kρ (I, O) = Γn (ρ) .

The formula follows from Proposition 2.2.


For T ∈ Tri+ , A, B ∈ Posn , we have

(K2) Kρ (A[T ], [T −1 ]B) = ψρ (T )−1 Kρ (A, B) .

Proof. The integral representing the left side is
\[
\int_{\mathrm{Pos}_n} e^{-\mathrm{tr}({}^tTAT\,Y \,+\, T^{-1}B\,{}^tT^{-1}Y^{-1})}\, \rho(Y)\, d\mu_n(Y) .
\]
We use the commutativity of the trace to see that this integral is the same as
\[
\int_{\mathrm{Pos}_n} e^{-\mathrm{tr}(A[T]\,Y \,+\, B([T]Y)^{-1})}\, \rho(Y)\, d\mu_n(Y) .
\]
We make the translation Y ↦ [T⁻¹]Y and use the definition of a character to
conclude the proof.

As a direct consequence, we can reduce the value of the Bessel function


to the case when one of the arguments is I. Indeed, we choose T ∈ Tri+ such
that A−1 = T t T , so A[T ] = I. Then for this T ,

(K3) Kρ (A, B) = ψρ (T )Kρ (I, [T −1 ]B) .

On Pos₁, the Bessel function in two variables is reduced to a function of
one variable K_s(c), defined by the formula
\[
K_s(c) = \int_0^\infty e^{-c(t + 1/t)}\, t^s\, \frac{dt}{t} .
\]
Indeed, a change of variables immediately shows that
\[
K_s(a^2, b^2) = \left(\frac{b}{a}\right)^{s} K_s(ab) = \left(\frac{b}{a}\right)^{2s} K_s(b^2, a^2) .
\]
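The reduction formula lends itself to a quick numerical sanity check. The sketch below is our own (the sample values s = 0.7, a = 1.0, b = 2.0 are arbitrary); it evaluates both sides by Simpson quadrature after the substitution y = e^u.

```python
import math

def K2(s, a, b, lo=-12.0, hi=12.0, n=4000):
    """Two-variable K_s(a, b) = ∫_0^∞ e^{-(a y + b/y)} y^s dy/y, via y = e^u."""
    total, w = 0.0, (hi - lo) / n
    for k in range(n + 1):
        u = lo + k * w
        coef = 1 if k in (0, n) else (4 if k % 2 else 2)
        total += coef * math.exp(-(a * math.exp(u) + b * math.exp(-u)) + s * u)
    return total * w / 3.0

def K1(s, c):
    """One-variable K_s(c) = K_s(c, c)."""
    return K2(s, c, c)

s, a, b = 0.7, 1.0, 2.0
lhs = K2(s, a * a, b * b)
rhs = (b / a) ** s * K1(s, a * b)
print(lhs, rhs)  # equal, by the change of variables y -> (b/a) t
```

The substitution y = (b/a)t behind the identity leaves dy/y invariant, which is exactly what the quadrature confirms.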

The non-commutativity on Pos_n apparently prevents a similar reduction when
n > 1. One still has K 3, and an additional symmetry:

(K4) K_ρ(A, B) = K_{ρ′}(B, A) = K_{ρ*∘[ω]}(B, A) = K_{ρ*}([ω]B, [ω]A) .

This is immediate by making the change of variables Y ↦ Y⁻¹ in the integral
defining K_ρ, and using formula (3) of Sect. 1, namely
\[
\rho'(Y) = \rho(Y^{-1}) = \rho^*([\omega]Y) .
\]
Of course, we have now indexed K by ρ′ = ρ* ∘ [ω], which is a right character,
but not a left character.

Inductive Formulas

We conclude this section with inductive formulas for the Bessel function, start-
ing with the degenerate case as in Bengtson. We fix the notation for the rest
of the section.

Proposition 3.1. Let 0 < p < n and p + q = n. Let P ∈ Pos_p, Q, D ∈ Pos_q.
For X ∈ R^{p×q}, we let
\[
u(X) = \begin{pmatrix} I_p & X \\ 0 & I_q \end{pmatrix} .
\]
A variable Y ∈ Pos_n has a unique expression in partial Iwasawa coordinates
\[
Y = \mathrm{Iw}^+(W, X, V) = [u(X)] \begin{pmatrix} W & 0 \\ 0 & V \end{pmatrix}
\]
with W ∈ Pos_p, V ∈ Pos_q, X ∈ R^{p×q}; or also
\[
Y = \mathrm{Iw}^-(W, X, V) = [{}^t u(X)] \begin{pmatrix} W & 0 \\ 0 & V \end{pmatrix} .
\]
We specify each time which expression is used. A left, resp. a right, character
ρ can be expressed uniquely as a product
\[
\rho \begin{pmatrix} W & 0 \\ 0 & V \end{pmatrix} = \rho_1(W)\, \rho_2(V) ,
\]
where ρ₁, ρ₂ are left, resp. right, characters on Pos_p and Pos_q.


We shall be computing Bessel integrals
\[
K_\rho(A, B) = \int_{\mathrm{Pos}_n} e^{-\mathrm{tr}\,M(Y)}\, \rho(Y)\, d\mu_n(Y)
\]
where
\[
M(Y) = AY + BY^{-1}
\]
and A, B are either in partial Iwasawa coordinates or degenerate, for instance
\[
A = \mathrm{Iw}^+(P, C, Q) \qquad\text{and}\qquad B = \begin{pmatrix} 0 & 0 \\ 0 & D \end{pmatrix} .
\]

In the proof, we then use the coordinates Y = Iw⁻(W, X, V) with Iw⁻. Many
combinations can occur, as in the next four propositions, with the alternate
Iwasawa decomposition or upper left hand corner blocks instead of lower right.
The four propositions allow one to determine similar answers for Bessel integrals
formed by permuting variables, e.g. using ᵗu(C) instead of u(C) in Proposition
3.2 below. For example, in that proposition, we let
\[
M(Y) = [u(C)] \begin{pmatrix} P & 0 \\ 0 & Q \end{pmatrix} Y + \begin{pmatrix} O & O \\ O & D \end{pmatrix} Y^{-1} = AY + BY^{-1} .
\]
Let
\[
M'(Y) = \begin{pmatrix} 0 & 0 \\ 0 & D \end{pmatrix} Y + [u(C)] \begin{pmatrix} P & 0 \\ 0 & Q \end{pmatrix} Y^{-1} = BY + AY^{-1} .
\]
Then for a left character ρ, formula K 4 tells us
\[
K_\rho(B, A) = K_{\rho'}(A, B) ,
\]
and ρ′ is a right character to which we can apply Proposition 3.2.


We fix a notation, useful when taking the trace of products of matrices. If
M, N are square matrices of the same size, we define an equivalence
\[
M \sim N
\]
to mean that M can be written as a product, and N is obtained from M by
a succession of interchanges of factors of type Z₁Z₂ ↦ Z₂Z₁. Thus if M ∼ N
then tr(M) = tr(N).

Proposition 3.2. Let C ∈ R^{p×q}. Let
\[
(1)\qquad M(Y) = [u(C)] \begin{pmatrix} P & 0 \\ 0 & Q \end{pmatrix} Y + \begin{pmatrix} 0 & 0 \\ 0 & D \end{pmatrix} Y^{-1} = AY + BY^{-1} .
\]
Let ρ be a right character on Pos_n. Then
\[
K_\rho(A, B) = \int_{\mathrm{Pos}_n} e^{-\mathrm{tr}\,M(Y)}\, \rho(Y)\, d\mu_n(Y)
= \sqrt{\pi}^{\,pq}\, |Q|^{-p/2}\, \rho_1(P^{-1})\, \Gamma_p(\rho_1)\, K_{\rho_2 d_q^{-p/2}}(Q, D) .
\]

Proof. The pattern of the present proof will be repeated several times after-
wards. We are computing Kρ (A, B), where A = Iw+ (P, C, Q). For certain
matrix multiplications to come out, we use Y = Iw− (W, X, V ), i.e. we use the
alternate partial Iwasawa decomposition for the variable Y .

We apply Corollary 2.7 of Chap. 2, giving us the change of variables formula.
We use the function
\[
(2)\qquad f(Y) = e^{-\mathrm{tr}\,M(Y)}\, \rho(Y) = e^{-\mathrm{tr}\,M(Y)}\, \rho_1(W)\, \rho_2(V) .
\]

Matrix multiplication shows that
\[
(3)\qquad M(Y) \sim \begin{pmatrix} (P + [C+X]Q)W & * \\ * & QV \end{pmatrix} + \begin{pmatrix} 0 & * \\ * & DV^{-1} \end{pmatrix} ,
\]
so that
\[
(4)\qquad \mathrm{tr}\,M(Y) = \mathrm{tr}\bigl(PW + W[C+X]Q + QV + DV^{-1}\bigr) = \mathrm{tr}\,M(W, X, V) .
\]

Then by Corollary 2.7 of Chap. 2, the integral to be evaluated is


Z Z
(5) f (Y )dµn (Y ) = e−trM (Y ) ρ(Y )dµn (Y ) =
Posn Posn
ZZZ
e−trM (W,X,V ) ρ1 (W )ρ2 (V )|W |q/2 |V |−p/2 dµeuc (X)dµp (W )dµq (V )

The translation by C in the dµ_euc(X)-integral does not change the integral,
so we can omit C from the integrand. Multiplication by Q on the right and W
on the left introduces linear changes of coordinates in this dµ_euc(X)-integral;
in fact the change is by Q^{1/2} and W^{1/2}, since the variable X is “squared”. The
situation is the same as in Chap. 3, Sect. 2. Formally, we are dealing with the
change of variables
\[
Z = W^{1/2} X Q^{1/2}, \qquad dZ = |W|^{q/2}\, |Q|^{p/2}\, dX .
\]

Making this change of variables shows that the dµ_euc(X)-integral has the
standard value
\[
\int_{\mathbf{R}^{p\times q}} e^{-\mathrm{tr}({}^tXX)}\, d\mu_{\mathrm{euc}}(X) = \sqrt{\pi}^{\,pq} ,
\]
divided by the Jacobian factor, i.e. multiplied by the inverse factor
|Q|^{-p/2}|W|^{-q/2}. Then (5) has the value
\[
(6)\qquad \sqrt{\pi}^{\,pq}\, |Q|^{-p/2} \iint e^{-\mathrm{tr}(PW)}\, e^{-\mathrm{tr}(QV + DV^{-1})}\, \rho_1(W)\, \rho_2(V)\, |V|^{-p/2}\, d\mu_p(W)\, d\mu_q(V) .
\]
3 The Bengtson Bessel Function 65

The variables are separated in the double integral. From Proposition 2.2, the
W-integral yields
\[
(7)\qquad \rho_1(P^{-1})\, \Gamma_p(\rho_1) ;
\]
while directly from the definition of the K-Bessel function, the V-integral
yields
\[
(8)\qquad K_{\rho_2 d_q^{-p/2}}(Q, D) .
\]
Putting these last two factors together with the power of π and the factor
|Q|^{-p/2} proves the proposition.

We tabulate the variation with [ᵗu(C)] instead of [u(C)].

Proposition 3.3. Let
\[
(9)\qquad M(Y) = [{}^t u(C)] \begin{pmatrix} P & 0 \\ 0 & Q \end{pmatrix} Y + \begin{pmatrix} 0 & 0 \\ 0 & D \end{pmatrix} Y^{-1} .
\]
Let ρ be a left character on Pos_n. Then
\[
\int_{\mathrm{Pos}_n} e^{-\mathrm{tr}\,M(Y)}\, \rho(Y)\, d\mu_n(Y) = \sqrt{\pi}^{\,pq}\, |P|^{q/2}\, \Gamma_p(\rho_1 d^{-q/2})\, \rho_1(P^{-1})\, K_{\rho_2}(Q, D) .
\]
Posn

Proof. The point of the method of Proposition 3.2 was to combine the ex-
pressions with (C, P, Q) and (X, W, V). To do so in the present case, we have
to use the alternative coordinates Y = Iw⁺(W, X, V). Matrix multiplication
now gives
\[
(10)\qquad M(Y) \sim \begin{pmatrix} PW & * \\ * & ([{}^t(C+X)]P + Q)V \end{pmatrix} + \begin{pmatrix} 0 & * \\ * & DV^{-1} \end{pmatrix} ,
\]
whence
\[
(11)\qquad \mathrm{tr}\,M(Y) = \mathrm{tr}\bigl(PW + V[{}^t(C+X)]P + QV + DV^{-1}\bigr) = \mathrm{tr}\,M(W, X, V) .
\]

We can then argue as in Proposition 3.2, using Corollary 2.5 of Chap. 2, so
the desired integral is equal to
\[
(12)\qquad \iiint e^{-\mathrm{tr}\,M(W,X,V)}\, \rho_1(W)\, \rho_2(V)\, |W|^{-q/2}\, |V|^{p/2}\, d\mu_{\mathrm{euc}}(X)\, d\mu_p(W)\, d\mu_q(V) .
\]
We perform the W-integral, which yields Γ_p(ρ₁d^{-q/2})ρ₁(P⁻¹) by Proposition
2.2. Then we perform the X-integral, get rid of the C-translation, and change
variables, so let Z = V^{1/2} ᵗX P^{1/2} and dZ = |V|^{p/2}|P|^{-q/2} dX. Then the X-
integral yields √π^{pq} |P|^{q/2}. The factor |V|^{p/2} disappears, and the V-integral
then yields K_{ρ₂}(Q, D). Putting all factors together gives the stated answer.

Proposition 3.4. Let A ∈ Pos_p, C ∈ R^{p×q}. Let
\[
(13)\qquad M(Y) = \begin{pmatrix} A & 0 \\ 0 & 0 \end{pmatrix} Y + [{}^t u(C)] \begin{pmatrix} P & 0 \\ 0 & Q \end{pmatrix} Y^{-1} .
\]
Let ρ be a right character on Pos_n. Then
\[
\int_{\mathrm{Pos}_n} e^{-\mathrm{tr}\,M(Y)}\, \rho(Y)\, d\mu_n(Y) = \sqrt{\pi}^{\,pq}\, |P|^{-q/2}\, K_{\rho_1 d_p^{q/2}}(A, P)\, \rho_2(Q)\, \Gamma_q(\rho_2^*) .
\]
Posn

Proof. We let
\[
Y = [{}^t u(X)] \begin{pmatrix} W & 0 \\ 0 & V \end{pmatrix} = \mathrm{Iw}^-(W, X, V) .
\]
Matrix multiplication gives
\[
(14)\qquad \begin{pmatrix} A & 0 \\ 0 & 0 \end{pmatrix} Y = \begin{pmatrix} AW & * \\ * & 0 \end{pmatrix}
\]
and
\[
(15)\qquad [{}^t u(C)] \begin{pmatrix} P & 0 \\ 0 & Q \end{pmatrix} Y^{-1} \sim \begin{pmatrix} PW^{-1} & * \\ * & ([{}^t(C - X)]P + Q)V^{-1} \end{pmatrix} .
\]
Then
\[
(16)\qquad \mathrm{tr}\,M(Y) = \mathrm{tr}\bigl(AW + PW^{-1} + V^{-1}[{}^t(C - X)]P + QV^{-1}\bigr) .
\]
Since
\[
d\mu_n(Y) = |W|^{q/2}\, |V|^{-p/2}\, d\mu_{\mathrm{euc}}(X)\, d\mu_p(W)\, d\mu_q(V) ,
\]
we find that
\[
(17)\qquad \int_{\mathrm{Pos}_n} e^{-\mathrm{tr}\,M(Y)}\, \rho(Y)\, d\mu_n(Y) =
\iiint e^{-\mathrm{tr}\,M(W,X,V)}\, \rho_1(W)\, \rho_2(V)\, |W|^{q/2}\, |V|^{-p/2}\, d\mu_{\mathrm{euc}}(X)\, d\mu_p(W)\, d\mu_q(V) .
\]

The W-integral splits to give the factor
\[
(18)\qquad K_{\rho_1 d_p^{q/2}}(A, P) .
\]
We perform the dµ_euc(X)-integral next, whereby we can eliminate the trans-
lation by C and replace −X by X, but we have the stretching factor coming
from V⁻¹, P, so we need a change of variables Z = V^{-1/2} ᵗX P^{1/2} and
dZ = |V|^{-p/2}|P|^{q/2} dᵗX. The integral yields the factor √π^{pq} as usual. This
leaves the dµ_q(V)-integral, times the inverse |P|^{-q/2}, while there is a cancella-
tion of |V|^{-p/2}, so the V-integral is
\[
(19)\qquad \int e^{-\mathrm{tr}(QV^{-1})}\, \rho_2(V)\, d\mu_q(V) .
\]
Applying Proposition 2.3 concludes the proof.

Proposition 3.5. Notation as in the previous two propositions, let
\[
M(Y) = [u(C)] \begin{pmatrix} P & 0 \\ 0 & Q \end{pmatrix} Y + \begin{pmatrix} A & 0 \\ 0 & 0 \end{pmatrix} Y^{-1} .
\]
Let ρ be a right character. Then
\[
\int_{\mathrm{Pos}_n} e^{-\mathrm{tr}\,M(Y)}\, \rho(Y)\, d\mu_n(Y)
= \int_{\mathbf{R}^{p\times q}} K_{\rho_1 d_p^{q/2}}(P + [C+X]Q,\, A)\, K_{\rho_2 d_q^{-p/2}}(Q,\, A[X])\, d\mu_{\mathrm{euc}}(X) .
\]

Proof. The situation can use some of the computations of Proposition 3.2, but
in evaluating tr M(Y) the term with D has to be replaced. A new computation
shows that
\[
(20)\qquad \begin{pmatrix} A & 0 \\ 0 & 0 \end{pmatrix} Y^{-1} = \begin{pmatrix} A(W^{-1} + [X]V^{-1}) & * \\ * & 0 \end{pmatrix} .
\]
Hence
\[
(21)\qquad \mathrm{tr}\,M(Y) = \mathrm{tr}\bigl(PW + W[C+X]Q + QV + AW^{-1} + A[X]V^{-1}\bigr) .
\]
Then we first perform the W-integral and V-integral, and the stated answer
drops out.

Remark. As we observed at the beginning of our discussion, the last two
propositions can be used to derive variations. For instance, let
\[
M(Y) = BY + \begin{pmatrix} A & 0 \\ 0 & 0 \end{pmatrix} Y^{-1} \qquad\text{with}\qquad B = [{}^t u(C)] \begin{pmatrix} P & 0 \\ 0 & Q \end{pmatrix} .
\]
Let ρ be a left character. Then by K 4,
\[
K_\rho\!\left(B, \begin{pmatrix} A & 0 \\ 0 & 0 \end{pmatrix}\right) = K_{\rho'}\!\left(\begin{pmatrix} A & 0 \\ 0 & 0 \end{pmatrix}, B\right)
\]
and we can apply Proposition 3.4 to find a value for the right side.

The inductive formulas of Propositions 3.2 and 3.3 have corresponding
formulas in the non-degenerate case, as in [Ter 85], Proposition 2, which we
now give. They depend on the Iw⁺ and Iw⁻ Iwasawa coordinates.

Proposition 3.6. Let P ∈ Pos_p, C ∈ R^{p×q} and n = p + q, giving rise to a
partial Iwasawa decomposition, so we put
\[
A = [u(C)] \begin{pmatrix} P & 0 \\ 0 & Q \end{pmatrix} = \mathrm{Iw}^+(P, C, Q) .
\]
Let ρ be a right character on Pos_n. Then
\[
K_\rho(A, I_n) = \int_{\mathbf{R}^{p\times q}} K_{\rho_1 d_p^{q/2}}(P + [X+C]Q,\, I_p)\, K_{\rho_2 d_q^{-p/2}}(Q,\, I_q + {}^tXX)\, d\mu_{\mathrm{euc}}(X) .
\]

Proof. We let again Y = Iw⁻(W, X, V), and
\[
K_\rho(A, I_n) = \int f(Y)\, d\mu_n(Y)
\]
with
\[
(16)\qquad f(Y) = e^{-\mathrm{tr}\,M(Y)}\, \rho_1(W)\, \rho_2(V) .
\]
The expression for M(Y) = M(W, X, V) = AY + Y⁻¹ is the same as in (3),
except that the matrix with DV⁻¹ is replaced by
\[
(17)\qquad Y^{-1} \sim \begin{pmatrix} W^{-1} & * \\ * & ({}^tXX + I)V^{-1} \end{pmatrix} .
\]
Therefore we obtain
\[
(18)\qquad \mathrm{tr}\,M(Y) = \mathrm{tr}\bigl(PW + [X+C]QW + QV + W^{-1} + (I + {}^tXX)V^{-1}\bigr) .
\]
Then dµ_n(Y) = |W|^{q/2}|V|^{-p/2} dµ_p(W) dµ_q(V) dµ_euc(X), so
\[
K_\rho(A, I_n) = \int f(Y)\, d\mu_n(Y)
= \int_{\mathbf{R}^{p\times q}} K_{\rho_1 d_p^{q/2}}(P + [X+C]Q,\, I_p)\, K_{\rho_2 d_q^{-p/2}}(Q,\, I_q + {}^tXX)\, d\mu_{\mathrm{euc}}(X) .
\]
This proves the proposition.

Finally, we come to the last variation.

Proposition 3.7. Put
\[
A = [{}^t u(C)] \begin{pmatrix} P & 0 \\ 0 & Q \end{pmatrix} = \mathrm{Iw}^-(P, C, Q) .
\]
Let ρ be a left character on Pos_n. Then
\[
K_\rho(A, I_n) = \int_{\mathbf{R}^{p\times q}} K_{\rho_1 d^{-q/2}}(P,\, I + X{}^tX)\, K_{\rho_2 d^{p/2}}(P[C+X] + Q,\, I)\, d\mu_{\mathrm{euc}}(X) .
\]
Proof. For A we have the same computation as in Proposition 3.3, but we
now need the new computation
\[
Y^{-1} = \begin{pmatrix} W^{-1} & * \\ * & W^{-1}[X] + V^{-1} \end{pmatrix} .
\]
We let M(Y) = AY + Y⁻¹. Then
\[
(22)\qquad \mathrm{tr}\,M(Y) = \mathrm{tr}\bigl(PW + V[{}^t(C+X)]P + QV + W^{-1} + W^{-1}[X] + V^{-1}\bigr) .
\]
The stated answer comes out.

4 Mellin and Fourier Transforms

The next Bengtson result extends the classical Fourier transform formula
\[
\Gamma(s) \int_{-\infty}^{\infty} \frac{e^{ixr}}{(x^2 + a^2)^s}\, \frac{dx}{\sqrt{2\pi}} = \frac{1}{\sqrt{2}}\, K_{s-\frac12}(a^2,\, r^2/4)
\]
for a > 0, r ≥ 0, Re(s) > 1/2.
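For the special case s = 1, a = 1, r = 2 the Fourier integral has the classical closed form ∫ cos(2x)/(1 + x²) dx = πe⁻², which gives a quick numerical check of the identity. The quadrature routine below is our own illustration (not from the text); it uses the two-variable convention K_s(a, b) = ∫₀^∞ e^{-(ay + b/y)} y^s dy/y from Sect. 3.

```python
import math

def K2(s, a, b, lo=-12.0, hi=12.0, n=4000):
    """K_s(a, b) = ∫_0^∞ e^{-(a y + b/y)} y^s dy/y, computed via y = e^u (Simpson)."""
    total, w = 0.0, (hi - lo) / n
    for k in range(n + 1):
        u = lo + k * w
        coef = 1 if k in (0, n) else (4 if k % 2 else 2)
        total += coef * math.exp(-(a * math.exp(u) + b * math.exp(-u)) + s * u)
    return total * w / 3.0

s, a, r = 1.0, 1.0, 2.0
# Left side: Gamma(1) * (pi * e^{-2}) / sqrt(2*pi), using the classical value
lhs = math.gamma(s) * math.pi * math.exp(-2.0) / math.sqrt(2.0 * math.pi)
# Right side: (1/sqrt(2)) * K_{s-1/2}(a^2, r^2/4)
rhs = K2(s - 0.5, a * a, r * r / 4.0) / math.sqrt(2.0)
print(lhs, rhs)  # both ≈ 0.1696
```

Both sides reduce to √(π/2)·e⁻² in this case, which the quadrature reproduces.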


Theorem 4.1. Let R^{p×q} have the scalar product ⟨X, R⟩ = tr(ᵗXR). Let
dν(X) = (√2π)^{-pq} ∏ dx_{ij}. Let A ∈ Pos_q and let ρ be a left character on Pos_q.
Then in a half plane of ρ,
\[
\Gamma_q(\rho) \int_{\mathbf{R}^{p\times q}} \rho\bigl(({}^tXX + A^2)^{-1}\bigr)\, e^{-i\langle X, R\rangle}\, d\nu(X)
= (\sqrt{2})^{-pq}\, K_{\rho d_q^{-p/2}}(A^2,\, {}^tRR/4) .
\]
Proof. We recall that the function e^{-⟨X,X⟩/2} is self-dual with respect to ν.
Write down the defining integral for the gamma function, and interchange the
order of integration to get the left side equal to
\[
(1)\qquad \int_{\mathbf{R}^{p\times q}} \int_{\mathrm{Pos}_q} e^{-\mathrm{tr}(Y)}\, \rho(Y)\, \rho\bigl(({}^tXX + A^2)^{-1}\bigr)\, d\mu_q(Y)\, e^{-i\langle X, R\rangle}\, d\nu(X) .
\]

We write ᵗXX + A² = ᵗT_X T_X with T_X ∈ Tri⁺. Make the left translation by
[T_X] on Y. Then the above expression is equal to
\[
(2)\qquad \int_{\mathbf{R}^{p\times q}} \int_{\mathrm{Pos}_q} e^{-\mathrm{tr}([T_X]Y)}\, \psi(T_X)\, \rho(Y)\, \psi(T_X)^{-1}\, d\mu_q(Y)\, e^{-i\langle X, R\rangle}\, d\nu(X) .
\]
Then ψ(T_X) cancels ψ(T_X)⁻¹. Now interchange the integrals again. We have
\[
\mathrm{tr}([T_X]Y) = \mathrm{tr}(T_X Y\,{}^tT_X) = \mathrm{tr}({}^tT_X T_X Y)
= \langle XY^{1/2}, XY^{1/2}\rangle + \langle AY^{1/2}, AY^{1/2}\rangle ,
\]
so (2) becomes
\[
(3)\qquad \int_{\mathrm{Pos}_q} \int_{\mathbf{R}^{p\times q}} e^{-\langle XY^{1/2},\, XY^{1/2}\rangle}\, e^{-i\langle X, R\rangle}\, d\nu(X)\; e^{-\langle AY^{1/2},\, AY^{1/2}\rangle}\, \rho(Y)\, d\mu_q(Y) .
\]

We make the change of variables
\[
Z/\sqrt{2} = XY^{1/2}, \qquad\text{so}\qquad dZ = (\sqrt{2})^{pq}\, |Y|^{p/2}\, dX ,
\]
and the inner integral becomes the Fourier transform of e^{-⟨Z,Z⟩/2} evaluated
at RY^{-1/2}/√2. Thus (3) becomes
\[
(4)\qquad \frac{1}{(\sqrt{2})^{pq}} \int_{\mathrm{Pos}_q} e^{-\mathrm{tr}(A^2 Y + {}^tRR\,Y^{-1}/4)}\, \rho(Y)\, |Y|^{-p/2}\, d\mu(Y)
= \frac{1}{(\sqrt{2})^{pq}}\, K_{\rho d_q^{-p/2}}(A^2,\, {}^tRR/4) .
\]
This proves the theorem.

The next theorem gives the Mellin transform of the Bessel function,
and extends the one-variable formula
\[
\int_0^\infty K_z(1, y)\, y^s\, \frac{dy}{y} = \Gamma(s)\, \Gamma(s+z) .
\]
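In the one-variable case this Mellin formula is a direct Fubini computation, which we sketch for the reader (a routine verification, not spelled out in the text); the inner integral is evaluated by the substitution y = tu:

```latex
\int_0^\infty K_z(1,y)\, y^s \,\frac{dy}{y}
  = \int_0^\infty \int_0^\infty e^{-(t + y/t)}\, t^z\, y^s \,\frac{dt}{t}\,\frac{dy}{y}
  = \int_0^\infty e^{-t}\, t^z \left( \int_0^\infty e^{-y/t}\, y^s \,\frac{dy}{y} \right) \frac{dt}{t}
  = \Gamma(s) \int_0^\infty e^{-t}\, t^{s+z} \,\frac{dt}{t}
  = \Gamma(s)\,\Gamma(s+z) .
```

The matrix case follows the same pattern, with the translation trick of the proof below replacing the substitution.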

Theorem 4.2. Let σ, ρ be two left characters on Pos_n. Then
\[
\int_{\mathrm{Pos}_n} K_\sigma(I, Y)\, \rho(Y)\, d\mu(Y) = \Gamma_n(\rho)\, \Gamma_n(\sigma\rho) .
\]

Proof. We start with the right side:
\[
\Gamma_n(\rho)\, \Gamma_n(\sigma\rho) = \iint e^{-\mathrm{tr}(Y)}\, \rho(Y)\, e^{-\mathrm{tr}(Z)}\, \sigma(Z)\, \rho(Z)\, d\mu(Y)\, d\mu(Z) .
\]
Write Z = T_Z ᵗT_Z = [T_Z]I. Make the translation
\[
Y \mapsto [T_Z^{-1}]Y
\]
in the dµ(Y)-integral. Then the double integral is equal to
\[
\iint e^{-\mathrm{tr}(Z + YZ^{-1})}\, \psi_\rho(T_Z^{-1})\, \rho(Y)\, \psi_\rho(T_Z)\, \sigma(Z)\, d\mu(Y)\, d\mu(Z) .
\]
Then there is a cancellation of ψ_ρ(T_Z). After an interchange of integrals, the
desired formula comes out.
In our discussion of Proposition 1.2, we had left one component unspec-
ified. This component has its own importance, as in Bengtson, and we shall
now deal with it. We consider partial Iwasawa decompositions on Posn . We
let p, q be positive integers with p + q = n. We identify

Matp,q (R) = Rp×q .

As we have seen in Chap. 3, Sect. 2, we have a natural positive definite scalar
product ⟨X, R⟩ = tr(ᵗXR) on this space. For X ∈ R^{p×q} we let
\[
u(X) = \begin{pmatrix} I_p & X \\ 0 & I_q \end{pmatrix} \qquad\text{so}\qquad u(X) \in \mathrm{Tri}^+ .
\]
Let σ be a left character on Pos_p. Let Y ∈ Pos_n. We define the upper
Bengtson function β_{σ,Y} on R^{p×q} by the formula
\[
\beta_{\sigma,Y}(X) = \sigma \circ [\omega]\bigl(\mathrm{Sub}_p\, [u(X)]Y\bigr) .
\]

We normalize the Fourier transform with the measure
\[
d\nu(X) = (\sqrt{2\pi})^{-pq} \prod dx_{ij} ,
\]
so that
\[
\widehat{\beta}_{\sigma,Y}(R) = \int_{\mathbf{R}^{p\times q}} \beta_{\sigma,Y}(X)\, e^{-i\langle X, R\rangle}\, d\nu(X) .
\]

The first thing we remark about this Fourier transform, also called the Bengt-
son function, is its eigenfunction property for the action of R^{p×q} on Pos_n.

Proposition 4.3. With the above notation, for Z ∈ R^{p×q}, we have
\[
\widehat{\beta}_{\sigma,[u(Z)]Y}(R) = e^{i\langle Z, R\rangle}\, \widehat{\beta}_{\sigma,Y}(R) .
\]
Proof. This is immediate, because
\[
\widehat{\beta}_{\sigma,[u(Z)]Y}(R) = \int_{\mathbf{R}^{p\times q}} \sigma \circ [\omega]\bigl(\mathrm{Sub}_p\, [u(X)][u(Z)]Y\bigr)\, e^{-i\langle X, R\rangle}\, d\nu(X) .
\]
We make the translation X ↦ X − Z, and the formula falls out.

We investigate the Bengtson function in connection with the inductive
decomposition of an element of Pos_n. An element Y ∈ Pos_n has a unique
partial Iwasawa decomposition
\[
Y = [u(X)] \begin{pmatrix} W & 0 \\ 0 & V \end{pmatrix}
\]
with W ∈ Pos_p and V ∈ Pos_q. Matrix multiplication yields
\[
(5)\qquad [u(X)] \begin{pmatrix} W & 0 \\ 0 & V \end{pmatrix} = \begin{pmatrix} W + [X]V & XV \\ V\,{}^tX & V \end{pmatrix} .
\]
Note that V = Sub_q Y and W + [X]V = Sub_p Y. Furthermore, the expression
on the right immediately gives both the existence and uniqueness of the partial
Iwasawa decomposition. Indeed, V is determined first, then X is determined
by the upper right and lower left components, and finally W is determined
to solve for the upper left component. For the record we give the alternate
version with
\[
Y = [{}^t u(X)] \begin{pmatrix} W & 0 \\ 0 & V \end{pmatrix} ,
\]
\[
(6)\qquad [{}^t u(X)] \begin{pmatrix} W & 0 \\ 0 & V \end{pmatrix} = \begin{pmatrix} W & WX \\ {}^tXW & W[X] + V \end{pmatrix} ,
\]
\[
(7)\qquad \left( [u(X)] \begin{pmatrix} W & 0 \\ 0 & V \end{pmatrix} \right)^{-1} = \begin{pmatrix} W^{-1} & -W^{-1}X \\ -{}^tXW^{-1} & W^{-1}[X] + V^{-1} \end{pmatrix} .
\]
If Y has the diagonal decomposition Y = diag(W, V) as above, then we may
write β_{σ,Y} = β_{σ,W,V}. By Theorem 4.1, the Fourier transform then has the
expression
\[
\widehat{\beta}_{\sigma,W,V}(R) = \int_{\mathbf{R}^{p\times q}} \sigma^*\bigl((W + [X]V)^{-1}\bigr)\, e^{-i\langle X, R\rangle}\, d\nu(X) .
\]

Theorem 4.4. Let n = p + q as above. Let W ∈ Pos_p and V ∈ Pos_q. Let σ be
a left character on Pos_p and β the upper Bengtson function. Then
\[
\Gamma_p(\sigma^*)\, \widehat{\beta}_{\sigma,W,V}(R) = |V|^{-p/2}\, (\sqrt{2})^{-pq}\, K_{\sigma^* d_p^{-q/2}}(W,\, [R]V^{-1}/4) .
\]

Proof. By definitions, letting W = A², V = B², we have
\[
\beta_{\sigma,W,V}(X) = \sigma\bigl([\omega]\,\mathrm{Sub}_p\, [u(X)]Y\bigr) = \sigma^*\bigl((A^2 + [X]B^2)^{-1}\bigr) .
\]
Then
\[
(8)\qquad \widehat{\beta}_{\sigma,W,V}(R) = \int_{\mathbf{R}^{p\times q}} \sigma^*\bigl((A^2 + XB\,{}^t(XB))^{-1}\bigr)\, e^{-i\langle X, R\rangle}\, d\nu(X) .
\]

We make the change of variables X ↦ XB⁻¹. Then
\[
\langle XB^{-1}, R\rangle = \langle X, RB^{-1}\rangle \qquad\text{and}\qquad d\nu(XB^{-1}) = d\nu(X)\, |B|^{-p} .
\]
Applying Theorem 4.1 with R replaced by RV^{-1/2} (and taking transposes)
concludes the proof.

Remark. Bengtson (see [Be 83]) calls the Fourier transform β̂_{σ,W,V} a Bessel
function, and denotes it by k_{p,q}.

For the convenience of the reader we tabulate the alternate formula with
upper and lower triangular matrices reversed. Let τ be a left character on
Pos_q. We define the lower Bengtson function
\[
\beta_{\tau,Y}(X) = \tau \circ [\omega]\bigl(\mathrm{Sub}_q\, Y[u(X)]\bigr) .
\]
Theorem 4.5. Let β_{τ,Y} be the lower Bengtson function. Then
\[
\Gamma_q(\tau^*)\, \widehat{\beta}_{\tau,W,V}(R) = |W|^{-q/2}\, (\sqrt{2})^{-pq}\, K_{\tau^* d_q^{-p/2}}(V,\, W^{-1}[R]/4) .
\]
The proof is of course the same, mutatis mutandis.


Remark. In the definitions of the Bengtson functions, note that the reversing
operator [ω] might be denoted by [ωp ] resp. [ωq ], to denote the size of ω, which
is p resp. q in the respective theorems. In practice, the context determines the
size.
Next we give formulas which reduce the computation of the Fourier trans-
form to the case when W = I_p, or V = I_q, or Y = I_{p+q} = I_n.

Proposition 4.6. Let the notation be as in Theorem 4.4, so σ is a left char-
acter on Pos_p and β_σ the upper Bengtson function. Then
\[
\widehat{\beta}_{\sigma,W,V}(R) = |V|^{-p/2}\, \widehat{\beta}_{\sigma,W,I_q}(RV^{-1/2}) .
\]

Proof. This is actually the content of (8), together with the change of variables
X ↦ XB⁻¹, before we apply Theorem 4.1.

Proposition 4.7. Let W = ᵗTT with T ∈ Tri⁺_p, W ∈ Pos_p. Let σ be a left
character on Pos_p and β_σ the upper Bengtson function. Then
\[
\widehat{\beta}_{\sigma,W,I_q}(R) = |W|^{q/2}\, \sigma^*(W^{-1})\, \widehat{\beta}_{\sigma,I_p,I_q}(TR) .
\]
Proof. By definition
\[
\widehat{\beta}_{\sigma,W,I_q}(R) = \int_{\mathbf{R}^{p\times q}} \sigma^*\bigl((W + X\,{}^tX)^{-1}\bigr)\, e^{-i\langle X, R\rangle}\, d\nu(X) .
\]

We make the change of variables X ↦ ᵗTX. We note that
\[
\langle {}^tTX, R\rangle = \langle X, TR\rangle \qquad\text{and}\qquad d\nu({}^tTX) = |T|^q\, d\nu(X) = |W|^{q/2}\, d\nu(X) .
\]
Then
\begin{align*}
\widehat{\beta}_{\sigma,W,I_q}(R) &= |W|^{q/2} \int_{\mathbf{R}^{p\times q}} \sigma\bigl([{}^tT^{\omega}]\,(I + X\,{}^tX)^{\omega}\bigr)\, e^{-i\langle X, TR\rangle}\, d\nu(X) \\
&= |W|^{q/2}\, \sigma([\omega]\,{}^tT) \int_{\mathbf{R}^{p\times q}} \sigma\bigl([\omega](I + X\,{}^tX)\bigr)\, e^{-i\langle X, TR\rangle}\, d\nu(X) \\
&= |W|^{q/2}\, \sigma^*(W^{-1}) \int_{\mathbf{R}^{p\times q}} \sigma^*\bigl((I + X\,{}^tX)^{-1}\bigr)\, e^{-i\langle X, TR\rangle}\, d\nu(X) ,
\end{align*}
which yields the theorem by definition of β̂_{σ,I_p,I_q}(TR).

Having given the two reduction formulas above in separate cases, we can
combine them into one statement for the record.

Theorem 4.8. Let the situation be as in Propositions 4.6 and 4.7, so σ is a left
character on Pos_p, W ∈ Pos_p, W = ᵗTT with T ∈ Tri⁺_p, and V ∈ Pos_q. Then
\[
\widehat{\beta}_{\sigma,W,V}(R) = |V|^{-p/2}\, |W|^{q/2}\, \sigma^*(W^{-1})\, \widehat{\beta}_{\sigma,I_p,I_q}(TRV^{-1/2}) .
\]

Observe that the change of position of V resp. W in the two preceding


propositions was carried out independently, and each change is somewhat
lighter than the combination, so we have given all the steps to lighten the
computation.
4 Invariant Differential Operators on Posn(R)

1 Invariant Polynomials

Let V be a finite dimensional vector space over the reals. We let:

Pol(V) = algebra of polynomial functions on V;
S(V) = Pol(V∨) = symmetric algebra of V, where V∨ is the dual space.

In non-invariant terms, if {λ₁, …, λ_N} is a basis of V∨, the monomials
λ₁^{m₁} ⋯ λ_N^{m_N} form a basis of Pol(V). We apply this construction to two
vector spaces as follows. First, let


a = vector space of n × n diagonal matrices.
W = group of permutations of the diagonal elements of a diagonal
matrix.
Because of the way W generalizes in the theory of Lie algebras, we call W
the Weyl group. Let Eii be the diagonal matrix with 1 in the i-th component
and 0 elsewhere. Then every element v ∈ a can be expressed as a linear
combination
\[
v = \sum_{i=1}^{n} h_i E_{ii} \qquad\text{with coordinate functions } h_i .
\]

Let:

Pol(a)W = subalgebra of Pol(a) consisting of elements invariant


under W.

Thus Pol(a)W consists of the symmetric polynomials in the algebraically in-
dependent elements (variables) h₁, …, hₙ.
Next we consider V = Sym.

Jay Jorgenson: Posn(R) and Eisenstein Series, Lect. Notes Math. 1868, 75–94 (2005)
www.springerlink.com © Springer-Verlag Berlin Heidelberg 2005

Let E_{ij} be the matrix with ij-component equal to 1, and all other com-
ponents equal to 0. Then Sym has a basis (actually orthogonal) consisting of
the elements
\[
v_{ii} = E_{ii} \qquad\text{and}\qquad v_{ij} = \tfrac{1}{2}(E_{ij} + E_{ji}) \quad\text{for } i < j .
\]
Then the algebra Pol(Sym) can be viewed as the algebra of polynomials
\[
P(X) = P(\ldots, x_{ij}, \ldots)_{i \le j}
\]
where X is the coordinate matrix of a vector
\[
v = \sum_{i \le j} x_{ij}\, v_{ij} = v_X .
\]

The coordinate functions (xij ) form the dual basis of (vij ), and hi = xii . Let
K be the usual compact group of real unitary matrices. We let:
Pol(Sym)K = subalgebra consisting of the elements invariant under
the conjugation action by K.

Theorem 1.1. The restriction Pol(Sym) → Pol(a) induces an algebra iso-


morphism

Pol(Sym)K −→ Pol(a)W .
In other words, every W -invariant polynomial on a can be uniquely extended
to a K-invariant polynomial on Sym.

Proof. Every element of Sym can be diagonalized with respect to some ortho-
normal basis. This means that

Sym = [K]a ,

so that every element of Sym is of the form kv t k = kvk−1 for some v ∈ a and
k ∈ K. Thus the restriction map is injective. We have to prove that it is sur-
jective. For this we recall that a symmetric polynomial in variables h1 , . . . , hn
can be expressed uniquely as a polynomial in the elementary symmetric func-
tions s1 , . . . , sn . Furthermore, these symmetric functions are the coefficients
of the characteristic polynomial of elements v ∈ a:

det(tI + v) = tn + s1 tn−1 + . . . + sn .

But then a polynomial Q(s₁, …, sₙ) can be viewed as an element of
Pol(Sym), by taking v ∈ Sym; it extends the given polynomial in Pol(a),
and is obviously K-invariant, thus proving the theorem.

Remark. The above result was proved by Chevalley for semisimple Lie al-
gebras. Cf. Wallach [Wal 88], Theorem 3.1.2 and Helgason [Hel 84], Chap. 2,

Corollary 5.12 for a proof as a consequence of a much more analytic theo-


rem. A more direct proof was given by Harish-Chandra essentially along the
same lines as the proof we gave above for Theorem 1.1, but with technical
complications. Cf. [Hel 62], Chapter X, Theorem 6.16 which gives a complete
exposition of this proof, not kept in [Hel 84], but only mentioned in Exercise
D1, p. 340, following Harish-Chandra.

Actually, the algebra of symmetric polynomials on a has two natural
sets of generators: the elementary symmetric polynomials as above, and the
Newton polynomials
\[
S_r(h) = \mathrm{tr}(h^r) = h_1^r + \cdots + h_n^r \qquad\text{with } r = 1, \ldots, n .
\]
Thus the algebra Pol(Sym)K is generated by the algebraically independent
elements
\[
\mathrm{tr}(X), \ldots, \mathrm{tr}(X^n) .
\]
These elements restrict to tr(h), …, tr(hⁿ) on a. Let P = P(X) be an arbi-
trary K-invariant polynomial. Then there exists a unique polynomial P_New in
n variables such that
\[
P(X) = P_{\mathrm{New}}(\mathrm{tr}(X), \ldots, \mathrm{tr}(X^n)) .
\]
We call P_New the Newton polynomial of P.
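Concretely, passing between the elementary symmetric functions s_i and the power sums tr(h^r) is done by Newton's identities. The small script below is our own illustration: it checks them for h = (1, 2, 3), where det(tI + h) = (t+1)(t+2)(t+3) = t³ + 6t² + 11t + 6.

```python
# Newton's identities express the elementary symmetric functions s_1, s_2, s_3
# (coefficients of det(tI + diag(h))) through the power sums S_r = tr(h^r).
h = [1.0, 2.0, 3.0]
S = [sum(x ** r for x in h) for r in (1, 2, 3)]  # power sums S1, S2, S3

s1 = S[0]
s2 = (S[0] ** 2 - S[1]) / 2
s3 = (S[0] ** 3 - 3 * S[0] * S[1] + 2 * S[2]) / 6

print(s1, s2, s3)  # 6.0 11.0 6.0 — the coefficients of (t+1)(t+2)(t+3)
```

This is the algebraic content of the statement that tr(X), …, tr(Xⁿ) generate the same algebra as the elementary symmetric functions.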


For the reader’s convenience, we recall specific properties of the duality
for polynomial functions. We do so in a general context. Let V be a finite
dimensional vector space over a field of characteristic 0. Let d be a positive
integer, and let Pold (V ) be the vector space of homogeneous polynomials of
degree d on V . The usual “variables” are the coordinate functions with respect
to a basis, and such polynomials are therefore polynomial functions on V . If
V ∨ is the dual space, then V = V ∨∨ , and V, V ∨ play a symmetric role with
respect to each other. We denote elements of V by v and elements of V ∨ by λ.
The vector spaces Pold(V) and Pold(V∨) are dual to each other, under the
pairing whose value on monomials is given by
\[
\langle \lambda_1 \cdots \lambda_d,\; v_1 \cdots v_d \rangle = \sum_{\sigma} \langle \lambda_1, v_{\sigma(1)}\rangle \cdots \langle \lambda_d, v_{\sigma(d)}\rangle .
\]

The sum is here taken over all permutations σ of {1, . . . , d}. Given a non-
degenerate bilinear map between V and another vector space V ∨ , the same
formula defines a duality on their algebras of polynomial functions. In prac-
tice, one is usually given some non-degenerate symmetric bilinear form on
V itself, identifying V with its dual space. Note that the sum defining the
scalar product on monomials is the same as the sum defining determinants,
except that the alternating signs are replaced by all plus signs, thus making
the sum symmetric rather than skew symmetric in the two sets of variables
(λ1 , . . . , λd ) and (v1 , . . . , vd ). If {v1 , . . . , vn } is a basis for V and {λ1 , . . . , λn }

is the dual basis, then the value of the above pairing on their monomials is
1 or 0. Thus the distinct monomials of given degree d form dual bases for
Pold (V ) and Pold (V ∨ ).
Let K be a group acting on V . Then K also acts functorially on the dual
space V ∨ . For a functional λ ∈ V ∨ , we have by definition

([k]λ)(v) = λ([k −1 ]v) .

Proposition 1.2. The pairing described above between Pol(V ) and Pol(V ∨ )
is K-invariant, in the sense that for P ∈ Pol(V ) and Q ∈ Pol(V ∨ ), we have

h[k]P, [k]Qi = hP, Qi .

This is an immediate consequence of the definitions.


Let a be a subspace of V . The exact sequence 0 → a → V has the dual
exact sequence
V ∨ → a∨ → 0 .
The restriction map
Pol(V ) → Pol(a) → 0
corresponds to the dual sequence

0 → Pol(a∨ ) → Pol(V ∨ ) .

Let W be the subgroup of K leaving a stable, modulo the subgroup leaving a


elementwise fixed. We have

Pol(a∨ )W = S(a)W .

Immediately from the definitions, we get:


Proposition 1.3. The restriction

Pol(V )K → Pol(a)W

is an isomorphism if and only if the dual sequence

Pol(a∨ )W → Pol(V ∨ )K

is an isomorphism.

2 Invariant Differential Operators on Posn: the Maass-Selberg Generators

This section will describe the Maass-Selberg generators for invariant differ-
ential operators. We show how the invariant polynomials of the preceding
section are related to differential operators. There are two natural charts for

Posn : First as an open subset of Sym; and second as the image of the exponen-
tial map giving a differential isomorphism with Sym. Each one of these charts
gives rise to a description of the invariant differential operators, serving differ-
ent purposes. The algebra of invariant differential operators is isomorphic to
a polynomial algebra, and each one of these charts gives natural algebraically
independent generators for this algebra. The first set of generators is due to
Maass-Selberg [Maa 55], [Maa 56], [Sel 56], see also [Maa 71], which we follow
more or less.
We let DO(M ) denote the algebra of C ∞ differential operators on a mani-
fold M . Let G be a Lie group acting on M . As mentioned in the introduction,
we let DO(M )G be the subalgebra of G-invariant differential operators, and
0
similarly DO(M )G for any subgroup G0 . When the subgroup is G itself, we
often omit the reference to G, and speak simply of invariant differential oper-
ators. In the present chapter, we take M = Posn , and G = GLn (R).
We let Y = (y_{ij}) be the symmetric matrix of variables on Pos_n with
y_{ij} = y_{ji} for all i, j = 1, …, n. We let dY = (dy_{ij}). We also let
\[
(1)\qquad \frac{\partial}{\partial Y} = \begin{pmatrix} \partial/\partial y_{11} & \cdots & \tfrac{1}{2}\,\partial/\partial y_{ij} \\ \vdots & \ddots & \vdots \\ \tfrac{1}{2}\,\partial/\partial y_{ij} & \cdots & \partial/\partial y_{nn} \end{pmatrix}
= \begin{pmatrix} \partial_{11} & \cdots & \tfrac{1}{2}\,\partial_{ij} \\ \vdots & \ddots & \vdots \\ \tfrac{1}{2}\,\partial_{ij} & \cdots & \partial_{nn} \end{pmatrix} .
\]
The notation with partial derivatives ∂_{ij} on the right is useful when we do
not want to specify the variables. Note that the matrix of partial derivatives
has a factor 1/2 in its components off the diagonal. We let tr be the trace.
For any function f on Pos_n, we have
\[
(2)\qquad \mathrm{tr}\Bigl(dY\,\frac{\partial}{\partial Y}\Bigr) f = df = \sum_{i \le j} (\partial_{ij} f)(Y)\, dy_{ij} .
\]

This follows at once from the multiplication of matrices dY and ∂/∂Y . When
summing over all indices i, j, the factors 1/2 add up to 1, as desired to get
the df . This justifies the notation ∂/∂Y .
Next we consider a change of variables under the action of the group
G = GLn (R). Let g ∈ G. Let

(3) Z = gY t g so dZ = gdY t g .

Then f (Y ) = f (g −1 Z t g −1 ) = f1 (Z) = f ◦ [g −1 ](Z), and


\[
df_1(Z) = \mathrm{tr}\Bigl(dZ\,\frac{\partial}{\partial Z}\Bigr) f_1(Z) = \mathrm{tr}\Bigl(g\, dY\,{}^t g\,\frac{\partial}{\partial Z}\Bigr) f = \mathrm{tr}\Bigl(dY \cdot {}^t g\,\frac{\partial}{\partial Z}\, g\Bigr) f .
\]
Hence
\[
(4)\qquad \frac{\partial}{\partial Y} = {}^t g\,\frac{\partial}{\partial Z}\, g \qquad\text{and}\qquad \frac{\partial}{\partial Z} = {}^t g^{-1}\,\frac{\partial}{\partial Y}\, g^{-1} .
\]
Example. For any positive integer r,
\[
(5)\qquad Z\frac{\partial}{\partial Z} = g\, Y\frac{\partial}{\partial Y}\, g^{-1} \qquad\text{and}\qquad \Bigl(Z\frac{\partial}{\partial Z}\Bigr)^{r} = g \Bigl(Y\frac{\partial}{\partial Y}\Bigr)^{r} g^{-1} .
\]
Consequently
\[
(6)\qquad \mathrm{tr}\Bigl(\Bigl(Z\frac{\partial}{\partial Z}\Bigr)^{r}\Bigr) = \mathrm{tr}\Bigl(g\Bigl(Y\frac{\partial}{\partial Y}\Bigr)^{r} g^{-1}\Bigr) = \mathrm{tr}\Bigl(\Bigl(Y\frac{\partial}{\partial Y}\Bigr)^{r}\Bigr) ,
\]
from which we see that tr((Y ∂/∂Y)^r) is invariant for all positive integers r.
Thus we have exhibited a sequence of invariant differential operators. For a
positive integer r, we define the Maass-Selberg operators
\[
\delta_r = \mathrm{tr}\Bigl(\Bigl(Y\frac{\partial}{\partial Y}\Bigr)^{r}\Bigr) .
\]
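For n = 1 the operator δ₁ is just y d/dy, and its invariance can be seen numerically by finite differences: with f₁(z) = f(z/g²), one has (z d/dz)f₁ at y equal to (y d/dy)f at y/g². The check below is our own illustration (test function and sample values are arbitrary choices).

```python
import math

def delta1(f, y, h=1e-5):
    """δ1 f (y) = y f'(y) in the n = 1 case, via a central finite difference."""
    return y * (f(y + h) - f(y - h)) / (2 * h)

f = lambda y: math.sin(y) + y ** 2   # arbitrary smooth test function
g = 1.4                              # on Pos_1, [g]y = g**2 * y
y = 0.9

lhs = delta1(lambda z: f(z / g ** 2), y)   # δ1 applied to f ∘ [g^{-1}], at y
rhs = delta1(f, y / g ** 2)                # (δ1 f) evaluated at [g^{-1}]y
print(lhs, rhs)  # invariance: the two agree
```

This is exactly the invariance property (8) below, specialized to the one-dimensional case.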

We return to these differential operators in Theorem 2.3. Here we continue


with general properties.
The left translation operation [L_g]D is characterized by the property
\[
(7)\qquad ([L_g]D)(L_g f) = L_g(Df), \quad\text{that is}\quad ([L_g]D)(L_g f)(Y) = (Df)([g^{-1}]Y).
\]
Thus the invariance of D, namely that [L_g]D = D for all g, means that for all f,
\[
(8)\qquad (D(L_g f))(Y) = (Df)([g^{-1}]Y) = (Df)(g^{-1}Y\,{}^t g^{-1}),
\]
or equivalently
\[
D(L_g f) = L_g(Df), \quad\text{or also}\quad D(f\circ L_g) = (Df)\circ L_g.
\]
A differential operator can be written uniquely in the form
\[
D = P(Y, \partial/\partial Y) = \sum_{(m)} \varphi_{(m)}(Y)\prod_{i\le j}\bigl(\partial/\partial y_{ij}\bigr)^{m(i,j)}
\]
with functions \varphi_{(m)} and integral exponents m(i,j) \ge 0. Let X = (x_{ij}) be a variable in Sym, in terms of its coordinate functions. Let Fu be the ring of C^\infty functions on Pos_n. If we wish to specify the dependence of an element of Fu[X] on Y via its coefficients, we write an element of Fu[X] in the form
2 Invariant Differential Operators 81
\[
P(Y, X) = \sum_{(m)} \varphi_{(m)}(Y)\prod_{i\le j} x_{ij}^{m(i,j)}.
\]
Thus P(Y, X) is C^\infty in the coordinates of Y, and polynomial in those of X. The map P(Y, X) \mapsto P(Y, \partial/\partial Y), also written D(Y, \partial/\partial Y), obtained by substituting \partial_{ij} for x_{ij}, establishes an Fu-linear (not ring) isomorphism between Fu[X] and the Fu-module of differential operators. As in freshman calculus, it is useful to have a formalism of differentiation both with the variable explicit and without it. Observe that the invariance formula (8) can be written with any letter, say Z, that is,

\[
(D(L_g f))(Z) = (Df)(g^{-1}Z\,{}^t g^{-1}).
\]
Now we put Z = gY\,{}^t g. Then the right side becomes (Df)(Y). The left side is
\[
(D(L_g f))(Z) = P\Bigl(Z, \frac{\partial}{\partial Z}\Bigr)\bigl(f(g^{-1}Z\,{}^t g^{-1})\bigr)
= P\Bigl(gY\,{}^t g,\ {}^t g^{-1}\frac{\partial}{\partial Y}g^{-1}\Bigr)\bigl(f(Y)\bigr).
\]
Therefore the invariance formula can be expressed as
\[
P\Bigl(gY\,{}^t g,\ {}^t g^{-1}\frac{\partial}{\partial Y}g^{-1}\Bigr)\bigl(f(Y)\bigr)
= P\Bigl(Y, \frac{\partial}{\partial Y}\Bigr)\bigl(f(Y)\bigr),
\]
from which we can omit the expression f (Y ) at the end. We obtain
Proposition 2.1. Given P(Y, X) \in \mathrm{Fu}[X], the operator P(Y, \partial/\partial Y) is invariant if and only if for all g \in G,
\[
P(gY\,{}^t g,\ {}^t g^{-1}Xg^{-1}) = P(Y, X),
\]
or also
\[
D\Bigl(gY\,{}^t g,\ {}^t g^{-1}\frac{\partial}{\partial Y}g^{-1}\Bigr) = D\Bigl(Y, \frac{\partial}{\partial Y}\Bigr).
\]
We are now finished with the general remarks on invariant differential operators, and we relate them to operators with constant coefficients at the origin. The origin is the unit matrix, which we denote by I. We let
\[
P(I, X) = \sum_{(m)} \varphi_{(m)}(I)\prod_{i\le j} x_{ij}^{m(i,j)} = P_{D,I}(X).
\]
Then P_{D,I}(X) is an ordinary polynomial, and P_{D,I}(\partial/\partial Y) is a polynomial differential operator with constant coefficients, called the polynomial expression of D(Y, \partial/\partial Y) at the origin. It gives the value of the differential operator at the origin, in the sense that for any function f on Pos_n,
\[
(9)\qquad (Df)(I) = P_{D,I}(\partial/\partial Y)f(Y)\big|_{Y=I}.
\]
Furthermore, this polynomial P_{D,I} uniquely determines the invariant differential operator, i.e. the association

\[
P(Y, \partial/\partial Y) \mapsto P(I, \partial/\partial Y) = P_{D,I}(\partial/\partial Y)
\]
is an injective linear map of the real vector space of invariant differential operators into the space of polynomial differential operators with constant coefficients. Indeed, given a point Y, we select g \in G such that Y = [g^{-1}]I. Then for any function f on Pos_n,
\[
(10)\qquad (Df)(Y) = (D(L_g f))(I).
\]
Thus the family of values of derivatives (Df)(I) (for all f \in \mathrm{Fu}) uniquely determines D, and hence so does the polynomial P_{D,I}. Furthermore, the degree of the differential operator D is equal to the degree of P_{D,I} (as an ordinary polynomial). This is true because of the invariance of the degree under local isomorphisms, and also more explicitly by Theorem 2.2.

Theorem 2.2. The association D \mapsto P_{D,I} is a linear isomorphism
\[
\mathrm{DO}(\mathrm{Pos}_n)^G \longrightarrow \mathrm{Pol}(\mathrm{Sym})^K.
\]
Proof. We have already seen above that P_{D,I} is K-invariant, and that the association D \mapsto P_{D,I} is injective on \mathrm{DO}(\mathrm{Pos}_n)^G. There remains only to prove the surjectivity. Given P(X) \in \mathrm{Pol}(\mathrm{Sym})^K and Z \in \mathrm{Pos}_n, we may write Z = [g]I with some g \in G. Define Df by the formula
\[
(Df)(Z) = P\Bigl(\frac{\partial}{\partial Y}\Bigr)f([g]Y)\Big|_{Y=I}
= P\Bigl(\frac{\partial}{\partial Y}\Bigr)(f\circ[g])(Y)\Big|_{Y=I}.
\]
This value is independent of the choice of g, because any other choice is of the form gk with some k \in K, and by the K-invariance of P we get
\[
P\Bigl(\frac{\partial}{\partial Y}\Bigr)(f\circ[g])([k]Y)
= P\Bigl(\frac{\partial}{\partial Y}\Bigr)(f\circ[g])(Y),
\]
so (Df)(Z) is well defined, and D is defined in such a way that its G-invariance is then obvious. Local charts show that it is a differential operator, thereby concluding the proof.

Theorem 2.3. The Maass-Selberg invariant operators \delta_1, \ldots, \delta_n are algebraically independent, and the (commutative) ring \mathbf C[\delta_1, \ldots, \delta_n] is the full ring of invariant differential operators.

Proof. In Sect. 1 we recalled the Newton polynomials, and we now define
\[
D_1(Y, \partial/\partial Y) = P_{D,\mathrm{New}}\Bigl(\operatorname{tr}\Bigl(Y\frac{\partial}{\partial Y}\Bigr), \ldots, \operatorname{tr}\Bigl(\Bigl(Y\frac{\partial}{\partial Y}\Bigr)^{n}\Bigr)\Bigr).
\]
Then D_1 is an invariant differential operator in the algebra generated by \delta_1, \ldots, \delta_n, and
\[
D\Bigl(Y, \frac{\partial}{\partial Y}\Bigr) - D_1\Bigl(Y, \frac{\partial}{\partial Y}\Bigr)
\]
is an invariant differential operator. Furthermore, P_D - P_{D_1} has degree less than the degree of P_D; in other words, D - D_1 is a differential operator of lower degree than D, because the terms of highest degree in two differential operators commute modulo operators of lower degree. Thus we can continue by induction to conclude the proof that \delta_1, \ldots, \delta_n generate the algebra of invariant differential operators.

Finally, we prove the algebraic independence. Let P(x_1, \ldots, x_n) be a nonzero polynomial. We have to prove that P(\delta_1, \ldots, \delta_n) \ne 0. We shall prove this non-vanishing by applying the operator
\[
P(\delta_1, \ldots, \delta_n) = P\Bigl(\operatorname{tr}\Bigl(Y\frac{\partial}{\partial Y}\Bigr), \ldots, \operatorname{tr}\Bigl(\Bigl(Y\frac{\partial}{\partial Y}\Bigr)^{n}\Bigr)\Bigr)
\]
to the function e^{\operatorname{tr}(Y)}, and showing that we don't get 0, by a degree argument. Define the weight w of P(x_1, \ldots, x_n) to be the degree of the polynomial P(x_1, x_2^2, \ldots, x_n^n). Then w is also the degree of the polynomial
\[
P(\operatorname{tr}(Y), \ldots, \operatorname{tr}(Y^n))
\]
in the variables (y_{ij}). To see this, let P_w be the sum of all the monomial terms of weight w occurring in P(x_1, \ldots, x_n). We suppose P \ne 0, so P_w \ne 0. Then P_w(\operatorname{tr}(Y), \ldots, \operatorname{tr}(Y^n)) is homogeneous of degree w in (y_{ij}), and \ne 0 since \operatorname{tr}(Y), \ldots, \operatorname{tr}(Y^n) are algebraically independent. All other monomials occurring in P have lower weight, and hence lower degree in (y_{ij}), thus proving the assertion about w being the degree in the variables (y_{ij}).
Suppose that
\[
P(\delta_1, \ldots, \delta_n) = 0 = P\Bigl(\operatorname{tr}\Bigl(Y\frac{\partial}{\partial Y}\Bigr), \ldots, \operatorname{tr}\Bigl(\Bigl(Y\frac{\partial}{\partial Y}\Bigr)^{n}\Bigr)\Bigr).
\]
Observe that \partial_{ij}e^{\operatorname{tr}(Y)} = e^{\operatorname{tr}(Y)} if i = j and 0 otherwise. If M is a monomial of power products of the \partial_{ij}, then
\[
M e^{\operatorname{tr}(Y)} = e^{\operatorname{tr}(Y)} \quad\text{if } M \text{ contains only power products of } \partial_{11}, \ldots, \partial_{nn},
\]
\[
M e^{\operatorname{tr}(Y)} = 0 \quad\text{otherwise.}
\]

Lemma 2.4. Let Q(x_1, \ldots, x_n) be a polynomial \ne 0. Then
\[
Q\Bigl(\operatorname{tr}\Bigl(Y\frac{\partial}{\partial Y}\Bigr), \ldots, \operatorname{tr}\Bigl(\Bigl(Y\frac{\partial}{\partial Y}\Bigr)^{n}\Bigr)\Bigr)e^{\operatorname{tr}(Y)}
= e^{\operatorname{tr}(Y)}\bigl\{Q(\operatorname{tr}(Y), \ldots, \operatorname{tr}(Y^n)) + R(Y)\bigr\},
\]
where R(Y) is a polynomial of degree less than the weight of Q.


Proof. It suffices to prove the lemma when Q is a monomial, say
\[
Q(x_1, \ldots, x_n) = x_1^{d_1}\cdots x_n^{d_n} \quad\text{of weight } d_1 + 2d_2 + \cdots + nd_n.
\]
By the remark preceding the lemma, without loss of generality we may replace all \partial_{ij} by 0 whenever i \ne j, to evaluate the effect of a polynomial in \operatorname{tr}(Y\,\partial/\partial Y) on e^{\operatorname{tr}(Y)}. More precisely, let \Delta be the diagonal matrix operator
\[
\Delta = \begin{pmatrix}\partial_{11} & & 0\\ & \ddots & \\ 0 & & \partial_{nn}\end{pmatrix}.
\]
Then
\[
Q\Bigl(\operatorname{tr}\Bigl(Y\frac{\partial}{\partial Y}\Bigr), \ldots, \operatorname{tr}\Bigl(\Bigl(Y\frac{\partial}{\partial Y}\Bigr)^{n}\Bigr)\Bigr)e^{\operatorname{tr}(Y)}
= Q(\operatorname{tr}(Y\Delta), \ldots, \operatorname{tr}((Y\Delta)^n))e^{\operatorname{tr}(Y)} + R(Y)e^{\operatorname{tr}(Y)},
\]
where R(Y) has degree smaller than Q(\operatorname{tr}(Y), \ldots, \operatorname{tr}(Y^n)). Thus we are reduced to proving the formula for
\[
(\operatorname{tr}(Y\Delta))^{d_1}\cdots(\operatorname{tr}((Y\Delta)^n))^{d_n}.
\]
We do this by induction. Say d_1 \ge 1. We have
\[
\operatorname{tr}(Y\Delta)e^{\operatorname{tr}(Y)} = \operatorname{tr}(Y)e^{\operatorname{tr}(Y)}.
\]
Suppose, by induction, that the lemma is proved for x_1^{d_1-1}x_2^{d_2}\cdots x_n^{d_n}. Applying \operatorname{tr}(Y\Delta) to the inductive expression immediately yields the desired result. The general case follows in the same way.

Returning to the proof of algebraic independence, we see that the degree in (y_{ij}) of P(\operatorname{tr}(Y), \ldots, \operatorname{tr}(Y^n)) is equal to the weight of P, and hence by Lemma 2.4 the effect of
\[
P\Bigl(\operatorname{tr}\Bigl(Y\frac{\partial}{\partial Y}\Bigr), \ldots, \operatorname{tr}\Bigl(\Bigl(Y\frac{\partial}{\partial Y}\Bigr)^{n}\Bigr)\Bigr)
\]
on e^{\operatorname{tr}(Y)} cannot be 0. This concludes the proof of Theorem 2.3.
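The key special case of this computation, δ1 e^{tr Y} = tr(Y) e^{tr Y} (the case Q = x_1 of Lemma 2.4, with R = 0), can be checked numerically; the following sketch is our own illustration for n = 2:

```python
import math

# Check: tr(Y ∂/∂Y) e^{tr Y} = tr(Y) e^{tr Y}, using that
# ∂_ij e^{tr Y} = e^{tr Y} for i = j and 0 otherwise.

def f(Y):
    return math.exp(Y[0][0] + Y[1][1])          # e^{tr Y}

def delta1(func, Y, h=1e-5):
    # gradient matrix with the factor 1/2 off the diagonal, then tr(Y·grad)
    G = [[0.0]*2 for _ in range(2)]
    for i in range(2):
        for j in range(i, 2):
            Yp = [r[:] for r in Y]; Ym = [r[:] for r in Y]
            Yp[i][j] += h; Ym[i][j] -= h
            if i != j:
                Yp[j][i] += h; Ym[j][i] -= h
            d = (func(Yp) - func(Ym)) / (2*h)
            if i == j:
                G[i][i] = d
            else:
                G[i][j] = G[j][i] = 0.5*d
    return sum(sum(Y[i][k]*G[k][i] for k in range(2)) for i in range(2))

Y = [[0.4, 0.3], [0.3, 0.9]]
expected = (Y[0][0] + Y[1][1]) * f(Y)           # tr(Y) e^{tr Y}
print(abs(delta1(f, Y) - expected) < 1e-6)
```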

3 The Lie Algebra Generators

Here we describe the invariant differential operators via the exponential chart
exp : Sym → Posn . To each K-invariant polynomial we associate a differential
operator as follows.

Let P \in \mathrm{Pol}(\mathrm{Sym})^K be a K-invariant polynomial function on Sym. Let f be a function on \mathrm{Pos}_n. Let f_G be the function on G defined by
\[
f_G(g) = f([g]I).
\]
We call f_G the lift of f to G. With X denoting the coordinate matrix of elements in Sym as before, we define
\[
(D_P f_G)(g) = P\Bigl(\frac{\partial}{\partial X}\Bigr)f([g][\exp X]I)\Big|_{X=O}.
\]
Observe that by definition of the Newton polynomial, one can also write this definition in the form
\[
(D_P f_G)(g) = P^{\#}\Bigl(\operatorname{tr}\Bigl(\frac{\partial}{\partial X}\Bigr), \ldots, \operatorname{tr}\Bigl(\Bigl(\frac{\partial}{\partial X}\Bigr)^{n}\Bigr)\Bigr)f([g][\exp X]I)\Big|_{X=O}.
\]

Remark. The notation \exp X implicitly involves an identification. Indeed, X is the matrix of coordinate functions, and we identify it with the vector v_X = \sum_{i\le j} x_{ij}v_{ij}. When writing \exp X, we really mean \exp(v_X).

Lemma 3.1. The function DP fG depends only on cosets G/K, and is there-
fore a function on Posn , denoted DP f .

Proof. Replacing f by f\circ[g], it suffices to prove that D_P f_G is K-invariant, that is, (D_P f_G)(k) = (D_P f_G)(I) for all k \in K. But
\[
(D_P f_G)(k) = P\Bigl(\frac{\partial}{\partial X}\Bigr)f\bigl(k(\exp 2X)k^{-1}\bigr)\Big|_{X=O}
= P\Bigl(\frac{\partial}{\partial X}\Bigr)f\bigl(\exp(2kXk^{-1})\bigr)\Big|_{X=O}.
\]
Let Z = kXk^{-1}. Then as at the beginning of Sect. 2,
\[
\frac{\partial}{\partial Z} = {}^t k^{-1}\frac{\partial}{\partial X}k^{-1} = k\frac{\partial}{\partial X}k^{-1}.
\]
By the invariance of P under conjugation by k^{-1}, a change of variables concludes the proof.

Directly from the definition, it follows that DP ∈ DO(Posn )G , that is, DP


is an invariant differential operator on Posn .

Theorem 3.2. The association P \mapsto D_P is a linear isomorphism
\[
\mathrm{Pol}(\mathrm{Sym})^K \longrightarrow \mathrm{DO}(\mathrm{Pos}_n)^G.
\]

Proof. First we prove the injectivity of the map. It suffices to prove the injectivity at g = e (the unit element of G). Let F(X) = f(\exp 2X) be the pull back of f to \mathrm{Sym} = T_I\mathrm{Pos}_n (the tangent space at the origin). The function f locally near I, and so F locally near O, can be chosen arbitrarily, for instance to be a monomial in the variables. If D_P f = 0, then for every monomial F((x_{ij})) we have P(\partial/\partial X)F((x_{ij})) = 0, whence P = 0, thus proving injectivity.

As to the surjectivity, let D ∈ DO(Posn )G . Then for every function f on


Posn , and g ∈ G, we have

(Df )([g]I) = (D(f ◦ [g]))(I) .

Thus it suffices to prove the existence of a polynomial P \in \mathrm{Pol}(\mathrm{Sym})^K such that for all f we have
\[
(Df)(I) = P\Bigl(\frac{\partial}{\partial X}\Bigr)f([\exp X]I)\Big|_{X=O}
= P\Bigl(\frac{\partial}{\partial X}\Bigr)F(X)\Big|_{X=O}.
\]
In the exponential chart, i.e., a neighborhood of O in Sym, there is a polynomial P satisfying this relation for all f, and the only question is whether this polynomial is K-invariant. But the conjugation action by K on \mathrm{Pos}_n corresponds to the conjugation action of K in the chart, i.e. conjugation by an element k \in K commutes with the exponential, as we have already used in the first part of the proof. By the invariance of the operator, we have for all k \in K,
\[
P\Bigl(\frac{\partial}{\partial X}\Bigr)F(kXk^{-1})\Big|_{X=O} = P\Bigl(\frac{\partial}{\partial X}\Bigr)F(X)\Big|_{X=O}.
\]
Put Z = kXk^{-1}. The same transformation discussed at the beginning of this section, with Y replaced by X, shows that
\[
\frac{\partial}{\partial X} = k^{-1}\frac{\partial}{\partial Z}k.
\]
Hence
\[
P\Bigl(k^{-1}\frac{\partial}{\partial Z}k\Bigr)F(Z)\Big|_{Z=O} = P\Bigl(\frac{\partial}{\partial Z}\Bigr)F(Z)\Big|_{Z=O}
\]
for all functions F in a neighborhood of O in Sym. This implies that P is
K-invariant, and concludes the proof of the theorem.
Theorems 2.2 and 3.2 are proved in essentially the same way, but the notation and context are sufficiently different that we have reproduced the proofs separately. The key point is that in both charts, conjugation by elements of K commutes with the chart.

Remark. Here we have treated the relations between Pol(Sym)K and


DO(Posn )G directly on the symmetric space Posn . Helgason treats the sit-
uation more elaborately on the group and an arbitrary reductive coset space,
cf. [Hel 84], Chap. 2, Theorems 4.3 through 4.9.
Given a polynomial P(X), we let P(\partial) be the differential operator with constant coefficients obtained by substituting \partial_{ij} for x_{ij}. Thus if we wish to suppress the variables, we could write
\[
P\Bigl(\frac{\partial}{\partial X}\Bigr) = P(\partial).
\]
This is a differential operator on the space of functions on Sym.

4 The Transpose of an Invariant Differential Operator


Let M be a manifold with a volume form, and D a differential operator. As
usual, we can deal with the hermitian integral scalar product, or the bilinear
symmetric integral scalar product, given by the integral without the extra
complex conjugate, with respect to the volume dµ. We let D∗ be the adjoint
of D with respect to the hermitian product, and t D the transpose of D with
respect to the symmetric scalar product. Thus for C ∞ functions ψ1 , ψ2 for
which the following integrals are absolutely convergent, we have by definition
\[
\int_M (D\psi_1)\psi_2\,d\mu = \int_M \psi_1({}^t D\psi_2)\,d\mu.
\]

We shall denote the symmetric scalar product by [\psi_1, \psi_2], to distinguish it from the hermitian one \langle\psi_1, \psi_2\rangle. Then the transpose formula reads
\[
[D\psi_1, \psi_2] = [\psi_1, {}^t D\psi_2].
\]

The existence of the transpose in general is a simple routine matter. Let \Omega be a volume form on a Riemannian manifold. In local coordinates x_1, \ldots, x_N on a chart which in Euclidean space is, say, a rectangle, we can write
\[
\Omega(x) = \beta(x_1, \ldots, x_N)\,dx_1\wedge\cdots\wedge dx_N.
\]
We suppose the coordinates oriented so that the function \beta is positive. Let D be a monomial differential operator, so in terms of the coordinates
\[
D = \gamma\,\partial_{j_1}\cdots\partial_{j_m},
\]
where \gamma is a function, and \partial_j is the partial derivative with respect to the j-th variable. Then, integrating over the manifold, if \gamma or \varphi or \psi has compact support in the chart, we can integrate by parts and the boundary terms will vanish, so we get
\[
\int (D\varphi)\psi\,\Omega = \int \gamma\psi\beta\,(\partial_{j_1}\cdots\partial_{j_m}\varphi)\,dx_1\wedge\cdots\wedge dx_N
= (-1)^m \int \partial_{j_1}\cdots\partial_{j_m}(\gamma\psi\beta)\,\varphi\,dx_1\wedge\cdots\wedge dx_N
= (-1)^m \int \frac{1}{\beta}\,\partial_{j_1}\cdots\partial_{j_m}(\gamma\beta\psi)\,\varphi\,\Omega.
\]
Thus we find

Proposition 4.1. In local coordinates, suppose
\[
\Omega(x) = \beta(x_1, \ldots, x_N)\,dx_1\wedge\cdots\wedge dx_N \quad\text{and}\quad D = \gamma\,\partial_{j_1}\cdots\partial_{j_m}.
\]
If \gamma, or \varphi, or \psi has compact support in the chart, then
\[
{}^t D\psi = (-1)^m\,\frac{1}{\beta}\,\partial_{j_1}\cdots\partial_{j_m}(\gamma\beta\psi).
\]
Using a partition of unity, this formula also applies under conditions of absolute convergence.
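Proposition 4.1 can be illustrated numerically in the simplest case N = 1, m = 1; the sketch below is our own example (β, γ, φ, ψ are arbitrary choices, with rapid decay standing in for compact support):

```python
import math

# Check of the transpose formula on R: with Ω = β(x) dx and D = γ(x) d/dx,
#   ∫ (Dφ) ψ β dx = ∫ φ (ᵗDψ) β dx,   where  ᵗDψ = -(1/β)(γβψ)'.

beta  = lambda x: 2.0 + math.sin(x)            # positive density
gamma = lambda x: x
phi   = lambda x: math.exp(-x*x)               # rapidly decaying test functions
psi   = lambda x: math.exp(-(x-0.5)**2)

def ddx(f, x, h=1e-5):
    return (f(x+h) - f(x-h)) / (2*h)

def integral(f, a=-8.0, b=8.0, n=16000):
    dx = (b-a)/n
    return sum(f(a + (k+0.5)*dx) for k in range(n)) * dx   # midpoint rule

lhs = integral(lambda x: gamma(x)*ddx(phi, x) * psi(x) * beta(x))
tD  = lambda x: -ddx(lambda u: gamma(u)*beta(u)*psi(u), x) / beta(x)
rhs = integral(lambda x: phi(x) * tD(x) * beta(x))
print(abs(lhs - rhs) < 1e-4)
```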
Apply this to the volume form corresponding to the measure on \mathrm{Pos}_n:
\[
d\mu_n(Y) = |Y|^{-(n+1)/2}\,d\mu_{\mathrm{euc}}(Y), \quad\text{so that}\quad \beta(Y) = |Y|^{-(n+1)/2}.
\]
We get:
We get:

Proposition 4.2. Let
\[
D_Y = \alpha(Y)\prod_{i\le j}\Bigl(\frac{\partial}{\partial y_{ij}}\Bigr)^{m_{ij}}
\]
and m = \sum m_{ij}. Then
\[
{}^t D_Y = (-1)^m\,|Y|^{(n+1)/2}\prod_{i\le j}\Bigl(\frac{\partial}{\partial y_{ij}}\Bigr)^{m_{ij}}\circ\bigl(\alpha(Y)\,|Y|^{-(n+1)/2}\bigr).
\]

We shall relate convolutions and symmetries with the transpose. We note that the function (Z, Y) \mapsto \operatorname{tr}(YZ^{-1}) is a point pair invariant on \mathrm{Pos}_n. It follows that the function
\[
(Z, Y) \mapsto e^{-\operatorname{tr}(YZ^{-1})} = \exp(-\operatorname{tr}(YZ^{-1}))
\]
is a point pair invariant, which goes to zero rapidly as \operatorname{tr}(YZ^{-1}) goes to infinity. We define the gamma kernel or gamma point pair invariant to be
\[
\varphi(Z, Y) = e^{-\operatorname{tr}(Z^{-1}Y)} = e^{-\operatorname{tr}(YZ^{-1})}.
\]

Proposition 4.3. For a character \rho on \mathrm{Pos}_n lying in a half space of convergence, the eigenvalue of the gamma operator is given by
\[
(\varphi * \rho)(I) = \Gamma_n(\rho) = \lambda_\Gamma(\rho),
\]
which was computed in Chap. 3, Proposition 2.1. In other words,
\[
(\varphi * \rho)(Z) = \Gamma_n(\rho)\rho(Z).
\]
Proof. This is just Proposition 2.2 of Chap. 3.
As mentioned in Chap. 3, we emphasize the eigenvalue property of \Gamma_n(\rho) by writing
\[
\Gamma_n(\rho) = \lambda_\Gamma(\rho).
\]
We can now prove a formula for the transpose of an invariant differential operator, as in Maass [Maa 71]. The result is stated without proof in [Sel 56], p. 53.

Proposition 4.4. Let S(Y) = Y^{-1}. Let D be an invariant differential operator, and let
\[
\tilde D = [S]D.
\]
Then
\[
{}^t D = \tilde D \quad\text{and}\quad D^* = \bar{\tilde D} = {}^t\bar D.
\]
Proof. By [JoL 01], Theorem 1.3 of Chap. 3, it suffices to verify the formula on the characters \rho. Actually, if we parametrize the characters as \rho_s with n complex variables s, it suffices to prove the relation for s lying in the half space of convergence of the gamma convolution. Let \varphi(Z, Y) = \exp(-\operatorname{tr}(YZ^{-1})). Consider \rho = \rho_s lying in this half space. Let \lambda_\rho = (\varphi * \rho)(I) and let c(\rho) denote the eigenvalue of {}^t D on \rho. Then by definition
\[
\lambda_\rho\,{}^t D\rho = \lambda_\rho c(\rho)\rho.
\]
On the other hand,
\begin{align*}
\lambda_\rho \tilde D\rho(Z) &= \tilde D_Z\int_{\mathrm{Pos}_n}\varphi(Z, Y)\rho(Y)\,d\mu_n(Y) && \text{by Proposition 4.3}\\
&= \int_{\mathrm{Pos}_n}\tilde D_1\varphi(Z, Y)\rho(Y)\,d\mu_n(Y) &&\\
&= \int_{\mathrm{Pos}_n} D_2\varphi(Z, Y)\rho(Y)\,d\mu_n(Y) && \text{by [JoL 01], Chap. 4, Lemma 1.4}\\
&= \int \varphi(Z, Y)({}^t D\rho)(Y)\,d\mu_n(Y) && \text{by definition of the transpose}\\
&= \int \varphi(Z, Y)c(\rho)\rho(Y)\,d\mu_n(Y) = \lambda_\rho c(\rho)\rho(Z).
\end{align*}
Hence \tilde D and {}^t D have the same eigenvalue c(\rho) on \rho for all \rho, so they are equal by [JoL 01], Theorem 1.3 of Chap. 3. This concludes the proof.

5 Invariant Differential Operators on A and the Normal Projection
We start with invariant differential operators on A at the level of freshman calculus. We have
\[
A \approx \mathbf R^+\times\cdots\times\mathbf R^+ \quad\text{with}\quad a = \operatorname{diag}(a_1, \ldots, a_n) \in A.
\]
Let \mathfrak a = \mathrm{Lie}(A) as usual, so \mathfrak a is the vector space of diagonal matrices. For each i = 1, \ldots, n let D_i be the differential operator on the space \mathrm{Fu}(A) of C^\infty functions on A defined by
\[
(D_i f)(a) = a_i\partial_i f(a) = a_i\frac{\partial f}{\partial a_i}.
\]
Then a direct computation shows that each D_i is invariant under multiplicative translations on A, that is, D_i \in \mathrm{IDO}(A).
The construction of invariant differential operators given previously on \mathrm{Pos}_n can be reproduced directly on A. In other words, for a polynomial P(h_1, \ldots, h_n) in the coordinates of an element of \mathfrak a with respect to the natural basis of \mathbf R\times\cdots\times\mathbf R, one may define the differential operator D_P on \mathrm{Fu}(A) by the formula
\[
(D_P f)(a) = P(\partial/\partial h)f(a\exp h)\big|_{h=0}
= P(\partial/\partial h_1, \ldots, \partial/\partial h_n)f(a_1e^{h_1}, \ldots, a_ne^{h_n})\big|_{h=0}.
\]
From the ordinary chain rule of calculus, it is immediate that
\[
D_P = P(D_1, \ldots, D_n).
\]
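The invariance of the operators D_i under multiplicative translation can be checked directly; the following is our own numerical sketch (f and b are arbitrary choices):

```python
import math

# Check: (D_i f)(a) = a_i ∂f/∂a_i commutes with the multiplicative
# translation a ↦ ba on A ≈ R_+ × R_+, i.e. D_i(f ∘ L_b)(a) = (D_i f)(ba).

def D(i, func, a, h=1e-6):
    ap = list(a); am = list(a)
    ap[i] += h; am[i] -= h
    return a[i] * (func(ap) - func(am)) / (2*h)

f = lambda a: a[0]**2 * a[1] + math.log(a[0])   # arbitrary smooth function
b = (1.7, 0.6)                                   # arbitrary element of A
a = (2.0, 3.0)
translate = lambda func: (lambda x: func((b[0]*x[0], b[1]*x[1])))
ok = all(abs(D(i, translate(f), a) - D(i, f, (b[0]*a[0], b[1]*a[1]))) < 1e-6
         for i in range(2))
print(ok)
```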

Proposition 5.1. The algebra of invariant differential operators on A is the polynomial algebra \mathbf R[D_1, \ldots, D_n]. The map P \mapsto D_P gives an algebra isomorphism from \mathrm{Pol}(\mathfrak a) to \mathrm{IDO}(A). The elements invariant under W (the Weyl group of permutations of the variables) correspond to each other under this isomorphism.

Proof. This is a result in calculus, because of the special nature of the group A. Indeed, the ordinary exponential map gives a Lie group isomorphism of \mathbf R\times\cdots\times\mathbf R with A. Then invariant differential operators on \mathfrak a = \mathbf R\times\cdots\times\mathbf R are simply the differential operators with constant coefficients. One sees this at once from the invariance, namely if we let f \in \mathrm{Fu}(\mathfrak a), then for v \in \mathfrak a,
\[
D(f(x+v)) = (Df)(x+v).
\]
This means that the coefficients in the expression
\[
D = \sum_{(j)} c_{(j)}(x)\Bigl(\frac{\partial}{\partial x_1}\Bigr)^{j_1}\cdots\Bigl(\frac{\partial}{\partial x_n}\Bigr)^{j_n}
\]
are translation invariant, and so constant. Note that the partial derivatives \partial/\partial x_i on \mathfrak a correspond to a_i\partial/\partial a_i on A, with the change of variables
\[
a_i = e^{x_i} \quad\text{or}\quad x_i = \log a_i.
\]

For the general Harish-Chandra theory of polynomials and invariant differential operators, cf. [JoL 01], Theorem 1.1 of Chap. 2.
Next, we describe systematically the relation of invariant differential operators on \mathrm{Pos}_n with invariant differential operators on A via the normal projection, obtained by extending a function on A to a function on \mathrm{Pos}_n by making it constant along orthogonal geodesics to A. The main theorem on \mathrm{Pos}_n, Theorem 5.4, is a special case of Helgason's general theorem on semisimple Lie groups [Hel 77] and [Hel 84], Chap. 2, Theorem 5.13. The treatment here in some sense follows Helgason, but he works by descending from the group, while we work directly on the symmetric space \mathrm{Pos}_n.
The present section uses some concepts from differential geometry. It will not be used in the sequel, and readers may omit it. Differential geometry will be essentially absent from our development. On the other hand, some readers may find it illuminating to see the present section provide an example of some general differential geometric considerations.
As before, let \mathfrak a = \mathrm{Lie}(A) be the algebra of diagonal matrices, and let
\[
\mathfrak a^\perp = \mathrm{Sym}^{(0)}
\]
be its orthogonal complement under the trace form, so \mathfrak a^\perp is the space (not algebra) of matrices with zero diagonal components. For a \in A, the multiplicative translate [a]\mathrm{Sym}^{(0)} is the normal space to the tangent space of A at [a]I, but in the present situation,
\[
[a]\mathrm{Sym}^{(0)} = \mathrm{Sym}^{(0)}.
\]

As we have seen previously, the variables X split naturally, so that we write
\[
X = (X_{\mathfrak a}, X^{(0)}),
\]
where X_{\mathfrak a} = (x_{11}, \ldots, x_{nn}) are the variables on \mathfrak a, and X^{(0)} = (x_{ij})_{i<j} are the variables on \mathrm{Sym}^{(0)} = \mathfrak a^\perp. We may decompose a polynomial P \in \mathrm{Pol}(\mathrm{Sym}) as a sum
\[
P = P_{\mathfrak a} + P^{(0)},
\]
where P_{\mathfrak a} = P_{\mathfrak a}(X_{\mathfrak a}) = P_{\mathfrak a}(x_{11}, \ldots, x_{nn}) involves only the variables on \mathfrak a, and all monomials occurring in P^{(0)} (meaning with non-zero coefficient) involve a positive power of some coordinate x_{ij} (i < j) on \mathfrak a^\perp. We call this the \mathfrak a-orthogonal decomposition of P, and we call P_{\mathfrak a} the projection of P on \mathfrak a. We call P^{(0)} the \mathfrak a-orthogonal projection of P. We note the frequently used fact that P^{(0)} vanishes on \mathfrak a. In particular, P_{\mathfrak a}(X_{\mathfrak a}) is the restriction of P to \mathfrak a.

Proposition 5.3. The map P \mapsto P_{\mathfrak a} is a linear isomorphism
\[
\mathrm{Pol}(\mathrm{Sym})^K \longrightarrow \mathrm{Pol}(\mathfrak a)^W.
\]
Proof. This is essentially a repetition of Proposition 5.1, combined with the remarks preceding the proposition. The result essentially comes from the relation
\[
\mathrm{Sym} = [K]\mathfrak a \quad\text{and}\quad \mathrm{Sym}^\vee = [K]\mathfrak a^\vee.
\]
Associated to the action [a] : \mathrm{Pos}_n \to \mathrm{Pos}_n by an element a \in A, we have the derivative
\[
[a]'(I) : \mathrm{Sym} \to T_{[a]I}\mathrm{Pos}_n \quad\text{and}\quad [a]'(I) = [a].
\]
As for the exponential map, for v \in \mathrm{Sym},
\[
\exp_{[a]I}([a]v) = [a]\exp_I(v).
\]
We view T_I\mathrm{Pos}_n = \mathrm{Sym} as the tangent space of \mathrm{Pos}_n at I, and T_{[a]I}\mathrm{Pos}_n is its image under [a]. So at each [a]I \in A we have the splitting
\[
T_{[a]I}\mathrm{Pos}_n = [a]\mathrm{Sym} = [a]\mathfrak a + [a]\mathfrak a^\perp,
\]
which, by our choice of chart, can be viewed as simply Sym again, but one has to be careful about the action in making identifications.

Let N_{\mathrm{Pos}_n}A be the normal bundle of A in \mathrm{Pos}_n. Then the fibers of N_{\mathrm{Pos}_n}A are simply the normal spaces [a]\mathfrak a^\perp, with a \in A, so they may be identified with \mathfrak a^\perp. The exponential map (varying at each point)
\[
v \mapsto \exp_{[a]I}(v) \quad\text{for } v \in T_{[a]I}\mathrm{Pos}_n
\]
induces a differential isomorphism of a neighborhood of the zero section in the normal bundle with a neighborhood of A in \mathrm{Pos}_n, called a tubular neighborhood. Actually, the tubular neighborhood exists globally.

Theorem 5.4. The map
\[
e_{NA} : A\times\mathfrak a^\perp \to \mathrm{Pos}_n \quad\text{given by}\quad (a, v) \mapsto [a]\exp(v)
\]
is a differential isomorphism.

Proof. This is a special case of a result given by Loos [Loo 69], pp. 161–162, in
the context of semisimple Lie groups and symmetric spaces. However, it is also
a special case of a much more general theorem about Cartan-Hadamard spaces
in differential geometry, see [La 99], Chap. X, Theorem 2.5 for an exposition
and further historical comments. Helgason [Hel 78], Chap. 1, Theorem 14.6
and Chap. 6, Theorem 1.4 gives a topological version without differentiability,
extending theorems of Mostow [Mos 53], also stated without differentiability.
The product decomposition of Theorem 5.4 will be called the A-normal decomposition of \mathrm{Pos}_n.
The geometry is illustrated in the following figure, with w \in \mathrm{Sym}^{(0)}.

[Figure: the geodesics t \mapsto \exp(tw) and t \mapsto [a]\exp(tw), emanating from I and from [a]I in the directions normal to A.]

We have drawn the exponential curves emanating from the unit matrix, and from some translation [a]I, in the geodesic normal direction, a situation we now discuss more extensively. Given a function f on A, we can define its normal extension to \mathrm{Pos}_n, namely we let f_{\mathrm{Pos}_n} be the function on \mathrm{Pos}_n which is constant on every translation [a]\exp(\mathfrak a^\perp), for all a \in A. From Theorem 5.4 we have two differential isomorphisms
\[
\mathfrak a + \mathfrak a^\perp = \mathfrak a + \mathrm{Sym}^{(0)} \longrightarrow A\times\mathfrak a^\perp \xrightarrow{\ e_{NA}\ } \mathrm{Pos}_n,
\]
where the left arrow is simply the exponential map on \mathfrak a and the identity on \mathfrak a^\perp. Thus the function f on A not only can be extended normally to \mathrm{Pos}_n via e_{NA}, but can be pulled back to a function F on \mathfrak a, and then be extended to F_{\mathrm{Sym}}, constant on each coset h + \mathfrak a^\perp, with h \in \mathfrak a. Thus for X \in \mathfrak a^\perp = \mathrm{Sym}^{(0)} and a = \exp h, we have by definition
\[
f(\exp h) = F(h) \quad\text{and}\quad f_{\mathrm{Pos}_n}([a]\exp X) = F_{\mathrm{Sym}}(h, X).
\]

For P^{(0)} we then have
\[
(1)\qquad P^{(0)}(\partial)F_{\mathrm{Sym}} = 0.
\]
Next we shall relate these partial differential operators with Proposition 5.1 via a commutative diagram. Our goal is to prove Theorem 5.6. Note that each

W-invariant polynomial P_{\mathfrak a} on \mathfrak a gives rise to an invariant differential operator on A, in the obvious way, similar to the association P \mapsto D_P on \mathrm{Pos}_n, namely
\[
(D_{P_{\mathfrak a}}f)(a) = P_{\mathfrak a}\Bigl(\frac{\partial}{\partial h}\Bigr)f([a][\exp h]I)\Big|_{h=0} \quad\text{for } f \in \mathrm{Fu}(A).
\]
Here h is the n-tuple of diagonal variables with h_i = x_{ii} (i = 1, \ldots, n), and Fu denotes the space of C^\infty functions.
Let D be an invariant differential operator on \mathrm{Pos}_n. We define its A-normal (or radial) projection D_A^\perp, also written r_A^\perp(D), on the space of functions \mathrm{Fu}(A) as follows. Let f_{\mathrm{Pos}_n} be the normal extension of f to \mathrm{Pos}_n. We apply D to f_{\mathrm{Pos}_n} and restrict this function to A to get D_A^\perp f, so by definition,
\[
r_A^\perp(D)f = D_A^\perp f = (Df_{\mathrm{Pos}_n})_A.
\]

Lemma 5.5. The map D \mapsto D_A^\perp is a linear isomorphism
\[
r_A^\perp : \mathrm{DO}(\mathrm{Pos}_n)^G \xrightarrow{\ \approx\ } \mathrm{IDO}(A)^W.
\]
For D = D_P we have
\[
D_{P_{\mathfrak a}} = r_A^\perp(D_P).
\]
Proof. Immediate from the definitions and formula (1).

Theorem 5.6. We have a commutative diagram of linear isomorphisms:
\[
\begin{array}{ccc}
\mathrm{DO}(\mathrm{Pos}_n)^G & \longrightarrow & \mathrm{IDO}(A)^W\\
\uparrow & & \uparrow\\
\mathrm{Pol}(\mathrm{Sym})^K & \longrightarrow & \mathrm{Pol}(\mathfrak a)^W
\end{array}
\qquad\text{given by}\qquad
\begin{array}{ccc}
D_P & \longmapsto & D_{P_{\mathfrak a}} = D_A^\perp\\
\uparrow & & \uparrow\\
P & \longmapsto & P_{\mathfrak a}
\end{array}
\]
Proof. This just puts Proposition 5.1 and the previous lemma together.
5 Poisson Duality and Zeta Functions

This chapter recalls some standard facts about the Poisson summation formula
on a euclidean space. It will be applied when the euclidean space is the space of
matrices, with the trace scalar product, so we make all the standard formalism
explicit in this case. We give two classical applications to the Epstein zeta
function, which serve as prototypes. In the extension of the theory to Posn ,
we have to generalize the definition of the Bessel functions to this higher
dimensional case, and this will be done in the next chapter. Then we can put
all these results together in the study of Eisenstein series.

1 Poisson Duality
Duality on Vector Spaces Over R

Let V be a finite dimensional vector space over R, of dimension N . We sup-


pose given a positive definite scalar product on V , denoted by hx, yi. We also
suppose given a Lebesgue measure (any two such differ by a positive constant
factor), denoted by µ. We normalize the Fourier transform of a function f
by the formula
\[
\textbf{FT 1.}\qquad f^\vee(x) = \int_V f(y)\,e^{-2\pi i\langle x,y\rangle}\,d\mu(y).
\]

We call f^\vee the Poisson dual, which is a normalization of the Fourier transform adapted for the summation formula. Let b > 0. Then
\[
d\mu(by) = b^N\,d\mu(y).
\]

Define f\circ b to be the multiplicative translation, i.e. (f\circ b)(x) = f(bx). Then
\[
\textbf{FT 2.}\qquad (f\circ b)^\vee(x) = b^{-N}f^\vee(x/b).
\]

Jay Jorgenson: Posn(R) and Eisenstein Series, Lect. Notes Math. 1868, 95–106 (2005)
www.springerlink.com © Springer-Verlag Berlin Heidelberg 2005

More generally, let A : V \to V be an invertible linear map, and denote by \|A\| the absolute value of the determinant of A. Then for the composite f\circ A we have
\[
\textbf{FT 3.}\qquad (f\circ A)^\vee = \|A\|^{-1}\,f^\vee\circ{}^t A^{-1}.
\]
In particular, if A is symmetric (which will be the case in practice) we find
\[
\textbf{FT 4.}\qquad (f\circ A)^\vee = \|A\|^{-1}\,f^\vee\circ A^{-1}.
\]
For example, if A is represented by a diagonal matrix \operatorname{diag}(a_1, \ldots, a_N) with respect to a basis, and a_j > 0 for all j, then
\[
\|A\| = a_1\cdots a_N.
\]
Next, let z \in V, and define the additive translation f_z by
\[
f_z(x) = f(x-z).
\]
Then
\[
\textbf{FT 5.}\qquad (f_z)^\vee(x) = e^{-2\pi i\langle x,z\rangle}f^\vee(x).
\]
This comes at once from the invariance of \mu under additive translations.
\textbf{FT 6.} If the measure of the unit cube for an orthonormal basis is 1, and we define f^-(x) = f(-x), then
\[
f^{\vee\vee} = f^-.
\]
One can either repeat the usual proof with the present normalization, or de-
duce it from the same formula for the otherwise normalized Fourier transform.
That is, if we define
\[
f^\wedge(x) = \int_{\mathbf R^N} f(y)e^{-i\langle x,y\rangle}\,d\nu(y) \quad\text{with}\quad d\nu(y) = (\sqrt{2\pi})^{-N}dy,
\]
and dy is the usual Lebesgue measure dy_1\cdots dy_N, then
\[
f^{\wedge\wedge} = f^-.
\]
A complete proof is found in basic texts, e.g. in Lang’s Real and Functional
Analysis, Chap. 8, Theorem 5.1.
\textbf{FT 7.} Assume again that the measure of the unit cube for an orthonormal basis is 1. Then the function
\[
f(x) = e^{-\pi\langle x,x\rangle}
\]
is self-dual, i.e. f^\vee = f. For the other normalization, the function
\[
g(x) = e^{-\langle x,x\rangle/2}
\]
is self-dual, that is, g^\wedge = g.
These relations are elementary from calculus.
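FT 7 can also be checked by direct quadrature in dimension N = 1; the following sketch is our own (since f is even, the dual is real and reduces to a cosine integral):

```python
import math

# Check: the Poisson dual of f(x) = e^{-π x²} is f itself, i.e.
#   f^(x) = ∫ e^{-π y²} cos(2π x y) dy = e^{-π x²}.

def dual(x, a=-6.0, b=6.0, n=24000):
    # midpoint rule; the Gaussian tails beyond |y| = 6 are negligible
    dx = (b-a)/n
    return sum(math.exp(-math.pi*(a+(k+0.5)*dx)**2) *
               math.cos(2*math.pi*x*(a+(k+0.5)*dx)) for k in range(n)) * dx

x = 0.8
print(abs(dual(x) - math.exp(-math.pi*x*x)) < 1e-6)
```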

Lattices. Let L be a lattice in V, that is, a free \mathbf Z-module of rank N, which is also R-free. We let L' be the dual lattice, consisting of all x \in V such that \langle x, y\rangle \in \mathbf Z for all y \in L. Then L' \approx \mathrm{Hom}(L, \mathbf Z) in a natural way.
For functions in the Schwartz space (at the very least) one has the

Poisson Summation Formula
\[
\sum_{\alpha\in L} f(\alpha) = \mu(V/L)^{-1}\sum_{\alpha'\in L'} f^\vee(\alpha').
\]

Proof. The Poisson formula can be seen from our point of view as a special case of the heat kernel relation on a torus. However, we give the usual proof via Fourier inversion. Normalize \mu so that \mu(V/L) = 1. Let
\[
g(x) = \sum_{\alpha\in L} f(x+\alpha).
\]
Then g is L-periodic, so g has a Fourier series expansion
\[
g(x) = \sum_{\alpha'\in L'} c_{\alpha'}(g)e^{2\pi i\langle x,\alpha'\rangle}
\]
with Fourier coefficients
\begin{align*}
c_{\alpha'}(g) &= \int_{V/L} g(x)e^{-2\pi i\langle x,\alpha'\rangle}\,d\mu(x)\\
&= \sum_{\alpha\in L}\int_{V/L} f(x+\alpha)e^{-2\pi i\langle x,\alpha'\rangle}\,d\mu(x)\\
&= \int_V f(x)e^{-2\pi i\langle x,\alpha'\rangle}\,d\mu(x) = f^\vee(\alpha').
\end{align*}
Then
\[
\sum_{\alpha\in L} f(\alpha) = g(0) = \sum_{\alpha'\in L'} f^\vee(\alpha'),
\]
which concludes the proof of the formula.
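For V = R and L = Z (so that L' = Z and µ(V/L) = 1), the summation formula applied to f(x) = e^{-πx²} composed with b, together with FT 2 and FT 7, gives the classical Jacobi identity, which can be checked numerically (our own sketch):

```python
import math

# Check: Σ_n e^{-π b² n²} = (1/b) Σ_n e^{-π n² / b²}
# (Poisson summation for f(x) = e^{-π x²} composed with b > 0).

def theta(t, cutoff=200):
    # Σ_{|n| ≤ cutoff} e^{-π n² t}; the tail is negligible for moderate t
    return sum(math.exp(-math.pi * n*n * t) for n in range(-cutoff, cutoff+1))

b = 0.7
lhs = theta(b*b)             # Σ f(bn)
rhs = theta(1.0/(b*b)) / b   # (1/b) Σ f^∨(n/b), with f self-dual by FT 7
print(abs(lhs - rhs) < 1e-12)
```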

2 The Matrix Scalar Product


We start with two positive integers p, q. Let:
V = Rp×q = the vector space of p × q real matrices
L = Zp×q = lattice of integral matrices.

For X, Y \in V define the trace scalar product, or scalar product for short,
\[
\langle X, Y\rangle = \operatorname{tr}({}^t XY).
\]
This defines a positive definite scalar product, since \langle X, X\rangle = \sum x_{ij}^2. Then the lattice L is self-dual, i.e. L = L'.
Note that the matrices E_{ij} with (i,j)-component 1, and 0 otherwise, form an orthonormal basis, so we are dealing with a euclidean space of dimension N = pq, with N as in Sect. 1. We take the measure \mu to be the ordinary Lebesgue measure,
\[
d\mu(X) = \prod_{i,j} dx_{ij}.
\]

The volume of the unit cube is 1. In particular, the function
\[
f(X) = e^{-\pi\langle X,X\rangle}
\]
is Poisson self-dual, and formula FT 6 also applies, that is, for any function f in the Schwartz space,
\[
f^{\vee\vee} = f^-.
\]
Next, we look into some interesting linear automorphisms of V, to which we can apply FT 3, FT 4, and the Poisson formula.
Let P \in \mathrm{Pos}_p and Q \in \mathrm{Pos}_q. These give rise to a linear map
\[
M_{P,Q} = M(P,Q) : V \to V \quad\text{by}\quad M_{P,Q}(X) = P^{\frac12}XQ^{\frac12}.
\]
Lemma 2.1. The map M_{P,Q} is symmetric positive definite, and satisfies
\[
(1)\qquad M(P,Q)^{-1} = M(P^{-1}, Q^{-1}).
\]
Its determinant is given by
\[
(2)\qquad |M(P,Q)| = |P|^{q/2}|Q|^{p/2}.
\]
The lemma is immediate. We note that
\[
\langle PX, XQ\rangle = \operatorname{tr}({}^t XPXQ) = \operatorname{tr}({}^t XP^{1/2}P^{1/2}XQ^{1/2}Q^{1/2}) = \operatorname{tr}(Q^{1/2}\,{}^t XP^{1/2}\cdot P^{1/2}XQ^{1/2}).
\]
Since \operatorname{tr}({}^t XPXQ) = \operatorname{tr}(PXQ\,{}^t X), we write \operatorname{tr}(P[X]Q) without parentheses, and get
\[
(3)\qquad \operatorname{tr}(P[X]Q) = \langle M_{P,Q}(X), M_{P,Q}(X)\rangle.
\]
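Formulas (2) and (3) can be checked numerically for p = 2, q = 1; the sketch below is our own example (the closed-form 2×2 square root P^{1/2} = (P + √(det P)·I)/√(tr P + 2√(det P)), valid for positive definite P, follows from Cayley-Hamilton):

```python
import math

# Check of Lemma 2.1(2) and identity (3) with p = 2, q = 1.

P = [[3.0, 1.0], [1.0, 2.0]]              # positive definite 2×2
Q = 1.7                                    # a 1×1 positive matrix
X = [1.3, -0.4]                            # a 2×1 real matrix

d = P[0][0]*P[1][1] - P[0][1]*P[1][0]      # det P
s = math.sqrt(P[0][0] + P[1][1] + 2*math.sqrt(d))
R = [[(P[i][j] + (math.sqrt(d) if i == j else 0.0))/s for j in range(2)]
     for i in range(2)]                    # R = P^{1/2}

# M_{P,Q}(X) = P^{1/2} X Q^{1/2}; as a linear map on R^2 it is √Q · R
M = [[math.sqrt(Q)*R[i][j] for j in range(2)] for i in range(2)]
detM = M[0][0]*M[1][1] - M[0][1]*M[1][0]
assert abs(detM - math.sqrt(d)*Q) < 1e-12  # (2): |M| = |P|^{q/2} |Q|^{p/2}

# (3): tr(P[X]Q) = ᵗX P X · Q = ⟨M(X), M(X)⟩
PX = [sum(P[i][k]*X[k] for k in range(2)) for i in range(2)]
lhs = sum(X[i]*PX[i] for i in range(2)) * Q
MX = [sum(M[i][k]*X[k] for k in range(2)) for i in range(2)]
rhs = sum(v*v for v in MX)
print(abs(lhs - rhs) < 1e-12)
```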


We now define the theta series by using the function f(X) = e^{-\pi\langle X,X\rangle} and its composite with the linear map M_{P,Q}, that is,
\[
\theta(P, Q) = \sum_{A\in L} e^{-\pi\operatorname{tr}(P[A]Q)} = \sum_{A\in L} f(M_{P,Q}(A)).
\]

The formalism of the Fourier transform and Poisson summation yields:

Theorem 2.2. The theta series satisfies the functional equation
\[
\theta(P, Q) = |P|^{-q/2}|Q|^{-p/2}\,\theta(P^{-1}, Q^{-1}).
\]

Remark. As a special case of the above situation, we can take q = 1, and


V = Rn (space of column vectors). The scalar product is the usual one if we
take Q = 1.
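In this special case with q = 1 (and p = 2), the functional equation of Theorem 2.2 can be checked numerically; the sketch below is our own (P and t are arbitrary choices, and the sums are truncated since the tails are negligible):

```python
import math

# Check of Theorem 2.2 for p = 2, q = 1, Q = t > 0:
#   θ(P, t) = |P|^{-1/2} t^{-1} θ(P^{-1}, 1/t),
# where θ(P, t) = Σ_{a ∈ Z²} exp(-π t · ᵗa P a).

def theta(P, t, cutoff=15):
    s = 0.0
    for m in range(-cutoff, cutoff+1):
        for n in range(-cutoff, cutoff+1):
            q = P[0][0]*m*m + 2*P[0][1]*m*n + P[1][1]*n*n   # ᵗa P a
            s += math.exp(-math.pi * t * q)
    return s

P = [[2.0, 0.6], [0.6, 1.0]]
t = 1.0
d = P[0][0]*P[1][1] - P[0][1]*P[1][0]
Pinv = [[P[1][1]/d, -P[0][1]/d], [-P[1][0]/d, P[0][0]/d]]

lhs = theta(P, t)
rhs = theta(Pinv, 1.0/t) / (math.sqrt(d) * t)
print(abs(lhs - rhs) < 1e-10)
```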

We can also incorporate a translation in our tabulations. Let Z \in V. We define
\[
\theta(P, Z, Q) = \sum_{A\in L} e^{-\pi\operatorname{tr}(P[A+Z]Q)} = \sum_{A\in L} f_{-Z}(M_{P,Q}(A)).
\]
Then FT 5 yields
\[
(4)\qquad \theta(P, Z, Q) = |P|^{-q/2}|Q|^{-p/2}\sum_{A\in L} e^{-2\pi i\langle A,Z\rangle}e^{-\pi\operatorname{tr}(P^{-1}[A]Q^{-1})}.
\]

The sum on the right may be called a twist of the theta series by a character, namely the character
\[
X \mapsto e^{-2\pi i\langle X,Z\rangle}.
\]
It is the only change in the Poisson formula of Theorem 2.2, but it has to be incorporated in the notation. Thus one may define the twisted theta function
\[
\theta_Z(P, Q) = \sum_{A\in L} e^{-2\pi i\langle A,Z\rangle}e^{-\pi\operatorname{tr}(P[A]Q)}.
\]

Then we get:
Theorem 2.3. From the definitions and (4),

    θ(P, Z, Q) = |P|^{−q/2} |Q|^{−p/2} θ_Z(P^{−1}, Q^{−1}) .

Note that the sum over A ∈ L involves each non-zero element of L twice,
namely A and −A. Thus one could write the contribution of the character in
the series for θ_Z(P, Q) without the minus sign, since P^{−1}[A]Q^{−1} is even as a
function of A.

3 The Epstein Zeta Function: Riemann’s Expression


For Y ∈ Pos_n, we define the Epstein zeta function to be

    E(Y, s) = Σ_{a∈Zⁿ−{0}} Y[a]^{−s} = Σ_{a∈Zⁿ−{0}} (ᵗa Y a)^{−s} .

The vectors a ∈ Zn are viewed as column vectors, so t a is a row vector.


100 5 Poisson Duality and Zeta Functions

Lemma 3.1. The Epstein series converges absolutely for all s ∈ C with
Re(s) > n/2.

Proof. We estimate the number of lattice points a lying in a spherical annulus,
that is

    k − 1 ≤ |a| ≤ k   with, say, the euclidean norm |a| .

The area of the (n − 1)-dimensional sphere in Rⁿ of radius k is ≍ k^{n−1}
for k → ∞, so the number of Zⁿ-lattice points in the spherical annulus is
≪ k^{n−1}. Hence the Epstein series is dominated for real s > 0 by

    Σ_{k=1}^{∞} k^{n−1} / k^{2s} ,

because Y is positive definite and Y[a] = |Y^{1/2} a|² ≫ |a|² for |a| → ∞, so

    Y[a]^s ≫ |a|^{2s} .

Hence the series converges whenever 2s − (n − 1) > 1, which is precisely when
s > n/2, as asserted.

Remark. Because of the way a vector a enters as a square in t aY a, we note


that the terms in the Epstein series are really counted twice. Hence the Epstein
zeta function is sometimes defined to be 1/2 of the expression we have used.
However, we find that not dividing by 2 makes formulas come out more neatly
later.
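For Y = I₂ the Epstein series (with our normalization counting all a ≠ 0) has the classical factorization E(I₂, s) = 4ζ(s)β(s), where β is the Dirichlet beta function — a fact not from the text but standard, coming from the sum-of-two-squares formula; at s = 3 one has β(3) = π³/32. A quick numerical sanity check of the definition, with the truncation radius chosen so the tail is far below the tolerance:

```python
import math
from scipy.special import zeta

def epstein_I2(s, R=200):
    """Truncated E(I_2, s): sum over 0 != a in Z^2 with |a_i| <= R of (a1^2+a2^2)^(-s)."""
    total = 0.0
    for a1 in range(-R, R + 1):
        for a2 in range(-R, R + 1):
            if (a1, a2) != (0, 0):
                total += (a1 * a1 + a2 * a2) ** (-s)
    return total

lhs = epstein_I2(3.0)
rhs = 4 * zeta(3) * math.pi ** 3 / 32   # 4 * zeta(3) * beta(3)
assert abs(lhs - rhs) < 1e-7
```

Note that the factor 4 here plays the role of the doubling discussed in the Remark: each primitive representation is counted once per unit ±1, ±i of Z[i].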

We let the completed Epstein Λ-function be

    Λ(Y, s) = π^{−s} Γ(s) E(Y, s) .

For t > 0 we let

    (1)   θ(Y, t) = Σ_{a∈Zⁿ} e^{−πY[a]t} .

This is a special case of the theta series for matrices, with p = n, q = 1, P = Y


and Q = t.
For Re(s) > n/2 (the domain of absolute convergence of the zeta series),
we have the expression of the Λ-function as a Mellin transform
    (2)   Λ(Y, s) = ∫_0^∞ (θ(Y, t) − 1) t^s dt/t = Σ_{a≠0} ∫_0^∞ e^{−πY[a]t} t^s dt/t .

This simply comes by integrating the theta series term by term, taking the
Mellin transform of each term. Subtracting the term with a = 0 guarantees
absolute convergence and the interchange of the series and the Mellin integral.

We shall now give the Riemann type expression for the analytic continu-
ation of the Epstein zeta series.
We define the incomplete gamma integral for c > 0 by

    Γ_1^∞(s, c) = ∫_1^∞ e^{−ct} t^s dt/t .

We note that Γ_1^∞(s, c) is entire in s.
Theorem 3.2. The function s ↦ Λ(Y, s) has a meromorphic extension, and

    Λ(Y, s) − ( |Y|^{−1/2}/(s − n/2) − 1/s )
        = Σ_{a≠0} ( Γ_1^∞(s, πY[a]) + |Y|^{−1/2} Γ_1^∞(n/2 − s, πY^{−1}[a]) )
        = ∫_1^∞ (θ(Y, t) − 1) t^s dt/t + |Y|^{−1/2} ∫_1^∞ (θ(Y^{−1}, t) − 1) t^{n/2−s} dt/t .

The series and truncated integrals converge uniformly on every compact set
to entire functions, and the other two terms exhibit the only poles of Λ(Y, s),
with the residues. The function satisfies the functional equation

    Λ(Y, s) = |Y|^{−1/2} Λ(Y^{−1}, n/2 − s) .
Proof. In (2), we write

    ∫_0^∞ = ∫_0^1 + ∫_1^∞ .

The integral over [1, ∞] yields the sum Σ_{a≠0} Γ_1^∞(s, πY[a]). For the other
integral, we get

    ∫_0^1 (θ(Y, t) − 1) t^s dt/t = − ∫_0^1 t^{s−1} dt + ∫_0^1 θ(Y, t) t^s dt/t
        = − 1/s + |Y|^{−1/2} ∫_0^1 θ(Y^{−1}, t^{−1}) t^{s−n/2} dt/t .

We subtract 1 and add 1 to θ(Y^{−1}, t^{−1}). Integrating the term with 1 yields
the second polar term |Y|^{−1/2}/(s − n/2). In the remaining integral with
θ(Y^{−1}, t^{−1}) − 1, we change variables, putting u = t^{−1}, du/u = dt/t. Then the
interval of integration changes from [0, 1] to [1, ∞], and the remaining terms
in the desired formula come out. This concludes the proof of the formula in
the theorem.
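The incomplete-gamma expression of Theorem 3.2 is also how one evaluates Λ(Y, s) in practice, since the series converges extremely fast. The sketch below (n = 2, real s, an arbitrary test matrix; Γ_1^∞(s, c) is expressed through SciPy's regularized upper incomplete gamma, which requires s > 0) evaluates both sides of the functional equation. Note the two sides agree term by term in the theorem's expression, so this mainly exercises the bookkeeping of the polar terms and determinant powers:

```python
import numpy as np
from itertools import product
from scipy.special import gamma, gammaincc

def inc_gamma(s, c):
    """Gamma_1^infty(s, c) = int_1^infty e^{-ct} t^{s-1} dt = c^{-s} * Gamma(s, c)."""
    return c ** (-s) * gamma(s) * gammaincc(s, c)

def Lambda(Y, s, R=6):
    """Lambda(Y, s) for n = 2 via the Riemann-type expression of Theorem 3.2."""
    n = Y.shape[0]
    Yi = np.linalg.inv(Y)
    dY = np.linalg.det(Y)
    total = -1.0 / s + dY ** -0.5 / (s - n / 2)   # the two polar terms
    for a in product(range(-R, R + 1), repeat=n):
        if a == (0,) * n:
            continue
        a = np.array(a, dtype=float)
        total += inc_gamma(s, np.pi * a @ Y @ a)
        total += dY ** -0.5 * inc_gamma(n / 2 - s, np.pi * a @ Yi @ a)
    return total

Y = np.array([[2.0, 0.5], [0.5, 1.0]])
s = 0.75                                  # both s and n/2 - s positive
lhs = Lambda(Y, s)
rhs = np.linalg.det(Y) ** -0.5 * Lambda(np.linalg.inv(Y), 1.0 - s)
assert abs(lhs - rhs) < 1e-9 * abs(rhs)   # Lambda(Y,s) = |Y|^{-1/2} Lambda(Y^{-1}, n/2 - s)
```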

By Theorem 2.2, we know the functional equation for θ(Y, t). The functional
equation in Theorem 3.2 is then immediate, because except for the factor
|Y|^{−1/2}, under the change s ↦ n/2 − s, the first two terms are interchanged,
and the two terms in the sum are interchanged. The factor |Y|^{−1/2} is then
verified to behave exactly as stated in the functional equation for Λ(Y, s).
This concludes the proof.
The integral expressions allow us to estimate Λ(Y, s) in vertical strips, as is
usually done in such situations, away from the poles at s = 0, s = n/2.

Corollary 3.3. Let σ_0 > 0, σ_1 > n/2 and let S be that part of the strip
−σ_0 ≤ Re(s) ≤ σ_1 with |s| ≥ 1 and |s − n/2| ≥ 1. Then for s ∈ S we have

    |Λ(Y, s)| ≤ |Y|^{−1/2} + 1 + Λ(Y, σ_1) + |Y|^{−1/2} Λ(Y^{−1}, n/2 + σ_0) .
2
Proof. We merely estimate the three terms in Theorem 3.2. The polar terms
give the stated estimate since we are estimating outside the discs of radius 1
around the poles. We make the first integral larger by replacing s by σ1 ,
and then by replacing the limits of integration, making them from 0 to ∞,
which gives Λ(Y, σ_1) as an upper bound for the first integral. As to the second
integral, we perform the similar change, but use the value s = −σ0 to end up
with the stated estimate. This concludes the proof.

Corollary 3.4. The function s(s − n/2)Λ(Y, s) is entire in s of order 1. The


function Λ(Y, s) is bounded in every vertical strip outside a neighborhood of
the poles.

Proof. Routine argument from the functional equation, properties of the


gamma function, and the boundedness of the Dirichlet series for E(Y, s) in
a right half plane.

We give some useful complements to the functional equation. Let Ỹ be the
matrix such that

    Y Ỹ = |Y| I_n ,

so Ỹ is the adjugate matrix, whose entries are cofactor determinants. Then:

Corollary 3.5. We have:

    (a)   E(Y^{−1}, s) = |Y|^s E(Ỹ, s)   and   Λ(Y^{−1}, s) = |Y|^s Λ(Ỹ, s) ;

    (b)   Λ(Y, s) = |Y|^{n/2−s−1/2} Λ(Ỹ, n/2 − s) .
2
Proof. By definition

    Y^{−1} = |Y|^{−1} Ỹ ,

so Y^{−1}[a] = |Y|^{−1} Ỹ[a], and the first formula of (a) drops out. Then by
Theorem 3.2,

    Λ(Y, s) = |Y|^{−1/2} Λ(Y^{−1}, n/2 − s)
            = |Y|^{n/2−s−1/2} Λ(Ỹ, n/2 − s)   by (a).

This concludes the proof.

The case n = 2 is important in subsequent applications, because it is used
inductively to treat the case of general n. Hence we make some more comments
here when n = 2. In this case, the functional equation can be formulated with
Y on both sides (no Y^{−1} or Ỹ on one side is necessary).

Proposition 3.6. Suppose n = 2. Then E(Y, s) = E(Ỹ , s) and so

Λ(Y, s) = Λ(Ỹ , s) .

Proof. Note that

    if  Y = ( u  v )   then  Ỹ = (  w  −v )
            ( v  w )             ( −v   u ) .

Then for an integral vector (b, c) we have

    Ỹ[(b, c)] = w b² − 2v bc + u c² ,

so that

    Ỹ[(b, c)] = Y[(c, −b)] .

The map (b, c) ↦ (c, −b) permutes the non-zero elements of Z², so the
proposition follows.
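Proposition 3.6 can be seen directly in a truncated sum: the bijection (b, c) ↦ (c, −b) maps a symmetric box of lattice points to itself, so the truncated sums for E(Y, s) and E(Ỹ, s) agree term for term. A small sketch (the test matrix is arbitrary):

```python
import numpy as np

def epstein_trunc(Y, s, R=60):
    """Truncated Epstein sum over 0 != a in Z^2 with |a_i| <= R."""
    total = 0.0
    for b in range(-R, R + 1):
        for c in range(-R, R + 1):
            if (b, c) != (0, 0):
                a = np.array([b, c], dtype=float)
                total += (a @ Y @ a) ** (-s)
    return total

Y = np.array([[2.0, 0.7], [0.7, 1.5]])
Ytilde = np.linalg.det(Y) * np.linalg.inv(Y)   # Y Ytilde = |Y| I_2
s = 3.0
assert np.isclose(epstein_trunc(Y, s), epstein_trunc(Ytilde, s), rtol=1e-9)
```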

Corollary 3.7. For n = 2, we have the functional equation


    Λ(Y, s) = |Y|^{1/2−s} Λ(Y, 1 − s) .

Proof. Corollary 3.5(b) and Proposition 3.6.

For later use, we insert one more consequence, a special case of Corol-
lary 3.3.

Corollary 3.8. In the domain −2 < Re(s) < 3 and s outside the discs of
radius 1 around 0, 1, we have the estimates:
    |Λ(Y, s)| ≤ 1 + |Y|^{−1/2} + Λ(Y, 3) |Y|^{5/2} + Λ(Y, 3) .

Proof. Immediate from Corollary 3.3 and the functional equation.



4 Epstein Zeta Function: A Change of Variables


For subsequent applications, we reformulate the functional equation of the
Epstein zeta function in terms of new variables. To fit the notation to be used
later, we write the Dirichlet series as
    E(Y, z) = Σ_{a∈Z², a≠0} Y[a]^{−z} ,

and we introduce two complex variables s_1, s_2 for which we set

    z = s_2 − s_1 + 1/2 .

We then use the notation

    ζ(Y; s_1, s_2) = E(Y, z) .
Theorem 4.1. (a) The function

    (z − 1) E(Y, z) = ( s_2 − s_1 − 1/2 ) ζ(Y; s_1, s_2)

is entire on C².
(b) The function

    π^{−z} Γ(z) |Y|^{s_2} E(Y, z)   with z = s_2 − s_1 + 1/2 ,

is invariant under the permutation of s_1 and s_2.
Proof. As to (a), from Theorem 3.2 with n = 2, we know that the function
(z − 1)zΛ(Y, z) is entire, and

    (z − 1)zΛ(Y, z) = (z − 1)z π^{−z} Γ(z) E(Y, z) = Γ(z + 1) π^{−z} (z − 1) E(Y, z) ,

so

    π^{−z} (z − 1) E(Y, z) = (z − 1)zΛ(Y, z) / Γ(z + 1) ,

which is entire, and this proves (a).
As to (b), the invariance under z ↦ 1 − z means

    π^{−z} Γ(z) |Y|^{s_2} E(Y, z) = π^{−(1−z)} Γ(1 − z) |Y|^{s_1} E(Y, 1 − z) .

But this relation is equivalent to

    π^{−z} Γ(z) E(Y, z) = Λ(Y, z) = |Y|^{1/2−z} Λ(Y, 1 − z) ,

which is true by Corollary 3.7. This concludes the proof.
Note that we may further complete the function in Theorem 4.1 by defining

    η(Y; s_1, s_2) = z(1 − z) π^{−z} Γ(z) |Y|^{s_2} ζ(Y; s_1, s_2) .

The factor z(1 − z) is invariant under the transposition of s_1 and s_2, and
Theorem 4.1 may be alternatively stated by saying that η(Y; s_1, s_2) is entire
in (s_1, s_2) and invariant under the transposition of the two variables.

5 Epstein Zeta Function: Bessel-Fourier Series

The Bessel-Fourier series for the Epstein-Eisenstein function can still be done
with the ordinary Bessel function, so we carry it out here separately, as an in-
troduction to the more general result in the matrix case, when a generalization
of the Bessel function will have to be taken into account.
We write n = p + q with integers p, q ≥ 1. We have a partial Iwasawa
decomposition of Y ∈ Pos_n, given by W ∈ Pos_p, V ∈ Pos_q, X ∈ R^{p×q} with

    Y = [u(X)] ( W  0 )   where   u(X) = ( I_p   X  )
               ( 0  V )                  (  0   I_q ) .

In Chap. 1, we took q = 1 with V = v, but there will be no additional


difficulty here by taking arbitrary dimensions, so we might as well record the
result in general. The Epstein zeta series (function) E(Y, s) is the same that
we considered in the preceding section, with its completed Λ-function Λ(Y, s).
We let K_s(u, v) be the usual Bessel integral

    K_s(u, v) = ∫_0^∞ e^{−(ut + v/t)} t^s dt/t .
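This two-argument Bessel integral reduces to the classical modified Bessel function: the substitution t ↦ √(v/u)·t gives K_s(u, v) = 2(v/u)^{s/2} K_s(2√(uv)), with K_s on the right the standard one-variable K-Bessel function (a standard identity, not from the text). A numerical cross-check of the definition against SciPy's kv by direct quadrature:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

def K2(s, u, v):
    """K_s(u, v) = int_0^infty e^{-(u t + v/t)} t^s dt/t, by numerical quadrature."""
    val, _ = quad(lambda t: np.exp(-(u * t + v / t)) * t ** (s - 1), 0, np.inf)
    return val

s, u, v = 1.3, 2.0, 0.7
closed_form = 2 * (v / u) ** (s / 2) * kv(s, 2 * np.sqrt(u * v))
assert abs(K2(s, u, v) - closed_form) < 1e-7
```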

Theorem 5.1. Let Y ∈ Pos_n have the above partial Iwasawa decomposition.
Then

    Λ(Y, s) = Λ(V, s) + |V|^{−1/2} Λ(W, s − q/2)
              + |V|^{−1/2} Σ_{b≠0} Σ_{c≠0} e^{2πi ᵗbXc} K_{s−q/2}(πW[b], πV^{−1}[c]) ,

with the sums taken for b ∈ Z^p, c ∈ Z^q.

Proof. The Λ-function is the Mellin transform of the theta function,

    Λ(Y, s) = ∫_0^∞ (θ(Y, t) − 1) t^s dt/t   with   θ(Y, t) = Σ_{a∈Zⁿ} e^{−πY[a]t} .

We decompose a into two components

    a = ( b )   with b ∈ Z^p and c ∈ Z^q .
        ( c )

Then

    Y[a] = W[b] + V[ᵗXb + c] .

We decompose the sum over a ≠ 0 accordingly:

    Σ_{a≠0} = Σ_{b=0, c≠0} + Σ_{b≠0, c∈Z^q} .

The sum with b = 0 gives the term Λ(V, s). The sum over all c for each
b ≠ 0 is then a theta series to which the Poisson summation formula applies
as in Theorem 2.3, to yield
    Σ_{c∈Z^q} e^{−πW[b]t} e^{−πV[ᵗXb+c]t}
        = e^{−πW[b]t} |V|^{−1/2} t^{−q/2} Σ_{c∈Z^q} e^{−2πi ᵗbXc} e^{−πV^{−1}[c]t^{−1}} .

The term with c = 0 summed over all b ≠ 0 yields |V|^{−1/2} Λ(W, s − q/2). The
remaining sum is a double sum

    |V|^{−1/2} Σ_{b≠0} Σ_{c≠0} e^{−2πi ᵗbXc} ∫_0^∞ e^{−π(W[b]t + V^{−1}[c]t^{−1})} t^{s−q/2} dt/t .

The theorem follows from the definition of the K-Bessel function.


6
Eisenstein Series First Part

In this chapter, we start systematically to investigate what happens when we


take the trace over the discrete groups Γ = GLn (Z), for various objects. In
the first section, we describe a universal adjointness relation which has many
applications. One of them will be to the Fourier expansion of the Eisenstein
series.

1 Adjointness Relations
Let:
U = Uni+ = group of upper unipotent n × n matrices.
Γ = GLn (Z)
Γ∞ = ΓU = Γ ∩ U .
We let ρ be a character on Posn . The most classical Selberg primitive Eisen-
stein series is the series
    E^pr(Y, ρ) = Σ_{γ∈Γ_U\Γ} ρ([γ]Y) .

Since a character ρ satisfies

    ρ([u]Y) = ρ(Y)   for u ∈ U ,

it follows that the value ρ([γ]Y ) depends only on the coset ΓU γ in ΓU \Γ,
whence the sum was taken over such cosets to define the Eisenstein series. If
ρ = ρ−s , then the sum is the usual

    E_U^pr(Y, ρ_{−s}) = E^pr(Y, ρ_{−s}) = Σ_{γ∈Γ_U\Γ} ρ_{−s}([γ]Y) ,

depending on n complex variables s = (s_1, . . . , s_n), and ρ_{−s} = ρ_s^{−1}. The series
converges absolutely in a half plane, as will be shown in Chap. 7, Sect. 2. Also
note that for all γ ∈ Γ, we have

Jay Jorgenson: Posn (R) and Eisenstein Series, Lect. Notes Math. 1868, 107–120 (2005)
www.springerlink.com °c Springer-Verlag Berlin Heidelberg 2005

E pr ([γ]Y, ρ) = E pr (Y, ρ) .

As usual, let d denote the determinant character. Let α be a complex


number. Then
E pr (Y, ρdα ) = |Y |α E pr (Y, ρ) .
This is because |γ| = ±1 for all γ ∈ Γ, so |[γ]Y | = |Y | for all γ ∈ Γ. In some
applications, it is convenient to carry along such an independent power of the
determinant.
Both groups Γ and U act on the space P = Posn . Under suitable absolute
convergence conditions, there are two maps which we consider:
The Γ_U\Γ-trace (also called Eisenstein trace)

TrΓU \Γ : functions on U \P → functions on Γ\P

defined by

    Tr_{Γ_U\Γ} ϕ(Y) = Σ_{γ∈Γ_U\Γ} ϕ([γ]Y) .

The unipotent trace or ΓU \U -trace

TrΓU \U : functions on Γ\P → functions on U \P

defined by

    (Tr_{Γ_U\U} f)(Y) = ∫_{Γ_U\U} f([u]Y) du .

Example. The Eisenstein series is a Γ_U\Γ-trace, namely

E pr (Y, ϕ) = TrΓU \Γ ϕ(Y ) or E pr (ρ) = TrΓU \Γ (ρ) .

We shall give two essentially formal properties of the ΓU \Γ-trace. For the
first one, see already SL2 (R), Chap. XIII, Sect. 1.

Proposition 1.1. Under conditions of absolute convergence, the two maps


Tr_{Γ_U\Γ} and Tr_{Γ_U\U} are adjoint to each other; that is, more precisely,

hTrΓU \Γ ϕ, f iΓ\P = 2hϕ, TrΓU \U f iU \P

where the scalar product is given by the usual hermitian integral, or also by
the bilinear integral without the complex conjugation.

Proof. For simplicity, we carry out the computation without the complex
conjugation. We write formally dy instead of dµ(Y ). The factor 2 appears
because Γ does not act faithfully on Posn , but with kernel ±I. So from the
point of view of the measure, terms get counted twice in the third step of the
following proof. We have:
    ⟨Tr_{Γ_U\Γ} ϕ, f⟩_{Γ\P} = ∫_{Γ\P} Tr_{Γ_U\Γ} ϕ(y) f(y) dy
        = ∫_{Γ\P} Σ_{γ∈Γ_U\Γ} ϕ([γ]y) f(y) dy
        = 2 ∫_{Γ_U\P} ϕ(y) f(y) dy
        = 2 ∫_{U\P} ∫_{Γ_U\U} ϕ([u]y) f([u]y) du dy
        = 2 ∫_{U\P} ϕ(y) ∫_{Γ_U\U} f([u]y) du dy
        = 2 ⟨ϕ, Tr_{Γ_U\U} f⟩_{U\P} .

This concludes the proof.
Next, we give a second adjointness relation, with a twist from left to right.
Indeed, note how the ΓU \Γ-trace is a sum over γ ∈ ΓU \Γ, with ΓU on the left,
whereas the sum on the right side of the equation in the next proposition is
over γ ∈ Γ\ΓU , with ΓU on the right. Furthermore, the sum as written cannot
be taken inside the integral sign, because the integral over Posn is needed to
make the term involving γ independent of the coset γΓU . Cf. step (4) in the
proof.
Proposition 1.2. Suppose the function ϕ on Posn is U -invariant. Let f be
a function on Posn . Under conditions of absolute convergence, we have the
adjointness relation
    ∫_{Pos_n} Tr_{Γ_U\Γ}(ϕ)(Y) f(Y) dµ(Y) = Σ_{γ∈Γ/Γ_U} ∫_{Pos_n} ϕ(Y) f([γ]Y) dµ(Y) .

Proof. The function TrΓU \Γ ϕ is Γ-invariant on the left by construction. The


integral over Posn can then be decomposed by first summing the integrand
over Γ, and then integrating on the quotient space Γ\Posn , so we obtain:
    ∫_{Pos_n} (Tr_{Γ_U\Γ} ϕ)(Y) f(Y) dµ(Y)

    (1)   = (1/2) Σ_{γ∈Γ} ∫_{Γ\Pos_n} (Tr_{Γ_U\Γ} ϕ)(Y) · f([γ]Y) dµ(Y)

    (2)   = ∫_{U\Pos_n} ϕ(Y) [ ∫_{Γ_U\U} Σ_{γ∈Γ} f([γu]Y) du ] dµ(Y)

by applying Proposition 1.1 to the functions ϕ and Y ↦ Σ_γ f([γ]Y). Thus we
are using the first adjointness relation to prove a second one. Now we consider
separately the integrand:

    (3)   ∫_{Γ_U\U} Σ_{γ∈Γ} f([γu]Y) du = ∫_{Γ_U\U} Σ_{γ∈Γ/Γ_U} Σ_{η∈Γ_U} f([γ][η][u]Y) du

    (4)   = Σ_{γ∈Γ/Γ_U} ∫_U f([γ][u]Y) du .

Substituting (4) in (2), we find that (2) becomes

    (5)   ∫_{U\Pos_n} ϕ(Y) [ Σ_{γ∈Γ/Γ_U} ∫_U f([γ][u]Y) du ] dµ(Y)

    (6)   = Σ_{γ∈Γ/Γ_U} ∫_{Pos_n} ϕ(Y) f([γ]Y) dµ(Y) ,

which proves the proposition.

As a special case, we find a relation of Terras [Ter 85], Proposition 3.


Proposition 1.3. Let ρ be a character on Pos_n. Let A, B ∈ Pos_n. Then

    ∫_{Pos_n} E^pr(Y, ρ) e^{−tr(AY + BY^{−1})} dµ(Y) = Σ_{γ∈Γ/Γ_U} K_ρ(A[γ], [γ^{−1}]B) .

Proof. We simply let

    ϕ(Y) = ρ(Y)   and   f(Y) = e^{−tr(AY + BY^{−1})} .

Then Tr_{Γ_U\Γ}(ρ) is the Eisenstein series. Furthermore

    f([γ]Y) = e^{−tr(A[γ]Y + B·Y^{−1}[γ^{−1}])} = e^{−tr(A[γ]Y + γ^{−1}B ᵗγ^{−1}·Y^{−1})} ,

so the desired formula falls out.

2 Fourier Expansion Determined by Partial Iwasawa Coordinates
We want to imitate the Epstein zeta function with matrices rather than vectors
(n-tuples). As far as is known, the general case has certain difficulties, so in

the present section, we deal with a subcase where the proof for the Epstein
zeta function has a direct analogue, the main difference lying in the use of
the general Bessel function of Chap. 3, rather than the classical one-variable
Bessel function.
We fix positive integers p, q such that p + q = n and p ≥ q, so that n ≥ 2q.
We then decompose an element A ∈ Z^{n×q} in two components

    A = ( B )   with B ∈ Z^{p×q} and C ∈ Z^{q×q} .
        ( C )

We define the q-non-singular theta series for Y ∈ Pos_n and Z ∈ Pos_q:

    θ*_{n,q}(Y, Z) = Σ_{A∈Z^{n×q}, rk(B)=q} e^{−π tr(Y[A]Z)} .

The sum is over all integral A ∈ Z^{n×q} such that the B-component as above
has rank q. Thus the sum can be taken separately over all such B, and for each
B over all C ∈ Z^{q×q} without any restriction on C. Thus the singular part of
the theta series would correspond to the part with b = 0 in the Epstein zeta
function, but the higher dimension complicates the simpler condition b = 0.
We combine the above (p, q) splitting with corresponding partial Iwasawa
coordinates, that is

    Y = Iw⁺(W, X, V) = [u(X)] ( W  0 )
                              ( 0  V )

with W ∈ Pos_p, V ∈ Pos_q and X ∈ R^{p×q}. Matrix multiplication gives

    (1)   Y[A] = W[B] + V[ᵗXB + C] .

At this point, we want to emphasize the formal aspects of the remaining


arguments. What will be important is that B, C range over certain sets stable
under the action of Γ on the right. Thus we abbreviate:

Γ = Γq
M = M(q) = Zq×q . Elements of M are denoted by C.
M∗ = M∗ (p, q) = the set of elements B ∈ Zp×q of rank q.

For the non-singular theta series, we find

    (2)   θ*_{n,q}(Y, Z) = Σ_{B∈M*} Σ_{C∈M} e^{−π tr(W[B]Z)} e^{−π tr(V[ᵗXB+C]Z)}
                        = Σ_{B∈M*} e^{−π tr(W[B]Z)} Σ_{C∈M} e^{−π tr(V[ᵗXB+C]Z)} .

The Poisson formula yields

    (3)   Σ_{C∈M} e^{−π tr(V[ᵗXB+C]Z)}
              = |V|^{−q/2} |Z|^{−q/2} Σ_{C∈M} e^{−2πi tr(ᵗBXC)} e^{−π tr(V^{−1}[C]Z^{−1})} .

Define

    h_1(C, Z) = Σ_{B∈M*} e^{−π tr(W[B]Z)} e^{−2πi tr(ᵗBXC)} ,
    h_2(C, Z) = e^{−π tr(V^{−1}[C]Z^{−1})} |Z|^{−q/2} .

Then both h_1 and h_2 satisfy the equation

    (4)   h(Cγ, Z) = h(C, [ᵗγ^{−1}]Z)   for all γ ∈ Γ_q .

This is verified at once for h2 . For h1 , we move γ from the right of C to


the left of t B. Then we use the fact that for given γ, the map

B 7→ B t γ −1

permutes the elements of M∗ , so the sum over B ∈ M∗ is the same as the


sum over B t γ −1 . The desired relationship drops out.
With the above definitions, we obtain

    (5)   θ*_{n,q}(Y, Z) = |V|^{−q/2} Σ_{C∈M} h_1(C, Z) h_2(C, Z) .

Note that each term in the sum satisfies the above equations. We then take
the convolution of θ*_{n,q} with a test function Φ on Γ_q\Pos_q, namely

    (6)   (θ*_{n,q} ∗ Φ)(Y) = ∫_{Γ_q\Pos_q} θ*_{n,q}(Y, Z) Φ(Z) dµ_q(Z) .

We disregard for the moment questions of absolute convergence, and work


formally. Two of the more important applications are when Φ = 1 and when
Φ is an Eisenstein series. For the moment, we just suppose Φ is a function on
Γq \Posq .
For simplicity of notation, and to emphasize some formal aspects, we ab-
breviate
P = Posq .
We shall need a formula for the integral over the quotient space

Γ\P = Γq \Posq .

Note that Γ acts on M on the right, and thus gives rise to right cosets of
M. We shall deal with

    Σ_{C∈M/Γ} ,

and for any function f on M, we have the relation

    Σ_{C∈M} f(C) = Σ_{C∈M/Γ} Σ_{γ∈Γ} f(Cγ) .

Lemma 2.1. Let h = h(C, Z) be a function of two variables C ∈ M and


Z ∈ P. Suppose that

h(Cγ, Z) = h(C, [t γ −1 ]Z) for all γ∈Γ.

Then

    ∫_{Γ\P} Σ_{C∈M} h(C, Z) dµ(Z) = 2 Σ_{C∈M/Γ} ∫_P h(C, Z) dµ(Z) .

Proof.

    ∫_{Γ\P} Σ_{C∈M} h(C, Z) dµ(Z) = ∫_{Γ\P} Σ_{C∈M/Γ} Σ_{γ∈Γ} h(Cγ, Z) dµ(Z)
        = ∫_{Γ\P} Σ_{C∈M/Γ} Σ_{γ∈Γ} h(C, [ᵗγ^{−1}]Z) dµ(Z)
        = 2 Σ_{C∈M/Γ} ∫_P h(C, Z) dµ(Z) ,

thus proving the lemma.

Note that if the function h(C, Z) satisfies (4), then so does the function
Φ(Z)h(C, Z), directly from the invariance of Φ. We shall now assume that Φ
is a Γ_U\Γ-trace to get a Fourier expansion for θ*_{n,q} ∗ Φ.

Proposition 2.2. Suppose Φ = Tr_{Γ_U\Γ} ϕ with a function ϕ on P. Then for
Y = Iw⁺(W, X, V) in partial Iwasawa coordinates, under conditions of
absolute convergence, we have

    (θ*_{n,q} ∗ Tr_{Γ_U\Γ}(ϕ))(Y) = Σ_{C∈M/Γ} Σ_{B∈M*} a_{B,C} e^{−2πi tr(ᵗBXC)}

with coefficients

    a_{B,C} = 2|V|^{−q/2} Σ_{γ∈Γ/Γ_U} K_{ϕd^{−q/2}}(πW[B][γ], π[γ^{−1}]V^{−1}[C]) .

Proof. First remark that the expression on the right of the formula to be
proved makes sense. Indeed, if we replace C by Cγ with some element γ ∈ Γ,
then in tr(t BXC) we can move γ next to t B, and the sum over B ∈ M∗
then allows us to cancel γ. Hence the sum over B ∈ M∗ depends only on the
coset of C in M/Γ. Next we recall that Φd−q/2 is also a ΓU \Γ -trace, namely
trivially
Φd−q/2 = TrΓU \Γ (ϕd−q/2 ) .
Now:

    |V|^{q/2} (θ*_{n,q} ∗ Φ)(Y)
        = ∫_{Γ\P} Φ(Z) |V|^{q/2} θ*_{n,q}(Y, Z) dµ(Z)
        = ∫_{Γ\P} Φ(Z) Σ_{C∈M} h_1 h_2(C, Z) dµ(Z)          (by equation (5))
        = 2 Σ_{C∈M/Γ} ∫_P Φ(Z) h_1 h_2(C, Z) dµ(Z)          (by Lemma 2.1)
        = Σ_{C∈M/Γ} Σ_{B∈M*} 2 e^{−2πi tr(ᵗBXC)} ∫_P Φ(Z) |Z|^{−q/2} e^{−π tr(W[B]Z + V^{−1}[C]Z^{−1})} dµ(Z)
        = |V|^{q/2} Σ_{C∈M/Γ} Σ_{B∈M*} a_{B,C} e^{−2πi tr(ᵗBXC)}          (by Proposition 1.2).

This concludes the proof.

Remark. The preceding proposition applies to the special case when

Φ(Z) = E pr (Z, ρ)

is an Eisenstein series, in which case the Fourier expansion comes from Terras
[Ter 85], Theorem 1.

3 Fourier Coefficients from Partial Iwasawa Coordinates

We represent Y ∈ Posn in terms of partial Iwasawa coordinates

Y = Iw+ (W, X, V ) with W ∈ Posp , V ∈ Posq , X ∈ Rp×q .

Let f be a function on Posn , invariant under the group Γ = GLn (Z). What
we shall actually need is invariance of f under the two subgroups:

– the group of elements [u(N)] with N ∈ Z^{p×q};
– the group of elements

    ( γ   0  )   with γ ∈ GL_p(Z) .
    ( 0  I_q )

The following theorem is valid under this weaker invariance, but we may as
well assume the simpler hypothesis which implies both of these conditions.
Under the invariance, f has a Fourier series expansion

    f(Y) = Σ_{N∈Z^{p×q}} a_N(W, V) e^{2πi⟨N,X⟩} ,

with Fourier coefficients given by

    a_N(W, V) = ∫_{R^{p×q}/Z^{p×q}} f( [u(X)] ( W  0 ) ) e^{−2πi⟨N,X⟩} dX ,
                                             ( 0  V )

where dX = ∏ dx_{ij} is the standard euclidean measure on R^{p×q}. We then have
the following lemma.

Lemma 3.1 ([Gre 92]). For γ ∈ Γ_p the Fourier coefficients satisfy

    a_{ᵗγN}(W, V) = a_N([γ]W, V) .

Proof. First note that

    (2)   ( γ   0  ) u(X) = ( γ  γX  ) .
          ( 0  I_q )        ( 0  I_q )

Now:

    a_N([γ]W, V) = ∫ f( [u(X)] ( [γ]W  0 ) ) e^{−2πi⟨N,X⟩} dX
                               (   0   V )

        = ∫ f( [ ( I_p   X  ) ( γ   0  ) ] ( W  0 ) ) e^{−2πi⟨N,X⟩} dX
                 (  0   I_q ) ( 0  I_q )   ( 0  V )

        = ∫ f( [ ( γ   X  ) ] ( W  0 ) ) e^{−2πi⟨N,X⟩} dX .
                 ( 0  I_q )   ( 0  V )

Now make the change of variables X ↦ γX, so d(γX) = dX. Using (2) and
the invariance of f under the action of

    ( γ   0  )
    ( 0  I_q )

shows that the last expression obtained is equal to a_{ᵗγN}(W, V), because

    ⟨N, γX⟩ = ⟨ᵗγN, X⟩ .

This concludes the proof.

4 A Fourier Expansion on SPosn(R)


We consider the inductive Iwasawa decomposition of Chap. 1, taking p = n − 1
and q = 1 on SL_n(R). Let

    u(x) = ( I_{n−1}  x )   for x ∈ R^{n−1} (column vectors) .
            (    0    1 )

We have the corresponding decomposition for Y ∈ SPos_n:

    Y = [u(x)] ( v^{1/(n−1)} Y^{(n−1)}     0    )
               (          0             v^{−1} )

with v ∈ R⁺ and Y^{(n−1)} ∈ SPos_{n−1}.


As in Sect. 1, we let
U = Uni+ (R)
ΓU = Γ∞ = Uni+ (Z).
Let f be a function invariant under Γ∞ . Then in particular, f is invariant
under Zn−1 . We may write

f (Y ) = f (v, y (N −1) , X)

and f has a Fourier series

    (1)   f(Y) = Σ_{m∈Z^{n−1}} a_{m,f}(v, Y^{(n−1)}) e^{2πi⟨m,x⟩} .

The Fourier coefficients are given by the integrals

    a_{m,f}(v, Y^{(n−1)})
        = ∫_{R^{n−1}/Z^{n−1}} f( [u(x)] ( v^{1/(n−1)} Y^{(n−1)}     0    ) ) e^{−2πi⟨m,x⟩} dx .
                                        (          0             v^{−1} )

Example. We may use

    f(Y) = Σ_{γ∈Γ_∞\Γ} ρ([γ]Y)^{−1}

with a character ρ, so f is the standard Eisenstein series.


Proposition 4.1 ([Gre 92]). For γ ∈ Γ_{n−1}, the Fourier coefficients satisfy

    a_{ᵗγm}(v, Y^{(n−1)}) = a_m(v, [γ]Y^{(n−1)}) .

Proof. Matrix multiplication gives

    (2)   ( γ  0 ) u(x) = ( γ  γx ) .
          ( 0  1 )        ( 0   1 )

Then

    a_m(v, [γ]Y^{(n−1)})
        = ∫ f( [u(x)] ( v^{1/(n−1)} [γ]Y^{(n−1)}     0    ) ) e^{−2πi⟨m,x⟩} dx
                      (             0             v^{−1} )
        = ∫ f( [ ( I_{n−1}  x ) ( γ  0 ) ] ( v^{1/(n−1)} Y^{(n−1)}     0    ) ) e^{−2πi⟨m,x⟩} dx
                 (    0     1 ) ( 0  1 )   (          0             v^{−1} )
        = ∫ f( [ ( γ  x ) ] ( v^{1/(n−1)} Y^{(n−1)}     0    ) ) e^{−2πi⟨m,x⟩} dx .
                 ( 0  1 )   (          0             v^{−1} )

We make the translation x ↦ γx, d(γx) = dx, and use (2) together with the
invariance of f under the action of ( γ 0 ; 0 1 ) to get

    a_m(v, [γ]Y^{(n−1)}) = ∫ f(Y) e^{−2πi⟨m,γx⟩} dx = a_{ᵗγm}(v, Y^{(n−1)}) ,

thus proving the proposition.
For m ≠ 0 we may write

    m = dℓ ,   with ℓ primitive and d ∈ Z, d > 0 .

Then putting e = e_{n−1} = ᵗ(0, . . . , 0, 1), we have

    a_m(v, Y^{(n−1)}) = a_{dℓ}(v, Y^{(n−1)}) = a_{de}(v, [γ]Y^{(n−1)})

where ℓ = ᵗγe and m = d ᵗγe. Thus the Fourier series for f can be written

    (3)   f(Y) = a_0(v, Y^{(n−1)}) + Σ_{d=1}^{∞} Σ_{γ∈Γ_{n−1,1}\Γ_{n−1}} a_{de}(v, [γ]Y^{(n−1)}) e^{2πi d⟨e,γx⟩} .

5 The Regularizing Operator Q_Y = |Y| |∂/∂Y|

The above operator Q_Y (Y variable in Pos_n) was used especially by Maass
[Maa 55], [Maa 71], and deserves its own section. We shall use it in Chap. XII,
as in Maass, as a regularizing operator. We have seen in Sect. 2 how to use
terms of maximal rank in an Eisenstein series. The operator Q_Y kills terms
of lower rank because, applied to exp(tr(BY)), a factor |B| comes out, which
is equal to 0 if B is singular.
First, plugging in formula (2) of Sect. 2, we get an example of the adjoint.

Proposition 5.1. Q̃_Y = (−1)ⁿ |Y|^{(n+1)/2} |∂/∂Y| ∘ |Y|^{1−(n+1)/2}. More generally,
for a positive integer r, letting

    Q_{n,r} = |Y|^r |∂/∂Y|^r ,

we find

    Q̃_{n,r} = (−1)^{nr} |Y|^{(n+1)/2} |∂/∂Y|^r ∘ |Y|^{r−(n+1)/2} .
We note that Q_Y is an invariant differential operator of degree n (the size
of the determinant). Next we observe that for any n × n matrix M,

    |∂/∂Y| e^{tr(MY)} = |M| e^{tr(MY)} .

This is immediate from the definitions.
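The identity |∂/∂Y| e^{tr(MY)} = |M| e^{tr(MY)} can be verified symbolically for n = 2. One point to keep in mind — an assumption about the convention, following Maass's symmetric-space calculus rather than anything stated in this chunk — is that for symmetric Y the operator ∂/∂Y carries a factor 1/2 on the off-diagonal entries, so det(∂/∂Y) = ∂₁₁∂₂₂ − (½∂₁₂)²:

```python
import sympy as sp

y11, y12, y22 = sp.symbols('y11 y12 y22')
m11, m12, m22 = sp.symbols('m11 m12 m22')
Y = sp.Matrix([[y11, y12], [y12, y22]])
M = sp.Matrix([[m11, m12], [m12, m22]])
f = sp.exp((M * Y).trace())

# det(d/dY) applied to f, with the factor 1/2 on the off-diagonal entry
Df = sp.diff(f, y11, y22) - sp.diff(f, y12, 2) / 4
assert sp.simplify(Df / f - M.det()) == 0   # the factor |M| comes out
```

Replacing M by a singular matrix makes the right-hand side vanish, which is exactly how Q_Y kills the lower-rank terms.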


The gamma transform was used rather formally in Chap. 3, Proposi-
tion 2.5. We use more of its properties here, so we recall the notation
    (Γ_n # f)(Z) = ∫_{Pos_n} e^{−tr(Y Z^{−1})} f(Y) dµ_n(Y) .

If f is an eigenfunction (the absolute convergence of the integral then being


assumed as part of the definition), we let λΓ (f ) be the corresponding eigen-
value. We recall that d is the determinant.

Theorem 5.2. Suppose f, f d are eigenfunctions of the gamma transform.


Then f is an eigenfunction of Q̃ with eigenvalue

λQ̃ (f ) = (−1)n λΓ (f d)/λΓ (f ),

assuming λΓ (f ) 6= 0.

Proof. We differentiate under the integral sign, namely:

    Q̃_Z λ_Γ(f) f(Z) = Q̃_Z ∫_{Pos_n} e^{−tr(Y Z^{−1})} f(Y) dµ_n(Y)
        = ∫ (Q_Y e^{−tr(Y Z^{−1})}) f(Y) dµ_n(Y)       (by Lemma 1.4)
        = ∫ |Y| ( |∂/∂Y| e^{−tr(Y Z^{−1})} ) f(Y) dµ_n(Y)
        = (−1)ⁿ |Z|^{−1} ∫ e^{−tr(Y Z^{−1})} (f d)(Y) dµ_n(Y)
        = (−1)ⁿ d(Z)^{−1} λ_Γ(f d)(f d)(Z) .

Then d(Z)^{−1} and d(Z) cancel, and the formula of the theorem drops out.

The first important special case is when f is a character ρ_s. In this case,
by Proposition 2.1 of Chap. 3,

    λ_Γ(ρ_s) = b_n ∏_{i=1}^{n} Γ(s_i − α_i)

with

    b_n = (√π)^{n(n−1)/2}   and   α_i = (n − i)/2 .

Corollary 5.3. For f = ρ_s, the Q̃-eigenvalue is a polynomial, namely

    λ_{Q̃}(ρ_s) = (−1)ⁿ ∏_{i=1}^{n} (s_i − α_i) .

Proof. We have f d = ρ_{s+1}, and Γ(s_i − α_i + 1) = (s_i − α_i)Γ(s_i − α_i) for all i,
so the corollary is immediate.
Corollary 5.4. Let h_s = q^{(n−1)}_{−z} be the Selberg power character as in Chap. 3,
Proposition 1.2. Then

    λ_{Q̃}(h_s) = (−1)ⁿ ∏_{i=1}^{n} ( s_{n−i+1} − (n−1)/4 ) = (−1)ⁿ ∏_{i=1}^{n} ( s_i − (n−1)/4 ) .

Proof. Use the value s# found in Chap. 3, Proposition 1.2, and plug into Corol-
lary 5.3. We note that in this particular case, there is a symmetry and a
cancelation which gets rid of the reversal of the variables s_1, . . . , s_n.

Using some duality formulas, we can then determine the eigenvalue of Q


itself as follows.
Theorem 5.5. λ_Q(h_s) = ∏_{i=1}^{n} ( s_i + (n−1)/4 ) .

Proof. We recall the involution

    s* = (−s_n, . . . , −s_1)

from Chap. 3, Sect. 1. Since h_{s*} = h*_s, we have

    λ_{Q̃}(h*_s) = ∏_{i=1}^{n} ( s_i + (n−1)/4 ) .

We have Q̃ = [S]Q and h*_s = [S][ω]h_s. Directly from its definition, [ω]Q = Q.
The theorem is then immediate (canceling [S][ω], as it were).
7
Geometric and Analytic Estimates

In Chap. 1 and 2, we dealt at length with estimates concerning various co-


ordinates on Posn and the volume on Posn . Here we come to deal with the
metric itself, and the application of coordinate estimates to the convergence
of certain Dirichlet series called Eisenstein series. Further properties of such
series will then be treated in the next chapter. On the whole we follow the ex-
position in Maass [Maa 71], Sect. 3, Sect. 7 and especially Sect. 10, although
we make somewhat more efficient use of the invariant measure in Iwasawa
coordinates, thereby introducing some technical simplifications.

1 The Metric and Iwasawa Coordinates

The basic differential geometry of the space Posn is given in Chap. XI of [La
99] and will not be reproduced here. We merely recall the basic definition.
We view Symn (vector space of real symmetric n × n matrices) as the tangent
space at every point Y of Posn . The Riemannian metric is defined at the point
Y by the formula

ds2 = tr((Y −1 dY )2 ) also written tr(Y −1 dY )2 .

This means that if t 7→ Y (t) is a C 1 curve in Posn , then

hY 0 (t), Y 0 (t)iY (t) = tr(Y (t)−1 Y 0 (t))2 ,

where Y 0 (t) is the naive derivative of the map of a real interval into Posn ,
viewed as an open subset of Symn .
The two basic properties of this Riemannian metric are:
Theorem 1.1. Let Symn have the positive definite scalar product given by
hM, M1 i = tr(M M1 ). Then the exponential map exp : Symn → Posn is metric
semi-increasing, and is metric preserving on lines from the origin.


Theorem 1.2. The Riemannian distance between any two points Y, Z ∈ Posn
is given by the formula
    dist(Y, Z)² = Σ_i (log a_i)² ,

where a_1, . . . , a_n are the roots of the polynomial det(tY − Z).


See [La 99], Chap. XI, Theorems 1.2, 1.3 and 1.4. In the present section,
we shall consider the distance formula in the context of Iwasawa-Jacobi coor-
dinates.
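Theorem 1.2 is easy to compute with: the a_i are the eigenvalues of Y^{−1}Z, which one obtains stably from the congruence-reduced matrix L^{−1} Z ᵗL^{−1} with Y = L ᵗL the Cholesky factorization. A sketch on random test data, also checking numerically that the action Y ↦ gY ᵗg of GL_n(R) preserves the distance:

```python
import numpy as np

def dist(Y, Z):
    """Riemannian distance on Pos_n: sqrt(sum (log a_i)^2), a_i roots of det(tY - Z)."""
    L = np.linalg.cholesky(Y)
    Li = np.linalg.inv(L)
    a = np.linalg.eigvalsh(Li @ Z @ Li.T)   # same as the eigenvalues of Y^{-1} Z
    return np.sqrt(np.sum(np.log(a) ** 2))

rng = np.random.default_rng(1)
n = 3
def rand_pos(n):
    A = rng.standard_normal((n, n))
    return A @ A.T + n * np.eye(n)          # symmetric positive definite

Y, Z = rand_pos(n), rand_pos(n)
g = rng.standard_normal((n, n))             # in GL_n(R) almost surely
assert abs(dist(g @ Y @ g.T, g @ Z @ g.T) - dist(Y, Z)) < 1e-7
```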
As in Chap. 2, Sect. 2, the partial Iwasawa-Jacobi coordinates of an element
Y ∈ Pos_n are given by the expression

    Y = [u(X)] ( W  0 ) ,   u(X) = ( I_p   X  ) ,
               ( 0  V )           (  0   I_q )

with X ∈ R^{p×q}, W ∈ Pos_p and V ∈ Pos_q. Matrix multiplication shows that

    (1)   Y = ( W + [X]V   XV )
              (    V ᵗX     V ) .
In particular, by definition

    V = Sub_q(Y)

is the lower right square submatrix, which we have used in connection with
the Selberg power function, cf. Chap. 3, Sect. 1. We shall need the matrix for
Y^{−1}, given by

    Y^{−1} = [ᵗu(−X)] ( W^{−1}    0    )
                      (   0    V^{−1} )

    (2)     = (    W^{−1}           −W^{−1} X        )
              ( −ᵗX W^{−1}   V^{−1} + [ᵗX]W^{−1} ) .
Theorem 1.3. The metric on Pos_n admits the decomposition

    tr(Y^{−1}dY)² = tr(W^{−1}dW)² + tr(V^{−1}dV)² + 2 tr(W^{−1}[dX]V) .

All three terms on the right are ≥ 0. In particular,

    tr(V^{−1}dV)² ≤ tr(Y^{−1}dY)² ,

and the map

    Pos_n → Pos_q   given by   Y ↦ Sub_q(Y) = V

is metric decreasing.

Proof. We copy Maass [Maa 71], Sect. 3. We start with

    (3)   dY = ( dW + [X]dV + dX·V ᵗX + XV·d ᵗX     dX·V + X dV )
               (         dV·ᵗX + V·d ᵗX                  dV      ) .

With the abbreviation

    dY · Y^{−1} = ( L_0  L_1 )
                  ( L_2  L_3 )

we have

    (4)   tr(Y^{−1}dY)² = tr(dY · Y^{−1} · dY · Y^{−1}) = tr(L_0² + L_1 L_2) + tr(L_2 L_1 + L_3²) .

A straightforward calculation yields

    (5)   L_0 = dW·W^{−1} + XV·d ᵗX·W^{−1}
          L_1 = −dW·W^{−1}X − XV·d ᵗX·W^{−1}X + dX + X·dV·V^{−1}
          L_2 = V·d ᵗX·W^{−1}
          L_3 = dV·V^{−1} − V·d ᵗX·W^{−1}X .

The formula giving the decomposition of tr(Y^{−1}dY)² as a sum of three terms
then follows immediately from (4) and the values for the components in (5).
As to the positivity, the only possible question is about the third term on the
right of the formula. For this, we write W = A² and V = B² with positive
A, B. Let Z = B·d ᵗX·A^{−1}. Then

    tr(W^{−1}[dX]V) = tr(Z ᵗZ) ,

which shows that the third quadratic form is positive definite and concludes
the proof.
Let G = GLn (R) as usual. It is easily verified that the action of G on
Posn is metric preserving, so G has a representation as a group of Riemannian
automorphisms of Posn . Again cf. [La 99] Chap. XI, Theorem 1.1. Here we are
interested in the behavior of the determinant |Y | as a function of distance.
Consider first a special case, taking distances from the origin I = In . By
Theorem 1.2, we know that if Y ∈ Br (I) (Riemannian ball of radius r centered
at I), then

    dist(Y, I)^2 = \sum (\log a_i)^2 < r^2 .

It then follows that there exists a number c_n(r) such that for Y ∈ B_r(I), we
have

(6)    \frac{1}{c_n(r)} < |Y| < c_n(r) .

Indeed, the determinant is equal to the product of the characteristic roots,

    |Y| = a_1 \cdots a_n .

By the Schwarz inequality, |\log |Y|| = |\sum \log a_i| \le \sqrt{n}\,\bigl(\sum (\log a_i)^2\bigr)^{1/2} < \sqrt{n}\, r,
so we may take c_n(r) = e^{\sqrt{n}\, r}. Note that from an upper
bound for |Y|, we get a lower bound automatically because Y ↦ Y^{-1} is an
isometry. From another point of view, we also have (\log a_i)^2 = (\log a_i^{-1})^2.
In the above estimate, we took a ball around I. But the transitive action
of G on Posn gives us more uniformity. Indeed:

Lemma 1.4. For any pair Y, Z ∈ Pos_n with dist(Y, Z) < r, we have

    c_n(r)^{-1} < \frac{|Z|}{|Y|} < c_n(r) .

Proof. We have

    |tZ − Y| = |Y|\, |tY^{-1}Z − I| .

The roots of this polynomial are the same as the roots of the polynomial
|tI − [Y^{-1/2}]Z|, and [Y^{-1/2}]Z ∈ B_r(I), so the lemma follows from the
corresponding statement translated to the origin I.
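The bound c_n(r) = e^{√n r} can be sanity-checked on diagonal matrices, where everything is explicit: for Y = diag(e^{t₁}, ..., e^{t_n}) one has dist(Y, I)² = Σ tᵢ² and |Y| = e^{Σ tᵢ}. This is an illustrative sketch under that simplifying assumption, not part of the text:

```python
import math
import random

# Check 1/c_n(r) < |Y| < c_n(r) with c_n(r) = exp(sqrt(n) * r)
# on random diagonal Y = diag(e^{t_1}, ..., e^{t_n}) in B_r(I).
random.seed(0)
n, r = 3, 0.8
c = math.exp(math.sqrt(n) * r)
ok = True
for _ in range(1000):
    raw = [random.uniform(-1, 1) for _ in range(n)]
    norm = math.sqrt(sum(x * x for x in raw)) or 1.0
    s = random.uniform(0.0, r)
    t = [x * s / norm for x in raw]   # now dist(Y, I) = s < r
    detY = math.exp(sum(t))           # |Y| for Y = diag(e^{t_i})
    ok = ok and (1.0 / c < detY < c)
```

The inequality holds with room to spare: equality in Cauchy–Schwarz requires all tᵢ equal.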


We shall also be interested in the subdeterminants Sub_j(Y) of Y. By
Theorem 1.3, we know that the association Y ↦ Sub_j(Y) is metric decreasing.
Hence we may extend the uniformity of Lemma 1.4 as follows.

Lemma 1.5. For g ∈ GL_n(R) and all pairs Y, Z ∈ Pos_n with dist(Y, Z) ≤ r,
and all j = 1, ..., n we have

    c_n(r)^{-1} |Sub_j[g]Y| < |Sub_j[g]Z| < c_n(r) |Sub_j[g]Y| .

Briefly: |Sub_j[g]Z| ≍_r |Sub_j[g]Y|.

Next, let

    D_r = \Bigl\{ Y ∈ Pos_n such that |Y| < c_n(r) and |Sub_j Y| > \frac{1}{c_n(r)} for j = 1, ..., n \Bigr\} .

Lemma 1.6. For all γ ∈ Γ = GLn (Z) we have

Br ([γ]I) ⊂ Dr .

Proof. Let Y ∈ Br ([γ]I). Then [γ −1 ]Y ∈ Br (I), and we can apply (6), as well
as |[γ −1 ]I| = 1 to prove the inequality |Y | < cn (r). For the other inequality,
by the distance decreasing property, we have

    dist(Sub_j[γ]I, Sub_j Y) ≤ dist([γ]I, Y) < r .

Hence by Lemma 1.4,

    |Sub_j Y| > \frac{1}{c_n(r)} |Sub_j([γ]I)| \ge \frac{1}{c_n(r)}

because [γ]I is a positive integral matrix, so |Sub_j([γ]I)| ≥ 1. This concludes
the proof.

The set of elements [γ]I with γ ∈ Γ is discrete in Pos_n. We call r > 0 a
radius of discreteness for Γ if dist([γ]I, I) < 2r implies γ = ±I, that is, [γ]
acts trivially on Pos_n. We shall need:

Lemma 1.7. Let γ, γ' ∈ Γ, and let r be a radius of discreteness for Γ. If
there is an element Y ∈ Pos_n in the intersection of the balls B_r([γ]I) and
B_r([γ']I), then [γ] = [γ'], that is γ' = ±γ.

Proof. By hypothesis, dist([γ]I, [γ']I) < 2r, so

    dist([γ^{-1}γ']I, I) < 2r ,

and the lemma follows.

2 Convergence Estimates for Eisenstein Series


We shall need a little geometry concerning the action of the unipotent group
on Posn , so we start with an independent discussion of this geometry.
An element Y ∈ Pos_n can be written uniquely in the form

    Y = [u(X)]A   with u(X) = I_n + X ,

and

    A = \begin{pmatrix} a_{11} & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & a_{nn} \end{pmatrix} ,   a_{ii} > 0 ,

and X = (x_{ij}) is strictly upper triangular. We call (X, A) the full Iwasawa
coordinates for Y on Pos_n.

Let Γ = GL_n(Z) as usual, and Γ_U = subgroup of unipotent elements in Γ,
that is, the upper triangular integral matrices with every diagonal element equal to
1. Thus γ ∈ Γ_U can be written γ = I_n + X with an integral matrix X.
It is easy to construct a fundamental domain for Γ_U\Pos_n. First we note
that a fundamental domain for the real unipotent group Uni^+(R) modulo the
integral subgroup Γ_U consists of all elements u(X) such that 0 ≤ x_{ij} < 1.
We leave the proof to the reader. In an analogous discrete situation when all
matrices are integral, we shall carry out the inductive argument in Lemma
1.2 of Chap. 8, using the euclidean algorithm. In the present real situation,
one uses a “continuous” euclidean algorithm, as it were. Then we define:

    F_U = set of elements [u(X)]A ∈ Pos_n with 0 ≤ x_{ij} < 1 .

From the uniqueness of the Iwasawa coordinates, we conclude that F_U is a
strict fundamental domain for Γ_U\Pos_n.
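The "continuous" euclidean algorithm can be made explicit: working along successive superdiagonals (entries nearest the diagonal first), each entry of the product can be shifted by an integer without disturbing the entries already reduced. A sketch with our own helper names:

```python
import math

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def reduce_to_FU(U):
    """Find an integral unipotent G with (G U)[i][j] in [0, 1) for i < j.
    Left-multiplying by the elementary unipotent E_ij(m) adds m to the
    (i, j) entry of G U and only touches entries farther from the diagonal,
    so reducing superdiagonal by superdiagonal terminates."""
    n = len(U)
    G = [[float(i == j) for j in range(n)] for i in range(n)]
    for d in range(1, n):                  # distance from the diagonal
        for i in range(n - d):
            j = i + d
            m = -math.floor(matmul(G, U)[i][j])
            for k in range(n):             # G <- E_ij(m) G  (row operation)
                G[i][k] += m * G[j][k]
    return G

U = [[1.0, 2.7, -3.1],
     [0.0, 1.0,  5.4],
     [0.0, 0.0,  1.0]]
G = reduce_to_FU(U)
R = matmul(G, U)
```

Here R = G·U lies in the fundamental domain and G has integer entries, illustrating why F_U is a strict fundamental domain.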
The main purpose of this section is to prove the convergence of a certain
series called an Eisenstein series. We shall prove it by an integral test, depend-
ing on the finiteness of a certain integral, which we now describe in a fairly
general context.
Let c > 0. We define the subset D(c) of Posn to be:

    D(c) = \{ Y ∈ Pos_n : |Y| < c and |Sub_j Y| > 1/c for all j = 1, ..., n \} .

We recall the Selberg power function

    q^{(n)}_{-z}(Y) = \prod_{j=1}^{n} |Sub_j Y|^{-z_j} .

We are interested in the integral of this power function over a set

D(c) ∩ FU .

To test absolute convergence, it suffices to do so when all zj are real. The next
lemma will prove absolute convergence when Re(zj ) > 1.

Lemma 2.1. Let b > 1. Then

    \int_{D(c) \cap F_U} \prod_{j=1}^{n} |Sub_j Y|^{-b} \, dμ_n(Y) < \infty .

Proof. In Chap. 2, Proposition 2.4, we computed the invariant measure
dμ_n(Y) in terms of the Iwasawa coordinates, and found

(1)    dμ_n(Y) = \prod_{i=1}^{n} a_{ii}^{\,i-(n+1)/2} \prod_{i=1}^{n} \frac{da_{ii}}{a_{ii}} \prod_{i<j} dx_{ij} .

We note that |Sub_j Y| = a_{n-j+1} \cdots a_n, writing a_i = a_{ii}. Hence, if we take
ε > 0 and set b = 1 + ε, we have

(2)    \prod_{j=1}^{n} |Sub_j Y|^{-b} = \prod_{i=1}^{n} a_i^{-i-\varepsilon i} .
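The identity |Sub_j Y| = a_{n−j+1} ⋯ a_n can be checked numerically by building Y = [u(X)]A for explicit full Iwasawa coordinates and comparing the lower-right subdeterminants with the trailing diagonal products (a sketch with our own helper names):

```python
def det(M):
    # Laplace expansion along the first row; fine for small matrices
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def matmul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

n = 3
a = [2.0, 0.5, 3.0]                    # diagonal of A
X = [[0.0, 1.3, -0.7],
     [0.0, 0.0,  2.2],
     [0.0, 0.0,  0.0]]                 # strictly upper triangular
u = [[float(i == j) + X[i][j] for j in range(n)] for i in range(n)]
A = [[a[i] if i == j else 0.0 for j in range(n)] for i in range(n)]
ut = [[u[j][i] for j in range(n)] for i in range(n)]
Y = matmul(matmul(u, A), ut)           # Y = [u(X)]A = u A u^t

def sub(Y, j):
    """Lower right j x j submatrix Sub_j(Y)."""
    return [row[-j:] for row in Y[-j:]]
```

Since Sub_j(Y) = [u_j]A_j with u_j unipotent, its determinant is exactly a_{n−j+1} ⋯ a_n.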

The effect of intersecting D(c) with F_U is to bound the x_{ij}-coordinates. Thus
the convergence of the integral depends only on the a_i-coordinates. To
concentrate on them, we let

    dμ_{n,A} = \prod_{i=1}^{n} a_i^{\,i-(n+1)/2} \prod_{i=1}^{n} \frac{da_i}{a_i} .

We let D_A(c) be the region in the A-space defined by the inequalities

    \frac1c < a_1 \cdots a_n < c   and   a_n > \frac1c ,
    a_n a_{n-1} > \frac1c ,  \ldots ,  a_n a_{n-1} \cdots a_1 > \frac1c .
Thus DA (c) is a region in the n-fold product of the positive multiplicative
group, and the convergence of the integral in our lemma is reduced to the
convergence of an integral in a euclidean region, so to calculus. Taking
the product of the expressions in (1) and (2), and integrating over D_A(c),
we see that the finiteness of the integral in our lemma is reduced to proving
the finiteness of

(3)    \int_{D_A(c)} \prod_{i=1}^{n} a_{ii}^{-(\varepsilon i + 1 + (n+1)/2)} \, da_{ii} < \infty .

Just to see what’s going on, suppose n = 2 and the variables are

a1 = u and a2 = v .

The region is defined by the inequalities

    \frac1c < uv < c   and   v > \frac1c .
The integral can be rewritten as the repeated integral

    \int_{1/c}^{\infty} \Bigl( \int_{1/cv}^{c/v} u^{-(1+\varepsilon+(n+1)/2)} \, du \Bigr) v^{-(2\varepsilon+1+(n+1)/2)} \, dv .

The inner integral with respect to u can be evaluated, and up to a constant
factor, it produces a term

    v^{\varepsilon+(n+1)/2}
which cancels the similar expression in the outer v-integral. Thus finally the
convergence is reduced to

    \int_{1/c}^{\infty} v^{-(1+\varepsilon)} \, dv < \infty ,

which is true. Having n variables only complicates the notation but not the
idea, which is to integrate successively with respect to da_n, then da_{n-1},
and so forth down to da_1. We leave this to the reader, which concludes the
proof of Lemma 2.1.
Next we combine the metric estimates from the last section with the mea-
sure estimates which we have just considered. Let r be a radius of discreteness
for Γ, defined at the end of the last section. Then

Dr = D(cn (r)) ,

where D(c) is the set we considered in Lemma 2.1.


Let {γ_m} (with m = 1, 2, ...) be a family of coset representatives for
±Γ_U\Γ. For each m we let τ_{mk} (k = 1, ..., d_m) be a minimal number of
elements of ±Γ_U such that

    B_r([γ_m]I) ⊂ \bigcup_{k=1}^{d_m} [τ_{mk}]F_U .

In particular, the intersection

Smk = Br ([γm ]I) ∩ [τmk ]FU

is not empty for each m, k. The set D_r defined above is stable under the action
of Γ_U. Hence translating the sets S_{mk} back into F_U we conclude that

(4)    [τ_{mk}^{-1}]S_{mk} ⊂ D_r ∩ F_U   for all m, k .

By Lemma 1.7, the sets [τ_{mk}^{-1}]S_{mk} are disjoint, for pairs (m, k) defined as
above.
We are now ready to apply the geometry to estimate certain series.
Let ρ be a character. The primitive Eisenstein series is defined by

    E_U^{pr}(Y, ρ) = \sum_{γ ∈ Γ_U\Γ} ρ([γ]Y) .

We shall be concerned with the character equal to the Selberg power function,
that is q^{(n-1)}_{-z}, so that by definition,

    E_U^{pr(n-1)}(Y, z) = \sum_{γ ∈ Γ_U\Γ} \prod_{j=1}^{n-1} |Sub_j[γ]Y|^{-z_j} .

First, note that any Y ∈ Posn lies in some ball Br (I), and by Lemma 1.5,
we see that the convergence of the series for any given Y is equivalent to the
convergence with Y = I. We also have uniformity of convergence in a ball of
fixed radius. In addition, we note that
|Subn [γ]Y | = |[γ]Y | = |Y | for all γ ∈ Γ .
Thus the convergence of the above Eisenstein series is equivalent with the
convergence of

    E_U^{pr(n)}(Y, z) = \sum_{γ ∈ Γ_U\Γ} \prod_{j=1}^{n} |Sub_j[γ]Y|^{-z_j} .

Furthermore, zn has no effect on the convergence. The main theorem is:


Theorem 2.2. The Eisenstein series converges absolutely for all zj with
Re(zj ) > 1 for j = 1, . . . , n − 1.
Proof. First we replace zj by a fixed number b > 1. We prove the convergence
for Y = I, but we shall immediately take an average, namely we use the
inequalities for Y ∈ Br (I), with r a radius of discreteness for Γ:
(5)    E(I, b) = \sum_{γ ∈ Γ_U\Γ} \prod_{j=1}^{n} |Sub_j([γ]I)|^{-b}
            ≪ \sum_{γ ∈ Γ_U\Γ} \int_{B_r(I)} \prod_{j=1}^{n} |Sub_j[γ]Y|^{-b} \, dμ(Y)
            ≪ \sum_{γ ∈ Γ_U\Γ} \int_{B_r([γ]I)} \prod_{j=1}^{n} |Sub_j Y|^{-b} \, dμ(Y) .

We combine the inclusion (4) with the estimate in (5). We use the fact that

    |Sub_j[τ]Y| = |Sub_j Y|   for τ ∈ Γ_U ,

and we translate each integral back into F_U. We then obtain from (5)

    E(I, b) ≪_n \sum_{m=1}^{\infty} \sum_{k=1}^{d_m} \int_{[τ_{mk}^{-1}]S_{mk}} \prod_{j=1}^{n} |Sub_j Y|^{-b} \, dμ(Y)
            ≪_n \int_{D_r ∩ F_U} \prod_{j=1}^{n} |Sub_j Y|^{-b} \, dμ_n(Y) .

The sign ≪_n means that the left side is less than the right side times a
constant depending only on n. We have used here the fact already determined
that the sets [τ_{mk}^{-1}]S_{mk} are disjoint and contained in D_r ∩ F_U. The finiteness
of the integral was proved in Lemma 2.1, which thereby concludes the proof
of Theorem 2.2.

3 A Variation and Extension


In the application of Chap. 8, one needs convergence of a modified Eisenstein
series, specifically the following case.
Theorem 3.1. The series

    \sum_{γ ∈ Γ_U\Γ} \prod_{j=2}^{n} |Sub_j[γ]Y|^{-z_j}

converges absolutely for Re(z_2) > 3/2 and Re(z_j) > 1 for j ≥ 3.
The proof is the same as the proof of Theorem 2.2. One uses the same set
D(c). Lemma 2.1 has its analogue for the product with one term omitted. The
calculus computation comes out as stated. For instance, for n = 3, the region
D(c) is defined by the inequalities
    \frac1c < uvw < c ,   vw > \frac1c ,   w > \frac1c .
The series is dominated by the repeated integral

    \int_{1/c}^{\infty} \int_{1/wc}^{\infty} \int_{1/vwc}^{c/vw} (vw)^{-3/2-\varepsilon}\, u^{-(n+1)/2}\, v^{1-(n+1)/2}\, w^{2-(n+1)/2} \, du\, dv\, dw ,

which comes out up to a constant factor to be

    \int_{1/c}^{\infty} w^{-1-\varepsilon} \, dw .

For various reasons, including the above specific application, Maass ex-
tends the convergence theorem still further as follows [Maa 71].
Let
0 = k0 < k1 < . . . < km < km+1 = n
be a sequence of integers which we call an integral partition P of n. Let

ni = ki − ki−1 , i = 1, . . . , m + 1 .

Then n = n1 + . . . + nm+1 is a partition of n in the number theoretic sense.


Matrices consisting of blocks of size ni (with i = 1, . . . , m + 1) on the diagonal
generalize diagonal matrices. We let:

Γ_P = subgroup of Γ consisting of elements which are block upper triangular
with respect to this partition, in other words, elements γ = (C_{ij}) with
C_{ii} ∈ Γ_{n_i} for 1 ≤ i ≤ m + 1 and C_{ij} = 0 for 1 ≤ j < i ≤ m + 1.

In the previous cases, we have k_j = j, n_j = 1 for all j = 1, ..., n, and
m + 1 = n. The description of the groups associated with a partition as above
is slightly more convenient than imposing further restrictions, but we note
that in this case the diagonal elements may be ±1, so we are dealing with the
group T rather than the unipotent group U.
A group such as Γ_P above is also called a parabolic subgroup.
We define the Eisenstein series as a function of variables z_1, ..., z_m by

    E_P(Y, z) = \sum_{γ ∈ Γ_P\Γ} \prod_{i=1}^{m} |Sub_{k_i}[γ]Y|^{-z_i} .

Theorem 3.2. ([Maa 71], Sect. 7) This Eisenstein series is absolutely
convergent for

    Re(z_i) > \frac12 (n_{i+1} + n_i) = \frac12 (k_{i+1} − k_{i−1}) ,   i = 1, ..., m .
Proof. One has to go through the same steps as in the preceding section,
with the added complications of the more elaborate partition. One needs the
Iwasawa-Jacobi coordinates with blocks,

    Y = [u(X)] \begin{pmatrix} W_1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & W_{m+1} \end{pmatrix}
    and
    u(X) = \begin{pmatrix} I_{n_1} & \cdots & X_{ij} \\ \vdots & \ddots & \vdots \\ 0 & \cdots & I_{n_{m+1}} \end{pmatrix} .

The measure is given by

    dμ_n(Y) = \prod_{i=1}^{m+1} |W_i|^{(k_i + k_{i-1} - n)/2} \, dμ(W_i) \prod_{1 \le i < j \le m+1} dμ_{euc}(X_{ij}) .

The fundamental domain for Γ_P consists of those Y whose coordinates satisfy:

    W_i ∈ fundamental domain for Γ_{n_i} (i = 1, ..., m + 1) in Pos_{n_i} ;
    X_{ij} has coordinates 0 ≤ x_{νμ} < 1 .

The domain D(c) is now

    D_P(c) = \Bigl\{ Y such that W_i > 0 for all i = 1, ..., m + 1;
        \prod_{i=1}^{m+1} |W_i| < c,\ |W_m| > \frac1c,\ |W_m||W_{m-1}| > \frac1c,\ \ldots,\ |W_m| \cdots |W_1| > \frac1c \Bigr\} ;

thus we merely replace a_i by |W_i| throughout the previous definition. Maass
gives his proof right away with the more complicated notation, and readers
can refer to it.

Note that Theorem 3.1 is a special case of Theorem 3.2. However, the
notation of Theorem 3.1 is simpler, and we thought it worthwhile to state it
and indicate its proof separately, using the easier notation for the Eisenstein
series.
The subgroup ΓP is usually called a parabolic subgroup. Such sub-
groups play an essential role in the compactification of Γn \Posn , and in the
subsequent spectral eigenfunction decomposition.
8 Eisenstein Series Second Part

In Chap. 5, we already saw the Epstein zeta function, actually two zeta
functions, one primitive and the other one completed by a Riemann zeta function.
Indeed, let Y ∈ Pos_n. We may form the two series

    E^{pr}(Y, s) = \sum_{a\ \mathrm{prim}} ([a]Y)^{-s}   and   E(Y, s) = \sum_{a \neq 0} ([a]Y)^{-s} ,

where the first sum is taken over a ∈ {}^tZ^n, a ≠ 0 and a primitive, while the
second sum is taken over all a ∈ {}^tZ^n, a ≠ 0. Any a ∈ {}^tZ^n, a ≠ 0, can be written
uniquely in the form

    a = d a_1   with d ∈ Z^{+} and a_1 primitive .

Therefore

    E(Y, s) = ζ_Q(2s) E^{pr}(Y, s) .
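The decomposition a = d a₁, with d the gcd of the coordinates, together with [d a₁]Y = d²[a₁]Y, is exactly what produces the factor ζ_Q(2s). A small sketch for n = 2 (the helper names are ours):

```python
import math
from fractions import Fraction

def gcd_vec(a):
    g = 0
    for x in a:
        g = math.gcd(g, abs(x))
    return g

def primitive_part(a):
    """Write a nonzero integer vector uniquely as d * a1, d > 0, a1 primitive."""
    d = gcd_vec(a)
    return d, tuple(x // d for x in a)

def qform(a, Y):
    """[a]Y = a Y ^t a for a row vector a (here n = 2)."""
    p, q = a
    return p * p * Y[0][0] + 2 * p * q * Y[0][1] + q * q * Y[1][1]

Y = ((Fraction(3), Fraction(1)), (Fraction(1), Fraction(2)))
d, a1 = primitive_part((6, -10))
```

Summing ([a]Y)^{−s} = d^{−2s}([a₁]Y)^{−s} over all a ≠ 0 then factors as (Σ_{d≥1} d^{−2s}) times the primitive sum, which is the stated identity.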
We have to extend this property to the more general Selberg Eisenstein
series on Pos_n. This requires a more involved combinatorial formalism, about
integral matrices in Z^{j,j+1} with j = 1, ..., n − 1. Thus the first section is
devoted to the linear algebra formalism of such integral matrices and their
decompositions. After that, we define the general Eisenstein series and ob-
tain various expressions for them which are used subsequently in deriving the
analytic continuation and functional equations. For all this, we will follow
Maass from [Maa 71] after [Maa 55], [Maa 56]. He did a great service to the
mathematical community in providing us with a careful and detailed account.
However, we have had to rethink all the formulas because we use left
characters instead of right characters as in Maass-Selberg, and also we intro-
duce the Selberg variables s = (s1 , . . . , sn ) as late as possible. Indeed, we work
with more general functions than characters, for application to more general
types of Eisenstein series constructed with automorphic forms, or beyond with
the heat kernel.
We note here one important feature about the structure of various fudge
factors occurring in functional equations: they are eigenvalues of certain

Jay Jorgenson: Posn (R) and Eisenstein Series, Lect. Notes Math. 1868, 133–162 (2005)
www.springerlink.com © Springer-Verlag Berlin Heidelberg 2005

operators, specifically three operators: a regularizing invariant differential
operator, the gamma operator (convolution with the kernel of the gamma
function on Pos_n), and a Hecke-zeta operator. To bring out more clearly the
structure of these operators and their role, we separate the explicit computation
of their eigenvalues from the position these eigenvalues occupy as fudge factors. When
the eigenfunctions are characters, these eigenvalues are respectively polyno-
mials, products of ordinary gamma functions, and products of Riemann zeta
functions, with the appropriate complex variables. Such eigenvalues are those
occurring in the theory of the Selberg Eisenstein series, which are the most
basic ones. However, Eisenstein series like other invariants from spectral the-
ory (including analytic number theory) have an inductive “ladder” structure,
and on higher rungs of their ladder, the eigenvalues are of course more compli-
cated and require more elaborate explicit computations, which will be carried
out in their proper place. On the other hand, the general formulas given in
the present chapter will be applicable to these more general cases.

1 Integral Matrices and Their Chains


Throughout, we let:

    Γ_n = GL_n(Z);
    M*_n = set of integral n × n matrices of rank n;
    M*(p, q) = set of integral p × q matrices of rank min(p, q);
    Δ_n = set of upper triangular integral n × n matrices of rank n;
    T_n = Γ_n ∩ Δ_n = group of upper triangular integral matrices of determinant ±1.

We note that M*_n and Δ_n are just sets of matrices, not groups. The diagonal
components of an element in Δ_n are arbitrary integers ≠ 0, so elements
of ∆n are not necessarily unipotent. On the other hand, the elements of Tn
necessarily have ±1 on the diagonal, so differ from unipotent elements pre-
cisely by such diagonal elements. Note that ∆n is stable under the action
of Tn on both sides, but we shall usually consider the left action. Thus we
consider coset representatives in Γn for the coset space Tn \Γn and also coset
representatives D ∈ ∆n of the coset Tn D, which is a subset of ∆n . Similarly,
M∗n is stable under the action of Γn on both sides, and we can consider the
coset space Γn \M∗n .

Lemma 1.1. The natural inclusion Δ_n ↪ M*_n induces a bijection

    T_n\Δ_n → Γ_n\M*_n

of the coset spaces.



Proof. By induction, and left to the reader. We shall work out formally a more
complicated variation below.

The bijection of Lemma 1.1 is called triangularization.


Next we determine a natural set of coset representatives for Tn \∆n .
Lemma 1.2. A system of coset representatives of T_n\Δ_n consists of the
matrices

    D = \begin{pmatrix} d_{11} & \cdots & d_{1n} \\ \vdots & \ddots & \vdots \\ 0 & \cdots & d_{nn} \end{pmatrix} = (d_{ij})

satisfying d_{ij} = 0 if j < i (upper triangularity), d_{jj} > 0 for all j, and

    0 ≤ d_{ij} < d_{jj}   for 1 ≤ i < j ≤ n .

In other words, in a vertical column, the components are ≥ 0 and strictly
smaller than the diagonal component in this column.

Proof. In the first place, multiplying an arbitrary element D ∈ Δ_n by a
diagonal matrix with ±1 diagonal components, we can make the diagonal elements
d_{jj} (j = 1, ..., n) positive. Then we want to determine a nilpotent integral
matrix X (upper triangular, zero on the diagonal) such that (I + X)D is
among the prescribed representatives, and furthermore, such X is uniquely
determined. This amounts to the euclidean algorithm, and is done by induction,
starting with the top left. Pictorially, given an upper triangular integral
matrix with positive diagonal elements d_{ii} and strictly upper triangular elements
y_{ij}, we want to find X = (x_{ij}) such that the product

    \begin{pmatrix}
    1 & x_{12} & \cdots & x_{1n} \\
    0 & 1 & \cdots & x_{2n} \\
    \vdots & & \ddots & \vdots \\
    0 & 0 & \cdots & 1
    \end{pmatrix}
    \begin{pmatrix}
    d_{11} & y_{12} & \cdots & y_{1n} \\
    0 & d_{22} & \cdots & y_{2n} \\
    \vdots & & \ddots & \vdots \\
    0 & 0 & \cdots & d_{nn}
    \end{pmatrix}

satisfies the inequalities in the lemma. We start at the top, so we first solve
for x_{12} such that

    0 ≤ y_{12} + x_{12} d_{22} < d_{22} .

This inequality has a unique integral solution x_{12}. We then solve inductively
for x_{13}, ..., x_{1n}; then we go down the rows to conclude the proof.

Lemma 1.3. Given integers d_{jj} > 0 (j = 1, ..., n), the number of cosets
T_n D with D having the given diagonal elements is

    \prod_{j=1}^{n} d_{jj}^{\,j-1} .
j=1

Proof. Immediate.
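For small n the representatives of Lemma 1.2 can be enumerated directly and the count of Lemma 1.3 confirmed (an illustrative sketch; the helper names are ours):

```python
from itertools import product

def representatives(diag):
    """All Lemma 1.2 representatives with the given positive diagonal:
    upper triangular, diagonal fixed, and 0 <= d_ij < d_jj for i < j."""
    n = len(diag)
    positions = [(i, j) for i in range(n) for j in range(i + 1, n)]
    reps = []
    for vals in product(*[range(diag[j]) for (_, j) in positions]):
        D = [[0] * n for _ in range(n)]
        for k in range(n):
            D[k][k] = diag[k]
        for (i, j), v in zip(positions, vals):
            D[i][j] = v
        reps.append(D)
    return reps

diag = (2, 3, 4)
reps = representatives(diag)
# Lemma 1.3: the count is the product of d_jj^(j-1) (1-based j)
expected = 1
for j0, d in enumerate(diag):   # j0 is 0-based, so the exponent is j0
    expected *= d ** j0
```

Each column j contributes d_{jj} choices for each of its j − 1 off-diagonal entries, which is where the exponent j − 1 comes from.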

Remark. The previous lemmas have analogues for the right action of Γn on
M∗n . First, Lemma 1.1 is valid without change for the right action of Γn on
M∗n and the right action of Tn on ∆n . On the other hand, the inequalities
defining coset representatives in Lemma 1.2 for the right action have to read:

    0 ≤ d_{ij} < d_{ii}   for 1 ≤ i < j ≤ n .

Then the number of cosets D T_n with D having given d_{11}, ..., d_{nn} > 0 is

    \prod_{j=1}^{n} d_{jj}^{\,n-j} .

Next we deal with M*(n − 1, n), where n is a positive integer ≥ 2.


Lemma 1.4. Let C ∈ M*(n − 1, n). There exist γ_j ∈ Γ_j (j = n − 1, n) such
that

    γ_{n-1} C γ_n^{-1} = (0, D)   with D ∈ Δ_{n-1} ,

that is, D is upper triangular.

Proof. The proof is a routine induction. Let n = 2. Let C ∈ M*(1, 2), so
C = (b, c) is a pair of integers, one of which is ≠ 0. Let us write b = d b_1,
c = d c_1 where (b_1, c_1) is primitive, i.e. b_1, c_1 are relatively prime, and d is a
non-zero integer. We can then complete a first column {}^t(−c_1, b_1) to an element
of SL_2(Z), which settles this case. The rest is done by induction, using blocks. A
more detailed argument will be given in a similar situation, namely the proof
of Lemma 1.6.
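The n = 2 step is just the extended euclidean algorithm: the sketch below (our own helper names) completes a column to a matrix in SL₂(Z) that moves (b, c) to (0, gcd(b, c)).

```python
import math

def egcd(a, b):
    """Extended euclid: returns (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return (abs(a), (1 if a >= 0 else -1), 0)
    g, x, y = egcd(b, a % b)
    return (g, y, x - (a // b) * y)

def complete_to_SL2(b, c):
    """Return g in SL_2(Z) with (b, c) g = (0, gcd(b, c)).  The first
    column of g is (c1, -b1), where (b1, c1) is the primitive part of
    (b, c), so the first entry of the product vanishes."""
    d = math.gcd(b, c)
    b1, c1 = b // d, c // d
    _, x, y = egcd(b1, c1)          # b1*x + c1*y = 1
    return [[c1, x], [-b1, y]]      # det = c1*y + b1*x = 1

g = complete_to_SL2(6, -10)
row = [6 * g[0][0] + (-10) * g[1][0], 6 * g[0][1] + (-10) * g[1][1]]
detg = g[0][0] * g[1][1] - g[0][1] * g[1][0]
```

The second column is chosen by Bézout's identity so that the determinant is exactly 1.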

We consider the coset space Tn−1 \M∗ (n − 1, n). Given a coset Tn−1 C, by
Lemma 1.4 we can find a coset representative of the form (0, D)γ with γ ∈ Γn .
We use such representatives to describe a fibration of Tn−1 \M∗ (n − 1, n) over
Tn \Γn as follows.
Lemma 1.5. Let π : Tn−1 \M∗ (n − 1, n) → Tn \Γn be the map which to each
coset Tn−1 C with representative (0, D)γ associates the coset Tn γ. This map π
is a surjection on Tn \Γn , and the fibers are Tn−1 \∆n−1 .

Proof. Implicit in the statement of the lemma is that the association π as de-
scribed is well defined, i.e. independent of the chosen representative. Suppose

    (0, D)γ = (0, D')γ'   with D, D' ∈ Δ_{n-1} .

Then (0, D) = (0, D')γ'γ^{-1}. Let τ = γ'γ^{-1} ∈ Γ_n. Then the above
equation shows that actually τ is triangular, and so lies in T_n. This is done by
an inductive argument, letting τ = (t_{ij}) and starting by showing that
t_{21} = 0, ..., t_{n1} = 0, and then proceeding inductively to the right with

the second column, third column, etc. Thus γ, γ' are in the same coset of
Tn \Γn , showing the map is well defined. We note that the surjectivity of π is
immediate.
As to the fibers, if τ ∈ T_n and D ∈ Δ_{n-1}, then (0, D)τ again has the
form (0, D') with D' ∈ Δ_{n-1}. Thus by definition, the fiber above a coset T_n γ
consists precisely of cosets

Tn−1 (0, D) with D ∈ ∆n−1 ,

which proves the lemma.

In Lemma 1.5, we note that for each γ ∈ Γ_n we have a bijection

    T_{n-1}\Δ_{n-1} → fiber above T_n γ ,

induced by the representative map D ↦ (0, D)γ.


The arguments of Lemmas 1.4 and 1.5 will be pushed further inductively.
The rest of this section follows the careful and elegant exposition in Maass
[Maa 71].
Since we operate with the discrete group Γ on the left, we have to reverse
the notation used in Selberg, Maass, and other authors, for example Langlands
[Lgl 76], Appendix 1. Let Y ∈ Pos_n. If Y_j = Sub_j(Y) is the lower right j × j
square submatrix of Y, then we can express Y_j in the form

(1)    Y_j = [(0, I_j)]Y = (0, I_j)\, Y \begin{pmatrix} 0 \\ I_j \end{pmatrix} ,

where I_j is the unit j × j matrix as usual. Note the operation on the left,
and the fact that 0 denotes the j × (n − j) zero matrix, so that (0, I_j) is a
j × n matrix. If Y = T\,{}^tT with an upper triangular matrix T, then Y_j = T_j\,{}^tT_j,
where T_j is the lower right j × j submatrix of T.
From a given Y we obtain a sequence (Yn , Yn−1 , . . . , Y1 ) by the operation
indicated in (1), starting with Yn = Y . We call this sequence the Selberg
sequence of Y . Given γ ∈ Γn , we shall also form the Selberg sequence
with Yn = [γ]Y . In some sense (to be formalized below) this procedure gives
rise to “primitive” sequences. It will be necessary to deal with non-primitive
sequences, and thus we are led to make more general definitions as follows.
By an integral chain (more precisely n-chain) we mean a finite sequence

    C = (γ, C_{n-1}, ..., C_1)   with γ ∈ Γ_n and C_j ∈ M*(j, j + 1)

for j = 1, ..., n − 1. Let C be such a chain. Let C' = (γ', C'_{n-1}, ..., C'_1) be
another chain. We define C equivalent to C' if either one of the following
conditions is satisfied.

EQU 1. There exist γ_j ∈ Γ_j (j = 1, ..., n) such that

(2)    γ' = γ_n γ   and   C'_j = γ_j C_j γ_{j+1}^{-1}   for j = 1, ..., n − 1 .

EQU 2. There exist γ_j ∈ Γ_j (j = 1, ..., n − 1) such that

(3)    C'_j \cdots C'_{n-1} γ' = γ_j C_j \cdots C_{n-1} γ   for j = 1, ..., n − 1 .

It is obvious that (2) implies (3). Conversely, suppose (3) holds. We
then let γ_n = γ'γ^{-1}, and it follows inductively that (2) is satisfied.
A sequence (γ, Cn−1 , . . . , C1 ) will be said to be triangularized if we have
that Cj = (0, Dj ) with Dj ∈ ∆j for j = 1, . . . , n − 1. Thus the first column of
Cj is zero.
The next lemmas give special representatives for equivalence classes.
Lemma 1.6. Let C_j ∈ Z^{j,j+1} (j = 1, ..., n − 1) be integral matrices. There
exist elements γ_j ∈ Γ_j (j = 1, ..., n) such that for j = 1, ..., n − 1 we have

    γ_j C_j γ_{j+1}^{-1} = (0, T_j) = \begin{pmatrix} 0 & * & \cdots & * \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & * \end{pmatrix} ,

that is, the first column on the right is 0, and the rest is upper triangular, with
T_j ∈ Tri^{+}_j. Thus every chain is equivalent to a triangularized one.

Proof. Induction. For n = 2, the assertion is obvious, but we note how it
illustrates the proof in general. We just have C_1 = (b, c) with numbers b, c.
We have γ_1 = 1 and we write b = d b_1, c = d c_1 with (b_1, c_1) relatively prime.
Then we can complete a first column {}^t(−c_1, b_1) to an element of SL_2(Z) to
complete the proof. Now by induction, suppose n ≥ 3. There exist β_2, ..., β_n
with β_j ∈ Γ_j such that the first column of C_j β_{j+1}^{-1} is 0 for j = 1, ..., n − 1.
Then β_j C_j β_{j+1}^{-1} also has first column equal to 0, and this also holds for j = 1.
Hence without loss of generality, we may assume that C_j has first column
equal to 0, that is

    C_j = \begin{pmatrix} 0 & * \cdots * \\ \vdots & H_{j-1} \\ 0 & \end{pmatrix}
    with H_{j-1} ∈ Z^{j-1,j} .

By induction, there exists η_{j-1} ∈ Γ_{j-1} (j = 2, ..., n) such that

    η_{j-1} H_{j-1} η_j^{-1} = \begin{pmatrix} 0 & * & \cdots & * \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & * \end{pmatrix}

where the matrix on the right has first column 0, and the rest upper triangular.
We let

    γ_j = \begin{pmatrix} 1 & 0 \\ 0 & η_{j-1} \end{pmatrix}   for j = 1, ..., n .

Then γ_j ∈ Γ_j and matrix multiplication shows that

    γ_j C_j γ_{j+1}^{-1} = \begin{pmatrix} 1 & 0 \\ 0 & η_{j-1} \end{pmatrix}
                           \begin{pmatrix} 0 & * \\ 0 & H_{j-1} \end{pmatrix}
                           \begin{pmatrix} 1 & 0 \\ 0 & η_j^{-1} \end{pmatrix}
                         = \begin{pmatrix} 0 & * \\ 0 & η_{j-1} H_{j-1} η_j^{-1} \end{pmatrix} .

This last matrix has the desired form (0, T_j), thereby concluding the proof.

The next lemma will give a refinement by prescribing representatives even


further.
Lemma 1.7. For each coset of Tn \Γn , Tn−1 \∆n−1 , . . . , T1 \∆1 fix a coset rep-
resentative. To each sequence

(γ, Dn−1 , . . . , D1 )

whose components are among the fixed representatives, associate the chain

(γ, (0, Dn−1 ), . . . , (0, D1 )) .

Then this association gives a bijection from the set of representative sequences
to equivalence classes of chains, i.e. every chain is equivalent to exactly one
formed as above, with the fixed representatives.

Proof. By Lemma 1.6, every equivalence class has a representative

    (γ', (0, D'_{n-1}), ..., (0, D'_1))

with γ' ∈ Γ_n and D'_j ∈ Δ_j for j = n − 1, ..., 1. There is one element τ_n ∈ T_n
such that τ_n γ' is the fixed representative of the coset T_n γ'. Then we select the
unique τ_{n-1} such that if we put

    (0, D_{n-1}) = τ_{n-1}(0, D'_{n-1})τ_n^{-1}

then D_{n-1} is the fixed representative of the coset T_{n-1} D_{n-1}. We can then
continue by induction. This shows that the stated association maps bijectively
onto the families of equivalence classes and proves the lemma.

A chain (γ, Cn−1 , . . . , C1 ) is called primitive if all the matrices Cj , with


j = 1, . . . , n − 1, are primitive, that is, Cj can be completed to an ele-
ment of Γj+1 by an additional row. The property of being primitive de-
pends only on the equivalence class of the chain, namely if this property
holds for C then it holds for every chain equivalent to C. Furthermore, if
(γ, (0, Dn−1 ), . . . , (0, D1 )) is a triangularized representative of an equivalence

class, then it is primitive if and only if each Dj ∈ Γj . In the primitive case,


we can choose the fixed coset representatives of Tj \Γj (j = 1, . . . , n − 1) to be
the unit matrices Ij . The primitive chains of the form

(γ, (0, In−1 ), . . . , (0, I1 )) with γ ∈ Tn \Γn

will be called normalized primitive chains. Alternatively, one can select


a fixed set of representatives {γ} for Tn \Γn , and the primitive chains formed
with such γ are in bijection with the equivalence classes of all primitive chains.
Formally, we state the result:

Lemma 1.8. The map γ ↦ (γ, (0, I_{n-1}), ..., (0, I_1)) induces a bijection

    T_n\Γ_n → primitive equivalence classes of chains .

2 The ζQ Fudge Factor


It will be convenient to put out of the way certain straightforward computa-
tions giving rise to the fudge factor involving the Riemann zeta function, so
here goes. For a positive integer j we shall use the representatives of Tj \∆j
from Lemma 1.2. We let n ≥ 2.
Let {z_1, z_2, ...} be a sequence of complex variables. Let m ≥ n. On Pos_m
we define the Selberg power function q_z^{(n)} by the formula

    q_z^{(n)}(S) = \prod_{j=1}^{n} |Sub_j(S)|^{z_j}   with S ∈ Pos_m .

In particular, we may work with q_z^{(n-1)} on Pos_n, or also with q_z^{(n)} on Pos_n,
depending on circumstances. In any case, we see that we may also write

    q_z^{(n)} = d_n^{z_n} \cdots d_1^{z_1} ,

where d_j is the partial determinant character, namely

    d_j(S) = |Sub_j(S)| .

In the next lemma, we consider both interpretations of q_z. We shall look
at values

    q_z^{(n)}([(0, D)]S)

where D ∈ Δ_n is triangular, and S ∈ Pos_m. We note that this value is
independent of the coset T_n D of D with respect to the triangular matrices
with ±1 on the diagonal. We shall sum over such cosets. More precisely, let ϕ
be a T_n-invariant function on Pos_n. Under conditions of absolute convergence,
we define the Hecke-zeta operator on Pos_m by the formula
    HZ_n(ϕ) = \sum_{D ∈ T_n\Δ_n} ϕ ∘ [(0, D)] ,

that is, for S ∈ Pos_m,

    HZ_n(ϕ)(S) = \sum_{D ∈ T_n\Δ_n} ϕ([(0, D)]S) .

We consider what is essentially an eigenfunction condition:

EF HZ. There exists λ_{HZ}(ϕ) such that for all S ∈ Pos_m we have

    HZ_n(ϕ)(S) = λ_{HZ}(ϕ)\, ϕ(Sub_n S) .

Implicit in this definition is the assumption that the series involved converges
absolutely. The next lemma gives a first example.
For any positive integer n, we make the general definition of the Riemann
zeta fudge factor at level n,

    Φ_{Q,n}(z) = \prod_{i=1}^{n} ζ_Q\bigl(2(z_i + \ldots + z_n) − (n − i)\bigr) .

Lemma 2.1. Let S ∈ Pos_m. Then

    \sum_{D ∈ T_n\Δ_n} q^{(n)}_{-z}([(0, D)]S) = Φ_{Q,n}(z)\, q^{(n)}_{-z}(S) .

In other words,

    λ_{HZ}(q^{(n)}_{-z}) = Φ_{Q,n}(z) .

This relationship holds for Re(z_i + \ldots + z_n) > (n − i + 1)/2, i = 1, ..., n, which
is the domain of absolute convergence of the Hecke-zeta operator on q^{(n)}_{-z}.
Proof. Directly from the definition of q^{(n)}_{-z}, we find

(1)    q^{(n)}_{-z}([(0, D)]S) = \prod_{i=1}^{n} |[(0, I_i)(0, D)]S|^{-z_i}
                             = \prod_{i=1}^{n} |Sub_i(D)|^{-2z_i} |Sub_i(S)|^{-z_i}
                             = \prod_{i=1}^{n} (d_{n-i+1} \cdots d_n)^{-2z_i} \; q^{(n)}_{-z}(S) ,

where d1 , . . . , dn are the diagonal elements of D. Next we take the sum over all
integral non-singular triangular D, from the set of representatives of Lemma
1.2, so
 
d1 ... ∗
 .. .. ..  .
D= . . . 
0 ... dn
The sum over D can be replaced by a sum

    ∑_{d1 ,...,dn =1}^{∞} ∏_{k=1}^{n} dk^{k−1}

by Lemma 1.3. With the substitution k = n − i + 1, the factor of q_{−z}^{(n)}(S) in
(1) can thus be expressed as

      ∑_D ∏_{i=1}^{n} (dn−i+1 . . . dn )^{−2zi}
(2)     = ∑_{d1 =1}^{∞} · · · ∑_{dn =1}^{∞} ∏_{k=1}^{n} dk^{−2(zn−k+1 +...+zn )+k−1}
        = ΦQ,n (z)

after reverting to indexing by i instead of n − k + 1. This proves the lemma.
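Step (2) is a Fubini argument: the n-fold sum over diagonals factors into one-variable Dirichlet series, each a Riemann zeta value. The factorization can be illustrated numerically with truncated sums; the values of z and the cutoff N below are arbitrary sample choices, not part of the proof.

```python
# Illustration of step (2) in Lemma 2.1: the sum over diagonals
# (d1, ..., dn) factors into a product of one-variable sums, each a
# truncated Riemann zeta value.  Sample data only.
import math
from itertools import product

n = 3
z = [0.9, 1.1, 1.3]   # sample z with Re(zi + ... + zn) > (n - i + 1)/2
N = 50                # common truncation bound

def expo(k):
    # exponent of d_k in (2): -2(z_{n-k+1} + ... + z_n) + k - 1
    return -2.0 * sum(z[n - k:]) + (k - 1)

# left side: n-fold sum over the diagonal entries
lhs = sum(
    math.prod(d ** expo(k) for k, d in enumerate(ds, start=1))
    for ds in product(range(1, N + 1), repeat=n)
)

# right side: product of the truncated zeta values zeta(2(z_i+...+z_n) - (n-i))
rhs = 1.0
for k in range(1, n + 1):
    rhs *= sum(d ** expo(k) for d in range(1, N + 1))

assert abs(lhs - rhs) < 1e-9 * rhs
```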

Next we deal with a similar but more involved situation, for which we
make a general definition of the Riemann zeta fudge factors, namely

    ΦQ,j (z) = ΦQ,j (z1 , . . . , zj ) = ∏_{i=1}^{j} ζQ (2(zi + . . . + zj ) − (j − i))

and

    ΦQ^{(n)}(z1 , . . . , zn ) = ∏_{j=1}^{n} ΦQ,j (z) .

These products will occur as factors in relations among Eisenstein series later.
In the next lemma, we let {Dj } range over the representatives of Tj \∆j (j =
1, . . . , n) as given in Lemma 1.2. We let dνν^{(j)} denote the diagonal elements of
Dj , with the indexing j − k + 1 ≤ ν ≤ j, which will fit the indexing in
the literature. The indexing also fits our viewing Dj as a lower right square
submatrix.

Lemma 2.3.

    ∑_{Dn} · · · ∑_{D1} ∏_{k=1}^{n} ∏_{j=k}^{n} ∏_{ν=j−k+1}^{j} (dνν^{(j)})^{−2zk} = ΦQ^{(n)}(z)
        = ∏_{1≤i≤j≤n} ζQ (2(zi + . . . + zj ) − (j − i)) .

Proof. For a fixed index j, we consider the sum on the left over the
representatives {Dj }. The products inside the sum which are indexed by this value j
can then be written

    ∑_{Dj} ∏_{k=1}^{j} ∏_{ν=j−k+1}^{j} (dνν^{(j)})^{−2zk} .

This is precisely the term evaluated in (2), and seen to be equal to ΦQ,j (z).
Taking the product over j = 1, . . . , n concludes the proof of the lemma.

3 Eisenstein Series
Next we shall apply chains as in Sect. 1 to elements of Posn . Let Y ∈ Posn .
Let C be a chain, C = (γ, Cn−1 , . . . , C1 ). For each j = 1, . . . , n − 1 define

    Cj (Y ) = [Cj · · · Cn−1 γ]Y ,    Cn (Y ) = [γ]Y .

Thus Cj (Y ) = [Cj ]Cj+1 (Y ) for j = 1, . . . , n − 1.


Let z1 , . . . , zn−1 be n − 1 complex variables. We define the Selberg power
function qC = qC^{(n−1)} (depending on the chain) by the formula

    qC,z^{(n−1)}(Y ) = |Cn−1 (Y )|^{zn−1} · · · |C1 (Y )|^{z1} .

One may also define qC^{(n)} with one more variable, namely

    qC,z^{(n)}(Y ) = ∏_{j=1}^{n} |Cj (Y )|^{zj} .

Let C be equivalent to C′. Then by (2) or (3) of Sect. 1 we have

    C′j (Y ) = [γj ]Cj (Y )

with γj having determinant ±1, so |C′j (Y )| = |Cj (Y )|. It follows that

    qC′,z^{(n−1)}(Y ) = qC,z^{(n−1)}(Y ) ;

in other words, qC,z^{(n−1)} depends only on the equivalence class of C. Hence
the power function can be determined by using the representatives given by
Lemma 1.7.
As in Sect. 1, we let Tn be the group of integral upper triangular n × n
matrices with ±1 on the diagonal. We define the Selberg Eisenstein series

    ET,n^{(n−1)}(Y, z) = ∑_C qC,−z^{(n−1)}(Y ) ,

where the sum is taken over all equivalence classes of chains. We define the
primitive Selberg Eisenstein series by the same sum taken only over the
primitive equivalence classes, that is

    ET,n^{pr(n−1)}(Y, z) = ∑_{C primitive} qC,−z^{(n−1)}(Y ) .

Furthermore, from Lemma 1.8, we know that a complete system of
representatives for equivalence classes of primitive chains is given by

    (γ, (0, In−1 ), . . . , (0, I1 ))   with γ ∈ Tn \Γn .

If C has the representative starting with γ, then we may write

    qC,z (Y ) = qz ([γ]Y ) .

We may thus write the primitive Eisenstein series in the form

(1)   ET,n^{pr(n−1)}(Y, z) = ∑_{γ ∈ Tn\Γn} q_{−z}^{(n−1)}([γ]Y ) .

This is essentially the Eisenstein series we have defined previously, except that
we are summing mod Tn instead of mod ΓU . However, we note that for any
character ρ, and τ ∈ Tn , we have the invariance property

    ρ([τ ]Y ) = ρ(Y )   for all Y ∈ Posn .

Since (Tn : ΓU ) = 2^n, denoting the old Eisenstein series by EU^{pr}(Y, q−z ), we
get

(2)   EU^{pr}(Y, z) = 2^n ET^{pr}(Y, z) .

We recall explicitly that

    EU^{pr}(Y, ρ) = TrΓU\Γ (ρ)(Y ) = ∑_{γ ∈ ΓU\Γ} ρ([γ]Y ) .

To ease the formal manipulations with non-primitive series, we list
some relations. For given k = 1, . . . , n − 1 we consider the product

    (0, Dk ) · · · (0, Dn−1 ) = (0k,n−k , Tk )

where (γ, Dn−1 , . . . , D1 ) is a chain equivalent to C and Dj ∈ ∆j . Thus Tk is
a triangular k × k matrix. To determine the Eisenstein series more explicitly,
we may assume without loss of generality that

    C = (γ, (0, Dn−1 ), . . . , (0, D1 )) .

Then

(3)   Ck (Y ) = [(0, Tk )γ]Y = [Tk ][(0, Ik )γ]Y

and therefore

(4)   |Ck (Y )| = |Tk |^2 |Subk ([γ]Y )| .

Let tνν^{(k)} denote the diagonal elements of Tk . Then of course

(5)   |Tk |^2 = ∏_{ν=1}^{k} (tνν^{(k)})^2 .

These product decompositions allow us to give a product expression for E in
terms of E^{pr} and the Riemann zeta function via the formula

(6)   qC,−z^{(n−1)}(Y ) = ∏_{k=1}^{n−1} |Ck (Y )|^{−zk}
                        = ∏_{k=1}^{n−1} |([γ]Y )k |^{−zk} ∏_{j=k}^{n−1} ∏_{ν=j−k+1}^{j} (dνν^{(j)})^{−2zk} ,

where dνν^{(j)} are the diagonal elements of Dj .
Theorem 3.1. The Eisenstein series EU,n^{(n−1)}(Y, z) converges absolutely for
Re(zj ) > 1 (j = 1, . . . , n − 1) and satisfies the relation

    EU,n^{(n−1)}(Y, z) = ΦQ^{(n−1)}(z1 , . . . , zn−1 ) EU^{pr(n−1)}(Y, z) .

Proof. Both the relation and the convergence follow from (6) and Lemma 2.3
applied to n − 1 instead of n, and Theorem 2.2 of Chap. 7.

Next, we have identities concerning the behavior of the Eisenstein series
under the star involution. Recall that for any function ϕ on Posn , we define

    ϕ∗ (Y ) = ϕ([ω]Y^{−1}) = ϕ(ωY^{−1}ω) .

Proposition 3.2. Let ϕ be any U -invariant function such that its ΓU \Γ-trace
converges absolutely. Then

    (TrΓU\Γ ϕ)(Y^{−1}) = (TrΓU\Γ ϕ∗ )(Y ) .

In particular, if ρ is a left character, then

    EU^{pr}(Y^{−1}, ρ) = EU^{pr}(Y, ρ∗ ) .

If {γ} is a family of coset representatives of ΓU \Γ, then {ω ᵗγ^{−1}} is also such
a family. Similarly for representatives of T \Γ.
Proof. As to the second statement, write Γ = ∪ ΓU γ. Let ΓŪ be the lower
triangular subgroup. Then

    Γ = ∪_γ ᵗγ ΓŪ = ∪ ΓŪ ᵗγ^{−1}     (taking the inverse)
      = ∪ ωΓŪ ω ω ᵗγ^{−1}           (because Γ = ωΓ and ω^2 = I)
      = ∪ ΓU ω ᵗγ^{−1}              (because ωΓŪ ω = ΓU ) .

This proves the second statement. Then the first formula comes out, namely:

    TrΓU\Γ ϕ(Y^{−1}) = ∑_{γ ∈ ΓU\Γ} ϕ([γ]Y^{−1})
                     = ∑_γ ϕ(γ Y^{−1} ᵗγ)
                     = ∑_γ ϕ∗ (ω(ᵗγ^{−1} Y γ^{−1})ω)
                     = TrΓU\Γ ϕ∗ (Y )

by the preceding result, thus proving the proposition.

The next two lemmas deal with similar identities, with sums taken over
cosets of matrices modulo the triangular group.

Lemma 3.3. Let ϕ be a Tn -invariant function such that the following sums
are absolutely convergent, for instance a left character on Posn . Let S ∈ Posn+1 . Then

    ∑_{A ∈ M∗(n+1,n)/Tn} ϕ∗ ((S[A])^{−1}) = ∑_{C ∈ Tn\M∗(n,n+1)} ϕ([C]S) .

Proof. Inserting an ω inside the left side and using the definition of ϕ∗,
together with ϕ∗∗ = ϕ, we see that the left side is equal to

    ∑_{A ∈ M∗(n+1,n)/Tn} ϕ(S[A][ω]) = ∑_A ϕ(S[Aω]) .

By definition, M∗ (n + 1, n) = ∪_A A Tn , with a family {A} of coset
representatives. Since M∗ (n + 1, n) = M∗ (n + 1, n)ω, we also have

    ∪_A A Tn = ∪_A Aω ωTn ω = ∪_{Aω} Aω Tn^{−}

where Tn^{−} is the lower integral triangular group. Thus the family {Aω} is a
family of coset representatives for M∗ (n + 1, n)/Tn^{−}. Writing

    S[Aω] = [ω ᵗA]S,

we see that we can sum over the transposed matrices, and thus that the desired
sum is equal to

    ∑_{C ∈ Tn\M∗(n,n+1)} ϕ([C]S) ,

which proves the lemma.

Instead of taking M∗ (n + 1, n)/Tn we could also take M∗ (n + 1, n)/ΓU .
Since Tn /ΓU has order 2^n, we see that we have a relation similar to (2), namely

(7)   ∑_{ΓU\M∗(n,n+1)} ϕ∗ ([C]S) = 2^n ∑_{Tn\M∗(n,n+1)} ϕ∗ ([C]S) .

Normalizing the series by taking sums mod Tn or mod ΓU only introduces the
simple factor 2^n each time.
We shall now develop further the series on the right in Lemma 3.3, by
using the eigenvalue property EF HZ stated in Sect. 2.

Lemma 3.4. Suppose that ϕ is Tn U -invariant on Posn , and satisfies condi-


tion EF HZ (eigenfunction of Hecke-zeta operator). Then on Posn+1 ,
X
ϕ ◦ [C] = λHZ (ϕ)TrTn+1 \Γn+1 (ϕ ◦ Subn ) .
C∈Tn \M∗ (n,n+1)

Proof. By the invariance assumption on ϕ, we can use the fibration of Lemma


1.5, and write the sum on the left evaluated at S ∈ Posn+1 as
X X
ϕ([(0, D)][γ]S) .
γ∈Tn+1 \Γn+1 D∈Tn \∆n

Then the inner sum is just the Hecke operator of ϕ, when evaluated at
Subn [γ]S. The result then falls out.
In particular, we may apply the lemma to the case when ϕ = q_{−z}^{(n)}, and we
obtain:

Corollary 3.5. Let S ∈ Posn+1 . Then

    ∑_{C ∈ Tn\M∗(n,n+1)} q_{−z}^{(n)}([C]S) = ΦQ,n (z) ET,n+1^{pr(n)}(S, q_{−z}^{(n)}) .

Proof. Special case of Lemma 3.4, after applying Lemma 2.1 which determines
the eigenvalue of the Hecke-zeta operator.

4 Adjointness and the ΓU \Γ-trace

We shall use differential operators introduced in Chap. 6. First, we observe
that for c > 0, Y ∈ Posn , B ∈ Symn we have by direct computation

(1)   |∂/∂Y | e^{−c tr(BY )} = (−c)^n |B| e^{−c tr(BY )} .

In particular, the above expression vanishes if B is singular. In the
applications, B will be semipositive, and the effect of applying |∂/∂Y | will therefore
be to eliminate such a term when B has rank < n.
As in Chap. 6 let the (first and second) regularizing invariant
differential operators be

(2)   Q = Qn = |Y | |∂/∂Y |   and   D = Dn = |Y |^{−k} Q̃n |Y |^{k} Qn .

Throughout we put k = (n + 1)/2 and D = Dn if we don't need to mention
n. We recall that

(3)   D̃n = |Y |^k Dn |Y |^{−k} = Q̃ |Y |^k Q |Y |^{−k} .

For S ∈ Posn+1 we let

    θ(S, Y ) = ∑_A e^{−π tr(S[A]Y )}

where the sum is taken over A ∈ Z^{n+1,n}. This is the standard theta series. We
can differentiate term by term. By (1) and the subsequent remark, we note
that

    DY θ(S, Y ) = ∑_{rk(A)=n} DY e^{−π tr(S[A]Y )} = ∑_{rk(A)=n} βA,S (Y ) e^{−π tr(S[A]Y )} ,

where βA,S (S being now fixed) is a function of Y with only polynomial
growth, and so not affecting the convergence of the series. Although its
coefficients are complicated, there is one simplifying effect of having applied the
differential operator D, namely we sum only over the matrices A of rank n.
Thus we abbreviate as before, and for this section, we let:

    M∗ = M∗ (n + 1, n) = subset of elements in Z^{(n+1)×n} of rank n .

Then the sum expressing DY θ(S, Y ) is taken over A ∈ M∗ .


Note that both θ and Dθ are functions of two variables, and thus will be
viewed as kernels, which induce integral operators by convolution, provided
they are applied to functions for which the convolution integral is absolutely
convergent.

We recall the functional equation for θ,

(4) θ(S −1 , Y −1 ) = |S|n/2 |Y |(n+1)/2 θ(S, Y ) .

From (3), we then see that Dθ satisfies the same functional equation, that is

(5) (Dθ)(S −1 , Y −1 ) = |S|n/2 |Y |(n+1)/2 (Dθ)(S, Y ) .

Here we have used the special value k = (n + 1)/2.
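In the first case n = 1 (so S ∈ Pos2 and Y = y > 0), the functional equation (4) is the classical two-dimensional theta transformation, and it can be checked numerically with a truncated lattice sum. The matrix S and the value of y below are arbitrary sample choices.

```python
# Numerical check of the theta functional equation (4) for n = 1:
#   θ(S^{-1}, y^{-1}) = |S|^{1/2} y θ(S, y),  S ∈ Pos_2, y > 0,
# with θ(S, y) = Σ_{A ∈ Z^2} exp(-π y A^t S A), truncated to |a_i| <= M.
import math

def theta(S, y, M=40):
    total = 0.0
    for a in range(-M, M + 1):
        for b in range(-M, M + 1):
            q = S[0][0] * a * a + 2 * S[0][1] * a * b + S[1][1] * b * b
            total += math.exp(-math.pi * y * q)
    return total

S = [[2.0, 0.3], [0.3, 1.0]]           # sample positive definite matrix
detS = S[0][0] * S[1][1] - S[0][1] ** 2
Sinv = [[S[1][1] / detS, -S[0][1] / detS],
        [-S[0][1] / detS, S[0][0] / detS]]
y = 1.3

lhs = theta(Sinv, 1.0 / y)             # θ(S^{-1}, Y^{-1})
rhs = detS ** 0.5 * y * theta(S, y)    # |S|^{n/2} |Y|^{(n+1)/2} θ(S, Y), n = 1
assert abs(lhs - rhs) < 1e-8 * rhs
```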


We shall now derive an adjoint relation in the present context. For a U -
invariant function ϕ on Posn , we recall the ΓU \Γ-trace, defined by

    TrΓU\Γ (ϕ)(Y ) = ∑_{γ ∈ ΓU\Γ} ϕ([γ]Y ) .

For functions ϕ such that the ΓU \Γ-trace and the following integral are
absolutely convergent, we can form the convolution on Γn \Posn :

    (Dθ ∗ TrΓU\Γ ϕ)(S) = ∫_{Γn\Posn} (DY θ)(S, Y ) TrΓU\Γ (ϕ)(Y ) dµn (Y ) .

We abbreviate as before

    P = Posn ,   Γ = Γn

to make certain computations formally clearer.


Lemma 4.1. For an arbitrary U -invariant function ϕ on Posn ensuring
absolute convergence of the series and integral, we have with k = (n + 1)/2:

    (Dθ ∗ TrΓU\Γ ϕ)(S)
      = 2(−1)^n ∑_{A ∈ M∗/ΓU} |πS[A]| ∫_P e^{−π tr(S[A]Y )} |Y |^{k+1} Q(ϕd^{−k})(Y ) dµ(Y ) .

Thus the convolution on the left is a sum of gamma transforms.

Proof. The proof is similar to those encountered before. We have:

    ∫_{Γ\P} (DY θ)(S, Y ) TrΓU\Γ ϕ(Y ) dµ(Y )
      = ∫_{Γ\P} ∑_{A ∈ M∗} DY e^{−π tr(S[A]Y )} TrΓU\Γ ϕ(Y ) dµ(Y )
      = ∑_{A ∈ M∗/Γ} ∫_{Γ\P} ∑_{γ ∈ Γ} DY e^{−π tr(S[Aγ]Y )} TrΓU\Γ ϕ(Y ) dµ(Y )
      = ∑_{A ∈ M∗/Γ} 2 ∫_P ∑_{γ ∈ ΓU\Γ} DY e^{−π tr(S[A]Y )} ϕ([γ]Y ) dµ(Y )
      = ∑_{A ∈ M∗/Γ} ∑_{γ ∈ ΓU\Γ} 2 ∫_P |Y |^{−k} Q̃Y (|Y |^{k+1} |∂/∂Y | e^{−π tr(S[A]Y )}) ϕ([γ]Y ) dµ(Y )
      = ∑_{A ∈ M∗/Γ} ∑_{γ ∈ ΓU\Γ} 2(−1)^n |πS[A]| ∫_P e^{−π tr(S[A]Y )} |Y |^{k+1} QY (|Y |^{−k} ϕ([γ]Y )) dµ(Y ) ,

using formula (2), and then transposing Q̃Y from the exponential term to the
ϕ ◦ [γ](Y ) term. Now we make the translation Y ↦ [γ^{−1}]Y in the integral
over P. Under this change, ΓU \Γ ↦ Γ/ΓU , and the expression is equal to

      ∑_{A ∈ M∗/Γ} ∑_{γ^{−1} ∈ Γ/ΓU} 2(−1)^n |πS[A]| ∫_P e^{−π tr(S[Aγ^{−1}]Y )} |Y |^{k+1} QY (|Y |^{−k} ϕ(Y )) dµ(Y ) .

The two sums over Γ/ΓU and over M∗ /Γ can be combined into a single sum
with A ∈ M∗ /ΓU , which yields the formula proving the lemma.

Looking at the integral expression on the right in the lemma, we see at
once that it is a gamma transform. Furthermore, if ϕd^{−k} is an eigenfunction of
Qn , then the integral can be further simplified, and this condition is satisfied
in the case of immediate interest when ϕ is a character. However, it continues
to be clearer to extract precisely what is being used of a more general function
ϕ, which amounts to eigenfunction properties in addition to Tn U -invariance
and the absolute convergence of the series and integral involved. Thus we list
these properties as follows.

EF Q. The function ϕd^{−(n+1)/2} is an eigenfunction of Qn .
EF Γ. The function ϕd is an eigenfunction of the gamma transform, it
      being assumed that the integral defining this transform converges
      absolutely.

We use λ to denote eigenvalues. Specifically, let D be an invariant
differential operator. Let ϕ be a D-eigenfunction. We let λD (ϕ) be the eigenvalue,
so that

    Dϕ = λD (ϕ)ϕ .

Similarly, we have the integral gamma operator, and for an eigenfunction ϕ,
we let

    λΓ (ϕ) = Γn (ϕ)   so that   Γ#ϕ = λΓ (ϕ)ϕ .

In addition, we define

    Λn (ϕ) = (−1)^n λQ (ϕd^{−(n+1)/2}) λΓ (ϕd) .
Theorem 4.2. Assume that ϕ is Tn U -invariant and satisfies the two
properties EF Q and EF Γ. Then for S ∈ Posn+1 , under conditions of absolute
convergence,

    (Dθ ∗ TrΓU\Γ ϕ)(S) = 2Λn (ϕ) ∑_{A ∈ M∗(n+1,n)/ΓU} ϕ((πS[A])^{−1}) .

Proof. By using the eigenfunction assumptions on the expression being
summed on the right side of the equality in Lemma 4.1, and again setting
k = (n + 1)/2, we obtain:

    |πS[A]| ∫_P e^{−π tr(S[A]Y )} |Y |^{k+1} λQ (ϕd^{−k})(ϕd^{−k})(Y ) dµ(Y )
      = λQ (ϕd^{−k}) |πS[A]| ∫_P e^{−π tr(S[A]Y )} (ϕd)(Y ) dµ(Y )
      = λQ (ϕd^{−k}) |πS[A]| λΓ (ϕd)(ϕd)((πS[A])^{−1})

by definition of the gamma transform and an eigenvalue, cf. Chap. 3,
Proposition 2.2,

      = λQ (ϕd^{−k}) λΓ (ϕd) ϕ((πS[A])^{−1})

because the determinant cancels. This proves the theorem.
Theorem 4.3. Let ϕ be Tn U -invariant, satisfying EF Q, EF Γ, and EF HZ.
Then for S ∈ Posn+1 , when the series and integral are absolutely convergent,

    (Dθ ∗ TrΓU\Γ ϕ∗ )(S) = Λn (ϕ∗ ) λHZ (ϕ) TrΓUn+1\Γn+1 (ϕ ◦ Subn )(πS) .

Proof. We apply Theorem 4.2 to ϕ∗ instead of ϕ. The sum in Theorem 4.2
can be further simplified as follows:

    ∑_{A ∈ M∗(n+1,n)/ΓU} ϕ∗ ((πS[A])^{−1}) = 2^n ∑_{A ∈ M∗(n+1,n)/Tn} ϕ∗ ((πS[A])^{−1})
                                          = 2^n λHZ (ϕ) ET^{pr}(πS, ϕ ◦ Subn )

by Lemmas 3.3 and 3.4. The Eisenstein series here is on Posn+1 , and going
back to ΓUn+1 instead of Tn+1 introduces the factor 1/2^{n+1}, which multiplied
by 2^n leaves 1/2. This 1/2 cancels the factor 2 occurring in Theorem 4.2. The
relationship asserted in the theorem then falls out, thus concluding the proof.

Corollary 4.4. Let D = Dn be the invariant differential operator defined at
the beginning of the section. Let ϕ be homogeneous of degree w, for instance
a character. Then for S ∈ Posn+1 ,

    (Dθ ∗ TrΓU\Γn ϕ∗ )(S) = π^w Λn (ϕ∗ ) λHZ (ϕ) TrΓUn+1\Γn+1 (ϕ ◦ Subn )(S) .

Proof. We just pull out the homogeneity factor from inside the expression in
Theorem 4.3.

Remark. Immediately from the definitions, one sees that for the Selberg
power character, we have

    deg qz^{(n)} = wn (z) = ∑_{j=1}^{n} j zj .

This character may be viewed as a character on Posm for any m ≥ n. The
degree is the same in all cases. For application to the Eisenstein series, we
use of course q_{−z}^{(n)}, which has degree −wn (z) = wn (−z). Actually, in the next
section we shall change variables, and get another expression for the degree
in terms of the new variables.
The inductive formula of this section stems from the ideas presented by
Maass [Maa 71], pp. 268–272, but we have seen how it is valid for much
more general functions ϕ besides characters. Maass works only with the spe-
cial characters coming from the Selberg power function, and normalizes these
characters with s-variables. We carry out this normalization in the next sec-
tion, as a preliminary to Maass’ proof of the functional equation.

5 Changing to the (s1 , . . . , sn )-variables

We recall the Selberg power function of Chap. 3, Sect. 1, expressed in terms
of two sets of complex variables

    z = (z1 , . . . , zn−1 )   and   s = (s1 , . . . , sn ) ,

namely

(1)   |Y |^{sn +(n−1)/4} q_{−z}^{(n−1)}(Y ) = hs (Y ) = ∏_{i=1}^{n} (tn−i+1 )^{2si +i−(n+1)/2} ,

where

    zj = sj+1 − sj + 1/2   for j = 1, . . . , n − 1,

or also

(2)   q_{−z}^{(n−1)}(Y ) = |Y |^{−sn −(n−1)/4} hs (Y ) .
To determine the degree of homogeneity of hs , we note that

    Y ↦ cY (c > 0)   corresponds to   t ↦ c^{1/2} t .

Then we find immediately:

(3)   deg hs = ∑_{i=1}^{n} si   and   deg hs∗ = −deg hs .
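Behind (3) is a short exponent count: under t ↦ c^{1/2} t, the i-th factor in (1) contributes (1/2)(2si + i − (n+1)/2) = si + i/2 − (n+1)/4 to the power of c, and the shifts i/2 − (n+1)/4 cancel in the sum. A numerical sketch (the values of s are arbitrary samples):

```python
# Exponent count behind (3): under t -> c^{1/2} t, factor i of (1)
# contributes s_i + i/2 - (n+1)/4 to the power of c; summing over i
# gives deg h_s = s_1 + ... + s_n.  Sample values of s only.
import random

random.seed(0)
n = 5
s = [random.uniform(-2, 2) for _ in range(n)]

deg = sum(s[i - 1] + i / 2 - (n + 1) / 4 for i in range(1, n + 1))
assert abs(deg - sum(s)) < 1e-12

# and for s* = (-s_n, ..., -s_1):  deg h_{s*} = -deg h_s
s_star = [-x for x in reversed(s)]
deg_star = sum(s_star[i - 1] + i / 2 - (n + 1) / 4 for i in range(1, n + 1))
assert abs(deg_star + deg) < 1e-12
```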

Throughout this section, we fix the notation. We let Γ = Γn , and

    ζ^{pr}(Y, s) = EU^{pr}(Y, q_{−z}^{(n−1)}) = TrΓU\Γ q_{−z}^{(n−1)}(Y )
                = |Y |^{−sn −(n−1)/4} TrΓU\Γ hs (Y ) .

Proposition 5.1. We have, in the appropriate domain (see the remark
below):

    ζ^{pr}(Y^{−1}, s) = |Y |^{sn −s1 +(n−1)/2} ζ^{pr}(Y, s∗ ) ,

where s∗ = (−sn , . . . , −s1 ), so s∗j = −sn−j+1 .

Proof. We have

    ζ^{pr}(Y^{−1}, s) = |Y |^{sn +(n−1)/4} (TrΓU\Γ hs )(Y^{−1})   by (2)
                     = |Y |^{sn +(n−1)/4} TrΓU\Γ h∗s (Y )        by Prop. 3.2
                     = |Y |^{sn +(n−1)/4} TrΓU\Γ hs∗ (Y )        by Chap. 3, Prop. 1.7
                     = |Y |^{sn −s1 +(n−1)/2} ζ^{pr}(Y, s∗ )     by (2)

because TrΓU\Γ hs∗ (Y ) = |Y |^{−s1 +(n−1)/4} ζ^{pr}(Y, s∗ ) by (2). This concludes the
proof.

Remark. The domain of absolute convergence of the Eisenstein series
EU^{pr}(Y, q_{−z}^{(n−1)}) was proved to be Re(zj ) > 1 for j = 1, . . . , n − 1, that is

    Re(sj+1 − sj + 1/2) > 1   for j = 1, . . . , n − 1 .

From the relation s∗k = −sn−k+1 we see that

    s∗k+1 − s∗k + 1/2 = sj − sj−1 + 1/2   with j = n − k + 1 .

Thus the domains of convergence in terms of the s∗ and s variables are “the
same” half planes.

We shall meet a systematic pattern as follows. Let ψ = ψ(u) be a function
of one variable. For n ≥ 2, we define

    ψn (s) = ψn (s1 , . . . , sn ) = ∏_{i=1}^{n−1} ψ(sn − si + 1/2)

    ψ^{(n)}(s) = ∏_{j=2}^{n} ψj (s1 , . . . , sj ) .

We note the completely general fact:

Lemma 5.2. ψ^{(n)}(s∗ ) = ψ^{(n)}(s).

This relation is independent of the function ψ, and is trivially verified from
the definition of ψ^{(n)}. It will apply to three important special cases below. We
start with the function ψ(u) = ζQ (2u), where ζQ is the Riemann zeta function.
Then we use a special letter ZQ and define

    ZQ,n (s) = ∏_{i=1}^{n−1} ζQ (2(sn − si + 1/2))

    ZQ^{(n)}(s) = ∏_{1≤i<j≤n} ζQ (2(sj − si + 1/2)) .
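Lemma 5.2 amounts to the observation that ψ^{(n)}(s) = ∏_{i<j} ψ(sj − si + 1/2), and that s ↦ s∗ permutes the differences sj − si with i < j among themselves. This is easy to confirm numerically for any test function ψ (the ψ and s below are arbitrary sample choices):

```python
# Numerical check of Lemma 5.2: ψ^{(n)}(s*) = ψ^{(n)}(s), where
# ψ_j(s) = Π_{i=1}^{j-1} ψ(s_j - s_i + 1/2) and ψ^{(n)} = Π_{j=2}^{n} ψ_j.
# ψ and s are sample choices; the identity holds for any ψ.
import math
import random

def psi(u):
    return math.cos(u) + 3.0         # arbitrary test function, bounded away from 0

def psi_upper(s):                     # ψ^{(n)}(s) = Π_{i<j} ψ(s_j - s_i + 1/2)
    n = len(s)
    out = 1.0
    for j in range(2, n + 1):
        for i in range(1, j):
            out *= psi(s[j - 1] - s[i - 1] + 0.5)
    return out

random.seed(1)
s = [random.uniform(-1, 1) for _ in range(6)]
s_star = [-x for x in reversed(s)]    # s*_j = -s_{n-j+1}
assert abs(psi_upper(s_star) - psi_upper(s)) < 1e-9 * abs(psi_upper(s))
```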

Lemma 5.3. With ΦQ,n−1 as in Lemma 2.1, we have

    ΦQ,n−1 (z1 , . . . , zn−1 ) = ZQ,n (s1 , . . . , sn ) .

Proof. By definition,

    ΦQ,n−1 (z) = ∏_{i=1}^{n−1} ζQ (2(zi + . . . + zn−1 ) − (n − i − 1)) .

With the s-variables, we get a cancellation, namely

(4)   zi + . . . + zn−1 = sn − si + (n − i)/2 .

This proves the lemma.
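The telescoping identity (4), and the resulting match between the zeta arguments of ΦQ,n−1 and ZQ,n , can be checked mechanically (the values of s are arbitrary samples):

```python
# Check of the cancellation (4) behind Lemma 5.3: with
# z_j = s_{j+1} - s_j + 1/2, the telescoping sum gives
#   z_i + ... + z_{n-1} = s_n - s_i + (n - i)/2,
# so that 2(z_i + ... + z_{n-1}) - (n - i - 1) = 2(s_n - s_i + 1/2).
# Sample s values only.
import random

random.seed(2)
n = 6
s = [random.uniform(-1, 1) for _ in range(n)]
z = [s[j] - s[j - 1] + 0.5 for j in range(1, n)]   # z_j = s_{j+1} - s_j + 1/2

for i in range(1, n):
    tail = sum(z[i - 1:])                          # z_i + ... + z_{n-1}
    assert abs(tail - (s[n - 1] - s[i - 1] + (n - i) / 2)) < 1e-12
    # zeta arguments of Phi_{Q,n-1} and Z_{Q,n} agree:
    assert abs((2 * tail - (n - i - 1)) - 2 * (s[n - 1] - s[i - 1] + 0.5)) < 1e-12
```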
The non-primitive Eisenstein series EU^{(n−1)}(Y, z) is defined to be the
product of the primitive Eisenstein series times ΦQ^{(n−1)}(z). Hence from the transfer
to the s-variables in Lemma 5.3 we have

(5)   ζ(Y, s) = EU^{(n−1)}(Y, z) = ZQ^{(n)}(s) ζ^{pr}(Y, s) .

Since ZQ^{(n)}(s∗ ) = ZQ^{(n)}(s) by Lemma 5.2, it follows that Proposition 5.1 is valid
if we replace the primitive Eisenstein series ζ^{pr}(Y, s) by ζ(Y, s).
In connection with using Posn+1 via Theorem 4.3 and Corollary 4.4, it is
natural to consider q_{−z}^{(n)} as well as q_{−z}^{(n−1)}.

Lemma 5.4. Put zn = sn+1 − sn + 1/2, and let ϕs,sn+1 be the character on
Posn defined by

    ϕs,sn+1 (Y ) = |Y |^{−sn+1 −(n+1)/4} hs (Y ) .

In other words, ϕs,sn+1 = d^{−sn+1 −(n+1)/4} hs . Then on Posn ,

    q_{−z}^{(n)} = ϕs,sn+1 .

Proof. By definition,

    q_{−z}^{(n)}(Y ) = |Y |^{−zn} q_{−z}^{(n−1)}(Y ) .

Substituting zn = sn+1 − sn + 1/2 and using (2) yields the desired relation.
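The exponent arithmetic in this proof can be spelled out: by (2), |Y|^{−zn} q_{−z}^{(n−1)}(Y) carries the determinant power −zn − sn − (n−1)/4, which equals −sn+1 − (n+1)/4 once zn = sn+1 − sn + 1/2, since 1/2 + (n−1)/4 = (n+1)/4. A mechanical check (sample values only):

```python
# Exponent check for Lemma 5.4: with z_n = s_{n+1} - s_n + 1/2,
#   -z_n + (-s_n - (n-1)/4) = -s_{n+1} - (n+1)/4,
# so |Y|^{-z_n} q^{(n-1)}_{-z}(Y) carries the determinant power of
# ϕ_{s, s_{n+1}}.  Sample values only.
import random

random.seed(3)
for n in range(2, 7):
    s_n, s_np1 = random.uniform(-1, 1), random.uniform(-1, 1)
    z_n = s_np1 - s_n + 0.5
    assert abs((-z_n - s_n - (n - 1) / 4) - (-s_np1 - (n + 1) / 4)) < 1e-12
```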


The Hecke-zeta eigenvalue is given by

(6)   λHZ (ϕs,sn+1 ) = ZQ,n+1 (s1 , . . . , sn+1 ) = ZQ,n+1 (s, sn+1 ) .

This is just the formulation of Lemma 2.1 in the (s, sn+1 ) variables.
Furthermore,

(7)   wn (z) = deg ϕs,sn+1 = ∑_{i=1}^{n} (si − sn+1 − (n + 1)/4) .

This is immediate from (3) and the homogeneity degree of the determinant.
We define various elementary functions from which we build others, and
relate them to eigenvalues found in the preceding section. We let

    g(u) = π^{−u} Γ(u)   and   F (u) = u(1 − u)g(u) .

These are standard fudge factors in one variable u. Following the previous
general pattern, we define

    gn (s) = gn (s1 , . . . , sn ) = ∏_{i=1}^{n−1} g(sn − si + 1/2)

    Fn (s) = Fn (s1 , . . . , sn ) = ∏_{i=1}^{n−1} F (sn − si + 1/2) .

Finally, we define

    g^{(n)}(s) = ∏_{j=1}^{n} gj (s)   and   F^{(n)}(s) = ∏_{j=1}^{n} Fj (s) .

These definitions follow the same pattern that we used with the fudge factor
involving the Riemann zeta function, i.e. ZQ,n (s) and ZQ^{(n)}(s). In particular,

    Fn+1 (s1 , . . . , sn+1 ) = F^{(n+1)}(s1 , . . . , sn+1 ) / F^{(n)}(s1 , . . . , sn )

and

    F^{(n+1)}(s, sn+1 ) = ∏_{j=1}^{n} Fj+1 (s1 , . . . , sj+1 ) .

The next lemma is the analogue of Lemma 5.3 for the fudge factor that we
are now dealing with.

Lemma 5.5. We have the explicit determination

    Fn+1 (s, sn+1 ) = π^{wn} Λn (ϕ∗s,sn+1 ) .

The exponent wn is the degree in (7), as a function of s1 , . . . , sn+1 .

Proof. This is a tedious verification.
We apply Corollary 4.4 to the character q_{−z}^{(n)} = ϕs,sn+1 . We note that

(8)   q_{−z}^{(n)∗} = ϕ∗s,sn+1 = d^{sn+1 +(n+1)/4} hs∗ .

Lemma 5.6. For Y ∈ Posn ,

    TrΓU\Γ (q_{−z}^{(n)∗})(Y ) = TrΓU\Γ (ϕ∗s,sn+1 )(Y ) = |Y |^{sn+1 −s1 +n/2} ζ^{pr}(Y, s∗ ) .

Proof. By definition of s∗ = (−sn , . . . , −s1 ) we have

    ζ^{pr}(Y, s∗ ) = |Y |^{s1 −(n−1)/4} TrΓU\Γ hs∗ (Y ) = TrΓU\Γ (d^{s1 −(n−1)/4} hs∗ )(Y ) .

Multiplying by d^{sn+1 −s1 +n/2} and using (8) concludes the proof.
For S ∈ Posn+1 , Γ = Γn , define

    ξ(S; s, sn+1 ) = (Dθ ∗ TrΓU\Γ (ϕ∗s,sn+1 ))(S) .

Thus by definition of the convolution and Lemma 5.6,

    ξ(S; s, sn+1 ) = ∫_{Γn\Pn} Dθ(S, Y ) |Y |^{sn+1 −s1 +n/2} ζ(Y, s∗ ) dµ(Y ) .

Let B be the domain defined by the inequalities

    Re(sj+1 − sj + 1/2) > 1   for j = 1, . . . , n − 1, and sn+1 arbitrary,

while B1 is defined by these inequalities together with

    Re(s1 − sn+1 + 1/2) > 1 .

Estimates in Chap. 7 show that the integral for ξ converges absolutely in the
domain B1 . In light of our definitions and Lemma 5.5, we may now reformulate
Theorem 4.3, or rather Corollary 4.4, as follows.
Theorem 5.7. In the domain B1 defined by these inequalities, we have

    ξ(S; s, sn+1 ) = Fn+1 (s, sn+1 ) ζ(S; s, sn+1 )
                   = [F^{(n+1)}(s1 , . . . , sn+1 ) / F^{(n)}(s1 , . . . , sn )] ζ(S; s, sn+1 ) .

6 Functional Equation: Invariance
under Cyclic Permutations

Here we follow Maass [Maa 71]. For the function ξ(S; s, sn+1 ) defined at the
end of the preceding section, we first have

Lemma 6.1. For S ∈ Posn+1 ,

    ξ(S^{−1}; s∗ , −sn+1 ) = |S|^{n/2} ξ(S; s, sn+1 ) .

Proof. This result is proved by the Riemann method. The integral over Γn \Pn
is decomposed into a sum

    ∫_{Γn\Pn} = ∫_{(Γn\Pn)(≥1)} + ∫_{(Γn\Pn)(≤1)} ,

where the parentheses (≥1) and (≤1) signify the subdomain where the
determinant is ≥ 1 resp. ≤ 1. On the second integral, we make the change of
variables Y ↦ Y^{−1}. Then letting Fn = Γn \Pn , we get:

(1)   ξ(S; s, sn+1 ) = ∫_{Fn(≥1)} { Dθ(S, Y ) |Y |^{sn+1 −s1 +n/2} ζ(Y, s∗ )
                        + Dθ(S, Y^{−1}) |Y |^{s1 −sn+1 −n/2} ζ(Y^{−1}, s∗ ) } dµ(Y ) .

On the other hand,

(2)   ξ(S^{−1}; s∗ , −sn+1 ) = ∫_{Fn(≥1)} { Dθ(S^{−1}, Y ) |Y |^{−sn+1 +sn +n/2} ζ(Y, s)
                               + Dθ(S^{−1}, Y^{−1}) |Y |^{sn+1 −sn −n/2} ζ(Y^{−1}, s) } dµ(Y ) .

We now use two previous functional equations. One is the functional equation
for the regularized theta function, namely Sect. 4, formulas (4) and (5), which
read:

    Dθ(S^{−1}, Y^{−1}) = |S|^{n/2} |Y |^{(n+1)/2} Dθ(S, Y )
    Dθ(S^{−1}, Y ) = |S|^{n/2} |Y |^{−(n+1)/2} Dθ(S, Y^{−1}) .

The other equation is stated in Proposition 5.1, which is valid with ζ(Y, s)
instead of ζ^{pr}(Y, s), because ZQ^{(n)}(s∗ ) = ZQ^{(n)}(s) is the same factor needed
to change the primitive Eisenstein series into the non-primitive one.
Applying this proposition and the functional equation for the theta function
shows directly that the two terms under the integral for ξ(S^{−1}; s∗ , −sn+1 ) are
changed precisely into the two terms which occur in the integral expression
for ξ(S; s, sn+1 ) multiplied by |S|^{n/2}. This concludes the proof.
Theorem 6.2. Let S ∈ Posn+1 and let

    η(S; s1 , . . . , sn+1 ) = F^{(n+1)}(s1 , . . . , sn+1 ) |S|^{sn+1} ζ(S; s1 , . . . , sn+1 ) .

Then η(S; s1 , . . . , sn+1 ) is invariant under a cyclic permutation of the
variables, that is

    η(S; s1 , . . . , sn+1 ) = F^{(n+1)}(sn+1 , s1 , . . . , sn ) |S|^{sn} ζ(S; sn+1 , s1 , . . . , sn ) .

Furthermore, η(S; s1 , . . . , sn+1 ) is holomorphic in the domain B.


Proof. By Theorem 5.7 and F^{(n)}(s∗ ) = F^{(n)}(s), we have

    ξ(S^{−1}; s∗ , −sn+1 ) = [F^{(n+1)}(s∗ , −sn+1 ) / F^{(n)}(s)] ζ(S^{−1}; s∗ , −sn+1 )
                          = [F^{(n+1)}(sn+1 , s) / F^{(n)}(s)] |S|^{−sn+1 +sn +n/2} ζ(S; sn+1 , s1 , . . . , sn )

by Proposition 5.1, valid in the domain Re(sj+1 − sj + 1/2) > 1 for each index
j = 1, . . . , n − 1, that is in the domain B. On the other hand,

    |S|^{n/2} ξ(S; s, sn+1 )
      = [F^{(n+1)}(s1 , . . . , sn , sn+1 ) / F^{(n)}(s1 , . . . , sn )] |S|^{n/2} ζ(S; s1 , . . . , sn , sn+1 ) .

Using the definition of η(S; s1 , . . . , sn+1 ) and cross multiplying, we apply
Lemma 6.1 to conclude the proof.
Note. The three essential ingredients in the above proof are:

EIS 1. For each integer n ≥ 3 there is a fudge factor F^{(n)}(s1 , . . . , sn ) such
       that for S ∈ Posn+1 we have

           ξ(S; s, sn+1 ) = [F^{(n+1)}(s1 , . . . , sn , sn+1 ) / F^{(n)}(s1 , . . . , sn )] ζ(S; s, sn+1 ) .

       Furthermore, F^{(n)}(s∗ ) = F^{(n)}(s) (invariance under s ↦ s∗ ).
       See Lemma 5.2 and Theorem 5.7.

EIS 2. ζ(Y^{−1}, s) = |Y |^{sn −s1 +(n−1)/2} ζ(Y, s∗ ) in the domain

           Re(sj+1 − sj + 1/2) > 1 .

       Ref: Proposition 5.1 and Lemma 5.2.

EIS 3. ξ(S^{−1}; s∗ , −sn+1 ) = |S|^{n/2} ξ(S; s, sn+1 ) .
       Ref: Lemma 6.1.
Finally, we prove the analytic continuation over all of C^{n+1} by means of
a theorem in several complex variables. That is, we want:

Theorem 6.3. The function η(S; s1 , . . . , sn+1 ) is holomorphic on all of C^{n+1}.

Proof. We reduce the result to a basic theorem in several complex variables.
Let σ be the cyclic permutation

    σ : (s1 , . . . , sn+1 ) ↦ (sn+1 , s1 , . . . , sn ) .

By Theorem 6.2, we know that η is holomorphic in the domain

    D = ∪_{j=1}^{n} σ^j B ⊂ C^{n+1} .

Let pr_{R^{n+1}}(D) = DR be the projection on the real part. Since the inequalities
defining D involve only the real part, it follows that

    D = DR + iR^{n+1} ,

so D is what is commonly called a tube domain. By Theorem 2.5.10 in
Hörmander [Hör 66], it follows that η is holomorphic on the convex closure
of the tube. But DR contains a straight line parallel to the (n + 1)-th axis
of R^{n+1}. This line can be mapped onto a line parallel to the j-th axis of R^{n+1}
for each j, by powers of σ. The convex closure of these lines in the real part
R^{n+1} is all of R^{n+1}, and by the theorem in Hörmander, it follows that the
convex closure of D is C^{n+1}. This concludes the proof.

7 Invariance under All Permutations

In light of the theorems in Sect. 6, all that remains to be done is to prove the
invariance of the function

    η(Y ; s1 , . . . , sn ) = F^{(n)}(s1 , . . . , sn ) |Y |^{sn} ζ(Y ; s1 , . . . , sn )

under a transposition, and indeed just under the transposition of the special
variables s1 and s2 . Then we shall obtain Selberg's theorem:

Theorem 7.1. For Y ∈ Posn , the function η(Y ; s1 , . . . , sn ) is invariant
under all permutations of the variables.

Proof. The following proof follows Selberg's lines and is the one given in
Maass [Maa 71]. We have ζ(Y ; s) = EU^{(n−1)}(Y, z) (the non-primitive
Eisenstein series). The essential part of the proof will be to show that the function

    π^{−z1} Γ(z1 ) EU^{(n−1)}(Y, z) = π^{−(s2 −s1 +1/2)} Γ(s2 − s1 + 1/2) ζ(Y, s)

is invariant under the transposition of s1 and s2 . Before proving this, we show
how it implies the theorem. As before, let

    g(u) = π^{−u} Γ(u) .

Then it follows that

    ∏_{1≤i<j≤n} g(sj − si + 1/2) |Y |^{sn} ζ(Y, s) = g^{(n)}(s) |Y |^{sn} ζ(Y, s)

is invariant under the transposition of s1 and s2 . By Theorem 6.2 we
conclude that this function is invariant under all permutations of (s1 , . . . , sn ).
Theorem 7.1 follows by the factorization of F^{(n)}(s1 , . . . , sn ) given in Sect. 5.
To prove the invariance of g(z1 ) EU^{(n−1)}(Y, z) under the transposition of s1
and s2 , we go back to the definition of the Eisenstein series in terms of the
z-variables, and we write this definition in the form

    ET^{(n−1)}(Y, z) = ∑_{(Cn ,...,C2 )} ∏_{j=2}^{n−1} |Cj (Y )|^{−zj} ET^{(1)}(C2 (Y ), z1 )   with Cn = γ .

The sum over (Cn , . . . , C2 ) is over equivalence classes, whose definition for
such truncated sequences is the same as for (Cn , . . . , C1 ), except for
disregarding the condition on C1 . The theorem was proved in Chap. 5, Theorem
4.1 in the case n = 2, so we assume n ≥ 3. We write the Eisenstein series with
one further splitting, that is

    ET^{(n−1)}(Y, z) = ∑_{(Cn ,...,C2 )} ∏_{j=3}^{n−1} |Cj (Y )|^{−zj} |C2 (Y )|^{−z2} ET^{(1)}(C2 (Y ), z1 ) .

Although the notation with the chains was the clearest previously, it now
becomes a little cumbersome, so we abbreviate

    Cj (Y ) = Yj   for j = 1, . . . , n .

Then we rewrite the above expressions for the Eisenstein series in the form

(1)   ET^{(n−1)}(Y, z) = ∑_{(Yn ,...,Y2 )} ∏_{j=2}^{n−1} |Yj |^{−zj} ET^{(1)}(Y2 , z1 )

(2)                    = ∑_{(Yn ,...,Y2 )} ∏_{j=3}^{n−1} |Yj |^{−zj} |Y2 |^{−z2} ET^{(1)}(Y2 , z1 ) .

With the change of variables zj = sj+1 − sj + 1/2, we can also write

    |Y2 |^{−z2} ET^{(1)}(Y2 , z1 ) = 2^2 |Y2 |^{s2 −s3 +1/2} ζ(Y2 ; s1 , s2 ) .

By Theorem 4.1 of Chap. 5, the function

    η2 (Y ; s1 , s2 ) = π^{−z1} Γ(z1 ) |Y |^{s2} ζ(Y ; s1 , s2 )

is invariant under permutation of s1 and s2 , because in the notation of this
reference,

    E(Y2 , z) = E^{(1)}(Y2 , z) = ζ(Y2 ; s1 , s2 ) .
Thus formally, we conclude that

    π^{−z1} Γ(z1 ) ζ(Y ; s1 , . . . , sn ) = π^{−z1} Γ(z1 ) E^{(n−1)}(Y, z)

is invariant under the permutation of s1 and s2 . The only thing to watch for
is that this permutation can be done while preserving the convergence of the
series expression (2) for E^{(n−1)}(Y, z). Thus one has to select an appropriate
domain of absolute convergence, so that all the above expressions make sense.
Maass does this as follows. We start with the inductive lowest dimensional
piece,

    Λ2 (Y, z1 ) = π^{−z1} Γ(z1 ) E^{(1)}(Y, z1 ),

which is the first case studied in Chap. 5, Sect. 3. We gave an estimate for
this function in the strip Str(−2, 3), that is

    −2 < Re(z1 ) < 3 ,

away from 0 and 1, specifically outside the discs of radius 1 centered at 0 and
1, as in Corollary 3.8 of Chap. 5.
Next, we consider the series

(3)   π^{−z1} Γ(z1 ) E^{(n−1)}(Y, z) = ∑_{(Yn ,...,Y2 )} ∏_{j=2}^{n−1} |Yj |^{−zj} Λ2 (Y2 , z1 ) .

By Theorem 3.1, mostly Theorem 2.2 of Chap. 7, the series in (3) converges
absolutely for Re(zj ) > 1, j = 1, . . . , n − 1. Similarly, by Chap. 7, Theorem
3.1, we also know that the series

(4)   ∑_{(Yn ,...,Y2 )} ∏_{j=2}^{n−1} |Yj |^{−zj}

converges absolutely for Re(zj ) > 1 (3 ≤ j ≤ n − 1) and Re(z2 ) > 3/2. By
Chap. 5, Corollary 3.8 (put there for the present purpose), adding up the

power of |Y2 |, in the above strip outside the unit discs around 0, 1, it follows
that the Eisenstein series from (1) converges absolutely in the domain

    D1 = points in C^n with z1 in the strip Str(−2, 3) outside the discs of
         radius 1 around 0, 1; and Re(z2 ) > 7/2; Re(zj ) > 1 for j = 3, . . . , n − 1 .

Let

    D2 = subdomain of D1 satisfying the further inequality Re(z2 ) > 6.
In terms of the variables z, we want to prove the functional equation

        \sum_{(Y_n,\dots,Y_2)} \prod_{j=3}^{n-1} |Y_j|^{-z_j}\; |Y_2|^{-z_2}\,\Lambda_2(Y_2, z_1)
            = \sum_{(Y_n,\dots,Y_2)} \prod_{j=3}^{n-1} |Y_j|^{-z_j}\; |Y_2|^{-z_1-z_2+1/2}\,\Lambda_2(Y_2,\, 1 - z_1) .
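Term by term, this identity is just the rank-one functional equation applied inside the sum; schematically (our gloss, with the normalization read off from the exponents displayed above):

```latex
% rank-one functional equation for the completed series:
\Lambda_2(Y_2, z_1) \;=\; |Y_2|^{\,1/2 - z_1}\,\Lambda_2(Y_2,\, 1 - z_1),
% hence each summand transforms as
|Y_2|^{-z_2}\,\Lambda_2(Y_2, z_1)
  \;=\; |Y_2|^{-z_1 - z_2 + 1/2}\,\Lambda_2(Y_2,\, 1 - z_1),
% while the factors |Y_j|^{-z_j}, j \ge 3, are untouched.
```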

The series on both sides are convergent in D2 , so the formal argument is now
justified, and we have proved that

        \pi^{-z_1}\Gamma(z_1)\,E^{(n-1)}(Y, z)

is invariant under the equivalent transformations:

        z1 ↦ 1 − z1 ,   z2 ↦ z1 + z2 − 1/2 ,   zj ↦ zj (j = 3, . . . , n − 1),   sn ↦ sn ,

or
transposition of s1 and s2 .
This concludes the proof of Theorem 7.1.
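As a consistency check (ours, not the book's), the substitution on the z-variables really is an involution, as it must be if it corresponds to the transposition of s1 and s2. A quick symbolic verification with sympy:

```python
# Verify that z1 -> 1 - z1, z2 -> z1 + z2 - 1/2 squares to the identity,
# as the transposition of s1 and s2 must.
from sympy import symbols, Rational, simplify

z1, z2 = symbols('z1 z2')

def T(w1, w2):
    # the substitution appearing in the functional equation
    return (1 - w1, w1 + w2 - Rational(1, 2))

once = T(z1, z2)
twice = T(*once)  # applying T a second time returns (z1, z2)
```

(The components zj, j ≥ 3, are fixed and need no checking.)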

Remark. Just as Maass gave convergence criteria for Eisenstein series with
more general parabolic groups [Maa 71], Sect. 7, he also gave the analytic
continuation and functional equation for these more general groups at the
end of Sect. 17, pp. 279–299.
Bibliography

[Ba 95] BALLMANN, W.: Lectures on Spaces of Nonpositive Curvature.
        Birkhäuser (1995).
[Be 83] BENGSTON, T.: Bessel functions on Pn . Pacific J. Math. 108 (1983)
19–29.
[Boc 52] BOCHNER, S.: Bessel functions and modular relations of higher type
and hyperbolic differential equations. Comm. Sém. Math. Univ. Lund.,
Tome. suppl. dedicated to Marcel Riesz (1952), 12–20.
[Bor 69] BOREL, A.: Introduction aux groupes arithmétiques. Hermann (1969).
[Bum 84] BUMP, D.: Automorphic forms on GL(3, R). Lecture Notes in Math.
1083 Springer Verlag (1984).
[Dri 97] DRIVER, B. K.: Integration by parts and quasi-invariance for heat ker-
nel measures on loop groups. J. Functional Analysis 149 (1997) 470–
547.
[Gin 64] GINDIKIN, S.: Analysis in homogeneous domains. Russian Math.
         Surveys 19 (1964) 1–90.
[God 57] GODEMENT, R.: Introduction aux travaux de Selberg. Séminaire
Bourbaki (1957).
[Gre 88] GRENIER, D.: Fundamental domains for the general linear group. Pa-
cific J. Math. 132 (1988) 293–317.
[Gre 92] GRENIER, D.: An analogue of Siegel’s phi-operator for automorphic
forms for GLn (Z). Trans. AMS. 333 (1992) 463–477.
[Gre 93] GRENIER, D.: On the shape of fundamental domains in GL(n, R)/
O(n). Pacific J. Math. 160 (1993) 53–66.
[Gre 94] GRENIER, D.: Factoring L-functions as products of L-functions. Trans.
AMS 345 (1994) 673–692.
[Har 68] HARISH-CHANDRA.: Automorphic Forms on Semi-Simple Lie
Groups: Notes by J. G. M. Mars. Lecture Notes in Math. 62 (1968).
[Hel 62] HELGASON, S.: Differential Geometry and Symmetric Spaces. Acad-
emic Press (1962).
[Hel 68] HELGASON, S.: Differential Geometry, Lie Groups, and Symmetric
Spaces. Academic Press (1968).
[Hel 77] HELGASON, S.: Some results on eigenfunctions on symmetric spaces
and eigenspace representations. Math. Scand. 41 (1977) 79–89.
[Hel 84] HELGASON, S.: Groups and Geometric Analysis. Academic Press
(1984).
[Her 55] HERZ, C.: Bessel functions of matrix arguments. Ann. Math. 61 (1955)
474–523.
[Hla 44] HLAWKA, E.: Zur Geometrie der Zahlen. Math. Zeitschr. 49 (1944)
285–312.
[Hör 66] HÖRMANDER, L.: An introduction to complex analysis in several
         variables. Van Nostrand, Princeton (1966).
[ImT 82] IMAI, K., and TERRAS, A.: Fourier expansions of Eisenstein series for
GL(3, Z). Trans. AMS 273 (1982) 679–694.
[JoL 99] JORGENSON, J., and LANG, S.: Hilbert-Asai Eisenstein series, regu-
larized products, and heat kernels. Nagoya Math. J. 153 (1999) 155–
188.
[JoL 01] JORGENSON, J., and LANG, S.: Spherical Inversion on SLn (R).
Springer-Verlag (2001).
[Kar 65] KARPELEVIC, F. I.: The geometry of geodesics and the eigenfunctions
of the Beltrami-Laplace operator on symmetric spaces. Trans. Moscow
Math. Obsc. 14 (1965) 48–185; Trans. Moscow Math. Soc. (1965) 51–
199.
[La 75/85] LANG, S.: SL2 (R). Addison-Wesley (1975); Springer-Verlag (1985).
[La 93] LANG, S.: Real and Functional Analysis. Graduate Texts in Mathemat-
ics 142 Springer-Verlag (1993).
[La 99] LANG, S.: Fundamentals of Differential Geometry. Springer-Verlag
(1999).
[Llds 76] LANGLANDS, R. P.: On the Functional Equations Satisfied by Eisenstein
          Series. Lecture Notes in Math. 544 Springer Verlag (1976).
[Loo 69] LOOS, O.: Symmetric Spaces I and II. Benjamin (1969).
[Maa 55] MAASS, H.: Die Bestimmung der Dirichletreihen mit Grössencharakteren
         zu den Modulformen n-ten Grades. J. Indian Math. Soc. 19 (1955) 1–23.
[Maa 71] MAASS, H.: Siegel’s Modular Forms and Dirichlet Series. Lecture Notes
in Math. 216 Springer Verlag (1971).
[Min 1884] MINKOWSKI, H.: Grundlagen für eine Theorie der quadratischen
           Formen mit ganzzahligen Koeffizienten. Mémoire Académie des Sciences
           (1884). Collected Works I 3–144.
[Min 05] MINKOWSKI, H.: Diskontinuitätsbereich für arithmetische Äquivalenz.
J. reine angew. Math. 129 (1905) 270–274. Collected Works II 53–100.
[Moo 64] MOORE, C.: Compactifications of symmetric spaces II: The Cartan
domains. Amer. J. Math. 86 (1964) 358–378.
[Mos 53] MOSTOW, D.: Some new decomposition theorems for semi-simple
groups. Memoirs AMS (1953).
[Nar 68] NARASIMHAN, R.: Analysis on Real and Complex Manifolds. North
Holland (1968).
[Sat 56] SATAKE, I.: Compactification des espaces quotients de Siegel I. Séminaire
Cartan 1957–58, 3 March 1958, 12–01.
[Sat 60] SATAKE, I.: On compactifications of the quotient spaces for arithmeti-
cally defined discontinuous groups. Ann. Math. 72 (1960) 555–580.
[Sel 56] SELBERG, A.: Harmonic analysis and discontinuous groups. J. Indian
Math. Soc. 20 (1956) 47–87.
[Sie 35] SIEGEL, C. L.: Über die analytische Theorie der quadratischen Formen.
         Ann. Math. 36 (1935) 527–606.
[Sie 36] Ann. Math. 37 (1936) 230–263.
[Sie 37] Ann. Math. 38 (1937) 212–291.
[Sie 38] SIEGEL, C. L.: Über die Zetafunktionen indefiniter quadratischer
         Formen. Ann. Math. 43 (1938) 682–708.
[Sie 39] Ann. Math. 44 (1939) 398–426.
[Sie 40] SIEGEL, C. L.: Einheiten quadratischer Formen. Abh. Math. Sem. Han-
sische Univ. 13 (1940) 209–239.
[Sie 41] SIEGEL, C. L.: Equivalence of quadratic forms. Amer. J. Math. 63
(1941) 658–680.
[Sie 43] SIEGEL, C. L.: Discontinuous groups. Ann. Math. 44 (1943) 674–689.
[Sie 44a] SIEGEL, C. L.: On the theory of indefinite quadratic forms. Ann. Math.
45 (1944) 577–622.
[Sie 44b] SIEGEL, C. L.: The average measure of quadratic forms with given
determinant and signature. Ann. Math. 45 (1944) 667–685.
[Sie 45] SIEGEL, C. L.: Some remarks on discontinuous groups. Ann. Math. 46
(1945) 708–718.
[Sie 48] SIEGEL, C. L.: Indefinite quadratische Formen und Modulfunktionen.
Courant Anniv. Volume (1948) 395–406.
[Sie 51] SIEGEL, C. L.: Indefinite quadratische Formen und Funktionentheorie,
I. Math. Ann. 124 (1951) 17–54; II, 364–387.
[Sie 55/56] SIEGEL, C. L.: Lectures on Quadratic Forms. Tata Institute, Bombay
(1955–56).
[Sie 59] SIEGEL, C. L.: Zur Reduktionstheorie quadratischer Formen. Publ.
         Math. Soc. Japan (1959). Collected Papers #72, Volume III, 275–327.
[Ter 80] TERRAS, A.: Integral formulas and integral tests for series of positive
         matrices. Pacific J. Math. 89 (1980) 471–490.
[Ter 85a] TERRAS, A.: The Chowla Selberg method for Fourier expansion of
higher rank Eisenstein series. Canad. Math. Bull. 28 (1985) 280–294.
[Ter 85b] TERRAS, A.: Harmonic Analysis on Symmetric Spaces and Applica-
tions, I. Springer-Verlag (1985).
[Ter 88] TERRAS, A.: Harmonic Analysis on Symmetric Spaces and Applica-
tions, II. Springer-Verlag (1988).
[ViT 82] VINOGRADOV, A., and TAKHTAZHAN, L.: Theory of Eisenstein se-
ries for the group SL(3, R) and its applications to a binary problem.
J. Soviet Math. 18 (1982) 293–324.
[Wal 73] WALLACH, N.: Harmonic Analysis on Homogeneous Spaces. Marcel
Dekker (1973).
[We 46] WEIL, A.: Sur quelques résultats de Siegel. Summa Braz. Math. 1
(1946) 21–39; Collected Papers I, Springer-Verlag (1979) 339–357.
Index

Adjointness formulas 102, 103
Bengston Bessel function 58–73
Bessel function 58–73
Bessel-Fourier series 105
Chains of matrices 134
Changing variables 152
Character 50
Completed Lambda function 100
Convergence of Eisenstein series 129
d 50
D 148
Decomposition of Haar measure 25
Determinant character 50
Dual lattice 97
EF HZ 141
Eigenfunction of Hecke zeta operator 140
Eigenvalue 89, 118, 119
Eisenstein series 107, 128, 133–135, 142–145, 151–154, 157, 160, 162
Eisenstein trace 108
Epstein zeta function 99–106
Equivalent chains of matrices 137
Estimate of Lambda function 107
First order Iwasawa decomposition 5, 6
Fourier series 111–116
Fourier transform 70, 95
Full Iwasawa coordinates 125
Fun conditions 7
Functional equation of Eisenstein 157, 160
Functional equation of theta 98, 149
Fundamental domain 1, 6
Gamma function Γn 55
Gamma integral 55, 118
Gamma kernel 88
Gamma point pair invariant 88
Gamma transform 57
Grenier fundamental domain 6, 17
Haar measure 25
Hecke zeta operator 140
Incomplete gamma integral 101
Inductive coordinates 14
Integral matrices 134
Invariant differential operators 90
Invariant differential operators and polynomials on A 90
Invariant polynomials 75, 91
Iwasawa coordinates 126
Iwasawa decomposition 2, 16
Jacobian 31
K-Bessel function 59
Λ-function 100–106
Lie algebra generators 84
Lower Bengston function 73
Maass Selberg generators 78
Maass Selberg operators 80
Measure of fundamental domain 47
Measure on SPos 36
Mellin transform 55, 70
Metric 121, 122
Minkowski fundamental domain 7
Minkowski measure of fundamental domain 47
Minkowski-Hlawka 44
Newton polynomials 77
Non-singular theta series 111
Normal decomposition 93
Normalized primitive chain 140
Parabolic subgroup 132
Partial determinant character 140
Partial Iwasawa decomposition 15, 114
Poisson formula 97
Polar coordinates 32
Polar Haar measure 33
Polynomial expression 81
Primitive chain 139
Primitive Eisenstein series 107, 128, 143
Projection on A 92
Radius of discreteness 125
Regularizing differential operator 122, 148
Reversing matrix 51
Riemann zeta fudge factor 141
Selberg Eisenstein series 143
Selberg power function 143
Siegel set 20, 23
Siegel's formula 41
Standard coordinates 15
Strict fundamental domain 1
Subdeterminants 53
Theta series 102, 111
Trace 149
Trace scalar product 97
Transpose of differential operator 87
Triangular coordinates 26
Triangularization 135, 138
Tubular neighborhood 92
Twists of theta series 99
Unipotent trace 108
Upper Bengston function 71
Weight of polynomial 83
Weyl group 75
Xi function 156–159