
Other Titles in This Series

145 A. N. Andrianov and V. G. Zhuravlev, Modular forms and Hecke operators, 1995
144 O. V. Troshkin, Nontraditional methods in mathematical hydrodynamics, 1995
143 V. A. Malyshev and R. A. Minlos, Linear infinite-particle operators, 1995
142 N. V. Krylov, Introduction to the theory of diffusion processes, 1995
141 A. A. Davydov, Qualitative theory of control systems, 1994
140 Aizik I. Volpert, Vitaly A. Volpert, and Vladimir A. Volpert, Traveling wave solutions of parabolic systems, 1994
139 I. V. Skrypnik, Methods for analysis of nonlinear elliptic boundary value problems, 1994
138 Yu. P. Razmyslov, Identities of algebras and their representations, 1994
137 F. I. Karpelevich and A. Ya. Kreinin, Heavy traffic limits for multiphase queues, 1994
136 Masayoshi Miyanishi, Algebraic geometry, 1994
135 Masaru Takeuchi, Modern spherical functions, 1994
134 V. V. Prasolov, Problems and theorems in linear algebra, 1994
133 P. I. Naumkin and I. A. Shishmarev, Nonlinear nonlocal equations in the theory of waves, 1994
132 Hajime Urakawa, Calculus of variations and harmonic maps, 1993
131 V. V. Sharko, Functions on manifolds: Algebraic and topological aspects, 1993
130 V. V. Vershinin, Cobordisms and spectral sequences, 1993
129 Mitsuo Morimoto, An introduction to Sato's hyperfunctions, 1993
128 V. P. Orevkov, Complexity of proofs and their transformations in axiomatic theories, 1993
127 F. L. Zak, Tangents and secants of algebraic varieties, 1993
126 M. L. Agranovskii, Invariant function spaces on homogeneous manifolds of Lie groups and applications, 1993
125 Masayoshi Nagata, Theory of commutative fields, 1993
124 Masahisa Adachi, Embeddings and immersions, 1993
123 M. A. Akivis and B. A. Rosenfeld, Élie Cartan (1869–1951), 1993
122 Zhang Guan-Hou, Theory of entire and meromorphic functions: Deficient and asymptotic values and singular directions, 1993
121 I. B. Fesenko and S. V. Vostokov, Local fields and their extensions: A constructive approach, 1993
120 Takeyuki Hida and Masuyuki Hitsuda, Gaussian processes, 1993
119 M. V. Karasev and V. P. Maslov, Nonlinear Poisson brackets. Geometry and quantization, 1993
118 Kenkichi Iwasawa, Algebraic functions, 1993
117 Boris Zilber, Uncountably categorical theories, 1993
116 G. M. Fel'dman, Arithmetic of probability distributions, and characterization problems on abelian groups, 1993
115 Nikolai V. Ivanov, Subgroups of Teichmüller modular groups, 1992
114 Seizô Itô, Diffusion equations, 1992
113 Michail Zhitomirskii, Typical singularities of differential 1-forms and Pfaffian equations, 1992
112 S. A. Lomov, Introduction to the general theory of singular perturbations, 1992
111 Simon Gindikin, Tube domains and the Cauchy problem, 1992
110 B. V. Shabat, Introduction to complex analysis Part II. Functions of several variables, 1992
109 Isao Miyadera, Nonlinear semigroups, 1992
108 Takeo Yokonuma, Tensor spaces and exterior algebra, 1992
107 B. M. Makarov, M. G. Goluzina, A. A. Lodkin, and A. N. Podkorytov, Selected problems in real analysis, 1992
106 G.-C. Wen, Conformal mappings and boundary value problems, 1992
105 D. R. Yafaev, Mathematical scattering theory: General theory, 1992
104 R. L. Dobrushin, R. Kotecký, and S. Shlosman, Wulff construction: A global shape from local interaction, 1992

(Continued in the back of this publication)


Translations of
MATHEMATICAL
MONOGRAPHS
Volume 145

Modular Forms
and Hecke Operators
A. N. Andrianov
V. G. Zhuravlev
А. Н. Андрианов, В. Г. Журавлев
МОДУЛЯРНЫЕ ФОРМЫ И ОПЕРАТОРЫ ГЕККЕ

Translated from the Russian by Neal Koblitz


1991 Mathematics Subject Classification. Primary 11Fxx; Secondary 11E45.
ABSTRACT. The book contains an exposition of the theory of Hecke operators in the spaces of modular
forms of an arbitrary degree. The main consideration is given to the study of multiplicative properties of
the Fourier coefficients of modular forms.
The book can be used by researchers and graduate students working in algebra and number theory to
learn about theta-series, the theory of modular forms in one and several variables, the theory of Hecke
operators, and recent developments in the arithmetic of modular forms.

Library of Congress Cataloging-in-Publication Data


Andrianov, A. N. (Anatolii Nikolaevich)
[Moduliarnye formy i operatory Gekke. English]
Modular forms and Hecke operators / A. N. Andrianov, V. G. Zhuravlev.
p. cm. - (Translations of mathematical monographs, ISSN 0065-9282; v. 145)
Includes bibliographical references.
ISBN 0-8218-0277-1
1. Forms, Modular. 2. Hecke operators. I. Zhuravlev, V. G. (Vladimir Georgievich) II. Title.
III. Series.
QA243.A52813 1995
512.9'44--dc20 95-30915
CIP

Copying and reprinting. Individual readers of this publication, and nonprofit libraries acting for them,
are permitted to make fair use of the material, such as to copy a chapter for use in teaching or research.
Permission is granted to quote brief passages from this publication in reviews, provided the customary
acknowledgment of the source is given.
Republication, systematic copying, or multiple reproduction of any material in this publication (in-
cluding abstracts) is permitted only under license from the American Mathematical Society. Requests
for such permission should be addressed to the Manager of Editorial Services, American Mathematical
Society, P.O. Box 6248, Providence, Rhode Island 02940-6248. Requests can also be made by e-mail to
reprint-permission@math.ams.org.

© Copyright 1995 by the American Mathematical Society. All rights reserved.


The American Mathematical Society retains all rights
except those granted to the United States Government.
Printed in the United States of America.
The paper used in this book is acid-free and falls within the guidelines
established to ensure permanence and durability.
Printed on recycled paper.
10 9 8 7 6 5 4 3 2 1    00 99 98 97 96 95
Contents

Introduction 1
Chapter 1. Theta-Series 3
§1. Definition of theta-series 3
1. Representations of quadratic forms by quadratic forms 3
2. Definition of theta-series 5
§2. Symplectic transformations 6
1. The symplectic group 6
2. The Siegel upper half-plane 8
§3. Symplectic transformations of theta-series 11
1. Transformations of theta-series 11
2. The Siegel modular group and the theta-group 16
3. Symplectic transformations of theta-series 19
§4. Computation of the multiplier 25
1. Automorphy factors 25
2. Quadratic forms of level 1 27
3. The multiplier as a Gauss sum 28
4. Quadratic forms in an even number of variables 32
5. Quadratic forms in an odd number of variables 37
Chapter 2. Modular Forms 43
§1. Fundamental domains for subgroups of the modular group 43
1. The modular triangle 43
2. The Minkowski reduction domain 46
3. The fundamental domain for the Siegel modular group 52
4. Subgroups of finite index 57
§2. Definition of modular forms 59
1. Congruence subgroups of the modular group 59
2. Modular forms of integer weight 60
3. Definition of modular forms of half-integer weight 60
4. Theta-series as modular forms 60
§3. Fourier expansions 61
1. Modular forms for triangular subgroups 61
2. The Koecher effect 62
3. Fourier expansions of modular forms 65
4. The Siegel operator 72
5. Cusp-forms 75


§4. Spaces of modular forms 79


1. Zeros of modular forms for Γ¹ 79
2. Modular forms with zero initial Fourier coefficients 82
3. Finite dimensionality of the spaces of modular forms 85
§5. Scalar product and orthogonal decomposition 86
1. The scalar product 87
2. The orthogonal complement 90
Chapter 3. Hecke Rings 93
§1. Abstract Hecke rings 93
1. Averaging over double cosets 93
2. Hecke rings 94
3. The imbedding ε 100
4. The anti-isomorphism j 101
5. Representations in spaces of automorphic forms 103
6. Hecke algebras over a commutative ring 104
§2. Hecke rings for the general linear group 105
1. Global rings 105
2. Local rings 111
3. The spherical map 117
§3. Hecke rings for the symplectic group 122
1. Global rings 122
2. Local rings 133
3. The spherical map 138
§4. Hecke rings for the symplectic covering group 155
1. Global rings 155
2. Local rings 163
3. The spherical map 175
§5. Hecke rings for the triangular subgroup of the symplectic group 179
1. Global rings 179
2. Local rings 185
3. Expansion of P(m) for n = 1, 2 190
§6. Hecke polynomials for the symplectic group 192
1. Negative powers ofFrobenius elements 192
2. Factorization of Hecke polynomials 199
3. Symmetric factorization of the polynomials Q(v) for n = 1, 2 202
4. Coefficients in the factorization of Rankin polynomials 204
5. Symmetric factorization of Rankin polynomials 220
Chapter 4. Hecke Operators 225
§1. Hecke operators for congruence subgroups of the modular group 225
1. Hecke operators 225
2. Invariant subspaces and eigenfunctions 229
§2. Action of the Hecke operators 234
1. Hecke operators for Γ₀ⁿ(q) 234
2. Hecke operators for Γ₀ 236
3. Hecke operators and the Siegel operator 243
4. Action of the middle factor in the symmetric factorization of Rankin
polynomials 253

§3. Multiplicative properties of the Fourier coefficients 270


1. Modular forms in one variable 271
2. Modular forms of degree 2, Gaussian composition, and zeta-functions 279
3. Modular forms of arbitrary degree and even zeta-functions 297
Appendix 1. Symmetric Matrices Over a Field 307
1. Arbitrary fields 307
2. The field of real numbers 308
Appendix 2. Quadratic Spaces 311
1. Geometrical language 311
2. Nondegenerate spaces 314
3. Gauss sums 316
4. Isotropy subspaces of nondegenerate spaces over residue fields 318
Appendix 3. Modules in Quadratic Fields and Binary Quadratic Forms 321
1. Modules in algebraic number fields 321
2. Modules and primes in quadratic fields 321
3. Modules in imaginary quadratic fields, and quadratic forms 322
Notes 325
References 329
List of Notation 333
Introduction

Throughout the history of number theory, a problem that has attracted and con-
tinues to attract the interest of researchers is that of studying the number r (q, a) of
integer solutions to equations of the form
q(xi, ... ,xm) =a,
where q is a quadratic form. The classical theory gave many exact formulas for the
functions r (q, a) which revealed remarkable multiplicative properties of these numbers.
For example, Jacobi's formula
$$r(x_1^2 + x_2^2 + x_3^2 + x_4^2,\, a) = 8\sigma_1(a)$$
for the number of representations of an odd integer a as a sum of four squares and
Ramanujan's formula
$$r(x_1^2 + \cdots + x_{24}^2,\, a) = \tfrac{16}{691}\,\sigma_{11}(a) + \tfrac{33152}{691}\,\tau(a)$$
for the number of representations of an odd integer a as a sum of 24 squares, where
$\sigma_k(a)$ denotes the sum of the kth powers of the positive divisors of a and $\tau(a)$ is defined
as the coefficients in the power series
$$x\{(1-x)(1-x^2)\cdots\}^{24} = \sum_{a=1}^{\infty} \tau(a)\,x^a,$$
involve the multiplicative functions $\sigma_k(a)$ with multiplication table
$$\sigma_k(a)\cdot\sigma_k(b) = \sum_{d \mid a,\; d \mid b} d^k\,\sigma_k(ab/d^2)$$
and the Ramanujan function $\tau(a)$, which follows the same multiplication rule as $\sigma_{11}(a)$.
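These identities lend themselves to numerical verification. The following Python sketch (our illustration, not part of the original text) computes $r(x_1^2+\cdots+x_m^2, a)$ by repeatedly convolving the coefficients of the one-variable theta series, computes $\tau(a)$ from the product formula above, and checks Jacobi's formula, Ramanujan's formula, and the multiplication table for $\sigma_k$:

```python
from math import gcd, isqrt

def sigma(k, a):
    """Sum of the k-th powers of the positive divisors of a."""
    return sum(d**k for d in range(1, a + 1) if a % d == 0)

def rep_counts(m, N):
    """r(x_1^2 + ... + x_m^2, a) for a = 0..N, as the m-th convolution power
    of the coefficients of the one-variable series sum_t x^(t^2)."""
    theta = [0] * (N + 1)
    for t in range(isqrt(N) + 1):
        theta[t * t] += 1 if t == 0 else 2   # t and -t both contribute
    counts = [1] + [0] * N
    for _ in range(m):
        counts = [sum(counts[i] * theta[j - i] for i in range(j + 1))
                  for j in range(N + 1)]
    return counts

def tau_list(N):
    """Ramanujan tau(a) for a = 0..N (with tau(0) = 0), from the coefficients
    of x * prod_{n>=1} (1 - x^n)^24, truncated at degree N."""
    poly = [0] * N          # coefficients of prod (1 - x^n)^24 up to x^(N-1)
    poly[0] = 1
    for n in range(1, N):
        for _ in range(24):                      # multiply by (1 - x^n), 24 times
            for i in range(N - 1, n - 1, -1):
                poly[i] -= poly[i - n]
    return [0] + poly        # the leading factor x shifts every exponent up by 1

r4 = rep_counts(4, 15)
for a in (1, 3, 5, 7, 9, 11, 13, 15):            # Jacobi's formula, a odd
    assert r4[a] == 8 * sigma(1, a)

tau = tau_list(10)
assert tau[6] == tau[2] * tau[3]                 # tau is multiplicative

r24 = rep_counts(24, 5)
for a in (1, 3, 5):                              # Ramanujan's formula, a odd
    assert 691 * r24[a] == 16 * sigma(11, a) + 33152 * tau[a]

def sigma_product(k, a, b):
    """Right-hand side of the multiplication table: sum over d | gcd(a, b)."""
    g = gcd(a, b)
    return sum(d**k * sigma(k, a * b // d**2)
               for d in range(1, g + 1) if g % d == 0)

assert sigma(1, 2) * sigma(1, 4) == sigma_product(1, 2, 4)
assert sigma(3, 6) * sigma(3, 4) == sigma_product(3, 6, 4)
```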
In 1937, Hecke explained why this phenomenon occurs. In particular, from Hecke's
theory it follows that, given a positive definite integral quadratic form q in an even
number of variables, the function r (q, a) is a linear combination of multiplicative
functions whose values can be interpreted as eigenvalues of certain invariantly defined
linear operators (called "Hecke operators") on the spaces of modular forms. In sub-
sequent years, the work of Eichler, Sato, Deligne and others uncovered fundamental
relations between Hecke operators and algebraic geometry. In particular, their eigen-
values were interpreted in terms of the roots of the zeta-functions of suitable algebraic
varieties over finite fields. Another line of development, initiated by Selberg and then
greatly expanded by Langlands, considers Hecke operators from the point of view of
the representation theory of locally compact groups and hopes to find a prominent
place for them in a future noncommutative class field theory.

A natural generalization of the problem of representing numbers by quadratic


forms is the problem of representing quadratic forms by quadratic forms. If q and a
are two quadratic forms in m and n variables, respectively, then we might want to study
the number r(q, a) of changes of variables (by means of m × n integer matrices) which
take the form q to the form a. For example, if a is an integer, then $r(q, a x^2) = r(q, a)$.
In 1935-1937, Siegel laid the groundwork for the arithmetic-analytic study of r(q, a)
and began constructing a theory of modular forms in several variables. Neither he nor
later investigators were able to find much in the way of "exact formulas" (in the sense
of the classical theory) for these functions r(q, a). Moreover, since quadratic forms
generally do not have any composition rule, it is not at all clear in what sense one can
speak of multiplicative functions of the type r(q, a). Thus, there were no arithmetic
motives for trying to carry over the theory of Hecke operators to Siegel modular forms.
But the concept of Hecke operators was so simple and natural that, soon after Hecke's
work, attempts were made to develop a Hecke theory for such modular forms.
As this theory developed, the Hecke operators on spaces of modular forms in
several variables were found to have arithmetic meaning. In particular, the theory
provided a framework for discovering certain multiplicative properties of the number
of integer representations of quadratic forms by quadratic forms.
The theory has now reached a sufficient level of maturity, and the time has come
for a detailed and systematic exposition of its fundamental methods and results. The
purpose of this book is, starting with the basics and ending with the latest results, to
explain the current status of the theory of Hecke operators on spaces of holomorphic
modular forms of integer and half-integer weight for congruence subgroups of integral
symplectic groups. In the spirit of Hecke's original approach, we consider Hecke
operators principally as an instrument for studying the multiplicative properties of the
Fourier coefficients of modular forms. We do not discuss other directions of the theory,
such as the connection of Hecke operators with algebraic geometry, representation
theory, and Galois theory, since in the case of several variables the study of these
connections is far from complete.
The book can also be used as an introduction to the theory of modular forms in
one or several variables and the theory of theta-series.
The book is intended for those who plan to work in the arithmetic theory of
quadratic forms and modular functions, those who already are working in this area,
and those who merely want some familiarity with the field. The reader can get an idea
of the book's contents from the chapter and section headings and the introductory
remarks at the beginning of each chapter. Here we shall only make some general
comments. The first three chapters are independent of one another, except for a few
general lemmas. Chapter 4 relies upon Chapters 2 and 3. Most of the exercises are not
standard drill problems, but rather indicate interesting branches of the theory which
for one reason or another did not fit into the main text. All of the prerequisites from
algebra and number theory that go beyond the standard university courses are given
at the end of the book in three Appendices. The most important references to the
literature are concentrated in the Remarks.
We hope that this book will help attract the attention of young researchers to a
beautiful and mysterious realm of number theory, and will make the path easier for all
who wish to enter.
CHAPTER 1

Theta-Series

In this chapter we look at a fundamental instrument in the analytic study of
quadratic Diophantine equations and systems: the theta-series of quadratic forms.

§1. Definition of theta-series


1. Representations of quadratic forms by quadratic forms. Suppose $q(x_1,\dots,x_m)$
and $a(y_1,\dots,y_n)$ are two quadratic forms with coefficients in some commutative ring
K'. By a representation of the form a by the form q over a subring K of K' we mean a
matrix $C = (c_{ij})$ in the set $M_{m,n}(K)$ of all $m \times n$-matrices with entries in K such that
the change of variables
$$x_i = \sum_{j=1}^{n} c_{ij}\, y_j \qquad (i = 1,\dots,m)$$
takes the form q to the form a. We let $r_K(q,a)$ denote the number of representations
of a by q over the ring K. In the case n = 1, i.e., $a(y_1) = a \cdot y_1^2$, the definition says
that a column C with components $c_1,\dots,c_m \in K$ is a representation of a by q if and
only if $(c_1,\dots,c_m)$ is a solution of the equation

(1.1) $\qquad q(x_1,\dots,x_m) = a,$

and hence $r_K(q,\, a \cdot y_1^2)$ is equal to the number $r_K(q,a)$ of solutions in K to the
equation (1.1). Similarly, in the general case $r_K(q,a)$ can be interpreted as the number
of solutions of a certain system of equations of degree two.
If the number $2 = 2 \cdot 1_{K'}$ is not a zero divisor in K', then it is convenient to use
matrix language. To every quadratic form

(1.2) $\qquad q(x_1,\dots,x_m) = \sum_{1 \le \alpha \le \beta \le m} q_{\alpha\beta}\, x_\alpha x_\beta$

we associate the even symmetric matrix

(1.3) $\qquad Q = B + {}^{t}B,$

where $B = (b_{\alpha\beta})$ is the upper-triangular matrix with $b_{\alpha\beta} = q_{\alpha\beta}$ for $\alpha \le \beta$ and
$b_{\alpha\beta} = 0$ otherwise, and t denotes the transpose. We call Q the matrix of the form q (it is more traditional
but less convenient to call Q the matrix of the form 2q). Then q can be written in
terms of its matrix as follows:

(1.4) $\qquad q(x_1,\dots,x_m) = \tfrac{1}{2}\, {}^{t}x\, Q\, x,$

where x is the column with components $x_1,\dots,x_m$. Using these definitions and
notation, we immediately see that $C \in M_{m,n}(K)$ is a representation of a form a in n
variables by the form q if and only if $X = C$ is a solution of the matrix equation

(1.5) $\qquad {}^{t}X Q X = A,$

where A is the matrix of the form a and X is an $m \times n$-matrix. In particular, $r_K(q,a)$
is equal to the number $r_K(Q,A)$ of solutions over K of the equation (1.5).
The methods of studying $r_K(q,a)$ and even the formulation of the questions
naturally depend upon the nature of the ring K and the properties of the quadratic
forms under consideration. For now we shall limit ourselves to a simple but useful
observation.
Two quadratic forms q and q' in the same number m of variables are said to be
equivalent over K (or K-equivalent) if there exists a representation of one form by the
other that lies in the group $GL_m(K)$ of invertible $m \times m$-matrices over K. In this case
we write $q \sim_K q'$. The set $\{q\}_K$ of all forms that are equivalent over K to a given form
q is called the class of q over K. If Q and Q' are the matrices of the forms q and q',
then $q \sim_K q'$ means that

(1.6) $\qquad {}^{t}V Q V = Q' \quad$ with $V \in GL_m(K)$.

In this case we say that Q and Q' are equivalent over K (or K-equivalent), and we write
$Q \sim_K Q'$. We let $\{Q\}_K$ denote the K-equivalence class of the matrix Q.
For any fixed matrices $V \in GL_m(K)$ and $V' \in GL_n(K)$, the map $C \to VCV'$
is obviously a one-to-one correspondence from $M_{m,n}(K)$ to itself. From this obvious
fact and the definitions we have

PROPOSITION 1.1. The function $r_K(Q,A)$ depends only on the K-equivalence classes
of Q and A (or the K-equivalence classes of the corresponding quadratic forms).
The history of quadratic forms over rings is almost as old and colorful as that of
mathematics itself. The questions asked and the approaches to answering them vary
greatly from one ring to another and from one type of quadratic form to another. In
this book we are interested in the development of analytic methods in the simplest
nontrivial situation, and so we take the ring of rational integers Z as our ground
ring. Other rings will play only an auxiliary role. In addition, as a rule we will be
considering only representations by positive definite forms. If q is such a form, the
number $r_{\mathbf{Z}}(q,a)$ of integral representations by q is always finite, and the theory studies
various properties of the function $a \to r_{\mathbf{Z}}(q,a)$.
It is the theory of modular forms which provides a natural language and an
apparatus for studying this function. The theta-series of quadratic forms are the link
between modular forms and quadratic forms.
PROBLEM 1.2. Let Q and A be symmetric m×m- and n×n-matrices, respectively,
with coefficients in the field of real numbers R, and let $Q > 0$ (i.e., Q is positive
definite). Prove that the equation ${}^{t}XQX = A$ is solvable in real $m \times n$-matrices X if
and only if $A \ge 0$ (i.e., A is positive semidefinite) and rank $A \le m$, and in this case the
entries in any solution $X = C = (c_{ij})$ satisfy the inequality $\max_{i,j} |c_{ij}| \le \lambda^{-1/2}\mu^{1/2}$,
where λ is the smallest eigenvalue of the matrix Q and μ is the largest eigenvalue of the
matrix A; in particular, $r_{\mathbf{Z}}(Q,A) < \infty$.
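The bound in Problem 1.2 is what makes $r_{\mathbf{Z}}(Q,A)$ computable by a finite search. The sketch below (ours, not the book's) uses the per-column consequence $x_{ij}^2 \le a_{jj}/\lambda$, which follows from ${}^{t}XQX = A$ and $Q \ge \lambda E_m$ exactly as in the problem; the caller must supply `lam`, a positive lower bound for the smallest eigenvalue of Q.

```python
import itertools
import math

def count_reps(Q, A, lam):
    """Number of integer m x n matrices X with tX Q X = A, for Q > 0.

    `lam` is a positive lower bound for the smallest eigenvalue of Q; every
    entry of a solution then satisfies x_ij^2 <= sum_i x_ij^2 <= a_jj / lam,
    so a finite search over each column suffices."""
    m, n = len(Q), len(A)
    col_choices = []
    for j in range(n):
        b = int(math.floor(math.sqrt(A[j][j] / lam)))
        col_choices.append(list(itertools.product(range(-b, b + 1), repeat=m)))
    count = 0
    for cols in itertools.product(*col_choices):
        # cols[j] is the j-th column of X; check (tX Q X)_{jk} = a_{jk}
        ok = True
        for j in range(n):
            for k in range(n):
                s = sum(cols[j][i] * Q[i][l] * cols[k][l]
                        for i in range(m) for l in range(m))
                if s != A[j][k]:
                    ok = False
                    break
            if not ok:
                break
        count += ok
    return count

Q4 = [[2, 0, 0, 0], [0, 2, 0, 0], [0, 0, 2, 0], [0, 0, 0, 2]]  # x1^2+...+x4^2
assert count_reps(Q4, [[2]], lam=2) == 8     # a = 1: 8 = 8*sigma_1(1)
assert count_reps(Q4, [[6]], lam=2) == 32    # a = 3: 32 = 8*sigma_1(3)

Q2 = [[2, 0], [0, 2]]
assert count_reps(Q2, Q2, lam=2) == 8        # the 8 integral orthogonal 2x2 matrices
```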

2. Definition of theta-series. We start with some notation. Let

(1.7) $\qquad \mathbf{A}_n = \{A \in S_n(\mathbf{Z});\ A \text{ even},\ A \ge 0\}$

be the set of n×n integral even symmetric (i.e., with even numbers on the main
diagonal) semidefinite matrices, and let

(1.8) $\qquad \mathbf{A}_n^{+} = \{A \in \mathbf{A}_n;\ A > 0\}.$

We fix $Q \in \mathbf{A}_m^{+}$ and $n = 1, 2, \dots$. For every $A \in \mathbf{A}_n$ we can define the finite number
$r(Q,A) = r_{\mathbf{Z}}(Q,A)$ of integral representations of the form with matrix A by the form
with matrix Q. Thus, the matrix Q corresponds to a function $r(Q,\cdot)\colon \mathbf{A}_n \to \mathbf{Z}$.
In order to study this function analytically, it is natural to consider the following
generating series, written as a power series,

$$\sum_{A = ((1+e_{\alpha\beta})a_{\alpha\beta}) \in \mathbf{A}_n} r(Q,A) \prod_{1 \le \alpha \le \beta \le n} t_{\alpha\beta}^{\,a_{\alpha\beta}}$$

in $(n) = n(n+1)/2$ variables $t_{\alpha\beta}$, where $e_{\alpha\beta}$ are the coefficients of the identity matrix
$E_n = (e_{\alpha\beta})$. Setting $t_{\alpha\beta} = \exp(2\pi i z_{\alpha\beta})$, we obtain the Fourier series

(1.9) $\qquad \theta^n(Z,Q) = \sum_{A=((1+e_{\alpha\beta})a_{\alpha\beta}) \in \mathbf{A}_n} r(Q,A) \exp\Bigl(2\pi i \sum_{1\le\alpha\le\beta\le n} a_{\alpha\beta} z_{\alpha\beta}\Bigr) = \sum_{A \in \mathbf{A}_n} r(Q,A) \exp(\pi i\,\sigma(AZ)),$

where Z is an n×n symmetric matrix with coefficients $z_{\alpha\beta}$ on and above the main
diagonal, and where $\sigma(M)$ denotes the trace of the matrix M. The last form of writing
the generating series is the more convenient one in most situations, and in particular
for finding the domain of convergence.
We write the matrix Z in the form $Z = X + iY$, where X and Y are real matrices
and $i = \sqrt{-1}$. If the matrix Y does not satisfy the condition $Y \ge 0$, then there
exists a row of integers $c = (c_1,\dots,c_n)$ such that $cY{}^{t}c < 0$. (A real solution of this
inequality exists by definition; a rational solution exists by continuity; and an integral
solution can be obtained from the rational solution using homogeneity.) Let C denote
the m×n integer matrix with c in the first row and zeros everywhere else. Then the
matrices $A_d = d^2 \cdot {}^{t}CQC = {}^{t}(dC)Q(dC)$ with rational integers d belong to $\mathbf{A}_n$, they
satisfy the condition $r(Q,A_d) \ge 1$, and we obviously have

$$|r(Q,A_d)\exp(\pi i\,\sigma(A_d Z))| = r(Q,A_d)\exp(-\pi d^2 \sigma(QCY{}^{t}C)) \to \infty \quad \text{as } d \to \infty.$$

Thus, in this case the general term in (1.9) does not approach zero, and the series
diverges. Consequently, if the series (1.9) converges on some open subset of the (n)-
dimensional complex space of the variables $z_{\alpha\beta}$, then this subset must be contained in
the region

(1.10) $\qquad \mathbf{H}_n = \{Z = X + iY \in S_n(\mathbf{C});\ Y > 0\},$

called the Siegel upper half-plane of degree n. The region $\mathbf{H}_n$ is obviously an open
subset of (n)-dimensional complex space.
6 I. THETA-SERIES

PROPOSITION 1.3. The series $\theta^n(Z,Q)$, where $Q \in \mathbf{A}_m^{+}$ and $n \in \mathbf{N}$, converges abso-
lutely and uniformly on any subset of $\mathbf{H}_n$ of the form

(1.11) $\qquad \mathbf{H}_n(\varepsilon) = \{Z = X + iY \in \mathbf{H}_n;\ Y \ge \varepsilon E_n\},$

where $\varepsilon > 0$ and $E_n$ is the n×n identity matrix.

PROOF. Let $\varepsilon_1$ denote the smallest eigenvalue of the matrix Q. Then for any
$N \in M_{m,n}(\mathbf{R})$ we have the inequality ${}^{t}NQN \ge \varepsilon_1\,{}^{t}NN$. Consequently, on the set
$\mathbf{H}_n(\varepsilon)$ the series

(1.12) $\qquad \sum_{N \in M_{m,n}} \exp(\pi i\,\sigma({}^{t}NQNZ))$

is majorized by the convergent numerical series

$$\sum_{N \in M_{m,n}} \exp(-\pi\varepsilon\varepsilon_1\,\sigma({}^{t}NN)) = \Bigl(\sum_{t \in \mathbf{Z}} \exp(-\pi\varepsilon\varepsilon_1 t^2)\Bigr)^{mn},$$

and hence it converges absolutely and uniformly on this set. If we gather together all
of the terms in (1.12) for which ${}^{t}NQN$ is equal to a fixed matrix $A \in \mathbf{A}_n$, we see that
the number of such terms is $r(Q,A)$, and thus the series (1.12) is equal to $\theta^n(Z,Q)$ in
any region of absolute convergence. □

The series

(1.13) $\qquad \theta^n(Z,Q) = \sum_{A \in \mathbf{A}_n} r(Q,A)\exp(\pi i\,\sigma(AZ)) = \sum_{N \in M_{m,n}} \exp(\pi i\,\sigma({}^{t}NQNZ))$

is called the theta-series of degree n for the matrix Q (or the corresponding quadratic
form). Proposition 1.3 immediately implies

THEOREM 1.4. The theta-series $\theta^n(Z,Q)$ of degree n for the matrix $Q \in \mathbf{A}_m^{+}$ deter-
mines a holomorphic function on $\mathbf{H}_n$; the function $\theta^n(Z,Q)$ is bounded on every subset
$\mathbf{H}_n(\varepsilon) \subset \mathbf{H}_n$ with $\varepsilon > 0$.
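For n = 1 the regrouping used in Proposition 1.3 can be watched numerically. In the sketch below (our illustration, not from the book), $Q = 2E_2$ is the matrix of $x^2 + y^2$; the lattice sum over $N \in M_{2,1}(\mathbf{Z})$ and the Fourier sum grouped by $A = (2a)$ agree up to a truncation error that decays roughly like $e^{-\pi a}$ at the chosen test point:

```python
import cmath
from math import isqrt

def theta_lattice(z, cutoff):
    """Degree-1 theta series of Q = 2E_2 (the form x^2 + y^2) as the lattice
    sum over columns N = (x, y): sum_N exp(pi*i*(tN Q N)*z), |x|,|y| <= cutoff."""
    return sum(cmath.exp(cmath.pi * 1j * 2 * (x * x + y * y) * z)
               for x in range(-cutoff, cutoff + 1)
               for y in range(-cutoff, cutoff + 1))

def theta_fourier(z, amax):
    """The same series grouped by A = (2a): sum_a r(Q, 2a) * exp(2*pi*i*a*z)."""
    def r2(a):  # number of representations of a as x^2 + y^2
        return sum(1 for x in range(-isqrt(a), isqrt(a) + 1)
                   for y in range(-isqrt(a), isqrt(a) + 1)
                   if x * x + y * y == a)
    return sum(r2(a) * cmath.exp(2j * cmath.pi * a * z) for a in range(amax + 1))

# With cutoff = 8 the lattice sum covers every N with tN Q N / 2 <= 64,
# so the two truncations differ only by terms of modulus < exp(-64*pi).
z = 0.3 + 0.5j
assert abs(theta_lattice(z, 8) - theta_fourier(z, 64)) < 1e-12
```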

§2. Symplectic transformations


1. The symplectic group. On the Siegel upper half-plane $\mathbf{H}_n$, which arose in §1 as
the domain of definition of theta-series, we have an action of a large group of biholo-
morphic one-to-one transformations. An obvious example of such transformations is
the set of maps
$$U(V)\colon Z \to {}^{t}V^{-1} Z V^{-1}, \qquad T(S)\colon Z \to Z + S,$$
where $V \in GL_n(\mathbf{R})$ and $S \in S_n(\mathbf{R})$. Another such transformation is the map $J\colon Z \to -Z^{-1}$. To see that this has the desired properties, we must check that $-Z^{-1}$ exists
and belongs to $\mathbf{H}_n$ for any $Z = X + iY \in \mathbf{H}_n$. Since Y is a symmetric positive
definite matrix, there exists $V \in GL_n(\mathbf{R})$ such that $VY{}^{t}V$ is the identity matrix. We
set $T = VX{}^{t}V$. Then $VZ{}^{t}V = T + iE_n$. Since the matrix $T^2 + E_n$ is positive definite,
it is invertible, and so $T + iE$ is also invertible, with $(T+iE)^{-1} = (T-iE)(T^2+E)^{-1}$
(here $E = E_n$). Thus, Z is an invertible matrix, and
$$-Z^{-1} = -{}^{t}V(T+iE)^{-1}V = {}^{t}V(-T+iE)(T^2+E)^{-1}V$$
is contained in $\mathbf{H}_n$.

We now examine the group of analytic automorphisms of $\mathbf{H}_n$ that is generated by
$U(V)$, $T(S)$, and $J$. We first note that each of the generating transformations is a
fractional-linear transformation of the form

(2.1) $\qquad Z \to M\langle Z\rangle = (AZ+B)(CZ+D)^{-1} \qquad (Z \in \mathbf{H}_n),$

where A, B, C, D are n×n-matrices, and in the three cases the 2n×2n-matrix
$M = \begin{pmatrix} A & B \\ C & D \end{pmatrix}$ is, respectively:

(2.2) $\qquad U(V) = \begin{pmatrix} V^{*} & 0 \\ 0 & V \end{pmatrix}$, where $V \in GL_n(\mathbf{R})$, $V^{*} = {}^{t}V^{-1}$,

(2.3) $\qquad T(S) = \begin{pmatrix} E & S \\ 0 & E \end{pmatrix}$, where $E = E_n$, $S \in S_n(\mathbf{R})$,

in which $S_n(\mathbf{R})$ denotes the set of symmetric matrices in $M_n(\mathbf{R})$, and

(2.4) $\qquad J = J_n = \begin{pmatrix} 0 & E_n \\ -E_n & 0 \end{pmatrix}.$

Furthermore, it is easy to see that the composition of any two automorphisms of the
form (2.1) is also an automorphism of the form (2.1) with matrix equal to the product
of the original matrices. Thus, we obtain
PROPOSITION 2.1. Let S be the subgroup of $GL_{2n}(\mathbf{R})$ generated by the matrices
(2.2)–(2.4). Then for every $M = \begin{pmatrix} A & B \\ C & D \end{pmatrix} \in S$ the matrix $CZ + D$ is invertible for all
$Z \in \mathbf{H}_n$, and the map

(2.5) $\qquad f(M)\colon Z \to M\langle Z\rangle$

is a holomorphic automorphism of $\mathbf{H}_n$. The map $M \to f(M)$ gives a homomorphism
from S to the group of holomorphic automorphisms of $\mathbf{H}_n$.
In order to characterize S as an algebraic group, we first note that each generator
(2.2)–(2.4) leaves invariant the skew-symmetric bilinear form with matrix (2.4), i.e., it
satisfies the relation ${}^{t}MJ_nM = J_n$. Hence, S is contained in the group

(2.6) $\qquad Sp_n(\mathbf{R}) = \{M \in GL_{2n}(\mathbf{R});\ {}^{t}MJ_nM = J_n\},$

which is called the real symplectic group of degree n. It follows from the definition that
a 2n×2n real matrix $M = \begin{pmatrix} A & B \\ C & D \end{pmatrix}$ with n×n-blocks A, B, C, D is symplectic (i.e.,
belongs to the symplectic group of degree n) if and only if

(2.7) $\qquad {}^{t}AC = {}^{t}CA, \quad {}^{t}BD = {}^{t}DB, \quad {}^{t}AD - {}^{t}CB = E_n.$

It is easy to see that a matrix M is symplectic if and only if the matrix ${}^{t}M = JM^{-1}J^{-1}$
is symplectic. This implies that the conditions (2.7) can be rewritten in the form

(2.8) $\qquad A{}^{t}B = B{}^{t}A, \quad C{}^{t}D = D{}^{t}C, \quad A{}^{t}D - B{}^{t}C = E_n.$

Finally, we note that the inverse of a symplectic matrix $M = \begin{pmatrix} A & B \\ C & D \end{pmatrix}$ is

(2.9) $\qquad M^{-1} = J^{-1}\,{}^{t}M\,J = \begin{pmatrix} {}^{t}D & -{}^{t}B \\ -{}^{t}C & {}^{t}A \end{pmatrix}.$
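The relations (2.7)–(2.9) are mechanical to verify on words in the generators (2.2)–(2.4). Below is a pure-Python spot check with exact integer matrices for n = 2 (our illustration, not part of the text):

```python
def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

def transpose(P):
    return [list(row) for row in zip(*P)]

E4 = [[1 if i == j else 0 for j in range(4)] for i in range(4)]
J = [[0, 0, 1, 0], [0, 0, 0, 1], [-1, 0, 0, 0], [0, -1, 0, 0]]  # J_2, cf. (2.4)
negJ = [[-x for x in row] for row in J]                          # J^{-1} = -J

def T(S):   # the generator (2.3); S must be symmetric
    return [[1, 0, S[0][0], S[0][1]], [0, 1, S[1][0], S[1][1]],
            [0, 0, 1, 0], [0, 0, 0, 1]]

def U(V, Vstar):   # the generator (2.2); the caller supplies V* = tV^{-1}
    M = [[0] * 4 for _ in range(4)]
    for i in range(2):
        for j in range(2):
            M[i][j] = Vstar[i][j]
            M[2 + i][2 + j] = V[i][j]
    return M

# a sample word in the generators
M = matmul(matmul(T([[1, 2], [2, -3]]), J),
           U([[1, 1], [0, 1]], [[1, 0], [-1, 1]]))

assert matmul(matmul(transpose(M), J), M) == J     # tM J M = J: M is symplectic

A = [r[:2] for r in M[:2]]; B = [r[2:] for r in M[:2]]
C = [r[:2] for r in M[2:]]; D = [r[2:] for r in M[2:]]
tA, tB, tC, tD = map(transpose, (A, B, C, D))
assert matmul(tA, C) == matmul(tC, A)              # first relation of (2.7)
assert matmul(tB, D) == matmul(tD, B)              # second relation of (2.7)
AD, CB = matmul(tA, D), matmul(tC, B)
assert [[AD[i][j] - CB[i][j] for j in range(2)]
        for i in range(2)] == [[1, 0], [0, 1]]     # third relation of (2.7)

Minv = matmul(matmul(negJ, transpose(M)), J)       # (2.9): M^{-1} = J^{-1} tM J
assert matmul(M, Minv) == E4
```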

THEOREM 2.2. The symplectic group of degree n is generated by the matrices (2.2)–(2.4). In other words, $S = Sp_n(\mathbf{R})$.

PROOF. Let $M = \begin{pmatrix} A & B \\ C & D \end{pmatrix}$ be an arbitrary symplectic matrix. The upper-left
block of the symplectic matrix $U(V)MU(V_1)$ is equal to $V^{*}AV_1$, and so by a suitable
choice of $V, V_1 \in GL_n(\mathbf{R})$ this block can be brought to the form $\begin{pmatrix} E_r & 0 \\ 0 & 0 \end{pmatrix}$, where
r is the rank of A and $E_r$ is the r×r identity matrix. Thus, we may assume from
the beginning that $A = \begin{pmatrix} E_r & 0 \\ 0 & 0 \end{pmatrix}$. Then, if $C = \begin{pmatrix} C_1 & C_2 \\ C_3 & C_4 \end{pmatrix}$ is the corresponding
partition of C into blocks, the first relation in (2.7) shows that $C_2 = 0$ and $C_1 = {}^{t}C_1$.
In addition, $\det C_4 \ne 0$, since otherwise the first n columns of M would be linearly
dependent. If we now pass from M to the matrix $T(\lambda E_n)M$, where λ is a real number,
we see that the new matrix has upper-left block equal to
$$A' = A + \lambda C = \begin{pmatrix} E_r + \lambda C_1 & 0 \\ \lambda C_3 & \lambda C_4 \end{pmatrix},$$
and so it has rank n for λ sufficiently small. We see that from the beginning we may
assume, without loss of generality, that $A = E_n$. Now, by the first relation in (2.7), C
is a symmetric matrix, and
$$J^{-1}T(C)JM = \begin{pmatrix} E & 0 \\ -C & E \end{pmatrix}\begin{pmatrix} E & B \\ C & D \end{pmatrix} = \begin{pmatrix} E & B \\ 0 & D_1 \end{pmatrix}.$$
The third and second relations in (2.7) show that $D_1 = E_n$ and ${}^{t}B = B$ in the last
matrix, and hence it is equal to the matrix $T(B)$. □

PROBLEM 2.3. Suppose that the matrix $M = \begin{pmatrix} A & B \\ C & D \end{pmatrix} \in M_{2n}(\mathbf{R})$ satisfies the
condition ${}^{t}MJ_nM = rJ_n$, $r \ne 0$, where $J_n$ is the matrix (2.4). Show that the map (2.5)
is defined and holomorphic on $\mathbf{H}_n$, maps $\mathbf{H}_n$ onto itself if $r > 0$, and maps $\mathbf{H}_n$ onto
$\{Z = X - iY;\ Z = X + iY \in \mathbf{H}_n\}$ if $r < 0$.
2. The Siegel upper half-plane. We saw that the group $Sp_n(\mathbf{R})$ acts as a group of
holomorphic automorphisms of $\mathbf{H}_n$ according to the rule

(2.10) $\qquad M = \begin{pmatrix} A & B \\ C & D \end{pmatrix}\colon Z \to M\langle Z\rangle = (AZ+B)(CZ+D)^{-1}.$


PROPOSITION 2.4. The action of the symplectic group on the upper half-plane is
transitive.

PROOF. Let $Z = X + iY \in \mathbf{H}_n$. Since $Y > 0$, there exists a matrix $A \in GL_n(\mathbf{R})$
with ${}^{t}AA = Y$. Then $M = T(X)U(A^{-1})$ is a symplectic matrix, and $M\langle iE_n\rangle = X + i\,{}^{t}AA = Z$. □
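The proof of Proposition 2.4 is constructive. For n = 1, where $\mathbf{H}_1$ is the classical upper half-plane and $Sp_1(\mathbf{R}) = SL_2(\mathbf{R})$, the recipe ${}^{t}AA = Y$ reads $A = \sqrt{y}$, and $M = T(x)U(A^{-1})$ carries i to $z = x + iy$. A quick numerical check (our sketch, not from the book):

```python
import math

def act(M, z):
    """The fractional-linear action (2.10) for n = 1."""
    (a, b), (c, d) = M
    return (a * z + b) / (c * z + d)

def point_to_matrix(z):
    """The matrix M = T(x) U(1/sqrt(y)) from the proof of Proposition 2.4,
    which sends iE_1 = i to z = x + iy; here U(V) = diag(V*, V), V* = 1/V."""
    x, y = z.real, z.imag
    s = math.sqrt(y)
    # T(x) * U(1/s) = [[1, x], [0, 1]] * [[s, 0], [0, 1/s]]
    return [[s, x / s], [0, 1 / s]]

for z in (0.3 + 0.5j, -2.0 + 4.0j):
    M = point_to_matrix(z)
    assert abs(act(M, 1j) - z) < 1e-12                      # M<i> = z
    assert abs(M[0][0] * M[1][1] - M[0][1] * M[1][0] - 1) < 1e-12  # det M = 1
```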

This transitivity implies that $\mathbf{H}_n$ can be identified with the quotient $Sp_n(\mathbf{R})/\mathrm{St}(Z)$
of the symplectic group by the stabilizer of an arbitrary point $Z \in \mathbf{H}_n$. All of the
stabilizers are obviously conjugate to the stabilizer of the point $iE_n$. The structure of
the latter group is given by the next proposition.

PROPOSITION 2.5. One has
$$\{M \in Sp_n(\mathbf{R});\ M\langle iE_n\rangle = iE_n\} = \Bigl\{ M = \begin{pmatrix} A & B \\ -B & A \end{pmatrix};\ u(M) = A + iB \in U(n) \Bigr\},$$
where $U(n)$ is the unitary group of order n. The map $M \to u(M)$ is an isomorphism of
$\mathrm{St}(iE_n)$ with the unitary group $U(n)$.

PROOF. The proposition follows easily from the definitions. □

If $Z = X + iY$ and $Z_1 = X_1 + iY_1$ are any two matrices in $\mathbf{H}_n$ and $t \in \mathbf{R}$,
$0 \le t \le 1$, then the matrix $tY + (1-t)Y_1$ is obviously positive definite. Hence,
$tZ + (1-t)Z_1 \in \mathbf{H}_n$. This remark implies

PROPOSITION 2.6. $\mathbf{H}_n$ is a convex and simply connected domain.
The upper half-plane $\mathbf{H}_n$ is obviously an open subset of n(n+1)-dimensional
real space. We shall show that $\mathbf{H}_n$ has an n(n+1)-dimensional volume element that is
invariant under all symplectic transformations, and we shall find such an element. With
this purpose in mind, we first examine what happens under symplectic transformations
to the Euclidean volume element on $\mathbf{H}_n$.

LEMMA 2.7. Let
$$dZ = \prod_{1 \le \alpha \le \beta \le n} dx_{\alpha\beta}\,dy_{\alpha\beta} \qquad (Z = (x_{\alpha\beta} + iy_{\alpha\beta}) \in \mathbf{H}_n)$$
be the Euclidean volume element on $\mathbf{H}_n$. Then for any symplectic matrix $M = \begin{pmatrix} A & B \\ C & D \end{pmatrix}$
we have
$$dM\langle Z\rangle = |\det(CZ+D)|^{-2n-2}\,dZ.$$
PROOF. For $Z = (z_{\alpha\beta}) = (x_{\alpha\beta} + iy_{\alpha\beta})$ we set $Z' = (z'_{\alpha\beta}) = (x'_{\alpha\beta} + iy'_{\alpha\beta}) = M\langle Z\rangle$.
To prove the lemma, we must find the Jacobian determinant of the variables $x'_{\alpha\beta}, y'_{\alpha\beta}$
with respect to the variables $x_{\alpha\beta}, y_{\alpha\beta}$, i.e., the determinant of the transition matrix from
the n(n+1)-vector whose components (in any order) are the differentials $dx'_{\alpha\beta}, dy'_{\alpha\beta}$,
to the analogous vector with components $dx_{\alpha\beta}, dy_{\alpha\beta}$. It is actually simpler to work
with the corresponding question for (n)-vectors made up of the complex differentials
$dz'_{\alpha\beta} = dx'_{\alpha\beta} + i\,dy'_{\alpha\beta}$ and $dz_{\alpha\beta} = dx_{\alpha\beta} + i\,dy_{\alpha\beta}$. If $Z_1, Z_2 \in \mathbf{H}_n$, then, taking into
account the symmetry of $Z_2$, we have

(2.11) $\qquad Z_2' - Z_1' = (Z_2{}^{t}C + {}^{t}D)^{-1}(Z_2{}^{t}A + {}^{t}B) - (AZ_1+B)(CZ_1+D)^{-1}$
$\qquad = (Z_2{}^{t}C + {}^{t}D)^{-1}\{(Z_2{}^{t}A + {}^{t}B)(CZ_1+D) - (Z_2{}^{t}C + {}^{t}D)(AZ_1+B)\}(CZ_1+D)^{-1}$
$\qquad = (Z_2{}^{t}C + {}^{t}D)^{-1}(Z_2 - Z_1)(CZ_1+D)^{-1},$

where we used (2.7) in the last step. From (2.11) it follows that

(2.12) $\qquad DZ' = {}^{t}(CZ+D)^{-1}\,DZ\,(CZ+D)^{-1},$

where $DZ = (dz_{\alpha\beta})$ and $DZ' = (dz'_{\alpha\beta})$ are symmetric matrices of complex differentials.
We let ρ denote the (n)-dimensional representation of $GL_n(\mathbf{C})$ which associates to every
matrix U the linear transformation $(v_{\alpha\beta}) \to U(v_{\alpha\beta}){}^{t}U$ of the variables $v_{\alpha\beta} = v_{\beta\alpha}$,

1 ~ a,p ~ n {this is the symmetric square of the standard representation of GLn).


Then

(2.13) detp{U) = {det U)n+I. 0

In fact, by replacing $U$ by a matrix of the form $W^{-1}UW$ for a suitable $W$, without loss of generality we may assume that $U$ is upper triangular (for example, a matrix in Jordan normal form). If $u_1, \dots, u_n$ are the diagonal entries in $U$ and the variables $v_{\alpha\beta}$, $1 \le \alpha \le \beta \le n$, are ordered lexicographically, then it is not hard to see that $\rho(U)$ is also an upper-triangular matrix with diagonal entries $u_1u_1, \dots, u_1u_n, \dots, u_nu_n$; this implies (2.13). The relations (2.12)-(2.13) enable us to find the determinant of the transition matrix for the complex differentials under the map $Z \to Z'$. We return to the real differentials. Let $dZ$ (resp. $dZ'$) be the $n(n+1)/2$-dimensional column with components $dz_{\alpha\beta}$ (resp. $dz'_{\alpha\beta}$), arranged in any order. Then in the above notation we can write $dZ' = \rho((CZ + D)^*)\,dZ$ (here $U^* = {}^tU^{-1}$). Setting $\rho((CZ + D)^*) = R + iS$ and separating the real and imaginary parts, we obtain the relations $dX' = R\,dX - S\,dY$, $dY' = S\,dX + R\,dY$.
Thus, the Jacobian matrix of the transformation $Z \to Z'$ is $\begin{pmatrix} R & -S\\ S & R\end{pmatrix}$, and
\[
\begin{aligned}
\det\begin{pmatrix} R & -S\\ S & R\end{pmatrix}
&= \det\left(\begin{pmatrix} E & iE\\ 0 & E\end{pmatrix}\begin{pmatrix} R & -S\\ S & R\end{pmatrix}\begin{pmatrix} E & -iE\\ 0 & E\end{pmatrix}\right)
= \det\begin{pmatrix} R + iS & 0\\ S & R - iS\end{pmatrix}\\
&= |\det(R + iS)|^2 = |\det(CZ + D)|^{-2n-2},
\end{aligned}
\]
since $\det(R + iS) = \det\rho((CZ + D)^*) = \det(CZ + D)^{-n-1}$ by (2.13). $\square$

LEMMA 2.8. If $Z' = X' + iY' = M\langle Z\rangle$, where $Z = X + iY \in H_n$ and $M = \begin{pmatrix} A & B\\ C & D\end{pmatrix} \in Sp_n(\mathbf{R})$, then
\[ Y' = {}^t(CZ + D)^{-1}\,Y\,\overline{(CZ + D)}^{-1}; \]
in particular,

(2.14) $\det Y' = |\det(CZ + D)|^{-2}\det Y$.

PROOF. If we compute $Y' = (1/2i)(Z' - \overline{Z'})$ using equation (2.11) with $Z_2 = Z$ and $Z_1 = \overline{Z}$, we obtain the lemma. $\square$

Combining Lemmas 2.7 and 2.8, we obtain

PROPOSITION 2.9. The volume element on the Siegel upper half-plane $H_n$ that is given by
\[ d^*Z = (\det Y)^{-n-1}\,dZ = (\det Y)^{-n-1}\prod_{1\le\alpha\le\beta\le n} dx_{\alpha\beta}\,dy_{\alpha\beta}, \]
where $Z = X + iY = (x_{\alpha\beta}) + i(y_{\alpha\beta}) \in H_n$, is invariant relative to all symplectic transformations, i.e., $d^*M\langle Z\rangle = d^*Z$ for $M \in Sp_n(\mathbf{R})$.
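The case $n = 1$ is easy to verify numerically: here $H_1$ is the ordinary upper half-plane, $Sp_1(\mathbf{R}) = SL_2(\mathbf{R})$, and $d^*Z = y^{-2}\,dx\,dy$ is the classical hyperbolic measure. The sketch below uses an arbitrarily chosen matrix and test point (not taken from the text).

```python
# Numerical spot-check of Lemma 2.8 and Proposition 2.9 for n = 1.
# For M in SL_2(R) and z in H_1:  Im M<z> = Im z / |cz + d|^2, while the
# Euclidean element picks up the Jacobian |cz + d|^{-4} = |det(Cz+D)|^{-2n-2},
# so the combination (Im z)^{-2} dx dy is invariant.
a, b, c, d = 2.0, 1.0, 3.0, 2.0      # ad - bc = 1, so M lies in SL_2(R)
z = 0.3 + 0.7j
w = (a * z + b) / (c * z + d)        # w = M<z>
j = c * z + d

assert abs(w.imag - z.imag / abs(j) ** 2) < 1e-12                 # Lemma 2.8
assert abs(w.imag ** -2 * abs(j) ** -4 - z.imag ** -2) < 1e-9     # Proposition 2.9
```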
§3. SYMPLECTIC TRANSFORMATIONS OF THETA-SERIES II

PROBLEM 2.10. Prove that the Cayley map
\[ Z \to W = (Z - iE_n)(Z + iE_n)^{-1} \qquad (Z \in H_n) \]
gives an analytic isomorphism of $H_n$ with the bounded region $\{W \in S_n(\mathbf{C});\ \overline{W}\,W < E_n\}$, where the inequality is understood in the sense of Hermitian matrices. Prove that the inverse map is given by the formula $W \to Z = i(E_n + W)(E_n - W)^{-1}$.
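For $n = 1$ the Cayley map is the familiar disk-model map $z \to (z - i)/(z + i)$, and both assertions of the problem are easy to confirm numerically (the test points below are arbitrary choices):

```python
# Sanity check of Problem 2.10 for n = 1: the Cayley map sends the upper
# half-plane into the unit disk, and the stated inverse formula recovers z.
def cayley(z):
    return (z - 1j) / (z + 1j)

def inv_cayley(w):
    return 1j * (1 + w) / (1 - w)

for z in (1j, 0.5 + 0.25j, -2.0 + 3.0j):
    w = cayley(z)
    assert abs(w) < 1                      # |w| < 1 exactly when Im z > 0
    assert abs(inv_cayley(w) - z) < 1e-12  # the inverse map recovers z
```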
PROBLEM 2.11. Prove that the volume element
\[ d^*Y = (\det Y)^{-(n+1)/2}\prod_{1\le\alpha\le\beta\le n} dy_{\alpha\beta} \]
on the space $P_n = \{Y \in S_n(\mathbf{R});\ Y > 0\}$ is invariant relative to all transformations of the form $Y \to {}^tgYg$, where $g \in GL_n(\mathbf{R})$.

§3. Symplectic transformations of theta-series


1. Transformations of theta-series. The analytic and algebraic study of theta-series
is based on the remarkable fact that the theta-series of integral quadratic forms trans-
form according to certain simple rules under a rather large group of symplectic transfor-
mations. Usually, this group is the subgroup of the symplectic group that is generated
by certain standard transformations whose action on theta-series can be found by a
direct computation. However, in the multidimensional situation, where the generators and relations for these subgroups are often unknown, it is very difficult to find the transformation groups of arbitrary theta-series directly. Instead, we first express all theta-series in terms of the simplest ones, the "theta-functions". After determining the transformation groups of theta-functions, we can readily find the transformation groups of arbitrary theta-series.
We introduce some notation. If $Q$ is a symmetric $k\times k$ matrix and $N$ is a $k\times l$ matrix, we write

(3.1) $Q[N] = {}^tNQN$.

If $Z \in H_k$ and $W, W' \in M_{k,1}(\mathbf{C})$, then it is easy to see that the series
\[
(3.2)\qquad \theta^k(Z;\,W,\,W') = \sum_{N\in M_{k,1}} \exp(\pi i(Z[N - W'] + 2\,{}^tNW - {}^tW'\cdot W))
\]
is absolutely convergent. Moreover, if $Z \in H_k(\varepsilon)$, where $\varepsilon > 0$ (see (1.11)), and if $W$ and $W'$ belong to fixed compact subsets of $M_{k,1}(\mathbf{C})$, then the series (3.2) converges uniformly, as do the series that are obtained from it by taking partial derivatives. The holomorphic function on $H_k \times M_{k,1}(\mathbf{C}) \times M_{k,1}(\mathbf{C})$ that is defined by the series (3.2) is called the theta-function of degree $k$.

Let $V \in \Lambda^k = GL_k(\mathbf{Z})$. If we replace $Z$ by $Z[{}^tV]$ in (3.2) and take into account the absolute convergence of the series, we find by a simple computation that
\[ \theta^k(Z[{}^tV];\,W,\,W') = \theta^k(Z;\,V^{-1}W,\,{}^tVW'), \]
and hence, replacing $W$ by $VW$ and $W'$ by $V^*W'$ (where $V^* = {}^tV^{-1}$), we obtain the identity

(3.3) $\theta^k(Z[{}^tV];\,VW,\,V^*W') = \theta^k(Z;\,W,\,W')$.

Now let $S = (s_{\alpha\beta})$ be an integral symmetric $k\times k$ matrix. Substituting $Z + S$ in place of $Z$ in (3.2), we obtain
\[ \theta^k(Z + S;\,W,\,W') = \sum_{N\in M_{k,1}} \exp(\pi i(Z[N - W'] + S[N - W'] + 2\,{}^tNW - {}^tW'W)). \]
Since $S[N - W'] = S[N] - 2\,{}^tNSW' + S[W']$ and
\[ S[N] = \sum_{\alpha,\beta} s_{\alpha\beta}n_\alpha n_\beta = \sum_\alpha s_{\alpha\alpha}n_\alpha^2 + 2\sum_{\alpha<\beta} s_{\alpha\beta}n_\alpha n_\beta \equiv \sum_\alpha s_{\alpha\alpha}n_\alpha = {}^tN\cdot dc(S) \pmod 2, \]
where $dc(S)$ is the column with components $s_{\alpha\alpha}$, it follows that
\[ \theta^k(Z + S;\,W,\,W') = \exp\Big(\frac{\pi i}{2}\,{}^tW'\cdot dc(S)\Big)\,\theta^k\Big(Z;\ W - SW' + \tfrac12 dc(S),\ W'\Big), \]
from which, if we substitute $W \to W + SW' - \tfrac12 dc(S)$ and divide both sides by $\exp(\frac{\pi i}{2}\,{}^tW'\cdot dc(S))$, we obtain the identity
\[
(3.4)\qquad \exp\Big(-\frac{\pi i}{2}\,{}^tW'\cdot dc(S)\Big)\,\theta^k\Big(Z + S;\ W + SW' - \tfrac12 dc(S),\ W'\Big) = \theta^k(Z;\,W,\,W').
\]
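Identity (3.4) can be checked numerically in the scalar case $k = 1$, where $Z = z$, $W = w$, $W' = w'$ are numbers, $S = (s)$ is an integer, and $dc(S) = s$. The parameter values below are arbitrary, and the series is truncated at $|n| \le 40$:

```python
import cmath

# Truncated theta-function of degree 1: sum over n of
# exp(pi*i*(z*(n - w')^2 + 2*n*w - w'*w)).
def theta1(z, w, wp, terms=40):
    return sum(cmath.exp(cmath.pi * 1j * (z * (n - wp) ** 2 + 2 * n * w - wp * w))
               for n in range(-terms, terms + 1))

z, w, wp, s = 0.2 + 0.9j, 0.3 + 0.1j, -0.4 + 0.2j, 3
# Identity (3.4): exp(-pi*i/2 * w'*s) * theta(z+s; w + s*w' - s/2, w') = theta(z; w, w')
lhs = cmath.exp(-cmath.pi * 1j / 2 * wp * s) * theta1(z + s, w + s * wp - s / 2, wp)
assert abs(lhs - theta1(z, w, wp)) < 1e-10
```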

The last of the basic transformation rules for theta-functions, and the only nontrivial one, is the inversion formula, which has its origin in the famous Jacobi inversion formula.

LEMMA 3.1 (Inversion formula for theta-functions). One has the identity

(3.5) $\theta^k(-Z^{-1};\,W',\,-W) = (\det(-iZ))^{1/2}\,\theta^k(Z;\,W,\,W')$,

where the square root is positive for $Z = iY$ and is extended to arbitrary $Z$ by analytic continuation (see Proposition 2.6).
PROOF. The function
\[ \exp(-\pi i\,{}^tW'W)\,\theta^k(Z;\,W,\,W') = \sum_{N\in M_{k,1}} \exp(\pi iZ[N - W'] + 2\pi i\,{}^t(N - W')W) \]
obviously depends holomorphically on the components $w'_r$ of the vector $W'$ and is periodic of period 1 in each component. We introduce new variables $u_r$ by setting $u_r = \exp(2\pi iw'_r)$, $w'_r = (1/2\pi i)\log u_r$. Then our function is a single-valued analytic function of $u_1, \dots, u_k$ in the region $0 < |u_1|, \dots, |u_k| < \infty$. Consequently, it has an absolutely convergent Laurent expansion
\[
(3.6)\qquad \exp(-\pi i\,{}^tW'W)\,\theta^k(Z;\,W,\,W') = \sum c(L)\,u_1^{l_1}\cdots u_k^{l_k} = \sum_{L\in M_{k,1}} c(L)\exp(2\pi i\,{}^tW'L),
\]
where the coefficients $c(L)$ depend only on $Z$, $W$, and $L$. This series converges uniformly if $W'$ belongs to any set of the form $M_{k,1}(\mathbf{R}) + W_0$ with fixed $W_0$ (since the series is majorized by an absolutely convergent numerical series), and the series can be integrated term-by-term over subsets of such sets. We multiply both sides of (3.6)
by $\exp(-2\pi i\,{}^tW'L)$, set $W' = H + W_0$ (where $W_0$ will be chosen later), and integrate term-by-term over the unit cube $C = \{H = (h_r) \in M_{k,1}(\mathbf{R});\ 0 \le h_r \le 1\}$ with respect to the Euclidean measure $dH = dh_1\cdots dh_k$. We obtain
\[
\begin{aligned}
c(L) &= \int_C \theta^k(Z;\,W,\,H + W_0)\exp(-\pi i\,{}^t(H + W_0)W - 2\pi i\,{}^t(H + W_0)L)\,dH\\
&= \int_C \sum_{N\in M_{k,1}} \exp(\pi i(Z[N - H - W_0] + 2\,{}^t(N - H - W_0)W + 2\,{}^t(N - H - W_0)L))\,dH
\end{aligned}
\]
(note that the numbers ${}^tNL$ are integers). If we integrate the uniformly convergent series term-by-term, we obtain
\[
\begin{aligned}
c(L) &= \sum_{N\in M_{k,1}} \int_{-N+C} \exp(\pi i(Z[H + W_0] - 2\,{}^t(H + W_0)(W + L)))\,dH\\
&= \int_{M_{k,1}(\mathbf{R})} \exp(\pi i(Z[H + W_0] - 2\,{}^t(H + W_0)(W + L)))\,dH.
\end{aligned}
\]
Applying the obvious identity
\[ Z[H + W_0] - 2\,{}^t(H + W_0)(W + L) = Z[H + W_0 - Z^{-1}(W + L)] - Z^{-1}[W + L] \]
and then setting $W_0 = Z^{-1}(W + L)$, we arrive at the formula
\[ c(L) = \exp(-\pi iZ^{-1}[L + W])\int_{M_{k,1}(\mathbf{R})} \exp(\pi iZ[H])\,dH. \]
If we substitute these expressions into (3.6), we find that
\[ \theta^k(Z;\,W,\,W') = I(Z)\sum_{L\in M_{k,1}} \exp(\pi i(-Z^{-1}[L + W] + 2\,{}^tLW' + {}^tWW')) = I(Z)\,\theta^k(-Z^{-1};\,W',\,-W), \]
where
\[ I(Z) = \int_{M_{k,1}(\mathbf{R})} \exp(\pi iZ[H])\,dH. \]
If $Z = iY$, then, making the change of variables $H = VH'$, where $V \in GL_k(\mathbf{R})$ and $Y[V] = E_k$, we obtain
\[ I(Z) = \int_{M_{k,1}(\mathbf{R})} \exp(-\pi\,{}^tH'\cdot H')\,|\det V|\,dH' = (\det Y)^{-1/2}\Big(\int_{\mathbf{R}} \exp(-\pi h^2)\,dh\Big)^k = \det(-iZ)^{-1/2}. \]
Since the left and right sides of this equality are holomorphic functions of each entry in the matrix $Z \in H_k$, it follows by the principle of analytic continuation that the equality holds for all $Z \in H_k$. $\square$
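For $k = 1$ and $W = W' = 0$ the lemma reduces to the classical Jacobi identity $\theta(-1/z) = (-iz)^{1/2}\theta(z)$ for $\theta(z) = \sum_n \exp(\pi in^2z)$, which is easy to confirm numerically (truncated series; the test point is an arbitrary choice):

```python
import cmath

# Jacobi inversion, the k = 1, W = W' = 0 case of Lemma 3.1:
#   theta(-1/z) = sqrt(-i*z) * theta(z),  theta(z) = sum_n exp(pi*i*n^2*z).
def theta(z, terms=60):
    return sum(cmath.exp(cmath.pi * 1j * n * n * z) for n in range(-terms, terms + 1))

z = 0.3 + 0.8j
lhs = theta(-1 / z)
rhs = cmath.sqrt(-1j * z) * theta(z)   # principal root: Re(-iz) > 0 on H_1
assert abs(lhs - rhs) < 1e-10
```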

In order to determine the transformation group that is implicit in the functional equations (3.3)-(3.5), we introduce some notation. First of all, for $W, W' \in M_{k,1}(\mathbf{C})$ we set
\[ \Omega = \begin{pmatrix} W\\ W'\end{pmatrix} \in M_{2k,1}(\mathbf{C}), \]
and we define

(3.7) $\theta(Z,\,\Omega) = \theta^k(Z;\,W,\,W')$.

Next, we let

(3.8) $\Gamma^k = Sp_k(\mathbf{Z}) = Sp_k(\mathbf{R}) \cap M_{2k}(\mathbf{Z})$

denote the set of integral symplectic matrices. It follows from the definition that $\Gamma^k$ is a semigroup. The relation (2.9) shows that $\Gamma^k$ is a group. The group $\Gamma^k$ is called the Siegel modular group of degree $k$. Given a matrix $M = \begin{pmatrix} A & B\\ C & D\end{pmatrix} \in \Gamma^k$, we let $\xi(M)$ and $\eta(M)$ denote the diagonal entries (arranged in a column) of the symmetric matrices $B\,{}^tA$ and $C\,{}^tD$, respectively, and we set
\[
(3.9)\qquad \zeta(M) = \begin{pmatrix} \xi(M)\\ \eta(M)\end{pmatrix} = \begin{pmatrix} dc(B\,{}^tA)\\ dc(C\,{}^tD)\end{pmatrix} \in M_{2k,1}.
\]
Finally, given $M = \begin{pmatrix} A & B\\ C & D\end{pmatrix} \in \Gamma^k$ and a function $F$ on $H_k \times M_{2k,1}(\mathbf{C})$, we define
\[
(3.10)\qquad (F|M)(Z,\,\Omega) = \det(CZ + D)^{-1/2}\exp\Big(-\frac{\pi i}{2}\,{}^t\zeta(M)J_kM\Omega\Big)\,F\Big(M\langle Z\rangle,\ M\Omega - \tfrac12\zeta(M)\Big),
\]
where $J_k$ is the matrix (2.4). The function $F|M$ is defined up to a sign, which depends on which root is chosen in the first factor.

REMARK. From now on, unless stated otherwise, the symbol $\varphi^{1/2}$, where $\varphi = \varphi(Z)$ is a certain nonvanishing holomorphic function on the upper half-plane $H_k$, will denote one of the two single-valued (because $H_k$ is simply connected) holomorphic functions on $H_k$ obtained by analytic continuation of any local element of the function $\pm\varphi^{1/2}$. The two possible choices differ by a sign.

Now the formulas (3.3)-(3.5) can be rewritten in the form

(3.11) $(\theta|M)(Z,\,\Omega) = \chi(M)\,\theta(Z,\,\Omega)$,

where the matrix $M \in \Gamma^k$ is equal, respectively, to

(3.12) $U(V),\quad T(S),\quad J_k$,

where $V \in \Lambda^k$, $S \in S_k$, and the number $\chi(M)$ is chosen so that the product $\chi(M)\det(CZ + D)^{1/2}$ is equal to 1 in the first two cases and is equal to $\det(-iZ)^{1/2}$ in the third case.

PROPOSITION 3.2. Let $\Gamma'$ denote the subgroup of $\Gamma^k$ that is generated by all elements of the form (3.12). Then (3.11) holds for any $M \in \Gamma'$. Here $\chi(M)$ is a certain eighth root of unity that depends upon the sign chosen for the root in (3.10).

PROOF. Since (3.11) holds for the generators of the group $\Gamma'$, the proposition follows if we verify that for any $M, M_1 \in \Gamma'$

(3.13) $\theta|M|M_1 = \varepsilon\cdot\theta|MM_1$,

where $\varepsilon$ is an eighth root of unity that depends both on the matrices $M$ and $M_1$ and on the choice of sign in the definitions of $\theta|M$, $\theta|M|M_1$, and $\theta|MM_1$.

First of all, it is not hard to check by a direct substitution that, when the vector $\zeta(M)$ in the definition of $F|M$ is replaced by any vector of the form $\zeta(M) + 2L$, where $L = \begin{pmatrix} L_1\\ L_2\end{pmatrix} \in M_{2k,1}$, the expression for $F|M$ is multiplied by a fourth root of unity equal to
\[ \exp\Big(-\frac{\pi i}{2}\,{}^tLJ_k\zeta(M) - \pi i\,{}^tL_1L_2\Big). \]
Furthermore, if $M = \begin{pmatrix} A & B\\ C & D\end{pmatrix}$ and $M_1 = \begin{pmatrix} A_1 & B_1\\ C_1 & D_1\end{pmatrix}$, then from the definitions we have
\[
\begin{aligned}
(\theta|M|M_1)(Z,\,\Omega) ={}& \det(C_1Z + D_1)^{-1/2}\exp\Big(-\frac{\pi i}{2}\,{}^t\zeta(M_1)J_kM_1\Omega\Big)\\
&\times \det(CM_1\langle Z\rangle + D)^{-1/2}
\times \exp\Big(-\frac{\pi i}{2}\,{}^t\zeta(M)J_kM\Big(M_1\Omega - \tfrac12\zeta(M_1)\Big)\Big)\\
&\times \theta\Big(M\langle M_1\langle Z\rangle\rangle,\ M\Big(M_1\Omega - \tfrac12\zeta(M_1)\Big) - \tfrac12\zeta(M)\Big).
\end{aligned}
\]
Since
\[
(3.14)\qquad CM_1\langle Z\rangle + D = (C(A_1Z + B_1) + D(C_1Z + D_1))(C_1Z + D_1)^{-1} = (C_2Z + D_2)(C_1Z + D_1)^{-1}
\]
if $MM_1 = \begin{pmatrix} A_2 & B_2\\ C_2 & D_2\end{pmatrix}$, and since the number $\exp(\frac{\pi i}{2}\,{}^t\zeta(M)J_kM\zeta(M_1))$ to the eighth power is 1, it follows that, up to an eighth root of unity, the last expression is equal to
\[
\det(C_2Z + D_2)^{-1/2}\exp\Big(-\frac{\pi i}{2}\,{}^t(M\zeta(M_1) + \zeta(M))J_kMM_1\Omega\Big)
\times \theta\Big(MM_1\langle Z\rangle,\ MM_1\Omega - \tfrac12(M\zeta(M_1) + \zeta(M))\Big),
\]
where we used the equality $J_k[M] = J_k$ to transform the exponent in the first term. If we prove that $M\zeta(M_1) + \zeta(M) \equiv \zeta(MM_1) \pmod 2$, then it will follow that the last expression differs from $(\theta|MM_1)(Z,\,\Omega)$ only by a fourth root of unity. This will prove (3.13), and hence the proposition.

LEMMA 3.3. For any $M, M_1 \in \Gamma^k$ one has $\zeta(MM_1) \equiv M\zeta(M_1) + \zeta(M) \pmod 2$.

PROOF OF THE LEMMA. If $M = \begin{pmatrix} A & B\\ C & D\end{pmatrix}$ and $M_1 = \begin{pmatrix} A_1 & B_1\\ C_1 & D_1\end{pmatrix}$, then
\[ MM_1 = \begin{pmatrix} AA_1 + BC_1 & AB_1 + BD_1\\ CA_1 + DC_1 & CB_1 + DD_1\end{pmatrix}, \]
and, by definition,
\[ \zeta(MM_1) = \begin{pmatrix} dc((AB_1 + BD_1)\,{}^t(AA_1 + BC_1))\\ dc((CA_1 + DC_1)\,{}^t(CB_1 + DD_1))\end{pmatrix}. \]
If $M$ and $S$ are square integer matrices of the same size, and if $S$ is symmetric, then it is not hard to see that
\[ dc(MS\,{}^tM) \equiv M\,dc(S) \pmod 2. \]
From this congruence, the relations (2.8), and the fact that the diagonal does not change when taking the transpose, it follows that
\[
\zeta(MM_1) \equiv \begin{pmatrix} A\cdot dc(B_1\,{}^tA_1) + B\cdot dc(C_1\,{}^tD_1) + dc(A(B_1\,{}^tC_1 + A_1\,{}^tD_1)\,{}^tB)\\ C\cdot dc(B_1\,{}^tA_1) + D\cdot dc(C_1\,{}^tD_1) + dc(C(A_1\,{}^tD_1 + B_1\,{}^tC_1)\,{}^tD)\end{pmatrix}
\equiv M\zeta(M_1) + \zeta(M) \pmod 2,
\]
since, by (2.8), $A_1\,{}^tD_1 + B_1\,{}^tC_1 = E_k + 2B_1\,{}^tC_1 \equiv E_k \pmod 2$. This proves the lemma and Proposition 3.2. $\square$
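Lemma 3.3 is easy to test numerically in the smallest case $k = 1$, where $\Gamma^1 = SL_2(\mathbf{Z})$ and, for $M = \begin{pmatrix} a & b\\ c & d\end{pmatrix}$, the matrices $B\,{}^tA = (ab)$ and $C\,{}^tD = (cd)$ are $1\times 1$. The harness below (our own) tests random words in the standard generators:

```python
import random
import numpy as np

# Spot-check of Lemma 3.3 for k = 1:  zeta(M*M1) == M*zeta(M1) + zeta(M)  (mod 2),
# where zeta(M) = (a*b, c*d) for M = [[a, b], [c, d]] in SL_2(Z).
def zeta(M):
    (a, b), (c, d) = M
    return np.array([a * b, c * d])

S = np.array([[0, -1], [1, 0]])   # the matrix J_1
T = np.array([[1, 1], [0, 1]])    # the matrix T(1)

random.seed(0)

def rand_sl2(length=8):
    # a random word in the generators S, T of SL_2(Z)
    M = np.eye(2, dtype=int)
    for _ in range(length):
        M = M @ random.choice((S, T))
    return M

for _ in range(200):
    M, M1 = rand_sl2(), rand_sl2()
    assert ((zeta(M @ M1) - (M @ zeta(M1) + zeta(M))) % 2 == 0).all()
```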
PROBLEM 3.4. Show that the theta-function of degree $k$ satisfies the relations
\[ \theta^k(Z;\,W + L,\,W') = \exp(-\pi i\,{}^tLW')\,\theta^k(Z;\,W,\,W'), \]
\[ \theta^k(Z;\,W + ZL,\,W') = \exp(\pi i(-Z[L] - 2\,{}^tLW + {}^tLZW'))\,\theta^k(Z;\,W,\,W') \]
for any $k$-dimensional integer vector $L \in M_{k,1}$. Further show that any function $F(Z;\,W,\,W')$ that satisfies all of these relations and is holomorphic in $W$ has the form $F(Z;\,W,\,W') = F_0(Z,\,W')\,\theta^k(Z;\,W,\,W')$, where $F_0$ depends only on $Z$ and $W'$.
PROBLEM 3.5. Show that the theta-function of degree $k$ satisfies the relations
\[ \theta^k(Z;\,W,\,W' + L) = \exp(\pi i\,{}^tLW)\,\theta^k(Z;\,W,\,W'), \]
\[ \theta^k(Z;\,W,\,W' - Z^{-1}L) = \exp(\pi i(Z^{-1}[L] - 2\,{}^tLW' + {}^tLZ^{-1}W))\,\theta^k(Z;\,W,\,W') \]
for any $k$-dimensional integer vector $L \in M_{k,1}$. Further show that any function $F(Z;\,W,\,W')$ that satisfies all of these relations and is holomorphic in $W'$ has the form $F(Z;\,W,\,W') = F_1(Z,\,W)\,\theta^k(-Z^{-1};\,W',\,-W)$, where $F_1$ depends only on $Z$ and $W$.
2. The Siegel modular group and the theta-group. In this subsection we show that the group $\Gamma'$ that is generated by matrices of the form (3.12) is actually the entire Siegel modular group $\Gamma^k$. Thus, the functional equations (3.11) hold for all $M \in \Gamma^k$. These equations take a particularly simple form if $M$ belongs to a certain subgroup of $\Gamma^k$ called the theta-group.

THEOREM 3.6. The Siegel modular group $\Gamma^k$ of degree $k$ is generated by matrices of the form (3.12).

For later use, we shall prove a more general fact.

PROPOSITION 3.7. Let $M$ be a $2k\times 2k$ matrix with entries in the rational number field $\mathbf{Q}$ which satisfies the condition

(3.15) ${}^tMJ_kM = r\cdot J_k$,

where $r \ne 0$. Then there exists a matrix $g$ in the group $\Gamma'$ that is generated by matrices of the form (3.12) such that the product $gM$ has a $k\times k$ block of zeros in the lower-left corner:
\[ gM = \begin{pmatrix} A_1 & B_1\\ 0 & D_1\end{pmatrix}. \]
We first note that Theorem 3.6 follows from Proposition 3.7. Namely, if $M \in \Gamma^k$, then, by Proposition 3.7, there exists $g \in \Gamma'$ such that the matrix $M_1 = gM$ has the above form. Since $M_1$ is a symplectic matrix, we have ${}^tA_1D_1 = E_k$ and ${}^tB_1D_1 = {}^tD_1B_1$ (see (2.7)). Since $M_1$ is also an integer matrix, it follows that $A_1, D_1 \in \Lambda^k$ and $S = B_1D_1^{-1} = {}^t(B_1D_1^{-1}) \in S_k$. Thus, $M_1 = T(S)U(D_1)$ and $M = g^{-1}M_1 \in \Gamma'$.
Before proceeding to the proof of Proposition 3.7, we prove two useful lemmas.

LEMMA 3.8. Let $k \ge 2$, and let $u = {}^t(u_1, \dots, u_k)$ be an arbitrary nonzero $k$-dimensional column of integers. Then there exists a matrix $V$ in the group $SL_k(\mathbf{Z})$ of $k\times k$ integer matrices of determinant $+1$ such that

(3.16) $Vu = {}^t(d, 0, \dots, 0)$,

where $d$ is the greatest common divisor of $u_1, \dots, u_k$.

PROOF. For $k = 2$ the lemma follows from the fact that the g.c.d. of two integers can be written as an integer linear combination of those integers. The general case follows by an obvious induction on $k$. $\square$
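The proof of Lemma 3.8 is constructive: the $2\times 2$ step is the Bézout identity, and the general case chains such steps together. The following sketch (the function names `ext_gcd` and `reduce_column` are ours, introduced for illustration) builds the matrix $V$ explicitly:

```python
import numpy as np

def ext_gcd(a, b):
    # returns (g, x, y) with x*a + y*b = g = gcd(a, b) >= 0
    if b == 0:
        return (abs(a), 1 if a >= 0 else -1, 0)
    g, x, y = ext_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

def reduce_column(u):
    # returns V in SL_k(Z) with V @ u = (d, 0, ..., 0), d = gcd of the entries;
    # each 2x2 step [[x, y], [-b/g, a/g]] has determinant (x*a + y*b)/g = 1
    u = np.array(u, dtype=int)
    k = len(u)
    V = np.eye(k, dtype=int)
    for i in range(k - 1, 0, -1):
        a, b = u[i - 1], u[i]
        g, x, y = ext_gcd(a, b)
        if g == 0:
            continue                      # both entries are zero: nothing to do
        block = np.eye(k, dtype=int)
        block[i - 1, i - 1], block[i - 1, i] = x, y
        block[i, i - 1], block[i, i] = -b // g, a // g
        u = block @ u
        V = block @ V
    return V

u = [12, -8, 18]
V = reduce_column(u)
assert round(np.linalg.det(V)) == 1
assert list(V @ np.array(u)) == [2, 0, 0]   # gcd(12, -8, 18) = 2
```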

LEMMA 3.9. Let $u$ be a nonzero $2k$-dimensional column of integers. Then there exists a matrix $g \in \Gamma'$ such that

(3.17) $gu = {}^t(d, 0, \dots, 0)$,

where $d$ is the greatest common divisor of the entries in $u$.

PROOF. We shall write $u$ in the form $u = \begin{pmatrix} a\\ c\end{pmatrix}$, where $a$ and $c$ are $k$-dimensional columns. We first prove that the set $\{gu;\ g \in \Gamma'\}$ contains a column $u' = \begin{pmatrix} a'\\ c'\end{pmatrix}$ with $a' = 0$ or $c' = 0$. Assume the contrary. Then $a' \ne 0$ and $c' \ne 0$ for every $u' = gu$. Let $\alpha'$ and $\gamma'$ denote the greatest common divisors of the entries of $a'$ and $c'$, respectively. We choose $u'$ in such a way that the product $\alpha'\gamma'$ is minimal and $\alpha' \ge \gamma'$ (if $\gamma' > \alpha'$, we replace $u'$ by $J_ku'$). By Lemma 3.8, by replacing $u'$ by $U(V)u'$ with a suitable $V \in SL_k(\mathbf{Z})$, we may assume that $c' = {}^t(\gamma', 0, \dots, 0)$. Replacing $u'$ by $T(S)u'$ takes $a'$ to $a' + Sc'$ and does not change $c'$. We can clearly choose $S \in S_k$ in such a way that all of the entries in the column $a' + Sc'$ belong to the set $\{0, 1, \dots, \gamma' - 1\}$. Then the greatest common divisor $\delta$ of these entries satisfies the conditions $\delta < \gamma' \le \alpha'$ and $\delta\gamma' < \alpha'\gamma'$. This contradicts our choice of $\alpha'$ and $\gamma'$. Thus, our set contains a column $u'$ with $a' = 0$ or $c' = 0$. If $a' = 0$, we replace $u'$ by $J_ku'$. Hence, we may assume that $c' = 0$. It then follows by Lemma 3.8 that for suitable $V \in SL_k(\mathbf{Z})$ the column $U(V)u'$ has the form (3.17). $\square$

PROOF OF PROPOSITION 3.7. Without loss of generality we may assume that $M$ is an integer matrix. By Lemma 3.9, we may also assume that the first column of $M$ has the form (3.17). In the case $k = 1$ this proves the proposition. Suppose that the proposition has already been proved for $2k'\times 2k'$ matrices with $k' < k$. The relation (3.15) for the matrix $M = \begin{pmatrix} A & B\\ C & D\end{pmatrix}$ is equivalent to the conditions

(3.18) ${}^tAC = {}^tCA,\qquad {}^tBD = {}^tDB,\qquad {}^tAD - {}^tCB = r\cdot E_k$.

By assumption, the matrices $A$ and $C$ have the form
\[
(3.19)\qquad A = \begin{pmatrix} a_{11} & a_{12}\ \cdots\ a_{1k}\\ 0 & A_0\end{pmatrix},\qquad
C = \begin{pmatrix} 0 & c_{12}\ \cdots\ c_{1k}\\ 0 & C_0\end{pmatrix},
\]
where $a_{11} \ne 0$. From the relation ${}^tAC = {}^tCA$ it follows that $c_{12} = \cdots = c_{1k} = 0$ and ${}^tA_0C_0 = {}^tC_0A_0$. Since ${}^tAD = {}^tCB + rE_k$, we conclude that
\[
D = \begin{pmatrix} d_{11} & 0\ \cdots\ 0\\ \begin{matrix} d_{21}\\ \vdots\\ d_{k1}\end{matrix} & D_0\end{pmatrix},\qquad
{}^tA_0D_0 - {}^tC_0B_0 = rE_{k-1},
\]
where $B_0$ denotes the corresponding block of $B$. Finally, the relation ${}^tBD = {}^tDB$ implies the relation ${}^tB_0D_0 = {}^tD_0B_0$. From all of these relations it follows that the matrix $M_0 = \begin{pmatrix} A_0 & B_0\\ C_0 & D_0\end{pmatrix}$ satisfies the condition ${}^tM_0J_{k-1}M_0 = rJ_{k-1}$. By the induction assumption, there exists a matrix $g_0 \in \Gamma'_{k-1}$ such that $g_0M_0 = \begin{pmatrix} * & *\\ 0 & *\end{pmatrix}$.

For an arbitrary $(2k-2)\times(2k-2)$ matrix $M' = \begin{pmatrix} A' & B'\\ C' & D'\end{pmatrix}$, $k \ge 2$, we define the $2k\times 2k$ matrix $\widehat{M}' = \begin{pmatrix} A_1 & B_1\\ C_1 & D_1\end{pmatrix}$ with blocks
\[
A_1 = \begin{pmatrix} 1 & 0\\ 0 & A'\end{pmatrix},\quad
B_1 = \begin{pmatrix} 0 & 0\\ 0 & B'\end{pmatrix},\quad
C_1 = \begin{pmatrix} 0 & 0\\ 0 & C'\end{pmatrix},\quad
D_1 = \begin{pmatrix} 1 & 0\\ 0 & D'\end{pmatrix}.
\]
With this notation, the $C$-block of the matrix $\widehat{g}_0M$ consists of zeros. Thus, to prove the proposition it suffices to verify that the map $g_0 \to \widehat{g}_0$ takes $\Gamma'_{k-1}$ to $\Gamma'_k$. This, in turn, follows if we show that the map takes all of the generators of $\Gamma'_{k-1}$ to matrices in $\Gamma'_k$. This is obvious for all of the generators except $J_{k-1}$. For $J_{k-1}$ we have
\[
(3.20)\qquad \widehat{J}_{k-1} = \begin{pmatrix} E_1 & E_1 - E_k\\ E_k - E_1 & E_1\end{pmatrix} = (-E_{2k})(J_kT(E_1))^3 \in \Gamma'_k,
\]
where $E_1$ is the $k\times k$ matrix $\mathrm{diag}(1, 0, \dots, 0)$ (note that $-E_{2k} = J_k^2 \in \Gamma'$). $\square$

From what we have proved it follows that the functional equation (3.11) holds for any matrix $M$ in the modular group $\Gamma^k$. By the remark at the beginning of the proof of Proposition 3.2, in the case when $\zeta(M) \equiv 0 \pmod 2$ we may suppose that $\zeta(M) = 0$. Then the functional equation (3.11) can be written in the form

(3.21) $\det(CZ + D)^{-1/2}\,\theta(M\langle Z\rangle,\,M\Omega) = \chi(M)\,\theta(Z,\,\Omega)$,

where $\chi(M)$ is an eighth root of unity. From Lemma 3.3 it follows that the set
\[ \Theta^k = \{M \in \Gamma^k;\ \zeta(M) \equiv 0 \pmod 2\} \]
is a subgroup of $\Gamma^k$. Returning to our original notation, we see that we have the following theorem.
THEOREM 3.10. The set
\[ \Theta^k = \Big\{M = \begin{pmatrix} A & B\\ C & D\end{pmatrix} \in \Gamma^k;\ dc(B\,{}^tA) \equiv dc(C\,{}^tD) \equiv 0 \pmod 2\Big\} \]
is a subgroup of the modular group. For every $M = \begin{pmatrix} A & B\\ C & D\end{pmatrix} \in \Theta^k$ the theta-function $\theta^k(Z;\,W,\,W')$ satisfies the functional equation
\[ \det(CZ + D)^{-1/2}\,\theta^k(M\langle Z\rangle;\ AW + BW',\ CW + DW') = \chi(M)\,\theta^k(Z;\,W,\,W'), \]
where $\chi(M)$ is an eighth root of unity that depends on the choice of square root on the left.

The group $\Theta^k$ is called the theta-group of degree $k$.

PROBLEM 3.11 (Witt). Prove that the theta-group of degree $k$ is generated by the matrices $U(V)$ with $V \in \Lambda^k$, the matrices $T(S)$ with $S \in \mathbf{E}_k$ (where $\mathbf{E}_k$ is the set of even symmetric $k\times k$ matrices), the matrix $J_k$, and the matrices of the form
\[ \begin{pmatrix} E_i & E_i - E_k\\ E_k - E_i & E_i\end{pmatrix}\qquad\text{with } E_i = \mathrm{diag}(\underbrace{1, \dots, 1}_{i}, 0, \dots, 0),\ \ i = 1, \dots, k-1. \]

3. Symplecdc transformadons of theta-series. We now examine the action of sym-


plectic transformations on the theta-series of arbitrary positive definite integral qua-
dratic forms. With a view toward the applications of theta-series (for example, to the
problem of integral representations of quadratic forms by quadratic forms, where the
representing matrix satisfies certain congruences), it is convenient to generalize the
definition of theta-series by introducing some new parameters.
Let Q E Sm(R), Q > 0, Z E Hn, W, W' E Mm,n(C). By analogy with theproofof
Proposition 1.3, it is not hard to see that the series
en(z, Q, (W, W')) =On(z, Q,O)
(3.22) = L e{Q[N - W']Z + 2 'NQW - 'WQW'},.
NEMm,n

where n = (W, W'} E Mm, 2n(C), and for an arbitrary square matrix T we set
(3.23) e{T} = exp(niu(T)),
where u(T) is, as usual, the trace of T, converges absolutely and uniformly if n belongs
to a fixed compact subset of Mm,2n(C) and Z E Hn(e) withe> 0 (see (1.11)). Thus,
this series determines a holomorphic function on the space Hn x Mm,2n (C). The series
(3.22) is called the theta-function of degree n for the matrix Q (or the corresponding
If we set $W = 0$ and $W' = 0$ in (3.22), we obtain the theta-series $\theta^n(Z,\,Q)$. If $m = 1$ and $Q = (1)$, then $\theta^n(Z,\,Q,\,(W,\,W'))$ obviously becomes the theta-function $\theta^n(Z;\,{}^tW,\,{}^tW')$ in (3.2). It turns out that, conversely, every theta-function (3.22) is the restriction of a suitable theta-function (3.2). This fact enables us to reduce the study of the action of symplectic transformations on general theta-functions to the case we have already examined.
We first recall the definition and the simplest properties of the tensor product of two matrices. If $A$ and $B = (b_{\alpha\beta})$ are $m\times m$ and $n\times n$ matrices, respectively, over the field of complex numbers, we define their tensor product by setting
\[ A \otimes B = (Ab_{\alpha\beta}) \in M_{mn}(\mathbf{C}). \]
It follows from the definition that the tensor product is linear in each argument, and
\[ (A \otimes B)(A_1 \otimes B_1) = AA_1 \otimes BB_1. \]
From this relation it follows that the matrix $A \otimes B$ is invertible whenever $A$ and $B$ are invertible, with
\[ (A \otimes B)^{-1} = A^{-1} \otimes B^{-1}; \]
in addition,
\[ \det(A \otimes B) = \det(A \otimes E_n)\cdot\det(E_m \otimes B) = (\det A)^n\cdot(\det B)^m. \]
Finally,
\[ {}^t(A \otimes B) = {}^tA \otimes {}^tB, \]
and if $A$ and $B$ are real, symmetric, and positive definite, then $A \otimes B$ is also a positive definite matrix.
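These identities can be confirmed with NumPy's Kronecker product. Note that `np.kron(A, B)` places blocks $a_{\alpha\beta}B$, whereas the text's $A \otimes B = (Ab_{\alpha\beta})$ places blocks $Ab_{\alpha\beta}$; the two differ only by a simultaneous permutation of rows and columns, so the determinant, inverse, and transpose identities below hold for either convention:

```python
import numpy as np

# Checks of the tensor-product identities for random A (m x m) and B (n x n).
rng = np.random.default_rng(1)
m, n = 3, 2
A = rng.standard_normal((m, m))
B = rng.standard_normal((n, n))
K = np.kron(A, B)

# (A (x) B)^{-1} = A^{-1} (x) B^{-1}
assert np.allclose(K @ np.kron(np.linalg.inv(A), np.linalg.inv(B)), np.eye(m * n))
# det(A (x) B) = (det A)^n * (det B)^m
assert np.isclose(np.linalg.det(K), np.linalg.det(A) ** n * np.linalg.det(B) ** m)
# transpose distributes over the tensor product
assert np.allclose(K.T, np.kron(A.T, B.T))
```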
LEMMA 3.12. Let $m, n \ge 1$, $Q \in S_m(\mathbf{R})$, $Q > 0$, $Z \in H_n$, $W, W' \in M_{m,n}(\mathbf{C})$. Then

(3.24) $\theta^n(Z,\,Q,\,(W,\,W')) = \theta^{mn}(Q \otimes Z;\ c(QW),\ c(W'))$,

where the theta-function of (3.2) is on the right and the theta-function of (3.22) is on the left, and for every matrix $T = (t_1, \dots, t_n) \in M_{m,n}(\mathbf{C})$ with columns $t_\alpha$ we set
\[ c(T) = \begin{pmatrix} t_1\\ \vdots\\ t_n\end{pmatrix} \in M_{mn,1}(\mathbf{C}). \]
In addition, for arbitrary $M = \begin{pmatrix} A & B\\ C & D\end{pmatrix} \in Sp_n(\mathbf{R})$ the matrix
\[ M_Q = \begin{pmatrix} A_Q & B_Q\\ C_Q & D_Q\end{pmatrix} = \begin{pmatrix} E_m \otimes A & Q \otimes B\\ Q^{-1} \otimes C & E_m \otimes D\end{pmatrix} \]
belongs to the symplectic group $Sp_{mn}(\mathbf{R})$, and one has the following identities:
\[
(3.25)\qquad \theta^n(M\langle Z\rangle,\,Q,\,(W,\,W')\,{}^tM) = \theta^{mn}(M_Q\langle Q \otimes Z\rangle;\ A_Qc(QW) + B_Qc(W'),\ C_Qc(QW) + D_Qc(W')),
\]

(3.26) $\det(CZ + D)^m = \det(C_Q(Q \otimes Z) + D_Q)$.

PROOF. First of all, in the above notation we have
\[ (Q \otimes Z)[c(T)] = \sum_{\alpha,\beta=1}^n z_{\alpha\beta}\,{}^tt_\alpha Qt_\beta = \sigma(Q[T]\cdot Z). \]
Similarly, for $T, V \in M_{m,n}(\mathbf{C})$ we have
\[ {}^tc(T)c(V) = \sum_{\alpha=1}^n {}^tt_\alpha v_\alpha = \sigma({}^tTV). \]
We thus obtain
\[
\begin{aligned}
\theta^n(Z,\,Q,\,(W,\,W')) &= \sum_{N\in M_{m,n}} \exp(\pi i((Q \otimes Z)[c(N) - c(W')] + 2\,{}^tc(N)c(QW) - {}^tc(W')c(QW)))\\
&= \theta^{mn}(Q \otimes Z;\ c(QW),\ c(W')),
\end{aligned}
\]
which proves the first part of the lemma.


To prove that $M_Q \in Sp_{mn}(\mathbf{R})$, it suffices to verify that the blocks of $M_Q$ satisfy (2.7) whenever the blocks of $M$ satisfy these relations. Using the above properties of the tensor product, we obtain
\[ {}^tC_QA_Q = (Q^{-1} \otimes {}^tC)(E_m \otimes A) = Q^{-1} \otimes {}^tCA = Q^{-1} \otimes {}^tAC = {}^t(E_m \otimes A)(Q^{-1} \otimes C) = {}^tA_QC_Q. \]
Similarly,
\[ {}^tB_QD_Q = (Q \otimes {}^tB)(E_m \otimes D) = Q \otimes {}^tBD = Q \otimes {}^tDB = {}^tD_QB_Q. \]
Finally,
\[ {}^tA_QD_Q - {}^tC_QB_Q = E_m \otimes {}^tAD - E_m \otimes {}^tCB = E_m \otimes ({}^tAD - {}^tCB) = E_{mn}. \]
To prove (3.25) it is now sufficient to verify that

(3.27) $Q \otimes M\langle Z\rangle = M_Q\langle Q \otimes Z\rangle$

and
\[ c(Q(W\,{}^tA + W'\,{}^tB)) = A_Qc(QW) + B_Qc(W'),\qquad c(W\,{}^tC + W'\,{}^tD) = C_Qc(QW) + D_Qc(W'). \]
We have
\[ A_Q(Q \otimes Z) + B_Q = (E_m \otimes A)(Q \otimes Z) + (Q \otimes B) = Q \otimes (AZ + B), \]
\[ C_Q(Q \otimes Z) + D_Q = (Q^{-1} \otimes C)(Q \otimes Z) + (E_m \otimes D) = E_m \otimes (CZ + D), \]

from which (3.27) and (3.26) follow. Finally, if $A = (a_{\alpha\beta})$, $B = (b_{\alpha\beta})$, $W = (w_1, \dots, w_n)$, and $W' = (w'_1, \dots, w'_n)$, then obviously
\[
A_Qc(QW) + B_Qc(W') = \begin{pmatrix} Q(w_1a_{11} + \cdots + w_na_{1n})\\ \vdots\\ Q(w_1a_{n1} + \cdots + w_na_{nn})\end{pmatrix} + \begin{pmatrix} Q(w'_1b_{11} + \cdots + w'_nb_{1n})\\ \vdots\\ Q(w'_1b_{n1} + \cdots + w'_nb_{nn})\end{pmatrix} = c(Q(W\cdot{}^tA + W'\cdot{}^tB)).
\]
The second relation can be verified in the same way. $\square$

Now suppose that $Q$ is the matrix of a nondegenerate integral quadratic form $q$ in $m$ variables, i.e., $Q \in \mathbf{E}_m$ and $\det Q \ne 0$. By the level of the matrix $Q$ (or the form $q$) we mean the least positive integer $q = q(Q)$ such that $q\cdot Q^{-1} \in \mathbf{E}_m$.
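The level is straightforward to compute directly from the definition. The helper below (our own, using the adjugate $Q^{-1} = \mathrm{adj}(Q)/\det Q$ so that the arithmetic stays exact) finds the least $q$ with $qQ^{-1}$ integral and even on the diagonal; the checks reflect the classical facts that $Q = (2)$ has level 4 and the $A_2$ root-lattice Gram matrix has level 3.

```python
import numpy as np

def level(Q):
    # least q >= 1 such that q * Q^{-1} is an integral matrix with even diagonal
    Q = np.array(Q, dtype=int)
    D = round(np.linalg.det(Q))                        # positive since Q > 0
    adj = np.round(np.linalg.inv(Q) * D).astype(int)   # adjugate: Q^{-1} = adj / D
    for q in range(1, 2 * abs(D) + 1):                 # q = 2*det(Q) always works
        A = q * adj
        if (A % D == 0).all():
            A = A // D
            if (np.diag(A) % 2 == 0).all():
                return q
    raise AssertionError("unreachable")

assert level([[2]]) == 4               # the form of theta(z): level 4
assert level([[2, 1], [1, 2]]) == 3    # A_2 Gram matrix: level 3
assert level([[2, 0], [0, 2]]) == 4
```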
THEOREM 3.13. Suppose that $m, n \ge 1$, $Q \in \mathbf{A}_m^+$, and $q$ is the level of $Q$. Then for every matrix $M = \begin{pmatrix} A & B\\ C & D\end{pmatrix}$ in the group

(3.28) $\Gamma_0^n(q) = \Big\{\begin{pmatrix} A & B\\ C & D\end{pmatrix} \in \Gamma^n;\ C \equiv 0 \pmod q\Big\}$

the theta-function (3.22) of degree $n$ for the matrix $Q$ satisfies the functional equation

(3.29) $\det(CZ + D)^{-m/2}\,\theta^n(M\langle Z\rangle,\,Q,\,\Omega\,{}^tM) = \chi_Q(M)\,\theta^n(Z,\,Q,\,\Omega)$,

where $\chi_Q(M)$ is a certain eighth root of unity that for odd $m$ also depends on the choice of root of the determinant on the left. In particular, the theta-series (1.13) of degree $n$ for the matrix $Q$ satisfies the following functional equation for every $M = \begin{pmatrix} A & B\\ C & D\end{pmatrix} \in \Gamma_0^n(q)$:

(3.30) $\det(CZ + D)^{-m/2}\,\theta^n(M\langle Z\rangle,\,Q) = \chi_Q(M)\,\theta^n(Z,\,Q)$.

PROOF. Let $M_Q$ be the matrix that is constructed from $M$ in Lemma 3.12. By Lemma 3.12, $M_Q \in Sp_{mn}(\mathbf{R})$. From the definitions it follows that $M_Q$ is an integer matrix, so that $M_Q \in \Gamma^{mn}$. Finally, all of the diagonal entries in the matrices $B_Q\,{}^tA_Q = Q \otimes B\,{}^tA$ and $C_Q\,{}^tD_Q = Q^{-1} \otimes C\,{}^tD = qQ^{-1} \otimes (q^{-1}C\,{}^tD)$ are even, because all of the diagonal entries in the first factors of the tensor products are even, and the second factors are integer matrices. Thus, $M_Q$ is contained in the theta-group $\Theta^{mn}$. Using Lemma 3.12 for the matrix $M$ and Theorem 3.10 for the matrix $M_Q$, we obtain
\[
\begin{aligned}
\det(CZ + D)^{-m/2}\,&\theta^n(M\langle Z\rangle,\,Q,\,\Omega\,{}^tM)\\
&= \det(C_Q(Q \otimes Z) + D_Q)^{-1/2}\cdot\theta^{mn}(M_Q\langle Q \otimes Z\rangle;\ A_Q\cdot c(QW) + B_Q\cdot c(W'),\ C_Q\cdot c(QW) + D_Q\cdot c(W'))\\
&= \chi(M_Q)\,\theta^{mn}(Q \otimes Z;\ c(QW),\ c(W')) = \chi(M_Q)\,\theta^n(Z,\,Q,\,(W,\,W')),
\end{aligned}
\]
which proves (3.29) if we set $\chi_Q(M) = \chi(M_Q)$. (3.30) follows from (3.29) if we set $\Omega = 0$. $\square$

Theorem 3.13 answers (except for the computation of the factor $\chi_Q(M)$) our question about the action on $\theta^n(Z,\,Q)$ of symplectic transformations in the subgroup $\Gamma_0^n(q)$ of the Siegel modular group $\Gamma^n$. However, when studying certain properties of theta-series, such as their behavior near the boundary of $H_n$, one needs to understand the action on theta-series of an arbitrary transformation in $\Gamma^n$. In general (when $q \ne 1$), such transformations do not take the theta-series $\theta^n(Z,\,Q)$ to itself (even modulo a multiplicative factor). But the theta-series does remain inside a certain finite-dimensional space that depends on $n$ and $q$: the space of generalized theta-series of degree $n$ for the matrix $Q$. Suppose that $Q \in \mathbf{A}_m^+$ and $q$ is the level of $Q$. We consider the set of matrices

(3.31) $T^n(Q) = \{T \in M_{m,n};\ QT \equiv 0 \pmod q\}$,

and for each $T \in T^n(Q)$ we define the generalized theta-series of degree $n$ for $Q$ by setting

(3.32) $\theta^n(Z,\,Q|T) = \theta^n(Z,\,Q,\,(0,\,-q^{-1}T)) = \sum_{N\in M_{m,n}} e\{Q[N + q^{-1}T]Z\}$.

It is clear that, as a function of $T$, the generalized theta-series depends only on $T$ modulo $q$, and we have
\[ \theta^n(Z,\,Q|T) = \theta^n(Z,\,Q)\qquad\text{if } T \equiv 0 \pmod q. \]
Thus, the space spanned by all generalized theta-series of degree $n$ for $Q$ is finite-dimensional and contains the theta-series $\theta^n(Z,\,Q)$.
PROPOSITION 3.14. Under the action of the generators (3.12) of the modular group $\Gamma^n$, we have the following transformation formulas for the generalized theta-series of degree $n$ for a matrix $Q \in \mathbf{A}_m^+$:
\[ \theta^n(U(V^*)\langle Z\rangle,\,Q|T) = \theta^n(Z,\,Q|TV), \]
\[ \theta^n(T(S)\langle Z\rangle,\,Q|T) = e\{q^{-2}Q[T]S\}\,\theta^n(Z,\,Q|T). \]
Finally,
\[ \theta^n(J_n\langle Z\rangle,\,Q|T) = (\det Q)^{-n/2}(\det(-iZ))^{m/2}\sum_{T'\in T^n(Q)/\mathrm{mod}\,q} e\{2q^{-2}\cdot{}^tTQT'\}\,\theta^n(Z,\,Q|T'), \]
where $(\det(-iZ))^{1/2}$ is positive if $Z = iY$, and is uniquely determined by analytic continuation for arbitrary $Z \in H_n$.
We first prove the following generalization of Lemma 3.1, which is actually a corollary of that lemma.

LEMMA 3.15 (Inversion formula for theta-functions of degree $n$ for $Q$). The theta-function (3.22) satisfies the following identity:

(3.33) $\theta^n(-Z^{-1},\,Q^{-1},\,(QW',\,-QW)) = (\det Q)^{n/2}(\det(-iZ))^{m/2}\,\theta^n(Z,\,Q,\,(W,\,W'))$,

where $(\det(-iZ))^{1/2}$ is the function defined in Proposition 3.14.

PROOF. We derive (3.33) from (3.5), using the connection that is given in Lemma 3.12 between the theta-functions in the two formulas. If we use (3.24) and the properties of the tensor product of matrices (listed right before Lemma 3.12), we obtain
\[ \theta^n(-Z^{-1},\,Q^{-1},\,(QW',\,-QW)) = \theta^{mn}(Q^{-1} \otimes (-Z^{-1});\ c(W'),\ -c(QW)). \]
Since $Q^{-1} \otimes (-Z^{-1}) = -(Q \otimes Z)^{-1}$, by Lemma 3.1 the last expression is equal to
\[ \det(-i(Q \otimes Z))^{1/2}\,\theta^{mn}(Q \otimes Z;\ c(QW),\ c(W')), \]
and hence, since $\det(-i(Q \otimes Z)) = (\det Q)^n(\det(-iZ))^m$, it is equal, by (3.24), to the expression on the right in (3.33). $\square$

PROOF OF PROPOSITION 3.14. The first two formulas follow directly from the definitions:
\[
\theta^n(Z[{}^tV],\,Q|T) = \sum_{N\in M_{m,n}} e\{{}^tVQ[N + q^{-1}T]VZ\} = \sum_{N\in M_{m,n}} e\{Q[NV + q^{-1}TV]Z\} = \sum_{N'\in M_{m,n}} e\{Q[N' + q^{-1}TV]Z\},
\]
since we have $M_{m,n}V = M_{m,n}$ for $V \in \Lambda^n$;
\[ \theta^n(Z + S,\,Q|T) = \sum_{N\in M_{m,n}} e\{Q[N + q^{-1}T]Z\}\,e\{Q[q^{-1}T]S\}, \]
since
\[ \sigma(Q[N + q^{-1}T]S) = \sigma(Q[N]S) + 2\sigma({}^tNq^{-1}QTS) + \sigma(Q[q^{-1}T]S) \equiv \sigma(Q[q^{-1}T]S) \pmod 2, \]
because $N \in M_{m,n}$, $S \in S_n$, and all of the diagonal entries in $Q$ (and hence also in $Q[N]$) are even, while $q^{-1}QT$ is an integer matrix. To prove the third identity, we apply the inversion formula (3.33) (with $Q$ replaced by $Q^{-1}$, $W' = 0$, and $W = q^{-1}QT$) to the theta-series $\theta^n(Z,\,Q|T) = \theta^n(Z,\,Q,\,(0,\,-q^{-1}T))$.
We obtain
\[
\theta^n(-Z^{-1},\,Q|T) = (\det Q)^{-n/2}(\det(-iZ))^{m/2}\,\theta^n(Z,\,Q^{-1},\,(q^{-1}QT,\,0)) = (\det Q)^{-n/2}(\det(-iZ))^{m/2}\sum_{N\in M_{m,n}} e\{Q^{-1}[N]Z + 2q^{-1}\,{}^tNT\}.
\]
For every $L \in M_{m,n}$ the matrix $qQ^{-1}L$ is obviously an integer matrix belonging to the set $T^n(Q)$ (see (3.31)). Conversely, any matrix $T' \in T^n(Q)$ is uniquely representable in the form $qQ^{-1}L$ ($L \in M_{m,n}$). Thus, the map $L \to qQ^{-1}L$ gives an isomorphism of the additive group $M_{m,n}$ with the group $T^n(Q)$. Under this isomorphism the subgroup $QM_{m,n}$ is obviously mapped onto $qM_{m,n} \subset T^n(Q)$, so that we obtain an isomorphism of quotient groups $M_{m,n}/QM_{m,n} \xrightarrow{\ \sim\ } T^n(Q)/\mathrm{mod}\,q$. Continuing the above chain of equalities, we have
\[
\begin{aligned}
(\det Q)^{-n/2}&(\det(-iZ))^{m/2}\sum_{\substack{N\in M_{m,n}\\ L\in M_{m,n}/QM_{m,n}}} e\{Q^{-1}[QN + L]Z + 2q^{-1}\,{}^t(QN + L)T\}\\
&= (\det Q)^{-n/2}(\det(-iZ))^{m/2}\sum_{N,\,L} e\Big\{Q[N + q^{-1}(qQ^{-1}L)]Z + \frac{2}{q}\,{}^t(N + q^{-1}(qQ^{-1}L))QT\Big\}\\
&= (\det Q)^{-n/2}(\det(-iZ))^{m/2}\sum_{\substack{N\in M_{m,n}\\ T'\in T^n(Q)/\mathrm{mod}\,q}} e\{Q[N + q^{-1}T']Z + 2q^{-2}\cdot{}^tT'QT\},
\end{aligned}
\]
since $q^{-1}\cdot{}^tNQT = {}^tN(q^{-1}QT)$ is an integer matrix. $\square$

PROBLEM 3.16. Prove that there are $(\det Q)^n$ elements in the set $T^n(Q)/\mathrm{mod}\,q$.

PROBLEM 3.17. Suppose that $Q \in \mathbf{A}_m^+$, $q$ is the level of $Q$, and $T \in T^n(Q)$. Prove that for every matrix $M = \begin{pmatrix} A & B\\ C & D\end{pmatrix}$ in the group $\Gamma_0^n(q)$ the theta-series $\theta^n(Z,\,Q|T)$ satisfies the functional equation
\[ \det(CZ + D)^{-m/2}\,\theta^n(M\langle Z\rangle,\,Q|T) = \chi_Q(M)\,e\{q^{-2}A\,{}^tB\cdot Q[T]\}\,\theta^n(Z,\,Q|TA) \]
with the same scalar $\chi_Q(M)$ as in Theorem 3.13. Thus, if $M \equiv E_{2n} \pmod q$, then
\[ \det(CZ + D)^{-m/2}\,\theta^n(M\langle Z\rangle,\,Q|T) = \chi_Q(M)\,\theta^n(Z,\,Q|T). \]

§4. Computation of the multiplier

The scalar factor $\chi_Q(M)$ that appears in the functional equations for theta-functions and theta-series is usually called the multiplier of degree $n$ for $Q$. In this section we shall find an explicit expression for this multiplier.

1. Automorphy factors. Suppose that $Q \in \mathbf{A}_m^+$, $q$ is the level of $Q$, and $n \in \mathbf{N}$. For $M \in \Gamma_0^n(q)$ and $Z \in H_n$ we set

(4.1) $j_Q(M,\,Z) = \theta^n(M\langle Z\rangle,\,Q)\,\theta^n(Z,\,Q)^{-1}$.

By definition, the function $j_Q\colon \Gamma_0^n(q) \times H_n \to \mathbf{C}$ is meromorphic in the second argument. By Theorem 3.13, it can be written in the form
\[ j_Q(M,\,Z) = \det(CZ + D)^{m/2}\chi_Q(M)\qquad \Big(M = \begin{pmatrix} A & B\\ C & D\end{pmatrix}\Big), \]
from which it follows that, as a function of $Z$, it is holomorphic and nonzero on $H_n$ for every $M \in \Gamma_0^n(q)$. Finally, from (4.1) we see that the following relation holds for any $M, M_1 \in \Gamma_0^n(q)$ and $Z \in H_n$:
\[
\begin{aligned}
j_Q(MM_1,\,Z) &= \theta^n(MM_1\langle Z\rangle,\,Q)\,\theta^n(Z,\,Q)^{-1}\\
&= \theta^n(M\langle M_1\langle Z\rangle\rangle,\,Q)\,\theta^n(M_1\langle Z\rangle,\,Q)^{-1}\,\theta^n(M_1\langle Z\rangle,\,Q)\,\theta^n(Z,\,Q)^{-1}\\
&= j_Q(M,\,M_1\langle Z\rangle)\,j_Q(M_1,\,Z).
\end{aligned}
\]

Let $S$ be a multiplicative semigroup acting on a set $H$, $S \ni g\colon h \to g(h)$, as a subsemigroup of the group of all one-to-one maps of $H$ onto itself. A function $\varphi$ on $S \times H$ with values in a multiplicative group $T$ will be called an automorphy factor of $S$ on $H$ with values in $T$ if for all $g, g_1 \in S$ and $h \in H$ one has
\[ \varphi(gg_1,\,h) = \varphi(g,\,g_1(h))\,\varphi(g_1,\,h). \]

LEMMA 4.1. Let $\varphi\colon S \times H \to T$ be an automorphy factor of $S$ on $H$ with values in $T$. Then:
(1) If $f\colon T \to T_1$ is a homomorphism of groups, then the function $(g,\,h) \to f(\varphi(g,\,h))$ is an automorphy factor of $S$ on $H$ with values in $T_1$.
(2) If $\chi\colon S \to T$ is a map whose image is contained in the center of the group $T$, then the function $(g,\,h) \to \chi(g)\varphi(g,\,h)$ is an automorphy factor if and only if $\chi$ is a semigroup homomorphism.
(3) For every function $F$ on $H$ with values in a left $T$-module $V$ and for every $g \in S$ define the function $F|g\colon H \to V$ by setting
\[ (F|g)(h) = (F|_\varphi g)(h) = \varphi(g,\,h)^{-1}F(g(h)). \]
Then for any $g, g_1 \in S$ one has
\[ F|gg_1 = (F|g)|g_1. \]

PROOF. All three assertions follow directly from the definitions. $\square$

The discussion at the beginning of the section shows that the function $j_Q$ is an automorphy factor of $\Gamma_0^n(q)$, where $q$ is the level of the quadratic form $Q$, on the upper half-plane of degree $n$ with values in the multiplicative group $\mathbf{C}^*$ of nonzero complex numbers. The next lemma gives other examples of automorphy factors.

LEMMA 4.2. For any matrix M = (A B; C D) in the group

(4.2) S^n_R = GSp^+_n(R) = {M ∈ M_{2n}(R); J_n[M] = r(M)J_n, r(M) > 0}

(the general symplectic group of degree n) and any Z ∈ H_n the matrix CZ + D is invertible. The correspondence that associates such an M to the map

f_M : Z → M⟨Z⟩ = (AZ + B)(CZ + D)^{-1}

gives a homomorphism of the group S^n_R to the group of holomorphic automorphisms of the space H_n. The functions

(4.3) (M, Z) → CZ + D,   j(M, Z)^k = det(CZ + D)^k   (k ∈ Z)

are automorphy factors of S^n_R on H_n with values in the groups GL_n(C) and C*, respectively.
PROOF. If λ ∈ R, then obviously r(λM) = λ²r(M). Thus, λM ∈ Sp_n(R) if λ = r(M)^{-1/2}, and the first two statements in the lemma follow immediately from the corresponding statements for the group Sp_n(R) (see Proposition 2.1 and Theorem 2.2). Furthermore, if M = (A B; C D), M1 = (A1 B1; C1 D1), and MM1 = (A2 B2; C2 D2), then

(C(A1Z + B1)(C1Z + D1)^{-1} + D)(C1Z + D1)
= (CA1 + DC1)Z + (CB1 + DD1) = C2Z + D2.
§4. COMPUTATION OF THE MULTIPLIER 27

The analogous identity for j(M, Z)^k follows from this, since the map A → (det A)^k is a group homomorphism from GL_n(C) to C*. □
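As a quick numerical illustration (not part of the text), the cocycle property of Lemma 4.2 can be checked in the simplest case n = 1: for M, M1 in SL_2(Z) and the factor j(M, z)^k = (cz + d)^k one has j(MM1, z)^k = j(M, M1⟨z⟩)^k j(M1, z)^k. The matrices and weight below are arbitrary choices for the check.

```python
def act(M, z):
    # fractional linear action M<z> = (az + b) / (cz + d)
    (a, b), (c, d) = M
    return (a * z + b) / (c * z + d)

def j(M, z, k=4):
    # the automorphy factor (4.3) for n = 1 and an (arbitrary) weight k
    (_, _), (c, d) = M
    return (c * z + d) ** k

def mul(M, N):
    # 2x2 matrix product
    return tuple(tuple(sum(M[i][t] * N[t][j] for t in range(2)) for j in range(2))
                 for i in range(2))

M  = ((1, 1), (0, 1))    # translation
N  = ((0, -1), (1, 0))   # inversion
MN = mul(M, N)
z = 0.3 + 1.7j
# the action is a homomorphism and j is an automorphy factor
assert abs(act(MN, z) - act(M, act(N, z))) < 1e-9
assert abs(j(MN, z) - j(M, act(N, z)) * j(N, z)) < 1e-9
```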

2. Quadratic forms of level 1.

PROPOSITION 4.3. Let Q ∈ A^+_m, where m ∈ N. If the level of Q is 1, then m ≡ 0 (mod 8) and χ^n_Q(M) = 1 for all M ∈ Γ^n and n ∈ N.

PROOF. Because Q has level 1, the matrix Q^{-1} is, like Q itself, an even integer matrix. Since det Q · det Q^{-1} = 1, we have det Q = ±1. Since Q > 0, it follows that det Q = 1.
By the inversion formula (3.33) with W = W' = 0, we have

θ^n(−Z^{-1}, Q^{-1}) = det(−iZ)^{m/2} θ^n(Z, Q).

On the other hand, since Q^{-1} is a unimodular matrix, it follows that when we replace N by Q^{-1}N in the definition (1.13) of the theta-series θ^n(Z, Q), we obtain θ^n(Z, Q^{-1}) = θ^n(Z, Q). Thus,

(4.4) θ^n(−Z^{-1}, Q) = det(−iZ)^{m/2} θ^n(Z, Q).

If n = 1, then the relations θ^1(z + 1, Q) = θ^1(z, Q) and (4.4) imply that

θ^1(−(z + 1)^{-1}, Q) = (−i(z + 1))^{m/2} θ^1(z, Q).
We set

A = (0 −1; 1 0)(1 1; 0 1) = (0 −1; 1 1).

Then for z ∈ H_1 we have A⟨z⟩ = −(z + 1)^{-1}, A²⟨z⟩ = −(z + 1)z^{-1}, A³⟨z⟩ = z. Using these relations, we obtain

θ^1(z, Q)^v = θ^1(A⟨A²⟨z⟩⟩, Q)^v
= (−i(−(z + 1)z^{-1} + 1))^{vm/2} θ^1(A²⟨z⟩, Q)^v
= (iz^{-1})^{vm/2} (−i(−(z + 1)^{-1} + 1))^{vm/2} θ^1(A⟨z⟩, Q)^v
= (iz^{-1})^{vm/2} (−iz(z + 1)^{-1})^{vm/2} (−i(z + 1))^{vm/2} θ^1(z, Q)^v,

where we take v = 1 for m even and v = 2 for m odd. We choose a point z0 ∈ H_1 for which θ^1(z0, Q) ≠ 0. We then have the equality

(iz0^{-1})^{vm/2} (−iz0(z0 + 1)^{-1})^{vm/2} (−i(z0 + 1))^{vm/2} = 1,

from which (since vm is even) it follows that

i^{vm/2} · (−i)^{vm/2} · (−i)^{vm/2} = (−i)^{vm/2} = 1,

and hence

(4.5) vm/2 ≡ 0 (mod 4).

If m were odd, we would have v = 2, and (4.5) would give m ≡ 0 (mod 4), a contradiction. Thus m is even, v = 1, and the congruence (4.5) shows that m ≡ 0 (mod 8).

Since m ≡ 0 (mod 8), we can rewrite (4.4) in the form

θ^n(J_n⟨Z⟩, Q) = det(−Z)^{m/2} θ^n(Z, Q).

For the other generators of the modular group Γ^n we immediately find from the definitions that

θ^n(M⟨Z⟩, Q) = θ^n(Z, Q) for M = U(V) and T(S).

This implies that the automorphy factor j^n_Q(M, Z) for M = J_n, U(V), and T(S) is equal, respectively, to det(−Z)^{m/2}, 1, and 1. The automorphy factor j(M, Z)^{m/2} also takes these values on the generators. On the other hand, for any M ∈ Γ^n we have, by Theorem 3.13,

j^n_Q(M, Z) = χ^n_Q(M) j(M, Z)^{m/2},

and, by Lemma 4.1(2), the map χ^n_Q : Γ^n → C* is a homomorphism of groups. The above discussion shows that this homomorphism is trivial on the generators of Γ^n, and hence on the entire group. □

PROBLEM 4.4. Verify that the matrix Q8 of the quadratic form

(1/2) Σ_{i=1}^{8} x_i² + (1/2)(Σ_{i=1}^{8} x_i)² − x1x2 − x2x8

is contained in A^+_8 and satisfies the condition det Q8 = q(Q8) = 1. Conclude from this that for any natural number m divisible by 8 there exist matrices Q_m ∈ A^+_m with det Q_m = q(Q_m) = 1.
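A numerical sketch for Problem 4.4 (an illustration, not part of the text). Reading the form as above, its Gram matrix Q8 has diagonal entries 2 and off-diagonal entries 1, except that the (1,2) and (2,8) entries vanish; this encoding is our assumption. The check below verifies that Q8 is even and positive definite, that det Q8 = 1, and that Q8^{-1} is again an even integer matrix, so q(Q8) = 1.

```python
from fractions import Fraction

# Gram matrix of the form in Problem 4.4 (assumed encoding, see lead-in)
Q = [[2 if i == j else 1 for j in range(8)] for i in range(8)]
for a, b in ((0, 1), (1, 7)):         # zero out the (1,2) and (2,8) entries
    Q[a][b] = Q[b][a] = 0

def det_and_inverse(M):
    # exact Gauss-Jordan elimination over the rationals
    n = len(M)
    A = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(M)]
    det = Fraction(1)
    for c in range(n):
        p = next(r for r in range(c, n) if A[r][c] != 0)
        if p != c:
            A[c], A[p] = A[p], A[c]
            det = -det
        det *= A[c][c]
        A[c] = [x / A[c][c] for x in A[c]]
        for r in range(n):
            if r != c and A[r][c] != 0:
                A[r] = [x - A[r][c] * y for x, y in zip(A[r], A[c])]
    return det, [row[n:] for row in A]

det, Qinv = det_and_inverse(Q)
assert det == 1
assert all(x.denominator == 1 for row in Qinv for x in row)   # Q^{-1} integral
assert all(Qinv[i][i] % 2 == 0 for i in range(8))             # even diagonal
# positive definiteness: all leading principal minors are positive
assert all(det_and_inverse([r[:k] for r in Q[:k]])[0] > 0 for k in range(1, 9))
```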
3. The multiplier as a Gauss sum. We first fix the square root of a complex number z ≠ 0 by setting

(4.6) z^{k/2} = |z|^{k/2} e^{ikφ}, if z = |z|e^{2iφ},

where |z|^{1/2} > 0, −π/2 < φ ≤ π/2, and k is any integer. Next, suppose that M = (A B; C D) ∈ S^n_R. By Lemma 4.2, the function f(Z) = det(CZ + D) is nonzero on H_n. If det D ≠ 0, then of the two branches of f(Z)^{1/2} that are holomorphic on H_n and differ from one another by a sign (see the remark in §3.1), we shall usually use the notation f(Z)^{1/2} to denote the branch satisfying the condition

(4.7) lim_{Z = iλE, λ→+0} det(CZ + D)^{1/2} = (det D)^{1/2},

where the right side is understood in the sense of (4.6). Finally, for any integer k we set

(4.8) det(CZ + D)^{k/2} = (det(CZ + D)^{1/2})^k.

PROPOSITION 4.5. Suppose that, under the assumptions of Theorem 3.13, the level q is greater than 1. If the function det(CZ + D)^{-m/2} on the right in the functional equations (3.29) and (3.30) is understood in the sense of (4.7)-(4.8), then for any M = (A B; C D) ∈ Γ^n_0(q) the multiplier χ^n_Q(M) in these equations can be computed from the formula

(4.9) χ^n_Q(M) = (det D)^{-m/2} |det D|^m G(−D^{-1}C, Q),

where the roots are understood in the sense of (4.6), and G(S, Q) denotes the following Gauss sum, for S an n × n symmetric matrix with rational entries and for Q an m × m symmetric integer matrix with even integers on the main diagonal:

(4.10) G(S, Q) = d^{-mn} Σ_{L∈M_{m,n}(Z/dZ)} e{Q[L]S},

where d is any positive integer such that dS ∈ M_n.

Note that det D is prime to q for M = (A B; C D) ∈ Γ^n_0(q) (for example, use the third relation in (2.7)), and hence it is nonzero if q > 1. Further note that the Gauss sum (4.10) does not depend on the choice of integer d satisfying the property dS ∈ M_n.
PROOF. We compute the limit

(4.11) lim_{λ→+0} λ^{mn/2} det(iλC + D)^{-m/2} θ^n(M⟨iλE⟩, Q),

where E = E_n is the n × n identity matrix, in two different ways. On the one hand, by Theorem 3.13 it is equal to

lim_{λ→+0} λ^{mn/2} χ^n_Q(M) θ^n(iλE, Q),

which, by the inversion formula (3.33), can be written in the form

(4.12) lim_{λ→+0} λ^{mn/2} χ^n_Q(M) det(λE)^{-m/2} (det Q)^{-n/2} θ^n(iλ^{-1}E, Q^{-1})
= χ^n_Q(M)(det Q)^{-n/2} lim_{λ→+0} θ^n(iλ^{-1}E, Q^{-1}) = χ^n_Q(M)(det Q)^{-n/2}.

On the other hand, we set M⟨iλE⟩ = BD^{-1} + Z_0. Then, applying (2.8), we find that

Z_0(iλC + D) = (iλA + B) − BD^{-1}(iλC + D)
= iλ(A − BD^{-1}C) = iλ(A·ᵗD − BD^{-1}C·ᵗD)D*
= iλ(A·ᵗD − B·ᵗC)D* = iλD*.

Thus,

(4.13) Z_0 = iλD*(iλC + D)^{-1}.
Substituting, we obtain

θ^n(M⟨iλE⟩, Q) = Σ_{N∈M_{m,n}} e{Q[N](BD^{-1} + Z_0)}.

Let d be a positive integer for which dBD^{-1} is an integer matrix. We represent N in the form N = L + dN_1, where L ∈ M_{m,n}/dM_{m,n}, N_1 ∈ M_{m,n}. Since then

Q[N] = Q[L] + d²Q[N_1] + d·ᵗLQN_1 + d·ᵗN_1QL

and since

(d²Q[N_1] + d·ᵗLQN_1 + d·ᵗN_1QL)BD^{-1}
is an integer matrix with even trace, it follows that

θ^n(M⟨iλE⟩, Q) = Σ_{L∈M_{m,n}/dM_{m,n}} e{Q[L](BD^{-1} + Z_0)} Σ_{N_1∈M_{m,n}} e{d²Q[N_1]Z_0 + 2d·ᵗN_1QLZ_0}
= Σ_{L∈M_{m,n}/dM_{m,n}} e{Q[L](BD^{-1} + Z_0)} θ^n(d²Z_0, Q, (dLZ_0, 0)),

which, by the inversion formula (3.33), can be rewritten in the form

(4.14) (det Q)^{-n/2} (det(−id²Z_0))^{-m/2}
× Σ_{L∈M_{m,n}/dM_{m,n}} e{Q[L](BD^{-1} + Z_0)} θ^n(−(d²Z_0)^{-1}, Q^{-1}, (0, −dQLZ_0)).

From (4.13) it follows that Z_0 → 0 as λ → 0; in addition, −Z_0^{-1} = −C·ᵗD + iλ^{-1}D·ᵗD. If we take into account that θ^n(Z, Q, (W, W')) converges uniformly if W and W' are in fixed compact subsets and Z ∈ H_n(ε) with ε > 0, we see that in computing

lim_{λ→+0} θ^n(−(d²Z_0)^{-1}, Q^{-1}, (0, −dQLZ_0))

we can take the limit term by term. This implies that the limit exists and is equal to 1. Thus, from (4.14) it follows that the limit (4.11) is equal to

(det Q)^{-n/2} Σ_{L∈M_{m,n}/dM_{m,n}} e{Q[L]BD^{-1}}
× lim_{λ→+0} λ^{mn/2} det(iλC + D)^{-m/2} (det(−id²Z_0))^{-m/2}
= (det Q)^{-n/2} G(BD^{-1}, Q)(det D)^{-m/2}
× lim_{λ→+0} (det(−i(iD*(iλC + D)^{-1})))^{-m/2}.

According to Lemma 3.15, the function inside the last limit is continuous in Z = iD*(iλC + D)^{-1} ∈ H_n, and so the limit is (det(−i(iD*D^{-1})))^{-m/2} = |det D|^m. Thus, the limit (4.11) is equal to

(det Q)^{-n/2} (det D)^{-m/2} |det D|^m G(BD^{-1}, Q).

Comparing this expression with (4.12), we find that

(4.15) χ^n_Q(M) = (det D)^{-m/2} |det D|^m G(BD^{-1}, Q),

and so to prove the proposition it remains to verify that

(4.16) G(BD^{-1}, Q) = G(−D^{-1}C, Q), if (A B; C D) ∈ Γ^n and det D ≠ 0.

In order to do this, we make a small modification in our definition (4.10) of Gauss sums. Suppose that S and Q are as before, and D satisfies the conditions

(4.17) D ∈ M_n, det D ≠ 0, DS ∈ M_n.
We then set

(4.18) G_D(S, Q) = |det D|^{-m} Σ_{L∈M_{m,n}/M_{m,n}D} e{Q[L]S}.

It is easy to see that if D satisfies (4.17) and M is any nonsingular n × n integer matrix, then

G_{MD}(S, Q) = G_D(S, Q).

Thus, if D and D_1 are two matrices that satisfy (4.17), then because the matrix D' = det D · det D_1 · E_n is divisible on the right by both D and D_1, it follows that

G_D(S, Q) = G_{D'}(S, Q) = G_{D_1}(S, Q),

so that G_D(S, Q) does not depend on the choice of D. Then, taking D to be the matrix dE_n, where d ∈ N and dS ∈ M_n, we see that

(4.19) G_D(S, Q) = G(S, Q).

Returning to the proof of (4.16), we can write

G(BD^{-1}, Q) = G_{ᵗD}(BD^{-1}, Q) = |det D|^{-m} Σ_{L∈M_{m,n}/M_{m,n}ᵗD} e{Q[L]BD^{-1}}

(note that ᵗD·BD^{-1} = ᵗD·D*·ᵗB = ᵗB is an integer matrix). Since ᵗDB = ᵗBD, it follows that the map L → LB gives a homomorphism of quotient groups

M_{m,n}/M_{m,n}ᵗD → M_{m,n}/M_{m,n}ᵗD.

Similarly, the map L → L·ᵗC gives a homomorphism

M_{m,n}/M_{m,n}ᵗD → M_{m,n}/M_{m,n}ᵗD.

Since BᵗC = AᵗD − E_n, it follows that the composition of these two homomorphisms, given by BᵗC, coincides with the automorphism of multiplying by −E_n. Since ᵗCB = ᵗAD − E_n, the homomorphism given by ᵗCB also coincides with multiplication by −E_n. Hence, the maps ᵗC and B are isomorphisms, and we can write

G_{ᵗD}(−D^{-1}C, Q) = |det D|^{-m} Σ_{L∈M_{m,n}/M_{m,n}ᵗD} e{Q[LB](−D^{-1}C)}
= |det D|^{-m} Σ_{L∈M_{m,n}/M_{m,n}ᵗD} e{Q[L]BD^{-1}} = G_{ᵗD}(BD^{-1}, Q),

since C·ᵗB = ᵗ(AᵗD − E_n) = DᵗA − E_n. □

PROBLEM 4.6. In the notation of the definition (4.10) of Gauss sums, let S = d^{-1}S', where S' is a symmetric integer matrix. Show that the Gauss sum of degree n reduces to the usual Gauss sum modulo d of the quadratic form with matrix Q ⊗ S':

G(S, Q) = d^{-mn} Σ_{L∈M_{mn,1}(Z/dZ)} exp((πi/d)(Q ⊗ S')[L]).
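The reduction in Problem 4.6 can be checked numerically; the sketch below (an illustration, not from the text) compares the two sides for m = n = 2 and d = 3, with an arbitrary even Q and symmetric integral S'. The row-by-row vectorization used for the Kronecker form is our convention.

```python
import cmath
from itertools import product

Q  = [[2, 1], [1, 2]]          # an even symmetric matrix (arbitrary choice)
S1 = [[1, 1], [1, 2]]          # S', symmetric integral; S = S'/d
d  = 3

def tr_QLS(L):
    # trace of Q[L] * S', with Q[L] = tL Q L
    QL = [[sum(Q[i][j] * L[j][a] for j in range(2)) for a in range(2)]
          for i in range(2)]
    tLQL = [[sum(L[i][a] * QL[i][b] for i in range(2)) for b in range(2)]
            for a in range(2)]
    return sum(tLQL[a][b] * S1[b][a] for a in range(2) for b in range(2))

# left side: d^{-mn} sum over L in M_{2,2}(Z/dZ) of e{Q[L]S}
lhs = sum(cmath.exp(1j * cmath.pi * tr_QLS([[l11, l12], [l21, l22]]) / d)
          for l11, l12, l21, l22 in product(range(d), repeat=4)) / d ** 4

# right side: sum over vectors l in (Z/dZ)^4 of exp((pi*i/d)(Q (x) S')[l]),
# with L vectorized row by row so that (Q (x) S')[vec L] = tr(Q[L] S')
K = [[Q[i][j] * S1[a][b] for j in range(2) for b in range(2)]
     for i in range(2) for a in range(2)]
def qform(v):
    return sum(v[r] * K[r][s] * v[s] for r in range(4) for s in range(4))
rhs = sum(cmath.exp(1j * cmath.pi * qform(v) / d)
          for v in product(range(d), repeat=4)) / d ** 4

assert abs(lhs - rhs) < 1e-9
```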

PROBLEM 4.7 (The Gauss sum as an "automorphic form"). For n ∈ N and q > 1 define the set

S = {S = BD^{-1}; (A B; C D) ∈ Γ^n_0(q)}.

Prove the following facts:

(1) If M = (A B; C D) ∈ Γ^n_0(q) and S ∈ S, then det(CS + D) ≠ 0; and if to every such M we associate the map S → M(S) = (AS + B)(CS + D)^{-1}, we obtain a transitive action of the group Γ^n_0(q) on the set S.

(2) If m is even, Q ∈ A^+_m, and q is the level of Q, then the Gauss sum (4.10), regarded as a function on S, satisfies the functional equation

det(CS + D)^{m/2} G(M(S), Q) = χ^n_Q(M) G(S, Q)

for any M = (A B; C D) ∈ Γ^n_0(q) and S ∈ S, where χ^n_Q(M) is the same multiplier as in the corresponding functional equation for the theta-series. [Hint: Use the fact that in this case χ^n_Q is a homomorphism of the group Γ^n_0(q).]
4. Quadratic forms in an even number of variables. Suppose that Q ∈ A^+_m, where m = 2k is even. By Theorem 3.13, the automorphy factor j^n_Q(M, Z) (see (4.1)) for n ∈ N and M = (A B; C D) ∈ Γ^n_0(q), where q is the level of Q, can be written in the form

j^n_Q(M, Z) = χ^n_Q(M) det(CZ + D)^k = χ^n_Q(M) j(M, Z)^k,

where j(M, Z) is the automorphy factor (4.3) for the group S^n_R, and hence also for the group Γ^n_0(q), and χ^n_Q is a function on Γ^n_0(q) with values in the group of eighth roots of unity. According to Lemma 4.1(2), the map χ_Q = χ^n_Q : Γ^n_0(q) → C* is a group homomorphism:

(4.20) χ_Q(MM_1) = χ_Q(M)χ_Q(M_1) for M, M_1 ∈ Γ^n_0(q).

If q = 1, then χ_Q is trivial by Proposition 4.3. Hence we may assume that q > 1. In this case, by Proposition 4.5,

(4.21) χ_Q(M) = (det D)^k G(−D^{-1}C, Q).

We let K denote the subgroup of Γ^n_0(q) generated by matrices of the form U(V) (see (2.2)) for V ∈ SL_n(Z), T(S) (see (2.3)) for S ∈ S_n, and ᵗT(S) for S ∈ qS_n. From (4.21) it immediately follows that the character χ_Q is trivial on all of the generators of K, and hence on all of K. Hence, χ_Q is constant on every double coset KMK with M ∈ Γ^n_0(q).

LEMMA 4.8. Every double coset KMK, where M = (A B; C D) ∈ Γ^n_0(q), has a representative of the form M_0 = (A_0 B_0; C_0 D_0), where

A_0 = (E_{n−1} 0; 0 α), B_0 = (0 0; 0 β), C_0 = (0 0; 0 γ), D_0 = (E_{n−1} 0; 0 δ), and (α β; γ δ) ∈ Γ_0(q).

Here

(4.22) δ ≡ det D (mod q).
PROOF. It suffices to prove that every double coset has a representative M_1 = (A_1 B_1; C_1 D_1) all of whose entries in the first rows and columns of A_1, B_1, C_1, D_1 are zero, except for the entries in the first row and first column of A_1 and D_1, which equal 1. Once that has been proved, the lemma will follow by induction on n.

When we pass from the matrix M to the matrix M' = (A' B'; C' D') = MU(V), the block D goes to the block D' = DV. By Lemma 3.8, the matrix V ∈ SL_n(Z) can be chosen in such a way that the first row of D' has the form (d, 0, …, 0), where d is the greatest common divisor of the entries in the first row of D. Let c be the greatest common divisor of the entries c'_{11}, …, c'_{1n} in the first row of C'. Then there exist integers s_{21}, …, s_{2n} such that c'_{11}s_{21} + ⋯ + c'_{1n}s_{2n} = c. Let S = (s_{αβ}) denote any symmetric integer matrix whose second column is ᵗ(s_{21}, …, s_{2n}). Then in the matrix M'' = (A'' B''; C'' D'') = M'T(S) the first two entries in the first row of the block D'' = C'S + D' are equal to c'_{11}s_{11} + ⋯ + c'_{1n}s_{1n} + d and c, respectively. Since c divides c'_{11}, …, c'_{1n}, and c and d are relatively prime, it follows that these two entries are relatively prime. Thus, if we again multiply M'' on the right by a suitable matrix of the form U(V'), we may assume that the first row of the D-block of this matrix is (1, 0, …, 0). Since the first column of a block with such a first row can be cleared by integral row operations, it follows that, after multiplying the above matrix on the left by a suitable matrix of the form U(V''), we may assume that its D-block has the form

(4.23) (1 0; 0 D_1),

where D_1 is an integer matrix of order n − 1.

If the block D in M already has the form (4.23), then we can pass from M to the matrix

M' = (A' B'; C' D') = T(S)MᵗT(S_1),

whose B-block is B + SD and whose C-block is C + DS_1, where we can obviously choose symmetric integer matrices S and S_1, the second of which is divisible by q, so that the first column of the matrix B + SD and the first row of the matrix C + DS_1 consist of zeros. Then the relations C'·ᵗD' = D'·ᵗC' and ᵗB'·D' = ᵗD'·B' (see (2.7), (2.8)) imply that the first column of C' and the first row of B' also consist of zeros. Finally, if we take into account the structure of the matrices B', C', D' and the relations ᵗA'·D' − ᵗC'·B' = E_n and A'·ᵗD' − B'·ᵗC' = E_n (see (2.7), (2.8)), we conclude that A' as well as D' has the form (4.23). The first part of the lemma is proved.
Since we obviously have det D' ≡ det D (mod q) for any matrix

M' = (A' B'; C' D') ∈ K(A B; C D)K,

the congruence (4.22) follows. □

We proceed with the computation of the multiplier χ_Q. Since χ_Q is constant on K-double cosets, by (4.21) and Lemma 4.8 we find that for any M ∈ KM_0K

χ_Q(M) = χ_Q(M_0) = δ^k G(−D_0^{-1}C_0, Q)
= δ^k d^{-mn} Σ_{L=(l_1,…,l_n)∈M_{m,n}(Z/dZ)} e{−δ^{-1}γQ[l_n]}

(since −D_0^{-1}C_0 = diag(0, …, 0, −δ^{-1}γ)), where M_0 = (A_0 B_0; C_0 D_0), d is any natural number divisible by δ, and l_1, …, l_n are the columns of the matrix L. By the definition of e{T} (see (3.23)), the last expression is equal to

δ^k d^{-m} Σ_{l∈M_{m,1}(Z/dZ)} e{−δ^{-1}γQ[l]}.

By the formula (4.21) applied to M_1 = (α β; γ δ), the last expression can be written as χ^1_Q((α β; γ δ)); hence in the notation of Lemma 4.8 we obtain the relation

(4.24) χ^n_Q(M) = χ^1_Q((α β; γ δ)).

We have thereby reduced the calculation of the multiplier χ_Q for arbitrary n to the case n = 1.
PROPOSITION 4.9. Let Q ∈ A^+_m, where m = 2k is even and the level q of Q is greater than 1. Then for any (α β; γ δ) ∈ Γ_0(q) one has

(4.25) χ^1_Q((α β; γ δ)) = χ_Q(δ),

where χ_Q is the character of the quadratic form Q, i.e., it is the real Dirichlet character modulo q defined on integers δ prime to q by the formula

(4.26) χ_Q(δ) = (sign δ)^k |δ|^{-k} Σ_{l∈M_{m,1}(Z/δZ)} exp((πi/δ)Q[l]);

in particular,

χ_Q(−1) = (−1)^k.

If p is an odd prime, then χ_Q(p) can be computed from the formula

χ_Q(p) = ((−1)^k det Q / p) (Legendre symbol).

PROOF. The formula (4.21) shows that the number ξ = χ^1_Q((α β; γ δ)) belongs to the field Q_{|δ|} of |δ|th roots of unity. On the other hand, because χ^1_Q is a character of the group Γ_0(q) and χ^1_Q((1 b; 0 1)) = 1 for any b ∈ Z, we obtain

χ^1_Q((α β; γ δ)) = χ^1_Q((α β; γ δ)(1 b; 0 1)) = χ^1_Q((α αb + β; γ γb + δ)),

so that ξ also belongs to any of the fields Q_{|δ+γb|}. But the arithmetic progression δ + γb (b ∈ Z) contains numbers that are relatively prime to δ, and Q_{|a|} ∩ Q_{|b|} = Q if a is prime to b (in this case the compositum of Q_{|a|} and Q_{|b|} is Q_{|ab|}, and its degree over Q is the product of the degrees of Q_{|a|} and Q_{|b|}). Hence, ξ is a rational number. Consequently, ξ does not change under any of the automorphisms exp(2πi/δ) → exp(2πit/δ) of the field Q_{|δ|} (here (t, δ) = 1). Taking t = β and taking into account that −βγ ≡ 1 (mod δ), we find that

χ^1_Q((α β; γ δ)) = δ^k d^{-m} Σ_{l∈M_{m,1}(Z/dZ)} e{δ^{-1}Q[l]}

depends only on δ and Q. If we set d = |δ| here, we obtain (4.26).

Given any integer δ prime to q, there exist integers α and b such that αδ − qb = 1. Then (α b; q δ) ∈ Γ_0(q), and

χ_Q(δ) = χ^1_Q((α b; q δ)) = χ^1_Q((α b; q δ)(1 t; 0 1)) = χ^1_Q((α αt + b; q qt + δ)) = χ_Q(δ + tq)

for any t ∈ Z. Thus, the function χ_Q(δ) is defined for all δ prime to q, and it depends only on the residue class of δ modulo q. If δ_1 is also an integer prime to q and (α_1 b_1; q δ_1) ∈ Γ_0(q), then

χ_Q(δ)χ_Q(δ_1) = χ^1_Q((α b; q δ)(α_1 b_1; q δ_1)) = χ_Q(qb_1 + δδ_1) = χ_Q(δδ_1).
Thus, χ_Q is a real Dirichlet character modulo q. Now let p be an odd prime not dividing q. If we set δ = p in (4.26), we can write

χ_Q(p) = p^{-k} Σ_{l∈M_{m,1}(Z/pZ)} exp((2πi/p)((1/2)Q[l])).

If M ∈ M_m and the determinant of M is prime to p, then the map l → Ml obviously gives a bijection of the set M_{m,1}/pM_{m,1} with itself. Hence, for any such M we can write

χ_Q(p) = p^{-k} Σ_{l∈M_{m,1}(Z/pZ)} exp((2πi/p)((1/2)Q[Ml])).

It is well known (see Appendix 1.1) that the matrix M can be chosen in such a way that the quadratic form (1/2)Q[MX] is congruent modulo p to a diagonal quadratic form a_1x_1² + ⋯ + a_mx_m². Here we clearly have

a_1 ⋯ a_m ≡ det((1/2)Q[M]) = 2^{-m}(det M)² det Q (mod p).

With this choice of M, the last formula for χ_Q(p) can be written in the form

χ_Q(p) = p^{-k} G_p(a_1) ⋯ G_p(a_m),

where G_p(a) denotes the Gauss sum

(4.27) G_p(a) = Σ_{t∈F_p=Z/pZ} exp(2πiat²/p).

If we use the definition and properties of the Legendre symbol modulo p (see Appendix 2.3), we find that

(4.28) G_p(a) = Σ_{b∈F_p} (1 + (b/p)) exp(2πiab/p)
= Σ_{b∈F_p} (b/p) exp(2πiab/p)
= (a/p) Σ_{b∈F_p} (ab/p) exp(2πiab/p) = (a/p) G_p(1).


On the other hand, taking into account that p is an odd prime, we have

(4.29) G_p(1)² = ((−1)/p) G_p(1)G_p(−1)
= ((−1)/p) Σ_{t_1,t_2∈F_p} exp(2πi(t_1 − t_2)(t_1 + t_2)/p)
= ((−1)/p) Σ_{a∈F_p} Σ_{b∈F_p} exp(2πiab/p) = ((−1)/p) p.

Returning to the calculation of χ_Q(p), from the above formulas and the properties of the Legendre symbol we obtain

χ_Q(p) = p^{-k} (a_1 ⋯ a_m / p) G_p(1)^m = ((−1)^k a_1 ⋯ a_m / p)
= ((−1)^k 2^{-m}(det M)² det Q / p) = ((−1)^k det Q / p). □

The number (−1)^k det Q is called the discriminant of the quadratic form with matrix Q. The reader can easily verify that the discriminant of any integral quadratic form in an even number of variables is congruent to 0 or 1 modulo 4.
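The Legendre-symbol formula for χ_Q(p) can be checked numerically; the sketch below (an illustration, not part of the text) uses the form Q = diag(2, 2), for which m = 2, k = 1, det Q = 4 and level 4, so χ_Q(p) should equal the Legendre symbol ((−1)·4/p) = (−1/p) for odd p.

```python
import cmath

def chi_Q(p):
    # formula (4.26) with delta = p and Q = diag(2, 2): Q[l] = 2a^2 + 2b^2
    s = sum(cmath.exp((1j * cmath.pi / p) * (2 * a * a + 2 * b * b))
            for a in range(p) for b in range(p))
    return round((s / p).real)          # the sum p * chi_Q(p) is real here

def legendre(a, p):
    # Euler's criterion for the Legendre symbol (a/p), p an odd prime
    r = pow(a % p, (p - 1) // 2, p)
    return r - p if r > 1 else r

for p in (3, 5, 7, 11, 13):
    assert chi_Q(p) == legendre(-4, p)
```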
The next theorem summarizes our computation of the multiplier for theta-series
of quadratic forms in an even number of variables.

THEOREM 4.10. Suppose that Q ∈ A^+_m, q is the level of Q, and n ≥ 1. Further suppose that m = 2k is even. Then for any matrix M = (A B; C D) ∈ Γ^n_0(q) the multiplier χ_Q(M) in the functional equations (3.29) and (3.30) of Theorem 3.13 is given by the following formulas:

if q = 1, then

(4.30) χ_Q(M) = 1;

if q > 1, then

(4.31) χ_Q(M) = χ_Q(det D),

where χ_Q is the character of the quadratic form Q, i.e., the real Dirichlet character modulo q that satisfies the conditions

(4.32) χ_Q(−1) = (−1)^k,

(4.33) χ_Q(p) = ((−1)^k det Q / p) (Legendre symbol),

if p is an odd prime not dividing q, and

(4.34) χ_Q(2) = 2^{-k} Σ_{t∈M_{m,1}(Z/2Z)} exp(πiQ[t]/2),

if q is odd.

PROOF. Formula (4.30) was proved in Proposition 4.3, and (4.31) follows from (4.24), (4.25), and (4.22). Formulas (4.32)-(4.34) were proved in Proposition 4.9. □

5. Quadratic forms in an odd number of variables. We first prove the following useful proposition.

PROPOSITION 4.11. Let Q be a nonsingular symmetric integer matrix with even entries on the main diagonal. Suppose that the order of Q is odd. Then its determinant det Q and its level q = q(Q) satisfy the congruences

(4.35) det Q ≡ 0 (mod 2), q(Q) ≡ 0 (mod 4).

PROOF. For brevity we shall use the term "even matrix" to refer to a symmetric integer matrix with even entries on the main diagonal. Recall that the level of a nonsingular even matrix Q is the least natural number q such that q·Q^{-1} is an even matrix. Let Q = (a_{αβ}) be a nonsingular m × m even matrix, where m is odd. We set Q^{-1} = Q* = (det Q)^{-1}·(A_{αβ}). Then for every α = 1, …, m we have the equality

det Q = Σ_{β=1}^m a_{αβ}A_{αβ};

summing these equalities, we obtain

m det Q = Σ_{α=1}^m a_{αα}A_{αα} + 2 Σ_{1≤α<β≤m} a_{αβ}A_{αβ}.
Since all of the coefficients a_{αα} (1 ≤ α ≤ m) are even, it follows that m det Q is divisible by 2, and hence so is det Q.

To prove the second congruence in (4.35), we first note that since q(Q)Q^{-1} is an integer matrix, its determinant is an integer, i.e., det Q divides q(Q)^m. Thus, if m is odd, the level q = q(Q) is divisible by 2. To show that q is divisible by 4, we use induction on the odd number m. The congruence is obvious if m = 1. Suppose that it has already been proved for all nonsingular even matrices of odd order less than m, where m > 1, and let Q be a nonsingular even matrix of order m. We consider two cases:

(1) All of the entries of Q are even, i.e., Q = 2Q_1, where Q_1 is an integer matrix. Then Q_2 = qQ^{-1} = (q/2)Q_1^{-1} is a nonsingular even matrix of odd order, and hence has even determinant. Since this determinant divides (q/2)^m, it follows that q/2 is even, and hence q is divisible by 4.

(2) Not all of the entries in Q are even. In this case, if we make a suitable permutation of the rows of Q and the same permutation of its columns (i.e., for suitable V ∈ GL_m(Z) we perform the transformation Q → Q[V], which does not change the level q and takes even matrices to even matrices), then we may suppose that the entry a_{12} = a_{21} is odd. We divide Q into blocks (Q_{11} Q_{12}; Q_{21} Q_{22}), where Q_{11} = (a_{11} a_{12}; a_{21} a_{22}). Since

det Q_{11} = a_{11}a_{22} − a_{12}² ≡ −a_{12}² ≡ 1 (mod 2),

the matrix Q_{11} is invertible and the matrix Q_{11}^{-1} is a 2-integral matrix with even diagonal. From the obvious identity

Q[U] = (Q_{11} 0; 0 Q_{22} − Q_{11}^{-1}[Q_{12}]), where U = (E_2 −Q_{11}^{-1}Q_{12}; 0 E_{m−2}),

we obtain

Q^{-1}[ᵗU^{-1}] = (Q_{11}^{-1} 0; 0 (Q_{22} − Q_{11}^{-1}[Q_{12}])^{-1}).

The matrix Q_{22} − Q_{11}^{-1}[Q_{12}] is 2-integral, and we easily see that it has even diagonal. This implies that it can be written in the form d^{-1}Q', where d is an odd natural number and Q' is an even matrix. By the last identity, q·d·(Q')^{-1} is a 2-integral matrix with even diagonal. Consequently, by the induction assumption, the number qd is divisible by 4, and hence so is q. □
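Proposition 4.11 is easy to test on examples; the sketch below (an illustration, not part of the text) computes the level as the least q with q·Q^{-1} integral and even-diagonal, and checks both congruences (4.35) for a few even matrices of odd order.

```python
from fractions import Fraction
from math import lcm

def level_and_det(Q):
    # exact inverse and determinant over the rationals
    n = len(Q)
    A = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(Q)]
    det = Fraction(1)
    for c in range(n):
        p = next(r for r in range(c, n) if A[r][c] != 0)
        if p != c:
            A[c], A[p] = A[p], A[c]
            det = -det
        det *= A[c][c]
        A[c] = [x / A[c][c] for x in A[c]]
        for r in range(n):
            if r != c and A[r][c] != 0:
                A[r] = [x - A[r][c] * y for x, y in zip(A[r], A[c])]
    inv = [row[n:] for row in A]
    # least q with q * Q^{-1} integral; double it if the diagonal is not even
    q = lcm(*[x.denominator for row in inv for x in row])
    if any((q * inv[i][i]) % 2 != 0 for i in range(n)):
        q *= 2
    return q, det

for Q in ([[2]],
          [[2, 1, 0], [1, 2, 0], [0, 0, 2]],
          [[4, 1, 1], [1, 2, 0], [1, 0, 2]]):
    q, det = level_and_det(Q)
    assert int(det) % 2 == 0 and q % 4 == 0   # the congruences (4.35)
```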

Proposition 4.11 shows that the level of any quadratic form in an odd number of variables (or, equivalently, the level of the corresponding matrix Q ∈ A^+_m) is divisible by 4; hence, we have the inclusion Γ^n_0(q) ⊂ Γ^n_0(4). According to Theorem 3.13, for any M ∈ Γ^n_0(q) the theta-series θ^n(Z, Q) satisfies the functional equation (3.30), in which the multiplier χ_Q is not a character of the group Γ^n_0(q), as in the case of theta-series of quadratic forms in an even number of variables (see (4.31)), but rather is more complicated.

On the other hand, the example of the simplest quadratic form, with 1 × 1 matrix (2) ∈ A^+_1, shows that there exist matrices of level 4. Using the notation in (4.1) and (4.10) and the formula (4.15), we can write the functional equation for the theta-series
θ^n(Z, (2)) in the form

(4.36) θ^n(M⟨Z⟩, (2)) = j^n_{(2)}(M, Z) θ^n(Z, (2)),

where M = (A B; C D) ∈ Γ^n_0(4) and

(4.37) j^n_{(2)}(M, Z) = χ^n_{(2)}(M) det(CZ + D)^{1/2},

in which the square root is determined from the condition (4.7);

(4.38) χ^n_{(2)}(M) = (det D)^{-1/2} |det D| G(BD^{-1}, (2)),

where for any odd integer d we determine ε_d from the formula

(4.39) ε_d = 1, if d ≡ 1 (mod 4); ε_d = i, if d ≡ −1 (mod 4);

and

(4.40) G(BD^{-1}, (2)) = |d|^{-n} Σ_{r∈M_{n,1}(Z/dZ)} e{2BD^{-1}[r]},

where d is any nonzero integer such that d·D^{-1} ∈ M_n, and e{…} is the function (3.23).
Since Γ^n_0(q) ⊂ Γ^n_0(4), and since the product of the theta-series for the matrices Q and (2) is the theta-series for a matrix of even order and the same level q, it follows that Theorem 4.10 enables us to obtain the functional equation for the theta-series θ^n(Z, Q) in terms of the automorphy factor j^n_{(2)}. Using this connection, we prove the following theorem.

THEOREM 4.12. Suppose that θ^n(Z, Q) is the theta-series (1.13) for the matrix Q ∈ A^+_m with m = 2k + 1 odd, q is the level of Q, M = (A B; C D) ∈ Γ^n_0(q), and j^n_{(2)}(M, Z) is the automorphy factor (4.37). Then the theta-series satisfies the functional equation

(4.41) θ^n(M⟨Z⟩, Q) = χ_Q(det D) j^n_{(2)}(M, Z)^m θ^n(Z, Q),

where

(4.42) χ_Q(d) = (2 det Q / |d|)

is a Dirichlet character modulo q, and (·/·) is the Jacobi symbol.

PROOF. Using the definition (1.13) of theta-series, we easily see that

(4.43) θ^n(Z, Q)θ^n(Z, (2)) = θ^n(Z, Q_1) with Q_1 = Q ⊕ (2).

According to Proposition 4.11, the level of Q_1 is also q. Since Q_1 ∈ A^+_{m+1} and m + 1 is even, it follows from (3.30) and Theorem 4.10 that

(4.44) θ^n(M⟨Z⟩, Q_1) = χ_{Q_1}(det D) det(CZ + D)^{(m+1)/2} θ^n(Z, Q_1).

On the other hand, the functional equation (3.30) implies that

(4.45) θ^n(M⟨Z⟩, Q) = χ(M) j^n_{(2)}(M, Z)^m θ^n(Z, Q).
If we now multiply this equation and the equation (4.36) and take (4.43) into account, we find that the last equation is preserved if Q and m are replaced by Q_1 and m + 1. Since all of our theta-series are nonzero functions, it follows from (4.45) for Q_1 and from (4.44) that

(4.46) χ(M) = χ_{Q_1}(det D) (det(CZ + D) / j^n_{(2)}(M, Z)²)^{(m+1)/2}.

Furthermore, if we square the equality (4.36) and let Q = (2) in (4.43) and (4.44), we obtain

(4.47) j^n_{(2)}(M, Z)² = χ_{Q_2}(det D) det(CZ + D), where Q_2 = (2) ⊕ (2).

Hence, by (4.46) and the definition of the characters of quadratic forms in (4.32), (4.33), and (4.42), we conclude that

χ(M) = χ_{Q_1}(det D)χ_{Q_2}(det D)^{(m+1)/2} = χ_Q(det D). □

Although the automorphy factor j^n_{(2)} is simpler than the automorphy factor j^n_Q for an arbitrary matrix Q of odd order, nevertheless it has a rather complicated structure. In certain cases, however, it is possible to express j^n_{(2)} in terms of j^1_{(2)}, and hence, because of (4.37)-(4.40), in terms of the one-dimensional Gauss sums

(4.48) G_d(c) = Σ_{r∈Z/dZ} exp(2πicr²/d),

which are the subject of the next two lemmas.


LEMMA 4.13. Suppose that c, d ∈ Z, d is a positive odd number, (c, d) = 1, and (c/d) is the Jacobi symbol. Then the Gauss sum satisfies the relation

G_d(c) = (c/d) G_d(1).

PROOF. If d = p is an odd prime, then the lemma follows from (4.28). Suppose that d = p^n with n > 1. If we set r = r_1 + p^{n−1}r_2 in (4.48), where r_1 runs through Z/p^{n−1}Z and r_2 runs through Z/pZ, we find that

G_{p^n}(c) = pG_{p^{n−2}}(c),

and the proof of the lemma for d = p^n can be obtained from this relation by induction on n. Now suppose that d = d_1·d_2, where d_1 is prime to d_2, and suppose that b_1 and b_2 are integers such that b_1d_1 + b_2d_2 = 1. In (4.48) let r = d_2r_1 + d_1r_2, where r_i runs through Z/d_iZ, and replace c by c(b_1d_1 + b_2d_2). We then find that the Gauss sum satisfies the relation

(4.49) G_d(c) = G_{d_1}(cd_2)G_{d_2}(cd_1).

We assume, by induction, that the lemma holds for d_1 and d_2, and we prove that it holds for d. From (4.49) we have

G_d(c) = (c/d_1)(d_2/d_1)G_{d_1}(1) · (c/d_2)(d_1/d_2)G_{d_2}(1)
= (c/d)G_{d_1}(d_2)G_{d_2}(d_1) = (c/d)G_d(1). □
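Lemma 4.13 can be verified numerically; the sketch below (an illustration, not part of the text) compares G_d(c) with (c/d)G_d(1) for several odd moduli, using a standard iterative evaluation of the Jacobi symbol.

```python
import cmath
from math import gcd

def gauss(c, d):
    # the one-dimensional Gauss sum (4.48)
    return sum(cmath.exp(2j * cmath.pi * c * r * r / d) for r in range(d))

def jacobi(a, n):
    # Jacobi symbol (a/n) for odd positive n, by quadratic reciprocity
    a %= n
    t = 1
    while a:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                t = -t
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            t = -t
        a %= n
    return t if n == 1 else 0

for d in (3, 5, 9, 15, 21):
    for c in range(1, d):
        if gcd(c, d) == 1:
            assert abs(gauss(c, d) - jacobi(c, d) * gauss(1, d)) < 1e-9
```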

LEMMA 4.14. If d is a positive odd integer, then

(4.50) G_d(1) = ε_d √d,

where the square root is positive and ε_d is the function (4.39).

PROOF. We compute the value of the theta-function θ^1(z; 0, 0) (see (3.2)) at the point z = 2c/d + iλ, where λ > 0, and c and d ≠ 0 are integers. Let d_1 = d if d is odd and d_1 = d/2 if d is even. In the definition of θ^1(z; 0, 0) in (3.2) we divide the summation into two parts: we set N = r + d_1m, where r runs through the set of residues modulo d_1 and m runs through all integers. Then, after some simple transformations, we obtain the identity

(4.51) θ^1(z; 0, 0) = Σ_{r mod d_1} e{2(c/d + iλ/2)r²} θ^1(iλd_1²; iλd_1r, 0).
Let c = 1, and let d > 0 be an odd number. Then (4.51) and the inversion formula for the theta-function (3.5) imply that

(4.52) lim_{λ→+0} λ^{1/2} θ^1(z; 0, 0) = Σ_{r∈Z/dZ} e{(2/d)r²} d^{-1} lim_{λ→+0} θ^1(−1/(iλd²); 0, −iλdr) = d^{-1}G_d(1).

We now compute the same limit in another way. Since (−iz)^{-1/2} → e{1/4}(d/2)^{1/2} as λ → +0, it follows from (3.5) that the first limit in (4.52) is equal to

(4.53) e{1/4}(d/2)^{1/2} lim_{λ→+0} λ^{1/2} θ^1(−z^{-1}; 0, 0).

If we observe that −z^{-1} = 2(−d)/4 + iλ_1, where λ_1 = λ[(2/d)(2/d + iλ)]^{-1}, and apply (4.51) and (3.5), we find that the limit in (4.53) is equal to

Σ_{r∈Z/2Z} e{−(d/2)r²} lim_{λ→+0} (λ^{1/2}/(4λ_1)^{1/2}) lim_{λ→+0} θ^1(−1/(4iλ_1); 0, −2iλ_1r)
= d^{-1} Σ_{r∈Z/2Z} e{−(d/2)r²}.

This, along with (4.53) and (4.39), implies that the first limit in (4.52) is equal to ε_d·d^{-1/2}; in view of (4.52), we hence obtain (4.50). □
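The classical evaluation (4.50) is easy to confirm numerically; the following sketch (an illustration, not part of the text) checks G_d(1) = ε_d√d for several positive odd d.

```python
import cmath

def gauss(d):
    # the Gauss sum G_d(1) of (4.48)
    return sum(cmath.exp(2j * cmath.pi * r * r / d) for r in range(d))

for d in (1, 3, 5, 7, 9, 11, 13):
    eps = 1 if d % 4 == 1 else 1j       # the function (4.39)
    assert abs(gauss(d) - eps * d ** 0.5) < 1e-9
```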

From Lemmas 4.13 and 4.14 we obtain the following proposition.

PROPOSITION 4.15. In the case n = 1 the automorphy factor (4.37) is given by the explicit formula

(4.54) j^1_{(2)}(M, z) = ε_d^{-1}(c/d)(cz + d)^{1/2} for M = (a b; c d) ∈ Γ_0(4),

where ε_d is the function (4.39), (c/d) = (c/|d|) is the Jacobi symbol, and the square root is determined by the condition (4.7).

PROOF. By (4.16), for odd d > 0 we have G(b/d, (2)) = G(−c/d, (2)) and (−1/d) = ε_d². Thus, in the case d > 0 the formula (4.54) follows from (4.37), (4.38), and Lemmas 4.13 and 4.14. On the other hand, if d < 0, then to prove (4.54) it suffices to replace −c/d by c/(−d) in the Gauss sum. □

To conclude this section, we give a simple but important property of the multiplier χ^n_{(2)} in (4.38). From (4.47) and (4.37) it follows that [χ^n_{(2)}(M)]² = χ_{Q_2}(det D), and hence, by (4.32) and (4.33), we find that

(4.55) [χ^n_{(2)}(M)]⁴ = 1 for M ∈ Γ^n_0(4).
CHAPTER 2

Modular Forms

The development in Chapter 1 of the analytic and group-theoretic properties of theta-series of integral quadratic forms provides the basis for the axiomatic definition and study of all functions with similar properties. The resulting class of modular forms, although not in general exhausted by the set of theta-series, shares many of their analytic and structural features. Moreover, the invariant definition of modular forms enables one to find and prove these properties much more easily.

§1. Fundamental domains for subgroups of the modular group


The functional equations for theta-series that were proved in the previous chapter show that all of the values of a theta-series at points of a fixed orbit

(1.1) K⟨Z⟩ = {M⟨Z⟩; M ∈ K} (Z ∈ H_n)

of the corresponding subgroup K of the modular group are determined if we know the value at one of the points. Thus, a theta-series is uniquely determined by its restriction to any subset of the upper half-plane which intersects all of the orbits of K. In this section we shall construct a fundamental domain in H_n for an arbitrary subgroup K of finite index in Γ^n, i.e., we shall give a set of representatives of the orbits (1.1) that deserves to be called a "domain".

1. The modular triangle. For brevity we shall call the imaginary part y of a complex number z = x + iy ∈ H_1 the height of z, denoted h(z). By Lemma 2.8 of Chapter 1 (or a direct computation) we see that

h(M⟨z⟩) = |cz + d|^{-2} h(z) for M = (a b; c d) ∈ Γ^1.
Since the inequality

|cz + d|² = (cx + d)² + (cy)² < 1

has only finitely many solutions in integers c and d for any fixed z = x + iy ∈ H_1, it follows that there are only finitely many values of h on the orbit Γ^1⟨z⟩ that are greater than h(z). Consequently, every orbit of Γ^1 in H_1 has points of maximal height, and these points are characterized by the inequality |cz + d| ≥ 1, which must hold for any pair of integers c, d that form the second row of a matrix M = (a b; c d) ∈ Γ^1, i.e., for any pair of relatively prime integers. The transformation z → (1 b; 0 1)⟨z⟩ = z + b = (x + b) + iy, where b ∈ Z, does not affect the height of z. Here b can always be chosen

so that |x + b| ≤ 1/2. We thus see that every orbit of Γ^1 in H_1 has a point in the set

(1.2) D'_1 = {z = x + iy ∈ H_1; |x| ≤ 1/2, |cz + d| ≥ 1 if c, d ∈ Z and (c, d) = 1}.

We now show that the set D'_1 is actually given by a finite number of inequalities. Let

(1.3) D_1 = {z = x + iy ∈ H_1; |x| ≤ 1/2, |z|² = x² + y² ≥ 1}.

Since 1, 0 ∈ Z and (1, 0) = 1, it follows that D'_1 ⊂ D_1. Conversely, if z = x + iy ∈ D_1, c, d ∈ Z, and (c, d) = 1, then

|cz + d|² = c²(x² + y²) + 2cdx + d² ≥ c² − |cd| + d² ≥ 1,

so that z ∈ D'_1. Thus, D_1 = D'_1, and every orbit of the modular group Γ^1 in the upper half-plane H_1 intersects the set D_1. This set may be regarded as a "triangle" (the modular triangle) with vertices at ρ, ρ², and i∞ (see Figure 1), where ρ = e^{πi/3}.

FIGURE 1

We now show that D_1 is a fundamental domain for Γ^1 on H_1. More precisely, we
have the following theorem.

THEOREM 1.1. (1) For every point z ∈ H_1 there exists a matrix M ∈ Γ^1 such that
M⟨z⟩ ∈ D_1.
(2) If z = x + iy and z' are two distinct points in D_1 that lie in the same orbit of Γ^1,
then either x = ±1/2 and z' = z ∓ 1, or else |z| = 1 and z' = −z^{−1}. In particular, no
two interior points of D_1 lie in the same orbit of Γ^1.
PROOF. The first part of the theorem was proved above. Suppose that z = x + iy ∈
D_1, M = ( a b ; c d ) ∈ Γ^1, z' = x' + iy' = M⟨z⟩ ∈ D_1, and z ≠ z'. Since D_1 = D_1',
we have h(z) = h(M⟨z⟩), and hence |cz + d| = 1.

If c = 0, then d = ±1. Then M = ±( 1 b ; 0 1 ) and z' = z ± b. Since
−1/2 ≤ x, x' ≤ 1/2 and x' = x ± b, it follows that x = ±1/2, b = ∓1, x' = ∓1/2.
§1. FUNDAMENTAL DOMAINS FOR SUBGROUPS OF THE MODULAR GROUP 45

If d = 0, then c = ±1, and |z| = 1. Then M = ±( a −1 ; 1 0 ) and z' = ±a − z^{−1}.
Since −z^{−1} ∈ D_1 and |−z^{−1}| = 1, it follows that a = 0, except in the cases
z = ρ or ρ^2 and a = −1 or 1, respectively. But in those cases z' = z, contradicting
our assumption.
Finally, suppose that c ≠ 0 and d ≠ 0. Then the inequalities
1 = |cz + d|^2 = c^2(x^2 + y^2) + 2cdx + d^2 ≥ c^2 − |cd| + d^2 ≥ 1
imply that x^2 + y^2 = 1, x = ±1/2, c = ±1, d = ±1, and the product cd has the
opposite sign of x. Thus, (c, d) = ±(1, 1) and z = ρ^2, or else (c, d) = ±(1, −1) and
z = ρ. In the first case, up to sign the matrix M is equal to

( a  a−1 ; 1  1 ),  and  z' = (aρ^2 + (a − 1))/(ρ^2 + 1) = a − 1/(ρ^2 + 1) = a + ρ^2,

so that (because z ≠ z') we have a = 1 and z' = 1 + ρ^2 = ρ = −1/ρ^2. The second
case is similar. □
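The reduction procedure implicit in part (1) of Theorem 1.1 is effective: repeatedly translate z so that |x| ≤ 1/2, and apply z → −1/z whenever |z| < 1 (each inversion strictly increases the height, so the loop terminates). A minimal Python sketch; the function name and tolerance are ours, not the book's:

```python
def reduce_to_modular_triangle(z, eps=1e-12):
    """Move a point z of the upper half-plane H_1 into the modular
    triangle D_1 using the translations z -> z + b and the
    inversion z -> -1/z (which raises the height when |z| < 1)."""
    assert z.imag > 0, "z must lie in the upper half-plane"
    while True:
        # choose the integer b so that |x + b| <= 1/2
        z = complex(z.real - round(z.real), z.imag)
        if abs(z) >= 1 - eps:      # now |z| >= 1 and |Re z| <= 1/2: z in D_1
            return z
        z = -1 / z                 # strictly increases h(z) = Im z
```

Since an orbit contains only finitely many points of height above any bound, the loop reaches a point of maximal height after finitely many inversions.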

PROBLEM 1.2. Show that the stabilizer

S(z) = {M ∈ Γ^1; M⟨z⟩ = z}

of the point z ∈ H_1 in the group Γ^1 is {±E_2} if z does not belong to either of the two
orbits Γ^1⟨i⟩, Γ^1⟨ρ⟩. Show that

S(i) = { ±E_2, ±( 0 −1 ; 1 0 ) },

S(ρ) = { ±E_2, ±( 0 −1 ; 1 −1 ), ±( −1 1 ; −1 0 ) }.
PROBLEM 1.3. Two binary quadratic forms
Q(x, y) = ax^2 + bxy + cy^2  and  Q1(x1, y1) = a1 x1^2 + b1 x1 y1 + c1 y1^2
are said to be equivalent (over Z) if

(1.4)  Q1(x1, y1) = Q(αx1 + βy1, γx1 + δy1),  where ( α β ; γ δ ) ∈ Γ^1.

Show that any real positive definite form Q is equivalent to a form Q1 with |b1| ≤ a1 ≤
c1. Further show that in the interior of the region defined by these inequalities in the
space of coefficients of binary quadratic forms, there are no two distinct points that
correspond to equivalent forms.
[Hint: Let ω and ω1 be the roots of the quadratic equations Q(t, 1) = 0 and
Q1(t, 1) = 0, respectively, that belong to H_1. Show that (1.4) is equivalent to the
conditions b1^2 − 4a1c1 = b^2 − 4ac and ω1 = M^{−1}⟨ω⟩ with M = ( α β ; γ δ ), and use
Theorem 1.1.]
PROBLEM 1.4. Show that there are only finitely many equivalence classes of positive
definite integral binary quadratic forms Q = ax^2 + bxy + cy^2 with fixed discriminant
b^2 − 4ac < 0.
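The reduction asserted in Problem 1.3 can be carried out by Gauss's classical algorithm, alternating a translation x → x − ky with the swap (x, y) → (−y, x); a sketch for integral forms (illustrative, not the book's solution):

```python
def reduce_form(a, b, c):
    """Reduce a positive definite integral binary form a*x^2 + b*x*y + c*y^2
    to an equivalent one with |b| <= a <= c (Gauss reduction)."""
    assert a > 0 and b * b - 4 * a * c < 0
    while not (abs(b) <= a <= c):
        if abs(b) > a:
            # substitution x -> x - k*y:  b -> b - 2ak,  c -> a*k^2 - b*k + c
            k = round(b / (2 * a))
            b, c = b - 2 * a * k, a * k * k - b * k + c
        else:
            # here c < a; substitution (x, y) -> (-y, x) swaps a, c and negates b
            a, b, c = c, -b, a
    return a, b, c
```

Each swap strictly decreases a, so the algorithm terminates; the discriminant b^2 − 4ac is preserved throughout, in accordance with the hint to Problem 1.3.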

2. The Minkowski reduction domain. The construction of a fundamental domain
for the modular group Γ^n for n > 1 is based on the same idea as in the case n = 1 above.
Again, every orbit of Γ^n in H_n has points Z = X + iY of maximal height h(Z) = det Y,
and in the set of such points we make a further reduction by means of transformations
in Γ^n that do not affect the height. But whereas all such transformations were of the
form z → z + b (b ∈ Z) in the case n = 1, when n > 1, in addition to the analogous
transformations Z → Z + B (B ∈ S_n), the transformations Z → Z[V] (V ∈ GL_n(Z))
also have this property. Minkowski reduction theory is devoted to the construction
and the study of the properties of a fundamental domain for the group
(1.5)  Λ = Λ^n = GL_n(Z)
of n × n unimodular matrices acting on the set
(1.6)  P = P_n = {Y ∈ S_n(R); Y > 0}

of matrices of real positive definite quadratic forms in n variables, where the action is
given by:
Λ ∋ U : Y → Y[U] = ᵗU Y U.
We shall let u1, ..., un denote the columns of U ∈ Λ^n, so that U = (u1, ..., un).
In order to choose a "reduced" representative Y[U] in the orbit
(1.7)  {Y}_Λ = {Y[U]; U ∈ Λ}

of a point Y ∈ P, we construct the matrix U ∈ Λ column by column, starting from
certain minimality conditions. We let Λ^n_k denote the set of integer matrices made up
from the first k columns of the matrices in Λ:

(1.8)  Λ^n_k = {(u1, ..., uk); (u1, ..., un) ∈ Λ^n}.
In other words, Λ^n_k is the set of n × k integer matrices which can be completed to an
n × n unimodular matrix. Starting with a fixed matrix Y ∈ P, we choose u1 ∈ Λ^n_1 in
such a way that the value Y[u1] is minimal on the set Λ^n_1. This can be done, because
Λ^n_1 consists of integer vectors and Y > 0. After choosing u1, we choose u2 so that
(u1, u2) ∈ Λ^n_2 and the value Y[u2] is minimal. Possibly replacing u2 by −u2, without
loss of generality we may assume that ᵗu1·Y·u2 ≥ 0. Continuing this process, at the
kth step we find a column uk for which (u1, ..., uk) ∈ Λ^n_k, Y[uk] is minimal, and
ᵗu_{k−1}·Y·uk ≥ 0. After n steps we have a matrix U = (u1, ..., un) ∈ Λ^n_n = Λ and a
matrix T = (t_{αβ}) = Y[U] ∈ {Y}_Λ, which we call reduced.
We now explain what the reduced property of a matrix means in terms of the
entries of the matrix.
LEMMA 1.5. Let r ≥ 1 and l ∈ M_{r,1}. Then l ∈ Λ^r_1 if and only if the components of
the vector l are relatively prime.
PROOF. Necessity is obvious. Conversely, if the components of l are relatively
prime, then, by Lemma 3.8 of Chapter 1, there exists V ∈ Λ^r such that

V·l = ᵗ(1, 0, ..., 0), and hence l = V^{−1}·ᵗ(1, 0, ..., 0),

and l coincides with the first column of the matrix V^{−1}. □
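For r = 2, Lemma 1.5 is the classical fact that a pair of relatively prime integers is the first column of a matrix in SL_2(Z), and the extended Euclidean algorithm makes the completion explicit. A sketch (the function names are ours, not the book's):

```python
def egcd(x, y):
    """Extended Euclid: returns (g, u, v) with x*u + y*v = g = gcd(x, y)."""
    if y == 0:
        return (x, 1, 0)
    g, u, v = egcd(y, x % y)
    return (g, v, u - (x // y) * v)

def extend_to_unimodular(a, c):
    """Given relatively prime integers a, c, return a matrix
    [[a, b], [c, d]] of determinant 1 whose first column is (a, c)."""
    g, u, v = egcd(a, c)
    if g < 0:                      # normalize the sign of the gcd
        g, u, v = -g, -u, -v
    assert g == 1, "components must be relatively prime"
    # a*u + c*v = 1, so det [[a, -v], [c, u]] = a*u + c*v = 1
    return [[a, -v], [c, u]]
```

For general r the same completion is the content of Lemma 3.8 of Chapter 1, cited in the proof above.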

LEMMA 1.6. Let U, U' ∈ Λ^n. Then the first r columns of U' coincide with the first r
columns of U if and only if

U' = U ( E_r B ; 0 D ),  where D ∈ Λ^{n−r}, B ∈ M_{r,n−r}.

PROOF. The direct implication is obvious. To prove the converse, we let u1, ..., un
denote the columns of U, and we suppose that the first r columns of U' are u1, ..., ur.
We set

U^{−1}U' = ( A B ; C D ) ∈ Λ^n,

where A = (a_{αβ}) is r × r, B is r × (n−r), C = (c_{αβ}) is (n−r) × r, and D is
(n−r) × (n−r). Since the β-column of the matrix U' = U( A B ; C D ) is equal
to u_β for 1 ≤ β ≤ r, it follows that

Σ_{α=1}^{r} a_{αβ}u_α + Σ_{α=1}^{n−r} c_{αβ}u_{r+α} = u_β,

which, by the linear independence of the columns u1, ..., un, implies that a_{αβ} = 1
for α = β, a_{αβ} = 0 for α ≠ β, and c_{αβ} = 0. Thus, A = E_r, and C = 0; hence D ∈
Λ^{n−r}. □

Let U = (u1, ..., un) ∈ Λ^n and 1 ≤ k ≤ n. By Lemma 1.6, the set of kth columns
of all of the matrices U' ∈ Λ^n with first k − 1 columns u1, ..., u_{k−1} coincides with the
set of columns of the form

U·l,  where  l = ᵗ(l_1, ..., l_n) ∈ M_{n,1},

and l_k, ..., l_n are the components of the first column of some matrix D ∈ Λ^{n−k+1}. By
Lemma 1.5, the latter condition means that l_k, ..., l_n are relatively prime. We thus find
that, if U = (u1, ..., un) ∈ Λ^n and 1 ≤ k ≤ n, then
(1.9)  {u ∈ M_{n,1}; (u1, ..., u_{k−1}, u) ∈ Λ^n_k} = U·L_{k,n},
where L_{k,n} is the set of columns in M_{n,1} whose last n − k + 1 components are relatively
prime.
From the definition and the relations (1.9) it follows that T = (t_{αβ}) = Y[U] is a
reduced matrix if and only if
Y[U·l] ≥ Y[u_k]  for all l ∈ L_{k,n} and 1 ≤ k ≤ n
and
ᵗu_{k−1}·Y·u_k ≥ 0  for 1 < k ≤ n,
where U = (u1, ..., un) ∈ Λ. Since Y[U] = T, Y[u_k] = t_{kk}, and ᵗu_{k−1}·Y·u_k = t_{k−1,k},
these conditions mean precisely that T belongs to the Minkowski reduction domain
(1.10)  F_n = {T = (t_{αβ}) ∈ P_n; t_{kk} ≤ T[l]
if l ∈ L_{k,n} (1 ≤ k ≤ n), and t_{k−1,k} ≥ 0 (1 < k ≤ n)}.

THEOREM 1.7. In every orbit {Y}_Λ of the group Λ^n in P_n there exists at least one
point, and no more than finitely many points, belonging to the reduction domain F_n. If
T and T' are two interior points of F_n with T' = T[U], where U ∈ Λ^n, then U = ±E_n.
In particular, any two interior points of F_n are in different orbits of Λ^n.
PROOF. The above discussion shows that for every matrix Y ∈ P_n there exists
U ∈ Λ^n such that Y[U] ∈ F_n, and each column of this matrix U can be chosen in only
finitely many ways.
Let e1, ..., en denote the columns of the identity matrix E_n. We set
F_n° = {T = (t_{αβ}) ∈ P_n; t_{kk} < T[l] if l ∈ L_{k,n},
l ≠ ±e_k (1 ≤ k ≤ n), and t_{k−1,k} > 0 (1 < k ≤ n)}.
Clearly F_n° ⊂ F_n, and every interior point of F_n is contained in F_n°. If T = (t_{αβ}),
T' = (t'_{αβ}) ∈ F_n°, and T' = T[U], where U = (u1, ..., un) ∈ Λ^n, then
t_{kk} = t'_{kk} = T[u_k]  (1 ≤ k ≤ n).
Since u1 ∈ L_{1,n}, this equality and the definition of F_n° imply that u1 = ±e1. Then
obviously u2 ∈ L_{2,n}, and we find that u2 = ±e2. Continuing in this way, we obtain
u_k = ±e_k for all 1 ≤ k ≤ n. Furthermore, from the conditions
t_{k−1,k} > 0,  t'_{k−1,k} = ᵗu_{k−1}·T·u_k > 0  (1 < k ≤ n)
it follows that either u1 = e1, ..., un = en, or else u1 = −e1, ..., un = −en. Thus,
U = ±E_n and T' = T. □

The inequalities that determine the reduction domain imply a series of useful
inequalities for the entries in a reduced matrix T = (t_{αβ}). In the first place, since
t_{kk} ≤ T[e_{k+1}] = t_{k+1,k+1} (1 ≤ k < n), it follows that
(1.11)  t_{11} ≤ t_{22} ≤ ⋯ ≤ t_{nn}.
In addition, since t_{ll} ≤ T[e_k ± e_l] = t_{kk} ± 2t_{kl} + t_{ll} for 1 ≤ k < l ≤ n, it follows that
(1.12)  2|t_{kl}| ≤ t_{kk}  (1 ≤ k < l ≤ n).
Finally, we have the following important theorem.
THEOREM 1.8. If T = (t_{αβ}) ∈ F_n, then
(1.13)  t_{11}t_{22}⋯t_{nn} ≤ c_n det T,
where c_n depends only on n.
PROOF. For α = 1, ..., n we define the nonnegative number μ_α = μ_α(T) by the
following conditions:
(1) The columns of integers m satisfying T[m] ≤ μ_α include at least α linearly
independent columns.
(2) The maximum number of linearly independent columns of integers m satisfying
the inequality T[m] < μ_α is at most α − 1.
The numbers μ_1, ..., μ_n are called the successive minima of the matrix T > 0.
It is clear that μ_1 ≤ μ_2 ≤ ⋯ ≤ μ_n, and there exist linearly independent columns
m_1, ..., m_n such that T[m_α] = μ_α.
We prove the theorem in three stages.

LEMMA 1.9. Let T = (t_{αβ}) ∈ F_n, and let μ_1, ..., μ_n be the successive minima of T.
Then
(1.14)  t_{αα} ≤ c(α)μ_α  (1 ≤ α ≤ n),
where c(α) depends only on α.
PROOF OF THE LEMMA. As before, let m_1, ..., m_n be linearly independent columns
such that T[m_α] = μ_α, and let e_1, ..., e_n be the columns of E_n. If α is fixed, then
at least one of the columns m_1, ..., m_α is not a linear combination of the columns
e_1, ..., e_{α−1}. Suppose that m_k is such a column. Then there exists a column e'_α such
that (e_1, ..., e_{α−1}, e'_α) ∈ Λ^n_α (see (1.8)) and

m_k = b_1e_1 + ⋯ + b_{α−1}e_{α−1} + s·e'_α,

where s > 0 and b_1, ..., b_{α−1} are integers. If we replace e'_α by e'_α + h_1e_1 + ⋯ + h_{α−1}e_{α−1}
for suitable integers h_1, ..., h_{α−1}, without loss of generality we may assume that |b_β| ≤
s/2 (1 ≤ β < α). Since
e'_α = (1/s)m_k − (b_1/s)e_1 − ⋯ − (b_{α−1}/s)e_{α−1},

it follows from the triangle inequality for the norm ‖x‖ = (T[x])^{1/2} on the space
M_{n,1}(R) that

(1.15)  ‖e'_α‖ ≤ (1/s)‖m_k‖ + Σ_{β<α}(|b_β|/s)‖e_β‖ ≤ ‖m_k‖ + (1/2)Σ_{β<α}‖e_β‖.

Since (1.14) obviously holds for α = 1 with c(1) = 1, we can proceed to prove (1.14)
by induction on α. If the inequality holds for all β < α, then
T[e_β] = t_{ββ} ≤ c(β)μ_β ≤ c(β)μ_α.
Since k ≤ α, we have T[m_k] = μ_k ≤ μ_α. From this and (1.15) we obtain T[e'_α] ≤
c(α)μ_α, where

c(α) = (1 + (1/2)Σ_{β<α} c(β)^{1/2})^2.

Since (e_1, ..., e_{α−1}, e'_α) ∈ Λ^n_α and T is a reduced matrix, it follows that t_{αα} ≤ T[e'_α].
□

LEMMA 1.10. Let T ∈ P_n, and let μ_1 be the first minimum of T. Then
μ_1 ≤ γ_n(det T)^{1/n},
where γ_n depends only on n.
PROOF. We regard the set of columns M_{n,1}(R) as an n-dimensional real space with
the usual coordinates. For any μ > 0 the set
{X ∈ M_{n,1}(R); T[X] ≤ μ}
is obviously a centrally symmetric convex set centered at the origin, and its volume v is
equal to s_n μ^{n/2}(det T)^{−1/2}, where s_n is the volume of the unit sphere in n-dimensional
space. By Minkowski's theorem on convex solids, this set contains a point other than
the origin with integer coordinates, provided that v > 2^n, i.e., μ > 4s_n^{−2/n}(det T)^{1/n}.
Thus,
μ_1 = inf_{m ∈ M_{n,1}, m ≠ 0} T[m] ≤ 4s_n^{−2/n}(det T)^{1/n}. □

LEMMA 1.11. Let T ∈ P_n, and let μ_1, ..., μ_n be the successive minima of T. Then
μ_1 ⋯ μ_n ≤ (γ_n)^n det T,
where γ_n is the same constant as in Lemma 1.10.
PROOF. As before, let m_1, ..., m_n be linearly independent columns of integers such
that T[m_α] = μ_α (1 ≤ α ≤ n). Then the matrix M = (m_1, ..., m_n) is nonsingular,
and by Theorem 1.5 of Appendix 1, the matrix T[M] can be represented in the form
T[M] = ᵗL·L, where L = (l_{αβ}), l_{αβ} = 0 for α > β. We set
Q = D[LM^{−1}],  where D = diag(μ_1^{−1}, ..., μ_n^{−1}),
and we show that Q[m] ≥ 1 for nonzero m ∈ M_{n,1}.
In fact, let m = Mh, where ᵗh = (h_1, ..., h_n), and let α be the greatest index
for which h_α ≠ 0. Then m is a linear combination of the columns m_1, ..., m_α with
coefficients h_1, ..., h_α, and it is not a linear combination of the columns m_1, ..., m_{α−1}.
From the definition of the minimum μ_α it now follows that T[m] ≥ μ_α. Hence, taking
into account that the components (Lh)_β of the column Lh are zero if β > α, we obtain

Q[m] = D[Lh] = Σ_{β=1}^{α} μ_β^{−1}(Lh)_β^2 ≥ μ_α^{−1} Σ_{β=1}^{α} (Lh)_β^2
     = μ_α^{−1} E_n[Lh] = μ_α^{−1} T[m] ≥ 1.

From this inequality and Lemma 1.10 it follows that
γ_n(det Q)^{1/n} = γ_n((μ_1 ⋯ μ_n)^{−1} det T)^{1/n} ≥ 1. □
Returning to the proof of Theorem 1.8, we see that the inequality (1.13) follows
from Lemma 1.9 and Lemma 1.11. The theorem is proved. □

COROLLARY. Suppose that T = (t_{αβ}) ∈ F_n. Then

(1.16)  n^{1−n}c_n^{−1}T_0 ≤ T ≤ nT_0,  where T_0 = diag(t_{11}, ..., t_{nn})

and c_n is the same constant as in Theorem 1.8.
PROOF. Let ρ_1, ..., ρ_n denote the eigenvalues of the matrix T[T_0^{−1/2}], where
T_0^{1/2} = diag(t_{11}^{1/2}, ..., t_{nn}^{1/2}). Then

ρ_1 + ⋯ + ρ_n = σ(T[T_0^{−1/2}]) = n,

and by Theorem 1.8
ρ_1 ⋯ ρ_n = det T·(t_{11} ⋯ t_{nn})^{−1} ≥ 1/c_n.
Thus, n^{1−n}c_n^{−1} ≤ ρ_α ≤ n for α = 1, ..., n. If V is an orthogonal matrix such that

T[T_0^{−1/2}][V] = diag(ρ_1, ..., ρ_n),

then we have
n^{1−n}c_n^{−1}E_n ≤ T[T_0^{−1/2}][V] ≤ nE_n,
which is equivalent to the inequalities (1.16). □

The inequalities proved above imply the following theorem.

THEOREM 1.12. The number of classes
{R}_Z = {R[U]; U ∈ Λ^n}
of matrices R ∈ A_n of fixed determinant det R = d is finite.
PROOF. By Theorem 1.7, every class contains a reduced representative R' ∈ A_n ∩
F_n. From (1.11)-(1.13) it follows that all of the entries in R' are bounded in absolute
value by a constant that depends only on n and det R' = d. But these entries are
integers, and hence the number of such R' is finite. □
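For n = 2 the finiteness in Theorem 1.12 can be made completely explicit: the reduction conditions 0 ≤ 2t12 ≤ t11 ≤ t22 (cf. Problem 1.14 below) together with (1.13) bound every entry, so the reduced representatives of a given determinant can simply be enumerated. A brute-force Python illustration of ours (not a construction from the book):

```python
def reduced_matrices(d):
    """Enumerate matrices T = (t11 t12; t12 t22) of positive definite
    integral binary forms lying in the reduction domain F_2, i.e. with
    0 <= 2*t12 <= t11 <= t22, and with fixed determinant d."""
    found = []
    for t11 in range(1, d + 1):
        # (1.13) with c_2 = 4/3 gives t11*t22 <= (4/3)*d
        for t22 in range(t11, 4 * d // (3 * t11) + 2):
            for t12 in range(0, t11 // 2 + 1):
                if t11 * t22 - t12 * t12 == d:
                    found.append((t11, t12, t22))
    return found
```

For example, d = 3 yields exactly the two reduced matrices diag(1, 3) and (2 1; 1 2), corresponding to the two classes of forms x^2 + 3y^2 and 2x^2 + 2xy + 2y^2 of discriminant −12.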

COROLLARY. The number of classes {R}_Z of matrices R ∈ A_n of fixed level q is
finite.
PROOF. If q is the level of R ∈ A_n, then the inclusion qR^{−1} ∈ A_n implies that
det R divides q^n. Hence, det R can only take finitely many values. □

PROPOSITION 1.13. Set

c_n = sup_{T=(t_{αβ}) ∈ F_n} t_{11}t_{22} ⋯ t_{nn}(det T)^{−1}.

Then c_n ≥ c_{n−1} for every n > 1.

PROOF. If T' = (t_{αβ}) ∈ F_{n−1} and t = t_{nn} is a real number not less than
t_{11}, ..., t_{n−1,n−1}, then the matrix T = ( T' 0 ; 0 t ) is contained in F_n. Namely, if
l = (l_α) ∈ L_{k,n} and l_n ≠ 0, then T[l] = T'[l'] + l_n^2·t ≥ t ≥ t_{kk}. If, on the other hand,
l_n = 0, then k < n and l' ∈ L_{k,n−1}, so that T[l] = T'[l'] ≥ t_{kk}. Thus,
c_n ≥ sup_{T=(T' 0; 0 t) ∈ F_n} t_{11} ⋯ t_{n−1,n−1}·t·(t·det T')^{−1} = c_{n−1}. □

PROBLEM 1.14. Show that

F_2 = { ( t_{11} t_{12} ; t_{12} t_{22} ) ∈ P_2; 0 ≤ 2t_{12} ≤ t_{11} ≤ t_{22} }.

[Hint: See Problem 1.3.]
PROBLEM 1.15. Show that Lemma 1.10 for n = 2 holds with γ_2 = 2/√3. By
considering the matrix T = ( 1 1/2 ; 1/2 1 ), show that this value of γ_2 cannot be
improved.
PROBLEM 1.16. Show that Theorem 1.8 for n = 2 holds with c_2 = 4/3, and that
this value cannot be improved.
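Both extremal claims can be checked numerically; the following is our own verification sketch, not the book's solution. The matrix T = (1 1/2; 1/2 1) is the matrix of the form x^2 + xy + y^2 with det T = 3/4 and first minimum 1 = (2/√3)(det T)^{1/2}, while on F_2 the bound t11·t22 ≤ (4/3)·det T follows from t12^2 ≤ (t11/2)^2 ≤ t11·t22/4:

```python
import random

# First minimum of T = (1 1/2; 1/2 1), i.e. of the form x^2 + x*y + y^2,
# by brute force over a box of integer vectors:
mu1 = min(x * x + x * y + y * y
          for x in range(-5, 6) for y in range(-5, 6) if (x, y) != (0, 0))
det_T = 0.75
assert mu1 == 1
# equality in Lemma 1.10 with gamma_2 = 2/sqrt(3):
assert abs(mu1 - (2 / 3 ** 0.5) * det_T ** 0.5) < 1e-12

# Random sampling of F_2 = {0 <= 2*t12 <= t11 <= t22} confirms
# t11*t22 <= (4/3) * det T, with equality approached at T above:
random.seed(1)
for _ in range(10000):
    t11 = random.uniform(0.1, 5.0)
    t22 = random.uniform(t11, 10.0)
    t12 = random.uniform(0.0, t11 / 2)
    det = t11 * t22 - t12 * t12
    assert t11 * t22 <= (4 / 3) * det + 1e-9
```

For T = (1 1/2; 1/2 1) itself, t11·t22 = 1 and (4/3)·det T = 1, so c_2 = 4/3 is attained.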

3. The fundamental domain for the Siegel modular group. Just as in the case of the
classical modular group, the basic step in the construction of a fundamental domain
for Γ^n on H_n is to choose in each orbit a representative Z that satisfies the inequalities
|det(CZ + D)| ≥ 1 for every pair of n × n-matrices (C, D) that occurs in a matrix
( A B ; C D ) ∈ Γ^n. The proof that such a choice is possible is based on an explicit
description of all possible bottom rows of the matrices in Γ^n.
We shall examine pairs (C, D) of n × n integer matrices, where n is fixed. Such a
pair is said to be symmetric if C·ᵗD = D·ᵗC. A pair is said to be relatively prime if,
whenever GC and GD are integer matrices for an n × n rational matrix G, it follows
that G itself is an integer matrix.
LEMMA 1.17. Let (C, D) be a pair of n × n integer matrices. Then the following
conditions are equivalent:
(1) there exist matrices A and B such that M = ( A B ; C D ) ∈ Γ^n;
(2) the pair (C, D) is symmetric and relatively prime.
PROOF. If (C, D) satisfies (1), then the pair is symmetric by the second relation
(2.8) of Chapter 1. Furthermore, if GC and GD are integer matrices, then the same
relations imply integrality of the matrix
G = G·(DᵗA − CᵗB) = −(GC)ᵗB + (GD)ᵗA.
Now suppose that (C, D) satisfies (2). Note that the pair
(C', D') = (C, D)M″ = (CA″ + DC″, CB″ + DD″),

where M″ = ( A″ B″ ; C″ D″ ) ∈ Γ^n, then also satisfies (2). According to the conditions
(2.8) of Chapter 1, the matrix
C'·ᵗD' = CA″ᵗB″ᵗC + DC″ᵗB″ᵗC + CA″ᵗD″ᵗD + DC″ᵗD″ᵗD
       = CA″ᵗB″ᵗC + DC″ᵗD″ᵗD + DC″ᵗB″ᵗC + CB″ᵗC″ᵗD + CᵗD
is symmetric; in addition, it is clear that the pair (C', D') is relatively prime. We choose
a matrix M″ ∈ Γ^n such that (C', D') = (E, 0). Let t be the first row of the matrix
(C, D). Since (C, D) is a relatively prime pair, it follows that t ≠ 0. By Lemma 3.9 of
Chapter 1, there exists a matrix M_0 ∈ Γ^n such that M_0·ᵗt = ᵗ(t, 0, ..., 0), where t ∈ N.
Then
(C, D)·ᵗM_0 = (C', D')
with

C' = ( t 0 ; c' C_1 ),   D' = ( 0 0 ; d' D_1 ),

where the first row of C' is (t, 0, ..., 0), the first row of D' is zero, and c', d' are
columns of length n − 1.
Since C'·ᵗD' = D'·ᵗC', it follows that c' = d' = 0, and
C_1·ᵗD_1 is a symmetric matrix. Since the pair (C', D') is relatively prime, it follows that
t = 1, (C_1, D_1) is a relatively prime pair, and our claim is proved for n = 1. If n ≥ 2,
then by induction we may assume that the assertion holds for the pair (C_1, D_1), i.e.,
(C_1, D_1)M_2 = (E_{n−1}, 0) for some M_2 = ( A_2 B_2 ; C_2 D_2 ) ∈ Γ^{n−1}. The matrix M_1 with
blocks ( 1 0 ; 0 A_2 ), ( 0 0 ; 0 B_2 ), ( 0 0 ; 0 C_2 ), ( 0 0 ; 0 D_2 ) belongs to the group Γ^n, and

(C, D)·ᵗM_0·M_1 = (E, 0), or
(C, D) = (E, 0)(M″)^{−1} = (0, E)J_n^{−1}(M″)^{−1}, where M″ = ᵗM_0·M_1 ∈ Γ^n.
Consequently, (C, D) satisfies (1) if we set M = J_n^{−1}(M″)^{−1} ∈ Γ^n. □

We shall say that two symmetric and relatively prime pairs (C, D) and (C', D') are
equivalent (or belong to the same class) if
(1.17)  (C', D') = U(C, D) = (UC, UD),  where U ∈ Λ^n.
In this case obviously
(1.18)  C·ᵗD' = D·ᵗC'.
Conversely, if (1.18) holds, and if M, M' are matrices in Γ^n with bottom rows (C, D)
and (C', D'), respectively, then M' = (M'M^{−1})M and M'M^{−1} = ( ᵗU^{−1} B ; 0 U ) ∈ Γ^n.
Hence, U ∈ Λ^n, and the pairs satisfy (1.17). Thus, the conditions (1.17) and (1.18)
are equivalent.
LEMMA 1.18. Every symmetric and relatively prime pair (C, D) such that rank C =
r, where 0 ≤ r ≤ n, is equivalent to a pair of the form

(1.19)  ( ( C_1 0 ; 0 0 )·ᵗU_1,  ( D_1 0 ; 0 E_{n−r} )·U_1^{−1} )

(to the pair (0, E_n) if r = 0), where (C_1, D_1) is a symmetric and relatively prime pair of
r × r-matrices, rank C_1 = r, and U_1 ∈ Λ^n.
Two symmetric and relatively prime pairs of the form (1.19), one of which corresponds
to C_1, D_1, U_1 and the other of which corresponds to C_2, D_2, U_2, are equivalent if and only
if
(1.20)  U_2 = U_1 ( V B' ; 0 V' ),  where V ∈ Λ^r, V' ∈ Λ^{n−r}, B' ∈ M_{r,n−r},

and the pair (C_2·ᵗV, D_2·V^{−1}) is equivalent to the pair (C_1, D_1).

REMARK. If U_1 = (Q_1, Q_1') and U_2 = (Q_2, Q_2') are two matrices in the group Λ^n,
where Q_1 and Q_2 are n × r-blocks, then (1.20) is equivalent to the condition
(1.21)  Q_2 = Q_1·V,  where V ∈ Λ^r.
Namely, (1.21) obviously follows from (1.20). Conversely, from (1.21) it follows that
the matrix

U_2' = U_2 ( V 0 ; 0 E_{n−r} )^{−1} ∈ Λ^n

has the same first r columns as U_1; then, by Lemma 1.6,

U_2' = U_1 ( E_r B' ; 0 V' ),

which implies (1.20).

PROOF OF THE LEMMA. If r = 0 or n, then the lemma is obvious. Suppose that
0 < r < n. In this case the homogeneous system of linear equations Cx = 0, where
ᵗx = (x_1, ..., x_n), has a nonzero integer solution l, where we clearly may suppose
that the components of the column l are relatively prime. Let V be a unimodular
matrix with last column l, which exists by Lemma 1.5. Then the last column of the
matrix CV consists of zeros. Repeating the same argument for the rows of CV, we
find that there exist matrices V, V_1 ∈ Λ^n such that V_1·C·V = ( C' 0 ; 0 0 ), where C' is
an (n−1) × (n−1)-matrix. If r = rank C = rank C' < n − 1, then we similarly find
V_1', V' ∈ Λ^{n−1} such that the last row and last column of the matrix V_1'·C'·V' consist of
zeros. Then

( V_1' 0 ; 0 1 ) V_1 C V ( V' 0 ; 0 1 ) = ( C″ 0 ; 0 0 ),

where C″ is an (n−2) × (n−2)-matrix. Continuing this process, we eventually obtain
two unimodular matrices, which we shall denote U' and U_1, such that

U'·C·U_1* = ( C_1 0 ; 0 0 ),

where C_1 is an r × r-matrix of rank r (here U_1* = ᵗU_1^{−1}). We set

U'·D = ( D_1 D_2 ; D_3 D_4 ) U_1^{−1},

where D_1 is an r × r-matrix, and the sizes of the other blocks are determined by the
size of D_1. The pair (U'C, U'D), and hence also the pair

( ( C_1 0 ; 0 0 ), ( D_1 D_2 ; D_3 D_4 ) ),

is clearly symmetric and relatively prime. From this it easily follows that (C_1, D_1) is a
symmetric pair, D_3 = 0, and D_4 ∈ Λ^{n−r}. If we now set

U = ( E_r −D_2D_4^{−1} ; 0 D_4^{−1} ) U' ∈ Λ^n,

we see that

UC = ( C_1 0 ; 0 0 )·ᵗU_1,   UD = ( D_1 0 ; 0 E_{n−r} )·U_1^{−1}.

This implies, in particular, that (C_1, D_1) is a relatively prime pair, and the first part of
the lemma is proved.
Now suppose that we are given two symmetric and relatively prime pairs of the
form (1.19), written in terms of the matrices C_1, D_1, U_1 and C_2, D_2, U_2, respectively. If
they are equivalent, then, by (1.18), we have the equality

( C_1 0 ; 0 0 )·ᵗU_1·U_2*·( ᵗD_2 0 ; 0 E ) = ( D_1 0 ; 0 E )·U_1^{−1}U_2·( ᵗC_2 0 ; 0 0 ).

If we divide the matrices ᵗU_1·U_2* and U_1^{−1}U_2 into blocks of the corresponding sizes,

ᵗU_1·U_2* = ( W H' ; W″ H″ ),   U_1^{−1}U_2 = ( V B' ; B″ V' ),

then we can rewrite the last relation in the form

( C_1·W·ᵗD_2  C_1·H' ; 0 0 ) = ( D_1·V·ᵗC_2  0 ; B″·ᵗC_2  0 ),

from which it follows that C_1·W·ᵗD_2 = D_1·V·ᵗC_2 and B″ = 0, since C_2 is a nonsingular
matrix. In particular, U_2 = U_1 ( V B' ; 0 V' ).
Furthermore, since ᵗU_1·U_2* = (U_1^{−1}U_2)*,
it follows that W = V*. Thus,
C_1·ᵗ(D_2·V^{−1}) = D_1·ᵗ(C_2·ᵗV),
so that the pair (C_2·ᵗV, D_2·V^{−1}) is equivalent to the pair (C_1, D_1). The converse assertion
is obvious. □

We now examine the orbits (1.1) of the Siegel modular group K = Γ^n on H_n.
By the height of a point Z = X + iY ∈ H_n, denoted h(Z), we mean the determinant
det Y. By Lemma 2.8 of Chapter 1 we have

(1.22)  h(M⟨Z⟩) = |det(CZ + D)|^{−2} h(Z),  if M = ( A B ; C D ) ∈ Γ^n.


LEMMA 1.19. Every orbit of Γ^n on H_n contains points Z of maximal height, i.e.,
points with the property
(1.23)  |det(CZ + D)| ≥ 1
for any symmetric and relatively prime pair (C, D).
PROOF. If h(Z) ≥ h(M⟨Z⟩) for all M ∈ Γ^n, then from (1.22) it follows that (1.23)
holds for any pair (C, D) that gives the bottom rows of a matrix in Γ^n, i.e. (by Lemma
1.17), for any symmetric and relatively prime pair. Conversely, if all of the inequalities
(1.23) hold, then h(M⟨Z⟩) ≤ h(Z) for all M ∈ Γ^n. Thus, it remains to prove that
every orbit has a point of maximal height. This, in turn, follows if we show that for
every fixed Z ∈ H_n the inequality
(1.24)  |det(CZ + D)| < 1
has only finitely many solutions in nonequivalent symmetric and relatively prime pairs
(C, D). Namely, in that case the function h(M⟨Z⟩) takes only finitely many values
greater than h(Z) on the orbit of Z.
Suppose that rank C = r. By Lemma 1.18, we may assume that the pair (C, D)
has the form (1.19). Hence, setting U_1 = (Q, Q'), where Q is an n × r-block, we obtain

CZ + D = [ ( C_1 0 ; 0 0 ) ( ᵗQ ; ᵗQ' ) Z (Q, Q') + ( D_1 0 ; 0 E_{n−r} ) ] U_1^{−1}
        = ( C_1·Z[Q] + D_1   C_1·ᵗQ·Z·Q' ; 0   E_{n−r} ) U_1^{−1}.

Thus, |det(CZ + D)| = |det C_1| · |det(Z[Q] + P)|, where P = C_1^{−1}D_1 is a rational
symmetric r × r-matrix. By Theorem 1.7, if we replace Q by QV for a suitable V ∈ Λ^r
(see Lemma 1.18 and the subsequent remark), we may assume that Y[Q] ∈ F_r. We
note that the class of the pair (C_1, D_1) is uniquely determined by the symmetric matrix
P = C_1^{−1}D_1. In fact, if C_1^{−1}D_1 = C_2^{−1}D_2 for another symmetric and relatively prime
pair of r × r-matrices (C_2, D_2), and if det C_2 ≠ 0, then C_1^{−1}D_1 = ᵗD_2·ᵗC_2^{−1}, and hence

C_1·ᵗD_2 = D_1·ᵗC_2, so that our two pairs satisfy the condition for equivalence in the form
(1.18). We set T = Y[Q] and S = X[Q] + P. Since T > 0 and S is symmetric, there
exists a real r × r-matrix F such that T[F] = E_r and S[F] = H = diag(h_1, ..., h_r).
Since (det F)^{−2} = det T, we have
|det(Z[Q] + P)| = |det(S + iT)| = |det((H + iE_r)[F^{−1}])|
= det T · Π_{α=1}^{r} (1 + h_α^2)^{1/2}.
Thus, (1.24) is equivalent to the inequality

(1.25)  |det C_1| · det T · Π_{α=1}^{r} (1 + h_α^2)^{1/2} < 1.

From this inequality it follows that det T < 1. Let q_1, ..., q_r denote the columns of the
matrix Q. Since T = (ᵗq_α·Y·q_β) is a reduced matrix, it follows by Theorem 1.8 that

Π_{α=1}^{r} Y[q_α] ≤ c_r·det T < c_r.

On the other hand, if λ is the smallest eigenvalue of the matrix Y, then Y[q_α] ≥
λ·ᵗq_α·q_α ≥ λ. These inequalities imply that Y[q_α] < λ^{1−r}c_r (1 ≤ α ≤ r), and so
all of the q_α belong to a certain finite set of integer vectors. In particular, there are
only finitely many matrices Q that are not connected by relations of the form (1.21)
and have the property that a pair of the form (1.19) satisfies (1.24). Furthermore,
det T = det Y[Q] takes only finitely many values, and hence the inequality (1.25)
implies that the numbers |det C_1|, h_1, ..., h_r are bounded from above. In addition,
since T = ᵗF^{−1}·F^{−1}, it follows that all of the entries in the matrix F^{−1} are bounded from
above. Consequently, all of the entries in S = H[F^{−1}], and hence all of the entries in
P = S − X[Q], are bounded from above. We conclude that P is a rational matrix with
bounded entries, all of whose denominators are also bounded, since they are divisors
of a finite number of values of det C_1. There are only finitely many such P, and hence
only finitely many nonequivalent pairs (C_1, D_1). Applying the second part of Lemma
1.18, we complete the proof of the lemma. □
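The transformation rule (1.22) that underlies this lemma can be sanity-checked numerically. For n = 2 take M = J_2 = (0 −E_2; E_2 0) ∈ Γ^2, for which M⟨Z⟩ = −Z^{−1} and CZ + D = Z; a standard-library Python sketch (all function names are ours):

```python
def det2(m):
    # determinant of a 2x2 (real or complex) matrix given as nested lists
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def neg_inv2(m):
    # -m^{-1} for a nonsingular 2x2 complex matrix
    d = det2(m)
    return [[-m[1][1] / d, m[0][1] / d],
            [m[1][0] / d, -m[0][0] / d]]

def height(z):
    # h(Z) = det(Im Z)
    return det2([[z[i][j].imag for j in range(2)] for i in range(2)])

Z = [[0.3 + 1.0j, 0.1 + 0.2j],       # symmetric, with Im Z positive definite
     [0.1 + 0.2j, -0.4 + 1.5j]]
W = neg_inv2(Z)                      # J_2<Z> = -Z^{-1}
# (1.22) with C = E_2, D = 0: h(M<Z>) = |det Z|^{-2} h(Z)
assert abs(height(W) - abs(det2(Z)) ** -2 * height(Z)) < 1e-12
```

The same check works for any M ∈ Γ^n once the block action (AZ + B)(CZ + D)^{−1} is coded; the J_2 case keeps the arithmetic to a 2 × 2 inverse.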

THEOREM 1.20. Let D_n be the subset of the upper half-plane H_n that consists of all
matrices Z = X + iY ∈ H_n that satisfy the conditions:
(1) |det(CZ + D)| ≥ 1 for all symmetric and relatively prime pairs of n × n-matrices
(C, D);
(2) Y ∈ F_n, where F_n is the reduction domain (1.10);
(3) X ∈ X_n = {X = (x_{αβ}) ∈ S_n(R); |x_{αβ}| ≤ 1/2 (1 ≤ α, β ≤ n)}.
Then D_n intersects with every orbit of Γ^n on H_n. If Z and Z' are two interior points of
D_n and Z' = M⟨Z⟩ with M ∈ Γ^n, then M = ±E_{2n}. In particular, all of the interior
points of D_n lie in distinct orbits of Γ^n.
PROOF. We consider the orbit Γ^n⟨Z″⟩ of an arbitrary point Z″ ∈ H_n. By Lemma
1.19, there exists a point Z' ∈ Γ^n⟨Z″⟩ having maximal height, and this point satisfies
all of the inequalities in (1). Any transformation of the form

Z' → ( ᵗV S·V^{−1} ; 0 V^{−1} )⟨Z'⟩ = X'[V] + S + iY'[V],



where V ∈ Λ^n and S ∈ S_n, belongs to Γ^n and does not change the height of Z'. By
Theorem 1.7, there exists a matrix V ∈ Λ^n such that Y = Y'[V] ∈ F_n. There also
obviously exists a symmetric integer matrix S such that X = X'[V] + S ∈ X_n. Then

Z = X + iY ∈ Γ^n⟨Z″⟩ ∩ D_n.

Now suppose that Z and Z' ∈ D_n, Z' = M⟨Z⟩, where M = ( A B ; C D ) ∈ Γ^n.
Then h(Z) = h(Z') by Lemma 1.19, and from (1.22) it follows that |det(CZ + D)| = 1.
Similarly, because Z = M^{−1}⟨Z'⟩, we have |det(−ᵗC·Z' + ᵗA)| = 1 (see (2.9) of Chapter
1). If C ≠ 0, then these equations are nontrivial, and consequently the points Z and Z'
lie on the boundary of D_n. If C = 0, then M can obviously be written in the form M =
( ᵗV S·V^{−1} ; 0 V^{−1} ), where V ∈ Λ^n, S ∈ S_n. Then Z' = X' + iY' = X[V] + S + iY[V],
where X + iY = Z. In particular, Y' = Y[V]. Since Y, Y' ∈ F_n, it follows by Theorem
1.7 that either Y and Y' are boundary points of F_n, or else V = ±E_n. In the latter
case X' = X + S, and hence S = 0 if X and X' do not lie on the boundary of X_n. We
conclude that M = ±E_{2n} if Z and Z' are not boundary points of D_n. □

THEOREM 1.21. Any matrix Z = X + iY ∈ D_n satisfies the inequalities

(1.26)  Y ≥ c'_n·E_n,  where c'_n = n^{1−n}√3 / (2c_n),

and c_n is the constant in Theorem 1.8. In particular,

(1.27)  σ(Y^{−1}) ≤ σ_n,  where σ_n = 2n^n·c_n / √3.

PROOF. We set Z = (z_{αβ}), z_{αβ} = x_{αβ} + iy_{αβ}. From the inequality |det(CZ + D)| ≥
1 for the symmetric and relatively prime pair

(C, D) = ( ( 1 0 ; 0 0 ), ( 0 0 ; 0 E_{n−1} ) )

we obtain: |z_{11}| = (x_{11}^2 + y_{11}^2)^{1/2} ≥ 1. Since |x_{11}| ≤ 1/2, this implies that y_{11}^2 ≥
1 − 1/4, i.e., y_{11} ≥ √3/2. The last inequality and (1.11) imply that y_{αα} ≥ √3/2
for α = 1, ..., n. These inequalities, along with (1.16), prove (1.26). From (1.26) it
follows that the smallest eigenvalue of Y is greater than or equal to c'_n. Hence, the
largest eigenvalue of Y^{−1} is less than or equal to 1/c'_n, and σ(Y^{−1}) ≤ n/c'_n. □

From Theorem 1.21 it follows that the fundamental domain D_n is closed in the
space of all complex symmetric matrices. Siegel proved that D_n is a connected domain
bounded by a finite number of algebraic hypersurfaces.
4. Subgroups of finite index. Let K be an arbitrary subgroup of finite index in the
modular group Γ^n. We set

(1.28)  K' = (−E_{2n})K ∪ K.



Clearly, K' is also a subgroup of Γ^n. We let M_1, ..., M_μ denote a complete set of left
coset representatives for Γ^n modulo K', so that

(1.29)  Γ^n = ⋃_{α=1}^{μ} K'M_α  and  K'M_α ∩ K'M_β = ∅, if α ≠ β,

and we set

(1.30)  D_K = ⋃_{α=1}^{μ} M_α(D_n),

where D_n is a fundamental domain for Γ^n. We then have the following theorem.


THEOREM 1.22. The K-orbit K⟨Z⟩ of every point Z ∈ H_n intersects with the set
D_K. If Z and Z' are two interior points of D_K and Z' = g⟨Z⟩, where g ∈ K, then
g = ±E_{2n}. In particular, all of the interior points of D_K lie on different orbits of K.
PROOF. Since all of the orbits of K and K' coincide and D_K = D_{K'}, it follows
that, replacing K by K', we may suppose that K = K'. By Theorem 1.20, there exists
a matrix M ∈ Γ^n such that M⟨Z⟩ ∈ D_n. Let KM_α be the left coset in (1.29) that
contains the matrix M^{−1}. Then, writing M^{−1} in the form g^{−1}M_α, where g ∈ K, we
obtain Z ∈ M^{−1}(D_n) = g^{−1}M_α(D_n), and hence g⟨Z⟩ ∈ M_α(D_n) ⊂ D_K.
Suppose that Z and Z' = g⟨Z⟩ are interior points of D_K. By replacing Z, if
necessary, by a sufficiently nearby point, we may assume that Z is an interior point of
one of the sets in the decomposition (1.30), say, the set M_α(D_n). Then Z = M_α⟨Z_1⟩,
where Z_1 is an interior point of D_n. Similarly, g⟨Z⟩ = M_β⟨Z_2⟩, where Z_2 ∈ D_n. Since

M_β^{−1}gM_α⟨Z_1⟩ = M_β^{−1}g⟨Z⟩ = Z_2,

it follows by Theorem 1.20 that M_β^{−1}gM_α = ±E_{2n}, and this obviously implies that
α = β and g = ±E_{2n}. □

From Theorem 1.22 it follows that

(1.31)  H_n = ⋃_{g ∈ K'/{±E_{2n}}} g(D_K),

and the intersection of any pair of subsets on the right in this decomposition does not
contain any interior points of the subsets.
PROPOSITION 1.23. (1) Let K ⊂ Γ^n be a subgroup of finite index, and let D_K be a
fundamental domain for K in H_n. Then the volume

v(K) = v(D_K) = ∫_{D_K} d*Z,

where d*Z is the invariant volume element defined in Proposition 2.9 of Chapter 1, is
finite and does not depend on the choice of fundamental domain.
(2) Let K_1 ⊂ K be subgroups of finite index in Γ^n. Then in the notation (1.28)

(1.32)  v(K_1) = [K' : K_1']·v(K).


PROOF. From Proposition 2.9 of Chapter 1 it follows that v(D_K) does not depend
on the choice of D_K. If we choose D_K in the form (1.30), we have

v(D_K) = Σ_{α=1}^{μ} ∫_{M_α(D_n)} d*Z = Σ_{α=1}^{μ} ∫_{D_n} d*M_α⟨Z⟩
       = Σ_{α=1}^{μ} ∫_{D_n} d*Z = [Γ^n : K']·v(D_n),

from which (1.32) follows. To prove finiteness of the volume it suffices to treat the case
v(D_n). From Theorems 1.21 and 1.8 and inequalities (1.11) and (1.12) we obtain

v(D_n) ≤ ∫_{{Y ∈ F_n; Y ≥ c'_n E}} (det Y)^{−(n+1)} dY

 ≤ c ∫_{c'_n ≤ y_{11} ≤ ⋯ ≤ y_{nn}, |y_{αβ}| ≤ y_{αα}/2 (α ≠ β)} (y_{11} ⋯ y_{nn})^{−(n+1)} Π_{1≤α≤β≤n} dy_{αβ}

 ≤ c ∫_{y_{11}, ..., y_{nn} ≥ c'_n} (y_{11} ⋯ y_{nn})^{−(n+1)} Π_{α=1}^{n} y_{αα}^{n−α} Π_{α=1}^{n} dy_{αα}

 = c Π_{α=1}^{n} ∫_{c'_n}^{∞} y_{αα}^{−(α+1)} dy_{αα} < ∞,

where c denotes suitable constants. □
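For n = 1 the finiteness is elementary and the volume can even be evaluated: d*Z = y^{−2} dx dy, and integrating out y over D_1 (from √(1−x²) to ∞) leaves ∫_{−1/2}^{1/2} (1 − x²)^{−1/2} dx = arcsin(1/2) − arcsin(−1/2) = π/3. A quick numeric check of ours (not part of the book's proof):

```python
import math

# v(D_1) = integral over D_1 of y^{-2} dx dy; the inner integral in y
# from sqrt(1 - x^2) to infinity equals 1/sqrt(1 - x^2), so the volume is
# the integral of (1 - x^2)^{-1/2} over [-1/2, 1/2], which equals pi/3.
N = 200000
total = 0.0
for i in range(N):
    x = -0.5 + (i + 0.5) / N        # midpoint rule on [-1/2, 1/2]
    total += 1.0 / math.sqrt(1.0 - x * x) / N

assert abs(total - math.pi / 3) < 1e-6
```

The integrand is smooth on [−1/2, 1/2], so the midpoint rule converges rapidly; the exact value π/3 is the classical covolume of the modular group.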

PROBLEM 1.24. Sketch connected fundamental domains for Γ_0(2) and Γ^1(2).
PROBLEM 1.25. Compute v(Γ^1).

§2. Definition of modular forms


1. Congruence subgroups of the modular group. The principal congruence subgroup
of level q in the Siegel modular group is the group
(2.1)  Γ^n(q) = {M ∈ Γ^n; M ≡ E_{2n} (mod q)};

this is a normal subgroup of finite index in Γ^n, since it is the kernel of the homomor-
phism from Γ^n to the finite group GL_{2n}(Z/qZ) that is defined by reduction modulo q.
If for some q a subgroup K satisfies

(2.2)  Γ^n(q) ⊂ K ⊂ Γ^n,

then it is called a congruence subgroup. It is clear that

(2.3)  μ(K) = [Γ^n : K] < ∞.



PROBLEM 2.1. Show that

μ(Γ^1(q)) = q^3 Π_{p|q} (1 − p^{−2}),

where p runs through all prime divisors of q.
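The formula can be cross-checked against a brute-force count: reduction modulo q maps Γ^1 = SL_2(Z) onto SL_2(Z/qZ) with kernel Γ^1(q), so μ(Γ^1(q)) equals the order of SL_2(Z/qZ). A Python sketch (function names are ours):

```python
from fractions import Fraction

def index_gamma(q):
    """mu(Gamma^1(q)) = q^3 * product over primes p | q of (1 - p^(-2))."""
    result = Fraction(q) ** 3
    n, p = q, 2
    while p * p <= n:               # trial division over prime divisors of q
        if n % p == 0:
            result *= 1 - Fraction(1, p * p)
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:                       # leftover prime factor
        result *= 1 - Fraction(1, n * n)
    return int(result)

def order_sl2(q):
    # brute-force order of SL_2(Z/qZ)
    return sum(1 for a in range(q) for b in range(q)
                 for c in range(q) for d in range(q)
                 if (a * d - b * c) % q == 1)

for q in (2, 3, 4, 5, 6):
    assert index_gamma(q) == order_sl2(q)
```

For example, index_gamma(2) = 6 and index_gamma(5) = 120, the orders of SL_2(Z/2Z) and SL_2(Z/5Z).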


2. Modular forms of integer weight. Let $K$ be a congruence subgroup of the Siegel modular group $\Gamma^n$, and let $\chi$ be a finite character of $K$, i.e., a homomorphism from $K$ to a finite group of roots of unity. A function $F$ on the Siegel upper half-plane $H_n$ is said to be a modular form of degree $n$, weight $k$ (where $k$ is an integer), and character $\chi$ for the group $K$ if it satisfies the following three conditions:
(1) $F$ is a holomorphic function in the $n(n+1)/2$ complex variables on all of $H_n$;
(2) for every $M = \begin{pmatrix} A & B \\ C & D \end{pmatrix} \in K$, $F$ satisfies the functional equation

\[(2.4)\quad \det(CZ + D)^{-k} F(M\langle Z\rangle) = \chi(M)F(Z);\]

(3) if $n = 1$, then for any matrix $\begin{pmatrix} a & b \\ c & d \end{pmatrix} \in \Gamma^1$ the function

\[ (cz + d)^{-k} F((az + b)(cz + d)^{-1}) \]

is bounded on any region $H_1(\varepsilon) \subset H_1$, where $\varepsilon > 0$ (see (1.11) of Chapter 1).

The set $\mathfrak{M}_k(K, \chi)$ of all modular forms of degree $n$, weight $k$, and character $\chi$ for the group $K$ is obviously a vector space over the field of complex numbers. We let $\mathfrak{M}_k^n$ denote the space $\mathfrak{M}_k(\Gamma^n, 1)$.
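For $n = 1$ the functional equation (2.4) can be probed numerically on a classical example: the Eisenstein series $E_4(z) = 1 + 240\sum_{n \geqslant 1}\sigma_3(n)q^n$, $q = e^{2\pi i z}$, lies in $\mathfrak{M}_4^1$, so it must satisfy $E_4(-1/z) = z^4 E_4(z)$. A short Python sketch with a truncated $q$-expansion (the helper names are ours):

```python
import cmath

def sigma3(n):
    """Sum of cubes of the divisors of n."""
    return sum(d ** 3 for d in range(1, n + 1) if n % d == 0)

def E4(z, terms=60):
    """Truncated q-expansion of the weight-4 Eisenstein series."""
    q = cmath.exp(2j * cmath.pi * z)
    return 1 + 240 * sum(sigma3(n) * q ** n for n in range(1, terms))

z = 0.2 + 1.1j
lhs = E4(-1 / z)       # value after the inversion z -> -1/z
rhs = z ** 4 * E4(z)   # predicted by the weight-4 transformation law
```

Since both $z$ and $-1/z$ have imaginary part near 1, sixty terms of the expansion give agreement to many decimal places.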
3. Definition of modular forms of half-integer weight. Let $K$ be a congruence subgroup of the group $\Gamma_0^n(4)$, let $\chi$ be a finite character of $K$, and let $k$ be an odd integer. A function $F$ on $H_n$ is said to be a modular form of degree $n$, weight $k/2$, and character $\chi$ for the group $K$ if the following conditions are fulfilled:
(1) $F$ is a holomorphic function in the $n(n+1)/2$ complex variables on all of $H_n$;
(2) for every $M = \begin{pmatrix} A & B \\ C & D \end{pmatrix} \in K$, $F$ satisfies the functional equation

\[(2.5)\quad j_{(2)}(M, Z)^{-k} F(M\langle Z\rangle) = \chi(M)F(Z),\]

where $j_{(2)}(M, Z)$ is the automorphy factor (4.37) of Chapter 1;
(3) if $n = 1$, then for any matrix $\begin{pmatrix} a & b \\ c & d \end{pmatrix} \in \Gamma^1$ the function

\[ (cz + d)^{-k/2} F((az + b)(cz + d)^{-1}) \]

is bounded on any region $H_1(\varepsilon) \subset H_1$, where $\varepsilon > 0$.

The set $\mathfrak{M}_{k/2}(K, \chi)$ of all modular forms of degree $n$, weight $k/2$, and character $\chi$ for the group $K$ is a vector space over $\mathbb{C}$.
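The prototype for $n = 1$, $k = 1$ is the theta-series $\theta(z) = \sum_{n \in \mathbb{Z}} e^{2\pi i n^2 z}$, a modular form of weight $1/2$ for $\Gamma_0^1(4)$. To sidestep the choice of a branch of the square root, one can test the squared functional equation: for $M = \begin{pmatrix} 1 & 0 \\ 4 & 1 \end{pmatrix} \in \Gamma_0^1(4)$ the classical transformation law of $\theta^2$ (a weight-1 form with character, a standard fact not proved in this section) gives $\theta(M\langle z\rangle)^2 = (4z+1)\,\theta(z)^2$. A numerical sketch (truncated series; helper names ours):

```python
import cmath

def theta(z, terms=60):
    """Truncated theta-series 1 + 2*sum q^(n^2), q = exp(2*pi*i*z)."""
    q = cmath.exp(2j * cmath.pi * z)
    return 1 + 2 * sum(q ** (n * n) for n in range(1, terms))

z = 0.1 + 0.8j
mz = z / (4 * z + 1)               # action of M = (1 0; 4 1) on z
lhs = theta(mz) ** 2
rhs = (4 * z + 1) * theta(z) ** 2  # squared weight-1/2 transformation law
```

Taking principal square roots on both sides recovers the weight-$1/2$ statement up to a fourth root of unity, which is exactly the role played by the automorphy factor $j_{(2)}$.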
4. Theta-series as modular forms. Let $Q \in \mathbf{A}_m$ be the matrix of a positive definite integral quadratic form in $m$ variables, let $q$ be the level of $Q$, and let $n \in \mathbb{N}$. Then Theorems 1.4 and 3.13, along with Theorem 4.10 if $m$ is even or Theorem 4.12 and Proposition 4.11 if $m$ is odd (see Chapter 1), show that the theta-series $\theta^n(Z, Q)$ of degree $n$ for $Q$ satisfies conditions (1) and (2) in the definition of a modular form of weight $m/2$ and character $\chi_Q$ for the group $\Gamma_0^n(q)$. Furthermore, from Proposition 3.14 and Theorem 3.6 of Chapter 1 it is easy to see that any function of the form

\[(2.6)\quad \det(CZ + D)^{-m/2}\, \theta^n(M\langle Z\rangle, Q), \quad \text{where } M = \begin{pmatrix} A & B \\ C & D \end{pmatrix} \in \Gamma^n,\]

is a linear combination with constant coefficients of a finite number of generalized theta-series $\theta^n(Z, Q|T)$. Since each of these series is absolutely and uniformly convergent on any $H_n(\varepsilon)$ for $\varepsilon > 0$ (see (1.11) of Chapter 1), it follows that the function (2.6) has this property. Thus, we have

THEOREM 2.2. The theta-series $\theta^n(Z, Q)$ of degree $n$ for matrices $Q \in \mathbf{A}_m$ are modular forms of weight $m/2$ and character $\chi_Q$ for the group $\Gamma_0^n(q)$:

\[(2.7)\quad \theta^n(Z, Q) \in \mathfrak{M}_{m/2}(\Gamma_0^n(q), \chi_Q),\]

where $q$ is the level of $Q$ and $\chi_Q$ is the character in Theorem 4.10 of Chapter 1 if $m$ is even or in Theorem 4.12 of Chapter 1 if $m$ is odd.

This theorem reduces the study of the theta-series of quadratic forms to the investigation of modular forms. The latter study often turns out to be simpler, because of the invariant definition of such forms.

PROBLEM 2.3. Let $n, m \in \mathbb{N}$, $Q \in \mathbf{A}_m^+$, and $T \in T_n(Q)$. Show that the generalized theta-series $\theta^n(Z, Q|T)$ (see (3.32) of Chapter 1) is a modular form of weight $m/2$ and trivial character for the group $\Gamma^n(q)$, where $q$ is the level of $Q$.

[Hint: See Problem 3.17 of Chapter 1.]
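Theorem 2.2 can be made concrete for $Q = 2E_4$ (the sum of four squares, $m = 4$, level $q = 4$): then $\theta^1(z, Q) = \sum_{t \geqslant 0} r_4(t)q^t$ is a weight-2 form for $\Gamma_0^1(4)$, and its Fourier coefficients $r_4(t)$, the numbers of representations of $t$ as a sum of four squares, obey Jacobi's classical formula $r_4(t) = 8\sigma(t) - 32\sigma(t/4)$ (the second term only when $4 \mid t$). A brute-force check in Python (helper names ours, not the book's):

```python
from itertools import product

def r4(t):
    """Number of integer vectors x in Z^4 with x1^2 + ... + x4^2 = t."""
    m = int(t ** 0.5) + 1
    return sum(1 for x in product(range(-m, m + 1), repeat=4)
               if sum(v * v for v in x) == t)

def sigma(n):
    return sum(d for d in range(1, n + 1) if n % d == 0)

def jacobi_r4(t):
    """Jacobi's formula: r4(t) = 8*sigma(t) - 32*sigma(t/4) (second term if 4 | t)."""
    return 8 * sigma(t) - (32 * sigma(t // 4) if t % 4 == 0 else 0)
```

The agreement of these two counts for all small $t$ is one of the oldest applications of the identification of theta-series with modular forms.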

§3. Fourier expansions


1. Modular forms for triangular subgroups. Some of the important properties of
modular forms actually depend only on their analyticity and their invariance under
certain subgroups of infinite index in the modular group. It is convenient to examine
the corresponding function spaces on an abstract level, because this will be useful later
when we construct a theory of Hecke operators and when we analyze the connections
between modular forms for different congruence subgroups.
We introduce the triangular subgroup of the modular group:

\[(3.1)\quad \Gamma_0 = \Gamma_0^n = \left\{ \begin{pmatrix} A & B \\ 0 & D \end{pmatrix} \in \Gamma^n \right\}.\]

Let $T = T^n$ be a subgroup of $\Gamma_0^n$, and let $\chi$ be a finite character of $T$. We say that a function $F$ on $H_n$ is a modular form of character $\chi$ for the group $T$ if it satisfies the following three conditions:
(1) $F$ is a holomorphic function in the $n(n+1)/2$ complex variables on all of $H_n$;
(2) for every matrix $M \in T$ the function $F$ satisfies the functional equation
\[(3.2)\quad F(M\langle Z\rangle) = \chi(M)F(Z);\]
(3) if $n = 1$, then $F$ is bounded in any region $H_1(\varepsilon)$, where $\varepsilon > 0$.
The space of all such functions will be denoted $\mathfrak{M}(T, \chi)$. If $K$ is a congruence subgroup of $\Gamma^n$, $K \supset \Gamma^n(q)$, and $\chi$ is a finite character of $K$, then from the above definitions it follows that

\[(3.3)\quad \mathfrak{M}_w(K, \chi) \subset \mathfrak{M}(\Gamma_q^n, \chi),\]

where

\[(3.4)\quad \Gamma_q^n = \left\{ \begin{pmatrix} A & B \\ 0 & D \end{pmatrix} \in \Gamma^n(q);\ \det D = 1 \right\}\]

is a subgroup of finite index in $\Gamma_0^n$, and $w = k$ or $k/2$ is an integer or half-integer.

Since $\chi^m = 1$ for some natural number $m$ (the smallest such $m$ is called the order of $\chi$), it follows that $\chi(T(mqS)) = 1$ for any matrix $S \in S_n$. In addition, $T(mqS) \in \Gamma_q^n$.
2. The Koecher effect. We now prove the following fact.

THEOREM 3.1. Every modular form $F \in \mathfrak{M}(T, \chi)$, where $T$ is a subgroup of finite index in $\Gamma_0^n$, $n \geqslant 1$, and $\chi$ is a finite character of $T$, has a series expansion of the form

\[(3.5)\quad F(Z) = \sum_{R \in \mathbf{A}_n} f(R)\, e\{q^{-1}RZ\} \quad (Z \in H_n),\]

where $\mathbf{A}_n$ is the set (1.1) of Chapter 1, $e\{\cdots\}$ is the function (3.23) of Chapter 1, and $q = q(T, \chi)$ is the smallest natural number such that

\[(3.6)\quad T(qS) \in T \quad \text{and} \quad \chi(T(qS)) = 1 \quad \text{for any } S \in S_n.\]

The series (3.5) is absolutely convergent on all of $H_n$, and it converges uniformly on every $H_n(\varepsilon)$, where $\varepsilon > 0$. In particular, the function $F(Z)$ is bounded on each $H_n(\varepsilon)$.

For every matrix $V$ such that

\[(3.7)\quad U({}^tV) \in T,\]

the coefficients $f(R)$ in (3.5) satisfy the relation

\[(3.8)\quad f(R[V]) = \chi(U({}^tV))\, f(R).\]
We call (3.5) the Fourier expansion of the form F, and we call the numbers f(R)
its Fourier coefficients.
PROOF. In the case of matrices of the form (3.6), the functional equation (3.2) becomes
\[ F(Z + qS) = F(Z) \quad (Z = (z_{\alpha\beta}) \in H_n), \]
which holds for any symmetric $n \times n$ integer matrix $S$. This means that $F$ is a periodic function with period $q$ in each variable $z_{\alpha\beta} = z_{\beta\alpha}$. Since it is a regular analytic function, $F$ then has a Fourier expansion of the form

\[ F(Z) = \sum_{(r_{\alpha\beta})} g((r_{\alpha\beta}), Y) \exp\Bigl( \frac{2\pi i}{q} \sum_{1 \leqslant \alpha \leqslant \beta \leqslant n} r_{\alpha\beta} x_{\alpha\beta} \Bigr), \]

where $Z = X + iY$, $X = (x_{\alpha\beta})$, and the sum is over all tuples of integers $r_{\alpha\beta}$ ($1 \leqslant \alpha \leqslant \beta \leqslant n$). This expansion can be differentiated term by term any number of times with respect to any of its variables. Since $2\sum_{1 \leqslant \alpha \leqslant \beta \leqslant n} r_{\alpha\beta} x_{\alpha\beta} = \sigma(RX)$, where $R$ is the matrix with entries $2r_{\alpha\alpha}$ on the diagonal and $r_{\alpha\beta}$ for $\alpha \neq \beta$, the above expansion can be rewritten in the form
\[ F(Z) = \sum_{R} f(R, Y)\, e\{q^{-1}RZ\}, \]
where $f(R, Y) = g((r_{\alpha\beta}), Y)\, e\{-iq^{-1}RY\}$, and $R$ runs through the set $\mathbf{E}_n$ of all matrices in $S_n$ with even main diagonal. Since $F(Z)$ is holomorphic in each of

the variables $z_{\alpha\beta}$, using term-by-term differentiation and uniqueness of the Fourier expansion we see that the Cauchy-Riemann equations

\[ \frac{\partial F}{\partial \bar z_{\alpha\beta}} = 0, \quad \text{where } \frac{\partial}{\partial \bar z_{\alpha\beta}} = \frac{1}{2}\Bigl( \frac{\partial}{\partial x_{\alpha\beta}} + i \frac{\partial}{\partial y_{\alpha\beta}} \Bigr), \]

lead to the equations

\[ \frac{\partial f(R, Y)}{\partial \bar z_{\alpha\beta}} = \frac{i}{2}\, \frac{\partial f(R, Y)}{\partial y_{\alpha\beta}} = 0 \quad (1 \leqslant \alpha \leqslant \beta \leqslant n), \]

which show that the coefficients $f(R, Y)$ do not depend on $Y$. We thus obtain the expansion

\[(3.9)\quad F(Z) = \sum_{R} f(R)\, e\{q^{-1}RZ\} \]

with constant coefficients $f(R)$, where $R$ runs through the same set as above.
The expansion (3.9) may be regarded as the Laurent series for the analytic function $F$ in the variables $t_{\alpha\beta} = \exp(2\pi i z_{\alpha\beta}/q)$ ($1 \leqslant \alpha \leqslant \beta \leqslant n$). Consequently, the series (3.9) converges absolutely on all of $H_n$.

We now substitute the expansion (3.9) for $F$ in the functional equation (3.2) for a matrix of the form (3.7). If we replace $R$ by $R[V]$ and equate coefficients, we then obtain (3.8).

To complete the proof of the proposition it remains to verify that $f(R) = 0$ if $R \notin \mathbf{A}_n$, and that the series converges uniformly on $H_n(\varepsilon)$.

We first consider the case $n = 1$. In this case the expansion (3.9) takes the form

\[ F(z) = \sum_{r=-\infty}^{+\infty} f(2r) \exp\Bigl(\frac{2\pi i}{q} rz\Bigr) = \sum_{r=-\infty}^{+\infty} f(2r)\, t^r \quad \Bigl( t = \exp\Bigl(\frac{2\pi i}{q} z\Bigr) \Bigr). \]

If we regard this as the Laurent expansion of an analytic function in $t$ in the region $|t| = \exp(-2\pi y/q) < 1$ (where $z = x + iy \in H_1$), and if we take into account that, by condition (3) in the definition of a modular form for $T$, the function $F$ is bounded for $|t| < \exp(-2\pi\varepsilon/q)$, where $\varepsilon > 0$, then we see that $F$ as a function of $t$ is holomorphic in the interior of the unit disc, including its center. Thus, $f(2r) = 0$ for $r < 0$, and our series converges uniformly on any $H_1(\varepsilon)$, where $\varepsilon > 0$.
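The coefficients of such an expansion can be recovered numerically by integrating over a period: for a form of period 1 the $r$th coefficient equals $e^{2\pi r y}\int_0^1 F(x+iy)\,e^{-2\pi i r x}\,dx$ for any $y > 0$. The sketch below recovers the first coefficient $240\,\sigma_3(1) = 240$ of the Eisenstein series $E_4$ by a trapezoidal sum (an illustration of the expansion, not the book's method; helper names ours):

```python
import cmath

def sigma3(n):
    return sum(d ** 3 for d in range(1, n + 1) if n % d == 0)

def E4(z, terms=80):
    q = cmath.exp(2j * cmath.pi * z)
    return 1 + 240 * sum(sigma3(n) * q ** n for n in range(1, terms))

def fourier_coeff(F, r, y=0.5, samples=512):
    """Approximate c_r = e^{2 pi r y} * int_0^1 F(x + iy) e^{-2 pi i r x} dx."""
    s = sum(F(j / samples + 1j * y) * cmath.exp(-2j * cmath.pi * r * j / samples)
            for j in range(samples))
    return cmath.exp(2 * cmath.pi * r * y) * s / samples

c1 = fourier_coeff(E4, 1)   # should be very close to 240
```

Because the integrand is smooth and periodic, the equally spaced sum converges spectrally fast, and the answer is independent of the auxiliary height $y$, exactly as the proof above predicts.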
Now let $n \geqslant 2$. Using the absolute convergence of (3.9) and the relations (3.8), we can rewrite the expansion (3.9) in the form

\[ F(Z) = \sum_{\{R\}_{T,\chi}} f(R)\, e(Z, \{R\}_{T,\chi}), \]

where the sum is taken over a complete set of representatives of the classes
\[(3.10)\quad \{R\}_{T,\chi} = \{R[V];\ U({}^tV) \in T,\ \chi(U({}^tV)) = 1\} \]
of matrices $R \in \mathbf{E}_n$, and
\[(3.11)\quad e(Z, \{R\}_{T,\chi}) = \sum_{R' \in \{R\}_{T,\chi}} e\{q^{-1}R'Z\}. \]

If $f(R) \neq 0$, then the series $f(R)\, e(Z, \{R\}_{T,\chi})$ converges for all $Z \in H_n$, since it is a partial sum of the absolutely convergent series for $F$. In particular, in this case the following series converges:

\[ e(iE_n, \{R\}_{T,\chi}) = \sum_{R' \in \{R\}_{T,\chi}} \exp\Bigl(-\frac{\pi}{q}\,\sigma(R')\Bigr). \]

Since the traces $\sigma(R')$ of the matrices $R'$ are integers, from the convergence of the last series it follows that the inequality $\sigma(R') < 0$ can hold for at most a finite number of different matrices in $\{R\}_{T,\chi}$.
We show that for any symmetric $n \times n$ integer matrix $R$, $n \geqslant 2$, with even diagonal, the function $\sigma(R')$ takes infinitely many negative values on the class $\{R\}_{T,\chi}$ if $R$ is not semidefinite. If $R \notin \mathbf{A}_n$, then there obviously exists a column vector $h$ of $n$ integers such that $R[h] < 0$. We set $V_s = E_n + sH$, where $H = (t_1 h, \ldots, t_n h) \in M_n$ and $s, t_1, \ldots, t_n$ are integers. Since the matrix $sH$ has rank 0 or 1, we clearly have

\[ \det V_s = 1 + s(t_1 h_1 + \cdots + t_n h_n), \]

where $h_1, \ldots, h_n$ are the coordinates of $h$. Since $n \geqslant 2$, the integers $t_1, \ldots, t_n$ can be chosen so that $t_1 h_1 + \cdots + t_n h_n = 0$ and $t_1^2 + \cdots + t_n^2 > 0$. Then $V_s \in SL_n(\mathbb{Z})$ and $H^2 = (h_\alpha t_\beta(t_1 h_1 + \cdots + t_n h_n)) = 0$, from which it follows that, in particular, $V_s = V_1^s$. Since the index $[\Gamma_0^n : T]$ and the character $\chi$ are finite, it follows that $U({}^tV_r) = U({}^tV_1)^r$ lies in $T$ for some $r \in \mathbb{N}$, and also $\chi(U({}^tV_r)) = 1$; this implies that for any integer $l$, $R_l = R[V_{rl}]$ is contained in $\{R\}_{T,\chi}$ and

\[ \sigma(R_l) = \sigma(R) + 2rl\,\sigma(RH) + r^2 l^2 R[h](t_1^2 + \cdots + t_n^2). \]

Since the last expression is a quadratic trinomial in $l$ with negative coefficient of $l^2$, it takes negative values of arbitrarily large absolute value for suitable integers $l$.
From what we have proved it follows that the coefficient $f(R)$ in (3.9) vanishes if $R \notin \mathbf{A}_n$. This proves the existence of the expansion (3.5).

Finally, suppose that $Z = X + iY \in H_n(\varepsilon)$ for some $\varepsilon > 0$. Then, by the inequality (1.6) of Appendix 1, we find that for any $R \in \mathbf{A}_n$

\[(3.12)\quad \sigma(YR) \geqslant \varepsilon\,\sigma(R). \]

Since the series (3.5) converges absolutely at $Z_0 = (i\varepsilon/2)E_n \in H_n$, it follows that for any matrix $R \in \mathbf{A}_n$ we have the inequality $|f(R)\, e\{q^{-1}Z_0R\}| \leqslant c$, and hence

\[ |f(R)| \leqslant c_1 \exp\Bigl(\frac{\pi\varepsilon}{2q}\,\sigma(R)\Bigr), \]

where $c_1$ depends on $\varepsilon$. From these inequalities and (3.12) we obtain

\[ \sum_{R \in \mathbf{A}_n} |f(R)\, e\{q^{-1}RZ\}| \leqslant c_1 \sum_{R \in \mathbf{A}_n} \exp(-\pi\varepsilon\,\sigma(R)/2q). \]

Now if $R = (R_{\alpha\beta}) \in \mathbf{A}_n$ and $\sigma(R) \leqslant N$, then from the inequality (1.5) of Appendix 1 we obtain $|R_{\alpha\beta}| \leqslant N$. Thus, the number of different matrices $R \in \mathbf{A}_n$ with $\sigma(R) = N$ is no greater than

\[(3.13)\quad (N/2 + 1)^n (2N + 1)^{n(n-1)/2} \]

(note that the $R_{\alpha\alpha}$ are nonnegative even integers), and

\[ \sum_{R \in \mathbf{A}_n} |f(R)\, e\{q^{-1}RZ\}| \leqslant c_1 \sum_{N=0}^{\infty} (N/2 + 1)^n (2N + 1)^{n(n-1)/2} \exp(-\pi\varepsilon N/2q). \]

Since the latter series converges, it follows that the series (3.5) converges uniformly on $H_n(\varepsilon)$. $\square$
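The counting estimate (3.13) is easy to probe for $n = 2$: matrices in $\mathbf{A}_2$ have the form $R = \begin{pmatrix} 2a & b \\ b & 2c \end{pmatrix}$ with integers $a, c \geqslant 0$ and $4ac - b^2 \geqslant 0$, and trace $\sigma(R) = 2a + 2c$. The following brute-force count against the bound $(N/2+1)^n(2N+1)^{n(n-1)/2}$ is a sketch with our own function names:

```python
def count_A2_with_trace(N):
    """Number of R in A_2 (even diagonal, positive semidefinite) with trace N."""
    count = 0
    for a in range(N // 2 + 1):
        two_c = N - 2 * a
        if two_c < 0 or two_c % 2:
            continue
        c = two_c // 2
        b = 0                          # largest b with b^2 <= 4ac
        while (b + 1) ** 2 <= 4 * a * c:
            b += 1
        count += 2 * b + 1             # b ranges over -b, ..., b
    return count

def bound(N, n=2):
    """The estimate (3.13) for the number of such matrices."""
    return (N // 2 + 1) ** n * (2 * N + 1) ** (n * (n - 1) // 2)
```

For example, for $N = 4$ there are exactly 7 such matrices, comfortably below the bound of 81; the bound is crude, but all that matters in the proof is its polynomial growth in $N$.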

PROBLEM 3.2. Suppose that $n, q \in \mathbb{N}$, $\chi_0$ is a finite character of the group $\Lambda^n$, and $F$ is a series of the form (3.5) with Fourier coefficients satisfying the condition $f(R[V]) = \chi_0(V)f(R)$ for any matrix $V \equiv E_n \pmod q$ in $\Lambda^n$. Show that any such series $F$ having the analytic properties listed in Theorem 3.1 determines a modular form in $\mathfrak{M}(\Gamma_q^n, \chi)$, where $\chi$ is given by the relation $\chi(M) = \chi_0({}^tD)$ for any $M = \begin{pmatrix} A & B \\ 0 & D \end{pmatrix} \in \Gamma_q^n$.

PROBLEM 3.3. Suppose that $T = \Gamma_q^n$ and $\{R\}_T$ is the set (3.10) with $\chi = 1$. Show that any series $e(Z, \{R\}_T)$ of the form (3.11) with $R \in \mathbf{A}_n$ is a modular form of trivial character for the group $T$.
3. Fourier expansions of modular forms. The inclusions (3.3) show that Theorem 3.1 can be applied, in particular, to modular forms for congruence subgroups of the modular group. Thus, every such form has a Fourier expansion with the properties described above. However, both in the development of the theory of modular forms and in applications of the theory it turns out that one also needs to consider analogous expansions of functions obtained from modular forms by means of certain standard transformations. In addition, one wants to have bounds on the Fourier coefficients for all such expansions.

In the case of modular forms of integer weight $k$, the transformations we are referring to can be expressed in terms of the elementary transformations that take a function $F$ on $H_n$ to the function

\[(3.14)\quad F|_kM = \det(CZ + D)^{-k} F(M\langle Z\rangle), \quad \text{where } M = \begin{pmatrix} A & B \\ C & D \end{pmatrix} \in S_{\mathbb{R}}^n. \]

By Lemma 4.2 of Chapter 1, the function $F|_kM$ is also a function on $H_n$, and it is analytic on all of $H_n$ if $F$ is analytic there. According to the same lemma, the expression $\det(CZ + D)^k$ is an automorphy factor of the group $S_{\mathbb{R}}^n$. By condition (3) in Lemma 4.1 of Chapter 1, we then have the relation

\[(3.15)\quad F|_kM_1|_kM_2 = F|_kM_1M_2 \quad \text{for } M_1, M_2 \in S_{\mathbb{R}}^n. \]

Using the transformations (3.14), we can write the second condition in the definition of a modular form of integer weight $k$ in the form

\[(3.16)\quad F|_kM = \chi(M)F, \quad \text{if } F \in \mathfrak{M}_k(K, \chi) \text{ and } M \in K. \]

If we want to define analogous transformations for half-integer weight $k/2$ and require that they satisfy the property (3.15), then instead of $S_{\mathbb{R}}^n$ we must consider a covering group of $S_{\mathbb{R}}^n$, denoted $\mathfrak{G}$. By definition, $\mathfrak{G}$ consists of all pairs $(M, \varphi)$, where $M = \begin{pmatrix} A & B \\ C & D \end{pmatrix} \in S_{\mathbb{R}}^n$ and $\varphi$ is any holomorphic function $\varphi = \varphi(Z)$ on $H_n$ such that

\[(3.17)\quad \varphi(Z)^2 = t \cdot \det(CZ + D), \]

where $t$ is an arbitrary complex number in the set

\[(3.18)\quad C_1 = \{z \in \mathbb{C};\ |z| = 1\}, \]

and the multiplication law on $\mathfrak{G}$ is given by the formula

\[(3.19)\quad (M_1, \varphi_1)(M_2, \varphi_2) = (M_1M_2,\ \varphi_1(M_2\langle Z\rangle)\varphi_2(Z)). \]

It is not hard to check that $\mathfrak{G}$ is a group: this follows immediately from the definition of the group operation and the basic property of the automorphy factor $\det(CZ + D)$. The groups $S_{\mathbb{R}}^n$ and $\mathfrak{G}$ are related by the epimorphism

\[(3.20)\quad P : \mathfrak{G} \to S_{\mathbb{R}}^n, \quad P(M, \varphi) = M, \]

whose kernel is contained in the center of the group $\mathfrak{G}$ and is obviously isomorphic to the multiplicative group $C_1$.
We are now ready to define the transformations in the case of half-integer weight $k/2$ that are analogous to the transformations (3.14). We set

\[(3.21)\quad F|_{k/2}\widetilde{M} = \varphi(Z)^{-k} F(M\langle Z\rangle), \quad \text{where } \widetilde{M} = (M, \varphi) \in \mathfrak{G}; \]

this, along with (3.19), implies the relation

\[(3.22)\quad F|_{k/2}\widetilde{M}_1|_{k/2}\widetilde{M}_2 = F|_{k/2}\widetilde{M}_1\widetilde{M}_2 \quad \text{for } \widetilde{M}_1, \widetilde{M}_2 \in \mathfrak{G}. \]

Using the automorphy factor $j_{(2)}$ of Chapter 1, we define the imbedding

\[(3.23)\quad \Gamma_0^n(4) \ni M \mapsto \widetilde{M} = (M, j_{(2)}(M, Z)) \in \mathfrak{G}. \]

From (4.36) in Chapter 1 it follows that this map is a group homomorphism. If we agree to let $\widetilde{M}$ denote the image of $M$ under (3.23) for every $M \in \Gamma_0^n(4)$, then condition (2) of the definition of a modular form of half-integer weight $k/2$ can be written in the form

\[(3.24)\quad F|_{k/2}\widetilde{M} = \chi(M)F, \quad \text{if } F \in \mathfrak{M}_{k/2}(K, \chi) \text{ and } M \in K. \]

Suppose that $K$ is a congruence subgroup of $\Gamma^n$, $K \supset \Gamma^n(q)$, and $M$ is any matrix in the group $S^n = S_{\mathbb{Q}}^n$. Let

\[(3.25)\quad K_M = M^{-1}KM \cap \Gamma, \]

where from now on $\Gamma$ denotes either $\Gamma^n$ or $\Gamma_0^n(4)$, depending on whether we are considering forms of integer or half-integer weight; in the latter case, we always assume that $K \subset \Gamma_0^n(4)$. We show that $K_M$ is also a congruence subgroup of $\Gamma^n$. To prove this, we write $M$ in the form $M = tM_0$, where $M_0$ is an integer matrix, we set $q_0 = r(M_0)$, and we verify that $\Gamma^n(qq_0) \subset K_M$. In fact, if $L \in \Gamma^n(qq_0)$, then $L \in \Gamma$ and $MLM^{-1} \in \Gamma^n(q) \subset K$, since

\[ ML(q_0M^{-1}) \equiv M(q_0M^{-1}) = q_0E_{2n} \pmod{qq_0}. \]
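The congruence argument just given can be carried out mechanically in a toy case. Take $n = 1$, $q = 3$, and the similitude $M = \begin{pmatrix} 1 & 0 \\ 0 & 5 \end{pmatrix}$ with $q_0 = r(M) = 5$; for a few sample matrices $L \equiv E \pmod{qq_0}$, exact rational arithmetic confirms that $MLM^{-1}$ is integral and $\equiv E \pmod q$ (an illustration of the argument, not part of the book's text; the names below are ours):

```python
from fractions import Fraction

q, q0 = 3, 5
M = [[Fraction(1), Fraction(0)], [Fraction(0), Fraction(q0)]]
Minv = [[Fraction(1), Fraction(0)], [Fraction(0), Fraction(1, q0)]]

def matmul(A, B):
    return [[A[i][0] * B[0][j] + A[i][1] * B[1][j] for j in range(2)]
            for i in range(2)]

N = q * q0   # level q * q0 = 15
samples = [                       # elements of SL_2(Z) congruent to E mod 15
    [[1, N], [0, 1]],
    [[1, 0], [N, 1]],
    [[1 + N * N, N], [N, 1]],     # product of the two above, det = 1
]

def conjugate(L):
    """Compute M L M^{-1} exactly, as a matrix of Fractions."""
    Lf = [[Fraction(x) for x in row] for row in L]
    return matmul(matmul(M, Lf), Minv)
```

Conjugation divides the upper-right entry by $q_0$ and multiplies the lower-left entry by $q_0$; divisibility by $qq_0$ in $L$ is exactly what keeps the result integral and $\equiv E \pmod q$.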

LEMMA 3.4. Let $K$ be a congruence subgroup of $\Gamma_0^n(4)$, and let $M \in S^n$. Let the map

\[(3.26)\quad t_M : K_M \to C_1 \]

be defined for any $M_0 \in K_M$ by the equality

\[(3.27)\quad \widetilde{M}\widetilde{M}_0\widetilde{M}^{-1} = \widetilde{MM_0M^{-1}} \cdot (E_{2n}, t_M(M_0)), \]

where $\widetilde{M} = (M, \varphi)$ is any $P$-preimage of $M$ in $\mathfrak{G}$ and $\widetilde{L} = (L, j_{(2)}(L, Z))$ for all $L \in \Gamma_0^n(4)$. Then this map does not depend on the choice of $\widetilde{M}$, it is a character of the group $K_M$, and, in addition, $t_M^4 = 1$.

PROOF. Since the $P$-images of the elements on the left and right sides of (3.27) are the same, it follows that they differ from one another by a factor in the kernel of $P$. It is easy to see that this kernel consists of elements of the center of $\mathfrak{G}$ of the form $(E_{2n}, t)$, where $t \in C_1$. We thus find that the equality (3.27) uniquely determines a number $t_M(M_0) \in C_1$.
We now show that $t_M$ is a homomorphism. For any matrices $M_1, M_2 \in K_M$ we have

\[ \widetilde{M}\widetilde{M}_1\widetilde{M}_2\widetilde{M}^{-1} = (\widetilde{M}\widetilde{M}_1\widetilde{M}^{-1})(\widetilde{M}\widetilde{M}_2\widetilde{M}^{-1}) = \widetilde{MM_1M^{-1}}(E_{2n}, t_M(M_1)) \cdot \widetilde{MM_2M^{-1}}(E_{2n}, t_M(M_2)) = \widetilde{MM_1M_2M^{-1}}(E_{2n}, t_M(M_1)t_M(M_2)), \]

which, together with the relation (3.27) for the matrix $M_0 = M_1M_2$, implies that $t_M(M_1M_2) = t_M(M_1)t_M(M_2)$.

Finally, if we multiply the elements on the right in (3.27) using (3.19) and recall the definition (3.23) of $\widetilde{M}_0$, we obtain the relation

\[(3.28)\quad \varphi(M_0M^{-1}\langle Z\rangle)\, j_{(2)}(M_0, M^{-1}\langle Z\rangle)\, \varphi(M^{-1}\langle Z\rangle)^{-1} = j_{(2)}(MM_0M^{-1}, Z)\, t_M(M_0), \]

since

\[(3.29)\quad \widetilde{M}^{-1} = (M^{-1}, \varphi(M^{-1}\langle Z\rangle)^{-1}). \]

Squaring both sides of (3.28) and using Lemma 4.2 of Chapter 1, formula (4.47) of Chapter 1, and the definition (3.17), we find that $t_M(M_0)$ satisfies the relation

\[(3.30)\quad t_M(M_0)^2 = \chi_{Q_2}(MM_0M^{-1})\,\chi_{Q_2}(M_0), \]

where $\chi_{Q_2}$ is the character (4.31) of Chapter 1 for the matrix $Q_2 = 2E_2$. Thus, $[t_M(M_0)]^4 = 1$. $\square$

Given an arbitrary character $\chi$ of a congruence subgroup $K$ of $\Gamma^n$ and $M \in S_{\mathbb{Q}}^n$, we define the character

\[(3.31)\quad \chi_M(M_0) = \chi(MM_0M^{-1}) \quad (M_0 \in K_M) \]

of the group $K_M$, and, if $K \subset \Gamma_0^n(4)$ and $k$ is any integer, then we also define the character

\[(3.32)\quad \chi_{M,k}(M_0) = \chi_M(M_0)\, t_M(M_0)^{-k} \quad (M_0 \in K_M) \]

of the group $K_M$, where we naturally take $\Gamma = \Gamma_0^n(4)$ in the definition (3.25). From (3.31), (3.32), and Lemma 3.4 it follows that the characters $\chi_M$ and $\chi_{M,k}$ are finite if $\chi$ is a finite character.
THEOREM 3.5. Let $F \in \mathfrak{M}_w(K, \chi)$ be a modular form of degree $n \geqslant 1$, of integer or half-integer weight $w$ ($w = k$ or $k/2$), and of character $\chi$ (where $\chi$ has order $m$) for the congruence subgroup $K$ of $\Gamma^n$, $K \supset \Gamma^n(q_1)$; and let $K \subset \Gamma_0^n(4)$ if $w = k/2$. Then for every matrix $M \in \Gamma^n$ one has the expansion

\[(3.33)\quad F|_w\xi = \sum_{R \in \mathbf{A}_n} f_\xi(R)\, e\{q^{-1}RZ\}, \]

where $\xi = M$ and $q = q_1m$ if $w = k$, while if $w = k/2$, then $\xi = \widetilde{M}$ is any $P$-preimage of $M$ in the symplectic covering group $\mathfrak{G}$ and $q = 4q_1m$. Each of these series converges absolutely on all of $H_n$ and uniformly on $H_n(\varepsilon)$ for any $\varepsilon > 0$. In particular, each function $F|_w\xi$ is bounded on any of the sets $H_n(\varepsilon)$.

The Fourier coefficients in the expansion (3.33) satisfy the relations

\[(3.34)\quad f_\xi(R[V]) = \chi'(U({}^tV))\, f_\xi(R) \quad (R \in \mathbf{A}_n), \]

where $\chi' = \chi_M$ or $\chi_{M,k}$ for $w = k$ or $k/2$, respectively, $V \in \Lambda^n$, and $V \equiv E_n \pmod{q_1}$. If $w \geqslant 0$, then

\[(3.35)\quad |f_\xi(R)| \leqslant \gamma_F (\det R)^w \quad (R \in \mathbf{A}_n^+), \]

where $\gamma_F$ depends only on $F$.

REMARK. We shall soon see that $\mathfrak{M}_w(K, \chi) = \{0\}$ if $w < 0$. So there is no loss of generality in the condition $w \geqslant 0$ in (3.35).
PROOF. Since obviously $F \in \mathfrak{M} = \mathfrak{M}_w(\Gamma^n(q_1), \chi)$, we can start by replacing $\mathfrak{M}_w(K, \chi)$ by $\mathfrak{M}$.

We first consider the case $w = k$. As we already noted, the function $F|_kM$ has the same analytic properties as $F$. If $M_0 \in \Gamma^n(q_1)$, then $MM_0M^{-1} \in \Gamma^n(q_1)$, and hence

\[(3.36)\quad F|_kM|_kM_0 = F|_kMM_0M^{-1}|_kM = \chi_M(M_0)\, F|_kM. \]

Furthermore, any function $F|_kM|_kM'$, where $M' \in \Gamma^n$, is also of the form $F|_kM_1$, where $M_1 \in \Gamma^n$, and so is bounded wherever the latter type of function is bounded. From this and the inclusions (3.3) it follows that $F|_kM \in \mathfrak{M}(\Gamma_{q_1}^n, \chi_M)$. If we now apply Theorem 3.1 with $q = q_1m$ to the function $F|_kM$, we obtain all of the statements in our theorem, except for (3.35), in the case $w = k$.

In the case $w = k/2$, the same parts of the theorem follow if we use the above argument, (3.21)-(3.24), Lemma 3.4, and the relations

\[(3.37)\quad F|_{k/2}\widetilde{M}|_{k/2}\widetilde{M}_0 = F|_{k/2}\widetilde{M}\widetilde{M}_0\widetilde{M}^{-1}|_{k/2}\widetilde{M} = F|_{k/2}\widetilde{MM_0M^{-1}}|_{k/2}(E_{2n}, t_M(M_0))|_{k/2}\widetilde{M} = \chi_M(M_0)\,t_M(M_0)^{-k}\, F|_{k/2}\widetilde{M} = \chi_{M,k}(M_0)\, F|_{k/2}\widetilde{M}. \]

To prove (3.35) for the Fourier coefficients $f_M(R)$, we consider the function

\[(3.38)\quad G = G_F(Z) = \sum_{\alpha=1}^{\mu} |(F|_kM_\alpha)(Z)|, \]

where $\{M_\alpha\}$ is a complete set of representatives of $\Gamma^n(q_1)\backslash\Gamma^n$. From (3.15) and (3.16) it follows that $G$ does not depend on the choice of representatives; since the set $\{M_\alpha M\}$, where $M \in \Gamma^n$, is also a set of representatives, we have

\[(3.39)\quad |\det(CZ+D)|^{-k}\, G(M\langle Z\rangle) = \sum_{\alpha=1}^{\mu} \bigl|F|_kM_\alpha|_kM\bigr| = \sum_{\alpha=1}^{\mu} \bigl|F|_kM_\alpha M\bigr| = G. \]

In addition, because the functions $F|_kM_\alpha$ are bounded on $H_n(\varepsilon)$ for $\varepsilon > 0$, it follows from Theorem 1.21 that any of these functions, and hence also $G$, is bounded on the fundamental domain $D_n$ for the group $\Gamma^n$.
LEMMA 3.6. Suppose that the nonnegative real-valued function $G$ on $H_n$ satisfies the functional equation (3.39) for any $M \in \Gamma^n$ and is bounded on $D_n$. Then the following bound holds uniformly in $X \in S_n(\mathbb{R})$ and $R \in \mathbf{A}_n^+$:

\[ G(X + iR^{-1}) \leqslant \gamma\,(\det R)^k, \]

where $\gamma$ depends only on $G$.


PROOF OF THE LEMMA. Let $Z = X + iY \in H_n$. Then, since $D_n$ is a fundamental domain for $\Gamma^n$ on $H_n$, there exists a matrix $M = \begin{pmatrix} A & B \\ C & D \end{pmatrix} \in \Gamma^n$ such that $Z_0 = M\langle Z\rangle \in D_n$. If $\det C \neq 0$, then $|\det C| \geqslant 1$, and by (3.39) and the boundedness of $G$ on $D_n$ we obtain

\[(3.40)\quad G(Z) = |\det(CZ + D)|^{-k}\, G(M\langle Z\rangle) \leqslant |\det C|^{-k}\, |\det(X + C^{-1}D + iY)|^{-k}\,\delta_G \leqslant \delta_G (\det Y)^{-k} \]

(see the inequality (1.10) in Appendix 1). If $\det C = 0$, we replace $Z$ by the point $Z_1 = (S - Z)^{-1}$, where $S$ is a suitable symmetric integer matrix. Then $M_1\langle Z_1\rangle = M\langle Z\rangle = Z_0$, where

\[ M_1 = M \begin{pmatrix} S & -E \\ E & 0 \end{pmatrix} = \begin{pmatrix} AS + B & -A \\ CS + D & -C \end{pmatrix}. \]

We show that there exists a symmetric $n \times n$ integer matrix $S$ such that

\[(3.41)\quad \det(CS + D) \neq 0. \]

Let $r$ be the rank of $C$. From Lemmas 1.17 and 1.18 we see that in this case the matrices $C$ and $D$ can be represented in the form

\[ C = U \begin{pmatrix} C_1 & 0 \\ 0 & 0 \end{pmatrix} {}^tU_1, \qquad D = U \begin{pmatrix} D_1 & 0 \\ 0 & E_{n-r} \end{pmatrix} U_1^{-1}, \]

where $U, U_1 \in \Lambda^n$, $(C_1, D_1)$ is a symmetric and relatively prime pair of $r \times r$ matrices, and $\det C_1 \neq 0$. Then for $t$ a sufficiently large integer the matrix $S = \begin{pmatrix} tE_r & 0 \\ 0 & E_{n-r} \end{pmatrix}[U_1^{-1}]$ satisfies the inequality (3.41). By (3.40) and Lemma 2.8 of Chapter 1, for any such matrix $S$ we have

\[ G(Z_1) \leqslant \delta_G (\det Y_1)^{-k} = \delta_G (\det Y)^{-k}\, |\det(-Z + S)|^{2k}. \]

Hence,

\[(3.42)\quad G(Z) = |\det(-Z + S)|^{-k}\, G(Z_1) \leqslant \delta_G (\det Y)^{-k}\, |\det(-Z + S)|^{k}. \]

The expression $\det(CS + D)$ is a polynomial in the entries $s_{\alpha\beta}$ of the matrix $S$ of degree at most two in each variable $s_{\alpha\beta}$. Since this polynomial takes nonzero values, using induction on $n$ it is easy to see that it is nonzero for certain integer values of the $s_{\alpha\beta}$ satisfying the inequalities $-2 < s_{\alpha\beta} - x_{\alpha\beta} < 2$ ($\alpha, \beta = 1, \ldots, n$), where $x_{\alpha\beta}$ are the entries in the real part $X = (x_{\alpha\beta})$ of the matrix $Z$. Supposing that these inequalities hold, we see that $|\det(-Z + S)|^2 = |\det(S - X + iY)|^2$ is a polynomial of degree $2n$ with bounded coefficients in the entries $y_{\alpha\beta}$ of the matrix $Y$. Since $Y > 0$, it follows by inequality (1.5) of Appendix 1 that $|y_{\alpha\beta}| \leqslant \sigma(Y)$. Hence,

\[ |\det(-Z + S)| \leqslant \delta_n (1 + \sigma(Y))^n, \]

where $\delta_n$ depends only on $n$. From this bound and (3.42) we obtain

\[ G(Z) \leqslant \delta' (1 + \sigma(Y))^{nk} (\det Y)^{-k}. \]

Now let $Y = R^{-1}$, where $R \in \mathbf{A}_n^+$. The matrix $R$ can be written in the form $R = R_0[U^{-1}]$, where $R_0$ is Minkowski reduced and $U \in \Lambda^n$. Then, using (3.39) for $M = \begin{pmatrix} U^{-1} & 0 \\ 0 & {}^tU \end{pmatrix} \in \Gamma^n$ and the last inequality, we obtain

\[ G(X + iR^{-1}) = G(U^{-1}X\,{}^tU^{-1} + iR_0^{-1}) \leqslant \delta'(1 + \sigma(R_0^{-1}))^{nk} (\det R_0)^k. \]

Let $R_{\alpha\alpha}$ denote the matrix obtained from $R_0$ by crossing out the $\alpha$th row and column, and let $r_\alpha$ denote the $\alpha$th diagonal entry of $R_0$. Since $R_{\alpha\alpha} > 0$ and $R_0$ is Minkowski reduced, we can apply the inequality (1.8) of Appendix 1 to the matrices $R_{\alpha\alpha}$ and use Theorem 1.8 to obtain

\[ \sigma(R_0^{-1}) = \sum_{\alpha=1}^{n} \frac{\det R_{\alpha\alpha}}{\det R_0} \leqslant \sum_{\alpha=1}^{n} \frac{r_1 \cdots r_n\, r_\alpha^{-1}}{c_n\, r_1 \cdots r_n} = c_n^{-1} \sum_{\alpha=1}^{n} r_\alpha^{-1} \leqslant c_n^{-1} n. \]

Thus, we finally have

\[ G(X + iR^{-1}) \leqslant \gamma (\det R_0)^k = \gamma (\det R)^k, \]

where $\gamma = \delta'(1 + nc_n^{-1})^{nk}$. $\square$

We return to the proof of the bound (3.35) on the coefficients $f_M(R)$. Since $F|_kM$ is obviously equal to one of the functions $F|_kM_\alpha$ in (3.38), if we apply Lemma 3.6 to $G = G_F$ we obtain

\[(3.43)\quad |(F|_kM)(Z)| \leqslant \gamma (\det R)^k, \quad \text{where } Z = (x_{\alpha\beta}) + iR^{-1}, \]

and hence

\[(3.44)\quad |f_M(R)| = \Bigl| q^{-\langle n\rangle} \int \cdots \int_{0 \leqslant x_{\alpha\beta} \leqslant q} (F|_kM)(Z)\, e\{-q^{-1}RZ\} \prod_{1 \leqslant \alpha \leqslant \beta \leqslant n} dx_{\alpha\beta} \Bigr| \leqslant q^{-\langle n\rangle} \gamma (\det R)^k \exp(\pi n q^{-1})\, q^{\langle n\rangle} = \gamma_F (\det R)^k, \]

where $\langle n\rangle = n(n+1)/2$.

Before examining the case $w = k/2$, we prove the following lemma.

LEMMA 3.7. Let $F_i \in \mathfrak{M}_{k_i/2}(K_i, \chi_i)$, where $i = 1, 2$, the $k_i$ are odd integers, and the $K_i$ are congruence subgroups in $\Gamma_0^n(4)$. Then the product $F = F_1F_2$ is a modular form of integer weight $k = (k_1 + k_2)/2$ belonging to $\mathfrak{M}_k(K, \chi)$, where $K = K_1 \cap K_2$ is a congruence subgroup in $\Gamma_0^n(4)$, $\chi = \chi_1\chi_2(\chi_{Q_2})^k$, and $\chi_{Q_2}$ is the character (4.31) of Chapter 1 for the matrix $Q_2 = 2E_2$.

PROOF. Since $K_i$ contains a principal congruence subgroup $\Gamma^n(q_i)$, it follows that $K \supset \Gamma^n(q_1q_2)$. We also obviously have $K \subset \Gamma_0^n(4)$. Next, according to (3.24) we can write $F_i|_{k_i/2}\widetilde{M} = \chi_i(M)F_i$ for any matrix $M \in K$. If we multiply these equalities together for $i = 1$ and $2$, then from (4.47) of Chapter 1 and the definition of modular forms we obtain all of the claims in the lemma. $\square$

To prove (3.35) for a modular form $F$ of half-integer weight $w = k/2$, instead of $F$ we consider its square $F^2$, which, by Lemma 3.7, is a modular form of integer weight $k$ for the same group $K$ as $F$. For $F^2$ we can use the bound (3.43):

\[(3.45)\quad |(F^2|_kM)(Z)| \leqslant \gamma (\det R)^k, \quad \text{where } Z = X + iR^{-1}. \]

On the other hand, the definitions of the transformations (3.14) and (3.21) imply that

\[(3.46)\quad \bigl| F|_{k/2}\widetilde{W} \bigr|^2 = \bigl| F^2|_kW \bigr| \quad \text{for } \widetilde{W} \in \mathfrak{G}, \]

where $W = P(\widetilde{W})$. From this and (3.45) we find that

\[(3.47)\quad |(F|_{k/2}\widetilde{M})(Z)| \leqslant \gamma (\det R)^{k/2}, \quad \text{where } Z = X + iR^{-1}, \]

$P(\widetilde{M}) \in \Gamma^n$, and $\gamma$ is a new constant that depends only on $F$. We now use the expansion (3.33) for the function $F|_{k/2}\widetilde{M}$ and the bound (3.47). By analogy with (3.44), we obtain the bound (3.35) for the coefficients $f_{\widetilde{M}}(R)$ with $w = k/2$. $\square$

At the beginning of the proof of Theorem 3.5 we saw that the operators $|_w\xi$, where $\xi = M$ or $\widetilde{M}$ and $M \in \Gamma^n$, map modular forms to modular forms. We now show that this is also a property of the analogous operators for any rational matrix (or matrix proportional to a rational one).

PROPOSITION 3.8. Let $F \in \mathfrak{M}_w(K, \chi)$ be a modular form of integer or half-integer weight $w$ (where $w = k$ or $k/2$) and finite character $\chi$ for the congruence subgroup $K$ of $\Gamma^n$; if $w = k/2$, then also $K \subset \Gamma_0^n(4)$. Further let $\xi = M$ if $w = k$ and $\xi = \widetilde{M} \in \mathfrak{G}$ if $w = k/2$, where $M$ is any matrix in $S_{\mathbb{Q}}^n$, and $P(\widetilde{M}) = M$. Then

\[ F|_w\xi \in \mathfrak{M}_w(K_M, \chi'), \]

where $K_M$ is the congruence subgroup (3.25) and $\chi'$ is the finite character $\chi_M$ if $w = k$ and $\chi_{M,k}$ if $w = k/2$.
PROOF. By Lemma 4.2 of Chapter 1 and the definition of the transformations (3.14) and (3.21), the function $G = F|_w\xi$ is regular on all of $H_n$. Furthermore, the functional equations (3.16) and (3.24) for $G$ follow from the relations (3.36) and (3.37), where we take the matrix $M_0$ from the group $K_M$. Thus, to show that $G$ is a modular form it remains to verify part three of the definition.

Suppose that $n = 1$ and $w = k$. Then every function $F|_kW'$ for $W' \in \Gamma^1$ is bounded on $H_1(\varepsilon) \subset H_1$ for any $\varepsilon > 0$, and, by (3.15), we have $G|_kW = F|_kMW$ for any $W \in \Gamma^1$. By Proposition 3.7 of Chapter 1, we can write $MW = W'M'$, where $W' \in \Gamma^1$, $M' = \begin{pmatrix} a & b \\ 0 & d \end{pmatrix}$ and $ad = r(M) > 0$; hence,

\[(3.48)\quad G|_kW = F|_kW'|_kM' = d^{-k}(F|_kW')((az + b)/d), \]

and boundedness of this function on $H_1(\varepsilon)$ follows from the boundedness of $F|_kW'$ on $H_1(a\varepsilon/d)$.
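The decomposition $MW = W'M'$ used here rests on the fact that an integer $2 \times 2$ matrix $M$ with $\det M = r(M) > 0$ can be written as $W'M'$ with $W' \in SL_2(\mathbb{Z})$ and $M' = \begin{pmatrix} a & b \\ 0 & d \end{pmatrix}$, $ad = r(M)$. This is row reduction by the Euclidean algorithm, which can be sketched as follows (our own helper, a sketch under the assumption $\det M > 0$):

```python
def ext_gcd(a, b):
    """Return (g, x, y) with x*a + y*b = g = gcd(a, b)."""
    if b == 0:
        return (a, 1, 0)
    g, x, y = ext_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

def factor_upper(M):
    """Write M = W @ Mp with W in SL_2(Z) and Mp upper triangular."""
    (a, b), (c, d) = M
    g, x, y = ext_gcd(a, c)
    # W^{-1} = (x, y; -c/g, a/g) has determinant (x*a + y*c)/g = 1
    Mp = [[x * a + y * c, x * b + y * d],
          [0, (-c // g) * b + (a // g) * d]]
    W = [[a // g, -y], [c // g, x]]        # the inverse of W^{-1}
    return W, Mp
```

For instance, $\begin{pmatrix} 2 & 1 \\ 4 & 6 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 2 & 1 \end{pmatrix}\begin{pmatrix} 2 & 1 \\ 0 & 4 \end{pmatrix}$, with $ad = 8$ equal to the determinant.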
Finally, if $n = 1$ and $w = k/2$, then the function $F|_{k/2}\widetilde{W}'$, where $P(\widetilde{W}') \in \Gamma^1$, is also bounded on $H_1(\varepsilon)$ for any $\varepsilon > 0$. By analogy with (3.48), the function $G = F|_{k/2}\widetilde{M}$ satisfies the relation

\[ G|_{k/2}\widetilde{W} = t|d|^{-k/2}(F|_{k/2}\widetilde{W}')((az + b)/d), \]

where $P(\widetilde{W}) \in \Gamma^1$ and $t$ is a complex number in $C_1$. This implies that the function $G|_{k/2}\widetilde{W}$ is bounded on $H_1(\varepsilon)$. $\square$

PROBLEM 3.9. Prove that if $K = \Gamma^n(4q)$ and $M \in \Gamma^n$, then the homomorphism $t_M$ in Lemma 3.4 satisfies the condition $t_M^2 = 1$. If $M \in \Gamma_0^n(4)$, then show that $t_M = 1$.

PROBLEM 3.10. Let $K$ be a congruence subgroup of $\Gamma_0^n(4)$, and let $M$ be any matrix in $S_{\mathbb{Q}}^n$. Show that the characters $t_M$ and $t_{M^{-1}}$ are related as follows: $t_M(w)^{-1} = t_{M^{-1}}(MwM^{-1})$ for any $w \in K_M$.
4. The Siegel operator. In this subsection we shall establish connections between modular forms of degree $n$ and degree $n - 1$. These connections come from the properties of the Fourier expansion of a modular form that were described in Theorems 3.1 and 3.5. Let $\mathfrak{F}_q^n$ denote the set of all Fourier series of the form (3.5) that converge absolutely and uniformly on $H_n(\varepsilon)$ for $\varepsilon > 0$. Let $F \in \mathfrak{F}_q^n$. If $Z \in H_{n-1}(\varepsilon)$ and $\lambda > \varepsilon$, then obviously

\[ Z_\lambda = \begin{pmatrix} Z & 0 \\ 0 & i\lambda \end{pmatrix} \in H_n(\varepsilon). \]

Because of the uniform convergence of the series (3.5) for $F$, we have

\[ \lim_{\lambda \to +\infty} F(Z_\lambda) = \sum_{R \in \mathbf{A}_n} f(R) \lim_{\lambda \to +\infty} e\{q^{-1}RZ_\lambda\}. \]

If

\[ R = \begin{pmatrix} R' & * \\ * & 2r_{nn} \end{pmatrix}, \]

then $\sigma(RZ_\lambda) = \sigma(R'Z) + 2r_{nn}\lambda i$, and hence

\[(3.49)\quad \lim_{\lambda \to +\infty} e\{q^{-1}RZ_\lambda\} = \begin{cases} e\{q^{-1}R'Z\}, & \text{if } r_{nn} = 0, \\ 0, & \text{if } r_{nn} > 0. \end{cases} \]

Since $R \geqslant 0$, the equality $r_{nn} = 0$ implies that $r_{1n} = r_{n1} = \cdots = r_{n-1,n} = r_{n,n-1} = 0$, i.e., $R = \begin{pmatrix} R' & 0 \\ 0 & 0 \end{pmatrix}$. Thus, for $Z \in H_{n-1}$ we have

\[(3.50)\quad (F|\Phi)(Z) = \lim_{\lambda \to +\infty} F(Z_\lambda) = \sum_{R' \in \mathbf{A}_{n-1}} f\Bigl( \begin{pmatrix} R' & 0 \\ 0 & 0 \end{pmatrix} \Bigr)\, e\{q^{-1}R'Z\}. \]

Since this last series is a partial sum for the expansion (3.5) of $F$, it converges absolutely and uniformly on $H_{n-1}(\varepsilon)$. Thus, $F|\Phi \in \mathfrak{F}_q^{n-1}$. If $n = 1$, we set

\[(3.51)\quad F|\Phi = \lim_{\lambda \to +\infty} F(i\lambda). \]

As before, the limit exists and is equal to the constant term of the Fourier expansion of $F$. Setting $\mathfrak{F}_q^0 = \mathbb{C}$, for all $n, q \geqslant 1$ we obtain the linear operator

\[(3.52)\quad \Phi : \mathfrak{F}_q^n \to \mathfrak{F}_q^{n-1}, \]

which is called the Siegel operator.
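On Fourier expansions the Siegel operator is thus pure bookkeeping: it keeps exactly the coefficients $f(R)$ for which the last row and column of $R$ vanish, and re-indexes them by the upper-left $(n-1) \times (n-1)$ block, as in (3.50). A dictionary-based sketch (our own encoding of the coefficient data, not the book's notation):

```python
def siegel_phi(coeffs, n):
    """Apply the Siegel operator to a degree-n Fourier expansion.

    coeffs maps an n x n matrix R (a tuple of row tuples) to f(R); the result
    maps the (n-1) x (n-1) block R' to f(R) for those R whose last row and
    column vanish, and discards all other coefficients.
    """
    out = {}
    for R, f in coeffs.items():
        if all(R[n - 1][j] == 0 for j in range(n)) and \
           all(R[i][n - 1] == 0 for i in range(n)):
            Rp = tuple(tuple(R[i][j] for j in range(n - 1))
                       for i in range(n - 1))
            out[Rp] = f
    return out
```

For example, a degree-2 expansion with coefficients at $\begin{pmatrix} 2 & 0 \\ 0 & 0 \end{pmatrix}$, $\begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$, and $0$ is sent to a degree-1 expansion retaining only the first and last of these.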


The Siegel operator has an especially simple action on the theta-series of positive definite quadratic forms.

PROPOSITION 3.11. The image of the generalized theta-series $\theta^n(W, Q|T)$, where $W \in H_n$, $Q \in \mathbf{A}_m^+$, $T \in T_n(Q)$, $m, n \geqslant 1$, under the action of the Siegel operator $\Phi$ is equal to $\theta^{n-1}(Z, Q|T')$ or $0$, depending on whether $t \equiv 0$ or $\not\equiv 0 \pmod q$, respectively, where $q$ is the level of $Q$, $t$ is the last column of the matrix $T = (T', t)$, and $Z \in H_{n-1}$ (we set $\theta^0 = 1$). In particular, $\Phi$ takes $\theta^n(W, Q)$ to $\theta^{n-1}(Z, Q)$.

PROOF. Since the series defining the function $\theta^n(W, Q|T)$ converges uniformly on $H_n(\varepsilon)$ for $\varepsilon > 0$, it follows that

\[ \lim_{\lambda \to +\infty} \theta^n(Z_\lambda, Q|T) = \sum_{M \in M_{m,n}} \lim_{\lambda \to +\infty} e\Bigl\{ Q[M + q^{-1}T] \begin{pmatrix} Z & 0 \\ 0 & i\lambda \end{pmatrix} \Bigr\}. \]

If $M = (M', m')$, where $m'$ is the last column of the matrix $M$, then the entry in the lower-right corner of the matrix $Q[M + q^{-1}T]$ is obviously equal to $Q[m' + q^{-1}t]$. Using (3.49) and the positivity of $Q$, we see that the limit of the corresponding term in the sum on the right is equal to $e\{Q[M' + q^{-1}T']Z\}$ or $0$, depending on whether $m' + q^{-1}t = 0$ or $\neq 0$, respectively. $\square$

We now consider the action of the Siegel operator on modular forms for congruence subgroups of the modular group. For $n > 1$ we define the monomorphism

\[(3.53)\quad \varphi : \Gamma^{n-1} \to \Gamma^n : M' = \begin{pmatrix} A' & B' \\ C' & D' \end{pmatrix} \mapsto M = \begin{pmatrix} A & B \\ C & D \end{pmatrix}, \]

where

\[ A = \begin{pmatrix} A' & 0 \\ 0 & 1 \end{pmatrix}, \quad B = \begin{pmatrix} B' & 0 \\ 0 & 0 \end{pmatrix}, \quad C = \begin{pmatrix} C' & 0 \\ 0 & 0 \end{pmatrix}, \quad D = \begin{pmatrix} D' & 0 \\ 0 & 1 \end{pmatrix}, \]

and we let

\[(3.54)\quad \Gamma^{*n-1} = \varphi(\Gamma^{n-1}). \]

For an arbitrary subgroup $K = K^n \subset \Gamma^n$ we set

\[(3.55)\quad K^{n,n-1} = K \cap \Gamma^{*n-1}, \qquad K^{[n-1]} = \varphi^{-1}(K^{n,n-1}). \]

If $K$ is a congruence subgroup of $\Gamma^n$, $K \supset \Gamma^n(q)$, and $\chi$ is a finite character of the group $K$, then, since $K^{n,n-1} \supset \Gamma^n(q) \cap \Gamma^{*n-1}$, we have

\[ \Gamma^{n-1}(q) = \varphi^{-1}(\Gamma^n(q) \cap \Gamma^{*n-1}) \subset K^{[n-1]}, \]

so that $K^{[n-1]}$ is a congruence subgroup of $\Gamma^{n-1}$, and

\[(3.56)\quad \chi^{[n-1]}(M') = \chi(\varphi(M')) \quad (M' \in K^{[n-1]}) \]

is a finite character of this group. Now let $F \in \mathfrak{M}_w(K, \chi)$ be a modular form of integer or half-integer weight $w$ (where $w = k$ or $w = k/2$). If $Z \in H_{n-1}$, $M' \in \Gamma^{n-1}$, and $M = \varphi(M')$, then for $\lambda > 0$ we have

\[(3.57)\quad M\langle Z_\lambda\rangle = \begin{pmatrix} M'\langle Z\rangle & 0 \\ 0 & i\lambda \end{pmatrix}, \quad \text{where } Z_\lambda = \begin{pmatrix} Z & 0 \\ 0 & i\lambda \end{pmatrix} \in H_n; \]

\[(3.58)\quad \det(CZ_\lambda + D) = \det(C'Z + D'), \]

where $M = \begin{pmatrix} A & B \\ C & D \end{pmatrix}$ and $M' = \begin{pmatrix} A' & B' \\ C' & D' \end{pmatrix}$. Thus,

\[(3.59)\quad (F|\Phi)|_kM' = (F|_kM)|\Phi. \]

In particular, if $M' \in K^{[n-1]}$, then

\[(3.60)\quad (F|\Phi)|_kM' = \chi(M)F|\Phi = \chi^{[n-1]}(M')F|\Phi. \]
We next suppose that $w = k/2$ and $K \subset \Gamma_0^n(4)$. In the earlier notation, if we use the functional equation (4.36) of Chapter 1, the relation (3.57), and Proposition 3.11, we obtain

\[ \lim_{\lambda \to +\infty} j_{(2)}^n(M, Z_\lambda) = \lim_{\lambda \to +\infty} \frac{\theta^n(M\langle Z_\lambda\rangle, (2))}{\theta^n(Z_\lambda, (2))} = \frac{\theta^{n-1}(M'\langle Z\rangle, (2))}{\theta^{n-1}(Z, (2))} = j_{(2)}^{n-1}(M', Z). \]

But in view of (4.37) of Chapter 1 and (3.58), the function $j_{(2)}^n(M, Z_\lambda)$ does not depend on $\lambda$; hence,

\[(3.61)\quad j_{(2)}^n(M, Z_\lambda) = j_{(2)}^{n-1}(M', Z), \]

where $M' \in \Gamma_0^{n-1}(4)$, $M = \varphi(M')$, and $Z_\lambda = \begin{pmatrix} Z & 0 \\ 0 & i\lambda \end{pmatrix} \in H_n$.

We now consider the more general case when $M' \in \Gamma^{n-1}$ and $\widetilde{M}' = (M', \varphi')$, $\widetilde{M} = (M, \varphi)$ are arbitrary $P$-preimages of $M'$ and $M$ in $\mathfrak{G}$. From the definition (3.17) of $\mathfrak{G}$ and the equality (3.58) we see that

\[(3.62)\quad \varphi(Z_\lambda) = t\varphi'(Z), \quad \text{where } t \in C_1. \]

Hence, using the definition (3.21) of the transformation $|_{k/2}\widetilde{M}'$, by analogy with (3.59) we obtain the relation

\[(3.63)\quad (F|\Phi)|_{k/2}\widetilde{M}' = t^k\, (F|_{k/2}\widetilde{M})|\Phi. \]

If $M' \in \Gamma_0^{n-1}(4)$, then (3.61) and the definition (3.23) of $\widetilde{M}'$ and $\widetilde{M}$ imply that in this case $t = 1$. We thus have

\[(3.64)\quad (F|\Phi)|_{k/2}\widetilde{M}' = (F|_{k/2}\widetilde{M})|\Phi \quad \text{for } M' \in \Gamma_0^{n-1}(4), \]

which implies, in particular, that for $F \in \mathfrak{M}_{k/2}(K, \chi)$ and $M' \in K^{[n-1]}$

\[(3.65)\quad (F|\Phi)|_{k/2}\widetilde{M}' = \chi(M)F|\Phi = \chi^{[n-1]}(M')F|\Phi. \]
§3. FOURIER EXPANSIONS 75

Thus, by (3.60) and (3.65), the function F|Φ satisfies the functional equations of a modular form in 𝔐_w(K^{[n-1]}, χ^{[n-1]}) with finite character χ^{[n-1]}. In addition, from (3.59) and (3.63) it follows that for any M' ∈ Γ^{n-1} the functions (F|Φ)|_k M' and (F|Φ)|_{k/2}M̃' are bounded on H_{n-1}(ε), provided that the functions F|_k M and F|_{k/2}M̃, where M ∈ Γ^n, are bounded on H_n(ε). Finally, if K ⊃ Γ^n(q_1) and the character χ has order m, then by (3.33) the function F|Φ has a Fourier expansion of the form (3.33) with q = 4q_1m, and hence it is analytic on H_{n-1}. We have thereby proved
PROPOSITION 3.12. Suppose that K is a congruence subgroup of the modular group Γ^n, χ is a finite character of K, and w is an integer or half-integer (w = k or w = k/2). Set 𝔐_w(K^{[0]}, χ^{[0]}) = C. Then the Siegel operator Φ gives a linear map

(3.66) Φ: 𝔐_w(K, χ) → 𝔐_w(K^{[n-1]}, χ^{[n-1]}),

where K ⊂ Γ_0^n(4) if w = k/2.


5. Cusp-forms. A modular form F ∈ 𝔐_w(K, χ) is called a cusp-form if all of the Siegel operators map it to zero; that is, if

(3.67) F|_w ξ|Φ = 0

for every ξ = M ∈ Γ^n when w = k, and for every ξ = M̃ ∈ 𝔖 with P(M̃) ∈ Γ^n when w = k/2.

The condition (3.67) means that F approaches zero as the argument makes certain "rational" approaches to the boundary of the upper half-plane H_n, i.e., in some sense F is small near the boundary. This circumstance makes it possible to substantially strengthen the bounds (3.35) on the Fourier coefficients of the functions F|_wξ for such F.
THEOREM 3.13. Let F ∈ 𝔐_w(K, χ) be a cusp-form of degree n ≥ 1, integer or half-integer weight w (where w = k or w = k/2), and finite character χ of order m for a congruence subgroup K of Γ^n, where K ⊃ Γ^n(q_1) and, if w = k/2, K ⊂ Γ_0^n(4). Then, in the notation of Proposition 3.8:

(1) for any matrix M ∈ Γ^n the Fourier expansion (3.33) of the function F|_wξ ∈ 𝔐_w(K_M, χ') has the form

(3.68) F|_wξ = Σ_{R∈A_n^+} f_ξ(R) e{q^{-1}RZ},

where q = q_1m if w = k and 4q_1m if w = k/2, i.e., only positive definite matrices appear in the expansion;

(2) if w ≥ 0, then the functions F|_wξ (M ∈ Γ^n) and their Fourier coefficients satisfy the bounds

(3.69) |(F|_wξ)(X + iY)| ≤ δ'_F (det Y)^{-w/2} for X + iY ∈ H_n,

(3.70) |f_ξ(R)| ≤ δ_F (det R)^{w/2} for R ∈ A_n^+,

where δ'_F and δ_F depend only on F;

(3) the function F|_wξ is a cusp-form for every M ∈ S_Q.
We first prove a lemma.

LEMMA 3.14. Let R ∈ A_m, m > 1, and det R = 0. Then there exists a matrix V ∈ SL_m(Z) such that

(3.71) R[V] = (R' 0; 0 0), where R' ∈ A_{m-1}.

PROOF. Since R is a singular integer matrix, there exists a nonzero m-dimensional integer column-vector v such that Rv = 0. Without loss of generality we may assume that the coordinates of v are relatively prime. Then by Lemma 1.5 there exists an m × (m - 1) integer matrix V' such that V = (V', v) ∈ SL_m(Z). This matrix obviously satisfies the lemma. □
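A minimal numerical illustration of Lemma 3.14 (our example, not from the original text): take the singular matrix R below, a vector v in its kernel with relatively prime coordinates, and a completion V = (V', v) ∈ SL_2(Z); the congruent matrix R[V] = ᵗV R V then has its last row and column equal to zero:

```latex
R = \begin{pmatrix} 2 & 2 \\ 2 & 2 \end{pmatrix},\qquad
Rv = 0 \ \text{for}\ v = \begin{pmatrix} 1 \\ -1 \end{pmatrix},\qquad
V = (V', v) = \begin{pmatrix} 0 & 1 \\ -1 & -1 \end{pmatrix} \in SL_2(\mathbf{Z}),\\[4pt]
R[V] = {}^{t}V R V
 = \begin{pmatrix} 0 & -1 \\ 1 & -1 \end{pmatrix}
   \begin{pmatrix} 2 & 2 \\ 2 & 2 \end{pmatrix}
   \begin{pmatrix} 0 & 1 \\ -1 & -1 \end{pmatrix}
 = \begin{pmatrix} 2 & 0 \\ 0 & 0 \end{pmatrix},
\qquad R' = (2) \in A_1 .
```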

PROOF OF THE THEOREM. Let w = k. We shall show that the coefficients f_M(R) in the expansion (3.33) of a cusp-form are zero for matrices with det R = 0. If n = 1, then this follows immediately from (3.67), since in this case (F|_k M)|Φ = f_M(0). Suppose that n > 1 and V ∈ SL_n(Z) satisfies (3.71) for the matrix R = R_0. Then M_0 = U(V^*) ∈ Γ^n, and

F|_k M|_k M_0 = Σ_{R∈A_n} f_M(R[V^{-1}]) e{q^{-1}RZ}.

On the other hand, the Fourier coefficients of the function F|_k M|_k M_0 = F|_k MM_0 are f_{MM_0}(R), so that

f_M(R[V^{-1}]) = f_{MM_0}(R) (R ∈ A_n).

If we set R = R_0[V] here, we obtain

f_M(R_0) = f_{MM_0}(R_0[V]) = f_{MM_0}((R' 0; 0 0)) = 0,

since F|_k MM_0|Φ = 0, and the last expression is one of the Fourier coefficients of this function (see (3.50)). If we use (3.33) with w = k/2, then the above argument goes through for modular forms of half-integer weight as well.

LEMMA 3.15. Suppose that a series of the form

Q(Y) = Σ_{R∈A_n^+} q(R) exp(-ασ(RY)),

where all of the coefficients q(R) are nonnegative and α > 0, converges for all Y ∈ P_n. Then the following bounds hold for any matrix Y that belongs to the Minkowski reduction domain F_n and satisfies the inequality Y ≥ εE_n, where ε > 0:

Q(Y) ≤ δ_1 exp(-δ_2 σ(Y)),
Q(Y) ≤ δ_1 exp(-δ_2 n (det Y)^{1/n}),

where δ_1 and δ_2 are positive constants, the first of which depends only on Q and ε and the second of which depends only on n and α.

PROOF OF THE LEMMA. Since Y ∈ F_n, it follows from (1.16) and also (1.6) of Appendix 1 that

σ(RY) ≥ b_n σ(R diag(y_{11}, ..., y_{nn})) = b_n Σ_{a=1}^n 2r_{aa} y_{aa} ≥ 2b_n σ(Y) (R ∈ A_n^+),

where b_n = n^{1-n} c_n^{-1} depends only on n and 2r_{aa} ≥ 2 (the diagonal entries of R). On the other hand, since Y ≥ εE_n, it follows that σ(RY) ≥ εσ(R) for R ≥ 0. From these inequalities we obtain σ(RY) ≥ b_n σ(Y) + (ε/2)σ(R), and hence

Q(Y) ≤ exp(-αb_n σ(Y)) Σ_{R∈A_n^+} q(R) exp(-αεσ(R)/2) = exp(-αb_n σ(Y)) Q((ε/2)E_n),

proving the first inequality. The second inequality follows from the first, since, if we use the inequality between the arithmetic and geometric means and the inequality (1.8) of Appendix 1, we obtain

σ(Y)/n ≥ (y_{11} ··· y_{nn})^{1/n} ≥ (det Y)^{1/n} for Y ∈ P_n. □

We now return to the proof of Theorem 3.13 for w = k. By analogy with the function (3.38) we consider the function

(3.72) G = G_F = Σ_{M∈K\Γ^n} |F|_k M|.

From (3.15) and (3.16) it follows that G does not depend on the choice of left coset representatives of Γ^n modulo K. Consequently, for any matrix M' = (A B; C D) ∈ Γ^n we have

(3.73) Σ_M |(F|_k MM')(Z)| = Σ_M |(F|_k M)(Z)|, i.e., G_F(M'⟨Z⟩) = |det(CZ + D)|^k G_F(Z).

From these relations and Lemma 2.8 of Chapter 1 it follows that the function

Ψ_F(Z) = Ψ_F(X + iY) = (det Y)^{k/2} G_F(Z)

on H_n is invariant relative to all transformations in Γ^n:

Ψ_F(M'⟨Z⟩) = Ψ_F(Z) for M' ∈ Γ^n.

Hence, any value it takes on H_n is already taken on the fundamental domain D_n of the group Γ^n. However, if Z = X + iY ∈ D_n, then, by the definition of D_n and the inequality (1.26), Y satisfies the conditions of Lemma 3.15 for suitable ε > 0. If we apply the first inequality in Lemma 3.15 to each of the functions

Σ_{R∈A_n^+} |f_M(R)| exp(-πσ(RY)/q) ≥ |F|_k M|,

we obtain

Ψ_F(X + iY) ≤ δ_3 (det Y)^{k/2} exp(-δ_2 σ(Y)),

where δ_2 and δ_3 are positive constants depending only on F. If y_{aa} are the diagonal entries in Y, then, by (1.8) of Appendix 1 (recall that k ≥ 0), the last expression is no greater than

δ_3 ∏_{a=1}^n y_{aa}^{k/2} exp(-δ_2 y_{aa}),

and consequently it is bounded for all Y > 0. Thus, the function Ψ_F is bounded on D_n, and hence on all of H_n: Ψ_F(Z) ≤ δ'_F for Z ∈ H_n. Since for an arbitrary matrix M ∈ Γ^n the function |F|_k M| is obviously equal to one of the terms in (3.72), the last bound gives us (3.69) with w = k:

(3.74) |(F|_k M)(X + iY)| ≤ G_F(X + iY) = (det Y)^{-k/2} Ψ_F(X + iY) ≤ δ'_F (det Y)^{-k/2}.

If we now substitute this bound in the integral in (3.44), we obtain the bound (3.70) for the coefficients f_M(R). To prove (3.69) and (3.70) for w = k/2 one can repeat the proof of (3.35) in Theorem 3.5, i.e., instead of the modular form F of half-integer weight k/2 one considers the modular form F² of integer weight k. Then F² satisfies (3.74), and from that, along with (3.46), one obtains

|(F|_{k/2}M̃)(X + iY)|² = |(F²|_k M)(X + iY)| ≤ δ'_F (det Y)^{-k/2},

i.e., (3.69) with w = k/2. If we substitute this bound in the integral in (3.44), we obtain (3.70) for the coefficients f_M̃(R).
Finally, to prove the last part of the theorem, by Proposition 3.8 we must show that

(3.75) (F|_w ξ)|_w ξ'|Φ = 0,

where ξ' = M' ∈ Γ^n if w = k and ξ' = M̃' ∈ 𝔖, P(M̃') = M', if w = k/2. By Proposition 3.7 of Chapter 1, the matrix MM' can be written in the form M_1M_0, where M_0 = (A_0 B_0; 0 D_0) and M_1 ∈ Γ^n. Let ξ_1 and ξ_0 be defined in the same manner as ξ; in addition, if w = k/2, then let ξ_0 = M̃_0 ∈ 𝔖 be chosen so that ξξ' = ξ_1ξ_0 (this is always possible). Then, by the first part of the theorem, we have the expansion

F|_w ξ_1 = Σ_{R∈A_n^+} f_{ξ_1}(R) e{q^{-1}RZ},

and hence

F|_w ξξ' = F|_w ξ_1ξ_0 = F|_w ξ_1|_w ξ_0
 = t (det D_0)^{-w} Σ_{R∈A_n^+} f_{ξ_1}(R) e{q^{-1}RB_0D_0^{-1}} e{q^{-1}D_0^{-1}RA_0 Z},

where t is a complex number in C_1. Since each matrix q^{-1}D_0^{-1}RA_0 is positive definite, from (3.49) and the definition of the Siegel operator it follows that F|_w ξξ'|Φ = 0. This, along with the relation F|_w ξξ' = F|_w ξ|_w ξ', implies (3.75). □

We let 𝔑_w(K, χ) denote the space of all cusp-forms of weight w and character χ for the group K.

§4. Spaces of modular forms


In §2, starting from certain properties of theta-series, we defined modular forms
for congruence subgroups of the modular group. Generally speaking, there are more
modular forms than theta-series; nevertheless, the conditions in the definition of a
given type of modular form turn out to be so rigid that they are satisfied by only a
finite number of linearly independent functions.
1. Zeros of modular forms for Γ^1. By the order at a point p ∈ H_1 of a nonzero holomorphic function F on H_1 we mean the integer n = ν_p(F) for which the function F(z)/(z - p)^n is holomorphic and nonzero at p. If F ∈ 𝔐_k = 𝔐_k(Γ^1, 1) is a modular form of weight k for Γ^1, then the functional equation

(4.1) (cz + d)^{-k} F(M⟨z⟩) = F(z) for M = (a b; c d) ∈ Γ^1

implies that ν_p(F) = ν_{M⟨p⟩}(F) for M ∈ Γ^1; thus, the order ν_p(F) depends only on the Γ^1-orbit Γ^1⟨p⟩ of p. In addition, by Theorem 3.1, in this case F has a series expansion of the form

(4.2) F(z) = Σ_{r=0}^∞ f(2r) exp(2πirz),

which converges uniformly on H_1(ε) for any ε > 0. This implies that F may be regarded as a function of the variable q = exp(2πiz):

(4.3) F(z) = F̃(q) = Σ_{r=0}^∞ f(2r) q^r,

and, as a function of q, it is holomorphic in the open unit disc |q| < 1, including the center q = 0 = lim_{z→i∞} exp(2πiz). The order of F̃(q) at the point q = 0 is called the order of F at the point i∞; it is denoted ν_{i∞}(F). In other words, ν_{i∞}(F) = n if f(0) = f(2) = ··· = f(2(n - 1)) = 0 but f(2n) ≠ 0 in the expansion (4.2).
PROPOSITION 4.1. Any nonzero modular form F of weight k and trivial character for the modular group Γ^1 vanishes on only a finite number of Γ^1-orbits in H_1. If p_1, ..., p_m are a set of representatives of these orbits, then

(4.4) ν_{i∞}(F) + Σ_{a=1}^m e(p_a)^{-1} ν_{p_a}(F) = k/12,

where e(p) = 2 if p belongs to the orbit of the point i, e(p) = 3 if p belongs to the orbit of the point ρ = (1 + i√3)/2, and e(p) = 1 otherwise.

PROOF. By Theorem 1.1, every Γ^1-orbit in H_1 intersects the fundamental domain D_1. Thus, to prove the first part of the proposition it suffices to verify that F vanishes at only finitely many points p ∈ D_1. Since the function F̃(q) (see (4.3)) is holomorphic at q = 0 and is not identically zero, it must be nonzero in some region of the form 0 < |q| < ε, where ε < 1. Since |q| = exp(-2πy), this implies that F(x + iy) ≠ 0 if y > (ln ε^{-1})/2π. But the subset of D_1 consisting of all points x + iy for which y ≤ (ln ε^{-1})/2π is compact, and hence can contain only finitely many zeros of the holomorphic function F.
[FIGURE 2: the contour L_r inside the fundamental domain D_1, bounded by the lines x = ±1/2 and the unit circle, with vertices A, B, C, D and small circular arcs DD', EE', A'A of radius r centered at ρ², i, and ρ, respectively.]

In proving (4.4) we may assume that p_1, ..., p_m ∈ D_1. We first suppose that the boundary of D_1 does not contain any zeros of F, except possibly for i, ρ, ρ². Then one can draw the contour L_r shown in Figure 2, where DD', EE', and A'A are arcs of small circles all of radius r centered at ρ², i, and ρ, respectively, such that L_r contains all of the zeros p_1, ..., p_m that are distinct from ρ, ρ², and i. Since all of the interior points of D_1 lie in different Γ^1-orbits, it follows that there are no other zeros of F inside L_r, and so, by the residue theorem, we have

(4.5) Σ_{p_a ≠ ρ, ρ², i} ν_{p_a}(F) = (1/2πi) ∮_{L_r} dF/F = lim_{r→0} (1/2πi) ∮_{L_r} dF/F,

where the integral is taken counterclockwise around L_r.


The integral over L_r can be computed by dividing L_r into pieces with endpoints at A, B, C, D, D', E, E', A' (see Figure 2). First of all, since F(z - 1) = F(z) and the transformation z → z - 1 takes the segment AB to the segment DC, we have

(1/2πi) ∫_{AB} dF/F + (1/2πi) ∫_{CD} dF/F = 0.

Next, since the map z → q takes the segment BC to a (clockwise) circle R centered at q = 0 that does not contain any zeros of F̃, with the possible exception of a zero of order ν_{i∞}(F) at q = 0, it follows that

(1/2πi) ∫_{BC} dF/F = (1/2πi) ∮_R dF̃/F̃ = -ν_{i∞}(F).

The integral of (1/2πi) dF/F over the entire circle containing the arc DD' (taken in the same direction as the arc) is equal to -ν_{ρ²}(F) = -ν_ρ(F) for small r. Since the angle between the radii from ρ² to D and from ρ² to D' is obviously 2π/6, we have

lim_{r→0} (1/2πi) ∫_{DD'} dF/F = -(1/6) ν_ρ(F).

Similarly,

lim_{r→0} (1/2πi) ∫_{EE'} dF/F = -(1/2) ν_i(F)

and

lim_{r→0} (1/2πi) ∫_{A'A} dF/F = -(1/6) ν_ρ(F).

Finally, the transformation z → -z^{-1} takes the arc A'E' to the arc D'E, and the relation F(-1/z) = z^k F(z) implies that

dF(-1/z)/F(-1/z) = k dz/z + dF/F;

hence,

(1/2πi) ∫_{D'E} dF/F + (1/2πi) ∫_{E'A'} dF/F
 = (1/2πi) ∫_{A'E'} dF(-1/z)/F(-1/z) + (1/2πi) ∫_{E'A'} dF/F
 = (1/2πi) ∫_{A'E'} k dz/z + (1/2πi) ( ∫_{E'A'} dF/F + ∫_{A'E'} dF/F )
 = (1/2πi) ∫_{A'E'} k dz/z.

From this it follows that as r → 0 our sum approaches the limit

(k/2πi) ∫_{(ρ,i)} dz/z = k/12,

since the length of the arc from ρ to i is 2π/12. If we substitute these expressions into (4.5), we obtain (4.4).
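As a quick check of the value k/12 (our verification, not part of the original text), parametrize the arc of the unit circle from ρ = e^{iπ/3} to i = e^{iπ/2}:

```latex
z = e^{i\theta},\qquad \frac{dz}{z} = i\,d\theta,\qquad
\frac{k}{2\pi i}\int_{\pi/3}^{\pi/2} i\,d\theta
 = \frac{k}{2\pi}\Bigl(\frac{\pi}{2}-\frac{\pi}{3}\Bigr)
 = \frac{k}{2\pi}\cdot\frac{\pi}{6}
 = \frac{k}{12}.
```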
If F has zeros other than ρ, ρ², or i on the boundary of D_1, then the same argument goes through if we deform L_r in such a way that its interior contains only one from each pair of zeros lying in the same orbit. For example, if we have a pair of zeros λ and λ + 1 on the lines x = ±1/2 and another pair p and -1/p on the circle |z| = 1, then we draw the contour shown in Figure 3, where the small circular arcs have the same radius and are centered at the points indicated. □

[FIGURE 3: the contour of Figure 2 deformed near the boundary zeros, bounded by the lines x = ±1/2 and the unit circle, with small circular arcs of equal radius centered at the indicated boundary points.]
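Two classical illustrations of formula (4.4), added here as a check (the standard facts about the Eisenstein series E_4 and the discriminant cusp-form Δ are assumed; neither example appears in the original text):

```latex
% Weight k = 4: E_4 has nonzero constant term, so \nu_{i\infty}(E_4)=0, and (4.4) reads
\sum_a e(p_a)^{-1}\,\nu_{p_a}(E_4) = \tfrac{4}{12} = \tfrac{1}{3}.
% The only way to obtain 1/3 from the admissible weights 1, 1/2, 1/3 is a single
% simple zero with e(p) = 3: E_4 has a simple zero on the orbit of \rho and no others.

% Weight k = 12: \Delta is a cusp-form with \nu_{i\infty}(\Delta)=1, so (4.4) gives
1 + \sum_a e(p_a)^{-1}\,\nu_{p_a}(\Delta) = \tfrac{12}{12} = 1,
% forcing all remaining orders to vanish: \Delta \neq 0 everywhere on H_1.
```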

PROBLEM 4.2. Prove that

(1) dim 𝔐_k(Γ^1, 1) = 0 if k is negative, if k is odd, or if k = 2;
(2) dim 𝔐_k(Γ^1, 1) ≤ 1 if k = 0, 4, 6, 8, 10;
(3) dim 𝔐_k(Γ^1, 1) ≤ [k/12] + 1 if k ≥ 0.

2. Modular forms with zero initial Fourier coefficients. Proposition 4.1 implies that for any nonzero F ∈ 𝔐_k(Γ^1, 1) we have

(4.6) ν_{i∞}(F) ≤ k/12.

This means that F must be identically zero if sufficiently many of its initial Fourier coefficients are zero. The analogous fact holds for all modular forms for congruence subgroups of the Siegel modular group, and it is this fact that implies finite dimensionality of the space of such forms.
THEOREM 4.3. Suppose that K is a congruence subgroup of the modular group Γ^n, K ⊃ Γ^n(q_1), χ is a finite character of K of order m, and w is an integer or half-integer (w = k or w = k/2), where K ⊂ Γ_0^n(4) if w = k/2. Then a modular form

F(Z) = Σ_{R∈A_n} f(R) e{q^{-1}RZ} ∈ 𝔐_w(K, χ),

where q = q_1m or 4q_1m for w = k or k/2, respectively, is identically zero if its coefficients satisfy the condition

(4.7) f(R) = 0, if σ(R) ≤ (w_1/2π) σ_n μ(K) q,

where w_1 = k or (k + 1)/2 for w = k or k/2, respectively, σ_n = 2nc_n/√3, c_n is the constant in Theorem 1.8, and μ(K) is the index of the subgroup K in Γ^n.
PROOF. Supposing that the theorem has been proven for modular forms of integer weight, we show that this implies the theorem for modular forms of half-integer weight. Thus, let F be a modular form of weight w = k/2 that satisfies (4.7). By Lemma 3.7 and Theorem 2.2, the product G(Z) = F(Z) · θ^n(Z, (2)) is a modular form of integer weight w_1 = (k + 1)/2 and character χ(χ_Q)^k (of order m' dividing 2m) for the same group K ⊃ Γ^n(q_1) as F. By the definition (1.13) of Chapter 1, the theta-series θ^n(Z, (2)), like F, has an expansion with Fourier coefficients f_{(2)}(R) = r((2), q^{-1}R). Hence, G has Fourier coefficients of the form

g(R) = Σ_{R_1+R_2=R} f(R_1) f_{(2)}(R_2).

Let R ∈ A_n and σ(R) ≤ γ = (w_1/2π) σ_n μ(K) (4q_1m). If R = R_1 + R_2, then σ(R_1) ≤ σ(R) ≤ γ, since any matrix in A_n has nonnegative trace. From this and the assumption (4.7) it follows that f(R_1) = 0, and hence g(R) = 0 if σ(R) ≤ γ. Since q_1m' ≤ 4q_1m, all of the conditions of the theorem hold for the modular form G. Hence G = 0, and therefore F = 0, since θ^n(Z, (2)) is not identically zero.
The proof of the theorem for w = k will be divided into three stages. First, using (4.6), we examine the case K = Γ^1 and χ = 1; we then use induction on n to prove the theorem for K = Γ^n and χ = 1; finally, we deduce the general case from the case K = Γ^n and χ = 1.

Suppose that K = Γ^1 and χ = 1. If the Fourier coefficients f(2r) of a modular form F ∈ 𝔐_k(Γ^1, 1) satisfy (4.7), then

ν_{i∞}(F) > (k/4π) σ_1 ≥ k/(2π√3) ≥ k/12,

since c_1 ≥ 1 and σ_1 ≥ 2/√3. This contradicts (4.6). Hence, F must be identically zero.
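The final comparison uses only the numerical inequality 2π√3 < 12, which we may verify directly (this check is ours, not from the original text):

```latex
2\pi\sqrt{3} \approx 6.2832 \times 1.7321 \approx 10.88 < 12,
\qquad\text{hence}\qquad
\frac{k}{2\pi\sqrt{3}} > \frac{k}{12} \quad (k > 0),
```

so a nonzero form satisfying (4.7) would have ν_{i∞}(F) strictly greater than k/12, contradicting (4.6).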
We now prove the theorem for K = Γ^n and χ = 1 by induction on n. To be definite, we shall take the constant c_n in (4.7) to be the constant in Proposition 1.13. The case n = 1 has already been considered. Suppose that n > 1, and the theorem has already been proved for K = Γ^{n-1} and χ = 1. Let F be a modular form in 𝔐_k(Γ^n, 1) that satisfies the conditions of the theorem. If we apply Proposition 3.12 with K = Γ^n and M = E_{2n} to the function F, we conclude that the function

F' = F|Φ = Σ_{R'∈A_{n-1}} f((R' 0; 0 0)) e{R'Z'} (Z' ∈ H_{n-1}),

where Φ is the Siegel operator (3.50), is contained in 𝔐_k(Γ^{n-1}, 1). If σ(R') ≤ (k/2π) σ_{n-1}, then

σ((R' 0; 0 0)) = σ(R') ≤ (k/2π) σ_n,

since c_{n-1} ≤ c_n by Proposition 1.13, and hence σ_{n-1} ≤ σ_n. Then, using the assumption on F, we have

f'(R') = f((R' 0; 0 0)) = 0.
By the induction assumption, F' is identically zero. This means that F is a cusp-form. Then, by Theorem 3.13, F has a Fourier expansion of the form

F = Σ_{R∈A_n^+} f(R) e{RZ},

which, by Theorem 1.21 and the second inequality in Lemma 3.15 applied to the function

Σ_{R∈A_n^+} |f(R)| exp(-πσ(RY)) ≥ |F|,

implies the following bound for all Z = X + iY in the fundamental domain D_n of Γ^n:

|F(Z)| ≤ δ_1 exp(-δ'_1 (det Y)^{1/n}),

where δ_1 and δ'_1 are positive constants. From this bound it follows that the function G(Z) = (det Y)^{k/2} |F(Z)| approaches zero as det Y → +∞ and Z = X + iY remains in D_n. On the other hand, from the definition of D_n and Theorem 1.21 it follows that any subset of D_n of the form {X + iY ∈ D_n; det Y ≤ δ} with δ > 0 is closed and bounded, and hence compact. Thus, the function G(Z) attains its maximum μ on D_n at some finite point Z_0 = X_0 + iY_0 ∈ D_n. Next, from Lemma 2.8 of Chapter 1 and the definition of a modular form it follows that for any M = (A B; C D) ∈ Γ^n the function G satisfies the relation

G(M⟨Z⟩) = (|det(CZ + D)|^{-2} det Y)^{k/2} |det(CZ + D)^k F(Z)|
 = (det Y)^{k/2} |F(Z)| = G(Z)

and so is constant on every Γ^n-orbit in H_n. According to Theorem 1.20, the set D_n intersects each Γ^n-orbit in H_n; thus, the maximum μ of G on D_n is also its maximum on H_n: G(Z) ≤ G(Z_0) = μ for all Z ∈ H_n. We introduce the complex parameter t = u + iv, set Z_t = Z_0 + tE_n, and consider the function

g(t) = F(Z_t) exp(-iλσ(Z_t)),

where λ is determined from the condition λn/π = 1 + [kσ_n/2π] (here [···] denotes the greatest integer function). If we substitute the Fourier expansion for F, we obtain the expansion

g(t) = Σ f(R) e{RZ_t} exp(-iλσ(Z_t))
 = Σ_{R∈A_n^+} f(R) e{RZ_0} exp(-iλσ(Z_0)) q^{σ(R) - λn/π} = g̃(q),

where q = exp(πit). The assumptions of the theorem imply that f(R) = 0 if σ(R) - λn/π < 0. Hence, the series for g̃(q) does not contain negative powers of q. If ε > 0 is small enough so that Z_t ∈ H_n for v ≥ -ε, then the series for g(t) converges absolutely and uniformly in the half-plane v ≥ -ε. Then g̃(q) is a holomorphic function in the disc |q| ≤ exp(πε) = ρ. Since ρ > 1, it follows from the maximum principle that there exists a point q_0 = exp(πit_0) such that

|q_0| = ρ and |g̃(1)| ≤ |g̃(q_0)|.

If we return to the variable t and recall the definition of g, we can rewrite the last inequality in the form

|F(Z_0)| exp(λσ(Y_0)) ≤ |F(Z_{t_0})| exp(λσ(Y_0)) exp(λn v_0),

where t_0 = u_0 + iv_0; hence,

(det Y_0)^{-k/2} G(Z_0) ≤ (det Y_{t_0})^{-k/2} G(Z_{t_0}) exp(λn v_0).

Since G(Z_0) = μ and G(Z_{t_0}) ≤ μ, this inequality implies that

(4.8) μ ≤ μ (det Y_0)^{k/2} (det Y_{t_0})^{-k/2} exp(λn v_0) = μ φ(v_0),

where φ(v) = det(E_n + vY_0^{-1})^{-k/2} exp(λnv). Clearly, φ(0) = 1. We show that φ has positive derivative φ'(0) at the point v = 0. In fact,

φ(v) = ∏_{i=1}^n (1 + vλ_i)^{-k/2} exp(λnv),

where λ_1, ..., λ_n are the eigenvalues of the matrix Y_0^{-1}. Hence,

φ'(0) = λn - k(λ_1 + ··· + λ_n)/2 = λn - kσ(Y_0^{-1})/2
 ≥ λn - kσ_n/2 = π(1 + [kσ_n/2π] - kσ_n/2π) > 0,

since X_0 + iY_0 ∈ D_n and, by Theorem 1.21, σ(Y_0^{-1}) ≤ σ_n. This implies that φ(v_0) = φ(-ε) < 1 if ε is sufficiently small. This, along with (4.8), proves that μ = 0. Consequently, the function F is identically zero, and the theorem is proved in the case K = Γ^n and χ = 1.

Finally, we consider the general case. Suppose that F ∈ 𝔐_k(K, χ), M ∈ Γ^n, and F|_k M is defined by (3.14). From (3.15) and (3.16) it follows that any function of the form (F|_k M)^m depends only on the left coset KM of the group K. Thus, the function

(4.9) G(Z) = ∏_{a=1}^μ (F|_k M_a)^m,

where M_1, ..., M_μ is a complete set of representatives of K\Γ^n, does not depend on the choice of representatives. Since for any M ∈ Γ^n the set M_1M, ..., M_μM is also a set of representatives of K\Γ^n, if we again use (3.15) we find that G|_{kmμ}M = G for M ∈ Γ^n (compare with (3.73)).

On the other hand, G is obviously a holomorphic function on H_n, and, by Theorem 3.5, it is bounded on H_n(ε) for any ε > 0. Thus, G ∈ 𝔐_{kmμ}(Γ^n, 1). Letting f_a(R) denote the Fourier coefficients (3.33) of the function F|_k M_a, we easily see that the Fourier coefficients g(R) of G are given by the formula

g(R) = Σ_{R_{aβ}∈A_n, Σ_{a,β} R_{aβ} = qR} ∏_{a,β} f_a(R_{aβ}),

where 1 ≤ a ≤ μ, 1 ≤ β ≤ m. We suppose that F satisfies (4.7), and we let R ∈ A_n, σ(R) ≤ (kmμ/2π) σ_n. Since any matrix in A_n has nonnegative trace, it follows from the last inequality that the following holds for any partition qR = Σ_{a,β} R_{aβ}, where R_{aβ} ∈ A_n, and for any a = 1, ..., μ:

σ( Σ_{β=1}^m R_{aβ} ) = Σ_{β=1}^m σ(R_{aβ}) ≤ (kmμ/2π) σ_n q,

and this implies that for any a there exists a β such that

σ(R_{aβ}) ≤ (k/2π) σ_n μq.

To be definite, we suppose that F|_k M_1 = F, and hence f_1 = f. We then see that in the expression for g(R) every term contains a factor of the form f_1(R_{1β}) = f(R_{1β}), which is equal to zero, because F satisfies (4.7). Consequently, g(R) = 0; and, by what was proved above, G, and so also F, are identically zero. □

PROBLEM 4.4. Prove that a function F ∈ 𝔐_w(K, χ), where K, χ, and w are as in Theorem 4.3, is identically zero if for every M ∈ Γ^n the coefficients f_ξ(R) in (3.33), where ξ = M for w = k and ξ = M̃ ∈ 𝔖, P(M̃) = M, for w = k/2, are equal to zero for all R ∈ A_n satisfying the condition σ(R) ≤ w_1 σ_n q/2π.
3. Finite dimensionality of the spaces of modular forms.

THEOREM 4.5. Suppose that K is a congruence subgroup of the modular group Γ^n, K ⊃ Γ^n(q_1), χ is a character of K of order m, w is an integer or half-integer (w = k or w = k/2), and K ⊂ Γ_0^n(4) if w = k/2. Then the C-dimension of the space of modular forms of weight w and character χ for K satisfies the following relations:

dim 𝔐_w(K, χ) ≤ d_n (w_1 μ(K) q)^{n(n+1)/2}, if w > 0,

where w_1 = k and q = q_1m if w = k, w_1 = (k + 1)/2 and q = 4q_1m if w = k/2, d_n depends only on n, and μ(K) is the index of K in Γ^n;

dim 𝔐_0(K, χ) = 1 if χ = 1, and = 0 if χ ≠ 1;

dim 𝔐_w(K, χ) = 0, if w < 0.

PROOF. The case w > 0. If we use the bound (3.13) for the number of matrices R ∈ A_n with σ(R) ≤ N, we see that the number of different R ∈ A_n satisfying (4.7) is at most d = d_n (w_1 μ(K) q)^{n(n+1)/2}, where d_n depends only on n. Then any d + 1 functions in 𝔐_w(K, χ) are linearly dependent, since one can always find complex numbers not all zero such that the corresponding linear combination of the functions satisfies (4.7), and so is equal to zero.

The case w = k = 0. In this case Theorem 4.3 shows that F = 0 if f(0) = 0. Since obviously 1 ∈ 𝔐_0 = 𝔐_0(K, 1), it follows that for any form F ∈ 𝔐_0 the difference F - f(0)·1 lies in 𝔐_0 and has zero constant term. Hence, F - f(0) = 0 and F = f(0). If F ∈ 𝔐_0(K, χ) and χ(M) ≠ 1 for some M ∈ K, then from the functional equation for the function F and the matrix M and the uniqueness of Fourier expansions it follows that f(0) = 0. Hence, F = 0.

The case w = k < 0. We first prove the theorem for K = Γ^n and χ = 1 by induction on n. If n = 1, the result follows from Proposition 4.1, since the left side of (4.4) is nonnegative. Now suppose that n > 1, and we have already shown that 𝔐_k^{n-1} = {0}. If F ∈ 𝔐_k^n, then F|Φ ∈ 𝔐_k^{n-1} by Proposition 3.12, and hence F|Φ = 0 and F is a cusp-form. Let G be an arbitrary nonzero modular form of positive integer weight l and character χ_1 for some congruence subgroup K_1 of Γ^n (for example, a theta-series). Then obviously F^l G^{|k|} ∈ 𝔐_0(K', χ') for a suitable congruence subgroup K' and character χ', and the constant coefficient in the Fourier expansion of this function is equal to zero. By what was proved before, the function is zero, and hence F = 0.

The general case when w = k < 0 reduces to the case K = Γ^n and χ = 1 by the same method as in the proof of Theorem 4.3. That is, if F ∈ 𝔐_k(K, χ), then the function G in (4.9) belongs to 𝔐_{kmμ(K)}(Γ^n, 1). Hence G = 0, and then F = 0.

If w = k/2 < 0, then, by Lemma 3.7, the function F² is a modular form of weight 2w = k < 0, and the proof reduces to the previous case. □

PROBLEM 4.6. Prove that

(1) dim 𝔐_k(Γ^2, 1) = 0 for k = 1, 2, 3;
(2) dim 𝔐_4(Γ^2, 1) ≤ 1.

[Hint: Use Theorem 4.3 and Problem 1.16; verify that the space 𝔐_k(Γ^2, 1) has no nonzero cusp-forms for k = 1, 2, 3, 4; and then use Problem 4.2.]

PROBLEM 4.7. Prove that dim 𝔐_k(Γ^n, 1) = 0 if nk is odd.

§5. Scalar product and orthogonal decomposition


In the space of modular forms one can introduce an invariant scalar product
that makes it possible, in particular, to find a natural complement to the space of
cusp-forms.

1. The scalar product. Given any two functions F and G on H_n, we consider the differential form

(5.1) ω_w(F, G) = F(Z) \overline{G(Z)} h(Z)^w d*Z,

where w is an integer or half-integer (w = k or w = k/2), h(Z) = det Y is the height of Z = X + iY, and d*Z is the invariant volume element in Proposition 2.9 of Chapter 1.

LEMMA 5.1. For an arbitrary matrix M = (A B; C D) in S_Q one has the transformation formulas

(5.2) Y' = r(M) · ᵗ\overline{(CZ + D)}^{-1} Y (CZ + D)^{-1},

where Z = X + iY ∈ H_n, M⟨Z⟩ = X' + iY'; in particular,

(5.3) h(M⟨Z⟩) = det Y' = r(M)^n |det(CZ + D)|^{-2} h(Z);

(5.4) d*M⟨Z⟩ = d*Z;

(5.5) ω_w(F, G)(M⟨Z⟩) = r(M)^{nw} ω_w(F|_wξ, G|_wξ)(Z),

where, as before, ξ = M for w = k and ξ = M̃ ∈ 𝔖, P(M̃) = M, for w = k/2, and |_w is the operator (3.15) or (3.21).

PROOF. The formulas (5.2) and (5.3) follow from Lemma 3.8 of Chapter 1, if we note that M_1 = r(M)^{-1/2} M ∈ Sp_n(R). Since d*λZ = d*Z for λ > 0, (5.4) follows from Proposition 2.9 of Chapter 1. The relation (5.5) is a direct consequence of (5.3) and (5.4); moreover, from (5.1) and (3.17) it follows that the right side of (5.5) does not depend on the choice of M̃ ∈ 𝔖. □
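For the case w = k the derivation of (5.5) can be written out in one line; this computation is added by us and assumes the untwisted definition (F|_k M)(Z) = det(CZ + D)^{-k} F(M⟨Z⟩) from (3.15):

```latex
\omega_k(F, G)(M\langle Z\rangle)
 = F(M\langle Z\rangle)\,\overline{G(M\langle Z\rangle)}\,
   h(M\langle Z\rangle)^k\, d^*M\langle Z\rangle \\
 = |\det(CZ+D)|^{2k}\,(F|_k M)(Z)\,\overline{(G|_k M)(Z)}
   \cdot r(M)^{nk}\,|\det(CZ+D)|^{-2k}\,h(Z)^k\, d^*Z \\
 = r(M)^{nk}\,\omega_k\bigl(F|_k M,\; G|_k M\bigr)(Z),
```

where the second equality uses (5.3) for the height and (5.4) for the invariance of the volume element.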

Now suppose that F, G ∈ 𝔐_w(K, χ), M ∈ K, and M̃ is as in (3.23). Then by (5.5) we have

ω_w(F, G)(M⟨Z⟩) = ω_w(F|_wξ, G|_wξ)(Z) = ω_w(χ(M)F, χ(M)G)(Z) = ω_w(F, G)(Z),

i.e., the differential form (5.1) is invariant under the group K. This implies that the integral

(5.6) ∫_{D_K} ω_w(F, G)(Z),

where D_K is a fundamental domain for K in H_n, does not depend on the choice of D_K, provided that the integral is absolutely convergent.

LEMMA 5.2. If at least one of the two forms F, G ∈ 𝔐_w(K, χ) is a cusp-form, then the integral (5.6) is absolutely convergent.

PROOF. Since D_K is a finite union of sets of the form M⟨D_n⟩, where M ∈ Γ^n and D_n is the fundamental domain for Γ^n described in Theorem 1.20, to prove the lemma it suffices to verify absolute convergence of an integral of the form

∫_{M⟨D_n⟩} ω_w(F, G)(Z) = ∫_{D_n} ω_w(F|_wξ, G|_wξ)(Z),

where ξ has the same meaning as in Lemma 5.1. To be definite, suppose that F is a cusp-form. According to Theorems 3.13 and 3.5, we have the Fourier expansions

F|_wξ = Σ_{R∈A_n^+} f_ξ(R) e{q^{-1}RZ},  G|_wξ = Σ_{R∈A_n} g_ξ(R) e{q^{-1}RZ}.

Since both expansions converge absolutely on H_n, it follows that

|(F|_wξ)(Z) (G|_wξ)(Z)| ≤ Σ_{R∈A_n^+} t(R) exp(-πq^{-1}σ(RY)),

where

t(R) = Σ_{R_1+R_2=R} |f_ξ(R_1) g_ξ(R_2)|

is a finite sum and the last series converges on H_n. Then, by Theorem 1.21 and the first inequality in Lemma 3.15, we obtain the inequality

|(F|_wξ)(Z) (G|_wξ)(Z)| ≤ δ_1 exp(-δ_2 σ(Y)) (Z = X + iY ∈ D_n),

where δ_1 and δ_2 are positive constants. Hence, the lemma follows if we verify convergence of the integrals

∫_{D_n} exp(-δσ(Y)) (det Y)^w d*Z = ∫_{D_n} exp(-δσ(Y)) (det Y)^{w-n-1} ∏_{1≤a≤β≤n} dx_{aβ} dy_{aβ},

where δ > 0. If (x_{aβ}) + i(y_{aβ}) ∈ D_n, then the definition of D_n and the inequalities (1.12) imply that |x_{aβ}| ≤ 1/2 and |y_{aβ}| ≤ y_{aa}/2 (a ≠ β). In addition, at the beginning of the proof of Theorem 1.21 it was shown that in this case y_{aa} ≥ √3/2. Thus, applying the inequality (1.13) if w < n + 1 and the inequality (1.8) of Appendix 1 if w ≥ n + 1, we see that the last integral is majorized by an integral of the form

∫_{|x_{aβ}|≤1/2, y_{aa}≥√3/2, |y_{aβ}|≤y_{aa}/2} ∏_{a=1}^n y_{aa}^{w-n-1} exp(-δy_{aa}/2) ∏ dx_{aβ} dy_{aβ}

 = c ∏_{a=1}^n ∫_{√3/2}^∞ y_{aa}^{w-n-1+n-a} exp(-δy_{aa}/2) dy_{aa} < ∞. □

We are now ready to prove the following fact.

THEOREM 5.3. Suppose that K is a congruence subgroup of Γ^n, χ is a finite character of K, w is an integer or half-integer (w = k or w = k/2), and K ⊂ Γ_0^n(4) if w = k/2. If at least one of the modular forms F, G ∈ 𝔐_w(K, χ) is a cusp-form, then define their scalar product by the formula

(5.7) (F, G) = μ(K')^{-1} ∫_{D_K} ω_w(F, G)(Z),

where K' = K ∪ (-E_{2n})K, μ(K') = [Γ^n : K'], and D_K is a fundamental domain for K in H_n. This scalar product then has the following properties:
(1) (F, G) converges absolutely and does not depend on the choice of fundamental domain D_K;
(2) (F, G) does not depend on the choice of group K such that F, G ∈ 𝔐_w(K, χ);
(3) (F, G) is a positive definite nondegenerate hermitian scalar product;
(4) if M ∈ S_Q, then

(5.8) (F, G) = r(M)^{nw} (F|_wξ, G|_wξ),

where the functions F|_wξ and G|_wξ are regarded as elements of 𝔐_w(K_M, χ') (see Proposition 3.8 and Theorem 3.13(3)).
PROOF. Property (1) has already been proved, and (3) follows immediately from the definitions.

We prove (2). If F, G ∈ 𝔐_w(K_1, χ_1), then, replacing K_1 by K_1 ∩ K, we may assume that K_1 ⊂ K. Let

K' = ⋃_β K_1' N_β, Γ^n = ⋃_a K' M_a,

where K_1' = K_1 ∪ (-E_{2n})K_1, be partitions into left cosets. Then

Γ^n = ⋃_{a,β} K_1' N_β M_a

is also a partition into disjoint left cosets. By Theorem 1.22, we can take D_{K_1} = ⋃_{a,β} N_β M_a⟨D_n⟩; hence,

μ(K_1')^{-1} ∫_{D_{K_1}} ω_w(F, G)(Z) = μ(K_1')^{-1} Σ_{a,β} ∫_{N_β M_a⟨D_n⟩} ω_w(F, G)(Z)
 = ([K' : K_1'] / [Γ^n : K_1']) Σ_a ∫_{M_a⟨D_n⟩} ω_w(F, G)(Z)
 = μ(K')^{-1} ∫_{D_K} ω_w(F, G)(Z),

where for the second equality we used the invariance of the differential form ω_w under the group K'. This proves property (2).
We now prove (5.8). Since K and K_M are both congruence subgroups of Γ^n, their intersection K_{(M)} = K ∩ K_M is also a congruence subgroup, and so has finite index in Γ^n. Then if we use (5.5) and property (2), we obtain

(F|_wξ, G|_wξ) = μ(K'_{(M)})^{-1} ∫_D ω_w(F|_wξ, G|_wξ)(Z)
 = r(M)^{-nw} μ(K'_{(M)})^{-1} ∫_D ω_w(F, G)(M⟨Z⟩)
 = r(M)^{-nw} μ(K'_{(M)})^{-1} ∫_{M⟨D⟩} ω_w(F, G)(Z),

where D = D_{K_{(M)}}. It is easy to see that the set M⟨D⟩ is a fundamental domain for the group M K_{(M)} M^{-1} = M K M^{-1} ∩ K = K_{(M^{-1})}. Thus, again using property (2), we can rewrite the last expression in the form

r(M)^{-nw} (μ(K'_{(M^{-1})}) / μ(K'_{(M)})) (F, G),

and it remains for us to verify that μ(K'_{(M)}) = μ(K'_{(M^{-1})}). Since this identity obviously reduces to the corresponding identity of indices in Γ = Γ^n, we can limit ourselves to the case K = Γ. For future reference we shall prove a more general fact. □

LEMMA 5.4. Let G be a congruence subgroup of Γ^n. Then for every matrix M ∈ S_Q the group G_{(M)} = G ∩ M^{-1}GM is a congruence subgroup of Γ^n, and one has

(5.9) [G : G_{(M)}] = [G : G_{(M^{-1})}].

PROOF. The first part follows from the relation G_{(M)} = G ∩ G_M, where G_M is defined as in (3.25). Now let D be a fundamental domain for G_{(M)} in H_n. Since G_{(M^{-1})} = M G_{(M)} M^{-1}, it follows that M⟨D⟩ is a fundamental domain for G_{(M^{-1})}. Then from Proposition 1.23(2) we have the relations

v(D) = [G : G_{(M)}] v(D_G),  v(M⟨D⟩) = [G : G_{(M^{-1})}] v(D_G),

where D_G is a fundamental domain for G. On the other hand, by (5.4) we have v(D) = v(M⟨D⟩). But this, along with the above relations, implies (5.9). □

2. The orthogonal complement. We now define the subspace 𝔈_w(K, χ) of the space 𝔐_w(K, χ) of modular forms of integer or half-integer weight w (w = k or w = k/2) and character χ for the congruence subgroup K of Γⁿ, where K ⊂ Γⁿ₀(4) if w = k/2. This subspace is the set of all forms that are orthogonal to the subspace of cusp-forms with respect to the scalar product (5.7):

(5.10) 𝔈_w(K, χ) = {F ∈ 𝔐_w(K, χ); (F, G) = 0 for all G ∈ 𝔑_w(K, χ)}.


§5. SCALAR PRODUCT AND ORTHOGONAL DECOMPOSITION 91

PROPOSITION 5.5. The space of all modular forms splits into the direct sum

(5.11) 𝔐_w(K, χ) = 𝔈_w(K, χ) ⊕ 𝔑_w(K, χ)

of subspaces that are orthogonal with respect to the scalar product (5.7). In addition, for any matrix M ∈ Sⁿ_Q the map (see Proposition 3.8)

  𝔐_w(K, χ) → 𝔐_w(K^M, χ′): F → F|_w M

is an isomorphic imbedding; here cusp-forms go to cusp-forms, and if F, G ∈ 𝔐_w(K, χ) with at least one of these forms a cusp-form, then

  (F, G) = r(M)^{nw} (F|_w M, G|_w M).

PROOF. The decomposition (5.11) follows from Theorem 5.3(3) and standard linear algebra. Next, since (F|_w M)|_w M⁻¹ = F by (3.15) and (3.22), it follows from Proposition 3.8 that the map |_w M is an isomorphic imbedding. The remaining claims in the proposition follow from Theorem 3.13(3) and (5.8). □

From (5.11) it follows that any modular form F ∈ 𝔐_w(K, χ) can be uniquely represented in the form

  F = F₁ + F₂,  where F₁ ∈ 𝔈_w(K, χ), F₂ ∈ 𝔑_w(K, χ).

Equating Fourier coefficients, we obtain

(5.12) f(R) = f₁(R) + f₂(R)  (R ∈ Aₙ),

where f(R), f₁(R), and f₂(R) are the Fourier coefficients of the functions F, F₁, and F₂, respectively. The Fourier coefficients f₁(R) of F₁ ∈ 𝔈_w(K, χ) can sometimes be computed in explicit form. On the other hand, the Fourier coefficients f₂(R) of the cusp-form F₂ are relatively small, by (3.70). Starting from these considerations, in many cases one can prove that as det R → +∞ the decomposition (5.12) gives an asymptotic formula for the function f(R) with principal term f₁(R).
CHAPTER 3

Hecke Rings

One of the most fruitful ideas in the theory of modular forms, the notion of a Hecke operator, is based on a procedure for taking the average of a function over suitable double cosets of subgroups of the modular group. Chapter 4 is devoted to the theory and application of Hecke operators. The properties of Hecke operators are to a large extent a reflection of the connections that exist between the corresponding double cosets. The present chapter examines these connections.

§1. Abstract Hecke rings


1. Averaging over double cosets. As in §4.1 of Chapter 1, let S be a multiplicative semigroup that acts on a set H: h → g(h) (h ∈ H, g ∈ S) as a subsemigroup of the group of all one-to-one maps from H onto itself. Let φ be an automorphy factor of the semigroup S on H with values in a group T, and let V be a left T-module. A function F: H → V is called an automorphic form of weight φ for a subgroup Γ ⊂ S if for any γ ∈ Γ it satisfies the functional equation

(1.1) (F|_φ γ)(h) = φ(γ, h)⁻¹ F(γ(h)) = F(h).

It is clear that the set of such functions forms an abelian group, which we shall denote 𝔐 = 𝔐_φ(Γ).
If F ∈ 𝔐 and g ∈ S, then the function F′(h) = (F|_φ g)(h) does not generally lie in 𝔐. Moreover, there might be infinitely many pairwise distinct functions of the form

(1.2) F′|_φ γ  (γ ∈ Γ).

However, it often turns out that there are only finitely many distinct functions among the functions (1.2). In that case, if we sum these functions, we might again obtain a function in 𝔐. A typical situation of this sort occurs if the double coset ΓgΓ contains only finitely many left cosets modulo Γ:

(1.3) |Γ \ ΓgΓ| < ∞.
Namely, each product gγ (γ ∈ Γ) is contained in the double coset ΓgΓ; if gγ lies in a fixed left coset Γg′ ⊂ ΓgΓ, then gγ = γ′g′, where γ′ ∈ Γ, and, by Lemma 4.1(3) of Chapter 1 and the equality (1.1) above, we find that

  F′|_φ γ = F|_φ gγ = F|_φ γ′|_φ g′ = F|_φ g′,

so that any function (1.2) is equal to one of the functions F|_φ g₁, ..., F|_φ g_μ, where g₁, ..., g_μ are a complete set of representatives of the left cosets modulo Γ contained in ΓgΓ.

We consider the following average of the function F over the double coset ΓgΓ (or, if we want, the average of the function F′ over the group Γ):

(1.4) F|(g) = F|_φ(g) = Σ_{gᵢ ∈ Γ\ΓgΓ} F|_φ gᵢ.

LEMMA 1.1. Suppose that F ∈ 𝔐 = 𝔐_φ(Γ), and the double coset ΓgΓ, where g ∈ S, satisfies the condition (1.3). Then the function F|(g) does not depend on the choice of representatives g₁, ..., g_μ of Γ \ ΓgΓ, and it is an automorphic form of weight φ for Γ.

PROOF. If gᵢ′ = γᵢgᵢ (i = 1, ..., μ), where γᵢ ∈ Γ, form another set of representatives, then, by Lemma 4.1(3) of Chapter 1 and the definition of an automorphic form, we have

  Σᵢ F|_φ γᵢgᵢ = Σᵢ F|_φ γᵢ|_φ gᵢ = Σᵢ F|_φ gᵢ.

Let γ ∈ Γ. Since the set g₁γ, ..., g_μγ is also obviously a set of representatives of Γ \ ΓgΓ, it follows that

  (F|_φ(g))|_φ γ = Σᵢ F|_φ gᵢγ = F|_φ(g),

so that F|_φ(g) ∈ 𝔐. □

The operators

(1.5) F → F|_φ(g)  (F ∈ 𝔐_φ(Γ), g ∈ S)

are called Hecke operators on the space 𝔐_φ(Γ).
2. Hecke rings. In order to study the connections between the Hecke operators corresponding to different double cosets, we first examine the connections between the double cosets themselves, where we suppose that the double cosets satisfy (1.3).

LEMMA 1.2. Suppose that G is an arbitrary group, Γ is a subgroup of G, g ∈ G, and

(1.6) Γ = ⋃_{γᵢ ∈ Γ_(g)\Γ} Γ_(g)γᵢ,  where Γ_(g) = Γ ∩ g⁻¹Γg,

is the partition of Γ into a disjoint union of left cosets of the subgroup Γ_(g). Then

(1.7) ΓgΓ = ⋃_{γᵢ ∈ Γ_(g)\Γ} Γgγᵢ,

and the left cosets in this union are pairwise disjoint. In particular,

(1.8) μ(g) = μ_Γ(g) = |Γ \ ΓgΓ| = [Γ : Γ_(g)].

PROOF. The right hand side of (1.7) is clearly contained in the left hand side. Conversely, suppose that g′ = γgδ, where γ, δ ∈ Γ. By (1.6), the element δ lies in one of the left cosets Γ_(g)γᵢ, i.e., δ = αγᵢ, where α ∈ Γ and gαg⁻¹ ∈ Γ. Then g′ = γgαg⁻¹gγᵢ ∈ Γgγᵢ. If Γgγᵢ and Γgγⱼ intersect, then for some γ, δ ∈ Γ we have the equality γgγᵢ = δgγⱼ, and hence g⁻¹δ⁻¹γg·γᵢ = γⱼ and Γ_(g)γᵢ = Γ_(g)γⱼ. □

Thus, if g is an invertible element, the condition (1.3) holds if and only if Γ ∩ g⁻¹Γg has finite index in Γ.
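The relations (1.7) and (1.8) are easy to check by brute force in a finite group, where every subgroup has finite index. The following sketch is our own illustration, not part of the book: it takes G = S₄, Γ the stabilizer of a letter, and a transposition g, and verifies that the number of left cosets in ΓgΓ equals the index [Γ : Γ_(g)].

```python
from itertools import permutations

G = list(permutations(range(4)))                   # the finite group S_4
compose = lambda a, b: tuple(a[b[i]] for i in range(4))   # (a∘b)(i) = a(b(i))
inv = lambda a: tuple(sorted(range(4), key=lambda i: a[i]))
Gamma = [p for p in G if p[3] == 3]                # the subgroup fixing the letter 3

g = (0, 1, 3, 2)                                   # a transposition outside Gamma
dc = {compose(x, compose(g, y)) for x in Gamma for y in Gamma}   # Gamma g Gamma
cosets = {frozenset(compose(x, h) for x in Gamma) for h in dc}   # its left cosets

# Gamma_(g) = Gamma ∩ g^{-1} Gamma g = {t in Gamma : g t g^{-1} in Gamma}
Gamma_g = [t for t in Gamma if compose(g, compose(t, inv(g))) in Gamma]
mu = len(Gamma) // len(Gamma_g)                    # the index [Gamma : Gamma_(g)]

assert len(cosets) == mu == 3                      # (1.8): |Gamma \ Gamma g Gamma| = [Gamma : Gamma_(g)]
```

Here the common value 3 reflects the fact that the left coset of an element h is determined by h⁻¹(3), which for h ∈ ΓgΓ ranges over the three letters other than 3.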
Two subgroups Γ₁ and Γ₂ of a group G are said to be commensurable if their intersection Γ₁ ∩ Γ₂ has finite index both in Γ₁ and in Γ₂; in this case we write Γ₁ ~ Γ₂.
LEMMA 1.3. The commensurability relation is transitive on the set of subgroups of a group G.

PROOF. Suppose that Γ₁ ~ Γ₂ and Γ₂ ~ Γ₃. If we take left cosets modulo Γ₂ ∩ Γ₃, the imbedding Γ₁ ∩ Γ₂ ⊂ Γ₂ gives the imbedding

  (Γ₁ ∩ Γ₂ ∩ Γ₃) \ (Γ₁ ∩ Γ₂) ⊂ (Γ₂ ∩ Γ₃) \ Γ₂,

so that

  [Γ₁ ∩ Γ₂ : Γ₁ ∩ Γ₂ ∩ Γ₃] ≤ [Γ₂ : Γ₂ ∩ Γ₃] < ∞,

and

  [Γ₁ : Γ₁ ∩ Γ₃] ≤ [Γ₁ : Γ₁ ∩ Γ₂ ∩ Γ₃] = [Γ₁ : Γ₁ ∩ Γ₂][Γ₁ ∩ Γ₂ : Γ₁ ∩ Γ₂ ∩ Γ₃] < ∞.

Similarly, [Γ₂ ∩ Γ₃ : Γ₁ ∩ Γ₂ ∩ Γ₃] < ∞ and [Γ₃ : Γ₁ ∩ Γ₃] < ∞. Thus, Γ₁ ~ Γ₃. □
LEMMA 1.4. Let G be a group, and let Γ be a subgroup. Then the set

  Γ̃ = {g ∈ G; g⁻¹Γg ~ Γ}

is a group.

PROOF. If Γ₁ and Γ₂ are two commensurable subgroups of G and g ∈ G, then clearly the subgroups g⁻¹Γ₁g and g⁻¹Γ₂g are also commensurable. Thus, if g ∈ Γ̃, then Γ ~ g⁻¹Γg, and hence gΓg⁻¹ ~ g(g⁻¹Γg)g⁻¹ = Γ, and g⁻¹ ∈ Γ̃. Next, if g₁, g₂ ∈ Γ̃, then g₁⁻¹Γg₁ ~ Γ, and hence g₂⁻¹g₁⁻¹Γg₁g₂ ~ g₂⁻¹Γg₂ ~ Γ; then, by Lemma 1.3, g₁g₂ ∈ Γ̃. □

The group Γ̃ is called the commensurator of the subgroup Γ in G, and its elements are called Γ-rational elements of G.

Let Γ be a subgroup of G, and let S be a multiplicatively closed subset of G. We call (Γ, S) a Hecke pair if

(1.9) Γ ⊂ S ⊂ Γ̃,

where Γ̃ is the commensurator of Γ in G. To each Hecke pair (Γ, S) we associate the free Z-module L = L(Γ, S) whose generators over Z are the symbols (Γg) (g ∈ S), one for each left coset Γg. The elements of S act as linear transformations of the module L according to the rule

  S ∋ g: t = Σᵢ aᵢ(Γgᵢ) → tg = Σᵢ aᵢ(Γgᵢg).

We consider the submodule D = D(Γ, S) of L that consists of all Γ-invariant elements:

  D(Γ, S) = {t ∈ L(Γ, S); tγ = t for all γ ∈ Γ}.

If t = Σᵢ aᵢ(Γgᵢ) and t′ = Σⱼ bⱼ(Γhⱼ) are two elements of D, then the element

(1.10) t · t′ = Σ_{i,j} aᵢbⱼ(Γgᵢhⱼ) ∈ L

does not depend on the choice of left coset representatives, and it also belongs to the module D. Namely, t · t′ obviously does not depend on the choice of the representatives gᵢ. Let γⱼhⱼ, where γⱼ ∈ Γ, be different representatives of the left cosets Γhⱼ. Since, by assumption, tγⱼ = t for each j, it follows that

  Σ_{i,j} aᵢbⱼ(Γgᵢγⱼhⱼ) = Σⱼ bⱼ(tγⱼ)hⱼ = t · t′,

and the element (1.10) does not depend on the choice of the representatives hⱼ. Finally, if γ ∈ Γ, then (t · t′)γ = t(t′γ) = t · t′, so that t · t′ ∈ D. Since the multiplication map (t, t′) → t · t′ on elements of D is obviously bilinear and associative, it follows that D becomes an associative ring, called the Hecke ring of the pair (Γ, S).
If (Γ, S) is a Hecke pair, then, by Lemma 1.2, the double coset ΓgΓ of any g ∈ S is a finite union of disjoint left cosets of Γ:

  ΓgΓ = ⋃ᵢ₌₁^μ Γgᵢ.

If γ ∈ Γ, then the set g₁γ, ..., g_μγ obviously is also a full set of representatives of the distinct left cosets Γ \ ΓgΓ. Thus, the elements

(1.11) (g) = (g)_Γ = Σ_{gᵢ ∈ Γ\ΓgΓ} (Γgᵢ)

of the module L(Γ, S) satisfy the condition (g)γ = (g) for γ ∈ Γ, and hence belong to the Hecke ring D = D(Γ, S). From the definition of multiplication in D it follows that the element

  (e) = (e)_Γ = (Γ),

where e is the identity of the group G, is the unit of the ring D.
LEMMA 1.5. The elements (1.11) corresponding to the distinct Γ-double cosets of S form a Z-basis of the module D(Γ, S). The product of elements of the form (1.11) in the ring D can be calculated from the following formulas.

Let g, g′ ∈ S and let

  ΓgΓ = ⋃ᵢ₌₁^{μ(g)} Γgᵢ,  Γg′Γ = ⋃ⱼ₌₁^{μ(g′)} Γgⱼ′

be the decompositions of the double cosets into distinct left cosets. Then

  (g)(g′) = Σ_{ΓhΓ ⊂ ΓgΓg′Γ} c(g, g′; h)(h),

where h runs through a set of representatives of the Γ-double cosets contained in the set ΓgΓg′Γ, and for each h the coefficient c(g, g′; h) is equal to the number of pairs (gᵢ, gⱼ′) such that gᵢgⱼ′ ∈ Γh. The coefficients c(g, g′; h) can also be expressed in the form

  c(g, g′; h) = ν(g, g′; h)μ(g′)μ(h)⁻¹,

where ν(g, g′; h) is the number of elements gᵢ such that gᵢg′ ∈ ΓhΓ, and μ(g′), μ(h) are the indices (1.8).

PROOF. Let D′ denote the submodule of D consisting of all finite linear combinations of elements of the form (1.11) with coefficients in Z. Any nonzero element t ∈ D can be written in the form

(1.12) t = Σᵢ₌₁^μ aᵢ(Γgᵢ),

where all of the coefficients aᵢ are nonzero and all of the left cosets Γgᵢ are pairwise distinct. We then call μ = μ(t) the length of t, and we prove that t ∈ D′ by induction on μ. If μ = 1, then t = a(Γg). Since t ∈ D, it follows that tγ = t for all γ ∈ Γ, i.e., a(Γgγ) = a(Γg) for γ ∈ Γ, and hence Γg = ΓgΓ and t = a(g) ∈ D′. Now suppose that μ > 1, and we have already verified that all elements of D of length less than μ are contained in D′. Let t be an element of D of the form (1.12) that has length μ. Since tγ = t for γ ∈ Γ, it follows that, if the left coset (Γgᵢ) appears in (1.12), then all of the left cosets (Γgᵢγ) for γ ∈ Γ appear in (1.12) with the same coefficient. By Lemma 1.2, every left coset in the double coset ΓgᵢΓ can be written in the form Γgᵢγ for some γ ∈ Γ. Thus, all left cosets in the decomposition of the double coset ΓgᵢΓ appear in (1.12) with coefficient aᵢ. Hence, the length of the element t − aᵢ(gᵢ) is less than μ. By the induction assumption, t − aᵢ(gᵢ) ∈ D′, and so t ∈ D′. The first part of the lemma is proved.

By definition, we have (g) = Σᵢ(Γgᵢ), (g′) = Σⱼ(Γgⱼ′), and

  (g)(g′) = Σ_{i,j} (Γgᵢgⱼ′).

Since all of the products gᵢgⱼ′ obviously lie in the set ΓgΓg′Γ, it follows from what was just proved that the product (g)(g′) can also be written in the form

  Σ_h c(g, g′; h)(h) = Σ_h c(g, g′; h) Σ_k (Γh_k)

with certain coefficients c(g, g′; h). If we equate coefficients of (Γh) in these two left coset decompositions of the product (g)(g′), we find that c(g, g′; h) is equal to the number of pairs (gᵢ, gⱼ′) such that Γgᵢgⱼ′ = Γh, i.e., gᵢgⱼ′ ∈ Γh. From what we proved before it follows that c(g, g′; h) depends only on the double cosets of g, g′, and h. If we sum the numbers c(g, g′; h) over all left cosets in ΓhΓ, we find that μ(h)c(g, g′; h) is equal to the number of pairs (gᵢ, gⱼ′) such that gᵢgⱼ′ ∈ ΓhΓ. Taking the set of g′γⱼ with γⱼ ∈ Γ (see Lemma 1.2) as our set of representatives gⱼ′, we see that the last number is equal to the product of μ(g′) with the number of elements gᵢ for which gᵢg′ ∈ ΓhΓ. □
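In a finite group every pair (Γ, S = G) is trivially a Hecke pair, so the two expressions for the coefficients can be compared by direct enumeration. The following sketch is our own illustration, not taken from the book: in G = S₄ with Γ the stabilizer of a letter it checks, for every double coset ΓhΓ ⊂ ΓgΓg′Γ, that c(g, g′; h)μ(h) = ν(g, g′; h)μ(g′).

```python
from itertools import permutations

G = list(permutations(range(4)))                  # the finite group S_4
compose = lambda a, b: tuple(a[b[i]] for i in range(4))
Gamma = [p for p in G if p[3] == 3]               # the subgroup fixing the letter 3

double_coset = lambda g: {compose(x, compose(g, y)) for x in Gamma for y in Gamma}
left_cosets = lambda s: {frozenset(compose(x, h) for x in Gamma) for h in s}
mu = lambda g: len(left_cosets(double_coset(g)))  # the index (1.8)

g, gp = (0, 1, 3, 2), (3, 1, 2, 0)                # two elements outside Gamma
reps  = [min(c) for c in left_cosets(double_coset(g))]    # the g_i
repsp = [min(c) for c in left_cosets(double_coset(gp))]   # the g'_j
mug = mu(gp)

checks = []
remaining = {compose(a, b) for a in double_coset(g) for b in double_coset(gp)}
while remaining:                                  # h runs over the double cosets in Gamma g Gamma g' Gamma
    h = min(remaining)
    dch = double_coset(h)
    remaining -= dch
    coset_h = frozenset(compose(x, h) for x in Gamma)
    c  = sum(1 for a in reps for b in repsp if compose(a, b) in coset_h)
    nu = sum(1 for a in reps if compose(a, gp) in dch)
    checks.append((c, nu, mu(h)))
    assert c * mu(h) == nu * mug                  # c(g,g';h) = nu(g,g';h) mu(g') mu(h)^{-1}
```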

Let G and Ḡ be multiplicative groups, let Γ be a subgroup of G, and let

(1.13) P: Ḡ → G,  δ: Γ → Ḡ

be a group epimorphism and a group monomorphism that satisfy the conditions: P(δ(γ)) = γ for any γ ∈ Γ, and Ker P is contained in the center of Ḡ. Note that we already encountered these conditions in Chapter 2 when we studied modular forms of half-integer weight. For any Hecke pair (Γ, S) for G we define a new pair

(1.14) (Γ̄, S̄),  where Γ̄ = δ(Γ) and S̄ = P⁻¹(S),

and we examine the conditions under which (Γ̄, S̄) is a Hecke pair for Ḡ.

To do this, we consider the map

(1.15) ρ = ρ_g: Γ_(g) → Ḡ,

where g is an arbitrary element of G, that is defined for γ ∈ Γ_(g) by the equality

(1.16) ξγ̄ξ⁻¹ = δ(gγg⁻¹) ρ(γ),

where ᾱ = δ(α) for α ∈ Γ, and ξ ∈ Ḡ is any P-preimage of g. Since Ker P is contained in the center of Ḡ, it is clear that ρ(γ) does not depend on the choice of ξ; moreover, ρ(γ) belongs to the center of Ḡ.

LEMMA 1.6. The map ρ_g (g ∈ G) is a homomorphism, and for any γ₁, γ₂ ∈ Γ it satisfies the relation

(1.17) ρ_{γ₁gγ₂}(γ₂⁻¹γγ₂) = ρ_g(γ)  (γ ∈ Γ_(g)).

PROOF. The proof of the first part of the lemma is similar to the proof of Lemma 3.4 of Chapter 2. As for (1.17), we first note that the right side is in fact well defined, because, by (1.6), γ′ = γ₂⁻¹γγ₂ ∈ Γ_(g′) for g′ = γ₁gγ₂ and γ ∈ Γ_(g). We now use the definition of the map ρ_{g′} and choose ξ′ ∈ Ḡ to be the product γ̄₁ξγ̄₂. We then have

  ξ′γ̄′(ξ′)⁻¹ = δ(g′γ′(g′)⁻¹) ρ_{g′}(γ′) = γ̄₁(δ(gγg⁻¹) ρ_{g′}(γ′))γ̄₁⁻¹,

since g′γ′(g′)⁻¹ = γ₁(gγg⁻¹)γ₁⁻¹, and the element ρ_{g′}(γ′) is in the center of the group Ḡ. On the other hand, since γ̄′ = γ̄₂⁻¹γ̄γ̄₂, using the definition of the map ρ_g, we have

  ξ′γ̄′(ξ′)⁻¹ = γ̄₁(ξγ̄ξ⁻¹)γ̄₁⁻¹ = γ̄₁(δ(gγg⁻¹) ρ_g(γ))γ̄₁⁻¹,

which implies that ρ_g(γ) = ρ_{g′}(γ′). □

We shall call ρ the lifting homomorphism from G to Ḡ. Using this homomorphism, we can easily formulate a condition under which (Γ̄, S̄) is a Hecke pair.

LEMMA 1.7. Let (Γ, S) be a Hecke pair for the group G, which is related to the group Ḡ by the relations (1.13). If the kernel of the homomorphism ρ = ρ_g has finite index in Γ for every g ∈ S, then (Γ̄, S̄) is a Hecke pair.
PROOF. We first prove the equality

(1.18) δ(Ker ρ) = Γ̄_(ξ) = Γ̄ ∩ ξ⁻¹Γ̄ξ,  where ξ ∈ Ḡ, P(ξ) = g.

If γ ∈ Ker ρ, then by (1.16) we have

(1.19) δ(gγg⁻¹) = ξγ̄ξ⁻¹.

Since γ ∈ Γ_(g), it follows that gγg⁻¹ ∈ Γ, and hence both sides of (1.19) are contained in the group Γ̄ = δ(Γ). Using this and the fact that γ̄ ∈ Γ̄, we see that δ(Ker ρ) ⊂ Γ̄_(ξ). We now prove the reverse inclusion. Let γ̄ ∈ Γ̄_(ξ), i.e., γ̄ = δ(γ) ∈ Γ̄ and γ̄ = ξ⁻¹γ̄₁ξ, where γ̄₁ = δ(γ₁) ∈ Γ̄. Then γ ∈ Γ_(g), since γ = g⁻¹γ₁g, and by (1.16) we have ξγ̄ξ⁻¹ = δ(gγg⁻¹)ρ(γ). If we note that ξγ̄ξ⁻¹ = γ̄₁ = δ(gγg⁻¹), we find that ρ(γ) = 1, and so γ̄ ∈ δ(Ker ρ). This proves (1.18).

We now prove that (Γ̄, S̄) is a Hecke pair. Since Γ̄ ⊂ S̄, by (1.9) it suffices to verify that

(1.20) [Γ̄ : Γ̄_(ξ)] < ∞ and [ξ⁻¹Γ̄ξ : Γ̄_(ξ)] < ∞ for every ξ ∈ S̄.

Since δ: Γ → Γ̄ is an isomorphism of groups and, by the hypothesis of the lemma, Ker ρ has finite index in Γ, using (1.18) we have

  [Γ̄ : Γ̄_(ξ)] = [Γ̄ : δ(Ker ρ)] = [Γ : Ker ρ] < ∞.

Finiteness of the second index in (1.20) follows from the equalities

  [ξ⁻¹Γ̄ξ : Γ̄_(ξ)] = [Γ̄ : ξΓ̄_(ξ)ξ⁻¹] = [Γ̄ : Γ̄_(ξ⁻¹)]

and from the previous argument. □

If (Γ, S) is a Hecke pair that satisfies the conditions in Lemma 1.7, we let

(1.21) D̄(Γ, S) = D(Γ̄, S̄)

denote the Hecke ring of the pair (1.14), and we call this ring the Hecke ring obtained by lifting of the ring D(Γ, S).
In order to clarify how the Hecke ring D̄(Γ, S) differs from the original ring D(Γ, S), we look at the relation between the partition of the double coset Γ̄ξΓ̄ into Γ̄-left cosets and the partition of the double coset ΓgΓ, where g = P(ξ) ∈ S, into Γ-left cosets. Suppose we are given the partitions

(1.22) Γ = ⋃ᵢ Γ_(g)γᵢ  and  Γ_(g) = ⋃ⱼ (Ker ρ_g)βⱼ.

Since δ gives an imbedding of Γ in Ḡ, it follows from (1.22) and (1.18) that we have the partition

(1.23) Γ̄ = ⋃_{i,j} Γ̄_(ξ) β̄ⱼ γ̄ᵢ,

which, in conjunction with Lemma 1.2, gives us the corresponding partition of the double coset Γ̄ξΓ̄:

(1.24) Γ̄ξΓ̄ = ⋃_{i,j} Γ̄ξ β̄ⱼ γ̄ᵢ.

On the other hand, from (1.22) and Lemma 1.2 we also have the partition

(1.25) ΓgΓ = ⋃ᵢ Γgγᵢ,

which, when compared to (1.24), shows the similarities and differences between the partitions of the double cosets Γ̄ξΓ̄ and ΓgΓ, and the role played by the lifting homomorphism ρ. Using (1.24) and (1.25), we obtain the following result.
LEMMA 1.8. The equality Ker ρ_g = Γ_(g) holds if and only if the map

  P: Γ̄ξΓ̄ → ΓgΓ,  where g = P(ξ),

induces a one-to-one correspondence between the Γ̄-left cosets in Γ̄ξΓ̄ and the Γ-left cosets in ΓgΓ.

3. The imbedding ε. We now examine the connection between the Hecke rings corresponding to different Hecke pairs for the same group G. Let (Γ, S) and (Γ₀, S₀) be two Hecke pairs. Suppose that the following conditions hold:

(1.26) Γ₀ ⊂ Γ,  S ⊂ ΓS₀,  and  Γ ∩ S₀·S₀⁻¹ ⊂ Γ₀.

According to the second of these conditions, every left coset Γg, where g ∈ S, contains an element g₀ ∈ S₀. If we now set ε((Γg)) = (Γ₀g₀), then, by the third condition in (1.26), we see that (Γ₀g₀) ∈ L(Γ₀, S₀) does not depend on the choice of g₀. The first condition in (1.26) shows that the map ε takes distinct Γ-left cosets to distinct Γ₀-left cosets. Thus, if we extend ε by Z-linearity onto all of L(Γ, S), we obtain an imbedding of this module into L(Γ₀, S₀).
PROPOSITION 1.9. Suppose that the Hecke pairs (Γ, S) and (Γ₀, S₀) satisfy (1.26). Then the restriction of ε to the Hecke ring D(Γ, S) is a monomorphism from this ring to the Hecke ring D(Γ₀, S₀):

(1.27) ε: D(Γ, S) → D(Γ₀, S₀).

If, in addition,

(1.28) S₀ ⊂ S  and  μ_Γ(g) = μ_{Γ₀}(g) for all g ∈ S₀,

where μ denotes the index (1.8), then the map (1.27) is an isomorphism of rings.

PROOF. The first part follows directly from the definitions and the assumption that Γ₀ ⊂ Γ. To prove the second part, by Lemma 1.5 it suffices to verify that under our assumptions

(1.29) ε((g)_Γ) = (g)_{Γ₀}  for g ∈ S₀.

Let γ₁, ..., γ_μ, where μ = μ_{Γ₀}(g), be a set of left coset representatives of Γ₀ modulo Γ₀ ∩ g⁻¹Γ₀g. Then, by the definition of (g)_{Γ₀} and Lemma 1.2, we have

  (g)_{Γ₀} = Σᵢ₌₁^μ (Γ₀gγᵢ).

On the other hand, the elements gγ₁, ..., gγ_μ all lie in ΓgΓ and belong to different left Γ-cosets, since if we had gγᵢ = δgγⱼ with δ ∈ Γ it would follow that δ ∈ Γ ∩ S₀·S₀⁻¹ ⊂ Γ₀, and hence i = j and δ = e. By (1.28) and Lemma 1.2, the number of these elements is equal to the number of left Γ-cosets in ΓgΓ. Hence,

  (g)_Γ = Σᵢ₌₁^μ (Γgγᵢ).

Then, by the definition of ε, we have ε((g)_Γ) = (g)_{Γ₀}. □

Again suppose that the Hecke pairs (Γ, S) and (Γ₀, S₀) are related as in (1.26), and let γ be an arbitrary element of Γ. We further suppose that S₀ ⊂ S, and we consider the commutative diagram

(1.30)
  L(Γ, S)  --ε-->  L(Γ₀, S₀)
     |                 |
  L(Γ, S)  --ε-->  L(Γ₀, S₀),

where the vertical arrows denote the Z-linear homomorphisms that take (Γg) ∈ L(Γ, S) and (Γ₀g₀) ∈ L(Γ₀, S₀), respectively, to

  (Γg)·γ = (Γgγ)  and  (Γ₀g₀)·γ = (Γ₀g₀′),

where g₀′ is any element of S₀ ∩ Γg₀γ. From the inclusions S₀ ⊂ S and Γ ⊂ S and the second property in (1.26) we find that g₀γ ∈ S, and this product can be written in the form γ′g₀′ with γ′ ∈ Γ and g₀′ ∈ S₀. From this, together with the third property in (1.26), it follows that g₀′ ∈ S₀ ∩ Γg₀γ, and the left coset Γ₀g₀′ does not depend on the choice of g₀′.
LEMMA 1.10. Suppose that the Hecke pairs (Γ, S) and (Γ₀, S₀) satisfy (1.26), and S₀ ⊂ S. Then the map ε in the diagram (1.30) is an isomorphism between L(Γ, S) and L(Γ₀, S₀), and the ε-image of the Hecke ring D(Γ, S) coincides with the set of t ∈ L(Γ₀, S₀) such that t·γ = t for all γ ∈ Γ.

PROOF. From the inclusion S₀ ⊂ S it follows that ε is an epimorphism. Since ε is also an imbedding, it must in fact be an isomorphism. The second part of the lemma follows from the commutativity of the diagram (1.30) and the definition of the Hecke ring D(Γ, S). □

4. The anti-isomorphism j.

PROPOSITION 1.11. Let (Γ, S) be a Hecke pair for the group G. Then the pair (Γ, S⁻¹), where S⁻¹ = {g⁻¹; g ∈ S}, is also a Hecke pair, and the Z-linear map of the corresponding Hecke rings

(1.31) j: D(Γ, S) → D(Γ, S⁻¹),

which is defined on elements of the form (1.11) by setting

  j((g)_Γ) = (g⁻¹)_Γ  (g ∈ S),

is an anti-isomorphism of rings. In particular, if S is a group, then j is an anti-automorphism of the Hecke ring D(Γ, S).

We first prove a lemma.
We first prove a lemma.
LEMMA 1.12. Let Γ be a subgroup of G, and let Γ̃ be the commensurator of Γ in G. Then the map

  g → λ(g) = μ(g)μ(g⁻¹)⁻¹,  g ∈ Γ̃,

where μ(h) = [Γ : Γ_(h)], is a homomorphism from Γ̃ to the multiplicative group of rational numbers.
PROOF. We let X denote the set of all subgroups of G that are commensurable with Γ. If Γ₁, Γ₂ ∈ X, then there exists Γ′ ∈ X having finite index both in Γ₁ and in Γ₂ (for example, by Lemma 1.3 we can take Γ′ = Γ₁ ∩ Γ₂). We then set

  λ(Γ₁/Γ₂) = [Γ₁ : Γ′][Γ₂ : Γ′]⁻¹.

It is easy to see that this number does not depend on the choice of Γ′, and it satisfies the relations

(1.32) λ(Γ₁/Γ₂)λ(Γ₂/Γ₃) = λ(Γ₁/Γ₃)  (Γᵢ ∈ X),
       λ(g⁻¹Γ₁g / g⁻¹Γ₂g) = λ(Γ₁/Γ₂)  (Γᵢ ∈ X, g ∈ G).
102 3. HECKE RINGS

Let g ∈ Γ̃ and Γ′ ∈ X; we set λ′(g) = λ(Γ′/g⁻¹Γ′g). It is easily verified that λ′(g) does not depend on the choice of Γ′ ∈ X. Using (1.32), we find that, on the one hand,

  λ′(g₁g₂) = λ(Γ′ / g₂⁻¹g₁⁻¹Γ′g₁g₂) = λ(Γ′ / g₁⁻¹Γ′g₁) λ(g₁⁻¹Γ′g₁ / g₂⁻¹(g₁⁻¹Γ′g₁)g₂) = λ′(g₁)λ′(g₂)

for any g₁, g₂ ∈ Γ̃, and, on the other hand, for any g ∈ Γ̃

  λ′(g) = λ(Γ / g⁻¹Γg) = [Γ : Γ_(g)][g⁻¹Γg : Γ_(g)]⁻¹ = μ(g)μ(g⁻¹)⁻¹ = λ(g),

since Γ_(g) = Γ ∩ g⁻¹Γg and [g⁻¹Γg : Γ_(g)] = [Γ : gΓ_(g)g⁻¹] = [Γ : Γ_(g⁻¹)] = μ(g⁻¹). Thus λ = λ′ is a homomorphism. □

PROOF OF PROPOSITION 1.11. Since the elements of the form (1.11) form a Z-basis of the module D(Γ, S), it suffices to prove that for any g₁, g₂ ∈ S one has

  j((g₁)_Γ (g₂)_Γ) = (g₂⁻¹)_Γ (g₁⁻¹)_Γ.

By Lemma 1.5 and the definition of j, this relation will be proved if we show that for any h ∈ Γg₁Γg₂Γ

(1.33) c(g₁, g₂; h) = c(g₂⁻¹, g₁⁻¹; h⁻¹).

Since the map g → λ(g) in Lemma 1.12 is trivial on the group Γ, Lemma 1.12 implies that

  λ(h) = λ(g₁)λ(g₂),  i.e.,  μ(h)μ(h⁻¹)⁻¹ = μ(g₁)μ(g₁⁻¹)⁻¹μ(g₂)μ(g₂⁻¹)⁻¹.

From this it follows that the equality (1.33) is equivalent to the relation

(1.34) ν(g₂⁻¹, g₁⁻¹; h⁻¹) = μ(g₁)⁻¹μ(g₂⁻¹) ν(g₁, g₂; h).

From the definition of ν(g₁, g₂; h) and the decomposition (1.7) it follows that ν(g₁, g₂; h) is equal to the number of elements in the set

  {γ ∈ Γ_(g₁)\Γ; g₁γg₂ ∈ ΓhΓ}.

It is easy to see that for γ ∈ Γ the double coset Γg₁γg₂Γ depends only on Γ_(g₁)γΓ_(g₂⁻¹), and for γ ∈ Γ and t ∈ Γ_(g₂⁻¹) we have Γ_(g₁)γt = Γ_(g₁)γ if and only if t ∈ γ⁻¹Γ_(g₁)γ. Thus, ν(g₁, g₂; h) can be written in the form

(1.35) ν(g₁, g₂; h) = Σ_γ [Γ_(g₂⁻¹) : Γ_(g₂⁻¹) ∩ γ⁻¹Γ_(g₁)γ],

where γ runs through those double cosets in Γ_(g₁)\Γ/Γ_(g₂⁻¹) with g₁γg₂ ∈ ΓhΓ. Similarly,

  ν(g₂⁻¹, g₁⁻¹; h⁻¹) = Σ_γ [Γ_(g₁) : Γ_(g₁) ∩ γΓ_(g₂⁻¹)γ⁻¹].
§I. ABSTRACT HECKE RINGS 103

Using the notation λ(Γ₁/Γ₂) (see the proof of Lemma 1.12) and the properties (1.32) of this symbol, we obtain

  ν(g₂⁻¹, g₁⁻¹; h⁻¹) = Σ_γ λ(Γ_(g₁) / Γ_(g₁) ∩ γΓ_(g₂⁻¹)γ⁻¹)
    = Σ_γ λ(γ⁻¹Γ_(g₁)γ / γ⁻¹Γ_(g₁)γ ∩ Γ_(g₂⁻¹))
    = Σ_γ λ(γ⁻¹Γ_(g₁)γ / Γ_(g₁)) λ(Γ_(g₁)/Γ) λ(Γ / Γ_(g₂⁻¹)) λ(Γ_(g₂⁻¹) / Γ_(g₂⁻¹) ∩ γ⁻¹Γ_(g₁)γ)
    = Σ_γ λ(γ) μ(g₁)⁻¹ μ(g₂⁻¹) λ(Γ_(g₂⁻¹) / Γ_(g₂⁻¹) ∩ γ⁻¹Γ_(g₁)γ)
    = μ(g₁)⁻¹ μ(g₂⁻¹) ν(g₁, g₂; h),

where all of the summations are over the same γ as in (1.35). □

The next lemma shows that the anti-isomorphism j is compatible with the monomorphism ε that was defined in the previous subsection.

LEMMA 1.13. Let (Γ, S) and (Γ₀, S₀) be two Hecke pairs satisfying (1.26). Suppose that the Hecke pairs (Γ, S⁻¹) and (Γ₀, S₀⁻¹) also satisfy (1.26). Then the following diagram is commutative:

  D(Γ, S)   --ε-->  D(Γ₀, S₀)
     |j                 |j
  D(Γ, S⁻¹) --ε-->  D(Γ₀, S₀⁻¹).

PROOF. From the definition of ε it follows that ε((g)_Γ) = Σᵢ (gᵢ)_{Γ₀}, where the summation is over representatives gᵢ of the double Γ₀-cosets in ΓgΓ ∩ S₀. The map gᵢ → gᵢ⁻¹ gives a one-to-one correspondence between the double Γ₀-cosets in ΓgΓ ∩ S₀ and in Γg⁻¹Γ ∩ S₀⁻¹. We thus have

  jε((g)_Γ) = Σᵢ (gᵢ⁻¹)_{Γ₀} = ε((g⁻¹)_Γ) = εj((g)_Γ).

The lemma follows from this relation and Lemma 1.5. □

5. Representations in spaces of automorphic forms. We return to the situation described at the beginning of the section: a semigroup S acts on a set H, φ is an automorphy factor of S on H with values in a group T, V is a left T-module, and 𝔐 = 𝔐_φ(Γ) is the additive group of all Γ-automorphic forms of weight φ with values in V, where Γ ⊂ S is a subgroup. We further suppose that (Γ, S) is a Hecke pair, and D = D(Γ, S) is its Hecke ring. If t = Σᵢ aᵢ(Γgᵢ) ∈ D and F ∈ 𝔐, then, in the notation of Lemma 4.1(3) of Chapter 1, we set

(1.36) F|t = F|_φ t = Σᵢ aᵢ F|_φ gᵢ.

Using property (3) of Lemma 4.1 of Chapter 1 and (1.1), we see that F|_φ γᵢgᵢ = F|_φ gᵢ, where γᵢ ∈ Γ, and hence the function (1.36) does not depend on the choice of left coset representatives in t. Furthermore, if γ ∈ Γ, then (F|t)|γ = F|tγ = F|t, so that the function F|t also belongs to 𝔐. Thus, to every t ∈ D is associated a linear operator

  |t = |_φ t: 𝔐_φ(Γ) → 𝔐_φ(Γ),

which coincides with the operator (1.4) in the case t = (g)_Γ. Like the operator (1.4), this operator is called a Hecke operator. By Lemma 1.5, any such operator is a Z-linear combination of the operators (1.4).
PROPOSITION 1.14. The map

  t → |_φ t  (t ∈ D(Γ, S))

is a homomorphism from the Hecke ring D(Γ, S) to the endomorphism ring of the Z-module 𝔐_φ(Γ).

PROOF. The map is obviously linear. Hence, it suffices to prove that a product of elements of the Hecke ring is taken to the corresponding product of operators. If F ∈ 𝔐_φ(Γ) and t = Σᵢ aᵢ(Γgᵢ), t′ = Σⱼ bⱼ(Γhⱼ) ∈ D(Γ, S), then, by Lemma 4.1(3) of Chapter 1, we have

  F|tt′ = Σ_{i,j} aᵢbⱼ F|gᵢhⱼ = Σⱼ bⱼ (Σᵢ aᵢ F|gᵢ)|hⱼ = F|t|t′. □

From Proposition 1.14 it follows that any algebraic relation between elements of the Hecke ring is also valid for the corresponding Hecke operators.
6. Hecke algebras over a commutative ring. Let (Γ, S) be a Hecke pair, and let A be an arbitrary commutative ring with unit. Just as in subsection 2 for the case of Z, we can define the free A-module L_A = L_A(Γ, S) whose generators over A are the symbols (Γg) (g ∈ S), one for each left coset Γg ⊂ S, and we can define the submodule D_A = D_A(Γ, S) consisting of all Γ-invariant elements. Again, the multiplication

  (Σᵢ aᵢ(Γgᵢ))(Σⱼ bⱼ(Γhⱼ)) = Σ_{i,j} aᵢbⱼ(Γgᵢhⱼ)

of elements in D_A does not depend on the choice of left coset representatives, and it makes D_A into an associative ring with unit, called the Hecke algebra of the pair (Γ, S) over A. All of the results of subsections 2–4 carry over without any change to the Hecke algebras D_A(Γ, S). The results of subsections 1 and 5 concerning representations of Hecke rings also carry over (along with their proofs), if we suppose that V is also an A-module and the actions on V of the group T and the ring A commute with one another. Clearly, if Z ⊂ A, then

(1.37) D_A(Γ, S) = D(Γ, S) ⊗_Z A

(tensor product of rings).


PROBLEM 1.15. Let Γ be a normal subgroup of finite index in the group S. Show that (Γ, S) is a Hecke pair, and the Hecke algebra D_A(Γ, S) of the Hecke pair (Γ, S) over the ring A is isomorphic to the group ring of the quotient group Γ\S over A.
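The point of this problem is that for a normal subgroup each double coset ΓgΓ collapses to the single left coset Γg, so the basis elements (1.11) correspond to the elements of the quotient group. A quick check of this collapse in a finite example (our illustration, not part of the book) with the Klein four-group V, which is normal in S₄:

```python
from itertools import permutations

G = list(permutations(range(4)))                 # the group S = S_4
compose = lambda a, b: tuple(a[b[i]] for i in range(4))
V = {(0, 1, 2, 3), (1, 0, 3, 2), (2, 3, 0, 1), (3, 2, 1, 0)}   # Klein four-group, normal in S_4

# for a normal subgroup, each double coset V g V equals the single coset V g
for g in G:
    left = {compose(x, g) for x in V}
    double = {compose(x, compose(g, y)) for x in V for y in V}
    assert left == double

cosets = {frozenset(compose(x, g) for x in V) for g in G}
assert len(cosets) == len(G) // len(V) == 6      # basis of D(V, S_4) ↔ V\S_4 ≅ S_3
```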

PROBLEM 1.16. Let (Γ, S) be a Hecke pair. Show that the A-linear map N from the Hecke algebra D_A(Γ, S) to A that is defined on elements of the form (1.11) by setting N((g)) = μ_Γ(g)e_A, where μ_Γ(g) is the index (1.8) and e_A is the unit of the ring A, is a ring homomorphism.

§2. Hecke rings for the general linear group


Our ultimate goal is to study the Hecke rings of the symplectic group and their representations in spaces of modular forms. To a large extent we do this by reducing the questions to the case of Hecke rings for the general linear group.

1. Global rings. We consider the groups

(2.1) Λ = Λⁿ = GLₙ(Z)  and  G = Gⁿ = GLₙ(Q),

where n ≥ 1 and Q is the field of rational numbers.

LEMMA 2.1. The commensurator of the subgroup Λ in the group G is G itself. In particular, (Λ, G) is a Hecke pair.

PROOF. If in every matrix of Λ we replace the entries by their residue classes modulo q, where q is a positive integer, we obtain a homomorphism of the group Λ to the finite group GLₙ(Z/qZ). The kernel of this homomorphism is the principal congruence subgroup of level q

(2.2) Λ(q) = {γ ∈ Λ; γ ≡ Eₙ (mod q)},

which is a normal subgroup of finite index in Λ.

Let g = t g₀, where t ∈ Q and g₀ is an integer matrix. It is easy to verify that gΛ(d)g⁻¹ ⊂ Λ and g⁻¹Λ(d)g ⊂ Λ, where d = |det g₀|. This means that Λ_(g) = Λ ∩ g⁻¹Λg contains the subgroup Λ(d), and so it has finite index in Λ. The intersection Λ_(g⁻¹) also has finite index in Λ; hence, the group g⁻¹Λ_(g⁻¹)g = Λ_(g) has finite index in the group g⁻¹Λg. All of this implies that g⁻¹Λg is commensurable with Λ for any element g ∈ G. □
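The key step, gΛ(d)g⁻¹ ⊂ Λ with d = |det g₀|, rests on the identity g₀γg₀⁻¹ = g₀ γ adj(g₀)/det g₀, whose entries are integral when γ ≡ Eₙ (mod d). The following sketch is our own numerical illustration (the matrix g₀ is arbitrary test data, not from the book), for n = 2:

```python
def mul(A, B):                      # product of 2 x 2 integer matrices
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

g0  = [[2, 1], [0, 3]]              # integer part of g (a scalar factor t cancels)
det = g0[0][0] * g0[1][1] - g0[0][1] * g0[1][0]
adj = [[g0[1][1], -g0[0][1]], [-g0[1][0], g0[0][0]]]   # adjugate: g0 * adj = det * E_2
d   = abs(det)                      # here d = 6

U = [[1, d], [0, 1]]                # elementary matrices lying in Lambda(d)
L = [[1, 0], [d, 1]]

integral_conjugates = []
for gamma in (U, L, mul(U, L), mul(L, U), mul(U, mul(L, U))):
    M = mul(g0, mul(gamma, adj))    # this is det * (g0 gamma g0^{-1})
    assert all(M[i][j] % det == 0 for i in range(2) for j in range(2))
    integral_conjugates.append([[M[i][j] // det for j in range(2)] for i in range(2)])
```

Each conjugate g₀γg₀⁻¹ comes out integral with determinant ±1, i.e., it lies in Λ, as the lemma asserts.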

The main object of study in this section will be the Hecke ring

(2.3) H = Hⁿ = D_Q(Λⁿ, Gⁿ) = D(Λⁿ, Gⁿ) ⊗_Z Q

of the Hecke pair (Λⁿ, Gⁿ) over the field Q and over subrings of Q. By Lemma 1.5, we can take as a Q-basis of H the elements (g) = (g)_Λ of the form (1.11), one for each double Λ-coset ΛgΛ of the group G. In order to visualize this set of double cosets, we prove that there is a special type of diagonal representative in each double coset.

LEMMA 2.2. Every double coset ΛgΛ, where Λ = GLₙ(Z) and g ∈ Gⁿ = GLₙ(Q), has one and only one representative of the form

(2.4) ed(g) = diag(d₁, ..., dₙ),  where dᵢ > 0, dᵢ | dᵢ₊₁.
PROOF. Let d₁ be the greatest common divisor of the entries of the matrix g, i.e., d₁ is the positive rational number such that g = d₁g₁, where g₁ is an integer matrix with relatively prime entries. Using induction on the minimum δ = δ(g₁) of the greatest common divisors of the columns of such matrices g₁, we prove that the double coset Λg₁Λ of such a matrix contains a representative of the form (1 0; 0 g₂), where g₂ is an

(n − 1) × (n − 1) integer matrix. First suppose that δ = 1, i.e., g₁ contains a column with relatively prime entries. If we multiply g₁ on the right by a suitable permutation matrix, an operation which does not cause us to leave the double coset, we may assume that the first column of g₁ has relatively prime entries. By Lemma 3.8 of Chapter 1, we can then multiply g₁ on the left by a suitable matrix in Λ that reduces g₁ to the form

  (1 l; 0 g₂) = (1 0; 0 g₂)(1 l; 0 E_{n−1}),  where l ∈ M_{1,n−1},

and this proves the claim in the case δ = 1. Now suppose that δ > 1, and the claim has already been proved for all g₁ with δ(g₁) < δ. If we are given a matrix g₁ with δ(g₁) = δ, by permuting the columns we may assume that the greatest common divisor of the entries in the first column is equal to δ. Again using Lemma 3.8 of Chapter 1, we multiply g₁ on the left by a suitable matrix in Λ so as to reduce it to the form

  (δ l; 0 h) = (δ l + δl′; 0 h)(1 −l′; 0 E_{n−1}),

where l ∈ M_{1,n−1}, l′ is an arbitrary (n − 1)-dimensional integer row, and h is an (n − 1) × (n − 1) integer matrix. The row l′ can be chosen in such a way that all of the entries in the row l″ = l + δl′ are positive and do not exceed δ. Then the matrix g₁′ = (δ l″; 0 h) ∈ Λg₁Λ satisfies the inequality δ(g₁′) < δ (the inequality δ(g₁′) ≤ δ is obvious; equality here would mean that all of the columns of g₁′ were divisible by δ, but that contradicts the assumption that the entries are relatively prime). By the induction assumption, our claim holds for g₁′, and hence it holds for g₁.
Returning to the proof of the lemma, we see that the double coset ΛgΛ = d₁Λg₁Λ contains a representative of the form g′ = (d₁ 0; 0 d₁g₂), where d₁ is the greatest common divisor of the entries of g and g₂ is an (n − 1) × (n − 1) integer matrix. If g₂ varies within the double coset Λⁿ⁻¹g₂Λⁿ⁻¹, then clearly g′ remains in the double coset ΛgΛ. Hence, we can continue this diagonalization process, applying it to the matrix d₁g₂, and so on. We finally obtain a representative of the form (2.4).

If D = diag(d₁, ..., dₙ) and D′ = diag(d₁′, ..., dₙ′) are two diagonal matrices of the form (2.4) and D′ = λ₁Dλ₂, where λᵢ ∈ Λ, then it follows from the Cauchy–Binet formula that all of the r × r minors of D′, where 1 ≤ r ≤ n, and, in particular, d₁′···d_r′, are divisible by d₁···d_r. Similarly, d₁···d_r is divisible by d₁′···d_r′. Since the diagonal entries are positive, this implies that d₁′···d_r′ = d₁···d_r (1 ≤ r ≤ n), and so D′ = D. □

We shall call ed(g) = diag(d_1,...,d_n) the matrix of elementary divisors of the
matrix g, and we shall call the numbers d_r = d_r(g) the elementary divisors of g. Using
an argument similar to the proof of uniqueness in the previous lemma, we see that the
product d_1···d_r is equal to the greatest common divisor of the r×r minors of g. In
particular,

(2.5)    d_1(g)···d_n(g) = |det g|.
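This description of the d_r can be checked by direct computation. The following Python sketch (our illustration; the function names and the brute-force cofactor expansion are ours, chosen only because the matrices involved are small) recovers the elementary divisors of an integer matrix from the greatest common divisors of its r×r minors:

```python
from itertools import combinations
from math import gcd

def int_det(M):
    """Determinant of a small integer matrix by cofactor expansion."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * int_det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def minor_gcd(M, r):
    """gcd of the absolute values of all r x r minors of the integer matrix M."""
    n = len(M)
    g = 0
    for rows in combinations(range(n), r):
        for cols in combinations(range(n), r):
            sub = [[M[i][j] for j in cols] for i in rows]
            g = gcd(g, abs(int_det(sub)))
    return g

def elementary_divisors(M):
    """d_1, ..., d_n, where d_1 ... d_r is the gcd of the r x r minors of M."""
    n = len(M)
    prods = [1] + [minor_gcd(M, r) for r in range(1, n + 1)]
    return [prods[r] // prods[r - 1] for r in range(1, n + 1)]
```

For example, for the matrix with rows (2, 4) and (6, 8) the gcd of the entries is 2 and |det| = 8, so the elementary divisors are 2 and 4, in agreement with (2.5).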
We now turn our attention to the multiplicative properties of the Hecke rings H^n.

THEOREM 2.3. The Hecke ring H^n, n ≥ 1, is commutative.
§2. HECKE RINGS FOR THE GENERAL LINEAR GROUP 107

PROOF. According to Lemma 1.5, it suffices to prove that

(g)(g') = (g')(g),  where g, g' ∈ G^n.

Since every double coset AgA (g ∈ G^n) contains a diagonal matrix, the coset is taken
to itself by the transpose: ᵗ(AgA) = A·ᵗg·A = AgA. This implies that the number of
left A-cosets in AgA is equal to the number of right cosets; but since each of the left
cosets intersects with every right coset, there exists a set of representatives g_1,...,g_μ
of the left cosets which is also a set of representatives of the right cosets. It is easy to
see that the set ᵗg_1,...,ᵗg_μ has the same property. Let g'_1,...,g'_ν be a similar set of
representatives for the double coset Ag'A. Then Lemma 1.5 implies that

$$(g)(g') = \sum_{AhA \subset AgAg'A} \nu'(g,g';h)\,\mu(h)^{-1}(h),$$

where ν'(g,g';h) = ν(g,g';h)μ(g') is equal to the number of pairs (i,j) such that
g_i·g'_j ∈ AhA. If we replace the representatives {g_i} and {g'_j} by {ᵗg_i} and {ᵗg'_j}, we
see that ν'(g,g';h) is also equal to the number of pairs (i,j) such that

ᵗg_i·ᵗg'_j = ᵗ(g'_j g_i) ∈ AhA = ᵗ(AhA),  i.e., g'_j g_i ∈ AhA.

Thus, ν'(g,g';h) = ν'(g',g;h), and

$$(g)(g') = \sum_{AhA \subset AgAg'A} \nu'(g',g;h)\,\mu(h)^{-1}(h) = (g')(g),$$

since the set AgAg'A, as a finite union of double A-cosets, coincides with its transpose
ᵗ(AgAg'A) = A·ᵗg'·A·ᵗg·A = Ag'AgA. □

The rule for multiplying double cosets becomes especially simple when one of the
cosets is proportional to the identity coset. In this case the definition of multiplication
in the Hecke ring immediately implies

LEMMA 2.4. The following relation holds in the Hecke ring H^n for any g ∈ G^n and
r ∈ Q*:

(2.6)    (rE_n)(g) = (g)(rE_n) = (rg).

PROPOSITION 2.5. Let g, g' ∈ G^n. Suppose the ratios d_n(g)/d_1(g) and d_n(g')/d_1(g'),
where d_r denotes the rth elementary divisor, are relatively prime. Then the following
equality holds in the Hecke ring H^n:

(2.7)    (g)(g') = (gg').

PROOF. Lemma 2.4 obviously implies that it suffices to prove the proposition in
the case when d_1(g) = d_1(g') = 1. Hence, we suppose that g and g' are integer
matrices, and the numbers d = |det g| and d' = |det g'| are relatively prime (see (2.5)
and (2.6)). From Lemma 1.5 it follows that

$$(g)(g') = \sum_{AhA \subset AgAg'A} c(g,g';h)(h),$$
108 3. HECKE RINGS

where c(g,g';h) are positive integers that depend only on the double A-cosets of g, g',
and h. Since Agg'A ⊂ AgAg'A, it follows that c(g,g';gg') ≥ 1, and the last relation
can be rewritten in the form

(2.8)    $(g)(g') = (gg') + \sum_{AhA \subset AgAg'A} c'(g,g';h)(h),$

where c'(g,g';h) are nonnegative integers that depend only on the double cosets of g,
g', and h. For m ≥ 1 we let

(2.9)    ED(m) = ED_n(m)

denote the set of all integer matrices of the form (2.4) having determinant m. By
Lemma 2.2, we may assume that the matrices g, g', and h in (2.8) belong to the sets
ED(d), ED(d'), and ED(dd'), respectively, since every h in (2.8) is an integer matrix
with |det h| = dd'. Since the ED_n(m) are obviously finite sets, we can define the
following elements of H^n:

(2.10)    $t(m) = t_n(m) = \sum_{g \in ED_n(m)} (g).$

Summing (2.8) over all g ∈ ED(d) and g' ∈ ED(d'), we obtain the relation

(2.11)    $t(d)t(d') = \sum_{g,g'} (gg') + \sum_{h \in ED(dd')} \Big(\sum_{g,g'} c'(g,g';h)\Big)(h).$

It is easy to see that for d prime to d' the map (g, g') → gg' gives a bijection of
ED(d) × ED(d') with ED(dd'). Hence, the first sum on the right in (2.11) is equal to
t(dd'). If we prove that t(d)t(d') = t(dd') for d prime to d', then it will follow from
(2.11) that the double sum on the right is equal to zero; hence, all of the coefficients
c'(g,g';h), since they are nonnegative, must equal zero. Thus, (2.8) would turn into
(2.7), and the proposition would be proved. In other words, to prove the proposition
it suffices to prove the following lemma, which is also of independent interest. □

LEMMA 2.6. Let d ≥ 1. Then the set

(2.12)    M_n(±d) = {g ∈ M_n; det g = ±d}

is the union of finitely many left cosets modulo the group A = A^n, and the element
t(d) = t_n(d) of the Hecke ring H^n can be written as a sum of the form

(2.13)    $t(d) = \sum_{g \in A\backslash M_n(\pm d)} (Ag).$

If d and d' are relatively prime and the matrices g and g' run through sets of representa-
tives of the left cosets A\M_n(±d) and A\M_n(±d'), respectively, then the product gg'
runs through a set of representatives of the left cosets A\M_n(±dd'). In particular,

(2.14)    t(d)t(d') = t(dd'),  if (d, d') = 1.



PROOF. Lemma 2.2 implies that the set M_n(±d) is the union of finitely many
double A-cosets, and each of these double cosets has exactly one representative in the
set ED(d). The first part of the lemma now follows from Lemma 2.1 and the definition
of t(d) and (g).
Suppose that d and d' are relatively prime, and {g_1,...,g_μ} and {g'_1,...,g'_ν} are
fixed sets of representatives of the left cosets A\M_n(±d) and A\M_n(±d'), respectively.
Each product g_i g'_j is clearly contained in M_n(±dd'). Suppose that two such products
belong to the same left A-coset:

g_{i_1} g'_{j_1} = λ g_i g'_j,  where λ ∈ A.

We set h = g_{i_1}^{-1} λ g_i = g'_{j_1}(g'_j)^{-1}. Then dh = d g_{i_1}^{-1} λ g_i and d'h = g'_{j_1}·d'(g'_j)^{-1} are
integer matrices; since d is prime to d', it follows that h is an integer matrix. Further-
more, det h = ±1. Thus, h ∈ A, so that g'_{j_1} = h g'_j ∈ A g'_j, and hence j_1 = j. Then
also λ g_i = g_{i_1}, and so i_1 = i. We have thus proved that the products g_i g'_j belong to
distinct left cosets of M_n(±dd') modulo the group A. Now suppose that Ag, where
g ∈ M_n(±dd'), is an arbitrary left coset of M_n(±dd') modulo A. Then Lemma 2.2
implies that g can be written in the form g = vw, where v ∈ M_n(±d), w ∈ M_n(±d').
The element w lies in some left coset A g'_j, i.e., w = λ_1 g'_j, where λ_1 ∈ A, and vλ_1 lies
in some left coset A g_i, i.e., vλ_1 = λ g_i, where λ ∈ A. Consequently, g = λ g_i g'_j, and the
left coset Ag contains the product g_i g'_j. The second part of the lemma is proved.
The relation (2.14) follows from what has already been proved and from the
definition of multiplication in Hecke rings. This proves Lemma 2.6, and hence also
Proposition 2.5. □
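The multiplicativity just proved can be tested numerically. Assuming the reduced representatives of Lemma 2.7 below (one upper-triangular matrix per left coset, with positive diagonal entries and each entry above the diagonal reduced modulo the diagonal entry of its column), the number of left cosets in A\M_n(±d) is a finite sum over ordered factorizations of d, and this count is multiplicative in d exactly as the lemma asserts. A Python sketch (the function names are ours):

```python
def divisors(m):
    return [c for c in range(1, m + 1) if m % c == 0]

def coset_count(n, d):
    """Number of reduced n x n integer matrices with determinant d; assuming
    Lemma 2.7 / Problem 2.10, this is the number of left cosets in A\\M_n(+-d)."""
    total = 0
    def rec(k, rem, acc):
        nonlocal total
        if k == n:
            if rem == 1:
                total += acc
            return
        for c in divisors(rem):
            # diagonal entry c in column k+1 leaves c choices for each of the
            # k entries above it, hence a factor c**k
            rec(k + 1, rem // c, acc * c ** k)
    rec(0, d, 1)
    return total
```

For coprime d and d' the counts multiply, matching (2.14); for d = d' = 2 and n = 2 they do not (9 against 7), consistent with the hypothesis (d, d') = 1 being necessary.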

The next lemma turns out to be useful for explicitly computing the left coset
decomposition of elements of the Hecke ring H^n.

LEMMA 2.7. Every left coset Ag, where g is an integer matrix in G^n, contains one
and only one reduced representative C = (c_{ij}) with

(2.15)    0 ≤ c_{ij} < c_{jj},  c_{ji} = 0  for 1 ≤ i < j ≤ n.

PROOF. We use induction on n to prove the existence of a reduced representative.
The assertion is obvious if n = 1. Suppose that n > 1, and the claim has already been
proved for n−1. If we apply Lemma 3.8 of Chapter 1 to the first column of g, we
find that Ag contains a representative of the form g' = $\begin{pmatrix} c_{11} & * \\ 0 & g_1 \end{pmatrix}$, where c_{11} > 0
and g_1 is an integer matrix in G^{n-1}. By the induction assumption, we have g_1 = εg_0,
where ε ∈ A^{n-1} and g_0 is a reduced matrix. It is then easy to see that there exists a row
l ∈ M_{1,n-1} such that $\begin{pmatrix} 1 & l \\ 0 & E_{n-1} \end{pmatrix}\begin{pmatrix} 1 & 0 \\ 0 & \varepsilon^{-1} \end{pmatrix} g'$ ∈ Ag is a reduced matrix. Finally,
uniqueness of the reduced representative easily follows using proof by contradiction.
□

The study of the global Hecke ring H reduces to the study of its local subrings
H_p, where p runs through all prime numbers. Let p be a prime. We set

(2.16)    G_p = G_p^n = G^n ∩ GL_n(Z[p^{-1}]),

where

(2.17)    Z[p^{-1}] = {ap^b ∈ Q; a, b ∈ Z}

is the ring of rational numbers that are integral outside p. Since A ⊂ G_p ⊂ G, we can
consider the Hecke ring

(2.18)    H_p = H_p^n = D_Q(A^n, G_p^n),

and this ring can be regarded as a subring of the Hecke ring H. The subrings H_p ⊂ H
for prime p are called the local Hecke rings of the group G.
THEOREM 2.8. The Hecke ring H^n is generated by the local subrings H_p as p runs
through the prime numbers.

PROOF. Given a nonzero rational number r and a prime p, we let ν_p(r) denote the
exponent of p that occurs in the prime factorization of r. If g ∈ G^n, we define the
matrix ed_p(g) of elementary p-divisors of g by setting

(2.19)    ed_p(g) = diag(p^{α_1},...,p^{α_n})  with  α_i = ν_p(d_i(g)),

where d_i(g) are the elementary divisors of g. The numbers p^{α_i} are called the elementary
p-divisors of g. For fixed g, the matrices ed_p(g) are clearly equal to the identity matrix
for all but finitely many p; furthermore,

(2.20)    $ed(g) = \prod_p ed_p(g) \qquad (g \in G),$

where the product is taken over all prime numbers. Since g ∈ A·ed(g)·A, this product
formula implies that g can be written in the form

(2.21)    g = $\prod_p g_p$,  where ed(g_p) = ed_p(g)

and g_p = E_n for all but finitely many p. It now follows from Proposition 2.5 that
we have the expansion (g) = Π_p (g_p) for the corresponding double cosets. Since
(g_p) ∈ H_p, and H consists of finite linear combinations of elements of the form (g)
(g ∈ G), we conclude that every element in H is a finite sum of finite products of
elements of the subrings H_p. □

PROBLEM 2.9. Let g, g' ∈ G^n. Suppose that the numbers d_n(g)/d_1(g) and
d_n(g')/d_1(g') are relatively prime. Show that d_i(gg') = d_i(g)d_i(g') for i = 1,...,n.

PROBLEM 2.10. Show that the set of reduced integer matrices C ∈ M_n with det C =
d, where d ∈ N, can be taken as a set of representatives of the left cosets A\M_n(±d).
Conclude from this that the zeta-function of the ring H^n, which is defined as the
Dirichlet series

$$Z(s, N) = \sum_{m=1}^{\infty} \frac{N(t(m))}{m^s},$$

where N: H^n → Q is the homomorphism in Problem 1.16 and Re s > n, is equal to
the product ζ(s)ζ(s−1)···ζ(s−n+1) of Riemann zeta-functions.
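For n = 2 the representatives in Problem 2.10 can be listed explicitly: the reduced matrices with determinant d are the matrices with rows (a, b) and (0, c), where ac = d and 0 ≤ b < c, so there are σ_1(d) = Σ_{c|d} c of them, which is the dth Dirichlet coefficient of ζ(s)ζ(s−1). The following Python sketch (our own check; the left-coset test simply asks whether g_1 g_2^{-1} lies in GL_2(Z)) verifies both the count and the pairwise distinctness of the cosets:

```python
def reduced_matrices(m):
    """All reduced 2x2 integer matrices ((a, b), (0, c)) with det = m, cf. (2.15)."""
    mats = []
    for a in range(1, m + 1):
        if m % a == 0:
            c = m // a
            for b in range(c):
                mats.append(((a, b), (0, c)))
    return mats

def same_left_coset(g1, g2):
    """True if g1 = lam * g2 for some lam in GL_2(Z)."""
    (a, b), (c, d) = g2
    det = a * d - b * c
    (p, q), (r, s) = g1
    # lam = g1 * adj(g2) / det; it must be integral with determinant +-1
    lam = [[p * d - q * c, -p * b + q * a], [r * d - s * c, -r * b + s * a]]
    if any(x % det for row in lam for x in row):
        return False
    lam = [[x // det for x in row] for row in lam]
    return abs(lam[0][0] * lam[1][1] - lam[0][1] * lam[1][0]) == 1

def sigma1(m):
    return sum(d for d in range(1, m + 1) if m % d == 0)
```

The non-reduced matrix ((1, 4), (0, 4)) falls into the same coset as the reduced representative ((1, 0), (0, 4)), illustrating the reduction of Lemma 2.7.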

PROBLEM 2.11. Prove the following identities for formal Dirichlet series with coef-
ficients in the ring H^n:

$$Z(s) = \sum_{m=1}^{\infty} \frac{t(m)}{m^s} = \prod_p Z_p(p^{-s}),$$

where p runs through the prime numbers, and

$$Z_p(v) = \sum_{\delta=0}^{\infty} t(p^{\delta})v^{\delta};$$

$$Z(s_1,\dots,s_n) = \sum_{\mathrm{diag}(d_1,\dots,d_n)} \frac{(\mathrm{diag}(d_1,\dots,d_n))_A}{d_1^{s_1}\cdots d_n^{s_n}} = \prod_p Z_p(p^{-s_1},\dots,p^{-s_n}),$$

where diag(d_1,...,d_n) runs through the set $\bigcup_{m=1}^{\infty}$ ED(m), p runs through the prime
numbers, and

$$Z_p(v_1,\dots,v_n) = \sum_{0 \le \delta_1 \le \cdots \le \delta_n} (\mathrm{diag}(p^{\delta_1},\dots,p^{\delta_n}))_A\, v_1^{\delta_1}\cdots v_n^{\delta_n}.$$

PROBLEM 2.12. Using the coset representatives in Problem 2.10, prove that in the
case n = 2 the following relation holds for every prime p:

t(p)t(p^δ) = t(p^{δ+1}) + p(pE_2)_A t(p^{δ-1})  (δ ≥ 1).

From this derive the following identities in the ring of formal power series over H_p^2:

Z_p(v) = (1 − t(p)v + p(pE_2)_A v^2)^{-1},
Z_p(v_1, v_2) = (1 − (pE_2)_A v_2^2)(1 − (pE_2)_A v_1v_2)^{-1}(1 − t(p)v_2 + p(pE_2)_A v_2^2)^{-1}.
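The first relation above can be checked after applying the spherical map ω introduced in subsection 3 below: by (2.37), ω(t(p^δ)) is the complete homogeneous symmetric polynomial h_δ in x_1/p and x_2/p, while ω((pE_2)_A) = x_1x_2p^{-3}, so the relation reduces to the classical two-variable identity h_1h_δ = h_{δ+1} + y_1y_2h_{δ-1}. A Python sketch of the check, with exact rational arithmetic (names are ours):

```python
from fractions import Fraction

def h(delta, y1, y2):
    """Complete homogeneous symmetric polynomial h_delta(y1, y2)."""
    return sum(y1 ** a * y2 ** (delta - a) for a in range(delta + 1))

def check_recursion(p, x1, x2, delta):
    """omega(t(p^d)) = h_d(x1/p, x2/p); omega((pE_2)_A) = x1*x2*p^(-3)."""
    y1, y2 = Fraction(x1, p), Fraction(x2, p)
    lhs = h(1, y1, y2) * h(delta, y1, y2)
    rhs = h(delta + 1, y1, y2) + p * Fraction(x1 * x2, p ** 3) * h(delta - 1, y1, y2)
    return lhs == rhs
```

Since ω is injective on H_p^2 (Theorem 2.20 below), agreement of the images implies the relation itself.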

PROBLEM 2.13. Show that in the ring H^2 we have the relations

$$t(m)t(m_1) = \sum_{d \mid m,\, m_1} d\,(dE_2)_A\, t(mm_1/d^2) \qquad (m, m_1 \in \mathbf{N}),$$

where d runs through the positive common divisors of m and m_1.
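A quick numerical check of this relation is to apply the homomorphism N of Problem 1.16, which counts left cosets; assuming N(t(m)) = σ_1(m) (Problem 2.10) and N((dE_2)_A) = 1, the relation becomes the classical identity σ_1(m)σ_1(m_1) = Σ_{d|(m,m_1)} d σ_1(mm_1/d^2). In Python:

```python
from math import gcd

def sigma1(m):
    return sum(d for d in range(1, m + 1) if m % d == 0)

def hecke_relation_holds(m, m1):
    """Image of t(m)t(m_1) = sum_{d | m, m_1} d (dE_2)_A t(m*m_1/d^2) under the
    left-coset-count homomorphism N, assuming N(t(m)) = sigma1(m), N((dE_2)_A) = 1."""
    g = gcd(m, m1)
    rhs = sum(d * sigma1(m * m1 // d ** 2) for d in range(1, g + 1) if g % d == 0)
    return sigma1(m) * sigma1(m1) == rhs
```

For instance, σ_1(4)σ_1(6) = 84 = σ_1(24) + 2σ_1(6).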


PROBLEM 2.14. Show that G_p^n coincides with the subset of G^n consisting of all
g ∈ G^n all of whose elementary divisors d_1(g),...,d_n(g) are powers of p.
2. Local rings. In this subsection we study the structure of the local Hecke rings
H_p^n defined by (2.18). We first note that the investigation of H_p^n reduces to that of its
integral subring

(2.22)    H̄_p = H̄_p^n ⊂ H_p^n,

which consists of all linear combinations of elements (g) where g ∈ G_p^n is an integer
matrix.

LEMMA 2.15. The element

(2.23)    π(p) = π_n(p) = (pE_n)_A

is invertible in the ring H_p^n, and π(p)^{-1} = (p^{-1}E_n)_A. The ring H_p^n is generated by the
subring H̄_p^n and the element π(p)^{-1}.

PROOF. The lemma follows from Lemma 2.4 and the definitions. □

For brevity we denote

(diag(p^{δ_1},...,p^{δ_n}))_A = t(p^{δ_1},...,p^{δ_n}).

In this notation the ring H̄_p^n consists of linear combinations of elements of the form
t(p^{δ_1},...,p^{δ_n}), where 0 ≤ δ_1 ≤ ··· ≤ δ_n. We say that such an element is primitive if
δ_1 = 0, and we say that it is imprimitive if δ_1 ≥ 1. An arbitrary element t ∈ H̄_p^n is said
to be primitive (respectively, imprimitive) if it is a linear combination of primitive (resp.
imprimitive) double cosets t(p^{δ_1},...,p^{δ_n}). Clearly, any t ∈ H̄_p^n can be uniquely written
in the form t = t^{pr} + t^{im}, where t^{pr} is primitive and t^{im} is imprimitive. Lemma 2.4
implies that the set I of all imprimitive elements in H̄_p^n is the principal ideal generated
by π(p).
LEMMA 2.16. Let Ψ: H̄_p^n → H̄_p^{n-1} be the Q-linear map defined by setting

$$\Psi(t(p^{\delta_1},\dots,p^{\delta_n})) = \begin{cases} t(p^{\delta_2},\dots,p^{\delta_n}), & \text{if } 0 = \delta_1 \le \delta_2 \le \cdots \le \delta_n, \\ 0, & \text{if } 0 < \delta_1 \le \delta_2 \le \cdots \le \delta_n. \end{cases}$$

Then Ψ is an epimorphism of rings. The kernel of Ψ is the ideal I of imprimitive elements
of H̄_p^n.

PROOF. From Lemmas 1.5 and 2.2 it follows that the map Ψ gives an isomorphism
between the subspace of all primitive elements of H̄_p^n and the space H̄_p^{n-1}. In particular,
Ψ is an epimorphism. It is clear that the kernel of Ψ is I. Hence, it remains to prove
that Ψ is a ring homomorphism. To do this it suffices to verify that

(2.24)    Ψ(t(1, p^{δ_2},...,p^{δ_n}) t(1, p^{δ'_2},...,p^{δ'_n})) = t(p^{δ_2},...,p^{δ_n}) t(p^{δ'_2},...,p^{δ'_n}),


where 0 ≤ δ_2 ≤ ··· ≤ δ_n, 0 ≤ δ'_2 ≤ ··· ≤ δ'_n. We set g_0 = diag(p^{δ_2},...,p^{δ_n}),
g'_0 = diag(p^{δ'_2},...,p^{δ'_n}), g = $\begin{pmatrix} 1 & 0 \\ 0 & g_0 \end{pmatrix}$, and g' = $\begin{pmatrix} 1 & 0 \\ 0 & g'_0 \end{pmatrix}$. By Lemma 1.5 we have

$$(g)(g') = \sum_{AhA \subset AgAg'A} c(g,g';h)(h).$$

From Lemma 1.5 and the relations (2.5) it follows that every element (h) in this
expansion has the form t(p^{γ_1},...,p^{γ_n}), where 0 ≤ γ_1 ≤ ··· ≤ γ_n and γ_1 + ··· + γ_n =
δ_2 + ··· + δ_n + δ'_2 + ··· + δ'_n = δ. Thus, by the definition of Ψ, we find that

$$\Psi((g)(g')) = \sum_{\substack{0 \le \gamma_2 \le \cdots \le \gamma_n \\ \gamma_2+\cdots+\gamma_n=\delta}} c(g,g';\mathrm{diag}(1,p^{\gamma_2},\dots,p^{\gamma_n}))\,t(p^{\gamma_2},\dots,p^{\gamma_n}).$$
Similarly, in the ring H̄_p^{n-1} we have the relation

$$(g_0)(g'_0) = \sum_{\substack{0 \le \gamma_2 \le \cdots \le \gamma_n \\ \gamma_2+\cdots+\gamma_n=\delta}} c(g_0,g'_0;\mathrm{diag}(p^{\gamma_2},\dots,p^{\gamma_n}))\,t(p^{\gamma_2},\dots,p^{\gamma_n}).$$

These relations imply that to prove (2.24) it suffices to verify that

(2.25)    c(g,g';h) = c(g_0,g'_0;h_0),

if h = $\begin{pmatrix} 1 & 0 \\ 0 & h_0 \end{pmatrix}$ and h_0 = diag(p^{γ_2},...,p^{γ_n}). Since c(g,g';h) depends only on
the double coset of h, we have c(g,g';h) = c(g,g';h'), where h' = $\begin{pmatrix} h_0 & 0 \\ 0 & 1 \end{pmatrix}$. By
Lemmas 1.5 and 2.7, the coefficient c(g,g';h') is equal to the number of pairs (C, C')
of reduced matrices such that C ∈ AgA, C' ∈ Ag'A, and CC' = λh', where λ ∈ A.
Since λ = CC'(h')^{-1}, it follows that λ must be a triangular matrix. In particular, it can
be written in the form λ = $\begin{pmatrix} \lambda_0 & * \\ 0 & \lambda_{nn} \end{pmatrix}$, where λ_0 ∈ A^{n-1} and λ_{nn} = ±1. If we similarly
divide C and C' into blocks, we see that these matrices can be written in the form
C = $\begin{pmatrix} C_0 & v \\ 0 & c_{nn} \end{pmatrix}$,  C' = $\begin{pmatrix} C'_0 & v' \\ 0 & c'_{nn} \end{pmatrix}$,
where C_0 and C'_0 are reduced (n−1)×(n−1)
matrices, c_{nn}, c'_{nn} > 0, and the entries in the columns v and v' are nonnegative and
less than c_{nn} and c'_{nn}, respectively. From the relation CC' = λh' it now follows that
c_{nn}c'_{nn} = λ_{nn} and C_0C'_0 = λ_0h_0. The first equality gives us c_{nn} = c'_{nn} = 1, and so
v = v' = 0. Thus, C = $\begin{pmatrix} C_0 & 0 \\ 0 & 1 \end{pmatrix}$ and C' = $\begin{pmatrix} C'_0 & 0 \\ 0 & 1 \end{pmatrix}$, where C_0 and C'_0 are reduced
(n−1)×(n−1) matrices satisfying the relation C_0C'_0 = λ_0h_0 with λ_0 ∈ A^{n-1}. Clearly
C_0 ∈ A^{n-1}g_0A^{n-1} and C'_0 ∈ A^{n-1}g'_0A^{n-1}. Conversely, any two matrices C and C' with
these properties are reduced, belong to the double cosets AgA and Ag'A, respectively,
and satisfy the relation CC' = λh' with λ ∈ A. Hence, c(g,g';h') = c(g_0,g'_0;h_0). □

We can now completely determine the structure of the rings H̄_p^n and H_p^n.

THEOREM 2.17. Let n ≥ 1, and let p be a prime number. Then:
(1) the ring H̄_p^n is generated over Q by the elements

(2.26)    π_i(p) = π_i^n(p) = (diag(1,...,1,p,...,p))_A  (1 ≤ i ≤ n),

where the diagonal contains n−i entries equal to 1 and i entries equal to p;
(2) the Hecke ring H_p^n is generated over Q by the elements π_1(p),...,π_{n-1}(p) and
π_n(p)^{±1};
(3) the elements π_1(p),...,π_n(p) are algebraically independent over Q.

REMARK. We are identifying Q with the subring Q(E_n)_A ⊂ H̄_p^n.
PROOF. To prove the first part it suffices to verify that every element t(p^{δ_1},...,p^{δ_n}),
where 0 ≤ δ_1 ≤ ··· ≤ δ_n, is a polynomial in π_1(p),...,π_n(p) with rational coefficients.
We prove this by induction on n, and for each fixed n > 1 by induction on N =
δ_1 + ··· + δ_n. If n = 1, the claim is obvious, since t(p^δ) = t(p)^δ = π_1(p)^δ. Suppose
that n > 1, and the claim has been verified for smaller orders. If N = δ_1 + ··· + δ_n = 1,
then t(p^{δ_1},...,p^{δ_n}) = π_1(p). Suppose that for all t(p^{δ'_1},...,p^{δ'_n}) with δ'_1 + ··· + δ'_n < N
it is already known that they are polynomials in the elements (2.26), and let 0 ≤ δ_1 ≤
··· ≤ δ_n and δ_1 + ··· + δ_n = N. If δ_1 ≥ 1, then by Lemma 2.4 we have

t(p^{δ_1},...,p^{δ_n}) = π_n(p)^{δ_1} t(1, p^{δ_2-δ_1},...,p^{δ_n-δ_1}).

Since (δ_2−δ_1) + ··· + (δ_n−δ_1) < N, it follows that the element t(1, p^{δ_2-δ_1},...,p^{δ_n-δ_1}) is
a polynomial in π_1(p),...,π_n(p), and hence so is the element t(p^{δ_1},...,p^{δ_n}). Suppose
that δ_1 = 0. By the first induction assumption, the element

Ψ(t(1, p^{δ_2},...,p^{δ_n})) = t(p^{δ_2},...,p^{δ_n}),

where Ψ is the homomorphism in Lemma 2.16, is a polynomial in the π_i^{n-1}(p):

t(p^{δ_2},...,p^{δ_n}) = F(π_1^{n-1}(p),...,π_{n-1}^{n-1}(p)),

where

$$F(x_1,\dots,x_{n-1}) = \sum_{a=(a_1,\dots,a_{n-1})} \alpha_a\, x_1^{a_1}\cdots x_{n-1}^{a_{n-1}}.$$

Since each element (h) in the expansion of the product (π_1^{n-1}(p))^{a_1}···(π_{n-1}^{n-1}(p))^{a_{n-1}}
obviously satisfies the relation |det h| = p^{|a|} with |a| = a_1 + 2a_2 + ··· + (n−1)a_{n-1},
it follows that, after combining similar terms, we may assume that the only nonzero
coefficients α_a in F are those for which |a| = δ_2 + ··· + δ_n = N. Since Ψ(π_i^n(p)) =
π_i^{n-1}(p) (1 ≤ i ≤ n−1), it follows that Ψ takes the element

$$t = t(1,p^{\delta_2},\dots,p^{\delta_n}) - \sum_{|a|=N} \alpha_a\,(\pi_1^n(p))^{a_1}\cdots(\pi_{n-1}^n(p))^{a_{n-1}}$$

to zero. Thus, t is imprimitive. On the other hand, from the form of t it follows
that t is a linear combination of elements t(p^{δ'_1},...,p^{δ'_n}) with δ'_1 + ··· + δ'_n = N.
Hence, t = π_n(p)t', where t' is a linear combination of elements t(p^{γ_1},...,p^{γ_n}) with
γ_1 + ··· + γ_n < N. By the second induction assumption, t' is a polynomial in
π_1^n(p),...,π_n^n(p), and so the same is true of t. Part (1) of the theorem is proved.
Part (2) follows from part (1) and Lemma 2.15.
We prove the third part by induction on n. If n = 1, it is obvious, since the
elements π_1^1(p)^δ = (p^δ), δ = 0, 1, 2, ..., correspond to pairwise distinct double cosets
modulo A^1 = {±1}, and so they are linearly independent. Suppose that n > 1, and
the claim has been verified for all n' < n. Suppose that the elements π_1^n(p),...,π_n^n(p)
are algebraically dependent over Q, and let F(π_1^n(p),...,π_n^n(p)) = 0 be an algebraic
relation in which the polynomial F has the smallest possible degree. If we now apply
the homomorphism Ψ, we obtain

Ψ(F(π_1^n(p),...,π_n^n(p))) = F(Ψ(π_1^n(p)),...,Ψ(π_n^n(p)))
                           = F(π_1^{n-1}(p),...,π_{n-1}^{n-1}(p), 0) = 0.

By the induction assumption, this implies that F = x_nF_1, where F_1 = F_1(x_1,...,x_n) is
also a polynomial. By Lemma 2.15, π_n^n(p) is not a zero divisor. Hence, the relation 0 =
F(π_1^n(p),...,π_n^n(p)) = π_n^n(p)F_1(π_1^n(p),...,π_n^n(p)) implies that F_1(π_1^n(p),...,π_n^n(p))
= 0, and this contradicts our assumption that F has minimal degree. □

We conclude this subsection by proving some technical facts about the generators
π_i(p) and their pairwise products which will be needed later.

LEMMA 2.18. Let n ≥ 1, and let p be a prime number. Then:

(1) For 0 ≤ i ≤ n the subset of reduced matrices

(2.27)    {C = (c_{αβ}) ∈ M_n; det C = p^i, c_{αα} = 1 or p,
           c_{αβ} = 0 if α > β, or if α < β and c_{αα} = p}

is a complete set of representatives of the distinct left cosets of AD_iA modulo A = A^n;
the number of matrices in this subset is equal to

$$\mu_A(D_i) = \frac{\varphi_n}{\varphi_i\varphi_{n-i}},$$

where

(2.28)    D_i = D_i^n(p) = $\begin{pmatrix} E_{n-i} & 0 \\ 0 & pE_i \end{pmatrix}$,  φ_r = φ_r(p),

(2.29)    $\varphi_r(x) = \prod_{i=1}^{r}(x^i - 1)$ for r ≥ 1, and φ_0(x) = 1.

(2) The double coset expansion of the product in the Hecke ring H̄_p^n of the two
elements π_i = π_i^n(p) and π_j = π_j^n(p), where 1 ≤ i, j ≤ n, has the form

(2.30)    $$\pi_i\pi_j = \sum_{\substack{0 \le a \le n-j,\ 0 \le b \le j \\ a+b=i}} \frac{\varphi_{a+j-b}}{\varphi_a\varphi_{j-b}}\,\pi_{a+j-b,b},$$

where

(2.31)    $$\pi_{a,b} = \pi_{a,b}^n(p) = (D_{ab})_A, \qquad D_{ab} = \begin{pmatrix} E_{n-a-b} & 0 & 0 \\ 0 & pE_a & 0 \\ 0 & 0 & p^2E_b \end{pmatrix}.$$

The number of left A-cosets in the double coset AD_{αβ}A is given by the formula

(2.32)    $$\mu_A(D_{\alpha\beta}) = p^{\beta(n-\alpha-\beta)}\,\frac{\varphi_n}{\varphi_{n-\alpha-\beta}\varphi_\alpha\varphi_\beta}.$$

PROOF. Every matrix in (2.27) can be written in the form diag(p^{δ_1},...,p^{δ_n})C',
where δ_α = 0, 1, δ_1 + ··· + δ_n = i, and C' ∈ A. Thus, the matrix lies in AD_iA.
By Lemma 2.7, all of these matrices belong to different left A-cosets. Now let C be
any matrix of the form (2.15) that lies in AD_iA. Its diagonal entries c_{αα} are positive
integers, and their product is p^i. Hence, c_{αα} = p^{δ_α}, where δ_α ≥ 0, δ_1 + ··· + δ_n = i.
From the integrality of the matrix p·C^{-1} it follows that each δ_α is either 0 or 1.
Suppose that c_{αβ} ≠ 0, where 1 ≤ α < β ≤ n. Then δ_β = 1 and c_{αβ} is not divisible by
p. We let γ_1,...,γ_{n-i} denote the indices γ for which δ_γ = 0. If δ_α = 1, then the γ_1th,
..., γ_{n-i}th, and βth columns of C are linearly independent modulo p; but the rank of
C modulo p is obviously equal to n−i. Thus, δ_α = 0, and C lies in the set (2.27).
It is easy to see that the number of elements of the set (2.27) with fixed diagonal and
with δ_α = 1 precisely when α = α_1, α_2,...,α_i, where 1 ≤ α_1 < α_2 < ··· < α_i ≤ n, is
equal to p^{α_1-1}···p^{α_i-i}; this implies that the number of elements in the set (2.27) is

$$\sum_{1 \le \alpha_1 < \cdots < \alpha_i \le n} p^{(\alpha_1-1)+\cdots+(\alpha_i-i)},$$

and to prove (2.28) it suffices to verify the following identity:

(2.33)    $$\sum_{1 \le \alpha_1 < \cdots < \alpha_i \le n} x^{\alpha_1+\cdots+\alpha_i} = x^{i(i+1)/2}\,\frac{\varphi_n(x)}{\varphi_i(x)\varphi_{n-i}(x)} \qquad (1 \le i \le n),$$

where φ_r(x) is the function (2.29). This identity can be obtained by equating coeffi-
cients of t^i on both sides of the identity

(2.34)    $$\prod_{\alpha=1}^{n}(1 + tx^{\alpha}) = \sum_{i=0}^{n} t^i\, x^{i(i+1)/2}\,\frac{\varphi_n(x)}{\varphi_i(x)\varphi_{n-i}(x)},$$

an identity which the reader can easily prove by induction on n, in a manner analogous
to the standard proof of Newton's binomial expansion.
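Both (2.33) and (2.34) are finite identities of rational functions and can be checked mechanically at sample values of x and t with exact arithmetic. A Python sketch (the function names are ours):

```python
from fractions import Fraction
from itertools import combinations

def phi(r, x):
    """phi_r(x) = (x - 1)(x^2 - 1) ... (x^r - 1), with phi_0(x) = 1   (2.29)."""
    out = Fraction(1)
    for k in range(1, r + 1):
        out *= x ** k - 1
    return out

def lhs_233(n, i, x):
    """Left side of (2.33): sum of x^(a_1 + ... + a_i) over 1 <= a_1 < ... < a_i <= n."""
    return sum(x ** sum(a) for a in combinations(range(1, n + 1), i))

def rhs_233(n, i, x):
    """Right side of (2.33)."""
    return x ** (i * (i + 1) // 2) * phi(n, x) / (phi(i, x) * phi(n - i, x))

def lhs_234(n, x, t):
    """Left side of (2.34): product of (1 + t x^a) over a = 1, ..., n."""
    out = Fraction(1)
    for a in range(1, n + 1):
        out *= 1 + t * x ** a
    return out

def rhs_234(n, x, t):
    """Right side of (2.34)."""
    return sum(t ** i * rhs_233(n, i, x) for i in range(n + 1))
```

Evaluating both sides at several rational points of course only corroborates the identities; the proof is the induction indicated in the text.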
To prove (2.30) we use induction on n. If n = 1, we must prove the formula
π_1^1π_1^1 = π_{0,1}^1, which is obvious. Now let n > 1, and suppose that the formula has been
proved for smaller orders. By Lemma 1.5 and the first part of Lemma 2.18, we have
the expansion

$$\pi_i\pi_j = \sum_{AgA \subset G_p} c_{ij}(g)(g)_A,$$

where

c_{ij}(g) = ν(D_i, D_j; g)μ_A(D_j)μ_A(g)^{-1}

and ν(D_i, D_j; g) is the number of matrices C in the set (2.27) such that CD_j ∈ AgA.
This number is easy to compute. If the diagonal entries in C are p^{δ_1},...,p^{δ_n}, then, as
noted before, C = diag(p^{δ_1},...,p^{δ_n})C_1, where C_1 is an upper-triangular matrix in A.
Then

CD_j = diag(p^{δ_1},...,p^{δ_n})D_j(D_j^{-1}C_1D_j) ∈ A·diag(p^{δ_1},...,p^{δ_n})D_j·A,

where D_j = D_j^n(p), since obviously D_j^{-1}C_1D_j ∈ A. Let α_1 < ··· < α_i be the indices
α for which δ_α = 1. If a of the numbers α_1,...,α_i do not exceed n−j and b = i−a
of these numbers are greater than n−j, then clearly

diag(p^{δ_1},...,p^{δ_n})D_j ∈ AD_{a+j-b,b}A,

where D_{αβ} = D_{αβ}^n(p). As noted before, the number of matrices C with fixed α_1,...,α_i
is equal to p^{α_1-1}···p^{α_i-i}. Thus, the number of matrices C in the set (2.27) such that
CD_j ∈ AD_{a+j-b,b}A for fixed a and b satisfying the inequalities 0 ≤ a ≤ n−j,
0 ≤ b ≤ j, and a + b = i, is equal to

$$p^{b(n-j)-ab}\sum_{1 \le \alpha_1 < \cdots < \alpha_a \le n-j} p^{\alpha_1+\cdots+\alpha_a-a(a+1)/2} \sum_{1 \le \beta_1 < \cdots < \beta_b \le j} p^{\beta_1+\cdots+\beta_b-b(b+1)/2} = p^{b(n-j-a)}\,\frac{\varphi_{n-j}}{\varphi_a\varphi_{n-j-a}}\cdot\frac{\varphi_j}{\varphi_b\varphi_{j-b}}$$
(we have used the identities (2.33)). These arguments, along with (2.28), imply that

(2.35)    $$\pi_i\pi_j = \sum_{\substack{0 \le a \le n-j,\ 0 \le b \le j \\ a+b=i}} c_{ij}(D_{a+j-b,b})\,\pi_{a+j-b,b},$$

where

$$c_{ij}(D_{a+j-b,b}) = p^{b(n-j-a)}\,\frac{\varphi_n}{\varphi_a\varphi_{n-j-a}\varphi_b\varphi_{j-b}}\,\mu_A(D_{a+j-b,b})^{-1}.$$

In the case when i = n or j = n, (2.30) follows from the definitions and Lemma 2.4.
Hence, we may assume that 1 ≤ i, j < n. If we now apply the homomorphism Ψ of
Lemma 2.16 to the left and right sides of (2.35), we find that

$$\pi_i^{n-1}(p)\pi_j^{n-1}(p) = \sum_{\substack{0 \le a \le n-1-j,\ 0 \le b \le j \\ a+b=i}} c_{ij}(D_{a+j-b,b})\,\pi_{a+j-b,b}^{n-1}(p).$$

On the other hand, by the induction assumption we have

$$\pi_i^{n-1}(p)\pi_j^{n-1}(p) = \sum_{\substack{0 \le a \le n-1-j,\ 0 \le b \le j \\ a+b=i}} \frac{\varphi_{a+j-b}}{\varphi_a\varphi_{j-b}}\,\pi_{a+j-b,b}^{n-1}(p).$$

Since the double cosets in this expansion are linearly independent, we obtain the
relation

$$c_{ij}(D_{a+j-b,b}) = \frac{\varphi_{a+j-b}}{\varphi_a\varphi_{j-b}},$$

where a + b = i, 0 ≤ a ≤ n−1−j, 0 ≤ b ≤ j. The same formula can be obtained
in the case a = n−j if we use the original formula for c_{ij}(D_{2n-i-j,i+j-n}) and (2.28)
and take into account that

$$\mu_A(D_{2n-i-j,i+j-n}) = \mu_A(D_{i+j-n}^n(p)) = \frac{\varphi_n}{\varphi_{i+j-n}\varphi_{2n-i-j}}.$$

This proves (2.30). Comparing the expressions for the coefficients c_{ij}(D_{a+j-b,b}), we
find that

$$\mu_A(D_{a+j-b,b}) = p^{b(n-j-a)}\,\frac{\varphi_n}{\varphi_{n-j-a}\varphi_{a+j-b}\varphi_b},$$

from which (2.32) follows if we set a + j − b = α, b = β. □
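A consistency check of (2.30) together with (2.32): assuming that the left-coset count μ_A extends to a ring homomorphism (the homomorphism N of Problem 1.16), applying it to both sides of (2.30) must give equal numbers. The following Python sketch verifies this for small n and p with exact arithmetic (names are ours):

```python
from fractions import Fraction

def phi(r, p):
    """phi_r(p) = (p - 1)(p^2 - 1) ... (p^r - 1), phi_0 = 1   (2.29)."""
    out = Fraction(1)
    for k in range(1, r + 1):
        out *= p ** k - 1
    return out

def mu_Di(n, i, p):
    """Left-coset count of the double coset of D_i, from part (1) of Lemma 2.18."""
    return phi(n, p) / (phi(i, p) * phi(n - i, p))

def mu_Dab(n, alpha, beta, p):
    """Left-coset count of the double coset of D_{alpha,beta}   (2.32)."""
    return (p ** (beta * (n - alpha - beta)) * phi(n, p)
            / (phi(n - alpha - beta, p) * phi(alpha, p) * phi(beta, p)))

def degree_check(n, i, j, p):
    """Apply the coset-count homomorphism to both sides of (2.30)."""
    lhs = mu_Di(n, i, p) * mu_Di(n, j, p)
    rhs = sum(phi(a + j - b, p) / (phi(a, p) * phi(j - b, p))
              * mu_Dab(n, a + j - b, b, p)
              for a in range(0, n - j + 1) for b in range(0, j + 1) if a + b == i)
    return lhs == rhs
```

For n = 2, i = j = 1 this reduces to (p+1)^2 = (p+1) + p(p+1), which is visibly correct.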

PROBLEM 2.19. Let Q_p be the field of p-adic numbers, and let Z_p be the ring of
p-adic integers. Then G = GL_n(Q_p) is a locally compact group in the p-adic topology,
and Γ = GL_n(Z_p) is a (maximal) compact subgroup. Let D_K, where K is a subring
of C, denote the K-module consisting of all continuous functions f: G → K with
compact support which satisfy the condition f(γ_1gγ_2) = f(g) for any γ_1, γ_2 ∈ Γ and
g ∈ G. Fix the Haar measure μ on G for which μ(Γ) = 1, and define the product
f * f_1 of functions f, f_1 ∈ D_K by the formula

$$(f * f_1)(h) = \int_G f(hg^{-1})f_1(g)\,d\mu(g) \qquad (h \in G).$$

Show that the K-linear map from the Hecke ring D_K(A^n, G_p^n) to D_K that associates to
an element (g)_A (g ∈ G_p^n) the characteristic function of the double coset ΓgΓ ⊂ G, is
an isomorphism of rings.
3. The spherical map. We have shown that every element in the local Hecke ring of
the general linear group can be uniquely expressed as a polynomial in a finite number
of generators. But often it is not so simple to find this polynomial if the element
is given, say, as a linear combination of left cosets. In order to solve this problem,
we define certain maps from the local Hecke rings to rings of symmetric polynomials
that are analogous to the spherical functions in the representation theory of locally
compact groups.

As in the previous subsection, we suppose that n ≥ 1 and p is a prime number.
We first define a linear map

ω = ω_p^n: L_Q(A^n, G_p^n) → Q[x_1^{±1},...,x_n^{±1}]

from the Q-vector space spanned by the distinct left cosets (Ag) of G_p^n modulo A^n to the
subring of the field Q(x_1,...,x_n) of rational functions in n variables that is generated
over Q by x_1^{±1},...,x_n^{±1}. Lemma 2.7 implies that every left coset Ag (g ∈ G_p^n) has a
representative of the form

(2.36)    $$\begin{pmatrix} p^{\delta_1} & c_{12} & \cdots & c_{1n} \\ 0 & p^{\delta_2} & \cdots & c_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & p^{\delta_n} \end{pmatrix}, \qquad \text{where } \delta_1,\dots,\delta_n \in \mathbf{Z},$$

and the diagonal (p^{δ_1},...,p^{δ_n}) is uniquely determined by the left coset. We set

$$\omega((Ag)) = \prod_{i=1}^{n}(x_i p^{-i})^{\delta_i},$$

and for an arbitrary element

$$t = \sum_j a_j(Ag_j) \in L_Q(A^n, G_p^n)$$

we define

$$\omega(t) = \sum_j a_j\,\omega((Ag_j)).$$
We call ω the spherical map. We would like to describe the image of the Hecke rings
under ω. Recall that an element of the field Q(x_1,...,x_n) is said to be symmetric if it
does not change under any permutation of the variables x_1,...,x_n.

THEOREM 2.20. The restriction of the map ω = ω_p^n to the Hecke ring H_p^n =
D_Q(A^n, G_p^n) ⊂ L_Q(A^n, G_p^n) is an isomorphism of this ring with the ring Q[x_1^{±1},...,x_n^{±1}]_s
of all symmetric elements of Q[x_1^{±1},...,x_n^{±1}]. The image of the integral subring H̄_p^n
under the map ω is the ring Q[x_1,...,x_n]_s of all symmetric polynomials in x_1,...,x_n
over Q.
We first prove a lemma.

LEMMA 2.21. The images of the elements (2.26) under the map ω = ω_p^n are given by
the formulas

$$\omega(\pi_i^n(p)) = p^{-i(i+1)/2}\,s_i(x_1,\dots,x_n) \qquad (1 \le i \le n),$$

where

$$s_i(x_1,\dots,x_n) = \sum_{1 \le \alpha_1 < \cdots < \alpha_i \le n} x_{\alpha_1}\cdots x_{\alpha_i}$$

is the ith elementary symmetric polynomial.

PROOF. In the expansion of π_i^n(p) into left cosets we take the set (2.27) as the
representatives of the different left cosets, and we use the fact that the number of
elements in this set for which δ_α = 1 precisely when α = α_1, α_2,...,α_i, where 1 ≤
α_1 < ··· < α_i ≤ n, is equal to p^{α_1-1}···p^{α_i-i}. By the definition of ω we then have

$$\omega(\pi_i^n(p)) = \sum_{1 \le \alpha_1 < \cdots < \alpha_i \le n} p^{(\alpha_1-1)+\cdots+(\alpha_i-i)}(x_{\alpha_1}p^{-\alpha_1})\cdots(x_{\alpha_i}p^{-\alpha_i}) = p^{-i(i+1)/2}\,s_i(x_1,\dots,x_n). \qquad \square$$
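The computation in this proof can be replayed by brute force: summing ω over the representatives (2.27), grouped by the positions of the diagonal entries equal to p, must reproduce p^{-i(i+1)/2}s_i. A Python sketch (names are ours):

```python
from fractions import Fraction
from itertools import combinations
from math import prod

def omega_pi_bruteforce(n, i, p, xs):
    """Sum of omega over the left cosets of pi_i^n(p), using the set (2.27):
    for p exactly at diagonal positions a_1 < ... < a_i there are
    p^(a_1 - 1) ... p^(a_i - i) representatives, each with omega-value
    prod_k x_{a_k} p^{-a_k}."""
    total = Fraction(0)
    for alphas in combinations(range(1, n + 1), i):
        count = prod(p ** (a - k) for k, a in enumerate(alphas, start=1))
        value = prod(Fraction(xs[a - 1], p ** a) for a in alphas)
        total += count * value
    return total

def elem_sym(i, xs):
    """i-th elementary symmetric polynomial s_i evaluated at xs."""
    return sum(prod(c) for c in combinations(xs, i))
```

Evaluating at sample integer values of x_1,...,x_n confirms the formula of Lemma 2.21.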

PROOF OF THEOREM 2.20. We first show that the restriction of ω to H_p is a ring
homomorphism. In fact, if t = Σ_i a_i(Ag_i), t' = Σ_j b_j(Ag'_j) are two elements of H_p,
then it follows from Lemma 2.7 that all of the left coset representatives g_i, g'_j can be
assumed to have been chosen in the form (2.36). Then all of the representatives g_ig'_j
in the expansion of the product

$$tt' = \sum_{i,j} a_ib_j(Ag_ig'_j)$$

have the same form, and the diagonal entries in the matrices g_ig'_j are equal to the
products of the corresponding diagonal entries in g_i and g'_j. Thus, from the definition
of ω we have ω((Ag_ig'_j)) = ω((Ag_i))ω((Ag'_j)). Using this and the linearity of ω, we
obtain the relation ω(tt') = ω(t)ω(t').
Clearly ω((E_n)) = 1, where (E_n) = (E_n)_A is the unit of the ring H_p. From
Theorem 2.17 it then follows that the ω-image of H̄_p consists of all polynomials over
Q in the elements ω(π_i(p)) (1 ≤ i ≤ n), and the ω-image of H_p is generated by
ω(H̄_p) and the element ω(π_n(p)^{-1}) = ω(π_n(p))^{-1}. But then Lemma 2.21 and the
fundamental theorem on symmetric polynomials imply that ω(H̄_p) coincides with
the ring of symmetric polynomials in n variables over Q, and the ring ω(H_p) is
generated over ω(H̄_p) by the element (x_1···x_n)^{-1}, and so obviously coincides with
Q[x_1^{±1},...,x_n^{±1}]_s.
Finally, if ω(t) = 0 for some t ∈ H_p, then by Lemma 2.15 we can write t =
π_n(p)^δ t_1, where δ ∈ Z and t_1 ∈ H̄_p. Then, since ω(π_n(p)^{±1}) ≠ 0, we have ω(t_1) = 0.
By Theorem 2.17, t_1 = Φ(π_1(p),...,π_n(p)), where Φ(x_1,...,x_n) is a polynomial with
rational coefficients. But by Lemma 2.21, the equality ω(t_1) = 0 means that

Φ(p^{-1}s_1(x_1,...,x_n),...,p^{-n(n+1)/2}s_n(x_1,...,x_n)) = 0,

and because of the algebraic independence of elementary symmetric polynomials, this
implies that Φ(x_1,...,x_n) = 0. Hence, t_1 = 0 and t = 0. □

Theorem 2.20 reduces the computation of the product of elements of a local Hecke
ring to the computation of the product of the corresponding symmetric polynomials.
The problem of expressing elements of the local Hecke rings in terms of the generators
π_i(p) reduces to the problem of expressing symmetric polynomials in terms of the
elementary symmetric polynomials.
We illustrate the usefulness of the spherical map by discussing the problem of
summing the formal generating series for the elements t(p^δ), where p is a prime
number. Note that when n > 2 these elements do not have a simple multiplication
table (the case n = 1 is trivial, and the case n = 2 is treated in Problem 2.12).

PROPOSITION 2.22. In the notation (2.10) and (2.26), the following identity holds in
the ring of formal power series over H̄_p^n:

$$\Big(\sum_{\delta=0}^{\infty} t(p^{\delta})v^{\delta}\Big)\Big(\sum_{i=0}^{n}(-1)^i p^{i(i-1)/2}\pi_i(p)v^i\Big) = 1.$$

PROOF. From the definition of t(p^δ) and Lemma 2.7 it follows that

$$t(p^{\delta}) = \sum_{C} (AC),$$

where C runs through the reduced integer matrices (2.15) with det C = p^δ. By the
definition of the map ω = ω_p^n we then have

(2.37)    $$\omega(t(p^{\delta})) = \sum_{\substack{\delta_i \ge 0 \\ \delta_1+\cdots+\delta_n=\delta}} p^{\delta_2}p^{2\delta_3}\cdots p^{(n-1)\delta_n}(x_1p^{-1})^{\delta_1}\cdots(x_np^{-n})^{\delta_n} = \sum_{\substack{\delta_i \ge 0 \\ \delta_1+\cdots+\delta_n=\delta}} (x_1p^{-1})^{\delta_1}\cdots(x_np^{-1})^{\delta_n},$$

so that

(2.38)    $$\sum_{\delta=0}^{\infty}\omega(t(p^{\delta}))v^{\delta} = \sum_{\delta_1,\dots,\delta_n \ge 0}(x_1p^{-1})^{\delta_1}\cdots(x_np^{-1})^{\delta_n}v^{\delta_1+\cdots+\delta_n} = \prod_{i=1}^{n}(1 - x_ip^{-1}v)^{-1} = \Big(\sum_{i=0}^{n}(-1)^i p^{i(i-1)/2}\omega(\pi_i(p))v^i\Big)^{-1},$$

where we used Lemma 2.21 for the last step. From this identity for formal power series
over the ring of polynomials in x_1,...,x_n it follows that all of the coefficients in the
formal series

$$\Big(\sum_{\delta=0}^{\infty} t(p^{\delta})v^{\delta}\Big)\Big(\sum_{i=0}^{n}(-1)^i p^{i(i-1)/2}\pi_i(p)v^i\Big),$$

except for the coefficient of v^0, belong to the kernel of ω, and hence are equal to zero.
As for the constant term, it is obviously equal to the unit of the ring H̄_p^n. □
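Since ω is injective on H_p^n (Theorem 2.20), the proposition can also be tested by truncating the power series after applying ω: by (2.37) and Lemma 2.21 every coefficient of v^k, k ≥ 1, in the product must vanish. A Python sketch with exact rational arithmetic (names are ours):

```python
from fractions import Fraction
from itertools import combinations
from math import prod

def omega_t(delta, p, xs):
    """omega(t(p^delta)): sum over delta_1 + ... + delta_n = delta of
    (x_1/p)^(delta_1) ... (x_n/p)^(delta_n), by (2.37)."""
    ys = [Fraction(x, p) for x in xs]
    n = len(ys)
    total = Fraction(0)
    def rec(k, rem, acc):
        nonlocal total
        if k == n - 1:
            total += acc * ys[k] ** rem
            return
        for e in range(rem + 1):
            rec(k + 1, rem - e, acc * ys[k] ** e)
    rec(0, delta, Fraction(1))
    return total

def elem_sym(i, xs):
    return sum(prod(c) for c in combinations(xs, i))

def check_prop_2_22(p, xs, order):
    """Check that the product of the two series has coefficients 1, 0, 0, ...
    up to v^order, using omega(pi_i(p)) = p^(-i(i+1)/2) s_i (Lemma 2.21)."""
    n = len(xs)
    A = [omega_t(d, p, xs) for d in range(order + 1)]
    B = [Fraction((-1) ** i * p ** (i * (i - 1) // 2) * elem_sym(i, xs),
                  p ** (i * (i + 1) // 2)) for i in range(n + 1)]
    coeffs = [sum(A[k - i] * B[i] for i in range(min(k, n) + 1))
              for k in range(order + 1)]
    return coeffs[0] == 1 and not any(coeffs[1:])
```

The check amounts to the standard identity between the generating series of the complete homogeneous and the elementary symmetric polynomials.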

This proposition can be used to find explicit expressions for t(p^δ) in terms of the
generators π_i(p).
We conclude this section by describing the anti-automorphism j of the Hecke ring
H_p^n (see §1.4) in terms of symmetric functions.

LEMMA 2.23. Let n ≥ 1, and let p be a prime number. Then the diagram

    H_p^n  --ω-->  Q[x_1^{±1},...,x_n^{±1}]
     |j                     |w
     v                      v
    H_p^n  --ω-->  Q[x_1^{±1},...,x_n^{±1}]

commutes, where ω is the spherical map, j is the anti-automorphism (1.31), and w = w_{n,p}
denotes the Q-linear ring homomorphism given by w(x_i) = p^{n+1}x_i^{-1} (1 ≤ i ≤ n) on the
generators.

PROOF. Since H_p^n is a commutative ring, all of the maps in this diagram are Q-
linear ring homomorphisms. Taking into account Theorem 2.17 and Lemma 2.4, it is
therefore sufficient to verify that ω(j(π_i(p))) = w(ω(π_i(p))) (1 ≤ i ≤ n). We have

ω(j(π_i(p))) = ω((p^{-1}D_{n-i})_A) = ω(π_n(p)^{-1}π_{n-i}(p))
            = p^{n(n+1)/2}(x_1···x_n)^{-1}·p^{-(n-i)(n-i+1)/2}s_{n-i}(x_1,...,x_n)
            = p^{-i(i+1)/2}s_i(p^{n+1}x_1^{-1},...,p^{n+1}x_n^{-1}) = w(ω(π_i(p))),

where we have used Lemma 2.21 twice in the computation. □

PROBLEM 2.24. Show that any $\mathbb{Q}$-linear ring homomorphism from $H_p^n$ to $\mathbb{C}$ that
takes the unit of $H_p^n$ to $1$ has the form
$$t\mapsto\Lambda(t)=\omega(t)|_{x_1=\lambda_1,\dots,x_n=\lambda_n}\quad(t\in H_p^n),$$
where $\omega$ is the spherical map and $\lambda_1,\dots,\lambda_n$ are nonzero complex numbers that are
determined except for their order by the homomorphism $\Lambda$. The numbers $\lambda_1,\dots,\lambda_n$
are called the parameters of the homomorphism $\Lambda$. Show that for arbitrary nonzero
$\lambda_1,\dots,\lambda_n\in\mathbb{C}$ there exists a homomorphism $\Lambda\colon H_p^n\to\mathbb{C}$ having parameters $\lambda_1,\dots,\lambda_n$.

PROBLEM 2.25. Let $\Lambda\colon H^n\to\mathbb{C}$ be a $\mathbb{Q}$-linear homomorphism. Show that the
formal zeta-function of the ring $H^n$ with character $\Lambda$, i.e., the formal Dirichlet series
$$Z(s,\Lambda)=\sum_{d=1}^{\infty}\frac{\Lambda(t(d))}{d^s},$$
where $t(d)$ is the element (2.13), has a formal Euler product expansion of the form
$$Z(s,\Lambda)=\prod_p Z_p(s,\Lambda_p),$$
where $p$ runs through all prime numbers, $\Lambda_p$ denotes the restriction of $\Lambda$ to $H_p^n$, and
the local zeta-functions $Z_p(s,\Lambda_p)$ have the form
$$Z_p(s,\Lambda_p)=\prod_{i=1}^{n}(1-\lambda_i(p)p^{-s-1})^{-1},$$
where $\lambda_1(p),\dots,\lambda_n(p)$ are the parameters of the homomorphism $\Lambda_p$.

PROBLEM 2.26. We return to the notation of Problem 2.19. Recall that a continuous
function $\omega$ on the group $G=GL_n(\mathbb{Q}_p)$ with values in $\mathbb{C}$ is called a (zonal) spherical
function if $\omega(\gamma_1g\gamma_2)=\omega(g)$ for any $\gamma_1,\gamma_2\in\Gamma=GL_n(\mathbb{Z}_p)$ and $g\in G$, $\omega(E_n)=1$,
and the map
$$f\mapsto\omega(f)=\int_G f(g)\omega(g)\,d\mu(g)$$
is a ring homomorphism from $D_G$ to $\mathbb{C}$. Show that every spherical function on $G$ has
the form
$$\omega_{(\lambda_1,\dots,\lambda_n)}(g)=\int_\Gamma\varphi_{(\lambda_1,\dots,\lambda_n)}(g\gamma)\,d\mu(\gamma),$$
where $\lambda_1,\dots,\lambda_n$ are nonzero complex numbers and the function $\varphi=\varphi_{(\lambda_1,\dots,\lambda_n)}$ is given
by the conditions: $\varphi(\gamma g)=\varphi(g)$ for $\gamma\in\Gamma$ and $g\in G$, and, for every upper triangular
matrix $b=(b_{ij})\in G$,
$$\varphi(b)=\prod_{i=1}^{n}(\lambda_ip^{-i})^{\nu_p(b_{ii})}.$$

Show that the spherical functions $\omega_{(\lambda_1,\dots,\lambda_n)}$ and $\omega_{(\lambda_1',\dots,\lambda_n')}$ coincide if and only if the
numbers $\lambda_1',\dots,\lambda_n'$ are a permutation of the numbers $\lambda_1,\dots,\lambda_n$.
[Hint: Use Problems 2.19 and 2.24, Theorem 2.20, and Lemma 2.21.]

§3. Hecke rings for the symplectic group


1. Global rings. In all of the Hecke pairs for subgroups of the symplectic group
that we shall be considering, the second component is a subgroup of the group
$$S^n=S^n_{\mathbb{Q}}=GSp_n^+(\mathbb{Q}).$$
From the definition it easily follows that a rational $(2n\times 2n)$-matrix $M=\begin{pmatrix}A&B\\C&D\end{pmatrix}$
lies in $S^n$ if and only if its $(n\times n)$-blocks satisfy the relations
(3.1)
$${}^tAC={}^tCA,\qquad {}^tBD={}^tDB,\qquad {}^tAD-{}^tCB=r(M)E_n,$$
where $r(M)>0$. Since the matrix ${}^tM=r(M)J_nM^{-1}J_n^{-1}$ lies in $S^n$ whenever $M$
does, the conditions (3.1) are equivalent to the conditions
(3.2)
$$A\,{}^tB=B\,{}^tA,\qquad C\,{}^tD=D\,{}^tC,\qquad A\,{}^tD-B\,{}^tC=r(M)E_n.$$
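The equivalence of (3.1) and (3.2) is easy to test numerically. The following sketch (illustrative; the generators $T(S)$ and $J$ and the way the similitude factor is attached are our choices, not the book's construction) checks both sets of relations on random elements of $GSp_2^+(\mathbb{Q})$ built from integral symplectic matrices and a scalar similitude.

```python
import random

n = 2  # check the block relations for GSp_2, i.e. 4x4 matrices

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def tr(X):
    return [[X[j][i] for j in range(len(X))] for i in range(len(X[0]))]

def sub(X, Y):
    return [[X[i][j] - Y[i][j] for j in range(len(X))] for i in range(len(X))]

def ident(m, c=1):
    return [[c if i == j else 0 for j in range(m)] for i in range(m)]

def blocks(M):
    A = [row[:n] for row in M[:n]]; B = [row[n:] for row in M[:n]]
    C = [row[:n] for row in M[n:]]; D = [row[n:] for row in M[n:]]
    return A, B, C, D

def cond_31(M, r):  # the relations (3.1)
    A, B, C, D = blocks(M)
    return (mul(tr(A), C) == mul(tr(C), A) and mul(tr(B), D) == mul(tr(D), B)
            and sub(mul(tr(A), D), mul(tr(C), B)) == ident(n, r))

def cond_32(M, r):  # the relations (3.2)
    A, B, C, D = blocks(M)
    return (mul(A, tr(B)) == mul(B, tr(A)) and mul(C, tr(D)) == mul(D, tr(C))
            and sub(mul(A, tr(D)), mul(B, tr(C))) == ident(n, r))

def T(S):  # integral symplectic generator [[E, S], [0, E]], S symmetric
    M = ident(2 * n)
    for i in range(n):
        for j in range(n):
            M[i][n + j] = S[i][j]
    return M

J = [[0, 0, 1, 0], [0, 0, 0, 1], [-1, 0, 0, 0], [0, -1, 0, 0]]

random.seed(1)
for _ in range(20):
    a, b, c = (random.randint(-3, 3) for _ in range(3))
    r = random.randint(1, 5)
    M = mul(mul(T([[a, b], [b, c]]), J), T([[c, a], [a, b]]))  # r(M) = 1
    M = mul(M, [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, r, 0], [0, 0, 0, r]])
    assert cond_31(M, r) and cond_32(M, r)
print("relations (3.1) and (3.2) hold on all 20 samples")
```

Multiplying a symplectic matrix by $\mathrm{diag}(E_n, rE_n)$ multiplies the similitude factor by $r$, which is why both checks are run against the same $r$.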
LEMMA 3.1. Let $K$ be an arbitrary congruence subgroup of the modular group
$\Gamma=\Gamma^n=Sp_n(\mathbb{Z})$. Then the commensurator of $K$ in the group $S^n$ is all of $S^n$. In
particular, $(K,S^n)$ is a Hecke pair.

PROOF. Let $M\in S^n$. According to §3.3 of Chapter 2, the intersection $M^{-1}KM\cap\Gamma$
is a congruence subgroup of $\Gamma$. The group $K^{(M)}=K\cap M^{-1}KM=\Gamma\cap K\cap M^{-1}KM$
is also a congruence subgroup, and so it has finite index in $K$. If we replace $M$ by
$M^{-1}$, we see that $K^{(M^{-1})}$ has finite index in $K$, and therefore $K^{(M)}=M^{-1}K^{(M^{-1})}M$
has finite index in $M^{-1}KM$. $\square$

Using Lemma 3.1 as a point of departure, one could determine the Hecke ring
of the pair $(K,S^n)$ and then consider its representations on spaces of modular forms
for the group $K$. However, the structure of the Hecke rings that arise is in general

unknown, and one does not yet have a concrete general theory of Hecke operators.
Because our constructions are not meant as an end in themselves, but rather as a means
for studying Diophantine problems in number theory, we shall simplify the situation
by, in the first place, limiting ourselves to the types of congruence subgroups that arise
in arithmetic, and, in the second place, considering certain subrings of the Hecke ring
of the pair (K, sn), rather than the entire Hecke ring.
We first prove an approximation lemma.
LEMMA 3.2. (1) The natural homomorphisms mod $q$
$$SL_n(\mathbb{Z})\to SL_n(\mathbb{Z}/q\mathbb{Z})$$
and
$$\Gamma^n=Sp_n(\mathbb{Z})\to Sp_n(\mathbb{Z}/q\mathbb{Z}),$$
where $n,q\in\mathbb{N}$, are epimorphisms.
(2) If $q$ and $q_1$ are relatively prime, then
$$\Gamma^n(q)\Gamma^n(q_1)=\Gamma^n.$$

PROOF. We prove the first part separately for each of the two groups using induction
on $n$. For $SL_n$ it is obvious in the case $n=1$. Suppose that $n>1$, and
the claim has been proved for $SL_m$ with $m<n$. Let $T$ be an $n\times n$ integer matrix
such that $\det T\equiv 1\pmod q$. We can replace $T$ by a matrix congruent to it modulo $q$
that has the property that the entries in its first column are relatively prime. Then, by
Lemma 3.8 of Chapter 1, there exists $V_1\in SL_n(\mathbb{Z})$ such that $V_1T=\begin{pmatrix}1&T_2\\0&T_4\end{pmatrix}$. Since
$\det T_4=\det T\equiv 1\pmod q$, it follows by the induction assumption that there exists
$V_2\in SL_{n-1}(\mathbb{Z})$ that is congruent to $T_4$ modulo $q$. Then $T\equiv V_1^{-1}\begin{pmatrix}1&T_2\\0&V_2\end{pmatrix}\pmod q$,
and the last matrix obviously lies in $SL_n(\mathbb{Z})$. This proves the first part of the lemma for
$SL_n$. Since $Sp_1(\mathbb{Z})=SL_2(\mathbb{Z})$, part one of the lemma holds for $Sp_1(\mathbb{Z})$. Suppose that
$n>1$, and the claim has already been proved for $Sp_m$ with $m<n$. If $M$ is a $2n\times 2n$
integer matrix satisfying the congruence $J_n[M]\equiv J_n\pmod q$, then we first replace the
entries in the first column of $M$ by suitable integers congruent to them modulo $q$ so
that they are relatively prime. Applying Lemma 3.9 of Chapter 1 to the first column of
$M$, we find that there exists $g\in\Gamma^n$ such that the first column of the matrix $M'=gM$
is the same as the first column of the identity matrix $E_{2n}$. The reader can easily verify
that it is always possible to choose matrices $V\in SL_n(\mathbb{Z})$ and $S={}^tS\in M_n$ in such a
way that the first row of the matrix
$$M''=\begin{pmatrix}A&B\\C&D\end{pmatrix}=M'g_1,\quad\text{where } g_1=\begin{pmatrix}V&0\\0&V^*\end{pmatrix}\begin{pmatrix}E_n&S\\0&E_n\end{pmatrix},$$
is the same as the first row of $E_{2n}$. Thus,
$$A=\begin{pmatrix}1&0\\0&A_0\end{pmatrix},\quad B=\begin{pmatrix}0&0\\b&B_0\end{pmatrix},\quad C=\begin{pmatrix}0&c_2\\0&C_0\end{pmatrix},\quad D=\begin{pmatrix}d_1&d_2\\d&D_0\end{pmatrix},$$
where $A_0$, $B_0$, $C_0$, and $D_0$ are $(n-1)\times(n-1)$-blocks. Like $M$, the matrix $M''$ satisfies
the congruence $J_n[M'']\equiv J_n\pmod q$, which is equivalent to the following congruences
for the blocks:
$${}^tAC\equiv{}^tCA,\qquad {}^tBD\equiv{}^tDB,\qquad {}^tAD-{}^tCB\equiv E_n\pmod q.$$
124 3. HECKE RINGS

The first of these congruences shows that
$$c_2\equiv 0\pmod q\quad\text{and}\quad {}^tA_0C_0\equiv{}^tC_0A_0\pmod q.$$
From the third congruence it follows that
$$d_1\equiv 1,\quad d_2\equiv 0,\quad\text{and}\quad {}^tA_0D_0-{}^tC_0B_0\equiv E_{n-1}\pmod q,$$
and the second congruence implies that ${}^tB_0D_0\equiv{}^tD_0B_0\pmod q$. In particular, it follows
from all of these congruences that the matrix $M_0=\begin{pmatrix}A_0&B_0\\C_0&D_0\end{pmatrix}$ satisfies the condition
$J_{n-1}[M_0]\equiv J_{n-1}\pmod q$. Hence, by the induction assumption, there exists a matrix
$g_0\in\Gamma^{n-1}$ such that $g_0M_0\equiv E_{2n-2}\pmod q$. Then the above congruences and the
symplectic condition modulo $q$ imply that $\hat g_0M''\equiv E_{2n}\pmod q$, where $\hat g_0$ is the matrix
in $\Gamma^n$ obtained from $g_0$ under the map $\varphi$ of (3.53) of Chapter 2. Returning
to our original matrix, we see that $M\equiv g^{-1}\hat g_0^{-1}g_1^{-1}\pmod q$. The first part of the
lemma is proved.
It follows from the first part of the lemma that for $q\in\mathbb{N}$ the index $\mu(\Gamma^n(q))$ is
equal to the order of the group $Sp_n(\mathbb{Z}/q\mathbb{Z})$, i.e., it is equal to the number of solutions
of a certain system of polynomial congruences modulo $q$ having rational coefficients
that do not depend on $q$. It follows from the Chinese Remainder Theorem that the
number of solutions of such a system modulo a composite number of the form $qq_1$
with $q$ prime to $q_1$ is equal to the product of the number of solutions modulo $q$ and
the number of solutions modulo $q_1$. In particular, the indices are multiplicative:
$$\mu(\Gamma^n(qq_1))=\mu(\Gamma^n(q))\mu(\Gamma^n(q_1)),\quad\text{if }(q,q_1)=1.$$
Furthermore, because $q$ and $q_1$ are relatively prime, we have $\Gamma^n(q)\cap\Gamma^n(q_1)=\Gamma^n(qq_1)$,
from which we conclude that the imbedding $\Gamma^n(q)\to\Gamma^n$ determines an imbedding of
quotient groups
$$\Gamma^n(q)/\Gamma^n(qq_1)\to\Gamma^n/\Gamma^n(q_1).$$
On the other hand, since the indices are multiplicative, it follows that both of these
quotient groups have the same order. Hence, this imbedding is a one-to-one correspondence.
Consequently, for any $\gamma\in\Gamma^n$ there exists $\gamma_1\in\Gamma^n(q_1)$ such that $\gamma\gamma_1\in\Gamma^n(q)$,
and so $\gamma\in\Gamma^n(q)\Gamma^n(q_1)$. $\square$
We now turn to the Hecke rings. For $n,q\in\mathbb{N}$ we define the group
(3.3)
$$S(\Gamma^n(q))=\Bigl\{M\in S^n\cap GL_{2n}(\mathbb{Z}_q);\ M\equiv\begin{pmatrix}E_n&0\\0&r(M)E_n\end{pmatrix}\pmod q\Bigr\},$$
where $\mathbb{Z}_q$ is the ring of $q$-integral rational numbers. We shall limit ourselves to groups
$K$, $\Gamma^n(q)\subset K\subset\Gamma^n$, which satisfy the following $q$-symmetry condition:
(3.4)
$$KS(\Gamma^n(q))=S(\Gamma^n(q))K.$$
For such groups the set
(3.5)
$$S(K)=S(K)_q=KS(\Gamma^n(q))K=KS(\Gamma^n(q))=S(\Gamma^n(q))K$$
is a subgroup of the group $S^n$. From Lemma 3.1 it follows that $(K,S(K))$ is a Hecke
pair. We let
(3.6)
$$L(K)=D_{\mathbb{Q}}(K,S(K))$$
denote the Hecke ring of this pair over $\mathbb{Q}$.

THEOREM 3.3. Let $K$ and $K_1$ be two groups such that
$$\Gamma^n(q)\subset K_1\subset K\subset\Gamma^n,$$
where $n,q\in\mathbb{N}$. Suppose that $K$ and $K_1$ each satisfies the $q$-symmetry condition (3.4).
Then:
(1) $S(K)=KS(K_1)=S(K_1)K$;
(2) $\Gamma^n\cap S(K)=K$;
(3) if $M,M'\in S(K_1)$ and $M'\in KMK$, then $M'\in\Gamma^n(q)MK_1$;
(4) if $M\in S(K_1)$ and $(M)_K=\sum_i(KM_i)$ is the expansion into left cosets, and if
all $M_i\in S(K_1)$, then $(M)_{K_1}=\sum_i(K_1M_i)$ and conversely, where the double cosets have
the form (1.11);
(5) the linear maps $\varepsilon\colon L(K)\to L(K_1)$ and $\zeta\colon L(K_1)\to L(K)$ given by the conditions
$$\varepsilon((M)_K)=(M')_{K_1}\quad(M\in S(K),\ M'\in KMK\cap S(K_1)),$$
$$\zeta((M')_{K_1})=(M')_K\quad(M'\in S(K_1))$$
are mutually inverse isomorphisms of rings.
PROOF. By (3.5) we have
$$S(K)=KS(\Gamma^n(q))=KK_1S(\Gamma^n(q))=KS(K_1).$$
Similarly, $S(K)=S(K_1)K$. Part (1) is proved.
If $g\in\Gamma^n\cap S(K_1)$, then $g=g_1M$, where $g_1\in K_1$ and $M\in S(\Gamma^n(q))$. Then
$M=g_1^{-1}g\in\Gamma^n\cap S(\Gamma^n(q))$. The latter intersection is clearly $\Gamma^n(q)$. Thus, $g\in
K_1\Gamma^n(q)\subset K_1$. Part (2) is proved.
We now prove part (3). By assumption, $M'=\gamma M\gamma'$, where $\gamma,\gamma'\in K$. We choose
an integer $q_1$ prime to $q$ such that $q_1M^{\pm 1}$ are integer matrices. By Lemma 3.2(2), the
matrix $\gamma$ can be written in the form $\gamma=\gamma_1\gamma_2$, where $\gamma_1\in\Gamma^n(q)$ and $\gamma_2\in\Gamma^n(q_1^2)$. Then
$M'=\gamma_1M\gamma_3$, where $\gamma_3=M^{-1}\gamma_2M\gamma'$. Because of our choice of $\gamma_2$, the last matrix is
an integer matrix, and hence $\gamma_3\in\Gamma^n$. On the other hand, $\gamma_3=M^{-1}\gamma_1^{-1}M'\in S(K_1)$.
Thus, $\gamma_3\in\Gamma^n\cap S(K_1)=K_1$. This proves part (3).
To prove part (4) we note that the first of the expansions implies that the left cosets
$K_1M_i$ are pairwise distinct, and their union contains the double coset $K_1MK_1$. Hence,
to prove the claim in one direction it suffices to verify that $M_i$ lies in $K_1MK_1$ for every
$i$, and this follows from part (3). This result along with part (1) obviously implies the
claim in the converse direction. Part (4) is proved.
Finally, from parts (1), (2), and (4) it follows that the Hecke pairs $(K,S(K))$ and
$(K_1,S(K_1))$ satisfy the conditions (1.26) and (1.27). By Proposition 1.9, we then have
the map $\varepsilon\colon D(K,S(K))\to D(K_1,S(K_1))$, and this map is an isomorphism of rings.
From part (4) and the definition of $\varepsilon$ it follows that $\varepsilon$ coincides with the first of the
maps in part (5). The second map is clearly the inverse of the first, and so is also an
isomorphism. $\square$

For subgroups $K\subset\Gamma^n$ containing $\Gamma^n(q)$ and satisfying the $q$-symmetry condition,
the above theorem says that the Hecke rings $L(K)$ are canonically isomorphic. Hence,
from an abstract point of view, when studying the Hecke rings one can start with
any $K$ with these properties. When we investigated the symplectic transformations
of theta-series of quadratic forms in Chapter 1, we arrived at congruence subgroups
of the form $\Gamma_0^n(q)$. We shall use the same congruence subgroups as the basis for our
development of a theory of Hecke rings, because, in addition to their arithmetic origin,

these groups have a number of technical advantages that make the calculations easier.
Using Theorem 3.3, a reader who has need of results for the rings $L(K)$ for other $K$
will easily be able to obtain them from the corresponding results for $L(\Gamma_0^n(q))$.
We start with some technical lemmas that give information on the left and double
cosets of $\Gamma_0^n(q)$. We set
(3.7)
$$S^n(q,q_1)=\Bigl\{M=\begin{pmatrix}A&B\\C&D\end{pmatrix}\in S^n\cap GL_{2n}(\mathbb{Z}_q);\ C\equiv 0\pmod{q_1}\Bigr\},\qquad S^n(q)=S^n(q,q),$$
where in the first case we assume that $q_1$ divides $q$. These are clearly subgroups of $S^n$.

LEMMA 3.4. Every left coset $\Gamma_0^n(q_1)M$, where $M\in S^n(q,q_1)$ and $q_1|q$, contains a
representative of the form $M_0=\begin{pmatrix}A&B\\0&D\end{pmatrix}$.

PROOF. By Proposition 3.7 of Chapter 1, there exists $\gamma\in\Gamma^n$ such that $\gamma M=M_0$
has the required form. But obviously $\gamma=M_0M^{-1}\in S^n(q,q_1)\cap\Gamma^n=\Gamma_0^n(q_1)$. $\square$

LEMMA 3.5. We have
$$S^n(q,q_1)=\Gamma_0^n(q_1)S(\Gamma^n(q))=S(\Gamma^n(q))\Gamma_0^n(q_1),$$
where $q_1|q$ and $S(\Gamma^n(q))$ is the set (3.3). In particular, the group $\Gamma_0^n(q_1)$ has the $q$-symmetry
property for any $q$ that is divisible by $q_1$.

PROOF. It suffices to prove, say, the first equality, since the second one can then be
obtained by applying the group anti-automorphism $M\to M^{-1}$. Let $M\in S^n(q,q_1)$.
By the previous lemma, we may assume that $M=M_0=\begin{pmatrix}A&B\\0&D\end{pmatrix}$. The matrix
$$M_1=\begin{pmatrix}A_1&B_1\\0&D_1\end{pmatrix},\quad\text{where } A_1=A^{-1},\ B_1=-A^{-1}BD^{-1},\ D_1={}^tA,$$
is obviously $q$-integral, and modulo $q$ it belongs to the group $Sp_n(\mathbb{Z}/q\mathbb{Z})$. Then by
Lemma 3.2(1) there exists $\gamma\in\Gamma^n$ such that $\gamma\equiv M_1\pmod q$. Then
$$\gamma\in\Gamma_0^n(q)\subset\Gamma_0^n(q_1),\quad\text{and}\quad \gamma M_0\equiv\begin{pmatrix}E_n&0\\0&r(M_0)E_n\end{pmatrix}\pmod q.\ \square$$

Lemma 3.5 implies that $S(\Gamma_0^n(q))_q=S^n(q)$, and the Hecke ring (3.6) for the
group $K=\Gamma_0^n(q)$ has the form
(3.8)
$$L^n(q)=L(\Gamma_0^n(q))=D_{\mathbb{Q}}(\Gamma_0^n(q),S^n(q)).$$
By Lemma 1.5, we can take elements of the form (1.11) as a $\mathbb{Q}$-basis of the ring
$L^n(q)$, one element for each double coset $\Gamma_0^n(q)M\Gamma_0^n(q)$ of the group $S^n(q)$ modulo
the subgroup $\Gamma_0^n(q)$.
LEMMA 3.6. Every double coset $\Gamma_0^n(q)M\Gamma_0^n(q)$, where $M\in S^n(q)$, contains one and
only one representative of the form
(3.9)
$$sd(M)=\mathrm{diag}(d_1,\dots,d_n;e_1,\dots,e_n),$$
where $d_i,e_i>0$, $d_i|d_{i+1}$, $d_n|e_n$, $e_{i+1}|e_i$, $d_ie_i=r(M)$.

PROOF. We first treat the case $q=1$ using induction on $n$. Since $\Gamma^1=SL_2(\mathbb{Z})$
and $S^1(1)$ is the group of $2\times 2$ rational matrices with positive determinant, the lemma
in the case $n=1$ follows from Lemma 2.2. Suppose that $n>1$, and the lemma
for $q=1$ has been proved for $(2(n-1)\times 2(n-1))$-matrices. In the argument
below we shall systematically make use of partitions of the standard blocks of a matrix
$M=\begin{pmatrix}A&B\\C&D\end{pmatrix}\in S^n$ into smaller blocks of fixed size. For example, $A=\begin{pmatrix}A_1&A_2\\A_3&A_4\end{pmatrix}$,
$B=\begin{pmatrix}B_1&B_2\\B_3&B_4\end{pmatrix}$, and so on, where $A_4$, $B_4$, $C_4$, $D_4$ are $(n-1)\times(n-1)$-blocks, and
the sizes of the other blocks are determined correspondingly. Let $M'\in S^n=S^n(1)$,
and let $d_1$ be the greatest common divisor of the entries in $M'$. Then $M'=d_1M$,
where $M\in S^n$ is an integer matrix with relatively prime entries. We let $\delta=\delta(M)$
denote the minimum of the greatest common divisors of the entries in the columns of $M$,
and we prove by induction on $\delta$ that the double coset $\Gamma^nM\Gamma^n$ contains a representative
$\begin{pmatrix}A&B\\C&D\end{pmatrix}$ with blocks of the form
(3.10)
$$A=\begin{pmatrix}1&0\\0&A_4\end{pmatrix},\quad B=\begin{pmatrix}0&0\\0&B_4\end{pmatrix},\quad C=\begin{pmatrix}0&0\\0&C_4\end{pmatrix},\quad D=\begin{pmatrix}r(M)&0\\0&D_4\end{pmatrix},$$
where $\begin{pmatrix}A_4&B_4\\C_4&D_4\end{pmatrix}\in S^{n-1}$. First suppose that $\delta=1$, and let $i$ be the index of the first
column of $M$ whose entries are relatively prime. By replacing $M$ by $MJ_n$ if necessary,
we may suppose that $i\le n$. If we then replace $M$ by $MU(V^*)$, where $V\in\Lambda^n$ is a
suitable permutation matrix, we may assume that $i=1$. We now apply Lemma 3.9
of Chapter 1 to the first column of $M$; we find that the left coset $\Gamma^nM$ contains a
representative whose $A_1$-block is $1$ and whose $A_3$-, $C_1$-, and $C_3$-blocks consist of zeros.
After multiplying this matrix on the right by
$$U(V^*)=\begin{pmatrix}V&0\\0&V^*\end{pmatrix},\quad\text{where } V=\begin{pmatrix}1&-A_2\\0&E_{n-1}\end{pmatrix},$$
we may assume that its $A_2$-block also consists of zeros. If we multiply the last matrix
on the right by the matrix
$$T(S)=\begin{pmatrix}E_n&S\\0&E_n\end{pmatrix},\quad\text{where } S=\begin{pmatrix}-B_1&-B_2\\-{}^tB_2&0\end{pmatrix},$$
we obtain a matrix with zero $B_1$- and $B_2$-blocks. Thus, in our double coset we have
found a matrix with $A_1=1$, $A_2=0$, $B_1=0$, $B_2=0$, $A_3=0$, $C_1=0$, and $C_3=0$.
From (3.1)-(3.2) it now follows that this matrix has the form (3.10). For example,
the first relation in (3.1) shows that $C_2=0$ and ${}^tA_4C_4={}^tC_4A_4$, the first relation in
(3.2) leads to the equalities $B_3=0$ and $A_4{}^tB_4=B_4{}^tA_4$, and so on. Now suppose that
$\delta>1$, the claim has been proved for all integer matrices $M'\in S^n$ with relatively prime
entries and with $\delta(M')<\delta$, and $\delta(M)=\delta$. Just as in the above discussion, in the
double coset of $M$ we can find a representative $M_0$ with $A_1=\delta$, with zero $A_3$-, $C_1$-, and
$C_3$-blocks, and with the property that all of the entries in the $A_2$-, $B_1$-, and $B_2$-blocks
are between $1$ and $\delta$. Then $\delta(M_0)<\delta$. In fact, we obviously have $\delta(M_0)\le\delta$. If
$\delta(M_0)$ were equal to $\delta$, then all of the entries of $M_0$, and hence all of the entries of
$M$, would be divisible by $\delta$, contradicting the assumption that its entries are relatively
prime. By the induction assumption, the double coset $\Gamma^nM_0\Gamma^n=\Gamma^nM\Gamma^n$ contains a

representative of the form (3.10), and the proof of the claim is complete. Returning to
the proof of the lemma, we see that the $\Gamma^n$-double coset of an arbitrary matrix $M\in S^n$
contains a representative $M_0=\begin{pmatrix}A&B\\C&D\end{pmatrix}$ with blocks of the form
$$A=\begin{pmatrix}d_1&0\\0&A'\end{pmatrix},\quad B=\begin{pmatrix}0&0\\0&B'\end{pmatrix},\quad C=\begin{pmatrix}0&0\\0&C'\end{pmatrix},\quad D=\begin{pmatrix}e_1&0\\0&D'\end{pmatrix},$$
where $d_1,e_1>0$, $d_1|e_1$, $d_1e_1=r(M)$, $M'=\begin{pmatrix}A'&B'\\C'&D'\end{pmatrix}\in S^{n-1}$, and all of the
entries in $M'$ are divisible by $d_1$. By the induction assumption, there exist $\xi,\xi_1\in\Gamma^{n-1}$
such that the matrix $\xi M'\xi_1$ has the form (3.9). Then the matrix $\hat\xi M_0\hat\xi_1$, where for
$\xi=\begin{pmatrix}\alpha&\beta\\\gamma&\delta\end{pmatrix}\in\Gamma^{n-1}$ we set
$$\hat\xi=\begin{pmatrix}1&0&0&0\\0&\alpha&0&\beta\\0&0&1&0\\0&\gamma&0&\delta\end{pmatrix}\in\Gamma^n,$$
has the form (3.9). The uniqueness of the $\Gamma^n$-double coset representative of the form
(3.9) follows from Lemma 2.2, since the numbers $d_1,\dots,d_n,e_n,e_{n-1},\dots,e_1$ obviously
are the elementary divisors of this matrix in the sense of §2.1. The lemma is proved in
the case $q=1$.
We now turn to the case of arbitrary $q\ge 1$. If $M\in S^n(q)$, then, by what was
proved above, there exist $\gamma_1,\gamma_2\in\Gamma^n$ such that $\gamma_1M\gamma_2=sd(M)$. By Lemma 3.5,
the groups $K=\Gamma^n$ and $K_1=\Gamma_0^n(q)$ satisfy the conditions of Theorem 3.3. Since
$sd(M)\in S^n(q)$, it follows from part (3) of that theorem that $sd(M)\in\Gamma_0^n(q)M\Gamma_0^n(q)$;
the uniqueness of this element follows from its uniqueness in the larger double coset
$\Gamma^nM\Gamma^n$. $\square$
We call a matrix $sd(M)$ of the form (3.9) the symplectic divisor matrix of $M$,
and we call $d_i=d_i(M)$ and $e_i=e_i(M)$ $(i=1,\dots,n)$ the symplectic divisors of $M$.
Clearly, the numbers $d_1,\dots,d_n,e_n,\dots,e_1$ are the elementary divisors of $M$. From
Lemmas 1.5 and 3.6 it follows that the elements of the form
(3.11)
$$(\mathrm{diag}(d_1,\dots,d_n;e_1,\dots,e_n))_{\Gamma_0^n(q)}$$
form a basis of the space $L^n(q)$ over $\mathbb{Q}$, where $d_i$, $e_j$ are positive rational numbers that
are $q$-integral, have $q$-integral inverses, and satisfy the conditions
(3.12)
$$d_i|d_{i+1},\quad d_n|e_n,\quad e_{i+1}|e_i,\quad d_ie_i=d_1e_1\quad(1\le i<n).$$
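For a concrete integral $M$ the symplectic divisors can be read off, up to reordering, from the elementary divisors, which a short Smith-normal-form routine computes. The sketch below is an illustration (the unimodular factors $J$ and $T(S)$ are arbitrary choices); it recovers the chain $1,2,6,12$ from a scrambled translate of $sd(M)=\mathrm{diag}(1,2;12,6)$.

```python
def smith_divisors(M):
    """Elementary divisors of a nonsingular integer matrix (naive Smith form)."""
    A = [row[:] for row in M]
    k = len(A)
    divs = []
    for t in range(k):
        while True:
            piv = None  # nonzero entry of minimal absolute value in the block
            for i in range(t, k):
                for j in range(t, k):
                    if A[i][j] and (piv is None or abs(A[i][j]) < abs(A[piv[0]][piv[1]])):
                        piv = (i, j)
            i0, j0 = piv
            A[t], A[i0] = A[i0], A[t]
            for row in A:
                row[t], row[j0] = row[j0], row[t]
            done = True
            for i in range(t + 1, k):          # clear column t by row operations
                q = A[i][t] // A[t][t]
                for j in range(t, k):
                    A[i][j] -= q * A[t][j]
                done = done and A[i][t] == 0
            for j in range(t + 1, k):          # clear row t by column operations
                q = A[t][j] // A[t][t]
                for i in range(t, k):
                    A[i][j] -= q * A[i][t]
                done = done and A[t][j] == 0
            if done:
                bad = next(((i, j) for i in range(t + 1, k) for j in range(t + 1, k)
                            if A[i][j] % A[t][t]), None)
                if bad is None:                # pivot divides the rest: next step
                    divs.append(abs(A[t][t]))
                    break
                for jj in range(t, k):         # force divisibility, then repeat
                    A[t][jj] += A[bad[0]][jj]
    return divs

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

# sd(M) = diag(1, 2; 12, 6): 1 | 2 | 6 | 12 and d_i * e_i = r(M) = 12
M0 = [[1, 0, 0, 0], [0, 2, 0, 0], [0, 0, 12, 0], [0, 0, 0, 6]]
TS = [[1, 0, 2, 1], [0, 1, 1, 3], [0, 0, 1, 0], [0, 0, 0, 1]]  # det 1
J = [[0, 0, 1, 0], [0, 0, 0, 1], [-1, 0, 0, 0], [0, -1, 0, 0]]
M = mul(mul(J, TS), mul(M0, mul(TS, J)))
assert smith_divisors(M) == [1, 2, 6, 12]
print("elementary divisors of the translate:", smith_divisors(M))
```

Multiplication by unimodular matrices leaves the elementary divisors unchanged, which is the computational content of the uniqueness statement in Lemma 3.6.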

We now turn to the multiplicative properties of the ring $L^n(q)$.

THEOREM 3.7. For $n,q\in\mathbb{N}$ the ring $L^n(q)$ is commutative.

PROOF. The map $M\to r(M)^{-1}M$ is obviously an automorphism of the group
$S^n(q)$ that does not affect the elements of $\Gamma=\Gamma_0^n(q)$. Hence, the $\mathbb{Q}$-linear map from
$L^n(q)$ to itself that is given by the condition $(M)_\Gamma\to(r(M)^{-1}M)_\Gamma$ for $M\in S^n(q)$

is an automorphism of the ring $L^n(q)$. It now follows from Proposition 1.11 that the
map
(3.13)
$$X\to X^*\colon\ (M)_\Gamma\to(M)^*_\Gamma=(r(M)M^{-1})_\Gamma\quad(M\in S^n(q))$$
is an anti-automorphism of the ring $L^n(q)$. In particular, $(XY)^*=Y^*X^*$ for $X,Y\in
L^n(q)$. On the other hand, if $M_0=sd(M)=\mathrm{diag}(d_1,\dots,d_n;e_1,\dots,e_n)$ and $M\in
S^n(q)$, then by Lemma 3.6 we have $(M)_\Gamma=(M_0)_\Gamma$, and hence
$$(M)^*_\Gamma=(M_0)^*_\Gamma=(\mathrm{diag}(e_1,\dots,e_n;d_1,\dots,d_n))_\Gamma=(J_nM_0J_n^{-1})_\Gamma.$$
Since the $\Gamma^n$-double coset of the matrix $J_nM_0J_n^{-1}$ coincides with $\Gamma^nM_0\Gamma^n$, it follows
that $sd(J_nM_0J_n^{-1})=M_0$. Since the symplectic divisor matrix does not depend on $q$,
by Lemma 3.6 we obtain $(J_nM_0J_n^{-1})_\Gamma=(M_0)_\Gamma=(M)_\Gamma$. Thus,
(3.14)
$$X^*=X\quad\text{for } X\in L^n(q),$$
and $XY=(XY)^*=Y^*X^*=YX$. $\square$

We now examine the products of concrete elements of $L^n(q)$.

LEMMA 3.8. For any $M\in S^n(q)$ and $r\in\mathbb{Z}_q^*$ the following relations hold in the ring
$L^n(q)$:
(3.15)
$$(rE)_\Gamma(M)_\Gamma=(M)_\Gamma(rE)_\Gamma=(rM)_\Gamma,$$
where $E=E_{2n}$ and $\Gamma=\Gamma_0^n(q)$.

PROOF. This lemma is proved in the same way as Lemma 2.4, but with $\Lambda$ replaced
by $\Gamma$ and $g$ replaced by $M$. $\square$

PROPOSITION 3.9. Let $M,M'\in S^n(q)$. Suppose that the symplectic divisor ratios
$e_1(M)/d_1(M)$ and $e_1(M')/d_1(M')$ are relatively prime. Then the relation
(3.16)
$$(M)_\Gamma(M')_\Gamma=(MM')_\Gamma,$$
where $\Gamma=\Gamma_0^n(q)$, holds in the Hecke ring $L^n(q)$.

PROOF. The proof of this proposition is similar to the proof of Proposition 2.5,
with obvious modifications, so we shall be brief. From Lemma 3.8 and the definition
of the symplectic divisors it follows that one need only prove the proposition in the
case when $M$ and $M'$ are integer matrices and $r=r(M)$ and $r'=r(M')$ are relatively
prime. In analogy with (2.8) we obtain
(3.17)
$$(M)_\Gamma(M')_\Gamma=(MM')_\Gamma+\sum_{\Gamma H\Gamma\subset\Gamma M\Gamma M'\Gamma}a(M,M';H)(H)_\Gamma,$$
where $a(M,M';H)$ are nonnegative integers depending only on the $\Gamma$-double cosets
of $M$, $M'$, and $H$. For $m\in\mathbb{N}$ we let
(3.18)
$$SD(m)=SD_n(m)=\{\mathrm{diag}(d_1,\dots,d_n;e_1,\dots,e_n);\ d_i,e_i\in\mathbb{N},\ d_i|d_{i+1},\ d_n|e_n,\ e_{i+1}|e_i,\ d_ie_i=m\}$$

denote the set of all integer matrices of the form (3.9) with $r(M)=m$. By Lemma
3.6, we may assume that $M\in SD(r)$, $M'\in SD(r')$, and $H\in SD(rr')$ for all $H$ in (3.17).
If $m$ is prime to $q$, we can define the element
(3.19)
$$T(m)=T_n(m)=\sum_{M\in SD_n(m)}(M)_\Gamma$$
of the ring $L^n(q)$. If $m$ and $m'$ are relatively prime and also prime to $q$, then, summing
(3.17) over all $M\in SD(m)$ and $M'\in SD(m')$, we obtain
$$T(m)T(m')=T(mm')+\sum_{H\in SD_n(mm')}\Bigl(\sum_{M,M'}a(M,M';H)\Bigr)(H)_\Gamma.$$
This implies that to prove (3.16) it suffices to show that
(3.20)
$$T(m)T(m')=T(mm'),\quad\text{if }(m,m')=(mm',q)=1,$$
where $T(1)=T_n(1)$, since in that case it will follow from the nonnegativity of the
$a(M,M';H)$ that all of these coefficients are zero, and (3.17) will turn into (3.16).
$\square$

LEMMA 3.10. Let $m\in\mathbb{N}$, $(m,q)=1$. Then the set
(3.21)
$$SM(m)=SM_n(m,q)=\{M\in S^n(q)\cap M_{2n};\ r(M)=m\}$$
is the union of finitely many left cosets of the group $\Gamma=\Gamma_0^n(q)$, and the element (3.19)
has an expansion of the form
(3.22)
$$T(m)=\sum_{M\in\Gamma\backslash SM(m)}(\Gamma M).$$
If $m$ and $m'$ are relatively prime (and also prime to $q$), and if the matrices $M$ and $M'$ run
through a set of representatives of the left cosets $\Gamma\backslash SM(m)$ and $\Gamma\backslash SM(m')$, respectively,
then the product $MM'$ runs through a complete set of representatives of the left cosets
$\Gamma\backslash SM(mm')$. In particular, the relation (3.20) holds.

PROOF. The first assertion follows from Lemmas 3.6 and 3.1.
If $\{M_1,\dots,M_\mu\}$ and $\{M_1',\dots,M_\nu'\}$ are fixed sets of representatives of the left
cosets $\Gamma\backslash SM(m)$ and $\Gamma\backslash SM(m')$, respectively, then every product $M_iM_j'$ is obviously
contained in $SM(mm')$. Suppose that two such products lie in the same $\Gamma$-left coset, say,
$\gamma M_iM_j'=M_kM_l'$, where $\gamma\in\Gamma$. We set $H=M_k^{-1}\gamma M_i=M_l'(M_j')^{-1}$. Then $mH$ and
$m'H$ are integer matrices, and since $(m,m')=1$ it follows that $H$ is an integer matrix.
On the other hand, $H\in S^n(q)$ and $r(H)=1$. Thus, $H\in\Gamma^n\cap S^n(q)=\Gamma_0^n(q)=\Gamma$,
so that $M_k\in\Gamma M_i$ and $M_l'\in\Gamma M_j'$. This means that $k=i$ and $l=j$. Thus, all of the
products $M_iM_j'$ belong to different left cosets of $\Gamma\backslash SM(mm')$. If $\Gamma M_0$ ($M_0\in SM(mm')$)
is an arbitrary left coset, then it follows from Lemma 3.6 that $M_0$ can be written in the
form $M_0=MM'$, where $M\in SM(m)$ and $M'\in SM(m')$. Then $M'=\gamma'M_j'$, where
$\gamma'\in\Gamma$, and $M\gamma'=\gamma M_i$, where $\gamma\in\Gamma$; hence, $M_0=\gamma M_iM_j'$, and the left coset $\Gamma M_0$
contains the product $M_iM_j'$. Lemma 3.10, and hence also Proposition 3.9, are proved.
$\square$
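For $n=1$ and $q=1$ this is the classical multiplicativity of Hecke left cosets for $SL_2(\mathbb{Z})$, and it can be checked directly: the cosets of determinant $m$ are represented by $\begin{pmatrix}a&b\\0&d\end{pmatrix}$, $ad=m$, $0\le b<d$. The sketch below (illustrative; the reduction routine is our own) multiplies the representatives for coprime $m,m'$ and checks that, after reduction, they give exactly the representatives for $mm'$.

```python
def egcd(a, b):
    """Extended Euclid: returns (g, x, y) with g = gcd >= 0 and x*a + y*b = g."""
    if b == 0:
        return (abs(a), 1 if a >= 0 else -1, 0)
    g, x, y = egcd(b, a % b)
    return (g, y, x - (a // b) * y)

def reps(m):
    # left coset representatives of SL_2(Z)\{M integral, det M = m}
    return [(a, b, m // a) for a in range(1, m + 1) if m % a == 0
            for b in range(m // a)]

def reduce_coset(p, q, r, s):
    # bring an integral matrix [[p, q], [r, s]] with det > 0 to the
    # representative (a, b, d) of its SL_2(Z)-left coset
    g, x, y = egcd(p, r)          # U = [[x, y], [-r/g, p/g]] lies in SL_2(Z)
    d = (p * s - r * q) // g
    return (g, (x * q + y * s) % d, d)

m, m1 = 4, 9                      # coprime similitudes
prods = sorted(reduce_coset(a * a2, a * b2 + b * d2, 0, d * d2)
               for (a, b, d) in reps(m) for (a2, b2, d2) in reps(m1))
assert prods == sorted(reps(m * m1))
print(len(prods), "distinct cosets for m*m' = 36")
```

The count of cosets is $\sigma_1(m)=\sum_{d|m}d$, so here both sides consist of $\sigma_1(36)=91$ cosets.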

The next lemma is useful for explicitly computing the left coset expansions of
elements of the Hecke ring $L^n(q)$.

LEMMA 3.11. Every left coset $\Gamma M$, where $\Gamma=\Gamma_0^n(q)$ and $M\in S^n(q)$, contains one
and only one representative of the form
(3.23)
$$\begin{pmatrix}A&B\\0&D\end{pmatrix}=\begin{pmatrix}r(M)D^*&B\\0&D\end{pmatrix},$$
where $D$ belongs to a fixed $\Lambda^n$-left coset of $GL_n(\mathbb{Z}_q)$, and $B$ belongs to a fixed residue
class of the set
(3.24)
$$B(D)=B(D)_{\mathbb{Q}}=\{B\in M_n(\mathbb{Q});\ {}^tBD={}^tDB\}$$
modulo $D$, where
(3.25)
$$B'\equiv B\pmod D\iff B'=B+SD\ \text{with}\ S={}^tS\in M_n.$$

PROOF. The lemma follows directly from Lemma 3.4, the relations (3.1), and the
definitions. $\square$

Just as in the case of the general linear group, the study of the global Hecke rings
$L^n(q)$ reduces to the study of the local subrings. Let $p$ be a prime number not dividing
$q$. We set
(3.26)
$$S_p^n(q)=S^n(q)\cap GL_{2n}(\mathbb{Z}[p^{-1}]),$$
where $\mathbb{Z}[p^{-1}]$ is the ring (2.17). Since $\Gamma_0^n(q)\subset S_p^n(q)\subset S^n(q)$, it follows that
$(\Gamma_0^n(q),S_p^n(q))$ is a Hecke pair, and the Hecke ring
(3.27)
$$L_p^n(q)=D_{\mathbb{Q}}(\Gamma_0^n(q),S_p^n(q))$$
may be regarded as a subring of the Hecke ring $L^n(q)$. The subrings $L_p^n(q)\subset L^n(q)$
as $p$ runs through the primes not dividing $q$ are called the local subrings of $L^n(q)$.

THEOREM 3.12. For $n,q\in\mathbb{N}$ the Hecke ring $L^n(q)$ is generated by the local subrings
$L_p^n(q)$ as $p$ ranges over all primes not dividing $q$.
PROOF. For $r\in\mathbb{Q}^*$ and $p$ a prime number, as before we let $\nu_p(r)$ denote the power
with which $p$ occurs in the prime factorization of $r$. If $M\in S^n(q)$ and $p\nmid q$, we define
the symplectic $p$-divisor matrix of $M$ by setting
(3.28)
$$sd_p(M)=\mathrm{diag}(p^{\nu_p(d_1)},\dots,p^{\nu_p(d_n)};p^{\nu_p(e_1)},\dots,p^{\nu_p(e_n)}),$$
where $d_i=d_i(M)$, $e_i=e_i(M)$ are the symplectic divisors of $M$. Clearly $sd_p(M)\in
S_p^n(q)$. If $M$ is a fixed matrix, then obviously $sd_p(M)=E_{2n}$ for almost all $p$, and
(3.29)
$$\prod_{p\in\mathbf{P}(q)}sd_p(M)=sd(M)\quad(M\in S^n(q)).$$
Since $M\in\Gamma_0^n(q)sd(M)\Gamma_0^n(q)$, it follows from (3.29) that $M$ can be represented in the
form
(3.30)
$$M=\prod_{p\in\mathbf{P}(q)}M_p,\quad\text{where } M_p\in S_p^n(q),\ sd_p(M_p)=sd_p(M),$$

and $M_p=E_{2n}$ for all except finitely many $p$. From Proposition 3.9 it now follows that
the corresponding double coset in the Hecke ring $L^n(q)$ has the expansion
$$(M)_\Gamma=\prod_{p\in\mathbf{P}(q)}(M_p)_\Gamma,$$
where $\Gamma=\Gamma_0^n(q)$. Since $(M_p)_\Gamma\in L_p^n(q)$, and since $L^n(q)$ consists of finite linear
combinations of the $(M)_\Gamma$ (where $M\in S^n(q)$), this proves the theorem. $\square$

PROBLEM 3.13. Let $M,M'\in S^n(q)$. Suppose that the ratios $e_n(M)/d_1(M)$ and
$e_n(M')/d_1(M')$ are relatively prime. Show that $sd(MM')=sd(M)sd(M')$.

PROBLEM 3.14. Show that the following set can be taken as a set of representatives
of the left cosets of $SM_n(m,q)$ (where $(m,q)=1$) modulo the group $\Gamma_0^n(q)$:
$$\Bigl\{\begin{pmatrix}A&B\\0&D\end{pmatrix};\ D\in\Lambda^n\backslash(GL_n(\mathbb{Q})\cap M_n),\ d_n(D)|m,\ B\in B(D)\cap M_n\bmod D,\ A=mD^*\Bigr\}.$$

PROBLEM 3.15. Let $D$ be a nonsingular $n\times n$ integer matrix. Show that the number
$\rho(D)=|B(D)\cap M_n\bmod D|$ of residue classes of the set $B(D)\cap M_n$ modulo $D$
is finite and satisfies the relations
$$\rho(UDV)=\rho(D),\quad\text{if } U,V\in\Lambda^n,$$
$$\rho(D)=d_1^nd_2^{n-1}\cdots d_n,\quad\text{if } ed(D)=\mathrm{diag}(d_1,\dots,d_n).$$
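The second relation of the problem can be brute-forced for $n=2$ and diagonal $D$. In the sketch below (our parametrization of $B(D)$ and of the sublattice $\{SD\}$, written out for $D=\mathrm{diag}(d_1,d_2)$ with $d_1\mid d_2$), the count of residue classes matches $d_1^2d_2=d_1^nd_2^{n-1}$.

```python
from itertools import product

def rho(d1, d2, box=8):
    # B(D) ∩ M_2 for D = diag(d1, d2): integral B with tB*D symmetric,
    # i.e. b12*d1 == b21*d2.  Two such B are congruent mod D when they
    # differ by S*D with S = [[s11, s12], [s12, s22]] symmetric integral,
    # which shifts b11 by d1*Z, b22 by d2*Z, and (b12, b21) by Z*(d2, d1).
    classes = set()
    for b11, b12, b21, b22 in product(range(-box, box + 1), repeat=4):
        if b12 * d1 != b21 * d2:
            continue
        t = b21 // d1  # canonical representative modulo the (b12, b21)-shift
        classes.add((b11 % d1, b12 - t * d2, b21 - t * d1, b22 % d2))
    return len(classes)

for d1, d2 in [(1, 1), (1, 2), (1, 4), (2, 2), (2, 4)]:
    assert rho(d1, d2) == d1 ** 2 * d2
print("rho(D) = d1^2 * d2 confirmed on the sample diagonals")
```

The enumeration box is large enough that every residue class has a representative inside it for the divisors tested.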

PROBLEM 3.16. Show that the zeta-function of the ring $L^n(q)$, which is defined as
the Dirichlet series
$$Z_N(s)=\sum_{m\in\mathbf{N}(q)}N(T(m))m^{-s},$$
where $N\colon L^n(q)\to\mathbb{Q}$ is the homomorphism in Problem 1.16 and the real part of $s$ is
sufficiently large, converges and has an Euler product of the form
$$Z_N(s)=\prod_{p\in\mathbf{P}(q)}\sum_{\delta=0}^{\infty}N(T(p^{\delta}))p^{-\delta s}.$$
Show that
$$N(T(m))=\sum_{\substack{d_1,\dots,d_n\in\mathbb{N}\\ d_1|d_2|\cdots|d_n|m}}N(t(d_1,\dots,d_n))d_1^nd_2^{n-1}\cdots d_n,$$
where $t(d_1,\dots,d_n)=(\mathrm{diag}(d_1,\dots,d_n))_\Lambda\in H^n$ and the symbol $N$ on the right denotes
the corresponding homomorphism of the ring $H^n$.
[Hint: Use the two preceding problems to prove the last relation.]
PROBLEM 3.17. Show that for $L^2(q)$ one has
$$Z_N(s)=\zeta_q(s)\zeta_q(s-1)\zeta_q(s-2)\zeta_q(s-3)\zeta_q(2s-2)^{-1},$$
where
$$\zeta_q(s)=\sum_{m\in\mathbf{N}(q)}m^{-s}.$$

[Hint: Use the previous problem and the following identity, which is a consequence
of Problem 2.12:
$$\sum_{0\le\delta_1\le\delta_2}N(t(p^{\delta_1},p^{\delta_2}))v_1^{\delta_1}v_2^{\delta_2}=\frac{1-v_2^2}{(1-v_1v_2)(1-(p+1)v_2+pv_2^2)}.\Bigr]$$
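The right-hand side can be checked against the series numerically. In this sketch we take as given (an assumption spelled out in the comment, not a statement from the book) the standard left-coset count $N(t(p^{\delta_1},p^{\delta_2}))=p^{\delta_2-\delta_1-1}(p+1)$ for $\delta_2>\delta_1$, and compare a long partial sum with the closed form.

```python
from fractions import Fraction

p = 3
v1, v2 = Fraction(1, 7), Fraction(1, 11)

def N(a, b):
    # assumed left-coset count of t(p^a, p^b) in H^2, a <= b:
    # 1 on the diagonal, p^(b-a-1)*(p+1) otherwise
    return 1 if a == b else p ** (b - a - 1) * (p + 1)

lhs = sum(N(a, b) * v1 ** a * v2 ** b
          for b in range(60) for a in range(b + 1))
rhs = (1 - v2 ** 2) / ((1 - v1 * v2) * (1 - (p + 1) * v2 + p * v2 ** 2))
assert abs(float(lhs - rhs)) < 1e-12
print("hint identity matches to truncation error", float(lhs - rhs))
```

The tail of the series is dominated by $(pv_2)^{60}$, which is far below the tolerance used, so the agreement is meaningful.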
PROBLEM 3.18. Show that the Hecke pairs $(\Lambda^2,G^2)$ and $(\Gamma^1,S^1)$ satisfy the conditions
(1.26) and (1.28), so that the map (1.27) gives a natural isomorphism between
the rings $H^2$ and $L^1(1)$. From this and Problem 2.13 deduce that in the case $n=1$ the
elements (3.19) of the ring $L^1(q)$ can be multiplied by the rule
$$T(m)T(m_1)=\sum_{d|m,m_1}d\,(dE_2)_\Gamma T(mm_1/d^2),$$
where $(m,q)=(m_1,q)=1$ and $\Gamma=\Gamma_0^1(q)$.
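Applying the degree map of Problem 1.16 (which counts left cosets) to both sides of this rule gives a purely arithmetic identity that is easy to verify; here we take $q=1$, where $\deg T(m)=\sigma_1(m)$ and $\deg(dE_2)_\Gamma=1$. The sketch below is an illustration of that consequence of the rule, not of the rule itself.

```python
from math import gcd

# Under the degree map the rule predicts
#   sigma_1(m) * sigma_1(m1) = sum_{d | (m, m1)} d * sigma_1(m*m1 / d^2).
def sigma1(m):
    return sum(d for d in range(1, m + 1) if m % d == 0)

for m in range(1, 30):
    for m1 in range(1, 30):
        g = gcd(m, m1)
        rhs = sum(d * sigma1(m * m1 // d ** 2)
                  for d in range(1, g + 1) if g % d == 0)
        assert sigma1(m) * sigma1(m1) == rhs
print("degree identity verified for all m, m1 < 30")
```

This is the classical Hecke relation specialized to the divisor-sum eigenvalue, and it holds for all pairs, not only coprime ones.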


PROBLEM 3.19. Show that for $n=1$ one has the formal identity
$$\sum_{m\in\mathbf{N}(q)}T(m)m^{-s}=\prod_{p\in\mathbf{P}(q)}\bigl(1-T(p)p^{-s}+(pE_2)_\Gamma p^{1-2s}\bigr)^{-1}.$$

2. Local rings. In this subsection we study the structure of the local Hecke rings
$L_p^n(q)$, where $p$ is a prime not dividing $q$. We first note that this structure does not
depend on $q$.

LEMMA 3.20. Let $p$ be a prime not dividing $q$. Then the restriction of the map
$\varepsilon\colon D_{\mathbb{Q}}(\Gamma^n,S^n(q,1))\to L^n(q)$ to the subring
$$L_p^n=L_p^n(1)=D_{\mathbb{Q}}(\Gamma^n,S_p^n(1))$$
is an isomorphism of this ring with the ring $L_p^n(q)$, and
(3.31)
$$\varepsilon((M)_{\Gamma^n})=(M)_{\Gamma_0^n(q)}\quad(M\in S_p^n(q)).$$

PROOF. The lemma follows from Theorem 3.3(5) and Lemma 3.5. $\square$

This lemma enables us to restrict ourselves to the case $q=1$ when proving structure
theorems.
As in §2.2, it is convenient to reduce the study of $L_p^n(q)$ to the study of its integral
subring
(3.32)
$$\mathbf{L}_p^n(q)=D_{\mathbb{Q}}(\Gamma_0^n(q),S_p^n(q)\cap M_{2n}).$$
Lemma 3.20 implies that the ring $\mathbf{L}_p^n(q)$ is naturally isomorphic to the ring $\mathbf{L}_p^n=\mathbf{L}_p^n(1)$.

LEMMA 3.21. The element
(3.33)
$$\Delta=\Delta_n(p)=(pE_{2n})_{\Gamma_0^n(q)}$$
of the Hecke ring $L_p^n(q)$, where $(p,q)=1$, is invertible in $L_p^n(q)$, and $\Delta^{-1}=(p^{-1}E_{2n})_{\Gamma_0^n(q)}$.
The ring $L_p^n(q)$ is generated by $\Delta^{-1}$ and the subring $\mathbf{L}_p^n(q)$.

PROOF. The lemma follows from Lemma 3.8 and the definitions. $\square$

We set
(3.34)
$$T(p^{\delta_1},\dots,p^{\delta_n};p^{\varepsilon_1},\dots,p^{\varepsilon_n})=(\mathrm{diag}(p^{\delta_1},\dots,p^{\delta_n};p^{\varepsilon_1},\dots,p^{\varepsilon_n}))_{\Gamma_0^n(q)},$$
where $\delta_i,\varepsilon_j\in\mathbb{Z}$, $\delta_1+\varepsilon_1=\cdots=\delta_n+\varepsilon_n$. In this notation the ring $\mathbf{L}_p^n(q)$ consists of
linear combinations of elements of the form (3.34), where
(3.35)
$$0\le\delta_1\le\cdots\le\delta_n\le\varepsilon_n\le\cdots\le\varepsilon_1.$$
This follows from Lemmas 1.5 and 3.6. We say that such an element is primitive if
$\delta_1=0$, and that it is imprimitive if $\delta_1\ge 1$. An arbitrary element $T\in\mathbf{L}_p^n(q)$ is said to be
primitive (or imprimitive) if it is a linear combination of primitive (resp. imprimitive)
elements of the form (3.34)-(3.35). Clearly, any element $T$ in $\mathbf{L}_p^n(q)$ can be uniquely
represented in the form
(3.36)
$$T=T^{\mathrm{pr}}+T^{\mathrm{im}},$$
where $T^{\mathrm{pr}}$ is primitive and $T^{\mathrm{im}}$ is imprimitive. Lemma 3.21 implies that the subset $I$
of all imprimitive elements of $\mathbf{L}_p^n(q)$ is the principal ideal of this ring that is generated
by the element (3.33):
(3.37)
$$I=\Delta\,\mathbf{L}_p^n(q).$$
LEMMA 3.22. Let $n>1$. Let the $\mathbb{Q}$-linear map
(3.38)
$$\Psi\colon\mathbf{L}_p^n(q)\to\mathbf{L}_p^{n-1}(q)$$
be defined on the elements (3.34)-(3.35) by
$$\Psi(T(p^{\delta_1},\dots,p^{\delta_n};p^{\varepsilon_1},\dots,p^{\varepsilon_n}))=\begin{cases}T(p^{\delta_2},\dots,p^{\delta_n};p^{\varepsilon_2},\dots,p^{\varepsilon_n}),&\text{if }\delta_1=0,\\0,&\text{if }\delta_1\ge 1.\end{cases}$$
Then $\Psi$ is an epimorphism of rings, and its kernel is the ideal $I$ of imprimitive elements
of $\mathbf{L}_p^n(q)$.

PROOF. From Lemmas 1.5 and 3.6 and the definitions it follows that, as a map
of vector spaces, $\Psi$ is an epimorphism with kernel $I$. Hence, it remains to prove that
$\Psi$ is a ring homomorphism. This, in turn, will follow if we prove that the image of a
product of primitive elements of the form (3.34)-(3.35) is equal to the product of the
images. Let
$$M=\mathrm{diag}(p^{\delta_1},\dots,p^{\delta_n};p^{\varepsilon_1},\dots,p^{\varepsilon_n}),\quad M'=\mathrm{diag}(p^{\delta_1'},\dots,p^{\delta_n'};p^{\varepsilon_1'},\dots,p^{\varepsilon_n'}),$$
where the exponents satisfy the inequalities (3.35), $\delta_i+\varepsilon_i=\delta$, $\delta_i'+\varepsilon_i'=\delta'$. We
suppose that $\delta_1=\delta_1'=0$, and we set
$$M_0=\mathrm{diag}(p^{\delta_2},\dots,p^{\delta_n};p^{\varepsilon_2},\dots,p^{\varepsilon_n}),\quad M_0'=\mathrm{diag}(p^{\delta_2'},\dots,p^{\delta_n'};p^{\varepsilon_2'},\dots,p^{\varepsilon_n'}).$$
By Lemmas 1.5 and 3.6 and the definition of $\Psi$, we obtain
$$\Psi((M)_\Gamma(M')_\Gamma)={\sum_{\substack{H=sd(H)\\ r(H)=p^{\delta+\delta'}}}}{}'\,c(M,M';H)(H_0)_{\Gamma'},$$

where the symbol $\sum'$ means that the $H$ are primitive, $\Gamma=\Gamma_0^n(q)$, $\Gamma'=\Gamma_0^{n-1}(q)$, we
set $H_0=\mathrm{diag}(p^{\alpha_2},\dots,p^{\alpha_n};p^{\beta_2},\dots,p^{\beta_n})$ for $H=\mathrm{diag}(p^{\alpha_1},\dots,p^{\alpha_n};p^{\beta_1},\dots,p^{\beta_n})$, and
$c(M,M';H)$ is the number of pairs $M_i,M_j'$ which belong to fixed sets of representatives
of $\Gamma\backslash\Gamma M\Gamma$ and $\Gamma\backslash\Gamma M'\Gamma$, respectively, and which satisfy the relations
(3.39)
$$M_iM_j'=\gamma H\quad\text{with }\gamma\in\Gamma.$$
Similarly, but now summing over integer matrices $H_0$, we have
$$(M_0)_{\Gamma'}(M_0')_{\Gamma'}=\sum_{\substack{H_0=sd(H_0)\\ r(H_0)=p^{\delta+\delta'}}}c(M_0,M_0';H_0)(H_0)_{\Gamma'},$$
where $c(M_0,M_0';H_0)$ is the number of pairs $N_k,N_l'$ in $\Gamma'\backslash\Gamma'M_0\Gamma'$ and $\Gamma'\backslash\Gamma'M_0'\Gamma'$,
respectively, which satisfy the relation
(3.40)
$$N_kN_l'=\gamma'H_0\quad\text{with }\gamma'\in\Gamma'.$$
Since the matrices $H_0$ in the above expansions obviously run through the same set, it
follows that to prove that
$$\Psi((M)_\Gamma(M')_\Gamma)=(M_0)_{\Gamma'}(M_0')_{\Gamma'}=\Psi((M)_\Gamma)\Psi((M')_\Gamma)$$
it suffices to verify that
(3.41)
$$c(M,M';H)=c(M_0,M_0';H_0)$$
for primitive matrices $H=sd(H)$ with $r(H)=p^{\delta+\delta'}$. These coefficients depend
only on the double cosets of the corresponding matrices. Hence, by Lemma 3.6,
without loss of generality we may assume that
$$H=\mathrm{diag}(p^{\alpha_1},\dots,p^{\alpha_n};p^{\beta_1},\dots,p^{\beta_n}),$$
where $\alpha_i+\beta_i=\delta+\delta'$, $\beta_n=0$, and
$$H_0=\mathrm{diag}(p^{\alpha_1},\dots,p^{\alpha_{n-1}};p^{\beta_1},\dots,p^{\beta_{n-1}}).$$
By Lemmas 3.11 and 2.7, we may take
$$M_i=\begin{pmatrix}A_i&B_i\\0&D_i\end{pmatrix},\quad M_j'=\begin{pmatrix}A_j'&B_j'\\0&D_j'\end{pmatrix},$$
where $D_i,D_j'$ are matrices of the form (2.15), and $B_i$ and $B_j'$ are fixed modulo $D_i$ and
$D_j'$, respectively. It now follows from (3.39) that
$$\gamma=\begin{pmatrix}\gamma_{11}&\gamma_{12}\\0&\gamma_{22}\end{pmatrix}\quad\text{and}\quad D_iD_j'=\gamma_{22}\,\mathrm{diag}(p^{\beta_1},\dots,p^{\beta_n}).$$
The last relation implies that $\gamma_{22}\in\Lambda^n$ is an upper-triangular matrix, and hence all of
its diagonal entries are $\pm 1$. But since $D_i,D_j'$ are reduced matrices and $\beta_n=0$, we
conclude that
$$D_i=\begin{pmatrix}D_i^{(n-1)}&0\\0&1\end{pmatrix},\quad D_j'=\begin{pmatrix}D_j'^{(n-1)}&0\\0&1\end{pmatrix},\quad\gamma_{22}=\begin{pmatrix}\gamma_{22}^{(n-1)}&0\\0&1\end{pmatrix},$$
and
$$D_i^{(n-1)}D_j'^{(n-1)}=\gamma_{22}^{(n-1)}\,\mathrm{diag}(p^{\beta_1},\dots,p^{\beta_{n-1}}).$$

From these formulas it follows that


._
A, -
(A~n-1)
0
0)
PJ '
A'·=
J
(A~(n-1)
0 pJ'
0 )
' Y11 = ( Y110
(n-1) 0)1 '
and
A ;(n-l)A1(n-I)
j
_ Y(n-l)dt"ag{p°'I
- II , ... , p°'•-1) .

If we replace the matrices $B_i$ and $B_j'$ by $B_i+SD_i$ and $B_j'+S'D_j'$, respectively, where $S$ and $S'$ are suitably chosen symmetric integer matrices, and if we take into account the symmetry of the matrices ${}^tD_iB_i$ and ${}^tD_j'B_j'$, we may assume that
$$B_i=\begin{pmatrix}B_i^{(n-1)}&0\\0&0\end{pmatrix},\qquad B_j'=\begin{pmatrix}B_j'^{(n-1)}&0\\0&0\end{pmatrix},$$
so that
$$\gamma_{12}=\begin{pmatrix}\gamma_{12}^{(n-1)}&0\\0&0\end{pmatrix},$$
and
$$A_i^{(n-1)}B_j'^{(n-1)}+B_i^{(n-1)}D_j'^{(n-1)}=\gamma_{12}^{(n-1)}\,\mathrm{diag}(p^{\beta_1},\dots,p^{\beta_{n-1}}).$$
The above discussion and our assumptions imply that the matrices
$$\hat M_i=\begin{pmatrix}A_i^{(n-1)}&B_i^{(n-1)}\\0&D_i^{(n-1)}\end{pmatrix}\qquad\text{and}\qquad \hat M_j'=\begin{pmatrix}A_j'^{(n-1)}&B_j'^{(n-1)}\\0&D_j'^{(n-1)}\end{pmatrix}$$
belong to different $\Gamma'$-left cosets in $\Gamma'M_0\Gamma'$ and $\Gamma'M_0'\Gamma'$, respectively, and they satisfy the relation
$$\hat M_i\hat M_j'=\gamma H_0,\qquad\text{where }\gamma=\begin{pmatrix}\gamma_{11}^{(n-1)}&\gamma_{12}^{(n-1)}\\0&\gamma_{22}^{(n-1)}\end{pmatrix}\in\Gamma'.$$
If we repeat the same argument in the reverse order, we see that, with a suitable choice of $\Gamma'$-left coset representatives, any pair $N_k$, $N_l'$ that satisfies (3.40) can be obtained in the manner indicated. This proves (3.41), and hence the lemma. $\square$

We can now completely determine the structure of the rings $L^n_p(q)$ and $\hat L{}^n_p(q)$.

THEOREM 3.23. Suppose that $n,q\in\mathbf N$, and $p$ is a prime number not dividing $q$. Then:
(1) the ring $L^n_p(q)$ is generated over $\mathbf Q$ by the elements
$$(3.42)\qquad T(p)=T^n(p)=T(\underbrace{1,\dots,1}_{n},\underbrace{p,\dots,p}_{n})$$
and
$$T_i(p^2)=T^n_i(p^2)=T(\underbrace{1,\dots,1}_{n-i},\underbrace{p,\dots,p}_{i},\underbrace{p^2,\dots,p^2}_{n-i},\underbrace{p,\dots,p}_{i})$$
for $i=1,\dots,n$;
(2) the ring $\hat L{}^n_p(q)$ is generated over $\mathbf Q$ by the elements (3.42) and the element $T_n(p^2)^{-1}=\Delta_n(p)^{-1}$;
(3) the elements (3.42) are algebraically independent over $\mathbf Q$.
§3. HECKE RINGS FOR THE SYMPLECTIC GROUP 137

PROOF. First let $n=1$. Since
$$\Gamma^1=SL_2(\mathbf Z)=\{M\in\Lambda^2;\ \det M>0\}\qquad\text{and}\qquad S^1_p=\{M\in G^2_p;\ \det M>0\},$$
it follows from Proposition 1.9 that there exists an isomorphism $\varepsilon$ from the Hecke ring $H^2_p$ to the ring $L^1_p$. According to (1.29), we have $\varepsilon(t(p^{\delta_1},p^{\delta_2}))=T(p^{\delta_1},p^{\delta_2})$ for $\delta_1,\delta_2\in\mathbf Z$. From this and Theorem 2.17 it follows that Theorem 3.23 holds in the case $n=1$ and $q=1$. Lemma 3.20 then gives us the theorem in the case $n=1$ and $q$ arbitrary.
We now consider the case of arbitrary $n$. To prove part (1) it suffices to show that every element of the form (3.34), where the exponents satisfy (3.35), is a polynomial in the elements (3.42). We prove this by induction on $n$, and for fixed $n>1$ we use induction on $\delta=\delta_i+\varepsilon_i$. We have already treated the case $n=1$. Suppose that $n>1$, and our claim has been proved for degrees less than $n$. If $\delta=1$, then $T(p^{\delta_1},\dots,p^{\varepsilon_n})=T(p)$. We now suppose that for all $T(p^{\delta_1'},\dots,p^{\varepsilon_n'})$ with $\delta'=\delta_i'+\varepsilon_i'<\delta$ it is already known that they are polynomials in the elements (3.42); and we let $T=T(p^{\delta_1},\dots,p^{\varepsilon_n})$ be an element of the indicated form with $\delta_i+\varepsilon_i=\delta$. If $\delta_1\geqslant1$, then, by Lemma 3.8, $T=\Delta^{\delta_1}T(p^{\delta_1-\delta_1},\dots,p^{\varepsilon_n-\delta_1})$. Since $(\delta_i-\delta_1)+(\varepsilon_i-\delta_1)<\delta$ and $\Delta=\Delta_n(p)=T_n(p^2)$, it follows by the induction assumption that this element is a polynomial in the elements (3.42). Now let $\delta_1=0$. By the first induction assumption, the element
$$\Psi(T)=T(p^{\delta_2},\dots,p^{\delta_n},p^{\varepsilon_2},\dots,p^{\varepsilon_n}),$$
where $\Psi$ is the homomorphism (3.38), can be written in the form
$$\Psi(T)=F(\Psi(T(p)),\Psi(T_1(p^2)),\dots,\Psi(T_{n-1}(p^2))),$$
where $F$ is a polynomial with rational coefficients,
$$F=\sum_{\alpha=(\alpha_1,\dots,\alpha_n)}a_\alpha\,y_1^{\alpha_1}\cdots y_n^{\alpha_n}.$$
The expansion of the monomial $\Psi(T(p))^{\alpha_1}\cdots\Psi(T_{n-1}(p^2))^{\alpha_n}$ only contains double cosets $(H)$ modulo $\Gamma^{n-1}(q)$ for which $r(H)=p^{|\alpha|}$ with $|\alpha|=\alpha_1+2\alpha_2+\cdots+2\alpha_n$. Thus, after combining similar terms in $F$, we may assume that only coefficients $a_\alpha$ with $|\alpha|=\delta$ are nonzero. Hence, the element
$$T_1=T-\sum_{|\alpha|=\delta}a_\alpha(T(p))^{\alpha_1}(T_1(p^2))^{\alpha_2}\cdots(T_{n-1}(p^2))^{\alpha_n}$$
lies in the kernel of $\Psi$. This means that $T_1$ is imprimitive. On the other hand, $T_1$ is a linear combination of elements $T(p^{\beta_1},\dots,p^{\gamma_n})$ with $\beta_i+\gamma_i=\delta$. Thus, $T_1=\Delta\cdot T'$, where $T'$ is a linear combination of elements $T(p^{\beta_1},\dots,p^{\gamma_n})$ with $\beta_i+\gamma_i=\delta-2<\delta$. By the second induction assumption, $T'$, and hence also $T$, is a polynomial in the elements (3.42). This proves part (1).
Part (2) follows from part (1) and Lemma 3.21.
We prove part (3) by induction on $n$. It was proved above in the case $n=1$. Suppose that $n>1$, and part (3) has been proved for all $n'<n$. Suppose that the elements (3.42) are algebraically dependent over $\mathbf Q$. Let $F(T(p),\dots,T_n(p^2))=0$ be an algebraic relation between these elements, where the polynomial $F(x_0,x_1,\dots,x_n)$ has smallest possible degree. Then, applying $\Psi$, we obtain
$$F(T^{n-1}(p),T^{n-1}_1(p^2),\dots,T^{n-1}_{n-1}(p^2),0)=0.$$
From this and the induction assumption it follows that
$$F(x_0,\dots,x_n)=x_nF_1(x_0,\dots,x_n),$$
where $F_1$ is another polynomial. Since the element $T^n_n(p^2)=\Delta$ is not a zero divisor in $L^n_p(q)$, the equality
$$F(T(p),\dots,T_n(p^2))=\Delta F_1(T(p),\dots,T_n(p^2))=0$$
implies that $F_1(T(p),\dots,T_n(p^2))=0$, and this contradicts the choice of $F$ as a polynomial of minimal degree. $\square$
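For $n=1$ the structure of the ring can be probed concretely. A classical consequence of Theorem 3.23 in this case is the relation $T(p)^2=T(p^2)+p\,\Delta_1(p)$, where $T(p^2)$ denotes the full element of the form (3.19). The sketch below checks this relation on spherical images at one rational sample point; the $\Omega$-images used ($\Omega(T(p))=x_0(1+x_1)$, $\Omega(\Delta_1(p))=p^{-1}x_0^2x_1$, $\Omega(T(p^2))=x_0^2(1+x_1+x_1^2)$) come from Lemma 3.34 and formula (3.73) later in this section; the numeric values are arbitrary test choices.

```python
from fractions import Fraction

# arbitrary test point: p a prime, x0, x1 nonzero rationals
p = 5
x0, x1 = Fraction(2, 3), Fraction(7, 4)

T_p   = x0 * (1 + x1)                 # Omega(T(p)) for n = 1
T_p2  = x0**2 * (1 + x1 + x1**2)      # Omega(T(p^2)), full element, from (3.73)
Delta = x0**2 * x1 / p                # Omega(Delta_1(p)), Lemma 3.34

# equality of Omega-images at a sample point is a consistency check of the
# relation T(p)^2 = T(p^2) + p * Delta_1(p) in the Hecke ring
assert T_p**2 == T_p2 + p * Delta
print("T(p)^2 = T(p^2) + p*Delta_1(p) holds on spherical images")
```

Since $\Omega$ is injective on the integral subring (Theorem 3.30 below), identities verified on $\Omega$-images hold in the Hecke ring itself; a single numeric point is of course only a spot check.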

PROBLEM 3.24. State and prove results similar to the results in Problem 2.19 in the case when
$$G=\{M\in M_{2n}(\mathbf Q_p);\ {}^tMJ_nM=r(M)J_n,\ r(M)\neq0\}$$
and $\Gamma=G\cap GL_{2n}(\mathbf Z_p)$.
3. The spherical map. The procedure described in the previous subsection for expressing an element of a local Hecke ring as a polynomial in the generators is effective, but in general it is not practical. As in the case of the general linear group, we avoid this difficulty by constructing another polynomial realization of the local Hecke rings. Namely, we use rings of polynomials that are invariant under a certain finite group of transformations of the variables.
For later applications it is convenient to carry out all of the constructions for suitable extensions of the local Hecke rings of the symplectic group. The extensions we consider are the Hecke rings of the "triangular" subgroup
$$\Gamma_0=\Gamma^n_0=\left\{\begin{pmatrix}A&B\\0&D\end{pmatrix}\in\Gamma^n\right\}$$
of the Siegel modular group $\Gamma^n$ and the subgroups of the group
$$(3.43)\qquad S_0=S^n_0=\left\{\begin{pmatrix}A&B\\0&D\end{pmatrix}\in S^n\right\}=\left\{M=\begin{pmatrix}A&B\\0&D\end{pmatrix}\in M_{2n}(\mathbf Q);\ {}^tAD=r(M)E_n,\ r(M)>0,\ {}^tBD={}^tDB\right\}.$$

LEMMA 3.25. (1) Every left coset of $\Gamma_0\backslash S_0$ contains a representative $M=\begin{pmatrix}A&B\\0&D\end{pmatrix}$, where $D$ belongs to a fixed system of representatives of $\Lambda^n\backslash G^n$, $B$ belongs to a fixed residue class $B(D)/\mathrm{mod}\,D$ (see (3.24) and (3.25)), and $A=r(M)D^*$.
(2) The decomposition of an arbitrary double coset of $S_0$ into left cosets of the group $\Gamma_0=\Gamma^n_0$ has the form
$$(3.44)\qquad \Gamma_0M\Gamma_0=\bigcup_{\substack{D_1\in\Lambda\backslash\Lambda D\Lambda\\ B_1\in B_M(D_1)/\mathrm{mod}\,D_1}}\Gamma_0\begin{pmatrix}rD_1^*&B_1\\0&D_1\end{pmatrix},$$
where $r=r(M)$, $\Lambda=\Lambda^n$, and
$$B_M(D_1)=\left\{B_1;\ \begin{pmatrix}rD_1^*&B_1\\0&D_1\end{pmatrix}\in\Gamma_0M\Gamma_0\right\}.$$
(3) $(\Gamma^n_0,S^n_0)$ is a Hecke pair.

PROOF. Part (1) follows from the definitions. Part (2) follows from Part (1) if we note that for any matrix $\begin{pmatrix}D_2&B_1\\0&D_1\end{pmatrix}\in\Gamma_0M\Gamma_0$ we have $D_1\in\Lambda D\Lambda$ and $D_2=rD_1^*$. To prove Part (3), it is sufficient to verify that every double coset $\Gamma_0M\Gamma_0$, where $M\in S_0$, consists of finitely many left cosets. Without loss of generality we may assume that $M$ is an integer matrix. Then each set $B_M(D)$ is contained in the set $\{B\in M_n;\ {}^tBD={}^tDB\}$, which obviously consists of finitely many residue classes modulo $D$. From this observation and Lemma 2.1 it follows that there are finitely many left cosets in (3.44). $\square$

To construct our extensions of the local Hecke rings, we define the subgroups
$$S_{0,p}=S^n_{0,p}=\{M\in S^n_0;\ r(M)\ \text{is an integral power of }p\},$$
where $p$ is a prime number. From Lemma 3.25(3) it follows that $(\Gamma^n_0,S^n_{0,p})$ is a Hecke pair.
LEMMA 3.26. The Hecke pairs $(\Gamma^n_0(q),S^n_p(q))$, where $(p,q)=1$, and $(\Gamma^n_0,S^n_{0,p})$ satisfy the conditions (1.26). The following diagram commutes:
$$(3.45)\qquad\begin{array}{ccc}L^n_p(1)&\xrightarrow{\ \varepsilon_{1,q}\ }&L^n_p(q)\\[3pt]{\scriptstyle\varepsilon}\searrow&&\swarrow{\scriptstyle\varepsilon_q}\\[3pt]&L_{0,p}&\end{array}$$
where $\varepsilon=\varepsilon_1$ and $\varepsilon_q$ are the imbeddings (1.27), and $\varepsilon_{1,q}$ is the isomorphism in Lemma 3.20.

PROOF. The first and third conditions in (1.26) are obvious in the case of our Hecke pairs, and the second condition is a consequence of Lemma 3.4. The commutativity of the diagram follows from the definitions of the three mappings. $\square$

According to this lemma, instead of $\hat L{}^n_p(q)$ one can study the isomorphic (and independent of $q$) subring
$$(3.46)\qquad L^n_p=\varepsilon_q(\hat L{}^n_p(q))$$
of the local Hecke ring $L_{0,p}$ of the group $S_{0,p}$.


The analogue of Lemma 3.21 that follows allows us to reduce the study of these Hecke rings to that of their integral subrings
$$(3.47)\qquad \overline L_{0,p}\subset L_{0,p}\qquad\text{and}\qquad \overline L{}^n_p=L^n_p\cap\overline L_{0,p},$$
spanned by the left cosets of integer matrices.

LEMMA 3.27. The element
$$(3.48)\qquad \Delta(p)=\Delta_n(p)=(pE_{2n})_{\Gamma_0}$$
lies in the center of the ring $L_{0,p}$ and is invertible there; we have
$$\Delta_n(p)^{-1}=(p^{-1}E_{2n})_{\Gamma_0}.$$
The ring $L^n_p$ (respectively, $L_{0,p}$) is generated by the subring $\overline L{}^n_p$ (resp. $\overline L_{0,p}$) and the element $\Delta_n(p)^{-1}$.
PROOF. The lemma is an easy consequence of the definitions, since $\Gamma M\Gamma=\Gamma M=M\Gamma$ for $M=p^{\pm1}E_{2n}$ and for any subgroup $\Gamma\subset\Gamma^n$. $\square$

REMARK. The element (3.48) is obviously the image of the element (3.33) under the map $\varepsilon_q$. In general, for simplicity we shall usually use the same notation for elements in $\varepsilon_q(\hat L{}^n_p(q))\subset L_{0,p}$ as for their preimages.
The spherical map for the Hecke ring $L_{0,p}$ will be defined in two stages. We first define a map to a suitable extension of the local $p$-Hecke ring of the general linear group $GL_n$, and we then use the spherical map of this extension that was defined in §2.3. We start with the left coset space. Let
$$\Gamma_0M,\qquad\text{where }M=\begin{pmatrix}p^\delta D^*&B\\0&D\end{pmatrix}\in S_{0,p},$$
be an arbitrary left coset of the group $S_{0,p}$ modulo $\Gamma_0$. By Lemma 3.25, the left coset $\Lambda D$ of the element $D\in G_p$, along with the exponent $\delta$, is uniquely determined by the original left coset $\Gamma_0M$. We then set
$$\Phi((\Gamma_0M))=x_0^\delta(\Lambda D),$$
where we suppose that all of the powers $x_0^\delta$ ($\delta\in\mathbf Z$) are linearly independent over the left coset module of $G_p$ modulo $\Lambda$. We extend $\Phi$ by linearity to a map of the left coset module:
$$\Phi=\Phi^n_p\colon L_{\mathbf Q}(\Gamma_0,S_{0,p})\to L_{\mathbf Q[x_0^{\pm1}]}(\Lambda^n,G^n_p).$$

PROPOSITION 3.28. The restriction of $\Phi$ to the ring $L_{0,p}$ is an epimorphism of this ring onto the ring $H^n_p[x_0^{\pm1}]$.

PROOF. Let $X\in L_{0,p}$. By definition, $X$ is invariant under right multiplication by any matrix of the form $U(\gamma)$ with $\gamma\in\Lambda$. This implies that $\Phi(X)$ is invariant under right multiplication by any element $\gamma\in\Lambda$, where the multiplication acts only on the left cosets, not on the coefficients. Thus, $\Phi(X)\in H^n_p[x_0^{\pm1}]$. From the definition of the multiplication in Hecke rings and the definition of $\Phi$ it follows that $\Phi$ is a ring homomorphism. Finally, if $D$ is an arbitrary matrix in $G_p$ and $\delta\in\mathbf Z$, then $M=\begin{pmatrix}p^\delta D^*&0\\0&D\end{pmatrix}\in S_{0,p}$, and from (3.44) it follows that
$$\Phi((M)_{\Gamma_0})=\alpha(M)\,x_0^\delta(D)_\Lambda,$$
where $\alpha(M)$ is a positive integer. This gives us the epimorphism. $\square$

Now let $\omega=\omega^n_p$ be the $\mathbf Q$-linear homomorphism from the ring $H^n_p[x_0^{\pm1}]$ to the subring $\mathbf Q[x_0^{\pm1},\dots,x_n^{\pm1}]$ of the field of rational functions over $\mathbf Q$ in the variables $x_0,x_1,\dots,x_n$ such that $\omega(x_0)=x_0$ and the restriction of $\omega$ to $H^n_p$ coincides with the spherical map $\omega$ defined in §2.3. From Theorem 2.20 and the definitions we then have

LEMMA 3.29. The map $\omega=\omega^n_p$ is an isomorphism of the ring $H^n_p[x_0^{\pm1}]$ with the subring $\mathbf Q[x_0^{\pm1},\dots,x_n^{\pm1}]_S\subset\mathbf Q[x_0^{\pm1},\dots,x_n^{\pm1}]$ consisting of all elements that are symmetric in $x_1,\dots,x_n$.

Finally, we define the spherical map $\Omega=\Omega^n_p$ from $L_{0,p}$ to $\mathbf Q[x_0^{\pm1},\dots,x_n^{\pm1}]_S$ by setting
$$(3.49)\qquad \Omega(X)=\omega(\Phi(X))\qquad(X\in L_{0,p}).$$
Thus, we obtain a commutative diagram:
$$(3.50)\qquad\begin{array}{ccc}L_{0,p}&\xrightarrow{\ \Omega\ }&\mathbf Q[x_0^{\pm1},\dots,x_n^{\pm1}]_S\\[3pt]{\scriptstyle\Phi}\searrow&&\nearrow{\scriptstyle\omega}\\[3pt]&H^n_p[x_0^{\pm1}]&\end{array}$$
Since $\Phi$ and $\omega$ are $\mathbf Q$-linear ring epimorphisms, it follows that $\Omega$ is also a $\mathbf Q$-linear ring epimorphism.
Let $W=W_n$ be the group of $\mathbf Q$-automorphisms of the rational function field $\mathbf Q(x_0,x_1,\dots,x_n)$ that is generated by all permutations of the variables $x_1,\dots,x_n$ and by the automorphisms $\tau_1,\dots,\tau_n$, which act according to the rule
$$(3.51)\qquad \tau_i(x_0)=x_0x_i,\quad \tau_i(x_i)=x_i^{-1},\quad \tau_i(x_j)=x_j\ (j\neq0,i).$$
The reader can easily verify that each of the coefficients $r_a=r^n_a(x_1,\dots,x_n)$ in the expansion
$$(3.52)\qquad r(x_1,\dots,x_n;v)=\prod_{i=1}^n(1-x_iv)(1-x_i^{-1}v)=\sum_{a=0}^{2n}(-1)^ar_av^a,$$
as well as the polynomials
$$(3.53)\qquad p_a=p^n_a(x_0,x_1,\dots,x_n)=x_0^2x_1\cdots x_n\,r^n_a(x_1,\dots,x_n)$$
and the polynomial
$$(3.54)\qquad t=t_n(x_0,x_1,\dots,x_n)=x_0\prod_{i=1}^n(1+x_i),$$
are all invariant under the transformations in $W_n$. The polynomials $t,p_0,\dots,p_{n-1}$ play the same role for $W_n$ that the elementary symmetric polynomials play for the symmetric group.
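The invariance claims are easy to check mechanically. The following sketch evaluates $t$ and the $p_a$ at a rational sample point before and after applying each $\tau_i$; all numeric values and helper names are arbitrary choices of ours, not taken from the text.

```python
from fractions import Fraction

def poly_mul(p, q):
    # multiply two polynomials in v given as coefficient lists
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def r_coeffs(xs):
    # coefficients r_a in r(x; v) = prod (1 - x_i v)(1 - x_i^{-1} v)
    #                             = sum_a (-1)^a r_a v^a            (3.52)
    c = [Fraction(1)]
    for x in xs:
        c = poly_mul(c, [Fraction(1), -x])
        c = poly_mul(c, [Fraction(1), -1 / x])
    return [(-1) ** a * cf for a, cf in enumerate(c)]

def t_poly(x0, xs):                       # t of (3.54)
    out = x0
    for x in xs:
        out *= 1 + x
    return out

def p_poly(a, x0, xs):                    # p_a of (3.53)
    pref = x0 * x0
    for x in xs:
        pref *= x
    return pref * r_coeffs(xs)[a]

def tau(i, x0, xs):                       # tau_i of (3.51)
    ys = list(xs)
    ys[i] = 1 / ys[i]
    return x0 * xs[i], ys

# sample point for n = 3 (arbitrary nonzero rationals)
x0, xs = Fraction(3), [Fraction(2), Fraction(5), Fraction(7, 2)]
for i in range(3):
    y0, ys = tau(i, x0, xs)
    assert t_poly(y0, ys) == t_poly(x0, xs)
    for a in range(2 * 3 + 1):
        assert p_poly(a, y0, ys) == p_poly(a, x0, xs)
print("t and all p_a are invariant under tau_1, tau_2, tau_3")
```

Invariance under permutations of $x_1,\dots,x_n$ is clear from the symmetric form of (3.52)-(3.54), so only the $\tau_i$ need checking.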
THEOREM 3.30. Let $n\in\mathbf N$, and let $p$ be a prime number. Then:
(1) The restriction of the map $\Omega=\Omega^n_p$ to the integral subring $\overline L{}^n_p\subset L^n_p$ is an isomorphism of this subring with the ring $\mathbf Q[x_0,\dots,x_n]_W$ of all $W_n$-invariant polynomials in $x_0,x_1,\dots,x_n$ over $\mathbf Q$.
(2) Any element in $\mathbf Q[x_0,\dots,x_n]_W$ can be written as a polynomial in
$$(3.55)\qquad t=t_n(x_0,x_1,\dots,x_n),\quad p_a=p^n_a(x_0,x_1,\dots,x_n)\ (0\leqslant a\leqslant n-1),$$
with coefficients in $\mathbf Q$, i.e.,
$$(3.56)\qquad \mathbf Q[x_0,x_1,\dots,x_n]_W=\mathbf Q[t,p_0,p_1,\dots,p_{n-1}].$$
The elements (3.55) are algebraically independent over $\mathbf Q$.
(3) The restriction of the map $\Omega=\Omega^n_p$ to the full subring $L^n_p\subset L_{0,p}$ is an isomorphism of this subring with the ring $\mathbf Q[x_0^{\pm1},\dots,x_n^{\pm1}]_W$ of all $W_n$-invariant elements of $\mathbf Q[x_0^{\pm1},x_1^{\pm1},\dots,x_n^{\pm1}]$. The latter ring can be obtained by adjoining the element $p_0^{-1}=(x_0^2x_1\cdots x_n)^{-1}$ to the polynomial ring (3.56):
$$(3.57)\qquad \mathbf Q[x_0^{\pm1},\dots,x_n^{\pm1}]_W=\mathbf Q[x_0,\dots,x_n]_W[(x_0^2x_1\cdots x_n)^{-1}].$$

COROLLARY 3.31. The restriction of $\Phi=\Phi^n_p$ to the subring $L^n_p\subset L_{0,p}$ is a monomorphism.

The plan of proof of Theorem 3.30 is similar to that for Theorem 2.20. By computing the $\Omega$-images of the generators of $\overline L{}^n_p$, we obtain generators of the ring $\Omega(\overline L{}^n_p)$. This enables us to study the algebraic features of this ring and, in particular, to prove that the restriction of $\Omega$ to $\overline L{}^n_p$ is a monomorphism. The ring $L^n_p$ is investigated using Lemma 3.27. However, in the case of the symplectic group some preliminary work is necessary in order to compute the $\Omega$-images of the generators of $\overline L{}^n_p$. This is the purpose of Lemmas 3.32--3.34.
LEMMA 3.32. In the Hecke ring $\overline L{}^n_p\subset\overline L_{0,p}$ the elements (3.42) have the following expansions into left cosets modulo $\Gamma_0=\Gamma^n_0$:
$$(3.58)\qquad T(p)=T^n(p)=\sum_{a=0}^n\Pi_a,$$
where
$$(3.59)\qquad \Pi_a=\Pi^n_a(p)=\sum_{\substack{D\in\Lambda\backslash\Lambda D_a\Lambda\\ B\in B_0(D)/\mathrm{mod}\,D}}\left(\Gamma_0\begin{pmatrix}pD^*&B\\0&D\end{pmatrix}\right),$$
$D_a=D^n_a(p)$ are the matrices (2.28), $\Lambda=\Lambda^n$,
$$(3.60)\qquad B_0(D)=\{B\in M_n;\ {}^tBD={}^tDB\},$$
and, as before, the congruence modulo $D$ is understood in the sense of (3.25);
$$(3.61)\qquad T_i(p^2)=T^n_i(p^2)=\sum_{\substack{a+b\leqslant n\\ a\geqslant i}}\Pi^{(a-i)}_{a,b}\qquad(0\leqslant i\leqslant n),$$
where
$$(3.62)\qquad \Pi^{(r)}_{a,b}=\sum_{\substack{D\in\Lambda\backslash\Lambda D_{a,b}\Lambda\\ B\in B_0(D)/\mathrm{mod}\,D\\ r_p\left(\begin{smallmatrix}p^2D^*&B\\0&D\end{smallmatrix}\right)=n-a+r}}\left(\Gamma_0\begin{pmatrix}p^2D^*&B\\0&D\end{pmatrix}\right),$$
$D_{a,b}=D^n_{a,b}(p)$ are the matrices (2.31), and $r_p(M)$ denotes the rank of $M$ over the field of $p$ elements. The sums of the left cosets $\Pi_a$ ($0\leqslant a\leqslant n$) and $\Pi^{(r)}_{a,b}$ ($a+b\leqslant n$, $r\leqslant a$) belong to the Hecke ring $\overline L_{0,p}$, and
$$(3.63)\qquad \Pi_a=(M_a)_{\Gamma_0},\qquad\text{where }M_a=\begin{pmatrix}pD_a^*&0\\0&D_a\end{pmatrix}.$$

PROOF. Without loss of generality we may consider the elements (3.42) in the case $q=1$. From Lemma 3.6 it follows that the double coset $\Gamma M\Gamma$, where $\Gamma=\Gamma^n$ and $M=\mathrm{diag}(E_n,pE_n)$, coincides with the set $SM(p)=SM_n(p,1)$ (see (3.21)). From the definition of this set we see that it contains the matrix $\begin{pmatrix}A&B\\0&D\end{pmatrix}$ if and only if $A,D\in M_n(\mathbf Z)$, ${}^tAD=pE_n$, and $B\in B_0(D)$. Lemma 2.2 implies that, if $D$ is a fixed nonsingular integer matrix, then ${}^tA=pD^{-1}$ is an integer matrix if and only if $D$ lies in one of the double cosets $\Lambda D_a\Lambda$ for $0\leqslant a\leqslant n$. Thus, by Lemma 3.11 we have the decomposition
$$\Gamma\begin{pmatrix}E_n&0\\0&pE_n\end{pmatrix}\Gamma=\bigcup_{a=0}^n\ \bigcup_{\substack{D\in\Lambda\backslash\Lambda D_a\Lambda\\ B\in B_0(D)/\mathrm{mod}\,D}}\Gamma\begin{pmatrix}pD^*&B\\0&D\end{pmatrix}.$$
From this and the definition of the map $\varepsilon$ we obtain (3.58).
If we apply Lemma 3.6 to the set $SM(p^2)=SM_n(p^2,1)$, we obtain the decomposition
$$SM(p^2)=\bigcup_{i=0}^nSM^{(i)}(p^2),$$
where
$$SM^{(i)}(p^2)=\Gamma\begin{pmatrix}D_i&0\\0&p^2D_i^{-1}\end{pmatrix}\Gamma.$$
On the other hand, just as in the earlier case of the set $SM(p)$, we see that Lemmas 2.2 and 3.11 give us the decomposition
$$(3.64)\qquad SM(p^2)=\bigcup_{a+b\leqslant n}\ \bigcup_{\substack{D\in\Lambda\backslash\Lambda D_{a,b}\Lambda\\ B\in B_0(D)/\mathrm{mod}\,D}}\Gamma\begin{pmatrix}p^2D^*&B\\0&D\end{pmatrix}.$$
Since each set $SM^{(i)}(p^2)$ consists of a single double coset modulo $\Gamma\subset\Lambda^{2n}$, it follows that all of the matrices in such a set have the same rank over the field of $p$ elements; and this rank is obviously $n-i$. Thus, $SM^{(i)}(p^2)$ is the union of all left cosets $\Gamma M$ in (3.64) for which $r_p(M)=n-i$. From this and the definitions we obtain (3.61).
We set
$$S_a=\left\{\begin{pmatrix}pD^*&B\\0&D\end{pmatrix};\ D\in\Lambda D_a\Lambda,\ B\in B_0(D)\right\}$$
and
$$S^{(r)}_{a,b}=\left\{M=\begin{pmatrix}p^2D^*&B\\0&D\end{pmatrix};\ D\in\Lambda D_{a,b}\Lambda,\ B\in B_0(D),\ r_p(M)=n-a+r\right\}.$$
Then by Lemma 3.25 we have the expansions
$$(3.65)\qquad \Pi_a=\sum_{M\in\Gamma_0\backslash S_a}(\Gamma_0M)$$
and
$$(3.66)\qquad \Pi^{(r)}_{a,b}=\sum_{M\in\Gamma_0\backslash S^{(r)}_{a,b}}(\Gamma_0M).$$
Since obviously $\Gamma_0S_a\Gamma_0=S_a$ and $\Gamma_0S^{(r)}_{a,b}\Gamma_0=S^{(r)}_{a,b}$, it follows from the above decompositions that the elements $\Pi_a$ and $\Pi^{(r)}_{a,b}$ are invariant under any right multiplication by elements of $\Gamma_0$; hence, they belong to the ring $\overline L_{0,p}$. Finally, let $M=\begin{pmatrix}pD^*&B\\0&D\end{pmatrix}$ be an arbitrary element of $S_a$. If we replace $M$ by $\gamma M\gamma_1$ with suitable $\gamma,\gamma_1\in\Gamma_0$, we may suppose that $D=D_a$. Then $B$ is an integer matrix of the form $(B_{ij})$ $(i,j=1,2)$, where $B_{11}\in S_{n-a}(\mathbf Z)$, $B_{22}\in S_a(\mathbf Z)$, and $B_{12}=p\cdot{}^tB_{21}$. This implies that
$$M=T(S)\,\mathrm{diag}(pD_a^{-1},D_a)\,T(S_1)\in\Gamma_0M_a\Gamma_0,$$
where
$$S=\begin{pmatrix}B_{11}&{}^tB_{21}\\B_{21}&0\end{pmatrix},\qquad S_1=\begin{pmatrix}0&0\\0&B_{22}\end{pmatrix}.$$
Thus, $S_a=\Gamma_0M_a\Gamma_0$, and (3.63) follows from (3.65). $\square$
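Lemma 3.25 together with (3.58)-(3.59) makes the left cosets of $T(p)$ completely explicit, so their number can be found by brute force. The sketch below, for $n=2$, enumerates Hermite representatives $D$ of $\Lambda\backslash\{D;\ pD^{-1}\text{ integral}\}$ and, for each $D$, the classes $B_0(D)/\mathrm{mod}\,D$; the total must be the classical degree $\prod_{i=1}^n(p^i+1)$ of $T(p)$. The enumeration bounds, the HNF convention, and the function name are implementation choices of ours, not taken from the text.

```python
from itertools import product

def count_T_p_cosets(p):
    # count the left cosets of T(p) for n = 2 as pairs (D, B), cf. (3.59)
    total = 0
    for d1, d2 in product((1, p), repeat=2):
        det = d1 * d2
        for c in range(d2):                # D = [[d1, c], [0, d2]], Hermite form
            if (p * c) % det:
                continue                   # p * D^{-1} must be integral
            classes = []
            for b11, b21, b22 in product(range(2 * det), repeat=3):
                num = b11 * c + b21 * d2   # t(B)D = t(D)B forces b12 = num / d1
                if num % d1:
                    continue
                B = (b11, num // d1, b21, b22)
                for r in classes:
                    e11, e12, e21, e22 = (B[k] - r[k] for k in range(4))
                    # equivalent iff S = (B - r) D^{-1} is integral and symmetric
                    s11, s12 = e11 * d2, -e11 * c + e12 * d1
                    s21, s22 = e21 * d2, -e21 * c + e22 * d1
                    if all(s % det == 0 for s in (s11, s12, s21, s22)) and s12 == s21:
                        break
                else:
                    classes.append(B)
            total += len(classes)
    return total

for p in (2, 3):
    assert count_T_p_cosets(p) == (p + 1) * (p**2 + 1)
print("deg T(p) = (p+1)(p^2+1) confirmed for p = 2, 3")
```

The five Hermite classes for $n=2$ correspond to $a=0,1,2$ in (3.59), with $1$, $(p+1)p$, and $p^3$ cosets respectively.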

We now describe the sets of the form $B_0(D)/\mathrm{mod}\,D$, and we compute the number of elements they have. It will be more convenient for later applications if we do this in a general form.
LEMMA 3.33. Suppose that $D\in M_n(\mathbf Z)$ and $\det D\neq0$. Then:
(1) If $\alpha,\beta\in\Lambda_n$, then $B_0(\alpha D\beta)=\alpha^*B_0(D)\beta$, and one can take the set $\alpha^*\{B_0(D)/\mathrm{mod}\,D\}\beta$ as representatives of $B_0(\alpha D\beta)/\mathrm{mod}\,\alpha D\beta$. In particular, if $b_0(D)$ denotes the number of elements in $B_0(D)/\mathrm{mod}\,D$, then $b_0(\alpha D\beta)=b_0(D)$.
(2) Suppose that $D=\mathrm{ed}(D)=\mathrm{diag}(d_1,\dots,d_n)$ is an elementary divisor matrix (see (2.4)). Then one can take
$$B_0(D)/\mathrm{mod}\,D=\{B=(b_{ij});\ b_{ij}=d_jd_i^{-1}b_{ji}\ (1\leqslant i<j\leqslant n),\ 0\leqslant b_{ji}<d_i\ (1\leqslant i\leqslant j\leqslant n)\}.$$
In particular, $b_0(D)=d_1^nd_2^{n-1}\cdots d_n$.

PROOF. The equality ${}^tBD={}^tDB$ is obviously equivalent to the equality
$${}^t(\alpha^*B\beta)(\alpha D\beta)={}^t(\alpha D\beta)(\alpha^*B\beta);$$
and for $B,B_1\in B_0(D)$ the congruence $B\equiv B_1\ (\mathrm{mod}\ D)$ is equivalent to the congruence $\alpha^*B\beta\equiv\alpha^*B_1\beta\ (\mathrm{mod}\ \alpha D\beta)$. This implies the first part of the lemma. The second part follows easily from the definitions. $\square$
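Part (2) can be confirmed by direct enumeration for small elementary divisor matrices. The sketch below brute-forces the classes $B_0(D)/\mathrm{mod}\,D$ for $D=\mathrm{diag}(d_1,d_2)$; the enumeration range and the helper name are our implementation choices.

```python
from itertools import product

def b0_bruteforce(d1, d2):
    # D = diag(d1, d2) with d1 | d2; the condition t(B)D = t(D)B forces
    # b12 = (d2/d1)*b21, and B ~ B + S*D for symmetric integer S
    reps = []
    for b11, b21, b22 in product(range(2 * d2), repeat=3):
        B = (b11, (d2 // d1) * b21, b21, b22)
        for r in reps:
            e11, e12, e21, e22 = (B[k] - r[k] for k in range(4))
            # equivalent iff S = (B - r) D^{-1} is integral and symmetric
            if (e11 % d1 == 0 and e21 % d1 == 0 and
                    e12 % d2 == 0 and e22 % d2 == 0 and
                    e12 // d2 == e21 // d1):
                break
        else:
            reps.append(B)
    return len(reps)

# b0(diag(d1, d2)) = d1^2 * d2, the case n = 2 of b0(D) = d1^n d2^{n-1} ... d_n
assert b0_bruteforce(1, 3) == 3
assert b0_bruteforce(2, 2) == 8
assert b0_bruteforce(2, 4) == 16
print("b0(diag(d1,d2)) = d1^2 * d2 confirmed on samples")
```

In each case the count agrees with the representative description in part (2): $b_{11}$ runs mod $d_1$, $b_{21}$ mod $d_1$, and $b_{22}$ mod $d_2$.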

We are now ready to compute the images of elements of the form $\Pi_a$ and $\Pi^{(r)}_{a,b}$ under the maps $\Phi=\Phi^n_p$ and $\Omega=\Omega^n_p$.

LEMMA 3.34. Let $n\in\mathbf N$, and let $p$ be a prime number. Then
$$\Phi(\Pi^n_a(p))=p^{a(a+1)/2}\,x_0\,\pi^n_a(p)$$
and
$$\Omega(\Pi^n_a(p))=x_0\,s_a(x_1,\dots,x_n),$$
where $\pi^n_a(p)\in H^n_p$ are the elements (2.26), and $s_a$ is the $a$th elementary symmetric function, $0\leqslant a\leqslant n$;
$$\Phi(\Pi^{(r)}_{a,b})=p^{b(a+b+1)}l_p(r,a)\,x_0^2\,\pi^n_{a,b}(p)$$
and
$$\Omega(\Pi^{(r)}_{a,b})=p^{b(a+b+1)}l_p(r,a)\,x_0^2\,\omega(\pi^n_{a,b}(p)),$$
where $l_p(r,a)$ is the number of $a\times a$ symmetric matrices of rank $r$ over the field of $p$ elements, $\pi^n_{a,b}(p)\in H^n_p$ are the elements (2.31), and $\omega$ is the spherical map for the ring $H^n_p$, $a+b\leqslant n$, $r\leqslant a$. In particular, the following formulas hold for the element (3.48):
$$\Phi(\Delta_n(p))=x_0^2\,\pi^n_n(p)\qquad\text{and}\qquad \Omega(\Delta_n(p))=p^{-n(n+1)/2}\,x_0^2x_1\cdots x_n.$$

PROOF. Using the expansion (3.59), Lemma 3.33, and the definitions, we obtain
$$\Phi(\Pi_a)=\sum_{D\in\Lambda\backslash\Lambda D_a\Lambda}x_0\,b_0(D)(\Lambda D)=x_0\,b_0(D_a)\sum_{D\in\Lambda\backslash\Lambda D_a\Lambda}(\Lambda D)=x_0\,p^{a(a+1)/2}\pi_a,$$
which proves the first formula. The second formula follows from the first one and from Lemma 2.21.
Now suppose that $D\in\Lambda D_{a,b}\Lambda$, where $a+b\leqslant n$. Then $D=\alpha D_{a,b}\beta$ with $\alpha,\beta\in\Lambda$, and by Lemma 3.33 we can take
$$(3.67)\qquad B_0(D)/\mathrm{mod}\,D=\alpha^*\{B_0(D_{a,b})/\mathrm{mod}\,D_{a,b}\}\beta=\alpha^*\Big\{B=\begin{pmatrix}0&0&0\\0&B_{22}&B_{23}\\0&B_{32}&B_{33}\end{pmatrix};\ B_{22}\in S_a(\mathbf Z)/\mathrm{mod}\,p,\ B_{33}\in S_b(\mathbf Z)/\mathrm{mod}\,p^2,\ B_{23}=p\cdot{}^tB_{32},\ B_{32}\in M_{b,a}(\mathbf Z)/\mathrm{mod}\,p\Big\}\beta.$$
It is not hard to see that for a fixed matrix $B'=\alpha^*B\beta$ in this set we have
$$M=\begin{pmatrix}p^2D^*&B'\\0&D\end{pmatrix}\in\Gamma_0\begin{pmatrix}p^2D_{a,b}^{-1}&B_0\\0&D_{a,b}\end{pmatrix}\Gamma_0,$$
where
$$(3.68)\qquad B_0=\begin{pmatrix}0&0&0\\0&B_{22}&0\\0&0&0\end{pmatrix}.$$
This obviously implies that $r_p(M)=b+r_p(B_{22})+n-a-b$. Thus,
$$\Phi(\Pi^{(r)}_{a,b})=\sum_{D\in\Lambda\backslash\Lambda D_{a,b}\Lambda}x_0^2\,p^{ab+b(b+1)}l_p(r,a)(\Lambda D)=p^{b(a+b+1)}l_p(r,a)\,x_0^2\,\pi^n_{a,b}(p),$$
which proves the third formula. The fourth formula follows from the third one and the definition of the map $\Omega$. Since obviously $\Delta_n(p)=\Pi^{(0)}_{n,0}$ and $\pi^n_{n,0}(p)=\pi^n_n(p)$, the last formula is a consequence of the formulas already proved and Lemma 2.21. $\square$
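The quantity $l_p(r,a)$ entering Lemma 3.34 is elementary to tabulate. The sketch below counts symmetric matrices over $\mathbf F_p$ of each rank by brute force; the Gaussian-elimination helper and the checks $l_p(0,a)=1$ (used in the proof of Theorem 3.30 below) and $l_p(1,2)=p^2-1$ are ours.

```python
from itertools import product

def rank_fp(M, p):
    # rank over F_p by Gaussian elimination (implementation detail)
    M = [row[:] for row in M]
    n, r = len(M), 0
    for c in range(n):
        piv = next((i for i in range(r, n) if M[i][c] % p), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], p - 2, p)          # inverse mod the prime p
        for i in range(n):
            if i != r and M[i][c] % p:
                f = (M[i][c] * inv) % p
                M[i] = [(M[i][j] - f * M[r][j]) % p for j in range(n)]
        r += 1
    return r

def l_p(r, a, p):
    # number of a-by-a symmetric matrices of rank r over F_p
    count = 0
    for vals in product(range(p), repeat=a * (a + 1) // 2):
        it = iter(vals)
        M = [[0] * a for _ in range(a)]
        for i in range(a):
            for j in range(i, a):
                M[i][j] = M[j][i] = next(it)
        if rank_fp(M, p) == r:
            count += 1
    return count

assert l_p(0, 2, 3) == 1                      # only the zero matrix has rank 0
assert l_p(1, 2, 3) == 3**2 - 1               # rank-1 symmetric: c * v * v^t
assert sum(l_p(r, 2, 3) for r in range(3)) == 3**3
print("l_3(r, 2) for r = 0, 1, 2:", [l_p(r, 2, 3) for r in range(3)])
```

The diagonal entries $l_p(0,a)=1$ are exactly what makes the coefficient matrix in the proof of Theorem 3.30 invertible over the integers.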

PROOF OF THEOREM 3.30. We first show that
$$(3.69)\qquad \Omega(\overline L{}^n_p)=\mathbf Q[t,p_0,\dots,p_{n-1}].$$
By Theorem 3.23, the elements $T(p),T_1(p^2),\dots,T_n(p^2)$ of the Hecke ring $\overline L{}^n_p$ generate the ring $\overline L{}^n_p$ over $\mathbf Q$. Hence, the ring $\Omega(\overline L{}^n_p)$ is generated by the images $\Omega(T(p)),\Omega(T_1(p^2)),\dots,\Omega(T_n(p^2))$. Using (3.58) and Lemma 3.34, we obtain
$$(3.70)\qquad \Omega(T(p))=\sum_{a=0}^n\Omega(\Pi_a)=\sum_{a=0}^nx_0s_a(x_1,\dots,x_n)=x_0\prod_{i=1}^n(1+x_i)=t.$$
Thus, to prove (3.69) it suffices to verify that the vector spaces

$$V_1=\Big\{\sum_{i=1}^n\gamma_i\,\Omega(T_i(p^2));\ \gamma_i\in\mathbf Q\Big\}\qquad\text{and}\qquad V_2=\Big\{\sum_{j=0}^{n-1}\rho_j\,p_j;\ \rho_j\in\mathbf Q\Big\}$$
coincide. From (3.61) and Lemma 3.34 we obtain
$$\Omega(T_i(p^2))=\sum_{a=i}^n\sum_{b=0}^{n-a}p^{b(a+b+1)}l_p(a-i,a)\,x_0^2\,\omega(\pi_{a,b}(p))=\sum_{a=i}^nl_p(a-i,a)\,x_0^2\Psi_a,$$
where
$$\Psi_a=\sum_{b=0}^{n-a}p^{b(a+b+1)}\omega(\pi_{a,b}(p)).$$
We set
$$V_3=\Big\{\sum_{a=1}^n\gamma_a\,x_0^2\Psi_a;\ \gamma_a\in\mathbf Q\Big\}.$$
The above formulas for $\Omega(T_i(p^2))$ imply that $V_1\subset V_3$. The same formulas also imply that the coefficient matrix for the expansions of $\Omega(T_1(p^2)),\dots,\Omega(T_n(p^2))$ with respect to $x_0^2\Psi_1,\dots,x_0^2\Psi_n$ is a triangular matrix, has integer entries, and has entries $l_p(0,a)=1$ ($a=1,\dots,n$) on the main diagonal. Hence, this matrix has an inverse matrix of the same form, and this implies that each $x_0^2\Psi_a$ ($a=1,\dots,n$) is an integer linear combination of $\Omega(T_1(p^2)),\dots,\Omega(T_n(p^2))$. In particular, $V_3\subset V_1$. Thus,
$V_1=V_3$. On the other hand, returning to the polynomials $p_a$, by (3.52) we have
$$r_a=\sum_{i+j=a}s_i(x_1,\dots,x_n)\,s_j(x_1^{-1},\dots,x_n^{-1});$$
if we take into account that $s_j(x_1^{-1},\dots,x_n^{-1})=(x_1\cdots x_n)^{-1}s_{n-j}(x_1,\dots,x_n)$, we obtain
$$p_a=x_0^2x_1\cdots x_n\,r_a=x_0^2\sum_{i+j=a}s_i(x_1,\dots,x_n)\,s_{n-j}(x_1,\dots,x_n).$$
Using the spherical map $\omega=\omega^n_p$ and Lemma 2.21, we can rewrite these formulas in the form
$$p_a=x_0^2\,\omega\Big(\sum_{i+j=a}p^{i(i+1)/2+(n-j)(n-j+1)/2}\,\pi_i\pi_{n-j}\Big),$$
where $\pi_a=\pi^n_a(p)$. We use the formulas (2.30) to compute the products $\pi_i\pi_{n-j}$, and we substitute the resulting expressions in the last formula for $p_a$. We obtain
$$p_a=x_0^2\,\omega\Big(\sum_{i+j=a}p^{i(i+1)/2+(n-j)(n-j+1)/2}\sum_{\substack{\alpha+\beta=i\\0\leqslant\alpha\leqslant j\\0\leqslant\beta\leqslant n-j}}\frac{\varphi_{\alpha+n-j-\beta}}{\varphi_\alpha\,\varphi_{n-j-\beta}}\,\pi_{\alpha+n-j-\beta,\,\beta}\Big).$$

If we set $i=\alpha+\beta$, $j=a-\alpha-\beta$, and note that the conditions on $\alpha$ and $\beta$ in the summation are then equivalent to the inequalities $\alpha,\beta\geqslant0$, $2\alpha+\beta\leqslant a$, we obtain
$$p_a=x_0^2\sum_{\substack{2\alpha+\beta\leqslant a\\ \alpha,\beta\geqslant0}}p^{(\alpha+\beta)(\alpha+\beta+1)/2+(\alpha+\beta+n-a)(\alpha+\beta+n-a+1)/2}\,\frac{\varphi_{2\alpha+n-a}}{\varphi_\alpha\,\varphi_{\alpha+n-a}}\,\omega(\pi_{2\alpha+n-a,\,\beta}(p))$$
$$=x_0^2\sum_{0\leqslant\alpha\leqslant a/2}p^{\alpha(\alpha+1)/2+(\alpha+n-a)(\alpha+n-a+1)/2}\,\frac{\varphi_{2\alpha+n-a}}{\varphi_\alpha\,\varphi_{\alpha+n-a}}\,\Psi_{2\alpha+n-a},$$
where, in accordance with our earlier notation,
$$\Psi_{2\alpha+n-a}=\sum_{\beta=0}^{n-(2\alpha+n-a)}p^{\beta(2\alpha+n-a+\beta+1)}\,\omega(\pi_{2\alpha+n-a,\,\beta}(p)).$$
Setting $2\alpha+n-a=b$ and replacing $a$ by $n-a$, we obtain
$$p_{n-a}=\sum_{\substack{a\leqslant b\leqslant n\\ b\equiv a\ (\mathrm{mod}\ 2)}}p^{(b-a)(b-a+2)/8+(b+a)(b+a+2)/8}\,\frac{\varphi_b}{\varphi_{(b-a)/2}\,\varphi_{(b+a)/2}}\,x_0^2\Psi_b.$$
From these formulas it follows, in particular, that the polynomials $p_{n-1},p_{n-2},\dots,p_0$ can be expressed as linear combinations of $x_0^2\Psi_1,x_0^2\Psi_2,\dots,x_0^2\Psi_n$; and the matrix of the expansion is a triangular matrix with rational entries and nonzero entries on the main diagonal. As before, we conclude from this that the vector spaces spanned by these sets over $\mathbf Q$ are the same, i.e., $V_2=V_3$. Thus, $V_1=V_2$, and (3.69) is proved.
We now prove (3.56). From the definitions it easily follows that the elements (3.55) are invariant under all transformations in $W_n$. Thus, the right side of (3.56) is contained in the left side. We prove the reverse inclusion by induction on $n$, and for fixed $n$ by induction on the degree in $x_0$ of the $W$-invariant polynomial. If $n=1$, then the change of variables $x_0=z_1$, $x_0x_1=z_2$ obviously takes the ring $\mathbf Q[t,p_0]$ to $\mathbf Q[z_1+z_2,z_1z_2]$, and takes $\mathbf Q[x_0,x_1]_W$ to the ring $\mathbf Q[z_1,z_2]_S$ of all polynomials in $z_1$ and $z_2$ over $\mathbf Q$ that do not change when $z_1$ and $z_2$ are permuted (note that if $F(x_0,x_1)=\sum_{i,j\geqslant0}a_{ij}x_0^ix_1^j$ is $W$-invariant, i.e., if $F(x_0x_1,x_1^{-1})=F(x_0,x_1)$, then $a_{ij}\neq0$ implies that $i\geqslant j$, and hence $F(z_1,z_2/z_1)$ is a polynomial in $z_1,z_2$). By the fundamental theorem on symmetric polynomials, the latter ring is $\mathbf Q[z_1+z_2,z_1z_2]$.
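The base-case substitution is easy to check numerically (a sketch; the sample values are arbitrary):

```python
from fractions import Fraction

x0, x1 = Fraction(3, 2), Fraction(7, 5)
z1, z2 = x0, x0 * x1                   # the change of variables x0 = z1, x0*x1 = z2

t, p0 = x0 * (1 + x1), x0**2 * x1
# t and p0 become the elementary symmetric polynomials in z1, z2
assert t == z1 + z2 and p0 == z1 * z2

# W-invariance for n = 1: tau_1 fixes t and p0, hence every polynomial in them
y0, y1 = x0 * x1, 1 / x1
assert y0 * (1 + y1) == t and y0**2 * y1 == p0
print("t -> z1 + z2 and p0 -> z1*z2 under x0 = z1, x0*x1 = z2")
```

In the $z$-coordinates $\tau_1$ is simply the transposition of $z_1$ and $z_2$, which is why $W_1$-invariants become ordinary symmetric polynomials.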

Now suppose that $n>1$, and (3.56) has been proved for smaller values of $n$. We use induction on $m$ to prove that any $W$-invariant polynomial $F(x_0,x_1,\dots,x_n)$ of degree $m$ in $x_0$ is a polynomial in $t,p_0,\dots,p_{n-1}$. If $m=0$, then $F=F(x_1,\dots,x_n)$ is a symmetric polynomial that satisfies, for example, the relation $F(x_1^{-1},x_2,\dots,x_n)=F$, and so it clearly must be a constant. Suppose that $m\geqslant1$, and our claim has been proved for polynomials whose degree in $x_0$ is less than $m$. Let
$$F(x_0,x_1,\dots,x_n)=\sum_{i=0}^mx_0^i\varphi_i(x_1,\dots,x_n)$$
be a $W$-invariant polynomial of degree $m$ in $x_0$. Using the definition of the action of the group $W_n$ and the algebraic independence of $x_0,x_1,\dots,x_n$, we see that each of the polynomials $x_0^i\varphi_i(x_1,\dots,x_n)$ is also $W$-invariant. Thus, without loss of generality we may assume that $F=x_0^m\varphi(x_1,\dots,x_n)$. The $W$-invariance of $F$ clearly implies that $F$ is also invariant with respect to the group $W_{n-1}$ acting on the variables $x_0,x_1,\dots,x_{n-1}$. In particular, the polynomial $F(x_0,x_1,\dots,x_{n-1},0)$ is $W_{n-1}$-invariant. By the first induction assumption, there exists a polynomial
$$P(y_1,\dots,y_n)=\sum_{i=(i_1,\dots,i_n)}a_i\,y_1^{i_1}\cdots y_n^{i_n}$$
with coefficients in $\mathbf Q$ such that
$$F(x_0,x_1,\dots,x_{n-1},0)=P(t^{(n-1)},p_0^{(n-1)},\dots,p_{n-2}^{(n-1)}).$$


If we take into account the special form of F and the algebraic independence of
xo, xi, ... , Xn-t. we may suppose that in the polynomial P
a; = 0, if i1 + 2(i2 + · · · + i,) =F m
(the polynomials t and Pa are homogeneous in xo of degree 1 and 2, respectively). We
now show that when Xn is set equal to zero the polynomials tn, pf, ... , PZ- i become
r<n-I), Pan-I), ... , p~"._-; 1 >, respectively. For tn this is obvious; as for p;, by definition we
have
~ ~~

L(-l)a pZva = (1 - Xnv)(xn - v) L (-l)b Pkn-I)vh,


a~ b~

and so, setting Xn = 0, we obtain


2n 2n-I
""'(-l)apnl
L...J a Xn=O
Va= ""'(-l)ap(n-l)Va
L...J a-I '
a=O a=I

which gives the desired relations·. From the previous argument it follows that the
polynomial

is identically zero when Xn = 0. Hence, it is divisible by Xn. On the other hand,


by construction, F 1 is a W -invariant polynomial. In particular, it is symmetric in
xi, ... , x,.. Thus, Fi is divisible by the product xi · · · xn:
Fi (xo, xi, ... , Xn) =xi · · · x,.F2(xo, xi, ... , x,.).
§3. HECKE RINGS FOR THE SYMPLECTIC GROUP i49

Expanding $F_2$ in powers of $x_0$, we obtain
$$F_1(x_0,x_1,\dots,x_n)=\sum_{s\geqslant0}x_0^sx_1\cdots x_nf_s(x_1,\dots,x_n),$$
where the $f_s$ are polynomials. Since $F_1$ is a $W$-invariant polynomial, it is invariant under $\tau_1$ (see (3.51)), i.e., it satisfies the relation $F_1(x_0x_1,x_1^{-1},x_2,\dots,x_n)=F_1(x_0,x_1,\dots,x_n)$. From this and the above sum for $F_1$ it clearly follows that the polynomials $f_s$ satisfy the relations
$$x_1^{s-2}f_s(x_1^{-1},x_2,\dots,x_n)=f_s(x_1,\dots,x_n).$$
If $s=0$ or $s=1$ and $f_s$ is nonzero, then the expression on the left contains negative powers of $x_1$, and so is not a polynomial. Hence, $f_0(x_1,\dots,x_n)=f_1(x_1,\dots,x_n)=0$, and
$$F_1(x_0,x_1,\dots,x_n)=x_0^2x_1\cdots x_nF'(x_0,x_1,\dots,x_n),$$
where $F'$ is a polynomial which is obviously $W$-invariant. In addition, from our assumptions concerning $F$ and $P$ it follows that $F'$ is homogeneous of degree $m-2$ in $x_0$. Hence, either $F'=0$, or else $F'$ is a polynomial in $t,p_0,\dots,p_{n-1}$. This proves (3.56).
We now use induction on $n$ to prove that the polynomials (3.55) are algebraically independent over $\mathbf Q$. For $n=1$ they are $x_0(1+x_1)=z_1+z_2$ and $x_0^2x_1=z_1z_2$, where $z_1=x_0$ and $z_2=x_0x_1$, and their algebraic independence is obvious. Suppose that $n>1$, and our claim has been proved for $n'<n$. We use proof by contradiction. Suppose that $G(y,y_0,\dots,y_{n-1})$ is a polynomial of minimal degree that vanishes when we substitute $y=t^n$, $y_0=p_0^n,\dots,y_{n-1}=p_{n-1}^n$. We expand $G$ in powers of $y_0$:
$$G=\sum_ig_i(y,y_1,\dots,y_{n-1})\,y_0^i.$$
If $g_0=0$, then $G$ is divisible by $y_0$, and $Gy_0^{-1}$ is a polynomial of lower degree that also vanishes under the above substitution. Hence $g_0\neq0$. By assumption,
$$\sum_ig_i(t^n,p_1^n,\dots,p_{n-1}^n)(p_0^n)^i=0.$$

Since this is an identity in the variables $x_0,x_1,\dots,x_n$, we can set $x_n=0$ in it. As we saw before, $t^n,p_1^n,\dots,p_{n-1}^n$ then become $t^{(n-1)},p_0^{(n-1)},\dots,p_{n-2}^{(n-1)}$, and $p_0^n$ obviously goes to zero. We thus obtain the identity
$$g_0(t^{(n-1)},p_0^{(n-1)},\dots,p_{n-2}^{(n-1)})=0,$$
which, by the induction assumption, implies that $g_0=0$. This contradiction proves that the elements (3.55) are algebraically independent.
At the beginning of the proof of the theorem we saw that $\Omega(T(p))=t_n$, and the images $\Omega(T_i(p^2))$ ($1\leqslant i\leqslant n$) can be expressed as linear combinations of $p_0,\dots,p_{n-1}$, and conversely. This implies that the elements $\Omega(T(p)),\Omega(T_1(p^2)),\dots,\Omega(T_n(p^2))$ are algebraically independent over $\mathbf Q$; and since $T(p),T_1(p^2),\dots,T_n(p^2)$ generate the ring $\overline L{}^n_p$, it follows that the restriction of $\Omega$ to $\overline L{}^n_p$ is a monomorphism. This completes the proof of parts (1) and (2) of the theorem.
The third part follows from parts (1) and (2), Lemma 3.27, and the formula for $\Omega(\Delta_n(p))$ in Lemma 3.34. $\square$

The theorem just proved enables us to reduce computations in the local Hecke rings of the symplectic group to computations in polynomial rings. To show how this is done, we consider, for example, the problem of summing the formal generating series for elements of the form (3.19), where $m$ runs through the powers of a fixed prime $p$, $(p,q)=1$. Thus, we consider the formal power series
$$(3.71)\qquad \sum_{\delta=0}^\infty T^n(p^\delta)v^\delta,$$
where $T^n(p^\delta)\in L^n_p(q)$ are the elements (3.19). From (3.22), Lemma 3.11, and the definitions it follows that
$$(3.72)\qquad T^n(p^\delta)=\sum_{D,B}\left(\Gamma^n(q)\begin{pmatrix}p^\delta D^*&B\\0&D\end{pmatrix}\right),$$
where $D\in\Lambda\backslash\Lambda\,\mathrm{diag}(p^{\delta_1},\dots,p^{\delta_n})\Lambda$, $0\leqslant\delta_1\leqslant\cdots\leqslant\delta_n\leqslant\delta$, and $B\in B_0(D)/\mathrm{mod}\,D$, since $\begin{pmatrix}p^\delta D^*&B\\0&D\end{pmatrix}$ is an integer matrix if and only if $B$ and $D$ are integer matrices and all of the elementary divisors of $D$ divide $p^\delta$. Then from the definition of the map $\Omega$ and Lemma 3.33 we obtain the formal identity
$$\sum_{\delta=0}^\infty\Omega(T^n(p^\delta))v^\delta=\sum_{\delta=0}^\infty\ \sum_{0\leqslant\delta_1\leqslant\cdots\leqslant\delta_n\leqslant\delta}p^{n\delta_1+(n-1)\delta_2+\cdots+\delta_n}\,\omega(t(p^{\delta_1},\dots,p^{\delta_n}))(x_0v)^\delta,$$
where $t(p^{\delta_1},\dots,p^{\delta_n})=(\mathrm{diag}(p^{\delta_1},\dots,p^{\delta_n}))_\Lambda\in H^n_p$. The summation of the series on the right in this relation for arbitrary $n$ is based on explicit formulas for the polynomials $\omega(t(p^{\delta_1},\dots,p^{\delta_n}))$ and is beyond the scope of this book (see Andrianov [1, 2]). Here we shall limit ourselves to the cases $n=1$ and $n=2$. When $n=1$, from the definitions
we shall limit ourselves to the cases n = 1 and n = 2. When n = 1, from the definitions
we obtain
00 00

L:n(Tl(p.s))v.s = L p.Si(x1p-1).si(xov).S1+a
(3.73) J=O 61,a=O

=[(1-xov)(l -x0x 1v)r 1•

When $n=2$, if we set $\delta_2=\delta_1+a$ and $\delta=\delta_2+\beta$ and use Lemmas 2.4 and 2.21, we obtain
$$\sum_{\delta=0}^\infty\Omega(T^2(p^\delta))v^\delta=\sum_{\delta_1,a,\beta=0}^\infty p^{2\delta_1+\delta_2}(p^{-3}x_1x_2)^{\delta_1}\,\omega(t(1,p^a))(x_0v)^{\delta_1+a+\beta}$$
$$=[(1-x_0v)(1-x_0x_1x_2v)]^{-1}\sum_{a=0}^\infty\omega(t(1,p^a))(px_0v)^a,$$

and it remains to compute the last series. This computation easily reduces to our earlier calculation of the generating series for the polynomials $\omega(t_2(p^\delta))$, where $t_n(m)$ are the elements (2.10). First of all, using the definitions and Lemmas 2.4 and 2.21, we have
$$\sum_{\delta=0}^\infty\omega(t_2(p^\delta))v_1^\delta=\sum_{\gamma,a=0}^\infty\omega(t(p^\gamma,p^{\gamma+a}))v_1^{2\gamma+a}=\sum_{\gamma=0}^\infty\omega(\pi^2_2(p)^\gamma)v_1^{2\gamma}\,\sum_{a=0}^\infty\omega(t(1,p^a))v_1^a=(1-p^{-3}x_1x_2v_1^2)^{-1}\sum_{a=0}^\infty\omega(t(1,p^a))v_1^a.$$
From this formula and (2.38) with $n=2$ we conclude that
$$(3.74)\qquad \sum_{a=0}^\infty\omega(t(1,p^a))v_1^a=[(1-p^{-1}x_1v_1)(1-p^{-1}x_2v_1)]^{-1}(1-p^{-3}x_1x_2v_1^2).$$
Finally, we obtain
$$(3.75)\qquad \sum_{\delta=0}^\infty\Omega(T^2(p^\delta))v^\delta=[(1-x_0v)(1-x_0x_1v)(1-x_0x_2v)(1-x_0x_1x_2v)]^{-1}(1-p^{-1}x_0^2x_1x_2v^2).$$
The denominators of the resulting expressions are the special cases when $n=1$ and $n=2$ of the polynomial
$$(3.76)\qquad q(x_0,\dots,x_n;v)=(1-x_0v)\prod_{r=1}^n\ \prod_{1\leqslant i_1<\cdots<i_r\leqslant n}(1-x_0x_{i_1}\cdots x_{i_r}v)=\sum_{i=0}^m(-1)^iq^n_i(x_0,\dots,x_n)v^i\qquad(m=2^n).$$
It is not hard to see that any transformation in $W=W_n$ of the variables $x_0,x_1,\dots,x_n$ only permutes the factors of $q$. Hence, all of the coefficients $q^n_i$ lie in the ring $\mathbf Q[x_0,\dots,x_n]_W$, and so, by Theorem 3.30, they are the $\Omega$-images of uniquely determined $q_i(p)\in\overline L{}^n_p$:
$$(3.77)\qquad q^n_i(x_0,\dots,x_n)=\Omega^n_p(q_i(p)),\qquad\text{where }q_i(p)\in\overline L{}^n_p.$$
We set
$$(3.78)\qquad Q(v)=Q^n_p(v)=\sum_{i=0}^m(-1)^iq_i(p)v^i\in\overline L{}^n_p[v].$$
We note that the obvious relation
$$q(x_0,\dots,x_n;v)=v^m(x_0^2x_1\cdots x_n)^{m/2}\,q(x_0,\dots,x_n;(x_0^2x_1\cdots x_nv)^{-1})$$
and the formula for $\Omega(\Delta_n(p))$ in Lemma 3.34 together imply the relation
$$Q^n_p(v)=v^m(p^{n(n+1)/2}\Delta_n(p))^{m/2}\,Q^n_p((p^{n(n+1)/2}\Delta_n(p)v)^{-1}),$$
which, in turn, implies the following relations for the coefficients of $Q$:
$$(3.79)\qquad q_{m-i}(p)=(p^{n(n+1)/2}\Delta_n(p))^{m/2-i}q_i(p)\qquad(0\leqslant i\leqslant m).$$
In particular,
$$(3.80)\qquad q_m(p)=(p^{n(n+1)/2}\Delta_n(p))^{m/2}.$$
Finally, from (3.70) we have
$$(3.81)\qquad q_1(p)=T(p).$$

We now turn to the problem of summing series of the form (3.71).

PROPOSITION 3.35. The following formal power series identities hold for any prime $p$:
$$\sum_{\delta=0}^\infty T^1(p^\delta)v^\delta=Q^1_p(v)^{-1},$$
$$\sum_{\delta=0}^\infty T^2(p^\delta)v^\delta=Q^2_p(v)^{-1}(1-p^2\Delta_2(p)v^2),$$
where $T^n(p^\delta)$ are elements of the form (3.19), regarded as elements in $\overline L{}^n_p$, and $Q^n_p(v)$ are the polynomials (3.78). One has the formulas
$$Q^1_p(v)=1-T^1(p)v+p\Delta_1(p)v^2,$$
$$Q^2_p(v)=1-T^2(p)v+q_2(p)v^2-p^3\Delta_2(p)T^2(p)v^3+p^6\Delta_2(p)^2v^4,$$
where
$$q_2(p)=(\Omega^2_p)^{-1}\big(x_0^2x_1x_2(x_1+x_2+x_1^{-1}+x_2^{-1}+2)\big).$$

PROOF. From (3.73) and the definitions it follows that the isomorphism $\Omega$ maps the constant term of the power series
$$Q^1_p(v)\sum_{\delta=0}^\infty T^1(p^\delta)v^\delta$$
to one, and takes all of the other coefficients to zero. Hence, the constant term of this series is the unit of the ring $\overline L{}^1_p$, and the other coefficients are zero. In a similar way we find that the second identity is a consequence of (3.75). The formulas for the coefficients of $Q^1_p$ and $Q^2_p$ follow from (3.81), (3.79), (3.80), and the definitions. $\square$

It is clear that similar identities hold over any ring isomorphic to L_p^n, for example over the ring L_p^n(q), where p ∤ q.
Theorem 3.30 enables us to parameterize the set of all nonzero Q-linear homomorphisms from the ring L_p^n to C.

PROPOSITION 3.36. Every nonzero Q-linear homomorphism λ from the ring L_p^n to C has the form

(3.82) λ(X) = λ_A(X) = (ΩX)(α_0, α_1, …, α_n)  (X ∈ L_p^n),

where A = (α_0, …, α_n) is a set of nonzero complex numbers that depends on λ.
§3. HECKE RINGS FOR THE SYMPLECTIC GROUP 153

PROOF. According to Theorem 3.30, to prove the proposition it suffices to verify that any nonzero Q-linear homomorphism μ : Q[x_0^{±1}, …, x_n^{±1}]^W → C can be obtained by setting x_0 = α_0, …, x_n = α_n, where α_0, …, α_n are nonzero complex numbers. Let r_a = r_a^n be the coefficients of the polynomial (3.52), let p_0 = x_0^2x_1⋯x_n, and let t be the polynomial (3.54). We set μ(r_a) = ρ_a, μ(p_0) = δ, and μ(t) = γ. Since μ ≠ 0, it follows that μ takes 1 to 1; hence, μ(p_0)μ(p_0^{-1}) = 1, and so δ = μ(p_0) ≠ 0. Theorem 3.30 implies that every element of the ring Q[x_0^{±1}, …, x_n^{±1}]^W is a polynomial in r_1, …, r_{n-1}, p_0^{±1}, t with coefficients in Q. Thus, it suffices to prove that the system of equations

(3.83) r_a(x_1, …, x_n) = ρ_a (1 ≤ a ≤ n-1), p_0(x_0, …, x_n) = δ, t(x_0, …, x_n) = γ,

where δ ≠ 0, has a solution in nonzero complex numbers.
Since the polynomial (3.52) obviously satisfies the relation r(v) = v^{2n}r(v^{-1}), its coefficients satisfy the relation

(3.84) r_a^n(x_1, …, x_n) = r_{2n-a}^n(x_1, …, x_n)  (0 ≤ a ≤ 2n),

and hence ρ_a = μ(r_a) = μ(r_{2n-a}) = ρ_{2n-a} for a = 0, 1, …, 2n. From these last relations it follows that the polynomial

(μr)(v) = ∑_{a=0}^{2n} (-1)^a ρ_a v^a ∈ C[v]

satisfies the equality (μr)(v) = v^{2n}(μr)(v^{-1}). Since obviously ρ_0 = ρ_{2n} = 1, the polynomial μr factors over C into linear factors of the form

(μr)(v) = ∏_{i=1}^{2n}(1 - y_i v), where y_1⋯y_{2n} = 1.

Applying the above relation, we have

∏_{i=1}^{2n}(1 - y_i v) = v^{2n}∏_{i=1}^{2n}(1 - y_i v^{-1}) = ∏_{i=1}^{2n}(1 - y_i^{-1}v),

from which it follows that the numbers y_1^{-1}, …, y_{2n}^{-1} are the same as the numbers y_1, …, y_{2n} except for their order, i.e., y_i^{-1} = y_{σ(i)}, where σ is some permutation of the numbers 1, 2, …, 2n. If σ(i) = i for some i, then y_i^2 = 1, and y_i = ±1. We let i_1, …, i_k denote all indices i for which σ(i) = i and y_i = 1, and we let j_1, …, j_s denote all indices j for which σ(j) = j and y_j = -1. All of the other indices can be partitioned into pairs (i, σ(i)) where σ(i) ≠ i. We let l_1, …, l_t denote the first components of these pairs. Then k + s + 2t = 2n, and the relation y_1⋯y_{2n} = 1 implies that y_{j_1}⋯y_{j_s} = 1. Hence, s and k are even numbers. We let α_1, …, α_n denote the numbers y_i with i = i_1, …, i_{k/2}, j_1, …, j_{s/2}, l_1, …, l_t, respectively. We then have

(μr)(v) = ∑_{a=0}^{2n}(-1)^a ρ_a v^a = ∏_{i=1}^{n}(1 - α_i v)(1 - α_i^{-1}v),
and hence

(3.85) ρ_a = r_a^n(α_1, …, α_n)  (0 ≤ a ≤ 2n).

In particular, α_1, …, α_n is a nonzero solution of the first n - 1 equations of the system (3.83). If we substitute these numbers in the last two equations, we obtain the system

x_0^2 α_1⋯α_n = δ,  x_0(1 + α_1)⋯(1 + α_n) = γ

in the unknown x_0. It is clear that this system is solvable if (and only if) the following relation holds:

δ^{-1}γ^2 = (α_1⋯α_n)^{-1} ∏_{i=1}^{n}(1 + α_i)^2 = ∏_{i=1}^{n}(1 + α_i)(1 + α_i^{-1}).

Using the definitions, we have the identity

(3.86) t(x_0, …, x_n)^2 / p_0(x_0, …, x_n) = (x_1⋯x_n)^{-1} ∏_{i=1}^{n}(1 + x_i)^2 = ∏_{i=1}^{n}(1 + x_i)(1 + x_i^{-1}) = r(-1) = ∑_{a=0}^{2n} r_a^n(x_1, …, x_n),

from which, if we apply the homomorphism μ and the equalities (3.85), we obtain

δ^{-1}γ^2 = ∑_{a=0}^{2n} ρ_a = ∑_{a=0}^{2n} r_a^n(α_1, …, α_n) = ∏_{i=1}^{n}(1 + α_i)(1 + α_i^{-1}). □
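The identity (3.86) is easy to verify symbolically. The sketch below assumes the explicit forms used in the proof, namely t(x_0, …, x_n) = x_0∏(1 + x_i), p_0 = x_0^2x_1⋯x_n, and r(v) = ∏(1 - x_i v)(1 - x_i^{-1}v):

```python
import sympy as sp

for n in (2, 3):
    x0 = sp.Symbol('x0')
    xs = sp.symbols(f'x1:{n + 1}')
    v = sp.Symbol('v')
    t = x0 * sp.prod(1 + xi for xi in xs)
    p0 = x0 ** 2 * sp.prod(xs)
    # r(v) = prod (1 - x_i v)(1 - x_i^{-1} v); the signed coefficient sum is r(-1)
    r = sp.prod((1 - xi * v) * (1 - v / xi) for xi in xs)
    assert sp.simplify(t ** 2 / p0 - r.subs(v, -1)) == 0
print('ok')
```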

We call α_0, α_1, …, α_n the parameters of the homomorphism λ = λ_{(α_0,…,α_n)}. Clearly, if a set of parameters is obtained from another set by the action of a transformation in W_n, then the corresponding homomorphisms are the same.

PROBLEM 3.37. Prove that the order of the group W_n is equal to 2^n n!.
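For small n the count 2^n n! can be checked by brute force. The sketch below assumes the standard description of W_n used earlier in this section: W_n is generated by the permutations of x_1, …, x_n and by the involutions τ_i : x_0 ↦ x_0x_i, x_i ↦ x_i^{-1} (all other variables fixed), each of which permutes the factors of the polynomial (3.76):

```python
import math
import sympy as sp

def w_group(n):
    # generate W_n from the transpositions of x_i, x_j and the involutions
    # tau_i: x0 -> x0*x_i, x_i -> 1/x_i (other variables fixed)
    xs = list(sp.symbols(f'x0:{n + 1}'))
    start = tuple(xs)

    def tau(i, t):
        u = list(t)
        u[0] = sp.cancel(u[0] * u[i])
        u[i] = sp.cancel(1 / u[i])
        return tuple(u)

    def swap(i, j, t):
        u = list(t)
        u[i], u[j] = u[j], u[i]
        return tuple(u)

    gens = [lambda t, i=i: tau(i, t) for i in range(1, n + 1)]
    gens += [lambda t, i=i, j=j: swap(i, j, t)
             for i in range(1, n + 1) for j in range(i + 1, n + 1)]
    seen, frontier = {start}, [start]
    while frontier:                      # orbit of the identity = whole group
        t = frontier.pop()
        for g in gens:
            u = g(t)
            if u not in seen:
                seen.add(u)
                frontier.append(u)
    return seen

for n in (1, 2, 3):
    assert len(w_group(n)) == 2 ** n * math.factorial(n)
print('ok')
```

The group elements are stored as the tuples of images of (x_0, …, x_n), so distinct elements give distinct tuples and the orbit size is the group order.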
PROBLEM 3.38. Prove the following formulas for the middle coefficient q_2^2(p) of Q_p^2(v):

q_2^2(p) = pT(1, p, p^2, p) + p(p^2 + 1)Δ_2(p) = (T^2(p))^2 - T^2(p^2) - p^2Δ_2(p),

where T(1, p, p^2, p) = T_1(p^2).


[Hint: Using (3.61) and Lemma 3.34, compute the Ω-images of the right side and then apply Theorem 3.30.]
PROBLEM 3.39. Suppose that q ∈ N, and λ : L^2(q) → C is a Q-linear homomorphism of the global Hecke ring. Further suppose that λ(T^2(m)) = O(m^σ) for some σ ∈ R. Show that the zeta-function of the ring L^2(q) with character λ, i.e., the Dirichlet series

Z(s, λ) = ∑_{m∈N(q)} λ(T^2(m))m^{-s},

converges absolutely and uniformly in any region of the form {s ∈ C; Re s ≥ 1 + σ + ε}, where ε > 0, and in that region it has an Euler product of the form

Z(s, λ) = ∏_{p ∤ q} (1 - α_0(p)^2α_1(p)α_2(p)p^{-2s-1}) q(α_0(p), α_1(p), α_2(p); p^{-s})^{-1},

where α_0(p), α_1(p), α_2(p) are the parameters of the homomorphism obtained by composing the natural isomorphism L_p^2 → L_p^2(q) with λ, and q is the polynomial (3.76) for n = 2. Prove that the parameters α_i(p) satisfy the inequalities

[Hint: Use (3.20) and (3.75).]


PROBLEM 3.40. Let N : L_p^n → C be the homomorphism in Problem 1.16. Show that one can take as parameters for N the set (α_0, α_1, …, α_n) = (1, p, …, p^n).

PROBLEM 3.41. In the notation of Proposition 3.36, prove that λ_A = λ_{A'}, where A, A' ∈ (C*)^{n+1}, if and only if A' = wA for some w ∈ W_n.

§4. Hecke rings for the symplectic covering group


In this section we study the Hecke rings L̂^n(q) of the symplectic covering group 𝔊 that are obtained by lifting the Hecke rings L^n(q). This lifting was described in §1.2 for abstract Hecke rings. In L̂^n(q) we shall examine the subring Ê^n(q) generated by the double cosets of elements ξ = (M, φ) ∈ 𝔊 such that M ∈ S^n(q) and r(M) is the square of a rational number. We pay particular attention to this "even" subring Ê^n(q) because (and here is the principal difference between the Hecke rings of the symplectic covering group 𝔊 and the Hecke rings of the symplectic group itself) the Hecke operators on spaces of Siegel modular forms of half-integer weight that correspond to all other elements of the ring L̂^n(q) are the zero operator. Concerning this ring Ê^n(q) we shall prove that it splits into the tensor product of pairwise commuting local subrings Ê_p^n(q), where (p, q) = 1. These local subrings have one important drawback: they have too many elements, i.e., they have many different elements that represent the same Hecke operator on the spaces of modular forms. Hence, inside Ê_p^n(q) we look at the minimal commutative subring Ê_p^n(q, x) that is analogous to the subring E_p^n(q) ⊂ L_p^n(q) and has the property that its image under the spherical map Ω coincides with the image of E_p^n(q).

For the duration of this section we suppose that n, q ∈ N and q is divisible by 4.
1. Global rings. In §3 we defined and studied the Hecke rings L^n(q) of the symplectic group S^n = S_Q^n. We can proceed in the same way in the case of the symplectic covering group 𝔊. However, we reach our goal more quickly if we obtain the Hecke rings of 𝔊 by lifting the Hecke rings of S^n. According to §1.2, to do this we must define two homomorphisms P and δ (see (1.13)). For P we take the epimorphism

(4.1) P : 𝔊 → S_R^n : (M, φ) ↦ M,

and for δ we take the monomorphism (3.23) of Chapter 2:

(4.2) δ : Γ_0^n(q) → 𝔊 : M ↦ M̂ = (M, j^{(2)}(M, Z)).

The Hecke rings of 𝔊 that we shall be interested in are the rings

(4.3) L̂^n(q) = D_Q(Γ̂_0^n(q), Ŝ^n(q)),

where

(4.4) Γ̂_0^n(q) = δ(Γ_0^n(q)) and Ŝ^n(q) = P^{-1}(S^n(q)).

A basic role in studying the Hecke rings L̂^n(q) is played by the following lifting homomorphism (compare with the definition (3.26) of Chapter 2):

(4.5) t = t_M : (Γ_0^n(q))_M = Γ_0^n(q) ∩ M^{-1}Γ_0^n(q)M → C_1,

where M ∈ S^n(q); this map is defined for any γ ∈ (Γ_0^n(q))_M by the equality

(4.6) (MγM^{-1})^ = ξγ̂ξ^{-1}(E_{2n}, t_M(γ)),

where for any α ∈ Γ_0^n(q) we let α̂ denote its image δ(α) in the group 𝔊, and we let ξ denote any P-preimage of M in 𝔊. From Lemma 3.4 of Chapter 2 it follows that (t_M)^4 = 1; hence, the kernel of t_M has finite index in the group (Γ_0^n(q))_M. By assumption, M ∈ S^n(q), and so Ker t_M also has finite index in the group Γ_0^n(q). Consequently, by Lemma 1.7 the pair (4.4) is a Hecke pair, and so L̂^n(q) really is a Hecke ring.
In this section we investigate the algebraic structure of the Hecke rings L̂^n(q) to roughly the same extent that we studied the structure of the Hecke rings L^n(q). Our investigation will be based on Lemma 1.8, which describes the connection between double cosets of L̂^n(q) and double cosets of L^n(q), and on Proposition 1.9, which enables one to deduce certain properties of L̂^n(q) from the analogous properties of L^n(q) by comparing their images in the Hecke ring of the triangular subgroup of S^n. In order to use Lemma 1.8, we must know for which matrices M the homomorphism t_M is trivial. This question is answered by the next two lemmas. Before giving those lemmas, we make some preliminary remarks.

Because of Lemma 3.6, without loss of generality we may assume that in any double coset Γ̂_0^n(q)ξΓ̂_0^n(q), where ξ = (M, φ) ∈ Ŝ^n(q), the matrix M has been chosen in the canonical form (3.9):

(4.7) M = K = diag(d_1, …, d_n, e_1, …, e_n), where d_i, e_j > 0, d_i | d_{i+1}, d_n | e_n, e_{i+1} | e_i, d_ie_i = r(K).

We compute the value t_M(N) of the homomorphism (4.5) for a matrix N = (A B; C D) in (Γ_0^n(q))_M in the case when M = K. Using (3.19) of Chapter 2 and the equality ξ^{-1} = (M^{-1}, φ^{-1}), by (4.6) we have

(MNM^{-1}, j^{(2)}(MNM^{-1}, Z)) = (MNM^{-1})^ = ξN̂ξ^{-1}(E_{2n}, t_M(N)) = (MNM^{-1}, j^{(2)}(N, M^{-1}⟨Z⟩)t_M(N)).
From this and (4.37) of Chapter 1 we find that

(4.8) t_M(N) = j^{(2)}(MNM^{-1}, Z)j^{(2)}(N, M^{-1}⟨Z⟩)^{-1} = χ^{(2)}(MNM^{-1})χ^{(2)}(N)^{-1}I(Z),

where I(Z) = j(MNM^{-1}, Z)^{1/2}j(N, M^{-1}⟨Z⟩)^{-1/2} and j(N, Z) = det(CZ + D). Furthermore, if we take into account the definition of the square root j(N, Z)^{1/2} in (4.7) of Chapter 1, we obtain

lim_{Z→0} I(Z) = (det QDQ^{-1})^{1/2}(det D)^{-1/2} = 1,

where we suppose that Z ∈ H_n. Since the value t_M(N) does not depend on Z, it follows from (4.8) and the last equality that

(4.9) t_M(N) = χ^{(2)}(MNM^{-1})χ^{(2)}(N)^{-1},

if M ∈ S^n(q) is a matrix of the form (4.7).

LEMMA 4.1. Let M = (P 0; 0 Q) be a canonical matrix of the form (4.7) in S^n(q), where q is divisible by 4. If the lifting homomorphism t_M is trivial, then r(M) is the square of a rational number.
PROOF. Since M can be written in the form M = m^{-1}M_0, where m ∈ Z and M_0 is an integer matrix in S^n(q) with r(M) = m^{-2n}r(M_0), and since t_M = t_{M_0} by (4.6), it follows that from the beginning we may assume that M is an integer matrix. For i = 1, …, n we define an imbedding φ_i : S_A^1 → S_A^n by setting

(4.10) φ_i((a b; c d)) = the matrix obtained from E_{2n} by replacing the entries in the positions (i, i), (i, n+i), (n+i, i), and (n+i, n+i) by a, b, c, and d, respectively.

Let M^{(i)} = (d_i 0; 0 e_i), where d_ie_i = r(M), and let

(4.11) α = (a b; c d) ∈ (Γ_0^1(q))_{M^{(i)}}.

Then N = φ_i(α) ∈ (Γ_0^n(q))_M, and we have

(4.12) MNM^{-1} = φ_i(M^{(i)}αM^{(i)-1}).
From this and from (4.38) and (4.40) of Chapter 1 we have

(4.13) χ^{(2)}(N) = ε_d^{-1}|d|^{1/2-n} ∑_r exp(2πibr_i^2/d),

(4.14) χ^{(2)}(MNM^{-1}) = ε_d^{-1}|d|^{1/2-n} ∑_r exp(2πib′r_i^2/d),

where r = ᵗ(r_1, …, r_i, …, r_n) runs through M_{n,1}(Z/dZ) and b′ = d_ib/e_i. Since α satisfies the condition (4.11), it follows that b and b′ are integers prime to d, where (d, q) = 1, and so d is odd. Hence, if we use the formula for the Gauss sum

(4.15) G_d(k) = ∑_{r∈Z/dZ} exp(2πikr^2/d) = ε_d(k/d)d^{1/2},

where d is a positive odd number and (k, d) = 1, which follows from Lemmas 4.13 and 4.14 of Chapter 1, and if we further suppose that (d, r(M)) = 1, then from (4.13)-(4.14) and the formula (4.9) for t_M(N) we obtain

(4.16) t_M(N) = (b′/d)(b/d) = (d_ie_i/d) = (r(M)/d).
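The Gauss-sum evaluation (4.15) is easy to test numerically. The sketch below compares the exponential sum with ε_d(k/d)√d, where ε_d = 1 for d ≡ 1 (mod 4) and ε_d = i for d ≡ 3 (mod 4):

```python
import cmath
import math
from sympy import jacobi_symbol

def gauss_sum(k, d):
    # G_d(k) = sum over r mod d of exp(2*pi*i*k*r^2/d)
    return sum(cmath.exp(2j * cmath.pi * k * r * r / d) for r in range(d))

def eps(d):
    # epsilon_d for odd positive d
    return 1 if d % 4 == 1 else 1j

for d in (3, 5, 7, 9, 15, 21, 35):
    for k in range(1, d):
        if math.gcd(k, d) != 1:
            continue
        predicted = eps(d) * jacobi_symbol(k, d) * math.sqrt(d)
        assert abs(gauss_sum(k, d) - predicted) < 1e-8
print('ok')
```

Note that the formula holds for composite odd d as well (k/d is then a Jacobi symbol), which is exactly the generality needed in (4.16).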


It is not hard to see that for any d ∈ Z prime to r(M)q there exists a matrix α = (a b; c d) satisfying (4.11). Thus, from (4.16) we conclude that

(4.17) (r(M)/d) = 1 for all d > 0 with (d, r(M)q) = 1,

since t_M(N) = 1 by assumption. Let p_1, …, p_s be all of the prime numbers that occur to odd powers in the prime factorization of r(M), i.e., r(M) = r_1^2 p_1⋯p_s. Suppose that this set of primes is nonempty. Since M ∈ S^n(q), it follows that (r(M), q) = 1; and since q is divisible by 4, it follows that r(M) is odd, and so p_i ≠ 2. We choose d > 0 in such a way that the following congruences hold:

d ≡ x_1 (mod p_1) and d ≡ 1 (mod p_1^{-(2a+1)}r(M)q),

where x_1 is a quadratic nonresidue modulo p_1 and 2a + 1 is the power with which p_1 occurs in r(M). Such a d clearly exists, since the two moduli are relatively prime and p_1 ≠ 2. Then (d, r(M)q) = 1, and from quadratic reciprocity for the Jacobi symbol we have

(r(M)/d) = (p_1⋯p_s/d) = (d/p_1⋯p_s) = (d/p_1)⋯(d/p_s) = -1,

which contradicts (4.17). Thus, r(M) = r_1^2. □
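The construction of the witness d in this argument can be carried out explicitly; the following sketch (with r and q chosen purely for illustration) builds d by the Chinese remainder theorem exactly as in the proof and checks that the Jacobi symbol (r/d) is -1:

```python
import math
from sympy import jacobi_symbol, factorint, is_quad_residue
from sympy.ntheory.modular import crt

def witness(r, q):
    # r odd, (r, q) = 1, r not a perfect square: return d > 0 with
    # (d, r*q) = 1 and Jacobi symbol (r/d) = -1, following the proof:
    # d = x1 (mod p1),  d = 1 (mod r*q / p1^(2a+1))
    fac = factorint(r)
    p1 = next(p for p, e in fac.items() if e % 2 == 1)
    x1 = next(x for x in range(2, p1) if not is_quad_residue(x, p1))
    m2 = r * q // p1 ** fac[p1]
    return int(crt([p1, m2], [x1, 1])[0])

r, q = 3 * 5**2 * 7**3, 4
d = witness(r, q)
assert math.gcd(d, r * q) == 1
assert jacobi_symbol(r, d) == -1
print('ok')
```

Since q is divisible by 4, the second congruence forces d ≡ 1 (mod 4), which is what makes the reciprocity step in the proof come out without a sign.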

The converse of Lemma 4.1 is also true.

LEMMA 4.2. Let M = (P 0; 0 Q) be a canonical matrix of the form (4.7) in the group S^n(q), where q is divisible by 4. If r = r(M) is the square of a rational number, then the homomorphism t_M is trivial.
PROOF. We let

(4.18) N = (A B; C D) ∈ (Γ_0^n(q))_M, so that MNM^{-1} = (PAP^{-1} PBQ^{-1}; QCP^{-1} QDQ^{-1}),

and we first consider the case when (r, det D) = 1. Since QDQ^{-1} is an integer matrix by (4.18), it follows that in (4.38) and (4.40) of Chapter 1, which give the value of the multiplier χ^{(2)} for MNM^{-1} ∈ Γ_0^n(q), we can set

(4.19) d = det D = det QDQ^{-1},

after which these formulas give us

(4.20) χ^{(2)}(MNM^{-1}) = ε_d^{-1}|d|^{1/2-n} ∑_{s∈M_{n,1}(Z/dZ)} e{2PBD^{-1}Q^{-1}[s]}.

Since PQ = rE_n and (r, d) = 1 by assumption (so that, r = r_1^2 being the square of an integer prime to d, the substitution s = r_1^{-1}Qs′ is invertible modulo d), it follows that the last sum is equal to

(4.21) ∑_{s′∈M_{n,1}(Z/dZ)} e{2BD^{-1}[s′]}.

If we set M = E_{2n} in (4.20), we obtain a formula for χ^{(2)}(N). Comparing this formula with (4.20) and (4.21), we conclude that χ^{(2)}(MNM^{-1}) = χ^{(2)}(N). From this and (4.9) it follows that t_M(N) = 1 for any N ∈ (Γ_0^n(q))_M.
Now suppose that (r, det D) = δ > 1, p is a prime divisor of δ, and r = p^{2β}r_1, where (p, r_1) = 1. Further suppose that the blocks P and Q of the matrix M have the form

(4.22) P = diag(P_1, …, P_s), Q = diag(Q_1, …, Q_s), P_i = p^{a_i}P_i′, a_1 < ⋯ < a_s, a_s = β, Q_i = rP_i^{-1},

where the P_i′ are integer diagonal matrices with (det P_i′, p) = 1. Of course, the block P_s might not exist.

The inclusion MNM^{-1} ∈ Γ_0^n(q) implies the congruences

(4.23) A_{ij} ≡ D_{ij} ≡ 0 (mod p) for i > j, B_{ij} ≡ 0 (mod p) for (i, j) ≠ (s, s),

where A = (A_{ij}), B = (B_{ij}), C = (C_{ij}), and D = (D_{ij}) are divided into blocks that are analogous to (4.22). Using the other inclusion N ∈ Γ_0^n(q) and (2.7) of Chapter 1, we obtain a new series of congruences:

(4.24) ∑_{j≤min(k,l)} ᵗA_{jk}D_{jl} - ∑_{j≤min(k,l)} ᵗC_{jk}B_{jl} ≡ E or 0 (mod p)

for k = l = i or k ≠ l, respectively, where E and 0 are the identity or the zero matrix, as the case may be. If we now consider the congruences (4.24) successively for (k, l) = (1, 1), (1, 2), (2, 2), …, then we obtain

(4.25) D_{ij} ≡ 0 (mod p) for i ≠ j; det D_{ii} ≢ 0 (mod p) for i ≠ s and det D_{ss} ≡ 0 (mod p); ᵗA_{ss}D_{ss} - ᵗC_{ss}B_{ss} ≡ E (mod p); ᵗB_{ss}D_{ss} ≡ ᵗD_{ss}B_{ss} (mod p).

We choose matrices U, V E SL(Z) of the same size as Dss so that the following
congruence holds:

(4.26) D;s = UDss V =(~I ~) {modp),


where {detDi. p) = I. We suppose that all of the (s, s )-blocks are divided into sub-
blocks in a way analogous to the matrix on the right in (4.26). Let
Pu= diag(E, U*,E, U) E r 0{q),
let Pv be defined in the analogous way, and let the matrix

N 1 =PuNPv = (A'C' D'B')


be divided into the same blocks as N. In this notation it follows from (4.26) and (4.25)
that

D;s =(~1 ~) {modp), =(!~ ; {modp),


B;s 4)

c.:s = ( g~ g~) , where - 'C B =E(mod p );


4 4

this implies that {det B4, p) = l, and, if we set T.vs = ( ~ 1), then we have

(4.27) det(T.,sB.:s + D.:.,) =det ( ~1 ; 4) ~ O{mod p ).

If we define T = (E 0; C_T E) by setting C_T = diag(0, …, 0, T_{ss}), then by (4.25) and (4.27) in the matrix

(4.28) N′ = TP_U N P_V = (A′ B′; C′ D′)

the block D′ is nonsingular modulo p, since

(4.29) det D′ ≡ (∏_{i=1}^{s-1} det D_{ii}) det(T_{ss}B_{ss}′ + D_{ss}′) ≢ 0 (mod p).

Note that MTM^{-1} is an integer matrix, and the matrices MP_UM^{-1} and MP_VM^{-1} are p-integral. Since, by assumption, p does not divide r_1, it follows from Lemma 3.2(1) that there exists a matrix U′ ∈ SL(Z) of the same size as U such that

U′ ≡ U (mod p^{2β}) and U′ ≡ E (mod r_1).

An analogous matrix V′ exists for V. Next, we define T′ to be a matrix similar to T, except that the block T_{ss} is replaced by a symmetric integer matrix T_{ss}′ that satisfies the corresponding congruences. Such a matrix exists, since M ∈ S^n(q), so that (r, q) = 1, and hence p does not divide r_1q. From the above definitions it follows that P_{U′}, P_{V′}, T′, and the transformed matrix of (4.28)

(4.30) N″ = T′P_{U′}NP_{V′} = (A″ B″; C″ D″)

all lie in (Γ_0^n(q))_M, and the block D″ is still nonsingular modulo p. If we now use (4.30) and the fact that t = t_M is a homomorphism of the group (Γ_0^n(q))_M, we can write

t(N″) = t(T′)t(P_{U′})t(N)t(P_{V′}) = t(N),

since, by (4.9) and (4.38) of Chapter 1, the homomorphism t takes the value 1 on T′, P_{U′}, and P_{V′}. The decomposition (4.30) also implies that D″ ≡ D′ (mod p); hence, by (4.29), we have (r, det D″) = p^{-α}δ, where α is the power to which p appears in the prime factorization of δ. If we make a similar argument for N successively with the different prime divisors of δ, we see that N can be transformed to a matrix N_1 = (A_1 B_1; C_1 D_1) such that t(N_1) = t(N) and (r, det D_1) = 1. Then the first part of the proof of the lemma implies that t(N) = 1. □

Using Lemmas 4.1 and 4.2, we can now prove the following

PROPOSITION 4.3. Let M ∈ S^n(q), where q is divisible by 4, and let t_M be the lifting homomorphism (4.5) associated to M. Then t_M is trivial if and only if r(M) is the square of a rational number.

PROOF. From Lemmas 4.1 and 4.2 it follows that the proposition is true when M is a canonical matrix of the form (4.7). If M is arbitrary, then, by Lemma 3.6, it can be written in the form M = εKη, where ε, η ∈ Γ_0^n(q) and K is a canonical matrix of the form (4.7). Now suppose that in Lemma 1.6 Γ = Γ_0^n(q) and P_M(N) = (E_{2n}, t_M(N)) for N ∈ (Γ_0^n(q))_M. Then from (1.17) we find that

(4.31) t_M(N) = t_{ε^{-1}Mη^{-1}}(ηNη^{-1}) = t_K(ηNη^{-1}).

Since r(M) = r(K) and the map γ ↦ ηγη^{-1} is a group isomorphism from (Γ_0^n(q))_M to (Γ_0^n(q))_K, the proposition for M follows from the proposition for the canonical matrix K and from the relation (4.31). □

We now consider the product of concrete double cosets in the Hecke ring L̂^n(q). In this ring, as in L^n(q), the multiplication formula for double cosets takes a particularly simple form when one of the double cosets is generated by the P-preimage of a matrix of the form rE_{2n}. Namely, we have

LEMMA 4.4. Suppose that ξ ∈ Ŝ^n(q), that η is any P-preimage in 𝔊 of the matrix rE_{2n}, where r is a nonzero rational number, and that Γ̂ = Γ̂_0^n(q). Then the following relations hold in the ring L̂^n(q):

(4.32) (η)_{Γ̂}(ξ)_{Γ̂} = (ηξ)_{Γ̂} = (ξ)_{Γ̂}(η)_{Γ̂}.

The proof follows immediately from the definitions.

We let S^n(q)_+ denote the subgroup of S^n(q) consisting of matrices M for which r(M) is the square of a rational number, and we let

(4.33) Ê^n(q) = D_Q(Γ̂_0^n(q), Ŝ^n(q)_+), where Ŝ^n(q)_+ = P^{-1}(S^n(q)_+).
We call Ê^n(q) the even subring of the Hecke ring L̂^n(q). As we noted before, only the even subring is important for applications to modular forms. Hence, for the rest of this chapter we shall only be examining Ê^n(q) and its local subrings.

PROPOSITION 4.5. Let ξ_i = (M_i, t_i), where i = 1, 2, be elements of the group Ŝ^n(q)_+, and let Γ = Γ_0^n(q). Suppose that the ratios of symplectic divisors e_1(M_1)/d_1(M_1) and e_1(M_2)/d_1(M_2) are relatively prime. Then the following relations hold in the Hecke ring Ê^n(q):

(4.34) (ξ_1)_{Γ̂}(ξ_2)_{Γ̂} = (ξ_1ξ_2)_{Γ̂} = (ξ_2)_{Γ̂}(ξ_1)_{Γ̂}.

PROOF. According to Lemma 3.6, without loss of generality we may assume that the M_i are canonical matrices of the form (4.7). Since in this case ξ_1ξ_2 = ξ_2ξ_1, the second equality in (4.34) follows from the first equality, which we shall now prove. From (1.10) it follows that

(ξ_1)_{Γ̂}(ξ_2)_{Γ̂} = (ξ_1ξ_2)_{Γ̂} + ∑_j a_j(η_j)_{Γ̂},

where the (η_j)_{Γ̂} are double cosets that are distinct from (ξ_1ξ_2)_{Γ̂}, and the a_j are nonnegative integers. Using (1.10) again, we see that the last sum here is zero if and only if

(4.35) μ_{Γ̂}(ξ_1)μ_{Γ̂}(ξ_2) = μ_{Γ̂}(ξ_1ξ_2).

On the other hand, by Proposition 3.9 we have (M_1)_Γ(M_2)_Γ = (M_1M_2)_Γ, and hence

(4.36) μ_Γ(M_1)μ_Γ(M_2) = μ_Γ(M_1M_2).

Since the r(M_i) and r(M_1M_2) are squares of rational numbers, it follows from Proposition 4.3 and Lemma 1.8 that

μ_{Γ̂}(ξ_i) = μ_Γ(M_i) for i = 1, 2, and μ_{Γ̂}(ξ_1ξ_2) = μ_Γ(M_1M_2),

and this together with (4.36) implies (4.35). □

Just as in the case of Hecke rings for the symplectic group, Proposition 4.5 makes it possible to reduce the study of the even Hecke ring Ê^n(q) to that of its local subrings

(4.37) Ê_p^n(q) = D_Q(Γ̂_0^n(q), Ŝ_p^n(q)_+),

where p is a prime not dividing q, Ŝ_p^n(q)_+ = P^{-1}(S_p^n(q)_+), and (see (3.26))

(4.38) S_p^n(q)_+ = {M ∈ S_p^n(q); r(M) = p^{2δ}, δ ∈ Z}.

THEOREM 4.6. The Hecke ring Ê^n(q), where n, q ∈ N and q is divisible by 4, is generated by the local Hecke rings Ê_p^n(q), where p runs through the primes not dividing q. Elements of different local subrings commute with one another.

PROOF. The theorem follows from the equalities in (4.34) and the proof of Theorem 3.12. □
PROBLEM 4.7. Suppose that ξ_i = (M_i, t_i) ∈ Ŝ^n(q), i = 1, 2, have the property that the ratios of symplectic divisors e_1(M_1)/d_1(M_1) and e_1(M_2)/d_1(M_2) are relatively prime, and let Γ = Γ_0^n(q). Show that the following multiplication formula holds in the Hecke ring L̂^n(q):

(ξ_1)_{Γ̂}(ξ_2)_{Γ̂} = a ∑_{j=1}^{h} (ξ_1ξ_2(E_{2n}, ε_j))_{Γ̂},

where a ∈ N and all ε_j ∈ C_1; and here one has

ah = h(M_1)h(M_2)h(M_1M_2)^{-1},

where h(M) for M ∈ S^n(q) denotes the index of the kernel of the lifting homomorphism t_M in the group Γ_M.

[Hint: Use Proposition 3.9 and the formulas (1.24) and (1.25).]

PROBLEM 4.8. Determine whether or not the map

L̂^n(q) → L^n(q) : (ξ)_{Γ̂} ↦ (P(ξ))_Γ,

where Γ = Γ_0^n(q), is a homomorphism of Hecke rings. Answer the same question for the even subring Ê^n(q).
2. Local rings. Theorem 4.6 enables us to reduce the study of the Hecke ring Ê^n(q) to that of its local components Ê_p^n(q). In order to compute in the local rings, one needs to have an explicit description of representatives M_α of the left cosets ΓM_α (here Γ = Γ_0^n(q)) into which the double cosets ΓMΓ decompose, where M is a matrix in SM^n(p^2, q) (see (3.21)). More precisely, one must know how to write M_α in the form M_α = γKδ, where γ, δ ∈ Γ and K is a canonical matrix of the form (4.7). The next two lemmas are devoted to this question.

LEMMA 4.9. Let SM^n(p^2, q) be the subset (3.21) of the group S^n(q), where p is a prime not dividing q, let

(4.39) M_{a,b}(B_0) = (p^2D*_{a,b} B; 0 D_{a,b}), where B = (0 0 0; 0 B_0 0; 0 0 0)

with the middle diagonal block of size a and the last diagonal block of size b, and let R*_{a,b} = Λ_n^{a,b}\Λ_n with Λ_n = GL_n(Z). Then we have the following partition into disjoint left cosets:

(4.40) SM^n(p^2, q) = ⋃_{M∈R(p^2)} Γ_0^n(q)M,

where

(4.41) R(p^2) = {M_{a,b}(B_0, S, V); B_0 ∈ S_a(Z/pZ), S = (0 0 0; 0 0 ᵗS_1; 0 S_1 S_2), S_1 ∈ M_{b,a}(Z/pZ), S_2 ∈ S_b(Z/p^2Z), V ∈ R*_{a,b}, a + b ≤ n},

and, in the notation (2.2)-(2.3) of Chapter 1,

(4.42) M_{a,b}(B_0, S, V) = M_{a,b}(B_0)T(S)U(V).
PROOF. Using an argument similar to the one used to derive (3.64), we can show that

SM^n(p^2, q) = ⋃_{a,b,B,V} Γ_0^n(q)(p^2D*_{a,b} B; 0 D_{a,b})U(V),

where a + b ≤ n, B ∈ B_0(D_{a,b})/mod D_{a,b}, V ∈ R*_{a,b}. The decomposition (4.40) follows from this, along with the definition (3.60) of B_0(D_{a,b}). □

Theorems 1.2 and 1.3 of the Appendix tell us that any symmetric matrix B_0 ∈ S_a(Z/pZ) of rank r_p(B_0) = r, where p ≠ 2, can be written in the form

(4.43) B_0 = B_0′[ᵗW] = WB_0′ᵗW,

where W ∈ GL_a(Z/pZ), B_0′ = diag(λ_1, …, λ_r, 0, …, 0), and λ_i ≢ 0 (mod p). If det W = d ≠ 1, then we divide the first column of W by d, we replace λ_1 by d^2λ_1 in the matrix B_0′, and we keep our earlier notation when working with the transformed matrices. After these transformations (4.43) obviously still holds, and det W = 1. Now let B_0 = ᵗB_0 ∈ S_a(Z). Then we may suppose that B_0′ ∈ M_a(Z), and, by Lemma 3.2(1), the matrix W lies in SL_a(Z). In this case (4.43) can be written as a congruence

(4.44) B_0 ≡ B_0′[ᵗW] (mod p).

From this and (4.41) we conclude that we can always take B_0 = B_0′[ᵗW] in (4.42); consequently, the matrix (4.39) can be represented in the form of a product

(4.45) M_{a,b}(B_0) = U(W*)M_{a,b}(B_0′)U(W*)^{-1}, where W* = ᵗW^{-1}.

We consider the special case when n = 1. Then one easily verifies that the following decompositions hold for the matrices P_λ = M_{1,0}(λ) with (λ, p) = 1 and σ = M_{0,0}(0):

(4.46) P_λ = (p λ; 0 p) = P_λ′ (1 0; 0 p^2) P_λ″,

(4.47) σ = (p^2 0; 0 1) = σ′ (1 0; 0 p^2) σ″,

where all of the matrices have integer entries, pr + qλs = 1 with r > 0, and p^2d + qt = 1 with d > 0.
Using the imbeddings (4.10), we introduce some additional matrices:

(4.48) P′_{a,b}(B_0′) = ∏_{k=1}^{r} φ_{n-a-b+k}(P′_{λ_k}),

and P″_{a,b}(B_0′) is defined similarly, with P′_{λ_k} replaced by P″_{λ_k} (see (4.46));

(4.50) P_{a,b}(α) = ∏_{k=1}^{n-a-b} φ_k(α) for α ∈ SL_2(R),

(4.51) P⁰_{a,b} = U(R) for R = diag(E_{n-a-b}, E_r, R_1),

where for a - r ≤ b or a - r > b the matrix R_1 is respectively equal to

(0 0 E_{a-r}; 0 E_{b-a+r} 0; -E_{a-r} 0 0) or (0 0 E_b; 0 E_{a-r-b} 0; -E_b 0 0);

finally, for i = 0, 1, …, n we define the following canonical matrices of the form (4.7):

(4.52) K_i = diag(E_{n-i}, pE_i, p^2E_{n-i}, pE_i).
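That each K_i in (4.52) is a symplectic similitude with multiplier r(K_i) = p^2 can be checked directly against the standard symplectic form; a small numerical sketch:

```python
import numpy as np

def K(n, i, p):
    # K_i = diag(E_{n-i}, p E_i, p^2 E_{n-i}, p E_i), cf. (4.52)
    return np.diag([1] * (n - i) + [p] * i + [p * p] * (n - i) + [p] * i)

def J(n):
    # standard symplectic form J = (0 E; -E 0)
    E, Z = np.eye(n, dtype=int), np.zeros((n, n), dtype=int)
    return np.block([[Z, E], [-E, Z]])

n, p = 3, 5
for i in range(n + 1):
    Ki = K(n, i, p)
    assert (Ki.T @ J(n) @ Ki == p * p * J(n)).all()   # r(K_i) = p^2
print('ok')
```

In particular every K_i lies in the "even" set S_p^n(q)_+ of (4.38), since its similitude factor is the even power p^2.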
LEMMA 4.10. Let B_0, S, V be the matrices in (4.41), let r = r_p(B_0), and let p be any odd prime. Then the matrices (4.42) can be represented in the form

(4.53) M_{a,b}(B_0, S, V) = U(W*)M_{a,b}(B_0′)U(W*)^{-1}T(S)U(V),

where B_0′ and W satisfy the congruence (4.44) and, in addition,

(4.54) M_{a,b}(B_0′) = P′_{a,b}(B_0′)M⁰_{a,b}P″_{a,b}(B_0′),

(4.55) M⁰_{a,b} = P_{a,b}(σ′)P⁰_{a,b}K_{a-r}(P⁰_{a,b})^{-1}P_{a,b}(σ″),

where all of the matrices on the right in (4.53)-(4.55) except for M_{a,b}(B_0′), M⁰_{a,b}, and K_{a-r} lie in the group Γ_0^n(q); here q is any natural number not divisible by p.

PROOF. (4.53) is a consequence of (4.42) and (4.45), and (4.54) is merely the "direct sum" of the equations (4.46). Similarly, (4.55) can be obtained from (4.47), (4.50), and (4.51). The other parts of the lemma follow directly from the definitions. □

We have thus found representatives of all of the left cosets of SM^n(p^2, q) modulo Γ_0^n(q), and we have expressed each representative as a product γKδ of matrices γ, δ ∈ Γ_0^n(q) and a canonical matrix K of the form (4.52). However, to compute in the Hecke ring Ê_p^n(q) we still need to know the second component of the product γ̂K̃δ̂, where γ̂ and δ̂ are the δ-images (4.2) of γ and δ, and K̃ is an arbitrary P-preimage of K in 𝔊. To do this we prove a result that in certain cases makes it possible to reduce the calculation of the multiplier χ^{(2)} of degree n to that of the multiplier χ^1_{(2)} of degree 1. But we must first give some more definitions.

Let q ∈ N be divisible by 4, and let p be a prime not dividing q. For every matrix M = (A B; C D) in the group S_{0,p}^n (see §3.3) we fix a P-preimage M̄ in the symplectic covering group 𝔊 by setting

(4.56) M̄ = (M, |det D|^{1/2}).

If M lies in the subgroup

(4.57) (S_{0,p}^n)_+ = {M ∈ S_{0,p}^n; r(M) = p^{2δ}, δ ∈ Z} ⊂ S_{0,p}^n

or even in the less restrictive subgroup S_p^n(q)_+ of S_p^n(q) (see (4.38)), and M = γKδ, where γ, δ ∈ Γ_0^n(q) and K is a canonical matrix of the form (4.7), then we define a second P-preimage of M in 𝔊 as follows:

(4.58) M̃ = γ̂K̄δ̂.
We show that the element M̃ ∈ 𝔊 does not depend on the above choice of representation of M. In fact, suppose that M = γ_iKδ_i (i = 1, 2) are two such representations. Then, by Lemma 3.6, K_1 = K_2, and so it is enough to show that one has γ̂_1K̄δ̂_1 = γ̂_2K̄δ̂_2, or, equivalently,

(4.59) γ̂K̄δ̂ = K̄, where γ = γ_2^{-1}γ_1 and δ = δ_1δ_2^{-1} ∈ Γ_0^n(q).

By assumption, the factor r(M) is an even power of the prime p. Hence, Proposition 4.3 implies that

(KαK^{-1})^ = K̄α̂K̄^{-1} for α ∈ (Γ_0^n(q))_K.

Hence for δ = K^{-1}γ^{-1}K ∈ (Γ_0^n(q))_K we have

K̄δ̂K̄^{-1} = (KδK^{-1})^ = (γ^{-1})^ = γ̂^{-1},

and this proves (4.59).

The above argument shows that, in particular, if M = K ∈ (S_{0,p}^n)_+, then

(4.60) K̃ = K̄.

Finally, for an arbitrary element ξ = (M, φ(Z)) ∈ 𝔊 we set

(4.61) t(ξ) = φ(Z) and s(ξ) = t(ξ)|t(ξ)|^{-1}.
LEMMA 4.11. For i = 1, …, n suppose that the matrices R_i, S_i ∈ (S_{0,p}^1)_+ and γ_i, δ_i ∈ Γ_0^1(q) are connected by the relations R_i = γ_iS_iδ_i, and the elements d(R_i), d(S_i), d(γ_i), and d(δ_i) in the lower-right corner of these matrices are all positive. Furthermore, let

R = ∏_{i=1}^{n} φ_i(R_i) ∈ (S_{0,p}^n)_+,

where φ_i is the imbedding (4.10), and let S ∈ (S_{0,p}^n)_+ and γ, δ ∈ Γ_0^n(q) be defined analogously. Then R̃ and S̃ satisfy the relations

(4.62) t(R̃) = χ^{(2)}(γ)χ^{(2)}(δ)s(S̃)t(R̄),

(4.63) s(R̃) = χ^{(2)}(γ)χ^{(2)}(δ)s(S̃),

where χ^{(2)} is the multiplier (4.38) of Chapter 1.

PROOF. From the definition (4.61) we see that (4.63) is a consequence of (4.62); we now prove the latter relation. By (4.58) we can write

(4.64) R̃ = γ̂S̃δ̂.

In fact, let S = αKβ, where α, β ∈ Γ_0^n(q) and K is a canonical matrix. Then the equality R = γSδ along with (4.58) implies that

R̃ = (γα)^K̄(βδ)^ = γ̂(α̂K̄β̂)δ̂ = γ̂S̃δ̂.

Hence, from (4.2) and (3.19) of Chapter 2 we obtain

(4.65) t(R̃) = χ^{(2)}(γ)χ^{(2)}(δ)t(S̃)j(γ, Sδ⟨Z⟩)^{1/2}j(δ, Z)^{1/2},

where j(·,·) is the automorphy factor (4.3) of Chapter 1, and the branches of the square roots are determined by (4.7) of Chapter 1. We find the limit as Z → 0 of the
right side of (4.65). To do this, we define two holomorphic functions of z_1, …, z_n ∈ H_1 by setting

(4.66) Ψ(γ; z_1, …, z_n) = ∏_{i=1}^{n} s(j(γ_i, z_i)), Ψ(δ; z_1, …, z_n) = ∏_{i=1}^{n} s(j(δ_i, z_i)),

where for any α = (a b; c d) ∈ Γ_0^1(q) with d > 0 we let s(j(α, z)) denote the function s_+(j(α, z)) or s_-(j(α, z)) depending on whether c ≥ 0 or c < 0. Here s_±(w) are the holomorphic functions on ((±1)H_1) ∪ R defined by the conditions: s_±(w)^2 = w and s_±(w) > 0 for w ∈ R and w > 0. Note that the restrictions of the functions j(γ, Z)^{1/2} and j(δ, Z)^{1/2} to the main diagonal H_1 × ⋯ × H_1 ⊂ H_n coincide with the corresponding functions in (4.66), since the assumptions in the lemma imply that they coincide for Z = diag(z_1, …, z_n) sufficiently close to zero. According to the definition (4.61), the function t(R̃) does not depend on Z. Hence, if we set Z = diag(z_1, …, z_n) and use the functions (4.66), we can rewrite (4.65) in the form

(4.67) t(R̃) = χ^{(2)}(γ)χ^{(2)}(δ)t(S̃)Ψ(δ; z_1, …, z_n)Ψ(γ; S_1δ_1(z_1), …, S_nδ_n(z_n))

and pass to the limit as z_i → 0 (z_i ∈ H_1, i = 1, …, n) on the right of this equality. According to (4.66) and Lemma 4.2 of Chapter 1, this problem reduces to computing the limits of expressions of the form

(4.68) s(j(γ_i, S_iδ_i(z_i)))s(j(δ_i, z_i)) = s(j(R_i, z_i)j(S_iδ_i, z_i)^{-1})s(j(δ_i, z_i)).

If we again use the definition (4.66), we find that the desired limit of (4.68) is equal to

(d(R_i)(d(S_i)d(δ_i))^{-1})^{1/2}d(δ_i)^{1/2} = d(R_i)^{1/2}d(S_i)^{-1/2},

where all of the square roots are positive. From this, (4.67), and (4.68) we obtain (4.62). □
Lemmas 4.10 and 4.11 make it possible for us to find the P-preimages in 𝔊 of the matrices (4.42). To do this, we must introduce a certain special function x that is defined on the set of symmetric integer matrices and is closely related to the multiplier χ^{(2)}. We now give the definition of x.

As we noted earlier, for any matrix A ∈ S_a(Z) of rank r_p(A) = r, where p is an odd prime, there exists a matrix V ∈ M_a(Z) that is nonsingular modulo p and satisfies the congruence

(4.69) A ≡ (A′ 0; 0 0)[V] (mod p),

where A′ ∈ S_r(Z) is a matrix that is nonsingular modulo p. If A satisfies (4.69), then we set

(4.70) x(A) = ε_p^{-r}((-1)^r det A′/p) if r > 0, and x(A) = 1 if r = 0,

where ε_p is the function (4.39) of Chapter 1. It is easy to see that the value x(A) does not depend on the choice of matrices V and A′ with the indicated properties.
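The independence of x(A) from the choices in (4.69) can be made concrete: diagonalize A over F_p by congruence transformations, read off r and det A′ modulo squares, and check that the value is unchanged under A ↦ ᵗUAU for invertible U. The helper below is an illustrative implementation, not taken from the text; ε_p is represented as 1 for p ≡ 1 (mod 4) and i for p ≡ 3 (mod 4):

```python
import random
from sympy import Matrix, jacobi_symbol

def sym_diag_mod_p(A, p):
    # diagonalize the symmetric matrix A over F_p (p odd) by congruence
    # transformations; return the resulting diagonal entries
    a = [[x % p for x in row] for row in A]
    m = len(a)
    for k in range(m):
        piv = next((i for i in range(k, m) if a[i][i] % p), None)
        if piv is None:
            nz = next(((i, j) for i in range(k, m) for j in range(i + 1, m)
                       if a[i][j] % p), None)
            if nz is None:
                break                          # remaining block is zero
            i, j = nz
            for c in range(m):                 # row_i += row_j ...
                a[i][c] = (a[i][c] + a[j][c]) % p
            for s in range(m):                 # ... and col_i += col_j
                a[s][i] = (a[s][i] + a[s][j]) % p
            piv = i                            # now a[i][i] = 2 a[i][j] != 0
        if piv != k:                           # symmetric swap into place
            a[k], a[piv] = a[piv], a[k]
            for s in range(m):
                a[s][k], a[s][piv] = a[s][piv], a[s][k]
        inv = pow(a[k][k], -1, p)
        for i in range(k + 1, m):              # clear row/column k symmetrically
            f = (a[i][k] * inv) % p
            for c in range(m):
                a[i][c] = (a[i][c] - f * a[k][c]) % p
            for s in range(m):
                a[s][i] = (a[s][i] - f * a[s][k]) % p
    return [a[i][i] % p for i in range(m)]

def x_value(A, p):
    # x(A) per (4.70): eps_p^(-r) * ((-1)^r * d_1...d_r / p)
    d = [t for t in sym_diag_mod_p(A, p) if t]
    r = len(d)
    if r == 0:
        return 1
    prod = 1
    for t in d:
        prod = prod * t % p
    eps_inv = 1 if p % 4 == 1 else -1j         # eps_p^(-1)
    return eps_inv ** r * jacobi_symbol((-1) ** r * prod, p)

p, m = 7, 4
random.seed(1)
for _ in range(25):
    A = [[0] * m for _ in range(m)]
    for i in range(m):
        for j in range(i, m):
            A[i][j] = A[j][i] = random.randrange(p)
    while True:                                # random U in GL_m(F_p)
        U = [[random.randrange(p) for _ in range(m)] for _ in range(m)]
        if Matrix(U).det() % p:
            break
    B = [[sum(U[k][i] * A[k][l] * U[l][j] for k in range(m) for l in range(m)) % p
          for j in range(m)] for i in range(m)]
    assert x_value(A, p) == x_value(B, p)      # x depends only on the class of A
print('ok')
```

The invariance holds because congruence by U preserves the rank and multiplies the discriminant of the nondegenerate part by det(U)^2, which does not change the Legendre symbol.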
PROPOSITION 4.12. Let Ma,b(Bo, S, V) be the matrices (4.42), where a+ b :E;; n and
B0, S, and V run through the set of matrices in (4.41). Then we have the following
formula/or the P-preimages of these matrices in (!Sas defined in (4.58):
(4.71) Ma,b(Bo, S, V) = (Ma,b(Bo, S, V); x(Bo)p(a+2hll2 ).

PROOF. From the formulas (4.37)-(4.38) of Chapter 1 it follows that j(2) (y, Z) = 1
for matrices y of the form U(V) or T(S) in the group r(j(q), where q is divisible by 4,
as usual. Hence, by (4.2), we have y = (y, 1) for such matrices y. Ifwe now use (4.53)
and (4.64), we obtain
M,,,b(B0 ,S, V) = U(W*)Ma,b(B0)U(W*)- 1T(S)U(V),
and from this and (4.61) it follows that
(4.72) s(Ma,b(Bo,S, V)) = s(Ma,b(B0)).

Arguing in an analogous way, we also have (see (4.55))

s(P̃_{a,b} K̃_{a−r} (P̃_{a,b})^{-1}) = s(K̃_{a−r}) = 1,

and hence, applying Lemma 4.11 to the equalities (4.54) and (4.55), we obtain

(4.73) s(M̃_{a,b}(B_0')) = χ_{(2)}(P'_{a,b}(B_0')) χ_{(2)}(P''_{a,b}(B_0')) χ_{(2)}(P_{a,b}(σ')) χ_{(2)}(P_{a,b}(σ'')).
All of the matrices on the right in this equality have the form

(4.74) γ = ∏_{i=1}^{n} φ_i(γ_i) with γ_i = ( a_i b_i ; c_i d_i ) ∈ Γ_0^1(q).

From the definitions (4.48) and (4.50) and from the equalities (4.46)-(4.47) it follows
that the entries d_i in all of these matrices are positive. By (4.38) of Chapter 1, we have
the following formula for the matrices (4.74):

(4.75) χ_{(2)}(γ) = ∏_{i=1}^{n} χ_{(2)}(γ_i).

We use this formula to compute the value of the multiplier χ_{(2)} at each of the
matrices in (4.73). By Proposition 4.15 and the relation (4.37) of Chapter 1, we have

(4.76) χ_{(2)}(γ) = ε_d^{-1} (c/d) for γ = ( a b ; c d ) ∈ Γ_0^1(q).


Using (4.76), we show that the following equalities hold for the matrices in (4.46) and
(4.47):

(4.77) χ_{(2)}(P'_λ) = χ_{(2)}(σ') = χ_{(2)}(σ'') = 1,  χ_{(2)}(P''_λ) = ε_p^{-1} ((-λ)/p).

For P'_λ and σ'' the equalities are obvious. In the case of σ' the equality follows
from the congruence p^2 d ≡ 1 (mod q), where q is divisible by 4, which implies that
d ≡ 1 (mod 4), and from the usual properties of the Jacobi symbol. Next, in the case
of P''_λ the relation (4.76) leads to the equality

χ_{(2)}(P''_λ) = ε_r^{-1} ((-qs)/r) = ε_p^{-1} ((-qs)/p),

since by (4.46) we have pr ≡ 1 (mod qs), and hence p ≡ r (mod 4). If we recall that
(-qs)(-λ) ≡ 1 (mod p), we obtain the desired equality in (4.77).

Now (4.73), (4.75), and (4.77) imply that

(4.78) s(M̃_{a,b}(B_0')) = ∏_i ε_p^{-1} ((-λ_i)/p) = ε_p^{-r} (((-1)^r det B_0')/p).


Finally, by (4.56) and (4.39) we have

|t(M̃_{a,b}(B_0, S, V))| = (det D_{a,b})^{1/2} = p^{(a+2b)/2},

and we conclude from this and from (4.72) and (4.78) that (4.71) holds. □

At the beginning of the first subsection we mentioned that there is an essential
analogy between the Hecke rings of the symplectic group and the symplectic covering
group. This analogy enabled us, in particular, to prove that the even Hecke ring Ẽ^n(q),
like E^n(q), is generated by pairwise commuting local subrings Ẽ_p^n(q). If we now try to
follow the same analogy to define the spherical map Ω in the way that we did in §3.3, it
will turn out that Ω no longer gives an isomorphic imbedding of Ẽ_p^n(q) in a polynomial
ring; thus, we lose one of the most powerful instruments for studying Hecke rings. In
order to deal with this situation, we have to choose a path that is the exact reverse of
what we did in §3: we first find generators of the even Hecke ring E_p^n(q) and determine
the corresponding elements in Ẽ_p^n(q), and we then investigate the subring of Ẽ_p^n(q)
that is generated by these elements.
By definition, the ring E_p^n(q) is generated by the double cosets (M)_Γ of elements
M ∈ S_p^n(q)^+ relative to the group Γ = Γ_0^n(q). Thus, by Theorem 3.23,

(4.79) E_p^n(q) = Q[T(p)^2, T_1(p^2), ..., T_{n-1}(p^2), T_n(p^2)^{±1}],

where the expression on the right is the ring of polynomials in the elements in brackets
with coefficients in Q. The formula (1.10) shows that the only double cosets that
appear in the product T(p)^2 = T(p) · T(p) are those of matrices M ∈ S_p^n(q)^+ with
r(M) = p^2. But since, by Lemma 3.6, such matrices M can all be taken to be canonical
matrices of the form (4.52), it follows that

(4.80) T(p)^2 = Σ_{i=0}^{n} a_i T_i(p^2),

where the a_i are nonnegative integers and a_0 > 0, since

K_0 ∈ Γ ( E_n 0 ; 0 pE_n ) Γ ( E_n 0 ; 0 pE_n ) Γ.

If we now compare (4.79) and (4.80), we find another system of generators for the ring
E_p^n(q), one that is more convenient for our purposes:

(4.81) E_p^n(q) = Q[T_0(p^2), T_1(p^2), ..., T_{n-1}(p^2), T_n(p^2)^{±1}].

We now proceed to the next step, and determine the elements in Ẽ_p^n(q) that
correspond to the elements in (4.81). There are, of course, many ways to do this. To
be definite, we set

(4.82) T̃_i(p^2) = (K̃_i)_Γ̃ (i = 0, 1, ..., n),

where K_i are the matrices (4.52) and the elements K̃_i ∈ 𝔊 are determined by (4.58).
All of the other P-preimages of the matrices K_i in 𝔊 are of the form K̃_i ε̃, where
ε̃ = (E_{2n}, ε) with ε ∈ C_1, and

(K̃_i ε̃)_Γ̃ = (K̃_i)_Γ̃ (ε̃)_Γ̃.

Moreover, the double cosets of the form (ε̃)_Γ̃ are contained in the center of the
ring Ẽ_p^n(q), and so the degree of choice in the elements in (4.82) in no way affects
the algebraic properties of the subring they generate. From the point of view of
applications of the Hecke rings to modular forms, the choices made in (4.82) are also
of no importance, since the Hecke operators for (ε̃)_Γ̃ are operators of multiplication
by a power of ε. We thus come to the conclusion that the natural analogue of the even
Hecke ring E_p^n(q) is not the entire ring Ẽ_p^n(q), but rather the subring

(4.83) Ẽ_p^n(q, χ) = Q[T̃_0(p^2), ..., T̃_{n-1}(p^2), T̃_n(p^2)^{±1}].
We show that this subring is commutative. Recall that in the case of the ring L^n(q)
the proof of commutativity was based on: (1) the existence of the anti-automorphism
* of the ring L^n(q), and (2) the invariance of elements of L^n(q) relative to *. Thus, we
begin by defining an anti-automorphism * for the ring L̃_p^n(q).
For any ξ̃ = (M, φ) ∈ 𝔊 we set

(4.84) ξ̃° = (r(ξ̃) E_{2n}, r(ξ̃)^{n/2}) and r(ξ̃) = r(M).

Then the map ξ̃ → ξ̃° is a homomorphism from the group 𝔊 to its center, and Γ̃_0^n(q)
is contained in the kernel of this homomorphism. This implies that the map

(4.85) 𝔊 → 𝔊: ξ̃ → ξ̃* = ξ̃° · ξ̃^{-1}

has the properties

(4.86) (ξ̃ η̃)* = η̃* ξ̃*, (ξ̃*)* = ξ̃,

i.e., it is an anti-automorphism of order two of the group 𝔊; here we have

(4.87) Γ̃* = Γ̃, where Γ̃ = Γ̃_0^n(q).

Using Proposition 1.11 and (4.85), we now define an anti-automorphism * of the ring
L̃_p^n(q) by setting

(4.88) ((ξ̃)_Γ̃)* = (ξ̃*)_Γ̃

and extending this map by Q-linearity to all of L̃_p^n(q).
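Since the map ξ̃ → ξ̃° is a homomorphism into the center of 𝔊, the properties (4.86) can be verified in one line; we spell the check out for convenience (the notation is that of (4.84)-(4.85)):

```latex
(\tilde\xi\tilde\eta)^{*}
   =(\tilde\xi\tilde\eta)^{\circ}\,(\tilde\xi\tilde\eta)^{-1}
   =\tilde\xi^{\circ}\tilde\eta^{\circ}\,\tilde\eta^{-1}\tilde\xi^{-1}
   =\bigl(\tilde\eta^{\circ}\tilde\eta^{-1}\bigr)\bigl(\tilde\xi^{\circ}\tilde\xi^{-1}\bigr)
   =\tilde\eta^{*}\,\tilde\xi^{*},
\qquad
(\tilde\xi^{*})^{*}
   =(\tilde\xi^{*})^{\circ}\,(\tilde\xi^{*})^{-1}
   =\tilde\xi^{\circ}\,\tilde\xi\,(\tilde\xi^{\circ})^{-1}
   =\tilde\xi ,
```

where in the last computation we use r(ξ̃*) = r(ξ̃°) r(ξ̃)^{-1} = r(ξ̃), so that (ξ̃*)° = ξ̃°.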
THEOREM 4.13. The ring Ẽ_p^n(q, χ), where q is divisible by 4 and p is a prime not
dividing q, is commutative.
PROOF. Because of (4.83), to prove the theorem it suffices to show that the generators
(4.82) commute with one another in pairs. This, in turn, will follow if we
show that both the generators (4.82) themselves and also their pairwise products are
invariant relative to the anti-automorphism (4.88), i.e.,

(4.89) T̃_i(p^2)* = T̃_i(p^2) and X̃* = X̃,

where X̃ = T̃_i(p^2) T̃_j(p^2) and i, j = 0, 1, ..., n. The first equality in (4.89) is a
consequence of a more general fact.
LEMMA 4.14. Let M̃ ∈ 𝔊, where M ∈ S_p^n(q)^+, be defined by (4.58), and let
T̃(M) = (M̃)_Γ̃, where Γ̃ = Γ̃_0^n(q). Then T̃(M)* = T̃(M).

PROOF. By Lemma 3.6 we can write M = γKδ, where γ, δ ∈ Γ and K is a canonical
matrix of the form (4.7); hence M̃ = γ̃ K̃ δ̃, and so T̃(M) = T̃(K). Thus, we may
suppose that M = K.
Let n = 1. Since (p, q) = 1, it follows that for any m ∈ N there exist integers t
and d > 0 such that p^m d + qt = 1. If m is even and

K_m = ( 1 0 ; 0 p^m ), σ'_m = ( p^m -t ; q d ), σ''_m = ( p^m d t ; -q 1 ),

then obviously p^m K_m^{-1} = σ'_m K_m σ''_m. Hence, from Lemma 4.11 and (4.76) we obtain

(p^m K_m^{-1})~ = σ̃'_m K̃_m σ̃''_m = (p^m K_m^{-1}, 1).

On the other hand, (4.85) implies that

(K̃_m)* = (K_m, p^{m/2})* = (p^m K_m^{-1}, 1),

and so (K̃_m)* = σ̃'_m K̃_m σ̃''_m. From this and (4.88) we have

T̃(K_m)* = ((K̃_m)*)_Γ̃ = (σ̃'_m K̃_m σ̃''_m)_Γ̃ = (K̃_m)_Γ̃,

i.e., T̃(K_m)* = T̃(K_m). Since the elements (p^δ E_2, p^{δ/2}) belong to the center of 𝔊 and
are *-invariant, it follows from this and the last equality that the lemma holds in the
case n = 1. We pass to arbitrary n using Lemma 4.11 and (4.75). Since we already
went through this sort of argument when proving Proposition 4.12, we shall omit it
here. □
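The n = 1 argument rests on the matrix identity p^m K_m^{-1} = σ'_m K_m σ''_m. One explicit pair σ'_m, σ''_m with the required properties (our reconstruction of the garbled matrices above; any pair of unimodular integer matrices realizing the identity serves equally well) can be produced and checked mechanically:

```python
def ext_gcd(a, b):
    """Extended Euclid: returns (g, x, y) with a*x + b*y = g."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def matmul(A, B):
    """Product of two 2 x 2 integer matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def sigma_factorization(p, m, q):
    """A choice (ours) of sigma'_m, sigma''_m in SL_2(Z) with
    sigma'_m K_m sigma''_m = p^m K_m^{-1}, where K_m = diag(1, p^m),
    built from integers t, d > 0 with p^m d + q t = 1."""
    g, d, t = ext_gcd(p ** m, q)
    assert g == 1 and q >= 2
    d %= q                         # normalize so that d > 0
    t = (1 - p ** m * d) // q      # then p^m d + q t = 1 exactly
    s1 = [[p ** m, -t], [q, d]]            # det = p^m d + q t = 1
    s2 = [[p ** m * d, t], [-q, 1]]        # det = p^m d + q t = 1
    return s1, s2
```

For example, with p = 3, m = 2, q = 4 one finds σ'_m (1 0; 0 9) σ''_m = diag(9, 1) = 9 · K_2^{-1}.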

Following the analogy with the Hecke rings L_p^n(q), we might expect that the second
equality in (4.89) would also follow from Lemma 4.14, since X̃ is equal to a sum of
double cosets of elements ξ̃ = (M, φ) ∈ S̃_p^n(q)^+. However, the whole point is that ξ̃
and M̃ are not necessarily the same, and in that case (ξ̃)_Γ̃ ≠ (M̃)_Γ̃. This also seems to be
what explains the complication in the proof that the rings Ẽ_p^n(q, χ) are commutative.

PROOF THAT X̃* = X̃. Using Lemma 4.9, we can rewrite (3.61)-(3.62) in the form

(4.90) T_i(p^2) = Σ_{a,b,B_0,S,V; a+b≤n, r_p(B_0)=a-i} (Γ_0^n(q) M_{a,b}(B_0, S, V)),

where the matrices B_0, S, and V run through the sets in (4.41). From (4.58) and
(4.90) it follows that the elements M̃_{a,b}(B_0, S, V) lie in Γ̃ K̃_i Γ̃. On the other hand, from
Lemma 1.8 and Proposition 4.3 we find that the map P: M̃ → M gives a one-to-one
correspondence between the double cosets Γ̃ K̃_i Γ̃ and Γ K_i Γ. This implies that

(4.91) T̃_i(p^2) = Σ_{a,b,B_0,S,V; a+b≤n, r_p(B_0)=a-i} (Γ̃_0^n(q) M̃_{a,b}(B_0, S, V)),
where, by (4.42), we have

(4.92) M_{a,b}(B_0, S, V) = M_{a,b}(B_0) T(S) U(V)

and T(S)U(V) = U(V)T(S_1), where S_1 = S[V]. But since the matrix T =
K_j^{-1} T(S_1) K_j lies in Γ, it follows that T̃(S)Ũ(V)K̃_j = Ũ(V)K̃_j T̃ with T̃ ∈ Γ̃ =
Γ̃_0^n(q). Let a, b, B_0, S, and V be the same as in (4.90). Then by Lemma 1.5 and (4.91)
we have

(4.93) X̃ = T̃_i(p^2) T̃_j(p^2) = Σ_{a,b,B_0,V} μ_{a,b}(B_0, V) (ξ̃_{a,b}(B_0, V))_Γ̃,

where

(4.94) μ_{a,b}(B_0, V) = p^{b(a+b+1)} μ(K_j) μ(ξ̃_{a,b}(B_0, V))^{-1},
       ξ̃_{a,b}(B_0, V) = M̃_{a,b}(B_0) Ũ(V) K̃_j = (Y_{a,b}(B_0, V); t_{a,b}(B_0, V)). □

REMARK 4.15. If the elements ξ̃ = ξ̃_{a,b}(B_0, V) on the right in (4.93) are replaced
by elements of the form γ̃_1 ξ̃ γ̃_2, where γ̃_1, γ̃_2 ∈ Γ̃, then it is not hard to verify that this does
not change either t_{a,b}(B_0, V) or X̃.
Using this remark, we prove the following property of the double cosets in (4.93).

LEMMA 4.16. Let ξ̃_{a,b}(B_0, V) be the elements (4.94), let Γ̃ = Γ̃_0^n(q), and let * be the
anti-automorphism (4.88). Then

((ξ̃_{a,b}(B_0, V))_Γ̃)* = (ξ̃_{a,b}(-B_0, V))_Γ̃.

PROOF. First of all, using (4.39) and (4.52), we find that Y_{a,b}(B_0, V) = ( p^4 D* N ; 0 D ),
where D is the corresponding lower right block and

N = diag(0_{n-a-b}, B_0, 0_b) V diag(p^2 E_{n-i}, p E_i).
We now choose V_1 and V_2 ∈ Λ_n in such a way that in the matrix

Y^{(1)}_{a,b}(B_0, V) = U(-V_1) Y_{a,b}(B_0, V) U(V_2) = ( p^4 D_1* N_1 ; 0 D_1 )

the block D_1 is a diagonal matrix. Since p^4 D* is an integer matrix, it follows from
Lemma 2.2 and the expression for D that such V_1 and V_2 can be found. Here
N_1 = ᵗV_1 N V_2 and ᵗD_1 N_1 = ᵗN_1 D_1.

We next define suitable matrices S_1 and S_2. Using this notation, we find that in the matrix

Y^{(2)}_{a,b}(B_0, V) = T(S_1) Y^{(1)}_{a,b}(B_0, V) T(S_2) = ( p^4 D_1* N_2 ; 0 D_1 )

the block N_2 is diag(0, pA_{22}, 0, 0). We choose a matrix V_3' ∈ SL_{ν_2}(Z) that satisfies the
congruence

A_{22}[V_3'] ≡ diag(λ_1, ..., λ_ρ, 0, ..., 0) (mod p)

with λ_1, ..., λ_ρ not divisible by p (compare with (4.43) and (4.44)). If we now set
V_3 = diag(E_{ν_1}, V_3', E_{ν_3}, E_{ν_4}), then Y^{(2)}_{a,b}(B_0, V) can be transformed to Y^{(3)}_{a,b}(B_0, V),

where N_3 = diag(0, pA_{22}, 0, 0), and we may assume here that

A_{22} = diag(λ_1, ..., λ_ρ, 0, ..., 0),

since this can always be achieved by multiplying Y^{(3)}_{a,b}(B_0, V) by a suitable matrix of
the form T(S) ∈ Γ, which is permissible by Remark 4.15. Thus, we may assume that
in (4.93)

ξ̃_{a,b}(B_0, V) = (Y^{(3)}_{a,b}(B_0, V); t_{a,b}(B_0, V)).
Using the notation (4.10) and (4.46), we define the following matrices in Γ:

P'(A_{22}) = ∏_{i=ν_1+1}^{ν_1+ρ} φ_i(P'_{λ_i}),  P''(A_{22}) = ∏_{i=ν_1+1}^{ν_1+ρ} φ_i(P''_{λ_i}).

Then the matrix Y^{(3)}_{a,b}(B_0, V) can be written in the form Y^{(3)}_{a,b}(B_0, V) = P'(A_{22}) ×
Y^{(4)}_{a,b}(B_0, V) P''(A_{22}), where

(4.95) Y^{(4)}_{a,b}(B_0, V) = ( p^4 D_2* 0 ; 0 D_2 ),

in which D_2 = diag(pE_{ν_1}, p^3 E_ρ, p^2 E_{ν_2-ρ}, p^3 E_{ν_3}, p^4 E_{ν_4}).


We note (and this is essential for the rest of the proof) that the matrix Y^{(4)}_{a,b}(B_0, V)
does not change if B_0 is replaced by -B_0. This property is not satisfied by any of the
matrices Y_{a,b}(B_0, V) or Y^{(k)}_{a,b}(B_0, V) for k = 1, 2, 3. In all of those matrices the block
N or N_k would change to -N or -N_k if B_0 were replaced by -B_0.

According to Lemma 4.11, we obtain the following relations from the last equality
for Y^{(3)}_{a,b}(B_0, V):

Ỹ^{(3)}_{a,b}(B_0, V) = (Y^{(3)}_{a,b}(B_0, V); t(Y^{(3)}_{a,b}(B_0, V))) = P̃'(A_{22}) Ỹ^{(4)}_{a,b}(B_0, V) P̃''(A_{22}),

s(Ỹ^{(3)}_{a,b}(B_0, V)) = χ_{(2)}(P'(A_{22})) χ_{(2)}(P''(A_{22})) s(Ỹ^{(4)}_{a,b}(B_0, V)).

It is not hard to see that, if we multiply the matrix Y^{(4)}_{a,b}(B_0, V) by suitable matrices of
the form U(W) ∈ Γ and φ_i(σ), where σ is either σ' or σ'' (see (4.47)), we can reduce
it to the canonical form (4.7). Hence, using Lemma 4.11, the relation s(Ũ(W)) = 1,
and (4.77), we conclude that s(Ỹ^{(4)}_{a,b}(B_0, V)) = 1. From this, (4.75), and (4.77) we
finally obtain

(4.96) s(Ỹ^{(3)}_{a,b}(B_0, V)) = χ(A_{22}).

Since, by the last equality, we can rewrite the element ξ̃_{a,b}(B_0, V) as the
product

Ỹ^{(3)}_{a,b}(B_0, V) (E_{2n}; t_{a,b}(B_0, V) t(Ỹ^{(3)}_{a,b}(B_0, V))^{-1}),

and since the elements Ỹ^{(k)}_{a,b}(B_0, V) for k = 3, 4 lie in the same Γ̃-double coset, it
follows that in (4.93) we can take

(4.97) ξ̃_{a,b}(B_0, V) = Ỹ^{(4)}_{a,b}(B_0, V) (E_{2n}; s_{a,b}(B_0, V)),

in which s_{a,b}(B_0, V) = t_{a,b}(B_0, V) t(Ỹ^{(3)}_{a,b}(B_0, V))^{-1} ∈ C_1. In addition, by Proposition
4.12, (4.42), and (4.94) we have t_{a,b}(B_0, V) = χ(B_0) p^δ, where δ ∈ Z; hence, by (4.96),
we obtain

s_{a,b}(B_0, V) = χ(B_0) χ(A_{22})^{-1}.
When B_0 is replaced by -B_0, the corresponding matrix A_{22} is also transformed to
-A_{22}, since, by definition, A_{22} is a linear function of B_0 (for fixed V). From this and
the obvious relation for the function in (4.70)

(4.98) χ(-A) = χ(A)‾ = χ(A)^{-1},

where A = ᵗA is an arbitrary integer matrix, it follows that s_{a,b}(-B_0, V) = s_{a,b}(B_0, V)^{-1}.
From Lemma 4.14, (4.88), and (4.97) we now have

((ξ̃_{a,b}(B_0, V))_Γ̃)* = ((Ỹ^{(4)}_{a,b}(B_0, V))* (E_{2n}; s_{a,b}(B_0, V)^{-1}))_Γ̃
= (Ỹ^{(4)}_{a,b}(B_0, V) (E_{2n}; s_{a,b}(-B_0, V)))_Γ̃ = (ξ̃_{a,b}(-B_0, V))_Γ̃,

since Ỹ^{(4)}_{a,b}(B_0, V) = Ỹ^{(4)}_{a,b}(-B_0, V). □

We now prove the second equality in (4.89). The relation (4.97) shows that the
value μ(ξ̃_{a,b}(B_0, V)) does not change if B_0 is replaced by -B_0. But because the map
B_0 → -B_0 is obviously an automorphism of the space of matrices in S_n(Z/pZ) with
fixed r_p-rank, by Lemma 4.16 and (4.93) this implies that X̃* = X̃. □

PROBLEM 4.17. Suppose that ξ̃ = (M, φ) ∈ S̃_p^n(q)^+, M̃ is the element defined by
(4.58), and Γ̃ = Γ̃_0^n(q). Show that ξ̃ = M̃(E_{2n}, ε), where ε ∈ C_1, and that ((ξ̃)_Γ̃)* ≠ (ξ̃)_Γ̃
if ε^2 ≠ 1.
[Hint: Use (4.88) and (3.19) of Chapter 2.]
PROBLEM 4.18. Prove that ((K̃)_Γ̃)* ≠ (K̃)_Γ̃, where

K = ( 1 0 ; 0 p ), p ≡ 3 (mod 4), and Γ̃ = Γ̃_0^1(q).

[Hint: Use (4.65) and (4.76).]

3. The spherical map. In this section we define an imbedding of the commutative
ring Ẽ_p^n(q, χ) in a polynomial ring. We do this by first imbedding it in the triangular
Hecke ring L_{0,p} (more precisely, in its complexification) and then using the spherical
map Ω.

We begin by defining the Hecke ring (see §3.3)

(4.99) L̃_{0,p} = L_Q(Γ̃_0^n, S̃_{0,p}^n).

For any matrix M ∈ S_{0,p}^n the homomorphism t_M: (Γ_0^n)_M → C_1 defined by (4.6) is
trivial. Hence, by Lemma 1.7, (Γ̃_0^n, S̃_{0,p}^n) is a Hecke pair, and L̃_{0,p} is actually a Hecke
ring.
If we now let L̃_p^n(q) denote the local subring of the ring L̃^n(q) in (4.3), then, using
Lemma 3.4, we easily see that the Hecke pairs that determine the rings L̃_p^n(q) and L̃_{0,p}
satisfy the conditions (1.26). Hence, Proposition 1.9 enables us to define an imbedding

(4.100) ε: L̃_p^n(q) → L̃_{0,p}.
Furthermore, for any odd integer k we define the map

(4.101) P_k: L̃_{0,p} → L_{0,p}^C = L_{0,p} ⊗_Q C

by mapping double cosets

(Γ̃_0^n ξ̃ Γ̃_0^n) → s(ξ̃)^{-k} (Γ_0^n P(ξ̃) Γ_0^n),

where s(ξ̃) is the function (4.61), and then extending P_k by Q-linearity to the entire
ring L̃_{0,p}. Since t_M = 1 for all M ∈ S_{0,p}^n, it follows from (1.24)-(1.25) and (1.10)
that P_0 is a homomorphism. According to (3.19) of Chapter 2 and (4.2), the map
s: S̃_{0,p}^n → C_1 is a homomorphism, whose kernel contains the group Γ̃_0^n. Consequently,
for any k the map P_k is also a homomorphism. We note that, although k can be any
integer in the definition (4.101), only the case of odd k is important for applications.
Hence, in what follows we shall always suppose that k is odd.
Suppose that X̃ = (ξ̃)_Γ̃, where Γ̃ = Γ̃_0^n(q), belongs to the ring L̃_p^n(q), and r(ξ̃) is
not an even power of the prime p. Then from Proposition 4.3 it follows that t_M ≢ 1,
where M = P(ξ̃), and so the partition Γ_M = ⋃_j (Ker t_M)β_j contains more than one
coset. If we are also given a second partition Γ = ⋃_i Γ_M α_i, then

X̃ = ⋃_i ⋃_j Γ̃ ξ̃ β̃_j α̃_i

is also a partition into disjoint cosets. Since β̃_j ξ̃ = (E_{2n}, t_M(β_j)^{-1}) ξ̃ by
(4.6), where β_j ∈ Γ, it follows that the last decomposition can be rewritten in the form

X̃ = ⋃_{i,j} Γ̃ (E_{2n}, t_M(β_j)^{-1}) ξ̃ α̃_i,

where, by Lemma 3.4, we may suppose that ξ̃ α̃_i ∈ S̃_{0,p}. We now let

(4.102) ε_{q,k} = P_k ∘ ε

denote the composition of the maps (4.100) and (4.101). We obtain

ε_{q,k}(X̃) = ( Σ_j t_M(β_j)^{-k} ) Σ_i s(ξ̃ α̃_i)^{-k} (Γ_0^n P(ξ̃ α̃_i)).

Since the set {t_M(β_j)} is a nontrivial subgroup of the group of fourth roots of unity
(because t_M^4 = 1 by Lemma 3.4 of Chapter 2), it now follows that, since k is odd,
ε_{q,k}(X̃) = 0; thus,

(4.103) L̂_p^n(q) = Ê_p^n(q),

where L̂_p^n(q) = ε_{q,k}(L̃_p^n(q)) and Ê_p^n(q) = ε_{q,k}(Ẽ_p^n(q)). In Chapter 4 we shall show that
the homomorphism ε_{q,k} commutes with the representation of the Hecke rings on the
corresponding spaces of modular forms. Hence, (4.103) shows that in the theory of
Siegel modular forms of half-integer weight k/2, where k is the same as in (4.101), it is
only the even Hecke rings Ẽ_p^n(q) (or the rings Ẽ_p^n(q, χ), which do not differ from them
in any essential way) that are of importance.
LEMMA 4.19. In the ring

(4.104) L̂_p^n(q) = ε_{q,k}(L̃_p^n(q)) ⊂ L_{0,p}^C

the images of the elements (4.82) have the following left coset decompositions:

(4.105) T̂_i(p^2) = ε_{q,k}(T̃_i(p^2)) = Σ_{a+b≤n, a≥i} Π_{a,b}^{(a-i)}(k)

for i = 0, 1, ..., n, where, in the notation (4.42), one has

(4.106) Π_{a,b}^{(r)}(k) = Σ_{B_0,S,V; r_p(B_0)=r} χ(B_0)^{-k} (Γ_0^n M_{a,b}(B_0, S, V)),

in which χ is the function (4.70) and the matrices B_0, S, and V run through the sets in
(4.41).

PROOF. The lemma follows from (4.91) and Proposition 4.12. □

In §3.3 we defined the maps Φ, ω, and Ω. We shall use the same letters to denote
the extensions by linearity to the complexifications of the corresponding rings in (3.50).

LEMMA 4.20. Let π_{a,b}^n(p) be the elements (2.31), and let Φ be the
homomorphism in the diagram (3.50). Then, in the notation of §3.3, the following relation holds:

(4.107) Φ(Ê_p^n(q, χ)) = Q[x_0^2 Λ_0, x_0^2 Λ_1, ..., x_0^2 Λ_n],

where Ê_p^n(q, χ) = ε_{q,k}(Ẽ_p^n(q, χ)) and Λ_a denotes the element

(4.108) Λ_a = Σ_{0≤b≤n-a} p^{b(a+b+1)} π_{a,b}^n(p).
PROOF. (4.106) shows that

(4.109) Φ(Π_{a,b}^{(r)}(k)) = x_0^2 Σ_{B_0,S,V; r_p(B_0)=r} χ(B_0)^{-k} (Λ_n D_{a,b} V) = p^{b(a+b+1)} l_p^k(r, a) x_0^2 π_{a,b}^n(p),

where for 0 ≤ r ≤ a we have used the abbreviated notation

(4.110) l_p^k(r, a) = Σ_{B∈S_a(F_p), r_p(B)=r} χ(B)^{-k}.

Now from (4.105) and (4.109) we obtain the following formula for the Φ-images of
the elements T̂_i(p^2):

(4.111) Φ(T̂_i(p^2)) = x_0^2 Σ_{i≤a≤n} l_p^k(a - i, a) Λ_a.

From the definition (4.110) it follows that l_p^k(0, a) = 1. In addition, if we replace B by
-B in the sum (4.110) and use (4.98), we see that the sums (4.110) have real values.
But by (4.70) they lie in the field Q(√-1); hence, the l_p^k(r, a) are rational numbers.
From this and from (4.111) and (4.83) we obtain (4.107). □

It turns out that the image under Φ of the ring E_p^n(q) coincides with the ring on
the right side of (4.107). The proof of the next basic result of this section relies upon
this fact.
THEOREM 4.21. Let n, q ∈ N, where q is divisible by 4, and let p be a prime not
dividing q. Then, in the notation of Theorem 3.30:
(1) The restriction of the map Ω to the subring

(4.112) ε_{q,k}(Ē_p^n(q, χ)) ⊂ L_{0,p}^C,

where k is an arbitrary odd integer and

(4.113) Ē_p^n(q, χ) = Q[T̃_0(p^2), ..., T̃_n(p^2)]

is the integral subring of the ring (4.83), gives an isomorphism of this ring with the ring
Q[x_0, ..., x_n]_{W_2} of polynomials that are invariant under the group of automorphisms
W_2 = W_n', which is obtained by adjoining to W_n the automorphism τ_0: τ_0(x_0) = -x_0,
τ_0(x_i) = x_i for i = 1, ..., n.
(2) The ring Q[x_0, ..., x_n]_{W_2} is generated over Q by the polynomials

(4.114) t^2 = (t_n(x_0, x_1, ..., x_n))^2, p_a = p_a^n(x_0, x_1, ..., x_n) (0 ≤ a ≤ n-1).

The polynomials (4.114) are algebraically independent over Q.


(3) The restriction of Ω to the full subring Ê_p^n(q, χ) = ε_{q,k}(Ẽ_p^n(q, χ)) gives an isomorphism of this ring
with the polynomial ring

(4.115) Q[t^{±2}, p_0, ..., p_{n-1}],

and hence the restriction of Ω to the subring Ê_p^n(q, χ) of the ring L_{0,p}^C is a monomorphism.
(4) The elements T̃_0(p^2), ..., T̃_n(p^2) of the ring Ẽ_p^n(q, χ) are algebraically independent
over Q, and the map ε_{q,k} gives an isomorphism of this ring with the ring Ê_p^n(q, χ).
PROOF. The plan of proof is as follows: we compute the Φ-image of E_p^n(q),
compare it with the analogous image of Ê_p^n(q, χ), and then apply Theorem 3.30.
According to Theorem 3.23, (4.79), and (4.81), we can write

(4.116) Ē_p^n(q) = Q[T, T_1(p^2), ..., T_n(p^2)],

where T = T(p)^2 or T_0(p^2), and

(4.117) E_p^n(q) = Ē_p^n(q)[T_n(p^2)^{-1}].

Let Ě_p^n(q) be the ε_q-image of the ring (4.117). From (3.61) and Lemma 3.34 it follows
that Φ takes the generators T_i(p^2) of this ring to the elements

(4.118) Φ(T_i(p^2)) = x_0^2 Σ_{i≤a≤n} l_p(a - i, a) Λ_a,

where Λ_a is given by (4.108). Since the l_p(r, a) are rational numbers and l_p(0, a) = 1,
it follows from this and from (4.116) that Φ(Ě_p^n(q)) = Q[x_0^2 Λ_0, ..., x_0^2 Λ_n], which,
together with (4.107), implies that

(4.119) Ω(Ê_p^n(q, χ)) = Ω(Ě_p^n(q)).

We now apply Theorem 3.30. Since Ω(T(p)) = t (by (3.70)), it follows from
(4.116) that

(4.120) Ω(Ě_p^n(q)) = Q[t^2, p_0, ..., p_{n-1}].

If we take into account the definitions (3.52)-(3.54) of the polynomials t and p_a,
we see that the right side of the last equality coincides with the polynomial ring
Q[x_0, ..., x_n]_{W_2}. From this, (4.119), (4.120), and the commutativity of the ring
Ê_p^n(q, χ) (which follows from Theorem 4.13), we obtain the first and second parts of
the theorem.

The third part follows from (4.117), the analogous equality for the ring Ê_p^n(q, χ),
and the fact that

(4.121) Ω(T̂_n(p^2)) = Ω(Ť_n(p^2)) = t^2.

Here the first equality follows from (4.111) and (4.118), and the second equality is a
consequence of Lemma 3.34.

Finally, the fourth part follows from the second and third parts and from the
commutativity of the ring Ê_p^n(q, χ). □

Just as in the case of L_p^n, this theorem enables us to parameterize the set of all
Q-linear homomorphisms from the ring Ê_p^n(q, χ) to the field C.
§5. HECKE RINGS FOR THE TRIANGULAR SUBGROUP 179

PROPOSITION 4.22. Every nonzero Q-linear homomorphism Λ from the ring Ê_p^n(q, χ)
to C has the form

(4.122) T → Λ_A(T) = Ω(T)|_{(x_0, ..., x_n) = A},

where T ∈ Ê_p^n(q, χ) and A = (α_0, ..., α_n) is a set of nonzero complex numbers that
depends on Λ. This set is called the parameters of the homomorphism Λ. If one set of
parameters is obtained from another by the action of a transformation in W_2, then the
two sets of parameters correspond to the same homomorphism.

PROOF. The proposition is an immediate consequence of Theorem 4.21 and the
proof of Proposition 3.36. □

PROBLEM 4.23. Let n = 1 and Δ̂ = T̂_1(p^2). Prove that Ω takes the polynomials
over Ê_p^1(q, χ)

R̂_p(v) = 1 - (pΔ̂)^{-1} T̂_0(p^2) v + v^2, Q̂_p(v) = 1 - T̂_0(p^2) v + (pΔ̂)^2 v^2

respectively to the polynomials r(x_1; v) and q(x_0^2, x_1^2; v) in Q[x_0^{±1}, x_1^{±1}]_{W_2} (see (3.52)
and (3.76)).
[Hint: Use (4.111).]

§5. Hecke rings for the triangular subgroup of the symplectic group
When studying elements of a Hecke ring of the symplectic group, it is sometimes
convenient to decompose them into suitable components which, however, do not
themselves belong to this Hecke ring. The place where all of these components lie is a
suitable Hecke ring of the triangular subgroup Γ_0^n of the modular group Γ^n.
1. Global rings. According to Lemma 3.25(3), we can define the global Hecke ring
for Γ_0^n

(5.1) L_0^n = L_Q(Γ_0^n, S_0^n),

and for any q ∈ N we can define its q-subring

(5.2) L_0^n(q) = L_Q(Γ_0^n, S_0^n(q)),

where S_0^n(q) = S_0^n ∩ GL_{2n}(Z_q). It is clear that the local rings L_{0,p} that were introduced
in §3.3 are contained in L_0^n(q) if (p, q) = 1.
By analogy with the local case, it follows from Lemma 3.4 that the Hecke pairs
(Γ^n(q), S^n(q)) and (Γ_0^n(q), S_0^n(q)), and the Hecke pairs obtained from them in the case 4|q
by lifting by means of the homomorphisms r and P (see (4.3) and (4.99)), satisfy the
conditions (1.26). Thus, one can determine imbeddings (1.27) of the corresponding
Hecke rings:

(5.3) ε: L^n(q) → L_0^n(q),

which enables us in place of L^n(q) and (by Theorems 4.6 and 4.21)

(5.4) Ẽ^n(q, χ) = ⊗_{Q, p∤q} Ẽ_p^n(q, χ) ⊂ L̃^n(q)

to consider the isomorphic subrings

(5.5) ε(L^n(q)) and ε_{q,k}(Ẽ^n(q, χ))

inside the global ring L_0^n(q) ⊗_Q C, where ε_{q,k} = P_k ∘ ε.
We shall examine certain multiplicative properties of the rings (5.2). Unlike the
Hecke rings of the symplectic group and the symplectic covering group, the Hecke
rings of the triangular subgroup are noncommutative and contain zero divisors.
PROBLEM 5.1. Let n = 1, let p be an odd prime, and let X be an element of L_{0,p}^1 of
the form

X = Σ_{i=1}^{p-1} (-1)^i (( p i ; 0 p ))_Γ (Γ = Γ_0^1).

Show that the following relations hold in the ring L_{0,p}^1:

[Hint: Show that the double cosets

(( p i ; 0 p ))_Γ for i = 1, ..., p - 1

are pairwise distinct, and each of them consists of a single left coset modulo Γ.]
However, several important properties of the rings L^n(q) and E^n(q, χ) do carry
over to the rings L_0^n(q). Moreover, in practice we shall have need only of certain
subrings and submodules of the rings L_0^n(q) and L_{0,p}.

LEMMA 5.2. For any M ∈ S_0^n(q) and r ∈ Z_q^*, where n, q ∈ N, the following relations
hold in the ring L_0^n(q):

(rM)_{Γ_0} = Δ_n(r)(M)_{Γ_0} = (M)_{Γ_0} Δ_n(r),

where

(5.6) Δ_n(r) = (rE_{2n})_{Γ_0} = (Γ_0 · rE_{2n})

and Γ_0 = Γ_0^n. In particular, Δ_n(r_1)Δ_n(r_2) = Δ_n(r_1 r_2) for r_1, r_2 ∈ Z_q^*.

PROOF. The decomposition (5.6) is obvious. The lemma follows from this decomposition
and the definition of multiplication in Hecke rings. □

The lemma implies that elements of the form Δ_n(r) lie in the center of L_0^n(q) and
are invertible in this ring. As in the case of the analogous lemmas for the Hecke rings
considered earlier, in practical calculations this lemma enables us to reduce the case of
arbitrary double cosets to the case of double cosets of integer matrices.
The map j in §1.4 allows us to define an important anti-automorphism * of the
ring L_0^n(q) ⊗_Q C.

PROPOSITION 5.3. Let Γ_0 = Γ_0^n, and let

X = Σ a_i (M_i)_{Γ_0}, where a_i ∈ C, M_i ∈ S_0^n(q),

be an arbitrary element of L_0^n(q) ⊗_Q C. Then the C-antilinear map

(5.7) X → X* = Σ ā_i (r(M_i) M_i^{-1})_{Γ_0},

where ā_i is the complex conjugate of a_i, is an anti-automorphism of order 2 of the ring
L_0^n(q) ⊗_Q C, i.e.,

(5.8) (XY)* = Y* X*, (X*)* = X (X, Y ∈ L_0^n(q) ⊗_Q C).

Every element X in the subrings (5.5) of L_0^n(q) ⊗_Q C is invariant relative to the anti-automorphism
(5.7):

(5.9) X* = X for X ∈ L^n(q) or E^n(q, χ).

PROOF. We consider the diagram

(5.10)
        Ẽ^n(q, χ) --ε--> L̃_0^n(q) --P_k--> L_0^n(q) ⊗_Q C
            |*               |*                  |*
        Ẽ^n(q, χ) --ε--> L̃_0^n(q) --P_k--> L_0^n(q) ⊗_Q C

in which only the first two vertical arrows have not been defined. We define the first of
these maps in such a way that it acts like (4.88) on the local components Ẽ_p^n(q, χ). By
Theorems 4.6 and 4.13 and the formula (4.89), this map * is an anti-automorphism,
and X* = X for any X ∈ Ẽ^n(q, χ). The second map is defined on double cosets by
setting (ξ̃)_{Γ̃_0} → (ξ̃*)_{Γ̃_0}, where Γ̃_0 = Γ̃_0^n, and by extending the map by Q-linearity onto
all of L̃_0^n(q). The fact that this is an anti-automorphism will be shown by examining
the third *-map, which is equal to the composition i · j = j · i of the C-antilinear
automorphism i of L_0^n(q) ⊗_Q C and the C-linear anti-automorphism j of L_0^n(q) ⊗_Q C (both of
order 2), which, in turn, are defined on double cosets by the conditions

i: (M)_{Γ_0} → (r(M^{-1})M)_{Γ_0}, j: (M)_{Γ_0} → (M^{-1})_{Γ_0} (M ∈ S_0^n(q)).

Now the first part of the proposition follows from Proposition 1.11. From Lemma 1.13
and the definition of all of the maps in (5.10) we conclude that the diagram commutes.
This implies (5.9) for X ∈ E^n(q, χ). For X ∈ L^n(q) the equality (5.9) is obtained from
(3.14). □

We now turn our attention to subrings of L_0^n(q). It turns out that, in addition to
the Hecke rings of the symplectic group, this ring also contains commutative subrings
that can be obtained as the centralizers of certain sets of elements and that are naturally
isomorphic to Hecke rings of the general linear group of order n (more precisely, to
certain extensions of them). This circumstance makes it possible to reduce various
questions in the theory of Hecke rings and Hecke operators for the symplectic group
to analogous questions for the general linear group.

We consider the following elements of the Hecke ring L_0^n(q):

(5.11) Π_-^n(m) = (( mE_n 0 ; 0 E_n ))_{Γ_0}, Π_+^n(m) = (( E_n 0 ; 0 mE_n ))_{Γ_0},

where m ∈ N and (m, q) = 1. We have the following
LEMMA 5.4. The decomposition of the elements Π_±(m) = Π_±^n(m) into left cosets
modulo Γ_0 = Γ_0^n has the form

(5.12) Π_-(m) = (Γ_0 ( mE 0 ; 0 E )), Π_+(m) = Σ_{B∈S_n mod m} (Γ_0 ( E B ; 0 mE )),

where E = E_n. We have the relations:

(5.13) Π_-(m_1)Π_-(m_2) = Π_-(m_1 m_2), Π_+(m_1)Π_+(m_2) = Π_+(m_1 m_2),

(5.14) Π_-(m)Π_+(m) = m^{n(n+1)/2} Δ_n(m),

where Δ_n is the element (5.6);

(5.15) Π_-(m_1)Π_+(m_2) = Π_+(m_2)Π_-(m_1), if (m_1, m_2) = 1,

(5.16) Π_-(m)* = Π_+(m), Π_+(m)* = Π_-(m).

PROOF. The decompositions in (5.12) follow from (3.44). The relations (5.13)-
(5.16) are obtained directly from the definitions and (5.12). □
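For n = 1 the relations (5.13)-(5.15) can be checked mechanically on coset representatives. The following Python sketch (ours, for illustration; the encoding of a left coset Γ_0 M by a canonical triple is explained in the comments) multiplies Hecke elements represented as multisets of left cosets:

```python
from collections import Counter

def canon(M):
    """Canonical representative of the left coset Gamma_0 * M for an
    upper-triangular integer matrix M = (a, b; 0, d) with a*d > 0:
    left multiplication by (1, t; 0, 1) changes b by t*d, and by -E
    changes all signs, so (a, b mod d, d) with a, d > 0 is a complete
    invariant of the coset."""
    a, b, d = M
    if a < 0:
        a, b, d = -a, -b, -d
    return (a, b % d, d)

def mul(M, N):
    """Product of upper-triangular matrices (a, b; 0, d)(a', b'; 0, d')."""
    a, b, d = M
    a2, b2, d2 = N
    return (a * a2, a * b2 + b * d2, d * d2)

def hecke_mul(X, Y):
    """Product of two Hecke elements given as Counters of left cosets."""
    Z = Counter()
    for M, cM in X.items():
        for N, cN in Y.items():
            Z[canon(mul(M, N))] += cM * cN
    return Z

def pi_plus(m):   # (5.12): the cosets Gamma_0 (1, b; 0, m), b mod m
    return Counter({canon((1, b, m)): 1 for b in range(m)})

def pi_minus(m):  # (5.12): the single coset Gamma_0 (m, 0; 0, 1)
    return Counter({canon((m, 0, 1)): 1})

def delta(m):     # (5.6): the central element (m E_2)
    return Counter({canon((m, 0, m)): 1})
```

In particular, for n = 1 the product Π_-(m)Π_+(m) collapses to m copies of Δ_1(m), matching the exponent n(n+1)/2 = 1 in (5.14).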

We now consider the subsets of L_0^n(q) consisting of all elements that commute
with all elements of the form Π_-(m) and all elements of the form Π_+(m), respectively:

(5.17) C_-^n(q) = {X ∈ L_0^n(q); Π_-(m)X = XΠ_-(m), (m, q) = 1},
(5.18) C_+^n(q) = {X ∈ L_0^n(q); Π_+(m)X = XΠ_+(m), (m, q) = 1}.

These are clearly subrings of L_0^n(q). From (5.16) it follows that the anti-automorphism
* takes each of these subrings into the other one:

(5.19) C_-^n(q)* = C_+^n(q), C_+^n(q)* = C_-^n(q).

PROPOSITION 5.5. The ring C_-^n(q) (resp. C_+^n(q)) is spanned by the double cosets
modulo Γ_0 = Γ_0^n of elements of the form

(5.20) M = U(r, D) ∈ S_0^n(q), where d_n(D)^2 | r

(resp., of the form

(5.21) M = U(r, D) ∈ S_0^n(q), where r | d_1(D)^2),

where U(r, D) = ( rD* 0 ; 0 D ) and d_i(D) denotes the ith elementary divisor of the
matrix D.

We first describe the decomposition of the Γ_0-double cosets of elements of the
form (5.20) and (5.21) into left cosets modulo Γ_0.

LEMMA 5.6. If M is an element of the form (5.20), then

(5.22) (M)_{Γ_0} = Σ_{D_i∈Λ\ΛDΛ} (Γ_0 U(r, D_i)),

and if M is of the form (5.21), then

(5.23) (M)_{Γ_0} = Σ_{D_i∈Λ\ΛDΛ, S∈S_n/S_i'} (Γ_0 ( rD_i* rD_i*S ; 0 D_i )),

where Λ = Λ^n, and in the last condition under the summation the set S_i' = r^{-1} ᵗD_i S_n D_i
is contained in the group S_n = S_n(Z) and is regarded as a subgroup there.
PROOF OF THE LEMMA. It is easy to see that

Γ_0 U(r, D) Γ_0 = { ( rD_i* B_ij ; 0 D_i ) ; D_i ∈ ΛDΛ, B_ij ∈ rD_i*S_n + S_nD_i }

and that matrices in this set lie in the same Γ_0-left coset if and only if the corresponding
D_i lie in the same Λ-left coset and the corresponding B_ij are congruent on the right
modulo D_i. If d_n(D)^2 | r, then rS[D_i^{-1}] is an integer matrix for any D_i ∈ ΛDΛ and
S ∈ S_n, since d_n(D)D_i^{-1} is an integer matrix. Consequently, in this case all of the
matrices in the set rD_i*S_n + S_nD_i are divisible on the right by D_i, and in choosing Γ_0-left
coset representatives we may suppose that all B_ij = 0. This gives (5.22). In the general
case, when choosing left coset representatives we may always take B_ij = rD_i*S_j, where
the S_j ∈ S_n(Z) are such that the matrices B_ij are pairwise noncongruent modulo D_i.
The congruence B_ij ≡ B_ij' (mod D_i) means that B_ij - B_ij' = S'D_i with S' ∈ S_n(Z),
i.e., S_j - S_j' = r^{-1}S'[D_i] ∈ r^{-1} ᵗD_i S_n(Z) D_i. If r | d_1(D)^2, then this last set is contained
in S_n(Z). □
PROOF OF THE PROPOSITION. We first note that from the uniqueness of elementary
divisors it follows that the following relations hold for any nonsingular rational n×n
matrix D:

(5.24) d_i(D^{-1}) = d_{n+1-i}(D)^{-1} for i = 1, ..., n.

These relations imply, in particular, that d_1(rD^{-1}) = r d_n(D)^{-1}, and so we find that
the matrix M has the form (5.20) if and only if the matrix rM^{-1} = U(r, rD^{-1}) has
the form (5.21). Thus, the anti-automorphism * takes linear combinations of double
cosets of elements of the form (5.20) to similar linear combinations of double cosets
of elements of the form (5.21), and conversely. Taking (5.19) into account, we see that
it suffices to prove the proposition for one of the two rings C_±^n(q), say, for C_-^n(q).
If M has the form (5.20), then from (5.11) and (5.22) we obtain

Π_-(m)(M)_{Γ_0} = Σ_{D_i∈Λ\ΛDΛ} (Γ_0 U(mr, D_i)) = (M)_{Γ_0} Π_-(m).
Conversely, suppose that

X = Σ_α a_α ( Γ_0 ( r_α D_α* B_α ; 0 D_α ) ) ∈ C_-^n(q),

where the left cosets are pairwise distinct and all a_α are nonzero. We choose an integer
m prime to q for which all of the matrices m B_α D_α^{-1} are integer matrices. Then (see
(5.11))

Π_-(m)X = Σ_α a_α ( Γ_0 ( m r_α D_α* m B_α ; 0 D_α ) ) = Σ_α a_α ( Γ_0 ( m r_α D_α* 0 ; 0 D_α ) ).

On the other hand,

Π_-(m)X = XΠ_-(m) = Σ_α a_α ( Γ_0 ( m r_α D_α* B_α ; 0 D_α ) ).

Comparing these sums, we conclude that for each α the matrix B_α is divisible on the
right by D_α. Hence, we may assume that all B_α = 0. Since X lies in the Hecke ring
of the group Γ_0, it is invariant under right multiplication by matrices in Γ_0 of the form
T(S) with S ∈ S_n(Z). This implies that for any such S and for any α the matrix
r_α D_α* S is also divisible on the right by D_α, i.e.,

(5.25) r_α S[D_α^{-1}] ∈ S_n(Z).
Let diag(d_1, ..., d_n) = ed(D_α) be the elementary divisor matrix of D_α. Since D_α =
γ (ed D_α) δ, where γ, δ ∈ Λ, it follows that D_α can be replaced by ed(D_α) in (5.25).
Then this condition obviously means that all of the ratios r_α/d_i d_j are integers, and
this is equivalent to the condition d_n(D_α)^2 | r_α. Finally, because X is invariant under
right multiplication by matrices in Γ_0 of the form U(γ) with γ ∈ Λ, it follows that the
expansion of X can be rewritten in the form

X = Σ_β a_β { Σ_{D_α∈Λ\ΛD_βΛ} (Γ_0 U(r_β, D_α)) },

where D_β runs through D_α lying in pairwise distinct Λ-double cosets, and any of the
Λ-left cosets in ΛD_βΛ is equal to one of the ΛD_α with D_α ∈ ΛD_βΛ. Then the relation
d_n(D_α)^2 | r_α and (5.22) imply that the expression in braces is the decomposition of some
double coset (M_β)_{Γ_0}, where M_β has the form (5.20). □

An important tool for studying the global ring L₀ and its subrings is the global
analogue of the map Φ that was defined in §3.3. We shall associate variables y_p to the
prime numbers p, and for different p we suppose that they commute with one another.
We let Ω = Q[..., y_p^{±1}, ...] be the ring of polynomials over Q in the variables y_p^{±1}
(p = 2, 3, 5, ...). We define the Q-linear map Φ = Φⁿ from the module L_Q(Γ₀, S₀) to
the module L_Ω(Λⁿ, Gⁿ) by setting

(5.26)  Φ( Γ₀ ( r D*  B ; 0  D ) ) = y_{p₁}^{δ₁} ⋯ y_{p_s}^{δ_s} (Λⁿ D),

if r = p₁^{δ₁} ⋯ p_s^{δ_s}. It is clear that this map does not depend on the choice of left coset
representatives.
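The monomial in (5.26) depends only on the factorization of r: each prime power p^δ dividing r contributes y_p^δ. A minimal sketch of this bookkeeping (the helper name `phi_monomial` is ours, not the book's), with commuting SymPy symbols standing in for the variables y_p:

```python
from functools import reduce
from operator import mul

from sympy import factorint, symbols

def phi_monomial(r):
    """r = p1^d1 * ... * ps^ds  ->  y_p1^d1 * ... * y_ps^ds."""
    factors = factorint(r)  # {prime: exponent}
    return reduce(mul, (symbols(f'y_{p}') ** d for p, d in factors.items()), 1)

print(phi_monomial(360))   # 360 = 2^3 * 3^2 * 5  ->  y_2**3 * y_3**2 * y_5
```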
PROPOSITION 5.7. The restriction of the map Φⁿ to the Hecke ring L₀ ⊂ L_Q(Γ₀, S₀)
gives an epimorphism of this ring onto the Hecke ring D_Ω(Λⁿ, Gⁿ) of the Hecke pair
(Λⁿ, Gⁿ) over Ω:

(5.27)  Φ = Φⁿ: L₀ → D_Ω(Λⁿ, Gⁿ) = Hⁿ[..., y_p^{±1}, ...].
§5. HECKE RINGS FOR THE TRIANGULAR SUBGROUP 185

PROOF. The invariance of elements X ∈ L₀ under right multiplication by matrices
U(γ) (γ ∈ Λⁿ) implies the invariance of their images Φ(X) under right multiplication
by matrices γ ∈ Λⁿ. Hence, Φ(L₀) ⊂ D_Ω(Λⁿ, Gⁿ). From the definition of multipli-
cation in Hecke rings it then follows that (5.27) is a homomorphism. Finally, from
(5.11) we have

Φ( Π₋(p₁^{δ₁} ⋯ p_s^{δ_s}) ) = y_{p₁}^{δ₁} ⋯ y_{p_s}^{δ_s},

and from (3.44) we have Φ((U(D))_{Γ₀}) = a(D)(D)_Λ, where D ∈ Gⁿ and a(D) ≠ 0.
These equalities imply that we have an epimorphism. □

THEOREM 5.8. The restrictions of the map Φⁿ to the subrings C₋ⁿ(q) and C₊ⁿ(q)
of L₀, where n, q ∈ N, are monomorphisms. In particular, C₋ⁿ(q) and C₊ⁿ(q) are
commutative rings with no zero divisors.
PROOF. The proof is similar for C₋ⁿ(q) and C₊ⁿ(q); to be definite, we consider the
case of C₊ⁿ(q). By Proposition 5.5, every nonzero X ∈ C₊ⁿ(q) can be written in the
form

X = Σ_i a_i ( U(r_i, D_i) )_{Γ₀},

where all of the U(r_i, D_i) have the form (5.21), the double cosets are pairwise distinct,
and all a_i are nonzero. Then from (5.23) it follows that

Φ(X) = Σ_i a_i α_i ( Π_p y_p^{ν_p(r_i)} ) (D_i)_Λ,

where the α_i are positive integers, p runs through the prime numbers, and ν_p(r)
denotes the exact power of p that occurs in the prime factorization of r. The terms
corresponding to different i have either different r_i, or else different (D_i)_Λ, and hence
do not cancel one another. Consequently, Φ(X) ≠ 0. □

According to this theorem, the rings C±ⁿ(q) may be regarded as extensions of the
global Hecke ring of the general linear group. Since the ring L₀ contains the global
Hecke rings of the symplectic group, this makes it possible for us to examine the
connections between Hecke rings of the symplectic group and the general linear group.
2. Local rings. In earlier sections we have already studied the local Hecke ring L₀,pⁿ
for each prime p, and also its local subrings L_pⁿ and Ē_pⁿ(q, χ) (see (3.45) and (4.112)).
It is clear that

(5.28)  L₀,p ⊂ L₀ⁿ(q), if (p, q) = 1.

We now introduce local analogues of the rings C±ⁿ(q). We set

(5.29)  C₋,pⁿ = C₋ⁿ(q) ∩ L₀,p,  C₊,pⁿ = C₊ⁿ(q) ∩ L₀,p.


From Proposition 5.5 it follows that these rings do not depend on q with (p, q) = 1.
The global Theorem 5.8 implies the following local variant.

THEOREM 5.9. The rings C₋,pⁿ and C₊,pⁿ, where n ∈ N and p is a prime number, are
commutative rings with no zero divisors. Moreover, the restrictions to either C₋,pⁿ or C₊,pⁿ
of the maps Φ_pⁿ and Ω_pⁿ that were defined in §3.3 are monomorphisms.

PROOF. We note that we can obtain the map Φ_pⁿ on L₀,p if we take the restriction
to L₀,pⁿ of the map Φⁿ and then set y_p = x₀. Thus, it follows from Theorem 5.8 that
the restriction of Φ_pⁿ to either C₋,pⁿ or C₊,pⁿ is a monomorphism. From this and Lemma
3.29 we see that the restrictions of Ω_pⁿ are also monomorphisms. □

In the next section we make a more detailed study of the properties of the local
rings for fixed p, in connection with the problem of factoring polynomials over L_pⁿ
or Ē_pⁿ(q, χ). For now we limit ourselves to a discussion of some of the connections
between the local rings corresponding to different primes.

THEOREM 5.10. The ring C₋ⁿ(q) (resp. C₊ⁿ(q)), where n, q ∈ N, is generated by the
subrings C₋,pⁿ (resp. C₊,pⁿ), where p runs through all prime numbers not dividing q.
PROOF. From (5.19) it follows that it suffices to treat, say, the case of C₋ⁿ(q).
Proposition 5.5 implies that for this it is enough to verify that, given an arbitrary
M of the form (5.20), the double coset (M)_{Γ₀} is a finite product of double cosets
(M_p)_{Γ₀} ∈ C₋,pⁿ, where p runs through a set of distinct primes not dividing q. Let
M = U(r, D). By Lemma 2.2, if we replace M by another representative of the
same Γ₀-double coset, we may assume that D is equal to its elementary divisor matrix
ed(D) = diag(d_1, ..., d_n). For each p we set

D_p = diag(p^{ν_p(d_1)}, ..., p^{ν_p(d_n)}),  r_p = p^{ν_p(r)},  and  M_p = U(r_p, D_p),

where ν_p(a) is the exponent of p in the prime factorization of the rational number a.
Clearly, M_p is not equal to the identity matrix for only finitely many p, and none of
these p divide q. Each matrix M_p lies in S₀,p. Since d_n(D)² | r, it follows that d_n(D_p)² divides
r_p for each p; hence, (M_p)_{Γ₀} ∈ C₋,pⁿ. Because Π_p r_p = r and, by Proposition 2.5,
Π_p (D_p)_Λ = (Π_p D_p)_Λ = (D)_Λ, we conclude from Lemma 5.6 and the definitions
that the double coset (M)_{Γ₀} is equal to the product of the double cosets (M_p)_{Γ₀}. □
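The p-primary pieces M_p = U(r_p, D_p) in this proof are obtained purely from prime factorizations. A sketch (the function name is ours), assuming D is already replaced by its elementary divisor matrix diag(d_1, ..., d_n):

```python
import math
from sympy import factorint

def local_parts(r, diag):
    """Split U(r, diag(d_1,...,d_n)) into p-primary data
    (r_p, D_p) with r_p = p^{v_p(r)} and D_p = diag(p^{v_p(d_i)})."""
    primes = set(factorint(r)) | {p for d in diag for p in factorint(d)}
    return {p: (p ** factorint(r).get(p, 0),
                [p ** factorint(d).get(p, 0) for d in diag])
            for p in sorted(primes)}

r, diag = 144, [1, 2, 12]      # note d_n(D)^2 = 144 divides r
parts = local_parts(r, diag)

# the pieces recover r and D: product of r_p is r, entrywise product of D_p is D
assert r == math.prod(rp for rp, _ in parts.values())
assert diag == [math.prod(Dp[i] for _, Dp in parts.values())
                for i in range(len(diag))]
print(parts)    # {2: (16, [1, 2, 4]), 3: (9, [1, 1, 3])}
```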

Our next task is to prove that elements of the rings L_pⁿ and Ē_pⁿ(q, χ) commute with
elements of the rings C±,p₁ⁿ if (p, p₁) = 1. To do this, we examine the double cosets
T₁(p²) and T_i(p²) in more detail. According to (3.61) and (4.105), these double cosets
can be represented as sums of the Γ₀-double cosets Π_{a,b}^{(r)} and Π_{a,b}^{(r)}(k), where Γ₀ = Γ₀ⁿ.
Just as in the case of (4.106), from (4.90) we obtain the expansion

(5.30)  Π_{a,b}^{(r)} = Σ_{B₀, S, V; r_p(B₀) = r} ( Γ₀ M_{a,b}(B₀) T(S) U(V) ),

and hence Π_{a,b}^{(r)} is a sum of double cosets

(5.31)  (M_{a,b}(B₀))_{Γ₀},  where B₀ ∈ S_a(Z)/mod p, r_p(B₀) = r.


Suppose that the matrix ε = (ε_{ij}) ∈ Λ(D) is divided into blocks in the same way
as D = D_{a,b} (see (2.31)). Then ε = D^{-1}η^{-1}D (i.e., ηDε = D) with η = (η_{ij}) ∈ Λ;
this implies the congruences

(5.32)  ε₁₂ ≡ 0 (mod p),  ε₂₃ ≡ 0 (mod p),  ε₁₃ ≡ 0 (mod p²),
        det ε_{ii} ≢ 0 (mod p)  for i = 1, 2, 3.

Using these congruences, (4.39), and the relation η* = D^{-1}·ᵗεD, we obtain
(5.33)
From Lemma 3.2 it follows that for a < n and for any U ∈ GL_a(F_p) there exists
ε ∈ Λ(D) such that ε₂₂ ≡ U (mod p). If a = n, then ε₂₂ = ε, so that det ε₂₂ = det ε =
±1, and the matrix U must satisfy the condition det U ≡ ±1 (mod p). From this and
(5.33) it follows that the matrix B₀ in the double coset (5.31) can be any matrix of the
set

(5.34)  {B₀}_p = { B₀[U] mod p; U ∈ GL_a(F_p) }  for a < n,
        {B₀}_p = { B₀[U] mod p; U ∈ GL_a(F_p), det U ≡ ±1 (mod p) }  for a = n,

where F_p = Z/pZ.
One can also show that, if the double cosets (5.31) for B₀ and B₀′ are the same,
then {B₀}_p = {B₀′}_p. Hence, taking into account (5.33) and (5.30), we obtain

(5.35)  (M_{a,b}(B₀))_{Γ₀} = Σ_{B₀′ ∈ {B₀}_p, S, V} ( Γ₀ M_{a,b}(B₀′) T(S) U(V) ),

where S and V run through the sets of matrices in (4.41). From this we easily see that
the equality

(5.36)  (M_{a,b}(B₀))_{Γ₀} = (M_{a₁,b₁}(B₀′))_{Γ₀}

holds if and only if a = a₁, b = b₁, and {B₀}_p = {B₀′}_p.

LEMMA 5.11. Let 0 ≤ r ≤ a, a + b ≤ n. Then Π_{a,b}^{(r)} and Π_{a,b}^{(r)}(k) have the following
decompositions into Γ₀-double cosets, where Γ₀ = Γ₀ⁿ:

(5.37)  Π_{a,b}^{(r)} = Σ_{{B₀}_p, r_p(B₀)=r} (M_{a,b}(B₀))_{Γ₀},

(5.38)  Π_{a,b}^{(r)}(k) = Σ_{{B₀}_p, r_p(B₀)=r} χ(B₀)^{-k} (M_{a,b}(B₀))_{Γ₀},

where the summation is taken over the set (5.34) in S_a(Z)/mod p. The action of the
anti-automorphism * on these elements is given by the formulas

(5.39)  (Π_{a,b}^{(r)})* = Π_{a,n-a-b}^{(r)},  (Π_{a,b}^{(r)}(k))* = Π_{a,n-a-b}^{(r)}(k).

PROOF. Since all of the left cosets in (5.30) and (4.106) and all of the double cosets
on the right in (5.37) and (5.38) are pairwise distinct, and since these double cosets
occur in Π_{a,b}^{(r)} and Π_{a,b}^{(r)}(k), it follows that (5.37) and (5.38) are consequences of (5.35).
By the definition of the anti-automorphism * (see (5.7)) we have

(M_{a,b}(B₀))*_{Γ₀} = ( ( D_{a,b}  −B ; 0  p²D_{a,b}* ) )_{Γ₀},

where B is the matrix in (4.39). We let I = I_n denote the n × n matrix with 1's on the
anti-diagonal and 0's everywhere else. Then the map A → IAI of any n × n matrix A
reverses the order of its rows and columns; from this it easily follows that

U(I) ( D_{a,b}  −B ; 0  p²D_{a,b}* ) U(I) = M_{a,n-a-b}(−I_a B₀ I_a).

Since U(I) ∈ Γ₀, we thus obtain the relation

(M_{a,b}(B₀))*_{Γ₀} = (M_{a,n-a-b}(−I_a B₀ I_a))_{Γ₀}.

The equalities in (5.39) follow from these relations and from (5.37)-(5.38), since the
map B₀ → −I_a B₀ I_a merely permutes the classes {B₀}_p with r_p(B₀) = r, and since, by
(4.70) and (4.98), we have χ(B₀) = χ(−B₀) = χ(−I_a B₀ I_a). □
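The fact about I = I_n used in this proof is elementary and easy to confirm; a small NumPy check (our illustration):

```python
import numpy as np

n = 4
I_anti = np.fliplr(np.eye(n, dtype=int))    # 1's on the anti-diagonal, 0's elsewhere

A = np.arange(n * n).reshape(n, n)
# left multiplication by I_anti reverses the rows, right multiplication
# reverses the columns, so A -> I A I reverses both
assert np.array_equal(I_anti @ A @ I_anti, A[::-1, ::-1])
# I_anti is an involution, hence A -> I A I is one as well
assert np.array_equal(I_anti @ I_anti, np.eye(n, dtype=int))
print("ok")
```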

PROPOSITION 5.12. Let p, p₁ be distinct primes, and let n ∈ N. Then:

(1) every element in L_pⁿ or Ē_pⁿ(q, χ) commutes with every element in C₋,p₁ⁿ or C₊,p₁ⁿ;
(2) every element in C₋,pⁿ commutes with every element in C₊,p₁ⁿ.

PROOF. By Proposition 5.3 and (5.19), it suffices to prove part (1) for, say, C₋,p₁ⁿ.
According to Proposition 5.5, to do this it is enough to verify that any double coset
(M)_{Γ₀}, where M = U(r, A₀) ∈ S₀,p₁ and d_n(A₀)² | r, commutes with all of the genera-
tors T(p) and T_i(p²) (1 ≤ i ≤ n) of the ring L_pⁿ and with all of the generators T_i(p²)
(1 ≤ i ≤ n) of the ring Ē_pⁿ(q, χ). From (3.58), (3.61), (4.105), and Lemma 5.11 we
see that this, in turn, will follow if we show that (M)_{Γ₀} commutes with all elements of
L₀,pⁿ of the form Π_a (0 ≤ a ≤ n) or (M_{a,b}(B₀))_{Γ₀} (a + b ≤ n). From (3.59) and (5.22)
we obtain
we obtain

(5.40)

where A ∈ Λ\ΛA₀Λ, D ∈ Λ\ΛD_aΛ, and B ∈ B₀(D)/mod D. We need some
preliminaries in order to transform this expression. Given a subring K of the field Q
and a matrix D ∈ GL_n(Q), we set

B_K(D) = { B ∈ M_n(K); ᵗBD = ᵗDB }.

Just as in the first part of Lemma 3.33, it is not hard to verify that

B_K(αDβ) = α* B_K(D) β  for α, β ∈ GL_n(K),

and that in this case we can take the set α*{B_K(D)/S_n(K)D}β as a set of representa-
tives of the residue classes B_K(αDβ)/S_n(K)αDβ. Now let K = Z[p₁^{-1}], and let D be
an integer matrix all of whose elementary divisors are prime to p₁. Then each residue
class B_K(D)/S_n(K)D contains an integer matrix. Namely, if we write an arbitrary
matrix B in B_K(D) in the form q^{-1}B₀ with B₀ an integer matrix and with q = p₁^δ, and
if we choose S₀ ∈ S_n(Z) so that B₀ + S₀D ≡ 0 (mod q) (D is invertible modulo q),
then we obtain B + q^{-1}S₀D ∈ B₀(D). This implies that in our case we can take

B_K(D)/S_n(K)D = B₀(D)/mod D.

Returning to (5.40), if we use the above considerations and Proposition 2.5, we can
write this expression in the form

where C ∈ Λ\ΛA₀D_aΛ, B ∈ r{B_K(C)/S_n(K)C}, and K = Z[p₁^{-1}]. Since r is
invertible in K, the factor r can be omitted in the last condition. Furthermore, using
the commutativity of the Hecke ring of the group Λ and Proposition 2.5, we see that

(A₀D_a)_Λ = (A₀)_Λ(D_a)_Λ = (D_a)_Λ(A₀)_Λ = (D_aA₀)_Λ.

Hence, ΛA₀D_aΛ = ΛD_aA₀Λ. Thus, in the last sum we may suppose that C ∈
Λ\ΛD_aA₀Λ; and, again using (5.22) and the above properties of the sets B_K, we can
rewrite this sum in the form

with the same A, B, and D as in (5.40).


We now prove that (M)_{Γ₀} commutes with the double cosets (M_{a,b}(B₀))_{Γ₀}. Ac-
cording to (5.35), we have

(5.41)  (M_{a,b}(B₀))_{Γ₀} = Σ_{D ∈ Λ\ΛD_{a,b}Λ, B} ( Γ₀ ( p²D*  B ; 0  D ) ),

where B runs through the matrices in the set

(5.42)  B_Z(D) = η* B_Z(D_{a,b}) ε,  where D = ηD_{a,b}ε, η, ε ∈ Λ,

and

(5.43)  B_Z(D_{a,b}) = { B = ( 0  0  0 ; 0  B₂₂  p·ᵗB₃₂ ; 0  B₃₂  B₃₃ ) ; B₂₂ ∈ {B₀}_p,
                         B₃₂ ∈ M_{b,a}(Z)/mod p, B₃₃ ∈ S_b(Z)/mod p² }.

The definition (5.42) is correct, i.e., it does not depend on how D ∈ ΛD_{a,b}Λ is
represented in the form ηD_{a,b}ε. To see this, it suffices to verify that, if ηD_{a,b}ε = D_{a,b}
with η, ε ∈ Λ, then

(5.44)  η* B_Z(D_{a,b}) ε = B_Z(D_{a,b}).

Using (5.32), the relation η* = D_{a,b}^{-1}·ᵗεD_{a,b}, and (5.43), we find the following congru-
ences for the blocks B_{ij}(ε) of the matrix B(ε) = η*Bε:

(5.45)  B₂₂(ε) ≡ B₂₂[ε₂₂] (mod p),
        B₃₂(ε) ≡ B₁ + B₃₂′(ε) (mod p),  B₃₃(ε) ≡ B₂ + B₃₃′(ε) (mod p²),

where B₁ and B₂ = ᵗB₂ are integer matrices whose explicit form is not important now,
and

B₃₂′(ε) ≡ ᵗε₃₃ B₃₂ ε₂₂ + ᵗε₃₃ B₃₃ ε₃₂ (mod p),
B₃₃′(ε) ≡ ᵗε₃₃ B₃₂ ε₂₃ + ᵗε₂₃ ᵗB₃₂ ε₃₃ + B₃₃[ε₃₃] (mod p²).

If B₃₂′ ≡ 0 (mod p) and B₃₃′ ≡ 0 (mod p²), then (5.32) implies that B₃₂ ≡ 0 (mod p)
and B₃₃ ≡ 0 (mod p²). From this, (5.45), and (5.43) we conclude that B → B(ε) is a
one-to-one map of the set B_Z(D_{a,b}) onto itself, and this proves (5.44).

Given the ring K = Z[p₁^{-1}] and a matrix D = ηD_{a,b}ε, where η, ε ∈ GL_n(K), in
the double coset GL_n(K)D_{a,b}GL_n(K) we define the following set:

(5.46)  B_K(D) = η* B_K(D_{a,b}) ε,

where the definition of the set B_K(D_{a,b}) is similar to (5.43), except that B₃₂ and B₃₃
have entries in K/pK and K/p²K, respectively, and the matrices B₂₂ belong to the class
{B₀}_K determined by the equations (5.34) with F_p = Z/pZ replaced by K/pK. Since
(p₁, p) = 1 by assumption, the proof that the definition (5.46) is correct is exactly the
same as in the case when the ring is Z. We obtain the following relation directly from
the definition (5.46):

(5.47)  B_K(αDβ) = α* B_K(D) β,  if α, β ∈ GL_n(K).

But since the residue ring K/p^m K, m ∈ N, is isomorphic to Z/p^m Z, it follows that
every class in the set B_K(D) contains an integer matrix; in this sense we may regard
B_K(D) as equal to B_Z(D) for any D ∈ ΛD_{a,b}Λ.

Next, if we use (5.41) and the properties of the sets B_Z(D) and B_K(D), then we
can prove that the double cosets (M)_{Γ₀} and (M_{a,b}(B₀))_{Γ₀} commute, in exactly the same
way as we did for the elements Π_a.
To prove part (2) of the proposition, according to Proposition 5.5 and Lemma
5.2 it is sufficient to verify that the double cosets (M₀)_{Γ₀} and (N₀)_{Γ₀} commute, where
M₀ = U(r, A₀) is an integer matrix in S₀,p satisfying (5.20), and N₀ = U(t, D₀) is an
integer matrix in S₀,p₁ satisfying (5.21). By Lemma 5.6, we can take matrices of the
form M = U(r, A) and N = U(t, D)·T(S), where A ∈ Λ\ΛA₀Λ, D ∈ Λ\ΛD₀Λ,
and S ∈ S_n(Z)/t^{-1}·ᵗD S_n(Z) D, as representatives of the left cosets contained in
these double cosets. Since p ≠ p₁, the matrix S can obviously be chosen in the
corresponding residue class in such a way that S ≡ 0 (mod r). Then

MN = ( rt(AD)*  rt(AD)*S ; 0  AD ) ∈ Γ₀ U(rt, AD) Γ₀ ⊂ Γ₀ M₀ N₀ Γ₀,

since ΛADΛ = ΛA₀Λ·ΛD₀Λ = ΛD₀Λ·ΛA₀Λ = ΛD₀A₀Λ, by Proposition 2.5 and
Theorem 2.3. Similarly, we have

NM = ( rt(DA)*  rt(DA)*t^{-1}S[A] ; 0  DA ) ∈ Γ₀ U(rt, AD) Γ₀ ⊂ Γ₀ M₀ N₀ Γ₀.

This implies that (M₀)_{Γ₀}(N₀)_{Γ₀} = α(N₀M₀)_{Γ₀} and (N₀)_{Γ₀}(M₀)_{Γ₀} = β(N₀M₀)_{Γ₀} for
certain constants α and β. A count of the left cosets on the left and right sides of these
equalities shows that α = β. □

PROBLEM 5.13. Prove that C₋,pⁿ (resp. C₊,pⁿ) is the centralizer of Π₋(p) (resp.
Π₊(p)) in L₀,pⁿ.
3. Expansion of Tⁿ(m) for n = 1, 2. At the beginning of this section we mentioned
that by passing from Hecke rings of Γⁿ to Hecke rings of the subgroup Γ₀ⁿ one can often
decompose elements of the former rings into more elementary components. In §6 we
shall consider these questions in more detail for the case of local Hecke rings. Here we
shall remain in the global situation, and for n = 1, 2 we shall obtain expansions of the
images Tⁿ(m) ∈ L₀ⁿ(q) of the elements (3.19) under the map (5.3).

PROPOSITION 5.14. Let m, q ∈ N with (m, q) = 1. Then

(5.48)  T¹(m) = Σ_{d₁d₂=m} Π₋¹(d₁) Π₊¹(d₂),

(5.49)  T²(m) = Σ_{d₁d₂d₃=m} Π₋²(d₁) Π(d₂) Π₊²(d₃),

where Π±ⁿ(d) are the elements (5.11), and

(5.50)  Π(d) = Π²(d) = ( U(d, D(d)) )_{Γ₀}  with  D(d) = ( 1  0 ; 0  d ).

PROOF. From (3.22), Lemma 3.11, and the definitions we obtain:

T¹(m) = Σ_{d₁d₂=m} Σ_{b mod d₁} ( Γ₀ ( d₂  b ; 0  d₁ ) ),

T²(m) = Σ_{d₁d₂d₃=m} Σ_{D, B′} ( Γ₀ ( d₃d₂D*  B′ ; 0  d₁D ) ),

where D ∈ Λ\ΛD(d₂)Λ, B′ ∈ B₀(D)/mod d₁D. On the other hand, by (5.12) we
have

Σ_{d₁d₂=m} Π₋¹(d₁) Π₊¹(d₂) = Σ_{d₁d₂=m} Σ_{b mod d₁} ( Γ₀ ( 1  b ; 0  d₁ ) ( d₂  0 ; 0  1 ) ),

which, combined with the above expansion, proves (5.48). Next, we use (5.12) again,
and we note that the following relation is easily verified using Lemma 3.33:

(5.51)

where D ∈ Λ\ΛD(d)Λ and B ∈ B₀(D)/mod D. As a result we find that the sum on
the right in (5.49) is equal to

Σ_{d₁d₂d₃=m} Σ_{D, B, S} ( Γ₀ ( d₃d₂D*  B + SD ; 0  d₁D ) ),

where D ∈ Λ\ΛD(d₂)Λ, B ∈ B₀(D)/mod D, S ∈ S₂(Z)/mod d₁. Comparing this
expression with the above expansion for T²(m), we see that (5.49) will be proved if we
show that the matrix B + SD runs through the set B₀(D)/mod d₁D if B runs through
the set B₀(D)/mod D and S runs through the set S₂(Z)/mod d₁. The details of this
verification, which is easily carried out using Lemma 3.33, will be left to the reader. □

PROBLEM 5.15. Prove the following formal identities:

Σ_m T¹(m) m^{-s} = ( Σ_d Π₋¹(d) d^{-s} ) ( Σ_d Π₊¹(d) d^{-s} )
  = Π_p (1 − Π₋¹(p)p^{-s})^{-1} · Π_p (1 − Π₊¹(p)p^{-s})^{-1}
  = Π_p { (1 − Π₋¹(p)p^{-s})(1 − Π₊¹(p)p^{-s}) }^{-1},

where m and d run through the positive integers prime to q, and p runs through the
primes not dividing q.

PROBLEM 5.16. (1) Show that

Π²(d) Π²(d′) = Π²(dd′), if (d, d′) = 1.

(2) Show that the following formal identity holds for p a prime:

Σ_{δ=0}^{∞} Π²(p^δ) v^δ = (1 − Π₊²(p)v) Q_p²(v)^{-1} (1 − Π₋²(p)v)(1 − p²Δ²(p)v²),

where Q_p²(v) is the polynomial in Proposition 3.35.

[Hint: Use Proposition 3.35.]

§6. Hecke polynomials for the symplectic group


In this section we develop a technique that enables us to reduce several questions
in the theory of Hecke rings and Hecke operators for the symplectic group to the
analogous questions for the general linear group. Because of the multiplicativity of
these theories, it suffices to limit ourselves to the case of local rings for a fixed prime
p. The setting in which the action will be played out is the local Hecke ring L₀,p of the
triangular subgroup Γ₀ⁿ of the modular group. The rings L_pⁿ, or the rings L_pⁿ(q) and
Ē_pⁿ(q, χ), will play the role of the local rings of Γⁿ or of Γ₀ⁿ(q) and Γ̂₀ⁿ(q), respectively.
The local Hecke ring of the group Λⁿ will appear in one of the two dual extensions
C₋,pⁿ or C₊,pⁿ.

In this section we suppose that n ∈ N, p is a fixed prime number, and (p, q) = 1.
When we consider Hecke rings for the symplectic covering group, we further suppose
that q is divisible by 4. The indices n and p will often be omitted.
1. Negative powers of Frobenius elements. Because of their number-theoretic
associations, the elements

Π₋ = Π₋ⁿ(p) and Π₊ = Π₊ⁿ(p)

are called the Frobenius elements of the ring L₀ = L₀,p. In §5 we defined the integral
domains C± = C±,pⁿ, which can also be characterized by the conditions

(6.1)  C₋ = { X ∈ L₀; Π₋X = XΠ₋ },  C₊ = { X ∈ L₀; Π₊X = XΠ₊ }.

In fact, the left sides are contained in the right sides by definition, and the reverse
inclusions follow from the local variant of Proposition 5.5, the proof of which we leave
to the reader. From (5.19) it follows that the rings C₋ and C₊ are dual to one another
relative to the anti-automorphism *:

(6.2)  C₋* = C₊,  C₊* = C₋.
We now show that any element of L₀ can be projected onto either C₋ or C₊. Let

(6.3)  X = Σ_i a_i ( Γ₀ ( r_i D_i*  B_i ; 0  D_i ) )

be a left coset decomposition of a nonzero element X of L₀ without cancellation (i.e.,
the left cosets are pairwise distinct and all of the coefficients are nonzero). Then all of
the elementary divisors of the matrices D_i and all of the denominators of entries in the
matrices B_i are powers of p. Hence, there exist nonnegative δ ∈ Z such that p^δ B_i D_i^{-1}
are integer matrices in all of the terms of the decomposition. We call the smallest such
δ the left exponent of X, denoted δ₋(X). The number δ₊(X) = δ₋(X*) will be called
the right exponent of X.
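For a single left coset the left exponent is just the largest power of p occurring in the denominators of B D^{-1}. A sketch (helper names ours), with the matrix B D^{-1} given by its rational entries:

```python
from fractions import Fraction

def vp(x, p):
    """p-adic valuation of a nonzero rational number."""
    v, num, den = 0, x.numerator, x.denominator
    while num % p == 0:
        num //= p; v += 1
    while den % p == 0:
        den //= p; v -= 1
    return v

def left_exponent(BDinv, p):
    """Smallest delta >= 0 such that p^delta * (B D^{-1}) is integral."""
    vals = [vp(x, p) for row in BDinv for x in row if x != 0]
    return max(0, -min(vals, default=0))

# a term whose B D^{-1} has denominators 4 = 2^2 and 2 (here p = 2)
BDinv = [[Fraction(3, 4), Fraction(1, 2)],
         [Fraction(1, 2), Fraction(5)]]
print(left_exponent(BDinv, 2))   # 2: multiplying by 2^2 clears all denominators
```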
PROPOSITION 6.1. Let X ∈ L₀, and let d be an integer with d ≥ δ₋(X) (resp.
d ≥ δ₊(X)). Then Π₋^d X ∈ C₋ (resp. X Π₊^d ∈ C₊).

PROOF. From the definitions it follows that, under the conditions of the proposi-
tion, δ₋(Π₋^d X) = 0 (resp. δ₊(X Π₊^d) = 0). Hence, the proposition follows from the
next lemma.

LEMMA 6.2. One has:

C₋ = { X ∈ L₀; δ₋(X) = 0 },  C₊ = { X ∈ L₀; δ₊(X) = 0 }.

PROOF. By (6.2) and the definition of the left and right exponents it suffices to
verify the first equality. Proposition 5.5 and the decomposition (5.22) imply that
δ₋(X) = 0 for any X ∈ C₋. Conversely, let X be an element of L₀ written as in (6.3)
with no cancellation. Suppose that δ₋(X) = 0. Then for all i the matrix V_i = B_iD_i^{-1}
is a symmetric integer matrix. Hence,

X = Σ_i a_i ( Γ₀ ( r_i D_i*  r_i D_i* S + B_i ; 0  D_i ) ),

where S is an arbitrary matrix in S_n(Z). If we again use the fact that δ₋(X) = 0,
we conclude that the matrices r_i D_i* S D_i^{-1} are all integral. Thus, X is a linear
combination of double cosets of elements of the form M_i = U(r_i, D_i) that satisfy
(5.25), and hence also the condition d_n(D_i)² | r_i. Then X ∈ C₋ by Proposition 5.5. □

The next lemma gives an easy and practical method for finding exponents d that
satisfy Proposition 6.1 for elements in the subrings L = L_pⁿ and E = Ē_pⁿ(q, χ) of the
ring L₀ in the case when the left coset decomposition is not known, but the image
under Ω = Ω_pⁿ is known.

LEMMA 6.3. For any X ∈ L₀ and t ∈ Z one has

δ₋(Δᵗ X) = δ₋(X),  δ₊(Δᵗ X) = δ₊(X),

where Δ = Δⁿ(p) is the element (3.48). If X ∈ L = L_pⁿ or X ∈ E = Ē_pⁿ(q, χ) (the
integral subrings of 𝐋 and 𝐄), then

(6.4)  δ₋(X) = δ₊(X) ≤ deg_{x₀} Ω(X),

where the expression on the right is the degree in x₀ of the polynomial Ω(X).

PROOF. The first part follows immediately from the definitions. If X ∈ L or
X ∈ E, then X* = X by Proposition 5.3, and hence δ₊(X) = δ₋(X*) = δ₋(X). By
the definition of the map Ω and Lemma 4.19, we have

(6.5)  δ₋(X_{(a)}) ≤ Σ_{i=0}^{n} 2a_i = deg_{x₀} Ω(X_{(a)}),

where X_{(a)} = Π_{i=0}^{n} T_i(p²)^{a_i} with a_i ∈ Z and a_i ≥ 0. According to (4.112) and
(4.113), each X ∈ E can be written in the form X = Σ_{(a)} α_{(a)} X_{(a)}, where all of the
α_{(a)} are nonzero and the (a) are pairwise distinct. Thus, by Theorem 4.21(1), the
polynomials Ω(X_{(a)}) are linearly independent over Q. From this and (6.5) we obtain
(6.4) for δ₋(X). The proof of the same inequality for X ∈ L is similar; one uses
Theorems 3.23 and 3.30 and Lemma 3.32. □

Elements of the form Π₋^d X ∈ C̄₋ and X Π₊^d ∈ C̄₊ (where C̄± = C± ⊗_Q C) will be
called, respectively, the left and right projections of X ∈ L₀. Since the elements in C̄±
have simple left coset decompositions (see Lemma 5.6), computation in these subrings
is much simpler than in the full ring L₀. Hence, it is natural to try to reduce actions on
various elements of L₀ to actions on their projections. In this connection the problem
arises of recovering a given X ∈ L₀ from its left or right projection. In general, this
cannot be done, since Π₋ and Π₊ are left and right divisors of zero, respectively, in
L₀. However, it turns out that in several important cases (for example, when X ∈ L
or X ∈ E) X can be uniquely recovered if one knows either of its projections. The
possibility of recovering elements of L and E is based on the remarkable fact that Π₋
and Π₊ are in some sense algebraic over the rings L and E. More precisely, we have
the following

PROPOSITION 6.4. The following relations hold in the ring L₀ = L₀,pⁿ:

(6.6)  Σ_{i=0}^{m} (−1)^i Π₋^i q_{m−i} = 0,  Σ_{i=0}^{m} (−1)^i q_{m−i} Π₊^i = 0,

(6.7)  Σ_{i=0}^{m} (−1)^i Π̃₋^i q̃_{m−i} = 0,  Σ_{i=0}^{m} (−1)^i q̃_{m−i} Π̃₊^i = 0,

where m = 2ⁿ, q_i = q_i(p) are the elements (3.77) of the ring L = L_pⁿ, and q̃_i = q̃_i(p)
are elements of E = Ē_pⁿ(q, χ) such that

(6.8)

where q_i(x₀, ..., x_n) are the coefficients of the polynomial (3.76).

PROOF. From (5.9) and (5.16) it follows that the anti-automorphism * transforms
the first equalities in (6.6) and (6.7) into the second ones, and conversely. Hence, it
suffices to prove the first equalities. Let Y be the left side of (6.6), and let Ỹ be the left
side of (6.7). Using (3.79) and the analogous relations for the elements q̃_i ∈ E, whose
existence and uniqueness are guaranteed by Theorem 4.21, we can rewrite Y and Ỹ in
the form

Y = Σ_{i=0}^{m} (−1)^i (p^{(n)}Δ)^{m/2−i} Π₋^i q_i,  Ỹ = Σ_{i=0}^{m} (−1)^i (p^{(n)}Δ)^{m−2i} Π̃₋^i q̃_i.

By definition, Ω(q_i) and Ω(q̃_i) are polynomials in x₀, x₁, ..., x_n, and these polynomials
have degree i and 2i, respectively, in the variable x₀. Thus, from Theorems 3.30(1) and
4.21(1) and Lemma 6.3 we conclude that q_i ∈ L, δ₋(q_i) ≤ i, and q̃_i ∈ E, δ₋(q̃_i) ≤ 2i.
Then, by Proposition 6.1, each of the products Π₋^i q_i and Π̃₋^i q̃_i is contained in C̄₋.
Since obviously Δ ∈ C₋, this means that Y and Ỹ also lie in C̄₋. On the other hand,
by Lemma 3.34 we have Ω(Π₋) = Ω(Π₀ⁿ(p)) = x₀, so that, if we use (3.76) and the
definition of q_i and q̃_i, we obtain

Ω(Y) = Σ_{i=0}^{m} (−1)^i x₀^i q_{m−i}(x₀, ..., x_n) = x₀^m q(x₀, ..., x_n; x₀^{-1}) = 0,

and similarly Ω(Ỹ) = 0. Hence, by Theorem 5.9, we have Y = Ỹ = 0. □

If we multiply (6.6) and (6.7) by Π±^d and Π̃±^d with d ∈ N, we obtain the
relations

(6.9)  Σ_{i=0}^{m} (−1)^i Π₋^{d+i} q_{m−i} = 0,  Σ_{i=0}^{m} (−1)^i q_{m−i} Π₊^{d+i} = 0,
       Σ_{i=0}^{m} (−1)^i Π̃₋^{d+i} q̃_{m−i} = 0,  Σ_{i=0}^{m} (−1)^i q̃_{m−i} Π̃₊^{d+i} = 0,

which may be regarded as recursive relations for the sequences of nonnegative powers
of Π± and Π̃±. Since q₀ = q̃₀ = 1, the relations (6.9) give high powers of these two
elements as linear combinations of smaller powers with (right or left) coefficients in the
rings L and E. On the other hand, by (3.80), (6.8), and Lemma 3.27, the coefficients
q_m and q̃_m are invertible in L₀. Hence, the relations (6.9) can also be used to determine
small powers of the Frobenius elements in terms of higher powers. Namely, if Π±^δ and
Π̃±^δ for δ > d have already been determined, then we set

(6.10)  Π₋^d = q_m^{-1} ( Σ_{i=1}^{m} (−1)^{i+1} Π₋^{i+d} q_{m−i} ),

(6.11)  Π̃₊^d = q̃_m^{-1} ( Σ_{i=1}^{m} (−1)^{i+1} q̃_{m−i} Π̃₊^{i+d} ),

and similarly for Π₊^d and Π̃₋^d.

are .not even the powers of II± 1 and (II~1J- 1 , since, for example, II= 2 =f. (II= 1) 2 and
II:+: 2 =f. (II+ 1) 2 • Nevertheless, for brevity we shall sometimes speak of negative powers
of the Frobenius elements. Note that, if we use (5.9), (5.16), and induction on d, then
from (6.10) and (6.11) we find that the negative powers of the Frobenius elements,
together with the positive powers, are dual with respect to the anti-automorphism *=

{II~)* = II1, (II1)* =II~ for all d E Z,


(6.12)
((II:Jd)* = (II~)d' ((II~)d)* = (II:f for all d E z.

We consider the following subspaces of L₀:

(6.13)  O₋ = O₋,pⁿ = C₋,pⁿ · L_pⁿ = { Σ_α X_α T_α; X_α ∈ C₋,pⁿ, T_α ∈ L_pⁿ },
        O₊ = O₊,pⁿ = L_pⁿ · C₊,pⁿ = { Σ_α T_α Y_α; T_α ∈ L_pⁿ, Y_α ∈ C₊,pⁿ },

and the similarly defined subspaces

(6.14)  Ō₋ = Ō₋,pⁿ = C̄₋,pⁿ · Ē_pⁿ(q, χ),  Ō₊ = Ō₊,pⁿ = Ē_pⁿ(q, χ) · C̄₊,pⁿ.

According to (5.9) and (6.2), these spaces are dual with respect to *:

(6.15)  O₋* = O₊, O₊* = O₋  and  Ō₋* = Ō₊, Ō₊* = Ō₋.


From the definition of Π±^d and Π̃±^d one can show by induction on d that

(6.16)  Π₋^d ∈ O₋, Π₊^d ∈ O₊, Π̃₋^d ∈ Ō₋, Π̃₊^d ∈ Ō₊  for all d ∈ Z.

THEOREM 6.5. For δ ∈ N let the elements Π±^{−δ} ∈ O±,p and Π̃±^{−δ} ∈ Ō±,p be
defined by the recursive relations (6.10) and (6.11), respectively. Then:

(1) Every element in O±,p (respectively, every element in Ō±,p) satisfies the relations

(6.17)  X = Π₋^δ X Π₋^{−δ}  for all X ∈ O₋,pⁿ and δ ≥ δ₋(X),

(6.18)  X = Π₊^{−δ} X Π₊^δ  for all X ∈ O₊,pⁿ and δ ≥ δ₊(X)

(respectively,

(6.19)  X̃ = Π̃₋^δ X̃ Π̃₋^{−δ}  for all X̃ ∈ Ō₋,pⁿ and 2δ ≥ δ₋(X̃),

(6.20)  X̃ = Π̃₊^{−δ} X̃ Π̃₊^δ  for all X̃ ∈ Ō₊,pⁿ and 2δ ≥ δ₊(X̃)).

Conversely, every X or X̃ ∈ L₀ that satisfies any of the relations (6.17), (6.18) or (6.19),
(6.20) is contained in the corresponding space O₋,pⁿ, O₊,pⁿ or Ō₋,pⁿ, Ō₊,pⁿ.

(2) The restrictions of the maps Φ_pⁿ and Ω_pⁿ (see §§3.3 and 4.3) to the spaces O±,p
and Ō±,p are all monomorphisms.

We first prove a lemma.
We first prove a lemma.

LEMMA 6.6. For any T ∈ L = L_pⁿ (respectively, for any T̃ ∈ E = Ē_pⁿ(q, χ)) one has
the relations

(6.21)  Π₋^δ T Π₋^d = Π₋^{δ+d} T  for all δ ≥ δ₋(T) and d ∈ Z,

(6.22)  Π₊^d T Π₊^δ = T Π₊^{d+δ}  for all δ ≥ δ₊(T) and d ∈ Z

(respectively,

(6.23)  Π̃₋^δ T̃ Π̃₋^d = Π̃₋^{δ+d} T̃  for all 2δ ≥ δ₋(T̃) and d ∈ Z,

(6.24)  Π̃₊^d T̃ Π̃₊^δ = T̃ Π̃₊^{d+δ}  for all 2δ ≥ δ₊(T̃) and d ∈ Z).

PROOF. The anti-automorphism * takes (6.21) to (6.22) and (6.23) to (6.24).
Hence, it suffices to prove, say, the former relations. We use descending induction on
d. If d ≥ 0, then Π₋^d and Π̃₋^d lie in C̄₋, and the relations (6.21) and (6.23) follow
from Proposition 6.1 and the commutativity of the ring C̄₋. Suppose that (6.21) and
(6.23) have been proved for all δ ≥ δ₋(T), 2δ ≥ δ₋(T̃), and all d′ > d. Then, using
(6.10)-(6.11) and the commutativity of L and E, we obtain

Π₋^δ T Π₋^d = −q_m^{-1} ( Σ_{i=1}^{m} (−1)^i Π₋^δ T Π₋^{i+d} q_{m−i} )
            = q_m^{-1} ( Σ_{i=1}^{m} (−1)^{i+1} Π₋^{δ+i+d} q_{m−i} T ) = Π₋^{δ+d} T,

and in exactly the same way Π̃₋^δ T̃ Π̃₋^d = Π̃₋^{δ+d} T̃. □

PROOF OF THE THEOREM. By the duality relations (6.15) and (6.12), it suffices to
prove the first part of the theorem for, say, O₋ and Ō₋. Let X ∈ O₋, and let X̃ ∈ Ō₋.
Then, by definition,

X = Σ_α Y_α T_α,  X̃ = Σ_α Ỹ_α T̃_α,

where Y_α ∈ C₋, T_α ∈ L and Ỹ_α ∈ C̄₋, T̃_α ∈ E. If δ′ is no smaller than any of the
exponents δ₋(T_α), or 2δ′ is no smaller than any of the exponents δ₋(T̃_α), then, using
(6.21) or (6.23), respectively, we obtain

Π₋^{δ′} X Π₋^{−δ′} = Σ_α Y_α Π₋^{δ′} T_α Π₋^{−δ′} = Σ_α Y_α T_α = X,
Π̃₋^{δ′} X̃ Π̃₋^{−δ′} = Σ_α Ỹ_α Π̃₋^{δ′} T̃_α Π̃₋^{−δ′} = Σ_α Ỹ_α T̃_α = X̃.

Now suppose that δ ≥ δ₋(X). Then, by what was already proved and by Proposition
6.1, we again obtain X = Π₋^δ X Π₋^{−δ}, where we use (6.21) with T = 1 in the last step.
Similarly, for 2δ ≥ δ₋(X̃) we use (6.23) to obtain X̃ = Π̃₋^δ X̃ Π̃₋^{−δ}.

Conversely, if we have elements X and X̃ ∈ L₀ written in the form X = Π₋^δ X Π₋^{−δ}
for some δ ≥ δ₋(X) and X̃ = Π̃₋^δ X̃ Π̃₋^{−δ} for some 2δ ≥ δ₋(X̃), then from
Proposition 6.1 and the inclusions (6.16) it follows that X ∈ O₋ and X̃ ∈ Ō₋. The
first part of the theorem is proved.

We shall prove the second part, for example, for Ω = Ω_pⁿ. First note that we have
the formulas

(6.25)  Ω(Π₋^d) = x₀^d,  Ω(Π₊^d) = (x₀x₁⋯x_n)^d  (d ∈ Z),
        Ω(Π̃₋^d) = x₀^{2d},  Ω(Π̃₊^d) = (x₀x₁⋯x_n)^{2d}  (d ∈ Z),

where Π±^d and Π̃±^d for d < 0 are defined by the recursive relations (6.10) and (6.11),
respectively. Namely, these formulas were proved for nonnegative d in Lemma 3.34,
while for d = −δ < 0 we have by Lemma 6.6:

Π₋^δ Π₋^{−δ} = Π₋^{−δ} Π₋^δ = 1,  Π̃₋^δ Π̃₋^{−δ} = Π̃₋^{−δ} Π̃₋^δ = 1,

so that Ω(Π₋^{−δ}) = Ω(Π₋^δ)^{−1} = x₀^{−δ}, and similarly for Π₊^{−δ} and Π̃±^{−δ}. Now suppose
that X ∈ O₋ and Ω(X) = 0. We take δ ≥ δ₋(X). Then by (6.17) we have

0 = Ω(X) = Ω(Π₋^δ X Π₋^{−δ}) = Ω(Π₋^δ X) x₀^{−δ},

so that Ω(Π₋^δ X) = 0. By Theorem 5.9, the last equality implies that Π₋^δ X = 0, and
then also X = (Π₋^δ X) Π₋^{−δ} = 0. The cases of O₊ and Ō± are similar. □

PROBLEM 6.7. (1) Show that the map

X → (p^{(n)} Δ)^{−d} Π₋^d X Π₊^d,

where d ≥ min(δ₋(X), δ₊(X)) and Δ = Δⁿ(p), does not depend on the choice of d,
commutes with the anti-automorphism *, and maps the entire space L₀ = L₀,pⁿ onto
the subspace

C₋ · C₊ = { Σ_α X_α Y_α; X_α ∈ C₋, Y_α ∈ C₊ }.

Show that this subspace is the set of all elements in L₀ that are invariant relative
to the above map. Then deduce that the restrictions to C₋ · C₊ of Φ and Ω are
monomorphisms.

(2) Show that

T(p), Π₋T_i(p²), T_i(p²)Π₊ ∈ C₋ · C₊,

where T(p), T_i(p²) are the images in L₀ of the elements (3.42), 0 ≤ i ≤ n. Then
deduce that, if T ∈ L and the image Ω(T) is a polynomial in x₀, x₁, ..., x_n having
degree δ in x₀, then Π₋^a T Π₊^b ∈ C₋ · C₊ for any a, b ≥ 0 with a + b ≥ δ − 1.

[Hint: Use the first assertion and (3.58), (3.61). For the second assertion use the
fact that T is a polynomial in T(p), T_i(p²).]

(3) Show that Π±^{−1} ∈ C₋ · C₊, and then deduce the relations Π₋^{−1} = (p^{(n)}Δ)^{−1}Π₊,
Π₊^{−1} = (p^{(n)}Δ)^{−1}Π₋.

[Hint: Use the first two parts of the problem and the definition of negative powers.]

2. Factorization of Hecke polynomials. By "Hecke polynomials" we mean polynomials over the local Hecke rings of the symplectic group or the symplectic covering
group. These polynomials arise naturally when summing various generating series (for-
mal local zeta-functions) over these rings, and appear as denominators of the resulting
fractions (see, for example, Proposition 3.35). Although usually irreducible over the
original local ring, these polynomials often factor when one extends that local ring to
a local Hecke ring of the triangular subgroup of the symplectic group. These factor-
izations enable one to express both the coefficients of the polynomials themselves, and
also the coefficients of the corresponding generating series, in terms of a simpler type
of element. This turns out to be essential both for computations in the local rings and
for the study of their representations on modular forms. Here we examine the simplest
scheme for factoring Hecke polynomials, and give some important examples.
THEOREM 6.8. Let
\[
P(v) = \sum_{i=0}^{N} P_iv^i
\]
be a polynomial with coefficients in the subring $L = L_p^n$ (respectively the subring $\widehat E = \widehat E_p^n(q,\chi)$) of the ring $L_0 = L_{0,p}^n$. Suppose that the polynomial
\[
\Omega(P)(v) = \sum_{i=0}^{N}\Omega(P_i)v^i,
\]
where $\Omega = \Omega_n$ is the spherical map (3.49), factors into a product of two polynomials with coefficients in $\mathbf{C}[x_0^{\pm1},\dots,x_n^{\pm1}]$:
\[
\Omega(P)(v) = F(v)\,G(v),
\]
where
\[
F(v) = \sum_{i=0}^{N_1} f_iv^i,\qquad G(v) = \sum_{j=0}^{N_2} g_jv^j,\qquad f_i,\,g_j \in \mathbf{C}[x_0^{\pm1},\dots,x_n^{\pm1}].
\]


(1) If all of the coefficients of the first polynomial belong to the image $\Omega(C_-)$ of the ring $C_- = C_{-,p}^n$:
\[
f_i = \Omega(f_i'),\qquad f_i' \in C_-,\qquad\text{and}\qquad f_0 = 1,
\]
then all of the coefficients of the second polynomial belong to the image $\Omega(O_-)$ of $O_- = O_{-,p}^n$ (respectively the image $\Omega(\widehat O_-)$ of $\widehat O_- = \widehat O_{-,p}^n$):
\[
g_j = \Omega(g_j'),\qquad g_j' \in O_-\ (\text{respectively } g_j' \in \widehat O_-),
\]
and over $L_0$ one has the factorization
\[
(6.26)\qquad P(v) = \Bigl(\sum_{i=0}^{N_1} f_i'v^i\Bigr)\Bigl(\sum_{j=0}^{N_2} g_j'v^j\Bigr).
\]
(2) If, on the other hand, all of the coefficients of the second polynomial belong to the image $\Omega(C_+)$ of $C_+ = C_{+,p}^n$:
\[
g_j = \Omega(g_j'),\qquad g_j' \in C_+,\qquad\text{and}\qquad g_0 = 1,
\]
200 3. HECKE RINGS

then all of the coefficients of the first polynomial belong to the image $\Omega(O_+)$ of $O_+ = O_{+,p}^n$ (respectively the image $\Omega(\widehat O_+)$ of $\widehat O_+ = \widehat O_{+,p}^n$):
\[
f_i = \Omega(f_i'),\qquad f_i' \in O_+\ (\text{respectively } f_i' \in \widehat O_+),
\]
and one again has the factorization (6.26).
PROOF. The two cases are dual to one another with respect to the anti-automorphism $*$, and so the proofs are analogous. We shall treat, say, the first case. Since the restriction of $\Omega$ to $C_-$ is a monomorphism (by Theorem 5.9) and $f_0 = 1$, it follows that $f_0' = 1$. Then the polynomial $\sum_i f_i'v^i$ is invertible in the ring of formal power series over the commutative ring $C_-$, i.e., there exist $a_i' \in C_-$ such that
\[
\Bigl(\sum_{i\geqslant 0} a_i'v^i\Bigr)\Bigl(\sum_i f_i'v^i\Bigr) = 1.
\]
We consider the formal power series over $L_0$
\[
\Bigl(\sum_{i\geqslant 0} a_i'v^i\Bigr)P(v) = \sum_{j\geqslant 0} g_j'v^j.
\]
By construction, the coefficients $g_j'$ of this series lie in the space $O_- = C_-\cdot L$ (resp. in $\widehat O_- = C_-\cdot\widehat E$). On the other hand, if we replace these coefficients by their images under $\Omega$, we obtain the series
\[
\Bigl(\sum_i \Omega(a_i')v^i\Bigr)\Bigl(\sum_i f_iv^i\Bigr)\Bigl(\sum_j g_jv^j\Bigr) = \sum_j g_jv^j,
\]
and hence $\Omega(g_j') = g_j$ for $j = 0,1,\dots,N_2$, and $\Omega(g_j') = 0$ for $j > N_2$. By Theorem 6.5, the last equality implies that $g_j' = 0$ for $j > N_2$. Thus, we obtain the factorization (6.26) for $P(v)$, since the elements $g_j' \in O_-$ (resp. $g_j' \in \widehat O_-$) are uniquely determined by their images under $\Omega$. $\square$
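The key step in this proof, inverting $F(v)$ as a formal power series and multiplying the inverse into $P(v)$ to recover the cofactor, is easy to see in action numerically. A minimal sketch in Python, with ordinary integers standing in for the Hecke ring elements (the particular coefficient values below are hypothetical):

```python
def inv_series(f, n_terms):
    """Coefficients of 1/f(v) as a formal power series; requires f[0] = 1,
    matching the hypothesis f_0 = 1 in Theorem 6.8."""
    a = [1]
    for j in range(1, n_terms):
        # coefficient of v^j in (sum a_i v^i) * f(v) must vanish
        a.append(-sum(a[j - i] * f[i]
                      for i in range(1, min(j, len(f) - 1) + 1)))
    return a

def mul_series(a, b, n_terms):
    """First n_terms coefficients of the product of two coefficient lists."""
    return [sum(a[i] * b[j - i]
                for i in range(max(0, j - len(b) + 1), min(j, len(a) - 1) + 1))
            for j in range(n_terms)]

F = [1, 2, 5]        # factor with F(0) = 1
G = [1, -3, 4, 7]    # cofactor to be recovered
P = mul_series(F, G, len(F) + len(G) - 1)   # P = F * G

g = mul_series(inv_series(F, 12), P, 12)
print(g)  # -> [1, -3, 4, 7, 0, 0, 0, 0, 0, 0, 0, 0]
```

Exactly as in the proof, the series $(\sum_i a_iv^i)P(v)$ reproduces the coefficients of $G$ and then vanishes identically from degree $N_2+1$ on.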

REMARK. In practice, when looking for a preimage in $O_-$ of some element $g \in \Omega(O_-)$, one can take an arbitrary preimage $g_1$ of this element in $L_0$ and then set
\[
g' = \Pi_-^d g_1 \Pi_-^{-d},\qquad\text{where } d \geqslant \delta_-(g_1).
\]
Then by (6.16) and Proposition 6.1 one has $g' \in O_-$; and by (6.14) one has $\Omega(g') = x_0^d\,\Omega(g_1)\,x_0^{-d} = \Omega(g_1)$. One can proceed similarly to find preimages in $O_+$ and $\widehat O_\pm$. It is usually not hard to find preimages in $L_0$ or $\widehat L_0$. Thus, the problem of finding preimages in $O_\pm$ and $\widehat O_\pm$ reduces to the computation of suitable negative powers of the Frobenius elements.
We now consider some examples of applications of Theorem 6.8. In §3.3 we defined the polynomial $r(x_1,\dots,x_n;v)$, all of whose coefficients are invariant relative to any transformation in the group $W_n$. By Theorems 3.30 and 4.21, there exist uniquely determined polynomials
\[
(6.27)\qquad R_p^n(v) = \sum_{\sigma=0}^{2n}(-1)^\sigma r_\sigma^n(p)v^\sigma,\qquad \widehat R_p^n(v) = \sum_{\sigma=0}^{2n}(-1)^\sigma \widehat r_\sigma^n(p)v^\sigma,
\]
with coefficients in $L_p^n$ and $\widehat E_p^n$, respectively, such that
\[
(6.28)\qquad \Omega(R_p^n)(v) = \sum_{\sigma=0}^{2n}(-1)^\sigma\Omega(r_\sigma^n(p))v^\sigma = r(x_1,\dots,x_n;v),\qquad \Omega(\widehat R_p^n)(v) = \sum_{\sigma=0}^{2n}(-1)^\sigma\Omega(\widehat r_\sigma^n(p))v^\sigma = r(x_1,\dots,x_n;v).
\]

PROPOSITION 6.9. The polynomials $R_p^n(v)$ and $\widehat R_p^n(v)$ factor as follows over the ring $L_{0,p}^n$:
\[
(6.29)\qquad R_p^n(v) = \Bigl(\sum_{i=0}^{n}(-1)^i(p^{\langle n\rangle}\Delta)^{-1}\Pi_-\Pi_{n-i}v^i\Bigr)\Bigl(\sum_{i=0}^{n}(-1)^i\Pi_-\Pi_i\Pi_-^{-2}v^i\Bigr),
\]
\[
(6.30)\qquad R_p^n(v) = \Bigl(\sum_{i=0}^{n}(-1)^i\Pi_+^{-2}\Pi_{n-i}\Pi_+v^i\Bigr)\Bigl(\sum_{i=0}^{n}(-1)^i(p^{\langle n\rangle}\Delta)^{-1}\Pi_i\Pi_+v^i\Bigr),
\]
and
\[
(6.31)\qquad \widehat R_p^n(v) = \Bigl(\sum_{i=0}^{n}(-1)^i(p^{\langle n\rangle}\Delta)^{-1}\Pi_-\Pi_{n-i}v^i\Bigr)\Bigl(\sum_{i=0}^{n}(-1)^i\Pi_-\Pi_i(\widehat\Pi_-^2)^{-1}v^i\Bigr),
\]
\[
(6.32)\qquad \widehat R_p^n(v) = \Bigl(\sum_{i=0}^{n}(-1)^i(\widehat\Pi_+^2)^{-1}\Pi_{n-i}\Pi_+v^i\Bigr)\Bigl(\sum_{i=0}^{n}(-1)^i(p^{\langle n\rangle}\Delta)^{-1}\Pi_i\Pi_+v^i\Bigr),
\]
where $\Delta = \Delta_n(p)$ is the element (3.48); $\Pi_- = \Pi_-^n(p) = \Pi_0^n(p)$, $\Pi_+ = \Pi_+^n(p) = \Pi_n^n(p)$, and $\Pi_a = \Pi_a^n(p)$ are the elements (3.59); and $\Pi_\pm^{-2}$ and $(\widehat\Pi_\pm^2)^{-1}$ are determined from the recursive relations (6.10) and (6.11), respectively.
PROOF. We first show that
\[
(6.33)\qquad \Pi_a^n(p)^* = \Pi_{n-a}^n(p)\qquad\text{for } a = 0,1,\dots,n.
\]
In fact, if $D_a$ is the matrix (2.28), then obviously the matrices $pD_a^{-1}$ and $D_{n-a}$ coincide up to a permutation of the diagonal entries, and hence from the definition of the anti-automorphism $*$ and the relations (3.63) we obtain
\[
\Pi_a^n(p)^* = (pM_a^{-1})_{\Gamma_0} = \left(\begin{pmatrix} pD_{n-a}^{-1} & 0\\ 0 & D_{n-a}\end{pmatrix}\right)_{\Gamma_0} = \Pi_{n-a}^n(p).
\]
From these equalities and (6.12) it follows that the anti-automorphism $*$ takes the factorization (6.29) to (6.30) and (6.31) to (6.32), and conversely. Hence, it suffices to prove, say, (6.29) and (6.31). By definition, we have
\[
\Omega(R_p^n)(v) = \prod_{i=1}^{n}(1 - x_i^{-1}v)\prod_{i=1}^{n}(1 - x_iv).
\]

From Lemma 3.34 it follows that
\[
\Omega\bigl((p^{\langle n\rangle}\Delta)^{-1}\Pi_-\Pi_{n-i}\bigr) = s_i(x_1^{-1},\dots,x_n^{-1}).
\]
It follows from the definitions that $\delta_-(\Pi_i) = 1$. From Proposition 6.1 we then obtain the inclusions
\[
(6.34)\qquad \Pi_-\Pi_i,\ (p^{\langle n\rangle}\Delta)^{-1}\Pi_-\Pi_{n-i} \in C_-.
\]
Next, from Lemma 3.34 and (6.25) we find that
\[
(6.35)\qquad \Omega(\Pi_-\Pi_i\Pi_-^{-2}) = \Omega\bigl(\Pi_-\Pi_i(\widehat\Pi_-^2)^{-1}\bigr) = s_i(x_1,\dots,x_n),
\]
and the inclusions (6.16) and (6.34) imply that
\[
(6.36)\qquad \Pi_-\Pi_i\Pi_-^{-2} \in O_-,\qquad \Pi_-\Pi_i(\widehat\Pi_-^2)^{-1} \in \widehat O_-.
\]
The factorizations (6.29) and (6.31) follow from these relations and inclusions and from Theorem 6.8. $\square$

We now turn to the polynomial $Q(v) = Q_p^n(v)$ defined by the conditions (3.76)-(3.78). Since $\Omega(\Pi_-) = x_0$ and $\Omega(\Pi_+) = x_0x_1\cdots x_n$, the next proposition is an immediate consequence of Theorem 6.8.
PROPOSITION 6.10. One has the following factorizations over the ring $L_{0,p}^n$:
\[
Q_p^n(v) = (1 - \Pi_-v)\,Q_-(v) = Q_+(v)\,(1 - \Pi_+v),
\]
where $Q_-$ and $Q_+$ are polynomials of degree $2^n - 1$ with coefficients in $O_{-,p}^n$ and $O_{+,p}^n$, respectively.
In order to use the factorizations of the Hecke polynomials, one must be able to compute the coefficients of the factors in the form of linear combinations of $\Gamma_0$-left or double cosets. The rest of this section is devoted to these calculations for the polynomials $Q_p^n$ with $n = 1,2$ and the polynomials $R_p^n$ and $\widehat R_p^n$ with $n \in \mathbf{N}$.
PROBLEM 6.11. Let $F(v)$ be a polynomial of degree $N$ with coefficients in $C_{-,p}^n$ and $F(0) = 1$. Show that there exists a polynomial $G(v)$ of degree $\leqslant N(2^n - 1)$ with coefficients in $O_{-,p}^n$ and $G(0) = 1$, such that all of the coefficients of the polynomial $F(v)G(v)$ lie in the ring $L_p^n$. From this deduce that every $X \in C_{-,p}^n$ satisfies an equation of the form $\sum_{i=0}^{N} X^iT_i = 0$, where $T_i \in L_p^n$, $T_N = 1$, and $N \leqslant 2^n$. State and prove similar results for the ring $C_{+,p}^n$.
[Hint: In the polynomial $f(v) = \Omega(F)(v)$ that is obtained from $F$ by replacing its coefficients by their images under $\Omega$, all of the coefficients are symmetric in the variables $x_1,\dots,x_n$. Consequently, there exists a polynomial $g(v)$ of degree $\leqslant N(2^n - 1)$ over the ring $\mathbf{Q}[x_0^{\pm1},\dots,x_n^{\pm1}]$ such that all of the coefficients in the product $fg$ are invariant with respect to $W_n$. Hence, $fg = \Omega(P)(v)$, where $P$ is a polynomial of degree $\leqslant N\cdot 2^n$ over $L_p^n$. Apply Theorem 6.8 to $P$. To prove the second assertion, apply the first part to the polynomial $(1 - Xv)$.]
3. Symmetric factorization of the polynomials $Q_p^n(v)$ for $n = 1,2$. We obtain factorizations of $Q_p^n(v)$, $n = 1,2$, that are invariant with respect to the anti-automorphism $*$, and we compute the coefficients of the polynomial factors.

PROPOSITION 6.12. Over the ring $L_{0,p}^1$ one has the factorization
\[
Q_p^1(v) = (1 - \Pi_-v)(1 - \Pi_+v),
\]
where $\Pi_- = \Pi_-^1(p) = \Pi_0^1(p)$ and $\Pi_+ = \Pi_+^1(p) = \Pi_1^1(p)$.
PROOF. According to (3.58) and (5.14), for $n = 1$ we have
\[
(1 - \Pi_-v)(1 - \Pi_+v) = 1 - T(p)v + p\Delta_1(p)v^2.
\]
The last polynomial is equal to $Q_p^1(v)$ by Proposition 3.35. $\square$
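On the level of spherical images this factorization is an identity of ordinary polynomials: as stated before Proposition 6.10, $\Omega(\Pi_-) = x_0$ and $\Omega(\Pi_+) = x_0x_1\cdots x_n$, so for $n = 1$ the two sides have images $(1 - x_0v)(1 - x_0x_1v)$ and $1 - x_0(1+x_1)v + x_0^2x_1v^2$. A quick numerical spot-check of this polynomial identity (the substituted values for $x_0$, $x_1$ are arbitrary):

```python
def lhs_coeffs(x0, x1):
    """Coefficients in v of (1 - x0*v)(1 - x0*x1*v)."""
    return [1, -(x0 + x0 * x1), x0 * x0 * x1]

def rhs_coeffs(x0, x1):
    """Coefficients in v of 1 - x0*(1 + x1)*v + x0^2*x1*v^2."""
    return [1, -x0 * (1 + x1), x0**2 * x1]

for x0 in (2, 3, 5):
    for x1 in (7, 11, 13):
        assert lhs_coeffs(x0, x1) == rhs_coeffs(x0, x1)
print("n = 1 factorization identity verified on spherical images")
```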

PROPOSITION 6.13. Over the ring $L_{0,p}^2$ one has the factorization
\[
Q_p^2(v) = (1 - \Pi_-v)\bigl(1 - \Pi_1v + p(\Pi_{2,0}^{(1)} + \Pi_{2,0}^{(0)})v^2\bigr)(1 - \Pi_+v),
\]
where $\Pi_- = \Pi_-^2(p) = \Pi_0^2(p)$, $\Pi_+ = \Pi_+^2(p) = \Pi_2^2(p)$, and $\Pi_1 = \Pi_1^2(p)$ are the elements (3.59), and the $\Pi_{a,b}^{(r)}$ are the elements (3.62) for $n = 2$.
PROOF. We shall need the following formulas for the products of the elements of $L_{0,p}^2$ listed in the proposition:
\[
(6.37)\qquad \Pi_-\Pi_1 = p\,\Pi_{1,0}^{(0)},\qquad \Pi_1\Pi_+ = p\,\Pi_{1,1}^{(0)},
\]
\[
(6.38)\qquad \Pi_-\Pi_1\Pi_+ = p^3\Delta\,\Pi_1,
\]
where $\Delta = \Delta_2(p) = \Pi_{2,0}^{(0)}$, and
\[
(6.39)\qquad \Pi_-\Pi_{2,0}^{(1)} = (p^2-1)\Delta\,\Pi_-,\qquad \Pi_{2,0}^{(1)}\Pi_+ = (p^2-1)\Delta\,\Pi_+.
\]


To prove these relations, we note that in each case the expression on the right is an integer multiple of a certain double coset modulo $\Gamma_0 = \Gamma_0^2$. Hence, it suffices to verify that any left coset in the product on the left is contained in the double coset on the right, and then verify that the coefficient is correct, by comparing the number of left cosets on both sides or by applying $\Omega$ to both sides. For example, in the first relation in (6.39) we have $(p^2-1)\Bigl(\begin{pmatrix} p^2E & 0\\ 0 & pE\end{pmatrix}\Bigr)_{\Gamma_0}$ on the right. On the left, by definition, we have
\[
\Pi_-\Pi_{2,0}^{(1)} = \sum_{\substack{B \in S_2(\mathbf{Z})/\bmod p\\ r_p(B) = 1}}\Bigl(\Gamma_0\begin{pmatrix} p^2E & pB\\ 0 & pE\end{pmatrix}\Bigr).
\]
Since $pB \equiv 0 \pmod p$, it follows that any left coset in the last expansion is contained in $\Gamma_0\begin{pmatrix} p^2E & 0\\ 0 & pE\end{pmatrix}\Gamma_0$. Since $\Pi_-\Pi_{2,0}^{(1)}$ is an element of the Hecke ring, by the same token we have
\[
\Pi_-\Pi_{2,0}^{(1)} = a\Bigl(\begin{pmatrix} p^2E & 0\\ 0 & pE\end{pmatrix}\Bigr)_{\Gamma_0} = a\,\Delta\,\Pi_-.
\]
If we apply the map $\Omega = \Omega_2$ to both sides of this relation and use Lemma 3.34, we find that
\[
x_0\cdot l_p(1,2)\,p^{-3}x_0^2x_1x_2 = a\,p^{-3}x_0^2x_1x_2\cdot x_0,
\]
so that $a = l_p(1,2)$ is the number of symmetric $2\times2$ matrices of rank 1 over the field of $p$ elements. It is not hard to list all such matrices: $\begin{pmatrix} 0 & 0\\ 0 & \alpha\end{pmatrix}$ and $\alpha\begin{pmatrix} 1 & \beta\\ \beta & \beta^2\end{pmatrix}$, where

$\alpha \in \mathbf{F}_p^*$ and $\beta \in \mathbf{F}_p$. Hence, $a = p^2 - 1$, and the desired equality is proved. The details of the verification of the other relations will be left to the reader as an easy exercise in preparation for the much more difficult computations in the next subsection.
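The count $l_p(1,2) = p^2 - 1$ of rank-one symmetric $2\times2$ matrices used above is easy to confirm by exhaustive enumeration for small primes; a short sketch in Python:

```python
from itertools import product

def rank2_mod_p(a, b, d, p):
    """Rank over F_p of the symmetric matrix [[a, b], [b, d]]."""
    if (a * d - b * b) % p != 0:
        return 2
    return 1 if (a % p or b % p or d % p) else 0

def count_rank_one(p):
    # a symmetric 2x2 matrix is determined by the triple (a, b, d)
    return sum(1 for a, b, d in product(range(p), repeat=3)
               if rank2_mod_p(a, b, d, p) == 1)

for p in (2, 3, 5, 7, 11):
    assert count_rank_one(p) == p**2 - 1
print("l_p(1,2) = p^2 - 1 confirmed for p = 2, 3, 5, 7, 11")
```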
If we multiply the polynomials on the right in the claimed factorization, we obtain the polynomial
\[
1 - (\Pi_- + \Pi_1 + \Pi_+)v + (\Pi_-\Pi_1 + \Pi_-\Pi_+ + \Pi_1\Pi_+ + p\nu)v^2 - (p\Pi_-\nu + p\nu\Pi_+ + \Pi_-\Pi_1\Pi_+)v^3 + p\,\Pi_-\nu\Pi_+\,v^4,
\]
where $\nu = \Pi_{2,0}^{(1)} + \Pi_{2,0}^{(0)}$. Using the definitions of the elements in the above expression, the relations (6.37)-(6.39), and (5.14), we can rewrite this polynomial in the form
\[
1 - (\Pi_0 + \Pi_1 + \Pi_2)v + \bigl(p(\Pi_{1,0}^{(0)} + \Pi_{1,1}^{(0)} + \Pi_{2,0}^{(1)}) + (p^3+p)\Delta\bigr)v^2 - p^3\Delta(\Pi_0 + \Pi_1 + \Pi_2)v^3 + p^6\Delta^2v^4
\]
\[
= 1 - T(p)v + \bigl(pT_1(p^2) + (p^3+p)\Delta\bigr)v^2 - p^3\Delta T(p)v^3 + p^6\Delta^2v^4,
\]

where in the last step we used the expressions (3.58) and (3.61) for $T(p) = T^2(p)$ and $T_1(p^2) = T_1^2(p^2)$, respectively. According to the formula for $Q_p^2(v)$ in Proposition 3.35, to complete the proof it suffices to verify that
\[
(6.40)\qquad q^2(p) = p\,T_1(p^2) + (p^3+p)\Delta,
\]
where $q^2(p)$ is the element of $L_p^2$ determined by the condition
\[
\Omega(q^2(p)) = x_0^2x_1x_2\,(x_1 + x_2 + x_1^{-1} + x_2^{-1} + 2).
\]
Since the map $\Omega$ is a monomorphism on $L_p^2$, to do this it is enough to verify that the right side of (6.40) has the same $\Omega$-image as the left side. We compute the $\Omega$-image of the right side by replacing $T_1(p^2)$ by its expression in (3.61), and using Lemma 3.34 and then Lemma 2.21 to calculate the images of the elements $\Pi_{1,0}^{(0)}(p)$, $\Pi_{1,1}^{(0)}(p)$, and $\Pi_{2,0}^{(1)}(p)$. The reader can easily see that the result is the polynomial that gives the left side. $\square$

PROBLEM 6.14. For any $n \in \mathbf{N}$ and any prime $p$, prove the following factorization over the ring $L_{0,p}^n$:
\[
Q_p^n(v) = (1 - \Pi_-v)\,Q'(v)\,(1 - \Pi_+v),
\]
where $\Pi_- = \Pi_-^n(p)$, $\Pi_+ = \Pi_+^n(p)$, and $Q'$ is a polynomial of degree $2^n - 2$.
[Hint: Using Problem 6.7(2) and (3.79), show that all of the coefficients in the formal power series $(1 - \Pi_-v)^{-1}Q_p^n(v)(1 - \Pi_+v)^{-1}$ lie in $C_{-,p}^n\cdot C_{+,p}^n$, and then use the fact that $\Omega_n$ is a monomorphism on this space.]
4. Coefficients in the factorization of Rankin polynomials. Here we compute the $\Gamma_0^n$-left and double coset expansions of the coefficients in the factorizations (6.29)-(6.32) of the Rankin polynomials $R_p^n(v)$ and $\widehat R_p^n(v)$. To do this, we must find certain products of elements of the form $\Pi_a = \Pi_a^n(p)$, $\Pi_{a,b}^{(r)}$, and $\Pi_{a,b}^{(r)}(k)$ in the ring $L_{0,p}^n$ (see Lemmas 3.32 and 4.19).

PROPOSITION 6.15. The following relations hold in $L_0 = L_{0,p}^n$:
\[
(6.41)\qquad \Pi_b\Pi_+ = p^{\langle n-b\rangle}\Pi_{n-b,b}^{(0)} = p^{\langle n-b\rangle}\Pi_{n-b,b}^{(0)}(k),
\]
where $0\leqslant b\leqslant n$; and for $0\leqslant r\leqslant a\leqslant n$, $0\leqslant b\leqslant n$, and $r+b\leqslant a$,
\[
(6.42)\qquad \Pi_{a,0}^{(r)}\,\Pi_{n-b,b}^{(0)} = \sum_{\substack{\max(0,\,a+b-n)\leqslant t\leqslant b\\ 0\leqslant s\leqslant r}} c(a,b,r,t,s)\,\Delta\,\Pi_{a+b-2t,\,t}^{(r-s)},
\]
\[
(6.43)\qquad \Pi_{a,0}^{(r)}(k)\,\Pi_{n-b,b}^{(0)}(k) = \sum_{\substack{\max(0,\,a+b-n)\leqslant t\leqslant b\\ 0\leqslant s\leqslant r}} c(k;a,b,r,t,s)\,\Delta\,\Pi_{a+b-2t,\,t}^{(r-s)}(k),
\]
in which
\[
(6.44)\qquad c(k;a,b,r,t,s) = p^{t(r+t-a-b-s-1)+b(n+1)}\,l_p(k;\,0_{a+s-r-t};\,s,\,a+s-r)\,\frac{\varphi_{a+b+s-r-2t}(p)}{\varphi_{a+s-r-t}(p)\,\varphi_{b-t}(p)},
\]
where $\varphi_s$ is the function (2.29),
\[
(6.45)\qquad l_p(k;\,0_m;\,r,\,a) = \mathop{{\sum}'}_{T \in S_a(\mathbf{Z}/p\mathbf{Z}),\ r_p(T)=r}\chi(T)^{-k},
\]
$k$ is a fixed odd integer, and the prime over the summation means that it is taken over the set of matrices with zero $m\times m$-block in the upper-left corner; the coefficients $c(a,b,r,t,s)$ are obtained from the coefficients (6.44) by setting $k = 0$.
PROOF. From (3.59), (6.33), and the definitions it follows that the left and right exponents of the elements $\Pi_b$ are at most 1. Then, by Proposition 6.1, the left side of (6.41) lies in the ring $C_+ = C_{+,p}^n$. Since, by (5.37), $\Pi_{n-b,b}^{(0)} = (M_{n-b,b}(0))_{\Gamma_0}$, it follows from Proposition 5.5 that this element also lies in $C_+$. Thus,
\[
(6.46)\qquad \Pi_b\Pi_+,\ \Pi_{n-b,b}^{(0)} \in C_+.
\]
From these inclusions and Theorem 5.9 we see that to prove (6.41) it suffices to verify that both sides have the same image under $\Phi$ or $\Omega$. But this follows immediately from Lemma 3.34, and (6.41) is proved.
Things are not so simple in the case of (6.42) and (6.43), where the reader should expect some rather tedious computations. First of all, we note that it is enough to compute products of the form
\[
(6.47)\qquad (M_{a,0}(B_0))_{\Gamma_0}\cdot\Pi_{n-b,b}^{(0)},
\]
where $(M_{a,0}(B_0))_{\Gamma_0}$ with $B_0 \in S_a(\mathbf{Z})$ and $r_p(B_0) = r$ is one of the double cosets in the expansion (5.37) of $\Pi_{a,0}^{(r)}$. From (5.37) it follows that $\Pi_{n-b,b}^{(0)} = (M_{n-b,b}(0))_{\Gamma_0}$. Thus, using the second formula in Lemma 1.5 and the expansion (5.35) of the double coset $(M_{a,0}(B_0))_{\Gamma_0}$, we see that the computation of the product (6.47) requires that we find the double cosets to which products of the form
\[
(6.48)\qquad M_{a,0}(B_h)\,u(\varepsilon)\,M_{n-b,b}(0)\qquad\text{for } B_h \in \{B_0\}_p,\ \varepsilon \in \Lambda(D_a)\backslash\Lambda
\]
($D_a = D_{a,0}$) belong (and with what multiplicity). To do this we need a special set of representatives of the left cosets $\Lambda(D_a)\backslash\Lambda$.
We introduce some notation. We set
\[
(6.49)\qquad I_{n-a,n} = \{\,i = (i_1,\dots,i_{n-a}) \in \mathbf{N}^{n-a};\ 1\leqslant i_1 < \cdots < i_{n-a}\leqslant n\,\}.
\]
For $i \in I_{n-a,n}$ we let $\widehat i$ denote the set $(j_\beta) \in I_{a,n}$ that is the complement of $(i_1,\dots,i_{n-a})$ in the set $(1,2,\dots,n)$. To every permutation $\sigma$ of the numbers $1,2,\dots,n$ we associate the $n\times n$-matrix
\[
M(\sigma) = (\delta_{\sigma^{-1}(i)\,j}),
\]
where $\delta_{\alpha\alpha} = 1$ and $\delta_{\alpha\beta} = 0$ for $\alpha\neq\beta$ is the Kronecker symbol. It is easy to see that
\[
M(\sigma\tau) = M(\sigma)M(\tau)\qquad\text{and}\qquad M(\sigma^{-1}) = M(\sigma)^{-1} = {}^tM(\sigma).
\]
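These matrix conventions are easily checked by machine. A small numpy sketch, using the convention that $M(\sigma)$ has a 1 in position $(\sigma(j),j)$ so that right multiplication permutes columns:

```python
import numpy as np

def perm_matrix(sigma):
    """M with M[sigma[j], j] = 1, so column j of A @ M is column sigma[j] of A."""
    n = len(sigma)
    M = np.zeros((n, n), dtype=int)
    for j, sj in enumerate(sigma):
        M[sj, j] = 1
    return M

rng = np.random.default_rng(0)
s, t = rng.permutation(5), rng.permutation(5)
Ms, Mt = perm_matrix(s), perm_matrix(t)

# M(sigma tau) = M(sigma) M(tau): composing permutations multiplies matrices
assert np.array_equal(Ms @ Mt, perm_matrix(s[t]))
# M(sigma)^{-1} = transpose of M(sigma): permutation matrices are orthogonal
assert np.array_equal(Ms @ Ms.T, np.eye(5, dtype=int))
print("permutation-matrix identities verified")
```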
Next, to every $i \in I_{n-a,n}$ we associate the permutation $\sigma(i)$ and the matrix $M(i)$ by setting
\[
(6.50)\qquad \sigma(i) = \begin{pmatrix} i_1,\dots,i_{n-a} & j_1,\dots,j_a\\ 1,\dots,n-a & n-a+1,\dots,n\end{pmatrix}\qquad\text{and}\qquad M(i) = M(\sigma(i)),
\]
where $(j_\beta) = \widehat i$. Note that right multiplication of any $n$-column matrix by a matrix of the form $M(\sigma)$ is equivalent to performing the permutation $\sigma$ on its columns. In particular,
\[
(6.51)\qquad (t_1,\dots,t_n)\,M(i) = (t_{i_1},\dots,t_{i_{n-a}},\,t_{j_1},\dots,t_{j_a}),
\]
where the columns $t_\alpha$ are all of the same size. Finally, for $i \in I_{n-a,n}$ with complement $\widehat i \in I_{a,n}$ we define the sets
\[
V(i) = \{\,V = (v_{\alpha\beta}) \in M_{n-a,a};\ 0\leqslant v_{\alpha\beta} < p\ \text{if } i_\alpha < j_\beta,\ \ v_{\alpha\beta} = 0\ \text{if } i_\alpha > j_\beta\,\},
\]
\[
(6.52)\qquad W(i) = \Bigl\{\,\varepsilon = \begin{pmatrix} E_{n-a} & V\\ 0 & E_a\end{pmatrix}M(i);\ V \in V(i)\,\Bigr\},
\]
and
\[
W_a = W_a^n(p) = \bigcup_{i\in I_{n-a,n}} W(i).
\]
We now show that $W_a$ is a complete set of left coset representatives of $\Lambda = \Lambda^n$ modulo the subgroup $\Lambda(D_a) = \Lambda\cap D_a^{-1}\Lambda D_a$:
\[
(6.53)\qquad W_a = \Lambda(D_a)\backslash\Lambda.
\]
To do this, we first note that the number of elements in $W(i)$ is
\[
(6.54)\qquad |W(i)| = |V(i)| = p^{\,j_1+\cdots+j_a-\langle a\rangle},
\]
where $(j_\beta) = \widehat i$, since for fixed $\beta$ there are exactly $j_\beta - \beta$ indices $i_\alpha$ satisfying the inequality $i_\alpha < j_\beta$. From this and (2.33) we conclude that the number of elements in the set $W_a$ is
\[
p^{-\langle a\rangle}\sum_{1\leqslant j_1<\cdots<j_a\leqslant n} p^{\,j_1+\cdots+j_a} = \frac{\varphi_n}{\varphi_a\,\varphi_{n-a}},\qquad\text{where } \varphi_a = \varphi_a(p).
\]
On the other hand, according to Lemma 1.2 and (2.28), the index $\mu_\Lambda(D_a)$ of $\Lambda(D_a)$ in $\Lambda$ is equal to the same number. Hence, in order to verify (6.53) it suffices to show that

all of the matrices in $W_a$ are in pairwise distinct $\Lambda(D_a)$-left cosets. From the definition it follows that
\[
\Lambda(D_a) = \Bigl\{\,\lambda = \begin{pmatrix} \lambda_1 & \lambda_2\\ \lambda_3 & \lambda_4\end{pmatrix} \in \Lambda;\ \lambda_2 \equiv 0 \pmod p\,\Bigr\},
\]
where $\lambda_2$ is an $(n-a)\times a$-block. We hence find that the $(n-a)\times(n-a)$-block $\lambda_1$ of any matrix $\lambda \in \Lambda(D_a)$ is nonsingular modulo $p$, and if two matrices $\varepsilon,\varepsilon' \in \Lambda$ belong to the same $\Lambda(D_a)$-left coset, then the matrices $\varepsilon^{(n-a)}$ and $\varepsilon'^{(n-a)}$ consisting of the first $n-a$ rows satisfy the relation
\[
(6.55)\qquad \lambda_1\,\varepsilon^{(n-a)} \equiv \varepsilon'^{(n-a)} \pmod p,\qquad\text{where } \lambda_1 \in GL_{n-a}(\mathbf{Z}/p\mathbf{Z}).
\]
If $\varepsilon \in W(i)$, then from (6.51) and the definition of $V(i)$ it follows that the set $(i_1,\dots,i_{n-a})$ is the first (in the sense of lexicographical order) set of indices of $n-a$ columns of $\varepsilon^{(n-a)}$ that are linearly independent modulo $p$. Hence, if (6.55) holds for two matrices $\varepsilon,\varepsilon' \in W_a$, then they both belong to the same subset $W(i) \subset W_a$. It then follows from (6.55) that $\lambda_1 \equiv E_{n-a} \pmod p$, from which we conclude that the corresponding matrices $V,V' \in V(i)$ are congruent modulo $p$, and hence coincide, i.e., $\varepsilon = \varepsilon'$. This proves (6.53).
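The identity $\sum_i p^{\,j_1+\cdots+j_a-\langle a\rangle} = \varphi_n/(\varphi_a\varphi_{n-a})$ used in this count is a classical $q$-binomial identity, and it is easy to spot-check numerically; a short sketch in Python:

```python
from itertools import combinations

def phi(s, p):
    """phi_s(p) = (p - 1)(p^2 - 1)...(p^s - 1), with phi_0 = 1."""
    out = 1
    for i in range(1, s + 1):
        out *= p**i - 1
    return out

def count_W(n, a, p):
    """Sum of p^(j_1 + ... + j_a - a(a+1)/2) over 1 <= j_1 < ... < j_a <= n."""
    return sum(p**(sum(js) - a * (a + 1) // 2)
               for js in combinations(range(1, n + 1), a))

for p in (2, 3, 5):
    for n in range(6):
        for a in range(n + 1):
            assert count_W(n, a, p) * phi(a, p) * phi(n - a, p) == phi(n, p)
print("|W_a| = phi_n/(phi_a phi_{n-a}) verified for p = 2, 3, 5 and n <= 5")
```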
We return to the products (6.48), using the set $W_a$ as our set of representatives of $\Lambda(D_a)\backslash\Lambda$. Let $\varepsilon \in W_a$. If we multiply out the matrices in (6.48) and take into account that $D_{a,0} = D_a$ and $D_{n-b,b} = pD_b$, then we obtain the matrix
\[
(6.56)\qquad p\begin{pmatrix} p^2\,{}^t(D_a\varepsilon D_b)^{-1} & B_h\,\varepsilon D_b\\ 0 & D_a\varepsilon D_b\end{pmatrix}.
\]
Suppose that $\varepsilon \in W(i)$, where $i \in I_{n-a,n}$, i.e., $\varepsilon = \begin{pmatrix} E_{n-a} & V\\ 0 & E_a\end{pmatrix}M$ with $M = M(i)$ and $V \in V(i)$. If we set $d(k)$ equal to 0 or 1 according to the formula $D_b = \operatorname{diag}(p^{d(1)},\dots,p^{d(n)})$, then, using (6.51) and the analogous relation for the rows that comes from the transpose of (6.51), we conclude that $MD_bM^{-1} = C = \begin{pmatrix} C_1 & 0\\ 0 & C_2\end{pmatrix}$, where $C_1 = \operatorname{diag}(p^{d(i_1)},\dots,p^{d(i_{n-a})})$, $C_2 = \operatorname{diag}(p^{d(j_1)},\dots,p^{d(j_a)})$, and $(j_\beta) = \widehat i$. These formulas imply that
\[
D_a\varepsilon D_b = D_a\begin{pmatrix} E_{n-a} & V\\ 0 & E_a\end{pmatrix}MD_bM^{-1}M = D_aC\eta,
\]
where
\[
\eta = C^{-1}\begin{pmatrix} E_{n-a} & V\\ 0 & E_a\end{pmatrix}CM = \begin{pmatrix} E_{n-a} & C_1^{-1}VC_2\\ 0 & E_a\end{pmatrix}M.
\]
The matrix $C_1^{-1}VC_2 = (p^{-d(i_\alpha)+d(j_\beta)}v_{\alpha\beta})$ is an integer matrix: if $d(j_\beta) < d(i_\alpha)$, then $d(j_\beta) = 0$ and $d(i_\alpha) = 1$, so that $i_\alpha > n-b \geqslant j_\beta$, and from the definition of $V(i)$ it follows that $v_{\alpha\beta} = 0$. Thus, $\eta \in \Lambda$, and the matrix (6.56) belongs to the same $\Gamma_0$-right coset as the matrix
\[
(6.57)\qquad p\begin{pmatrix} p^2\,{}^t(D_aC)^{-1} & B_h\,\varepsilon D_b\eta^{-1}\\ 0 & D_aC\end{pmatrix} = p\begin{pmatrix} p^2\,{}^t(D_aC)^{-1} & B_hC\\ 0 & D_aC\end{pmatrix}
\]
($\varepsilon D_b\eta^{-1} = C$). We let $t = t(i)$ denote the number of indices $j_\beta$ in the set $(j_\beta) = \widehat i$ which satisfy the inequality $j_\beta > n-b$. From the definition of the matrices $C_1$ and $C_2$ it

follows that they have the form
\[
C_1 = \begin{pmatrix} E_{n-a-b+t} & 0\\ 0 & pE_{b-t}\end{pmatrix},\qquad C_2 = \begin{pmatrix} E_{a-t} & 0\\ 0 & pE_t\end{pmatrix},
\]
and hence $D_aC = D_{a+b-2t,\,t}$, while the upper-right block of (6.57) takes the form $B_hC = \begin{pmatrix} 0 & 0\\ 0 & B_hC_2\end{pmatrix}$. For any matrix $A$, we shall let $A^{(s)}$ denote the $s\times s$-block in the upper-left corner of $A$. From the form of our matrices and the expansion (5.35) it follows that the matrix (6.57) is contained in the $\Gamma_0$-double coset of the matrix
\[
pM_{a+b-2t,\,t}\bigl(\widehat B_h^{(a-t)}\bigr),\qquad\text{where}\qquad \widehat B_h^{(a-t)} = \begin{pmatrix} 0 & 0\\ 0 & B_h^{(a-t)}\end{pmatrix}
\]
($\widehat B_h^{(a-t)}$ is obviously an $(a+b-2t)\times(a+b-2t)$-matrix). Thus, every product of the form (6.56) lies in the double coset of some matrix of the form $pM_{a+b-2t,t}(K)$, where $K \in S_{a+b-2t}(\mathbf{Z})$, and it falls in a particular double coset of this form if and only if $\varepsilon \in W(i)$, where $i \in I_{n-a,n}$ and $t(i) = t$, and the matrix $B_h^{(a-t)}/\bmod p$ is contained in $\{K\}_p$ (see (5.34)). Using the formula (6.54) for the number of elements in the set $W(i)$, we find that the number of products (6.56) contained in the double coset $\Gamma_0\,pM_{a+b-2t,t}(K)\,\Gamma_0$ is equal to $\alpha(a,b,t)\,\nu(\{B_0\},\{K\},a-t)$, where
\[
\alpha(a,b,t) = \sum_{\substack{1\leqslant j_1<\cdots<j_{a-t}\leqslant n-b\\ n-b<j_{a-t+1}<\cdots<j_a\leqslant n}} p^{\,j_1+\cdots+j_a-\langle a\rangle}
\]
and $\nu(\{B_0\},\{K\},s)$ denotes the number of matrices $B_h \in \{B_0\}_p$ for which the matrix $\widehat B_h^{(s)}/\bmod p$ (of the same size as $K$) lies in the class $\{K\}_p$. Since $t$ obviously satisfies $\max(0,\,a+b-n)\leqslant t\leqslant \min(a,b)$, by Lemma 1.5 we obtain the formula

\[
(6.58)\qquad (M_{a,0}(B_0))_{\Gamma_0}\cdot\Pi_{n-b,b}^{(0)} = \sum_{t,\{K\}_p} c(a,b,t;B_0,K)\,\Delta\,(M_{a+b-2t,t}(K))_{\Gamma_0},
\]
where $\max(0,\,a+b-n)\leqslant t\leqslant\min(a,b)$, $\{K\}_p \subset S_{a+b-2t}(\mathbf{Z}/p\mathbf{Z})$, and the coefficients $c(a,b,t;B_0,K)$ have the form
\[
\alpha(a,b,t)\,\nu(\{B_0\},\{K\},a-t)\,\mu(M_{n-b,b}(0))\,\mu(M_{a+b-2t,t}(K))^{-1},
\]
where $\mu(M)$ is the number of $\Gamma_0$-left cosets in the double coset $\Gamma_0M\Gamma_0$. The terms in the last expression can be computed using the formulas we already know. First, by (2.33), $\alpha(a,b,t)$ is equal to
\[
\sum_{\substack{1\leqslant j_1<\cdots<j_{a-t}\leqslant n-b\\ 1\leqslant\beta_1<\cdots<\beta_t\leqslant b}} p^{\,j_1+\cdots+j_{a-t}+(\beta_1+(n-b))+\cdots+(\beta_t+(n-b))-\langle a\rangle} = p^{t(n+t-a-b)}\,\frac{\varphi_{n-b}\,\varphi_b}{\varphi_{a-t}\,\varphi_{n+t-a-b}\,\varphi_t\,\varphi_{b-t}},
\]
where $\varphi_s = \varphi_s(p)$. Next, from (5.35) and (2.28) we have
\[
\mu(M_{n-b,b}(0)) = p^{b(n-b)+b(b+1)}\,\mu_\Lambda(D_{n-b,b}) = p^{b(n+1)}\,\mu_\Lambda(D_b) = p^{b(n+1)}\,\frac{\varphi_n}{\varphi_b\,\varphi_{n-b}}.
\]

Finally, from (5.35) and (2.32) we obtain
\[
\mu(M_{a+b-2t,t}(K)) = |\{K\}_p|\,p^{t(n+1)}\,\frac{\varphi_n}{\varphi_{n+t-a-b}\,\varphi_{a+b-2t}\,\varphi_t}.
\]
If we substitute these expressions in the formula for $c(a,b,t;B_0,K)$ and take the sum of the expansions (6.58), multiplied by $\chi(B_0)^{-k}$, over all distinct classes $\{B_0\}_p$ in $S_a(\mathbf{Z}/p\mathbf{Z})$ for which $r_p(B_0) = r$, then from (5.38) we obtain the formula
\[
\Pi_{a,0}^{(r)}(k)\,\Pi_{n-b,b}^{(0)} = \sum_{t,\{K\}_p} c(k;a,b,r,t;K)\,\Delta\,(M_{a+b-2t,t}(K))_{\Gamma_0},
\]
where $\max(0,\,a+b-n)\leqslant t\leqslant b$, $\{K\}_p \subset S_{a+b-2t}(\mathbf{Z}/p\mathbf{Z})$, and
\[
(6.59)\qquad c(k;a,b,r,t;K) = p^{t(t-a-b-1)+b(n+1)}\,\frac{\varphi_{a+b-2t}}{\varphi_{a-t}\,\varphi_{b-t}}\Bigl(|\{K\}_p|^{-1}\!\!\sum_{\substack{\{B_0\}_p\subset S_a(\mathbf{Z}/p\mathbf{Z})\\ r_p(B_0)=r}}\!\!\chi(B_0)^{-k}\,\nu(\{B_0\},\{K\},a-t)\Bigr)
\]
(by assumption $r+b\leqslant a$, so that $\min(a,b) = b$).
We now turn our attention to the expression in large parentheses on the right side of (6.59). For brevity we shall denote it $S(k,\{K\})$. By the definition of $\nu(\{B_0\},\{K\},s)$ we obtain
\[
(6.60)\qquad S(k,\{K\}) = |\{K\}_p|^{-1}\sum_V l_p(k;V,r,a),
\]
where $V$ runs over the matrices in $S_{a-t}(\mathbf{Z}/p\mathbf{Z})$ for which $\begin{pmatrix} 0&0\\ 0&V\end{pmatrix}\!\big/\bmod p$ lies in $\{K\}_p$, and
\[
(6.61)\qquad l_p(k;V,r,a) = \sum_{\substack{T\in S_a(\mathbf{Z}/p\mathbf{Z}),\ r_p(T)=r\\ T\equiv\left(\begin{smallmatrix} V & *\\ * & *\end{smallmatrix}\right)\ (\bmod p)}}\chi(T)^{-k}.
\]
We now show that the sum (6.61) depends only on the rank of $V$ over the field $\mathbf{Z}/p\mathbf{Z}$. More precisely, suppose that $V$ is an $(a-t)\times(a-t)$-matrix of rank $r_p(V) = r-s$, where $t,s\geqslant 0$. Then we have the formula (see (6.45))
\[
(6.62)\qquad l_p(k;V,r,a) = p^{(r-s)t}\,\chi(V)^{-k}\,l_p(k;\,0_{a+s-r-t};\,s,\,a+s-r).
\]
To see this, first note that, by (4.70), the sum (6.61) depends only on the class of the matrix $V$ over $\mathbf{Z}/p\mathbf{Z}$. By Theorem 1.3 of Appendix 1, we may suppose that $V = \begin{pmatrix} V_1 & 0\\ 0 & 0\end{pmatrix}$, where $V_1$ is an $(r-s)\times(r-s)$-matrix that is nonsingular over $\mathbf{Z}/p\mathbf{Z}$. In this case, any matrix $T$ satisfying the conditions in (6.61) is congruent modulo $p$ to a matrix of the form
\[
\begin{pmatrix} V_1 & 0 & T_{13}\\ 0 & 0 & T_{23}\\ {}^tT_{13} & {}^tT_{23} & T_{33}\end{pmatrix} = \begin{pmatrix} V_1 & 0\\ 0 & T_1\end{pmatrix}\left[\begin{pmatrix} E_{r-s} & 0 & V_1^{-1}T_{13}\\ 0 & E_{a-t-r+s} & 0\\ 0 & 0 & E_t\end{pmatrix}\right],
\]
where
\[
T_1 = \begin{pmatrix} 0 & T_{23}\\ {}^tT_{23} & T_{33} - {}^tT_{13}V_1^{-1}T_{13}\end{pmatrix},
\]
whose rank over $\mathbf{Z}/p\mathbf{Z}$ is equal to
\[
r_p(V_1) + r_p(T_1) = r - s + r_p(T_1).
\]

Conversely, any matrix of the above form, with $T_1 \in S_{a+s-r}(\mathbf{Z}/p\mathbf{Z})$ and $r_p(T_1) = s$, satisfies the conditions in (6.61). Furthermore, since (4.70) implies that
\[
\chi(T) = \chi\Bigl(\begin{pmatrix} V_1 & 0\\ 0 & T_1\end{pmatrix}\Bigr) = \chi(V_1)\,\chi(T_1)
\]
(because for any symmetric integer matrices $A_1$ and $A_2$ we have
\[
(6.63)\qquad \chi\Bigl(\begin{pmatrix} A_1 & 0\\ 0 & A_2\end{pmatrix}\Bigr) = \chi(A_1)\,\chi(A_2)\,),
\]
it follows that (6.62) is a consequence of the definition (6.45) and the above considerations.
We return to the computation of the sum (6.60). In order for this not to be the empty sum, the matrix $K \in S_{a+b-2t}(\mathbf{Z}/p\mathbf{Z})$ must clearly satisfy the inequality $r_p(K) \leqslant r$. We set $r_p(K) = r - s$. Then any matrix $V$ on the right in (6.60) must satisfy the relation $r_p(V) = r_p(K) = r - s$. Hence, if we apply (6.62), we can rewrite the sum (6.60) in the form
\[
(6.64)\qquad S(k,\{K\}) = p^{(r-s)t}\,l_p(k;\,0_{a+s-r-t};\,s,\,a+s-r)\,|\{K\}_p|^{-1}\,S_1(k,\{K\}),
\]
where
\[
(6.65)\qquad S_1(k,\{K\}) = \sum_V\chi(V)^{-k},
\]
in which $V \in S_{a-t}(\mathbf{Z}/p\mathbf{Z})$ and $\begin{pmatrix} 0&0\\ 0&V\end{pmatrix}\!\big/\bmod p \in \{K\}_p$. By Theorem 1.3 of Appendix 1, we may suppose that $K = \begin{pmatrix} 0&0\\ 0&K_1\end{pmatrix}$, where $K_1 \in S_{r-s}(\mathbf{Z})$ and $r_p(K_1) = r-s$. We let $G$ denote the group $GL_{a+b-2t}(\mathbf{Z}/p\mathbf{Z})$ if $a+b-2t < n$; if $a+b-2t = n$, then we let $G$ denote the subgroup of this group that consists of the matrices of determinant $\pm1\bmod p$. Then, by (5.34), the class $\{K\} = \{K\}_p$ consists of the matrices of the form $K[U]$ ($U \in G$). We divide the matrix $U \in G$ into blocks $U_{ij}$ ($1\leqslant i,j\leqslant 3$) with diagonal blocks $U_{11}$, $U_{22}$, $U_{33}$ of size $(b-t)\times(b-t)$, $(a+s-t-r)\times(a+s-t-r)$, and $(r-s)\times(r-s)$, respectively. Since $U$ is nonsingular modulo $p$, by a direct calculation we easily see that, if $K = \begin{pmatrix} 0&0\\ 0&K_1\end{pmatrix}$ with $K_1$ an $(r-s)\times(r-s)$-matrix that is nonsingular modulo $p$, then
\[
(6.66)\qquad K[U] \equiv \begin{pmatrix} 0&0\\ 0&V\end{pmatrix} \pmod p
\]
if and only if $U$ belongs to the subset
\[
G_1 = \{\,U = (U_{ij}) \in G;\ U_{31} \equiv 0 \pmod p\,\}.
\]
From this condition it follows that the rows of the matrix $(U_{32}\ U_{33})$ are linearly independent over $\mathbf{Z}/p\mathbf{Z}$, and hence this matrix can be filled out to a matrix $U_1 \in GL_{a-t}(\mathbf{Z}/p\mathbf{Z})$. Then (6.66) implies the congruence
\[
\begin{pmatrix} 0&0\\ 0&K_1\end{pmatrix}[U_1] \equiv V \pmod p,
\]
and consequently $\chi(V) = \chi(K_1) = \chi(K)$. From this and (6.65) we obtain
\[
S_1(k,\{K\}) = \chi(K)^{-k}\,|G_1|\cdot|G_0|^{-1},
\]

where
\[
G_0 = \Bigl\{\,U \in G;\ \begin{pmatrix} 0&0\\ 0&K_1\end{pmatrix}[U] \equiv \begin{pmatrix} 0&0\\ 0&K_1\end{pmatrix} \pmod p\,\Bigr\}
\]
is the stabilizer of the matrix $K$ in the group $G$. On the other hand, $|\{K\}| = |G|\cdot|G_0|^{-1}$. If we substitute this expression into (6.64), we find that $S(k,\{K\})$ is equal to
\[
(6.67)\qquad p^{(r-s)t}\,\chi(K)^{-k}\,l_p(k;\,0_{a+s-r-t};\,s,\,a+s-r)\,|G|^{-1}|G_1|.
\]
To compute $|G|^{-1}|G_1|$ we need the following
LEMMA 6.16. Let $1\leqslant c\leqslant d$, and let $p$ be a prime number. Then the number of matrices $V \in M_{d,c}(\mathbf{Z}/p\mathbf{Z})$ satisfying the condition $r_p(V) = c$ is equal to
\[
p^{\langle c-1\rangle}(p^d-1)\cdots(p^{d-c+1}-1) = p^{\langle c-1\rangle}\,\frac{\varphi_d(p)}{\varphi_{d-c}(p)}.
\]
PROOF. We use induction on $c$. The formula is obvious for $c = 1$. Suppose that it has already been proved for some $c$, $1\leqslant c < d$. Any matrix $V' \in M_{d,c+1}(\mathbf{Z}/p\mathbf{Z})$ with $r_p(V') = c+1$ can be obtained by taking a suitable $V \in M_{d,c}(\mathbf{Z}/p\mathbf{Z})$ with $r_p(V) = c$ and adding a column which is not a linear combination modulo $p$ of the columns of $V$. The number of such columns that are distinct modulo $p$ is obviously equal to $p^d - p^c = p^c(p^{d-c}-1)$. Hence, when passing from $c$ to $c+1\leqslant d$, the number of matrices with the desired properties gets multiplied by $p^c(p^{d-c}-1)$. From this and the induction assumption we obtain the lemma for $c+1$. $\square$
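Lemma 6.16 can be verified directly by enumeration for small parameters; a brief sketch in Python:

```python
from itertools import product

def phi(s, p):
    out = 1
    for i in range(1, s + 1):
        out *= p**i - 1
    return out

def rank_mod_p(rows, p):
    """Rank over F_p of a small integer matrix, by Gaussian elimination."""
    m = [list(r) for r in rows]
    rank = 0
    for col in range(len(m[0]) if m else 0):
        piv = next((i for i in range(rank, len(m)) if m[i][col] % p), None)
        if piv is None:
            continue
        m[rank], m[piv] = m[piv], m[rank]
        inv = pow(m[rank][col], -1, p)
        for i in range(rank + 1, len(m)):
            f = (m[i][col] * inv) % p
            m[i] = [(x - f * y) % p for x, y in zip(m[i], m[rank])]
        rank += 1
    return rank

def count_full_rank(d, c, p):
    """Number of d x c matrices over F_p of rank c."""
    return sum(1 for cols in product(product(range(p), repeat=d), repeat=c)
               if rank_mod_p(list(zip(*cols)), p) == c)

# the lemma: the count equals p^((c-1)c/2) * phi_d(p) / phi_{d-c}(p)
for d, c, p in [(2, 1, 3), (2, 2, 3), (3, 2, 2), (3, 3, 2)]:
    expected = p**((c - 1) * c // 2) * phi(d, p) // phi(d - c, p)
    assert count_full_rank(d, c, p) == expected
print("Lemma 6.16 verified for the sampled (d, c, p)")
```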

We now complete the computation of the sum $S(k,\{K\})$. An argument similar to the one used in the proof of Lemma 6.16 shows that, given any matrix $M \in M_{a+b-2t,\,b-t}(\mathbf{Z}/p\mathbf{Z})$, where $1\leqslant b-t < a+b-2t$ and $r_p(M) = b-t$, there exist matrices $M' \in M_{a+b-2t,\,a-t}(\mathbf{Z}/p\mathbf{Z})$ for which $(M,M') \in G$, and the number of $M'$ with this property does not depend on $M$. From this and Lemma 6.16 it follows that this number is equal to $|G|\,p^{-\langle b-t-1\rangle}\,\varphi_{a-t}\,\varphi_{a+b-2t}^{-1}$. On the other hand, the number of matrices $M$ of the above form for which the last $r-s$ rows consist of zeros is equal (by the same lemma) to
\[
p^{\langle b-t-1\rangle}\,\varphi_{a+b+s-2t-r}\,\varphi_{a+s-t-r}^{-1}.
\]
If we multiply these expressions, we obtain a formula for the number of elements in $G_1$:
\[
|G_1| = |G|\,\varphi_{a-t}\,\varphi_{a+b+s-2t-r}\,\varphi_{a+b-2t}^{-1}\,\varphi_{a+s-t-r}^{-1}.
\]
This formula also holds in the cases $b-t = 0$ and $b-t = a+b-2t$ that were excluded, since in the first case both sides of the equality are equal to $|G|$, and the second case reduces to the first case because $t\leqslant b\leqslant a$. If we substitute the expression for $|G_1|$ into (6.67) and substitute the resulting expression for $S(k,\{K\})$ into (6.59), after obvious simplifications we obtain the formula
\[
c(k;a,b,r,t;K) = p^{t(r+t-a-b-s-1)+b(n+1)}\,\chi(K)^{-k}\,l_p(k;\,0_{a+s-r-t};\,s,\,a+s-r)\,\frac{\varphi_{a+b+s-r-2t}}{\varphi_{a+s-r-t}\,\varphi_{b-t}},
\]
where $s$ is determined by the condition $r_p(K) = r - s$. If we substitute these expressions into the formula for the product $\Pi_{a,0}^{(r)}(k)\cdot\Pi_{n-b,b}^{(0)}$, then by Lemma 5.11 we obtain the formula (6.43). Setting $k = 0$ in this formula, we obtain (6.42).

We are now ready to compute the coefficients in the expansions (6.29) and (6.30) of the polynomial $R_p^n(v)$.
PROPOSITION 6.17. In the ring $L_{0,p}^n$ one has
\[
(6.68)\qquad \Pi_-\Pi_i\Pi_-^{-2} = p^{-\langle i\rangle - i(n-i)}\,\Delta^{-1}\sum_{j=0}^{i}\alpha_{ij}\sum_{a=n-i+j}^{n}\Pi_{a,\,n-a}^{(i-j+a-n)},
\]
\[
(6.69)\qquad \Pi_+^{-2}\Pi_{n-i}\Pi_+ = p^{-\langle i\rangle - i(n-i)}\,\Delta^{-1}\sum_{j=0}^{i}\alpha_{ij}\sum_{a=n-i+j}^{n}\Pi_{a,0}^{(i-j+a-n)},
\]
where
\[
(6.70)\qquad \alpha_{ij} = \alpha_{ij}(p) = \frac{\varphi_{n-i+j}(p)}{\varphi_{n-i}(p)}\sum_{t=0}^{j}\frac{(-p)^t}{\varphi_t(p)\,\varphi_{j-t}(p)},
\]
$\varphi_s$ is the function (2.29), $\Pi_{a,b}^{(r)}$ are the elements (3.62), and the rest of the notation is the same as in Proposition 6.9.
PROOF. The formulas (6.12), (6.33), and (5.39) show that the anti-automorphism $*$ takes (6.68) to (6.69) and conversely; hence, it suffices to prove one of the two. We shall prove (6.69). We first verify that both sides of (6.69) lie in the subspace $O_+ = O_{+,p}^n = L_p^n\cdot C_{+,p}^n \subset L_{0,p}^n$. We then show that both sides have the same image under the map $\Omega_n$. By Theorem 6.5(2), this will imply that they are equal.
From (6.16) and (6.46) we have
\[
(6.71)\qquad \Pi_+^{-2}\Pi_{n-i}\Pi_+ \in O_+.
\]
In order to examine the right side of (6.69), we introduce the sums
\[
(6.72)\qquad S_{i,c} = \sum_{a=n-i}^{n-c}\Pi_{a,c}^{(i+a-n)}
\]
for $0\leqslant c\leqslant i\leqslant n$. Since the right side of (6.69) is a linear combination of $\Delta^{-1}S_{i-j,0}$ for $j = 0,1,\dots,i$, it follows that it lies in $O_+$ provided that $S_{i,0} \in O_+$ for all $i = 0,1,\dots,n$. To prove the latter claim, we use induction on $d = i-c$ to show that
\[
(6.73)\qquad S_{i,c} = S_{c+d,c} \in O_+\qquad (0\leqslant c\leqslant c+d\leqslant n).
\]
Since $S_{c,c} = \Pi_{n-c,c}^{(0)}$, it follows by (6.46) that $S_{c,c} \in C_+ \subset O_+$; this proves (6.73) in the case $d = 0$. Now suppose that (6.73) holds for all $c,d$ satisfying the conditions $0\leqslant d < h$ and $0\leqslant c\leqslant n-d$, where $0 < h\leqslant n$. From (3.61) and (6.72) it follows that we have
\[
S_{h,0} = T_{n-h}(p^2) - \sum_{c+d=h,\ c\geqslant 1} S_{c+d,c}.
\]
By the induction assumption, the sum $S_{c+d,c}$, where $c+d = h$ and $c\geqslant 1$, i.e., $d < h$, lies in $O_+$. By the definition of $O_+$, the element $T_{n-h}(p^2) \in L_p^n$ is also contained in that space. We thus have $S_{h,0} \in O_+$. We now use induction on $c$ to prove that
\[
(6.74)\qquad S_{c+h,c} \in O_+\qquad\text{for } 0\leqslant c\leqslant n-h.
\]

The case $c = 0$ has already been treated. Suppose that (6.74) holds for all $c$ satisfying the inequality $0\leqslant c < b$, where $0 < b\leqslant n-h$. Consider the product
\[
S_{h,0}\cdot\Pi_{n-b,b}^{(0)} = \sum_{a=n-h}^{n}\Pi_{a,0}^{(h+a-n)}\,\Pi_{n-b,b}^{(0)}.
\]
Since $b\leqslant n-h$, it follows that $(h+a-n)+b\leqslant a$, and we can apply (6.42) to compute the products in the last sum (this is the only place where we need (6.42)!). Note that the coefficients $c(a,b,r,t,s)$ in (6.42) do not depend on the individual values of $a$ and $r$, but rather only on the difference $a-r$; so we can set $c(a,b,r,t,s) = \gamma(a-r,b,t,s)$. With this notation we have
\[
S_{h,0}\,\Pi_{n-b,b}^{(0)} = \sum_{a=n-h}^{n}\ \sum_{\substack{\max(0,\,a+b-n)\leqslant t\leqslant b\\ 0\leqslant s\leqslant h+a-n}}\gamma(n-h,b,t,s)\,\Delta\,\Pi_{a+b-2t,\,t}^{(h+a-n-s)} = \sum_{t=0}^{b}\sum_{s=0}^{h}\gamma(n-h,b,t,s)\,\Delta\,S_{h+2t-b-s,\,t}.
\]
From the inclusion $S_{h,0} \in O_+$ that was proved above and from (6.46) it follows that the left side of the last relation is contained in $O_+$. By our induction assumptions, $S_{h+2t-b-s,\,t} \in O_+$ if $h+t-b-s < h$, or if $h+t-b-s = h$ and $t < b$, i.e., for all possible combinations of $t$ and $s$ except $t = b$, $s = 0$. Consequently, the term with $t = b$ and $s = 0$ is also contained in $O_+$:
\[
\gamma(n-h,b,b,0)\,\Delta\,S_{h+b,\,b} \in O_+,
\]
and hence $S_{h+b,b} \in O_+$ (see Lemma 3.27). We have thus proved (6.74), and hence also (6.73). Both sides of (6.69) are then contained in $O_+$, and it remains for us to prove that they have the same image under $\Omega$.
From (6.25) and Lemma 3.34 we have
\[
(6.75)\qquad \Omega(\Pi_+^{-2}\Pi_{n-i}\Pi_+) = \Omega(\Pi_+^{-2})\,\Omega(\Pi_{n-i})\,\Omega(\Pi_+) = (x_1\cdots x_n)^{-1}\,s_{n-i}(x_1,\dots,x_n).
\]
On the other hand, if we again use Lemmas 3.34 and 2.21, we find that the $\Omega$-image of the right side of (6.69) is
\[
p^{-\langle i\rangle - i(n-i)}\,\Omega(\Delta)^{-1}\sum_{j=0}^{i}\alpha_{ij}\sum_{a=n-i+j}^{n} l_p(i-j+a-n,\,a)\,x_0^2\,p^{-\langle a\rangle}\,s_a(x_1,\dots,x_n)
\]
\[
= (x_1\cdots x_n)^{-1}\,p^{\langle n-i\rangle}\sum_{a=n-i}^{n}p^{-\langle a\rangle}\,s_a(x_1,\dots,x_n)\sum_{j=0}^{a+i-n}\alpha_{ij}\,l_p(i-j+a-n,\,a).
\]

Our goal is to prove that the last expression is the same as (6.75). For this it is clearly sufficient to verify the relations
\[
\sum_{j=0}^{a+i-n}\alpha_{ij}\,l_p(i-j+a-n,\,a) = \begin{cases} 1, & \text{if } a = n-i,\\ 0, & \text{if } n-i < a\leqslant n,\end{cases}
\]
which, if we set $n-a = k$, can be rewritten in the form
\[
(6.76)\qquad \sum_{j=0}^{i-k}\alpha_{ij}\,l_p(i-j-k,\,n-k) = \begin{cases} 1, & \text{if } k = i,\\ 0, & \text{if } 0\leqslant k < i.\end{cases}
\]
In order to prove these relations, we must analyze the function $l_p(r,a)$ and find a way to compute it. The next two lemmas are devoted to this.

LEMMA 6.18. For $0\leqslant r\leqslant a$ and for $p$ a prime, set
\[
L_p(r,a) = \{A \in S_a(\mathbf{F}_p);\ r_p(A) = r\},\qquad\text{where } \mathbf{F}_p = \mathbf{Z}/p\mathbf{Z},
\]
and for $i \in I_{r,a}$ (see (6.49)) let $L_p(r,a;i)$ denote the subset of matrices in $L_p(r,a)$ for which $i$ is the first (in lexicographical order) set of indices of $r$ rows (and columns) which are linearly independent modulo $p$. Then one can take
\[
(6.77)\qquad L_p(r,a;i) = \Bigl\{\,\begin{pmatrix} A & AV\\ {}^tVA & A[V]\end{pmatrix}[M(i)];\ A \in L_p(r,r),\ V \in V(i)\,\Bigr\},
\]
where $M(i)$ is the matrix (6.50) and $V(i)$ is the set defined in (6.52). In particular, the number of elements in the set (6.77) is given by the formula
\[
(6.78)\qquad |L_p(r,a;i)| = l_p(r,r)\,p^{\,j_1+\cdots+j_{a-r}-\langle a-r\rangle},
\]
where $(j_1,\dots,j_{a-r}) = \widehat i \in I_{a-r,a}$ is the set of indices complementary to $i$ in $(1,2,\dots,a)$. The number of elements in the entire set $L_p(r,a)$ is given by the formula
\[
(6.79)\qquad l_p(r,a) = l_p(r,r)\,\frac{\varphi_a(p)}{\varphi_r(p)\,\varphi_{a-r}(p)}.
\]

PROOF. Let A' = (a_{st}) \in L_p(r,a;i). Then, using (6.51) and the analogous relation
for the rows, we obtain

A'[M(i)^{-1}] = A'' = \begin{pmatrix} A & B\\ {}^tB & C \end{pmatrix},

where A = (a_{i_\alpha i_\beta}), B = (a_{i_\alpha j_\beta}), C = (a_{j_\alpha j_\beta}), and (j_\beta) = \bar i. We now show that
r_p(A) = r. In fact, r_p(A'') = r_p(A') = r, and, by construction, the first r columns
of A'' are linearly independent modulo p. Hence, all of the columns of A'' are linear
combinations modulo p of its first r columns. In particular, the same is true for the
matrix (A, B), which also has rank r over \mathbf F_p. Thus, the first r columns of this last matrix
cannot be linearly dependent modulo p. Further note that for each \beta = 1,\ldots,a-r
the \betath column of B is a linear combination modulo p of the first s columns of A,
where s is the largest integer such that i_s < j_\beta, because if the columns of (A,B) with
§6. HECKE POLYNOMIALS FOR THE SYMPLECTIC GROUP 215

indices i_1,\ldots,i_s, j_\beta (and hence the columns of A' with the same indices) were linearly
independent modulo p, then the index i_{s+1} could be replaced by j_\beta < i_{s+1}. Thus,

{}^t(a_{i_1 j_\beta},\ldots,a_{i_r j_\beta}) = \sum_{\alpha=1}^{s} v_{\alpha\beta}\,{}^t(a_{i_1 i_\alpha},\ldots,a_{i_r i_\alpha})
= \sum_{\alpha=1}^{r} v_{\alpha\beta}\,{}^t(a_{i_1 i_\alpha},\ldots,a_{i_r i_\alpha}) \pmod p,

where 0 \le v_{\alpha\beta} < p for 1 \le \alpha \le r, 1 \le \beta \le a-r, and v_{\alpha\beta} = 0 if i_\alpha > j_\beta; hence

B \equiv AV \pmod p, \quad\text{and}\quad V = (v_{\alpha\beta}) \in V(i). Furthermore, since

and r_p(A) = r, it follows that r_p(C - A[V]) = 0, and hence C \equiv A[V] \pmod p. If
we carry out the same argument in the opposite order, we readily see that any matrix
of the form in (6.77) belongs modulo p to the set L_p(r,a;i), and that two matrices of
this type are incongruent modulo p if they correspond to distinct V or to matrices A
which are incongruent modulo p. This proves (6.77). (6.78) follows from (6.77) and
from the formula (6.54) for the number of elements in V(i); (6.79) follows from (6.78). □
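The count formula (6.79) lends itself to a direct machine check for small p and a. The sketch below (not from the text; the function names l, phi, and rank_mod_p are ours) enumerates all symmetric a-by-a matrices over F_p by brute force and compares the rank-r count with l_p(r,r)·φ_a(p)/(φ_r(p)φ_{a-r}(p)).

```python
from itertools import product

def rank_mod_p(M, p):
    # rank of a matrix over F_p via Gaussian elimination
    M = [row[:] for row in M]
    rows, cols = len(M), len(M[0]) if M else 0
    r = 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i][c] % p), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], -1, p)
        M[r] = [(x * inv) % p for x in M[r]]
        for i in range(rows):
            if i != r and M[i][c] % p:
                f = M[i][c]
                M[i] = [(x - f * y) % p for x, y in zip(M[i], M[r])]
        r += 1
    return r

def l(p, r, a):
    # number of symmetric a-by-a matrices over F_p of rank r (brute force)
    n_free = a * (a + 1) // 2
    count = 0
    for vals in product(range(p), repeat=n_free):
        A = [[0] * a for _ in range(a)]
        it = iter(vals)
        for i in range(a):
            for jj in range(i, a):
                A[i][jj] = A[jj][i] = next(it)
        if rank_mod_p(A, p) == r:
            count += 1
    return count

def phi(m, p):
    # φ_m(p) = (p - 1)(p^2 - 1)···(p^m - 1)
    out = 1
    for i in range(1, m + 1):
        out *= p**i - 1
    return out

# check (6.79) for p = 3, a = 3 and every rank r
p, a = 3, 3
for r in range(a + 1):
    predicted = l(p, r, r) * phi(a, p) // (phi(r, p) * phi(a - r, p))
    assert l(p, r, a) == predicted
```

The enumeration is exponential in a(a+1)/2, so this is only feasible for very small parameters; it is a sanity check of (6.79), not an efficient way to compute l_p(r,a).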

The next lemma gives us explicit formulas for l_p(r,r).

LEMMA 6.19. The following formal power series identities over \mathbf Q hold for any prime p:

PROOF OF THE LEMMA. Since the set S_a(\mathbf F_p) contains p^{(a)} different matrices, it
follows from (6.79) that

p^{(a)} = \sum_{r=0}^{a} l_p(r,a) = \varphi_a \sum_{r=0}^{a} \frac{l_p(r,r)}{\varphi_r\,\varphi_{a-r}} \qquad (\varphi_s = \varphi_s(p)).

All of these relations for a = 0, 1, \ldots together are equivalent to the following single
formal power series identity:

Thus, to prove the lemma it suffices to verify the identities

(6.80)

(6.81)

If we set x = p and t = -p^{-1} in (2.34), then for n = 0, 1, \ldots we obtain

(6.82)\qquad \sum_{\substack{b,c\ge 0\\ b+c=n}} (-1)^b\,p^{(b-1)}\,\frac{\varphi_n}{\varphi_b\,\varphi_c} = \begin{cases}1, & \text{if } n = 0,\\ 0, & \text{if } n > 0,\end{cases}

which together are equivalent to the power series identity (6.80). (6.81) follows from
(6.80) if we replace v by -pv. □

We now complete the proof of Proposition 6.17. It suffices to verify (6.76). From
(6.70) for the numbers \alpha_{ij} and from the second identity in Lemma 6.19 we obtain the
congruence

which implies the relations

\sum_{\substack{j,r\ge 0\\ j+r=i-k}} \alpha_{ij}\,\frac{\varphi_{n-i}\,l_p(r,r)}{\varphi_{n-i+j}\,\varphi_r} = \begin{cases}1, & \text{if } i-k = 0,\\ 0, & \text{if } 0 < i-k \le i,\end{cases}

where 0 \le k \le i. Since the right side is nonzero only when i = k, the factor \varphi_{n-i} can
be replaced by \varphi_{n-k}. If we set r = i-j-k, then by (6.79) we obtain

\frac{\varphi_{n-k}\,l_p(r,r)}{\varphi_{n-i+j}\,\varphi_r} = \frac{\varphi_{n-k}\,l_p(i-j-k,\,i-j-k)}{\varphi_{i-j-k}\,\varphi_{n-k-(i-j-k)}} = l_p(i-j-k,\,n-k).

We now compute the coefficients in the expansions (6.31) and (6.32) of the polynomial
R_p^n(v). To do this we first introduce the new functions

(6.83)\qquad \varphi_r^+(v) = \prod_{\substack{1\le i\le r\\ i \equiv 0 \pmod 2}} (v^i - 1), \qquad \varphi_r^-(v) = \prod_{\substack{1\le i\le r\\ i \equiv 1 \pmod 2}} (v^i - 1).

PROPOSITION 6.20. In the ring \hat{\mathbf L}{}^n_{0,p} one has:

(6.84)\qquad \Pi_-\Pi_i(\Pi_-^*)^{-1} = p^{-(i)-i(n-i)}\,\Delta^{-1}\sum_{j=0}^{i}\alpha_{ij}\sum_{a=n-i+j}^{n}\Pi^{(i-j+a-n)}_{0,a}(k),

(6.85)\qquad (\Pi_+^*)^{-1}\Pi_{n-i}\Pi_+ = p^{-(i)-i(n-i)}\,\Delta^{-1}\sum_{j=0}^{i}\alpha_{ij}\sum_{a=n-i+j}^{n}\Pi^{(i-j+a-n)}_{a,0}(k),

where

(6.86)\qquad \alpha_{ij} = \begin{cases}\cdots, & \text{if } j \equiv 0 \pmod 2,\\ \cdots, & \text{if } j \equiv 1 \pmod 2,\end{cases}

\Pi^{(b)}_{a,c}(k) is given by (4.106), and the rest of the notation is the same as in Proposition 6.9.
PROOF. In view of (6.12), (6.33), and (5.39), the anti-automorphism * takes (6.84)
to (6.85) and conversely. Hence, it suffices to prove, say, (6.85). Just as in the proof of
Proposition 6.17, using (4.105) and (6.43) we see that

S_{i,c}(k) = \sum_{a=n-i}^{n-c} \Pi^{(i+a-n)}_{a,c}(k) \in \hat{\mathbf E}{}^n_p

for 0 \le c \le i \le n, and the right side of (6.85) can be expressed as a linear combination
of the elements \Delta^{-1}S_{i-j,0}(k), j = 0, 1, \ldots, i. Since (6.16) and (6.46) imply that the
left side of (6.85) is also contained in \hat{\mathbf C}{}^+_{0,p}, it follows from Theorem 6.5(2) that to
prove (6.85) we need only show that both sides have the same image under \Omega = \Omega^n_k.
Using (4.109) and Lemma 2.21, we have

\Omega\big(\Pi^{(i-j+a-n)}_{a,0}(k)\big) = l_p^k(i-j+a-n,\,a)\,x_0^2\,p^{-(a)}\,s_a(x_1,\ldots,x_n).

From this and Lemma 2.21 we find that the Ω-image of the right side of (6.85) is

(x_1\cdots x_n)^{-1}\,p^{(n-i)}\sum_{a=n-i}^{n} p^{-(a)}\,s_a(x_1,\ldots,x_n)\sum_{j=0}^{a+i-n}\alpha_{ij}\,l_p^k(i-j+a-n,\,a).

Since, by (6.25) and Lemma 3.34,

\Omega\big((\Pi_+^*)^{-1}\Pi_{n-i}\Pi_+\big) = (x_1\cdots x_n)^{-1}\,s_{n-i}(x_1,\ldots,x_n),

we see that to prove (6.85) it remains to verify the relations

(6.87)\qquad \sum_{j=0}^{a+i-n}\alpha_{ij}\,l_p^k(i-j+a-n,\,a) = \begin{cases}1, & \text{if } a = n-i,\\ 0, & \text{if } n-i < a \le n.\end{cases}

LEMMA 6.21. If p is an odd prime and 0 \le r \le a, then

(6.88)\qquad l_p^k(r,a) = l_p^k(r,r)\,\frac{\varphi_a(p)}{\varphi_r(p)\,\varphi_{a-r}(p)},

and the sum l_p^k(r,r) is equal to

(6.89)\qquad \prod_{j=1}^{r/2} p^{2j-1}\,(p^{2j-1}-1) \quad\text{or}\quad 0

for even and odd r, respectively.


PROOF. Any matrix M in the set (6.77) satisfies the congruence

M \equiv \begin{pmatrix} A & 0\\ 0 & 0 \end{pmatrix}\left[\begin{pmatrix} E_r & V\\ 0 & E_{a-r} \end{pmatrix} M(i)\right] \pmod p.

Hence, \chi(M) = \chi(A), and (6.88) is a consequence of (6.77)-(6.79). To prove (6.89),


we write the sum 1;(r,a) (see (4.110)) in the form

(6.90)
d=O ZESa-1(Fp) AES0 (Fp),rp(A)=r
rp(Z)=d A<•-ll::z(modp)

where A^{(s)} is the s \times s block in the upper-left corner of A. By Theorem 1.3 of Appendix
1 and the formula

(6.91)\qquad \chi(A[U]) = \chi(A) \quad\text{for } A \in S_a(\mathbf F_p) \text{ and } U \in GL_a(\mathbf F_p),

which follows from (4.70), we may suppose that Z = \begin{pmatrix} Z_1 & 0\\ 0 & 0 \end{pmatrix} in the inner sum in
(6.90), where Z_1 is a d \times d matrix that is nonsingular modulo p. Thus, the matrix A
in (6.90) has the form

(6.92)\qquad A = \begin{pmatrix} Z_1 & 0 & X_1\\ 0 & 0 & X_2\\ {}^tX_1 & {}^tX_2 & Y \end{pmatrix},

where Y_1 = Y - Z_1^{-1}[X_1], and X_1, X_2, Y are columns of length d, a-d-1, 1,
respectively.
If r_p(X_2) = 1, then {}^tX_2U = (0,\ldots,0,1) for some U \in GL_{a-d-1}(\mathbf F_p), and hence

X\left[\begin{pmatrix} U & 0\\ 0 & 1 \end{pmatrix}\right] = \begin{pmatrix} 0 & 0\\ 0 & Y_2 \end{pmatrix} \quad\text{with}\quad X = \begin{pmatrix} 0 & X_2\\ {}^tX_2 & Y \end{pmatrix},\quad Y_2 = \begin{pmatrix} 0 & 1\\ 1 & Y \end{pmatrix}.
Since \chi(Y_2) = \varepsilon_p^{-2}\left(\frac{-1}{p}\right) = 1 by (4.70), it follows from the last relation and (6.91) that

(6.93)\qquad \chi(X) = \chi(Y_2) = 1 \quad\text{and}\quad r_p(X) = 2.

On the other hand, if r_p(X_2) = 0, then we obviously have

(6.94)\qquad \chi(X) = \chi(Y_1) \quad\text{and}\quad r_p(X) = r_p(Y_1).



If we now use (6.92) along with (6.30) and (6.91), we find that \chi(A) = \chi(Z_1)\chi(X)
= \chi(Z)\chi(X). This implies that the inner sum in (6.90) is equal to

(6.95)\qquad p^d\,\chi(Z)^{-k}\sum_{\substack{X \in S_{a-d}(\mathbf F_p)\\ r_p(X)=r-d}}\chi(X)^{-k} = p^d\,\chi(Z)^{-k}\,u(r-d).

The relations (6.93) and (6.94) show that the following equalities hold for u(\rho):

u(0) = 1, \quad u(1) = 0, \quad u(2) = p\,(p^{a-d-1}-1), \quad u(\rho) = 0 \ \text{for}\ \rho > 2.

From this, (6.90), and (6.95) we find that l_p^k(r,a) is equal to

(6.96)\qquad p^{r-1}\,(p^{a-r+1}-1)\,l_p^k(r-2,\,a-1) + p^{r}\,l_p^k(r,\,a-1).

Thus, if we set a = r and use (6.88), we obtain

l_p^k(r,r) = p^{r-1}\,(p^{r-1}-1)\,l_p^k(r-2,\,r-2).

From this relation, using induction, we obtain (6.89), since l_p^k(0,0) = 1 and l_p^k(1,1) = 0
(recall that k is odd by assumption). □

We return to the proof of Proposition 6.20. We first make the substitution a = n-b
in (6.87) and use (6.88). This transforms the system to the form

\sum_{j=0}^{i-b}\alpha_{ij}\,l_p^k(i-j-b,\,i-j-b)\,\frac{\varphi_{n-b}}{\varphi_{i-j-b}\,\varphi_{n-i+j}} = \begin{cases}1 & \text{for } b = i,\\ 0 & \text{for } 0 \le b < i.\end{cases}

After making the further substitution \varphi_{n-b} \to \varphi_{n-i}\ (\varphi_s = \varphi_s(p)) and i-b-j \to r, we
obtain the new system of equalities

\sum_{\substack{j,r\ge 0\\ j+r=i-b}}\alpha_{ij}\,\frac{\varphi_{n-i}\,l_p^k(r,r)}{\varphi_{n-i+j}\,\varphi_r} = \begin{cases}1 & \text{for } i-b = 0,\\ 0 & \text{for } 0 < i-b \le i,\end{cases}

which is obviously equivalent to the congruence

(6.97)

Hence, the proof of (6.85) reduces to the inversion of the infinite formal series in this
congruence. With this in mind, we prove the following identity.
LEMMA 6.22. In the notation (4.110) and (6.83),

\sum_{b=0}^{a} \frac{\varphi^+_{2a}(p)}{\varphi^+_{2a-2b}(p)\,\varphi_{2b}(p)}\,l_p^k(2b,2b)\,v^{2b} = \prod_{j=1}^{a}\big(1 + p^{2j-1}v^{2}\big).

PROOF. From (6.89) and the definition of the polynomials (2.29) and (6.83) it
follows that

(6.98)\qquad \frac{l_p^k(2b,2b)}{\varphi_{2b}(p)} = \frac{p^{b^2}}{\varphi^+_{2b}(p)},

and from this the lemma is obtained by induction on a. □
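Since both sides of Lemma 6.22 are polynomials in v, the identity can be verified coefficient by coefficient for small a with exact rational arithmetic. The sketch below is ours, not the book's; it assumes formula (6.89) for l_p^k(2b,2b) and the definitions of φ_m(p) and φ_m^+(p) from (6.83).

```python
from fractions import Fraction

def phi(m, p):
    # φ_m(p) = (p - 1)(p^2 - 1)···(p^m - 1)
    out = 1
    for i in range(1, m + 1):
        out *= p**i - 1
    return out

def phi_plus(m, p):
    # φ_m^+(p): product of (p^i - 1) over even i ≤ m
    out = 1
    for i in range(2, m + 1, 2):
        out *= p**i - 1
    return out

def lk(r, p):
    # l_p^k(r, r) from (6.89): 0 for odd r, otherwise the product over j = 1..r/2
    if r % 2:
        return 0
    out = 1
    for j in range(1, r // 2 + 1):
        out *= p**(2*j - 1) * (p**(2*j - 1) - 1)
    return out

def poly_mul(f, g):
    h = [0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for jj, gj in enumerate(g):
            h[i + jj] += fi * gj
    return h

p, a = 3, 4
# left side: coefficient of v^{2b} is φ+_{2a} l_p^k(2b,2b) / (φ+_{2a-2b} φ_{2b})
lhs = [Fraction(0)] * (2 * a + 1)
for b in range(a + 1):
    lhs[2 * b] = Fraction(phi_plus(2*a, p) * lk(2*b, p),
                          phi_plus(2*a - 2*b, p) * phi(2*b, p))
# right side: product of the quadratics (1 + p^(2j-1) v^2)
rhs = [Fraction(1)]
for j in range(1, a + 1):
    rhs = poly_mul(rhs, [Fraction(1), Fraction(0), Fraction(p**(2*j - 1))])
assert lhs == rhs
```

The check succeeds for any small p and a tried; in fact the identity is an instance of the q-binomial theorem with q = p².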

If we now set v^2 = -p^{-1} in Lemma 6.22, we obtain the system of equalities

\sum_{b=0}^{a}\frac{(-p)^{a-b}}{\varphi^+_{2a-2b}(p)}\cdot\frac{l_p^k(2b,2b)}{\varphi_{2b}(p)} = \begin{cases}1 & \text{for } a = 0,\\ 0 & \text{for } a > 0,\end{cases}

which, together with (6.89), implies the formal power series identity

\left(\sum_{r=0}^{\infty}\frac{l_p^k(r,r)}{\varphi_r(p)}\,v^r\right)\left(\sum_{c=0}^{\infty}\frac{(-p)^c}{\varphi^+_{2c}(p)}\,v^{2c}\right) = 1.

If we compare this identity with (6.97), by (6.86) we obtain the congruence

PROBLEM 6.23. Let \Pi_\pm = \Pi_\pm(p) be the Frobenius elements of the Hecke ring
\mathbf L^n_{0,p}, and for d \ge 1 let \Pi_\pm^{-d} be defined by the recursive relations (6.10). Prove that the
negative squares of the Frobenius elements are given by the formulas

\Pi_-^{-2} = (p^{(n)}\Delta)^{-2}\sum_{j=0}^{n}\alpha_{nj}(p)\sum_{a=j}^{n}\Pi^{(a-j)}_{0,n},\qquad
\Pi_+^{-2} = (p^{(n)}\Delta)^{-2}\sum_{j=0}^{n}\alpha_{nj}(p)\sum_{a=j}^{n}\Pi^{(a-j)}_{n,0},

where \alpha_{nj}(p) are the coefficients (6.70). For d > 1 show that \Pi_-^{-d} \neq (\Pi_-^{-1})^d and
\Pi_+^{-d} \neq (\Pi_+^{-1})^d. For p an odd prime obtain analogous formulas for the elements
\hat\Pi_\pm^{-d} that are defined by the recursive relations (6.11).

[Hint: Use Propositions 6.17 and 6.20.]


5. Symmetric factorization of Rankin polynomials. Here we complete our study
of the factorization of R_p^n(v) and \hat R_p^n(v) by obtaining factorizations that are invariant
relative to the anti-automorphism *. These factorizations will play a fundamental
role in the applications. To derive them we use the factorizations in Proposition
6.9, the multiplication formulas in Propositions 6.17 and 6.20, and certain additional
considerations.

As in the rest of this section, we suppose that n \in \mathbf N, p is a fixed prime number,
and (p, q) = 1.

THEOREM 6.24. Let R_p^n(v) \in \mathbf L^n_p[v] and \hat R_p^n(v) \in \mathbf E^n_p(q,\chi)[v] be the polynomials
defined in (6.27). These polynomials have the following factorizations over the Hecke
ring \hat{\mathbf L}{}^n_{0,p}:

(6.99)\qquad R_p^n(v) = X_-(v)\left(\sum_{i=0}^{n}(-1)^i b_i v^i\right)X_+(v),

(6.100)\qquad \hat R_p^n(v) = X_-(v)\left(\sum_{i=0}^{n}(-1)^i \hat b_i v^i\right)X_+(v),

where, in the notation (3.62) and (4.106), for i = 0, 1, \ldots, n

(6.101)\qquad X_-(v) = X^n_{-,p}(v) = \sum_{i=0}^{n}(-1)^i\,(p^{(n)}\Delta)^{-1}\,\Pi_-\Pi_{n-i}\,v^i,

(6.102)\qquad X_+(v) = X^n_{+,p}(v) = \sum_{i=0}^{n}(-1)^i\,(p^{(n)}\Delta)^{-1}\,\Pi_i\Pi_+\,v^i,

(6.103)\qquad b_i = b_i^n(p) = p^{-(i)-i(n-i)}\,\Delta^{-1}\sum_{j=0}^{i}\alpha_{ij}\,\Pi^{(i-j)}_{n,0}

with coefficients \alpha_{ij} = \alpha_{ij}(p) defined by (6.70), and

(6.104)\qquad \hat b_i = \hat b_i^n(p) = p^{-(i)-i(n-i)}\,\Delta^{-1}\sum_{j=0}^{i}\alpha_{ij}\,\Pi^{(i-j)}_{n,0}(k)

with coefficients \alpha_{ij} = \alpha_{ij}(p) defined by (6.86).


We first prove a lemma.
LEMMA 6.25. The following formal power series identities hold over the ring \mathbf L^n_{0,p}:

(6.105)\qquad X_-(v)^{-1} = \sum_{d=0}^{\infty} p^{-dn}\,t_-(p^d)\,v^d, \qquad X_+(v)^{-1} = \sum_{d=0}^{\infty} p^{-dn}\,t_+(p^d)\,v^d,

where X_-(v) and X_+(v) are the polynomials (6.101) and (6.102), and

(6.106)\qquad t_-(p^d) = t^n_-(p^d) = \sum_{\substack{D \in \Lambda^n\backslash M_n/\Lambda^n\\ \det D = \pm p^d}} (U(D^*))_{\Gamma_0},

(6.107)\qquad t_+(p^d) = t^n_+(p^d) = \sum_{\substack{D \in \Lambda^n\backslash M_n/\Lambda^n\\ \det D = \pm p^d}} (U(D))_{\Gamma_0}.

PROOF. The two identities in (6.105) have analogous proofs; moreover, the anti-automorphism
* (applied coefficient by coefficient) takes one into the other (see (6.33)).
Hence, it suffices to prove, say, the second identity in (6.105). From (6.46) and
Proposition 5.5 it follows that all of the coefficients on both sides of this identity are
contained in the ring \mathbf C^+ = \mathbf C^+_{0,p}. Thus, by Theorem 6.5(2), the identity will be proved
if we verify that

(6.108)\qquad \left(\sum_{i=0}^{n}(-1)^i\,\Phi(x_i)\,v^i\right)^{-1} = \sum_{d=0}^{\infty} p^{-dn}\,\Phi(t_+(p^d))\,v^d,

where the x_i are the coefficients of the polynomial (6.102) and \Phi = \Phi^n_p. By Lemma
3.34, we have \Phi(x_i) = p^{(i)}\,\pi_i(p). To compute the coefficients on the right in (6.108)
we use the expansions (5.23) of the double cosets; using the definitions, we see that
this gives the formulas

\Phi(t_+(p^d)) = \sum_{\substack{D \in \Lambda^n\backslash M_n/\Lambda^n\\ \det D = \pm p^d}} \cdots

We note that if D \in M_n and \det D = \pm m, then

(6.109)

In fact, the index on the left obviously depends only on the double coset \Lambda^n D\Lambda^n,
and so, by Lemma 2.2, the matrix D can be replaced by its elementary divisor matrix
\mathrm{ed}(D) = \mathrm{diag}(d_1,\ldots,d_n), for which the index is

\prod_{1\le i\le j\le n} d_i d_j = (d_1\cdots d_n)^{n+1} = m^{n+1}.

Thus,

\Phi(t_+(p^d)) = p^{d(n+1)}\,t(p^d),

where t(p^d) is the element (2.10) of the ring \mathbf H^n. These formulas show that the identity
(6.108) that we want to prove is nothing other than the identity in Proposition 2.22
with v replaced by pv. □
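The elementary-divisor computation above rests on the arithmetic identity ∏_{1≤i≤j≤n} d_i d_j = (d_1⋯d_n)^{n+1}, which can be checked numerically; the following small sketch (ours, not the book's) does so for a few divisor tuples.

```python
from itertools import combinations_with_replacement
from math import prod

def index_product(d):
    # ∏_{1 ≤ i ≤ j ≤ n} d_i d_j for d = (d_1, ..., d_n)
    return prod(d[i] * d[j]
                for i, j in combinations_with_replacement(range(len(d)), 2))

# each d_i appears in (n - i + 1) + i = n + 1 pairwise products,
# so the whole product is (d_1 ... d_n)^(n+1)
for d in [(1, 2, 4), (3, 3), (1, 5, 5, 25)]:
    n = len(d)
    assert index_product(d) == prod(d) ** (n + 1)
```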

PROOF OF THE THEOREM. Using the notation of Proposition 6.9, we define the
following polynomials:

Y_-(v) = \sum_{i=0}^{n}(-1)^i\,\Pi_+^{-2}\Pi_{n-i}\Pi_+\,v^i,\qquad Y_+(v) = \sum_{i=0}^{n}(-1)^i\,\Pi_-\Pi_i\Pi_-^{-2}\,v^i,

\hat Y_-(v) = \sum_{i=0}^{n}(-1)^i\,(\Pi_+^*)^{-1}\Pi_{n-i}\Pi_+\,v^i,\qquad \hat Y_+(v) = \sum_{i=0}^{n}(-1)^i\,\Pi_-\Pi_i(\Pi_-^*)^{-1}\,v^i.

Then, by Proposition 6.9, we have the factorizations

(6.110)\qquad R_p^n(v) = X_-(v)\,Y_+(v) = Y_-(v)\,X_+(v),\qquad \hat R_p^n(v) = X_-(v)\,\hat Y_+(v) = \hat Y_-(v)\,X_+(v),

and hence

X_-(v)^{-1}\,Y_-(v) = Y_+(v)\,X_+(v)^{-1} \quad\text{and}\quad X_-(v)^{-1}\,\hat Y_-(v) = \hat Y_+(v)\,X_+(v)^{-1}.

If we let B(v) and \hat B(v) denote these formal power series, and let (-1)^i b_i and (-1)^i \hat b_i
denote their coefficients, we obtain the identities

(6.111)\qquad B(v) = \sum_{i=0}^{\infty}(-1)^i b_i v^i = X_-(v)^{-1}\,Y_-(v) = Y_+(v)\,X_+(v)^{-1},

\hat B(v) = \sum_{i=0}^{\infty}(-1)^i \hat b_i v^i = X_-(v)^{-1}\,\hat Y_-(v) = \hat Y_+(v)\,X_+(v)^{-1},

from which, using (6.110), we have:

R_p^n(v) = X_-(v)\,B(v)\,X_+(v) \quad\text{and}\quad \hat R_p^n(v) = X_-(v)\,\hat B(v)\,X_+(v).

Thus, to prove the theorem it suffices to verify that B(v) and \hat B(v) are actually polynomials
of degree n, and that their coefficients are given by (6.103) and (6.104). To do
this we need some preliminary observations.
The map

(6.112)\qquad S_0 \ni M = \begin{pmatrix} A & B\\ 0 & D \end{pmatrix} \to s(M) = \det D\,(\det A)^{-1},

where S_0 is the triangular subgroup (3.43) of S^n, is obviously a homomorphism
from S_0 to the multiplicative group of positive rational numbers. For p a prime, the
p-order of s(M) will be referred to as the p-signature of M and of the corresponding
double coset (M)_{\Gamma_0}, and we shall denote it by σ_p:
(6.113)

A linear combination of double cosets (M_i)_{\Gamma_0} is said to be σ_p-homogeneous if all of the
double cosets that occur with nonzero coefficients have the same p-signature; in that
case this p-signature is called the p-signature of the linear combination of double cosets,
again denoted σ_p. Clearly, if two linear combinations X and Y are σ_p-homogeneous,
then so is their product X·Y, and we have

(6.114)\qquad σ_p(X\cdot Y) = σ_p(X) + σ_p(Y).

Returning to the proof of the theorem, we consider the subspaces \mathbf L_0^-, \mathbf L_0^+, \mathbf L_0^0, I^-,
and I^+ of the space \mathbf L_0 = \mathbf L^n_{0,p} that consist of all (finite) linear combinations of σ_p-homogeneous
elements whose p-signature is, respectively, nonpositive, nonnegative,
zero, negative, and positive. The spaces \mathbf L_0^-, \mathbf L_0^+, and \mathbf L_0^0 are clearly subrings of \mathbf L_0,
and I^- and I^+ are two-sided ideals of the rings \mathbf L_0^- and \mathbf L_0^+, respectively. From
the definitions, it follows that the following elements of \mathbf L_0 are σ_p-homogeneous with
p-signature as given below:

(6.115)\qquad σ_p(\Pi_a) = 2a-n,\quad σ_p(\Pi^{(b)}_{a,0}) = σ_p(\Pi^{(b)}_{a,0}(k)) = 2(a+2b-n),\quad σ_p(\Delta) = 0,\quad σ_p(t_-(p^d)) = -2d,\quad σ_p(t_+(p^d)) = 2d.

In particular, all of the coefficients of the series X_-(v)^{-1} lie in the ring \mathbf L_0^-, and all
of the coefficients of X_+(v)^{-1} lie in \mathbf L_0^+. On the other hand, the above formulas and
Propositions 6.17 and 6.20 show that all of the coefficients of the polynomials Y_+(v)
and \hat Y_+(v) lie in \mathbf L_0^+, and all of the coefficients of the polynomials Y_-(v) and \hat Y_-(v)

lie in \mathbf L_0^-. From these observations and (6.111) it follows that all of the coefficients in
the series B(v) and \hat B(v) are contained both in \mathbf L_0^- and in \mathbf L_0^+, i.e.,

(6.116)\qquad b_i, \hat b_i \in \mathbf L_0^- \cap \mathbf L_0^+ = \mathbf L_0^0 \quad\text{for } i = 0, 1, \ldots.

We now examine the coefficients b_i modulo I^-. By (6.106) and (6.115), all of
the coefficients of X_-(v)^{-1} except for the constant term lie in the ideal I^-, and the
constant term is 1; hence, if we pass to congruence modulo I^- coefficient by coefficient
in the equation B(v) = X_-(v)^{-1}Y_-(v), we find that B(v) \equiv Y_-(v) \pmod{I^-}, i.e.,

(6.117)\qquad b_i \equiv \Pi_+^{-2}\Pi_{n-i}\Pi_+ \pmod{I^-} \quad\text{for } 0 \le i \le n,

(6.118)\qquad b_i \equiv 0 \pmod{I^-} \quad\text{for } i > n.

Since \mathbf L_0^0 \cap I^- = \{0\}, it follows from (6.116) and (6.118) that b_i = 0 for i > n.
Hence, B(v) is a polynomial of degree n. Furthermore, it follows from (6.115) that
\Delta^{-1}\Pi^{(b)}_{a,0} \in I^- if a < n. Thus, from (6.117) and (6.69) we obtain the following
congruences for 0 \le i \le n:

b_i \equiv p^{-(i)-i(n-i)}\,\Delta^{-1}\sum_{j=0}^{i}\alpha_{ij}\,\Pi^{(i-j)}_{n,0} \pmod{I^-}.

If we take into account that in each of these congruences both sides are contained in
\mathbf L_0^0 and if we recall that \mathbf L_0^0 \cap I^- = \{0\}, we see that the two sides are actually equal. We
then carry out an analogous argument with the coefficients \hat b_i, using (6.85); we find
that for 0 \le i \le n the coefficients \hat b_i satisfy the equalities (6.104), and \hat b_i = 0 for
i > n. □
CHAPTER 4

Hecke Operators

Modular forms arose as a result of abstracting the analytic and group properties
of the generating series for the number of integral representations of positive definite
integral quadratic forms by one another. Thus, the basic object of arithmetic interest
in the theory and application of modular forms was and continues to be the Fourier
coefficients regarded as a number-theoretic function. As we saw in §§1.1 and 1.5 of
Chapter 3, the Hecke rings of the symplectic group act as rings of linear operators on
spaces of modular forms. The Hecke operators, which act on modular forms and hence
on their Fourier coefficients, make it possible to carry the various relations between
elements of the Hecke rings over to these number-theoretic functions, thereby revealing
multiplicative properties of the Fourier coefficients. These properties are reflected in
the Euler products of the Dirichlet series (zeta-functions) that are constructed from
the Fourier coefficients of eigenfunctions of the Hecke operators.

§1. Hecke operators for congruence subgroups of the modular group


In this section we define the Hecke operators for a broad class of congruence
subgroups, and we examine some of their properties.
1. Hecke operators. Just as in §3.1 of Chapter 3, when we speak of Hecke rings
for congruence subgroups K \subset \Gamma^n, we have to impose certain restrictions on K. In
addition, one must restrict the class of characters of the group K in order to define
the representations of these rings on modular forms. Thus, let K be a congruence
subgroup of \Gamma^n, and let \chi be a congruence character of K, i.e., a character whose kernel
contains a principal congruence subgroup \Gamma^n(q) \subset K. As in §3.1 of Chapter 3, we
suppose that K satisfies the q-symmetry condition ((3.4) of Chapter 3). As for \chi, we
suppose that it can be extended to a homomorphism X from the group S(K) to \mathbf C^*:

(1.1)\qquad X: S(K) \to \mathbf C^*, \qquad X|_K = \chi.

For brevity, we shall refer to a pair (K, \chi) satisfying these conditions as a q-regular
pair (of degree n).

We consider the space \mathfrak M_k(K, \chi) of modular forms of weight k and character \chi
for the group K, where k is an integer and (K, \chi) is a q-regular pair. To every element
of the Hecke ring L(K) = D_{\mathbf Q}(K, S(K)) we shall associate a linear operator on this
space. According to the scheme in §1.5 of Chapter 3, to do this we first need a suitable
automorphy factor of the group S(K). For M = \begin{pmatrix} A & B\\ C & D \end{pmatrix} \in S(K) and Z \in \mathbf H_n we
set

(1.2)\qquad \varphi_{k,\chi}(M, Z) = X(M)\,\det(CZ + D)^{k}.


226 4. HECKE OPERATORS

By Lemmas 4.2 and 4.1(2) of Chapter 1, the function \varphi_{k,\chi} is an automorphy factor of
S(K) on \mathbf H_n with values in \mathbf C^*. Then, by Lemma 4.1(3) of Chapter 1, we can define
an action of the group S(K) on functions F: \mathbf H_n \to \mathbf C:

(1.3)\qquad S(K) \ni M:\ F \to F|_{\varphi_{k,\chi}}M = F|_{k,\chi}M = \varphi_{k,\chi}(M,Z)^{-1}F(M\langle Z\rangle) = X(M)^{-1}F|_k M,

where |_k M is the operator (3.14) of Chapter 2, which satisfies the relations

(1.4)\qquad F|_{k,\chi}M_1|_{k,\chi}M_2 = F|_{k,\chi}M_1M_2 \qquad (M_i \in S(K)).

Since X(M) = \chi(M) if M \in K, the condition (2.4) of Chapter 2 in the definition
of modular forms of weight k and character \chi for K can be rewritten in the above
notation in the form

(1.5)\qquad F|_{k,\chi}M = F \quad\text{for all } M \in K.
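The relation (1.4) ultimately comes from the cocycle property of the automorphy factor det(CZ+D)^k. For degree n = 1 and trivial character this property can be tested numerically; the sketch below is illustrative only (2-by-2 integer matrices stored as tuples, function names ours) and checks j(M_1M_2, z) = j(M_1, M_2⟨z⟩)·j(M_2, z).

```python
def act(M, z):
    # fractional linear action M<z> = (az + b) / (cz + d)
    a, b, c, d = M
    return (a*z + b) / (c*z + d)

def j(M, z, k):
    # weight-k automorphy factor (cz + d)^k for n = 1
    a, b, c, d = M
    return (c*z + d)**k

def mul(M, N):
    a, b, c, d = M; e, f, g, h = N
    return (a*e + b*g, a*f + b*h, c*e + d*g, c*f + d*h)

k = 4
z = 0.3 + 1.7j
M1 = (1, 1, 0, 1)   # the translation T
M2 = (0, -1, 1, 0)  # the inversion S
# cocycle property, checked in both orders of composition
for A, B in [(M1, M2), (M2, M1)]:
    M = mul(A, B)
    assert abs(j(M, z, k) - j(A, act(B, z), k) * j(B, z, k)) < 1e-9
```

Applying |_k M twice and using this cocycle relation is exactly how (1.4) is obtained for general n.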
Thus, nothing prevents us from defining the action of the ring L(K) on \mathfrak M_k(K,\chi) in
the same way as we did for general Hecke rings in §§1.1 and 1.5 of Chapter 3. The only
difference is that in the general case the automorphic forms were defined using only
functional equations of the type (1.5), while functions in \mathfrak M_k(K,\chi) must also satisfy
certain analytic conditions. Thus, if F \in \mathfrak M_k(K,\chi) and T = \sum_i a_i(KM_i) \in L(K),
we set

(1.6)\qquad F|T = F|_{k,\chi}T = \sum_i a_i\,F|_{k,\chi}M_i.

From (1.4) and (1.5) it immediately follows that the function F|T does not depend
on the choice of representatives M_i in the left cosets KM_i; from the definition of the
Hecke rings and from (1.4) we then find that

(F|T)|_{k,\chi}M = \sum_i a_i\,F|_{k,\chi}M_i|_{k,\chi}M = \sum_i a_i\,F|_{k,\chi}M_iM = F|T, \quad\text{if } M \in K,

i.e., F|T satisfies all of the functional equations (1.5) for modular forms in \mathfrak M_k(K,\chi)
whenever F does. From (1.3), (1.6), and Proposition 3.8 of Chapter 2, we see that the
operator |T also preserves the analytic properties of modular forms.
We now suppose that K \subset \Gamma^n_0(4), and consider the space \mathfrak M_{k/2}(K,\chi) of modular
forms of half-integer weight k/2. According to (3.19) of Chapter 2, in this case we can
take the function

(1.7)\qquad \varphi_{k/2,\chi}(\widetilde M, Z) = X(\widetilde M)\,\varphi(Z)^{k},

where \widetilde M = (M, \varphi) \in \widetilde S(K), as the automorphy factor of the group \widetilde S(K) =
P^{-1}(S(K)), where P is the homomorphism (4.1) of Chapter 3, on \mathbf H_n with values
in \mathbf C^*. Using (3.21) and (3.22) of Chapter 2, we see that if for any function F on \mathbf H_n
we set

(1.8)\qquad F|_{k/2,\chi}\widetilde M = \varphi_{k/2,\chi}(\widetilde M, Z)^{-1}F(M\langle Z\rangle) = X(\widetilde M)^{-1}F|_{k/2}\widetilde M,

then for any \widetilde M_1 and \widetilde M_2 \in \widetilde S(K) we have

(1.9)\qquad F|_{k/2,\chi}\widetilde M_1|_{k/2,\chi}\widetilde M_2 = F|_{k/2,\chi}\widetilde M_1\widetilde M_2
§1. HECKE OPERATORS FOR CONGRUENCE SUBGROUPS 227

and the condition (2.5) of Chapter 2 in the definition of a modular form in \mathfrak M_{k/2}(K,\chi)
can be written in the form

(1.10)\qquad F|_{k/2,\chi}\widetilde M = F \quad\text{for all } \widetilde M \in \widetilde K = j(K),

where j is the monomorphism (4.2) of Chapter 3. As for the ring L(K), by Lemma
3.4 of Chapter 2 and Lemma 1.7 of Chapter 3 it can be lifted to the ring \widetilde L(K) =
D_{\mathbf Q}(\widetilde K, \widetilde S(K)). By analogy with (1.6), we find that the formula

(1.11)\qquad F|T = F|_{k/2,\chi}T = \sum_i \alpha_i\,F|_{k/2,\chi}\widetilde M_i

for T = \sum_i \alpha_i(\widetilde K\widetilde M_i) \in \widetilde L(K) gives a representation of \widetilde L(K) in the space of modular
forms \mathfrak M_{k/2}(K,\chi).
Thus, from the above observations, the definition of modular forms, and Proposition
1.14 of Chapter 3 we have the following

PROPOSITION 1.1. Let (K,\chi) be a q-regular pair of degree n \ge 1, let w = k or
k/2 be an integer or half-integer, and suppose that K \subset \Gamma^n_0(4) if w = k/2. Then
any operator |_{w,\chi}\tau, where \tau \in L(K) or \tau \in \widetilde L(K) depending on whether w = k or
w = k/2, respectively, takes the space \mathfrak M_w(K,\chi) to itself. The map \tau \to |_{w,\chi}\tau is a
linear homomorphism from the corresponding Hecke ring to the ring of endomorphisms
of \mathfrak M_w(K,\chi). In particular,

(1.12)\qquad F|\tau_1|\tau_2 = F|\tau_1\tau_2 \quad\text{for } F \in \mathfrak M_w(K,\chi).

The operators |_{w,\chi}\tau on the space \mathfrak M_w(K,\chi) are called Hecke operators.
Our definition (1.6) and (1.11) of the Hecke operators is somewhat arbitrary, since
the extension X of the character \chi to S(K) can be chosen in different ways. Although
this element of choice has little effect on the Hecke operators, for convenience in later
computations we would like to remove it. We first describe all possible extensions of \chi
to S(K).

LEMMA 1.2. Let (K,\chi) be a q-regular pair of degree n \ge 1, and let \rho be an arbitrary
homomorphism from S(\Gamma^n(q)) to \mathbf C^* that is trivial on \Gamma^n(q). Then there exists a unique
homomorphism X = X_{\rho,\chi} from S(K) to \mathbf C^* whose restriction to K coincides with \chi and
whose restriction to S(\Gamma^n(q)) coincides with \rho.
PROOF. By assumption, there exists an extension X_0 of the homomorphism \chi:
K \to \mathbf C^* to S(K). This implies that the character \chi satisfies the following condition:

(1.13) if M\gamma = \gamma'M', where M, M' \in S(\Gamma^n(q)) and \gamma, \gamma' \in K, then \chi(\gamma) = \chi(\gamma').

In fact, the equality M\gamma = \gamma'M' implies that X_0(M)\chi(\gamma) = \chi(\gamma')X_0(M'), and Theorem
3.3(3) of Chapter 3 with K_1 = \Gamma^n(q) implies that X_0(M) = X_0(M'), since X_0 as
well as \chi is trivial on \Gamma^n(q); hence, \chi(\gamma) = \chi(\gamma').

According to (3.5) of Chapter 3, any matrix M \in S(K) can be written in the form

(1.14)\qquad M = \gamma N, \quad\text{where } \gamma \in K \text{ and } N \in S(\Gamma^n(q)).

We then set

(1.15)\qquad X(M) = X_{\rho,\chi}(M) = \chi(\gamma)\,\rho(N).

If M = \gamma N = \gamma_1 N_1 are two decompositions (1.14), then \gamma_1^{-1}\gamma = N_1N^{-1} \in K \cap
S(\Gamma^n(q)) = \Gamma^n(q), and hence \chi(\gamma_1) = \chi(\gamma) and \rho(N_1) = \rho(N). Thus, X(M) does

not depend on how M is written in the form (1.14). The map X: S(K) \to \mathbf C^* clearly
coincides with \chi on K, and with \rho on S(\Gamma^n(q)). We check that X is a homomorphism.
If M = \gamma N and M_1 = \gamma_1 N_1 are two matrices in S(K) written in the form (1.14), then,
by (3.4) of Chapter 3, we can write N\gamma_1 = \gamma_1'N', where \gamma_1' \in K and N' \in S(\Gamma^n(q)). By
(1.13) we have \chi(\gamma_1') = \chi(\gamma_1). From Theorem 3.3(3) of Chapter 3 with K_1 = \Gamma^n(q) it
follows that \rho(N') = \rho(N). From these relations and (1.15) we obtain

X(MM_1) = X(\gamma N\gamma_1 N_1) = X(\gamma\gamma_1'N'N_1)
= \chi(\gamma\gamma_1')\,\rho(N'N_1) = \chi(\gamma)\chi(\gamma_1)\,\rho(N)\rho(N_1) = X(M)X(M_1).

This proves the existence of the homomorphism X_{\rho,\chi}. Its uniqueness is obvious. □

According to the lemma, we can fix an extension X of \chi to S(K) by arbitrarily
giving the restriction \rho of X to S(\Gamma^n(q)). We set

(1.16)\qquad \rho_w(M) = r(M)^{-wn+(n)}, \quad\text{if } M \in S(\Gamma^n(q)),

and we use \rho = \rho_w in (1.15) to fix once and for all the normalized extension

(1.17)

to S(K) of the character \chi of K, where (K,\chi) is a q-regular pair. The corresponding
automorphy factors (1.2) and (1.7) will be denoted \varphi_{w,\chi}, and the operators (1.3) and
(1.8) will be denoted |_{w,\chi}. Finally, the Hecke operators (1.6) and (1.11) on \mathfrak M_w(K,\chi)
with X = X_{w,\chi} will be denoted |_{w,\chi}T, or simply |T if w and \chi are clear from the context:

(1.18)

and we call these the normalized Hecke operators.


We now show that the normalized Hecke operators are compatible with the natural
imbeddings of spaces of modular forms and with the natural isomorphisms of Hecke
rings described in Theorem 3.3(5) of Chapter 3.

PROPOSITION 1.3. Let (K,\chi) and (K_1,\chi_1) be two q-regular pairs of degree n. Suppose
that K_1 \subset K, and the restriction of \chi to K_1 coincides with \chi_1. Then the following
equality holds for any modular form F \in \mathfrak M_k(K,\chi) \subset \mathfrak M_k(K_1,\chi_1), where k is an integer,
and for any T \in L(K):

F|_{k,\chi}T = F|_{k,\chi_1}\varepsilon(T),

where \varepsilon: L(K) \to L(K_1) is the isomorphism of Hecke rings in Theorem 3.3(5) of Chapter
3. In particular, the subspace \mathfrak M_k(K,\chi) of \mathfrak M_k(K_1,\chi_1) is invariant under all of the Hecke
operators in L(K_1).

PROOF. Let T = \sum_i a_i(KM_i). By part (1) of Theorem 3.3 in Chapter 3, we may
suppose that all of the M_i lie in the group S(K_1). Then parts (4) and (5) of the same
theorem imply that \varepsilon(T) = \sum_i a_i(K_1M_i) (recall that \varepsilon coincides with the map (1.27)
of Chapter 3). Note that the restriction to S(K_1) of the normalized homomorphism
X_{k,\chi} coincides with X_{k,\chi_1}, since these two maps agree on the subgroups K_1 and \Gamma^n(q)
that together generate S(K_1); this implies that |_{k,\chi}M = |_{k,\chi_1}M for M \in S(K_1). Thus,
for F \in \mathfrak M_k(K,\chi) we obtain

F|_{k,\chi}T = \sum_i a_i F|_{k,\chi}M_i = \sum_i a_i F|_{k,\chi_1}M_i = F|_{k,\chi_1}\varepsilon(T). \quad □

Under the conditions of Proposition 1.3, suppose that q is divisible by 4, K_1 \subset
K \subset \Gamma^n_0(4), and the Hecke rings

(1.19)\qquad \widetilde L(K) = D_{\mathbf Q}(\widetilde K, \widetilde S(K)) \quad\text{and}\quad \widetilde L(K_1) = D_{\mathbf Q}(\widetilde K_1, \widetilde S(K_1))

are defined as in (4.3) of Chapter 3. By Theorem 3.3(1)-(2) of Chapter 3, the Hecke
pairs for the rings (1.19) satisfy the conditions (1.26) of Chapter 3; hence, there exists
a monomorphism of the form (1.27) of Chapter 3:

(1.20)\qquad \widetilde\varepsilon: \widetilde L(K) \to \widetilde L(K_1),

which is also compatible with the action of the corresponding Hecke operators on the
spaces of modular forms.

PROPOSITION 1.4. Suppose that (K,\chi) and (K_1,\chi_1) are q-regular pairs, and q is
divisible by 4. Further suppose that K_1 \subset K \subset \Gamma^n_0(4), and the restriction of \chi to
K_1 coincides with \chi_1. Then the following equality holds for any modular form F \in
\mathfrak M_{k/2}(K,\chi) \subset \mathfrak M_{k/2}(K_1,\chi_1), where k is odd:

F|_{k/2,\chi}T = F|_{k/2,\chi_1}\widetilde\varepsilon(T) \quad\text{for } T \in \widetilde L(K).

Moreover, the map \widetilde\varepsilon gives an isomorphism between the even subrings \widetilde E(K) and \widetilde E(K_1) of
the Hecke rings (1.19), and the subspace \mathfrak M_{k/2}(K,\chi) of \mathfrak M_{k/2}(K_1,\chi_1) is invariant under
the Hecke operators of \widetilde E(K_1).

PROOF. From Lemma 1.8, Proposition 4.3, and Theorem 3.3(4) of Chapter 3 it
follows that the restriction of \widetilde\varepsilon to the even subrings truly is an isomorphism. The other
parts of the proposition are proved in the same way as Proposition 1.3. □

2. Invariant subspaces and eigenfunctions. In the spaces of modular forms we
now define some standard subspaces that are invariant under the Hecke operators. In
particular, we prove that the spaces of cusp forms are spanned by eigenfunctions of all
of the Hecke operators. We begin with the cusp forms.

PROPOSITION 1.5. Let (K,\chi) be a q-regular pair of degree n, and let w = k or k/2,
where k is an integer. In the case w = k/2 we suppose that k is odd, q is divisible by 4,
and K \subset \Gamma^n_0(4). Then the subspace \mathfrak N_w(K,\chi) of cusp forms in \mathfrak M_w(K,\chi) is invariant
relative to all of the Hecke operators |_{w,\chi}\tau for \tau \in L(K) or \tau \in \widetilde L(K), depending on
whether w = k or w = k/2, respectively.

PROOF. The proposition follows from Proposition 1.1, the formulas (1.3) and
(1.8), and Theorem 3.13(3) of Chapter 2. □

We now define a multiplicative family of operators on \mathfrak M_w(K,\chi) that commute
with the Hecke operators. Let d \in \mathbf Z, (d,q) = 1. We let E(d) = E_n(d) denote one
of the matrices that satisfy the following conditions (such matrices exist by Lemma
3.2(1) of Chapter 3):

(1.21)\qquad E(d) \in \Gamma^n_0(q) \quad\text{and}\quad E(d) \equiv \begin{pmatrix} d^{-1}E_n & 0\\ 0 & dE_n \end{pmatrix} \pmod q.

For fixed d modulo q, it is clear that all such matrices belong to the same left (or right,
or double) coset of \Gamma^n_0(q) modulo \Gamma^n(q). This implies that the operator

|_w\tau(d): \mathfrak M_w(\Gamma^n(q), 1) \ni F \to F|_w\tau(d),

where -r(d) = E(d) or -r(d) = E(d) = r(E(d)), depending on whether w = k or


w = k/2, respectively, does not depend on the particular choice of E(d) satisfying
(1.21), but rather depends only on d modulo q. In addition, we have
(1.22) Flw-r(di}lw-r(d2) = Flw-r(di}-r(d2) = Flw-r(d1d2),
since E(d1)E(d2) =E(d1d2)(mod q).
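For n = 1 a matrix E(d) satisfying (1.21) can be produced explicitly: take a ≡ d^{-1} (mod q²), so that ad = 1 + q²m, and set E(d) = (a, q; qm, d), which lies in SL_2(Z) and is congruent to diag(d^{-1}, d) mod q. The sketch below (this particular construction is ours, not the book's) builds such matrices and checks the multiplicativity E(d_1)E(d_2) ≡ E(d_1d_2) (mod q) used in (1.22).

```python
def E(d, q):
    # sketch for n = 1: a matrix in SL_2(Z) congruent to diag(d^-1, d) mod q
    a = pow(d, -1, q * q)        # inverse of d modulo q^2
    m = (a * d - 1) // (q * q)   # a*d = 1 + q^2 m
    # det = a*d - q*(q*m) = 1; both off-diagonal entries divisible by q
    return (a, q, q * m, d)

def det(M):
    a, b, c, d = M
    return a*d - b*c

def mul(M, N):
    a, b, c, d = M; e, f, g, h = N
    return (a*e + b*g, a*f + b*h, c*e + d*g, c*f + d*h)

q = 12
for d1 in (5, 7, 11):
    for d2 in (5, 7, 11):
        M1, M2 = E(d1, q), E(d2, q)
        assert det(M1) == 1
        a, b, c, dd = M1
        assert b % q == 0 and c % q == 0
        assert (a * d1) % q == 1 and dd % q == d1 % q
        # E(d1)E(d2) ≡ E(d1 d2) (mod q) entrywise
        P, Q = mul(M1, M2), E((d1 * d2) % q, q)
        assert all((x - y) % q == 0 for x, y in zip(P, Q))
```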
LEMMA 1.6. Suppose that (K,\chi) is a q-regular pair of degree n, w = k or k/2, and
in the latter case q is divisible by 4 and K \subset \Gamma^n_0(4). Then for every d, (d,q) = 1:
(1) the map \gamma_1 \to \gamma = E(d)^{-1}\gamma_1E(d) gives an automorphism of the group K that
does not affect the character \chi, i.e., \chi(\gamma) = \chi(\gamma_1);
(2) the subspaces \mathfrak M_w(K,\chi) and \mathfrak N_w(K,\chi) in \mathfrak M_w(\Gamma^n(q),1) are invariant relative to
the operator |_w\tau(d);
(3) the operator |_w\tau(d) on \mathfrak M_w(K,\chi) commutes with all of the Hecke operators |_{w,\chi}\tau
for \tau \in L(K) or \tau \in \widetilde E(K), depending on whether w = k or w = k/2, respectively.
PROOF. Since obviously M = d\cdot E(d) \in S(\Gamma^n(q)) and \Gamma^n(q)M\Gamma^n(q) = M\Gamma^n(q),
it follows from Theorem 3.3(4) of Chapter 3 that KMK = KM = MK. Thus, for any
\gamma_1 \in K there exists \gamma \in K such that \gamma_1M = M\gamma, and conversely. From (1.13) it then
follows that \chi(\gamma) = \chi(\gamma_1). Since \gamma = M^{-1}\gamma_1M = E(d)^{-1}\gamma_1E(d), the first part of the
lemma is proved.

Since the operator |_w\tau(d) obviously takes cusp forms to cusp forms, it suffices to
prove the second part for the space \mathfrak M_w(K,\chi). In the case w = k this means that
if a function F \in \mathfrak M_k(\Gamma^n(q),1) satisfies the condition F|_k\gamma = \chi(\gamma)F for \gamma \in K,
then the function F|_kE(d) also satisfies this condition. But this is a consequence of
part (1) of the lemma and the relations (3.15) of Chapter 2, since if \gamma \in K, then
\gamma = E(d)^{-1}\gamma_1E(d) with \gamma_1 \in K, and

F|_kE(d)|_k\gamma = F|_kE(d)|_kE(d)^{-1}\gamma_1E(d) = F|_k\gamma_1|_kE(d)
= \chi(\gamma_1)F|_kE(d) = \chi(\gamma)F|_kE(d).

Because the map j: \Gamma^n_0(4) \to \widetilde\Gamma^n_0(4) is an isomorphism, it follows by (3.22) of Chapter
2 that the above argument carries over to the case w = k/2.
From part (1) and from Lemma 1.2 it easily follows that the map \gamma_1 \to \gamma =
E(d)^{-1}\gamma_1E(d) gives an automorphism of the group S(K) that does not affect the
homomorphism X = X_{k,\chi}, i.e., X(\gamma) = X(\gamma_1). If M \in S(\Gamma^n(q)), then it follows from
Theorem 3.3(3) of Chapter 3 that the matrices M and E(d)^{-1}ME(d) belong to the
same \Gamma^n(q)-double coset, and hence to the same K-double coset. From this and (3.4)
of Chapter 3 we conclude that the above automorphism of S(K) takes every double
coset KMK, M \in S(K), to itself. Thus, if M_1,\ldots,M_\mu are any set of representatives
of the left cosets K\backslash KMK, then so are E(d)^{-1}M_1E(d),\ldots,E(d)^{-1}M_\mu E(d). Using
the definitions (see (1.6), (1.3), (1.18), and (3.15) of Chapter 2), we find the following
relation for F \in \mathfrak M_k(K,\chi):

F|_kE(d)|_{k,\chi}(M)_K = \sum_{i=1}^{\mu} X_{k,\chi}(E(d)^{-1}M_iE(d))^{-1}F|_kM_iE(d)
= \left(\sum_{i=1}^{\mu} X_{k,\chi}(M_i)^{-1}F|_kM_i\right)\Big|_k E(d) = F|_{k,\chi}(M)_K|_kE(d),
§1. HECKE OPERATORS FOR CONGRUENCE SUBGROUPS 231

which proves part (3) for $\tau = (M)_K$. The general case follows from this case and from Lemma 1.5 of Chapter 3.
If $w = k/2$, then part (1) implies that $\widehat{E}(d)^{-1}\widehat{K}\widehat{E}(d) = \widehat{K}$, since in this case $E(d)$ and $K$ lie in $\Gamma^n_0(4)$. Suppose that $M \in S(P(q))$ and $r(M)$ is the square of a rational number. Then, as noted before, $E(d)^{-1}ME(d) = \gamma_1 M\gamma_2$ for some $\gamma_1, \gamma_2 \in \Gamma^n(q)$, and we can write $M\delta_2M^{-1} = \delta_1$, where $\delta_1 = E(d)\gamma_1$ and $\delta_2 = E(d)\gamma_2^{-1} \in \Gamma^n_0(4)$. From this, (4.6), and Proposition 4.3 of Chapter 3 it follows that
$$\widehat{\delta}_1 = \widehat{M\delta_2M^{-1}} = \widehat{M}\widehat{\delta}_2\widehat{M}{}^{-1}$$
for any $P$-preimage $\widehat{M}$ of $M$; hence,
$$\widehat{K}\widehat{E}(d)^{-1}\widehat{M}\widehat{E}(d)\widehat{K} = \widehat{K}\widehat{\gamma}_1\widehat{M}\widehat{\gamma}_2\widehat{K} = \widehat{K}\widehat{M}\widehat{K}.$$
Thus, if $\widehat{M}_1, \dots, \widehat{M}_\mu$ is a set of representatives of the left cosets $\widehat{K}\backslash\widehat{K}\widehat{M}\widehat{K}$, then so is $\widehat{E}(d)^{-1}\widehat{M}_1\widehat{E}(d), \dots, \widehat{E}(d)^{-1}\widehat{M}_\mu\widehat{E}(d)$. Part (3) for $w = k/2$ now follows from (1.11), (1.18), and (3.22) of Chapter 2. $\square$

Based on this lemma, one can define the standard decompositions of our spaces of modular forms. Suppose that $V = \mathfrak{M}_w(K,\chi)$ or $\mathfrak{N}_w(K,\chi)$. The map $d \to |_w T(d)$ gives a representation of the abelian group $(\mathbb{Z}/q\mathbb{Z})^*$ on $V$; hence, $V$ is a direct sum of irreducible invariant subspaces, each of dimension 1. If $F|_w T(d) = \psi(d)F$, then $\psi$ is a character of the group $(\mathbb{Z}/q\mathbb{Z})^*$. From this and Lemma 1.6 we obtain

PROPOSITION 1.7. Suppose that $(K, \chi)$ is a $q$-regular pair of degree $n$, $w = k$ or $k/2$ is an integer or half-integer, and in the latter case $K \subset \Gamma^n_0(4)$. Then one has the direct sum decompositions
(1.23) $\mathfrak{M}_w(K,\chi) = \bigoplus_\psi \mathfrak{M}_w(K,\chi,\psi), \qquad \mathfrak{N}_w(K,\chi) = \bigoplus_\psi \mathfrak{N}_w(K,\chi,\psi),$
where $\psi$ runs through all of the characters of the group $(\mathbb{Z}/q\mathbb{Z})^*$, and for each $\psi$ we set
(1.24) $\mathfrak{M}_w(K,\chi,\psi) = \{F \in \mathfrak{M}_w(K,\chi);\ F|_w T(d) = \psi(d)F,\ (d,q) = 1\},\qquad \mathfrak{N}_w(K,\chi,\psi) = \mathfrak{M}_w(K,\chi,\psi) \cap \mathfrak{N}_w(K,\chi).$
Each of the subspaces $\mathfrak{M}_w(K,\chi,\psi)$ and $\mathfrak{N}_w(K,\chi,\psi)$ is invariant under all of the Hecke operators $|_{w,\chi}\tau$ for $\tau \in L(K)$ or $\tau \in \mathfrak{E}(K)$ in the case $w = k$ or $w = k/2$, respectively.
We now consider the action of the Hecke operators on subspaces of the form (1.24). In $\mathfrak{E}(K)$ we look at the subring $\mathfrak{E}(K,\chi)$ that is analogous to the ring (5.4) in Chapter 3. Namely, we set

(1.25)

where $\epsilon_1$ and $\epsilon_2$ are monomorphisms of the form (1.20) for the pairs of groups $\Gamma^n(q) \subset K$ and $\Gamma^n(q) \subset \Gamma^n_0(q)$, respectively. Here by $\epsilon_1^{-1}$ we mean the inverse of the restriction of $\epsilon_1$ to the even Hecke ring.
232 4. HECKE OPERATORS

PROPOSITION 1.8. Suppose that the pair $(K, \chi)$ satisfies the conditions of Proposition 1.7, and $\psi$ is a character modulo $q$. Then the following formula holds for any modular forms $F, G \in \mathfrak{M}_w(K,\chi,\psi)$ of which at least one is a cusp form:
(1.26) $(F|_{w,\chi}\tau, G) = \psi(r(M))(F, G|_{w,\chi}\tau),$
where $\tau = (M)_K \in L(K)$ or $\tau = (\widehat{M})_{\widehat{K}} \in \mathfrak{E}(K,\chi)$ for $w = k$ or $w = k/2$, respectively, and $(\cdot\,,\cdot)$ is the scalar product in (5.1) of Chapter 2.
PROOF. By Lemma 1.2 of Chapter 3, the number of $K$-left cosets in $KMK$ is equal to the index $[K : K(M)]$. The number of $K$-right cosets in $KMK$ is obviously equal to the number of left cosets in $KM^{-1}K$, i.e., $[K : K(M^{-1})]$. Since these indices are equal, by Lemma 5.4 of Chapter 2, it follows that the number of $K$-left cosets in $KMK$ is equal to the number of $K$-right cosets there. On the other hand, every left coset clearly has nonempty intersection with each right coset. Hence, there exists a set of representatives $M_1, \dots, M_\mu$ of the left cosets $K\backslash KMK$ that is also a set of right coset representatives. Using this set of representatives and the properties of the scalar product in Theorem 5.3 of Chapter 2, we obtain
(1.27) $\displaystyle(F|_{k,X}(M)_K, G) = \sum_{i=1}^{\mu} X(M_i)^{-1}\bigl(F|_k M_i,\ G|_k M_i^{-1}|_k M_i\bigr) = \Bigl(F,\ \sum_{i=1}^{\mu} r^{-nk}\,\overline{X(M_i)^{-1}}\,G|_k M_i^{-1}\Bigr),$

where $X = X_{k,\chi}$ and $r = r(M) = r(M_i)$. Since obviously $G|_k\lambda M' = \lambda^{-nk}G|_k M'$, it follows that the last sum in (1.27) is equal to
(1.28) $\displaystyle\sum_{i=1}^{\mu} X(M_i)^{-1}G|_k E(r)^{-1}E(r)rM_i^{-1} = \psi(r)\sum_{i=1}^{\mu} X(M_i)^{-1}G|_k E(r)rM_i^{-1}.$
Because $M_1, \dots, M_\mu$ is a set of representatives of the right cosets $KMK/K$, it follows from this and Lemma 1.6(1) that the set $E(r)rM_1^{-1}, \dots, E(r)rM_\mu^{-1}$ is a set of representatives of the $K$-left cosets in the double coset $KE(r)rM^{-1}K$. Furthermore, since (1.13) implies that any homomorphism $X$ of the form (1.17) satisfies the relation
$$X(M) = X(E(r)rM^{-1}) \quad\text{for } M \in S(K) \text{ with } r(M) = r,$$
it follows from the above considerations that the expression (1.28) can be rewritten in the form
$$\psi(r)G|_{k,X}(M')_K, \quad\text{where } M' = E(r)rM^{-1}.$$
The two matrices $M$ and $M'$ obviously have the same symplectic divisors, and, by (3.5) of Chapter 3, we may suppose that they lie in $S(P(q))$; hence, by Lemma 3.6 of Chapter 3, $\Gamma^n MP = \Gamma^n M'P$. If we now apply Theorem 3.3(3) of Chapter 3, we conclude that these matrices belong to the same $\Gamma^n(q)$-double coset, and, in particular,
(1.29) $(E(r)rM^{-1})_K = (M)_K \quad (M \in S(K),\ r = r(M)).$
Thus, $\psi(r)G|_{k,X}(M')_K = \psi(r)G|_{k,X}(M)_K$, and, if we substitute this expression in the sum in (1.27), we obtain (1.26) for $w = k$.
By Lemma 1.8 and Proposition 4.3 of Chapter 3, the number of $\widehat{K}$-left and $\widehat{K}$-right cosets in $\widehat{K}\widehat{N}\widehat{K}$, where $\widehat{N} = \widehat{M}{}^{\pm1}$, is the same as the number of $K$-left and $K$-right cosets in $KNK$. Hence, there exist elements $\widehat{M}_1, \dots, \widehat{M}_\mu$ in $\widehat{K}\widehat{M}\widehat{K}$ that are a set of representatives simultaneously for the $\widehat{K}$-left and the $\widehat{K}$-right cosets in this double coset. Setting $\widehat{E} = (rE_{2n}, r^{n/2})$ and using Lemma 1.6(1), we find that
$$\widehat{K}\widehat{M}{}'\widehat{K} = \sum_{i=1}^{\mu}\widehat{K}\widehat{E}\widehat{M}_i^{-1}, \quad\text{where } \widehat{M}{}' = \widehat{E}\widehat{M}{}^{-1}.$$
Just as in the case of (1.27) and (1.28), we now obtain the relation
$$(F|_{k/2,X}(\widehat{M})_{\widehat{K}}, G) = \psi(r(M))(F, G|_{k/2,X}(\widehat{M}{}')_{\widehat{K}}),$$
so that in order to prove (1.26) for $w = k/2$ we must show that
(1.30) $(\widehat{E}\widehat{M}{}^{-1})_{\widehat{K}} = (\widehat{M})_{\widehat{K}}, \quad\text{where } M \in S(\Gamma^n_0(q))^+.$
Since $E(r) \in \Gamma = \Gamma^n_0(4)$ and $\widehat{E}\widehat{M}{}^{-1} = \widehat{M^*}$ (see (4.85) of Chapter 3), from (4.34) and Lemma 4.14 of Chapter 3 it follows that $\Gamma M'\Gamma = \Gamma M\Gamma$, or, equivalently, $M' = \gamma M\delta$ with $\gamma, \delta \in \Gamma$. We shall show that this implies the equality of double cosets
(1.31) $\Gamma_1 M'\Gamma_1 = \Gamma_1 M\Gamma_1,$
which, in turn, implies (1.30). We choose an integer $q_1$ prime to $q$ such that $q_1M^{\pm1}$ are integer matrices. By Lemma 3.2(2) of Chapter 3, the matrix $\gamma$ can be represented in the form $\gamma = \gamma_1\gamma_2$ with $\gamma_1 \in \Gamma_1$ and $\gamma_2 \in P(q_1^2)$. Moreover, $\gamma_2$ and $\delta_1 = M^{-1}\gamma_2M \in \Gamma$, since $q$ is divisible by 4 and $M \in S(\Gamma_1)$. By assumption, $r(M)$ is the square of a rational number; hence, by (4.6) and Proposition 4.3 of Chapter 3, we have $\widehat{\delta}_1 = \widehat{M}{}^{-1}\widehat{\gamma}_2\widehat{M}$, and so
$$M' = \gamma_1M\delta_2, \quad\text{where } \gamma_1 \in \Gamma_1 \text{ and } \delta_2 = \delta_1\delta \in \Gamma.$$
Since $\delta_2 = M^{-1}\gamma_1^{-1}M' \in S(\Gamma_1) \cap \Gamma$, we have proved (1.31). $\square$

We are now ready to prove the following


THEOREM 1.9. Suppose that $(K, \chi)$ is a $q$-regular pair of degree $n$, where $n, q \in \mathbb{N}$, $w = k$ or $k/2$ is an integer or half-integer, and $K \subset \Gamma^n_0(4)$ in the latter case. Then each of the subspaces
$$\mathfrak{N}_w(K,\chi,\psi) \subset \mathfrak{N}_w(K,\chi),$$
where $\psi$ is a character of the group $(\mathbb{Z}/q\mathbb{Z})^*$, and hence also the entire space $\mathfrak{N}_w(K,\chi)$ of cusp forms, has an orthogonal basis consisting of common eigenfunctions of all of the Hecke operators $|_{w,\chi}\tau$ for $\tau \in L(K)$ or $\tau \in \mathfrak{E}(K,\chi)$ in the cases $w = k$ and $w = k/2$, respectively.
PROOF. By Theorem 4.5 of Chapter 2, $\mathfrak{N}_w(K,\chi)$, and hence also each of the subspaces $V = \mathfrak{N}_w(K,\chi,\psi)$, is finite dimensional. Thus, the ring of Hecke operators on $V$ can be regarded as a subring of the ring of matrices of a certain finite size. This implies that the ring of Hecke operators on $V$ is finite dimensional (over $\mathbb{C}$). Thus, there exists a finite set of generators of the ring $L(K)$ or $\mathfrak{E}(K,\chi)$ of the form $\tau_i = (M_i)_K$ or $(\widehat{M}_i)_{\widehat{K}}$ $(i = 1, \dots, d)$ such that every operator $|\tau = |_{w,\chi}\tau$ on $V$ is a polynomial in the operators $|\tau_i$. We note that these operators, and in fact any Hecke operators on $V$, commute with one another, since, by Theorem 3.3 and Lemma 3.5 of Chapter 3 and Proposition 1.4, the rings $L(K)$ and $\mathfrak{E}(K,\chi)$ are isomorphic to $L^n(q)$ and $\mathfrak{E}^n(q,\chi)$, respectively. The latter rings are commutative, by Theorems 3.7, 4.6, and 4.13 of Chapter 3.

LEMMA 1.10. Let $V$ be a nonzero finite dimensional vector space over an algebraically closed field, and let $S_1, \dots, S_d$ be a finite set of pairwise commuting linear operators on $V$. Then $V$ contains a nonzero common eigenvector of $S_1, \dots, S_d$.

PROOF. The case $d = 1$ is obvious. Suppose that $d > 1$, and the lemma holds for sets of $d - 1$ operators. Let $\lambda_1$ denote an eigenvalue of $S_1$ on $V$, and let $V' = \{v \in V;\ v|S_1 = \lambda_1 v\}$ denote the corresponding eigenspace. Then $V'$ is invariant relative to $S_2, \dots, S_d$, because those operators commute with $S_1$. By the induction assumption, $V'$ contains a nonzero common eigenvector of all of these operators. $\square$
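The induction step can be illustrated numerically: restrict the second operator to an eigenspace of the first and diagonalize it there. A minimal sketch (NumPy; the second operator is chosen as a polynomial in the first so that the two commute, and all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# two commuting operators on C^4: S2 is a polynomial in S1, so S1 S2 = S2 S1
S1 = rng.standard_normal((4, 4))
S2 = 3 * S1 @ S1 - 5 * S1 + 2 * np.eye(4)
assert np.allclose(S1 @ S2, S2 @ S1)

# pick an eigenvalue lam of S1 and a basis of the eigenspace ker(S1 - lam*I)
lam = np.linalg.eigvals(S1)[0]
_, _, Vh = np.linalg.svd(S1 - lam * np.eye(4))
basis = Vh[-1:].conj().T              # the eigenspace is one-dimensional here

# restrict S2 to the eigenspace and diagonalize there (the induction step)
S2_res = np.linalg.pinv(basis) @ S2 @ basis
mu = np.linalg.eigvals(S2_res)[0]
v = basis[:, 0]                       # a common eigenvector of S1 and S2

assert np.allclose(S1 @ v, lam * v)
assert np.allclose(S2 @ v, mu * v)
```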

Returning to the proof of the theorem, we see that $V$ contains a nonzero eigenfunction $F_1$ of all of the operators $|\tau_i$ $(i = 1, \dots, d)$ (provided, of course, that $V \neq \{0\}$). We set $V_1 = \{\alpha F_1;\ \alpha \in \mathbb{C}\}$ and $V_2 = \{G \in V;\ (F_1, G) = 0\}$. Since the scalar product on $V$ is hermitian and nondegenerate, $V$ splits into the orthogonal direct sum of $V_1$ and $V_2$: $V = V_1 \oplus V_2$. By construction, $V_1$ is invariant relative to the operators $|\tau_i$. Then (1.26) implies that $V_2$ is also invariant relative to these operators; hence, if $V_2 \neq \{0\}$, then $V_2$ contains a nonzero common eigenfunction $F_2$. Repeating the same argument for $V_2$ and $F_2$ in place of $V$ and $F_1$, and continuing in this way, after a finite number of steps we obtain an orthogonal basis for $V$ consisting of common eigenfunctions for all of the operators $|\tau_i$. $\square$
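The deflation argument just used, splitting off one common eigenfunction at a time and passing to the orthogonal complement, can be sketched numerically (self-adjoint commuting matrices standing in for the Hecke operators; illustrative only):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5))
S1 = A + A.T                          # self-adjoint, like the operators in (1.26)
S2 = S1 @ S1 - 4 * S1                 # commutes with S1 and is again self-adjoint
assert np.allclose(S1 @ S2, S2 @ S1)

# split off one common eigenvector at a time and pass to its orthogonal
# complement inside the current invariant subspace, as in the proof
basis = []
P = np.eye(5)                         # orthonormal basis of the current subspace
while P.shape[1] > 0:
    w, vecs = np.linalg.eigh(P.T @ S1 @ P)
    basis.append(P @ vecs[:, 0])      # common eigenvector (here S2 = p(S1))
    P = P @ vecs[:, 1:]               # orthogonal complement within the subspace

B = np.column_stack(basis)
assert np.allclose(B.T @ B, np.eye(5))              # the basis is orthonormal
for v in basis:
    for S in (S1, S2):
        assert np.allclose(S @ v, (v @ S @ v) * v)  # each v is a common eigenvector
```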

PROBLEM 1.11. Suppose that $(K,\chi)$ is a $q$-regular pair of degree $n$, $w = k$ or $k/2$ is an integer or half-integer, and $K \subset \Gamma^n_0(4)$ in the latter case. Show that the subspace $\mathfrak{E}_w(K,\chi) \subset \mathfrak{M}_w(K,\chi)$ (see (5.10) of Chapter 2) is invariant relative to all of the Hecke operators $|_{w,\chi}\tau$, where $\tau \in L(K)$ or $\mathfrak{E}(K,\chi)$ in the case $w = k$ or $w = k/2$, respectively.

§2. Action of the Hecke operators


We now consider the problem of computing the action of the Hecke operators
on modular forms and their Fourier coefficients. Since the details are different for
different congruence subgroups K c rn, as in Chapter 3 we shall focus our attention
on the case K = r 0(q), which is of the greatest arithmetic importance.
1. Hecke operators for r 0{q). Let x be a Dirichlet character modulo q, and let
[X] denote the one-dimensional character of the group r 0(q) that corresponds to x, i.e.,
the character given by

[x](M) = x{detD) for M = ( ~ ~) E r 0(q).
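For $n = 1$ and $q = 4$ the definition reads $[\chi](M) = \chi(d)$ for $M = \begin{pmatrix} a & b \\ c & d\end{pmatrix}$ with $c \equiv 0 \pmod q$; since $c_1 \equiv 0 \pmod q$ forces the lower-right entries to multiply modulo $q$, the map is a homomorphism. A small sketch (the character table and sample matrices are illustrative assumptions):

```python
import numpy as np

q = 4

def chi(m):
    # the nontrivial Dirichlet character mod 4 (illustrative choice)
    if m % 2 == 0:
        return 0
    return 1 if m % 4 == 1 else -1

def bracket_chi(M):
    # [chi](M) = chi(det D) for M = (A B; C D) in Gamma_0(q), written here
    # for degree n = 1, so D is the single entry d
    a, b, c, d = int(M[0, 0]), int(M[0, 1]), int(M[1, 0]), int(M[1, 1])
    assert a * d - b * c == 1 and c % q == 0      # M lies in Gamma_0(4)
    return chi(d)

M1 = np.array([[1, 1], [0, 1]])
M2 = np.array([[3, 1], [8, 3]])                   # det = 1 and 8 = 0 mod 4
# [chi] is a homomorphism: mod q the lower-right entries multiply
assert bracket_chi(M1 @ M2) == bracket_chi(M1) * bracket_chi(M2) == -1
```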


LEMMA 2.1. The pair $(\Gamma^n_0(q), [\chi])$, where $\chi$ is a Dirichlet character modulo $q$, is $q$-regular. For any integer $w = k$ or half-integer $w = k/2$ the normalized extension $X_{w,[\chi]}$ of $[\chi]$ to the group $S^n(q) = S(\Gamma^n_0(q))$ is given by the formula
(2.1) $X_{w,[\chi]}(M) = r(M)^{-wn+\langle n\rangle}\chi(\det A) \quad \Bigl(M = \begin{pmatrix} A & B \\ C & D \end{pmatrix} \in S^n(q)\Bigr).$

PROOF. By Lemma 3.5 of Chapter 3, the group $\Gamma^n_0(q)$ satisfies the $q$-symmetry condition. Hence, to prove the lemma it suffices to verify that the formula (2.1) gives a group homomorphism from $S^n(q)$ to $\mathbb{C}^*$, that the restriction to $\Gamma^n_0(q)$ of this homomorphism coincides with $[\chi]$, and that its restriction to $S(P(q))$ is the map $P_w$ given by (1.16). The first claim follows immediately from the description of $S^n(q)$ in Lemma 3.5 of Chapter 3. The second and third claims follow from the definition of the character $[\chi]$ and the map $P_w$. $\square$

In arithmetic applications, for the most part one encounters one-dimensional characters of groups of the type $\Gamma^n_0(q)$. Hence, we shall give explicit computations only for such characters. If $[\chi]$ is the one-dimensional character of $\Gamma^n_0(q)$ that corresponds to the Dirichlet character $\chi$, and if $F$ is a function on $H_n$, then, by (1.3), (1.8), and (2.1), we have
(2.2)
where $\xi = M = \begin{pmatrix} A & B \\ C & D \end{pmatrix} \in S^n(q)$ or $\xi = \widehat{M} \in \widehat{S}^n(q)$ for $w = k$ or $w = k/2$, respectively. We set
$$\mathfrak{M}^n_w(q,\chi) = \mathfrak{M}_w(\Gamma^n_0(q), [\chi]),$$
and for $\tau \in L^n(q)$ or $\tau \in \widehat{\mathfrak{E}}{}^n(q,\chi)$ we let
(2.3)
denote the normalized Hecke operators on $\mathfrak{M}^n_w(q,\chi)$. According to the definition of modular forms, we have the inclusion
(2.4) $\mathfrak{M}^n_w(q,\chi) \subset \mathfrak{M}_w(\Gamma^n_0, [\chi_w]),$
where $[\chi_w]$ is the one-dimensional character of $\Gamma^n_0$ corresponding to the character
(2.5) $\chi_w(m) = (\operatorname{sign} m)^{w_0}\chi(m),$
in which $w_0 = k$ or $0$ depending on whether $w = k$ or $w = k/2$, respectively. By Theorem 3.1 of Chapter 2, any modular form $F \in \mathfrak{M}_w(\Gamma^n_0, [\chi_w])$, and, in particular, any modular form $F \in \mathfrak{M}^n_w(q,\chi)$, has a Fourier expansion of the form
(2.6) $F(Z) = \sum_{R \in A_n} f(R)e\{RZ\},$
in which the Fourier coefficients $f(R)$ satisfy the relations
(2.7) $f(R[V]) = s(\det V)f(R) \quad\text{for } V \in GL_n(\mathbb{Z}),$
where $s$ is the character of the group $\{\pm1\}$ given by setting
(2.8) $s(-1) = \chi_w(-1).$
Thus, the Fourier coefficients $f(R)$ of the modular form $F(Z)$ may be regarded as the values of a function $f\colon A_n \to \mathbb{C}$ that satisfies (2.7). We let $\mathfrak{F}^n_w(q,\chi)$ denote the vector space of such functions. If $f \in \mathfrak{F}^n_w(q,\chi)$ and $\tau \in L^n(q)$ or $\tau \in \widehat{\mathfrak{E}}{}^n(q,\chi)$ for $w = k$ or for $w = k/2$, respectively, then
$$F = \sum_{R \in A_n} f(R)e\{RZ\} \quad\text{and}\quad F|\tau \in \mathfrak{M}^n_w(q,\chi).$$
In particular, the function $F|\tau$ has a Fourier expansion of the same form as $F$. We write
(2.9) $F|\tau = \sum_{R \in A_n} (f|\tau)(R)e\{RZ\},$

where $f|\tau$ is another function in $\mathfrak{F}^n_w(q,\chi)$. From Proposition 1.1 we immediately obtain

LEMMA 2.2. The map
$$\tau \to |\tau\colon\ f \to f|\tau \quad (f \in \mathfrak{F}^n_w(q,\chi))$$
is a linear representation of the Hecke ring $L^n(q)$ if $w = k$, and a linear representation of the Hecke ring $\widehat{\mathfrak{E}}{}^n(q,\chi)$ if $w = k/2$.
2. Hecke operators for $\Gamma^n_0$. In Chapter 3 we obtained expressions for elements of $L^n(q)$ and $\widehat{\mathfrak{E}}{}^n(q,\chi)$ in terms of components belonging to the Hecke ring of the triangular subgroup $\Gamma^n_0 \subset \Gamma^n$. The Hecke operators corresponding to these components do not, in general, stay within the confines of the original spaces of modular forms. Thus, in order to apply these results of Chapter 3 when computing the action of the Hecke operators, we must first define suitable extensions of the spaces $\mathfrak{M}^n_w(q,\chi)$.
To each character $s$ of the group $\{\pm1\}$ we associate the character $\delta_s\colon \Gamma^n_0 \to \{\pm1\}$ defined by setting
(2.10) $\delta_s(M) = s(\det D) \quad\text{for } M = \begin{pmatrix} A & B \\ C & D \end{pmatrix} \in \Gamma^n_0.$
Let $T \subset \Gamma^n_0$ denote the kernel of $\delta_- = \delta_{s^-}$, where $s^-(\epsilon) = \operatorname{sign}(\epsilon)$ is the odd character of the group $\{\pm1\}$, and let $\mathfrak{M} = \mathfrak{M}(T, 1)$ be the space of modular forms for $T$ that was defined in §3.1 of Chapter 2. If we set $F|_\delta M = F(M\langle Z\rangle)$, we obtain a representation of the group $\Gamma^n_0$ in the space $\mathfrak{M}$; it obviously reduces to a representation of the group $\Gamma^n_0/T$ of order two. Hence, $\mathfrak{M}$ is the direct sum of the subspaces $\mathfrak{M}_+ = \mathfrak{M}^{s^+}$ and $\mathfrak{M}_- = \mathfrak{M}^{s^-}$, where
(2.11) $\mathfrak{M}^s = \{F \in \mathfrak{M};\ F|_\delta M = \delta_s(M)F \text{ for } M \in \Gamma^n_0\}$
and $s^+$ is the unit character of the group $\{\pm1\}$. From the definitions and (2.4) it follows that
(2.12) $\mathfrak{M}^n_w(q,\chi) \subset \mathfrak{M}^s, \quad\text{where } s(-1) = \chi_w(-1).$
Following the scheme in §1, as in the case of congruence subgroups, we define representations of the global Hecke rings $L^n_0(q) = D_{\mathbb{C}}(\Gamma^n_0, S_0(q))$ of the group $\Gamma^n_0$ in the spaces $\mathfrak{M}^s$. Here we want to obtain operators that are compatible with the imbeddings (2.12) and with the normalized Hecke operators on the spaces $\mathfrak{M}^n_w(q,\chi)$. In view of these conditions and the inclusion (2.4), it is natural to start with an automorphy factor of the group $S_0(q)$ having the form
(2.13) $\varphi_{w,\chi}(M, Z) = r(M)^{-wn+\langle n\rangle}\chi_w(\det A)|\det D|^w,$
where $M = \begin{pmatrix} A & B \\ 0 & D \end{pmatrix} \in S_0(q)$ and $Z \in H_n$. Then the action of the group $S_0(q)$ on functions $F\colon H_n \to \mathbb{C}$ is written in the form
(2.14) $F|_{w,\chi}M = r(M)^{wn-\langle n\rangle}\chi_w(\det A)|\det D|^{-w}F(M\langle Z\rangle).$
We let $V$ denote the space of all functions $F\colon H_n \to \mathbb{C}$ such that $F|_{w,\chi}M = F$ for all $M \in \Gamma^n_0$. If for each $T = \sum_i a_i(\Gamma^n_0 M_i)$ in $L^n_0(q)$ we set
(2.15)
(2.15)

then by Proposition 1.14 of Chapter 3 we obtain a linear representation of the ring $L^n_0(q)$ in the endomorphism ring of $V$.
We now define the representation $|_{k/2,\chi}$ of the ring $\widehat{L}^n_0(q) = D_{\mathbb{Q}}(\widehat{\Gamma}^n_0, \widehat{S}^n_0(q))$ (see §4 of Chapter 3) in the space $V$ with $w = k/2$. To do this, we set $X = X_{k/2,[\chi]}$ in (1.8), and then give the action of $\widehat{S}_0(q)$ on a function $F\colon H_n \to \mathbb{C}$ as follows:
(2.16) $F|_{k/2,\chi}\widehat{M} = r(M)^{nk/2-\langle n\rangle}\chi(\det A)\varphi(Z)^{-k}F(M\langle Z\rangle),$
where $\widehat{M} = (M, \varphi(Z)) \in \widehat{S}^n_0(q)$. From (4.2) of Chapter 3 it follows that the space $V$ can also be defined by the condition that $F|_{k/2,\chi}\widehat{M} = F$ for all $\widehat{M} \in \widehat{\Gamma}^n_0$. Hence, if for $T = \sum_i a_i(\widehat{\Gamma}^n_0\widehat{M}_i) \in \widehat{L}^n_0(q)$ we define the operator $|_{k/2,\chi}T$ on $V$ by the formula
(2.17)
then we obtain a linear representation of the ring $\widehat{L}^n_0(q)$ in $V$.


In the general case, the condition F E V is equivalent to requiring that

FloM = F(M(Z)) = ~s(M)F. for ME r 0,


where s(-1) = Xw(-1); hence, V contains rot~ for this s. Since the operators in (2.15)
and (2.17) obviously do not destroy the analytic properties offunctions in rot, it follows
that the subspace rot~ c V is invariant relative to all of these operators. We thereby
have the following
LEMMA 2.3. Let w = k or k/2 be an integer or half-integer, let x be a Dirichlet
character modulo q, and lets be the character of the group {±1} that is determined by
the condition s (-1) = x w(-1 ). Then every operator

maps the corresponding space rot~ to itself. The maps T ---+ lw,xT and T ---+ lk/2,xT are
linear representations of the rings L~(q) andL0(q) in the space rot~.
The operators (2.18), which we shall also call Hecke operators, are compatible with the imbeddings of Hecke rings in (4.101) and (5.3) of Chapter 3 and with the imbedding (2.12). More precisely, we have

LEMMA 2.4. Under the assumptions in the previous lemma, one has:
$$F|_{k,\chi}T = F|_{k,\chi}e_q(T) \quad\text{for } F \in \mathfrak{M}^n_k(q,\chi) \text{ and } T \in L^n(q),$$
$$F|_{k/2,\chi}T = F|_{k/2,\chi}\varepsilon_{q,k}(T) \quad\text{for } F \in \mathfrak{M}^n_{k/2}(q,\chi) \text{ and } T \in \mathfrak{E}^n(q,\chi),$$
where $e_q\colon L^n(q) \to L^n_0(q)$ and $\varepsilon_{q,k}\colon \mathfrak{E}^n(q,\chi) \to \widehat{L}^n_0(q)$ are the imbeddings (5.3) and (5.5) of Chapter 3. In particular, the subspace $\mathfrak{M}^n_w(q,\chi) \subset \mathfrak{M}^s$ is invariant relative to all of the Hecke operators $|_{w,\chi}\tau$, where $\tau \in e_q(L^n(q))$ or $\tau \in \varepsilon_{q,k}(\mathfrak{E}^n(q,\chi))$ in the cases $w = k$ and $w = k/2$, respectively.

PROOF. In the case $w = k$, the lemma follows immediately from the definitions. Suppose that $w = k/2$. We first note that $\varepsilon_{q,k} = e_q\cdot P_k$ is the composition of the homomorphisms (4.101) and (5.3) of Chapter 3. According to the definitions of the operators in (2.3), (2.17), and (2.15), they satisfy the relations
(2.19) $F|_{k/2,\chi}T = F|_{k/2,\chi}e_q(T) \quad\text{for } F \in \mathfrak{M}^n_{k/2}(q,\chi),\ T \in \mathfrak{E}^n(q,\chi),$
$$F|_{k/2,\chi}T = F|_{k/2,\chi}P_k(T) \quad\text{for } F \in \mathfrak{M}^s,\ T \in L^n_0(q),$$
where $s(-1) = \chi(-1)$, and this implies the lemma for $w = k/2$. $\square$

As already mentioned, every modular form $F \in \mathfrak{M}^s = \mathfrak{M}(\Gamma^n_0, [\chi_w])$, where $s(-1) = \chi_w(-1)$, has a Fourier expansion (2.6), and its Fourier coefficients $f(R)$ satisfy the relations (2.7). We let $\mathfrak{F}^s$ denote the set of such functions $f\colon A_n \to \mathbb{C}$. Clearly,
(2.20) $\mathfrak{F}^n_w(q,\chi) \subset \mathfrak{F}^s, \quad\text{if } s(-1) = \chi_w(-1).$
If $f \in \mathfrak{F}^s$, if $F \in \mathfrak{M}^s$ is a function with Fourier coefficients $f(R)$, and if $T \in L^n_0(q)$, then we let $(f|_{w,\chi}T)(R)$, where $\chi_w(-1) = s(-1)$, denote the Fourier coefficients of the function
(2.21) $F|_{w,\chi}T = \sum_{R \in A_n} (f|_{w,\chi}T)(R)e\{RZ\}.$

From Lemmas 2.3 and 2.4 we have (in the notation of those lemmas):

LEMMA 2.5. The map $T \to |_{w,\chi}T$ is a linear representation of the ring $L^n_0(q)$ in the space $\mathfrak{F}^s$, and it satisfies the following relations:
(2.22) $f|_{k,\chi}T = f|_{k,\chi}e_q(T) \quad\text{for } f \in \mathfrak{F}^n_k(q,\chi),\ T \in L^n(q),$
$$f|_{k/2,\chi}T = f|_{k/2,\chi}\varepsilon_{q,k}(T) \quad\text{for } f \in \mathfrak{F}^n_{k/2}(q,\chi),\ T \in \mathfrak{E}^n(q,\chi),$$
where $e_q$ and $\varepsilon_{q,k}$ are the imbeddings (5.3) and (5.5) of Chapter 3. In particular, the subspace $\mathfrak{F}^n_w(q,\chi) \subset \mathfrak{F}^s$ is invariant relative to all of the operators $|_{w,\chi}\tau$, where $\tau \in L^n(q)$ or $\tau \in \widehat{\mathfrak{E}}{}^n(q,\chi)$ in the cases $w = k$ and $w = k/2$, respectively.
The above constructions make it possible to apply the decompositions in Chapter 3 when computing the action of concrete Hecke operators on modular forms and their Fourier coefficients. Here we shall prove three general lemmas concerning the action of Hecke operators in $L^n_0$. From now on we shall assume, without further mention, that $w = k$ or $k/2$ is an integer or half-integer, $\chi$ is a Dirichlet character modulo $q \in \mathbb{N}$, and $s$ is the character of the group $\{\pm1\}$ such that $s(-1) = \chi_w(-1)$.

LEMMA 2.6. Let $F$ be a function in $\mathfrak{M}^s$ with Fourier coefficients $f(R)$. Then for any matrix $M_0 = \begin{pmatrix} rD_0^* & B_0 \\ 0 & D_0 \end{pmatrix}$ in $S_0(q)$ the action of the Hecke operator for $(M_0)_{\Gamma_0}$ on $F$ and $f$ is given by the formulas
(2.23) $\displaystyle(F|_{w,\chi}(M_0)_{\Gamma_0})(Z) = r^{wn-\langle n\rangle}\chi(r)^n\sum_{D,B}\chi_w(\det D)|\det D|^{-w}F(rZ[D^{-1}] + BD^{-1}),$
(2.24) $\displaystyle(f|_{w,\chi}(M_0)_{\Gamma_0})(R) = r^{wn-\langle n\rangle}\chi(r)^n\sum_D{}'\,\chi_w(\det D)|\det D|^{-w}\,l_{M_0}(r^{-1}R[{}^tD];\,D)\,f(r^{-1}R[{}^tD]);$
here $D \in \Lambda_n\backslash\Lambda_nD_0\Lambda_n$, $B \in B_{M_0}(D)/\mathrm{mod}\,D$, and in (2.24) the prime means that $R[{}^tD] \in rA_n$, where $Z \in H_n$, $R \in A_n$, $B_{M_0}(D)$ is the set in Lemma 3.25 of Chapter 3, and for $R' \in A_n$ (see (3.25) of Chapter 3)
(2.25) $\displaystyle l_{M_0}(R';\,D) = \sum_{B \in B_{M_0}(D)/\mathrm{mod}\,D} e\{R'BD^{-1}\}.$

PROOF. The formula (2.23) follows immediately from (2.15), (2.14), and the left coset decomposition for $\Gamma^n_0M_0\Gamma^n_0$ in (3.44) of Chapter 3. If we substitute the Fourier expansion of $F$ in the right side of (2.23), we obtain the series
$$r^{wn-\langle n\rangle}\chi(r)^n\sum_{D,B,R'}\chi_w(\det D)|\det D|^{-w}f(R')e\{R'rZ[D^{-1}] + R'BD^{-1}\}.$$
Since $e\{R'rZ[D^{-1}]\} = e\{rR'[D^*]Z\}$ and the resulting series still lies in $\mathfrak{M}^s$, we see that if we group together terms with fixed product $rR'[D^*] = R$, then the only terms that remain are those for which $R \in X = rD^{-1}A_nD^* \cap A_n$. Thus, if we set $r^{wn-\langle n\rangle}\chi(r)^n = \gamma$ and $\chi_w(\det D)|\det D|^{-w} = \psi(D)$, we find that our series is equal to
$$\gamma\sum_{D,B;\,R\in X}\psi(D)\,e\{r^{-1}R[{}^tD]BD^{-1}\}\,f(r^{-1}R[{}^tD])\,e\{RZ\} = \sum_{R\in A_n}\Bigl(\sum_{D,\,R[{}^tD]\in rA_n}\psi(D)\,l_{M_0}(r^{-1}R[{}^tD];\,D)\,f(r^{-1}R[{}^tD])\Bigr)e\{RZ\},$$
and this proves (2.24). $\square$

LEMMA 2.7. Suppose that $F$ is a function in $\mathfrak{M}^s$ with Fourier coefficients $f(R)$, and $M_0 = \begin{pmatrix} rD_0^* & B_0 \\ 0 & D_0 \end{pmatrix} \in S_0(q)$ satisfies the condition $d_n(D_0)^2\,|\,r$ (respectively, $r\,|\,d_1(D_0)^2$), where $d_i(D)$ denotes the $i$th elementary divisor of $D$. Then we have
(2.26) $\displaystyle(F|_{w,\chi}(M_0)_{\Gamma_0})(Z) = r^{wn-\langle n\rangle}\chi(r)^n\sum_D\chi_w(\det D)|\det D|^{-w}F(rZ[D^{-1}]),$
(2.27) $\displaystyle(f|_{w,\chi}(M_0)_{\Gamma_0})(R) = r^{wn-\langle n\rangle}\chi(r)^n\sum_D{}'\,\chi_w(\det D)|\det D|^{-w}f(r^{-1}R[{}^tD]),$
where $D \in \Lambda\backslash\Lambda D_0\Lambda$ ($\Lambda = \Lambda_n$), and the prime means that $R[{}^tD] \in rA_n$ in (2.27) (respectively, we have
(2.28) $\displaystyle(F|_{w,\chi}(M_0)_{\Gamma_0})(Z) = r^{wn-n(n+1)}\chi(r)^n|\det D_0|^{n+1}\sum_D\chi_w(\det D)|\det D|^{-w}\sum_{R'}f(R')e\{rR'[D^*]Z\},$
(2.29) $\displaystyle(f|_{w,\chi}(M_0)_{\Gamma_0})(R) = r^{wn-n(n+1)}\chi(r)^n|\det D_0|^{n+1}\sum_D\chi_w(\det D)|\det D|^{-w}f(r^{-1}R[{}^tD]),$
where $D \in \Lambda\backslash\Lambda D_0\Lambda$ and $R' \in A_n \cap r^{-1}DA_n{}^tD$).


PROOF. The formulas (2.26) and (2.27) follow immediately from (2.23) and (2.24), since, by (5.22) of Chapter 3, in this case we can take $B_{M_0}(D) = \{0\}$ for all $D \in \Lambda D_0\Lambda$. From (5.23) of Chapter 3 it follows that when $r\,|\,d_1(D_0)^2$ we can take
$$B_{M_0}(D) = \{rD^*S;\ S \in S_n/r^{-1}\cdot{}^tDS_nD\},$$
where $S_n = S_n(\mathbb{Z})$. From this we deduce that in this case the sum (2.25) for $R \in A_n$ and $D \in \Lambda D_0\Lambda$ can be computed according to the formulas
(2.30) $l_{M_0}(R,\,D) = \begin{cases} r^{-\langle n\rangle}|\det D_0|^{n+1}, & \text{if } R \in A_n \cap r^{-1}DA_n{}^tD, \\ 0, & \text{if } R \notin A_n \cap r^{-1}DA_n{}^tD. \end{cases}$
In fact, by definition we have
$$l_{M_0}(R,\,D) = \sum_{S \in S_n/r^{-1}\cdot{}^tDS_nD} e\{rR[D^*]S\}.$$
Under our assumptions, the function $S \to e\{rR[D^*]S\}$ is obviously a character of the finite additive quotient group $S_n/r^{-1}\cdot{}^tDS_nD$. Hence, the above sum is equal to the order of this quotient group if the character is trivial, and is equal to zero otherwise. Clearly, the character is trivial if and only if $rR[D^*]$ is an integer matrix with even entries on the main diagonal, i.e. (since $R \in A_n$), if and only if $R \in A_n \cap r^{-1}DA_n{}^tD$. When computing the order of the quotient group, obviously we can replace $D$ by its matrix of elementary divisors $\mathrm{ed}(D) = \mathrm{diag}(d_1, \dots, d_n)$; we find that this order is
$$\prod_{1\le i\le j\le n}(r^{-1}d_id_j) = r^{-\langle n\rangle}(d_1\cdots d_n)^{n+1} = r^{-\langle n\rangle}|\det D_0|^{n+1}.$$
This argument proves (2.30). The formulas (2.28) and (2.29) follow from (2.23), (2.24), and (2.30). $\square$
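The final product computation rests on the identity $\prod_{1\le i\le j\le n}(r^{-1}d_id_j) = r^{-\langle n\rangle}(d_1\cdots d_n)^{n+1}$, with $\langle n\rangle = n(n+1)/2$: there are $\langle n\rangle$ pairs $(i,j)$ with $i \le j$, and each $d_k$ occurs in exactly $n+1$ of them. A quick exact-arithmetic check (illustrative):

```python
from fractions import Fraction
from itertools import combinations_with_replacement
from math import prod

def check(r, d):
    # left side: the order of the quotient group, computed factor by factor
    n = len(d)
    lhs = prod(Fraction(d[i] * d[j], r)
               for i, j in combinations_with_replacement(range(n), 2))
    # right side: r^{-n(n+1)/2} * (d_1 ... d_n)^{n+1}
    rhs = Fraction(prod(d)) ** (n + 1) / Fraction(r) ** (n * (n + 1) // 2)
    return lhs == rhs

assert check(2, [1, 2])
assert check(6, [2, 6, 12])
assert check(5, [5, 10, 15, 20])
```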

In the expansions in Chapter 3 we frequently encountered elements of $L^n_0$ of the form
(2.31) $\Pi^n_a(d) = (M_a(d))_{\Gamma_0},$
where
$$M_a(d) = U(d, D_a(d)) \quad\text{and}\quad D_a(d) = \begin{pmatrix} E_{n-a} & 0 \\ 0 & dE_a \end{pmatrix},$$
and elements of the form $\Delta_n(d) = (dE_{2n})_{\Gamma_0}$ for $d \in \mathbb{N}$. The next lemma gives formulas for the action of the Hecke operators that correspond to these elements.

LEMMA 2.8. Let $F$ be a function in $\mathfrak{M}^s$ with Fourier coefficients $f(R)$. Then:
(2.32) $\displaystyle(F|_{w,\chi}\Pi^n_a(d))(Z) = d^{wn-\langle n\rangle+\langle a\rangle}\chi(d)^n\sum_D\chi_w(\det D)|\det D|^{-w}\sum_R f(R)e\{dR[D^*]Z\},$
where $D \in \Lambda\backslash\Lambda D_a(d)\Lambda$ ($\Lambda = \Lambda_n$) and $R \in A_n \cap d^{-1}DA_n{}^tD$;
(2.33) $\displaystyle(f|_{w,\chi}\Pi^n_a(d))(R) = d^{wn-\langle n\rangle+\langle a\rangle}\chi(d)^n\sum_D{}'\,\chi_w(\det D)|\det D|^{-w}f(d^{-1}R[{}^tD]),$
where $D \in \Lambda\backslash\Lambda D_a(d)\Lambda$ and the prime means that $R[{}^tD] \in dA_n$;
(2.34) $F|_{w,\chi}\Delta_n(d) = d^{n(w-n-1)}\chi(d)^nF, \qquad f|_{w,\chi}\Delta_n(d) = d^{n(w-n-1)}\chi(d)^nf.$

PROOF. We first show that
$$B_{M_0}(D) = B_0(D), \quad\text{if } M_0 = M_a(d) \text{ and } D \in \Lambda D_a(d)\Lambda,$$
where $B_0(D)$ is the set (3.60) of Chapter 3. Since the double coset $\Gamma^n_0M_0\Gamma^n_0$ consists of integer matrices, the left side of this equality is contained in the right side. Conversely, suppose that $B \in B_0(D)$ and $D = \alpha D_0\beta$, where $\alpha, \beta \in \Lambda$ and $D_0 = D_a(d)$. Then
$$M = \begin{pmatrix} dD^* & B \\ 0 & D \end{pmatrix} = U(\alpha)\begin{pmatrix} dD_0^* & B_0 \\ 0 & D_0 \end{pmatrix}U(\beta),$$
where $B_0 = {}^t\alpha B\beta^{-1}$. By Lemma 3.33(1) of Chapter 3, $B_0$ is contained in $B_0(D_0)$, and hence it can be written in the form $B_0 = \begin{pmatrix} S_1 & dS_2 \\ {}^tS_2 & S_3 \end{pmatrix}$, where $S_1 \in S_{n-a}(\mathbb{Z})$, $S_3 \in S_a(\mathbb{Z})$, and $S_2 \in M_{n-a,a}$. Thus, the matrix
$$\begin{pmatrix} dD_0^* & B_0 \\ 0 & D_0 \end{pmatrix} = T\Bigl(\begin{pmatrix} S_1 & S_2 \\ {}^tS_2 & 0 \end{pmatrix}\Bigr)\begin{pmatrix} dD_0^* & 0 \\ 0 & D_0 \end{pmatrix}T\Bigl(\begin{pmatrix} 0 & 0 \\ 0 & S_3 \end{pmatrix}\Bigr),$$
and hence $M$ as well, is contained in the double coset $\Gamma^n_0M_a(d)\Gamma^n_0$. This means that $B \in B_{M_0}(D)$. We now show that the trigonometric sum (2.25) with $R \in A_n$, $M_0 = M_a(d)$, and $D \in \Lambda D_a(d)\Lambda$ is given by the formulas
(2.35) $l_{M_0}(R,\,D) = \begin{cases} d^{\langle a\rangle}, & \text{if } R \in A_n \cap d^{-1}DA_n{}^tD, \\ 0, & \text{if } R \notin A_n \cap d^{-1}DA_n{}^tD. \end{cases}$
In fact, if we write $D$ in the form $D = \alpha D_0\beta$, where $\alpha, \beta \in \Lambda$ and $D_0 = D_a(d)$, and use Lemma 3.33 of Chapter 3, we obtain
$$l_{M_0}(R,\,D) = \sum_{B'\in B_0(D_0)/\mathrm{mod}\,D_0} e\{R\alpha^*B'\beta(\alpha D_0\beta)^{-1}\} = \sum_{B'\in B_0(D_0)/\mathrm{mod}\,D_0} e\{R[\alpha^*]B'D_0^{-1}\}.$$
From the description of $B_0(D_0)/\mathrm{mod}\,D_0$ in Lemma 3.33(2) of Chapter 3 it easily follows that the last sum is equal to the number of elements in this set, i.e., it is equal to $d^{\langle a\rangle}$ if the lower-right $a \times a$ block of the matrix $R[\alpha^*]$, when divided by $d$, gives an integer matrix with even entries on the main diagonal, and it is zero otherwise. Since $R \in A_n$, this condition for the trigonometric sum to be nonzero is equivalent to the condition $dR[\alpha^*D_0^{-1}] \in A_n$; and this, in turn, is equivalent to the condition $dR[D^*] \in A_n$. This proves (2.35). The formulas (2.32) and (2.33) follow from (2.23), (2.24), and (2.35). The formulas in (2.34) are a direct consequence of (2.14). $\square$

PROBLEM 2.9. Let $R \in A_n$ and $D \in M_n$, $\det D \neq 0$. Set
$$S_D(R) = \sum_{B\in B_0(D)/\mathrm{mod}\,D} e\{RBD^{-1}\},$$
and let $A_n(D) = \{R \in A_n;\ S_D(R) \neq 0\}$. Prove that:
(1) If $\alpha, \beta \in \Lambda_n$, then $S_{\alpha D\beta}(R) = S_D(R[\alpha^*])$ and $A_n(\alpha D\beta) = \alpha A_n(D){}^t\alpha$.
(2) If $\mathrm{ed}\,D = \mathrm{diag}(d_1, \dots, d_n)$, then
$$S_D(R) = \begin{cases} d_1^nd_2^{n-1}\cdots d_n, & \text{if } R \in A_n(D), \\ 0, & \text{if } R \notin A_n(D). \end{cases}$$
(3) If $\mathrm{ed}\,D = \mathrm{diag}(d_1, \dots, d_n)$ and $R = ((1+\varepsilon_{\alpha\beta})r_{\alpha\beta}) \in A_n$, where $(\varepsilon_{\alpha\beta}) = E_n$, then $R \in A_n(D)$ if and only if $r_{\alpha\beta} \equiv 0 \pmod{d_\alpha}$ for $1 \le \alpha \le \beta \le n$.
(4) Let $n = 2$ and $D = d_1D_0$, where $D_0$ is an integer matrix with relatively prime entries. Then $A_2(D) = d_1(d_0^{-1}D_0A_2{}^tD_0 \cap A_2)$, where $d_0 = |\det D_0|$.
[Hint: See Lemma 3.33 of Chapter 3.]
PROBLEM 2.10. As before, let $k$, $\chi$, and $s$ be connected by the relation $(-1)^k\chi(-1) = s(-1)$. Show that for $(m,q) = 1$ the images $T(m)$ of the elements (3.22) of Chapter 3 in $L^n_0(q)$ act on the space $\mathfrak{F}^s$ according to the formula
$$(f|_{k,\chi}T(m))(R) = m^{nk-\langle n\rangle}\chi(m)^n\sum_{d_1|d_2|\cdots|d_n|m} d_1^nd_2^{n-1}\cdots d_n\sum_D\chi(\det D)(\det D)^{-k}f(m^{-1}R[{}^tD]),$$
where $d_i \in \mathbb{N}$, $D \in \Lambda_n\backslash\Lambda_n\,\mathrm{diag}(d_1, \dots, d_n)\Lambda_n$, $m^{-1}R[{}^tD] \in A_n(D)$, $f \in \mathfrak{F}^s$, and $R \in A_n$. In particular, if $n = 1$, then
$$(f|_{k,\chi}T(m))(2a) = \sum_{\delta|m,\ \delta|a}\delta^{k-1}\chi(\delta)f(2ma/\delta^2),$$
and if $n = 2$, then
$$(f|_{k,\chi}T(m))(R) = \cdots \times\sum_D\chi(\det D)(\det D)^{-k}f\bigl((\delta_1/\delta_2\delta_3)R[{}^tD]\bigr),$$
where $\delta_1\delta_2\delta_3 = m$, $R \in \delta_3A_2$, $D \in \Lambda_2\backslash\Lambda_2\begin{pmatrix} 1 & 0 \\ 0 & \delta_2 \end{pmatrix}\Lambda_2$, and $R[{}^tD] \in \delta_2\delta_3A_2$.
[Hint: For the first part use Problem 3.14 of Chapter 3, the formulas (2.24) for the double cosets in $T(m)$, and the previous problem; for the second part set $m/d = \delta$; and for the third part set $d_1 = \delta_1$, $d_2 = \delta_1\delta_2$, and $m = \delta_1\delta_2\delta_3$, and use part (4) of the previous problem.]
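For $n = 1$ the formula in Problem 2.10 is the classical action of $T(m)$ on Fourier coefficients. A sketch with the discriminant cusp form $\Delta$ of weight 12 and level 1 (trivial character; the plain integer indexing by $a$ rather than $2a$, and the hard-coded values of Ramanujan's $\tau$, are assumptions for illustration):

```python
from math import gcd

k = 12
# Fourier coefficients tau(1), ..., tau(12) of Delta
tau = {1: 1, 2: -24, 3: 252, 4: -1472, 5: 4830, 6: -6048,
       7: -16744, 8: 84480, 9: -113643, 10: -115920, 11: 534612, 12: -370944}

def hecke(f, m, a):
    # (f|T(m))(a) = sum over delta dividing gcd(a, m) of
    #   delta^(k-1) * f(a*m/delta^2)   (trivial character, degree n = 1)
    g = gcd(a, m)
    return sum(d ** (k - 1) * f[a * m // (d * d)]
               for d in range(1, g + 1) if g % d == 0)

# Delta is a common eigenfunction of all T(m): f|T(m) = tau(m) * f
for m in (2, 3):
    for a in (1, 2, 3):
        assert hecke(tau, m, a) == tau[m] * tau[a]
```

For instance, $(f|T(2))(2) = \tau(4) + 2^{11}\tau(1) = 576 = \tau(2)^2$, which is the eigenvalue relation with $m = a = 2$.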

3. Hecke operators and the Siegel operator. We now study the relations between Hecke operators on the spaces $\mathfrak{M}^n_w(q,\chi)$ and $\mathfrak{M}^s$ and the Siegel operator $\Phi$ that was defined in §3.4 of Chapter 2. These relations make it possible to reduce certain questions in the theory of Hecke operators for groups of degree $n$ to the analogous questions for groups of lower degree.
In §3.4 of Chapter 2 the Siegel operator $\Phi$ was originally defined on the space $\mathfrak{W}_n$ of Fourier series of the form (3.5) of Chapter 2 that converge absolutely on all of $H_n$ and uniformly on subsets of the form $H_n(\varepsilon)$ with $\varepsilon > 0$. From Theorem 3.1 of Chapter 2 it follows that $\mathfrak{M}^s \subset \mathfrak{W}_n$ for any character $s$. Thus, the operator $\Phi$ is defined on the spaces $\mathfrak{M}^s$, and hence also on the subspaces $\mathfrak{M}^n_w(q,\chi)$. From the definitions and Theorem 3.1 of Chapter 2 it follows that the subspace $\mathfrak{M}^s \subset \mathfrak{W}_n$ can be characterized as the subset of all $F \in \mathfrak{W}_n$ whose Fourier coefficients satisfy (2.7). If we consider these relations for matrices $V$ in $\Lambda_n$ of the form $\begin{pmatrix} V_1 & 0 \\ 0 & 1 \end{pmatrix}$, where $V_1 \in \Lambda_{n-1}$, and apply (3.50) of Chapter 2, we find that for any $F \in \mathfrak{M}^s$ the Fourier coefficients of the function $F|\Phi$ satisfy the conditions (2.7) for $n - 1$. Thus,
(2.36)
where we set
(2.37) $\mathfrak{M}^s_0 = \begin{cases} \mathbb{C}, & \text{if } s(-1) = 1, \\ \{0\}, & \text{if } s(-1) = -1 \end{cases}$
(in the case $s(-1) = -1$, the constant term in the Fourier expansion of any function of $\mathfrak{M}^s$ is obviously zero). Now let $q \in \mathbb{N}$, and let $\chi$ be a Dirichlet character modulo $q$. From the definitions it easily follows that if $K = \Gamma^n_0(q)$, then the group $K^{[n-1]}$ (see (3.55) of Chapter 2) is $\Gamma^{n-1}_0(q)$, and the character $[\chi]^{[n-1]}$ of this group (see (3.56) of Chapter 2) is the one-dimensional character corresponding to $\chi$. Thus, by Proposition 3.12 of Chapter 2 we see that for any integer or half-integer $w$
(2.38)
where, in accordance with (2.20) and (2.37), we have set
(2.39) $\mathfrak{M}^0_w(q,\chi) = \begin{cases} \mathbb{C}, & \text{if } \chi_w(-1) = 1, \\ \{0\}, & \text{if } \chi_w(-1) = -1. \end{cases}$
We now turn to the Hecke operators. We show that the Hecke operators on the spaces $\mathfrak{M}^s$ and $\mathfrak{M}^n_w(q,\chi)$ are compatible with $\Phi$ in the sense that there exists a homomorphism $X \to X'$ of Hecke rings from degree $n$ to degree $n - 1$ such that for every function $F$ in the space under consideration one has $(F|_{w,\chi}X)|\Phi = (F|\Phi)|_{w,\chi}X'$. It is convenient to describe these homomorphisms in terms of the polynomial realizations of the Hecke rings by means of the spherical maps. Hence, we shall limit ourselves to the local Hecke rings. The global Hecke rings could be treated in an analogous manner; however, we do not need to do this, since all of the elements that interest us in the global Hecke ring are generated by local components.
Suppose that $n \in \mathbb{N}$, $p$ is a prime, and
(2.40)

is an arbitrary element of the Hecke ring $L^n_{0,p}$ (see (3.45) of Chapter 3). By choosing different $\Gamma^n_0$-left coset representatives, we can replace each matrix $D_i$ by any matrix in the left coset $\Lambda_nD_i$. From Lemma 2.7 of Chapter 3 it then follows that all of the $D_i$ may be assumed to have been chosen in the form
(2.41) $D_i = \begin{pmatrix} D_i' & * \\ 0 & p^{d_i} \end{pmatrix}, \quad\text{where } D_i' \in M_{n-1} \text{ and } d_i \in \mathbb{Z}.$
With this choice of representatives, we set
(2.42) $\displaystyle\Psi(X, u) = \Psi^n_p(X, u) = \sum_i a_iu^{-\delta_i}(up^{-n})^{d_i}\Bigl(\Gamma^{n-1}_0\begin{pmatrix} p^{\delta_i}(D_i')^* & B_i' \\ 0 & D_i' \end{pmatrix}\Bigr),$
where $u$ is an independent variable, and for every $n \times n$ matrix $A$ we let $A'$ denote the $(n-1) \times (n-1)$ matrix in the upper-left corner of $A$. If $n = 1$, then $D_i = p^{d_i}$, and we set
(2.43)
It is easy to see that $\Psi(X, u)$ does not depend on the choice of representatives with these properties. We thus obtain a linear map
$$\Psi = \Psi^n_p\colon L^n_{0,p} \to L(\Gamma^{n-1}_0, S^{n-1}_{0,p}) \otimes_{\mathbb{Z}} \mathbb{Q}[u^{\pm1}]$$
from the ring $L^n_{0,p}$ to the left coset module of the pair $(\Gamma^{n-1}_0, S^{n-1}_{0,p})$ over the ring of polynomials in $u^{\pm1}$ with rational coefficients; in the case $n = 1$ we obtain a map
$$\Psi^1_p\colon L^1_{0,p} \to \mathbb{Q}[u^{\pm1}].$$
PROPOSITION 2.11. Let $n \in \mathbb{N}$, and let $p$ be a prime. Then:
(1) For any $X \in L^n_{0,p}$ the element $\Psi(X, u)$ lies in the ring
$$L^{n-1}_{0,p}[u^{\pm1}] = D_{\mathbb{Q}}(\Gamma^{n-1}_0, S^{n-1}_{0,p})[u^{\pm1}]$$
of polynomials in $u^{\pm1}$ over the Hecke ring $L^{n-1}_{0,p}$ (we set $L^0_{0,p} = \mathbb{Q}$); the map
(2.44) $\Psi = \Psi^n_p\colon L^n_{0,p} \to L^{n-1}_{0,p}[u^{\pm1}]$
is a ring homomorphism.
(2) The image of the restriction of $\Psi^n_p$ to the subring $C^n_{-,p}$, $C^n_{+,p}$, or $L^n_p$ of the ring $L^n_{0,p}$ is contained in $C^{n-1}_{-,p}[u^{\pm1}]$, $C^{n-1}_{+,p}[u^{\pm1}]$, $L^{n-1}_p[u^{\pm1}]$, respectively (we set $C^0_{\pm,p} = L^0_p = \mathbb{Q}$).
(3) The following diagram commutes:
(2.45) $\begin{array}{ccc} L^n_{0,p} & \xrightarrow{\ \Omega^n_p\ } & \mathbb{Q}[x_0^{\pm1}, \dots, x_n^{\pm1}] \\ \Psi^n_p\big\downarrow & & \big\downarrow E_n \\ L^{n-1}_{0,p}[u^{\pm1}] & \xrightarrow{\ \Omega^{n-1}_p\ } & \mathbb{Q}[x_0^{\pm1}, \dots, x_{n-1}^{\pm1}, u^{\pm1}], \end{array}$
where $\Omega^n_p$ is the spherical map (3.49) of Chapter 3, in the bottom row $\Omega^{n-1}_p$ is the homomorphism extending the spherical map on $L^{n-1}_{0,p}$ that satisfies the condition $\Omega^{n-1}_p(u^{\pm1}) = u^{\pm1}$ (we define the spherical map on $L^0_{0,p} = \mathbb{Q}$ to be the identity map), $\Psi^n_p$ is the homomorphism (2.44), and $E_n$ is the homomorphism of polynomial rings given on generators by: $E_n(x_0) = x_0u^{-1}$, $E_n(x_n) = u$, and $E_n(x_i) = x_i$ for $1 \le i \le n - 1$ (we take $E_1(x_0) = u^{-1}$, $E_1(x_1) = u$).
PROOF. If $\gamma_1$ is an arbitrary matrix in $\Gamma^{n-1}_0$ and $\gamma$ is the image of $\gamma_1$ under the map
(3.53) of Chapter 2, then $\gamma \in \Gamma^n_0$, and from the condition $X\cdot\gamma = X$ for $X \in L^n_{0,p}$ and the
definition of $\Psi(X,u)$ it easily follows that $\Psi(X,u)\gamma_1 = \Psi(X\cdot\gamma,u) = \Psi(X,u)$, where
$\gamma_1$ acts only on the left cosets and not on the coefficients. Hence $\Psi(X,u) \in L^{n-1}_{0,p}[u^{\pm1}]$.
From the definition of multiplication in Hecke rings it immediately follows that the
map (2.44) is a homomorphism.
Next, using the definition of $\Psi$ and the expansions (5.12) of Chapter 3, we obtain
$\Psi(\Pi^n_\pm(p),u) = u^{-1}\Pi^{n-1}_\pm(p)$. This, along with (6.1) of Chapter 3, implies the claim in
the lemma concerning the $\Psi$-images of $\underline{C}^n_p$ and $\overline{C}^n_p$. As for $L^n_p$, if we define the map $\Psi\colon L^n_p \to
L^{n-1}_p[u^{\pm1}]$ by analogy with (2.42), then it is not hard to verify the commutativity of
the diagram
$$\begin{array}{ccc} L^n_p & \xrightarrow{\;\varepsilon\;} & L^n_{0,p} \\ \Psi\downarrow & & \downarrow\Psi \\ L^{n-1}_p[u^{\pm1}] & \xrightarrow{\;\varepsilon\;} & L^{n-1}_{0,p}[u^{\pm1}], \end{array}$$
where $\varepsilon$ denotes the imbeddings in Lemma 3.26 of Chapter 3; hence the claim for $L^n_p$ follows.
To prove the third part of the proposition, we suppose that each matrix $D_i$ in the
expansion (2.40) of some $X \in L^n_{0,p}$ has been chosen in the form (2.36) of Chapter 3
with diagonal entries $p^{d_{i1}},\dots,p^{d_{in}}$. Then, by the definition of the maps, we have
$$\Xi_n(\Omega^n_p(X)) = \Xi_n\Bigl(\sum_i a_i x_0^{\delta_i}\prod_{j=1}^{n}(x_jp^{-j})^{d_{ij}}\Bigr) = \sum_i a_i u^{-\delta_i}(up^{-n})^{d_{in}}\,x_0^{\delta_i}\prod_{j=1}^{n-1}(x_jp^{-j})^{d_{ij}}$$
$$= \sum_i a_i u^{-\delta_i}(up^{-n})^{d_{in}}\,\Omega^{n-1}_p\Bigl(\Gamma_0^{n-1}\begin{pmatrix} p^{\delta_i}(D_i')^* & B_i' \\ 0 & D_i' \end{pmatrix}\Bigr) = \Omega^{n-1}_p(\Psi(X,u)). \quad\square$$

In what follows, to avoid worrying about the different fields of definition of the
Hecke rings, we shall suppose that all of our maps of Hecke rings have been extended
by linearity to the complexifications.
THEOREM 2.12 (the Zharkovskaia commutation relations). Suppose that $q \in \mathbf{N}$, $p$
is a prime not dividing $q$, $\chi$ is a Dirichlet character modulo $q$, and $\varepsilon$ is the character of
the group $\{\pm1\}$ that satisfies the condition $\varepsilon(-1) = \chi_w(-1)$, where $\chi_w$ is the character
(2.5). Then the following relation holds for any $F \in \mathfrak{M}^n_w$ and any $X \in \widetilde{L}^n_{0,p} = L^n_{0,p}\otimes_{\mathbf{Q}}\mathbf{C}$:

(2.46) $(F|_{w,\chi}X)|\Phi = (F|\Phi)|_{w,\chi}\Psi(X,\,p^{n-w}\bar{\chi}(p)),$
246 4. HECKE OPERATORS

where $\Phi$ is the Siegel operator, $\Psi(X, p^{n-w}\bar{\chi}(p)) \in \widetilde{L}^{n-1}_{0,p}$ is the element (2.42), and in the
case $n = 1$ the operator $|_{w,\chi}\Psi$ acts on the right as multiplication by the complex number
$\Psi$.
PROOF. Let $f(R)$ ($R \in A_n$) be the coefficients in the Fourier expansion (2.6) of
$F$, and let $X$ be written in the form (2.40), where $a_i \in \mathbf{C}$ and each $D_i$ has the form
(2.41). Using the definitions (see (2.14)), we have
$$F|_{w,\chi}X = \sum_i a_i\Bigl(\sum_{R\in A_n}f(R)e\{RZ\}\Bigr)\Big|_{w,\chi}\begin{pmatrix} p^{\delta_i}D_i^* & B_i \\ 0 & D_i \end{pmatrix} = \sum_{i,R}\alpha_i a_i f(R)e\{p^{\delta_i}R[D_i^*]Z + RB_iD_i^{-1}\},$$
where
$$\alpha_i = p^{\delta_i(wn-\langle n\rangle)}\chi_w\bigl(\det(p^{\delta_i}D_i)\bigr)\,|\det D_i|^{-w}.$$

We note that for $R = \begin{pmatrix} R' & * \\ * & r \end{pmatrix} \in A_n$ the entry in the lower-right corner of the
matrix $p^{\delta_i}R[D_i^*]$ is equal to $p^{\delta_i-2d_i}r$ (see (2.41)). Thus, if we set $Z = \begin{pmatrix} Z' & 0 \\ 0 & i\lambda \end{pmatrix}$ in
the last expression, where $Z' \in H_{n-1}$, and if we let $\lambda$ approach $+\infty$, then all of the
terms corresponding to matrices $R$ with $r > 0$ will approach zero. Since $r \ge 0$, the surviving
terms have $r = 0$, and since $R \ge 0$ it follows that $R = \begin{pmatrix} R' & 0 \\ 0 & 0 \end{pmatrix}$. Finally, we note that for
$R = \begin{pmatrix} R' & 0 \\ 0 & 0 \end{pmatrix}$, $Z = \begin{pmatrix} Z' & 0 \\ 0 & i\lambda \end{pmatrix}$, and $D_i$ of the form (2.41) we have the relations
$$p^{\delta_i}R[D_i^*]Z = \begin{pmatrix} p^{\delta_i}R'[(D_i')^*]Z' & 0 \\ 0 & 0 \end{pmatrix}\quad\text{and}\quad RB_iD_i^{-1} = \begin{pmatrix} R'B_i'(D_i')^{-1} & 0 \\ 0 & 0 \end{pmatrix},$$
and we use the uniform convergence of the series on the subsets $H_n(\varepsilon) \subset H_n$. We
obtain

$$(F|_{w,\chi}X)|\Phi = \lim_{\lambda\to+\infty}(F|_{w,\chi}X)\begin{pmatrix} Z' & 0 \\ 0 & i\lambda \end{pmatrix} = \sum_{i,\,R'\in A_{n-1}}\alpha_i a_i f\Bigl(\begin{pmatrix} R' & 0 \\ 0 & 0\end{pmatrix}\Bigr)e\{p^{\delta_i}R'[(D_i')^*]Z' + R'B_i'(D_i')^{-1}\}$$
$$= \sum_i \beta_i a_i\Bigl(\sum_{R'\in A_{n-1}}f\Bigl(\begin{pmatrix} R' & 0 \\ 0 & 0\end{pmatrix}\Bigr)e\{R'Z'\}\Bigr)\Big|_{w,\chi}\begin{pmatrix} p^{\delta_i}(D_i')^* & B_i' \\ 0 & D_i' \end{pmatrix} = (F|\Phi)|_{w,\chi}\Psi(X,\,p^{n-w}\bar{\chi}(p)),$$

where $\beta_i = p^{\delta_i(w-n)}\chi(p^{\delta_i-d_i})p^{-wd_i}$, since, according to (3.50) of Chapter 2, the sum
in the large parentheses is equal to $(F|\Phi)(Z')$. $\square$

We now consider the restriction of the map $\Psi(\cdot\,,p^{n-k}\chi(p))$ to the complexification
$\widetilde{L}^n_p = L^n_p\otimes_{\mathbf{Q}}\mathbf{C}$ of the ring (3.46) in Chapter 3.
PROPOSITION 2.13. Suppose that $n, q \in \mathbf{N}$, $k \in \mathbf{Z}$, and $\chi$ is a character modulo $q$.
Then for any prime $p$ not dividing $q$:

(1) If $X \in \widetilde{L}^n_p$ (respectively, if $X$ belongs to the even subring

(2.47) $\widetilde{E}^n_p = \Bigl\{\sum_i a_i(\Gamma^n_0M_i) \in \widetilde{L}^n_p;\ r(M_i) = p^{\delta_i},\ \delta_i \in 2\mathbf{Z}\Bigr\}$

of $\widetilde{L}^n_p$), then $\Psi(X, p^{n-k}\chi(p)) \in \widetilde{L}^{n-1}_p$ (respectively, $\widetilde{E}^{n-1}_p$), where we set $\widetilde{L}^0_p = \widetilde{E}^0_p = \mathbf{C}$.

(2) The map

(2.48) $\Psi(\cdot\,,p^{n-k}\chi(p))\colon \widetilde{L}^n_p \to \widetilde{L}^{n-1}_p$

is an epimorphism in all cases except when

(2.49) $k = n > 1$ and $\chi(p) = -1$.

In that case the image of $\widetilde{L}^n_p$ is the even subring $\widetilde{E}^{n-1}_p \subset \widetilde{L}^{n-1}_p$.

(3) The map (2.48) gives an epimorphism of $\widetilde{E}^n_p$ onto the ring $\widetilde{E}^{n-1}_p$.
PROOF. We form the complexifications of all of the rings in the diagram (2.45),
i.e., we tensor with $\mathbf{C}$ over $\mathbf{Q}$, and we extend the maps by linearity to the complexifications.
Obviously, the resulting diagram still commutes. Then, instead of the action
of $\Psi(\cdot\,,p^{n-k}\chi(p))$ on $\widetilde{L}^n_p$ we can consider the action of $\Xi_n$ with $u = p^{n-k}\chi(p)$ on the
image $\Omega^n_p(\widetilde{L}^n_p)$ under the extended spherical map. From Theorem 3.30 of Chapter 3 it
follows that this image is the polynomial ring

(2.50) $\Omega^n_p(\widetilde{L}^n_p) = \mathbf{C}[r_1^n,\dots,r_{n-1}^n,\,t^n,\,(p_0^n)^{\pm1}],$

where the polynomials ra = r;, Pa = p;, and t = tn are defined by (3.52)-(3.54) of


Chapter 3, respectively. From the definition of the map n; and the fact that it is a
monomorphism on r;
it follows that the image of the subring (2.47) under this map
is the subset of polynomials in n;(L;) having even degree in the variable x 0 • Hence,
if we take into account the relation

(t) 2 ;:: X~XI • • ·Xn rro


n

i=I
+x;)(l +x;-I)
(2.51) 2n
=Po "L>a = Po(2 + 2(r1 +" · + Yn-1) + Yn) (n ~ 1),
a=O

we obtain
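The identity (2.51) can be checked numerically. Here we assume, as in the realization (3.52)-(3.54) of Chapter 3 used above, that $r_a^n$ is the $a$th elementary symmetric function of $x_1,\dots,x_n,x_1^{-1},\dots,x_n^{-1}$, that $p_0^n = x_0^2x_1\cdots x_n$, and that $t^n = x_0\prod_{i=1}^n(1+x_i)$; this is a sketch under those assumptions, with the symbols evaluated at sample rational points:

```python
from fractions import Fraction
from itertools import combinations
from math import prod

def elem_sym(vals, a):
    # a-th elementary symmetric function of the values in vals
    return sum((prod(c) for c in combinations(vals, a)), Fraction(0)) if a else Fraction(1)

def check_2_51(xs, x0):
    xs = [Fraction(x) for x in xs]
    x0 = Fraction(x0)
    n = len(xs)
    roots = xs + [1 / x for x in xs]          # the values x_i and x_i^{-1}
    r = [elem_sym(roots, a) for a in range(2 * n + 1)]
    p0 = x0 ** 2 * prod(xs)                   # p_0^n
    t = x0 * prod(1 + x for x in xs)          # t^n
    lhs = t ** 2
    assert lhs == p0 * sum(r)                 # (t^n)^2 = p_0^n * sum_a r_a^n
    # palindromy r_a = r_{2n-a} and r_0 = r_{2n} = 1 give the final form in (2.51)
    assert lhs == p0 * (2 + 2 * sum(r[1:n]) + r[n])
    return True

print(check_2_51([2, 3, 5], 7))   # True
```

The exact arithmetic over `Fraction` makes the equality test meaningful; a check at several independent points of a polynomial identity of bounded degree is strong evidence, though not a proof.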

We now examine how the map $\Xi = \Xi_n$ acts on the generators of these polynomial
rings. By definition, we have

(2.53) $\Xi(p_0^n) = u^{-1}x_0^2x_1\cdots x_{n-1} = u^{-1}p_0^{n-1}, \qquad \Xi(t^n) = u^{-1}(1+u)t^{n-1}.$

Using (3.52) of Chapter 3, we obtain

(2.54) $\sum_{a=0}^{2n}(-1)^a\Xi(r_a^n)v^a = (1-uv)(1-u^{-1}v)\prod_{i=1}^{n-1}(1-x_iv)(1-x_i^{-1}v) = \bigl(1-(u+u^{-1})v+v^2\bigr)\sum_{a=0}^{2(n-1)}(-1)^ar_a^{n-1}(x_1,\dots,x_{n-1})v^a,$

and hence

(2.55) $\Xi(r_a^n) = r_a^{n-1} + (u+u^{-1})r_{a-1}^{n-1} + r_{a-2}^{n-1} \qquad (0 \le a \le 2n),$

where we set $r_b^{n-1} = 0$ for $b < 0$ and $b > 2(n-1)$.

These formulas imply that for any $u \in \mathbf{C}$, in particular for $u = p^{n-k}\chi(p)$, the map
$\Xi$ takes the ring $\Omega^n_p(\widetilde{L}^n_p)$ to $\Omega^{n-1}_p(\widetilde{L}^{n-1}_p)$, and takes the ring $\Omega^n_p(\widetilde{E}^n_p)$ to
$$\mathbf{C}[r_1^{n-1},\dots,r_n^{n-1},(p_0^{n-1})^{\pm1}] = \mathbf{C}[r_1^{n-1},\dots,r_{n-1}^{n-1},(p_0^{n-1})^{\pm1}] = \Omega^{n-1}_p(\widetilde{E}^{n-1}_p)$$
(obviously $r_n^{n-1} = r_{n-2}^{n-1}$). This proves part (1) (for $\widetilde{L}^n_p$ this part also follows from
Proposition 2.11(2)).
To find the images of these maps we use (2.53) and (2.54) to express the generators
$r_a^{n-1}$ and $p_0^{n-1}$ in terms of their preimages. From (2.53) we have

(2.56) $p_0^{n-1} = u\,\Xi(p_0^n).$

Furthermore, from (2.54) we obtain

(2.57) $r_a^{n-1} = \sum_{0\le i\le a}(-1)^ic_i(u)\,\Xi(r_{a-i}^n),$

where the $c_i(u)$ are determined by the condition
$$\sum_{i=0}^{\infty}c_i(u)v^i = \{(1-uv)(1-u^{-1}v)\}^{-1}.$$

If we set $u = p^{n-k}\chi(p)$ in these formulas, we see that the image of $\Omega^n_p(\widetilde{L}^n_p)$ contains
all of the generators of the ring $\Omega^{n-1}_p(\widetilde{L}^{n-1}_p)$, except in the case when $n > 1$ and
$p^{n-k}\chi(p) = -1$, i.e., the case (2.49). In the latter case, the image contains all of the
elements $r_a^{n-1}$ and $(p_0^{n-1})^{\pm1}$, but it does not contain $t^{n-1}$; hence, by (2.52) for $n-1$,
this image is $\Omega^{n-1}_p(\widetilde{E}^{n-1}_p)$. The same formulas also imply that the image of $\Omega^n_p(\widetilde{E}^n_p)$ is
always $\Omega^{n-1}_p(\widetilde{E}^{n-1}_p)$. $\square$
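The action (2.53)-(2.55) of $\Xi$ and the inversion formula (2.57) can also be checked numerically. As in the check above, we assume $r_a^n$ is the $a$th elementary symmetric function of $x_1,\dots,x_n$ and their inverses, so that $\Xi$ simply substitutes $x_n = u$; under that assumption the coefficients $c_i(u)$ come out to $\sum_{j=0}^{i}u^{i-2j}$ (a sketch):

```python
from fractions import Fraction
from itertools import combinations
from math import prod

def elem_sym(vals, a):
    return sum((prod(c) for c in combinations(vals, a)), Fraction(0)) if a else Fraction(1)

def r_poly(xs, a):
    # r_a for the variable values xs; zero outside 0..2*len(xs)
    if a < 0 or a > 2 * len(xs):
        return Fraction(0)
    return elem_sym(xs + [1 / x for x in xs], a)

xs = [Fraction(2), Fraction(3), Fraction(5)]   # x_1, x_2, x_3  (n = 3)
u = Fraction(7)
n = len(xs)
xi_r = [r_poly(xs[:-1] + [u], a) for a in range(2 * n + 1)]   # Xi(r_a^n): x_n -> u

# (2.55): Xi(r_a^n) = r_a^{n-1} + (u + 1/u) r_{a-1}^{n-1} + r_{a-2}^{n-1}
for a in range(2 * n + 1):
    rhs = r_poly(xs[:-1], a) + (u + 1 / u) * r_poly(xs[:-1], a - 1) + r_poly(xs[:-1], a - 2)
    assert xi_r[a] == rhs

def c(i):
    # expansion coefficients of 1/((1-uv)(1-v/u)) in powers of v
    return sum(u ** (i - 2 * j) for j in range(i + 1))

# (2.57): r_a^{n-1} = sum_i (-1)^i c_i(u) Xi(r_{a-i}^n)
for a in range(2 * (n - 1) + 1):
    assert r_poly(xs[:-1], a) == sum((-1) ** i * c(i) * xi_r[a - i] for i in range(a + 1))

print("ok")
```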

The formulas (2.56) and (2.57) give us a practical way to compute the preimages
of the generators of $\widetilde{L}^{n-1}_p$ under the map (2.48).

Let $\widehat{L}^n_{0,p}$ be the Hecke ring (4.99) of Chapter 3, and let $P^n_k$ be the homomorphism
(4.101) of Chapter 3. We define the homomorphism $\widehat\Psi = \widehat\Psi_n$ for the ring $\widehat{L}^n_{0,p}$ in such
a way that the following diagram commutes:

(2.58) $\begin{array}{ccc} \widehat{L}^n_{0,p} & \xrightarrow{\;P^n_k\;} & \widetilde{L}^n_{0,p} \\ \widehat\Psi(\cdot,u)\downarrow & & \downarrow\Psi(\cdot,u) \\ \widehat{L}^{n-1}_{0,p}[u^{\pm1}] & \xrightarrow{\;P^{n-1}_k\times1\;} & \widetilde{L}^{n-1}_{0,p}[u^{\pm1}], \end{array}$

where $P^{n-1}_k\times1$ is the homomorphism extending $P^{n-1}_k$ and taking $u^{\pm1}\to u^{\pm1}$, and
$\widehat{L}^0_{0,p} = \mathbf{C}$.

If

(2.59) $\widehat{X} = \sum_i a_i\Bigl(\widehat\Gamma^n_0\Bigl(\begin{pmatrix} p^{\delta_i}(D_i)^* & B_i \\ 0 & D_i \end{pmatrix},\,t_i\Bigr)\Bigr)$

is an arbitrary element of $\widehat{L}^n_{0,p}$ and the matrices $D_i$ have the form (2.41), then we easily
see that the map $\widehat\Psi$ that takes $\widehat{X}$ to

(2.60) $\widehat\Psi(\widehat{X},u) = \sum_i a_iu^{-\delta_i}(up^{-n})^{d_i}\Bigl(\widehat\Gamma^{n-1}_0\Bigl(\begin{pmatrix} p^{\delta_i}(D_i')^* & B_i' \\ 0 & D_i' \end{pmatrix},\,t_ip^{-d_i/2}\Bigr)\Bigr), \qquad \widehat\Psi(\widehat{X},u) = \sum_i a_iu^{-\delta_i}(up^{-1})^{d_i}t_i^{-k}$

for $n > 1$ and $n = 1$, respectively, has the above property. Moreover, if we use (2.19),
(2.46), and the commutativity of the diagram (2.58), then for any $F \in \mathfrak{M}^n_{k/2}$, where
$\varepsilon(-1) = \chi(-1)$, and for any $\widehat{X} \in \widehat{L}^n_{0,p}$, we obtain the relation

(2.61) $(F|_{k/2,\chi}\widehat{X})|\Phi = (F|\Phi)|_{k/2,\chi}\widehat\Psi(\widehat{X},\,p^{n-k/2}\chi(p)).$

LEMMA 2.14. Let $\widehat{E}^n_p(q)$ be the even Hecke ring (4.37) of Chapter 3, and let $e_q$ be
the imbedding of this ring in $\widehat{L}^n_{0,p}$ given in (4.100) of Chapter 3. Then the restriction of
the map (2.60) to the subring $e_q(\widehat{E}^n_p(q))$ gives a homomorphism

(2.62) $\widehat\Psi\colon e_q(\widehat{E}^n_p(q)) \to e_q(\widehat{E}^{n-1}_p(q))[u^{\pm1}],$

where the ring on the right is $\mathbf{C}[u^{\pm1}]$ in the case $n = 1$.


PROOF. Since the lemma is obvious for $n = 1$, we shall suppose that $n > 1$. It was
shown in §4.3 of Chapter 3 that the Hecke pairs $(\widehat\Gamma^n_0(q),\widehat{S}^n_p(q))$ and $(\widehat\Gamma^n_0,\widehat{S}^n_{0,p})$ satisfy
the condition (1.26) of Chapter 3. Using the diagram (1.30) of Chapter 3, we define an
action of the group $\widehat\Gamma^n_0(q)$ on the space $L = L_{\mathbf{Q}}(\widehat\Gamma^n_0,\widehat{S}^n_{0,p})$ and on its extension $L[u^{\pm1}]$,
where $\widehat\Gamma^n_0(q)$ acts only on the coefficients in $L$. Then, by Lemma 1.10 of Chapter 3, to
prove (2.62) it suffices to show that
$$\widehat\Psi(\widehat{X},u)\cdot\gamma' = \widehat\Psi(\widehat{X}\cdot\gamma,u) \quad\text{for } \widehat{X} \in e_q(\widehat{E}^n_p(q)),\ \gamma' \in \widehat\Gamma^{n-1}_0(q),$$
where $\gamma$ is the image of $\gamma'$ in the group $\widehat\Gamma^n_0(q)$ under the map (3.53) of Chapter 2. This
relation, in turn, is a consequence of the following claim.

Let $\widehat{M}_i = (M_i, t_i) \in \widehat{S}^n_{0,p}$ with $M_i = \begin{pmatrix} p^{\delta}D_i^* & B_i \\ 0 & D_i \end{pmatrix}$, where $i = 1, 2$, the matrices
$D_i$ have the form (2.41), and

(2.63) $\widehat{M}_1\widehat\gamma = \widehat\delta\widehat{M}_2.$

Then

(2.64) $\widehat{M}_1'\gamma' = \widehat\delta'\widehat{M}_2' \quad\text{for } \widehat{M}_i' = (M_i',\,t_ip^{-d_i/2}) \in \widehat{S}^{n-1}_{0,p},$

where $M_i' = \begin{pmatrix} p^{\delta}(D_i')^* & B_i' \\ 0 & D_i' \end{pmatrix}$ and $\delta' = \begin{pmatrix} a' & b' \\ c' & d' \end{pmatrix} \in \Gamma^{n-1}_0(q)$, and for any $n\times n$
matrix $A$ we let $A'$ denote the $(n-1)\times(n-1)$ matrix in its upper-left corner.

To prove this claim, we first note that $M_1\gamma = \delta M_2$. From this and from (2.7)-(2.8)
of Chapter 1 we obtain $M_1'\gamma' = \delta'M_2'$, where $\delta' \in \Gamma^{n-1}_0(q)$, $d_1 = d_2$, and the matrix
$\delta$ has blocks of the form
$$a = \begin{pmatrix} a' & * \\ 0 & * \end{pmatrix}, \qquad c = \begin{pmatrix} c' & 0 \\ 0 & 0 \end{pmatrix}, \qquad d = \begin{pmatrix} d' & * \\ 0 & * \end{pmatrix}.$$
If we now compare the second components of the elements in (2.63), then setting
$Z = \begin{pmatrix} Z' & 0 \\ 0 & i\lambda \end{pmatrix} \in H_n$, we conclude from (3.19) and (3.23) of Chapter 2 together with
(3.61) of Chapter 2 that
$$t_1\,j(\gamma',Z') = j(\delta',M_2'\langle Z'\rangle)\,t_2.$$

This, together with the equality $M_1'\gamma' = \delta'M_2'$, gives us (2.64). $\square$

This lemma enables us to describe the action of $\widehat\Psi$ on the subring $e_{q,k}(\check{E}^n(q,\chi))$
of the ring $\widehat{L}^n_{0,p}$, and to define a map $\widehat\Psi'$ for $\check{E}^n(q,\chi)$ that commutes with the Siegel
operator $\Phi$. Namely, we have

PROPOSITION 2.15. (1) Let $\widehat{E}^n(q,\chi)$ be the ring (4.104) of Chapter 3, let $\chi$ be a
Dirichlet character modulo $q$, and let $k$ be an odd integer. Then the map $\widehat\Psi$ (see (2.60))
gives an epimorphism

(2.65) $\widehat\Psi(\cdot\,,p^{n-k/2}\chi(p))\colon \widehat{E}^n(q,\chi)\otimes_{\mathbf{Q}}\mathbf{C} \to \widehat{E}^{n-1}(q,\chi)\otimes_{\mathbf{Q}}\mathbf{C},$

where the ring on the right is $\mathbf{C}$ in the case $n = 1$.

(2) Let $\widehat\Psi' = \widehat\Psi'(\cdot\,,p^{n-k/2}\chi(p))$ be the map defined by the commutative diagram

(2.66) $\begin{array}{ccc} \check{E}^n(q,\chi)\otimes_{\mathbf{Q}}\mathbf{C} & \xrightarrow{\;e_{q,k}\;} & \widehat{L}^n_{0,p} \\ \widehat\Psi'(\cdot,p^{n-k/2}\chi(p))\downarrow & & \downarrow\widehat\Psi(\cdot,p^{n-k/2}\chi(p)) \\ \check{E}^{n-1}(q,\chi)\otimes_{\mathbf{Q}}\mathbf{C} & \xrightarrow{\;e_{q,k}\;} & \widehat{L}^{n-1}_{0,p}, \end{array}$

where $\check{E}^n(q,\chi)$ is the ring (4.83) of Chapter 3 and $e_{q,k}$ is the monomorphism (4.102) of
Chapter 3 extended by linearity if $m > 0$, and is the identity map from $\mathbf{C}$ to $\mathbf{C}$ if $m = 0$.
Then $\widehat\Psi'$ is an epimorphism.

(3) For any $F \in \mathfrak{M}^n_{k/2}(q,\chi)$ and $X \in \check{E}^n(q,\chi)$ one has

(2.67) $(F|_{k/2,\chi}X)|\Phi = (F|\Phi)|_{k/2,\chi}\widehat\Psi'(X,\,p^{n-k/2}\chi(p)),$

where $\Phi$ is the Siegel operator, $|_{k/2,\chi}X$ is the operator (2.3), and in the case $n = 1$ the
operator $|_{k/2,\chi}\widehat\Psi'$ acts as multiplication by the complex number $\widehat\Psi'$.
PROOF. Since part (1) is obvious for $n = 1$, we shall suppose that $n \ge 2$. Let
$T_n(p^2)$ be one of the generators of $\widehat{E}^n(q,\chi)$ in (4.82) of Chapter 3, and let $e_q$ be the
imbedding (4.100) of Chapter 3. From (2.60) and Lemma 2.14 it follows that
$$\widehat\Psi(e_q(T_n(p^2)),u) = \sum_s a_su^{\alpha_s}e_q\bigl((\xi_s)_{\Gamma}\bigr)$$
(a finite sum), where $\xi_s = (K_s^{n-1},\,*)$ (see (4.82) of Chapter 3), $a_s \in \mathbf{Q}$, $\alpha_s \in \mathbf{Z}$,
and $\Gamma = \widehat\Gamma^{n-1}_0(q)$. But since $(\xi_s)_{\Gamma} = T^{n-1}_s(p^2)\cdot\bigl((E_{2n-2},t_s)\bigr)_{\Gamma}$, where $t_s \in \mathbf{C}$, it follows
from the commutativity of the diagram (2.58) and the relation $P_k\bigl(e_q(((E_{2n-2},t_s))_{\Gamma})\bigr) =
t_s^{-k}$ that the map (2.65) is a homomorphism. That it is an epimorphism follows from
the commutativity of the diagram

(2.68) $\begin{array}{ccc} \widehat{E}^n(q,\chi)\otimes_{\mathbf{Q}}\mathbf{C} & \xrightarrow{\;\Omega^n_p\;} & \mathbf{C}[r_1^n,\dots,r_n^n,(p_0^n)^{\pm1}] \\ \widehat\Psi(\cdot,p^{n-k/2}\chi(p))\downarrow & & \downarrow\Xi_n \\ \widehat{E}^{n-1}(q,\chi)\otimes_{\mathbf{Q}}\mathbf{C} & \xrightarrow{\;\Omega^{n-1}_p\;} & \mathbf{C}[r_1^{n-1},\dots,r_{n-1}^{n-1},(p_0^{n-1})^{\pm1}], \end{array}$

where the maps $\Omega^n_p$ are isomorphisms by Theorem 4.19 of Chapter 3 and (2.52), and
from the fact that $\Xi_n$ is an isomorphism in this situation (see Proposition 2.13).

According to Theorem 4.21(4) of Chapter 3, the map $e_{q,k}$ gives a ring isomorphism
between $\check{E}^n(q,\chi)$ and $\widehat{E}^n(q,\chi)$. From this and part (1) we conclude that the map $\widehat\Psi'$
in (2.66) is well defined and is an epimorphism.

Finally, the third part of the proposition follows from part (2), the commutation
relation (2.46), and Lemma 2.4. $\square$

The Zharkovskaia relations are often used when one wants to answer certain
questions concerning the action of Hecke operators on modular forms not in the
kernel of the Siegel operator by reducing them to analogous questions for forms of
lower degree. One such question is the existence of a basis of eigenfunctions of the
Hecke operators. Theorem 1.9 gives a positive answer to this question in the case of
cusp-forms. In many cases the Zharkovskaia relations enable one to carry this result
over to the entire space of modular forms. Since the general case has not yet been
sufficiently investigated, we shall limit ourselves to the simplest nontrivial case, that of
the space

(2.69) $\mathfrak{M}^n_k = \mathfrak{M}_k(\Gamma^n)$

of modular forms of integer weight and unit character for the modular group $\Gamma^n$.

THEOREM 2.16. Any subspace $V$ of $\mathfrak{M}^n_k$, where $n \in \mathbf{N}$ and $k \in \mathbf{Z}$, that is invariant
relative to all of the Hecke operators $|_kT = |_{k,1}T$ for $T \in L(\Gamma^n) = L^n(1) = L^n$ has a
basis consisting of eigenfunctions of all of these operators.

PROOF. In the case under consideration, (1.26) obviously implies that for any
forms $F, G \in \mathfrak{M}^n_k$, at least one of which is a cusp-form, and for any $T \in L^n$, one has

(2.70) $\langle F|_kT,\,G\rangle = \langle F,\,G|_kT\rangle.$

Then the argument used to prove Theorem 1.9 can be applied to any invariant subspace
$V$ contained in $\mathfrak{N}^n_k$, so that our theorem is proved in that case. If $V$ is an arbitrary
invariant subspace, we set $V_2 = V\cap\mathfrak{N}^n_k$ and $V_1 = \{F \in V;\ \langle F,G\rangle = 0\ \text{for all}\ G \in V_2\}$.
Using the properties (3)-(5) of the scalar product in Theorem 5.3 of Chapter 2 and
standard linear algebra, we see that $V$ is the direct sum of the subspaces $V_1$ and $V_2$:

(2.71) $V = V_1\oplus V_2.$

Since $V_2$ is the intersection of two invariant subspaces (see Proposition 1.5), it is
invariant relative to all of the Hecke operators. It then follows from (2.70) that the
subspace $V_1$ has the same property. In the case of $\mathfrak{M}^n_k$ the set of cusp-forms coincides
with the kernel of $\Phi$. The subspace $V_1$ does not contain cusp-forms, and so $\Phi$ gives an
isomorphism of this space onto the image $V' = \Phi(V_1) \subset \mathfrak{M}^{n-1}_k$. Using the invariance
of $V_1$, the Zharkovskaia relations (2.46), the surjectivity of the maps (2.48) in the
present situation, and Lemma 2.4, we conclude that the space $V'$ is invariant relative
to all Hecke operators in $L^{n-1}_p$ for each prime $p$, and hence, by Theorem 3.12 of Chapter
3, relative to all Hecke operators in $L^{n-1}$. Now suppose that the theorem has already
been proved for subspaces of $\mathfrak{M}^{n-1}_k$. Then $V'$ has a basis $F_1',\dots,F_d'$ of eigenfunctions
of all of the Hecke operators in $L^{n-1}$. Let $F_1 = \Phi^{-1}(F_1'),\dots,F_d = \Phi^{-1}(F_d')$ denote
the preimages of these eigenfunctions in $V_1$. Then $F_1,\dots,F_d$ obviously form a basis
of $V_1$. In addition, each of the functions $F_i$ is an eigenfunction for all of the operators
in $L^n$. In fact, by Theorem 3.12 of Chapter 3, it suffices to verify this for the operators
corresponding to elements $T \in L^n_p$ for all primes $p$. For such $T$ it follows from our
assumptions and Lemma 2.4 that
$$F_i'|_k\Psi(\varepsilon(T),p^{n-k}) = \lambda_i(T)F_i',$$
where $\varepsilon\colon L^n_p \to L^n_{0,p}$ is the imbedding (1.27) of Chapter 3 and $\lambda_i(T)$ is a scalar; hence,
by (2.46), we have
$$(F_i|_kT - \lambda_i(T)F_i)|\Phi = F_i|_k\varepsilon(T)|\Phi - \lambda_i(T)F_i' = F_i'|_k\Psi(\varepsilon(T),p^{n-k}) - \lambda_i(T)F_i' = 0,$$
and so $F_i|_kT = \lambda_i(T)F_i$. As noted before, the space $V_2$ has a basis of eigenfunctions
of all of the Hecke operators. If we combine this basis with the basis $F_1,\dots,F_d$
of $V_1$, we obtain the desired basis for $V = V_1\oplus V_2$. To complete the induction it
remains to prove the theorem in the case $n = 1$. We again represent the invariant
subspace $V \subset \mathfrak{M}^1_k$ in the form (2.71), and note that in this case $\dim V_1 = 0$ or $1$, since
$\Phi(V_1) \subset \mathfrak{M}^0_k = \mathbf{C}$. If $\dim V_1 = 0$, then $V \subset \mathfrak{N}^1_k$, and our claim has already been
proved; if $\dim V_1 = 1$ and $F$ is a function that spans the invariant subspace $V_1$, then $F$
is automatically an eigenfunction of all of the Hecke operators. This $F$, together with
the basis of eigenfunctions for $V_2$, form the desired basis for $V$. $\square$

Another application of the Zharkovskaia relations can be found in the next subsection.

PROBLEM 2.17. Let $\lambda$ be a nonzero $\mathbf{Q}$-linear homomorphism from $L^{n-1}_p$ to $\mathbf{C}$, where
$n > 1$ and $p$ is a prime; let $\alpha_0,\alpha_1,\dots,\alpha_{n-1}$ be the parameters of $\lambda$ (see Proposition
3.36 of Chapter 3), and let $\widetilde\lambda$ be the linear extension of $\lambda$ to the complexification $\widetilde{L}^{n-1}_p$.
Show that $\alpha_0u^{-1},\alpha_1,\dots,\alpha_{n-1},u$ can be taken as parameters of the homomorphism
$T \to \widetilde\lambda(\Psi_n(T,u))$ of the ring $L^n_p$, where $u$ is a nonzero complex number.

PROBLEM 2.18. Show that all of the eigenvalues of Hecke operators on $\mathfrak{M}^n_k$ are
real.

4. Action of the middle factor in the symmetric factorization of Rankin polynomials.
As mentioned before, the main purpose of this section is to study the action of the
Hecke operators in $L^n(q)$ or $\widehat{E}^n(q,\chi)$ on modular forms and their Fourier coefficients.
Our general philosophy is to replace the global rings by their local subrings $L^n_p(q)$ and
$\widehat{E}^n_p(q,\chi)$, and then to replace these by the isomorphic subrings $L^n_p$ and $\widehat{E}^n_p(q,\chi)$ of the
Hecke ring of the group $\Gamma^n_0$. The imbeddings of the local Hecke rings in the Hecke
rings of the triangular subgroup $\Gamma^n_0$ make it possible to decompose the elements of the
local rings (in particular, their generators) into components having a simpler action
on the modular forms and their Fourier coefficients. Among the decompositions of
this sort, the most important are the ones in the symmetric factorization of the Rankin
polynomials $R^n_p(v)$ and $\widehat{R}^n_p(v)$ (see (6.99) and (6.100) of Chapter 3), whose coefficients
include all of the generators of the even local subrings $E^n_p$ and $\widehat{E}^n_p(q,\chi)$ (from (2.52)
and (2.68) it follows that the coefficients of the Rankin polynomials precisely generate
the even local subrings). Lemma 2.7 contains, in particular, formulas for the action of
the first and third factors in the factorizations (6.99) and (6.100) of Chapter 3. Below
we shall derive formulas for the action of the middle factors, regarded as polynomials
with operator coefficients.

As before, $w = k$ or $k/2$ is an integer or half-integer, $n, q \in \mathbf{N}$, and $q$ is divisible
by $4$ if $w = k/2$. In addition, $p$ is a prime not dividing $q$, $\chi$ is a Dirichlet character
modulo $q$, and $\varepsilon$ is the character of the group $\{\pm1\}$ such that $\varepsilon(-1) = \chi_w(-1)$ (see
(2.5)).
We introduce some useful notation. If $P(v) = \sum_{i\ge0}p_iv^i$ ($p_i \in \widetilde{L}^n_{0,p}$) is a polynomial
or formal power series over the ring $\widetilde{L}^n_{0,p}$, then for any $F \in \mathfrak{M}^n_w$ (respectively, for
any $f \in \mathfrak{F}^n_w$) we set

(2.72) $(F|_{w,\chi}P(v))(Z) = \sum_{i\ge0}(F|_{w,\chi}p_i)(Z)v^i$

(respectively,

(2.73) $(f|_{w,\chi}P(v))(R) = \sum_{i\ge0}(f|_{w,\chi}p_i)(R)v^i),$

where the right side is understood as a formal sum in the case when $P$ is a formal power
series. If the values of $f$ coincide with the Fourier coefficients of $F$, then obviously

(2.74) $F|_{w,\chi}P(v) = \sum_{R\in A_n}(f|_{w,\chi}P(v))(R)e\{RZ\}.$

It also follows from the definitions that the product of polynomials or series corresponds
to the product of the corresponding operators.

Thus, let

(2.75) $B^n_p(v) = \sum_{i=0}^{n}(-1)^ib_iv^i, \qquad \widehat{B}^n_p(v) = \sum_{i=0}^{n}(-1)^i\widehat{b}_iv^i$

be the middle factors in (6.99) and (6.100) of Chapter 3, where the coefficients $b_i$ and
$\widehat{b}_i$ are linear combinations of elements of the form $\Pi^r_p$ and $\widehat\Pi^r_p(k)$. The next lemma
reduces the study of the action of these elements on $\mathfrak{M}^n_w$ and $\mathfrak{F}^n_w$ to the computation of
certain trigonometric sums.

LEMMA 2.19. Under the above assumptions and notation, any $f \in \mathfrak{F}^n_w$ satisfies the
relations
$$(f|_{k,\chi}\Pi^r_p)(R) = p^{n(k-n-1)}\chi(p)^n\,l_p(r,n;R)f(R),$$
$$(f|_{k/2,\chi}\widehat\Pi^r_p(k))(R) = p^{n(k/2-n-1)}\chi(p)^n\,l^{\varkappa}_p(r,n;R)f(R),$$
where

(2.76) $l^{\varkappa}_p(r,n;R) = \sum_{A\in L_p(r,n)}\varkappa(A)^{-k}e\{p^{-1}RA\},$

$L_p(r,n)$ is the set of symmetric $n\times n$ matrices of rank $r$ over the field $\mathbf{F}_p = \mathbf{Z}/p\mathbf{Z}$, $\varkappa$ is
the function (4.70) of Chapter 3, and

(2.77) $l_p(r,n;R) = l^{\varkappa}_p(r,n;R)$ with $\varkappa = 1$.

PROOF. Let $F \in \mathfrak{M}^n_w$ have Fourier coefficients $f(R)$. Then from (2.14), (2.15),
and (4.106) of Chapter 3 it follows that
$$F|_{w,\chi}\widehat\Pi^r_p(k) = p^{n(w-n-1)}\chi(p)^n\sum_{R,\,B_0\in L_p(r,n)}f(R)\varkappa(B_0)^{-k}e\{R(Z+p^{-1}B_0)\}.$$
Combining similar terms containing $e\{RZ\}$ and setting $\varkappa = 1$ when $w = k$, we obtain
the lemma. $\square$
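For small parameters the sums (2.76)-(2.77) can be computed by brute force. A simple consistency check (a sketch; the normalization $e\{S\} = \exp(\pi i\,\sigma(S))$ and the case $\varkappa = 1$ with $p$ odd are assumed): summing $l_p(r,n;R)$ over all ranks $r$ gives the sum of $e\{p^{-1}RA\}$ over all symmetric $A$ modulo $p$, which equals $p^{n(n+1)/2}$ if $R \equiv 0 \pmod p$ and $0$ otherwise, for $R$ with even diagonal:

```python
import cmath
from itertools import product

def sym_matrices(n, p):
    # all symmetric n x n matrices with entries in {0,...,p-1}
    idx = [(i, j) for i in range(n) for j in range(i, n)]
    for vals in product(range(p), repeat=len(idx)):
        a = [[0] * n for _ in range(n)]
        for (i, j), v in zip(idx, vals):
            a[i][j] = a[j][i] = v
        yield a

def rank_mod_p(a, p):
    # Gaussian elimination over F_p (p prime)
    m = [row[:] for row in a]
    n, r = len(m), 0
    for col in range(n):
        piv = next((i for i in range(r, n) if m[i][col] % p), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        inv = pow(m[r][col], p - 2, p)
        for i in range(n):
            if i != r and m[i][col] % p:
                f = m[i][col] * inv % p
                m[i] = [(x - f * y) % p for x, y in zip(m[i], m[r])]
        r += 1
    return r

def l_sum(R, p, r):
    # l_p(r, n; R): sum over symmetric A of rank r of exp(pi*i*tr(R A)/p)
    n = len(R)
    total = 0
    for A in sym_matrices(n, p):
        if rank_mod_p(A, p) != r:
            continue
        tr = sum(R[i][j] * A[j][i] for i in range(n) for j in range(n))
        total += cmath.exp(1j * cmath.pi * tr / p)
    return total

p, n = 3, 2
R_nonzero = [[2, 1], [1, 4]]   # even diagonal, not congruent to 0 mod 3
R_zero = [[6, 3], [3, 6]]      # congruent to 0 mod 3
for R, expect in [(R_nonzero, 0), (R_zero, p ** (n * (n + 1) // 2))]:
    total = sum(l_sum(R, p, r) for r in range(n + 1))
    assert abs(total - expect) < 1e-8
print("ok")
```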

This lemma enables us to write the action of the polynomials $B^n_p(v)$ and $\widehat{B}^n_p(v)$ (in
the sense of (2.73)) in terms of the trigonometric sums (2.76) and (2.77).

PROPOSITION 2.20. Under the above assumptions and notation, any $f \in \mathfrak{F}^n_w$ satisfies
the relations

(2.78) $(f|_{k,\chi}B^n_p(v))(R) = B^n_p(v,R)f(R), \qquad (f|_{k/2,\chi}\widehat{B}^n_p(v))(R) = \widehat{B}^n_{p,k}(v,R)f(R),$

where $R \in A_n$, and $B^n_p(v,R)$ and $\widehat{B}^n_{p,k}(v,R)$ are the polynomials defined by setting

(2.79) $B^n_p(v,R) = \sum_{i=0}^{n}(-1)^ip^{-\langle i\rangle-i(n-i)}\Bigl\{\sum_{j=0}^{i}a_{ij}\,l_p(i-j,n;R)\Bigr\}v^i,$
$$\widehat{B}^n_{p,k}(v,R) = \sum_{i=0}^{n}(-1)^ip^{-\langle i\rangle-i(n-i)}\Bigl\{\sum_{j=0}^{i}\widehat{a}_{ij}\,l^{\varkappa}_p(i-j,n;R)\Bigr\}v^i,$$
and $a_{ij}$ and $\widehat{a}_{ij}$ are the coefficients (6.70) and (6.86) of Chapter 3.

PROOF. The proposition follows directly from (2.73), (2.75), Theorem 6.24 of
Chapter 3, and Lemma 2.19, if we take into account that $\Delta_p$ coincides with $\Pi^0_p$. $\square$

Thus, our task reduces to the computation of the polynomials (2.79), to which
the rest of this subsection is devoted. We begin by computing the trigonometric sums
(2.77). For $0 < b \le n$ we set

(2.80) $\mathrm{Pr}_p(b,n) = \{M \in M_{b,n}(\mathbf{Z}/p\mathbf{Z});\ r_p(M) = b\},$

where, as before, $r_p(M)$ denotes the rank of the matrix $M$ over the field of $p$ elements;
and for

(2.81) $Q \in E_n$

(the set of matrices of integral quadratic forms in $n$ variables) we define the set

(2.82) $\mathrm{Pr}_p(b,n;Q) = \{M \in \mathrm{Pr}_p(b,n);\ Q[{}^t\!M] \equiv 0\ (\mathrm{mod}\ p)\},$

where congruence modulo $d \in \mathbf{N}$ for $S, S' \in E_m$ is understood in the following sense:

(2.83) $S \equiv S'\ (\mathrm{mod}\ d)$ means that $S - S' \in dE_m$.

We shall let $p_p(b,n)$ and $p_p(b,n;Q)$ denote the number of elements in the sets (2.80)
and (2.82), respectively:
$$p_p(b,n) = |\mathrm{Pr}_p(b,n)|, \qquad p_p(b,n;Q) = |\mathrm{Pr}_p(b,n;Q)|,$$
and we set $p_p(0,n) = p_p(0,n;Q) = 1$.
LEMMA 2.21. Let $0 \le r \le n$, $R \in E_n$, and $p$ be a prime number. Then the
trigonometric sum (2.77) can be written in the form
$$l_p(r,n;R) = \varphi_{n-r}^{-1}\sum_{\substack{a,b\ge0\\a+b=r}}(-1)^ap^{\langle a-1\rangle+b}\,\frac{\varphi_{n-b}}{\varphi_a\varphi_b}\,p_p(b,n;R),$$
where $\varphi_i = \varphi_i(p)$ is the function (2.29) of Chapter 3.


PROOF. In the case $r = 0$ the formula is obvious. If we apply Lemma 6.18 of
Chapter 3 and use the notation of that lemma, we have
$$l_p(r,n;R) = \sum_{i\in I_{r,n}}\ \sum_{M\in L_p(r,n;i)}e\{p^{-1}RM\} = \sum_{i\in I_{r,n}}\sum_{A,V}\exp\bigl(\pi i\,\sigma(R[{}^t\!X]A)/p\bigr),$$
where $A \in L_p(r,r)$, $V \in V(i)$, and $X = (E_r,V)M(i)$.

To transform the last expression we note that the set that $X$ runs through may be
regarded as a set of representatives of the orbits of the group $G = GL_r(\mathbf{Z}/p\mathbf{Z})$ acting
by left multiplication on the set $\mathrm{Pr}_p(r,n)$:

(2.84) $G\backslash\mathrm{Pr}_p(r,n) = \bigcup_{i\in I_{r,n}}\{X = (E_r,V)M(i);\ V \in V(i)\}.$

Namely, for every matrix $T$ in $\mathrm{Pr}_p(r,n)$ we let $i = i(T) \in I_{r,n}$ denote the first (in
lexicographical order) set of indices $1 \le i_1 < \dots < i_r \le n$ such that the columns of $T$
indexed by $i_1,\dots,i_r$ are linearly independent modulo $p$. Since obviously

(2.85) $i(gT) = i(T), \quad\text{if } g \in G,$

it follows by taking $g = T_1^{-1}$, where $T_1$ is the matrix made up of the $i_1$th, $\dots$, $i_r$th
columns of $T$, that the matrix $T' = T_1^{-1}T$ has the same index set as $T$, and the columns
corresponding to these indices are equal to the corresponding columns of the identity
matrix $E_r$. Since $i$ is a minimal set, it follows that the entries $t'_{\alpha j_\beta}$ in the columns of
$T'$ with indices $(j_1,\dots,j_{n-r}) = \bar{i}$ (the complement of $i = (i_1,\dots,i_r)$ in $(1,2,\dots,n)$)
satisfy the condition $t'_{\alpha j_\beta} \equiv 0\ (\mathrm{mod}\ p)$ if $i_\alpha > j_\beta$. If we then replace all of the entries
in $T'$ by their least nonnegative residues modulo $p$ and use (6.51) of Chapter 3, we
see that the matrix $T'M(i)^{-1} = T_1^{-1}TM(i)^{-1}$ has the form $(E_r,V)$, where $V \in V(i)$
(see (6.52) of Chapter 3). Hence, the right side of (2.84) contains representatives of
all of the orbits. If two matrices $X = (E_r,V)M(i)$ and $X' = (E_r,V')M(i')$, where
$V \in V(i)$ and $V' \in V(i')$, belong to the same orbit, i.e., $X' \equiv gX\ (\mathrm{mod}\ p)$, then from
(2.85) and the obvious equalities $i(X) = i$, $i(X') = i'$ it follows that $i' = i$, and hence
$g \equiv E_r\ (\mathrm{mod}\ p)$ and $X = X'$. This proves (2.84). We now note that, because of the
summation over $A$, the right side of the last expression for $l_p(r,n;R)$ does not change
if $A$ is replaced by $A[g]$ with $g \in G$. Hence, applying (2.84), we obtain

(2.86) $l_p(r,n;R) = |G|^{-1}\sum_{A\in L_p(r,r),\,g\in G}\ \sum_{X\in G\backslash\mathrm{Pr}_p(r,n)}\exp\Bigl(\frac{\pi i}{p}\,\sigma(gXR\,{}^t\!X\,{}^t\!gA)\Bigr) = p_p(r,r)^{-1}\sum_{A\in L_p(r,r)}G^*_p(A,R),$

where $G^*_p(A,R)$ for $A \in S_r$, $R \in E_n$ denotes the reduced Gauss sum
$$G^*_p(A,R) = \sum_{X\in\mathrm{Pr}_p(r,n)}\exp\Bigl(\frac{\pi i}{p}\,\sigma(RA[X])\Bigr).$$

On the other hand, the number of elements in a set of the form $\mathrm{Pr}_p(b,n;R)$ can
also be expressed in terms of reduced Gauss sums. Namely, from the obvious relations
$$\sum_{A\in S_b(\mathbf{Z}/p\mathbf{Z})}\exp\Bigl(\frac{\pi i}{p}\,\sigma(QA)\Bigr) = \begin{cases} p^{\langle b\rangle}, & \text{if } Q \in pE_b,\\ 0, & \text{if } Q \notin pE_b,\end{cases}$$
where $Q \in E_b$, we have
$$p^{-\langle b\rangle}\sum_{A\in S_b(\mathbf{Z}/p\mathbf{Z})}G^*_p(A,R) = p^{-\langle b\rangle}\sum_{X\in\mathrm{Pr}_p(b,n)}\ \sum_{A\in S_b(\mathbf{Z}/p\mathbf{Z})}\exp\Bigl(\frac{\pi i}{p}\,\sigma(R[{}^t\!X]A)\Bigr) = p_p(b,n;R),$$
where $0 \le b \le n$ and $R \in E_n$; hence
$$p_p(b,n;R) = p^{-\langle b\rangle}\sum_{s=0}^{b}\ \sum_{A\in L_p(s,b)}G^*_p(A,R).$$

If we note that none of the sums $G^*_p(A,R)$ is affected by any substitution of the form
$A \to gA\,{}^t\!g$ with $g \in GL_b(\mathbf{Z}/p\mathbf{Z})$, and if we use Lemma 6.18 of Chapter 3, we can
rewrite the inner sums in the last expression in the form
$$\sum_{A\in L_p(s,b)}G^*_p(A,R) = \frac{\varphi_b}{\varphi_s\varphi_{b-s}}\sum_{A=\left(\begin{smallmatrix}A'&0\\0&0\end{smallmatrix}\right)\in L_p(s,b)}G^*_p(A,R),$$
where $A' = A^{(s)}$ is an $s\times s$ block. Every matrix $X \in \mathrm{Pr}_p(b,n)$ can be written in the
form $X = \begin{pmatrix}X_1\\X_2\end{pmatrix}$, where $X_1 \in \mathrm{Pr}_p(s,n)$. The number of such $X$ with fixed $X_1$ clearly
does not depend on $X_1$, and so this number is $p_p(b,n)/p_p(s,n)$. Thus, each sum $G^*_p$
in the last expression can be rewritten in the form
$$G^*_p\Bigl(\begin{pmatrix}A'&0\\0&0\end{pmatrix},R\Bigr) = \sum_{\left(\begin{smallmatrix}X_1\\X_2\end{smallmatrix}\right)\in\mathrm{Pr}_p(b,n)}e\Bigl\{p^{-1}R[({}^t\!X_1,{}^t\!X_2)]\begin{pmatrix}A'&0\\0&0\end{pmatrix}\Bigr\} = \frac{p_p(b,n)}{p_p(s,n)}\sum_{X_1\in\mathrm{Pr}_p(s,n)}e\{p^{-1}R[{}^t\!X_1]A'\} = \frac{p_p(b,n)}{p_p(s,n)}\,G^*_p(A',R).$$
If we substitute the resulting expressions into the formula for $p_p(b,n;R)$ and use the
formulas for $p_p$ in Lemma 6.16 of Chapter 3, after obvious cancellations we obtain the
formula
$$p_p(b,n;R) = \sum_{\substack{s,t\ge0\\s+t=b}}p^{-\langle s-1\rangle}\,\frac{\varphi_b\varphi_{n-s}}{\varphi_s\varphi_t\varphi_{n-b}}\sum_{A\in L_p(s,s)}G^*_p(A,R).$$
Using these expressions and the relations (6.82) of Chapter 3, we find that the right
side of the equality in the lemma is equal to
$$\varphi_{n-r}^{-1}\sum_{s=0}^{r}\Bigl(\sum_{\substack{a+t=r-s\\(a,t\ge0)}}\frac{(-1)^ap^{\langle a-1\rangle}}{\varphi_a\varphi_t}\Bigr)\frac{p^{-\langle s-1\rangle}\varphi_{n-s}}{\varphi_s}\sum_{A\in L_p(s,s)}G^*_p(A,R) = p^{-\langle r-1\rangle}\varphi_r^{-1}\sum_{A\in L_p(r,r)}G^*_p(A,R).$$
Since, by Lemma 6.16 of Chapter 3, the factor in front of the last sum is equal to
$p_p(r,r)^{-1}$, it follows that the last expression is equal to the expression (2.86) for
$l_p(r,n;R)$. $\square$
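The decomposition (2.84) used in this proof says in particular that the number of $G$-orbits on $\mathrm{Pr}_p(r,n)$ is the Gaussian binomial coefficient $\binom{n}{r}_p$ (one orbit per reduced row echelon form), i.e., $p_p(r,n) = |GL_r(\mathbf{F}_p)|\binom{n}{r}_p$. A brute-force check for small parameters (a sketch):

```python
from itertools import product

def rank_mod_p(rows, p):
    # Gaussian elimination over F_p (p prime)
    m = [list(r) for r in rows]
    rank = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(rank, len(m)) if m[i][c] % p), None)
        if piv is None:
            continue
        m[rank], m[piv] = m[piv], m[rank]
        inv = pow(m[rank][c], p - 2, p)
        for i in range(len(m)):
            if i != rank and m[i][c] % p:
                f = m[i][c] * inv % p
                m[i] = [(x - f * y) % p for x, y in zip(m[i], m[rank])]
        rank += 1
    return rank

def count_full_rank(r, n, p):
    # p_p(r, n): number of r x n matrices over F_p of rank r
    return sum(1 for e in product(range(p), repeat=r * n)
               if rank_mod_p([e[i * n:(i + 1) * n] for i in range(r)], p) == r)

def gl_order(r, p):
    out = 1
    for i in range(r):
        out *= p ** r - p ** i
    return out

def gauss_binom(n, r, p):
    num = den = 1
    for i in range(r):
        num *= p ** (n - i) - 1
        den *= p ** (i + 1) - 1
    return num // den

p = 3
for n, r in [(2, 1), (2, 2), (3, 2)]:
    assert count_full_rank(r, n, p) == gl_order(r, p) * gauss_binom(n, r, p)
print("ok")
```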

We use the theory of quadratic spaces to compute $p_p(b,n;Q)$. The properties of
quadratic spaces that we shall need are given in Appendix 2.

LEMMA 2.22. Suppose that $n, b \in \mathbf{N}$, $0 < b \le n$, $p$ is a prime, $Q \in E_n$, and
$q = q(x_1,\dots,x_n)$ is the quadratic form (1.4) of Chapter 1 having matrix $Q$. Then the
number $p_p(b,n;Q)$ of elements in the set (2.82) is equal to the number $i(V_p,f_p;b)$ of
isotropic sets of $b$ vectors in any quadratic space $(V_p,f_p)$ of type $\{q\}$ over $\mathbf{F}_p$.

PROOF. Let $e_1,\dots,e_n$ be a basis of $V_p$ in which the quadratic form of the space
$(V_p,f_p)$ is equal to $q$ modulo $p$. It follows from the definitions that a set of vectors
$m_1,\dots,m_b \in V_p$, where $m_i = \sum_{j=1}^{n}m_{ij}e_j$, is isotropic if and only if the matrix
$M = (m_{ij})$ is contained in the set $\mathrm{Pr}_p(b,n;Q)$. $\square$

We say that two matrices $Q$ and $Q_1$ in $E_n$ are equivalent modulo some $d \in \mathbf{N}$ and
write $Q \sim Q_1\ (\mathrm{mod}\ d)$ if there exists $M \in M_n(\mathbf{Z})$ with $(\det M, d) = 1$ such that
$$Q_1 \equiv Q[M]\ (\mathrm{mod}\ d),$$
where the congruence is understood in the sense of (2.83). If $Q$ is equivalent modulo $d$
to a matrix of the form $\begin{pmatrix}Q_1&0\\0&0\end{pmatrix}$, where $Q_1 \in E_{n-1}$, then we say that $Q$ is degenerate
modulo $d$. Otherwise, we say that $Q$ is nondegenerate modulo $d$. If $d = p$ is a prime,
then the relation $Q \sim Q_1\ (\mathrm{mod}\ p)$ is obviously equivalent to the relation $q \sim q_1$ over $\mathbf{F}_p$
between the quadratic forms corresponding to the two matrices (see (1.4) of Chapter
1), and $Q$ is nondegenerate modulo $p$ if and only if the quadratic space over $\mathbf{F}_p$ of type
$\{q\}$ is nondegenerate (see Appendix 2).
We can now use the results of Appendix 2.4 to finish the computation of the
polynomials $B^n_p(v,R)$.

THEOREM 2.23. Let $n \in \mathbf{N}$, $p$ be a prime, $R \in E_n$, and $B^n_p(v,R)$ be the polynomial
(2.79). Then:
(1) $B^n_p(v,R) = B^n_p(v,R_1)$ if $R \sim R_1\ (\mathrm{mod}\ p)$;
(2) if the matrix $R$ is degenerate modulo $p$, i.e.,
$$R \sim \begin{pmatrix}R_1&0\\0&0\end{pmatrix}\ (\mathrm{mod}\ p), \quad\text{where } R_1 \in E_{n-1},$$
then $B^n_p(v,R) = B^{n-1}_p(v,R_1)$;
(3) if $R$ is a nondegenerate matrix modulo $p$, then
$$B^n_p(v,R) = \begin{cases}(1+v)\Bigl(1-\chi_R(p)\dfrac{v}{p^m}\Bigr)\displaystyle\prod_{i=1}^{m-1}\Bigl(1-\dfrac{v^2}{p^{2i}}\Bigr), & \text{if } n = 2m,\\[2mm] (1+v)\displaystyle\prod_{i=1}^{m}\Bigl(1-\dfrac{v^2}{p^{2i}}\Bigr), & \text{if } n = 2m+1,\end{cases}$$
where for $n$ even $\chi_R(p)$ denotes the sign of the quadratic space over $\mathbf{F}_p$ (see (2.21) of
Appendix 2) whose quadratic form has matrix (in some basis) congruent to $R$ modulo $p$.
PROOF. Part (1) follows from (2.79), since the sums $l_p(r,n;R)$ clearly depend only
on the class of $R$ modulo $p$.

To prove part (2), we apply the Siegel operator and the Zharkovskaia relations. In
what follows we use the notation (2.73), and, when we apply maps defined on Hecke
rings to polynomials over these rings, we let them act only on the coefficients. We first
show that the following equality holds identically in $v$ and $u$:

(2.87) $\Psi(B^n_p(v),u) = B^{n-1}_p(v),$

where $B^n_p(v)$ is the polynomial (2.75) over $L^n_{0,p}$ and $\Psi$ is the map from $L^n_{0,p}$ to $L^{n-1}_{0,p}[u^{\pm1}]$
defined by (2.42)-(2.43). Using the definition of the polynomials $R^n(v) = R^n_p(v)$ (see
(6.28) of Chapter 3) and the commutativity of the diagram (2.45), we obtain
$$\Omega^{n-1}_p(\Psi(R^n(v),u)) = \Xi(\Omega^n_pR^n)(v) = (1-uv)(1-u^{-1}v)(\Omega^{n-1}_pR^{n-1})(v).$$

By Proposition 2.11(2), all of the coefficients of the polynomial $\Psi(R^n(v),u)$ lie in the
ring $L^{n-1}_p$, which also contains all of the coefficients of $R^{n-1}(v)$. Since, by Theorem
3.30(3) of Chapter 3, $\Omega^{n-1}_p$ is a monomorphism on $L^{n-1}_p$, the last relation implies that
$$\Psi(R^n(v),u) = (1-u^{-1}v)(1-uv)R^{n-1}(v).$$

Now let $X^n_-(v)$ and $X^n_+(v)$ be the first and third factors in (6.99) of Chapter 3. Using
the commutativity of the diagram (2.45) and Lemma 3.34 of Chapter 3, we obtain
$$\Omega^{n-1}_p(\Psi(X^n_-(v),u)) = \Xi(\Omega^n_pX^n_-)(v) = \Xi\Bigl(\sum_{i=0}^{n}(-1)^is_i(x_1^{-1},\dots,x_n^{-1})v^i\Bigr) = (1-u^{-1}v)(\Omega^{n-1}_pX^{n-1}_-)(v).$$

Similarly,
$$\Omega^{n-1}_p(\Psi(X^n_+(v),u)) = (1-uv)(\Omega^{n-1}_pX^{n-1}_+)(v).$$

According to (6.34) of Chapter 3, the coefficients of the polynomial $X^n_-$ lie in the
subring $\underline{C}^n_p$ of $L^n_{0,p}$. From the duality relations (6.2) of Chapter 3 and (6.33) of
Chapter 3 it then follows that the coefficients of $X^n_+$ lie in $\overline{C}^n_p$. If we take Proposition
2.11(2) into account and use the fact that $\Omega^{n-1}_p$ is a monomorphism on $\underline{C}^{n-1}_p$ and $\overline{C}^{n-1}_p$, we arrive
at the relations
$$\Psi(X^n_-(v),u) = (1-u^{-1}v)X^{n-1}_-(v), \qquad \Psi(X^n_+(v),u) = (1-uv)X^{n-1}_+(v).$$

Applying $\Psi$ to (6.99) of Chapter 3 and using the above formulas, we obtain
$$(1-u^{-1}v)(1-uv)R^{n-1}(v) = \Psi(R^n(v),u) = (1-u^{-1}v)X^{n-1}_-(v)\,\Psi(B^n_p(v),u)\,(1-uv)X^{n-1}_+(v),$$

so that
$$R^{n-1}(v) = X^{n-1}_-(v)\,\Psi(B^n_p(v),u)\,X^{n-1}_+(v).$$

On the other hand,
$$R^{n-1}(v) = X^{n-1}_-(v)\,B^{n-1}_p(v)\,X^{n-1}_+(v).$$

The relation (2.87) follows from these factorizations, since the polynomials $X^{n-1}_\pm(v)$ have
constant term $1$, and so are invertible in the ring of formal power series over $L^{n-1}_{0,p}$.
We now proceed directly to part (2). By part (1), we may assume that
$$R = \begin{pmatrix}R_1&0\\0&0\end{pmatrix}.$$
We can take $R_1$ to be an arbitrary matrix in its residue class modulo $pE_{n-1}$. Hence,
without loss of generality we may assume that $R_1 > 0$ (for example, we can arrange
this by choosing sufficiently large representatives modulo $2p$ of the diagonal entries
in $R_1$; see Theorem 1.5 of Appendix 1). We then set $Q = \begin{pmatrix}R_1&0\\0&1\end{pmatrix} \in A_n^+$, and we
consider the theta-series $\theta^n(Z,Q)$ of degree $n$ for the matrix $Q$ (see (1.9) of Chapter 1).
Since the theta-series is obviously invariant relative to the transformations $Z \to M\langle Z\rangle$
for $M \in \Gamma^n_0$, it follows from Proposition 1.3 of Chapter 1 that $\theta^n(Z,Q) \in \mathfrak{M}^n_\varepsilon$, where $\varepsilon$
is the unit character of the group $\{\pm1\}$. We take $k = 0$ and $\chi = 1$, and in two different
ways we compute the $R_1$-Fourier coefficient of $(F|_{0,1}B^n_p(v))|\Phi$, where $F = \theta^n(Z,Q)$
and $\Phi$ is the Siegel operator (see (2.74)). On the one hand, from Proposition 2.20 and
(3.50) of Chapter 2 we see that this coefficient is (see §1.2 of Chapter 1)
$$B^n_p\Bigl(v,\begin{pmatrix}R_1&0\\0&0\end{pmatrix}\Bigr)\,r\Bigl(Q,\begin{pmatrix}R_1&0\\0&0\end{pmatrix}\Bigr) = B^n_p(v,R)\,r(Q,R_1).$$
On the other hand, if we use (2.46), (3.50) of Chapter 2, (2.87), and Proposition 2.20
for the function $F|\Phi$, we conclude that this coefficient is equal to
$$B^{n-1}_p(v,R_1)\,r\Bigl(Q,\begin{pmatrix}R_1&0\\0&0\end{pmatrix}\Bigr) = B^{n-1}_p(v,R_1)\,r(Q,R_1).$$
Since obviously $r(Q,R_1) \ge 1$, we obtain part (2) by equating the last two expressions.
Now suppose that the matrix R is nondegenerate modulo p. Let r(x_1,\dots,x_n)
denote the quadratic form with matrix R, and let (V_p, f_p) denote the quadratic space
of type \{r\} over the field F_p = Z/pZ. As noted before, the nondegeneracy of R
modulo p implies nondegeneracy of the quadratic space (V_p, f_p). If we apply Lemma
2.22, we can rewrite the expressions for l_p(r,n;R) in Lemma 2.21 in the form

l_p(r,n;R) = \varphi^{+}_{n-r}\sum_{a,b\ge 0;\ a+b=r}(-1)^{a}\,p^{\,b+\langle a-1\rangle}\,\frac{\varphi_{n-b}}{\varphi_{a}\varphi_{b}}\,i(b),

where i(b) = i(V_p, f_p, b) is the number of isotropic sets of vectors in the space
(V_p, f_p). If we substitute these expressions into (2.79) and use the formulas for the
a_{ij} in (6.70) of Chapter 3, we obtain

B_n(v,R) = \sum_{i=0}^{n}(-1)^{i}\,p^{\langle i\rangle - i(n-i)}\,\frac{1}{\varphi_{n-i}}\,B_i\,v^{i},

where

B_i = \sum_{j+r=i}\Bigl(\sum_{c+d=j}\frac{(-p)^{c}}{\varphi_{c}\varphi_{d}}\Bigr)\,\varphi^{+\,-1}_{n-r}\,l_p(r,n;R)
    = \sum_{c+d+a+b=i}\frac{(-p)^{c}\,(-1)^{a}\,p^{\langle a-1\rangle}\,p^{b}\,\varphi_{n-b}}{\varphi_{c}\,\varphi_{d}\,\varphi_{a}\,\varphi_{b}}\,i(b).

For fixed c and b we sum the terms in this expression over all nonnegative integers d
and a such that d + a = i - c - b, and we use (6.82) of Chapter 3 to evaluate the
resulting inner sums in closed form.

We let \lambda denote the dimension of a maximal isotropic subspace of (V_p, f_p). By
Corollary 2.15 of Appendix 2, we have \lambda = m if n = 2m and \chi_R(p) = 1, \lambda = m - 1 if
n = 2m and \chi_R(p) = -1, and \lambda = m if n = 2m + 1. This implies that in all cases the
desired expression for the polynomial B_n^{-}(v, R) can be written in the form

(2.88)\qquad B_n^{-}(v,R) = \Bigl\{\prod_{i=0}^{n-\lambda-1}\Bigl(1 + \frac{v}{p^{i}}\Bigr)\Bigr\}\Bigl\{\prod_{i=1}^{\lambda}\Bigl(1 - \frac{v}{p^{i}}\Bigr)\Bigr\}.
§2. ACTION OF THE HECKE OPERATORS 261

Using the above formulas and taking into account that i(b) = 0 if b > \lambda, we obtain

B_n^{-}(v,R) = \sum_{i=0}^{n}(-1)^{i}\,p^{\langle i\rangle - in}\,\varphi_{n-i}^{-1}\Bigl(\sum_{c+b=i}(-1)^{c}\,\frac{\varphi_{n-b}}{\varphi_{c}\varphi_{b}}\,i(b)\Bigr)v^{i}
 = \sum_{a+b+c=n}(-1)^{b+2c}\,p^{\langle b\rangle+\langle c\rangle+bc-(b+c)n}\,i(b)\,\frac{\varphi_{a+c}}{\varphi_{a}\varphi_{b}\varphi_{c}}\,v^{b+c} \qquad (a = n-b-c)
 = \sum_{b=0}^{\lambda}(-1)^{b}\,p^{\langle b\rangle - bn}\,i(b)\,\varphi_{b}^{-1}\,v^{b}\sum_{a+c=n-b}\frac{p^{\langle c\rangle}\,\varphi_{a+c}}{\varphi_{a}\varphi_{c}}\,(v p^{\,b-n})^{c}.
By (2.34) of Chapter 3, we can represent the inner sum as a product

\prod_{i=1}^{n-b}\bigl(1 + v p^{\,i+b-n}\bigr) = \Bigl\{\prod_{i=0}^{n-\lambda-1}\bigl(1 + p^{-i}v\bigr)\Bigr\}\Bigl\{\prod_{i=1}^{\lambda-b}\bigl(1 + v p^{\,i+b-n}\bigr)\Bigr\},

where the second product is taken to be 1 in the case b = \lambda. If we substitute this
expression in the last formula for B_n^{-}(v,R), we find that
B_n^{-}(v,R) = \Bigl\{\prod_{i=0}^{n-\lambda-1}\Bigl(1 + \frac{v}{p^{i}}\Bigr)\Bigr\}\,G(v),

where

G(v) = \sum_{b=0}^{\lambda}(-1)^{b}\,p^{\langle b\rangle - bn}\,i(b)\,\varphi_{b}^{-1}\,v^{b}\prod_{i=1}^{\lambda-b}\bigl(1 + v p^{\,i+b-n}\bigr),
from which it follows that to complete the proof of part (3) it suffices to verify that
the polynomial G(v) is equal to the second factor in (2.88). Since G(0) = 1 and the
degree of G(v) is at most \lambda, we see that it is sufficient to show that G(p^{\mu}) = 0 for
\mu = 1, 2, \dots, \lambda. For integers j \ge -1 we define the numbers \kappa_j = \kappa_j(p) by setting
\kappa_j = \varphi_j(p^2)\varphi_j(p)^{-1} if j \ge 0, and \kappa_{-1} = 1/2. It is not hard to see that

(2.89)\qquad \kappa_g\,\kappa_h^{-1} = \begin{cases} 1, & \text{if } g = h \ge -1,\\[2pt] \prod_{h < i \le g}(p^{i}+1), & \text{if } g > h \ge -1.\end{cases}
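The quotient defining \kappa_j telescopes, which is all that (2.89) asserts. A quick numerical sketch of this (assuming, consistently with the quotients used here, that \varphi_j(x) = \prod_{i=1}^{j}(x^i - 1), with the empty product for j = 0):

```python
from fractions import Fraction

def phi(j, x):
    """phi_j(x) = prod_{i=1}^{j} (x^i - 1); empty product = 1 for j = 0."""
    out = Fraction(1)
    for i in range(1, j + 1):
        out *= Fraction(x) ** i - 1
    return out

def kappa(j, p):
    """kappa_j = phi_j(p^2) / phi_j(p), with the convention kappa_{-1} = 1/2."""
    if j == -1:
        return Fraction(1, 2)
    return phi(j, p * p) / phi(j, p)

def product_form(g, h, p):
    """prod_{h < i <= g} (p^i + 1)."""
    out = Fraction(1)
    for i in range(h + 1, g + 1):
        out *= Fraction(p) ** i + 1
    return out

# kappa_g / kappa_h telescopes to a product of factors (p^i + 1)
for p in (3, 5):
    for h in range(-1, 5):
        for g in range(h, 6):
            assert kappa(g, p) / kappa(h, p) == product_form(g, h, p)
```

The convention \kappa_{-1} = 1/2 is exactly what makes the product formula persist down to h = -1, since the extra factor is p^0 + 1 = 2.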

In this notation, if we substitute v = p^{\mu}, where 1 \le \mu \le \lambda, into one of the products in
the expression for G(v), then we obtain

\prod_{i=1}^{\lambda-b}\bigl(1 + p^{\,i+b+\mu-n}\bigr) = \prod_{i=1}^{\lambda-b} p^{\,i+b+\mu-n}\bigl(p^{\,n-(i+b+\mu)} + 1\bigr)
 = p^{\langle\lambda-b\rangle+(\lambda-b)(b+\mu-n)}\prod_{i=n-\lambda-\mu}^{\,n-b-\mu-1}\bigl(p^{i}+1\bigr)
 = p^{\langle\lambda\rangle+\lambda\mu-\lambda n-b\mu-\langle b\rangle+bn}\,\kappa_{n-b-\mu-1}\,\kappa^{-1}_{n-\lambda-\mu-1}

(recall that this product is assumed to be 1 in the case b = \lambda). Hence,

G(p^{\mu}) = p^{\langle\lambda\rangle+\lambda\mu-\lambda n}\,\kappa^{-1}_{n-\lambda-\mu-1}\sum_{b=0}^{\lambda}(-1)^{b}\,i(b)\,\varphi_{b}^{-1}\,\kappa_{n-b-\mu-1}.

We now use the formulas in Appendix 2.4 for the numbers i(b), but first we rewrite
these formulas in a more convenient form. From (2.24)-(2.25) of Appendix 2 and the
definitions we obtain

i(b) = p^{\langle b-1\rangle}(p^{\lambda}-1)(p^{\lambda-b}+1)\,\varphi_{\lambda-1}(p^2)\,\varphi_{\lambda-b}(p^2)^{-1}
     = \frac{(p^{\lambda}-1)\,\varphi_{\lambda-1}(p^2)}{\varphi_{\lambda}(p)}\cdot\frac{1}{\kappa_{\lambda-b-1}}\cdot\frac{p^{\langle b-1\rangle}\,\varphi_{\lambda}(p)}{\varphi_{\lambda-b}(p)},

if n = 2m and \chi_R(p) = 1;

i(b) = p^{\langle b-1\rangle}(p^{\lambda+1}+1)(p^{\lambda+1-b}-1)\,\varphi_{\lambda}(p^2)\,\varphi_{\lambda+1-b}(p^2)^{-1}
     = \frac{(p^{\lambda+1}+1)\,\varphi_{\lambda}(p^2)}{\varphi_{\lambda}(p)}\cdot\frac{1}{\kappa_{\lambda-b+1}}\cdot\frac{p^{\langle b-1\rangle}\,\varphi_{\lambda}(p)}{\varphi_{\lambda-b}(p)},

if n = 2m and \chi_R(p) = -1; and in the case n = 2m + 1

i(b) = \frac{\varphi_{\lambda}(p^2)}{\varphi_{\lambda}(p)}\cdot\frac{1}{\kappa_{\lambda-b}}\cdot\frac{p^{\langle b-1\rangle}\,\varphi_{\lambda}(p)}{\varphi_{\lambda-b}(p)}.

These formulas imply that in all cases i(b) can be written in the form

i(b) = c\,\kappa^{-1}_{n-b-\lambda-1}\,p^{\langle b-1\rangle}\,\varphi_{\lambda}\,\varphi^{-1}_{\lambda-b},

where c does not depend on b.

where c does not depend on b. Substituting these expressions into the formula for
G (pµ), we find that

G(pµ) = c' L Xn-b-µ-1 •


l (
-
l)b (b-1)
P 'Pl.
b=O Xn-b-l-1 'Pb'Pl-b

We now note that

\kappa_{n-b-\mu-1}\,\kappa^{-1}_{n-b-\lambda-1} = \sum_{i=0}^{\lambda-\mu} c_i\,p^{-bi},

where the coefficients c_i do not depend on b. This is clear in the case \mu = \lambda, while if
\mu < \lambda, then by (2.89) we obtain the expression

\kappa_{n-b-\mu-1}\,\kappa^{-1}_{n-b-\lambda-1} = \prod_{i=1}^{\lambda-\mu}\bigl(p^{\,i+n-\lambda-1}\,p^{-b} + 1\bigr),

which, after we expand the parentheses and combine similar terms, reduces to the
form indicated. If we substitute these expressions in G(p^{\mu}) and change the order of
summation, we find that the inner sum can be transformed to a product using the
identity (2.43) of Chapter 3. Since \mu \ge 1, it follows that 1 \le i + 1 \le \lambda for each i = 0,\dots,\lambda-\mu. Hence,
each product in the last expression is zero, and so also G(p^{\mu}) = 0. \Box

We now compute the polynomial \widetilde B^{-}_{p,k}(v,R) (see (2.79)), where p \ne 2 and k is
odd. In this case Theorems 1.2 and 1.3 of Appendix 1 imply that any matrix R \in E_n
of rank r_p(R) = r is equivalent to a diagonal matrix:

(2.90)\qquad R \sim \begin{pmatrix} R_1 & 0\\ 0 & 0\end{pmatrix} \pmod p,

where R_1 is a diagonal matrix in E_r and r_p(R_1) = r. In addition, if r_p(R) = n and
n = 2m is an even number, then, by (2.21) of Appendix 2, the sign of the quadratic
space over F_p = Z/pZ whose quadratic form has matrix R is

(2.91)\qquad \chi_R(p) = \left(\frac{(-1)^m \det R}{p}\right).
LEMMA 2.24. Let R be a matrix in E_n that is nondegenerate modulo the prime p, let

R \sim \begin{pmatrix} R_1 & 0\\ 0 & R_2\end{pmatrix} \pmod p, \quad where\ R_1 \in E_{n-2},\ R_2 \in E_2,

and let \lambda(R_2) = \chi_{R_2}(p)\,p. Then for n > 2 the trigonometric sums (2.76) satisfy the
relation

l_p^{k}(r,n;R) = \lambda(R_2)^{r}\,l_p^{k}(r,\,n-2;\,R_1) - p\,\lambda(R_2)^{r-2}\,l_p^{k}(r-2,\,n-2;\,R_1).
PROOF. From (6.91) of Chapter 3 and (2.76) it follows that the sum l_p^{k}(r,n;R)
depends only on the equivalence class modulo p of the matrix R. Hence, we may
suppose that R = \begin{pmatrix} R_1 & 0\\ 0 & R_2\end{pmatrix} \in E_n and R_2 = \begin{pmatrix} \lambda_1 & 0\\ 0 & \lambda_2\end{pmatrix}. We rewrite the sum (2.76)
in the form

(2.92)\qquad l_p^{k}(r,n;R) = \sum_{d\ge 0}\ \sum_{Z\in L_p(d,\,n-2)} \sigma(d;Z,R),

where

\sigma(d;Z,R) = \sum_{A\in L_p(r,n),\ A^{(n-2)}=Z} \chi(A)^{-k}\,e\{p^{-1}RA\},

and we let A^{(n-2)} denote the (n-2)\times(n-2)-matrix in the upper-left corner of A.
Let Z = \begin{pmatrix} Z_1 & 0\\ 0 & 0\end{pmatrix}[U_1], where U_1 \in GL_{n-2}(Z/pZ) and Z_1 = \operatorname{diag}(z_1,\dots,z_d) is a
matrix that is nondegenerate modulo p. If we then replace A by A[U] in the last sum,
with U = \begin{pmatrix} U_1 & 0\\ 0 & E_2\end{pmatrix}, and use (6.91) of Chapter 3, we obtain

\sigma(d;Z,R) = \sigma\Bigl(d;\ \begin{pmatrix} Z_1 & 0\\ 0 & 0\end{pmatrix},\ R[{}^{t}U]\Bigr).

Any matrix A \in S_n(Z/pZ) with A^{(n-2)} = \begin{pmatrix} Z_1 & 0\\ 0 & 0\end{pmatrix} can be represented in the form

A = \begin{pmatrix} Z_1 & 0 & X_1\\ 0 & 0 & X_2\\ {}^{t}X_1 & {}^{t}X_2 & Y\end{pmatrix}
  = \begin{pmatrix} Z_1 & 0 & 0\\ 0 & 0 & X_2\\ 0 & {}^{t}X_2 & Y_1\end{pmatrix}\left[\begin{pmatrix} E & 0 & Z_1^{-1}X_1\\ 0 & E & 0\\ 0 & 0 & E\end{pmatrix}\right],

where X = \begin{pmatrix} 0 & X_2\\ {}^{t}X_2 & Y_1\end{pmatrix} and Y_1 = Y - Z_1^{-1}[X_1], and in all of the matrices the blocks
on the main diagonal are of size d \times d, (n-d-2)\times(n-d-2), and 2\times 2, respectively.
From this, (4.70) of Chapter 3, and the last equality for \sigma(d;Z,R) we find that

\sigma(d;Z,R) = \sum_{\substack{A\in L_p(r,n),\\ A^{(n-2)}=\left(\begin{smallmatrix} Z_1 & 0\\ 0 & 0\end{smallmatrix}\right)}}\chi(A)^{-k}\,e\{p^{-1}R[{}^{t}U]A\}
 = \chi(Z_1)^{-k}\,e\{p^{-1}R_1Z_1\}\,\sigma'\,\sigma(r-d),

where \sigma' and \sigma(\rho) denote the sums

\sigma' = \sum_{X_1} e\{p^{-1}R_2Z_1^{-1}[X_1]\}, \qquad \sigma(\rho) = \sum_{X_2,\,Y;\ r_p(X)=\rho}\chi(X)^{-k}\,e\{p^{-1}R_2Y_1\},

and X_1, X_2, and Y = {}^{t}Y run through the set of all matrices over Z/pZ of size d\times 2,
(n-d-2)\times 2, and 2\times 2, respectively. We first find the value of \sigma'. Using the form of
the matrices R_2 and Z_1 and Lemma 4.14 of Chapter 1, we have \sigma' = \lambda(R_2)^{d}.

To compute \sigma(\rho) we fix the following notation:

U = \begin{pmatrix} U_1 & 0\\ 0 & E_2\end{pmatrix}, \qquad V = \begin{pmatrix} E_{n-d-2} & 0\\ 0 & V_1\end{pmatrix},

where U_1 \in GL_{n-d-2}(Z/pZ), V_1 \in GL_2(Z/pZ), and X_2 and Y are the same as in the
matrix A. If r_p(X_2) = 2, then there exists a matrix U_1 such that {}^{t}U_1X_2 = \begin{pmatrix} 0\\ \widetilde X_2\end{pmatrix} and
\widetilde X_2 \in GL_2(Z/pZ). Consequently,

X[{}^{t}U] \sim \begin{pmatrix} 0 & 0 & 0\\ 0 & 0 & \widetilde X_2\\ 0 & {}^{t}\widetilde X_2 & Y\end{pmatrix},

where \sim denotes equivalence of matrices over Z/pZ in the sense of §1 of Appendix 1.
This implies that r_p(X) = 4 and \chi(X) = 1. If, on the other hand, r_p(X_2) = 1, then
there exist matrices U_1 and V_1 such that U_1X_2V_1 has a single nonzero entry, equal to 1, in the lower-left corner, and hence

X[{}^{t}(UV)] = \begin{pmatrix} 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1\\ 0 & 0 & y_1 & y_2\\ 0 & 1 & y_2 & y_3\end{pmatrix}, \quad where\ Y_1 = Y[V_1] = \begin{pmatrix} y_1 & y_2\\ y_2 & y_3\end{pmatrix}.

From this we find that r_p(X) = 3 and \chi(X) = \left(\dfrac{y_1}{p}\right) if y_1 \not\equiv 0 \pmod p, and r_p(X) = 2
and \chi(X) = 1 if y_1 \equiv 0 \pmod p. Finally, if r_p(X_2) = 0, then obviously r_p(X) = r_p(Y)
and \chi(X) = \chi(Y).
Thus, we obtain the following values directly from the above observations: \sigma(0) =
1, \sigma(2) = l_p^{k}(2,2;R_2), and \sigma(\rho) = 0 for \rho \ge 3. As for the sum \sigma(1), we note that if
Y \in S_2(Z/pZ) and r_p(Y) = 1, then Y has the form

\begin{pmatrix} 0 & 0\\ 0 & a\end{pmatrix}\left[\begin{pmatrix} 1 & 0\\ v & 1\end{pmatrix}\right],

where a \in (Z/pZ)^{*}, v \in Z/pZ. Since the summation in \sigma(1) is taken over such Y,
we have

\sigma(1) = \sum_{a\in(Z/pZ)^{*}}\Bigl(\varepsilon_p^{-1}\Bigl(\frac{-a}{p}\Bigr)\Bigr)^{-k} e\Bigl\{\frac{\lambda_1 a}{p}\Bigr\}\sum_{v\in Z/pZ} e\Bigl\{\frac{\lambda_2 a v^2}{p}\Bigr\}
 + \sum_{a\in(Z/pZ)^{*}}\Bigl(\varepsilon_p^{-1}\Bigl(\frac{-a}{p}\Bigr)\Bigr)^{-k} e\Bigl\{\frac{\lambda_2 a}{p}\Bigr\}.

By assumption, k is odd, and \lambda_1 and \lambda_2 are even numbers prime to p. Hence, if we
apply the formulas for Gauss sums (see (4.28) and Lemma 4.14 of Chapter 1) to the
second and third sums, we find that \sigma(1) = 0. We now substitute these values for \sigma'
and \sigma(\rho) into the expression for \sigma(d;Z,R) and use (2.92); we find that l_p^{k}(r,n;R) is
equal to the sum

\lambda(R_2)^{r-2}\,l_p^{k}(2,2;R_2)\,l_p^{k}(r-2,\,n-2;\,R_1) + \lambda(R_2)^{r}\,l_p^{k}(r,\,n-2;\,R_1).
Thus, to prove the lemma it remains to evaluate l_p^{k}(2,2;R_2). By the definition (2.76),
this sum can be divided into two parts as follows:

\Bigl(\ \sum_{A=\left(\begin{smallmatrix} 0 & x\\ x & y\end{smallmatrix}\right)\in L_p(2,2)} + \sum_{z\in(Z/pZ)^{*}}\ \sum_{A=\left(\begin{smallmatrix} z & x\\ x & y\end{smallmatrix}\right)\in L_p(2,2)}\Bigr)\chi(A)^{-k}\,e\{p^{-1}R_2A\}.

But since \begin{pmatrix} z & x\\ x & y\end{pmatrix} \sim \begin{pmatrix} z & 0\\ 0 & y - x^2z^{-1}\end{pmatrix} over F_p = Z/pZ (see (1.2) of Appendix 1)

and R_2 = \begin{pmatrix} \lambda_1 & 0\\ 0 & \lambda_2\end{pmatrix}, we have

l_p^{k}(2,2;R_2) = \sum_{x\neq 0,\ y\in Z/pZ} e\Bigl\{\frac{\lambda_2 y}{p}\Bigr\}
 + \sum_{z\in(Z/pZ)^{*}}\chi(z)^{-k}\,e\Bigl\{\frac{\lambda_1 z}{p}\Bigr\}\sum_{\substack{x,y\in Z/pZ\\ (y\not\equiv x^2z^{-1}\ (\mathrm{mod}\ p))}}\chi(y-x^2z^{-1})^{-k}\,e\Bigl\{\frac{\lambda_2 y}{p}\Bigr\}.

Hence, if we make the change of variables y \to y' + x^2z^{-1} and use (4.28) and Lemma
4.14 of Chapter 1, we conclude that l_p^{k}(2,2;R_2) = -p. \Box
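The evaluations above lean on the classical quadratic Gauss sums cited from Chapter 1. Their standard closed form, \sum_{v \bmod p} e\{av^2/p\} = \left(\frac{a}{p}\right)\varepsilon_p\sqrt p with \varepsilon_p = 1 or i according as p \equiv 1 or 3 \pmod 4, can be checked numerically; a short sketch:

```python
import cmath
import math

def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p, via Euler's criterion."""
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def gauss_sum(a, p):
    """sum over v mod p of exp(2*pi*i*a*v^2/p)."""
    return sum(cmath.exp(2j * cmath.pi * a * v * v / p) for v in range(p))

for p in (5, 7, 11, 13):
    # epsilon_p = 1 for p = 1 (mod 4), i for p = 3 (mod 4)
    g1 = math.sqrt(p) if p % 4 == 1 else 1j * math.sqrt(p)
    for a in range(1, p):
        assert abs(gauss_sum(a, p) - legendre(a, p) * g1) < 1e-9
```

This is exactly the mechanism that kills \sigma(1): for odd k the a-sums become Gauss sums twisted by the quadratic character, and they cancel.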

The recursive relations in this lemma allow us to obtain explicit formulas for the
sums l_p^{k}(r,n;R).

LEMMA 2.25. Let l_p^{k}(r,n;R) be the trigonometric sum (2.76), where k is an odd
number and R is a matrix in E_n that is nondegenerate modulo the prime p \ne 2, and let
\varphi^{\pm}_a = \varphi^{\pm}_a(p) be the function in (6.83) of Chapter 3. Then:

l_p^{k}(2r,\,2m;\,R) = l_p^{k}(2r,\,2m+1;\,R) = \varphi_{m,r},
\qquad l_p^{k}(2r+1,\,2m;\,R) = 0,
\qquad l_p^{k}(2r+1,\,2m+1;\,R) = \chi_{R,k}(p)\,p^{m+1/2}\,\varphi_{m,r},

where r and m are integers, 0 \le r \le m, and

(2.93)\qquad \varphi_{m,r} = \frac{\varphi^{+}_{2m}\,(-1)^{r}\,p^{r^2}}{\varphi^{+}_{2m-2r}\,\varphi^{+}_{2r}}, \qquad \chi_{R,k}(p) = \left(\frac{-1}{p}\right)^{(k-1)/2}\left(\frac{(-1)^m\det R}{p}\right).

PROOF. For \nu = 0 or 1 we define the generating polynomials

F^{n}_{\nu}(z;R) = \sum_{r\equiv\nu\ (\mathrm{mod}\ 2)} l_p^{k}(r,n;R)\,z^{r},

and show that the sum F^{n}(z;R) = F^{n}_{0}(z;R) + F^{n}_{1}(z;R) is equal to the product

\bigl(1 + c_n\,\chi_{R,k}(p)\,p^{m+1/2}z\bigr)\prod_{j=0}^{m-1}\bigl(1 - p^{2j+1}z^2\bigr),

where c_n = 0 or 1 depending on whether n = 2m or n = 2m + 1, respectively.
According to formula (2.90), from the very beginning we may assume that R =
\begin{pmatrix} R_1 & 0\\ 0 & R_2\end{pmatrix}, just as in Lemma 2.24. If we apply the recursive relation in Lemma 2.24
to the coefficients of the polynomial F^{n}_{\nu}(z;R), we see that for n > 2

F^{n}_{\nu}(z;R) = (1 - pz^2)\,F^{n-2}_{\nu}(\lambda(R_2)z;\,R_1).
On the other hand, if n \le 2, then the proof of Lemma 2.24 and the formulas for the
Gauss sums immediately imply that

l_p^{k}(1,2;R) = 0, \qquad l_p^{k}(2,2;R) = -p, \qquad l_p^{k}(1,1;R) = \left(\frac{-1}{p}\right)^{(k-1)/2}\left(\frac{\det R}{p}\right)p^{1/2};

and, by the definition (2.76), l_p^{k}(0,n;R) = 1. These equalities, along with the formula
for decreasing the size of the matrix R, give the required product formula for F^{n}(z;R).
On the other hand, we can obtain a similar product by making the change of
variables v \to \sqrt{-1}\cdot z in the identity in Lemma 6.22 of Chapter 3 and using (6.98) of
Chapter 3. We find that
(2.94)\qquad \sum_{r=0}^{m}\frac{\varphi^{+}_{2m}\,(-1)^{r}\,p^{r^2}}{\varphi^{+}_{2m-2r}\,\varphi^{+}_{2r}}\,z^{2r} = \prod_{j=0}^{m-1}\bigl(1 - p^{2j+1}z^2\bigr).

It is easy to see that this gives all of the equalities for l_p^{k}(r,n;R) in the lemma. \Box
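The identity (2.94) can be checked by direct expansion for small m. The sketch below does this, assuming \varphi^{+}_{2r}(p) = \prod_{i=1}^{r}(p^{2i}-1) (a reading of (6.83) of Chapter 3 that is consistent with the values \varphi_{m,0} = 1 and \varphi_{m,1} used above):

```python
from fractions import Fraction

def phi_plus(two_r, p):
    """Assumed phi^+_{2r}(p) = prod_{i=1}^{r} (p^{2i} - 1); cf. (6.83) of Chapter 3."""
    r = two_r // 2
    out = Fraction(1)
    for i in range(1, r + 1):
        out *= Fraction(p) ** (2 * i) - 1
    return out

def lhs_coeffs(m, p):
    """Coefficients of z^{2r}, r = 0..m, on the left-hand side of (2.94)."""
    return [Fraction(-1) ** r * phi_plus(2 * m, p) * Fraction(p) ** (r * r)
            / (phi_plus(2 * m - 2 * r, p) * phi_plus(2 * r, p))
            for r in range(m + 1)]

def rhs_coeffs(m, p):
    """Coefficients of z^{2r} in prod_{j=0}^{m-1} (1 - p^{2j+1} z^2)."""
    poly = [Fraction(1)]
    for j in range(m):
        c = Fraction(p) ** (2 * j + 1)
        poly = [(poly[i] if i < len(poly) else 0)
                - (c * poly[i - 1] if i > 0 else 0)
                for i in range(len(poly) + 1)]
    return poly

for p in (2, 3, 5):
    for m in range(5):
        assert lhs_coeffs(m, p) == rhs_coeffs(m, p)
```

For m = 1 both sides reduce to 1 - pz^2, which is the source of the value l_p^{k}(2,2;R) = -p.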

We use Lemma 2.25 to compute the second polynomial in (2.79).



THEOREM 2.26. Let \widetilde B^{-}_{p,k}(v,R) be the polynomial in (2.79), where p \ne 2, k is odd,
and R \in E_n. Then:
(1) \widetilde B^{-}_{p,k}(v,R) = \widetilde B^{-}_{p,k}(v,R_1), if R \sim R_1 \pmod p;
(2) if R is a degenerate matrix modulo p, i.e.,

R \sim \begin{pmatrix} R_1 & 0\\ 0 & 0\end{pmatrix} \pmod p, \quad where\ R_1\in E_{n-1},

then \widetilde B^{-}_{p,k}(v,R) = \widetilde B^{\,n-1,-}_{p,k}(v,R_1);
(3) if R is a nondegenerate matrix modulo p, then

\widetilde B^{-}_{p,k}(v,R) = \Bigl(1 - c_n\,\chi_{R,k}(p)\,\frac{v}{p^{m+1/2}}\Bigr)\prod_{j=0}^{m-1}\Bigl(1 - \frac{v^2}{p^{2j+1}}\Bigr),

where \chi_{R,k} is the character in Lemma 2.25 and c_n = 0 or 1 depending on whether n = 2m
or n = 2m + 1, respectively.
PROOF. The first two parts are proved in exactly the same way as in Theorem 2.23.
We now prove part (3). We rewrite the polynomial \widetilde B^{-}_{p,k}(v,R) in the form

(2.95)\qquad \widetilde B^{-}_{p,k}(v,R) = \sum_{i=0}^{n}(-1)^{i}\,\widetilde B^{n}_{i}\,v^{i},

where \widetilde B^{n}_{i} denotes the sum

\widetilde B^{n}_{i} = p^{\langle n-i\rangle-\langle n\rangle}\sum_{j=0}^{i} a_{ij}\,l_p^{k}(i-j,\,n;\,R),
j=O
and we first suppose that n = 2m. Then from (6.86) of Chapter 3 and Lemma 2.25 we
have \widetilde B^{2m}_{2i+1} = 0 for 0 \le i \le m-1, and

(2.96)\qquad \widetilde B^{2m}_{2i} = p^{\langle n-2i\rangle-\langle n\rangle}\sum_{j=0}^{i}\frac{(-p)^{j}\,\varphi^{-}_{n-2i+2j}}{\varphi^{+}_{2j}\,\varphi^{-}_{n-2i}}\cdot\frac{\varphi^{+}_{n}\,(-1)^{i-j}\,p^{(i-j)^2}}{\varphi^{+}_{n-2i+2j}\,\varphi^{+}_{2i-2j}}
 = p^{\langle n-2i\rangle-\langle n\rangle}(-p)^{i}\,\frac{\varphi^{+}_{n}}{\varphi^{+}_{n-2i}\,\varphi^{+}_{2i}}\sum_{s=0}^{i}\frac{\varphi^{-}_{n-2s}}{\varphi^{-}_{n-2i}}\cdot\frac{\varphi^{+}_{2i}\,p^{s^2}}{\varphi^{+}_{2i-2s}\,\varphi^{+}_{2s}}\,(p^{-1/2})^{2s} \qquad (s = i-j),
where \varphi^{\pm}_a = \varphi^{\pm}_a(p) are the functions (6.83) of Chapter 3. From the definition of these
functions it follows that for n = 2m and 0 \le s \le i

\frac{\varphi^{-}_{n-2s}}{\varphi^{-}_{n-2i}} = \prod_{m-i\le j\le m-s-1}\bigl(p^{2j+1}-1\bigr) = (-1)^{i-s}\prod_{0\le t\le i-s-1}\bigl(1 - p^{2t+1}(p^{m-i})^{2}\bigr) = (-1)^{i-s}\,\Pi(i-s,\ \sqrt{-1}\cdot p^{m-i}),

where we made the substitution j = m - i + t in the second step, and where

\Pi(a,v) = \prod_{0\le j\le a-1}\bigl(1 + p^{2j+1}v^{2}\bigr) \quad\text{or}\quad 1

depending on whether a \ge 1 or a = 0, respectively. Similarly, if we make the change
of variables z = \sqrt{-1}\cdot p^{-1/2}v in the identity (2.94), under the same conditions we
obtain

(2.97)\qquad \frac{\varphi^{+}_{2i}\,p^{s^2}}{\varphi^{+}_{2i-2s}\,\varphi^{+}_{2s}}\,(p^{-1/2})^{2s} = \Pi_s(i,\,p^{-1/2}v),

where \Pi_s(i, av) is the coefficient of v^{2s} in the polynomial \Pi(i, av). If we substitute
these expressions in the last equality for \widetilde B^{2m}_{2i} and introduce the new notation

s(i,m) = \sum_{s=0}^{i}(-1)^{i-s}\,\Pi(i-s,\ \sqrt{-1}\cdot p^{m-i})\,\Pi_s(i,\,p^{-1/2}v),

then we can write

\widetilde B^{2m}_{2i} = (-1)^{i}\,p^{-2i(n-i)}\,\frac{\varphi^{+}_{n}}{\varphi^{+}_{n-2i}\,\varphi^{+}_{2i}}\,s(i,m).
We arrange the pairs (i,m) in lexicographic order, and prove by induction on (i,m)
that

(2.98)\qquad s(i,m) = p^{\,i(2m-i)} \qquad (0\le i\le m).

For pairs (0,m) the equality is obvious. Suppose that it holds for all pairs less than
(i,m). The rest of the proof is based on the following obvious properties of the
polynomials \Pi(a,v) for a \ge 1:

\Pi(a,v) = (1 + p^{2a-1}v^{2})\,\Pi(a-1,v) = (1 + pv^{2})\,\Pi(a-1,pv),

which implies the relation

(2.99)\qquad \Pi_s(i,\,p^{-1/2}v) = \Pi_s(i-1,\,p^{-1/2}v) + \Pi_{s-1}(i-1,\,p^{-1/2}v)\,p^{2i-2}

for 1 \le s \le i and the relation

(2.100)\qquad \Pi(i-s,\ \sqrt{-1}\cdot p^{m-i}) = (1 - p^{2m-2i+1})\,\Pi((i-1)-s,\ \sqrt{-1}\cdot p^{m-(i-1)})

for 0 \le s \le i-1. If we now separate the extreme terms in s(i,m) and use (2.99), we
obtain

s(i,m) = (-1)^{i}\,\Pi(i,\ \sqrt{-1}\cdot p^{m-i}) + \Pi_i(i,\,p^{-1/2}v)
 + \sum_{0<s<i}(-1)^{i-s}\,\Pi(i-s,\ \sqrt{-1}\cdot p^{m-i})\,\Pi_s(i-1,\,p^{-1/2}v)
 + p^{2i-2}\sum_{0<s<i}(-1)^{i-s}\,\Pi(i-s,\ \sqrt{-1}\cdot p^{m-i})\,\Pi_{s-1}(i-1,\,p^{-1/2}v).

Since (2.100) and (2.99) for 1 \le i \le m imply that

\Pi(i,\ \sqrt{-1}\cdot p^{m-i}) = (1 - p^{2m-2i+1})\,\Pi(i-1,\ \sqrt{-1}\cdot p^{m-(i-1)}),
\qquad \Pi_i(i,\,p^{-1/2}v) = p^{2i-2}\,\Pi_{i-1}(i-1,\,p^{-1/2}v),

it follows that, if we again apply (2.100) to the first sum on the right in the equality for
s(i,m), we find that s(i,m) satisfies the following recursive relation:

s(i,m) = (p^{2m-2i+1} - 1)\,s(i-1,m) + p^{2i-2}\,s(i-1,\,m-1).

We now use (2.98) for (i-1, m) and (i-1, m-1), and make the corresponding
transformations on the right side of the resulting relation; we obtain the formula (2.98)
for (i, m).
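The closed form (2.98) can also be confirmed against the recursion directly, since (p^{2m-2i+1}-1)\,p^{(i-1)(2m-i+1)} + p^{2i-2}\,p^{(i-1)(2m-i-1)} = p^{\,i(2m-i)}; a short machine check of this exponent arithmetic:

```python
def s(i, m, p):
    """Closed form (2.98): s(i, m) = p^{i(2m - i)}."""
    return p ** (i * (2 * m - i))

# recursion: s(i,m) = (p^{2m-2i+1} - 1) s(i-1,m) + p^{2i-2} s(i-1,m-1)
for p in (2, 3, 5):
    for m in range(1, 8):
        for i in range(1, m + 1):
            rhs = (p ** (2 * m - 2 * i + 1) - 1) * s(i - 1, m, p) \
                  + p ** (2 * i - 2) * s(i - 1, m - 1, p)
            assert s(i, m, p) == rhs
```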
We now finish the computation of \widetilde B^{-}_{p,k}(v,R) for n = 2m. Since \widetilde B^{2m}_{2i+1} = 0, and,
by (2.98),

(2.101)\qquad \widetilde B^{2m}_{2i} = (-1)^{i}\,\frac{\varphi^{+}_{2m}\,p^{\,i^2}}{\varphi^{+}_{2m-2i}\,\varphi^{+}_{2i}}\,p^{-2im} \qquad (0\le i\le m),

it follows that, substituting these expressions in place of the coefficients in (2.95) and
using (2.94), we obtain

\widetilde B^{-}_{p,k}(v,R) = \sum_{i=0}^{m}(-1)^{i}\,\frac{\varphi^{+}_{2m}\,p^{\,i^2}}{\varphi^{+}_{2m-2i}\,\varphi^{+}_{2i}}\,(p^{-m}v)^{2i} = \prod_{j=0}^{m-1}\Bigl(1 - \frac{v^{2}}{p^{2j+1}}\Bigr).

Thus, to prove the theorem it remains to consider the polynomial \widetilde B^{-}_{p,k}(v,R) for
n = 2m + 1. By Lemma 2.25 and (6.86) of Chapter 3, we can write the coefficients of
this polynomial in the form

\widetilde B^{2m+1}_{2i} = p^{\langle n-2i\rangle-\langle n\rangle}\sum_{j=0}^{i}\frac{(-p)^{j}\,\varphi^{-}_{2m+1-2i+2j}}{\varphi^{+}_{2j}\,\varphi^{-}_{2m+1-2i}}\cdot\frac{\varphi^{+}_{2m}\,(-1)^{i-j}\,p^{(i-j)^2}}{\varphi^{+}_{2m-2i+2j}\,\varphi^{+}_{2i-2j}}
 = p^{\langle n-2i\rangle-\langle n\rangle}(-p)^{i}\,\frac{\varphi^{+}_{2m}}{\varphi^{+}_{n-2i}\,\varphi^{+}_{2i}}\sum_{s=0}^{i}\frac{\varphi^{-}_{n-2s}}{\varphi^{-}_{n-2i}}\cdot\frac{\varphi^{+}_{2i}\,p^{s^2}}{\varphi^{+}_{2i-2s}\,\varphi^{+}_{2s}}\,(p^{-1/2})^{2s} \qquad (s = i-j),

or, if we use (2.97), the relation

\frac{\varphi^{-}_{n-2s}}{\varphi^{-}_{n-2i}} = (-1)^{i-s}\,\Pi(i-s,\ \sqrt{-1}\cdot p^{m-i+1}),

and (2.98), in the form

\widetilde B^{2m+1}_{2i} = (-1)^{i}\,p^{-2i(n-i)}\,\frac{\varphi^{+}_{2m}}{\varphi^{+}_{n-2i}\,\varphi^{+}_{2i}}\,p^{\,i(2m+2-i)};

and this, by (2.101), is equivalent to the equality

\widetilde B^{2m+1}_{2i} = \widetilde B^{2m}_{2i} \qquad (0\le i\le m).

We similarly have the following formula for the coefficients \widetilde B^{n}_{2i+1}:

\widetilde B^{n}_{2i+1} = p^{\langle n-2i-1\rangle-\langle n\rangle}\sum_{j=0}^{i}\frac{(-p)^{j}\,\varphi^{-}_{2m-2i+2j}}{\varphi^{+}_{2j}\,\varphi^{-}_{2m-2i}}\cdot\frac{\varphi^{+}_{2m}\,(-1)^{i-j}\,p^{(i-j)^2}}{\varphi^{+}_{2m-2i+2j}\,\varphi^{+}_{2i-2j}}\;\chi_{R,k}(p)\,p^{m+1/2},

which, together with the first equality in (2.96), implies that

\widetilde B^{2m+1}_{2i+1} = \chi_{R,k}(p)\,p^{-(m+1/2)}\,\widetilde B^{2m}_{2i} \qquad (0\le i\le m).

To complete the proof, we substitute these values for the coefficients in (2.95) and use
the formula we proved for n = 2m. We obtain part (3) for n = 2m + 1:

\widetilde B^{-}_{p,k}(v,R) = \sum_{i=0}^{m}\widetilde B^{2m}_{2i}\,v^{2i} - v\sum_{i=0}^{m}\bigl(\chi_{R,k}(p)\,p^{-(m+1/2)}\,\widetilde B^{2m}_{2i}\bigr)v^{2i}
 = \Bigl(1 - \chi_{R,k}(p)\,\frac{v}{p^{m+1/2}}\Bigr)\prod_{j=0}^{m-1}\Bigl(1 - \frac{v^{2}}{p^{2j+1}}\Bigr). \qquad\Box


1=0

§3. Multiplicative properties of the Fourier coefficients


In this section we apply the above technique to study the multiplicative properties
of the Fourier coefficients of modular forms. The plan is as follows. First, the matrices
of the Hecke operators in a fixed basis of some invariant space of modular forms
satisfy the same relations as the corresponding elements of the original Hecke ring.
On the other hand, the Hecke operators, while acting on the modular forms, also
act on their Fourier coefficients; and the matrices of the Hecke operators appear in
relations that reflect this action. In many cases it is possible to use these relations
to express the Fourier coefficients of the basis forms (or certain combinations of the
coefficients) in terms of the entries in the matrices of suitable Hecke operators; in that
way one can investigate how the multiplicative properties of the Hecke operators are
reflected in the Fourier coefficients. In the case of modular forms in several variables
it is usually not possible to express individual Fourier coefficients in terms of the
matrix entries of Hecke operators; rather, it is convenient to establish the relationships
between them in the form of identities that express suitable Dirichlet series constructed
from the Fourier coefficients of the modular forms in terms of Dirichlet series that are
constructed from the matrices of the Hecke operators (the latter Dirichlet series are
called "zeta-functions"). Because of the multiplicative properties of Hecke operators,
these zeta-functions have a special type of Euler product expansion, and this gives the
desired multiplicative relations for the Fourier coefficients. In the analytic theory of
zeta-functions, which we hardly touch upon in this book, the same identities serve
another purpose. Namely, they make it possible to express the zeta-functions as
suitable integral transforms of the·original modular forms. In many cases this enables
one to prove that the zeta-functions have analytic continuations onto the entire complex
plane, and to find functional equations that they satisfy.
In this section our basic object of study will be the case of one-dimensional
invariant subspaces, i.e., eigenfunctions of the Hecke operators. This does not really
represent much loss of generality, since the invariant subspaces are usually spanned by
such eigenfunctions. For modular forms of degree 1 and 2 we consider first only the
case of integer weight. This limitation is related to the fact that their multiplicative
properties and their zeta-functions are connected in this case with the full Hecke ring.
However, as we showed in Chapter 3, in the case of modular forms of half-integer
weight, i.e., in the case of Hecke rings for the symplectic covering group, the full
Hecke ring and its even subring induce the same ring of Hecke operators on the spaces
of modular forms of half-integer weight. This is one of the main differences between

modular forms of integer and half-integer weight. On the other hand, if we consider the
multiplicative properties connected with the even Hecke rings, then here the theories
for integer and half-integer weight are parallel, and, moreover, they can be developed
for modular forms of arbitrary degree.
1. Modular forms in one variable. We consider a modular form

F(z) = \sum_{a=0}^{\infty} f(2a)\,e^{2\pi i a z} \in \mathfrak M_k(q,\chi) = \mathfrak M_k^{1}(q,\chi)

of weight k and character \chi for the group

\Gamma_0(q) = \Gamma_0^{1}(q).

We suppose that it is an eigenfunction for all of the Hecke operators |T = |_{k,\chi}T for
T \in L(q) = L^{1}(q). In particular, for the elements T(m) of the form (3.19) of Chapter
3 we have the relations

(3.1)\qquad F|T(m) = \lambda(m)F \qquad (m\in \mathbf N_{(q)}),
where \lambda(m) is the corresponding eigenvalue. When we compute the Fourier coefficients
(f|T(m))(2a) of the function F|T(m), using (5.48) of Chapter 3 and Lemma 2.7 (or
2.8), we find that

(f|T(m))(2a) = \sum_{d|m,a} d^{k-1}\chi(d)\,\bigl(f\,|\,\Pi_1(m/d)\bigr)(2a/d) = \sum_{d|m,a} d^{k-1}\chi(d)\,f(2ma/d^{2}).

If we equate the Fourier coefficients with the same indices on both sides of (3.1), we
obtain

(3.2)\qquad \sum_{d|m,a} d^{k-1}\chi(d)\,f(2ma/d^{2}) = \lambda(m)\,f(2a) \qquad (m\in \mathbf N_{(q)},\ a = 0,1,\dots).

After replacing m and a by m/\delta and a/\delta, where \delta \in \mathbf N is a common divisor of m and
a, we obtain

\sum_{d\,|\,m/\delta,\,a/\delta} d^{k-1}\chi(d)\,f\bigl(2ma/(\delta d)^{2}\bigr) = \lambda(m/\delta)\,f(2a/\delta),

from which, if we multiply both sides by \delta^{k-1}\chi(\delta)\mu(\delta), where \mu is the Möbius function,
and sum over all common divisors \delta \in \mathbf N of a and m, we obtain

\sum_{\delta,d\in\mathbf N,\ \delta d\,|\,m,a} (\delta d)^{k-1}\chi(\delta d)\,\mu(\delta)\,f\bigl(2ma/(\delta d)^{2}\bigr) = \sum_{\delta|m,a} \delta^{k-1}\chi(\delta)\,\lambda(m/\delta)\,f(2a/\delta).

Since for b \in \mathbf N

(3.3)\qquad \sum_{\delta|b}\mu(\delta) = \begin{cases} 1, & \text{if } b = 1,\\ 0, & \text{if } b > 1,\end{cases}

it follows that the left side of the last formula is

\sum_{b|m,a} b^{k-1}\chi(b)\,f(2ma/b^{2})\sum_{\delta|b}\mu(\delta) = f(2ma),

and we arrive at the following multiplicative identity for f:

(3.4)\qquad f(2ma) = \sum_{d\in\mathbf N,\ d|m,a} d^{k-1}\chi(d)\,\mu(d)\,\lambda(m/d)\,f(2a/d),

where m \in \mathbf N_{(q)} and a = 0, 1, \dots. This series of identities is actually equivalent to
(3.2).
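The Möbius-sum identity (3.3) that drives this inversion is easy to check by machine; a minimal sketch using only the standard library:

```python
def mobius(n):
    """Moebius function mu(n) via trial-division factorization."""
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0  # a squared prime factor kills the term
            result = -result
        d += 1
    if n > 1:
        result = -result
    return result

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

# (3.3): the Moebius sum over the divisors of b detects b = 1
for b in range(1, 200):
    total = sum(mobius(d) for d in divisors(b))
    assert total == (1 if b == 1 else 0)
```

It is precisely this detection of b = 1 that collapses the double sum above to the single term f(2ma).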
These identities show, in particular, that for any a \in \mathbf N

(3.5)\qquad f(2a) = \lambda(m)\,f(2a/m),

where m = m(a) is the greatest divisor of a that is in \mathbf N_{(q)}. Thus, the question of
the dependence of f on divisors of the argument that are prime to q reduces to the
study of the corresponding eigenvalues \lambda(m). The next theorem, which is also based
on these identities, gives a complete description of the multiplicative properties of the
eigenvalues \lambda(m).
THEOREM 3.1. The eigenvalues \lambda(m) = \lambda(m,F) of T(m) that correspond to a
nonzero eigenfunction F \in \mathfrak M_k(q,\chi) satisfy the following multiplicative relations for
m, m_1 \in \mathbf N_{(q)}:

(3.6)\qquad \lambda(m)\lambda(m_1) = \sum_{d|m,m_1} d^{k-1}\chi(d)\,\lambda(mm_1/d^{2}),

(3.7)\qquad \lambda(mm_1) = \sum_{d|m,m_1} d^{k-1}\chi(d)\,\mu(d)\,\lambda(m/d)\,\lambda(m_1/d).

If k > 0, then the eigenvalues satisfy the inequalities

(3.8)\qquad |\lambda(m)| \le c_F\,m^{k} \qquad (m\in \mathbf N_{(q)})

(they satisfy the inequalities

(3.9)\qquad |\lambda(m)| \le c_F\,m^{k/2} \qquad (m\in \mathbf N_{(q)}),

if F is a cusp-form), where c_F depends only on F.


PROOF. As before, let f(2a), a = 0,1,2,\dots, denote the Fourier coefficients of F.
First suppose that f(0) \ne 0. If we write the relations (3.2) for a = 0 and divide both
sides by f(0), we obtain

(3.10)\qquad \lambda(m) = \sum_{d|m} d^{k-1}\chi(d) \qquad (m\in \mathbf N_{(q)}),

from which (3.6) and (3.7) easily follow by an elementary number-theoretic argument.
We leave the details to the reader as an exercise. Now suppose that there exist nonzero
a for which f(2a) \ne 0 (this is always the case if k > 0), and let x = x(F) be the
smallest such a. Let d denote the largest divisor of x that is in \mathbf N_{(q)}. Since d and
x/d are relatively prime, the relation (3.2) gives f(2x) = \lambda(d)f(2x/d), and hence
f(2x/d) \ne 0. Thus, d = 1, i.e.,

(3.11)\qquad x(F)\,|\,q^{\infty} \quad (\text{in particular, } x(F) = 1, \text{ if } q = 1).

From this and from (3.2) for a = x we have the formulas

(3.12)\qquad \lambda(m) = f(2mx)/f(2x) \qquad (m\in \mathbf N_{(q)}).

These formulas together with (3.2) imply that

\lambda(m)\lambda(m_1) = \lambda(m)\,f(2m_1x)/f(2x)
 = \sum_{d|m,\,m_1x} d^{k-1}\chi(d)\,f(2mm_1x/d^{2})\big/f(2x)
 = \sum_{d|m,m_1} d^{k-1}\chi(d)\,\lambda(mm_1/d^{2}),

since, by (3.11), the common divisors of m and m_1x for m, m_1 \in \mathbf N_{(q)} must divide m_1;
this proves (3.6). Similarly, from (3.12), (3.4), and (3.11) we obtain

\lambda(mm_1) = f(2mm_1x)/f(2x)
 = \sum_{d|m,\,m_1x} d^{k-1}\chi(d)\,\mu(d)\,\lambda(m/d)\,f(2m_1x/d)\big/f(2x)
 = \sum_{d|m,m_1} d^{k-1}\chi(d)\,\mu(d)\,\lambda(m/d)\,\lambda(m_1/d);

and this proves (3.7).


If k > 0, then F is not a constant, and so some of its Fourier coefficients with
nonzero indices must be nonzero. Hence, x = x(F) exists, and we can represent the
eigenvalues \lambda(m) in the form (3.12). If we apply the inequalities (3.35) of Chapter 2
((3.70) of Chapter 2 if F is a cusp-form) to the Fourier coefficients f(2mx), we obtain
(3.8) ((3.9) if F is a cusp-form). \Box

The identities (3.7) show, in particular, that the eigenvalue \lambda(m) for any m \in \mathbf N_{(q)}
can be explicitly written as a polynomial in the eigenvalues \lambda(p), where p runs through
the prime divisors of m.
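For degree 1 these relations can be observed numerically on the classical discriminant cusp form \Delta \in \mathfrak M_{12}(1,1), whose eigenvalues \lambda(m) are the Ramanujan numbers \tau(m) (here k = 12, \chi = 1, and every Fourier coefficient is an eigenvalue since \tau(1) = 1). The sketch below computes \tau from the product expansion q\prod_{n\ge 1}(1-q^n)^{24} and checks (3.6) in its two basic cases:

```python
def tau_coeffs(N):
    """Ramanujan tau(1..N) from Delta = q * prod_{n>=1} (1 - q^n)^24."""
    # coefficients of prod (1 - q^n)^24, truncated at degree N
    poly = [0] * (N + 1)
    poly[0] = 1
    for n in range(1, N + 1):
        for _ in range(24):
            # multiply in place by (1 - q^n)
            for i in range(N, n - 1, -1):
                poly[i] -= poly[i - n]
    # the extra factor q shifts indices: tau(m) = coefficient of q^{m-1}
    return {m: poly[m - 1] for m in range(1, N + 1)}

tau = tau_coeffs(40)
assert tau[1] == 1 and tau[2] == -24 and tau[3] == 252

# (3.6) with coprime m, m1: the eigenvalues are multiplicative
assert tau[6] == tau[2] * tau[3]
assert tau[10] == tau[2] * tau[5]

# (3.6) with m = m1 = p: tau(p)^2 = tau(p^2) + p^{k-1}
for p in (2, 3, 5):
    assert tau[p] ** 2 == tau[p * p] + p ** 11
```

By (3.7), every \tau(m) is then a polynomial in the values \tau(p) at the primes dividing m.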
The identities (3.4) and the identities in Theorem 3.1 have the following elegant
reformulation in the language of Dirichlet series.
THEOREM 3.2. Let

F = \sum_{a=0}^{\infty} f(2a)\,e^{2\pi i a z} \in \mathfrak M_k(q,\chi)

be a nonzero modular form of weight k \in \mathbf N and character \chi for the group \Gamma_0^{1}(q). Suppose
that F is an eigenfunction of all of the Hecke operators on \mathfrak M_k(q,\chi) of the form |_{k,\chi}T(m)
for m \in \mathbf N_{(q)}, and let \lambda(m) = \lambda(m,F) be the corresponding eigenvalues. Then for any
a \in \mathbf N the Dirichlet series

(3.13)\qquad D(s,a;F) = \sum_{m\in\mathbf N_{(q)}} \frac{f(2am)}{m^{s}}

converges absolutely and uniformly in any right half-plane of the complex variable s of
the form \operatorname{Re} s \ge k + 1 + \varepsilon (of the form \operatorname{Re} s \ge k/2 + 1 + \varepsilon if F is a cusp-form) with

\varepsilon > 0. In that region it factors as follows:

(3.14)\qquad D(s,a;F) = \Bigl(\sum_{d\in\mathbf N_{(q)},\ d|a} \chi(d)\mu(d)\,f(2a/d)\,d^{k-1-s}\Bigr)\zeta(s,F),

where \mu is the Möbius function and

(3.15)\qquad \zeta(s,F) = \zeta(s,F;q) = \sum_{m\in\mathbf N_{(q)}} \frac{\lambda(m)}{m^{s}}

is the zeta-function corresponding to the eigenfunction F. The zeta-function \zeta(s,F)
converges absolutely and uniformly for s in any of the half-planes indicated above, and in
that region it has an absolutely and uniformly convergent Euler product of the form

(3.16)\qquad \zeta(s,F) = \prod_{p\in\mathbf P_{(q)}} \bigl(1 - \lambda(p)p^{-s} + \chi(p)p^{k-1-2s}\bigr)^{-1}.

PROOF. The absolute and uniform convergence of the series (3.13) in the indicated
regions follows in the usual way from the estimates (3.35) of Chapter 2 (from (3.70) of
Chapter 2 if F is a cusp-form). The convergence of the series (3.15) and the product
(3.16) follows from the estimates in Theorem 3.1.
Using (3.4), we have

D(s,a;F) = \sum_{m\in\mathbf N_{(q)}} \frac{1}{m^{s}}\sum_{d|m,a} d^{k-1}\chi(d)\mu(d)\,\lambda(m/d)\,f(2a/d),

from which, replacing m by dm, we obtain

\sum_{d,m\in\mathbf N_{(q)},\ d|a} \frac{1}{(dm)^{s}}\,d^{k-1}\chi(d)\mu(d)\,f(2a/d)\,\lambda(m)
 = \Bigl(\sum_{d\in\mathbf N_{(q)},\ d|a} \chi(d)\mu(d)\,f(2a/d)\,d^{k-1-s}\Bigr)\sum_{m\in\mathbf N_{(q)}} \frac{\lambda(m)}{m^{s}},

and this proves (3.14).


To prove the Euler expansion (3.16) we first note that, by (3.6),

(3.17)\qquad \lambda(m)\lambda(m_1) = \lambda(mm_1), \quad \text{if } m, m_1 \in \mathbf N_{(q)} \text{ are relatively prime},

from which, by the unique factorization of integers, we obtain

(3.18)\qquad \zeta(s,F) = \prod_{p\in\mathbf P_{(q)}} \zeta_p(s),

where

(3.19)\qquad \zeta_p(s) = \zeta_p(s,F) = \sum_{\nu=0}^{\infty} \lambda(p^{\nu})\,v^{\nu} \qquad (v = p^{-s})

are the so-called local zeta-functions of the modular form F. To sum the power series
(3.19) we make use of a special case of (3.6):

\lambda(p)\lambda(p^{\nu}) = \lambda(p^{\nu+1}) + p^{k-1}\chi(p)\,\lambda(p^{\nu-1}) \qquad (\nu\ge 1).

Using this relation, we have

\lambda(p)\,\zeta_p(s) = \lambda(p) + \sum_{\nu=1}^{\infty}\lambda(p)\lambda(p^{\nu})v^{\nu}
 = \lambda(p) + \sum_{\nu=1}^{\infty}\lambda(p^{\nu+1})v^{\nu} + p^{k-1}\chi(p)\sum_{\nu=1}^{\infty}\lambda(p^{\nu-1})v^{\nu}
 = (\zeta_p(s) - 1)v^{-1} + p^{k-1}\chi(p)\,v\,\zeta_p(s),

from which, if we solve for \zeta_p(s), we obtain

(3.20)\qquad \zeta_p(s) = \bigl(1 - \lambda(p)v + p^{k-1}\chi(p)v^{2}\bigr)^{-1} \qquad (v = p^{-s}),

and this, along with (3.18), proves (3.16). \Box
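The computation behind (3.20) is just a three-term recursion for the coefficients of a geometric-type series, and it can be watched concretely on \Delta (where k = 12, \chi = 1 and \lambda(p^{\nu}) = \tau(p^{\nu})): the power-series coefficients of (1 - \tau(p)v + p^{11}v^2)^{-1} reproduce the eigenvalues at prime powers. A sketch:

```python
def tau_coeffs(N):
    """Ramanujan tau(1..N) from Delta = q * prod_{n>=1} (1 - q^n)^24."""
    poly = [0] * (N + 1)
    poly[0] = 1
    for n in range(1, N + 1):
        for _ in range(24):
            for i in range(N, n - 1, -1):
                poly[i] -= poly[i - n]
    return {m: poly[m - 1] for m in range(1, N + 1)}

def local_factor_coeffs(lam_p, p, k, terms):
    """Power-series coefficients a_nu of (1 - lam_p*v + p^{k-1}*v^2)^(-1):
    a_0 = 1, a_1 = lam_p, a_nu = lam_p*a_{nu-1} - p^{k-1}*a_{nu-2}."""
    a = [1]
    for _ in range(terms - 1):
        a.append(lam_p * a[-1] - p ** (k - 1) * (a[-2] if len(a) >= 2 else 0))
    return a

tau = tau_coeffs(130)
for p in (2, 3):
    coeffs = local_factor_coeffs(tau[p], p, 12, 5)
    for nu in range(5):  # p^4 <= 130 for p = 2, 3
        assert coeffs[nu] == tau[p ** nu]
```

The recursion in the code is exactly the special case of (3.6) displayed above, read as \lambda(p^{\nu+1}) = \lambda(p)\lambda(p^{\nu}) - p^{k-1}\chi(p)\lambda(p^{\nu-1}).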

REMARK. The Euler product expansion (3.16) also follows from the properties of
the elements T(m) that we studied in §3.3. That is, the relations (3.17), and hence
(3.18), are a direct consequence of (3.20) of Chapter 3. The identity (3.20) can be
obtained if the elements T(p^{\delta}) in the first identity of Proposition 3.35 of Chapter 3
are replaced by the corresponding eigenvalues and we use the fact that, by (2.34),
F|_{k,\chi}\Delta_1(p) = p^{k-2}\chi(p)F.
The expansions in Theorem 3.2 do not give any new information about the mul-
tiplicative properties of the Fourier coefficients of eigenfunctions of Hecke operators
that was not contained, for example, in the identities (3.4). But they make it possible
to express the zeta-function \zeta(s,F) in terms of the original modular form, and in
many cases this enables one to investigate its analytic properties (see Problem 3.9).
In the case of modular forms of degree n > 1 it seems that there are no universal
identities that express individual eigenvalues in terms of the Fourier coefficients of an
eigenfunction, or vice-versa. (Note that when n > 1 the Fourier coefficients and the
eigenvalues are even indexed by sets that are not related to one another in any clear
way: the set of integral equivalence classes of matrices in A_n in the first case, and a set
the Dirichlet series, in the multivariable case one is able to express certain Dirichlet
series constructed from the Fourier coefficients of eigenfunctions in terms of Euler
products (zeta-functions) constructed from the eigenvalues, and vice-versa. On the
one hand, the resulting identities reveal the multiplicative nature of the Fourier coef-
ficients; on the other hand, as in the one-variable case, they enable us to investigate
the analytic properties of the zeta-functions that appear. Unfortunately, the identity
in Theorem 3.2 has thus far been generalized only to modular forms of degree n = 2.
This generalization will be explained in the second part of this section.
PROBLEM 3.3. Prove that a modular form F \in \mathfrak M_k^{1}(q,\chi) is an eigenfunction for all
of the Hecke operators |_{k,\chi}T = |T for T \in L^{1}(q) if it is an eigenfunction for all T of
the form T(p) with p \in \mathbf P_{(q)}.
PROBLEM 3.4. Let F_1, \dots, F_h be a basis of a subspace of \mathfrak M_k^{1}(q,\chi) that is invariant
relative to all of the Hecke operators |T (T \in L^{1}(q)), and let

F_i|T = \sum_{j=1}^{h}\lambda_{ij}(T)\,F_j \quad\text{for } i = 1,\dots,h.

Let \mathbf f(2a) for a = 0, 1, \dots denote the column made up of the Fourier coefficients
f_i(2a) of the forms F_i; and for m \in \mathbf N_{(q)} let \Lambda(m) denote the matrix with entries
\lambda_{ij}(T(m)). Prove the following relations:

\sum_{d|m,a} d^{k-1}\chi(d)\,\mathbf f(2ma/d^{2}) = \Lambda(m)\,\mathbf f(2a) \qquad (m\in\mathbf N_{(q)});

\mathbf f(2ma) = \sum_{d|m,a} d^{k-1}\chi(d)\mu(d)\,\Lambda(m/d)\,\mathbf f(2a/d) \qquad (m\in\mathbf N_{(q)});

\Lambda(mm_1) = \sum_{d|m,m_1} d^{k-1}\chi(d)\mu(d)\,\Lambda(m/d)\,\Lambda(m_1/d) \qquad (m,m_1\in\mathbf N_{(q)});

\sum_{m\in\mathbf N_{(q)}} m^{-s}\,\mathbf f(2am) = \Bigl(\sum_{m\in\mathbf N_{(q)}} m^{-s}\Lambda(m)\Bigr)\Bigl(\sum_{d\in\mathbf N_{(q)},\ d|a} \chi(d)\mu(d)\,d^{k-1-s}\,\mathbf f(2a/d)\Bigr) \qquad (a\in\mathbf N),

where the identity is understood in the formal sense;

\sum_{m\in\mathbf N_{(q)}} m^{-s}\Lambda(m) = \prod_{p\in\mathbf P_{(q)}} \bigl(E_h - p^{-s}\Lambda(p) + \chi(p)p^{k-1-2s}E_h\bigr)^{-1},

where this is again understood as a formal identity.


PROBLEM 3.5. In the notation of the previous problem, show that in the case k > 0
the entries \lambda_{ij}(T(m)) in the matrix \Lambda(m) satisfy the inequalities

|\lambda_{ij}(T(m))| \le c\,m^{k} \qquad (m\in\mathbf N_{(q)})

(and satisfy the inequality

|\lambda_{ij}(T(m))| \le c\,m^{k/2} \qquad (m\in\mathbf N_{(q)}),

if F_1, \dots, F_h are cusp-forms), where c does not depend on m. Using these estimates,
investigate the convergence of the matrix series and products in the previous problem.
[Hint: Let a_1, \dots, a_h \in \mathbf N be chosen so that the matrix A = (f_i(2a_j)) is in-
vertible. Then \Lambda(m) = B(m)A^{-1}, where B(m) is the matrix whose columns are
\sum_{d|m,a_j} d^{k-1}\chi(d)\,\mathbf f(2ma_j/d^{2}).]

PROBLEM 3.6. Let q(X) = q(x_1,\dots,x_m) be a positive definite integral quadratic
form whose matrix has determinant 1. Show that there exists a finite set of functions
\lambda_1, \dots, \lambda_h : \mathbf N \to \mathbf C satisfying the relations

\lambda_i(a)\lambda_i(b) = \sum_{d|a,b} d^{\,m/2-1}\,\lambda_i(ab/d^{2}) \quad\text{for } a,b\in\mathbf N

and having the property that the number r(q,a) of integer solutions of the equation
q(X) = a can be represented in the form

r(q,a) = \sum_{i=1}^{h}\alpha_i\,\lambda_i(a) \qquad (a\in\mathbf N)

with constant coefficients \alpha_i. Then deduce that for any a \in \mathbf N and any prime p the
power series

\sum_{\nu=0}^{\infty} r(q,\,ap^{\nu})\,v^{\nu}

is a rational function in v with denominator

\prod_{i=1}^{h}\bigl(1 - \lambda_i(p)v + p^{\,m/2-1}v^{2}\bigr).
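A concrete instance: for the E_8 quadratic form (m = 8, so m/2 - 1 = 3) one has the classical count r(q,a) = 240\,\sigma_3(a) for a \ge 1, since its theta-series is the Eisenstein series E_4; here h = 1 and \lambda_1 = \sigma_3. The sketch below multiplies the power series by the predicted denominator 1 - \sigma_3(p)v + p^{3}v^{2} and checks that the product is a polynomial (all higher coefficients vanish):

```python
def sigma3(n):
    return sum(d ** 3 for d in range(1, n + 1) if n % d == 0)

def series_times_quadratic(r_vals, lam, c):
    """Coefficients of (sum_nu r_vals[nu] v^nu) * (1 - lam*v + c*v^2),
    valid up to index len(r_vals) - 1 (the truncation tail is ignored)."""
    L = len(r_vals)
    out = []
    for i in range(L):
        val = r_vals[i]
        if i >= 1:
            val -= lam * r_vals[i - 1]
        if i >= 2:
            val += c * r_vals[i - 2]
        out.append(val)
    return out

# r(q, a) = 240 * sigma3(a) for the E8 form
for p in (2, 3):
    for a in (1, 2, 3, 5, 12):
        r_vals = [240 * sigma3(a * p ** nu) for nu in range(8)]
        prod = series_times_quadratic(r_vals, sigma3(p), p ** 3)
        # the numerator has degree at most 1: the series is rational
        assert all(c == 0 for c in prod[2:])
```

That the numerator here has degree at most 1 is special to h = 1; in general the problem only promises the displayed product as a denominator.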

PROBLEM 3.7. Let

F(z) = \sum_{a=0}^{\infty} f(2a)\,e^{2\pi i a z} \in \mathfrak M_k^{1}(q,\chi).

Using the integral representation of the gamma-function

\Gamma(s) = \int_0^{\infty} t^{s-1}e^{-t}\,dt \qquad (\operatorname{Re} s > 0),

prove that the Dirichlet series with coefficients f(2a) has the following integral repre-
sentation in terms of the original modular form F:

\Psi(s;F) = (2\pi)^{-s}\Gamma(s)\sum_{a\in\mathbf N} \frac{f(2a)}{a^{s}} = \int_0^{\infty} t^{s-1}\bigl(F(it) - f(0)\bigr)\,dt \qquad (\operatorname{Re} s > k).
PROBLEM 3.8. In the notation of the previous problem, suppose that $q = 1$ and
$\chi = 1$. Derive the identity
$$\Psi(s;F) = \int_1^{\infty} t^{s-1}\bigl(F(it)-f(0)\bigr)\,dt + i^k\int_1^{\infty} t^{k-s-1}\bigl(F(it)-f(0)\bigr)\,dt - f(0)\left(\frac{1}{s} + \frac{i^k}{k-s}\right) \qquad (\operatorname{Re}s > k).$$
From this deduce that the function $\Psi(s;F)$ has a meromorphic continuation to the
entire $s$-plane, the function
$$\Psi(s;F) + f(0)\left(\frac{1}{s} + \frac{i^k}{k-s}\right)$$
is an entire function, and $\Psi(s;F)$ satisfies the functional equation
$$\Psi(k-s;F) = i^k\Psi(s;F).$$
[Hint: Divide the integral from 0 to $\infty$ in the previous problem into the integral
from 1 to $\infty$ and the integral from 0 to 1. In the latter integral make the change of
variable $t\to 1/t$ and use the relation $F(i/t) = F(-1/(it)) = (it)^kF(it)$.]
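Problems 3.7 and 3.8 can be checked numerically on the discriminant cusp form $\Delta\in\mathfrak{M}_{12}(1,1)$, for which $f(0) = 0$ and the coefficients are Ramanujan's $\tau(n)$. The sketch below compares the Mellin integral with the Dirichlet-series side at $s = 13$; the truncation order $N = 60$ and the quadrature window $[0.05, 10]$ are assumptions chosen for high accuracy near this $s$:

```python
import math

N = 60  # truncation order of the q-expansion of Delta

def delta_coeffs(n):
    """tau(1..n) from Delta = q * prod_{m >= 1} (1 - q^m)^24."""
    poly = [1] + [0] * n
    for m in range(1, n + 1):
        for _ in range(24):               # multiply the series by (1 - q^m)
            for i in range(n, m - 1, -1):
                poly[i] -= poly[i - m]
    return [0] + poly[:n]                 # tau[k] = coefficient of q^k in Delta

tau = delta_coeffs(N)

def delta_it(t):
    """Delta(it) = sum_n tau(n) e^{-2 pi n t}, truncated at n = N."""
    return sum(tau[n] * math.exp(-2 * math.pi * n * t) for n in range(1, N + 1))

def psi(s, a=0.05, b=10.0, steps=8000):
    """Simpson approximation of  int_0^inf t^{s-1} Delta(it) dt."""
    h = (b - a) / steps
    total = 0.0
    for i in range(steps + 1):
        t = a + i * h
        w = 1 if i in (0, steps) else (4 if i % 2 else 2)
        total += w * t ** (s - 1) * delta_it(t)
    return total * h / 3

s = 13.0
psi_val = psi(s)
dirichlet = (2 * math.pi) ** (-s) * math.gamma(s) * sum(
    tau[n] / n ** s for n in range(1, N + 1))
print(psi_val, dirichlet)   # the two sides of the identity of Problem 3.7
```

Splitting the same integral at $t = 1$ as in Problem 3.8 makes the symmetry manifest: with $k = 12$ and $i^{12} = 1$, the completed integral $\int_1^{\infty}(t^{s-1}+t^{11-s})\Delta(it)\,dt$ is visibly unchanged under $s\to 12-s$.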
278 4. HECKE OPERATORS

PROBLEM 3.9. Prove the following properties of the zeta-function $\zeta(s,F)$ corresponding to a nonzero eigenfunction $F\in\mathfrak{M}_k(1,1)$ of all of the Hecke operators
$|T(m)$:
(1) $\zeta(s,F)$ has a meromorphic continuation to the entire $s$-plane;
(2) the function
$$\Lambda(s;F) + \frac{f(0)}{f(2)}\left(\frac{1}{s} + \frac{i^k}{k-s}\right),$$
where $\Lambda(s;F) = (2\pi)^{-s}\Gamma(s)\zeta(s,F)$ and $f(2a)$ are the Fourier coefficients of $F$, is an
entire function;
(3) $\zeta(s,F)$ satisfies the functional equation
$$\Lambda(k-s;F) = i^k\Lambda(s;F).$$

PROBLEM 3.10. Prove that:

(1) $H_q^{-1}\Gamma_0^1(q)H_q = \Gamma_0^1(q)$, where $H_q = \begin{pmatrix}0&-1\\q&0\end{pmatrix}$.
(2) The map $F\to F|_kH_q = (qz)^{-k}F(-1/qz)$ gives an isomorphism between
$\mathfrak{M}_k(q,\chi)$ and $\mathfrak{M}_k(q,\bar\chi)$, where $\bar\chi$ is the conjugate of $\chi$.
(3) If $\chi$ is a real character modulo $q$, then
$$\mathfrak{M}_k(q,\chi) = \mathfrak{M}_k^+(q,\chi)\oplus\mathfrak{M}_k^-(q,\chi),$$
where $F\in\mathfrak{M}_k^{\pm}(q,\chi)$ if $F|_kH_q = \pm q^{-k/2}i^kF$.
(4) If $F\in\mathfrak{M}_k(q,\chi)$, then
$$(F|_kH_q)|_{k,\bar\chi}T(m) = \chi(m)\bigl(F|_{k,\chi}T(m)\bigr)|_kH_q \qquad (m\in\mathbf{N}(q)),$$
so that in the case of a real character $\chi$ the subspaces $\mathfrak{M}_k^{\pm}(q,\chi)$ are invariant relative
to all of the Hecke operators $|T(m)$.
(5) Let $F$ be a modular form in $\mathfrak{M}_k^{\pm}(q,\chi)$, where $\chi$ is a real character. In the
notation of Problem 3.7, prove that one has the following integral representation:
$$\Psi(s;F) = \int_{q^{-1/2}}^{\infty} t^{s-1}\bigl(F(it)-f(0)\bigr)\,dt \pm q^{k/2-s}\int_{q^{-1/2}}^{\infty} t^{k-s-1}\bigl(F(it)-f(0)\bigr)\,dt - f(0)q^{-s/2}\left(\frac{1}{s}\pm\frac{1}{k-s}\right) \qquad (\operatorname{Re}s > k).$$
From this deduce that $\Psi(s;F)$ has a meromorphic continuation to the entire $s$-plane,
the function
$$\Psi(s;F) + f(0)q^{-s/2}\left(\frac{1}{s}\pm\frac{1}{k-s}\right)$$
is an entire function, and $\Psi(s;F)$ satisfies the functional equation
$$\Psi(k-s;F) = \pm q^{s-k/2}\Psi(s;F).$$
(6) Let $F$ be a nonzero modular form in $\mathfrak{M}_k(q,\chi)$ with Fourier coefficients
$f(0), f(2),\dots$. Suppose that $F$ is an eigenfunction for all of the Hecke operators
$|T(m)$ $(m\in\mathbf{N}(q))$, and let $\lambda(m)$ be the corresponding eigenvalues. Show that one has
the factorization
$$\sum_{a\in\mathbb{N}}\frac{f(2a)}{a^s} = \left(\sum_{a\in\mathbb{N},\,a\mid q^{\infty}}\frac{f(2a)}{a^s}\right)\left(\sum_{m\in\mathbf{N}(q)}\frac{\lambda(m)}{m^s}\right) \qquad (\operatorname{Re}s > k).$$

2. Modular forms of degree 2, Gaussian composition, and zeta-functions. As we mentioned before, in the case of modular forms of several variables it is natural to express the relations between the Fourier coefficients of eigenfunctions for the Hecke operators and the corresponding eigenvalues in the form of identities between suitable Dirichlet series. In this subsection we obtain analogues of the identities (3.14) for modular forms of degree 2. A special feature of degree 2 is that the Fourier coefficients are indexed by the matrices of binary quadratic forms, for which in many cases there exists a natural multiplication, the so-called Gaussian composition of quadratic forms. This composition turns out to be essential for understanding the multiplicative nature of the Fourier coefficients of eigenfunctions. At present there do not exist generalizations of the identities (3.14) to modular forms of degree greater than 2.
Consider a modular form
$$F(Z) = \sum_{A\in A_2} f(A)e\{AZ\} \in \mathfrak{M}_k^2(q,\chi)$$
of integer weight $k$ and one-dimensional character $[\chi]$ for the group $\Gamma_0^2(q)$, where $q\in\mathbb{N}$
and $\chi$ is a Dirichlet character modulo $q$. We suppose that $F$ is an eigenfunction for all
of the Hecke operators $|T = |_{k,\chi}T$ for $T\in L^2(q)$, with eigenvalues $\lambda(T) = \lambda(T;F)$:
$$F|T = \lambda(T)F.$$
The Dirichlet series constructed from the Fourier coefficients of F that we shall work
with is the series

$$(3.21)\qquad D(s,A,\gamma;F) = \sum_{m\in\mathbf{N}(q)}\frac{f(mA)\gamma(m)}{m^s},$$
where $A\in A_2$ and $\gamma\colon\mathbf{N}(q)\to\mathbb{C}$ is a completely multiplicative function, i.e., a function
satisfying the relation
$$(3.22)\qquad \gamma(mm_1) = \gamma(m)\gamma(m_1)\quad\text{for all } m, m_1\in\mathbf{N}(q),$$

and we shall also consider certain linear combinations of these series. On the other
hand, the Dirichlet series constructed from the eigenvalues of F that arise are Euler
products of the form

$$(3.23)\qquad \zeta(s,\gamma;F) = \prod_{p\in\mathbf{P}(q)} Q_p(\gamma(p)p^{-s};F)^{-1},$$
where $\gamma$ is the same as above, and $Q_p(v;F)$ for $p\in\mathbf{P}(q)$ denotes the polynomial
$$(3.24)\qquad Q_p(v;F) = \sum_{j=0}^{4}(-1)^j\lambda(q_j(p))v^j$$

and $q_j(p) = q_j^2(p)$ is defined as the preimage in $L^2(q)$ of the element $q_j^2(p)\in L^2$
(see (3.77) of Chapter 3) under the isomorphism $\varepsilon_q$ (see (3.45) of Chapter 3). We call
(3.23) the zeta-function of $F$ with "character" $\gamma$. Its $p$-factors
$$(3.25)\qquad \zeta_p(s,\gamma;F) = Q_p(\gamma(p)p^{-s};F)^{-1}$$
are called the local zeta-functions of $F$ (with character $\gamma$).
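The passage between a series with completely multiplicative coefficients and an Euler product of local factors, which drives everything below, can be checked numerically in the simplest degree-1 situation. As an illustration (the choice of $\gamma$ as the nontrivial character mod 4 and the cutoffs are assumptions), both expressions approximate $L(2,\chi_{-4})$, Catalan's constant:

```python
def gamma_chi4(m):
    """Nontrivial Dirichlet character mod 4 -- completely multiplicative."""
    return 0 if m % 2 == 0 else (1 if m % 4 == 1 else -1)

s = 2.0

# Dirichlet series  sum_m gamma(m) / m^s
series = sum(gamma_chi4(m) / m ** s for m in range(1, 200000))

# Euler product  prod_p (1 - gamma(p) p^{-s})^{-1}  over primes p < 10^5
limit = 100000
sieve = [True] * limit
sieve[0] = sieve[1] = False
for i in range(2, int(limit ** 0.5) + 1):
    if sieve[i]:
        for j in range(i * i, limit, i):
            sieve[j] = False

product = 1.0
for p in range(2, limit):
    if sieve[p]:
        product /= 1 - gamma_chi4(p) * p ** (-s)

print(series, product)   # both close to Catalan's constant 0.9159655...
```

Complete multiplicativity (3.22) is exactly what makes the grouping of the terms by their prime factorizations, and hence the product expansion, work.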
We begin our search for "global" connections between the series (3.21) and the
products (3.23) by finding local relations, i.e., relations between the power series
$$(3.26)\qquad D_p(s,A,\gamma;F) = \sum_{\delta=0}^{\infty} f(p^{\delta}A)v^{\delta},\qquad\text{where } v = \gamma(p)p^{-s},$$
and the local zeta-functions $\zeta_p(s,\gamma;F)$. In the computations below we use the technique of Chapter 3, based on extending Hecke rings of the symplectic group to
Hecke rings of the triangular subgroup. According to the philosophy of §2.2, we
consider $\mathfrak{M}_k^2(q,\chi)$ as an invariant subspace relative to all of the Hecke operators in
$L_\varepsilon^2(q) = \varepsilon_q(L^2(q))$ inside the space $\mathfrak{M}_\varepsilon^2$, where $\varepsilon(-1) = (-1)^k\chi(-1)$; and we consider the space $\mathfrak{F}_\varepsilon^2(q,\chi)$ of Fourier coefficients of functions in $\mathfrak{M}_k^2(q,\chi)$ as an invariant
subspace of $\mathfrak{F}_\varepsilon^2$ relative to the same Hecke operators. We shall systematically make
use of the notation (2.72) and the relations (2.74). For most of what we do, what
is important is simply that $F$ lies in $\mathfrak{M}_\varepsilon^2$ and is an eigenfunction for certain Hecke
operators. Everywhere in what follows $k$, $\chi$, and $\varepsilon$ are fixed, and are connected by the
relation $\varepsilon(-1) = (-1)^k\chi(-1)$.
LEMMA 3.11. Let $F\in\mathfrak{M}_\varepsilon^2$ be an eigenfunction for all of the Hecke operators
$|T = |_{k,\chi}T$ for $T\in L_p^2$, where $p$ is a fixed prime, let $\lambda(T)$ be the corresponding
eigenvalues, and let $f(A)$ $(A\in A_2)$ be the Fourier coefficients of $F$. Then the following
formal identity holds for every matrix $B\in A_2$:
$$(3.27)\qquad d_p(v,B) = \sum_{\delta=0}^{\infty} f(p^{\delta}B)v^{\delta} = Q_p(v;F)^{-1}\bigl(f|_{k,\chi}(1-\Pi_-v)\bigl(1-\Pi_1v + p(\Pi_{2,0}^{(1)}+\Pi_{2,0}^{(0)})v^2\bigr)\bigr)(B),$$
where, by analogy with (3.24), we set
$$Q_p(v;F) = \sum_{j=0}^{4}(-1)^j\lambda(q_j(p))v^j,$$
$\Pi_- = \Pi_-(p) = \Pi_-^2(p)$ and $\Pi_1 = \Pi_1^2(p)$ are the elements (3.59) of Chapter 3, and
$\Pi_{2,0}^{(i)} = \Pi_{2,0}^{(i)}(p)$ are the elements (3.62) of Chapter 3 for $n = 2$.

PROOF. First of all, by (2.21) the function $f\in\mathfrak{F}_\varepsilon^2$ is an eigenfunction for all of
the Hecke operators in $L_p^2$ and has the same eigenvalues as $F$; this implies that for any
$B'\in A_2$
$$Q_p(v;F)f(B') = \sum_{j=0}^{4}(-1)^j(f|q_j(p))(B')v^j = (f|Q_p^2(v))(B'),$$
where $Q_p^2(v)$ is the polynomial (3.78) of Chapter 3 over the ring $L_p^2\subset L_{0,p}^2$. Furthermore, it follows from (2.33) that for any $g\in\mathfrak{F}_\varepsilon^2$ and any $B\in A_2$ we can write
$$g(p^{\delta}B) = (g|\Pi_+^{\delta}(p))(B) = (g|\Pi_+^{\delta})(B)\qquad(\delta\ge 0),$$
where $\Pi_+ = \Pi_+(p) = \Pi_+^2(p)$. Using these formulas and the factorization of the
polynomial $Q_p^2(v)$ in Proposition 6.13 of Chapter 3, we obtain
$$Q_p(v;F)d_p(v,B) = \sum_{j,\delta}(-1)^j(f|q_j(p))(p^{\delta}B)v^{j+\delta} = \sum_{j,\delta}(-1)^j(f|q_j(p)\Pi_+^{\delta})(B)v^{j+\delta}$$
$$= \Bigl(f\Bigl|Q_p^2(v)\sum_{\delta=0}^{\infty}\Pi_+^{\delta}v^{\delta}\Bigr)(B) = \bigl(f|(1-\Pi_-v)\bigl(1-\Pi_1v + p(\Pi_{2,0}^{(1)}+\Pi_{2,0}^{(0)})v^2\bigr)\bigr)(B).\qquad\square$$
The identity (3.27) shows that each series $d_p(v,B)$ is a rational function with
denominator $Q_p(v;F)$. To compute the numerator we need formulas for the action
of the operators on functions $f\in\mathfrak{F}_\varepsilon^2$. First, if we specialize the formula (2.33) to the
case $n = 2$ and $a = 0, 1$, we obtain the formulas
$$(3.28)\qquad (f|\Pi_-)(B) = \begin{cases}p^{2k-3}\chi(p)^2f(p^{-1}B), & \text{if } B\in pA_2,\\ 0, & \text{if } B\notin pA_2,\end{cases}$$
$$(3.29)\qquad (f|\Pi_1)(B) = p^{k-2}\chi(p)\sum_{\substack{D\in\Lambda_+\backslash\Lambda_+D_2(p)\Lambda_+,\\ B[{}^tD]\in pA_2}} f(p^{-1}B[{}^tD]),$$
where $\Lambda_+ = SL_2(\mathbb{Z})$. Next, by Lemma 2.19 we have


$$(3.30)\qquad (f|\Pi_{2,0}^{(1)})(B) = p^{2(k-3)}\chi(p)^2\,l_p(1,2;B)f(B)$$
and
$$(3.31)\qquad (f|\Pi_{2,0}^{(0)})(B) = (f|\Delta_2(p))(B) = p^{2(k-3)}\chi(p)^2f(B).$$
By Lemma 2.21,
$$l_p(1,2;B) = (p-1)^{-1}p\,\rho_p(1,2;B) - (p+1),$$
where for $B = \begin{pmatrix}2b_1&b_2\\b_2&2b_3\end{pmatrix}$ we let $\rho_p(1,2;B)$ denote the number of nontrivial solutions of the congruence
$$(3.32)\qquad b(x,y) = b_1x^2 + b_2xy + b_3y^2\equiv 0\pmod p.$$
According to Lemma 2.22, this number is equal to the number $i(V_p,b_p,1)$ of nonzero
isotropic vectors in the two-dimensional quadratic space $(V_p,b_p)$ over $\mathbb{F}_p$ with quadratic form $b_p = b(x,y)\bmod p$. If $(V_p,b_p)$ is a nondegenerate space, then, by Proposition 2.14 of Appendix 2, we have $i(V_p,b_p,1) = (p-\varepsilon)(1+\varepsilon)$, where $\varepsilon = \varepsilon(V_p,b_p)$
is the sign of the space $(V_p,b_p)$ (see (2.21) of Appendix 2). Since $\varepsilon = \pm 1$, the last
formula can be rewritten in the form
$$(3.33)\qquad i(V_p,b_p,1) = (\varepsilon+1)(p-1).$$

This formula is actually valid for any two-dimensional quadratic space $(V_p,b_p)$,
if we set
$$(3.34)\qquad \varepsilon(V_p,b_p) = \begin{cases}0, & \text{if $(V_p,b_p)$ is degenerate but $b_p\ne 0$},\\ p, & \text{if $b_p = 0$}.\end{cases}$$
In fact, in the first case the space $(V_p,b_p)$ is the sum of two one-dimensional subspaces,
one null and one not null, and every nonzero isotropic vector must belong to the null
one-dimensional subspace; thus, the number of isotropic vectors is $p-1$. In the second
case, every nonzero vector is isotropic, so there are $p^2-1$ of them.
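The counts (3.33)-(3.34) are easy to confirm by direct enumeration over $\mathbb{F}_p$ for odd $p$. A sketch (the sample forms and primes below are arbitrary choices, and the $p$-sign of a nondegenerate form is computed from its discriminant by Euler's criterion):

```python
def isotropic_count(b1, b2, b3, p):
    """Number of (x, y) != (0, 0) mod p with b1 x^2 + b2 x y + b3 y^2 = 0 mod p."""
    return sum(1 for x in range(p) for y in range(p)
               if (x, y) != (0, 0) and (b1*x*x + b2*x*y + b3*y*y) % p == 0)

def sign_eps(b1, b2, b3, p):
    """p-sign of the form for odd p: Legendre symbol of D = b2^2 - 4 b1 b3,
    extended by 0 / p in the degenerate cases as in (3.34)."""
    D = (b2 * b2 - 4 * b1 * b3) % p
    if D != 0:
        return 1 if pow(D, (p - 1) // 2, p) == 1 else -1   # Euler's criterion
    if b1 % p == 0 and b2 % p == 0 and b3 % p == 0:
        return p                      # zero form: p^2 - 1 isotropic vectors
    return 0                          # degenerate, nonzero form: p - 1 of them

for (b1, b2, b3, p) in [(1, 0, 1, 5), (1, 0, 1, 7), (1, 2, 1, 7), (7, 14, 7, 7), (1, 1, 1, 3)]:
    eps = sign_eps(b1, b2, b3, p)
    assert isotropic_count(b1, b2, b3, p) == (eps + 1) * (p - 1)
print("i(V_p, b_p, 1) = (eps + 1)(p - 1) verified")
```

For instance, $x^2+y^2$ has sign $+1$ at $p = 5$ (eight isotropic vectors) and $-1$ at $p = 7$ (none), while $(x+y)^2$ at $p = 7$ is degenerate with exactly $p-1 = 6$.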
For every matrix $B\in A_2$ we define the $p$-sign $\varepsilon_p(B)$ by setting
$$(3.35)\qquad \varepsilon_p(B) = \varepsilon(V_p,b_p),$$
where $b_p(x,y)$ is the quadratic form with matrix $B$, considered over $\mathbb{F}_p$. Here the right
side of (3.35) is understood in the sense of (2.21) of Appendix 2 if the two-dimensional
quadratic space $(V_p,b_p)$ is nondegenerate, and in the sense of (3.34) if this space is
degenerate. Then, by the above observations, we can write
$$l_p(1,2;B) = p(\varepsilon_p(B)+1) - (p+1) = p\varepsilon_p(B) - 1,$$
and from this and (3.30)-(3.31) we finally obtain a formula for the action of the
operator $\Pi_{2,0}^{(1)} + \Pi_{2,0}^{(0)}$ on $\mathfrak{F}_\varepsilon^2$:
$$(3.36)\qquad \bigl(f|(\Pi_{2,0}^{(1)}+\Pi_{2,0}^{(0)})\bigr)(B) = p^{2k-5}\chi(p)^2\varepsilon_p(B)f(B).$$

In order to apply the identities (3.27), it remains to interpret the right side of
(3.29). If $\Delta = \det B\ne 0$, then any of the matrices $p^{-1}B[{}^tD]$ on the right in (3.29) is
the matrix of a positive definite integral quadratic form of discriminant $-\Delta$, and hence
it is naturally associated with a module of the imaginary quadratic field $\mathbb{Q}(\sqrt{-\Delta})$. This
enables us to interpret the right side of (3.29) in terms of the composition of quadratic
forms. We shall use the language, notation, and results of Appendix 3.
We first look at the conditions under the summation in (3.29). According to
Lemma 1.2 of Chapter 3, we can take our set of representatives of the left cosets
$\Lambda_+\backslash\Lambda_+D_2(p)\Lambda_+$ to be matrices of the form
$$D_2(p)\begin{pmatrix}u_1&u_2\\v_1&v_2\end{pmatrix} = \begin{pmatrix}u_1&u_2\\pv_1&pv_2\end{pmatrix},$$
where $\begin{pmatrix}u_1&u_2\\v_1&v_2\end{pmatrix}$ runs through a set of left coset representatives of $\Lambda_+ = \Gamma^1$ modulo
the subgroup
$$(3.37)\qquad \Bigl\{\begin{pmatrix}\alpha&\beta\\\gamma&\delta\end{pmatrix}\in\Lambda_+;\ \beta\equiv 0\pmod p\Bigr\}.$$
It is clear that two matrices in $\Gamma^1$ with first rows $(u_1,u_2)$ and $(u_1',u_2')$ lie in the same left
coset modulo the above group if and only if $u_1'u_2\equiv u_2'u_1\pmod p$, i.e., if and only if
the pairs $(u_1,u_2)$ and $(u_1',u_2')$ are proportional modulo $p$:
$$(3.38)\qquad (u_1',u_2')\equiv\alpha(u_1,u_2)\pmod p,\quad\text{where }\alpha\not\equiv 0\pmod p.$$



Thus, we can take
$$(3.39)\qquad \Lambda_+\backslash\Lambda_+D_2(p)\Lambda_+ = \Bigl\{\begin{pmatrix}1&0\\0&p\end{pmatrix}U;\ U = \begin{pmatrix}u_1&u_2\\v_1&v_2\end{pmatrix}\in\Lambda_+,\ (u_1,u_2)\in P^1(\mathbb{Z}/p\mathbb{Z})\Bigr\},$$
where $P^1(\mathbb{Z}/p\mathbb{Z})$ denotes an arbitrary set of representatives of the equivalence classes
of pairs of relatively prime integers under the equivalence (3.38) (this is the projective
line over $\mathbb{Z}/p\mathbb{Z}$). Let $B = \begin{pmatrix}2b_1&b_2\\b_2&2b_3\end{pmatrix}\in A_2$, and let $b(x,y) = b_1x^2 + b_2xy + b_3y^2$
be the quadratic form with matrix $B$. If $D = \begin{pmatrix}1&0\\0&p\end{pmatrix}U = \begin{pmatrix}u_1&u_2\\pv_1&pv_2\end{pmatrix}$, then,
multiplying out the matrices, we find that
$$B[{}^tD] = \begin{pmatrix}2b(u_1,u_2)&pb_2(U)\\pb_2(U)&2p^2b(v_1,v_2)\end{pmatrix},$$
where
$$(3.40)\qquad b_2(U) = 2b_1u_1v_1 + b_2(u_1v_2 + u_2v_1) + 2b_3u_2v_2.$$
In particular, if $U$ is an integer matrix, then the condition $B[{}^tD]\in pA_2$ is equivalent
to the congruence $b(u_1,u_2)\equiv 0\pmod p$. Thus, if we use the set of representatives
(3.39), we can rewrite (3.29) in the form
$$(3.41)\qquad (f|\Pi_1)(B) = p^{k-2}\chi(p)\sum_{\substack{(u_1,u_2)\in P^1(\mathbb{Z}/p\mathbb{Z}),\\ b(u_1,u_2)\equiv 0\ (\mathrm{mod}\ p)}} f\begin{pmatrix}2b(u_1,u_2)/p & b_2(U)\\ b_2(U) & 2pb(v_1,v_2)\end{pmatrix},$$
where for each pair of relatively prime integers $(u_1,u_2)$ we take $(v_1,v_2)$ to be an arbitrary
pair of integers for which $u_1v_2 - u_2v_1 = 1$, and where $U = \begin{pmatrix}u_1&u_2\\v_1&v_2\end{pmatrix}$.
The next proposition, which plays a central role in this discussion, interprets the
right side of (3.41) for positive definite matrices $B$ not divisible by $p$ in terms of
the composition of matrices of quadratic forms and modules of the corresponding
quadratic field.
Let $\mathfrak{O}'$ be an order of the algebraic number field $K$. For brevity, we shall use the
term regular ideal (of the ring $\mathfrak{O}'$) to refer to a full submodule of $K$ that is contained
in $\mathfrak{O}'$ and has the property that its ring of multipliers is $\mathfrak{O}'$.

PROPOSITION 3.12. Let $A = \begin{pmatrix}2a&b\\b&2c\end{pmatrix}$ be a positive definite even matrix with
relatively prime $a, b, c$, and let $D = b^2 - 4ac$ be its discriminant. Let $d$ denote the
discriminant of the imaginary quadratic field $K = \mathbb{Q}(\sqrt D)$, and let $\mathfrak{O}_l$ be the subring of
index $l = \sqrt{D/d}$ in the ring of integers of $K$. Then the following results hold for any
prime $p$, any natural number $m$ not divisible by $p$, and any function $f\in\mathfrak{F}_\varepsilon^2$:
(1) If the $p$-sign $\varepsilon_p(A)$ is 1, then the ring $\mathfrak{O}_l$ contains exactly two regular ideals of
norm $p$, say $\mathfrak{p}$ and $\mathfrak{p}'$; these ideals are conjugate, i.e., $\mathfrak{p}' = \bar{\mathfrak{p}}$; and one has the formula
$$(f|_{k,\chi}\Pi_1^2(p))(mA) = p^{k-2}\chi(p)\bigl(f(m(A\times\mathfrak{p})) + f(m(A\times\bar{\mathfrak{p}}))\bigr).$$

(2) If, on the other hand, $\varepsilon_p(A) = -1$, then $\mathfrak{O}_l$ contains no regular ideals of norm
$p$, and one has $(f|_{k,\chi}\Pi_1^2(p))(mA) = 0$.
(3) If $\varepsilon_p(A) = 0$ and $p\nmid l$, then there exists a unique regular ideal $\mathfrak{p}$ of norm $p$ in the
ring $\mathfrak{O}_l$, we have $\bar{\mathfrak{p}} = \mathfrak{p}$, and
$$(f|_{k,\chi}\Pi_1^2(p))(mA) = p^{k-2}\chi(p)f(m(A\times\mathfrak{p})).$$
(4) If $\varepsilon_p(A) = 0$ and $p\mid l$, then
$$(f|_{k,\chi}\Pi_1^2(p))(mA) = p^{k-2}\chi(p)f\bigl(mp(A\times\mathfrak{O}_{l/p})\bigr),$$
where $\mathfrak{O}_{l/p}$ is the subring of index $l/p$ in the ring of integers of $K$.

In all cases the composition of matrices and modules is understood in the sense of
Appendix 3.3.
PROOF. Let $q(x,y) = ax^2 + bxy + cy^2$ be the quadratic form with matrix $A$, and
let $V = V_p = (V_p, q\bmod p)$ be the two-dimensional quadratic space over $\mathbb{F}_p$ with
quadratic form $q$, regarded modulo $p$. By the definition of the $p$-sign of the matrix of
a quadratic form that is not divisible by $p$, we find that if $p\ne 2$, then
$$(3.42)\qquad \varepsilon_p(A) = \varepsilon_p(V) = \left(\frac{-\det A}{p}\right) = \left(\frac{D}{p}\right),$$
and so the equality $\varepsilon_p(A) = 1$ means that $p\nmid D$ and the congruence $x^2\equiv D =
dl^2\pmod p$ is solvable. Since $d\equiv 0,1\pmod 4$, this implies that the congruence
$x^2\equiv d\pmod{4p}$ is also solvable. The equality $\varepsilon_p(A) = -1$ means that $p\nmid D$ and the
congruence $x^2\equiv D\pmod p$ has no solution, in which case neither does the congruence
$x^2\equiv d\pmod{4p}$. The equality $\varepsilon_p(A) = 0$ is equivalent to the condition $p\mid D$, so that
either $p\mid l$, or else $p\nmid l$ but $p\mid d$, and in the latter case the congruence $x^2\equiv d\pmod{4p}$
is obviously solvable. Finally, if $p = 2$, then
$$(3.43)\qquad \varepsilon_2(A) = \begin{cases}1, & \text{if } \det A\equiv -1\pmod 8,\\ -1, & \text{if } \det A\equiv 3\pmod 8,\\ 0, & \text{if } \det A\equiv 0\pmod 2,\end{cases}$$
since the square of any odd integer is $\equiv 1\pmod 8$, so that in the first case $d =
-l^{-2}\det A\equiv 1\pmod 8$ and the congruence $x^2\equiv d\pmod 8$ is solvable. In the second
case $d = -l^{-2}\det A\equiv 5\pmod 8$ and the congruence $x^2\equiv d\pmod 8$ has no solutions;
and in the third case, if $l$ is odd, then $d$ is congruent to 0 or 4 modulo 8, and the
congruence $x^2\equiv d\pmod 8$ is solvable.
These observations and §2 of Appendix 3 give us the statements in the first three
parts of the proposition about the existence and properties of ideals of norm $p$ in $\mathfrak{O}_l$.
In particular, in these cases there are exactly $\varepsilon_p(A)+1$ such ideals.
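The case $D = -4$, $K = \mathbb{Q}(i)$, $\mathfrak{O}_l = \mathbb{Z}[i]$ makes the count of $\varepsilon_p(A)+1$ ideals concrete: an ideal of norm $p$ in the Gaussian integers corresponds to a representation $p = u^2 + v^2$ taken up to the four units (the standard identification in this class-number-one field; a sketch):

```python
def ideals_of_norm_p(p):
    """Number of ideals of norm p in Z[i]: representations p = u^2 + v^2,
    counted up to multiplication by the four units 1, -1, i, -i."""
    reps = [(u, v) for u in range(-p, p + 1) for v in range(-p, p + 1)
            if u * u + v * v == p]
    return len(reps) // 4

def eps(p):
    """p-sign of x^2 + y^2 (discriminant -4) at an odd prime p."""
    return 1 if pow(-4 % p, (p - 1) // 2, p) == 1 else -1

for p in [3, 5, 7, 11, 13, 17, 19, 23, 29]:
    assert ideals_of_norm_p(p) == eps(p) + 1   # split: 2 ideals, inert: none
assert ideals_of_norm_p(2) == 1                # p = 2 ramifies: one ideal
print("number of ideals of norm p equals eps_p + 1")
```

So split primes ($p\equiv 1\bmod 4$) carry two conjugate ideals, inert primes none, and the ramified prime 2 exactly one, matching the three cases of the proposition.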
On the other hand, the expression $\varepsilon_p(A)+1$ appears in the formula (3.33) for the
number of nonzero isotropic vectors in $V_p$, which can obviously be interpreted as the
number of distinct pairs (modulo $p$) of relatively prime integers $(u_1,u_2)$ satisfying the
congruence $q(u_1,u_2)\equiv 0\pmod p$. If we divide these pairs into classes of pairs that
are proportional to one another modulo $p$ in the sense of (3.38), and if we take into
account that each such class contains exactly $p-1$ pairs, we conclude that the number
of terms in the sum (3.41) for $B = mA$ with $m\not\equiv 0\pmod p$ is equal to
$$(3.44)\qquad |\{(u_1,u_2)\in P^1(\mathbb{Z}/p\mathbb{Z});\ q(u_1,u_2)\equiv 0\pmod p\}| = (p-1)^{-1}\,i(V_p,q\bmod p) = \varepsilon_p(A)+1.$$
Hence, in cases (1), (2), and (3) of the proposition it is natural to expect that the terms
in (3.41) are directly connected with the regular ideals in $\mathfrak{O}_l$ of norm $p$.
Suppose that $\varepsilon_p(A) = 1$. In this case, by (3.44), there exist exactly two pairs
of relatively prime integers $(u_1,u_2)$ and $(u_1',u_2')$ that are not proportional modulo $p$
and that satisfy the congruence $q(x,y)\equiv 0\pmod p$. Then $q(u_1,u_2) = pa_1$ and
$q(u_1',u_2') = pa_2$, where $a_1,a_2\in\mathbb{N}$. We choose integers $v_1,v_2,v_1',v_2'$ such that
$$U_1 = \begin{pmatrix}u_1&u_2\\v_1&v_2\end{pmatrix},\qquad U_2 = \begin{pmatrix}u_1'&u_2'\\v_1'&v_2'\end{pmatrix}\in SL_2(\mathbb{Z}),$$
and for $i = 1,2$ we set
$$A_i = A[{}^tU_i] = \begin{pmatrix}2pa_i&b_i\\b_i&2c_i\end{pmatrix},\qquad A_i' = \begin{pmatrix}2a_i&b_i\\b_i&2pc_i\end{pmatrix}.$$
In this notation, by (3.41) we can write
$$(3.45)\qquad (f|\Pi_1)(mA) = p^{k-2}\chi(p)\bigl(f(mA_1') + f(mA_2')\bigr).$$
We consider the following full modules of $K = \mathbb{Q}(\sqrt D)$:
$$M_1 = \Bigl\{p,\ \frac{b_1-\sqrt D}{2}\Bigr\}\quad\text{and}\quad M_2 = \Bigl\{p,\ \frac{b_2-\sqrt D}{2}\Bigr\}.$$
The number $\gamma_1 = (b_1-\sqrt D)/2p$ is obviously a root of the polynomial $pv^2 - b_1v + a_1c_1$.
Since $D = b^2 - 4ac = b_1^2 - 4pa_1c_1$ is not divisible by $p$, it follows that $b_1$ is not
divisible by $p$, and the coefficients of this polynomial are relatively prime. Then, by §2
of Appendix 3, the ring of multipliers $\mathfrak{O}_{M_1}$ of the module $M_1 = p\{1,\gamma_1\}$ is $\mathfrak{O}_l$, and
$N(M_1) = N(p)/p = p$. In addition, obviously $M_1\subset\mathfrak{O}_l$. Thus, the module $M_1$ is a
regular ideal of $\mathfrak{O}_l$ of norm $p$. By the same argument this is also true of the module
$M_2$. We now claim that $M_1\ne M_2$. It suffices to show that $M_1 + M_2 = \mathfrak{O}_l$; and
for this it is enough to see that the numbers $(b_2-b_1)/2$ and $p$, both of which lie in
$M_1 + M_2$, are relatively prime, since in that case 1 is an integer linear combination of
these numbers and hence lies in $M_1 + M_2$. We set
$$U_2U_1^{-1} = \begin{pmatrix}u_1'&u_2'\\v_1'&v_2'\end{pmatrix}\begin{pmatrix}v_2&-u_2\\-v_1&u_1\end{pmatrix} = T = \begin{pmatrix}t_1&t_2\\t_3&t_4\end{pmatrix}\in SL_2(\mathbb{Z}).$$
Since the pairs $(u_1,u_2)$ and $(u_1',u_2')$ are not proportional to one another modulo
$p$, it follows that $t_2 = u_1u_2' - u_2u_1'\not\equiv 0\pmod p$. If we use the obvious relation
$A_1[{}^tT] = A_2$ and equate coefficients, we find that $pa_1t_1^2 + b_1t_1t_2 + c_1t_2^2 = pa_2$ and
$2pa_1t_1t_3 + b_1(t_1t_4 + t_2t_3) + 2c_1t_2t_4 = b_2$. Since $t_2\not\equiv 0\pmod p$, the first congruence
implies that $b_1t_1 + c_1t_2\equiv 0\pmod p$. Since $t_1t_4 = 1 + t_2t_3$, the second congruence
implies that $(b_2-b_1)/2\equiv t_2(b_1t_3 + c_1t_4)\pmod p$. Thus, if $(b_1-b_2)/2$ were divisible
by $p$, we would have the system of congruences $b_1t_1 + c_1t_2\equiv b_1t_3 + c_1t_4\equiv 0\pmod p$,
from which it would follow that $b_1\equiv c_1\equiv 0\pmod p$. But this is impossible, since
the discriminant $D = b_1^2 - 4pa_1c_1$ is not divisible by $p$. This proves the claim, and
implies that $M_1 = \mathfrak{p}$ and $M_2 = \bar{\mathfrak{p}}$ are the only regular ideals of $\mathfrak{O}_l$ of norm $p$. Let

q1, qi, q(, and q~ be the quadratic forms with matrices A,, Ai, Al, and A2, respectively,
and let M(q 1), M(qi), M(qD, and M(qD be the modules corresponding to these
quadratic forms (see §3 of Appendix 3). We set <5 1 = (b 1 - ./D)/2. Since obviously
o[ = b1o1 - pa 1ci. and since b1 is not divisible by p (see above), it follows that
M(qDp = {ai,01}{p,01} = {a1p,a10i,p01,M1 - pa1c1} = {pa1,01} = M(q1),
from which, using §2 of Appendix 3, we obtain
M(qD = p-I M(qDJ:>P = p- 1M(q,)p.
Since the quadratic form q1 is properly equivalent to q, it follows that the last module
is similar to the module M(q)p, and so the matrix Al of the quadratic form q( is
properly equivalent to the matrix Ax p; hence, f (mAI) = f (m(A x p)). Similarly, A2
is properly equivalent to A x p, and f (mA2) = f (m (A x p)). These relations, together
with (3.45), prove the first part of the proposition.
The second part follows from the above arguments and from (3.41) and (3.44).
Now suppose that $\varepsilon_p(A) = 0$. Then, according to (3.44), all of the pairs of
relatively prime integers satisfying $q(x,y)\equiv 0\pmod p$ are proportional to one such
pair, say $(u_1,u_2)$. Let $q(u_1,u_2) = pa_1$. We choose $v_1,v_2\in\mathbb{Z}$ so that
$$U = \begin{pmatrix}u_1&u_2\\v_1&v_2\end{pmatrix}\in SL_2(\mathbb{Z}),$$
and we set
$$A_1 = A[{}^tU] = \begin{pmatrix}2pa_1&b_1\\b_1&2c_1\end{pmatrix}\quad\text{and}\quad A_1' = \begin{pmatrix}2a_1&b_1\\b_1&2pc_1\end{pmatrix}.$$
Then, by (3.41), we have
$$(3.46)\qquad (f|\Pi_1)(mA) = p^{k-2}\chi(p)f(mA_1').$$
We have already observed that in the case under consideration $D = b^2 - 4ac =
b_1^2 - 4pa_1c_1$ is divisible by $p$; in particular, this implies that
$$(3.47)\qquad b_1\equiv 0\pmod p,\qquad D\equiv -4pa_1c_1\pmod{p^2}.$$
We consider two subcases. First suppose that $p\nmid l$. Then $p\mid d$. We show that $a_1c_1$ is
not divisible by $p$. If $p\ne 2$, then this follows immediately from the second congruence
in (3.47), since $d$, and hence $D$, are not divisible by $p^2$. If $p = 2$, then $d = 4d_0$, where
$d_0\equiv 2,3\pmod 4$. If we set $b_1 = 2b_0$, then $D = 4d_0l^2 = 4(b_0^2 - 2a_1c_1)$, and hence
$$2a_1c_1 = b_0^2 - d_0l^2\equiv b_0^2 - d_0\pmod 4.$$
The last expression cannot be divisible by 4, since $b_0^2\equiv 0,1\pmod 4$ and $d_0\equiv
2,3\pmod 4$; hence $a_1c_1$ is odd. Since the quadratic form $q_1$ with matrix $A_1$ is primitive
(because $q$ is primitive), and since $a_1$ is not divisible by $p$, it follows that the quadratic
form $q_1'$ with matrix $A_1'$ is also primitive, and so it is associated with a module $M(q_1')$ of
$K$. We consider the module $M = \{p, (b_1-\sqrt D)/2\}$. Since the number $\gamma_1 = (b_1-\sqrt D)/2p$ is
a root of the polynomial $pv^2 - b_1v + a_1c_1$, and since $a_1c_1$ is not divisible by $p$, it follows
from §2 of Appendix 3 that the ring of multipliers of the module $M = p\{1,\gamma_1\}$ is $\mathfrak{O}_l$,
and $N(M) = N(p)/p = p$. Since $M\subset\mathfrak{O}_l$, we see that $M = \mathfrak{p}$ is the unique regular

ideal of Di of norm p. We setc51 = P'YI =(bi - Jli)/2. Since of= b1c51 - pa 1c 1 and
a 1 is not divisible by p, it follows that
M(qDp = {ai,c51}{p,c51} = {a1p,a1c5i,pc5i,b1c51 - pa1c1}
= {a1p,c51} = M(q1),
where q1 is the quadratic form with matrix A 1. From this, if we take into account that
j) = p and use §2 of Appendix 3, we obtain
M(qD = p- 1M(qDPP' = p- 1M(qi)P' = p- 1M(qi)p.
Since q1 is properly equivalent to q, it follows that the last module is similar to the
module M(q)p, and so the matrix A~ is properly equivalent to the matrix Ax p. This
means that f(mAD = f(m(A x p)), and this, together with (3.46), completes the
proofof part (3).
Finally, suppose that $p\mid l$. Since $b_1$ is divisible by $p$, and the quadratic form $q_1$
with matrix $A_1$ is primitive (because $q$ is primitive), it follows that $c_1$ is not divisible
by $p$. We show that $p$ divides $a_1$. If $p\ne 2$, then this follows immediately from the
second congruence in (3.47). If $p = 2$, then the congruence $d\equiv 0,1\pmod 4$ implies
that $D = dl^2 = b_1^2 - 8a_1c_1\equiv 0,4\pmod{16}$. Then $(b_1/2)^2 - 2a_1c_1\equiv 0,1\pmod 4$,
and so $2a_1c_1\equiv 0\pmod 4$ and $a_1\equiv 0\pmod 2$. We set $A_1' = pA_2$. From the above
observations we see that $A_2$ is an integer matrix and is even. The primitivity of the
quadratic form $q_1$ implies that the quadratic form $q_2$ with matrix $A_2$ is primitive. The
number $\gamma_1 = (b_1-\sqrt D)/2p$ is a root of the polynomial $v^2 - (b_1/p)v + a_1c_1/p$, which has
rational integer coefficients. Hence, from §2 of Appendix 3 it follows that the module
$\{1,\gamma_1\}$ is an order of the field $K$, having discriminant $(b_1/p)^2 - 4a_1c_1/p = D/p^2 = d(l/p)^2$.
Hence, $\{1,\gamma_1\} = \mathfrak{O}_{l/p}$. We now obtain
$$M(q_1)\mathfrak{O}_{l/p} = \{pa_1,p\gamma_1\}\{1,\gamma_1\} = \{pa_1,\ p\gamma_1,\ pa_1\gamma_1,\ b_1\gamma_1 - a_1c_1\} = \{a_1c_1,\ pa_1,\ p\gamma_1\} = \{a_1,\ p\gamma_1\} = pM(q_2).$$
As before, the module $M(q_1)$ is similar to the module $M(q)$, and so $M(q_2)$ is similar
to $M(q)\cdot\mathfrak{O}_{l/p}$, and the matrix $A_2$ is properly equivalent to the matrix $A\times\mathfrak{O}_{l/p}$; hence,
$$f(mA_1') = f(pmA_2) = f\bigl(pm(A\times\mathfrak{O}_{l/p})\bigr).\qquad\square$$

We now return to the identities in Lemma 3.11, and apply the above formulas to
compute the numerators of the rational functions on the right in these identities. If we
set $B = mA$, where $m\in\mathbb{N}$ is not divisible by $p$ and $A = \begin{pmatrix}2a&b\\b&2c\end{pmatrix}$ is a matrix in $A_2^+$
with relatively prime $a, b, c$, and if we use (3.36) and (3.28), then we obtain
$$(3.48)\qquad Q_p(v;F)d_p(v,mA) = \bigl(f|\bigl(1 - \Pi_1v + p(\Pi_{2,0}^{(1)}+\Pi_{2,0}^{(0)})v^2 - \Pi_-v + \Pi_-\Pi_1v^2 - p\Pi_-(\Pi_{2,0}^{(1)}+\Pi_{2,0}^{(0)})v^3\bigr)\bigr)(mA)$$
$$= f(mA) - (f|\Pi_1)(mA)v + p^{2k-4}\chi(p)^2\varepsilon_p(A)f(mA)v^2 + (f|\Pi_-\Pi_1)(mA)v^2,$$
because $mA\notin pA_2$, and obviously $\varepsilon_p(mA) = \varepsilon_p(A)$. The next transformations of the
numerators of the series $d_p(v,mA)$ are based on the formulas for $\Pi_1$ in Proposition
3.12; however, rather than apply the formulas to the individual $d_p(v,mA)$, it is more
convenient to apply them to suitable linear combinations of these series.

We fix an integer D < 0, and let A 1, ••• , Ah(D) be a set of representatives of the
proper equivalence classes (see §3 of Appendix 3) of matrices A = ( 2: {c) EA!
with g. c. d.(a, b, c) = 1 and b2 - 4ac = D. These equivalence classes form a finite
abelian group H(D) under the composition in §3 of Appendix 3. We fix an arbitrary
e
character of the group H(D). Given a prime p, a natural number m, and f E i~.
we define the formal power series
h(D) oo
(3.49) dp(v,m,e,D) = L:e(A;)dp(v,mA;) = 2:f(p.sm,e)v5,
i=I J=O

where
h(D)
f(1,e) =I: e(A;)f(1A;).
i=I
Before we sum the series (3.49), notice that, by (3.42) and (3.43), the p-signep(A;)
does not depend on i, but rather depends only on D. Hence, we set

(3.50) Ep(D) = Ep(A;) = ( ~), if p :;if 2,

and
if D =l(mod8),
(3.51) if D = 5(mod 8),
if D = O(mod2).
LEMMA 3.13. Suppose that the values of the function $f\in\mathfrak{F}_\varepsilon^2$ are the Fourier coefficients of an eigenfunction $F\in\mathfrak{M}_\varepsilon^2$ of all of the Hecke operators $|T = |_{k,\chi}T$ for $T\in L_p^2$,
where $p$ is a fixed prime. Let $D$ be a negative integer, $\xi$ be a character of the group $H(D)$,
and $m$ be a natural number prime to $p$. Then the power series $d_p(v) = d_p(v,m,\xi;D)$ is
formally equal to a rational function
$$d_p(v) = Q_p(v;F)^{-1}P_p(v),$$
whose denominator is $Q_p(v;F)$ and whose numerator $P_p(v)$ is a polynomial in $v$ of degree
at most 2. The numerator is given by the following formulas (where, as before, $\mathfrak{O}_l$ denotes
the subring of index $l = \sqrt{D/d}$ in the ring of integers of $K = \mathbb{Q}(\sqrt D)$, and $d$ is the
discriminant of the field $K$):
(1) If $\varepsilon_p(D) = 1$, then
$$P_p(v) = \bigl(1 - (N\mathfrak{p})^{k-2}\chi(N\mathfrak{p})\xi(\mathfrak{p})v\bigr)\bigl(1 - (N\bar{\mathfrak{p}})^{k-2}\chi(N\bar{\mathfrak{p}})\xi(\bar{\mathfrak{p}})v\bigr)f(m,\xi),$$
where $\mathfrak{p}$ and $\bar{\mathfrak{p}}$ are the unique regular ideals of $\mathfrak{O}_l$ of norm $p$.
(2) If $\varepsilon_p(D) = -1$, then
$$P_p(v) = \bigl(1 - (N\mathfrak{p})^{k-2}\chi(N\mathfrak{p})\xi(\mathfrak{p})v^2\bigr)f(m,\xi),$$
where $\mathfrak{p} = p\mathfrak{O}_l$ is the unique regular ideal of $\mathfrak{O}_l$ of norm $p^2$.
(3) If $\varepsilon_p(D) = 0$ and $p\nmid l$, then
$$P_p(v) = \bigl(1 - (N\mathfrak{p})^{k-2}\chi(N\mathfrak{p})\xi(\mathfrak{p})v\bigr)f(m,\xi),$$
where $\mathfrak{p} = \bar{\mathfrak{p}}$ is the unique regular ideal of $\mathfrak{O}_l$ of norm $p$.

(4) If $\varepsilon_p(D) = 0$ and $p\mid l$, then
$$P_p(v) = \bigl(f|(1-\Pi_-v)(1-\Pi_1v)\bigr)(m,\xi) = \sum_{i=1}^{h(D)}\xi(A_i)\bigl(f|(1-\Pi_-v)(1-\Pi_1v)\bigr)(mA_i).$$

PROOF. First suppose that $p$ does not divide $l$. Then from Proposition 3.12 and
the formula (3.28) it follows that $(f|\Pi_-\Pi_1)(mA_i) = 0$ for any $i = 1,\dots,h(D)$. Then
from (3.48) and (3.50)-(3.51) we obtain
$$P_p(v) = Q_p(v;F)d_p(v) = f(m,\xi) - (f|\Pi_1)(m,\xi)v + p^{2k-4}\chi(p^2)\varepsilon_p(D)f(m,\xi)v^2.$$
If $\varepsilon_p(D) = 1$, then, using the first part of Proposition 3.12, we have
$$(f|\Pi_1)(m,\xi) = \sum_{i=1}^{h(D)}\xi(A_i)(f|\Pi_1)(mA_i) = p^{k-2}\chi(p)\sum_i\xi(A_i)\bigl(f(m(A_i\times\mathfrak{p})) + f(m(A_i\times\bar{\mathfrak{p}}))\bigr)$$
$$= p^{k-2}\chi(p)\Bigl\{\xi(\bar{\mathfrak{p}})\sum_i\xi(A_i\times\mathfrak{p})f(m(A_i\times\mathfrak{p})) + \xi(\mathfrak{p})\sum_i\xi(A_i\times\bar{\mathfrak{p}})f(m(A_i\times\bar{\mathfrak{p}}))\Bigr\}$$
$$= p^{k-2}\chi(p)\bigl(\xi(\mathfrak{p}) + \xi(\bar{\mathfrak{p}})\bigr)f(m,\xi),$$
since $\xi(\mathfrak{p})\xi(\bar{\mathfrak{p}}) = \xi(\mathfrak{p}\bar{\mathfrak{p}}) = \xi(p\mathfrak{O}_l) = 1$, and the matrices $A_i\times\mathfrak{p}$ and $A_i\times\bar{\mathfrak{p}}$ each run
through a complete set of representatives of the equivalence classes in $H(D)$ whenever
the matrices $A_i$ do. This implies that in the case under consideration $P_p(v)$ is equal to
$$\bigl(1 - p^{k-2}\chi(p)(\xi(\mathfrak{p})+\xi(\bar{\mathfrak{p}}))v + p^{2k-4}\chi(p^2)v^2\bigr)f(m,\xi) = \bigl(1 - p^{k-2}\chi(p)\xi(\mathfrak{p})v\bigr)\bigl(1 - p^{k-2}\chi(p)\xi(\bar{\mathfrak{p}})v\bigr)f(m,\xi).$$
Now let $\varepsilon_p(D) = -1$. Then, using the second part of Proposition 3.12, we have
$$(f|\Pi_1)(m,\xi) = \sum_i\xi(A_i)(f|\Pi_1)(mA_i) = 0,$$
and so
$$P_p(v) = \bigl(1 - p^{2k-4}\chi(p^2)v^2\bigr)f(m,\xi).$$
Finally, if $\varepsilon_p(D) = 0$ (and $p\nmid l$), then, by Proposition 3.12(3), we have
$$(f|\Pi_1)(m,\xi) = p^{k-2}\chi(p)\sum_{i=1}^{h(D)}\xi(A_i)f(m(A_i\times\mathfrak{p})) = p^{k-2}\chi(p)\xi(\mathfrak{p})\sum_i\xi(A_i\times\mathfrak{p})f(m(A_i\times\mathfrak{p})) = p^{k-2}\chi(p)\xi(\mathfrak{p})f(m,\xi),$$
since $\bar{\mathfrak{p}} = \mathfrak{p}$; thus,
$$P_p(v) = \bigl(1 - p^{k-2}\chi(p)\xi(\mathfrak{p})v\bigr)f(m,\xi).$$

Now suppose that $\varepsilon_p(D) = 0$ and $p\mid l$. Since $(f|\Pi_-)(mA_i) = 0$, the formula
(3.48) for $A = A_i$ can be rewritten in the form
$$Q_p(v;F)d_p(v,mA_i) = f(mA_i) - (f|\Pi_1)(mA_i)v + (f|\Pi_-\Pi_1)(mA_i)v^2 = \bigl(f|(1-\Pi_-v)(1-\Pi_1v)\bigr)(mA_i).$$
If we multiply this relation by $\xi(A_i)$ and sum over $i$, we obtain the last formula in the
lemma. $\square$

We are near the end of our examination of the multiplicative properties of the
Fourier coefficients of modular forms of degree 2. It remains for us to bring together
the local information into a single global picture.
THEOREM 3.14. Let
$$F(Z) = \sum_{A\in A_2} f(A)e\{AZ\} \in \mathfrak{M}_k^2(q,\chi)$$
be a nonzero modular form of integer weight $k$ and one-dimensional character $[\chi]$ for
the group $\Gamma_0^2(q)$. Suppose that $F$ is an eigenfunction for all of the Hecke operators
$|T = |_{k,\chi}T$ for $T\in L^2(q)$. Further suppose that $D$ is an arbitrary negative integer,
$l\in\mathbb{N}$ is determined from the condition $D = dl^2$, where $d$ is the discriminant of the field
$K = \mathbb{Q}(\sqrt D)$, and $\xi$ is an arbitrary character of the class group $H(D)$, regarded as an
abstract group.
Then the following formal identity holds for any natural number $a$ such that $a\mid q^{\infty}$
and for any completely multiplicative function $\gamma\colon\mathbf{N}(q)\to\mathbb{C}$:
$$(3.52)\qquad \sum_{i=1}^{h(D)}\xi(A_i)\sum_{m\in\mathbf{N}(q)}\frac{f(maA_i)\gamma(m)}{m^s} = \rho(s)\Bigl\{\prod_{\mathfrak{p},\,N\mathfrak{p}\nmid(ql)^2}\Bigl(1 - \frac{\chi(N\mathfrak{p})\gamma(N\mathfrak{p})\xi(\mathfrak{p})}{(N\mathfrak{p})^{s-k+2}}\Bigr)\Bigr\}\zeta(s,\gamma;F),$$
where $A_1,\dots,A_{h(D)}$ is an arbitrary set of representatives of the elements of $H(D)$,
regarded as proper equivalence classes of the matrices of positive definite primitive integral
binary quadratic forms of discriminant $D$; where $\rho(s) = \rho(s,a,D,\xi,\gamma;F)$ is a finite sum
given by one of the expressions
$$(3.53)\qquad \rho(s) = \sum_{i=1}^{h(D)}\xi(A_i)\Bigl(f\Bigl|\prod_{\substack{p\in\mathbf{P}(q)\\ p\mid l}}\Bigl(1 - \frac{\Pi_-(p)\gamma(p)}{p^s}\Bigr)\Bigl(1 - \frac{\Pi_1(p)\gamma(p)}{p^s}\Bigr)\Bigr)(aA_i)$$
$$= \sum_{i=1}^{h(D)}\xi(A_i)\sum_{\substack{\delta,\delta_1\in\mathbf{N}(q)\\ \delta,\delta_1\mid l}}\frac{\chi(\delta^2\delta_1)\gamma(\delta\delta_1)\mu(\delta)\mu(\delta_1)}{\delta^{s-2k+3}\delta_1^{s-k+2}}\,f\bigl(a\delta_1(A_i\times\mathfrak{O}_{l/\delta_1})\bigr),$$
in which $\mu$ is the Möbius function, $\mathfrak{O}_t$ denotes the subring of index $t$ in the ring of integers
of $K$, and the composition $\times$ is understood in the sense of §3 of Appendix 3; where $\mathfrak{p}$ in
the product in (3.52) runs through all regular prime ideals of $\mathfrak{O}_l$ whose norm $N\mathfrak{p}$ is prime
to $ql$; and where $\zeta(s,\gamma;F)$ is the Euler product (3.23) corresponding to the eigenfunction
$F$.
Further suppose that $k\ge 0$, $f(A)\ne 0$ for some nondegenerate matrix $A\in A_2^+$, and
$|\gamma(m)|\le cm^{\sigma}$ for all $m\in\mathbf{N}(q)$, where $c$ and $\sigma$ are real numbers that do not depend on
$m$. Then the Dirichlet series on the left in (3.52) and the infinite products on the right
in this identity converge absolutely and uniformly in any right half-plane of the complex
variable $s$ of the form $\operatorname{Re}s > 2\varkappa k + \sigma + 1 + \varepsilon$ with $\varepsilon > 0$, where $\varkappa = 1$ in the general
case and $\varkappa = 1/2$ if $F$ is a cusp-form; and the resulting holomorphic functions on the
indicated half-planes are connected by the identity (3.52).

PROOF. Let $p_1,\dots,p_b$ be all of the distinct prime divisors of $l$ that are prime to
$q$, and let $p_{b+1}, p_{b+2},\dots$ be the sequence of all primes in $\mathbf{P}(ql)$ arranged in increasing
order. We first prove by induction on $c$ that for all $c\in\mathbb{N}$ one has the formal identity
$$(3.54)\qquad \sum_{m\in\mathbf{N}(q)}\frac{f(am,\xi)\gamma(m)}{m^s} = \Bigl\{\sum_{m\in\mathbf{N}(q\cdot q(c))}\frac{f(am,\xi)\gamma(m)}{m^s}\Bigr\}\Bigl\{\prod_{\mathfrak{p},\,N\mathfrak{p}\mid q(c)^2}\Bigl(1 - \frac{\chi(N\mathfrak{p})\gamma(N\mathfrak{p})\xi(\mathfrak{p})}{(N\mathfrak{p})^{s-k+2}}\Bigr)\Bigr\}\prod_{p\mid q(c)}Q_p\Bigl(\frac{\gamma(p)}{p^s};F\Bigr)^{-1},$$
where
$$q(c) = p_{b+1}\cdots p_{b+c},\qquad f(am,\xi) = \sum_i\xi(A_i)f(amA_i),$$
and $\mathfrak{p}$ in the middle term on the right runs through all regular prime ideals of $\mathfrak{O}_l$ whose
norm satisfies the condition given. If $c = 1$, then, setting $p_{b+1} = p$ and $\gamma(p)p^{-s} = v$
and using the identity in Lemma 3.13 that corresponds to the value of $\varepsilon_p(D)$, we obtain
$$(3.55)\qquad Q_p(\gamma(p)p^{-s};F)\sum_{m\in\mathbf{N}(q)}\frac{f(am,\xi)\gamma(m)}{m^s} = \Bigl\{\sum_{m\in\mathbf{N}(qp)}\frac{f(am,\xi)\gamma(m)}{m^s}\Bigr\}\prod_{\mathfrak{p},\,N\mathfrak{p}\mid p^2}\Bigl(1 - \frac{\chi(N\mathfrak{p})\gamma(N\mathfrak{p})\xi(\mathfrak{p})}{(N\mathfrak{p})^{s-k+2}}\Bigr),$$
where, according to §2 of Appendix 3 and Lemma 3.13, $\mathfrak{p}$ runs through all regular
prime ideals of $\mathfrak{O}_l$ of norm $p$ or $p^2$; this proves (3.54) in the case under consideration.
Suppose that (3.54) has been proved for some value of $c$. If we set $p_{b+c+1} = p$ and use

the identity (3.55) with $q\cdot q(c)$ in place of $q$, we obtain
$$Q_p(\gamma(p)p^{-s};F)\sum_{m\in\mathbf{N}(q)}\frac{f(am,\xi)\gamma(m)}{m^s} = Q_p(\gamma(p)p^{-s};F)\Bigl\{\sum_{m\in\mathbf{N}(q\cdot q(c))}\frac{f(am,\xi)\gamma(m)}{m^s}\Bigr\}\Bigl\{\prod_{\mathfrak{p},\,N\mathfrak{p}\mid q(c)^2}\Bigl(1 - \frac{\chi(N\mathfrak{p})\gamma(N\mathfrak{p})\xi(\mathfrak{p})}{(N\mathfrak{p})^{s-k+2}}\Bigr)\Bigr\}\prod_{p'\mid q(c)}Q_{p'}(\gamma(p')p'^{-s};F)^{-1}$$
$$= \Bigl\{\sum_{m\in\mathbf{N}(q\cdot q(c+1))}\frac{f(am,\xi)\gamma(m)}{m^s}\Bigr\}\Bigl\{\prod_{\mathfrak{p},\,N\mathfrak{p}\mid q(c+1)^2}\Bigl(1 - \frac{\chi(N\mathfrak{p})\gamma(N\mathfrak{p})\xi(\mathfrak{p})}{(N\mathfrak{p})^{s-k+2}}\Bigr)\Bigr\}\prod_{p'\mid q(c)}Q_{p'}(\gamma(p')p'^{-s};F)^{-1},$$
which implies that (3.54) holds for $c+1$.


If we now take the limit as c --+ oo in (3.54) coefficient by coefficient, we obtain
the identity

L f (am~c!}y(m) ={ L J(am~~)y(m)}
mEN(q) ml(Pl···Pb) 00

x{ II (1 - x(~;;)~~fl;(p))} II Qp(y(p)p-s;F)- 1.
p,Np J (ql)2 pEP(ql)

Thus, to prove (3.52) with the factor $\rho$ given by the first formula in (3.53), it remains to verify that

(3.56)
$$\Big\{\prod_{p\mid p_1\cdots p_r} Q_p(\gamma(p)p^{-s}; F)\Big\} \Big\{\sum_{m\mid (p_1\cdots p_r)^\infty} \frac{f(am,\xi)\gamma(m)}{m^s}\Big\} = \rho_r(s),$$

where $\rho_r(s)$ is given by the first formula in (3.53) and $p_1, \dots, p_r$ are the distinct primes in $P(q)$ that divide $l$. We use induction on $r$. If $r \leqslant 1$, then (3.56) holds by Lemma 3.13(4). Now suppose that (3.56) has been proved for $r$ primes in $P(q)$ that divide $l$, and let $p_{r+1} = p$ be another such prime. Using the induction assumption, we obtain

(3.57)
$$\Big\{\prod_{p'\mid p_1\cdots p_r} Q_{p'}\Big(\frac{\gamma(p')}{p'^s}; F\Big)\Big\} Q_p\Big(\frac{\gamma(p)}{p^s}; F\Big) \sum_{m\mid(p_1\cdots p_r p)^\infty} \frac{f(am,\xi)\gamma(m)}{m^s} = Q_p\Big(\frac{\gamma(p)}{p^s}; F\Big) \sum_{j=0}^\infty \Big\{\prod_{p'\mid p_1\cdots p_r} Q_{p'}\Big(\frac{\gamma(p')}{p'^s}; F\Big) \sum_{m\mid(p_1\cdots p_r)^\infty} \frac{f(amp^j,\xi)\gamma(m)}{m^s}\Big\}\Big(\frac{\gamma(p)}{p^s}\Big)^j,$$
§3. MULTIPLICATIVE PROPERTIES OF THE FOURIER COEFFICIENTS 293

where
$$f'(ap^j,\xi) = \sum_{i=1}^{h(D)} \xi(A_i)\, f'(ap^j A_i)$$
and
$$f'(A) = \Big(f \,\Big|\, \prod_{p'\mid p_1\cdots p_r} \Big(1 - \Pi_0^2(p')\frac{\gamma(p')}{p'^s}\Big)\Big(1 - \Pi_1^2(p')\frac{\gamma(p')}{p'^s}\Big)\Big)(A).$$
PIPl•••Pr
It is clear that for any fixed value of $s$ the function $f'$ lies in the same space as $f$. By Proposition 5.12 of Chapter 3, in the Hecke ring $L^2(q)$ every element of $L^2_p$ commutes with any of the elements $\Pi_0^2(p_j) \in C^2_{-,p_j}$ $(j = 1, \dots, r)$. Since $L^2$ is a commutative ring, the elements of $L^2_p$ commute with all elements of $L^2_{p_j}$, and in particular with $T^2(p_j) = \Pi_0^2(p_j) + \Pi_1^2(p_j) + \Pi_2^2(p_j)$ (see Proposition 5.14 of Chapter 3); from this, if we again use Proposition 5.12 of Chapter 3, we conclude that every element of $L^2_p$ commutes with any of the elements $\Pi_1^2(p_j) = T^2(p_j) - \Pi_0^2(p_j) - \Pi_2^2(p_j)$. From these observations it follows that the function $f'$, along with $f$, is an eigenfunction for all of the Hecke operators corresponding to elements of $L^2_p$, and it has the same eigenvalues as $f$. Thus, we can compute the expression in (3.57) using Lemma 3.13(4), according to which it is equal to $\rho_{r+1}(s)$, and this proves (3.56) for $r + 1$ primes.


The above argument proves the formal identity (3.52) with factor $\rho(s) = \rho_b(s)$ given by the first formula in (3.53). To prove the second formula for $\rho(s)$ we note that, by Proposition 5.12 of Chapter 3, the elements $\Pi_0^2(p_i)$ and $\Pi_1^2(p_j) = T^2(p_j) - \Pi_0^2(p_j) - \Pi_2^2(p_j)$ (see (5.49) of Chapter 3) commute with one another if $p_i \neq p_j$. Hence $\rho(s)$ can be expanded in terms of the operators $\Pi_0(\delta)$ and $\Pi_1(\delta_1)$, where for squarefree $\delta$ and $\delta_1$ we set $\Pi_0(\delta) = \prod_{p\mid\delta}\Pi_0^2(p)$ and $\Pi_1(\delta_1) = \prod_{p\mid\delta_1}\Pi_1^2(p)$ (it follows from (5.13), (5.49), and Proposition 5.12 of Chapter 3 that the order of the primes makes no difference). Using Proposition 3.12(4) and induction on the number

of prime divisors of the squarefree number $\delta_1$ dividing $l$, we easily derive the formulas (see §2 of Appendix 3)
$$(f|_{k,\chi}\Pi_1(\delta_1))(aA_i) = \delta_1^{k-2}\chi(\delta_1)\, f\big(a\delta_1(A_i\times\mathfrak{O}_{1/\delta_1})\big),$$
and from this and (3.28) we obtain the following relations for $i = 1, \dots, h(D)$:
$$(f|_{k,\chi}\Pi_0(\delta)\Pi_1(\delta_1))(aA_i) = (f|\Pi_0(\delta)\Pi_1(\delta_1))(aA_i) = \begin{cases}\delta^{2k-3}\delta_1^{k-2}\chi(\delta^2\delta_1)\, f\big(a\delta\delta_1(A_i\times\mathfrak{O}_{1/\delta_1})\big),\\[2pt] 0.\end{cases}$$
The second expression for $\rho(s)$ follows from the first expression and the above formulas. The identity (3.52) is proved. $\square$

It remains to examine the convergence of the series and products in (3.52). According to (3.35) of Chapter 2 ((3.70) of Chapter 2 if $F$ is a cusp-form), we have the inequalities $|f(mA_i)| \leqslant \gamma_F |D|^{k\rho} m^{2k\rho}$ $(m \in N)$, where $\gamma_F$ depends only on $F$. This implies that the Dirichlet series on the left in (3.52) converges absolutely and uniformly in any of the half-planes indicated in the theorem. The infinite product over $\mathfrak{p}$ on the right of (3.52) converges absolutely and uniformly in any right half-plane of the variable $s$ of the form $\operatorname{Re} s \geqslant k + \sigma - 1 + \varepsilon$ with $\varepsilon > 0$, since the following estimate is a consequence of the description in Appendix 3.2 of the regular prime ideals of $\mathfrak{O}_1$ whose norm does not divide $l$:
$$\sum_{\mathfrak{p},\; N\mathfrak{p}\nmid(ql)^2}\Big|\frac{\chi(N\mathfrak{p})\gamma(N\mathfrak{p})\xi(\mathfrak{p})}{(N\mathfrak{p})^{s-k+2}}\Big| \leqslant \sum_{\mathfrak{p},\; N\mathfrak{p}=p}\frac{p^{\sigma}}{p^{\operatorname{Re}s-k+2}} + \sum_{\mathfrak{p},\; N\mathfrak{p}=p^2}\frac{p^{2\sigma}}{p^{2(\operatorname{Re}s-k+2)}}$$
$$\leqslant 2\sum_{p\in P}\frac{1}{p^{\operatorname{Re}s-(k+\sigma-1)+1}} + \sum_{p\in P}\frac{1}{p^{2(\operatorname{Re}s-(k+\sigma-1))+2}}$$
(the norm of any prime ideal of $\mathfrak{O}_1$ is obviously either $p$ or $p^2$, where $p \in P$). In
addition, in any of the indicated half-planes the modulus of the product is clearly
bounded from below by a positive constant. From these observations and from the
formal identity

(3.58)
$$\rho(s)\zeta(s,\gamma;F) = \Big\{\prod_{\mathfrak{p},\; N\mathfrak{p}\nmid(ql)^2}\Big(1-\frac{\chi(N\mathfrak{p})\gamma(N\mathfrak{p})\xi(\mathfrak{p})}{(N\mathfrak{p})^{s-k+2}}\Big)\Big\}^{-1}\sum_{m\in N(q)}\frac{f(am,\xi)\gamma(m)}{m^s}
$$

it follows that the product $\rho(s)\zeta(s,\gamma;F)$, regarded as a Dirichlet series, converges absolutely and uniformly in any of the half-planes indicated in the theorem. From this we cannot immediately conclude anything about the convergence of the Dirichlet series for $\zeta(s,\gamma;F)$, since the factor $\rho(s)$ could turn out to be identically zero. However, since the zeta-function $\zeta(s,\gamma;F)$ does not depend on $D$, $\xi$, or $a$, we might try to choose these parameters in such a way that $\rho(s) = \rho(s,a,D,\xi,\gamma,F) \neq 0$. From the conditions in the theorem it obviously follows that there exist an integer $D < 0$, a character $\xi$ of the group $H(D)$, $m \in N(q)$, and $a \in N$ with $a\mid q^\infty$ such that (in the notation of the theorem) we have
(3.59)
$$f(am,\xi) = \sum_{i=1}^{h(D)}\xi(A_i)\, f(amA_i) \neq 0.$$

Let $D_0$ denote the smallest such $D$ in absolute value, and let $\xi_0$ and $a_0$ denote the corresponding $\xi$ and $a$. Using the second formula in (3.53) and the minimality of $D_0$, we find that
$$\rho_0 = \rho(s,a_0,D_0,\xi_0,\gamma,F) = \sum_{i=1}^{h(D_0)}\xi_0(A_i)\, f(a_0A_i) = f(a_0,\xi_0),$$
since, according to (3.6) and §2 of Appendix 3, the quadratic form with matrix $A_i\times\mathfrak{O}_{1/\delta_1}$ has discriminant equal to $D_0/\delta_1^2$. If we write out the identity (3.52) for $D = D_0$, $\xi = \xi_0$, $a = a_0$, and $\gamma = 1$, and if we take into account that, by (3.59), the series on the left is not formally equal to zero, then we conclude that $f(a_0,\xi_0) = \rho_0 \neq 0$. Thus, the factor $\rho(s)$ in the identity (3.52) for $D = D_0$, $\xi = \xi_0$, and $a = a_0$ is equal to a nonzero constant. Hence, the Dirichlet series for $\zeta(s,\gamma;F)$, along with the corresponding Euler product, converges absolutely and uniformly in the same regions as the Dirichlet series on the right. $\square$

The identities in (3.52) are analogous to those in (3.14). We call $\zeta(s,\gamma;F)$ the zeta-function with character $\gamma$ that is associated to the eigenfunction $F$.
PROBLEM 3.15. Let $F \in \mathfrak{M}^n_k$, and let $f(A)$ for $A \in A_n$ be the Fourier coefficients of $F$. Suppose that for some prime $p$ the modular form $F$ is an eigenfunction of all of the Hecke operators $|T = |_{k,\chi}T$ for $T \in L^n_p$, where $(-1)^k\chi(-1) = \varepsilon(-1)$, with eigenvalues $\lambda(T) = \lambda(T,F)$. Set
$$Q_p(v;F) = \sum_{j=0}^{2^n}(-1)^j\lambda(q_j(p))v^j,$$
where $q_j(p)\in L^n_p$ are the coefficients of the polynomial (3.78) of Chapter 3, and let $\alpha_i = \alpha_i(p,F)$ for $i = 0,1,\dots,n$ be the parameters of the homomorphism $T\to\lambda(T)$ in the sense of Proposition 3.36 of Chapter 3. Prove the following:
(1) (Zharkovskaia) The following formal identity holds for any fixed matrix $A \in A_n$:
$$\sum_{j=0}^{\infty} f(p^jA)v^j = Q_p(v;F)^{-1}P(v),$$
where $P(v) = P_p(v;A,F)$ is a polynomial in $v$ of degree at most $2^n - 1$.
[Hint: Use Proposition 6.10 of Chapter 3.]
(2) The polynomial $Q_p(v;F)$ decomposes into linear factors as follows:
$$Q_p(v;F) = (1-\alpha_0 v)\prod_{r=1}^{n}\prod_{1\leqslant i_1<\dots<i_r\leqslant n}(1-\alpha_0\alpha_{i_1}\cdots\alpha_{i_r}v),$$
where $\alpha_i = \alpha_i(p,F)$; the parameters $\alpha_i$ satisfy the relation
$$\alpha_0^2\alpha_1\cdots\alpha_n = \chi(p)^n p^{nk-\langle n\rangle};$$
in particular, if all of the roots of $Q_p(v;F)$ have the same absolute value, then this absolute value is equal to $p^{-(nk-\langle n\rangle)/2}$, and the $\alpha_i$ satisfy the conditions
$$|\alpha_0| = p^{(nk-\langle n\rangle)/2}, \qquad |\alpha_1| = \dots = |\alpha_n| = 1.$$
(3) (Maass-Zharkovskaia) Suppose that $n > 1$ and either $k \neq n$, or else $k = n$ and $\chi(p) \neq -1$. Then the modular form $F|\Phi \in \mathfrak{M}^{n-1}_k$, where $\Phi$ is the Siegel operator, is an eigenfunction for all of the Hecke operators $|_{k,\chi}T'$ for $T' \in L^{n-1}_p$. Supposing that $F|\Phi \neq 0$, we let $\lambda(T',F|\Phi)$ denote the corresponding eigenvalue, and we let $\alpha_i' = \alpha_i(p,F|\Phi)$ for $i = 0,1,\dots,n-1$ denote the parameters of the homomorphism $T'\to\lambda(T',F|\Phi)$. Then we can take
$$(\alpha_0(p,F),\alpha_1(p,F),\dots,\alpha_n(p,F)) = (\alpha_0'p^{k-n}\chi(p),\ \alpha_1',\dots,\alpha_{n-1}',\ p^{n-k}\chi(p)).$$
In particular, we have the relation
$$Q_p(v;F) = Q_p(v;F|\Phi)\,Q_p(p^{k-n}\chi(p)v;F|\Phi).$$
[Hint: Apply Theorem 2.12, Proposition 2.13, and Problem 2.17.]
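For orientation, the degree $n = 1$ case of the factorization in part (2) reduces to the familiar quadratic Euler factor; a short check in the problem's notation (here $2^n = 2$ and $\langle 1\rangle = 1$):

```latex
Q_p(v;F) = (1-\alpha_0 v)(1-\alpha_0\alpha_1 v)
         = 1 - \alpha_0(1+\alpha_1)\,v + \alpha_0^2\alpha_1\,v^2,
\qquad
\alpha_0^2\alpha_1 = \chi(p)\,p^{\,k-1},
```

so if all roots of $Q_p(v;F)$ have the same absolute value, that value is $p^{-(k-1)/2}$, and $|\alpha_0| = p^{(k-1)/2}$, $|\alpha_1| = 1$, in agreement with the last display of part (2).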
PROBLEM 3.16. Let $F \in \mathfrak{M}^2_k(q,\chi)$, $F \neq 0$, where $k, q \in N$ and $\chi$ is a Dirichlet character modulo $q$. Suppose that $F$ is an eigenfunction for all of the Hecke operators $|T = |_{k,\chi}T$ for $T \in L^2(q)$ with eigenvalues $\lambda(T) = \lambda(T,F)$. For $p \in P(q)$ let $\alpha_0(p,F)$, $\alpha_1(p,F)$, $\alpha_2(p,F)$ be the parameters of the homomorphism $T\to\lambda(T,F)$ of the ring $L^2_p$. Prove that:
(1) One has the inequalities
$$\max\big(|\alpha_0(p,F)|,\ |\alpha_0(p,F)\alpha_1(p,F)|,\ |\alpha_0(p,F)\alpha_2(p,F)|,\ |\alpha_0(p,F)\alpha_1(p,F)\alpha_2(p,F)|\big) \leqslant p^{2\rho k},$$
where $\rho$ is the same as in Theorem 3.14.


(2) If $F|\Phi \neq 0$, where $\Phi$ is the Siegel operator, and if either $k \neq 2$ or else $k = 2$ and $\chi(p) \neq -1$ for all $p \in P(q)$, then the modular form $F|\Phi \in \mathfrak{M}^1_k(q,\chi)$ is an eigenfunction for all of the Hecke operators $|T$ for $T \in L^1(q)$, and the zeta-function (3.23) associated to $F$ can be expressed in terms of the zeta-function associated to $F|\Phi$,
$$\zeta(s,\gamma;F|\Phi) = \sum_{m\in N(q)}\frac{\lambda(T^1(m),F|\Phi)\gamma(m)}{m^s} = \prod_{p\in P(q)}\Big(1 - \frac{\lambda(T^1(p),F|\Phi)\gamma(p)}{p^s} + \frac{\chi(p)\gamma(p^2)}{p^{2s-k+1}}\Big)^{-1},$$
in the following way:
$$\zeta(s,\gamma;F) = \zeta(s,\gamma;F|\Phi)\,\zeta(s-k+2,\chi\gamma;F|\Phi).$$


(3) Suppose that $q = 1$, $\chi = 1$, and $F$ is not a cusp-form. Then the zeta-function $\zeta(s,F) = \zeta(s,1;F)$ has a meromorphic continuation to the entire $s$-plane, it is holomorphic except possibly at $s = 0$, $k-2$, $k$, $2k-2$, where it can have at most simple poles, and it satisfies the functional equation
$$\Psi(2k-2-s,F) = (-1)^k\Psi(s,F),$$
where
$$\Psi(s,F) = (2\pi)^{-2s}\Gamma(s)\Gamma(s-k+2)\zeta(s,F)$$
and $\Gamma(s)$ is the gamma-function.

3. Modular forms of arbitrary degree and even zeta-functions. The point of departure for the theory presented below is the symmetric factorization of the polynomials $R^n_p(v)$ and $\hat R^n_p(v)$ in §6.5 of Chapter 3. Since the coefficients of these polynomials lie in the even subrings of the corresponding Hecke rings, we begin by making some remarks about the even subrings.
Let
$$\text{(3.60)}\qquad E^n(q) = D_q(\Gamma^n_0(q), S^n(q)_+),$$
where $S^n(q)_+$ is the subgroup of $S^n(q)$ consisting of matrices $M$ for which $r(M)$ is the square of a rational number, be the even subring of $L^n(q)$. By analogy with Theorem 3.12 of Chapter 3, one easily verifies that the ring $E^n(q)$ is generated by the even subrings (see (4.37) of Chapter 3)
$$\text{(3.61)}\qquad E^n_p(q) \subset L^n_p(q)$$
for $p \in P(q)$. The imbedding $\varepsilon_q$ in (5.3) of Chapter 3 maps each $E^n_p(q)$ isomorphically onto the even subring $E^n_p$ of $L^n_p \subset L^n_{0,p}$. Using the definition of the spherical map $\Omega = \Omega^n_p$ and the fact that it is a monomorphism on $L^n_p$, we see that the image of $E^n_p$ under this map is the subset of polynomials in $\Omega(L^n_p)$ having even degree in the variable $x_0$. In particular, all of the coefficients of the polynomial (3.52) of Chapter 3 lie in $\Omega(E^n_p)$, and hence their preimages in $L^n_p$, i.e., the coefficients $r^n_\alpha(p)$ of the polynomial $R^n_p(v)$, lie in $E^n_p$:
$$\text{(3.62)}\qquad R^n_p(v) \in E^n_p[v].$$
On the other hand, from (3.56)-(3.57) and Theorem 3.30 of Chapter 3 it follows that
$$\Omega(E^n_p) = \mathbf{Q}[t^2, r_0^{\pm1}, r_1, \dots, r_{n-1}],$$
and hence
$$\Omega(L^n_p) = \mathbf{Q}[t, r_0^{\pm1}, r_1, \dots, r_{n-1}] = \Omega(E^n_p)[t] \quad\text{with } t^2 \in \Omega(E^n_p).$$
Returning to the preimages, we conclude that $L^n_p$ is an extension of degree two of the ring $E^n_p$, or more precisely:
$$L^n_p = E^n_p \oplus TE^n_p, \quad\text{where } T = \Omega^{-1}(t) \in L^n_p,\ T^2 \in E^n_p.$$


This decomposition implies that any nonzero $\mathbf{Q}$-linear homomorphism $\lambda$ from $E^n_p$ to $\mathbf{C}$ extends to a nonzero $\mathbf{Q}$-linear homomorphism $\lambda'\colon L^n_p \to \mathbf{C}$ if we set $\lambda'(T) = \pm\lambda(T^2)^{1/2}$, and this extension is unique except for the choice of sign in front of the square root. Let $\alpha_0, \alpha_1, \dots, \alpha_n$ be the parameters of $\lambda'$ (see Proposition 3.36 of Chapter 3). Since the choice of sign in $\lambda'(T)$ affects only the sign on the right in the last equation in the system of equations (3.83) of Chapter 3 that determines the parameters $\alpha_0, \alpha_1, \dots, \alpha_n$, it follows that replacing $\lambda'(T)$ by $-\lambda'(T)$ leads to a change from $\alpha_0$ to $-\alpha_0$ and otherwise leaves the parameters fixed. The numbers $\pm\alpha_0, \alpha_1, \dots, \alpha_n$ are called the parameters of the homomorphism $\lambda\colon E^n_p \to \mathbf{C}$.
In the case of the Hecke ring $\hat L^n(q)$, from the very beginning we singled out the even subring $\hat E^n(q,\chi)$ that is generated by the local subrings $\hat E^n_p(q,\chi)$ for all $p \in P(q)$ (see (4.83) and (5.4) of Chapter 3). These subrings and their isomorphic images $\hat E^n_p(q,\chi)$ in the ring $\hat L^n_{0,p}$ have all of the same properties as the rings $E^n_p(q)$ and $E^n_p$. For example,

by Proposition 4.22 of Chapter 3, any $\mathbf{Q}$-linear homomorphism $\lambda$ from $\hat E^n_p(q,\chi)$ to $\mathbf{C}$ is also determined by the parameters $\pm\alpha_0, \alpha_1, \dots, \alpha_n$, and the polynomial $\hat R^n_p(v)$ (see (6.27) of Chapter 3) has the property that
$$\text{(3.63)}\qquad \hat R^n_p(v) \in \hat E^n_p(q,\chi)[v].$$


We introduce some definitions and notation. Suppose that the character $\varepsilon$ of the group $\{\pm1\}$, the integer or half-integer $w = k$ or $k/2$, and the Dirichlet character $\chi$ are connected by the relation $\varepsilon(-1) = \chi_w(-1)$, where $\chi_w$ is the character (2.5). If $F \in \mathfrak{M}$ is an eigenfunction for all of the Hecke operators $|T = |_{w,\chi}\tau$, where $\tau \in E^n_p$ for $w = k$ and $\tau \in \hat E^n_p(q,\chi)$ for $w = k/2$, with eigenvalues $\lambda(\tau,F)$, then by (3.62)-(3.63) we can associate to $F$ the polynomial with complex coefficients
$$\text{(3.64)}\qquad R^n_p(v,F) = \sum_{\alpha=0}^{2n}(-1)^\alpha\lambda(r^n_\alpha(p),F)v^\alpha \quad\text{or}\quad \hat R^n_p(v,F) = \sum_{\alpha=0}^{2n}(-1)^\alpha\lambda(\hat r^n_\alpha(p),F)v^\alpha.$$
Similarly, for any eigenfunction $f$ we define the polynomial
$$\text{(3.65)}\qquad R^n_p(v,f) = \sum_{\alpha=0}^{2n}(-1)^\alpha\lambda(r^n_\alpha(p),f)v^\alpha \quad\text{or}\quad \hat R^n_p(v,f) = \sum_{\alpha=0}^{2n}(-1)^\alpha\lambda(\hat r^n_\alpha(p),f)v^\alpha.$$

If the values of $f$ are the Fourier coefficients of a modular form $F$, and if $\pm\alpha_0, \alpha_1, \dots, \alpha_n$ are the parameters of the homomorphism $\tau \to \lambda(\tau,F) = \lambda(\tau,f)$, then from the definitions we easily find that
$$\text{(3.66)}\qquad R^n_p(v,F) = R^n_p(v,f) = \prod_{i=1}^n(1-\alpha_i(p)^{-1}v)(1-\alpha_i(p)v), \qquad \hat R^n_p(v,F) = \hat R^n_p(v,f) = \prod_{i=1}^n(1-\alpha_i(p)^{-1}v)(1-\alpha_i(p)v).$$

Now let $F \in \mathfrak{M}^n_w(q,\chi)$ be a modular form of weight $w$ and one-dimensional character $[\chi]$ for $\Gamma^n_0(q)$, and suppose that $F$ is an eigenfunction for all of the even Hecke operators $|T = |_{w,\chi}\tau$, where $\tau \in E^n(q)$ for $w = k$ and $\tau \in \hat E^n(q,\chi)$ for $w = k/2$. If we regard $F$ as an element of $\mathfrak{M}$ with $\varepsilon(-1) = \chi_w(-1)$, we see that $F$ is an eigenfunction for all of the operators $|T$ for $\tau \in E^n_p$ or $\tau \in \hat E^n_p(q,\chi)$ for $p \in P(q)$. Thus, for each such $p$ we have a corresponding polynomial (3.64). If $\gamma\colon N(q) \to \mathbf{C}$ is an arbitrary completely multiplicative function, then we call the function
$$\text{(3.67)}\qquad \zeta^+_p(s,\gamma;F) = R^n_p\Big(\frac{\gamma(p)}{p^s},F\Big)^{-1} \quad\text{or}\quad \hat R^n_p\Big(\frac{\gamma(p)}{p^s},F\Big)^{-1}$$

(depending on whether $w = k$ or $w = k/2$, respectively) and the Euler product
$$\text{(3.68)}\qquad \zeta^+(s,\gamma;F) = \prod_{p\in P(q)}\zeta^+_p(s,\gamma;F)$$
the local even zeta-function and the (global) even zeta-function of the modular form $F$ with character $\gamma$.
Finally, in the above notation, given $A \in A_n$, a function $f$ as above, and an arbitrary subset $\Delta \subset N(q)$, we define the formal Dirichlet series
$$\text{(3.69)}\qquad D_w(s,A,f,\Delta) = \sum_{\substack{M\in\Lambda\backslash M_n\\ |\det M|\in\Delta}}\frac{\gamma(|\det M|)\,\bar\chi_w(\det M)\, f(A[{}^tM])}{|\det M|^{s+w-1}},$$
where $\Lambda = \Lambda^n = GL_n(\mathbf{Z})$ and $\bar\chi_w = \chi_w^{-1}$ (see (2.5)).


We are now ready to give the fundamental result of this subsection.
THEOREM 3.17. Let
$$F(Z) = \sum_{A\in A_n} f(A)e\{AZ\} \in \mathfrak{M}^n_w(q,\chi)$$
be a nonzero modular form of integer or half-integer weight $w = k$ or $k/2$ and one-dimensional character $[\chi]$ for the group $\Gamma^n_0(q)$, where $n, q \in N$, $\chi$ is a Dirichlet character modulo $q$, and $q$ is divisible by 4 if $w = k/2$. Suppose that $F$ is an eigenfunction for all of the Hecke operators $|\tau = |_{w,\chi}\tau$ for $\tau$ in the even Hecke ring $E^n(q)$ for $w = k$ or $\hat E^n(q,\chi)$ for $w = k/2$. Then the following formal identity holds for any completely multiplicative function $\gamma\colon N(q)\to\mathbf{C}$ and any fixed matrix $A_0 \in A^+_n$:
$$\text{(3.70)}\qquad D_w(s,A_0,f,N(q)) = X(s,A_0,f)\,B_w\Big(\frac{\gamma(p)}{p^s},A_0\Big)\,\zeta^+(s,\gamma;F),$$
where $\zeta^+(s,\gamma;F)$ is the even zeta-function (3.68) of the modular form $F$ with character $\gamma$;
$$B_k(v,A_0) = \prod_{p\in P(q)}B^n_p(v,A_0) \qquad\text{and}\qquad B_{k/2}(v,A_0) = \prod_{p\in P(q)}\hat B^n_{p,k}(v,A_0),$$
in which the polynomials on the right are the ones defined in Theorems 2.23 and 2.26; and the finite sum $X(s,A_0,f)$ is given by setting
$$\text{(3.71)}\qquad X(s,A_0,f) = \Big(f\Big|_{w,\chi}\prod_{\substack{p\in P(q)\\ p\mid\det A_0}}\Big(\sum_{i=0}^{n}(-1)^i x^n_{-i}(p)\Big(\frac{\gamma(p)}{p^s}\Big)^i\Big)\Big)(A_0),$$
in which the function $f\colon A_n\to\mathbf{C}$ is regarded as above, with $\varepsilon(-1) = \chi_w(-1)$, $x^n_{-i}(p)\in L^n_{0,p}$ are the coefficients of the polynomial (6.101) of Chapter 3, and the right side is understood in the sense of (2.72).
Further suppose that $f(A) \neq 0$ for some $A \in A^+_n$ and $|\gamma(m)| \leqslant cm^\sigma$ for all $m \in N(q)$, where $c$ and $\sigma$ are real numbers that do not depend on $m$. Then the Dirichlet series on the left in (3.70) and the infinite products on the right converge absolutely and uniformly

in any right half-plane of the variable $s$ of the form $\operatorname{Re} s > (2\rho-1)w + \sigma + n + 1 + \varepsilon$ with $\varepsilon > 0$, where $\rho = 1$ in the general case and $\rho = 1/2$ if $F$ is a cusp-form; and these holomorphic functions on the half-planes indicated are connected by the identity (3.70).
REMARKS. (1) From (6.34) of Chapter 3 it follows that $x^n_{-i}(p) \in C^n_{-,p} \subset C^n_-$. By Theorem 5.8 of Chapter 3, $C^n_- = C^n_-(1)$ is a commutative ring. Thus, it makes no difference in what order the primes appear on the right in (3.71).
(2) The action of the operators $|_{w,\chi}x^n_{-i}(p)$ can be computed from the formulas (3.74).
(3) At the end of the proof of this theorem we show that, under the conditions of the theorem, there exist matrices $A_0 \in A^+_n$ for which $X(s,A_0,f) = f(A_0) \neq 0$.
We first prove three lemmas.
LEMMA 3.18. For $a \in N$ set
$$t^+(a) = t^n_+(a) = \sum_{\substack{D\in\Lambda\backslash M_n/\Lambda\\ \det D = \pm a}}(U(D)) \in L^n_0.$$

Then:
(1) the elements $t^+(a)$ belong to the subring $C^n_+ = C^n_+(1)$ of $L^n_0$, and in particular commute with one another;
(2) if $a$ and $b$ are relatively prime, then
$$\text{(3.72)}\qquad t^+(a)t^+(b) = t^+(ab);$$
(3) for any $g$ one has the formula
$$\text{(3.73)}\qquad a\sum_{\substack{M\in\Lambda\backslash M_n\\ \det M=\pm a}}\bar\chi_w(\det M)|\det M|^{-w}g(A[{}^tM]) = (g|_{w,\chi}a^{-n}t^+(a))(A) \qquad (a\in N,\ A\in A_n),$$
where we assume that $\varepsilon(-1) = \chi_w(-1)$ and $\chi(a) \neq 0$.


PROOF. Part (1) follows from Proposition 5.5 and Theorem 5.8 of Chapter 3.
If $D \in M_n$ and $a = |\det D| > 0$, then from (5.23) and (6.109) of Chapter 3 and from the definition of the map $\Phi_n$ in (5.29) of Chapter 3 we obtain
$$\Phi_n((U(D))) = a^{n+1}(D),$$
and hence $\Phi_n(t^+(a)) = a^{n+1}t(a)$, where $t(a) = t^n(a) \in H^n$ are the elements (2.10) of Chapter 3. Part (2) of the lemma follows from these formulas and the relations (2.14) of Chapter 3 for the elements $t(a)$, since, by Theorem 5.8 of Chapter 3, the restriction of $\Phi_n$ to $C^n_+$ is a monomorphism.
By (2.29) we have
$$(g|_{w,\chi}(U(D)))(A) = a^{n+1}\sum_{M\in\Lambda\backslash\Lambda D\Lambda}\bar\chi_w(\det M)|\det M|^{-w}g(A[{}^tM]).$$
If we divide both sides by $a^n$ and sum over all double cosets $\Lambda D\Lambda \subset M_n$ with $\det D = \pm a$, we obtain (3.73). $\square$
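The multiplicativity (3.72) has a concrete combinatorial shadow that can be checked by machine. The sketch below (an $n = 2$ toy with our own function name, not the double-coset computation itself) counts the left cosets $\Lambda\backslash\{M : \det M = a\}$ through their triangular Hermite forms and verifies that the count is multiplicative in $a$, as $t^+(a)t^+(b) = t^+(ab)$ for coprime $a, b$ predicts for the degrees:

```python
# For n = 2, the left cosets of integral matrices of determinant a are
# represented by [[d1, b], [0, d2]] with d1*d2 = a and 0 <= b < d2, so the
# coset count is the sum of d2 over divisors d1 of a.  This count is
# multiplicative in a, mirroring t+(a) t+(b) = t+(ab) in Lemma 3.18(2).

from math import gcd

def coset_count(a):
    return sum(a // d1 for d1 in range(1, a + 1) if a % d1 == 0)

assert coset_count(6) == 12                 # = sigma(6) = 1 + 2 + 3 + 6
for a in range(1, 20):
    for b in range(1, 20):
        if gcd(a, b) == 1:
            assert coset_count(a * b) == coset_count(a) * coset_count(b)
```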

LEMMA 3.19. Suppose that the function $g$ is an eigenfunction for all of the Hecke operators $|\tau = |_{w,\chi}\tau$, where $\tau \in E^n_p$ for $w = k$ and $\tau \in \hat E^n_p(q,\chi)$ for $w = k/2$, and $\varepsilon(-1) = \chi_w(-1)$. Then in the notation (3.65) the following formal identity holds for any matrix $A \in A_n$:
$$R^n_p(v,g)\sum_{j=0}^{\infty}(g|p^{-jn}t^+(p^j))(A)v^j = B^n_p(v,A)\sum_{i=0}^{n}(-1)^i(g|x^n_{-i}(p))(A)v^i,$$
if $w = k$, and
$$\hat R^n_p(v,g)\sum_{j=0}^{\infty}(g|p^{-jn}t^+(p^j))(A)v^j = \hat B^n_{p,k}(v,A)\sum_{i=0}^{n}(-1)^i(g|x^n_{-i}(p))(A)v^i,$$
if $w = k/2$, where $B^n_p(v,A)$ and $\hat B^n_{p,k}(v,A)$ are the polynomials in Theorems 2.23 and 2.26, and $x^n_{-i}(p) \in L^n_{0,p}$ are the coefficients of the polynomials (6.101) of Chapter 3.
PROOF. Since $g|r^n_\alpha(p) = \lambda(r^n_\alpha(p),g)g$, it follows that, using the notation (2.73), we can rewrite the left side of the identity in the lemma in the form
$$\sum_{j=0}^{\infty}\sum_{\alpha=0}^{2n}(-1)^\alpha\big(g|r^n_\alpha(p)|p^{-jn}t^+(p^j)\big)(A)v^\alpha v^j = \Big(g\Big|\Big(\sum_{\alpha=0}^{2n}(-1)^\alpha r^n_\alpha(p)v^\alpha\Big)\Big|\sum_{j=0}^{\infty}p^{-jn}t^+(p^j)v^j\Big)(A) = \Big(g\Big|R^n_p(v)\sum_{j=0}^{\infty}p^{-jn}t^+(p^j)v^j\Big)(A).$$
Using the factorization of $R^n_p(v)$ in (6.99) of Chapter 3 and the identity (6.105) of Chapter 3, we can express the last function in terms of the element $B(v) = B^n_p(v) = \sum_{i=0}^{n}(-1)^i b_i v^i$. Since each of the functions $g|x^n_{-i}(p)$ also lies in the same space, it follows by Proposition 2.20 that the action of the coefficients $b_i$ on these functions is described by the polynomial $B^n_p(v,A)$ in (2.79). If we substitute these expressions into the last formula and take Theorem 2.23 into account, we obtain the desired identity for $w = k$.
In the case $w = k/2$, instead of (6.99) one must use (6.100) of Chapter 3, and instead of Theorem 2.23 one uses Theorem 2.26. $\square$

LEMMA 3.20. The elements $x^n_{-i}(p)$ $(i = 0,1,\dots,n)$ act according to the formulas
$$\text{(3.74)}\qquad (g|_{w,\chi}x^n_{-i}(p))(A) = p^{wn-\langle n\rangle+(n-i)}\chi(p)^n\sum_{D\in\Lambda\backslash\Lambda D_{n-i}\Lambda}\bar\chi_w(\det D)|\det D|^{-w}g(p^{-2}A[{}^tD]),$$
where, as usual, $\varepsilon(-1) = \chi_w(-1)$ and $D_\alpha = D^n_\alpha(p)$ are the matrices (2.28) of Chapter 3. In particular,
$$\text{(3.75)}\qquad (g|_{w,\chi}x^n_{-i}(p))(A) = 0 \quad\text{for } i = 1,\dots,n, \text{ if } p^2\nmid\det A.$$

PROOF. The formulas in (3.74) follow directly from the formulas in (6.101) of Chapter 3 for the elements $x^n_{-i}(p)$ and from the formulas in Lemma 2.8 for the action of $\Delta^n(p)$, $\Pi_0 = \Pi^n_0(p)$, and $\Pi^n_{n-i}(p)$. (3.75) is a consequence of (3.74), since in the case $p^2\nmid\det A$ the matrix $A[{}^tD]$ cannot be divisible by $p^2$ for any $D$ with $\det D = \pm p^{n-i}$. $\square$

We are now ready to prove Theorem 3.17.

PROOF OF THE THEOREM. Using (3.73), we can rewrite each series $D_w(s,A,g,\Delta)$ for arbitrary $g$ in the form
$$D_w(s,A,g,\Delta) = \sum_{a\in\Delta}\frac{\gamma(a)}{a^s}\,a\sum_{\substack{M\in\Lambda\backslash M_n\\ \det M=\pm a}}\bar\chi_w(\det M)|\det M|^{-w}g(A[{}^tM]) = \sum_{a\in\Delta}\frac{\gamma(a)}{a^s}(g|_{w,\chi}a^{-n}t^+(a))(A).$$

We now suppose that for some prime $p \in P(q)$ the function $g$ is an eigenfunction for all of the Hecke operators $|\tau = |_{w,\chi}\tau$ for $\tau \in E^n_p$ if $w = k$ and $\tau \in \hat E^n_p(q,\chi)$ if $w = k/2$, with the same eigenvalues as $f$, and we suppose that the set $\Delta$ can be represented in the form
$$\text{(3.76)}\qquad \Delta = \bigcup_{j=0}^{\infty}\Delta_1 p^j, \qquad\text{where } \Delta_1 \subset N(pq).$$

Then, using (3.66) and (3.72), we obtain
$$R^n_p(v_p,F)D_k(s,A,g,\Delta) = R^n_p(v_p,g)\sum_{a\in\Delta_1}\sum_{\delta=0}^{\infty}\frac{\gamma(ap^\delta)\big(g|(ap^\delta)^{-n}t^+(ap^\delta)\big)(A)}{(ap^\delta)^s}$$
$$= \sum_{a\in\Delta_1}\frac{\gamma(a)}{a^s}R^n_p(v_p,g)\sum_{\delta=0}^{\infty}\big((g|a^{-n}t^+(a))|p^{-\delta n}t^+(p^\delta)\big)(A)v_p^\delta,$$
where $v_p = \gamma(p)p^{-s}$. Let $a \in \Delta_1$. We write $a$ as a product of prime powers:


$a = p_1^{e_1}\cdots p_t^{e_t}$. Then, by (3.72), we have $t^+(a) = t^+(p_1^{e_1})\cdots t^+(p_t^{e_t})$. Since $t^+(p_i^{e_i})$ lies in $C^n_{+,p_i}$ and $p_i \neq p$ for every $i = 1,\dots,t$, it follows by Proposition 5.12(1) of Chapter 3 that any element $\tau \in E^n_p \subset L^n_p$ commutes with each of the $t^+(p_i^{e_i})$, and hence commutes with $t^+(a)$. Thus, for every $a \in \Delta_1$ the function $g_a = g|a^{-n}t^+(a)$, along with $g = g_1$, is an eigenfunction for all of the operators $|_{k,\chi}\tau$ for $\tau \in E^n_p$ and has the same eigenvalues as $g$. Hence, using Lemma 3.19, we obtain
$$R^n_p(v_p,g)\sum_{j=0}^{\infty}\big(g|a^{-n}t^+(a)|p^{-jn}t^+(p^j)\big)(A)v_p^j = B^n_p(v_p,A)\sum_{i=0}^{n}(-1)^i(g_a|x^n_{-i}(p))(A)v_p^i.$$

If we again write $t^+(a)$ as a product of elements of the form $t^+(p_i^{e_i})$ with primes $p_i \neq p$ and take into account that $x^n_{-i}(p) \in C^n_{-,p}$, we conclude from Proposition 5.12(2) of Chapter 3 that the elements $t^+(a)$ for $a \in \Delta_1$ commute with the elements $x^n_{-i}(p)$. Hence,
$$\text{(3.77)}\qquad (g_a|x^n_{-i}(p))(A) = \big((g|x^n_{-i}(p))|a^{-n}t^+(a)\big)(A).$$
Combining these formulas, we arrive at the formal identity
$$R^n_p(v_p,F)D_k(s,A,g,\Delta) = B^n_p(v_p,A)\sum_{a\in\Delta_1}\frac{\gamma(a)}{a^s}\sum_{i=0}^{n}(-1)^i\big(g|x^n_{-i}(p)|a^{-n}t^+(a)\big)(A)v_p^i$$
$$\text{(3.78)}\qquad = B^n_p(v_p,A)\sum_{i=0}^{n}(-1)^i D_k(s,A,g|x^n_{-i}(p),\Delta_1)v_p^i.$$

According to Proposition 5.12(1) of Chapter 3, the elements $\tau \in \hat E^n_p(q,\chi)$ and $t^+(a)$ with $a \in \Delta_1$ commute with one another; thus, by analogy with (3.78), we obtain
$$\text{(3.79)}\qquad \hat R^n_p(v_p,F)D_{k/2}(s,A,g,\Delta) = \hat B^n_{p,k}(v_p,A)\sum_{i=0}^{n}(-1)^i D_{k/2}(s,A,g|x^n_{-i}(p),\Delta_1)v_p^i.$$
In particular, if $(p,\det A) = 1$, then (3.77) and (3.75) imply that $D_w(s,A,g|x^n_{-i}(p),\Delta_1)$ is equal to $0$ for $i = 1,\dots,n$ and is equal to $D_w(s,A,g,\Delta_1)$ for $i = 0$. Thus, in this case (3.78) and (3.79) are transformed, respectively, to
$$\text{(3.80)}\qquad R^n_p(v_p,F)D_k(s,A,g,\Delta) = B^n_p(v_p,A)D_k(s,A,g,\Delta_1), \qquad \hat R^n_p(v_p,F)D_{k/2}(s,A,g,\Delta) = \hat B^n_{p,k}(v_p,A)D_{k/2}(s,A,g,\Delta_1).$$

To prove (3.70) we follow the same plan as in the proof of (3.52). Let $p_1,\dots,p_b$ denote all of the distinct prime divisors of $\det A_0$ that are in $N(q)$, let $p_{b+1}, p_{b+2},\dots$ denote all of the primes in $P(q\det A_0)$ listed in increasing order, and set $q(c) = p_{b+1}\cdots p_{b+c}$.

Using the first equation in (3.80) and an obvious induction on $c = 1,2,\dots$, we obtain the identity
$$D_k(s,A_0,f,N(q)) = D_k(s,A_0,f,N(q\cdot q(c)))\Big\{\prod_{p\mid q(c)}B^n_p(v_p,A_0)\Big\}\prod_{p\mid q(c)}\big(R^n_p(v_p,F)\big)^{-1},$$
which, if we take the limit as $c \to \infty$ coefficient by coefficient, gives us the identity
$$\text{(3.81)}\qquad D_k(s,A_0,f,N(q)) = D_k(s,A_0,f,\Delta)\Big\{\prod_{p\in P(q\det A_0)}B^n_p(\gamma(p)p^{-s},A_0)\Big\}\prod_{p\in P(q\det A_0)}\zeta^+_p(s,\gamma;F),$$
where $\Delta = \Delta(p_1,\dots,p_b) = \{a\in N:\ a\mid(p_1\cdots p_b)^\infty\}$. To prove (3.70) with $w = k$ it remains to verify the identities

$$\text{(3.82)}\qquad \Big\{\prod_{p\mid p_1\cdots p_d}R^n_p(v_p,F)\Big\}D_k(s,A,g,\Delta(p_1,\dots,p_d)) = X_d(s,A,g)\Big\{\prod_{p\mid p_1\cdots p_d}B^n_p(v_p,A)\Big\}$$
for $p_1,\dots,p_d \in P(q)$, $A \in A_n$, and any $g$ that is an eigenfunction for all of the operators $|\tau$ for $\tau \in E^n_{p_1},\dots,E^n_{p_d}$ with the same eigenvalues as $f$, where
$$X_d(s,A,g) = \Big(g\Big|\prod_{p\mid p_1\cdots p_d}\Big(\sum_{i=0}^{n}(-1)^i x^n_{-i}(p)v_p^i\Big)\Big)(A).$$

We use induction on $d$. For $d = 1$ the identity (3.82) is the same as (3.78) for $\Delta = \Delta(p_1)$ and $\Delta_1 = \{1\}$. Suppose that (3.82) has already been proved for some $d \in N$, and let $p_{d+1} = p \in P(q)$ be distinct from $p_1,\dots,p_d$. By (3.78), we have
$$R^n_p(v_p,F)D_k(s,A,g,\Delta(p_1,\dots,p_d,p)) = \Big(\sum_{i=0}^{n}(-1)^i D_k(s,A,g|x^n_{-i}(p),\Delta(p_1,\dots,p_d))v_p^i\Big)B^n_p(v_p,A).$$


Since $x^n_{-i}(p) \in C^n_{-,p}$, it follows from Proposition 5.12(1) of Chapter 3 that the functions $g|x^n_{-i}(p)$, along with $g$, are eigenfunctions for all of the operators $|\tau$ for $\tau \in E^n_{p_1},\dots,E^n_{p_d}$ with the same eigenvalues as $f$. Hence, using the last relation and the induction assumption, we obtain
$$\Big\{\prod_{p'\mid p_1\cdots p_d}R^n_{p'}(v_{p'},F)\Big\}R^n_p(v_p,F)D_k(s,A,g,\Delta(p_1,\dots,p_d,p)) = \Big(\sum_{i=0}^{n}(-1)^i X_d(s,A,g|x^n_{-i}(p))v_p^i\Big)\Big\{\prod_{p'\mid p_1\cdots p_d}B^n_{p'}(v_{p'},A)\Big\}B^n_p(v_p,A).$$

Since all elements of the form $x^n_{-i}(p)$ lie in the commutative ring $C^n_-$, it follows that
$$\sum_{i=0}^{n}(-1)^i X_d(s,A,g|x^n_{-i}(p))v_p^i = \sum_{i=0}^{n}(-1)^i\Big(g\Big|x^n_{-i}(p)\Big|\prod_{p'\mid p_1\cdots p_d}\Big(\sum_{j=0}^{n}(-1)^j x^n_{-j}(p')v_{p'}^j\Big)\Big)(A)v_p^i = \Big(g\Big|\prod_{p'\mid p_1\cdots p_{d+1}}\Big(\sum_{i=0}^{n}(-1)^i x^n_{-i}(p')v_{p'}^i\Big)\Big)(A) = X_{d+1}(s,A,g),$$

which proves (3.82), and hence also (3.70) for $w = k$. We note that the second equation in (3.80) implies (3.81) (with $B^n_p$ replaced by $\hat B^n_{p,k}$ on the right) for the Dirichlet series $D_{k/2}(s,A_0,f,N(q))$. Furthermore, by Proposition 5.12 of Chapter 3, elements of the rings $C^n_{-,p}$ and $\hat E^n_{p_1}(q,\chi),\dots,\hat E^n_{p_d}(q,\chi)$ commute in pairs. Thus, under the same conditions as above, from (3.79) we obtain the identity
$$\Big\{\prod_{p\mid p_1\cdots p_d}\hat R^n_p(v_p,F)\Big\}D_{k/2}(s,A,g,\Delta(p_1,\dots,p_d)) = X_d(s,A,g)\prod_{p\mid p_1\cdots p_d}\hat B^n_{p,k}(v_p,A),$$
which implies (3.70) for $w = k/2$.


It remains to examine the convergence of the series and products in (3.70). If we apply the bound (3.35) in the general case and (3.70) of Chapter 2 in the case when $F$ is a cusp-form, we find that the Dirichlet series on the left in (3.70) is majorized by the series
$$c'\sum_{\substack{M\in\Lambda\backslash M_n\\ |\det M|\in N(q)}}|\det M|^{-\operatorname{Re}s+(2\rho-1)w+\sigma+1}$$
with constant $c' \geqslant 0$. If we take $\Lambda$-left coset representatives of the form (2.15) of Chapter 3 and note that the number of representatives with fixed diagonal $c_{11}, c_{22},\dots,c_{nn}$ is obviously equal to $c_{22}c_{33}^2\cdots c_{nn}^{n-1}$, we conclude that the last series is majorized by the series
$$c'\sum_{c_{11},c_{22},\dots,c_{nn}}\Big(\prod_{i=1}^{n}c_{ii}^{i-1}\Big)\Big/\Big(\prod_{i=1}^{n}c_{ii}\Big)^t = c'\prod_{i=1}^{n}\sum_{c\in N(q)}c^{-t+i-1}, \qquad\text{where } t = \operatorname{Re}s - (2\rho-1)w - \sigma - 1,$$
which has positive terms and is uniformly convergent for $t > n + \varepsilon$, i.e., for $\operatorname{Re}s > (2\rho-1)w + \sigma + n + 1 + \varepsilon$, where $\varepsilon > 0$. The formulas in Theorems 2.23 and 2.26 imply that the product of the polynomials $B^n_p(\gamma(p)p^{-s},A_0)$ on the right of the identity converges absolutely and uniformly for $\operatorname{Re}s > 1 + \sigma + \varepsilon$, while the product of the $\hat B^n_{p,k}(\gamma(p)p^{-s},A_0)$ converges absolutely and uniformly for $\operatorname{Re}s > 1/2 + \sigma + \varepsilon$; and in any of these half-planes the modulus is bounded from below by a positive constant. From these observations and the identity (3.70), we conclude that the product $X(s,A_0,f)\zeta^+(s,\gamma;F)$, regarded as a Dirichlet series, converges absolutely and uniformly in any of the half-planes indicated in the theorem. Since $\zeta^+(s,\gamma;F)$

does not depend on the choice of $A_0$, to complete the proof of the theorem it suffices to show that the sum $X(s,A_0,f)$ becomes a nonzero constant for some $A_0 \in A^+_n$. By assumption, there exist matrices $A \in A^+_n$ for which $f(A) \neq 0$. Among these matrices we choose a matrix $A_0$ of minimal determinant. Then (3.71) and (3.74) imply that $X(s,A_0,f) = f(A_0) \neq 0$. $\square$
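The coset-representative count used in the majorization above can be spot-checked for $n = 2$ (the helper names below are ours): every integral $M$ with $\det M = a > 0$ is equivalent, under left multiplication by $GL_2(\mathbf{Z})$, to exactly one $\begin{pmatrix}d_1 & b\\ 0 & d_2\end{pmatrix}$ with $d_1d_2 = a$ and $0 \leqslant b < d_2$, so the number of representatives with fixed diagonal is $d_2$, the $n = 2$ instance of $c_{22}c_{33}^2\cdots c_{nn}^{n-1}$.

```python
# Enumerate the triangular (Hermite) representatives of determinant a = 6
# and verify, with a small reduction routine, that left multiplication by
# unimodular matrices does not move a matrix out of its canonical form.

def matmul2(u, m):
    return tuple(tuple(sum(u[i][k] * m[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def hnf2(m):
    """Canonical representative of the coset GL_2(Z) * m (det m != 0)."""
    r1, r2 = list(m[0]), list(m[1])
    while r2[0] != 0:                       # Euclid on the first column
        q = r1[0] // r2[0]
        r1, r2 = r2, [r1[0] - q * r2[0], r1[1] - q * r2[1]]
    if r1[0] < 0:
        r1 = [-r1[0], -r1[1]]
    if r2[1] < 0:
        r2 = [-r2[0], -r2[1]]
    r1[1] %= r2[1]                          # reduce to 0 <= b < d2
    return ((r1[0], r1[1]), (0, r2[1]))

a = 6
reps = [((d1, b), (0, a // d1))
        for d1 in range(1, a + 1) if a % d1 == 0
        for b in range(a // d1)]
assert len(reps) == 12                      # sum over diagonals of d2

units = [((0, 1), (-1, 0)), ((1, 3), (0, 1)),
         ((1, 0), (-2, 1)), ((2, 1), (1, 1))]
for m in reps:
    for u in units:
        assert hnf2(matmul2(u, m)) == m     # reps are pairwise inequivalent
```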

PROBLEM 3.21. With the notation and assumptions of Theorem 3.17, show that the roots $\alpha_i(p)^{\pm1}$ of the polynomials in (3.66) satisfy the inequalities
$$|\alpha_i(p)^{\pm1}| \leqslant p^{(2\rho-1)w+n} \qquad (i = 1,\dots,n;\ p\in P(q)).$$

PROBLEM 3.22. Let $F \in \mathfrak{M}$ be an eigenfunction for all of the Hecke operators $|_{w,\chi}\tau$ for $\tau \in E^n_p$ if $w = k$ and for $\tau \in \hat E^n_p(q,\chi)$ if $w = k/2$, where $\chi_w(-1) = \varepsilon(-1)$. Suppose that $F|\Phi \neq 0$, where $\Phi$ is the Siegel operator. Prove that the modular form $F|\Phi$ is an eigenfunction for all of the operators $|_{w,\chi}\tau'$ for $\tau' \in E^{n-1}_p$ (for $\tau' \in \hat E^{n-1}_p(q,\chi)$ in the case $w = k/2$), and the polynomials (3.64) corresponding to $F$ and $F|\Phi$ are connected by the relation
$$R^n_p(v,F) = (1-p^{n-k}\chi(p)v)(1-p^{k-n}\chi(p)v)R^{n-1}_p(v,F|\Phi)$$
(in the case $w = k/2$ by the relation
$$\hat R^n_p(v,F) = (1-p^{n-k/2}\chi(p)v)(1-p^{k/2-n}\chi(p)v)\hat R^{n-1}_p(v,F|\Phi)).$$
[Hint: See Problem 3.15(3).]


PROBLEM 3.23. Using the previous problem and Theorem 3.17, investigate the convergence of the zeta-function $\zeta^+(s,\gamma;F)$ in the case when the Fourier coefficients $f$ of the eigenfunction $F$ have the property that $f(A) = 0$ for any nondegenerate matrix $A$ in $A_n$.
APPENDIX 1

Symmetric Matrices Over a Field

1. Arbitrary fields. We let $S_n(K)$ denote the set of all symmetric $n\times n$ matrices over the field $K$. Two matrices $A, A' \in S_n(K)$ are said to be equivalent over $K$ if
$$\text{(1.1)}\qquad A' = {}^tCAC = A[C], \qquad\text{where } C \in GL_n(K).$$

The following identity, which is easy to verify, is often useful if one wants to simplify a matrix by replacing it with an equivalent one: if the upper-left $r\times r$ block $A_1 = A^{(r)}$ in the matrix $A = \begin{pmatrix}A_1 & A_2\\ {}^tA_2 & A_4\end{pmatrix} \in S_n(K)$ is nonsingular, where $0 < r < n$, then
$$\text{(1.2)}\qquad A = \begin{pmatrix}A_1 & A_2\\ {}^tA_2 & A_4\end{pmatrix} = \begin{pmatrix}A_1 & 0\\ 0 & A_4 - A_1^{-1}[A_2]\end{pmatrix}\left[\begin{pmatrix}E_r & A_1^{-1}A_2\\ 0 & E_{n-r}\end{pmatrix}\right].$$
THEOREM 1.1. Let $A \in S_n(K)$. Suppose that there exists a column $c \in M_{n,1}(K)$ such that
$$a = A[c] \neq 0.$$
Then $A$ is equivalent over $K$ to a matrix of the form $\begin{pmatrix}a & 0\\ 0 & A_1\end{pmatrix}$, where $A_1 \in S_{n-1}(K)$.

PROOF. If we replace $A$ by the matrix $A[C]$, where $C$ is a nonsingular matrix with first column $c$, we may suppose that $A = \begin{pmatrix}a & *\\ * & *\end{pmatrix}$. The theorem then follows from (1.2). $\square$

If the characteristic of the field $K$ is not 2, then the assumption in Theorem 1.1 obviously holds for any nonzero matrix $A \in S_n(K)$: if every diagonal entry $A[e_i] = a_{ii}$ vanishes, then some $a_{ij} \neq 0$, and $A[e_i + e_j] = 2a_{ij} \neq 0$. Hence, using Theorem 1.1 and induction on $n$, we have
THEOREM 1.2. If the characteristic of the field $K$ is not 2, then any matrix in $S_n(K)$ is equivalent over $K$ to a diagonal matrix.
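The induction behind Theorems 1.1 and 1.2 is an algorithm, and it can be run in exact rational arithmetic. A minimal sketch over $K = \mathbf{Q}$ (the function names are ours, not the text's): find a pivot as in Theorem 1.1, using $e_i + e_j$ when every diagonal entry vanishes (this is where $\operatorname{char} K \neq 2$ enters), then clear the first row and column with the transformation (1.2).

```python
from fractions import Fraction

def identity(n):
    return [[Fraction(i == j) for j in range(n)] for i in range(n)]

def transpose(m):
    return [list(col) for col in zip(*m)]

def matmul(x, y):
    return [[sum(x[i][k] * y[k][j] for k in range(len(y)))
             for j in range(len(y[0]))] for i in range(len(x))]

def congruent_diagonal(a):
    """Return (C, D) with D = tC * A * C diagonal and C invertible."""
    n = len(a)
    a = [[Fraction(x) for x in row] for row in a]
    c = identity(n)

    def apply(t):                  # replace A by A[t], accumulate C
        nonlocal a, c
        a = matmul(transpose(t), matmul(a, t))
        c = matmul(c, t)

    for k in range(n):
        if a[k][k] == 0:           # pivot repair: add column j to column k
            for j in range(k + 1, n):
                if a[k][j] != 0:
                    t = identity(n)
                    t[j][k] = Fraction(1)
                    apply(t)
                    break
        if a[k][k] == 0:
            continue               # row and column k are already zero
        t = identity(n)            # the elimination step from (1.2)
        for j in range(k + 1, n):
            t[k][j] = -a[k][j] / a[k][k]
        apply(t)
    return c, a

A = [[0, 1, 2], [1, 0, 3], [2, 3, 0]]
C, D = congruent_diagonal(A)
AF = [[Fraction(x) for x in row] for row in A]
assert D == matmul(matmul(transpose(C), AF), C)
assert all(D[i][j] == 0 for i in range(3) for j in range(3) if i != j)
```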
THEOREM 1.3. Suppose that the rank of $A \in S_n(K)$ is equal to $r$, where $0 < r < n$. Then $A$ is equivalent over $K$ to a matrix of the form
$$\text{(1.3)}\qquad \begin{pmatrix}A_1 & 0\\ 0 & 0\end{pmatrix}, \qquad\text{where } A_1 \in S_r(K) \text{ and } \det A_1 \neq 0.$$

PROOF. Since $\operatorname{rank} A = r$, there exist $n - r$ linearly independent columns $c_{r+1},\dots,c_n \in M_{n,1}(K)$ satisfying the equation $Ax = 0$. Let $C$ be a nonsingular matrix whose last columns are $c_{r+1},\dots,c_n$. Then the matrix $A[C]$ has the required form. $\square$

2. The field of real numbers.

THEOREM 1.4. For any matrix $A \in S_n(\mathbf{R})$ there exists an orthogonal real matrix $C$ such that the matrix $A[C]$ is diagonal:
$$\text{(1.4)}\qquad A[C] = \operatorname{diag}(\lambda_1,\dots,\lambda_n).$$
Here $\lambda_1,\dots,\lambda_n$ are the eigenvalues of $A$.
The proof of this theorem is well known, and can be found in virtually any linear algebra textbook.
A matrix $A \in S_n(\mathbf{R})$ is said to be semidefinite, and we write $A \geqslant 0$, if $A[l] \geqslant 0$ for any column vector $l \in M_{n,1}(\mathbf{R})$. If $A \geqslant 0$ and the equality $A[l] = 0$ holds only for $l = 0$, then $A$ is said to be positive definite, and we write $A > 0$. From the definition it follows that if $A$ is semidefinite, then so is any equivalent matrix; and similarly if $A$ is positive definite. Clearly a diagonal matrix is semidefinite (or positive definite) if and only if all of the diagonal entries are nonnegative (respectively, positive). Thus, Theorem 1.4 implies that a matrix is semidefinite if and only if all of its eigenvalues are nonnegative, and it is positive definite if and only if all of its eigenvalues are positive.
If $A = (a_{ij}) \geqslant 0$, then, since
$$A[e_\alpha \pm e_\beta] = a_{\alpha\alpha} \pm 2a_{\alpha\beta} + a_{\beta\beta} \geqslant 0,$$
where $e_1,\dots,e_n$ are the columns of the identity matrix $E_n$, it follows that $|a_{\alpha\beta}| \leqslant (a_{\alpha\alpha}+a_{\beta\beta})/2 \leqslant \sigma(A)$, where $\sigma(A)$ denotes the trace of $A$. Hence,
$$\text{(1.5)}\qquad |a_{\alpha\beta}| \leqslant \sigma(A) \qquad (A = (a_{ij}) \geqslant 0,\ \alpha,\beta = 1,\dots,n).$$
Furthermore, let $A - B \geqslant 0$ and $R \geqslant 0$, in which case $R = \Lambda[C]$, where $C$ is a nonsingular matrix and $\Lambda$ is a diagonal matrix with nonnegative entries. Then
$$\sigma((A-B)R) = \sigma((A-B)[{}^tC]\Lambda) \geqslant 0,$$
since $(A-B)[{}^tC]$ is a semidefinite matrix, and so all of its diagonal entries are nonnegative. Thus, we have
$$\text{(1.6)}\qquad \sigma(AR) \geqslant \sigma(BR) \qquad\text{for } A - B \geqslant 0 \text{ and } R \geqslant 0.$$
Finally, we obviously have
$$\text{(1.7)}\qquad A + B \geqslant 0 \quad\text{if } A \geqslant 0,\ B \geqslant 0; \qquad A + B > 0 \quad\text{if } A > 0,\ B \geqslant 0.$$
We write $A \geqslant B$ (respectively, $A > B$) if $A - B \geqslant 0$ (respectively, $A - B > 0$).
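The facts (1.5)-(1.7) are easy to spot-check numerically (the helper names are ours); semidefinite matrices are produced as ${}^tLL$, which satisfy $({}^tLL)[x] = |Lx|^2 \geqslant 0$ by definition, and $\sigma(\;)$ is taken to be the trace:

```python
from random import randint, seed

def matmul(x, y):
    return [[sum(x[i][k] * y[k][j] for k in range(len(y)))
             for j in range(len(y[0]))] for i in range(len(x))]

def gram(l):                       # tL * L, always semidefinite
    return matmul([list(c) for c in zip(*l)], l)

def trace(m):
    return sum(m[i][i] for i in range(len(m)))

def qform(m, x):                   # the value A[x] = tx A x
    n = len(m)
    return sum(x[i] * m[i][j] * x[j] for i in range(n) for j in range(n))

seed(1)
n = 4
for _ in range(50):
    rand_mat = lambda: [[randint(-3, 3) for _ in range(n)] for _ in range(n)]
    A, B, R = gram(rand_mat()), gram(rand_mat()), gram(rand_mat())
    S = [[A[i][j] + B[i][j] for j in range(n)] for i in range(n)]  # A + B
    # (1.5): every entry of a semidefinite matrix is bounded by its trace.
    assert all(abs(A[i][j]) <= trace(A) for i in range(n) for j in range(n))
    # (1.6) applied to A + B >= B: sigma((A + B)R) >= sigma(BR).
    assert trace(matmul(S, R)) >= trace(matmul(B, R))
    # (1.7): the sum of semidefinite matrices is semidefinite.
    x = [randint(-5, 5) for _ in range(n)]
    assert qform(S, x) >= 0
```

All arithmetic here is exact over the integers, so the assertions test the inequalities themselves rather than floating-point artifacts.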
THEOREM 1.5. The following three conditions on a matrix $A = (a_{ij}) \in S_n(\mathbf{R})$ are equivalent:
(1) $A > 0$;
(2) $\det A^{(r)} = \det\begin{pmatrix}a_{11} & \dots & a_{1r}\\ \vdots & & \vdots\\ a_{r1} & \dots & a_{rr}\end{pmatrix} > 0$ for $r = 1,\dots,n$;
(3) $A = {}^tLL$, where $L = \begin{pmatrix}l_{11} & l_{12} & \dots & l_{1n}\\ 0 & l_{22} & \dots & l_{2n}\\ & & \ddots & \vdots\\ 0 & \dots & 0 & l_{nn}\end{pmatrix} \in GL_n(\mathbf{R})$.

PROOF. (1) $\to$ (2). If $A > 0$, then all of the matrices $A^{(1)} = (a_{11})$, $A^{(2)} = \begin{pmatrix}a_{11} & a_{12}\\ a_{21} & a_{22}\end{pmatrix}, \dots, A^{(n)} = A$ are also obviously positive definite. This implies that all of the determinants of these matrices, which are products of eigenvalues, are positive.
(2) $\to$ (3). We use induction on $n$. The case $n = 1$ is obvious. Suppose that the implication holds for $(n-1)\times(n-1)$ matrices, $n \geqslant 2$. If $A \in S_n(\mathbf{R})$ satisfies (2), then, by the induction assumption,
$$A^{(n-1)} = {}^tL'L', \qquad\text{where } L' = \begin{pmatrix}l'_{11} & l'_{12} & \dots & l'_{1,n-1}\\ 0 & l'_{22} & \dots & l'_{2,n-1}\\ & & \ddots & \vdots\\ 0 & 0 & \dots & l'_{n-1,n-1}\end{pmatrix} \in GL_{n-1}(\mathbf{R}).$$
Since $\det A^{(n-1)} \neq 0$, it follows from (1.2) that $A$ can be written in the form
$$A = \begin{pmatrix}A^{(n-1)} & 0\\ 0 & d_n\end{pmatrix}\left[\begin{pmatrix}E_{n-1} & h\\ 0 & 1\end{pmatrix}\right],$$
where $h = (A^{(n-1)})^{-1}A_2$ and $d_n = \det A/\det A^{(n-1)} > 0$. Then
$$A = {}^tLL, \qquad\text{where } L = \begin{pmatrix}L' & 0\\ 0 & \sqrt{d_n}\end{pmatrix}\begin{pmatrix}E_{n-1} & h\\ 0 & 1\end{pmatrix} = \begin{pmatrix}L' & L'h\\ 0 & \sqrt{d_n}\end{pmatrix}.$$
(3) $\to$ (1). This is obvious, since (3) implies that $A$ is equivalent to the identity matrix, which is positive definite. $\square$

In particular, Theorem 1.5 implies that, if Pₙ denotes the subset of all positive
definite matrices in Sₙ(R), then Pₙ is open in Sₙ(R).
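Conditions (2) and (3) of Theorem 1.5 are easy to experiment with numerically. The following Python sketch (not part of the original text; the function names are ours) checks condition (2) by computing the leading principal minors, and realizes condition (3) by a standard Cholesky factorization, assuming small, exactly representable inputs:

```python
import math

def det(A):
    # determinant by cofactor expansion along the first row (fine for small matrices)
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(n))

def is_positive_definite(A):
    # condition (2) of Theorem 1.5: every leading principal minor det A^(r) is positive
    n = len(A)
    return all(det([row[:r] for row in A[:r]]) > 0 for r in range(1, n + 1))

def cholesky_upper(A):
    # condition (3): an upper-triangular L with (transpose of L)*L = A, assuming A > 0;
    # we build the lower-triangular factor C with A = C*(transpose of C), then transpose
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(C[i][k] * C[j][k] for k in range(j))
            C[i][j] = math.sqrt(A[i][i] - s) if i == j else (A[i][j] - s) / C[j][j]
    return [[C[j][i] for j in range(n)] for i in range(n)]
```

For example, `is_positive_definite([[2, 1], [1, 2]])` holds (minors 2 and 3), while `[[1, 2], [2, 1]]` fails because its determinant is negative.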
THEOREM 1.6. Let A, B ∈ Sₙ(R). Suppose that A > 0. Then there exists a matrix
C ∈ GLₙ(R) such that A[C] = Eₙ and the matrix B[C] is diagonal.

PROOF. According to the previous theorem, A is equivalent to the identity matrix:
A[C₁] = Eₙ. By Theorem 1.4, there exists an orthogonal matrix C₂ such that the
matrix B[C₁][C₂] = B[C₁C₂] is diagonal. Since A[C₁C₂] = ᵗC₂C₂ = Eₙ, it follows
that C = C₁C₂ is the required matrix. □

We conclude this section by proving some useful determinant inequalities.

THEOREM 1.7. One has:

(1.8) det A ≤ a₁₁a₂₂ ⋯ aₙₙ, if A = (a_{ij}) ∈ Sₙ(R), A ≥ 0;

(1.9) det(A + B) ≥ det A + det B, if A, B ∈ Sₙ(R), A, B ≥ 0;

(1.10) |det(A + iB)| ≥ det B, if A, B ∈ Sₙ(R), B > 0.

PROOF. If A is semidefinite but not positive definite, then det A = 0, and (1.8) is
obvious. The inequality (1.8) holds for n = 1. Suppose that (1.8) has been proved for
positive definite (n − 1) × (n − 1)-matrices, n ≥ 2. If A is a positive definite n × n-matrix,
then A^{(n−1)} > 0, and we can use (1.2) with A₁ = A^{(n−1)} and the induction assumption
to obtain

det A = det A^{(n−1)} · (aₙₙ − (A^{(n−1)})⁻¹[A₂]) ≤ a₁₁ ⋯ a_{n−1,n−1}aₙₙ.

This proves (1.8).

The inequality (1.9) holds if det A = det B = det(A + B) = 0. But if, for example,
det(A + B) ≠ 0, then A + B > 0 and, by Theorem 1.6, the two matrices A + B and
B (and hence the two matrices A and B) can be simultaneously reduced to diagonal
form. This reduces the proof of (1.9) to the case of semidefinite diagonal matrices,
where it is obvious.
Using Theorem 1.6, we reduce (1.10) to the case when B = Eₙ and A is a diagonal
matrix. In that case it is obvious. □
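The three inequalities of Theorem 1.7 can be spot-checked numerically. A minimal Python sketch (ours, not the book's; the two test matrices are arbitrary positive definite examples):

```python
def det(A):
    # determinant by cofactor expansion; works for real or complex entries
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(n))

def add(A, B, scale=1):
    # entrywise A + scale*B; scale=1j forms the complex matrix A + iB of (1.10)
    return [[x + scale * y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[2, 1], [1, 2]]   # positive definite
B = [[3, 0], [0, 1]]   # positive definite

assert det(A) <= A[0][0] * A[1][1]                # (1.8): det A <= product of diagonal
assert det(add(A, B)) >= det(A) + det(B)          # (1.9)
assert abs(det(add(A, B, scale=1j))) >= det(B)    # (1.10)
```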
APPENDIX 2

Quadratic Spaces

It is often convenient to use geometrical language when solving problems in the
theory of quadratic forms, especially in cases when the ground ring is a finite field. Here
we give a basic description of the geometrical approach, along with some applications.

1. Geometrical language. Let K be a commutative ring, and let V be a free K-
module having finite dimension over K. A function f : V → K is said to be quadratic
if in some basis e₁, …, eₙ of V over K it is given by a quadratic form in the coordinates:

(2.1) f( Σ_{i=1}^{n} uᵢeᵢ ) = q(u₁, …, uₙ)  (u₁, …, uₙ ∈ K),

where

q(x₁, …, xₙ) = Σ_{1≤i≤j≤n} q_{ij}xᵢxⱼ  (q_{ij} ∈ K).
This definition clearly does not depend on the choice of basis. We use the term
quadratic space (over K) to denote a pair ( V, f) consisting of a free K-module V of
finite dimension and a quadratic function f on V. The quadratic form q is called the
form of the space (V, f) in the basis e1, ... , en. A different choice of basis clearly leads
to a form that is equivalent to q over K (see §1.1 of Chapter 1). Conversely, quadratic
forms that are equivalent to one another over K may be regarded as the forms of a
fixed quadratic space in different bases. Thus, the class {q} = {q}_K of q over K is
uniquely determined by the space (V, f). We call the class {q}_K the type of the space
(V, f).
By a morphism φ : (V, f) → (V₁, f₁) between two quadratic spaces over a ring K
we mean a linear map φ : V → V₁ that satisfies the condition f₁(φ(v)) = f(v) for all
v ∈ V. A one-to-one surjective morphism is called an isomorphism, and in that case
we say that the two spaces are isomorphic. It is clear that two spaces are isomorphic if
and only if they correspond to the same class of forms.
Let (V, f) be a quadratic space over K. Given a pair of vectors u, v ∈ V, we define
the scalar product u · v ∈ K by setting

(2.2) u · v = f(u + v) − f(u) − f(v).

We say that the vectors u, v ∈ V are orthogonal if u · v = 0. If we write f in the form
(2.1), we obtain an expression for the scalar product in terms of the coordinates:

(2.3) u · v = Σ_{1≤i≤j≤n} q_{ij}(uᵢvⱼ + vᵢuⱼ) = Σ_{i,j=1}^{n} Q_{ij}uᵢvⱼ,

where Q_{ij} = Q_{ji} = q_{ij} for 1 ≤ i < j ≤ n, and Q_{ii} = 2q_{ii} for 1 ≤ i ≤ n. In particular,
this implies that the scalar product is linear in each factor. The matrix

Q = (Q_{ij}) = (eᵢ · eⱼ) ∈ Sₙ(K)
is called the matrix of f (or of the scalar product (2.2)) in the basis e₁, …, eₙ; it is the
same as the matrix of q in (1.3) of Chapter 1. It is easy to see that, if we make a change
of basis from e₁, …, eₙ to eᵢ′ = Σ_{j=1}^{n} a_{ij}eⱼ, the matrix Q is replaced by the matrix

(2.4) Q′ = (eᵢ′ · eⱼ′) = Q[ᵗA], where A = (a_{ij}).

This implies that the coset det Q · (K*)² of the number det Q modulo the group of
squares of units of the ring K is independent of the choice of basis. This coset is called
the determinant of the space (V, f) and is denoted d(V) = d(V, f). If d(V) is a unit
of the ring K, we say that the space (V, f) is nonsingular.
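The passage from the quadratic function f to the scalar product (2.2) and the matrix Q can be traced on a tiny example. The following Python sketch (ours; the sample form is arbitrary) takes K = Z and q(x₁, x₂) = x₁² + x₁x₂ + x₂², i.e. q₁₁ = q₁₂ = q₂₂ = 1 in the notation of (2.1):

```python
def f(v):
    # the quadratic function given by q(x1, x2) = x1^2 + x1*x2 + x2^2 over Z
    x1, x2 = v
    return x1 * x1 + x1 * x2 + x2 * x2

def dot(u, v):
    # scalar product (2.2): u.v = f(u + v) - f(u) - f(v)
    w = tuple(a + b for a, b in zip(u, v))
    return f(w) - f(u) - f(v)

e = [(1, 0), (0, 1)]
Q = [[dot(e[i], e[j]) for j in range(2)] for i in range(2)]
assert Q == [[2, 1], [1, 2]]     # Q_ii = 2*q_ii and Q_12 = Q_21 = q_12, as in (2.3)

det_Q = Q[0][0] * Q[1][1] - Q[0][1] * Q[1][0]
assert det_Q == 3                # a representative of the determinant d(V, f)
```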
A quadratic space (V₁, f₁) is said to be a subspace of (V, f) if V₁ ⊂ V and f₁
coincides with the restriction of f to V₁. We say that (V, f) splits into a direct sum of
(Vᵢ, fᵢ) ⊂ (V, f) (1 ≤ i ≤ t) and write

(V, f) = ⊕_{i=1}^{t} (Vᵢ, fᵢ),

if every element v ∈ V can be uniquely written in the form v = v₁ + ⋯ + vₜ, where
vᵢ ∈ Vᵢ, and f(v) = f₁(v₁) + ⋯ + fₜ(vₜ). In this case, if we write f in a basis of V
obtained as a union of bases of V₁, …, Vₜ, then we easily see that

(2.5) d(V, f) = ∏_{i=1}^{t} d(Vᵢ, fᵢ).

Suppose that K is a field. By the orthogonal complement of the subspace (V₁, f₁) ⊂
(V, f) (in the space (V, f)) we mean the space (V₁, f₁)^⊥ = (V₁^⊥, f₁^⊥), where

V₁^⊥ = {u ∈ V; u · v = 0 for all v ∈ V₁},

and f₁^⊥ is the restriction of f to V₁^⊥. The set R(V) = V^⊥ is called the radical of the
space (V, f).

LEMMA 2.1. Let (V, f) be a quadratic space over a field. Then the conditions
R(V) = {0} and d(V, f) ≠ 0 are equivalent.

PROOF. Writing the scalar product on V in the form (2.3), we see that u =
Σᵢ uᵢeᵢ ∈ R(V) if and only if Σᵢ Q_{ij}uᵢ = 0 for j = 1, …, n. The existence of a
nonzero solution of this homogeneous system of linear equations is equivalent to the
vanishing of its determinant. □

LEMMA 2.2. Let (V₁, f₁) be a subspace of the quadratic space (V, f) over a field.
Suppose that V₁ ≠ {0} and V₁ ⊂ R(V). Then there exists a subspace (V₂, f₂) ⊂ (V, f)
such that

(2.6) (V, f) = (V₁, f₁) ⊕ (V₂, f₂).

PROOF. Let e₁, …, eᵣ be a basis of V₁. We complete this basis to a basis e₁, …, eᵣ,
e_{r+1}, …, eₙ of V. We set V₂ = {Ke_{r+1} + ⋯ + Keₙ}, and we let f₂ be the restriction
of f to V₂. We then obviously have the direct sum decomposition (2.6). □

THEOREM 2.3. Let (V₁, f₁) be a nonsingular subspace of the quadratic space (V, f)
over a field. Then one has the direct sum decomposition

(2.7) (V, f) = (V₁, f₁) ⊕ (V₁, f₁)^⊥.

PROOF. We choose a basis e₁, …, eₙ of V in such a way that the first d = dim V₁
basis vectors e₁, …, e_d form a basis of V₁. Then the condition u = Σ_{j=1}^{n} uⱼeⱼ ∈ V₁^⊥
is equivalent to the system of equations

eᵢ · u = Σ_{j=1}^{n} uⱼ eᵢ · eⱼ = 0  (i = 1, …, d),

whose matrix has rank d, since the d × d-minor made up of the first d columns
is obviously equal to d(V₁) ≠ 0. Hence dim V₁^⊥ = n − d. On the other hand,
V₁ ∩ V₁^⊥ = {0}, since V₁ ∩ V₁^⊥ ⊂ R(V₁), and R(V₁) = {0} by Lemma 2.1. These
facts obviously imply that every u ∈ V can be uniquely written in the form u = v₁ + v₂
with v₁ ∈ V₁ and v₂ ∈ V₁^⊥. Since

f(u) = f(v₁ + v₂) = v₁ · v₂ + f(v₁) + f(v₂) = f(v₁) + f(v₂),

the theorem is proved. □
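The dimension count and the direct sum in Theorem 2.3 can be verified by brute force over a small finite field. A Python sketch (ours; the field F₃ and the sample form are arbitrary choices):

```python
from itertools import product

p = 3

def f(v):
    # q(x1, x2, x3) = x1^2 + x2^2 + x3^2 over F_3
    return sum(x * x for x in v) % p

def dot(u, v):
    # scalar product (2.2), reduced mod p
    w = tuple((a + b) % p for a, b in zip(u, v))
    return (f(w) - f(u) - f(v)) % p

V = list(product(range(p), repeat=3))
V1 = [(t, 0, 0) for t in range(p)]   # the line K.e1; nonsingular since e1.e1 = 2 in F_3
perp = [v for v in V if all(dot(v, u) == 0 for u in V1)]

assert len(perp) == p ** 2                           # dim V1^perp = 3 - 1 = 2
assert [v for v in V1 if v in perp] == [(0, 0, 0)]   # V1 and V1^perp meet only in 0
assert len(V1) * len(perp) == len(V)                 # so V = V1 (+) V1^perp
```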

A quadratic space is said to be reducible if it splits into a direct sum of proper
subspaces; otherwise, we say that the quadratic space is irreducible. From the definition
it follows that every quadratic space splits into a direct sum of irreducible subspaces.

THEOREM 2.4. Let (V, f) be an irreducible quadratic space over a field K. Then
dim V ≤ 2. If the characteristic of K is not 2, then dim V ≤ 1.
PROOF. First suppose that the characteristic of K is not 2. If f is identically zero on
V, then any one-dimensional subspace of (V, f) is a direct summand; hence, dim V ≤
1. Now suppose that there exists e₁ ∈ V with f(e₁) ≠ 0. Then the determinant of
the one-dimensional subspace V₁ = {Ke₁} ⊂ V is equal to e₁ · e₁ = 2f(e₁), which is
nonzero; hence, by the previous theorem, V splits into the direct sum of V₁ and V₁^⊥.
Thus, V = V₁ and dim V = 1.
We now consider the case of characteristic 2. If the scalar product is identically zero
on V, then any one-dimensional subspace is a direct summand, and hence dim V ≤ 1.
Now suppose that there exist vectors u, v ∈ V with u · v = a ≠ 0. Then obviously the
vectors u and v are linearly independent, and the determinant of the two-dimensional
space V₁ = {Ku + Kv} is equal to 4f(u)f(v) − (u · v)² = −a², which is nonzero.
If we again apply the previous theorem, we see that V = V₁ and dim V = 2. □

If (V, f) is a one-dimensional quadratic space, then obviously

(2.8) type(V, f) = {ax²}, d(V, f) = 2a, where a ∈ K,

and two spaces of types {ax²} and {bx²} are isomorphic if and only if b = at² for
some nonzero t ∈ K.

PROPOSITION 2.5. Every irreducible two-dimensional quadratic space over the field
K = Z/2Z is of type {q₊²} or {q₋²}, where

(2.9) q₊²(x₁, x₂) = x₁x₂ and q₋²(x₁, x₂) = x₁² + x₁x₂ + x₂².

Two spaces of types {q₊²} and {q₋²} are not isomorphic.
PROOF. If (V, f) is an irreducible two-dimensional quadratic space over the field
of two elements, then we saw at the end of the proof of Theorem 2.4 that V contains
a basis e₁, e₂ with e₁ · e₂ = 1. If f(e₁) = f(e₂) = 1, then V is of type {q₋²}. If
one of these values is equal to 1 and the other is equal to zero, say f(e₁) = 0 and
f(e₂) = 1, then the change of basis from e₁, e₂ to e₁, e₁ + e₂ reduces this case to the
case where both values vanish. Finally, if f(e₁) = f(e₂) = 0, then V is of type {q₊²}.
Two spaces of types {q₊²} and {q₋²} are
not isomorphic, since the quadratic function for the first space vanishes on all nonzero
vectors except for one, while the quadratic function for the second space is nonzero
on all nonzero vectors. Since the determinant of any one-dimensional space over the
field Z/2Z is zero, while the determinant of a space of type {q₊²} or {q₋²} is nonzero,
it follows that our spaces are irreducible (see (2.5)). □
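The value-counting argument that distinguishes {q₊²} from {q₋²} is a three-line computation. A Python sketch (ours) enumerating the nonzero vectors of F₂²:

```python
from itertools import product

def q_plus(x1, x2):
    return (x1 * x2) % 2                         # q+^2 of (2.9)

def q_minus(x1, x2):
    return (x1 * x1 + x1 * x2 + x2 * x2) % 2     # q-^2 of (2.9)

nonzero = [v for v in product(range(2), repeat=2) if v != (0, 0)]
# q+ vanishes on all nonzero vectors except one; q- is nonzero on every nonzero vector
assert [q_plus(*v) for v in nonzero].count(0) == 2
assert [q_minus(*v) for v in nonzero].count(0) == 0
```

Since no bijection of F₂² can match these value distributions, the two types are indeed non-isomorphic.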

If the number 2 is a unit of the ring K, then the quadratic function f(u) can be
recovered from the scalar product u · v using the formula f(u) = (1/2)u · u; hence,
the quadratic space may be regarded in the usual way simply as a vector space with
bilinear scalar product. Otherwise, it may very well happen that the scalar product is
identically zero while the quadratic function is nonzero. Thus, in the general case it is
more convenient to start out with the quadratic function.
2. Nondegenerate spaces. A quadratic space (V, f) is said to be degenerate if it
splits into a direct sum of the form

(V, f) = (V₁, 0) ⊕ (V₂, f₂),

where dim V₁ ≥ 1 and 0 denotes the zero function on V₁. If there is no such direct
sum decomposition, then the space is said to be nondegenerate.

LEMMA 2.6. If the quadratic space (V, f) is nonsingular, then it is nondegenerate.

PROOF. Since d(V₁, 0) = 0, the lemma follows from (2.5). □

LEMMA 2.7. A nondegenerate quadratic space (V, f) over a field of characteristic
≠ 2 is nonsingular.

PROOF. Suppose that d(V, f) = 0. Then R(V) ≠ {0} by Lemma 2.1. Let f₁
be the restriction of f to R(V). By Lemma 2.2, the subspace (R(V), f₁) is a direct
summand in (V, f). On the other hand, if e ∈ R(V), then f(e) = (1/2)e · e = 0, so
that f₁ = 0. Hence (V, f) is degenerate. □

We now turn to the case of fields of characteristic 2.



LEMMA 2.8. If (V, f) is any quadratic space of odd dimension over a field of charac-
teristic 2, then the determinant d(V, f) is zero.

PROOF. Let Q = (Q_{ij}) be the matrix of f in some basis e₁, …, eₙ, and let A_{ij} be
the cofactors of Q. If we expand det Q along the ith row and sum these expansions as
i goes from 1 to n, we obtain

n det Q = Σ_{i,j=1}^{n} Q_{ij}A_{ij} = Σ_{i=1}^{n} Q_{ii}A_{ii} = 0,

since Q_{ij} = Q_{ji}, A_{ij} = A_{ji}, and Q_{ii} = 2f(eᵢ) = 0; since n is odd, this implies that
det Q = 0. □

THEOREM 2.9. Let (V, f) be a nondegenerate quadratic space over the field K =
Z/2Z. If n = dim V is even, then (V, f) is nonsingular and is of one of the types {q₊ⁿ},
{q₋ⁿ}, where

(2.10) q₊ⁿ(x₁, …, xₙ) = x₁x₂ + x₃x₄ + ⋯ + x_{n−1}xₙ,

(2.11) q₋ⁿ(x₁, …, xₙ) = x₁x₂ + ⋯ + x_{n−3}x_{n−2} + x_{n−1}² + x_{n−1}xₙ + xₙ²;

if n is odd, then the space (V, f) is of type {qⁿ}, where

(2.12) qⁿ(x₁, …, xₙ) = x₁x₂ + x₃x₄ + ⋯ + x_{n−2}x_{n−1} + xₙ².

PROOF. First suppose that n is even. Suppose that d(V, f) = 0. Then by Lemma
2.1 there exists a nonzero vector e₁ such that e₁ · v = 0 for all v ∈ V. We complete
e₁ to a basis e₁, e₂, …, eₙ of V, and we let V₂ = {Ke₂ + ⋯ + Keₙ}. Since n − 1
is odd, it follows by Lemma 2.8 that d(V₂, f₂) = 0, where f₂ is the restriction of
f to V₂. Again applying Lemma 2.1, we find that the space V₂ contains a nonzero
vector e₁′ that is orthogonal to V₂. From the choice of e₁ and e₁′ it obviously follows
that the space V₁ = {Ke₁ + Ke₁′} is two-dimensional, V₁ is a direct summand in V,
and e₁ · e₁′ = 0. In addition, the space (V₁, f₁), where f₁ is the restriction of f
to V₁, is degenerate, since it contains a one-dimensional subspace (V₀, 0) that is a
direct summand. Namely, if, say, f(e₁) = 0, then we can take V₀ = {Ke₁}; while if
f(e₁) = f(e₁′) = 1, then we take V₀ = {K(e₁ + e₁′)}. Thus, the entire space (V, f)
is degenerate. We conclude that we must have had d(V, f) ≠ 0. By Theorem 2.4, the
space (V, f) can be decomposed into a direct sum of irreducible subspaces (Vᵢ, fᵢ)
(i = 1, …, k) of dimension dim Vᵢ ≤ 2. If dim Vᵢ = 1 for some i, then obviously
d(Vᵢ, fᵢ) = 0, and hence d(V, f) = ∏ᵢ d(Vᵢ, fᵢ) = 0 (see (2.5)). Thus, dim Vᵢ = 2
for all i. Then by Proposition 2.5 each of the spaces (Vᵢ, fᵢ) is of type {q₊²} or {q₋²},
and to complete the proof of the theorem in the case of even n it suffices to verify that
the direct sum of two spaces of type {q₋²} is isomorphic to the direct sum of two spaces
of type {q₊²}. If (V, f) is the direct sum of two spaces of type {q₋²}, then V has a
basis e₁, e₂, e₁′, e₂′ such that f(e₁) = f(e₂) = f(e₁′) = f(e₂′) = 1, e₁ · e₂ = e₁′ · e₂′ = 1,
and eᵢ · eⱼ′ = 0 for i, j = 1, 2. Then the subspace V₁ with basis e₁ + e₂, e₁ + e₁′ and
the subspace V₂ with basis e₁ + e₂ + e₁′ + e₂′, e₁ + e₂ + e₂′ are both easily seen to be
subspaces of type {q₊²}, and V is equal to their direct sum.
Now suppose that n is odd. By Lemmas 2.8 and 2.1, there exists a nonzero
vector e = eₙ ∈ R(V). We set V₂ = {Keₙ}, and we let f₂ be the restriction of f to
V₂. Then, by Lemma 2.2, (V, f) = (V₁, f₁) ⊕ (V₂, f₂) for some (V₁, f₁) ⊂ (V, f).
Since the space (V, f) is nondegenerate, so are the subspaces (V₁, f₁) and (V₂, f₂).
Then, by what was proved above, the space (V₁, f₁) is of type {q₊ⁿ⁻¹} or {q₋ⁿ⁻¹}, and
f(eₙ) = 1. In the first case (V, f) is clearly of type {q₊ⁿ⁻¹ + xₙ²} = {qⁿ}. In the
second case (V₁, f₁) can be decomposed into the direct sum of subspaces (V₃, f₃) and
(V₄, f₄) of types {q₊ⁿ⁻³} and {q₋²}, respectively. Then the sum (V₄, f₄) ⊕ (V₂, f₂)
has a basis v₁, v₂, eₙ satisfying the following conditions: f(v₁) = f(v₂) = f(eₙ) = 1,
v₁ · eₙ = v₂ · eₙ = 0, and v₁ · v₂ = 1. If we replace this basis by v₁ + eₙ, v₂ + eₙ, eₙ,
we see that our direct sum is of type {x₁x₂ + x₃²} = {q³}. Hence, the full space
(V, f) = (V₃, f₃) ⊕ (V₄, f₄) ⊕ (V₂, f₂) is of type {q₊ⁿ⁻³ + x_{n−2}x_{n−1} + xₙ²} = {qⁿ}. □

3. Gauss sums. By the Gauss sum of the quadratic space (V, f) over K = F_p =
Z/pZ we mean the following sum of pth roots of unity:

(2.13) G_p(α, f) = Σ_{v∈V} exp(2πiαf(v)/p),

where α ∈ K and each element αf(v) ∈ F_p = Z/pZ is regarded as an integer of the
corresponding residue class modulo p. If (V, f) is of type {q(x₁, …, xₙ)}, then the
Gauss sum G_p(α, f) is obviously equal to the Gauss sum of the form αq:

G_p(α, f) = G_p(αq) = Σ_{a₁,…,aₙ∈Z/pZ} exp(2πiαq(a₁, …, aₙ)/p).

Since for fixed α the Gauss sum (2.13) depends only on the set of values of f, it follows
that isomorphic spaces (and equivalent forms q) correspond to the same Gauss sum.
Obviously,

(2.14) G_p(α, f) = G_p(αq) = p^{dim V}, if α = 0 or f = 0.
Furthermore, if (V, f) = (V₁, f₁) ⊕ ⋯ ⊕ (Vₜ, fₜ), then

(2.15) G_p(α, f) = Σ_{vⱼ∈Vⱼ (1≤j≤t)} exp(2πiα(f₁(v₁) + ⋯ + fₜ(vₜ))/p)
= ∏_{j=1}^{t} Σ_{vⱼ∈Vⱼ} exp(2πiαfⱼ(vⱼ)/p) = ∏_{j=1}^{t} G_p(α, fⱼ),

which reduces the calculation of Gauss sums to the case of irreducible spaces.
Before proceeding to the computations, we recall the definition and basic prop-
erties of the Legendre symbol. Suppose that p is an odd prime, K = F_p, K* is the
multiplicative group of the field K, and (K*)² = {d²; d ∈ K*} is the subgroup of
squares. Since the kernel of the homomorphism d → d² from K* to (K*)² consists
of 1 and −1 ≠ 1, it follows that (K*)² has index 2 in K*. Let a → (a/p) denote the
unique nontrivial character of the quotient group K*/(K*)², regarded as a function
on K*. In other words, (a/p) = 1 or −1 depending on whether or not a is a square. If
a is an integer not divisible by p, then the Legendre symbol (a/p) is defined as (ā/p),
where ā denotes the residue class of a in Z/pZ. From the definition it follows that the
Legendre symbol has the multiplicative property

(2.16) (ab/p) = (a/p)(b/p), if (ab, p) = 1.

These relations still hold for all a and b if we agree to set (a/p) = 0 when a ≡ 0 (mod p).


Since the group K* has the same number of squares as nonsquares, it follows that

(2.17) Σ_{a=1}^{p−1} (a/p) = 0.
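Both (2.16) and (2.17) can be checked directly. A Python sketch (ours) computing the Legendre symbol by Euler's criterion, a^((p−1)/2) ≡ (a/p) (mod p):

```python
def legendre(a, p):
    # Legendre symbol (a/p) for an odd prime p, via Euler's criterion
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

p = 13
# multiplicativity (2.16) on all of K*
assert all(legendre(a * b, p) == legendre(a, p) * legendre(b, p)
           for a in range(1, p) for b in range(1, p))
# (2.17): the symbol sums to zero over K*
assert sum(legendre(a, p) for a in range(1, p)) == 0
```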

We now return to the calculation of Gauss sums.


LEMMA 2.10. One has the following formulas:

(2.18) G₂(x²) = 0, G₂(q₊²) = 2, G₂(q₋²) = −2,

where q±² are the forms (2.9);

(2.19) G_p(bx²) = (b/p) G_p(x²), G_p(x²)² = (−1/p) p,

where p is an odd prime and b ≢ 0 (mod p).

PROOF. (2.18) is easily verified by direct computation. The formulas in (2.19)
were verified in the course of the proof of Proposition 4.9 of Chapter 1. □

PROPOSITION 2.11. Let (V, f) be a nondegenerate quadratic space over K = F_p, and
let α ∈ K*. If dim V = 2k, then

(2.20) G_p(α, f) = ε(V, f) p^k,

where

(2.21) ε(V, f) = ((−1)^k d(V, f)/p), if p ≠ 2;
       ε(V, f) = 1, if p = 2 and (V, f) is of type {q₊ⁿ};
       ε(V, f) = −1, if p = 2 and (V, f) is of type {q₋ⁿ}.

If dim V = 2k + 1, then

(2.22) G_p(α, f) = (α/p) G_p(x²) ((−1)^k 2 d(V, f)/p) p^k for p ≠ 2,

and

G₂(1, f) = 0.
PROOF. In the case p ≠ 2 it follows from Theorem 2.4 that the space (V, f) splits
into the direct sum of n one-dimensional subspaces (V₁, f₁), …, (Vₙ, fₙ), each of
which is nondegenerate, because (V, f) is nondegenerate. Then (Vᵢ, fᵢ) is of type
{aᵢx²}, where aᵢ ≠ 0, and we can use (2.15), (2.16), and (2.19) to obtain

G_p(α, f) = ∏_{i=1}^{n} G_p(α, fᵢ) = ∏_{i=1}^{n} G_p(αaᵢx²) = { ∏_{i=1}^{n} (αaᵢ/p) } G_p(x²)ⁿ
= (α/p)ⁿ (2/p)ⁿ (2a₁ ⋯ 2aₙ/p) G_p(x²)ⁿ = (α/p)ⁿ (2/p)ⁿ (d(V, f)/p) G_p(x²)ⁿ

(according to (2.5) we have d(V, f) = d(V₁, f₁) ⋯ d(Vₙ, fₙ) = 2a₁ ⋯ 2aₙ). Substi-
tuting G_p(x²)² = (−1/p) p, we obtain the required formulas in the case of
odd p.

The formulas for G₂(1, f) follow from (2.15) and (2.18), since in the case of type
{q₊²ᵏ} the space (V, f) is the direct sum of k spaces of type {q₊²}, in the case of type
{q₋²ᵏ} it is the direct sum of k − 1 spaces of type {q₊²} and one space of type {q₋²}, and
in the case of type {q²ᵏ⁺¹} the direct summands include a space of type {x²} with zero
Gauss sum. □
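Formula (2.20) is easy to test numerically for a concrete space. The Python sketch below (ours) takes the hyperbolic plane q(x₁, x₂) = x₁x₂ over F₅, whose Gram matrix ((0, 1), (1, 0)) has determinant −1; by (2.21), ε = ((−1)·(−1)/p) = 1, so (2.20) predicts G_p(α, f) = p for every α ∈ K*:

```python
import cmath
from itertools import product

p = 5

def gauss_sum(alpha, q, n):
    # Gauss sum (2.13) of the form alpha*q over F_p in n variables
    return sum(cmath.exp(2j * cmath.pi * ((alpha * q(v)) % p) / p)
               for v in product(range(p), repeat=n))

hyp = lambda v: v[0] * v[1]      # q(x1, x2) = x1*x2, a hyperbolic plane (k = 1)

for alpha in range(1, p):
    assert abs(gauss_sum(alpha, hyp, 2) - p) < 1e-9   # (2.20) with epsilon = 1
```

For α = 0 the sum degenerates to p^{dim V} = 25, in accordance with (2.14).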

The number ε(V, f) is called the sign of the nondegenerate even-dimensional qua-
dratic space (V, f) over the field F_p. From (2.15) and (2.20) it follows that the sign is
multiplicative in the sense that

(2.23) ε((V₁, f₁) ⊕ (V₂, f₂)) = ε(V₁, f₁) ε(V₂, f₂),

if (Vᵢ, fᵢ) are nondegenerate even-dimensional spaces.
The Jacobi symbol is a generalization of the Legendre symbol. For any odd b = p₁ ⋯ pᵣ with
prime divisors p₁, …, pᵣ it is defined by setting

(a/b) = (a/p₁) ⋯ (a/pᵣ), where (a, b) = 1.

4. Isotropy subspaces of nondegenerate spaces over residue fields. A nonzero qua-
dratic space with zero quadratic form is called an isotropy space.

PROPOSITION 2.12. Suppose that (V, f) is a nondegenerate quadratic space over the
field K = F_p, where p is a prime, V₁ = (V₁, 0) is an isotropy subspace of (V, f), and
e₁, …, eᵣ is an arbitrary basis of V₁. Then there exist vectors e₁′, …, eᵣ′ ∈ V satisfying
the conditions:
(1) eᵢ · eᵢ′ = 1 and f(eᵢ′) = 0 for i = 1, …, r;
(2) the subspaces (P₁, f₁), …, (Pᵣ, fᵣ), where Pᵢ = {Keᵢ + Keᵢ′} and fᵢ is the
restriction of f to Pᵢ, are pairwise orthogonal.
We first prove a lemma.

LEMMA 2.13. If (V, f) is an arbitrary quadratic space with d(V, f) ≠ 0 and
(V₁, f₁) ⊂ (V, f) is any subspace, then

dim V₁^⊥ = dim V − dim V₁, (V₁^⊥)^⊥ = V₁.

PROOF. We choose a basis e₁, …, eₙ of V in such a way that the first r vectors form
a basis of V₁. Since d(V, f) = det(eᵢ · eⱼ) ≠ 0, the rows of the matrix (eᵢ · eⱼ) are linearly
independent. In particular, the first r rows of this matrix are linearly independent. This
implies that the system of r linear equations u · e₁ = 0, …, u · eᵣ = 0 in the coordinates
of the vector u = Σᵢ uᵢeᵢ ∈ V₁^⊥ has n − r linearly independent solutions; this proves
the dimension formula in the lemma. From this formula it follows that

dim(V₁^⊥)^⊥ = dim V − (dim V − dim V₁) = dim V₁.

On the other hand, we obviously have V₁ ⊂ (V₁^⊥)^⊥. □

PROOF OF THE PROPOSITION. We first use induction on r to treat the case when
p ≠ 2 or n = dim V is even. Since f(e₁) = 0 and (V, f) is nondegenerate, there exists v ∈ V
with e₁ · v = a ≠ 0 (see Lemma 2.2). We set v₁ = a⁻¹v and e₁′ = v₁ − f(v₁)e₁. Then
e₁ · e₁′ = e₁ · v₁ = 1 and f(e₁′) = v₁ · (−f(v₁)e₁) + f(v₁) = 0, which gives the result
in the case r = 1. Suppose that r > 1 and the proposition has already been proved
for (r − 1)-dimensional subspaces. Set V₀ = {Ke₁ + ⋯ + Ke_{r−1}}. From Lemma 2.7
and Theorem 2.9 it follows that in the case under consideration d(V, f) ≠ 0. Hence,
by Lemma 2.13, (V₀^⊥)^⊥ = V₀. This implies that the vector eᵣ, which obviously lies in
V₀^⊥ but not in V₀, is therefore not contained in (V₀^⊥)^⊥. Thus, there exists u ∈ V₀^⊥
with eᵣ · u = β ≠ 0. If we replace u by eᵣ′ = u₁ − f(u₁)eᵣ, where u₁ = β⁻¹u,
we obtain a vector eᵣ′ ∈ V₀^⊥ that satisfies the conditions eᵣ · eᵣ′ = eᵣ · u₁ = 1 and
f(eᵣ′) = u₁ · (−f(u₁)eᵣ) + f(u₁) = 0. We set Pᵣ = {Keᵣ + Keᵣ′}. Since Pᵣ ⊂ V₀^⊥, it
follows that V₀ ⊂ Pᵣ^⊥. Since the plane Pᵣ has determinant −1, it follows from Theorem
2.3, the relations (2.5), and Lemma 2.6 that Pᵣ^⊥ is a nondegenerate space of dimension
n − 2. By the induction assumption, there exist vectors e₁′, …, e′_{r−1} ∈ Pᵣ^⊥ that satisfy
the required conditions with respect to the basis e₁, …, e_{r−1} of the isotropy subspace
V₀ ⊂ Pᵣ^⊥. Then the vectors e₁′, …, e′_{r−1}, eᵣ′ satisfy the conditions of the proposition.
Now suppose that p = 2 and n = dim V is odd. Since the matrix of the quadratic form
(2.12) over K = F₂ is clearly of rank n − 1, it follows that the radical R(V) = V^⊥ is
one-dimensional, and, since V is nondegenerate, it contains a unique vector e₀ with
f(e₀) = 1. Then e₀ ∉ V₁. Hence, the vectors e₀, e₁, …, eᵣ are linearly independent,
and they can be completed to a basis e₀, e₁, …, eᵣ, …, e_{n−1} of the space V. We set
V′ = {Ke₁ + ⋯ + Ke_{n−1}}. Then

(V, f) = ({Ke₀}, f₀) ⊕ (V′, f′),

where f₀ and f′ are the restrictions of f to the corresponding spaces. From this
decomposition and the nondegeneracy of (V, f) it follows that (V′, f′) is a nondegen-
erate even-dimensional space. Since V₁ ⊂ V′, the proposition now follows from the
even-dimensional case considered above. □

A set of vectors e₁, …, eᵣ in a quadratic space (V, f) is said to be isotropic if they
are linearly independent and span an isotropy subspace.

PROPOSITION 2.14. Suppose that (V, f) is a nondegenerate quadratic space over the
field K = F_p. Then the number i(V, f; r) of isotropic sets of r vectors in (V, f) (where
r = 1, …, dim V) is equal to

(2.24) p^{r(r−1)/2} (p^k − ε)(p^{k−r} + ε)(p^{2(k−1)} − 1) ⋯ (p^{2(k−r+1)} − 1),

if dim V = 2k, where ε = ε(V, f) is the sign (2.21) of (V, f), and is equal to

(2.25) p^{r(r−1)/2} (p^{2k} − 1)(p^{2(k−1)} − 1) ⋯ (p^{2(k−r+1)} − 1),

if dim V = 2k + 1.
PROOF. We use induction on r. By definition, i(V, f; 1) is equal to the number of
nonzero vectors v ∈ V such that f(v) = 0. The total number of vectors (including
the zero vector) satisfying this condition is obviously equal to

i(V, f; 1) + 1 = p⁻¹ Σ_{α=0}^{p−1} G_p(α, f) = p^{n−1} + p⁻¹ Σ_{α=1}^{p−1} G_p(α, f),

where n = dim V. Hence, if n = 2k, then from (2.20) we obtain

i(V, f; 1) = p^{2k−1} − 1 + εp^{k−1}(p − 1) = (p^k − ε)(p^{k−1} + ε),

while if n = 2k + 1, then from (2.22) and (2.17) we have

i(V, f; 1) = p^{n−1} − 1 = p^{2k} − 1,

which proves the proposition in the case r = 1. Now suppose that r > 1 and
the proposition has already been proved for smaller isotropic sets of vectors. If
i(V, f; r − 1) = 0, then i(V, f; r) = 0. In that case, by the induction assumption,
the corresponding expression (2.24) or (2.25) for r − 1 is equal to zero; but then
the expression for r is also clearly equal to zero, and the proposition is proved. So
we suppose that i(V, f; r − 1) ≠ 0, and we let e₁, …, e_{r−1} be one of the isotropic
sets of r − 1 vectors. Since e₁, …, e_{r−1} span an isotropy subspace, it follows that
for the basis e₁, …, e_{r−1} of this subspace there exists a set of vectors e₁′, …, e′_{r−1}
with the properties in Proposition 2.12. Each of the pairwise orthogonal subspaces
(P₁, f₁), …, (P_{r−1}, f_{r−1}) in this proposition has determinant −1. It hence follows
from Theorem 2.3 that the space (V, f) splits into a direct sum of subspaces of the
form

(2.26) (V, f) = (P₁, f₁) ⊕ ⋯ ⊕ (P_{r−1}, f_{r−1}) ⊕ (V′, f′).

As a result of this decomposition, every vector v ∈ V can be uniquely written in the
form

v = Σ_{i=1}^{r−1} αᵢeᵢ + Σ_{j=1}^{r−1} βⱼeⱼ′ + v′,

where αᵢ, βⱼ ∈ K, v′ ∈ V′. Since v · eᵢ = βᵢ, it follows that v is orthogonal
to e₁, …, e_{r−1} if and only if β₁ = ⋯ = β_{r−1} = 0. In that case, since the vec-
tors e₁, …, e_{r−1}, v′ are pairwise orthogonal, we find that f(v) = f(α₁e₁) + ⋯ +
f(α_{r−1}e_{r−1}) + f(v′) = f(v′), and the condition f(v) = 0 is equivalent to the condi-
tion f(v′) = 0. Finally, the vectors e₁, …, e_{r−1}, v are linearly independent if and only
if v′ ≠ 0. Thus, the set e₁, …, e_{r−1} can be completed to an isotropic set of r vectors
e₁, …, e_{r−1}, v in exactly p^{r−1} i(V′, f′; 1) different ways, i.e.,

(2.27) i(V, f; r) = p^{r−1} i(V′, f′; 1) i(V, f; r − 1).

From (2.26) it follows that the subspace (V′, f′) is nondegenerate, and dim V′ =
n − 2(r − 1). Since each of the subspaces (Pᵢ, fᵢ) obviously has sign 1, it follows from
(2.26) and (2.23) that if n > 2(r − 1) is even, then the sign ε(V′, f′) is the same as the
sign ε(V, f). If n = 2(r − 1), then i(V, f; r) = 0 and the expression (2.24) is also zero.
If n > 2(r − 1), then, substituting into (2.27) the value of i(V′, f′; 1) found above and
using the induction assumption, we obtain the required formulas. □
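The count i(V, f; 1) can be confirmed by direct enumeration. A Python sketch (ours) for the form x₁x₂ + x₃x₄ over F₃, a sum of two hyperbolic planes, so dim V = 2k = 4 and ε = 1:

```python
from itertools import product

p, k = 3, 2

def f(v):
    # x1*x2 + x3*x4 over F_p: a sum of two hyperbolic planes, sign epsilon = 1
    return (v[0] * v[1] + v[2] * v[3]) % p

# i(V, f; 1): nonzero vectors v with f(v) = 0
i1 = sum(1 for v in product(range(p), repeat=2 * k) if f(v) == 0 and any(v))

eps = 1
# (2.24) with r = 1 reduces to (p^k - eps)(p^(k-1) + eps)
assert i1 == (p ** k - eps) * (p ** (k - 1) + eps)
```

Here both sides equal 32, matching the r = 1 case computed in the proof.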

COROLLARY 2.15. Let (V, f) be a nondegenerate quadratic space over K = F_p, and
let V₀ = (V₀, 0) be a maximal isotropy subspace in (V, f). Then

λ = dim V₀ = k, if dim V = 2k and ε(V, f) = 1;
λ = dim V₀ = k − 1, if dim V = 2k and ε(V, f) = −1;
λ = dim V₀ = k, if dim V = 2k + 1.
APPENDIX 3

Modules in Quadratic Fields and Binary Quadratic Forms

The proofs of the facts given in Appendix 3, and more detailed information on this
topic, can be found in [13].
1. Modules in algebraic number fields. Let k be a subfield of the field K. If K has
finite dimension n = [K : k] as a vector space over k, then we say that K is a finite
extension of k (of degree n). By the matrix (a_{ij}) of an element α ∈ K in the basis
{ωᵢ} of K over k we mean the matrix whose rows are the coordinates of αωᵢ in the
basis {ωᵢ}. The trace S(α) and the determinant N(α) of this matrix (a_{ij}) are called,
respectively, the trace and the norm of α (from K to k).
In particular, if k = Q, then K is called an algebraic number field. In this case
we say that an element α ∈ K is an integer if the coefficients of its characteristic
polynomial ch(v, α) = det((a_{ij}) − vEₙ) belong to the ring of rational integers Z. By a
module in K we mean any finitely generated Z-submodule M ⊂ K. As a free abelian
group M has a basis (over Z) ω₁, …, ω_m. If the rank m of M is equal to n = [K : Q],
we say that M is a full module. Suppose that M₁ and M₂ are modules with bases {ωᵢ}
and {ηⱼ}. Then the set of integer linear combinations of the products ωᵢηⱼ is also a
module, which is denoted M₁M₂ and is called the product of M₁ and M₂. The product
of two full modules is a full module.
A full module is called an order of the field K if it contains 1 and is a ring. Since
the matrix entries for an element in any order D′ with respect to a basis of the same
order are rational integers, it follows that D′ is contained in the set D = D_K of all
integers of K. A typical example of an order of K is the ring of multipliers of a full
module M, defined as D_M = {α ∈ K; αM ⊂ M}. If M₁ and M₂ are similar modules,
i.e., if M₂ = αM₁ for some nonzero α ∈ K, then obviously D_{M₁} = D_{M₂}. For every
full module M there exists a similar module contained in D_M. The norm N(M) of a
module M is the absolute value of the determinant of the transition matrix from a basis
of M to a basis of D_M; N(M) does not depend on the choice of bases, and if M ⊂ D_M,
it is equal to the index of M in D_M. Let {ωᵢ} be a basis of the full module M. The
number d(M) = det(S(ωᵢωⱼ)), which is also independent of the choice of basis, is
called the discriminant of M.
2. Modules and primes in quadratic fields. Any extension K ⊃ Q of degree two is
of the form K = Q + Q√d₀ = Q(√d₀), where d₀ ≠ 0, 1 is a squarefree rational integer.
If we compute the matrix of the element α = a + b√d₀, a, b ∈ Q, in the basis 1, √d₀, we
find that ch(v, α) = v² − 2av + (a² − d₀b²). Consequently, ch(v, α) = (v − α)(v − α′),
where α′ = a − b√d₀ is the conjugate of α (over Q). The element α has trace
S(α) = α + α′ = 2a and norm N(α) = αα′ = a² − d₀b²; it is an integer of K if and only
if S(α) and N(α) are both in Z. This implies that the set D of integers of the field
K = Q(√d₀) is the order with basis 1, ω, where ω = (1 + √d₀)/2 or √d₀ depending
on whether d₀ ≡ 1 or 2, 3 (mod 4); the discriminant d = d(D), which is called the
discriminant of the field K, is equal to d₀ or 4d₀, respectively. Any order of K has the
form D_l = Z + Zlω, where l ∈ N is the index of D_l in D; the discriminant of the order
is dl².
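The matrix description of trace and norm is easy to make concrete. A Python sketch (ours; the choice d₀ = −5, which is ≡ 3 (mod 4), is arbitrary) represents α = a + b√d₀ by its matrix in the basis 1, √d₀ and reads off S(α) and N(α) exactly, using rational arithmetic:

```python
from fractions import Fraction as Fr

d0 = -5   # squarefree, d0 = 3 (mod 4), so omega = sqrt(d0) and d = 4*d0 = -20

def matrix(a, b):
    # matrix of alpha = a + b*sqrt(d0) in the basis 1, sqrt(d0):
    # alpha*1 = a + b*sqrt(d0);  alpha*sqrt(d0) = d0*b + a*sqrt(d0)
    return [[Fr(a), Fr(b)], [Fr(d0) * Fr(b), Fr(a)]]

def trace(a, b):
    m = matrix(a, b)
    return m[0][0] + m[1][1]                          # S(alpha) = 2a

def norm(a, b):
    m = matrix(a, b)
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]      # N(alpha) = a^2 - d0*b^2

assert trace(2, 3) == 4 and norm(2, 3) == 49
# alpha is an integer of K exactly when S(alpha) and N(alpha) lie in Z;
# for d0 = 3 (mod 4) the element (1 + sqrt(d0))/2 fails the test:
half = Fr(1, 2)
assert trace(half, half) == 1 and norm(half, half) == Fr(3, 2)
```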
Every full module M ⊂ K is generated by two elements α, β, where α ≠ 0 and
γ = β/α ∉ Q; in this case we write M = {α, β}. If γ ∈ K, γ ∉ Q, we let c̃h(v, γ) = av² + bv + c
denote the polynomial obtained by multiplying the characteristic polynomial ch(v, γ)
by a rational number in such a way that a > 0 and a, b, c are relatively prime integers. The
significance of the polynomial c̃h(v, γ) is that for M = {1, γ} with γ ∉ Q we have

D_M = {1, aγ}, d(D_M) = b² − 4ac, and N(M) = 1/a.


Suppose that M and M₁ are arbitrary full modules of K, M′ = {α′; α ∈ M}
is the module conjugate to M, D_M = D_l, D_{M₁} = D_{l₁}, and m is the greatest common
divisor of l and l₁. Then

MM′ = N(M)D_l,

and for the product module MM₁ we have

N(MM₁) = N(M)N(M₁) and D_{MM₁} = D_m.

This implies that the full modules of K with fixed ring of multipliers D_l form
a commutative group under multiplication with identity D_l; the inverse of M is the
module N(M)⁻¹M′. The modules similar to D_l form a subgroup of this group, and
the quotient group is called the class group of modules of the ring D_l and is denoted

H(D_l) = H(d(D_l)),

where d(D_l) = dl² is the discriminant of the order D_l. If l₁ divides l, then the map
M → D_{l₁}M induces an epimorphism of class groups

ν(l, l₁): H(D_l) → H(D_{l₁}).


Suppose that $p$ is a prime number not dividing $l$. Then the field $K$ of discriminant
$d$ has a full module $M$ satisfying the conditions

$$\mathfrak{D}_M = \mathfrak{D}_l, \quad M \subset \mathfrak{D}_l, \quad \text{and} \quad N(M) = p$$

if and only if the congruence $x^2 \equiv d \pmod{4p}$ has a solution. If $p \nmid d$, then there exist
exactly two such modules $\mathfrak{p}$ and $\mathfrak{q} = \mathfrak{p}'$; if $p \mid d$, then there exists exactly one module $\mathfrak{p}$,
and $\mathfrak{p}' = \mathfrak{p}$. Finally, if this congruence does not have a solution, then $K$ has a unique
full module $M = \mathfrak{p} = p\mathfrak{D}_l$ satisfying the conditions

$$\mathfrak{D}_M = \mathfrak{D}_l, \quad M \subset \mathfrak{D}_l, \quad \text{and} \quad N(M) = p^2.$$

All of these modules are prime ideals of the ring $\mathfrak{D}_l$.
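Whether a prime splits, ramifies, or stays inert in the order can thus be read off from the congruence $x^2 \equiv d \pmod{4p}$. A small Python sketch (ours, not the book's; a brute-force search over residues, assuming $p$ does not divide the index $l$):

```python
def splitting_type(d, p):
    """Classify the prime p in the order of discriminant d, for p prime
    to the index l: 'split' gives two modules p and p'; 'ramified' gives
    one with p' = p; 'inert' gives the single module p*D_l of norm p^2."""
    solvable = any((x * x - d) % (4 * p) == 0 for x in range(4 * p))
    if not solvable:
        return "inert"
    return "ramified" if d % p == 0 else "split"

# for discriminant -23: 2 splits, 23 ramifies, 5 is inert
```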
3. Modules in imaginary quadratic fields, and quadratic forms. We shall not need
the case of indefinite forms and real quadratic fields, and so shall limit ourselves
to positive definite forms and quadratic fields of negative discriminant (imaginary
quadratic fields). Let
$$q(x, y) = ax^2 + bxy + cy^2$$

be a positive definite binary quadratic form with matrix $Q = \begin{pmatrix} 2a & b \\ b & 2c \end{pmatrix}$ and with
discriminant $d(q) = -\det Q < 0$. We say that such a form is integral if $a, b, c \in \mathbf{Z}$,
and is primitive if its divisor, the greatest common divisor of $a, b, c$, is equal to 1. If
the forms $q$ and $q_1$ have matrices that are connected by the relation
$$Q_1 = Q[U] \quad \text{with } U \in SL_2(\mathbf{Z}),$$
then the forms are said to be properly equivalent; in that case they have the same
discriminant and divisor. Theorem 1.12 of Chapter 2 implies that the number of
proper equivalence classes of positive definite integral binary quadratic forms of fixed
discriminant is finite.
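The finiteness statement can be made concrete: each proper equivalence class of positive definite forms contains a unique reduced representative with $|b| \leq a \leq c$ (and $b \geq 0$ when $|b| = a$ or $a = c$), so listing the reduced primitive forms of a discriminant counts the classes. The Python sketch below is our illustration, not taken from the book.

```python
from math import gcd

def reduced_forms(D):
    """All reduced primitive positive definite forms (a, b, c) with
    b^2 - 4ac = D < 0; the number of them is the class number h(D)."""
    assert D < 0 and D % 4 in (0, 1)
    forms = []
    b = D % 2                      # b must have the parity of D
    while 3 * b * b <= -D:         # |b| <= a <= c forces 3b^2 <= |D|
        ac = (b * b - D) // 4      # the product a*c
        a = max(b, 1)
        while a * a <= ac:
            if ac % a == 0:
                c = ac // a
                if gcd(gcd(a, b), c) == 1:
                    forms.append((a, b, c))
                    if 0 < b < a < c:      # (a, -b, c) is reduced too
                        forms.append((a, -b, c))
            a += 1
        b += 2
    return sorted(forms)

# discriminant -23: classes (1,1,6), (2,1,3), (2,-1,3), so h(-23) = 3
```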
We define the map
$$q \mapsto M(q) = \{a,\ a\gamma_q\},$$
where $M(q)$ is a full module of the imaginary quadratic field $K_q = \mathbf{Q}(\sqrt{d(q)})$ of
discriminant $d$ having ring of multipliers $\mathfrak{D}_l$ and

$$\gamma_q = \bigl(-b + \sqrt{d(q)}\bigr)/2a \quad \text{and} \quad l = \sqrt{d(q)/d}.$$

For any fixed negative integer $D$ this map gives a bijection between the set of proper
equivalence classes of primitive positive definite forms $q$ of discriminant $d(q) = D$
and the group $H(D)$ of similarity classes of modules of the field $\mathbf{Q}(\sqrt{D})$ with ring of
multipliers of discriminant $D$. The inverse map takes the full module $M = \{\alpha, \beta\}$ to
the quadratic form
$$q(M) = N(M)^{-1}(\alpha x + \beta y)(\alpha' x + \beta' y),$$
where the basis $\alpha, \beta$ is chosen so that $\operatorname{Im}(-\beta/\alpha) > 0$.
Let $q$ and $q_1$ be two primitive integral positive definite binary quadratic forms,
let $Q = Q(q)$ and $Q_1 = Q(q_1)$ be the matrices of these forms, and let $D = -\det Q$,
$D_1 = -\det Q_1$ be their discriminants. Suppose that the ratio $D/D_1$ is the square of a
rational number. Then $\mathbf{Q}(\sqrt{D}) = \mathbf{Q}(\sqrt{D_1}) = K$. Thus, the modules $M(q)$ and $M(q_1)$
are full modules of the same imaginary quadratic field, and so can be multiplied by
one another or by any full module of that field. We define the product $q \times q_1$ of the
forms $q$ and $q_1$ by setting
$$q \times q_1 = q(M(q)M(q_1)),$$
and we let $Q \times Q_1$ denote the matrix of the form $q \times q_1$:
$$Q \times Q_1 = Q(q \times q_1).$$
In addition, for any full module $M$ of $K$ we set
$$q \times M = M \times q = q(M(q)M)$$
and we let $Q \times M$ denote the matrix of the form $q \times M$:
$$Q \times M = M \times Q = Q(q \times M).$$
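The product $q \times q_1$ defined via module multiplication is classical Gauss composition. In the simple case $\gcd(a_1, a_2, (b_1+b_2)/2) = 1$ it can be computed by Dirichlet's method followed by reduction; the sketch below is our illustration under that assumption, not an algorithm from the book.

```python
from math import gcd

def reduce_form(a, b, c):
    """Reduce a positive definite integral form to the reduced
    representative of its proper equivalence class."""
    while not (-a < b <= a < c or 0 <= b <= a == c):
        if c < a:
            a, b, c = c, -b, a           # swap the outer coefficients
        else:                            # shift b into the range (-a, a]
            r = b % (2 * a)
            if r > a:
                r -= 2 * a
            c += (r * r - b * b) // (4 * a)
            b = r
    return (a, b, c)

def compose(f1, f2):
    """Dirichlet composition of two forms of the same discriminant,
    assuming gcd(a1, a2, (b1 + b2)/2) = 1."""
    (a1, b1, c1), (a2, b2, c2) = f1, f2
    D = b1 * b1 - 4 * a1 * c1
    assert D == b2 * b2 - 4 * a2 * c2
    assert gcd(gcd(a1, a2), (b1 + b2) // 2) == 1
    # B must satisfy B = b1 (mod 2a1), B = b2 (mod 2a2), B^2 = D (mod 4a1a2)
    for B in range(2 * a1 * a2):
        if ((B - b1) % (2 * a1) == 0 and (B - b2) % (2 * a2) == 0
                and (B * B - D) % (4 * a1 * a2) == 0):
            return reduce_form(a1 * a2, B, (B * B - D) // (4 * a1 * a2))
    raise ValueError("no suitable middle coefficient found")

# for D = -23: (2,1,3) x (2,1,3) ~ (2,-1,3), and the principal
# form (1,1,6) acts as the identity of the class group
```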
Notes

In 1987 the first author published the book Quadratic Forms and Hecke Opera-
tors, Grundlehren der Mathematischen Wissenschaften 286, Springer-Verlag. It was
devoted to the multiplicative properties of modular forms of integer weight and qua-
dratic forms in an even number of variables. Meanwhile, the second author had
carried over a large part of the theory to modular forms of half-integer weight and
quadratic forms in an odd number of variables. Hence, when the question arose of
preparing a Russian edition, it was decided that, rather than merely reproduce the
original English version, we would expand it by including the multiplicative properties
of modular forms of half-integer weight. In order not to increase the size of the book,
it was necessary to omit sections on the action of Hecke operators on the theta-series
of quadratic forms. The result was the present volume.

Notes for Chapter 1.


§§3.1, 3.2-The exposition of the properties of theta-functions and theta-groups
follows [16], Appendix to Chapter 1.
§3.3-Eichler [16] was the first to have the idea of considering theta-series of degree
1 as specializations of suitable theta-functions and in this way finding the groups
of automorphic transformations of theta-series; we follow [14], where this idea was
extended to theta-series of arbitrary degree.
§4.2-The automorphic properties of theta-series of positive definite integral qua-
dratic forms of level 1 were first studied by Witt in [48].
§§4.3-4.5-For theta-series of degree 1, the expression for the multipliers in terms
of Gauss sums and the computation of those Gauss sums are classical (see, for exam-
ple, [38, 34, 16]). For theta-series of arbitrary degree, the expression for the multipliers
in terms of Gauss sums was found in [14], where a technique was also developed for
reducing the multipliers for theta-series of arbitrary degree to the case of degree 1.

Notes for Chapter 2.


More details on the theory of Siegel modular forms can be found in the books [29,
31, 22]. Admirers of Bourbaki might want to consult [39]. Good expositions of
modular forms in one variable can be found in [32] and [30]; for an initial exposure
to the subject [40] is recommended; connections with algebraic geometry are featured
in [42]. Our exposition has made essential use of [31] and [27].
§1.2-For more details on reduction theory see, for example, [26].
§2.3-Our definition of modular forms of half-integer weight for $\Gamma_0^n(q)$ in the case
$n = 1$ is the same as that of Shimura [43].
§3.3-The notation (3.14) is credited to Petersson.
§4.1-The proof of Proposition 4.1 follows [40].

§5.1-The scalar product of modular forms of degree 1 was introduced by Petersson
in [33]; the scalar product for arbitrary degree and both integer and half-integer weight
is based on the same idea.
§5.2-Asymptotic formulas for the Fourier coefficients of modular forms and theta-
series can be found in [25, 32, 35].
Notes for Chapter 3.
§1.1-Hecke operators for modular forms of degree 1 and integer weight were
defined in [24]; the definition for forms of half-integer weight was essentially given
in [43].
§1.2-It is hard to say who was the first to define abstract Hecke rings. In early
works we used Shimura's definition in [41]; we later replaced it by the equivalent but
more convenient definition that apparently appeared first in [8].
§1.4-The properties of the anti-automorphism j were examined in [8].
§2-Our exposition of the Hecke ring of the general linear group is in the spirit
of [47].
§2.2-The formulas in Lemma 2.18 are a special case of the formulas in Lemmas 6
and 9 of [2].
§2.3-See [2].
§3-The structure of the Hecke rings for $\Gamma^n$ was given in [37] and [41] (who was
first?). The transition to congruence subgroups relies upon an idea of Hecke in [24].
But our exposition uses the analogy with the case of the general linear group and some
common sense, rather than any particular sources in the literature.
§3.3-The theory of spherical mappings was developed in [37] in an equivalent
form, namely, as the theory of zonal spherical functions on reductive algebraic groups
over p-adic fields. We use an elementary approach, based on explicit computations.
The summation of the series (3.71) is carried out in [41] for n = 1, 2, and for arbitrary
n in [1] and [2].
§§4.1, 4.2-Hecke rings for a covering of the symplectic group were defined and
their basic properties were proved in [52].
§4.3-See [53].
§5-The imbeddings of the local Hecke rings of the symplectic group in the Hecke
rings of the triangular subgroup were first used in [6] in connection with the problem
of factoring Hecke polynomials. The basic properties of the centralizers of elements
$\Pi_+(p)$ were also examined in [6]. The case of the symplectic covering group was
studied in [53].
§5.3-The idea of the expansion (5.48) goes back to Hecke [24]. The analogue of
(5.49) for suitable operators on the Fourier coefficients of modular forms of degree 2
was studied and used in an essential way in [4].
§6-The factorization theory of Hecke polynomials arose from trying to understand
and generalize the expansions of the operator polynomials corresponding to $Q^2(v)$
and $R^2(v)$ (see [4] and [5], respectively). The first version of the theory was given
in [6]; the formulas (6.69) were derived in [7]; and in [8] duality considerations were
brought into the theory, making it possible to obtain a symmetric factorization of
the polynomial $R_p^n(v)$. A similar factorization of the polynomial $\widetilde{R}_p^n(v)$ was obtained
in [53].
Notes for Chapter 4.
§1.1-Hecke operators for $\Gamma^1 = SL_2(\mathbf{Z})$ and certain of its subgroups were defined
in [24]. Hecke operators were first defined on Siegel modular forms in [45] and [46];

after a gap caused by the War, these operators were examined in [28]. For modular
forms of degree 1 and half-integer weight, Hecke operators were introduced in [49];
however, the approach adopted by Shimura in [43] turned out to be more fruitful.
§1.2-The existence of a basis of eigenfunctions of the Hecke operators in the
invariant spaces of cusp-forms of degree 1 was proved by Petersson in [33]. Petersson's
idea was first used for Siegel modular forms by Maass in [28]. In [18] the author
proved the existence of a basis of eigenfunctions in the entire space of modular forms
of integer weight for a broad class of congruence subgroups; however, this paper had
some errors, partially noted in [19], that make it difficult to use. In the present book we
prove the existence of a basis of eigenfunctions only for spaces of cusp-forms relative to
q-regular pairs (Theorem 1.9) and for invariant subspaces of the space of all modular
forms for the full modular group $\Gamma^n$ (Theorem 2.16). Here we do not even touch upon
the important and extensively studied question of spaces spanned by theta-series that
are invariant relative to the Hecke operators. In this connection see [20, 8-12, 21].
§2.3-The relations (2.46) were obtained in [50] for the full modular group and the
trivial character. Our exposition follows the same idea. Zharkovskaya's work arose as
a result of attempts to generalize the Maass commutation relations [28] for the Siegel
operators and the Hecke operators corresponding to $T^2(m)$.
§2.4-The computations in this subsection were carried out in [7, 8, 53].
§3.1-The results in this subsection are essentially due to Hecke [24].
§3.2-The results in this subsection were obtained for the case q = 1 in [3] and in
their final form in [4]. Here we follow the same ideas. Similar questions were examined
for groups of the form $\Gamma^2(q)$ in [17]. In [4] in the case $q = 1$ it was proved that the
function
$$\Psi(s, F) = (2\pi)^{-2s}\Gamma(s)\Gamma(s - k + 2)\,\zeta(s, 1; F),$$
where $\Gamma(s)$ is the gamma-function, can be analytically continued to a meromorphic
function on the entire $s$-plane, where it has at most four simple poles and satisfies the
functional equation
$$\Psi(2k - 2 - s, F) = (-1)^k \Psi(s, F),$$
where $k$ is the weight of the eigenfunction $F$.
§3.3-The results presented here were obtained for $n = 1$ in [43] and [44], for $n = 2$
and integer weight in [5], and for arbitrary degree and weight in [7, 8, 53]. Analogous
series for the group $\Gamma^2(q)$ were studied in [23]. In [13] it was proved that, if $F$ is a
modular form of even weight and $\chi$ is a Dirichlet character, then the even zeta-function
$\zeta^+(s, \chi; F)$ extends meromorphically onto the entire $s$-plane; and in the case $q = 1$,
$\chi = 1$ (and under certain restrictions) a functional equation was found for the even
zeta-function. The subsequent development of the theory of even zeta-functions is due
to Böcherer, who, in particular, managed to remove the restriction alluded to above
(S. Böcherer, Über die Funktionalgleichung automorpher L-Funktionen zur Siegelschen
Modulgruppe, J. Reine Angew. Math. 362 (1985), 146-168).
References

1. A. N. Andrianov, Rationality theorems for Hecke series and zeta-functions of the groups $GL_n$ and $Sp_n$ over local fields, Izv. Akad. Nauk SSSR Ser. Mat. 33 (1969), no. 3, 466-505; English transl. in Math. USSR-Izv. 3 (1969).
2. ___, Spherical functions for $GL_n$ over local fields, and the summation of Hecke series, Mat. Sb. 83 (1970), no. 3, 429-451; English transl. in Math. USSR-Sb. 12 (1970).
3. ___, Dirichlet series with Euler product in the theory of Siegel modular forms of genus 2, Trudy Mat. Inst. Steklov. 112 (1971), 73-94; English transl. in Proc. Steklov Inst. Math. 112 (1973).
4. ___, Euler products that correspond to Siegel's modular forms of genus 2, Uspekhi Mat. Nauk 29 (1974), no. 3, 43-110; English transl. in Russian Math. Surveys 29 (1974).
5. ___, Symmetric squares of zeta-functions of Siegel modular forms of genus 2, Trudy Mat. Inst. Steklov. 142 (1976), 22-45; English transl. in Proc. Steklov Inst. Math. 1979, no. 3.
6. ___, The expansion of Hecke polynomials for the symplectic group of genus n, Mat. Sb. 104 (1977), no. 3, 390-427; English transl. in Math. USSR-Sb. 33 (1977).
7. ___, Euler expansions of the theta-transform of Siegel modular forms of genus n, Mat. Sb. 105 (1978), no. 3, 291-341; English transl. in Math. USSR-Sb. 34 (1978).
8. ___, Multiplicative arithmetic of Siegel's modular forms, Uspekhi Mat. Nauk 34 (1979), no. 1, 67-135; English transl. in Russian Math. Surveys 34 (1979).
9. ___, Action of Hecke operator T(p) on theta series, Math. Ann. 247 (1980), 245-254.
10. ___, Integral representations of quadratic forms by quadratic forms: multiplicative properties, Proceedings of the International Congress of Mathematicians, Vol. 1, 2 (Warsaw, 1983), PWN, Warsaw, 1984, pp. 465-474.
11. ___, Hecke operators and representations of binary quadratic forms, Trudy Mat. Inst. Steklov. 165 (1984), 4-15; English transl. in Proc. Steklov Inst. Math. 1985, no. 3.
12. ___, Representations of an even zeta-function by theta-series, Zap. Nauchn. Sem. Leningrad. Otdel. Mat. Inst. Steklov. (LOMI) 134 (1984), 5-14; English transl. in J. Soviet Math. 36 (1987).
13. A. N. Andrianov and V. L. Kalinin, Analytic properties of standard zeta-functions of Siegel modular forms, Mat. Sb. 106 (1978), no. 3, 323-339; English transl. in Math. USSR-Sb. 35 (1979).
14. A. N. Andrianov and G. N. Maloletkin, Behavior of theta-series of genus n under modular substitutions, Izv. Akad. Nauk SSSR Ser. Mat. 39 (1975), no. 2, 243-258; English transl. in Math. USSR-Izv. 9 (1975).
15. Z. I. Borevich and I. R. Shafarevich, Number theory, Pure Appl. Math., vol. 20, Academic Press, New York, 1966.
16. M. Eichler, Introduction to the theory of algebraic numbers and functions, Pure Appl. Math., vol. 23, Academic Press, New York, 1966.
17. S. A. Evdokimov, Euler products for congruence subgroups of the Siegel group of genus 2, Mat. Sb. 99 (1976), no. 4, 483-513; English transl. in Math. USSR-Sb. 28 (1976).
18. ___, A basis composed of eigenfunctions of Hecke operators in the theory of modular forms of genus n, Mat. Sb. 115 (1981), no. 3, 337-363; English transl. in Math. USSR-Sb. 43 (1982).
19. ___, Letter to the editors, Mat. Sb. 116 (1981), no. 4, 603; English transl. in Math. USSR-Sb. 44 (1983).
20. E. Freitag, Die Invarianz gewisser von Thetareihen erzeugter Vektorräume unter Heckeoperatoren, Math. Z. 156 (1977), no. 2, 141-155.
21. ___, Eine Bemerkung zu Andrianovs expliziten Formeln für die Wirkung der Heckeoperatoren auf Thetareihen, E. B. Christoffel (Aachen/Monschau, 1979), Birkhäuser, Basel, 1981, pp. 336-351.
22. ___, Siegelsche Modulfunktionen, Springer, New York, 1983.
23. V. A. Gritsenko, Symmetric squares of the zeta functions for a principal congruence subgroup of the Siegel group of genus 2, Mat. Sb. 104 (1977), no. 1, 22-41; English transl. in Math. USSR-Sb. 33 (1977).
24. E. Hecke, Über Modulfunktionen und die Dirichletschen Reihen mit Eulerscher Produktentwicklung. I, II, Math. Ann. 114 (1937), 1-28; 316-351.
25. ___, Analytische Arithmetik der positiven quadratischen Formen, Danske Vid. Selsk. Math.-Fys. Medd. 17 (1940), no. 12.
26. J. W. S. Cassels, Rational quadratic forms, Academic Press, New York, 1978.
27. M. Koecher, Zur Theorie der Modulformen n-ten Grades. I, II, Math. Z. 59 (1954), 399-416; 61 (1955), 455-466.
28. H. Maass, Die Primzahlen in der Theorie der Siegelschen Modulfunktionen, Math. Ann. 124 (1951), 87-122.
29. ___, Lectures on Siegel's modular functions, Tata Institute of Fundamental Research, Bombay, 1954-1955.
30. ___, Lectures on modular functions of one complex variable, Tata Institute of Fundamental Research, Bombay, 1964; revised 1983.
31. ___, Siegel's modular forms and Dirichlet series, Lecture Notes in Math., vol. 216, Springer, New York, 1971.
32. A. Ogg, Modular forms and Dirichlet series, Benjamin, New York, 1969.
33. H. Petersson, Konstruktion der sämtlichen Lösungen einer Riemannschen Funktionalgleichung durch Dirichlet-Reihen mit Eulerscher Produktentwicklung. I, II, III, Math. Ann. 116 (1939), 401-412; 117 (1939/40), 39-64; 277-300.
34. W. Pfetzer, Die Wirkung der Modulsubstitutionen auf mehrfache Thetareihen zu quadratischen Formen ungerader Variablenzahl, Arch. Math. 4 (1953), 448-454.
35. S. Raghavan, Modular forms of degree n and representation by quadratic forms, Ann. of Math. (2) 70 (1959), 446-477.
36. R. A. Rankin, Contributions to the theory of Ramanujan's function $\tau(n)$ and similar arithmetical functions. I, II, Proc. Cambridge Philos. Soc. 35 (1939), 351-372.
37. I. Satake, Theory of spherical functions on reductive algebraic groups over p-adic fields, Inst. Hautes Études Sci. Publ. Math. 1963, no. 18, 5-69.
38. B. Schoeneberg, Das Verhalten von mehrfachen Thetareihen bei Modulsubstitutionen, Math. Ann. 116 (1939), 511-523.
39. Séminaire Henri Cartan, 10e année (1957/58). Fonctions automorphes, vols. 1, 2, Secrétariat Mathématique, Paris, 1958.
40. J.-P. Serre, A course in arithmetic, Springer, New York, 1973.
41. G. Shimura, On modular correspondences for Sp(n, Z) and their congruence relations, Proc. Nat. Acad. Sci. U.S.A. 49 (1963), 824-828.
42. ___, Introduction to the arithmetic theory of automorphic functions, Princeton Univ. Press, Princeton, NJ, 1971.
43. ___, On modular forms of half integral weight, Ann. of Math. (2) 97 (1973), 440-481.
44. ___, On the holomorphy of certain Dirichlet series, Proc. London Math. Soc. (3) 31 (1975), 79-98.
45. M. Sugawara, On the transformation theory of Siegel's modular group of the n-th degree, Proc. Imp. Acad. Japan 13 (1937), 335-336.
46. ___, An invariant property of Siegel's modular functions, Proc. Imp. Acad. Japan 14 (1938), 1-3.
47. T. Tamagawa, On the $\zeta$-functions of a division algebra, Ann. of Math. (2) 77 (1963), 387-405.
48. E. Witt, Eine Identität zwischen Modulformen zweiten Grades, Abh. Math. Sem. Hansischen Univ. 14 (1941), 323-337.
49. K. Wohlfahrt, Über Operatoren Heckescher Art bei Modulformen reeller Dimension, Math. Nachr. 16 (1957), 233-256.
50. N. A. Zharkovskaya, The Siegel operator and Hecke operators, Funktsional. Anal. i Prilozhen. 8 (1974), no. 2, 30-38; English transl. in Functional Anal. Appl. 8 (1974).
51. ___, The connection between the eigenvalues of Hecke operators and the Fourier coefficients of eigenfunctions for Siegel modular forms of genus n, Mat. Sb. 96 (1975), no. 4, 584-593; English transl. in Math. USSR-Sb. 25 (1975).
52. V. G. Zhuravlev, Hecke rings for a covering of the symplectic group, Mat. Sb. 121 (1983), no. 3, 381-402; English transl. in Math. USSR-Sb. 49 (1984).
53. ___, Euler expansions of theta-transformations of Siegel modular forms of half-integer weight and their analytic properties, Mat. Sb. 123 (1984), no. 2, 174-194; English transl. in Math. USSR-Sb. 51 (1985).
List of Notation

$\mathbf{N}$ is the set of natural numbers
$\mathbf{N}(q)$ is the set of natural numbers prime to $q$
$\mathbf{P}$ is the set of prime numbers in $\mathbf{N}$
$\mathbf{P}(q) = \mathbf{P} \cap \mathbf{N}(q)$
$\mathbf{Z}$ is the ring of rational integers
$\mathbf{Z}_q$ is the set of $q$-integral rational numbers
$\mathbf{Q}, \mathbf{R}, \mathbf{C}$ are the fields of rational, real, and complex numbers, respectively
$\mathbf{C}_1 = \{z \in \mathbf{C};\ |z| = 1\}$
$\mathbf{F}_p = \mathbf{Z}/p\mathbf{Z}$ is the residue field modulo the prime $p$
$K$ is a commutative ring with unit
$K^*$ is the multiplicative group of invertible elements of $K$
$M_{m,n}(K)$ is the set of $m \times n$ matrices with entries in $K$
$M_n(K) = M_{n,n}(K)$
$S_n(K)$ is the set of symmetric matrices in $M_n(K)$
$M_{m,n} = M_{m,n}(\mathbf{Z})$, $M_n = M_n(\mathbf{Z})$, $S_n = S_n(\mathbf{Z})$
$\mathbf{E}_n$ is the set of matrices in $S_n$ having even diagonal
$A > 0$ ($A \geq 0$) means that $A$ is a positive definite (semidefinite) matrix in $S_n(\mathbf{R})$
$\mathbf{A}_n = \{A \in \mathbf{E}_n;\ A \geq 0\}$, $\mathbf{A}_n^+ = \{A \in \mathbf{E}_n;\ A > 0\}$
$GL_n(K) = \{M \in M_n(K);\ \det M \in K^*\}$
$\Lambda = \Lambda^n = GL_n(\mathbf{Z})$
$SL_n(K) = \{M \in M_n(K);\ \det M = 1\}$
$E_n = (e_{\alpha\beta})$ is the $n \times n$ identity matrix
$J_n = \begin{pmatrix} 0 & E_n \\ -E_n & 0 \end{pmatrix}$
${}^t M$ is the transpose of the matrix $M$
$M^* = {}^t M^{-1}$ for a nonsingular square matrix $M$
$Q[M] = {}^t M Q M$
$S_K^n = GSp_n^+(K) = \{M \in M_{2n}(K);\ J_n[M] = r(M)J_n,\ r(M) > 0\}$
$Sp_n(K) = \{M \in GSp_n^+(K);\ r(M) = 1\}$
$\Gamma^n = Sp_n(\mathbf{Z})$
$\mathfrak{G}$ is the covering of the symplectic group $GSp_n^+(\mathbf{R})$
$P\colon \mathfrak{G} \to GSp_n^+(\mathbf{R})$ is the canonical epimorphism
$\mathbf{H}_n$ is the Siegel upper half-plane of degree $n$
$\mathbf{H}_n(\varepsilon) = \{Z = X + iY \in \mathbf{H}_n;\ Y \geq \varepsilon E_n\}$
$M\langle Z\rangle = (AZ + B)(CZ + D)^{-1}$ for $M = \begin{pmatrix} A & B \\ C & D \end{pmatrix}$
$a \mid b$ ($a \nmid b$) means that $a$ divides (does not divide) $b$
$a \mid q^\infty$ means that every prime factor of $a$ divides $q$

$(a, b)$ is the greatest common divisor of $a$ and $b$
$\left(\frac{a}{b}\right)$ is the Legendre symbol
$\langle n \rangle = n(n+1)/2$
$\sigma(A)$ is the trace of a square matrix $A$
$e\{A\} = \exp(\pi i\,\sigma(A))$
$r_p(A)$ is the rank of the matrix $A$ over the field $\mathbf{F}_p$
$T(S) = \begin{pmatrix} E_n & S \\ 0 & E_n \end{pmatrix}$
$U(r, V) = \begin{pmatrix} rV^* & 0 \\ 0 & V \end{pmatrix}$, $U(V) = U(1, V)$
$(\Gamma g)$ is a left coset modulo the group $\Gamma$
$(g)_\Gamma = \sum_{g_i \in \Gamma\backslash\Gamma g\Gamma} (\Gamma g_i)$
$\Gamma^{(g)} = \Gamma \cap g^{-1}\Gamma g$
$D_K(\Gamma, S)$ is the Hecke ring of the pair $(\Gamma, S)$ over the ring $K$
$D(\Gamma, S) = D_{\mathbf{Z}}(\Gamma, S)$
$H^n = D_{\mathbf{Q}}(\Lambda^n, G^n) = D_{\mathbf{Q}}(GL_n(\mathbf{Z}), GL_n(\mathbf{Q}))$
$L^n(q) = D_{\mathbf{Q}}(\Gamma_0^n(q), S_0^n(q))$, $L^n = L^n(1)$
$L_0 = D_{\mathbf{Q}}(\Gamma_0, S_0)$ is the triangular Hecke ring
$L_p$, $\underline{L}$, $E$ are the local, integral, and even subrings of the Hecke ring $L$
$\mathbf{L} = L \otimes_{\mathbf{Q}} \mathbf{C}$ is the complexification of the Hecke ring $L$
$\widehat{L}$ is the Hecke ring obtained by lifting the ring $L$
$\widetilde{L}$ is the image of the Hecke ring $L$ in the ring $L_0$
$\widetilde{T}$ is the image of an element $T \in L$ in the ring $L_0$


ISBN 0-8218-0277-1
