145 A. N. Andrianov and V. G. Zhuravlev, Modular forms and Hecke operators, 1995
144 O. V. Troshkin, Nontraditional methods in mathematical hydrodynamics, 1995
143 V. A. Malyshev and R. A. Minlos, Linear infinite-particle operators, 1995
142 N. V. Krylov, Introduction to the theory of diffusion processes, 1995
141 A. A. Davydov, Qualitative theory of control systems, 1994
140 Aizik I. Volpert, Vitaly A. Volpert, and Vladimir A. Volpert, Traveling wave solutions of parabolic systems, 1994
139 I. V. Skrypnik, Methods for analysis of nonlinear elliptic boundary value problems, 1994
138 Yu. P. Razmyslov, Identities of algebras and their representations, 1994
137 F. I. Karpelevich and A. Ya. Kreinin, Heavy traffic limits for multiphase queues, 1994
136 Masayoshi Miyanishi, Algebraic geometry, 1994
135 Masaru Takeuchi, Modern spherical functions, 1994
134 V. V. Prasolov, Problems and theorems in linear algebra, 1994
133 P. I. Naumkin and I. A. Shishmarev, Nonlinear nonlocal equations in the theory of waves, 1994
132 Hajime Urakawa, Calculus of variations and harmonic maps, 1993
131 V. V. Sharko, Functions on manifolds: Algebraic and topological aspects, 1993
130 V. V. Vershinin, Cobordisms and spectral sequences, 1993
129 Mitsuo Morimoto, An introduction to Sato's hyperfunctions, 1993
128 V. P. Orevkov, Complexity of proofs and their transformations in axiomatic theories, 1993
127 F. L. Zak, Tangents and secants of algebraic varieties, 1993
126 M. L. Agranovskii, Invariant function spaces on homogeneous manifolds of Lie groups and applications, 1993
125 Masayoshi Nagata, Theory of commutative fields, 1993
124 Masahisa Adachi, Embeddings and immersions, 1993
123 M. A. Akivis and B. A. Rosenfeld, Élie Cartan (1869–1951), 1993
122 Zhang Guan-Hou, Theory of entire and meromorphic functions: Deficient and asymptotic values and singular directions, 1993
121 I. B. Fesenko and S. V. Vostokov, Local fields and their extensions: A constructive approach, 1993
120 Takeyuki Hida and Masuyuki Hitsuda, Gaussian processes, 1993
119 M. V. Karasev and V. P. Maslov, Nonlinear Poisson brackets. Geometry and quantization, 1993
118 Kenkichi Iwasawa, Algebraic functions, 1993
117 Boris Zilber, Uncountably categorical theories, 1993
116 G. M. Fel'dman, Arithmetic of probability distributions, and characterization problems on abelian groups, 1993
115 Nikolai V. Ivanov, Subgroups of Teichmüller modular groups, 1992
114 Seizô Itô, Diffusion equations, 1992
113 Michail Zhitomirskii, Typical singularities of differential 1-forms and Pfaffian equations, 1992
112 S. A. Lomov, Introduction to the general theory of singular perturbations, 1992
111 Simon Gindikin, Tube domains and the Cauchy problem, 1992
110 B. V. Shabat, Introduction to complex analysis Part II. Functions of several variables, 1992
109 Isao Miyadera, Nonlinear semigroups, 1992
108 Takeo Yokonuma, Tensor spaces and exterior algebra, 1992
107 B. M. Makarov, M. G. Goluzina, A. A. Lodkin, and A. N. Podkorytov, Selected problems in real analysis, 1992
106 G.-C. Wen, Conformal mappings and boundary value problems, 1992
105 D. R. Yafaev, Mathematical scattering theory: General theory, 1992
104 R. L. Dobrushin, R. Kotecký, and S. Shlosman, Wulff construction: A global shape from local interaction, 1992
Modular Forms
and Hecke Operators
A. N. Andrianov
V. G. Zhuravlev
А. Н. Андрианов, В. Г. Журавлев
МОДУЛЯРНЫЕ ФОРМЫ И ОПЕРАТОРЫ ГЕККЕ
Copying and reprinting. Individual readers of this publication, and nonprofit libraries acting for them, are permitted to make fair use of the material, such as to copy a chapter for use in teaching or research. Permission is granted to quote brief passages from this publication in reviews, provided the customary acknowledgment of the source is given.
Republication, systematic copying, or multiple reproduction of any material in this publication (including abstracts) is permitted only under license from the American Mathematical Society. Requests for such permission should be addressed to the Manager of Editorial Services, American Mathematical Society, P.O. Box 6248, Providence, Rhode Island 02940-6248. Requests can also be made by e-mail to reprint-permission@math.ams.org.
Introduction 1
Chapter 1. Theta-Series 3
§1. Definition of theta-series 3
1. Representations of quadratic forms by quadratic forms 3
2. Definition of theta-series 5
§2. Symplectic transformations 6
1. The symplectic group 6
2. The Siegel upper half-plane 8
§3. Symplectic transformations of theta-series 11
1. Transformations of theta-series 11
2. The Siegel modular group and the theta-group 16
3. Symplectic transformations of theta-series 19
§4. Computation of the multiplier 25
1. Automorphy factors 25
2. Quadratic forms of level 1 27
3. The multiplier as a Gauss sum 28
4. Quadratic forms in an even number of variables 32
5. Quadratic forms in an odd number of variables 37
Chapter 2. Modular Forms 43
§1. Fundamental domains for subgroups of the modular group 43
1. The modular triangle 43
2. The Minkowski reduction domain 46
3. The fundamental domain for the Siegel modular group 52
4. Subgroups of finite index 57
§2. Definition of modular forms 59
1. Congruence subgroups of the modular group 59
2. Modular forms of integer weight 60
3. Definition of modular forms of half-integer weight 60
4. Theta-series as modular forms 60
§3. Fourier expansions 61
1. Modular forms for triangular subgroups 61
2. The Koecher effect 62
3. Fourier expansions of modular forms 65
4. The Siegel operator 72
5. Cusp-forms 75
Introduction
Throughout the history of number theory, a problem that has attracted and continues to attract the interest of researchers is that of studying the number $r(q,a)$ of integer solutions to equations of the form
$$q(x_1,\dots,x_m)=a,$$
where $q$ is a quadratic form. The classical theory gave many exact formulas for the functions $r(q,a)$ which revealed remarkable multiplicative properties of these numbers.
For example, Jacobi's formula
$$r(x_1^2+x_2^2+x_3^2+x_4^2,\ a)=8\sigma_1(a)$$
for the number of representations of an odd integer $a$ as a sum of four squares and Ramanujan's formula
$$r(x_1^2+\cdots+x_{24}^2,\ a)=\frac{16}{691}\,\sigma_{11}(a)+\frac{33152}{691}\,\tau(a)$$
for the number of representations of an odd integer $a$ as a sum of 24 squares, where $\sigma_k(a)$ denotes the sum of the $k$th powers of the positive divisors of $a$ and $\tau(a)$ is defined as the coefficients in the power series
$$t\prod_{d=1}^{\infty}(1-t^d)^{24}=\sum_{a=1}^{\infty}\tau(a)\,t^a,$$
exhibit the multiplicativity of the divisor functions $\sigma_k(a)$ and of the Ramanujan function $\tau(a)$, which follows the same multiplication rule as $\sigma_{11}(a)$.
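Jacobi's four-square formula is easy to confirm numerically for small odd values of $a$. The following sketch is an illustration only (it is not taken from the book); it counts lattice points by brute force.

```python
from itertools import product

def r4(a):
    """Number of integer solutions of x1^2 + x2^2 + x3^2 + x4^2 = a."""
    b = int(a ** 0.5)
    return sum(1 for x in product(range(-b, b + 1), repeat=4)
               if x[0]**2 + x[1]**2 + x[2]**2 + x[3]**2 == a)

def sigma(k, a):
    """Sum of the k-th powers of the positive divisors of a."""
    return sum(d ** k for d in range(1, a + 1) if a % d == 0)

# Jacobi: r(x1^2 + ... + x4^2, a) = 8 * sigma_1(a) for odd a.
for a in (1, 3, 5, 7, 9, 11, 15, 21):
    assert r4(a) == 8 * sigma(1, a)
```

For even $a$ the right-hand side must be modified, which is why the check is restricted to odd arguments.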
In 1937, Hecke explained why this phenomenon occurs. In particular, from Hecke's
theory it follows that, given a positive definite integral quadratic form q in an even
number of variables, the function r (q, a) is a linear combination of multiplicative
functions whose values can be interpreted as eigenvalues of certain invariantly defined
linear operators (called "Hecke operators") on the spaces of modular forms. In subsequent years, the work of Eichler, Sato, Deligne and others uncovered fundamental relations between Hecke operators and algebraic geometry. In particular, their eigenvalues were interpreted in terms of the roots of the zeta-functions of suitable algebraic
varieties over finite fields. Another line of development, initiated by Selberg and then
greatly expanded by Langlands, considers Hecke operators from the point of view of
the representation theory of locally compact groups and hopes to find a prominent
place for them in a future noncommutative class field theory.
Chapter 1. Theta-Series
$$q(x_1,\dots,x_m)=a,\tag{1.1}$$
and hence $r_K(q,\,a\cdot y_1^2)$ is equal to the number $r_K(q,a)$ of solutions in $K$ of the equation (1.1). Similarly, in the general case $r_K(q,a)$ can be interpreted as the number of solutions of a certain system of equations of degree two.
If the number $2=2\cdot 1_{K'}$ is not a zero divisor in $K'$, then it is convenient to use matrix language. To every quadratic form
$$q(x_1,\dots,x_m)=\sum_{1\le\alpha\le\beta\le m}q_{\alpha\beta}\,x_\alpha x_\beta$$
we associate the symmetric matrix
$$Q=(Q_{\alpha\beta}),\qquad Q_{\alpha\alpha}=2q_{\alpha\alpha},\quad Q_{\alpha\beta}=Q_{\beta\alpha}=q_{\alpha\beta}\ (\alpha<\beta),\tag{1.3}$$
where $t$ denotes the transpose. We call $Q$ the matrix of the form $q$ (it is more traditional but less convenient to call $Q$ the matrix of the form $2q$). Then $q$ can be written in terms of its matrix as follows:
$$q(x_1,\dots,x_m)=\tfrac12\,{}^t x\,Q\,x,$$
where $x$ is the column with components $x_1,\dots,x_m$. Using these definitions and notation, we immediately see that $C\in M_{m,n}(K)$ is a representation of a form $a$ in $n$ variables by the form $q$ if and only if $X=C$ is a solution of the matrix equation
$$ {}^tXQX=A,\tag{1.5}$$
where $A$ is the matrix of the form $a$ and $X$ is an $m\times n$ matrix. In particular, $r_K(q,a)$ is equal to the number $r_K(Q,A)$ of solutions over $K$ of the equation (1.5).
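For $K=\mathbf Z$ the count $r_{\mathbf Z}(Q,A)$ can be computed by exhaustive search over a box of candidate matrices. The sketch below is a brute-force illustration (not the book's method); the function name `r_count` and the search bound are assumptions of the example.

```python
from itertools import product

def r_count(Q, A, bound):
    """Count integer m x n matrices X with tX Q X = A (brute force).

    The search box |x_ij| <= bound must be chosen large enough; for a
    positive definite Q a rigorous box follows from the eigenvalue bound
    of Problem 1.2 below.
    """
    m, n = len(Q), len(A)
    count = 0
    for entries in product(range(-bound, bound + 1), repeat=m * n):
        X = [entries[i * n:(i + 1) * n] for i in range(m)]
        ok = all(
            sum(X[k][i] * Q[k][l] * X[l][j]
                for k in range(m) for l in range(m)) == A[i][j]
            for i in range(n) for j in range(n))
        if ok:
            count += 1
    return count

# Q below is the matrix, in the convention of (1.3), of q = x1^2 + x2^2,
# so tX Q X = (2a) counts the representations of a as a sum of two squares.
Q = [[2, 0], [0, 2]]
assert r_count(Q, [[2]], 1) == 4    # 1 = (+-1)^2 + 0^2
assert r_count(Q, [[10]], 3) == 8   # 5 = 1 + 4, eight sign/order choices
```

The exponential cost of the search makes this usable only for very small $m$, $n$, and $a$.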
The methods of studying $r_K(q,a)$ and even the formulation of the questions naturally depend upon the nature of the ring $K$ and the properties of the quadratic forms under consideration. For now we shall limit ourselves to a simple but useful observation.
Two quadratic forms $q$ and $q'$ in the same number $m$ of variables are said to be equivalent over $K$ (or $K$-equivalent) if there exists a representation of one form by the other that lies in the group $GL_m(K)$ of invertible $m\times m$ matrices over $K$. In this case we write $q\sim_K q'$. The set $\{q\}_K$ of all forms that are equivalent over $K$ to a given form $q$ is called the class of $q$ over $K$. If $Q$ and $Q'$ are the matrices of the forms $q$ and $q'$, then $q\sim_K q'$ means that
$$Q'={}^tVQV\qquad\text{for some }V\in GL_m(K).$$
In this case we say that $Q$ and $Q'$ are equivalent over $K$ (or $K$-equivalent), and we write $Q\sim_K Q'$. We let $\{Q\}_K$ denote the $K$-equivalence class of the matrix $Q$.
For any fixed matrices $V\in GL_m(K)$ and $V'\in GL_n(K)$, the map $C\to VCV'$ is obviously a one-to-one correspondence from $M_{m,n}(K)$ to itself. From this obvious fact and the definitions we have
PROPOSITION 1.1. The function $r_K(Q,A)$ depends only on the $K$-equivalence classes of $Q$ and $A$ (or the $K$-equivalence classes of the corresponding quadratic forms).
The history of quadratic forms over rings is almost as old and colorful as that of
mathematics itself. The questions asked and the approaches to answering them vary
greatly from one ring to another and from one type of quadratic form to another. In
this book we are interested in the development of analytic methods in the simplest
nontrivial situation, and so we take the ring of rational integers Z as our ground
ring. Other rings will play only an auxiliary role. In addition, as a rule we will be considering only representations by positive definite forms. If $q$ is such a form, the number $r_{\mathbf Z}(q,a)$ of integral representations by $q$ is always finite, and the theory studies various properties of the function $a\to r_{\mathbf Z}(q,a)$.
It is the theory of modular forms which provides a natural language and an
apparatus for studying this function. The theta-series of quadratic forms are the link
between modular forms and quadratic forms.
PROBLEM 1.2. Let $Q$ and $A$ be symmetric $m\times m$ and $n\times n$ matrices, respectively, with coefficients in the field of real numbers $\mathbf R$, and let $Q>0$ (i.e., $Q$ is positive definite). Prove that the equation ${}^tXQX=A$ is solvable in real $m\times n$ matrices $X$ if and only if $A\ge 0$ (i.e., $A$ is positive semidefinite) and $\operatorname{rank}A\le m$, and in this case the entries of any solution $X=C=(c_{ij})$ satisfy the inequality $\max_{i,j}|c_{ij}|\le\lambda^{-1/2}\mu^{1/2}$, where $\lambda$ is the smallest eigenvalue of the matrix $Q$ and $\mu$ is the largest eigenvalue of the matrix $A$; in particular, $r_{\mathbf Z}(Q,A)<\infty$.
Let
$$\mathbf A_n=\{A={}^tA=(a'_{\alpha\beta})\in M_n(\mathbf Z);\ A\ge 0,\ a'_{\alpha\alpha}\in 2\mathbf Z\}\tag{1.7}$$
be the set of $n\times n$ integral even symmetric (i.e., with even numbers on the main diagonal) semidefinite matrices, and let
$$\sum_{A=((1+e_{\alpha\beta})a_{\alpha\beta})\in\mathbf A_n} r(Q,A)\prod_{1\le\alpha\le\beta\le n}t_{\alpha\beta}^{\,a_{\alpha\beta}}\tag{1.8}$$
be the corresponding generating series in $\langle n\rangle=n(n+1)/2$ variables $t_{\alpha\beta}$, where $e_{\alpha\beta}$ are the coefficients of the identity matrix $E_n=(e_{\alpha\beta})$. Setting $t_{\alpha\beta}=\exp(2\pi i z_{\alpha\beta})$, we obtain the Fourier series
$$\sum_{A\in\mathbf A_n}r(Q,A)\exp(\pi i\,\sigma(AZ)),\tag{1.9}$$
where $Z$ is an $n\times n$ symmetric matrix with coefficients $z_{\alpha\beta}$ on and above the main diagonal, and where $\sigma(M)$ denotes the trace of the matrix $M$. The last form of writing the generating series is the more convenient one in most situations, and in particular for finding the domain of convergence.
We write the matrix $Z$ in the form $Z=X+iY$, where $X$ and $Y$ are real matrices and $i=\sqrt{-1}$. If the matrix $Y$ does not satisfy the condition $Y\ge 0$, then there exists a row of integers $c=(c_1,\dots,c_n)$ such that $cY\,{}^tc<0$. (A real solution of this inequality exists by definition; a rational solution exists by continuity; and an integral solution can be obtained from the rational solution using homogeneity.) Let $C$ denote the $m\times n$ integer matrix with $c$ in the first row and zeros everywhere else. Then the matrices $A_d=d^2\cdot{}^tCQC={}^t(dC)Q(dC)$ with rational integers $d$ belong to $\mathbf A_n$, they satisfy the condition $r(Q,A_d)\ge 1$, and we obviously have
$$|\exp(\pi i\,\sigma(A_dZ))|=\exp\bigl(\pi d^2 Q_{11}\,|cY\,{}^tc|\bigr)\longrightarrow\infty\qquad(d\to\infty).$$
Thus, in this case the general term in (1.9) does not approach zero, and the series diverges. Consequently, if the series (1.9) converges on some open subset of the $\langle n\rangle$-dimensional complex space of the variables $z_{\alpha\beta}$, then this subset must be contained in the region
$$\mathbf H_n=\{Z={}^tZ=X+iY\in M_n(\mathbf C);\ Y>0\},\tag{1.10}$$
the Siegel upper half-plane of degree $n$.
PROPOSITION 1.3. The series $\theta^n(Z,Q)$, where $Q\in\mathbf A_m^+$ and $n\in\mathbf N$, converges absolutely and uniformly on any subset of $\mathbf H_n$ of the form
$$\mathbf H_n(\varepsilon)=\{Z=X+iY\in\mathbf H_n;\ Y\ge\varepsilon E_n\},\tag{1.11}$$
where $\varepsilon>0$ and $E_n$ is the $n\times n$ identity matrix.
PROOF. Let $\varepsilon_1$ denote the smallest eigenvalue of the matrix $Q$. Then for any $N\in M_{m,n}(\mathbf R)$ we have the inequality ${}^tNQN\ge\varepsilon_1\,{}^tNN$. Consequently, on the set $\mathbf H_n(\varepsilon)$ the series
$$\sum_{N\in M_{m,n}}\exp(\pi i\,\sigma({}^tNQNZ))\tag{1.12}$$
is majorized by the convergent series $\sum_{N\in M_{m,n}}\exp(-\pi\varepsilon\varepsilon_1\,\sigma({}^tNN))$, and hence it converges absolutely and uniformly on this set. If we gather together all of the terms in (1.12) for which ${}^tNQN$ is equal to a fixed matrix $A\in\mathbf A_n$, we see that the number of such terms is $r(Q,A)$, and thus the series (1.12) is equal to $\theta^n(Z,Q)$ in any region of absolute convergence. $\square$
The series
$$\theta^n(Z,Q)=\sum_{A\in\mathbf A_n}r(Q,A)\exp(\pi i\,\sigma(AZ))=\sum_{N\in M_{m,n}}\exp(\pi i\,\sigma({}^tNQNZ))\tag{1.13}$$
is called the theta-series of degree $n$ for the matrix $Q$ (or the corresponding quadratic form). Proposition 1.3 immediately implies
THEOREM 1.4. The theta-series $\theta^n(Z,Q)$ of degree $n$ for the matrix $Q\in\mathbf A_m^+$ determines a holomorphic function on $\mathbf H_n$; the function $\theta^n(Z,Q)$ is bounded on every subset $\mathbf H_n(\varepsilon)\subset\mathbf H_n$ with $\varepsilon>0$.
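The regrouping step in the proof of Proposition 1.3 (collecting the terms of (1.12) with a fixed value of ${}^tNQN$) can be seen numerically. The sketch below takes $m=2$, $n=1$, $Q=2E_2$, and an arbitrary truncation bound; it is an illustration, not part of the book's argument.

```python
import cmath
from itertools import product

def theta_over_N(z, bound=25):
    """Truncated sum over N in Z^2 of exp(pi i (tN Q N) z) for Q = 2*E_2."""
    return sum(cmath.exp(2j * cmath.pi * (a * a + b * b) * z)
               for a, b in product(range(-bound, bound + 1), repeat=2))

def theta_over_A(z, bound=25):
    """The same truncation, grouped as sum over a of r(Q, 2a) e^{2 pi i a z}."""
    r = {}
    for a, b in product(range(-bound, bound + 1), repeat=2):
        s = a * a + b * b
        r[s] = r.get(s, 0) + 1          # r[s] counts lattice points with |N|^2 = s
    return sum(cnt * cmath.exp(2j * cmath.pi * s * z) for s, cnt in r.items())

z = 0.1 + 0.5j                           # a point of H_1
assert abs(theta_over_N(z) - theta_over_A(z)) < 1e-12
```

The dictionary `r` plays the role of the representation numbers $r(Q,A)$: each Fourier coefficient is the number of lattice vectors with a given value of the quadratic form.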
$$J=J_n=\begin{pmatrix}0&E_n\\-E_n&0\end{pmatrix}.\tag{2.4}$$
Furthermore, it is easy to see that the composition of any two automorphisms of the
form (2.1) is also an automorphism of the form (2.1) with matrix equal to the product
of the original matrices. Thus, we obtain
PROPOSITION 2.1. Let $S$ be the subgroup of $GL_{2n}(\mathbf R)$ generated by the matrices (2.2)–(2.4). Then for every $M=\begin{pmatrix}A&B\\C&D\end{pmatrix}\in S$ the matrix $CZ+D$ is invertible for all $Z\in\mathbf H_n$, and the map
$$f(M):Z\to M\langle Z\rangle\tag{2.5}$$
is a holomorphic automorphism of $\mathbf H_n$. The map $M\to f(M)$ gives a homomorphism from $S$ to the group of holomorphic automorphisms of $\mathbf H_n$.
In order to characterize $S$ as an algebraic group, we first note that each generator (2.2)–(2.4) leaves invariant the skew-symmetric bilinear form with matrix (2.4), i.e., it satisfies the relation ${}^tMJ_nM=J_n$. Hence, $S$ is contained in the group
$$Sp_n(\mathbf R)=\{M\in M_{2n}(\mathbf R);\ {}^tMJ_nM=J_n\},\tag{2.6}$$
which is called the real symplectic group of degree $n$. It follows from the definition that a $2n\times 2n$ real matrix $M=\begin{pmatrix}A&B\\C&D\end{pmatrix}$ with $n\times n$ blocks $A,B,C,D$ is symplectic (i.e., belongs to the symplectic group of degree $n$) if and only if
$$ {}^tAC={}^tCA,\qquad {}^tBD={}^tDB,\qquad {}^tAD-{}^tCB=E_n.\tag{2.7}$$
It is easy to see that a matrix $M$ is symplectic if and only if the matrix ${}^tM=JM^{-1}J^{-1}$ is symplectic. This implies that the conditions (2.7) can be rewritten in the form
$$A\,{}^tB=B\,{}^tA,\qquad C\,{}^tD=D\,{}^tC,\qquad A\,{}^tD-B\,{}^tC=E_n.\tag{2.8}$$
THEOREM 2.2. The symplectic group of degree $n$ is generated by the matrices (2.2)–(2.4). In other words, $S=Sp_n(\mathbf R)$.
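The defining relation (2.6) and the equivalent block conditions (2.7) are easy to test numerically. In the sketch below the generator shapes $T(S)=\begin{pmatrix}E&S\\0&E\end{pmatrix}$ and $U(V)=\begin{pmatrix}{}^tV&0\\0&V^{-1}\end{pmatrix}$ are standard normalizations assumed by the example (the exact shapes of (2.2)–(2.3) are not visible in this copy), and $S$, $V$ are arbitrary choices.

```python
import numpy as np

n = 2
E, O = np.eye(n), np.zeros((n, n))
J = np.block([[O, E], [-E, O]])                      # the matrix J_n of (2.4)

def is_symplectic(M):
    return np.allclose(M.T @ J @ M, J)               # relation (2.6)

S = np.array([[1., 2.], [2., 3.]])                   # arbitrary symmetric S
V = np.array([[1., 1.], [0., 1.]])                   # arbitrary V in GL_n
T_S = np.block([[E, S], [O, E]])                     # translation-type generator
U_V = np.block([[V.T, O], [O, np.linalg.inv(V)]])    # dilation-type generator

for M in (T_S, U_V, J, T_S @ U_V @ J):
    assert is_symplectic(M)                          # generators and products

P = T_S @ J
A, B, C, D = P[:n, :n], P[:n, n:], P[n:, :n], P[n:, n:]
assert np.allclose(A.T @ C, C.T @ A)                 # first relation of (2.7)
assert np.allclose(B.T @ D, D.T @ B)                 # second relation of (2.7)
assert np.allclose(A.T @ D - C.T @ B, E)             # third relation of (2.7)
```

That products of generators pass the test reflects the fact that (2.6) defines a group.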
$A'=A+\lambda C$, and so it has rank $n$ for $\lambda$ sufficiently small. We see that from the beginning we may assume, without loss of generality, that $A=E_n$. Now, by the first relation in (2.7), $C$ is a symmetric matrix, and
The third and second relations in (2.7) show that $D_1=E_n$ and ${}^tB=B$ in the last matrix, and hence it is equal to the matrix $T(B)$. $\square$
This transitivity implies that $\mathbf H_n$ can be identified with the quotient $Sp_n(\mathbf R)/\mathrm{St}(Z)$ of the symplectic group by the stabilizer $\mathrm{St}(Z)$ of an arbitrary point $Z\in\mathbf H_n$. All of the stabilizers are obviously conjugate to the stabilizer of the point $iE_n$. The structure of the latter group is given by the next proposition.
where $U(n)$ is the unitary group of order $n$. The map $M\to u(M)$ is an isomorphism of $\mathrm{St}(iE_n)$ with the unitary group $U(n)$.
PROOF. The proposition follows easily from the definitions. $\square$
be the Euclidean volume element on $\mathbf H_n$. Then for any symplectic matrix $M=\begin{pmatrix}A&B\\C&D\end{pmatrix}$ we have
$$d\,M\langle Z\rangle=|\det(CZ+D)|^{-2n-2}\,dZ.$$
PROOF. For $Z=(z_{\alpha\beta})=(x_{\alpha\beta}+iy_{\alpha\beta})$ we set $Z'=(z'_{\alpha\beta})=(x'_{\alpha\beta}+iy'_{\alpha\beta})=M\langle Z\rangle$. To prove the lemma, we must find the Jacobian determinant of the variables $x'_{\alpha\beta},y'_{\alpha\beta}$ with respect to the variables $x_{\alpha\beta},y_{\alpha\beta}$, i.e., the determinant of the transition matrix from the $n(n+1)$-vector whose components (in any order) are the differentials $dx'_{\alpha\beta},dy'_{\alpha\beta}$, to the analogous vector with components $dx_{\alpha\beta},dy_{\alpha\beta}$. It is actually simpler to work with the corresponding question for $\langle n\rangle$-vectors made up of the complex differentials $dz'_{\alpha\beta}=dx'_{\alpha\beta}+i\,dy'_{\alpha\beta}$ and $dz_{\alpha\beta}=dx_{\alpha\beta}+i\,dy_{\alpha\beta}$. If $Z_1,Z_2\in\mathbf H_n$, then, taking into account the symmetry of $Z_2$, we have
$$Z_2'-Z_1'=(Z_2\,{}^tC+{}^tD)^{-1}(Z_2\,{}^tA+{}^tB)-(AZ_1+B)(CZ_1+D)^{-1}=(Z_2\,{}^tC+{}^tD)^{-1}(Z_2-Z_1)(CZ_1+D)^{-1};$$
setting $Z_1=Z_2=Z$ and passing to differentials, we obtain
$$DZ'={}^t(CZ+D)^{-1}\,DZ\,(CZ+D)^{-1},$$
where $DZ=(dz_{\alpha\beta})$ and $DZ'=(dz'_{\alpha\beta})$ are symmetric matrices of complex differentials. We let $\rho$ denote the $\langle n\rangle$-dimensional representation of $GL_n(\mathbf C)$ which associates to every matrix $U$ the linear transformation $(v_{\alpha\beta})\to U(v_{\alpha\beta})\,{}^tU$ of the variables $v_{\alpha\beta}=v_{\beta\alpha}$.
PROOF. If we compute $Y'=-\dfrac{1}{2i}\,(\bar Z'-Z')$ using equation (2.11), we obtain the lemma. $\square$
gives an analytic isomorphism of $\mathbf H_n$ with the bounded region $\{W\in S_n(\mathbf C);\ \bar W\,W<E_n\}$, where the inequality is understood in the sense of Hermitian matrices. Prove that the inverse map is given by the formula $W\to Z=i(E_n+W)(E_n-W)^{-1}$.
PROBLEM 2.11. Prove that the volume element
$$d^*Y=(\det Y)^{-(n+1)/2}\prod_{1\le\alpha\le\beta\le n}dy_{\alpha\beta}$$
If $Z\in\mathbf H_k$ and $W,W'\in M_{k,1}(\mathbf C)$, then it is easy to see that the series
The last formula, the only nontrivial one among the basic transformation rules for theta-functions, is the inversion formula, which had its origin in the famous Jacobi inversion formula.
LEMMA 3.1 (Inversion formula for theta-functions). One has the identity
$$\theta^k(-Z^{-1};\,W',-W)=(\det(-iZ))^{1/2}\,\theta^k(Z;\,W,W'),\tag{3.5}$$
where the square root is positive for $Z=iY$ and is extended to arbitrary $Z$ by analytic continuation (see Proposition 2.6).
PROOF. The function
$$\exp(-\pi i\,{}^tW'W)\,\theta^k(Z;W,W')=\sum_{N\in M_{k,1}}\exp\bigl(\pi i\,Z[N-W']+2\pi i\,{}^t(N-W')W\bigr)$$
where the coefficients $c(L)$ depend only on $Z$, $W$, and $L$. This series converges uniformly if $W'$ belongs to any set of the form $M_{k,1}(\mathbf R)+W_0$ with fixed $W_0$ (since the series is majorized by an absolutely convergent numerical series); and the series can be integrated term-by-term over subsets of such sets. We multiply both sides of (3.6)
by $\exp(-2\pi i\,{}^tW'L)$, set $W'=H+W_0$ (where $W_0$ will be chosen later), and integrate term-by-term over the unit cube $C=\{H=(h_j)\in M_{k,1}(\mathbf R);\ 0\le h_j\le 1\}$ with respect to the Euclidean measure $dH=dh_1\cdots dh_k$. We obtain
$$c(L)=\sum_{N\in M_{k,1}}\int_{-N+C}\exp\Bigl(\pi i\bigl(Z[H+W_0]-2\,{}^t(H+W_0)(W+L)\bigr)\Bigr)\,dH$$
where
$$I(Z)=\int_{M_{k,1}(\mathbf R)}\exp(\pi i\,Z[H])\,dH.$$
If $Z=iY$, then, making the change of variables $H=VH'$, where $V\in GL_k(\mathbf R)$ and $Y[V]=E_k$, we obtain
Since the left and right sides of this equality are holomorphic functions of each entry in the matrix $Z\in\mathbf H_k$, it follows by the principle of analytic continuation that the equality holds for all $Z\in\mathbf H_k$. $\square$
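For $k=1$ and $W=W'=0$, (3.5) reduces to the classical Jacobi identity $\theta(-1/z)=(-iz)^{1/2}\theta(z)$, which can be confirmed numerically; the truncation bound below is an ad hoc choice of the example.

```python
import cmath

def theta1(z, bound=50):
    """Truncation of theta^1(z; 0, 0) = sum over n in Z of exp(pi i n^2 z)."""
    return sum(cmath.exp(1j * cmath.pi * n * n * z)
               for n in range(-bound, bound + 1))

z = 0.25 + 0.8j                        # a point of H_1
lhs = theta1(-1 / z)
rhs = cmath.sqrt(-1j * z) * theta1(z)  # principal branch; positive for z = iy
assert abs(lhs - rhs) < 1e-10
```

The principal branch of the square root is the correct one here because $-iz$ has positive real part for every $z$ in the upper half-plane, so it agrees with the branch of (3.5) fixed on the imaginary axis.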
and we define
$$\theta(Z,\Omega)=\theta^k(Z;W,W').\tag{3.7}$$
Next, we let
$$\Gamma^k=Sp_k(\mathbf Z)=Sp_k(\mathbf R)\cap M_{2k}(\mathbf Z)\tag{3.8}$$
denote the set of integral symplectic matrices. It follows from the definition that $\Gamma^k$ is a semigroup. The relation (2.9) shows that $\Gamma^k$ is a group. The group $\Gamma^k$ is called the Siegel modular group of degree $k$. Given a matrix $M=\begin{pmatrix}A&B\\C&D\end{pmatrix}\in\Gamma^k$, we let $\xi(M)$ and $\eta(M)$ denote the diagonal entries (arranged in a column) of the symmetric matrices $B\,{}^tA$ and $C\,{}^tD$, respectively, and we set
PROOF. Since (3.11) holds for the generators of the group $\Gamma'$, the proposition follows if we verify that for any $M,M_1\in\Gamma'$
(3.13)
where $\varepsilon$ is an eighth root of unity that depends both on the matrices $M$ and $M_1$ and on the choice of sign in the definitions of $\theta|_M$, $(\theta|_M)|_{M_1}$, and $\theta|_{MM_1}$.
First of all, it is not hard to check by a direct substitution that, when the vector $\zeta(M)$ in the definition of $F|_M$ is replaced by any vector of the form $\zeta(M)+2L$, where $L=\begin{pmatrix}L_1\\L_2\end{pmatrix}\in M_{2k,1}$, the expression for $F|_M$ is multiplied by a fourth root of unity equal to
$$\exp\Bigl(-\frac{\pi i}{2}\,{}^tLJ_k\zeta(M)-\pi i\,{}^tL_1L_2\Bigr).$$
if $MM_1=\begin{pmatrix}A_2&B_2\\C_2&D_2\end{pmatrix}$, and since the number $\exp\bigl(\frac{\pi i}{2}\,{}^t\zeta(M)J_kM\zeta(M_1)\bigr)$ to the eighth power is 1, it follows that, up to an eighth root of unity, the last expression is equal to
LEMMA 3.3. For any $M,M_1\in\Gamma^k$ one has $\zeta(MM_1)\equiv M\zeta(M_1)+\zeta(M)\pmod 2$.
and, by definition,
If $M$ and $S$ are square integer matrices of the same size, and if $S$ is symmetric, then it is not hard to see that
$$d_c(MS\,{}^tM)\equiv M\,d_c(S)\pmod 2.$$
From this congruence, the relations (2.8), and the fact that the diagonal does not change when taking the transpose, it follows that
$$\zeta(MM_1)\equiv\begin{pmatrix}A\cdot d_c(B_1\,{}^tA_1)+B\cdot d_c(C_1\,{}^tD_1)+d_c\bigl(A(B_1\,{}^tC_1+A_1\,{}^tD_1)\,{}^tB\bigr)\\[2pt] C\cdot d_c(B_1\,{}^tA_1)+D\cdot d_c(C_1\,{}^tD_1)+d_c\bigl(C(A_1\,{}^tD_1+B_1\,{}^tC_1)\,{}^tD\bigr)\end{pmatrix}\equiv M\zeta(M_1)+\zeta(M)\pmod 2,$$
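Lemma 3.3 can be spot-checked numerically. The sketch below assumes, matching the description after (3.8), that $\zeta(M)$ is the column obtained by stacking the diagonals of $B\,{}^tA$ and $C\,{}^tD$; it tests the congruence on products of integral symplectic matrices of degree 2.

```python
import numpy as np

k = 2
E, O = np.eye(k, dtype=int), np.zeros((k, k), dtype=int)
J = np.block([[O, E], [-E, O]])
S = np.array([[1, 2], [2, 3]])                     # symmetric integer S
T = np.block([[E, S], [O, E]])
V, Vinv = np.array([[1, 1], [0, 1]]), np.array([[1, -1], [0, 1]])
U = np.block([[V.T, O], [O, Vinv]])

def zeta(M):
    """Diagonals of B tA and C tD, stacked in a column."""
    A, B, C, D = M[:k, :k], M[:k, k:], M[k:, :k], M[k:, k:]
    return np.concatenate([np.diag(B @ A.T), np.diag(C @ D.T)])

mats = [J, T, U, T @ J, J @ U, T @ U @ J]
for M in mats:
    for M1 in mats:
        lhs = zeta(M @ M1)
        rhs = M @ zeta(M1) + zeta(M)
        assert ((lhs - rhs) % 2 == 0).all()        # Lemma 3.3, entrywise mod 2
```

Only the residues mod 2 agree; the exact values of the two sides generally differ, which is why the lemma is stated as a congruence.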
for any $k$-dimensional integer vector $L\in M_{k,1}$. Further show that any function $F(Z;W,W')$ that satisfies all of these relations and is holomorphic in $W'$ has the form $F(Z;W,W')=F_1(Z,W)\,\theta^k(-Z^{-1};W',-W)$, where $F_1$ depends only on $Z$ and $W$.
2. The Siegel modular group and the theta-group. In this subsection we show that the group $\Gamma'$ that is generated by matrices of the form (3.12) is actually the entire Siegel group $\Gamma^k$. Thus, the functional equations (3.11) hold for all $M\in\Gamma^k$. These equations take a particularly simple form if $M$ belongs to a certain subgroup of $\Gamma^k$ called the theta-group.
THEOREM 3.6. The Siegel modular group of degree $k$
where $r\ne 0$. Then there exists a matrix $g$ in the group $\Gamma'$ that is generated by matrices of the form (3.12) such that the product $gM$ has a $k\times k$ block of zeros in the lower-left corner:
$$gM=\begin{pmatrix}A_1&B_1\\0&D_1\end{pmatrix}.$$
We first note that Theorem 3.6 follows from Proposition 3.7. Namely, if $M\in\Gamma^k$, then, by Proposition 3.7, there exists $g\in\Gamma'$ such that the matrix $M_1=gM$ has the above form. Since $M_1$ is a symplectic matrix, we have ${}^tA_1D_1=E_k$ and ${}^tB_1D_1={}^tD_1B_1$ (see (2.7)). Since $M_1$ is also an integer matrix, it follows that $A_1,D_1\in\Lambda_k$ and $S=B_1D_1^{-1}={}^t(B_1D_1^{-1})\in S_k$. Thus, $M_1=T(S)U(D_1)$ and $M=g^{-1}M_1\in\Gamma'$.
Before proceeding to the proof of Proposition 3.7, we prove two useful lemmas.
LEMMA 3.8. Let $k\ge 2$, and let $u={}^t(u_1,\dots,u_k)$ be an arbitrary nonzero $k$-dimensional column of integers. Then there exists a matrix $V$ in the group $SL_k(\mathbf Z)$ of $k\times k$ integer matrices of determinant $+1$ such that
$$Vu={}^t(d,0,\dots,0),\tag{3.16}$$
where $d$ is the greatest common divisor of $u_1,\dots,u_k$.
PROOF. For $k=2$, the lemma follows from the fact that the g.c.d. of two integers can be written as an integer linear combination of those integers. The general case follows by an obvious induction on $k$. $\square$
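Lemma 3.8 is constructive: an embedded $2\times 2$ Bézout matrix of determinant $+1$ clears one coordinate at a time. The sketch below is one way to carry out the induction (it is an illustration, not the book's proof); `reduce_column` and `bezout` are names of the example.

```python
def bezout(a, b):
    """Extended gcd: return (g, x, y) with x*a + y*b = g >= 0."""
    if b == 0:
        return (abs(a), (1 if a >= 0 else -1), 0)
    g, x, y = bezout(b, a % b)
    return (g, y, x - (a // b) * y)

def reduce_column(u):
    """Build V in SL_k(Z) with V u = t(d, 0, ..., 0), d = gcd of the entries."""
    k = len(u)
    V = [[int(i == j) for j in range(k)] for i in range(k)]
    u = list(u)
    for i in range(1, k):
        a, b = u[0], u[i]
        g, x, y = bezout(a, b)
        if g == 0:
            continue                          # both coordinates already zero
        # the 2x2 block [[x, y], [-b//g, a//g]] has determinant (xa + yb)/g = +1
        r0 = [x * V[0][j] + y * V[i][j] for j in range(k)]
        ri = [(-b // g) * V[0][j] + (a // g) * V[i][j] for j in range(k)]
        V[0], V[i] = r0, ri
        u[0], u[i] = g, 0
    return V, u[0]

V, d = reduce_column([6, 10, 15])
assert d == 1                                  # gcd(6, 10, 15) = 1
```

Each step acts only on coordinates 1 and $i$, so the accumulated $V$ stays in $SL_k(\mathbf Z)$.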
LEMMA 3.9. Let $u$ be a nonzero $2k$-dimensional column of integers. Then there exists a matrix $g\in\Gamma'$ such that
$$gu={}^t(d,0,\dots,0),\tag{3.17}$$
PROOF OF PROPOSITION 3.7. Without loss of generality we may assume that $M$ is an integer matrix. By Lemma 3.9, we may also assume that the first column of $M$ has the form (3.17). In the case $k=1$ this proves the proposition. Suppose that the proposition has already been proved for $2k'\times 2k'$ matrices for $k'<k$. The relation (3.15) for the matrix $M=\begin{pmatrix}A&B\\C&D\end{pmatrix}$ is equivalent to the conditions
$$ {}^tAC={}^tCA,\qquad {}^tBD={}^tDB,\qquad {}^tAD-{}^tCB=r\cdot E_k.\tag{3.18}$$
$$A=\begin{pmatrix}a_{11}&a_{12}\,\cdots\,a_{1k}\\0&\\ \vdots&A_0\\0&\end{pmatrix},\qquad C=\begin{pmatrix}c_{11}&c_{12}\,\cdots\,c_{1k}\\0&\\ \vdots&C_0\\0&\end{pmatrix},\tag{3.19}$$
where $a_{11}\ne 0$. From the relation ${}^tAC={}^tCA$ it follows that $c_{12}=\cdots=c_{1k}=0$ and ${}^tA_0C_0={}^tC_0A_0$. Since ${}^tAD={}^tCB+rE_k$, we conclude that
$$D=\begin{pmatrix}d_{11}&0\,\cdots\,0\\ d_{21}&\\ \vdots&D_0\\ d_{k1}&\end{pmatrix},\qquad {}^tA_0D_0-{}^tC_0B_0=rE_{k-1},$$
where $B_0$ denotes the corresponding block of $B$. Finally, the relation ${}^tBD={}^tDB$ implies the relation ${}^tB_0D_0={}^tD_0B_0$. From all of these relations it follows that the matrix $M_0=\begin{pmatrix}A_0&B_0\\C_0&D_0\end{pmatrix}$ satisfies the condition ${}^tM_0J_{k-1}M_0=rJ_{k-1}$. By the
For an arbitrary $(2k-2)\times(2k-2)$ matrix $M'=\begin{pmatrix}A'&B'\\C'&D'\end{pmatrix}$, $k\ge 2$, we define the
From what we have proved it follows that the functional equation (3.11) holds for any matrix $M$ in the modular group $\Gamma^k$. By the remark at the beginning of the proof of Proposition 3.2, in the case when $\zeta(M)\equiv 0\pmod 2$ we may suppose that $\zeta(M)=0$. Then the functional equation (3.11) can be written in the form
$$\det(CZ+D)^{-1/2}\,\theta(M\langle Z\rangle,M\Omega)=\chi(M)\,\theta(Z,\Omega),\tag{3.21}$$
where $\chi(M)$ is an eighth root of unity. From Lemma 3.3 it follows that the set
$$\Theta^k=\{M\in\Gamma^k;\ \zeta(M)\equiv 0\ (\mathrm{mod}\ 2)\}$$
is a subgroup of $\Gamma^k$. Returning to our original notation, we see that we have the following theorem.
THEOREM 3.10. The set
where $\Omega=(W,W')\in M_{m,2n}(\mathbf C)$, and for an arbitrary square matrix $T$ we set
$$e\{T\}=\exp(\pi i\,\sigma(T)),\tag{3.23}$$
where $\sigma(T)$ is, as usual, the trace of $T$, converges absolutely and uniformly if $\Omega$ belongs to a fixed compact subset of $M_{m,2n}(\mathbf C)$ and $Z\in\mathbf H_n(\varepsilon)$ with $\varepsilon>0$ (see (1.11)). Thus, this series determines a holomorphic function on the space $\mathbf H_n\times M_{m,2n}(\mathbf C)$. The series (3.22) is called the theta-function of degree $n$ for the matrix $Q$ (or the corresponding
$$M_Q=\begin{pmatrix}A_Q&B_Q\\C_Q&D_Q\end{pmatrix}=\begin{pmatrix}E_m\otimes A&Q\otimes B\\Q^{-1}\otimes C&E_m\otimes D\end{pmatrix}$$
belongs to the symplectic group $Sp_{mn}(\mathbf R)$, and one has the following identities:
$$\theta^n\bigl(M\langle Z\rangle,\,Q,\,(W,W')\,{}^tM\bigr)=\theta^{mn}\bigl(M_Q\langle Q\otimes Z\rangle;\ A_Q\,c(QW)+B_Q\,c(W'),\ C_Q\,c(QW)+D_Q\,c(W')\bigr),\tag{3.25}$$
$$\det(CZ+D)^m=\det\bigl(C_Q(Q\otimes Z)+D_Q\bigr).\tag{3.26}$$
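Identity (3.26) uses only the multiplicativity of the Kronecker product, $(P\otimes R)(P'\otimes R')=PP'\otimes RR'$, and therefore holds for any invertible $Q$. A random numerical check (an illustration, not a proof; the matrices are arbitrary choices of the example):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 2
Q = rng.standard_normal((m, m))
Q = Q @ Q.T + m * np.eye(m)               # an invertible symmetric Q
C = rng.standard_normal((n, n))
D = rng.standard_normal((n, n))
Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

C_Q = np.kron(np.linalg.inv(Q), C)        # C_Q = Q^{-1} (x) C
D_Q = np.kron(np.eye(m), D)               # D_Q = E_m (x) D
lhs = np.linalg.det(C @ Z + D) ** m
rhs = np.linalg.det(C_Q @ np.kron(Q, Z) + D_Q)
assert abs(lhs - rhs) < 1e-8 * (1 + abs(lhs))
```

Indeed, $C_Q(Q\otimes Z)+D_Q=E_m\otimes(CZ+D)$ is block diagonal with $m$ copies of $CZ+D$, which is exactly the content of (3.26).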
$$ {}^tc(T)\,c(V)=\sum_{\alpha}t_\alpha v_\alpha=\sigma({}^tTV),$$
where $t_\alpha$ and $v_\alpha$ run through the corresponding entries of $T$ and $V$.
We thus obtain:
Similarly,
Finally,
and
We have
(3.28)
the theta-function (3.22) of degree $n$ for the matrix $Q$ satisfies the functional equation
$$\det(CZ+D)^{-m/2}\,\theta^n\bigl(M\langle Z\rangle,\,Q,\,\Omega\,{}^tM\bigr)=\chi_Q(M)\,\theta^n(Z,Q,\Omega),\tag{3.29}$$
where $\chi_Q(M)$ is a certain eighth root of unity that for odd $m$ also depends on the choice of root of the determinant on the left. In particular, the theta-series (1.13) of degree $n$ for the matrix $Q$ satisfies the following functional equation for every $M=\begin{pmatrix}A&B\\C&D\end{pmatrix}\in\Gamma_0^n(q)$:
PROOF. Let $M_Q$ be the matrix that is constructed from $M$ in Lemma 3.12. By Lemma 3.12, $M_Q\in Sp_{mn}(\mathbf R)$. From the definitions it follows that $M_Q$ is an integer matrix, so that $M_Q\in\Gamma^{mn}$. Finally, all of the diagonal entries in the matrices $B_Q\,{}^tA_Q=Q\otimes B\,{}^tA$ and $C_Q\,{}^tD_Q=Q^{-1}\otimes C\,{}^tD=qQ^{-1}\otimes(q^{-1}C\,{}^tD)$ are even, because all of the diagonal entries in the first factors of the tensor products are even, and the second factors are integer matrices. Thus, $M_Q$ is contained in the theta-group $\Theta^{mn}$. Using Lemma 3.12 for the matrix $M$ and Theorem 3.10 for the matrix $M_Q$, we obtain
$$\begin{aligned}\det(CZ+D)^{-m/2}\,\theta^n\bigl(M\langle Z\rangle,\,Q,\,\Omega\,{}^tM\bigr)&=\det\bigl(C_Q(Q\otimes Z)+D_Q\bigr)^{-1/2}\cdot\theta^{mn}\bigl(M_Q\langle Q\otimes Z\rangle;\ A_Q\,c(QW)+B_Q\,c(W'),\ C_Q\,c(QW)+D_Q\,c(W')\bigr)\\ &=\chi(M_Q)\,\theta^{mn}\bigl(Q\otimes Z;\,c(QW),\,c(W')\bigr)=\chi(M_Q)\,\theta^n(Z,Q,(W,W')),\end{aligned}$$
which proves (3.29) if we set $\chi_Q(M)=\chi(M_Q)$. (3.30) follows from (3.29) if we set $\Omega=0$. $\square$
Theorem 3.13 answers (except for the computation of the factor $\chi_Q(M)$) our question about the action on $\theta^n(Z,Q)$ of symplectic transformations in the subgroup $\Gamma_0^n(q)$ of the Siegel modular group $\Gamma^n$. However, when studying certain properties of theta-series, such as their behavior near the boundary of $\mathbf H_n$, one needs to understand the action on theta-series of an arbitrary transformation in $\Gamma^n$. In general (when $q\ne 1$), such transformations do not take the theta-series $\theta^n(Z,Q)$ to itself (even modulo a multiplicative factor). But the theta-series does remain inside a certain finite-dimensional space that depends on $n$ and $q$: the space of generalized theta-series of degree $n$ for the matrix $Q$. Suppose that $Q\in\mathbf A_m^+$ and $q$ is the level of $Q$. We consider the set of matrices
$$T_n(Q)=\{T\in M_{m,n};\ QT\equiv 0\ (\mathrm{mod}\ q)\},\tag{3.31}$$
and for each $T\in T_n(Q)$ we define the generalized theta-series of degree $n$ for $Q$ by setting
PROOF. We derive (3.33) from (3.5), using the connection that is given in Lemma 3.12 between the theta-functions in the two formulas. If we use (3.24) and the properties of the tensor product of matrices (listed right before Lemma 3.12), we obtain
$$\theta^n\bigl(-Z^{-1},\,Q^{-1},\,(QW',-QW)\bigr)=\theta^{mn}\bigl(Q^{-1}\otimes(-Z^{-1});\ c(W'),\ -c(QW)\bigr).$$
Since $Q^{-1}\otimes(-Z^{-1})=-(Q\otimes Z)^{-1}$, by Lemma 3.1 the last expression is equal to
$$\det\bigl(-i(Q\otimes Z)\bigr)^{1/2}\,\theta^{mn}\bigl(Q\otimes Z;\,c(QW),\,c(W')\bigr)$$
and hence, by (3.24), it is equal to the expression on the right in (3.33). $\square$
PROOF OF PROPOSITION 3.14. The first two formulas follow directly from the definitions:
$$\theta^n\bigl(Z[{}^tV],\,Q|T\bigr)=\sum_{N\in M_{m,n}}e\{{}^tV\,Q[N+q^{-1}T]\,VZ\}=\sum_{N'\in M_{m,n}}e\{Q[N'+q^{-1}TV]\,Z\},$$
since we have $M_{m,n}V=M_{m,n}$ for $V\in\Lambda_n$;
$$\theta^n(Z+S,\,Q|T)=\sum_{N\in M_{m,n}}e\{Q[N+q^{-1}T]\,Z\}\,e\{Q[q^{-1}T]\,S\},$$
since
$$\sigma\bigl(Q[N+q^{-1}T]\,S\bigr)=\sigma(Q[N]S)+2\sigma({}^tN\,q^{-1}QTS)+\sigma(Q[q^{-1}T]S)\equiv\sigma(Q[q^{-1}T]S)\pmod 2,$$
because $N\in M_{m,n}$, $S\in S_n$, and all of the diagonal entries in $Q$, and hence also in $Q[N]$, are even, while $q^{-1}QT$ is an integer matrix. To prove the third identity, we apply the inversion formula (3.33) (with $Q$ replaced by $Q^{-1}$, $W'=0$, and $W=q^{-1}QT$) to the theta-series $\theta^n(Z,Q|T)=\theta^n\bigl(Z,\,Q,\,(0,-q^{-1}T)\bigr)$.
We obtain
$$\theta^n(-Z^{-1},\,Q|T)=(\det Q)^{-n/2}\bigl(\det(-iZ)\bigr)^{m/2}\,\theta^n\bigl(Z,\,Q^{-1},\,(q^{-1}QT,0)\bigr)=(\det Q)^{-n/2}\bigl(\det(-iZ)\bigr)^{m/2}\sum_{N\in M_{m,n}}e\{Q^{-1}[N]Z+2q^{-1}\,{}^tNT\}.$$
For every $L\in M_{m,n}$ the matrix $qQ^{-1}L$ is obviously an integer matrix belonging to the set $T_n(Q)$ (see (3.31)). Conversely, any matrix $T'\in T_n(Q)$ is uniquely representable in the form $qQ^{-1}L$ ($L\in M_{m,n}$). Thus, the map $L\to qQ^{-1}L$ gives an isomorphism of the additive group $M_{m,n}$ with the group $T_n(Q)$. Under this isomorphism the subgroup $QM_{m,n}$ is obviously mapped onto $qM_{m,n}\subset T_n(Q)$, so that we obtain an isomorphism of quotient groups: $M_{m,n}/QM_{m,n}\xrightarrow{\ \sim\ }T_n(Q)/\bmod q$. Continuing the above chain of equalities, we have
$$(\det Q)^{-n/2}\bigl(\det(-iZ)\bigr)^{m/2}\sum_{\substack{N\in M_{m,n}\\ L\in M_{m,n}/QM_{m,n}}}e\{Q^{-1}[QN+L]Z+2q^{-1}\,{}^t(QN+L)T\}$$
PROBLEM 3.16. Prove that there are $(\det Q)^n$ elements in the set $T_n(Q)/\bmod q$.
PROBLEM 3.17. Suppose that $Q\in\mathbf A_m^+$, $q$ is the level of $Q$, and $T\in T_n(Q)$. Prove that for every matrix $M=\begin{pmatrix}A&B\\C&D\end{pmatrix}$ in the group $\Gamma_0^n(q)$ the theta-series $\theta^n(Z,Q|T)$ satisfies the functional equation
$$\det(CZ+D)^{-m/2}\,\theta^n\bigl(M\langle Z\rangle,\,Q|T\bigr)=\chi_Q(M)\,e\{q^{-2}A\,{}^tB\,Q[T]\}\,\theta^n(Z,\,Q|TA)$$
with the same scalar $\chi_Q(M)$ as in Theorem 3.13. Thus, if $M\equiv E_{2n}\pmod{q^2}$, then
$$j_Q(M,Z)=\det(CZ+D)^{m/2}\,\chi_Q(M)\qquad\Bigl(M=\begin{pmatrix}A&B\\C&D\end{pmatrix}\Bigr),$$
from which it follows that, as a function of $Z$, it is holomorphic and nonzero on $\mathbf H_n$ for every $M\in\Gamma_0^n(q)$. Finally, from (4.1) we see that the following relation holds for any $M,M_1\in\Gamma_0^n(q)$ and $Z\in\mathbf H_n$:
The discussion at the beginning of the section shows that the function $j_Q$ is an automorphy factor of $\Gamma_0^n(q)$, where $q$ is the level of the quadratic form $Q$, on the upper half-plane of degree $n$ with values in the multiplicative group $\mathbf C^*$ of nonzero complex numbers. The next lemma gives other examples of automorphy factors.
The analogous identity for $j(M,Z)^k$ follows from this, since the map $A\to(\det A)^k$ is a group homomorphism from $GL_n(\mathbf C)$ to $\mathbf C^*$. $\square$
On the other hand, since $Q^{-1}$ is a unimodular matrix, it follows that when we replace $N$ by $Q^{-1}N$ in the definition (1.13) of the theta-series $\theta^n(Z',Q)$, we obtain $\theta^n(Z',Q^{-1})=\theta^n(Z',Q)$. Thus,
(4.4)
If $n=1$, then the relations $\theta^1(z+1,Q)=\theta^1(z,Q)$ and (4.4) imply that
$$\theta^1\bigl(-(z+1)^{-1},\,Q\bigr)=\bigl(-i(z+1)\bigr)^{m/2}\,\theta^1(z,Q).$$
We set
$$A=\begin{pmatrix}0&-1\\1&0\end{pmatrix}\begin{pmatrix}1&1\\0&1\end{pmatrix}=\begin{pmatrix}0&-1\\1&1\end{pmatrix}.$$
Then for $z\in\mathbf H_1$ we have $A\langle z\rangle=-(z+1)^{-1}$, $A^2\langle z\rangle=-(z+1)z^{-1}$, $A^3\langle z\rangle=z$. Using these relations, we obtain
$$\begin{aligned}\theta^1(z,Q)^v&=\theta^1\bigl(A\langle A^2\langle z\rangle\rangle,\,Q\bigr)^v\\ &=\bigl(-i(-(z+1)z^{-1}+1)\bigr)^{vm/2}\,\theta^1\bigl(A^2\langle z\rangle,Q\bigr)^v\\ &=(iz^{-1})^{vm/2}\bigl(-i(-(z+1)^{-1}+1)\bigr)^{vm/2}\,\theta^1\bigl(A\langle z\rangle,Q\bigr)^v\\ &=(iz^{-1})^{vm/2}\bigl(-iz(z+1)^{-1}\bigr)^{vm/2}\bigl(-i(z+1)\bigr)^{vm/2}\,\theta^1(z,Q)^v,\end{aligned}$$
where we take $v=1$ for $m$ even and $v=2$ for $m$ odd. We choose a point $z_0\in\mathbf H_1$ for which $\theta^1(z_0,Q)\ne 0$. We then have the equality
$$(iz_0^{-1})^{vm/2}\bigl(-iz_0(z_0+1)^{-1}\bigr)^{vm/2}\bigl(-i(z_0+1)\bigr)^{vm/2}=1,$$
from which (since $vm$ is even) it follows that
$$i^{vm/2}\cdot(-i)^{vm/2}\cdot(-i)^{vm/2}=(-i)^{vm/2}=1,$$
and hence
$$vm/2\equiv 0\pmod 4.\tag{4.5}$$
If $m$ were odd, we would have $v=2$, and the congruence (4.5) would give $m\equiv 0\pmod 4$, which is impossible. Hence $m$ is even. But then $v=1$, and the congruence (4.5) shows that $m\equiv 0\pmod 8$. Since $m\equiv 0\pmod 8$, we can rewrite (4.4) in the form
$$\theta^n\bigl(J_n\langle Z\rangle,\,Q\bigr)=\det(-Z)^{m/2}\,\theta^n(Z,Q).$$
For the other generators of the modular group $\Gamma^n$ we immediately find from the
definitions that
$$\theta^n(M\langle Z\rangle, Q) = \theta^n(Z, Q) \quad\text{for } M = U(V) \text{ and } T(S).$$
This implies that the automorphy factor $j_Q(M,Z)$ for $M = J_n$, $U(V)$, and $T(S)$ is
equal, respectively, to $\det(-Z)^{m/2}$, $1$, and $1$. The automorphy factor $j(M,Z)^{m/2}$ also
takes these values on the generators. On the other hand, for any $M \in \Gamma^n$ we have, by
Theorem 3.13,
$$j_Q(M,Z) = \chi_Q(M)\,j(M,Z)^{m/2},$$
and, by Lemma 4.1(2), the map $\chi_Q\colon \Gamma^n \to \mathbb C^*$ is a homomorphism of groups. The
above discussion shows that this homomorphism is trivial on the generators of $\Gamma^n$, and
hence on the entire group. □
Verify that the Gram matrix $Q_8$ of the root lattice $E_8$ is contained in $\mathbf A_8^+$ and satisfies the condition $\det Q_8 = q(Q_8) = 1$. Conclude from
this that for any natural number $m$ divisible by 8 there exist matrices $Q_m \in \mathbf A_m^+$ with
$\det Q_m = q(Q_m) = 1$.
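Under the identification of $Q_8$ with the Gram matrix of $E_8$, the two conditions can be checked by direct computation. The following sketch (in Python; the Dynkin-diagram labeling and the helper names are ours, not from the text) verifies that $\det Q_8 = 1$ and that $Q_8^{-1}$ is again an even matrix, so that $q(Q_8) = 1$.

```python
from fractions import Fraction

# Gram matrix of the E8 root lattice: 2 on the diagonal and -1 for each
# edge of the E8 Dynkin diagram (a chain 0-1-2-3-4-5-6 with node 7
# attached to node 2).
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (2, 7)]
Q8 = [[2 if i == j else 0 for j in range(8)] for i in range(8)]
for i, j in edges:
    Q8[i][j] = Q8[j][i] = -1

def det_and_inverse(M):
    """Exact determinant and inverse by Gauss-Jordan over the rationals."""
    n = len(M)
    A = [[Fraction(M[i][j]) for j in range(n)]
         + [Fraction(1 if i == j else 0) for j in range(n)] for i in range(n)]
    d = Fraction(1)
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        if piv != col:
            A[col], A[piv] = A[piv], A[col]
            d = -d
        p = A[col][col]
        d *= p
        A[col] = [x / p for x in A[col]]
        for r in range(n):
            if r != col and A[r][col] != 0:
                f = A[r][col]
                A[r] = [x - f * y for x, y in zip(A[r], A[col])]
    return d, [row[n:] for row in A]

d, Qinv = det_and_inverse(Q8)
print(d)                               # expected: 1
print([Qinv[i][i] for i in range(8)])  # diagonal of Q8^{-1}: all even
```

Since $\det Q_8 = 1$, the inverse is integral, and evenness of its diagonal is exactly the statement that the level is 1.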
3. The multiplier as a Gauss sum. We first fix the square root of a complex number
$z \neq 0$ by setting
(4.6) $\qquad z^{1/2} = |z|^{1/2}e^{i\varphi}, \qquad z^{k/2} = (z^{1/2})^k,$
where $z = |z|e^{2i\varphi}$ with $|z|^{1/2} > 0$, $-\pi/2 < \varphi \le \pi/2$, and $k$ is any integer. Next, suppose that
$\begin{pmatrix}A&B\\C&D\end{pmatrix} \in \mathrm{Sp}_n(\mathbb R)$. By Lemma 4.2, the function $f(Z) = \det(CZ+D)$ is nonzero on
$H_n$. If $\det D \neq 0$, then of the two branches of $f(Z)^{1/2}$ that are holomorphic on $H_n$
and differ from one another by a sign (see the remark in §3.1), we shall usually use the
notation $f(Z)^{1/2}$ to denote the branch satisfying the condition
where the right side is understood in the sense of (4.6). Finally, for any integer k we
set
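The branch convention (4.6) can be written out directly; the following sketch (Python; the function name is ours) fixes the branch of $z^{1/2}$ whose argument lies in $(-\pi/2, \pi/2]$ and defines half-integral powers as its $k$th powers.

```python
import cmath

def half_power(z, k=1):
    """z^{k/2} in the sense of (4.6): z^{1/2} = |z|^{1/2} e^{i*phi} with
    z = |z| e^{2i*phi} and -pi/2 < phi <= pi/2; then take the k-th power."""
    if z == 0:
        raise ValueError("z must be nonzero")
    phi = cmath.phase(z) / 2        # cmath.phase(z) lies in (-pi, pi]
    root = abs(z) ** 0.5 * cmath.exp(1j * phi)
    return root ** k

# half_power(-1) = i, since -1 = e^{2i(pi/2)} and phi = pi/2 is allowed;
# half_power(4, 3) = 8.
```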
PROPOSITION 4.5. Suppose that, under the assumptions of Theorem 3.13, the level
$q$ is greater than 1. If the function $\det(CZ+D)^{-m/2}$ on the right in the functional
equations (3.29) and (3.30) is understood in the sense of (4.7)–(4.8), then for any $M =
\begin{pmatrix}A&B\\C&D\end{pmatrix} \in \Gamma_0^n(q)$ the multiplier $\chi_Q(M)$ in these equations can be computed from the
formula
(4.9)
§4. COMPUTATION OF THE MULTIPLIER 29
where the roots are understood in the sense of (4.6), and $G(S, Q)$ denotes the following
Gauss sum, for $S$ an $n\times n$ symmetric matrix with rational entries and for $Q$ an $m\times m$
symmetric integer matrix with even integers on the main diagonal:
Note that $\det D$ is relatively prime to $q$ (see the third relation in (2.7)), and hence it is nonzero if $q > 1$. Further note that the
Gauss sum (4.10) does not depend on the choice of integer $d$ satisfying the property
$dS \in M_n$.
PROOF. We compute the limit
(4.11)
where E = En is the n x n identity matrix, in two different ways. On the one hand, by
Theorem 3.13 it is equal to
On the other hand, we set $M\langle i\lambda E\rangle = BD^{-1} + Z_0$. Then, applying (2.8), we find that
(4.13)
Substituting, we obtain
Let $d$ be a positive integer for which $dBD^{-1}$ is an integer matrix. We represent $N$ in
the form $N = L + dN_1$, where $L \in M_{m,n}/dM_{m,n}$, $N_1 \in M_{m,n}$. Since then
$$(\det Q)^{-n/2}\sum_{L} e\{Q[L]BD^{-1}\}$$
According to Lemma 3.15, the function inside the last limit is continuous in $Z =
iD^*(i\lambda C + D)^{-1} \in H_n$, and so the limit is $(\det(-i(iD^*D^{-1})))^{-m/2} = |\det D|^m$. Thus,
the limit (4.11) is equal to
$$(\det Q)^{-n/2}(\det D)^{-m/2}|\det D|^m\,G(BD^{-1}, Q).$$
We then set
It is easy to see that if $D$ satisfies (4.17) and $M$ is any nonsingular $n\times n$ integer matrix,
then
$$G_{MD}(S, Q) = G_D(S, Q).$$
Thus, if $D$ and $D_1$ are two matrices that satisfy (4.17), then because the matrix $D' =
\det D\cdot\det D_1\cdot E_n$ is divisible on the right by both $D$ and $D_1$, it follows that
so that $G_D(S, Q)$ does not depend on the choice of $D$. Then, taking $D$ to be the matrix
$dE_n$, where $d \in \mathbb N$ and $dS \in M_n$, we see that
(4.19) $\qquad G_D(S, Q) = G(S, Q)$.
(note that ${}^tD\cdot BD^{-1} = {}^tD\,D^*\cdot{}^tB = {}^tB$ is an integer matrix). Since ${}^tDB = {}^tBD$, it
follows that the map $L \to LB$ gives a homomorphism of quotient groups
Since $B\,{}^tC = A\,{}^tD - E_n$, it follows that the composition $B\,{}^tC$ of these two homomorphisms coincides with the automorphism of multiplication by $-E_n$. Since ${}^tCB =
{}^tAD - E_n$, the homomorphism ${}^tCB$ also coincides with multiplication by $-E_n$. Hence,
the maps ${}^tC$ and $B$ are isomorphisms, and we can write
$$\sum_{L\in M_{m,n}/M_{m,n}\,{}^tD} e\{Q[LB](-D^{-1}C)\}
= \sum_{L\in M_{m,n}/M_{m,n}\,{}^tD}$$
PROBLEM 4.6. In the notation of the definition (4.10) of Gauss sums, let $S =
d^{-1}S'$, where $S'$ is a symmetric integer matrix. Show that the Gauss sum of degree $n$
reduces to the usual Gauss sum modulo $d$ of the quadratic form with matrix $Q \otimes S'$:
PROBLEM 4.7 (The Gauss sum as an "automorphic form"). For $n \in \mathbb N$ and $q > 1$
define the set
$$\mathcal S = \Bigl\{S = BD^{-1};\ \begin{pmatrix}A&B\\C&D\end{pmatrix} \in \Gamma_0^n(q)\Bigr\}.$$
for any $M = \begin{pmatrix}A&B\\C&D\end{pmatrix} \in \Gamma_0^n(q)$ and $S \in \mathcal S$, where $\chi_Q(M)$ is the same multiplier as
in the corresponding functional equation for the theta-series. [Hint: Use the fact that
in this case $\chi_Q$ is a homomorphism of the group $\Gamma_0^n(q)$.]
4. Quadratic forms in an even number of variables. Suppose that $Q \in \mathbf A_m^+$, where
$m = 2k$ is even. By Theorem 3.13, the automorphy factor $j_Q(M,Z)$ (see (4.1)) for
$n \in \mathbb N$ and $M = \begin{pmatrix}A&B\\C&D\end{pmatrix} \in \Gamma_0^n(q)$, where $q$ is the level of $Q$, can be written in the
form
$$j_Q(M,Z) = \chi_Q(M)\det(CZ+D)^k = \chi_Q(M)\,j(M,Z)^k,$$
where $j(M,Z)$ is the automorphy factor (4.3) for the group $\mathrm{Sp}_n(\mathbb R)$, and hence also for the
group $\Gamma_0^n(q)$, and $\chi_Q$ is a function on $\Gamma_0^n(q)$ with values in the group of eighth roots
of unity. According to Lemma 4.1(2), the map $\chi_Q\colon \Gamma_0^n(q) \to \mathbb C^*$ is a group
homomorphism:
(4.20) $\qquad \chi_Q(MM_1) = \chi_Q(M)\chi_Q(M_1)$ for $M, M_1 \in \Gamma_0^n(q)$.
If $q = 1$, then $\chi_Q$ is trivial by Proposition 4.3. Hence we may assume that $q > 1$. In
this case, by Proposition 4.5,
(4.21) $\qquad \chi_Q(M) = (\det D)^k\,G(-D^{-1}C, Q)$.
We let $K$ denote the subgroup of $\Gamma_0^n(q)$ generated by matrices of the form $U(V)$
(see (2.2)) for $V \in \mathrm{SL}_n(\mathbb Z)$, $T(S)$ (see (2.3)) for $S \in S_n$, and ${}^tT(S)$ for $S \in qS_n$. From
(4.21) it immediately follows that the character $\chi_Q$ is trivial on all of the generators of
$K$, and hence on all of $K$. Hence, $\chi_Q$ is constant on every double coset $KMK$ with
$M \in \Gamma_0^n(q)$.
Here
(4.22) $\qquad \delta \equiv \det D \pmod q$.
PROOF. It suffices to prove that every double coset has a representative $M_1 =
\begin{pmatrix}A_1&B_1\\C_1&D_1\end{pmatrix}$ all of whose entries in the first rows and columns of $A_1, B_1, C_1, D_1$ are
zero, except for the entries in the first row and first column of $A_1$ and $D_1$, which equal
1. Once that has been proved, the lemma will follow by induction on $n$.
When we pass from the matrix $M$ to the matrix $M' = \begin{pmatrix}A'&B'\\C'&D'\end{pmatrix} = MU(V)$,
the block $D$ goes to the block $D' = DV$. By Lemma 3.8, the matrix $V \in \mathrm{SL}_n(\mathbb Z)$
can be chosen in such a way that the first row of $D'$ has the form $(d, 0, \dots, 0)$, where
$d$ is the greatest common divisor of the entries in the first row of $D$. Let $c$ be the
greatest common divisor of the entries $c'_{11}, \dots, c'_{1n}$ in the first row of $C'$. Then there
exist integers $s_{21}, \dots, s_{2n}$ such that $c'_{11}s_{21} + \dots + c'_{1n}s_{2n} = c$. Let $S = (s_{\alpha\beta})$ denote any
symmetric integer matrix whose second column is ${}^t(s_{21}, \dots, s_{2n})$. Then in the matrix
$M'' = \begin{pmatrix}A''&B''\\C''&D''\end{pmatrix} = M'T(S)$ the first two entries in the first row of the block
$D'' = C'S + D'$ are equal to $c'_{11}s_{11} + \dots + c'_{1n}s_{1n} + d$ and $c$, respectively. Since $c$
divides $c'_{11}, \dots, c'_{1n}$, and $c$ and $d$ are relatively prime, it follows that these two entries
are relatively prime. Thus, if we again multiply $M''$ on the right by a suitable matrix
of the form $U(V')$, we may assume that the first row of the $D$-block of this matrix is
$(1, 0, \dots, 0)$. It follows that, after multiplying the above matrix on the left by a suitable matrix of
the form $U(V'')$, we may assume that its $D$-block has the form
(4.23)
If the block D in M already has the form (4.23), then we can pass from M to the
matrix
where $M_0 = \begin{pmatrix}A_0&B_0\\C_0&D_0\end{pmatrix}$, $d$ is any natural number divisible by $\delta$, and $l_1, \dots, l_n$ are the
columns of the matrix $L$. By the definition of $e\{T\}$ (see (3.23)), the last expression is
equal to
$$\delta^k d^{-m}\sum_{l\in M_{m,1}(\mathbb Z/d\mathbb Z)} e\{-\delta^{-1}\gamma Q[l]\}.$$
By the formula (4.21) applied to $M_1 = \begin{pmatrix}\alpha&\beta\\\gamma&\delta\end{pmatrix}$, the last expression can be written
as $\chi_Q(M_1)$; hence in the notation of Lemma 4.8 we obtain the relation
(4.24)
We have thereby reduced the calculation of the multiplier $\chi_Q$ for arbitrary $n$ to the case
$n = 1$.
PROPOSITION 4.9. Let $Q \in \mathbf A_m^+$, where $m = 2k$ is even and the level $q$ of $Q$ is greater
than 1. Then for any $\begin{pmatrix}\alpha&\beta\\\gamma&\delta\end{pmatrix} \in \Gamma_0^1(q)$ one has
(4.25)
where $\chi_Q$ is the character of the quadratic form $Q$, i.e., it is the real Dirichlet character
modulo $q$ defined on integers $\delta$ prime to $q$ by the formula
(4.26) $\qquad \displaystyle \chi_Q(\delta) = (\operatorname{sign}\delta)^k|\delta|^{-k}\sum_{l\in M_{m,1}(\mathbb Z/\delta\mathbb Z)} \exp((\pi i/\delta)Q[l]);$
in particular
$$\chi_Q(-1) = (-1)^k.$$
If $p$ is an odd prime, then $\chi_Q(p)$ can be computed from the formula
$$\chi_Q(p) = \left(\frac{(-1)^k\det Q}{p}\right)\quad\text{(Legendre symbol)}.$$
PROOF. The formula (4.21) shows that the number $\xi = \chi_Q\bigl(\begin{smallmatrix}\alpha&\beta\\\gamma&\delta\end{smallmatrix}\bigr)$ belongs
to the field $\mathbb Q_{|\delta|}$ of $|\delta|$th roots of unity. On the other hand, because $\chi_Q$ is a character of
the group $\Gamma_0^1(q)$ and $\chi_Q\bigl(\begin{smallmatrix}1&b\\0&1\end{smallmatrix}\bigr) = 1$ for any $b \in \mathbb Z$, we obtain
$$\chi_Q\Bigl(\begin{pmatrix}\alpha&\beta\\\gamma&\delta\end{pmatrix}\Bigr) = \chi_Q\Bigl(\begin{pmatrix}\alpha&\beta\\\gamma&\delta\end{pmatrix}\begin{pmatrix}1&b\\0&1\end{pmatrix}\Bigr) = \chi_Q\Bigl(\begin{pmatrix}\alpha&\alpha b+\beta\\\gamma&\gamma b+\delta\end{pmatrix}\Bigr),$$
so that $\xi$ also belongs to any of the fields $\mathbb Q_{|\delta+b\gamma|}$. But the arithmetic progression $\delta + b\gamma$
($b \in \mathbb Z$) contains pairs of numbers that are relatively prime, and $\mathbb Q_{|a|} \cap \mathbb Q_{|b|} = \mathbb Q$ if $a$ is prime
to $b$ (in this case the compositum of $\mathbb Q_{|a|}$ and $\mathbb Q_{|b|}$ is $\mathbb Q_{|ab|}$, and its degree over $\mathbb Q$ is the
product of the degrees of $\mathbb Q_{|a|}$ and $\mathbb Q_{|b|}$). Hence, $\xi$ is a rational number. Consequently,
$\xi$ does not change under any of the automorphisms $\exp(2\pi i/\delta) \to \exp(2\pi it/\delta)$ of the
field $\mathbb Q_{|\delta|}$ (here $(t,\delta) = 1$). Taking $t = -\beta$ and taking into account that $-\gamma\beta \equiv 1 \pmod\delta$,
we find that
$$\chi_Q\Bigl(\begin{pmatrix}\alpha&\beta\\\gamma&\delta\end{pmatrix}\Bigr) = \delta^k d^{-m}\sum_{l\in M_{m,1}(\mathbb Z/d\mathbb Z)} e\{\delta^{-1}Q[l]\}$$
depends only on $\delta$ and $Q$. If we set $d = |\delta|$ here, we obtain (4.26).
Given any integer $\delta$ prime to $q$, there exist integers $\alpha$ and $\beta$ such that $\alpha\delta - q\beta = 1$.
Then $\begin{pmatrix}\alpha&\beta\\q&\delta\end{pmatrix} \in \Gamma_0^1(q)$, and
$$\chi_Q(\delta) = \chi_Q\Bigl(\begin{pmatrix}\alpha&\beta\\q&\delta\end{pmatrix}\Bigr) = \chi_Q\Bigl(\begin{pmatrix}\alpha&\beta\\q&\delta\end{pmatrix}\begin{pmatrix}1&t\\0&1\end{pmatrix}\Bigr) = \chi_Q\Bigl(\begin{pmatrix}\alpha&\alpha t+\beta\\q&qt+\delta\end{pmatrix}\Bigr) = \chi_Q(\delta + tq)$$
for any $t \in \mathbb Z$. Thus, the function $\chi_Q(\delta)$ is defined for all $\delta$ prime to $q$, and it depends
only on the residue class of $\delta$ modulo $q$. If $\delta_1$ is also an integer prime to $q$ and
$M_1 = \begin{pmatrix}\alpha_1&\beta_1\\qb_1&\delta_1\end{pmatrix} \in \Gamma_0^1(q)$, then
$$\chi_Q(\delta)\chi_Q(\delta_1) = \chi_Q\Bigl(\begin{pmatrix}\alpha&\beta\\q&\delta\end{pmatrix}M_1\Bigr) = \chi_Q(q\beta_1 + \delta\delta_1) = \chi_Q(\delta\delta_1).$$
Thus, $\chi_Q$ is a real Dirichlet character modulo $q$. Now let $p$ be an odd prime not
dividing $q$. If we set $\delta = p$ in (4.26), we can write
If $M \in M_m$ and the determinant of $M$ is prime to $p$, then the map $l \to Ml$ obviously
gives a bijection of the set $M_{m,1}/pM_{m,1}$ with itself. Hence, for any such $M$ we can write
It is well known (see Appendix 1.1) that the matrix $M$ can be chosen in such a way
that the quadratic form $(1/2)Q[MX]$ is congruent modulo $p$ to a diagonal quadratic
form $a_1x_1^2 + \dots + a_mx_m^2$. Here we clearly have
$$a_1\cdots a_m \equiv \det((1/2)Q[M]) \equiv 2^{-m}(\det M)^2\det Q \pmod p.$$
With this choice of $M$, the last formula for $\chi_Q(p)$ can be written in the form
If we use the definition and properties of the Legendre symbol modulo $p$ (see Appendix
2.3), we find that
$$G_p(1)^2 = \Bigl(\frac{-1}{p}\Bigr)G_p(1)G_p(-1)$$
(4.29)
$$= \Bigl(\frac{-1}{p}\Bigr)\sum_{t_1,t_2\in\mathbb F_p} \exp(2\pi i(t_1-t_2)(t_1+t_2)/p)$$
Returning to the calculation of $\chi_Q(p)$, from the above formulas and the properties
of the Legendre symbol we obtain
The number $(-1)^k\det Q$ is called the discriminant of the quadratic form with
matrix $Q$. The reader can easily verify that the discriminant of any integral quadratic
form in an even number of variables is congruent to 0 or 1 modulo 4.
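The agreement of the Gauss-sum formula (4.26) with the Legendre symbol $((-1)^k\det Q/p)$ can be tested numerically. The sketch below (Python) uses the sample even matrix $Q = \begin{pmatrix}2&1\\1&2\end{pmatrix}$, so $m = 2$, $k = 1$, $\det Q = 3$; the helper names are ours, not the book's.

```python
import cmath

Q = [[2, 1], [1, 2]]          # even matrix, det Q = 3, k = 1

def Q_of(l1, l2):
    """The value Q[l] = t(l) Q l."""
    return Q[0][0]*l1*l1 + 2*Q[0][1]*l1*l2 + Q[1][1]*l2*l2

def chi_Q(delta, k=1):
    """chi_Q(delta) by the formula (4.26), delta prime to the level."""
    s = sum(cmath.exp(cmath.pi * 1j * Q_of(l1, l2) / delta)
            for l1 in range(abs(delta)) for l2 in range(abs(delta)))
    val = (1 if delta > 0 else -1) ** k * abs(delta) ** (-k) * s
    return round(val.real)    # the value is a rational integer here

def legendre(a, p):
    """Legendre symbol (a/p) by Euler's criterion, p an odd prime."""
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

# chi_Q(p) = ((-1)^k det Q / p) = (-3/p) for odd primes p not dividing 3:
for p in [5, 7, 11, 13, 17]:
    assert chi_Q(p) == legendre(-3, p)
```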
The next theorem summarizes our computation of the multiplier for theta-series
of quadratic forms in an even number of variables.
THEOREM 4.10. Suppose that $Q \in \mathbf A_m^+$, $q$ is the level of $Q$, and $n \ge 1$. Further
suppose that $m = 2k$ is even. Then for any matrix $M = \begin{pmatrix}A&B\\C&D\end{pmatrix} \in \Gamma_0^n(q)$ the
multiplier $\chi_Q(M)$ in the functional equations (3.29) and (3.30) of Theorem 3.13 is given
by the following formulas:
if q = 1, then
(4.30)
if q > 1, then
(4.31)
where $\chi_Q$ is the character of the quadratic form $Q$, i.e., the real Dirichlet character
modulo $q$ that satisfies the conditions
(4.32) $\qquad \chi_Q(-1) = (-1)^k$,
if $q$ is odd
PROOF. Formula (4.30) was proved in Proposition 4.3, and (4.31) follows from
(4.24), (4.25), and (4.22). Formulas (4.32)–(4.34) were proved in Proposition 4.9. □
PROOF. For brevity we shall use the term "even matrix" to refer to a symmetric
integer matrix with even entries on the main diagonal. Recall that the level of a
nonsingular even matrix $Q$ is the least natural number $q$ such that $q\cdot Q^{-1}$ is an even
matrix. Let $Q = (a_{\alpha\beta})$ be a nonsingular $m\times m$ even matrix, where $m$ is odd. We set
$Q^{-1} = Q^* = (\det Q)^{-1}\cdot(A_{\alpha\beta})$. Then for every $\alpha = 1, \dots, m$ we have the equality
$$\det Q = \sum_{\beta=1}^m a_{\alpha\beta}A_{\alpha\beta};$$
summing these equalities, we obtain
$$m\det Q = \sum_{\alpha=1}^m a_{\alpha\alpha}A_{\alpha\alpha} + 2\sum_{\alpha<\beta} a_{\alpha\beta}A_{\alpha\beta}.$$
Since all of the coefficients $a_{\alpha\alpha}$ ($1 \le \alpha \le m$) are even, it follows that $m\det Q$ is divisible
by 2, and hence so is $\det Q$.
To prove the second congruence in (4.35), we first note that since $q(Q)Q^{-1}$ is an
integer matrix, its determinant is an integer, i.e., $\det Q$ divides $q(Q)^m$. Thus, if $m$ is
odd, the level $q = q(Q)$ is divisible by 2. To show that $q$ is divisible by 4, we use
induction on the odd number $m$. The congruence is obvious if $m = 1$. Suppose that
it has already been proved for all nonsingular even matrices of odd order less than $m$,
where $m > 1$, and let $Q$ be a nonsingular even matrix of order $m$. We consider two
cases:
(1) All of the entries of $Q$ are even, i.e., $Q = 2Q_1$, where $Q_1$ is an integer matrix.
Then $Q_2 = qQ^{-1} = (q/2)Q_1^{-1}$ is a nonsingular even matrix of odd order, and hence
has even determinant. Since this determinant divides $(q/2)^m$, it follows that $q/2$ is
even, and hence $q$ is divisible by 4.
(2) Not all of the entries in $Q$ are even. In this case, if we make a suitable
permutation of the rows of $Q$ and the same permutation of its columns (i.e., for
suitable $V \in \mathrm{GL}_m(\mathbb Z)$ we perform the transformation $Q \to Q[V]$, which does not
change the level $q$ and takes even matrices to even matrices), then we may suppose
that the entry $a_{12} = a_{21}$ is odd. We divide $Q$ into blocks $\begin{pmatrix}Q_{11}&Q_{12}\\Q_{21}&Q_{22}\end{pmatrix}$, where
Proposition 4.11 shows that the level of any quadratic form in an odd number of
variables (or, equivalently, the level of the corresponding matrix $Q \in \mathbf A_m^+$) is divisible
by 4; hence, we have the inclusion $\Gamma_0^n(q) \subset \Gamma_0^n(4)$. According to Theorem 3.13, for any
$M \in \Gamma_0^n(q)$ the theta-series $\theta^n(Z, Q)$ satisfies the functional equation (3.30), in which
the multiplier $\chi_Q$ is not a character of the group $\Gamma_0^n(q)$ as in the case of theta-series
of quadratic forms in an even number of variables (see (4.31)), but rather is more
complicated.
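The notion of level used here, and the divisibility by 4 for odd order, can be checked on small examples. In the sketch below (Python, with our own helper names) the level is computed directly from the definition as the least $q$ for which $q\cdot Q^{-1}$ is integral with even diagonal.

```python
from fractions import Fraction
from math import lcm

def rational_inverse(M):
    """Exact inverse of an integer matrix by Gauss-Jordan over Q."""
    n = len(M)
    A = [[Fraction(M[i][j]) for j in range(n)]
         + [Fraction(1 if i == j else 0) for j in range(n)] for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        p = A[col][col]
        A[col] = [x / p for x in A[col]]
        for r in range(n):
            if r != col and A[r][col] != 0:
                f = A[r][col]
                A[r] = [x - f * y for x, y in zip(A[r], A[col])]
    return [row[n:] for row in A]

def level(Q):
    """Least natural q such that q*Q^{-1} is integral with even diagonal."""
    n = len(Q)
    Qi = rational_inverse(Q)
    q = 1
    for row in Qi:
        for x in row:
            q = lcm(q, x.denominator)
    if any((q * Qi[i][i]) % 2 != 0 for i in range(n)):
        q *= 2            # only the prime 2 can still obstruct evenness
    return q

print(level([[2]]))                              # the form 2x^2: level 4
print(level([[2, 1], [1, 2]]))                   # level 3
print(level([[2, 0, 0], [0, 2, 0], [0, 0, 2]]))  # odd order: level 4
```

Both odd-order examples have level divisible by 4, as Proposition 4.11 predicts.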
On the other hand, the example of the simplest quadratic form with $1\times 1$ matrix
$(2) \in \mathbf A_1^+$ shows that there exist matrices of level 4. Using the notation in (4.1) and
(4.10) and the formula (4.15), we can write the functional equation for the theta-series
(4.39) $\qquad \varepsilon_d = \begin{cases}1, & \text{if } d \equiv 1 \pmod 4,\\ i, & \text{if } d \equiv -1 \pmod 4,\end{cases}$
(4.40) $\qquad \displaystyle G(BD^{-1}, (2)) = |d|^{-n}\sum_{r\in M_{n,1}(\mathbb Z/d\mathbb Z)} e\{2BD^{-1}[r]\},$
where $d$ is any nonzero integer such that $d\cdot D^{-1} \in M_n$, and $e\{\cdots\}$ is the function
(3.23).
Since $\Gamma_0^n(q) \subset \Gamma_0^n(4)$, and since the product of the theta-series for the matrices $Q$
and $(2)$ is the theta-series for a matrix of even order and the same level $q$, it follows
that Theorem 4.10 enables us to obtain the functional equation for the theta-series
$\theta^n(Z, Q)$ in terms of the automorphy factor $j_{(2)}$. Using this connection, we prove the
following theorem.
THEOREM 4.12. Suppose that $\theta^n(Z, Q)$ is the theta-series (1.13) for the matrix
$Q \in \mathbf A_m^+$ with $m = 2k+1$ odd, $q$ is the level of $Q$, $M = \begin{pmatrix}A&B\\C&D\end{pmatrix} \in \Gamma_0^n(q)$, and
$j_{(2)}^n(M,Z)$ is the automorphy factor (4.37). Then the theta-series satisfies the functional
equation
(4.41) $\qquad \theta^n(M\langle Z\rangle, Q) = \chi_Q(\det D)\,j_{(2)}^n(M,Z)^m\,\theta^n(Z, Q),$
where
If we now multiply this equation and the equation (4.36) and take (4.43) into account,
we find that the last equation is preserved if $Q$ and $m$ are replaced by $Q_1$ and $m+1$.
Since all of our theta-series are nonzero functions, it follows from (4.45) for $Q_1$ and
from (4.44) that
(4.46) $\qquad \displaystyle \chi(M) = \chi_{Q_1}(\det D)\,\frac{\det(CZ+D)^{(m+1)/2}}{j_{(2)}^n(M,Z)^2}.$
Furthermore, if we square the equality (4.36) and let $Q = (2)$ in (4.43) and (4.44),
we obtain
Hence, by (4.46) and the definition of the characters of quadratic forms in (4.32),
(4.33), and (4.42), we conclude that
Although the automorphy factor $j_{(2)}^n$ is simpler than the automorphy factor $j_Q$ for
an arbitrary matrix $Q$ of odd order, it nevertheless has a rather complicated structure.
In certain cases, however, it is possible to express $j_{(2)}^n$ in terms of $j_{(2)}^1$, and hence,
because of (4.37)–(4.40), in terms of the one-dimensional Gauss sums $G_d(c)$.

LEMMA 4.13. If $d$ is odd and $c$ is prime to $d$, then
$$G_d(c) = \Bigl(\frac cd\Bigr)G_d(1).$$
PROOF. If $d = p$ is an odd prime, then the lemma follows from (4.28). Suppose
that $d = p^n$ with $n > 1$. If we set $r = r_1 + p^{n-1}r_2$ in (4.48), where $r_1$ runs through
$\mathbb Z/p^{n-1}\mathbb Z$ and $r_2$ runs through $\mathbb Z/p\mathbb Z$, we find that
$$G_{p^n}(c) = p^{-1}G_{p^{n-2}}(c),$$
and the proof of the lemma for $d = p^n$ can be obtained from this relation by induction
on $n$. Now suppose that $d = d_1\cdot d_2$, where $d_1$ is prime to $d_2$, and suppose that $b_1$ and
$b_2$ are integers such that $b_1d_1 + b_2d_2 = 1$. In (4.48) let $r = d_2r_1 + d_1r_2$, where $r_i$ runs
through $\mathbb Z/d_i\mathbb Z$, and replace $c$ by $c(b_1d_1 + b_2d_2)$. We then find that the Gauss sum
satisfies the relation
(4.49)
We assume, by induction, that the lemma holds for $d_1$ and $d_2$; that it
holds for $d$ then follows from (4.49). □

LEMMA 4.14. For every odd $d > 0$,
(4.50) $\qquad G_d(1) = \varepsilon_d\, d^{-1/2},$
where the square root is positive and $\varepsilon_d$ is the function (4.39).
PROOF. We compute the value of the theta-function $\theta^1(z; 0, 0)$ (see (3.2)) at the
point $z = 2c/d + i\lambda$, where $\lambda > 0$, $c$ and $d \neq 0$ are integers. Let $d_1 = d$ if $d$ is odd and
$d_1 = d/2$ if $d$ is even. In the definition of $\theta^1(z; 0, 0)$ in (3.2) we divide the summation
into two parts: we set $N = r + d_1m$, where $r$ runs through the set of residues modulo $d_1$
and $m$ runs through all integers. Then, after some simple transformations, we obtain
the identity
$$= d^{-1}\sum_{r\in\mathbb Z/d\mathbb Z} e\Bigl\{\frac{2r^2}{d}\Bigr\}.$$
This, along with (4.53) and (4.39), implies that the first limit in (4.52) is equal to
$\varepsilon_d\cdot d^{-1/2}$; in view of (4.52), we hence obtain (4.50). □
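Lemmas 4.13 and 4.14 can be confirmed numerically. The sketch below (Python) takes $G_d(c) = |d|^{-1}\sum_{r \bmod d}\exp(2\pi i c r^2/d)$, the $n = 1$ specialization of the normalization in (4.40); the helper names are ours.

```python
import cmath, math

def G(d, c):
    """Normalized quadratic Gauss sum |d|^{-1} sum_{r mod d} e(c r^2 / d)."""
    return sum(cmath.exp(2j * cmath.pi * c * r * r / d)
               for r in range(abs(d))) / abs(d)

def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0."""
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def eps(d):
    """The function (4.39) for odd d."""
    return 1 if d % 4 == 1 else 1j

# Lemma 4.13: G_d(c) = (c/d) G_d(1); Lemma 4.14: G_d(1) = eps_d d^{-1/2}.
for d in [3, 5, 7, 9, 15, 21]:
    for c in [1, 2, 4]:
        if math.gcd(c, d) == 1:
            assert abs(G(d, c) - jacobi(c, d) * G(d, 1)) < 1e-9
    assert abs(G(d, 1) - eps(d) / math.sqrt(d)) < 1e-9
```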
PROPOSITION 4.15. In the case $n = 1$ the automorphy factor (4.37) is given by the
explicit formula
(4.54) $\qquad \displaystyle j_{(2)}^1(M,z) = \varepsilon_d^{-1}\Bigl(\frac cd\Bigr)(cz+d)^{1/2} \quad\text{for } M = \begin{pmatrix}a&b\\c&d\end{pmatrix} \in \Gamma_0^1(4),$
where $\varepsilon_d$ is the function (4.39), $(c/d) = (c/|d|)$ is the Jacobi symbol, and the square root
is determined by the condition (4.7).
PROOF. By (4.16), for odd $d > 0$ we have $G(b/d, (2)) = G(-c/d, (2))$ and
$(-1/d) = \varepsilon_d^2$. Thus, in the case $d > 0$ the formula (4.54) follows from (4.37),
(4.38), and Lemmas 4.13 and 4.14. On the other hand, if $d < 0$, then to prove (4.54)
it suffices to replace $-c/d$ by $c/(-d)$ in the Gauss sum. □
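Formula (4.54) is the classical multiplier of the one-variable theta-series on $\Gamma_0^1(4)$, and it can be tested against truncated series; a sketch (Python, our own helper names, restricted for simplicity to matrices with $d > 0$ odd):

```python
import cmath

def theta(z, N=80):
    """theta^1(z,(2)) = sum_{n in Z} exp(2 pi i n^2 z), truncated (Im z > 0)."""
    return sum(cmath.exp(2j * cmath.pi * n * n * z) for n in range(-N, N + 1))

def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0."""
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def j2(M, z):
    """The automorphy factor (4.54), for d > 0 odd; cmath.sqrt agrees
    with the branch convention (4.6)."""
    (a, b), (c, d) = M
    eps = 1 if d % 4 == 1 else 1j
    return jacobi(c, d) / eps * cmath.sqrt(c * z + d)

z = 0.31 + 0.47j
for M in [((1, 0), (4, 1)), ((5, 1), (24, 5)), ((3, 1), (8, 3))]:
    (a, b), (c, d) = M
    assert a * d - b * c == 1 and c % 4 == 0
    Mz = (a * z + b) / (c * z + d)
    assert abs(theta(Mz) - j2(M, z) * theta(z)) < 1e-6
```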
To conclude this section, we give a simple but important property of the multiplier
$\chi_{(2)}^n$ in (4.38). From (4.47) and (4.37) it follows that $[\chi_{(2)}^n(M)]^2 = \chi_{Q_2}(\det D)$, and
hence, by (4.32) and (4.33), we find that
(4.55) $\qquad [\chi_{(2)}^n(M)]^4 = 1$ for $M \in \Gamma_0^n(4)$.
CHAPTER 2
Modular Forms
of the corresponding subgroup K of the modular group are determined if we know the
value at one of the points. Thus, a theta-series is uniquely determined by its restriction
to any subset of the upper half-plane which intersects with all of the orbits of K. In
this section we shall construct a fundamental domain in Hn for an arbitrary subgroup
K of finite index in rn, i.e., we shall give a set of representatives of the orbits ( 1.1) that
deserves to be called a "domain".
1. The modular triangle. For brevity we shall call the imaginary part $y$ of a complex
number $z = x + iy \in H_1$ the height of $z$, denoted $h(z)$. By Lemma 2.8 of Chapter 1
(or a direct computation) we see that
so that $|x + b| \le 1/2$. We thus see that every orbit of $\Gamma^1$ in $H_1$ has a point in the set
We now show that the set $D_1$ is actually given by a finite number of inequalities.
Let
so that $z \in D_1'$. Thus, $D_1' = D_1$, and every orbit of the modular group $\Gamma^1$ in the
upper half-plane $H_1$ intersects the set $D_1$. This set may be regarded as a "triangle"
(the modular triangle) with vertices at $\rho$, $\rho^2$, and $i\infty$ (see Figure 1).
FIGURE 1
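The reduction of a point of $H_1$ into the modular triangle can be carried out with the two generators $z \mapsto z + b$ and $z \mapsto -1/z$; a sketch (Python):

```python
def reduce_to_triangle(z, eps=1e-12):
    """Move z in the upper half-plane into |Re z| <= 1/2, |z| >= 1 using
    the translations z -> z + b and the inversion z -> -1/z."""
    assert z.imag > 0
    while True:
        z = complex(z.real - round(z.real), z.imag)  # now |Re z| <= 1/2
        if abs(z) >= 1 - eps:
            return z
        z = -1 / z        # strictly increases the height h(z) = Im z

w = reduce_to_triangle(complex(0.37, 0.002))
print(abs(w) >= 1 - 1e-9, abs(w.real) <= 0.5 + 1e-9)   # True True
```

Termination follows from the fact that each inversion strictly increases the height, and the heights of points in an orbit with height above any fixed bound form a finite set.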
$$M = \begin{pmatrix}a&a-1\\1&1\end{pmatrix} \quad\text{and}\quad z = \frac{a\rho^2 + (a-1)}{\rho^2 + 1} = a - \frac{1}{\rho^2+1} = a + \rho^2,$$
Show that any real positive definite form $Q$ is equivalent to a form $Q_1$ with $|b_1| \le a_1 \le c_1$. Further show that in the interior of the region defined by these inequalities in the
space of coefficients of binary quadratic forms, there are no two distinct points that
correspond to equivalent forms.
[Hint: Let $\omega$ and $\omega_1$ be the roots of the quadratic equations $Q(t, 1) = 0$ and
$Q_1(t, 1) = 0$, respectively, that belong to $H_1$. Show that (1.4) is equivalent to the
conditions $b_1^2 - 4a_1c_1 = b^2 - 4ac$ and $\omega_1 = M^{-1}\langle\omega\rangle$ with $M = \begin{pmatrix}\alpha&\beta\\\gamma&\delta\end{pmatrix}$, and use
Theorem 1.1.]
Theorem 1.1.]
PROBLEM 1.4. Show that there are only finitely many equivalence classes of positive
definite integral binary quadratic forms $Q = ax^2 + bxy + cy^2$ with fixed discriminant
$b^2 - 4ac < 0$.
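Problem 1.4 can be made concrete: with the reduction conditions $-a < b \le a \le c$ (and $b \ge 0$ when $a = c$), the inequality $3a^2 \le |b^2 - 4ac|$ bounds $a$, so the finitely many classes can be enumerated; a sketch (Python):

```python
def reduced_forms(D):
    """All reduced positive definite integral binary forms a x^2 + b xy + c y^2
    of discriminant D = b^2 - 4ac < 0: -a < b <= a <= c, and b >= 0 if a == c."""
    assert D < 0 and D % 4 in (0, 1)
    forms = []
    a = 1
    while 3 * a * a <= -D:          # since |D| = 4ac - b^2 >= 3a^2
        for b in range(-a + 1, a + 1):
            if (b * b - D) % (4 * a) == 0:
                c = (b * b - D) // (4 * a)
                if c >= a and not (a == c and b < 0):
                    forms.append((a, b, c))
        a += 1
    return forms

print(reduced_forms(-4))       # [(1, 0, 1)]: one class, x^2 + y^2
print(len(reduced_forms(-23))) # 3 classes
```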
46 2. MODULAR FORMS
of matrices of real positive definite quadratic forms in $n$ variables, where the action is
given by:
$$\Lambda \ni U\colon Y \to Y[U] = {}^tUYU.$$
We shall let $u_1, \dots, u_n$ denote the columns of $U \in \Lambda^n$, so that $U = (u_1, \dots, u_n)$.
In order to choose a "reduced" representative $Y[U]$ in the orbit
(1.7) $\qquad \{Y\}_\Lambda = \{Y[U];\ U \in \Lambda\}$
(1.8)
In other words, $\Lambda_k^n$ is the set of $n\times k$ integer matrices which can be completed to an
$n\times n$ unimodular matrix. Starting with a fixed matrix $Y \in P_n$, we choose $u_1 \in \Lambda_1^n$ in
such a way that the value $Y[u_1]$ is minimal in the set $\Lambda_1^n$. This can be done, because
$\Lambda_1^n$ consists of integer vectors and $Y > 0$. After choosing $u_1$, we choose $u_2$ so that
$(u_1, u_2) \in \Lambda_2^n$ and the value $Y[u_2]$ is minimal. Possibly replacing $u_2$ by $-u_2$, without
loss of generality we may assume that ${}^tu_1Yu_2 \ge 0$. Continuing this process, at the
$k$th step we find a column $u_k$ for which $(u_1, \dots, u_k) \in \Lambda_k^n$, $Y[u_k]$ is minimal, and
${}^tu_{k-1}Yu_k \ge 0$. After $n$ steps we have a matrix $U = (u_1, \dots, u_n) \in \Lambda_n^n = \Lambda$ and a
matrix $T = (t_{\alpha\beta}) = Y[U] \in \{Y\}_\Lambda$ which we call reduced.
We now explain what the reduced property of a matrix means in terms of the
entries of the matrix.
LEMMA 1.5. Let $r \ge 1$ and $l \in M_{r,1}$. Then $l \in \Lambda_1^r$ if and only if the components of
the vector $l$ are relatively prime.
PROOF. Necessity is obvious. Conversely, if the components of $l$ are relatively
prime, then, by Lemma 3.8 of Chapter 1, there exists $V \in \Lambda^r$ such that
LEMMA 1.6. Let $U, U' \in \Lambda^n$. Then the first $r$ columns of $U'$ coincide with the first $r$
columns of $U$ if and only if
PROOF. The direct implication is obvious. To prove the converse, we let $u_1, \dots, u_n$
denote the columns of $U$, and we suppose that the first $r$ columns of $U'$ are $u_1, \dots, u_r$.
We set
$$U^{-1}U' = V = \begin{pmatrix}A&B\\C&D\end{pmatrix} \in \Lambda^n,$$
where $A = (a_{\alpha\beta})$, $B$, $C = (c_{\alpha\beta})$, and $D$ are $(r\times r)$-, $(r\times(n-r))$-, $((n-r)\times r)$-, and $((n-r)\times(n-r))$-matrices, respectively. Since the $\beta$-column of the matrix $U' = U\begin{pmatrix}A&B\\C&D\end{pmatrix}$ is equal
to $u_\beta$ for $1 \le \beta \le r$, it follows that
$$\sum_{\alpha=1}^r a_{\alpha\beta}u_\alpha + \sum_{\alpha=1}^{n-r} c_{\alpha\beta}u_{r+\alpha} = u_\beta,$$
which, by the linear independence of the columns $u_1, \dots, u_n$, implies that $a_{\alpha\beta} = 1$
for $\alpha = \beta$, $a_{\alpha\beta} = 0$ for $\alpha \neq \beta$, and $c_{\alpha\beta} = 0$. Thus, $A = E$, and $C = 0$; hence $D \in
\Lambda^{n-r}$. □
Let $U = (u_1, \dots, u_n) \in \Lambda^n$ and $1 \le k \le n$. By Lemma 1.6, the set of $k$th columns
of all of the matrices $U' \in \Lambda^n$ with first $k-1$ columns $u_1, \dots, u_{k-1}$ coincides with the
set of columns of the form
$$Ul, \quad\text{where } l = {}^t(l_1, \dots, l_n) \in M_{n,1},$$
and $l_k, \dots, l_n$ are the components of the first column of some matrix $D \in \Lambda^{n-k+1}$. By
Lemma 1.5, the latter condition means that $l_k, \dots, l_n$ are relatively prime. We thus find
that, if $U = (u_1, \dots, u_n) \in \Lambda^n$ and $1 \le k \le n$, then
(1.9) $\qquad \{u \in M_{n,1};\ (u_1, \dots, u_{k-1}, u) \in \Lambda_k^n\} = UL_{k,n},$
where $L_{k,n}$ is the set of columns in $M_{n,1}$ whose last $(n-k+1)$ components are relatively
prime.
From the definition and the relations (1.9) it follows that $T = (t_{\alpha\beta}) = Y[U]$ is a
reduced matrix if and only if
$$Y[Ul] \ge Y[u_k] \quad\text{for all } l \in L_{k,n} \text{ and } 1 \le k \le n$$
and
$${}^tu_{k-1}Yu_k \ge 0 \quad\text{for } 1 < k \le n,$$
where $U = (u_1, \dots, u_n) \in \Lambda$. Since $Y[U] = T$, $Y[u_k] = t_{kk}$, and ${}^tu_{k-1}Yu_k = t_{k-1,k}$,
these conditions mean precisely that $T$ belongs to the Minkowski reduction domain
(1.10) $\qquad F_n = \{T = (t_{\alpha\beta}) \in P_n;\ t_{kk} \le T[l] \text{ if } l \in L_{k,n}\ (1 \le k \le n),\ \text{and } t_{k-1,k} \ge 0\ (1 < k \le n)\}.$
THEOREM 1.7. In every orbit $\{Y\}_\Lambda$ of the group $\Lambda^n$ in $P_n$ there exists at least one
point, and no more than finitely many points, belonging to the reduction domain $F_n$. If
$T$ and $T'$ are two interior points of $F_n$ with $T' = T[U]$, where $U \in \Lambda^n$, then $U = \pm E_n$.
In particular, any two interior points of $F_n$ are in different orbits of $\Lambda^n$.
PROOF. The above discussion shows that for every matrix $Y \in P_n$ there exists
$U \in \Lambda^n$ such that $Y[U] \in F_n$, and each column of this matrix $U$ can be chosen in only
finitely many ways.
Let $e_1, \dots, e_n$ denote the columns of the identity matrix $E_n$. We set
$$F_n^0 = \{T = (t_{\alpha\beta}) \in P_n;\ t_{kk} < T[l] \text{ if } l \in L_{k,n},\ l \neq \pm e_k\ (1 \le k \le n),\ \text{and } t_{k-1,k} > 0\ (1 < k \le n)\}.$$
Clearly $F_n^0 \subset F_n$, and every interior point of $F_n$ is contained in $F_n^0$. If $T = (t_{\alpha\beta})$,
$T' = (t'_{\alpha\beta}) \in F_n^0$, and $T' = T[U]$, where $U = (u_1, \dots, u_n) \in \Lambda^n$, then
$$t_{kk} = t'_{kk} = T[u_k] \quad (1 \le k \le n).$$
Since $u_1 \in L_{1,n}$, this equality and the definition of $F_n^0$ imply that $u_1 = \pm e_1$. Then
obviously $u_2 \in L_{2,n}$, and we find that $u_2 = \pm e_2$. Continuing in this way, we obtain
$u_k = \pm e_k$ for all $1 \le k \le n$. Furthermore, from the conditions
$$t_{k-1,k} > 0, \qquad t'_{k-1,k} = {}^tu_{k-1}Tu_k > 0 \quad (1 < k \le n)$$
it follows that either $u_1 = e_1, \dots, u_n = e_n$, or else $u_1 = -e_1, \dots, u_n = -e_n$. Thus,
$U = \pm E_n$ and $T' = T$. □
The inequalities that determine the reduction domain imply a series of useful
inequalities for the entries in a reduced matrix $T = (t_{\alpha\beta})$. In the first place, since
$t_{kk} \le T[e_{k+1}] = t_{k+1,k+1}$ ($1 \le k < n$), it follows that
(1.11) $\qquad t_{11} \le t_{22} \le \dots \le t_{nn}.$
In addition, since $t_{ll} \le T[e_k \pm e_l] = t_{kk} \pm 2t_{kl} + t_{ll}$ for $1 \le k < l \le n$, it follows that
(1.12) $\qquad 2|t_{kl}| \le t_{kk} \quad (1 \le k < l \le n).$
Finally, we have the following important theorem.
THEOREM 1.8. If $T = (t_{\alpha\beta}) \in F_n$, then
(1.13) $\qquad t_{11}t_{22}\cdots t_{nn} \le c_n\det T,$
where $c_n$ depends only on $n$.
PROOF. For $\alpha = 1, \dots, n$ we determine the nonnegative number $\mu_\alpha = \mu_\alpha(T)$ by the
following conditions:
(1) The columns of integers $m$ satisfying $T[m] \le \mu_\alpha$ include at least $\alpha$ linearly
independent columns.
(2) The maximum number of linearly independent columns of integers $m$ satisfying
the inequality $T[m] < \mu_\alpha$ is at most $\alpha - 1$.
The numbers $\mu_1, \dots, \mu_n$ are called the successive minima of the matrix $T > 0$.
It is clear that $\mu_1 \le \mu_2 \le \dots \le \mu_n$, and there exist linearly independent columns
$m_1, \dots, m_n$ such that $T[m_\alpha] = \mu_\alpha$.
We prove the theorem in three stages.
§1. FUNDAMENTAL DOMAINS FOR SUBGROUPS OF THE MODULAR GROUP 49
LEMMA 1.9. Let $T = (t_{\alpha\beta}) \in F_n$, and let $\mu_1, \dots, \mu_n$ be the successive minima of $T$.
Then
(1.14) $\qquad t_{\alpha\alpha} \le c(\alpha)\mu_\alpha \quad (1 \le \alpha \le n),$
where $c(\alpha)$ depends only on $\alpha$.
PROOF OF THE LEMMA. As before, let $m_1, \dots, m_n$ be linearly independent columns
such that $T[m_\alpha] = \mu_\alpha$, and let $e_1, \dots, e_n$ be the columns of $E_n$. If $\alpha$ is fixed, then
at least one of the columns $m_1, \dots, m_\alpha$ is not a linear combination of the columns
$e_1, \dots, e_{\alpha-1}$. Suppose that $m_k$ is such a column. Then there exists a column $e'_\alpha$ such
that $(e_1, \dots, e_{\alpha-1}, e'_\alpha) \in \Lambda_\alpha^n$ (see (1.8)) and
it follows from the triangle inequality for the norm $\|x\| = (T[x])^{1/2}$ in the space
$M_{n,1}(\mathbb R)$ that
(1.15)
Since (1.14) obviously holds for $\alpha = 1$ with $c(1) = 1$, we can proceed to prove (1.14)
by induction on $\alpha$. If the inequality holds for all $\beta < \alpha$, then
$$T[e_\beta] = t_{\beta\beta} \le c(\beta)\mu_\beta \le c(\beta)\mu_\alpha.$$
Since $k \le \alpha$, we have $T[m_k] = \mu_k \le \mu_\alpha$. From this and (1.15) we obtain $T[e'_\alpha] \le
c(\alpha)\mu_\alpha$, where
LEMMA 1.10. Let $T \in P_n$, and let $\mu_1$ be the first minimum of $T$. Then
$$\mu_1 \le \gamma_n(\det T)^{1/n},$$
where $\gamma_n$ depends only on $n$.
PROOF. We regard the set of columns $M_{n,1}(\mathbb R)$ as an $n$-dimensional real space with
the usual coordinates. For any $\mu > 0$ the set
$$\{X \in M_{n,1}(\mathbb R);\ T[X] \le \mu\}$$
is obviously a centrally symmetric convex set centered at the origin, and its volume $v$ is
equal to $s_n\mu^{n/2}(\det T)^{-1/2}$, where $s_n$ is the volume of the unit sphere in $n$-dimensional
space. By Minkowski's theorem on convex solids, this set contains a point other than
the origin with integer coordinates, provided that $v > 2^n$, i.e., $\mu > 4s_n^{-2/n}(\det T)^{1/n}$.
Thus,
$$\mu_1 = \inf_{m\in M_{n,1},\,m\neq 0} T[m] \le 4s_n^{-2/n}(\det T)^{1/n}. \qquad\square$$
LEMMA 1.11. Let $T \in P_n$, and let $\mu_1, \dots, \mu_n$ be the successive minima of $T$. Then
$$\mu_1\cdots\mu_n \le (\gamma_n)^n\det T,$$
where $\gamma_n$ is the same constant as in Lemma 1.10.
PROOF. As before, let $m_1, \dots, m_n$ be linearly independent columns of integers such
that $T[m_\alpha] = \mu_\alpha$ ($1 \le \alpha \le n$). Then the matrix $M = (m_1, \dots, m_n)$ is nonsingular,
and by Theorem 1.5 of Appendix 1, the matrix $T[M]$ can be represented in the form
$T[M] = {}^tL\cdot L$, where $L = (l_{\alpha\beta})$, $l_{\alpha\beta} = 0$ for $\alpha > \beta$. We set
$$Q = D[LM^{-1}], \quad\text{where } D = \operatorname{diag}(\mu_1^{-1}, \dots, \mu_n^{-1}),$$
and we show that $Q[m] \ge 1$ for nonzero $m \in M_{n,1}$.
In fact, let $m = Mh$, where ${}^th = (h_1, \dots, h_n)$, and let $\alpha$ be the greatest index
for which $h_\alpha \neq 0$. Then $m$ is a linear combination of the columns $m_1, \dots, m_\alpha$ with
coefficients $h_1, \dots, h_\alpha$, and it is not a linear combination of the columns $m_1, \dots, m_{\alpha-1}$.
From the definition of the minimum $\mu_\alpha$ it now follows that $T[m] \ge \mu_\alpha$. Hence, taking
into account that the components $(Lh)_\beta$ of the column $Lh$ are zero if $\beta > \alpha$, we obtain
$$Q[m] = D[Lh] = \sum_{\beta=1}^\alpha \mu_\beta^{-1}(Lh)_\beta^2 \ge \mu_\alpha^{-1}\sum_{\beta=1}^\alpha (Lh)_\beta^2 = \mu_\alpha^{-1}E_n[Lh] = \mu_\alpha^{-1}T[m] \ge 1.$$
From this inequality and Lemma 1.10 it follows that
$$\gamma_n(\det Q)^{1/n} = \gamma_n((\mu_1\cdots\mu_n)^{-1}\det T)^{1/n} \ge 1. \qquad\square$$
Returning to the proof of Theorem 1.8, we see that the inequality (1.13) follows
from Lemma 1.9 and Lemma 1.11. The theorem is proved. □
then we have
$$n^{1-n}c_n^{-1}E_n \le T[T_0^{-1/2}][V] \le nE_n,$$
which is equivalent to the inequalities (1.16). □
$$F_2 = \Bigl\{\begin{pmatrix}t_{11}&t_{12}\\t_{12}&t_{22}\end{pmatrix} \in P_2;\ 0 \le 2t_{12} \le t_{11} \le t_{22}\Bigr\}.$$
[Hint: See Problem 1.3.]
PROBLEM 1.15. Show that Lemma 1.10 for n = 2 holds with )12 = 2/./3. By
considering the matrix T = ( 1~ 2
1{2 ), show that this value of y2 cannot be
improved.
PRoBLEM 1.16. Show that Theorem 1.8 for n = 2 holds with c2 = 4/3, and that
this value cannot be improved.
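The extremal matrix of Problem 1.15 can be checked by brute force; a sketch (Python) computing the first minimum $\mu_1$ over a finite search box (the helper names are ours):

```python
import itertools, math

def first_minimum(T, box=8):
    """mu_1 = min T[m] over nonzero integer columns m in a finite box
    (adequate for small, well-conditioned T)."""
    n = len(T)
    best = None
    for m in itertools.product(range(-box, box + 1), repeat=n):
        if any(m):
            v = sum(T[i][j] * m[i] * m[j] for i in range(n) for j in range(n))
            best = v if best is None else min(best, v)
    return best

# For T = (1 1/2; 1/2 1): mu_1 = 1 and det T = 3/4, so
# mu_1 = (2/sqrt 3)(det T)^{1/2}, i.e. gamma_2 = 2/sqrt 3 is attained.
T = [[1, 0.5], [0.5, 1]]
mu1 = first_minimum(T)
detT = T[0][0] * T[1][1] - T[0][1] * T[1][0]
print(mu1, (2 / math.sqrt(3)) * math.sqrt(detT))
```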
3. The fundamental domain for the Siegel modular group. Just as in the case of the
classical modular group, the basic step in the construction of a fundamental domain
for $\Gamma^n$ on $H_n$ is to choose in each orbit a representative $Z$ that satisfies the inequalities
$|\det(CZ+D)| \ge 1$ for every pair of $n\times n$-matrices $(C, D)$ that occurs in a matrix
$\begin{pmatrix}A&B\\C&D\end{pmatrix} \in \Gamma^n$. The proof that such a choice is possible is based on an explicit
description of all possible bottom rows of the matrices in $\Gamma^n$.
We shall examine pairs $(C, D)$ of $n\times n$ integer matrices, where $n$ is fixed. Such a
pair is said to be symmetric if $C\cdot{}^tD = D\cdot{}^tC$. A pair is said to be relatively prime if,
whenever $GC$ and $GD$ are integer matrices for an $n\times n$ rational matrix $G$, it follows
that $G$ itself is an integer matrix.
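The symmetry condition $C\cdot{}^tD = D\cdot{}^tC$ for bottom pairs can be spot-checked on random words in standard generators of $\Gamma^2 = \mathrm{Sp}_2(\mathbb Z)$ in $4\times 4$ form; a sketch (Python, our own construction):

```python
import random

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Generators of Sp_2(Z) in 2x2-block form (A B; C D):
J = [[0, 0, -1, 0], [0, 0, 0, -1], [1, 0, 0, 0], [0, 1, 0, 0]]  # (0 -E; E 0)
T = [[1, 0, 1, 1], [0, 1, 1, 0], [0, 0, 1, 0], [0, 0, 0, 1]]    # (E S; 0 E)
U = [[1, 1, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, -1, 1]]   # (V 0; 0 tV^{-1})

random.seed(1)
M = [[1 if i == j else 0 for j in range(4)] for i in range(4)]
for _ in range(25):
    M = mat_mul(M, random.choice([J, T, U]))

C = [row[:2] for row in M[2:]]
D = [row[2:] for row in M[2:]]
# C tD, whose symmetry expresses that the bottom pair (C, D) is symmetric:
CtD = [[sum(C[i][k] * D[j][k] for k in range(2)) for j in range(2)]
       for i in range(2)]
print(CtD[0][1] == CtD[1][0])   # True
```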
LEMMA 1.17. Let $(C, D)$ be a pair of $n\times n$ integer matrices. Then the following
conditions are equivalent:
(1) there exist matrices $A$ and $B$ such that $M = \begin{pmatrix}A&B\\C&D\end{pmatrix} \in \Gamma^n$;
(2) the pair $(C, D)$ is symmetric and relatively prime.
PROOF. If $(C, D)$ satisfies (1), then the pair is symmetric by the second relation
(2.8) of Chapter 1. Furthermore, if $GC$ and $GD$ are integer matrices, then the same
relations imply integrality of the matrix
$$G = G\cdot{}^t(-B\,{}^tC + A\,{}^tD) = -(GC)\,{}^tB + (GD)\,{}^tA.$$
Now suppose that $(C, D)$ satisfies (2). Note that the pair
$$(C', D') = (C, D)M'' = (CA'' + DC'',\ CB'' + DD''),$$
where $M'' = \begin{pmatrix}A''&B''\\C''&D''\end{pmatrix} \in \Gamma^n$, then also satisfies (2). According to the conditions
(2.8) of Chapter 1, the matrix
$$C'\,{}^tD' = CA''\,{}^tB''\,{}^tC + DC''\,{}^tB''\,{}^tC + CA''\,{}^tD''\,{}^tD + DC''\,{}^tD''\,{}^tD$$
$$= CA''\,{}^tB''\,{}^tC + DC''\,{}^tD''\,{}^tD + DC''\,{}^tB''\,{}^tC + CB''\,{}^tC''\,{}^tD + C\,{}^tD$$
is symmetric; in addition, it is clear that the pair $(C', D')$ is relatively prime. We choose
a matrix $M'' \in \Gamma^n$ such that $(C', D') = (E, 0)$. Let $t$ be the first row of the matrix
$(C, D)$. Since $(C, D)$ is a relatively prime pair, it follows that $t \neq 0$. By Lemma 3.9 of
Chapter 1, there exists a matrix $M_0 \in \Gamma^n$ such that $M_0\cdot{}^tt = {}^t(t, 0, \dots, 0)$, where $t \in \mathbb N$.
Then
$$(C, D)\,{}^tM_0 = (C', D')$$
and
We shall say that two symmetric and relatively prime pairs $(C, D)$ and $(C', D')$ are
equivalent (or belong to the same class) if
(1.17) $\qquad (C', D') = U(C, D) = (UC, UD)$, where $U \in \Lambda^n$.
In this case obviously
(1.18) $\qquad C\cdot{}^tD' = D\cdot{}^tC'$.
Conversely, if (1.18) holds, and if $M$, $M'$ are matrices in $\Gamma^n$ with bottom rows $(C, D)$
and $(C', D')$, respectively, then $M' = (M'M^{-1})M$ and $M'M^{-1} = \begin{pmatrix}U_1&B_1\\0&U\end{pmatrix} \in \Gamma^n$.
Hence, $U \in \Lambda^n$, and the pairs satisfy (1.17). Thus, the conditions (1.17) and (1.18)
are equivalent.
LEMMA 1.18. Every symmetric and relatively prime pair $(C, D)$ such that $\operatorname{rank} C =
r$, where $0 \le r \le n$, is equivalent to a pair of the form
(1.19) $\qquad \Bigl(\begin{pmatrix}C_1&0\\0&0\end{pmatrix}{}^tU_1,\ \begin{pmatrix}D_1&0\\0&E_{n-r}\end{pmatrix}U_1^{-1}\Bigr)$
(to the pair $(0, E_n)$ if $r = 0$), where $(C_1, D_1)$ is a symmetric and relatively prime pair of
$r\times r$-matrices, $\operatorname{rank} C_1 = r$, and $U_1 \in \Lambda^n$.
Two symmetric and relatively prime pairs of the form (1.19), one of which corresponds
to $C_1, D_1, U_1$ and the other of which corresponds to $C_2, D_2, U_2$, are equivalent if and only
if
(1.20)
$$U_2 = U_1\begin{pmatrix}E_r&B'\\0&V'\end{pmatrix},$$
which implies (1.20).
( V₁ 0 ; 0 1 )·C·( ᵗV₁ 0 ; 0 1 ) = ( C″ 0 ; 0 0 ),
where C″ is an (n − 2) × (n − 2)-matrix. Continuing this process, we eventually obtain
two unimodular matrices, which we shall denote U′ and U*, such that
U′·C·U* = ( C₁ 0 ; 0 0 ),
where C1 is an r x r-matrix of rank r. We set
where D₁ is an r × r-matrix, and the sizes of the other blocks are determined by the
size of D₁. The pair (U′C, U′D), and hence also the pair
is clearly symmetric and relatively prime. From this it easily follows that (C₁, D₁) is a
symmetric pair, D₃ = 0, and D₄ ∈ Λ^{n−r}. If we now set
we see that
This implies, in particular, that (C1, D1) is a relatively prime pair, and the first part of
the lemma is proved.
Now suppose that we are given two symmetric and relatively prime pairs of the
form (1.19), written in terms of the matrices C₁, D₁, U₁ and C₂, D₂, U₂, respectively. If
they are equivalent, then, by (1.18), we have the equality
( C₁ 0 ; 0 0 )·ᵗU₁·ᵗU₂⁻¹·( ᵗD₂ 0 ; 0 E ) = ( D₁ 0 ; 0 E )·U₁⁻¹U₂·( ᵗC₂ 0 ; 0 0 ).
If we divide the matrices ᵗU₁·ᵗU₂⁻¹ and U₁⁻¹U₂ into blocks of the corresponding sizes
We now examine the orbits (1.1) of the Siegel modular group Γ = Γⁿ on Hn.
By the height of a point Z = X + iY ∈ Hn, denoted h(Z), we mean the determinant
det Y. By Lemma 2.8 of Chapter 1 we have
Thus, |det(CZ + D)| = |det C₁|·|det(Z[Q] + P)|, where P = C₁⁻¹D₁ is a rational
symmetric r × r-matrix. By Theorem 1.7, if we replace Q by QV for a suitable V ∈ Λⁿ
(see Lemma 1.18 and the subsequent remark), we may assume that Y[Q] ∈ F_r. We
note that the class of the pair (C₁, D₁) is uniquely determined by the symmetric matrix
P = C₁⁻¹D₁. In fact, if C₁⁻¹D₁ = C₂⁻¹D₂ for another symmetric and relatively prime
pair of r × r-matrices (C₂, D₂), and if det C₂ ≠ 0, then C₁⁻¹D₁ = ᵗD₂·ᵗC₂⁻¹, and hence
C₁·ᵗD₂ = D₁·ᵗC₂, so that our two pairs satisfy the condition for equivalence in the form
(1.18). We set T = Y[Q] and S = X[Q] + P. Since T > 0 and S is symmetric, there
exists a real r × r-matrix F such that T[F] = E and S[F] = H = diag(h₁, …, h_r).
Since (det F)⁻² = det T, we have
|det(Z[Q] + P)| = |det(S + iT)| = |det((H + iE)[F⁻¹])|
 = det T·∏_{α=1}^{r} (1 + h_α²)^{1/2}.
Thus, (1.24) is equivalent to the inequality
(1.25) |det C₁|·det T·∏_{α=1}^{r} (1 + h_α²)^{1/2} < 1.
From this inequality it follows that det T < 1. Let q₁, …, q_r denote the columns of the
matrix Q. Since T = (ᵗq_α·Y·q_β) is a reduced matrix, it follows by Theorem 1.8 that
∏_{α=1}^{r} Y[q_α] ≤ c_r·det T < c_r.
On the other hand, if λ is the smallest eigenvalue of the matrix Y, then Y[q_α] ≥
λ·ᵗq_α·q_α ≥ λ. These inequalities imply that Y[q_α] < λ^{1−r}·c_r (1 ≤ α ≤ r), and so
all of the q_α belong to a certain finite set of integer vectors. In particular, there are
only finitely many matrices Q that are not connected by relations of the form (1.21)
and have the property that a pair of the form (1.19) satisfies (1.24). Furthermore,
det T = det Y[Q] takes only finitely many values, and hence the inequality (1.25)
implies that the numbers |det C₁|, h₁, …, h_r are bounded from above. In addition,
since T = ᵗF⁻¹·F⁻¹, it follows that all of the entries in the matrix F⁻¹ are bounded from
above. Consequently, all of the entries in S = H[F⁻¹], and hence all of the entries in
P = S − X[Q], are bounded from above. We conclude that P is a rational matrix with
bounded entries, all of whose denominators are also bounded, since they are divisors
of a finite number of values of det C₁. There are only finitely many such P, and hence
only finitely many nonequivalent pairs (C₁, D₁). Applying the second part of Lemma
1.18, we complete the proof of the lemma. □
THEOREM 1.20. Let Dn be the subset of the upper half-plane Hn that consists of all
matrices Z = X + iY ∈ Hn that satisfy the conditions:
(1) |det(CZ + D)| ≥ 1 for all symmetric and relatively prime pairs of n × n-matrices
(C, D);
(2) Y ∈ Fn, where Fn is the reduction domain (1.10);
(3) X ∈ Xn = {X = (x_αβ) ∈ Sn(R); |x_αβ| ≤ 1/2 (1 ≤ α, β ≤ n)}.
Then Dn intersects with every orbit of Γⁿ on Hn. If Z and Z′ are two interior points of
Dn and Z′ = M⟨Z⟩ with M ∈ Γⁿ, then M = ±E₂ₙ. In particular, all of the interior
points of Dn lie in distinct orbits of Γⁿ.
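The height behaves simply under the modular group: h(M⟨Z⟩) = h(Z)/|det(CZ + D)|². For n = 1 this is the familiar identity Im M⟨z⟩ = Im z/|cz + d|², which the following sketch (with an arbitrary sample matrix and point, not from the text) checks numerically.

```python
# For n = 1 the height is h(z) = Im z, and M = (a b; c d) in SL_2(Z)
# satisfies Im M<z> = Im z / |cz + d|^2, the n = 1 case of
# h(M<Z>) = h(Z) / |det(CZ + D)|^2.
a, b, c, d = 2, 1, 1, 1           # det = 2*1 - 1*1 = 1
z = 0.3 + 0.7j

Mz = (a * z + b) / (c * z + d)
print(abs(Mz.imag - z.imag / abs(c * z + d) ** 2))   # ~ 0.0
```

In particular a point of maximal height in its orbit must satisfy |cz + d| ≥ 1 for all coprime (c, d), which is exactly condition (1) when n = 1.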
PROOF. We consider the orbit Γⁿ⟨Z″⟩ of an arbitrary point Z″ ∈ Hn. By Lemma
1.19, there exists a point Z′ ∈ Γⁿ⟨Z″⟩ having maximal height, and this point satisfies
all of the inequalities in (1). Any transformation of the form
where V ∈ Λⁿ and S ∈ Sn, belongs to Γⁿ and does not change the height of Z′. By
Theorem 1.7, there exists a matrix V ∈ Λⁿ such that Y = Y′[V] ∈ Fn. There also
obviously exists a symmetric integer matrix S such that X = X′[V] + S ∈ Xn. Then
Z = X + iY ∈ Γⁿ⟨Z″⟩ ∩ Dn.
Now suppose that Z and Z′ ∈ Dn, Z′ = M⟨Z⟩, where M = ( A B ; C D ) ∈ Γⁿ.
Then h(Z) = h(Z′) by Lemma 1.19, and from (1.22) it follows that |det(CZ + D)| = 1.
Similarly, because Z = M⁻¹⟨Z′⟩, we have |det(−ᵗC·Z′ + ᵗA)| = 1 (see (2.9) of Chapter
1). If C ≠ 0, then these equations are nontrivial, and consequently the points Z and Z′
lie on the boundary of Dn. If C = 0, then M can obviously be written in the form
M = ( ᵗV S·V⁻¹ ; 0 V⁻¹ ), where V ∈ Λⁿ, S ∈ Sn. Then Z′ = X′ + iY′ = X[V] + S + iY[V],
where X + iY = Z. In particular, Y′ = Y[V]. Since Y, Y′ ∈ Fn, it follows by Theorem
1.7 that either Y and Y′ are boundary points of Fn, or else V = ±Eₙ. In the latter
case X′ = X + S, and hence S = 0 if X and X′ do not lie on the boundary of Xn. We
conclude that M = ±E₂ₙ if Z and Z′ are not boundary points of Dn. □
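For n = 1 the reduction into D₁ used in the proof can be carried out explicitly by alternating translations and inversions; each inversion strictly raises the height Im z while |z| < 1, so the process terminates. A minimal sketch:

```python
def reduce_to_D1(z):
    """Move z in the upper half-plane into the fundamental domain
    |Re z| <= 1/2, |z| >= 1 of Gamma^1, using translations z -> z + s
    (height preserved) and the inversion z -> -1/z (height Im z strictly
    increases while |z| < 1, so the loop terminates)."""
    while True:
        z = z - round(z.real)        # enforce condition (3): |Re z| <= 1/2
        if abs(z) >= 1 - 1e-12:      # condition (1) for n = 1
            return z
        z = -1 / z

w = reduce_to_D1(0.37 + 0.02j)
print(abs(w.real) <= 0.5 and abs(w) >= 1 - 1e-9)   # True
```

This is exactly the classical reduction algorithm for SL₂(Z); the higher-degree statement of Theorem 1.20 replaces the single inequality |z| ≥ 1 by condition (1) for all symmetric relatively prime pairs.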
(1.26) where c′ₙ = 2·cₙ·n^{1−n}·√3,
From Theorem 1.21 it follows that the fundamental domain Dn is closed in the
space of all complex symmetric matrices. Siegel proved that Dn is a connected domain
bounded by a finite number of algebraic hypersurfaces.
4. Subgroups of finite index. Let K be an arbitrary subgroup of finite index in the
modular group Γⁿ. We set
Clearly, K′ is also a subgroup of Γⁿ. We let M₁, …, M_µ denote a complete set of left
coset representatives for Γⁿ modulo K′, so that
(1.29) Γⁿ = ⋃_{α=1}^{µ} K′M_α and K′M_α ∩ K′M_β = ∅ if α ≠ β,
and we set
(1.30) D_K = ⋃_{α=1}^{µ} M_α(Dn),
and the intersection of any pair of subsets on the right in this decomposition does not
contain any interior points of the subsets.
PROPOSITION 1.23. (1) Let K ⊂ Γⁿ be a subgroup of finite index, and let D_K be a
fundamental domain for K in Hn. Then the volume
PROOF. From Proposition 2.9 of Chapter 1 it follows that v(D_K) does not depend
on the choice of D_K. If we choose D_K in the form (1.30), we have
v(D_K) = Σ_{α=1}^{µ} ∫_{M_α(Dn)} d*Z = Σ_{α=1}^{µ} ∫_{Dn} d*Z = [Γⁿ : K′]·v(Dn),
from which (1.32) follows. To prove finiteness of the volume it suffices to treat the case
of v(Dn). From Theorems 1.21 and 1.8 and inequalities (1.11) and (1.12) we obtain
v(Dn) ≤ c·∫ (y₁₁ ⋯ yₙₙ)^{−(n+1)} ∏_{1≤α≤β≤n} dy_αβ,
the integral being taken over the region c′ₙ ≤ y₁₁ ≤ ⋯ ≤ yₙₙ, |y_αβ| ≤ y_αα/2 (α ≠ β);
hence
v(Dn) ≤ c′·∫_{y₁₁, …, yₙₙ ≥ c′ₙ} (y₁₁ ⋯ yₙₙ)^{−(n+1)}·∏_{α=1}^{n} y_αα^{n−α}·∏_{α=1}^{n} dy_αα
 = c′·∏_{α=1}^{n} ∫_{c′ₙ}^{∞} y_αα^{−(α+1)} dy_αα < ∞,
PROBLEM 1.24. Sketch connected fundamental domains for Γ¹₀(2) and Γ¹(2).
PROBLEM 1.25. Compute v(Γ¹).
this is a normal subgroup of finite index in Γⁿ, since it is the kernel of the homomorphism
from Γⁿ to the finite group GL₂ₙ(Z/qZ) that is defined by reduction modulo q.
If for some q a subgroup K satisfies
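For n = 1 the index of the principal congruence subgroup can be computed by brute force: reduction modulo q is a surjection SL₂(Z) → SL₂(Z/qZ) with kernel Γ¹(q), so [Γ¹ : Γ¹(q)] = |SL₂(Z/qZ)|. A sketch (exponential in q, intended only for tiny moduli):

```python
from itertools import product

def index_of_principal_congruence_subgroup(q):
    # [SL2(Z) : Gamma(q)] = |SL2(Z/qZ)|: count residue matrices of
    # determinant 1 mod q by enumerating all four entries
    return sum(1 for a, b, c, d in product(range(q), repeat=4)
               if (a * d - b * c) % q == 1)

print(index_of_principal_congruence_subgroup(2))  # 6
print(index_of_principal_congruence_subgroup(3))  # 24
```

The values agree with the classical formula |SL₂(Z/qZ)| = q³·∏_{p|q}(1 − p⁻²).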
weight m/2 and character χ_Q for the group Γⁿ₀(q). Furthermore, from Proposition
3.14 and Theorem 3.6 of Chapter 1 it is easy to see that any function of the form
(3.1)
where
(3.4) Γⁿ₁(q) = { ( A B ; C D ) ∈ Γⁿ₀(q); det D = 1 }
is a subgroup of finite index in Γⁿ₀(q),
and w = k or k/2 is an integer or half-integer.
Since χᵐ = 1 for some natural number m (the smallest such m is called the order of
χ), it follows that χ(T(mqS)) = 1 for any matrix S ∈ Sn. In addition, T(mqS) ∈ Γⁿ₁(q).
2. The Koecher effect. We now prove the following fact.
THEOREM 3.1. Every modular form F ∈ 𝔐(T, χ), where T is a subgroup of finite
index in Γⁿ₀, n ≥ 1, and χ is a finite character of T, has a series expansion of the form
(3.5) F(Z) = Σ_{R∈Aₙ} f(R)·e{q⁻¹RZ} (Z ∈ Hn),
where Aₙ is the set (1.1) of Chapter 1, e{···} is the function (3.23) of Chapter 1, and
q = q(T, χ) is the smallest natural number such that
(3.6) T(qS) ∈ T and χ(T(qS)) = 1 for any S ∈ Sn.
The series (3.5) is absolutely convergent on all of Hn, and it converges uniformly on every
Hn(ε), where ε > 0. In particular, the function F(Z) is bounded on each Hn(ε).
For every matrix V such that
(3.7) U(ᵗV) ∈ T,
the coefficients f(R) in (3.5) satisfy the relation
(3.8) f(R[V]) = χ(U(ᵗV))·f(R).
We call (3.5) the Fourier expansion of the form F, and we call the numbers f(R)
its Fourier coefficients.
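In degree n = 1 such coefficients can be recovered by integrating against exponentials over a period. The sketch below does this for an artificial finite exponential series (not a genuine modular form), with the period q of (3.5) taken to be 1; the Riemann sum over one period is exact for a finite expansion.

```python
import cmath

def fourier_coefficient(F, r, y=1.0, N=64):
    """Approximate f(r) in F(z) = sum_r f(r) exp(2*pi*i*r*z) by a Riemann
    sum over one period at height y; exact (up to rounding) for finite
    expansions with all frequencies below N."""
    total = 0.0
    for k in range(N):
        z = k / N + 1j * y
        total += F(z) * cmath.exp(-2j * cmath.pi * r * z)
    return total / N

# An artificial periodic function with known coefficients
F = lambda z: 1 + 5 * cmath.exp(2j * cmath.pi * z) + 2 * cmath.exp(6j * cmath.pi * z)

print(abs(fourier_coefficient(F, 1) - 5))   # ~ 0.0
print(abs(fourier_coefficient(F, 3) - 2))   # ~ 0.0
```

For r not occurring in the expansion the sum collapses to (numerical) zero, which is the mechanism behind the vanishing statements proved below.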
PROOF. In the case of matrices of the form (3.6), the functional equation (3.2)
becomes
F(Z + qS) = F(Z) (Z = (z_αβ) ∈ Hn),
which holds for any symmetric n × n integer matrix S. This means that F is a periodic
function with period q in each variable z_αβ = z_βα. Since it is a regular analytic function,
F then has a Fourier expansion of the form
where f(R, Y) = g((r_αβ), Y)·e{−iq⁻¹RY}, and R runs through the set Eₙ of all
matrices in Sn with even main diagonal. Since F(Z) is holomorphic in each of
the variables z_αβ, using term-by-term differentiation and uniqueness of the Fourier
expansion we see that the Cauchy–Riemann equations
∂F/∂z̄_αβ = 0, where ∂/∂z̄_αβ = (1/2)·(∂/∂x_αβ + i·∂/∂y_αβ),
lead to the equations
with constant coefficients f(R), where R runs through the same set as above.
The expansion (3.9) may be regarded as the Laurent series for the analytic function
F in the variables t_αβ = exp(2πiz_αβ/q) (1 ≤ α ≤ β ≤ n). Consequently, the series
(3.9) converges absolutely on all of Hn.
We now substitute the expansion (3.9) for F in the functional equation (3.2) for
a matrix of the form (3.7). If we replace R by R[V] and equate coefficients, we then
obtain (3.8).
To complete the proof of the theorem it remains to verify that f(R) = 0 if
R ∉ Aₙ, and that the series converges uniformly on Hn(ε).
We first consider the case n = 1. In this case the expansion (3.9) takes the form
F(z) = Σ_{r=−∞}^{+∞} f(2r)·exp((2πi/q)·rz) = Σ_{r=−∞}^{+∞} f(2r)·tʳ,
where the sum is taken over a complete set of representatives of the classes
(3.10) {R}_{T,χ} = {R[V]; U(ᵗV) ∈ T, χ(U(ᵗV)) = 1}
of matrices R ∈ Eₙ, and
(3.11) e(Z, {R}_{T,χ}) = Σ_{R′∈{R}_{T,χ}} e{q⁻¹R′Z}.
If f(R) ≠ 0, then the series f(R)·e(Z, {R}_{T,χ}) converges for all Z ∈ Hn, since it is
a partial sum of the absolutely convergent series for F. In particular, in this case the
following series converges:
Since the traces σ(R′) of the matrices R′ are integers, from the convergence of the last
series it follows that the inequality σ(R′) < 0 can hold for at most a finite number of
different matrices in {R}_{T,χ}.
We show that for any symmetric n × n integer matrix R, n ≥ 2, with even diagonal,
the function σ(R′) takes infinitely many negative values on the class {R}_{T,χ} if R is not
semidefinite. If R ∉ Aₙ, then there obviously exists a column vector h of n integers
such that R[h] < 0. We set V_s = Eₙ + sH, where H = (t₁h, …, tₙh) ∈ Mₙ and
s, t₁, …, tₙ are integers. Since the matrix sH has rank 0 or 1, we clearly have
where h₁, …, hₙ are the coordinates of h. Since n ≥ 2, the integers t₁, …, tₙ can be
chosen so that t₁h₁ + ⋯ + tₙhₙ = 0 and t₁² + ⋯ + tₙ² > 0. Then V_s ∈ SLₙ(Z) and
H² = (h_α·t_β·(t₁h₁ + ⋯ + tₙhₙ)) = 0, from which it follows, in particular, that V_s·V_t = V_{s+t}.
Since the index [Γ₀ : T] and the character χ are finite, it follows that U(ᵗV_r) = U(ᵗV₁)ʳ
lies in T for some r ∈ N, and also χ(U(ᵗV_r)) = 1; this implies that for any integer l,
R_l = R[V_{rl}] is contained in {R}_{T,χ} and
σ(R_l) = σ(R) + 2rl·σ(RH) + r²l²·R[h]·(t₁² + ⋯ + tₙ²).
Since the last expression is a quadratic trinomial in l with negative coefficient of l², it
takes negative values of arbitrarily large absolute value for suitable integers l.
From what we have proved it follows that the coefficient f(R) in (3.9) vanishes if
R ∉ Aₙ. This proves the existence of the expansion (3.5).
Finally, suppose that Z = X + iY ∈ Hn(ε) for some ε > 0. Then, by the inequality
(1.6) of Appendix 1, we find that for any R ∈ Aₙ
Now if R = (r_αβ) ∈ Aₙ and σ(R) ≤ N, then from the inequality (1.5) of Appendix 1
we obtain |r_αβ| ≤ N. Thus, the number of different matrices R ∈ Aₙ with σ(R) ≤ N
is no greater than
(3.13)
Since the latter series converges, it follows that the series (3.5) converges uniformly on
Hn(ε). □
(~ ; ) Er;.
PROBLEM 3.3. Suppose that T = Γⁿ₁(q) and {R}_T is the set (3.10) with χ = 1. Show
that any series e(Z, {R}_T) of the form (3.11) with R ∈ Aₙ is a modular form of trivial
character for the group T.
3. Fourier expansions of modular forms. The inclusions (3.3) show that Theorem
3.1 can be applied, in particular, to modular forms for congruence subgroups of the
modular group. Thus, every such form has a Fourier expansion with the properties
described above. However, both in the development of the theory of modular forms
and in applications of the theory it turns out that one also needs to consider analogous
expansions of functions obtained from modular forms by means of certain standard
transformations. In addition, one wants to have bounds on the Fourier coefficients for
all such expansions.
In the case of modular forms of integer weight k, the transformations we are
referring to can be expressed in terms of the elementary transformations that take a
function F on Hn to the function
(3.19)
It is not hard to check that 𝔊 is a group: this follows immediately from the definition
of the group operation and the basic property of the automorphy factor det(CZ + D).
The groups Sⁿ_R and 𝔊 are related by the epimorphism
(3.20)
whose kernel is contained in the center of the group 𝔊 and is obviously isomorphic to
the multiplicative group C¹.
We are now ready to define the transformations in the case of half-integer weight
k/2 that are analogous to the transformations (3.14). We set
(3.22)
Suppose that K is a congruence subgroup of Γⁿ, K ⊃ Γⁿ(q), and M is any matrix
in the group Sⁿ = Sⁿ_Q. Let
(3.25)
LEMMA 3.4. Let K be a congruence subgroup of Γⁿ₀(4), and let M ∈ Sⁿ. Let the
map
(3.26)
be defined for any M₀ ∈ K_M by the equality
(3.27) (MM₀M⁻¹)^ = M̃·M̂₀·M̃⁻¹·(E₂ₙ, t_M(M₀)),
where M̃ = (M, φ) is any P-preimage of M in 𝔊 and L̂ = j⁽¹⁾(L) for all L ∈ Γⁿ₀(4).
Then this map does not depend on the choice of M̃, it is a character of the group K_M,
and, in addition, t_M⁴ = 1.
PROOF. Since the P-images of the elements on the left and right sides of (3.27)
are the same, it follows that they differ from one another by a factor in the kernel of
P. It is easy to see that this kernel consists of elements of the center of 𝔊 of the form
(E₂ₙ, t), where t ∈ C¹. We thus find that the equality (3.27) uniquely determines a
number t_M(M₀) ∈ C¹.
We now show that t_M is a homomorphism. For any matrices M₁, M₂ ∈ K_M we
have
(MM₁M₂M⁻¹)^ = (MM₁M⁻¹)^·(MM₂M⁻¹)^
 = M̃·M̂₁·M̃⁻¹·(E₂ₙ, t_M(M₁)) · M̃·M̂₂·M̃⁻¹·(E₂ₙ, t_M(M₂))
 = M̃·(M₁M₂)^·M̃⁻¹·(E₂ₙ, t_M(M₁)·t_M(M₂)),
which, together with the relation (3.27) for the matrix M₀ = M₁M₂, implies that
t_M(M₁M₂) = t_M(M₁)·t_M(M₂).
Finally, if we multiply the elements on the right in (3.27) using (3.19) and recall
the definition (3.23) of the homomorphism, we obtain the relation
(3.28) j⁽¹⁾(MM₀M⁻¹, Z) = φ(M₀M⁻¹⟨Z⟩)·j⁽¹⁾(M₀, M⁻¹⟨Z⟩)·φ(M⁻¹⟨Z⟩)⁻¹·t_M(M₀),
since
(3.29)
Squaring both sides of (3.28) and using Lemma 4.2 of Chapter 1, formula (4.47) of
Chapter 1, and the definition (3.17), we find that t_M(M₀) satisfies the relation
(3.30) t_M(M₀)² = χ_{Q₂}(MM₀M⁻¹)·χ_{Q₂}(M₀),
where χ_{Q₂} is the character (4.31) of Chapter 1 for the matrix Q₂ = 2E₂. Thus,
[t_M(M₀)]⁴ = 1. □
of the group K_M, where we naturally take Γ = Γⁿ₀(4) in the definition (3.25). From
(3.31), (3.32), and Lemma 3.4 it follows that the characters χ_M and χ_{M,k} are finite if χ
is a finite character.
THEOREM 3.5. Let F ∈ 𝔐_w(K, χ) be a modular form of degree n ≥ 1, of integer or
half-integer weight w (w = k or k/2), and of character χ (where χ has order m) for the
congruence subgroup K of Γⁿ, K ⊃ Γⁿ(q₁); and let K ⊂ Γⁿ₀(4) if w = k/2. Then for
every matrix M ∈ Γⁿ one has the expansion
where ξ = M and q = q₁m if w = k, while if w = k/2, then ξ = M̃ is any P-preimage
of M in the symplectic covering group 𝔊 and q = 4q₁m. Each of these series converges
absolutely on all of Hn and uniformly in Hn(ε) for any ε > 0. In particular, each function
F|_w ξ is bounded on any of the sets Hn(ε).
The Fourier coefficients in the expansion (3.33) satisfy the relations
(3.34) f_ξ(R[V]) = χ′(U(ᵗV))·f_ξ(R) (R ∈ Aₙ),
where χ′ = χ_M or χ_{M,k} for w = k or k/2, respectively, V ∈ Λⁿ, and V ≡ Eₙ (mod q₁).
If w ≥ 0, then
(3.35)
where γ_F depends only on F.
REMARK. We shall soon see that 𝔐_w(K, χ) = {0} if w < 0. So there is no loss of
generality in the condition w ≥ 0 in (3.35).
PROOF. Since obviously F ∈ 𝔐 = 𝔐_w(Γⁿ(q₁), χ), we can start by replacing
𝔐_w(K, χ) by 𝔐.
We first consider the case w = k. As we already noted, the function F|_k M has the
same analytic properties as F. If M₀ ∈ Γⁿ(q₁), then MM₀M⁻¹ ∈ Γⁿ(q₁), and hence
To prove (3.35) for the Fourier coefficients f_M(R), we consider the function
(3.38)
In addition, because the functions F|_k M_α are bounded on Hn(ε) for ε > 0, it follows
from Theorem 1.21 that each of these functions, and hence also G, is bounded on
the fundamental domain Dn for the group Γⁿ.
LEMMA 3.6. Suppose that the nonnegative real-valued function G on Hn satisfies the
functional equation (3.39) for any M ∈ Γⁿ and is bounded on Dn. Then the following
bound holds uniformly in X ∈ Sn(R) and R ∈ A⁺ₙ:
G(X + iR⁻¹) ≤ γ·(det R)^k,
(see the inequality (1.10) in Appendix 1). If det C = 0, we replace Z by the point
Then
and M·M₁ = ( AS+B −A ; CS+D −C ).
We show that there exists a symmetric n × n integer matrix S such that
(3.41) det(CS + D) ≠ 0.
Let r be the rank of C. From Lemmas 1.17 and 1.18 we see that in this case the
matrices C and D can be represented in the form
Hence,
(3.42) G(Z) = |det(−Z + S)|^{−k}·G(M₁⟨Z⟩) ≤ δ₀·(det Y)^{−k}·|det(−Z + S)|^{k}.
The expression det(CS + D) is a polynomial in the entries s_αβ of the matrix S of degree
at most two in each variable s_αβ. Since this polynomial takes nonzero values, using
induction on n it is easy to see that it is nonzero for certain integer values of the s_αβ
satisfying the inequalities −2 < s_αβ − x_αβ < 2 (α, β = 1, …, n), where x_αβ are the
entries in the real part X = (x_αβ) of the matrix Z. Supposing that these inequalities
hold, we see that |det(−Z + S)|² = |det(S − X + iY)|² is a polynomial of degree 2n
with bounded coefficients in the entries y_αβ of the matrix Y. Since Y > 0, it follows
by inequality (1.5) of Appendix 1 that |y_αβ| ≤ σ(Y). Hence,
|det(−Z + S)| ≤ δₙ·(1 + σ(Y))ⁿ,
where δₙ depends only on n. From this bound and (3.42) we obtain
G(Z) ≤ δ′·(1 + σ(Y))^{nk}·(det Y)^{−k}.
Now let Y = R⁻¹, where R ∈ A⁺ₙ. The matrix R can be written in the form R =
R₀[U⁻¹], where R₀ is Minkowski reduced and U ∈ Λⁿ. Then, using (3.39) for
M = ( ᵗU 0 ; 0 U⁻¹ ) ∈ Γⁿ and the last inequality, we obtain
G(X + iR⁻¹) = G((U⁻¹·X·U* + iR₀⁻¹)[ᵗU])
 = G(U⁻¹·X·U* + iR₀⁻¹) ≤ δ′·(1 + σ(R₀⁻¹))^{nk}·(det R₀)^k.
Let R_αα denote the matrix obtained from R₀ by crossing out the αth row and column,
and let r_α denote the αth diagonal entry of R₀. Since R_αα > 0 and R₀ is Minkowski
reduced, we can apply the inequality (1.8) of Appendix 1 to the matrices R_αα and use
Theorem 1.8 to obtain
σ(R₀⁻¹) = Σ_{α=1}^{n} (det R_αα)/(det R₀) ≤ Σ_{α=1}^{n} (r₁ ⋯ rₙ)/(r_α·cₙ·r₁ ⋯ rₙ) = cₙ⁻¹·Σ_{α=1}^{n} r_α⁻¹ ≤ cₙ⁻¹·n.
We return to the proof of the bound (3.35) on the coefficients f_M(R). Since F|_k M
is obviously equal to one of the functions F|_k M_α in (3.38), if we apply Lemma 3.6 to
G = G_F we obtain
(3.43) |(F|_k M)(Z)| ≤ γ·(det R)^k, where Z = (x_αβ) + iR⁻¹,
and hence
LEMMA 3.7. Let F_i ∈ 𝔐_{k_i/2}(K_i, χ_i), where i = 1, 2, the k_i are odd integers, and
the K_i are congruence subgroups in Γⁿ₀(4). Then the product F = F₁F₂ is a modular
form of integer weight k = (k₁ + k₂)/2 belonging to 𝔐_k(K, χ), where K = K₁ ∩ K₂
is a congruence subgroup in Γⁿ₀(4), χ = χ₁·χ₂·(χ_{Q₂})^k, and χ_{Q₂} is the character (4.31) of
Chapter 1 for the matrix Q₂ = 2E₂.
PROOF. Since K_i contains a principal congruence subgroup Γⁿ(q_i), it follows that
K ⊃ Γⁿ(q₁q₂). We also obviously have K ⊂ Γⁿ₀(4). Next, according to (3.24) we
can write F_i|_{k_i/2}M = χ_i(M)·F_i for any matrix M ∈ K. If we multiply these equalities
together for i = 1 and 2, then from (4.47) of Chapter 1 and the definition of modular
forms we obtain all of the claims in the lemma. □
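On the level of Fourier expansions, multiplying two forms multiplies their q-series, so the coefficients of the product are the convolution (Cauchy product) of the coefficients. The classical degree-1 example is θ(z)², with θ of weight 1/2 and θ² of weight 1: the product's coefficients count representations of n as a sum of two squares. A sketch (using the normalization θ(z) = Σ_{m∈Z} q^{m²}):

```python
def theta_coefficients(N):
    # q-expansion of theta(z) = sum_{m in Z} q^(m^2), truncated at q^N
    c = [0] * (N + 1)
    m = 0
    while m * m <= N:
        c[m * m] += 1 if m == 0 else 2
        m += 1
    return c

def product_coefficients(a, b):
    # Cauchy product: Fourier coefficients of the product of two q-series
    N = min(len(a), len(b)) - 1
    return [sum(a[i] * b[n - i] for i in range(n + 1)) for n in range(N + 1)]

r2 = product_coefficients(theta_coefficients(10), theta_coefficients(10))
print(r2[:6])   # [1, 4, 4, 0, 4, 8]: representations of n as x^2 + y^2
```

The same convolution rule underlies the proof of Lemma 3.7: the analytic properties multiply, and the weights add.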
At the beginning of the proof of Theorem 3.5 we saw that the operators |_w ξ, where
ξ = M or M̃ and M ∈ Γⁿ, map modular forms to modular forms. We now show that
this is also a property of the analogous operators for any rational matrix (or matrix
proportional to a rational one).
PROPOSITION 3.8. Let F ∈ 𝔐_w(K, χ) be a modular form of integer or half-integer
weight w (where w = k or k/2) and finite character χ for the congruence subgroup K of
Γⁿ; if w = k/2, then also K ⊂ Γⁿ₀(4). Further let ξ = M if w = k and ξ = M̃ ∈ 𝔊 if
w = k/2, where M is any matrix in Sⁿ_Q and P(M̃) = M. Then
PROBLEM 3.9. Prove that if K = Γⁿ(4q) and M ∈ Γⁿ, then the homomorphism
t_M in Lemma 3.4 satisfies the condition t_M² = 1. If M ∈ Γⁿ₀(4), then show that t_M = 1.
PROBLEM 3.10. Let K be a congruence subgroup of Γⁿ₀(4), and let M be any matrix
in Sⁿ_Q. Show that the characters t_M and t_{M⁻¹} are related as follows: t_M(W)⁻¹ =
t_{M⁻¹}(MWM⁻¹) for any W ∈ K_M.
4. The Siegel operator. In this subsection we shall establish connections between
modular forms of degree n and degree n − 1. These connections come from the
properties of the Fourier expansion of a modular form that were described in Theorems
3.1 and 3.5. Let 𝔉ⁿ_q denote the set of all Fourier series of the form (3.5) that converge
absolutely and uniformly on Hn(ε) for ε > 0. Let F ∈ 𝔉ⁿ_q. If Z ∈ H_{n−1}(ε) and λ > ε,
then obviously
Z_λ = ( Z 0 ; 0 iλ ) ∈ Hn(ε).
Because of the uniform convergence of the series (3.5) for F, we have
If
R = ( R′ * ; * 2rₙₙ ),
then σ(RZ_λ) = σ(R′Z) + 2rₙₙλi, and hence
(3.49) lim_{λ→+∞} e{q⁻¹RZ_λ} = e{q⁻¹R′Z} if rₙₙ = 0, and 0 if rₙₙ > 0.
Since R ≥ 0, the equality rₙₙ = 0 implies that r₁ₙ = rₙ₁ = ⋯ = r_{n−1,n} = r_{n,n−1} = 0,
i.e., R = ( R′ 0 ; 0 0 ). Thus, for Z ∈ H_{n−1} we have
Since this last series is a partial sum for the expansion (3.5) of F, it converges absolutely
and uniformly on H_{n−1}(ε). Thus, F|Φ ∈ 𝔉^{n−1}_q. If n = 1, we set
(3.51) F|Φ = lim_{λ→+∞} F(iλ).
As before, the limit exists and is equal to the constant term of the Fourier expansion
of F. Setting 𝔉⁰_q = C, for all n, q ≥ 1 we obtain the linear operator
lim_{λ→+∞} θⁿ(Z_λ, Q; T) = Σ_{M∈M_{m,n}} lim_{λ→+∞} e{ Q[M + q⁻¹T]·( Z 0 ; 0 iλ ) }.
If M = (M′, m′) and m′ is the last column of the matrix M, then the entry in the
lower-right corner of the matrix Q[M + q⁻¹T] is obviously equal to Q[m′ + q⁻¹t].
Using (3.49) and the positivity of Q, we see that the limit of the corresponding term
in the sum on the right is equal to e{Q[M′ + q⁻¹T′]Z} or 0, depending on whether
m′ + q⁻¹t = 0 or ≠ 0, respectively. □
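For n = 1 the Siegel operator (3.51) simply extracts the constant term: every term of the Fourier expansion with positive index dies as λ → +∞. The sketch below illustrates this with the theta series (in the normalization θ(z) = Σ_{m∈Z} exp(2πim²z), one common convention), whose constant term is 1.

```python
import math

def theta(lam, terms=50):
    # theta(i*lam) = 1 + 2 * sum_{m >= 1} exp(-2*pi*m^2*lam)
    # (the series at the purely imaginary point z = i*lam)
    return 1 + 2 * sum(math.exp(-2 * math.pi * m * m * lam)
                       for m in range(1, terms + 1))

# The Siegel operator in degree 1: F|Phi = lim_{lam -> +inf} F(i*lam)
# keeps only the constant Fourier term; for theta that term is 1.
for lam in (0.5, 2.0, 8.0):
    print(theta(lam))   # tends to 1
```

The same mechanism in degree n > 1 kills every term with rₙₙ > 0, which is the content of (3.49).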
We now consider the action of the Siegel operator on modular forms for congru-
ence subgroups of the modular group. For n > 1 we define the monomorphism
Thus, by (3.60) and (3.65), the function F|Φ satisfies the functional equations for a
modular form in 𝔐_w(K^{[n−1]}, χ^{[n−1]}) with finite character χ^{[n−1]}. In addition, from
(3.59) and (3.63) it follows that for any M′ ∈ Γ^{n−1} the functions (F|Φ)|_k M′ and
(F|Φ)|_{k/2}M′ are bounded on H_{n−1}(ε), provided that the functions F|_k M and F|_{k/2}M,
where M ∈ Γⁿ, are bounded on Hn(ε). Finally, if K ⊃ Γⁿ(q₁) and the character χ
has order m, then by (3.33) we have F|Φ ∈ 𝔉^{n−1}_q, where q = 4q₁m, and hence this
function is analytic on H_{n−1}. We have thereby proved
PROPOSITION 3.12. Suppose that K is a congruence subgroup of the modular group
Γⁿ, χ is a finite character of K, and w is an integer or half-integer (w = k or w = k/2).
Set 𝔐_w(K^{[0]}, χ^{[0]}) = C. Then the Siegel operator Φ gives a linear map
(3.66)
(3.67)
The condition (3.67) means that F approaches zero as the argument makes certain
"rational" approaches to the boundary of the upper half-plane Hn, i.e., in some sense
F is small near the boundary. This circumstance makes it possible to substantially
strengthen the bounds (3.35) on the Fourier coefficients of the functions Flwe for such
F.
THEOREM 3.13. Let F ∈ 𝔐_w(K, χ) be a cusp-form of degree n ≥ 1, integer or
half-integer weight w (where w = k or w = k/2), and finite character χ of order m for
a congruence subgroup K in Γⁿ, where K ⊃ Γⁿ(q₁) and, if w = k/2, K ⊂ Γⁿ₀(4).
Then, in the notation of Proposition 3.8:
(1) for any matrix M ∈ Γⁿ the Fourier expansion (3.33) of the function F|_w ξ ∈
𝔐_w(K_M, χ′) has the form
where q = q₁m if w = k and 4q₁m if w = k/2, i.e., only positive definite matrices appear
in the expansion;
(2) if w ≥ 0, then the functions F|_w ξ (M ∈ Γⁿ) and their Fourier coefficients satisfy
the bounds
LEMMA 3.14. Let R ∈ A_m, m > 1, and det R = 0. Then there exists a matrix
V ∈ SL_m(Z) such that
PROOF OF THE THEOREM. Let w = k. We shall show that the coefficients f_M(R)
in the expansion (3.33) of a cusp-form are zero for matrices with det R = 0. If n = 1,
then this follows immediately from (3.67), since in this case (F|_k M)|Φ = f_M(0).
Suppose that n > 1 and V ∈ SLₙ(Z) satisfies (3.71) for the matrix R = R₀. Then
M₀ = U(V*) ∈ Γⁿ, and
On the other hand, the Fourier coefficients of the function F|_k M|_k M₀ = F|_k MM₀ are
f_{MM₀}(R), so that
f_M(R[V⁻¹]) = f_{MM₀}(R) (R ∈ Aₙ).
If we set R = R₀[V] here, we obtain
since (F|_k MM₀)|Φ = 0, and the last expression is one of the Fourier coefficients of this
function (see (3.50)). If we use (3.33) with w = k/2, then the above argument goes
through for modular forms of half-integer weight as well.
Q(Y) = Σ_{R∈A⁺ₙ} q(R)·exp(−ε·σ(RY)),
where all of the coefficients q(R) are nonnegative and ε > 0, converges for all Y ∈ Pₙ.
Then the following bounds hold for any matrix Y that belongs to the Minkowski reduction
domain Fₙ and satisfies the inequality Y ≥ εEₙ, where ε > 0:
Q(Y) ≤ δ₁·exp(−δ₂·σ(Y)),
Q(Y) ≤ δ₁·exp(−δ₂·n·(det Y)^{1/n}),
where δ₁ and δ₂ are positive constants, the first of which depends only on Q and ε and the
second of which depends only on n and ε.
PROOF OF THE LEMMA. Since Y ∈ Fₙ, it follows from (1.16) and also (1.6) of
Appendix 1 that
σ(RY) ≥ bₙ·σ(R·diag(y₁₁, …, yₙₙ)) = bₙ·Σ_{α=1}^{n} 2r_αα·y_αα ≥ 2bₙ·σ(Y) (R ∈ A⁺ₙ),
where bₙ = n^{1−n}·cₙ⁻¹ depends only on n and 2r_αα ≥ 2 (the 2r_αα being the diagonal
entries of R). On the other hand, since Y ≥ εEₙ, it follows that σ(RY) ≥ ε·σ(R) for
R ≥ 0. From these inequalities we obtain σ(RY) ≥ bₙ·σ(Y) + (ε/2)·σ(R), and hence
We now return to the proof of Theorem 3.13 for w = k. By analogy with the
function (3.38) we consider the function
(3.72) G = G_F = Σ_{M∈K\Γⁿ} |F|_k M|.
From (3.15) and (3.16) it follows that G does not depend on the choice of left coset
representatives of Γⁿ modulo K. Consequently, for any matrix M′ = ( A B ; C D ) ∈ Γⁿ
we have
From these relations and Lemma 2.8 of Chapter 1 it follows that the function
Ψ_F(Z) = Ψ_F(X + iY) = (det Y)^{k/2}·G_F(Z)
on Hn is invariant relative to all transformations in Γⁿ:
Ψ_F(M′⟨Z⟩) = Ψ_F(Z) for M′ ∈ Γⁿ.
Hence, any value it takes on Hn is already taken on the fundamental domain Dn of
the group Γⁿ. However, if Z = X + iY ∈ Dn, then, by the definition of Dn and the
inequality (1.26), Y satisfies the conditions of Lemma 3.15 for suitable ε > 0. If we
apply the first inequality in Lemma 3.15 to each of the functions
Σ_{R∈A⁺ₙ} |f_M(R)|·exp(−π·σ(RY)/q) ≥ |F|_k M|
we obtain
where δ₂ and δ₃ are positive constants depending only on F. If y_αα are the diagonal
entries in Y, then, by (1.8) of Appendix 1 (recall that k ≥ 0), the last expression is no
greater than
δ₃·∏_{α=1}^{n} y_αα^{k/2}·exp(−δ₂·y_αα),
and consequently it is bounded for all Y > 0. Thus, the function Ψ_F is bounded on
Dn, and hence on all of Hn: Ψ_F(Z) ≤ δ₄ for Z ∈ Hn. Since for an arbitrary matrix
M ∈ Γⁿ the function |F|_k M| is obviously equal to one of the terms in (3.72), the last
bound gives us (3.69) with w = k:
(3.74) |(F|_k M)(X + iY)| ≤ G_F(X + iY)
 = (det Y)^{−k/2}·Ψ_F(X + iY) ≤ δ₄·(det Y)^{−k/2}.
If we now substitute this bound in the integral in (3.44), we obtain the bound (3.70)
for the coefficients f_M(R). To prove (3.69) and (3.70) for w = k/2 one can repeat
the proof of (3.35) in Theorem 3.5, i.e., instead of the modular form F of half-integer
weight k/2 one considers the modular form F² of integer weight k. Then F² satisfies
(3.74), and from that, along with (3.46), one obtains
where ξ′ = M′ ∈ Γⁿ if w = k and ξ′ = M̃′ ∈ 𝔊, P(M̃′) = M′, if w = k/2. By
Proposition 3.7 of Chapter 1, the matrix MM′ can be written in the form M₁M₀,
where M₀ = ( A₀ B₀ ; 0 D₀ ), M₁ ∈ Γⁿ. Let ξ₁ and ξ₂ be defined in the same manner
as ξ; in addition, if w = k/2, then let ξ₀ = M̃₀ ∈ 𝔊 be chosen so that we have
M̃·M̃′ = M̃₁·M̃₀ (this is always possible). Then, by the first part of the theorem, we
have the expansion
F|_w ξ₁ = Σ_{R∈A⁺ₙ} f_{ξ₁}(R)·e{q⁻¹RZ},
and hence
where t is a complex number in C¹. Since the matrix q⁻¹·D₀⁻¹·R·A₀ is positive definite,
from (3.49) and the definition of the Siegel operator it follows that F|_w ξξ′|Φ = 0.
This, along with the relation F|_w ξξ′ = F|_w ξ|_w ξ′, implies (3.75). □
We let 𝔑_w(K, χ) denote the space of all cusp-forms of weight w and character χ
for the group K.
implies that ν_p(F) = ν_{M⟨p⟩}(F) for M ∈ Γ¹; thus, the order ν_p(F) depends only on the
Γ¹-orbit Γ¹⟨p⟩ of p. In addition, by Theorem 3.1, in this case F has a series expansion
of the form
which converges uniformly on H₁(ε) for any ε > 0. This implies that F may be
regarded as a function of the variable q = exp(2πiz):
and, as a function of q, it is holomorphic in the open unit disc |q| < 1, including the
center q = 0 = lim_{z→i∞} exp(2πiz). The order of F(q) at the point q = 0 is called
the order of F at the point i∞; it is denoted ν_{i∞}(F). In other words, ν_{i∞}(F) = n if
f(0) = f(2) = ⋯ = f(2(n − 1)) = 0 but f(2n) ≠ 0 in the expansion (4.2).
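As an illustration of ν_{i∞}, the cusp form Δ of weight 12 has, in the classical normalization Δ(z) = q·∏_{n≥1}(1 − qⁿ)²⁴ with q = e^{2πiz}, the expansion q − 24q² + 252q³ − ⋯, so ν_{i∞}(Δ) = 1. A sketch computing the expansion (its coefficients are Ramanujan's τ):

```python
def delta_coefficients(N):
    """Coefficients of Delta = q * prod_{n>=1} (1 - q^n)^24 up to q^N."""
    poly = [1] + [0] * N                   # running product prod (1 - q^n)^24
    for n in range(1, N + 1):
        for _ in range(24):
            for i in range(N, n - 1, -1):  # multiply in place by (1 - q^n)
                poly[i] -= poly[i - n]
    return [0] + poly[:N]                  # shift by the leading factor q

tau = delta_coefficients(6)
print(tau)   # [0, 1, -24, 252, -1472, 4830, -6048]
```

The first nonzero coefficient sits at q¹, so the order of Δ at q = 0, i.e. ν_{i∞}(Δ), equals 1.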
PROPOSITION 4.1. Any nonzero modular form F of weight k and trivial character for
the modular group Γ¹ vanishes on only a finite number of Γ¹-orbits in H₁. If p₁, …, p_m
is a set of representatives of these orbits, then
(4.4) ν_{i∞}(F) + Σ_{α=1}^{m} e(p_α)⁻¹·ν_{p_α}(F) = k/12,
where e(p) = 2 if p belongs to the orbit of the point i, e(p) = 3 if p belongs to the orbit
of the point ρ = (1 + i√3)/2, and e(p) = 1 otherwise.
PROOF. By Theorem 1.1, every Γ¹-orbit in H₁ intersects with the fundamental
domain D₁. Thus, to prove the first part of the proposition it suffices to verify that F
vanishes at only finitely many points p ∈ D₁. Since the function F(q) (see (4.3)) is
holomorphic at q = 0 and is not identically zero, it must be nonzero in some region of
the form 0 < |q| < ε, where ε < 1. This implies that F(x + iy) ≠ 0 if y > (ln ε⁻¹)/2π.
But the subset of D₁ consisting of all points x + iy for which y ≤ (ln ε⁻¹)/2π
is compact, and hence can contain only finitely many zeros of the holomorphic
function F.
FIGURE 2
In proving (4.4) we may assume that p₁, …, p_m ∈ D₁. We first suppose that the
boundary of D₁ does not contain any zeros of F, except possibly for i, ρ, ρ². Then
one can draw the contour L_r shown in Figure 2, where DD′, EE′, and A′A are arcs of
small circles all of radius r centered at ρ², i, and ρ, respectively, such that L_r contains
all of the zeros p₁, …, p_m that are distinct from ρ, ρ², and i. Since all of the interior
points of D₁ lie in different Γ¹-orbits, it follows that there are no other zeros of F inside
L_r, and so, by the residue theorem, we have
(4.5) Σ_{p_α ≠ ρ, ρ², i} ν_{p_α}(F) = lim_{r→0} (1/2πi)·∮_{L_r} dF/F.
Since F is periodic with period 1, the integrals over the vertical sides cancel:
(1/2πi)·∫_{AB} dF/F + (1/2πi)·∫_{CD} dF/F = 0.
Next, since the map z → q takes the segment BC to a (clockwise) circle centered at
q = 0 that does not contain any zeros of F, with the possible exception of a zero of
order ν_{i∞}(F) at q = 0, it follows that
The integral of (1/2πi)·dF/F over the entire circle containing the arc DD′ (taken in the
same direction as the arc) is equal to −ν_{ρ²}(F) = −ν_ρ(F) for small r. Since the angle
between the radii from ρ² to D and from ρ² to D′ is obviously 2π/6, we have
lim_{r→0} (1/2πi) ∫_{DD′} dF/F = −(1/6) v_ρ(F).
§4. SPACES OF MODULAR FORMS 81
Similarly,

lim_{r→0} (1/2πi) ∫_{EE′} dF/F = −(1/2) v_i(F)
and

lim_{r→0} (1/2πi) ∫_{A′A} dF/F = −(1/6) v_ρ(F).
Finally, the transformation z → −1/z takes the arc A′E′ to the arc D′E, and the relation F(−1/z) = z^k F(z) implies that

dF(−1/z)/F(−1/z) = k dz/z + dF/F;

hence,

(1/2πi) ∫_{D′E} dF/F + (1/2πi) ∫_{E′A′} dF/F
   = (1/2πi) ∫_{A′E′} dF(−1/z)/F(−1/z) + (1/2πi) ∫_{E′A′} dF/F
   = (1/2πi) ∫_{A′E′} k dz/z + (1/2πi) ( ∫_{A′E′} dF/F + ∫_{E′A′} dF/F )
   = (1/2πi) ∫_{A′E′} k dz/z → k/12 as r → 0,

since the length of the arc from ρ to i is 2π/12. If we substitute these expressions into (4.5), we obtain (4.4).
If F has zeros other than ρ, ρ², or i on the boundary of D_1, then the same argument goes through if we deform L_r in such a way that its interior contains only one from each pair of zeros lying in the same orbit. For example, if we have a pair of zeros λ and λ + 1 on the lines x = ±1/2 and another pair p and −1/p on the circle |z| = 1, then we draw the contour shown in Figure 3, where the small circular arcs have the same radius and are centered at the points indicated. □
FIGURE 3
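As a consistency check on (4.4) (a standard illustration, not part of the original argument), one can evaluate the formula on the discriminant form Δ of weight 12 and the Eisenstein series E_4 of weight 4, using the classical facts that Δ has a simple zero only at infinity and E_4 a simple zero only on the orbit of ρ:

```latex
% Valence formula (4.4) on two classical forms (standard facts assumed).
% Delta: weight k = 12, v_\infty(\Delta) = 1, no other zeros:
\[
  v_\infty(\Delta) + \sum_{a} e(p_a)^{-1} v_{p_a}(\Delta) = 1 + 0 = \tfrac{12}{12}.
\]
% E_4: weight k = 4, simple zero only on the orbit of \rho, e(\rho) = 3:
\[
  v_\infty(E_4) + e(\rho)^{-1} v_\rho(E_4) = 0 + \tfrac{1}{3}\cdot 1 = \tfrac{4}{12}.
\]
```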
(4.6)

This means that F must be identically zero if sufficiently many of its initial Fourier coefficients are zero. The analogous fact holds for all modular forms for congruence subgroups of the Siegel modular group, and it is this fact that implies finite dimensionality of the space of such forms.
THEOREM 4.3. Suppose that K is a congruence subgroup of the modular group Γ^n, K ⊃ Γ^n(q_1), χ is a finite character of K of order m, and w is an integer or half-integer (w = k or w = k/2), where K ⊂ Γ_0^n(4) if w = k/2. Then a modular form, where q = q_1 m or 4q_1 m for w = k or k/2, respectively, is identically zero if its coefficients satisfy the condition
F′ = F|Φ, where Φ is the Siegel operator (3.50), is contained in 𝔐_k(Γ^{n−1}, 1). If σ(R′) ≤ (k/2π)σ_{n−1}, then

σ( (R′ 0; 0 0) ) = σ(R′) ≤ (k/2π)σ_n,

since c_{n−1} ≤ c_n by Proposition 1.13, and hence σ_{n−1} ≤ σ_n. Then, using the assumption on F, we have

f′(R′) = f( (R′ 0; 0 0) ) = 0.
By the induction assumption, F′ is identically zero. This means that F is a cusp form. Then, by Theorem 3.13, F has a Fourier expansion of the form

F = Σ_{R ∈ A_n^+} f(R) e{RZ},

which, by Theorem 1.21 and the second inequality in Lemma 3.15 for the function

Σ_{R ∈ A_n^+} |f(R)| exp(−π σ(RY)) ≥ |F(Z)|,

implies the following bound for all Z = X + iY in the fundamental domain D_n of Γ^n:

|F(Z)| ≤ c_1 exp(−c_2 (det Y)^{1/n}),
where c_1 and c_2 are positive constants. From this bound it follows that the function G(Z) = (det Y)^{k/2} |F(Z)| approaches zero as det Y → +∞ and Z = X + iY remains in D_n. On the other hand, from the definition of D_n and Theorem 1.21 it follows that any subset of D_n of the form {X + iY ∈ D_n; det Y ≤ δ} with δ > 0 is closed and bounded, and hence compact. Thus, the function G(Z) attains its maximum μ on D_n at some finite point Z_0 = X_0 + iY_0 ∈ D_n. Next, from Lemma 2.8 of Chapter 1 and the definition of a modular form it follows that for any M = (A B; C D) ∈ Γ^n the function G satisfies the relation

G(M⟨Z⟩) = (|det(CZ + D)|^{−2} det Y)^{k/2} |det(CZ + D)^k F(Z)| = (det Y)^{k/2} |F(Z)| = G(Z)
and so is constant on every Γ^n-orbit in H_n. According to Theorem 1.20, the set D_n intersects each Γ^n-orbit in H_n; thus, the maximum μ of G on D_n is also its maximum on H_n: G(Z) ≤ G(Z_0) = μ for all Z ∈ H_n. We introduce the complex parameter t = u + iv, set Z_t = Z_0 + tE_n, and consider the function
g(t) = F(Z_t) exp(−iλσ(Z_t)),

where λ is determined from the condition λn/π = 1 + [(k/2π)σ_n] (here [⋯] denotes the greatest integer function). If we substitute the Fourier expansion for F, we obtain the expansion
g(t) = Σ_{R ∈ A_n} f(R) e{RZ_t} exp(−iλσ(Z_t)) = Σ_{R ∈ A_n} f(R) e{RZ_0} exp(−iλσ(Z_0)) q^{σ(R)−λn/π} = g(q),

where q = exp(πit). The assumptions of the theorem imply that f(R) = 0 if σ(R) − λn/π < 0. Hence, the series for g(q) does not contain negative powers of q. If ε > 0 is small enough so that Z_t ∈ H_n for v ≥ −ε, then the series for g(t) converges absolutely and uniformly in the half-plane v ≥ −ε. Then g(q) is a holomorphic function in the disc |q| ≤ exp(πε) = ρ. Since ρ > 1, it follows from the maximum principle that
there exists a point q_0 = exp(πit_0) at which |g| attains its maximum on the disc, since X_0 + iY_0 ∈ D_n and, by Theorem 1.21, σ(Y_0^{−1}) ≤ σ_n. This implies that φ(v_0) = φ(−ε) < 1 if ε is sufficiently small. This, along with (4.8), proves that μ = 0. Consequently, the function F is identically zero, and the theorem is proved in the case K = Γ^n and χ = 1.
Finally, we consider the general case. Suppose that F ∈ 𝔐_k(K, χ), M ∈ Γ^n, and F|_kM is defined by (3.14). From (3.15) and (3.16) it follows that any function of the form (F|_kM)^m depends only on the left coset KM of the group K. Thus, the function

(4.9)   G(Z) = ∏_{a=1}^{μ} (F|_kM_a)^m,

where M_1, ..., M_μ is a complete set of representatives of K\Γ^n, does not depend on the choice of representatives. Since for any M ∈ Γ^n the set M_1M, ..., M_μM is also a set of representatives of K\Γ^n, if we again use (3.15) we find that G|_{kmμ}M = G for M ∈ Γ^n (compare with (3.73)).
On the other hand, G is obviously a holomorphic function on H_n, and, by Theorem 3.5, it is bounded on H_n(ε) for any ε > 0. Thus, G ∈ 𝔐_{kmμ}(Γ^n, 1). Letting f_a(R) denote the Fourier coefficients (3.33) of the function F|_kM_a, we easily see that the Fourier coefficients g(R) of G are given by the formula

g(R) = Σ_{R = Σ_{a,β} R_{aβ}} ∏_{a=1}^{μ} ∏_{β=1}^{m} f_a(R_{aβ}).

If σ(R) ≤ (k/2π)σ_n mμq, then for every decomposition occurring in this sum we have

σ( Σ_{β=1}^{m} R_{aβ} ) = Σ_{β=1}^{m} σ(R_{aβ}) ≤ (k/2π)σ_n mμq   (a = 1, ..., μ),

and this implies that for any a there exists a β such that

σ(R_{aβ}) ≤ (k/2π)σ_n μq.
To be definite, we suppose that F|_kM_1 = F, and hence f_1 = f. We then see that in the expression for g(R) every term contains a factor of the form f_1(R_{1β}) = f(R_{1β}), which is equal to zero, because F satisfies (4.7). Consequently, g(R) = 0; and, by what was proved above, G, and so also F, are identically zero. □
PROOF. The case w > 0. If we use the bound (3.13) for the number of matrices R ∈ A_n with σ(R) ≤ N, we see that the number of different R ∈ A_n satisfying (4.7) is at most d = d_n(wμ(K)q), where d_n depends only on n. Then any d + 1 functions in 𝔐_w(K, χ) are linearly dependent, since one can always find complex numbers, not all zero, such that the corresponding linear combination of the functions satisfies (4.7), and so is equal to zero.
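The linear-dependence argument can be illustrated numerically (an illustration only; the vectors below are arbitrary stand-ins for tuples of initial Fourier coefficients, not coefficients of actual modular forms): any d + 1 vectors of d coefficients admit a nontrivial vanishing combination.

```python
import numpy as np

# d + 1 "coefficient vectors" of length d are always linearly dependent,
# so some nontrivial combination kills all d initial coefficients at once.
d = 3
rng = np.random.default_rng(0)
coeff_vectors = rng.integers(-5, 5, size=(d + 1, d)).astype(float)

# A d x (d+1) matrix has a nontrivial null space; the singular vector
# for the (d+1)-st singular direction of the transpose spans it.
_, _, vh = np.linalg.svd(coeff_vectors.T)
c = vh[-1]  # unit vector with  c @ coeff_vectors = 0

assert np.allclose(c @ coeff_vectors, 0.0, atol=1e-9)
```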
The case w = k = 0. In this case Theorem 4.3 shows that F = 0 if f(0) = 0. Since obviously 1 ∈ 𝔐_0 = 𝔐_0(K, 1), it follows that for any form F ∈ 𝔐_0 we have F − f(0)·1 ∈ 𝔐_0. Hence, F − f(0) = 0 and F = f(0). If F ∈ 𝔐_0(K, χ) and χ(M) ≠ 1 for some M ∈ K, then from the functional equation for the function F and the matrix M and the uniqueness of Fourier expansions it follows that f(0) = 0. Hence, F = 0.
The case w = k < 0. We first prove the theorem for K = Γ^n and χ = 1 by induction on n. If n = 1, the result follows from Proposition 4.1, since the left side of (4.4) is nonnegative. Now suppose that n > 1, and we have already shown that 𝔐_k^{n−1} = {0}. If F ∈ 𝔐_k^n, then F|Φ ∈ 𝔐_k^{n−1} by Proposition 3.12, and hence F|Φ = 0 and F is a cusp form. Let G be an arbitrary nonzero modular form of positive integer weight l and character χ_1 for some congruence subgroup K_1 of Γ^n (for example, a theta-series). Then obviously F^l G^{|k|} ∈ 𝔐_0(K′, χ_1^{|k|}), and the constant coefficient in the Fourier expansion of this function is equal to zero. By what was proved before, the function is zero, and hence F = 0.

The general case when w = k < 0 reduces to the case K = Γ^n and χ = 1 by the same method as in the proof of Theorem 4.3. That is, if F ∈ 𝔐_k(K, χ), then the function G in (4.9) belongs to 𝔐_{kmμ}(Γ^n, 1). Hence G = 0, and then F = 0.

If w = k/2 < 0, then, by Lemma 3.7, the function F² is a modular form of weight 2w = k < 0, and the proof reduces to the previous case. □
[Hint: Use Theorem 4.3 and Problem 1.16; verify that the space 𝔐_k(Γ^2, 1) has no nonzero cusp forms for k = 1, 2, 3, 4; and then use Problem 4.2.]
1. The scalar product. Given any two functions F and G on Hn, we consider the
differential form
LEMMA 5.1. For an arbitrary matrix M = (A B; C D) in S_Q^n one has the transformation formulas

(5.4)   d*(M⟨Z⟩) = d*Z;

i.e., the differential form (5.1) is invariant under the group K. This implies that the integral

(5.6)   ∫_{D_K} ω_w(F, G)(Z),

where D_K is a fundamental domain for K in H_n, does not depend on the choice of D_K, provided that the integral is absolutely convergent.
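For orientation, in the genus-1 case this construction specializes (up to normalization, which may differ from the form (5.1) used in the text) to the classical Petersson scalar product:

```latex
% Genus n = 1, weight w = k (classical Petersson product; the
% normalizing factor mu(K')^{-1} used later in the text is omitted):
\[
  \omega_k(F,G)(z) = F(z)\,\overline{G(z)}\,y^{k}\,\frac{dx\,dy}{y^{2}},
  \qquad z = x + iy \in H_1,
\]
\[
  (F,G) = \int_{D_K} F(z)\,\overline{G(z)}\,y^{k-2}\,dx\,dy .
\]
```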
LEMMA 5.2. If at least one of the two forms F, G ∈ 𝔐_w(K, χ) is a cusp form, then the integral (5.6) is absolutely convergent.
PROOF. Since D_K is a finite union of sets of the form M(D_n), where M ∈ Γ^n and D_n is the fundamental domain for Γ^n described in Theorem 1.20, to prove the lemma it suffices to verify absolute convergence of an integral of the form

∫_{M(D_n)} ω_w(F, G)(Z) = ∫_{D_n} ω_w(F|_wM, G|_wM)(Z),

where M has the same meaning as in Lemma 5.1. To be definite, suppose that F is a cusp form. According to Theorems 3.13 and 3.5, we have the Fourier expansions, where

t(R) = Σ_{R_1+R_2=R} |h(R_1) g_1(R_2)|

is a finite sum and the last series converges on H_n. Then, by Theorem 1.21 and the first inequality in Lemma 3.15, we obtain the inequality
∫_{D_n} exp(−δσ(Y)) (det Y)^w d*Z = ∫_{D_n} exp(−δσ(Y)) (det Y)^{w−n−1} ∏_{1≤α≤β≤n} dx_{αβ} dy_{αβ},
where δ > 0. If (x_{αβ}) + i(y_{αβ}) ∈ D_n, then the definition of D_n and the inequalities (1.12) imply that |x_{αβ}| ≤ 1/2, |y_{αβ}| ≤ y_{αα}/2 (α ≠ β). In addition, at the beginning of the proof of Theorem 1.21 it was shown that in this case y_{αα} ≥ √3/2. Thus, applying the inequality (1.13) if w < n + 1 and the inequality (1.8) of Appendix 1 if w ≥ n + 1, we see that the last integral is majorized by a convergent integral over the region |x_{αβ}| ≤ 1/2, y_{αα} ≥ √3/2, |y_{αβ}| ≤ y_{αα}/2 (1 ≤ α ≤ β ≤ n).
where K′ = K ∪ (−E_{2n})K, μ(K′) = [Γ^n : K′], and D_{K′} is a fundamental domain for K′ in H_n. This scalar product then has the following properties:
(1) (F, G) converges absolutely and does not depend on the choice of fundamental domain D_{K′};
(2) (F, G) does not depend on the choice of group K such that F, G ∈ 𝔐_w(K, χ);
(3) (F, G) is a positive definite nondegenerate hermitian scalar product;
(4) if M ∈ S_Q, then

(5.8)

where the functions F|_wM and G|_wM are regarded as elements of 𝔐_w(K_M, χ′) (see Proposition 3.8 and Theorem 3.13(3)).
PROOF. Property (1) has already been proved, and (3) follows immediately from the definitions.
We prove (2). If F, G ∈ 𝔐_w(K_1, χ_1), then, replacing K_1 by K_1 ∩ K, we may assume that K_1 ⊂ K. Let

K′ = ⋃_β K_1′ N_β,

where K_1′ = K_1 ∪ (−E_{2n})K_1, be a partition into left cosets. Then

Γ^n = ⋃_{α,β} K_1′ N_β M_α

is also a partition into disjoint left cosets. By Theorem 1.22, we can take D_{K_1′} = ⋃_{α,β} N_β M_α(D_n); hence,

(1/[Γ^n : K_1′]) ∫_{D_{K_1′}} ω_w(F, G)(Z) = (1/[Γ^n : K_1′]) Σ_{α,β} ∫_{N_β M_α(D_n)} ω_w(F, G)(Z) = (1/[Γ^n : K′]) Σ_α ∫_{M_α(D_n)} ω_w(F, G)(Z),

where for the second equality we used the invariance of the differential form ω_w under the group K′. This proves property (2).
We now prove (5.8). Since K and K_M are both congruence subgroups of Γ^n, their intersection K_{(M)} = K ∩ K_M is also a congruence subgroup, and so has finite index in both of them. Set D = D_{K_{(M)}}. It is easy to see that the set M(D) is a fundamental domain for the group M K_{(M)} M^{−1} = M K M^{−1} ∩ K = K_{(M^{−1})}. Thus, again using property (2), we can rewrite the last expression accordingly, and it remains for us to verify that μ(K′_{(M)}) = μ(K′_{(M^{−1})}). Since this reduces, with Γ = Γ^n, to the corresponding statement for Γ, we can limit ourselves to the case K = Γ. For future reference we shall prove a more general fact. □
LEMMA 5.4. Let G be a congruence subgroup of Γ^n. Then for every matrix M ∈ S_Q^n the group G_{(M)} = G ∩ M^{−1}GM is a congruence subgroup of Γ^n, and one has

PROOF. The first part follows from the relation G_{(M)} = G ∩ G_M, where G_M is defined as in (3.25). Now let D be a fundamental domain for G_{(M)} in H_n. Since G_{(M^{−1})} = M G_{(M)} M^{−1}, it follows that M(D) is a fundamental domain for G_{(M^{−1})}. Then from Proposition 1.23(2) we have the relations
2. The orthogonal complement. We now define the subspace 𝔈_w(K, χ) of the space 𝔐_w(K, χ) of modular forms of integer or half-integer weight w (w = k or w = k/2) and character χ for the congruence subgroup K of Γ^n, where K ⊂ Γ_0^n(4) if w = k/2. This subspace is the set of all forms that are orthogonal to the subspace of cusp forms with respect to the scalar product (5.7):
PROPOSITION 5.5. The space of all modular forms splits into the direct sum

(5.11)   𝔐_w(K, χ) = 𝔈_w(K, χ) ⊕ 𝔑_w(K, χ)

of subspaces orthogonal with respect to the scalar product (5.7). In addition, for any matrix M ∈ S_Q the map (see Proposition 3.8)

PROOF. The decomposition (5.11) follows from Theorem 5.3(3) and standard linear algebra. Next, since (F|_wM)|_wM^{−1} = F by (3.15) and (3.22), it follows from Proposition 3.8 that the map |_wM is an isomorphic imbedding. The remaining claims in the proposition follow from Theorem 3.13(3) and (5.8). □
From (5.11) it follows that any modular form F ∈ 𝔐_w(K, χ) can be uniquely represented in the form

F = F_1 + F_2, where F_1 ∈ 𝔈_w(K, χ), F_2 ∈ 𝔑_w(K, χ).

Equating Fourier coefficients, we obtain

(5.12)   f(R) = f_1(R) + f_2(R)   (R ∈ A_n),

where f(R), f_1(R), and f_2(R) are the Fourier coefficients of the functions F, F_1, and F_2, respectively. The Fourier coefficients f_1(R) of F_1 ∈ 𝔈_w(K, χ) can sometimes be computed in explicit form. On the other hand, the Fourier coefficients f_2(R) of the cusp form F_2 are relatively small, by (3.70). Starting from these considerations, in many cases one can prove that as det R → +∞ the decomposition (5.12) gives an asymptotic formula for the function f(R) with principal term f_1(R).
CHAPTER 3
Hecke Rings
One of the most fruitful ideas in the theory of modular forms, the notion of a Hecke operator, is based on a procedure for taking the average of a function over suitable double cosets of subgroups of the modular group. Chapter 4 is devoted to the theory and applications of Hecke operators. The properties of Hecke operators are to a large extent a reflection of the connections that exist between the corresponding double cosets. The present chapter examines these connections.
(1.2)

However, it often turns out that there are only finitely many distinct functions among the functions (1.2). In that case, if we sum these functions, we might again obtain a function in 𝔐. A typical situation of this sort occurs if the double coset ΓgΓ contains only finitely many left cosets modulo Γ:

(1.3)   ΓgΓ = Γg_1 ∪ ⋯ ∪ Γg_μ.

Namely, each product gγ (γ ∈ Γ) is contained in the double coset ΓgΓ; if gγ lies in a fixed left coset Γg′ ⊂ ΓgΓ, then gγ = γ′g′, where γ′ ∈ Γ, and, by Lemma 4.1(3) of Chapter 1 and the equality (1.1) above, we find that
We consider the following average of the function F over the double coset ΓgΓ (or, if we want, the average of the function F′ over the group Γ):

LEMMA 1.1. Suppose that F ∈ 𝔐 = 𝔐_φ(Γ), and the double coset ΓgΓ, where g ∈ S, satisfies the condition (1.3). Then the function F|(g) does not depend on the choice of representatives g_1, ..., g_μ of Γ\ΓgΓ, and it is an automorphic form of weight φ for Γ.
PROOF. If g_i′ = γ_i g_i (i = 1, ..., μ), where γ_i ∈ Γ, form another set of representatives, then, by Lemma 4.1(3) of Chapter 1 and the definition of an automorphic form, we have

Σ_i F|_φ γ_i g_i = Σ_i (F|_φ γ_i)|_φ g_i = Σ_i F|_φ g_i.

Let γ ∈ Γ. Since the set g_1γ, ..., g_μγ is also obviously a set of representatives of Γ\ΓgΓ, it follows that
The operators

(1.5)

are called Hecke operators on the space 𝔐_φ(Γ).
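In the classical setting Γ = SL_2(Z), with F of weight k and g = diag(1, p) for a prime p, the operator (1.5) becomes (up to a normalizing power of p, which varies between sources) the familiar Hecke operator T_p:

```latex
% Classical Hecke operator on weight-k forms for SL_2(Z)
% (one common normalization; conventions differ by powers of p):
\[
  (F \mid T_p)(z) \;=\; p^{\,k-1} F(pz) \;+\; \frac{1}{p}\sum_{j=0}^{p-1}
      F\!\left(\frac{z+j}{p}\right),
\]
% corresponding to the left-coset representatives
% diag(p,1) and (1, j; 0, p), 0 <= j <= p-1, of \Gamma \backslash \Gamma g \Gamma.
```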
2. Hecke rings. In order to study the connections between the Hecke operators
corresponding to different double cosets, we first examine the connections between the
double cosets themselves, where we suppose that the double cosets satisfy (1.3).
LEMMA 1.2. Suppose that G is an arbitrary group, Γ is a subgroup of G, g ∈ G, and

(1.6)   Γ = ⋃_{γ_i ∈ Γ_{(g)}\Γ} Γ_{(g)}γ_i, where Γ_{(g)} = Γ ∩ g^{−1}Γg,

is the partition of Γ into a disjoint union of left cosets of the subgroup Γ_{(g)}. Then

(1.7)   ΓgΓ = ⋃_i Γgγ_i   (disjoint union).

PROOF. The right-hand side of (1.7) is clearly contained in the left-hand side. Conversely, suppose that g′ = γgδ, where γ, δ ∈ Γ. By (1.6), the element δ lies in one of the left cosets Γ_{(g)}γ_i, i.e., δ = αγ_i, where α ∈ Γ_{(g)}, so that gαg^{−1} ∈ Γ. Then g′ = γ(gαg^{−1})gγ_i ∈ Γgγ_i. If Γgγ_i and Γgγ_j intersect, then for some γ, δ ∈ Γ we have the equality γgγ_i = δgγ_j, and hence g^{−1}δ^{−1}γg·γ_i = γ_j and Γ_{(g)}γ_i = Γ_{(g)}γ_j. □
§I. ABSTRACT HECKE RINGS 95
Thus, if g is an invertible element, the condition (1.3) holds if and only if Γ ∩ g^{−1}Γg has finite index in Γ.
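A small numerical illustration (not from the text; it assumes the standard facts that for Γ = SL_2(Z) and g = diag(1, p) one has Γ_{(g)} = Γ_0(p), the matrices with lower-left entry divisible by p, and that the index [Γ : Γ_0(p)] can be computed inside the finite group SL_2(F_p) because reduction mod p is surjective):

```python
# Count [SL_2(F_p) : B], where B is the image of Gamma_0(p);
# this index equals the number of left cosets in Gamma g Gamma,
# namely p + 1.
p = 3
G = [((a, b), (c, d))
     for a in range(p) for b in range(p)
     for c in range(p) for d in range(p)
     if (a * d - b * c) % p == 1]
B = [m for m in G if m[1][0] == 0]   # lower-left entry = 0 mod p

index = len(G) // len(B)
print(index)  # 4, i.e. p + 1
```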
Two subgroups Γ_1 and Γ_2 of a group G are said to be commensurable if their intersection Γ_1 ∩ Γ_2 has finite index both in Γ_1 and in Γ_2; in this case we write Γ_1 ∼ Γ_2.
LEMMA 1.3. The commensurability relation is transitive on the set of subgroups of a group G.

PROOF. Suppose that Γ_1 ∼ Γ_2 and Γ_2 ∼ Γ_3. If we take left cosets modulo Γ_2 ∩ Γ_3, the imbedding Γ_1 ∩ Γ_2 ⊂ Γ_2 gives the imbedding

(Γ_1 ∩ Γ_2 ∩ Γ_3)\(Γ_1 ∩ Γ_2) ⊂ (Γ_2 ∩ Γ_3)\Γ_2,

so that [Γ_1 ∩ Γ_2 : Γ_1 ∩ Γ_2 ∩ Γ_3] ≤ [Γ_2 : Γ_2 ∩ Γ_3] < ∞, and

[Γ_1 : Γ_1 ∩ Γ_3] ≤ [Γ_1 : Γ_1 ∩ Γ_2 ∩ Γ_3] = [Γ_1 : Γ_1 ∩ Γ_2][Γ_1 ∩ Γ_2 : Γ_1 ∩ Γ_2 ∩ Γ_3] < ∞.

Similarly, [Γ_2 ∩ Γ_3 : Γ_1 ∩ Γ_2 ∩ Γ_3] < ∞ and [Γ_3 : Γ_1 ∩ Γ_3] < ∞. Thus, Γ_1 ∼ Γ_3. □
LEMMA 1.4. Let G be a group, and let Γ be a subgroup. Then the set

Γ̃ = {g ∈ G; g^{−1}Γg ∼ Γ}

is a group.

PROOF. If Γ_1 and Γ_2 are two commensurable subgroups of G and g ∈ G, then clearly the subgroups g^{−1}Γ_1g and g^{−1}Γ_2g are also commensurable. Thus, if g ∈ Γ̃, then Γ ∼ g^{−1}Γg, and hence gΓg^{−1} ∼ g(g^{−1}Γg)g^{−1} = Γ, and g^{−1} ∈ Γ̃. Next, if g_1, g_2 ∈ Γ̃, then g_1^{−1}Γg_1 ∼ Γ, and hence g_2^{−1}g_1^{−1}Γg_1g_2 ∼ g_2^{−1}Γg_2 ∼ Γ; then, by Lemma 1.3, g_1g_2 ∈ Γ̃. □

The group Γ̃ is called the commensurator of the subgroup Γ in G, and its elements are called Γ-rational elements of G.
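A standard example to keep in mind (stated from general theory, not from this page of the text): for Γ = SL_2(Z) inside G = GL_2^+(Q), every element of G is Γ-rational, so the commensurator is the whole group:

```latex
\[
  \widetilde{\Gamma} \;=\; GL_2^{+}(\mathbb{Q})
  \qquad\text{for } \Gamma = SL_2(\mathbb{Z});
\]
% e.g. for g = diag(1, p), p prime, one has
% g^{-1}\Gamma g \cap \Gamma = \Gamma_0(p), of index p + 1 in \Gamma.
```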
Let Γ be a subgroup of G, and let S be a multiplicatively closed subset of G. We call (Γ, S) a Hecke pair if

(1.9)   Γ ⊂ S ⊂ Γ̃,

where Γ̃ is the commensurator of Γ in G. To each Hecke pair (Γ, S) we associate the free Z-module L = L(Γ, S) whose generators over Z are the symbols (Γg) (g ∈ S), one for each left coset Γg. The elements of S act as linear transformations of the module L according to the rule

S ∋ g: t = Σ_i a_i(Γg_i) → tg = Σ_i a_i(Γg_i g).
does not depend on the choice of left coset representatives, and it also belongs to the module D. Namely, t·t′ obviously does not depend on the choice of representatives g_i. Let γ_j h_j, where γ_j ∈ Γ, be different representatives of the left cosets Γh_j. Since, by assumption, tγ_j = t for each j, it follows that

and the element (1.10) does not depend on the choice of representatives h_j. Finally, if γ ∈ Γ, then (t·t′)γ = t(t′γ) = t·t′, so that t·t′ ∈ D. Since the multiplication map (t, t′) → t·t′ on elements of D is obviously bilinear and associative, it follows that D becomes an associative ring, called the Hecke ring of the pair (Γ, S).
If (Γ, S) is a Hecke pair, then, by Lemma 1.2, the double coset ΓgΓ of any g ∈ S is a finite union of disjoint left cosets of Γ:

ΓgΓ = ⋃_{i=1}^{μ} Γg_i.

If γ ∈ Γ, then the set g_1γ, ..., g_μγ obviously is also a full set of representatives of the distinct left cosets Γ\ΓgΓ. Thus, the elements
be the decomposition of the double cosets into distinct left cosets. Then

where h runs through a set of representatives of the Γ-double cosets contained in the set ΓgΓg′Γ, and for each h the coefficient c(g, g′; h) is equal to the number of pairs (g_i, g_j′) such that g_i g_j′ ∈ Γh. The coefficients c(g, g′; h) can also be expressed in the form

c(g, g′; h) = ν(g, g′; h) μ(g′) μ(h)^{−1},

where ν(g, g′; h) is the number of elements g_i such that g_i g′ ∈ ΓhΓ, and μ(g′), μ(h) are the indices (1.8).
PROOF. Let D′ denote the submodule of D consisting of all finite linear combinations of elements of the form (1.11) with coefficients in Z. Any nonzero element t ∈ D can be written in the form

(1.12)   t = Σ_{i=1}^{μ} a_i(Γg_i),

where all of the coefficients a_i are nonzero and all of the left cosets Γg_i are pairwise distinct. We then call μ = μ(t) the length of t, and we prove that t ∈ D′ by induction on μ. If μ = 1, then t = a(Γg). Since t ∈ D, it follows that tγ = t for all γ ∈ Γ, i.e., a(Γgγ) = a(Γg) for γ ∈ Γ, and hence Γg = ΓgΓ and t = a(g) ∈ D′. Now suppose that μ > 1, and we have already verified that all elements of D of length less than μ are contained in D′. Let t be an element of D of the form (1.12) that has length μ. Since tγ = t for γ ∈ Γ, it follows that, if the left coset (Γg_i) appears in (1.12), then all of the left cosets (Γg_iγ) for γ ∈ Γ appear in (1.12) with the same coefficient. By Lemma 1.2, every left coset in the double coset Γg_iΓ can be written in the form Γg_iγ for some γ ∈ Γ. Thus, all left cosets in the decomposition of the double coset Γg_iΓ appear in (1.12) with coefficient a_i. Hence, the length of the element t − a_i(g_i) is less than μ. By the induction assumption, t − a_i(g_i) ∈ D′, and so t ∈ D′. The first part of the lemma is proved.

By definition, we have (g) = Σ_i (Γg_i), (g′) = Σ_j (Γg_j′), and

(g)(g′) = Σ_{i,j} (Γg_i g_j′).

Since all of the products g_i g_j′ obviously lie in the set ΓgΓg′Γ, it follows from what was just proved that the product (g)(g′) can also be written in the form

with certain coefficients c(g, g′; h). If we equate coefficients of (Γh) in these two left coset decompositions of the product (g)(g′), we find that c(g, g′; h) is equal to the number of pairs (g_i, g_j′) such that Γg_i g_j′ = Γh, i.e., g_i g_j′ ∈ Γh. From what we proved before it follows that c(g, g′; h) depends only on the double cosets of g, g′, and h. If we sum the numbers c(g, g′; h) over all left cosets in ΓhΓ, we find that μ(h)c(g, g′; h) is equal to the number of pairs (g_i, g_j′) such that g_i g_j′ ∈ ΓhΓ. Taking the set of g′γ_j′ with γ_j′ ∈ Γ (see Lemma 1.2) as our set of representatives g_j′, we see that the last number is equal to the product of μ(g′) with the number of elements g_i for which g_i g′ ∈ ΓhΓ. □
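The multiplication law just proved can be checked by hand in the classical case (a hedged illustration assuming standard genus-1 facts: Γ = SL_2(Z), g = diag(1, p), the usual left-coset representatives of ΓgΓ, and the indices μ(h) quoted in the code). Sorting the pairwise products by their double coset, identified by elementary divisors as in Lemma 2.2 below, recovers (g)(g) = (diag(1, p²)) + (p + 1)(diag(p, p)).

```python
from math import gcd

p = 5
# Left-coset representatives of Gamma diag(1,p) Gamma for SL_2(Z):
reps = [((1, j), (0, p)) for j in range(p)] + [((p, 0), (0, 1))]

def mul(a, b):
    (a11, a12), (a21, a22) = a
    (b11, b12), (b21, b22) = b
    return ((a11*b11 + a12*b21, a11*b12 + a12*b22),
            (a21*b11 + a22*b21, a21*b12 + a22*b22))

def elementary_divisors(m):
    # Smith normal form of a nonsingular 2x2 integer matrix:
    # d1 = gcd of the entries, d1 * d2 = |det|.
    (a, b), (c, d) = m
    d1 = gcd(gcd(abs(a), abs(b)), gcd(abs(c), abs(d)))
    d2 = abs(a * d - b * c) // d1
    return (d1, d2)

counts = {}
for x in reps:
    for y in reps:
        ed = elementary_divisors(mul(x, y))
        counts[ed] = counts.get(ed, 0) + 1

# Each double coset Gamma h Gamma contains mu(h) left cosets, so
# c(g, g; h) = (number of products landing in Gamma h Gamma) / mu(h):
mu = {(1, p * p): p * p + p, (p, p): 1}
coeffs = {ed: counts[ed] // mu[ed] for ed in counts}
print(coeffs)  # {(1, 25): 1, (5, 5): 6}, i.e. (g)(g) = (1,p^2) + (p+1)(p,p)
```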
(1.16)   ḡ γ̄ ḡ^{−1} = δ(gγg^{−1}) ρ_g(γ)   (γ ∈ Γ_{(g)}),

where γ̄ = δ(γ) for γ ∈ Γ and ḡ ∈ Ḡ is any P-preimage of g. Since Ker P is contained in the center of Ḡ, it is clear that ρ_g(γ) does not depend on the choice of ḡ; moreover, ρ_g(γ) belongs to the center of Ḡ.
LEMMA 1.6. The map ρ_g (g ∈ G) is a homomorphism, and for any γ_1, γ_2 ∈ Γ it satisfies the relation

(1.17)   ρ_g(γ) = ρ_{γ_1 g γ_2}(γ_2^{−1} γ γ_2)   (γ ∈ Γ_{(g)}).

PROOF. The proof of the first part of the lemma is similar to the proof of Lemma 3.4 of Chapter 2. As for (1.17), we first note that the right side is in fact well defined, because, by (1.6), γ′ = γ_2^{−1}γγ_2 ∈ Γ_{(g′)} for g′ = γ_1gγ_2 and γ ∈ Γ_{(g)}. We now use the definition of the map ρ_{g′} and choose ḡ′ ∈ Ḡ to be the product γ̄_1 ḡ γ̄_2. We then have

ḡ′ γ̄′ (ḡ′)^{−1} = δ(g′γ′(g′)^{−1}) ρ_{g′}(γ′) = γ̄_1 (δ(gγg^{−1}) ρ_{g′}(γ′)) γ̄_1^{−1},

since γ̄′ = γ̄_2^{−1} γ̄ γ̄_2, and the element ρ_{g′}(γ′) is in the center of the group Ḡ. On the other hand, using the definition of the map ρ_g, we have

ḡ′ γ̄′ (ḡ′)^{−1} = γ̄_1 (ḡ γ̄ ḡ^{−1}) γ̄_1^{−1} = γ̄_1 (δ(gγg^{−1}) ρ_g(γ)) γ̄_1^{−1},

which implies that ρ_g(γ) = ρ_{g′}(γ′). □
(1.19)   ḡ γ̄ ḡ^{−1} = δ(gγg^{−1})   (γ ∈ Ker ρ_g).

Since γ ∈ Γ_{(g)}, it follows that gγg^{−1} ∈ Γ, and hence the right side of (1.19) is contained in the group Γ̄ = δ(Γ). Using this and the fact that γ̄ ∈ Γ̄, we see that δ(Ker ρ) ⊂ Γ̄_{(ḡ)}. We now prove the reverse inclusion. Let γ̄ ∈ Γ̄_{(ḡ)}, i.e., γ̄ ∈ Γ̄ and γ̄ = ḡ^{−1}γ̄_1ḡ, where γ̄_1 ∈ Γ̄. Then γ ∈ Γ_{(g)}, since γ = g^{−1}γ_1g, and by (1.16) we have ḡγ̄ḡ^{−1} = δ(gγg^{−1})ρ(γ) = γ̄_1 ρ(γ). If we note that ḡγ̄ḡ^{−1} = γ̄_1, we find that ρ(γ) = 1, and so γ̄ ∈ δ(Ker ρ). This proves (1.18).
[Γ̄ : Γ̄_{(ḡ)}] = [Γ̄ : ḡ^{−1} Γ̄_{(ḡ^{−1})} ḡ] = [ḡ Γ̄ ḡ^{−1} : Γ̄_{(ḡ^{−1})}]

and from the previous argument. □
If (Γ, S) is a Hecke pair that satisfies the conditions in Lemma 1.7, we let

(1.21)   D̄(Γ, S) = D(Γ̄, S̄)

denote the Hecke ring of the pair (1.14), and we call this ring the Hecke ring obtained by lifting of the ring D(Γ, S).
In order to clarify how the Hecke ring D̄(Γ, S) differs from the original ring D(Γ, S), we look at the relation between the partition of the double coset Γ̄ḡΓ̄ into Γ̄-left cosets and the partition of the double coset ΓgΓ, where g = P(ḡ) ∈ S, into Γ-left cosets. Suppose we are given the partitions

(1.22)   Γ = ⋃_j Γ_{(g)}γ_j and Γ_{(g)} = ⋃_i (Ker ρ_g)β_i.

Since δ gives an imbedding of Γ in Ḡ, it follows from (1.22) and (1.18) that we have the partition

(1.23)   Γ̄ = ⋃_{i,j} Γ̄_{(ḡ)} β̄_i γ̄_j,

which, in conjunction with Lemma 1.2, gives us the corresponding partition of the double coset Γ̄ḡΓ̄:

(1.24)   Γ̄ḡΓ̄ = ⋃_{i,j} Γ̄ ḡ β̄_i γ̄_j.

On the other hand, from (1.22) and Lemma 1.2 we also have the partition

(1.25)   ΓgΓ = ⋃_j Γgγ_j,

which, when compared to (1.24), shows the similarities and differences between the partitions of the double cosets Γ̄ḡΓ̄ and ΓgΓ, and the role played by the lifting homomorphism ρ. Using (1.24) and (1.25), we obtain the following result.

LEMMA 1.8. The equality Ker ρ_g = Γ_{(g)} holds if and only if the map

P: Γ̄ḡΓ̄ → ΓgΓ, where g = P(ḡ),

is a one-to-one correspondence.
100 3. HECKE RINGS
3. The imbedding ε. We now examine the connection between the Hecke rings corresponding to different Hecke pairs for the same group G. Let (Γ, S) and (Γ_0, S_0) be two Hecke pairs. Suppose that the following conditions hold:

(1.26)   Γ_0 ⊂ Γ, S ⊂ ΓS_0, and Γ ∩ S_0·S_0^{−1} ⊂ Γ_0.

According to the second of these conditions, every left coset Γg, where g ∈ S, contains an element g_0 ∈ S_0. If we now set ε((Γg)) = (Γ_0g_0), then, by the third condition in (1.26), we see that (Γ_0g_0) ∈ L(Γ_0, S_0) does not depend on the choice of g_0. The first condition in (1.26) shows that the map ε takes distinct Γ-left cosets to distinct Γ_0-left cosets. Thus, if we extend ε by Z-linearity onto all of L(Γ, S), we obtain an imbedding of this module into L(Γ_0, S_0).
PROPOSITION 1.9. Suppose that the Hecke pairs (Γ, S) and (Γ_0, S_0) satisfy (1.26). Then the restriction of ε to the Hecke ring D(Γ, S) is a monomorphism from this ring to the Hecke ring D(Γ_0, S_0):

(1.27)   ε: D(Γ, S) → D(Γ_0, S_0).

If, in addition,

(1.28)   S_0 ⊂ S and μ_Γ(g) = μ_{Γ_0}(g) for all g ∈ S_0,

where μ denotes the index (1.8), then the map (1.27) is an isomorphism of rings.
PROOF. The first part follows directly from the definitions and the assumption that Γ_0 ⊂ Γ. To prove the second part, by Lemma 1.5 it suffices to verify that under our assumptions

(1.29)   ε((g)_Γ) = (g)_{Γ_0} for g ∈ S_0.

Let γ_1, ..., γ_μ, where μ = μ_{Γ_0}(g), be a set of left coset representatives of Γ_0 modulo Γ_0 ∩ g^{−1}Γ_0g. Then, by the definition of (g)_{Γ_0} and Lemma 1.2, we have

(g)_{Γ_0} = Σ_{i=1}^{μ} (Γ_0 gγ_i).

On the other hand, the elements gγ_1, ..., gγ_μ all lie in ΓgΓ and belong to different left Γ-cosets, since if we had gγ_i = δgγ_j with δ ∈ Γ it would follow that δ ∈ Γ ∩ S_0·S_0^{−1} ⊂ Γ_0, and hence i = j and δ = e. By (1.28) and Lemma 1.2, the number of these elements is equal to the number of left Γ-cosets in ΓgΓ. Hence,

(g)_Γ = Σ_{i=1}^{μ} (Γgγ_i).
Again suppose that the Hecke pairs (Γ, S) and (Γ_0, S_0) are related as in (1.26), and let γ be an arbitrary element of Γ. We further suppose that S_0 ⊂ S, and we consider the commutative diagram

(1.30)
   L(Γ, S) --ε--> L(Γ_0, S_0)
      |                 |
   L(Γ, S) --ε--> L(Γ_0, S_0),

where the vertical arrows denote the Z-linear homomorphisms that take (Γg) ∈ L(Γ, S) and (Γ_0g_0) ∈ L(Γ_0, S_0), respectively, to

(Γg)·γ = (Γgγ) and (Γ_0g_0)·γ = (Γ_0g_0′),

where g_0′ is any element of S_0 ∩ Γg_0γ. From the inclusions S_0 ⊂ S and Γ ⊂ S and the second property in (1.26) we find that g_0γ ∈ S, and this product can be written in the form γ′g_0′ with γ′ ∈ Γ and g_0′ ∈ S_0. From this, together with the third property in (1.26), it follows that g_0′ ∈ S_0 ∩ Γg_0γ, and the left coset Γ_0g_0′ does not depend on the choice of g_0′.
LEMMA 1.10. Suppose that the Hecke pairs (Γ, S) and (Γ_0, S_0) satisfy (1.26), and S_0 ⊂ S. Then the map ε in the diagram (1.30) is an isomorphism between L(Γ, S) and L(Γ_0, S_0), and the ε-image of the Hecke ring D(Γ, S) coincides with the set of t ∈ L(Γ_0, S_0) such that t·γ = t for all γ ∈ Γ.

PROOF. From the inclusion S_0 ⊂ S it follows that ε is an epimorphism. Since ε is also an imbedding, it must in fact be an isomorphism. The second part of the lemma follows from the commutativity of the diagram (1.30) and the definition of the Hecke ring D(Γ, S). □
4. The anti-isomorphism j.

PROPOSITION 1.11. Let (Γ, S) be a Hecke pair for the group G. Then the pair (Γ, S^{−1}), where S^{−1} = {g^{−1}; g ∈ S}, is also a Hecke pair, and the Z-linear map of Hecke rings

(1.31)   j: D(Γ, S) → D(Γ, S^{−1}),

which is defined on elements of the form (1.11) by setting

j((g)_Γ) = (g^{−1})_Γ   (g ∈ S),

is an anti-isomorphism of rings. In particular, if S is a group, then j is an anti-automorphism of the Hecke ring D(Γ, S).
We first prove a lemma.

LEMMA 1.12. Let Γ be a subgroup of G, and let Γ̃ be the commensurator of Γ in G. Then the map

g → λ(g) = μ(g)μ(g^{−1})^{−1},   g ∈ Γ̃,

where μ(h) = [Γ : Γ_{(h)}], is a homomorphism from Γ̃ to the multiplicative group of rational numbers.
PROOF. We let X denote the set of all subgroups of G that are commensurable with Γ. If Γ_1, Γ_2 ∈ X, then there exists Γ′ ∈ X having finite index both in Γ_1 and in Γ_2 (for example, by Lemma 1.3 we can take Γ′ = Γ_1 ∩ Γ_2). We then set

λ(Γ_1/Γ_2) = [Γ_1 : Γ′][Γ_2 : Γ′]^{−1}.

It is easy to see that this number does not depend on the choice of Γ′, and it satisfies the relations

(1.32)   λ(Γ_1/Γ_2)λ(Γ_2/Γ_3) = λ(Γ_1/Γ_3)   (Γ_i ∈ X),
         λ(g^{−1}Γ_1g / g^{−1}Γ_2g) = λ(Γ_1/Γ_2)   (Γ_i ∈ X, g ∈ G).
Let g ∈ Γ̃ and Γ′ ∈ X; we set λ′(g) = λ(Γ′/g^{−1}Γ′g). It is easily verified that λ′(g) does not depend on the choice of Γ′ ∈ X. Using (1.32), we find that, on the one hand,
PROOF OF PROPOSITION 1.11. Since the elements of the form (1.11) form a Z-basis of the module D(Γ, S), it suffices to prove that for any g_1, g_2 ∈ S one has

By Lemma 1.5 and the definition of j, this relation will be proved if we show that for any h ∈ Γg_1Γg_2Γ

Since the map g → λ(g) in Lemma 1.12 is trivial on the group Γ, Lemma 1.12 implies that

From this it follows that the equality (1.33) is equivalent to the relation

(1.34)

From the definition of ν(g_1, g_2; h) and the decomposition (1.7) it follows that ν(g_1, g_2; h) is equal to the number of elements in the set

It is easy to see that for γ ∈ Γ the double coset Γg_1γg_2Γ depends only on Γ_{(g_1)}γΓ_{(g_2^{−1})}, and for γ ∈ Γ and t ∈ Γ_{(g_2^{−1})} we have Γ_{(g_1)}γt = Γ_{(g_1)}γ if and only if t ∈ γ^{−1}Γ_{(g_1)}γ. Thus, ν(g_1, g_2; h) can be written in the form

(1.35)   ν(g_1, g_2; h) = Σ [Γ_{(g_2^{−1})} : Γ_{(g_2^{−1})} ∩ γ^{−1}Γ_{(g_1)}γ],

where the sum is over γ ∈ Γ_{(g_1)}\Γ/Γ_{(g_2^{−1})} with g_1γg_2 ∈ ΓhΓ. Similarly,

ν(g_2^{−1}, g_1^{−1}; h^{−1}) = Σ_γ [Γ_{(g_1)} : Γ_{(g_1)} ∩ γΓ_{(g_2^{−1})}γ^{−1}].
Using the notation λ(Γ_1/Γ_2) (see the proof of Lemma 1.12) and the properties (1.32) of this symbol, we obtain

ν(g_2^{−1}, g_1^{−1}; h^{−1}) = Σ_γ λ(Γ_{(g_1)} / Γ_{(g_1)} ∩ γΓ_{(g_2^{−1})}γ^{−1})
   = Σ_γ λ(Γ_{(g_1)}/Γ) λ(Γ/Γ_{(g_2^{−1})}) λ(Γ_{(g_2^{−1})} / Γ_{(g_2^{−1})} ∩ γ^{−1}Γ_{(g_1)}γ)
   = Σ_γ μ(g_1)^{−1} μ(g_2^{−1}) λ(Γ_{(g_2^{−1})} / Γ_{(g_2^{−1})} ∩ γ^{−1}Γ_{(g_1)}γ)
The next lemma shows that the anti-isomorphism j is compatible with the monomorphism ε that was defined in the previous subsection.

LEMMA 1.13. Let (Γ, S) and (Γ_0, S_0) be two Hecke pairs satisfying (1.26). Suppose that the Hecke pairs (Γ, S^{−1}) and (Γ_0, S_0^{−1}) also satisfy (1.26). Then the following diagram is commutative:

   L(Γ, S) --ε--> L(Γ_0, S_0)
      |j                |j
   L(Γ, S^{−1}) --ε--> L(Γ_0, S_0^{−1})

PROOF. From the definition of ε it follows that ε((g)_Γ) = Σ_i (g_i)_{Γ_0}, where the summation is over the double Γ_0-cosets in ΓgΓ ∩ S_0. The map g_i → g_i^{−1} gives a one-to-one correspondence between the double Γ_0-cosets in ΓgΓ ∩ S_0 and in Γg^{−1}Γ ∩ S_0^{−1}. We thus have

jε((g)_Γ) = Σ_i (g_i^{−1})_{Γ_0} = ε((g^{−1})_Γ) = εj((g)_Γ).

The lemma follows from this relation and Lemma 1.5. □
representatives in t. Furthermore, if γ ∈ Γ, then (F|t)|γ = F|tγ = F|t, so that the function F|t also belongs to 𝔐. Thus, to every t ∈ D is associated a linear operator

is a homomorphism from the Hecke ring D(Γ, S) to the endomorphism ring of the Z-module 𝔐_φ(Γ).

PROOF. The map is obviously linear. Hence, it suffices to prove that a product of elements of the Hecke ring is taken to the corresponding product of operators. If F ∈ 𝔐_φ(Γ) and t = Σ_i a_i(Γg_i), t′ = Σ_j b_j(Γh_j) ∈ D(Γ, S), then, by Lemma 4.1(3) of Chapter 1, we have

From Proposition 1.14 it follows that any algebraic relation between elements of the Hecke ring is also valid for the corresponding Hecke operators.
6. Hecke algebras over a commutative ring. Let (Γ, S) be a Hecke pair, and let A be an arbitrary commutative ring with unit. Just as in subsection 2 for the case of Z, we can define the free A-module L_A = L_A(Γ, S) whose generators over A are the symbols (Γg) (g ∈ S), one for each left coset Γg ⊂ S, and we can define the submodule D_A = D_A(Γ, S) consisting of all Γ-invariant elements. Again, the multiplication of elements in D_A does not depend on the choice of left coset representatives, and it makes D_A into an associative ring with unit, called the Hecke algebra of the pair (Γ, S) over A. All of the results of subsections 2–4 carry over without any change to the Hecke algebras D_A(Γ, S). The results of subsections 1 and 5 concerning representations of Hecke rings also carry over (along with their proofs), if we suppose that V is also an A-module and the actions on V of the group Γ and the ring A commute with one another. Clearly, if Z ⊂ A, then
PROBLEM 1.16. Let (Γ, S) be a Hecke pair. Show that the A-linear map N from
the Hecke algebra D_A(Γ, S) to A that is defined on elements of the form (1.11) by
setting N((g)) = μ_Γ(g) e_A, where μ_Γ(g) is the index (1.8) and e_A is the unit of the ring
A, is a ring homomorphism.
The main object of study in this section will be the Hecke ring

(2.3) H = H^n = D_Q(Λ^n, G^n) = D(Λ^n, G^n) ⊗_Z Q

of the Hecke pair (Λ^n, G^n) over the field Q and over subrings of Q. By Lemma 1.5, we
can take as a Q-basis of H the elements (g) = (g)_Λ of the form (1.11), one for each
double Λ-coset ΛgΛ of the group G. In order to visualize this set of double cosets, we
prove that there is a special type of diagonal representative in each double coset.
LEMMA 2.2. Every double coset ΛgΛ, where Λ = GL_n(Z) and g ∈ G^n = GL_n(Q),
has one and only one representative of the form

(2.4) ed(g) = diag(d_1, ..., d_n), where d_i > 0, d_i | d_{i+1}.
PROOF. Let d_1 be the greatest common divisor of the entries of the matrix g, i.e.,
d_1 is the positive rational number such that g = d_1 g_1, where g_1 is an integer matrix with
relatively prime entries. Using induction on the minimum δ = δ(g_1) of the greatest
common divisors of the columns of these matrices g_1, we prove that the double coset
Λg_1Λ of such a matrix contains a representative of the form ( δ * ; 0 g_2 ), where g_2 is an
106 3. HECKE RINGS
and this proves the claim in the case δ = 1. Now suppose that δ > 1, and the claim
has already been proved for all g_1 with δ(g_1) < δ. If we are given a matrix g_1 with
δ(g_1) = δ, by permuting the columns we may assume that the greatest common divisor
of the entries in the first column is equal to δ. Again using Lemma 3.8 of Chapter 1,
we multiply g_1 on the left by a suitable matrix in Λ so as to reduce it to the form
We shall call ed(g) = diag(d_1, ..., d_n) the matrix of elementary divisors of the
matrix g, and we shall call the numbers d_r = d_r(g) the elementary divisors of g. Using
an argument similar to the proof of uniqueness in the previous lemma, we see that the
product d_1 ··· d_r is equal to the greatest common divisor of the r × r-minors of g. In
particular,

(2.5) d_1(g) ··· d_n(g) = |det g|.
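The minor-gcd description of the elementary divisors is easy to test numerically. The following sketch (not from the book; the helper names are ours, and the cofactor determinant is adequate only for small integer matrices) computes d_1, ..., d_n as successive quotients of the gcds of r × r minors:

```python
from itertools import combinations
from math import gcd

def det(M):
    """Determinant by cofactor expansion (fine for small matrices)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def minor_gcd(M, r):
    """gcd of all r x r minors of the square integer matrix M."""
    n = len(M)
    g = 0
    for rows in combinations(range(n), r):
        for cols in combinations(range(n), r):
            g = gcd(g, det([[M[i][j] for j in cols] for i in rows]))
    return g

def elementary_divisors(M):
    """d_1, ..., d_n determined by d_1 * ... * d_r = gcd of r x r minors."""
    n = len(M)
    gs = [1] + [minor_gcd(M, r) for r in range(1, n + 1)]
    return [gs[r] // gs[r - 1] for r in range(1, n + 1)]

g = [[2, 4, 4], [-6, 6, 12], [10, 4, 16]]
print(elementary_divisors(g))  # [2, 2, 156]
```

For the sample matrix the divisibility chain d_i | d_{i+1} of (2.4) can be read off the output.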
We now turn our attention to the multiplicative properties of the Hecke rings H^n.

THEOREM 2.3. The Hecke ring H^n, n ≥ 1, is commutative.
§2. HECKE RINGS FOR THE GENERAL LINEAR GROUP 107
where ν'(g, g'; h) = ν(g, g'; h)μ(g') is equal to the number of pairs (i, j) such that
g_i · g'_j ∈ ΛhΛ. If we replace the representatives {g_i} and {g'_j} by {ᵗg_i} and {ᵗg'_j}, we
see that ν'(g, g'; h) is also equal to the number of pairs (i, j) such that

ᵗg_i · ᵗg'_j = ᵗ(g'_j g_i) ∈ ΛhΛ = ᵗ(ΛhΛ),

i.e., g'_j g_i ∈ ΛhΛ,

since the set ΛgΛg'Λ, as a finite union of double Λ-cosets, coincides with its transpose

ᵗ(ΛgΛg'Λ) = Λ ᵗg' Λ ᵗg Λ = Λg'ΛgΛ. □
The rule for multiplying double cosets becomes especially simple when one of the
cosets is proportional to the identity coset. In this case the definition of multiplication
in the Hecke ring immediately implies

LEMMA 2.4. The following relation holds in the Hecke ring H^n for any g ∈ G^n and
r ∈ Q*:

(2.6) (rE_n)(g) = (g)(rE_n) = (rg).
PROPOSITION 2.5. Let g, g' ∈ G^n. Suppose the ratios d_n(g)/d_1(g) and d_n(g')/d_1(g'),
where d_r denotes the r-th elementary divisor, are relatively prime. Then the following
equality holds in the Hecke ring H^n:

(2.7) (g)(g') = (gg').

PROOF. Lemma 2.4 obviously implies that it suffices to prove the proposition in
the case when d_1(g) = d_1(g') = 1. Hence, we suppose that g and g' are integer
matrices, and the numbers d = |det g| and d' = |det g'| are relatively prime (see (2.5)
and (2.6)). From Lemma 1.5 it follows that

(g)(g') = Σ_{ΛhΛ ⊂ ΛgΛg'Λ} c(g, g'; h)(h),
where c(g, g'; h) are positive integers that depend only on the double Λ-cosets of g, g',
and h. Since Λgg'Λ ⊂ ΛgΛg'Λ, it follows that c(g, g'; gg') ≥ 1, and the last relation
can be rewritten in the form

(2.8) (g)(g') = (gg') + Σ_{ΛhΛ ⊂ ΛgΛg'Λ} c'(g, g'; h)(h),

where c'(g, g'; h) are nonnegative integers that depend only on the double cosets of g,
g', and h. For m ≥ 1 we let

ED(m) = ED^n(m)

denote the set of all integer matrices of the form (2.4) having determinant m. By
Lemma 2.2, we may assume that the matrices g, g', and h in (2.8) belong to the sets
ED(d), ED(d'), and ED(dd'), respectively, since every h in (2.8) is an integer matrix
with |det h| = dd'. Since the ED^n(m) are obviously finite sets, we can define the
following elements of H^n:

(2.10) t(m) = t^n(m) = Σ_{g ∈ ED(m)} (g).
Summing (2.8) over all g ∈ ED(d) and g' ∈ ED(d'), we obtain the relation

(2.11) t(d)t(d') = Σ_{g ∈ ED(d), g' ∈ ED(d')} (gg') + Σ_{g,g',h} c'(g, g'; h)(h).

It is easy to see that for d prime to d' the map (g, g') → gg' gives a bijection of
ED(d) × ED(d') with ED(dd'). Hence, the first sum on the right in (2.11) is equal to
t(dd'). If we prove that t(d)t(d') = t(dd') for d prime to d', then it will follow from
(2.11) that the double sum on the right is equal to zero; hence, all of the coefficients
c'(g, g'; h), since they are nonnegative, must equal zero. Thus, (2.8) would turn into
(2.7), and the proposition would be proved. In other words, to prove the proposition
it suffices to prove the following lemma, which is also of independent interest. □
LEMMA 2.6. For every d ∈ N the set

(2.12) M_n(±d) = {M ∈ M_n(Z); det M = ±d}

is the union of finitely many left cosets modulo the group Λ = Λ^n, and the element
t(d) = t^n(d) of the Hecke ring H^n can be written as a sum of the form

(2.13) t(d) = Σ_{Λg ⊂ M_n(±d)} (Λg).

If d and d' are relatively prime and the matrices g and g' run through sets of representatives
of the left cosets Λ\M_n(±d) and Λ\M_n(±d'), respectively, then the product gg'
runs through a set of representatives of the left cosets Λ\M_n(±dd'). In particular,

(2.14) t(d)t(d') = t(dd').
PROOF. Lemma 2.2 implies that the set M_n(±d) is the union of finitely many
double Λ-cosets, and each of these double cosets has exactly one representative in the
set ED(d). The first part of the lemma now follows from Lemma 2.1 and the definitions
of t(d) and (g).
Suppose that d and d' are relatively prime, and {g_1, ..., g_μ} and {g'_1, ..., g'_ν} are
fixed sets of representatives of the left cosets Λ\M_n(±d) and Λ\M_n(±d'), respectively.
Each product g_i g'_j is clearly contained in M_n(±dd'). Suppose that two such products
belong to the same left Λ-coset:

g_{i_1} g'_{j_1} = λ g_i g'_j, where λ ∈ Λ.

We set h = g_{i_1}^{-1} λ g_i = g'_{j_1}(g'_j)^{-1}. Then dh = d g_{i_1}^{-1} λ g_i and d'h = g'_{j_1} d'(g'_j)^{-1} are
integer matrices; since d is prime to d', it follows that h is an integer matrix. Furthermore,
det h = ±1. Thus, h ∈ Λ, so that g'_{j_1} = h g'_j ∈ Λg'_j, and hence j_1 = j. Then
also λ g_i = g_{i_1}, and so i_1 = i. We have thus proved that the products g_i g'_j belong to
distinct left cosets of M_n(±dd') modulo the group Λ. Now suppose that Λg, where
g ∈ M_n(±dd'), is an arbitrary left coset of M_n(±dd') modulo Λ. Then Lemma 2.2
implies that g can be written in the form g = vw, where v ∈ M_n(±d), w ∈ M_n(±d').
The element w lies in some left coset Λg'_j, i.e., w = λ_1 g'_j, where λ_1 ∈ Λ, and vλ_1 lies
in some left coset Λg_i, i.e., vλ_1 = λ g_i, where λ ∈ Λ. Consequently, g = λ g_i g'_j, and the
left coset Λg contains the product g_i g'_j. The second part of the lemma is proved.
The relation (2.14) follows from what has already been proved and from the
definition of multiplication in Hecke rings. This proves Lemma 2.6, and hence also
Proposition 2.5. □
The next lemma turns out to be useful for explicitly computing the left coset
decomposition of elements of the Hecke ring H^n.

LEMMA 2.7. Every left coset Λg, where g is an integer matrix in G^n, contains one
and only one reduced representative C = (c_ij) with

(2.15) 0 ≤ c_ij < c_jj, c_ji = 0 for 1 ≤ i < j ≤ n.
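Lemma 2.7 makes the left cosets in M_n(±d) easy to enumerate: choose the positive diagonal of a reduced representative, then the entries above it. The sketch below (the helper names are ours, not the book's) counts the reduced matrices with determinant d and checks the multiplicativity of the count for coprime d, d' asserted in Lemma 2.6:

```python
def divisor_tuples(n, d):
    """All n-tuples of positive integers with product d: the possible
    diagonals of a reduced matrix (2.15) with determinant d."""
    if n == 1:
        return [(d,)]
    return [(a,) + rest
            for a in range(1, d + 1) if d % a == 0
            for rest in divisor_tuples(n - 1, d // a)]

def num_left_cosets(n, d):
    """Number of reduced matrices with determinant d, i.e. of left cosets
    in Lambda\\M_n(+-d): each entry c_ij (i < j) runs over 0..c_jj - 1,
    so column j contributes c_jj^{j-1} choices."""
    total = 0
    for diag in divisor_tuples(n, d):
        count = 1
        for j, c in enumerate(diag):   # j = 0, ..., n-1
            count *= c ** j
        total += count
    return total

print(num_left_cosets(2, 6))  # 12 = sigma_1(6) for n = 2
# multiplicativity of the coset count for coprime determinants (cf. (2.14))
assert num_left_cosets(3, 4) * num_left_cosets(3, 9) == num_left_cosets(3, 36)
```

For n = 2 the count is the divisor sum sigma_1(d), which is consistent with the zeta-function of Problem 2.10 below.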
The study of the global Hecke ring H reduces to the study of its local subrings
H_p, where p runs through all prime numbers. Let p be a prime. We set

(2.16) G_p = G_p^n = GL_n(Z[p^{-1}]),

where

(2.17) Z[p^{-1}] = {a/p^δ; a ∈ Z, δ ≥ 0}

is the ring of rational numbers that are integral outside p. Since Λ ⊂ G_p ⊂ G, we can
consider the Hecke ring

(2.18) H_p = H_p^n = D_Q(Λ^n, G_p^n),

and this ring can be regarded as a subring of the Hecke ring H. The subrings H_p ⊂ H
for prime p are called the local Hecke rings of the group G.
THEOREM 2.8. The Hecke ring H^n is generated by the local subrings H_p^n as p runs
through the prime numbers.
PROOF. Given a nonzero rational number r and a prime p, we let ν_p(r) denote the
exponent of p that occurs in the prime factorization of r. If g ∈ G^n, we define the
matrix ed_p(g) of elementary p-divisors of g by setting

ed_p(g) = diag(p^{α_1}, ..., p^{α_n}), where α_i = ν_p(d_i(g))

and d_i(g) are the elementary divisors of g. The numbers p^{α_i} are called the elementary
p-divisors of g. For fixed g, the matrices ed_p(g) are clearly equal to the identity matrix
for all but finitely many p; furthermore,

ed(g) = Π_p ed_p(g),

where the product is taken over all prime numbers. Since g ∈ Λ ed(g)Λ, this product
formula implies that g can be written in the form

g = Π_p g_p, where g_p ∈ Λ ed_p(g)Λ ⊂ G_p^n,

and g_p = E_n for all but finitely many p. It now follows from Proposition 2.5 that
we have the expansion (g) = Π_p (g_p) for the corresponding double cosets. Since
(g_p) ∈ H_p, and H consists of finite linear combinations of elements of the form (g)
(g ∈ G), we conclude that every element in H is a finite sum of finite products of
elements of the subrings H_p. □
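The local factorization used in this proof is easy to illustrate with exact rational arithmetic. A sketch (the helper names are ours; the matrix is assumed already in elementary divisor form):

```python
from fractions import Fraction

def vp(r, p):
    """nu_p(r): the exponent of the prime p in the nonzero rational r."""
    r = Fraction(r)
    e, num, den = 0, r.numerator, r.denominator
    while num % p == 0:
        num //= p
        e += 1
    while den % p == 0:
        den //= p
        e -= 1
    return e

def ed_p(divisors, p):
    """Diagonal of the matrix of elementary p-divisors: p^{nu_p(d_i)}."""
    return [Fraction(p) ** vp(d, p) for d in divisors]

# ed(g) = diag(1/2, 3, 12) for some g in GL_3(Q); only p = 2, 3 occur
ed = [Fraction(1, 2), Fraction(3), Fraction(12)]
prod = [Fraction(1)] * 3
for p in (2, 3):
    prod = [a * b for a, b in zip(prod, ed_p(ed, p))]
assert prod == ed  # ed(g) is the product over p of the ed_p(g)
```

Each local factor diag(p^{ν_p(d_i)}) is the identity for all primes not dividing the numerators or denominators of the d_i, so the product is finite.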
PROBLEM 2.9. Let g, g' ∈ G^n. Suppose that the numbers d_n(g)/d_1(g) and
d_n(g')/d_1(g') are relatively prime. Show that d_i(gg') = d_i(g)d_i(g') for i = 1, ..., n.

PROBLEM 2.10. Show that the set of reduced integer matrices C ∈ M_n with det C =
d, where d ∈ N, can be taken as a set of representatives of the left cosets Λ\M_n(±d).
Conclude from this that the zeta-function of the ring H^n, which is defined as the
Dirichlet series

Z(s, N) = Σ_{m=1}^∞ N(t(m))/m^s,

where N: H^n → Q is the homomorphism in Problem 1.16 and Re s > n, is equal to
the product ζ(s)ζ(s-1)···ζ(s-n+1) of Riemann zeta-functions.
PROBLEM 2.11. Prove the following identities for formal Dirichlet series with coefficients
in the ring H^n:

Z(s) = Σ_{m=1}^∞ t(m)/m^s = Π_p Z_p(p^{-s}),    Z_p(v) = Σ_{δ=0}^∞ t(p^δ)v^δ;

Z(s_1, ..., s_n) = Σ (diag(d_1, ..., d_n))_Λ / (d_1^{s_1} ··· d_n^{s_n}) = Π_p Z_p(p^{-s_1}, ..., p^{-s_n}),

where diag(d_1, ..., d_n) runs through the set ⋃_{m=1}^∞ ED(m), p runs through the prime
numbers, and

Z_p(v_1, ..., v_n) = Σ_{0≤δ_1≤···≤δ_n} (diag(p^{δ_1}, ..., p^{δ_n}))_Λ v_1^{δ_1} ··· v_n^{δ_n}.
PROBLEM 2.12. Using the coset representatives in Problem 2.10, prove that in the
case n = 2 the following relation holds for every prime p:

t(p)t(p^δ) = t(p^{δ+1}) + p(pE_2)_Λ t(p^{δ-1}) (δ ≥ 1).

From this derive the following identities in the ring of formal power series over H_p^2:

Z_p(v) = (1 - t(p)v + p(pE_2)_Λ v^2)^{-1},

Z_p(v_1, v_2) = (1 - (pE_2)_Λ v_1^2)(1 - (pE_2)_Λ v_1 v_2)^{-1}(1 - t(p)v_2 + p(pE_2)_Λ v_2^2)^{-1}.
PROOF. The lemma follows from Lemma 2.4 and the definitions. □

From Lemma 1.5 and the relations (2.5) it follows that every element (h) in this
expansion has the form t(p^{γ_1}, ..., p^{γ_n}), where 0 ≤ γ_1 ≤ ··· ≤ γ_n and γ_1 + ··· + γ_n =
δ_2 + ··· + δ_n + δ'_2 + ··· + δ'_n. Thus, by the definition of Ψ, we find that

Ψ((g)(g')) = Σ_{0≤γ_2≤···≤γ_n, γ_2+···+γ_n = δ_2+···+δ_n+δ'_2+···+δ'_n} c(g, g'; diag(1, p^{γ_2}, ..., p^{γ_n})) t(p^{γ_2}, ..., p^{γ_n}).
Similarly, in the ring H_p^{n-1} we have the relation

We can now completely determine the structure of the rings H̄_p^n and H_p^n.
THEOREM 2.17. Let n ≥ 1, and let p be a prime number. Then:
(1) the ring H̄_p^n is generated over Q by the elements

(2.26) π_i(p) = π_i^n(p) = (diag(1, ..., 1, p, ..., p)) (n - i ones and i entries p; 1 ≤ i ≤ n);

(2) the Hecke ring H_p^n is generated over Q by the elements π_1(p), ..., π_{n-1}(p) and
π_n(p)^{±1};
(3) the elements π_1(p), ..., π_n(p) are algebraically independent over Q.

REMARK. We are identifying Q with the subring Q(E_n)_Λ ⊂ H_p^n.
PROOF. To prove the first part it suffices to verify that every element t(p^{δ_1}, ..., p^{δ_n}),
where 0 ≤ δ_1 ≤ ··· ≤ δ_n, is a polynomial in π_1(p), ..., π_n(p) with rational coefficients.
We prove this by induction on n, and for each fixed n > 1 by induction on N =
δ_1 + ··· + δ_n. If n = 1, the claim is obvious, since t(p^δ) = t(p)^δ = π_1^1(p)^δ. Suppose
that n > 1, and the claim has been verified for smaller orders. If N = δ_1 + ··· + δ_n = 1,
then t(p^{δ_1}, ..., p^{δ_n}) = π_1(p). Suppose that for all t(p^{δ'_1}, ..., p^{δ'_n}) with δ'_1 + ··· + δ'_n < N
it is already known that they are polynomials in the elements (2.26), and let 0 ≤ δ_1 ≤
··· ≤ δ_n and δ_1 + ··· + δ_n = N. If δ_1 ≥ 1, then by Lemma 2.4 we have

where Ψ is the homomorphism in Lemma 2.16, is a polynomial in the π_i^{n-1}(p):

t^{n-1}(p^{δ_2}, ..., p^{δ_n}) = F(π_1^{n-1}(p), ..., π_{n-1}^{n-1}(p)),

where

F(x_1, ..., x_{n-1}) = Σ_{a=(a_1,...,a_{n-1})} a_a x_1^{a_1} ··· x_{n-1}^{a_{n-1}}.

Since each element (h) in the expansion of the product (π_1^{n-1}(p))^{a_1} ··· (π_{n-1}^{n-1}(p))^{a_{n-1}}
obviously satisfies the relation |det h| = p^{|a|} with |a| = a_1 + 2a_2 + ··· + (n-1)a_{n-1},
it follows that, after combining similar terms, we may assume that the only nonzero
coefficients a_a in F are those for which |a| = δ_2 + ··· + δ_n = N. Since Ψ(π_i^n(p)) =
π_i^{n-1}(p) (1 ≤ i ≤ n-1), it follows that Ψ takes the element

t = t(1, p^{δ_2}, ..., p^{δ_n}) - Σ_{|a|=N} a_a (π_1^n(p))^{a_1} ··· (π_{n-1}^n(p))^{a_{n-1}}

to zero. Thus, t is imprimitive. On the other hand, from the form of t it follows
that t is a linear combination of elements t(p^{δ'_1}, ..., p^{δ'_n}) with δ'_1 + ··· + δ'_n = N.
Hence, t = π_n(p)t', where t' is a linear combination of elements t(p^{γ_1}, ..., p^{γ_n}) with
γ_1 + ··· + γ_n < N. By the second induction assumption, t' is a polynomial in
π_1^n(p), ..., π_n^n(p), and so the same is true of t. Part (1) of the theorem is proved.
Part (2) follows from part (1) and Lemma 2.15.
We prove the third part by induction on n. If n = 1, it is obvious, since the
elements π_1^1(p)^δ = (p^δ), δ = 0, 1, 2, ..., correspond to pairwise distinct double cosets
modulo Λ^1 = {±1}, and so they are linearly independent. Suppose that n > 1, and
the claim has been verified for all n' < n. Suppose that the elements π_1^n(p), ..., π_n^n(p)
are algebraically dependent over Q, and let F(π_1^n(p), ..., π_n^n(p)) = 0 be an algebraic
relation in which the polynomial F has the smallest possible degree. If we now apply
the homomorphism Ψ, we obtain

Ψ(F(π_1^n(p), ..., π_n^n(p))) = F(Ψ(π_1^n(p)), ..., Ψ(π_n^n(p)))
    = F(π_1^{n-1}(p), ..., π_{n-1}^{n-1}(p), 0) = 0.

By the induction assumption, this implies that F = x_n F_1, where F_1 = F_1(x_1, ..., x_n) is
also a polynomial. By Lemma 2.15, π_n^n(p) is not a zero divisor. Hence, the relation 0 =
F(π_1^n(p), ..., π_n^n(p)) = π_n^n(p)F_1(π_1^n(p), ..., π_n^n(p)) implies that F_1(π_1^n(p), ..., π_n^n(p))
= 0, and this contradicts our assumption that F has minimal degree. □
We conclude this subsection by proving some technical facts about the generators
π_i(p) and their pairwise products, which will be needed later.

LEMMA 2.18. (1) The set

(2.27) {C = (c_{αβ}) ∈ M_n; det C = p^i, c_{αα} = 1 or p, 0 ≤ c_{αβ} < c_{ββ},
        c_{αβ} = 0 if α > β, or if α < β and c_{αα} = p}

is a complete set of representatives of the distinct left cosets of ΛD_iΛ modulo Λ = Λ^n,
where D_i = D_i^n(p) = diag(1, ..., 1, p, ..., p) is the matrix in (2.26); the number of
matrices in this subset is equal to

(2.28) μ_Λ(D_i) = φ_n/(φ_i φ_{n-i}),
where

(2.29) φ_r = φ_r(p), φ_r(x) = (x - 1)(x^2 - 1)···(x^r - 1), φ_0(x) = 1.
(2) The double coset expansion of the product in the Hecke ring H_p^n of the two
elements π_i = π_i^n(p) and π_j = π_j^n(p), where 1 ≤ i, j ≤ n, has the form

(2.30) π_i π_j = Σ_{0≤a≤n-j, 0≤b≤j, a+b=i} (φ_{a+j-b}/(φ_a φ_{j-b})) (D_{a+j-b,b})_Λ,

where

(2.31) D_{αβ} = D_{αβ}^n(p) = diag(E_{n-α-β}, pE_α, p^2 E_β) (α + β ≤ n).

The number of Λ-left cosets in the double coset ΛD_{αβ}Λ is given by the formula

(2.32) μ_Λ(D_{αβ}) = p^{β(n-α-β)} φ_n/(φ_{n-α-β} φ_α φ_β).
PROOF. Every matrix in (2.27) can be written in the form diag(p^{δ_1}, ..., p^{δ_n})C',
where δ_α = 0 or 1, δ_1 + ··· + δ_n = i, and C' ∈ Λ. Thus, the matrix lies in ΛD_iΛ.
By Lemma 2.7, all of these matrices belong to different Λ-left cosets. Now let C be
any matrix of the form (2.15) that lies in ΛD_iΛ. Its diagonal entries c_{αα} are positive
integers, and their product is p^i. Hence, c_{αα} = p^{δ_α}, where δ_α ≥ 0, δ_1 + ··· + δ_n = i.
From the integrality of the matrix p·C^{-1} it follows that each δ_α is either 0 or 1.
Suppose that c_{αβ} ≠ 0, where 1 ≤ α < β ≤ n. Then δ_β = 1 and c_{αβ} is not divisible by
p. We let γ_1, ..., γ_{n-i} denote the indices γ for which δ_γ = 0. If δ_α = 1, then the γ_1th,
..., γ_{n-i}th, and βth columns of C are linearly independent modulo p; but the rank of
C modulo p is obviously equal to n - i. Thus, δ_α = 0, and C lies in the set (2.27).
It is easy to see that the number of elements of the set (2.27) with fixed diagonal and
with δ_α = 1 precisely when α = α_1, α_2, ..., α_i, where 1 ≤ α_1 < α_2 < ··· < α_i ≤ n, is
equal to p^{α_1-1} ··· p^{α_i-i}; this implies that the number of elements in the set (2.27) is
equal to (2.28). Here we have used the identity

(2.33) Σ_{1≤α_1<···<α_i≤n} x^{α_1+···+α_i} = x^{⟨i⟩} φ_n(x)/(φ_i(x) φ_{n-i}(x)) (1 ≤ i ≤ n, ⟨i⟩ = 1 + 2 + ··· + i),
where φ_r(x) is the function (2.29). This identity can be obtained by equating coefficients
of t^i on both sides of the identity

(2.34) Π_{α=1}^n (1 + t x^α) = Σ_{i=0}^n t^i x^{⟨i⟩} φ_n(x)/(φ_i(x) φ_{n-i}(x)),

an identity which the reader can easily prove by induction on n, in a manner analogous
to the standard proof of Newton's binomial expansion.
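Identity (2.34), a form of the q-binomial theorem, can also be spot-checked by machine. A sketch (assuming φ_r(x) = (x−1)(x²−1)···(x^r−1) as in (2.29) and ⟨i⟩ = i(i+1)/2; the helper names are ours):

```python
from itertools import combinations

def phi(r, x):
    """phi_r(x) = (x - 1)(x^2 - 1) ... (x^r - 1), with phi_0 = 1."""
    out = 1
    for k in range(1, r + 1):
        out *= x ** k - 1
    return out

def lhs_coeff(n, i, x):
    """Coefficient of t^i in prod_{a=1}^{n} (1 + t x^a): the sum of
    x^{a_1 + ... + a_i} over 1 <= a_1 < ... < a_i <= n, as in (2.33)."""
    return sum(x ** sum(S) for S in combinations(range(1, n + 1), i))

for n in range(1, 7):
    for i in range(n + 1):
        for x in (2, 3, 5):
            rhs = x ** (i * (i + 1) // 2) * phi(n, x) // (phi(i, x) * phi(n - i, x))
            assert lhs_coeff(n, i, x) == rhs
print("(2.34) checked for n <= 6")
```

The integer division is exact because φ_n(x)/(φ_i(x)φ_{n-i}(x)) is the Gaussian binomial coefficient, a polynomial with integer coefficients.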
To prove (2.30) we use induction on n. If n = 1, we must prove the formula
π_1^1 π_1^1 = (p^2)_Λ, which is obvious. Now let n > 1, and suppose that the formula has been
proved for smaller orders. By Lemma 1.5 and the first part of Lemma 2.18, we have
the expansion

(2.35) π_i π_j = Σ_{ΛgΛ ⊂ G_p} c_{ij}(g)(g)_Λ,

where

c_{ij}(g) = ν(D_i, D_j; g) μ_Λ(D_j) μ_Λ(g)^{-1}

and ν(D_i, D_j; g) is the number of matrices C in the set (2.27) such that CD_j ∈ ΛgΛ.
This number is easy to compute. If the diagonal entries in C are p^{δ_1}, ..., p^{δ_n}, then, as
noted before, C = diag(p^{δ_1}, ..., p^{δ_n})C_1, where C_1 is an upper-triangular matrix in Λ.
Then

CD_j = diag(p^{δ_1}, ..., p^{δ_n}) D_j D_j^{-1} C_1 D_j ∈ Λ diag(p^{δ_1}, ..., p^{δ_n}) D_j Λ,

where D_j = D_j^n(p), since obviously D_j^{-1} C_1 D_j ∈ Λ. Let α_1 < ··· < α_i be the indices
α for which δ_α = 1. If a of the numbers α_1, ..., α_i do not exceed n - j and b = i - a
of these numbers are greater than n - j, then clearly

diag(p^{δ_1}, ..., p^{δ_n}) D_j ∈ ΛD_{a+j-b,b}Λ,

where D_{αβ} = D_{αβ}^n(p). As noted before, the number of matrices C with fixed α_1, ..., α_i
is equal to p^{α_1-1} ··· p^{α_i-i}. Thus, the number of matrices C in the set (2.27) such that
CD_j ∈ ΛD_{a+j-b,b}Λ for fixed a and b satisfying the inequalities 0 ≤ a ≤ n - j,
0 ≤ b ≤ j, and a + b = i, is equal to

p^{b(n-j-a)} (Σ_{1≤α_1<···<α_a≤n-j} p^{α_1+···+α_a-⟨a⟩}) (Σ_{1≤β_1<···<β_b≤j} p^{β_1+···+β_b-⟨b⟩})
    = p^{b(n-j-a)} φ_{n-j} φ_j/(φ_a φ_{n-j-a} φ_b φ_{j-b}),

where we have used the identity (2.33); hence

c_{ij}(D_{a+j-b,b}) = p^{b(n-j-a)} (φ_n/(φ_a φ_{n-j-a} φ_b φ_{j-b})) μ_Λ(D_{a+j-b,b})^{-1}.
In the case when i = n or j = n, (2.30) follows from the definitions and Lemma 2.4.
Hence, we may assume that 1 ≤ i, j < n. If we now apply the homomorphism Ψ of
Lemma 2.16 to the left and right sides of (2.35), we find that

π_i^{n-1}(p) π_j^{n-1}(p) = Σ_{0≤a≤n-j, 0≤b≤j, a+b=i} c_{ij}(D_{a+j-b,b}) π_{a+j-b,b}^{n-1}(p).
On the other hand, by the induction assumption we have

π_i^{n-1}(p) π_j^{n-1}(p) = Σ_{0≤a≤n-1-j, 0≤b≤j, a+b=i} (φ_{a+j-b}/(φ_a φ_{j-b})) π_{a+j-b,b}^{n-1}(p).
Since the double cosets in this expansion are linearly independent, we obtain the
relation

c_{ij}(D_{a+j-b,b}) = φ_{a+j-b}/(φ_a φ_{j-b}),

where a + b = i, 0 ≤ a ≤ n - 1 - j, 0 ≤ b ≤ j. The same formula can be obtained
in the case a = n - j if we use the original formula for c_{ij}(D_{2n-i-j,i+j-n}) and (2.28)
and take into account that

μ_Λ(D_{2n-i-j,i+j-n}) = μ_Λ(D_{i+j-n}^n(p)) = φ_n/(φ_{i+j-n} φ_{2n-i-j}).

This proves (2.30). Comparing the expressions for the coefficients c_{ij}(D_{a+j-b,b}), we
find that

μ_Λ(D_{a+j-b,b}) = p^{b(n-j-a)} φ_n/(φ_{n-j-a} φ_{a+j-b} φ_b),

from which (2.32) follows if we set a + j - b = α, b = β. □
PROBLEM 2.19. Let Q_p be the field of p-adic numbers, and let Z_p be the ring of
p-adic integers. Then G = GL_n(Q_p) is a locally compact group in the p-adic topology,
and Γ = GL_n(Z_p) is a (maximal) compact subgroup. Let D_K, where K is a subring
of C, denote the K-module consisting of all continuous functions f: G → K with
compact support which satisfy the condition f(γ_1 g γ_2) = f(g) for any γ_1, γ_2 ∈ Γ and
g ∈ G. Fix the Haar measure μ on G for which μ(Γ) = 1, and define the product
f * f_1 of functions f, f_1 ∈ D_K by the formula

(f * f_1)(g) = ∫_G f(gh^{-1}) f_1(h) dμ(h).

Show that the K-linear map from the Hecke ring D_K(Λ^n, G_p^n) to D_K that associates to
an element (g)_Λ (g ∈ G_p^n) the characteristic function of the double coset ΓgΓ ⊂ G is
an isomorphism of rings.
3. The spherical map. We have shown that every element in the local Hecke ring of
the general linear group can be uniquely expressed as a polynomial in a finite number
of generators. But often it is not so simple to find this polynomial if the element
is given, say, as a linear combination of left cosets. In order to solve this problem,
we define certain maps from the local Hecke rings to rings of symmetric polynomials
that are analogous to the spherical functions in the representation theory of locally
compact groups.
from the Q-vector space spanned by the distinct left cosets (Λg) of G_p^n modulo Λ^n to the
subring of the field Q(x_1, ..., x_n) of rational functions in n variables that is generated
over Q by x_1^{±1}, ..., x_n^{±1}. Lemma 2.7 implies that every left coset Λg (g ∈ G_p^n) has a
representative of the form

(2.36)  ( p^{δ_1}   *    ···   *    )
        (  0     p^{δ_2} ···   *    )     where δ_1, ..., δ_n ∈ Z,
        (  ···               ···    )
        (  0       0     ··· p^{δ_n})

and the diagonal (p^{δ_1}, ..., p^{δ_n}) is uniquely determined by the left coset. We set

ω((Λg)) = Π_{i=1}^n (x_i p^{-i})^{δ_i}.
For an element

t = Σ_j a_j (Λg_j) ∈ L_Q(Λ^n, G_p^n)

we define

ω(t) = Σ_j a_j ω((Λg_j)).

We call ω the spherical map. We would like to describe the image of the Hecke rings
under ω. Recall that an element of the field Q(x_1, ..., x_n) is said to be symmetric if it
does not change under any permutation of the variables x_1, ..., x_n.
THEOREM 2.20. The restriction of the map ω = ω_p^n to the Hecke ring H_p^n =
D_Q(Λ^n, G_p^n) ⊂ L_Q(Λ^n, G_p^n) is an isomorphism of this ring with the ring
Q[x_1^{±1}, ..., x_n^{±1}]^S of all symmetric elements of Q[x_1^{±1}, ..., x_n^{±1}]. The image of the
integral subring H̄_p^n under the map ω is the ring Q[x_1, ..., x_n]^S of all symmetric
polynomials in x_1, ..., x_n over Q.
We first prove a lemma.

LEMMA 2.21. The images of the elements (2.26) under the map ω = ω_p^n are given by
the formulas

ω(π_i^n(p)) = p^{-⟨i⟩} s_i(x_1, ..., x_n) (1 ≤ i ≤ n),

where ⟨i⟩ = 1 + 2 + ··· + i and

s_i(x_1, ..., x_n) = Σ_{1≤α_1<···<α_i≤n} x_{α_1} ··· x_{α_i}

is the i-th elementary symmetric polynomial.

PROOF. In the expansion of π_i^n(p) into left cosets we take the set (2.27) as the
representatives of the different left cosets, and we use the fact that the number of
elements in this set for which δ_α = 1 precisely when α = α_1, α_2, ..., α_i, where 1 ≤
α_1 < ··· < α_i ≤ n, is equal to p^{α_1-1} ··· p^{α_i-i}. By the definition of ω we then have

ω(π_i^n(p)) = Σ_{1≤α_1<···<α_i≤n} p^{α_1-1} ··· p^{α_i-i} (x_{α_1} p^{-α_1}) ··· (x_{α_i} p^{-α_i}) = p^{-⟨i⟩} s_i(x_1, ..., x_n). □
tt' = Σ_{i,j} a_i b_j (Λg_i g'_j),

have the same form, and the diagonal entries in the matrices g_i g'_j are equal to the
products of the corresponding diagonal entries in g_i and g'_j. Thus, from the definition
of ω we have ω((Λg_i g'_j)) = ω((Λg_i)) ω((Λg'_j)). Using this and the linearity of ω, we
obtain the relation ω(tt') = ω(t) ω(t').

Clearly ω((E_n)) = 1, where (E_n) = (E_n)_Λ is the unit of the ring H_p. From
Theorem 2.17 it then follows that the ω-image of H̄_p consists of all polynomials over
Q in the elements ω(π_i(p)) (1 ≤ i ≤ n), and the ω-image of H_p is generated by
ω(H̄_p) and the element ω(π_n(p)^{-1}) = ω(π_n(p))^{-1}. But then Lemma 2.21 and the
fundamental theorem on symmetric polynomials imply that ω(H̄_p) coincides with
the ring of symmetric polynomials in n variables over Q, and the ring ω(H_p) is
generated over ω(H̄_p) by the element (x_1 ··· x_n)^{-1}, and so obviously coincides with
Q[x_1^{±1}, ..., x_n^{±1}]^S.

Finally, if ω(t) = 0 for some t ∈ H_p, then by Lemma 2.15 we can write t =
π_n(p)^δ t_1, where δ ∈ Z and t_1 ∈ H̄_p. Then, since ω(π_n(p)^{±1}) ≠ 0, we have ω(t_1) = 0.
By Theorem 2.17, t_1 = Φ(π_1(p), ..., π_n(p)), where Φ(x_1, ..., x_n) is a polynomial with
rational coefficients. But by Lemma 2.21, the equality ω(t_1) = 0 means that

Φ(p^{-⟨1⟩} s_1(x_1, ..., x_n), ..., p^{-⟨n⟩} s_n(x_1, ..., x_n)) = 0,

and because of the algebraic independence of elementary symmetric polynomials, this
implies that Φ(x_1, ..., x_n) = 0. Hence, t_1 = 0 and t = 0. □
Theorem 2.20 reduces the computation of the product of elements of a local Hecke
ring to the computation of the product of the corresponding symmetric polynomials.
The problem of expressing elements of the local Hecke rings in terms of the generators
π_i(p) reduces to the problem of expressing symmetric polynomials in terms of the
elementary symmetric polynomials.
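Lemma 2.21 itself can be confirmed mechanically for small n by summing ω over the coset representatives (2.27). A sketch (the encoding of monomials by exponent tuples and the helper names are ours):

```python
from fractions import Fraction
from itertools import combinations

def omega_pi(n, i, p):
    """omega(pi_i^n(p)): the sum over the representatives (2.27) of
    prod_j (x_j p^{-j})^{delta_j}, encoded as {exponent tuple: coefficient}."""
    poly = {}
    for supp in combinations(range(1, n + 1), i):   # positions with c_aa = p
        # free entries c_ab: a < b, c_aa = 1, c_bb = p, each with p choices
        free = sum(1 for b in supp for a in range(1, b) if a not in supp)
        coeff = Fraction(p ** free, p ** sum(supp))  # count * prod_j p^{-a_j}
        expo = tuple(int(j in supp) for j in range(1, n + 1))
        poly[expo] = poly.get(expo, Fraction(0)) + coeff
    return poly

n, p = 3, 5
for i in range(1, n + 1):
    # Lemma 2.21: omega(pi_i(p)) = p^{-(1 + 2 + ... + i)} s_i(x_1, ..., x_n)
    c = Fraction(1, p ** (i * (i + 1) // 2))
    s_i = {tuple(int(j in S) for j in range(1, n + 1)): c
           for S in combinations(range(1, n + 1), i)}
    assert omega_pi(n, i, p) == s_i
print("Lemma 2.21 confirmed for n = 3, p = 5")
```

Each monomial of the elementary symmetric polynomial s_i appears with the single coefficient p^{-⟨i⟩}, exactly as the lemma asserts.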
We illustrate the usefulness of the spherical map by discussing the problem of
summing the formal generating series for elements of the form t(p^δ), where p is a prime
number. Note that when n > 2 these elements do not have a simple multiplication
table (the case n = 1 is trivial, and the case n = 2 is treated in Problem 2.12).
PROPOSITION 2.22. In the notation (2.10) and (2.26), the following identity holds in
the ring of formal power series over H_p^n:

(2.37) Σ_{δ=0}^∞ t(p^δ) v^δ = (Σ_{i=0}^n (-1)^i p^{i(i-1)/2} π_i^n(p) v^i)^{-1}.

PROOF. From the definition of t(p^δ) and Lemma 2.7 it follows that

ω(t(p^δ)) = Σ_{δ_1,...,δ_n ≥ 0, δ_1+···+δ_n=δ} p^{0·δ_1 + 1·δ_2 + ··· + (n-1)δ_n} Π_{i=1}^n (x_i p^{-i})^{δ_i}
    = p^{-δ} Σ_{δ_1+···+δ_n=δ} x_1^{δ_1} ··· x_n^{δ_n},

so that

(2.38) Σ_{δ=0}^∞ ω(t(p^δ)) v^δ = Π_{i=1}^n (1 - x_i p^{-1} v)^{-1} = ω(Σ_{i=0}^n (-1)^i p^{i(i-1)/2} π_i^n(p) v^i)^{-1},

where we used Lemma 2.21 for the last step. From the identity for formal power series
over the ring of polynomials in x_1, ..., x_n it follows that all of the coefficients in the
formal series

(Σ_{δ=0}^∞ ω(t(p^δ)) v^δ)(Σ_{i=0}^n (-1)^i p^{i(i-1)/2} ω(π_i^n(p)) v^i) - 1

vanish; since ω is injective on H_p^n (Theorem 2.20), the same relation holds over H_p^n,
which proves (2.37). □
This proposition can be used to find explicit expressions for t(p^δ) in terms of the
generators π_i(p).

We conclude this section by describing the anti-automorphism j of the Hecke ring
H_p^n (see §1.4) in terms of symmetric functions.
LEMMA 2.23. Let n ≥ 1, and let p be a prime number. Then the diagram

    H_p^n  --ω-->  Q[x_1^{±1}, ..., x_n^{±1}]
      |j                 |w
    H_p^n  --ω-->  Q[x_1^{±1}, ..., x_n^{±1}]

commutes, where ω is the spherical map, j is the anti-automorphism (1.31), and w = w_{n,p}
denotes the Q-linear ring homomorphism given by w(x_i) = p^{n+1} x_i^{-1} (1 ≤ i ≤ n) on the
generators.

PROOF. Since H_p^n is a commutative ring, all of the maps in this diagram are Q-
linear ring homomorphisms. Taking into account Theorem 2.17 and Lemma 2.4, it is
therefore sufficient to verify that ω(j(π_i(p))) = w(ω(π_i(p))) (1 ≤ i ≤ n). We have
PROBLEM 2.24. Show that any Q-linear ring homomorphism from H_p^n to C that
takes the unit of H_p^n to 1 has the form

PROBLEM 2.25. Let λ: H^n → C be such a homomorphism. Show that the Dirichlet series

Z(s, λ) = Σ_{d=1}^∞ λ(t(d))/d^s,

where t(d) is the element (2.13), has a formal Euler product expansion of the form

Z(s, λ) = Π_p Z_p(s, λ_p),

where p runs through all prime numbers, λ_p denotes the restriction of λ to H_p^n, and
the local zeta-functions Z_p(s, λ_p) have the form

Z_p(s, λ_p) = Π_{i=1}^n (1 - λ_i(p) p^{-s})^{-1},

where λ_1(p), ..., λ_n(p) are the parameters of the homomorphism λ_p.
PROBLEM 2.26. We return to the notation of Problem 2.19. Recall that a continuous
function ω on the group G = GL_n(Q_p) with values in C is called a (zonal) spherical
function if ω(γ_1 g γ_2) = ω(g) for any γ_1, γ_2 ∈ Γ = GL_n(Z_p) and g ∈ G, ω(E_n) = 1,
and the map

f → ω(f) = ∫_G f(g) ω(g) dμ(g)

is a ring homomorphism from D_C to C. Show that every spherical function on G has
the form

ω_{(λ_1,...,λ_n)}(g) = ∫ φ_{(λ_1,...,λ_n)}(gγ) dμ(γ),

where λ_1, ..., λ_n are nonzero complex numbers and the function φ = φ_{(λ_1,...,λ_n)} is given
by the conditions: φ(γg) = φ(g) for γ ∈ Γ and g ∈ G, and

Show that the spherical functions ω_{(λ_1,...,λ_n)} and ω_{(λ'_1,...,λ'_n)} coincide if and only if the
numbers λ'_1, ..., λ'_n are a permutation of the numbers λ_1, ..., λ_n.

[Hint: Use Problems 2.19 and 2.24, Theorem 2.20, and Lemma 2.21.]
Using Lemma 3.1 as a point of departure, one could determine the Hecke ring
of the pair (K, S^n) and then consider its representations on spaces of modular forms
for the group K. However, the structure of the Hecke rings that arise is in general
§3. HECKE RINGS FOR THE SYMPLECTIC GROUP 123
unknown, and one does not yet have a concrete general theory of Hecke operators.
Because our constructions are not meant as an end in themselves, but rather as a means
for studying Diophantine problems in number theory, we shall simplify the situation
by, in the first place, limiting ourselves to the types of congruence subgroups that arise
in arithmetic, and, in the second place, considering certain subrings of the Hecke ring
of the pair (K, S^n), rather than the entire Hecke ring.
We first prove an approximation lemma.
LEMMA 3.2. (1) The natural homomorphisms mod q

SL_n(Z) → SL_n(Z/qZ)

and

Γ^n = Sp_n(Z) → Sp_n(Z/qZ),

where n, q ∈ N, are epimorphisms.

(2) If q and q_1 are relatively prime, then

Γ^n(q) Γ^n(q_1) = Γ^n.
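Before turning to the proof, the first assertion can be brute-force checked for n = 2 and small q. A sketch, using only the standard fact that SL_2(Z) is generated by S = ( 0 -1 ; 1 0 ) and T = ( 1 1 ; 0 1 ), so that the image of reduction mod q is the subgroup generated by their residues:

```python
def sl2_mod(q):
    """All matrices in SL_2(Z/qZ), encoded as tuples (a, b, c, d)."""
    return {(a, b, c, d)
            for a in range(q) for b in range(q)
            for c in range(q) for d in range(q)
            if (a * d - b * c) % q == 1}

def generated_by_S_T(q):
    """Closure under multiplication mod q of the reductions of
    S = [[0,-1],[1,0]] and T = [[1,1],[0,1]]: the image of SL_2(Z)."""
    def mul(m, k):
        a, b, c, d = m
        e, f, g, h = k
        return ((a * e + b * g) % q, (a * f + b * h) % q,
                (c * e + d * g) % q, (c * f + d * h) % q)
    gens = [(0, (-1) % q, 1 % q, 0), (1 % q, 1 % q, 0, 1 % q)]
    seen, frontier = set(gens), list(gens)
    while frontier:
        m = frontier.pop()
        for g in gens:
            nm = mul(m, g)
            if nm not in seen:
                seen.add(nm)
                frontier.append(nm)
    return seen

for q in (2, 3, 4, 5, 6):
    assert generated_by_S_T(q) == sl2_mod(q)
print("SL_2(Z) -> SL_2(Z/qZ) surjective for q = 2, ..., 6")
```

In a finite group the multiplicative closure of a generating set is already the generated subgroup, so the equality of the two sets is exactly surjectivity of reduction.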
PROOF. We prove the first part separately for each of the two groups using induction
on n. For SL_n it is obvious in the case n = 1. Suppose that n > 1, and
the claim has been proved for SL_m with m < n. Let T be an n × n integer matrix
such that det T ≡ 1 (mod q). We can replace T by a matrix congruent to it modulo q
that has the property that the entries in its first column are relatively prime. Then, by
Lemma 3.8 of Chapter 1, there exists V_1 ∈ SL_n(Z) such that V_1 T = ( 1 T_2 ; 0 T_4 ). Since
det T_4 ≡ det T ≡ 1 (mod q), it follows by the induction assumption that there exists
V_4 ∈ SL_{n-1}(Z) that is congruent to T_4 modulo q. Then T ≡ V_1^{-1} ( 1 T_2 ; 0 V_4 ) (mod q),
and the last matrix obviously lies in SL_n(Z). This proves the first part of the lemma for
SL_n. Since Sp_1(Z) = SL_2(Z), part one of the lemma holds for Sp_1(Z). Suppose that
n > 1, and the claim has already been proved for Sp_m with m < n. If M is a 2n × 2n
integer matrix satisfying the congruence J_n[M] ≡ J_n (mod q), then we first replace the
entries in the first column of M by suitable integers congruent to them modulo q so
that they are relatively prime. Applying Lemma 3.9 of Chapter 1 to the first column of
M, we find that there exists g ∈ Γ^n such that the first column of the matrix M' = gM
is the same as the first column of the identity matrix E_{2n}. The reader can easily verify
that it is always possible to choose matrices V ∈ SL_n(Z) and S = ᵗS ∈ M_n in such a
way that the first row of the matrix

M'' = ( A B ; C D ) = M'g_1, where g_1 = ( … ),

is the same as the first row of E_{2n}. Thus,
these groups have a number of technical advantages that make the calculations easier.
Using Theorem 3.3, a reader who has need of results for the rings L(K) for other K
will easily be able to obtain them from the corresponding results for L(Γ_0(q)).

We start with some technical lemmas that give information on the left and double
cosets of Γ_0(q). We set
Lemma 3.5 implies that S(Γ_0^n(q))_q = S^n(q), and the Hecke ring (3.6) for the
group K = Γ_0^n(q) has the form

(3.8) L^n(q) = D_Q(Γ_0^n(q), S^n(q)).

By Lemma 1.5, we can take elements of the form (1.11) as a Q-basis of the ring
L^n(q), one element for each double coset Γ_0^n(q) M Γ_0^n(q) of the group S^n(q) modulo
the subgroup Γ_0^n(q).
LEMMA 3.6. Every double coset Γ_0^n(q) M Γ_0^n(q), where M ∈ S^n(q), contains one and
only one representative of the form

(3.9) sd(M) = diag(d_1, ..., d_n; e_1, ..., e_n),

where d_i, e_i > 0, d_i | d_{i+1}, d_n | e_n, e_{i+1} | e_i, d_i e_i = r(M).
where ( A_4 B_4 ; C_4 D_4 ) ∈ S^{n-1}. First suppose that δ = 1, and let i be the index of the first
column of M whose entries are relatively prime. By replacing M by M J_n if necessary,
we may suppose that i ≤ n. If we then replace M by M U(V*), where V ∈ Λ^n is a
suitable permutation matrix, we may assume that i = 1. We now apply Lemma 3.9
of Chapter 1 to the first column of M; we find that the left coset Γ^n M contains a
representative whose A_1-block is 1 and whose A_3-, C_1-, and C_3-blocks consist of zeros.
After multiplying this matrix on the right by

we obtain a matrix with zero B_1- and B_2-blocks. Thus, in our double coset we have
found a matrix with A_1 = 1, A_2 = 0, B_1 = 0, B_2 = 0, A_3 = 0, C_1 = 0, and C_3 = 0.
From (3.1)-(3.2) it now follows that this matrix has the form (3.10). For example,
the first relation in (3.1) shows that C_2 = 0 and ᵗA_4 C_4 = ᵗC_4 A_4, the first relation in
(3.2) leads to the equalities B_3 = 0 and A_4 ᵗB_4 = B_4 ᵗA_4, and so on. Now suppose that
δ > 1, the claim has been proved for all integer matrices M' ∈ S^n with relatively prime
entries and with δ(M') < δ, and δ(M) = δ. Just as in the above discussion, in the
double coset of M we can find a representative M_0 with A_1 = δ, with zero A_3-, C_1-, and
C_3-blocks, and with the property that all of the entries in the A_2-, B_1-, and B_2-blocks
are between 1 and δ. Then δ(M_0) < δ. In fact, we obviously have δ(M_0) ≤ δ. If
δ(M_0) were equal to δ, then all of the entries of M_0, and hence all of the entries of
M, would be divisible by δ, contradicting the assumption that its entries are relatively
prime. By the induction assumption, the double coset Γ^n M_0 Γ^n = Γ^n M Γ^n contains a
128 3. HECKE RINGS
representative of the form (3.10), and the proof of the claim is complete. Returning to
the proof of the lemma, we see that the $\Gamma^n$-double coset of an arbitrary matrix $M \in S^n$
contains a representative $M_0 = \bigl(\begin{smallmatrix}A&B\\C&D\end{smallmatrix}\bigr)$ with blocks of the form
$$A = \begin{pmatrix}d_1&0\\0&A'\end{pmatrix},\quad B = \begin{pmatrix}0&0\\0&B'\end{pmatrix},\quad C = \begin{pmatrix}0&0\\0&C'\end{pmatrix},\quad D = \begin{pmatrix}e_1&0\\0&D'\end{pmatrix},$$
where $d_1, e_1 > 0$, $d_1\mid e_1$, $d_1e_1 = r(M)$, $M' = \bigl(\begin{smallmatrix}A'&B'\\C'&D'\end{smallmatrix}\bigr) \in S^{n-1}$, and all of the
entries in $M'$ are divisible by $d_1$. By the induction assumption, there exist $\xi,\xi_1 \in \Gamma^{n-1}$
such that the matrix $\xi M'\xi_1$ has the form (3.9). Then the matrix $\hat\xi M_0\hat\xi_1$, where for
$\xi = \bigl(\begin{smallmatrix}\alpha&\beta\\\gamma&\delta\end{smallmatrix}\bigr) \in \Gamma^{n-1}$ we set
$$\hat\xi = \begin{pmatrix}1&0&0&0\\0&\alpha&0&\beta\\0&0&1&0\\0&\gamma&0&\delta\end{pmatrix} \in \Gamma^n,$$
has the form (3.9). The uniqueness of the $\Gamma^n$-double coset representative of the form
(3.9) follows from Lemma 2.2, since the numbers $d_1,\dots,d_n,e_n,e_{n-1},\dots,e_1$ obviously
are the elementary divisors of this matrix in the sense of §2.1. The lemma is proved in
the case $q = 1$.
We now turn to the case of arbitrary $q \ge 1$. If $M \in S^n(q)$, then, by what was
proved above, there exist $\gamma_1,\gamma_2 \in \Gamma^n$ such that $\gamma_1M\gamma_2 = \operatorname{sd}(M)$. By Lemma 3.5,
the groups $K = \Gamma^n$ and $K_1 = \Gamma_0^n(q)$ satisfy the conditions of Theorem 3.3. Since
$\operatorname{sd}(M) \in S^n(q)$, it follows from part (3) of that theorem that $\operatorname{sd}(M) \in \Gamma_0^n(q)M\Gamma_0^n(q)$;
the uniqueness of this element follows from its uniqueness in the larger double coset
$\Gamma^nM\Gamma^n$. $\square$
We call a matrix $\operatorname{sd}(M)$ of the form (3.9) the symplectic divisor matrix of $M$,
and we call $d_i = d_i(M)$ and $e_i = e_i(M)$ ($i = 1,\dots,n$) the symplectic divisors of $M$.
Clearly, the numbers $d_1,\dots,d_n,e_n,\dots,e_1$ are the elementary divisors of $M$. From
Lemmas 1.5 and 3.6 it follows that the elements of the form
form a basis of the space $L^n(q)$ over $\mathbb{Q}$, where $d_i, e_j$ are positive rational numbers that
are $q$-integral, have $q$-integral inverses, and satisfy the conditions
(3.12)
PROOF. The map $M \to r(M)^{-1}M$ is obviously an automorphism of the group
$S^n(q)$ that does not affect the elements of $\Gamma = \Gamma_0^n(q)$. Hence, the $\mathbb{Q}$-linear map from
$L^n(q)$ to itself that is given by the condition $(M)_\Gamma \to (r(M)^{-1}M)_\Gamma$ for $M \in S^n(q)$
is an automorphism of the ring $L^n(q)$. It now follows from Proposition 1.11 that the
map
(3.13)

PROPOSITION 3.9. Let $M, M' \in S^n(q)$. Suppose that the symplectic divisor ratios
$e_1(M)/d_1(M)$ and $e_1(M')/d_1(M')$ are relatively prime. Then the relation
(3.16) $(M)_\Gamma(M')_\Gamma = (MM')_\Gamma$,
where $\Gamma = \Gamma_0^n(q)$, holds in the Hecke ring $L^n(q)$.

PROOF. The proof of this proposition is similar to the proof of Proposition 2.5,
with obvious modifications. So we shall be brief. From Lemma 3.8 and the definition
of the symplectic divisors it follows that one need only prove the proposition in the
case when $M$ and $M'$ are integer matrices and $r = r(M)$ and $r' = r(M')$ are relatively
prime. In analogy with (2.8) we obtain
where $a(M,M';H)$ are nonnegative integers depending only on the $\Gamma$-double cosets
of $M$, $M'$, and $H$. For $m \in \mathbb{N}$ we let
(3.18) $SD(m) = SD^n(m) = \{\operatorname{diag}(d_1,\dots,d_n;e_1,\dots,e_n);\ \dots\}$
denote the set of all integer matrices of the form (3.9) with $r(M) = m$. By Lemma
3.6, we may assume that $M \in SD(r)$, $M' \in SD(r')$, and $H \in SD(rr')$ for all $H$ in (3.17).
If $m$ is prime to $q$, we can define the element
(3.19) $T(m) = T^n(m) = \sum_{M \in SD^n(m)} (M)_\Gamma$
of the ring $L^n(q)$. If $m$ and $m'$ are relatively prime and also prime to $q$, then, summing
(3.17) over all $M \in SD(m)$ and $M' \in SD(m')$, we obtain
LEMMA 3.10. If $m$ and $m'$ are relatively prime (and also prime to $q$), and if the matrices $M$ and $M'$ run
through a set of representatives of the left cosets $\Gamma\backslash SM(m)$ and $\Gamma\backslash SM(m')$, respectively,
then the product $MM'$ runs through a complete set of representatives of the left cosets
$\Gamma\backslash SM(mm')$. In particular, the relation (3.20) holds.

PROOF. The first assertion follows from Lemmas 3.6 and 3.1.
If $\{M_1,\dots,M_\mu\}$ and $\{M_1',\dots,M_\nu'\}$ are fixed sets of representatives of the left
cosets $\Gamma\backslash SM(m)$ and $\Gamma\backslash SM(m')$, respectively, then every product $M_iM_j'$ is obviously
contained in $SM(mm')$. Suppose that two such products lie in the same $\Gamma$-left coset, say,
$\gamma M_iM_j' = M_kM_l'$, where $\gamma \in \Gamma$. We set $H = M_k^{-1}\gamma M_i = M_l'(M_j')^{-1}$. Then $mH$ and
$m'H$ are integer matrices, and since $(m,m') = 1$ it follows that $H$ is an integer matrix.
On the other hand, $H \in S^n(q)$ and $r(H) = 1$. Thus, $H \in \Gamma^n \cap S^n(q) = \Gamma_0^n(q) = \Gamma$,
so that $M_k \in \Gamma M_i$ and $M_l' \in \Gamma M_j'$. This means that $k = i$ and $l = j$. Thus, all of the
products $M_iM_j'$ belong to different left cosets $\Gamma\backslash SM(mm')$. If $\Gamma M_0$ ($M_0 \in SM(mm')$)
is an arbitrary left coset, then it follows from Lemma 3.6 that $M_0$ can be written in the
form $M_0 = MM'$, where $M \in SM(m)$ and $M' \in SM(m')$. Then $M' = \gamma'M_j'$, where
$\gamma' \in \Gamma$, and $M\gamma' = \gamma M_i$, where $\gamma \in \Gamma$; hence, $M_0 = \gamma M_iM_j'$, and the left coset $\Gamma M_0$
contains the product $M_iM_j'$. Lemma 3.10, and hence also Proposition 3.9, are proved.
$\square$
The next lemma is useful for explicitly computing the left coset expansions of
elements of the Hecke ring $L^n(q)$.
LEMMA 3.11. Every left coset $\Gamma M$, where $\Gamma = \Gamma_0^n(q)$ and $M \in S^n(q)$, contains one
and only one representative of the form
(3.23) $\begin{pmatrix}A&B\\0&D\end{pmatrix} = \begin{pmatrix}r(M)D^*&B\\0&D\end{pmatrix}$,
where $D$ belongs to a fixed $\Lambda^n$-left coset of $GL_n(\mathbb{Z}_q)$, and $B$ belongs to a fixed residue
class of the set
(3.24) $B(D) = B(D)_{\mathbb{Q}} = \{B \in M_n(\mathbb{Q});\ {}^tBD = {}^tDB\}$
modulo $D$, where
(3.25)

PROOF. The lemma follows directly from Lemma 3.4, the relations (3.1), and the
definitions. $\square$
Just as in the case of the general linear group, the study of the global Hecke rings
$L^n(q)$ reduces to the study of the local subrings. Let $p$ be a prime number not dividing
$q$. We set
(3.26)
where $\mathbb{Z}[p^{-1}]$ is the ring (2.17). Since $\Gamma_0^n(q) \subset S_p^n(q) \subset S^n(q)$, it follows that
$(\Gamma_0^n(q), S_p^n(q))$ is a Hecke pair, and the Hecke ring
(3.27)
may be regarded as a subring of the Hecke ring $L^n(q)$. The subrings $L_p^n(q) \subset L^n(q)$
as $p$ runs through the primes not dividing $q$ are called the local subrings of $L^n(q)$.
THEOREM 3.12. For $n, q \in \mathbb{N}$ the Hecke ring $L^n(q)$ is generated by the local subrings
$L_p^n(q)$ as $p$ ranges over all primes not dividing $q$.

PROOF. For $r \in \mathbb{Q}^*$ and $p$ a prime number, as before we let $\nu_p(r)$ denote the power
with which $p$ occurs in the prime factorization of $r$. If $M \in S^n(q)$ and $p \nmid q$, we define
the symplectic $p$-divisor matrix of $M$ by setting
(3.28) $\operatorname{sd}_p(M) = \operatorname{diag}\bigl(p^{\nu_p(d_1)},\dots,p^{\nu_p(d_n)};p^{\nu_p(e_1)},\dots,p^{\nu_p(e_n)}\bigr),$
and we set $M_p = \operatorname{sd}_p(M)$; then $M_p = E_{2n}$ for all except finitely many $p$. From Proposition 3.9 it now follows that
the corresponding double coset in the Hecke ring $L^n(q)$ has the expansion
$$(M)_\Gamma = \prod_{p\,\nmid\, q}(M_p)_\Gamma,$$
where $\Gamma = \Gamma_0^n(q)$. Since $(M_p)_\Gamma \in L_p^n(q)$, and since $L^n(q)$ consists of finite linear
combinations of $(M)_\Gamma$ (where $M \in S^n(q)$), this proves the theorem. $\square$
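The factorization (3.28) is easy to check on a concrete example. The sketch below (made-up divisors) splits a symplectic divisor matrix into its $p$-parts and confirms that the product over the relevant primes recovers the original diagonal.

```python
def vp(r, p):
    # exponent of the prime p in the factorization of the positive integer r
    k = 0
    while r % p == 0:
        r //= p
        k += 1
    return k

def sd_p(diag, p):
    # symplectic p-divisor matrix: keep only the p-part of each diagonal entry
    return [p ** vp(x, p) for x in diag]

diag = [1, 6, 36, 6]                 # sd(M) = diag(1, 6; 36, 6), r(M) = 6
primes = [2, 3]                      # the primes dividing the entries
parts = {p: sd_p(diag, p) for p in primes}
product = [1] * len(diag)
for p in primes:
    product = [a * b for a, b in zip(product, parts[p])]
print(parts[2], parts[3], product == diag)
# [1, 2, 4, 2] [1, 3, 9, 3] True
```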
PROBLEM 3.13. Let $M, M' \in S^n(q)$. Suppose that the ratios $e_n(M)/d_1(M)$ and
$e_n(M')/d_1(M')$ are relatively prime. Show that $\operatorname{sd}(MM') = \operatorname{sd}(M)\operatorname{sd}(M')$.

PROBLEM 3.14. Show that the following set can be taken as a set of representatives
of the left cosets of $SM^n(m,q)$ (where $(m,q) = 1$) modulo the group $\Gamma_0^n(q)$:

PROBLEM 3.15. Let $D$ be a nonsingular $n \times n$ integer matrix. Show that the number
$\rho(D) = |B(D) \cap M_n/\operatorname{mod} D|$ of left residue classes of the set $B(D) \cap M_n$ modulo $D$
is finite and satisfies the relations
$$\rho(UDV) = \rho(D),\quad\text{if } U, V \in \Lambda^n,$$
$$\rho(D) = d_1^n d_2^{n-1}\cdots d_n,\quad\text{if } \operatorname{ed}(D) = \operatorname{diag}(d_1,\dots,d_n).$$
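The count in Problem 3.15 can be checked by brute force for $n = 2$. The sketch below assumes (consistent with Lemma 3.11) that $B \equiv B' \pmod D$ means $B' - B = SD$ with $S$ an integer symmetric matrix; it enumerates integer $B$ with ${}^tBD = {}^tDB$ for $D = \operatorname{diag}(d_1,d_2)$, reduces each to a canonical representative, and compares the number of classes with $d_1^2d_2$.

```python
from itertools import product

def p_count(d1, d2):
    # brute-force count of B(D) ∩ M_n / mod D for D = diag(d1, d2), d1 | d2
    assert d2 % d1 == 0
    K = 2 * d2                           # enumeration window, a multiple of d2
    keys = set()
    for b11, b12, b21, b22 in product(range(K), repeat=4):
        if b12 * d1 != b21 * d2:         # condition tBD = tDB for diagonal D
            continue
        s = b21 // d1                    # shift by S*D with S = ((0,s),(s,0)) to reduce b21
        key = (b11 % d1, b22 % d2, b21 - s * d1, b12 - s * d2)
        keys.add(key)
    return len(keys)

for d1, d2 in [(1, 1), (1, 3), (2, 4), (3, 3)]:
    print(p_count(d1, d2), d1 ** 2 * d2)
```

The canonical key reduces $b_{11}$ mod $d_1$, $b_{22}$ mod $d_2$, and uses the one remaining symmetric degree of freedom to normalize the pair $(b_{12}, b_{21})$, so distinct keys correspond exactly to distinct residue classes.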
PROBLEM 3.16. Show that the zeta-function of the ring $L^n(q)$, which is defined as
the Dirichlet series
$$Z_N(s) = \sum_{m \in \mathbb{N}_{(q)}} N(T(m))\,m^{-s},$$
where $N\colon L^n(q) \to \mathbb{Q}$ is the homomorphism in Problem 1.16 and the real part of $s$ is
sufficiently large, converges and has an Euler product of the form
Show that
$$N(T(m)) = \sum_{\substack{d_1,\dots,d_n \in \mathbb{N}\\ d_1 \mid d_2 \mid \cdots \mid d_n \mid m}} N\bigl(t(d_1,\dots,d_n)\bigr)\,d_1^n d_2^{n-1}\cdots d_n,$$
where $t(d_1,\dots,d_n) = \bigl(\operatorname{diag}(d_1,\dots,d_n)\bigr)_\Lambda \in H^n$ and the symbol $N$ on the right denotes
the corresponding homomorphism of the ring $H^n$.
[Hint: Use the two preceding problems to prove the last relation.]

PROBLEM 3.17. Show that for $L^2(q)$ one has
$$Z_N(s) = \zeta_q(s)\,\zeta_q(s-1)\,\zeta_q(s-2)\,\zeta_q(s-3)\,\zeta_q(2s-2)^{-1},$$
where
$$\zeta_q(s) = \sum_{m \in \mathbb{N}_{(q)}} m^{-s}.$$
[Hint: Use the previous problem and the following identity, which is a consequence
of Problem 2.12:
$$\sum_{0 \le \delta_1 \le \delta_2} N\bigl(t(p^{\delta_1},p^{\delta_2})\bigr)\,v_1^{\delta_1}v_2^{\delta_2}
= \frac{1 - v_2^2}{(1 - v_1v_2)\bigl(1 - (p+1)v_2 + pv_2^2\bigr)}.\Bigr]$$
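The $p$-factor of the identity in Problem 3.17 can be verified directly as a formal power series in $v = p^{-s}$. Assuming the coset counts $N(t(p^{\delta_1},p^{\delta_2})) = 1$ for $\delta_1 = \delta_2$ and $(p+1)p^{\delta_2-\delta_1-1}$ for $\delta_1 < \delta_2$ (as the hint identity encodes), Problem 3.16 reduces the claim to $\sum_\delta N(T(p^\delta))v^\delta = (1 - p^2v^2)/\bigl((1-v)(1-pv)(1-p^2v)(1-p^3v)\bigr)$, which the sketch below checks coefficient by coefficient.

```python
def N_t(p, d1, d2):
    # left-coset count of t(p^d1, p^d2) in H^2 (d1 <= d2)
    return 1 if d1 == d2 else (p + 1) * p ** (d2 - d1 - 1)

def N_T(p, k):
    # N(T(p^k)) for n = 2 via Problem 3.16: sum over p^a | p^b | p^k
    return sum(N_t(p, a, b) * p ** (2 * a + b)
               for a in range(k + 1) for b in range(a, k + 1))

def mul(f, g, order):
    # truncated product of power series given as coefficient lists
    return [sum(f[i] * g[k - i] for i in range(min(len(f), k + 1)))
            for k in range(order)]

p, order = 2, 8
rhs = [1] + [0] * (order - 1)
for c in (1, p, p * p, p ** 3):          # 1/((1-v)(1-pv)(1-p^2 v)(1-p^3 v))
    rhs = mul(rhs, [c ** k for k in range(order)], order)
rhs = mul([1, 0, -p * p], rhs, order)    # numerator 1 - p^2 v^2
print([N_T(p, k) for k in range(order)] == rhs)  # True
```

For example, $N(T(p)) = 1 + p + p^2 + p^3 = (1+p)(1+p^2)$, the classical degree of the genus-2 Hecke operator $T(p)$.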
PROBLEM 3.18. Show that the Hecke pairs $(\Lambda^2, G^2)$ and $(\Gamma^1, S^1)$ satisfy the con-
ditions (1.26) and (1.28), so that the map (1.27) gives a natural isomorphism between
the rings $H^2$ and $L^1(1)$. From this and Problem 2.13 deduce that in the case $n = 1$ the
elements (3.19) of the ring $L^1(q)$ can be multiplied by the rule
$$T(m)T(m_1) = \sum_{d \mid m,\,m_1} d\,(dE_2)_\Gamma\,T(mm_1/d^2),$$
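Applying the coset-degree homomorphism (the map $N$ of Problem 1.16, which counts left cosets) to this rule gives a numerically checkable identity: for $n = 1$ the element $T(m)$ contains $\sigma_1(m) = \sum_{d \mid m} d$ left cosets and each $(dE_2)_\Gamma$ contains one, so the rule forces $\sigma_1(m)\sigma_1(m_1) = \sum_{d \mid (m,m_1)} d\,\sigma_1(mm_1/d^2)$. A quick sketch:

```python
from math import gcd

def sigma1(m):
    # sigma_1(m) = sum of divisors = number of left cosets in T(m) for n = 1
    return sum(d for d in range(1, m + 1) if m % d == 0)

def rhs(m, m1):
    g = gcd(m, m1)
    return sum(d * sigma1(m * m1 // d ** 2)
               for d in range(1, g + 1) if g % d == 0)

for m, m1 in [(2, 2), (6, 4), (12, 18), (5, 7)]:
    print(sigma1(m) * sigma1(m1) == rhs(m, m1))  # True each time
```

For instance, $\sigma_1(6)\sigma_1(4) = 12\cdot 7 = 84 = \sigma_1(24) + 2\,\sigma_1(6) = 60 + 24$.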
2. Local rings. In this subsection we study the structure of the local Hecke rings
$L_p^n(q)$, where $p$ is a prime not dividing $q$. We first note that this structure does not
depend on $q$.

LEMMA 3.20. Let $p$ be a prime not dividing $q$. Then the restriction of the map
$\varepsilon\colon D_{\mathbb{Q}}(\Gamma^n, S^n(q,1)) \to L^n(q)$ to the subring
$$L_p^n = L_p^n(1) = D_{\mathbb{Q}}(\Gamma^n, S_p^n(1))$$
PROOF. The lemma follows from Theorem 3.3(5) and Lemma 3.5. $\square$
This lemma enables us to restrict ourselves to the case $q = 1$ when proving structure
theorems.
As in §2.2, it is convenient to reduce the study of $L_p^n(q)$ to the study of its integral
subring
(3.32)
Lemma 3.20 implies that the ring $\underline{L}_p^n(q)$ is naturally isomorphic to the ring $\underline{L}_p^n = \underline{L}_p^n(1)$.

LEMMA 3.21. The element
(3.33)
of the Hecke ring $L_p^n(q)$, where $(p,q) = 1$, is invertible in $L_p^n(q)$, and $\Delta^{-1} = (p^{-1}E_{2n})$.
The ring $L_p^n(q)$ is generated by $\Delta^{-1}$ and the subring $\underline{L}_p^n(q)$.
PROOF. The lemma follows from Lemma 3.8 and the definitions. $\square$
We set
(3.34) $T(p^{\delta_1},\dots,p^{\delta_n};p^{\varepsilon_1},\dots,p^{\varepsilon_n}) = \bigl(\operatorname{diag}(p^{\delta_1},\dots,p^{\delta_n};p^{\varepsilon_1},\dots,p^{\varepsilon_n})\bigr)_{\Gamma_0^n(q)},$
where $\delta_i,\varepsilon_j \in \mathbb{Z}$, $\delta_1 + \varepsilon_1 = \cdots = \delta_n + \varepsilon_n$. In this notation the ring $\underline{L}_p^n(q)$ consists of
linear combinations of elements of the form (3.34), where
(3.35)
This follows from Lemmas 1.5 and 3.6. We say that such an element is primitive if
$\delta_1 = 0$, and that it is imprimitive if $\delta_1 \ge 1$. An arbitrary element $T \in \underline{L}_p^n(q)$ is said to be
primitive (or imprimitive) if it is a linear combination of primitive (resp. imprimitive)
elements of the form (3.34)-(3.35). Clearly, any element $T$ in $\underline{L}_p^n(q)$ can be uniquely
represented in the form
(3.36)
where $T_{\mathrm{pr}}$ is primitive and $T_{\mathrm{im}}$ is imprimitive. Lemma 3.21 implies that the subset $I$
of all imprimitive elements of $\underline{L}_p^n(q)$ is the principal ideal of this ring that is generated
by the element (3.33):
(3.37)
LEMMA 3.22. Let $n > 1$. Let the $\mathbb{Q}$-linear map
(3.38) $\Psi\colon \underline{L}_p^n(q) \to \underline{L}_p^{n-1}(q)$
be defined on the elements (3.34)-(3.35) by
$$\Psi\bigl(T(p^{\delta_1},\dots,p^{\delta_n};p^{\varepsilon_1},\dots,p^{\varepsilon_n})\bigr) =
\begin{cases} T(p^{\delta_2},\dots,p^{\delta_n};p^{\varepsilon_2},\dots,p^{\varepsilon_n}), & \text{if } \delta_1 = 0,\\ 0, & \text{if } \delta_1 > 0.\end{cases}$$
Then $\Psi$ is an epimorphism of rings, and its kernel is the ideal $I$ of imprimitive elements
of $\underline{L}_p^n(q)$.
PROOF. From Lemmas 1.5 and 3.6 and the definitions it follows that, as a map
of vector spaces, $\Psi$ is an epimorphism with kernel $I$. Hence, it remains to prove that
$\Psi$ is a ring homomorphism. This, in turn, will follow if we prove that the image of a
product of primitive elements of the form (3.34)-(3.35) is equal to the product of the
images. Let
$$M = \operatorname{diag}(p^{\delta_1},\dots,p^{\delta_n};p^{\varepsilon_1},\dots,p^{\varepsilon_n}),\qquad
M' = \operatorname{diag}(p^{\delta_1'},\dots,p^{\delta_n'};p^{\varepsilon_1'},\dots,p^{\varepsilon_n'}),$$
where the exponents satisfy the inequalities (3.35), $\delta_i + \varepsilon_i = \delta$, $\delta_i' + \varepsilon_i' = \delta'$. We
suppose that $\delta_1 = \delta_1' = 0$, and we set
$$M_0 = \operatorname{diag}(p^{\delta_2},\dots,p^{\delta_n};p^{\varepsilon_2},\dots,p^{\varepsilon_n}),$$
where the symbol $\sum{}'$ means that the $H$ are primitive, $\Gamma = \Gamma_0^n(q)$, $\Gamma' = \Gamma_0^{n-1}(q)$, we
set $H_0 = \operatorname{diag}(p^{\alpha_2},\dots,p^{\alpha_n};p^{\beta_2},\dots,p^{\beta_n})$ for $H = \operatorname{diag}(p^{\alpha_1},\dots,p^{\alpha_n};p^{\beta_1},\dots,p^{\beta_n})$, and
$c(M,M';H)$ is the number of pairs $M_i, M_j'$ which belong to fixed sets of representatives
of $\Gamma\backslash\Gamma M\Gamma$ and $\Gamma\backslash\Gamma M'\Gamma$, respectively, and which satisfy the relations
(3.39) $M_iM_j' = \gamma H$ with $\gamma \in \Gamma$.
Similarly, but now summing over integer matrices $H_0$, we have
where $c(M_0,M_0';H_0)$ is the number of pairs $N_k, N_l'$ in $\Gamma'\backslash\Gamma'M_0\Gamma'$ and $\Gamma'\backslash\Gamma'M_0'\Gamma'$,
respectively, which satisfy the relation
(3.40) $N_kN_l' = \gamma'H_0$ with $\gamma' \in \Gamma'$.
Since the matrices $H_0$ in the above expansions obviously run through the same set, it
follows that to prove that
$$\Psi\bigl((M)_\Gamma(M')_\Gamma\bigr) = (M_0)_{\Gamma'}(M_0')_{\Gamma'} = \Psi\bigl((M)_\Gamma\bigr)\Psi\bigl((M')_\Gamma\bigr)$$
it suffices to verify that
(3.41) $c(M,M';H) = c(M_0,M_0';H_0)$
for primitive matrices $H = \operatorname{sd}(H)$ with $r(H) = p^{\delta+\delta'}$. These coefficients depend
only on the double cosets of the corresponding matrices. Hence, by Lemma 3.6,
without loss of generality we may assume that
$$H = \operatorname{diag}(p^{\alpha_1},\dots,p^{\alpha_n};p^{\beta_1},\dots,p^{\beta_n}),$$
where $\alpha_i + \beta_i = \delta + \delta'$, $\beta_n = 0$, and
$$H_0 = \operatorname{diag}(p^{\alpha_1},\dots,p^{\alpha_{n-1}};p^{\beta_1},\dots,p^{\beta_{n-1}}).$$
By Lemmas 3.11 and 2.7, we may take
where $D_i, D_j'$ are matrices of the form (2.15), and $B_i$ and $B_j'$ are fixed modulo $D_i$ and
$D_j'$, respectively. It now follows from (3.39) that
and
$$B_i = \begin{pmatrix}B_i^{(n-1)}&0\\0&0\end{pmatrix},\qquad
B_j' = \begin{pmatrix}B_j'^{(n-1)}&0\\0&0\end{pmatrix},$$
so that
$$\gamma_{12} = \begin{pmatrix}\gamma_{12}^{(n-1)}&0\\0&0\end{pmatrix},$$
and the corresponding matrices
belong to different $\Gamma'$-left cosets in $\Gamma'M_0\Gamma'$ and $\Gamma'M_0'\Gamma'$, respectively, and they satisfy
the relation
$$N_iN_j' = \gamma H_0,\qquad\text{where }\gamma = \begin{pmatrix}\gamma_{11}^{(n-1)}&\gamma_{12}^{(n-1)}\\0&\gamma_{22}^{(n-1)}\end{pmatrix} \in \Gamma'.$$
If we repeat the same argument in the reverse order, we see that, with a suitable choice
of $\Gamma'$-left coset representatives, any pair $N_k, N_l'$ that satisfies (3.40) can be obtained in
the manner indicated. This proves (3.41), and hence the lemma. $\square$
We can now completely determine the structure of the rings $\underline{L}_p^n(q)$ and $L_p^n(q)$.
THEOREM 3.23. Suppose that $n, q \in \mathbb{N}$, and $p$ is a prime number not dividing $q$.
Then:
(1) the ring $\underline{L}_p^n(q)$ is generated over $\mathbb{Q}$ by the elements
(3.42) $T(p) = T^n(p) = T(\underbrace{1,\dots,1}_{n};\underbrace{p,\dots,p}_{n})$
and
$$T_i(p^2) = T_i^n(p^2) = T(\underbrace{1,\dots,1}_{n-i},\underbrace{p,\dots,p}_{i};\underbrace{p^2,\dots,p^2}_{n-i},\underbrace{p,\dots,p}_{i})$$
for $i = 1,\dots,n$;
(2) the ring $L_p^n(q)$ is generated over $\mathbb{Q}$ by the elements (3.42) and the element
$T_n(p^2)^{-1} = \Delta_n(p)^{-1}$;
(3) the elements (3.42) are algebraically independent over $\mathbb{Q}$.
where $F_1$ is another polynomial. Since the element $T_n(p^2) = \Delta$ is not a zero divisor in
$L_p^n(q)$, the equality
implies that $F_1(T(p),\dots,T_n(p^2)) = 0$, and this contradicts the choice of $F$ as a
polynomial of minimal degree. $\square$

PROBLEM 3.24. State and prove results similar to the results in Problem 2.19 in the
case when
$$G = \{M \in M_{2n}(\mathbb{Q}_p);\ {}^tMJ_nM = r(M)J_n,\ r(M) \ne 0\}$$
and $\Gamma = G \cap GL_{2n}(\mathbb{Z}_p)$.
3. The spherical map. The procedure described in the previous subsection for
expressing an element of a local Hecke ring as a polynomial in the generators is
effective, but in general it is not practical. As in the case of the general linear group, we
avoid this difficulty by constructing another polynomial realization of the local Hecke
rings. Namely, we use rings of polynomials that are invariant under a certain finite
group of transformations of the variables.
For later applications it is convenient to carry out all of the constructions for
suitable extensions of the local Hecke rings of the symplectic group. The extensions
we consider are the Hecke rings of the "triangular" subgroup
$$S_0 = S_0^n = \Bigl\{\begin{pmatrix}A&B\\0&D\end{pmatrix} \in S^n\Bigr\}$$
(3.43) $= \Bigl\{M = \begin{pmatrix}A&B\\0&D\end{pmatrix} \in M_{2n}(\mathbb{Q});\ {}^tAD = r(M)E_n,\ {}^tBD = {}^tDB\Bigr\}.$
To construct our extensions of the local Hecke rings for the group $S_0$, we define
the subgroups
where $p$ is a prime number. From Lemma 3.25(3) it follows that $(\Gamma_0, S_{0,p})$ is a Hecke
pair.

LEMMA 3.26. The Hecke pairs $(\Gamma_0^n(q), S_p^n(q))$, where $(p,q) = 1$, and $(\Gamma_0, S_{0,p})$
satisfy the conditions (1.26). The following diagram commutes:
(3.45)
where $\varepsilon = \varepsilon_1$ and $\varepsilon_q$ are the imbeddings (1.27), and $\varepsilon_{1,q}$ is the isomorphism in Lemma
3.20.
PROOF. The first and third conditions in (1.26) are obvious in the case of our Hecke
pairs, and the second condition is a consequence of Lemma 3.4. The commutativity
of the diagram follows from the definitions of the three mappings. $\square$
According to this lemma, instead of $L_p^n(q)$ one can study the isomorphic (and
independent of $q$) subring
(3.46)
(3.47)
REMARK. The element (3.48) is obviously the image of the element (3.33) under
the map $\varepsilon_q$. In general, for simplicity we shall usually use the same notation for
elements in $\varepsilon_q(L_p^n(q)) \subset L_{0,p}$ as for their preimages.
The spherical map for the Hecke ring $L_{0,p}$ will be defined in two stages. We first
define a map to a suitable extension of the local $p$-Hecke ring of the general linear
group $GL_n$, and we then use the spherical map of this extension that was defined in
§2.3. We start with the left coset space. Let
$$\Gamma_0M,\qquad\text{where } M = \begin{pmatrix}p^\delta D^*&B\\0&D\end{pmatrix} \in S_{0,p},$$
be an arbitrary left coset of the group $S_{0,p}$ modulo $\Gamma_0$. By Lemma 3.25, the left coset
$\Lambda D$ of the element $D \in G_p$, along with the exponent $\delta$, is uniquely determined by the
original left coset $\Gamma_0M$. We then set
$$\Phi\bigl((\Gamma_0M)\bigr) = x_0^\delta(\Lambda D),$$
where we suppose that all of the powers $x_0^\delta$ ($\delta \in \mathbb{Z}$) are linearly independent over the
left coset module of $G_p$ modulo $\Lambda$. We extend $\Phi$ by linearity to a map of the left coset
module:
$$\Phi = \Phi_p^n\colon L_{\mathbb{Q}}(\Gamma_0, S_{0,p}) \to L_{\mathbb{Q}[x_0^{\pm1}]}(\Lambda^n, G_p^n).$$
PROPOSITION 3.28. The restriction of $\Phi$ to the ring $L_{0,p}$ is an epimorphism of this
ring onto the ring $H_p^n[x_0^{\pm1}]$.

PROOF. Let $X \in L_{0,p}$. By definition, $X$ is invariant under right multiplication
by any matrix of the form $U(V)$ with $V \in \Lambda$. This implies that $\Phi(X)$ is invariant
under right multiplication by any element $V \in \Lambda$, where the multiplication acts only
on the left cosets, not on the coefficients. Thus, $\Phi(X) \in H_p^n[x_0^{\pm1}]$. From the definition
of the multiplication in Hecke rings and the definition of $\Phi$ it follows that $\Phi$ is a
ring homomorphism. Finally, if $D$ is an arbitrary matrix in $G_p$ and $\delta \in \mathbb{Z}$, then
$M = \begin{pmatrix}p^\delta D^*&0\\0&D\end{pmatrix} \in S_{0,p}$, and from (3.44) it follows that
$$\Phi\bigl((M)_{\Gamma_0}\bigr) = \alpha(M)\,x_0^\delta\,(D)_\Lambda,$$
where $\alpha(M)$ is a positive integer. This gives us the epimorphism. $\square$
Now let $\omega = \omega_p^n$ be the $\mathbb{Q}$-linear homomorphism from the ring $H_p^n[x_0^{\pm1}]$ to the
subring $\mathbb{Q}[x_0^{\pm1},\dots,x_n^{\pm1}]$ of the field of rational functions over $\mathbb{Q}$ in the variables
$x_0, x_1,\dots,x_n$ such that $\omega(x_0) = x_0$ and the restriction of $\omega$ to $H_p^n$ coincides with the
spherical map $\omega$ defined in §2.3. From Theorem 2.20 and the definitions we then have

LEMMA 3.29. The map $\omega = \omega_p^n$ is an isomorphism of the ring $H_p^n[x_0^{\pm1}]$ with the
subring $\mathbb{Q}[x_0^{\pm1},\dots,x_n^{\pm1}]_S \subset \mathbb{Q}[x_0^{\pm1},\dots,x_n^{\pm1}]$ consisting of all functions symmetric in
$x_1,\dots,x_n$.
Finally, we define the spherical map $\Omega = \Omega_p^n$ from $L_{0,p}$ to $\mathbb{Q}[x_0^{\pm1},\dots,x_n^{\pm1}]_S$ by
setting
(3.49) $\Omega(X) = \omega(\Phi(X))$ $(X \in L_{0,p})$.
Thus, we obtain a commutative diagram through the ring $H_p^n[x_0^{\pm1}]$.
Since $\Phi$ and $\omega$ are $\mathbb{Q}$-linear ring epimorphisms, it follows that $\Omega$ is also a $\mathbb{Q}$-linear ring
epimorphism.
Let $W = W_n$ be the group of $\mathbb{Q}$-automorphisms of the rational function field
$\mathbb{Q}(x_0,x_1,\dots,x_n)$ that is generated by all permutations of the variables $x_1,\dots,x_n$ and
by the automorphisms $\tau_1,\dots,\tau_n$, which act according to the rule
(3.51) $\tau_i(x_0) = x_0x_i$, $\tau_i(x_i) = x_i^{-1}$, $\tau_i(x_j) = x_j$ $(j \ne 0, i)$.
The reader can easily verify that each of the coefficients $r_a = r_a^n(x_1,\dots,x_n)$ in the
expansion
$$\prod_{i=1}^n(1 - x_iv)(1 - x_i^{-1}v) = \sum_{a=0}^{2n}(-1)^a r_a v^a$$
is invariant under the transformations in $W_n$, and hence so are the polynomials
$t = x_0\prod_{i=1}^n(1+x_i)$ and $p_a = x_0^2x_1\cdots x_n\,r_a$ $(0 \le a \le n-1)$. The polynomials $t, p_0,\dots,p_{n-1}$
play the same role for $W_n$ that the elementary symmetric polynomials play for the
symmetric group.
THEOREM 3.30. Let $n \in \mathbb{N}$, and let $p$ be a prime number. Then:
(1) The restriction of the map $\Omega = \Omega_p^n$ to the integral subring $\underline{L}_p^n \subset L_p^n$ is an
isomorphism of this subring with the ring $\mathbb{Q}[x_0,\dots,x_n]_W$ of all $W_n$-invariant polynomials
in $x_0, x_1,\dots,x_n$ over $\mathbb{Q}$.
(2) Any element in $\mathbb{Q}[x_0,\dots,x_n]_W$ can be written as a polynomial in
(3.55) $t = t_n(x_0,x_1,\dots,x_n)$, $p_a = p_a^n(x_0,x_1,\dots,x_n)$ $(0 \le a \le n-1)$,
(3.57) $\mathbb{Q}[x_0^{\pm1},\dots,x_n^{\pm1}]_W = \mathbb{Q}[x_0,\dots,x_n]_W\bigl[(x_0^2x_1\cdots x_n)^{-1}\bigr].$
COROLLARY 3.31. The restriction of $\Phi = \Phi_p^n$ to the subring $L_p^n \subset L_{0,p}$ is a monomorphism.

The plan of proof of Theorem 3.30 is similar to that for Theorem 2.20. By
computing the $\Omega$-images of the generators of $\underline{L}_p^n$, we obtain generators of the ring
$\Omega(\underline{L}_p^n)$. This enables us to study the algebraic features of this ring and, in particular, to
prove that the restriction of $\Omega$ to $\underline{L}_p^n$ is a monomorphism. The ring $L_p^n$ is investigated
using Lemma 3.27. However, in the case of the symplectic group some preliminary
work is necessary in order to compute the $\Omega$-images of the generators of $\underline{L}_p^n$. This is
the purpose of Lemmas 3.32-3.34.
LEMMA 3.32. In the Hecke ring $L_p^n \subset L_{0,p}$ the elements (3.42) have the following
expansion into left cosets modulo $\Gamma_0 = \Gamma_0^n$:
(3.58) $T(p) = T^n(p) = \sum_{a=0}^n \Pi_a$,
where
where
$D_{a,b} = D_{a,b}^n(p)$ are the matrices (2.31), and $r_p(M)$ denotes the rank of $M$ over the field
of $p$ elements. The sum of the left cosets $\Pi_a$ $(0 \le a \le n)$ and the sum of the left cosets
$\Pi_{a,b}^{(r)}$ $(a + b \le n,\ r \le a)$ belong to the Hecke ring $L_{0,p}$, and
(3.63)
PROOF. Without loss of generality we may consider the elements (3.42) in the case
$q = 1$.
From Lemma 3.6 it follows that the double coset $\Gamma\bigl(\begin{smallmatrix}E_n&0\\0&pE_n\end{smallmatrix}\bigr)\Gamma$, where $\Gamma = \Gamma^n$, coincides
with the set $SM(p) = SM^n(p,1)$ (see (3.21)). From the definition of this set we see
that it contains the matrix $\bigl(\begin{smallmatrix}A&B\\0&D\end{smallmatrix}\bigr)$ if and only if $A, D \in M_n(\mathbb{Z})$, ${}^tAD = pE_n$, and
$B \in B_0(D)$. Lemma 2.2 implies that, if $D$ is a fixed nonsingular integer matrix, then
${}^tA = pD^{-1}$ is an integer matrix if and only if $D$ lies in one of the double cosets $\Lambda D_a\Lambda$
for $0 \le a \le n$. Thus, by Lemma 3.11 we have the decomposition
$$\Gamma\begin{pmatrix}E_n&0\\0&pE_n\end{pmatrix}\Gamma
= \bigcup_{a=0}^{n}\ \bigcup_{\substack{D \in \Lambda\backslash\Lambda D_a\Lambda\\ B \in B_0(D)/\operatorname{mod}D}} \Gamma\begin{pmatrix}pD^*&B\\0&D\end{pmatrix}.$$
From this and the definition of the map $\varepsilon$ we obtain (3.58).
If we apply Lemma 3.6 to the set $SM(p^2) = SM^n(p^2,1)$, we obtain the decomposition
$$SM(p^2) = \bigcup_{i=0}^n SM^{(i)}(p^2),$$
where
$$SM^{(i)}(p^2) = \Gamma\begin{pmatrix}D_i&0\\0&p^2D_i^{-1}\end{pmatrix}\Gamma.$$
On the other hand, just as in the earlier case of the set $SM(p)$, we see that Lemmas 2.2
and 3.11 give us the decomposition
(3.64) $SM(p^2) = \bigcup_{a+b \le n}\ \bigcup_{\substack{D \in \Lambda\backslash\Lambda D_{a,b}\Lambda\\ B \in B_0(D)/\operatorname{mod}D}} \Gamma\begin{pmatrix}p^2D^*&B\\0&D\end{pmatrix}.$
Since each set $SM^{(i)}(p^2)$ consists of a single double coset modulo $\Gamma \subset \Lambda^{2n}$, it follows
that all of the matrices in such a set have the same rank over the field of $p$ elements;
and this rank is obviously $n - i$. Thus, $SM^{(i)}(p^2)$ is the union of all left cosets $\Gamma M$ in
(3.64) for which $r_p(M) = n - i$. From this and the definitions we obtain (3.61).
We set
$$S_a = \Bigl\{\begin{pmatrix}pD^*&B\\0&D\end{pmatrix};\ D \in \Lambda D_a\Lambda,\ B \in B_0(D)\Bigr\}$$
and
$$S_{a,b}^{(r)} = \Bigl\{M = \begin{pmatrix}p^2D^*&B\\0&D\end{pmatrix};\ D \in \Lambda D_{a,b}\Lambda,\ B \in B_0(D),\ r_p(M) = n - a + r\Bigr\}.$$
(3.65) $\Pi_a = \sum_{M \in \Gamma_0\backslash S_a} (\Gamma_0M)$
and
(3.66) $\Pi_{a,b}^{(r)} = \sum_{M \in \Gamma_0\backslash S_{a,b}^{(r)}} (\Gamma_0M).$
Since obviously $\Gamma_0S_a\Gamma_0 = S_a$ and $\Gamma_0S_{a,b}^{(r)}\Gamma_0 = S_{a,b}^{(r)}$, it follows from the above decompositions
that the elements $\Pi_a$ and $\Pi_{a,b}^{(r)}$ are invariant under any right multiplication
by elements of $\Gamma_0$; hence, they belong to the ring $L_{0,p}$. Finally, let $M = \begin{pmatrix}pD^*&B\\0&D\end{pmatrix}$
be an arbitrary element of $S_a$. If we replace $M$ by $\gamma M\gamma_1$ with suitable $\gamma,\gamma_1 \in \Gamma_0$, we
may suppose that $D = D_a$. Then $B$ is an integer matrix of the form $(B_{ij})$ $(i,j = 1,2)$,
where $B_{11} \in S_{n-a}(\mathbb{Z})$, $B_{22} \in S_a(\mathbb{Z})$, and $B_{12} = p\,{}^tB_{21}$. This implies that
We now describe the sets of the form $B_0(D)/\operatorname{mod}D$, and we compute the number
of elements they have. It will be more convenient for later applications if we do this in
a general form.

LEMMA 3.33. Suppose that $D \in M_n(\mathbb{Z})$ and $\det D \ne 0$. Then:
(1) If $\alpha,\beta \in \Lambda^n$, then $B_0(\alpha D\beta) = \alpha^*B_0(D)\beta$, and one can take the set
$\alpha^*\{B_0(D)/\operatorname{mod}D\}\beta$ as representatives of $B_0(\alpha D\beta)/\operatorname{mod}\alpha D\beta$. In particular, if $b_0(D)$
denotes the number of elements in $B_0(D)/\operatorname{mod}D$, then $b_0(\alpha D\beta) = b_0(D)$.
(2) Suppose that $D = \operatorname{ed}(D) = \operatorname{diag}(d_1,\dots,d_n)$ is an elementary divisor matrix
(see (2.4)). Then one can take
and for $B, B_1 \in B_0(D)$ the congruence $B \equiv B_1 \pmod{D}$ is equivalent to the congruence
$\alpha^*B\beta \equiv \alpha^*B_1\beta \pmod{\alpha D\beta}$. This implies the first part of the lemma. The second part
follows easily from the definitions. $\square$
We are now ready to compute the images of elements of the form $\Pi_a$ and $\Pi_{a,b}^{(r)}$
under the maps $\Phi = \Phi_p^n$ and $\Omega = \Omega_p^n$.
and
$$\Omega\bigl(\Pi_{a,b}^{(r)}\bigr) = p^{b(a+b+1)}\,l_p(r,a)\,x_0^2\,\omega\bigl(\pi_{a,b}^n(p)\bigr),$$
where $l_p(r,a)$ is the number of $a \times a$ symmetric matrices of rank $r$ over the field of $p$
elements, $\pi_{a,b}^n(p) \in H_p^n$ are the elements (2.31), and $\omega$ is the spherical map for the ring
$H_p^n$, $a + b \le n$, $r \le a$. In particular, the following formulas hold for the elements (3.48):
$$\Phi(\Delta(p)) = x_0^2\pi_n^n(p) \quad\text{and}\quad \Omega(\Delta_n(p)) = p^{-\langle n\rangle}x_0^2x_1\cdots x_n.$$
PROOF. Using the expansion (3.59), Lemma 3.33, and the definitions, we obtain
which proves the first formula. The second formula follows from the first one and from
Lemma 2.21.
Now suppose that $D \in \Lambda D_{a,b}\Lambda$, where $a + b \le n$. Then $D = \alpha D_{a,b}\beta$ with
$\alpha,\beta \in \Lambda$, and by Lemma 3.33 we can take
(3.67) $B_0(D)/\operatorname{mod}D = \alpha^*\{B_0(D_{a,b})/\operatorname{mod}D_{a,b}\}\beta$
$$= \alpha^*\Bigl\{B = \begin{pmatrix}0&B_{12}&B_{13}\\0&B_{22}&B_{23}\\0&B_{32}&B_{33}\end{pmatrix};\ B_{22} \in S_a(\mathbb{Z})/\operatorname{mod}p,\ \dots\Bigr\}\beta$$
(3.68)
This obviously implies that $r_p(M) = b + r_p(B_{22}) + n - a - b$. Thus,
which proves the third formula. The fourth formula follows from the third one and
the definition of the map $\Omega$. Since obviously $\Delta_n(p) = \Pi_{n,0}^{(0)}$ and $\pi_{n,0}(p) = \pi_n(p)$, the
last formula is a consequence of the formulas already proved and Lemma 2.21. $\square$
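The quantity $l_p(r,a)$ is easy to tabulate by brute force for small $p$ and $a$; in particular $l_p(0,a) = 1$ (only the zero matrix), which is what makes the coefficient matrix in the proof of Theorem 3.30 unitriangular. A sketch:

```python
from itertools import product

def rank_mod_p(M, p):
    # Gaussian elimination over F_p
    M = [row[:] for row in M]
    n, rank = len(M), 0
    for col in range(n):
        piv = next((r for r in range(rank, n) if M[r][col] % p), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        inv = pow(M[rank][col] % p, p - 2, p)   # inverse of the pivot mod p
        M[rank] = [x * inv % p for x in M[rank]]
        for r in range(n):
            if r != rank and M[r][col] % p:
                f = M[r][col] % p
                M[r] = [(x - f * y) % p for x, y in zip(M[r], M[rank])]
        rank += 1
    return rank

def l(p, r, a):
    # number of a x a symmetric matrices of rank r over F_p
    cnt = 0
    for vals in product(range(p), repeat=a * (a + 1) // 2):
        M = [[0] * a for _ in range(a)]
        it = iter(vals)
        for i in range(a):
            for j in range(i, a):
                M[i][j] = M[j][i] = next(it)
        if rank_mod_p(M, p) == r:
            cnt += 1
    return cnt

print([l(2, r, 2) for r in range(3)], sum(l(3, r, 2) for r in range(3)))
# [1, 3, 4] 27
```

The totals over $r$ recover $p^{a(a+1)/2}$, the number of all symmetric $a\times a$ matrices over $\mathbb F_p$.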
(3.69)
By Theorem 3.23, the elements $T(p), T_1(p^2),\dots,T_n(p^2)$ generate
the ring $\underline{L}_p^n$ over $\mathbb{Q}$. Hence, the ring $\Omega(\underline{L}_p^n)$ is generated by the images $\Omega(T(p))$,
$\Omega(T_1(p^2)),\dots,\Omega(T_n(p^2))$. Using (3.58) and Lemma 3.34, we obtain
(3.70) $\Omega(T(p)) = \sum_{a=0}^n \Omega(\Pi_a) = \sum_{a=0}^n x_0\,s_a(x_1,\dots,x_n) = x_0\prod_{i=1}^n(1 + x_i) = t.$
and
$$V_1 = \Bigl\{\sum_{j=0}^{n-1}\mu_jp_j;\ \mu_j \in \mathbb{Q}\Bigr\}$$
coincide. From (3.61) and Lemma 3.34 we obtain
$$\Omega(T_i(p^2)) = \sum_{\substack{a+b \le n\\ a \ge i}} p^{b(a+b+1)}\,l_p(a-i,a)\,x_0^2\,\omega(\pi_{a,b}(p)) = \sum_{a=i}^n l_p(a-i,a)\,x_0^2\Psi_a,$$
where
$$\Psi_a = \sum_{b=0}^{n-a} p^{b(a+b+1)}\,\omega(\pi_{a,b}(p)).$$
We set
$$V_3 = \Bigl\{\sum_{a=1}^n \gamma_a x_0^2\Psi_a;\ \gamma_a \in \mathbb{Q}\Bigr\}.$$
The above formulas for $\Omega(T_i(p^2))$ imply that $V_1 \subset V_3$. The same formulas also
imply that the coefficient matrix for the expansions of $\Omega(T_1(p^2)),\dots,\Omega(T_n(p^2))$ with
respect to $x_0^2\Psi_1,\dots,x_0^2\Psi_n$ is a triangular matrix, has integer entries, and has entries
$l_p(0,a) = 1$ $(a = 1,\dots,n)$ on the main diagonal. Hence, this matrix has an inverse
matrix of the same form, and this implies that each $x_0^2\Psi_a$ $(a = 1,\dots,n)$ is an integer
linear combination of $\Omega(T_1(p^2)),\dots,\Omega(T_n(p^2))$. In particular, $V_3 \subset V_1$. Thus,
$V_1 = V_3$. On the other hand, returning to the polynomials $p_a$, by (3.52) we have
if we take into account that $s_j(x_1^{-1},\dots,x_n^{-1}) = (x_1\cdots x_n)^{-1}s_{n-j}(x_1,\dots,x_n)$, we
obtain
$$p_a = x_0^2x_1\cdots x_n\,r_a = x_0^2\sum_{i+j=a} s_i(x_1,\dots,x_n)\,s_{n-j}(x_1,\dots,x_n).$$
Using the spherical map $\omega = \omega_p^n$ and Lemma 2.21, we can rewrite these formulas in
the form
$$p_a = x_0^2\,\omega\Bigl(\sum_{i+j=a} p^{\langle i\rangle+\langle n-j\rangle}\,\pi_i\pi_{n-j}\Bigr),$$
where $\pi_a = \pi_a^n(p)$. We use the formulas (2.30) to compute the products $\pi_i\pi_{n-j}$, and
we substitute the resulting expressions in the last formula for $p_a$. We obtain
Now suppose that $n > 1$, and (3.56) has been proved for smaller values of $n$. We use
induction on $m$ to prove that any $W$-invariant polynomial $F(x_0,x_1,\dots,x_n)$ of degree
$m$ in $x_0$ is a polynomial in $t, p_0,\dots,p_{n-1}$. If $m = 0$, then $F = F(x_1,\dots,x_n)$ is a
symmetric polynomial that satisfies, for example, the relation $F(x_1^{-1},x_2,\dots,x_n) = F$,
and so it clearly must be a constant. Suppose that $m \ge 1$, and our claim has been
proved for polynomials whose degree in $x_0$ is less than $m$. Let
$$F(x_0,x_1,\dots,x_n) = \sum_{i=0}^m x_0^i\,\varphi_i(x_1,\dots,x_n),$$
which gives the desired relations. From the previous argument it follows that the
polynomial
If $g_0 = 0$, then $G$ is divisible by $y_0$, and $Gy_0^{-1}$ is a polynomial of lower degree that also
vanishes under the above substitution. Hence $g_0 \ne 0$. By assumption,
Since this is an identity in the variables $x_0, x_1,\dots,x_n$, we can set $x_n = 0$ in it. As we
saw before, $t^n, p_1^n,\dots,p_{n-1}^n$ then become $t^{(n-1)}, p_0^{(n-1)},\dots,p_{n-2}^{(n-1)}$, and $p_0^n$ obviously
goes to zero. We thus obtain the identity
The theorem just proved enables us to reduce computations in the local Hecke
rings of the symplectic group to computations in polynomial rings. To show how this
is done, we consider, for example, the problem of summing the formal generating series
for elements of the form (3.19), where $m$ runs through the powers of a fixed prime $p$,
$(p,q) = 1$. Thus, we consider the formal power series
(3.71) $\sum_{\delta=0}^\infty T^n(p^\delta)v^\delta,$
where $T^n(p^\delta) \in L_p^n(q)$ are the elements (3.19). From (3.22), Lemma 3.11 and the
definitions it follows that
(3.72)
where $D \in \Lambda\backslash\Lambda\operatorname{diag}(p^{\delta_1},\dots,p^{\delta_n})\Lambda$, $0 \le \delta_1 \le \cdots \le \delta_n \le \delta$, and $B \in B_0(D)/\operatorname{mod}D$,
since $\begin{pmatrix}p^\delta D^*&B\\0&D\end{pmatrix}$ is an integer matrix if and only if $B$ and $D$ are integer matrices
and all of the elementary divisors of $D$ divide $p^\delta$. Then from the definition of the map
$\Omega$ and Lemma 3.33 we obtain the formal identity
00
where t(p61 , ... , p6•) .= ( diag(p61 , ••. , p6•)) A E H;. The summation of the series on
the right in this relation for arbitrary n is based on explicit formulas for the polynomials
w (t (p61 , ..• , p 6•)) and is beyond the scope of this book (see Andrianov [l, 21). Here
we shall limit ourselves to the cases n = 1 and n = 2. When n = 1, from the definitions
we obtain
(3.73) $\sum_{\delta=0}^\infty \Omega(T^1(p^\delta))v^\delta = \sum_{\delta_1,a=0}^\infty p^{\delta_1}(x_1p^{-1})^{\delta_1}(x_0v)^{\delta_1+a} = \dfrac{1}{(1-x_0v)(1-x_0x_1v)},$
and it remains to compute the last series. This computation easily reduces to our
earlier calculation of the generating series for the polynomials $\omega(t^2(p^\delta))$, where $t^n(m)$
are the elements (2.10). First of all, using the definitions and Lemmas 2.4 and 2.21,
we have
$$\sum_{\delta=0}^\infty \omega\bigl(t^2(p^\delta)\bigr)v_1^\delta = \sum_{\gamma,a=0}^\infty \omega\bigl(t(p^\gamma,p^{\gamma+a})\bigr)v_1^{2\gamma+a}$$
In particular,
(3.80)
Finally, from (3.70) we have
$$\sum_{\delta=0}^\infty T^1(p^\delta)v^\delta = Q_p^1(v)^{-1},$$
where $T^n(p^\delta)$ are elements of the form (3.19), regarded as elements in $\underline{L}_p^n$, and $Q_p^n(v)$ are
the polynomials (3.78). One has the formulas
$$Q_p^1(v) = 1 - T^1(p)v + p\Delta_1(p)v^2,$$
$$Q_p^2(v) = 1 - T^2(p)v + q_p^2(p)v^2 - p^3\Delta_2(p)T^2(p)v^3 + p^6\Delta_2(p)^2v^4,$$
where
$$q_p^2(p) = \bigl(\Omega_p^2\bigr)^{-1}\bigl(x_0^2x_1x_2(x_1 + x_2 + x_1^{-1} + x_2^{-1} + 2)\bigr).$$
PROOF. From (3.73) and the definitions it follows that the isomorphism $\Omega$ maps
the constant term of the power series
to one, and takes all of the other coefficients to zero. Hence, the constant term of
this series is the unit of the ring $\underline{L}_p^1$, and the other coefficients are zero. In a similar
way we find that the second identity is a consequence of (3.75). The formulas for the
coefficients of $Q_p^1$ and $Q_p^2$ follow from (3.81), (3.79), (3.80), and the definitions. $\square$
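For $n = 1$ the identity can also be checked numerically through the degree homomorphism $N$ of Problem 1.16, which counts left cosets: $N(T^1(p^\delta)) = \sigma_1(p^\delta)$ and $N(\Delta_1(p)) = 1$, so the claim becomes $(1 - (p+1)v + pv^2)\cdot\sum_\delta \sigma_1(p^\delta)v^\delta = 1$ as formal power series. A sketch truncating at order 10:

```python
def sigma1_pk(p, k):
    # sigma_1(p^k) = 1 + p + ... + p^k = N(T^1(p^k)), the left-coset count
    return (p ** (k + 1) - 1) // (p - 1)

def series_product(f, g, order):
    # coefficients of the product of two power series, truncated at `order`
    return [sum(f[i] * g[k - i] for i in range(min(len(f), k + 1)))
            for k in range(order)]

for p in (2, 3, 5):
    Q = [1, -(p + 1), p]        # N-image of Q_p^1(v) = 1 - T(p)v + p*Delta_1(p)v^2
    S = [sigma1_pk(p, k) for k in range(10)]
    print(series_product(Q, S, 10) == [1] + [0] * 9)  # True
```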
It is clear that similar identities hold over any ring isomorphic to $\underline{L}_p^n$, for example,
over the ring $\underline{L}_p^n(q)$, where $p \nmid q$.
Theorem 3.30 enables us to parameterize the set of all nonzero $\mathbb{Q}$-linear homomorphisms
from the ring $L_p^n$ to $\mathbb{C}$.

PROPOSITION 3.36. Every nonzero $\mathbb{Q}$-linear homomorphism $\lambda$ from the ring $L_p^n$ to $\mathbb{C}$
has the form
(3.82)
where $\Lambda = (\alpha_0,\dots,\alpha_n)$ is a set of nonzero complex numbers that depends on $\lambda$.
in $r_1,\dots,r_{n-1}$, $p_0^{\pm1}$, $t$ with coefficients in $\mathbb{Q}$. Thus, it suffices to prove that the system
of equations
and hence $\rho_a = \mu(r_a) = \mu(r_{2n-a}) = \rho_{2n-a}$ for $a = 0,1,\dots,2n$. From these last
relations it follows that the polynomial
$$(\mu r)(v) = \sum_{a=0}^{2n}(-1)^a\rho_a v^a \in \mathbb{C}[v]$$
satisfies the equality $(\mu r)(v) = v^{2n}(\mu r)(v^{-1})$. Since obviously $\rho_0 = \rho_{2n} = 1$, the
polynomial $\mu r$ factors over $\mathbb{C}$ into linear factors of the form
$$(\mu r)(v) = \prod_{i=1}^{2n}(1 - \gamma_iv),\quad\text{where } \gamma_1\cdots\gamma_{2n} = 1,$$
from which it follows that the numbers $\gamma_1^{-1},\dots,\gamma_{2n}^{-1}$ are the same as the numbers
$\gamma_1,\dots,\gamma_{2n}$ except for their order, i.e., $\gamma_i^{-1} = \gamma_{\sigma(i)}$, where $\sigma$ is some permutation of
the numbers $1,2,\dots,2n$. If $\sigma(i) = i$ for some $i$, then $\gamma_i^2 = 1$, and $\gamma_i = \pm1$. We
let $i_1,\dots,i_k$ denote all indices $i$ for which $\sigma(i) = i$ and $\gamma_i = 1$, and we let $j_1,\dots,j_s$
denote all indices $j$ for which $\sigma(j) = j$ and $\gamma_j = -1$. All of the other indices can
be partitioned into pairs $(i,\sigma(i))$ where $\sigma(i) \ne i$. We let $l_1,\dots,l_t$ denote the first
components of these pairs. Then $k + s + 2t = 2n$, and the relation $\gamma_1\cdots\gamma_{2n} = 1$
implies that $\gamma_{j_1}\cdots\gamma_{j_s} = 1$. Hence, $s$ and $k$ are even numbers. We let $\alpha_1,\dots,\alpha_n$ denote
the numbers $\gamma_i$ with $i = i_1,\dots,i_{k/2}, j_1,\dots,j_{s/2}, l_1,\dots,l_t$, respectively. We then have
$$(\mu r)(v) = \sum_{a=0}^{2n}(-1)^a\rho_a v^a = \prod_{i=1}^n(1 - \alpha_iv)(1 - \alpha_i^{-1}v),$$
and hence
(3.85)
In particular, $\alpha_1,\dots,\alpha_n$ is a nonzero solution of the first $n - 1$ equations of the system
(3.83). If we substitute these numbers in the last two equations, we obtain the system
We call $\alpha_0,\alpha_1,\dots,\alpha_n$ the parameters of the homomorphism $\lambda = \lambda_{(\alpha_0,\dots,\alpha_n)}$. Clearly,
if a set of parameters is obtained from another set by the action of a transformation
in $W_n$, then the corresponding homomorphisms are the same.

PROBLEM 3.37. Prove that the order of the group $W_n$ is equal to $2^nn!$.
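Problem 3.37 can be verified computationally by generating $W_n$ as a group of signed permutations of $x_1,\dots,x_n$ (the action on $x_0$ is determined by the sign pattern, so it can be omitted): taking the adjacent transpositions together with $\tau_1$ (the other $\tau_i$ are conjugates of it) and closing under composition yields $2^nn!$ elements. A sketch:

```python
def compose(g, h):
    # elements are (perm, signs): x_j -> x_{perm[j]}^{signs[j]}; apply h, then g
    gp, gs = g
    hp, hs = h
    perm = tuple(gp[hp[j]] for j in range(len(gp)))
    signs = tuple(hs[j] * gs[hp[j]] for j in range(len(gp)))
    return perm, signs

def w_order(n):
    ident = (tuple(range(n)), (1,) * n)
    gens = []
    for i in range(n - 1):                      # adjacent transpositions
        p = list(range(n))
        p[i], p[i + 1] = p[i + 1], p[i]
        gens.append((tuple(p), (1,) * n))
    gens.append((tuple(range(n)), (-1,) + (1,) * (n - 1)))  # tau_1: x_1 -> x_1^{-1}
    seen, frontier = {ident}, [ident]
    while frontier:                             # breadth-first closure
        nxt = []
        for g in frontier:
            for s in gens:
                e = compose(s, g)
                if e not in seen:
                    seen.add(e)
                    nxt.append(e)
        frontier = nxt
    return len(seen)

print([w_order(n) for n in (1, 2, 3, 4)])  # [2, 8, 48, 384]
```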
PROBLEM 3.38. Prove the following formulas for the middle coefficient $q_p^2(p)$ of
$Q_p^2(v)$:
converges absolutely and uniformly in any region of the form $\{s \in \mathbb{C};\ \operatorname{Re}s \ge 1 + a + \varepsilon\}$,
where $\varepsilon > 0$, and in that region it has an Euler product of the form
$$L_p^2 \cong L_p^2(q) \to \mathbb{C}$$
and $q$ is the polynomial (3.76) for $n = 2$. Prove that the parameters $\alpha_i(p)$ satisfy the
inequalities
(4.3)
where
(4.4)
A basic role in studying the Hecke rings $\widetilde{L}^n(q)$ is played by the following lifting
homomorphism (compare with the definition (3.26) of Chapter 2):
$$(4.5)\qquad t = t_M\colon\ (\Gamma_0^n(q))_M = \Gamma_0^n(q) \cap M^{-1}\Gamma_0^n(q)M \to \mathbf{C}_1,$$
where $M \in S^n(q)$; this map is defined for any $\gamma \in (\Gamma_0^n(q))_M$ by the equality
(4.6)
where for any $\alpha \in \Gamma_0^n(q)$ we let $\bar{\alpha}$ denote its image $r(\alpha)$ in the group $\mathfrak{G}$, and we
let $\xi$ denote any $P$-preimage of $M$ in $\mathfrak{G}$. From Lemma 3.4 of Chapter 2 it follows
that $(t_M)^4 = 1$; hence, the kernel of $t_M$ has finite index in the group $(\Gamma_0^n(q))_M$. By
assumption, $M \in S^n(q) \subset S_0^n(q)$, and so $\operatorname{Ker} t_M$ also has finite index in the group
$\Gamma_0^n(q)$. Consequently, by Lemma 1.7 the pair (4.4) is a Hecke pair, and so $\widetilde{L}^n(q)$ really
is a Hecke ring.
In this section we investigate the algebraic structure of the Hecke rings $\widetilde{L}^n(q)$ to
roughly the same extent that we studied the structure of the Hecke rings $L^n(q)$. Our
investigation will be based on Lemma 1.8, which describes the connection between
double cosets of $\widetilde{L}^n(q)$ and double cosets of $L^n(q)$, and Proposition 1.9, which enables
one to deduce certain properties of $\widetilde{L}^n(q)$ from the analogous properties of $L^n(q)$, by
comparing their images in the Hecke ring of the triangular subgroup of $S^n$. In order to
use Lemma 1.8, we must know for which matrices $M$ the homomorphism $t_M$ is trivial.
This question is answered by the next two lemmas. Before giving those lemmas, we
make some preliminary remarks.
Because of Lemma 3.6, without loss of generality we may assume that in any
double coset $\widetilde{\Gamma}_0^n(q)\,\xi\,\widetilde{\Gamma}_0^n(q)$, where $\xi = (M, \varphi) \in \widetilde{S}^n(q)$, the matrix $M$ has been chosen
in the canonical form (3.9):
(4.7)
where $f(Z) = j(MNM^{-1}, Z)^{1/2}\, j(N, M^{-1}\langle Z\rangle)^{-1/2}$ and $j(N, Z) = \det(CZ + D)$.
Furthermore, if we take into account the definition of the square root $j(N, Z)^{1/2}$ in
(4.7) of Chapter 1, we obtain
where we suppose that $Z \in \mathbf{H}_n$. Since the value $t_M(N)$ does not depend on $Z$, it
follows from (4.8) and the last equality that
(4.9)
$$(4.10)\qquad \varphi_i\colon \begin{pmatrix} a & b \\ c & d \end{pmatrix} \longmapsto
\begin{pmatrix} \ddots & & & \\ & a^{(i)} & & b^{(i)} \\ & & \ddots & \\ & c^{(i)} & & d^{(i)} \end{pmatrix},$$
$$(4.11)\qquad \alpha = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in (\Gamma_0^1(q))_{M^{(i)}}.$$
Then $N = \varphi_i(\alpha) \in (\Gamma_0^n(q))_M$, and we have
(4.12)
where $r = {}^t(r_1, \dots, r_i, \dots, r_n)$ runs through $M_{n,1}(\mathbf{Z}/d\mathbf{Z})$ and $b' = d_i b/e_i$. Since $\alpha$
satisfies the condition (4.11), it follows that $b$ and $b'$ are integers prime to $d$, where
$(d, q) = 1$, and so $d$ is odd. Hence, if we use the formula for the Gauss sum
where $d$ is a positive odd number and $(k, d) = 1$, which follows from Lemmas 4.13
and 4.14 of Chapter 1, and if we further suppose that $(d, r(M)) = 1$, then from
(4.13)–(4.14) and the formula (4.9) for $t_M(N)$ we obtain
PROOF. We let
(4.18)
and we first consider the case when $(r, \det D) = 1$. Since $QDQ^{-1}$ is an integer matrix
by (4.18), it follows that in (4.38) and (4.40) of Chapter 1, which give the value of the
multiplier $\chi_{(2)}$ for $MNM^{-1} \in \Gamma_0^n(q)$, we can set
Since $PQ = rE_n$ and $(r, d) = 1$ by assumption, it follows that the last sum is equal to
$$(4.21)\qquad \sum_{s' \in M_{n,1}(\mathbf{Z}/d\mathbf{Z})} e\{2PBD^{-1}[s']\}.$$
If we set $M = E_{2n}$ in (4.20), we obtain a formula for $\chi_{(2)}(N)$. Comparing this formula
with (4.20) and (4.21), we conclude that $\chi_{(2)}(MNM^{-1}) = \chi_{(2)}(N)$. From this and
(4.9) it follows that $t_M(N) = 1$ for any $N \in (\Gamma_0^n(q))_M$.
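The Gauss sum invoked above is presumably the classical quadratic one: for odd $d > 0$ and $(k, d) = 1$, $\sum_{x \bmod d} e^{2\pi i k x^2/d} = \left(\frac{k}{d}\right)\varepsilon_d\sqrt{d}$, where $\varepsilon_d = 1$ or $i$ according as $d \equiv 1$ or $3 \pmod 4$. A numerical check of that evaluation (function names are ours):

```python
import cmath
from math import gcd

def jacobi(k, d):
    """Jacobi symbol (k/d) for odd d > 0, by quadratic reciprocity."""
    assert d > 0 and d % 2 == 1
    k %= d
    result = 1
    while k:
        while k % 2 == 0:          # pull out factors of 2 via (2/d)
            k //= 2
            if d % 8 in (3, 5):
                result = -result
        k, d = d, k                # reciprocity for odd arguments
        if k % 4 == 3 and d % 4 == 3:
            result = -result
        k %= d
    return result if d == 1 else 0

def gauss_sum(k, d):
    """Quadratic Gauss sum: sum over x mod d of exp(2*pi*i*k*x^2/d)."""
    return sum(cmath.exp(2j * cmath.pi * k * x * x / d) for x in range(d))

for d in (3, 5, 7, 9, 15):
    eps = 1 if d % 4 == 1 else 1j
    for k in range(1, d):
        if gcd(k, d) != 1:
            continue
        expected = jacobi(k, d) * eps * d ** 0.5
        assert abs(gauss_sum(k, d) - expected) < 1e-9
```

The check covers prime, prime-power, and composite odd moduli; the identity with the Jacobi symbol holds in all three cases.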
Now suppose that $(r, \det D) = \delta > 1$, $p$ is a prime divisor of $\delta$, and $r = p^{2\rho} r_1$,
where $(p, r_1) = 1$. Further suppose that the blocks $P$ and $Q$ of the matrix $M$ have the
form
$$(4.22)\qquad P = \operatorname{diag}(P_1, \dots, P_s), \quad Q = \operatorname{diag}(Q_1, \dots, Q_s), \quad
P_i = p^{a_i}P_i', \quad a_1 < \dots < a_s, \quad a_s = \rho, \quad Q_i = rP_i^{-1},$$
where the $P_i'$ are integer diagonal matrices with $(\det P_i', p) = 1$. Of course, the block
$P_s$ might not exist.
The inclusion $MNM^{-1} \in \Gamma_0^n(q)$ implies the congruences
$$(4.23)\qquad A_{ij} \equiv D_{ij} \equiv 0 \pmod p \ \text{ for } i > j, \qquad
B_{ij} \equiv 0 \pmod p \ \text{ for } (i,j) \neq (s,s),$$
where $A = (A_{ij})$, $B = (B_{ij})$, $C = (C_{ij})$, and $D = (D_{ij})$ are divided into blocks that are
analogous to (4.22). Using the other inclusion $N \in \Gamma_0^n(q)$ and (2.7) of Chapter 1, we
obtain a new series of congruences:
j=E;min(k,I) j=E;min(k,I)
We choose matrices $U, V \in \operatorname{SL}(\mathbf{Z})$ of the same size as $D_{ss}$ so that the following
congruence holds:
this implies that $(\det B_4, p) = 1$, and, if we set $T_{ss} = \bigl(\begin{smallmatrix} \cdot & \cdot \\ \cdot & 1 \end{smallmatrix}\bigr)$, then we have
If we define $T \in P$ by setting $C_T = \operatorname{diag}(0, \dots, 0, T_{ss})$, then by (4.25)
and (4.27) in the matrix
Note that $MTM^{-1}$ is an integer matrix, and the matrices $MP_UM^{-1}$ and $MP_VM^{-1}$
are $p$-integral. Since, by assumption, $p$ does not divide $r_1$, it follows from Lemma
3.2(1) that there exists a matrix $U' \in \operatorname{SL}(\mathbf{Z})$ of the same size as $U$ such that
Such a matrix exists, since $M \in S^n(q)$, so that $(r, q) = 1$, and hence $p$ does not divide
$r_1 q$. From the above definitions it follows that $P_{U'}$, $P_{V'}$, $T'$, and the transformed
matrix of (4.28)
Using Lemmas 4.1 and 4.2, we can now prove the following
PROPOSITION 4.3. Let $M \in S^n(q)$, where $q$ is divisible by 4, and let $t_M$ be the lifting
homomorphism (4.5) associated to $M$. Then $t_M$ is trivial if and only if $r(M)$ is the square
of a rational number.
PROOF. From Lemmas 4.1 and 4.2 it follows that the proposition is true when $M$
is a canonical matrix of the form (4.7). If $M$ is arbitrary, then, by Lemma 3.6, it can be
written in the form $M = \varepsilon K \eta$, where $\varepsilon, \eta \in \Gamma_0^n(q)$ and $K$ is a canonical matrix of the
form (4.7). Now suppose that in Lemma 1.6 $\Gamma = \Gamma_0^n(q)$ and $P_M(N) = (E_{2n}, t_M(N))$
for $N \in (\Gamma_0^n(q))_M$. Then from (1.17) we find that
$$(4.31)\qquad t_M(N) = t_{\varepsilon^{-1}M\eta^{-1}}(\eta N \eta^{-1}) = t_K(\eta N \eta^{-1}).$$
Since $r(M) = r(K)$ and the map $\gamma \to \eta\gamma\eta^{-1}$ is a group isomorphism from $(\Gamma_0^n(q))_M$ to
$(\Gamma_0^n(q))_K$, the proposition for $M$ follows from the proposition for the canonical matrix
$K$ and from the relation (4.31). $\square$
We now consider the product of concrete double cosets in the Hecke ring $\widetilde{L}^n(q)$. In
this ring, as in $L^n(q)$, the multiplication formula for double cosets takes a particularly
simple form when one of the double cosets is generated by the $P$-preimage of a matrix
of the form $rE_{2n}$. Namely, we have
LEMMA 4.4. Suppose that $\widetilde{M} \in \widetilde{S}^n(q)$, $\widetilde{rE}_{2n}$ is any $P$-preimage in $\mathfrak{G}$ of the matrix
$rE_{2n}$, where $r \in \mathbf{Z}_q^{\times}$, and $\widetilde{\Gamma} = \widetilde{\Gamma}_0^n(q)$. Then the following relations hold in the ring $\widetilde{L}^n(q)$:
$$(4.32)\qquad (\widetilde{rE}_{2n})_{\widetilde{\Gamma}}\,(\widetilde{M})_{\widetilde{\Gamma}} = (\widetilde{rE}_{2n}\widetilde{M})_{\widetilde{\Gamma}} = (\widetilde{M})_{\widetilde{\Gamma}}\,(\widetilde{rE}_{2n})_{\widetilde{\Gamma}}.$$
The proof follows immediately from the definitions.
We let $S^n(q)^+$ denote the subgroup of $S^n(q)$ consisting of matrices $M$ for which
$r(M)$ is the square of a rational number, and we let
(4.33)
We call $\widetilde{E}^n(q)$ the even subring of the Hecke ring $\widetilde{L}^n(q)$. As we noted before, only the
even subring is important for applications to modular forms. Hence, for the rest of
this chapter we shall only be examining $\widetilde{E}^n(q)$ and its local subrings.
PROPOSITION 4.5. Let $\xi_i = (M_i, t_i)$, where $i = 1, 2$, be elements of the group $\widetilde{S}^n(q)^+$,
and let $\widetilde{\Gamma} = \widetilde{\Gamma}_0^n(q)$. Suppose that the ratios of symplectic divisors $e_1(M_1)/d_1(M_1)$ and
$e_1(M_2)/d_1(M_2)$ are relatively prime. Then the following relations hold in the Hecke ring
$\widetilde{E}^n(q)$:
(4.34)
PROOF. According to Lemma 3.6, without loss of generality we may assume that
the $M_i$ are canonical matrices of the form (4.7). Since in this case $\xi_1\xi_2 = \xi_2\xi_1$, the
second equality in (4.34) follows from the first equality, which we shall now prove.
From (1.10) it follows that
where $(\eta_j)_{\widetilde{\Gamma}}$ are double cosets that are distinct from $(\xi_1\xi_2)_{\widetilde{\Gamma}}$, and the $a_j$ are nonnegative
integers. Using (1.10) again, we see that the last sum here is zero if and only if
(4.35)
On the other hand, by Proposition 3.9 we have $(M_1)_{\Gamma}(M_2)_{\Gamma} = (M_1M_2)_{\Gamma}$, and hence
(4.36)
Since $r(M_i)$ and $r(M_1M_2)$ are squares of rational numbers, it follows from Proposition
4.3 and Lemma 1.8 that
Just as in the case of Hecke rings for the symplectic group, Proposition 4.5 makes
it possible to reduce the study of the even Hecke ring $\widetilde{E}^n(q)$ to that of its local subrings
(4.37)
THEOREM 4.6. The Hecke ring $\widetilde{E}^n(q)$, where $n, q \in \mathbf{N}$ and $q$ is divisible by 4, is
generated by the local Hecke rings $\widetilde{E}_p^n(q)$, where $p$ runs through the primes not dividing
$q$. Elements of different local subrings commute with one another.
PROOF. The theorem follows from the equalities in (4.34) and the proof of Theorem 3.12. $\square$
§4. HECKE RINGS FOR THE SYMPLECTIC COVERING GROUP 163
$$(4.39)\qquad M_{a,b}(B_0) = \begin{pmatrix} p^2 D_{a,b}^* & B \\ 0 & D_{a,b} \end{pmatrix}, \quad
\text{where } B = \begin{pmatrix} 0 & 0 & 0 \\ 0 & B_0 & 0 \\ 0 & 0 & 0 \end{pmatrix}
\begin{matrix} \\ (a) \\ (b) \end{matrix},$$
and let $R_{a,b} = \Lambda_{a,b}\backslash\Lambda^n$ with $\Lambda^n = \operatorname{GL}_n(\mathbf{Z})$. Then we have the following partition into
disjoint left cosets:
$$(4.40)\qquad SM^n(p^2, q) = \bigcup_{M \in R(p^2)} \Gamma_0^n(q)M,$$
where
PROOF. Using an argument similar to the one used to derive (3.64), we can show
that
$$SM^n(p^2, q) = \bigcup_{a,b,B,V} \Gamma_0^n(q)\begin{pmatrix} p^2 D_{a,b}^* & B \\ 0 & D_{a,b} \end{pmatrix} U(V),$$
where $a + b \leqslant n$, $B \in B_0(D_{a,b}) \bmod D_{a,b}$, $V \in R_{a,b}$. The decomposition (4.40)
follows from this, along with the definition (3.60) of $B_0(D_{a,b})$. $\square$
Theorems 1.2 and 1.3 of the Appendix tell us that any symmetric matrix $B_0 \in S_a(\mathbf{Z}/p\mathbf{Z})$ of rank $r_p(B_0) = r$, where $p \neq 2$, can be written in the form
$$(4.43)\qquad B_0 = B_0'[{}^tW] = WB_0'\,{}^tW,$$
where $W \in \operatorname{GL}_a(\mathbf{Z}/p\mathbf{Z})$, $B_0' = \operatorname{diag}(\lambda_1, \dots, \lambda_r, 0, \dots, 0)$, and $\lambda_i \not\equiv 0 \pmod p$. If
$\det W = d \neq 1$, then we divide the first column of $W$ by $d$, we replace $\lambda_1$ by $d^2\lambda_1$ in
the matrix $B_0'$, and we keep our earlier notation when working with the transformed
matrices. After these transformations (4.43) obviously still holds, and $\det W = 1$.
Now let $B_0 = {}^tB_0 \in S_a$. Then we may suppose that $B_0' \in M_a$, and, by Lemma 3.2(1),
the matrix $W$ lies in $\operatorname{SL}_a(\mathbf{Z})$. In this case (4.43) can be written as a congruence
(4.45)
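A diagonalization of the type (4.43) can be computed by symmetric Gaussian elimination over $\mathbf{Z}/p\mathbf{Z}$. A minimal sketch under that assumption ($p$ an odd prime; function names are ours): it returns $M$ and a diagonal $D$ with $D \equiv MB\,{}^tM \pmod p$, so that $B \equiv WD\,{}^tW \pmod p$ for $W = M^{-1}$.

```python
def diagonalize_sym_mod_p(B, p):
    """Symmetric Gaussian elimination over Z/pZ (p an odd prime):
    returns (M, D) with D = M*B*M^T (mod p) diagonal and M invertible
    mod p.  Row operations are always paired with the matching column
    operations so that symmetry (and congruence class) is preserved."""
    a = len(B)
    A = [[B[i][j] % p for j in range(a)] for i in range(a)]
    M = [[int(i == j) for j in range(a)] for i in range(a)]

    def row_col_add(i, j, c):          # row_i += c*row_j and col_i += c*col_j
        for t in range(a):
            A[i][t] = (A[i][t] + c * A[j][t]) % p
        for t in range(a):
            A[t][i] = (A[t][i] + c * A[t][j]) % p
        for t in range(a):
            M[i][t] = (M[i][t] + c * M[j][t]) % p

    def row_col_swap(i, j):
        A[i], A[j] = A[j], A[i]
        for t in range(a):
            A[t][i], A[t][j] = A[t][j], A[t][i]
        M[i], M[j] = M[j], M[i]

    for k in range(a):
        if A[k][k] % p == 0:
            piv = next((i for i in range(k + 1, a) if A[i][i] % p), None)
            if piv is not None:
                row_col_swap(k, piv)
            else:
                j = next((j for j in range(k + 1, a) if A[k][j] % p), None)
                if j is None:
                    continue           # row k vanishes in the remaining block
                row_col_add(k, j, 1)   # A[k][k] becomes 2*A[k][j] != 0 (p odd)
        inv = pow(A[k][k], p - 2, p)   # inverse mod p (Fermat)
        for i in range(k + 1, a):
            row_col_add(i, k, (-A[i][k] * inv) % p)
    return M, A
```

The nonzero diagonal entries of $D$ play the role of $\lambda_1, \dots, \lambda_r$, and their count gives the rank $r_p(B_0)$.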
We consider the special case when $n = 1$. Then one easily verifies that the following
decompositions hold for the matrices $P_\lambda = M_{1,0}(\lambda)$ with $(\lambda, p) = 1$ and $\sigma = M_{0,0}(0)$:
$$(4.46)\qquad P_\lambda = \begin{pmatrix} p^2 & \lambda \\ 0 & 1 \end{pmatrix}
= P_\lambda'\begin{pmatrix} 1 & 0 \\ 0 & p^2 \end{pmatrix}P_\lambda'',$$
and $P_{a,b}''(B_0')$ is defined similarly, with $P_{\lambda_k}'$ replaced by $P_{\lambda_k}''$ (see (4.46));
$$(4.50)\qquad P_{a,b}(\sigma) = \prod_{k=1}^{n-a-b} \varphi_k(\sigma) \quad \text{for } \sigma \in \operatorname{SL}_2(\mathbf{R}),$$
$$(4.51)\qquad P_{a,b}''' = U(R) \quad \text{for } R = \operatorname{diag}(E_{n-a-b}, E_r, R_1),$$
We have thus found representatives of all of the left cosets of $SM^n(p^2, q)$ modulo
$\Gamma_0^n(q)$, and we have expressed each representative as a product $\gamma K\delta$ of matrices $\gamma, \delta \in \Gamma_0^n(q)$ and a canonical matrix $K$ of the form (4.52). However, to compute in the Hecke
ring $\widetilde{E}_p^n(q)$ we still need to know the second component of the product $\widetilde{\gamma}\widetilde{K}\widetilde{\delta}$, where
$\widetilde{\gamma} = r(\gamma)$, $\widetilde{\delta} = r(\delta)$, and $\widetilde{K}$ is an arbitrary $P$-preimage of $K$ in $\mathfrak{G}$. To do this we
prove a result that in certain cases makes it possible to reduce the calculation of the
multiplier $\chi_{(2)}$ of degree $n$ to that of the multiplier $\chi_{(2)}^1$ of degree 1. But we must first
give some more definitions.
Let $q \in \mathbf{N}$ be divisible by 4, and let $p$ be a prime not dividing $q$. For every matrix
$M = \begin{pmatrix} * & * \\ 0 & D \end{pmatrix}$ in the group $S_{0,p}$ (see §3.3) we fix a $P$-preimage $\widetilde{M}$ in the symplectic
covering group $\mathfrak{G}$, by setting
$$(4.56)\qquad \widetilde{M} = (M, |\det D|^{1/2}).$$
If $M$ lies in the subgroup
$$(4.57)\qquad (S_{0,p})^+ = \{M \in S_{0,p};\ r(M) = p^{2\delta_M},\ \delta_M \in \mathbf{Z}\} \subset S_{0,p}$$
or even in the less restrictive subgroup $S_p^n(q)^+$ of $S_p^n(q)$ (see (4.38)) and $M = \gamma K\delta$,
where $\gamma, \delta \in \Gamma_0^n(q)$ and $K$ is a canonical matrix of the form (4.7), then we define a
second $P$-preimage of $M$ in $\mathfrak{G}$ as follows:
(4.58)
We show that the element $\widehat{M} \in \mathfrak{G}$ does not depend on the above choice of
representation of $M$. In fact, suppose that $M = \gamma_i K\delta_i$ ($i = 1, 2$) are two such
(4.60)
Finally, for an arbitrary element $\xi = (M, \varphi(Z)) \in \mathfrak{G}$ we set
$$(4.61)\qquad t(\xi) = \varphi(Z) \quad \text{and} \quad s(\xi) = t(\xi)\,|t(\xi)|^{-1}.$$
LEMMA 4.11. For $i = 1, \dots, n$ suppose that the matrices $R_i, S_i \in (S_{0,p}^1)^+$ and
$\gamma_i, \delta_i \in \Gamma_0^1(q)$ are connected by the relations $R_i = \gamma_i S_i \delta_i$, and the elements $d(R_i)$, $d(S_i)$,
$d(\gamma_i)$, and $d(\delta_i)$ in the lower-right corner of these matrices are all positive. Furthermore,
let
$$R = \prod_{i=1}^{n} \varphi_i(R_i) \in (S_{0,p})^+,$$
where $\varphi_i$ is the imbedding (4.10), and let $S \in (S_{0,p})^+$ and $\gamma, \delta \in \Gamma_0^n(q)$ be defined
analogously. Then $\widehat{R}$ and $\widehat{S}$ satisfy the relations
$$(4.62)\qquad t(\widehat{R}) = \chi_{(2)}(\gamma)\,\chi_{(2)}(\delta)\,s(\widehat{S})\,t(\widetilde{R}),$$
$$(4.63)\qquad s(\widehat{R}) = \chi_{(2)}(\gamma)\,\chi_{(2)}(\delta)\,s(\widehat{S}),$$
where $\chi_{(2)}$ is the multiplier (4.38) of Chapter 1.
PROOF. From the definition (4.61) we see that (4.63) is a consequence of (4.62);
we now prove the latter relation. By (4.58) we can write
right side of (4.65). To do this, we define two holomorphic functions of $z_1, \dots, z_n \in \mathbf{H}_1$
by setting
$$(4.66)\qquad \Psi(\gamma; z_1, \dots, z_n) = \prod_{i=1}^{n} s(j(\gamma_i, z_i)), \qquad
\Psi(\delta; z_1, \dots, z_n) = \prod_{i=1}^{n} s(j(\delta_i, z_i)),$$
where for any $\alpha = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in \Gamma_0^1(q)$ with $d > 0$ we let $s(j(\alpha, z))$ denote the function
$s_+(j(\alpha, z))$ or $s_-(j(\alpha, z))$ depending on whether $c \geqslant 0$ or $c < 0$. Here $s_\pm(w)$ are
the holomorphic functions on $((\pm 1)\mathbf{H}_1) \cup \mathbf{R}$ defined by the conditions: $s_\pm(w)^2 = w$
and $s_\pm(w) > 0$ for $w \in \mathbf{R}$ and $w > 0$. Note that the restrictions of the functions
$j(\gamma, Z)^{1/2}$ and $j(\delta, Z)^{1/2}$ to the main diagonal $\mathbf{H}_1 \times \cdots \times \mathbf{H}_1 \subset \mathbf{H}_n$ coincide with the
corresponding functions in (4.66), since the assumptions in the lemma imply that they
coincide for $Z = \operatorname{diag}(z_1, \dots, z_n)$ sufficiently close to zero. According to the definition
(4.61), the function $t(\widehat{R})$ does not depend on $Z$. Hence, if we set $Z = \operatorname{diag}(z_1, \dots, z_n)$
and use the functions (4.66), we can rewrite (4.65) in the form
$$(4.67)\qquad t(\widehat{R}) = \chi_{(2)}(\gamma)\,\chi_{(2)}(\delta)\,\Psi(\delta; z_1, \dots, z_n)\,\Psi(\gamma; S_1\delta_1\langle z_1\rangle, \dots, S_n\delta_n\langle z_n\rangle)$$
and pass to the limit as $z_i \to 0$ ($z_i \in \mathbf{H}_1$, $i = 1, \dots, n$) on the right of this equality.
According to (4.66) and Lemma 4.2 of Chapter 1, this problem reduces to computing
the limits of expressions of the form
$$(4.68)\qquad s(j(\gamma_i, S_i\delta_i\langle z_i\rangle))\,s(j(\delta_i, z_i)) = s(j(R_i, z_i)\,j(S_i\delta_i, z_i)^{-1})\,s(j(\delta_i, z_i)).$$
If we again use the definition (4.66), we find that the desired limit of (4.68) is equal to
$$\bigl(d(R_i)(d(S_i)d(\delta_i))^{-1}\bigr)^{1/2}\,d(\delta_i)^{1/2} = d(R_i)^{1/2}\,d(S_i)^{-1/2},$$
where all of the square roots are positive. From this, (4.67), and (4.68) we obtain (4.62).
$\square$
Lemmas 4.10 and 4.11 make it possible for us to find the $P$-preimages in $\mathfrak{G}$ of
the matrices (4.42). To do this, we must introduce a certain special function $\chi$ that is
defined on the set of symmetric integer matrices and is closely related to the multiplier
$\chi_{(2)}$. We now give the definition of $\chi$.
As we noted earlier, for any matrix $A \in S_a(\mathbf{Z})$ of rank $r_p(A) = r$, where $p$ is an
odd prime, there exists a matrix $V \in M_a(\mathbf{Z})$ that is nonsingular modulo $p$ and satisfies
the congruence
$$(4.69)\qquad A \equiv \begin{pmatrix} A' & 0 \\ 0 & 0 \end{pmatrix}[V] \pmod p,$$
where $A' \in S_r(\mathbf{Z})$ is a matrix that is nonsingular modulo $p$. If $A$ satisfies (4.69), then
we set
$$(4.70)\qquad \chi(A) = \begin{cases} \varepsilon_p^{-r}\left(\dfrac{(-1)^r\det A'}{p}\right) & \text{if } r > 0, \\[1ex] 1 & \text{if } r = 0, \end{cases}$$
where $\varepsilon_p$ is the function (4.39) of Chapter 1. It is easy to see that the value $\chi(A)$ does
not depend on the choice of matrices $V$ and $A'$ with the indicated properties.
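Since $\varepsilon_p^2 = \left(\frac{-1}{p}\right)$, the definition (4.70) simplifies to $\chi(A) = \varepsilon_p^{r}\left(\frac{\det A'}{p}\right)$, and $\chi$ can then be read off from the standard quadratic exponential sum $\sum_{x \in (\mathbf{Z}/p\mathbf{Z})^a} e^{2\pi i\,{}^txAx/p} = p^{a - r/2}\chi(A)$. A brute-force sketch of this characterization (our own helper names; small $p$ and $a$ only):

```python
import cmath
from itertools import product

def rank_mod_p(A, p):
    """Rank of an integer matrix over Z/pZ, by Gaussian elimination."""
    A = [[x % p for x in row] for row in A]
    m, n = len(A), len(A[0])
    r = 0
    for c in range(n):
        piv = next((i for i in range(r, m) if A[i][c] % p), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        inv = pow(A[r][c], p - 2, p)
        for i in range(m):
            if i != r and A[i][c]:
                f = (A[i][c] * inv) % p
                A[i] = [(A[i][j] - f * A[r][j]) % p for j in range(n)]
        r += 1
    return r

def chi(A, p):
    """chi(A) recovered from the quadratic exponential sum:
    sum_x exp(2*pi*i * x^T A x / p) = p**(a - r/2) * chi(A)."""
    a = len(A)
    r = rank_mod_p(A, p)
    total = 0
    for x in product(range(p), repeat=a):
        q = sum(x[i] * A[i][j] * x[j] for i in range(a) for j in range(a))
        total += cmath.exp(2j * cmath.pi * (q % p) / p)
    return total / p ** (a - r / 2)

# example: A = diag(1, 2), p = 5 (so eps_5 = 1); chi = (2/5) = -1
assert abs(chi([[1, 0], [0, 2]], 5) - (-1)) < 1e-9
```

Independence of the choices $V$ and $A'$ in (4.69) is visible here: the sum on the left does not refer to them at all.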
PROPOSITION 4.12. Let $M_{a,b}(B_0, S, V)$ be the matrices (4.42), where $a + b \leqslant n$ and
$B_0$, $S$, and $V$ run through the set of matrices in (4.41). Then we have the following
formula for the $P$-preimages of these matrices in $\mathfrak{G}$ as defined in (4.58):
$$(4.71)\qquad \widehat{M}_{a,b}(B_0, S, V) = \bigl(M_{a,b}(B_0, S, V);\ \chi(B_0)\,p^{(a+2b)/2}\bigr).$$
PROOF. From the formulas (4.37)–(4.38) of Chapter 1 it follows that $j_{(2)}(\gamma, Z) = 1$
for matrices $\gamma$ of the form $U(V)$ or $T(S)$ in the group $\Gamma_0^n(q)$, where $q$ is divisible by 4,
as usual. Hence, by (4.2), we have $\widetilde{\gamma} = (\gamma, 1)$ for such matrices $\gamma$. If we now use (4.53)
and (4.64), we obtain
$$M_{a,b}(B_0, S, V) = U(W^*)\,M_{a,b}(B_0')\,U(W^*)^{-1}\,T(S)\,U(V),$$
and from this and (4.61) it follows that
$$(4.72)\qquad s(\widehat{M}_{a,b}(B_0, S, V)) = s(\widehat{M}_{a,b}(B_0')).$$
Arguing in an analogous way, we also have (see (4.55))
$$s(\widehat{P}{}_{a,b}'''\,\widehat{K}_{a-r}\,(\widehat{P}{}_{a,b}''')^{-1}) = s(\widehat{K}_{a-r}) = 1,$$
and hence, applying Lemma 4.11 to the equalities (4.54) and (4.55), we obtain
$$(4.73)\qquad s(\widehat{M}_{a,b}(B_0')) = \chi_{(2)}(P_{a,b}'(B_0'))\,\chi_{(2)}(P_{a,b}''(B_0'))\,\chi_{(2)}(P_{a,b}(\sigma'))\,\chi_{(2)}(P_{a,b}(\sigma'')).$$
All of the matrices on the right in this equality have the form
$$(4.74)\qquad \gamma = \prod_{i=1}^{n} \varphi_i(\gamma_i) \quad \text{with } \gamma_i = \begin{pmatrix} a_i & b_i \\ c_i & d_i \end{pmatrix} \in \Gamma_0^1(q).$$
From the definitions (4.48) and (4.50) and from the equalities (4.46)–(4.47) it follows
that the entries $d_i$ in all of these matrices are positive. By (4.38) of Chapter 1, we have
the following formula for the matrices (4.74):
$$(4.75)\qquad \chi_{(2)}(\gamma) = \prod_{i=1}^{n} \chi_{(2)}^1(\gamma_i).$$
We use this formula to compute the value of the multiplier $\chi_{(2)}$ at each of the
matrices in (4.73). By Proposition 4.15 and the relation (4.37) of Chapter 1, we have
$$(4.77)\qquad \chi_{(2)}^1(P_\lambda') = \chi_{(2)}^1(\sigma') = \chi_{(2)}^1(\sigma'') = 1, \qquad
\chi_{(2)}^1(P_\lambda'') = \varepsilon_p^{-1}\left(\frac{-\lambda}{d}\right).$$
For $P_\lambda'$ and $\sigma''$ the equalities are obvious. In the case of $\sigma'$ the equality follows
from the congruence $p^2 d \equiv 1 \pmod q$, where $q$ is divisible by 4, which implies that
$d \equiv 1 \pmod 4$, and from the usual properties of the Jacobi symbol. Next, in the case
of $P_\lambda''$ the relation (4.76) leads to the equality
$$\chi_{(2)}^1(P_\lambda'') = \varepsilon_p^{-1}\left(\frac{-qs}{r}\right)^{-1} = \varepsilon_p^{-1}\left(\frac{-qs}{p}\right),$$
We now proceed to the next step, and determine the elements in $\widetilde{E}_p^n(q)$ that
correspond to the elements in (4.81). There are, of course, many ways to do this. To
be definite, we set
$$(4.82)\qquad \widetilde{T}_i(p^2) = (\widehat{K}_i)_{\widetilde{\Gamma}} \quad (i = 0, 1, \dots, n),$$
where $K_i$ are the matrices (4.52) and the elements $\widehat{K}_i \in \mathfrak{G}$ are determined by (4.58).
All of the other $P$-preimages of the matrices $K_i$ in $\mathfrak{G}$ are of the form $\widehat{K}_i E$, where
$E = (E_{2n}, e)$ with $e \in \mathbf{C}_1$, and
$$(\widehat{K}_i E)_{\widetilde{\Gamma}} = (\widehat{K}_i)_{\widetilde{\Gamma}}\,(E)_{\widetilde{\Gamma}}.$$
Moreover, the double cosets of the form $(E)_{\widetilde{\Gamma}}$ are contained in the center of the
ring $\widetilde{E}_p^n(q)$, and so the degree of choice in the elements in (4.82) in no way affects
the algebraic properties of the subring they generate. From the point of view of
applications of the Hecke rings to modular forms, the choices made in (4.82) are also
of no importance, since the Hecke operators for $(E)_{\widetilde{\Gamma}}$ are operators of multiplication
by a power of $e$. We thus come to the conclusion that the natural analogue of the even
Hecke ring $E_p^n(q)$ is not the entire ring $\widetilde{E}_p^n(q)$, but rather the subring
$$(4.83)\qquad \widetilde{E}_p^n(q, \chi) = \mathbf{Q}[\widetilde{T}_0(p^2), \dots, \widetilde{T}_{n-1}(p^2), \widetilde{T}_n(p^2)^{\pm 1}].$$
We show that this subring is commutative. Recall that in the case of the ring $L^n(q)$
the proof of commutativity was based on: (1) the existence of the anti-automorphism
$*$ of the ring $L^n(q)$, and (2) the invariance of elements of $L^n(q)$ relative to $*$. Thus, we
begin by defining an anti-automorphism $*$ for the ring $\widetilde{L}_p^n(q)$.
For any $\xi = (M, \varphi) \in \mathfrak{G}$ we set
$$(4.84)\qquad \xi_0 = (r(\xi)E_{2n}, r(\xi)^{n/2}) \quad \text{and} \quad r(\xi) = r(M).$$
Then the map $\xi \to \xi_0$ is a homomorphism from the group $\mathfrak{G}$ to its center, and $\widetilde{\Gamma}_0^n(q)$
is contained in the kernel of this homomorphism. This implies that the map
$$(4.85)\qquad \mathfrak{G} \xrightarrow{\ *\ } \mathfrak{G}\colon\ \xi \to \xi^* = \xi_0\cdot\xi^{-1}$$
LEMMA 4.14. Let $\widehat{M} \in \mathfrak{G}$, where $M \in S_p^n(q)^+$, be defined by (4.58), and let
$T(M) = (\widehat{M})_{\widetilde{\Gamma}}$, where $\widetilde{\Gamma} = \widetilde{\Gamma}_0^n(q)$. Then $T(M)^* = T(M)$.
PROOF. By Lemma 3.6 we can write $M = \gamma K\delta$, where $\gamma, \delta \in \Gamma$ and $K$ is a canonical
matrix of the form (4.7); hence $\widehat{M} = \widetilde{\gamma}\widehat{K}\widetilde{\delta}$, and so $T(M) = T(K)$. Thus, we may
suppose that $M = K$.
Let $n = 1$. Since $(p, q) = 1$, it follows that for any $m \in \mathbf{N}$ there exist integers $t$
and $d > 0$ such that $p^m d + qt = 1$. If $m$ is even and
Following the analogy with the Hecke rings $L_p^n(q)$, we might expect that the second
equality in (4.89) would also follow from Lemma 4.14, since $X$ is equal to a sum of
double cosets of elements $\xi = (M, \varphi) \in \widetilde{S}_p^n(q)^+$. However, the whole point is that $\xi$
and $\widehat{M}$ are not necessarily the same, and in that case $(\xi)_{\widetilde{\Gamma}} \neq (\widehat{M})_{\widetilde{\Gamma}}$. This also seems to be
what explains the complication in the proof that the rings $\widetilde{E}_p^n(q, \chi)$ are commutative.
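The integers $t$ and $d > 0$ with $p^m d + qt = 1$ used in the proof of Lemma 4.14 can be produced by the extended Euclidean algorithm; a sketch with our own function name:

```python
def bezout_coeffs(pm, q):
    """Given coprime pm = p**m and q, return (d, t) with d > 0 and
    pm*d + q*t == 1.  Uses the extended Euclidean algorithm, then
    shifts d (which is determined only mod q) into the positive range."""
    def ext_gcd(a, b):
        if b == 0:
            return a, 1, 0
        g, x, y = ext_gcd(b, a % b)
        return g, y, x - (a // b) * y

    g, d, t = ext_gcd(pm, q)
    assert g == 1, "pm and q must be coprime"
    while d <= 0:                 # normalize: force d > 0
        d += q
        t -= pm
    return d, t

p, q, m = 5, 12, 2                # (p, q) = 1, q divisible by 4, m even
d, t = bezout_coeffs(p ** m, q)
assert d > 0 and p ** m * d + q * t == 1
```

Note that $d \equiv p^{-m} \pmod q$; in particular, when $m$ is even and $4 \mid q$, the congruence $p^m d \equiv 1 \pmod q$ forces $d \equiv 1 \pmod 4$, exactly as used in the computation of $\chi_{(2)}^1(\sigma')$ above.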
PROOF THAT $X^* = X$. Using Lemma 4.9, we can rewrite (3.61)–(3.62) in the form
$$(4.90)\qquad \Gamma K_i\Gamma = \bigcup_{\substack{a,b,B_0,S,V\\ a+b \leqslant n,\ r_p(B_0) = a-i}} \Gamma_0^n(q)\,M_{a,b}(B_0, S, V),$$
where the matrices $B_0$, $S$, and $V$ run through the sets in (4.41). From (4.58) and
(4.90) it follows that the elements $\widehat{M}_{a,b}(B_0, S, V)$ lie in $\widetilde{\Gamma}\widehat{K}_i\widetilde{\Gamma}$. On the other hand, from
Lemma 1.8 and Proposition 4.3 we find that the map $P\colon \widehat{M} \to M$ gives a one-to-one
correspondence between the double cosets $\widetilde{\Gamma}\widehat{K}_i\widetilde{\Gamma}$ and $\Gamma K_i\Gamma$. This implies that
$$(4.91)\qquad \widetilde{\Gamma}\widehat{K}_i\widetilde{\Gamma} = \bigcup_{\substack{a,b,B_0,S,V\\ a+b \leqslant n,\ r_p(B_0) = a-i}} \widetilde{\Gamma}_0^n(q)\,\widehat{M}_{a,b}(B_0, S, V),$$
where, by (4.42), we have
$$(4.92)\qquad M_{a,b}(B_0, S, V) = M_{a,b}(B_0)\,T(S)\,U(V)$$
$\widetilde{\Gamma}_0^n(q)$. Let $a, b, B_0, S$, and $V$ be the same as in (4.90). Then by Lemma 1.5 and (4.91)
we have
$$\sum_{a,b,B_0,S,V} \cdots = \sum_{a,b,B_0,V} \mu_{a,b}(B_0, V)\,(\xi_{a,b}(B_0, V))_{\widetilde{\Gamma}},$$
where
$$(4.94)\qquad \mu_{a,b}(B_0, V) = p^{b(a+b+1)}\,\mu(K_i)\,\mu(\xi_{a,b}(B_0, V))^{-1},$$
$$\xi_{a,b}(B_0, V) = \widehat{M}_{a,b}(B_0)\,U(V)\,\widehat{K}_i = (Y_{a,b}(B_0, V);\ t_{a,b}(B_0, V)). \qquad \square$$
REMARK 4.15. If the elements $\xi = \xi_{a,b}(B_0, V)$ on the right in (4.93) are replaced
by elements of the form $\widetilde{\gamma}_1\xi\widetilde{\gamma}_2$, where $\gamma_i \in \Gamma_0$, then it is not hard to verify that this does
not change either $t_{a,b}(B_0, V)$ or $X$.
Using this remark, we prove the following property of the double cosets in (4.93).
LEMMA 4.16. Let $\xi_{a,b}(B_0, V)$ be the elements (4.94), let $\widetilde{\Gamma} = \widetilde{\Gamma}_0^n(q)$, and let $*$ be the
anti-automorphism (4.88). Then
PROOF. First of all, using (4.39) and (4.52), we find that $Y_{a,b}(B_0, V) = \begin{pmatrix} p^4 D^* & N \\ 0 & D \end{pmatrix}$,
where
and
$$N = \operatorname{diag}(0_{n-a-b}, B_0, 0_b)\,V\,\operatorname{diag}(p^2E_{n-i}, pE_i).$$
We now choose $S_1$ and $S_2$ in such a way that in the matrix
$$Y_{a,b}^{(2)}(B_0, V) = T(S_1)\,Y_{a,b}^{(1)}(B_0, V)\,T(S_2) = \begin{pmatrix} p^4D_1^* & N_2 \\ 0 & D_1 \end{pmatrix}$$
since this can always be achieved by multiplying $Y_{a,b}^{(1)}(B_0, V)$ by a suitable matrix of
the form $T(S) \in \Gamma$, which is permissible by Remark 4.15. Thus, we may assume that
in (4.93)
$$\xi_{a,b}(B_0, V) = (Y_{a,b}^{(2)}(B_0, V);\ t_{a,b}(B_0, V)).$$
Using the notation (4.10) and (4.46), we define the following matrices in $\Gamma$:
$$P'(A_{22}) = \prod_{i=s_1+1}^{s_1+\rho} \varphi_i(P_{\lambda_i}'), \qquad
P''(A_{22}) = \prod_{i=s_1+1}^{s_1+\rho} \varphi_i(P_{\lambda_i}'').$$
Then the matrix $Y_{a,b}^{(2)}(B_0, V)$ can be written in the form $Y_{a,b}^{(2)}(B_0, V) = P'(A_{22}) \times Y_{a,b}^{(3)}(B_0, V)\,P''(A_{22})$, where
(4.95)
According to Lemma 4.11, we obtain the following relations from the last equality
for $Y_{a,b}^{(3)}(B_0, V)$:
$$\widehat{Y}{}_{a,b}^{(3)}(B_0, V) = \bigl(Y_{a,b}^{(3)}(B_0, V);\ t(\widehat{Y}{}_{a,b}^{(3)}(B_0, V))\bigr)
= \widetilde{P}'(A_{22})\,\widehat{Y}{}_{a,b}^{(4)}(B_0, V)\,\widetilde{P}''(A_{22}),$$
$$s(\widehat{Y}{}_{a,b}^{(3)}(B_0, V)) = \chi_{(2)}(P'(A_{22}))\,\chi_{(2)}(P''(A_{22}))\,s(\widehat{Y}{}_{a,b}^{(4)}(B_0, V)).$$
It is not hard to see that, if we multiply the matrix $Y_{a,b}^{(4)}(B_0, V)$ by suitable matrices of
the form $U(W) \in \Gamma$ and $\varphi_i(\sigma)$, where $\sigma$ is either $\sigma'$ or $\sigma''$ (see (4.47)), we can reduce
it to the canonical form (4.7). Hence, using Lemma 4.11, the relation $s(\widetilde{U(W)}) = 1$,
and (4.77), we conclude that $s(\widehat{Y}{}_{a,b}^{(4)}(B_0, V)) = 1$. From this, (4.75), and (4.77) we
finally obtain
(4.96)
Since, by the last equality for $\xi_{a,b}(B_0, V)$, we can rewrite this element as the
product
$$\widehat{Y}{}_{a,b}^{(3)}(B_0, V)\,\bigl(E_{2n};\ t_{a,b}(B_0, V)\,t(\widehat{Y}{}_{a,b}^{(3)}(B_0, V))^{-1}\bigr),$$
and since the elements $Y_{a,b}^{(k)}(B_0, V)$ for $k = 3, 4$ lie in the same $\widetilde{\Gamma}$-double coset, it
follows that in (4.93) we can take
(4.97)
We now prove the second equality in (4.89). The relation (4.97) shows that the
value $\mu(\xi_{a,b}(B_0, V))$ does not change if $B_0$ is replaced by $-B_0$. But because the map
$B_0 \to -B_0$ is obviously an automorphism of the space of matrices in $S_a(\mathbf{Z}/p\mathbf{Z})$ with
fixed $r_p$-rank, by Lemma 4.16 and (4.93) this implies that $X^* = X$. $\square$
PROBLEM 4.17. Suppose that $\xi = (M, \varphi) \in \widetilde{S}_p^n(q)^+$, $\widehat{M}$ is the element defined by
(4.58), and $\widetilde{\Gamma} = \widetilde{\Gamma}_0^n(q)$. Show that $\xi = \widehat{M}(E_{2n}, \varepsilon)$, where $\varepsilon \in \mathbf{C}_1$, and that $(\xi)_{\widetilde{\Gamma}} \neq (\widehat{M})_{\widetilde{\Gamma}}$
if $\varepsilon^2 \neq 1$.
[Hint: Use (4.88) and (3.19) of Chapter 2.]
PROBLEM 4.18. Prove that $(\widehat{K})_{\widetilde{\Gamma}}^* \neq (\widehat{K})_{\widetilde{\Gamma}}$, where
(4.100)
Furthermore, for any odd integer $k$ we define the map
$$(4.101)\qquad \widetilde{L}_{0,p} \xrightarrow{\ P_k\ } \overline{L}{}_{0,p}^n = L_{0,p} \otimes_{\mathbf{Q}} \mathbf{C}$$
by mapping double cosets
$$(\widetilde{\Gamma}\xi\widetilde{\Gamma}) \xrightarrow{\ P_k\ } s(\xi)^{-k}(\Gamma P(\xi)\Gamma),$$
where $s(\xi)$ is the function (4.61), and then extending $P_k$ by $\mathbf{Q}$-linearity to the entire
ring $\widetilde{L}_{0,p}$. Since $t_M \equiv 1$ for all $M \in S_{0,p}$, it follows from (1.24)–(1.25) and (1.10)
that $P_0$ is a homomorphism. According to (3.19) of Chapter 2 and (4.2), the map
$s\colon \widetilde{S}_{0,p} \to \mathbf{C}_1$ is a homomorphism whose kernel contains the group $\widetilde{\Gamma}_0$. Consequently,
for any $k$ the map $P_k$ is also a homomorphism. We note that, although $k$ can be any
integer in the definition (4.101), only the case of odd $k$ is important for applications.
Hence, in what follows we shall always suppose that $k$ is odd.
Suppose that $\widetilde{X} = (\xi)_{\widetilde{\Gamma}}$, where $\widetilde{\Gamma} = \widetilde{\Gamma}_0^n(q)$, belongs to the ring $\widetilde{L}_p^n(q)$, and $r(\xi)$ is
not an even power of the prime $p$. Then from Proposition 4.3 it follows that $t_M \not\equiv 1$,
where $M = P(\xi)$, and so the partition $\Gamma_M = \bigcup_i (\operatorname{Ker} t_M)\beta_i$ contains more than one
coset. If we are also given a second partition $\Gamma = \bigcup_j \Gamma_M\alpha_j$, then
$$\widetilde{X} = \bigcup_i\bigcup_j \widetilde{\Gamma}\,\xi\,\widetilde{\beta}_i\widetilde{\alpha}_j$$
is also a partition into disjoint cosets. Since we have $\xi\widetilde{\beta}_i = \widetilde{\beta}_i'(E_{2n}, t_M(\beta_i)^{-1})\xi$ by
(4.6), where $\beta_i' \in \Gamma$, it follows that the last decomposition can be rewritten in the form
where, by Lemma 3.4, we may suppose that $\xi\widetilde{\alpha}_j \in \widetilde{S}_{0,p}$. We now let
(4.102)
Since the set $\{t_M(\beta_i)\}$ is a nontrivial subgroup of the group of fourth roots of unity
(because $t_M^4 = 1$ by Lemma 3.4 of Chapter 2), it now follows that, since $k$ is odd,
$e_{q,k}(\widetilde{X}) = 0$; thus,
(4.103)
where $\overline{L}{}_p^n(q) = e_{q,k}(\widetilde{L}_p^n(q))$ and $\overline{E}{}_p^n(q) = e_{q,k}(\widetilde{E}_p^n(q))$. In Chapter 4 we shall show that
the homomorphism $e_{q,k}$ commutes with the representation of the Hecke rings on the
corresponding spaces of modular forms. Hence, (4.103) shows that in the theory of
Siegel modular forms of half-integer weight $k/2$, where $k$ is the same as in (4.101), it is
only the even Hecke rings $\overline{E}{}_p^n(q)$ (or the rings $\overline{E}{}_p^n(q, \chi)$, which do not differ from them
in any essential way) that are of importance.
LEMMA 4.19. In the ring
(4.104)
the images of the elements (4.82) have the following left coset decompositions:
$$(4.106)\qquad \sum_{\substack{B_0,S,V\\ r_p(B_0)=r}} \cdots,$$
in which $\chi$ is the function (4.70) and the matrices $B_0$, $S$, and $V$ run through the sets in
(4.41).
PROOF. The lemma follows from (4.91) and Proposition 4.12. $\square$
In §3.3 we defined the maps $\Phi$, $\omega$, and $\Omega$. We shall use the same letters to denote
the extensions by linearity to the complexifications of the corresponding rings in (3.50).
Now from (4.105) and (4.109) we obtain the following formula for the $\Phi$-images of
the elements $\widetilde{T}_i(p^2)$:
(4.111)
It turns out that the image under $\Phi$ of the ring $\overline{E}{}_p^n(q)$ coincides with the ring on
the right side of (4.107). The proof of the next basic result of this section relies upon
this fact.
THEOREM 4.21. Let $n, q \in \mathbf{N}$, where $q$ is divisible by 4, and let $p$ be a prime not
dividing $q$. Then, in the notation of Theorem 3.30:
(1) The restriction of the map $\Omega$ to the subring
$$(4.112)\qquad \overline{E}{}_p^n(q, \chi) = e_{q,k}(\mathbf{E}_p^n(q, \chi)) \subset \overline{L}{}_{0,p}^n,$$
where $k$ is an arbitrary odd integer and
$$(4.113)\qquad \mathbf{E}_p^n(q, \chi) = \mathbf{Q}[\widetilde{T}_0(p^2), \dots, \widetilde{T}_n(p^2)]$$
is the integral subring of the ring (4.83), gives an isomorphism of this ring with the ring
$\mathbf{Q}[x_0, \dots, x_n]^{W_2}$ of polynomials that are invariant under the group of automorphisms
$W_2 = W_n'$, which is obtained by adjoining to $W_n$ the automorphism $\tau_0$: $\tau_0(x_0) = -x_0$,
$\tau_0(x_i) = x_i$ for $i = 1, \dots, n$.
(2) The ring $\mathbf{Q}[x_0, \dots, x_n]^{W_2}$ is generated over $\mathbf{Q}$ by the polynomials
$$(4.114)\qquad t^2 = (t_n(x_0, x_1, \dots, x_n))^2, \qquad p_a = p_a^n(x_0, x_1, \dots, x_n) \quad (0 \leqslant a \leqslant n-1).$$
where $A_0$ is given by (4.108). Since the $l_p(r, a)$ are rational numbers and $l_p(0, a) = 1$,
it follows from this and from (4.116) that $\Phi(\overline{E}{}_p^n(q)) = \mathbf{Q}[x_0^2A_0, \dots, x_0^2A_n]$, which,
together with (4.107), implies that
$$(4.119)\qquad \Omega(\overline{E}{}_p^n(q, \chi)) = \Omega(\overline{E}{}_p^n(q)).$$
We now apply Theorem 3.30. Since $\Omega(T(p)) = t$ (by (3.70)), it follows from
(4.116) that
$$(4.120)\qquad \Omega(\overline{E}{}_p^n(q)) = \mathbf{Q}[t^2, p_0, \dots, p_{n-1}].$$
If we take into account the definitions (3.52)–(3.54) of the polynomials $t$ and $p_a$,
we see that the right side of the last equality coincides with the polynomial ring
$\mathbf{Q}[x_0, \dots, x_n]^{W_2}$. From this, (4.119), (4.120), and the commutativity of the ring
$\overline{E}{}_p^n(q, \chi)$ (which follows from Theorem 4.13), we obtain the first and second parts of
the theorem.
The third part follows from (4.117), the analogous equality for the ring $\overline{E}{}_p^n(q, \chi)$,
and the fact that
(4.121)
Here the first equality follows from (4.111) and (4.118), and the second equality is a
consequence of Lemma 3.34.
Finally, the fourth part follows from the second and third parts and from the
commutativity of the ring $\overline{E}{}_p^n(q, \chi)$. $\square$
Just as in the case of $L_p^n$, this theorem enables us to parameterize the set of all
$\mathbf{Q}$-linear homomorphisms from the ring $\overline{E}{}_p^n(q, \chi)$ to the field $\mathbf{C}$.
§5. HECKE RINGS FOR THE TRIANGULAR SUBGROUP 179
PROPOSITION 4.22. Every nonzero $\mathbf{Q}$-linear homomorphism $\Lambda$ from the ring $\overline{E}{}_p^n(q, \chi)$
to $\mathbf{C}$ has the form
$$(4.122)\qquad T \to \Lambda_A(T) = \Omega_p^n(T)\big|_{(x_0, \dots, x_n) = A},$$
where $T \in \overline{E}{}_p^n(q, \chi)$ and $A = (\alpha_0, \dots, \alpha_n)$ is a set of nonzero complex numbers that
depends on $\Lambda$. This set is called the parameters of the homomorphism $\Lambda$. If one set of
parameters is obtained from another by the action of a transformation in $W_2$, then the
two sets of parameters correspond to the same homomorphism.
PROOF. The proposition is an immediate consequence of Theorem 4.21 and the
proof of Proposition 3.36. $\square$
PROBLEM 4.23. Let $n = 1$ and $\widetilde{T} = \widetilde{T}_1(p^2)$. Prove that $\Omega$ takes the polynomials
over $\overline{E}{}_p^1(q, \chi)$
respectively to the polynomials $r(x_1^2; v)$ and $q(x_0^2, x_1^2; v)$ in $\mathbf{Q}[x_0^{\pm 1}, x_1^{\pm 1}]^{W_2}$ (see (3.52)
and (3.76)).
[Hint: Use (4.111).]
§5. Hecke rings for the triangular subgroup of the symplectic group
When studying elements of a Hecke ring of the symplectic group, it is sometimes
convenient to decompose them into suitable components which, however, do not
themselves belong to this Hecke ring. The place where all of these components lie is a
suitable Hecke ring of the triangular subgroup $\Gamma_0^n$ of the modular group $\Gamma^n$.
1. Global rings. According to Lemma 3.25(3), we can define the global Hecke ring
for $\Gamma_0^n$
(5.1)
and for any $q \in \mathbf{N}$ we can define its $q$-subring
(5.2)
where $S_0^n(q) = S_0^n \cap \operatorname{GL}_{2n}(\mathbf{Z}_q)$. It is clear that the local rings $L_{0,p}$ that were introduced
in §3.3 are contained in $L_0^n(q)$ if $(p, q) = 1$.
By analogy with the local case, it follows from Lemma 3.4 that the Hecke pairs
$(\Gamma_0^n(q), S^n(q))$ and $(\Gamma_0^n(q), S_0^n(q))$ and the Hecke pairs obtained from them in the case $4 \mid q$
by lifting by means of the homomorphisms $r$ and $\rho$ (see (4.3) and (4.99)) satisfy the
conditions (1.26). Thus, one can determine imbeddings (1.27) of the corresponding
Hecke rings:
(5.3)
which enables us in place of $L^n(q)$ and (by Theorems 4.6 and 4.21)
(5.5)
inside the global ring $\overline{L}{}_0^n(q) = L_0^n(q) \otimes_{\mathbf{Q}} \mathbf{C}$, where $\overline{e}_{q,k} = e\cdot P_k$.
We shall examine certain multiplicative properties of the rings (5.2). Unlike the
Hecke rings of the symplectic group and the symplectic covering group, the Hecke
rings of the triangular subgroup are noncommutative and contain zero divisors.
PROBLEM 5.1. Let $n = 1$, let $p$ be an odd prime, and let $X$ be an element of $L_{0,p}^1$ of
the form
$$\left(\begin{pmatrix} p & i \\ 0 & p \end{pmatrix}\right)_{\Gamma_0} \quad \text{and} \quad
\left(\begin{pmatrix} p^2 & pi \\ 0 & 1 \end{pmatrix}\right)_{\Gamma_0} \quad \text{for } i = 1, \dots, p-1$$
are pairwise distinct, and each of them consists of a single left coset modulo $\Gamma$.]
However, several important properties of the rings $L^n(q)$ and $E^n(q, \chi)$ do carry
over to the rings $L_0^n(q)$. Moreover, in practice we shall have need only of certain
subrings and submodules of the rings $L_0^n(q)$ and $L_{0,p}$.
where
(5.6)
The lemma implies that elements of the form $\Delta^n(r)$ lie in the center of $L_0^n(q)$ and
are invertible in this ring. As in the case of the analogous lemmas for the Hecke rings
considered earlier, in practical calculations this lemma enables us to reduce the case of
arbitrary double cosets to the case of double cosets of integer matrices.
The map $j$ in §1.4 allows us to define an important anti-automorphism $*$ of the
ring $L_0^n(q)$.
Every element $T$ in the subring (5.5) of $\overline{L}{}_0^n(q)$ is invariant relative to the anti-automorphism (5.1):
$$(5.9)\qquad X^* = X \quad \text{for } X \in L^n(q) \text{ or } E^n(q, \chi).$$
We now turn our attention to subrings of $L_0^n(q)$. It turns out that, in addition to
the Hecke rings of the symplectic group, this ring also contains commutative subrings
that can be obtained as the centralizers of certain sets of elements and that are naturally
isomorphic to Hecke rings of the general linear group of order $n$ (more precisely, to
certain extensions of them). This circumstance makes it possible to reduce various
questions in the theory of Hecke rings and Hecke operators for the symplectic group
to analogous questions for the general linear group.
PROOF. The decompositions in (5.12) follow from (3.44). The relations (5.13)–(5.16) are obtained directly from the definitions and (5.12). $\square$
We now consider the subsets of $L_0^n(q)$ consisting of all elements that commute
with all elements of the form $\Pi_-(m)$ and all elements of the form $\Pi_+(m)$, respectively:
$$(5.17)\qquad C_-^n(q) = \{X \in L_0^n(q);\ \Pi_-(m)X = X\Pi_-(m),\ (m, q) = 1\},$$
$$(5.18)\qquad C_+^n(q) = \{X \in L_0^n(q);\ \Pi_+(m)X = X\Pi_+(m),\ (m, q) = 1\}.$$
These are clearly subrings of $L_0^n(q)$. From (5.16) it follows that the anti-automorphism
$*$ takes each of these subrings into the other one:
$$(5.19)\qquad C_-^n(q)^* = C_+^n(q), \qquad C_+^n(q)^* = C_-^n(q).$$
PROPOSITION S.S. The ring C~ (q) (resp. q~ (q)) is spanned by the double cosets
modulo r 0 = ro ofelements of the form
(S.20) M = U(r,D) E S 0(q), where dn(D) 2 Ir
(resp., of the form
(S.21) M = U(r,D) E S 0(q), wherer I d1(D) 2 ),
where U(r,D) = (r~* ~)and d;(D) denotes the ith elementary divisor of the
matrix D.
We first describe the decomposition of the Γ_0-double cosets of elements of the form (5.20) and (5.21) into left cosets modulo Γ_0.
where Λ = Λ^n, and in the last condition under the summation the set S' = r^{-1}·ᵗD_i S_n D_i is contained in the group S_n = S_n(Z) and is regarded as a subgroup there.
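Proposition 5.5 below is stated in terms of the elementary divisors d_i(D). For an integer matrix these can be computed from minors: if δ_k denotes the gcd of all k × k minors (with δ_0 = 1), then d_k = δ_k/δ_{k-1}. A small sketch of this determinantal-divisor computation (function names are ours, not the book's):

```python
from itertools import combinations
from math import gcd

def minor_det(mat, rows, cols):
    """Determinant of the square submatrix on the given rows/columns (Laplace expansion)."""
    sub = [[mat[r][c] for c in cols] for r in rows]
    n = len(sub)
    if n == 1:
        return sub[0][0]
    return sum((-1) ** j * sub[0][j]
               * minor_det(sub, range(1, n), [c for c in range(n) if c != j])
               for j in range(n))

def elementary_divisors(mat):
    """d_1 | d_2 | ... | d_n, where d_k = gcd(k x k minors) / gcd((k-1) x (k-1) minors)."""
    n = len(mat)
    deltas = [1]
    for k in range(1, n + 1):
        g = 0
        for rows in combinations(range(n), k):
            for cols in combinations(range(n), k):
                g = gcd(g, minor_det(mat, rows, cols))
        deltas.append(abs(g))
    return [deltas[k] // deltas[k - 1] for k in range(1, n + 1)]
```

For D = diag(2, 4) this gives (d_1, d_2) = (2, 4), so a matrix U(r, D) satisfies the condition d_n(D)^2 | r of (5.20) precisely when 16 | r.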
PROOF OF THE LEMMA. It is easy to see that

    D_i ∈ Λ\ΛDΛ

where the left cosets are pairwise distinct and all a_α are nonzero. We choose an integer m prime to q for which all of the matrices mB_αD_i^{-1} are integer matrices. Then (see (5.11))
(5.25)
Let diag(d_1, ..., d_n) = ed(D_α) be the elementary divisor matrix of D_α. Since D_α = γ·ed(D_α)·δ, where γ, δ ∈ Λ, it follows that D_α can be replaced by ed(D_α) in (5.25). Then this condition obviously means that all of the ratios r_α/d_i d_j are integers, and this is equivalent to the condition d_n(D_α)^2 | r_α. Finally, because X is invariant under right multiplication by matrices in Γ_0 of the form U(γ) with γ ∈ Λ, it follows that the expansion of X can be rewritten in the form

    X = Σ_β a_β { Σ_{D_α ∈ Λ\ΛD_βΛ} (Γ_0 U(r_β, D_α)) },

where D_β runs through D_α lying in pairwise distinct Λ-double cosets, and any of the Λ-left cosets in ΛD_βΛ is equal to one of the ΛD_α with D_α ∈ ΛD_βΛ. Then the relation d_n(D_α)^2 | r_α and (5.22) imply that the expression in braces is the decomposition of some double coset (M_β)_{Γ_0}, where M_β has the form (5.20). □
An important tool for studying the global ring L_0^n and its subrings is the global analogue of the map Φ that was defined in §3.3. We associate a variable y_p to each prime number p, and for different p we suppose that these variables commute with one another. We let Ω = Q[..., y_p^{±1}, ...] be the ring of polynomials over Q in the variables y_p^{±1} (p = 2, 3, 5, ...). We define the Q-linear map Φ = Φ^n from the module L_Q(Γ_0, S_0) to the module L_Ω(Λ^n, G^n) by setting

if r = p_1^{δ_1} ··· p_s^{δ_s}. It is clear that this map does not depend on the choice of left coset representatives.
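The map Φ turns the similitude r into a monomial in the variables y_p through the exponents of its prime factorization r = p_1^{δ_1} ··· p_s^{δ_s}. Extracting those exponents is plain trial division; a minimal sketch (function name ours):

```python
def prime_exponents(r):
    """Return {p: v_p(r)} for a positive integer r, by trial division."""
    exps, p = {}, 2
    while p * p <= r:
        while r % p == 0:
            exps[p] = exps.get(p, 0) + 1
            r //= p
        p += 1
    if r > 1:
        exps[r] = exps.get(r, 0) + 1
    return exps
```

For example, prime_exponents(360) returns {2: 3, 3: 2, 5: 1}, corresponding to the monomial y_2^3 y_3^2 y_5 (up to the normalization in the definition of Φ).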
PROPOSITION 5.7. The restriction of the map Φ^n to the Hecke ring L_0^n ⊂ L_Q(Γ_0, S_0) gives an epimorphism of this ring onto the Hecke ring D_Ω(Λ^n, G^n) of the Hecke pair (Λ^n, G^n) over Ω = Q[..., y_p^{±1}, ...]:

(5.27) Φ = Φ^n: L_0^n → D_Ω(Λ^n, G^n).
THEOREM 5.8. The restrictions of the map Φ^n to the subrings C^n_-(q) and C^n_+(q) of L_0^n, where n, q ∈ N, are monomorphisms. In particular, C^n_-(q) and C^n_+(q) are commutative rings with no zero divisors.
PROOF. The proof is similar for C^n_-(q) and C^n_+(q); to be definite, we consider the case of C^n_+(q). By Proposition 5.5, every nonzero X ∈ C^n_+(q) can be written in the form

    X = Σ_i a_i (U(r_i, D_i))_{Γ_0},

where all of the U(r_i, D_i) have the form (5.21), the double cosets are pairwise distinct, and all a_i are nonzero. Then from (5.23) it follows that
According to this theorem, the rings C^n_±(q) may be regarded as extensions of the global Hecke ring of the general linear group. Since the ring L_0^n contains the global Hecke rings of the symplectic group, this makes it possible for us to examine the connections between Hecke rings of the symplectic group and the general linear group.
2. Local rings. In earlier sections we have already studied the local Hecke ring L^n_{0,p} for each prime p, and also its local subrings L^n_p and E^n_p(q, χ) (see (3.45) and (4.112)). It is clear that

(5.28) L^n_{0,p} ⊂ L_0^n(q), if (p, q) = 1.
We now introduce local analogues of the rings C^n_±(q). We set
PROOF. We note that we can obtain the map Φ^n_p on L^n_{0,p} if we take the restriction to L^n_{0,p} of the map Φ^n and then set y_p = x_0. Thus, it follows from Theorem 5.8 that the restriction of Φ^n_p to either C^n_{-,p} or C^n_{+,p} is a monomorphism. From this and Lemma 3.29 we see that the restrictions of Ω^n_p are also monomorphisms. □
In the next section we make a more detailed study of the properties of the local rings for fixed p, in connection with the problem of factoring polynomials over L^n_p or E^n_p(q, χ). For now we limit ourselves to a discussion of some of the connections between the local rings corresponding to different primes.
THEOREM 5.10. The ring C^n_-(q) (resp. C^n_+(q)), where n, q ∈ N, is generated by the subrings C^n_{-,p} (resp. C^n_{+,p}), where p runs through all prime numbers not dividing q.
PROOF. From (5.19) it follows that it suffices to treat, say, the case of C^n_-(q). Proposition 5.5 implies that for this it is enough to verify that, given an arbitrary M of the form (5.20), the double coset (M)_{Γ_0} is a finite product of double cosets (M_p)_{Γ_0} ∈ C^n_{-,p}, where p runs through a set of distinct primes not dividing q. Let M = U(r, D). By Lemma 2.2, if we replace M by another representative of the same Γ_0-double coset, we may assume that D is equal to its elementary divisor matrix ed(D) = diag(d_1, ..., d_n). For each p we set

    D_p = diag(p^{v_p(d_1)}, ..., p^{v_p(d_n)}),  r_p = p^{v_p(r)},  and  M_p = U(r_p, D_p),

where v_p(a) is the exponent of p in the prime factorization of the rational number a. Clearly, M_p is not equal to the identity matrix for only finitely many p, and none of these p divide q. Each matrix M_p lies in S^n_{0,p}. Since d_n(D)^2 | r, it follows that d_n(D_p)^2 divides r_p for each p; hence, (M_p)_{Γ_0} ∈ C^n_{-,p}. Because ∏_p r_p = r and, by Proposition 2.5, ∏_p (D_p)_Λ = (∏_p D_p)_Λ = (D)_Λ, we conclude from Lemma 5.6 and the definitions that the double coset (M)_{Γ_0} is equal to the product of the double cosets (M_p)_{Γ_0}. □
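In this proof the local components are read off purely from p-adic valuations: r_p = p^{v_p(r)} and D_p = diag(p^{v_p(d_1)}, ..., p^{v_p(d_n)}). A small sketch (helper names ours) that splits the data (r, diag) into its p-parts and lets one check that the parts recombine:

```python
def v(p, a):
    """Exponent of the prime p in the positive integer a."""
    e = 0
    while a % p == 0:
        a //= p
        e += 1
    return e

def prime_factors(m):
    """Set of primes dividing the positive integer m."""
    fs, q = set(), 2
    while q * q <= m:
        if m % q == 0:
            fs.add(q)
            while m % q == 0:
                m //= q
        q += 1
    if m > 1:
        fs.add(m)
    return fs

def local_parts(r, diag):
    """For M = U(r, diag(d_1, ..., d_n)), return {p: (r_p, [p-parts of the d_i])}."""
    primes = sorted({q for m in [r] + diag for q in prime_factors(m)})
    return {p: (p ** v(p, r), [p ** v(p, d) for d in diag]) for p in primes}
```

For r = 36 and diag = (2, 12) this yields the 2-part (4, (2, 4)) and the 3-part (9, (1, 3)); multiplying the parts entrywise recovers (36, (2, 12)), as used in the proof.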
(5.32) ε_{12} ≡ 0 (mod p),  ε_{23} ≡ 0 (mod p),  ε_{13} ≡ 0 (mod p^2),
       det ε_{ii} ≢ 0 (mod p) for i = 1, 2, 3.
Using these congruences, (4.39), and the relation η* = D^{-1}·ᵗεD, we obtain
(5.33)
From Lemma 3.2 it follows that for a < n and for any U ∈ GL_a(F_p) there exists ε ∈ Λ(D) such that ε_{22} ≡ U (mod p). If a = n, then ε_{22} = ε, so that det ε_{22} = det ε = ±1, and the matrix U must satisfy the condition det U ≡ ±1 (mod p). From this and (5.33) it follows that the matrix B_0 in the double coset (5.31) can be any matrix of the set

(5.35) B'_0 ∈ {B_0}^a_{S,V},

where S and V run through the sets of matrices in (4.41). From this we easily see that the equality

(5.36) (M_{a,b}(B_0))_{Γ_0} = (M_{a_1,b_1}(B'_0))_{Γ_0}

holds if and only if a = a_1, b = b_1, and {B_0}^a_p = {B'_0}^a_p.
LEMMA 5.11. Let 0 ≤ r ≤ a, a + b ≤ n. Then Π^{(r)}_{a,b} and Π^{(r)}_{a,b}(k) have the following decompositions into Γ_0-double cosets, where Γ_0 = Γ_0^n:

(5.37) Π^{(r)}_{a,b} = Σ_{{B_0}^a_p, r_p(B_0)=r} (M_{a,b}(B_0))_{Γ_0},

(5.38) Π^{(r)}_{a,b}(k) = Σ_{{B_0}^a_p, r_p(B_0)=r} χ(B_0)^{-k} (M_{a,b}(B_0))_{Γ_0},

where the summation is taken over the set (5.34) in S_a(Z)/mod p. The action of the anti-automorphism * on these elements is given by the formulas

(5.39) (Π^{(r)}_{a,b})* = Π^{(r)}_{a,n-a-b},  (Π^{(r)}_{a,b}(k))* = Π^{(r)}_{a,n-a-b}(k).
PROOF. Since all of the left cosets in (5.30) and (4.106) and all of the double cosets on the right in (5.37) and (5.38) are pairwise distinct, and since these double cosets occur in Π^{(r)}_{a,b} and Π^{(r)}_{a,b}(k), it follows that (5.37) and (5.38) are consequences of (5.35). By the definition of the anti-automorphism * (see (5.7)) we have

    U(I) ( p^2 D*_{a,b}  -B ; 0  D_{a,b} ) U(I) = M_{a,n-a-b}(-I_a B_0 I_a),

so that

    (M_{a,b}(B_0))*_{Γ_0} = (M_{a,n-a-b}(-I_a B_0 I_a))_{Γ_0}.

The equalities in (5.39) follow from these relations and from (5.37)–(5.38), since the map B_0 → -I_a B_0 I_a merely permutes the classes {B_0}^a_p with r_p(B_0) = r, and since, by (4.70) and (4.98), we have χ(B_0) = χ(-B_0) = χ(-I_a B_0 I_a). □
(5.40)

Just as in the first part of Lemma 3.33, it is not hard to verify that

and that in this case we can take the set α*{B_K(D)/S_n(K)_D}_p as a set of representatives of the residue classes B_K(αD_p)/S_n(K)_{αD_p}. Now let K = Z[p_1^{-1}], and let D be an integer matrix all of whose elementary divisors are prime to p_1. Then each residue class in B_K(D)/S_n(K)_D contains an integer matrix. Namely, if we write an arbitrary matrix B in B_K(D) in the form q^{-1}B_0 with B_0 an integer matrix and with q = p_1^f, and if we choose S_0 ∈ S_n(Z) so that B_0 + S_0 D ≡ 0 (mod q) (D is invertible modulo q), then we obtain B + q^{-1}S_0 D ∈ B_Z(D). This implies that in our case we can take

    B_K(D)/S_n(K)_D = B_Z(D)/mod D.

Returning to (5.40), if we use the above considerations and Proposition 2.5, we can write this expression in the form
(5.41) (M_{a,b}(B_0))_{Γ_0} = Σ_{D ∈ Λ\ΛD_{a,b}Λ, B} (Γ_0 ( p^2 D*  B ; 0  D )),

where B runs through the matrices in the set

(5.42) B_Z(D) = η* B_Z(D_{a,b}) ε, where D = ηD_{a,b}ε, η, ε ∈ Λ, and
The definition (5.42) is correct, i.e., it does not depend on how D ∈ ΛD_{a,b}Λ is represented in the form ηD_{a,b}ε. To see this, it suffices to verify that, if ηD_{a,b}ε = D_{a,b} with η, ε ∈ Λ, then

(5.44) η* B_Z(D_{a,b}) ε = B_Z(D_{a,b}).
Using (5.32), the relation η* = D_{a,b}^{-1}·ᵗεD_{a,b}, and (5.43), we find the following congruences for the blocks B_{ij}(ε) of the matrix B(ε) = η*Bε:

(5.45) B_{22}(ε) ≡ B_{22}[ε_{22}] (mod p),
       B_{32}(ε) ≡ B_1 + B'_{32}(ε) (mod p),  B_{33}(ε) ≡ B_2 + B'_{33}(ε) (mod p^2),

where B_1 and B_2 = ᵗB_2 are integer matrices whose explicit form is not important now, and

    B'_{32}(ε) ≡ ᵗε_{33}B_{32}ε_{22} + ᵗε_{33}B_{33}ε_{32} (mod p).

If B'_{32} ≡ 0 (mod p) and B'_{33} ≡ 0 (mod p^2), then (5.32) implies that B_{32} ≡ 0 (mod p) and B_{33} ≡ 0 (mod p^2). From this, (5.45), and (5.43) we conclude that B → B(ε) is a one-to-one map of the set B_Z(D_{a,b}), and this proves (5.44).
Given the ring K = Z[p_1^{-1}] and a matrix D = ηD_{a,b}ε, where η, ε ∈ GL_n(K), in the double coset GL_n(K)D_{a,b}GL_n(K) we define the following set:

(5.46)

where the definition of the set B_K(D_{a,b}) is similar to (5.43), except that B_{32} and B_{33} have entries in K/pK and K/p^2K, respectively, and the matrices B_{22} belong to the class {B_0}^a determined by the equations (5.34) with F_p = Z/pZ replaced by K/pK. Since (p_1, p) = 1 by assumption, the proof that the definition (5.46) is correct is exactly the same as in the case when the ring is Z. We obtain the following relation directly from the definition (5.46):
    MN = ( rt(AD)*  rt(AD)*S ; 0  AD ) ∈ Γ_0 U(rt, AD) Γ_0 ⊂ Γ_0 M_0 N_0 Γ_0,

since ΛADΛ = ΛAΛ·ΛDΛ = ΛDΛ·ΛAΛ = ΛDAΛ, by Proposition 2.5 and Theorem 2.3. Similarly, we have

    NM = ( rt(DA)*  rt(DA)*S[A] ; 0  DA ) ∈ Γ_0 U(rt, AD) Γ_0 ⊂ Γ_0 M_0 N_0 Γ_0.

This implies that (M_0)_{Γ_0}(N_0)_{Γ_0} = α(N_0 M_0)_{Γ_0} and (N_0)_{Γ_0}(M_0)_{Γ_0} = β(N_0 M_0)_{Γ_0} for certain constants α and β. A count of the left cosets on the left and right sides of these equalities shows that α = β. □
PROBLEM 5.13. Prove that C^n_{-,p} (resp. C^n_{+,p}) is the centralizer of Π_-(p) (resp. Π_+(p)) in L^n_{0,p}.
3. Expansion of T^n(m) for n = 1, 2. At the beginning of this section we mentioned that by passing from Hecke rings of Γ^n to Hecke rings of the subgroup Γ_0^n one can often decompose elements of the former rings into more elementary components. In §6 we shall consider these questions in more detail for the case of local Hecke rings. Here we shall remain in the global situation, and for n = 1, 2 we shall obtain expansions of the images T^n(m) ∈ L_0^n(q) of the elements (3.19) under the map (5.3).
(5.49) T^1(m) = Σ_{d_1 d_2 = m} Σ_{b mod d_1} (Γ_0 ( d_2  b ; 0  d_1 )),

where D ∈ Λ\ΛD(d_2)Λ, B' ∈ B_Z(D)/mod d_1 D. On the other hand, by (5.12) we have

    Σ_{d_1 d_2 = m} Π^1_-(d_1) Π^1_+(d_2) = Σ_{d_1 d_2 = m} Σ_{b mod d_1} (Γ_0 ( d_2  0 ; 0  d_1 )( 1  b ; 0  1 )),

which, combined with the above expansion, proves (5.48). Next, we use (5.12) again, and we note that the following relation is easily verified using Lemma 3.33:

(5.51) Σ_{d_1 d_2 d_3 = m} Σ_{D, B, S} (Γ_0 ( d_3 d_1 D*  B ; 0  d_1 D )),

    = ∏_p (1 - Π^1_-(p) p^{-s})^{-1} ∏_p (1 - Π^1_+(p) p^{-s})^{-1}
    = ∏_p {(1 - Π^1_-(p) p^{-s})(1 - Π^1_+(p) p^{-s})}^{-1}.
are called the Frobenius elements of the ring L_0 = L^n_{0,p}. In §5 we defined the integral domains C_± = C^n_{±,p}, which can also be characterized by the conditions

(6.1) C_- = {X ∈ L_0; Π_-X = XΠ_-},  C_+ = {X ∈ L_0; Π_+X = XΠ_+}.

In fact, the left sides are contained in the right sides by definition, and the reverse inclusions follow from the local variant of Proposition 5.5, the proof of which we leave to the reader. From (5.19) it follows that the rings C_- and C_+ are dual to one another relative to the anti-automorphism *:

(6.2) C*_- = C_+,  C*_+ = C_-.
We now show that any element of L_0 can be projected onto either C_- or C_+. Let

(6.3)
PROOF. From the definitions it follows that, under the conditions of the proposition, δ_-(Π^δ_- X) = 0 (resp. δ_+(X Π^δ_+) = 0). Hence, the proposition follows from the next lemma:
LEMMA 6.2. One has:
PROOF. By (6.2) and the definition of the left and right exponents it suffices to verify the first equality. Proposition 5.5 and the decomposition (5.22) imply that δ_-(X) = 0 for any X ∈ C_-. Conversely, let X be an element of L_0 written as in (6.3) with no cancellation. Suppose that δ_-(X) = 0. Then for all i the matrix V_i = B_i D_i^{-1} is a symmetric integer matrix. Hence,

where S is an arbitrary matrix in S_n(Z). If we again use the fact that δ_-(X) = 0, we conclude that the matrices A_i S D_i^{-1} = r_i·ᵗD_i^{-1} S D_i^{-1} are all integer matrices. Thus, X is a linear combination of double cosets of elements of the form M_i = U(r_i, D_i) that satisfy (5.25), and hence also the condition d_n(D_i)^2 | r_i. Then X ∈ C_- by Proposition 5.5. □
The next lemma gives an easy and practical method for finding exponents δ that satisfy Proposition 6.1 for elements of the subrings L = L^n_p and E = E^n_p(q, χ) of the ring L_0 in the case when the left coset decomposition is not known, but the image under Ω = Ω^n_p is known.

(6.4)

where the expression on the right is the degree in x_0 of the polynomial Ω(X).
where X_{(a)} = ∏_{i=0}^n T_i(p^2)^{a_i} with a_i ∈ Z and a_i ≥ 0. According to (4.112) and (4.113), each X ∈ E can be written in the form X = Σ_{(a)} α_{(a)} X_{(a)}, where all of the α_{(a)} are nonzero and the (a) are pairwise distinct. Thus, by Theorem 4.21(1), the polynomials Ω(X_{(a)}) are linearly independent over Q. From this and (6.5) we obtain (6.4) for δ_-(X). The proof of the same inequality for X ∈ L is similar; one uses Theorems 3.23 and 3.30 and Lemma 3.32. □
(6.6) Σ_{i=0}^m (-1)^i Π^i_- q_{m-i} = 0,  Σ_{i=0}^m (-1)^i q_{m-i} Π^i_+ = 0,

(6.7) Σ_{i=0}^m (-1)^i Π^i_- q̂_{m-i} = 0,  Σ_{i=0}^m (-1)^i q̂_{m-i} Π^i_+ = 0,

where m = 2^n, q_i = q_i(p) are the elements (3.77) of the ring L = L^n_p, and q̂_i = q̂_i(p) are elements of E = E^n_p(q, χ) such that

(6.8)

where q_i(x_0, ..., x_n) are the coefficients of the polynomial (3.76).
PROOF. From (5.9) and (5.16) it follows that the anti-automorphism * transforms the first equalities in (6.6) and (6.7) into the second ones, and conversely. Hence, it suffices to prove the first equalities. Let Y be the left side of the first equality in (6.6), and let Ŷ be the left side of the first equality in (6.7). Using (3.79) and the analogous relations
§6. HECKE POLYNOMIALS FOR THE SYMPLECTIC GROUP 195
for the elements q̂_i ∈ E whose existence and uniqueness are guaranteed by Theorem 4.21, we can rewrite Y and Ŷ in the form

    Y = Σ_{i=0}^m (-1)^i (p^{(n)}Δ)^{m/2-i} Π^i_- q_i,  Ŷ = Σ_{i=0}^m (-1)^i (p^{(n)}Δ)^{m-2i} Π^i_- q̂_i.

By definition, Ω(q_i) and Ω(q̂_i) are polynomials in x_0, x_1, ..., x_n, and these polynomials have degree i and 2i, respectively, in the variable x_0. Thus, from Theorems 3.30(1) and 4.21(1) and Lemma 6.3 we conclude that q_i ∈ L, δ_-(q_i) ≤ i, and q̂_i ∈ E, δ_-(q̂_i) ≤ 2i. Then, by Proposition 6.1, each of the products Π^i_- q_i and Π^i_- q̂_i is contained in C_-. Since obviously Δ ∈ C_-, this means that Y and Ŷ also lie in C_-. On the other hand, by Lemma 3.34 we have Ω(Π_-) = Ω(Π^n_0(p)) = x_0, so that, if we use (3.76) and the definition of q_i and q̂_i, we obtain

    Ω(Y) = Σ_{i=0}^m (-1)^i x_0^{m-i} q_i(x_0, ..., x_n) = x_0^m q(x_0, ..., x_n; x_0^{-1}) = 0

and similarly Ω(Ŷ) = 0. Hence, by Theorem 5.9, we have Y = Ŷ = 0. □
If we multiply (6.6) and (6.7) by Π^d_- and Π^d_+ with d ∈ N, we obtain the relations

(6.9) Σ_{i=0}^m (-1)^i Π^{d+i}_- q_{m-i} = 0,  Σ_{i=0}^m (-1)^i q_{m-i} Π^{d+i}_+ = 0,
      Σ_{i=0}^m (-1)^i Π^{d+i}_- q̂_{m-i} = 0,  Σ_{i=0}^m (-1)^i q̂_{m-i} Π^{d+i}_+ = 0,

which may be regarded as recursion relations for the sequences of nonnegative powers of Π_- and Π_+. Since q_0 = q̂_0 = 1, the relations (6.9) give high powers of these two elements as linear combinations of smaller powers with (right or left) coefficients in the rings L and E. On the other hand, by (3.80), (6.8), and Lemma 3.27, the coefficients q_m and q̂_m are invertible in L_0. Hence, the relations (6.9) can also be used to determine small powers of the Frobenius elements in terms of higher powers. Namely, if Π^δ_- and Π^δ_+ for δ > d have already been determined, then we set

(6.10) Π^d_- = q_m^{-1} ( Σ_{i=1}^m (-1)^{i+1} Π^{d+i}_- q_{m-i} ),

(6.11) Π^d_+ = q̂_m^{-1} ( Σ_{i=1}^m (-1)^{i+1} q̂_{m-i} Π^{d+i}_+ ).
The elements Π^d_- and Π^d_+ that are obtained in this way for d < 0 are not the negative powers of Π_- and Π_+, since these elements are not invertible in L_0. They are not even powers of Π^{-1}_- and Π^{-1}_+, since, for example, Π^{-2}_- ≠ (Π^{-1}_-)^2 and Π^{-2}_+ ≠ (Π^{-1}_+)^2. Nevertheless, for brevity we shall sometimes speak of negative powers of the Frobenius elements. Note that, if we use (5.9), (5.16), and induction on d, then from (6.10) and (6.11) we find that the negative powers of the Frobenius elements, together with the positive powers, are dual with respect to the anti-automorphism *:
(6.12) O_- = O^n_{-,p} = C^n_{-,p} · L^n_p = { Σ_α X_α T_α ; X_α ∈ C^n_{-,p}, T_α ∈ L^n_p },

(6.13) O_+ = O^n_{+,p} = L^n_p · C^n_{+,p} = { Σ_α T_α Y_α ; T_α ∈ L^n_p, Y_α ∈ C^n_{+,p} },

(6.14) Ô_- = Ô^n_{-,p} = C^n_{-,p} · E^n_p(q, χ),  Ô_+ = Ô^n_{+,p} = E^n_p(q, χ) · C^n_{+,p}.

According to (5.9) and (6.2), these spaces are dual with respect to *:

(6.16)

for all d ∈ Z.
THEOREM 6.5. For δ ∈ N let the elements Π^{-δ}_± ∈ O_{±,p} and Π̂^{-δ}_± ∈ Ô_{±,p} be defined by the recursion relations (6.10) and (6.11), respectively. Then:
(1) Every element of O_{±,p} (respectively, every element of Ô_{±,p}) satisfies the relations (6.17), (6.18) (respectively (6.19), (6.20)). Conversely, every X or X̂ ∈ L_0 that satisfies any of the relations (6.17), (6.18) or (6.19), (6.20) is contained in the corresponding space O^n_{-,p}, O^n_{+,p} or Ô^n_{-,p}, Ô^n_{+,p}.
(2) The restrictions of the maps Φ^n_p and Ω^n_p (see §§3.3 and 4.3) to the spaces O_{±,p} and Ô_{±,p} are all monomorphisms.
We first prove a lemma.
LEMMA 6.6. For any T ∈ L = L^n_p (respectively, for any T̂ ∈ E = E^n_p(q, χ)) one has the relations

(6.21) Π^δ_- T Π^d_- = Π^{δ+d}_- T for all δ ≥ δ_-(T) and d ∈ Z,

(6.22) Π^δ_+ T Π^d_+ = Π^{δ+d}_+ T for all δ ≥ δ_+(T) and d ∈ Z

(respectively,
PROOF OF THE THEOREM. By the duality relations (6.15) and (6.12), it suffices to prove the first part of the theorem for, say, O_- and Ô_-. Let X ∈ O_-, and let X̂ ∈ Ô_-. Then, by definition,

    X = Σ_α X_α T_α,  X̂ = Σ_α X̂_α T̂_α.

Now suppose that δ ≥ δ_-(X). Then by what was already proved and by Proposition 6.1, we have

where we used (6.21) with T = 1 in the last step. Similarly, for 2δ ≥ δ_-(X̂) we use (6.23) to obtain
where Π^d_± and Π̂^d_± for d < 0 are defined by the recursion relations (6.10) and (6.11), respectively. Namely, these formulas were proved for nonnegative d in Lemma 3.34, while for d = -δ < 0 we have by Lemma 6.6:

so that Ω(Π^{-δ}_-) = Ω(Π^δ_-)^{-1} = x_0^{-δ}, and similarly for Π^d_+ and Π̂^d_±. Now suppose that X ∈ O_- and Ω(X) = 0. We take δ ≥ δ_-(X). Then by (6.17) we have

so that Ω(Π^δ_- X) = 0. By Theorem 5.9, the last equality implies that Π^δ_- X = 0, and then also X = (Π^δ_- X)Π^{-δ}_- = 0. The cases of O_+ and Ô_± are similar. □
    X → (p^{(n)}Δ)^{-d} Π^d_- X Π^d_+.

Show that this subspace is the set of all elements of L_0 that are invariant relative to the above map. Then deduce that the restrictions to C_- · C_+ of Φ and Ω are monomorphisms.
(2) Show that

    T(p), Π_- T_i(p^2), T_i(p^2) Π_+ ∈ C_- · C_+,

where T(p), T_i(p^2) are the images in L_0 of the elements (3.42), 0 ≤ i ≤ n. Then deduce that, if T ∈ L and the image Ω(T) is a polynomial in x_0, x_1, ..., x_n having degree δ in x_0, then Π^a_- T Π^b_+ ∈ C_- · C_+ for any a, b ≥ 0 with a + b ≥ δ - 1.
[Hint: Use the first assertion and (3.58), (3.61). For the second assertion use the fact that T is a polynomial in T(p), T_i(p^2).]
(3) Show that Π^{-1}_± ∈ C_- · C_+, and then deduce the relations Π^{-1}_- = (p^{(n)}Δ)^{-1}Π_+, Π^{-1}_+ = (p^{(n)}Δ)^{-1}Π_-.
[Hint: Use the first two parts of the problem and the definition of negative powers.]
where Ω = Ω^n_p is the spherical map (3.49), factors into a product of two polynomials with coefficients in C[x_0^{±1}, ..., x_n^{±1}]:

(6.26) Ω(P)(v) = F(v)G(v),

where

    F(v) = Σ_{i=0}^{N_1} f_i v^i,  G(v) = Σ_{j=0}^{N_2} g_j v^j,
(2) If, on the other hand, all of the coefficients of the second polynomial belong to the image Ω(C_+) of C_+ = C^n_{+,p}:

    g_j = Ω(g'_j), g'_j ∈ C_+, and g_0 = 1,

then all of the coefficients of the first polynomial belong to the image Ω(O_+) of O_+ = O^n_{+,p} (respectively the image Ω(Ô_+) of Ô_+ = Ô^n_{+,p}):

    f_j = Ω(f'_j), f'_j ∈ O_+ (respectively f'_j ∈ Ô_+),

and one again has the factorization (6.26).
PROOF. The two cases are dual to one another with respect to the anti-automorphism *, and so the proofs are analogous. We shall treat, say, the first case. Since the restriction of Ω to C_- is a monomorphism (by Theorem 5.9) and f_0 = 1, it follows that f'_0 = 1. Then the polynomial Σ_i f'_i v^i is invertible in the ring of formal power series over the commutative ring C_-, i.e., there exist a'_i ∈ C_- such that
(6.28) Ω(R_p)(v) = Σ_{α=0}^{2n} (-1)^α Ω(r^n_α(p)) v^α = r(x_1, ..., x_n; v),

       Ω(R̂_p)(v) = Σ_{α=0}^{2n} (-1)^α Ω(r̂^n_α(p)) v^α = r̂(x_1, ..., x_n; v).
PROPOSITION 6.9. The polynomials R^n_p(v) and R̂^n_p(v) factor as follows over the ring L^n_{0,p}:

(6.29) R^n_p(v) = ( Σ_{i=0}^n (-1)^i (p^{(n)}Δ)^{-1} Π_- Π_{n-i} v^i )( Σ_{i=0}^n (-1)^i Π_- Π_i Π^{-2}_- v^i ),

(6.30)

and

(6.31) R̂^n_p(v) = ( Σ_{i=0}^n (-1)^i (p^{(n)}Δ)^{-1} Π_- Π_{n-i} v^i )( Σ_{i=0}^n (-1)^i Π_- Π_i (Π^2_-)^{-1} v^i ),

(6.32) R^n_p(v) = ( Σ_{i=0}^n (-1)^i (Π^2_+)^{-1} Π_{n-i} Π_+ v^i )( Σ_{i=0}^n (-1)^i (p^{(n)}Δ)^{-1} Π_i Π_+ v^i ),

where Δ = Δ^n(p) is the element (3.48); Π_- = Π^n_-(p) = Π^n_0(p), Π_+ = Π^n_+(p) = Π^n_n(p), and Π_a = Π^n_a(p) are the elements (3.59); and Π^{-2}_± and (Π̂_±)^{-1} are determined from the recursion relations (6.10) and (6.11), respectively.
PROOF. We first show that

(6.33) Π^n_a(p)* = Π^n_{n-a}(p) for a = 0, 1, ..., n.

In fact, if D_a is the matrix (2.28), then obviously Λ_n pD_a^{-1} Λ_n = Λ_n D_{n-a} Λ_n, and hence from the definition of the anti-automorphism * and the relations (3.63) we obtain
We now turn to the polynomial Q(v) = Q^n_p(v) defined by the conditions (3.76)–(3.78). Since Ω(Π_-) = x_0 and Ω(Π_+) = x_0 x_1 ··· x_n, the next proposition is an immediate consequence of Theorem 6.8.

PROPOSITION 6.10. One has the following factorizations over the ring L_{0,p}:

    Q^n_p(v) = (1 - Π_- v)Q_-(v) = Q_+(v)(1 - Π_+ v),

where Q_- and Q_+ are polynomials of degree 2^n - 1 with coefficients in O^n_{-,p} and O^n_{+,p}, respectively.
In order to use the factorizations of the Hecke polynomials, one must be able to compute the coefficients of the factors in the form of linear combinations of Γ_0-left or double cosets. The rest of this section is devoted to these calculations for the polynomials Q^n_p with n = 1, 2 and the polynomials R^n_p and R̂^n_p with n ∈ N.
PROBLEM 6.11. Let F(v) be a polynomial of degree N with coefficients in C^n_{-,p} and F(0) = 1. Show that there exists a polynomial G(v) of degree ≤ N(2^n - 1) with coefficients in O^n_{-,p} and G(0) = 1, such that all of the coefficients of the polynomial F(v)G(v) lie in the ring L^n_p. From this deduce that every X ∈ C^n_{-,p} satisfies an equation of the form Σ_{i=0}^N X^i T_i = 0, where T_i ∈ L^n_p, T_N = 1, and N ≤ 2^n. State and prove similar results for the ring C^n_{+,p}.
[Hint: In the polynomial f(v) = Ω(F)(v) that is obtained from F by replacing its coefficients by their images under Ω, all of the coefficients are symmetric in the variables x_1, ..., x_n. Consequently, there exists a polynomial g(v) of degree ≤ N(2^n - 1) over the ring Q[x_1^{±1}, ..., x_n^{±1}] such that all of the coefficients of the product fg are invariant with respect to W_n. Hence, fg = Ω(P)(v), where P is a polynomial of degree ≤ N·2^n over L^n_p. Apply Theorem 6.8 to P. To prove the second assertion, apply the first part to the polynomial (1 - Xv).]
3. Symmetric factorization of the polynomials Q^n_p(v) for n = 1, 2. We obtain factorizations of Q^n_p(v), n = 1, 2, that are invariant with respect to the anti-automorphism *, and we compute the coefficients of the polynomial factors.
PROPOSITION 6.12. Over the ring L^1_{0,p} one has the factorization

    Q^1_p(v) = (1 - Π_- v)(1 - Π_+ v),

where Π_- = Π^1_-(p) = Π^1_0(p) and Π_+ = Π^1_+(p) = Π^1_1(p).
PROOF. According to (3.58) and (5.14), for n = 1 we have

    (1 - Π_- v)(1 - Π_+ v) = 1 - T^1(p)v + pΔ_1(p)v^2.

The last polynomial is equal to Q^1_p(v), by Proposition 3.35. □
PROPOSITION 6.13. Over the ring L^2_{0,p} one has the factorization

where V = Π^{(1)}_{1,0} + Π^{(1)}_{1,1}. Using the definitions of the elements in the above expression, the relations (6.37)–(6.39), and (5.14), we can rewrite this polynomial in the form

where in the last step we used the expressions (3.58) and (3.61) for T(p) = T^2(p) and T_1(p^2) = T^2_1(p^2), respectively. According to the formula for Q^2_p(v) in Proposition 3.35, to complete the proof it suffices to verify that

(6.40)

Since the map Ω is a monomorphism on L^2, to do this it is enough to verify that the right side of (6.40) has the same Ω-image as the left side. We compute the Ω-image of the right side by replacing T_1(p^2) by its expression in (3.61), and using Lemma 3.34 and then Lemma 2.21 to calculate the polynomials ω(Π^2_{1,0}(p)) = ω(Π^2_1(p)), ω(Π^2_{1,1}(p)) = ω(Π^2_1(p)Π^2_2(p)), and ω(Π^2_{2,0}(p)) = ω(Π^2_2(p)). The reader can easily see that the result is the polynomial that gives the left side. □
PROBLEM 6.14. For any n ∈ N and any prime p, prove the following factorization over the ring L^n_{0,p}:

    Q^n_p(v) = (1 - Π_- v) Q'(v) (1 - Π_+ v),

where Π_- = Π^n_-(p), Π_+ = Π^n_+(p), and Q' is a polynomial of degree 2^n - 2.
[Hint: Using Problem 6.7(2) and (3.79), show that all of the coefficients of the formal power series (1 - Π_- v)^{-1} Q^n_p(v) (1 - Π_+ v)^{-1} lie in C^n_{-,p} · C^n_{+,p}, and then use the fact that Ω^n_p is a monomorphism on this space.]
4. Coefficients in the factorization of Rankin polynomials. Here we compute the Γ_0^n-left and double coset expansions of the coefficients in the factorizations (6.29)–(6.32) of the Rankin polynomials R^n_p(v) and R̂^n_p(v). To do this, we must find certain products of elements of the form Π_a = Π^n_a(p), Π^{(r)}_{a,b}, and Π^{(r)}_{a,b}(k) in the ring L^n_{0,p} (see Lemmas 3.32 and 4.19).
(6.41) Π_b Π_+ = p^{(n-b)} Π^{(0)}_{n-b,b} = p^{(n-b)} Π^{(0)}_{n-b,b}(k),

where 0 ≤ b ≤ n; and, for 0 ≤ r ≤ a ≤ n, 0 ≤ b ≤ n, and r + b ≤ a,

(6.42) Π^{(r)}_{a,0} Π^{(0)}_{n-b,b} = Σ_{max(0,a+b-n) ≤ t ≤ b, 0 ≤ s ≤ r} c(a, b, r, t, s) Δ Π^{(r-s)}_{a+b-2t,t},

(6.43) Π^{(r)}_{a,0}(k) Π^{(0)}_{n-b,b}(k) = Σ_{max(0,a+b-n) ≤ t ≤ b, 0 ≤ s ≤ r} c(k; a, b, r, t, s) Δ Π^{(r-s)}_{a+b-2t,t}(k),

in which

(6.44) c(k; a, b, r, t, s) = p^{t(r+t-a-b-s-1)+b(n+1)} · φ_{a+b+s-r-2t}(p) · l_p(k; 0_{a+s-r-t}, s, a+s-r) / (φ_{a+s-r-t}(p) φ_{b-t}(p)),

where φ_s is the function (2.29),

(6.45)

k is a fixed odd integer, and the prime over the summation means that it is taken over the set of matrices with zero m × m block in the upper-left corner; the coefficients c(a, b, r, t, s) are obtained from the coefficients (6.44) by setting k = 0.
PROOF. From (3.59), (6.33), and the definitions it follows that the left and right exponents of the Π_b are at most 1. Then, by Proposition 6.1, the left side of (6.41) lies in the ring C_+ = C^n_{+,p}. Since, by (5.37), Π^{(0)}_{n-b,b} = (M_{n-b,b}(0))_{Γ_0}, it follows from Proposition 5.5 that this element also lies in C_+. Thus,

(6.46)

From these inclusions and Theorem 5.9 we see that to prove (6.41) it suffices to verify that both sides have the same image under Φ or Ω. But this follows immediately from Lemma 3.34, and (6.41) is proved.
Things are not so simple in the case of (6.42) and (6.43), where the reader should expect some rather tedious computations. First of all, we note that it is enough to compute products of the form

(6.47)

where (M_{a,0}(B_0))_{Γ_0}, with B_0 ∈ S_a(Z) and r_p(B_0) = r, is one of the double cosets in the expansion (5.37) of Π^{(r)}_{a,0}. From (5.37) it follows that Π^{(0)}_{n-b,b} = (M_{n-b,b}(0))_{Γ_0}. Thus, using the second formula in Lemma 1.5 and the expansion (5.35) of the double coset (M_{a,0}(B_0))_{Γ_0}, we see that the computation of the product (6.47) requires that we find the double cosets to which products of the form

(6.48)

(D_a = D_{a,0}) belong (and with what multiplicity). To do this we need a special set of representatives of the left cosets Λ(D_a)\Λ.
We introduce some notation. We set

(6.49) I_{n-a,n} = { i = (i_1, ..., i_{n-a}) ∈ N^{n-a}; 1 ≤ i_1 < ··· < i_{n-a} ≤ n }.

For i ∈ I_{n-a,n} we let î denote the set (j_β) ∈ I_{a,n} that is the complement of (i_1, ..., i_{n-a}) in the set (1, 2, ..., n). To every permutation σ of the numbers 1, 2, ..., n we associate the n × n matrix

    M(σ) = (δ_{σ^{-1}(i) j}),

where δ_{αα} = 1, δ_{αβ} = 0 for α ≠ β, is the Kronecker symbol. It is easy to see that

    M(στ) = M(σ)M(τ) and M(σ^{-1}) = M(σ)^{-1} = ᵗM(σ).
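The permutation-matrix identities M(στ) = M(σ)M(τ) and M(σ^{-1}) = M(σ)^{-1} = ᵗM(σ) are easy to verify numerically. A minimal sketch with 0-indexed permutations and the convention that row i of M(σ) is the unit vector with 1 in column σ^{-1}(i) (helper names ours):

```python
def perm_matrix(sigma):
    """n x n 0/1 matrix with a 1 in position (i, sigma^{-1}(i))."""
    n = len(sigma)
    inv = [0] * n
    for i, s in enumerate(sigma):
        inv[s] = i                     # inv = sigma^{-1}
    return [[1 if j == inv[i] else 0 for j in range(n)] for i in range(n)]

def matmul(A, B):
    """Plain matrix product of two square 0/1 (or integer) matrices."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def compose(sigma, tau):
    """(sigma o tau)(x) = sigma(tau(x))."""
    return [sigma[t] for t in tau]
```

With this convention one checks M(στ) = M(σ)M(τ), and that M(σ^{-1}) equals the transpose of M(σ).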
Next, to every i ∈ I_{n-a,n} we associate the permutation σ(i) and the matrix M(i) by setting

    V(i) = { V = (v_{αβ}); v_{αβ} = 0 if i_α > j_β },

(6.52) W(i) = { ε = ( E_{n-a}  V ; 0  E_a ) M(i); V ∈ V(i) },

and

    W_a = W^n_a(p) = ∪_{i ∈ I_{n-a,n}} W(i).

We now show that W_a is a complete set of left coset representatives of Λ = Λ^n modulo the subgroup Λ(D_a) = Λ^n ∩ D_a^{-1}ΛD_a:

(6.53) W_a = Λ(D_a)\Λ.
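The subset sum used below to count W_a (cf. (6.54)) evaluates to the Gaussian binomial coefficient φ_n/(φ_a φ_{n-a}). A numeric check of this identity, assuming φ_m(p) = (p-1)(p^2-1)···(p^m-1) as our reading of the function (2.29) (helper names ours):

```python
from itertools import combinations

def phi(m, p):
    """phi_m(p) = (p - 1)(p^2 - 1)...(p^m - 1), with phi_0 = 1 (our assumption for (2.29))."""
    out = 1
    for i in range(1, m + 1):
        out *= p ** i - 1
    return out

def subset_sum(n, a, p):
    """Sum of p^{j_1+...+j_a - (1+2+...+a)} over all 1 <= j_1 < ... < j_a <= n."""
    return sum(p ** (sum(js) - a * (a + 1) // 2)
               for js in combinations(range(1, n + 1), a))
```

For instance, subset_sum(2, 1, p) = 1 + p, which agrees with φ_2/(φ_1 φ_1) = (p^2 - 1)/(p - 1).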
To do this, we first note that the number of elements of W(i) is

(6.54) |W(i)| = |V(i)| = p^{j_1+···+j_a-(1+2+···+a)},

where (j_β) = î, since for fixed β there are exactly j_β - β indices i_α satisfying the inequality i_α < j_β. From this and (2.33) we conclude that the number of elements of the set W_a is

(6.55) Σ_{1 ≤ j_1 < ··· < j_a ≤ n} p^{j_1+···+j_a-(1+2+···+a)} = φ_n/(φ_a φ_{n-a}), where φ_a = φ_a(p).

On the other hand, according to Lemma 1.2 and (2.28), the index μ_Λ(D_a) of Λ(D_a) in Λ is equal to the same number. Hence, in order to verify (6.53) it suffices to show that
all of the matrices in W_a are in pairwise distinct Λ(D_a)-left cosets. From the definition it follows that

(6.56)

where

    η = C^{-1} ( E_{n-a}  V ; 0  E_a ) C M = ( E_{n-a}  C_1^{-1}VC_2 ; 0  E_a ) M.

The matrix C_1^{-1}VC_2 = (p^{-d(i_α)+d(j_β)} v_{αβ}) is an integer matrix, since d(j_β) = 0 and d(i_α) = 1 in the case d(j_β) < d(i_α); hence, i_α > n - b ≥ j_β, and from the definition of V(i) it follows that v_{αβ} = 0. Thus, η ∈ Λ, and the matrix (6.56) belongs to the same Γ_0-right coset as the matrix
and hence D_aC = D_{a+b-2t,t} and BC = ( 0  0 ; 0  B'_0 C_2 ). For any matrix A, we shall let A^{(s)} denote the s × s block in the upper-left corner of A. From the form of our matrices and the expansion (5.35) it follows that the matrix (6.57) is contained in the Γ_0-double coset of the matrix

    M_{a+b-2t,t}(B'^{(a-t)}_0),  for  B'^{(a-t)}_0 = ( 0  0 ; 0  B^{(a-t)}_0 ),

    α(a, b, t) = Σ_{1 ≤ j_1 < ··· < j_{a-t} ≤ n-b, n-b < j_{a-t+1} < ··· < j_a ≤ n} p^{j_1+···+j_a-(1+2+···+a)},

and ν({B_0}, {K}, s) denotes the number of matrices B'_0 ∈ {B_0}^a_p for which the matrix B'^{(s)}_0 mod p (of the same size as K) lies in the class {K}^s_p. Since t must obviously be ≥ 0 and ≥ a + b - n, by Lemma 1.5 we obtain the formula
Conversely, any matrix of the above form, with T_1 ∈ S_{r-s}(Z/pZ) and r_p(T_1) = s, satisfies the conditions in (6.61). Furthermore, since (4.70) implies that

    χ(T) = χ(( V_1  0 ; 0  T_1 )) = χ(V_1)χ(T_1)

(because for any symmetric integer matrices A_1 and A_2 we have

(6.63)

), it follows that (6.62) is a consequence of the definition (6.45) and the above considerations.
We return to the computation of the sum (6.60). In order for this not to be the empty sum, the matrix K ∈ S_{a+b-2t}(Z/pZ) must clearly satisfy the inequality r_p(K) ≤ r. We set r_p(K) = r - s. Then any matrix V on the right in (6.60) must satisfy the relation r_p(V) = r_p(K) = r - s. Hence, if we apply (6.62), we can rewrite the sum (6.60) in the form

(6.64) S(k, {K}) = p^{(r-s)t} l_p(k; D_{a+s-r-t}; s, a+s-r) |{K}|^{-1} S_1(k, {K}),

where

(6.65)

where

    G_0 = { U ∈ G; ( K  0 ; 0  0 )[U] ≡ ( K  0 ; 0  0 ) (mod p) }

is the stabilizer of the matrix K in the group G. On the other hand, |{K}| = |G|·|G_0|^{-1}. If we substitute this expression into (6.64), we find that S(k, {K}) is equal to

(6.67) p^{(r-s)t} χ(K)^{-k} l_p(k; D_{a+s-r-t}; s, a+s-r) |G|^{-1} |G_1|.

To compute |G|^{-1}|G_1| we need the following
LEMMA 6.16. Let 1 ≤ c ≤ d, and let p be a prime number. Then the number of matrices V ∈ M_{d,c}(Z/pZ) satisfying the condition r_p(V) = c is equal to
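The count in Lemma 6.16 is the standard one: a d × c matrix over Z/pZ has full column rank exactly when each successive column avoids the span of the previous ones, giving ∏_{i=0}^{c-1}(p^d - p^i) such matrices. A brute-force check for small parameters (helper names ours; the product formula is our statement of the standard count, not a quotation of the lemma's display):

```python
from itertools import product

def rank_mod_p(M, p):
    """Rank of a matrix over Z/pZ by Gaussian elimination."""
    M = [row[:] for row in M]
    rows, cols = len(M), len(M[0]) if M else 0
    rank, col = 0, 0
    while rank < rows and col < cols:
        piv = next((r for r in range(rank, rows) if M[r][col] % p), None)
        if piv is None:
            col += 1
            continue
        M[rank], M[piv] = M[piv], M[rank]
        inv = pow(M[rank][col], -1, p)          # modular inverse (Python 3.8+)
        M[rank] = [x * inv % p for x in M[rank]]
        for r in range(rows):
            if r != rank and M[r][col] % p:
                f = M[r][col]
                M[r] = [(x - f * y) % p for x, y in zip(M[r], M[rank])]
        rank += 1
        col += 1
    return rank

def count_full_rank(d, c, p):
    """Brute force: number of d x c matrices over Z/pZ of rank c."""
    total = 0
    for entries in product(range(p), repeat=d * c):
        M = [list(entries[i * c:(i + 1) * c]) for i in range(d)]
        if rank_mod_p(M, p) == c:
            total += 1
    return total
```

For example, for p = 2, d = 3, c = 2 the brute-force count is (2^3 - 1)(2^3 - 2) = 42.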
We are now ready to compute the coefficients in the expansions (6.29) and (6.30) of the polynomial R^n_p(v).
PROPOSITION 6.17. In the ring L_{0,p} one has

(6.68) Π_- Π_i Π^{-2}_- = p^{-(i)-i(n-i)} Δ^{-1} Σ_{j=0}^i α_{ij} Σ_{a=n-i+j}^n Π^{(i-j+a-n)}_{a,n-a},

(6.69) (Π^2_+)^{-1} Π_{n-i} Π_+ = p^{-(i)-i(n-i)} Δ^{-1} Σ_{j=0}^i α_{ij} Σ_{a=n-i+j}^n Π^{(i-j+a-n)}_{a,0},

where

φ_s is the function (2.29), Π^{(r)}_{a,b} is the element (3.62), and the rest of the notation is the same as in Proposition 6.9.
PROOF. The formulas (6.12), (6.33), and (5.39) show that the anti-automorphism * takes (6.68) to (6.69) and conversely; hence, it suffices to prove one of the two. We shall prove (6.69). We first verify that both sides of (6.69) lie in the subspace O_+ = O^n_{+,p} = L^n_p · C^n_{+,p} ⊂ L^n_{0,p}. We then show that both sides have the same image under the map Ω^n_p. By Theorem 6.5(2), this will imply that they are equal.
From (6.16) and (6.46) we have
(6.71)
In order to examine the right side of (6.69), we introduce the sums
$$S_{i,c}=\sum_{a=n-i}^{n-c}\Pi^{(i+a-n)}_{a,c}\eqno(6.72)$$
and claim that
$$S_{c+d,c}\in O_+\quad\text{for }0\le d\le n,\ 0\le c\le n-d.\eqno(6.73)$$
Since S_{c,c} = Pi^{(0)}_{n-c,c}, it follows by (4.46) that S_{c,c} in C_+ subset O_+; this proves (6.73) in
the case d = 0. Now suppose that (6.73) holds for all c, d satisfying the conditions
0 <= d < h and 0 <= c <= n - d, where 0 < h <= n. From (3.61) and (6.72) it follows
that we have
c+d=h,c-;i.1
By the induction assumption, the sum S_{c+d,c}, where c + d = h and c >= 1 (i.e., d < h),
lies in C_+. By the definition of O_+, the element T_{n-h}(p^2) is also contained in
that space. We thus have S_{h,0} in O_+. We now use induction on c to prove that
(6.74) S_{c+h,c} in O_+ for 0 <= c <= n - h.
§6. HECKE POLYNOMIALS FOR THE SYMPLECTIC GROUP 213
The case c = 0 has already been treated. Suppose that (6.74) holds for all c satisfying the
inequality 0 <= c < b, where 0 < b <= n - h. Consider the product
$$S_{h,0}\,\Pi^{(0)}_{n-b,b}=\sum_{a=n-h}^{n}\Pi^{(h+a-n)}_{a,0}\,\Pi^{(0)}_{n-b,b}.$$
Since b <= n - h, it follows that (h + a - n) + b <= a, and we can apply (6.42) to compute
the products in the last sum (this is the only place where we need (6.42)!). Note that
the coefficients c(a, b, r, t, s) in (6.42) do not depend on the individual values of a and
r, but rather only on the difference a - r; so we can set c(a, b, r, t, s) = gamma(a - r, b, t, s).
With this notation we have
$$S_{h,0}\,\Pi^{(0)}_{n-b,b}=\sum_{a=n-h}^{n}\ \sum_{\substack{\max(0,\,a+b-n)\le t\le b\\ 0\le s\le h+a-n}}\gamma(n-h,b,t,s)\,\Delta\,\Pi^{(h+a-n-s)}_{a+b-2t,\,t}$$
$$=\sum_{t=0}^{b}\sum_{s=0}^{h}\gamma(n-h,b,t,s)\,\Delta\,S_{h+2t-b-s,\,t}.$$
From the inclusion S_{h,0} in O_+ that was proved above and from (6.46) it follows that
the left side of the last relation is contained in O_+. By our induction assumptions,
S_{h+2t-b-s,t} in O_+ if h + t - b - s < h, or if h + t - b - s = h and t < b, i.e., for all
possible combinations of t and s except t = b, s = 0. Consequently, the term with
t = b and s = 0 is also contained in O_+:
and hence S_{h+b,b} in O_+ (see Lemma 3.27). We have thus proved (6.74), and hence also
(6.73). Both sides of (6.69) are then contained in O_+, and it remains for us to prove
that they have the same image under Omega.
From (6.25) and Lemma 3.34 we have
On the other hand, if we again use Lemmas 3.34 and 2.21, we find that the Omega-image of
the right side of (6.69) is
$$\cdots\times\sum_{j=0}^{i}\alpha_{ij}\sum_{a=n-i+j}^{n}l_p(i-j+a-n,\,a)\,p^{-(a)}\,s_a(x_1,\dots,x_n)$$
$$=(x_1\cdots x_n)^{-1}p^{(n-i)}\sum_{a=n-i}^{n}p^{-(a)}\,s_a(x_1,\dots,x_n)\sum_{j=0}^{a+i-n}\alpha_{ij}\,l_p(i-j+a-n,\,a).$$
214 3. HECKE RINGS
Our goal is to prove that the last expression is the same as (6.75). For this it is clearly
sufficient to verify the relations
$$\sum_{j=0}^{a+i-n}\alpha_{ij}\,l_p(i-j+a-n,\,a)=\begin{cases}1,&\text{if }a=n-i,\\0,&\text{if }n-i<a\le n,\end{cases}$$
or, equivalently (setting a = n - k),
$$\sum_{j=0}^{i-k}\alpha_{ij}\,l_p(i-j-k,\,n-k)=\begin{cases}1,&\text{if }k=i,\\0,&\text{if }0\le k<i.\end{cases}\eqno(6.76)$$
In order to prove these relations, we must analyze the function l_p(r, a) and find a way
to compute it. The next two lemmas are devoted to this.
$$l_p(r,a)=l_p(r,r)\,\frac{\varphi_a(p)}{\varphi_r(p)\,\varphi_{a-r}(p)}.\eqno(6.79)$$
PROOF. Let A' = (a_{st}) in L_p(r, a; i). Then, using (6.51) and the analogous relation
for the rows, we obtain
$$A'[M(i)^{-1}]=A''=\begin{pmatrix}A&B\\{}^tB&C\end{pmatrix},$$
where A = (a_{i_alpha, i_beta}), B = (a_{i_alpha, j_rho}), and C = (a_{j_alpha, j_rho}). We now show that
r_p(A) = r. In fact, r_p(A'') = r_p(A') = r, and, by construction, the first r columns
of A'' are linearly independent modulo p. Hence, all of the columns of A'' are linear
combinations modulo p of its first r columns. In particular, the same is true for the
matrix (A, B), which also has rank r over F_p. Thus, the first r columns of this last matrix
cannot be linearly dependent modulo p. Further note that for each rho = 1, ..., a - r
the rho-th column of B is a linear combination modulo p of the first s columns of A,
where s is the largest integer such that i_s < j_rho, because if the columns of (A, B) with
indices i_1, ..., i_s, j_rho (and hence the columns of A' with the same indices) were linearly
independent modulo p, then the index i_{s+1} could be replaced by j_rho < i_{s+1}. Thus,
$${}^t(a_{i_1,j_\rho},\dots,a_{i_r,j_\rho})=\sum_{\alpha=1}^{s}v_{\alpha\rho}\,{}^t(a_{i_1,i_\alpha},\dots,a_{i_r,i_\alpha})=\sum_{\alpha=1}^{r}v_{\alpha\rho}\,{}^t(a_{i_1,i_\alpha},\dots,a_{i_r,i_\alpha})\ (\mathrm{mod}\ p),$$
LEMMA 6.19. The following formal power series identities over Q hold for any prime
p:
PROOF OF THE LEMMA. Since the set S_a(F_p) contains p^{(a)} different matrices, it
follows from (6.79) that
$$p^{(a)}=\sum_{r=0}^{a}l_p(r,a)=\varphi_a(p)\sum_{r=0}^{a}\frac{l_p(r,r)}{\varphi_r(p)\,\varphi_{a-r}(p)}.$$
All of these relations for a = 0, 1, ... together are equivalent to the following single
formal power series identity:
$$\sum_{a=0}^{\infty}\frac{p^{(a)}}{\varphi_a(p)}\,v^a=\Bigl(\sum_{r=0}^{\infty}\frac{l_p(r,r)}{\varphi_r(p)}\,v^r\Bigr)\Bigl(\sum_{c=0}^{\infty}\frac{v^c}{\varphi_c(p)}\Bigr),\eqno(6.80)$$
and (6.81) follows from (6.80) if we replace v by -pv. D
We now complete the proof of Proposition 6.17. It suffices to verify (6.76). From
(6.70) for the numbers alpha_{ij} and from the second identity in Lemma 6.19 we obtain the
congruence
$$\sum_{\substack{j,r\ge 0\\ j+r=i-k}}\alpha_{ij}\,\frac{\varphi_{n-i}\,l_p(r,r)}{\varphi_{n-i+j}\,\varphi_r}=\begin{cases}1,&\text{if }i-k=0,\\0,&\text{if }0<i-k\le i,\end{cases}$$
where 0 <= k <= i. Since the right side is nonzero only when i = k, the factor phi_{n-i} can
be replaced by phi_{n-k}. If we set r = i - j - k, then by (6.79) we obtain
$$\frac{\varphi_{n-k}\,l_p(r,r)}{\varphi_{n-i+j}\,\varphi_r}=\frac{\varphi_{n-k}\,l_p(i-j-k,\,i-j-k)}{\varphi_{i-j-k}\,\varphi_{n-k-(i-j-k)}}=l_p(i-j-k,\,n-k),$$
so that the congruence turns into the second relation in (6.76).
We now compute the coefficients in the expansions (6.31) and (6.32) of the polynomial R^(v). To do this we first introduce the new functions l*_p (see (6.83)).

PROPOSITION 6.20. In the ring L^_{0,p} one has
$$\Pi_-\,\Pi_i\,(\Pi_-^*)^{-1}=p^{-(i)-i(n-i)}\Delta^{-1}\sum_{j=0}^{i}\alpha^*_{ij}\sum_{a=n-i+j}^{n}\Pi^{(i-j+a-n)}_{a,\,n-a}(k),\eqno(6.84)$$
$$(\Pi_+^*)^{-1}\,\Pi_{n-i}\,\Pi_+=p^{-(i)-i(n-i)}\Delta^{-1}\sum_{j=0}^{i}\alpha^*_{ij}\sum_{a=n-i+j}^{n}\Pi^{(i-j+a-n)}_{a,\,0}(k),\eqno(6.85)$$
where
(6.86)
if j == 0 (mod 2),
if j == 1 (mod 2),
Pi_a^{(l)}(k) is given by (4.106), and the rest of the notation is the same as in Proposition 6.9.
PROOF. In view of (6.12), (6.33), and (5.39), the anti-automorphism * takes (6.84)
to (6.85) and conversely. Hence, it suffices to prove, say, (6.85). Just as in the proof of
Proposition 6.17, using (4.105) and (6.43) we see that
$$S_{i,c}(k)=\sum_{a=n-i}^{n-c}\Pi^{(i+a-n)}_{a,c}(k)\in O_+$$
for 0 <= c <= i <= n, and the right side of (6.85) can be expressed as a linear combination
of the elements Delta^{-1} S_{i-j,0}(k), j = 0, 1, ..., i. Since (6.16) and (6.46) imply that the
left side of (6.85) is also contained in O_+, it follows from Theorem 6.5(2) that to
prove (6.85) we need only show that both sides have the same image under Omega.
Using (4.109) and Lemma 2.21, we have
$$\Omega\bigl(\Pi^{(i-j+a-n)}_{a,0}(k)\bigr)=l^*_p(i-j+a-n,\,a)\,x_0^2\,p^{-(a)}\,s_a(x_1,\dots,x_n).$$
From this and Lemma 2.21 we find that the Omega-image of the right side of (6.85) is
$$(x_1\cdots x_n)^{-1}p^{(n-i)}\sum_{a=n-i}^{n}p^{-(a)}\,s_a(x_1,\dots,x_n)\sum_{j=0}^{a+i-n}\alpha^*_{ij}\,l^*_p(i-j+a-n,\,a),$$
so that it is sufficient to verify the relations
$$\sum_{j=0}^{a+i-n}\alpha^*_{ij}\,l^*_p(i-j+a-n,\,a)=\begin{cases}1,&\text{if }a=n-i,\\0,&\text{if }n-i<a\le n.\end{cases}\eqno(6.87)$$
(6.90)
d=O ZESa-1(Fp) AES0 (Fp),rp(A)=r
rp(Z)=d A<•-ll::z(modp)
which follows from (4.70), we may suppose that Z = diag(Z_1, 0) in the inner sum in
(6.90), where Z_1 is a d x d matrix that is nonsingular modulo p. Thus, the matrix A
in (6.90) has the form
(6.92) (~I
'Xi
If we now use (6.92) along with (6.30) and (6.91), we find that chi(A) = chi(Z_1)chi(X)
= chi(Z)chi(X). This implies that the inner sum in (6.90) is equal to
The relations (6.93) and (6.94) show that the following equalities hold for u{p):
We return to the proof of Proposition 6.20. We first make the substitution a = n - b
in (6.87) and use (6.88). This transforms the system to the form
$$\sum_{j=0}^{i-b}\alpha^*_{ij}\,\frac{l^*_p(i-j-b,\,i-j-b)\,\varphi_{n-b}}{\varphi_{i-j-b}\,\varphi_{n-i+j}}=\begin{cases}1,&\text{for }b=i,\\0,&\text{for }0\le b<i.\end{cases}$$
After making another substitution phi_{n-b} -> phi_{n-i} (phi_s = phi_s(p)) and i - b - j -> r, we
obtain the new system of equalities
$$\sum_{\substack{j,r\ge 0\\ j+r=i-b}}\alpha^*_{ij}\,\frac{\varphi_{n-i}\,l^*_p(r,r)}{\varphi_{n-i+j}\,\varphi_r}=\begin{cases}1,&\text{for }i-b=0,\\0,&\text{for }0<i-b\le i,\end{cases}$$
(6.97)
Hence, the proof of (6.85) reduces to the inversion of the infinite formal series in this
congruence. With this in mind, we prove the following identity.
LEMMA 6.22. In the notation (4.110) and (6.83),
PROOF. From (6.89) and the definition of the polynomials (2.29) and (6.83) it
follows that
$$\frac{l^*_p(2b,2b)}{\varphi_{2b}(p)}=\frac{p^{b^2}}{\varphi_b(p^2)},\eqno(6.98)$$
and from this the lemma is obtained by induction on a.
If we now set v = A * p^{-1/2} in Lemma 6.22, we obtain the system of equalities
$$\sum_{b=0}^{a}\frac{(-p)^{a-b}}{\varphi_{a-b}(p^2)}\cdot\frac{l^*_p(2b,2b)}{\varphi_{2b}(p)}=\begin{cases}1,&\text{for }a=0,\\0,&\text{for }a>0,\end{cases}$$
which, together with (6.89), implies the formal power series identity
$$\Bigl(\sum_{r=0}^{\infty}\frac{l^*_p(r,r)}{\varphi_r(p)}\,v^r\Bigr)\Bigl(\sum_{c=0}^{\infty}\frac{(-p)^c}{\varphi_c(p^2)}\,v^{2c}\Bigr)=1.$$
If we compare this identity with (6.97), then by (6.86) we obtain the congruence
PROBLEM 6.23. Let Pi_{+-} = Pi_{+-}(p) be the Frobenius elements of the Hecke ring
L^_{0,p}, and for d >= 1 let Pi_{+-}^{-d} be defined by the recursive relations (6.10). Prove that the
negative squares of the Frobenius elements are given by the formulas
$$\Pi_-^{-2}=(p^{(n)}\Delta)^{-2}\sum_{j=0}^{n}\alpha_{nj}(p)\sum_{a=j}^{n}\Pi^{(a-j)}_{a,\,n-a},$$
$$\Pi_+^{-2}=(p^{(n)}\Delta)^{-2}\sum_{j=0}^{n}\alpha_{nj}(p)\sum_{a=j}^{n}\Pi^{(a-j)}_{a,\,0},$$
where alpha_{ij}(p) are the coefficients (6.70). For d > 1 show that Pi_-^{-d} != (Pi_-^{-1})^d and
Pi_+^{-d} != (Pi_+^{-1})^d. For p an odd prime obtain analogous formulas for the elements
(Pi_{+-}^*)^{-1} that are defined by the recursive relations (6.11).
THEOREM 6.24. Let R_p(v) in L_{0,p}[v] and R^_p(v) in L^_{0,p}[v] be the polynomials
defined in (6.27). These polynomials have the following factorizations over the Hecke
ring L^_{0,p}:
$$b_i=b_i(p)=p^{-(i)-i(n-i)}\Delta^{-1}\sum_{j=0}^{i}\alpha_{ij}\,\Pi^{(i-j)}_{n,0},\eqno(6.103)$$
$$\hat b_i=\hat b_i(p)=p^{-(i)-i(n-i)}\Delta^{-1}\sum_{j=0}^{i}\alpha^*_{ij}\,\Pi^{(i-j)}_{n,0}(k),\eqno(6.104)$$
$$X_-(v)^{-1}=\sum_{d=0}^{\infty}p^{-dn}\,t_-(p^d)\,v^d,\qquad X_+(v)^{-1}=\sum_{d=0}^{\infty}p^{-dn}\,t_+(p^d)\,v^d,\eqno(6.105)$$
where X_(v) and X+(v) are the polynomials (6.101) and (6.102), and
PROOF. The two identities in (6.105) have analogous proofs; moreover, the anti-
automorphism * (applied coefficient by coefficient) takes one into the other (see (6.33)).
Hence, it suffices to prove, say, the second identity in (6.105). From (6.46) and
Proposition 5.5 it follows that all of the coefficients on both sides of this identity are
contained in the ring C_+. Thus, by Theorem 6.5(2), the identity will be proved
if we verify that
DEA"\M./A"
detD=±pd
Thus,
$$\Phi\bigl(t_+(p^d)\bigr)=p^{d(n+1)}\,t(p^d),$$
where t(p^d) is the element (2.10) of the ring H_n. These formulas show that the identity
(6.108) that we want to prove is nothing other than the identity in Proposition 2.22
with v replaced by pv. D
PROOF OF THE THEOREM. Using the notation of Proposition 6.9, we define the
following polynomials:
$$Y_-(v)=\sum_{i=0}^{n}(-1)^i\,\Pi_+^{-2}\Pi_{n-i}\Pi_+\,v^i,\qquad Y_+(v)=\sum_{i=0}^{n}(-1)^i\,\Pi_-\Pi_i\Pi_-^{-2}\,v^i,$$
$$\hat Y_-(v)=\sum_{i=0}^{n}(-1)^i\,(\Pi_+^*)^{-1}\Pi_{n-i}\Pi_+\,v^i,\qquad \hat Y_+(v)=\sum_{i=0}^{n}(-1)^i\,\Pi_-\Pi_i(\Pi_-^*)^{-1}\,v^i.$$
If we let B(v) and B^(v) denote these formal power series, and let (-1)^i b_i and (-1)^i b^_i
denote their coefficients, we obtain the identities
Thus, to prove the theorem it suffices to verify that B(v) and B^(v) are actually poly-
nomials of degree n, and that their coefficients are given by (6.103) and (6.104). To do
this we need some preliminary observations.
The map
A linear combination of double cosets (M_i)_{Gamma_0} is said to be sigma_p-homogeneous if all of the
double cosets that occur with nonzero coefficients have the same p-signature; in that
case this p-signature is called the p-signature of the linear combination of double cosets,
again denoted sigma_p. Clearly, if two linear combinations X and Y are sigma_p-homogeneous,
then so is their product X * Y, and we have
(6.114)
Returning to the proof of the theorem, we consider the subspaces L_0^-, L_0^+, L_0^0, I^-,
and I^+ of the space L_0 = L^_{0,p} that consist of all (finite) linear combinations of sigma_p-
homogeneous elements whose p-signature is, respectively, nonpositive, nonnegative,
zero, negative, and positive. The spaces L_0^-, L_0^+, and L_0^0 are clearly subrings of L_0,
and I^- and I^+ are two-sided ideals of the rings L_0^- and L_0^+, respectively. From
the definitions, it follows that the following elements of L_0 are sigma_p-homogeneous with
p-signature as given below:
lie in L_0^-. From these observations and (6.111) it follows that all of the coefficients in
the series B(v) and B^(v) are contained both in L_0^- and in L_0^+, i.e.,
(6.116) b_i, b^_i in L_0^- intersect L_0^+ = L_0^0 for i = 0, 1, ... .
We now examine the coefficients b_i modulo I^-. By (6.106) and (6.115), all of
the coefficients of X_-(v)^{-1} except for the constant term lie in the ideal I^-, and the
constant term is 1; hence, if we pass to congruence modulo I^- coefficient by coefficient
in the equation B(v) = X_-(v)^{-1} Y_-(v), we find that B(v) == Y_-(v) (mod I^-), i.e.,
(6.117) b_i == Pi_+^{-2} Pi_{n-i} Pi_+ (mod I^-) for 0 <= i <= n,
(6.118) b_i == 0 (mod I^-) for i > n.
Since L_0^0 intersect I^- = {0}, it follows from (6.116) and (6.118) that b_i = 0 for i > n.
Hence, B(v) is a polynomial of degree n. Furthermore, it follows from (6.115) that
Delta^{-1} Pi_{a,b} in I^- if a < n. Thus, from (6.117) and (6.69) we obtain the following
congruences for 0 <= i <= n:
$$b_i\equiv p^{-(i)-i(n-i)}\Delta^{-1}\sum_{j=0}^{i}\alpha_{ij}\,\Pi^{(i-j)}_{n,0}\pmod{I^-}.$$
If we take into account that in each of these congruences both sides are contained in
L_0^0, and if we recall that L_0^0 intersect I^- = {0}, we see that the two sides are actually equal. We
then carry out an analogous argument with the coefficients b^_i, using (6.85); we find
that for 0 <= i <= n the coefficients b^_i are given by (6.104), and b^_i = 0 for
i > n. D
CHAPTER 4
Hecke Operators
Modular forms arose as a result of abstracting the analytic and group properties
of the generating series for the number of integral representations of positive definite
integral quadratic forms by one another. Thus, the basic object of arithmetic interest
in the theory and application of modular forms was and continues to be the Fourier
coefficients regarded as a number-theoretic function. As we saw in §§1.1 and 1.5 of
Chapter 3, the Hecke rings of the symplectic group act as rings of linear operators on
~paces of modular forms. The Hecke operators, which act on modular forms and hence
on their Fourier coefficients, make it possible to carry the various relations between
elements of the Hecke rings over to these number-theoretic functions, thereby revealing
multiplicative properties of the Fourier coefficients. These properties are reflected in
the Euler products of the Dirichlet series (zeta-functions) that are constructed from
the Fourier coefficients of eigenfunctions of the Hecke operators.
For brevity, we shall refer to a pair (K, x) satisfying these conditions as a q-regular
pair (of degree n).
We consider the space M_k(K, chi) of modular forms of weight k and character chi
for the group K, where k is an integer and (K, chi) is a q-regular pair. To every element
of the Hecke ring L(K) = D_Q(K, S(K)) we shall associate a linear operator on this
space. According to the scheme in §1.5 of Chapter 3, to do this we first need a suitable
automorphy factor of the group S(K). For M = (A B; C D) in S(K) and Z in H_n we
set
By Lemmas 4.2 and 4.1(2) of Chapter 1, the function phi_{k,chi} is an automorphy factor of
S(K) on H_n with values in C*. Then, by Lemma 4.1(3) of Chapter 1, we can define
an action of the group S(K) on functions F: H_n -> C:
S(K) contains M: F -> F|_{k,chi}M,
(1.3) F|_{k,chi}M = phi_{k,chi}(M, Z)^{-1} F(M<Z>) = X(M)^{-1} F|_k M,
where |_k M is the operator (3.14) of Chapter 2, which satisfies the relations
(1.4) F|_{k,chi}M_1|_{k,chi}M_2 = F|_{k,chi}M_1M_2 (M_i in S(K)).
Since X(M) = chi(M) if M in K, the condition (2.4) of Chapter 2 in the definition
of modular forms of weight k and character chi for K can be rewritten in the above
notation in the form
(1.5) F|_{k,chi}M = F for all M in K.
Thus, nothing prevents us from defining the action of the ring L(K) on M_k(K, chi) in
the same way as we did for general Hecke rings in §§1.1 and 1.5 of Chapter 3. The only
difference is that in the general case the automorphic forms were defined using only
functional equations of the type (1.5), while functions in M_k(K, chi) must also satisfy
certain analytic conditions. Thus, if F in M_k(K, chi) and T = Sum_i a_i(KM_i) in L(K),
we set
(1.6)
From (1.4) and (1.5) it immediately follows that the function F|T does not depend
on the choice of representatives M_i in the left cosets KM_i; from the definition of the
Hecke rings and from (1.4) we then find that
$$(F|T)|_{k,\chi}M=\sum_i a_i\,F|_{k,\chi}M_i|_{k,\chi}M=\sum_i a_i\,F|_{k,\chi}M_iM=F|T,\quad\text{if }M\in K,$$
i.e., F|T satisfies all of the functional equations (1.5) for modular forms in M_k(K, chi)
whenever F does. From (1.3), (1.6), and Proposition 3.8 of Chapter 2, we see that the
operator |T also preserves the analytic properties of modular forms.
We now suppose that K is contained in Gamma_0^n(4), and consider the space M_{k/2}(K, chi) of modular
forms of half-integer weight k/2. According to (3.19) of Chapter 2, in this case we can
take the function
(1.7) phi_{k/2,chi}(M~, Z) = X(M) phi(Z)^k,
where M~ = (M, phi) in S~(K), as the automorphy factor of the group S~(K) =
P^{-1}(S(K)), where P is the homomorphism (4.1) of Chapter 3, on H_n with values
in C*. Using (3.21) and (3.22) of Chapter 2, we see that if for any function F on H_n
we set
(1.8) F|_{k/2,chi}M~ = phi_{k/2,chi}(M~, Z)^{-1} F(M<Z>) = X(M)^{-1} F|_{k/2}M~,
and the condition (2.5) of Chapter 2 in the definition of a modular form in M_{k/2}(K, chi)
can be written in the form
(1.10) F|_{k/2,chi}M~ = F for all M~ in K~ = j(K),
where j is the monomorphism (4.2) of Chapter 3. As for the ring L(K), by Lemma
3.4 of Chapter 2 and Lemma 1.7 of Chapter 3 it can be lifted to the ring L~(K) =
D_Q(K~, S~(K)). By analogy to (1.6), we find that the formula
(1.11) F|T~ = F|_{k/2,chi}T~ = Sum_i alpha_i F|_{k/2,chi}M~_i
for T~ = Sum_i alpha_i(K~M~_i) in L~(K) gives a representation of L~(K) in the space of modular
forms M_{k/2}(K, chi).
Thus, from the above observations, the definition of modular forms, and Propo-
sition 1.14 of Chapter 3 we have the following
PROPOSITION 1.1. Let (K, chi) be a q-regular pair of degree n >= 1, let w = k or
k/2 be an integer or half-integer, and suppose that K is contained in Gamma_0^n(4) if w = k/2. Then
any operator |_{w,chi}tau, where tau in L(K) or tau in L~(K) depending on whether w = k or
w = k/2, respectively, takes the space M_w(K, chi) to itself. The map tau -> |_{w,chi}tau is a
linear homomorphism from the corresponding Hecke ring to the ring of endomorphisms
of M_w(K, chi). In particular,
(1.12) F|tau_1|tau_2 = F|tau_1 tau_2 for F in M_w(K, chi).
The operators |_{w,chi}tau on the space M_w(K, chi) are called Hecke operators.
Our definition (1.6) and (1.11) of the Hecke operators is somewhat arbitrary, since
the extension X of the character chi to S(K) can be chosen in different ways. Although
this element of choice has little effect on the Hecke operators, for convenience in later
computations we would like to remove it. We first describe all possible extensions of chi
to S(K).
LEMMA 1.2. Let (K, chi) be a q-regular pair of degree n >= 1, and let rho be an arbitrary
homomorphism from S(Gamma^n(q)) to C* that is trivial on Gamma^n(q). Then there exists a unique
homomorphism X = X_{rho,chi} from S(K) to C* whose restriction to K coincides with chi and
whose restriction to S(Gamma^n(q)) coincides with rho.
PROOF. By assumption, there exists an extension X_0 of the homomorphism chi:
K -> C* to S(K). This implies that the character chi satisfies the following condition:
(1.13) if M gamma = gamma' M', where M, M' in S(Gamma^n(q)) and gamma, gamma' in K, then chi(gamma) = chi(gamma').
In fact, the equality M gamma = gamma' M' implies that X_0(M)chi(gamma) = chi(gamma')X_0(M'), and Theo-
rem 3.3(3) of Chapter 3 with K_1 = Gamma^n(q) implies that X_0(M) = X_0(M'), since X_0 as
well as chi is trivial on Gamma^n(q); hence, chi(gamma) = chi(gamma').
According to (3.5) of Chapter 3, any matrix M in S(K) can be written in the form
(1.14) M = gamma N, where gamma in K and N in S(Gamma^n(q)).
We then set
(1.15) X(M) = X_{rho,chi}(M) = chi(gamma) rho(N).
If M = gamma N = gamma_1 N_1 are two decompositions (1.14), then gamma_1^{-1} gamma = N_1 N^{-1} in K intersect
S(Gamma^n(q)) = Gamma^n(q), and hence chi(gamma_1) = chi(gamma) and rho(N_1) = rho(N). Thus, X(M) does
228 4. HECKE OPERATORS
not depend on how M is written in the form (1.14). The map X: S(K) -> C* clearly
coincides with chi on K, and with rho on S(Gamma^n(q)). We check that X is a homomorphism.
If M = gamma N and M_1 = gamma_1 N_1 are two matrices in S(K) written in the form (1.14), then,
by (3.4) of Chapter 3, we can write N gamma_1 = gamma_1' N', where gamma_1' in K and N' in S(Gamma^n(q)). By
(1.13) we have chi(gamma_1') = chi(gamma_1). From Theorem 3.3(3) of Chapter 3 with K_1 = Gamma^n(q) it
follows that rho(N') = rho(N). From these relations and (1.15) we obtain
and we use rho = rho_w in (1.15) to fix once and for all the normalized extension
(1.17)
(1.18)
PROPOSITION 1.4. Suppose that (K, chi) and (K_1, chi_1) are q-regular pairs, and q is
divisible by 4. Further suppose that K_1 subset K subset Gamma_0^n(4), and the restriction of chi to
K_1 coincides with chi_1. Then the following equality holds for any modular form F in
M_{k/2}(K, chi) subset M_{k/2}(K_1, chi_1), where k is odd:
F|_{k/2,chi}T = F|_{k/2,chi_1} epsilon(T) for T in L~(K).
Moreover, the map epsilon gives an isomorphism between the even subrings E~(K) and E~(K_1) of
the Hecke rings (1.19), and the subspace M_{k/2}(K, chi) of M_{k/2}(K_1, chi_1) is invariant under
the Hecke operators of E~(K_1).
PROOF. From Lemma 1.8, Proposition 4.3, and Theorem 3.3(4) of Chapter 3 it
follows that the restriction of epsilon to the even subrings truly is an isomorphism. The other
parts of the proposition are proved in the same way as Proposition 1.3. D
which proves part (3) for T = (M)K. The general case follows from this case and from
Lemma 1.5 of Chapter 3.
If w = k/2, then part (1) implies that E(d)^{-1} K E(d) = K, since in this case E(d)
and K lie in Gamma_0^n(4). Suppose that M in S(Gamma^n(q)) and r(M) is the square of a rational
number. Then, as noted before, E(d)^{-1} M E(d) = gamma_1 M gamma_2 for some gamma_1, gamma_2 in Gamma^n(q), and
we can write M delta_2 M^{-1} = delta_1, where delta_1 = E(d) gamma_1 and delta_2 = E(d) gamma_2^{-1} in Gamma_0^n(4). From
this, (4.6), and Proposition 4.3 of Chapter 3 it follows that
.- - - - - -I ..-...- - l
01 = Mo2M = Mo2M-
Based on this lemma, one can define the standard decompositions of our spaces
of modular forms. Suppose that V = M_w(K, chi) or N_w(K, chi). The map d -> |_w T(d)
gives a representation of the abelian group (Z/qZ)* on V; hence, V is a direct sum of
irreducible invariant subspaces, each of dimension 1. If F|_w T(d) = psi(d)F, then psi is
a character of the group (Z/qZ)*. From this and Lemma 1.6 we obtain
PROPOSITION 1.7. Suppose that (K, chi) is a q-regular pair of degree n, w = k or k/2
is an integer or half-integer, and in the latter case K subset Gamma_0^n(4). Then one has the direct
sum decompositions
(1.23) M_w(K, chi) = (+)_psi M_w(K, chi, psi), N_w(K, chi) = (+)_psi N_w(K, chi, psi),
where psi runs through all of the characters of the group (Z/qZ)*, and for each psi we set
(1.24) M_w(K, chi, psi) = {F in M_w(K, chi); F|_w T(d) = psi(d)F, (d, q) = 1},
N_w(K, chi, psi) = M_w(K, chi, psi) intersect N_w(K, chi).
Each of the subspaces M_w(K, chi, psi) and N_w(K, chi, psi) is invariant under all of the Hecke
operators |_{w,chi}tau for tau in L(K) or tau in E~(K) in the case w = k or w = k/2, respectively.
We now consider the action of the Hecke operators on subspaces of the form (1.24).
In E~(K) we look at the subring E~(K, chi) that is analogous to the ring (5.4) in Chapter
3. Namely, we set
(1.25)
where epsilon_1 and epsilon_2 are monomorphisms of the form (1.20) for the pairs of groups Gamma^n(q) subset
K and Gamma^n(q) subset Gamma_0^n(q), respectively. Here by epsilon_1^{-1} we mean the inverse of the restriction
of epsilon_1 to the even Hecke ring.
PROPOSITION 1.8. Suppose that the pair (K, chi) satisfies the conditions of Proposition
1.7, and psi is a character modulo q. Then the following formula holds for any modular
forms F, G in M_w(K, chi, psi) of which at least one is a cusp-form:
(1.26) (F|_{w,chi}tau, G) = psi(r(M)) (F, G|_{w,chi}tau),
where tau = (M)_K in L(K) or tau = (M~)_{K~} in E~(K, chi) for w = k or w = k/2, respectively,
and (., .) is the scalar product in (5.1) of Chapter 2.
PROOF. By Lemma 1.2 of Chapter 3, the number of K-left cosets in KMK is
equal to the index [K : K_{(M)}]. The number of K-right cosets in KMK is obviously
equal to the number of left cosets in KM^{-1}K, i.e., [K : K_{(M^{-1})}]. Since these indices
are equal, by Lemma 5.4 of Chapter 2, it follows that the number of K-left cosets in
KMK is equal to the number of K-right cosets there. On the other hand, every left
coset clearly has nonempty intersection with each right coset. Hence, there exists a set
of representatives M_1, ..., M_mu of the left cosets K \ KMK that is also a set of right
coset representatives. Using this set of representatives and the properties of the scalar
product in Theorem 5.3 of Chapter 2, we obtain
$$(F|_{k,\chi}(M)_K,\,G)=\sum_{i=1}^{\mu}X(M_i)^{-1}\bigl(F|_kM_i,\ G|_kM_i^{-1}|_kM_i\bigr)\eqno(1.27)$$
$$=\Bigl(F,\ \sum_{i=1}^{\mu}\overline{X(M_i)}^{\,-1}\,G|_kM_i^{-1}\Bigr),$$
Because M_1, ..., M_mu is a set of representatives of the right cosets KMK/K, it follows
from this and Lemma 1.6(1) that the set E(r)rM_1^{-1}, ..., E(r)rM_mu^{-1} is a set of repre-
sentatives of the K-left cosets in the double coset KE(r)rM^{-1}K. Furthermore, since
(1.13) implies that any homomorphism X of the form (1.17) satisfies the relation
X(M) = X(E(r)rM^{-1}) for M in S(K) with r(M) = r,
it follows from the above considerations that the expression (1.28) can be rewritten in
the form
psi(r) G|_{k,chi}(M')_K, where M' = E(r)rM^{-1}.
The two matrices M and M' obviously have the same symplectic divisors, and, by
(3.5) of Chapter 3, we may suppose that they lie in S(Gamma^n(q)); hence, by Lemma 3.6
of Chapter 3, Gamma^n M Gamma^n = Gamma^n M' Gamma^n. If we now apply Theorem 3.3(3) of Chapter 3, we
conclude that these matrices belong to the same Gamma^n(q)-double coset, and, in particular,
(1.29) (E(r)rM^{-1})_K = (M)_K (M in S(K), r = r(M)).
Thus, psi(r) G|_{k,chi}(M')_K = psi(r) G|_{k,chi}(M)_K, and, if we substitute this expression in the
sum in (1.27), we obtain (1.26) for w = k.
By Lemma 1.8 and Proposition 4.3 of Chapter 3, the number of K~-left and K~-right
cosets in K~ N~ K~, where N~ = M~^{+-1}, is the same as the number of K-left and K-right
§1. HECKE OPERATORS FOR CONGRUENCE SUBGROUPS 233
Just as in the case of (1.27) and (1.28), we now obtain the relation
(F|_{k/2,chi}(M~)_{K~}, G) = psi(r(M)) (F, G|_{k/2,chi}(M~')_{K~}),
so that in order to prove (1.26) for w = k/2 we must show that
(1.30) (E(r)~ r~ M~^{-1})_{K~} = (M~)_{K~}, where M~ in S~(Gamma^n(q))_+.
Since E(r)~ lies in Gamma~ = Gamma~_0^n(4) and r~ M~^{-1} = M~* (see (4.85) of Chapter 3), from (4.34) and
Lemma 4.14 of Chapter 3 it follows that Gamma~ M~' Gamma~ = Gamma~ M~ Gamma~, or, equivalently, M~' = gamma~ M~ delta~
with gamma, delta in Gamma. We shall show that this implies the equality of double cosets
(l.31)
which, in turn, implies (1.30). We choose an integer q_1 prime to q such that q_1 M^{+-1} are
integer matrices. By Lemma 3.2(2) of Chapter 3, the matrix gamma can be represented in the
form gamma = gamma_1 gamma_2 with gamma_1 in Gamma_1 and gamma_2 in Gamma^n(q_1^2). Moreover, gamma_2 and delta_1 = M^{-1} gamma_2 M in Gamma,
since q is divisible by 4 and M in S(Gamma_1). By assumption, r(M) is the square of a rational
number; hence, by (4.6) and Proposition 4.3 of Chapter 3, we have delta~_1 = M~^{-1} gamma~_2 M~,
and so
M~' = gamma~_1 M~ delta~_2, where gamma_1 in Gamma_1 and delta_2 = delta_1 delta in Gamma.
Since delta_2 = M^{-1} gamma_1^{-1} M' in S(Gamma_1) intersect Gamma, we have proved (1.31).
LEMMA 1.10. Let V be a nonzero finite-dimensional vector space over an algebraically
closed field, and let S_1, ..., S_d be a finite set of pairwise commuting linear operators on
V. Then V contains a nonzero common eigenvector of S_1, ..., S_d.
PROOF. The case d = 1 is obvious. Suppose that d > 1, and the lemma holds for
sets of d - 1 operators. Let lambda_1 denote an eigenvalue of S_1 on V, and let V' = {v in V;
vS_1 = lambda_1 v} denote the corresponding eigenspace. Then V' is invariant relative to
S_2, ..., S_d, because those operators commute with S_1. By the induction assumption,
V' contains a nonzero common eigenvector of all of these operators. D
Returning to the proof of the theorem, we see that V contains a nonzero eigenfunc-
tion F_1 of all of the operators |tau_i (i = 1, ..., d) (provided, of course, that V != {0}).
We set V_1 = {aF_1; a in C} and V_2 = {G in V; (F_1, G) = 0}. Since the scalar
product on V is hermitian and nondegenerate, V splits into the orthogonal direct sum
of V_1 and V_2: V = V_1 (+) V_2. By construction, V_1 is invariant relative to the operators
|tau_i. Then (1.26) implies that V_2 is also invariant relative to these operators; hence,
if V_2 != {0}, then V_2 contains a nonzero common eigenfunction F_2. Repeating the
same argument for V_2 and F_2 in place of V and F_1, and continuing in this way, after
a finite number of steps we obtain an orthogonal basis for V consisting of common
eigenfunctions for all of the operators |tau_i. D
PROOF. By Lemma 3.5 of Chapter 3, the group Gamma_0^n(q) satisfies the q-symmetry
condition. Hence, to prove the lemma it suffices to verify that the formula (2.1)
gives a group homomorphism from S^n(q) to C*, that the restriction to Gamma_0^n(q) of this
homomorphism coincides with [chi], and that its restriction to S(Gamma^n(q)) is the map rho_w
§2. ACTION OF THE HECKE OPERATORS 235
given by (1.16). The first claim follows immediately from the description of S^n(q) in
Lemma 3.5 of Chapter 3. The second and third claims follow from the definition of
the character [chi] and the map rho_w. D
In particular, the function F|tau has a Fourier expansion of the same form as F. We
write
(2.9) F|tau = Sum over R in A_n of (f|tau)(R) e{RZ},
where f|tau is another function in F_n(q, chi). From Proposition 1.1 we immediately
obtain
LEMMA 2.2. The map
tau -> |tau: f -> f|tau (f in F_n(q, chi))
is a linear representation of the Hecke ring L^n(q) if w = k, and a linear representation
of the Hecke ring L~^n(q, chi) if w = k/2.
2. Hecke operators for Gamma_0^n. In Chapter 3 we obtained expressions for elements of
L^n(q) and L~^n(q, chi) in terms of components belonging to the Hecke ring of the trian-
gular subgroup Gamma_0^n subset Gamma^n. The Hecke operators corresponding to these components
do not, in general, stay within the confines of the original spaces of modular forms.
Thus, in order to apply these results of Chapter 3 when computing the action of the
Hecke operators, we must first define suitable extensions of the spaces M_w(q, chi).
To each character epsilon of the group {+-1} we associate the character delta_epsilon: Gamma_0^n -> {+-1}
defined by setting
where M = (A B; C D) in S_0^n(q) and Z in H_n. Then the action of the group S_0^n(q) on
functions F: H_n -> C is written in the form
(2.14) F|_{w,chi}M = r(M)^{wn-(n)} chi_w(det A) |det D|^{-w} F(M<Z>).
We let V denote the space of all functions F: H_n -> C such that F|_{w,chi}M = F for all
M in Gamma_0^n. If for each T = Sum_i a_i(Gamma_0^n M_i) in L_0^n(q) we set
(2.15)
where M~ = (M, phi(Z)) in S~_0^n(q). From (4.2) of Chapter 3 it follows that the space V~
can also be defined by the condition that F|_{k/2,chi}M~ = F for all M~ in Gamma~_0^n. Hence, if for
T~ = Sum_i a_i(Gamma~_0^n M~_i) in L~_0^n(q) we define the operator |_{k/2,chi}T~ on V~ by the formula
(2.17)
maps the corresponding space M~_w to itself. The maps T -> |_{w,chi}T and T~ -> |_{k/2,chi}T~ are
linear representations of the rings L_0^n(q) and L~_0^n(q) in the space M~_w.
The operators (2.18), which we shall also call Hecke operators, are compatible
with the imbeddings of Hecke rings in (4.101) and (5.3) of Chapter 3 and with the
imbedding (2.12). More precisely, we have
LEMMA 2.4. Under the assumptions in the previous lemma, one has:
where epsilon_q: L^n(q) -> L_0^n(q) and epsilon_{q,k}: E^n(q, chi) -> L~_0^n(q) are the imbeddings (5.3) and
(5.5) of Chapter 3. In particular, the subspace M_w(q, chi) subset M~_w is invariant relative
to all of the Hecke operators |_{w,chi}tau, where tau in L^n(q) = epsilon_q(L^n(q)) or tau in E^n(q, chi) =
epsilon_{q,k}(E^n(q, chi)) in the cases w = k and w = k/2, respectively.
PROOF. In the case w = k, the lemma follows immediately from the definitions.
Suppose that w = k/2. We first note that epsilon_{q,k} = epsilon_q * P_k is the composition of the
homomorphisms (4.101) and (5.3) of Chapter 3. According to the definitions of the
operators in (2.3), (2.17), and (2.15), they satisfy the relations
where epsilon(-1) = chi(-1), and this implies the lemma for w = k/2. D
From Lemmas 2.3 and 2.4 we have (in the notation of those lemmas):
LEMMA 2.5. The map T -> |_{w,chi}T is a linear representation of the ring L_0^n(q) in the
space F~_w, and it satisfies the following relations:
where epsilon_q and epsilon_{q,k} are the imbeddings (5.3) and (5.5) of Chapter 3. In particular, the
subspace F_n(q, chi) subset F~_w is invariant relative to all of the operators |_{w,chi}tau, where tau in L^n(q)
or tau in L~^n(q, chi) in the cases w = k and w = k/2, respectively.
The above constructions make it possible to apply the decompositions in Chapter
3 when computing the action of concrete Hecke operators on modular forms and their
Fourier coefficients. Here we shall prove three general lemmas concerning the action
of Hecke operators in L_0^n. From now on we shall assume, without further mention,
that w = k or k/2 is an integer or half-integer, chi is a Dirichlet character modulo
q in N, and epsilon is the character of the group {+-1} such that epsilon(-1) = chi_w(-1).
LEMMA 2.6. Let F be a function in M~_w with Fourier coefficients f(R). Then for any
matrix M_0 = (rD_0* B_0; 0 D_0) in S_0^n(q) the action of the Hecke operator for (M_0)_{Gamma_0^n} on F
§2. ACTION OF THE HECKE OPERATORS 239
PROOF. The formula (2.23) follows immediately from (2.15), (2.14), and the left
coset decomposition for Γ_0M_0Γ_0 in (3.44) of Chapter 3. If we substitute the Fourier
expansion of F in the right side of (2.23), we obtain the series

Since e{R′Z[D^{-1}]} = e{rR′[D*]Z} and the resulting series still lies in 𝔐^n_s, we see that
if we group together terms with fixed product rR′[D*] = R, then the only terms that
remain are those for which R ∈ X = rD^{-1}A_n·ᵗD^{-1} ∩ A_n. Thus, if we set r^{wn-(n)}χ(r)^n = γ
and χ_w(det D)|det D|^{-w} = ψ(D), we find that our series is equal to
LEMMA 2.7. Suppose that F is a function in 𝔐^n_s with Fourier coefficients f(R), and
M_0 = (rD_0*  B_0; 0  D_0) ∈ S^n_0(q) satisfies the condition d_n(D_0)² | r (respectively, r | d_1(D_0)²),
where d_i(D) denotes the i-th elementary divisor of D. Then we have

(2.26)
(F|_{w,χ}(M_0)_q)(Z) = r^{wn-(n)}χ(r)^n Σ_D χ_w(det D)|det D|^{-w} F(rZ[D^{-1}]),

(2.27)
(f|_{w,χ}(M_0)_q)(R) = r^{wn-(n)}χ(r)^n Σ′_D χ_w(det D)|det D|^{-w} f(r^{-1}R[ᵗD]),
where D ∈ Λ\ΛD_0Λ (Λ = Λ^n), and R[ᵗD] ∈ rA_n in (2.27) (respectively, we have

(2.28)
(F|_{w,χ}(M_0)_q)(Z) = r^{wn-n(n+1)}χ(r)^n|det D_0|^{n+1}
  × Σ_D χ_w(det D)|det D|^{-w} Σ_{R′} f(R′)e{rR′[D*]Z},

(2.29)
(f|_{w,χ}(M_0)_q)(R) = r^{wn-n(n+1)}χ(r)^n|det D_0|^{n+1}
  × Σ_D χ_w(det D)|det D|^{-w} f(r^{-1}R[ᵗD]),
Under our assumptions, the function S → e{rR[D*]S} is obviously a character of the
finite additive quotient group S_n/r^{-1}·ᵗDS_nD. Hence, the above sum is equal to the
order of this quotient group if the character is trivial, and is equal to zero otherwise.
Clearly, the character is trivial if and only if rR[D*] is an integer matrix with even
entries on the main diagonal, i.e. (since R ∈ A_n), if and only if R ∈ A_n ∩ r^{-1}DA_nᵗD.
When computing the order of the quotient group, obviously we can replace D by its
matrix of elementary divisors ed(D) = diag(d_1, ..., d_n); we find that this order is

Π_{i≤j}(r^{-1}d_id_j) = r^{-(n)}(d_1 ⋯ d_n)^{n+1} = r^{-(n)}|det D_0|^{n+1}.
This argument proves (2.30). The formulas (2.28) and (2.29) follow from (2.23),
(2.24), and (2.30). □
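The order formula in the proof can be checked numerically for r = 1: the map S → ᵗDSD multiplies the (i, j) entry of a symmetric matrix S by d_i d_j, so the index of the sublattice is the product of d_i d_j over i ≤ j, and each d_i occurs in exactly n + 1 of these pairs. A small Python sketch (the helper name is ours, purely illustrative):

```python
from itertools import combinations_with_replacement
from math import prod

def quotient_order(divisors):
    """Order of S_n / (tD S_n D) for D = diag(divisors) and r = 1.

    The map S -> tD S D scales the (i, j) entry of S by d_i * d_j,
    so the index is the product of d_i * d_j over all pairs i <= j."""
    return prod(di * dj for di, dj in combinations_with_replacement(divisors, 2))

# Closed form (d_1 ... d_n)^(n+1): each d_i occurs in n + 1 of the pairs.
for d in ([2], [1, 2], [2, 3], [1, 2, 6]):
    assert quotient_order(d) == prod(d) ** (len(d) + 1)
```

For example, D = diag(1, 2) gives index 1·2·4 = 8 = (1·2)³, in agreement with the displayed formula.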
LEMMA 2.8. Let F be a function in 𝔐^n_s with Fourier coefficients f(R). Then:

(2.32)
(F|_{w,χ}Π_a(d))(Z) = d^{wn-(n)+(a)}χ(d)^n Σ_D χ_w(det D)|det D|^{-w}
  × Σ_R f(R)e{dR[D*]Z},

Σ_{B′∈B_0(D_0)/mod D_0} e{R[a*]B′D_0^{-1}}.

Since R ∈ A_n, this condition for the trigonometric sum to be nonzero is equivalent
to the condition dR[a*D_0^{-1}] ∈ A_n; and this, in turn, is equivalent to the condition
dR[D*] ∈ A_n. This proves (2.35). The formulas (2.32) and (2.33) follow from (2.23),
(2.24), and (2.35). The formulas in (2.34) are a direct consequence of (2.14). □
and let A_n(D) = {R ∈ A_n; S_D(R) ≠ 0}. Prove that:
(1) If α, β ∈ Λ_n, then S_{αDβ}(R) = S_D(R[α*]) and A_n(αDβ) = αA_n(D)ᵗα.
(2) If ed D = diag(d_1, ..., d_n), then

(f|_{k,χ}T(m))(2a) = Σ_{δ|m, δ|a} δ^{k-1}χ(δ)f(2ma/δ²),

and if n = 2, then
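For n = 1 the displayed formula is the classical action of T(m) on Fourier coefficients; in the usual coefficient-indexed normalization it reads a(n) ↦ Σ_{δ | gcd(m,n)} δ^{k-1}χ(δ)a(mn/δ²). A Python check against the weight-12 cusp form Δ, whose coefficients τ(n) are Hecke eigenvalues (the function and variable names below are ours, purely illustrative):

```python
from math import gcd

def hecke_coeff(f, m, n, k, chi=lambda d: 1):
    """n-th Fourier coefficient of f|T(m) for a degree-1 form of weight k:
    sum of chi(d) * d^(k-1) * f(m*n/d^2) over divisors d of gcd(m, n)."""
    g = gcd(m, n)
    return sum(chi(d) * d ** (k - 1) * f[m * n // d ** 2]
               for d in range(1, g + 1) if g % d == 0)

# tau(n): coefficients of Delta (weight 12, level 1, trivial character)
tau = {1: 1, 2: -24, 3: 252, 4: -1472, 5: 4830, 6: -6048}

# Delta is an eigenform: Delta|T(2) = tau(2) * Delta = -24 * Delta
for n in (1, 2, 3):
    assert hecke_coeff(tau, 2, n, 12) == -24 * tau[n]
```

The n = 2 case exercises both divisors: τ(4) + 2¹¹τ(1) = −1472 + 2048 = 576 = (−24)².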
3. Hecke operators and the Siegel operator. We now study the relations between
Hecke operators on the spaces 𝔐^n_w(q, χ) and 𝔐^n_s and the Siegel operator Φ that
was defined in §3.4 of Chapter 2. These relations make it possible to reduce certain
questions in the theory of Hecke operators for groups of degree n to the analogous
questions for groups of lower degree.

In §3.4 of Chapter 2 the Siegel operator Φ was originally defined on the space 𝔉^n
of Fourier series of the form (3.5) of Chapter 2 that converge absolutely on all of H_n
and uniformly on subsets of the form H_n(ε) with ε > 0. From Theorem 3.1 of Chapter
2 it follows that 𝔐^n_s ⊂ 𝔉^n for any character s. Thus, the operator Φ is defined on
the spaces 𝔐^n_s, and hence also on the subspaces 𝔐^n_w(q, χ). From the definitions and
Theorem 3.1 of Chapter 2 it follows that the subspace 𝔐^n_s ⊂ 𝔉^n can be characterized
as the subset of all F ∈ 𝔉^n whose Fourier coefficients satisfy (2.7). If we consider
these relations for matrices V in Λ_n of the form (V_1  0; 0  1), where V_1 ∈ Λ_{n-1}, and
apply (3.50) of Chapter 2, we find that for any F ∈ 𝔐^n_s the Fourier coefficients of the
function F|Φ satisfy the conditions (2.7) for n − 1. Thus,
(2.36)
Φ(𝔐^n_s) ⊂ 𝔐^{n-1}_s,

where we set

(2.37)
𝔐^0_s = C, if s(−1) = 1, and 𝔐^0_s = {0}, if s(−1) = −1

(in the case s(−1) = −1, the constant term in the Fourier expansion of any function
of 𝔐^n_s is obviously zero). Now let q ∈ N, and let χ be a Dirichlet character modulo
q. From the definitions it easily follows that if K = Γ^n_0(q), then the group K^{(n-1)} (see
(3.55) of Chapter 2) is Γ^{n-1}_0(q), and the character [χ]^{(n-1)} of this group (see (3.56) of
Chapter 2) is the one-dimensional character corresponding to χ. Thus, by Proposition
3.12 of Chapter 2 we see that for any integer or half-integer w, Φ(𝔐^n_w(q, χ)) ⊂ 𝔐^{n-1}_w(q, χ), where we set

(2.38)
𝔐^0_w(q, χ) = C, if χ_w(−1) = 1,

(2.39)
𝔐^0_w(q, χ) = {0}, if χ_w(−1) = −1.
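In terms of Fourier expansions the Siegel operator is very concrete: as used in the proof of Theorem 2.12 below, F|Φ retains exactly the coefficients f(R) of F at matrices of the form R = (R′  0; 0  0). A sketch for n = 2, with half-integral matrices encoded as triples (a, b, c) (the encoding and the sample values are ours, purely illustrative):

```python
def siegel_phi(f2):
    """Fourier coefficients of F|Phi for a degree-2 form F.

    f2 maps (a, b, c) -> f(R) for R = [[a, b/2], [b/2, c]]; the image
    keeps only the matrices whose last row and column vanish (b = c = 0),
    which become the coefficients of a degree-1 Fourier series."""
    return {a: v for (a, b, c), v in f2.items() if b == 0 and c == 0}

coeffs = {(0, 0, 0): 2, (1, 0, 0): 5, (1, 1, 1): 7}
assert siegel_phi(coeffs) == {0: 2, 1: 5}
```

In particular the constant term survives, which is why Φ maps into the spaces (2.37)-(2.39) when n = 1.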
We now turn to the Hecke operators. We show that the Hecke operators on the
spaces 𝔐^n_s and 𝔐^n_w(q, χ) are compatible with Φ in the sense that there exists a homomorphism
X → X′ of Hecke rings from degree n to degree n − 1 such that for every
function F in the space under consideration one has (F|_{w,χ}X)|Φ = (F|Φ)|_{w,χ}X′. It is
convenient to describe these homomorphisms in terms of the polynomial realizations
of the Hecke rings by means of the spherical maps. Hence, we shall limit ourselves
to the local Hecke rings. The global Hecke rings could be treated in an analogous
manner; however, we do not need to do this, since all of the elements that interest us
in the global Hecke ring are generated by local components.
Suppose that n ∈ N, p is a prime, and

(2.40)
is an arbitrary element of the Hecke ring L^n_{0,p} (see (3.45) of Chapter 3). By choosing
different Γ^n_0-left coset representatives, we can replace each matrix D_i by any
matrix in the left coset Λ_nD_i. From Lemma 2.7 of Chapter 3 it then follows that all of
the D_i may be assumed to have been chosen in the form

(2.41)

It is easy to see that Ψ(X, u) does not depend on the choice of representatives with
these properties. We thus obtain a linear map

(2.42)
Ψ = Ψ^n_p: L^n_{0,p} → L(Γ^{n-1}_0, S^{n-1}_{0,p}) ⊗_Z Q[u^{±1}]

from the ring L^n_{0,p} to the left coset module of the pair (Γ^{n-1}_0, S^{n-1}_{0,p}) over the ring of
polynomials in u^{±1} with rational coefficients; in the case n = 1 we obtain a map

(2.43)
Ψ^1_p: L^1_{0,p} → Q[u^{±1}].
PROPOSITION 2.11. Let n ∈ N, and let p be a prime. Then:
(1) For any X ∈ L^n_{0,p} the element Ψ(X, u) lies in the ring

L^{n-1}_{0,p}[u^{±1}] = D_Q(Γ^{n-1}_0, S^{n-1}_{0,p})[u^{±1}]

of polynomials in u^{±1} over the Hecke ring L^{n-1}_{0,p} (we set L^0_{0,p} = Q); the map

(2.44)
Ψ = Ψ^n_p: L^n_{0,p} → L^{n-1}_{0,p}[u^{±1}]

is a ring homomorphism.
(2) The image of the restriction of Ψ^n_p to the subring C^n_{-,p}, C^n_{+,p}, or L^n_p of the ring L^n_{0,p}
is contained in C^{n-1}_{-,p}[u^{±1}], C^{n-1}_{+,p}[u^{±1}], L^{n-1}_p[u^{±1}], respectively (we set C^0_{±,p} = L^0_p = Q).
(3) The following diagram commutes:

(2.45)
  L^n_{0,p}               --Ω^n_p-->     Q[x_0^{±1}, x_1^{±1}, ..., x_n^{±1}]
    | Ψ^n_p                                 | Ξ_n
  L^{n-1}_{0,p}[u^{±1}]  --Ω^{n-1}_p-->  Q[x_0^{±1}, ..., x_{n-1}^{±1}, u^{±1}],

where Ω^n_p is the spherical map (3.49) of Chapter 3; in the bottom row Ω^{n-1}_p is the homomorphism
extending the spherical map on L^{n-1}_{0,p} that satisfies the condition Ω^{n-1}_p(u^{±1}) = u^{±1}
(we define the spherical map on L^0_{0,p} = Q to be the identity map); Ψ^n_p is the homomorphism
(2.44); and Ξ_n is the homomorphism of polynomial rings given on generators by:
Ξ(x_0) = x_0u^{-1}, Ξ(x_n) = u, and Ξ(x_i) = x_i for 1 ≤ i ≤ n − 1 (we take Ξ_1(x_0) = u^{-1},
Ξ_1(x_1) = u).
PROOF. If γ_1 is an arbitrary matrix in Γ^{n-1}_0 and γ is the image of γ_1 under the map
(3.53) of Chapter 2, then γ ∈ Γ^n_0, and from the condition X·γ = X for X ∈ L^n_{0,p} and the
definition of Ψ(X, u) it easily follows that Ψ(X, u)γ_1 = Ψ(X·γ, u) = Ψ(X, u), where
γ_1 acts only on the left cosets and not on the coefficients. Hence, Ψ(X, u) ∈ L^{n-1}_{0,p}[u^{±1}].
From the definition of multiplication in Hecke rings it immediately follows that the
map (2.44) is a homomorphism.

Next, using the definition of Ψ and the expansions (5.12) of Chapter 3, we obtain
Ψ(Π_±(p), u) = u^{-1}Π^{n-1}_±(p). This, along with (6.1) of Chapter 3, implies the claim in
the lemma concerning the Ψ-images of C^n_{±,p}. As for L^n_p, if we define the map Ψ: L^n_p →
L^{n-1}_p[u^{±1}] by analogy with (2.42), then it is not hard to verify the commutativity of
the diagram

  L^n_p              --ε-->  L^n_{0,p}
    | Ψ                        | Ψ
  L^{n-1}_p[u^{±1}]  --ε-->  L^{n-1}_{0,p}[u^{±1}],

where ε denotes the imbeddings in Lemma 3.26 of Chapter 3; hence
To prove the third part of the proposition, we suppose that each matrix D_i in the
expansion (2.40) of some X ∈ L^n_{0,p} has been chosen in the form (2.36) of Chapter 3
with diagonal entries p^{d_{i1}}, ..., p^{d_{in}}. Then, by the definition of the maps, a direct
term-by-term computation gives

Ξ_n(Ω^n_p(X)) = Ω^{n-1}_p(Ψ(X, u)). □
In what follows, to avoid worrying about the different fields of definition of the
Hecke rings, we shall suppose that all of our maps of Hecke rings have been extended
by linearity to the complexifications.

THEOREM 2.12 (the Zharkovskaia commutation relations). Suppose that q ∈ N, p
is a prime not dividing q, χ is a Dirichlet character modulo q, and s is the character of
the group {±1} that satisfies the condition s(−1) = χ_w(−1), where χ_w is the character
(2.5). Then the following relation holds for any F ∈ 𝔐^n_s and any X ∈ L^n_{0,p} ⊗_Q C:

(2.46)
(F|_{w,χ}X)|Φ = (F|Φ)|_{w,χ}Ψ(X, p^{n-w}χ(p)),
where Φ is the Siegel operator, Ψ(X, p^{n-w}χ(p)) ∈ L^{n-1}_{0,p} ⊗_Q C is the element (2.42), and in the
case n = 1 the operator |_{w,χ}Ψ acts on the right as multiplication by the complex number
Ψ.
PROOF. Let f(R) (R ∈ A_n) be the coefficients in the Fourier expansion (2.6) of
F, and let X be written in the form (2.40), where a_i ∈ C and each D_i has the form
(2.41). Using the definitions (see (2.14)), we have

F|_{w,χ}X = Σ_i a_i′ ( Σ_{R∈A_n} f(R)e{RZ} ) |_{w,χ} (p^{δ_i}(D_i)*  B_i; 0  D_i),

where

a_i′ = a_i p^{δ_i(wn-(n))} χ_w(det(p^{δ_i}D_i))|det D_i|^{-w}.

We note that for R = (R′  *; *  r) ∈ A_n the entry in the lower-right corner of the
matrix p^{δ_i}R[D_i*] is equal to p^{δ_i-2d_{in}}r (see (2.41)). Thus, if we set Z = (Z′  0; 0  iλ) in
the last expression, where Z′ ∈ H_{n-1}, and if we let λ approach +∞, then all of the
terms corresponding to matrices R with r > 0 will approach zero. Since r ≥ 0, we
have r = 0, and since R ≥ 0 it follows that R = (R′  0; 0  0). Finally, we note that for
R = (R′  0; 0  0), Z = (Z′  0; 0  iλ), and D_i of the form (2.41) we have the relations

p^{δ_i}R[D_i*]Z = (p^{δ_i}R′[(D_i′)*]Z′  0; 0  0)  and  RB_iD_i^{-1} = (R′B_i′(D_i′)^{-1}  0; 0  0),

and we use the uniform convergence of the series on the subsets H_n(ε) ⊂ H_n. We
obtain

(F|_{w,χ}X)|Φ = lim_{λ→+∞} (F|_{w,χ}X)(Z′  0; 0  iλ)
  = (F|Φ)|_{w,χ}Ψ(X, p^{n-w}χ(p)),

where p_i = p^{δ_i(w-n)}χ(p^{δ_i-d_{in}})p^{-wd_{in}}, since, according to (3.50) of Chapter 2, the sum
in the large parentheses is equal to (F|Φ)(Z′). □
We now consider the restriction of the map Ψ(·, p^{n-k}χ(p)) to the complexification
L̄^n_p ⊗_Q C of the ring (3.46) in Chapter 3.

PROPOSITION 2.13. Suppose that n, q ∈ N, k ∈ Z, and χ is a character modulo q.
Then for any prime p not dividing q:
(1) If X is an element of the ring L̄^n_p (respectively, Ē^n_p), then Ψ(X, p^{n-k}χ(p)) ∈ L̄^{n-1}_p
(respectively, Ē^{n-1}_p), where we set L̄^0_p = Ē^0_p = C.
(2) The map

(2.48)
Ψ(·, p^{n-k}χ(p)): L̄^n_p ⊗_Q C → L̄^{n-1}_p ⊗_Q C

is an epimorphism, except in the case

(2.49)
n > 1 and p^{n-k}χ(p) = −1.

In that case the image of L̄^n_p is the even subring Ē^{n-1}_p ⊂ L̄^{n-1}_p.
(3) The map (2.48) gives an epimorphism of Ē^n_p onto the ring Ē^{n-1}_p.
PROOF. We form the complexifications of all of the rings in the diagram (2.45),
i.e., we tensor with C over Q, and we extend the maps by linearity to the complexifications.
Obviously, the resulting diagram still commutes. Then, instead of the action
of Ψ^n_p(·, p^{n-k}χ(p)) on L̄^n_p we can consider the action of Ξ_n with u = p^{n-k}χ(p) on the
image Ω^n_p(L̄^n_p) under the extended spherical map. From Theorem 3.30 of Chapter 3 it
follows that this image is the polynomial ring

(2.50)

(2.51)
 = p_0 Σ_{a=0}^{2n} r_a = p_0(2 + 2(r_1 + ... + r_{n-1}) + r_n)  (n ≥ 1),

we obtain

(2.52)

We now examine how the map Ξ = Ξ_n acts on the generators of these polynomial
rings. By definition, we have

(2.53)
Ξ(p_0^n) = u^{-1}x_0^2x_1 ⋯ x_{n-1} = u^{-1}p_0^{n-1},   Ξ(t_n) = u^{-1}(1 + u)t_{n-1}.
(2.54)
Σ_{a=0}^{2n} (−1)^a Ξ(r_a^n)v^a = (1 − uv)(1 − u^{-1}v) Π_{i=1}^{n-1} (1 − x_iv)(1 − x_i^{-1}v)
  = (1 − (u + u^{-1})v + v²) Σ_{a=0}^{2(n-1)} (−1)^a r_a^{n-1}(x_1, ..., x_{n-1})v^a,

and hence

(2.55)

These formulas imply that for any u ∈ C, in particular for u = p^{n-k}χ(p), the map
Ξ takes the ring Ω^n_p(L̄^n_p) to Ω^{n-1}_p(L̄^{n-1}_p), and takes the ring Ω^n_p(Ē^n_p) to
Ω^{n-1}_p(Ē^{n-1}_p). This proves part (1) (for L̄^n_p this part also follows from
Proposition 2.11(2)).
To find the images of these maps we use (2.53) and (2.54) to express the generators
r_a^{n-1} and t_{n-1} in terms of their preimages. From (2.53) we have

(2.56)

(2.57)

If we set u = p^{n-k}χ(p) in these formulas, we see that the image of Ω^n_p(L̄^n_p) contains
all of the generators of the ring Ω^{n-1}_p(L̄^{n-1}_p), except in the case when n > 1 and
p^{n-k}χ(p) = −1, i.e., the case (2.49). In the latter case, the image contains all of the
elements r_a^{n-1} and (p_0^{n-1})^{±1}, but it does not contain t_{n-1}; hence, by (2.52) for n − 1,
this image is Ω^{n-1}_p(Ē^{n-1}_p). The same formulas also imply that the image of Ω^n_p(Ē^n_p) is
always Ω^{n-1}_p(Ē^{n-1}_p). □

The formulas (2.56) and (2.57) give us a practical way to compute the preimages
of the generators of L̄^{n-1}_p under the map (2.48).
Let L̂^n_{0,p} be the Hecke ring (4.99) of Chapter 3, and let P_k be the homomorphism
(4.101) of Chapter 3. We define the homomorphism Ψ̃ = Ψ̃^n_p for the ring L̂^n_{0,p} in such
a way that the following diagram is commutative:

(2.58)
  L̂^n_{0,p}                --P_k-->           L^n_{0,p}
    | Ψ̃                                        | Ψ
  L̂^{n-1}_{0,p}[u^{±1}]  --P^{n-1}_k × 1-->  L^{n-1}_{0,p}[u^{±1}],

where P^{n-1}_k × 1 is the homomorphism extending P^{n-1}_k and taking u^{±1} → u^{±1}, and
L̂^0_{0,p} = C. If

(2.59)
X̃ = Σ_i a_i (Γ_0 (p^{δ_i}(D_i)*  B_i; 0  D_i), t_i)

is an arbitrary element of L̂^n_{0,p} and the matrices D_i have the form (2.41), then we easily
see that the map Ψ̃ that takes X̃ to

(2.60)
Ψ̃(X̃, u) = Σ_i a_i u^{δ_i}(up^{-1})^{d_{in}} t_i^{-k} ...

for n > 1 and n = 1, respectively, has the above property. Moreover, if we use (2.19),
(2.46), and the commutativity of the diagram (2.58), then for any F ∈ 𝔐^n_s, where
s(−1) = χ(−1), and for any X̃ ∈ L̂^n_{0,p}, we obtain the relation
(2.61)
LEMMA 2.14. Let Ê^n_p(q) be the even Hecke ring (4.37) of Chapter 3, and let ε_q be
the imbedding of this ring in L̂^n_{0,p} given in (4.100) of Chapter 3. Then the restriction of
the map (2.60) to the subring ε_q(Ê^n_p(q)) gives a homomorphism

(2.62)

where γ is the image of γ′ in the group Γ^n_0(q) under the map (3.53) of Chapter 2. This
relation, in turn, is a consequence of the following claim.
(2.63)

Then

(2.64)
M̂_1′γ′ = δ̂′M̂_2′  for  M̂_i′ = (M_i′, t_ip^{-d_i/2}) ∈ Ŝ^{n-1}_{0,p},

where M_i′ = (p^{δ_i}(D_i′)*  B_i′; 0  D_i′) and δ′ = (a′  b′; c′  d′) ∈ Γ^{n-1}_0(q), and for any
n × n matrix A we let A′ denote the (n − 1) × (n − 1) matrix in its upper-left corner.

To prove this claim, we first note that M_1γ = δM_2. From this and from (2.7)-(2.8)
of Chapter 1 we obtain M_1′γ′ = δ′M_2′, where δ′ ∈ Γ^{n-1}_0(q), d_1 = d_2, and the matrix

Setting Z = (Z′  0; 0  iλ) ∈ H_n, we conclude from (3.61) of Chapter 2 and the last
equality that we have

t_1 j^{(2)}(γ′, Z′) = j^{(2)}(δ′, M_2′⟨Z′⟩)t_2.

This, together with the equality M_1′γ′ = δ′M_2′, gives us (2.64). □
This lemma enables us to describe the action of Ψ̃ on the subring ε_{q,k}(Ê^n_p(q, χ))
of the ring L̂^n_{0,p}, and to define a map Ψ̃′ for Ê^n_p(q, χ) that commutes with the Siegel
operator Φ. Namely, we have

PROPOSITION 2.15. (1) Let Ê^n_p(q, χ) be the ring (4.104) of Chapter 3, let χ be a
Dirichlet character modulo q, and let k be an odd integer. Then the map Ψ̃ (see (2.42)
and (2.43)) gives an epimorphism

(2.65)
Ψ̃(·, p^{n-k/2}χ(p)): Ê^n_p(q, χ) ⊗_Q C → Ê^{n-1}_p(q, χ) ⊗_Q C,

where the ring on the right is C in the case n = 1.
(2) Let Ψ̃′ = Ψ̃′(·, p^{n-k/2}χ(p)) be the map defined by the commutative diagram

(2.66)
  Ê^n_p(q, χ) ⊗_Q C          --ε_{q,k}-->  L̂^n_{0,p}
    | Ψ̃′(·, p^{n-k/2}χ(p))                  | Ψ̃(·, p^{n-k/2}χ(p))
  Ẽ^{n-1}_p(q, χ) ⊗_Q C      --ε_{q,k}-->  L̂^{n-1}_{0,p},

where Ẽ^n_p(q, χ) is the ring (4.83) of Chapter 3 and ε_{q,k} is the monomorphism (4.102) of
Chapter 3 extended by linearity if m > 0, and is the identity map from C to C if m = 0.
Then Ψ̃′ is an epimorphism.
The Zharkovskaia relations are often used when one wants to answer certain
questions concerning the action of Hecke operators on modular forms not in the
kernel of the Siegel operator by reducing them to analogous questions for forms of
lower degree. One such question is the existence of a basis of eigenfunctions of the
Hecke operators. Theorem 1.9 gives a positive answer to this question in the case of
cusp forms. In many cases the Zharkovskaia relations enable one to carry this result
over to the entire space of modular forms. Since the general case has not yet been
sufficiently investigated, we shall limit ourselves to the simplest nontrivial case, that of
the space

(2.69)
𝔐^n_k = 𝔐_k(Γ^n)

of modular forms of integer weight and unit character for the modular group Γ^n.

THEOREM 2.16. Any subspace V of 𝔐^n_k, where n ∈ N and k ∈ Z, that is invariant
relative to all of the Hecke operators |_kT = |_{k,1}T for T ∈ L(Γ^n) = L^n(1) = L^n has a
basis consisting of eigenfunctions of all of these operators.
PROOF. In the case under consideration, (1.26) obviously implies that for any
forms F, G ∈ 𝔐^n_k, at least one of which is a cusp form, and for any T ∈ L^n, one has

(2.70)
(F|_kT, G) = (F, G|_kT).

Then the argument used to prove Theorem 1.9 can be applied to any invariant subspace
V contained in 𝔑^n_k, so that our theorem is proved in that case. If V is an arbitrary
invariant subspace, we set V_2 = V ∩ 𝔑^n_k and V_1 = {F ∈ V; (F, G) = 0 for all G ∈ V_2}.
Using the properties (3)-(5) of the scalar product in Theorem 5.3 of Chapter 2 and
standard linear algebra, we see that V is the direct sum of the subspaces V_1 and V_2:

(2.71)
V = V_1 ⊕ V_2.

F_i|_k Ψ(ε(T), p^{n-k}) = λ_i(T)F_i,

where ε: L^n → L^n_{0,p} is the imbedding (1.27) of Chapter 3 and λ_i(T) is a scalar; hence,
by (2.46), we have

and so F_i|_kT = λ_i(T)F_i. As noted before, the space V_2 has a basis of eigenfunctions
of all of the Hecke operators. If we combine this basis with the basis F_1, ..., F_d
of V_1, we obtain the desired basis for V = V_1 ⊕ V_2. To complete the induction it
remains to prove the theorem in the case n = 1. We again represent the invariant
subspace V ⊂ 𝔐^1_k in the form (2.71), and note that in this case dim V_1 = 0 or 1, since
Φ(V_1) ⊂ 𝔐^0_k = C. If dim V_1 = 0, then V ⊂ 𝔑^1_k, and our claim has already been
proved; if dim V_1 = 1 and F is a function that spans the invariant subspace V_1, then F
is automatically an eigenfunction of all of the Hecke operators. This F, together with
the basis of eigenfunctions for V_2, form the desired basis for V. □
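The linear-algebra step used in this proof, that commuting operators which are self-adjoint for a scalar product admit a common basis of eigenvectors, can be illustrated in coordinates. The matrices below are arbitrary commuting symmetric examples, not Hecke operators:

```python
# Two commuting symmetric 2x2 matrices share the eigenvectors (1,1), (1,-1).
A = [[2, 1], [1, 2]]   # eigenvalues 3 and 1
B = [[0, 1], [1, 0]]   # eigenvalues 1 and -1

def mat_vec(M, v):
    """Matrix-vector product."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def mat_mul(M, N):
    """Product of 2x2 matrices."""
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

assert mat_mul(A, B) == mat_mul(B, A)   # the operators commute

# Each common eigenvector is an eigenvector of both operators at once.
for v, (la, lb) in [([1, 1], (3, 1)), ([1, -1], (1, -1))]:
    assert mat_vec(A, v) == [la * x for x in v]
    assert mat_vec(B, v) == [lb * x for x in v]
```

The same mechanism lies behind the basis F_1, ..., F_d of V_1 above: commutativity makes each eigenspace of one operator invariant under the others.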
Another application of the Zharkovskaia relations can be found in the next subsection.
PROBLEM 2.17. Show that a_0u^{-1}, a_1, ..., a_{n-1}, u can be taken as parameters of the
homomorphism T → Ψ^n_p(T, u) of the ring L^n_p, where u is a nonzero complex number.

PROBLEM 2.18. Show that all of the eigenvalues of Hecke operators on 𝔐^n_k are
real.
(respectively,
where the right side is understood as a formal sum in the case when P is a formal power
series. If the values of f coincide with the Fourier coefficients of F, then obviously

It also follows from the definitions that the product of polynomials or series corresponds
to the product of the corresponding operators.
Thus, let

(2.75)
B_n(v) = Σ_{i=0}^{n} (−1)^i b_i v^i,   B̂_n(v) = Σ_{i=0}^{n} (−1)^i b̂_i v^i

be the middle factors in (6.99) and (6.100) of Chapter 3, where the coefficients b_i and
b̂_i are linear combinations of elements of the form Π^n_r and Π̂^n_r(k). The next lemma
reduces the study of the action of these elements on 𝔐^n_s and 𝔉^n to the computation of
certain trigonometric sums.
LEMMA 2.19. Under the above assumptions and notation, any f ∈ 𝔉^n satisfies the
relations

(f|_{k,χ}Π^n_r)(R) = p^{n(k-n-1)}χ(p)^n l_p(r, n; R)f(R),
(f|_{k/2,χ}Π̂^n_r(k))(R) = p^{n(k/2-n-1)}χ(p)^n l*_p(r, n; R)f(R),

where

(2.76)
l*_p(r, n; R) = Σ_{A∈L_p(r,n)} χ(A)^{-k} e{p^{-1}RA},

L_p(r, n) is the set of symmetric n × n matrices of rank r over the field F_p = Z/pZ, χ is
the function (4.70) of Chapter 3, and

(2.77)
l_p(r, n; R) = l^0_p(r, n; R).

PROOF. Let F ∈ 𝔐^n_s have Fourier coefficients f(R). Then from (2.14), (2.15),
and (4.106) of Chapter 3 it follows that

F|_{w,χ}Π̂^n_r(k) = p^{n(w-n-1)}χ(p)^n Σ_{R, B_0∈L_p(r,n)} f(R)χ(B_0)^{-k} e{R(Z + p^{-1}B_0)}.
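The index sets L_p(r, n) appearing in these sums can be enumerated directly for small p and n; for p = 3, n = 2 the ranks 0, 1, 2 occur for 1, 8, and 18 of the 27 symmetric matrices. A Python sketch (the helper names are ours, purely illustrative):

```python
from itertools import product

def rank_mod_p(M, p):
    """Rank of an integer matrix over F_p, by Gaussian elimination."""
    M = [row[:] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i][c] % p), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], p - 2, p)          # inverse mod p (p prime)
        M[r] = [x * inv % p for x in M[r]]
        for i in range(rows):
            if i != r and M[i][c] % p:
                M[i] = [(a - M[i][c] * b) % p for a, b in zip(M[i], M[r])]
        r += 1
    return r

def L_p(r, n, p):
    """Symmetric n x n matrices over F_p of rank r (coded here for n = 2)."""
    assert n == 2
    return [[[a, b], [b, c]]
            for a, b, c in product(range(p), repeat=3)
            if rank_mod_p([[a, b], [b, c]], p) == r]

counts = [len(L_p(r, 2, 3)) for r in (0, 1, 2)]
assert counts == [1, 8, 18]                   # 1 + 8 + 18 = 3^3
```

The rank-2 count 18 = p³ − p² reflects the fact that the singular symmetric 2 × 2 matrices over F_p (those with ac ≡ b²) number exactly p².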
This lemma enables us to write the action of the polynomials B_n(v) and B̂_n(v) (in
the sense of (2.73)) in terms of the trigonometric sums (2.76) and (2.77).

PROPOSITION 2.20. Under the above assumptions and notation, any f ∈ 𝔉^n satisfies
the relations

(2.78)
(f|_{k,χ}B_n(v))(R) = B_n(v, R)f(R),
(f|_{k/2,χ}B̂_n(v))(R) = B̂_{n,k}(v, R)f(R),

where R ∈ A_n, and B_n(v, R) and B̂_{n,k}(v, R) are the polynomials defined by setting

(2.79)
B_n(v, R) = Σ_{i=0}^{n} (−1)^i p^{-(i)-i(n-i)} { Σ_{j=0}^{i} a_{ij} l_p(i − j, n; R) } v^i,
B̂_{n,k}(v, R) = Σ_{i=0}^{n} (−1)^i p^{-(i)-i(n-i)} { Σ_{j=0}^{i} â_{ij} l*_p(i − j, n; R) } v^i,

and a_{ij} and â_{ij} are the coefficients (6.70) and (6.86) of Chapter 3.
PROOF. The proposition follows directly from (2.73), (2.75), Theorem 6.24 of
Chapter 3, and Lemma 2.19, if we take into account that Δ = Π^n_0. □

Thus, our task reduces to the computation of the polynomials (2.79), to which
the rest of this subsection is devoted. We begin by computing the trigonometric sums
(2.77). For 0 < b ≤ n we set

(2.80)

(2.81)

(the set of matrices of integral quadratic forms in n variables) we define the set

(2.82)

We shall let ρ_p(b, n) and ρ_p(b, n; Q) denote the number of elements in the sets (2.80)
and (2.82), respectively:
= Σ_{i∈I_{r,n}} Σ_{A,V} exp(πiσ(R[ᵗX]A)/p),

Namely, for every matrix T in Pr_p(r, n) we let i = i(T) ∈ I_{r,n} denote the first (in
lexicographical order) set of indices 1 ≤ i_1 < ⋯ < i_r ≤ n such that the columns of T
indexed by i_1, ..., i_r are linearly independent modulo p. Since obviously

(2.85)
i(gT) = i(T),

it follows by taking g = T_1^{-1}, where T_1 is the matrix made up of the i_1-th, ..., i_r-th
columns of T, that the matrix T′ = T_1^{-1}T has the same index set as T, and the columns
corresponding to these indices are equal to the corresponding columns of the identity
matrix E_r. Since i is a minimal set, it follows that the entries t′_{αj_β} in the columns of
T′ with indices (j_1, ..., j_{n-r}) = ī (the complement of i = (i_1, ..., i_r) in (1, 2, ..., n))
satisfy the condition t′_{αj_β} ≡ 0 (mod p) if i_α > j_β. If we then replace all of the entries
in T′ by their least nonnegative residues modulo p and use (6.51) of Chapter 3, we
see that the matrix T′M(i)^{-1} = T_1^{-1}TM(i)^{-1} has the form (E_r, V), where V ∈ V(i)
(see (6.52) of Chapter 3). Hence, the right side of (2.84) contains representatives of
all of the orbits. If two matrices X = (E_r, V)M(i) and X′ = (E_r, V′)M(i′), where
V ∈ V(i) and V′ ∈ V(i′), belong to the same orbit, i.e., X′ ≡ gX (mod p), then from
(2.85) and the obvious equalities i(X) = i, i(X′) = i′ it follows that i′ = i, and hence
g ≡ E_r (mod p) and X = X′. This proves (2.84). We now note that, because of the
summation over A, the right side of the last expression for l_p(r, n; R) does not change
if A is replaced by A[g] with g ∈ G. Hence, applying (2.84), we obtain
On the other hand, the number of elements in a set of the form Pr_p(b, n; R) can
also be expressed in terms of reduced Gauss sums. Namely, from the obvious relations

If we note that none of the sums σ_b(A, R) is affected by any substitution of the form
A → gAᵗg with g ∈ GL_b(Z/pZ), and if we use Lemma 6.18 of Chapter 3, we can
rewrite the inner sums in the last expression in the form

where A′ = A^{(s)} is an s × s block. Every matrix X ∈ Pr_p(b, n) can be written in the
form X = (X_1; X_2), where X_1 ∈ Pr_p(s, n). The number of such X with fixed X_1 clearly
does not depend on X_1, and so this number is ρ_p(b, n)/ρ_p(s, n). Thus, each sum σ_b
in the last expression can be rewritten in the form

= (ρ_p(b, n)/ρ_p(s, n)) Σ_{X_1∈Pr_p(s,n)} e{p^{-1}R[ᵗX_1]A′}
= (ρ_p(b, n)/ρ_p(s, n)) σ_s(A′, R).

If we substitute the resulting expressions into the formula for ρ_p(b, n; R) and use the
formulas for ρ_p in Lemma 6.16 of Chapter 3, after obvious cancellations we obtain the
formula

ρ_p(b, n; R) = Σ_{s,t≥0, s+t=b} p^{-(s-1)} (φ_bφ_{n-s})/(φ_sφ_tφ_{n-b}) Σ_{A∈L_p(s,s)} σ_s(A, R).

Using these expressions and the relations (6.82) of Chapter 3, we find that the right
side of the equality in the lemma is equal to

= p^{-(r-1)}φ_r^{-1} Σ_{A∈L_p(r,r)} σ_r(A, R).

Since, by Lemma 6.16 of Chapter 3, the factor in front of the last sum is equal to
ρ_p(r, r)^{-1}, it follows that the last expression is equal to the expression (2.86) for
l_p(r, n; R). □
We use the theory of quadratic spaces to compute ρ_p(b, n; Q). The properties of
quadratic spaces that we shall need are given in Appendix 2.

LEMMA 2.22. Suppose that n, b ∈ N, 0 < b ≤ n, p is a prime, Q ∈ E_n, and
q = q(x_1, ..., x_n) is the quadratic form (1.4) of Chapter 1 having matrix Q. Then the
number ρ_p(b, n; Q) of elements in the set (2.82) is equal to the number i(V_p, f_p; b) of
isotropic sets of b vectors in any quadratic space (V_p, f_p) of type {q} over F_p.
PROOF. Let e_1, ..., e_n be a basis of V_p in which the quadratic form of the space
(V_p, f_p) is equal to q modulo p. It follows from the definitions that a set of vectors
m_1, ..., m_b ∈ V_p, where m_i = Σ_{j=1}^{n} m_{ij}e_j, is isotropic if and only if the matrix
M = (m_{ij}) is contained in the set Pr_p(b, n; Q). □
We say that two matrices Q and Q_1 in E_n are equivalent modulo some d ∈ N and
write Q ∼ Q_1 (mod d) if there exists M ∈ M_n(Z) with (det M, d) = 1 such that

Q_1 ≡ Q[M] (mod d),

where the congruence is understood in the sense of (2.83). If Q is equivalent modulo d
to a matrix of the form (Q_1  0; 0  0), where Q_1 ∈ E_{n-1}, then we say that Q is degenerate
modulo d. Otherwise, we say that Q is nondegenerate modulo d. If d = p is a prime,
then the relation Q ∼ Q_1 (mod p) is obviously equivalent to the relation q ∼ q_1 over F_p
between the quadratic forms corresponding to the two matrices (see (1.4) of Chapter
1), and Q is nondegenerate modulo p if and only if the quadratic space over F_p of type
{q} is nondegenerate (see Appendix 2).
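For an odd prime p the criterion is concrete: Q is nondegenerate modulo p exactly when p does not divide det Q (for p = 2 the even-diagonal convention requires more care, so we exclude it here). A minimal sketch under that assumption, with helper names of our own:

```python
def det_int(M):
    """Integer determinant by cofactor expansion along the first row
    (adequate for the small matrices used here)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det_int([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def nondegenerate_mod_p(Q, p):
    """For odd p: Q is nondegenerate modulo p iff p does not divide det Q."""
    return det_int(Q) % p != 0

# det [[2,1],[1,2]] = 3: degenerate modulo 3, nondegenerate modulo 5.
assert not nondegenerate_mod_p([[2, 1], [1, 2]], 3)
assert nondegenerate_mod_p([[2, 1], [1, 2]], 5)
```

This matches the definition above: over F_p (p odd) a form with vanishing determinant can be diagonalized with a zero last variable, i.e. brought to the shape (Q_1  0; 0  0).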
We can now use the results of Appendix 2.4 to finish the computation of the
polynomials B_n(v, R).
THEOREM 2.23. Let n ∈ N, p be a prime, R ∈ E_n, and B_n(v, R) be the polynomial
(2.79). Then:
(1) B_n(v, R) = B_n(v, R_1) if R ∼ R_1 (mod p);
(2) if the matrix R is degenerate modulo p, i.e.,

where B_n(v) is the polynomial (2.75) over L^n_{0,p} and Ψ is the map from L^n_{0,p} to L^{n-1}_{0,p}[u^{±1}]
defined by (2.42)-(2.43). Using the definition of the polynomials R^n(v) (see
(6.28) of Chapter 3) and the commutativity of the diagram (2.45), we obtain

Ω^{n-1}(Ψ(R^n(v), u)) = Ξ(Ω^nR^n)(v) = (1 − u^{-1}v)(1 − uv)(Ω^{n-1}R^{n-1})(v).
By Proposition 2.11(2), all of the coefficients of the polynomial Ψ(R^n(v), u) lie in the
ring L^{n-1}_p, which also contains all of the coefficients of R^{n-1}(v). Since, by Theorem

Similarly,

Ω^{n-1}(Ψ(X^n_-(v), u)) = (1 − uv)(Ω^{n-1}X^{n-1}_-)(v).

According to (6.34) of Chapter 3, the coefficients of the polynomial X^n_- lie in the
subring C^n_- of L^n_{0,p}. From the duality relations (6.2) of Chapter 3 and (6.33) of
Chapter 3 it then follows that the coefficients of X^n_+ lie in C^n_+. If we take Proposition
2.11(2) into account and use the fact that Ω is a monomorphism on C^{n-1}_{±,p}, we arrive
at the relations

Ψ(X^n_+(v), u) = (1 − u^{-1}v)X^{n-1}_+(v),   Ψ(X^n_-(v), u) = (1 − uv)X^{n-1}_-(v).

Applying Ψ to (6.99) of Chapter 3 and using the above formulas, we obtain

(1 − u^{-1}v)(1 − uv)R^{n-1}(v) = Ψ(R^n(v), u)
  = (1 − u^{-1}v)X^{n-1}_+(v)·Ψ(B_n(v), u)·(1 − uv)X^{n-1}_-(v),

so that

The relation (2.87) follows from these factorizations, since the polynomials X^{n-1}_± have
constant term 1, and so are invertible in the ring of formal power series over L^{n-1}_{0,p}.
We now proceed directly to part (2). By part (1), we may assume that

R = (R_1  0; 0  0).

We can take R_1 to be an arbitrary matrix in its residue class modulo p in E_{n-1}. Hence,
without loss of generality we may assume that R_1 > 0 (for example, we can arrange
this by choosing sufficiently large representatives modulo 2p of the diagonal entries
in R_1; see Theorem 1.5 of Appendix 1). We then take for Q a matrix in A^+_n whose
upper-left (n − 1) × (n − 1) block is R_1, and we
consider the theta-series θ^n(Z, Q) of degree n for the matrix Q (see (1.9) of Chapter 1).
Since the theta-series is obviously invariant relative to the transformations Z → M⟨Z⟩
for M ∈ Γ_0, it follows from Proposition 1.3 of Chapter 1 that θ^n(Z, Q) ∈ 𝔐^n_s, where s
is the unit character of the group {±1}. We take k = 0 and χ = 1, and in two different
ways we compute the R_1-Fourier coefficient of (F|_{0,1}B_n(v))|Φ, where F = θ^n(Z, Q)
and Φ is the Siegel operator (see (2.74)). On the one hand, from Proposition 2.20 and
(3.50) of Chapter 2 we see that this coefficient is (see §1.2 of Chapter 1)

Since obviously r(Q, R_1) ≥ 1, we obtain part (2) by equating the last two expressions.
Now suppose that the matrix R is nondegenerate modulo p. Let r(x_1, ..., x_n)
denote the quadratic form with matrix R, and let (V_p, f_p) denote the quadratic space
of type {r} over the field F_p = Z/pZ. As noted before, the nondegeneracy of R
modulo p implies nondegeneracy of the quadratic space (V_p, f_p). If we apply Lemma
2.22, we can rewrite the expressions for l_p(r, n; R) in Lemma 2.21 in the form

where i(b) = i(V_p, f_p; b) is the number of isotropic sets of b vectors in the space
(V_p, f_p). If we substitute these expressions into (2.79) and use the formulas for the
a_{ij} in (6.70) of Chapter 3, we obtain

where

For fixed c and b we sum the terms in this expression over all nonnegative integers d
and a such that d + a = i − c − b, and we use (6.82) of Chapter 3; this gives us
We let λ denote the dimension of a maximal isotropic subspace of (V_p, f_p). By
Corollary 2.15 of Appendix 2, we have λ = m if n = 2m and χ_R(p) = 1, λ = m − 1 if
n = 2m and χ_R(p) = −1, and λ = m if n = 2m + 1. This implies that in all cases the
desired expression for the polynomial B_n(v, R) can be written in the form

(2.88)
Using the above formulas and taking into account that i(b) = 0 if b > λ, we obtain

where the second product is taken to be 1 in the case b = λ. If we substitute this
expression in the last formula for B_n(v, R), we find that B_n(v, R) equals the product of
the first factor in (2.88) with

G(v) = Σ_{b=0}^{λ} (−1)^b p^{(b)-bn} i(b) φ_b^{-1} v^b Π_{i=1}^{λ-b} (1 + vp^{i+b-n}),

from which it follows that to complete the proof of part (3) it suffices to verify that
the polynomial G(v) is equal to the second factor in (2.88). Since G(0) = 1 and the
degree of G(v) is at most λ, we see that it is sufficient to show that G(p^μ) = 0 for
μ = 1, 2, ..., λ. For integers j ≥ −1 we define the numbers κ_j = κ_j(p) by setting
κ_j = φ_j(p²)φ_j(p)^{-1} if j ≥ 0, and κ_{-1} = 1/2. It is not hard to see that

(2.89)
κ_g/κ_h = 1, if g = h ≥ 0,
κ_g/κ_h = Π_{h<i≤g} (p^i + 1), if g > h ≥ 0.
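Since φ_j(x) = (x − 1)(x² − 1)⋯(x^j − 1), the quotient defining κ_j telescopes, κ_j = Π_{i=1}^{j}(p^i + 1), and (2.89) follows term by term. A numerical check (the script is ours, purely illustrative):

```python
from math import prod

def phi(j, x):
    """phi_j(x) = (x - 1)(x^2 - 1) ... (x^j - 1); empty product for j = 0."""
    return prod(x ** i - 1 for i in range(1, j + 1))

def kappa(j, p):
    """kappa_j = phi_j(p^2) / phi_j(p) = prod of (p^i + 1) for i = 1..j."""
    return prod(p ** i + 1 for i in range(1, j + 1))

p = 3
for j in range(5):
    # each factor (p^{2i} - 1)/(p^i - 1) equals p^i + 1
    assert phi(j, p * p) == kappa(j, p) * phi(j, p)
for g in range(5):
    for h in range(g):
        # the ratio formula (2.89) for g > h >= 0
        assert kappa(g, p) == kappa(h, p) * prod(p ** i + 1
                                                 for i in range(h + 1, g + 1))
```

The convention κ_{-1} = 1/2 is consistent with the same product formula, since κ_0/κ_{-1} = 2 = p^0 + 1.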
In this notation, if we substitute v = p^μ, where 1 ≤ μ ≤ λ, into one of the products in
the expression for G(v), then we obtain

Π_{i=1}^{λ-b} (1 + p^{i+b+μ-n}) = Π_{i=1}^{λ-b} p^{i+b+μ-n}(p^{n-(i+b+μ)} + 1)
  = p^{(λ-b)+(λ-b)(b+μ-n)} Π_{i=n-λ-μ}^{n-b-μ-1} (p^i + 1)
  = p^{(λ)-(b)+λμ-λn-bμ+bn} κ_{n-b-μ-1} κ^{-1}_{n-λ-μ-1}

(recall that this product is assumed to be 1 in the case b = λ). Hence,
We now use the formulas in Appendix 2.4 for the numbers i(b), but first we rewrite
these formulas in a more convenient form. From (2.24)-(2.25) of Appendix 2 and the
definitions we obtain

i(b) = p^{(b-1)}(p^λ − 1)(p^{λ-b} − 1)φ_{λ-b}(p²)^{-1},
if n = 2m and χ_R(p) = 1;

These formulas imply that in all cases i(b) can be written in the form

i(b) = c κ^{-1}_{λ-b-1} p^{(b-1)} φ_λ φ^{-1}_{λ-b},

where c does not depend on b. Substituting these expressions into the formula for
G(p^μ), we find that

which, after we expand the parentheses and combine similar terms, reduces to the
form indicated. If we substitute these expressions in G(p^μ) and change the order of
summation, we find that

where the inner sum can be transformed to a product using the identity (2.43) of
Chapter 3. Since μ ≥ 1, it follows that 1 ≤ i + 1 ≤ λ for each i = 0, ..., λ − μ. Hence,
each product in the last expression is zero, and so also G(p^μ) = 0. □
(2.90)
and let .A.(R2) = XRi (p) p. Then for n >- 2 the trigonometric sums (2. 76) satisfy the
relation
PROOF. From (6.91) of Chapter 3 and (2.76) it follows that the sum 1;(r,n;R)
depends only on the equivalence class modulo p of the matrix R. Hence, we may
suppose that R = ( ~ 1 ~2 ) E En and R 2 = ( ~ ~). We rewrite the sum (2. 76)
in the form
(2.92) 1;(r,n;R) =L L u(d;Z,R),
d~O ZELp(d,n-2)
where the summation in $u(d;Z,R)$ is taken over the matrices $A\in L_p(r,n)$ with $A^{(n-2)}=Z$, and we let $A^{(n-2)}$ denote the $(n-2)\times(n-2)$ matrix in the upper-left corner of A.
Let $Z=\begin{pmatrix}Z_1&0\\0&0\end{pmatrix}[U_1]$, where $U_1\in GL_{n-2}(\mathbf Z/p\mathbf Z)$ and $Z_1=\operatorname{diag}(z_1,\dots,z_d)$ is a matrix that is nondegenerate modulo p. If we then replace A by $A[U]$ in the last sum, with $U=\begin{pmatrix}U_1&0\\0&E_2\end{pmatrix}$, and use (6.91) of Chapter 3, we obtain
$$A[U]=\begin{pmatrix}Z_1&0&X_1\\0&0&X_2\\{}^t\!X_1&{}^t\!X_2&Y\end{pmatrix},$$
$$u(d;Z,R)=\sum_{A\in L_p(r,n),\ A^{(n-2)}=\left(\begin{smallmatrix}Z_1&0\\0&0\end{smallmatrix}\right)}\cdots=\chi(Z_1)^{-k}e\{p^{-1}R_1Z\}\sum_{X_1,X_2,Y;\ r_p(X)=r-d}\cdots,$$
where $X_1$, $X_2$, and $Y={}^tY$ run through the set of all matrices over $\mathbf Z/p\mathbf Z$ of size $d\times2$, $(n-d-2)\times2$, and $2\times2$, respectively. We first find the value of $u$. Using the form of the matrices $R_2$ and $Z_1$ and Lemma 4.14 of Chapter 1, we have
$$U=\begin{pmatrix}U_1&0\\0&E_2\end{pmatrix},\qquad V=\begin{pmatrix}E_{n-d-2}&0\\0&V_1\end{pmatrix},$$
where $U_1\in GL_{n-d-2}(\mathbf Z/p\mathbf Z)$, $V_1\in GL_2(\mathbf Z/p\mathbf Z)$, and $X_2$ and $Y$ are the same as in the matrix A. If $r_p(X_2)=2$, then there exists a matrix $U_1$ such that ${}^tU_1X_2=\begin{pmatrix}0\\\widetilde X_2\end{pmatrix}$ and $\widetilde X_2\in GL_2(\mathbf Z/p\mathbf Z)$. Consequently,
$$X[{}^tU]\sim\begin{pmatrix}0&\widetilde X_2\\{}^t\widetilde X_2&Y\end{pmatrix},$$
where $\sim$ denotes equivalence of matrices over $\mathbf Z/p\mathbf Z$ in the sense of §1 of Appendix 1.
This implies that $r_p(X)=4$ and $\chi(X)=1$. If, on the other hand, $r_p(X_2)=1$, then there exist matrices $U_1$ and $V_1$ such that $U_1X_2V_1=\begin{pmatrix}0&0\\0&1\end{pmatrix}$, and hence
$$X[{}^t(UV)]\sim\begin{pmatrix}0&0&0&0\\0&0&0&1\\0&0&y_1&y_2\\0&1&y_2&y_3\end{pmatrix},\qquad\text{where }Y_1=Y[{}^tV_1]=\begin{pmatrix}y_1&y_2\\y_2&y_3\end{pmatrix}.$$
$$\begin{pmatrix}0&0\\0&a\end{pmatrix},$$
where $a\in(\mathbf Z/p\mathbf Z)^*$, $v\in\mathbf Z/p\mathbf Z$. Since the summation in $u(1)$ is taken over such Y, we have
By assumption, k is odd, and $\lambda_1$ and $\lambda_2$ are even numbers prime to p. Hence, if we apply the formulas for Gauss sums (see (4.28) and Lemma 4.14 of Chapter 1) to the second and third sums, we find that $u(1)=0$. We now substitute these values for $u(1)$ and $u(p)$ into the expression for $u(d;Z,R)$ and use (2.92); we find that $l_p(r,n;R)$ is equal to the sum
$$\lambda(R_2)^{r-2}\,l_p(2,2;R_2)\,l_p(r-2,n-2;R_1)+\lambda(R_2)^{r}\,l_p(r,n-2;R_1).$$
Thus, to prove the lemma it remains to evaluate $l_p(2,2;R_2)$. By the definition (2.76), this sum can be divided into two parts as follows:
$$\Bigl(\ \sum_{A=\left(\begin{smallmatrix}x&y\\y&0\end{smallmatrix}\right)\in L_p(2,2)}+\ \sum_{z\in(\mathbf Z/p\mathbf Z)^*}\ \sum_{A=\left(\begin{smallmatrix}x&y\\y&z\end{smallmatrix}\right)\in L_p(2,2)}\Bigr)\chi(A)^{-k}e\{p^{-1}R_2A\}.$$
Hence, if we make the change of variables $y\to y'+x(2z)^{-1}$ and use (4.28) and Lemma 4.14 of Chapter 1, we conclude that $l_p(2,2;R_2)=-p$. □
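The Gauss-sum evaluations invoked in this proof are the classical ones. As a purely numerical illustration (this sketch is not the book's formula (4.28) itself), the quadratic Gauss sum $g(p)=\sum_{x\bmod p}e^{2\pi ix^2/p}$ equals $\sqrt p$ for $p\equiv1\pmod 4$ and $i\sqrt p$ for $p\equiv3\pmod 4$:

```python
import cmath

def gauss_sum(p):
    # quadratic Gauss sum over Z/pZ
    return sum(cmath.exp(2j * cmath.pi * x * x / p) for x in range(p))

for p in [5, 13, 17]:                 # p = 1 (mod 4): g(p) = sqrt(p)
    assert abs(gauss_sum(p) - p ** 0.5) < 1e-9
for p in [3, 7, 11]:                  # p = 3 (mod 4): g(p) = i * sqrt(p)
    assert abs(gauss_sum(p) - 1j * p ** 0.5) < 1e-9
```

In particular $|g(p)|^2=p$ in both cases, which is the cancellation that forces sums such as $u(1)$ above to vanish or collapse to a single power of p.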
The recursive relations in this lemma allow us to obtain explicit formulas for the sums $l_p(r,n;R)$.
LEMMA 2.25. Let $l_p(r,n;R)$ be the trigonometric sum (2.76), where k is an odd number and R is a matrix in $E_n$ that is nondegenerate modulo the prime $p\ne2$, and let $\varphi^+_m=\varphi^+_m(p)$ be the function in (6.83) of Chapter 3. Then:
(2.93)
$$\varphi_{m,r}=\frac{\varphi^+_{2m}(-1)^r p^{r^2}}{\varphi^+_{2m-2r}\varphi^+_{2r}},\qquad \chi_{R,k}(p)=\left(\frac{(-1)^{(k-1)/2}(-1)^m\det R}{p}\right).$$
and show that the sum $F^n(z;R)=F^n_0(z;R)+F^n_1(z;R)$ is equal to the product
$$\bigl(1+c_n\chi_{R,k}(p)p^{m+1/2}z\bigr)\prod_{j=0}^{m-1}\bigl(1-p^{2j+1}z^2\bigr),$$
where $c_n=0$ or 1 depending on whether $n=2m$ or $n=2m+1$, respectively.
According to formula (2.90), from the very beginning we may assume that $R=\operatorname{diag}(R_1,R_2)$, just as in Lemma 2.24. If we apply the recursive relation in Lemma 2.24 to the coefficients of the polynomial $F^n_p(z;R)$, we see that for $n>2$
$$F^n_p(z;R)=(1-pz^2)\,F^{n-2}_p\bigl(\lambda(R_2)z;R_1\bigr).$$
On the other hand, if $n\le2$, then the proof of Lemma 2.24 and the formulas for the Gauss sums immediately imply that
THEOREM 2.26. Let $\bar B^n_{p,k}(v,R)$ be the polynomial in (2.79), where $p\ne2$, k is odd, and $R\in E_n$. Then:
(1) $\bar B^n_{p,k}(v,R)=\bar B^n_{p,k}(v,R_1)$, if $R\sim R_1\pmod p$;
(2) if R is a degenerate matrix modulo p, i.e., …;
(3) if R is nondegenerate modulo p, then
$$\bar B^n_{p,k}(v,R)=\Bigl(1-c_n\chi_{R,k}(p)\frac{v}{p^{m+1/2}}\Bigr)\prod_{j=0}^{m-1}\Bigl(1-\frac{v^2}{p^{2j+1}}\Bigr),$$
where $\chi_{R,k}$ is the character in Lemma 2.25 and $c_n=0$ or 1 depending on whether $n=2m$ or $n=2m+1$, respectively.
PROOF. The first two parts are proved in exactly the same way as in Theorem 2.23. We now prove part (3). We rewrite the polynomial $\bar B^n_{p,k}(v,R)$ in the form
(2.95) $$\bar B^n_{p,k}(v,R)=\sum_{i=0}^{n}(-1)^iB^n_i\,v^i,$$
where $B^n_i$ denotes the sum
$$B^n_i=p^{(n-i)-(n)}\sum_{j=0}^{i}a_{ij}\,l_p(i-j,n;R),$$
and we first suppose that $n=2m$. Then from (6.86) of Chapter 3 and Lemma 2.25 we have $B^{2m}_{2i+1}=0$ for $0\le i\le m-1$, and
$$B^{2m}_{2i}=p^{(n-2i)-(n)}\sum_{j=0}^{i}\frac{(-p)^j\varphi^+_{n-2i+2j}}{\varphi^+_{2j}\varphi^+_{n-2i}}\cdot\frac{\varphi^+_n(-1)^{i-j}p^{(i-j)^2}}{\varphi^+_{n-2i+2j}\varphi^+_{2i-2j}}
\overset{(s=i-j)}{=}p^{(n-2i)-(n)}(-p)^i\frac{\varphi^+_n}{\varphi^+_{n-2i}\varphi^+_{2i}}\sum_{s=0}^{i}\frac{\varphi^+_{2i}\,p^{s^2}}{\varphi^+_{2i-2s}\varphi^+_{2s}}\bigl(p^{-1/2}\bigr)^{2s},\eqno(2.96)$$
where $\varphi^+_m=\varphi^+_m(p)$ are the functions (6.83) of Chapter 3. From the definition of these functions it follows that for $n=2m$ and $0\le s\le i$
(2.97) $$\prod_{m-i\le j\le m-s-1}\cdots$$
where $\Pi_s(i,av)$ is the coefficient of $v^{2s}$ in the polynomial $\Pi(i,av)$. If we substitute
these expressions in the last equality for $B^{2m}_{2i}$ and introduce the new notation
$$s(i,m)=\sum_{s=0}^{i}(-1)^{i-s}\,\Pi\bigl(i-s,\sqrt{-1}\cdot p^{m-i}\bigr)\,\Pi_s\bigl(i,p^{-1/2}v\bigr),$$
then we can write
$$\bar B^{2m}_{2i}=(-1)^ip^{-2i(n-i)}\frac{\varphi^+_n}{\varphi^+_{n-2i}\varphi^+_{2i}}\,s(i,m).$$
We arrange the pairs $(i,m)$ in lexicographic order, and prove by induction on $(i,m)$ that

for $0\le s\le i-1$. If we now separate the extreme terms in $s(i,m)$ and use (2.99), we obtain
$$s(i,m)=(-1)^i\Pi\bigl(i,\sqrt{-1}\cdot p^{m-i}\bigr)+\Pi_i\bigl(i,p^{-1/2}v\bigr)
+\sum_{0<s<i}(-1)^{i-s}\Pi\bigl(i-s,\sqrt{-1}\cdot p^{m-i}\bigr)\Pi_s\bigl(i-1,p^{-1/2}v\bigr)
+p^{2i-2}\sum_{0<s<i}(-1)^{i-s}\Pi\bigl(i-s,\sqrt{-1}\cdot p^{m-i}\bigr)\Pi_{s-1}\bigl(i-1,p^{-1/2}v\bigr).$$
It follows that, if we again apply (2.100) to the first sum on the right in the equality for $s(i,m)$, then $s(i,m)$ satisfies the following recursive relation:
$$s(i,m)=\bigl(p^{2m-2i+1}-1\bigr)s(i-1,m)+p^{2i-2}s(i-1,m-1).$$
We now use (2.98) for $(i-1,m)$ and $(i-1,m-1)$, and make the corresponding transformations on the right side of the resulting relation; we obtain the formula (2.98) for $(i,m)$.
We now finish the computation of $\bar B^n_{p,k}(v,R)$ for $n=2m$. Since $B^{2m}_{2i+1}=0$ and, by (2.98),
$$(2.101)\qquad \bar B^{2m}_{2i}=(-1)^i\frac{\varphi^+_{2m}\,p^{i^2}}{\varphi^+_{2m-2i}\varphi^+_{2i}}\,p^{-2im}\qquad(0\le i\le m),$$
it follows that, substituting these expressions in place of the coefficients in (2.95) and using (2.94), we obtain
$$\bar B^n_{p,k}(v,R)=\sum_{i=0}^{m}(-1)^i\frac{\varphi^+_{2m}\,p^{i^2}}{\varphi^+_{2m-2i}\varphi^+_{2i}}\bigl(p^{-m}v\bigr)^{2i}
=\prod_{j=0}^{m-1}\bigl(1-p^{2j+1}p^{-2m}v^2\bigr)=\prod_{j=0}^{m-1}\Bigl(1-\frac{v^2}{p^{2j+1}}\Bigr).$$
Thus, to prove the theorem it remains to consider the polynomial $\bar B^n_{p,k}(v,R)$ for $n=2m+1$. By Lemma 2.25 and (6.86) of Chapter 3, we can write the coefficients of this polynomial in the form
$$B^{2m+1}_{2i+1}=p^{(n-2i-1)-(n)}\sum_{j=0}^{i}\frac{(-p)^j\varphi^+_{2m-2i+2j}}{\varphi^+_{2j}\varphi^+_{2m-2i}}\cdot\frac{\varphi^+_{2m}(-1)^{i-j}p^{(i-j)^2}}{\varphi^+_{2m-2i+2j}\varphi^+_{2i-2j}}\;\chi_{R,k}(p)\,p^{m+1/2},$$
modular forms of integer and half-integer weight. On the other hand, if we consider the multiplicative properties connected with the even Hecke rings, then here the theories for integer and half-integer weight are parallel, and moreover they can be developed for modular forms of arbitrary degree.
1. Modular forms in one variable. We consider a modular form
$$F=\sum_{a=0}^{\infty}f(2a)e^{2\pi iaz}\in\mathfrak M_k(q,\chi).$$
$$\bigl(f|T(m)\bigr)(2a)=\sum_{d\mid m,a}d^{k-1}\chi(d)f\bigl(2ma/d^2\bigr).$$
If we equate the Fourier coefficients with the same indices on both sides of (3.1), we obtain
$$(3.2)\qquad\sum_{d\mid m,a}d^{k-1}\chi(d)f\bigl(2ma/d^2\bigr)=\lambda(m)f(2a)\qquad(m\in\mathbf N_{(q)},\ a=0,1,\dots).$$
After replacing m and a by $m/\delta$ and $a/\delta$, where $\delta\in\mathbf N$ is a common divisor of m and a, we obtain
Since for $b\in\mathbf N$
$$(3.3)\qquad\sum_{\delta\mid b}\mu(\delta)=\begin{cases}1,&\text{if }b=1,\\0,&\text{if }b>1,\end{cases}$$
from which (3.6) and (3.7) easily follow by an elementary number-theoretic argument. We leave the details to the reader as an exercise. Now suppose that there exist nonzero a for which $f(2a)\ne0$ (this is always the case if $k>0$), and let $x=x(F)$ be the smallest such a. Let d denote the largest divisor of x that lies in $\mathbf N_{(q)}$. Since d and $x/d$ are relatively prime, the relation (3.2) gives $f(2x)=\lambda(d)f(2x/d)$, and hence $f(2x/d)\ne0$. Thus, $d=1$, i.e.,
$$(3.11)\qquad x(F)\mid q^{\infty}\quad(\text{in particular, }x(F)=1\text{ if }q=1).$$
§3. MULTIPLICATIVE PROPERTIES OF THE FOURIER COEFFICIENTS 273
since, by (3.11), the common divisors of m and $m_1x$ for $m,m_1\in\mathbf N_{(q)}$ must divide $m_1$; this proves (3.6). Similarly, from (3.12), (3.4), and (3.11) we obtain

The identities (3.7) show, in particular, that the eigenvalue $\lambda(m)$ for any $m\in\mathbf N_{(q)}$ can be explicitly written as a polynomial in the eigenvalues $\lambda(p)$, where p runs through the prime divisors of m.
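For a concrete instance of this polynomial dependence, take the weight-12, level-1 cusp form Δ, whose eigenvalues are the Ramanujan numbers τ(m) (an illustration assuming only the classical product Δ = q∏(1−qⁿ)²⁴). The sketch below computes τ(m) and checks multiplicativity together with the prime-power recursion τ(p^{ν+1}) = τ(p)τ(p^ν) − p¹¹τ(p^{ν−1}):

```python
def tau_values(N):
    # coefficients of q * prod_{n>=1} (1 - q^n)^24 up to q^N,
    # i.e. the Ramanujan numbers tau(1), ..., tau(N)
    poly = [0] * N
    poly[0] = 1
    for n in range(1, N):
        for _ in range(24):                      # multiply by (1 - q^n) 24 times
            for i in range(N - 1, n - 1, -1):    # in place, descending
                poly[i] -= poly[i - n]
    return [0] + poly                            # tau[m] = coefficient of q^m

tau = tau_values(30)
assert tau[2] == -24 and tau[3] == 252
assert tau[6] == tau[2] * tau[3]             # lambda(mn) = lambda(m)lambda(n), gcd(m, n) = 1
assert tau[4] == tau[2] ** 2 - 2 ** 11       # recursion at p = 2, k = 12
assert tau[8] == tau[2] * tau[4] - 2 ** 11 * tau[2]
```

In particular τ(4), τ(8), … are polynomials in τ(2) alone, exactly as the identities (3.7) assert in general.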
The identities (3.4) and the identities in Theorem 3.1 have the following elegant
reformulation in the language of Dirichlet series.
THEOREM 3.2. Let
$$F=\sum_{a=0}^{\infty}f(2a)e^{2\pi iaz}\in\mathfrak M_k(q,\chi)$$
be a nonzero modular form of weight $k\in\mathbf N$ and character χ for the group $\Gamma^1_0(q)$. Suppose that F is an eigenfunction of all of the Hecke operators on $\mathfrak M_k(q,\chi)$ of the form $|_{k,\chi}T(m)$ for $m\in\mathbf N_{(q)}$, and let $\lambda(m)=\lambda(m,F)$ be the corresponding eigenvalues. Then for any $a\in\mathbf N$ the Dirichlet series
converges absolutely and uniformly in any right half-plane of the complex variable s of the form $\operatorname{Re}s\ge k+1+\varepsilon$ (of the form $\operatorname{Re}s\ge k/2+1+\varepsilon$ if F is a cusp form) with
PROOF. The absolute and uniform convergence of the series (3.13) in the indicated regions follows in the usual way from the estimates (3.35) of Chapter 2 (from (3.70) of Chapter 2 if F is a cusp form). The convergence of the series (3.15) and the product (3.16) follows from the estimates in Theorem 3.1.
Using (3.4), we have
$$D(s,a;F)=\sum_{m\in\mathbf N_{(q)}}\frac1{m^s}\sum_{d\mid m,a}d^{k-1}\chi(d)\mu(d)\lambda(m/d)f(2a/d)
=\Bigl(\sum_{d\in\mathbf N_{(q)},\ d\mid a}\chi(d)\mu(d)f(2a/d)d^{k-1-s}\Bigr)\sum_{m\in\mathbf N_{(q)}}\frac{\lambda(m)}{m^s},\eqno(3.18)$$
where
$$(3.19)\qquad\zeta_p(s;F)=\sum_{\delta=0}^{\infty}\lambda(p^{\delta})p^{-\delta s}$$
are the so-called local zeta-functions of the modular form F. To sum the power series
(3.19) we make use of a special case of (3.6):
$$\lambda(p)\lambda(p^{\nu})=\lambda(p^{\nu+1})+p^{k-1}\chi(p)\lambda(p^{\nu-1})\qquad(\nu\ge1).$$
REMARK. The Euler product expansion (3.16) also follows from the properties of the elements T(m) that we studied in §3.3. That is, the relations (3.17), and hence (3.18), are a direct consequence of (3.20) of Chapter 3. The identity (3.20) can be obtained if the elements $T(p^{\delta})$ in the first identity of Proposition 3.35 of Chapter 3 are replaced by the corresponding eigenvalues and we use the fact that, by (2.34), $F|_{k,\chi}\Delta_1(p)=p^{k-2}\chi(p)F$.
The expansions in Theorem 3.2 do not give any new information about the multiplicative properties of the Fourier coefficients of eigenfunctions of Hecke operators that was not contained, for example, in the identities (3.4). But they make it possible to express the zeta-function $\zeta(s,F)$ in terms of the original modular form, and in many cases this enables one to investigate its analytic properties (see Problem 3.9).
In the case of modular forms of degree $n>1$ it seems that there are no universal identities that express individual eigenvalues in terms of the Fourier coefficients of an eigenfunction, or vice versa. (Note that when $n>1$ the Fourier coefficients and the eigenvalues are even indexed by sets that are not related to one another in any clear way: the set of integral equivalence classes of matrices in $A_n$ in the first case, and a set of double cosets, i.e., diagonal matrices of a special type, in the second case.) As for the Dirichlet series, in the multivariable case one is able to express certain Dirichlet series constructed from the Fourier coefficients of eigenfunctions in terms of Euler products (zeta-functions) constructed from the eigenvalues, and vice versa. On the one hand, the resulting identities reveal the multiplicative nature of the Fourier coefficients; on the other hand, as in the one-variable case, they enable us to investigate the analytic properties of the zeta-functions that appear. Unfortunately, the identity in Theorem 3.2 has thus far been generalized only to modular forms of degree $n=2$. This generalization will be explained in the second part of this section.
PROBLEM 3.3. Prove that a modular form $F\in\mathfrak M^1_k(q,\chi)$ is an eigenfunction for all of the Hecke operators $|_{k,\chi}T=|T$ for $T\in L^1(q)$ if it is an eigenfunction for all T of the form $T(p)$ with $p\in\mathbf P_{(q)}$.
PROBLEM 3.4. Let $F_1,\dots,F_h$ be a basis of a subspace of $\mathfrak M^1_k(q,\chi)$ that is invariant relative to all of the Hecke operators $|T$ ($T\in L^1(q)$), and let
$$F_i|T=\sum_{j=1}^{h}\lambda_{ij}(T)F_j\qquad\text{for }i=1,\dots,h.$$
Let $\mathbf f(2a)$ for $a=0,1,\dots$ denote the column made up of the Fourier coefficients $f_i(2a)$ of the forms $F_i$, and for $m\in\mathbf N_{(q)}$ let $\Lambda(m)$ denote the matrix with entries $\lambda_{ij}(T(m))$. Prove the following relations:
$$\sum_{d\mid m,a}d^{k-1}\chi(d)\,\mathbf f(2ma/d^2)=\Lambda(m)\,\mathbf f(2a)\qquad(m\in\mathbf N_{(q)});$$
$$\mathbf f(2ma)=\sum_{d\mid m,a}d^{k-1}\chi(d)\mu(d)\Lambda(m/d)\,\mathbf f(2a/d)\qquad(m\in\mathbf N_{(q)});$$
$$\sum_{m\in\mathbf N_{(q)}}m^{-s}\,\mathbf f(2am)=\Bigl(\sum_{m\in\mathbf N_{(q)}}m^{-s}\Lambda(m)\Bigr)\times\Bigl(\sum_{d\in\mathbf N_{(q)},\ d\mid a}\chi(d)\mu(d)d^{k-1-s}\,\mathbf f(2a/d)\Bigr)\qquad(a\in\mathbf N),$$
(if $F_1,\dots,F_h$ are cusp forms), where c does not depend on m. Using these estimates, investigate the convergence of the matrix series and products in the previous problem.
[Hint: Let $a_1,\dots,a_h\in\mathbf N$ be chosen so that the matrix $A=(f_i(2a_j))$ is invertible. Then $\Lambda(m)=B(m)A^{-1}$, where $B(m)$ is the matrix whose columns are $\sum_{d\mid m,a_j}d^{k-1}\chi(d)\,\mathbf f(2ma_j/d^2)$.]
PROBLEM 3.6. Let $q(X)=q(x_1,\dots,x_m)$ be a positive definite integral quadratic form whose matrix has determinant 1. Show that there exists a finite set of functions $\lambda_1,\dots,\lambda_h\colon\mathbf N\to\mathbf C$ satisfying the relations … and having the property that the number $r(q,a)$ of integer solutions of the equation $q(X)=a$ can be represented in the form
$$r(q,a)=\sum_{i=1}^{h}\alpha_i\lambda_i(a)\qquad(a\in\mathbf N)$$
with constant coefficients $\alpha_i$. Then deduce that for any $a\in\mathbf N$ and any prime p the power series
$$\sum_{\nu=0}^{\infty}r(q,ap^{\nu})v^{\nu}$$
is a rational function in v with denominator
$$\prod_{i=1}^{h}\bigl(1-\lambda_i(p)v+p^{m/2-1}v^2\bigr).$$
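As a sanity check on the claimed denominator, one can take m = 8 and the quadratic form of the E₈ lattice (a choice made here purely for illustration), for which classically r(q,a) = 240σ₃(a); then h = 1 and λ₁(p) = σ₃(p) = 1 + p³, and the power series is annihilated, up to its constant term, by the single factor 1 − σ₃(p)v + p³v²:

```python
def sigma3(n):
    # sum of the cubes of the divisors of n
    return sum(d ** 3 for d in range(1, n + 1) if n % d == 0)

p, N = 2, 10
a = [sigma3(p ** v) for v in range(N + 1)]   # r(q, p^v) / 240 for the E8 form
lam = sigma3(p)                              # lambda(p) = 1 + p^3

# multiply the series sum_v a_v v^v by 1 - lam*v + p^(m/2 - 1)*v^2 with m = 8
prod = [a[n] - (lam * a[n - 1] if n >= 1 else 0)
             + (p ** 3 * a[n - 2] if n >= 2 else 0)
        for n in range(N + 1)]
assert prod[0] == 1 and all(prod[n] == 0 for n in range(1, N + 1))
```

The constant factor 240 cancels from both sides, which is why the normalized coefficients σ₃(p^ν) suffice for the check.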
PROBLEM 3.7. Let
$$\Gamma(s)=\int_0^{\infty}t^{s-1}e^{-t}\,dt\qquad(\operatorname{Re}s>0);$$
prove that the Dirichlet series with coefficients $f(2a)$ has the following integral representation in terms of the original modular form F:
$$\Psi(s;F)=\int_1^{\infty}t^{s-1}\bigl(F(it)-f(0)\bigr)\,dt+i^k\int_1^{\infty}t^{k-s-1}\bigl(F(it)-f(0)\bigr)\,dt-f(0)\Bigl(\frac1s+\frac{i^k}{k-s}\Bigr)\qquad(\operatorname{Re}s>k).$$
From this deduce that the function $\Psi(s;F)$ has a meromorphic continuation to the entire s-plane, the function
$$\Psi(s;F)+f(0)\Bigl(\frac1s+\frac{i^k}{k-s}\Bigr)$$
is an entire function, and $\Psi(s;F)$ satisfies the functional equation
$$\Psi(k-s;F)=i^k\,\Psi(s;F).$$
[Hint: Divide the integral from 0 to ∞ in the previous problem into the integral from 1 to ∞ and the integral from 0 to 1. In the latter integral make the change of variable $t\to1/t$ and use the relation $F(i/t)=F(-1/(it))=(it)^kF(it)$.]
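The transformation law used in the hint can be verified numerically; for example (a sketch using the weight-12 level-1 form Δ rather than a general F), the relation F(−1/z) = z¹²F(z) at z = 2i reads Δ(i/2) = (2i)¹²Δ(2i):

```python
import cmath

def delta(z, terms=300):
    # Delta(z) = q * prod_{n>=1} (1 - q^n)^24 with q = exp(2*pi*i*z)
    q = cmath.exp(2j * cmath.pi * z)
    val, qn = q, 1
    for n in range(1, terms):
        qn *= q                    # qn = q^n
        val *= (1 - qn) ** 24
    return val

# F(-1/z) = z^12 F(z) at z = 2i:  Delta(i/2) = (2i)^12 * Delta(2i)
lhs = delta(0.5j)
rhs = (2j) ** 12 * delta(2j)
assert abs(lhs - rhs) < 1e-9 * abs(lhs)
```

The q-expansion converges extremely fast on the imaginary axis (here q = e^{−π} and e^{−4π}), so a few hundred terms give full double precision.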
PROBLEM 3.9. Prove the following properties of the zeta-function $\zeta(s,F)$ corresponding to a nonzero eigenfunction $F\in\mathfrak M^1_k(1,1)$ for all of the Hecke operators $|T(m)$:
(1) $\zeta(s,F)$ has a meromorphic continuation to the entire s-plane;
(2) the function
$$\xi(s;F)+f(0)\Bigl(\frac1s+\frac{i^k}{k-s}\Bigr),$$
where $\xi(s;F)=(2\pi)^{-s}\Gamma(s)\zeta(s,F)$ and $f(2a)$ are the Fourier coefficients of F, is an entire function;
(3) $\zeta(s,F)$ satisfies the functional equation
so that in the case of a real character χ the subspaces $\mathfrak M^{\pm}_k(q,\chi)$ are invariant relative to all of the Hecke operators $|T(m)$.
(5) Let F be a modular form in $\mathfrak M^{\pm}_k(q,\chi)$, where χ is a real character. In the notation of Problem 3.7, prove that one has the following integral representation:
$$\Psi(s;F)=\int_{q^{-1/2}}^{\infty}t^{s-1}\bigl(F(it)-f(0)\bigr)\,dt\pm i^kq^{k/2-s}\int_{q^{-1/2}}^{\infty}t^{k-s-1}\bigl(F(it)-f(0)\bigr)\,dt$$
From this deduce that $\Psi(s;F)$ has a meromorphic continuation to the entire s-plane, the function
$$\Psi(s;F)+f(0)\,q^{-s/2}\Bigl(\frac1s\pm\frac1{k-s}\Bigr)$$
$|T(m)$ ($m\in\mathbf N_{(q)}$), and let $\lambda(m)$ be the corresponding eigenvalues. Show that one has the factorization

of integer weight k and one-dimensional character [χ] for the group $\Gamma^2_0(q)$, where $q\in\mathbf N$ and χ is a Dirichlet character modulo q. We suppose that F is an eigenfunction for all of the Hecke operators $|T=|_{k,\chi}T$ for $T\in L^2(q)$ with eigenvalues $\lambda(T)=\lambda(T;F)$:
$$F|T=\lambda(T)F.$$
The Dirichlet series constructed from the Fourier coefficients of F that we shall work with is the series
and we shall also consider certain linear combinations of these series. On the other hand, the Dirichlet series constructed from the eigenvalues of F that arise are Euler products of the form
where γ is the same as above, and $Q_p(v;F)$ for $p\in\mathbf P_{(q)}$ denotes the polynomial
$$(3.24)\qquad Q_p(v;F)=\sum_{j=0}^{4}(-1)^j\lambda\bigl(q_j(p)\bigr)v^j,$$
and $q_j(p)$ is defined as the preimage in $L^2_0(q)$ of the element $q^2_j(p)\in L^2_0$ (see (3.77) of Chapter 3) under the isomorphism $\varepsilon_q$ (see (3.45) of Chapter 3). We call (3.23) the zeta-function of F with "character" γ. Its p-factors
(3.25)
are called the local zeta-functions of F (with character γ).
We begin our search for "global" connections between the series (3.21) and the products (3.23) by finding local relations, i.e., relations between the power series
and the local zeta-functions $\zeta_p(s,\gamma;F)$. In the computations below we use the technique of Chapter 3, based on extending Hecke rings of the symplectic group to Hecke rings of the triangular subgroup. According to the philosophy of §2.2, we consider $\mathfrak M^2_k(q,\chi)$ as an invariant subspace, relative to all of the Hecke operators in $L^2_{\varepsilon}(q)=\varepsilon_q(L^2(q))$, inside the space $\mathfrak M^2_{\varepsilon}$, where $\varepsilon(-1)=(-1)^k\chi(-1)$; and we consider the space $\mathfrak F^2_{\varepsilon}(q,\chi)$ of Fourier coefficients of functions in $\mathfrak M^2_k(q,\chi)$ as an invariant subspace of $\mathfrak F^2_{\varepsilon}$ relative to the same Hecke operators. We shall systematically make use of the notation (2.72) and the relations (2.74). For most of what we do, what is important is simply that F lies in $\mathfrak M^2_{\varepsilon}$ and is an eigenfunction for certain Hecke operators. Everywhere in what follows k, χ, and ε are fixed, and are connected by the relation $\varepsilon(-1)=(-1)^k\chi(-1)$.
LEMMA 3.11. Let $F\in\mathfrak M^2_{\varepsilon}$ be an eigenfunction for all of the Hecke operators $|T=|_{k,\chi}T$ for $T\in L^2_p$, where p is a fixed prime, let $\lambda(T)$ be the corresponding eigenvalues, and let $f(A)$ ($A\in A_2$) be the Fourier coefficients of F. Then the following formal identity holds for every matrix $B\in A_2$:
$$(3.27)\qquad d_p(v,B)=\sum_{\delta=0}^{\infty}f(p^{\delta}B)v^{\delta}
=Q_p(v;F)^{-1}\Bigl(f\big|_{k,\chi}(1-\Pi_-v)\bigl(1-\Pi_1v+p\bigl(\Pi^{(1)}_{2,0}+\Pi^{(0)}_{2,0}\bigr)v^2\bigr)\Bigr)(B),$$
where, by analogy with (3.24), we set
$$Q_p(v;F)=\sum_{j=0}^{4}(-1)^j\lambda\bigl(q_j(p)\bigr)v^j,$$
$\Pi_-=\Pi^2_-(p)$ and $\Pi_1=\Pi^2_1(p)$ are the elements (3.59) of Chapter 3, and $\Pi^{(i)}_{2,0}=\Pi^{(i)}_{2,0}(p)$ are the elements (3.62) of Chapter 3 for $n=2$.
where $Q^2_p(v)$ is the polynomial (3.78) of Chapter 3 over the ring $L^2_0\subset L^2_{0,p}$. Furthermore, it follows from (2.33) that for any $g\in\mathfrak F^2_{\varepsilon}$ and any $B\in A_2$ we can write
$$g(p^{\delta}B)=\bigl(g|\Pi^2_+(p^{\delta})\bigr)(B)=\bigl(g|\Pi^{\delta}_+\bigr)(B)\qquad(\delta\ge0),$$
where $\Pi_+=\Pi^2_+(p)$. Using these formulas and the factorization of the polynomial $Q^2_p(v)$ in Proposition 6.13 of Chapter 3, we obtain
where for $B=\begin{pmatrix}2b_1&b_2\\b_2&2b_3\end{pmatrix}$ we let $\rho_p(1,2;B)$ denote the number of nontrivial solutions of the congruence
$$(3.32)\qquad b(x,y)=b_1x^2+b_2xy+b_3y^2\equiv0\pmod p.$$
According to Lemma 2.22, this number is equal to the number $i(V_p,b_p,1)$ of nonzero isotropic vectors in the two-dimensional quadratic space $(V_p,b_p)$ over $\mathbf F_p$ with quadratic form $b_p=b(x,y)\bmod p$. If $(V_p,b_p)$ is a nondegenerate space, then, by Proposition 2.14 of Appendix 2, we have $i(V_p,b_p,1)=(p-\varepsilon)(1+\varepsilon)$, where $\varepsilon=\varepsilon(V_p,b_p)$ is the sign of the space $(V_p,b_p)$ (see (2.21) of Appendix 2). Since $\varepsilon=\pm1$, the last formula can be rewritten in the form
$$(3.33)\qquad i(V_p,b_p,1)=(\varepsilon+1)(p-1).$$
This formula is actually valid for any two-dimensional quadratic space $(V_p,b_p)$, if we set
In fact, in the first case the space $(V_p,b_p)$ is the sum of two one-dimensional subspaces, one null and one not null, and every nonzero isotropic vector must belong to the null one-dimensional subspace; thus, the number of isotropic vectors is $p-1$. In the second case, every nonzero vector is isotropic, so there are $p^2-1$ of them.
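Formula (3.33) and its degenerate extension can be confirmed by brute force over a small field (a standalone check, not from the book; here ε of a nondegenerate binary form is computed as the Legendre symbol of its discriminant, which agrees with the sign of the space):

```python
def isotropic_count(a, b, c, p):
    # number of nonzero pairs (x, y) mod p with a*x^2 + b*x*y + c*y^2 = 0
    return sum(1 for x in range(p) for y in range(p)
               if (x, y) != (0, 0) and (a*x*x + b*x*y + c*y*y) % p == 0)

def legendre(d, p):
    # Legendre symbol (d/p) via Euler's criterion, p an odd prime
    d %= p
    if d == 0:
        return 0
    return 1 if pow(d, (p - 1) // 2, p) == 1 else -1

p = 11
for a, b, c in [(1, 0, 1), (1, 0, -1), (1, 3, 1), (2, 5, 3)]:
    D = b*b - 4*a*c
    if D % p != 0:                                   # nondegenerate case
        eps = legendre(D, p)
        assert isotropic_count(a, b, c, p) == (eps + 1) * (p - 1)

# degenerate, nonzero form (one null line): p - 1 isotropic vectors
assert isotropic_count(1, 0, 0, p) == p - 1
# zero form: every nonzero vector is isotropic
assert isotropic_count(0, 0, 0, p) == p * p - 1
```

The two degenerate counts are exactly the cases p − 1 and p² − 1 described in the paragraph above.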
For every matrix $B\in A_2$ we define the p-sign $\varepsilon_p(B)$ by setting
(3.35)
where $b_p(x,y)$ is the quadratic form with matrix B, considered over $\mathbf F_p$. Here the right side of (3.35) is understood in the sense of (2.21) of Appendix 2 if the two-dimensional quadratic space $(V_p,b_p)$ is nondegenerate, and in the sense of (3.34) if this space is degenerate. Then, by the above observations, we can write
and from this and (3.30)–(3.31) we finally obtain a formula for the action of the operator $\Pi^{(1)}_{2,0}+\Pi^{(0)}_{2,0}$ on $\mathfrak F^2_{\varepsilon}$:
(3.36)
In order to apply the identities (3.27), it remains to interpret the right side of (3.29). If $\Delta=\det B\ne0$, then any of the matrices $p^{-1}B[{}^tD]$ on the right in (3.29) is the matrix of a positive definite integral quadratic form of discriminant $-\Delta$, and hence it is naturally associated with a module of the imaginary quadratic field $\mathbf Q(\sqrt{-\Delta})$. This enables us to interpret the right side of (3.29) in terms of the composition of quadratic forms. We shall use the language, notation, and results of Appendix 3.
We first look at the conditions under the summation in (3.29). According to Lemma 1.2 of Chapter 3, we can take our set of representatives of the left cosets $\Lambda^+\backslash\Lambda^+D_2(p)\Lambda^+$ to be matrices of the form
$$D_2(p)\begin{pmatrix}u_1&u_2\\v_1&v_2\end{pmatrix}=\begin{pmatrix}u_1&u_2\\pv_1&pv_2\end{pmatrix},$$
where $\begin{pmatrix}u_1&u_2\\v_1&v_2\end{pmatrix}$ runs through a set of left coset representatives of $\Lambda^+=\Gamma^1$ modulo the subgroup
It is clear that two matrices in $\Gamma^1$ with first rows $(u_1,u_2)$ and $(u_1',u_2')$ lie in the same left coset modulo the above group if and only if $u_1'u_2\equiv u_2'u_1\pmod p$, i.e., if and only if the pairs $(u_1,u_2)$ and $(u_1',u_2')$ are proportional modulo p:
$$(3.39)\qquad\Lambda^+\backslash\Lambda^+D^2_1(p)\Lambda^+=\Bigl\{\begin{pmatrix}1&0\\0&p\end{pmatrix}U;\ U=\begin{pmatrix}u_1&u_2\\v_1&v_2\end{pmatrix}\in\Lambda^+,\ (u_1,u_2)\in\mathbf P^1(\mathbf Z/p\mathbf Z)\Bigr\},$$
where $\mathbf P^1(\mathbf Z/p\mathbf Z)$ denotes an arbitrary set of representatives of the equivalence classes of pairs of relatively prime integers under the equivalence (3.38) (this is the projective line over $\mathbf Z/p\mathbf Z$). Let $B=\begin{pmatrix}2b_1&b_2\\b_2&2b_3\end{pmatrix}\in A_2$, and let $b(x,y)=b_1x^2+b_2xy+b_3y^2$,
(3.40)
where for each pair of relatively prime integers $(u_1,u_2)$ we take $(v_1,v_2)$ to be an arbitrary pair of integers for which $u_1v_2-u_2v_1=1$, and where $U=\begin{pmatrix}u_1&u_2\\v_1&v_2\end{pmatrix}$.
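The equivalence (3.38) indeed has exactly p + 1 classes; a quick enumeration (illustrative code, not the book's) normalizes each nonzero pair mod p by its first nonzero coordinate:

```python
p = 7
classes = set()
for u1 in range(p):
    for u2 in range(p):
        if (u1, u2) == (0, 0):
            continue
        if u1 != 0:
            inv = pow(u1, p - 2, p)       # scale so the first coordinate is 1
            classes.add((1, (u2 * inv) % p))
        else:
            classes.add((0, 1))           # the single class with u1 = 0
assert len(classes) == p + 1              # the projective line P^1(Z/pZ)
# each class contains exactly p - 1 of the p^2 - 1 nonzero pairs
assert (p * p - 1) // len(classes) == p - 1
```

The count "p − 1 pairs per class" is exactly the fact used in (3.44) below to convert counts of isotropic vectors into counts of summation terms.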
V1 V2
The next proposition, which plays a central role in this discussion, interprets the
right side of (3.41) for positive definite matrices B not divisible by p in terms of
the composition of matrices of quadratic forms and modules of the corresponding
quadratic field.
Let D' be an order of the algebraic number field K. For brevity, we shall use the
term regular ideal (of the ring 0') to refer to a full submodule of K that is contained
in D' and has the property that its ring of multipliers is D'.
(2) If, on the other hand, $\varepsilon_p(A)=-1$, then $\mathfrak O_l$ contains no regular ideals of norm p, and one has $\bigl(f|_{k,\chi}\Pi^2_1(p)\bigr)(mA)=0$.
(3) If $\varepsilon_p(A)=0$ and $p\nmid l$, then there exists a unique regular ideal $\mathfrak p$ of norm p in the ring $\mathfrak O_l$; we have $\bar{\mathfrak p}=\mathfrak p$, and

$$(3.43)\qquad\varepsilon_p(A)=\begin{cases}1,&\text{if }\det A\equiv-1\pmod 8,\\-1,&\text{if }\det A\equiv3\pmod 8,\\0,&\text{if }\det A\equiv0\pmod 2,\end{cases}$$
since the square of any odd integer is $\equiv1\pmod 8$, so that in the first case $d=-l^{-2}\det A\equiv1\pmod 8$ and the congruence $x^2\equiv d\pmod 8$ is solvable. In the second case $d=-l^{-2}\det A\equiv5\pmod 8$ and the congruence $x^2\equiv d\pmod 8$ has no solutions; and in the third case, if l is odd, then d is congruent to 0 or 4 modulo 8, and the congruence $x^2\equiv d\pmod 8$ is solvable.
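The elementary congruence facts used in (3.43) are easy to confirm directly (a standalone check):

```python
# the square of any odd integer is congruent to 1 mod 8
assert all((x * x) % 8 == 1 for x in range(1, 200, 2))

def solvable_mod8(d):
    # is x^2 = d (mod 8) solvable?
    return any((x * x - d) % 8 == 0 for x in range(8))

assert solvable_mod8(1)               # d = 1 (mod 8): solvable
assert not solvable_mod8(5)           # d = 5 (mod 8): no solutions
assert solvable_mod8(0) and solvable_mod8(4)
```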
These observations and §2 of Appendix 3 give us the statements in the first three parts of the proposition about the existence and properties of ideals of norm p in $\mathfrak O_l$. In particular, in these cases there are exactly $\varepsilon_p(A)+1$ such ideals.
On the other hand, the expression $\varepsilon_p(A)+1$ appears in the formula (3.33) for the number of nonzero isotropic vectors in $V_p$, which can obviously be interpreted as the number of distinct pairs (modulo p) of relatively prime integers $(u_1,u_2)$ satisfying the congruence $q(u_1,u_2)\equiv0\pmod p$. If we divide these pairs into classes of pairs that are proportional to one another modulo p in the sense of (3.38), and if we take into
account that each such class contains exactly $p-1$ pairs, we conclude that the number of terms in the sum (3.41) for $B=mA$ with $m\not\equiv0\pmod p$ is equal to
$$(3.44)\qquad\bigl|\{(u_1,u_2)\in\mathbf P^1(\mathbf Z/p\mathbf Z);\ q(u_1,u_2)\equiv0\ (\mathrm{mod}\ p)\}\bigr|=(p-1)^{-1}\,i(V_p,q\bmod p)=\varepsilon_p(A)+1.$$
Hence, in cases (1), (2), and (3) of the proposition it is natural to expect that the terms in (3.41) are directly connected with the regular ideals in $\mathfrak O_l$ of norm p.
Suppose that $\varepsilon_p(A)=1$. In this case, by (3.44), there exist exactly two pairs of relatively prime integers $(u_1,u_2)$ and $(u_1',u_2')$ that are not proportional modulo p and that satisfy the congruence $q(x,y)\equiv0\pmod p$. Then $q(u_1,u_2)=pa_1$ and $q(u_1',u_2')=pa_2$, where $a_1,a_2\in\mathbf N$. We choose integers $v_1,v_2,v_1',v_2'$ such that
$$U_1=\begin{pmatrix}u_1&u_2\\v_1&v_2\end{pmatrix},\qquad U_2=\begin{pmatrix}u_1'&u_2'\\v_1'&v_2'\end{pmatrix}\in SL_2(\mathbf Z),$$
and for $i=1,2$ we set
let $q_1$, $q_2$, $q_1'$, and $q_2'$ be the quadratic forms with matrices $A_1$, $A_2$, $A_1'$, and $A_2'$, respectively, and let $M(q_1)$, $M(q_2)$, $M(q_1')$, and $M(q_2')$ be the modules corresponding to these quadratic forms (see §3 of Appendix 3). We set $\delta_1=(b_1-\sqrt D)/2$. Since obviously $\delta_1^2=b_1\delta_1-pa_1c_1$, and since $b_1$ is not divisible by p (see above), it follows that
$$M(q_1')\mathfrak p=\{a_1,\delta_1\}\{p,\delta_1\}=\{a_1p,a_1\delta_1,p\delta_1,b_1\delta_1-pa_1c_1\}=\{pa_1,\delta_1\}=M(q_1),$$
from which, using §2 of Appendix 3, we obtain
$$M(q_1')=p^{-1}M(q_1')\bar{\mathfrak p}\mathfrak p=p^{-1}M(q_1)\bar{\mathfrak p}.$$
Since the quadratic form $q_1$ is properly equivalent to q, it follows that the last module is similar to the module $M(q)\bar{\mathfrak p}$, and so the matrix $A_1'$ of the quadratic form $q_1'$ is properly equivalent to the matrix $A\times\bar{\mathfrak p}$; hence, $f(mA_1')=f\bigl(m(A\times\bar{\mathfrak p})\bigr)$. Similarly, $A_2'$ is properly equivalent to $A\times\mathfrak p$, and $f(mA_2')=f\bigl(m(A\times\mathfrak p)\bigr)$. These relations, together with (3.45), prove the first part of the proposition.
The second part follows from the above arguments and from (3.41) and (3.44).
Now suppose that $\varepsilon_p(A)=0$. Then, according to (3.44), all of the pairs of relatively prime integers satisfying $q(x,y)\equiv0\pmod p$ are proportional to one such pair, say $(u_1,u_2)$. Let $q(u_1,u_2)=pa_1$. We choose $v_1,v_2\in\mathbf Z$ so that
and we set
ideal of $\mathfrak O_l$ of norm p. We set $\delta_1=p\gamma_1=(b_1-\sqrt D)/2$. Since $\delta_1^2=b_1\delta_1-pa_1c_1$ and $a_1$ is not divisible by p, it follows that
$$M(q_1')\mathfrak p=\{a_1,\delta_1\}\{p,\delta_1\}=\{a_1p,a_1\delta_1,p\delta_1,b_1\delta_1-pa_1c_1\}=\{a_1p,\delta_1\}=M(q_1),$$
where $q_1$ is the quadratic form with matrix $A_1$. From this, if we take into account that $\bar{\mathfrak p}=\mathfrak p$ and use §2 of Appendix 3, we obtain
$$M(q_1')=p^{-1}M(q_1')\mathfrak p\bar{\mathfrak p}=p^{-1}M(q_1)\bar{\mathfrak p}=p^{-1}M(q_1)\mathfrak p.$$
Since $q_1$ is properly equivalent to q, it follows that the last module is similar to the module $M(q)\mathfrak p$, and so the matrix $A_1'$ is properly equivalent to the matrix $A\times\mathfrak p$. This means that $f(mA_1')=f\bigl(m(A\times\mathfrak p)\bigr)$, and this, together with (3.46), completes the proof of part (3).
Finally, suppose that $p\mid l$. Since $b_1$ is divisible by p, and the quadratic form $q_1$ with matrix $A_1$ is primitive (because q is primitive), it follows that $c_1$ is not divisible by p. We show that p divides $a_1$. If $p\ne2$, then this follows immediately from the second congruence in (3.47). If $p=2$, then the congruence $d\equiv0,1\pmod4$ implies that $D=dl^2=b_1^2-8a_1c_1\equiv0,4\pmod{16}$. Then $(b_1/2)^2-2a_1c_1\equiv0,1\pmod4$, and so $2a_1c_1\equiv0\pmod4$ and $a_1\equiv0\pmod2$. We set $A_1'=pA_2$. From the above observations we see that $A_2$ is an integer matrix and is even. The primitivity of the quadratic form $q_1$ implies that the quadratic form $q_2$ with matrix $A_2$ is primitive. The number $\gamma_1=(b_1-\sqrt D)/2p$ is a root of the polynomial $v^2-(b_1/p)v+a_1c_1/p$, which has rational integer coefficients. Hence, from §2 of Appendix 3 it follows that the module $\{1,\gamma_1\}$ is an order of the field K, having discriminant $(b_1/p)^2-4pa_1c_1/p^2=d(l/p)^2$. Hence, $\{1,\gamma_1\}=\mathfrak O_{l/p}$. We now obtain
$$M(q_1)\mathfrak O_{l/p}=\{pa_1,p\gamma_1\}\{1,\gamma_1\}=\{pa_1,p\gamma_1,pa_1\gamma_1,p(b_1\gamma_1/p-a_1c_1/p)\}=\{a_1c_1,pa_1,p\gamma_1\}=\{a_1,p\gamma_1\}=pM(q_2).$$
As before, the module $M(q_1)$ is similar to the module $M(q)$, and so $M(q_2)$ is similar to $M(q)\cdot\mathfrak O_{l/p}$, and the matrix $A_2$ is properly equivalent to the matrix $A\times\mathfrak O_{l/p}$; hence,
$$f(mA_1')=f(pmA_2)=f\bigl(pm(A\times\mathfrak O_{l/p})\bigr).\qquad\square$$
We now return to the identities in Lemma 3.11, and apply the above formulas to compute the numerators of the rational functions on the right in these identities. If we set $B=mA$, where $m\in\mathbf N$ is not divisible by p and $A=\begin{pmatrix}2a&b\\b&2c\end{pmatrix}$ is a matrix in $A^+_2$ with relatively prime a, b, c, and if we use (3.36) and (3.28), then we obtain
We fix an integer $D<0$, and let $A_1,\dots,A_{h(D)}$ be a set of representatives of the proper equivalence classes (see §3 of Appendix 3) of matrices $A=\begin{pmatrix}2a&b\\b&2c\end{pmatrix}\in A^+_2$ with $\gcd(a,b,c)=1$ and $b^2-4ac=D$. These equivalence classes form a finite abelian group $H(D)$ under the composition in §3 of Appendix 3. We fix an arbitrary character ξ of the group $H(D)$. Given a prime p, a natural number m, and $f\in\mathfrak F^2_{\varepsilon}$, we define the formal power series
$$(3.49)\qquad d_p(v,m,\xi,D)=\sum_{i=1}^{h(D)}\xi(A_i)\,d_p(v,mA_i)=\sum_{\delta=0}^{\infty}f(p^{\delta}m,\xi)v^{\delta},$$
where
$$f(n,\xi)=\sum_{i=1}^{h(D)}\xi(A_i)f(nA_i).$$
Before we sum the series (3.49), notice that, by (3.42) and (3.43), the p-sign $\varepsilon_p(A_i)$ does not depend on i, but rather depends only on D. Hence, we set
and
$$(3.51)\qquad\varepsilon_p(D)=\begin{cases}1,&\text{if }D\equiv1\pmod8,\\-1,&\text{if }D\equiv5\pmod8,\\0,&\text{if }D\equiv0\pmod2.\end{cases}$$
LEMMA 3.13. Suppose that the values of the function $f\in\mathfrak F^2_{\varepsilon}$ are the Fourier coefficients of an eigenfunction $F\in\mathfrak M^2_{\varepsilon}$ of all of the Hecke operators $|T=|_{k,\chi}T$ for $T\in L^2_p$, where p is a fixed prime. Let D be a negative integer, ξ be a character of the group $H(D)$, and m be a natural number prime to p. Then the power series $d_p(v)=d_p(v,m,\xi;D)$ is formally equal to a rational function
$$d_p(v)=Q_p(v;F)^{-1}P_p(v),$$
whose denominator is $Q_p(v;F)$ and whose numerator $P_p(v)$ is a polynomial in v of degree at most 2. The numerator is given by the following formulas (where, as before, $\mathfrak O_l$ denotes the subring of index $l=\sqrt{D/d}$ in the ring of integers of $K=\mathbf Q(\sqrt D)$, and d is the discriminant of the field K):
(1) If $\varepsilon_p(D)=1$, then
$$P_p(v)=\bigl(1-(N\mathfrak p)^{k-2}\chi(N\mathfrak p)\xi(\mathfrak p)v\bigr)\bigl(1-(N\bar{\mathfrak p})^{k-2}\chi(N\bar{\mathfrak p})\xi(\bar{\mathfrak p})v\bigr)f(m,\xi),$$
where $\mathfrak p$ and $\bar{\mathfrak p}$ are the unique regular ideals of $\mathfrak O_l$ of norm p.
(2) If $\varepsilon_p(D)=-1$, then
$$P_p(v)=\bigl(1-(N\mathfrak p)^{k-2}\chi(N\mathfrak p)\xi(\mathfrak p)v^2\bigr)f(m,\xi),$$
where $\mathfrak p=p\mathfrak O_l$ is the unique regular ideal of $\mathfrak O_l$ of norm $p^2$.
(3) If $\varepsilon_p(D)=0$ and $p\nmid l$, then
$$P_p(v)=\bigl(1-(N\mathfrak p)^{k-2}\chi(N\mathfrak p)\xi(\mathfrak p)v\bigr)f(m,\xi),$$
where $\mathfrak p=\bar{\mathfrak p}$ is the unique regular ideal of $\mathfrak O_l$ of norm p.
PROOF. First suppose that p does not divide l. Then from Proposition 3.12 and the formula (3.28) it follows that $(f|\Pi_-\Pi_1)(mA_i)=0$ for any $i=1,\dots,h(D)$. Then from (3.48) and (3.50)–(3.51) we obtain
$$P_p(v)=Q_p(v;F)\,d_p(v)=f(m,\xi)-(f|\Pi_1)(m,\xi)\,v+p^{2k-4}\chi(p^2)\varepsilon_p(D)f(m,\xi)\,v^2.$$
If $\varepsilon_p(D)=1$, then, using the first part of Proposition 3.12, we have
$$(f|\Pi_1)(m,\xi)=\sum_{i=1}^{h(D)}\xi(A_i)(f|\Pi_1)(mA_i)
=p^{k-2}\chi(p)\sum_{i}\xi(A_i)\bigl(f\bigl(m(A_i\times\mathfrak p)\bigr)+f\bigl(m(A_i\times\bar{\mathfrak p})\bigr)\bigr)$$
Now let $\varepsilon_p(D)=-1$. Then, using the second part of Proposition 3.12, we have
$$(f|\Pi_1)(m,\xi)=\sum_i\xi(A_i)(f|\Pi_1)(mA_i)=0,$$
and so
$$P_p(v)=\bigl(1-p^{2k-4}\chi(p^2)v^2\bigr)f(m,\xi).$$
Finally, if $\varepsilon_p(D)=0$ (and $p\nmid l$), then, by Proposition 3.12(3), we have
$$(f|\Pi_1)(m,\xi)=p^{k-2}\chi(p)\sum_{i=1}^{h(D)}\xi(A_i)f\bigl(m(A_i\times\mathfrak p)\bigr)
=p^{k-2}\chi(p)\xi(\bar{\mathfrak p})\sum_i\xi(A_i\times\mathfrak p)f\bigl(m(A_i\times\mathfrak p)\bigr)=p^{k-2}\chi(p)\xi(\mathfrak p)f(m,\xi),$$
since $\bar{\mathfrak p}=\mathfrak p$; thus,
$$P_p(v)=\bigl(1-p^{k-2}\chi(p)\xi(\mathfrak p)v\bigr)f(m,\xi).$$
Now suppose that $\varepsilon_p(D)=0$ and $p\mid l$. Since $(f|\Pi_-)(mA_i)=0$, the formula (3.48) for $A=A_i$ can be rewritten in the form
$$Q_p(v;F)\,d_p(v,mA_i)=f(mA_i)-(f|\Pi_1)(mA_i)\,v+(f|\Pi_-\Pi_1)(mA_i)\,v^2=\bigl(f|(1-\Pi_-v)(1-\Pi_1v)\bigr)(mA_i).$$
If we multiply this relation by $\xi(A_i)$ and sum over i, we obtain the last formula in the lemma. □
We are near the end of our examination of the multiplicative properties of the
Fourier coefficients of modular forms of degree 2. It remains for us to bring together
the local information into a single global picture.
THEOREM 3.14. Let
$$F(Z)=\sum_{A\in A_2}f(A)e\{AZ\}\in\mathfrak M^2_k(q,\chi)$$
be a nonzero modular form of integer weight k and one-dimensional character [χ] for the group $\Gamma^2_0(q)$. Suppose that F is an eigenfunction for all of the Hecke operators $|T=|_{k,\chi}T$ for $T\in L^2(q)$. Further suppose that D is an arbitrary negative integer, $l\in\mathbf N$ is determined from the condition $D=dl^2$, where d is the discriminant of the field $K=\mathbf Q(\sqrt D)$, and ξ is an arbitrary character of the class group $H(D)$, regarded as an abstract group.
Then the following formal identity holds for any natural number a such that $a\mid q^{\infty}$ and for any completely multiplicative function $\gamma\colon\mathbf N_{(q)}\to\mathbf C$:
$$\sum_{i=1}^{h(D)}\xi(A_i)\sum_{m\in\mathbf N_{(q)}}\frac{f(maA_i)\gamma(m)}{m^s}
=\rho(s)\Bigl\{\prod_{\mathfrak p,\ N\mathfrak p\nmid(ql)^2}\Bigl(1-\frac{\chi(N\mathfrak p)\gamma(N\mathfrak p)\xi(\mathfrak p)}{(N\mathfrak p)^{s-k+2}}\Bigr)\Bigr\}\zeta(s,\gamma;F),\eqno(3.52)$$
where $A_1,\dots,A_{h(D)}$ is an arbitrary set of representatives of the elements of $H(D)$, regarded as proper equivalence classes of the matrices of positive definite primitive integral binary quadratic forms of discriminant D; where $\rho(s)=\rho(s,a,D,\xi,\gamma;F)$ is a finite sum given by one of the expressions
$$\rho(s)=\sum_{i=1}^{h(D)}\xi(A_i)\Bigl(\cdots\prod_{\substack{p\in\mathbf P_{(q)}\\ p\mid l}}\bigl(1-\cdots\gamma(p)\bigr)\cdots\Bigr)$$
… prime to ql; and where $\zeta(s,\gamma;F)$ is the Euler product (3.23) corresponding to the eigenfunction F.
Further suppose that $k \geqslant 0$, $f(A) \neq 0$ for some nondegenerate matrix $A \in \mathbf{A}_2^+$, and $|y(m)| \leqslant c\,m^{\sigma}$ for all $m \in \mathbf{N}_{(q)}$, where $c$ and $\sigma$ are real numbers that do not depend on $m$. Then the Dirichlet series on the left in (3.52) and the infinite products on the right in this identity converge absolutely and uniformly in any right half-plane of the complex variable $s$ of the form $\operatorname{Re} s > 2\rho k + \sigma + 1 + \varepsilon$ with $\varepsilon > 0$, where $\rho = 1$ in the general case and $\rho = 1/2$ if $F$ is a cusp form; and the resulting holomorphic functions on the indicated half-planes are connected by the identity (3.52).
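The identity (3.52) is indexed by representatives $A_1,\dots,A_{h(D)}$ of the class group $H(D)$. As a concrete companion (classical reduction theory of binary quadratic forms, not part of the theorem itself), the order $h(D)$ can be computed by counting the unique reduced form in each class; `class_number` below is an illustrative helper, not the book's notation:

```python
from math import gcd

def class_number(D):
    """h(D): number of classes of positive definite primitive integral
    binary quadratic forms a x^2 + b x y + c y^2 with b^2 - 4ac = D < 0,
    counted via the unique reduced form in each class
    (|b| <= a <= c, with b >= 0 when |b| = a or a = c)."""
    assert D < 0 and D % 4 in (0, 1)
    h, b = 0, D % 2                      # b has the same parity as D
    while b * b <= -D // 3:              # reduced forms satisfy 3 b^2 <= |D|
        m = (b * b - D) // 4             # m = a * c
        a = max(b, 1)
        while a * a <= m:
            if m % a == 0:
                c = m // a
                if gcd(gcd(a, b), c) == 1:
                    h += 1               # the form (a, b, c)
                    if 0 < b < a < c:
                        h += 1           # distinct class of (a, -b, c)
            a += 1
        b += 2
    return h

print(class_number(-4), class_number(-23), class_number(-47))   # prints: 1 3 5
```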
PROOF. Let $p_1,\dots,p_b$ be all of the distinct prime divisors of $l$ that are prime to $q$, and let $p_{b+1}, p_{b+2},\dots$ be the sequence of all primes in $\mathbf{P}_{(ql)}$ arranged in increasing order. We first prove by induction on $c$ that for all $c \in \mathbb{N}$ one has the formal identity
$$\sum_{m\in\mathbf{N}_{(q)}} \frac{f(am,e)}{m^s}\,y(m) = \Bigl\{\sum_{m\in\mathbf{N}_{(q\cdot q^{(c)})}} \frac{f(am,e)}{m^s}\,y(m)\Bigr\}\Bigl\{\prod_{\mathfrak{p},\,N\mathfrak{p}\mid q^{(c)2}}\Bigl(1 - \frac{\chi(N\mathfrak{p})\,y(N\mathfrak{p})\,e(\mathfrak{p})}{(N\mathfrak{p})^{s-k+2}}\Bigr)\Bigr\}\prod_{p\mid q^{(c)}} Q_p\Bigl(\frac{y(p)}{p^s};F\Bigr)^{-1}, \tag{3.54}$$
where $q^{(c)} = p_1p_2\cdots p_{b+c}$, and $\mathfrak{p}$ in the middle term on the right runs through all regular prime ideals of $\mathfrak{O}_l$ whose norm satisfies the condition given. If $c = 1$, then, setting $p_{b+1} = p$ and $y(p)p^{-s} = v$ and using the identity in Lemma 3.13 that corresponds to the value of $e_p(D)$, we obtain (3.55), where, according to §2 of Appendix 3 and Lemma 3.13, $\mathfrak{p}$ runs through all regular prime ideals of $\mathfrak{O}_l$ of norm $p$ or $p^2$; this proves (3.54) in the case under consideration. Suppose that (3.54) has been proved for some value of $c$. If we set $p_{b+c+1} = p$ and use
$$\begin{gathered}
= Q_p\Bigl(\frac{y(p)}{p^s};F\Bigr)^{-1}\Bigl\{\sum_{m\in\mathbf{N}_{(q\cdot q^{(c+1)})}} \frac{f(am,e)}{m^s}\,y(m)\Bigr\}\Bigl\{\prod_{\mathfrak{p},\,N\mathfrak{p}\mid q^{(c)2}}\Bigl(1 - \frac{\chi(N\mathfrak{p})\,y(N\mathfrak{p})\,e(\mathfrak{p})}{(N\mathfrak{p})^{s-k+2}}\Bigr)\Bigr\}\prod_{p\mid q^{(c)}} Q_p\Bigl(\frac{y(p)}{p^s};F\Bigr)^{-1}\\
= \Bigl\{\sum_{m\in\mathbf{N}_{(q\cdot q^{(c+1)})}} \frac{f(am,e)}{m^s}\,y(m)\Bigr\}\Bigl\{\prod_{\mathfrak{p},\,N\mathfrak{p}\mid q^{(c+1)2}}\Bigl(1 - \frac{\chi(N\mathfrak{p})\,y(N\mathfrak{p})\,e(\mathfrak{p})}{(N\mathfrak{p})^{s-k+2}}\Bigr)\Bigr\}\prod_{p\mid q^{(c+1)}} Q_p\Bigl(\frac{y(p)}{p^s};F\Bigr)^{-1},
\end{gathered}$$
which is (3.54) for $c + 1$. Letting $c \to \infty$, we obtain
$$\sum_{m\in\mathbf{N}_{(q)}} \frac{f(am,e)}{m^s}\,y(m) = \Bigl\{\sum_{m\mid(p_1\cdots p_b)^{\infty}} \frac{f(am,e)}{m^s}\,y(m)\Bigr\}\Bigl\{\prod_{\mathfrak{p},\,N\mathfrak{p}\nmid(ql)^2}\Bigl(1 - \frac{\chi(N\mathfrak{p})\,y(N\mathfrak{p})\,e(\mathfrak{p})}{(N\mathfrak{p})^{s-k+2}}\Bigr)\Bigr\}\prod_{p\in\mathbf{P}_{(ql)}} Q_p\Bigl(\frac{y(p)}{p^s};F\Bigr)^{-1}.$$
Thus, to prove (3.52) with the factor $\rho$ given by the first formula in (3.53), it remains to verify the identity (3.56), where $p_1,\dots,p_r$ are the distinct primes in $\mathbf{P}_{(q)}$ that divide $l$. We use induction on $r$. If $r \leqslant 1$, then (3.56) holds by Lemma 3.13(4). Now suppose that (3.56) has been proved for $r$ primes in $\mathbf{P}_{(q)}$ that divide $l$, and let $p_{r+1} = p$ be another such prime. Using the induction assumption, we obtain
$$= Q_p\Bigl(\frac{y(p)}{p^s};F\Bigr)^{-1}\sum_{j=0}^{\infty}\Bigl\{\prod_{p'\mid p_1\cdots p_r} Q_{p'}\Bigl(\frac{y(p')}{p'^s};F\Bigr)^{-1}\sum_{m\mid(p_1\cdots p_r)^{\infty}} \frac{f'(amp^j,e)}{m^s}\,y(m)\Bigr\}\Bigl(\frac{y(p)}{p^s}\Bigr)^{j}, \tag{3.57}$$
§3. MULTIPLICATIVE PROPERTIES OF THE FOURIER COEFFICIENTS 293
where
$$f'(ap^j, e) = \sum_{i=1}^{h(D)} e(A_i)\,f'(ap^j A_i)$$
and
$$f'(A) = \Bigl(f\Bigl|\prod_{p\mid p_1\cdots p_r}\Bigl(1 - \frac{\Pi_0(p)}{p^s}\,y(p)\Bigr)\Bigl(1 - \frac{\Pi_1(p)}{p^s}\,y(p)\Bigr)\Bigr)(A).$$
PIPl•••Pr
It is clear that for any fixed value of $s$ the function $f'$ is contained in the space $\mathfrak{F}$. By Proposition 5.12 of Chapter 3, in the Hecke ring $\mathbf{L}_p$ every element of $\mathbf{L}_p^2$ commutes with any of the elements $\Pi_1(p_j) \in \overline{\mathbf{C}}_{p_j}$ ($j = 1,\dots,r$). Since $\mathbf{L}^2$ is a commutative ring, the elements of $\mathbf{L}_p^2$ commute with all elements of $\mathbf{L}_{p_j}^2$, and in particular with $T^2(p_j) = \Pi_0(p_j) + \Pi_1(p_j) + \Pi_2(p_j)$ (see Proposition 5.14 of Chapter 3); from this, if we again use Proposition 5.12 of Chapter 3, we conclude that every element of $\mathbf{L}_p$ commutes with any of the elements $\Pi_0(p_j) = T^2(p_j) - \Pi_1(p_j) - \Pi_2(p_j)$. From these observations it follows that the function $f' \in \mathfrak{F}$, along with $f$, is an eigenfunction for all of the Hecke operators corresponding to elements of $\mathbf{L}_p^2$, and it has the same eigenvalues as $f$. Thus, we can compute the expression in (3.57) using Lemma 3.13(4), according to which it is equal to
where for squarefree $\delta_0$ and $\delta_1$ we set $\Pi_0(\delta_0) = \prod_{p\mid\delta_0}\Pi_0(p)$ and $\Pi_1(\delta_1) = \prod_{p\mid\delta_1}\Pi_1(p)$ (it follows from (5.13), (5.49), and Proposition 5.12 of Chapter 3 that the order of the primes makes no difference). Using Proposition 3.12(4) and induction on the number of prime divisors of the squarefree number $\delta_1$ dividing $l$, we easily derive the formulas (see §2 of Appendix 3)
$$(f|_{k,\chi}\Pi_1(\delta_1))(aA_i) = \delta_1^{k-2}\chi(\delta_1)\,f\bigl(a\delta_1(A_i \times \mathfrak{O}_{l/\delta_1})\bigr),$$
and from this and (3.28) we obtain the following relations for $i = 1,\dots,h(D)$:
$$(f|_{k,\chi}\Pi_0(\delta_0)\Pi_1(\delta_1))(aA_i) = (f|\Pi_0(\delta_0)|\Pi_1(\delta_1))(aA_i) = \begin{cases}\delta_0^{2k-3}\delta_1^{k-2}\chi(\delta_0^2\delta_1)\,f\bigl(\tfrac{a\delta_1}{\delta_0}(A_i\times\mathfrak{O}_{l/\delta_1})\bigr), & \text{if }\delta_0 \mid a\delta_1,\\ 0, & \text{otherwise}.\end{cases}$$
The second expression for $\rho(s)$ follows from the first expression and the above formulas. The identity (3.52) is proved. $\square$
It remains to examine the convergence of the series and products in (3.52). According to (3.35) of Chapter 2 ((3.70) of Chapter 2 if $F$ is a cusp form), we have the inequalities $|f(mA_i)| \leqslant \gamma_F D^{k\rho} m^{2k\rho}$ ($m \in \mathbb{N}$), where $\gamma_F$ depends only on $F$. This implies that the Dirichlet series on the left in (3.52) converges absolutely and uniformly in any of the half-planes indicated in the theorem. The infinite product over $\mathfrak{p}$ on the right of (3.52) converges absolutely and uniformly in any right half-plane of the variable $s$ of the form $\operatorname{Re} s \geqslant k + \sigma - 1 + \varepsilon$ with $\varepsilon > 0$, since the following estimate is a consequence of the description in Appendix 3.2 of the regular prime ideals of $\mathfrak{O}_l$ whose norm does not divide $(ql)^2$:
$$\sum_{\mathfrak{p},\,N\mathfrak{p}\nmid(ql)^2}\Bigl|\frac{\chi(N\mathfrak{p})\,y(N\mathfrak{p})\,e(\mathfrak{p})}{(N\mathfrak{p})^{s-k+2}}\Bigr| \leqslant \sum_{\mathfrak{p},\,N\mathfrak{p}=p}\frac{p^{\sigma}}{p^{\operatorname{Re}s-k+2}} + \sum_{\mathfrak{p},\,N\mathfrak{p}=p^2}\frac{p^{2\sigma}}{p^{2(\operatorname{Re}s-k+2)}} \leqslant 2\sum_{p\in\mathbf{P}}\frac{1}{p^{\operatorname{Re}s-(k+\sigma-1)+1}} + \sum_{p\in\mathbf{P}}\frac{1}{p^{2(\operatorname{Re}s-(k+\sigma-1))+2}}$$
(the norm of any prime ideal of $\mathfrak{O}_l$ is obviously either $p$ or $p^2$, where $p \in \mathbf{P}$). In addition, in any of the indicated half-planes the modulus of the product is clearly bounded from below by a positive constant. From these observations and from the formal identity
$$\rho(s)\,\zeta(s, y; F) = \Bigl\{\prod_{\mathfrak{p}}\Bigl(1 - \frac{\chi(N\mathfrak{p})\,y(N\mathfrak{p})\,e(\mathfrak{p})}{(N\mathfrak{p})^{s-k+2}}\Bigr)\Bigr\}^{-1}\sum_{m\in\mathbf{N}_{(q)}}\frac{f(am,e)\,y(m)}{m^s} \tag{3.58}$$
it follows that the product $\rho(s)\,\zeta(s, y; F)$, regarded as a Dirichlet series, converges absolutely and uniformly in any of the half-planes indicated in the theorem. From this we cannot immediately conclude anything about the convergence of the Dirichlet series for $\zeta(s, y; F)$, since the factor $\rho(s)$ could turn out to be identically zero. However, since the product $\zeta(s, y; F)$ does not depend on $D$, $e$, or $a$, we might try to choose these parameters in such a way that $\rho(s) = \rho(s, a, D, e, y; F) \not\equiv 0$. From the conditions in the theorem it obviously follows that there exist an integer $D < 0$, a character $e$ of the group $H(D)$, $m \in \mathbf{N}_{(q)}$, and $a \in \mathbb{N}$ with $a \mid q^{\infty}$ such that (in the notation of the theorem) we have
$$f(am, e) = \sum_{i=1}^{h(D)} e(A_i)\,f(amA_i) \neq 0. \tag{3.59}$$
Let $D_0$ denote the smallest such $D$ in absolute value, and let $e_0$ and $a_0$ denote the corresponding $e$ and $a$. Using the second formula in (3.53) and the minimality of $D_0$, we find that
$$\rho_0 = \rho(s, a_0, D_0, e_0, y; F) = \sum_{i=1}^{h(D_0)} e_0(A_i)\,f(a_0 A_i) = f(a_0, e_0),$$
since, according to (3.6) and §2 of Appendix 3, the quadratic form with matrix $A_i \times \mathfrak{O}_{l/\delta_1}$ has discriminant equal to $D_0/\delta_1^2$. If we write out the identity (3.52) for $D = D_0$, $e = e_0$, $a = a_0$, and $y = 1$, and if we take into account that, by (3.59), the series on the left is not formally equal to zero, then we conclude that $f(a_0, e_0) = \rho_0 \neq 0$. Thus, the factor $\rho(s)$ in the identity (3.52) for $D = D_0$, $e = e_0$, and $a = a_0$ is equal to a nonzero constant. Hence, the Dirichlet series for $\zeta(s, y; F)$, along with the corresponding Euler product, converges absolutely and uniformly in the same regions as the Dirichlet series on the right. $\square$
The identities in (3.52) are analogous to those in (3.14). We call $\zeta(s, y; F)$ the zeta-function with character $y$ that is associated to the eigenfunction $F$.
PROBLEM 3.15. Let $F \in \mathfrak{M}_k^n$, and let $f(A)$ for $A \in \mathbf{A}_n$ be the Fourier coefficients of $F$. Suppose that for some prime $p$ the modular form $F$ is an eigenfunction of all of the Hecke operators $|T = |_{k,\chi}T$ for $T \in \mathbf{L}_p^n$, where $(-1)^k\chi(-1) = \varepsilon(-1)$, with eigenvalues $\lambda(T) = \lambda(T, F)$. Set
$$Q_p(v; F) = \sum_{j=0}^{2^n}(-1)^j\,\lambda(q_j(p))\,v^j,$$
where $q_j(p) \in \mathbf{L}_p^n$ are the coefficients of the polynomial (3.78) of Chapter 3, and let $\alpha_i = \alpha_i(p, F)$ for $i = 0, 1,\dots,n$ be the parameters of the homomorphism $T \to \lambda(T)$ in the sense of Proposition 3.36 of Chapter 3. Prove the following:
(1) (Zharkovskaia) The following formal identity holds for any fixed matrix $A \in \mathbf{A}_n$:
(2) The modular form $F|\Phi$ is an eigenfunction for all of the Hecke operators $|_{k,\chi}T'$ for $T' \in \mathbf{L}_p^{n-1}$. Supposing that $F|\Phi \neq 0$, we let $\lambda(T', F|\Phi)$ denote the corresponding eigenvalue, and we let $\alpha_i' = \alpha_i'(p, F|\Phi)$ for $i = 0, 1,\dots,n-1$ denote the parameters of the homomorphism $T' \to \lambda(T', F|\Phi)$. Then we can take
$$(\alpha_0(p,F),\ \alpha_1(p,F),\dots,\alpha_n(p,F)) = (\alpha_0'\,p^{k-n}\chi(p),\ \alpha_1',\dots,\alpha_{n-1}',\ p^{n-k}\chi(p)).$$
In particular, we have the relation
$$Q_p(v; F) = Q_p(v; F|\Phi)\,Q_p(p^{k-n}\chi(p)v;\,F|\Phi).$$
[Hint: Apply Theorem 2.12, Proposition 2.13, and Problem 2.17.]
PROBLEM 3.16. Let $F \in \mathfrak{M}_k^2(q,\chi)$, $F \neq 0$, where $k, q \in \mathbb{N}$ and $\chi$ is a Dirichlet character modulo $q$. Suppose that $F$ is an eigenfunction for all of the Hecke operators $|T = |_{k,\chi}T$ for $T \in \mathbf{L}^2(q)$ with eigenvalues $\lambda(T) = \lambda(T, F)$. For $p \in \mathbf{P}_{(q)}$ let $\alpha_0(p,F)$, $\alpha_1(p,F)$, $\alpha_2(p,F)$ be the parameters of the homomorphism $T \to \lambda(T, F)$ of the ring $\mathbf{L}_p^2$. Prove that:
(1) One has the inequalities
$$= \prod_{p\in\mathbf{P}_{(q)}}\Bigl(1 - \frac{\lambda(T^1(p), F|\Phi)\,y(p)}{p^{s}} + \frac{\chi(p)\,y(p^2)}{p^{2s-k+1}}\Bigr)^{-1}.$$
3. Modular forms of arbitrary degree and even zeta-functions. The point of departure for the theory presented below is the symmetric factorization of the polynomials $R_p^n(v)$ and $\hat{R}_p^n(v)$ in §6.5 of Chapter 3. Since the coefficients of these polynomials lie in the even subrings of the corresponding Hecke rings, we begin by making some remarks about the even subrings.
Let
$$\mathbf{E}^n(q) = D_q(\Gamma_0^n(q), S^n(q)_+), \tag{3.60}$$
where $S^n(q)_+$ is the subgroup of $S^n(q)$ consisting of matrices $M$ for which $r(M)$ is the square of a rational number, be the even subring of $\mathbf{L}^n(q)$. By analogy with Theorem 3.12 of Chapter 3, one easily verifies that the ring $\mathbf{E}^n(q)$ is generated by the even subrings (see (4.37) of Chapter 3)
$$\mathbf{E}_p^n(q) \tag{3.61}$$
of the rings $\mathbf{L}_p^n(q)$ for $p \in \mathbf{P}_{(q)}$. The imbedding $\epsilon_q$ in (5.3) of Chapter 3 maps each $\mathbf{E}_p^n(q)$ isomorphically onto the even subring $\mathbf{E}_p^n$ of $\mathbf{L}_p^n \subset \mathbf{L}_{0,p}$. Using the definition of the spherical map $\Omega = \Omega_p^n$ and the fact that it is a monomorphism on $\mathbf{L}_p^n$, we see that the image of $\mathbf{E}_p^n$ under this map is the subset of polynomials in $\Omega(\mathbf{L}_p^n)$ having even degree in the variable $x_0$. In particular, all of the coefficients of the polynomial (3.52) of Chapter 3 lie in $\Omega(\mathbf{E}_p^n)$, and hence their preimages in $\mathbf{L}_p^n$, i.e., the coefficients $r_i^n(p)$ of the polynomial $R_p^n(v)$, lie in $\mathbf{E}_p^n$:
$$R_p^n(v) \in \mathbf{E}_p^n[v]. \tag{3.62}$$
On the other hand, from (3.56)–(3.57) and Theorem 3.30 of Chapter 3 it follows that
$$\Omega(\mathbf{E}_p^n) = \mathbb{Q}[t^2, r_0^{\pm 1}, r_1,\dots,r_{n-1}],$$
and hence
$$\Omega(\mathbf{L}_p^n) = \mathbb{Q}[t, r_0^{\pm 1}, r_1,\dots,r_{n-1}] = \Omega(\mathbf{E}_p^n)[t] \quad\text{with } t^2 \in \Omega(\mathbf{E}_p^n).$$
Returning to the preimages, we conclude that $\mathbf{L}_p^n$ is an extension of degree two of the ring $\mathbf{E}_p^n$, or more precisely:
If the values of $f$ are the Fourier coefficients of a modular form $F$, and if $\pm\alpha_0, \alpha_1,\dots,\alpha_n$ are the parameters of the homomorphism $\tau \to \lambda(\tau, F) = \lambda(\tau; f)$, then from the definitions we easily find that
$$R_p^n(v, F) = R_p^n(v, f) = \prod_{i=1}^{n}(1 - \alpha_i(p)^{-1}v)(1 - \alpha_i(p)v),$$
$$\hat{R}_p^n(v, F) = \hat{R}_p^n(v, f) = \prod_{i=1}^{n}(1 - \alpha_i(p)^{-1}v)(1 - \alpha_i(p)v). \tag{3.66}$$
the local even zeta-function and the (global) even zeta-function of the modular form $F$ with character $y$.
Finally, in the above notation, given $A \in \mathbf{A}_n$, $f \in \mathfrak{F}$, and an arbitrary subset $\mathsf{A} \subset \mathbf{N}_{(q)}$, we define the formal Dirichlet series
$$D_w(s, A, f, \mathsf{A}) = \sum_{\substack{M\in\Lambda\backslash M_n\\ |\det M|\in\mathsf{A}}} \frac{y(|\det M|)\,\chi_w(\det M)\,f(A[{}^tM])}{|\det M|^{s+w-1}}, \tag{3.69}$$
in which the function $f\colon \mathbf{A}_n \to \mathbb{C}$ is regarded as an element of $\mathfrak{F}$ with $\varepsilon(-1) = \chi_w(-1)$, $\mathbf{x}_{w,i}^n(p) \in \mathbf{L}_{0,p}$ are the coefficients of the polynomial (6.101) of Chapter 3, and the right side is understood in the sense of (2.72).
Further suppose that $f(A) \neq 0$ for some $A \in \mathbf{A}_n^+$ and $|y(m)| \leqslant c\,m^{\sigma}$ for all $m \in \mathbf{N}_{(q)}$, where $c$ and $\sigma$ are real numbers that do not depend on $m$. Then the Dirichlet series on the left in (3.70) and the infinite products on the right converge absolutely and uniformly in any right half-plane of the variable $s$ of the form $\operatorname{Re} s > (2\rho - 1)w + \sigma + n + 1 + \varepsilon$ with $\varepsilon > 0$, where $\rho = 1$ in the general case and $\rho = 1/2$ if $F$ is a cusp form; and these holomorphic functions on the half-planes indicated are connected by the identity (3.70).
REMARKS. (1) From (6.34) of Chapter 3 it follows that $\mathbf{x}_{w,i}^n(p) \in \overline{\mathbf{C}}_p \subset \overline{\mathbf{C}}$. By Theorem 5.8 of Chapter 3, $\overline{\mathbf{C}} = \overline{\mathbf{C}}(1)$ is a commutative ring. Thus, it makes no difference in what order the primes appear on the right in (3.71).
(2) The action of the operators $|_{w,\chi}\mathbf{x}_{w,i}^n(p)$ on the space $\mathfrak{F}$ can be computed from the formulas (3.74).
(3) At the end of the proof of this theorem we show that, under the conditions of the theorem, there exist matrices $A_0 \in \mathbf{A}_n^+$ for which $X(s, A_0, f) = f(A_0) \neq 0$.
We first prove three lemmas.
LEMMA 3.18. For $a \in \mathbb{N}$ set
$$t_+(a) = t_+^n(a) = \sum_{\substack{D\in\Lambda\backslash M_n/\Lambda\\ \det D=\pm a}} (\Lambda(D)) \in \mathbf{L}_0^n.$$
Then:
(1) the elements $t_+(a)$ belong to the subring $\overline{\mathbf{C}} = \overline{\mathbf{C}}(1)$ of $\mathbf{L}_{0,p}$, and in particular commute with one another;
(2) if $a$ and $b$ are relatively prime, then
$$t_+(ab) = t_+(a)\,t_+(b); \tag{3.72}$$
(3) for any $g \in \mathfrak{F}$ one has the formula
$$\sum_{\substack{M\in\Lambda\backslash M_n\\ \det M=\pm a}} \chi_w(\det M)\,|\det M|^{-w}\,g(A[{}^tM]) = (g|_{w,\chi}a^{-n}t_+(a))(A) \quad (a\in\mathbb{N},\ A\in\mathbf{A}_n). \tag{3.73}$$
LEMMA 3.19. Suppose that the function $g \in \mathfrak{F}$ is an eigenfunction for all of the Hecke operators $|\tau = |_{w,\chi}\tau$, where $\tau \in \mathbf{E}_p^n$ for $w = k$ and $\tau \in \hat{\mathbf{E}}_p^n(q,\chi)$ for $w = k/2$, and $\varepsilon(-1) = \chi_w(-1)$. Then in the notation (3.65) the following formal identity holds for any matrix $A \in \mathbf{A}_n$:
$$R_p^n(v, g)\sum_{j=0}^{\infty}(g|p^{-jn}t_+(p^j))(A)\,v^j = B_p^n(v, A)\sum_{i=0}^{n}(-1)^i(g|\mathbf{x}_{w,i}^n(p))(A)\,v^i,$$
if $w = k$, and
$$\hat{R}_p^n(v, g)\sum_{j=0}^{\infty}(g|p^{-jn}t_+(p^j))(A)\,v^j = \hat{B}_{p,k}^n(v, A)\sum_{i=0}^{n}(-1)^i(g|\mathbf{x}_{w,i}^n(p))(A)\,v^i,$$
if $w = k/2$, where $B_p^n(v, A)$ and $\hat{B}_{p,k}^n(v, A)$ are the polynomials in Theorems 2.23 and 2.26, and $\mathbf{x}_{w,i}^n(p) \in \mathbf{L}_{0,p}^n$ are the coefficients of the polynomials (6.101) of Chapter 3.
PROOF. Since $\lambda(\tau(p), g)\,g = g|\tau(p)$, it follows that, using the notation (2.73), we can rewrite the left side of the identity in the lemma in the form
$$\sum_{j=0}^{\infty}\sum_{a=0}^{2n}(-1)^a(g|\tau_a(p)|p^{-jn}t_+(p^j))(A)\,v^a v^j,$$
where $B(v) = B_p^n(v) = \sum_{i=0}^{n}(-1)^i b_i v^i$. Since each of the functions $g|\mathbf{x}_{w,i}^n$ also lies in $\mathfrak{F}$, it follows by Proposition 2.20 that

where $B_p^n(v, A)$ is the polynomial in (2.79). If we substitute these expressions into the last formula and take Theorem 2.23 into account, we obtain the desired identity for $w = k$.
In the case $w = k/2$, instead of (6.99) one must use (6.100) of Chapter 3, and instead of Theorem 2.23 one uses Theorem 2.26. $\square$
LEMMA 3.20. The elements $\mathbf{x}_{w,i}^n(p)$ ($i = 0, 1,\dots,n$) act on the space $\mathfrak{F}$ according to the formulas
$$(g|_{w,\chi}\mathbf{x}_{w,i}^n(p))(A) = p^{wn-(n)+(n-i)}\chi(p)^n\sum_{D\in\Lambda\backslash\Lambda D_{n-i}\Lambda}\chi_w(\det D)\,|\det D|^{-w}\,g(p^{-2}A[{}^tD]), \tag{3.74}$$
where, as usual, $\varepsilon(-1) = \chi_w(-1)$ and $D_a = D_a^n(p)$ are the matrices (2.28) of Chapter 3. In particular,
$$(g|_{w,\chi}\mathbf{x}_{w,i}^n(p))(A) = 0 \quad\text{for } i = 1,\dots,n, \text{ if } p^2 \nmid \det A. \tag{3.75}$$
PROOF. The formulas in (3.74) follow directly from the formulas in (6.101) of Chapter 3 for the elements $\mathbf{x}_{w,i}^n(p)$ and from the formulas in Lemma 2.8 for the action of $\Lambda_n(p)$, $\Pi_- = \Pi_0^n(p)$, and $\Pi_{n-i}^n(p)$ on the space $\mathfrak{F}$. (3.75) is a consequence of (3.74), since in the case $p^2 \nmid \det A$ the matrix $A[{}^tD]$ cannot be divisible by $p^2$ for any $D$ with $\det D = \pm p^{n-i}$. $\square$
We now suppose that for some prime $p \in \mathbf{P}_{(q)}$ the function $g$ is an eigenfunction for all of the Hecke operators $|\tau = |_{w,\chi}\tau$ for $\tau \in \mathbf{E}_p^n$ if $w = k$ and $\tau \in \hat{\mathbf{E}}_p^n(q,\chi)$ if $w = k/2$, with the same eigenvalues as $f$, and we suppose that the set $\mathsf{A}$ can be represented in the form
$$\mathsf{A} = \bigcup_{j=0}^{\infty} \mathsf{A}_1 p^j, \quad\text{where } \mathsf{A}_1 \subset \mathbf{N}_{(pq)}. \tag{3.76}$$
hence commutes with $t_+(a)$. Thus, for every $a \in \mathsf{A}_1$ the function $g_a = g|a^{-n}t_+(a)$, along with $g = g_1$, is an eigenfunction for all of the operators $|_{k,\chi}T$ for $T \in \mathbf{E}_p^n$ and has the same eigenvalues as $g$. Hence, using Lemma 3.19, we obtain
$$R_p^n(v_p, g)\sum_{j=0}^{\infty}(g|a^{-n}t_+(a)|p^{-jn}t_+(p^j))(A)\,v_p^j = B_p^n(v_p, A)\sum_{i=0}^{n}(-1)^i(g_a|\mathbf{x}_{w,i}^n(p))(A)\,v_p^i.$$
If we again write $t_+(a)$ as a product of elements of the form $t_+(p_i^{r_i})$ with primes $p_i \neq p$ and take into account that $\mathbf{x}_{w,i}^n(p) \in \overline{\mathbf{C}}_p$, we conclude from Proposition 5.12(2) of Chapter 3 that the elements $t_+(a)$ for $a \in \mathsf{A}_1$ commute with the elements $\mathbf{x}_{w,i}^n(p)$. Hence,
In particular, if $(p, \det A) = 1$, then (3.77) and (3.75) imply the identities in which the right side is equal to $0$ for $i = 1,\dots,n$ and is equal to $D_w(s, A, g, \mathsf{A}_1)$ for $i = 0$. Thus, in this case (3.78) and (3.79) are transformed, respectively, to
$$R_p^n(v_p, F)\,D_k(s, A, g, \mathsf{A}) = B_p^n(v_p, A)\,D_k(s, A, g, \mathsf{A}_1), \tag{3.80}$$
$$\hat{R}_p^n(v_p, F)\,D_{k/2}(s, A, g, \mathsf{A}) = \hat{B}_{p,k}^n(v_p, A)\,D_{k/2}(s, A, g, \mathsf{A}_1).$$
Using the first equation in (3.80) and an obvious induction on $c = 1, 2,\dots$, we obtain the identity for $D_k(s, A_0, f, \mathbf{N}_{(q)})$, where $\mathsf{A} = \mathsf{A}(p_1,\dots,p_b) = \{a \in \mathbb{N};\ a \mid (p_1\cdots p_b)^{\infty}\}$. To prove (3.70) with $w = k$ it remains to verify the identities (3.82) for $p_1,\dots,p_d \in \mathbf{P}_{(q)}$, $A \in \mathbf{A}_n$, and any $g \in \mathfrak{F}$ that is an eigenfunction for all of the operators $|T$ for $T \in \mathbf{E}_{p_1}^n,\dots,\mathbf{E}_{p_d}^n$ with the same eigenvalues as $f$, where
Since all elements of the form $\mathbf{x}_{w,i}^n(p)$ lie in the commutative ring $\overline{\mathbf{C}}$, it follows that
$$\sum_{i=0}^{n}(-1)^i X_a(s, A, g|\mathbf{x}_{w,i}^n(p))\,v_p^i = \sum_{i=0}^{n}\Bigl(g|\mathbf{x}_{w,i}^n(p)\Bigl|\prod_{p'\mid p_1\cdots p_d}\Bigl(\sum_{j=0}^{n}(-1)^j\,\mathbf{x}_{w,j}^n(p')\,v_{p'}^j\Bigr)\Bigr)(A)\,v_p^i,$$
which proves (3.82), and hence also (3.70) for $w = k$. We note that the second equation in (3.80) implies (3.81) (with $B_p^n$ replaced by $\hat{B}_{p,k}^n$ on the right) for the Dirichlet series $D_{k/2}(s, A_0, f, \mathbf{N}_{(q)})$. Furthermore, by Proposition 5.12 of Chapter 3, elements of the rings $\overline{\mathbf{C}}_p$ and $\hat{\mathbf{E}}_{p_1}^n(q,\chi),\dots,\hat{\mathbf{E}}_{p_d}^n(q,\chi)$ commute in pairs. Thus, under the same conditions as above, from (3.79) we obtain the identity
$$= X_a(s, A, g)\prod_{p\mid p_1\cdots p_d}\hat{B}_{p,k}^n(v_p, A),$$
which has positive terms and is uniformly convergent for $t > n + \varepsilon$, i.e., for $\operatorname{Re} s > (2\rho - 1)w + \sigma + n + 1 + \varepsilon$, where $\varepsilon > 0$. The formulas in Theorems 2.23 and 2.26 imply that the product of the polynomials $B_p^n(y(p)p^{-s}, A_0)$ on the right of the identity converges absolutely and uniformly for $\operatorname{Re} s > 1 + \sigma + \varepsilon$, while the product of $\hat{B}_{p,k}^n(y(p)p^{-s}, A_0)$ converges absolutely and uniformly for $\operatorname{Re} s > 1/2 + \sigma + \varepsilon$; and in any of these half-planes the modulus is bounded from below by a positive constant. From these observations and the identity (3.70), we conclude that the product $X(s, A_0, f)\,\zeta_+(s, y, F)$, regarded as a Dirichlet series, converges absolutely and uniformly in any of the half-planes indicated in the theorem. Since $\zeta_+(s, y, F)$
does not depend on the choice of $A_0$, to complete the proof of the theorem it suffices to show that the sum $X(s, A_0, f)$ becomes a nonzero constant for some $A_0 \in \mathbf{A}_n^+$. By assumption, there exist matrices $A \in \mathbf{A}_n^+$ for which $f(A) \neq 0$. Among these matrices we choose a matrix $A_0$ of minimal determinant. Then (3.71) and (3.74) imply that $X(s, A_0, f) = f(A_0) \neq 0$. $\square$
PROBLEM 3.21. With the notation and assumptions of Theorem 3.17, show that the roots $\alpha_i(p)^{\pm 1}$ of the polynomials in (3.66) satisfy the inequalities
$$|\alpha_i(p)^{\pm 1}| \leqslant p^{(2\rho-1)w+n} \quad (i = 1,\dots,n;\ p \in \mathbf{P}_{(q)}).$$
PROBLEM 3.22. Let $F \in \mathfrak{M}_w^n$ be an eigenfunction for all of the Hecke operators $|_{w,\chi}\tau$ for $\tau \in \mathbf{E}_p^n$ if $w = k$ and for $\tau \in \hat{\mathbf{E}}_p^n(q,\chi)$ if $w = k/2$, where $\chi_w(-1) = \varepsilon(-1)$. Suppose that $F|\Phi \neq 0$, where $\Phi$ is the Siegel operator. Prove that the modular form $F|\Phi \in \mathfrak{M}_w^{n-1}$ is an eigenfunction for all of the operators $|_{w,\chi}\tau'$ for $\tau' \in \mathbf{E}_p^{n-1}$ (for $\tau' \in \hat{\mathbf{E}}_p^{n-1}(q,\chi)$ in the case $w = k/2$), and the polynomials (3.64) corresponding to $F$ and $F|\Phi$ are connected by the relation
$$R_p^n(v, F) = (1 - p^{n-k}\chi(p)v)(1 - p^{k-n}\chi(p)v)\,R_p^{n-1}(v, F|\Phi)$$
(in the case $w = k/2$ by the relation
$$\hat{R}_p^n(v, F) = (1 - p^{n-k/2}\chi(p)v)(1 - p^{k/2-n}\chi(p)v)\,\hat{R}_p^{n-1}(v, F|\Phi)).$$
1. Arbitrary fields. We let $S_n(K)$ denote the set of all symmetric $n \times n$-matrices over the field $K$. Two matrices $A, A' \in S_n(K)$ are said to be equivalent over $K$ if
$$A' = {}^tCAC = A[C], \quad\text{where } C \in GL_n(K). \tag{1.1}$$
The following identity, which is easy to verify, is often useful if one wants to simplify a matrix by replacing it with an equivalent one: if the upper-left $r \times r$-block $A_1 = A^{(r)}$ in the matrix $A = \begin{pmatrix} A_1 & A_2 \\ {}^tA_2 & A_4 \end{pmatrix} \in S_n(K)$ is nonsingular, where $0 < r < n$, then
$$A = \begin{pmatrix} A_1 & A_2 \\ {}^tA_2 & A_4 \end{pmatrix} = \begin{pmatrix} A_1 & 0 \\ 0 & A_4 - A_1^{-1}[A_2] \end{pmatrix}\Bigl[\begin{pmatrix} E & A_1^{-1}A_2 \\ 0 & E \end{pmatrix}\Bigr]. \tag{1.2}$$
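The block identity (1.2) can be spot-checked numerically. The helpers below (`mat`, `transform`, and the sample matrices are illustrative, not from the book) verify it for one rational symmetric matrix, using the book's convention $A[C] = {}^tCAC$:

```python
from fractions import Fraction

def mat(rows):
    return [[Fraction(x) for x in row] for row in rows]

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transform(A, C):
    """The book's A[C] = tC * A * C."""
    tC = [list(r) for r in zip(*C)]
    return mul(mul(tC, A), C)

# A with invertible upper-left 1x1 block A1 = (2), A2 = (1 3), A4 = [[4,0],[0,5]]
A = mat([[2, 1, 3],
         [1, 4, 0],
         [3, 0, 5]])
a1inv = Fraction(1, 2)
D = mat([[2, 0, 0],                       # diag(A1, A4 - A1^{-1}[A2])
         [0, 4 - a1inv * 1 * 1, 0 - a1inv * 1 * 3],
         [0, 0 - a1inv * 3 * 1, 5 - a1inv * 3 * 3]])
C = mat([[1, a1inv * 1, a1inv * 3],       # [[E, A1^{-1} A2], [0, E]]
         [0, 1, 0],
         [0, 0, 1]])
assert transform(D, C) == A               # identity (1.2)
```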
THEOREM 1.1. Let $A \in S_n(K)$. Suppose that there exists a column $c \in M_{n,1}(K)$ such that
$$a = A[c] \neq 0.$$
Then $A$ is equivalent over $K$ to a matrix of the form $\begin{pmatrix} a & 0 \\ 0 & A_1 \end{pmatrix}$ with $A_1 \in S_{n-1}(K)$.
PROOF. If we replace $A$ by the matrix $A[C]$, where $C$ is a nonsingular matrix with first column $c$, we may suppose that $A = \begin{pmatrix} a & * \\ * & * \end{pmatrix}$. The theorem then follows from (1.2). $\square$
If the characteristic of the field $K$ is not 2, then the assumption in Theorem 1.1 obviously holds for any nonzero matrix $A \in S_n(K)$. Hence, using Theorem 1.1 and induction on $n$, we have
THEOREM 1.2. If the characteristic of the field $K$ is not 2, then any matrix in $S_n(K)$ is equivalent over $K$ to a diagonal matrix.
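The induction in Theorems 1.1 and 1.2 is effective, and one can sketch it as symmetric Gaussian elimination over $\mathbb{Q}$ (characteristic $\neq 2$): split off a pivot, clear its row and column by congruence transformations as in (1.2), and recurse. A minimal sketch with a simple pivot rule (hypothetical helper names; the pivot choice suffices for generic input but is not a complete algorithm):

```python
from fractions import Fraction

def add_col(A, C, src, dst, factor=1):
    """One congruence step A -> tE A E: column dst += factor*src, then the same row operation."""
    n = len(A)
    for i in range(n):
        A[i][dst] += factor * A[i][src]
        C[i][dst] += factor * C[i][src]
    for j in range(n):
        A[dst][j] += factor * A[src][j]

def diagonalize(A0):
    """Return (D, C) with D = tC * A0 * C diagonal (generic rational input)."""
    n = len(A0)
    A = [[Fraction(x) for x in row] for row in A0]
    C = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for k in range(n):
        if A[k][k] == 0:                  # create a pivot as in Theorem 1.1
            for j in range(k + 1, n):
                if A[k][j] != 0:
                    add_col(A, C, src=j, dst=k)
                    break
        if A[k][k] == 0:
            continue                      # zero row/column: nothing to clear
        for j in range(k + 1, n):         # clear row and column k via (1.2)
            add_col(A, C, src=k, dst=j, factor=-A[k][j] / A[k][k])
    return A, C

A0 = [[0, 1, 2], [1, 0, 3], [2, 3, 0]]
D, C = diagonalize(A0)
assert all(D[i][j] == 0 for i in range(3) for j in range(3) if i != j)
T = [[sum(C[a][i] * A0[a][b] * C[b][j] for a in range(3) for b in range(3))
      for j in range(3)] for i in range(3)]
assert T == D                             # D = tC * A0 * C
```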
THEOREM 1.3. Suppose that the rank of $A \in S_n(K)$ is equal to $r$, where $0 < r < n$. Then $A$ is equivalent over $K$ to a matrix of the form
$$\begin{pmatrix} A_1 & 0 \\ 0 & 0 \end{pmatrix}, \quad A_1 \in S_r(K),\ \det A_1 \neq 0.$$
PROOF. Since $\operatorname{rank} A = r$, there exist $n - r$ linearly independent columns $c_{r+1},\dots,c_n \in M_{n,1}(K)$ satisfying the equation $Ax = 0$. Let $C$ be a nonsingular matrix whose last columns are $c_{r+1},\dots,c_n$. Then the matrix $A[C]$ has the required form. $\square$
308 APPENDIX I
SYMMETRIC MATRICES OVER A FIELD 309
(3) $\to$ (1). This is obvious, since (3) implies that $A$ is equivalent to the identity matrix, which is positive definite. $\square$
In particular, Theorem 1.5 implies that, if $P_n$ denotes the subset of all positive semidefinite matrices in $S_n(\mathbb{R})$, then $P_n$ is closed in $S_n(\mathbb{R})$.
THEOREM 1.6. Let $A, B \in S_n(\mathbb{R})$. Suppose that $A > 0$. Then there exists a matrix $C \in GL_n(\mathbb{R})$ such that $A[C] = E_n$ and the matrix $B[C]$ is diagonal.
PROOF. According to the previous theorem, $A$ is equivalent to the identity matrix: $A[C_1] = E_n$. By Theorem 1.4, there exists an orthogonal matrix $C_2$ such that the matrix $B[C_1][C_2] = B[C_1C_2]$ is diagonal. Since $A[C_1C_2] = {}^tC_2C_2 = E_n$, it follows that $C = C_1C_2$ is the required matrix. $\square$
The inequality (1.9) holds if $\det A = \det B = \det(A + B) = 0$. But if, for example, $\det(A + B) \neq 0$, then $A + B > 0$ and, by Theorem 1.6, the two matrices $A + B$ and $B$ (and hence the two matrices $A$ and $B$) can be simultaneously reduced to diagonal form. This reduces the proof of (1.9) to the case of semidefinite diagonal matrices, where it is obvious.
Using Theorem 1.6, we reduce (1.10) to the case when $B = E_n$ and $A$ is a diagonal matrix. In that case it is obvious. $\square$
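The inequality (1.9) referred to above is the superadditivity $\det(A + B) \geqslant \det A + \det B$ for positive semidefinite $A$ and $B$, as the reduction to diagonal matrices indicates (for nonnegative diagonal entries, $\prod(a_i + b_i) \geqslant \prod a_i + \prod b_i$). A brute-force spot-check over a sample of $2 \times 2$ Gram matrices (illustrative code, not from the book):

```python
from itertools import product

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def add2(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def gram(u, v):
    """Positive semidefinite 2x2 matrix tX*X for the 2x2 matrix X with rows u, v."""
    return [[u[0] * u[0] + v[0] * v[0], u[0] * u[1] + v[0] * v[1]],
            [u[1] * u[0] + v[1] * v[0], u[1] * u[1] + v[1] * v[1]]]

vectors = [(1, 0), (0, 1), (1, 1), (2, -1), (0, 0)]
psd = [gram(u, v) for u, v in product(vectors, repeat=2)]
for A, B in product(psd, repeat=2):
    assert det2(add2(A, B)) >= det2(A) + det2(B)   # inequality (1.9)
```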
APPENDIX 2
Quadratic Spaces
$$f\Bigl(\sum_{i=1}^{n} u_ie_i\Bigr) = q(u_1,\dots,u_n) \quad (u_1,\dots,u_n \in K), \tag{2.1}$$
where
$$q(x_1,\dots,x_n) = \sum_{1\leqslant i\leqslant j\leqslant n} q_{ij}x_ix_j \quad (q_{ij} \in K).$$
This definition clearly does not depend on the choice of basis. We use the term
quadratic space (over K) to denote a pair ( V, f) consisting of a free K-module V of
finite dimension and a quadratic function f on V. The quadratic form q is called the
form of the space (V, f) in the basis e1, ... , en. A different choice of basis clearly leads
to a form that is equivalent to q over K (see §1.1 of Chapter 1). Conversely, quadratic
forms that are equivalent to one another over K may be regarded as the forms of a
fixed quadratic space in different bases. Thus, the class $\{q\} = \{q\}_K$ of $q$ over $K$ is uniquely determined by the space $(V, f)$. We call the class $\{q\}_K$ the type of the space $(V, f)$.
By a morphism $\varphi\colon (V, f) \to (V_1, f_1)$ between two quadratic spaces over a ring $K$ we mean a linear map $\varphi\colon V \to V_1$ that satisfies the condition $f_1(\varphi(v)) = f(v)$ for all $v \in V$. A one-to-one surjective morphism is called an isomorphism, and in that case we say that the two spaces are isomorphic. It is clear that two spaces are isomorphic if and only if they correspond to the same class of forms.
Let $(V, f)$ be a quadratic space over $K$. Given a pair of vectors $u, v \in V$, we define the scalar product $u \cdot v \in K$ by setting
$$u \cdot v = f(u + v) - f(u) - f(v). \tag{2.2}$$
If $u = \sum_i u_ie_i$ and $v = \sum_j v_je_j$, then
$$u \cdot v = \sum_{1\leqslant i\leqslant j\leqslant n} q_{ij}(u_iv_j + v_iu_j) = \sum_{i,j=1}^{n} Q_{ij}u_iv_j, \tag{2.3}$$
where $Q_{ij} = Q_{ji} = q_{ij}$ for $1 \leqslant i < j \leqslant n$, and $Q_{ii} = 2q_{ii}$ for $1 \leqslant i \leqslant n$. In particular, this implies that the scalar product is linear in each factor. The matrix
$$Q = (Q_{ij}) = (e_i \cdot e_j) \in S_n(K)$$
is called the matrix of $f$ (or of the scalar product (2.2)) in the basis $e_1,\dots,e_n$; it is the same as the matrix of $q$ in (1.3) of Chapter 1. It is easy to see that, if we make a change of basis from $e_1,\dots,e_n$ to $e_i' = \sum_{j=1}^{n} a_{ij}e_j$, the matrix $Q$ is replaced by the matrix
$$Q' = (e_i' \cdot e_j') = AQ\,{}^tA, \quad\text{where } A = (a_{ij}). \tag{2.4}$$
This implies that the coset $\det Q \cdot (K^*)^2$ of the number $\det Q$ modulo the group of squares of units of the ring $K$ is independent of the choice of basis. This coset is called the determinant of the space $(V, f)$ and is denoted $d(V) = d(V, f)$. If $d(V)$ is a unit of the ring $K$, we say that the space $(V, f)$ is nonsingular.
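That $d(V) = \det Q \cdot (K^*)^2$ is well defined can be checked directly: under a change of basis as in (2.4) the determinant changes by the square $\det(A)^2$. A small numerical check over $\mathbb{Q}$ (illustrative helpers; the change-of-basis formula $Q' = AQ\,{}^tA$ is assumed):

```python
def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

Q = [[2, 1, 0], [1, 2, 1], [0, 1, 4]]    # Gram matrix of a quadratic space
A = [[1, 2, 0], [0, 1, 3], [1, 0, 1]]    # change-of-basis matrix, det A = 7
tA = [list(r) for r in zip(*A)]
Qp = mul(mul(A, Q), tA)                   # Q' = A Q tA
assert det3(Qp) == det3(Q) * det3(A) ** 2
```

So $\det Q' = 490 = 10 \cdot 7^2$ here, and the coset of $\det Q$ modulo squares is unchanged.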
A quadratic space $(V_1, f_1)$ is said to be a subspace of $(V, f)$ if $V_1 \subset V$ and $f_1$ coincides with the restriction of $f$ to $V_1$. We say that $(V, f)$ splits into a direct sum of $(V_i, f_i) \subset (V, f)$ ($1 \leqslant i \leqslant t$) and write
$$(V, f) = \bigoplus_{i=1}^{t}(V_i, f_i),$$
LEMMA 2.2. Let $(V_1, f_1)$ be a subspace of the quadratic space $(V, f)$ over a field $K$. Suppose that $V_1 \neq \{0\}$ and $V_1 \subset R(V)$. Then there exists a subspace $(V_2, f_2) \subset (V, f)$ such that
$$(V, f) = (V_1, f_1) \oplus (V_2, f_2). \tag{2.6}$$
PROOF. Let $e_1,\dots,e_r$ be a basis of $V_1$. We complete this basis to a basis $e_1,\dots,e_r, e_{r+1},\dots,e_n$ of $V$. We set $V_2 = \{Ke_{r+1} + \dots + Ke_n\}$, and we let $f_2$ be the restriction of $f$ to $V_2$. We then obviously have the direct sum decomposition (2.6). $\square$
PROOF. We choose a basis $e_1,\dots,e_n$ of $V$ in such a way that the first $d = \dim V_1$ basis vectors $e_1,\dots,e_d$ form a basis of $V_1$. Then the condition $u = \sum_{j=1}^{n} u_je_j \in V_1^{\perp}$ is equivalent to the system of equations
$$e_i \cdot u = \sum_{j=1}^{n} u_j\,e_i \cdot e_j = 0 \quad (i = 1,\dots,d),$$
whose matrix has rank $d$, since the $d \times d$-minor made up of the first $d$ columns is obviously equal to $d(V_1) \neq 0$. Hence $\dim V_1^{\perp} = n - d$. On the other hand, $V_1 \cap V_1^{\perp} = \{0\}$, since $V_1 \cap V_1^{\perp} \subset R(V_1)$, and $R(V_1) = \{0\}$ by Lemma 2.1. These facts obviously imply that every $u \in V$ can be uniquely written in the form $u = v_1 + v_2$ with $v_1 \in V_1$ and $v_2 \in V_1^{\perp}$. Since
$$f(u) = f(v_1 + v_2) = v_1 \cdot v_2 + f(v_1) + f(v_2) = f(v_1) + f(v_2),$$
the theorem is proved. $\square$
If the number 2 is a unit of the ring K, then the quadratic function f(u) can be
recovered from the scalar product u · v using the formula f(u) = (1/2)u · u; hence,
the quadratic space may be regarded in the usual way simply as a vector space with
bilinear scalar product. Otherwise, it may very well happen that the scalar product is
identically zero while the quadratic function is nonzero. Thus, in the general case it is
more convenient to start out with the quadratic function.
2. Nondegenerate spaces. A quadratic space $(V, f)$ is said to be degenerate if it splits into a direct sum of the form
$$(V, f) = (V_1, 0) \oplus (V_2, f_2),$$
where $\dim V_1 \geqslant 1$ and $0$ denotes the zero function on $V_1$. If there is no such direct sum decomposition, then the space is said to be nondegenerate.
LEMMA 2.6. If the quadratic space $(V, f)$ is nonsingular, then it is nondegenerate.
PROOF. Since $d(V_1, 0) = 0$, the lemma follows from (2.5). $\square$
LEMMA 2.8. If $(V, f)$ is any quadratic space of odd dimension over a field of characteristic 2, then the determinant $d(V, f)$ is zero.
PROOF. Let $Q = (Q_{ij})$ be the matrix of $f$ in some basis $e_1,\dots,e_n$, and let $A_{\alpha\beta}$ be the cofactors of $Q$. If we expand $\det Q$ along the $i$th row and sum these expansions as $i$ goes from 1 to $n$, we obtain
$$n\det Q = \sum_{i,j=1}^{n} Q_{ij}A_{ij} = \sum_{i=1}^{n} Q_{ii}A_{ii} = 0,$$
since $Q_{ij} = Q_{ji}$, $A_{ij} = A_{ji}$, and $Q_{ii} = 2f(e_i) = 0$; since $n$ is odd, this implies that $\det Q = 0$. $\square$
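Lemma 2.8 can be confirmed by brute force in the smallest case: over $\mathbb{F}_2$ the Gram matrix has zero diagonal, and every such $3 \times 3$ symmetric matrix is singular (illustrative check, not from the book):

```python
from itertools import product

def det_mod2(M, n):
    """Determinant mod 2 by cofactor expansion (signs are irrelevant mod 2)."""
    if n == 1:
        return M[0][0] % 2
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += M[0][j] * det_mod2(minor, n - 1)
    return total % 2

n = 3
offdiag = [(0, 1), (0, 2), (1, 2)]
for bits in product([0, 1], repeat=len(offdiag)):
    M = [[0] * n for _ in range(n)]
    for (i, j), b in zip(offdiag, bits):
        M[i][j] = M[j][i] = b             # symmetric with zero diagonal
    assert det_mod2(M, n) == 0            # d(V, f) = 0 for odd n in char 2
```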
THEOREM 2.9. Let $(V, f)$ be a nondegenerate quadratic space over the field $K = \mathbb{Z}/2\mathbb{Z}$. If $n = \dim V$ is even, then $(V, f)$ is nonsingular and is of one of the types $\{q_+^n\}$, $\{q_-^n\}$, where

Since the space $(V, f)$ is nondegenerate, so are the subspaces $(V_1, f_1)$ and $(V_2, f_2)$. Then, by what was proved above, the space $(V_1, f_1)$ is of type $\{q_+^{n-1}\}$ or $\{q_-^{n-1}\}$, and $f(e_n) = 1$. In the first case $(V, f)$ is clearly of type $\{q_+^{n-1} + x_n^2\} = \{q^n\}$. In the second case $(V_1, f_1)$ can be decomposed into the direct sum of subspaces $(V_3, f_3)$ and $(V_4, f_4)$ of types $\{q_+^{n-3}\}$ and $\{q_-^2\}$, respectively. Then the sum $(V_4, f_4) \oplus (V_2, f_2)$ has a basis $v_1, v_2, e_n$ satisfying the following conditions: $f(v_1) = f(v_2) = f(e_n) = 1$, $v_1 \cdot e_n = v_2 \cdot e_n = 0$, and $v_1 \cdot v_2 = 1$. If we replace this basis by $v_1 + e_n$, $v_2 + e_n$, $e_n$, we see that our direct sum is of type $\{x_1x_2 + x_3^2\} = \{q^3\}$. Hence, the full space $(V, f) = (V_3, f_3) \oplus (V_4, f_4) \oplus (V_2, f_2)$ is of type $\{q_+^{n-3} + x_{n-2}x_{n-1} + x_n^2\} = \{q^n\}$. $\square$
Since for fixed $\alpha$ the Gauss sum (2.13) depends only on the set of values of $f$, it follows that isomorphic spaces (and equivalent forms $q$) correspond to the same Gauss sum. Obviously,
$$G_p(\alpha, f) = G_p(\alpha q) = p^{\dim V}, \quad\text{if } \alpha = 0 \text{ or } f = 0. \tag{2.14}$$
Furthermore, if $(V, f) = (V_1, f_1) \oplus \dots \oplus (V_t, f_t)$, then
$$G_p(\alpha, f) = \prod_{j=1}^{t} G_p(\alpha, f_j), \tag{2.15}$$
which reduces the calculation of Gauss sums to the case of irreducible spaces.
Before proceeding to the computations, we recall the definition and basic properties of the Legendre symbol. Suppose that $p$ is an odd prime, $K = \mathbb{F}_p$, $K^*$ is the multiplicative group of the field $K$, and $(K^*)^2 = \{d^2;\ d \in K^*\}$ is the subgroup of squares. Since the kernel of the homomorphism $d \to d^2$ from $K^*$ to $(K^*)^2$ consists of $1$ and $-1 \neq 1$, it follows that $(K^*)^2$ has index 2 in $K^*$. Let $a \to (a/p)$ denote the unique nontrivial character of the quotient group $K^*/(K^*)^2$, regarded as a function on $K^*$. In other words, $(a/p) = 1$ or $-1$ depending on whether or not $a$ is a square. If $a$ is an integer not divisible by $p$, then the Legendre symbol $(a/p)$ is defined as $(\bar{a}/p)$, where $\bar{a}$ denotes the residue class of $a$ in $\mathbb{Z}/p\mathbb{Z}$. From the definition it follows that the Legendre symbol has the multiplicative property
$$(ab/p) = (a/p)(b/p). \tag{2.17}$$
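The Legendre symbol and the property (2.17) are easy to check computationally via Euler's criterion $(a/p) \equiv a^{(p-1)/2} \pmod{p}$, a standard fact not proved in this excerpt:

```python
def legendre(a, p):
    """Legendre symbol (a/p) via Euler's criterion, for an odd prime p and p not dividing a."""
    assert p > 2 and a % p != 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

p = 23
squares = {d * d % p for d in range(1, p)}
for a in range(1, p):
    assert legendre(a, p) == (1 if a in squares else -1)   # character of squares
    for b in range(1, p):
        assert legendre(a * b, p) == legendre(a, p) * legendre(b, p)   # (2.17)
```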
and
$$G_2(1, f) = 0.$$
PROOF. In the case $p \neq 2$ it follows from Theorem 2.4 that the space $(V, f)$ splits into the direct sum of $n$ one-dimensional subspaces $(V_1, f_1),\dots,(V_n, f_n)$, each of which is nondegenerate, because $(V, f)$ is nondegenerate. Then $(V_i, f_i)$ is of type $\{a_ix^2\}$, where $a_i \neq 0$, and we can use (2.15), (2.16), and (2.19) to obtain the desired formulas. The formulas for $G_2(1, f)$ follow from (2.15) and (2.18), since in the case of type $\{q_+^{2k}\}$ the space $(V, f)$ is the direct sum of $k$ spaces of type $\{q_+^2\}$, in the case of type $\{q_-^{2k}\}$ it is the direct sum of $k - 1$ spaces of type $\{q_+^2\}$ and one space of type $\{q_-^2\}$, and in the case of type $\{q^{2k+1}\}$ the direct summands include a space of type $\{x^2\}$ with zero Gauss sum. $\square$
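Assuming the definition $G_p(\alpha, f) = \sum_{v\in V} e^{2\pi i\,\alpha f(v)/p}$ for the Gauss sum (2.13), which is not reproduced in this excerpt, the one-dimensional computations just described can be checked numerically: for type $\{ax^2\}$ one finds $|G_p(1, f)| = \sqrt{p}$ and $G_p(1, ax^2) = (a/p)\,G_p(1, x^2)$:

```python
import cmath, math

def gauss_sum_1dim(a, p):
    """G_p(1, f) for the one-dimensional space of type {a*x^2} over F_p."""
    return sum(cmath.exp(2j * cmath.pi * ((a * x * x) % p) / p)
               for x in range(p))

p = 7
g = gauss_sum_1dim(1, p)
for a in range(1, p):
    ga = gauss_sum_1dim(a, p)
    assert abs(abs(ga) - math.sqrt(p)) < 1e-9          # |G_p| = sqrt(p)
    sym = 1 if pow(a, (p - 1) // 2, p) == 1 else -1    # Legendre symbol (a/p)
    assert abs(ga - sym * g) < 1e-9                    # G_p(1, a x^2) = (a/p) G_p(1, x^2)
```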
The number $e(V, f)$ is called the sign of the nondegenerate even-dimensional quadratic space $(V, f)$ over the field $\mathbb{F}_p$. From (2.15) and (2.20) it follows that the sign is multiplicative in the sense that
$$e\bigl((V_1, f_1) \oplus \dots \oplus (V_t, f_t)\bigr) = \prod_{i=1}^{t} e(V_i, f_i), \tag{2.23}$$
if the $(V_i, f_i)$ are nondegenerate even-dimensional spaces.
The Jacobi symbol is a generalization of (2.16). For any odd $b = p_1\cdots p_r$ with prime divisors $p_1,\dots,p_r$ it is defined by setting
$$\Bigl(\frac{a}{b}\Bigr) = \prod_{i=1}^{r}\Bigl(\frac{a}{p_i}\Bigr).$$
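By the definition above, the Jacobi symbol is the product of Legendre symbols over the prime factorization of $b$. A minimal sketch via trial division (fine for small odd $b$; `jacobi` and `legendre` are illustrative helpers, not the book's notation):

```python
def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p (0 if p divides a)."""
    r = pow(a % p, (p - 1) // 2, p)
    return 0 if r == 0 else (1 if r == 1 else -1)

def jacobi(a, b):
    """Jacobi symbol (a/b) for odd b > 0, as a product of Legendre symbols."""
    assert b > 0 and b % 2 == 1
    result, n, d = 1, b, 3
    while n > 1:
        while n % d == 0:
            result *= legendre(a, d)
            n //= d
        d += 2
        if d * d > n and n > 1:
            result *= legendre(a, n)    # the remaining n is prime
            break
    return result

assert jacobi(2, 15) == 1               # (2/3)(2/5) = (-1)(-1)
assert jacobi(3, 5) == -1
assert jacobi(9, 9) == 0                # gcd(9, 9) > 1
```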
4. Isotropy subspaces of nondegenerate spaces over residue fields. A nonzero quadratic space with zero quadratic form is called an isotropy space.
PROPOSITION 2.12. Suppose that $(V, f)$ is a nondegenerate quadratic space over the field $K = \mathbb{F}_p$, where $p$ is a prime, $V_1 = (V_1, 0)$ is an isotropy subspace of $(V, f)$, and $e_1,\dots,e_r$ is an arbitrary basis of $V_1$. Then there exist vectors $e_1',\dots,e_r' \in V$ satisfying the conditions:
(1) $e_i \cdot e_i' = 1$ and $f(e_i') = 0$ for $i = 1,\dots,r$;
(2) the subspaces $(P_1, f_1),\dots,(P_r, f_r)$, where $P_i = \{Ke_i + Ke_i'\}$ and $f_i$ is the restriction of $f$ to $P_i$, are pairwise orthogonal.
We first prove a lemma.
LEMMA 2.13. If $(V, f)$ is an arbitrary quadratic space with $d(V, f) \neq 0$ and $(V_1, f_1) \subset (V, f)$ is any subspace, then
$$\dim V_1^{\perp} = \dim V - \dim V_1 \quad\text{and}\quad (V_1^{\perp})^{\perp} = V_1.$$
PROOF. We choose a basis $e_1,\dots,e_n$ of $V$ in such a way that the first $r$ vectors form a basis of $V_1$. Since $d(V, f) = \det(e_i \cdot e_j) \neq 0$, the rows of the matrix $(e_i \cdot e_j)$ are linearly independent. In particular, the first $r$ rows of this matrix are linearly independent. This implies that the system of $r$ linear equations $u \cdot e_1 = 0,\dots,u \cdot e_r = 0$ in the coordinates of the vector $u = \sum_i u_ie_i \in V_1^{\perp}$ has $n - r$ linearly independent solutions; this proves the dimension formula in the lemma. From this formula it follows that
$$\dim(V_1^{\perp})^{\perp} = \dim V - (\dim V - \dim V_1) = \dim V_1.$$
On the other hand, we obviously have $V_1 \subset (V_1^{\perp})^{\perp}$. $\square$
PROOF OF THE PROPOSITION. We first use induction on $r$ to treat the case when $n = \dim V$ is even. Since $f(e_1) = 0$ and $(V, f)$ is nondegenerate, there exists $v \in V$ with $e_1 \cdot v = a \neq 0$ (see Lemma 2.2). We set $v_1 = a^{-1}v$ and $e_1' = v_1 - f(v_1)e_1$. Then $e_1 \cdot e_1' = e_1 \cdot v_1 = 1$ and $f(e_1') = v_1 \cdot (-f(v_1)e_1) + f(v_1) = 0$, which gives the result in the case $r = 1$. Suppose that $r > 1$ and the proposition has already been proved for $(r-1)$-dimensional subspaces. Set $V_0 = \{Ke_1 + \dots + Ke_{r-1}\}$. From Lemma 2.7 and Theorem 2.9 it follows that in the case under consideration $d(V, f) \neq 0$. Hence, by Lemma 2.13, $(V_0^{\perp})^{\perp} = V_0$. This implies that the vector $e_r$, which obviously lies in $V_0^{\perp}$ but not in $V_0$, is therefore not contained in $(V_0^{\perp})^{\perp}$. Thus, there exists $u \in V_0^{\perp}$ with $e_r \cdot u = \beta \neq 0$. If we replace $u$ by $e_r' = u_1 - f(u_1)e_r$, where $u_1 = \beta^{-1}u$, we obtain a vector $e_r' \in V_0^{\perp}$ that satisfies the conditions $e_r \cdot e_r' = e_r \cdot u_1 = 1$ and $f(e_r') = u_1 \cdot (-f(u_1)e_r) + f(u_1) = 0$. We set $P_r = \{Ke_r + Ke_r'\}$. Since $P_r \subset V_0^{\perp}$, it follows that $V_0 \subset P_r^{\perp}$. Since the plane $P_r$ has determinant $-1$, it follows from Theorem 2.3, the relations (2.5), and Lemma 2.6 that $P_r^{\perp}$ is a nondegenerate space of dimension $n - 2$. By the induction assumption, there exist vectors $e_1',\dots,e_{r-1}' \in P_r^{\perp}$ that satisfy the required conditions with respect to the basis $e_1,\dots,e_{r-1}$ of the isotropy subspace $V_0 \subset P_r^{\perp}$. Then the vectors $e_1',\dots,e_{r-1}', e_r'$ satisfy the conditions of the proposition.
Now suppose that n = dim V is odd. Since the matrix of the quadratic form
(2.12) over K = F_2 is clearly of rank n − 1, it follows that the radical R(V) = V^⊥ is
one-dimensional, and, since V is nondegenerate, it contains a unique vector e_0 with
f(e_0) = 1. Then e_0 ∉ V_1. Hence, the vectors e_0, e_1, ..., e_r are linearly independent,
and they can be completed to a basis e_0, e_1, ..., e_r, ..., e_{n−1} of the space V. We set
V' = {Ke_1 + ··· + Ke_{n−1}}. Then

(V, f) = ({Ke_0}, f_0) ⊕ (V', f'),

where f_0 and f' are the restrictions of f to the corresponding spaces. From this
decomposition and the nondegeneracy of (V, f) it follows that (V', f') is a nondegen-
erate even-dimensional space. Since V_1 ⊂ V', the proposition now follows from the
even-dimensional case considered above. □
which proves the proposition in the case r = 1. Now suppose that r > 1 and
the proposition has already been proved for smaller isotropic sets of vectors. If
i(V, f; r − 1) = 0, then i(V, f; r) = 0. In that case, by the induction assumption,
the corresponding expression (2.24) or (2.25) for r − 1 is equal to zero; but then
the expression for r is also clearly equal to zero, and the proposition is proved. So
we suppose that i(V, f; r − 1) ≠ 0, and we let e_1, ..., e_{r−1} be one of the isotropic
sets of r − 1 vectors. Since e_1, ..., e_{r−1} span an isotropy subspace, it follows that
for the basis e_1, ..., e_{r−1} of this subspace there exists a set of vectors e_1', ..., e_{r−1}'
with the properties in Proposition 2.12. Each of the pairwise orthogonal subspaces
(P_1, f_1), ..., (P_{r−1}, f_{r−1}) in this proposition has determinant −1. It hence follows
from Theorem 2.3 that the space (V, f) splits into the direct sum of subspaces of the
form

(2.26)   (V, f) = (P_1, f_1) ⊕ ··· ⊕ (P_{r−1}, f_{r−1}) ⊕ (V', f').
As a result of this decomposition, every vector v ∈ V can be uniquely written in the
form

v = Σ_{i=1}^{r−1} α_i e_i + Σ_{j=1}^{r−1} β_j e_j' + v',

where α_i, β_j ∈ K and v' ∈ V'. Since v · e_i = β_i, it follows that v is orthogonal
to e_1, ..., e_{r−1} if and only if β_1 = ··· = β_{r−1} = 0. In that case, since the vec-
tors e_1, ..., e_{r−1}, v' are pairwise orthogonal, we find that f(v) = f(α_1 e_1) + ··· +
f(α_{r−1} e_{r−1}) + f(v') = f(v'), and the condition f(v) = 0 is equivalent to the condi-
tion f(v') = 0. Finally, the vectors e_1, ..., e_{r−1}, v are linearly independent if and only
if v' ≠ 0. Thus, the set e_1, ..., e_{r−1} can be completed to an isotropic set of r vectors
e_1, ..., e_{r−1}, v in exactly p^{r−1} i(V', f'; 1) different ways, i.e.,

(2.27)   i(V, f; r) = p^{r−1} i(V', f'; 1) i(V, f; r − 1).
From (2.26) it follows that the subspace (V', f') is nondegenerate, and dim V' =
n − 2(r − 1). Since each of the subspaces (P_i, f_i) obviously has sign 1, it follows from
(2.26) and (2.23) that if n − 2(r − 1) is even, then the sign ε(V', f') is the same as the
sign ε(V, f). If n = 2(r − 1), then i(V, f; r) = 0 and the expression (2.24) is also zero.
If n > 2(r − 1), then, substituting into (2.27) the value of i(V', f'; 1) found above and
using the induction assumption, we obtain the required formulas. □
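The recursion (2.27) can be tested on a small example. The following sketch (my own choice of space, not the book's) takes the split form f(x) = x_1x_2 + x_3x_4 on V = F_3^4, counts i(V, f; 1) and i(V, f; 2) by brute force, and compares i(V, f; 2) with p^{r−1} i(V', f'; 1) i(V, f; r − 1) for r = 2; here (V', f') is the complementary hyperbolic plane, whose 4 nonzero isotropic vectors can be counted directly.

```python
from itertools import product

p = 3

def f(x):                                 # split form f(x) = x1*x2 + x3*x4 on F_3^4
    return (x[0] * x[1] + x[2] * x[3]) % p

def dot(x, y):                            # bilinear form x.y = f(x+y) - f(x) - f(y)
    s = tuple((a + b) % p for a, b in zip(x, y))
    return (f(s) - f(x) - f(y)) % p

V = list(product(range(p), repeat=4))
iso = [v for v in V if v != (0, 0, 0, 0) and f(v) == 0]
i1 = len(iso)                             # i(V, f; 1): nonzero isotropic vectors

def independent(u, w):                    # w not a scalar multiple of u
    return all(tuple((c * a) % p for a in u) != w for c in range(p))

# i(V, f; 2): ordered pairs of isotropic, orthogonal, independent vectors
i2 = sum(1 for u in iso for w in iso if dot(u, w) == 0 and independent(u, w))

# The complement of a hyperbolic plane here is again a hyperbolic plane
# f'(y) = y1*y2 on F_3^2, which has i(V', f'; 1) = 4 nonzero isotropic vectors.
i1_prime = 4
assert i2 == p ** (2 - 1) * i1_prime * i1   # formula (2.27) with r = 2
print(i1, i2)                               # 32 384
```

The brute-force counts i(V, f; 1) = 32 and i(V, f; 2) = 384 agree with (2.27): 3 · 4 · 32 = 384.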
The proofs of the facts given in Appendix 3 and more detailed information on this
topic can be found in [13].
1. Modules in algebraic number fields. Let k be a subfield of the field K. If K has
finite dimension n = [K : k] as a vector space over k, then we say that K is a finite
extension of k (of degree n). By the matrix (a_{ij}) of an element α ∈ K in the basis
{ω_i} of K over k we mean the matrix whose rows are the coordinates of αω_i in the
basis {ω_i}. The trace S(α) and the determinant N(α) of this matrix (a_{ij}) are called,
respectively, the trace and the norm of α (from K to k).
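As a concrete illustration (my own example, not from the text), the matrix, trace, and norm of α = a + b√2 in the basis {1, √2} of K = Q(√2) over k = Q can be computed as follows; row i holds the coordinates of αω_i in the basis {ω_i}.

```python
from fractions import Fraction as F

def matrix_of(a, b):
    # alpha * 1       = a   + b*sqrt(2)  -> row (a, b)
    # alpha * sqrt(2) = 2*b + a*sqrt(2)  -> row (2b, a)
    return [[F(a), F(b)], [2 * F(b), F(a)]]

def trace(m):                  # S(alpha) = trace of the matrix
    return m[0][0] + m[1][1]

def norm(m):                   # N(alpha) = determinant of the matrix
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

m = matrix_of(3, 1)            # alpha = 3 + sqrt(2)
print(trace(m), norm(m))       # 6 and 3^2 - 2*1^2 = 7
```

For α = 3 + √2 this gives S(α) = 2·3 = 6 and N(α) = 3² − 2·1² = 7, matching the conjugate formulas of subsection 2 below.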
In particular, if k = Q, then K is called an algebraic number field. In this case
we say that an element α ∈ K is an integer if the coefficients of its characteristic
polynomial ch(v, α) = det((a_{ij}) − vE_n) belong to the ring of rational integers Z. By a
module in K we mean any finitely generated Z-submodule M ⊂ K. As a free abelian
group M has a basis (over Z) ω_1, ..., ω_m. If the rank m of M is equal to n = [K : Q],
we say that M is a full module. Suppose that M_1 and M_2 are modules with bases {ω_i}
and {η_j}. Then the set of integer linear combinations of the products ω_i η_j is also a
module, which is denoted M_1M_2 and is called the product of M_1 and M_2. The product
of two full modules is a full module.
A full module is called an order of the field K if it contains 1 and is a ring. Since
the matrix entries for an element in any order D' with respect to a basis of the same
order are rational integers, it follows that D' is contained in the set D = D_K of all
integers of K. A typical example of an order of K is the ring of multipliers of a full
module M, defined as D_M = {α ∈ K; αM ⊂ M}. If M_1 and M_2 are similar modules,
i.e., if M_2 = αM_1 for some nonzero α ∈ K, then obviously D_{M_1} = D_{M_2}. For every
full module M there exists a similar module contained in D_M. The norm N(M) of a
module M is the absolute value of the determinant of the transition matrix from a basis
of M to a basis of D_M. N(M) does not depend on the choice of bases; if M ⊂ D_M,
it is equal to the index of M in D_M. Let {ω_i} be a basis of the full module M. The
number d(M) = det(S(ω_iω_j)), which is also independent of the choice of basis, is
called the discriminant of M.
2. Modules and primes in quadratic fields. Any extension K ⊃ Q of degree two is
of the form K = Q + Q√d_0 = Q(√d_0), where d_0 ≠ 0, 1 is a squarefree rational integer.
If we compute the matrix of the element α = a + b√d_0, a, b ∈ Q, in the basis 1, √d_0, we
find that ch(v, α) = v² − 2av + (a² − d_0b²). Consequently, ch(v, α) = (v − α)(v − α'),
where α' = a − b√d_0 is the conjugate of α (over Q). The element α has trace
S(α) = α + α' = 2a and norm N(α) = αα' = a² − d_0b²; it is an integer of K if and only if
S(α) and N(α) are both in Z. This implies that the set D of integers of the field
K = Q(√d_0) is the order with basis 1, ω, where ω = (1 + √d_0)/2 or √d_0 depending
on whether d_0 ≡ 1 or 2, 3 (mod 4); the discriminant d = d(D), which is called the
discriminant of the field K, is equal to d_0 or 4d_0, respectively. Any order of K has the
form D_l = Z + Zlω, where l ∈ N is the index of D_l in D; the discriminant of the order
is dl².
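The two cases for the maximal order and its discriminant can be checked directly. The sketch below (my own check, not from the text) stores a + b√d_0 as the pair (a, b), takes the basis {1, ω} described above, and evaluates d(D) = det(S(ω_iω_j)); it returns d_0 for d_0 ≡ 1 (mod 4) and 4d_0 otherwise.

```python
from fractions import Fraction as F

def mul(x, y, d0):
    # (a + b*sqrt(d0)) * (c + d*sqrt(d0)) = (ac + d0*bd) + (ad + bc)*sqrt(d0)
    (a, b), (c, d) = x, y
    return (a * c + d0 * b * d, a * d + b * c)

def S(x):                                 # trace of a + b*sqrt(d0) is 2a
    return 2 * x[0]

def discriminant(d0):
    # Basis of the maximal order D: {1, w}, w = (1+sqrt(d0))/2 or sqrt(d0)
    w = (F(1, 2), F(1, 2)) if d0 % 4 == 1 else (F(0), F(1))
    basis = [(F(1), F(0)), w]
    g = [[S(mul(u, v, d0)) for v in basis] for u in basis]   # Gram matrix S(w_i w_j)
    return g[0][0] * g[1][1] - g[0][1] * g[1][0]

print(discriminant(5), discriminant(3))   # 5 and 12 = 4*3
```

For d_0 = 5 the basis is {1, (1+√5)/2} and d(D) = 5 = d_0; for d_0 = 3 it is {1, √3} and d(D) = 12 = 4d_0, as stated.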
Every full module M ⊂ K is generated by two elements α, β, where α ≠ 0 and
γ = β/α ∉ Q; in this case we write M = {α, β}. If γ ∈ K, we let ch(v, γ) = av² + bv + c
denote the polynomial obtained by multiplying the characteristic polynomial ch(v, γ)
by a rational number in such a way that a > 0 and a, b, c are relatively prime integers. The
significance of the polynomial ch(v, γ) is that for M = {1, γ} with γ ∉ Q we have
In 1987 the first author published the book Quadratic Forms and Hecke Opera-
tors, Grundlehren der Mathematischen Wissenschaften 286, Springer-Verlag. It was
devoted to the multiplicative properties of modular forms of integer weight and qua-
dratic forms in an even number of variables. Meanwhile, the second author had
carried over a large part of the theory to modular forms of half-integer weight and
quadratic forms in an odd number of variables. Hence, when the question arose of
preparing a Russian edition, it was decided that, rather than merely reproduce the
original English version, we would expand it by including the multiplicative properties
of modular forms of half-integer weight. In order not to increase the size of the book,
it was necessary to omit sections on the action of Hecke operators on the theta-series
of quadratic forms. The result was the present volume.
after a gap caused by the War, these operators were examined in [28]. For modular
forms of degree 1 and half-integer weight, Hecke operators were introduced in [49];
however, the approach adopted by Shimura in [43] turned out to be more fruitful.
§1.2-The existence of a basis of eigenfunctions of the Hecke operators in the
invariant spaces of cusp-forms of degree 1 was proved by Petersson in [33]. Petersson's
idea was first used for Siegel modular forms by Maass in [28]. In [18] the author
proved the existence of a basis of eigenfunctions in the entire space of modular forms
of integer weight for a broad class of congruence subgroups; however, this paper had
some errors, partially noted in [19], that make it difficult to use. In the present book we
prove the existence of a basis of eigenfunctions only for spaces of cusp-forms relative to
q-regular pairs (Theorem 1.9) and for invariant subspaces of the space of all modular
forms for the full modular group Γ^n (Theorem 2.16). Here we do not even touch upon
the important and extensively studied question of spaces spanned by theta-series that
are invariant relative to the Hecke operators. In this connection see [20, 8-12, 21].
§2.3-The relations (2.46) were obtained in [50] for the full modular group and the
trivial character. Our exposition follows the same idea. Zharkovskaya's work arose as
a result of attempts to generalize the Maass commutation relations [28] for the Siegel
operators and the Hecke operators corresponding to T²(m).
§2.4-The computations in this subsection were carried out in [7, 8, 53].
§3.1-The results in this subsection are essentially due to Hecke [24].
§3.2-The results in this subsection were obtained for the case q = 1 in [3] and in
their final form in [4]. Here we follow the same ideas. Similar questions were examined
for groups of the form Γ²(q) in [17]. In [4] in the case q = 1 it was proved that the
function

Ψ(s, F) = (2π)^{−2s} Γ(s) Γ(s − k + 2) ζ(s, 1; F),

where Γ(s) is the gamma-function, can be analytically continued to a meromorphic
function on the entire s-plane, where it has at most four simple poles and satisfies the
functional equation

Ψ(2k − 2 − s, F) = (−1)^k Ψ(s, F),

where k is the weight of the eigenfunction F.
§3.3-The results presented here were obtained for n = 1 in [43] and [44], for n = 2
and integer weight in [5], and for arbitrary degree and weight in [7, 8, 53]. Analogous
series for the group Γ²(q) were studied in [23]. In [13] it was proved that, if F is a
modular form of even weight and χ is a Dirichlet character, then the even zeta-function
ζ⁺(s, χ; F) extends meromorphically onto the entire s-plane; and in the case q = 1,
χ = 1 (and under certain restrictions) a functional equation was found for the even
zeta-function. The subsequent development of the theory of even zeta-functions is due
to Böcherer, who, in particular, managed to remove the restriction alluded to above
(S. Böcherer, Über die Funktionalgleichung automorpher L-Funktionen zur Siegelschen
Modulgruppe, J. Reine Angew. Math. 362 (1985), 146-168).
References
1. A. N. Andrianov, Rationality theorems for Hecke series and zeta-functions of the groups GL_n and Sp_n
over local fields, Izv. Akad. Nauk SSSR Ser. Mat. 33 (1969), no. 3, 466-505; English transl. in Math.
USSR-Izv. 3 (1969).
2. ___, Spherical functions for GL_n over local fields, and the summation of Hecke series, Mat. Sb. 83
(1970), no. 3, 429-451; English transl. in Math. USSR-Sb. 12 (1970).
3. ___, Dirichlet series with Euler product in the theory of Siegel modular forms of genus 2, Trudy
Mat. Inst. Steklov. 112 (1971), 73-94; English transl. in Proc. Steklov Inst. Math. 112 (1973).
4. ___, Euler products that correspond to Siegel's modular forms of genus 2, Uspekhi Mat. Nauk 29
(1974), no. 3, 43-110; English transl. in Russian Math. Surveys 29 (1974).
5. ___, Symmetric squares of zeta-functions of Siegel modular forms of genus 2, Trudy Mat. Inst. Steklov.
142 (1976), 22-45; English transl. in Proc. Steklov Inst. Math. 1979, no. 3.
6. ___, The expansion of Hecke polynomials for the symplectic group of genus n, Mat. Sb. 104 (1977),
no. 3, 390-427; English transl. in Math. USSR-Sb. 33 (1977).
7. ___, Euler expansions of the theta-transform of Siegel modular forms of genus n, Mat. Sb. 105 (1978),
no. 3, 291-341; English transl. in Math. USSR-Sb. 34 (1978).
8. ___, Multiplicative arithmetic of Siegel's modular forms, Uspekhi Mat. Nauk 34 (1979), no. 1,
67-135; English transl. in Russian Math. Surveys 34 (1979).
9. ___, Action of Hecke operator T(p) on theta series, Math. Ann. 247 (1980), 245-254.
10. ___, Integral representations of quadratic forms by quadratic forms: multiplicative properties, Pro-
ceedings of the International Congress of Mathematicians, Vol. 1, 2 (Warsaw, 1983), PWN, Warsaw,
1984, pp. 465-474.
11. ___, Hecke operators and representations of binary quadratic forms, Trudy Mat. Inst. Steklov. 165
(1984), 4-15; English transl. in Proc. Steklov Inst. Math. 1985, no. 3.
12. ___, Representations of an even zeta-function by theta-series, Zap. Nauchn. Sem. Leningrad. Otdel.
Mat. Inst. Steklov. (LOMI) 134 (1984), 5-14; English transl. in J. Soviet Math. 36 (1987).
13. A. N. Andrianov and V. L. Kalinin, Analytic properties of standard zeta-functions of Siegel modular
forms, Mat. Sb. 106 (1978), no. 3, 323-339; English transl. in Math. USSR-Sb. 35 (1979).
14. A. N. Andrianov and G. N. Maloletkin, Behavior of theta-series of genus n under modular substitutions,
Izv. Akad. Nauk SSSR Ser. Mat. 39 (1975), no. 2, 243-258; English transl. in Math. USSR-Izv. 9
(1975).
15. Z. I. Borevich and I. R. Shafarevich, Number theory, Pure Appl. Math., vol. 20, Academic Press, New
York, 1966.
16. M. Eichler, Introduction to the theory of algebraic numbers and functions, Pure Appl. Math., vol. 23,
Academic Press, New York, 1966.
17. S. A. Evdokimov, Euler products for congruence subgroups of the Siegel group of genus 2, Mat. Sb. 99
(1976), no. 4, 483-513; English transl. in Math. USSR-Sb. 28 (1976).
18. ___, A basis composed of eigenfunctions of Hecke operators in the theory of modular forms of genus
n, Mat. Sb. 115 (1981), no. 3, 337-363; English transl. in Math. USSR-Sb. 43 (1982).
19. ___, Letter to the editors, Mat. Sb. 116 (1981), no. 4, 603; English transl. in Math. USSR-Sb. 44
(1983).
20. E. Freitag, Die Invarianz gewisser von Thetareihen erzeugter Vektorräume unter Heckeoperatoren,
Math. Z. 156 (1977), no. 2, 141-155.
21. ___, Eine Bemerkung zu Andrianovs expliziten Formeln für die Wirkung der Heckeoperatoren auf
Thetareihen, E. B. Christoffel (Aachen/Monschau, 1979), Birkhäuser, Basel, 1981, pp. 336-351.
52. V. G. Zhuravlev, Hecke rings for a covering of the symplectic group, Mat. Sb. 121 (1983), no. 3, 381-402;
English transl. in Math. USSR-Sb. 49 (1984).
53. ___, Euler expansions of theta-transformations of Siegel modular forms of half-integer weight and
their analytic properties, Mat. Sb. 123 (1984), no. 2, 174-194; English transl. in Math. USSR-Sb. 51
(1985).
List of Notation