
Topics in analytic number theory, Lent 2013.

Lecture 24: Selberg's theorem


Bob Hough
March 13, 2013

Reference for this lecture: Selberg, Contributions to the theory of the Riemann zeta-function. In Collected Papers, vol. 1.

Recall from last lecture our approximate formula for the logarithm of $\zeta$ to the right of the half-line.
Theorem. Assume RH. Let $2 < x < t$ and let $\sigma = \frac{1}{2} + \frac{A}{\log x} < 1$ with $A > A_0$. Set $s = \sigma + it$. There exists $\theta$, $|\theta| < 1$, such that
$$\log \zeta(s) = \sum_{p^n \le x} \frac{1}{n p^{ns}} + \sum_{x < p^n \le x^2} \frac{1}{n p^{ns}}\,\frac{\log\frac{x^2}{p^n}}{\log x} \tag{1}$$
$$\qquad + \frac{2\theta}{\log x}\left(\sum_{p^n \le x} \frac{\log p}{p^{ns}} + \sum_{x < p^n \le x^2} \frac{\log p}{p^{ns}}\,\frac{\log\frac{x^2}{p^n}}{\log x}\right) + O\!\left(\frac{\log t}{\log x}\right).$$

We may deduce an immediate corollary.


Corollary 1 (RH implies Lindelof). Assume RH. Let $t > 2$ be large. We have
$$|\zeta(1/2 + it)| \le \exp\left(O\!\left(\frac{\log t}{\log\log t}\right)\right).$$

Proof. Let $x = \log t$. Comparing $|\zeta(1/2 + it)|$ with $|\zeta(1/2 + 1/\log x + it)|$ by integrating $\zeta'/\zeta$ across the segment joining the two points (under RH the zeros contribute with a favourable sign), we have
$$\log|\zeta(1/2 + it)| \le O\!\left(\frac{\log t}{\log x}\right) + \log|\zeta(1/2 + 1/\log x + it)|.$$
But by (1),
$$\Re \log \zeta(1/2 + 1/\log x + it) \le O\!\left(\frac{\log t}{\log x}\right) + \sum_{p^n < x^2} \frac{1}{n p^{n/2}} + \frac{2}{\log x} \sum_{p^n < x^2} \frac{\log p}{p^{n/2}}.$$
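To conclude, one can use the Chebyshev-type bounds $\sum_{p \le y} p^{-1/2} \ll \frac{\sqrt{y}}{\log y}$ and $\sum_{p^n \le y} \frac{\log p}{p^{n/2}} \ll \sqrt{y}$ (standard estimates, not specific to this argument) with $y = x^2$ and $x = \log t$:
$$\sum_{p^n < x^2} \frac{1}{n p^{n/2}} + \frac{2}{\log x} \sum_{p^n < x^2} \frac{\log p}{p^{n/2}} \ll \frac{x}{\log x} = \frac{\log t}{\log\log t},$$
and also $\frac{\log t}{\log x} = \frac{\log t}{\log\log t}$, so that $\log|\zeta(1/2 + it)| \ll \frac{\log t}{\log\log t}$, which gives the corollary.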

Selberg's theorem
We now turn to Selberg's theorem, which describes the distribution of values of $\log \zeta(1/2 + it)$. The formal set-up is the following. We work on the probability space
$$(\Omega, \mu) = ([1, 2],\ dt),$$

the interval $[1,2]$ with Lebesgue measure. We work with a continuous family of random variables $\{X_T(t)\}_{T \ge 10^6}$, which are functions $[1,2] \to \mathbb{C}$. We'll consider
$$X_{\zeta,T}(t) = \frac{1}{\sqrt{\log\log T}}\,\log \zeta\!\left(\frac12 + itT\right),$$
so we are studying $\log \zeta(1/2 + it)$ in the interval $[T, 2T]$.


Let $C_0(\mathbb{C})$ be the space of continuous functions of compact support in $\mathbb{C}$. We say that $X_T$ converges weak-* to a measure $\mu$ on $\mathbb{C}$ if for each $f \in C_0(\mathbb{C})$
$$\lim_{T \to \infty} \int_1^2 f(X_T(t))\,dt = \int_{\mathbb{C}} f(z)\,d\mu(z).$$

Let
$$\mu_G = \frac{1}{\pi} e^{-|z|^2}\,dz$$
denote the standard complex Gaussian measure on $\mathbb{C}$, under which the real and imaginary parts of $z$ are independent Gaussians of mean $0$ and variance $\frac12$.

Theorem 24.1 (Selberg). The family $X_{\zeta,T}$ converges weak-* to $\mu_G$.



Thus $\log \zeta(1/2 + it)$ typically varies on a scale of $\sqrt{\log\log T}$ in the interval $t \in [T, 2T]$.
In the proof we'll use that the Gaussian distribution is determined by its moments.

Theorem 24.2 (Method of moments). Let $X_T$ be a continuous family of random variables on a probability space $(\Omega, \mu)$. A sufficient condition for the weak-* convergence $X_T \Rightarrow \mu_G$ is that, for each $j, k \in \mathbb{Z}_{\ge 0}$,
$$\lim_{T \to \infty} \int_\Omega X_T^j\,\overline{X_T}^{\,k}\,d\mu = \mathbf{1}_{j=k}\,j! = \int_{\mathbb{C}} z^j \bar{z}^k\,d\mu_G(z).$$

See e.g. Billingsley, Probability and Measure, Chapter 5.
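For instance, the Gaussian moments may be checked in polar coordinates $z = re^{i\phi}$: the angular integral vanishes unless $j = k$, and for $j = k$
$$\int_{\mathbb{C}} |z|^{2j}\,d\mu_G(z) = \frac{1}{\pi} \int_0^{2\pi}\!\!\int_0^\infty r^{2j} e^{-r^2}\,r\,dr\,d\phi = \int_0^\infty u^j e^{-u}\,du = j!.$$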


We also use the following lemma.

Lemma 24.3. Suppose $1 \le m, n \le Y$. Then
$$\int_1^2 \left(\frac{m}{n}\right)^{itT} dt = \begin{cases} 1 & m = n \\ O(Y/T) & m \ne n. \end{cases}$$

Proof. If $m = n$ this is obvious.
If $m \ne n$ then we use the following observation. Let $f \in \mathbb{R}$, $f \ne 0$. Then $\int_1^2 e^{ift}\,dt = O(1/|f|)$. To see this, note that $e^{ift}$ has periods of length $\asymp \frac{1}{|f|}$. Split $[1,2]$ into some full periods and one partial period. Over each full period the integral is $0$, and over the partial period use $|e^{ift}| = 1$.
Now if $m \ne n$, say without loss of generality that $m > n$. Then $m - n \ge 1$, from which it follows that $\log\frac{m}{n} = \log\left(1 + \frac{m-n}{n}\right) \ge \log\left(1 + \frac{1}{Y}\right) \gg \frac{1}{Y}$. Thus the above observation applies with $f = T\log\frac{m}{n} \gg \frac{T}{Y}$.
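Alternatively, the observation follows by a one-line direct integration:
$$\int_1^2 e^{ift}\,dt = \frac{e^{2if} - e^{if}}{if}, \qquad \left|\int_1^2 e^{ift}\,dt\right| \le \frac{2}{|f|}.$$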
Sketch proof of Selberg's theorem. We set $\log x = \frac{\log T}{\log\log\log T}$, so that $x < T^{\epsilon}$ for any fixed $\epsilon > 0$ once $T$ is large. The proof proceeds in three steps.

Step 1
In the first step we take moments to show that the sum over primes which approximates $\log \zeta(1/2 + A/\log x + it)$ tends to a Gaussian limit. It will then remain to show that the distribution of $\log \zeta(1/2 + it)$ is similar.
Set
$$X_{1,T} = \frac{1}{\sqrt{\log\log T}} \sum_{p \le x} \frac{1}{p^{1/2 + itT}}.$$
We have
$$\int_1^2 X_{1,T}(t)^j\,\overline{X_{1,T}(t)}^{\,k}\,dt = \frac{1}{(\log\log T)^{\frac{j+k}{2}}} \sum_{\substack{p_1, \dots, p_j \le x \\ q_1, \dots, q_k \le x}} \frac{1}{\sqrt{p_1 \cdots p_j\, q_1 \cdots q_k}} \int_1^2 \left(\frac{q_1 \cdots q_k}{p_1 \cdots p_j}\right)^{itT} dt.$$

Since $x^k, x^j \le T^{\epsilon}$, the integral is $O(T^{-1+\epsilon})$ unless $p_1 \cdots p_j = q_1 \cdots q_k$. This can only happen if $j = k$. So if $j \ne k$ then the moment is bounded by $T^{-1+\epsilon}$, which tends to zero.
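In more detail, one may apply Lemma 24.3 with $Y = x^{j+k}$ (a choice made here for bookkeeping): each off-diagonal pair of tuples contributes $O(x^{j+k}/T)$, the coefficients are at most $1$, and there are at most $x^{j+k}$ pairs, so the total off-diagonal contribution is
$$\ll x^{j+k} \cdot \frac{x^{j+k}}{T} \ll T^{-1+\epsilon}$$
for fixed $j, k$, since $x = T^{o(1)}$.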
When $j = k$, we neglect the error and take only those terms with $p_1 \cdots p_j = q_1 \cdots q_j$. For a fixed $p_1, \dots, p_j$ the number of possible $q_1, \dots, q_j$ is $j!$ if all of the $p_i$ are distinct, and is smaller otherwise. The moment becomes
$$\frac{1}{(\log\log T)^j} \left(\sum_{p_1, \dots, p_j \le x} \frac{j!}{p_1 \cdots p_j} + O\!\left(\sum_{p_1, \dots, p_{j-1} \le x} \frac{1}{p_1^2\, p_2 \cdots p_{j-1}}\right)\right),$$
where the second sum bounds the terms with at least one $p_i$ repeated (we allow the constant in the big $O$ to depend on $j$). Since each sum over $p_i$ contributes a factor of $\log\log x$, the error is of lower order. It follows that the moment becomes
$$(1 + o(1))\,\frac{j!\,(\log\log x)^j}{(\log\log T)^j} = (1 + o(1))\,j!.$$

This concludes the first step.
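Here the final equality uses Mertens' estimate together with the choice of $x$:
$$\sum_{p \le x} \frac{1}{p} = \log\log x + O(1), \qquad \log\log x = \log\left(\frac{\log T}{\log\log\log T}\right) = (1 + o(1)) \log\log T.$$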

Before the final two steps we record a simple lemma regarding convergence of distributions.

Lemma 24.4. Suppose $X_T(t)$ and $\tilde{X}_T(t)$ are two families of complex valued random variables on $[1,2]$, and $\mu$ is a measure on $\mathbb{C}$. Each of the following conditions is sufficient to guarantee the simultaneous weak-* convergence
$$X_T \Rightarrow \mu \iff \tilde{X}_T \Rightarrow \mu.$$

i. $\lim_{T \to \infty} m\left(\{t : X_T(t) \ne \tilde{X}_T(t)\}\right) = 0$

ii. $\lim_{T \to \infty} \sup_t \left|X_T(t) - \tilde{X}_T(t)\right| = 0$

iii. $\lim_{T \to \infty} \int_1^2 \left|X_T(t) - \tilde{X}_T(t)\right|^2 dt = 0$.

Proof. Exercise.
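One possible argument (not the only one): for $f \in C_0(\mathbb{C})$, which is bounded and uniformly continuous, and any $\delta > 0$,
$$\left|\int_1^2 \big(f(X_T(t)) - f(\tilde{X}_T(t))\big)\,dt\right| \le 2\|f\|_\infty\, m\big(|X_T - \tilde{X}_T| > \delta\big) + \sup_{|z - w| \le \delta} |f(z) - f(w)|,$$
and each of i., ii., iii. forces the right-hand side to be small as $T \to \infty$ and then $\delta \to 0$ (for iii. bound the measure by Chebyshev's inequality).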

Step 2
In the second step we show that
$$X_{2,T}(t) = \frac{1}{\sqrt{\log\log T}}\,\log \zeta\!\left(\frac12 + \frac{A}{\log x} + itT\right)$$
converges to $\mu_G$. We will reduce this to the convergence of $X_{1,T}$ by using the expression (1) and repeatedly applying Lemma 24.4. We have
$$\sqrt{\log\log T}\,\left|X_{1,T}(t) - X_{2,T}(t)\right| \le \left|\sum_{p \le x} \left(\frac{1}{p^{1/2 + itT}} - \frac{1}{p^{1/2 + A/\log x + itT}}\right)\right| + \left|\sum_{p^n \le x,\ n > 1} \frac{1}{n\,p^{n(1/2 + A/\log x + itT)}}\right|$$
$$\qquad + \left|\sum_{x < p^n \le x^2} \frac{1}{n\,p^{n(1/2 + A/\log x + itT)}}\,\frac{\log\frac{x^2}{p^n}}{\log x}\right|$$
$$\qquad + O\!\left(\frac{1}{\log x}\left(\left|\sum_{p^n \le x} \frac{\log p}{p^{n(1/2 + A/\log x + itT)}}\right| + \left|\sum_{x < p^n \le x^2} \frac{\log p}{p^{n(1/2 + A/\log x + itT)}}\,\frac{\log\frac{x^2}{p^n}}{\log x}\right|\right)\right) + O\!\left(\frac{\log T}{\log x}\right).$$

The last term may be handled by item ii.
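Explicitly, with $\log x = \frac{\log T}{\log\log\log T}$, after dividing by the normalization this term is
$$\frac{1}{\sqrt{\log\log T}}\,O\!\left(\frac{\log T}{\log x}\right) = O\!\left(\frac{\log\log\log T}{\sqrt{\log\log T}}\right) \longrightarrow 0$$
uniformly in $t$, which is what condition ii. requires.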


To the remaining terms we apply iii. Since the computations are similar we'll just show how to bound the first part of the last sum. We have
$$\int_1^2 \left|\frac{1}{\sqrt{\log\log T}}\,\frac{1}{\log x} \sum_{p^n \le x} \frac{\log p}{p^{n(1/2 + A/\log x + itT)}}\right|^2 dt$$
$$= \frac{1}{(\log x)^2 \log\log T} \sum_{\substack{p_1^{n_1} \le x \\ p_2^{n_2} \le x}} \frac{\log p_1\,\log p_2}{p_1^{n_1(1/2 + A/\log x)}\,p_2^{n_2(1/2 + A/\log x)}} \int_1^2 \left(\frac{p_2^{n_2}}{p_1^{n_1}}\right)^{itT} dt$$
$$\doteq \frac{1}{(\log x)^2 \log\log T} \sum_{p^n \le x} \frac{(\log p)^2}{p^{n(1 + 2A/\log x)}} \ll \frac{1}{\log\log T}$$
[we've discarded the terms with $p_1^{n_1} \ne p_2^{n_2}$, which are very small].
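For the diagonal terms one can use the Chebyshev/Mertens-type estimate
$$\sum_{p^n \le x} \frac{(\log p)^2}{p^n} \ll (\log x)^2,$$
which after division by $(\log x)^2 \log\log T$ gives the stated bound $\ll \frac{1}{\log\log T}$; this tends to zero, as required for condition iii.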

Step 3
We now pass between the distribution of $\log \zeta(1/2 + A/\log x + itT)$ and that of $\log \zeta(1/2 + itT)$. We have
$$\log \zeta\!\left(\frac12 + itT\right) - \log \zeta\!\left(\frac12 + \frac{A}{\log x} + itT\right) = -\int_{\frac12}^{\frac12 + \frac{A}{\log x}} \frac{\zeta'}{\zeta}(\sigma + itT)\,d\sigma$$
$$= -\frac{A}{\log x}\,\frac{\zeta'}{\zeta}\!\left(\frac12 + \frac{A}{\log x} + itT\right) + \int_{\frac12}^{\frac12 + \frac{A}{\log x}} \left(\frac{\zeta'}{\zeta}\!\left(\frac12 + \frac{A}{\log x} + itT\right) - \frac{\zeta'}{\zeta}(\sigma + itT)\right) d\sigma.$$

The first term may be bounded by the second line of (1) (see Lemma 23.3).

In the second term we use the partial fraction representation of $\zeta'/\zeta$ to write the integral as
$$O\!\left(\frac{\log T}{\log x}\right) + \sum_{\substack{\rho:\ \zeta(\rho) = 0 \\ |tT - \gamma| < 1}} \int_{\frac12}^{\frac12 + \frac{A}{\log x}} \left(\frac{1}{\frac{A}{\log x} + i(tT - \gamma)} - \frac{1}{\sigma - \frac12 + i(tT - \gamma)}\right) d\sigma.$$
The error term is negligible, and we may rewrite the integral as
$$\sum_{\substack{\rho:\ \zeta(\rho) = 0 \\ |tT - \gamma| < 1}} \int_{\frac12}^{\frac12 + \frac{A}{\log x}} \frac{\sigma - \frac12 - \frac{A}{\log x}}{\left(\frac{A}{\log x} + i(tT - \gamma)\right)\left(\sigma - \frac12 + i(tT - \gamma)\right)}\,d\sigma. \tag{2}$$
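The passage to (2) is the elementary identity
$$\frac{1}{a + ic} - \frac{1}{b + ic} = \frac{b - a}{(a + ic)(b + ic)}, \qquad a = \frac{A}{\log x},\quad b = \sigma - \frac12,\quad c = tT - \gamma.$$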

Let $\epsilon = \epsilon(T) = \frac{1}{\log\log\log T}$. We split the sum over zeros into three sets:
$$\sum_\rho (\dots) = \sum_{\rho:\ |tT - \gamma| < \frac{\epsilon}{\log T}} (\dots) + \sum_{\rho:\ \frac{\epsilon}{\log T} \le |tT - \gamma| < \frac{A}{\log x}} (\dots) + \sum_{\rho:\ \frac{A}{\log x} \le |tT - \gamma| < 1} (\dots) = \Sigma_1 + \Sigma_2 + \Sigma_3.$$
We also distinguish two sets of $t \in [1,2]$. Let
$$E_1(T) = \left\{t \in [1,2] : \exists \rho,\ |\gamma - tT| < \frac{\epsilon}{\log T}\right\},$$
$$E_2(T) = \left\{t \in [1,2] : \#\left\{\rho : |\gamma - tT| < \frac{A}{\log x}\right\} \ge \frac{A}{\epsilon}\,\frac{\log T}{\log x}\right\}.$$
Since the density of zeros at height $T$ is $\frac{\log T}{2\pi}$, we expect both sets $E_1$ and $E_2$ to be small, which we now show.
Lemma 24.5. We have
$$\lim_{T \to \infty} m(E_1(T)) = 0, \qquad \lim_{T \to \infty} m(E_2(T)) = 0.$$

Proof. We have
$$E_1(T) \subset \bigcup_{T - \frac{\epsilon}{\log T} < \gamma < 2T + \frac{\epsilon}{\log T}} \left(\frac{\gamma - \frac{\epsilon}{\log T}}{T},\ \frac{\gamma + \frac{\epsilon}{\log T}}{T}\right),$$
so that, using Theorem 21.3 for the number of zeros,
$$m(E_1) \le \frac{2\epsilon}{T \log T}\,\#\left\{\gamma : T - \frac{\epsilon}{\log T} < \gamma < 2T + \frac{\epsilon}{\log T}\right\} \ll \epsilon.$$
To estimate $E_2$ we may write
$$\frac{A}{\epsilon}\,\frac{\log T}{\log x}\,m(E_2) \le \int_{t \in E_2} \#\left\{\rho : |\gamma - tT| < \frac{A}{\log x}\right\} dt \le \int_{t \in [1,2]}\ \sum_{\rho:\ |\gamma - tT| < \frac{A}{\log x}} 1\ dt$$
$$= \sum_{\gamma \in \left[T - \frac{A}{\log x},\ 2T + \frac{A}{\log x}\right]}\ \int_{\max\left(1,\ \frac{\gamma - A/\log x}{T}\right)}^{\min\left(2,\ \frac{\gamma + A/\log x}{T}\right)} 1\,dt \ll \frac{A}{T \log x}\left(N\!\left(2T + \frac{A}{\log x}\right) - N\!\left(T - \frac{A}{\log x}\right)\right) \ll \frac{A \log T}{\log x},$$

so that $m(E_2) \ll \epsilon$.

Applying this last lemma and i. of Lemma 24.4, we may work under the assumption that
$$t \in E_1(T)^c \cap E_2(T)^c,$$
since we could change the value of $\log \zeta(1/2 + itT)$ on $E_1 \cup E_2$ without changing the limiting distribution. Then $\Sigma_1$ is empty. On $\Sigma_2$ we bound the integrand of (2) by $\ll \log T$, the length of the integral by $\ll \frac{1}{\log x}$, and the number of zeros in the sum by $\ll \frac{A}{\epsilon}\,\frac{\log T}{\log x}$, for a bound
$$\Sigma_2 \ll \frac{1}{\epsilon}\,\frac{(\log T)^2}{(\log x)^2} \ll (\log\log\log T)^3.$$
This is negligible compared to the normalizing factor $\sqrt{\log\log T}$.
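That is,
$$\frac{1}{\sqrt{\log\log T}}\,\Sigma_2 \ll \frac{(\log\log\log T)^3}{\sqrt{\log\log T}} \longrightarrow 0,$$
so this range of zeros does not affect the limiting distribution.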


Finally, for the integrand on $\Sigma_3$ we may put in the bound
$$\frac{1}{\left|\sigma - \frac12 + i(tT - \gamma)\right|} \ll \frac{1}{\left|\frac{A}{\log x} + i(tT - \gamma)\right|},$$
so that
$$\Sigma_3 \ll \frac{1}{(\log x)^2} \sum_\rho \frac{1}{\left|\frac{A}{\log x} + i(tT - \gamma)\right|^2}.$$
Applying Lemma 23.3, the contribution of $\Sigma_3$ is bounded by the error in the second line of (1), completing the proof.
