TOPIC. Characteristic functions, cont'd. This lecture develops an inversion formula for recovering the density of a smooth random variable X from its characteristic function, and uses that formula to establish the fact that, in general, the characteristic function of X uniquely characterizes the distribution of X. We begin by discussing the characteristic function of sums and products of random variables.

Sums and products. C-valued random variables Z_1 = U_1 + iV_1, ..., Z_n = U_n + iV_n, all defined on a common probability space, are said to be independent if the pairs (U_k, V_k) for k = 1, ..., n are independent.

Theorem 1. If Z_1, ..., Z_n are independent C-valued integrable random variables, then ∏_{k=1}^n Z_k is integrable and

    E(∏_{k=1}^n Z_k) = ∏_{k=1}^n E(Z_k).                                        (1)

Proof. The case n = 2 follows from the identity (U_1 + iV_1)(U_2 + iV_2) = U_1U_2 − V_1V_2 + i(U_1V_2 + U_2V_1) and the analogue of (1) for integrable real-valued random variables. The general case follows by induction on n.

Suppose X and Y are independent real random variables with distributions µ and ν respectively. The distribution of the sum S := X + Y is called the convolution of µ and ν, denoted µ ⋆ ν, and is given by the formula

    (µ ⋆ ν)(B) = P[S ∈ B]
               = ∫_{−∞}^{∞} P[X + Y ∈ B | X = x] µ(dx) = ∫_{−∞}^{∞} P[x + Y ∈ B] µ(dx)
               = ∫_{−∞}^{∞} ν(B − x) µ(dx) = ∫_{−∞}^{∞} µ(B − y) ν(dy)            (2)

for (Borel) subsets B of R; here B − x := { b − x : b ∈ B }. When X and Y have densities f and g respectively, this calculation can be pushed further (see Exercise 1) to show that S has density f ⋆ g (called the convolution of f and g) given by

    (f ⋆ g)(s) = ∫_{−∞}^{∞} f(x) g(s − x) dx = ∫_{−∞}^{∞} f(s − y) g(y) dy.       (3)

Since the addition of random variables is associative — (X + Y) + Z = X + (Y + Z) — so is the convolution of probability measures — (µ ⋆ ν) ⋆ ρ = µ ⋆ (ν ⋆ ρ). In general, the convolution µ_1 ⋆ ··· ⋆ µ_n of several measures is difficult to compute; however, the characteristic function of µ_1 ⋆ ··· ⋆ µ_n is easily obtained from the characteristic functions of the individual µ_k's:

Theorem 2. If X_1, ..., X_n are independent real random variables, then S = ∑_{k=1}^n X_k has characteristic function φ_S = ∏_{k=1}^n φ_{X_k}.

Proof. Since e^{itS} is the product of the independent complex-valued integrable random variables e^{itX_k} for k = 1, ..., n, Theorem 1 implies that E(e^{itS}) = ∏_{k=1}^n E(e^{itX_k}).

Example 1. (A) Suppose X and Y are independent and each is uniformly distributed over [−1/2, 1/2]. Using (3) it is easy to check that S = X + Y has the so-called triangular distribution, with density f_S(s) = (1 − |s|)^+ (see Exercise 1). Since

    φ_X(t) = ∫_{−1/2}^{1/2} cos(tx) dx + i ∫_{−1/2}^{1/2} sin(tx) dx = sin(t/2)/(t/2)

(this verifies line 10 of Table 12.1) and since φ_Y = φ_X, we have φ_S(t) = sin²(t/2)/(t/2)² = 2(1 − cos(t))/t²; this gives line 11 of the Table.

(B) Suppose X and Z are independent real random variables, with Z ∼ N(0, 1). Then X_σ := X + σZ has characteristic function φ_{X_σ}(t) = φ_X(t) φ_{σZ}(t) = φ_X(t) e^{−σ²t²/2}. We're interested in X_σ because its distribution is very smooth (see Lemma 1 and Exercise 3) for any σ, and is almost the same as the distribution of X when σ is small. •
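Example 1(A) and Theorem 2 are easy to sanity-check numerically. The following is a minimal sketch (my addition, assuming only numpy; the grid step and the test value t = 3.7 are arbitrary choices): it convolves two Uniform[−1/2, 1/2] densities on a grid as in (3) and compares the result with (1 − |s|)^+, then compares φ_X(t)² with 2(1 − cos t)/t².

```python
import numpy as np

dx = 1e-3
x = np.arange(-1.0, 1.0, dx)                       # grid covering both supports
f = np.where(np.abs(x) <= 0.5, 1.0, 0.0)           # density of Uniform[-1/2, 1/2]

# (f * f)(s) approximated by a Riemann sum, as in (3)
conv = np.convolve(f, f) * dx                      # values on the grid s below
s = 2 * x[0] + dx * np.arange(conv.size)
triangular = np.maximum(1.0 - np.abs(s), 0.0)      # the claimed density (1 - |s|)^+
print(np.max(np.abs(conv - triangular)))           # small (of order dx)

# phi_S(t) = phi_X(t)^2 by Theorem 2; compare with 2(1 - cos t)/t^2
t = 3.7
phi_X = np.sum(np.exp(1j * t * x) * f) * dx        # numerical E[e^{itX}]
print(abs(phi_X**2 - 2 * (1 - np.cos(t)) / t**2))  # small (of order dx)
```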

Theorem 3. Let X and Y be independent real random variables with characteristic functions φ and ψ respectively. The product XY has characteristic function

    E(e^{itXY}) = E(φ(tY)) = E(ψ(tX)).                                          (4)

Proof. The conditional expectation of e^{itXY} given that X = x equals the unconditional expectation of e^{itxY}, namely, E(e^{itxY}) = ψ(tx). Letting µ denote the distribution of X we thus have E(e^{itXY}) = ∫_{−∞}^{∞} E(e^{itXY} | X = x) µ(dx) = ∫_{−∞}^{∞} ψ(tx) µ(dx) = E(ψ(tX)).

An inversion formula. This section shows how some probability densities on R can be recovered from their characteristic functions by means of a so-called inversion formula. We begin with a special case, which will be used later on in the proof of the general one.

Lemma 1. Let X be a real random variable with characteristic function φ. Let Z ∼ N(0, 1) be independent of X. For each σ > 0, the random variable X_σ := X + σZ has density f_σ (with respect to Lebesgue measure) given by

    f_σ(x) = (1/(2π)) ∫_{−∞}^{∞} e^{−itx} φ(t) e^{−σ²t²/2} dt                    (5)

for x ∈ R.

Proof. For ξ ∈ R, the random variable X_ξ := X − ξ has characteristic function φ_{X_ξ}(y) = φ(y) e^{−iξy}. Taking X = X_ξ and t = 1 in (4) gives

    E(φ(Y) e^{−iξY}) = E(ψ(X − ξ))                                              (6)

for any random variable Y with characteristic function ψ. Applying this to Y ∼ N(0, 1/σ²), with density f_Y(y) = e^{−σ²y²/2} / √(2π/σ²) and characteristic function ψ(t) = e^{−t²/(2σ²)}, we get

    ∫_{−∞}^{∞} φ(y) e^{−iξy} (σ/√(2π)) e^{−σ²y²/2} dy = ∫_{−∞}^{∞} e^{−(x−ξ)²/(2σ²)} µ(dx),

where µ is the distribution of X. Rearranging terms gives

    ∫_{−∞}^{∞} (e^{−(ξ−x)²/(2σ²)} / (√(2π) σ)) µ(dx) = (1/(2π)) ∫_{−∞}^{∞} e^{−iξy} φ(y) e^{−σ²y²/2} dy.   (7)

To interpret the left-hand side, note that conditional on X = x, one has X_σ = x + σZ ∼ N(x, σ²), with density

    g_x(ξ) = e^{−(ξ−x)²/(2σ²)} / (√(2π) σ)

at ξ. The left-hand side of (7), to wit ∫ g_x(ξ) µ(dx), is thus the unconditional density of X_σ, evaluated at ξ. This proves (5).

Theorem 4 (The inversion formula for densities). Let X be a real random variable whose characteristic function φ is integrable over R, so ∫_{−∞}^{∞} |φ(t)| dt < ∞. Then X has a bounded continuous density f on R given by

    f(x) = (1/(2π)) ∫_{−∞}^{∞} e^{−itx} φ(t) dt.                                 (8)

Proof. We need to show that

(A) the integral in (8) exists,
(B) f is bounded,
(C) f is continuous,
(D) f(x) is real and nonnegative, and
(E) P[a < X ≤ b] = ∫_a^b f(x) dx for all −∞ < a < b < ∞.

(A) f(x) exists since |e^{−itx} φ(t)| = |φ(t)| and ∫_{−∞}^{∞} |φ(t)| dt < ∞.

(B) f is bounded since 2π|f(x)| = |∫ e^{−itx} φ(t) dt| ≤ ∫ |φ(t)| dt < ∞.



(C) We need to show that x_n → x ∈ R entails f(x_n) → f(x), i.e., that

    (1/(2π)) ∫_{−∞}^{∞} e^{−itx_n} φ(t) dt → (1/(2π)) ∫_{−∞}^{∞} e^{−itx} φ(t) dt.

This follows from the DCT, since

    g_n(t) := e^{−itx_n} φ(t) → e^{−itx} φ(t) for all t ∈ R,
    |g_n(t)| ≤ D(t) := |φ(t)| for all t ∈ R and all n ∈ N, and
    ∫_{−∞}^{∞} D(t) dt < ∞.

(D) Let Z be a standard normal random variable independent of X and let σ > 0. Lemma 1 asserts that X_σ := X + σZ has density

    f_σ(x) = (1/(2π)) ∫_{−∞}^{∞} e^{−itx} φ(t) e^{−σ²t²/2} dt.

Note that

    sup_{x∈R} |f(x) − f_σ(x)| = sup_{x∈R} |(1/(2π)) ∫_{−∞}^{∞} e^{−itx} φ(t) (1 − e^{−σ²t²/2}) dt|
                              ≤ (1/(2π)) ∫_{−∞}^{∞} |φ(t)| (1 − e^{−σ²t²/2}) dt.

The DCT implies that the right-hand side here tends to 0 as σ → 0, since

    g_σ(t) := |φ(t)| (1 − e^{−σ²t²/2}) → 0 for all t ∈ R,
    |g_σ(t)| ≤ D(t) := |φ(t)| for all t ∈ R and all σ > 0, and
    ∫_{−∞}^{∞} D(t) dt < ∞.

This proves that f_σ(x) tends uniformly to f(x) as σ → 0; in particular, since f_σ(x) is real and nonnegative, so is f(x).

(E) Let −∞ < a < b < ∞ be given. We need to show that

    P[a < X ≤ b] = ∫_a^b f(x) dx.                                               (9)

To this end, let n ∈ N and let g_n : R → R be the piecewise linear function which is 0 on (−∞, a], rises linearly from 0 to 1 on [a, a + 1/n], equals 1 on [a + 1/n, b], falls linearly from 1 to 0 on [b, b + 1/n], and is 0 on [b + 1/n, ∞); thus g_n(−∞) = 0 = g_n(+∞), g_n(a) = 0 = g_n(b + 1/n), and g_n(a + 1/n) = 1 = g_n(b).

As in the proof of (D), let X_σ = X + σZ with σ > 0 and Z ∼ N(0, 1) independently of X. Since X_σ has density f_σ, we have

    E(g_n(X_σ)) := ∫_Ω g_n(X_σ(ω)) P(dω) = ∫_{−∞}^{∞} g_n(x) f_σ(x) dx.          (10)

Take limits in (10) as σ ↓ 0. In the middle term g_n(X_σ(ω)) converges to g_n(X(ω)) for each sample point ω; since the convergence is dominated by 1 and E(1) = 1 < ∞, the DCT implies that E(g_n(X_σ)) → E(g_n(X)). On the right-hand side, g_n(x) f_σ(x) converges pointwise to g_n(x) f(x), with the convergence dominated by (∫_{−∞}^{∞} |φ(t)| dt) I_{(a,b+1]}; since ∫_{−∞}^{∞} I_{(a,b+1]}(x) dx < ∞, another application of the DCT shows that the right-hand side converges to ∫_{−∞}^{∞} g_n(x) f(x) dx. The upshot is

    E(g_n(X)) = ∫_{−∞}^{∞} g_n(x) f(x) dx.                                      (11)

Now take limits in (11) as n → ∞. Since g_n → g := I_{(a,b]} pointwise and boundedly, two more applications of the DCT (give the details!) yield

    E(g(X)) = ∫_{−∞}^{∞} g(x) f(x) dx.

This is the same as (9).
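The formulas (5) and (8) lend themselves to direct numerical evaluation. Below is a rough illustration (my addition, assuming numpy; the choice X ∼ N(0, 1), the integration grid, and the test point x = 0.7 are not from the notes): it recovers the standard normal density from its characteristic function via (8), and shows f_σ(x) approaching f(x) as σ decreases, as part (D) of the proof predicts.

```python
import numpy as np

dt = 1e-3
t = np.arange(-40.0, 40.0, dt)             # integration grid in t
phi = np.exp(-t**2 / 2)                    # characteristic function of X ~ N(0, 1)

def f_sigma(x, sigma):
    """Riemann-sum approximation of (5); sigma = 0 reduces to formula (8)."""
    integrand = np.exp(-1j * t * x) * phi * np.exp(-sigma**2 * t**2 / 2)
    return np.real(np.sum(integrand) * dt) / (2 * np.pi)

x = 0.7
print(f_sigma(x, 0.0), np.exp(-x**2 / 2) / np.sqrt(2 * np.pi))   # (8) vs exact density
for sigma in (1.0, 0.3, 0.1, 0.01):
    print(sigma, f_sigma(x, sigma))        # approaches the value above as sigma -> 0
```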

Example 2. (A) Suppose X has the standard exponential distribution, with density f(x) = e^{−x} I_{[0,∞)}(x) and characteristic function φ(t) = 1/(1 − it). φ is not integrable because |φ(t)| ∼ 1/|t| as |t| → ∞. This is consistent with Theorem 4 because X doesn't admit a continuous density.

(B) Consider the two-sided exponential distribution, with density f(x) = e^{−|x|}/2 for −∞ < x < ∞. f has characteristic function

    φ(t) = ∫_{−∞}^{∞} e^{itx} f(x) dx = (1/2) [ ∫_{−∞}^0 e^{itx} e^{x} dx + ∫_0^{∞} e^{itx} e^{−x} dx ]
         = (1/2) [ 1/(1 + it) + 1/(1 − it) ] = 1/(1 + t²).

This φ is integrable, so Theorem 4 implies that

    (1/(2π)) ∫_{−∞}^{∞} e^{−itx} (1/(1 + t²)) dt = (1/2) e^{−|x|}.

Thus (since 1/(π(1 + t²)) is even in t, the last display also holds with e^{−itx} replaced by e^{itx}; now relabel the variables) the standard Cauchy distribution, with density 1/(π(1 + x²)) for −∞ < x < ∞, has characteristic function e^{−|t|}. This gives line 9 of Table 12.1. •

The uniqueness theorem. Here we establish the important fact that a probability measure on R is uniquely determined by its characteristic function.

Theorem 5 (The uniqueness theorem for characteristic functions). Let X be a real random variable with distribution function F and characteristic function φ. Similarly, let Y have distribution function G and characteristic function ψ. If φ(t) = ψ(t) for all t ∈ R, then F(x) = G(x) for all x ∈ R.

Proof. Let Z ∼ N(0, 1) independently of both X and Y. Set X_σ = X + σZ and Y_σ = Y + σZ for σ > 0. Since X_σ and Y_σ have the same integrable characteristic function, to wit φ(t) e^{−σ²t²/2} = ψ(t) e^{−σ²t²/2}, Theorem 4 implies that they have the same density, and hence that

    E(g(X_σ)) = E(g(Y_σ))                                                       (12)

for any continuous bounded function g: R → R. Letting σ → 0 in (12) gives (how?)

    E(g(X)) = E(g(Y)).                                                          (13)

Taking g in (13) to be the continuous function

    g(u) = 1 for u ≤ x,  g(u) = 1 − n(u − x) for x ≤ u ≤ x + 1/n,  g(u) = 0 for u ≥ x + 1/n,   (14)

and letting n → ∞ gives (how?)

    E(I_{(−∞,x]}(X)) = E(I_{(−∞,x]}(Y));

but this is the same as F(x) = G(x). (There is an inversion formula for cdfs implicit in this argument; see Exercise 6.)

Example 3. According to the uniqueness theorem, a random variable X is normally distributed with mean µ and variance σ² if and only if E(e^{itX}) = exp(iµt − σ²t²/2) for all real t. It follows easily from this that the sum of two independent normally distributed random variables is itself normally distributed. In other words, the family { N(µ, σ²) : µ ∈ R, σ² ≥ 0 } of normal distributions on R is closed under convolution. •

Example 4. Suppose X_1, X_2, ..., X_n are iid standard Cauchy random variables. By Example 2, each X_i has characteristic function φ(t) = e^{−|t|}. The sum S_n = X_1 + ··· + X_n thus has characteristic function φ_{S_n}(t) = e^{−n|t|}, and the sample average X̄_n = S_n/n has characteristic function φ_{X̄_n}(t) = φ_{S_n}(t/n) = e^{−|t|}. Thus X̄_n is itself standard Cauchy. What does this say about using the sample mean to estimate the center of a distribution? •
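A short simulation makes the moral of Example 4 concrete. This sketch (my addition, assuming numpy; the seed, the replication count, and the interquartile-range summary are arbitrary choices) shows that the spread of the sample mean of iid standard Cauchy observations does not shrink as n grows.

```python
import numpy as np

rng = np.random.default_rng(0)
for n in (1, 10, 100, 1000):
    means = rng.standard_cauchy(size=(20_000, n)).mean(axis=1)
    q1, q3 = np.percentile(means, [25, 75])
    # For a standard Cauchy the quartiles are -1 and +1, so the IQR is 2;
    # the estimate stays near 2 for every n instead of shrinking like 1/sqrt(n).
    print(n, round(q3 - q1, 2))
```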


Theorem 6 (The uniqueness theorem for MGFs). Let X and Y be real random variables with respective real moment generating functions M and N and distribution functions F and G. If M(u) = N(u) < ∞ for all u in some nonempty open interval, then F(x) = G(x) for all x ∈ R.

Proof. Call the interval (a, b) and put D = { z ∈ C : a < ℜ(z) < b }. We will consider two cases: 0 ∈ (a, b), and 0 ∉ (a, b).

• Case 1: 0 ∈ (a, b). In this case the imaginary axis I := { it : t ∈ R } is contained in D. The complex generating functions G_X and G_Y of X and Y exist and are differentiable on D and agree on { u + i0 : u ∈ (a, b) } by assumption. By Theorem 13.4, G_X and G_Y agree on all of D, and in particular on I. In other words, E(e^{itX}) = G_X(it) = G_Y(it) = E(e^{itY}) for all t ∈ R. Thus X and Y have the same characteristic function, and hence the same distribution.

• Case 2: 0 ∉ (a, b). We treat this by exponential tilting, as follows. For simplicity of exposition, suppose that X and Y have densities f and g respectively. Put θ = (a + b)/2 and set

    f_θ(x) = e^{θx} f(x)/M(θ)  and  g_θ(x) = e^{θx} g(x)/N(θ).                   (15)

f_θ and g_θ are probability densities; the corresponding MGFs are

    M_θ(u) = M(u + θ)/M(θ)  and  N_θ(u) = N(u + θ)/N(θ).

M_θ and N_θ coincide and are finite on the interval (a − θ, b − θ). Since this interval contains 0, the preceding argument shows that the distributions with densities f_θ and g_θ coincide. It follows from (15) that the distributions with densities f and g coincide.

Theorem 7 (The uniqueness theorem for moments). Let X and Y be real random variables with respective distribution functions F and G. If (i) X and Y each have (finite) moments of all orders, (ii) E(X^k) = E(Y^k) := α_k for all k ∈ N, and (iii) the radius R of convergence of the power series ∑_{k=1}^{∞} α_k u^k/k! is nonzero, then F(x) = G(x) for all x ∈ R.

Proof. Let M and N be the MGFs of X and Y, respectively. By Theorem 12.5, M(u) = 1 + ∑_{k=1}^{∞} α_k u^k/k! = N(u) for all u in the nonempty open interval (−R, R). Theorem 6 thus implies F = G.

Example 5. (A) Suppose X is a random variable such that

    α_k := E(X^k) = (k − 1)·(k − 3) ··· 5·3·1 if k is even,  and  0 if k is odd.

Then X ∼ N(0, 1), because a standard normal random variable has these moments and the series ∑_{k=1}^{∞} α_k u^k/k! has an infinite radius of convergence.

(B) It is possible for two random variables to have the same moments, but yet to have different distributions. For example, suppose Z ∼ N(0, 1) and set

    X = e^Z.                                                                     (16)

X is said to have a log-normal distribution, even though it would be more accurate to say that the log of X is normally distributed. By the change of variables formula, X has density

    f_X(x) = f_Z(z) dz/dx = (1/√(2π)) e^{−(log(x))²/2} (1/x) = 1/(√(2π) x^{1+log(x)/2})   (17)

for x > 0, and 0 otherwise. The k-th moment of X is

    E(X^k) = E(e^{kZ}) = e^{k²/2};

this is finite, but increases so rapidly with k that the power series ∑_{k=1}^{∞} E(X^k) u^k/k! converges only for u = 0 (check this!). For real numbers α put

    g_α(x) = f_X(x) [1 + sin(α log(x))]

for x > 0, and g_α(x) = 0 otherwise. I am going to show that one can pick an α ≠ 0 such that

    ∫_{−∞}^{∞} x^k g_α(x) dx = ∫_{−∞}^{∞} x^k f_X(x) dx                          (18)

for k = 0, 1, .... For k = 0, (18) says that the nonnegative function g_α integrates to 1, and thus is a probability density. Let Y be a random variable with this density. Then by (18) we have E(Y^k) = E(X^k) for k = 1, 2, ..., even though X and Y have different densities, and therefore different distributions.

It remains to show that there is a nonzero α such that (18) holds, or, equivalently, such that

    ν_k := ∫_0^{∞} x^k f_X(x) sin(α log(x)) dx = 0

for k = 0, 1, 2, .... Letting ℑ(z) denote the imaginary part v of the complex number z = u + iv, we have

    ν_k = E[X^k sin(α log(X))] = E(e^{kZ} sin(αZ))
        = ℑ[E(e^{kZ} e^{iαZ})] = ℑ[E(e^{(k+iα)Z})] = ℑ[e^{(k+iα)²/2}]
        = ℑ[e^{(k²−α²)/2} e^{ikα}] = e^{(k²−α²)/2} sin(kα).

Consequently we can make ν_k = 0 for all k by taking α = π, or 2π, or 3π, .... We have not only produced two distinct densities with the same moments, but in fact an infinite sequence g_0 = f_X, g_π, g_{2π}, g_{3π}, ... of such densities! The plot below exhibits g_0 and g_{2π}. (A numerical check of these moments appears after Exercise 3.) •

[Figure: graphs of g_α(x) = [1 + sin(α log(x))]/(√(2π) x^{1+log(x)/2}) versus x ∈ (0, 4), for α = 0 and 2π. g_0 is the log-normal density; g_{2π} has the same moments as g_0.]

Exercise 1. (a) Suppose that µ and ν are probability measures on R with densities f and g respectively. Show that the convolution µ ⋆ ν has density f ⋆ g given by (3). (b) Verify the claim made in Example 1 (A), that the sum S of two independent random variables, each uniformly distributed over [−1/2, 1/2], has density f_S(s) = (1 − |s|)^+ with respect to Lebesgue measure. [Hint for (a): Continue (2) by writing ν(B − x) = ∫_B g(s − x) ds (why?) and then use Fubini's theorem to get P[S ∈ B] = ∫_B (f ⋆ g)(s) ds.] ♦

Exercise 2. Show that the function f_σ defined by (5) is infinitely differentiable. [Hint: replace the real argument x by −iz with z ∈ C and show that the resulting integral is a linear combination of complex generating functions.] ♦

Exercise 3. The function g_n in (10) may be replaced by g := I_{(a,b]}, to get P[a < X_σ ≤ b] = ∫_a^b f_σ(x) dx. Show how (9) may be deduced from this. ♦
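Returning to Example 5(B): the claim that g_0 and g_{2π} share all their moments is easy to check numerically. In the sketch below (my addition, assuming numpy), the substitution z = log(x) turns the k-th moment of g_α into E[e^{kZ}(1 + sin(αZ))] with Z ∼ N(0, 1), which is convenient for quadrature; the first five moments of g_0 and g_{2π} both come out equal to e^{k²/2}.

```python
import numpy as np

dz = 1e-3
z = np.arange(-15.0, 20.0, dz)
std_normal = np.exp(-z**2 / 2) / np.sqrt(2 * np.pi)   # density of Z = log(X)

def moment(k, alpha):
    """k-th moment of g_alpha, computed as E[e^{kZ} (1 + sin(alpha*Z))]."""
    return np.sum(np.exp(k * z) * (1 + np.sin(alpha * z)) * std_normal) * dz

for k in range(5):
    # the two computed moments agree with each other and with E[X^k] = e^{k^2/2}
    print(k, moment(k, 0.0), moment(k, 2 * np.pi), np.exp(k**2 / 2))
```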


Exercise 4. Use line 11 of Table 12.1 and the inversion formula (8) for densities to get line 12 (for simplicity take α = 1). Show that f(x) := (1 − cos(x))/(πx²) is in fact a probability density on R and deduce that ∫_0^{∞} (1 − cos(x))/x² dx = π/2. ♦

Exercise 5 (An inversion formula for point masses). Let X be a random variable with characteristic function φ. Show that for each x ∈ R,

    P[X = x] = lim_{T→∞} (1/(2T)) ∫_{−T}^{T} e^{−itx} φ(t) dt.                   (19)

Deduce that if φ(t) → 0 as |t| → ∞, then P[X = x] = 0 for all x ∈ R. [Hint for (19): write φ(t) as E(e^{itX}), use Fubini's Theorem, and the DCT (with justifications).] ♦

Exercise 6 (An inversion formula for cdfs). Let X be a random variable with distribution function F, density f, and characteristic function φ. (a) Suppose that φ is integrable. Use the inversion formula (8) for densities to show that

    F(b) − F(a) = (1/(2π)) ∫_{−∞}^{∞} ((e^{−ita} − e^{−itb})/(it)) φ(t) dt        (20)

for −∞ < a < b < ∞. [Hint: (F(b) − F(a))/(b − a) is the density at b for the random variable X + U, where U is uniformly distributed over [0, b − a] independently of X.] (b) Now drop the assumption that φ is integrable. Show that

    F(b) − F(a) = lim_{σ↓0} (1/(2π)) ∫_{−∞}^{∞} ((e^{−ita} − e^{−itb})/(it)) φ(t) e^{−σ²t²/2} dt   (21)

provided that F is continuous at a and b. ♦

Exercise 7 (The inversion formula for lattice distributions). Let φ be the characteristic function of a random variable X which takes almost all its values in the lattice L_{a,h} = { a + kh : k = 0, ±1, ±2, ... }. Show that

    P[X = x]/h = (1/(2π)) ∫_{−π/h}^{π/h} e^{−itx} φ(t) dt                        (22)

for each x ∈ L_{a,h}. Give an analogous formula for S_n = ∑_{i=1}^n X_i, where X_1, ..., X_n are independent random variables, each distributed like X. [Hint: use Fubini's theorem.] ♦

Exercise 8. Use the inversion formula (22) to recover the probability mass function of the Binomial(n, p) distribution from its characteristic function (pe^{it} + q)^n. ♦

Exercise 9. Use characteristic functions to show that the following families of distributions in Table 1 are closed under convolution: Degenerate, Binomial (fixed p), Poisson, Negative binomial (fixed p), Gamma (fixed α), and Symmetric Cauchy. ♦

A random variable X is said to have an infinitely divisible distribution if for each positive integer n, there exist iid random variables X_{n,1}, ..., X_{n,n} whose sum X_{n,1} + ··· + X_{n,n} has the same distribution as X.

Exercise 10. Show that the Normal distribution and all but one (which one?) of the distributions in the preceding exercise are infinitely divisible. ♦

Exercise 11. (a) Suppose X and Y are independent random variables, with densities f and g respectively. Show that Z := Y − X has density

    h(z) := ∫_{−∞}^{∞} f(x) g(x + z) dx.                                         (23)

(b) Suppose the random variables X and Y in part (a) are each Gamma(r, 1), for a positive integer r. Use (23) to show that Z has density

    h_r(z) := e^{−|z|} ∑_{j=0}^{r−1} ( Γ(r + j) / (Γ(r) Γ(r − j) j! 2^{r+j}) ) |z|^{r−1−j}.   (24)

(c) Find the characteristic function of the unnormalized t-distribution with ν degrees of freedom, for an odd integer ν. ♦

Exercise 12. Let IT_α be the so-called inverse triangular distribution with parameter α, having characteristic function φ_α(t) = (1 − |t|/α)^+ (see line 12 of Table 1). Show that the characteristic function φ(t) = ((1 − |t|)^+ + (1 − |t|/2)^+)/2 of the mixture µ = (IT_1 + IT_2)/2 agrees with the characteristic function ψ of ν = IT_{4/3} on a nonempty open interval, even though µ ≠ ν. Why doesn't this contradict Theorem 5? ♦

Let f be a continuous integrable function from the real line R to the complex plane C. (Here "integrable" means ∫_{−∞}^{∞} |f(x)| dx < ∞.) The Fourier transform of f is the function f̂ from R to C defined by

    f̂(t) = ∫_{−∞}^{∞} e^{itx} f(x) dx                                           (25)

for −∞ < t < ∞. If f is a probability density, then f̂ is its characteristic function; the goal of the next two exercises is to extend some of the things we know about characteristic functions to Fourier transforms.

Exercise 13. Let f: R → C be continuous and integrable and let σ > 0. Show that

    ∫_{−∞}^{∞} f(x − z) (e^{−z²/(2σ²)} / √(2πσ²)) dz = (1/(2π)) ∫_{−∞}^{∞} e^{−itx} f̂(t) e^{−σ²t²/2} dt   (26)

for all x ∈ R. [Hint: by the inversion formula for densities, (26) is true when f is a probability density (i.e., real, positive, and integrable); deduce the general case (f complex and integrable) from the special one.] ♦

Exercise 14. Let f: R → C be continuous and integrable. Show that if f̂ is integrable, then

    f(x) = (1/(2π)) ∫_{−∞}^{∞} e^{−itx} f̂(t) dt                                  (27)

for all x ∈ R. [Hint: let σ tend to 0 in (26) — but beware, f need not be bounded.] ♦
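As a closing numerical aside (my addition, assuming numpy; the Gaussian test function and the grid are arbitrary choices), the Fourier transform (25) and the inversion formula (27) can be checked by quadrature for f(x) = e^{−x²/2}, whose transform is known in closed form, f̂(t) = √(2π) e^{−t²/2}.

```python
import numpy as np

dx = 0.01
x = np.arange(-10.0, 10.0, dx)
f = np.exp(-x**2 / 2)                      # the test function

def fhat(t):
    """Riemann-sum approximation of the Fourier transform (25) at t."""
    return np.sum(np.exp(1j * t * x) * f) * dx

t0 = 1.3
print(abs(fhat(t0) - np.sqrt(2 * np.pi) * np.exp(-t0**2 / 2)))     # tiny

# Invert via (27) at a test point x0 and compare with f(x0).
t = np.arange(-10.0, 10.0, dx)
fhat_grid = np.array([fhat(ti) for ti in t])
x0 = 0.4
f_back = np.real(np.sum(np.exp(-1j * t * x0) * fhat_grid) * dx) / (2 * np.pi)
print(abs(f_back - np.exp(-x0**2 / 2)))                            # tiny
```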

