
1. Gamma Function

We will prove that the improper integral
\[
\Gamma(x) = \int_0^\infty e^{-t} t^{x-1}\,dt
\]
exists for every $x > 0$. The function $\Gamma(x)$ is called the Gamma function. Let us recall the comparison test for improper integrals.
Theorem 1.1. (Comparison Test for Improper Integrals of Type I) Let $f(x), g(x)$ be two continuous functions on $[a, \infty)$ such that $0 \le f(x) \le g(x)$ for all $x \ge a$. Then
(1) If $\int_a^\infty g(x)\,dx$ is convergent, so is $\int_a^\infty f(x)\,dx$.
(2) If $\int_a^\infty f(x)\,dx$ is divergent to infinity, so is $\int_a^\infty g(x)\,dx$.
Theorem 1.2. (Limit Comparison Test) Let $f(x), g(x)$ be two nonnegative continuous functions on $[a, \infty)$. Suppose that
\[
\lim_{x\to\infty} \frac{f(x)}{g(x)} = L \quad \text{with } L > 0.
\]
Then $\int_a^\infty f(x)\,dx$ and $\int_a^\infty g(x)\,dx$ are either both convergent or both divergent.
Lemma 1.1. For every $s > 0$, the improper integral $\int_0^\infty e^{-st}\,dt$ converges.

Proof. Let us compute
\[
\int_0^N e^{-st}\,dt = \frac{1 - e^{-sN}}{s}.
\]
By definition,
\[
\int_0^\infty e^{-st}\,dt = \lim_{N\to\infty} \int_0^N e^{-st}\,dt = \lim_{N\to\infty} \frac{1 - e^{-sN}}{s} = \frac{1}{s}.
\]
The limit exists and hence the improper integral converges. $\square$
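As a quick numerical sanity check of the lemma, here is a plain-Python midpoint-rule sketch (the truncation point `T` and step count `n` are ad hoc choices for illustration, not part of the statement):

```python
import math

def integral_exp(s, T=60.0, n=200000):
    # Midpoint rule for the truncated integral of e^{-s t} on [0, T];
    # for moderate s the tail beyond T is negligibly small.
    h = T / n
    return sum(math.exp(-s * (i + 0.5) * h) for i in range(n)) * h

# The lemma predicts the value 1/s; compare at s = 2.
err = abs(integral_exp(2.0) - 1.0 / 2.0)
```

At $s = 2$ the computed value matches $1/s$ to high accuracy.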
Now let us study the case when $x \ge 1$.

Lemma 1.2. Let $n$ be a natural number. Then
\[
\lim_{t\to\infty} \frac{t^{n-1}}{e^{t/2}} = 0.
\]

Proof. By L'Hôpital's rule,
\[
\lim_{t\to\infty} \frac{t^{n-1}}{e^{t/2}} = \lim_{t\to\infty} \frac{(n-1)t^{n-2}}{\frac{1}{2}e^{t/2}}.
\]
Since $t^{n-1}$ is a polynomial of degree $n-1$, we know
\[
\frac{d^n}{dt^n}\, t^{n-1} = 0.
\]
Applying L'Hôpital's rule inductively, we find
\[
\lim_{t\to\infty} \frac{t^{n-1}}{e^{t/2}} = \lim_{t\to\infty} \frac{0}{\left(\frac{1}{2}\right)^n e^{t/2}} = 0. \qquad \square
\]

By the definition of the limit, choosing $\epsilon = 1$, there exists $M > 0$ such that for all $t \ge M$,
\[
\frac{t^{n-1}}{e^{t/2}} < \epsilon = 1.
\]
Hence for $t \ge M$, $0 \le t^{n-1} < e^{t/2}$. This implies that for $t \ge M$,
\[
(1.1)\qquad 0 \le e^{-t}t^{n-1} \le e^{-t}\cdot e^{t/2} = e^{-t/2}.
\]
Lemma 1.1 implies that $\int_0^\infty e^{-t/2}\,dt$ is convergent. Then (1.1) and the comparison test imply that $\int_0^\infty e^{-t}t^{n-1}\,dt$ is convergent for every $n \in \mathbb{N}$.
Let $x \ge 1$ be any real number. Let $[x]$ be the largest integer such that $[x] \le x < [x] + 1$. Then for $t \ge 1$,
\[
(1.2)\qquad 0 \le e^{-t}t^{x-1} \le e^{-t}t^{[x]}.
\]
Since $\int_0^\infty e^{-t}t^{[x]}\,dt$ is convergent, the comparison test and (1.2) show that $\int_1^\infty e^{-t}t^{x-1}\,dt$ is convergent; since the integrand is continuous on $[0, 1]$ when $x \ge 1$, it follows that $\int_0^\infty e^{-t}t^{x-1}\,dt$ is convergent.
Now let us study the case when $0 < x < 1$. For $t \ge 1$ we have $t^{-1} \le t^{x-1} \le t$, so
\[
\frac{1}{t\,e^{t/2}} \le \frac{t^{x-1}}{e^{t/2}} \le \frac{t}{e^{t/2}}.
\]
We know that
\[
\lim_{t\to\infty} \frac{t}{e^{t/2}} = \lim_{t\to\infty} \frac{1}{\frac{1}{2}e^{t/2}} = 0.
\]
By the sandwich principle,
\[
\lim_{t\to\infty} \frac{t^{x-1}}{e^{t/2}} = 0
\]
for $0 < x < 1$. (Of course, this statement is true for all $x > 0$.) Since $t^{x-1} \le 1 \le e^{t/2}$ for $t \ge 1$, this implies that
\[
0 \le e^{-t}t^{x-1} \le e^{-t/2}, \qquad t \ge 1.
\]
By the comparison test, $\int_1^\infty e^{-t}t^{x-1}\,dt$ is convergent for $0 < x < 1$. Now we need to verify that
\[
\int_0^1 e^{-t}t^{x-1}\,dt = \int_0^1 \frac{e^{-t}}{t^{1-x}}\,dt
\]
is convergent. Notice that $\lim_{t\to 0^+} \dfrac{e^{-t}}{t^{1-x}} = \infty$. Hence the integral $\int_0^1 e^{-t}t^{x-1}\,dt$ is a type II improper integral.
Theorem 1.3. Let $f, g \in C(a, b]$ and $0 \le f(x) \le g(x)$ for all $a < x \le b$.
(1) If $\int_a^b g(x)\,dx$ is convergent, so is $\int_a^b f(x)\,dx$.
(2) If $\int_a^b f(x)\,dx$ is divergent to infinity, so is $\int_a^b g(x)\,dx$.
Note that for $0 < t \le 1$,
\[
0 < e^{-t}t^{x-1} \le t^{x-1}.
\]
We know that
\begin{align*}
\int_0^1 t^{x-1}\,dt &= \lim_{b\to 0^+} \int_b^1 t^{x-1}\,dt = \lim_{b\to 0^+} \left.\frac{t^x}{x}\right|_b^1 \\
&= \lim_{b\to 0^+} \left( \frac{1}{x} - \frac{b^x}{x} \right) = \frac{1}{x}.
\end{align*}
By the comparison test, $\int_0^1 e^{-t}t^{x-1}\,dt$ is convergent.
Theorem 1.4. For every $x > 0$, the improper integral
\[
\Gamma(x) = \int_0^\infty e^{-t} t^{x-1}\,dt
\]
is convergent.
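The convergent integral can be checked numerically against a library implementation. This is a rough midpoint-rule sketch in plain Python (the cutoff `T` and step count `n` are ad hoc accuracy knobs, and we only sample $x \ge 1$ to avoid the type II singularity at $0$):

```python
import math

def gamma_quad(x, T=80.0, n=400000):
    # Midpoint rule for the truncated integral of e^{-t} t^{x-1} on [0, T];
    # for x >= 1 the tail beyond T = 80 is negligibly small.
    h = T / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        total += math.exp(-t) * t ** (x - 1.0)
    return total * h

# Compare with the standard library's Gamma function at a few points.
errors = [abs(gamma_quad(x) - math.gamma(x)) for x in (1.0, 2.5, 5.0)]
```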
Proposition 1.1. For $x > 0$,
\[
\Gamma(x + 1) = x\,\Gamma(x).
\]

Proof. For every $N > 0$, using integration by parts,
\[
\int_0^N e^{-t}t^x\,dt = \left. -t^x e^{-t} \right|_0^N + x \int_0^N e^{-t}t^{x-1}\,dt = -N^x e^{-N} + x \int_0^N e^{-t}t^{x-1}\,dt.
\]
We have seen that $\lim_{N\to\infty} N^x e^{-N} = \lim_{N\to\infty} \dfrac{N^x}{e^N} = 0$ for all $x > 0$. Then
\begin{align*}
\Gamma(x + 1) &= \lim_{N\to\infty} \int_0^N e^{-t}t^x\,dt \\
&= \lim_{N\to\infty} \left( -N^x e^{-N} + x \int_0^N e^{-t}t^{x-1}\,dt \right) \\
&= x \lim_{N\to\infty} \int_0^N e^{-t}t^{x-1}\,dt \\
&= x\,\Gamma(x). \qquad \square
\end{align*}
Corollary 1.1. For every integer $n \ge 0$, $\Gamma(n + 1) = n!$.

Proof. We know $\int_0^\infty e^{-t}\,dt = 1$, hence $\Gamma(1) = 1 = 0!$. We can prove the formula by induction. Assume that the statement is true for some nonnegative integer $k$, i.e. $\Gamma(k + 1) = k!$. By the previous proposition,
\[
\Gamma(k + 2) = \Gamma((k + 1) + 1) = (k + 1)\Gamma(k + 1) = (k + 1) \cdot k! = (k + 1)!. \qquad \square
\]
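The corollary can be spot-checked directly with the standard library's `math.gamma` and `math.factorial` (a sanity check, not a proof):

```python
import math

# Corollary 1.1 predicts Gamma(n + 1) = n! for every integer n >= 0.
checks = [math.isclose(math.gamma(n + 1), math.factorial(n)) for n in range(10)]
```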
Proposition 1.2. For any $s > 0$, $x > 0$, we have
\[
\int_0^\infty e^{-st} t^{x-1}\,dt = \frac{\Gamma(x)}{s^x}.
\]

Proof. This formula can be proved by using the substitution $u = st$. $\square$
Example 1.1.
\[
\int_0^\infty e^{-st}(2 - 3t + 5t^2)\,dt = \frac{2}{s} - \frac{3}{s^2} + \frac{10}{s^3}.
\]
This follows from the previous proposition applied term by term.
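Here is a quick numerical check of this identity in plain Python (the truncation point `T` and step count `n` are ad hoc illustration choices):

```python
import math

def transform_num(s, T=80.0, n=400000):
    # Truncated midpoint rule for the integral of e^{-st}(2 - 3t + 5t^2).
    h = T / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        total += math.exp(-s * t) * (2.0 - 3.0 * t + 5.0 * t * t)
    return total * h

def closed_form(s):
    # The value 2/s - 3/s^2 + 10/s^3 predicted by Proposition 1.2.
    return 2.0 / s - 3.0 / s ** 2 + 10.0 / s ** 3

err = abs(transform_num(2.0) - closed_form(2.0))
```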
Here comes a question: if $f(t)$ is a continuous function on $[0, \infty)$ such that
\[
\int_0^\infty e^{-st} f(t)\,dt = \frac{2}{s} - \frac{3}{s^2} + \frac{10}{s^3},
\]
what can you say about $f(t)$? Let us introduce the notion of the Laplace transformation.
2. Laplace Transformation, its inverse transformation, and linear homogeneous o.d.e. of constant coefficients

Definition 2.1. Let $f(t)$ be a continuous function on $[0, \infty)$. Suppose $\int_0^\infty e^{-st} f(t)\,dt$ converges for $s \ge a$ for some $a \in \mathbb{R}$. Denote
\[
\mathcal{L}(f)(s) = \int_0^\infty e^{-st} f(t)\,dt,
\]
called the Laplace transform of $f$.


Now, let us assume that the Laplace transforms of $f_1, f_2$ exist on some domain $D \subset \mathbb{R}$, where $f_1, f_2$ are continuous functions on $[0, \infty)$. For any real numbers $a_1, a_2$, we know that for any $s \in D$,
\begin{align*}
\mathcal{L}(a_1 f_1 + a_2 f_2)(s) &= \int_0^\infty (a_1 f_1(t) + a_2 f_2(t)) e^{-st}\,dt \\
&= \lim_{M\to\infty} \int_0^M (a_1 f_1(t) + a_2 f_2(t)) e^{-st}\,dt \\
&= \lim_{M\to\infty} \left( a_1 \int_0^M f_1(t) e^{-st}\,dt + a_2 \int_0^M f_2(t) e^{-st}\,dt \right) \\
&= a_1 \lim_{M\to\infty} \int_0^M f_1(t) e^{-st}\,dt + a_2 \lim_{M\to\infty} \int_0^M f_2(t) e^{-st}\,dt \\
&= a_1 \mathcal{L}(f_1)(s) + a_2 \mathcal{L}(f_2)(s).
\end{align*}
Proposition 2.1. Suppose that $f_1, f_2$ are two continuous functions on $[0, \infty)$ such that their Laplace transforms exist on some domain $D \subset \mathbb{R}$. Then
\[
\mathcal{L}(a_1 f_1 + a_2 f_2)(s) = a_1 \mathcal{L}(f_1)(s) + a_2 \mathcal{L}(f_2)(s), \qquad s \in D,
\]
for any $a_1, a_2 \in \mathbb{R}$. (We say that the Laplace transform is a linear transformation.)
Theorem 2.1. Suppose $f_1, f_2$ are two continuous functions on $[0, \infty)$ such that their Laplace transforms exist. If $\mathcal{L}(f_1) = \mathcal{L}(f_2)$, then $f_1 = f_2$.

Hence if $g(s) = \mathcal{L}(f)(s)$, we denote
\[
f(t) = \mathcal{L}^{-1}(g)(t),
\]
called the inverse Laplace transform of $g$.
Proposition 2.2. Suppose $f(t)$ is a smooth function on $[0, \infty)$ ($f^{(k)}(t)$ exists for all $k \ge 1$; set $f^{(0)}(t) = f(t)$). Assume that $\lim_{t\to\infty} f^{(k)}(t) e^{-st} = 0$ for all $k \ge 0$. Then
\[
\mathcal{L}(f')(s) = -f(0) + s\,\mathcal{L}(f)(s).
\]

Proof. Using integration by parts,
\[
\int_0^N e^{-st} f'(t)\,dt = \left. f(t) e^{-st} \right|_0^N + s \int_0^N e^{-st} f(t)\,dt = f(N) e^{-sN} - f(0) + s \int_0^N e^{-st} f(t)\,dt.
\]
By assumption, $\lim_{N\to\infty} f(N) e^{-sN} = 0$ (consider $k = 0$). Hence
\begin{align*}
\int_0^\infty e^{-st} f'(t)\,dt &= \lim_{N\to\infty} \int_0^N e^{-st} f'(t)\,dt \\
&= \lim_{N\to\infty} \left( f(N) e^{-sN} - f(0) + s \int_0^N e^{-st} f(t)\,dt \right) \\
&= -f(0) + s \lim_{N\to\infty} \int_0^N e^{-st} f(t)\,dt \\
&= -f(0) + s \int_0^\infty e^{-st} f(t)\,dt. \qquad \square
\end{align*}
Corollary 2.1. Under the same assumptions as above, writing $F(s) = \mathcal{L}(f)(s)$, we have
\[
\mathcal{L}(f^{(n)})(s) = s^n F(s) - \sum_{k=0}^{n-1} s^{n-k-1} f^{(k)}(0).
\]
When $n = 2$, we have $\mathcal{L}(f'')(s) = s^2 F(s) - s f(0) - f'(0)$.
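The $n = 2$ case can be sanity-checked numerically with $f = \cos$: then $f'' = -\cos$, $f(0) = 1$, $f'(0) = 0$, so the corollary predicts $\mathcal{L}(f'')(s) = s^2\mathcal{L}(f)(s) - s$. A plain-Python sketch (truncation `T` and step count `n` are ad hoc choices):

```python
import math

def laplace_num(f, s, T=60.0, n=300000):
    # Truncated midpoint rule for the Laplace integral of f at s.
    h = T / n
    return sum(math.exp(-s * (i + 0.5) * h) * f((i + 0.5) * h)
               for i in range(n)) * h

s = 2.0
lhs_val = laplace_num(lambda t: -math.cos(t), s)   # L(f'')(s) for f = cos
rhs_val = s * s * laplace_num(math.cos, s) - s     # s^2 L(f)(s) - s f(0) - f'(0)
```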
Now let us use the Laplace transform to solve ordinary differential equations with constant coefficients.

Example 2.1. Solve $y' + 2y = 0$ with initial condition $y(0) = 1$.

Taking the Laplace transform, we obtain
\[
\mathcal{L}(y' + 2y)(s) = \mathcal{L}(y')(s) + 2\mathcal{L}(y)(s) = 0.
\]
Let $g(s) = \mathcal{L}(y)(s)$. Using the previous proposition,
\[
\mathcal{L}(y')(s) = -y(0) + s\mathcal{L}(y)(s) = -1 + s\,g(s).
\]
Hence $(-1 + s\,g(s)) + 2g(s) = 0$. This implies that $g(s) = \dfrac{1}{s + 2}$. Then
\[
y = \mathcal{L}^{-1}(g)(t) = e^{-2t}.
\]
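We can numerically confirm that the solution $y(t) = e^{-2t}$ indeed has transform $1/(s+2)$, a quick plain-Python check (`T` and `n` are ad hoc accuracy knobs):

```python
import math

def laplace_num(f, s, T=60.0, n=300000):
    # Truncated midpoint rule for the Laplace integral of f at s.
    h = T / n
    return sum(math.exp(-s * (i + 0.5) * h) * f((i + 0.5) * h)
               for i in range(n)) * h

# g(s) = 1/(s + 2); at s = 1 this is 1/3.
err = abs(laplace_num(lambda t: math.exp(-2.0 * t), 1.0) - 1.0 / 3.0)
```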
Example 2.2. Solve $y'' + y = 0$ with initial conditions $y(0) = a$ and $y'(0) = b$.

Let $g(s)$ be the Laplace transform of $y$. Then
\[
\mathcal{L}(y'')(s) + g(s) = 0.
\]
On the other hand, $\mathcal{L}(y'')(s) = s^2 g(s) - s y(0) - y'(0) = s^2 g(s) - as - b$. This implies that
\[
(s^2 + 1)\,g(s) = as + b \implies g(s) = a\,\frac{s}{s^2 + 1} + b\,\frac{1}{s^2 + 1}.
\]
We know $\dfrac{s}{s^2 + 1} = \mathcal{L}(\cos t)(s)$ and $\dfrac{1}{s^2 + 1} = \mathcal{L}(\sin t)(s)$. Hence $y(t) = a\cos t + b\sin t$.
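The two transform formulas used in this example can themselves be verified numerically, again a rough midpoint-rule sketch (`T` and `n` are ad hoc choices):

```python
import math

def laplace_num(f, s, T=60.0, n=300000):
    # Truncated midpoint rule for the Laplace integral of f at s.
    h = T / n
    return sum(math.exp(-s * (i + 0.5) * h) * f((i + 0.5) * h)
               for i in range(n)) * h

s = 2.0
e_cos = abs(laplace_num(math.cos, s) - s / (s * s + 1))  # L(cos t)(s)
e_sin = abs(laplace_num(math.sin, s) - 1 / (s * s + 1))  # L(sin t)(s)
```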
In general, we can solve linear homogeneous differential equations with constant coefficients through the Laplace transformation. A linear homogeneous differential equation with constant coefficients is a differential equation
\[
(2.1)\qquad a_n y^{(n)} + a_{n-1} y^{(n-1)} + \cdots + a_1 y' + a_0 y = 0,
\]
where $a_0, \dots, a_n \in \mathbb{R}$. If $a_n \ne 0$, we say that the equation is of order $n$.

Let us study the case when $n = 2$:
\[
(2.2)\qquad a y'' + b y' + c y = 0.
\]
Here $a \ne 0$. Taking the Laplace transformation of the equation, we have
\[
a(s^2 F(s) - s y(0) - y'(0)) + b(s F(s) - y(0)) + c F(s) = 0,
\]
where $F(s) = \mathcal{L}(y)(s)$. Therefore $F(s)$ is a rational function in $s$ given by
\[
F(s) = \frac{As + B}{as^2 + bs + c},
\]
where $A = a\,y(0)$ and $B = a\,y'(0) + b\,y(0)$. We can use partial fraction expansion to decompose $F(s)$.
Definition 2.2. The polynomial $\chi(s) = as^2 + bs + c$ is called the characteristic polynomial of (2.2).

We assume that $a = 1$ and let $D = b^2 - 4c$ be the discriminant of the characteristic polynomial.
case 1: If $D > 0$, $\chi(s)$ has two distinct real roots, which we denote by $\lambda_1$ and $\lambda_2$. Thus we may write
\[
F(s) = C_1\,\frac{1}{s - \lambda_1} + C_2\,\frac{1}{s - \lambda_2}
\]
for some $C_1, C_2 \in \mathbb{R}$. We know that
\[
\mathcal{L}(e^{at})(s) = \frac{1}{s - a}.
\]
Using the inverse transform and the linearity of $\mathcal{L}^{-1}$, we see that
\begin{align*}
y(t) = \mathcal{L}^{-1}(F)(t) &= C_1\,\mathcal{L}^{-1}\!\left( \frac{1}{s - \lambda_1} \right) + C_2\,\mathcal{L}^{-1}\!\left( \frac{1}{s - \lambda_2} \right) \\
&= C_1 e^{\lambda_1 t} + C_2 e^{\lambda_2 t}.
\end{align*}
Hence $y$ is a linear combination of $e^{\lambda_1 t}$ and $e^{\lambda_2 t}$.
case 2: If $D = 0$, $\chi(s)$ has one repeated root, called $\lambda$. Then we write
\[
F(s) = C_1\,\frac{1}{s - \lambda} + C_2\,\frac{1}{(s - \lambda)^2}.
\]
Recall that
\[
\mathcal{L}(t^n e^{\lambda t})(s) = \int_0^\infty e^{-st} t^n e^{\lambda t}\,dt = \frac{\Gamma(n + 1)}{(s - \lambda)^{n+1}} = \frac{n!}{(s - \lambda)^{n+1}}.
\]
Thus we obtain
\begin{align*}
y(t) = \mathcal{L}^{-1}(F)(t) &= C_1\,\mathcal{L}^{-1}\!\left( \frac{1}{s - \lambda} \right) + C_2\,\mathcal{L}^{-1}\!\left( \frac{1}{(s - \lambda)^2} \right) \\
&= C_1 e^{\lambda t} + C_2 t e^{\lambda t} = (C_1 + C_2 t) e^{\lambda t}.
\end{align*}
case 3: If $D < 0$, by completing the square, we write
\[
\chi(s) = (s - \alpha)^2 + \beta^2.
\]
We write
\[
F(s) = \frac{As + B}{(s - \alpha)^2 + \beta^2} = C_1\,\frac{s - \alpha}{(s - \alpha)^2 + \beta^2} + C_2\,\frac{\beta}{(s - \alpha)^2 + \beta^2}.
\]
Notice that
\[
\mathcal{L}(e^{\alpha t}\cos\beta t)(s) = \frac{s - \alpha}{(s - \alpha)^2 + \beta^2}, \qquad \mathcal{L}(e^{\alpha t}\sin\beta t)(s) = \frac{\beta}{(s - \alpha)^2 + \beta^2}.
\]
Thus
\begin{align*}
y(t) = \mathcal{L}^{-1}(F)(t) &= C_1\,\mathcal{L}^{-1}\!\left( \frac{s - \alpha}{(s - \alpha)^2 + \beta^2} \right) + C_2\,\mathcal{L}^{-1}\!\left( \frac{\beta}{(s - \alpha)^2 + \beta^2} \right) \\
&= C_1 e^{\alpha t}\cos\beta t + C_2 e^{\alpha t}\sin\beta t = e^{\alpha t}(C_1\cos\beta t + C_2\sin\beta t).
\end{align*}
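The three cases can be packaged into a small solver. The following is a plain-Python sketch (the function names and the finite-difference residual check are our own illustration, not from the notes):

```python
import math

def solve_2nd_order(a, b, c, C1, C2):
    # Return a solution t -> y(t) of a*y'' + b*y' + c*y = 0, following
    # the three discriminant cases above (C1, C2 are free constants).
    D = b * b - 4 * a * c
    if D > 0:
        l1 = (-b + math.sqrt(D)) / (2 * a)
        l2 = (-b - math.sqrt(D)) / (2 * a)
        return lambda t: C1 * math.exp(l1 * t) + C2 * math.exp(l2 * t)
    if D == 0:
        lam = -b / (2 * a)
        return lambda t: (C1 + C2 * t) * math.exp(lam * t)
    alpha = -b / (2 * a)
    beta = math.sqrt(-D) / (2 * a)
    return lambda t: math.exp(alpha * t) * (
        C1 * math.cos(beta * t) + C2 * math.sin(beta * t))

def residual(a, b, c, y, t, h=1e-4):
    # Central-difference approximation of a*y'' + b*y' + c*y at t;
    # it should be close to zero for a genuine solution.
    y2 = (y(t + h) - 2 * y(t) + y(t - h)) / (h * h)
    y1 = (y(t + h) - y(t - h)) / (2 * h)
    return a * y2 + b * y1 + c * y(t)
```

For instance, `solve_2nd_order(1, -3, 2, 1.0, 2.0)` falls into case 1 (roots 2 and 1), `solve_2nd_order(1, 2, 1, ...)` into case 2, and `solve_2nd_order(1, 0, 1, ...)` into case 3.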
Remark. Let $V$ be the subset of $C^2[0, \infty)$ consisting of all $y$ such that (2.2) holds. The above observation implies that $V$ is in fact a two-dimensional real vector (sub)space (of $C^2[0, \infty)$). This method can be applied to (2.1) for any $n \in \mathbb{N}$.
Definition 2.3. The characteristic polynomial of (2.1) is defined to be
\[
\chi(s) = a_n s^n + \cdots + a_1 s + a_0.
\]
Taking the Laplace transformation of (2.1) and using Corollary 2.1, we find that
\[
F(s) = \frac{P(s)}{\chi(s)}
\]
for some polynomial $P(s) \in \mathbb{R}[s]$. Thus $F(s)$ is a rational function and we can solve for $y$ using the partial fraction expansion of $F(s)$. This reduces our problem to the cases $\chi(s) = (s - a)^n$ and $\chi(s) = ((s - \alpha)^2 + \beta^2)^m$. When $\chi(s) = (s - a)^n$, we write
\begin{align*}
F(s) &= \frac{A_1}{s - a} + \frac{A_2}{(s - a)^2} + \cdots + \frac{A_n}{(s - a)^n} \\
&= C_0\,\frac{1}{s - a} + C_1\,\frac{1!}{(s - a)^2} + \cdots + C_{n-1}\,\frac{(n - 1)!}{(s - a)^n},
\end{align*}
where $C_{i-1} = A_i/(i - 1)!$ for $i = 1, \dots, n$. By taking the inverse transform, we obtain
\[
y(t) = (C_0 + C_1 t + \cdots + C_{n-1} t^{n-1}) e^{at}.
\]
When $\chi(s) = ((s - \alpha)^2 + \beta^2)^m$, we write
\begin{align*}
F(s) &= \frac{A_1 s + B_1}{(s - \alpha)^2 + \beta^2} + \cdots + \frac{A_m s + B_m}{((s - \alpha)^2 + \beta^2)^m} \\
&= \frac{C_1(s - \alpha) + D_1\beta}{(s - \alpha)^2 + \beta^2} + \cdots + \frac{C_m(s - \alpha) + D_m\beta}{((s - \alpha)^2 + \beta^2)^m}.
\end{align*}
Here $C_i, D_i$ are constants. By finding the inverse transform of $(C_i(s - \alpha) + D_i\beta)/((s - \alpha)^2 + \beta^2)^i$, we obtain $y$.
3. Beta Function

In this section, $x, y$ are positive real numbers. Let us define
\[
B(x, y) = \frac{\Gamma(x)\Gamma(y)}{\Gamma(x + y)}.
\]
The function $B(x, y)$ is called the Beta function. It follows immediately from the definition that
\[
B(x, y) = B(y, x), \qquad \forall x, y > 0.
\]
Using the property of the Gamma function $\Gamma(x + 1) = x\Gamma(x)$ for $x > 0$, we obtain:

Proposition 3.1. Suppose $p, q > 0$. Then
(1) $B(p, q) = B(p + 1, q) + B(p, q + 1)$.
(2) $B(p, q + 1) = \dfrac{q}{p}\,B(p + 1, q) = \dfrac{q}{p + q}\,B(p, q)$.
Theorem 3.1. For $x, y > 0$,
\[
B(x, y) = \int_0^\infty \frac{t^{x-1}}{(1 + t)^{x+y}}\,dt.
\]
Let us postpone the proof of this equation.
Let us consider the substitution $t = \tan^2\theta$ with $0 \le \theta < \pi/2$. Then $dt = 2\tan\theta\sec^2\theta\,d\theta$. Using $\sec^2\theta = \tan^2\theta + 1$, the Beta function $B(x, y)$ can be rewritten as
\begin{align*}
B(x, y) &= \int_0^{\pi/2} \frac{(\tan^2\theta)^{x-1}}{(1 + \tan^2\theta)^{x+y}} \cdot 2\tan\theta\sec^2\theta\,d\theta \\
&= 2\int_0^{\pi/2} \frac{\tan^{2x-2}\theta}{\sec^{2x+2y}\theta} \cdot \tan\theta\sec^2\theta\,d\theta \\
&= 2\int_0^{\pi/2} \tan^{2x-1}\theta \cdot \frac{1}{\sec^{2x+2y-2}\theta}\,d\theta \\
&= 2\int_0^{\pi/2} \left( \frac{\sin\theta}{\cos\theta} \right)^{2x-1} \cos^{2x+2y-2}\theta\,d\theta \\
&= 2\int_0^{\pi/2} \sin^{2x-1}\theta\,\cos^{2y-1}\theta\,d\theta.
\end{align*}
Using $B(y, x) = B(x, y)$, we also obtain
\[
B(x, y) = 2\int_0^{\pi/2} \cos^{2x-1}\theta\,\sin^{2y-1}\theta\,d\theta.
\]
This is the second form of the Beta function. Consider the substitution $t = \sin^2\theta$ for $0 \le \theta \le \pi/2$. Then $dt = 2\sin\theta\cos\theta\,d\theta$, and we obtain the third form of the Beta function:
\[
B(x, y) = \int_0^1 t^{x-1}(1 - t)^{y-1}\,dt.
\]
We conclude that:

Theorem 3.2. The Beta function has the following forms:
(1) $B(x, y) = \displaystyle\int_0^\infty \frac{t^{x-1}}{(1 + t)^{x+y}}\,dt$,
(2) $B(x, y) = 2\displaystyle\int_0^{\pi/2} \cos^{2x-1}\theta\,\sin^{2y-1}\theta\,d\theta$,
(3) $B(x, y) = \displaystyle\int_0^1 t^{x-1}(1 - t)^{y-1}\,dt$.
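The agreement of the three forms can be observed numerically. A plain-Python midpoint-rule sketch at $(x, y) = (2, 3)$, where $B(2, 3) = \Gamma(2)\Gamma(3)/\Gamma(5) = 1/12$ (truncation `T` and step counts `n` are ad hoc choices):

```python
import math

def beta_form1(x, y, T=200.0, n=400000):
    # Form (1): truncated midpoint rule for t^{x-1}/(1+t)^{x+y} on [0, T].
    h = T / n
    return sum(((i + 0.5) * h) ** (x - 1) / (1 + (i + 0.5) * h) ** (x + y)
               for i in range(n)) * h

def beta_form2(x, y, n=200000):
    # Form (2): 2 * integral of cos^{2x-1} sin^{2y-1} over [0, pi/2].
    h = (math.pi / 2) / n
    return 2 * sum(math.cos((i + 0.5) * h) ** (2 * x - 1)
                   * math.sin((i + 0.5) * h) ** (2 * y - 1)
                   for i in range(n)) * h

def beta_form3(x, y, n=200000):
    # Form (3): integral of t^{x-1} (1-t)^{y-1} over [0, 1].
    h = 1.0 / n
    return sum(((i + 0.5) * h) ** (x - 1) * (1 - (i + 0.5) * h) ** (y - 1)
               for i in range(n)) * h

exact = math.gamma(2.0) * math.gamma(3.0) / math.gamma(5.0)  # B(2, 3)
errors = [abs(f(2.0, 3.0) - exact) for f in (beta_form1, beta_form2, beta_form3)]
```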
Example 3.1. Compute $\int_0^\infty e^{-x^2}\,dx$.

Let us make the change of variable $t = x^2$. Then $dt = 2x\,dx$ and thus $dx = \dfrac{t^{-1/2}\,dt}{2}$. The integral can be rewritten as
\[
\int_0^\infty e^{-x^2}\,dx = \frac{1}{2}\int_0^\infty e^{-t} t^{-1/2}\,dt = \frac{\Gamma\!\left(\frac{1}{2}\right)}{2}.
\]
Now, we only need to compute $\Gamma(1/2)$. Using the Beta function,
\[
B\!\left( \frac{1}{2}, \frac{1}{2} \right) = \frac{\Gamma\!\left(\frac{1}{2}\right)^2}{\Gamma\!\left(\frac{1}{2} + \frac{1}{2}\right)} = \Gamma\!\left(\frac{1}{2}\right)^2.
\]
On the other hand,
\[
B\!\left( \frac{1}{2}, \frac{1}{2} \right) = 2\int_0^{\pi/2} \cos^{2\cdot\frac{1}{2}-1}\theta\,\sin^{2\cdot\frac{1}{2}-1}\theta\,d\theta = 2\int_0^{\pi/2} d\theta = \pi.
\]
Hence $\Gamma(1/2) = \sqrt{\pi}$. We obtain that
\[
\int_0^\infty e^{-x^2}\,dx = \frac{\sqrt{\pi}}{2}.
\]
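A quick numerical check of this classical value in plain Python (the cutoff `T = 10` is an ad hoc choice; the Gaussian tail beyond it is far below the quadrature error):

```python
import math

def gauss_integral(T=10.0, n=200000):
    # Midpoint rule for the truncated integral of e^{-x^2} on [0, T].
    h = T / n
    return sum(math.exp(-((i + 0.5) * h) ** 2) for i in range(n)) * h

err = abs(gauss_integral() - math.sqrt(math.pi) / 2)
```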
Example 3.2. Compute $\int_0^{2\pi} \sin^4\theta\,d\theta$.

We know
\[
\int_0^{2\pi} \sin^4\theta\,d\theta = 4\int_0^{\pi/2} \sin^4\theta\,d\theta = 2 \cdot 2\int_0^{\pi/2} \sin^{2\cdot\frac{5}{2}-1}\theta\,\cos^{2\cdot\frac{1}{2}-1}\theta\,d\theta = 2B\!\left( \frac{5}{2}, \frac{1}{2} \right).
\]
We compute
\[
B\!\left( \frac{5}{2}, \frac{1}{2} \right) = \frac{\Gamma\!\left(\frac{5}{2}\right)\Gamma\!\left(\frac{1}{2}\right)}{\Gamma(3)}.
\]
Using $\Gamma(x + 1) = x\Gamma(x)$, we obtain
\[
\Gamma\!\left( \frac{5}{2} \right) = \frac{3}{2} \cdot \frac{1}{2} \cdot \Gamma\!\left( \frac{1}{2} \right) = \frac{3}{4}\sqrt{\pi}.
\]
Hence
\[
B\!\left( \frac{5}{2}, \frac{1}{2} \right) = \frac{\frac{3}{4}\sqrt{\pi} \cdot \sqrt{\pi}}{2!} = \frac{3}{8}\pi.
\]
Thus $\int_0^{2\pi} \sin^4\theta\,d\theta = \frac{3}{4}\pi$.
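The answer $\frac{3}{4}\pi$ is easy to confirm numerically (for a periodic integrand the midpoint rule over a full period is extremely accurate):

```python
import math

def sin4_integral(n=100000):
    # Midpoint rule for the integral of sin^4(x) over [0, 2*pi].
    h = 2 * math.pi / n
    return sum(math.sin((i + 0.5) * h) ** 4 for i in range(n)) * h

err = abs(sin4_integral() - 3 * math.pi / 4)
```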
Now, let us go back to the proof of Theorem 3.1. We have
\begin{align*}
\Gamma(x)\Gamma(y) &= \left( \int_0^\infty e^{-t} t^{x-1}\,dt \right)\left( \int_0^\infty e^{-s} s^{y-1}\,ds \right) \\
&= \int_0^\infty \int_0^\infty e^{-(t+s)} t^{x-1} s^{y-1}\,dt\,ds.
\end{align*}
Let us compute $\int_0^\infty e^{-(t+s)} t^{x-1}\,dt$. Substituting $t = su$, we rewrite
\[
\int_0^\infty e^{-(t+s)} t^{x-1}\,dt = s^x \int_0^\infty e^{-(u+1)s} u^{x-1}\,du.
\]
Hence the integral becomes
\begin{align*}
\Gamma(x)\Gamma(y) &= \int_0^\infty \left( \int_0^\infty e^{-(u+1)s} s^{x+y-1}\,ds \right) u^{x-1}\,du \\
&= \int_0^\infty \frac{\Gamma(x + y)}{(u + 1)^{x+y}} \cdot u^{x-1}\,du \\
&= \Gamma(x + y) \int_0^\infty \frac{u^{x-1}}{(1 + u)^{x+y}}\,du.
\end{align*}
Here we used Proposition 1.2. This shows that
\[
\frac{\Gamma(x)\Gamma(y)}{\Gamma(x + y)} = \int_0^\infty \frac{u^{x-1}}{(1 + u)^{x+y}}\,du. \qquad \square
\]
Example 3.3. Compute $\int_1^3 (x - 1)^{10}(x - 3)^3\,dx$.

Let $t = (x - 1)/2$. Then the integral becomes
\[
\int_1^3 (x - 1)^{10}(x - 3)^3\,dx = \int_0^1 (2t)^{10}(2t - 2)^3 \cdot 2\,dt = -2^{14}\int_0^1 t^{10}(1 - t)^3\,dt.
\]
We obtain
\[
\int_1^3 (x - 1)^{10}(x - 3)^3\,dx = -2^{14} B(11, 4) = -2^{14} \cdot \frac{10!\,3!}{14!}.
\]
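A quick numerical check of the closed form in plain Python (the step count `n` is an ad hoc choice):

```python
import math

def poly_integral(n=200000):
    # Midpoint rule for the integral of (x-1)^10 (x-3)^3 over [1, 3].
    h = 2.0 / n
    total = 0.0
    for i in range(n):
        x = 1.0 + (i + 0.5) * h
        total += (x - 1) ** 10 * (x - 3) ** 3
    return total * h

# Closed form from the example: -2^14 * 10! * 3! / 14!.
exact = -(2 ** 14) * math.factorial(10) * math.factorial(3) / math.factorial(14)
```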