DIFFERENTIATING UNDER THE INTEGRAL SIGN

KEITH CONRAD

I had learned to do integrals by various methods shown in a book that my high school physics teacher Mr. Bader had given me. [It] showed how to differentiate parameters under the integral sign – it's a certain operation. It turns out that's not taught very much in the universities; they don't emphasize it. But I caught on how to use that method, and I used that one damn tool again and again. [If] guys at MIT or Princeton had trouble doing a certain integral, [then] I come along and try differentiating under the integral sign, and often it worked. So I got a great reputation for doing integrals, only because my box of tools was different from everybody else's, and they had tried all their tools on it before giving the problem to me.

Richard Feynman [1, pp. 71–72]

1. Introduction

The method of differentiation under the integral sign, due originally to Leibniz, concerns integrals depending on a parameter, such as $\int_0^1 x^2 e^{-tx}\,dx$. Here $t$ is the extra parameter. (Since $x$ is the variable of integration, $x$ is not a parameter.) In general, we might write such an integral as
$$\int_a^b f(x,t)\,dx, \tag{1.1}$$

where $f(x,t)$ is a function of two variables like $f(x,t) = x^2 e^{-tx}$.

Example 1.1. Let $f(x,t) = (2x+t^3)^2$. Then

$$\int_0^1 f(x,t)\,dx = \int_0^1 (2x+t^3)^2\,dx.$$

An anti-derivative of $(2x+t^3)^2$ with respect to $x$ is $\frac{1}{6}(2x+t^3)^3$, so
$$\int_0^1 (2x+t^3)^2\,dx = \frac{(2x+t^3)^3}{6}\,\bigg|_{x=0}^{x=1} = \frac{(2+t^3)^3 - t^9}{6} = \frac{4}{3} + 2t^3 + t^6.$$

This answer is a function of $t$, which makes sense since the integrand depends on $t$. We integrate over $x$ and are left with something that depends only on $t$, not $x$.

An integral like $\int_a^b f(x,t)\,dx$ is a function of $t$, so we can ask about its $t$-derivative, assuming that $f(x,t)$ is nicely behaved. The rule is: the $t$-derivative of the integral of $f(x,t)$ is the integral of the $t$-derivative of $f(x,t)$:

$$\frac{d}{dt}\int_a^b f(x,t)\,dx = \int_a^b \frac{\partial}{\partial t} f(x,t)\,dx. \tag{1.2}$$

This is called differentiation under the integral sign. If you are used to thinking mostly about functions with one variable, not two, keep in mind that (1.2) involves integrals and derivatives with respect to separate variables: integration with respect to $x$ and differentiation with respect to $t$.

Example 1.2. We saw in Example 1.1 that $\int_0^1 (2x+t^3)^2\,dx = 4/3 + 2t^3 + t^6$, whose $t$-derivative is $6t^2 + 6t^5$. According to (1.2), we can also compute the $t$-derivative of the integral like this:

$$\frac{d}{dt}\int_0^1 (2x+t^3)^2\,dx = \int_0^1 \frac{\partial}{\partial t}(2x+t^3)^2\,dx = \int_0^1 2(2x+t^3)(3t^2)\,dx = \int_0^1 (12t^2 x + 6t^5)\,dx = \big(6t^2 x^2 + 6t^5 x\big)\Big|_{x=0}^{x=1} = 6t^2 + 6t^5.$$

The answers agree.

2. Euler's factorial integral in a new light

For integers $n \geq 0$, Euler's integral formula for $n!$ is

$$\int_0^\infty x^n e^{-x}\,dx = n!, \tag{2.1}$$

which can be obtained by repeated integration by parts starting from the formula

$$\int_0^\infty e^{-x}\,dx = 1 \tag{2.2}$$

when $n = 0$. Now we are going to derive Euler's formula in another way, by repeated differentiation after introducing a parameter $t$ into (2.2).

For any $t > 0$, let $x = tu$. Then $dx = t\,du$ and (2.2) becomes

$$\int_0^\infty t e^{-tu}\,du = 1.$$

Dividing by $t$ and writing $u$ as $x$ (why is this not a problem?), we get

$$\int_0^\infty e^{-tx}\,dx = \frac{1}{t}. \tag{2.3}$$

This is a parametric form of (2.2), where both sides are now functions of $t$. We need $t > 0$ in order that $e^{-tx}$ is integrable over the region $x \geq 0$.
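As a quick numerical sanity check (not taken from the article), the parametric formula (2.3) can be confirmed for a few sample values of $t$. The snippet below is a minimal sketch assuming NumPy and SciPy are available.

```python
# Numerical check of (2.3): for t > 0, the integral of e^{-tx} over [0, inf)
# should equal 1/t. SciPy's quad handles the improper upper limit.
import numpy as np
from scipy.integrate import quad

for t in (0.5, 1.0, 3.0):
    value, _ = quad(lambda x, t=t: np.exp(-t * x), 0, np.inf)
    print(t, value, 1 / t)   # the last two columns should agree
```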

Now we bring in differentiation under the integral sign. Differentiate both sides of (2.3) with respect to $t$, using (1.2) to treat the left side. We obtain

$$\int_0^\infty -x e^{-tx}\,dx = -\frac{1}{t^2},$$

so

$$\int_0^\infty x e^{-tx}\,dx = \frac{1}{t^2}. \tag{2.4}$$

Differentiate both sides of (2.4) with respect to $t$, again using (1.2) to handle the left side. We get

$$\int_0^\infty -x^2 e^{-tx}\,dx = -\frac{2}{t^3}.$$

Taking out the sign on both sides,

$$\int_0^\infty x^2 e^{-tx}\,dx = \frac{2}{t^3}. \tag{2.5}$$

If we continue to differentiate each new equation with respect to $t$ a few more times, we obtain

$$\int_0^\infty x^3 e^{-tx}\,dx = \frac{6}{t^4}, \qquad \int_0^\infty x^4 e^{-tx}\,dx = \frac{24}{t^5}, \qquad \int_0^\infty x^5 e^{-tx}\,dx = \frac{120}{t^6}.$$

Do you see the pattern? It is

$$\int_0^\infty x^n e^{-tx}\,dx = \frac{n!}{t^{n+1}}. \tag{2.6}$$

We have used the presence of the extra variable $t$ to get these equations by repeatedly applying $d/dt$. Now specialize $t$ to 1 in (2.6). We obtain

$$\int_0^\infty x^n e^{-x}\,dx = n!,$$

which is our old friend (2.1). Voilà!

The idea that made this work is introducing a parameter $t$, using calculus on $t$, and then setting $t$ to a particular value so it disappears from the final formula. In other words, sometimes to solve a problem it is useful to solve a more general problem. Compare (2.1) to (2.6).
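The pattern (2.6) is also easy to test numerically. The following minimal sketch (an addition, not part of the article) compares the integral against $n!/t^{n+1}$ for a few small $n$, assuming NumPy and SciPy are installed.

```python
# Numerical check of (2.6): the integral of x^n e^{-tx} over [0, inf)
# should equal n!/t^{n+1} for each n >= 0 and any fixed t > 0.
import math
import numpy as np
from scipy.integrate import quad

t = 1.7  # an arbitrary positive sample value
for n in range(6):
    value, _ = quad(lambda x, n=n: x**n * np.exp(-t * x), 0, np.inf)
    print(n, value, math.factorial(n) / t**(n + 1))
```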

3. A damped sine integral

We are going to use differentiation under the integral sign to prove

$$\int_0^\infty \frac{\sin x}{x}\,dx = \frac{\pi}{2}. \tag{3.1}$$

This formula is important in signal processing and Fourier analysis. Since $(\sin x)/x$ is even, an equivalent formula over the whole real line is $\int_{-\infty}^\infty \frac{\sin x}{x}\,dx = \pi$.

To prove (3.1), we need to introduce a parameter. Instead of integrating $(\sin x)/x$, we will integrate $f(x,t) = e^{-tx}(\sin x)/x$, where $t \geq 0$. Note $f(x,0) = (\sin x)/x$. Set

$$F(t) = \int_0^\infty e^{-tx}\,\frac{\sin x}{x}\,dx$$

for $t \geq 0$. Our goal is to show $F(0) = \pi/2$, and we will get this by studying $F(t)$ for variable $t$. Using differentiation under the integral sign, for $t > 0$ we have

$$F'(t) = \int_0^\infty -x e^{-tx}\,\frac{\sin x}{x}\,dx = -\int_0^\infty e^{-tx}(\sin x)\,dx.$$

The integrand $e^{-tx}\sin x$, as a function of $x$, can be integrated by parts:

$$\int e^{ax}\sin x\,dx = \frac{a\sin x - \cos x}{1+a^2}\,e^{ax}.$$

Applying this with $a = -t$ and turning the indefinite integral into a definite integral,

$$F'(t) = -\int_0^\infty e^{-tx}(\sin x)\,dx = \frac{t\sin x + \cos x}{1+t^2}\,e^{-tx}\,\bigg|_{x=0}^{x=\infty}.$$

As $x \to \infty$, $t\sin x + \cos x$ oscillates a lot, but in a bounded way (since $\sin x$ and $\cos x$ are bounded functions), while the term $e^{-tx}$ decays exponentially to 0 since $t > 0$. So the value at $x = \infty$ is 0, and we are left with the negative of the value at $x = 0$, giving

$$F'(t) = -\int_0^\infty e^{-tx}(\sin x)\,dx = -\frac{1}{1+t^2}.$$

We know an explicit antiderivative of $1/(1+t^2)$, namely $\arctan t$. Since $F(t)$ has the same $t$-derivative as $-\arctan t$ for $t > 0$, they differ by a constant: $F(t) = -\arctan t + C$ for $t > 0$. Let's write this out explicitly:

$$\int_0^\infty e^{-tx}\,\frac{\sin x}{x}\,dx = -\arctan t + C. \tag{3.2}$$

Notice we obtained (3.2) by seeing both sides have the same $t$-derivative, not by actually finding an antiderivative of $e^{-tx}(\sin x)/x$.

To pin down $C$ in (3.2), let $t \to \infty$ in (3.2). The integrand on the left goes to 0, so the integral on the left goes to 0. Since $\arctan t \to \pi/2$ as $t \to \infty$, (3.2) as $t \to \infty$ becomes $0 = -\pi/2 + C$, so $C = \pi/2$. Feeding this back into (3.2),

$$\int_0^\infty e^{-tx}\,\frac{\sin x}{x}\,dx = \frac{\pi}{2} - \arctan t \tag{3.3}$$

for all $t > 0$.

Now let $t \to 0^+$ in (3.3). Since $e^{-tx} \to 1$ and $\arctan t \to 0$, we obtain

$$\int_0^\infty \frac{\sin x}{x}\,dx = \frac{\pi}{2},$$

so we're done. (Taking the limit of $t$ inside the integral is justified by the dominated convergence theorem from measure theory.)

Notice again the convenience introduced by a parameter $t$. After doing calculus with the parameter, we let it go this time to a boundary value ($t = 0$) to remove it and solve our problem.

Remark 3.1. Evaluating $\int_0^\infty \left(\frac{\sin x}{x}\right)^2 dx$ by parts with $u = \sin^2 x$ and $dv = dx/x^2$, the integral equals $\int_0^\infty \frac{\sin(2x)}{x}\,dx = \int_0^\infty \frac{\sin x}{x}\,dx = \frac{\pi}{2}$. That $\int_0^\infty \frac{\sin x}{x}\,dx = \int_0^\infty \left(\frac{\sin x}{x}\right)^2 dx$ seems amazing.
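The parametric formula (3.3) lends itself to a direct numerical check. The sketch below is an addition (not from the article), assuming NumPy and SciPy are available; it uses `np.sinc` so that the removable singularity of $(\sin x)/x$ at $x = 0$ is handled automatically.

```python
# Numerical check of (3.3): for t > 0, the integral over [0, inf) of
# e^{-tx} sin(x)/x should equal pi/2 - arctan(t).
import numpy as np
from scipy.integrate import quad

def damped_sinc(x, t):
    # np.sinc(y) = sin(pi*y)/(pi*y), so np.sinc(x/pi) = sin(x)/x with value 1 at x = 0
    return np.exp(-t * x) * np.sinc(x / np.pi)

for t in (0.25, 1.0, 4.0):
    value, _ = quad(damped_sinc, 0, np.inf, args=(t,))
    print(t, value, np.pi / 2 - np.arctan(t))
```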

4. The Gaussian integral

The improper integral formula

$$\int_{-\infty}^\infty e^{-x^2/2}\,dx = \sqrt{2\pi} \tag{4.1}$$

is fundamental to probability theory and Fourier analysis. The function $\frac{1}{\sqrt{2\pi}}e^{-x^2/2}$ is called a Gaussian, and (4.1) says the integral of the Gaussian over the whole real line is 1.

The physicist Lord Kelvin (after whom the Kelvin temperature scale is named) once wrote (4.1) on the board in a class and said "A mathematician is one to whom that [pointing at the formula] is as obvious as twice two makes four is to you." We will prove (4.1) using differentiation under the integral sign. The method will not make (4.1) as obvious as $2 \cdot 2 = 4$. If you take further courses you may learn more natural derivations of (4.1) so that the result really does become obvious. For now, just try to follow the argument here step-by-step.

We are going to aim not at (4.1), but at an equivalent formula over the range $x \geq 0$:

$$\int_0^\infty e^{-x^2/2}\,dx = \frac{\sqrt{2\pi}}{2} = \sqrt{\frac{\pi}{2}}. \tag{4.2}$$

For $t > 0$, set

$$A(t) = \left(\int_0^t e^{-x^2/2}\,dx\right)^2.$$

We want to calculate $A(\infty)$ and then take a square root. Differentiating with respect to $t$,

$$A'(t) = 2\int_0^t e^{-x^2/2}\,dx \cdot e^{-t^2/2} = 2e^{-t^2/2}\int_0^t e^{-x^2/2}\,dx.$$

Let $x = ty$, so

$$A'(t) = 2e^{-t^2/2}\int_0^1 t e^{-t^2y^2/2}\,dy = \int_0^1 2t e^{-(1+y^2)t^2/2}\,dy.$$

The function under the integral sign is easily antidifferentiated with respect to $t$:

$$A'(t) = \int_0^1 \frac{\partial}{\partial t}\left(-\frac{2e^{-(1+y^2)t^2/2}}{1+y^2}\right) dy = -\frac{d}{dt}\,2\int_0^1 \frac{e^{-(1+y^2)t^2/2}}{1+y^2}\,dy.$$

Letting

$$B(t) = \int_0^1 \frac{e^{-(1+x^2)t^2/2}}{1+x^2}\,dx,$$

we have $A'(t) = -2B'(t)$ for all $t > 0$, so there is a constant $C$ such that

$$A(t) = -2B(t) + C \tag{4.3}$$

for all $t > 0$. To find $C$, we let $t \to 0^+$ in (4.3). The left side tends to $\left(\int_0^0 e^{-x^2/2}\,dx\right)^2 = 0$ while the right side tends to $-2\int_0^1 dx/(1+x^2) + C = -\pi/2 + C$. Thus $C = \pi/2$, so (4.3) becomes

$$\left(\int_0^t e^{-x^2/2}\,dx\right)^2 = \frac{\pi}{2} - 2\int_0^1 \frac{e^{-(1+x^2)t^2/2}}{1+x^2}\,dx.$$

Letting $t \to \infty$ in this equation, we obtain $\left(\int_0^\infty e^{-x^2/2}\,dx\right)^2 = \pi/2$, so $\int_0^\infty e^{-x^2/2}\,dx = \sqrt{\pi/2}$. That is (4.2).
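The heart of the argument above is that $A(t) + 2B(t)$ is the constant $\pi/2$. The following snippet is an added numerical illustration (not from the article), a minimal sketch assuming NumPy and SciPy are installed.

```python
# Numerical check that A(t) + 2 B(t) from Section 4 is constant and equal to pi/2,
# where A(t) = (integral_0^t e^{-x^2/2} dx)^2 and
#       B(t) = integral_0^1 e^{-(1+x^2) t^2 / 2} / (1 + x^2) dx.
import numpy as np
from scipy.integrate import quad

def A(t):
    val, _ = quad(lambda x: np.exp(-x**2 / 2), 0, t)
    return val**2

def B(t):
    val, _ = quad(lambda x: np.exp(-(1 + x**2) * t**2 / 2) / (1 + x**2), 0, 1)
    return val

for t in (0.5, 1.0, 3.0, 10.0):
    print(t, A(t) + 2 * B(t), np.pi / 2)   # the middle column should stay ~1.5708
```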

5. Higher moments of the Gaussian

For every integer $n \geq 0$ we want to compute a formula for

$$\int_{-\infty}^\infty x^n e^{-x^2/2}\,dx. \tag{5.1}$$

(Integrals of the type $\int x^n f(x)\,dx$ for $n = 0, 1, 2, \ldots$ are called the moments of $f(x)$, so (5.1) is the $n$-th moment of the Gaussian.) When $n$ is odd, (5.1) vanishes since $x^n e^{-x^2/2}$ is an odd function. What if $n = 0, 2, 4, \ldots$ is even?

The first case, $n = 0$, is the Gaussian integral (4.1):

$$\int_{-\infty}^\infty e^{-x^2/2}\,dx = \sqrt{2\pi}. \tag{5.2}$$

To get formulas for (5.1) when $n$ is even, we follow the same strategy as our treatment of the factorial integral in Section 2: stick a $t$ into the exponent of $e^{-x^2/2}$ and then differentiate repeatedly with respect to $t$.

For $t > 0$, replacing $x$ with $\sqrt{t}\,x$ in (5.2) gives

$$\int_{-\infty}^\infty e^{-tx^2/2}\,dx = \frac{\sqrt{2\pi}}{\sqrt{t}}. \tag{5.3}$$

Differentiate both sides of (5.3) with respect to $t$, using differentiation under the integral sign on the left:

$$\int_{-\infty}^\infty -\frac{x^2}{2}\,e^{-tx^2/2}\,dx = -\frac{\sqrt{2\pi}}{2t^{3/2}},$$

so

$$\int_{-\infty}^\infty x^2 e^{-tx^2/2}\,dx = \frac{\sqrt{2\pi}}{t^{3/2}}. \tag{5.4}$$

Differentiate both sides of (5.4) with respect to $t$. After removing a common factor of $-1/2$ on both sides, we get

$$\int_{-\infty}^\infty x^4 e^{-tx^2/2}\,dx = \frac{3\sqrt{2\pi}}{t^{5/2}}. \tag{5.5}$$

Differentiating both sides of (5.5) with respect to $t$ a few more times, we get

$$\int_{-\infty}^\infty x^6 e^{-tx^2/2}\,dx = \frac{3\cdot 5\,\sqrt{2\pi}}{t^{7/2}}, \qquad \int_{-\infty}^\infty x^8 e^{-tx^2/2}\,dx = \frac{3\cdot 5\cdot 7\,\sqrt{2\pi}}{t^{9/2}},$$

and

$$\int_{-\infty}^\infty x^{10} e^{-tx^2/2}\,dx = \frac{3\cdot 5\cdot 7\cdot 9\,\sqrt{2\pi}}{t^{11/2}}.$$

Quite generally, when $n$ is even

$$\int_{-\infty}^\infty x^n e^{-tx^2/2}\,dx = \frac{1\cdot 3\cdot 5\cdots(n-1)}{t^{n/2}}\,\sqrt{\frac{2\pi}{t}},$$

where the numerator is the product of the positive odd integers from 1 to $n-1$ (understood to be the empty product 1 when $n = 0$).

In particular, taking $t = 1$ we have computed (5.1):

$$\int_{-\infty}^\infty x^n e^{-x^2/2}\,dx = 1\cdot 3\cdot 5\cdots(n-1)\,\sqrt{2\pi}.$$
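The even-moment formula just obtained is easy to confirm numerically. The block below is an added sketch (not part of the article), assuming NumPy and SciPy are available.

```python
# Numerical check of the even moments of the Gaussian:
# the n-th moment of e^{-x^2/2} over R should equal 1*3*5*...*(n-1) * sqrt(2*pi).
import numpy as np
from scipy.integrate import quad

def odd_product(n):
    # product of the positive odd integers below n (empty product = 1)
    prod = 1
    for k in range(1, n, 2):
        prod *= k
    return prod

for n in (0, 2, 4, 6, 8):
    moment, _ = quad(lambda x, n=n: x**n * np.exp(-x**2 / 2), -np.inf, np.inf)
    print(n, moment, odd_product(n) * np.sqrt(2 * np.pi))
```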

As an application of (5.4), we now compute $\left(\tfrac{1}{2}\right)! := \int_0^\infty x^{1/2} e^{-x}\,dx$, where the notation $\left(\tfrac{1}{2}\right)!$ and its definition are inspired by Euler's integral formula (2.1) for $n!$ when $n$ is a nonnegative integer. Using the substitution $u = x^{1/2}$ in $\int_0^\infty x^{1/2} e^{-x}\,dx$, we have

$$\left(\tfrac{1}{2}\right)! = \int_0^\infty x^{1/2} e^{-x}\,dx = \int_0^\infty u e^{-u^2}(2u)\,du = 2\int_0^\infty u^2 e^{-u^2}\,du = \int_{-\infty}^\infty u^2 e^{-u^2}\,du = \frac{\sqrt{2\pi}}{2^{3/2}} = \frac{\sqrt{\pi}}{2},$$

by (5.4) at $t = 2$.

6. A cosine transform of the Gaussian

We are going to use differentiation under the integral sign to compute

$$\int_0^\infty \cos(tx)\,e^{-x^2/2}\,dx.$$

Call the above integral $F(t)$. Here we are including $t$ in the integral from the beginning. We will calculate $F(t)$ by looking at its $t$-derivative:

$$F'(t) = \int_0^\infty -x\sin(tx)\,e^{-x^2/2}\,dx. \tag{6.1}$$

This is good from the viewpoint of integration by parts since $-xe^{-x^2/2}$ is the derivative of $e^{-x^2/2}$. So we apply integration by parts to (6.1):

$$u = \sin(tx), \qquad dv = -x e^{-x^2/2}\,dx, \qquad du = t\cos(tx)\,dx, \qquad v = e^{-x^2/2}.$$

Then

$$F'(t) = \int_0^\infty u\,dv = uv\,\Big|_0^\infty - \int_0^\infty v\,du = \frac{\sin(tx)}{e^{x^2/2}}\,\bigg|_{x=0}^{x=\infty} - t\int_0^\infty \cos(tx)\,e^{-x^2/2}\,dx = \frac{\sin(tx)}{e^{x^2/2}}\,\bigg|_{x=0}^{x=\infty} - tF(t).$$

As $x \to \infty$, $e^{x^2/2}$ blows up while $\sin(tx)$ stays bounded, so $\sin(tx)/e^{x^2/2}$ goes to 0. Therefore

$$F'(t) = -tF(t).$$
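Before solving this differential equation, here is a quick numerical confirmation that $F$ really satisfies it. This sketch is an addition (not from the article) and assumes NumPy and SciPy are installed; the derivative is approximated by a central difference quotient.

```python
# Numerical check that F(t) = integral_0^inf cos(t x) e^{-x^2/2} dx
# satisfies F'(t) = -t F(t).
import numpy as np
from scipy.integrate import quad

def F(t):
    val, _ = quad(lambda x: np.cos(t * x) * np.exp(-x**2 / 2), 0, np.inf)
    return val

h = 1e-5
for t in (0.3, 1.0, 2.0):
    deriv = (F(t + h) - F(t - h)) / (2 * h)   # numerical estimate of F'(t)
    print(t, deriv, -t * F(t))                # the last two columns should agree
```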

We know the solutions to this differential equation: constant multiples of $e^{-t^2/2}$. So

$$\int_0^\infty \cos(tx)\,e^{-x^2/2}\,dx = Ce^{-t^2/2}$$

for some constant $C$. To find $C$, set $t = 0$. The left side is $\int_0^\infty e^{-x^2/2}\,dx$, which is $\sqrt{\pi/2}$ by (4.2). The right side is $C$. Thus $C = \sqrt{\pi/2}$, so we are done:

$$\int_0^\infty \cos(tx)\,e^{-x^2/2}\,dx = \sqrt{\frac{\pi}{2}}\,e^{-t^2/2}.$$

Remark 6.1. If we want to compute $G(t) = \int_0^\infty \sin(tx)\,e^{-x^2/2}\,dx$, with $\sin(tx)$ in place of $\cos(tx)$, then in place of $F'(t) = -tF(t)$ we have $G'(t) = 1 - tG(t)$, and $G(0) = 0$. From the differential equation, $(e^{t^2/2}G(t))' = e^{t^2/2}$, so $G(t) = e^{-t^2/2}\int_0^t e^{x^2/2}\,dx$. So while $\int_0^\infty \cos(tx)\,e^{-x^2/2}\,dx = \sqrt{\pi/2}\,e^{-t^2/2}$, the integral $\int_0^\infty \sin(tx)\,e^{-x^2/2}\,dx$ is impossible to express in terms of elementary functions.

7. Nasty logs, part I

Consider the following integral over a finite interval:

$$\int_0^1 \frac{x^t - 1}{\log x}\,dx.$$

Since $1/\log x \to 0$ as $x \to 0^+$, the integrand vanishes at $x = 0$. As $x \to 1^-$, $(x^t - 1)/\log x \to t$. Therefore the integrand makes sense as a continuous function on $[0,1]$, so it is not an improper integral.

The $t$-derivative of this integral is

$$\int_0^1 \frac{x^t \log x}{\log x}\,dx = \int_0^1 x^t\,dx = \frac{1}{1+t},$$

which we recognize as the $t$-derivative of $\log(1+t)$. Therefore

$$\int_0^1 \frac{x^t - 1}{\log x}\,dx = \log(1+t) + C.$$

To find $C$, set $t = 0$. The integral and the log term vanish, so $C = 0$. Thus

$$\int_0^1 \frac{x^t - 1}{\log x}\,dx = \log(1+t).$$

For example,

$$\int_0^1 \frac{x - 1}{\log x}\,dx = \log 2.$$

Notice we computed this definite integral without computing an anti-derivative of $(x-1)/\log x$.
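The closed form $\log(1+t)$ can be confirmed numerically. The snippet below is an added sketch (not from the article), assuming NumPy and SciPy are available; the endpoint singularities of the integrand are removable, and `quad` only samples interior points.

```python
# Numerical check of Section 7: integral over [0,1] of (x^t - 1)/log(x)
# should equal log(1 + t). The integrand tends to 0 at x=0 and to t at x=1.
import numpy as np
from scipy.integrate import quad

for t in (0.5, 1.0, 2.0):
    value, _ = quad(lambda x, t=t: (x**t - 1) / np.log(x), 0, 1)
    print(t, value, np.log(1 + t))
```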

8. Nasty logs, part II

We now consider the integral

$$F(t) = \int_2^\infty \frac{dx}{x^t \log x},$$

where $t > 1$. The integral converges by comparison with $\int_2^\infty dx/x^t$. We know that "at $t = 1$" the integral diverges to $\infty$:

$$\int_2^\infty \frac{dx}{x\log x} = \lim_{b\to\infty}\int_2^b \frac{dx}{x\log x} = \lim_{b\to\infty}\log\log x\,\Big|_2^b = \lim_{b\to\infty}(\log\log b - \log\log 2) = \infty.$$

So we expect that as $t \to 1^+$, $F(t)$ should blow up. But how does it blow up? By analyzing $F'(t)$ and then integrating back, we are going to show $F(t)$ behaves essentially like $-\log(t-1)$ as $t \to 1^+$.

Using differentiation under the integral sign, for $t > 1$

$$F'(t) = \int_2^\infty \frac{\partial}{\partial t}\left(\frac{1}{x^t\log x}\right) dx = \int_2^\infty \frac{x^{-t}(-\log x)}{\log x}\,dx = -\int_2^\infty \frac{dx}{x^t} = -\frac{x^{-t+1}}{-t+1}\,\bigg|_{x=2}^{x=\infty} = \frac{2^{1-t}}{1-t},$$

which is negative. We want to bound this derivative from above and below when $t > 1$. Then we will integrate to get bounds on the size of $F(t)$.

For $t > 1$, the difference $1 - t$ is negative, so $2^{1-t} < 1$. Dividing by $1 - t$, which is negative, reverses the sense of the inequality and gives

$$\frac{2^{1-t}}{1-t} > \frac{1}{1-t}.$$

This is a lower bound on $F'(t)$. To get an upper bound on $F'(t)$, we want to use a lower bound on $2^{1-t}$. For this purpose we use a tangent line calculation. The function $2^x$ has the tangent line $y = (\log 2)x + 1$ at $x = 0$ and the graph of $y = 2^x$ is everywhere above this tangent line, so $2^x \geq (\log 2)x + 1$ for all $x$. Taking $x = 1 - t$,

$$2^{1-t} \geq (\log 2)(1-t) + 1. \tag{8.1}$$

When $t > 1$, $1 - t$ is negative, so dividing (8.1) by $1 - t$ reverses the sense of the inequality:

$$\frac{2^{1-t}}{1-t} \leq \log 2 + \frac{1}{1-t}.$$

This is an upper bound on $F'(t)$. Combining both bounds,

$$\frac{1}{1-t} < F'(t) \leq \log 2 + \frac{1}{1-t} \tag{8.2}$$

for all $t > 1$.
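The two-sided bound (8.2) can be probed numerically before it is integrated. This added sketch (not from the article) assumes NumPy and SciPy; $F'(t)$ is estimated by a central difference of numerically computed values of $F$.

```python
# Numerical check of (8.2): 1/(1-t) < F'(t) <= log(2) + 1/(1-t) for t > 1,
# where F(t) = integral_2^inf dx / (x^t log x).
import numpy as np
from scipy.integrate import quad

def F(t):
    val, _ = quad(lambda x: 1.0 / (x**t * np.log(x)), 2, np.inf)
    return val

h = 1e-5
for t in (1.5, 2.0, 3.0):
    Fprime = (F(t + h) - F(t - h)) / (2 * h)
    lower = 1 / (1 - t)
    upper = np.log(2) + 1 / (1 - t)
    print(t, lower, Fprime, upper)   # lower < F'(t) <= upper
```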

We are concerned with the behavior of $F(t)$ as $t \to 1^+$. Let's integrate (8.2) from $a$ to 2, where $1 < a < 2$:

$$\int_a^2 \frac{dt}{1-t} < \int_a^2 F'(t)\,dt \leq \int_a^2 \left(\log 2 + \frac{1}{1-t}\right) dt.$$

Using the Fundamental Theorem of Calculus,

$$-\log(t-1)\,\Big|_a^2 < F(t)\,\Big|_a^2 \leq \big((\log 2)t - \log(t-1)\big)\,\Big|_a^2,$$

so

$$\log(a-1) < F(2) - F(a) \leq (\log 2)(2-a) + \log(a-1).$$

Manipulating to get inequalities on $F(a)$, we have

$$(\log 2)(a-2) - \log(a-1) + F(2) \leq F(a) < -\log(a-1) + F(2).$$

Since $a - 2 > -1$ for $1 < a < 2$, $(\log 2)(a-2)$ is greater than $-\log 2$. This gives the bounds

$$-\log(a-1) + F(2) - \log 2 \leq F(a) < -\log(a-1) + F(2).$$

Writing $a$ as $t$, we get

$$-\log(t-1) + F(2) - \log 2 \leq F(t) < -\log(t-1) + F(2),$$

so $F(t)$ is a bounded distance from $-\log(t-1)$ when $1 < t < 2$. In particular, $F(t) \to \infty$ as $t \to 1^+$.

9. Smoothly dividing by t

Let $h(t)$ be an infinitely differentiable function for all real $t$ such that $h(0) = 0$. The ratio $h(t)/t$ makes sense for $t \neq 0$, and it also can be given a reasonable meaning at $t = 0$: from the very definition of the derivative, when $t \to 0$ we have

$$\frac{h(t)}{t} = \frac{h(t) - h(0)}{t - 0} \to h'(0).$$

Therefore the function

$$r(t) = \begin{cases} h(t)/t, & \text{if } t \neq 0, \\ h'(0), & \text{if } t = 0 \end{cases}$$

is continuous for all $t$. We can see immediately from the definition of $r(t)$ that it is better than continuous when $t \neq 0$: it is infinitely differentiable when $t \neq 0$. The question we want to address is this: is $r(t)$ infinitely differentiable at $t = 0$ too?

If $h(t)$ has a power series representation around $t = 0$, then it is easy to show that $r(t)$ is infinitely differentiable at $t = 0$ by working with the series for $h(t)$. Indeed, write $h(t) = c_1 t + c_2 t^2 + c_3 t^3 + \cdots$ for all small $t$. Here $c_1 = h'(0)$, $c_2 = h''(0)/2!$ and so on. For small $t \neq 0$, we divide by $t$ and get

$$r(t) = c_1 + c_2 t + c_3 t^2 + \cdots, \tag{9.1}$$

which is a power series representation for $r(t)$ for all small $t \neq 0$. The value of the right side of (9.1) at $t = 0$ is $c_1 = h'(0)$, which is also the defined value of $r(0)$, so (9.1) is valid for all small $t$ (including $t = 0$). Therefore $r(t)$ has a power series representation around 0 (it's just the power series for $h(t)$ at 0 divided by $t$). Since functions with power series representations around a point are infinitely differentiable at the point, $r(t)$ is infinitely differentiable at $t = 0$.

However, this is an incomplete answer to our question about the infinite differentiability of $r(t)$ at $t = 0$, because we know by the key example of $e^{-1/t^2}$ (at $t = 0$) that a function can be infinitely differentiable at a point without having a power series representation at the point. How are we going to show $r(t) = h(t)/t$ is infinitely differentiable at $t = 0$ if we don't have a power series to help us out? Maybe there's actually a counterexample?

The way out is to write $h(t)$ in a very clever way using differentiation under the integral sign. Start with

$$h(t) = \int_0^t h'(u)\,du.$$

(This is correct since $h(0) = 0$.) For $t \neq 0$, introduce the change of variables $u = tx$, so $du = t\,dx$. At the boundary, if $u = 0$ then $x = 0$; if $u = t$ then $x = 1$ (we can divide the equation $t = tx$ by $t$ because $t \neq 0$). Therefore

$$h(t) = \int_0^1 h'(tx)\,t\,dx = t\int_0^1 h'(tx)\,dx.$$

Dividing by $t$ when $t \neq 0$, we get

$$r(t) = \frac{h(t)}{t} = \int_0^1 h'(tx)\,dx.$$

The left and right sides don't have any $t$ in the denominator. Are they equal at $t = 0$ too? The left side at $t = 0$ is $r(0) = h'(0)$. The right side is $\int_0^1 h'(0)\,dx = h'(0)$ too, so

$$r(t) = \int_0^1 h'(tx)\,dx \tag{9.2}$$

for all $t$, including $t = 0$. This is a formula for $h(t)/t$ where there is no longer a $t$ being divided!

Now we're set to use differentiation under the integral sign. The way we have set things up here, we want to differentiate with respect to $t$; the integration variable on the right is $x$. We can use differentiation under the integral sign on (9.2). Since the integrand is infinitely differentiable, $r(t)$ is infinitely differentiable! Explicitly,

$$r'(t) = \int_0^1 x\,h''(tx)\,dx \qquad \text{and} \qquad r''(t) = \int_0^1 x^2\,h'''(tx)\,dx,$$

and more generally

$$r^{(k)}(t) = \int_0^1 x^k h^{(k+1)}(tx)\,dx.$$

In particular,

$$r^{(k)}(0) = \int_0^1 x^k h^{(k+1)}(0)\,dx = \frac{h^{(k+1)}(0)}{k+1}.$$
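As an added illustration (not part of the article), the formula $r'(t) = \int_0^1 x\,h''(tx)\,dx$ can be tested with a concrete choice such as $h(t) = \sin t$, so that $r(t) = (\sin t)/t$. The sketch below assumes NumPy and SciPy are available.

```python
# Illustration of Section 9 with h(t) = sin(t), so h(0) = 0 and r(t) = sin(t)/t.
# Compare r'(t) computed from the integral formula with a finite-difference estimate.
import numpy as np
from scipy.integrate import quad

def r(t):
    return np.sin(t) / t if t != 0 else 1.0

def r_prime_via_integral(t):
    # h''(u) = -sin(u), so the integrand is -x * sin(t*x)
    val, _ = quad(lambda x: -x * np.sin(t * x), 0, 1)
    return val

step = 1e-5
for t in (0.5, 1.0, 2.0):
    finite_diff = (r(t + step) - r(t - step)) / (2 * step)
    print(t, r_prime_via_integral(t), finite_diff)   # the two estimates should agree
```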

10. A counterexample

We have seen many examples where differentiation under the integral sign can be carried out with interesting results, but we have not actually stated conditions under which (1.2) is valid. Something does need to be checked. In [3], an incorrect use of differentiation under the integral sign due to Cauchy is discussed, where a divergent integral is evaluated as a finite expression. The following example shows that even if both sides of (1.2) make sense, they need not be equal.

Example 10.1. For any real numbers $x$ and $t$, let

$$f(x,t) = \begin{cases} \dfrac{xt^3}{(x^2+t^2)^2}, & \text{if } x \neq 0 \text{ or } t \neq 0, \\ 0, & \text{if } x = 0 \text{ and } t = 0. \end{cases}$$

Let

$$F(t) = \int_0^1 f(x,t)\,dx.$$

For instance, $F(0) = \int_0^1 f(x,0)\,dx = \int_0^1 0\,dx = 0$. When $t \neq 0$,

$$F(t) = \int_0^1 \frac{xt^3}{(x^2+t^2)^2}\,dx = \int_{t^2}^{1+t^2} \frac{t^3}{2u^2}\,du \quad (\text{where } u = x^2+t^2) = -\frac{t^3}{2u}\,\bigg|_{u=t^2}^{u=1+t^2} = -\frac{t^3}{2(1+t^2)} + \frac{t^3}{2t^2} = \frac{t}{2(1+t^2)}.$$

This formula also works at $t = 0$, so $F(t) = t/(2(1+t^2))$ for all $t$. Therefore $F(t)$ is differentiable and

$$F'(t) = \frac{1-t^2}{2(1+t^2)^2}$$

for all $t$. In particular, $F'(0) = \frac{1}{2}$.

Now we compute $\frac{\partial}{\partial t}f(x,t)$ and then $\int_0^1 \frac{\partial}{\partial t}\big|_{t=0} f(x,t)\,dx$. Since $f(0,t) = 0$ for all $t$, $f(0,t)$ is differentiable in $t$ and $\frac{\partial}{\partial t}f(0,t) = 0$. For $x \neq 0$, $f(x,t)$ is differentiable in $t$ and

$$\frac{\partial}{\partial t}f(x,t) = \frac{(x^2+t^2)^2(3xt^2) - xt^3\cdot 2(x^2+t^2)\cdot 2t}{(x^2+t^2)^4} = \frac{xt^2(x^2+t^2)\big(3(x^2+t^2) - 4t^2\big)}{(x^2+t^2)^4} = \frac{xt^2(3x^2-t^2)}{(x^2+t^2)^3}.$$

Combining both cases ($x \neq 0$ and $x = 0$),

$$\frac{\partial}{\partial t}f(x,t) = \begin{cases} \dfrac{xt^2(3x^2-t^2)}{(x^2+t^2)^3}, & \text{if } x \neq 0, \\ 0, & \text{if } x = 0. \end{cases} \tag{10.1}$$

In particular, $\frac{\partial}{\partial t}\big|_{t=0} f(x,t) = 0$ for all $x$. Therefore at $t = 0$ the left side of (1.2) is $F'(0) = 1/2$ and the right side of (1.2) is $\int_0^1 \frac{\partial}{\partial t}\big|_{t=0} f(x,t)\,dx = 0$. The two sides are unequal!
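The mismatch in Example 10.1 can also be seen numerically. The snippet below is an added sketch (not from the article), assuming NumPy and SciPy; the left side of (1.2) at $t = 0$ is estimated by a difference quotient, while the right side is 0 because $\frac{\partial}{\partial t}f(x,0) = 0$ for every $x$.

```python
# Numerical illustration of Example 10.1: the left side of (1.2) at t = 0
# is F'(0) = 1/2, while the right side is 0.
import numpy as np
from scipy.integrate import quad

def F(t):
    val, _ = quad(lambda x: x * t**3 / (x**2 + t**2) ** 2, 0, 1)
    return val

h = 1e-2
print("F'(0) estimate:", (F(h) - F(-h)) / (2 * h))   # close to 0.5, not 0
```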

b b such that a A(x) dx and a B(x) dx converge. t 2. Prove for t > 0 that 0 11. 0). ∞) t range t≥c>0 t≥c>0 t we want 1 0 A(x) xn e−cx e−cx 1 1+x2 2 xn e−cx B(x) xn+1 e−cx e−cx 2c 2 xn+2 e−cx cos(tx)e−x 2 /2 xt −1 log x 1 xt log x xk h(k+1) (tx) [0. t) and ∂t f (x. show ∞ −∞ e−(x+iy) dx = 2 √ 2π for all y ∈ R. 0). The two sides of (1. t) → (0. ∞) t≥c>1 t>1 [0. ∞) [0. show the integral on the left side is independent of y. then on this ∂ line ∂t f (x. Section 2 3 4 5 6 7 8 9 f (x. Exercises cos x − 1 t e−tx dx = log √ . 3. t) xn e−tx e−tx sin x x e−t (1+x ) 1+x2 n e−tx2 x 2 2 x range [0. ∞) R all t (0. 337–339]. t cos(tx) . 3. Summary e−x /2 ?? 1 x2 log x 2 |x|e−x 1 1 xc 2 /2 ?? ?? ∞ 1. See [2. Starting with the indefinite integral formulas cos(tx) dx = compute formulas for xn sin x dx and sin(tx) dx = − xn cos x dx for n = 2. In other words. ∂ • there are upper bounds |f (x. both being independent of t. t) are continuous functions of two variables when x is in the range of integration and t is in some interval around t0 . t)| ≤ B(x). t) has the value 1/(4x). If you are familiar with integration of complex-valued functions.DIFFERENTIATING UNDER THE INTEGRAL SIGN 13 while this derivative vanishes at (0. Since the calculation of a derivative at a point only depends on an interval around the point. it we let (x.2) both exist and are equal at a point t = t0 provided the following two conditions hold: ∂ • f (x. we have replaced a t-range such as t > 0 with t ≥ c > 0 in some cases to obtain choices for A(x) and B(x). (Hint: Compute the y-derivative of the left side.2. Theorem 10. t) → (0. which does not tend to 0 as (x. 1] t ≥ c > −1 1 [2. In Table 1 we include choices for A(x) and B(x) for each of the functions we have treated.) . t)| ≤ A(x) and | ∂t f (x. x 1 + t2 sin(tx) . 1] 0≤t≤c t→∞ R t≥c>0 1 [0. pp. 0) along the line x = t. 1] R 0 Table 1. 4. Proof.

References

[1] R. P. Feynman, Surely You're Joking, Mr. Feynman!, Bantam, New York, 1985.
[2] S. Lang, Undergraduate Analysis, 2nd ed., Springer-Verlag, New York, 1997.
[3] E. Talvila, "Some Divergent Trigonometric Integrals", Amer. Math. Monthly 108 (2001), 432–436.
