Elementary Probability Theory, Fourth Edition

Kai Lai Chung, Farid AitSahlia

Chapter 6 Mean, Variance, and Transform

Probability Theory and Statistics


Zhengyao Bai, PhD, Professor
School of Information Science & Engineering, Yunnan University

Email: baizhy@ynu.edu.cn
Mobile: 13577060089
QQ & Wechat: 1302255793
Room: 1417#, SISE Building


• 6.1 Basic properties of expectation
• 6.2 The density case
• 6.3 Multiplication theorem, variance and covariance
• 6.4 Multinomial distribution
• 6.5 Generating function and the like


• 6.1 Basic properties of expectation


• The definition of mathematical expectation: for a random
variable X on a countable sample space Ω,
E(X) = Σ(ω) X(ω) P(ω).
• The random variable X is summable when this series converges
absolutely: Σ(ω) |X(ω)| P(ω) < ∞; E(X) is then well defined.
• If A is an event, then its indicator IA is a random variable,
and E(IA) = P(A).

• Theorem 1. If X and Y are summable, then so is X + Y,
and we have E(X + Y) = E(X) + E(Y).
• Extension 1. For any two constants a and b, we obtain
E(aX + bY) = aE(X) + bE(Y).
• Extension 2. If X1, X2, ..., Xn are summable random
variables, then so is their sum: E(X1 + ··· + Xn) = E(X1) + ··· + E(Xn).
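A quick numerical illustration (a sketch added to these notes, not from the book): linearity of expectation needs no independence. The dice-like variables below are illustrative assumptions.

```python
import random

# Check E(aX + bY) = aE(X) + bE(Y) by Monte Carlo, with Y fully
# dependent on X (Y = 7 - X); linearity holds regardless.
random.seed(0)
n, a, b = 100_000, 2.0, -3.0
xs = [random.randint(1, 6) for _ in range(n)]
ys = [7 - x for x in xs]

mean = lambda v: sum(v) / len(v)
lhs = mean([a * x + b * y for x, y in zip(xs, ys)])
rhs = a * mean(xs) + b * mean(ys)
print(lhs, rhs)   # both close to 2*3.5 - 3*3.5 = -3.5
```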

• Example 1. A raffle lottery contains 100 tickets, of
which one ticket bears the prize $10000, the rest being
all zero. If I buy two tickets, what is my expected gain?
• If I have only one ticket, my gain is represented by the
random variable X, which takes the value 10000 on
exactly one ω and 0 on all the rest. Hence E(X) = 10000 ·
(1/100) = 100. With two tickets the gain is X1 + X2, where
each Xi is distributed like X; the two are not independent,
but additivity still gives an expected gain of E(X1 + X2) = 200.
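A simulation sketch (not from the book; drawing two distinct tickets is the natural assumption) confirming the expected gain of 200:

```python
import random

# Estimate the expected gain when holding 2 of 100 raffle tickets,
# one of which (ticket 0 here) carries the $10,000 prize.
random.seed(1)
trials, total = 200_000, 0
for _ in range(trials):
    mine = random.sample(range(100), 2)     # my two distinct tickets
    total += 10_000 if 0 in mine else 0
print(total / trials)                       # close to 200
```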

• Example 2. (Coupon collecting problem). There are N
coupons marked 1 to N in a bag. We draw one coupon
after another with replacement. Suppose we wish to
collect r different coupons; what is the expected number
of drawings to get them?
• The problem may be regarded as one of waiting time,
namely: we wait for the rth new arrival. Let X1, X2, ...
denote the successive waiting times for a new coupon.

• Example 2. X1 = 1, since the first coupon drawn is always
new. Then X2 is the waiting time for any coupon that is
different from the first one drawn. At each drawing there
are N coupons, and all but one of them will be new, so X2
is geometrically distributed with success probability
p = (N − 1)/N, whence E(X2) = N/(N − 1).
The waiting time X3 for the third new coupon is similar, with
success probability p = (N − 2)/N and E(X3) = N/(N − 2).

• Example 2. Continuing this process, we can obtain, for
1 ≤ r ≤ N:
E(X1 + ··· + Xr) = 1 + N/(N − 1) + N/(N − 2) + ··· + N/(N − r + 1)
= N (1/N + 1/(N − 1) + ··· + 1/(N − r + 1)).
In particular, for r = N (a complete set),
E(X1 + ··· + XN) = N (1 + 1/2 + ··· + 1/N);
for r = N/2 (half the set),
E(X1 + ··· + X(N/2)) = N (1/(N/2 + 1) + ··· + 1/N) ≈ N log 2.
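A simulation sketch (not from the book) of the full-collection case r = N, compared against the exact formula and the asymptotic value from the next slide:

```python
import random
from math import log

# Coupon collector: empirical mean number of drawings to collect all
# N coupons vs. N*(1 + 1/2 + ... + 1/N) vs. N*(log N + C).
random.seed(2)
N, trials = 50, 5_000
draws_total = 0
for _ in range(trials):
    seen, draws = set(), 0
    while len(seen) < N:
        seen.add(random.randrange(N))
        draws += 1
    draws_total += draws
print(draws_total / trials)                      # empirical mean
print(N * sum(1 / k for k in range(1, N + 1)))   # exact formula
print(N * (log(N) + 0.5772))                     # asymptotic approximation
```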

• Example 2. There is a famous formula in mathematical
analysis:
1 + 1/2 + ··· + 1/n = log n + C + εn, where εn → 0 as n → ∞,
“log” is the natural logarithm to the base e, and C is Euler's
constant ≈ 0.5772. Hence E(X1 + ··· + XN) ≈ N(log N + C).


• 6.2 The density case

• For random variables on an arbitrary sample space, the
mathematical expectation is defined as an abstract
integral:
E(X) = ∫(Ω) X(ω) dω,
where “dω” denotes the probability of the “element at ω”.
When X has a density function f, this becomes
E(X) = ∫ x f(x) dx.

• Let φ(x, y) = x + y; we have
E(X + Y) = ∫∫ (x + y) f(x, y) dx dy.
• This double integral can be split and evaluated by
iterated integration:
E(X + Y) = ∫ x fX(x) dx + ∫ y fY(y) dy = E(X) + E(Y),
where fX and fY are the marginal densities of X and Y.

• Example 3. An insurance company receives claims for
indemnification from time to time. Both the times of
arrival of such claims and their amounts are unknown in
advance and determined by chance; ergo, random. The
total amount of claims in one year, say, is of course also
random, but clearly it will be determined as soon as we
know the “when” and “how much” of the claims.
• Let Sn denote the date of the nth claim, and let Tn = Sn − Sn−1
(with T1 = S1) denote the interarrival times between successive claims.

• If each Tn has the exponential density λe^(−λt), t ≥ 0, then E(Tn) = 1/λ.
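For completeness, the standard one-line computation behind E(Tn) = 1/λ, by integration by parts (supplied here; it is not shown on the slide):

```latex
E(T_n) = \int_0^\infty t\,\lambda e^{-\lambda t}\,dt
       = \Bigl[-t e^{-\lambda t}\Bigr]_0^\infty + \int_0^\infty e^{-\lambda t}\,dt
       = 0 + \frac{1}{\lambda}.
```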

• Poincaré's Formula. For arbitrary events A1, ..., An, we have
P(A1 ∪ ··· ∪ An) = Σ P(Aj) − Σ P(Aj Ak) + Σ P(Aj Ak Al) − ···
+ (−1)^(n−1) P(A1 A2 ··· An),
where the indices in each sum are distinct and range
from 1 to n.

• Example 4. (Matching problem, or problem of
rencontre). Two sets of cards both numbered 1 to n are
randomly matched. What is the probability of at least one
match?
• Let Aj be the event that the jth cards are matched,
regardless of the others. There are n! permutations of the
second set against the first set. If the jth cards match, that
leaves (n − 1)! permutations for the remaining cards,
hence
P(Aj) = (n − 1)!/n! = 1/n.

• Example 4. (continued) Similarly, if the jth and kth cards
are both matched, where j ≠ k, that leaves (n − 2)!
permutations for the remaining cards, hence
P(Aj Ak) = (n − 2)!/n! = 1/(n(n − 1)).
If j, k, l are all distinct, then
P(Aj Ak Al) = (n − 3)!/n! = 1/(n(n − 1)(n − 2)).

• Example 4. (continued) Substituting into Poincaré's
formula, the probability of at least one match is
n · (1/n) − C(n, 2)/(n(n − 1)) + C(n, 3)/(n(n − 1)(n − 2)) − ···,
which is equal to
1 − 1/2! + 1/3! − ··· + (−1)^(n−1)/n!.

• Example 4. (continued) Since
e^(−1) = 1 − 1 + 1/2! − 1/3! + ··· + (−1)^n/n! + ···,
then
|P(at least one match) − (1 − e^(−1))| ≤ 1/(n + 1)!.
Hence, as soon as n ≥ 4, 1/(n + 1)! ≤ 1/5! = 1/120, and the
probability of at least one match differs from 1 − e^(−1) ≈ .63
by less than .01.
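A simulation sketch (not from the book): already for n = 10 the matching probability is indistinguishable from 1 − e^(−1) at this sample size:

```python
import random
from math import e

# Estimate P(at least one match) for a random permutation vs 1 - 1/e.
random.seed(3)
n, trials, hits = 10, 100_000, 0
for _ in range(trials):
    perm = list(range(n))
    random.shuffle(perm)
    if any(perm[j] == j for j in range(n)):   # match at position j
        hits += 1
print(hits / trials, 1 - 1 / e)               # both near 0.632
```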


• 6.3 Multiplication theorem, variance and covariance


• Theorem 2. If X and Y are independent summable
random variables, then E(XY) = E(X)E(Y).
• Extension to any finite number of independent summable
random variables: E(X1 ··· Xn) = E(X1) ··· E(Xn).
• In the particular case where each Xj is the indicator of an
event Aj, the above equation reduces to:
P(A1 ··· An) = P(A1) ··· P(An).

• Example 5. Iron bars in the shape of slim cylinders are
test-measured. Suppose the average length is 10 inches
and average area of ends is 1 square inch. The average
error made in the measurement of the length is .005 inch,
that in the measurement of the area is .01 square inch.
What is the average error made in estimating their
weights?

• Example 5. Since the weight is density times volume,
m = ρ × V, where ρ is a constant, consider the volume V
of the iron bar: V = LA, where L = length, A = area of
ends. Let the errors be ∆L and ∆A, respectively; then the
error in V is given by
∆V = (L + ∆L)(A + ∆A) − LA = L∆A + A∆L + ∆L∆A.
Assuming independence between the measurements, we
have
E(∆V) = E(L)E(∆A) + E(A)E(∆L) + E(∆L)E(∆A)
= 10 × .01 + 1 × .005 + .005 × .01 ≈ .105 cubic inch.

• Definition of Moment. For a positive integer r, the
mathematical expectation E(X^r) is called the rth moment
[moment of order r] of X. Thus if X^r has a finite
expectation, we say that X has a finite rth moment.
• For r = 1, of course, the first moment is just the
expectation or mean.
• The case r = 2 is of special importance. Since X² ≥ 0, we
shall call E(X²) the second moment of X.
• When the mean E(X) is finite, it is often convenient to
consider X⁰ = X − E(X) [centering] instead of X,
because its first moment is equal to zero.

• Definition of Variance and Standard Deviation. The
second moment of X⁰ is called the variance of X and
denoted by σ²(X); its positive square root σ(X) is called
the standard deviation of X.
• Theorem 3. If E(X²) is finite, then so is E(|X|). We then
have σ²(X) = E(X²) − E(X)²; consequently, E(|X|)² ≤ E(X²).

• Theorem 4. If X and Y are independent and both have
finite variances, then σ²(X + Y) = σ²(X) + σ²(Y).
• Extension to any finite number of independent random
variables:
σ²(X1 + ··· + Xn) = σ²(X1) + ··· + σ²(Xn).
In general, writing Xj⁰ = Xj − E(Xj),
σ²(X1 + ··· + Xn) = Σ E((Xj⁰)²) + 2 Σ(j<k) E(Xj⁰ Xk⁰).
When the X's are independent, so are the centered Xj⁰, and all
the mixed terms in the second sum above vanish; the result is
the extension of Theorem 4.

• Cauchy–Schwarz inequality: E(XY)² ≤ E(X²)E(Y²).
• Covariance of X and Y:
Cov(X, Y) = E((X − E(X))(Y − E(Y))) = E(XY) − E(X)E(Y).
• Coefficient of correlation between X and Y:
ρ(X, Y) = Cov(X, Y)/(σ(X)σ(Y)).
If it is equal to zero, then X and Y are said to be uncorrelated.


• Example 6. The most classical application of the
preceding results is to the case of Bernoullian random
variables. These are independent with the same
probability distribution as follows:
P(X = 1) = p, P(X = 0) = q = 1 − p;
then
E(X) = p, σ²(X) = E(X²) − E(X)² = p − p² = pq.

• Example 6. (continued) Let {Xn, n ≥ 1} denote Bernoullian
random variables and write
Sn = X1 + ··· + Xn;
then
E(Sn) = np, σ²(Sn) = npq.
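A quick empirical check (illustrative, not from the book; n and p are arbitrary choices):

```python
import random

# Verify E(Sn) = np and var(Sn) = npq for Sn = X1 + ... + Xn,
# Xi independent Bernoulli(p).
random.seed(4)
n, p, trials = 20, 0.3, 50_000
sums = [sum(1 for _ in range(n) if random.random() < p) for _ in range(trials)]
m = sum(sums) / trials
v = sum((s - m) ** 2 for s in sums) / trials
print(m, n * p)               # ~6.0
print(v, n * p * (1 - p))     # ~4.2
```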

• Example 7. Returning to the matching problem in §6.2,
let us now compute the standard deviation of the number
of matches N = IA1 + ··· + IAn. From Eq. (6.3.8), we have
E(N²) = Σ E(IAj) + 2 Σ(j<k) E(IAj IAk).
Clearly
E(IAj) = P(Aj) = 1/n, E(IAj IAk) = P(Aj Ak) = 1/(n(n − 1)).
So, we obtain
E(N²) = n · (1/n) + 2 C(n, 2) · 1/(n(n − 1)) = 1 + 1 = 2,
and since E(N) = n · (1/n) = 1, σ²(N) = 2 − 1 = 1, so σ(N) = 1.

• 6.4 Multinomial distribution


• Multinomial distribution is a natural generalization of
the binomial distribution.
• It serves as a model for repeated trials in which there are
a number of possible outcomes instead of just “success
or failure.”
• Multinomial Theorem.
(x1 + x2 + ··· + xr)^n = Σ n!/(n1! n2! ··· nr!) x1^n1 x2^n2 ··· xr^nr,
where the sum ranges over all ordered r-tuples of
integers n1, ..., nr, with ni ≥ 0, i = 1, 2, ..., r, and n1 + ··· + nr = n.

• When r = 2, the multinomial theorem reduces to the
binomial theorem. There are n + 1 ordered couples: (0, n), (1,
n − 1), ..., (k, n − k), ..., (n, 0), with the corresponding
coefficients
n!/(k!(n − k)!) = C(n, k),
i.e.,
(x1 + x2)^n = Σ(k=0..n) C(n, k) x1^k x2^(n−k).

• The sum in the multinomial theorem can be written explicitly as an
iterated sum: n1 ranges over 0, ..., n; n2 over 0, ..., n − n1; and so
on, with nr = n − n1 − ··· − n(r−1) determined by the others.

• Problem. Suppose an urn contains balls of r different
colors in the proportions p1, p2, ..., pr, where p1 + p2 + ··· +
pr = 1. We draw n balls one after another with replacement.
What is the probability that, of the balls drawn, exactly n1
are of the first color, n2 of the second, ..., nr of the rth?
• Solution. Let X1, ..., Xn be independent random variables
all having the same distribution as the X below:
P(X = j) = pj, j = 1, 2, ..., r.

• Solution (continued): The joint distribution of (X1, ..., Xn)
is P(X1 = x1, ..., Xn = xn) = P(X1 = x1) ··· P(Xn = xn). If the
numbers of xj's equal to 1, 2, ..., r are n1, n2, ..., nr, this
probability is equal to
p1^n1 p2^n2 ··· pr^nr.
• Introduce new random variables Nj, 1 ≤ j ≤ r: Nj = the
number of X's among (X1, ..., Xn) that take the value j.
Counting the arrangements, their joint distribution can be
written down:
P(N1 = n1, ..., Nr = nr) = n!/(n1! ··· nr!) p1^n1 ··· pr^nr.
It is called the multinomial distribution, denoted as M(n; r; p1, ..., pr−1, pr).
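A sketch (not from the book; the helper function and parameter values are illustrative assumptions) comparing the multinomial formula with an empirical frequency:

```python
import random
from math import factorial

# Multinomial pmf vs empirical frequency for n = 5 draws, r = 3 colors.
random.seed(6)
n, probs = 5, [0.5, 0.3, 0.2]
target = (2, 2, 1)                       # (n1, n2, n3)

def multinomial_pmf(counts, ps):
    coef = factorial(sum(counts))
    for c in counts:
        coef //= factorial(c)
    prob = 1.0
    for c, pj in zip(counts, ps):
        prob *= pj ** c
    return coef * prob

trials, hits = 200_000, 0
for _ in range(trials):
    draws = random.choices(range(3), weights=probs, k=n)
    if tuple(draws.count(j) for j in range(3)) == target:
        hits += 1
print(hits / trials, multinomial_pmf(target, probs))   # both ~0.135
```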

• The probabilistic interpretation of the multinomial theorem.
Repeating the multinomial theorem here:
(x1 + ··· + xr)^n = Σ n!/(n1! ··· nr!) x1^n1 ··· xr^nr,
divide this formula through by its left member, and let:
pj = xj/(x1 + ··· + xr), j = 1, ..., r;
we obtain
Σ n!/(n1! ··· nr!) p1^n1 ··· pr^nr = 1.

• Actually, from
P(N1 = n1, ..., Nr = nr) = n!/(n1! ··· nr!) p1^n1 ··· pr^nr,
taking the summation over all admissible cases, we get
Σ P(N1 = n1, ..., Nr = nr) = (p1 + ··· + pr)^n = 1.

• Marginal distributions

If we are interested in N1 alone, then we can lump the r − 1
other varieties together as “not 1” with probability 1 − p1:
P(N1 = n1) = C(n, n1) p1^n1 (1 − p1)^(n−n1), 0 ≤ n1 ≤ n.
Thus the multinomial distribution collapses into a binomial
one, B(n; p1).

• The mean and variance of Nj can be computed as:
E(Nj) = npj, σ²(Nj) = npj(1 − pj).
• Similarly, for the pair (N1, N2), we obtain
P(N1 = n1, N2 = n2) = n!/(n1! n2! n3!) p1^n1 p2^n2 p3^n3,
where n3 = n − n1 − n2, p3 = 1 − p1 − p2.

• Take the second partial derivative below:
∂²/∂x1∂x2 (x1 + ··· + xr)^n = n(n − 1)(x1 + ··· + xr)^(n−2).
Multiply through by x1x2 and then put x1 = p1, x2 = p2, x3 = p3. The
result is E(N1N2) = n(n − 1)p1p2.
• In general, we have for j ≠ k:
Cov(Nj, Nk) = E(NjNk) − E(Nj)E(Nk) = n(n − 1)pjpk − n²pjpk = −npjpk.
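A simulation sketch (not from the book) of the negative covariance just derived:

```python
import random

# Empirical Cov(N1, N2) for multinomial counts vs the formula -n*p1*p2.
random.seed(7)
n, probs, trials = 10, [0.2, 0.5, 0.3], 100_000
n1s, n2s = [], []
for _ in range(trials):
    draws = random.choices(range(3), weights=probs, k=n)
    n1s.append(draws.count(0))
    n2s.append(draws.count(1))
m1, m2 = sum(n1s) / trials, sum(n2s) / trials
cov = sum((a - m1) * (b - m2) for a, b in zip(n1s, n2s)) / trials
print(cov, -n * probs[0] * probs[1])   # both close to -1.0
```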

• Example 8. Three identical dice are thrown. What is the
probability of obtaining a total of 9? The dice are not
supposed to be symmetrical, and the probability of
turning up face j is equal to pj, 1 ≤ j ≤ 6; the same for all
three dice.

• Example 8. Let us list the possible cases in terms of the
X's and the N's. The (unordered) face values summing to 9
are: {1, 2, 6}, {1, 3, 5}, {1, 4, 4}, {2, 2, 5}, {2, 3, 4}, {3, 3, 3}.
Counting the arrangements of each case, we obtain
P(X1 + X2 + X3 = 9) = 6p1p2p6 + 6p1p3p5 + 3p1p4² + 3p2²p5 + 6p2p3p4 + p3³.

• Example 8. If all the p's are equal to 1/6, then this is
equal to:
(6 + 6 + 3 + 3 + 6 + 1)/6³ = 25/216.
• In the general context of the X's discussed above, the
probability P(X1 + ··· + Xn = m) is obtained by summing
P(N1 = n1, ..., Nr = nr) over all (n1, ..., nr) satisfying the two
conditions:
n1 + n2 + ··· + nr = n and n1 + 2n2 + ··· + rnr = m.
In Example 8, r = 6, n = 3, m = 9.
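A brute-force enumeration (illustrative, not from the book) confirming 25/216 for three fair dice:

```python
from itertools import product
from fractions import Fraction

# Count the ordered outcomes of three fair dice that total 9.
count = sum(1 for faces in product(range(1, 7), repeat=3) if sum(faces) == 9)
print(count, Fraction(count, 6 ** 3))   # 25, 25/216
```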


• 6.5 Generating function and the like


• A powerful mathematical device, a true gimmick, is the
generating function, invented by the great prolific
mathematician Euler.
• Let X be a random variable taking only nonnegative
integer values, with the probability distribution given by
P(X = j) = aj, j = 0, 1, 2, .... The idea is to put all the
information contained above into a compact capsule. Using
a dummy variable z, the generating function associated
with the sequence of numbers {aj, j ≥ 0} is defined as:
g(z) = a0 + a1z + a2z² + ··· = Σ(j≥0) aj z^j;
it converges for |z| ≤ 1.


• Differentiate the series term by term to get the
derivatives of g:
g'(z) = Σ(j≥1) j aj z^(j−1), g''(z) = Σ(j≥2) j(j − 1) aj z^(j−2), and so on.

• If we set z = 0, all the terms vanish except the constant
term:
g(0) = a0; more generally, g^(j)(0) = j! aj, so that aj = g^(j)(0)/j!.
• Therefore, not only does the probability distribution
determine the generating function, but also vice versa.
Putting z = 1 in g' and g'', we get:
g'(1) = Σ j aj = E(X),
g''(1) = Σ j(j − 1) aj = E(X(X − 1)) = E(X²) − E(X).
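These identities can be checked symbolically; the sketch below (not from the book) uses the binomial generating function (q + pz)^n of Example 9 ahead, with illustrative n and p:

```python
import sympy as sp

# For g(z) = (q + pz)^n: g'(1) should equal np, and g''(1) should
# equal E(X(X-1)) = n(n-1)p^2.
z = sp.symbols('z')
n, p = 4, sp.Rational(1, 3)
g = (1 - p + p * z) ** n
print(sp.diff(g, z).subs(z, 1))       # 4/3 = np
print(sp.diff(g, z, 2).subs(z, 1))    # 4/3 = n(n-1)p^2
```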

• Theorem 5. The probability distribution of a nonnegative
integer-valued random variable is uniquely determined by
its generating function.
• Let Y be a random variable having the probability
distribution {bk, k ≥ 0}, where bk = P(Y = k), and let h be
its generating function:
h(z) = Σ(k≥0) bk z^k.
Suppose that g(z) = h(z) for all |z| < 1; then the theorem
asserts that ak = bk for all k ≥ 0. Consequently, X and Y have
the same distribution.

• Multiplication of generating functions

Let
g(z)h(z) = Σ(l≥0) cl z^l, where cl = a0bl + a1b(l−1) + ··· + alb0.
The sequence {cl} is called the convolution of the two
sequences {aj} and {bj}. What does cl stand for?

Suppose that the random variables X and Y are
independent. Given that X = j, we have X + Y = l if and only
if Y = l − j. Hence
P(X + Y = l) = Σ(j=0..l) P(X = j) P(Y = l − j) = Σ(j=0..l) aj b(l−j) = cl.

We have proved P(X + Y = l) = cl, so that {cl, l ≥ 0} is
the probability distribution of the random variable X + Y.
• Theorem 6. If the random variables X1, ..., Xn are
independent and have g1, ..., gn as their generating
functions, then the generating function of the sum
X1 + ··· + Xn is given by the product g1 ··· gn.
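A small sketch of Theorem 6 in action (not from the book; the convolve helper is written out here for clarity): convolving two pmfs gives the pmf of the sum, exactly as multiplying their generating functions multiplies coefficients.

```python
# c[l] = sum_j a[j] * b[l - j], the coefficient rule for g(z)h(z).
def convolve(a, b):
    c = [0.0] * (len(a) + len(b) - 1)
    for j, aj in enumerate(a):
        for k, bk in enumerate(b):
            c[j + k] += aj * bk
    return c

die = [0] + [1 / 6] * 6        # pmf of one fair die: P(X = j), j = 0..6
two_dice = convolve(die, die)  # pmf of the sum of two independent dice
print(two_dice[7])             # P(X + Y = 7) = 6/36 ≈ 0.1667
```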

• Example 9. For the Bernoullian random variables
X1, ..., Xn, since a0 = q, a1 = p, the common generating
function is g(z) = q + pz.
Hence the generating function of Sn = X1 + ··· + Xn, where
the X's are independent, is given by the nth power of g:
g(z)^n = (q + pz)^n = Σ(k=0..n) C(n, k) p^k q^(n−k) z^k.

• Example 9. On the other hand, by definition of a
generating function, we have
E(z^Sn) = Σ(k=0..n) P(Sn = k) z^k.
Comparing the above two equations, we obtain
P(Sn = k) = C(n, k) p^k q^(n−k), 0 ≤ k ≤ n.
Hence we have proved Bernoulli's formula in another way.

• Example 10. For the waiting-time distribution (Section
4.4), we have pj = q^(j−1) p, j ≥ 1; hence the generating
function is given as follows:
g(z) = Σ(j≥1) q^(j−1) p z^j = pz/(1 − qz).

• Example 10. Let Sn = T1 + ··· + Tn, where the T's are
independent and each has the generating function g. Then Sn
is the waiting time for the nth success. Its generating function
is given by g^n:
g(z)^n = (pz)^n/(1 − qz)^n.

• Example 10. Expanding (1 − qz)^(−n) by the binomial
series, we obtain for j ≥ 0:
P(Sn = n + j) = C(n + j − 1, j) p^n q^j.
The probability distribution given by
{C(n + j − 1, j) p^n q^j, j ≥ 0}
is called the negative binomial distribution of order n. Its
generating function is given by
g(z)^n / z^n = p^n/(1 − qz)^n.
• Example 10. Note that g(z)/z is the generating function of T1 −
1, which represents the number of failures before the first
success. Hence the above generating function is that of the
random variable Sn − n, which is the total number of failures
before the nth success.

• Example 10. A negative binomial experiment is a
statistical experiment that has the following properties:
• The experiment consists of repeated trials.
• Each trial can result in just two possible outcomes: success and
failure.
• The probability of success p is the same on every trial.
• The trials are independent; that is, the outcome on one trial does
not affect the outcome on other trials.
• The experiment continues until j successes are observed, where j is
specified in advance.

• Example 10. Consider the following statistical experiment.
You flip a coin repeatedly and count the number of times
the coin lands on heads. You continue flipping the coin
until it has landed 5 times on heads. This is a negative
binomial experiment.

• Example 10. Comparison. A binomial experiment is a
statistical experiment that has the following properties:
• The experiment consists of n repeated trials.
• Each trial can result in just two possible outcomes: success and
failure.
• The probability of success p is the same on every trial.
• The trials are independent; that is, the outcome on one trial does
not affect the outcome on other trials.

• Example 10. Consider the following statistical experiment.
You flip a coin 2 times and count the number of times the
coin lands on heads. This is a binomial experiment because:
• The experiment consists of repeated trials. We flip a coin 2 times.
• Each trial can result in just two possible outcomes - heads or tails.
• The probability of success is constant - 0.5 on every trial.
• The trials are independent; that is, getting heads on one trial does
not affect whether we get heads on other trials.

• Example 11. For the dice problem at the end of Section 6.4,
we have pj = 1/6 for 1 ≤ j ≤ 6 if the dice are symmetrical.
Hence the associated generating function is given by
g(z) = (z + z² + ··· + z⁶)/6 = z(1 − z⁶)/(6(1 − z)).
The generating function of the total points obtained by
throwing three dice is just g³.

• Example 11. The coefficient of z⁹ is easily found by
inspection, since there are only two ways of forming it
from the product above: writing
g(z)³ = z³(1 − z⁶)³/(6³(1 − z)³) and (1 − z)^(−3) = Σ C(k + 2, 2) z^k,
the coefficient of z⁹ is (1/216)(C(8, 2) − 3) = 25/216.
• Generally, the machinery works in the following steps:
Step 1: Code the probabilities {P(X = j), j ≥ 0} into a
generating function g.
Step 2: Process the function by raising it to the nth power g^n.
Step 3: Decode the probabilities {P(Sn = k), k ≥ 0} from g^n
by expanding it into a power series.
• The Laplace transform and the Fourier transform
• Let X be a random variable; for |z| < 1, consider the
mathematical expectation of z^X. When X takes the value j,
z^X takes the value z^j; hence the expectation of z^X may be
expressed as
E(z^X) = Σ(j≥0) P(X = j) z^j,
which is g(z).
• The generating function of X1 + ··· + Xn is equal to
E(z^(X1 + ··· + Xn)) = E(z^X1) ··· E(z^Xn) = g1(z) ··· gn(z).
This is the result of Theorem 6.

• Extension. If X can take arbitrary real values, the above
expression still has a meaning for 0 < z ≤ 1. Every such z
can be represented as e^(−λ) with 0 ≤ λ < ∞; consider the new
expression after such a change of variable:
E(z^X) = E(e^(−λX)).

• More generally, if X takes the values {xj} with probabilities
{pj}, then
E(e^(−λX)) = Σ pj e^(−λxj).
• Finally, if X has the density function f, let φ(u) = e^(−λu); we
have
E(e^(−λX)) = ∫ e^(−λu) f(u) du,
provided that the integral converges. It is called the
Laplace transform of X.
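A numerical check (illustrative, not from the book; the parameter values are assumptions) that the Laplace transform of the exponential density μe^(−μu) is μ/(μ + λ):

```python
from math import exp
from scipy.integrate import quad

# E(exp(-lam*X)) for X exponential with rate mu, vs mu/(mu + lam).
mu, lam = 2.0, 0.7
value, _err = quad(lambda u: exp(-lam * u) * mu * exp(-mu * u), 0, float("inf"))
print(value, mu / (mu + lam))   # both ~0.7407
```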

• If we replace the negative real parameter −λ by the purely
imaginary iθ, where θ is real, we get the Fourier
transform E(e^(iθX)). It is also called the characteristic
function in probability theory.
• Theorem 7. Theorems 5 and 6 remain true when the
generating function is replaced by the Laplace transform
(for nonnegative random variables) or the Fourier
transform (for arbitrary random variables).


• Homework
Exercises 1, 2, 4, 5, 7, 8, 13, 28, 30, 45 (Pages 195–200)


Chapter 7

To be updated…

THANKS
2020-11-27
