Section 2.8. Linear Combinations of Random Variables

Note. We considered linear combinations of random variables in Section 2.6, "Extension to Several Variables," in connection with moment generating functions (see Theorem 2.6.1 and Corollary 2.6.1). In this section we consider expectations, variances, and covariances of linear combinations of random variables. With $X_1, X_2, \ldots, X_n$ as the random variables, we define the random variable $T = \sum_{i=1}^n a_i X_i$, where each $a_i$ is some constant.

Theorem 2.8.1. Let $X_1, X_2, \ldots, X_n$ be random variables and define $T = \sum_{i=1}^n a_i X_i$. Suppose $E[X_i] = \mu_i$ for $i = 1, 2, \ldots, n$. Then $E[T] = \sum_{i=1}^n a_i \mu_i$.
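Example (illustrative values, not from the text). If $T = 3X_1 - 2X_2$ with $E[X_1] = 1$ and $E[X_2] = 4$, then
$$E[T] = 3E[X_1] - 2E[X_2] = 3(1) - 2(4) = -5.$$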

Note. To find the variance of $T$, we first consider covariances.

Theorem 2.8.2. Let $X_1, X_2, \ldots, X_n, Y_1, Y_2, \ldots, Y_m$ be random variables and define $T = \sum_{i=1}^n a_i X_i$ and $W = \sum_{j=1}^m b_j Y_j$. If $E[X_i^2] < \infty$ and $E[Y_j^2] < \infty$ for $i = 1, 2, \ldots, n$ and $j = 1, 2, \ldots, m$, then
$$\text{Cov}(T, W) = \sum_{i=1}^n \sum_{j=1}^m a_i b_j \,\text{Cov}(X_i, Y_j).$$
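Example (our illustration, taking $Y_1 = X_1$ and $Y_2 = X_2$). Let $T = X_1 + X_2$ and $W = X_1 - X_2$. Expanding by Theorem 2.8.2,
$$\text{Cov}(T, W) = \text{Cov}(X_1, X_1) - \text{Cov}(X_1, X_2) + \text{Cov}(X_2, X_1) - \text{Cov}(X_2, X_2) = \text{Var}(X_1) - \text{Var}(X_2),$$
since $\text{Cov}(X_i, X_i) = \text{Var}(X_i)$ and covariance is symmetric.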

Corollary 2.8.1. Let $X_1, X_2, \ldots, X_n$ be random variables and define $T = \sum_{i=1}^n a_i X_i$. Provided $E[X_i^2] < \infty$ for $i = 1, 2, \ldots, n$, then
$$\text{Var}(T) = \sum_{i=1}^n a_i^2 \,\text{Var}(X_i) + 2\sum_{i<j} a_i a_j \,\text{Cov}(X_i, X_j).$$
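A minimal numerical sketch of Corollary 2.8.1 (our own check; the normal distribution, constants, and covariance matrix below are assumptions, not from the notes): draw correlated pairs $(X_1, X_2)$, form $T = a_1 X_1 + a_2 X_2$, and compare the empirical variance of $T$ with $a_1^2\,\text{Var}(X_1) + a_2^2\,\text{Var}(X_2) + 2a_1 a_2\,\text{Cov}(X_1, X_2)$.

import numpy as np

rng = np.random.default_rng(0)
a = np.array([2.0, -1.0])            # the constants a_1, a_2 (illustrative)
cov = np.array([[1.0, 0.3],          # [Var(X1), Cov(X1, X2)]
                [0.3, 2.0]])         # [Cov(X2, X1), Var(X2)]
X = rng.multivariate_normal([0.0, 0.0], cov, size=1_000_000)

T = X @ a                            # T = 2*X1 - X2 for each draw
theory = a @ cov @ a                 # 4(1) + 1(2) + 2(2)(-1)(0.3) = 4.8
print(T.var(), theory)               # empirical variance should be near 4.8

The quadratic form $a^{\top} \Sigma a$ computed on the second-to-last line is exactly the double sum of Theorem 2.8.2 with $W = T$.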

Corollary 2.8.2. Let $X_1, X_2, \ldots, X_n$ be independent random variables and define $T = \sum_{i=1}^n a_i X_i$. With $\text{Var}(X_i) = \sigma_i^2$ for $i = 1, 2, \ldots, n$, we have $\text{Var}(T) = \sum_{i=1}^n a_i^2 \sigma_i^2$.
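Example (illustrative numbers, not the text's). If $X_1$ and $X_2$ are independent with $\sigma_1^2 = 1$ and $\sigma_2^2 = 2$, then $\text{Var}(2X_1 + 3X_2) = 2^2(1) + 3^2(2) = 22$; independence makes the covariance terms of Corollary 2.8.1 vanish.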

Note. We now give definitions which are our first step from mathematical proba-
bility to mathematical statistics.

Definition 2.8.1. If the random variables $X_1, X_2, \ldots, X_n$ are independent and identically distributed (that is, each $X_i$ has the same distribution), then these random variables constitute a random sample of size $n$ from that common distribution. We abbreviate independent and identically distributed as "iid."

Definition. Let $X_1, X_2, \ldots, X_n$ be a random sample of size $n$ from some distribution with mean $\mu$ and variance $\sigma^2$. The sample mean is
$$\overline{X} = \frac{\sum_{i=1}^n X_i}{n}.$$
The sample variance is
$$S^2 = \frac{\sum_{i=1}^n (X_i - \overline{X})^2}{n-1}.$$
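Observe that $\overline{X}$ is itself a linear combination of $X_1, X_2, \ldots, X_n$ with each $a_i = 1/n$, so the results above apply to it directly; the next theorem uses this.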

Theorem 2.8.A. Let $X_1, X_2, \ldots, X_n$ be independent and identically distributed random variables with common mean $\mu$ and variance $\sigma^2$. We have
$$E[\overline{X}] = \mu, \quad \text{Var}(\overline{X}) = \frac{\sigma^2}{n}, \quad S^2 = \frac{\sum_{i=1}^n X_i^2 - n\overline{X}^2}{n-1}, \quad \text{and} \quad E[S^2] = \sigma^2.$$
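For the first two claims, applying Theorem 2.8.1 and Corollary 2.8.2 with $a_i = 1/n$ gives (a one-line check, supplied here for clarity)
$$E[\overline{X}] = \sum_{i=1}^n \frac{1}{n}\,\mu = \mu \quad \text{and} \quad \text{Var}(\overline{X}) = \sum_{i=1}^n \frac{1}{n^2}\,\sigma^2 = \frac{\sigma^2}{n}.$$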

Note. Since $E[\overline{X}] = \mu$ and $E[S^2] = \sigma^2$, $\overline{X}$ is an unbiased estimate of $\mu$ and $S^2$ is an unbiased estimate of $\sigma^2$.
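A brief simulation sketch of unbiasedness (our own check; the normal distribution and parameters are assumptions, not from the notes): averaging $S^2$ over many samples of size $n$ should come out near $\sigma^2$.

import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n = 5.0, 2.0, 10
samples = rng.normal(mu, sigma, size=(100_000, n))   # 100,000 samples of size n
s2 = samples.var(axis=1, ddof=1)                     # S^2 uses divisor n - 1
print(s2.mean(), sigma**2)                           # mean of S^2 is near 4

Here ddof=1 gives the $n - 1$ divisor of the definition above; with ddof=0 (divisor $n$) the average would come out low by the factor $(n-1)/n$.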

Revised: 2/18/2020
