MA20205
Bibhas Adhikari
Lecture 12
October 11, 2022
Bibhas Adhikari (Autumn 2022-23, IIT Kharagpur) Probability and Statistics Lecture 12 October 11, 2022 1 / 18
Joint distributions
Joint expectation
Let X, Y be random variables. Then the joint expectation of the pair is defined as

E(XY) = \sum_{y \in \Omega_Y} \sum_{x \in \Omega_X} x y \, f(x, y)   if X, Y are discrete

E(XY) = \int_{\Omega_Y} \int_{\Omega_X} x y \, f(x, y) \, dx \, dy   if X, Y are continuous
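For the discrete case, the double sum can be checked directly in Python. The joint pmf below is a hypothetical 2×3 table chosen for illustration, not one from the lecture:

```python
# Hypothetical joint pmf of (X, Y), stored as (x, y) -> f(x, y).
# The probabilities are illustrative and sum to 1.
f = {(1, 0): 0.1, (1, 1): 0.2, (1, 2): 0.1,
     (2, 0): 0.2, (2, 1): 0.1, (2, 2): 0.3}

# E(XY) = sum over Omega_Y and Omega_X of x * y * f(x, y)
E_XY = sum(x * y * p for (x, y), p in f.items())
print(round(E_XY, 6))  # 1.8
```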
Joint distributions
Suppose X and Y are discrete random variables with range spaces \Omega_X = \{x_1, \ldots, x_n\} and \Omega_Y = \{y_1, \ldots, y_n\}. Now define the vectors x = [x_1, \ldots, x_n]^T, y = [y_1, \ldots, y_n]^T, and let P = [p_{ij}] be the n \times n matrix of joint probabilities, p_{ij} = f(x_i, y_j). Then

E(XY) = x^T P y,

the weighted inner (scalar) product of x and y.
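The matrix form can be verified numerically with NumPy. The range values and joint-probability matrix below are hypothetical (and here \Omega_X and \Omega_Y have different sizes, so P is rectangular; the slide's square case is n = m):

```python
import numpy as np

# Hypothetical ranges and joint probabilities: P[i, j] = f(x_i, y_j)
x = np.array([1.0, 2.0])
y = np.array([0.0, 1.0, 2.0])
P = np.array([[0.1, 0.2, 0.1],
              [0.2, 0.1, 0.3]])

# Weighted inner product x^T P y ...
E_XY_matrix = x @ P @ y
# ... agrees with the double sum defining E(XY)
E_XY_sum = sum(x[i] * y[j] * P[i, j]
               for i in range(len(x)) for j in range(len(y)))
```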
Joint distributions
If the joint probabilities are uniform on the diagonal, that is, f(x_i, y_j) = 1/n when i = j and 0 otherwise, then

P = \frac{1}{n} I,   E(XY) = \frac{1}{n} x^T y
Joint distributions
Recall that the cosine of the angle between x and y is defined as

\cos\theta = \frac{x^T y}{\|x\| \, \|y\|}

where \|x\| = \sqrt{\sum_{i=1}^n x_i^2} and \|y\| = \sqrt{\sum_{i=1}^n y_i^2}.

Geometry of expectation: the geometry defined by the weighted inner product and weighted norm, where

\cos\theta = \frac{x^T P y}{\|x\|_{P_X} \, \|y\|_{P_Y}} = \frac{E(XY)}{\sqrt{E(X^2)} \, \sqrt{E(Y^2)}}
Joint distribution
In the above,

E(X^2) = x^T P_X x = \|x\|^2_{P_X}
E(Y^2) = y^T P_Y y = \|y\|^2_{P_Y}

where P_X = \mathrm{diag}(p(x_1), \ldots, p(x_n)) and P_Y = \mathrm{diag}(p(y_1), \ldots, p(y_n)).

Obviously,

-1 \le \frac{E(XY)}{\sqrt{E(X^2)} \, \sqrt{E(Y^2)}} \le 1

due to the Cauchy-Schwarz inequality:

(E(XY))^2 \le E(X^2) E(Y^2)
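The Cauchy-Schwarz bound is easy to confirm numerically on a hypothetical joint pmf (the same illustrative table as before):

```python
import math

# Hypothetical joint pmf (x, y) -> f(x, y); values sum to 1
f = {(1, 0): 0.1, (1, 1): 0.2, (1, 2): 0.1,
     (2, 0): 0.2, (2, 1): 0.1, (2, 2): 0.3}

E_XY = sum(x * y * p for (x, y), p in f.items())
E_X2 = sum(x * x * p for (x, y), p in f.items())
E_Y2 = sum(y * y * p for (x, y), p in f.items())

# cos(theta) in the weighted geometry; Cauchy-Schwarz keeps it in [-1, 1]
cos_theta = E_XY / math.sqrt(E_X2 * E_Y2)
```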
Joint distributions
Conclusion: E(XY) can be interpreted as a measure of correlation between x and y.

Covariance
Let X, Y be random variables with means \mu_X, \mu_Y respectively. Then the covariance of X and Y is

Cov(X, Y) = E[(X - \mu_X)(Y - \mu_Y)] = E(XY) - \mu_X \mu_Y
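The two equivalent forms of the covariance can be checked against each other on a hypothetical joint pmf:

```python
# Hypothetical joint pmf (x, y) -> f(x, y)
f = {(1, 0): 0.1, (1, 1): 0.2, (1, 2): 0.1,
     (2, 0): 0.2, (2, 1): 0.1, (2, 2): 0.3}

mu_X = sum(x * p for (x, y), p in f.items())
mu_Y = sum(y * p for (x, y), p in f.items())

# Definition: Cov(X, Y) = E[(X - mu_X)(Y - mu_Y)]
cov_def = sum((x - mu_X) * (y - mu_Y) * p for (x, y), p in f.items())
# Shortcut: Cov(X, Y) = E(XY) - mu_X * mu_Y
cov_short = sum(x * y * p for (x, y), p in f.items()) - mu_X * mu_Y
```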
Joint distributions
Correlation coefficient
Let X, Y be random variables. Then the correlation coefficient is defined by

\rho = \frac{Cov(X, Y)}{\sqrt{Var(X) \, Var(Y)}} = \frac{Cov(X, Y)}{\sigma_X \sigma_Y}

Observations:
1. -1 \le \rho \le 1
2. if X = Y, i.e. fully correlated, then \rho = 1
3. if X = -Y, i.e. negatively correlated, then \rho = -1
4. if X and Y are uncorrelated, then \rho = 0
5. if X and Y are independent, then E(XY) = E(X)E(Y), and hence Cov(X, Y) = 0 and Var(X + Y) = Var(X) + Var(Y)
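Observations 1 and 2 can be exercised on hypothetical tables: a small helper computes \rho from a joint pmf, and a diagonal pmf (Y = X) gives \rho = 1:

```python
import math

def corr(f):
    """Correlation coefficient of a discrete pair given its joint pmf."""
    mu_X = sum(x * p for (x, y), p in f.items())
    mu_Y = sum(y * p for (x, y), p in f.items())
    var_X = sum((x - mu_X) ** 2 * p for (x, y), p in f.items())
    var_Y = sum((y - mu_Y) ** 2 * p for (x, y), p in f.items())
    cov = sum((x - mu_X) * (y - mu_Y) * p for (x, y), p in f.items())
    return cov / math.sqrt(var_X * var_Y)

# Hypothetical joint pmf with weak positive dependence
f = {(1, 0): 0.1, (1, 1): 0.2, (1, 2): 0.1,
     (2, 0): 0.2, (2, 1): 0.1, (2, 2): 0.3}
rho = corr(f)

# Fully correlated case: Y = X, uniform on {1, 2, 3}
rho_full = corr({(v, v): 1 / 3 for v in (1, 2, 3)})
```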
Joint distributions
Interesting result: Let X^* = \frac{X - \mu_X}{\sigma_X} and Y^* = \frac{Y - \mu_Y}{\sigma_Y}. Then

\rho^* = \frac{Cov(X^*, Y^*)}{\sigma_{X^*} \sigma_{Y^*}} = \frac{Cov(X, Y)}{\sigma_X \sigma_Y} = \rho
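The invariance of \rho under standardization can be confirmed by transforming the range values of a hypothetical pmf (the probabilities are unchanged) and recomputing:

```python
import math

# Hypothetical joint pmf (x, y) -> f(x, y)
f = {(1, 0): 0.1, (1, 1): 0.2, (1, 2): 0.1,
     (2, 0): 0.2, (2, 1): 0.1, (2, 2): 0.3}

def corr(f):
    """Correlation coefficient from a joint pmf."""
    mu_X = sum(x * p for (x, y), p in f.items())
    mu_Y = sum(y * p for (x, y), p in f.items())
    var_X = sum((x - mu_X) ** 2 * p for (x, y), p in f.items())
    var_Y = sum((y - mu_Y) ** 2 * p for (x, y), p in f.items())
    cov = sum((x - mu_X) * (y - mu_Y) * p for (x, y), p in f.items())
    return cov / math.sqrt(var_X * var_Y)

mu_X = sum(x * p for (x, y), p in f.items())
mu_Y = sum(y * p for (x, y), p in f.items())
sd_X = math.sqrt(sum((x - mu_X) ** 2 * p for (x, y), p in f.items()))
sd_Y = math.sqrt(sum((y - mu_Y) ** 2 * p for (x, y), p in f.items()))

# Standardize the range values; the joint probabilities stay the same
f_std = {((x - mu_X) / sd_X, (y - mu_Y) / sd_Y): p for (x, y), p in f.items()}
```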
Joint distributions
Joint Moment Generating Function
Let X, Y be random variables with joint pdf f(x, y). Then the real-valued function

M(s, t) = E(e^{sX + tY})

is called the joint mgf of X and Y, if it exists in some interval (s, t) \in [-h, h] \times [-k, k].

Observations
1. M(s, 0) = E(e^{sX})
2. M(0, t) = E(e^{tY})
3. E(X^k) = \frac{\partial^k M(s,t)}{\partial s^k}\Big|_{(0,0)}, \quad E(Y^k) = \frac{\partial^k M(s,t)}{\partial t^k}\Big|_{(0,0)}, \quad k = 1, 2, 3, \ldots
4. E(XY) = \frac{\partial^2 M(s,t)}{\partial s \, \partial t}\Big|_{(0,0)}
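For a discrete pair the joint mgf is a finite sum, so observations 1 and 4 can be checked directly; here the mixed partial at (0, 0) is approximated by a central finite difference on a hypothetical pmf:

```python
import math

# Hypothetical joint pmf (x, y) -> f(x, y)
f = {(1, 0): 0.1, (1, 1): 0.2, (1, 2): 0.1,
     (2, 0): 0.2, (2, 1): 0.1, (2, 2): 0.3}

def M(s, t):
    """Joint mgf M(s, t) = E(exp(sX + tY)) of the discrete pair."""
    return sum(p * math.exp(s * x + t * y) for (x, y), p in f.items())

# Observation 1: M(s, 0) is the mgf of X alone
s = 0.7
mgf_X = sum(p * math.exp(s * x) for (x, y), p in f.items())

# Observation 4: the mixed partial at (0, 0) is E(XY);
# approximate it with a central finite difference
h = 1e-3
E_XY_approx = (M(h, h) - M(h, -h) - M(-h, h) + M(-h, -h)) / (4 * h * h)
E_XY_exact = sum(x * y * p for (x, y), p in f.items())
```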
Joint distributions
Conditional pmf/pdf
Let X and Y be discrete random variables. Then the conditional pmf of X given Y is

f_{X|Y}(x|y) = \frac{f(x, y)}{f_Y(y)}

and

P(X \in A) = \sum_{x \in A} \sum_{y \in \Omega_Y} f_{X|Y}(x|y) f_Y(y) = \sum_{y \in \Omega_Y} P(X \in A \mid Y = y) f_Y(y)
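The law of total probability above can be verified on a hypothetical joint pmf by computing P(X \in A) both directly and through the conditional pmf:

```python
# Hypothetical joint pmf (x, y) -> f(x, y)
f = {(1, 0): 0.1, (1, 1): 0.2, (1, 2): 0.1,
     (2, 0): 0.2, (2, 1): 0.1, (2, 2): 0.3}

# Marginal pmf of Y
f_Y = {}
for (x, y), p in f.items():
    f_Y[y] = f_Y.get(y, 0.0) + p

def f_X_given_Y(x, y):
    """Conditional pmf f_{X|Y}(x|y) = f(x, y) / f_Y(y)."""
    return f.get((x, y), 0.0) / f_Y[y]

# P(X in A) directly, and via the law of total probability
A = {2}
p_direct = sum(p for (x, y), p in f.items() if x in A)
p_total = sum(f_X_given_Y(x, y) * f_Y[y] for x in A for y in f_Y)
```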
Joint distributions
Probability of conditional event
Let X, Y be continuous random variables and A be an event. Then

P(X \in A \mid Y = y) = \int_A f_{X|Y}(x|y) \, dx

and

P(X \in A) = \int_{\Omega_Y} P(X \in A \mid Y = y) f_Y(y) \, dy

Conditional cdf
Let X, Y be random variables. Then

F_{X|Y}(x|y) = P(X \le x \mid Y = y) = \sum_{x' \le x} f_{X|Y}(x'|y)   if X, Y are discrete

F_{X|Y}(x|y) = P(X \le x \mid Y = y) = \frac{\int_{-\infty}^{x} f(x', y) \, dx'}{f_Y(y)}   if X, Y are continuous
Joint distributions
Conditional expectation
The conditional expectation of X given Y = y is

E(X \mid Y = y) = \sum_x x f_{X|Y}(x|y)
Joint distributions
Proof:

E(X) = \sum_x x f_X(x) = \sum_x x \Big( \sum_y f(x, y) \Big)
     = \sum_x \sum_y x f_{X|Y}(x|y) f_Y(y)
     = \sum_y \Big( \sum_x x f_{X|Y}(x|y) \Big) f_Y(y)
     = \sum_y E(X \mid Y = y) f_Y(y)
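The identity proved in this derivation, E(X) = \sum_y E(X \mid Y = y) f_Y(y), can be confirmed numerically on a hypothetical joint pmf:

```python
# Hypothetical joint pmf (x, y) -> f(x, y)
f = {(1, 0): 0.1, (1, 1): 0.2, (1, 2): 0.1,
     (2, 0): 0.2, (2, 1): 0.1, (2, 2): 0.3}

xs = sorted({x for (x, y) in f})
f_Y = {}
for (x, y), p in f.items():
    f_Y[y] = f_Y.get(y, 0.0) + p

# E(X) directly from the joint pmf
E_X = sum(x * p for (x, y), p in f.items())

def E_X_given(y):
    """E(X | Y = y) = sum_x x * f_{X|Y}(x|y)."""
    return sum(x * f.get((x, y), 0.0) / f_Y[y] for x in xs)

# E(X) recovered by averaging the conditional means over Y
E_X_tower = sum(E_X_given(y) * f_Y[y] for y in f_Y)
```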
Joint distributions
Note
1. The mean of a random variable X is a deterministic number.
2. The conditional mean of X given Y = y, that is, E(X|y), is a function h(y) of y.
3. Form the function h(Y), which is a random variable.
Thus:

E(X) = E_Y(E_{X|Y}(X|Y))

Proof.

E(X) = \sum_y E(X \mid Y = y) f_Y(y) = \sum_y h(y) f_Y(y) = E(h(Y)) = E(E(X|Y))

Remark: All the above results hold with the roles of X and Y interchanged, i.e., conditioning on X = x with Y free.
Joint distributions
A special result: Let X and Y be random variables with means \mu_X and \mu_Y, and standard deviations \sigma_X and \sigma_Y, respectively. If the conditional expectation of Y given X = x is linear in x, then

E(Y \mid X = x) = \mu_Y + \rho \frac{\sigma_Y}{\sigma_X} (x - \mu_X).
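When X takes only two values, E(Y \mid X = x) is automatically linear in x (a line through two points), so the hypothesis holds and the formula can be checked exactly on a hypothetical table:

```python
import math

# Hypothetical joint pmf; X takes only two values, so E(Y | X = x)
# is automatically linear in x and the special result applies
f = {(1, 0): 0.1, (1, 1): 0.2, (1, 2): 0.1,
     (2, 0): 0.2, (2, 1): 0.1, (2, 2): 0.3}

ys = sorted({y for (x, y) in f})
f_X = {}
for (x, y), p in f.items():
    f_X[x] = f_X.get(x, 0.0) + p

mu_X = sum(x * p for (x, y), p in f.items())
mu_Y = sum(y * p for (x, y), p in f.items())
sd_X = math.sqrt(sum((x - mu_X) ** 2 * p for (x, y), p in f.items()))
sd_Y = math.sqrt(sum((y - mu_Y) ** 2 * p for (x, y), p in f.items()))
cov = sum((x - mu_X) * (y - mu_Y) * p for (x, y), p in f.items())
rho = cov / (sd_X * sd_Y)

def cond_mean_Y(x):
    """E(Y | X = x) computed directly from the conditional pmf."""
    return sum(y * f.get((x, y), 0.0) / f_X[x] for y in ys)

# Largest gap between the direct conditional mean and the formula
gap = max(abs(cond_mean_Y(x) - (mu_Y + rho * sd_Y / sd_X * (x - mu_X)))
          for x in f_X)
```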
Joint distributions
Proof: Write E(Y \mid X = x) = ax + b. Taking expectations over X, it follows that

\mu_Y = a \mu_X + b.

Further,

\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x y f(x, y) \, dy \, dx = \int_{-\infty}^{\infty} (a x^2 + b x) f_X(x) \, dx
\Rightarrow E(XY) = a E(X^2) + b \mu_X.

Eliminating b from the two equations, therefore,

a = \frac{E(XY) - \mu_X \mu_Y}{\sigma_X^2} = \frac{Cov(X, Y)}{\sigma_X^2} = \frac{Cov(X, Y)}{\sigma_X \sigma_Y} \cdot \frac{\sigma_Y}{\sigma_X} = \rho \frac{\sigma_Y}{\sigma_X}

Similarly, b = \mu_Y - \rho \frac{\sigma_Y}{\sigma_X} \mu_X.
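The two intermediate identities in this derivation, \mu_Y = a\mu_X + b and E(XY) = aE(X^2) + b\mu_X, can be checked on a hypothetical table where the conditional mean is exactly linear (X two-valued):

```python
# Hypothetical joint pmf; X two-valued, so E(Y | X = x) = a*x + b exactly
f = {(1, 0): 0.1, (1, 1): 0.2, (1, 2): 0.1,
     (2, 0): 0.2, (2, 1): 0.1, (2, 2): 0.3}

mu_X = sum(x * p for (x, y), p in f.items())
mu_Y = sum(y * p for (x, y), p in f.items())
var_X = sum((x - mu_X) ** 2 * p for (x, y), p in f.items())
cov = sum((x - mu_X) * (y - mu_Y) * p for (x, y), p in f.items())
E_XY = sum(x * y * p for (x, y), p in f.items())
E_X2 = sum(x * x * p for (x, y), p in f.items())

# Closed forms derived above: a = Cov(X, Y) / sigma_X^2, b = mu_Y - a*mu_X
a = cov / var_X
b = mu_Y - a * mu_X
```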
Joint distributions
Conditional variance
Let X, Y be random variables with joint pdf f(x, y). Let f_{Y|X}(y|x) be the conditional density of Y given X = x. Then the conditional variance of Y given X = x is defined as

Var(Y \mid X = x) = E(Y^2 \mid X = x) - (E(Y \mid X = x))^2

When E(Y \mid X = x) is linear in x, as above,

E_X(Var(Y \mid X)) = (1 - \rho^2) Var(Y)
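The last identity can be checked on the same kind of hypothetical table (X two-valued, so the conditional mean is linear and the identity should hold exactly):

```python
import math

# Hypothetical joint pmf; X two-valued, so E(Y | X = x) is linear in x
# and E_X(Var(Y|X)) = (1 - rho^2) Var(Y) should hold
f = {(1, 0): 0.1, (1, 1): 0.2, (1, 2): 0.1,
     (2, 0): 0.2, (2, 1): 0.1, (2, 2): 0.3}

ys = sorted({y for (x, y) in f})
f_X = {}
for (x, y), p in f.items():
    f_X[x] = f_X.get(x, 0.0) + p

mu_X = sum(x * p for (x, y), p in f.items())
mu_Y = sum(y * p for (x, y), p in f.items())
var_X = sum((x - mu_X) ** 2 * p for (x, y), p in f.items())
var_Y = sum((y - mu_Y) ** 2 * p for (x, y), p in f.items())
cov = sum((x - mu_X) * (y - mu_Y) * p for (x, y), p in f.items())
rho = cov / math.sqrt(var_X * var_Y)

def cond_var_Y(x):
    """Var(Y | X = x) = E(Y^2 | X = x) - (E(Y | X = x))^2."""
    e1 = sum(y * f.get((x, y), 0.0) / f_X[x] for y in ys)
    e2 = sum(y * y * f.get((x, y), 0.0) / f_X[x] for y in ys)
    return e2 - e1 * e1

# Average conditional variance over the distribution of X
E_cond_var = sum(cond_var_Y(x) * f_X[x] for x in f_X)
```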