Bivariate Normal Distribution

dF(x, y) = \frac{1}{2\pi\sigma_1\sigma_2\sqrt{1-\rho^2}}\,\exp\left(-\frac{x^2}{2\sigma_1^2}\right)\exp\left[-\frac{1}{2\sigma_2^2(1-\rho^2)}\left(y - \rho\,\frac{\sigma_2}{\sigma_1}\,x\right)^2\right] dx\,dy

= \frac{1}{2\pi\sigma_1\sigma_2\sqrt{1-\rho^2}}\,\exp\left[-\frac{1}{2(1-\rho^2)}\left(\frac{x^2}{\sigma_1^2} - \frac{2\rho xy}{\sigma_1\sigma_2} + \frac{y^2}{\sigma_2^2}\right)\right] dx\,dy
Shifting the origin to (µ₁, µ₂), we get

f_{XY}(x, y) = \frac{1}{2\pi\sigma_1\sigma_2\sqrt{1-\rho^2}}\,\exp\left[-\frac{1}{2(1-\rho^2)}\left\{\frac{(x-\mu_1)^2}{\sigma_1^2} - \frac{2\rho(x-\mu_1)(y-\mu_2)}{\sigma_1\sigma_2} + \frac{(y-\mu_2)^2}{\sigma_2^2}\right\}\right],
(-∞ < x < ∞, -∞ < y < ∞) ... (12.5)

where µ₁, µ₂, σ₁ (> 0), σ₂ (> 0) and ρ (-1 < ρ < 1) are the five parameters of the distribution.
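As a quick numerical sanity check of density (12.5), the following sketch (assuming NumPy is available; the parameter values are illustrative, not from the text) verifies that the density integrates to one:

```python
# Minimal check that the bivariate normal density (12.5) has total mass ~1.
import numpy as np

def bvn_pdf(x, y, mu1, mu2, s1, s2, rho):
    """Bivariate normal density (12.5) with parameters mu1, mu2, s1, s2, rho."""
    q = (((x - mu1) / s1) ** 2
         - 2 * rho * (x - mu1) * (y - mu2) / (s1 * s2)
         + ((y - mu2) / s2) ** 2)
    return np.exp(-q / (2 * (1 - rho ** 2))) / (
        2 * np.pi * s1 * s2 * np.sqrt(1 - rho ** 2))

# Riemann sum over a wide grid: the total probability mass should be close to 1.
xs = np.linspace(-8.0, 8.0, 801)
step = xs[1] - xs[0]
X, Y = np.meshgrid(xs, xs)
mass = bvn_pdf(X, Y, 0.0, 0.0, 1.0, 1.0, 0.6).sum() * step * step
print(round(mass, 4))
```

The grid extends to ±8 standard deviations, so the truncated tail mass is negligible.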
NORMAL CORRELATION SURFACE
The curve z = f(x, y), which is the equation of a surface in three dimensions, is called the 'Normal Correlation Surface'.
12.3.1. Moment Generating Function of Bivariate Normal Distribution.
Let (X, Y) ~ BVN(µ₁, µ₂, σ₁², σ₂², ρ). By def.,

M_{XY}(t_1, t_2) = E\left[e^{t_1X + t_2Y}\right] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \exp(t_1x + t_2y)\, f(x, y)\, dx\, dy

Put \frac{x-\mu_1}{\sigma_1} = u, \frac{y-\mu_2}{\sigma_2} = v, -∞ < (u, v) < ∞. Then

M_{XY}(t_1, t_2) = \frac{\exp(t_1\mu_1 + t_2\mu_2)}{2\pi\sqrt{1-\rho^2}} \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \exp\left[-\frac{1}{2(1-\rho^2)}\left\{(u^2 - 2\rho uv + v^2) - 2(1-\rho^2)(t_1\sigma_1 u + t_2\sigma_2 v)\right\}\right] du\, dv

We have

(u^2 - 2\rho uv + v^2) - 2(1-\rho^2)(t_1\sigma_1 u + t_2\sigma_2 v)
= \{(u - \rho v) - (1-\rho^2)t_1\sigma_1\}^2 + (1-\rho^2)\{(v - \rho t_1\sigma_1 - t_2\sigma_2)^2 - t_1^2\sigma_1^2 - t_2^2\sigma_2^2 - 2\rho t_1t_2\sigma_1\sigma_2\} ... (*)

By taking

(u - \rho v) - (1-\rho^2)t_1\sigma_1 = \omega(1-\rho^2)^{1/2} and v - \rho t_1\sigma_1 - t_2\sigma_2 = z ⇒ du\, dv = \sqrt{1-\rho^2}\, d\omega\, dz,

and using (*), we get

M_{X,Y}(t_1, t_2) = \exp\left[t_1\mu_1 + t_2\mu_2 + \tfrac{1}{2}\left(t_1^2\sigma_1^2 + t_2^2\sigma_2^2 + 2\rho t_1t_2\sigma_1\sigma_2\right)\right] \times \left[\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{-\omega^2/2}\, d\omega\right] \times \left[\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{-z^2/2}\, dz\right]

= \exp\left[t_1\mu_1 + t_2\mu_2 + \tfrac{1}{2}\left(t_1^2\sigma_1^2 + t_2^2\sigma_2^2 + 2\rho t_1t_2\sigma_1\sigma_2\right)\right] ... (12.6)
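The m.g.f. formula (12.6) can be checked by Monte Carlo simulation. A minimal sketch, assuming NumPy; the parameter and argument values are illustrative:

```python
# Monte Carlo check of the bivariate normal m.g.f. (12.6).
import numpy as np

mu1, mu2, s1, s2, rho = 1.0, -2.0, 1.0, 2.0, 0.5
cov = [[s1 ** 2, rho * s1 * s2], [rho * s1 * s2, s2 ** 2]]

rng = np.random.default_rng(0)
xy = rng.multivariate_normal([mu1, mu2], cov, size=500_000)

t1, t2 = 0.3, 0.2
est = np.exp(t1 * xy[:, 0] + t2 * xy[:, 1]).mean()   # sample E[exp(t1 X + t2 Y)]
exact = np.exp(t1 * mu1 + t2 * mu2
               + 0.5 * (t1 ** 2 * s1 ** 2 + t2 ** 2 * s2 ** 2
                        + 2 * rho * t1 * t2 * s1 * s2))  # formula (12.6)
print(est, exact)
```

The two printed values should agree to within Monte Carlo error.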
(a) In particular, if ρ = 0, then

f(x, y) = \frac{1}{\sigma_x\sqrt{2\pi}}\exp\left[-\frac{1}{2}\left(\frac{x-\mu_x}{\sigma_x}\right)^2\right] \times \frac{1}{\sigma_y\sqrt{2\pi}}\exp\left[-\frac{1}{2}\left(\frac{y-\mu_y}{\sigma_y}\right)^2\right]

⇒ f(x, y) = f_1(x) \cdot f_2(y)

⇒ X and Y are independent, with X ~ N(µ_x, σ_x²) and Y ~ N(µ_y, σ_y²).
Aliter. (X, Y) ~ BVN(µ₁, µ₂, σ₁², σ₂², ρ) ⇒

M_{X,Y}(t_1, t_2) = \exp\left\{t_1\mu_1 + t_2\mu_2 + \tfrac{1}{2}\left(t_1^2\sigma_1^2 + 2\rho t_1t_2\sigma_1\sigma_2 + t_2^2\sigma_2^2\right)\right\}

If ρ = 0, then

M_{X,Y}(t_1, t_2) = \exp\left\{t_1\mu_1 + \tfrac{1}{2}t_1^2\sigma_1^2\right\} \cdot \exp\left\{t_2\mu_2 + \tfrac{1}{2}t_2^2\sigma_2^2\right\} ... (*)
= M_X(t_1) \cdot M_Y(t_2)

[∵ If (X, Y) ~ BVN(µ₁, µ₂, σ₁², σ₂², ρ), then the marginal p.d.f.'s of X and Y are normal, i.e., X ~ N(µ₁, σ₁²) and Y ~ N(µ₂, σ₂²).]

(*) ⇒ X and Y are independent.

(b) Conversely, if X and Y are independent, then ρ = 0 [cf. Theorem 10.2].
Theorem 12.2. (X, Y) possesses a bivariate normal distribution if and only if every linear combination of X and Y, viz., aX + bY, a ≠ 0, b ≠ 0, is a normal variate.

Proof. (a) Let (X, Y) ~ BVN(µ₁, µ₂, σ₁², σ₂², ρ); then we shall prove that aX + bY, a ≠ 0, b ≠ 0, is a normal variate. Since (X, Y) has a bivariate normal distribution, we have

M_{X,Y}(t_1, t_2) = E\left(e^{t_1X + t_2Y}\right) = \exp\left[t_1\mu_1 + t_2\mu_2 + \tfrac{1}{2}\left(t_1^2\sigma_1^2 + 2\rho t_1t_2\sigma_1\sigma_2 + t_2^2\sigma_2^2\right)\right] ... (**)

Then the m.g.f. of Z = aX + bY is given by:

M_Z(t) = E\left(e^{tZ}\right) = E\left[e^{t(aX + bY)}\right] = E\left(e^{atX + btY}\right)
= \exp\left\{t(a\mu_1 + b\mu_2) + \tfrac{t^2}{2}\left(a^2\sigma_1^2 + 2\rho ab\sigma_1\sigma_2 + b^2\sigma_2^2\right)\right\}, [Taking t₁ = at, t₂ = bt in (**)]

which is the m.g.f. of a normal distribution with parameters

\mu = a\mu_1 + b\mu_2, \quad \sigma^2 = a^2\sigma_1^2 + 2\rho ab\sigma_1\sigma_2 + b^2\sigma_2^2. ... (***)

Hence, by the uniqueness theorem of m.g.f.'s,

Z = aX + bY ~ N(µ, σ²), where µ and σ² are given in (***).

(b) Conversely, let Z = aX + bY, a ≠ 0, b ≠ 0, be a normal variate. Then we have to prove that (X, Y) has a bivariate normal distribution. Let Z = aX + bY ~ N(µ, σ²), where

\mu = E(Z) = E(aX + bY) = a\mu_X + b\mu_Y
\sigma^2 = \operatorname{Var}(Z) = \operatorname{Var}(aX + bY) = a^2\sigma_X^2 + 2ab\rho\sigma_X\sigma_Y + b^2\sigma_Y^2

∴ M_Z(t) = \exp\left[t\mu + \tfrac{t^2\sigma^2}{2}\right]
= \exp\left[t(a\mu_X + b\mu_Y) + \tfrac{t^2}{2}\left(a^2\sigma_X^2 + 2ab\rho\sigma_X\sigma_Y + b^2\sigma_Y^2\right)\right]
= \exp\left[t_1\mu_X + t_2\mu_Y + \tfrac{1}{2}\left(t_1^2\sigma_X^2 + 2\rho t_1t_2\sigma_X\sigma_Y + t_2^2\sigma_Y^2\right)\right], ... (****)

where t₁ = at and t₂ = bt. (****) is the m.g.f. of a bivariate normal distribution; hence, by the uniqueness theorem of m.g.f.'s, (X, Y) has a bivariate normal distribution.
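The mean and variance in (***) can be checked by simulation. A sketch assuming NumPy; the parameter values and the coefficients a, b are illustrative:

```python
# Simulation check of Theorem 12.2(a): Z = aX + bY has the moments in (***).
import numpy as np

mu1, mu2, s1, s2, rho = 2.0, -1.0, 1.5, 0.5, -0.4
a, b = 3.0, 2.0
cov = [[s1 ** 2, rho * s1 * s2], [rho * s1 * s2, s2 ** 2]]

rng = np.random.default_rng(1)
xy = rng.multivariate_normal([mu1, mu2], cov, size=400_000)
z = a * xy[:, 0] + b * xy[:, 1]

mu = a * mu1 + b * mu2                                           # mean in (***)
var = a ** 2 * s1 ** 2 + 2 * rho * a * b * s1 * s2 + b ** 2 * s2 ** 2  # variance in (***)
print(z.mean(), mu)
print(z.var(), var)
```

Sample mean and variance of Z should match the closed forms to within Monte Carlo error.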
FUNDAMENTALS OF MATHEMATICAL STATISTICS
The marginal p.d.f. of X is

f_X(x) = \int_{-\infty}^{\infty} f_{XY}(x, y)\, dy

Putting \frac{y-\mu_2}{\sigma_2} = u, so that dy = σ₂ du, we get

f_X(x) = \frac{1}{2\pi\sigma_1\sigma_2\sqrt{1-\rho^2}} \int_{-\infty}^{\infty} \exp\left[-\frac{1}{2(1-\rho^2)}\left\{\left(\frac{x-\mu_1}{\sigma_1}\right)^2 - 2\rho u\left(\frac{x-\mu_1}{\sigma_1}\right) + u^2\right\}\right] \sigma_2\, du

= \frac{1}{2\pi\sigma_1\sqrt{1-\rho^2}}\, \exp\left[-\frac{1}{2}\left(\frac{x-\mu_1}{\sigma_1}\right)^2\right] \int_{-\infty}^{\infty} \exp\left[-\frac{1}{2(1-\rho^2)}\left\{u - \rho\left(\frac{x-\mu_1}{\sigma_1}\right)\right\}^2\right] du

Put \frac{1}{\sqrt{1-\rho^2}}\left[u - \rho\left(\frac{x-\mu_1}{\sigma_1}\right)\right] = t, then du = \sqrt{1-\rho^2}\, dt, and the integral reduces to \sqrt{1-\rho^2}\int_{-\infty}^{\infty} e^{-t^2/2}\, dt = \sqrt{2\pi}\sqrt{1-\rho^2}. Hence

f_X(x) = \frac{1}{\sigma_1\sqrt{2\pi}}\, \exp\left[-\frac{1}{2}\left(\frac{x-\mu_1}{\sigma_1}\right)^2\right] ... (12.7)

Similarly, we shall get

f_Y(y) = \int_{-\infty}^{\infty} f_{XY}(x, y)\, dx = \frac{1}{\sigma_2\sqrt{2\pi}}\, \exp\left[-\frac{1}{2}\left(\frac{y-\mu_2}{\sigma_2}\right)^2\right] ... (12.7a)

Hence

X ~ N(µ₁, σ₁²) and Y ~ N(µ₂, σ₂²) ... (12.7b)
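Result (12.7) can be checked numerically by integrating the joint density over y at a few fixed values of x. A sketch assuming NumPy; the parameter values are illustrative:

```python
# Check that integrating the joint density over y recovers the N(mu1, s1^2)
# marginal density (12.7).
import numpy as np

mu1, mu2, s1, s2, rho = 0.5, -1.0, 1.0, 2.0, 0.7

def bvn_pdf(x, y):
    q = (((x - mu1) / s1) ** 2
         - 2 * rho * (x - mu1) * (y - mu2) / (s1 * s2)
         + ((y - mu2) / s2) ** 2)
    return np.exp(-q / (2 * (1 - rho ** 2))) / (
        2 * np.pi * s1 * s2 * np.sqrt(1 - rho ** 2))

ys = np.linspace(mu2 - 16.0, mu2 + 16.0, 8001)
dy = ys[1] - ys[0]
errs = []
for x in (-1.0, 0.5, 2.0):
    fx = bvn_pdf(x, ys).sum() * dy            # marginal of X via integration
    target = np.exp(-0.5 * ((x - mu1) / s1) ** 2) / (s1 * np.sqrt(2 * np.pi))
    errs.append(abs(fx - target))
    print(x, fx, target)
```

At each test point the integrated value should agree with the univariate normal density.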
Remark. We have proved that if (X, Y) ~ BVN(µ₁, µ₂, σ₁², σ₂², ρ), then the marginal p.d.f.'s of X and Y are also normal. However, the converse is not true, i.e., we may have a joint p.d.f. f(x, y) of (X, Y) which is not normal but whose marginal p.d.f.'s are still normal, as discussed in the following illustration. Consider the joint distribution of X and Y given by:
For the conditional distribution of X for fixed Y, write f_{X|Y}(x | y) = f_{XY}(x, y)/f_Y(y). The exponent of this ratio is

-\frac{1}{2(1-\rho^2)}\left\{\left(\frac{x-\mu_1}{\sigma_1}\right)^2 - 2\rho\left(\frac{x-\mu_1}{\sigma_1}\right)\left(\frac{y-\mu_2}{\sigma_2}\right) + \left(\frac{y-\mu_2}{\sigma_2}\right)^2\left(1 - (1-\rho^2)\right)\right\}

so that

f_{X|Y}(x | y) = \frac{1}{\sigma_1\sqrt{2\pi}\sqrt{1-\rho^2}}\, \exp\left[-\frac{1}{2(1-\rho^2)\sigma_1^2}\left\{(x-\mu_1) - \rho\frac{\sigma_1}{\sigma_2}(y-\mu_2)\right\}^2\right]

= \frac{1}{\sigma_1\sqrt{2\pi}\sqrt{1-\rho^2}}\, \exp\left[-\frac{1}{2(1-\rho^2)\sigma_1^2}\left\{x - \left(\mu_1 + \rho\frac{\sigma_1}{\sigma_2}(y-\mu_2)\right)\right\}^2\right],

which is the probability function of a univariate normal distribution with mean and variance given by:

E(X | Y = y) = \mu_1 + \rho\frac{\sigma_1}{\sigma_2}(y - \mu_2) and V(X | Y = y) = \sigma_1^2(1-\rho^2)

Hence the conditional distribution of X for fixed Y is also normal, given by:

(X | Y = y) ~ N\left[\mu_1 + \rho\frac{\sigma_1}{\sigma_2}(y - \mu_2),\; \sigma_1^2(1-\rho^2)\right] ... (12.7d)
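The conditional mean and variance in (12.7d) can be verified by normalising the joint density along a slice at fixed y. A sketch assuming NumPy; the parameter values and the slice y₀ = 4.5 are illustrative:

```python
# Check (12.7d): at fixed y, the conditional density of X has mean
# mu1 + rho*(s1/s2)*(y - mu2) and variance s1^2*(1 - rho^2).
import numpy as np

mu1, mu2, s1, s2, rho = 1.0, 3.0, 2.0, 1.0, 0.6

def bvn_pdf(x, y):
    q = (((x - mu1) / s1) ** 2
         - 2 * rho * (x - mu1) * (y - mu2) / (s1 * s2)
         + ((y - mu2) / s2) ** 2)
    return np.exp(-q / (2 * (1 - rho ** 2))) / (
        2 * np.pi * s1 * s2 * np.sqrt(1 - rho ** 2))

y0 = 4.5
xs = np.linspace(mu1 - 20.0, mu1 + 20.0, 16001)
dx = xs[1] - xs[0]
f = bvn_pdf(xs, y0)
f = f / (f.sum() * dx)                        # normalise to get f(x | y0)
cond_mean = (xs * f).sum() * dx
cond_var = ((xs - cond_mean) ** 2 * f).sum() * dx

print(cond_mean, mu1 + rho * (s1 / s2) * (y0 - mu2))   # formula gives 2.8
print(cond_var, s1 ** 2 * (1 - rho ** 2))              # formula gives 2.56
```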
cr2. . f f · d X is :
Similarly the conditional distribution o ran om variables Y or a ixe
f d
f XY (X, y)
fy I X (y
t I X) = f X (X) -
=
1 ex [- 1
-. r--::; P 2 (1 - p2) cri
{ (y - µ,) - p cri (x - µ1)
- 01
}2 ],- oo < y < oo
0'2 -fin'\/ 1 - P""
Thus the conditional distribution of Y for fixed X is also normal, given by :
It is apparent from the above results that the array means are collinear, i.e., the regression equations are linear (involving linear functions of the independent variables), and the array variances are constant (i.e., free from the independent variable). We express this by saying that the regressions of Y on X and of X on Y are linear and homoscedastic.
For the moments µ_{r,s} = E(X^r Y^s) of the standardised bivariate normal distribution, (12.6) gives

M(t_1, t_2) = \exp\left[\tfrac{1}{2}\left(t_1^2 + 2\rho t_1t_2 + t_2^2\right)\right],

and, since \frac{\partial M}{\partial t_1} = (t_1 + \rho t_2)M and \frac{\partial M}{\partial t_2} = (\rho t_1 + t_2)M, M satisfies

\frac{\partial^2 M}{\partial t_1\,\partial t_2} = \rho M + \rho t_1\frac{\partial M}{\partial t_1} + \rho t_2\frac{\partial M}{\partial t_2} + (1-\rho^2)\, t_1 t_2\, M ... (*)

But M = \sum_{r=0}^{\infty}\sum_{s=0}^{\infty} \mu_{r,s}\, \frac{t_1^r t_2^s}{r!\, s!}

∴ (*) gives

\sum_{r=1}^{\infty}\sum_{s=1}^{\infty} \mu_{r,s}\, \frac{t_1^{r-1} t_2^{s-1}}{(r-1)!\,(s-1)!} = \rho\sum_{r}\sum_{s} \mu_{r,s}\frac{t_1^r t_2^s}{r!\,s!} + \rho\sum_{r}\sum_{s} r\,\mu_{r,s}\frac{t_1^r t_2^s}{r!\,s!} + \rho\sum_{r}\sum_{s} s\,\mu_{r,s}\frac{t_1^r t_2^s}{r!\,s!} + (1-\rho^2)\sum_{r}\sum_{s} \mu_{r,s}\frac{t_1^{r+1} t_2^{s+1}}{r!\,s!}

Equating the coefficients of \frac{t_1^{r-1} t_2^{s-1}}{(r-1)!\,(s-1)!} on both sides, we get

\mu_{r,s} = \rho(r-1)\mu_{r-1,s-1} + \rho(s-1)\mu_{r-1,s-1} + \rho\,\mu_{r-1,s-1} + (1-\rho^2)(r-1)(s-1)\mu_{r-2,s-2}

⇒ \mu_{r,s} = (r+s-1)\,\rho\,\mu_{r-1,s-1} + (r-1)(s-1)(1-\rho^2)\,\mu_{r-2,s-2}

In particular,

\mu_{3,1} = 3\rho\,\mu_{2,0} + 0 = 3\rho\sigma_1^2 = 3\rho (∵ σ₁² = 1)
\mu_{2,2} = 3\rho\,\mu_{1,1} + (1-\rho^2)\,\mu_{0,0} = 3\rho^2 + (1-\rho^2)\cdot 1 = 1 + 2\rho^2

Also \mu_{0,3} = \mu_{3,0} = 0,
\mu_{1,2} = 2\rho\,\mu_{0,1} + 0 = 0,
\mu_{2,3} = 4\rho\,\mu_{1,2} + 1\cdot 2\,(1-\rho^2)\,\mu_{0,1} = 0.
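The recurrence µ_{r,s} = (r+s-1)ρ µ_{r-1,s-1} + (r-1)(s-1)(1-ρ²) µ_{r-2,s-2} is easy to implement directly; the base cases are the univariate standard normal moments. A sketch (the helper names are ours, not the book's):

```python
# Moments of the standardised bivariate normal via the recurrence above.

def std_moment(n):
    """E[Z^n] for Z ~ N(0, 1): zero for odd n, (n-1)!! for even n."""
    if n % 2:
        return 0.0
    out = 1.0
    for k in range(n - 1, 0, -2):
        out *= k
    return out

def mu(r, s, rho):
    """E[X^r Y^s] for the standardised bivariate normal with correlation rho."""
    if r < 0 or s < 0:
        return 0.0
    if r == 0:
        return std_moment(s)
    if s == 0:
        return std_moment(r)
    return ((r + s - 1) * rho * mu(r - 1, s - 1, rho)
            + (r - 1) * (s - 1) * (1 - rho ** 2) * mu(r - 2, s - 2, rho))

rho = 0.5
print(mu(3, 1, rho))   # 3*rho = 1.5
print(mu(2, 2, rho))   # 1 + 2*rho^2 = 1.5
print(mu(1, 2, rho))   # 0.0
print(mu(2, 3, rho))   # 0.0
```

The printed values reproduce the particular cases derived in the text.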
For (X₁, X₂) ~ BVN(0, 0, 1, 1, ρ), we have

M_{X_1,X_2}(t_1, t_2) = \exp\left[\tfrac{1}{2}\left(t_1^2 + 2\rho t_1t_2 + t_2^2\right)\right] [cf. (12.6)]

Now

\rho(X_1^2, X_2^2) = \frac{E(X_1^2X_2^2) - E(X_1^2)\,E(X_2^2)}{\sqrt{E(X_1^4) - \{E(X_1^2)\}^2}\;\sqrt{E(X_2^4) - \{E(X_2^2)\}^2}}

where E(X_1^2X_2^2) = coefficient of \frac{t_1^2t_2^2}{2!\,2!} in M(t_1, t_2) = 2\rho^2 + 1, and E(X_1^4) = coefficient of \frac{t_1^4}{4!} in M(t_1, t_2) = 3. Since E(X_1^2) = E(X_2^2) = 1,

\rho(X_1^2, X_2^2) = \frac{(2\rho^2 + 1) - 1}{\sqrt{3-1}\,\sqrt{3-1}} = \rho^2.
Let (X, Y) ~ BVN(0, 0, σ₁², σ₂², ρ), so that

f(x, y) = \frac{1}{2\pi\sigma_1\sigma_2\sqrt{1-\rho^2}}\, \exp\left[-\frac{1}{2(1-\rho^2)}\left(\frac{x^2}{\sigma_1^2} - \frac{2\rho xy}{\sigma_1\sigma_2} + \frac{y^2}{\sigma_2^2}\right)\right], -∞ < (x, y) < ∞

Also u = \frac{x}{\sigma_1} + \frac{y}{\sigma_2}, v = \frac{x}{\sigma_1} - \frac{y}{\sigma_2}, so that \frac{x}{\sigma_1} = \frac{u+v}{2}, \frac{y}{\sigma_2} = \frac{u-v}{2} and dx\,dy = \frac{\sigma_1\sigma_2}{2}\, du\, dv. Hence the joint p.d.f. of (U, V) is

g(u, v) = \frac{1}{2\pi\cdot 2\sqrt{1-\rho^2}}\, \exp\left[-\frac{(1-\rho)u^2 + (1+\rho)v^2}{4(1-\rho^2)}\right]

= \frac{1}{2\pi\sqrt{2(1-\rho)}\sqrt{2(1+\rho)}}\, \exp\left[-\frac{u^2}{4(1+\rho)} - \frac{v^2}{4(1-\rho)}\right]

= \left[\frac{1}{\sqrt{2\pi}\sqrt{2(1+\rho)}}\exp\left\{-\frac{u^2}{2\cdot 2(1+\rho)}\right\}\right] \times \left[\frac{1}{\sqrt{2\pi}\sqrt{2(1-\rho)}}\exp\left\{-\frac{v^2}{2\cdot 2(1-\rho)}\right\}\right] ... (*)

i.e., g(u, v) = f_1(u)\, f_2(v), where

f_1(u) = \frac{1}{\sqrt{2\pi}\sqrt{2(1+\rho)}}\exp\left\{-\frac{u^2}{4(1+\rho)}\right\} and f_2(v) = \frac{1}{\sqrt{2\pi}\sqrt{2(1-\rho)}}\exp\left\{-\frac{v^2}{4(1-\rho)}\right\}

(*) ⇒ U and V are independently distributed, with U ~ N[0, 2(1+ρ)] and V ~ N[0, 2(1-ρ)].

Aliter. Find the joint m.g.f. of U and V, viz.,

M(t_1, t_2) = E\left(e^{t_1U + t_2V}\right) = E\left[e^{X(t_1+t_2)/\sigma_1 + Y(t_1-t_2)/\sigma_2}\right]
Let (X, Y) ~ BVN(0, 0, 1, 1, ρ), so that

f(x, y) = \frac{1}{2\pi\sqrt{1-\rho^2}}\, \exp\left[-\frac{1}{2(1-\rho^2)}\left(x^2 - 2\rho xy + y^2\right)\right]

Let u = x + y and v = x - y, so that x = \frac{u+v}{2}, y = \frac{u-v}{2} and

J = \frac{\partial(x, y)}{\partial(u, v)} = \begin{vmatrix} 1/2 & 1/2 \\ 1/2 & -1/2 \end{vmatrix} = -\frac{1}{2}

The joint p.d.f. g(u, v) of U and V is given by:

g(u, v) = C \exp\left[-\frac{1}{2(1-\rho^2)\cdot 4}\left\{2(u^2 + v^2) - 2\rho(u^2 - v^2)\right\}\right], where C = \frac{1}{4\pi\sqrt{1-\rho^2}}

= \left[C_1 \exp\left\{-\frac{u^2}{4(1+\rho)}\right\}\right]\left[C_2 \exp\left\{-\frac{v^2}{4(1-\rho)}\right\}\right] = [g_1(u)]\,[g_2(v)], (say).

Hence U and V are independently distributed.
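A simulation sketch of this result, assuming NumPy (ρ = 0.3 is an illustrative value): U = X + Y and V = X - Y have variances 2(1+ρ) and 2(1-ρ) and zero covariance, which under joint normality is equivalent to independence.

```python
# Check that U = X + Y and V = X - Y are uncorrelated with the stated variances.
import numpy as np

rho = 0.3
rng = np.random.default_rng(2)
xy = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]],
                             size=500_000)
u = xy[:, 0] + xy[:, 1]
v = xy[:, 0] - xy[:, 1]

print(u.var(), 2 * (1 + rho))      # both ~2.6
print(v.var(), 2 * (1 - rho))      # both ~1.4
print(np.cov(u, v)[0, 1])          # ~0
```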
With (X, Y) ~ BVN(0, 0, 1, 1, ρ) and writing Q = \frac{1}{1-\rho^2}\left(x^2 - 2\rho xy + y^2\right), so that f(x, y) = \frac{1}{2\pi\sqrt{1-\rho^2}}\, e^{-Q/2}, the m.g.f. of Q is

M_Q(t) = E\left(e^{tQ}\right) = \frac{1}{2\pi\sqrt{1-\rho^2}} \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \exp\left(tQ - \frac{Q}{2}\right) dx\, dy

= \frac{1}{2\pi\sqrt{1-\rho^2}} \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \exp\left[-\frac{Q}{2}(1 - 2t)\right] dx\, dy

Put \sqrt{1-2t}\, x = u and \sqrt{1-2t}\, y = v ⇒ dx = \frac{du}{\sqrt{1-2t}} and dy = \frac{dv}{\sqrt{1-2t}}

Also Q = \frac{1}{1-\rho^2}\left[x^2 - 2\rho xy + y^2\right] = \frac{1}{(1-\rho^2)(1-2t)}\left[u^2 - 2\rho uv + v^2\right]

∴ M_Q(t) = \frac{1}{2\pi\sqrt{1-\rho^2}\,(1-2t)} \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \exp\left[-\frac{1}{2(1-\rho^2)}\left(u^2 - 2\rho uv + v^2\right)\right] du\, dv

= \frac{1}{1-2t}\cdot 1 = (1-2t)^{-1},

which is the m.g.f. of a chi-square (χ²) variate with n (= 2) degrees of freedom. (For the chi-square distribution, see Chapter 15.)
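A simulation sketch of this fact, assuming NumPy (ρ = 0.5 and t = 0.2 are illustrative): Q should have the chi-square(2) mean of 2 and m.g.f. (1-2t)⁻¹.

```python
# Check that Q = (X^2 - 2*rho*X*Y + Y^2)/(1 - rho^2) behaves as chi-square(2).
import numpy as np

rho = 0.5
rng = np.random.default_rng(3)
xy = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]],
                             size=400_000)
x, y = xy[:, 0], xy[:, 1]
q = (x ** 2 - 2 * rho * x * y + y ** 2) / (1 - rho ** 2)

t = 0.2
print(q.mean())                                # ~2, the chi-square(2) mean
print(np.exp(t * q).mean(), 1 / (1 - 2 * t))   # both ~5/3
```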
Example 12.5. Let X and Y be independent standard normal variates. Obtain the m.g.f. of XY.

Solution. We have, by def.,

M_{XY}(t) = E\left(e^{tXY}\right) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \exp\left[-\frac{1}{2}\left(x^2 - 2txy + y^2\right)\right] dx\, dy

= \frac{1}{2\pi}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \exp\left[-\frac{1}{2(1-t^2)}\left\{\frac{x^2}{1/(1-t^2)} - \frac{2t\,xy}{(1/\sqrt{1-t^2})(1/\sqrt{1-t^2})} + \frac{y^2}{1/(1-t^2)}\right\}\right] dx\, dy ... (*)
ADDITIONAL TOPICS ON CORRELATION AND REGRESSION
Since

\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \frac{1}{2\pi\sigma_1\sigma_2\sqrt{1-\rho^2}}\, e^{-\frac{1}{2(1-\rho^2)}\left\{\frac{x^2}{\sigma_1^2} - \frac{2\rho xy}{\sigma_1\sigma_2} + \frac{y^2}{\sigma_2^2}\right\}}\, dx\, dy = 1

⇒ \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{-\frac{1}{2(1-\rho^2)}\left\{\frac{x^2}{\sigma_1^2} - \frac{2\rho xy}{\sigma_1\sigma_2} + \frac{y^2}{\sigma_2^2}\right\}}\, dx\, dy = 2\pi\sigma_1\sigma_2\sqrt{1-\rho^2} ... (**)

Comparing (*) and (**) with \sigma_1^2 = \sigma_2^2 = \frac{1}{1-t^2} and ρ = t, we get

M_{XY}(t) = \frac{1}{2\pi}\cdot 2\pi\cdot\frac{1}{\sqrt{1-t^2}}\cdot\frac{1}{\sqrt{1-t^2}}\cdot\sqrt{1-t^2} ⇒ M_{XY}(t) = (1-t^2)^{-1/2}; -1 < t < 1.
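The result M_{XY}(t) = (1-t²)^{-1/2} can be checked by Monte Carlo. A sketch assuming NumPy; t = 0.3 is an illustrative value inside (-1, 1):

```python
# Monte Carlo check of Example 12.5: E[exp(t X Y)] = (1 - t^2)^(-1/2).
import numpy as np

rng = np.random.default_rng(4)
n = 1_000_000
x = rng.standard_normal(n)
y = rng.standard_normal(n)

t = 0.3
est = np.exp(t * x * y).mean()
exact = (1 - t ** 2) ** -0.5
print(est, exact)
```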
Example 12.6. Let X and Y have a bivariate normal distribution with parameters:
µ_X = 5, µ_Y = 10, σ_X² = 1, σ_Y² = 25 and Corr(X, Y) = ρ.
(a) If ρ > 0, find ρ when P(4 < Y < 16 | X = 5) = 0.954.
(b) If ρ = 0, find P(X + Y ≤ 16).

Solution. Since (X, Y) ~ BVN(µ_X, µ_Y, σ_X², σ_Y², ρ), the conditional distribution of Y given X = x is also normal:

(Y | X = x) ~ N\left[\mu = \mu_Y + \rho\frac{\sigma_Y}{\sigma_X}(x - \mu_X),\; \sigma^2 = \sigma_Y^2(1-\rho^2)\right]

∴ (Y | X = 5) ~ N\left[\mu = 10 + \rho\cdot\tfrac{5}{1}\,(5 - 5),\; \sigma^2 = 25(1-\rho^2)\right] = N\left[\mu = 10,\; \sigma^2 = 25(1-\rho^2)\right]

(a) We want ρ so that P(4 < Y < 16 | X = 5) = 0.954 ... (*)
But we know that if Z ~ N(0, 1), then P(-2 < Z < 2) = 0.954 ... (**)
Comparing (*) and (**): the endpoints 4 and 16 must lie 2 standard deviations from the mean 10, i.e.,

\frac{16 - 10}{\sqrt{25(1-\rho^2)}} = 2 ⇒ 25(1-\rho^2) = 9 ⇒ \rho^2 = \frac{16}{25} ⇒ \rho = \frac{4}{5} (since ρ > 0).

(b) If ρ = 0, X and Y are independent, so X + Y ~ N(15, 26) and

P(X + Y \le 16) = P\left(Z \le \frac{16 - 15}{\sqrt{26}}\right) = \Phi\left(\frac{1}{\sqrt{26}}\right),

where Φ(z) = P(Z ≤ z) is the distribution function of the standard normal variate.
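A numeric companion to Example 12.6, using only the Python standard library (the helper Phi is our own name for the standard normal distribution function):

```python
# Numeric companion to Example 12.6.
from math import erf, sqrt

def Phi(z):
    """Standard normal distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# (a) P(4 < Y < 16 | X = 5) = 0.954 means 16 - 10 = 2 sigma, so sigma = 3:
# 25*(1 - rho^2) = 9  =>  rho = 4/5 for rho > 0.
rho = sqrt(1.0 - 9.0 / 25.0)
print(rho)                              # 0.8

# (b) rho = 0: X + Y ~ N(15, 26), so P(X + Y <= 16) = Phi(1/sqrt(26)).
p = Phi((16.0 - 15.0) / sqrt(26.0))
print(p)                                # ~0.578
```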