
12-3. BIVARIATE NORMAL DISTRIBUTION

The bivariate normal distribution is a generalization of the normal distribution for a single variate. Let X and Y be two normally correlated variables with correlation coefficient ρ and E(X) = µ1, Var(X) = σ1²; E(Y) = µ2, Var(Y) = σ2². In deriving the bivariate normal distribution we make the following three assumptions.
(i) The regression of Y on X is linear. Since the mean of each array lies on the line of regression Y = ρ(σ2/σ1)X, the mean or expected value of Y is ρ(σ2/σ1)X for different values of X.
(ii) The arrays are homoscedastic, i.e., the variance in each array is the same. The common variance of estimate of Y in each array is then given by σ2²(1 − ρ²), ρ being the correlation coefficient between X and Y, and is independent of X.
(iii) The distribution of Y in the different arrays is normal.

ADDITIONAL TOPICS ON CORRELATION AND REGRESSION 12-7

Suppose that one of the variates, say X, is distributed normally with mean 0 and standard deviation σ1, so that the probability that a random value of X will fall in the small interval dx is

g(x)\, dx = \frac{1}{\sigma_1\sqrt{2\pi}} \exp\left(-\frac{x^2}{2\sigma_1^2}\right) dx

The probability that a value of Y, taken at random in an assigned vertical array, will fall in the interval dy is

h(y \mid x)\, dy = \frac{1}{\sigma_2\sqrt{2\pi(1-\rho^2)}} \exp\left\{-\frac{\left(y - \rho\frac{\sigma_2}{\sigma_1}x\right)^2}{2\sigma_2^2(1-\rho^2)}\right\} dy

The joint probability differential of X and Y is given by

dP(x, y) = g(x)\, h(y \mid x)\, dx\, dy

= \frac{1}{2\pi\sigma_1\sigma_2\sqrt{1-\rho^2}} \exp\left(-\frac{x^2}{2\sigma_1^2}\right) \exp\left[-\frac{1}{2\sigma_2^2(1-\rho^2)}\left(y - \frac{\rho\sigma_2}{\sigma_1}x\right)^2\right] dx\, dy

= \frac{1}{2\pi\sigma_1\sigma_2\sqrt{1-\rho^2}} \exp\left\{-\frac{1}{2(1-\rho^2)}\left(\frac{x^2}{\sigma_1^2} - \frac{2\rho xy}{\sigma_1\sigma_2} + \frac{y^2}{\sigma_2^2}\right)\right\} dx\, dy

Shifting the origin to (µ1, µ2), we get

f_{XY}(x, y) = \frac{1}{2\pi\sigma_1\sigma_2\sqrt{1-\rho^2}} \exp\left[-\frac{1}{2(1-\rho^2)}\left\{\frac{(x-\mu_1)^2}{\sigma_1^2} - \frac{2\rho(x-\mu_1)(y-\mu_2)}{\sigma_1\sigma_2} + \frac{(y-\mu_2)^2}{\sigma_2^2}\right\}\right]

(−∞ < x < ∞, −∞ < y < ∞) ... (12·5)

where µ1, µ2, σ1 (> 0), σ2 (> 0) and ρ (−1 < ρ < 1) are the five parameters of the distribution.
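As a numerical cross-check (not part of the original text), the density (12·5) can be coded directly from the formula and compared with SciPy's multivariate normal; the parameter values below are arbitrary illustrations, not values from the text.

```python
import numpy as np
from scipy.stats import multivariate_normal

def bvn_pdf(x, y, mu1, mu2, s1, s2, rho):
    """Density (12.5), written term by term from the formula."""
    z = ((x - mu1)**2 / s1**2
         - 2 * rho * (x - mu1) * (y - mu2) / (s1 * s2)
         + (y - mu2)**2 / s2**2)
    norm = 2 * np.pi * s1 * s2 * np.sqrt(1 - rho**2)
    return np.exp(-z / (2 * (1 - rho**2))) / norm

# Compare with SciPy's multivariate normal having the corresponding
# mean vector and covariance matrix.
mu1, mu2, s1, s2, rho = 1.0, -2.0, 1.5, 0.7, 0.6
cov = [[s1**2, rho*s1*s2], [rho*s1*s2, s2**2]]
mv = multivariate_normal(mean=[mu1, mu2], cov=cov)
for (x, y) in [(0.0, 0.0), (1.0, -2.0), (2.3, -1.1)]:
    assert abs(bvn_pdf(x, y, mu1, mu2, s1, s2, rho) - mv.pdf([x, y])) < 1e-12
```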
NORMAL CORRELATION SURFACE

[Figure: the normal correlation surface z = f(x, y)]

(12·5) gives the density function of a bivariate normal distribution. The variables X and Y are said to be normally correlated, and the surface z = f(x, y) is known as the normal correlation surface. The nature of the normal correlation surface is indicated in the diagram above.
Remarks 1. The vector (X, Y)′ following the joint p.d.f. f(x, y) as given in (12·5) will be abbreviated as (X, Y) ~ N(µ1, µ2, σ1², σ2², ρ) or BVN(µ1, µ2, σ1², σ2², ρ). If in particular µ1 = µ2 = 0 and σ1 = σ2 = 1, then (X, Y) ~ N(0, 0, 1, 1, ρ) or BVN(0, 0, 1, 1, ρ).

12-8 FUNDAMENTALS OF MATHEMATICAL STATISTICS

2. The curve z = f(x, y), which is the equation of a surface in three dimensions, is called the 'Normal Correlation Surface'.
12-3-1. Moment Generating Function of Bivariate Normal Distribution.
Let (X, Y) ~ BVN(µ1, µ2, σ1², σ2², ρ). By def.,

M_{XY}(t_1, t_2) = E\left[e^{t_1 X + t_2 Y}\right] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \exp(t_1 x + t_2 y)\, f(x, y)\, dx\, dy

Put (x − µ1)/σ1 = u, (y − µ2)/σ2 = v, −∞ < (u, v) < ∞. Then

M_{XY}(t_1, t_2) = \frac{\exp(t_1\mu_1 + t_2\mu_2)}{2\pi\sqrt{1-\rho^2}} \iint_{u,v} \exp\left[t_1\sigma_1 u + t_2\sigma_2 v - \frac{1}{2(1-\rho^2)}\{u^2 - 2\rho uv + v^2\}\right] du\, dv

= \frac{\exp(t_1\mu_1 + t_2\mu_2)}{2\pi\sqrt{1-\rho^2}} \iint_{u,v} \exp\left[-\frac{1}{2(1-\rho^2)}\left\{(u^2 - 2\rho uv + v^2) - 2(1-\rho^2)(t_1\sigma_1 u + t_2\sigma_2 v)\right\}\right] du\, dv

We have

(u^2 - 2\rho uv + v^2) - 2(1-\rho^2)(t_1\sigma_1 u + t_2\sigma_2 v) = \{(u - \rho v) - (1-\rho^2)t_1\sigma_1\}^2 + (1-\rho^2)\{(v - \rho t_1\sigma_1 - t_2\sigma_2)^2 - t_1^2\sigma_1^2 - t_2^2\sigma_2^2 - 2\rho t_1 t_2\sigma_1\sigma_2\} ... (*)

By taking

\frac{u - \rho v - (1-\rho^2)t_1\sigma_1}{\sqrt{1-\rho^2}} = \omega \quad\text{and}\quad v - \rho t_1\sigma_1 - t_2\sigma_2 = z \;\Rightarrow\; du\, dv = \sqrt{1-\rho^2}\, d\omega\, dz

and using (*), we get

M_{X,Y}(t_1, t_2) = \exp\left[t_1\mu_1 + t_2\mu_2 + \tfrac{1}{2}(t_1^2\sigma_1^2 + t_2^2\sigma_2^2 + 2\rho t_1 t_2\sigma_1\sigma_2)\right] \times \left[\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{-\omega^2/2}\, d\omega\right] \times \left[\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{-z^2/2}\, dz\right]

= \exp\left[t_1\mu_1 + t_2\mu_2 + \tfrac{1}{2}(t_1^2\sigma_1^2 + t_2^2\sigma_2^2 + 2\rho t_1 t_2\sigma_1\sigma_2)\right] ... (12·6)

In particular, if (X, Y) ~ BVN(0, 0, 1, 1, ρ), then

M_{X,Y}(t_1, t_2) = \exp\left[\tfrac{1}{2}(t_1^2 + 2\rho t_1 t_2 + t_2^2)\right] ... (12·6a)
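A quick simulation sketch (an illustration added here, with arbitrary parameter values) compares the closed-form m.g.f. (12·6) against a Monte Carlo estimate of E[exp(t1 X + t2 Y)] from simulated bivariate normal draws.

```python
import numpy as np

def bvn_mgf(t1, t2, mu1, mu2, s1, s2, rho):
    """Closed-form m.g.f. (12.6) of BVN(mu1, mu2, s1^2, s2^2, rho)."""
    q = t1**2 * s1**2 + t2**2 * s2**2 + 2 * rho * t1 * t2 * s1 * s2
    return np.exp(t1 * mu1 + t2 * mu2 + 0.5 * q)

# Monte Carlo estimate of E[exp(t1 X + t2 Y)] from simulated draws.
rng = np.random.default_rng(0)
mu1, mu2, s1, s2, rho = 0.5, -0.3, 1.0, 2.0, 0.4
cov = [[s1**2, rho*s1*s2], [rho*s1*s2, s2**2]]
xy = rng.multivariate_normal([mu1, mu2], cov, size=500_000)
t1, t2 = 0.2, -0.1
est = np.exp(t1 * xy[:, 0] + t2 * xy[:, 1]).mean()
exact = bvn_mgf(t1, t2, mu1, mu2, s1, s2, rho)
assert abs(est / exact - 1) < 0.01
```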
Theorem 12·1. Let (X, Y) ~ BVN(µ1, µ2, σ1², σ2², ρ). Then X and Y are independent if and only if ρ = 0.
Proof. (a) If (X, Y) ~ BVN(µ1, µ2, σ1², σ2², ρ) and ρ = 0, then X and Y are independent.
If ρ = 0, then from (12·5), we get:

f(x, y) = \frac{1}{\sigma_1\sqrt{2\pi}} \exp\left[-\frac{1}{2}\left(\frac{x-\mu_1}{\sigma_1}\right)^2\right] \times \frac{1}{\sigma_2\sqrt{2\pi}} \exp\left[-\frac{1}{2}\left(\frac{y-\mu_2}{\sigma_2}\right)^2\right]

⇒ f(x, y) = f_1(x) · f_2(y)

⇒ X and Y are independent, with X ~ N(µ1, σ1²) and Y ~ N(µ2, σ2²).
Aliter. (X, Y) ~ BVN(µ1, µ2, σ1², σ2², ρ)

M_{X,Y}(t_1, t_2) = \exp\left\{t_1\mu_1 + t_2\mu_2 + \tfrac{1}{2}(t_1^2\sigma_1^2 + 2\rho t_1 t_2\sigma_1\sigma_2 + t_2^2\sigma_2^2)\right\}

If ρ = 0, then

M_{X,Y}(t_1, t_2) = \exp\left\{t_1\mu_1 + \tfrac{1}{2}t_1^2\sigma_1^2\right\} \cdot \exp\left\{t_2\mu_2 + \tfrac{1}{2}t_2^2\sigma_2^2\right\} = M_X(t_1) \cdot M_Y(t_2) ... (*)

[∵ If (X, Y) ~ BVN(µ1, µ2, σ1², σ2², ρ), then the marginal p.d.f.'s of X and Y are normal, i.e., X ~ N(µ1, σ1²) and Y ~ N(µ2, σ2²).]
(*) ⇒ X and Y are independent.
(b) Conversely, if X and Y are independent, then ρ = 0 [c.f. Theorem 10·2].
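The forward direction can be checked numerically (an added illustration with arbitrary parameters): with ρ = 0 the joint density (12·5) should equal the product of the two normal marginals at every point.

```python
import numpy as np

def bvn_pdf(x, y, mu1, mu2, s1, s2, rho):
    """Joint density (12.5)."""
    z = ((x-mu1)**2/s1**2 - 2*rho*(x-mu1)*(y-mu2)/(s1*s2) + (y-mu2)**2/s2**2)
    return np.exp(-z/(2*(1-rho**2))) / (2*np.pi*s1*s2*np.sqrt(1-rho**2))

def normal_pdf(x, mu, s):
    """Univariate N(mu, s^2) density."""
    return np.exp(-(x-mu)**2/(2*s**2)) / (s*np.sqrt(2*np.pi))

# With rho = 0 the joint density factors into the marginals.
mu1, mu2, s1, s2 = 1.0, 2.0, 0.5, 3.0
for x in np.linspace(-2, 4, 7):
    for y in np.linspace(-4, 8, 7):
        f = bvn_pdf(x, y, mu1, mu2, s1, s2, rho=0.0)
        assert abs(f - normal_pdf(x, mu1, s1)*normal_pdf(y, mu2, s2)) < 1e-14
```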
Theorem 12·2. (X, Y) possesses a bivariate normal distribution if and only if every linear combination of X and Y, viz., aX + bY, a ≠ 0, b ≠ 0, is a normal variate.
Proof. (a) Let (X, Y) ~ BVN(µ1, µ2, σ1², σ2², ρ); then we shall prove that aX + bY, a ≠ 0, b ≠ 0, is a normal variate.
Since (X, Y) has a bivariate normal distribution, we have

M_{X,Y}(t_1, t_2) = E\left(e^{t_1 X + t_2 Y}\right) = \exp\left[t_1\mu_1 + t_2\mu_2 + \tfrac{1}{2}(t_1^2\sigma_1^2 + 2\rho t_1 t_2\sigma_1\sigma_2 + t_2^2\sigma_2^2)\right] ... (**)

Then the m.g.f. of Z = aX + bY is given by:

M_Z(t) = E(e^{tZ}) = E\left[e^{t(aX + bY)}\right] = E\left(e^{atX + btY}\right) = \exp\left\{t(a\mu_1 + b\mu_2) + \frac{t^2}{2}(a^2\sigma_1^2 + 2\rho ab\sigma_1\sigma_2 + b^2\sigma_2^2)\right\}, \quad [\text{taking } t_1 = at,\; t_2 = bt \text{ in (**)}]

which is the m.g.f. of a normal distribution with parameters

\mu = a\mu_1 + b\mu_2, \qquad \sigma^2 = a^2\sigma_1^2 + 2\rho ab\sigma_1\sigma_2 + b^2\sigma_2^2 ... (***)

Hence, by the uniqueness theorem of m.g.f.'s,

Z = aX + bY ~ N(µ, σ²), where µ and σ² are given in (***).

(b) Conversely, let Z = aX + bY, a ≠ 0, b ≠ 0, be a normal variate. Then we have to prove that (X, Y) has a bivariate normal distribution.
Let Z = aX + bY ~ N(µ, σ²), where

\mu = E(Z) = E(aX + bY) = a\mu_X + b\mu_Y
\sigma^2 = \operatorname{Var}(Z) = \operatorname{Var}(aX + bY) = a^2\sigma_X^2 + 2ab\rho\sigma_X\sigma_Y + b^2\sigma_Y^2

M_Z(t) = \exp\left[t\mu + \frac{t^2\sigma^2}{2}\right] = \exp\left[t(a\mu_X + b\mu_Y) + \frac{t^2}{2}(a^2\sigma_X^2 + 2ab\rho\sigma_X\sigma_Y + b^2\sigma_Y^2)\right]

= \exp\left[t_1\mu_X + t_2\mu_Y + \tfrac{1}{2}(t_1^2\sigma_X^2 + 2\rho t_1 t_2\sigma_X\sigma_Y + t_2^2\sigma_Y^2)\right], ... (****)

where t_1 = at and t_2 = bt.

12-10 FUNDAMENTALS OF MATHEMATICAL STATISTICS

But (****) is the m.g.f. of a BVN distribution with parameters (µ_X, µ_Y, σ_X², σ_Y², ρ). Hence, by the uniqueness theorem of m.g.f.'s, (X, Y) ~ BVN(µ_X, µ_Y, σ_X², σ_Y², ρ).
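The forward direction of Theorem 12·2 can be sketched by simulation (an added illustration; the parameters and coefficients a, b are arbitrary): aX + bY should match the normal law with the mean and variance given in (***).

```python
import numpy as np
from scipy.stats import norm

# Simulate a BVN pair and compare a*X + b*Y against
# N(a*mu1 + b*mu2, a^2 s1^2 + 2 rho a b s1 s2 + b^2 s2^2).
rng = np.random.default_rng(3)
mu1, mu2, s1, s2, rho = 1.0, -1.0, 1.5, 0.5, -0.4
a, b = 2.0, 3.0
cov = [[s1**2, rho*s1*s2], [rho*s1*s2, s2**2]]
xy = rng.multivariate_normal([mu1, mu2], cov, size=1_000_000)
z = a*xy[:, 0] + b*xy[:, 1]
mu = a*mu1 + b*mu2
var = a**2*s1**2 + 2*rho*a*b*s1*s2 + b**2*s2**2
assert abs(z.mean() - mu) < 0.01
assert abs(z.var() - var) < 0.05
# Shape check: an empirical quantile should agree with the normal quantile.
assert abs(np.quantile(z, 0.9) - norm.ppf(0.9, loc=mu, scale=np.sqrt(var))) < 0.02
```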
12-3-2. Marginal Distribution of Bivariate Normal Distribution. The marginal distribution of the random variable X is given by:

f_X(x) = \int_{-\infty}^{\infty} f_{XY}(x, y)\, dy

Put (y − µ2)/σ2 = u; then dy = σ2 du. Therefore,

f_X(x) = \frac{1}{2\pi\sigma_1\sqrt{1-\rho^2}} \int_{-\infty}^{\infty} \exp\left[-\frac{1}{2(1-\rho^2)}\left\{\left(\frac{x-\mu_1}{\sigma_1}\right)^2 - 2\rho u\left(\frac{x-\mu_1}{\sigma_1}\right) + u^2\right\}\right] du

= \frac{1}{2\pi\sigma_1\sqrt{1-\rho^2}} \exp\left[-\frac{1}{2}\left(\frac{x-\mu_1}{\sigma_1}\right)^2\right] \int_{-\infty}^{\infty} \exp\left[-\frac{1}{2(1-\rho^2)}\left\{u - \rho\left(\frac{x-\mu_1}{\sigma_1}\right)\right\}^2\right] du

Put

\frac{1}{\sqrt{1-\rho^2}}\left[u - \rho\left(\frac{x-\mu_1}{\sigma_1}\right)\right] = t, \quad\text{then}\quad du = \sqrt{1-\rho^2}\, dt

∴ f_X(x) = \frac{1}{2\pi\sigma_1} \exp\left[-\frac{1}{2}\left(\frac{x-\mu_1}{\sigma_1}\right)^2\right] \int_{-\infty}^{\infty} \exp\left(-\frac{t^2}{2}\right) dt

= \frac{1}{\sigma_1\sqrt{2\pi}} \exp\left[-\frac{1}{2}\left(\frac{x-\mu_1}{\sigma_1}\right)^2\right] ... (12·7)

Similarly, we shall get

f_Y(y) = \int_{-\infty}^{\infty} f_{XY}(x, y)\, dx = \frac{1}{\sigma_2\sqrt{2\pi}} \exp\left[-\frac{1}{2}\left(\frac{y-\mu_2}{\sigma_2}\right)^2\right] ... (12·7a)

Hence

X ~ N(µ1, σ1²) and Y ~ N(µ2, σ2²) ... (12·7b)
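The marginal result (12·7) can be verified by direct numerical integration of the joint density over y (an added sketch with arbitrary parameter values).

```python
import numpy as np
from scipy.integrate import quad

def bvn_pdf(x, y, mu1, mu2, s1, s2, rho):
    """Joint density (12.5)."""
    z = ((x-mu1)**2/s1**2 - 2*rho*(x-mu1)*(y-mu2)/(s1*s2) + (y-mu2)**2/s2**2)
    return np.exp(-z/(2*(1-rho**2))) / (2*np.pi*s1*s2*np.sqrt(1-rho**2))

# Integrate over y and compare with the claimed N(mu1, s1^2) marginal.
mu1, mu2, s1, s2, rho = 0.5, -1.0, 1.2, 2.0, -0.7
for x in (-1.0, 0.5, 2.0):
    marginal, _ = quad(lambda y: bvn_pdf(x, y, mu1, mu2, s1, s2, rho),
                       -np.inf, np.inf)
    claimed = np.exp(-0.5*((x-mu1)/s1)**2) / (s1*np.sqrt(2*np.pi))
    assert abs(marginal - claimed) < 1e-8
```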
Remark. We have proved that if (X, Y) ~ BVN(µ1, µ2, σ1², σ2², ρ), then the marginal p.d.f.'s of X and Y are also normal. However, the converse is not true, i.e., we may have a joint p.d.f. f(x, y) of (X, Y) which is not normal but whose marginal p.d.f.'s are still normal, as discussed in the following illustration.
Consider the joint distribution of X and Y given by:

f(x, y) = \frac{1}{2}\left[\frac{1}{2\pi(1-\rho^2)^{1/2}} \exp\left\{-\frac{1}{2(1-\rho^2)}(x^2 - 2\rho xy + y^2)\right\} + \frac{1}{2\pi(1-\rho^2)^{1/2}} \exp\left\{-\frac{1}{2(1-\rho^2)}(x^2 + 2\rho xy + y^2)\right\}\right]

= \frac{1}{2}[f_1(x, y) + f_2(x, y)]; \quad -\infty < (x, y) < \infty ... (12·7c)

where f_1(x, y) is the p.d.f. of the BVN(0, 0, 1, 1, ρ) distribution and f_2(x, y) is the p.d.f. of the BVN(0, 0, 1, 1, −ρ) distribution. It can be easily seen that f(x, y) is the joint p.d.f. of (X, Y), and obviously f(x, y) is not the p.d.f. of a bivariate normal distribution.
The marginal distribution of X in (12·7c) is

f_X(x) = \frac{1}{2}\left[\int_{-\infty}^{\infty} f_1(x, y)\, dy + \int_{-\infty}^{\infty} f_2(x, y)\, dy\right]

But ∫ f_1(x, y) dy is the marginal p.d.f. of X, where (X, Y) ~ BVN(0, 0, 1, 1, ρ), and is given by X ~ N(0, 1). Similarly, ∫ f_2(x, y) dy is the marginal p.d.f. of X, where (X, Y) ~ BVN(0, 0, 1, 1, −ρ), and is given by X ~ N(0, 1). Hence

f_X(x) = \frac{1}{2}\left[\frac{1}{\sqrt{2\pi}} e^{-x^2/2} + \frac{1}{\sqrt{2\pi}} e^{-x^2/2}\right] = \frac{1}{\sqrt{2\pi}} e^{-x^2/2}; \quad -\infty < x < \infty ... (i)

⇒ X ~ N(0, 1), i.e., the marginal distribution of X in (12·7c) is normal.
Similarly, we can show that the marginal p.d.f. of Y in (12·7c) is given by:

f_Y(y) = \frac{1}{\sqrt{2\pi}} e^{-y^2/2}; \quad -\infty < y < \infty ... (ii)

⇒ Y ~ N(0, 1).
Hence, if the marginal distributions of X and Y are normal (Gaussian), it does not necessarily imply that the joint distribution of (X, Y) is bivariate normal.
We further note that for the joint p.d.f. (12·7c), using (i) and (ii), we have

E(X) = 0, σ_X² = 1 and E(Y) = 0, σ_Y² = 1.

\operatorname{Cov}(X, Y) = E(XY) - E(X)E(Y) = E(XY) = \frac{1}{2}\left[\iint xy\, f_1(x, y)\, dx\, dy + \iint xy\, f_2(x, y)\, dx\, dy\right] = \frac{1}{2}[\rho + (-\rho)] = 0,

because for f_1(x, y), (X, Y) ~ BVN(0, 0, 1, 1, ρ) and for f_2(x, y), (X, Y) ~ BVN(0, 0, 1, 1, −ρ). Hence

\operatorname{Corr}(X, Y) = \frac{\operatorname{Cov}(X, Y)}{\sigma_X\,\sigma_Y} = 0

However, we have, from (i) and (ii):

f_X(x) \cdot f_Y(y) = \frac{1}{2\pi} \exp\left\{-\frac{1}{2}(x^2 + y^2)\right\} \neq f(x, y) \;\Rightarrow\; X \text{ and } Y \text{ are not independent.}

The above example illustrates that we may have a (non-Gaussian) joint distribution of r.v.'s (X, Y) in which the marginal p.d.f.'s of X and Y are normal and ρ(X, Y) = 0, and yet X and Y are not independent.
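The three claims about the mixture (12·7c) — standard normal marginals, zero covariance, yet dependence — can each be checked numerically (an added sketch; ρ = 0.6 and the probe point 1.3 are arbitrary).

```python
import numpy as np
from scipy.integrate import quad, dblquad

rho = 0.6

def f1(x, y, r):
    """BVN(0, 0, 1, 1, r) density."""
    return np.exp(-(x*x - 2*r*x*y + y*y)/(2*(1-r*r))) / (2*np.pi*np.sqrt(1-r*r))

def f(x, y):
    """The mixture (12.7c)."""
    return 0.5*(f1(x, y, rho) + f1(x, y, -rho))

# Marginal of X is standard normal ...
fx, _ = quad(lambda y: f(1.3, y), -np.inf, np.inf)
assert abs(fx - np.exp(-1.3**2/2)/np.sqrt(2*np.pi)) < 1e-8

# ... Cov(X, Y) = 0 ...
cov, _ = dblquad(lambda y, x: x*y*f(x, y), -8, 8, -8, 8)
assert abs(cov) < 1e-8

# ... yet f does not factor, so X and Y are dependent.
fy, _ = quad(lambda x: f(x, 1.3), -np.inf, np.inf)
assert abs(f(1.3, 1.3) - fx*fy) > 1e-3
```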
12-3-3. Conditional Distributions. The conditional distribution of X for fixed Y is given by:

f_{X \mid Y}(x \mid y) = \frac{f_{XY}(x, y)}{f_Y(y)}

= \frac{1}{\sigma_1\sqrt{2\pi}\sqrt{1-\rho^2}} \exp\left[-\frac{1}{2(1-\rho^2)\sigma_1^2}\left\{(x - \mu_1) - \rho\frac{\sigma_1}{\sigma_2}(y - \mu_2)\right\}^2\right]

= \frac{1}{\sigma_1\sqrt{2\pi}\sqrt{1-\rho^2}} \exp\left[-\frac{1}{2(1-\rho^2)\sigma_1^2}\left\{x - \left(\mu_1 + \rho\frac{\sigma_1}{\sigma_2}(y - \mu_2)\right)\right\}^2\right]

which is the probability function of a univariate normal distribution with mean and variance given by:

E(X \mid Y = y) = \mu_1 + \rho\frac{\sigma_1}{\sigma_2}(y - \mu_2) \quad\text{and}\quad V(X \mid Y = y) = \sigma_1^2(1 - \rho^2)

Hence the conditional distribution of X for fixed Y is also normal, given by:

(X \mid Y = y) \sim N\left[\mu_1 + \rho\frac{\sigma_1}{\sigma_2}(y - \mu_2),\; \sigma_1^2(1 - \rho^2)\right] ... (12·7d)

Similarly, the conditional distribution of the random variable Y for a fixed X is:

f_{Y \mid X}(y \mid x) = \frac{f_{XY}(x, y)}{f_X(x)} = \frac{1}{\sigma_2\sqrt{2\pi}\sqrt{1-\rho^2}} \exp\left[-\frac{1}{2(1-\rho^2)\sigma_2^2}\left\{(y - \mu_2) - \rho\frac{\sigma_2}{\sigma_1}(x - \mu_1)\right\}^2\right]; \quad -\infty < y < \infty

Thus the conditional distribution of Y for fixed X is also normal, given by:

(Y \mid X = x) \sim N\left[\mu_2 + \rho\frac{\sigma_2}{\sigma_1}(x - \mu_1),\; \sigma_2^2(1 - \rho^2)\right] ... (12·7e)

It is apparent from the above results that the array means are collinear, i.e., the regression equations are linear (involving linear functions of the independent variables), and the array variances are constant (i.e., free from the independent variable). We express this by saying that the regressions of Y on X and of X on Y are linear and homoscedastic.
For ρ = 0, the conditional variance V(Y | X) is equal to the marginal variance σ2², the conditional mean E(Y | X) is equal to the marginal mean µ2, and the two variables become independent, which is also apparent from the joint distribution function. In between the two extremes ρ = ±1, the correlation coefficient ρ provides a measure of the degree of association or interdependence between the two variables.
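The conditional mean and variance in (12·7e) can be recovered by integrating the joint density directly (an added sketch; the parameters, the probe value x = 2.5, and the finite integration bounds are arbitrary choices wide enough to cover the conditional mass).

```python
import numpy as np
from scipy.integrate import quad

mu1, mu2, s1, s2, rho = 1.0, -2.0, 2.0, 0.5, 0.8

def bvn_pdf(x, y):
    """Joint density (12.5) with the parameters above."""
    z = ((x-mu1)**2/s1**2 - 2*rho*(x-mu1)*(y-mu2)/(s1*s2) + (y-mu2)**2/s2**2)
    return np.exp(-z/(2*(1-rho**2))) / (2*np.pi*s1*s2*np.sqrt(1-rho**2))

x = 2.5
# Conditional moments of Y given X = x, by numerical integration.
fx, _ = quad(lambda y: bvn_pdf(x, y), -6, 2)
m1, _ = quad(lambda y: y*bvn_pdf(x, y), -6, 2)
m2, _ = quad(lambda y: y*y*bvn_pdf(x, y), -6, 2)
cond_mean, cond_var = m1/fx, m2/fx - (m1/fx)**2
assert abs(cond_mean - (mu2 + rho*(s2/s1)*(x - mu1))) < 1e-8
assert abs(cond_var - s2**2*(1 - rho**2)) < 1e-8
```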
Example 12·1. Show that for the bivariate normal distribution

dP = \frac{1}{2\pi\sqrt{1-\rho^2}} \exp\left[-\frac{1}{2(1-\rho^2)}(x^2 - 2\rho xy + y^2)\right] dx\, dy,

(i) the M.G.F. is

M(t_1, t_2) = \exp\left[\tfrac{1}{2}(t_1^2 + 2\rho t_1 t_2 + t_2^2)\right]

(ii) the moments obey the recurrence relation

\mu_{rs} = (r + s - 1)\rho\,\mu_{r-1,\,s-1} + (r-1)(s-1)(1 - \rho^2)\,\mu_{r-2,\,s-2}

Hence or otherwise, show that µ_rs = 0 if r + s is odd; µ_31 = 3ρ; µ_22 = 1 + 2ρ².
Solution. (i) From the given probability function, we see that

µ_1 = 0 = µ_2 and σ_1² = 1 = σ_2².

∴ From (12·6a), we get

M = M(t_1, t_2) = \exp\left[\tfrac{1}{2}(t_1^2 + 2\rho t_1 t_2 + t_2^2)\right]

(ii)

\frac{\partial M}{\partial t_1} = M(t_1 + \rho t_2) \quad\text{and}\quad \frac{\partial M}{\partial t_2} = M(t_2 + \rho t_1)

and

\frac{\partial^2 M}{\partial t_1 \partial t_2} = \frac{\partial}{\partial t_1}\left(\frac{\partial M}{\partial t_2}\right) = \frac{\partial}{\partial t_1}\left[M(t_2 + \rho t_1)\right] = M\rho + (t_2 + \rho t_1)(t_1 + \rho t_2)M

Hence

\frac{\partial^2 M}{\partial t_1 \partial t_2} - \rho t_1 \frac{\partial M}{\partial t_1} - \rho t_2 \frac{\partial M}{\partial t_2} = M\rho + (t_2 + \rho t_1)(t_1 + \rho t_2)M - \rho t_1(t_1 + \rho t_2)M - \rho t_2(t_2 + \rho t_1)M = M\rho + (1 - \rho^2)M t_1 t_2 \quad\text{(on simplification)}

i.e.,

\frac{\partial^2 M}{\partial t_1 \partial t_2} = \rho t_1 \frac{\partial M}{\partial t_1} + \rho t_2 \frac{\partial M}{\partial t_2} + M\rho + M(1 - \rho^2)t_1 t_2 ... (*)

But

M = \exp\left[\tfrac{1}{2}(t_1^2 + 2\rho t_1 t_2 + t_2^2)\right] = \sum_{r=0}^{\infty}\sum_{s=0}^{\infty} \mu_{rs}\, \frac{t_1^r\, t_2^s}{r!\, s!}

∴ (*) gives

\sum_{r=1}^{\infty}\sum_{s=1}^{\infty} \mu_{rs}\, \frac{t_1^{r-1} t_2^{s-1}}{(r-1)!\,(s-1)!} = \rho \sum_{r=1}^{\infty}\sum_{s=0}^{\infty} r\mu_{rs}\, \frac{t_1^r t_2^s}{r!\, s!} + \rho \sum_{r=0}^{\infty}\sum_{s=1}^{\infty} s\mu_{rs}\, \frac{t_1^r t_2^s}{r!\, s!} + \rho \sum_{r=0}^{\infty}\sum_{s=0}^{\infty} \mu_{rs}\, \frac{t_1^r t_2^s}{r!\, s!} + (1 - \rho^2) \sum_{r=0}^{\infty}\sum_{s=0}^{\infty} \mu_{rs}\, \frac{t_1^{r+1} t_2^{s+1}}{r!\, s!}

Equating the coefficients of \frac{t_1^{r-1} t_2^{s-1}}{(r-1)!\,(s-1)!} on both sides, we get

\mu_{rs} = \rho(r-1)\mu_{r-1,\,s-1} + \rho(s-1)\mu_{r-1,\,s-1} + \rho\mu_{r-1,\,s-1} + (1-\rho^2)(r-1)(s-1)\mu_{r-2,\,s-2}

⇒ \mu_{rs} = (r + s - 1)\rho\,\mu_{r-1,\,s-1} + (r-1)(s-1)(1 - \rho^2)\,\mu_{r-2,\,s-2}

In particular,

\mu_{31} = 3\rho\mu_{20} + 0 = 3\rho\sigma_1^2 = 3\rho

\mu_{22} = 3\rho\mu_{11} + (1 - \rho^2)\mu_{00} = 3\rho^2 + (1 - \rho^2)\cdot 1 = 1 + 2\rho^2 \quad (\because\; \mu_{11} = \rho,\; \mu_{00} = 1)

Also µ_03 = µ_30 = 0, µ_12 = 2ρµ_01 + 0 = 0, and µ_23 = 4ρµ_12 + 2(1 − ρ²)µ_01 = 0. Similarly, we will get µ_21 = 0, µ_32 = 0.
If r + s is odd, so is (r − 1) + (s − 1), (r − 2) + (s − 2), and so on. And since µ_30 = µ_03 = 0, µ_12 = 0 = µ_21, µ_23 = 0 = µ_32, ..., we finally get

µ_rs = 0, if r + s is odd.
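The recurrence is easy to implement and check against the stated values (an added sketch; the base cases µ_00 = 1, µ_10 = µ_01 = 0, µ_11 = ρ, and the standard normal moments for µ_r0, µ_0s follow from the standardized marginals).

```python
def _norm_moment(k):
    """E[Z^k] for Z ~ N(0, 1): 0 for odd k, (k-1)!! for even k."""
    if k % 2:
        return 0.0
    out = 1.0
    for j in range(1, k, 2):
        out *= j
    return out

def bvn_moment(r, s, rho):
    """Central moment mu_rs of the standardized BVN, via the recurrence."""
    if r < 0 or s < 0:
        return 0.0
    if r == 0:
        return _norm_moment(s)
    if s == 0:
        return _norm_moment(r)
    if (r, s) == (1, 1):
        return rho
    return ((r + s - 1) * rho * bvn_moment(r - 1, s - 1, rho)
            + (r - 1) * (s - 1) * (1 - rho**2) * bvn_moment(r - 2, s - 2, rho))

rho = 0.3
assert abs(bvn_moment(3, 1, rho) - 3*rho) < 1e-12           # mu_31 = 3 rho
assert abs(bvn_moment(2, 2, rho) - (1 + 2*rho**2)) < 1e-12  # mu_22 = 1 + 2 rho^2
assert bvn_moment(3, 2, rho) == 0.0                         # odd order vanishes
```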
Example 12·2. Show that if X_1 and X_2 are standard normal variates with correlation coefficient ρ between them, then the correlation coefficient between X_1² and X_2² is given by ρ².
Solution. Since X_1 and X_2 are two standard normal variates, we have

E(X_1) = E(X_2) = 0 and V(X_1) = E(X_1²) = 1 = V(X_2) = E(X_2²)

M_{X_1, X_2}(t_1, t_2) = \exp\left[\tfrac{1}{2}(t_1^2 + 2\rho t_1 t_2 + t_2^2)\right] \quad [\text{cf. } (12\cdot 6)]

Now

\rho(X_1^2, X_2^2) = \frac{E(X_1^2 X_2^2) - E(X_1^2)E(X_2^2)}{\sqrt{E(X_1^4) - \{E(X_1^2)\}^2}\,\sqrt{E(X_2^4) - \{E(X_2^2)\}^2}}

where

E(X_1^2 X_2^2) = coefficient of \frac{t_1^2 t_2^2}{2!\, 2!} in M(t_1, t_2) = 2ρ² + 1
E(X_1^4) = coefficient of \frac{t_1^4}{4!} in M(t_1, t_2) = 3
E(X_2^4) = coefficient of \frac{t_2^4}{4!} in M(t_1, t_2) = 3

∴ \rho(X_1^2, X_2^2) = \frac{2\rho^2 + 1 - 1}{\sqrt{3-1}\,\sqrt{3-1}} = \rho^2
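A Monte Carlo sketch (added here; ρ = 0.6 is an arbitrary illustration) confirms the result Corr(X_1², X_2²) = ρ².

```python
import numpy as np

# Simulate correlated standard normals and correlate their squares.
rng = np.random.default_rng(42)
rho = 0.6
cov = [[1.0, rho], [rho, 1.0]]
x = rng.multivariate_normal([0.0, 0.0], cov, size=1_000_000)
r = np.corrcoef(x[:, 0]**2, x[:, 1]**2)[0, 1]
assert abs(r - rho**2) < 0.01
```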
Example 12·3. The variables X and Y, with zero means and standard deviations σ1 and σ2, are normally correlated with correlation coefficient ρ. Show that U and V defined as

U = \frac{X}{\sigma_1} + \frac{Y}{\sigma_2} \quad\text{and}\quad V = \frac{X}{\sigma_1} - \frac{Y}{\sigma_2}

are independent normal variates with variances 2(1 + ρ) and 2(1 − ρ) respectively.
Solution. Since (X, Y) ~ BVN(0, 0, σ1², σ2², ρ), its joint p.d.f. is given by:

f(x, y) = \frac{1}{2\pi\sigma_1\sigma_2\sqrt{1-\rho^2}} \exp\left[-\frac{1}{2(1-\rho^2)}\left\{\frac{x^2}{\sigma_1^2} - \frac{2\rho xy}{\sigma_1\sigma_2} + \frac{y^2}{\sigma_2^2}\right\}\right]; \quad -\infty < (x, y) < \infty

Also

u = \frac{x}{\sigma_1} + \frac{y}{\sigma_2}, \quad v = \frac{x}{\sigma_1} - \frac{y}{\sigma_2} \;\Rightarrow\; x = \frac{\sigma_1}{2}(u + v), \quad y = \frac{\sigma_2}{2}(u - v)

The Jacobian of the transformation J is given by:

J = \frac{\partial(x, y)}{\partial(u, v)} = \begin{vmatrix} \sigma_1/2 & \sigma_1/2 \\ \sigma_2/2 & -\sigma_2/2 \end{vmatrix} = -\frac{\sigma_1\sigma_2}{2}

Also

\frac{x^2}{\sigma_1^2} + \frac{y^2}{\sigma_2^2} = \frac{1}{4}\left[(u+v)^2 + (u-v)^2\right] = \frac{1}{2}\left[u^2 + v^2\right], \qquad \frac{xy}{\sigma_1\sigma_2} = \frac{1}{4}(u^2 - v^2)

The joint p.d.f. g(u, v) of U and V is given by:

g(u, v) = \frac{1}{2\pi\sigma_1\sigma_2\sqrt{1-\rho^2}} \exp\left[-\frac{1}{2(1-\rho^2)}\left\{\frac{1}{2}(u^2 + v^2) - \frac{\rho}{2}(u^2 - v^2)\right\}\right] \cdot \frac{\sigma_1\sigma_2}{2}

= \frac{1}{4\pi\sqrt{1-\rho^2}} \exp\left[-\frac{1}{4(1-\rho^2)}\left\{(1 - \rho)u^2 + (1 + \rho)v^2\right\}\right]

= \left[\frac{1}{\sqrt{2\pi}\sqrt{2(1+\rho)}} \exp\left\{-\frac{u^2}{4(1+\rho)}\right\}\right] \times \left[\frac{1}{\sqrt{2\pi}\sqrt{2(1-\rho)}} \exp\left\{-\frac{v^2}{4(1-\rho)}\right\}\right]

= [f_1(u)]\,[f_2(v)], (say) ... (*)

(*) ⇒ U and V are independently distributed, with

U ~ N[0, 2(1 + ρ)] and V ~ N[0, 2(1 − ρ)].

Aliter. Find the joint m.g.f. of U and V, viz.,

M(t_1, t_2) = E\left(e^{t_1 U + t_2 V}\right) = E\left[e^{X(t_1 + t_2)/\sigma_1 + Y(t_1 - t_2)/\sigma_2}\right]

and use E\left(e^{t_1 X + t_2 Y}\right) = \exp\left[(t_1^2\sigma_1^2 + t_2^2\sigma_2^2 + 2\rho t_1 t_2\sigma_1\sigma_2)/2\right].
\
Example 12·4. If X and Y are standard normal variates with coefficient of correlation ρ, show that:
(i) the regression of Y on X is linear;
(ii) X + Y and X − Y are independently distributed;
(iii) Q = \frac{X^2 - 2\rho XY + Y^2}{1 - \rho^2} is distributed like a chi-square, i.e., as the sum of the squares of standard normal variates.
Solution. (i) c.f. § 12·3·3.
(ii) (X, Y) ~ BVN(0, 0, 1, 1, ρ)

f(x, y) = \frac{1}{2\pi\sqrt{1-\rho^2}} \exp\left[-\frac{1}{2(1-\rho^2)}(x^2 - 2\rho xy + y^2)\right]

Let u = x + y and v = x − y, so that x = (u + v)/2, y = (u − v)/2, and

J = \frac{\partial(x, y)}{\partial(u, v)} = \begin{vmatrix} 1/2 & 1/2 \\ 1/2 & -1/2 \end{vmatrix} = -\frac{1}{2}

The joint p.d.f. g(u, v) of U and V is given by:

g(u, v) = C \exp\left[-\frac{1}{4(1-\rho^2)}\left\{(1 - \rho)u^2 + (1 + \rho)v^2\right\}\right], \quad\text{where}\quad C = \frac{1}{4\pi\sqrt{1-\rho^2}}

= \left[C_1 \exp\left\{-\frac{u^2}{4(1+\rho)}\right\}\right]\left[C_2 \exp\left\{-\frac{v^2}{4(1-\rho)}\right\}\right] = [g_1(u)]\,[g_2(v)], (say).

Hence U and V are independently distributed.
(iii)

M_Q(t) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{tQ} f(x, y)\, dx\, dy = \frac{1}{2\pi\sqrt{1-\rho^2}} \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \exp(tQ)\exp\left(-\frac{Q}{2}\right) dx\, dy

= \frac{1}{2\pi\sqrt{1-\rho^2}} \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \exp\left[-\frac{Q}{2}(1 - 2t)\right] dx\, dy

Put \sqrt{1-2t}\, x = u and \sqrt{1-2t}\, y = v, so that

dx = \frac{du}{\sqrt{1-2t}}, \quad dy = \frac{dv}{\sqrt{1-2t}}, \quad\text{and}\quad Q(1 - 2t) = \frac{(1-2t)(x^2 - 2\rho xy + y^2)}{1-\rho^2} = \frac{u^2 - 2\rho uv + v^2}{1-\rho^2}

∴ M_Q(t) = \frac{1}{2\pi\sqrt{1-\rho^2}\,(1-2t)} \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \exp\left[-\frac{u^2 - 2\rho uv + v^2}{2(1-\rho^2)}\right] du\, dv = \frac{1}{1-2t} = (1 - 2t)^{-1},

which is the m.g.f. of a chi-square (χ²) variate with n (= 2) degrees of freedom.
(For the chi-square distribution, see Chapter 15.)
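A simulation sketch (added; ρ = 0.5 is arbitrary) checks part (iii): χ² with 2 d.f. is the exponential distribution with mean 2, so Q should have mean 2 and survival function e^{−q/2}.

```python
import numpy as np

# Q = (X^2 - 2 rho X Y + Y^2)/(1 - rho^2) should follow chi-square
# with 2 d.f., i.e. an exponential law with mean 2.
rng = np.random.default_rng(1)
rho = 0.5
cov = [[1.0, rho], [rho, 1.0]]
xy = rng.multivariate_normal([0.0, 0.0], cov, size=1_000_000)
x, y = xy[:, 0], xy[:, 1]
q = (x*x - 2*rho*x*y + y*y) / (1 - rho*rho)
assert abs(q.mean() - 2.0) < 0.02                     # E[chi2_2] = 2
assert abs((q > 2.0).mean() - np.exp(-1.0)) < 0.005   # P(Q > 2) = e^{-1}
```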
Example 12·5. Let X and Y be independent standard normal variates. Obtain the m.g.f. of XY.
Solution. We have, by def.,

M_{XY}(t) = E\left(e^{tXY}\right) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{txy}\, f(x, y)\, dx\, dy

Since X and Y are independent standard normal variates, their joint p.d.f. f(x, y) is given by:

f(x, y) = f_1(x)\, f_2(y) = \frac{1}{2\pi}\, e^{-x^2/2}\, e^{-y^2/2}; \quad -\infty < (x, y) < \infty

∴ M_{XY}(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \exp\left[-\frac{1}{2}(x^2 - 2txy + y^2)\right] dx\, dy

= \frac{1}{2\pi} \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \exp\left[-\frac{1}{2(1-t^2)}\left\{\frac{x^2}{1/(1-t^2)} - \frac{2t\,xy}{\left(1/\sqrt{1-t^2}\right)\left(1/\sqrt{1-t^2}\right)} + \frac{y^2}{1/(1-t^2)}\right\}\right] dx\, dy ... (*)

If (U, V) ~ BVN(0, 0, σ1², σ2², ρ), then its density integrates to 1, so

\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \exp\left[-\frac{1}{2(1-\rho^2)}\left\{\frac{x^2}{\sigma_1^2} - \frac{2\rho xy}{\sigma_1\sigma_2} + \frac{y^2}{\sigma_2^2}\right\}\right] dx\, dy = 2\pi\sigma_1\sigma_2\sqrt{1-\rho^2} ... (**)

Comparing (*) and (**) with σ1² = σ2² = 1/(1 − t²) and ρ = t, we get

M_{XY}(t) = \frac{1}{2\pi} \cdot 2\pi \cdot \frac{1}{1-t^2} \cdot \sqrt{1-t^2} \;\Rightarrow\; M_{XY}(t) = (1 - t^2)^{-1/2}; \quad -1 < t < 1.
Example 12·6. Let X and Y have a bivariate normal distribution with parameters:
µ_X = 5, µ_Y = 10, σ_X² = 1, σ_Y² = 25 and Corr(X, Y) = ρ.
(a) If ρ > 0, find ρ when P(4 < Y < 16 | X = 5) = 0·954.
(b) If ρ = 0, find P(X + Y ≤ 16).
Solution. Since (X, Y) ~ BVN(µ_X, µ_Y, σ_X², σ_Y², ρ), the conditional distribution of Y given X = x is also normal:

(Y \mid X = x) \sim N\left[\mu = \mu_Y + \rho\frac{\sigma_Y}{\sigma_X}(x - \mu_X),\; \sigma^2 = \sigma_Y^2(1 - \rho^2)\right]

Therefore

(Y \mid X = 5) \sim N\left[\mu = 10 + \rho \times \tfrac{5}{1} \times (5 - 5),\; \sigma^2 = 25(1 - \rho^2)\right] = N\left[\mu = 10,\; \sigma^2 = 25(1 - \rho^2)\right]

We want ρ so that P(4 < Y < 16 | X = 5) = 0·954, where

Z = \frac{Y - \mu}{\sigma} = \frac{Y - 10}{5\sqrt{1-\rho^2}} \sim N(0, 1)

⇒ P\left(\frac{4 - 10}{\sigma} < Z < \frac{16 - 10}{\sigma}\right) = 0·954 \;\Rightarrow\; P\left(-\frac{6}{\sigma} < Z < \frac{6}{\sigma}\right) = 0·954 ... (*)

But we know that if Z ~ N(0, 1), then P(−2 < Z < 2) = 0·954. ... (**)
Comparing (*) and (**), we get

\frac{6}{\sigma} = 2 \;\Rightarrow\; \sigma = 3 \;\Rightarrow\; \sigma^2 = 9 = 25(1 - \rho^2)

∴ 1 − ρ² = 9/25 ⇒ ρ² = 16/25 ⇒ ρ = 4/5 = 0·8 (∵ ρ > 0)

(b) Since (X, Y) have a bivariate normal distribution, ρ = 0 ⇒ X and Y are independent r.v.'s, with X ~ N(µ_X = 5, σ_X² = 1) and Y ~ N(µ_Y = 10, σ_Y² = 25).
Thus X + Y ~ N(µ = µ_X + µ_Y, σ² = σ_X² + σ_Y²) = N(15, 26).

P(X + Y \le 16) = P\left(Z \le \frac{16 - 15}{\sqrt{26}}\right) = \Phi\left(\frac{1}{\sqrt{26}}\right), \quad\text{where}\quad Z = \frac{(X + Y) - \mu}{\sigma} \sim N(0, 1)

and Φ(z) = P(Z ≤ z) is the distribution function of the standard normal variate.

Remark.

P(X + Y \le 16) = P\left(Z \le \frac{1}{5\cdot 099}\right) = P(Z \le 0\cdot 196) = 0\cdot 5 + P(0 < Z < 0\cdot 196) \approx 0\cdot 5 + 0\cdot 0777 = 0\cdot 5777 \approx 0\cdot 578.
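Both parts can be sketched with SciPy's normal distribution (added illustration); the computed probability refines the table-based value.

```python
from math import sqrt
from scipy.stats import norm

# Part (a): the half-width 6 must equal 2 sigma, so solve
# 25*(1 - rho^2) = (6/2)^2 for rho > 0.
sigma = 6 / 2
rho = sqrt(1 - sigma**2 / 25)
assert abs(rho - 0.8) < 1e-12

# Part (b): with rho = 0, X + Y ~ N(15, 26).
p = norm.cdf((16 - 15) / sqrt(26))
assert 0.577 < p < 0.578
```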
