
Example 1.3: As an example, consider the following function:

$$f(x) = \begin{cases} 1, & \text{for } 0 < x < 1, \\ 0, & \text{otherwise.} \end{cases}$$

Clearly, since $f(x) \ge 0$ for $-\infty < x < \infty$ and $\int_{-\infty}^{\infty} f(x)\,dx = \int_{0}^{1} f(x)\,dx = [x]_{0}^{1} = 1$, the above function can be a probability density function. In fact, it is called a uniform distribution.

Example 1.4: As another example, consider the following function:

$$f(x) = \frac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2}x^2}, \quad \text{for } -\infty < x < \infty.$$

Clearly, we have $f(x) \ge 0$ for all $x$. We check whether $\int_{-\infty}^{\infty} f(x)\,dx = 1$. First of all, we define $I$ as $I = \int_{-\infty}^{\infty} f(x)\,dx$. To show $I = 1$, we may prove $I^2 = 1$, because $f(x) > 0$ for all $x$. This is shown as follows:


$$I^2 = \left( \int_{-\infty}^{\infty} f(x)\,dx \right)^2 = \int_{-\infty}^{\infty} f(x)\,dx \int_{-\infty}^{\infty} f(y)\,dy$$
$$= \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}} \exp\!\left(-\frac{1}{2}x^2\right) dx \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}} \exp\!\left(-\frac{1}{2}y^2\right) dy$$
$$= \frac{1}{2\pi} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \exp\!\left(-\frac{1}{2}(x^2 + y^2)\right) dx\,dy$$
$$= \frac{1}{2\pi} \int_{0}^{2\pi} \int_{0}^{\infty} \exp\!\left(-\frac{1}{2}r^2\right) r\,dr\,d\theta$$
$$= \frac{1}{2\pi} \int_{0}^{2\pi} \int_{0}^{\infty} \exp(-s)\,ds\,d\theta = \frac{2\pi}{2\pi} \left[ -\exp(-s) \right]_{0}^{\infty} = 1.$$

〈 Review 〉 Integration by Substitution (置換積分):

Univariate (1変数) Case: For a function of $x$, $f(x)$, we perform integration by substitution, using $x = \psi(y)$. Then, it is easy to obtain the following formula:
$$\int f(x)\,dx = \int \psi'(y)\, f(\psi(y))\,dy,$$
which is called the integration by substitution.
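The univariate formula can be checked numerically. The sketch below uses a hypothetical integrand f(x) = 3x² on (0, 1) and the substitution x = ψ(y) = y² (both chosen here for illustration, not taken from the text), and compares the two sides with a simple midpoint rule:

```python
def integrate(g, a, b, n=100_000):
    # one-dimensional midpoint-rule integration (hypothetical helper)
    h = (b - a) / n
    return sum(g(a + (k + 0.5) * h) for k in range(n)) * h

f = lambda x: 3 * x ** 2       # hypothetical integrand; its integral over (0, 1) is 1
psi = lambda y: y ** 2         # substitution x = psi(y); x covers (0, 1) as y does
dpsi = lambda y: 2 * y         # psi'(y)

lhs = integrate(f, 0.0, 1.0)                               # integral of f(x) dx
rhs = integrate(lambda y: dpsi(y) * f(psi(y)), 0.0, 1.0)   # integral of psi'(y) f(psi(y)) dy
print(lhs, rhs)  # both ≈ 1.0
```

Both sides agree to numerical precision, as the substitution formula predicts.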


Proof: Let $F(x)$ be the integration of $f(x)$, i.e.,
$$F(x) = \int_{-\infty}^{x} f(t)\,dt,$$
which implies that $F'(x) = f(x)$.

Differentiating $F(x) = F(\psi(y))$ with respect to $y$, we have:
$$\frac{dF(\psi(y))}{dy} = \frac{dF(x)}{dx}\,\frac{dx}{dy} = f(x)\,\psi'(y) = f(\psi(y))\,\psi'(y).$$

Bivariate (2変数) Case: For $f(x, y)$, define $x = \psi_1(u, v)$ and $y = \psi_2(u, v)$. Then we have:
$$\iint f(x, y)\,dx\,dy = \iint J\, f(\psi_1(u, v), \psi_2(u, v))\,du\,dv,$$
where $J$ is called the Jacobian (ヤコビアン), which represents the following determinant (行列式):
$$J = \begin{vmatrix} \dfrac{\partial x}{\partial u} & \dfrac{\partial x}{\partial v} \\[2ex] \dfrac{\partial y}{\partial u} & \dfrac{\partial y}{\partial v} \end{vmatrix} = \frac{\partial x}{\partial u}\frac{\partial y}{\partial v} - \frac{\partial x}{\partial v}\frac{\partial y}{\partial u}.$$

〈 End of Review 〉
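The bivariate formula can likewise be illustrated numerically. Assuming the polar transformation x = r cos θ, y = r sin θ with Jacobian J = r, the sketch below recovers the area of the unit disk, π, by integrating J over 0 ≤ r ≤ 1 and 0 ≤ θ < 2π (the `double_integral` helper is a hypothetical midpoint-rule routine, not from the text):

```python
import math

def double_integral(g, ax, bx, ay, by, n=400):
    # two-dimensional midpoint rule (hypothetical helper, for illustration)
    hx, hy = (bx - ax) / n, (by - ay) / n
    return sum(
        g(ax + (i + 0.5) * hx, ay + (j + 0.5) * hy)
        for i in range(n) for j in range(n)
    ) * hx * hy

# Area of the unit disk: the double integral of 1 over x^2 + y^2 <= 1 becomes,
# in polar coordinates with Jacobian J = r, a rectangle integral of r in (r, theta).
area = double_integral(lambda r, theta: r, 0.0, 1.0, 0.0, 2 * math.pi)
print(area)  # ≈ 3.14159...
```

The curved region becomes a rectangle in (r, θ), which is exactly why the polar substitution is convenient.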

〈 Go back to the Integration 〉

In the fifth equality, integration by substitution (置換積分) is used. The polar coordinate transformation (極座標変換) is used as $x = r\cos\theta$ and $y = r\sin\theta$. Note that $0 \le r < +\infty$ and $0 \le \theta < 2\pi$. The Jacobian is given by:
$$J = \begin{vmatrix} \dfrac{\partial x}{\partial r} & \dfrac{\partial x}{\partial \theta} \\[2ex] \dfrac{\partial y}{\partial r} & \dfrac{\partial y}{\partial \theta} \end{vmatrix} = \begin{vmatrix} \cos\theta & -r\sin\theta \\ \sin\theta & r\cos\theta \end{vmatrix} = r.$$

In the inner integration of the sixth equality, again, integration by substitution is utilized, where the transformation is $s = \frac{1}{2}r^2$.

Thus, we obtain the result $I^2 = 1$, and accordingly we have $I = 1$ because $f(x) \ge 0$. Therefore, $f(x) = e^{-\frac{1}{2}x^2}/\sqrt{2\pi}$ is also taken as a probability density function. Actually, this density function is called the standard normal probability density function (標準正規分布).
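The conclusion I = 1 can be confirmed numerically. The sketch below truncates the infinite range at ±10 (an assumption made here; the omitted tail mass is negligible) and integrates the standard normal density with a midpoint rule:

```python
import math

def integrate(g, a, b, n=200_000):
    # one-dimensional midpoint-rule integration
    h = (b - a) / n
    return sum(g(a + (k + 0.5) * h) for k in range(n)) * h

# Standard normal density f(x) = exp(-x^2/2) / sqrt(2*pi).
phi = lambda x: math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

total = integrate(phi, -10.0, 10.0)  # tail mass beyond +-10 is negligible
print(total)  # ≈ 1.0
```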


Distribution Function: The distribution function (分布関数), or the cumulative distribution function (累積分布関数), denoted by $F(x)$, is defined as:
$$P(X \le x) = F(x),$$
which represents the probability that $X$ is less than or equal to $x$.

The properties of the distribution function $F(x)$ are given by:
$$F(x_1) \le F(x_2), \quad \text{for } x_1 < x_2, \qquad \text{(nondecreasing function)}$$
$$P(a < X \le b) = F(b) - F(a), \quad \text{for } a < b,$$
$$F(-\infty) = 0, \qquad F(+\infty) = 1.$$

The difference between the discrete and continuous random variables is given by:


1. Discrete random variable (Figure 1):

   • $F(x) = \sum_{i=1}^{r} f(x_i) = \sum_{i=1}^{r} p_i$, where $r$ denotes the integer which satisfies $x_r \le x < x_{r+1}$.

   • $F(x_i) - F(x_i - \epsilon) = f(x_i) = p_i$, where $\epsilon$ is a small positive number less than $x_i - x_{i-1}$.

2. Continuous random variable (Figure 2):

   • $F(x) = \int_{-\infty}^{x} f(t)\,dt$,

   • $F'(x) = f(x)$.

$f(x)$ and $F(x)$ are displayed in Figure 1 for a discrete random variable and in Figure 2 for a continuous random variable.
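The discrete case can be illustrated with a short sketch. The probability values below are hypothetical; the code builds F(x) as a running sum and checks that F jumps by exactly f(x_i) = p_i at each x_i:

```python
# A hypothetical probability function for a discrete random variable on {1, 2, 3, 4}.
xs = [1, 2, 3, 4]
ps = [0.1, 0.2, 0.3, 0.4]   # f(x_i) = p_i

def F(x):
    # F(x) = sum of f(x_i) over all x_i <= x (zero below the support).
    return sum(p for xi, p in zip(xs, ps) if xi <= x)

# F is a step function: it jumps by exactly f(x_i) = p_i at each x_i.
eps = 0.5   # any positive number smaller than x_i - x_{i-1} = 1
print(F(3) - F(3 - eps))  # ≈ 0.3 = f(x_3)
print(F(4))               # ≈ 1.0
```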

[Figure 1: Probability Function $f(x)$ and Distribution Function $F(x)$ — Discrete Case. The figure shows the points $f(x_1), f(x_2), \ldots$ together with the step function $F(x) = \sum_{i=1}^{r} f(x_i)$, which jumps by $f(x_i)$ at each $x_i$. Note that $r$ is the integer which satisfies $x_r \le x < x_{r+1}$.]

[Figure 2: Density Function $f(x)$ and Distribution Function $F(x)$ — Continuous Case. The figure shows the density curve $f(x)$; the shaded area under $f(x)$ up to $x$ equals $F(x) = \int_{-\infty}^{x} f(t)\,dt$.]


2.2 Multivariate Random Variable (多変量確率変数) and Distribution

We consider two random variables $X$ and $Y$ in this section. It is easy to extend to more than two random variables.

Discrete Random Variables: Suppose that discrete random variables $X$ and $Y$ take $x_1, x_2, \cdots$ and $y_1, y_2, \cdots$, respectively. The probability that the event $\{\omega;\ X(\omega) = x_i \text{ and } Y(\omega) = y_j\}$ occurs is given by:
$$P(X = x_i,\ Y = y_j) = f_{xy}(x_i, y_j),$$
where $f_{xy}(x_i, y_j)$ represents the joint probability function (結合確率関数) of $X$ and $Y$. In order for $f_{xy}(x_i, y_j)$ to be a joint probability function, it has to satisfy the following properties:
$$f_{xy}(x_i, y_j) \ge 0, \quad i, j = 1, 2, \cdots,$$
$$\sum_{i} \sum_{j} f_{xy}(x_i, y_j) = 1.$$
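As a minimal sketch, the hypothetical 2×2 joint probability table below is checked against both properties:

```python
# A hypothetical 2x2 joint probability table f_xy(x_i, y_j).
f_xy = {
    (0, 0): 0.10, (0, 1): 0.20,
    (1, 0): 0.30, (1, 1): 0.40,
}

ok_nonneg = all(p >= 0 for p in f_xy.values())   # f_xy(x_i, y_j) >= 0
total = sum(f_xy.values())                       # double sum over i and j
print(ok_nonneg, total)  # True, ≈ 1.0
```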


Define $f_x(x_i)$ and $f_y(y_j)$ as:
$$f_x(x_i) = \sum_{j} f_{xy}(x_i, y_j), \quad i = 1, 2, \cdots,$$
$$f_y(y_j) = \sum_{i} f_{xy}(x_i, y_j), \quad j = 1, 2, \cdots.$$
Then, $f_x(x_i)$ and $f_y(y_j)$ are called the marginal probability functions (周辺確率関数) of $X$ and $Y$. $f_x(x_i)$ and $f_y(y_j)$ also have the properties of probability functions, i.e., $f_x(x_i) \ge 0$ and $\sum_{i} f_x(x_i) = 1$, and $f_y(y_j) \ge 0$ and $\sum_{j} f_y(y_j) = 1$.

Continuous Random Variables: Consider two continuous random variables $X$ and $Y$. For a domain $D$, the probability that the event $\{\omega;\ (X(\omega), Y(\omega)) \in D\}$ occurs is given by:
$$P((X, Y) \in D) = \iint_{D} f_{xy}(x, y)\,dx\,dy,$$
where $f_{xy}(x, y)$ is called the joint probability density function (結合確率密度関数) of $X$ and $Y$, or the joint density function of $X$ and $Y$.
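Returning to the discrete case, the marginal probability functions can be computed from a joint table by summing over the other index. The 2×2 table below is hypothetical:

```python
# Marginal probability functions computed from a hypothetical 2x2 joint table.
f_xy = {
    (0, 0): 0.10, (0, 1): 0.20,
    (1, 0): 0.30, (1, 1): 0.40,
}
xs, ys = (0, 1), (0, 1)

f_x = {x: sum(f_xy[(x, y)] for y in ys) for x in xs}  # f_x(x_i): sum over j
f_y = {y: sum(f_xy[(x, y)] for x in xs) for y in ys}  # f_y(y_j): sum over i

print(f_x)  # ≈ {0: 0.3, 1: 0.7}
print(f_y)  # ≈ {0: 0.4, 1: 0.6}
```

Each marginal is itself a valid probability function: nonnegative and summing to one.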

$f_{xy}(x, y)$ has to satisfy the following properties:
$$f_{xy}(x, y) \ge 0,$$
$$\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f_{xy}(x, y)\,dx\,dy = 1.$$

Define $f_x(x)$ and $f_y(y)$, for all $x$ and $y$, as:
$$f_x(x) = \int_{-\infty}^{\infty} f_{xy}(x, y)\,dy,$$
$$f_y(y) = \int_{-\infty}^{\infty} f_{xy}(x, y)\,dx,$$
where $f_x(x)$ and $f_y(y)$ are called the marginal probability density functions (周辺確率密度関数) of $X$ and $Y$, or the marginal density functions (周辺密度関数) of $X$ and $Y$.

For example, consider the event $\{\omega;\ a < X(\omega) < b,\ c < Y(\omega) < d\}$, which is a specific case of the domain $D$. Then, the probability that we have this event is written as:
$$P(a < X < b,\ c < Y < d) = \int_{a}^{b} \int_{c}^{d} f_{xy}(x, y)\,dy\,dx.$$
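These definitions can be illustrated numerically. The sketch below assumes a hypothetical joint density f_xy(x, y) = e^{-x-y} for x, y > 0 (not from the text), truncates the infinite ranges at 40, and checks the total mass and one rectangle probability against the analytic value:

```python
import math

def double_integral(g, ax, bx, ay, by, n=500):
    # two-dimensional midpoint-rule integration
    hx, hy = (bx - ax) / n, (by - ay) / n
    return sum(
        g(ax + (i + 0.5) * hx, ay + (j + 0.5) * hy)
        for i in range(n) for j in range(n)
    ) * hx * hy

# Hypothetical joint density: f_xy(x, y) = e^{-x-y} for x, y > 0.
f_xy = lambda x, y: math.exp(-x - y)

# Total mass (truncating the infinite range at 40; the tail is negligible).
total = double_integral(f_xy, 0.0, 40.0, 0.0, 40.0)

# P(0 < X < 1, 0 < Y < 2), whose analytic value is (1 - e^{-1})(1 - e^{-2}).
p = double_integral(f_xy, 0.0, 1.0, 0.0, 2.0)
exact = (1.0 - math.exp(-1.0)) * (1.0 - math.exp(-2.0))
print(total, p, exact)
```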

The mixture of discrete and continuous RVs is also possible. For example, let $X$ be a discrete RV and $Y$ be a continuous RV, where $X$ takes $x_1, x_2, \cdots$. The probability that $X$ takes $x_i$ and $Y$ takes a real number within the interval $I$ is given by:
$$P(X = x_i,\ Y \in I) = \int_{I} f_{xy}(x_i, y)\,dy.$$
Then, we have the following properties:
$$f_{xy}(x_i, y) \ge 0, \quad \text{for all } y \text{ and } i = 1, 2, \cdots,$$
$$\sum_{i} \int_{-\infty}^{\infty} f_{xy}(x_i, y)\,dy = 1.$$
The marginal probability function of $X$ is given by:
$$f_x(x_i) = \int_{-\infty}^{\infty} f_{xy}(x_i, y)\,dy,$$
for $i = 1, 2, \cdots$. The marginal probability density function of $Y$ is:
$$f_y(y) = \sum_{i} f_{xy}(x_i, y).$$
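A numerical sketch of the mixed case, assuming hypothetical weights w_i and a normal density for Y given X = x_i (none of these choices come from the text):

```python
import math

def integrate(g, a, b, n=20_000):
    # one-dimensional midpoint-rule integration
    h = (b - a) / n
    return sum(g(a + (k + 0.5) * h) for k in range(n)) * h

# Normal density with mean mu and unit variance (used here only as an example).
phi = lambda y, mu: math.exp(-0.5 * (y - mu) ** 2) / math.sqrt(2.0 * math.pi)

# Hypothetical mixed case: X takes 1 or 2 with weights w_i, and
# f_xy(x_i, y) = w_i * phi(y; mu_i) for the continuous Y.
w = {1: 0.3, 2: 0.7}
mu = {1: -1.0, 2: 2.0}
f_xy = lambda i, y: w[i] * phi(y, mu[i])

# Sum over i of the integral of f_xy(x_i, y) dy equals 1
# (the range is truncated well past both means).
total = sum(integrate(lambda y, i=i: f_xy(i, y), -12.0, 14.0) for i in w)

# Marginal probability function of X: f_x(x_i) = integral of f_xy(x_i, y) dy = w_i.
fx1 = integrate(lambda y: f_xy(1, y), -12.0, 14.0)
print(total, fx1)  # ≈ 1.0, ≈ 0.3
```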


2.3 Conditional Distribution

Discrete Random Variable: The conditional probability function (条件付確率関数) of $X$ given $Y = y_j$ is represented as:
$$P(X = x_i \mid Y = y_j) = f_{x|y}(x_i \mid y_j) = \frac{f_{xy}(x_i, y_j)}{f_y(y_j)} = \frac{f_{xy}(x_i, y_j)}{\sum_{i} f_{xy}(x_i, y_j)}.$$
The second equality indicates the definition of the conditional probability.

The features of the conditional probability function $f_{x|y}(x_i \mid y_j)$ are:
$$f_{x|y}(x_i \mid y_j) \ge 0, \quad i = 1, 2, \cdots,$$
$$\sum_{i} f_{x|y}(x_i \mid y_j) = 1, \quad \text{for any } j.$$
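A minimal sketch of the conditional probability function, built from a hypothetical 2×2 joint table:

```python
# Conditional probability function f_{x|y}(x_i | y_j) from a hypothetical joint table.
f_xy = {
    (0, 0): 0.10, (0, 1): 0.20,
    (1, 0): 0.30, (1, 1): 0.40,
}
xs = (0, 1)

def f_cond(x, y):
    # f_{x|y}(x | y) = f_xy(x, y) / f_y(y), with f_y(y) the sum over i of f_xy(x_i, y)
    f_y = sum(f_xy[(xi, y)] for xi in xs)
    return f_xy[(x, y)] / f_y

print(f_cond(0, 0))                   # 0.10 / 0.40 ≈ 0.25
print(sum(f_cond(x, 0) for x in xs))  # sums to 1 for fixed y
```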

Continuous Random Variable: The conditional probability density function (条件付確率密度関数) of $X$ given $Y = y$ (or the conditional density function (条件付密度関数) of $X$ given $Y = y$) is:
$$f_{x|y}(x \mid y) = \frac{f_{xy}(x, y)}{f_y(y)} = \frac{f_{xy}(x, y)}{\int_{-\infty}^{\infty} f_{xy}(x, y)\,dx}.$$

The properties of the conditional probability density function $f_{x|y}(x \mid y)$ are given by:
$$f_{x|y}(x \mid y) \ge 0,$$
$$\int_{-\infty}^{\infty} f_{x|y}(x \mid y)\,dx = 1, \quad \text{for any } Y = y.$$
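The continuous case can be sketched numerically, assuming the hypothetical joint density f_xy(x, y) = e^{-x-y} for x, y > 0 and truncating the range at 40:

```python
import math

def integrate(g, a, b, n=50_000):
    # one-dimensional midpoint-rule integration
    h = (b - a) / n
    return sum(g(a + (k + 0.5) * h) for k in range(n)) * h

# Hypothetical joint density: f_xy(x, y) = e^{-x-y} for x, y > 0.
f_xy = lambda x, y: math.exp(-x - y)

y0 = 1.5                                            # condition on Y = y0
f_y0 = integrate(lambda t: f_xy(t, y0), 0.0, 40.0)  # f_y(y0): integral over x
f_cond = lambda x: f_xy(x, y0) / f_y0               # f_{x|y}(x | y0)

# The conditional density integrates to one for any fixed y.
mass = integrate(f_cond, 0.0, 40.0)
print(mass)  # ≈ 1.0
```

For this particular density the conditional works out to f_{x|y}(x|y) = e^{-x}, independent of y.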


Independence of Random Variables: For discrete random variables $X$ and $Y$, we say that $X$ is independent (独立) (or stochastically independent (確率的に独立)) of $Y$ if and only if $f_{xy}(x_i, y_j) = f_x(x_i) f_y(y_j)$.

Similarly, for continuous random variables $X$ and $Y$, we say that $X$ is independent of $Y$ if and only if $f_{xy}(x, y) = f_x(x) f_y(y)$.

When $X$ and $Y$ are stochastically independent, $g(X)$ and $h(Y)$ are also stochastically independent, where $g(X)$ and $h(Y)$ are functions of $X$ and $Y$.
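The factorization condition can be checked directly for a discrete joint table. Both tables below are hypothetical; they share the same marginals, but only the first factors into the product of its marginals:

```python
# Check the factorization f_xy(x_i, y_j) = f_x(x_i) f_y(y_j) for a joint table.
def is_independent(f_xy, xs, ys, tol=1e-12):
    f_x = {x: sum(f_xy[(x, y)] for y in ys) for x in xs}   # marginal of X
    f_y = {y: sum(f_xy[(x, y)] for x in xs) for y in ys}   # marginal of Y
    return all(abs(f_xy[(x, y)] - f_x[x] * f_y[y]) < tol for x in xs for y in ys)

# Independent: the joint table is the outer product of marginals (0.4, 0.6) and (0.3, 0.7).
indep = {(0, 0): 0.12, (0, 1): 0.28, (1, 0): 0.18, (1, 1): 0.42}
# Dependent: same marginals, but probability mass shifted toward the diagonal.
dep = {(0, 0): 0.20, (0, 1): 0.20, (1, 0): 0.10, (1, 1): 0.50}

print(is_independent(indep, (0, 1), (0, 1)))  # True
print(is_independent(dep, (0, 1), (0, 1)))    # False
```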

