
Chapter Two

Random Variable Concept and
Operations on
Random Variables

2017
Girma Adam

2.1
Topics discussed in this section:
The random variable concept
Distribution Function
Density Function
Conditional Distribution & Density Functions
Expectation
Moments
Functions that give moments
Special distribution functions
Transformation of a random variable
2.2
Random variable concept
In most scientific and technological applications, measurements and
observations are expressed as numerical quantities.
Systems are often represented by operations on random variables.

2.3
Cont’d..

2.4
Cont’d..
The outcome of a random experiment need not be a number; however, we
are usually interested not in the outcome itself, but rather in some
measurement or numerical attribute of the outcome.
Examples:
- In tosses of a coin, we may be interested in the total number of heads.
- In the selection of a student's name from the class, we may be interested in the
weight of the student.
- In the study of noise-like signals, we nearly always deal with physical quantities
such as voltage, power, etc., which can be measured in physical units.
In each of these examples, a measurement assigns a numerical value to the outcome of the
random experiment. Since the outcomes are random, the result of the measurement will also
be random.

2.5
Cont’d..

2.6
Cont’d..
• A Random Variable is simply a function that maps every point in the
sample space onto the real line (numbers).
• Random Variable (RV): a function whose domain is the set of
outcomes of an experiment and whose range
is R1, the real line (numbers).
Example 2.1:
1. Mapping for the throw of one die

2.7
Cont’d..
Example 2.2: Let us construct a model for counting the number of
heads in a sequence of three coin tosses. For the underlying sample
space,
we take Ω = {TTT, TTH, THT, THH, HTT, HTH, HHT, HHH},

which contains the eight possible sequences of tosses. However, since
we are only interested in the number of heads in each sequence, we
define the random variable (function) X by
X(ω) = the number of heads in the sequence ω.
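This mapping can be sketched in Python (a small illustration of the slide's construction; the variable names are our own):

```python
from itertools import product

# Sample space: the eight possible sequences of three coin tosses
omega = [''.join(seq) for seq in product('HT', repeat=3)]

# Random variable X: maps each outcome to its number of heads
X = {s: s.count('H') for s in omega}

assert len(omega) == 8
assert X['HHH'] == 3 and X['TTT'] == 0 and X['HTH'] == 2
```

Exactly three of the eight outcomes map to the value 2, which is why P(X = 2) = 3/8 for a fair coin.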

2.8
Cont'd..

• The probability of occurrence of each outcome determines the
probability of occurrence of each of the mapped numbers.
• Random variables are represented by capital letters such as X, Y, W…, and any
particular value of the RV by a lower-case letter such as x, y, w.

2.9
Cont’d..


2.10
Cont’d..


2.11
Events Defined by Random Variables

• If X is a r.v. and x is a fixed real number, we can define the event

(X = x) as

(X = x) = {s : X(s) = x}

Similarly, for fixed numbers x, x1, and x2, we can define the
following events:

2.12
Cont'd..

• These events have probabilities that are denoted by

• Example 2.3

2.13
Types of Random Variables

• Discrete RVs: take values belonging to a finite (or countable) set.
E.g.:
- the number of people using their cordless phones,
- the age of a man (rounded off in years), and
- the number of children of a couple.
• Continuous RVs: an RV that may take an infinite number of values, or a
continuum of values, is called a continuous RV.
E.g.: - the height of men in a town,
- the voltage at the front end of a receiver, or
- the age of a man (without any rounding).

2.14
Distribution Function

• The probability P(X ≤ x) is also denoted by the function FX(x),

which is called the distribution function or cdf of the random

variable X: FX(x) = P(X ≤ x).
• It gives the sum of the probabilities of occurrence of all the
values to the left of, and including, the value X = x.

Properties of FX(x):
1. 0 ≤ FX(x) ≤ 1
2. FX(−∞) = 0 and FX(∞) = 1
3. FX(x) is nondecreasing: FX(x1) ≤ FX(x2) if x1 ≤ x2
4. P(x1 < X ≤ x2) = FX(x2) − FX(x1)

2.15
Cont'd..

Example 2.4: Consider the r.v. X defined in Example 2.2. Find and

sketch the cdf FX(x) of X. The table below gives

FX(x) = P(X ≤ x) for x = −1, 0, 1, 2, 3, 4.
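Assuming a fair coin, so each of the eight toss sequences has probability 1/8 (an assumption on our part, since the slide's table is in a figure), the cdf values can be reproduced as:

```python
from itertools import product
from fractions import Fraction

outcomes = [''.join(s) for s in product('HT', repeat=3)]
pmf = {}
for s in outcomes:                       # build P(X = k), k = number of heads
    k = s.count('H')
    pmf[k] = pmf.get(k, Fraction(0)) + Fraction(1, 8)

def cdf(x):
    # F_X(x) = P(X <= x): sum the pmf over all values not exceeding x
    return sum(p for k, p in pmf.items() if k <= x)

assert cdf(-1) == 0                      # no mass below 0
assert cdf(0) == Fraction(1, 8)
assert cdf(1) == Fraction(1, 2)
assert cdf(4) == 1                       # all mass captured
```

The staircase shape of Fig. 1 comes from the jumps of size P(X = k) at each value k.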

2.16
Cont'd..

Fig.1. Cumulative Distribution Function (cdf)

2.17
Discrete RVs

A discrete RV is characterized by a discrete set of allowable values x1,
x2, …, xn, and the probability of the RV taking one of these discrete values is
determined by the underlying experiment. The probability that X = xi is
denoted by P(X = xi) for i = 1, 2, …, n and is called the Probability Mass Function
(PMF) of X.
The PMF has the following properties:

1. P(X ≤ x) = FX(x) = Σ_{all xi ≤ x} P(X = xi)

2. Σ_{i=1}^{n} P(X = xi) = 1
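Both properties are easy to check numerically; here is a sketch for a hypothetical fair-die pmf (our own example, not from the slides):

```python
from fractions import Fraction

# Hypothetical pmf: a fair six-sided die
pmf = {i: Fraction(1, 6) for i in range(1, 7)}

# Property 2: the probabilities sum to one
assert sum(pmf.values()) == 1

# Property 1: the cdf is the running sum of the pmf
def F(x):
    return sum(p for xi, p in pmf.items() if xi <= x)

assert F(0) == 0
assert F(3) == Fraction(1, 2)
assert F(6) == 1
```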

2.18
Cont'd..

The probability density function of a discrete random variable

that assumes the values x1, x2, …, is P(x1), P(x2), …, where P(xi) = P(X = xi), i = 1, 2, …
Example 2.5: An information source generates symbols at random from
a four-letter alphabet {a, b, c, d}, with probabilities satisfying the
relation 2P(X = 1) = 3P(X = 2) = P(X = 3), where X is the length of the code. A
coding scheme encodes these symbols into binary codes as follows:

a) What is the range of X?


b) Assuming that the generation of symbols is independent, find the
probability distribution of X.

2.19
Cont'd..

Example 2.6:

2.20
Continuous RVs

A continuous random variable is defined as a random variable

whose cdf is continuous everywhere, and it can be defined as an
integral of a non-negative function.

Density Function
The probability density function of a continuous r.v. X, denoted fX(x),

is defined as the derivative of the distribution function:

fX(x) = dFX(x)/dx
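The derivative relationship fX(x) = dFX(x)/dx can be verified numerically; the exponential cdf below is our own illustrative choice of distribution:

```python
import math

# Hypothetical continuous r.v.: exponential with unit rate
F = lambda x: 1.0 - math.exp(-x) if x >= 0 else 0.0   # cdf
f = lambda x: math.exp(-x) if x >= 0 else 0.0         # pdf

# Approximate dF/dx with a central difference and compare with f
h, x = 1e-6, 1.3
numeric = (F(x + h) - F(x - h)) / (2 * h)
assert abs(numeric - f(x)) < 1e-6
```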

2.21
Cont'd..

2.22
Cont'd..

Example 2.7: Find the constant c such that the function

is a density function.

(a) Compute P(1 < X < 2).

(b) Find the distribution function FX(x).

(c) Draw the graphs of FX(x) and fX(x).

2.23
Conditional Distribution & Density Functions

The concept of conditional probability was introduced in Chapter 1.

Recall that, for two events A and B where P(B) > 0, the conditional
probability of A given that B has occurred is

P(A|B) = P(A ∩ B) / P(B)

In this section we extend the conditional probability concept to

include random variables.

2.24
Cont’d..

Let A be the event {X ≤ x} for the random variable X. The

resulting probability P{X ≤ x | B} is defined as the conditional
distribution function of X, which we denote FX(x|B). Thus

FX(x|B) = P{X ≤ x | B} = P({X ≤ x} ∩ B) / P(B)

where we use the notation {X ≤ x ∩ B} to signify the joint event {X ≤ x}

∩ B. This joint event consists of all outcomes s such that

X(s) ≤ x and s ∈ B

2.25
Cont’d..

Example 2.8: A random variable X has the following probability

distribution

2.26
Properties of Conditional Distribution

All the properties of ordinary distributions apply to FX(x|B).

Conditional Density

- The conditional density function is given by

fX(x|B) = dFX(x|B) / dx

2.27
Cont’d..

2.28
Cont’d..

Example 2.9: The diameter of an electric cable X is a continuous

random variable with pdf

2.29
Expectation

A discrete (continuous) r.v. is completely specified by its pmf (pdf).

It is often desirable to summarize the r.v. or predict its outcome in
terms of one or a few numbers.
What do we expect the value of the r.v. to be?
What range of values around the mean do we expect the r.v. to
take?
Such information can be provided by the mean and standard deviation
of the r.v.
These are special cases of moments of a probability distribution. The
first moment is the mean.

2.30
Cont’d..
An important concept in the theory of probability and statistics is the
mathematical expectation, or expected value, or mean value, or statistical
average of a random variable X.
The expected value of a random variable is denoted by E[X], X̄, or mX.
If X is a discrete random variable having values x1, x2, …, xn, then the
expected value of X is defined to be

E[X] = Σ_{i=1}^{n} xi P(X = xi)

Similarly, for a continuous random variable X with density function fX(x),

the expectation of X is defined to be

E[X] = ∫_{−∞}^{∞} x fX(x) dx
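Both definitions can be sketched directly; the fair die and the uniform density below are our own illustrative choices:

```python
from fractions import Fraction

# Discrete: E[X] = sum_i x_i P(X = x_i), for a fair six-sided die
pmf = {i: Fraction(1, 6) for i in range(1, 7)}
mean_d = sum(x * p for x, p in pmf.items())
assert mean_d == Fraction(7, 2)

# Continuous: E[X] = integral of x f_X(x) dx, for X uniform on (0, 1),
# approximated with a midpoint Riemann sum (f_X(x) = 1 on the interval)
n = 100_000
mean_c = sum(((k + 0.5) / n) * (1.0 / n) for k in range(n))
assert abs(mean_c - 0.5) < 1e-9
```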

2.31
Cont’d..

Example 2.10: A discrete random variable X has possible values xi,

i = 1, 2, 3, 4, 5, which occur with probabilities 0.4, 0.25, 0.15, 0.1, and
0.1, respectively. Find the mean of X and plot pX(x).

2.32
Expected Value of a Function of a
Random Variable

Let X be a continuous random variable. Then the function g(X) is also a random

variable, and its expected value, E[g(X)], is

E[g(X)] = ∫_{−∞}^{∞} g(x) fX(x) dx

If X is a discrete random variable, then the function g(X) is also a

random variable, and its expected value, E[g(X)], is

E[g(X)] = Σ_i g(xi) P(X = xi)
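The key point, that E[g(X)] needs only the distribution of X and not that of g(X), can be sketched as follows (the pmf and g are our own example):

```python
from fractions import Fraction

# Hypothetical discrete r.v. and the function g(x) = x^2
pmf = {-1: Fraction(1, 4), 0: Fraction(1, 2), 1: Fraction(1, 4)}
g = lambda x: x ** 2

# E[g(X)] = sum_i g(x_i) P(X = x_i)
Eg = sum(g(x) * p for x, p in pmf.items())
assert Eg == Fraction(1, 2)

# Note E[X] = 0 here, so in general E[g(X)] != g(E[X])
EX = sum(x * p for x, p in pmf.items())
assert EX == 0 and g(EX) != Eg
```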

2.33
Cont’d..

Example 2.11: Consider the random variable X with the distribution

shown in the figure below. Find the expected value of X.

Example 2.12: Define a function g(X) of a random variable X by

where x0 is a real number, −∞ < x0 < ∞. Show that

E[g(X)] = 1 − FX(x0)

2.34
Properties of expectations.

Let X and Y be random variables on a probability space. Then:

1. If X ≥ 0, then E[X] ≥ 0.
2. For any real number a, E[aX] = aE[X].
3. E[X + Y] = E[X] + E[Y].
4. If X is constant and equal to a, then E[a] = a.
5. If X and Y are independent and both E[|X|] and E[|Y|] are finite,
then E[XY] = E[X]E[Y].
6. (E[XY])² ≤ E[X²]E[Y²], with equality if and only if X and Y are
linearly dependent.
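Properties 3 and 5 can be verified by brute-force enumeration; the two independent fair bits below are our own illustration:

```python
from fractions import Fraction
from itertools import product

# Two independent fair {0, 1}-valued random variables
vals, p = [0, 1], Fraction(1, 2)

E_X = sum(x * p for x in vals)
E_Y = sum(y * p for y in vals)
E_sum = sum((x + y) * p * p for x, y in product(vals, vals))
E_prod = sum((x * y) * p * p for x, y in product(vals, vals))

assert E_sum == E_X + E_Y       # property 3: expectation is linear
assert E_prod == E_X * E_Y      # property 5: independence factorizes
```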

2.35
Moments of Random Variables

A moment is a specific quantitative measure, used in statistics, of the

shape of a set of points.

If the points represent a probability density, then the

zeroth moment is the total probability (i.e. one),

the first moment is the mean,

the second central moment is the variance, and the third central

moment measures the skewness.

2.36
Cont’d..

An immediate application of the expected value of a function g(X) of

a random variable X is calculating moments. Two types of moments
are of interest: those about the origin and those about the mean.

1. Moments about the origin

The function g(X) = X^n, n = 0, 1, 2, …, gives

the moments, denoted mn. Then

mn = E[X^n] = ∫_{−∞}^{∞} x^n fX(x) dx

Clearly m0 = 1, the area of the function fX(x), while m1 = E[X], the expected value of X.

2.37
Cont’d..

2. Central Moments
Moments about the mean value of X are called central moments and are
given the symbol µn. They are defined as the expected value of the
function

g(X) = (X − X̄)^n,  n = 0, 1, 2, …

which is

µn = E[(X − X̄)^n] = ∫_{−∞}^{∞} (x − x̄)^n fX(x) dx

The moment µ0 = 1, the area of fX(x), and µ1 = 0.

2.38
Variance and Skew

The second central moment µ2 is so important that we give it the

name variance and the special notation σX². Thus the variance is given by

σX² = µ2 = E[(X − X̄)²]

The variance characterizes how likely it is to observe values of the random

variable far from its mean.
Example 1.

2.39
Cont’d..

The positive square root σX of the variance is called the standard deviation of X;

it is a measure of the spread of the function fX(x) about the mean.
The variance can be found from knowledge of the first and second moments:

σX² = E[X²] − (E[X])² = m2 − m1²
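The identity σX² = m2 − m1² is easy to confirm for any pmf; the three-point pmf below is an arbitrary example of ours:

```python
from fractions import Fraction

pmf = {0: Fraction(1, 2), 1: Fraction(1, 4), 2: Fraction(1, 4)}

m1 = sum(x * p for x, p in pmf.items())              # first moment (mean)
m2 = sum(x**2 * p for x, p in pmf.items())           # second moment
var = sum((x - m1)**2 * p for x, p in pmf.items())   # second central moment

assert var == m2 - m1**2
assert var == Fraction(11, 16)
```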

Example 2.13: A random variable X has the probability density

Find a) its mean value


2.40
Properties of Variance

Let X and Y be random variables on a probability space.

If the central moments are small, then the random variable cannot deviate
much from its mean.
Example 2.14:

2.41
Skewness
Skewness is a measure of the asymmetry of the probability
distribution of a real-valued random variable about its mean.

The third central moment µ3 is a measure of the asymmetry of fX(x)

about x = X̄ = m1. It is called the skew of the density function.

The skewness value can be positive or negative.

2.42
Cont’d..
The normalized third central moment, µ3/σX³, is known as the skewness
of the density function, or the coefficient of skewness.
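A quick numerical sketch (the two-point pmf is our own example): a distribution with its mass stretched to the right has positive skewness.

```python
# Asymmetric two-point pmf: P(X = 0) = 3/4, P(X = 3) = 1/4
pmf = {0: 0.75, 3: 0.25}

mean = sum(x * p for x, p in pmf.items())
mu2 = sum((x - mean) ** 2 * p for x, p in pmf.items())   # variance
mu3 = sum((x - mean) ** 3 * p for x, p in pmf.items())   # third central moment
skew = mu3 / mu2 ** 1.5                                  # mu3 / sigma^3

assert skew > 0                        # long right tail -> positive skew
assert abs(skew - 2 / 3 ** 0.5) < 1e-9
```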

2.43
Cont’d..
Example 2.15:

2.44
Functions that give moments
Two functions can be defined that allow moments to be calculated for a
random variable X: the moment generating function and the
characteristic function.

Moment generating function

- The moment generating function (MGF) MX(t) of a random variable

X is defined by

MX(t) = E[e^{tX}]

If X is a discrete random variable with probability distribution P(xi) =

P(X = xi), i = 1, 2, …, then

MX(t) = Σ_i e^{t xi} P(xi)

2.45
Cont’d..
If X is a continuous random variable with density function fX(x), then
its MGF is

MX(t) = ∫_{−∞}^{∞} e^{tx} fX(x) dx

A "nice" advantage of the MGF is its ability to give the moments. Recall
that the Maclaurin series of the function e^{tx} is

e^{tx} = 1 + tx + (tx)²/2! + … = Σ_{n=0}^{∞} (tx)^n / n!

This is a convergent series; thus e^{tX} can be expressed in this series.

2.46
Cont’d..
By using the fact that the expected value of a sum equals the sum of
the expected values, we can write the MGF as

MX(t) = 1 + tE[X] + (t²/2!)E[X²] + … + (t^n/n!)E[X^n] + …

Since t is treated as a constant with respect to the expectation

operator, taking the derivative of MX(t) with respect to t, we obtain

M′X(t) = E[X] + tE[X²] + … + (t^{n−1}/(n−1)!)E[X^n] + …

2.47
Cont’d..
Setting t = 0, all terms become zero except E[X]. We obtain

M′X(0) = E[X]

Similarly, taking the second derivative of MX(t) with respect to t and

setting t = 0, we obtain

M″X(0) = E[X²]

Continuing in this manner, we obtain all the moments:

E[X^n] = M^{(n)}X(0),  n = 1, 2, …

where M^{(n)}X(0) denotes the nth derivative of MX(t) with respect to t, evaluated at t = 0.
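This can be checked numerically with finite differences; the exponential MGF below, MX(t) = λ/(λ − t) for t < λ, is our own illustrative choice:

```python
# MGF of an exponential r.v. with parameter lam: M_X(t) = lam / (lam - t)
lam = 2.0
M = lambda t: lam / (lam - t)             # valid for t < lam

h = 1e-5
m1 = (M(h) - M(-h)) / (2 * h)             # approximates M'(0)  = E[X]
m2 = (M(h) - 2 * M(0) + M(-h)) / h**2     # approximates M''(0) = E[X^2]

assert abs(m1 - 1 / lam) < 1e-6           # E[X]   = 1/lam
assert abs(m2 - 2 / lam**2) < 1e-4        # E[X^2] = 2/lam^2
```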

2.48
Special distributions & Density Functions

The random variable X is called normal or Gaussian if its

probability density function has the form

fX(x) = (1/(σ√(2π))) · exp(−(x − m)²/(2σ²))

where m and σ are, respectively, the mean and standard deviation of

X and satisfy

The Gaussian density is the most important of all densities, and it

enters into nearly all areas of science and engineering.

2.49
Cont'd..

2.50
Cont'd..

the conditions −∞ < m < ∞ and σ > 0. The corresponding distribution

function is given by

FX(x) = (1/(σ√(2π))) ∫_{−∞}^{x} exp(−(ξ − m)²/(2σ²)) dξ

2.51
Cont'd..

2.52
Cont'd..

2.53
Cont'd..

Example 2.16: Find the probability of the event for a

Gaussian random variable having m = 3 and σ = 2.
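The event itself is in a figure not reproduced here; assuming the event {X ≤ 5.5} purely as an illustration, the probability follows from FX(x) = Φ((x − m)/σ), which can be written with math.erf:

```python
import math

m, sigma = 3.0, 2.0

# F_X(x) = Phi((x - m) / sigma), expressed via the error function
F = lambda x: 0.5 * (1.0 + math.erf((x - m) / (sigma * math.sqrt(2.0))))

assert abs(F(m) - 0.5) < 1e-12        # half the mass lies below the mean
p = F(5.5)                            # P{X <= 5.5} = Phi(1.25)
assert abs(p - 0.8944) < 1e-3
```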

2.54
Uniform Distribution

A r.v. X is called a uniform r.v. over (a, b) if its pdf is given by

fX(x) = 1/(b − a) for a ≤ x ≤ b, and 0 otherwise,

for real constants −∞ < a < ∞ and b > a.

The corresponding cdf of X is

FX(x) = 0 for x < a;  (x − a)/(b − a) for a ≤ x < b;  1 for x ≥ b.

2.55
Cont’d..

Example 2.17: Let Y = a·cos(ωt + θ), where a, ω, and t are constants, and θ

is a uniform random variable on the interval (0, 2π). The
random variable Y results from sampling the amplitude of a
sinusoid with random phase θ. Find the expected value of Y.
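Intuitively E[Y] = 0, because the average of a cosine over a full period of the uniform phase vanishes; a numerical sketch (the constants a, ω, t are chosen arbitrarily):

```python
import math

# E[Y] = (1/2pi) * integral over (0, 2pi) of a*cos(w*t + theta) d(theta),
# approximated by a midpoint Riemann sum
a, w, t = 2.0, 5.0, 0.3
n = 100_000
EY = sum(a * math.cos(w * t + (k + 0.5) * 2 * math.pi / n)
         for k in range(n)) / n

assert abs(EY) < 1e-9   # the cosine averages to zero over a full period
```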

2.56
Exponential Distribution

The exponential density and distribution functions

of a r.v. X are given by

fX(x) = (1/b)·e^{−(x−a)/b} for x > a, and 0 for x < a

FX(x) = 1 − e^{−(x−a)/b} for x > a, and 0 for x < a

for real numbers −∞ < a < ∞ and b > 0, where b is the mean of
X − a (events occurring at rate 1/b have exponential inter-event times).

2.57
Cont’d..

Fig. 2.3. Exponential density function


Fig. 2.4. Exponential distribution function

Example 2.18

2.58
Poisson Distribution

The Poisson random variable X has density and distribution

functions given by

P(X = k) = e^{−b} b^k / k!,  k = 0, 1, 2, …

where b > 0 is a real constant; it is the average number of events that occur.
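With P(X = k) = e^(−b)·b^k/k!, the probabilities sum to one and the mean equals b; a quick check (b = 3 is our arbitrary choice):

```python
import math

b = 3.0                                   # average number of events
pmf = lambda k: math.exp(-b) * b**k / math.factorial(k)

# Probabilities sum to one (truncating the tail, negligible for b = 3)
total = sum(pmf(k) for k in range(100))
assert abs(total - 1.0) < 1e-12

# The mean of the Poisson r.v. equals b
mean = sum(k * pmf(k) for k in range(100))
assert abs(mean - b) < 1e-9
```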

2.59
Binomial Distribution

Let 0 < p < 1 and N = 1, 2, …; then the function

P(X = k) = (N choose k) p^k (1 − p)^{N−k},  k = 0, 1, …, N

is called the binomial density function.

- The quantity (N choose k) is the binomial coefficient, defined as

(N choose k) = N! / (k!(N − k)!)

- The binomial density function can be applied to the Bernoulli

trial experiment.

2.60
Cont’d..
-The binomial distribution function is given by
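Both the density and the distribution function can be sketched with math.comb (the values of N and p below are arbitrary):

```python
from math import comb

N, p = 4, 0.25
pmf = lambda k: comb(N, k) * p**k * (1 - p)**(N - k)

# The N + 1 probabilities sum to one
assert abs(sum(pmf(k) for k in range(N + 1)) - 1.0) < 1e-12

# Distribution function F(x) = P(X <= x) as a running sum of the pmf
F = lambda x: sum(pmf(k) for k in range(N + 1) if k <= x)
assert F(-1) == 0
assert abs(F(N) - 1.0) < 1e-12
```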

2.61
Transformation of a Random Variable

We transform (change) one random variable X into a new random variable Y

by means of a transformation Y = T(X).

Typically, the density function fX(x) or distribution function FX(x) of X

is known, and the problem is to determine either the density function
fY(y) or the distribution function FY(y) of Y.
In general, X can be a discrete or continuous random variable, and the
transformation T can be linear, nonlinear, etc.

2.62
Cont’d..

Depending on the forms of X and T, there are many cases to consider

in a general study:
Case one: both X and T continuous, and T either monotonically increasing
or decreasing with X.
Case two: both X and T continuous, but T non-monotonic.
Case three: X discrete and T continuous.

Monotonic transformations of a continuous RV


- A transformation T is called monotonically increasing if T(x1) < T(x2)
for any x1 < x2. It is monotonically decreasing if T(x1) > T(x2) for any x1 < x2.

2.63
Cont’d..

Assume that T is continuous and differentiable at all values of x for


which fx(x) ≠ 0.

fig 1. Monotonic transformations a) increasing b) decreasing

2.64
Cont’d..

where T⁻¹ represents the inverse of the transformation T.

Now the probability of the event {Y ≤ y} must equal the probability of

the event {X ≤ x} because of the one-to-one correspondence between X and
Y. Thus

FY(y) = P{Y ≤ y} = P{X ≤ T⁻¹(y)} = FX(T⁻¹(y))

Differentiating both sides:

fY(y) = fX(T⁻¹(y)) · d[T⁻¹(y)]/dy


2.66
Cont’d..

If the transformation is monotonically decreasing, we would have

FY(y) = P{X ≥ T⁻¹(y)} = 1 − FX(T⁻¹(y))

and consequently

fY(y) = −fX(T⁻¹(y)) · d[T⁻¹(y)]/dy

2.67
Cont’d..

In either case, the density function of Y is given by

fY(y) = fX(T⁻¹(y)) · |d[T⁻¹(y)]/dy|

This result can be generalized to the case where the equation y = T(x) has
many real roots x1, x2, …, xn, …, as shown below.

2.68
Cont’d..

In this case, the density function of the random variable Y, Y = T(X),

is given by

fY(y) = Σ_i fX(xi) / |T′(xi)|

where fX(x) is the density function of X, each root xi, i = 1, 2, …, is

expressed in terms of y, and T′(x) is the derivative of T(x) with
respect to x.
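A sketch of the many-roots formula for the classic case Y = X² (our own example, not from the slides): for X uniform on (−1, 1), each y in (0, 1) has the two roots x = ±√y.

```python
import math

# X uniform on (-1, 1); Y = T(X) = X^2 has roots x_i = +/- sqrt(y)
fX = lambda x: 0.5 if -1.0 < x < 1.0 else 0.0
Tprime = lambda x: 2.0 * x

def fY(y):
    # f_Y(y) = sum_i f_X(x_i) / |T'(x_i)| over the real roots of T(x) = y
    roots = [math.sqrt(y), -math.sqrt(y)]
    return sum(fX(x) / abs(Tprime(x)) for x in roots)

# Cross-check against the exact cdf F_Y(y) = P(X^2 <= y) = sqrt(y) on (0, 1)
y, h = 0.25, 1e-6
numeric = (math.sqrt(y + h) - math.sqrt(y - h)) / (2 * h)
assert abs(fY(y) - numeric) < 1e-6
```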

2.69
Cont’d..

Example 2.19: Determine the density function of the random variable

Y, where Y = T(X) = , given that a is positive and the density
function of X is fX(x).

2.70
