
OPERATIONS ON RANDOM VARIABLES

Operations on single and multiple random variables: expectations


Expected Value of a Random Variable, Function of a Random Variable,
Moments about the Origin, Central Moments, Variance and Skew, Chebychev’s
Inequality, Characteristic Function, Moment Generating Function, Joint
Distribution Function and its Properties, Joint Density Function and its Properties,
Marginal Distribution and Marginal Density Functions, Conditional Distribution and
Density (Point Conditioning), Conditional Distribution and Density (Interval
Conditioning), Statistical Independence, Sum of Two and More Random
Variables, Central Limit Theorem, Equal and Unequal Distributions, Expected
Value of a Function of Random Variables, Joint Moments about the Origin, Joint
Central Moments, Joint Characteristic Functions.

Random variables, which describe the events on a given sample space, were introduced earlier; this unit discusses basic operations on them such as expectation, moments, and variance.

Mathematical Expectation:

The mean or average value of a random variable X, computed from its probability density function, is called the mathematical expected value of X. It is denoted as $\bar{X}$ or E[X].

Expected Value of A Random Variable:

If X is a continuous random variable with a valid probability density function f(x), then the expected value (mean value) of X is defined as

$$m_x = \bar{X} = E[X] = \int_{-\infty}^{\infty} x\, f(x)\, dx$$

If X is a discrete random variable with a set of elements $\{x_1, x_2, x_3, x_4, \dots\}$ and a set of corresponding probabilities $\{P(x_1), P(x_2), P(x_3), \dots\}$, then the expected value of X is

$$E[X] = \sum_{i=1}^{N} x_i\, P(x_i)$$

where N is an integer and may be infinite.
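As a quick numerical illustration of the discrete formula above, the sketch below (not from the original text; the fair die is an assumed example) computes E[X] for a six-sided die in Python.

```python
# Expected value of a discrete random variable: a fair six-sided die.
# Each face x_i = 1, 2, ..., 6 occurs with probability P(x_i) = 1/6.
values = [1, 2, 3, 4, 5, 6]
probs = [1 / 6] * 6

expected_value = sum(x * p for x, p in zip(values, probs))
print(expected_value)  # 3.5
```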


Expected Value of A Function of A Random Variable:

Consider a random variable X with probability density function f(x). If g(X) is a real function of X, then the expected value of g(X) for a continuous random variable X is defined as

$$E[g(X)] = \int_{-\infty}^{\infty} g(x)\, f(x)\, dx$$

For a discrete random variable X, the expected value of g(X) is

$$E[g(X)] = \sum_{i=1}^{N} g(x_i)\, P(x_i)$$

where $P(x_i)$, $i = 1, 2, 3, \dots, N$, are the probabilities of the values $x_i$.
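A minimal sketch of the continuous-case definition above, using numerical integration and an assumed example (X uniform on [0, 1] with g(x) = x²), is shown below.

```python
from scipy.integrate import quad

# E[g(X)] for a continuous random variable, evaluated numerically.
# Assumed example: X uniform on [0, 1] (f(x) = 1 on [0, 1]), g(x) = x**2.
f = lambda x: 1.0           # density of U(0, 1) on its support
g = lambda x: x ** 2

expected_g, _ = quad(lambda x: g(x) * f(x), 0.0, 1.0)
print(expected_g)           # ~0.3333 = E[X^2] for U(0, 1)
```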

PROPERTIES OF EXPECTATION

1. If a random variable X equals a constant a, i.e. X = a, then E[a] = a.

We know that

$$E[X] = \int_{-\infty}^{\infty} x\, f(x)\, dx$$

so

$$E[a] = \int_{-\infty}^{\infty} a\, f(x)\, dx = a \int_{-\infty}^{\infty} f(x)\, dx$$

We know that the area under the curve of a probability density function is equal to one, i.e.

$$\int_{-\infty}^{\infty} f(x)\, dx = 1$$

Then

$$E[a] = a \cdot 1 = a$$

2. If a is any constant, then E[aX] = a E[X].

$$E[aX] = \int_{-\infty}^{\infty} a x\, f(x)\, dx = a \int_{-\infty}^{\infty} x\, f(x)\, dx = a\, E[X]$$

3. If a and b are any two constants, then E[aX + b] = a E[X] + b.

$$E[aX + b] = \int_{-\infty}^{\infty} (a x + b)\, f(x)\, dx = a \int_{-\infty}^{\infty} x\, f(x)\, dx + b \int_{-\infty}^{\infty} f(x)\, dx$$

Since the area under the curve of a probability density function is equal to one, i.e.

$$\int_{-\infty}^{\infty} f(x)\, dx = 1$$

it follows that

$$E[aX + b] = a\, E[X] + b$$

4. If $g_1(x)$ and $g_2(x)$ are two functions of a random variable X, then

$$E[g_1(X) + g_2(X)] = E[g_1(X)] + E[g_2(X)]$$

since

$$E[g_1(X) + g_2(X)] = \int_{-\infty}^{\infty} \big(g_1(x) + g_2(x)\big) f(x)\, dx = \int_{-\infty}^{\infty} g_1(x) f(x)\, dx + \int_{-\infty}^{\infty} g_2(x) f(x)\, dx = E[g_1(X)] + E[g_2(X)]$$
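The properties above can be checked empirically. The following sketch (an assumed example using samples from a standard normal distribution) verifies linearity of expectation by Monte Carlo.

```python
import numpy as np

# Monte Carlo check of the linearity properties of expectation.
# Assumed distribution: X ~ N(0, 1).
rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=1.0, size=1_000_000)
a, b = 3.0, 2.0

print(np.mean(a * x + b))                     # ~ a*E[X] + b = 2.0
print(np.mean(x ** 2 + np.sin(x)))            # ~ E[X^2] + E[sin X] ~ 1.0
print(np.mean(x ** 2) + np.mean(np.sin(x)))   # same value, computed separately
```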

MOMENTS:

Moments are applications of the expected value of a random variable X. They are classified as moments about the origin and moments about the mean value, denoted $m_n$ and $\mu_n$ respectively:

$$m_n = E[X^n] = \int_{-\infty}^{\infty} x^n f(x)\, dx$$

and

$$\mu_n = E[(X - m_1)^n] = \int_{-\infty}^{\infty} (x - m_1)^n f(x)\, dx, \qquad n = 0, 1, 2, 3, 4, \dots$$

Moments about Origin:

For n = 0, the zeroth moment about the origin is

$$m_0 = E[X^0] = \int_{-\infty}^{\infty} f(x)\, dx = 1$$

For n = 1, the first moment about the origin is also called the average value or expected value of the random variable:

$$m_1 = E[X] = \int_{-\infty}^{\infty} x\, f(x)\, dx$$

For n = 2, the second moment about the origin is also called the mean square value or average power of the random variable; the square root of the mean square value is the RMS value:

$$m_2 = E[X^2] = \int_{-\infty}^{\infty} x^2 f(x)\, dx$$
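A small numerical sketch of these definitions (with an assumed density, X uniform on [0, 2], so f(x) = 0.5 on that interval) is given below.

```python
import numpy as np
from scipy.integrate import quad

# First and second moments about the origin for an assumed density:
# X uniform on [0, 2], f(x) = 0.5 on [0, 2].
f = lambda x: 0.5

m1, _ = quad(lambda x: x * f(x), 0.0, 2.0)        # mean value
m2, _ = quad(lambda x: x ** 2 * f(x), 0.0, 2.0)   # mean square value
print(m1, m2, np.sqrt(m2))                        # 1.0, 1.333..., RMS ~ 1.155
```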

Higher order moments can be calculated using the moment generating function or the characteristic function of the random variable X.

MOMENT GENERATING FUNCTION:

The moment generating function $M_X(v)$ of a random variable X generates the nth-order moments about the origin and is defined as

$$M_X(v) = E[e^{vX}]$$

where v is a real number, $-\infty < v < \infty$. For a continuous random variable,

$$M_X(v) = E[e^{vX}] = \int_{-\infty}^{\infty} e^{vx} f(x)\, dx$$

If $M_X(v)$ is the moment generating function of a random variable X, then the nth moment of X is given by

$$m_n = \left. \frac{d^n M_X(v)}{dv^n} \right|_{v=0}$$

Thus the nth moment of X can be derived from the moment generating function. Note that the moment generating function does not exist for every random variable.
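A brief symbolic sketch of the derivative-at-zero relation, using an assumed example (the exponential density with rate 2, whose MGF has the known closed form $\lambda/(\lambda - v)$), is shown below.

```python
import sympy as sp

# Moments from a moment generating function.
# Assumed example: exponential density f(x) = l*exp(-l*x), x >= 0, with l = 2,
# whose MGF is M_X(v) = l / (l - v) for v < l.
v = sp.symbols('v')
lam = 2
M = lam / (lam - v)

m1 = sp.diff(M, v, 1).subs(v, 0)   # first moment  = 1/lam = 1/2
m2 = sp.diff(M, v, 2).subs(v, 0)   # second moment = 2/lam**2 = 1/2
print(m1, m2)
```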

CHARACTERISTIC FUNCTION:

The characteristic function $\phi_X(\omega)$ of a random variable X also generates the nth-order moments about the origin and is defined as

$$\phi_X(\omega) = E[e^{j\omega X}]$$

where $\omega$ is a real number, $-\infty < \omega < \infty$. For a continuous random variable,

$$\phi_X(\omega) = E[e^{j\omega X}] = \int_{-\infty}^{\infty} e^{j\omega x} f(x)\, dx$$

If $\phi_X(\omega)$ is the characteristic function of a random variable X, then the nth moment of X is given by

$$m_n = (-j)^n \left. \frac{d^n \phi_X(\omega)}{d\omega^n} \right|_{\omega=0}$$
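The same moments can be recovered from the characteristic function. The sketch below uses the same assumed exponential example, whose characteristic function has the known closed form $\lambda/(\lambda - j\omega)$.

```python
import sympy as sp

# Moments from a characteristic function.
# Assumed example: exponential density with l = 2, whose characteristic
# function is phi(w) = l / (l - j*w).
w = sp.symbols('w', real=True)
j = sp.I
lam = 2
phi = lam / (lam - j * w)

m1 = sp.simplify((-j) ** 1 * sp.diff(phi, w, 1).subs(w, 0))   # 1/lam = 1/2
m2 = sp.simplify((-j) ** 2 * sp.diff(phi, w, 2).subs(w, 0))   # 2/lam**2 = 1/2
print(m1, m2)
```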

Moments about the Mean Value (Central Moments):

$$\mu_n = E[(X - m_x)^n] = \int_{-\infty}^{\infty} (x - m_x)^n f(x)\, dx$$

For n = 0, the zeroth central moment (zeroth moment about the mean value) is

$$\mu_0 = E[(X - m_x)^0] = \int_{-\infty}^{\infty} f(x)\, dx = 1$$

For n = 1, the first central moment (first moment about the mean value) is

$$\mu_1 = E[(X - m_x)^1] = \int_{-\infty}^{\infty} (x - m_x)\, f(x)\, dx = \int_{-\infty}^{\infty} x\, f(x)\, dx - m_x \int_{-\infty}^{\infty} f(x)\, dx = m_x - m_x = 0$$

since we know that

$$m_x = \int_{-\infty}^{\infty} x\, f(x)\, dx \quad \text{and} \quad \int_{-\infty}^{\infty} f(x)\, dx = 1$$

The first moment about the mean value is therefore always zero.

For n = 2, the second central moment (second moment about the mean value) is called the variance of the random variable and is denoted $\sigma_x^2$:

$$\sigma_x^2 = \mu_2 = E[(X - m_x)^2] = \int_{-\infty}^{\infty} (x - m_x)^2 f(x)\, dx$$

Expanding the square,

$$\sigma_x^2 = E[X^2 + m_x^2 - 2 X m_x] = E[X^2] - 2 m_x E[X] + m_x^2 = E[X^2] - 2 m_x^2 + m_x^2$$

$$\sigma_x^2 = E[X^2] - m_x^2$$

The square root of the variance is called the standard deviation.
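The identity $\sigma_x^2 = E[X^2] - m_x^2$ is easy to verify numerically; the sketch below uses an assumed sample from a uniform distribution on [0, 2].

```python
import numpy as np

# Numerical check of var(X) = E[X^2] - (E[X])^2.
# Assumed sample: X ~ uniform on [0, 2], whose variance is 1/3.
rng = np.random.default_rng(1)
x = rng.uniform(0.0, 2.0, size=1_000_000)

mean = np.mean(x)
mean_square = np.mean(x ** 2)
print(mean_square - mean ** 2)   # ~ 1/3
print(np.var(x))                 # same quantity computed directly
```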

The physical significance of variance and standard deviation: consider two random variables $x_1$ and $x_2$ whose probability density functions $f(x_1)$ and $f(x_2)$, shown in the figure, have the same mean value E[X]. Both functions are symmetrical about the mean, but the values of $f(x_1)$ deviate less from E[X], whereas the values of $f(x_2)$ deviate more. The amount of deviation about the mean therefore differs between $x_1$ and $x_2$; this amount of deviation is called the spread of the function f(x). The standard deviation of X is a measure of the spread of the function about the mean.

For n = 3, the third central moment (third moment about the mean value) is called the skew; it measures the asymmetry of the density about the mean value:

$$\mu_3 = E[(X - m_x)^3] = \int_{-\infty}^{\infty} (x - m_x)^3 f(x)\, dx$$

For n = 4, the fourth central moment (fourth moment about the mean value) is called the kurtosis; it measures the peakedness of the density of the random variable X:

$$\mu_4 = E[(X - m_x)^4] = \int_{-\infty}^{\infty} (x - m_x)^4 f(x)\, dx$$
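The third and fourth central moments can be estimated directly from samples, as in the sketch below (an assumed example using the exponential density with rate 1, for which $\mu_3 = 2$ and $\mu_4 = 9$).

```python
import numpy as np

# Third and fourth central moments (skew and kurtosis as defined above),
# estimated from an assumed sample: X exponential with rate 1.
rng = np.random.default_rng(2)
x = rng.exponential(scale=1.0, size=1_000_000)

mx = np.mean(x)
mu3 = np.mean((x - mx) ** 3)   # ~ 2 for the exponential(1) density
mu4 = np.mean((x - mx) ** 4)   # ~ 9 for the exponential(1) density
print(mu3, mu4)
```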

Chebychev’s Inequality: Chebychev’s inequality is an important inequality that is useful for solving probability problems. For a given random variable X with mean $\bar{X}$ and variance $\sigma_x^2$, it states that

$$P\{|X - \bar{X}| \geq \epsilon\} \leq \frac{\sigma_x^2}{\epsilon^2}$$

where $\epsilon$ is any positive number.
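The bound can be checked by simulation; the sketch below uses an assumed standard normal sample and $\epsilon = 2$.

```python
import numpy as np

# Monte Carlo check of Chebychev's inequality.
# Assumed distribution: X ~ N(0, 1), so mean 0 and variance 1.
rng = np.random.default_rng(3)
x = rng.normal(0.0, 1.0, size=1_000_000)
eps = 2.0

lhs = np.mean(np.abs(x - x.mean()) >= eps)   # P{|X - mean| >= eps}
rhs = np.var(x) / eps ** 2                   # Chebychev bound
print(lhs, rhs)                              # ~0.0455 <= 0.25
```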


MULTI RANDOM VARIABLES

Many practical situations require multiple random variables for analysis. This section therefore introduces the probability functions for two random variables; the theory can then be extended to more random variables.

JOINT PROBABILITY DISTRIBUTION FUNCTION

Consider two random variables X and Y, with elements {x} and {y} in the xy plane. The ordered pair of numbers (x, y) is called a random vector in the two-dimensional product space or joint sample space. Let two events be A = {X ≤ x} and B = {Y ≤ y}. Then the joint probability distribution function for the joint event {X ≤ x, Y ≤ y} is defined as

$$F(x, y) = P\{X \leq x, Y \leq y\} = P(A \cap B)$$

For discrete random variables, if $X = \{x_1, x_2, x_3, \dots, x_N\}$ and $Y = \{y_1, y_2, y_3, \dots, y_M\}$ with joint probabilities $P(x_n, y_m) = P(X = x_n, Y = y_m)$, then the joint probability distribution function is given by

$$F(x, y) = \sum_{n=1}^{N} \sum_{m=1}^{M} P(x_n, y_m)\, u(x - x_n)\, u(y - y_m)$$
PROPERTIES OF JOINT PROBABILITY DISTRIBUTION FUNCTIONS

1. $F(-\infty, \infty) = 0$
2. $F(\infty, \infty) = 1$
3. $F(-\infty, -\infty) = 0$
4. $F(-\infty, y) = 0$
5. $F(x, -\infty) = 0$
6. F(x, y) is a monotonic, non-decreasing function of both x and y.
7. The probability of the joint event $\{x_1 < X \leq x_2, y_1 < Y \leq y_2\}$ is given by
$$P\{x_1 < X \leq x_2, y_1 < Y \leq y_2\} = F(x_2, y_2) + F(x_1, y_1) - F(x_1, y_2) - F(x_2, y_1)$$
8. The marginal probability distribution functions are given by
$$F(\infty, y) = F(y), \qquad F(x, \infty) = F(x)$$
JOINT PROBABILITY DENSITY FUNCTION

For two random variables X and Y, the joint probability density function, denoted f(x, y), is defined as the second-order partial derivative of the joint probability distribution function:

$$f(x, y) = \frac{\partial^2 F(x, y)}{\partial x\, \partial y}$$

f(x, y) is referred to as the joint probability density function of the continuous random variables. The joint probability density function of discrete random variables is defined as

$$f(x, y) = \sum_{i=1}^{N} \sum_{j=1}^{M} P(x_i, y_j)\, \delta(x - x_i)\, \delta(y - y_j)$$

PROPERTIES OF JOINT PROBABILITY DENSITY FUNCTIONS

1. $f(x, y) \geq 0$
2. $\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f(x, y)\, dx\, dy = 1$
3. $F(x, y) = \int_{-\infty}^{x} \int_{-\infty}^{y} f(x, y)\, dy\, dx$
4. $F(x) = \int_{-\infty}^{x} \int_{-\infty}^{\infty} f(x, y)\, dy\, dx$
5. $F(y) = \int_{-\infty}^{y} \int_{-\infty}^{\infty} f(x, y)\, dx\, dy$
6. $f(x) = \int_{-\infty}^{\infty} f(x, y)\, dy$ (the marginal density function of X)
7. $f(y) = \int_{-\infty}^{\infty} f(x, y)\, dx$ (the marginal density function of Y)
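Properties 2 and 6 are illustrated numerically below (a sketch with an assumed joint density f(x, y) = x + y on the unit square and zero elsewhere).

```python
from scipy.integrate import dblquad, quad

# Assumed joint density: f(x, y) = x + y on the unit square, zero elsewhere.
f = lambda x, y: x + y

# Property 2: the joint density integrates to one over its support.
total, _ = dblquad(lambda y, x: f(x, y), 0, 1, 0, 1)
print(total)                        # ~1.0

# Property 6: the marginal density f(x) = integral of f(x, y) over y,
# evaluated here at x = 0.3 (expected value: 0.3 + 0.5 = 0.8).
fx, _ = quad(lambda y: f(0.3, y), 0, 1)
print(fx)
```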
STATISTICAL INDEPENDENCE

Two events A and B are statistically independent if and only if

$$P(A \cap B) = P(A)\, P(B)$$

The above condition is extended to random variables X and Y by defining the events A = {X ≤ x} and B = {Y ≤ y} for two real numbers x and y. Thus X and Y are said to be statistically independent random variables if and only if

$$P\{X \leq x, Y \leq y\} = P\{X \leq x\}\, P\{Y \leq y\}$$

This can also be expressed in terms of the joint probability distribution function and the individual probability distribution functions:

$$F(x, y) = F(x)\, F(y)$$

If X and Y are independent, the same factorization holds for the joint probability density function and the individual probability density functions:

$$f(x, y) = f(x)\, f(y)$$
SUM OF TWO RANDOM VARIABLES

Let W be a random variable equal to the sum of two random variables, where X is the input signal and Y is the noise signal:

W = X + Y

If F(x) and F(y) are the probability distribution functions of X and Y respectively, then the probability distribution function of W is given by

$$F(w) = P\{W \leq w\} = P\{X + Y \leq w\}$$

If f(x, y) is the joint probability density function of X and Y, then integrating it over the region X + Y ≤ w (for each y, x ranges from $-\infty$ up to w - y) gives the probability distribution function

$$F(w) = \int_{-\infty}^{\infty} \int_{-\infty}^{w-y} f(x, y)\, dx\, dy$$

Since X and Y are independent random variables, f(x, y) = f(x) f(y), so

$$F(w) = \int_{-\infty}^{\infty} f(y) \int_{-\infty}^{w-y} f(x)\, dx\, dy$$

We know that

$$f(w) = \frac{dF(w)}{dw} = \frac{d}{dw}\left[\int_{-\infty}^{\infty} f(y) \int_{-\infty}^{w-y} f(x)\, dx\, dy\right] = \int_{-\infty}^{\infty} f(y)\, \frac{d}{dw}\left[\int_{-\infty}^{w-y} f(x)\, dx\right] dy$$

The inner integral is differentiated using Leibniz’s rule,

$$\frac{d}{dw}\left[\int_{a(w)}^{b(w)} f(x, w)\, dx\right] = \int_{a(w)}^{b(w)} \frac{\partial f(x, w)}{\partial w}\, dx + f\big(b(w)\big)\frac{db}{dw} - f\big(a(w)\big)\frac{da}{dw}$$

with $b(w) = w - y$ and $a(w) = -\infty$, and with an integrand that does not depend on w:

$$\frac{d(w-y)}{dw} = 1, \qquad \frac{d(-\infty)}{dw} = 0, \qquad \frac{\partial f(x)}{\partial w} = 0$$

so that

$$\frac{d}{dw}\left[\int_{-\infty}^{w-y} f(x)\, dx\right] = f(w - y)$$

The probability density function of W is therefore

$$f(w) = \int_{-\infty}^{\infty} f(y)\, f(w - y)\, dy$$

The above expression is known as the convolution integral and can also be expressed as

$$f(w) = f(x) \otimes f(y)$$

Hence, the probability density function of the sum of two statistically independent random variables is equal to the convolution of their individual probability density functions.
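For instance, the convolution of two uniform densities on [0, 1] is the triangular density on [0, 2]; a short numerical sketch of this assumed example is given below.

```python
import numpy as np

# Density of W = X + Y for independent X, Y ~ uniform on [0, 1],
# obtained as a numerical convolution of the two individual densities.
dx = 0.001
x = np.arange(0.0, 1.0, dx)
fx = np.ones_like(x)           # density of X on [0, 1]
fy = np.ones_like(x)           # density of Y on [0, 1]

fw = np.convolve(fx, fy) * dx  # approximates the convolution integral
w = np.arange(len(fw)) * dx
print(fw[np.searchsorted(w, 1.0)])   # ~1.0, the peak of the triangle at w = 1
```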

MULTIPLE RANDOM VARIABLES

Consider N statistically independent random variables $X_n$, n = 1, 2, 3, ..., N, and let Y be their sum:

$$Y = X_1 + X_2 + X_3 + X_4 + X_5 + \dots + X_N$$

Then the probability density function of Y, the sum of N statistically independent random variables, is equal to the convolution of their individual probability density functions.

CENTRAL LIMIT THEOREM

It states that the probability density function of the sum of N statistically independent random variables approaches the Gaussian density function as N tends to infinity.
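The sketch below illustrates this by simulation: the standardized sum of N = 30 independent uniform random variables (an assumed example) matches the Gaussian coverage probabilities closely.

```python
import numpy as np

# Central limit theorem sketch: sum of N independent uniform random variables.
rng = np.random.default_rng(4)
N = 30
sums = rng.uniform(0.0, 1.0, size=(100_000, N)).sum(axis=1)

z = (sums - sums.mean()) / sums.std()     # standardize the sums
print(np.mean(np.abs(z) <= 1.0))          # ~0.683, the Gaussian value
print(np.mean(np.abs(z) <= 2.0))          # ~0.954
```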

OPERATIONS ON MULTI RANDOM VARIABLES

The expected value of a function g(X, Y) of random variables X and Y with joint probability density function f(x, y) is

$$E[g(X, Y)] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} g(x, y)\, f(x, y)\, dx\, dy$$

JOINT MOMENTS

1. Joint Moments about the Origin

The joint moments about the origin for two random variables X and Y are denoted $m_{nk}$; mathematically,

$$m_{nk} = E[X^n Y^k] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x^n y^k\, f(x, y)\, dx\, dy$$

where n and k are positive integers; the sum n + k is called the order of the moments.

If k = 0, then $m_{n0} = E[X^n]$ are the moments of the random variable X.
If n = 0, then $m_{0k} = E[Y^k]$ are the moments of the random variable Y.

The first-order moments are $m_{10}$ and $m_{01}$:

$$m_{10} = E[X^1 Y^0] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x\, f(x, y)\, dx\, dy = E[X]$$

$$m_{01} = E[X^0 Y^1] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} y\, f(x, y)\, dx\, dy = E[Y]$$

The second-order moments are $m_{20}$, $m_{02}$ and $m_{11}$:

$$m_{20} = E[X^2] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x^2\, f(x, y)\, dx\, dy$$

$$m_{02} = E[Y^2] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} y^2\, f(x, y)\, dx\, dy$$

$$m_{11} = E[XY] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x y\, f(x, y)\, dx\, dy$$

$m_{11}$ is called the correlation of X and Y and is denoted $R_{XY}$.

1. If two random variables X and Y are statistically independent, then X and Y are said to be uncorrelated, and $R_{XY} = E[XY] = E[X]\, E[Y]$.
2. If two random variables X and Y are orthogonal, their correlation is equal to zero: $R_{XY} = E[XY] = 0$.
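The factorization of the correlation for independent random variables is easy to check with samples; the sketch below uses assumed independent samples X ~ U(0, 1) and Y ~ N(2, 1).

```python
import numpy as np

# Correlation R_XY = E[XY] for statistically independent samples.
# Assumed distributions: X ~ uniform on (0, 1), Y ~ normal with mean 2.
rng = np.random.default_rng(5)
x = rng.uniform(0.0, 1.0, size=1_000_000)
y = rng.normal(2.0, 1.0, size=1_000_000)

print(np.mean(x * y))              # R_XY = E[XY]
print(np.mean(x) * np.mean(y))     # ~ E[X]E[Y], equal for independent X and Y
```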

2. JOINT MOMENTS ABOUT THE MEAN VALUE (JOINT CENTRAL MOMENTS)

The joint moments about the mean value (joint central moments) are defined as

$$\mu_{nk} = E\left[(X - m_x)^n (Y - m_y)^k\right] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} (x - m_x)^n (y - m_y)^k\, f(x, y)\, dx\, dy$$

where n and k are positive integers; the sum n + k is called the order of the moments.

The first-order joint central moments $\mu_{10}$ and $\mu_{01}$ are

$$\mu_{10} = E\left[(X - m_x)^1 (Y - m_y)^0\right] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} (x - m_x)\, f(x, y)\, dx\, dy = E[X - m_x] = 0$$

$$\mu_{01} = E\left[(X - m_x)^0 (Y - m_y)^1\right] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} (y - m_y)\, f(x, y)\, dx\, dy = E[Y - m_y] = 0$$

The second-order joint central moments $\mu_{20}$, $\mu_{02}$ and $\mu_{11}$ are

$$\mu_{20} = E\left[(X - m_x)^2\right] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} (x - m_x)^2\, f(x, y)\, dx\, dy = \sigma_X^2$$

$$\mu_{02} = E\left[(Y - m_y)^2\right] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} (y - m_y)^2\, f(x, y)\, dx\, dy = \sigma_Y^2$$

$$\mu_{11} = E\left[(X - m_x)(Y - m_y)\right] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} (x - m_x)(y - m_y)\, f(x, y)\, dx\, dy$$

$\mu_{11}$ is called the covariance of the random variables X and Y and is denoted $C_{XY}$.

The correlation coefficient is defined as the normalized second-order joint central moment and is denoted $\rho$:

$$\rho = \frac{\mu_{11}}{\sqrt{\mu_{20}\, \mu_{02}}} = \frac{C_{XY}}{\sigma_X \sigma_Y}$$
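The covariance and correlation coefficient can be estimated directly from paired samples, as in the sketch below (an assumed example with Y = 0.8 X plus independent noise, giving a positive correlation).

```python
import numpy as np

# Covariance and correlation coefficient from samples.
# Assumed pair: X ~ N(0, 1) and Y = 0.8*X + noise with standard deviation 0.6,
# so sigma_Y = 1 and rho ~ 0.8.
rng = np.random.default_rng(6)
x = rng.normal(0.0, 1.0, size=1_000_000)
y = 0.8 * x + rng.normal(0.0, 0.6, size=1_000_000)

c_xy = np.mean((x - x.mean()) * (y - y.mean()))   # covariance C_XY (mu_11)
rho = c_xy / (x.std() * y.std())                  # correlation coefficient
print(c_xy, rho)                                  # ~0.8 and ~0.8
```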
