Jawaharlal Nehru Technological University Kakinada Kakinada - 533 001, Andhra Pradesh
Analog Communication
26-05-2020 to 06-07-2020
Prof.Ch.Srinivasa Rao, JNTUK - UCEV
Introduction
Probability
◊ Probability theory is rooted in phenomena that, explicitly or
implicitly, can be modeled by an experiment with an outcome that is
subject to chance.
◊ Example: the experiment may be the observation of the result of tossing a fair coin, in which the possible outcomes are heads and tails.
Probability
◊ A single sample point is called an elementary event.
◊ The entire sample space S is called the sure event, and the null set ∅ is called the null or impossible event.
Probability
◊ The following properties of the probability measure P may be derived
from the above axioms:
1. P[Ā] = 1 − P[A], where Ā is the complement of event A (4)
2. When events A and B are not mutually exclusive:
P[A ∪ B] = P[A] + P[B] − P[A ∩ B] (5)
3. If A1, A2, ..., Am are mutually exclusive events that include all
possible outcomes of the random experiment, then
P[A1] + P[A2] + ... + P[Am] = 1 (6)
Probability
◊ Let P[B|A] denote the probability of event B, given that event A has
occurred. The probability P[B|A] is called the conditional
probability of B given A.
◊ P[B|A] is defined by
P[B|A] = P[A ∩ B] / P[A] (7)
◊ Bayes’ rule
◊ We may write Eq.(7) as P[A∩B] = P[B|A]P[A] (8)
◊ It is apparent that we may also write P[A∩B] = P[A|B]P[B] (9)
◊ From Eqs.(8) and (9), provided P[A] ≠ 0, we may determine P[B|A] by
using the relation
P[B|A] = P[A|B] P[B] / P[A] (10)
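As a quick numerical check of Bayes' rule in Eq. (10), the sketch below uses invented probabilities (the specific values are illustrative only, not from the slides):

```python
# Hypothetical numbers chosen only to exercise Eqs. (8)-(10).
p_A = 0.01          # P[A]
p_B_given_A = 0.95  # P[B|A]
p_B_given_notA = 0.05

# Total probability: P[B] = P[B|A]P[A] + P[B|A']P[A']
p_B = p_B_given_A * p_A + p_B_given_notA * (1 - p_A)

# Bayes' rule, Eq. (10) with roles of A and B exchanged:
# P[A|B] = P[B|A] P[A] / P[B]
p_A_given_B = p_B_given_A * p_A / p_B
print(round(p_A_given_B, 4))
```

Note how a rare event A stays fairly unlikely even after observing B, because P[B] is dominated by the P[B|A']P[A'] term.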
Conditional Probability
◊ Suppose that the conditional probability P[B|A] is simply equal to the
elementary probability of occurrence of event B; that is,
P[A ∩ B] = P[A] P[B] (12)
so that
P[A|B] = P[A ∩ B] / P[B] = P[A] P[B] / P[B] = P[A] (13)
Events A and B satisfying this condition are said to be statistically independent.
Conditional Probability
◊ Example 1: Binary Symmetric Channel
◊ This channel is said to be discrete in that it is designed to handle
discrete messages.
◊ The channel is memoryless in the sense that the channel output at
any time depends only on the channel input at that time.
◊ The channel is symmetric, which means that the probability of
receiving symbol 1 when symbol 0 is sent is the same as the probability
of receiving symbol 0 when symbol 1 is sent.
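A minimal simulation of the binary symmetric channel described above — the crossover probability p is an assumed value for illustration:

```python
import random

random.seed(0)
p = 0.1        # assumed crossover probability (same for 0->1 and 1->0)
n = 100_000    # number of transmitted symbols

flips = 0
for _ in range(n):
    bit = random.randint(0, 1)               # transmitted symbol
    received = bit ^ (random.random() < p)   # flip with probability p
    flips += (received != bit)

# By symmetry the empirical error rate approaches p regardless of
# which symbol was sent; the channel is also memoryless because each
# flip is drawn independently of all previous ones.
print(flips / n)
```

The printed error rate converges to p as n grows, matching the symmetric, memoryless behaviour the slides describe.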
(Figure: illustration of the total theorem of probability)
Random Variables
◊ We denote the random variable as X(s) or just X.
◊ X is a function.
◊ Random variable may be discrete or continuous.
◊ Consider the random variable X and the probability of the event
X ≤ x. We denote this probability by P[X ≤ x].
◊ To simplify our notation, we write
FX(x) = P[X ≤ x] (15)
◊ The function FX(x) is called the cumulative distribution
function (cdf) or simply the distribution function of the random
variable X.
◊ The distribution function FX(x) has the following properties:
1. 0 ≤ FX(x) ≤ 1
2. FX(x1) ≤ FX(x2) if x1 ≤ x2
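Both properties of FX(x) can be checked numerically with an empirical distribution function; the sketch below uses Gaussian samples purely as an example:

```python
import bisect
import random

random.seed(1)
# 10,000 samples of an (example) zero-mean, unit-variance Gaussian X
samples = sorted(random.gauss(0.0, 1.0) for _ in range(10_000))

def F(x):
    """Empirical cdf: fraction of samples <= x, approximating FX(x)."""
    return bisect.bisect_right(samples, x) / len(samples)

# Property 1: 0 <= FX(x) <= 1
assert 0.0 <= F(-10.0) <= F(10.0) <= 1.0
# Property 2: FX is nondecreasing
assert F(-1.0) <= F(0.0) <= F(1.0)
print(F(0.0))   # roughly 0.5 for a zero-mean Gaussian
```

Any valid cdf must pass both assertions, whatever the underlying distribution.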
Random Variables
◊ If the distribution function is continuously differentiable, then
fX(x) = (d/dx) FX(x) (17)
◊ fX(x) is called the probability density function (pdf) of the random
variable X.
◊ The probability of the event x1 < X ≤ x2 equals
P[x1 < X ≤ x2] = P[X ≤ x2] − P[X ≤ x1] = FX(x2) − FX(x1) (18)
◊ Suppose that X and Y are two continuous random variables with
joint probability density function fX,Y(x,y). The conditional
probability density function of Y given that X = x is defined by
fY(y|x) = fX,Y(x,y) / fX(x) (28)
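As a numerical sanity check of Eq. (28), the sketch below discretizes a bivariate normal density (an assumed example, with correlation 0.5) and verifies that the conditional pdf integrates to one:

```python
import math

# Example joint density: standard bivariate normal with correlation rho
rho = 0.5
def f_XY(x, y):
    c = 1.0 / (2 * math.pi * math.sqrt(1 - rho**2))
    q = (x*x - 2*rho*x*y + y*y) / (1 - rho**2)
    return c * math.exp(-q / 2)

x0, dy = 0.7, 0.01
ys = [(-6.0 + k * dy) for k in range(int(12 / dy))]

# Marginal fX(x0) = integral of fX,Y(x0, y) over y (denominator of Eq. 28)
f_X = sum(f_XY(x0, y) * dy for y in ys)

# Conditional fY(y|x0) = fX,Y(x0, y) / fX(x0) must integrate to 1
total = sum(f_XY(x0, y) / f_X * dy for y in ys)
print(round(total, 6))
```

Dividing the joint density by the marginal renormalizes the slice at X = x0 into a proper density in y, which is exactly what Eq. (28) expresses.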
Statistical Averages
◊ The mean, or expected value, of a random variable X is
μX = E[X] = ∫ x fX(x) dx, the integral extending over −∞ < x < ∞ (38)
Statistical Averages
◊ Moments
◊ For the special case of g(X) = X^n, we obtain the nth moment of
the probability distribution of the random variable X; that is,
E[X^n] = ∫ x^n fX(x) dx
◊ Mean-square value of X:
E[X^2] = ∫ x^2 fX(x) dx (40)
◊ The nth central moment is
E[(X − μX)^n] = ∫ (x − μX)^n fX(x) dx (41)
Statistical Averages
◊ For n = 2 the second central moment is referred to as the variance of
the random variable X, written as
var(X) = σX^2 = E[(X − μX)^2] = E[X^2] − μX^2
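The moment relations above can be verified by Monte Carlo; the Gaussian parameters below are an assumed example:

```python
import random

random.seed(2)
mu, sigma = 3.0, 2.0   # example: Gaussian with mean 3, variance 4
xs = [random.gauss(mu, sigma) for _ in range(200_000)]
n = len(xs)

mean = sum(xs) / n                        # first moment, E[X]
mean_sq = sum(x * x for x in xs) / n      # second moment, E[X^2]
var = sum((x - mean)**2 for x in xs) / n  # second central moment

# Identity from the slides: var(X) = E[X^2] - mu_X^2
assert abs(var - (mean_sq - mean**2)) < 1e-9
print(round(mean, 1), round(var, 1))
```

The sample mean and sample variance converge to 3 and 4 respectively as the number of samples grows.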
Statistical Averages
◊ Joint Moments
◊ Consider next a pair of random variables X and Y. A set of
statistical averages of importance in this case are the joint
moments, namely, the expected value of Xi Y k, where i and k
may assume any positive integer values. We may thus write
E[X^i Y^k] = ∫∫ x^i y^k fX,Y(x,y) dx dy (53)
Statistical Averages
◊ Correlation coefficient of X and Y :
ρ = cov[XY] / (σX σY) (54)
◊ If cov[XY] = 0, the random variables X and Y are said to be
uncorrelated.
◊ If X and Y are statistically independent, then they are uncorrelated.
◊ The converse of the above statement is not necessarily true.
Random Processes
For a fixed time instant tk, {x1(tk), x2(tk), ..., xn(tk)} = {X(tk, s1), X(tk, s2), ..., X(tk, sn)} constitutes a random variable.
Random Processes
◊ The joint PDFs of the process evaluated at different sets of time
instants may or may not be identical. When they are identical, i.e., when
p(xt1, xt2, ..., xtn) = p(xt1+t, xt2+t, ..., xtn+t)
for all t and all n, the process is said to be stationary in the strict sense (SSS).
◊ When the joint PDFs are different, the stochastic process is
non-stationary.
Random Processes
Random Processes
◊ Auto-covariance function
◊ The auto-covariance function of a stochastic process is
defined as:
CX(t1, t2) = E[(X(t1) − m(t1))(X(t2) − m(t2))]
= RX(t1, t2) − m(t1) m(t2)
◊ When the process is stationary, the auto-covariance
function simplifies to:
CX(τ) = RX(τ) − m^2
Mean, Correlation and Covariance Functions
◊ The mean of a random process X(t) is
μX(t) = E[X(t)] = ∫ x fX(t)(x) dx (57)
where fX(t)(x) is the first-order pdf of the process at time t.
◊ A random process is said to be stationary to first order if the
distribution function (and therefore density function) of X(t) does
not vary with time.
fX(t1)(x) = fX(t2)(x) for all t1 and t2  →  μX(t) = μX for all t (59)
◊ The mean of the random process is a constant.
◊ The variance of such a process is also constant.
Mean, Correlation and Covariance Functions
◊ The autocorrelation function of the process X(t) is
RX(t1, t2) = E[X(t1) X(t2)] (60)
◊ We say a random process X(t) is stationary to second order if the
joint distribution fX(t1),X(t2)(x1, x2) depends only on the difference
between the observation times t1 and t2:
RX(t1, t2) = RX(t2 − t1) for all t1 and t2 (61)
◊ The autocovariance function of a stationary random process X(t) is
written as
CX(τ) = RX(τ) − μX^2
(Figure: when X is uniformly distributed, the pdf fY of Y is the constant 1/(b − a), and the total area under the density function is 1.)
Mean, Correlation and Covariance Functions
◊ Ergodic Processes
◊ In many instances, it is difficult or impossible to observe all sample
functions of a random process at a given time.
◊ It is often more convenient to observe a single sample function for a
long period of time.
◊ For a sample function x(t), the time average of the mean value over
an observation period 2T is
μx(T) = (1/2T) ∫ x(t) dt, the integral taken from −T to T (84)
◊ For many stochastic processes of interest in communications, the
time averages and ensemble averages are equal, a property known as
ergodicity.
◊ This property implies that whenever an ensemble average is required,
we may estimate it by using a time average.
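For a concrete (assumed) example, take X(t) = cos(2πf0·t + Θ) with Θ uniform on [0, 2π), a standard ergodic process. The sketch below compares a time average of one long realization (a discretized Eq. 84) with an ensemble average at a fixed instant:

```python
import math
import random

random.seed(4)
f0 = 5.0  # Hz; assumed example frequency

def sample_function(theta, n=100_000, dt=1e-4):
    """One realization of X(t) = cos(2*pi*f0*t + theta), sampled."""
    return [math.cos(2 * math.pi * f0 * k * dt + theta) for k in range(n)]

# Time average over one long realization (discrete form of Eq. 84)
x = sample_function(theta=random.uniform(0, 2 * math.pi))
time_avg = sum(x) / len(x)

# Ensemble average at a fixed time t0 over many realizations
t0 = 0.123
m = 100_000
ensemble_avg = sum(math.cos(2 * math.pi * f0 * t0 +
                            random.uniform(0, 2 * math.pi))
                   for _ in range(m)) / m

# For this process both averages are (approximately) zero
print(round(time_avg, 3), round(ensemble_avg, 3))
```

Both estimates agree, illustrating why a single long observation can stand in for an ensemble average when the process is ergodic.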
Transmission of a Random Process Through
a Linear Filter
RY(τ) = ∫∫ h(τ1) h(τ2) RX(τ − τ1 + τ2) dτ1 dτ2 (90)
(both integrals extending from −∞ to ∞)
Power Spectral Density
◊ The Fourier transform of the autocorrelation function RX(τ) is called
the power spectral density SX( f ) of the random process
X(t).
SX( f ) = ∫ RX(τ) exp(−j2πfτ) dτ, the integral extending from −∞ to ∞ (91)
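Eq. (91) can be checked numerically for an autocorrelation with a known transform. For RX(τ) = exp(−a|τ|) (an assumed example), the closed form is SX(f) = 2a / (a² + (2πf)²):

```python
import math

# Numerical check of Eq. (91) for the example R_X(tau) = exp(-a|tau|),
# whose Fourier transform is S_X(f) = 2a / (a^2 + (2*pi*f)^2).
a, f = 1.0, 0.3
dtau = 0.001
taus = [(-20.0 + k * dtau) for k in range(int(40 / dtau))]

# Since R_X is even, the transform reduces to a cosine integral
S_numeric = sum(math.exp(-a * abs(t)) * math.cos(2 * math.pi * f * t) * dtau
                for t in taus)
S_closed = 2 * a / (a**2 + (2 * math.pi * f)**2)
print(round(S_numeric, 4), round(S_closed, 4))
```

The numerical integral matches the closed form, and, being a transform of an even autocorrelation, the resulting PSD is real-valued.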
Power Spectral Density
◊ Suppose we let |H( f )|2=1 for any arbitrarily small interval f1 ≤ f ≤ f2 ,
and H( f )=0 outside this interval. Then, we have:
∫ SX( f ) df ≥ 0, the integral taken from f1 to f2
Since the interval [f1, f2] is arbitrary, it follows that SX( f ) ≥ 0 for all f.
Gaussian Process
◊ Consider a random variable Y defined by
Y = ∫ g(t) X(t) dt, the integral taken from 0 to T
◊ We refer to Y as a linear functional of X(t). By contrast, a sum
Y = Σ ai Xi (i = 1 to N), where the ai are constants and the Xi are
random variables, is a linear function of the Xi.
◊ If the weighting function g(t) is such that the mean-square value of
the random variable Y is finite, and if the random variable Y is a
Gaussian-distributed random variable for every g(t) in this class of
functions, then the process X(t) is said to be a Gaussian process.
◊ In other words, the process X(t) is a Gaussian process if every
linear functional of X(t) is a Gaussian random variable.
fY(y) = (1 / (√(2π) σY)) exp(−(y − μY)^2 / (2σY^2))
μY : the mean of the random variable Y
σY2 : the variance of the random variable Y
◊ If the Gaussian random variable Y is normalized to have a mean of
zero and a variance of one, such a normalized Gaussian
distribution is commonly written as N(0,1).
Gaussian Process
◊ Suppose the Xi are statistically independent Gaussian random variables, each with the same mean μX and the same variance σ2X.
◊ The Xi so described are said to constitute a set of independent and
identically distributed (i.i.d.) random variables.
Gaussian Process
◊ Property 1: If a Gaussian process X(t) is applied to a stable linear
filter, then the output Y(t) of the filter is also Gaussian.
◊ Property 2: Consider the set of random variables or samples X(t1),
X(t2), ..., X(tn), obtained by observing a random process X(t) at times t1,
t2, ..., tn. If the process X(t) is Gaussian, then this set of random
variables is jointly Gaussian for any n, with their n-fold joint
probability density function being completely determined by
specifying the set of means:
μX(ti) = E[X(ti)], i = 1, 2, ..., n
and the set of autocovariance functions:
CX(tk, ti) = E[(X(tk) − μX(tk))(X(ti) − μX(ti))], k, i = 1, 2, ..., n
◊ Consider the composite set of random variables X(t1), X(t2),…, X(tn),
Y(u1), Y(u2),…, Y(um). We say that the processes X(t) and Y(t) are
jointly Gaussian if this composite set of random variables are jointly
Gaussian for any n and m.
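Property 1 can be illustrated by simulation. The sketch below pushes Gaussian noise through an assumed example filter (a 5-tap moving average) and checks that the output's skewness and kurtosis match Gaussian values (0 and 3):

```python
import random

random.seed(5)
# Input: white Gaussian noise samples (example process)
x = [random.gauss(0.0, 1.0) for _ in range(200_000)]

# Stable linear filter: 5-tap moving average (an assumed illustration,
# not a filter specified in the slides)
h = [0.2] * 5
y = [sum(h[k] * x[n - k] for k in range(5)) for n in range(5, len(x))]

n = len(y)
m = sum(y) / n
v = sum((s - m)**2 for s in y) / n
skew = sum((s - m)**3 for s in y) / n / v**1.5
kurt = sum((s - m)**4 for s in y) / n / v**2

# A Gaussian output has skewness ~ 0 and kurtosis ~ 3
print(round(skew, 2), round(kurt, 2))
```

The filtered output retains Gaussian-shaped statistics, consistent with Property 1; only its mean and autocovariance change, as Property 2 implies.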