
JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY KAKINADA

KAKINADA – 533 001, ANDHRA PRADESH

GATE Coaching Classes as per the Direction of


Ministry of Education
GOVERNMENT OF ANDHRA PRADESH

Analog Communication
26-05-2020 to 06-07-2020

Prof. Ch. Srinivasa Rao


Dept. of ECE, JNTUK-UCE Vizianagaram



Analog Communication-Day 11, 05-06-2020
Presentation Outline
◊ Introduction
◊ Probability
◊ Random Variables
◊ Statistical Averages
◊ Random Processes
◊ Mean, Correlation and Covariance Functions
◊ Transmission of a Random Process through a Linear Filter
◊ Power Spectral Density
◊ Gaussian Process
◊ Problems



Learning Outcomes

• At the end of this session, the student will be able to understand:

• LO 1 : Probability and Random Variables


• LO 2 : Statistical Averages and Random Processes
• LO 3 : Mean, Correlation and Covariance Functions
• LO 4 : Transmission of a Random Process through a Linear Filter
• LO 5 : PSD and Gaussian Process



Introduction

• A random signal is a time waveform that can be characterized only in some probabilistic manner. It can be desired or undesired.
• Examples: hiss, snow, sea sounds, noise, bits transmitted by a computer.
• Undesired random waveforms (noise) also appear at the output of other types of systems.
• Probability theory is the study of random or unpredictable experiments.
• Deterministic vs. random experiments.
• Examples: current passing through a conductor (deterministic), throwing a die (random).

Introduction

◊ The Fourier transform is a mathematical tool for the representation of deterministic signals.

◊ Deterministic signals: the class of signals that may be modeled as completely specified functions of time.

◊ A signal is "random" if it is not possible to predict its precise value in advance.

◊ A random process consists of an ensemble (family) of sample functions, each of which varies randomly with time.

◊ A random variable is obtained by observing a random process at a fixed instant of time.

Probability
◊ Probability theory is rooted in phenomena that, explicitly or implicitly, can be modeled by an experiment with an outcome that is subject to chance.
◊ Example: the experiment may be the observation of the result of tossing a fair coin. In this experiment, the possible outcomes of a trial are "heads" or "tails".
◊ If an experiment has K possible outcomes, then for the kth possible outcome we have a point called the sample point, which we denote by $s_k$. With this basic framework, we make the following definitions:
◊ The set of all possible outcomes of the experiment is called the sample space, which we denote by S.
◊ An event corresponds to either a single sample point or a set of sample points in the space S.

Probability
◊ A single sample point is called an elementary event.
◊ The entire sample space S is called the sure event; the null set ∅ is called the null or impossible event.
◊ Two events are mutually exclusive if the occurrence of one event precludes the occurrence of the other event.
◊ A probability measure P is a function that assigns a non-negative number to an event A in the sample space S and satisfies the following three properties (axioms):
1. $0 \le P[A] \le 1$  (1)
2. $P[S] = 1$  (2)
3. If A and B are two mutually exclusive events, then $P[A \cup B] = P[A] + P[B]$  (3)

Probability
◊ The following properties of the probability measure P may be derived from the above axioms:
1. $P[\bar{A}] = 1 - P[A]$, where $\bar{A}$ is the complement of event A  (4)
2. When events A and B are not mutually exclusive:
$P[A \cup B] = P[A] + P[B] - P[A \cap B]$  (5)
3. If $A_1, A_2, \ldots, A_m$ are mutually exclusive events that include all possible outcomes of the random experiment, then
$P[A_1] + P[A_2] + \cdots + P[A_m] = 1$  (6)

Probability
◊ Let P[B|A] denote the probability of event B, given that event A has occurred. The probability P[B|A] is called the conditional probability of B given A.
◊ P[B|A] is defined by
$P[B|A] = \frac{P[A \cap B]}{P[A]}$  (7)
◊ Bayes' rule
◊ We may write Eq. (7) as $P[A \cap B] = P[B|A]\,P[A]$  (8)
◊ It is apparent that we may also write $P[A \cap B] = P[A|B]\,P[B]$  (9)
◊ From Eqs. (8) and (9), provided P[A] ≠ 0, we may determine P[B|A] by using the relation
$P[B|A] = \frac{P[A|B]\,P[B]}{P[A]}$  (10)
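◊ As a quick numeric illustration of Eq. (10), the following minimal Python sketch (with hypothetical probabilities chosen only for illustration) computes a posterior from a prior and two likelihoods:

```python
# Minimal sketch of Bayes' rule, Eq. (10): P[B|A] = P[A|B]P[B] / P[A],
# with P[A] expanded by the total probability theorem.
# The numbers below are hypothetical, chosen only for illustration.
p_B = 0.3             # prior probability of event B
p_A_given_B = 0.9     # likelihood P[A|B]
p_A_given_notB = 0.2  # likelihood P[A|B'], B' the complement of B

# Total probability: P[A] = P[A|B]P[B] + P[A|B']P[B']
p_A = p_A_given_B * p_B + p_A_given_notB * (1 - p_B)

p_B_given_A = p_A_given_B * p_B / p_A
print(f"P[A]   = {p_A:.3f}")          # 0.410
print(f"P[B|A] = {p_B_given_A:.3f}")  # 0.659
```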

Conditional Probability
◊ Suppose that the conditional probability P[B|A] is simply equal to the elementary probability of occurrence of event B, that is,
$P[B|A] = P[B]$  (11)
so that
$P[A \cap B] = P[A]\,P[B]$  (12)
and
$P[A|B] = \frac{P[A \cap B]}{P[B]} = \frac{P[A]\,P[B]}{P[B]} = P[A]$  (13)
◊ Events A and B that satisfy this condition are said to be statistically independent.

Conditional Probability
◊ Example 1: Binary Symmetric Channel
◊ This channel is said to be discrete in that it is designed to handle
discrete messages.
◊ The channel is memoryless in the sense that the channel output at
any time depends only on the channel input at that time.
◊ The channel is symmetric, which means that the probability of
receiving symbol 1 when 0 is sent is the same as the probability
of receiving symbol 0 when symbol 1 is sent.

Conditional Probability

◊ Example 1: Binary Symmetric Channel (continued)
◊ The a priori probabilities of sending binary symbols 0 and 1:
$P[A_0] = p_0, \qquad P[A_1] = p_1$
◊ The conditional probabilities of error, with crossover probability p:
$P[B_1|A_0] = P[B_0|A_1] = p$
◊ The probability of receiving symbol 0 is given by:
$P[B_0] = P[B_0|A_0]P[A_0] + P[B_0|A_1]P[A_1] = (1-p)\,p_0 + p\,p_1$
◊ The probability of receiving symbol 1 is given by:
$P[B_1] = P[B_1|A_0]P[A_0] + P[B_1|A_1]P[A_1] = p\,p_0 + (1-p)\,p_1$
Conditional Probability

◊ Example 1: Binary Symmetric Channel (continued)
◊ The a posteriori probabilities $P[A_0|B_0]$ and $P[A_1|B_1]$ follow from Bayes' rule:
$P[A_0|B_0] = \frac{P[B_0|A_0]\,P[A_0]}{P[B_0]} = \frac{(1-p)\,p_0}{(1-p)\,p_0 + p\,p_1}$
$P[A_1|B_1] = \frac{P[B_1|A_1]\,P[A_1]}{P[B_1]} = \frac{(1-p)\,p_1}{p\,p_0 + (1-p)\,p_1}$
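◊ A minimal Python sketch of these BSC relations, assuming a hypothetical crossover probability p and a priori probability p0:

```python
# Sketch of the binary symmetric channel of Example 1.
# p is the crossover probability; p0, p1 are a priori probabilities
# (hypothetical values chosen for illustration).
p, p0 = 0.1, 0.6
p1 = 1 - p0

# Probabilities of receiving 0 and 1 (total probability theorem)
P_B0 = (1 - p) * p0 + p * p1
P_B1 = p * p0 + (1 - p) * p1

# A posteriori probabilities via Bayes' rule
P_A0_given_B0 = (1 - p) * p0 / P_B0
P_A1_given_B1 = (1 - p) * p1 / P_B1
print(P_B0, P_B1)                    # 0.58 0.42
print(P_A0_given_B0, P_A1_given_B1)  # ~0.931 ~0.857
```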

[Figure: illustration of the total probability theorem]
Random Variables
◊ We denote the random variable as X(s) or just X.
◊ X is a function.
◊ A random variable may be discrete or continuous.
◊ Consider the random variable X and the probability of the event X ≤ x. We denote this probability by P[X ≤ x].
◊ To simplify our notation, we write
$F_X(x) = P[X \le x]$  (15)
◊ The function $F_X(x)$ is called the cumulative distribution function (cdf) or simply the distribution function of the random variable X.
◊ The distribution function $F_X(x)$ has the following properties:
1. $0 \le F_X(x) \le 1$
2. $F_X(x_1) \le F_X(x_2)$ if $x_1 \le x_2$
Random Variables

◊ There may be more than one random variable associated with the same random experiment.

Random Variables
◊ If the distribution function is continuously differentiable, then
$f_X(x) = \frac{d}{dx}F_X(x)$  (17)
◊ $f_X(x)$ is called the probability density function (pdf) of the random variable X.
◊ The probability of the event $x_1 < X \le x_2$ equals
$P[x_1 < X \le x_2] = P[X \le x_2] - P[X \le x_1] = F_X(x_2) - F_X(x_1) = \int_{x_1}^{x_2} f_X(x)\,dx$  (18)
◊ A probability density function must always be a nonnegative function, with a total area of one, as the sketch below verifies for a uniform pdf.
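◊ The sketch below checks Eqs. (17) and (18) numerically for a uniform pdf on (a, b); the endpoints a and b are hypothetical values chosen for illustration:

```python
import numpy as np

# Uniform random variable on (a, b): pdf f and cdf F.
a, b = 0.0, 2.0
f = lambda x: np.where((x > a) & (x < b), 1.0 / (b - a), 0.0)  # pdf
F = lambda x: np.clip((x - a) / (b - a), 0.0, 1.0)             # cdf

# Total area under the pdf should be one (rectangular-rule integration).
x, dx = np.linspace(a - 1, b + 1, 400001, retstep=True)
print(np.sum(f(x)) * dx)   # ~1.0

# Eq. (18): P[x1 < X <= x2] = F(x2) - F(x1)
x1, x2 = 0.5, 1.5
print(F(x2) - F(x1))       # 0.5
```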
Random Variables

◊ Several Random Variables
◊ Consider two random variables X and Y. We define the joint distribution function $F_{X,Y}(x,y)$ as the probability that the random variable X is less than or equal to a specified value x and that the random variable Y is less than or equal to a specified value y.
$F_{X,Y}(x,y) = P[X \le x,\, Y \le y]$  (23)
◊ Suppose that the joint distribution function $F_{X,Y}(x,y)$ is continuous everywhere, and that the partial derivative
$f_{X,Y}(x,y) = \frac{\partial^2 F_{X,Y}(x,y)}{\partial x\,\partial y}$  (24)
exists and is continuous everywhere. We call the function $f_{X,Y}(x,y)$ the joint probability density function of the random variables X and Y.
Random Variables

◊ Several Random Variables (continued)
◊ The joint distribution function $F_{X,Y}(x,y)$ is a monotone-nondecreasing function of both x and y, and
$\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f_{X,Y}(\xi, \eta)\,d\xi\,d\eta = 1$
◊ Marginal density:
$f_X(x) = \int_{-\infty}^{\infty} f_{X,Y}(x, y)\,dy$  (27)
◊ Suppose that X and Y are two continuous random variables with joint probability density function $f_{X,Y}(x,y)$. The conditional probability density function of Y given that X = x is defined by
$f_Y(y|x) = \frac{f_{X,Y}(x,y)}{f_X(x)}$  (28)
Random Variables

◊ Several Random Variables (continued)
◊ If the random variables X and Y are statistically independent, then knowledge of the outcome of X can in no way affect the distribution of Y. In that case $f_Y(y|x) = f_Y(y)$, so the joint density factors as $f_{X,Y}(x,y) = f_X(x)\,f_Y(y)$.

Statistical Averages

◊ The expected value or mean of a random variable X is defined by
$\mu_X = E[X] = \int_{-\infty}^{\infty} x\, f_X(x)\,dx$  (36)
◊ Function of a Random Variable
◊ Let X denote a random variable, and let g(X) denote a real-valued function defined on the real line. We denote
$Y = g(X)$  (37)
◊ To find the expected value of the random variable Y:
$E[Y] = E[g(X)] = \int_{-\infty}^{\infty} g(x)\, f_X(x)\,dx$  (38)

Statistical Averages

◊ Example 4: Cosinusoidal Random Variable
◊ Let $Y = g(X) = \cos(X)$, where X is a random variable uniformly distributed in the interval $(-\pi, \pi)$.
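◊ With $f_X(x) = 1/(2\pi)$ on $(-\pi, \pi)$, Eq. (38) gives the mean of Y directly (a short worked step added for completeness):

$E[Y] = \int_{-\pi}^{\pi} \cos x \cdot \frac{1}{2\pi}\,dx = \frac{1}{2\pi}\big[\sin x\big]_{-\pi}^{\pi} = 0$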

Statistical Averages

◊ Moments
◊ For the special case of $g(X) = X^n$, we obtain the nth moment of the probability distribution of the random variable X; that is
$E[X^n] = \int_{-\infty}^{\infty} x^n f_X(x)\,dx$  (39)
◊ Mean-square value of X:
$E[X^2] = \int_{-\infty}^{\infty} x^2 f_X(x)\,dx$  (40)
◊ The nth central moment is
$E[(X - \mu_X)^n] = \int_{-\infty}^{\infty} (x - \mu_X)^n f_X(x)\,dx$  (41)

Statistical Averages
◊ For n = 2 the second central moment is referred to as the variance of the random variable X, written as
$\text{var}[X] = E[(X - \mu_X)^2] = \int_{-\infty}^{\infty} (x - \mu_X)^2 f_X(x)\,dx$  (42)
◊ The variance of a random variable X is commonly denoted as $\sigma_X^2$. Expanding the square gives the useful identity $\sigma_X^2 = E[X^2] - \mu_X^2$.
◊ The square root of the variance is called the standard deviation of the random variable X. A worked example follows below.
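◊ As a worked illustration of these definitions, for a uniform random variable on $(a, b)$:

$\sigma_X^2 = E[X^2] - \mu_X^2 = \frac{a^2 + ab + b^2}{3} - \left(\frac{a+b}{2}\right)^2 = \frac{(b-a)^2}{12}$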

Statistical Averages

◊ Example 5: Gaussian Random Variable


Statistical Averages

◊ Example 5: Gaussian Random Variable (continued)
◊ The sum of n statistically independent Gaussian random variables is also a Gaussian random variable.
◊ Proof: [derivation shown on slide]. Therefore, Y is Gaussian-distributed with mean $m_Y$.

Statistical Averages

◊ Joint Moments
◊ Consider next a pair of random variables X and Y. A set of statistical averages of importance in this case are the joint moments, namely, the expected value of $X^i Y^k$, where i and k may assume any positive integer values. We may thus write
$E[X^i Y^k] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x^i y^k\, f_{X,Y}(x,y)\,dx\,dy$
◊ A joint moment of particular importance is the correlation defined by E[XY], which corresponds to i = k = 1:
$E[XY] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} xy\, f_{X,Y}(x,y)\,dx\,dy$  (53)
Statistical Averages
◊ The covariance of X and Y is $\text{cov}[XY] = E[(X - \mu_X)(Y - \mu_Y)] = E[XY] - \mu_X\mu_Y$.
◊ The correlation coefficient of X and Y is
$\rho = \frac{\text{cov}[XY]}{\sigma_X\,\sigma_Y}$  (54)
◊ $\sigma_X$ and $\sigma_Y$ denote the standard deviations of X and Y.
◊ We say X and Y are uncorrelated if and only if cov[XY] = 0.
◊ Note that if X and Y are statistically independent, then they are uncorrelated.
◊ The converse of the above statement is not necessarily true (see the sketch below).
◊ We say X and Y are orthogonal if and only if E[XY] = 0.
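◊ The following minimal sketch illustrates the converse failing: X and Y = X² are uncorrelated yet clearly dependent.

```python
import numpy as np

# Uncorrelated does not imply independent:
# X ~ uniform(-1, 1) and Y = X**2 are dependent (Y is a function of X),
# yet cov[XY] = E[XY] - E[X]E[Y] = E[X**3] = 0 by symmetry.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, 1_000_000)
Y = X**2

cov = np.mean(X * Y) - np.mean(X) * np.mean(Y)
print(f"cov[XY] ~ {cov:.4f}")   # ~0: uncorrelated, but not independent
```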

Random Processes

◊ A random process is an ensemble (family) of sample functions.
◊ For a fixed time instant $t_k$, the set $\{x_1(t_k), x_2(t_k), \ldots, x_n(t_k)\} = \{X(t_k, s_1), X(t_k, s_2), \ldots, X(t_k, s_n)\}$ constitutes a random variable.

Random Processes

◊ At any given time instant, the value of a stochastic process is a random variable indexed by the parameter t. We denote such a process by X(t).

◊ In general, the parameter t is continuous, whereas X may be either continuous or discrete, depending on the characteristics of the source that generates the stochastic process.

◊ The noise voltage generated by a single resistor or a single information source represents a single realization of the stochastic process. It is called a sample function.
Random Processes

◊ The set of all possible sample functions constitutes an ensemble of sample functions or, equivalently, the stochastic process X(t).
◊ In general, the number of sample functions in the ensemble is assumed to be extremely large; often it is infinite.
◊ Having defined a stochastic process X(t) as an ensemble of sample functions, we may consider the values of the process at any set of time instants $t_1 > t_2 > \cdots > t_n$, where n is any positive integer.
◊ In general, the random variables $X_{t_i} \equiv X(t_i)$, $i = 1, 2, \ldots, n$, are characterized statistically by their joint PDF $p(x_{t_1}, x_{t_2}, \ldots, x_{t_n})$.
Random Processes

◊ Stationary stochastic processes
◊ Consider another set of n random variables $X_{t_i+t} \equiv X(t_i + t)$, $i = 1, 2, \ldots, n$, where t is an arbitrary time shift. These random variables are characterized by the joint PDF $p(x_{t_1+t}, x_{t_2+t}, \ldots, x_{t_n+t})$.
◊ The joint PDFs of the random variables $X_{t_i}$ and $X_{t_i+t}$, $i = 1, 2, \ldots, n$, may or may not be identical. When they are identical, i.e., when
$p(x_{t_1}, x_{t_2}, \ldots, x_{t_n}) = p(x_{t_1+t}, x_{t_2+t}, \ldots, x_{t_n+t})$
for all t and all n, the process is said to be stationary in the strict sense (SSS).
◊ When the joint PDFs are different, the stochastic process is non-stationary.

Random Processes

◊ Averages for a stochastic process are called ensemble averages.
◊ The nth moment of the random variable $X_{t_i}$ is defined as:
$E[X_{t_i}^n] = \int_{-\infty}^{\infty} x_{t_i}^n\, p(x_{t_i})\,dx_{t_i}$
◊ In general, the value of the nth moment will depend on the time instant $t_i$ if the PDF of $X_{t_i}$ depends on $t_i$.
◊ When the process is stationary, $p(x_{t_i+t}) = p(x_{t_i})$ for all t. Therefore, the PDF is independent of time, and, as a consequence, the nth moment is independent of time.

Random Processes

◊ Two random variables: $X_{t_i} \equiv X(t_i)$, $i = 1, 2$.
◊ The correlation is measured by the joint moment:
$E[X_{t_1} X_{t_2}] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x_{t_1} x_{t_2}\, p(x_{t_1}, x_{t_2})\,dx_{t_1}\,dx_{t_2}$
◊ Since this joint moment depends on the time instants $t_1$ and $t_2$, it is denoted by $R_X(t_1, t_2)$.
◊ $R_X(t_1, t_2)$ is called the autocorrelation function of the stochastic process.
◊ For a stationary stochastic process, the joint moment depends only on the time difference $\tau = t_1 - t_2$:
$E[X_{t_1} X_{t_2}] = R_X(t_1, t_2) = R_X(t_1 - t_2) = R_X(\tau)$

Random Processes

◊ Wide-sense stationary (WSS)
◊ A wide-sense stationary process has the property that the mean value of the process is independent of time (a constant) and that the autocorrelation function satisfies the condition $R_X(t_1, t_2) = R_X(t_1 - t_2)$.
◊ Wide-sense stationarity is a less stringent condition than strict-sense stationarity.
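◊ A standard example is a sinusoid with random phase, $X(t) = A\cos(2\pi f_c t + \Theta)$ with $\Theta$ uniform on $(0, 2\pi)$; the sketch below estimates its ensemble mean and autocorrelation numerically (the parameter values are hypothetical):

```python
import numpy as np

# Random-phase sinusoid: a classic WSS process with
# E[X(t)] = 0 and R_X(tau) = (A**2 / 2) * cos(2*pi*fc*tau).
rng = np.random.default_rng(1)
A, fc = 1.0, 5.0
t1, tau = 0.3, 0.02
theta = rng.uniform(0, 2 * np.pi, 1_000_000)  # ensemble of phases

x1 = A * np.cos(2 * np.pi * fc * t1 + theta)
x2 = A * np.cos(2 * np.pi * fc * (t1 + tau) + theta)
print(np.mean(x1))                                # ~0 (mean independent of t)
print(np.mean(x1 * x2))                           # ensemble estimate of R_X(tau)
print(0.5 * A**2 * np.cos(2 * np.pi * fc * tau))  # theoretical value ~0.405
```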

Random Processes

◊ Auto-covariance function
◊ The auto-covariance function of a stochastic process is defined as:
$\mu(t_1, t_2) = E\big[(X_{t_1} - m(t_1))(X_{t_2} - m(t_2))\big] = R_X(t_1, t_2) - m(t_1)\,m(t_2)$
◊ When the process is stationary, the auto-covariance function simplifies to:
$\mu(t_1, t_2) = \mu(t_1 - t_2) = \mu(\tau) = R_X(\tau) - m^2$
◊ For a Gaussian random process, higher-order moments can be expressed in terms of first and second moments. Consequently, a Gaussian random process is completely characterized by its first two moments.

Mean, Correlation and Covariance Functions

◊ Consider a random process X(t). We define the mean of the process X(t) as the expectation of the random variable obtained by observing the process at some time t, as shown by
$\mu_X(t) = E[X(t)] = \int_{-\infty}^{\infty} x\, f_{X(t)}(x)\,dx$  (57)
◊ A random process is said to be stationary to first order if the distribution function (and therefore density function) of X(t) does not vary with time:
$f_{X(t_1)}(x) = f_{X(t_2)}(x)$ for all $t_1$ and $t_2$ $\Rightarrow$ $\mu_X(t) = \mu_X$ for all t  (59)
◊ The mean of such a random process is a constant.
◊ The variance of such a process is also constant.

Mean, Correlation and Covariance Functions

◊ We define the autocorrelation function of the process X(t) as the expectation of the product of two random variables X(t₁) and X(t₂):
$R_X(t_1, t_2) = E[X(t_1)\,X(t_2)]$  (60)
◊ We say a random process X(t) is stationary to second order if the joint distribution $f_{X(t_1),X(t_2)}(x_1, x_2)$ depends only on the difference between the observation times $t_1$ and $t_2$:
$R_X(t_1, t_2) = R_X(t_2 - t_1)$ for all $t_1$ and $t_2$  (61)
◊ The autocovariance function of a stationary random process X(t) is written as
$C_X(t_1, t_2) = E\big[(X(t_1) - \mu_X)(X(t_2) - \mu_X)\big] = R_X(t_2 - t_1) - \mu_X^2$  (62)
Mean, Correlation and Covariance Functions

◊ The physical significance of the autocorrelation function RX(τ) is that


it provides a means of describing the “interdependence” of two
random variables obtained by observing a random process
X(t) at times τ seconds apart.

[Figure: worked example: the pdf of Y is 1/(b - a), since X is uniformly distributed]
[Figure: area under the density function]
Mean, Correlation and Covariance Functions

◊ Ergodic Processes
◊ In many instances, it is difficult or impossible to observe all sample functions of a random process at a given time.
◊ It is often more convenient to observe a single sample function for a long period of time.
◊ For a sample function x(t), the time average of the mean value over an observation period 2T is
$\mu_x(T) = \frac{1}{2T}\int_{-T}^{T} x(t)\,dt$  (84)
◊ For many stochastic processes of interest in communications, the time averages and ensemble averages are equal, a property known as ergodicity.
◊ This property implies that whenever an ensemble average is required, we may estimate it by using a time average, as the sketch below illustrates.
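◊ A minimal sketch of Eq. (84) for the random-phase sinusoid (an ergodic process): the time average of one sample function matches the ensemble mean of zero.

```python
import numpy as np

# Time average of a single realization over (-T, T), Eq. (84).
rng = np.random.default_rng(2)
A, fc, T = 1.0, 5.0, 100.0
theta = rng.uniform(0, 2 * np.pi)      # one fixed phase: one sample function

t, dt = np.linspace(-T, T, 2_000_001, retstep=True)
x = A * np.cos(2 * np.pi * fc * t + theta)

time_avg = np.sum(x) * dt / (2 * T)    # (1/2T) * integral of x(t) over (-T, T)
print(time_avg)                        # ~0, equal to the ensemble mean
```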
Mean, Correlation and Covariance Functions

◊ Cyclostationary Processes (in the wide sense)
◊ There is another important class of random processes commonly encountered in practice, the mean and autocorrelation function of which exhibit periodicity:
$\mu_X(t_1 + T) = \mu_X(t_1)$
$R_X(t_1 + T,\, t_2 + T) = R_X(t_1, t_2)$
for all $t_1$ and $t_2$.
◊ Modeling the process X(t) as cyclostationary adds a new dimension, namely, the period T, to the partial description of the process.
Transmission of a Random Process Through
a Linear Filter

◊ Suppose that a random process X(t) is applied as input to a linear time-invariant filter of impulse response h(t), producing a new random process Y(t) at the filter output.
◊ Assume that X(t) is a wide-sense stationary random process.
◊ The mean of the output random process Y(t) is given by
$\mu_Y(t) = E[Y(t)] = E\left[\int_{-\infty}^{\infty} h(\tau_1)\,X(t - \tau_1)\,d\tau_1\right] = \int_{-\infty}^{\infty} h(\tau_1)\,\mu_X(t - \tau_1)\,d\tau_1$
Transmission of a Random Process Through
a Linear Filter

◊ When the input random process X(t) is wide-sense stationary, the mean of X(t) is a constant $\mu_X$, so the mean of Y(t) is also a constant $\mu_Y$:
$\mu_Y = \mu_X \int_{-\infty}^{\infty} h(\tau_1)\,d\tau_1 = \mu_X H(0)$  (87)
where H(0) is the zero-frequency (dc) response of the system.
◊ The autocorrelation function of the output random process Y(t) is given by
$R_Y(t, u) = E[Y(t)Y(u)] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} h(\tau_1)\,h(\tau_2)\,R_X(t - \tau_1,\, u - \tau_2)\,d\tau_1\,d\tau_2$
Transmission of a Random Process Through
a Linear Filter

◊ When the input X(t) is a wide-sense stationary random process, the autocorrelation function of X(t) is only a function of the difference between the observation times:
$R_Y(\tau) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} h(\tau_1)\,h(\tau_2)\,R_X(\tau - \tau_1 + \tau_2)\,d\tau_1\,d\tau_2$  (90)
◊ If the input to a stable linear time-invariant filter is a wide-sense stationary random process, then the output of the filter is also a wide-sense stationary random process (see the discrete-time sketch below).
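◊ A discrete-time sketch of the mean relation, Eq. (87): a WSS input with mean $\mu_X$ passed through an FIR filter h emerges with mean $\mu_X H(0)$ (the filter taps and input mean are hypothetical values):

```python
import numpy as np

# Output mean of an LTI filter driven by a WSS input:
# mu_Y = mu_X * H(0), where H(0) = sum of the impulse response.
rng = np.random.default_rng(3)
mu_X = 2.0
x = mu_X + rng.standard_normal(1_000_000)  # WSS input with mean 2
h = np.array([0.5, 0.3, 0.2])              # stable FIR filter, H(0) = 1.0

y = np.convolve(x, h, mode="valid")        # filter output
print(np.mean(y))       # ~2.0
print(mu_X * h.sum())   # theoretical mu_Y = mu_X * H(0)
```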

Power Spectral Density
◊ The Fourier transform of the autocorrelation function $R_X(\tau)$ is called the power spectral density $S_X(f)$ of the random process X(t):
$S_X(f) = \int_{-\infty}^{\infty} R_X(\tau)\,\exp(-j2\pi f\tau)\,d\tau$  (91)
$R_X(\tau) = \int_{-\infty}^{\infty} S_X(f)\,\exp(j2\pi f\tau)\,df$  (92)
◊ Equations (91) and (92) are basic relations in the theory of spectral analysis of random processes, and together they constitute what are usually called the Einstein-Wiener-Khintchine relations.
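◊ The sketch below illustrates Eqs. (91) and (92) in discrete form: the periodogram of a white-noise record equals the DFT of its (circular) autocorrelation estimate, and its average level matches the noise variance.

```python
import numpy as np

# Discrete-time Einstein-Wiener-Khintchine check on white noise.
rng = np.random.default_rng(4)
N = 4096
x = rng.standard_normal(N)      # unit-variance white noise: flat PSD

# Periodogram estimate of the PSD
periodogram = np.abs(np.fft.fft(x))**2 / N

# Circular autocorrelation estimate, then its DFT (Eq. (91) in discrete form)
r = np.fft.ifft(np.abs(np.fft.fft(x))**2).real / N
psd_from_r = np.fft.fft(r).real

print(np.allclose(periodogram, psd_from_r))  # True: the pair is consistent
print(periodogram.mean())                    # ~1.0, the noise variance
```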

Power Spectral Density

◊ Properties of the Power Spectral Density

◊ Suppose we let $|H(f)|^2 = 1$ for an arbitrarily small interval $f_1 \le f \le f_2$, and H(f) = 0 outside this interval. Then we have:
$\int_{f_1}^{f_2} S_X(f)\,df \ge 0$
This is possible if and only if $S_X(f) \ge 0$ for all f.
◊ Conclusion: $S_X(f) \ge 0$ for all f.

[Figure: worked example: the Fourier transform of the autocorrelation function is the PSD]
Gaussian Process
◊ A random variable Y defined by $Y = \sum_{i=1}^{N} a_i X_i$, where the $a_i$ are constants and the $X_i$ are random variables, is a linear function of the $X_i$.
◊ Analogously, for
$Y = \int_0^T g(t)\,X(t)\,dt$
we refer to Y as a linear functional of X(t).
◊ If the weighting function g(t) is such that the mean-square value of the random variable Y is finite, and if the random variable Y is a Gaussian-distributed random variable for every g(t) in this class of functions, then the process X(t) is said to be a Gaussian process.
◊ In other words, the process X(t) is a Gaussian process if every linear functional of X(t) is a Gaussian random variable.
◊ The Gaussian process has many properties that make analytic results possible.
◊ The random processes produced by physical phenomena are often such that a Gaussian model is appropriate.
Gaussian Process
◊ The random variable Y has a Gaussian distribution if its probability density function has the form
$f_Y(y) = \frac{1}{\sqrt{2\pi}\,\sigma_Y}\exp\!\left(-\frac{(y - \mu_Y)^2}{2\sigma_Y^2}\right)$
where $\mu_Y$ is the mean and $\sigma_Y^2$ is the variance of the random variable Y.
◊ If the Gaussian random variable Y is normalized to have a mean of zero and a variance of one, such a normalized Gaussian distribution is commonly written as N(0,1).

Gaussian Process

◊ Central Limit Theorem
◊ Let $X_i$, $i = 1, 2, \ldots, N$, be a set of random variables that satisfies the following requirements:
  ◊ The $X_i$ are statistically independent.
  ◊ The $X_i$ have the same probability distribution with mean $\mu_X$ and variance $\sigma_X^2$.
◊ The $X_i$ so described are said to constitute a set of independent and identically distributed (i.i.d.) random variables. Define the normalized sum
$V_N = \frac{1}{\sqrt{N}}\sum_{i=1}^{N}\frac{X_i - \mu_X}{\sigma_X}$
◊ The central limit theorem states that the probability distribution of $V_N$ approaches a normalized Gaussian distribution N(0,1) in the limit as N approaches infinity.
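◊ A minimal numerical sketch of the theorem, using i.i.d. uniform variables (the sample sizes are arbitrary):

```python
import numpy as np

# V_N = (1/sqrt(N)) * sum((X_i - mu_X) / sigma_X) for i.i.d. uniform X_i
# should have mean ~0 and variance ~1, consistent with N(0, 1).
rng = np.random.default_rng(5)
N, trials = 100, 50_000
mu, sigma = 0.5, np.sqrt(1.0 / 12.0)   # mean and std of uniform(0, 1)

X = rng.uniform(0, 1, size=(trials, N))
V = np.sum((X - mu) / sigma, axis=1) / np.sqrt(N)
print(np.mean(V), np.var(V))           # ~0.0, ~1.0
```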

Gaussian Process
◊ Property 1: If a Gaussian process X(t) is applied to a stable linear filter, then the output Y(t) is also Gaussian.
◊ Property 2: Consider the set of random variables or samples X(t₁), X(t₂), …, X(tₙ), obtained by observing a random process X(t) at times t₁, t₂, …, tₙ. If the process X(t) is Gaussian, then this set of random variables is jointly Gaussian for any n, with their n-fold joint probability density function being completely determined by specifying the set of means
$\mu_{X(t_i)} = E[X(t_i)], \qquad i = 1, 2, \ldots, n$
and the set of autocovariance functions
$C_X(t_k, t_i) = E\big[(X(t_k) - \mu_{X(t_k)})(X(t_i) - \mu_{X(t_i)})\big], \qquad k, i = 1, 2, \ldots, n$
◊ Consider the composite set of random variables X(t₁), …, X(tₙ), Y(u₁), …, Y(uₘ). We say that the processes X(t) and Y(t) are jointly Gaussian if this composite set of random variables is jointly Gaussian for any n and m.
Gaussian Process

◊ Property 3: If a Gaussian process is wide-sense stationary, then the process is also stationary in the strict sense.
◊ Property 4: If the random variables X(t₁), X(t₂), …, X(tₙ) are uncorrelated, that is,
$E\big[(X(t_k) - \mu_{X(t_k)})(X(t_i) - \mu_{X(t_i)})\big] = 0, \qquad i \ne k$
then these random variables are statistically independent.
◊ The implication of this property is that the joint probability density function of the set of random variables X(t₁), X(t₂), …, X(tₙ) can be expressed as the product of the probability density functions of the individual random variables in the set.

References

❑ Simon Haykin, Communication Systems, Wiley, 2nd Edition.
❑ Taub, Schilling & Saha, Principles of Communication Systems, TMH.
❑ B. P. Lathi, Ding and Gupta, Modern Digital and Analog Communication Systems, Oxford.
❑ Kennedy and Davis, Electronic Communication Systems, TMH.
❑ Singh and Sapre, Communication Systems: Analog and Digital, TMH.
❑ P. Z. Peebles, Jr., Probability, Random Variables and Random Signal Principles, TMH, 4th Edition.
❑ http://wits.ice.nsysu.edu.tw
