CHAPTER 3
Random Signals and Noise
Let the variable X denote the temperature of a certain city, say, at 9 A.M.
In general, the value of X would be different on different days. In fact, the
temperature readings at 9 A.M. on two different days could be significantly
different, depending on the geographical location of the city and the time
separation between the days of observation. (In a place like Delhi, on a cold
winter morning, the temperature could be as low as 40° F, whereas at the height
of summer it could have crossed 100° F even by 9 A.M.!)
Indian Institute of Technology Madras
Principles of Communication
But the temperature is also a function of time. At 12 P.M., for example, the
temperature may have an entirely different distribution. Thus the random variable
X is a function of time and it would be more appropriate to denote it by X ( t ) . At
least theoretically, the PDF of X(t1) could be very different from that of
X(t2) for t1 ≠ t2, though in practice they may be quite similar if t1 and t2 are close to each other.
waveforms, labeled x j ( t ) , j = 1, 2, 3, 4 .
Now, let us think of observing this set of waveforms at some time instant
t = t1 as shown in the figure.
Example 3.1
Let a random process be generated by tossing a fair coin: heads selects one waveform and tails the other. Let us find the PDF of the process at (a) t = 0 and (b) t = 1.

There are only two sample functions for the process. Let us denote them
by x1(t) and x2(t), where x1(t) = sin(π t) and x2(t) = 2 t, as shown in
Fig. 3.2. As heads and tails are equally likely, we have P[x1(t)] = P[x2(t)] = 1/2.
Let X0 and X1 denote the random variables X(t)|_{t = 0} and X(t)|_{t = 1} respectively. As x1(0) = x2(0) = 0, X0 takes the value 0 with probability 1. At t = 1, x1(1) = 0 and x2(1) = 2, so that

    f_{X1}(x) = (1/2) [δ(x) + δ(x − 2)]
Note that this is one of the simplest examples of a random process that can be used to
illustrate the concept.
Example 3.2
Consider the experiment of throwing a fair die. The sample space consists
of six sample points, s1, ........, s6, corresponding to the six faces of the die. Let
the sample functions be given by

    xi(t) = (1/2) t + (i − 1) ,  for s = si , i = 1, ......., 6

A few of the sample functions of this random process are shown below
(Fig 3.3). Let X denote the random variable X(t)|_{t = 1}. The PDF of X is

    fX(x) = (1/6) Σ_{i = 1}^{6} δ(x − 1/2 − (i − 1))

and

    E[X] = (1/12) (1 + 3 + 5 + 7 + 9 + 11) = 3.0
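This ensemble average is easy to verify numerically. The sketch below is our own illustration (not part of the original notes); it assumes the sample functions xi(t) = t/2 + (i − 1) given above, simulates repeated throws of a fair die, and estimates E[X(t)] at t = 1.

```python
import random

def sample_function(i, t):
    # x_i(t) = t/2 + (i - 1), selected by the die face i = 1, ..., 6
    return 0.5 * t + (i - 1)

def ensemble_mean(t, trials=200000, seed=1):
    # Estimate E[X(t)] by averaging x_i(t) over many fair-die throws.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += sample_function(rng.randint(1, 6), t)
    return total / trials

# Exact ensemble mean at t = 1: each face has probability 1/6
exact = sum(sample_function(i, 1) for i in range(1, 7)) / 6
estimate = ensemble_mean(1)
```

Both `exact` and `estimate` come out close to 3.0, in agreement with E[X] = (1/12)(1 + 3 + ... + 11).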
The examples cited above have two features in common, namely (i) the
number of sample functions is finite (in fact, we could even say, quite small)
and (ii) the sample functions can be mathematically described. In quite a few
situations involving a random process, the above features may not be present.
Consider the experiment of picking a resistor at random from a large collection of N resistors and observing its noise voltage waveform (Fig. 3.4). As each resistor is equally likely to be picked, the probability of observing any particular waveform is 1/N, and this probability becomes smaller and smaller as N becomes larger and
larger. Also, it would be an extremely difficult task to write a mathematical
expression to describe the time variation of any given voltage waveform.
However, as we shall see later on in this chapter, statistical characterization of
such noise processes is still possible, and it is adequate for our purposes.
Fig. 3.4: The ensemble for the experiment 'picking a resistor at random'
One fine point deserves special mention: the waveforms (sample
functions) in the ensemble are not random. They are deterministic. Randomness
in this situation is associated not with the waveforms but with the uncertainty as
to which waveform will occur on a given trial.
3.3 Stationarity
By definition, a random process X(t) implies the existence of an infinite
number of random variables, one for every time instant in the range −∞ < t < ∞.
Let X(t) be observed at n time instants t1, t2, ........, tn. We then have the n
corresponding random variables X(t1), X(t2), ........, X(tn), with the joint
distribution function

    F_{X(t)}(x) = P[X(t1) ≤ x1, X(t2) ≤ x2, ........, X(tn) ≤ xn]
variable with a known PDF. The third method of specifying a random process is
in terms of the fraction of the sample functions {x_j(t)} that satisfy a requirement of the form

    P(A) = ∫_{a1}^{b1} ∫_{a2}^{b2} ∫_{a3}^{b3} f_{X(t)}(x) d x

where X(t) = (X(t1), X(t2), X(t3)) and d x = d x1 d x2 d x3.
Fig. 3.5: Set of windows and a waveform that passes through them
The above step can easily be generalized to the case of a random vector
with n components. Stationary processes constitute an important general class of random processes. We shall define stationarity here. Let the
process X(t) be observed at time instants t1, t2, .........., tn, and let X(t) be the
corresponding random vector.
Def. 3.1: A random process X(t) is said to be strictly stationary or Stationary in
a Strict Sense (SSS) if the joint PDF f_{X(t)}(x) is invariant to a translation of the
time origin; that is,

    f_{X(t + T)}(x) = f_{X(t)}(x)            (3.1)

where (t + T) = (t1 + T, t2 + T, .........., tn + T).

For X(t) to be SSS, Eq. 3.1 should be valid for every finite set of time
instants {t_j}, j = 1, 2, .........., n, and for every time shift T and dummy vector x.
Example 3.3
Let the frequency F of a sinusoidal process be a random variable with the PDF

    f_F(f) = 1/100 ,  100 ≤ f ≤ 200 Hz
           = 0 ,      otherwise

Each sample function is a sine wave of unit amplitude and a particular frequency f. Over the
ensemble, the random variable F takes all possible values in the range 100 to 200 Hz.
3.4 Ensemble Averages
We had mentioned earlier that a random process is completely specified
if f_{X(t)}(x) is known. Seldom is it possible to have this information, and we may
have to be content with a partial description of the process based on certain
averages. When these averages are derived from the ensemble, they are called
ensemble averages. Usually, the mean function and the autocorrelation function
are used for this purpose. The mean function of the process is

    m_X(t) = E[X(t)] = ⟨X(t)⟩ = ∫ x f_{X(t)}(x) d x            (3.2)

In particular, for the random variable Xi = X(ti),

    ⟨Xi⟩ = ∫ x f_{Xi}(x) d x = m_X(ti)
Similarly,

    ⟨Xj⟩ = ∫ x f_{Xj}(x) d x = m_X(tj)

The autocorrelation function (ACF) of the process is

    R_X(tk, ti) = E[X(tk) X(ti)]            (3.3)

That is,

    R_X(tk, ti) = ∫∫ x y f_{Xk, Xi}(x, y) d x d y            (3.4a)

and the corresponding autocovariance function is

    C_X(tk, ti) = R_X(tk, ti) − m_X(tk) m_X(ti)            (3.4b)

For a stationary process, the mean function is independent of time,

    m_X(t) = m_X            (3.5a)

mX being a constant. In such a case, we can simply mention the mean value of
the process. Also, for a stationary process, we find that the autocorrelation and
autocovariance functions depend only on the time difference (tk − ti).
R_X(tk, ti) = E[X(tk) X(ti)]
            = E[X(tk + T) X(ti + T)] , as X(t) is stationary.

In particular, taking T = −ti,

    R_X(tk, ti) = E[X(tk − ti) X(0)]

that is,

    R_X(tk, ti) = R_X(tk − ti, 0)            (3.5b)

In view of Eq. 3.4(b), it is not difficult to see that, for a stationary process,

    C_X(tk, ti) = C_X(tk − ti)

It is important to realize that for a process that is SSS, Eq. 3.5(a) and Eq.
3.5(b) hold. However, we should not infer that any process for which Eq. 3.5 is
valid is a stationary process. In any case, the processes satisfying Eq. 3.5 are
sufficiently useful and are termed Wide Sense Stationary (WSS) processes. That is, X(t) is WSS if

    m_X(t) = m_X            (3.6a)
and
    R_X(tk, ti) = R_X(tk − ti)            (3.6b)
For definitions of other forms of stationarity (such as Nth-order stationarity), see [1, p. 302].
For a WSS process, the ACF can be expressed as

    R_X(τ) = E[X(t + τ) X(t)]            (3.7)

Note that τ is the difference between the two time instants at which the process
is being sampled. The ACF of a WSS process satisfies the following properties.

P1) The mean-square value of the process is R_X(τ)|_{τ = 0}; that is,

    R_X(0) = E[X²(t)]

As R_X(0) is a constant, we infer that for a WSS process, the mean and mean-square values are independent of time.

P2) The ACF is an even function of τ; that is,

    R_X(τ) = R_X(−τ)

This follows from Eq. 3.5(b), since

    R_X((t + τ) − t) = R_X(t − (t + τ))

which is the desired result.

P3) The ACF is maximum at the origin. As E[(X(t) ± X(t + τ))²] ≥ 0, we have
2 R_X(0) ± 2 R_X(τ) ≥ 0, which implies R_X(0) ≥ |R_X(τ)|.
P4) If the sample functions of the process X(t) are periodic with period T0, then
the ACF is also periodic with period T0. This is because, when the waveform
repeats with period T0, the product repeats and so does the expectation of
this product.

The physical significance of R_X(τ) is that it provides a means of
describing the inter-dependence of two random variables obtained by
observing the process X(t) at two time instants τ seconds apart. In fact, if
X(t) is a zero mean process, then for any τ = τ1, the quantity
R_X(τ1)/R_X(0) is the correlation coefficient of the random variables X(t) and X(t + τ1).
Example 3.4
Let us find R_X(2, 4) for the random process of Example 3.2.

    R_X(2, 4) = E[X(2) X(4)] = Σ_{j = 1}^{6} P[x_j] x_j(2) x_j(4)

As P[x_j] = 1/6 for all j, we have

    R_X(2, 4) = (1/6) Σ_{j = 1}^{6} x_j(2) x_j(4)

For j = 1, the two samples that contribute to the ACF have the values 1
and 2 (marked on x1(t) in Fig. 3.3). The other sample
pairs are (2, 3), (3, 4), (4, 5), (5, 6), (6, 7). Hence

    R_X(2, 4) = (1/6) [2 + 6 + 12 + 20 + 30 + 42] = 112/6 = 18.66
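The same sum can be checked with exact rational arithmetic. The snippet below is our own illustration (not from the original notes) and assumes the sample functions of Example 3.2.

```python
from fractions import Fraction

def x(i, t):
    # Sample function x_i(t) = t/2 + (i - 1) of the die-throw process
    return Fraction(t, 2) + (i - 1)

# The six equally likely sample pairs (x_j(2), x_j(4))
pairs = [(x(i, 2), x(i, 4)) for i in range(1, 7)]

# R_X(2, 4) = (1/6) * sum of the products
R_24 = sum(a * b for a, b in pairs) / 6
```

`pairs` reproduces (1, 2), (2, 3), ..., (6, 7), and `R_24` is exactly 112/6.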
Exercise 3.1
Find
a) E[X] and E[Y]
b) f_{X,Y}(x, y)
c) R_X(1, 2)
Exercise 3.2
A random process X(t) has five equally likely sample functions, each with
probability 1/5. Four of these sample functions are given below.

    x1(t) = cos(2π t) − sin(2π t)
    x2(t) = sin(2π t) − cos(2π t)
    x3(t) = 2 cos t
    x4(t) = 2 sin t

a) Find the fifth sample function x5(t) of the process X(t) such that the
   process X(t) is
   i) zero mean
   ii) R_X(t1, t2) = R_X(t1 − t2)
b) Let V and W denote X(t)|_{t = 4} and X(t)|_{t = 0} respectively. Show that
   f_V(v) = f_W(v).
Example 3.5
Let X(t) = A cos(ωc t + Θ),
where A and ωc are constants and Θ is a random variable with the PDF

    f_Θ(θ) = 1/(2π) ,  0 ≤ θ < 2π
           = 0 ,       otherwise

Let us compute
a) m_X(t) and
b) R_X(t1, t2).
Three sample functions of the process are shown in Fig. 3.10 for
fc = 10^6 Hz and A = 1.
a)  m_X(t) = E[A cos(ωc t + Θ)] = (A/2π) ∫_0^{2π} cos(ωc t + θ) d θ = 0

Note that Θ is uniform in the range 0 to 2π. This is a reasonable answer
because, for any given Θ = θ1, the ensemble has both the waveforms

    A cos(ωc t + θ1)   and   A cos(ωc t + θ1 + π) = − A cos(ωc t + θ1)

Both the waveforms are equally likely.
b)  R_X(t1, t2) = E[A cos(ωc t1 + Θ) A cos(ωc t2 + Θ)]
              = (A²/2) E[cos(ωc (t1 + t2) + 2Θ) + cos(ωc (t1 − t2))]

As E[cos(ωc (t1 + t2) + 2Θ)] = 0, we have

    R_X(t1, t2) = (A²/2) cos(ωc (t1 − t2))

That is,

    R_X(t1, t2) = R_X(t1 − t2) = R_X(τ) = (A²/2) cos(ωc τ)
The sample functions are periodic with period 1/fc. In accordance with
property P4, we find that the ACF is also periodic with period 1/fc. Of course, it is also an even function of τ (property P2).
Exercise 3.3
Find fY(y).

Exercise 3.4

Exercise 3.5
Example 3.6
In this example, we shall find the ACF of the random impulse train process
specified by

    X(t) = Σ_n A_n δ(t − n T − Td)

where the amplitudes A_n are statistically independent random variables taking
the values +1 and −1 with probability 1/2 each, and the delay Td is uniformly
distributed in the interval (0, T). That is, f_{Td}(td) = 1/T for 0 ≤ td ≤ T and zero
elsewhere. Impulses are spaced T seconds apart.
R_X(t + τ, t) = E[ Σ_m A_m δ(t + τ − mT − Td) · Σ_n A_n δ(t − nT − Td) ]
             = Σ_m Σ_n E[A_m A_n] E[δ(t + τ − mT − Td) δ(t − nT − Td)]

But

    E[A_m A_n] = 1 ,  m = n
               = 0 ,  otherwise

since, for m = n, E[A_n²] = (1/2)(1)² + (1/2)(−1)² = 1, whereas for m ≠ n,
E[A_m] E[A_n] = [(1/2)(1) + (1/2)(−1)]² = 0. Hence

    R_X(t + τ, t) = Σ_n E[δ(t + τ − nT − Td) δ(t − nT − Td)]
                 = Σ_n (1/T) ∫_0^T δ(t + τ − nT − td) δ(t − nT − td) d td

With the substitution x = t − nT − td, the interval 0 ≤ td ≤ T maps, for each n, to
t − (n + 1)T ≤ x ≤ t − nT, and the sum over n covers the entire x-axis. Hence

    R_X(t + τ, t) = (1/T) ∫_{−∞}^{∞} δ(x) δ(x + τ) d x

Letting y = − x, we have

    R_X(t + τ, t) = (1/T) ∫ δ(−y) δ(τ − y) d y = (1/T) ∫ δ(y) δ(τ − y) d y
                 = (1/T) [δ ∗ δ](τ) = (1/T) δ(τ)

That is, the random impulse train is WSS with R_X(τ) = (1/T) δ(τ).
3.4.2 Cross-correlation
Consider two random processes X(t) and Y(t). We define the two
cross-correlation functions of X(t) and Y(t) as follows:

    R_{XY}(t1, t2) = E[X(t1) Y(t2)]
    R_{YX}(t1, t2) = E[Y(t1) X(t2)]

where t1 and t2 are the two time instants at which the processes are observed.

Def. 3.6: The processes X(t) and Y(t) are said to be jointly wide sense stationary if
i) each of X(t) and Y(t) is WSS, and
ii) R_{XY}(t1, t2) = R_{XY}(t1 − t2) = R_{XY}(τ)            (3.8)

Def. 3.7:

Def. 3.8:
Exercise 3.6
For jointly WSS processes X(t) and Y(t), show that
a) |R_{XY}(τ)| ≤ √(R_X(0) R_Y(0))
b) |R_{XY}(τ)| ≤ (1/2) [R_X(0) + R_Y(0)]
Consider an LTI system with impulse response h(t), with a random process
X(t) as its input. Each sample function x_j(t) of the input gives rise to an output
sample function

    y_j(t) = ∫ h(τ) x_j(t − τ) d τ

As the above relation is true for every sample function of X(t), we can write

    Y(t) = ∫ h(τ) X(t − τ) d τ
    m_Y(t) = E[Y(t)] = E[ ∫ h(τ) X(t − τ) d τ ]            (3.9a)

Provided that E[X(t)] is finite for all t and the system is stable, we may
interchange the order of the expectation and the integration with respect to τ in Eq.
3.9(a) and write

    m_Y(t) = ∫ h(τ) E[X(t − τ)] d τ            (3.9b)

where we have used the fact that h(τ) is deterministic and can be brought
outside the expectation. If X(t) is WSS, then E[X(t − τ)] is a constant mX, so that
Eq. 3.9(b) can be simplified as

    m_Y(t) = mX ∫ h(τ) d τ = mX H(0)            (3.10)

where H(0) = H(f)|_{f = 0} and H(f) is the transfer function of the given system.
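In discrete time the same statement reads m_Y = m_X Σ_k h[k], since H(0) reduces to the sum of the impulse-response samples. The sketch below is our own illustration (the FIR filter and input statistics are arbitrary choices, not from the original notes).

```python
import random

def output_mean(h, mx, n=100000, seed=3):
    # Drive an FIR filter h with a WSS input of mean mx
    # (mx plus zero-mean uniform noise) and estimate the output mean.
    rng = random.Random(seed)
    x = [mx + rng.uniform(-1.0, 1.0) for _ in range(n)]
    total = 0.0
    count = 0
    for t in range(len(h) - 1, n):
        total += sum(h[k] * x[t - k] for k in range(len(h)))
        count += 1
    return total / count

h = [0.5, 0.25, 0.125]   # illustrative impulse response (our choice)
H0 = sum(h)              # discrete-time H(0)
my = output_mean(h, mx=2.0)
```

The empirical output mean approaches mx · H(0) = 2.0 × 0.875.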
Next, consider the autocorrelation of the output:

    R_Y(t, u) = E[Y(t) Y(u)] = E[ ∫ h(τ1) X(t − τ1) d τ1 ∫ h(τ2) X(u − τ2) d τ2 ]

Again interchanging expectation and integration,

    R_Y(t, u) = ∫ d τ1 h(τ1) ∫ d τ2 h(τ2) E[X(t − τ1) X(u − τ2)]
             = ∫ d τ1 h(τ1) ∫ d τ2 h(τ2) R_X(t − τ1, u − τ2)

If X(t) is WSS, then, with τ = t − u,

    R_Y(t, u) = ∫ d τ1 h(τ1) ∫ d τ2 h(τ2) R_X(τ − τ1 + τ2)            (3.11)
But

    E[Y²(t)] = R_Y(0) = ∫∫ h(τ1) h(τ2) R_X(τ2 − τ1) d τ1 d τ2

Writing h(τ1) in terms of its Fourier transform,

    h(τ1) = ∫ H(f) e^{j 2π f τ1} d f

we have

    E[Y²(t)] = ∫∫ [ ∫ H(f) e^{j 2π f τ1} d f ] h(τ2) R_X(τ2 − τ1) d τ1 d τ2
            = ∫ H(f) d f ∫ h(τ2) d τ2 ∫ R_X(τ2 − τ1) e^{j 2π f τ1} d τ1

Letting τ = τ2 − τ1 in the innermost integral,

    E[Y²(t)] = ∫ H(f) d f ∫ h(τ2) e^{j 2π f τ2} d τ2 ∫ R_X(τ) e^{−j 2π f τ} d τ

As ∫ h(τ2) e^{j 2π f τ2} d τ2 = H*(f) for real h(t), this reduces to

    E[Y²(t)] = ∫ S_X(f) |H(f)|² d f            (3.12)
where S_X(f) = ∫ R_X(τ) e^{−j 2π f τ} d τ.

To interpret S_X(f), consider an ideal narrowband band-pass filter with

    |H(f)| = 1 ,  |f − fc| < Δf/2
           = 0 ,  |f − fc| > Δf/2

Then, from Eq. 3.12, we find that if the filter bandwidth Δf is sufficiently small
and S_X(f) is a continuous function, then the mean square value of the filter
output is approximately

    E[Y²(t)] ≈ (2 Δf) S_X(fc)

The filter, however, passes only those frequency components of the input random
process X(t) that lie inside the narrow frequency band of width Δf centered
about ±fc (the factor 2 accounts for the bands at +fc and −fc). Thus, S_X(fc)
represents the frequency density of the average power in the process X(t),
evaluated at f = fc. The dimensions of S_X(f) are watts/Hz.
Accordingly, the power spectral density (PSD) of the WSS process X(t) is

    S_X(f) = ∫ R_X(τ) e^{−j 2π f τ} d τ            (3.13)
and, by the inverse transform,

    R_X(τ) = ∫ S_X(f) e^{j 2π f τ} d f            (3.14)
Eqs. 3.13 and 3.14 are popularly known as the Wiener-Khinchine relations. Using
this pair of equations, we shall derive some general properties of the PSD of a WSS
random process.
P1) The zero-frequency value of the PSD of a WSS process equals the total
area under the graph of its ACF; that is,

    S_X(0) = ∫ R_X(τ) d τ

This follows from Eq. 3.13 with f = 0.

P2) The mean-square value of the process equals the total area under the graph of the PSD; that is,

    E[X²(t)] = ∫ S_X(f) d f

This follows from Eq. 3.14 with τ = 0, since R_X(0) = E[X²(t)].

P3) The PSD is real and is an even function of frequency; that is, S_X(f) is real
and S_X(−f) = S_X(f). This result is due to the property of the ACF, namely, that R_X(τ) is real
and even.

P4) The PSD of a WSS process is always non-negative; that is,
S_X(f) ≥ 0 for all f. To see this, suppose X(t) is passed through a narrowband filter with

    H(f) = 1 ,  f1 ≤ |f| ≤ f1 + Δf
         = 0 ,  otherwise

As the output power E[Y²(t)] of Eq. 3.12 cannot be negative for any choice of f1
and Δf, S_X(f) cannot be negative anywhere.
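The discrete-time analogue of Eq. 3.13 and of properties P2 - P4 can be checked numerically. In the sketch below (our own illustration, not from the original notes) we take R_X[k] = a^{|k|} with |a| < 1, a valid ACF, and evaluate S_X(f) = Σ_k R_X[k] e^{−j2πfk} directly.

```python
import cmath

def psd_from_acf(acf_pos, f):
    # Discrete-time Eq. 3.13: S_X(f) = sum over k of R_X[k] e^{-j 2 pi f k}.
    # acf_pos holds R_X[k] for k >= 0; evenness supplies the negative lags.
    s = complex(acf_pos[0])
    for k in range(1, len(acf_pos)):
        s += acf_pos[k] * (cmath.exp(-2j * cmath.pi * f * k)
                           + cmath.exp(2j * cmath.pi * f * k))
    return s

a = 0.6
acf = [a ** k for k in range(200)]            # R_X[k] = a^|k|
freqs = [i / 50 for i in range(-25, 26)]      # f in [-1/2, 1/2]
vals = [psd_from_acf(acf, f) for f in freqs]

# P2 analogue: the area under the PSD over one period equals R_X[0] = 1
area = sum(v.real for v in vals[:-1]) / 50
```

The computed PSD comes out real, even, and strictly positive, and its area over one period recovers R_X[0].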
Eq. 3.12 is a special case of a more general result for the ACF of the output.
Substituting R_X(τ − τ1 + τ2) = ∫ S_X(f) e^{j 2π f (τ − τ1 + τ2)} d f in Eq. 3.11, we have

    R_Y(τ) = ∫∫ h(τ1) h(τ2) [ ∫ S_X(f) e^{j 2π f (τ − τ1 + τ2)} d f ] d τ1 d τ2
           = ∫ S_X(f) e^{j 2π f τ} [ ∫ h(τ1) e^{−j 2π f τ1} d τ1 ] [ ∫ h(τ2) e^{j 2π f τ2} d τ2 ] d f
           = ∫ H(f) H*(f) S_X(f) e^{j 2π f τ} d f

That is,

    R_Y(τ) = ∫ |H(f)|² S_X(f) e^{j 2π f τ} d f            (3.15)

and taking the Fourier transform of both sides,

    S_Y(f) = |H(f)|² S_X(f)            (3.16)
(3.16)
Note that when the input to an LTI system is deterministic, we have the inputoutput FT relationship, Y ( f ) = X ( f ) H ( f ) . The corresponding time domain
relationship is y ( t ) = x ( t ) h ( t ) . Let Rh ( ) denote the ACF of the h ( t ) . Then
Rh ( ) H ( f ) (see P4, sec 1.6.2). Hence
2
RY ( ) = R x ( ) Rh ( )
= Rx ( ) h ( ) h ( )
(3.17a)
(3.17b)
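For a discrete-time white input, R_X[k] = δ[k], and Eq. 3.17(b) then says R_Y is simply the (deterministic) autocorrelation of the impulse response. The sketch below (our own illustration, with an arbitrary filter) verifies this by direct convolution.

```python
def convolve(a, b):
    # Full linear convolution of two finite sequences
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def autocorr(h, k):
    # Direct time-domain autocorrelation sum_n h[n] h[n + k], for k >= 0
    return sum(h[n] * h[n + k] for n in range(len(h) - k))

h = [1.0, 0.5, 0.25]        # illustrative impulse response (our choice)
# R_Y = R_X * h * h(-), and with R_X = delta this is h convolved
# with its time reversal; the zero lag sits at index len(h) - 1.
Ry = convolve(h, h[::-1])
```

`Ry` is symmetric about its centre and matches the direct autocorrelation sums, as P2 and Eq. 3.17(b) require.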
Example 3.7
For the random process X(t) of Example 3.5, let us find the PSD.
Since R_X(τ) = (A²/2) cos(ωc τ), we have

    S_X(f) = (A²/4) [δ(f − fc) + δ(f + fc)]

Next, consider Y(t) = X(t) cos(ωc t + Θ), where X(t) is WSS and Θ is uniformly
distributed in (0, 2π) and independent of X(t). Then

    R_Y(τ) = E[Y(t + τ) Y(t)]
           = E[X(t + τ) cos(ωc (t + τ) + Θ) X(t) cos(ωc t + Θ)]

As X(t) and Θ are independent, we have

    R_Y(τ) = E[X(t + τ) X(t)] E[cos(ωc (t + τ) + Θ) cos(ωc t + Θ)]
           = (1/2) R_X(τ) cos(ωc τ)

so that

    S_Y(f) = F[R_Y(τ)] = (1/4) [S_X(f − fc) + S_X(f + fc)]
The random binary wave is characterized as follows.
1. The pulses are of duration T, with amplitude +A or −A, the two values being
equally likely; the amplitudes in different pulse intervals are statistically independent.
2. The pulses are not synchronized, so that the starting time of the first pulse,
td, is equally likely to lie anywhere between zero and T seconds. That is, td
is the sample value of a uniformly distributed random variable Td with its
probability density function defined by

    f_{Td}(td) = 1/T ,  0 ≤ td ≤ T
              = 0 ,    elsewhere

3. The above process can be generated by passing the random impulse train
(Example 3.6) through a filter with the impulse response

    h(t) = A ,  0 ≤ t ≤ T
         = 0 ,  otherwise
The filter transfer function is

    H(f) = A T sinc(f T)

Therefore,

    S_X(f) = (1/T) A² T² sinc²(f T) = A² T sinc²(f T)

Taking the inverse Fourier transform, the ACF is the triangular function

    R_X(τ) = A² (1 − |τ|/T) ,  |τ| ≤ T
           = 0 ,              otherwise

The ACF of the random binary wave process can also be computed by direct
time-domain arguments. The interested reader is referred to [3].

We note that the energy spectral density of a rectangular pulse x(t) of
amplitude A and duration T is E_x(f) = A² T² sinc²(f T).
Hence, S_X(f) = E_x(f)/T.
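The triangular ACF can also be checked by simulation. The sketch below is our own illustration (not in the original notes), assuming T = 1 and A = 1: each trial draws the random delay Td and the independent ±A amplitudes, then averages the product of two samples of the wave.

```python
import random

def binary_wave_acf(tau, A=1.0, T=1.0, trials=100000, seed=5):
    # Estimate R_X(tau) = E[X(t + tau) X(t)] for the random binary wave.
    rng = random.Random(seed)
    t = 0.0          # the process is WSS, so any t will do
    acc = 0.0
    for _ in range(trials):
        td = rng.uniform(0.0, T)
        n1 = int((t - td) // T)          # pulse index covering t
        n2 = int((t + tau - td) // T)    # pulse index covering t + tau
        a1 = A if rng.random() < 0.5 else -A
        a2 = a1 if n2 == n1 else (A if rng.random() < 0.5 else -A)
        acc += a1 * a2
    return acc / trials

r0 = binary_wave_acf(0.0)       # expect A^2 (1 - 0) = 1
r_half = binary_wave_acf(0.5)   # expect A^2 (1 - 0.5) = 0.5
r_far = binary_wave_acf(2.0)    # |tau| > T: expect 0
```

The estimates trace out the triangle A² (1 − |τ|/T) and vanish beyond |τ| = T.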
Exercise 3.7
Show that the PSD is

    (A²/4) [ δ(f) + sin²(π f T)/(π² f² T) ]
Exercise 3.8
A WSS process X(t) with ⟨X(t)⟩ = 2 and R_X(τ) = 4 + e^{−|τ|} is applied to a
network involving a capacitor. Find
a) ⟨Y(t)⟩
b) S_Y(f)
The cross-spectral densities are defined as

    S_{XY}(f) = ∫ R_{XY}(τ) e^{−j 2π f τ} d τ            (3.18)

    S_{YX}(f) = ∫ R_{YX}(τ) e^{−j 2π f τ} d τ            (3.19)

That is,

    R_{XY}(τ) ↔ S_{XY}(f)
    R_{YX}(τ) ↔ S_{YX}(f)
Example 3.10
Let Z(t) = X(t) + Y(t), where the random processes X(t) and Y(t) are jointly WSS. Then

    R_Z(t, u) = E[Z(t) Z(u)]
             = E[(X(t) + Y(t)) (X(u) + Y(u))]
             = R_X(t, u) + R_{XY}(t, u) + R_{YX}(t, u) + R_Y(t, u)

Letting τ = t − u, we have

    R_Z(τ) = R_X(τ) + R_{XY}(τ) + R_{YX}(τ) + R_Y(τ)            (3.20a)

and

    S_Z(f) = S_X(f) + S_{XY}(f) + S_{YX}(f) + S_Y(f)            (3.20b)

If X(t) and Y(t) are zero mean and uncorrelated, i.e.,

    R_{XY}(τ) = ⟨X(t + τ) Y(t)⟩ = 0

then

    R_Z(τ) = R_X(τ) + R_Y(τ)  and  S_Z(f) = S_X(f) + S_Y(f)
Example 3.11
Let Y(t) be the output of an LTI system with impulse response h(t), when the
input is X(t). Then

    R_{YX}(t, u) = E[Y(t) X(u)]
               = E[ ∫ h(τ1) X(t − τ1) d τ1 · X(u) ]
               = ∫ h(τ1) E[X(t − τ1) X(u)] d τ1

If X(t) is WSS, then

    R_{YX}(t, u) = ∫ h(τ1) R_X(t − u − τ1) d τ1

With τ = t − u,

    R_{YX}(τ) = ∫ h(τ1) R_X(τ − τ1) d τ1 = h(τ) ∗ R_X(τ)            (3.21a)

That is, the cross-correlation between the output and the input is the convolution of
the input ACF with the filter impulse response. Taking the Fourier transform,

    S_{YX}(f) = S_X(f) H(f)            (3.21b)
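Eq. 3.21 is easy to test in discrete time: for a zero mean, unit variance white input, R_X[k] = δ[k], and the prediction is simply R_YX[k] = h[k]. The following Monte Carlo sketch is our own illustration (filter and sample sizes are arbitrary choices).

```python
import random

def cross_corr_yx(h, n=200000, seed=11):
    # Y = h * X, with X zero mean unit variance white Gaussian noise.
    # Estimate R_YX[k] = E[Y(t) X(t - k)] for k = 0 .. len(h) - 1.
    rng = random.Random(seed)
    x = [rng.gauss(0.0, 1.0) for _ in range(n)]
    est = [0.0] * len(h)
    count = 0
    for t in range(len(h) - 1, n):
        y = sum(h[m] * x[t - m] for m in range(len(h)))
        for k in range(len(h)):
            est[k] += y * x[t - k]
        count += 1
    return [e / count for e in est]

h = [0.8, -0.4, 0.2]     # illustrative impulse response (our choice)
r_yx = cross_corr_yx(h)  # should approach h itself
```

The estimated cross-correlation reproduces the impulse response, as Eq. 3.21(a) predicts for a white input.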
Example 3.12
In the scheme shown (Fig. 3.14), X(t) and Y(t) are jointly WSS; V(t) is the
output of the filter h1(t) with input X(t), and Z(t) is the output of the filter h2(t)
with input Y(t). Let us compute S_{VZ}(f).

    R_{VZ}(t, u) = E[V(t) Z(u)]
               = E[ ∫ h1(τ1) X(t − τ1) d τ1 ∫ h2(τ2) Y(u − τ2) d τ2 ]
               = ∫∫ h1(τ1) h2(τ2) E[X(t − τ1) Y(u − τ2)] d τ1 d τ2

With τ = t − u,

    R_{VZ}(τ) = ∫∫ h1(τ1) h2(τ2) R_{XY}(τ − τ1 + τ2) d τ1 d τ2            (3.22)

Taking the Fourier transform,

    S_{VZ}(f) = H1(f) H2*(f) S_{XY}(f)            (3.23)

Note that if mX or mY (or both) is zero, then V(t) or Z(t) (or both)
will be zero mean (note that ⟨V(t)⟩ = H1(0) mX and ⟨Z(t)⟩ = H2(0) mY).
With X(t) = Y(t) and h1(t) = h2(t) = h(t), Eq. 3.23 reduces to the familiar result

    S_V(f) = |H(f)|² S_X(f)
A random process X(t) is said to be Gaussian if the random variables
X1 = X(t1), X2 = X(t2), ..........., Xn = X(tn) have a jointly Gaussian
n-dimensional joint density for every n ≥ 1 and for every set of observation
instants (t1, t2, ..........., tn). The n-dimensional Gaussian PDF is given by
    f_X(x) = [1/((2π)^{n/2} |C_X|^{1/2})] exp[ −(1/2) (x − m_X) C_X^{−1} (x − m_X)^T ]            (3.24)

where m_X is the mean vector of (X1, X2, ..........., Xn), |C_X| is the determinant of
the covariance matrix

    C_X = | λ11  λ12  ...  λ1n |
          | λ21  λ22  ...  λ2n |
          |  :    :          : |
          | λn1  λn2  ...  λnn |

with λij = cov[Xi, Xj] = E[(Xi − ⟨Xi⟩)(Xj − ⟨Xj⟩)]

Similarly, (x − m_X) = (x1 − ⟨X1⟩, x2 − ⟨X2⟩, ........, xn − ⟨Xn⟩).
If the mean value of the process is constant and λij = cov[X(ti), X(tj)]
depends only on (ti − tj), then the joint PDF of Eq. 3.24 is
independent of the time origin. In other words, a WSS Gaussian process is also
stationary in a strict sense. To illustrate the use of the matrix notation, let us take
an example of a joint Gaussian PDF with n = 2.
Example 3.13
Let X1 and X2 be jointly Gaussian with zero mean, variance σ² each and
correlation coefficient ρ.

i) As λ11 = λ22 = σ² and λ12 = λ21 = ρ σ², we have

    C_X = σ² | 1  ρ |
             | ρ  1 | ,     |C_X| = σ⁴ (1 − ρ²)

and

    C_X^{−1} = [1/(σ² (1 − ρ²))] |  1  −ρ |
                                 | −ρ   1 |

Therefore,

    f_{X1, X2}(x1, x2) = [1/(2π σ² √(1 − ρ²))] exp[ −(1/2) (x1  x2) C_X^{−1} (x1  x2)^T ]

ii) Taking the matrix products in the exponent above, we have the expanded
result, namely,

    f_{X1, X2}(x1, x2) = [1/(2π σ² √(1 − ρ²))] exp[ −(x1² − 2ρ x1 x2 + x2²)/(2 σ² (1 − ρ²)) ] ,  −∞ < x1, x2 < ∞

This is the same as the bivariate Gaussian PDF of Chapter 2, section 6, with
x1 = x and x2 = y, m_X = m_Y = 0 and σ_X = σ_Y = σ.
Example 3.14
X(t) is a Gaussian process with m_X(t) = 3 and C_X(t1, t2) = 4 e^{−0.2 |t1 − t2|}.
Let us find
a) P[X(5) ≤ 2]
b) P[|X(8) − X(5)| ≤ 1]

a) Let Y = X(t)|_{t = 5}. Then Y is Gaussian with mean 3 and variance
C_X(5, 5) = 4. Hence

    P[Y ≤ 2] = P[(Y − 3)/2 ≤ −1/2] = Q(0.5)

b) Let Z = X(8) − X(5) = W − Y, where W = X(8) and Y = X(5). Z is Gaussian
with zero mean and variance

    σ_Z² = σ_Y² + σ_W² − 2 cov[Y, W]
        = 4 + 4 − 2 · 4 e^{−0.2 × 3}
        = 8 (1 − e^{−0.6}) = 3.608

Hence

    P[|Z| ≤ 1] = 2 P[0 ≤ Z ≤ 1] = 1 − 2 P[Z > 1]
              = 1 − 2 P[Z/√3.608 > 1/√3.608]
              = 1 − 2 Q(0.52)
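The two probabilities can be evaluated numerically via the standard relation Q(x) = (1/2) erfc(x/√2). The sketch below is our own check of the arithmetic, not part of the original notes.

```python
import math

def Q(x):
    # Gaussian tail probability Q(x) = P[Z > x], Z standard normal
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# a) P[X(5) <= 2]: mean 3, variance C_X(5, 5) = 4
p_a = Q(0.5)

# b) Z = X(8) - X(5) is zero mean with variance 8 (1 - e^{-0.6})
var_z = 8.0 * (1.0 - math.exp(-0.6))
p_b = 1.0 - 2.0 * Q(1.0 / math.sqrt(var_z))
```

This gives p_a ≈ 0.309 and p_b ≈ 0.40.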
Exercise 3.9
For zero mean, jointly Gaussian random variables, we have the moment factoring result

    ⟨X1 X2 X3 X4⟩ = ⟨X1 X2⟩ ⟨X3 X4⟩ + ⟨X1 X3⟩ ⟨X2 X4⟩ + ⟨X1 X4⟩ ⟨X2 X3⟩

The above formula can also be used to compute moments such as
⟨X1⁴⟩, ⟨X1² X2²⟩, etc. ⟨X1⁴⟩ can be computed as

    ⟨X1⁴⟩ = ⟨X1 X1 X1 X1⟩ = 3 λ11²

Similarly, ⟨X1² X2²⟩ = ⟨X1 X1 X2 X2⟩ = λ11 λ22 + 2 ⟨X1 X2⟩²

Show that E[X1² X2²] = 6.
In particular, if the Gaussian variables are uncorrelated with unit variance, then

    λij = 1 ,  i = j
        = 0 ,  i ≠ j

and C_X reduces to the identity matrix.

P3) Let Y = (Y1, Y2, .........., Yn) be the set of random variables obtained from X
through the linear transformation

    Y^T = A X^T + a^T

where A is any n × n matrix and a = (a1, a2, .........., an) is a vector of
constants {ai}. If X is jointly Gaussian, then so is Y.
Exercise 3.10
Let Y^T = A X^T + a^T, where the notation is from P3) above. Show that

    C_Y = A C_X A^T
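This covariance-propagation rule can be checked numerically. The sketch below is our own illustration with an arbitrary 2 × 2 example (zero means, so a = 0): the empirical covariance of Y = A X is compared against A C_X A^T.

```python
import random

def matmul(A, B):
    # Plain triple-loop matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(r) for r in zip(*A)]

A = [[1.0, 2.0], [0.5, -1.0]]         # illustrative transformation
C_X = [[1.0, 0.0], [0.0, 4.0]]        # X1, X2 independent, variances 1 and 4
C_Y_theory = matmul(matmul(A, C_X), transpose(A))

# Empirical covariance of Y = A X
rng = random.Random(2)
trials = 200000
acc = [[0.0, 0.0], [0.0, 0.0]]
for _ in range(trials):
    x = [rng.gauss(0.0, 1.0), rng.gauss(0.0, 2.0)]
    y = [A[0][0] * x[0] + A[0][1] * x[1],
         A[1][0] * x[0] + A[1][1] * x[1]]
    for i in range(2):
        for j in range(2):
            acc[i][j] += y[i] * y[j]
C_Y_est = [[acc[i][j] / trials for j in range(2)] for i in range(2)]
```

The empirical matrix approaches A C_X A^T = [[17, −7.5], [−7.5, 4.25]].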
Two of the important varieties of internal circuit noise are: (i) thermal noise
and (ii) shot noise.
The open-circuit thermal noise voltage of a resistor of R ohms has the two-sided
power spectral density

    S_V(f) = 2 R k T   (Volts)²/Hz            (3.25)

where T : absolute temperature in degrees Kelvin (K = °C + 273)
      k : Boltzmann's constant, 1.38 × 10⁻²³ Joules/K

It is to be remembered that Eq. 3.25 is valid only up to a certain frequency limit.
However, this limit is much, much higher than the frequency range of interest to
us.

If this open-circuit voltage is measured with the help of a true RMS
voltmeter of bandwidth B (frequency range: −B to B), then the reading on the
instrument would be

    √(2 R k T × 2B) = √(4 R k T B)  V.
Def. 3.9:
Let us treat the resistor to be noise free. But it is in series with a noise
source with SV ( f ) = 2 R k T . In other words, we have a noise source with the
source resistance R (Fig. 3.16(a)).
Fig. 3.16: (a) Resistor as noise source (b) The noise source driving a load
We know that the maximum power is transferred to the load, when the load is
matched to the source impedance. That is, the required load resistance is R
(Fig. 3.16(b)). The transfer function of this voltage divider network is
H(f) = R/(R + R), i.e. 1/2. Hence,

    available PSD = [S_V(f)/R] |H(f)|² = (2 R k T)/(4 R) = kT/2  Watts/Hz            (3.26)

and the available noise power over a bandwidth B is

    (kT/2) · 2B = kT B            (3.27)
In many solid state devices (and, of course, in vacuum tubes!) there exists
a noise current mechanism called shot noise. In 1918, Schottky carried out the
first theoretical study of the fluctuations in the anode current of a temperature-limited vacuum tube diode (this has been termed shot noise). He showed that
at frequencies well below the reciprocal of the electron transit time (which
extends up to a few GHz), the spectral density of the mean square noise
current due to the randomness of emission of electrons from the cathode is given
by

    S_I(f) = 2 q I   (Amp)²/Hz
where q is the electronic charge and I is the average anode current. (Note that
the units of spectral density could be any one of the three, namely, (a) Watts/Hz,
(b) (Volts)²/Hz or (c) (Amp)²/Hz. For the thermal noise source of Fig. 3.16(a),
we could construct the Norton equivalent circuit with
S_I(f) = 2 R k T / R² = 2 k T / R in parallel with the resistance R.)
Schottky's result also holds for the semiconductor diode. The current
spectral density of shot noise in a p-n junction diode is given by

    S_I(f) = 2 q (I + 2 I_S)            (3.28)

where I is the net junction current and I_S is the reverse saturation current.
A BJT has two semiconductor junctions and hence two sources of shot
noise. In addition, it contributes thermal noise arising from internal ohmic
resistance (such as the base resistance).
A JFET has a reverse biased junction between the gate terminal and the
semiconductor channel from the source to the drain. Hence, we have the gate
shot noise and the channel thermal noise in a JFET. (Note that gate shot noise
could get amplified when the device is used in a circuit.)
A noise process is said to be white if its power spectral density is constant
for all frequencies; such a constant PSD is denoted

    S_W(f) = N0/2  watts/Hz            (3.29)

The factor 1/2 has been included to indicate that half the power is
associated with the positive frequencies and half with the negative frequencies. In
addition to a flat spectrum, if the process happens to be Gaussian, we describe it
as white Gaussian noise. Note that white and Gaussian are two different
attributes. White noise need not be Gaussian noise, nor need Gaussian noise be
white. Only when "whiteness" and "Gaussianity" exist simultaneously is
the process qualified as a White Gaussian Noise (WGN) process.
White noise, whether Gaussian or not, must be fictitious (as is the case
with everything that is "ideal") because its total mean power is infinite. The utility
of the concept of white noise stems from the fact that when such a noise is passed
through a linear filter for which

    ∫ |H(f)|² d f < ∞            (3.30)

the filter output is a stationary zero mean noise process N(t) that is meaningful
(note that the white noise process, by definition, is a zero mean process).
As S_W(f) = N0/2, we have, for the ACF of white noise,

    R_W(τ) = (N0/2) δ(τ)            (3.31)
Eq. 3.31 implies that any two samples of white noise, no matter how close
together in time they are taken, are uncorrelated (note that R_W(τ) = 0 for
τ ≠ 0). If, in addition, the noise process is Gaussian, any two
samples of WGN are statistically independent. In a sense, WGN represents the
ultimate in randomness!
should sound monotonous because the waveform driving the speaker appears to
be the same for all time instants.
instrument depends on the required bandwidth, power level, etc.) We can give
partial demonstration (audio-visual) of the properties of white noise with the help
of these sources.
We will begin with the audio. By clicking on the speaker symbol, you will
listen to the speaker output when it is driven by a BLWN (band-limited white noise)
source with a frequency range up to 110 kHz. (As the speaker response is limited
to about 15 kHz, the input to the speaker is, for all practical purposes, white.)
We now show some flash animation pictures of the spectral density and
time waveforms of BLWN.
1.
Picture 1 is the display put out by a spectrum analyzer when it is fed with a
BLWN signal, band limited to 6 MHz. As can be seen from the display, the
spectrum is essentially constant up to 6 MHz and falls by 40 to 45 dB at
about 7 MHz and beyond.
2.
display is just about the same, whereas when you switch from 500 μsec/div
to 50 μsec/div, there is a change, especially in the display level.

Now consider a 20 kHz sine wave being displayed with the same
sweep speed, namely, 50 μsec/div. This will result in 1 cycle/div, which
implies we can see the fine structure of the waveform. If the sweep speed
were to be 500 μsec/div, then a 20 kHz tone will result in 10 cycles/div,
which implies that the fine structure will not be seen clearly. Hence, when we
observe BLWN, band limited to 6 MHz, on an oscilloscope with a sweep
speed of 50 μsec/div, the fine structure of the low frequency components could
interfere with the constant envelope display of the higher frequency
components, thereby causing some reduction in the envelope level.
3.
Picture 3 is the display from the spectrum analyzer when it is fed with a
BLWN signal, band limited to 50 MHz. As can be seen from the display, the
PSD is essentially constant up to 50 MHz and then starts falling, going 40
dB down by about 150 MHz.
4.
You again have the choice of the same four sweep rates. This time
you will observe that when you switch from 500 μsec/div to 50 μsec/div, the
change is much less. This is because this signal has a much wider spectrum,
and the power in the frequency range |f| ≤ 100 kHz is only 0.002 P, where P
is the total power of the BLWN signal.
From the above, it is clear that as the BLWN tends towards white noise,
variations in the time waveform keep reducing, resulting in a steady picture on
the oscilloscope no matter what sweep speed is used for the display.
Example 3.15
White Gaussian noise with the PSD N0/2 is applied as input to an ideal
LPF of bandwidth B. Find the ACF of the output process. Let Y be the random
variable obtained from sampling the output process at t = 1 sec. Let us find
fY(y).

Let N(t) denote the output noise process when the WGN process W(t)
is the input. Then

    S_N(f) = N0/2 ,  −B ≤ f ≤ B
           = 0 ,     otherwise
Taking the inverse Fourier transform of S_N(f), we obtain

    R_N(τ) = N0 B sinc(2 B τ)

As the input process is zero mean Gaussian, so is the output; hence E[Y] = 0
and σ_Y² = R_N(0) = N0 B, so that Y is a zero mean Gaussian variable with
variance N0 B. Also, R_N(τ) = 0 for τ = n/(2B), where n = 1, 2, ..... .
Hence any two random variables obtained by sampling the output process at
times t1 and t2 such that t1 − t2 is a multiple of 1/(2B) are going to be statistically
independent.
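R_N(τ) can be confirmed by numerically inverting S_N(f). The sketch below is our own illustration (N0 = 2 and B = 1 are assumed values); it applies the trapezoidal rule to Eq. 3.14 and compares against the closed form.

```python
import math

def r_n_numeric(tau, N0=2.0, B=1.0, steps=4000):
    # Eq. 3.14 for the ILPF output: integrate (N0/2) cos(2 pi f tau)
    # over -B <= f <= B (the imaginary part cancels by symmetry).
    df = 2.0 * B / steps
    total = 0.0
    for i in range(steps + 1):
        f = -B + i * df
        w = 0.5 if i in (0, steps) else 1.0
        total += w * (N0 / 2.0) * math.cos(2.0 * math.pi * f * tau)
    return total * df

def r_n_closed(tau, N0=2.0, B=1.0):
    # R_N(tau) = N0 B sinc(2 B tau), with sinc(x) = sin(pi x)/(pi x)
    x = 2.0 * B * tau
    return N0 * B if x == 0 else N0 * B * math.sin(math.pi * x) / (math.pi * x)

taus = [0.0, 0.2, 0.5, 1.0, 1.5]
errs = [abs(r_n_numeric(t) - r_n_closed(t)) for t in taus]
```

The numerical inverse transform matches N0 B sinc(2Bτ), including the zeros at multiples of 1/(2B).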
Example 3.16
White Gaussian noise with the PSD N0/2 is applied to an RC-LPF, for which
H(f) = 1/(1 + j 2π f R C). Then

    S_N(f) = (N0/2) · 1/[1 + (2π f R C)²]

and

    R_N(τ) = [N0/(4 R C)] exp(−|τ|/RC)

Let X and Y be two random variables obtained by sampling the output process
N(t) at two time instants 0.1 sec apart. As N(t) is zero mean, the correlation
coefficient of X and Y is

    ρ_{XY} = cov[X, Y]/(σ_X σ_Y) = E[X Y]/E[X²]

where E[X Y] = R_N(τ)|_{τ = 0.1 sec} and E[X²] = R_N(τ)|_{τ = 0} = N0/(4 R C). Hence

    ρ_{XY} = e^{−(0.1/RC)}
Exercise 3.11
X(t) is a zero mean Gaussian process with R_X(τ) = 1/(1 + τ²). Let
Y(t) = X̂(t), where X̂(t) is the Hilbert transform of X(t).
Exercise 3.12
A filter has an impulse response characterized by a positive constant α. If the
input to the filter is white Gaussian noise with the PSD N0/2 watts/Hz, find
a)
b)
Exercise 3.13

Exercise 3.14
The process N(t) is sampled at t1 and t2 such that
In Example 3.15, we found that the average output noise power is equal to
N0 B, where B is the bandwidth of the ILPF. Similarly, in the case of Example
3.16, we find the average output noise power to be N0/(4 R C), where
1/(2π R C) is the 3-dB bandwidth of the RC-LPF. That is, in both cases, we find
that the average output noise power is proportional to some measure of the
bandwidth. We may generalize this statement to include all types of low pass
filters by defining the noise equivalent bandwidth as follows.

Consider a source of white noise of PSD N0/2 connected to the input of an
arbitrary low pass filter of transfer function H(f). The resulting average output
noise power is

    N = (N0/2) ∫ |H(f)|² d f = N0 ∫_0^∞ |H(f)|² d f

Consider next the same source of white noise connected to the input of an
ILPF of zero-frequency response H(0) and bandwidth B. In this case, the
average output noise power is

    N' = N0 B H²(0).
Def. 3.10: The noise equivalent bandwidth of a filter, B_N, is the bandwidth of an
ideal low pass filter that produces the same average output noise
power as the filter in question and has the same maximum gain as the filter
under consideration.

Equating N = N', we obtain

    B_N = [∫_0^∞ |H(f)|² d f] / H²(0)            (3.32)
Example 3.17
Let us compute the noise equivalent bandwidth of the RC-LPF. As H(0) = 1,

    B_N = ∫_0^∞ d f / [1 + (2π f R C)²] = 1/(4 R C)
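The integral is easy to confirm numerically. The sketch below is our own illustration (RC = 1 ms is an assumed value); it truncates the upper limit at a frequency where the integrand has become negligible.

```python
import math

def bn_rc(RC, steps=200000):
    # Eq. 3.32 for the RC-LPF: integrate |H(f)|^2 = 1/(1 + (2 pi f RC)^2)
    # from 0 to fmax by the trapezoidal rule. The integrand decays like
    # 1/f^2, so a large finite upper limit gives a good approximation.
    fmax = 1000.0 / (2.0 * math.pi * RC)
    df = fmax / steps
    total = 0.0
    for i in range(steps + 1):
        w = 0.5 if i in (0, steps) else 1.0
        total += w / (1.0 + (2.0 * math.pi * i * df * RC) ** 2)
    return total * df

RC = 1.0e-3
bn = bn_rc(RC)                 # numerical noise equivalent bandwidth
bn_theory = 1.0 / (4.0 * RC)   # 250 Hz for RC = 1 ms
```

The numerical value agrees with 1/(4RC) to within the truncation error of the integral.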
Consider white noise of PSD N0/2 applied to a narrowband band-pass filter. If
H(f) denotes the filter transfer function, the output noise PSD is

    S_N(f) = (N0/2) |H(f)|²            (3.33)
The magnitude response of the filter is obtained by sweeping the filter input from
87 kHz to 112 kHz.
The plot in Fig. 3.20(a) gives us the impression that it is a 101 kHz sinusoid with a
slowly changing envelope. This is only partially correct. The waveforms at (b) and (c)
are expanded versions of a part of the waveform at (a). From (b) and (c), it is
clear that the zero crossings are not uniform. In Fig. 3.20(c), the cycle of the
waveform shown in green fits almost exactly into the space between two adjacent
vertical lines of the graticule, whereas for the cycle shown in red there is a clear
offset. Hence the proper time-domain description of a NBBP signal would be: it is
a sinusoid undergoing slow amplitude and phase variations. (This will be justified
later on by expressing the NBBP noise signal in terms of its envelope and
phase.)
We shall now develop two representations for the NBBP noise signal: (a) the canonical (also called in-phase and quadrature component) representation, and (b) the envelope and phase representation.
Let $\hat{n}(t)$ denote the Hilbert transform of $n(t)$. The pre-envelope and the complex envelope of $n(t)$ are
$$n_{pe}(t) = n(t) + j\,\hat{n}(t) \tag{3.34}$$
$$n_{ce}(t) = n_{pe}(t)\exp\left(-j\,2\pi f_c t\right) \tag{3.35}$$
Writing
$$n_{ce}(t) = n_c(t) + j\,n_s(t) \tag{3.36}$$
we have
$$n_c(t) = n(t)\cos\left(2\pi f_c t\right) + \hat{n}(t)\sin\left(2\pi f_c t\right) \tag{3.37}$$
$$n_s(t) = \hat{n}(t)\cos\left(2\pi f_c t\right) - n(t)\sin\left(2\pi f_c t\right) \tag{3.38}$$
As
$$n(t) = \operatorname{Re}\left[n_{ce}(t)\, e^{\,j 2\pi f_c t}\right] = \operatorname{Re}\left[\left(n_c(t) + j\,n_s(t)\right) e^{\,j 2\pi f_c t}\right]$$
we have
$$n(t) = n_c(t)\cos\left(2\pi f_c t\right) - n_s(t)\sin\left(2\pi f_c t\right) \tag{3.39}$$
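Eqs. 3.34 to 3.39 can be checked numerically. The sketch below is an illustration, not part of the text: it assumes NumPy, a made-up sampling rate and centre frequency, synthesizes a narrowband waveform by brick-wall filtering white noise in the frequency domain, and obtains the pre-envelope with an FFT-based Hilbert transform.

```python
import numpy as np

def pre_envelope(x):
    """Return n_pe(t) = n(t) + j*n_hat(t) via an FFT-based Hilbert transform."""
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1.0
    if N % 2 == 0:
        h[N // 2] = 1.0
        h[1:N // 2] = 2.0              # double the positive frequencies
    else:
        h[1:(N + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

fs, fc, N = 1.0e6, 1.0e5, 4096         # assumed sampling rate and centre frequency
t = np.arange(N) / fs
rng = np.random.default_rng(0)

# Narrowband noise: keep only a 10 kHz band around fc in the spectrum of white noise
W = np.fft.rfft(rng.standard_normal(N))
f = np.fft.rfftfreq(N, 1.0 / fs)
W[np.abs(f - fc) > 5.0e3] = 0.0
n = np.fft.irfft(W, N)

npe = pre_envelope(n)                  # Eq. 3.34
c, s = np.cos(2 * np.pi * fc * t), np.sin(2 * np.pi * fc * t)
nc = n * c + npe.imag * s              # Eq. 3.37
ns = npe.imag * c - n * s              # Eq. 3.38
# Eq. 3.39: n(t) is recovered exactly from nc(t) and ns(t)
print(np.max(np.abs(n - (nc * c - ns * s))))
```

Note also that $\left[n_c^2(t) + n_s^2(t)\right]^{1/2} = \left|n_{ce}(t)\right| = \left|n_{pe}(t)\right|$, since $\left|e^{-j2\pi f_c t}\right| = 1$.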
In terms of the corresponding random processes,
$$N(t) = N_c(t)\cos\left(2\pi f_c t\right) - N_s(t)\sin\left(2\pi f_c t\right) \tag{3.40a}$$

Exercise 3.15

Show that
$$\hat{N}(t) = N_c(t)\sin\left(2\pi f_c t\right) + N_s(t)\cos\left(2\pi f_c t\right) \tag{3.40b}$$
Let us write $n(t)$ as
$$n(t) = r(t)\cos\left[2\pi f_c t + \psi(t)\right] \tag{3.41a}$$
where
$$r(t)\cos\psi(t) = n_c(t) \quad \text{and} \quad r(t)\sin\psi(t) = n_s(t) \tag{3.41b}$$
or
$$r(t) = \left[n_c^2(t) + n_s^2(t)\right]^{\frac{1}{2}} \tag{3.42a}$$
and
$$\psi(t) = \arctan\left[\frac{n_s(t)}{n_c(t)}\right] \tag{3.42b}$$
In terms of the corresponding random processes,
$$N(t) = R(t)\cos\left(2\pi f_c t + \Psi(t)\right) \tag{3.43}$$
where $R(t)$ is the envelope process and $\Psi(t)$ is the phase process. Eq. 3.43 justifies the statement that the NBN waveform exhibits both amplitude and phase variations.
where $S_N(f)$ is confined to $f_c - B \le |f| \le f_c + B$, with $f_c > B$.
Note: Given the spectrum of an arbitrary band-pass signal, we are free to choose any frequency $f_c$ as the (nominal) centre frequency. The spectral shapes $S_{N_c}(f)$ and $S_{N_s}(f)$ will depend on the $f_c$ chosen. As such, the canonical representation of a narrowband signal is not unique. For example, for the narrowband spectrum shown in Fig. 3.22, if $f_1$ is chosen as the representative carrier frequency, then the noise spectrum is $2B_1$ wide, whereas if $f_2$ (which is actually the mid-frequency of the given band) is chosen as the representative carrier frequency, then the width of the spectrum is $2B_2$. Note that for the $S_N(f)$ of Fig. 3.22, it is not possible for us to choose an $f_c$ such that $S_N(f)$ exhibits local symmetry with respect to it.
Fig. 3.22: Narrowband noise spectrum with two different centre frequencies
Example 3.18
a)
b) By repeating the above procedure, we obtain $S_{N_c}(f)$ (the solid line) shown in Fig. 3.26.
c)
$$E\left[N_c^2(t)\right] = E\left[N^2(t)\right] = \int_{-\infty}^{\infty} S_N(f)\, df = 1.0 \text{ Watt}$$
Exercise 3.16
(3.44)
Exercise 3.17

Let $N(t)$ be a NBN process with
$$S_N(f) = \begin{cases} \dfrac{N_0}{2}, & f_c - B \le |f| \le f_c + B \\ 0, & \text{otherwise} \end{cases}$$
a)
b)
Exercise 3.18
Let $N(t)$ represent a NBN process with the PSD shown in Fig. 3.27 below.
$$\frac{2}{3}\operatorname{sinc}\left(10^{3}\,\tau\right)\sin\left(3\pi \times 10^{3}\,\tau\right)$$
Consider the ACF of the complex envelope $N_{ce}(t) = N_c(t) + j\,N_s(t)$:
$$R_{N_{ce}}(\tau) = E\left[\left(N_c(t+\tau) + j\,N_s(t+\tau)\right)\left(N_c(t) - j\,N_s(t)\right)\right]$$
$$= R_{N_c}(\tau) + R_{N_s}(\tau) + j\,R_{N_s N_c}(\tau) - j\,R_{N_c N_s}(\tau)$$
But $R_{N_c}(\tau) = R_{N_s}(\tau)$ (Eq. A3.1.7), and
$$R_{N_s N_c}(\tau) = R_{N_c N_s}(-\tau) \quad \text{(property of cross-correlation)} = -\,R_{N_c N_s}(\tau)$$
The last equality follows from Eq. A3.1.8. Hence,
$$R_N(\tau) = \frac{1}{2}\operatorname{Re}\left[R_{N_{ce}}(\tau)\, e^{\,j 2\pi f_c \tau}\right] = \frac{1}{4}\left[R_{N_{ce}}(\tau)\, e^{\,j 2\pi f_c \tau} + R_{N_{ce}}^{*}(\tau)\, e^{-j 2\pi f_c \tau}\right] \tag{3.45}$$
Taking the Fourier transform,
$$S_N(f) = \frac{1}{4}\left[S_{N_{ce}}(f - f_c) + S_{N_{ce}}(-f - f_c)\right] \tag{3.46}$$
Note: For complex signals, the ACF is conjugate symmetric; that is, if $X(t)$ represents a complex random process that is WSS, then $R_X(-\tau) = R_X^{*}(\tau)$. The PSD $S_X(f) = \mathcal{F}\left[R_X(\tau)\right]$ is real and nonnegative, but not necessarily an even function of frequency.
$$N_c(t) = R(t)\cos\left(\Psi(t)\right) \tag{3.47a}$$
and
$$N_s(t) = R(t)\sin\left(\Psi(t)\right) \tag{3.47b}$$
That is,
$$R(t) = \left[N_c^2(t) + N_s^2(t)\right]^{\frac{1}{2}} \tag{3.48a}$$
$$\Psi(t) = \tan^{-1}\left[\frac{N_s(t)}{N_c(t)}\right] \tag{3.48b}$$
Our interest is the PDF of the random variable $R(t_1)$, for any arbitrary sampling instant $t = t_1$.

Let $N_c$ and $N_s$ represent the random variables obtained by sampling $N_c(t)$ and $N_s(t)$ at any time $t = t_1$. Assuming $N(t)$ to be a zero-mean Gaussian process, $N_c$ and $N_s$ are independent, zero-mean Gaussian variables with common variance $\sigma^2$, where $\sigma^2 = \int_{-\infty}^{\infty} S_N(f)\, df$. Hence,
$$f_{N_c N_s}(n_c, n_s) = \frac{1}{2\pi\sigma^2}\exp\left[-\frac{n_c^2 + n_s^2}{2\sigma^2}\right]$$
Transforming to polar coordinates with $n_c = r\cos\psi$ and $n_s = r\sin\psi$ (the Jacobian of the transformation is $r$), we have
Therefore,
$$f_{R,\Psi}(r, \psi) = \begin{cases} \dfrac{r}{2\pi\sigma^2}\exp\left(-\dfrac{r^2}{2\sigma^2}\right), & r \ge 0,\ 0 \le \psi < 2\pi \\ 0, & \text{otherwise} \end{cases}$$
$$f_R(r) = \int_{0}^{2\pi} f_{R,\Psi}(r, \psi)\, d\psi = \begin{cases} \dfrac{r}{\sigma^2}\exp\left(-\dfrac{r^2}{2\sigma^2}\right), & r \ge 0 \\ 0, & \text{otherwise} \end{cases} \tag{3.49}$$
Similarly,
$$f_{\Psi}(\psi) = \begin{cases} \dfrac{1}{2\pi}, & 0 \le \psi < 2\pi \\ 0, & \text{otherwise} \end{cases} \tag{3.50}$$
Let $V = \dfrac{R}{\sigma}$ (transformation by a multiplicative constant). Then,
$$f_V(v) = \begin{cases} v\,\exp\left(-\dfrac{v^2}{2}\right), & v \ge 0 \\ 0, & \text{otherwise} \end{cases} \tag{3.51}$$
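A quick Monte-Carlo check of Eq. 3.51: draw independent zero-mean, unit-variance Gaussian pairs, form the normalized envelope, and compare its sample statistics with the Rayleigh values (mean $\sqrt{\pi/2}$, and $P(V \le 1) = 1 - e^{-1/2}$). NumPy, the seed, and the sample size are assumptions of this sketch, not part of the text.

```python
import numpy as np

rng = np.random.default_rng(42)
M = 1_000_000
nc = rng.standard_normal(M)     # samples of Nc / sigma
ns = rng.standard_normal(M)     # samples of Ns / sigma
v = np.hypot(nc, ns)            # normalized envelope V = R / sigma

mean_v = v.mean()               # Rayleigh mean: sqrt(pi/2) ~ 1.2533
p_le_1 = np.mean(v <= 1.0)      # Rayleigh CDF at 1: 1 - exp(-1/2) ~ 0.3935
print(mean_v, p_le_1)
```

With $10^6$ samples, both statistics agree with the Rayleigh law to about three decimal places.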
The joint PDF of $N_s$ and $N_c'$ is
$$f_{N_c' N_s}\left(n_c', n_s\right) = \frac{1}{2\pi\sigma^2}\exp\left[-\frac{\left(n_c' - A\right)^2 + n_s^2}{2\sigma^2}\right] \tag{3.52}$$
Let
$$R(t) = \left[{N_c'}^2(t) + N_s^2(t)\right]^{\frac{1}{2}}$$
$$\Psi(t) = \tan^{-1}\left[\frac{N_s(t)}{N_c'(t)}\right]$$
Then,
$$f_{R,\Psi}(r, \psi) = \frac{r}{2\pi\sigma^2}\exp\left[-\frac{r^2 + A^2 - 2Ar\cos\psi}{2\sigma^2}\right], \quad r \ge 0,\ 0 \le \psi \le 2\pi$$
where $R = R(t_1)$ and $\Psi = \Psi(t_1)$. The quantity of interest is $f_R(r)$, where
$$f_R(r) = \int_{0}^{2\pi} f_{R,\Psi}(r, \psi)\, d\psi$$
That is,
$$f_R(r) = \frac{r}{2\pi\sigma^2}\exp\left[-\frac{r^2 + A^2}{2\sigma^2}\right]\int_{0}^{2\pi}\exp\left[\frac{Ar}{\sigma^2}\cos\psi\right] d\psi \tag{3.53}$$
The integral on the RHS of Eq. 3.53 is identified in terms of the defining integral for the modified Bessel function of the first kind and zero order,
$$I_0(y) = \frac{1}{2\pi}\int_{0}^{2\pi}\exp\left(y\cos\psi\right) d\psi \tag{3.54}$$
With $y = \dfrac{Ar}{\sigma^2}$, Eq. 3.53 can be written as
$$f_R(r) = \frac{r}{\sigma^2}\exp\left[-\frac{r^2 + A^2}{2\sigma^2}\right] I_0\left(\frac{Ar}{\sigma^2}\right) \tag{3.55}$$
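Eq. 3.55 can be cross-checked against a direct simulation of $R = \left[(A + N_c)^2 + N_s^2\right]^{1/2}$. The sketch below assumes NumPy (whose `np.i0` evaluates $I_0$) and made-up values for $A$, $\sigma$, the seed, and the sample size: it verifies that the density integrates to one and that its mean matches the Monte-Carlo mean.

```python
import numpy as np

A, sigma = 2.0, 1.0            # assumed signal amplitude and noise std-dev
rng = np.random.default_rng(7)
M = 400_000
# Envelope samples R = sqrt((A + Nc)^2 + Ns^2)
r_mc = np.hypot(A + sigma * rng.standard_normal(M), sigma * rng.standard_normal(M))

# Eq. 3.55 on a fine grid; np.i0 is the modified Bessel function I0
r = np.linspace(0.0, 12.0, 12001)
f_R = (r / sigma**2) * np.exp(-(r**2 + A**2) / (2.0 * sigma**2)) * np.i0(A * r / sigma**2)

dr = r[1] - r[0]               # trapezoidal integration by hand
area = (f_R.sum() - 0.5 * (f_R[0] + f_R[-1])) * dr
mean_pdf = ((r * f_R).sum() - 0.5 * (r[0] * f_R[0] + r[-1] * f_R[-1])) * dr
print(area, mean_pdf, r_mc.mean())
```

The grid is truncated at $r = 12$, where the density is already negligible for these parameter values.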
Let $V = \dfrac{R}{\sigma}$ and $\alpha = \dfrac{A}{\sigma}$. Then, the Rician density of Eq. 3.55 can be written in the normalized form
$$f_V(v) = \begin{cases} v\,\exp\left(-\dfrac{v^2 + \alpha^2}{2}\right) I_0\left(\alpha v\right), & v \ge 0 \\ 0, & \text{otherwise} \end{cases} \tag{3.56}$$
i)

For $\alpha = 0$, $I_0(0) = 1$ and Eq. 3.56 reduces to
$$f_V(v) = \begin{cases} v\,\exp\left(-\dfrac{v^2}{2}\right), & v \ge 0 \\ 0, & \text{otherwise} \end{cases}$$
which is the normalized Rayleigh PDF shown earlier in Fig. 3.27. This is justified because if $A = 0$, then $R(t)$ is the envelope of only the narrowband noise.
ii)

For $y \gg 1$,
$$I_0(y) \simeq \frac{e^y}{\sqrt{2\pi y}}$$
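The quality of this large-argument approximation is easy to check numerically; the sketch below uses NumPy's `np.i0` (an assumption of this illustration, not a routine used in the text).

```python
import numpy as np

def i0_asymptote(y):
    # Large-argument approximation: I0(y) ~ e^y / sqrt(2*pi*y)
    return np.exp(y) / np.sqrt(2.0 * np.pi * y)

err_20 = abs(i0_asymptote(20.0) / np.i0(20.0) - 1.0)
err_50 = abs(i0_asymptote(50.0) / np.i0(50.0) - 1.0)
print(err_20, err_50)   # relative error shrinks roughly like 1/(8y)
```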
Appendix A3.1
Properties of Narrowband Noise: Some Proofs.
We shall now give proofs of some of the properties of the NBN mentioned in Sec. 3.10.2.
P1) If $N(t)$ is zero-mean Gaussian, then so are $N_c(t)$ and $N_s(t)$.
$$N_c(t) = N(t)\cos\left(2\pi f_c t\right) + \hat{N}(t)\sin\left(2\pi f_c t\right) \tag{A3.1.1}$$
$$N_s(t) = \hat{N}(t)\cos\left(2\pi f_c t\right) - N(t)\sin\left(2\pi f_c t\right) \tag{A3.1.2}$$
As $N(t)$ is zero mean, so is $\hat{N}(t)$; taking the expectation of Eq. A3.1.1 gives $\overline{N_c(t)} = 0$, and similarly, from Eq. A3.1.2, we obtain $\overline{N_s(t)} = 0$.
Fig. A3.1.1: Scheme to obtain $\hat{N}(t)$ from $N(t)$
As $\hat{N}(t)$ is obtained from $N(t)$ through the LTI scheme of Fig. A3.1.1, we have
$$R_{\hat{N}}(\tau) = R_N(\tau) \tag{A3.1.4a}$$
$$R_{\hat{N} N}(\tau) = \hat{R}_N(\tau) \tag{A3.1.4b}$$
$$R_{N \hat{N}}(\tau) = R_{\hat{N} N}(-\tau) = \hat{R}_N(-\tau) = -\,\hat{R}_N(\tau) \tag{A3.1.5}$$
where $\hat{R}_N(\tau)$ denotes the Hilbert transform of $R_N(\tau)$; the last step follows because $R_N(\tau)$ is even and hence $\hat{R}_N(\tau)$ is odd.
$$R_{N_c}(t+\tau, t) = E\left[N_c(t+\tau)\, N_c(t)\right]$$
Expressing $N_c(t+\tau)$ and $N_c(t)$ in terms of the RHS quantities of Eq. A3.1.1, after some routine manipulations we have
$$\begin{aligned} R_{N_c}(t+\tau, t) = \; & \frac{1}{2}\left[R_N(\tau) + R_{\hat{N}}(\tau)\right]\cos\left(2\pi f_c \tau\right) \\ & + \frac{1}{2}\left[R_N(\tau) - R_{\hat{N}}(\tau)\right]\cos\left(2\pi f_c (2t+\tau)\right) \\ & + \frac{1}{2}\left[R_{\hat{N} N}(\tau) - R_{N \hat{N}}(\tau)\right]\sin\left(2\pi f_c \tau\right) \\ & + \frac{1}{2}\left[R_{\hat{N} N}(\tau) + R_{N \hat{N}}(\tau)\right]\sin\left(2\pi f_c (2t+\tau)\right) \end{aligned} \tag{A3.1.6}$$
Exercise A3.1.1

Show that
$$R_{N_c}(\tau) = R_{N_s}(\tau) = R_N(\tau)\cos\left(2\pi f_c \tau\right) + \hat{R}_N(\tau)\sin\left(2\pi f_c \tau\right) \tag{A3.1.7}$$
and
$$R_{N_c N_s}(\tau) = R_N(\tau)\sin\left(2\pi f_c \tau\right) - \hat{R}_N(\tau)\cos\left(2\pi f_c \tau\right) \tag{A3.1.8}$$
P4) Both $N_c(t)$ and $N_s(t)$ have the same PSD, which is related to the $S_N(f)$ of the original process, where $S_N(f)$ is confined to $f_c - B \le |f| \le f_c + B$ and $f_c > B$.

Taking the Fourier transform of Eq. A3.1.7,
$$S_{N_c}(f) = \frac{1}{2}\left[S_N(f - f_c) + S_N(f + f_c)\right] - \frac{1}{2}\left[S_N(f - f_c)\operatorname{sgn}(f - f_c) - S_N(f + f_c)\operatorname{sgn}(f + f_c)\right]$$
$$= \frac{1}{2}\, S_N(f - f_c)\left[1 - \operatorname{sgn}(f - f_c)\right] + \frac{1}{2}\, S_N(f + f_c)\left[1 + \operatorname{sgn}(f + f_c)\right] \tag{A3.1.9}$$
But
$$1 - \operatorname{sgn}(f - f_c) = \begin{cases} 2, & f < f_c \\ 0, & \text{outside} \end{cases} \tag{A3.1.10a}$$
$$1 + \operatorname{sgn}(f + f_c) = \begin{cases} 2, & f > -f_c \\ 0, & \text{outside} \end{cases} \tag{A3.1.10b}$$
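Eqs. A3.1.9 and A3.1.10 imply that, for a band-pass $S_N(f)$ confined to $f_c - B \le |f| \le f_c + B$, the in-phase PSD reduces to the folded form $S_N(f - f_c) + S_N(f + f_c)$ for $|f| \le B$ and zero elsewhere. A small numerical sketch of that folding (NumPy, with made-up values of $N_0$, $f_c$, and $B$; not part of the text):

```python
import numpy as np

N0, fc, B = 2.0, 100.0, 10.0
SN = lambda f: np.where(np.abs(np.abs(f) - fc) <= B, N0 / 2.0, 0.0)  # band-pass PSD

f = np.arange(-150.0, 151.0)   # integer grid keeps the band edges exact
# Right-hand side of Eq. A3.1.9
SNc = (0.5 * SN(f - fc) * (1.0 - np.sign(f - fc))
       + 0.5 * SN(f + fc) * (1.0 + np.sign(f + fc)))
# Folded low-pass form: S_N(f - fc) + S_N(f + fc) restricted to |f| <= B
folded = np.where(np.abs(f) <= B, SN(f - fc) + SN(f + fc), 0.0)
print(np.max(np.abs(SNc - folded)))   # the two forms agree everywhere on the grid
```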