Venkata Rao
CHAPTER 3
3.1 Introduction
The concept of 'random variable' is adequate to deal with unpredictable
voltages; that is, it enables us to come up with the probabilistic description of the
numerical value of a random quantity, which we treat for convenience to be a
voltage quantity. In the real world, the voltages vary not only in amplitude but
also exhibit variations with respect to the time parameter. In other words, we
have to develop mathematical tools for the probabilistic characterization of
random signals. The resulting theory, which extends the mathematical model of
probability so as to incorporate the time parameter, is generally called the theory
of Random or Stochastic Processes.
Let the variable X denote the temperature of a certain city, say, at 9 A.M.
In general, the value of X would be different on different days. In fact, the
temperature readings at 9 A.M. on two different days could be significantly
different, depending on the geographical location of the city and the time
separation between the days of observation (In a place like Delhi, on a cold
winter morning, temperature could be as low as 40° F whereas at the height of
But the temperature is also a function of time. At 12 P.M., for example, the
temperature may have an entirely different distribution. Thus the random variable
X is a function of time and it would be more appropriate to denote it by X ( t ) . At
least, theoretically, the PDF of X ( t1 ) could be very much different from that of X ( t2 ) .
illustrated in Fig. 3.1, which shows a sample space S with four points and four
waveforms, labeled x j ( t ) , j = 1, 2, 3, 4 .
Now, let us think of observing this set of waveforms at some time instant
t = t1 as shown in the figure.
sample functions and the probability measure is such that it assigns a probability
to any meaningful event associated with these sample functions.
Example 3.1
Consider the experiment of tossing a fair coin. The random process X ( t ) takes
the value sin ( π t ) if the toss results in a head and 2 t if the toss results in a tail.
Sketch the sample functions. We wish to find the expression
for the PDF of the random variables obtained from sampling the process at (a)
t = 0 and (b) t = 1 .
There are only two sample functions for the process. Let us denote them
by x1 ( t ) and x 2 ( t ) where x1 ( t ) = sin ( π t ) and x 2 ( t ) = 2 t which are shown in
Fig. 3.2.
As heads and tails are equally likely, we have P [ x1 ( t ) ] = P [ x2 ( t ) ] = 1/2 .
Let X0 denote the random variable X ( t ) |t = 0 and X1 correspond to X ( t ) |t = 1 .
Then, we have f_X0 ( x ) = δ ( x ) and f_X1 ( x ) = (1/2) [ δ ( x ) + δ ( x − 2 ) ] .
Note that this is one of the simplest examples of a random process that can be
used to illustrate the concept.
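The sampling instants of Example 3.1 are easy to simulate. The sketch below (plain Python; the mapping of heads to x1 and tails to x2 is an assumption) draws many realizations of X ( 0 ) and X ( 1 ) and confirms that X ( 0 ) is degenerate at 0 while X ( 1 ) takes the values 0 and 2 with equal probability:

```python
import math, random

random.seed(1)

# The two equally likely sample functions of Example 3.1
def x1(t): return math.sin(math.pi * t)   # toss results in a head (assumed mapping)
def x2(t): return 2.0 * t                 # toss results in a tail

def sample_X(t, n_trials=10_000):
    """Draw realizations of the random variable X(t) by tossing a fair coin."""
    return [x1(t) if random.random() < 0.5 else x2(t) for _ in range(n_trials)]

X0 = sample_X(0.0)
X1 = sample_X(1.0)

# At t = 0 both sample functions equal 0, so f_X0(x) = delta(x)
print(set(round(v, 9) for v in X0))            # {0.0}
# At t = 1, X is 0 (sin(pi)) or 2 (2*1), each with probability 1/2
print(sorted(set(round(v, 9) for v in X1)))    # [0.0, 2.0]
```

The empirical relative frequency of the value 2 at t = 1 settles near 1/2, matching the weight of each impulse in f_X1 ( x ) .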
Example 3.2
Consider the experiment of throwing a fair die. The sample space consists
of six sample points, s1 , ........, s6 corresponding to the six faces of the die. Let
the sample functions be given by x_i ( t ) = t/2 + ( i − 1 ) for s = s_i , i = 1, ......., 6 .
Let us find the mean value of the random variable X = X ( t ) | t = 1 .
A few of the sample functions of this random process are shown below
(Fig 3.3).
The PDF of X is
    f_X ( x ) = (1/6) Σ_{i=1}^{6} δ [ x − ( 1/2 + ( i − 1 ) ) ]

    E [ X ] = (1/12) ( 1 + 3 + 5 + 7 + 9 + 11 ) = 3.0
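The mean value just computed can be checked by direct enumeration over the six equally likely sample functions; a sketch using exact rational arithmetic:

```python
from fractions import Fraction

# Sample functions of Example 3.2: x_i(t) = t/2 + (i - 1), each with P = 1/6
def x(i, t):
    return Fraction(t, 2) + (i - 1)

# E[X] at t = 1: average of the six sample-function values 1/2, 3/2, ..., 11/2
EX = sum(Fraction(1, 6) * x(i, 1) for i in range(1, 7))
print(EX)          # 3
```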
The examples cited above have two features in common, namely (i) the
number of sample functions is finite (in fact, we could even say, quite small)
and (ii) the sample functions can be mathematically described. In quite a few
situations involving a random process, the above features may not be present.
Assuming that the probability of any resistor being picked up is 1/N , we find
that this probability becomes smaller and smaller as N becomes larger and
larger. Also, it would be an extremely difficult task to write a mathematical
expression to describe the time variation of any given voltage waveform.
However, as we shall see later on in this chapter, a statistical characterization of
such noise processes is still possible, and it is adequate for our purposes.
Fig. 3.4: The ensemble for the experiment 'picking a resistor at random'
One fine point deserves a special mention; that is, the waveforms (sample
functions) in the ensemble are not random. They are deterministic. Randomness
in this situation is associated not with the waveforms but with the uncertainty as
to which waveform will occur on a given trial.
3.3 Stationarity
By definition, a random process X ( t ) implies the existence of an infinite
number of random variables, one for every time instant in the range,
− ∞ < t < ∞ . Let the process X (t ) be observed at n time instants,
vector notation is quite often convenient and we denote the joint distribution by
F_X(t) ( x ) , where X ( t ) = ( X ( t1 ) , ..........., X ( tn ) ) is the n-component random
vector. The corresponding joint PDF is given by

    f_X(t) ( x ) = ∂^n F_X(t) ( x ) / ( ∂ x1 ∂ x2 ........ ∂ xn )
instants ( t1 , t 2 , ..........., t n ) .
variable with a known PDF. The third method of specifying a random process is
linear filtering on some known process X ( t ) . We shall see later on in this lesson,
That is, the event A consists of all those sample points {s j } such that the
P ( A ) can be calculated as

    P ( A ) = ∫_{a1}^{b1} ∫_{a2}^{b2} ∫_{a3}^{b3} f_X(t) ( x ) d x

where X ( t ) = ( X ( t1 ) , X ( t2 ) , X ( t3 ) ) and d x = d x1 d x2 d x3 .
Fig. 3.5: Set of windows and a waveform that passes through them
The above step can easily be generalized to the case of a random vector
with n-components.
f X (t + T ) ( x ) = f X (t ) ( x ) (3.1)
where ( t + T ) = ( t1 + T , t 2 + T , .........., t n + T ) .
For X ( t ) to be SSS, Eq. 3.1 should be valid for every finite set of time
instants {t_j} , j = 1, 2, .........., n , for every time shift T and for every dummy vector x .
Fig. 3.6: (a) Original set of Windows (b) The translated set
Example 3.3
We cite below an example of a nonstationary process (It is easier to do
this than give a nontrivial example of a stationary process). Let
X ( t ) = sin ( 2 π F t )
function is a sine wave of unit amplitude and a particular frequency f . Over the
ensemble, the random variable F takes all possible values in the range
(100, 200 Hz ) . Three members of this ensemble, (with f = 100, 150 and 200
identically zero for negative arguments whereas the density function of the RV
X ( t2 ) for t2 = − 1 m sec is non-zero only for negative arguments. (Of course, for
a stationary process the PDF would be independent of the observation instant,
which is evidently not the case for this example.) Hence X ( t ) is nonstationary.
3.4 Ensemble Averages
We had mentioned earlier that a random process is completely specified,
if fX (t ) ( x ) is known. Seldom is it possible to have this information and we may
    m_X ( t_j ) = E [ X_j ] = ∫_{−∞}^{∞} x f_{X_j} ( x ) d x     (3.2)
R X ( t k , ti ) = E ⎡⎣ X ( t k ) X ( t i ) ⎤⎦ (3.3)
Denoting the joint PDF of the random variables X ( t k ) and X ( t i ) by f Xk , Xi ( x , y )
C X ( t k , t i ) = E ⎡⎣( X ( t k ) − m X ( t k ) ) ( X ( t i ) − m X ( t i ) ) ⎤⎦ (3.4a)
mX being a constant. In such a case, we can simply mention the mean value of
the process. Also, for a stationary process, we find that the autocorrelation and
R X ( t k , ti ) = E ⎡⎣ X ( t k ) X ( t i ) ⎤⎦
= E ⎡⎣ X ( tk + T ) X ( ti + T ) ⎤⎦ , as X ( t ) is stationary.
In particular if T = − t i , then
R X ( tk , ti ) = E ⎡⎣ X ( t k − ti ) X ( 0 ) ⎤⎦ (3.5b)
R X ( tk , ti ) = R X ( tk − ti , 0 )
In view of Eq. 3.4(b), it is not difficult to see that for a stationary process
C X (tk , ti ) = C X (tk − ti )
It is important to realize that for a process that is SSS, Eq. 3.5(a) and Eq.
3.5(b) hold. However, we should not infer that any process for which Eq. 3.5 is
valid, is a stationary process. In any case, the processes satisfying Eq. 3.5 are
sufficiently useful and are termed Wide Sense Stationary (WSS) processes.
mX (t ) = mX (3.6a)
and R X ( tk , ti ) = R X ( tk − ti ) (3.6b)
Wide sense stationarity represents a weak kind of stationarity in that all
processes that are SSS are also WSS; but the converse is not necessarily true.
When we simply use the word stationary, we imply stationarity in the strict sense.
(For definitions of other forms of stationarity, such as Nth order stationarity, see [1, p. 302].)
R X ( τ ) = E ⎡⎣ X ( t + τ ) X ( t ) ⎤⎦ (3.7)
Note that τ is the difference between the two time instants at which the process
is being sampled.
R X ⎡⎣( t + τ ) − t ⎤⎦ = R X ⎡⎣t − ( t + τ ) ⎤⎦
which implies R_X ( 0 ) ≥ | R_X ( τ ) | .
P4) If the sample functions of the process X ( t ) are periodic with period T0 , then
repeats with period T0 , the product repeats and so does the expectation of
this product.
If X ( t ) is a zero mean process, then for any τ = τ1 , R_X ( τ1 ) / R_X ( 0 ) is the
correlation coefficient of the random variables X ( t ) and X ( t + τ1 ) .
It is therefore apparent that the more rapidly X ( t ) changes with time, the
faster R_X ( τ ) falls from its maximum value R_X ( 0 ) below some
prescribed value, say R_X ( 0 ) / 100 . We shall now take up a few examples to
compute some of the ensemble averages of interest to us.
Example 3.4
For the random process of Example 3.2, let us compute R X ( 2, 4 ) .
    R_X ( 2, 4 ) = E [ X ( 2 ) X ( 4 ) ]

                 = Σ_{j=1}^{6} P [ x_j ] x_j ( 2 ) x_j ( 4 )

As P [ x_j ] = 1/6 for all j , we have

    R_X ( 2, 4 ) = (1/6) Σ_{j=1}^{6} x_j ( 2 ) x_j ( 4 )
For j = 1 , the two samples that contribute to the ACF have the values 1
and 2; for j = 2, ..., 6 , the corresponding sample pairs are ( 2, 3 ) , ( 3, 4 ) , ( 4, 5 ) , ( 5, 6 ) , ( 6, 7 ) .
Hence R_X ( 2, 4 ) = (1/6) [ 2 + 6 + 12 + 20 + 30 + 42 ]

                   = 112 / 6 ≈ 18.67
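The same enumeration can be done programmatically; a sketch with exact rational arithmetic, using the closed form x_j ( t ) = t/2 + ( j − 1 ) from Example 3.2:

```python
from fractions import Fraction

# x_j(t) = t/2 + (j - 1); at t = 2 and t = 4 the samples are j and j + 1
def x(j, t):
    return Fraction(t, 2) + (j - 1)

R24 = sum(Fraction(1, 6) * x(j, 2) * x(j, 4) for j in range(1, 7))
print(R24)          # 56/3 (about 18.67)
```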
Exercise 3.1
Let a random process X ( t ) consist of 6 equally likely sample
b) f X ,Y ( x , y )
c) R X (1, 2 )
Exercise 3.2
A random process X ( t ) consists of 5 sample functions, each
occurring with probability 1/5 . Four of these sample functions are given
below.
x1 ( t ) = cos ( 2 π t ) − sin ( 2 π t )
x2 ( t ) = sin ( 2 π t ) − cos ( 2 π t )
x3 ( t ) = − 2 cos t
x4 ( t ) = − 2 sin t
a) Find the fifth sample function x5 ( t ) of the process X ( t ) such that the
process X ( t ) is
i) zero mean
ii) R X ( t1 , t 2 ) = R X ( t1 − t 2 )
b) Let V be the random variable X ( t ) |t = 0 and W be the random
variable X ( t ) |t = π/4 . Show that, though the process is WSS,
f_V ( v ) ≠ f_W ( v ) .
Example 3.5
Let X ( t ) = A cos ( ωc t + Θ ) ,
where A and ωc are constants and Θ is a random variable with the PDF,
    f_Θ ( θ ) = 1 / ( 2 π ) for 0 ≤ θ < 2 π , and zero otherwise.
Let us compute a) m X ( t ) and b) R X ( t1 , t 2 ) .
frequency fc but with a random initial phase. Of course, a given sample function
Three sample functions of the process are shown in Fig. 3.10 for
fc = 106 Hz and A = 1.
a) m_X ( t ) = E [ A cos ( ωc t + Θ ) ] = 0

Note that Θ is uniform in the range 0 to 2 π , so the average of the cosine over a
full period of the phase vanishes.
b) R_X ( t1 , t2 ) = E [ A cos ( ωc t1 + Θ ) A cos ( ωc t2 + Θ ) ]

                  = (A²/2) E { cos [ ωc ( t1 + t2 ) + 2 Θ ] + cos [ ωc ( t1 − t2 ) ] }
As the first term on the RHS evaluates to zero, we have
    R_X ( t1 , t2 ) = (A²/2) cos [ ωc ( t1 − t2 ) ]

As the ACF is only a function of the time difference, we can write

    R_X ( t1 , t2 ) = R_X ( t1 − t2 ) = R_X ( τ ) = (A²/2) cos ( ωc τ )
Note that X ( t ) is composed of sample functions that are periodic with
period 1/fc . In accordance with property P4, we find that the ACF is also
periodic with period 1/fc . Of course, it is also an even function of τ (property P3).
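The ensemble average behind this ACF can be approximated by Monte Carlo: draw many values of Θ, form X ( t + τ ) X ( t ) , and average. A sketch (the amplitude, frequency, observation instant t and sample size are arbitrary choices); note that the estimate depends only on τ, not on t:

```python
import math, random

random.seed(0)
A, fc = 1.0, 1.0                      # amplitude and frequency (assumed values)
wc = 2.0 * math.pi * fc

def acf_estimate(tau, t=0.37, n=100_000):
    """Ensemble average of X(t + tau) X(t) over Theta ~ U[0, 2 pi)."""
    s = 0.0
    for _ in range(n):
        theta = random.uniform(0.0, 2.0 * math.pi)
        s += A * math.cos(wc * (t + tau) + theta) * A * math.cos(wc * t + theta)
    return s / n

# Compare with the exact result R_X(tau) = (A^2 / 2) cos(wc tau)
for tau in (0.0, 0.1, 0.25):
    print(f"tau={tau}: estimate {acf_estimate(tau):.4f}, "
          f"exact {(A**2 / 2) * math.cos(wc * tau):.4f}")
```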
Exercise 3.3
For the process X ( t ) of Example 3.5, let ωc = 2 π . Let Y be the
random variable obtained from sampling X ( t ) at t = 1/4 . Find f_Y ( y ) .
Exercise 3.4
Let X ( t ) = A cos ( ωc t + Θ ) where A and ωc are constants, and θ is
Exercise 3.5
Let Z ( t ) = X cos ( ωc t ) + Y sin ( ωc t ) where X and Y are independent
Gaussian variables, each with zero mean and unit variance. Show that Z ( t ) is
Example 3.6
In this example, we shall find the ACF of the random impulse train process
specified by

    X ( t ) = Σ_n A_n δ ( t − n T − T_d )

where the amplitude A_n is a random variable with P ( A_n = 1 ) = P ( A_n = − 1 ) = 1/2 .
Successive amplitudes are statistically independent. The time interval T_d of the
first impulse from the origin has a uniform PDF in the range ( 0, T ) . That is,
f_{T_d} ( t_d ) = 1/T , 0 ≤ t_d ≤ T , and zero elsewhere. Impulses are spaced T seconds
    R_X ( t + τ , t ) = E { [ Σ_m A_m δ ( t − m T + τ − T_d ) ] [ Σ_n A_n δ ( t − n T − T_d ) ] }

                     = E [ Σ_m Σ_n A_m A_n δ ( t − m T + τ − T_d ) δ ( t − n T − T_d ) ]

As A_n is independent of T_d , we can write (after interchanging the order of
expectation and summation),

    R_X ( t + τ , t ) = Σ_m Σ_n E [ A_m A_n ] E [ δ ( t − m T + τ − T_d ) δ ( t − n T − T_d ) ]
But E [ A_m A_n ] = 1 for m = n , and 0 otherwise.
This is because, when m = n , E [ A_m A_n ] = E [ A_m² ] = (1/2) (1)² + (1/2) (− 1)² = 1 .
If m ≠ n , then E [ A_m A_n ] = E [ A_m ] E [ A_n ] , as successive amplitudes are
statistically independent. But E [ A_m ] = E [ A_n ] = (1/2) (1) + (1/2) (− 1) = 0 .
    R_X ( t + τ , t ) = Σ_n E [ δ ( t − n T + τ − T_d ) δ ( t − n T − T_d ) ]

                     = Σ_n (1/T) ∫_0^T δ ( t − n T + τ − t_d ) δ ( t − n T − t_d ) d t_d

Let t − n T − t_d = x . Then,

    R_X ( t + τ , t ) = Σ_n (1/T) ∫_{t − ( n + 1 ) T}^{t − n T} δ ( x ) δ ( x + τ ) d x

                     = (1/T) ∫_{−∞}^{∞} δ ( x ) δ ( x + τ ) d x

Letting y = − x , we have

    R_X ( t + τ , t ) = (1/T) ∫_{−∞}^{∞} δ ( − y ) δ ( τ − y ) d y = (1/T) ∫_{−∞}^{∞} δ ( y ) δ ( τ − y ) d y

                     = (1/T) [ δ ( τ ) ∗ δ ( τ ) ] = (1/T) δ ( τ )
That is, the ACF is a function of τ alone and it is an impulse!
3.4.2 Cross-correlation
Consider two random processes X ( t ) and Y ( t ) . We define the two
R X ,Y ( t1 , t 2 ) = E ⎡⎣ X ( t1 ) Y ( t2 ) ⎤⎦
RY , X ( t1 , t 2 ) = E ⎡⎣Y ( t1 ) X ( t2 ) ⎤⎦
where t1 and t 2 are the two time instants at which the processes are observed.
Def. 3.6:
Two processes are said to be jointly wide-sense stationary if,
i) X ( t ) and Y ( t ) are WSS and
ii) R X ,Y ( t1 , t 2 ) = R X ,Y ( t1 − t 2 ) = R X Y ( τ )
Def. 3.7:
Two random process X ( t ) and Y ( t ) are called (mutually) orthogonal if
Exercise 3.6
Show that the cross-correlation function satisfies the following
inequalities.
a) | R_XY ( τ ) | ≤ √( R_X ( 0 ) R_Y ( 0 ) )

b) | R_XY ( τ ) | ≤ (1/2) [ R_X ( 0 ) + R_Y ( 0 ) ]
RY ( t1 , t 2 ) when X ( t ) is WSS.
belongs to Y ( t ) . Then,
    y_j ( t ) = ∫_{−∞}^{∞} h ( τ ) x_j ( t − τ ) d τ

As the above relation is true for every sample function of X ( t ) , we can write

    Y ( t ) = ∫_{−∞}^{∞} h ( τ ) X ( t − τ ) d τ
    m_Y ( t ) = E [ Y ( t ) ] = E [ ∫_{−∞}^{∞} h ( τ ) X ( t − τ ) d τ ]     (3.9a)
Provided that E ⎡⎣ X ( t ) ⎤⎦ is finite for all t , and the system is stable, we may
interchange the order of the expectation and integration with respect to τ in Eq.
3.9(a) and write
    m_Y ( t ) = ∫_{−∞}^{∞} h ( τ ) E [ X ( t − τ ) ] d τ     (3.9b)
where we have used the fact that h ( τ ) is deterministic and can be brought
outside the expectation. If X ( t ) is WSS, E [ X ( t − τ ) ] = m_X , so that

    m_Y ( t ) = m_X ∫_{−∞}^{∞} h ( τ ) d τ = m_X H ( 0 )     (3.10)

where H ( 0 ) = H ( f ) |f = 0 and H ( f ) is the transfer function of the given system.
         = ∫_{−∞}^{∞} d τ1 h ( τ1 ) ∫_{−∞}^{∞} d τ2 h ( τ2 ) R_X ( t − τ1 , u − τ2 )
If X ( t ) is WSS, then
    R_Y ( t , u ) = ∫_{−∞}^{∞} d τ1 h ( τ1 ) ∫_{−∞}^{∞} d τ2 h ( τ2 ) R_X ( τ − τ1 + τ2 ) ,  τ = t − u     (3.11)
Hence, the LHS of Eq. 3.11 can be written as RY ( τ ) . Eq. 3.10 and 3.11 together
But h ( τ1 ) = ∫_{−∞}^{∞} H ( f ) e^{ j 2 π f τ1 } d f . Hence, the mean square value of the output is

    E [ Y² ( t ) ] = R_Y ( 0 ) = ∫_{−∞}^{∞} H ( f ) d f ∫_{−∞}^{∞} h ( τ2 ) d τ2 ∫_{−∞}^{∞} R_X ( τ2 − τ1 ) e^{ j 2 π f τ1 } d τ1
where S_X ( f ) = ∫_{−∞}^{∞} R_X ( λ ) e^{ − j 2 π f λ } d λ .
Then, from Eq. 3.12, we find that if the filter band-width is sufficiently small
and S X ( f ) is a continuous function, then the mean square value of the filter
output is approximately,
    E [ Y² ( t ) ] ≈ ( 2 ∆ f ) S_X ( fc )
The filter however passes only those frequency components of the input random
process X ( t ) that lie inside the narrow frequency band of width ∆ f centered
about ± fc . Thus, S_X ( fc ) represents the frequency density of the average power
in the process, measured in watts/Hz.
    R_X ( τ ) = ∫_{−∞}^{∞} S_X ( f ) e^{ j 2 π f τ } d f     (3.14)
Eq. 3.13 and 3.14 are popularly known as the Wiener-Khinchine relations. Using
this pair of equations, we shall derive some general properties of PSD of a WSS
random process.
P1) The zero frequency value of the PSD of a WSS process equals the total
area under the graph of ACF; that is
    S_X ( 0 ) = ∫_{−∞}^{∞} R_X ( τ ) d τ
P2) The mean square value of the process equals the total area under the graph
of the PSD; that is, R_X ( 0 ) = E [ X² ( t ) ] = ∫_{−∞}^{∞} S_X ( f ) d f .
P3) The PSD is real and is an even function of frequency; that is, S X ( f ) is real
and S X ( − f ) = S X ( f ) .
This result is due to the property of the ACF, namely, R X ( τ ) is real and
even.
P4) The PSD of a WSS process is always non-negative; that is,
S X ( f ) ≥ 0 , for all f .
    R_Y ( τ ) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} h ( τ1 ) h ( τ2 ) [ ∫_{−∞}^{∞} S_X ( f ) e^{ j 2 π f ( τ − τ1 + τ2 ) } d f ] d τ1 d τ2

              = ∫_{−∞}^{∞} [ ∫_{−∞}^{∞} h ( τ1 ) e^{ − j 2 π f τ1 } d τ1 ] [ ∫_{−∞}^{∞} h ( τ2 ) e^{ j 2 π f τ2 } d τ2 ] S_X ( f ) e^{ j 2 π f τ } d f

              = ∫_{−∞}^{∞} H ( f ) ⋅ H∗ ( f ) S_X ( f ) e^{ j 2 π f τ } d f

              = ∫_{−∞}^{∞} [ | H ( f ) |² S_X ( f ) ] e^{ j 2 π f τ } d f     (3.15)

Comparing Eq. 3.15 with Eq. 3.14, we see that

    S_Y ( f ) = S_X ( f ) | H ( f ) |²     (3.16)
Note that when the input to an LTI system is deterministic, we have the input-
output FT relationship Y ( f ) = X ( f ) H ( f ) , whose time domain counterpart is
y ( t ) = x ( t ) ∗ h ( t ) . Analogously, Eq. 3.16 has the time domain counterpart

    R_Y ( τ ) = R_X ( τ ) ∗ R_h ( τ )
             = R_X ( τ ) ∗ h ( τ ) ∗ h∗ ( − τ )     (3.17a)
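Eq. 3.17(a) has a convenient discrete-time analogue that can be checked numerically: for a unit variance white input, R_X [ k ] = δ [ k ] , so the output ACF reduces to the deterministic autocorrelation of the impulse response. A sketch (the FIR taps and sample size are arbitrary assumptions):

```python
import random

random.seed(2)
h = [0.5, 0.3, 0.2]        # an arbitrary FIR impulse response (assumption)
N = 100_000

# Zero mean, unit variance white input: R_X[k] = delta[k]
x = [random.gauss(0.0, 1.0) for _ in range(N)]

# Output y[n] = sum_m h[m] x[n - m]
y = [sum(h[m] * x[n - m] for m in range(len(h))) for n in range(len(h), N)]

def acf(seq, k):
    """Time average estimate of E[ seq[n] seq[n + k] ]."""
    n = len(seq) - k
    return sum(seq[i] * seq[i + k] for i in range(n)) / n

# Theory (discrete form of R_Y = R_X * h * h(-tau)): R_Y[k] = sum_m h[m] h[m+k]
for k in range(len(h)):
    theory = sum(h[m] * h[m + k] for m in range(len(h) - k))
    print(f"k={k}: estimate {acf(y, k):.3f}, theory {theory:.3f}")
```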
Example 3.7
For the random process X ( t ) of example 3.5, let us find the PSD.
Since R_X ( τ ) = (A²/2) cos ( ωc τ ) , we have

    S_X ( f ) = (A²/4) [ δ ( f − fc ) + δ ( f + fc ) ]
    R_Y ( τ ) = E [ Y ( t + τ ) Y ( t ) ]

              = E { X ( t + τ ) cos [ ωc ( t + τ ) + Θ ] X ( t ) cos ( ωc t + Θ ) }

As X ( t ) and Θ are independent, we have

    R_Y ( τ ) = E [ X ( t + τ ) X ( t ) ] E [ cos ( ωc ( t + τ ) + Θ ) cos ( ωc t + Θ ) ]

              = (1/2) R_X ( τ ) cos ( ωc τ )

    S_Y ( f ) = F [ R_Y ( τ ) ] = (1/4) [ S_X ( f − fc ) + S_X ( f + fc ) ]
The above process can be generated by passing the random impulse train
(Example 3.6) through a filter with the impulse response,
    h ( t ) = A for 0 ≤ t ≤ T , and zero otherwise.
    H ( f ) = A T sinc ( f T )

Therefore, S_X ( f ) = (1/T) A² T² sinc² ( f T )
                     = A² T sinc² ( f T )

Hence, S_X ( f ) = E_x ( f ) / T .
Exercise 3.7
In the random binary wave process of example 3.9, let 1 be
represented by a pulse of amplitude A and duration T sec. The binary zero is
indicated by the absence of any pulse. The rest of the description of the
process is the same as in Example 3.9. Show that
    S_X ( f ) = (A²/4) [ δ ( f ) + sin² ( π f T ) / ( π² f² T ) ]
Exercise 3.8
The input voltage to an RLC series circuit is a WSS process X ( t ) with
capacitor. Find
a) Y (t )
b) SY ( f )
    S_YX ( f ) = ∫_{−∞}^{∞} R_YX ( τ ) e^{ − j 2 π f τ } d τ     (3.19)
That is,
R X Y ( τ ) ←⎯→ S X Y ( f )
RY X ( τ ) ←⎯→ SY X ( f )
S X Y ( f ) = SY X ( − f ) = SY∗ X ( f )
Example 3.10
Let Z ( t ) = X ( t ) + Y ( t )
spectrum.
RZ ( t , u ) = E ⎡⎣Z ( t ) Z ( u ) ⎤⎦
= E ⎡⎣( X ( t ) + Y ( t ) ) ( X ( u ) + Y ( u ) ) ⎤⎦
= R X ( t , u ) + R X Y ( t , u ) + RY X ( t , u ) + RY ( t , u )
Letting τ = t − u , we have
RZ ( τ ) = R X ( τ ) + R X Y ( τ ) + RY X ( τ ) + RY ( τ ) (3.20a)
    R_XY ( τ ) = E [ X ( t + τ ) Y ( t ) ] = 0
then,
RZ ( τ ) = R X ( τ ) + RY ( τ ) and
SZ ( f ) = S X ( f ) + SY ( f )
Example 3.11
Let X ( t ) be the input to an LTI system with the impulse response h ( t ) . If
RY X ( t , u ) = E ⎡⎣Y ( t ) X ( u ) ⎤⎦
             = E [ ∫_{−∞}^{∞} h ( τ1 ) X ( t − τ1 ) d τ1 X ( u ) ]
Interchanging the order of expectation and integration, we have
    R_YX ( t , u ) = ∫_{−∞}^{∞} h ( τ1 ) E [ X ( t − τ1 ) X ( u ) ] d τ1
If X ( t ) is WSS, then
    R_YX ( t , u ) = ∫_{−∞}^{∞} h ( τ1 ) R_X ( t − u − τ1 ) d τ1

That is, with τ = t − u ,

    R_YX ( τ ) = h ( τ ) ∗ R_X ( τ )     (3.21a)
That is, the cross-correlation between the output and input is the convolution of
the input ACF with the filter impulse response. Taking the Fourier transform,
SY X ( f ) = S X ( f ) H ( f ) (3.21b)
Eq. 3.21(b) tells us that X ( t ) and Y ( t ) will have strong cross-correlation in
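Eq. 3.21(a) can likewise be checked in discrete time: with a white input of variance σ², R_X [ k ] = σ² δ [ k ] , so the output-input cross-correlation should reduce to σ² h [ k ] . A sketch (the FIR taps and sample size are arbitrary assumptions):

```python
import random

random.seed(7)
h = [1.0, 0.6, -0.3]       # arbitrary FIR impulse response (assumption)
N = 100_000
L = len(h)

x = [random.gauss(0.0, 1.0) for _ in range(N)]          # white, sigma^2 = 1
y = [sum(h[m] * x[n - m] for m in range(L)) for n in range(L, N)]

def cross_corr(k):
    """Estimate R_YX[k] = E[ y[n] x[n - k] ]; y[n - L] is y at time n."""
    idx = range(L, N - L)
    return sum(y[n - L] * x[n - k] for n in idx) / len(idx)

# Theory: R_YX[k] = (h * R_X)[k] = sigma^2 h[k] for a white input
for k in range(L):
    print(f"k={k}: estimate {cross_corr(k):.3f}, theory {h[k]:.3f}")
```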
Example 3.12
In the scheme shown (Fig. 3.14), X ( t ) and Y ( t ) are jointly WSS. Let us
compute SV Z ( f ) .
RV Z ( t , u ) = E ⎡⎣V ( t ) Z ( u ) ⎤⎦
             = E { ∫_{−∞}^{∞} h1 ( τ1 ) X ( t − τ1 ) d τ1 ∫_{−∞}^{∞} h2 ( τ2 ) Y ( u − τ2 ) d τ2 }
Interchanging the order of expectation and integration,
    R_VZ ( t , u ) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} h1 ( τ1 ) h2 ( τ2 ) E [ X ( t − τ1 ) Y ( u − τ2 ) ] d τ1 d τ2
RV Z ( t , u ) = RV Z ( t − u ) = RV Z ( τ )
has to be zero. That is, the two random variables, V ( t i ) and Z ( t j ) obtained
Fig. 3.15.
familiar result

    S_V ( f ) = | H ( f ) |² S_X ( f )
    f_X ( x ) = [ 1 / ( ( 2 π )^{n/2} | C_X |^{1/2} ) ] exp [ − (1/2) ( x − m_X ) C_X^{−1} ( x − m_X )^T ]     (3.24)
where the covariance matrix C_X is given by

    C_X = [ λ11  λ12  ⋯  λ1n
            λ21  λ22  ⋯  λ2n
             ⋮
            λn1  λn2  ⋯  λnn ]

where λ_ij = cov [ X_i , X_j ] = E [ ( X_i − m_i ) ( X_j − m_j ) ] , with m_i = E [ X_i ] .

Similarly, ( x − m_X ) = ( x1 − m1 , x2 − m2 , ........, xn − mn ) .
Specification of a Gaussian process as above corresponds to the first
method mentioned earlier. Note that an n-dimensional Gaussian PDF depends
only on the means and the covariance quantities of the random variables under
consideration. If the mean value of the process is constant and the ACF depends
only on the time difference (i.e., the process is WSS), then the joint PDF is
independent of the time origin. In other words, a WSS Gaussian process is also
stationary in the strict sense. To illustrate the use of the matrix notation, let us take
an example of a joint Gaussian PDF with n = 2 .
Example 3.13
X1 and X2 are two jointly Gaussian variables with E [ X1 ] = E [ X2 ] = 0 ,
variance σ² each, and correlation coefficient ρ . We wish to write the
joint PDF of X1 and X2 in i) the matrix form and ii) the expanded form.
i)  C_X = [ σ²    ρ σ²
            ρ σ²  σ²  ] ,   | C_X | = σ⁴ ( 1 − ρ² )

    C_X^{−1} = [ 1 / ( σ⁴ ( 1 − ρ² ) ) ] [ σ²      − ρ σ²
                                           − ρ σ²    σ²   ]

             = [ 1 / ( σ² ( 1 − ρ² ) ) ] [ 1    − ρ
                                           − ρ    1 ]

Therefore,

    f_{X1,X2} ( x1 , x2 ) = [ 1 / ( 2 π σ² √( 1 − ρ² ) ) ]
                            × exp { − [ 1 / ( 2 σ² ( 1 − ρ² ) ) ] ( x1  x2 ) [ 1  − ρ ; − ρ  1 ] ( x1  x2 )^T }
ii) Taking the matrix products in the exponent above, we have the expanded
result, namely,

    f_{X1,X2} ( x1 , x2 ) = [ 1 / ( 2 π σ² √( 1 − ρ² ) ) ] exp { − ( x1² − 2 ρ x1 x2 + x2² ) / ( 2 σ² ( 1 − ρ² ) ) } ,
                            − ∞ < x1 , x2 < ∞
This is the same as the bivariate Gaussian PDF of Chapter 2, section 6 with
x1 = x and x2 = y , and m X = mY = 0 , and σ X = σY = σ .
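As a quick numerical check that the matrix form and the expanded form agree, both can be evaluated at a few points (the parameter values below are arbitrary choices for the check):

```python
import math

sigma, rho = 1.5, 0.6      # arbitrary parameter values (assumption)

def pdf_matrix(x1, x2):
    """Matrix form: uses |C_X| = sigma^4 (1 - rho^2) and C_X^{-1}."""
    det = sigma ** 4 * (1.0 - rho ** 2)
    c = 1.0 / (sigma ** 2 * (1.0 - rho ** 2))      # scale factor of C_X^{-1}
    quad = c * (x1 * x1 - 2.0 * rho * x1 * x2 + x2 * x2)
    return math.exp(-0.5 * quad) / (2.0 * math.pi * math.sqrt(det))

def pdf_expanded(x1, x2):
    """Expanded form of the bivariate Gaussian PDF."""
    num = x1 ** 2 - 2.0 * rho * x1 * x2 + x2 ** 2
    den = 2.0 * sigma ** 2 * (1.0 - rho ** 2)
    return math.exp(-num / den) / (2.0 * math.pi * sigma ** 2 * math.sqrt(1.0 - rho ** 2))

for a, b in [(0.0, 0.0), (1.0, -0.5), (2.2, 1.1)]:
    print(abs(pdf_matrix(a, b) - pdf_expanded(a, b)) < 1e-12)   # True
```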
Example 3.14
X ( t ) is a Gaussian process with m_X ( t ) = 3 and C_X ( τ ) = 4 e^{ − 0.2 | τ | } . Find
a) P [ X ( 5 ) ≤ 2 ]
b) P [ | X ( 8 ) − X ( 5 ) | ≤ 1 ]
Gaussian with the mean value 3, and the variance = C X ( 0 ) = 4 . That is,
Y is N ( 3, 4 ) .
    P [ Y ≤ 2 ] = P [ ( Y − 3 ) / 2 ≤ − 1/2 ] = Q ( 0.5 )
    σ_Z² = σ_Y² + σ_W² − 2 ρ_{Y,W} σ_Y σ_W

         = 4 + 4 − 2 × 4 e^{ − 0.2 × 3 }

         = 8 ( 1 − e^{ − 0.6 } ) ≈ 3.61
    P [ | Z | ≤ 1 ] = 2 P [ 0 ≤ Z ≤ 1 ] = 2 { 1/2 − P [ Z > 1 ] }

                    = 1 − 2 P [ Z / √3.61 > 1 / √3.61 ] = 1 − 2 Q ( 0.526 )
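The numbers above can be reproduced with the standard relation Q ( x ) = erfc ( x / √2 ) / 2 ; a sketch:

```python
import math

def Q(x):
    """Gaussian tail probability: Q(x) = P[N(0,1) > x]."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# a) Y = X(5) is N(3, 4), so P[Y <= 2] = Q(0.5)
p_a = Q(0.5)
print(round(p_a, 4))            # 0.3085

# b) Z = X(8) - X(5) is zero mean Gaussian with variance 8(1 - e^{-0.6})
var_Z = 8 * (1 - math.exp(-0.6))
p_b = 1 - 2 * Q(1 / math.sqrt(var_Z))
print(round(var_Z, 2))          # 3.61
print(round(p_b, 3))            # probability that |Z| <= 1
```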
Exercise 3.9
Let X be a zero-mean Gaussian vector with four components, X1 , X2 , X3 , X4 . Then

    E [ X1 X2 X3 X4 ] = E [ X1 X2 ] E [ X3 X4 ] + E [ X1 X3 ] E [ X2 X4 ] + E [ X1 X4 ] E [ X2 X3 ]

The above formula can also be used to compute the moments such as

    E [ X1⁴ ] = E [ X1 X1 X1 X1 ] = 3 σ1⁴

Similarly, E [ X1² X2² ] = E [ X1 X1 X2 X2 ] = σ1² σ2² + 2 ( E [ X1 X2 ] )²

If the covariance matrix of X1 and X2 is

    C_X = [ 2  1
            1  2 ]

show that E [ X1² X2² ] = 6 .
    δ_ij = 1 for i = j , and 0 for i ≠ j
P3) Let Y = (Y1 , Y2 , .........., Yn ) be a set of vectors obtained from
Y T = A X T + aT
Exercise 3.10
Let Y T = A X T + aT
where the notation is from P3) above. Show that
CY = AC X AT
Two of the important internal circuit noise varieties are: i) Thermal noise
ii) Shot noise
whether or not an electric field is applied. Consistent with the central limit
theorem, V ( t ) is a Gaussian process with zero mean. Its spectral density S_V ( f )
(in ( Volts )² / Hz ) is essentially constant for | f | ≤ 10¹² Hz and is given by

    S_V ( f ) = 2 R k T  V² / Hz     (3.25)
If this open circuit voltage is measured with the help of a true RMS
voltmeter of bandwidth B (frequency range: − B to B ), then the reading on the
instrument would be √( 2 R k T ⋅ 2 B ) = √( 4 R k T B ) V .
Let us treat the resistor to be noise free. But it is in series with a noise
source with SV ( f ) = 2 R k T . In other words, we have a noise source with the
Fig. 3.16: (a) Resistor as noise source (b) The noise source driving a load
We know that the maximum power is transferred to the load, when the load is
matched to the source impedance. That is, the required load resistance is R
(Fig. 3.16(b)). The transfer function of this voltage divider network is 1/2
( i.e., R / ( R + R ) ). Hence,

    available PSD = S_V ( f ) | H ( f ) |² / R

                  = 2 R k T / ( 4 R ) = k T / 2  Watts/Hz     (3.26)
It is to be noted that available PSD does not depend on R , though the
noise voltage is produced by R .
Because of Eq. 3.26, the available power in a bandwidth of B Hz is

    P_a = ( k T / 2 ) ⋅ 2 B = k T B     (3.27)
2
In many solid state devices (and of course, in vacuum tubes!) there exists
a noise current mechanism called shot noise. In 1918, Schottky carried out the
first theoretical study of the fluctuations in the anode current of a temperature
limited vacuum tube diode (This has been termed as shot noise). He showed that
at frequencies well below the reciprocal of the electron transit time (which
extends upto a few Giga Hz), the spectral density of the mean square noise
current due to the randomness of emission of electrons from the cathode is given
by,
    S_I ( f ) = 2 q I  ( Amp )² / Hz
where q is the electronic charge and I is the average anode current. (Note that
units of spectral density could be any one of the three, namely, (a) Watts Hz ,
with SI ( f ) for the source quantity. Then, we will have the Norton equivalent
circuit with S_I ( f ) = 2 R k T / R² = 2 k T / R in parallel with the resistance R .)
Schottky’s result also holds for the semiconductor diode. The current
spectral density of shot noise in a p-n junction diode is given by
SI ( f ) = 2 q ( I + 2 IS ) (3.28)
A BJT has two semiconductor junctions and hence two sources of shot
noise. In addition, it contributes to thermal noise because of internal ohmic
resistance (such as base resistance).
A JFET has a reverse biased junction between the gate terminal and the
semiconductor channel from the source to the drain. Hence, we have the gate
shot noise and the channel thermal noise in a JFET. (Note that gate shot noise
could get amplified when the device is used in a circuit.)
    S_W ( f ) = N0 / 2  watts/Hz     (3.29)

where the factor 1/2 has been included to indicate that half the power is
associated with the positive frequencies and half with the negative frequencies. In
addition to a flat spectrum, if the process happens to be Gaussian, we describe it
as white Gaussian noise. Note that white and Gaussian are two different
attributes. White noise need not be Gaussian, nor need Gaussian noise be
white. Only when "whiteness" and "Gaussianity" exist simultaneously is the
process qualified as a White Gaussian Noise (WGN) process.
White noise, whether Gaussian or not, must be fictitious (as is the case
with everything that is "ideal") because its total mean power is infinity. The utility
of the concept of white noise stems from the fact that such a noise, when passed
through a linear filter for which
    ∫_{−∞}^{∞} | H ( f ) |² d f < ∞ ,     (3.30)
the filter output is a stationary zero mean noise process N ( t ) that is meaningful
within this band. In so far as the power spectrum at the output is concerned, it
makes little difference how the input power spectrum behaves outside of the
pass band. Hence, if the input noise spectrum is flat within the pass band of the
system, we might as well treat it to be white, as this does not affect the calculation
of the output noise power; moreover, assuming the input to be 'white' simplifies the
calculations.
As S_W ( f ) = N0 / 2 , we have, for the ACF of white noise,

    R_W ( τ ) = ( N0 / 2 ) δ ( τ ) ,     (3.31)
as shown in Fig. 3.17. This is again a nonphysical but useful result.
Eq. 3.31 implies that any two samples of white noise, no matter how closely
together in time they are taken, are uncorrelated. (Note that RW ( τ ) = 0 for
τ ≠ 0 ). In addition, if the noise process is Gaussian, we find that any two
samples of WGN are statistically independent. In a sense, WGN represents the
ultimate in randomness!
should sound monotonous because the waveform driving the speaker appears to
be the same for all time instants.
We will begin with the audio. By clicking on the speaker symbol, you will
listen to the speaker output when it is driven by a BLWN source with the
frequency range upto 110 kHz. (As the speaker response is limited only upto 15
kHz, the input to the speaker, for all practical purposes, is white.)
We now show some flash animation pictures of the spectral density and
time waveforms of BLWN.
1. Picture 1 is the display put out by a spectrum analyzer when it is fed with a
BLWN signal, band limited to 6 MHz. As can be seen from the display, the
spectrum is essentially constant upto 6 MHz and falls by 40 to 45 dB at
about 7MHz and beyond.
display is just about the same, whereas when you switch from 500 µsec/div
to 50 µsec/div, there is a change, especially in the display level.
Now consider a 20 kHz sine wave being displayed with the same
sweep speed, namely, 50 µsec/div. This will result in 1 cycle/div which
implies we can see the fine structure of the waveform. If the sweep speed
were to be 500 µsec/div, then a 20 kHz tone will result in 10 cycles/div,
which implies that fine structure will not be seen clearly. Hence when we
observe BLWN, band limited to 6 MHz, on an oscilloscope with a sweep
speed of 50 µsec/div, fine structure of the low frequency components could
interfere with the constant envelope display of the higher frequency
components, thereby causing some reduction in the envelope level.
3. Picture 3 is the display from the spectrum analyzer when it is fed with a
BLWN signal, band limited to 50 MHz. As can be seen from the display,
PSD is essentially constant upto 50 MHz and then starts falling, going 40
dB down by about 150 MHz.
You have again the choice of the same four sweep rates. This time
you will observe that when you switch from 500 µsec/div to 50 µsec/div, the
change is much less. This is because, this signal has much wider spectrum
and the power in the frequency range | f | ≤ 100 kHz is 0.002 P , where P
From the above, it is clear that as the BLWN tends towards white noise,
variations in the time waveform keep reducing, resulting in a steady picture on
the oscilloscope no matter what sweep speed is used for the display.
fY ( y ) .
Let N ( t ) denote the output noise process when the WGN process W ( t )
RN ( τ ) = N0 B sin c ( 2 B τ )
σ_Y² = E [ Y² ] = R_Y ( 0 ) = N0 B . Hence Y is N ( 0, N0 B ) .

Note that the ACF passes through zero at τ = n / ( 2 B ) , where n = ± 1, ± 2, ..... .
Hence any two random variables obtained by sampling the output process at
times t1 and t2 such that t1 − t2 is a multiple of 1 / ( 2 B ) are going to be
statistically independent.
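A quick numerical check of the zero crossings of the sinc-shaped ACF (the values of N0 and B below are assumptions):

```python
import math

N0, B = 2.0, 5.0e3      # assumed values for illustration

def RN(tau):
    """R_N(tau) = N0 * B * sinc(2 B tau), with sinc(x) = sin(pi x)/(pi x)."""
    x = 2 * B * tau
    if x == 0:
        return N0 * B
    return N0 * B * math.sin(math.pi * x) / (math.pi * x)

print(RN(0.0))                            # 10000.0 = N0 * B
for n in (1, 2, 3):
    tau = n / (2 * B)
    print(abs(RN(tau)) < 1e-6)            # True: zero crossings at n/(2B)
```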
The transfer function of the RC LPF is given by H ( f ) = 1 / ( 1 + j 2 π f R C ) .

Therefore, S_N ( f ) = ( N0 / 2 ) / [ 1 + ( 2 π f R C )² ] , and

    R_N ( τ ) = [ N0 / ( 4 R C ) ] exp ( − | τ | / R C )
    ρ_XY = λ_XY / ( σ_X σ_Y ) = E [ X Y ] / E [ X² ]

    E [ X Y ] = R_N ( τ ) |τ = 0.1 sec

    E [ X² ] = R_N ( τ ) |τ = 0 = N0 / ( 4 R C )

Hence, ρ_XY = e^{ − ( 0.1 / R C ) }
Exercise 3.11
X ( t ) is a zero mean Gaussian process with R_X ( τ ) = 1 / ( 1 + τ² ) . Let
Y ( t ) = X̂ ( t ) , where X̂ ( t ) is the Hilbert transform of X ( t ) . The process X ( t )
Exercise 3.12
The impulse response of a filter (LTI system) is given by
h (t ) = δ (t ) − α e− α t u (t )
where α is a positive constant. If the input to the filter is white Gaussian noise
with the PSD of N0 / 2 watts/Hz, find
2
a) the output PSD, and
b) show that the ACF of the output is

    ( N0 / 2 ) [ δ ( τ ) − ( α / 2 ) e^{ − α | τ | } ]
Exercise 3.13
White Gaussian noise process is applied as input to a zero-order-hold
circuit with a delay of T sec. The output of the ZOH circuit is sampled at
t = T . Let Y be the corresponding random variable. Find fY ( y ) .
Exercise 3.14
Noise from a 10 k Ω resistor at room temperature is passed through an
ILPF of bandwidth 2.5 MHz. The filtered noise is sampled every microsecond.
Denoting the random variables corresponding to the two adjacent samples as
Y1 and Y2 , obtain the expression for the joint PDF of Y1 and Y2 .
In example 3.15, we found that the average output noise power is equal to
N0 B where B is the bandwidth of the ILPF. Similarly, in the case of example
3.16, we find the average output noise power to be equal to N0 / ( 4 R C ) , where
1 / ( 2 π R C ) is the 3-dB bandwidth of the RC-LPF. That is, in both the cases, we find
that the average output noise power is proportional to some measure of the
bandwidth. We may generalize this statement to include all types of low pass
filters by defining the noise equivalent bandwidth as follows.
Suppose we have a source of white noise with the PSD SW(f) = N0/2 connected to an arbitrary low pass filter of transfer function H(f). The resulting average output noise power N is

N = (N0/2) ∫_{-∞}^{∞} |H(f)|^2 df = N0 ∫_0^{∞} |H(f)|^2 df

Consider next the same source of white noise connected to the input of an ILPF of zero frequency response H(0) and bandwidth B. In this case, the average output noise power is N' = N0 B H^2(0). Equating N' to N yields the noise equivalent bandwidth.
Def. 3.10:

The noise equivalent bandwidth of an arbitrary filter is defined as the bandwidth of an ideal rectangular filter that would pass as much white noise power as the filter in question and has the same maximum gain as the filter under consideration.

That is,

BN = [∫_0^{∞} |H(f)|^2 df] / H^2(0)    (3.32)
With BN known, we can simply calculate the output noise power without worrying about the actual shape of H(f). The definition of noise equivalent bandwidth can be extended to band-pass filters as well.
Example 3.17
Compute the noise equivalent bandwidth of the RC-LPF.
We have H(0) = 1. Hence,

BN = ∫_0^{∞} df / (1 + (2 π f RC)^2) = 1/(4 RC)
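The two bandwidth measures can be compared numerically. The sketch below (the RC value is an assumption for illustration) integrates |H(f)|^2 to confirm BN = 1/(4 RC), which exceeds the 3-dB bandwidth 1/(2 π RC) by the factor π/2.

```python
import numpy as np

# Numerical check of Example 3.17 (RC assumed): the noise equivalent
# bandwidth BN = Int_0^inf |H(f)|^2 df / H(0)^2 of the RC-LPF should equal
# 1/(4*RC); the 3-dB bandwidth is 1/(2*pi*RC), so BN/B3dB = pi/2.
RC = 1e-3
f = np.linspace(0.0, 5e6, 500_001)              # integrate far into the tail
H2 = 1.0 / (1.0 + (2 * np.pi * f * RC) ** 2)    # |H(f)|^2, with H(0) = 1
df = f[1] - f[0]
BN = (np.sum(H2) - 0.5 * (H2[0] + H2[-1])) * df # trapezoid rule

B3dB = 1 / (2 * np.pi * RC)
print(BN, 1 / (4 * RC))                          # close agreement
print(BN / B3dB, np.pi / 2)                      # ratio approaches pi/2
```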
usually being much smaller compared to fc. Hence the receiver meant for such signals usually consists of a cascade of narrowband filters; this means that even if the noise process at the input to the receiver is broadband (it may even be considered white), the noise present at the various stages of a receiver is essentially narrowband in nature. We shall now develop the statistical characterization of such NarrowBand Noise (NBN) processes.
taken as N0/2. If H(f) denotes the filter transfer function, we have

SN(f) = (N0/2) |H(f)|^2    (3.33)
In fact, any narrowband noise encountered in practice could be modeled as the
output of a suitable filter. In Fig. 3.20, we show the waveforms of experimentally
generated NBBP noise. This noise process is generated by passing the BLWN
(with a flat spectrum up to 110 kHz) through a NBBP filter, whose magnitude
characteristic is shown in Fig. 3.21. This filter has a centre frequency of 101 kHz
and a 3-dB bandwidth of less than 3 kHz.
The magnitude response of the filter is obtained by sweeping the filter input from
87 kHz to 112 kHz.
The plot in Fig. 3.20(a) gives us the impression that it is a 101 kHz sinusoid with a slowly changing envelope. This is only partially correct. Waveforms at (b) and (c)
are expanded versions of a part of the waveform at (a). From (b) and (c), it is
clear that zero crossings are not uniform. In Fig. 3.20(c), the cycle of the
waveform shown in green fits almost into the space between two adjacent
vertical lines of the graticule, whereas for the cycle shown in red, there is a clear
offset. Hence the proper time-domain description of a NBBP signal would be: it is
a sinusoid undergoing slow amplitude and phase variations. (This will be justified
later on by expressing the NBBP noise signal in terms of its envelope and
phase.)
We shall now develop two representations for the NBBP noise signal.
These are (a) canonical (also called in-phase and quadrature component)
representation and (b) Envelope and Phase representation.
can write:

npe(t) = n(t) + j n̂(t)    (3.34)

nce(t) = npe(t) exp(- j 2 π fc t)    (3.35)

nce(t) = nc(t) + j ns(t)    (3.36)

As n(t) = Re[nce(t) e^{j 2 π fc t}] = Re{[nc(t) + j ns(t)] e^{j 2 π fc t}}, we have

n(t) = nc(t) cos(2 π fc t) - ns(t) sin(2 π fc t)    (3.39)

where nc(t) is called the in-phase component and ns(t) the quadrature component of n(t).
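The canonical representation can be exercised numerically. The sketch below (every frequency, amplitude and duration is an assumption chosen for illustration) synthesizes a narrowband signal from known nc(t) and ns(t), forms the pre-envelope n(t) + j n̂(t) with an FFT-based Hilbert transform, and recovers the two components from the complex envelope, per Eqs. 3.34-3.36.

```python
import numpy as np

# Sketch of Eqs. 3.34-3.39 on a synthetic narrowband signal (all numbers
# assumed): build n(t) = nc(t) cos(2 pi fc t) - ns(t) sin(2 pi fc t) from
# known slowly varying nc, ns, then recover them from the complex envelope
# nce(t) = [n(t) + j n_hat(t)] exp(-j 2 pi fc t).

def analytic(x):
    """Pre-envelope x(t) + j x_hat(t) via the FFT (x_hat = Hilbert transform)."""
    n = len(x)                     # assumes n is even
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = h[n // 2] = 1.0         # keep DC and Nyquist as-is
    h[1:n // 2] = 2.0              # double positive frequencies, drop negative
    return np.fft.ifft(X * h)

fs, fc = 100_000.0, 10_000.0
t = np.arange(10_000) / fs         # 0.1 s; 50 Hz, 80 Hz and fc all fit exactly
nc = np.cos(2 * np.pi * 50 * t)    # slowly varying in-phase component
ns = np.sin(2 * np.pi * 80 * t)    # slowly varying quadrature component
n = nc * np.cos(2 * np.pi * fc * t) - ns * np.sin(2 * np.pi * fc * t)

nce = analytic(n) * np.exp(-2j * np.pi * fc * t)   # complex envelope
print(np.max(np.abs(nce.real - nc)))               # recovery error for nc
print(np.max(np.abs(nce.imag - ns)))               # recovery error for ns
```

Because the test tones are periodic over the record and lie well inside the band |f| < fc, the recovery is exact up to floating-point error.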
Nc ( t ) and Ns ( t ) respectively.
Exercise 3.15

Show that N̂(t) = Nc(t) sin(2 π fc t) + Ns(t) cos(2 π fc t)    (3.40b)
nc(t) = r(t) cos ψ(t)

ns(t) = r(t) sin ψ(t)

or r(t) = [nc^2(t) + ns^2(t)]^{1/2}    (3.42a)

and ψ(t) = arc tan[ns(t)/nc(t)]    (3.42b)

N(t) = R(t) cos(2 π fc t + Ψ(t))    (3.43)
where R ( t ) is the envelope process and Ψ ( t ) is the phase process. Eq. 3.43
justifies the statement that the NBN waveform exhibits both amplitude and phase
variations.
P4) Both Nc(t) and Ns(t) have the same PSD, which is related to SN(f) of the original process by

SNc(f) = SNs(f) = { SN(f - fc) + SN(f + fc),  - B ≤ f ≤ B
                    0,                        elsewhere

where it is assumed that SN(f) occupies the band fc - B ≤ f ≤ fc + B and fc > B.

The cross-spectral density is

SNcNs(f) = - SNsNc(f) = { j [SN(f + fc) - SN(f - fc)],  - B ≤ f ≤ B
                          0,                            elsewhere

If SN(f) is locally symmetric about ± fc, the cross-spectral density vanishes and RNcNs(τ) is zero for all τ. Since Nc(t) and Ns(t) are jointly Gaussian, they are then statistically independent. Even without local symmetry, RNcNs(τ) is an odd function of τ, so that RNcNs(0) = 0; in other words, the variables Nc(t1) and Ns(t1), for the same sampling instant t1, are independent whether or not there is local symmetry.
Note: Given the spectrum of an arbitrary band-pass signal, we are free to choose
any frequency fc as the (nominal) centre frequency. The spectral shape SNc ( f )
and SNs ( f ) will depend on the fc chosen. As such, the canonical representation
of a narrowband signal is not unique. For example, for the narrowband spectra shown in Fig. 3.22, if f1 is chosen as the representative carrier frequency, then SNc(f) and SNs(f) are computed with respect to it.
Fig. 3.22: Narrowband noise spectrum with two different centre frequencies
Example 3.18
For the narrowband noise spectrum SN(f) shown in Fig. 3.23, sketch SNc(f). What is the variance of Nc(t)?
b) By repeating the above procedure, we obtain SNc ( f ) (the solid line) shown
in Fig. 3.26.
c) σ^2_Nc = σ^2_N = ∫_{-∞}^{∞} SN(f) df = 1.0 Watt
Exercise 3.16
Assuming N ( t ) to be WSS and using Eq. 3.40(a), establish
Exercise 3.17
Let N ( t ) represent a narrowband, zero-mean Gaussian process with
SN(f) = { N0/2,  fc - B ≤ f ≤ fc + B
          0,     otherwise

Let X = Nc(t1), and let Y be the value of d Nc(t)/dt at t = t1, where Nc(t) is the in-phase component of N(t).
a) Show that X and Y are independent.
b) Develop the expression for f X ,Y ( x , y ) .
Exercise 3.18
Let N ( t ) represent a NBN process with the PSD shown in Fig. 3.27
below.
RNce(τ) evidently stands for the ACF of the complex envelope of the narrowband noise:

RNce(t + τ, t) = E[Nce(t + τ) Nce*(t)]
              = E[(Nc(t + τ) + j Ns(t + τ)) (Nc(t) - j Ns(t))]

Using RNs(τ) = RNc(τ) and RNsNc(τ) = - RNcNs(τ), this reduces to

RNce(t + τ, t) = RNce(τ) = 2 [RNc(τ) + j RNsNc(τ)]

From Eq. 3.44, we have

RN(τ) = (1/2) Re[RNce(τ) e^{j 2 π fc τ}]
      = (1/4) [RNce(τ) e^{j 2 π fc τ} + RNce*(τ) e^{- j 2 π fc τ}]    (3.45)

Taking the Fourier transform of Eq. 3.45, we obtain

SN(f) = (1/4) [SNce(f - fc) + SNce*(- f - fc)]    (3.46)
Note: For complex signals, the ACF is conjugate symmetric; that is, if X ( t )
frequency.
N(t) = R(t) cos(2 π fc t + Ψ(t))

That is,

R(t) = [Nc^2(t) + Ns^2(t)]^{1/2}    (3.48a)

Ψ(t) = tan^{-1}[Ns(t) / Nc(t)]    (3.48b)

Our interest is the PDF of the random variable R(t1), for any arbitrary sampling instant t = t1. Hence,

fNcNs(nc, ns) = (1/(2 π σ^2)) exp[- (nc^2 + ns^2) / (2 σ^2)]
Therefore,
fR,Ψ(r, ψ) = { (r/(2 π σ^2)) exp(- r^2/(2 σ^2)),  r ≥ 0, 0 ≤ ψ < 2 π
               0,                                 otherwise

fR(r) = ∫_0^{2π} fR,Ψ(r, ψ) dψ
The PDF given by Eq. 3.49 is the Rayleigh density which was introduced
in Chapter 2. From the above discussion, we have the useful result, namely, the
envelope of narrowband Gaussian noise is Rayleigh distributed.
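This result is easy to verify by simulation. In the sketch below, σ and the sample size are assumptions for illustration; two independent zero-mean Gaussians play the roles of Nc(t1) and Ns(t1), and the sample moments of their envelope are compared with the Rayleigh values E[R] = σ sqrt(π/2) and E[R^2] = 2 σ^2.

```python
import numpy as np

# Monte-Carlo check (sigma and sample size assumed): the envelope
# R = sqrt(Nc^2 + Ns^2) of two independent zero-mean Gaussians of common
# variance sigma^2 is Rayleigh distributed.
rng = np.random.default_rng(0)
sigma, M = 2.0, 1_000_000
nc = rng.normal(0.0, sigma, M)      # in-phase samples
ns = rng.normal(0.0, sigma, M)      # quadrature samples
r = np.hypot(nc, ns)                # envelope samples

print(r.mean(), sigma * np.sqrt(np.pi / 2))   # Rayleigh mean
print(np.mean(r**2), 2 * sigma**2)            # Rayleigh mean square
```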
X ( t ) = A cos ( 2 π fc t ) + N ( t ) (3.51)
and variance σ2 . Then, for any sampling instant t = t1 , let N'c denote the
random variable N'c ( t1 ) and let Ns denote the random variable Ns ( t1 ) . From our
earlier discussion, we find that N'c is N(A, σ^2), Ns is N(0, σ^2), and N'c is independent of Ns. Hence,
fN'c,Ns(n'c, ns) = (1/(2 π σ^2)) exp[- ((n'c - A)^2 + ns^2) / (2 σ^2)]    (3.52)
Let R(t) = {[N'c(t)]^2 + [Ns(t)]^2}^{1/2}

and Ψ(t) = tan^{-1}[Ns(t) / N'c(t)]
procedure similar to that used in the computation of the PDF of the envelope of
narrowband noise, we find that
fR,Ψ(r, ψ) = (r/(2 π σ^2)) exp[- (r^2 + A^2 - 2 A r cos ψ) / (2 σ^2)],  r ≥ 0, 0 ≤ ψ ≤ 2 π

where R = R(t1) and Ψ = Ψ(t1).

fR(r) = ∫_0^{2π} fR,Ψ(r, ψ) dψ

That is,

fR(r) = (r/(2 π σ^2)) exp[- (r^2 + A^2) / (2 σ^2)] ∫_0^{2π} exp[(A r/σ^2) cos ψ] dψ    (3.53)

The integral on the RHS of Eq. 3.53 is identified in terms of the defining integral for the modified Bessel function of the first kind and zero order. Let

I0(y) = (1/(2 π)) ∫_0^{2π} exp(y cos ψ) dψ    (3.54)

A plot of I0(y) is shown in Fig. 3.29. In Eq. 3.54, if we let y = A r/σ^2, Eq. 3.53 can be rewritten as

fR(r) = (r/σ^2) exp[- (r^2 + A^2) / (2 σ^2)] I0(A r/σ^2)    (3.55)
fV(v) = { v exp(- v^2/2),  v ≥ 0
          0,               otherwise

which is the normalized Rayleigh PDF shown earlier in Fig. 3.27. This is justified because if A = 0, then R(t) is the envelope of only the narrowband noise.

ii) For y >> 1, I0(y) ≈ e^y / sqrt(2 π y). Using this approximation, it can be shown that
Appendix A3.1
Properties of Narrowband Noise: Some Proofs.
We shall now give proofs of some of the properties of the NBN mentioned in Sec. 3.10.2.
P1) If N ( t ) is zero mean Gaussian, then so are Nc ( t ) and Ns ( t ) .
Nc(t) = N(t) cos(2 π fc t) + N̂(t) sin(2 π fc t)    (A3.1.1)

Ns(t) = N̂(t) cos(2 π fc t) - N(t) sin(2 π fc t)    (A3.1.2)

Since N(t) and its Hilbert transform N̂(t) are both zero mean, taking the expectation of Eq. A3.1.1 gives E[Nc(t)] = 0; similarly, from Eq. A3.1.2, we obtain E[Ns(t)] = 0.
P2) If N ( t ) is a Gaussian process, then Nc ( t ) and Ns ( t ) are jointly Gaussian.
Fig. A3.1.1: Scheme to obtain N̂(t) from N(t)
RNN̂(τ) = RN̂N(- τ) = (1/(π (- τ))) ∗ RN(- τ)
       = - (1/(π τ)) ∗ RN(τ)
       = - R̂N(τ)

RNc(t + τ, t) = E[Nc(t + τ) Nc(t)]

Expanding this with Eq. A3.1.1, the time-dependent part contains the term

(1/2) [RN̂N(τ) + RNN̂(τ)] sin(2 π fc (2 t + τ))

which vanishes, since RNN̂(τ) = - R̂N(τ) while RN̂N(τ) = R̂N(τ).
Eq. A3.1.3.
Exercise A3.1.1
Show that RNs(τ) = RNc(τ)    (A3.1.7)
P4) Both Nc(t) and Ns(t) have the same PSD, which is related to SN(f) of the original process by

SNc(f) = SNs(f) = { SN(f - fc) + SN(f + fc),  - B ≤ f ≤ B
                    0,                        elsewhere

where it is assumed that SN(f) occupies the band fc - B ≤ f ≤ fc + B and fc > B.

1 + sgn(f + fc) = { 2,  f > - fc
                    0,  outside    (A3.1.10b)