# Principles of Communication

Prof. V. Venkata Rao

CHAPTER 3

Random Signals and Noise
3.1 Introduction
The concept of 'random variable' is adequate to deal with unpredictable voltages; that is, it enables us to come up with the probabilistic description of the numerical value of a random quantity, which we treat for convenience to be a voltage quantity. In the real world, the voltages vary not only in amplitude but also exhibit variations with respect to the time parameter. In other words, we have to develop mathematical tools for the probabilistic characterization of random signals. The resulting theory, which extends the mathematical model of probability so as to incorporate the time parameter, is generally called the theory of Random or Stochastic Processes.

Before we get started with the mathematical development of a random process, let us consider a few practical examples of random processes and try to justify the assertion that the concept of random process is an extension of the concept of a random variable.

Let the variable X denote the temperature of a certain city, say, at 9 A.M. In general, the value of X would be different on different days. In fact, the temperature readings at 9 A.M. on two different days could be significantly different, depending on the geographical location of the city and the time separation between the days of observation. (In a place like Delhi, on a cold winter morning, the temperature could be as low as 40° F, whereas at the height of summer it could have crossed 100° F even by 9 A.M.!)


To get the complete statistics of X, we need to record values of X over many days, which might enable us to estimate fX(x).

But the temperature is also a function of time. At 12 noon, for example, the temperature may have an entirely different distribution. Thus the random variable X is a function of time, and it would be more appropriate to denote it by X(t). At least theoretically, the PDF of X(t1) could be very much different from that of X(t2) for t1 ≠ t2, though in practice they may be very similar if t1 and t2 are fairly closely spaced.

As a second example, think of a situation where we have a very large number of speakers, each one of them uttering the same text into their individual microphones of identical construction. The waveforms recorded from different microphones would be different and the output of any given microphone would vary with time. Here again, the random variables obtained from sampling this collection of waveforms would depend on the sampling instants.

As a third example, imagine a large collection of resistors, each having the same value of resistance and of identical composition and construction. Assume that all these resistors are at room temperature. It is well known that thermal voltage (usually referred to as thermal noise) develops across the terminals of such a resistor. If we make a simultaneous display of these noise voltages on a set of oscilloscopes, we find amplitude as well as time variations in these signals. In the communications context, these thermal voltages are a source of interference. Precisely how they limit our capacity to enjoy, say, listening to music in an AM receiver, is our concern. The theory of random processes enables one to come up with a quantitative answer to this kind of problem.


3.2 Definition of a Random Process
Consider a sample space S (pertaining to some random experiment) with sample points s1, s2, ..., sn, .... To every sj ∈ S, let us assign a real-valued function of time, x(sj, t), which we denote by xj(t). This situation is illustrated in Fig. 3.1, which shows a sample space S with four points and four waveforms, labeled xj(t), j = 1, 2, 3, 4.

Now, let us think of observing this set of waveforms at some time instant
t = t1 as shown in the figure.

Since each point sj of S has associated with it a number xj(t1) and a probability Pj, the collection of numbers {xj(t1), j = 1, 2, 3, 4} forms a random variable. Observing the waveforms at a second time instant, say t2, yields a different collection of numbers, and hence a different random variable. Indeed, this set of four waveforms defines a random variable for each choice of the observation instant. The above situation can easily be extended to the case where there is an infinite number of sample points and hence a correspondingly rich collection of waveforms associated with them.
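The construction just described can be sketched numerically. The sketch below is purely illustrative (the four waveforms and their probabilities are invented for the illustration; they are not the ones in Fig. 3.1): each sample point carries a waveform, and freezing the observation instant turns the ensemble into a discrete random variable.

```python
import numpy as np

# Hypothetical four-point sample space: each point s_j carries a waveform x_j(t).
waveforms = [
    lambda t: np.sin(2 * np.pi * t),   # x_1(t)
    lambda t: 2 * t,                   # x_2(t)
    lambda t: np.cos(2 * np.pi * t),   # x_3(t)
    lambda t: -t,                      # x_4(t)
]
probabilities = [0.25, 0.25, 0.25, 0.25]   # P_j assigned to each sample point


def sample_rv(t1):
    """Observing the ensemble at t = t1 gives the values taken by the
    random variable X(t1), together with their probabilities."""
    return [(x(t1), p) for x, p in zip(waveforms, probabilities)]


# A different observation instant yields a different random variable.
print(sample_rv(0.0))    # the value/probability pairs of X(0)
print(sample_rv(0.25))   # the value/probability pairs of X(0.25)
```

Changing the argument of `sample_rv` changes the set of values (and hence the random variable), exactly as in the discussion above.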


Fig 3.1: A simple random process

The probability system composed of a sample space, an ensemble (collection) of time functions and a probability measure is called a Random Process (RP) and is denoted X(t). Strictly speaking, a random process is a function of two variables, namely s ∈ S and t ∈ (− ∞, ∞). As such, a better notation would be X(s, t). For convenience, we use the simplified notation X(t) to denote a random process. The individual waveforms of X(t) are called sample functions, and the probability measure is such that it assigns a probability to any meaningful event associated with these sample functions. Given a random process X(t), we can identify the following quantities:

X(t): the random process
xj(t): the sample function associated with the sample point sj
X(ti): the random variable obtained by observing the process at t = ti
xj(ti): a real number, giving the value of xj(t) at t = ti


We shall now present a few examples of random processes.

Example 3.1
Consider the experiment of tossing a fair coin. The random process X(t) is defined as follows: X(t) = sin(π t) if head shows up, and X(t) = 2 t if the toss results in a tail. Sketch the sample functions. We wish to find the expression for the PDF of the random variables obtained from sampling the process at (a) t = 0 and (b) t = 1.

There are only two sample functions for the process. Let us denote them by x1(t) and x2(t), where x1(t) = sin(π t) and x2(t) = 2 t; these are shown in Fig. 3.2.

Fig. 3.2: The ensemble for the coin tossing experiment

As heads and tails are equally likely, we have P[x1(t)] = P[x2(t)] = 1/2. Let X0 denote the random variable X(t)|t = 0 and X1 correspond to X(t)|t = 1. Then we have

fX0(x) = δ(x) and fX1(x) = (1/2)[δ(x) + δ(x − 2)]
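These two PDFs can be cross-checked by evaluating the two sample functions of Example 3.1 directly at the observation instants (a small numerical sketch):

```python
import numpy as np

# The two sample functions of Example 3.1, each occurring with probability 1/2.
x_head = lambda t: np.sin(np.pi * t)   # head shows up
x_tail = lambda t: 2.0 * t             # toss results in a tail

# Values of the random variables X(0) and X(1) over the ensemble:
X0 = [x_head(0.0), x_tail(0.0)]   # both numerically 0  ->  f_X0(x) = delta(x)
X1 = [x_head(1.0), x_tail(1.0)]   # 0 and 2 (each w.p. 1/2)
                                  # -> f_X1(x) = (1/2)[delta(x) + delta(x - 2)]
print(X0, X1)
```

Both ensemble members give 0 at t = 0, so all probability mass of X0 sits at the origin, while at t = 1 the mass splits equally between 0 and 2.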

Note that this is one among the simplest examples of RPs that can be used to construe the concept.

Example 3.2
Consider the experiment of throwing a fair die. The sample space consists of six sample points, s1, ..., s6, corresponding to the six faces of the die. Let the sample functions be given by

xi(t) = (1/2) t + (i − 1) for s = si, i = 1, 2, ..., 6

A few of the sample functions of this random process are shown below (Fig. 3.3).

Fig. 3.3: A few sample functions of the RP of Example 3.2

Let us find the mean value of the random variable X = X(t)|t = 1. The PDF of X is

fX(x) = (1/6) Σ_{i = 1}^{6} δ[x − ((1/2) + (i − 1))]

E[X] = (1/12)(1 + 3 + 5 + 7 + 9 + 11) = 3.0

The examples cited above have two features in common, namely (i) the number of sample functions is finite (in fact, we could even say, quite small)
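The mean value of Example 3.2 can be verified directly, since the random variable X = X(t)|t = 1 takes the six equally likely values xi(1) = 1/2 + (i − 1):

```python
# Sample functions of Example 3.2: x_i(t) = t/2 + (i - 1), i = 1..6,
# each face of the fair die occurring with probability 1/6.
samples_at_1 = [0.5 * 1 + (i - 1) for i in range(1, 7)]   # 0.5, 1.5, ..., 5.5
mean_X = sum(samples_at_1) / 6
print(mean_X)   # 3.0
```

This agrees with E[X] = (1/12)(1 + 3 + 5 + 7 + 9 + 11) = 3.0.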

and (ii) the sample functions can be mathematically described; that is, the waveforms (sample functions) in the ensemble are not random. They are deterministic. In quite a few situations involving a random process, the above features may not be present. Consider the situation where we have at our disposal N identical resistors, N being a very large number (of the order of a million!). Let the experiment be 'picking a resistor at random' and the sample functions be the thermal noise voltage waveforms across these resistors. Assuming that the probability of any resistor being picked is 1/N, we find that this probability becomes smaller and smaller as N becomes larger and larger. Typical sample functions might look like the ones shown in Fig. 3.4. It would be an extremely difficult task to write a mathematical expression to describe the time variation of any given voltage waveform. However, as we shall see later on in this chapter, statistical characterization of such noise processes is still possible, and this is adequate for our purposes.

Fig. 3.4: The ensemble for the experiment 'picking a resistor at random'

One fine point deserves a special mention. Randomness in this situation is associated not with the waveforms but with the uncertainty as to which waveform will occur on a given trial.

3.3 Stationarity
By definition, a random process X(t) implies the existence of an infinite number of random variables, one for every time instant in the range − ∞ < t < ∞. Let the process X(t) be observed at n time instants t1, t2, ..., tn. We then have the random variables X(t1), X(t2), ..., X(tn). We define their joint distribution function by

FX(t1), ..., X(tn)(x1, x2, ..., xn) = P{X(t1) ≤ x1, ..., X(tn) ≤ xn}

Using a vector notation is quite often convenient, and we denote the joint distribution by FX(t)(x), where the n-component random vector X(t) = (X(t1), ..., X(tn)) and the dummy vector x = (x1, x2, ..., xn). The joint PDF of X(t), fX(t)(x), is given by

fX(t)(x) = ∂^n FX(t)(x) / (∂x1 ∂x2 ... ∂xn)

We say a random process X(t) is specified if and only if a rule is given or implied for determining FX(t)(x) or fX(t)(x) for any finite set of observation instants (t1, t2, ..., tn).

In application, we encounter three methods of specification. The first (and simplest) is to state the rule directly. For this to be possible, the joint density function must depend in a known way on the time instants. For the second method, a time function involving one or more parameters is given. For example,
X(t) = A cos(ωc t + Θ)

where A and ωc are constants and Θ is a random variable with a known PDF. The third method of specifying a random process is to generate its ensemble by applying a stated operation to the sample functions of a known process. For example, a random process Y(t) may be the result of linear filtering on some known process X(t). We shall see, later on in this lesson, examples of all these methods of specification.

Once fX(t)(x) is known, it is possible for us to compute the probability of various events. For example, we might be interested in the probability of the random process X(t) passing through a set of windows, as shown in Fig. 3.5.

Let A be the event:

A = {s : a1 < X(t1) ≤ b1, a2 < X(t2) ≤ b2, a3 < X(t3) ≤ b3}

That is, the event A consists of all those sample points {sj} such that the corresponding sample functions {xj(t)} satisfy the requirement ai < xj(ti) ≤ bi, i = 1, 2, 3. The required quantity is then P(A). A typical sample function which would contribute to P(A) is shown in the same figure. P(A) can be calculated as

P(A) = ∫_{a1}^{b1} ∫_{a2}^{b2} ∫_{a3}^{b3} fX(t)(x) dx

where X(t) = (X(t1), X(t2), X(t3)) and dx = dx1 dx2 dx3.

Fig. 3.5: Set of windows and a waveform that passes through them
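As a concrete illustration of such a computation, suppose (purely as an assumption for this sketch; nothing in the text implies it) that the three samples X(t1), X(t2), X(t3) are independent standard Gaussian random variables. The joint PDF then factors, and the triple integral reduces to a product of one-dimensional probabilities:

```python
from math import erf, sqrt


def phi(x):
    """Standard Gaussian CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))


def window_probability(windows):
    """P(A) under the *assumed* model that the samples X(t_i) are
    independent N(0, 1): the triple integral of the joint PDF factors
    into a product of terms phi(b_i) - phi(a_i)."""
    p = 1.0
    for a, b in windows:
        p *= phi(b) - phi(a)
    return p


# Illustrative windows (a_i, b_i] at the three observation instants:
print(window_probability([(-1, 1), (0, 2), (-2, 0)]))
```

For dependent samples, the full joint PDF fX(t)(x) must be integrated instead; the factorization used here is only valid under the independence assumption stated in the lead-in.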


The above step can easily be generalized to the case of a random vector with n-components.

Stationary random processes constitute an important subset of the general class of random processes. We shall define stationarity here. Let the process X(t) be observed at time instants t1, t2, ..., tn, and let X(t) be the corresponding random vector.
Def. 3.1: A random process X(t) is said to be strictly stationary or Stationary in a Strict Sense (SSS) if the joint PDF fX(t)(x) is invariant to a translation of the time origin; that is, X(t) is SSS only if

fX(t + T)(x) = fX(t)(x)    (3.1)

where (t + T) = (t1 + T, t2 + T, ..., tn + T).

For X(t) to be SSS, Eq. 3.1 should be valid for every finite set of time instants {tj}, j = 1, 2, ..., n, and for every time shift T and dummy vector x. If X(t) is not stationary, then it is called a nonstationary process.

One implication of stationarity is that the probability of the set of sample functions of this process which passes through the windows of Fig. 3.6(a) is equal to the probability of the set of sample functions which passes through the corresponding time shifted windows of Fig. 3.6(b). Note, however, that it is not necessary that these two sets consist of the same sample functions.


Fig. 3.6: (a) Original set of windows; (b) the translated set

Example 3.3

We cite below an example of a nonstationary process. (It is easier to do this than to give a nontrivial example of a stationary process.) Let

X(t) = sin(2 π F t)

where F is a random variable with the PDF

fF(f) = 1/100, 100 ≤ f ≤ 200 Hz
      = 0, otherwise

(Note that this specification of X(t) corresponds to the second method mentioned earlier in this section.) We now show that X(t) is nonstationary.

X(t) consists of an infinite number of sample functions. Each sample function is a sine wave of unit amplitude and a particular frequency f. Over the ensemble, the random variable F takes all possible values in the range (100, 200) Hz. Three members of this ensemble (with f = 100, 150 and 200 Hz) are plotted in Fig. 3.7.


Fig. 3.7: A few sample functions of the process of Example 3.3

To show that X(t) is nonstationary, we need only observe that every waveform in the ensemble is zero at t = 0, positive for 0 < t < 2.5 msec and negative for (− 2.5 msec) < t < 0.
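This sign pattern is easy to confirm numerically over the whole range of F (the frequency and time grids below are arbitrary choices for the sketch):

```python
import numpy as np

freqs = np.linspace(100.0, 200.0, 1001)   # Hz: the range of the random variable F
t_pos = np.linspace(1e-4, 2.4e-3, 50)     # instants inside (0, 2.5 ms)
t_neg = -t_pos                            # instants inside (-2.5 ms, 0)

# Every sample function sin(2*pi*f*t) is positive on (0, 2.5 ms), since
# the argument 2*pi*f*t stays in (0, pi) for all f in [100, 200] Hz ...
assert np.all(np.sin(2 * np.pi * np.outer(freqs, t_pos)) > 0)
# ... and, by odd symmetry, negative on (-2.5 ms, 0). Hence X(1 ms) and
# X(-1 ms) cannot have the same PDF, and the process is nonstationary.
assert np.all(np.sin(2 * np.pi * np.outer(freqs, t_neg)) < 0)
print("sign pattern verified")
```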

Thus the density function of the random variable X(t1) for t1 = 1 msec is identically zero for negative arguments, whereas the density function of the RV X(t2) for t2 = − 1 msec is non-zero only for negative arguments. (Of course, the PDF of X(0) is an impulse.) For a process that is SSS, the one-dimensional PDF is independent of the observation instant, which is evidently not the case for this example. Hence X(t) is nonstationary.

3.4 Ensemble Averages
We had mentioned earlier that a random process is completely specified if fX(t)(x) is known. Seldom is it possible to have this information, and we may have to be content with a partial description of the process based on certain averages. When these averages are derived from the ensemble, they are called ensemble averages. Usually, the mean function and the autocorrelation function (or the auto-covariance function) of the process provide a useful description. At times, we also require the cross-correlation between two different processes. (This situation is analogous to the random variable case, wherein we had mentioned that even if the PDF of the variable is not available, certain averages such as the mean value, variance etc. do provide adequate or useful information.)

Def. 3.2: The Mean Function
The mean function of a process X(t) is

mX(t) = E[X(t)] = X̄(t) = ∫_{−∞}^{∞} x fX(t)(x) dx    (3.2)

For example, if Xi and Xj are the random variables obtained by sampling the process at t = ti and t = tj respectively, then

X̄i = ∫_{−∞}^{∞} x fXi(x) dx = mX(ti)

X̄j = ∫_{−∞}^{∞} x fXj(x) dx = mX(tj)

In general, mX(t) is a function of time. For a stationary process, we find that the mean function of the process is a constant; that is,

mX(t) = mX    (3.5a)

mX being a constant. In such a case, we can simply mention the mean value of the process.

Def. 3.3: The Autocorrelation Function
The autocorrelation function of a random process X(t) is a function of two variables tk and ti, and is given by

RX(tk, ti) = E[X(tk) X(ti)]    (3.3)

Denoting the joint PDF of the random variables X(tk) and X(ti) by fXk, Xi(x, y), we may rewrite Eq. 3.3 as

RX(tk, ti) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x y fXk, Xi(x, y) dx dy

We also use ACF to denote the AutoCorrelation Function.

Def. 3.4: The Auto-covariance Function
Let CX(tk, ti) denote the auto-covariance function of X(t). It is given by

CX(tk, ti) = E[(X(tk) − mX(tk))(X(ti) − mX(ti))]    (3.4a)

It is not too difficult to show that

CX(tk, ti) = RX(tk, ti) − mX(tk) mX(ti)    (3.4b)

In general, the autocorrelation and the auto-covariance are functions of both the arguments tk and ti. If the process has a zero mean value (that is, mX(t) = 0 for all t), then CX(tk, ti) = RX(tk, ti).

For a stationary process, the ensemble averages defined above take a simpler form. In particular, we find that the autocorrelation and

auto-covariance functions depend only on the time difference (tk − ti), rather than on the actual values of tk and ti. This can be shown as follows:

RX(tk, ti) = E[X(tk) X(ti)] = E[X(tk + T) X(ti + T)], as X(t) is stationary. In particular, if T = − ti, then

RX(tk, ti) = E[X(tk − ti) X(0)]

that is,

RX(tk, ti) = RX(tk − ti, 0)    (3.5b)

In order to simplify the notation, it is conventional to drop the second argument on the RHS of Eq. 3.5(b) and write

RX(tk, ti) = RX(tk − ti)

In view of Eq. 3.4(b), it is not difficult to see that for a stationary process

CX(tk, ti) = CX(tk − ti)

When we simply use the word stationary, we imply stationarity in the strict sense. However, we should not infer that any process for which Eq. 3.5 is valid is a stationary process. In any case, the processes satisfying Eq. 3.5 are sufficiently useful and are termed Wide Sense Stationary (WSS) processes.

Def. 3.5: Wide Sense Stationarity
A process X(t) is WSS, or stationary in a wide sense¹, provided

mX(t) = mX    (3.6a)

and

RX(tk, ti) = RX(tk − ti)    (3.6b)

Wide sense stationarity represents a weak kind of stationarity, in that all processes that are SSS are also WSS, but the converse is not necessarily true.

¹ For definitions of other forms of stationarity (such as Nth order stationarity), see [1, P302].

For convenience of notation, we define the ACF of a wide-sense stationary process X(t) as

RX(τ) = E[X(t + τ) X(t)]    (3.7)

Note that τ is the difference between the two time instants at which the process is being sampled.

3.4.1 Properties of ACF
The autocorrelation function of a WSS process satisfies certain properties, which we shall presently establish. The ACF is a very useful tool, and a thorough understanding of its behavior is quite essential for a further study of the random processes of interest to us.

P1) The mean-square value of the process is RX(τ)|τ = 0. That is,

RX(0) = E[X²(t)]

As RX(0) is a constant, we infer that for a WSS process, the mean and mean-square values are independent of time.

P2) The ACF is an even function of τ; that is,

RX(− τ) = RX(τ)

This is because E[X(t + τ) X(t)] = E[X(t) X(t + τ)]. It follows from Eq. 3.7, since RX[(t + τ) − t] = RX[t − (t + τ)], which is the desired result.

P3) The ACF is maximum at the origin. Consider the quantity

E{[X(t) ± X(t + τ)]²}

Being the expectation of a squared quantity, it is nonnegative. Expanding the square and making use of the linearity property of the expectation, we have

RX(0) ± RX(τ) ≥ 0

which implies RX(0) ≥ |RX(τ)|.

P4) If the sample functions of the process X(t) are periodic with period T0, then the ACF is also periodic with the same period. This property can be established as follows. Consider E[X(t + τ) X(t)] for τ ≥ T0. As each sample function repeats with period T0, the product repeats, and so does the expectation of this product.

The physical significance of RX(τ) is that it provides a means of describing the inter-dependence of two random variables obtained by observing the process X(t) at two time instants τ seconds apart. In fact, if X(t) is a zero mean process, then for any τ = τ1, RX(τ1)/RX(0) is the correlation coefficient of the two random variables spaced τ1 seconds apart. It is therefore apparent that the more rapidly X(t) changes with time, the more rapidly RX(τ) decreases from its maximum value RX(0) as τ increases (Fig. 3.8). This decrease may be characterized by a decorrelation time τ0, such that for τ ≥ τ0, RX(τ) remains below some prescribed value, say RX(0)/100.

Fig. 3.8: ACF of a slowly and a rapidly fluctuating random process

We shall now take up a few examples to compute some of the ensemble averages of interest to us.

Example 3.4
For the random process of Example 3.2, let us compute RX(2, 4).

RX(2, 4) = E[X(2) X(4)] = Σ_{j = 1}^{6} P[xj] xj(2) xj(4)

A few of the sample functions of the process are shown below.

Fig. 3.9: Sampling the process of Example 3.2 at t = 2 and 4

For j = 1, the two samples that contribute to the ACF have the values 1 (indicated by the square marker on x1(t)) and 2 (indicated by × on x1(t)). The other sample pairs are (2, 3), (3, 4), (4, 5), (5, 6) and (6, 7). As P[xj] = 1/6 for all j, we have

RX(2, 4) = (1/6)[2 + 6 + 12 + 20 + 30 + 42] = 112/6 ≈ 18.67

Exercise 3.1
Let a random process X(t) consist of 6 equally likely sample functions, given by xi(t) = i t, i = 1, 2, ..., 6. Let X and Y be the random variables obtained by sampling the process at t = 1 and t = 2 respectively. Find
a) E[X] and E[Y]
b) fX, Y(x, y)
c) RX(1, 2)
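The value computed in Example 3.4 can be reproduced by direct enumeration over the six equally likely sample functions of Example 3.2:

```python
# Autocorrelation of the Example 3.2 process at t = 2 and t = 4.
# Sample functions: x_i(t) = t/2 + (i - 1), each occurring with probability 1/6.
pairs = [((0.5 * 2) + i, (0.5 * 4) + i) for i in range(6)]   # (x_i(2), x_i(4))
R_24 = sum(a * b for a, b in pairs) / 6

print(pairs)   # the sample pairs (1, 2), (2, 3), ..., (6, 7)
print(R_24)    # 112/6, approximately 18.67
```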

Exercise 3.2
A random process X(t) consists of 5 sample functions, each occurring with probability 1/5. Four of these sample functions are given below.

x1(t) = cos(2 π t) − sin(2 π t)
x2(t) = sin(2 π t) − cos(2 π t)
x3(t) = − 2 cos t
x4(t) = − 2 sin t

a) Find the fifth sample function x5(t) of the process X(t) such that the process X(t) is
i) zero mean
ii) RX(t1, t2) = RX(t1 − t2)
b) Let V be the random variable X(t)|t = π/4, and let W be the random variable obtained by sampling X(t) at a second instant. Show that, though the process is WSS, fV(v) ≠ fW(v).

Example 3.5
Let X(t) = A cos(ωc t + Θ), where A and ωc are constants and Θ is a random variable with the PDF

fΘ(θ) = 1/(2 π), 0 ≤ θ < 2 π
      = 0, otherwise

Let us compute a) mX(t) and b) RX(t1, t2).

(Note that the random process is specified in terms of a random parameter, namely Θ; this is another example of the second method of specification.) The ensemble of X(t) is composed of sinusoids of amplitude A and frequency fc, but with a random initial phase. Of course, a given sample function has a fixed value θ = θ1, where 0 ≤ θ1 < 2 π. Three sample functions of the process are shown in Fig. 3.10 for fc = 10⁶ Hz and A = 1.

Fig. 3.10: Three sample functions of the process of Example 3.5

a) mX(t) = E[A cos(ωc t + Θ)]

As Θ is the only random variable, and Θ is uniform in the range 0 to 2 π, we have

mX(t) = (A/(2 π)) ∫_0^{2 π} cos(ωc t + θ) dθ = 0

This is a reasonable answer because, for any given θ = θ1, the ensemble has both the waveforms A cos(ωc t + θ1) and A cos(ωc t + θ1 + π) = − [A cos(ωc t + θ1)]. Both waveforms are equally likely and sum to zero for all values of t.

b) RX(t1, t2) = E[A cos(ωc t1 + Θ) A cos(ωc t2 + Θ)]
= (A²/2) {E[cos(ωc (t1 + t2) + 2 Θ)] + cos[ωc (t1 − t2)]}

As the first term on the RHS evaluates to zero, we have

RX(t1, t2) = (A²/2) cos[ωc (t1 − t2)]

As the ACF is only a function of the time difference, we can write

RX(t1, t2) = RX(t1 − t2) = RX(τ) = (A²/2) cos(ωc τ)

Note that X(t) is composed of sample functions that are periodic with period 1/fc. In accordance with property P4, we find that the ACF is also periodic with period 1/fc. Of course, it is also an even function of τ (property P2).
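Both ensemble averages of Example 3.5 can be checked numerically by replacing the uniform random phase Θ with a fine, equally spaced grid of phase values (the amplitude and frequency below are arbitrary choices for the sketch):

```python
import numpy as np

A, fc = 2.0, 5.0                 # illustrative amplitude and frequency (Hz)
wc = 2 * np.pi * fc
# Equally spaced phases stand in for the uniform ensemble over Theta:
theta = np.linspace(0.0, 2 * np.pi, 1000, endpoint=False)


def m_X(t):
    """Ensemble mean at time t: average of A*cos(wc*t + theta) over Theta."""
    return np.mean(A * np.cos(wc * t + theta))


def R_X(t1, t2):
    """Ensemble autocorrelation E[X(t1) X(t2)]."""
    return np.mean(A * np.cos(wc * t1 + theta) * A * np.cos(wc * t2 + theta))


t1, t2 = 0.3, 0.17
print(m_X(t1))                                              # numerically 0
print(R_X(t1, t2), (A**2 / 2) * np.cos(wc * (t1 - t2)))     # the two agree
```

Because the cosines are summed over a full period of equally spaced phases, the double-frequency term averages out exactly, matching the analytical result RX(τ) = (A²/2) cos(ωc τ).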

Exercise 3.3
For the process X(t) of Example 3.5, let ωc = 2 π. Let Y be the random variable obtained by sampling X(t) at t = 1/4. Find fY(y).

Exercise 3.4
Let X(t) = A cos(ωc t + Θ), where A and ωc are constants and Θ is a random variable uniformly distributed in the range 0 ≤ θ ≤ π. Show that the process is not WSS.

Exercise 3.5
Let Z(t) = X cos(ωc t) + Y sin(ωc t), where X and Y are independent Gaussian variables, each with zero mean and unit variance. Show that Z(t) is WSS and RZ(τ) = cos(ωc τ). Let Z(t1) = V. Show that V is N(0, 1).

Example 3.6
In this example, we shall find the ACF of the random impulse train process specified by

X(t) = Σ_n An δ(t − n T − Td)

where the amplitude An is a random variable with P(An = 1) = P(An = − 1) = 1/2. Successive amplitudes are statistically independent. The time interval Td of the first impulse from the origin has a uniform PDF in the range (0, T):

fTd(td) = 1/T, 0 ≤ td ≤ T, and zero elsewhere

Impulses are spaced T seconds

apart, and An is independent of Td. (The symbol Σ_n indicates summation with respect to n, where n, an integer, ranges over (− ∞, ∞).) A typical sample function of the process X(t) is shown in Fig. 3.11.

Fig. 3.11: Sample function of the random impulse train process

RX(t + τ, t) = E{[Σ_m Am δ(t − m T + τ − Td)][Σ_n An δ(t − n T − Td)]}
= E[Σ_m Σ_n Am An δ(t − m T + τ − Td) δ(t − n T − Td)]

As An is independent of Td, we can write (after interchanging the order of expectation and summation)

RX(t + τ, t) = Σ_m Σ_n E[Am An] E[δ(t − m T + τ − Td) δ(t − n T − Td)]

where

E[Am An] = 1, m = n
         = 0, otherwise

This is because when m = n, E[Am An] = E[Am²] = (1/2)(1)² + (1/2)(− 1)² = 1. If m ≠ n, then E[Am An] = E[Am] E[An], as successive amplitudes are statistically independent; but E[Am] = E[An] = (1/2)(1) + (1/2)(− 1) = 0.

Using this result, we have

RX(t + τ, t) = E[Σ_n δ(t − n T + τ − Td) δ(t − n T − Td)]
= Σ_n (1/T) ∫_0^T δ(t − n T + τ − td) δ(t − n T − td) dtd

(Note that Td is uniformly distributed in the range 0 to T.) Let t − n T − td = x. Then we have

RX(t + τ, t) = Σ_n (1/T) ∫_{t − (n + 1) T}^{t − n T} δ(x) δ(x + τ) dx
= (1/T) ∫_{− ∞}^{+ ∞} δ(x) δ(x + τ) dx

Letting y = − x,

RX(t + τ, t) = (1/T) ∫_{− ∞}^{∞} δ(− y) δ(τ − y) dy = (1/T) ∫_{− ∞}^{∞} δ(y) δ(τ − y) dy
= (1/T) [δ(τ) ∗ δ(τ)] = (1/T) δ(τ)

That is, the ACF is a function of τ alone, and it is an impulse!

It is to be pointed out that in the case of a random process, we can also define time averages, such as the time-averaged mean value or the time-averaged ACF, whose calculation is based on the individual sample functions. There are certain processes, called ergodic processes, where it is possible to interchange the corresponding ensemble and time averages. More details on ergodic processes can be found in [2].

3.4.2 Cross-correlation
Consider two random processes X(t) and Y(t). We define the two cross-correlation functions of X(t) and Y(t) as follows:

RX, Y(t1, t2) = E[X(t1) Y(t2)]

RY, X(t1, t2) = E[Y(t1) X(t2)]

where t1 and t2 are the two time instants at which the processes are observed.

Def. 3.6: Two processes are said to be jointly wide-sense stationary if
i) X(t) and Y(t) are WSS, and
ii) RX, Y(t1, t2) = RX, Y(t1 − t2) = RX Y(τ)

Cross-correlation is not generally an even function of τ, as is the case with the ACF, nor does it have a maximum at the origin. However, it does obey the following symmetry relationship:

RX Y(τ) = RY X(− τ)    (3.8)

Def. 3.7: Two random processes X(t) and Y(t) are called (mutually) orthogonal if RX Y(t1, t2) = 0 for every t1 and t2.

Def. 3.8: Two random processes X(t) and Y(t) are uncorrelated if CX Y(t1, t2) = RX, Y(t1, t2) − mX(t1) mY(t2) = 0 for every t1 and t2.

Exercise 3.6
Show that the cross-correlation function satisfies the following inequalities:
a) |RX Y(τ)| ≤ √(RX(0) RY(0))
b) |RX Y(τ)| ≤ (1/2)[RX(0) + RY(0)]
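The symmetry relation of Eq. 3.8 can be illustrated numerically. As an assumed example (not taken from the text), let X(t) = cos(ωc t + Θ) and Y(t) = sin(ωc t + Θ) share a common uniform phase Θ, approximated by a fine phase grid:

```python
import numpy as np

wc = 2 * np.pi * 3.0                       # arbitrary angular frequency
# Equally spaced phases stand in for the uniform ensemble over Theta:
theta = np.linspace(0.0, 2 * np.pi, 1000, endpoint=False)


def R_XY(tau, t=0.4):
    """E[X(t + tau) Y(t)] with X = cos(wc t + Theta), Y = sin(wc t + Theta)."""
    return np.mean(np.cos(wc * (t + tau) + theta) * np.sin(wc * t + theta))


def R_YX(tau, t=0.4):
    """E[Y(t + tau) X(t)]."""
    return np.mean(np.sin(wc * (t + tau) + theta) * np.cos(wc * t + theta))


tau = 0.09
print(R_XY(tau), R_YX(-tau))   # the two values agree: R_XY(tau) = R_YX(-tau)
```

Note that R_XY(τ) here equals −(1/2) sin(ωc τ): it is odd, not even, and it vanishes at τ = 0, illustrating that the cross-correlation need not peak at the origin.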

3.5 Systems with Random Signal Excitation
In Chapter 1, we discussed the transmission of deterministic signals through linear systems and developed the relations for the input-output spectral densities. We shall now develop the analogous relationships for the case when a linear time-invariant system is excited by random signals.

Consider the scheme shown in Fig. 3.12; h(t) represents the (known) impulse response of a linear time-invariant system that is excited by a random process X(t), resulting in the output process Y(t).

Fig. 3.12: Transmission of a random process through a linear filter

We shall now try to characterize the output process Y(t) in terms of the input process X(t) and the impulse response h(t) [third method of specification]. Specifically, we would like to develop the relations for mY(t) and RY(t1, t2) when X(t) is WSS.

Let xj(t) be a sample function of X(t) which is applied as input to the linear time-invariant system, and let yj(t) be the corresponding output, where yj(t) belongs to Y(t). Then

yj(t) = ∫_{− ∞}^{∞} h(τ) xj(t − τ) dτ

As the above relation is true for every sample function of X(t), we can write

Y(t) = ∫_{− ∞}^{∞} h(τ) X(t − τ) dτ

Consider first the mean of the output process.

29 Indian Institute of Technology Madras . then E ⎡ ⎣ X ( t )⎤ ⎦ is a constant mX .10) and H ( f ) is the transfer function of the given system. Let us compute RY ( t .9a) Provided that E ⎡ ⎣ X ( t )⎤ ⎦ is finite for all t . so that Eq. = mX H (0 ) where H ( 0 ) = H ( f ) f = 0 We note that mY ( t ) is a constant. u − τ ) 2 2 X 1 2 If X ( t ) is WSS. we may interchange the order of the expectation and integration with respect to τ in Eq. we obtain RY ( t . interchanging the order of integration and expectation.Principles of Communication Prof. where t and u denote the time instants at which the output process is observed.9(a) and write ∞ mY ( t ) = −∞ ∫ h ( τ) E ⎡ ⎣ X ( t − τ )⎤ ⎦ dτ (3.9b) where we have used the fact that h ( τ ) is deterministic and can be brought outside the expectation. 3. u ) = E ⎡ ⎣Y ( t ) Y ( u ) ⎤ ⎦ = E ⎢ ∫ h ( τ1 ) X ( t − τ1 ) d τ1 ⎢− ∞ ⎣ ∞ ∞ ⎤ h X u d τ − τ τ ( ) ( ) 2 2⎥ ∫ 2 ⎥ −∞ ⎦ ∞ Again.9(b) can be simplified as ∞ mY ( t ) = mX −∞ ∫ h ( τ) d τ (3. We have. Venkata Rao ⎡∞ ⎤ ⎤ mY ( t ) = E ⎡ Y t E h X t d = τ − τ τ ( ) ( ) ( ) ⎢ ⎥ ⎣ ⎦ ∫ ⎢− ∞ ⎥ ⎣ ⎦ (3. V. If X ( t ) is WSS. then 3. and the system is stable. u ) . ⎡∞ RY ( t . 3. u ) = = −∞ ∞ ∫ d τ h(τ ) ∫ d τ h(τ ) E ⎡ ⎣ X ( t − τ ) X (u − τ )⎤ ⎦ 1 1 2 2 1 2 −∞ ∞ −∞ ∫ d τ1 h ( τ1 ) −∞ ∫ d τ h ( τ ) R (t − τ .

$$R_Y(t, u) = \int_{-\infty}^{\infty} d\tau_1\, h(\tau_1) \int_{-\infty}^{\infty} d\tau_2\, h(\tau_2)\, R_X(\tau - \tau_1 + \tau_2) \qquad (3.11)$$

where $\tau = t - u$. Eq. 3.11 implies that $R_Y(t, u)$ is only a function of $t - u$. Hence, Eqs. 3.10 and 3.11 together imply that if $X(t)$ is WSS, then so is $Y(t)$, and the LHS of Eq. 3.11 can be written as $R_Y(\tau)$.

From Eq. 3.11, we have

$$E\big[Y^2(t)\big] = R_Y(0) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} h(\tau_1)\, h(\tau_2)\, R_X(\tau_2 - \tau_1)\, d\tau_1\, d\tau_2$$

But $h(\tau_1) = \int_{-\infty}^{\infty} H(f)\, e^{j 2 \pi f \tau_1}\, df$. Hence,

$$E\big[Y^2(t)\big] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \Big[H(f)\, e^{j 2 \pi f \tau_1}\, df\Big]\, h(\tau_2)\, R_X(\tau_2 - \tau_1)\, d\tau_1\, d\tau_2 = \int_{-\infty}^{\infty} H(f)\, df \int_{-\infty}^{\infty} h(\tau_2)\, d\tau_2 \int_{-\infty}^{\infty} R_X(\tau_2 - \tau_1)\, e^{j 2 \pi f \tau_1}\, d\tau_1$$

Let $\tau_2 - \tau_1 = \lambda$; that is, $\tau_1 = \tau_2 - \lambda$. Then

$$E\big[Y^2(t)\big] = \int_{-\infty}^{\infty} H(f)\, df \int_{-\infty}^{\infty} h(\tau_2)\, e^{j 2 \pi f \tau_2}\, d\tau_2 \int_{-\infty}^{\infty} R_X(\lambda)\, e^{-j 2 \pi f \lambda}\, d\lambda$$

The second integral above is $H^*(f)$, the complex conjugate of $H(f)$. The third integral will be a function of $f$, which we shall denote by $S_X(f)$. Then

$$E\big[Y^2(t)\big] = \int_{-\infty}^{\infty} S_X(f)\, \big|H(f)\big|^2\, df \qquad (3.12)$$

3.6 Power Spectral Density

The notion of Power Spectral Density (PSD) is an important and useful one. It provides the frequency domain description of a stationary (at least WSS) random process.

Here, $S_X(f) = \int_{-\infty}^{\infty} R_X(\lambda)\, e^{-j 2 \pi f \lambda}\, d\lambda$ is the quantity introduced in Eq. 3.12. We will now justify that $S_X(f)$ can be interpreted as the power spectral density of the WSS process $X(t)$.

Suppose that the process $X(t)$ is passed through an ideal narrowband band-pass filter with the passband centered at $f_c$ and having the amplitude response

$$|H(f)| = \begin{cases} 1, & |f \pm f_c| \le \dfrac{\Delta f}{2} \\[2mm] 0, & |f \pm f_c| > \dfrac{\Delta f}{2} \end{cases}$$

Then, from Eq. 3.12, we find that if the filter bandwidth $\Delta f$ is sufficiently small and $S_X(f)$ is a continuous function, the mean square value of the filter output is approximately

$$E\big[Y^2(t)\big] \simeq (2\, \Delta f)\, S_X(f_c)$$

The filter, however, passes only those frequency components of the input random process $X(t)$ that lie inside the narrow frequency band of width $\Delta f$ centered about $\pm f_c$. Thus $S_X(f_c)$ represents the frequency density of the average power in the process $X(t)$, evaluated at $f = f_c$. The dimensions of $S_X(f)$ are watts/Hz.

3.6.1 Properties of power spectral density

The PSD $S_X(f)$ and the ACF $R_X(\tau)$ of a WSS process $X(t)$ form a Fourier transform pair, and are given by

$$S_X(f) = \int_{-\infty}^{\infty} R_X(\tau)\, e^{-j 2 \pi f \tau}\, d\tau \qquad (3.13)$$

$$R_X(\tau) = \int_{-\infty}^{\infty} S_X(f)\, e^{j 2 \pi f \tau}\, df \qquad (3.14)$$

Eqs. 3.13 and 3.14 are popularly known as the Wiener-Khinchine relations. Using this pair of equations, we shall derive some general properties of the PSD of a WSS random process.

P1) The zero-frequency value of the PSD of a WSS process equals the total area under the graph of the ACF; that is,

$$S_X(0) = \int_{-\infty}^{\infty} R_X(\tau)\, d\tau$$

This property follows directly from Eq. 3.13 by putting $f = 0$.

P2) The mean square value of a WSS process equals the total area under the graph of the PSD; that is,

$$E\big[X^2(t)\big] = \int_{-\infty}^{\infty} S_X(f)\, df$$

This property follows from Eq. 3.14 by putting $\tau = 0$ and noting that $R_X(0) = E\big[X^2(t)\big]$.

P3) The PSD is real and is an even function of frequency; that is, $S_X(f)$ is real and $S_X(-f) = S_X(f)$. This result is due to the property of the ACF, namely, $R_X(\tau)$ is real and even.

P4) The PSD of a WSS process is always non-negative; that is, $S_X(f) \ge 0$ for all $f$. To establish this, assume that $S_X(f)$ is negative for a certain frequency interval, say $(f_1,\, f_1 + \Delta f)$. Let $X(t)$ be the input to a narrowband filter with the transfer function characteristic

$$|H(f)| = \begin{cases} 1, & f_1 \le |f| \le f_1 + \Delta f \\ 0, & \text{otherwise} \end{cases}$$

Then, from Eq. 3.12, $E\big[Y^2(t)\big] = \int_{-\infty}^{\infty} S_X(f)\, |H(f)|^2\, df$, which would be a negative quantity; as a mean square value cannot be negative, this is a contradiction. Hence $S_X(f) \ge 0$.

We shall now derive an expression for the power spectral density $S_Y(f)$ of the output of Fig. 3.12. Using the relation $R_X(\tau) = F^{-1}\big[S_X(f)\big]$, Eq. 3.11 can be written as

$$R_Y(\tau) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} h(\tau_1)\, h(\tau_2) \left[\int_{-\infty}^{\infty} S_X(f)\, e^{j 2 \pi f (\tau - \tau_1 + \tau_2)}\, df\right] d\tau_1\, d\tau_2$$

$$= \int_{-\infty}^{\infty} \left[\int_{-\infty}^{\infty} h(\tau_1)\, e^{-j 2 \pi f \tau_1}\, d\tau_1\right] \left[\int_{-\infty}^{\infty} h(\tau_2)\, e^{j 2 \pi f \tau_2}\, d\tau_2\right] S_X(f)\, e^{j 2 \pi f \tau}\, df$$

$$= \int_{-\infty}^{\infty} H(f) \cdot H^*(f)\, S_X(f)\, e^{j 2 \pi f \tau}\, df = \int_{-\infty}^{\infty} \Big[|H(f)|^2\, S_X(f)\Big]\, e^{j 2 \pi f \tau}\, df \qquad (3.15)$$

As $R_Y(\tau) \leftrightarrow S_Y(f)$, Eq. 3.15 implies

$$S_Y(f) = S_X(f)\, \big|H(f)\big|^2 \qquad (3.16)$$

Note that when the input to an LTI system is deterministic, we have the input-output FT relationship $Y(f) = X(f)\, H(f)$; the corresponding time domain relationship is $y(t) = x(t) * h(t)$. Let $R_h(\tau)$ denote the ACF of $h(t)$. Then $R_h(\tau) \leftrightarrow |H(f)|^2$ (see P4, sec. 1.2). Hence

$$R_Y(\tau) = R_X(\tau) * R_h(\tau) = R_X(\tau) * h(\tau) * h^*(-\tau) \qquad (3.17a)$$

If the impulse response is real, then

$$R_Y(\tau) = R_X(\tau) * h(\tau) * h(-\tau) \qquad (3.17b)$$
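Eq. 3.16 lends itself to a quick numerical check. The sketch below (NumPy; the 3-tap impulse response and all parameters are illustrative, not from the text) passes discrete-time white noise through an LTI filter and compares the averaged periodogram of the output with $S_X(f)\,|H(f)|^2$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Discrete-time white input: S_X(f) = sigma2 (flat), so Eq. 3.16 predicts
# S_Y(f) = sigma2 * |H(f)|^2 for any stable LTI filter h.
sigma2 = 1.0
h = np.array([0.5, 0.3, 0.2])          # illustrative impulse response
nseg, seglen = 1000, 256

# Average periodograms of the filter output over independent segments
S_Y_est = np.zeros(seglen)
for _ in range(nseg):
    x = rng.normal(0.0, np.sqrt(sigma2), seglen + len(h) - 1)
    y = np.convolve(x, h, mode="valid")          # steady-state output segment
    S_Y_est += np.abs(np.fft.fft(y)) ** 2 / seglen
S_Y_est /= nseg

H = np.fft.fft(h, seglen)
S_Y_pred = sigma2 * np.abs(H) ** 2             # Eq. 3.16 prediction

rel_err = np.max(np.abs(S_Y_est - S_Y_pred)) / np.max(S_Y_pred)
print(round(rel_err, 2))
```

The averaged periodogram fluctuates around the predicted spectrum, and the agreement tightens as more segments are averaged.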

We shall now take up a few examples involving the computation of the PSD.

Example 3.7

For the random process $X(t)$ of Example 3.5, let us find the PSD. Since $R_X(\tau) = \dfrac{A^2}{2} \cos(\omega_c \tau)$, we have

$$S_X(f) = \frac{A^2}{4}\big[\delta(f - f_c) + \delta(f + f_c)\big]$$

Example 3.8: (Modulated Random Process)

Let $Y(t) = X(t)\cos(\omega_c t + \Theta)$, where $X(t)$ is a WSS process with known $R_X(\tau)$ and $S_X(f)$, and $\Theta$ is a uniformly distributed random variable in the range $(0,\, 2\pi)$; $X(t)$ and $\Theta$ are independent. Let us find the ACF and PSD of $Y(t)$.

$$R_Y(\tau) = E\big[Y(t + \tau)\, Y(t)\big] = E\Big\{X(t + \tau)\cos\big[\omega_c(t + \tau) + \Theta\big]\, X(t)\cos(\omega_c t + \Theta)\Big\}$$

As $X(t)$ and $\Theta$ are independent, we have

$$R_Y(\tau) = E\big[X(t + \tau)\, X(t)\big]\, E\big[\cos(\omega_c(t + \tau) + \Theta)\cos(\omega_c t + \Theta)\big] = \frac{1}{2}\, R_X(\tau)\cos(\omega_c \tau)$$

$$S_Y(f) = F\big[R_Y(\tau)\big] = \frac{1}{4}\big[S_X(f - f_c) + S_X(f + f_c)\big]$$

Example 3.9: (Random Binary Wave)

Fig. 3.13 shows the sample function $x_j(t)$ of a process $X(t)$ consisting of a random sequence of binary symbols 1 and 0. It is assumed that:

1. The symbols 1 and 0 are represented by pulses of amplitude $+A$ and $-A$ volts, respectively, and duration $T$ seconds.

2. The pulses are not synchronized, so that the starting time $t_d$ of the first pulse is equally likely to lie anywhere between zero and $T$ seconds. That is, $t_d$ is the sample value of a uniformly distributed random variable $T_d$ with its probability density function defined by

$$f_{T_d}(t_d) = \begin{cases} \dfrac{1}{T}, & 0 \le t_d \le T \\[2mm] 0, & \text{otherwise} \end{cases}$$

3. During any time interval $(n - 1)T \le t - t_d \le nT$, where $n$ is an integer, the presence of a 1 or a 0 is determined by tossing a fair coin; specifically, if the outcome is 'heads', we have a 1 and if the outcome is 'tails', we have a 0. These two symbols are thus equally likely, and the presence of a 1 or 0 in any one interval is independent of all other intervals.

Fig. 3.13: Random binary wave

We shall compute $S_X(f)$ and $R_X(\tau)$. The above process can be generated by passing the random impulse train (Example 3.6) through a filter with the impulse response

$$h(t) = \begin{cases} A, & 0 \le t \le T \\ 0, & \text{elsewhere} \end{cases}$$

The corresponding transfer function is

$$H(f) = A\, T\, \mathrm{sinc}(f\, T)$$

Therefore,

$$S_X(f) = \frac{1}{T}\, A^2\, T^2\, \mathrm{sinc}^2(f\, T) = A^2\, T\, \mathrm{sinc}^2(f\, T)$$

We note that the energy spectral density of a rectangular pulse $x(t)$ of amplitude $A$ and duration $T$ is $E_x(f) = A^2\, T^2\, \mathrm{sinc}^2(f\, T)$. Hence $S_X(f) = \dfrac{E_x(f)}{T}$.

Taking the inverse Fourier transform, we have

$$R_X(\tau) = \begin{cases} A^2 \left(1 - \dfrac{|\tau|}{T}\right), & |\tau| \le T \\[2mm] 0, & \text{otherwise} \end{cases}$$

The ACF of the random binary wave process can also be computed by direct time domain arguments. The interested reader is referred to [3].

Exercise 3.7

In the random binary wave process of Example 3.9, let 1 be represented by a pulse of amplitude $A$ and duration $T$ sec. The binary zero is indicated by the absence of any pulse. The rest of the description of the process is the same as in Example 3.9. Show that

$$S_X(f) = \frac{A^2}{4}\left[\delta(f) + \frac{\sin^2(\pi f T)}{\pi^2 f^2 T}\right]$$

Exercise 3.8

The input voltage to an RLC series circuit is a WSS process $X(t)$ with $\overline{X(t)} = 2$ and $R_X(\tau) = 4 + e^{-2|\tau|}$. Let $Y(t)$ be the voltage across the capacitor. Find

a) $\overline{Y(t)}$

b) $S_Y(f)$
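Returning to the random binary wave of Example 3.9, the triangular ACF can be checked by a short simulation. A sketch (NumPy), assuming ergodicity so that a time average over one long sample function estimates $R_X(\tau)$; the amplitude and oversampling factor are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

A, M = 1.0, 20                 # amplitude, samples per bit duration T
nbits = 50_000

# Random binary wave: levels +/-A, equally likely, independent bits,
# with a random starting offset t_d uniform over one bit duration
bits = rng.choice([-A, A], size=nbits)
x = np.repeat(bits, M)[rng.integers(M):]

def acf(x, k):
    # time-average estimate of R_X at lag tau = k * T/M
    return np.mean(x[:len(x) - k] * x[k:])

for k in (0, 5, 10, 20, 30):
    pred = A**2 * max(0.0, 1.0 - k / M)    # A^2 (1 - |tau|/T) for |tau| <= T
    print(k, round(acf(x, k), 3), round(pred, 2))
```

The estimates decay linearly to zero at one bit duration ($k = M$) and stay near zero beyond, matching the triangular form above.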

3.7 Cross-Spectral Density

Just as the PSD of a process provides a measure of the frequency distribution of the given process, the cross-spectral density provides a measure of the frequency interrelationship between two processes. Let $X(t)$ and $Y(t)$ be two jointly WSS random processes with cross-correlation functions $R_{XY}(\tau)$ and $R_{YX}(\tau)$. Then, we define $S_{XY}(f)$ and $S_{YX}(f)$ as follows:

$$S_{XY}(f) = \int_{-\infty}^{\infty} R_{XY}(\tau)\, e^{-j 2 \pi f \tau}\, d\tau \qquad (3.18)$$

$$S_{YX}(f) = \int_{-\infty}^{\infty} R_{YX}(\tau)\, e^{-j 2 \pi f \tau}\, d\tau \qquad (3.19)$$

That is, $R_{XY}(\tau) \leftrightarrow S_{XY}(f)$ and $R_{YX}(\tau) \leftrightarrow S_{YX}(f)$.

The cross-spectral density is in general complex. Even if it is real, it need not be positive. (This will become clear from the examples that follow.) However, as $R_{XY}(\tau) = R_{YX}(-\tau)$, we find

$$S_{XY}(f) = S_{YX}(-f) = S^*_{YX}(f)$$

We shall now give a few examples that involve the cross-spectrum.

Example 3.10

Let $Z(t) = X(t) + Y(t)$, where the random processes $X(t)$ and $Y(t)$ are jointly WSS and $\overline{X(t)} = \overline{Y(t)} = 0$. We will show that to find $S_Z(f)$, we require the cross-spectrum. We have

$$R_Z(t, u) = E\big[Z(t)\, Z(u)\big]$$

$$= E\big[\big(X(t) + Y(t)\big)\big(X(u) + Y(u)\big)\big] = R_X(t, u) + R_{XY}(t, u) + R_{YX}(t, u) + R_Y(t, u)$$

Letting $\tau = t - u$, we have

$$R_Z(\tau) = R_X(\tau) + R_{XY}(\tau) + R_{YX}(\tau) + R_Y(\tau) \qquad (3.20a)$$

Taking the Fourier transform, we have

$$S_Z(f) = S_X(f) + S_{XY}(f) + S_{YX}(f) + S_Y(f) \qquad (3.20b)$$

If $X(t)$ and $Y(t)$ are uncorrelated, that is, $R_{XY}(\tau) = \overline{X(t + \tau)\, Y(t)} = 0$, then

$$R_Z(\tau) = R_X(\tau) + R_Y(\tau) \quad \text{and} \quad S_Z(f) = S_X(f) + S_Y(f)$$

Hence, we have the superposition of the autocorrelation functions as well as the superposition of the power spectral densities.

Example 3.11

Let $X(t)$ be the input to an LTI system with the impulse response $h(t)$. If the resulting output is $Y(t)$, let us find an expression for $S_{YX}(f)$. We have

$$R_{YX}(t, u) = E\big[Y(t)\, X(u)\big] = E\left[\int_{-\infty}^{\infty} h(\tau_1)\, X(t - \tau_1)\, d\tau_1\; X(u)\right]$$

Interchanging the order of expectation and integration, we have

$$R_{YX}(t, u) = \int_{-\infty}^{\infty} h(\tau_1)\, E\big[X(t - \tau_1)\, X(u)\big]\, d\tau_1$$

If $X(t)$ is WSS,

$$R_{YX}(t, u) = \int_{-\infty}^{\infty} h(\tau_1)\, R_X(t - u - \tau_1)\, d\tau_1$$

Letting $t - u = \tau$ gives the result

$$R_{YX}(\tau) = \int_{-\infty}^{\infty} h(\tau_1)\, R_X(\tau - \tau_1)\, d\tau_1 = h(\tau) * R_X(\tau) \qquad (3.21a)$$

That is, the cross-correlation between the output and the input is the convolution of the input ACF with the filter impulse response. Taking the Fourier transform,

$$S_{YX}(f) = S_X(f)\, H(f) \qquad (3.21b)$$

Eq. 3.21(b) tells us that $X(t)$ and $Y(t)$ will have strong cross-correlation in those frequency components where $S_X(f)\, H(f)$ is large.

Example 3.12

In the scheme shown (Fig. 3.14), $X(t)$ and $Y(t)$ are jointly WSS. Let us compute $S_{VZ}(f)$.

Fig. 3.14: Figure for the example 3.12

$$R_{VZ}(t, u) = E\big[V(t)\, Z(u)\big] = E\left\{\int_{-\infty}^{\infty} h_1(\tau_1)\, X(t - \tau_1)\, d\tau_1 \int_{-\infty}^{\infty} h_2(\tau_2)\, Y(u - \tau_2)\, d\tau_2\right\}$$

Interchanging the order of expectation and integration, we have

$$R_{VZ}(t, u) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} h_1(\tau_1)\, h_2(\tau_2)\, E\big[X(t - \tau_1)\, Y(u - \tau_2)\big]\, d\tau_1\, d\tau_2$$

As $X(t)$ and $Y(t)$ are jointly WSS, we have

$$R_{VZ}(t, u) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} h_1(\tau_1)\, h_2(\tau_2)\, R_{XY}(\tau - \tau_1 + \tau_2)\, d\tau_1\, d\tau_2$$

where $\tau = t - u$. That is, $R_{VZ}(t, u) = R_{VZ}(t - u) = R_{VZ}(\tau)$. It is not too difficult to show that

$$S_{VZ}(f) = H_1(f)\, H_2^*(f)\, S_{XY}(f) \qquad (3.22)$$

We shall now consider some special cases of Example 3.12.

i) Let $H_1(f)$ and $H_2(f)$ of Fig. 3.14 have non-overlapping passbands. Then $H_1(f)\, H_2^*(f) = 0$ for all $f$; that is, $S_{VZ}(f) \equiv 0$. This implies $R_{VZ}(\tau) \equiv 0$, and we have $V(t)$ and $Z(t)$ being orthogonal. In addition, the two random variables $V(t_i)$ and $Z(t_j)$, obtained from sampling the processes $V(t)$ at $t_i$ and $Z(t)$ at $t_j$ respectively, are uncorrelated. This is because at least one of the quantities $H_1(0)$ or $H_2(0)$ has to be zero, so that either $\overline{V(t)}$ or $\overline{Z(t)}$ (or both) will be zero (note that $\overline{V(t)} = H_1(0)\, m_X$ and $\overline{Z(t)} = H_2(0)\, m_Y$). That is, the processes $V(t)$ and $Z(t)$ are uncorrelated.

ii) Let $X(t) = Y(t)$, with $X(t)$ WSS. Then we have the situation shown in Fig. 3.15.

Fig. 3.15: Two LTI systems with a common input

Then, from Eq. 3.22, we obtain

$$S_{VZ}(f) = H_1(f)\, H_2^*(f)\, S_X(f) \qquad (3.23)$$

iii) In the scheme of Fig. 3.15, if $h_1(t) = h_2(t) = h(t)$, then we have the familiar result

$$S_V(f) = \big|H(f)\big|^2\, S_X(f)$$

3.8 Gaussian Process

Gaussian processes are of great practical and mathematical significance in the theory of communication. They are of great practical significance because a large number of noise processes encountered in the real world (and that interfere with the communication process) can be characterized as Gaussian processes. They are of great mathematical significance because Gaussian processes possess a neat mathematical structure which makes their analytical treatment quite feasible.

Let a random process $X(t)$ be sampled at time instants $t_1, t_2, \ldots, t_n$. Let $X(t_1) = X_1,\ X(t_2) = X_2, \ldots,\ X(t_n) = X_n$, and let $\mathbf{X}$ denote the row vector $(X_1, X_2, \ldots, X_n)$. The process $X(t)$ is called Gaussian if $f_{\mathbf{X}}(\mathbf{x})$ is an n-dimensional joint Gaussian density for every $n \ge 1$ and every $(t_1, t_2, \ldots, t_n) \in (-\infty, \infty)$. The n-dimensional Gaussian PDF is given by

$$f_{\mathbf{X}}(\mathbf{x}) = \frac{1}{(2\pi)^{n/2}\, |C_X|^{1/2}}\, \exp\left[-\frac{1}{2}\, (\mathbf{x} - \mathbf{m}_X)\, C_X^{-1}\, (\mathbf{x} - \mathbf{m}_X)^T\right] \qquad (3.24)$$

where $C_X$ is the covariance matrix, $\mathbf{m}_X$ is the mean vector, $|C_X|$ denotes the determinant of $C_X$ and $C_X^{-1}$ is its inverse; $(\mathbf{x} - \mathbf{m}_X) = \big(x_1 - \overline{X}_1,\ x_2 - \overline{X}_2,\ \ldots,\ x_n - \overline{X}_n\big)$, and the superscript $T$ denotes the matrix transpose. The covariance matrix $C_X$ is given by

$$C_X = \begin{pmatrix} \lambda_{11} & \lambda_{12} & \cdots & \lambda_{1n} \\ \lambda_{21} & \lambda_{22} & \cdots & \lambda_{2n} \\ \vdots & & & \vdots \\ \lambda_{n1} & \lambda_{n2} & \cdots & \lambda_{nn} \end{pmatrix}$$

where $\lambda_{ij} = \mathrm{cov}\big[X_i,\, X_j\big] = E\Big[\big(X_i - \overline{X}_i\big)\big(X_j - \overline{X}_j\big)\Big]$.

Specification of a Gaussian process as above corresponds to the first method mentioned earlier. Note that an n-dimensional Gaussian PDF depends only on the means and the covariance quantities of the random variables under consideration. If the mean value of the process is constant and $\lambda_{ij} = \mathrm{cov}\big[X(t_i),\, X(t_j)\big]$ depends only on $t_i - t_j$, then the joint PDF is independent of the time origin. In other words, a WSS Gaussian process is also stationary in the strict sense.

To illustrate the use of the matrix notation, let us take an example of a joint Gaussian PDF with $n = 2$.

Example 3.13

$X_1$ and $X_2$ are two jointly Gaussian variables with $\overline{X}_1 = \overline{X}_2 = 0$ and $\sigma_1 = \sigma_2 = \sigma$. The correlation coefficient between $X_1$ and $X_2$ is $\rho$. Write the joint PDF of $X_1$ and $X_2$ in i) the matrix form and ii) the expanded form.

i) As $\lambda_{11} = \lambda_{22} = \sigma^2$ and $\lambda_{12} = \lambda_{21} = \rho\, \sigma^2$, we have

$$C_X = \begin{pmatrix} \sigma^2 & \rho\,\sigma^2 \\ \rho\,\sigma^2 & \sigma^2 \end{pmatrix}, \qquad |C_X| = \sigma^4\big(1 - \rho^2\big)$$

$$C_X^{-1} = \frac{1}{\sigma^4\big(1 - \rho^2\big)} \begin{pmatrix} \sigma^2 & -\rho\,\sigma^2 \\ -\rho\,\sigma^2 & \sigma^2 \end{pmatrix} = \frac{1}{\sigma^2\big(1 - \rho^2\big)} \begin{pmatrix} 1 & -\rho \\ -\rho & 1 \end{pmatrix}$$

Therefore,

$$f_{X_1, X_2}(x_1, x_2) = \frac{1}{2\pi\sigma^2 \sqrt{1 - \rho^2}}\, \exp\left\{\frac{-1}{2\sigma^2\big(1 - \rho^2\big)}\, (x_1\ \ x_2) \begin{pmatrix} 1 & -\rho \\ -\rho & 1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}\right\}$$

ii) Taking the matrix products in the exponent above, we have the expanded result

$$f_{X_1, X_2}(x_1, x_2) = \frac{1}{2\pi\sigma^2 \sqrt{1 - \rho^2}}\, \exp\left\{-\frac{x_1^2 - 2\rho\, x_1 x_2 + x_2^2}{2\sigma^2\big(1 - \rho^2\big)}\right\}, \quad -\infty < x_1, x_2 < \infty$$

This is the same as the bivariate Gaussian PDF of Chapter 2, section 6, with $x_1 = x$, $x_2 = y$, $m_X = m_Y = 0$ and $\sigma_X = \sigma_Y = \sigma$.

Example 3.14

$X(t)$ is a Gaussian process with $m_X(t) = 3$ and $C_X(t_1, t_2) = 4\, e^{-0.2\, |t_1 - t_2|} = 4\, e^{-0.2\, |\tau|}$. Let us find, in terms of $Q(\cdot)$:

a) $P\big[X(5) \le 2\big]$

b) $P\big[\,|X(8) - X(5)| \le 1\big]$

a) Let $Y$ be the random variable obtained by sampling $X(t)$ at $t = 5$. $Y$ is Gaussian with mean value 3 and variance $C_X(0) = 4$; that is, $Y$ is $N(3, 4)$. Hence

$$P[Y \le 2] = P\left[\frac{Y - 3}{2} \le -\frac{1}{2}\right] = Q(0.5)$$

b) Let $Z = X(8) - X(5)$; with $Y = X(8)$ and $W = X(5)$, note that $Z = Y - W$ is Gaussian. We have $\overline{Z} = 0$ and

$$\sigma_Z^2 = \sigma_Y^2 + \sigma_W^2 - 2\,\rho_{YW}\,\sigma_Y\,\sigma_W = \sigma_Y^2 + \sigma_W^2 - 2\,\mathrm{cov}[Y, W] = 4 + 4 - 2 \times 4\, e^{-0.2 \times 3} = 8\big(1 - e^{-0.6}\big) \approx 3.61$$

Hence

$$P\big[|Z| \le 1\big] = 2\, P[0 \le Z \le 1] = 2\left\{\frac{1}{2} - P[Z > 1]\right\} = 1 - 2\, P\left[\frac{Z}{\sqrt{3.61}} > \frac{1}{\sqrt{3.61}}\right] = 1 - 2\, Q(0.53)$$
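The numbers in Example 3.14 are easy to verify with the standard identity $Q(x) = \tfrac{1}{2}\,\mathrm{erfc}\big(x/\sqrt{2}\big)$; a minimal check in Python:

```python
import math

def Q(x):
    # Gaussian tail probability: Q(x) = P[N(0,1) > x]
    return 0.5 * math.erfc(x / math.sqrt(2))

# a) X(5) ~ N(3, 4): P[X(5) <= 2] = Q(0.5)
p_a = Q(0.5)

# b) Z = X(8) - X(5): sigma_Z^2 = 4 + 4 - 2*4*exp(-0.6) = 8*(1 - exp(-0.6))
var_Z = 8 * (1 - math.exp(-0.6))
p_b = 1 - 2 * Q(1 / math.sqrt(var_Z))

print(round(var_Z, 3), round(p_a, 4), round(p_b, 4))
```

This evaluates $Q(0.5) \approx 0.309$ for part (a) and a probability of about 0.40 for part (b).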

Exercise 3.9

Let $\mathbf{X}$ be a zero-mean Gaussian vector with four components, $X_1$, $X_2$, $X_3$ and $X_4$. We can show that

$$\overline{X_1 X_2 X_3 X_4} = \overline{X_1 X_2} \cdot \overline{X_3 X_4} + \overline{X_1 X_3} \cdot \overline{X_2 X_4} + \overline{X_1 X_4} \cdot \overline{X_2 X_3}$$

The above formula can also be used to compute moments such as $\overline{X_1^4}$, $\overline{X_1^2 X_2^2}$, etc. For example, $\overline{X_1^4}$ can be computed as

$$\overline{X_1^4} = \overline{X_1 X_1 X_1 X_1} = 3\, \sigma_1^4$$

Similarly,

$$\overline{X_1^2 X_2^2} = \overline{X_1 X_1 X_2 X_2} = \sigma_1^2\, \sigma_2^2 + 2\left(\overline{X_1 X_2}\right)^2$$

A zero-mean stationary Gaussian process is sampled at $t = t_1$ and $t_2$. Let $X_1$ and $X_2$ denote the corresponding random variables. The covariance matrix of $X_1$ and $X_2$ is

$$C_X = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$$

Show that $E\big[X_1^2 X_2^2\big] = 6$.

We shall now state (without proof) some of the important properties of $n$ jointly Gaussian random variables. (These properties are generalizations of the properties mentioned for the bivariate case in Chapter 2.)

P1) If $\mathbf{X} = (X_1, X_2, \ldots, X_n)$ are jointly Gaussian, then any subset of them are jointly Gaussian.

P2) The $X_i$'s are statistically independent if their covariance matrix is diagonal; that is, $\lambda_{ij} = \sigma_i^2\, \delta_{ij}$, where

$$\delta_{ij} = \begin{cases} 1, & i = j \\ 0, & i \ne j \end{cases}$$

P3) Let $\mathbf{Y} = (Y_1, Y_2, \ldots, Y_n)$ be a set of vectors obtained from $\mathbf{X} = (X_1, X_2, \ldots, X_n)$ by means of the linear transformation

$$\mathbf{Y}^T = \mathbf{A}\, \mathbf{X}^T + \mathbf{a}^T$$

where $\mathbf{A}$ is any $n \times n$ matrix and $\mathbf{a} = (a_1, a_2, \ldots, a_n)$ is a vector of constants $\{a_i\}$. If $\mathbf{X}$ is jointly Gaussian, then so is $\mathbf{Y}$.

Exercise 3.10

Let $\mathbf{Y}^T = \mathbf{A}\, \mathbf{X}^T + \mathbf{a}^T$, where the notation is from P3) above. Show that

$$C_Y = \mathbf{A}\, C_X\, \mathbf{A}^T$$

As a consequence of P3) above, we find that if a Gaussian process $X(t)$ is input to a linear system, then the output $Y(t)$ is also a Gaussian process. We shall make use of this result in developing the properties of narrowband noise.

3.9 Electrical Noise

Electronic communication systems are made up of circuit elements such as $R$, $L$ and $C$, and devices like diodes, transistors, etc. All these components give rise to what is known as internal circuit noise. It is this noise that sets the fundamental limits on communication of acceptable quality. Two of the important internal circuit noise varieties are:

i) Thermal noise

ii) Shot noise

Historically, Johnson and Nyquist first studied thermal noise in metallic resistors; hence it is also known as Johnson noise or resistance noise. The random motion of the free electrons in a conductor caused by thermal agitation gives rise to a voltage $V(t)$ at the open ends of the conductor, and is present whether or not an electrical field is applied. Consistent with the central limit theorem, $V(t)$ is a Gaussian process with zero mean, and its variance is a function of $R$ and $T$, where $R$ is the value of the resistance and $T$ is the temperature of $R$. It has also been found that the spectral density of $V(t)$ (in $(\text{Volts})^2/\text{Hz}$), denoted $S_V(f)$, is essentially constant for $|f| \le 10^{12}$ Hz. This constant value of $S_V(f)$ is given by

$$S_V(f) = 2\, R\, k\, T \quad \text{V}^2/\text{Hz} \qquad (3.25)$$

where

$T$: the temperature of $R$, in degrees Kelvin ($^\circ$K $= {}^\circ$C $+ 273$)

$k$: Boltzmann's constant $= 1.38 \times 10^{-23}$ Joules/$^\circ$K

It is to be remembered that Eq. 3.25 is valid only up to a certain frequency limit. However, this limit is much, much higher than the frequency range of interest to us ($10^{12}$ Hz is already in the infrared region of the EM spectrum).

If this open-circuit voltage is measured with the help of a true RMS voltmeter of bandwidth $B$ (frequency range: $-B$ to $B$), then the reading on the instrument would be $\sqrt{2\, R\, k\, T \cdot 2B} = \sqrt{4\, R\, k\, T\, B}$ V, if $T$ is $290\,^\circ$K or $63\,^\circ$F ($290\,^\circ$K is taken as the standard room temperature).

Thermal noise sources are also characterized in terms of available noise PSD.

Def. 3.9: Available noise PSD is the maximum PSD that can be delivered by a source.
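As a quick plug-in of the voltmeter-reading formula above, with illustrative values that are not from the text (a 10 kΩ resistor at the standard room temperature, measured over a 1 MHz bandwidth):

```python
import math

k = 1.38e-23                  # Boltzmann's constant, J/K
R, T, B = 10e3, 290.0, 1e6    # illustrative: 10 kOhm, 290 K, 1 MHz

# True-RMS voltmeter reading over (-B, B): sqrt(2RkT * 2B) = sqrt(4RkTB)
v_rms = math.sqrt(4 * R * k * T * B)
print(f"{v_rms * 1e6:.2f} uV rms")
```

The reading works out to roughly a dozen microvolts, which is why thermal noise matters only for very weak received signals.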

Let us treat the resistor to be noise-free, but in series with a noise source with $S_V(f) = 2\, R\, k\, T$. In other words, though the noise voltage is produced by $R$, we have a noise source with the source resistance $R$ (Fig. 3.16(a)).

Fig. 3.16: (a) Resistor as noise source (b) The noise source driving a load

We know that the maximum power is transferred to the load when the load is matched to the source impedance. Hence, the required load resistance is $R$ (Fig. 3.16(b)). The transfer function of this voltage divider network is $\frac{1}{2}$ (i.e., $\frac{R}{R + R}$). Hence,

$$\text{available PSD} = \frac{S_V(f)\, |H(f)|^2}{R} = \frac{2\, R\, k\, T}{4\, R} = \frac{k\, T}{2} \quad \text{Watts/Hz} \qquad (3.26)$$

It is to be noted that the available PSD does not depend on $R$. Because of Eq. 3.26, the available power in a bandwidth of $B$ Hz is

$$P_a = \frac{k\, T}{2} \cdot 2B = k\, T\, B \qquad (3.27)$$

In many solid state devices (and of course, in vacuum tubes!) there exists a noise current mechanism called shot noise. In 1918, Schottky carried out the first theoretical study of the fluctuations in the anode current of a temperature

limited vacuum tube diode (this has been termed shot noise). He showed that at frequencies well below the reciprocal of the electron transit time (which extends up to a few Giga Hz), the spectral density of the mean square noise current, due to the randomness of emission of electrons from the cathode, is given by

$$S_I(f) = 2\, q\, I \quad (\text{Amp})^2/\text{Hz}$$

where $q$ is the electronic charge and $I$ is the average anode current.

Schottky's result also holds for the semiconductor diode. The current spectral density of shot noise in a p-n junction diode is given by

$$S_I(f) = 2\, q\, (I + 2\, I_S) \qquad (3.28)$$

where $I$ is the net DC current and $I_S$ is the reverse saturation current.

Shot noise (which is non-thermal in nature) occurs whenever charged particles cross a potential barrier. In p-n junction devices, fluctuations of current occur because of (i) randomness of the transit time across the space charge region separating the p and n regions, (ii) randomness in the generation and recombination of electron-hole pairs, and (iii) randomness in the number of charged particles that diffuse, etc. Here, we have an applied potential and there is an average flow in some direction; however, there are going to be fluctuations about this average flow, and it is these fluctuations that contribute a noise with a very wide spectrum.

The circuit of Fig. 3.16(a) could also be drawn with $S_I(f)$ for the source quantity. Then, we will have the Norton equivalent circuit with

$$S_I(f) = \frac{2\, R\, k\, T}{R^2} = \frac{2\, k\, T}{R} \quad (\text{Amp})^2/\text{Hz}$$

in parallel with the resistance $R$. (Note that the units of spectral density could be any one of the three, namely, (a) Watts/Hz, (b) $(\text{Volts})^2$/Hz and (c) $(\text{Amp})^2$/Hz.)

Semiconductor diodes, BJTs and JFETs have sources of shot noise in them.
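For a feel of the magnitudes, the two current spectral densities above can be evaluated for illustrative values (assumed, not from the text): a diode carrying 1 mA of DC current, and the Norton source of a 10 kΩ resistor at 290 K:

```python
q, k = 1.6e-19, 1.38e-23     # electronic charge (C), Boltzmann's constant (J/K)
I, T, R = 1e-3, 290.0, 10e3  # illustrative: 1 mA, 290 K, 10 kOhm

S_shot = 2 * q * I           # Schottky shot-noise density, A^2/Hz
S_thermal = 2 * k * T / R    # Norton thermal density 2kT/R, A^2/Hz

print(f"shot: {S_shot:.2e} A^2/Hz, thermal: {S_thermal:.2e} A^2/Hz")
```

For these particular values the shot-noise current density is a few hundred times the thermal density, which is why shot noise dominates in forward-biased junction devices.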

A BJT has two semiconductor junctions, and hence two sources of shot noise. In addition, it contributes thermal noise because of its internal ohmic resistance (such as the base resistance). A JFET has a reverse biased junction between the gate terminal and the semiconductor channel from the source to the drain. Hence, we have the gate shot noise and the channel thermal noise in a JFET. (Note that gate shot noise could get amplified when the device is used in a circuit.)

3.9.1 White noise

Eq. 3.25 and Eq. 3.28 indicate that we have noise sources with a flat spectral density, with frequencies extending up to the infrared region of the EM spectrum. The concept of white noise is an idealization of the above. Any noise quantity (thermal or non-thermal) which has a flat power spectrum (that is, it contains all frequency components in equal proportion) for $-\infty < f < \infty$ is called white noise, in analogy with white light. In addition to a flat spectrum, if the process happens to be Gaussian, we describe it as white Gaussian noise. Note that white and Gaussian are two different attributes: white noise need not be Gaussian noise, nor need Gaussian noise be white. Only when "whiteness" and "Gaussianity" exist simultaneously is the process qualified as a White Gaussian Noise (WGN) process.

We denote the PSD of a white noise process $W(t)$ as

$$S_W(f) = \frac{N_0}{2} \quad \text{watts/Hz} \qquad (3.29)$$

where the factor $\frac{1}{2}$ has been included to indicate that half the power is associated with the positive frequencies and half with the negative frequencies.
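In discrete time, a sequence of independent Gaussian samples plays the role of white noise: its power spectrum is flat and its sample autocorrelation is an impulse at lag zero. A small sketch (NumPy, illustrative parameters only):

```python
import numpy as np

rng = np.random.default_rng(5)

# Discrete-time "white" Gaussian samples
N = 1_000_000
w = rng.normal(0.0, 1.0, N)

# Normalized sample autocorrelation at the first few lags:
# expect ~1 at lag 0 and ~0 elsewhere (uncorrelated samples)
r = np.array([np.mean(w[:N - k] * w[k:]) for k in range(4)])
print(np.round(r, 3))
```

The non-zero-lag estimates are on the order of $1/\sqrt{N}$, i.e., consistent with zero correlation between distinct samples.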

White noise, whether Gaussian or not, must be fictitious (as is the case with everything that is "ideal") because its total mean power is infinity. The utility of the concept of white noise stems from the fact that such a noise, when passed through a linear filter for which

$$\int_{-\infty}^{\infty} |H(f)|^2\, df < \infty \qquad (3.30)$$

the filter output is a stationary zero-mean noise process $N(t)$ that is meaningful. (Note that a white noise process, by definition, is a zero-mean process.)

In so far as the power spectrum at the output is concerned, it makes little difference how the input power spectrum behaves outside of the passband. Hence, if the input noise spectrum is flat within the passband of the system, we might as well treat it to be white, as this does not affect the calculation of the output noise power; assuming the input to be 'white' will simplify the calculations. The condition implied by Eq. 3.30 is not too stringent, as we invariably deal with systems which are essentially band-limited and have a finite value for $|H(f)|$ within this band.

As $S_W(f) = \frac{N_0}{2}$, we have, for the ACF of white noise,

$$R_W(\tau) = \frac{N_0}{2}\, \delta(\tau) \qquad (3.31)$$

as shown in Fig. 3.17.

Fig. 3.17: Characterization of white noise: a) Power spectral density b) Autocorrelation function

Eq. 3.31 implies that any two samples of white noise, no matter how closely together in time they are taken, are uncorrelated. (Note that $R_W(\tau) = 0$ for $\tau \ne 0$.) If, in addition, the noise process is Gaussian, we find that any two samples of WGN are statistically independent. In a sense, WGN represents the ultimate in randomness!

Imagine that white noise is being displayed on an oscilloscope. In the case of the display of a deterministic waveform (such as a sinusoid), changing the time-base makes the waveform 'expand' or 'contract'. In the case of white noise, however, the waveform changes randomly from one instant to the next, no matter what time scale is used, and as such the display on the scope appears to be the same for all time instants. Though the waveforms on successive sweeps are different, the display on the oscilloscope always appears to be the same, no matter what sweep speed is used. If white noise drives a speaker, it

should sound monotonous because the waveform driving the speaker appears to be the same for all time instants.

As mentioned earlier, white noise is fictitious and, for all practical purposes, cannot be generated in the laboratory. However, it is possible to generate what is known as Band Limited White Noise (BLWN), which has a flat power spectral density for $|f| \le W$, where $W$ is finite. (BLWN sources are commercially available. The cost of the instrument depends on the required bandwidth, power level, etc.) We can give a partial demonstration (audio-visual) of the properties of white noise with the help of these sources. We now show some flash animation pictures of the spectral density and time waveforms of BLWN.

We will begin with the audio. By clicking on the speaker symbol, you will listen to the speaker output when it is driven by a BLWN source with the frequency range up to 110 kHz. (As the speaker response is limited only up to 15 kHz, the input to the speaker, for all practical purposes, is white.)

1. Picture 1 is the display put out by a spectrum analyzer when it is fed with a BLWN signal, band limited to 6 MHz. As can be seen from the display, the spectrum is essentially constant up to 6 MHz, and falls by 40 to 45 dB at about 7 MHz and beyond.

2. Picture 2 is the time domain waveform (as seen on an oscilloscope) when the 6 MHz BLWN signal is the input to the oscilloscope. Four sweep speeds have been provided for you to observe the waveform. These speeds are: 100 µsec/div, 200 µsec/div, 500 µsec/div and 50 µsec/div. As you switch from 100 µsec/div to 200 µsec/div to 500 µsec/div, you will find that the

display is just about the same. However, when you switch from 500 µsec/div to 50 µsec/div, there is a change, especially in the display level. Consider a 500 kHz sine wave being displayed on an oscilloscope whose sweep is set to 50 µsec/div. This will result in 25 cycles per division, which implies every cycle will be seen like a vertical line of appropriate height. Now consider a 20 kHz sine wave being displayed with the same sweep speed. This will result in 1 cycle/div, which implies we can see the fine structure of the waveform. (If the sweep speed were to be 500 µsec/div, then the 20 kHz tone would result in 10 cycles/div, which implies that the fine structure will not be seen clearly.) Hence, when we observe BLWN at 50 µsec/div, the fine structure of the low frequency components could interfere with the constant envelope display of the higher frequency components, thereby causing some reduction in the envelope level. (The spikes in the display are due to large amplitudes in the noise voltage, which occur with some non-zero probability.)

3. Picture 3 is the display from the spectrum analyzer when it is fed with a BLWN signal, band limited to 50 MHz. As can be seen from the display, the PSD is essentially constant up to 50 MHz and then starts falling, going 40 dB down by about 150 MHz.

4. Picture 4 is the time domain signal, as seen on an oscilloscope, when the input is the 50 MHz wide BLWN signal. You have again the choice of the same four sweep rates. This time you will observe that when you switch from 500 µsec/div to 50 µsec/div, the

This is because this signal has a much wider spectrum: the power in the frequency range |f| ≤ 100 kHz is only 0.002 P, where P is the total power of the BLWN signal. From the above, it is clear that as the BLWN tends towards white noise, the variations in the time waveform keep reducing, resulting in a steady picture on the oscilloscope no matter what sweep speed is used for the display.

Example 3.15: (White noise through an ILPF)

White Gaussian noise with the PSD of N0/2 is applied as input to an ideal LPF of bandwidth B. Find the ACF of the output process. Let Y be the random variable obtained from sampling the output process at t = 1 sec. Let us find fY(y).

Let N(t) denote the output noise process when the WGN process W(t) is the input. Then,

SN(f) = N0/2, for −B ≤ f ≤ B, and 0 otherwise.

Taking the inverse Fourier transform of SN(f), we have

RN(τ) = N0 B sinc(2Bτ)

A plot of RN(τ) is given in Fig. 3.18.
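The ACF just derived can be checked numerically by evaluating the inverse Fourier transform integral of the rectangular PSD directly; the following minimal sketch (N0 and B are arbitrary assumed values) confirms RN(0) = N0 B, the zero crossing at τ = 1/(2B), and agreement with the closed-form sinc expression:

```python
import numpy as np

N0, B = 2.0, 100.0                    # assumed PSD level N0 and bandwidth B (Hz)
f = np.linspace(-B, B, 200_001)       # frequency grid spanning the passband

def R_N(tau):
    # R_N(tau) = integral over |f| <= B of (N0/2) cos(2 pi f tau) df,
    # evaluated by trapezoidal integration
    y = (N0 / 2) * np.cos(2 * np.pi * f * tau)
    return np.sum((y[1:] + y[:-1]) * np.diff(f)) / 2

print(R_N(0.0))                       # N0*B = 200.0
print(R_N(1 / (2 * B)))               # ~0: a zero crossing of the sinc
print(R_N(0.003), N0 * B * np.sinc(2 * B * 0.003))  # matches closed form
```

Here `np.sinc(x)` is sin(πx)/(πx), which matches the sinc convention used in the text.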

Fig 3.18: RN(τ) of example 3.15

The filter output N(t) is a stationary Gaussian process. Hence the PDF is independent of the sampling time. As the variable Y is Gaussian, what we need is the mean value and the variance of Y. Evidently, E[Y] = 0 and

σY² = E[Y²] = RN(0) = N0 B

Hence Y is N(0, N0 B).

Note that the ACF passes through zeros at τ = n/(2B), where n = ±1, ±2, ... Hence any two random variables obtained by sampling the output process at times t1 and t2, such that t1 − t2 is a multiple of 1/(2B), are going to be statistically independent.

Example 3.16: (White Noise through an RC-LPF)

Let W(t) be the input to an RC-LPF. Let us find the ACF of the output N(t). If X and Y are two random variables obtained from sampling N(t) with a separation of 0.1 sec, let us find ρXY.

The transfer function of the RC-LPF is given by

H(f) = 1/(1 + j2πfRC)

Therefore,

SN(f) = (N0/2) · 1/(1 + (2πfRC)²)

and

RN(τ) = (N0/(4RC)) exp(−|τ|/RC)

ρXY = λXY/(σX σY), where λXY = E[XY] (note that E[N(t)] = 0 and σX = σY). Now,

E[XY] = RN(τ)|τ = 0.1 sec

E[X²] = RN(τ)|τ = 0 = N0/(4RC)

Hence, ρXY = e^(−0.1/RC).

Exercise 3.11

X(t) is a zero mean Gaussian process with RX(τ) = 1/(1 + τ²). Let Y(t) = X̂(t), where X̂(t) is the Hilbert transform of X(t). The processes X(t) and Y(t) are sampled at t = 1 and t = 2 sec. Let the corresponding random variables be denoted by (X1, X2) and (Y1, Y2) respectively.

a) Write the covariance matrix of the four variables X1, X2, Y1 and Y2.

b) Find the joint PDF of X2, Y1 and Y2.
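As a quick sanity check on the correlation coefficient just obtained, the ratio RN(0.1)/RN(0) can be computed from the ACF of the RC-LPF output; N0 and RC below are arbitrary assumed values:

```python
import numpy as np

N0, RC = 2.0, 0.05            # assumed values; RC in seconds

def R_N(tau):
    # ACF of the RC-LPF output: (N0 / (4 RC)) * exp(-|tau| / RC)
    return (N0 / (4 * RC)) * np.exp(-abs(tau) / RC)

rho_XY = R_N(0.1) / R_N(0.0)  # correlation of samples taken 0.1 s apart
print(rho_XY, np.exp(-0.1 / RC))   # identical: e^(-0.1/RC)
```

Note that N0 cancels in the ratio, so ρXY depends only on the time constant RC.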

Exercise 3.12

The impulse response of a filter (LTI system) is given by h(t) = δ(t) − α e^(−αt) u(t), where α is a positive constant. If the input to the filter is white Gaussian noise with the PSD of N0/2 watts/Hz,

a) find the output PSD;

b) show that the ACF of the output is (N0/2)[δ(τ) − (α/2) e^(−α|τ|)].

Exercise 3.13

White Gaussian noise process is applied as input to a zero-order-hold circuit with a delay of T sec. The output of the ZOH circuit is sampled at t = T. Let Y be the corresponding random variable. Find fY(y).

Exercise 3.14

Noise from a 10 kΩ resistor at room temperature is passed through an ILPF of bandwidth 2.5 MHz. The filtered noise is sampled every microsecond. Denoting the random variables corresponding to the two adjacent samples as Y1 and Y2, obtain the expression for the joint PDF of Y1 and Y2.

Let τ0 denote the de-correlation time, where τ0 is defined such that if |τ| > τ0, then RN(τ) ≤ 0.01 RN(0). Then, for the RC-LPF, τ0 = 4.61 RC. That is, if the output process N(t) is sampled at t1 and t2 such that |t1 − t2| ≥ 4.61 RC, then the random variables N(t1) and N(t2) will be essentially uncorrelated.
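The figure 4.61 RC follows from solving exp(−τ0/RC) = 0.01 for τ0, i.e. τ0 = RC ln(100); a one-line numerical check (RC normalized to 1):

```python
import numpy as np

RC = 1.0                       # time constant, normalized to 1 for convenience
tau0 = RC * np.log(100.0)      # solves exp(-tau0/RC) = 0.01
print(tau0)                    # about 4.61 RC
print(np.exp(-tau0 / RC))      # 0.01, as required by the definition
```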

If N(t) happens to be a Gaussian process, then N(t1) and N(t2) will be, for all practical purposes, independent.

In example 3.15, we found that the average output noise power is equal to N0 B, where B is the bandwidth of the ILPF. Similarly, in the case of example 3.16, we find the average output noise power to be N0/(4RC), where 1/(2πRC) is the 3-dB bandwidth of the RC-LPF. That is, in both cases, the average output noise power is proportional to some measure of the bandwidth. We may generalize this statement to include all types of low pass filters by defining the noise equivalent bandwidth as follows.

Suppose we have a source of white noise with the PSD SW(f) = N0/2 connected to an arbitrary low pass filter of transfer function H(f). The resulting average output noise power N is given by

N = (N0/2) ∫ from −∞ to ∞ of |H(f)|² df = N0 ∫ from 0 to ∞ of |H(f)|² df

Consider next the same source of white noise connected to the input of an ILPF of zero frequency response H(0) and bandwidth B. In this case, the average output noise power is

N' = N0 B H²(0)

Def. 3.10: The noise equivalent bandwidth of an arbitrary filter is defined as the bandwidth of an ideal rectangular filter that would pass as much white noise power as the filter in question and has the same maximum gain as the filter under consideration.

Let BN be the value of B such that N = N'. Then BN, the noise equivalent bandwidth of H(f), is given by

BN = [∫ from 0 to ∞ of |H(f)|² df] / H²(0)          (3.32)

The notion of noise equivalent bandwidth is illustrated in Fig. 3.19.

Fig. 3.19: Pictorial representation of noise equivalent bandwidth

The advantage of the noise equivalent bandwidth is that if BN is known, we can simply calculate the output noise power without worrying about the actual shape of H(f). The definition of noise equivalent bandwidth can be extended to a band pass filter.

Example 3.17

Compute the noise equivalent bandwidth of the RC-LPF. We have H(0) = 1. Hence,

BN = ∫ from 0 to ∞ of df/(1 + (2πfRC)²) = 1/(4RC)
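Eq. 3.32 can be evaluated numerically for the RC-LPF to confirm the closed-form result of example 3.17; RC below is an assumed value and scipy performs the integration:

```python
import numpy as np
from scipy.integrate import quad

RC = 1e-3                      # assumed filter time constant (seconds)

# |H(f)|^2 for the RC-LPF; H(0) = 1, so B_N is just the integral in Eq. 3.32
H2 = lambda f: 1.0 / (1.0 + (2 * np.pi * f * RC) ** 2)

B_N, _ = quad(H2, 0.0, np.inf)
print(B_N, 1.0 / (4.0 * RC))   # both evaluate to 250.0 Hz
```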

3.10 Narrowband Noise

The communication signals that are of interest to us are generally NarrowBand BandPass (NBBP) signals; that is, their spectrum is concentrated around some (nominal) centre frequency, say fc, with the signal bandwidth usually being much smaller compared to fc. Hence the receiver meant for such signals usually consists of a cascade of narrowband filters. This means that even if the noise process at the input to the receiver is broad-band (it may even be considered white), the noise that may be present at the various stages of the receiver is essentially narrowband in nature. We shall now develop the statistical characterization of such NarrowBand Noise (NBN) processes.

Let N(t) denote the noise process at the output of a narrowband filter produced in response to a white noise process W(t) at the input. If H(f) denotes the filter transfer function and SW(f) is taken as N0/2, we have

SN(f) = (N0/2) |H(f)|²          (3.33)

In fact, any narrowband noise encountered in practice could be modeled as the output of a suitable filter.

In Fig. 3.20, we show the waveforms of experimentally generated NBBP noise. This noise process is generated by passing the BLWN (with a flat spectrum up to 110 kHz) through a NBBP filter, whose magnitude characteristic is shown in Fig. 3.21. This filter has a centre frequency of 101 kHz and a 3-dB bandwidth of less than 3 kHz.

Fig. 3.20: Some oscilloscope displays of narrowband noise (X-axis: time; Y-axis: voltage)

The magnitude response of the filter is obtained by sweeping the filter input from 87 kHz to 112 kHz.

Fig. 3.21: Frequency response of the filter used to generate NBN

The plot in Fig. 3.20(a) gives us the impression that it is a 101 kHz sinusoid with a slowly changing envelope. This is only partially correct. The waveforms at (b) and (c) are expanded versions of a part of the waveform at (a). From (b) and (c), it is clear that the zero crossings are not uniform. In Fig. 3.20(c), the cycle of the waveform shown in green fits almost into the space between two adjacent vertical lines of the graticule, whereas for the cycle shown in red, there is a clear offset. Hence the proper time-domain description of a NBBP signal would be: it is a sinusoid undergoing slow amplitude and phase variations. (This will be justified later on by expressing the NBBP noise signal in terms of its envelope and phase.)

We shall now develop two representations for the NBBP noise signal. These are (a) the canonical (also called in-phase and quadrature component) representation and (b) the envelope and phase representation.

3.10.1 Representation of narrowband noise

a) Canonical representation

Let n(t) represent a sample function of a NBBP noise process N(t). We will assume fc to be the (nominal) centre frequency of the noise process. Then, we can write:

n_pe(t) = n(t) + j n̂(t)          (3.34)

n_ce(t) = n_pe(t) exp(−j2πfc t)          (3.35)

where n̂(t) is the Hilbert transform of n(t), and n_pe(t) and n_ce(t) are its pre-envelope and complex envelope respectively. The complex envelope n_ce(t) itself can be expressed as

n_ce(t) = nc(t) + j ns(t)          (3.36)

With the help of Eq. 3.34 to 3.36, it is easy to show that

nc(t) = n(t) cos(2πfc t) + n̂(t) sin(2πfc t)          (3.37)

ns(t) = n̂(t) cos(2πfc t) − n(t) sin(2πfc t)          (3.38)

As

n(t) = Re[n_ce(t) e^(j2πfc t)] = Re{[nc(t) + j ns(t)] e^(j2πfc t)}

we have

n(t) = nc(t) cos(2πfc t) − ns(t) sin(2πfc t)          (3.39)

As in the deterministic case, we call nc(t) the in-phase component and ns(t) the quadrature component of n(t).

As Eq. 3.39 is valid for any sample function n(t) of N(t), we shall write

N(t) = Nc(t) cos(2πfc t) − Ns(t) sin(2πfc t)          (3.40a)

Eq. 3.40(a) is referred to as the canonical representation of N(t). Generalizing, nc(t) and ns(t) are sample functions of the processes Nc(t) and Ns(t) respectively; Nc(t) and Ns(t) are low pass random processes.

Exercise 3.15

Show that

N̂(t) = Nc(t) sin(2πfc t) + Ns(t) cos(2πfc t)          (3.40b)

b) Envelope and phase representation

Let us write n(t) as

n(t) = r(t) cos[2πfc t + ψ(t)]          (3.41a)

    = r(t)[cos ψ(t) cos(2πfc t) − sin ψ(t) sin(2πfc t)]          (3.41b)

Comparing Eq. 3.41(b) with Eq. 3.39, we find that

nc(t) = r(t) cos ψ(t)          (3.42a)

ns(t) = r(t) sin ψ(t)          (3.42b)

or

r(t) = [nc²(t) + ns²(t)]^(1/2) and ψ(t) = arctan[ns(t)/nc(t)]

r(t) is the envelope of n(t) and ψ(t) its phase. Generalizing, we have

N(t) = R(t) cos[2πfc t + Ψ(t)]          (3.43)

where R(t) is the envelope process and Ψ(t) is the phase process. Eq. 3.43 justifies the statement that the NBN waveform exhibits both amplitude and phase variations.
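The chain sample function → pre-envelope → complex envelope → (in-phase, quadrature) of Eq. 3.34 to 3.39 can be sketched numerically with scipy's analytic-signal routine. The test signal and all parameters below are illustrative assumptions, not from the text:

```python
import numpy as np
from scipy.signal import hilbert

fs, fc = 100_000.0, 10_000.0            # assumed sampling rate and centre frequency (Hz)
t = np.arange(0, 0.02, 1 / fs)

# a toy NBBP signal: slow envelope r(t) and phase psi(t) on an fc carrier
r = 1.0 + 0.3 * np.cos(2 * np.pi * 50 * t)
psi = 0.5 * np.sin(2 * np.pi * 30 * t)
n = r * np.cos(2 * np.pi * fc * t + psi)

n_pe = hilbert(n)                               # pre-envelope n(t) + j n_hat(t), Eq. 3.34
n_ce = n_pe * np.exp(-1j * 2 * np.pi * fc * t)  # complex envelope, Eq. 3.35
nc, ns = n_ce.real, n_ce.imag                   # in-phase and quadrature, Eq. 3.36

# Eq. 3.39: the canonical representation reconstructs n(t)
n_rec = nc * np.cos(2 * np.pi * fc * t) - ns * np.sin(2 * np.pi * fc * t)
print(np.max(np.abs(n - n_rec)))                # numerically zero
```

The reconstruction is exact by construction, since Re[n_ce e^(j2πfc t)] equals the real part of the pre-envelope, which is n(t) itself.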

3.10.2 Properties of narrowband noise

We shall now state some of the important properties of NBN. (For proofs, refer to Appendix A3.1.)

P1) If N(t) is zero mean, then so are Nc(t) and Ns(t).

P2) If N(t) is a Gaussian process, then Nc(t) and Ns(t) are jointly Gaussian.

P3) If N(t) is WSS, then Nc(t) and Ns(t) are WSS.

P4) Both Nc(t) and Ns(t) have the same PSD, which is related to SN(f) of the original narrowband noise as follows:

SNc(f) = SNs(f) = SN(f − fc) + SN(f + fc), for −B ≤ f ≤ B, and 0 elsewhere,

where it is assumed that SN(f) occupies the frequency interval fc − B ≤ |f| ≤ fc + B and fc > B.

P5) If the narrowband noise N(t) is zero mean, then Nc(t) and Ns(t) have the same variance as the variance of N(t).

P6) The cross-spectral densities of the quadrature components of narrowband noise are purely imaginary, as shown by

SNcNs(f) = −SNsNc(f) = j[SN(f + fc) − SN(f − fc)], for −B ≤ f ≤ B, and 0 elsewhere.

P7) If N(t) is zero-mean Gaussian and its PSD, SN(f), is locally symmetric about ±fc, then Nc(t) and Ns(t) are statistically independent.

Property P7) implies that if SN(f) is locally symmetric about ±fc, then RNcNs(τ) is zero for all τ. Since Nc(t) and Ns(t) are jointly Gaussian, when they become uncorrelated, they become independent. However, whether or not there is local symmetry, it is easy to see that RNcNs(0) = 0. In other words, the variables Nc(t1) and Ns(t1), for any sampling instant t1, are always independent.
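Properties P1 and P5 are easy to observe numerically: band-pass filter a long pseudo-white Gaussian record, extract Nc(t) through the complex envelope, and compare means and variances. The filter design and all parameter values below are illustrative assumptions:

```python
import numpy as np
from scipy.signal import hilbert, firwin, lfilter

rng = np.random.default_rng(2)
fs, fc = 1e6, 100e3                       # assumed sample rate and centre frequency (Hz)
w = rng.normal(0.0, 1.0, 400_000)         # approximately white Gaussian record

# assumed narrowband FIR band-pass, roughly 4 kHz wide around fc
taps = firwin(801, [fc - 2e3, fc + 2e3], fs=fs, pass_zero=False)
n = lfilter(taps, 1.0, w)[5_000:]         # drop the filter transient

t = np.arange(n.size) / fs
n_ce = hilbert(n) * np.exp(-1j * 2 * np.pi * fc * t)
nc = n_ce.real                            # in-phase component Nc(t)

print(np.mean(nc))                        # close to 0 (property P1)
print(np.var(nc) / np.var(n))             # close to 1 (property P5)
```

The ratio of variances is not exactly one for a finite record; it converges as the record length grows.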

Note: Given the spectrum of an arbitrary band-pass signal, we are free to choose any frequency fc within the band as the (nominal) centre frequency. As such, the canonical representation of a narrowband signal is not unique. The spectral shapes SNc(f) and SNs(f) will depend on the fc chosen. For example, for the narrowband spectra shown in Fig. 3.22, if f1 is chosen as the representative carrier frequency, then the noise spectrum is 2B1 wide, whereas if f2 (which is actually the mid-frequency of the given band) is chosen as the representative carrier frequency, then the width of the spectrum is 2B2. Note that for the SN(f) of Fig. 3.22, it is not possible for us to choose an fc such that SN(f) exhibits local symmetry with respect to it.

Fig. 3.22: Narrowband noise spectrum with two different centre frequencies

Example 3.18

For the narrowband noise spectrum SN(f) shown in Fig. 3.23, sketch SNc(f) for two cases, namely a) fc = 10 kHz and b) fc = 11 kHz. c) What is the variance of Nc(t)?

Fig. 3.23: SN(f) for the example 3.18

a) From P4), we have

SNc(f) = SN(f − fc) + SN(f + fc), for −B ≤ f ≤ B, and 0 elsewhere.

Using fc = 10 kHz, and plotting SN(f − fc) and SN(f + fc), we obtain Fig. 3.24.

Fig. 3.24: Shifted spectra: (a) SN(f + fc) and (b) SN(f − fc)

Taking B = 2 kHz and extracting the relevant part of SN(f + fc) and SN(f − fc), we have SNc(f) as shown in Fig. 3.25.

Fig. 3.25: SNc(f) with fc = 10 kHz

b) By repeating the above procedure, we obtain SNc(f) (the solid line) shown in Fig. 3.26.

Fig. 3.26: SNc(f) with fc = 11 kHz

c) σ²Nc = σ²N = ∫ from −∞ to ∞ of SN(f) df = 1.0 Watt

Exercise 3.16

Assuming N(t) to be WSS and using Eq. 3.40(a), establish

RN(τ) = RNc(τ) cos(2πfc τ) − RNsNc(τ) sin(2πfc τ)          (3.44)

Exercise 3.17

Let N(t) represent a narrowband, zero-mean Gaussian process with

SN(f) = N0/2, for fc − B ≤ |f| ≤ fc + B, and 0 otherwise.

Let X and Y be two random variables obtained by sampling Nc(t) and dNc(t)/dt, respectively, at t = t1, where Nc(t) is the in-phase component of N(t).

a) Develop the expression for fX,Y(x, y).

b) Show that X and Y are independent.

Exercise 3.18

Let N(t) represent a NBN process with the PSD shown in Fig. 3.27 below, and let N(t) = Nc(t) cos(ωc t) − Ns(t) sin(ωc t) with fc = 50 kHz. Show that

RNsNc(τ) = 2 sinc(10³ τ) sin(3π 10³ τ)

Fig. 3.27: Figure for exercise 3.18

We shall now establish a relation between RN(τ) and RNce(τ), where RNce(τ) evidently stands for the ACF of the complex envelope of the narrowband noise. For complex signals, such as Nce(t), we define the ACF as

RNce(t + τ, t) = E[Nce(t + τ) Nce*(t)]

= E[(Nc(t + τ) + j Ns(t + τ))(Nc(t) − j Ns(t))]

= RNc(τ) + RNs(τ) + j RNsNc(τ) − j RNcNs(τ)

But RNc(τ) = RNs(τ) (Eq. A3.1.7) and RNsNc(τ) = RNcNs(−τ) = −RNcNs(τ); the first equality is a property of cross-correlation and the last follows from Eq. A3.1.8. Hence,

RNce(t + τ, t) = RNce(τ) = 2[RNc(τ) + j RNsNc(τ)]

From Eq. 3.44, we have

RN(τ) = (1/2) Re[RNce(τ) e^(j2πfc τ)]

= (1/4)[RNce(τ) e^(j2πfc τ) + R*Nce(τ) e^(−j2πfc τ)]          (3.45)

Taking the Fourier transform of Eq. 3.45, we obtain

SN(f) = (1/4)[SNce(f − fc) + S*Nce(−f − fc)]          (3.46)

Note: For complex signals, the ACF is conjugate symmetric; that is, if X(t) represents a complex random process that is WSS, then RX(−τ) = R*X(τ). The PSD SX(f) = F[RX(τ)] is real and nonnegative, but not necessarily an even function of frequency.

3.10.3 PDF of the envelope of narrowband noise

From Eq. 3.43, we have N(t) = R(t) cos[2πfc t + Ψ(t)], with

Nc(t) = R(t) cos(Ψ(t))          (3.47a)

Ns(t) = R(t) sin(Ψ(t))          (3.47b)

and

R(t) = [Nc²(t) + Ns²(t)]^(1/2)          (3.48a)

Ψ(t) = tan⁻¹[Ns(t)/Nc(t)]          (3.48b)

Our interest is the PDF of the random variable R(t1). Let Nc and Ns represent the random variables obtained from sampling Nc(t) and Ns(t) at any time t = t1. Assuming N(t) to be a Gaussian process, we have Nc and Ns as zero mean, independent Gaussian variables with variance σ², where σ² = ∫ from −∞ to ∞ of SN(f) df. Hence,

fNcNs(nc, ns) = (1/(2πσ²)) exp[−(nc² + ns²)/(2σ²)]

For convenience, let R(t1) = R and Ψ(t1) = Ψ. Then Nc = R cos Ψ and Ns = R sin Ψ, and the Jacobian of the transformation from (nc, ns) to (r, ψ) is

J = det [ cos ψ, sin ψ ; −(sin ψ)/r, (cos ψ)/r ] = r⁻¹

so that fR,Ψ(r, ψ) = fNcNs(r cos ψ, r sin ψ)/J.

Therefore,

fR,Ψ(r, ψ) = (r/(2πσ²)) exp(−r²/(2σ²)), for r ≥ 0, 0 ≤ ψ < 2π, and 0 otherwise.

It follows that

fR(r) = ∫ from 0 to 2π of fR,Ψ(r, ψ) dψ = (r/σ²) exp(−r²/(2σ²)), for r ≥ 0, and 0 otherwise          (3.49)

The PDF given by Eq. 3.49 is the Rayleigh density, which was introduced in Chapter 2. That is, the envelope of narrowband Gaussian noise is Rayleigh distributed. Similarly, it can be shown that

fΨ(ψ) = 1/(2π), for 0 ≤ ψ < 2π, and 0 otherwise          (3.50)

As fR,Ψ(r, ψ) = fR(r) fΨ(ψ), we have R and Ψ as independent variables.

Let us make a normalized plot of the Rayleigh PDF by defining a new random variable V = R/σ. Then (transformation by a multiplicative constant),

fV(v) = v exp(−v²/2), for v ≥ 0, and 0 otherwise

fV(v) is plotted in Fig. 3.28.
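A quick Monte Carlo check of Eq. 3.49: the envelope of two independent zero-mean Gaussian quadratures should be Rayleigh distributed, whose mean is σ√(π/2) and whose median is σ√(2 ln 2). The value of σ and the sample count are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, n = 2.0, 1_000_000
nc = rng.normal(0.0, sigma, n)        # in-phase Gaussian samples
ns = rng.normal(0.0, sigma, n)        # quadrature Gaussian samples
r_env = np.hypot(nc, ns)              # envelope R = sqrt(Nc^2 + Ns^2)

print(r_env.mean(), sigma * np.sqrt(np.pi / 2))          # Rayleigh mean
print(np.median(r_env), sigma * np.sqrt(2 * np.log(2)))  # Rayleigh median
```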

3. N ( t ) represents the narrowband noise process whose centre frequency is taken as fc . let N'c denote the random variable N'c ( t1 ) and let Ns denote the random variable Ns ( t1 ) . we find that N'c is N A .4 Sine wave plus narrowband noise Let a random process X ( t ) be given by X ( t ) = A cos ( 2 π fc t ) + N ( t ) (3. σ2 independent of Ns .28: Normalized Rayleigh PDF The peak value of the PDF occurs at v = 1 and fV (1) = 0. Eq.51 can be written as X ( t ) = A cos ( 2 π fc t ) + Nc ( t ) cos ( 2 π fc t ) − Ns ( t ) sin ( 2 π fc t ) Let N'c ( t ) = A + Nc ( t ) . ( ) ( ) and N'c is 3. Ns is N 0.51) where A and fc are constants. Then. V. Hence. Our interest is to develop an expression for the PDF of the envelope of X ( t ) . for any sampling instant t = t1 .74 Indian Institute of Technology Madras .607 . From our earlier discussion. 3.Principles of Communication Prof. Assume N ( t ) to be a Gaussian process with zero mean and variance σ2 . σ2 .10. Venkata Rao Fig. Using the canonical form for N ( t ) . 3.

Hence,

fN'cNs(n'c, ns) = (1/(2πσ²)) exp{−[(n'c − A)² + ns²]/(2σ²)}          (3.52)

Let

R(t) = {[N'c(t)]² + [Ns(t)]²}^(1/2)

Ψ(t) = tan⁻¹[Ns(t)/N'c(t)]

where R(t) and Ψ(t) are the envelope and phase, respectively, of X(t). By a procedure similar to that used in the computation of the PDF of the envelope of narrowband noise, we find that

fR,Ψ(r, ψ) = (r/(2πσ²)) exp[−(r² + A² − 2Ar cos ψ)/(2σ²)], for r ≥ 0, 0 ≤ ψ ≤ 2π

where R = R(t1) and Ψ = Ψ(t1). The quantity of interest is fR(r), where fR(r) = ∫ from 0 to 2π of fR,Ψ(r, ψ) dψ. That is,

fR(r) = (r/(2πσ²)) exp[−(r² + A²)/(2σ²)] ∫ from 0 to 2π of exp[(Ar/σ²) cos ψ] dψ          (3.53)

The integral on the RHS of Eq. 3.53 is identified in terms of the defining integral for the modified Bessel function of the first kind and zero order. Let

I0(y) = (1/(2π)) ∫ from 0 to 2π of exp(y cos ψ) dψ          (3.54)

A plot of I0(y) is shown in Fig. 3.29. In Eq. 3.53, if we let y = Ar/σ², Eq. 3.53 can be rewritten as

fR(r) = (r/σ²) exp[−(r² + A²)/(2σ²)] I0(Ar/σ²)          (3.55)

The PDF given by Eq. 3.55 is referred to as the Rician distribution.

Fig. 3.29: Plot of I0(y)

The graphical presentation of the Rician PDF can be simplified by introducing two new variables, namely, V = R/σ and α = A/σ. Then the Rician density of Eq. 3.55 can be written in a normalized form,

fV(v) = v exp[−(v² + α²)/2] I0(αv)          (3.56)

which is plotted in Fig. 3.30 for various values of α.
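Eq. 3.56 can be checked numerically: for any α it should integrate to one, and for α = 0 it should reduce exactly to the Rayleigh density, since I0(0) = 1. scipy supplies the modified Bessel function I0; the value α = 3 is an arbitrary assumption:

```python
import numpy as np
from scipy.special import i0

def f_V(v, alpha):
    # normalized Rician PDF of Eq. 3.56
    return v * np.exp(-(v ** 2 + alpha ** 2) / 2.0) * i0(alpha * v)

v = np.linspace(0.0, 20.0, 200_001)
y = f_V(v, 3.0)                                   # assumed alpha = A/sigma = 3
area = np.sum((y[1:] + y[:-1]) * np.diff(v)) / 2  # trapezoidal integration
print(area)                                       # close to 1.0: a valid PDF

# alpha = 0 recovers the Rayleigh density, since I0(0) = 1
print(np.max(np.abs(f_V(v, 0.0) - v * np.exp(-v ** 2 / 2.0))))
```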

Fig. 3.30: Normalized Rician PDF for various values of α

Based on these curves, we make the following observations:

i) If A = 0, then R(t) is the envelope of only the narrowband noise. This is justified because, with A = 0, we have α = 0 and, from Fig. 3.29, I0(0) = 1. Then

fV(v) = v exp(−v²/2), for v ≥ 0, and 0 otherwise

which is the normalized Rayleigh PDF shown earlier in Fig. 3.28.

ii) For y >> 1, I0(y) ≈ e^y/√(2πy). Using this approximation, it can be shown that fV(v) is approximately Gaussian in the vicinity of v = α when α is sufficiently large. That is, when the sine-wave amplitude A is large compared with σ (which is the square root of the average power in N(t)), fV(v) can be approximated by a Gaussian PDF over a certain range. This can be seen from Fig. 3.30 for the cases α = 3 and α = 5.

Appendix A3.1

A3.1.1 Properties of Narrowband Noise: Some Proofs

We shall now give proofs of some of the properties of NBN mentioned in Sec. 3.10.2.

P1) If N(t) is zero mean, then so are Nc(t) and Ns(t).

Proof: Generalizing Eq. 3.37 and 3.38, we have

Nc(t) = N(t) cos(2πfc t) + N̂(t) sin(2πfc t)          (A3.1.1)

Ns(t) = N̂(t) cos(2πfc t) − N(t) sin(2πfc t)          (A3.1.2)

N̂(t) is obtained as the output of an LTI system with h(t) = 1/(πt), with N(t) as input. This implies that if N(t) is zero mean, then so is N̂(t). Taking the expectation of A3.1.1, we obtain E[Nc(t)] = 0; taking the expectation of A3.1.2, we obtain E[Ns(t)] = 0.

P2) If N(t) is a Gaussian process, then Nc(t) and Ns(t) are jointly Gaussian.

Proof: This property follows from Eq. A3.1.1 and A3.1.2, because Nc(t) and Ns(t) are linear combinations of N(t) and its Hilbert transform, which are jointly Gaussian. (Note that in Eq. 3.40(a), N(t) is guaranteed to be Gaussian only if Nc(t) and Ns(t) are jointly Gaussian.)

P3) If N(t) is WSS, then Nc(t) and Ns(t) are WSS.

Proof: We will first establish that

RNc(t + τ, t) = RNc(τ) = RN(τ) cos(2πfc τ) + R̂N(τ) sin(2πfc τ)          (A3.1.3)

where R̂N(τ) is the Hilbert transform of RN(τ).

Consider the scheme shown in Fig. A3.1.1, which obtains N̂(t) from N(t).

Fig. A3.1.1: Scheme to obtain N̂(t) from N(t)

Eq. 3.21(a) gives us

R_N̂N(τ) = (1/(πτ)) ∗ RN(τ)          (A3.1.4a)

Also,

R_NN̂(τ) = R_N̂N(−τ) = (1/(π(−τ))) ∗ RN(−τ) = −(1/(πτ)) ∗ RN(τ)

That is,

R_NN̂(τ) = −R_N̂N(τ)          (A3.1.4b)

Now consider

RNc(t + τ, t) = E[Nc(t + τ) Nc(t)]          (A3.1.5)

Expressing Nc(t + τ) and Nc(t) in terms of the RHS quantities of Eq. A3.1.1 and after some routine manipulations, we will have

RNc(t + τ, t) = (1/2)[RN(τ) + R_N̂(τ)] cos(2πfc τ)
 + (1/2)[RN(τ) − R_N̂(τ)] cos(2πfc (2t + τ))
 + (1/2)[R_N̂N(τ) − R_NN̂(τ)] sin(2πfc τ)
 + (1/2)[R_N̂N(τ) + R_NN̂(τ)] sin(2πfc (2t + τ))          (A3.1.6)

R_N̂(τ) is the autocorrelation of the Hilbert transform of N(t). As N(t) and N̂(t) have the same PSD, we have R_N̂(τ) = RN(τ), and from Eq. A3.1.4(b), we have R_N̂N(τ) = −R_NN̂(τ). Using these results in Eq. A3.1.6, the terms involving (2t + τ) vanish and we obtain Eq. A3.1.3. Thus RNc(t + τ, t) is a function of τ alone; together with E[Nc(t)] = 0 (property P1), this shows that Nc(t) is WSS, and similarly for Ns(t).

Exercise A3.1.1

Show that

RNs(τ) = RNc(τ)          (A3.1.7)

and

RNcNs(τ) = RN(τ) sin(2πfc τ) − R̂N(τ) cos(2πfc τ)          (A3.1.8)

P4) Both Nc(t) and Ns(t) have the same PSD, which is related to SN(f) of the original narrowband noise as follows:

SNc(f) = SNs(f) = SN(f − fc) + SN(f + fc), for −B ≤ f ≤ B, and 0 elsewhere,

where it is assumed that SN(f) occupies the frequency interval fc − B ≤ |f| ≤ fc + B and fc > B.

Proof: Taking the Fourier transform of Eq. A3.1.3, we have

SNc(f) = (1/2)[SN(f − fc) + SN(f + fc)] − (1/2)[SN(f − fc) sgn(f − fc) − SN(f + fc) sgn(f + fc)]

= (1/2) SN(f − fc)[1 − sgn(f − fc)] + (1/2) SN(f + fc)[1 + sgn(f + fc)]          (A3.1.9)

But

1 − sgn(f − fc) = 2 for f < fc, and 0 outside          (A3.1.10a)

1 + sgn(f + fc) = 2 for f > −fc, and 0 outside          (A3.1.10b)

Using Eq. A3.1.10 in Eq. A3.1.9 completes the proof.

Exercise A3.1.2

Establish properties P5 to P7.

Indian Institute of Technology Madras