
1 Random Processes

Probability theory is an essential background for the study of communications.
1.2 Random Process
• Random process: an extension of multi-dimensional random vectors.
• Representation of a 2-dimensional random vector:
  - (X, Y) = (X(1), X(2)) = {X(j), j ∈ I}, where the index set I equals {1, 2}.
• Representation of an m-dimensional random vector:
  - {X(j), j ∈ I}, where the index set I equals {1, 2, …, m}.
1.2 Random Process
 How about {X(t), t}?
 It is no longer a random vector since the index set is
continuous!
 This is a suitable model for, e.g., a noise because a
noise often exists continuously in time.
 Its cdf is well-defined through the mapping:

X (t ) : S  
for every t  .
Define or view a random process via its
inherited probability system

[Figure: sample functions X(t, s₁), X(t, s₂), …, X(t, sₙ) of the random process, one waveform per sample point of S.]
1.2 Random Process
• X(t, s_j) is called a sample function (or a realization) of the random process for the sample point s_j.
• For a fixed s_j, X(t, s_j) is deterministic.
1.2 Random Process
• Notably, with a probability system (S, F, P) over which the random process is defined, any finite-dimensional joint cdf is well-defined.
• For example,
$$\Pr[X(t_1)\le x_1,\ X(t_2)\le x_2,\ X(t_3)\le x_3] = P\big(\{s\in S: X(t_1,s)\le x_1\}\cap\{s\in S: X(t_2,s)\le x_2\}\cap\{s\in S: X(t_3,s)\le x_3\}\big)$$
1.2 Random Process
• Summary
  - A random variable maps s to a real number X.
  - A random vector maps s to a multi-dimensional real vector.
  - A random process maps s to a real-valued deterministic function.
  - s is where the probability is truly defined; yet the image of the mapping is what we can observe and experiment on.
1.2 Random Process
• Question: Can we map s to two or more real-valued deterministic functions?
• Answer: Yes, such as (X(t), Y(t)).
  - Then we can discuss any finite-dimensional joint distribution of X(t) and Y(t), such as the joint distribution of $(X(t_1), X(t_2), X(t_3), Y(t_1), Y(t_4))$.
1.3 (Strictly) Stationary
• The statistical properties of many random processes encountered in the real world are independent of the time at which the observation (or experiment) is initiated.
• Mathematically, strict stationarity is formulated as: for any t₁, t₂, …, t_k and any shift τ,
$$F_{X(t_1+\tau),X(t_2+\tau),\dots,X(t_k+\tau)}(x_1,x_2,\dots,x_k) = F_{X(t_1),X(t_2),\dots,X(t_k)}(x_1,x_2,\dots,x_k).$$
1.3 (Strictly) Stationary
• Example 1.1. Suppose that every finite-dimensional cdf of a random process X(t) is defined. Find the probability of the joint event
$$A = \{a_1 < X(t_1) \le b_1 \text{ and } a_2 < X(t_2) \le b_2\}.$$
• Answer:
$$P(A) = F_{X(t_1),X(t_2)}(b_1,b_2) - F_{X(t_1),X(t_2)}(b_1,a_2) - F_{X(t_1),X(t_2)}(a_1,b_2) + F_{X(t_1),X(t_2)}(a_1,a_2)$$
1.3 (Strictly) Stationary
• Example 1.1 (continued). Further assume that X(t) is strictly stationary. Then, for any shift τ, P(A) also equals
$$P(A) = F_{X(t_1+\tau),X(t_2+\tau)}(b_1,b_2) - F_{X(t_1+\tau),X(t_2+\tau)}(b_1,a_2) - F_{X(t_1+\tau),X(t_2+\tau)}(a_1,b_2) + F_{X(t_1+\tau),X(t_2+\tau)}(a_1,a_2).$$
1.3 (Strictly) Stationary
• Why introduce “stationarity”?
  - With stationarity, we can be certain that observations made at different time instances have the same distribution!
  - For example, X(0), X(T), X(2T), X(3T), ….
  - Suppose that Pr[X(0) = 0] = Pr[X(0) = 1] = 1/2. Can stationarity alone guarantee that the relative frequency (i.e., the average) of “1’s appearance” over experiments performed at several different times approaches 1/2? No — we need an additional assumption (ergodicity; see Section 1.5)!
1.4 Mean
• The mean of a random process X(t) at time t equals
$$\mu_X(t) = E[X(t)] = \int_{-\infty}^{\infty} x\, f_{X(t)}(x)\,dx$$
where f_{X(t)}(·) is the pdf of X(t) at time t.
• If X(t) is stationary, the mean μ_X(t) is independent of t; it is a constant for all t.
1.4 Autocorrelation
• The autocorrelation function of a real random process X(t) is given by
$$R_X(t_1,t_2) = E[X(t_1)X(t_2)] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x_1 x_2\, f_{X(t_1),X(t_2)}(x_1,x_2)\,dx_1\,dx_2$$
• If X(t) is stationary, the autocorrelation function R_X(t₁, t₂) equals R_X(t₁ − t₂, 0).
1.4 Autocorrelation
$$R_X(t_1,t_2) = E[X(t_1)X(t_2)] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x_1x_2\, f_{X(t_1),X(t_2)}(x_1,x_2)\,dx_1dx_2 = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x_1x_2\, f_{X(0),X(t_2-t_1)}(x_1,x_2)\,dx_1dx_2$$
$$= E[X(t_1-t_2)X(0)] = R_X(t_1-t_2,\,0) \triangleq R_X(t_1-t_2)$$
where the last expression is a shorthand for the autocorrelation function of a stationary process.
1.4 Autocorrelation
• Conceptually, the autocorrelation function is the “power correlation” between two time instances t₁ and t₂.
1.4 Autocovariance
• “Variance” measures the degree of variation about a standard value (i.e., the mean).
• The autocovariance function C_X(t₁, t₂) is given by
$$C_X(t_1,t_2) = E\big[(X(t_1)-\mu_X(t_1))(X(t_2)-\mu_X(t_2))\big] = E[X(t_1)X(t_2)] - \mu_X(t_1)E[X(t_2)] - \mu_X(t_2)E[X(t_1)] + \mu_X(t_1)\mu_X(t_2)$$
$$= R_X(t_1,t_2) - \mu_X(t_1)\,\mu_X(t_2)$$
1.4 Autocovariance
• If X(t) is stationary, the autocovariance function C_X(t₁, t₂) becomes
$$C_X(t_1,t_2) = R_X(t_1,t_2) - \mu_X(t_1)\mu_X(t_2) = R_X(t_1-t_2,\,0) - \mu_X^2 = C_X(t_1-t_2,\,0) \triangleq C_X(t_1-t_2).$$
1.4 Wide-Sense Stationary (WSS)
• Since in most cases of practical interest only the first two moments (i.e., μ_X(t) and C_X(t₁, t₂)) are of concern, an alternative definition of stationarity is introduced.
• Definition (Wide-Sense Stationarity). A random process X(t) is WSS if
$$\mu_X(t) = \text{constant and } C_X(t_1,t_2) = C_X(t_1-t_2), \quad\text{or equivalently,}\quad \mu_X(t) = \text{constant and } R_X(t_1,t_2) = R_X(t_1-t_2).$$
1.4 Wide-Sense Stationary (WSS)
• Alternative names for WSS include:
  - weakly stationary
  - stationary in the weak sense
  - second-order stationary
• If the first two moments of a random process exist (i.e., are finite), then strict stationarity implies weak stationarity (but not vice versa).
1.4 Cyclostationarity
• Definition (Cyclostationarity). A random process X(t) is cyclostationary if there exists a constant T such that
$$\mu_X(t+T) = \mu_X(t) \quad\text{and}\quad C_X(t_1+T,\,t_2+T) = C_X(t_1,t_2).$$
1.4 Properties of autocorrelation function for
WSS random process
1. Second moment: R_X(0) = E[X²(t)] ≥ 0.
2. Symmetry: R_X(τ) = R_X(−τ).
  - Conceptually, the autocorrelation function is the “power correlation” between two time instances.
  - For a WSS process, this “power correlation” depends only on the time difference.
1.4 Properties of autocorrelation function for
WSS random process
3. Peak at zero: |R_X(τ)| ≤ R_X(0).
Proof:
$$0 \le E\big[(X(t+\tau) \pm X(t))^2\big] = E[X^2(t+\tau)] + E[X^2(t)] \pm 2E[X(t+\tau)X(t)] = 2R_X(0) \pm 2R_X(\tau).$$
Hence $-R_X(0) \le R_X(\tau) \le R_X(0)$, with equality when $\Pr[X(t+\tau) = \mp X(t)] = \Pr[X(\tau) = \mp X(0)] = 1$.
1.4 Properties of autocorrelation function for
WSS random process
• Operational meaning of the autocorrelation function:
  - The “power correlation” of a random process at points τ seconds apart.
  - The smaller |R_X(τ)| is, the less the correlation between X(t) and X(t+τ).
1.4 Properties of autocorrelation function for
WSS random process
 If RX() decreases faster, the correlation decreases
faster.
1.4 Decorrelation time
• Some researchers define the decorrelation time as the smallest τ beyond which |R_X(τ)| stays below a prescribed small fraction (e.g., 1%) of R_X(0); the exact threshold varies by convention.
Example 1.2 Signal with Random Phase
 Let X(t) = A cos(2fct + ), where  is
uniformly distributed over [, ).
 Application: A local carrier at the receiver side may
have a random “phase difference” with respect to
the carrier at the transmitter side.
Example 1.2 Signal with Random Phase
[Figure: transmitter/receiver block diagram. The encoded bits 0110… modulate the carrier wave A_c cos(2πf_c t); the channel adds noise w(t); the receiver's local carrier cos(2πf_c t + θ) carries an unknown phase offset, so the received carrier is modeled as X(t) = A cos(2πf_c t + Θ); a correlator ∫₀ᵀ(·)dt followed by a threshold test (y_T > 0 or < 0) recovers the bits.]
Example 1.2 Signal with Random Phase
Then  X (t )  E [ A cos(2f c t  )]
 1
  A cos(2f c t   ) d
 2
A 

2 
cos(  2f c t )d

A
 sin(  2f ct )  

2
A
 sin(  2f ct )  sin(  2f ct ) 
2
 0.
Example 1.2 Signal with Random Phase
RX (t1 , t2 )  E[ A cos(2f c t1  )  A cos(2f c t2  )]
 1
 A  cos(  2f c t1 ) cos(  2f c t2 )
2
d
 2
A2  1
  cos(  2f ct1 )  (  2f ct2 )
2   2
 cos(  2f c t1 )  (  2f c t2 )d
A2  1
  cos2  2f c (t1  t2 )   cos2f c (t1  t2 ) d
2  2
A2
 cos2f c (t1  t2 ) . Hence, X(t) is WSS.
2
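A minimal Monte-Carlo sketch of this example: averaging over many random phases Θ ~ Uniform[−π, π) should reproduce μ_X(t) ≈ 0 and R_X(t₁, t₂) ≈ (A²/2) cos(2πf_c(t₁−t₂)). The amplitude, frequency, and time instants are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
A, fc = 2.0, 5.0
theta = rng.uniform(-np.pi, np.pi, size=100_000)   # one phase per realization
t1, t2 = 0.30, 0.17

x1 = A * np.cos(2*np.pi*fc*t1 + theta)             # X(t1) across the ensemble
x2 = A * np.cos(2*np.pi*fc*t2 + theta)             # X(t2) across the ensemble
print("mean estimate:", x1.mean(), " theory: 0")
print("R_X  estimate:", (x1*x2).mean(),
      " theory:", A**2/2 * np.cos(2*np.pi*fc*(t1-t2)))
```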
Example 1.3 Random Binary Wave
• Let
$$X(t) = A\sum_{n=-\infty}^{\infty} I_n\, p(t - nT - t_d)$$
where …, I₋₂, I₋₁, I₀, I₁, I₂, … are independent, each I_j equals −1 or +1 with equal probability, and
$$p(t) = \begin{cases} 1, & 0 \le t \le T \\ 0, & \text{otherwise.} \end{cases}$$
Example 1.3 Random Binary Wave
I6 I5 I4 I3 I2 I1 I0 I1 I2 I3 I4
Example 1.3 Random Binary Wave
[Figure: baseband transmission block diagram. No carrier wave is used (or it is ignored); the channel adds noise w(t); the received pulse is modeled as X(t) = A p(t − t_d), and a correlator with a threshold test recovers the bits 0110….]
Example 1.3 Random Binary Wave
• Then, assuming that t_d is uniformly distributed over [0, T) and independent of {I_n}, we obtain
$$\mu_X(t) = E\Big[A\sum_{n=-\infty}^{\infty} I_n\, p(t-nT-t_d)\Big] = A\sum_{n=-\infty}^{\infty} E[I_n]\,E[p(t-nT-t_d)] = A\sum_{n=-\infty}^{\infty} 0\cdot E[p(t-nT-t_d)] = 0.$$
Example 1.3 Random Binary Wave
• A useful probabilistic rule: E[X] = E[E[X|Y]]. So we have
$$E[X(t_1)X(t_2)] = E\big[E[X(t_1)X(t_2)\mid t_d]\big]$$
with
$$E[X(t_1)X(t_2)\mid t_d] = A^2\sum_{n=-\infty}^{\infty}\sum_{m=-\infty}^{\infty} E[I_nI_m]\,p(t_1-nT-t_d)\,p(t_2-mT-t_d) = A^2\sum_{n=-\infty}^{\infty} E[I_n^2]\,p(t_1-nT-t_d)\,p(t_2-nT-t_d)$$
$$= A^2\sum_{n=-\infty}^{\infty} p(t_1-nT-t_d)\,p(t_2-nT-t_d),$$
since $E[I_nI_m] = E[I_n]E[I_m] = 0$ for $n\ne m$ and $E[I_n^2] = 1$.
• Three conditions must be satisfied simultaneously for a term of $\sum_n p(t_1-nT-t_d)\,p(t_2-nT-t_d)$ to be non-zero:
  - $p(t_1-nT-t_d) = 1$, i.e., $0 \le t_1-nT-t_d \le T$, or equivalently $\dfrac{t_1-t_d}{T}-1 \le n \le \dfrac{t_1-t_d}{T}$;
  - $p(t_2-nT-t_d) = 1$, i.e., $0 \le t_2-nT-t_d \le T$, or equivalently $\dfrac{t_2-t_d}{T}-1 \le n \le \dfrac{t_2-t_d}{T}$;
  - n must be an integer.
• Notably, $0 \le t_d < T$.
• For an easier understanding, let $t_1 = 0$ and $t_2 = -\tau \le 0$.
• With integrality, the first condition reduces to $-\,t_d/T - 1 \le n \le -\,t_d/T$, so $n = -1$ when $0 < t_d < T$, and $n = 0$ or $-1$ when $t_d = 0$.
• Combining this with $\dfrac{-\tau-t_d}{T}-1 \le n \le \dfrac{-\tau-t_d}{T}$ shows that a non-zero term exists exactly when $0 \le \tau \le T - t_d$, i.e., when the two observation times fall within the same pulse interval.
Example 1.3 Random Binary Wave
E X (t1 ) X (t2 ) td   A

2
 p(t
n  
1  nT  td ) p(t2  nT  td )

 A2 , (0 | t1  t2 | td  T ) or (0 | t1  t2 | T and td  0)

 0, otherwise
 T 2 1
E X (t1 ) X (t2 )  E E X (t1 ) X (t2 ) td     |t t | T d
A dt , 0 | t1  t2 | T
1 2

 0, otherwise
 2  | t1  t2 | 
A 1 , 0 | t1  t2 | T
   T 
 0, otherwise
Example 1.3 Random Binary Wave
[Figure: the triangular autocorrelation function R_X(τ) = A²(1 − |τ|/T) for |τ| ≤ T, and 0 elsewhere.]
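A minimal simulation sketch of Example 1.3: estimate R_X(τ) by ensemble-averaging over realizations (random bits I_n and random delay t_d) and compare with the triangle A²(1 − |τ|/T). Amplitude, symbol time, and record lengths are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
A, T, dt = 1.0, 1.0, 0.01
sps = int(T / dt)                      # samples per symbol period
n_sym, n_runs = 64, 4000
lags = np.arange(0, 2 * sps)           # tau = 0 .. 2T
acc = np.zeros(len(lags))
i0 = 10 * sps                          # fixed observation time t (mid-record)

for _ in range(n_runs):
    bits = rng.choice([-A, A], size=n_sym)
    wave = np.repeat(bits, sps)                    # NRZ pulses of duration T
    wave = np.roll(wave, rng.integers(0, sps))     # random delay t_d ~ U[0, T)
    acc += wave[i0] * wave[i0 + lags]              # ensemble average at fixed t

R_est = acc / n_runs
R_theory = np.where(lags * dt < T, A**2 * (1 - lags * dt / T), 0.0)
for k in (0, sps // 2, sps, 3 * sps // 2):
    print(f"tau={lags[k]*dt:.2f}  est={R_est[k]: .3f}  theory={R_theory[k]: .3f}")
```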
1.4 Cross-Correlation
• The cross-correlation between two processes X(t) and Y(t) is
$$R_{X,Y}(t,u) = E[X(t)\,Y(u)].$$
• Sometimes, their correlation matrix is given instead for convenience:
$$\mathbf{R}_{X,Y}(t,u) = \begin{bmatrix} R_X(t,u) & R_{X,Y}(t,u)\\ R_{Y,X}(t,u) & R_Y(t,u)\end{bmatrix}$$
1.4 Cross-Correlation
• If X(t) and Y(t) are jointly weakly stationary, then
$$\mathbf{R}_{X,Y}(t,u) = \mathbf{R}_{X,Y}(t-u) = \begin{bmatrix} R_X(t-u) & R_{X,Y}(t-u)\\ R_{Y,X}(t-u) & R_Y(t-u)\end{bmatrix}$$
Example 1.4 Quadrature-Modulated
Processes
• Consider a pair of quadrature decompositions of X(t):
$$X_I(t) = X(t)\cos(2\pi f_c t + \Theta), \qquad X_Q(t) = X(t)\sin(2\pi f_c t + \Theta)$$
where Θ is independent of X(t) and is uniformly distributed over [0, 2π).
Example 1.4 Quadrature-Modulated
Processes
$$R_{X_I,X_Q}(t,u) = E[X_I(t)X_Q(u)] = E[X(t)X(u)]\;E[\cos(2\pi f_c t+\Theta)\sin(2\pi f_c u+\Theta)]$$
$$= R_X(t,u)\,E\left[\frac{\sin(2\pi f_c(t+u)+2\Theta) - \sin(2\pi f_c(t-u))}{2}\right] = -\frac{1}{2}\sin\big(2\pi f_c(t-u)\big)\,R_X(t,u)$$
Example 1.4 Quadrature-Modulated
Processes
• Notably, if t = u (i.e., the two quadrature components are observed simultaneously), then
$$R_{X_I,X_Q}(t,t) = 0,$$
which indicates that simultaneous observations of the quadrature-modulated processes are “orthogonal” to each other!
(See A1.3 Joint Moments for a formal definition of orthogonality.)
1.5 Ergodic Process
• For a random-process-modeled noise (or random-process-modeled source) X(t), how can we know its mean and variance?
  - Answer: relative frequency.
  - Specifically, by measuring X(t₁), X(t₂), …, X(t_n) and calculating their average, it is expected that this time average will be close to the mean.
• The question is: “Will this time average be close to the mean if X(t) is stationary?”
  - For a stationary process, the mean function μ_X(t) is independent of time t.
1.5 Ergodic Process
• The answer is negative!
• An additional ergodicity assumption is necessary for the time average to converge to the ensemble average μ_X.
1.5 Time Average versus Ensemble Average
• Example.
  - X(t) is stationary.
  - For any t, X(t) is uniformly distributed over {1, 2, 3, 4, 5, 6}.
  - Then the ensemble average equals
$$1\cdot\tfrac{1}{6} + 2\cdot\tfrac{1}{6} + 3\cdot\tfrac{1}{6} + 4\cdot\tfrac{1}{6} + 5\cdot\tfrac{1}{6} + 6\cdot\tfrac{1}{6} = 3.5$$
1.5 Time Average versus Ensemble Average
• We make a series of observations at times 0, T, 2T, …, 9T and obtain 1, 2, 3, 4, 3, 2, 5, 6, 4, 1. (They are deterministic!)
• Then the time average equals
$$\frac{1+2+3+4+3+2+5+6+4+1}{10} = 3.1$$
1.5 Ergodicity
• Definition. A stationary process X(t) is ergodic in the mean if
$$\text{1. } \Pr\Big[\lim_{T\to\infty}\mu_X(T) = \mu_X\Big] = 1, \quad\text{and}\quad \text{2. } \lim_{T\to\infty}\operatorname{Var}[\mu_X(T)] = 0,$$
where
$$\mu_X(T) = \frac{1}{2T}\int_{-T}^{T} X(t)\,dt.$$
1.5 Ergodicity
• Definition. A stationary process X(t) is ergodic in the autocorrelation function if
$$\text{1. } \Pr\Big[\lim_{T\to\infty} R_X(\tau;T) = R_X(\tau)\Big] = 1, \quad\text{and}\quad \text{2. } \lim_{T\to\infty}\operatorname{Var}[R_X(\tau;T)] = 0,$$
where
$$R_X(\tau;T) = \frac{1}{2T}\int_{-T}^{T} X(t+\tau)\,X(t)\,dt.$$
1.5 Ergodic Process
• Experiments or observations on the same process can only be performed at different times.
• “Stationarity” only guarantees that the observations made at different times come from the same distribution.
A1.3 Statistical Average
• Alternative names for the ensemble average:
  - mean
  - expectation value
  - sample average (recall that the sample space consists of all possible outcomes!)
• How about the sample average of a function g(·) of a random variable X?
$$E[g(X)] = \int_{-\infty}^{\infty} g(x)\,f_X(x)\,dx$$
A1.3 Statistical Average
• nth moment of a random variable X: $E[X^n]$.
  - The 2nd moment is also named the mean-square value.
• nth central moment of a random variable X: $E[(X-\mu_X)^n]$.
  - The 2nd central moment is also named the variance.
  - The square root of the 2nd central moment is also named the standard deviation.
A1.3 Joint Moments
• The joint moment of X and Y is given by
$$E[X^iY^k] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x^i y^k f_{X,Y}(x,y)\,dx\,dy$$
• When i = k = 1, the joint moment is specifically named the correlation.
• The correlation of centered random variables is specifically named the covariance:
$$\operatorname{Cov}[X,Y] = E[(X-\mu_X)(Y-\mu_Y)] = E[XY] - \mu_X\mu_Y$$
A1.3 Joint Moments
• Two random variables, X and Y, are uncorrelated if, and only if, Cov[X, Y] = 0.
• Two random variables, X and Y, are orthogonal if, and only if, E[XY] = 0.
• The covariance, normalized by the two standard deviations, is named the correlation coefficient of X and Y:
$$\rho = \frac{\operatorname{Cov}[X,Y]}{\sigma_X\,\sigma_Y}$$
A1.3 Characteristic Function
• The characteristic function $\varphi_X(v) = E[e^{jvX}] = \int_{-\infty}^{\infty} f_X(x)\,e^{jvx}\,dx$ is indeed the inverse Fourier transform of the pdf.
1.6 Transmission of a Random Process
Through a Stable Linear Time-Invariant Filter

• Linear
  - Y(t) is a linear function of X(t); specifically, Y(t) is a weighted sum of X(t).
• Time-invariant
  - The weights are time-independent.
• Stable
  - h satisfies Dirichlet's condition (defined later) and $\int_{-\infty}^{\infty} |h(\tau)|\,d\tau < \infty$.
  - “Stability” implies that the output is an energy function, which has finite power (second moment), if the input is an energy function.
Example of LTI filter: Mobile Radio Channel
[Figure: three propagation paths (α₁, τ₁), (α₂, τ₂), (α₃, τ₃) between the transmitter X(t) and the receiver Y(t).]
$$Y(t) = \alpha_1 s(t-\tau_1) + \alpha_2 s(t-\tau_2) + \alpha_3 s(t-\tau_3) = \sum_{i=1}^{3} h(\tau_i)\,s(t-\tau_i), \quad\text{where } h(\tau_i) = \alpha_i.$$
Example of LTI filter: Mobile Radio
Channel

[Figure: transmitter X(t) → channel h(τ) → receiver Y(t).]
$$Y(t) = \int_{-\infty}^{\infty} h(\tau)\,X(t-\tau)\,d\tau$$
1.6 Transmission of a Random Process
Through a Linear Time-Invariant Filter
• What are the mean and autocorrelation functions of the LTI filter output Y(t)?
  - Suppose X(t) is stationary with finite mean, and $\int_{-\infty}^{\infty} |h(\tau)|\,d\tau < \infty$.
  - Then
$$\mu_Y(t) = E[Y(t)] = E\Big[\int_{-\infty}^{\infty} h(\tau)\,X(t-\tau)\,d\tau\Big] = \int_{-\infty}^{\infty} h(\tau)\,E[X(t-\tau)]\,d\tau = \mu_X\int_{-\infty}^{\infty} h(\tau)\,d\tau$$
1.6 Zero-Frequency (DC) Response


The DC response is
$$H(0) = \int_{-\infty}^{\infty} h(\tau)\,d\tau.$$
• The mean of the LTI filter output process equals the mean of the stationary filter input multiplied by the DC response of the system:
$$\mu_Y = \mu_X\int_{-\infty}^{\infty} h(\tau)\,d\tau = \mu_X\,H(0)$$
1.6 Autocorrelation
RY (t , u )  E [Y (t )Y (u )]

 E  h( 1 ) X (t   1 )d 1   h( 2 ) X (u   2 )d 2 


  

   
 
  h( 1 )h( 2 ) E X (t   1 ) X (u   2 )d 2 d 1
 
 
  h( 1 )h( 2 ) R X (t   1 , u   2 )d 2 d 1
 

 If X (t ) WSS,
 
then RY ( )    h( 1 )h( 2 ) R X (   1   2 )d 2 d 1
 
1.6 WSS input induces WSS output
• From the above derivations, we conclude: for a stable LTI filter, a WSS input induces a WSS output.
• In general (not necessarily WSS), the output mean and autocorrelation relate to those of the input through convolutions with h.
• As the above two quantities relate in “convolution” form, a spectrum analysis is perhaps better for characterizing their relationship.
A2.1 Fourier Analysis
• Fourier transform pair:
$$\text{Fourier transform of } g(t):\quad G(f) = \int_{-\infty}^{\infty} g(t)\exp(-j2\pi ft)\,dt$$
$$\text{Inverse Fourier transform:}\quad g(t) = \int_{-\infty}^{\infty} G(f)\exp(j2\pi ft)\,df$$
• The Fourier transform G(f) is the frequency spectrum content of a signal g(t):
  - |G(f)|: magnitude spectrum
  - arg{G(f)}: phase spectrum
A2.1 Dirichlet’s Condition
• Dirichlet's condition
  - In every finite interval, g(t) has a finite number of local maxima and minima, and a finite number of discontinuity points.
• Sufficient conditions for the existence of the Fourier transform:
  - g(t) satisfies Dirichlet's condition;
  - absolute integrability: $\int_{-\infty}^{\infty} |g(t)|\,dt < \infty$.
A2.1 Dirichlet’s Condition
• “Existence” means that the Fourier transform pair is valid only at continuity points. For example,
$$g_1(t) = \begin{cases}1, & -1 \le t \le 1\\ 0, & |t| > 1\end{cases} \qquad\text{and}\qquad g_2(t) = \begin{cases}1, & -1 < t < 1\\ 0, & |t| \ge 1\end{cases}$$
have the same spectrum G(f); however, the two functions are not equal at t = 1 and t = −1!
A2.1 Dirac Delta Function
• A function that exists only in principle.
• Define the Dirac delta function as a function δ(t) satisfying
$$\delta(t) = \begin{cases}\infty, & t = 0\\ 0, & t \ne 0\end{cases} \qquad\text{and}\qquad \int_{-\infty}^{\infty}\delta(t)\,dt = 1.$$
• δ(t) can be thought of as the limit of a unit-area pulse function:
$$\lim_{n\to\infty} s_n(t) = \delta(t), \quad\text{where } s_n(t) = \begin{cases} n, & -\tfrac{1}{2n} \le t \le \tfrac{1}{2n}\\ 0, & \text{otherwise.}\end{cases}$$
A2.1 Properties of Dirac Delta Function
• Sifting property
  - If g(t) is continuous at t₀, then
$$\int_{-\infty}^{\infty} g(t)\,\delta(t-t_0)\,dt = g(t_0),$$
since
$$\int_{-\infty}^{\infty} g(t)\,s_n(t-t_0)\,dt = \int_{t_0-1/(2n)}^{t_0+1/(2n)} g(t)\cdot n\,dt \;\xrightarrow{n\to\infty}\; g(t_0).$$
  - The sifting property is not necessarily true if g(t) is discontinuous at t₀.
A2.1 Properties of Dirac Delta Function
• Replication property
  - For every continuity point of g(t),
$$g(t) = \int_{-\infty}^{\infty} g(\tau)\,\delta(t-\tau)\,d\tau$$
• Constant spectrum
$$\int_{-\infty}^{\infty}\delta(t)\exp(-j2\pi ft)\,dt = \int_{-\infty}^{\infty}\delta(t-0)\exp(-j2\pi ft)\,dt = 1.$$
A2.1 Properties of Dirac Delta Function
• Scaling after integration: does $f(x) = g(x)$ for all $x \ne 0$ imply $\int_{-\infty}^{\infty} f(x)\,dx = \int_{-\infty}^{\infty} g(x)\,dx$? Not for delta functions!
• Although δ(t) and 2δ(t) agree pointwise,
$$\delta(t) = 2\delta(t) = \begin{cases}\infty, & t=0\\ 0, & t\ne 0,\end{cases}$$
their integrals are different:
$$\int_{-\infty}^{\infty}\delta(t)\,dt = 1 \quad\text{while}\quad \int_{-\infty}^{\infty}2\delta(t)\,dt = 2.$$
• Hence, the “multiplicative constant” of a Dirac delta function is significant, and shall never be ignored!
A2.1 Properties of Dirac Delta Function
• More properties
  - Multiplication convention:
$$\delta(t-a)\,\delta(t-b) = \begin{cases}\delta(t-a), & a=b\\ 0, & a\ne b\end{cases}$$
$$\int_{-\infty}^{\infty}\delta(t-a)\,\delta(t-b)\,dt = \int_{-\infty}^{\infty}\delta(t-a)\,\delta(a-b)\,dt = \delta(a-b)\int_{-\infty}^{\infty}\delta(t-a)\,dt = \delta(a-b)$$
A2.1 Fourier Series
• The Fourier transform of a periodic function does not exist! E.g., for integer k,
$$g(t) = \begin{cases}1, & 2k \le t < 2k+1\\ 0, & \text{otherwise.}\end{cases}$$
[Figure: this periodic square wave plotted over 0 ≤ t ≤ 10.]
A2.1 Fourier Series
• Theorem: If g_T(t) is a bounded periodic function satisfying Dirichlet's condition, then
$$g_T(t) = \sum_{n=-\infty}^{\infty} c_n \exp\Big(j\frac{2\pi n}{T}t\Big)$$
at every continuity point of g_T(t), where T is the smallest positive real number such that g_T(t) = g_T(t+T), and
$$c_n = \frac{1}{T}\int_{-T/2}^{T/2} g_T(t)\exp\Big(-j\frac{2\pi n}{T}t\Big)\,dt$$
A2.1 Relation between a Periodic Function
and its Generating Function
• Define the generating function of a periodic function g_T(t) with period T as
$$g(t) = \begin{cases} g_T(t), & -T/2 \le t < T/2\\ 0, & \text{otherwise.}\end{cases}$$
• Then
$$g_T(t) = \sum_{m=-\infty}^{\infty} g(t-mT)$$
A2.1 Relation between a Periodic Function
and its Generating Function
• Let G(f) be the spectrum of g(t) (which should exist). Then, from the theorem above,
$$c_n = \frac{1}{T}\int_{-T/2}^{T/2} g_T(t)\exp\Big(-j\frac{2\pi n}{T}t\Big)dt = \frac{1}{T}\int_{-T/2}^{T/2}\sum_{m=-\infty}^{\infty} g(t-mT)\exp\Big(-j\frac{2\pi n}{T}t\Big)dt$$
$$= \frac{1}{T}\sum_{m=-\infty}^{\infty}\int_{-T/2-mT}^{T/2-mT} g(s)\exp\Big(-j\frac{2\pi n}{T}(s+mT)\Big)ds \quad (s = t-mT)$$
$$= \frac{1}{T}\int_{-\infty}^{\infty} g(s)\exp\Big(-j2\pi\frac{n}{T}s\Big)ds = \frac{1}{T}\,G\Big(\frac{n}{T}\Big)$$
A2.1 Relation between a Periodic Function
and its Generating Function
• This concludes to Poisson's sum formula:
$$g_T(t) = \frac{1}{T}\sum_{n=-\infty}^{\infty} G\Big(\frac{n}{T}\Big)\exp\Big(j2\pi\frac{n}{T}t\Big)$$
[Figure: g(t) with continuous spectrum G(f); the periodic g_T(t) has the line spectrum (1/T)G(n/T) with lines spaced 1/T apart.]
A2.1 Spectrums through LTI filter

[Diagram: excitation x₁(t) → h(τ) → response y₁(t); excitation x₂(t) → h(τ) → response y₂(t).]
• A linear filter satisfies the principle of superposition, i.e.,
x₁(t) + x₂(t) → h(τ) → y₁(t) + y₂(t)
A2.1 Linearity and Convolution
• A linear time-invariant filter can be described by the convolution integral
$$y(t) = \int_{-\infty}^{\infty} h(\tau)\,x(t-\tau)\,d\tau.$$
• Example of a non-linear system:
$$y(t) = x(t) + 0.1\,x^3(t)$$
[Figure: the input-output characteristic of this memoryless non-linear system.]
A2.1 Linearity and Convolution
• Time-convolution = spectrum-multiplication:
$$y(t) = \int_{-\infty}^{\infty} h(\tau)\,x(t-\tau)\,d\tau, \qquad x(t) = \int_{-\infty}^{\infty} X(f)e^{j2\pi ft}\,df, \qquad h(\tau) = \int_{-\infty}^{\infty} H(f)e^{j2\pi f\tau}\,df$$
$$Y(f) = \int_{-\infty}^{\infty} y(t)e^{-j2\pi ft}\,dt = \int_{-\infty}^{\infty}\Big[\int_{-\infty}^{\infty} h(\tau)x(t-\tau)\,d\tau\Big]e^{-j2\pi ft}\,dt = \int_{-\infty}^{\infty} h(\tau)\Big[\int_{-\infty}^{\infty} x(t-\tau)e^{-j2\pi ft}\,dt\Big]d\tau$$
$$= \int_{-\infty}^{\infty} h(\tau)\Big[\int_{-\infty}^{\infty} x(s)e^{-j2\pi f(s+\tau)}\,ds\Big]d\tau \quad (s = t-\tau)$$
$$= \int_{-\infty}^{\infty} h(\tau)e^{-j2\pi f\tau}\,d\tau\int_{-\infty}^{\infty} x(s)e^{-j2\pi fs}\,ds = H(f)\,X(f)$$
A2.1 Impulse Response of LTI Filter
• Impulse response = filter response to the Dirac delta function (an application of the replication property):
$$y(t) = \int_{-\infty}^{\infty} h(\tau)\,\delta(t-\tau)\,d\tau = h(t).$$
A2.1 Frequency Response of LTI Filter
• Frequency response = filter response to a complex exponential input of unit amplitude and frequency f:
$$y(t) = \int_{-\infty}^{\infty} h(\tau)\exp\big(j2\pi f(t-\tau)\big)\,d\tau = \exp(j2\pi ft)\int_{-\infty}^{\infty} h(\tau)\exp(-j2\pi f\tau)\,d\tau = \exp(j2\pi ft)\,H(f)$$
A2.1 Measures for Frequency Response
Expression 1:
$$H(f) = |H(f)|\exp[j\beta(f)], \quad\text{where } |H(f)| \text{ is the amplitude response and } \beta(f) \text{ the phase response.}$$
Expression 2:
$$\ln H(f) = \ln|H(f)| + j\beta(f) = \alpha(f) + j\beta(f), \quad\text{where } \alpha(f) \text{ is the gain and } \beta(f) \text{ the phase response;}$$
$$\alpha(f) = \ln|H(f)| \text{ nepers} = 20\log_{10}|H(f)| \text{ dB}.$$
A2.1 Fourier Analysis
• Remember to self-study Tables A6.2 and A6.3.
1.7 Power Spectral Density

• Deterministic input: $x(t)\ \to\ \text{LTI } h(\tau)\ \to\ y(t) = \int_{-\infty}^{\infty} h(\tau)\,x(t-\tau)\,d\tau$
• WSS input: $X(t)\ \to\ \text{LTI } h(\tau)\ \to\ Y(t) = \int_{-\infty}^{\infty} h(\tau)\,X(t-\tau)\,d\tau$
1.7 Power Spectral Density
• How about the spectrum relation between filter input and filter output?
• An apparent relation is:
  - Deterministic: $x(f)\ \to\ \text{LTI } H(f)\ \to\ y(f) = H(f)\,x(f)$
  - Random: $X(f)\ \to\ \text{LTI } H(f)\ \to\ Y(f) = H(f)\,X(f)$
1.7 Power Spectral Density
• This is, however, not adequate for a random process.
• For a random process, what concerns us is the relation between the input statistics and the output statistics.
1.7 Power Spectral Density
• How about the relation of the first two moments between filter input and output?
• Spectrum relation of the mean processes:
$$\mu_Y(t) = E[Y(t)] = E\Big[\int_{-\infty}^{\infty} h(\tau)X(t-\tau)\,d\tau\Big] = \int_{-\infty}^{\infty} h(\tau)\,\mu_X(t-\tau)\,d\tau \;\Rightarrow\; \mu_Y(f) = \mu_X(f)\,H(f)$$
where μ_X(f) and μ_Y(f) denote the Fourier transforms of μ_X(t) and μ_Y(t).
Supplement: Time-Average Autocorrelation Function
• For a non-stationary process, we can use the time-average autocorrelation function to define the average power correlation at a given time difference:
$$\bar{R}_X(\tau) = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T} E[X(t+\tau)X(t)]\,dt$$
• It is implicitly assumed that $\bar{R}_X(\tau)$ is independent of the location of the integration window; hence,
$$\bar{R}_X(\tau) = \lim_{T\to\infty}\frac{1}{2T}\int_{-T/2}^{3T/2} E[X(t+\tau)X(t)]\,dt$$
Supplement: Time-Average Autocorrelation Function
• E.g., for a WSS process,
$$\bar{R}_X(\tau) = R_X(\tau) = E[X(t+\tau)X(t)]$$
• E.g., for a deterministic function x(t),
$$\bar{R}_x(\tau) = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T} E[x(t+\tau)x(t)]\,dt = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T} x(t+\tau)\,x(t)\,dt$$
Supplement: Time-Average Autocorrelation Function
• E.g., for a cyclostationary process,
$$\bar{R}_X(\tau) = \frac{1}{2T}\int_{-T}^{T} E[X(t+\tau)X(t)]\,dt,$$
where T is the cyclostationary period of X(t).
Supplement: Time-Average Autocorrelation Function
• The time-average power spectral density is the Fourier transform of the time-average autocorrelation function:
$$\bar S_X(f) = \int_{-\infty}^{\infty}\Big[\lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T} E[X(t+\tau)X(t)]\,dt\Big]e^{-j2\pi f\tau}\,d\tau = \lim_{T\to\infty}\frac{1}{2T}\,E\big[X_{2T}(f)\,X_{2T}^*(f)\big],$$
where $X_{2T}(t) = X(t)\cdot 1\{|t|\le T\}$ and $X_{2T}(f)$ denotes its Fourier transform.
Supplement: Time-Average Autocorrelation Function
• For a WSS process, $\bar S_X(f) = S_X(f)$.
• For a deterministic process x(t),
$$\bar S_x(f) = \lim_{T\to\infty}\frac{1}{2T}\,x_{2T}(f)\,x_{2T}^*(f).$$
1.7 Power Spectral Density
• Relation of time-average PSDs: for a stable LTI filter, $\bar S_Y(f) = |H(f)|^2\,\bar S_X(f)$.
1.7 Power Spectral Density under WSS input
• For a WSS filter input, the output PSD satisfies $S_Y(f) = |H(f)|^2\,S_X(f)$.
1.7 Power Spectral Density under WSS input
• Observation:
$$E[Y^2(t)] = R_Y(0) = \int_{-\infty}^{\infty} S_Y(f)\,df = \int_{-\infty}^{\infty} |H(f)|^2\,S_X(f)\,df$$
  - E[Y²(t)] is generally viewed as the average power of the WSS filter output process Y(t).
  - This average power is distributed over the spectrum through S_Y(f); hence, the total average power equals the integral of S_Y(f).
  - Thus, S_Y(f) is named the power spectral density (PSD) of Y(t).
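A minimal discrete-time sketch of the observation above: white noise with variance σ² has a flat PSD, so the output power should equal σ²·∫|H(f)|²df, which by Parseval equals σ²·Σh². The FIR filter taps are an arbitrary illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(4)
sigma2 = 2.0
x = rng.normal(0.0, np.sqrt(sigma2), size=1_000_000)   # white input, flat PSD
h = np.array([0.25, 0.5, 0.25])                        # simple lowpass FIR filter
y = np.convolve(x, h, mode="valid")

H = np.fft.fft(h, 4096)
predicted = sigma2 * np.mean(np.abs(H)**2)             # = sigma2 * integral |H(f)|^2 df
print("measured  E[Y^2]:", y.var())
print("predicted E[Y^2]:", predicted,
      " (= sigma2 * sum(h^2) =", sigma2 * np.sum(h**2), ")")
```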
1.7 Power Spectral Density under WSS input
• The unit of E[Y²(t)] is, e.g., the watt; so the unit of S_Y(f) is therefore watt per Hz.
1.7 Operational Meaning of PSD
 Example. Assume h() is real, and |H(f)| is
given by:
1.7 Operational Meaning of PSD

Then
$$E[Y^2(t)] = R_Y(0) = \int_{-\infty}^{\infty} S_Y(f)\,df = \int_{-\infty}^{\infty} |H(f)|^2 S_X(f)\,df = \int_{-f_c-\Delta f/2}^{-f_c+\Delta f/2} S_X(f)\,df + \int_{f_c-\Delta f/2}^{f_c+\Delta f/2} S_X(f)\,df \approx \Delta f\,\big[S_X(f_c) + S_X(-f_c)\big]$$
The filter passes only those frequency components of the input random process X(t) that lie inside a narrow frequency band of width Δf centered about the frequencies f_c and −f_c.
1.7 Properties of PSD
Property 0. Einstein-Wiener-Khintchine relation
• Relation between the autocorrelation function and the PSD of a WSS process X(t):
$$S_X(f) = \int_{-\infty}^{\infty} R_X(\tau)\exp(-j2\pi f\tau)\,d\tau, \qquad R_X(\tau) = \int_{-\infty}^{\infty} S_X(f)\exp(j2\pi f\tau)\,df$$
1.7 Properties of PSD
Property 1. Power density at zero frequency:
$$S_X(0)\ \text{[Watt/Hz]} = \int_{-\infty}^{\infty} R_X(\tau)\ \text{[Watt]}\;d\tau\ \text{[Second]}$$
Property 2. Average power:
$$E[X^2(t)]\ \text{[Watt]} = \int_{-\infty}^{\infty} S_X(f)\ \text{[Watt/Hz]}\;df\ \text{[Hz]}$$
1.7 Properties of PSD
Property 3. Non-negativity for WSS processes:
$$S_X(f) \ge 0$$
Proof: Pass X(t) through a filter with impulse response h(τ) = cos(2πf_cτ). Then H(f) = (1/2)[δ(f − f_c) + δ(f + f_c)]. As a result,
$$E[Y^2(t)] = \int_{-\infty}^{\infty} S_Y(f)\,df = \int_{-\infty}^{\infty} |H(f)|^2 S_X(f)\,df = \frac{1}{4}\Big[\int_{-\infty}^{\infty}\delta(f-f_c)S_X(f)\,df + \int_{-\infty}^{\infty}\delta(f+f_c)S_X(f)\,df\Big]$$
(this step requires that S_X(f) be continuous, which is true in general)
$$= \frac{1}{4}\big(S_X(f_c) + S_X(-f_c)\big) = \frac{1}{2}\,S_X(f_c), \quad\text{since } S_X(f) = S_X(-f) \text{ from Property 4.}$$
1.7 Properties of PSD
Therefore, by passing X(t) through a proper filter,
$$S_X(f_c) = 2\,E[Y^2(t)] \ge 0$$
for any f_c.
1.7 Properties of PSD
Property 4. PSD is an even function, i.e., S_X(f) = S_X(−f).
Proof:
$$S_X(-f) = \int_{-\infty}^{\infty} R_X(\tau)\,e^{-j2\pi(-f)\tau}\,d\tau = \int_{-\infty}^{\infty} R_X(-s)\,e^{-j2\pi fs}\,ds \quad (s = -\tau)$$
$$= \int_{-\infty}^{\infty} R_X(s)\,e^{-j2\pi fs}\,ds \quad (\text{since } R_X(\tau) = R_X(-\tau)) \;=\; S_X(f)$$
1.7 Properties of PSD
Property 5. PSD is real.
Proof: R_X(τ) real ⇒ $S_X(-f) = S_X^*(f)$. Then, with Property 4,
$$S_X(f) = S_X(-f) = S_X^*(f).$$
Thus, S_X(f) is real.
Example 1.5 (continued from Example 1.2) Signal with Random Phase
Let X(t) = A cos(2πf_c t + Θ), where Θ is uniformly distributed over [−π, π). From Example 1.2,
$$R_X(\tau) = \frac{A^2}{2}\cos(2\pi f_c\tau),$$
so
$$S_X(f) = \int_{-\infty}^{\infty}\frac{A^2}{2}\cos(2\pi f_c\tau)\,e^{-j2\pi f\tau}\,d\tau = \frac{A^2}{4}\int_{-\infty}^{\infty}\big(e^{-j2\pi(f-f_c)\tau} + e^{-j2\pi(f+f_c)\tau}\big)\,d\tau = \frac{A^2}{4}\big[\delta(f-f_c) + \delta(f+f_c)\big]$$
Example 1.5 (continued) Signal with Random Phase
[Figure: S_X(f) consists of two spectral lines of weight A²/4 at f = ±f_c.]
Example 1.6 (continued from Example 1.3) Random Binary Wave
Let
$$X(t) = A\sum_{n=-\infty}^{\infty} I_n\,p(t-nT-t_d)$$
where …, I₋₂, I₋₁, I₀, I₁, I₂, … are independent, each I_j equals −1 or +1 with equal probability, and
$$p(t) = \begin{cases}1, & 0\le t\le T\\ 0, & \text{otherwise.}\end{cases}$$
Example 1.6 (continued from Example 1.3) Random Binary Wave
From Example 1.3,
$$R_X(\tau) = \begin{cases} A^2\Big(1-\dfrac{|\tau|}{T}\Big), & |\tau| \le T\\ 0, & \text{otherwise.}\end{cases}$$
Integrating by parts ($\int u\,dv = uv - \int v\,du$),
$$S_X(f) = \int_{-T}^{T} A^2\Big(1-\frac{|\tau|}{T}\Big)e^{-j2\pi f\tau}\,d\tau = \Big[A^2\Big(1-\frac{|\tau|}{T}\Big)\frac{e^{-j2\pi f\tau}}{-j2\pi f}\Big]_{-T}^{T} - \frac{A^2}{j2\pi fT}\int_{-T}^{T}\operatorname{sgn}(\tau)\,e^{-j2\pi f\tau}\,d\tau$$
$$= -\frac{A^2}{j2\pi fT}\Big[\int_{0}^{T} e^{-j2\pi f\tau}\,d\tau - \int_{-T}^{0} e^{-j2\pi f\tau}\,d\tau\Big] = \frac{A^2}{4\pi^2f^2T}\big[2 - 2\cos(2\pi fT)\big] = \frac{A^2}{\pi^2f^2T}\sin^2(\pi fT) = A^2T\,\operatorname{sinc}^2(fT)$$
1.7 Energy Spectral Density
• The energy of a (deterministic) function p(t) is given by $\int_{-\infty}^{\infty} |p(t)|^2\,dt$.
• Recall that the average power of p(t) is given by
$$\lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T} |p(t)|^2\,dt.$$
• Observe that
$$\int_{-\infty}^{\infty}|p(t)|^2\,dt = \int_{-\infty}^{\infty} p(t)\,p^*(t)\,dt = \int_{-\infty}^{\infty}\Big(\int_{-\infty}^{\infty} P(f)e^{j2\pi ft}\,df\Big)\Big(\int_{-\infty}^{\infty} P(f')e^{j2\pi f't}\,df'\Big)^{\!*}dt$$
$$= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} P(f)\,P^*(f')\Big(\int_{-\infty}^{\infty} e^{-j2\pi(f'-f)t}\,dt\Big)df\,df' = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} P(f)\,P^*(f')\,\delta(f'-f)\,df\,df' = \int_{-\infty}^{\infty}|P(f)|^2\,df$$
• For the same reason as for the PSD, |P(f)|² is named the energy spectral density (ESD) of p(t).
Example
• The ESD of a rectangular pulse of amplitude A and duration T is given by
$$E_g(f) = \Big|\int_{0}^{T} A\,e^{-j2\pi ft}\,dt\Big|^2 = A^2T^2\,\operatorname{sinc}^2(fT)$$
Example 1.7 Mixing of a Random Process with a Sinusoidal Process
Let Y(t) = X(t) cos(2πf_c t + Θ), where Θ is uniformly distributed over [−π, π), and X(t) is WSS and independent of Θ.
$$R_Y(t,u) = E[X(t)X(u)\cos(2\pi f_c t+\Theta)\cos(2\pi f_c u+\Theta)] = E[X(t)X(u)]\,E[\cos(2\pi f_c t+\Theta)\cos(2\pi f_c u+\Theta)] = R_X(t-u)\,\frac{\cos(2\pi f_c(t-u))}{2}$$
$$\Rightarrow\quad S_Y(f) = \frac{1}{4}\big[S_X(f-f_c) + S_X(f+f_c)\big]$$
1.7 How to measure PSD?
• If X(t) is not only (strictly) stationary but also ergodic, then any (deterministic) observed sample path x(t) on [−T, T) satisfies
$$\lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T} x(t)\,dt = E[X(t)] = \mu_X$$
• Similarly,
$$\lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T} x(t+\tau)\,x(t)\,dt = R_X(\tau)$$
1.7 How to measure PSD?
• Hence, we may use the time-limited Fourier transform of the time-averaged autocorrelation function
$$\frac{1}{2T}\int_{-T}^{T} x(t+\tau)\,x(t)\,dt$$
to approximate the PSD. (Notably, we only have the values of x(t) for t in [−T, T).)
$$\int_{-T}^{T}\Big[\frac{1}{2T}\int_{-T}^{T} x(t+\tau)x(t)\,dt\Big]\exp(-j2\pi f\tau)\,d\tau = \frac{1}{2T}\int_{-T}^{T} x(t)\Big[\int x(t+\tau)\exp(-j2\pi f\tau)\,d\tau\Big]dt$$
$$= \frac{1}{2T}\int_{-T}^{T} x(t)\exp(j2\pi ft)\Big[\int x(s)\exp(-j2\pi fs)\,ds\Big]dt \quad (s = t+\tau)$$
$$= \frac{1}{2T}\Big[\int_{-T}^{T} x(s)\exp(-j2\pi fs)\,ds\Big]\Big[\int_{-T}^{T} x(t)\exp(-j2\pi(-f)t)\,dt\Big] = \frac{1}{2T}\,x_{2T}(f)\,x_{2T}(-f)$$
1.7 How to measure PSD?
• The estimate
$$\frac{1}{2T}\,x_{2T}(f)\,x_{2T}(-f)$$
is named the periodogram.
• To summarize:
  1. Observe x(t) for the duration [−T, T).
  2. Calculate $x_{2T}(f) = \int_{-T}^{T} x(t)\exp(-j2\pi ft)\,dt$.
  3. Then $S_X(f) \approx \dfrac{1}{2T}\,x_{2T}(f)\,x_{2T}(-f)$.
1.7 Cross Spectral Density
• Definition: For two (jointly WSS) random processes X(t) and Y(t), their cross-spectral densities are given by
$$S_{X,Y}(f) = \int_{-\infty}^{\infty} R_{X,Y}(\tau)\exp(-j2\pi f\tau)\,d\tau, \qquad S_{Y,X}(f) = \int_{-\infty}^{\infty} R_{Y,X}(\tau)\exp(-j2\pi f\tau)\,d\tau$$
where $R_{X,Y}(t,u) = E[X(t)Y(u)]$ with $R_{X,Y}(\tau) = R_{X,Y}(t-u)$, and $R_{Y,X}(t,u) = E[Y(t)X(u)]$ with $R_{Y,X}(\tau) = R_{Y,X}(t-u)$ (for τ = t − u).
1.7 Cross Spectral Density
• Property:
$$R_{X,Y}(\tau) = R_{Y,X}(-\tau) \;\Rightarrow\; S_{Y,X}(-f) = \int_{-\infty}^{\infty} R_{Y,X}(\tau)\exp(j2\pi f\tau)\,d\tau = \int_{-\infty}^{\infty} R_{Y,X}(-\tau)\exp(-j2\pi f\tau)\,d\tau = \int_{-\infty}^{\infty} R_{X,Y}(\tau)\exp(-j2\pi f\tau)\,d\tau = S_{X,Y}(f)$$
Example 1.8 PSD of Sum Process
• Determine the PSD of the sum process Z(t) = X(t) + Y(t) of two zero-mean (jointly WSS) processes X(t) and Y(t).
• Answer:
$$R_Z(t,u) = E[Z(t)Z(u)] = E[(X(t)+Y(t))(X(u)+Y(u))] = R_X(t,u) + R_{X,Y}(t,u) + R_{Y,X}(t,u) + R_Y(t,u).$$
Joint wide-sense stationarity implies that
$$R_Z(\tau) = R_X(\tau) + R_{X,Y}(\tau) + R_{Y,X}(\tau) + R_Y(\tau).$$
Hence,
$$S_Z(f) = S_X(f) + S_{X,Y}(f) + S_{Y,X}(f) + S_Y(f). \qquad\text{Q.E.D.}$$

If X(t) and Y(t) are uncorrelated, i.e., $E[X(t+\tau)Y(t)] = E[X(t+\tau)]\,E[Y(t)] = 0$, then
$$S_Z(f) = S_X(f) + S_Y(f).$$
The PSD of a sum process of zero-mean uncorrelated processes equals the sum of their individual PSDs.
Example 1.9
• Determine the cross-spectral density of the output processes induced by separately passing jointly WSS inputs through a pair of stable LTI filters.
• Answer: if jointly WSS inputs X(t) and Y(t) drive filters H₁(f) and H₂(f) with outputs V(t) and Z(t), a derivation parallel to Section 1.6 gives
$$S_{V,Z}(f) = H_1(f)\,H_2^*(f)\,S_{X,Y}(f).$$
1.8 Gaussian Process
• Definition. A random variable is Gaussian distributed if its pdf has the form
$$f_Y(y) = \frac{1}{\sqrt{2\pi\sigma_Y^2}}\exp\Big(-\frac{(y-\mu_Y)^2}{2\sigma_Y^2}\Big)$$
1.8 Gaussian Process
• Definition. An n-dimensional random vector is Gaussian distributed if its pdf has the form
$$f_{\mathbf{X}}(\mathbf{x}) = \frac{1}{(2\pi)^{n/2}|\boldsymbol{\Lambda}|^{1/2}}\exp\Big(-\frac{1}{2}(\mathbf{x}-\boldsymbol{\mu})^T\boldsymbol{\Lambda}^{-1}(\mathbf{x}-\boldsymbol{\mu})\Big)$$
where $\boldsymbol{\mu} = [E[X_1], E[X_2], \dots, E[X_n]]^T$ is the mean vector, and
$$\boldsymbol{\Lambda} = \begin{bmatrix}\operatorname{Cov}\{X_1,X_1\} & \operatorname{Cov}\{X_1,X_2\} & \cdots\\ \operatorname{Cov}\{X_2,X_1\} & \operatorname{Cov}\{X_2,X_2\} & \cdots\\ \vdots & \vdots & \ddots\end{bmatrix}_{n\times n}$$
is the covariance matrix.
1.8 Gaussian Process
• For a Gaussian random vector, “uncorrelatedness” implies “independence”:
$$\boldsymbol{\Lambda} = \begin{bmatrix}\operatorname{Cov}\{X_1,X_1\} & 0 & \cdots\\ 0 & \operatorname{Cov}\{X_2,X_2\} & \cdots\\ \vdots & \vdots & \ddots\end{bmatrix}_{n\times n} \;\Rightarrow\; f_{\mathbf{X}}(\mathbf{x}) = \prod_{i=1}^{n} f_{X_i}(x_i)$$
1.8 Gaussian Process
• Definition. A random process X(t) is said to be Gaussian distributed if, for every function g(t) satisfying
$$\int_0^T\!\!\int_0^T g(t)\,g(u)\,R_X(t,u)\,dt\,du < \infty,$$
the random variable
$$Y = \int_0^T g(t)\,X(t)\,dt$$
is Gaussian.
• Notably, $E[Y^2] = \int_0^T\!\!\int_0^T g(t)\,g(u)\,R_X(t,u)\,dt\,du$.
1.8 Central Limit Theorem
• Theorem (Central Limit Theorem). For a sequence of independent and identically distributed (i.i.d.) random variables X₁, X₂, X₃, …,
$$\lim_{n\to\infty}\Pr\left[\frac{(X_1-\mu_X) + \cdots + (X_n-\mu_X)}{\sigma_X\sqrt{n}} \le y\right] = \int_{-\infty}^{y}\frac{1}{\sqrt{2\pi}}\exp\Big(-\frac{x^2}{2}\Big)\,dx$$
where $\mu_X = E[X_j]$ and $\sigma_X^2 = \operatorname{Var}[X_j]$.
1.8 Properties of Gaussian processes
Property 1. The output of a stable linear filter is a Gaussian process if the input is a Gaussian process. (This is self-justified by the definition of Gaussian processes.)
Property 2. A finite number of samples of a Gaussian process forms a multi-dimensional Gaussian vector. (No proof. Some books use this as the definition of Gaussian processes.)
1.8 Properties of Gaussian processes
Property 3. A WSS Gaussian process is also
strictly stationary.
(An immediate consequence of Property 2.)
1.9 Noise
• Noise: an unwanted signal that disturbs the transmission or processing of signals in communication systems.
• Types:
  - shot noise
  - thermal noise
  - … etc.
1.9 Shot Noise
• A noise that arises from the discrete nature of current flow in diodes and transistors.
  - E.g., a current pulse is generated every time an electron is emitted by the cathode.
• Mathematical model:
$$X_{\text{Shot}}(t) = \sum_{k=-\infty}^{\infty} p(t-\tau_k)$$
where $\{\tau_k\}_{k=-\infty}^{\infty}$ is the sequence of times at which pulses are generated, and p(t) is a pulse shape of finite duration.
1.9 Shot Noise
• X_Shot(t) is called the shot noise.
• A more useful model is to count the number of electrons emitted in the time interval (0, t]:
$$N(t) = \max\{k : \tau_k \le t\}$$
1.9 Shot Noise
• N(t) behaves like a Poisson counting process.
• Definition (Poisson counting process). A Poisson counting process with parameter λ is a process {N(t), t ≥ 0} with N(0) = 0 and stationary independent increments, satisfying that for 0 < t₁ < t₂, N(t₂) − N(t₁) is Poisson distributed with mean λ(t₂ − t₁). In other words,
$$\Pr[N(t_2)-N(t_1) = k] = \frac{[\lambda(t_2-t_1)]^k}{k!}\exp[-\lambda(t_2-t_1)]$$
1.9 Shot Noise
• A detailed statistical characterization of the shot-noise process X_Shot(t) is in general hard. Some properties are quoted below for your reference:
  - X_Shot(t) is strictly stationary.
  - Mean: $\mu_{X_{\text{Shot}}} = \lambda\displaystyle\int_{-\infty}^{\infty} p(t)\,dt$
  - Autocovariance function: $C_{X_{\text{Shot}}}(\tau) = \lambda\displaystyle\int_{-\infty}^{\infty} p(t+\tau)\,p(t)\,dt$
1.9 Shot Noise
• Example. p(t) is a rectangular pulse of amplitude A and duration T:
$$\mu_{X_{\text{Shot}}} = \lambda\int_{-\infty}^{\infty} p(t)\,dt = \lambda\int_0^T A\,dt = \lambda AT$$
$$C_{X_{\text{Shot}}}(\tau) = \begin{cases}\lambda A^2(T-|\tau|), & |\tau| \le T\\ 0, & \text{otherwise}\end{cases}$$
1.9 Thermal Noise
• A noise that arises from the random motion of electrons in a conductor.
• Mathematical model: the thermal noise voltage V_TN appearing across the terminals of a resistor, measured in a bandwidth of Δf Hz, is zero-mean Gaussian distributed with variance
$$E[V_{\text{TN}}^2] = 4kTR\,\Delta f \ \ [\text{volts}^2]$$
where k ≈ 1.38 × 10⁻²³ joules per kelvin is Boltzmann's constant, R is the resistance in ohms, and T is the absolute temperature in kelvin.
1.9 Thermal Noise
• Model of a noisy resistor:
[Figure: equivalent circuit models of a noisy resistor, with a noise voltage source V_TN or a noise current source I_TN, related by]
$$E[V_{\text{TN}}^2] = E[I_{\text{TN}}^2]\,R^2$$
1.9 White Noise
• A noise is white if its PSD is constant over all frequencies.
• It is often defined as
$$S_W(f) = \frac{N_0}{2}$$
• Impracticability: the noise has infinite power,
$$E[W^2(t)] = \int_{-\infty}^{\infty} S_W(f)\,df = \int_{-\infty}^{\infty}\frac{N_0}{2}\,df = \infty.$$
1.9 White Noise
• Another impracticability: no matter how close in time two samples are, they are uncorrelated!
• So impractical — why is white noise so popular in the analysis of communication systems?
  - There do exist noise sources that have a flat power spectral density over a range of frequencies much larger than the bandwidths of subsequent filters or measurement devices.
1.9 White Noise
• Some physical measurements have shown that the PSD of (a certain kind of) noise has the form
$$S_W(f) = 2kTR\,\frac{\alpha^2}{\alpha^2 + (2\pi f)^2}$$
where k is Boltzmann's constant, T is the absolute temperature, and α and R are parameters of the physical medium.
• When f ≪ α,
$$S_W(f) \approx 2kTR = \frac{N_0}{2}$$
Example 1.10 Ideal Low-Pass Filtered White
Noise
• After the filter, the PSD of the zero-mean white noise becomes
$$S_{FW}(f) = |H(f)|^2\,S_W(f) = \begin{cases}\dfrac{N_0}{2}, & |f| \le B\\ 0, & \text{otherwise}\end{cases}$$
$$R_{FW}(\tau) = \int_{-B}^{B}\frac{N_0}{2}\exp(j2\pi f\tau)\,df = N_0B\,\operatorname{sinc}(2B\tau)$$
• τ = k/(2B) (k a non-zero integer) implies R_FW(τ) = 0, i.e., the samples are uncorrelated.
Example 1.10 Ideal Low-Pass Filtered White
Noise
• So if we sample the filtered noise at a rate of 2B samples per second, the resulting noise samples are uncorrelated!
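A minimal discrete-time sketch of Example 1.10: lowpass-filter white noise to bandwidth B, then check that samples 1/(2B) apart are (nearly) uncorrelated while more closely spaced samples are not. The sample rate and bandwidth are illustrative choices, and the ideal filter is applied in the frequency domain.

```python
import numpy as np

rng = np.random.default_rng(7)
fs, B = 1000.0, 50.0                       # sample rate and filter bandwidth, Hz
n = 2_000_000
w = rng.standard_normal(n)

# Ideal lowpass filtering in the frequency domain.
W = np.fft.rfft(w)
f = np.fft.rfftfreq(n, d=1/fs)
x = np.fft.irfft(W * (f <= B), n)

def corr(lag_samples):
    return np.corrcoef(x[:-lag_samples], x[lag_samples:])[0, 1]

half_nyq = int(fs / (2*B))                 # lag of 1/(2B) seconds = 10 samples here
print("corr at lag 1/(8B):", corr(half_nyq // 4))   # noticeably correlated
print("corr at lag 1/(2B):", corr(half_nyq))        # ~ 0, since sinc(2B*tau) = 0
```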
Example 1.11
[Figure: binary transmission with coherent correlator reception. The message bits 0110… select ±m(t); the modulator uses the carrier wave A_c cos(2πf_c t); the channel adds white noise w(t); the receiver multiplies x(t) by the scaled local carrier √(2/T) cos(2πf_c t), integrates over [0, T], and compares the decision statistic with 0.]
Example 1.11
In the previous figure, a scaling factor √(2/T) is added to the local carrier to normalize the signal energy:
$$\text{Signal energy} = \int_0^T\Big(\sqrt{\tfrac{2}{T}}\cos(2\pi f_c t)\Big)^2 dt = \int_0^T\frac{2}{T}\cos^2(2\pi f_c t)\,dt = \int_0^T\frac{1+\cos(4\pi f_c t)}{T}\,dt = 1$$
(the cross term integrates to zero when f_cT is an integer, and is negligible when f_cT ≫ 1).
Example 1.11
$$\text{Noise } N = \int_0^T w(t)\sqrt{\tfrac{2}{T}}\cos(2\pi f_c t)\,dt$$
$$\mu_N = E\Big[\int_0^T w(t)\sqrt{\tfrac{2}{T}}\cos(2\pi f_c t)\,dt\Big] = \int_0^T E[w(t)]\,\sqrt{\tfrac{2}{T}}\cos(2\pi f_c t)\,dt = 0.$$
$$\sigma_N^2 = E\Big[\int_0^T w(t)\sqrt{\tfrac{2}{T}}\cos(2\pi f_c t)\,dt\int_0^T w(s)\sqrt{\tfrac{2}{T}}\cos(2\pi f_c s)\,ds\Big] = \frac{2}{T}\int_0^T\!\!\int_0^T E[w(t)w(s)]\cos(2\pi f_c t)\cos(2\pi f_c s)\,ds\,dt$$
$$= \frac{2}{T}\int_0^T\!\!\int_0^T\frac{N_0}{2}\,\delta(t-s)\cos(2\pi f_c t)\cos(2\pi f_c s)\,ds\,dt = \frac{N_0}{T}\int_0^T\cos^2(2\pi f_c s)\,ds = \frac{N_0}{2}.$$
• If w(t) is white Gaussian, then the pdf of N is uniquely determined by the first and second moments.
1.10 Narrowband Noise
• In general, the receiver of a communication system includes a narrowband filter whose bandwidth is just large enough to pass the modulated component of the received signal.
• The noise is therefore also filtered by this narrowband filter.
• The noise's PSD after filtering may look like the figure referred to below.
1.10 Narrowband Noise

[Figure: PSD of narrowband noise, concentrated in bands around ±f_c.]
• The analysis of narrowband noise will be covered in subsequent sections.
A2.2 Bandwidth
• The bandwidth is the width of the frequency range outside which the power is essentially negligible.
• E.g., the bandwidth of the (strictly) band-limited signal shown below is B.
[Figure: a strictly band-limited spectrum of width B.]
A2.2 Null-to-Null Bandwidth
• Most signals of practical interest are not strictly band-limited.
  - Therefore, there may not be a universally accepted definition of bandwidth for such signals.
• In such cases, people may use the null-to-null bandwidth:
  - the width of the main spectral lobe that lies inside the positive frequency region (f ≥ 0).
A2.2 Null-to-Null Bandwidth

X (t )   A I
n  
n  p (t  nT  td ), where p(t ) is a rectangular

pulse of duration T and amplitude A.

 S X ( f )  A2Tsinc 2 ( fΤ )

The null-to-null bandwidth is 1/T.

1-162
A2.2 Null-to-Null Bandwidth
• The null-to-null bandwidth in this (band-pass) case is 2B.
A2.2 3-dB Bandwidth
• The 3-dB bandwidth is the displacement between the (positive) frequency at which the magnitude spectrum reaches its maximum and the (positive) frequency at which the magnitude spectrum drops to 1/√2 of the peak value.
  - Drawback: a small 3-dB bandwidth does not necessarily indicate that most of the power is confined within a small range (e.g., the spectrum may have a slowly decreasing tail).
A2.2 3-dB Bandwidth
$$\frac{S_X(f)}{S_X(0)} = \frac{A^2T\,\operatorname{sinc}^2(fT)}{A^2T} = \operatorname{sinc}^2(fT) = \frac{1}{\sqrt{2}} \;\Rightarrow\; f \approx \frac{0.32}{T}$$
The 3-dB bandwidth is 0.32/T.
A2.2 Root-Mean-Square Bandwidth
• rms bandwidth:
$$B_{\text{rms}} = \left[\frac{\int_{-\infty}^{\infty} f^2\,S_X(f)\,df}{\int_{-\infty}^{\infty} S_X(f)\,df}\right]^{1/2}$$
• Disadvantage: sometimes $\int_{-\infty}^{\infty} f^2S_X(f)\,df = \infty$ even if $\int_{-\infty}^{\infty} S_X(f)\,df < \infty$.
A2.2 Bandwidth of Deterministic Signals
• The previous definitions can also be applied to deterministic signals, with the PSD replaced by the ESD.
  - For example, a deterministic signal with spectrum G(f) has rms bandwidth
$$B_{\text{rms}} = \left[\frac{\int_{-\infty}^{\infty} f^2|G(f)|^2\,df}{\int_{-\infty}^{\infty}|G(f)|^2\,df}\right]^{1/2}$$
A2.2 Noise Equivalent Bandwidth
• An important consideration in communication systems is the noise power at a linear filter output due to a white noise input.
  - We can characterize the noise-resisting ability of a filter by its noise equivalent bandwidth.
  - Noise equivalent bandwidth = the bandwidth of an ideal low-pass filter that yields the same output filter noise power.
A2.2 Noise Equivalent Bandwidth
[Figure: |H(f)|² of a general low-pass filter and the equivalent ideal rectangular response of bandwidth B_NE and height |H(0)|².]
A2.2 Noise Equivalent Bandwidth
• Output noise power for a general linear filter:
$$\int_{-\infty}^{\infty} S_W(f)\,|H(f)|^2\,df = \frac{N_0}{2}\int_{-\infty}^{\infty}|H(f)|^2\,df$$
• Output noise power for an ideal low-pass filter of bandwidth B with the same amplitude as the general linear filter at f = 0:
$$\int_{-\infty}^{\infty} S_W(f)\,|H(f)|^2\,df = \frac{N_0}{2}\int_{-B}^{B}|H(0)|^2\,df = BN_0\,|H(0)|^2$$
• Equating the two powers gives
$$B_{\text{NE}} = \frac{\int_{-\infty}^{\infty}|H(f)|^2\,df}{2\,|H(0)|^2}$$
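A minimal sketch computing B_NE numerically for a first-order RC lowpass filter, |H(f)|² = 1/(1 + (2πfRC)²), and comparing with the closed form B_NE = 1/(4RC). The RC value and integration grid are illustrative choices.

```python
import numpy as np

RC = 1e-3
f = np.linspace(0, 1e6, 2_000_001)              # fine grid up to 1 MHz
H2 = 1.0 / (1.0 + (2*np.pi*f*RC)**2)            # |H(f)|^2 of the RC filter

# B_NE = integral |H|^2 df / (2 |H(0)|^2); the integrand is even, so
# integrating over f >= 0 already accounts for the factor of 2.
df = f[1] - f[0]
B_NE = np.sum(H2) * df / H2[0]
print("numerical B_NE :", B_NE, "Hz")
print("closed form    :", 1/(4*RC), "Hz")
```

Note that B_NE = (π/2)·f_3dB for this filter, since f_3dB = 1/(2πRC): the noise bandwidth exceeds the 3-dB bandwidth because of the filter's slow skirt.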
A2.2 Time-Bandwidth Product
• Time-scaling property of the Fourier transform: reducing the time scale by a factor of a expands the bandwidth by a factor of a:
$$g(t)\ \xrightarrow{\text{Fourier}}\ G(f) \;\Rightarrow\; g(at)\ \xrightarrow{\text{Fourier}}\ \frac{1}{|a|}\,G\Big(\frac{f}{a}\Big)$$
• This hints that the product of time and frequency parameters should remain constant; it is named the time-bandwidth product or bandwidth-duration product.
A2.2 Time-Bandwidth Product
• Since there are various definitions of the time parameter (e.g., duration of a signal) and the frequency parameter (e.g., bandwidth), the time-bandwidth product constant may change for different definitions.
• E.g., the rms duration and rms bandwidth of a pulse g(t):
$$T_{\text{rms}} = \left[\frac{\int_{-\infty}^{\infty} t^2|g(t)|^2\,dt}{\int_{-\infty}^{\infty}|g(t)|^2\,dt}\right]^{1/2}, \qquad B_{\text{rms}} = \left[\frac{\int_{-\infty}^{\infty} f^2|G(f)|^2\,df}{\int_{-\infty}^{\infty}|G(f)|^2\,df}\right]^{1/2}$$
Then
$$T_{\text{rms}}\,B_{\text{rms}} \ge \frac{1}{4\pi}.$$
A2.2 Time-Bandwidth Product
Example: g(t) = exp(t2). Then G(f) = exp(f2).
1/ 2
 t 2 e 2t dt 


2

 1 1
Trms  Brms       . Then Trms Brms  .
  e dt 
2
 2t 2  4
  

Example: g(t) = exp(t|). Then G(f) = 2/(1+42f2).


2 1/ 2
1/ 2   f 
 t e dt   

df 
 
2  2 |t |
  (1  4 f )
  2 2 2
 1 1 1
Trms Brms          .
  e dt 
 2|t | 1  2 2 4
     (1  4 2 f 2 ) 2 
df
 
A2.3 Hilbert Transform
• Let G(f) be the spectrum of a real function g(t).
• By convention, denote by u(f) the unit-step function:
$$u(f) = \begin{cases}1, & f > 0\\ 1/2, & f = 0\\ 0, & f < 0\end{cases}$$
• Put g₊(t) to be the function corresponding to 2u(f)G(f). (The factor 2 keeps the total area under the spectrum unchanged.)
[Figure: G(f) versus 2u(f)G(f).]
A2.3 Hilbert Transform
• How do we obtain g₊(t)? Answer: the Hilbert transformer.
Proof: Observe that
$$2u(f) = 1 + \operatorname{sgn}(f), \quad\text{where } \operatorname{sgn}(f) = \begin{cases}1, & f>0\\ 0, & f=0\\ -1, & f<0\end{cases}$$
By the extended (inverse) Fourier transform,
$$\operatorname{sgn}(f)\ \xrightarrow{\text{Inverse Fourier}}\ \lim_{a\to 0}\frac{j4\pi t}{a^2+4\pi^2t^2} = \begin{cases}\dfrac{j}{\pi t}, & t\ne 0\\ 0, & t=0\end{cases}$$
Hence
$$2u(f) = 1 + \operatorname{sgn}(f)\ \xrightarrow{\text{Inverse Fourier}}\ \delta(t) + \frac{j}{\pi t}\cdot 1\{t\ne 0\}$$
A2.3 Hilbert Transform
g  (t )  Fourier 12u ( f )G ( f )
 Fourier 1{2u ( f )} * Fourier 1{G ( f )}
 1 
   (t )  j  1{t  0} * g (t )
 t 
1
 g (t )  j  1{t  0} * g (t )
t
 g (t )  jgˆ (t ),
 g ( )
where g (t )  
ˆ d is named the Hilbert Transform of g (t ).
   (t   )
A2.3 Hilbert Transform

g t ) 1 ˆ t)
g
h ) 


1, f 0
1 Fourier

h )   H ( f )   j sgn( f ), where sgn( f )  0, f 0
  1,
 f 0

| G ( f ) | exp{ j[G ( f )   / 2]}, f 0



 Gˆ ( f )   j sgn( f )  G ( f )   0, f 0
| G ( f ) | exp{ j[G ( f )   / 2]}, f 0

A2.3 Hilbert Transform
• Hence, the Hilbert transform is basically a 90-degree phase shifter.
A2.3 Hilbert Transform
 1  g ( )
 gˆ (t )     t   d
Hilbert Transform Pair 
 g (t )   1
 gˆ ( )
 
  t  
d

g t ) 1 ˆ t)
g
h ) 


ˆ t)
g 1 g t )
h )  

1
A2.3 Hilbert Transform
• An important property of the Hilbert transform:
  - g(t) and ĝ(t) are orthogonal in the sense of integration; in other words,
$$\int_{-\infty}^{\infty} g(t)\,\hat{g}(t)\,dt = 0.$$
(See the proof in the next derivation.)
  - Equivalently, the real and imaginary parts of g₊(t) are orthogonal to each other.
(Examples of Hilbert transform pairs can be found in Table A6.4.)
$$\int_{-\infty}^{\infty} g(t)\hat{g}(t)\,dt = \int_{-\infty}^{\infty} g(t)\Big[\int_{-\infty}^{\infty}\hat{G}(f)e^{j2\pi ft}\,df\Big]dt = \int_{-\infty}^{\infty}\hat{G}(f)\Big[\int_{-\infty}^{\infty} g(t)e^{j2\pi ft}\,dt\Big]df = \int_{-\infty}^{\infty}\hat{G}(f)\,G(-f)\,df$$
$$= -j\int_{-\infty}^{\infty}\operatorname{sgn}(f)\,G(f)\,G(-f)\,df = -j\Big[\int_0^{\infty} G(f)G(-f)\,df - \int_{-\infty}^0 G(f)G(-f)\,df\Big]$$
$$= -j\Big[\int_0^{\infty} G(f)G(-f)\,df - \int_0^{\infty} G(-f)G(f)\,df\Big] = 0, \quad\text{if }\int_0^{\infty} G(f)G(-f)\,df < \infty.$$
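A minimal numerical sketch of the two claims above, using scipy.signal.hilbert (which returns the pre-envelope g(t) + jĝ(t)): the Hilbert transform of a cosine is the corresponding sine (90° shift), and g and ĝ are orthogonal. The tone frequency and record length are chosen so the tone fits the FFT grid exactly.

```python
import numpy as np
from scipy.signal import hilbert

fs, f0, n = 1000.0, 50.0, 4000           # 200 full periods: f0*n/fs is an integer
t = np.arange(n) / fs
g = np.cos(2*np.pi*f0*t)

ghat = hilbert(g).imag                   # Hilbert transform of g
print("max |ghat - sin|   :", np.max(np.abs(ghat - np.sin(2*np.pi*f0*t))))
print("<g, ghat> / ||g||^2:", np.dot(g, ghat) / np.dot(g, g))   # ~ 0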
A2.4 Complex Representation of Signals
and Systems
• g₊(t) is named the pre-envelope or analytical signal of g(t).
• We can similarly define
$$g_-(t) = g(t) - j\,\hat{g}(t)\ \longleftrightarrow\ 2\big(1-u(f)\big)\,G(f)$$
[Figure: G(f) and the spectrum 2(1−u(f))G(f), which retains only the negative frequencies.]
A2.4 Canonical Representation
of Band-Pass Signals
• Now let G(f) be a narrow-band signal for which 2W ≪ f_c.
• Then we can obtain its pre-envelope G₊(f).
• Afterwards, we can shift the pre-envelope down to its low-pass equivalent
$$\tilde{G}(f) = G_+(f+f_c)$$
A2.4 Canonical Representation of Band-Pass
Signal
• These steps give the relation between the complex low-pass signal (baseband signal) and the real band-pass signal (passband signal):
$$g(t) = \operatorname{Re}\{g_+(t)\} = \operatorname{Re}\{\tilde{g}(t)\exp(j2\pi f_c t)\}$$
• Quite often, the real and imaginary parts of the complex low-pass signal are respectively denoted by g_I(t) and g_Q(t).
A2.4 Canonical Representation of Band-Pass
Signal
• In terminology:
  - g₊(t): pre-envelope
  - g̃(t): complex envelope
  - g_I(t): in-phase component of the band-pass signal g(t)
  - g_Q(t): quadrature component of the band-pass signal g(t)
• This leads to a canonical, or standard, expression for g(t):
$$g(t) = \operatorname{Re}\big\{(g_I(t)+jg_Q(t))\exp(j2\pi f_c t)\big\} = g_I(t)\cos(2\pi f_c t) - g_Q(t)\sin(2\pi f_c t)$$
[Figure: phasor diagram of (g_I(t)+jg_Q(t))exp(j2πf_c t).]
A2.4 Canonical Representation of Band-Pass
Signal
• Canonical transmitter:
$$g(t) = g_I(t)\cos(2\pi f_c t) - g_Q(t)\sin(2\pi f_c t)$$
[Figure: canonical transmitter combining the in-phase and quadrature branches.]
A2.4 Canonical Representation of Band-Pass
Signal
• Canonical receiver:
[Figure: canonical receiver recovering g_I(t) and g_Q(t) from g(t).]
A2.4 More Terminology
 g  (t ) pre - envelope
 g~(t ) complex envelope


 g I (t ) in - phase component of the band - pass signal g (t )
 gQ (t ) quadrature component of the band - pass signal g (t )

a (t ) | g (t ) || g~(t ) | g 2 (t )  g 2 (t )
  I Q

 natural envelope or envelope of g (t )

 (t )  tan 1  gQ (t )  phase of g (t )
  g (t ) 
 I 
A2.4 Bandpass System
• Consider the case of passing a band-pass signal x(t) through a real LTI filter h(τ) to yield an output
$$y(t) = \int_{-\infty}^{\infty} h(\tau)\,x(t-\tau)\,d\tau$$
• Can we have a low-pass equivalent system for the band-pass system?
• Similar to the previous analysis, we have:
• Assumption: the spectrum of x(t) is limited to within ±W Hz of the carrier frequency f_c, and W ≪ f_c:
$$x(t) = \operatorname{Re}\{\tilde{x}(t)e^{j2\pi f_c t}\} = x_I(t)\cos(2\pi f_c t) - x_Q(t)\sin(2\pi f_c t), \qquad \tilde{x}(t) = x_I(t) + jx_Q(t)$$
• Assumption: the spectrum of h(τ) is limited to within ±B Hz of the carrier frequency f_c, and B ≪ f_c:
$$h(\tau) = \operatorname{Re}\{\tilde{h}(\tau)e^{j2\pi f_c\tau}\} = h_I(\tau)\cos(2\pi f_c\tau) - h_Q(\tau)\sin(2\pi f_c\tau), \qquad \tilde{h}(\tau) = h_I(\tau) + jh_Q(\tau)\ \ \text{(complex impulse response)}$$
• Now, is the filter output y(t) also a band-pass signal?
[Figure: X(f), H(f), and Y(f) are all confined to bands around ±f_c.]
A2.4 Bandpass System
• Question: is the following system valid?
$$\tilde{x}(t)\ \to\ \tilde{h}(\tau)\ \to\ \tilde{y}(t) = \int_{-\infty}^{\infty}\tilde{h}(\tau)\,\tilde{x}(t-\tau)\,d\tau$$
• The advantage of the above equivalent system is that there is no need to deal with the carrier frequency in the system analysis.
• The answer to the question is YES (with some modification)! It will be substantiated in the sequel.
It suffices to show that $\tilde{Y}(f) = \tilde{X}(f)\tilde{H}(f)$ (up to a constant).
$$\tilde{Y}(f) = Y_+(f+f_c), \qquad Y_+(f) = 2u(f)\,Y(f)$$
Observe that
$$X_+(f) = 2u(f)X(f),\ \ \tilde{X}(f) = X_+(f+f_c) \qquad\text{and}\qquad H_+(f) = 2u(f)H(f),\ \ \tilde{H}(f) = H_+(f+f_c).$$
Consequently,
$$\tilde{X}(f)\tilde{H}(f) = X_+(f+f_c)\,H_+(f+f_c) = \big[2u(f+f_c)X(f+f_c)\big]\big[2u(f+f_c)H(f+f_c)\big] = 4u(f+f_c)\,X(f+f_c)\,H(f+f_c)$$
$$= 4u(f+f_c)\,Y(f+f_c) = 2\,Y_+(f+f_c) = 2\,\tilde{Y}(f)$$
There is an additional multiplicative constant 2 at the output!
A2.4 Bandpass System
• Conclusion:
$$\tilde{x}(t)\ \to\ \tilde{h}(\tau)\ \to\ \tilde{y}(t) = \frac{1}{2}\int_{-\infty}^{\infty}\tilde{h}(\tau)\,\tilde{x}(t-\tau)\,d\tau$$
A2.4 Bandpass System
• Final note on band-pass systems:
  - Some books define H₊(f) = u(f)H(f) for a filter, instead of H₊(f) = 2u(f)H(f) as for a signal.
  - As a result of this definition (i.e., H₊(f) = u(f)H(f)),
$$h(\tau) = 2\operatorname{Re}\{\tilde{h}(\tau)e^{j2\pi f_c\tau}\} \qquad\text{and}\qquad \tilde{y}(t) = \int_{-\infty}^{\infty}\tilde{h}(\tau)\,\tilde{x}(t-\tau)\,d\tau$$
  - It is justifiable to remove the 2 in H₊(f) = 2u(f)H(f), because a filter is used to filter the signal; hence, it is not necessary to keep the total spectral area constant.
1.11 Representation of Narrowband Noise in terms
of In-Phase and Quadrature Components
• In Appendix A2.4, the band-pass system representation was discussed for deterministic signals.
• How about a random process? Can we have a low-pass isomorphic system for a band-pass random process?
• Take the noise process N(t) as an example:
$$N(t)\ \to\ h(\tau)\ \to\ Y(t) = \int_{-\infty}^{\infty} h(\tau)\,N(t-\tau)\,d\tau$$
1.11 Representation of Narrowband Noise in
Terms of In-Phase and Quadrature Components
• A WSS real-valued zero-mean noise process N(t) is a band-pass process if its PSD satisfies S_N(f) = 0 for |f − f_c| ≥ B and |f + f_c| ≥ B, with B < f_c.
• Similar to the analysis for deterministic signals, let
$$N(t) = N_I(t)\cos(2\pi f_c t) - N_Q(t)\sin(2\pi f_c t), \qquad \tilde{N}(t) = N_I(t) + jN_Q(t)$$
for some jointly zero-mean WSS pair N_I(t) and N_Q(t).
• Notably, the joint WSS of N_I(t) and N_Q(t) immediately implies the WSS of Ñ(t).
1.11 PSD of NI(t ) and NQ(t)
• First, we note that the joint WSS of N_I(t) and N_Q(t) and the WSS of N(t) imply:
$$R_{N_I}(\tau) = R_{N_Q}(\tau) \qquad\text{and}\qquad R_{N_I,N_Q}(\tau) = -R_{N_Q,N_I}(\tau)$$
(See the proof in the sequel.)
Expanding R_N(t+τ, t) with the quadrature representation,
$$R_N(\tau) = \frac{1}{2}\big[R_{N_I}(\tau)+R_{N_Q}(\tau)\big]\cos(2\pi f_c\tau) + \frac{1}{2}\big[R_{N_I,N_Q}(\tau)-R_{N_Q,N_I}(\tau)\big]\sin(2\pi f_c\tau)$$
$$+ \frac{1}{2}\big[R_{N_I}(\tau)-R_{N_Q}(\tau)\big]\cos(2\pi f_c(2t+\tau)) - \frac{1}{2}\big[R_{N_I,N_Q}(\tau)+R_{N_Q,N_I}(\tau)\big]\sin(2\pi f_c(2t+\tau))$$
The two terms in (2t+τ) must equal zero, since R_N(τ) is not a function of t. Hence
$$R_{N_I}(\tau) = R_{N_Q}(\tau) \quad\text{and}\quad R_{N_I,N_Q}(\tau) = -R_{N_Q,N_I}(\tau) \qquad\text{(Property 1)}$$
$$\Rightarrow\quad R_N(\tau) = R_{N_I}(\tau)\cos(2\pi f_c\tau) - R_{N_Q,N_I}(\tau)\sin(2\pi f_c\tau) \qquad\text{(Property 2)}$$
• For real N(t): $R_N(\tau) = E[N(t+\tau)N(t)]$. For complex Ñ(t):
$$R_{\tilde N}(\tau) = \frac{1}{2}E\big[\tilde N(t+\tau)\,\tilde N^*(t)\big]$$
(The factor 1/2 is added so that N(t) and its low-pass isomorphism Ñ(t) reasonably have the same power; cf. the next slide.)
• Some other properties:
$$R_{\tilde N}(\tau) = \frac{1}{2}E\big[(N_I(t+\tau)+jN_Q(t+\tau))(N_I(t)-jN_Q(t))\big] = \frac{1}{2}\big[R_{N_I}(\tau)+R_{N_Q}(\tau)\big] + \frac{j}{2}\big[R_{N_Q,N_I}(\tau)-R_{N_I,N_Q}(\tau)\big]$$
$$= R_{N_I}(\tau) + jR_{N_Q,N_I}(\tau) \qquad\text{(Property 3)}$$
1.11 PSD of NI(t ) and NQ(t)

Properties 2 and 3 jointly imply
$$R_N(\tau) = \operatorname{Re}\big\{R_{\tilde N}(\tau)\exp(j2\pi f_c\tau)\big\} \qquad\text{(Property 4)}$$
In fact, without the factor 1/2, the desired spectrum relations
$$S_{N_+}(f) = 2u(f)\,S_N(f) \qquad\text{and}\qquad S_{\tilde N}(f) = S_{N_+}(f+f_c)$$
fail to hold.
1.11 Summary of Spectrum Properties
1. RN ( )  RN ( ) and RN
I Q I ,NQ ( )   RN Q ,N I
( )
2. RN ( )  RN ( ) cos( 2f c )  RN
I Q ,N I ( ) sin( 2f c )  RN (0)  RN (0)
I

3. RN~ ( )  RN ( )  jRN ( )  R N ( 0)
Q ,N I
Q
I

4. RN ( )  ReRN~ ( ) exp( j 2f c ).

5. S N~ ( f ) is real valued. By RN~ ( )  RN*~ (  ).

6. S N ( f )  S N ( f ) and S N , N ( f )   S N
I Q I Q Q ,N I
(f) [From 1.]
1
7. S N ( f )  S N~ ( f  f c )  S N~ (  f  f c )  [From 4.
2 See the next slide.]

$$S_N(f) = \int_{-\infty}^{\infty} R_N(\tau)\,e^{-j2\pi f\tau}\,d\tau = \int_{-\infty}^{\infty}\operatorname{Re}\big\{R_{\tilde N}(\tau)e^{j2\pi f_c\tau}\big\}\,e^{-j2\pi f\tau}\,d\tau = \frac{1}{2}\int_{-\infty}^{\infty}\big[R_{\tilde N}(\tau)e^{j2\pi f_c\tau} + R^*_{\tilde N}(\tau)e^{-j2\pi f_c\tau}\big]e^{-j2\pi f\tau}\,d\tau$$
$$= \frac{1}{2}\int_{-\infty}^{\infty} R_{\tilde N}(\tau)\,e^{-j2\pi(f-f_c)\tau}\,d\tau + \frac{1}{2}\Big[\int_{-\infty}^{\infty} R_{\tilde N}(\tau)\,e^{-j2\pi(-f-f_c)\tau}\,d\tau\Big]^{*} = \frac{1}{2}\big[S_{\tilde N}(f-f_c) + S^*_{\tilde N}(-f-f_c)\big]$$
$$= \frac{1}{2}\big[S_{\tilde N}(f-f_c) + S_{\tilde N}(-f-f_c)\big], \quad\text{since } S_{\tilde N}(f) \text{ is real-valued.}$$
8. N_I(t) and N_Q(t), observed at the same instant, are uncorrelated (given their zero means):
$$R_{N_I,N_Q}(\tau) = -R_{N_Q,N_I}(\tau) \quad\text{and}\quad R_{N_I,N_Q}(-\tau) = E[N_I(t)N_Q(t+\tau)] = R_{N_Q,N_I}(\tau)$$
$$\Rightarrow\; R_{N_I,N_Q}(\tau) = -R_{N_I,N_Q}(-\tau) \;\Rightarrow\; R_{N_I,N_Q}(0) = -R_{N_I,N_Q}(0) \;\Rightarrow\; R_{N_I,N_Q}(0) = E[N_I(t)N_Q(t)] = 0.$$
• Now, let us examine the desired spectrum properties:
$$S_{N_+}(f) = 2u(f)\,S_N(f) \qquad\text{and}\qquad S_{\tilde N}(f) = S_{N_+}(f+f_c)$$
[Figure: extracting the quadrature components. N(t) = N_I(t)cos(2πf_c t) − N_Q(t)sin(2πf_c t) is multiplied by 2cos(2πf_c t) to give V_I(t), and by −2sin(2πf_c t) to give V_Q(t); low-pass filters then yield N_I(t) and N_Q(t).]
$$R_{V_I}(t,u) = E[V_I(t)V_I(u)] = 4E[N(t)\cos(2\pi f_c t)\,N(u)\cos(2\pi f_c u)] = 4R_N(t,u)\cos(2\pi f_c t)\cos(2\pi f_c u)$$
Notably, V_I(t) is not WSS.
$$\bar R_{V_I}(\tau) = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T} R_{V_I}(t+\tau,t)\,dt = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T} 4R_N(\tau)\cos(2\pi f_c(t+\tau))\cos(2\pi f_c t)\,dt$$
$$= R_N(\tau)\lim_{T\to\infty}\frac{1}{T}\int_{-T}^{T}\big[\cos(2\pi f_c(2t+\tau)) + \cos(2\pi f_c\tau)\big]dt = 2R_N(\tau)\cos(2\pi f_c\tau)$$
$$\Rightarrow\quad \bar S_{V_I}(f) = S_N(f-f_c) + S_N(f+f_c)$$
$$S_{N_I}(f) = |H(f)|^2\,\bar S_{V_I}(f), \quad\text{where } |H(f)|^2 = \begin{cases}1, & |f|\le B\\ 0, & |f|>B\end{cases}$$
$$S_{N_I}(f) = \begin{cases}S_N(f-f_c)+S_N(f+f_c), & |f|\le B\\ 0, & \text{otherwise}\end{cases}$$
Similarly,
$$S_{N_Q}(f) = \begin{cases}S_N(f-f_c)+S_N(f+f_c), & |f|\le B\\ 0, & \text{otherwise}\end{cases}$$
Next, we turn to RN I ,NQ ( ). VI (t )
N I (t )
N (t )  N I (t ) cos(2f c t )
2 cos(2f c t )
 N Q (t ) sin(2f c t )
VQ (t )
N Q (t )

 2 sin( 2f c t )
RV ,V (t , u )  E [VI (t )VQ (u )]
I Q

 4 E N (t ) cos(2f c t ) N (u ) sin( 2f c u )


 4 RN (t , u ) cos(2f c t ) sin( 2f c u )
RV ,V (t   , t )  4 RN ( ) cos(2f c (t   )) sin(2f c t )
I Q

1-211
$$\begin{aligned}
\bar R_{V_I,V_Q}(\tau) &= \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T} R_{V_I,V_Q}(t+\tau,\,t)\,dt\\
&= -R_N(\tau)\,\lim_{T\to\infty}\frac{1}{T}\int_{-T}^{T}\big[\sin(2\pi f_c(2t+\tau)) - \sin(2\pi f_c\tau)\big]\,dt\\
&= 2\,R_N(\tau)\sin(2\pi f_c\tau)
\end{aligned}$$
$$\Rightarrow\; S_{V_I,V_Q}(f) = j\big[S_N(f+f_c) - S_N(f-f_c)\big]$$
$$\begin{aligned}
R_{N_I,N_Q}(t,u) &= E[N_I(t)\,N_Q(u)]\\
&= E\left[\int_{-\infty}^{\infty} h_I(\tau_1)\,V_I(t-\tau_1)\,d\tau_1\int_{-\infty}^{\infty} h_Q(\tau_2)\,V_Q(u-\tau_2)\,d\tau_2\right]\\
&= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} h_I(\tau_1)\,h_Q(\tau_2)\,E[V_I(t-\tau_1)\,V_Q(u-\tau_2)]\,d\tau_1\,d\tau_2\\
&= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} h_I(\tau_1)\,h_Q(\tau_2)\,R_{V_I,V_Q}(t-\tau_1,\,u-\tau_2)\,d\tau_1\,d\tau_2
\end{aligned}$$

$$\begin{aligned}
S_{N_I,N_Q}(f) &= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} h_I(\tau_1)\,h_Q(\tau_2)\,\bar R_{V_I,V_Q}(\tau_2+\tau-\tau_1)\,e^{-j2\pi f\tau}\,d\tau\,d\tau_1\,d\tau_2\\
&= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} h_I(\tau_1)\,h_Q(\tau_2)\,\bar R_{V_I,V_Q}(u)\,e^{-j2\pi f(u-\tau_2+\tau_1)}\,du\,d\tau_1\,d\tau_2 \qquad(\text{let } u = \tau_2+\tau-\tau_1)\\
&= H_I(f)\,H_Q(-f)\,S_{V_I,V_Q}(f).
\end{aligned}$$
Since $h_I = h_Q = h$ is the same real lowpass filter, $H_I(f)\,H_Q(-f) = H(f)\,H^*(f) = |H(f)|^2$; hence
$$S_{N_I,N_Q}(f) = j\big[S_N(f+f_c) - S_N(f-f_c)\big]\,|H(f)|^2 = \begin{cases}j\big[S_N(f+f_c) - S_N(f-f_c)\big], & |f|\le B\\ 0, & \text{otherwise.}\end{cases}$$
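 A companion sketch for the cross-spectral density (same modeling assumptions and illustrative parameters as the earlier simulations; scipy.signal.csd is used for estimation):

```python
# Sketch: S_{NI,NQ}(f) should be purely imaginary; for band-pass noise whose
# PSD is symmetric about fc, it is approximately zero on |f| <= B.
import numpy as np
from scipy.signal import firwin, lfilter, hilbert, csd

rng = np.random.default_rng(6)
fs, fc, B = 100_000.0, 10_000.0, 1_000.0
bp = firwin(801, [fc - B, fc + B], fs=fs, pass_zero=False)
N = lfilter(bp, 1.0, rng.standard_normal(2_000_000))[801:]
t = np.arange(N.size) / fs
Ntilde = hilbert(N) * np.exp(-2j * np.pi * fc * t)
NI, NQ = Ntilde.real, Ntilde.imag

f, S = csd(NI, NQ, fs=fs, nperseg=8192, return_onesided=False)
band = np.abs(f) <= B
print(np.abs(S.real[band]).max())  # ~ estimation noise: the density is imaginary
print(np.abs(S.imag[band]).max())  # also small here, since S_N is symmetric about fc
```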

 Finally, combining the results for $S_{N_I}(f)$, $S_{N_Q}(f)$, and $S_{N_I,N_Q}(f)$ verifies the desired spectrum relations stated above.
Example 1.12 Ideal Band-Pass Filtered White Noise

1.12 Representation of Narrowband Noise in Terms of Envelope and Phase Components
 Now we turn to the envelope R(t) and phase Ψ(t) components of a random process of the form
$$N(t) = N_I(t)\cos(2\pi f_c t) - N_Q(t)\sin(2\pi f_c t) = R(t)\cos[2\pi f_c t + \Psi(t)],$$
where $R(t) = \sqrt{N_I^2(t) + N_Q^2(t)}$ and $\Psi(t) = \tan^{-1}[N_Q(t)/N_I(t)]$.

1.12 Pdf of R(t) and Ψ(t)
 Assume that N(t) is a white Gaussian process with two-sided PSD σ² = N₀/2.
 For convenience, let $N_I$ and $N_Q$ be snapshot samples of $N_I(t)$ and $N_Q(t)$. Then
$$f_{N_I,N_Q}(n_I,n_Q) = \frac{1}{2\pi\sigma^2}\exp\left(-\frac{n_I^2+n_Q^2}{2\sigma^2}\right).$$

1.12 Pdf of R(t) and Ψ(t)
 By $n_I = r\cos(\psi)$ and $n_Q = r\sin(\psi)$, with Jacobian $dn_I\,dn_Q = r\,dr\,d\psi$,
$$\iint_{A(n_I,n_Q)} \frac{1}{2\pi\sigma^2}\exp\left(-\frac{n_I^2+n_Q^2}{2\sigma^2}\right)dn_I\,dn_Q \;=\; \iint_{A(r,\psi)} \frac{1}{2\pi\sigma^2}\exp\left(-\frac{r^2}{2\sigma^2}\right) r\,dr\,d\psi.$$
So
$$f_{R,\Psi}(r,\psi) = \frac{r}{2\pi\sigma^2}\exp\left(-\frac{r^2}{2\sigma^2}\right) = \frac{1}{2\pi}\cdot\frac{r}{\sigma^2}\exp\left(-\frac{r^2}{2\sigma^2}\right).$$
1.12 Pdf of R(t) and Ψ(t)
 R and Ψ are therefore independent.
$$f_R(r) = \frac{r}{\sigma^2}\exp\left(-\frac{r^2}{2\sigma^2}\right)\;\text{ for } r\ge 0 \qquad\text{(Rayleigh distribution)}$$
$$f_\Psi(\psi) = \frac{1}{2\pi}\;\text{ for } 0 \le \psi < 2\pi.$$
[Figure: the normalized Rayleigh distribution with σ² = 1.]
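 A quick empirical check of this factorization (a sketch; σ and the sample size are arbitrary choices):

```python
# Sketch: draw (NI, NQ) i.i.d. Gaussian and compare the envelope/phase
# statistics with the Rayleigh and uniform densities derived above.
import numpy as np

rng = np.random.default_rng(2)
sigma, n = 1.5, 1_000_000
nI = sigma * rng.standard_normal(n)
nQ = sigma * rng.standard_normal(n)
r = np.hypot(nI, nQ)                     # envelope samples
psi = np.arctan2(nQ, nI) % (2 * np.pi)   # phase samples in [0, 2*pi)

# Moment checks: E[R] = sigma*sqrt(pi/2), E[R^2] = 2*sigma^2, E[Psi] = pi.
print(r.mean(), sigma * np.sqrt(np.pi / 2))
print((r**2).mean(), 2 * sigma**2)
print(psi.mean(), np.pi)
```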
1.13 Sine Wave Plus Narrowband Noise
 Now suppose the previous Gaussian white noise is added to a sinusoid of amplitude A. Then
$$x(t) = A\cos(2\pi f_c t) + n(t) = [A + n_I(t)]\cos(2\pi f_c t) - n_Q(t)\sin(2\pi f_c t) = x_I(t)\cos(2\pi f_c t) - x_Q(t)\sin(2\pi f_c t),$$
where $x_I(t) = A + n_I(t)$ and $x_Q(t) = n_Q(t)$.
 Uncorrelatedness of the Gaussian $n_I(t)$ and $n_Q(t)$ implies their independence.
1.13 Sine Wave Plus Narrowband Noise
 This gives the pdf of $x_I(t)$ and $x_Q(t)$ as
$$f_{X_I,X_Q}(x_I,x_Q) = \frac{1}{2\pi\sigma^2}\exp\left(-\frac{(x_I-A)^2 + x_Q^2}{2\sigma^2}\right).$$
 By $x_I = r\cos(\psi)$ and $x_Q = r\sin(\psi)$, with Jacobian $dx_I\,dx_Q = r\,dr\,d\psi$,
$$f_{R,\Psi}(r,\psi) = \frac{r}{2\pi\sigma^2}\exp\left(-\frac{(r\cos(\psi)-A)^2 + r^2\sin^2(\psi)}{2\sigma^2}\right) = \frac{r}{2\pi\sigma^2}\exp\left(-\frac{r^2 + A^2 - 2Ar\cos(\psi)}{2\sigma^2}\right).$$
1.13 Sine Wave Plus Narrowband Noise
 Notably, in this case, R and Ψ are no longer independent.
 We are more interested in the marginal distribution of R:
$$f_R(r) = \int_0^{2\pi} f_{R,\Psi}(r,\psi)\,d\psi = \int_0^{2\pi} \frac{r}{2\pi\sigma^2}\exp\left(-\frac{r^2 + A^2 - 2Ar\cos(\psi)}{2\sigma^2}\right)d\psi.$$

1.13 Sine Wave Plus Narrowband Noise
$$\begin{aligned}
f_R(r) &= \frac{r}{2\pi\sigma^2}\exp\left(-\frac{r^2+A^2}{2\sigma^2}\right)\int_0^{2\pi}\exp\left(\frac{Ar\cos(\psi)}{\sigma^2}\right)d\psi\\
&= \frac{r}{\sigma^2}\exp\left(-\frac{r^2+A^2}{2\sigma^2}\right) I_0\!\left(\frac{Ar}{\sigma^2}\right),
\end{aligned}$$
where
$$I_0(x) = \frac{1}{2\pi}\int_0^{2\pi}\exp(x\cos(\psi))\,d\psi$$
is the modified Bessel function of the first kind of zero order.
 This distribution is named the Rician distribution.

1.13 Normalized Rician Distribution
 With the normalized variables $v = r/\sigma$ and $a = A/\sigma$,
$$f_V(v) = v\exp\left(-\frac{v^2+a^2}{2}\right) I_0(av).$$
A3.1 Bessel Functions
 Bessel's equation of order n:
$$x^2\frac{d^2y}{dx^2} + x\frac{dy}{dx} + (x^2 - n^2)\,y = 0$$
 Its solution $J_n(x)$ is the Bessel function of the first kind of order n:
$$J_n(x) = \frac{1}{\pi}\int_0^{\pi}\cos(x\sin(\theta) - n\theta)\,d\theta = \frac{1}{2\pi}\int_{-\pi}^{\pi}\exp(jx\sin(\theta) - jn\theta)\,d\theta = \frac{x^n}{2^n}\sum_{m=0}^{\infty}\frac{(-x^2/4)^m}{m!\,(n+m)!}$$
A3.2 Properties of the Bessel Function
1. $J_n(x) = (-1)^n J_{-n}(x) = (-1)^n J_n(-x)$
2. $J_{n+1}(x) + J_{n-1}(x) = \frac{2n}{x}J_n(x)$
3. When x is small, $J_n(x) \approx \frac{x^n}{2^n n!}$.
4. When x is large, $J_n(x) \approx \sqrt{\frac{2}{\pi x}}\,\cos\!\left(x - \frac{\pi}{4} - \frac{n\pi}{2}\right)$.
5. $\lim_{n\to\infty} J_n(x) = 0.$
6. $\sum_{n=-\infty}^{\infty} J_n(x)\exp(jn\theta) = \exp(jx\sin(\theta))$
7. $\sum_{n=-\infty}^{\infty} J_n^2(x) = 1.$
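 These identities are easy to spot-check numerically; a minimal sketch using scipy.special.jv, with arbitrary test points and truncation limits:

```python
# Sketch: spot-check the integral form and properties 6-7 of J_n(x).
import numpy as np
from scipy.special import jv

x, theta = 1.7, 0.6
th = np.linspace(0.0, np.pi, 20001)
for n_ in (0, 1, 3):
    # (1/pi) * integral over [0, pi] approximated by a grid average
    approx = np.cos(x * np.sin(th) - n_ * th).mean()
    print(n_, approx, jv(n_, x))           # the two columns should agree

ns = np.arange(-60, 61)
print(np.sum(jv(ns, x) * np.exp(1j * ns * theta)),
      np.exp(1j * x * np.sin(theta)))      # property 6 (Jacobi-Anger)
print(np.sum(jv(ns, x) ** 2))              # property 7: approximately 1
```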
A3.3 Modified Bessel Function
 Modified Bessel's equation of order n:
$$x^2\frac{d^2y}{dx^2} + x\frac{dy}{dx} + (j^2x^2 - n^2)\,y = 0$$
 Its solution $I_n(x)$ is the modified Bessel function of the first kind of order n:
$$I_n(x) = j^{-n}J_n(jx) = \frac{1}{2\pi}\int_{-\pi}^{\pi}\exp(x\cos(\theta))\cos(n\theta)\,d\theta$$
 The modified Bessel function is monotonically increasing in x for all n.
A3.3 Properties of the Modified Bessel Function
3'. $\lim_{x\to 0} I_n(x) = \begin{cases}1, & n = 0\\ 0, & n \ge 1\end{cases}$
4'. When x is large, $I_n(x) \approx \frac{\exp(x)}{\sqrt{2\pi x}}$.
6'. $\sum_{n=-\infty}^{\infty} I_n(x)\exp(jn\theta) = \exp(x\cos(\theta))$
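 The same kind of spot-check works for $I_n$ via scipy.special.iv (again with arbitrary test points):

```python
# Sketch: spot-check the integral form and property 6' of I_n(x).
import numpy as np
from scipy.special import iv

x, theta = 1.3, 0.9
th = np.linspace(-np.pi, np.pi, 40001)
for n_ in (0, 1, 2):
    # (1/(2*pi)) * integral over [-pi, pi] approximated by a grid average
    approx = (np.exp(x * np.cos(th)) * np.cos(n_ * th)).mean()
    print(n_, approx, iv(n_, x))           # the two columns should agree

ns = np.arange(-40, 41)
print(np.sum(iv(ns, x) * np.exp(1j * ns * theta)),
      np.exp(x * np.cos(theta)))           # property 6'
```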
1.14 Computer Experiments: Flat-Fading Channel
 Model of a multipath channel with N paths: the input $A\cos(2\pi f_c t)$ produces the output
$$\sum_{k=1}^{N} A_k\cos(2\pi f_c t + \Theta_k).$$
1.14 Computer Experiments: Flat-Fading Channel
$$Y(t) = \sum_{k=1}^{N} A_k\cos(2\pi f_c t + \Theta_k) = Y_I\cos(2\pi f_c t) - Y_Q\sin(2\pi f_c t),$$
$$\text{where } Y_I = \sum_{k=1}^{N} A_k\cos(\Theta_k) \;\text{ and }\; Y_Q = \sum_{k=1}^{N} A_k\sin(\Theta_k).$$
 Input $\tilde X(t) = A$ induces output $\tilde Y(t) = Y_I + jY_Q$, which is independent of t.
 Assume $\{(A_k, \Theta_k)\}$ i.i.d., with $A_k$ uniform over $[-1, 1)$ and $\Theta_k$ uniform over $[0, 2\pi)$.
1.14 Experiment 1
 By the central limit theorem,
$$\frac{Y_I}{\sqrt{N/6}} = \frac{A_1\cos(\Theta_1) + \cdots + A_N\cos(\Theta_N)}{\sqrt{N/6}} \;\longrightarrow\; \text{Normal}(0,1).$$
 So $Y_I$ is approximately Gaussian distributed with mean 0 and variance $N/6$.
 For any Gaussian random variable G with mean μ,
$$\gamma_1 \triangleq \frac{E^2[(G-\mu)^3]}{E^3[(G-\mu)^2]} = 0 \qquad\text{and}\qquad \gamma_2 \triangleq \frac{E[(G-\mu)^4]}{E^2[(G-\mu)^2]} = 3.$$
1.14 Experiment 1
 We can therefore use γ1 and γ2 to examine the degree of resemblance to Gaussian for a random variable.

        N       10       100      1000     10000
  Y_I   γ1    0.2443    0.0255    0.0065    0.0003
        γ2    2.1567    2.8759    2.8587    3.0075
  Y_Q   γ1    0.0874    0.0017    0.0004    0.0000
        γ2    1.9621    2.7109    3.1663    3.0135

 For $S_N = (U_1 + \cdots + U_N)/a_N$ with zero-mean i.i.d. $\{U_k\}$,
$$\tilde\gamma_2 = \frac{E[S_N^4]}{E^2[S_N^2]} = \frac{E[(U_1+\cdots+U_N)^4]}{E^2[(U_1+\cdots+U_N)^2]} = \frac{N\,E[U^4] + 3N(N-1)\,E^2[U^2]}{N^2\,E^2[U^2]} = \frac{E[U^4]/E^2[U^2] - 3}{N} + 3.$$
 Here $E[A\cos(\Theta)] = E[A\sin(\Theta)] = 0$, $E[A^2\cos^2(\Theta)] = E[A^2\sin^2(\Theta)] = 1/6$, and $E[A^4\cos^4(\Theta)] = E[A^4\sin^4(\Theta)] = 3/40$, which gives:

        N       10       100      1000     10000
       γ̃2     2.745    2.9745    2.9975    3.0000

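 A simulation sketch of Experiment 1 (the γ1, γ2 estimators are the natural moment plug-ins; the trial count is an arbitrary choice, so the printed values will differ slightly from the table):

```python
# Sketch: estimate gamma1 (squared skewness) and gamma2 (kurtosis) of Y_I
# for the flat-fading model with A_k ~ Unif[-1,1), Theta_k ~ Unif[0,2*pi).
import numpy as np

rng = np.random.default_rng(4)

def gammas(y):
    c = y - y.mean()
    m2, m3, m4 = (c**2).mean(), (c**3).mean(), (c**4).mean()
    return m3**2 / m2**3, m4 / m2**2     # plug-in gamma1, gamma2

trials = 20_000
for N in (10, 100, 1000, 10000):
    YI = np.zeros(trials)
    for _ in range(N):                   # accumulate one path at a time
        A = rng.uniform(-1.0, 1.0, trials)
        Th = rng.uniform(0.0, 2.0 * np.pi, trials)
        YI += A * np.cos(Th)
    g1, g2 = gammas(YI)
    print(N, round(g1, 4), round(g2, 4))  # g1 -> 0, g2 -> 3 as N grows
```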
1.14 Experiment 2
 We can re-formulate Y(t) as
$$Y(t) = R\cos(2\pi f_c t + \Psi), \qquad\text{where } R = \sqrt{Y_I^2 + Y_Q^2} \;\text{ and }\; \Psi = \tan^{-1}(Y_Q/Y_I).$$
 If $Y_I$ and $Y_Q$ approach independent Gaussians, then R and Ψ respectively approach Rayleigh and uniform distributions, as the sketch below illustrates.
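 A sketch of Experiment 2, comparing the simulated envelope and phase with the Rayleigh/uniform limits (per-component variance N/6 from Experiment 1; trial counts are arbitrary):

```python
# Sketch: envelope/phase of the flat-fading output versus Rayleigh/uniform.
import numpy as np

rng = np.random.default_rng(5)
N, trials = 10_000, 20_000
YI = np.zeros(trials)
YQ = np.zeros(trials)
for _ in range(N):
    A = rng.uniform(-1.0, 1.0, trials)
    Th = rng.uniform(0.0, 2.0 * np.pi, trials)
    YI += A * np.cos(Th)
    YQ += A * np.sin(Th)

R = np.hypot(YI, YQ)
Psi = np.arctan2(YQ, YI) % (2.0 * np.pi)
s2 = N / 6.0                                # per-component variance
print(R.mean(), np.sqrt(np.pi * s2 / 2.0))  # Rayleigh mean: sigma*sqrt(pi/2)
print((R**2).mean(), 2.0 * s2)              # Rayleigh second moment: 2*sigma^2
print(Psi.mean(), np.pi)                    # uniform phase over [0, 2*pi): mean pi
```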
1.14 Experiment 2
[Figure: pdf of R for N = 10,000; the experimental curve is obtained by averaging 100 histograms.]
1.14 Experiment 2
[Figure: input cos(2π·10⁶ t) and the corresponding output of the Rayleigh (flat-fading) channel.]
1.15 Summary and Discussion
 Definition of Probability System and Probability
Measure
 Random variable, vector and process
 Autocorrelation and cross-correlation
 Definition of WSS
 Why ergodicity?
 Time average as a good “estimate” of ensemble average
 Characteristic function and Fourier transform
 Dirichlet’s condition
 Dirac delta function
 Fourier series and its relation to Fourier transform
1.15 Summary and Discussion
 Power spectral density and its properties
 Energy spectral density
 Cross spectral density
 Stable LTI filter
 Linearity and convolution
 Narrowband process
 Canonical low-pass isomorphism
 In-phase and quadrature components
 Hilbert transform
 Bandpass system
1.15 Summary and Discussion
 Bandwidth
 Null-to-null, 3-dB, rms, noise-equivalent
 Time-bandwidth product
 Noise
 Shot noise, thermal noise and white noise
 Gaussian, Rayleigh and Rician
 Central limit theorem