
Chapter 2

Signals and Spectra


(All sections except Section 2.8 are covered.)
Physically Realizable Waveform
1. Nonzero over a finite duration (finite energy)
2. Nonzero over a finite frequency range (physical limitation of media)
3. Continuous in time (finite bandwidth)
4. Finite peak value (physical limitation of equipment)
5. Real valued (must be observable)
• Power Signal: finite power, infinite energy
• Energy Signal: finite energy; power is nonzero only over a limited time (average power is zero)
• All physical signals are energy signals, since nothing physical can supply infinite energy.
However, it is often mathematically more convenient to work with power signals, so we use
power signals to approximate the behavior of energy signals over the time intervals of interest.
Time average operator:

\langle \cdot \rangle = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} (\cdot)\, dt
Periodic waveform with period T_0:  w(t) = w(t + T_0) for all t,
where T_0 is the smallest positive number satisfying this condition.
For a periodic function:

\langle \cdot \rangle = \frac{1}{T_0} \int_{a - T_0/2}^{a + T_0/2} (\cdot)\, dt \quad \text{for any real number } a.
DC value of a periodic function w(t):

W_{dc} = \frac{1}{T_0} \int_{-T_0/2}^{T_0/2} w(t)\, dt
DC value of w(t) over a finite interval (t_1, t_2):

W_{dc} = \frac{1}{t_2 - t_1} \int_{t_1}^{t_2} w(t)\, dt
voltage: v(t)
current: i(t)
power: p(t) = v(t)\, i(t)
 - p(t) > 0 if the circuit consumes power.
 - p(t) < 0 if the circuit generates power.
average power: P = \langle p(t) \rangle = \langle v(t)\, i(t) \rangle
Root-mean-square (RMS) value of w(t):  W_{rms} = \sqrt{\langle w^2(t) \rangle}
For a resistor R:

P = \frac{\langle v^2(t) \rangle}{R} = \langle i^2(t) \rangle R = \frac{V_{rms}^2}{R} = I_{rms}^2 R = V_{rms} I_{rms}

[Circuit: a resistor R carrying current i(t), with voltage v(t) across it]
Average normalized power is the power delivered to a 1-ohm resistor:

P = \langle v^2(t) \rangle = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} v^2(t)\, dt
  = \langle i^2(t) \rangle = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} i^2(t)\, dt

v(t) and i(t) are power waveforms if and only if P is finite and nonzero:  0 < P < \infty.
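As a quick numerical illustration (my own sketch, not from the text), the average normalized power and the RMS value of a sampled waveform can be approximated directly from its samples; the 2 V, 1 kHz sinusoid below is an arbitrary example.

```python
import numpy as np

# Sampled waveform: a 2 V-amplitude, 1 kHz sinusoid (arbitrary example values)
fs = 100_000                          # sampling rate (Hz)
t = np.arange(0, 0.01, 1 / fs)        # 10 ms of samples (an integer number of periods)
v = 2.0 * np.cos(2 * np.pi * 1_000 * t)

# Average normalized power: the time average of v^2(t), approximated by the sample mean
P = np.mean(v ** 2)

# RMS value: the square root of the average power
V_rms = np.sqrt(P)

print(P, V_rms)   # ~2.0 W and ~1.414 V for a 2 V-amplitude sinusoid
```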
Total normalized energy:

E = \lim_{T \to \infty} \int_{-T/2}^{T/2} v^2(t)\, dt = \lim_{T \to \infty} \int_{-T/2}^{T/2} i^2(t)\, dt

v(t) and i(t) are energy waveforms if and only if E is finite and nonzero:  0 < E < \infty.

Decibel gain (dB):

\text{dB} = 10 \log_{10}\!\left( \frac{\text{average power out}}{\text{average power in}} \right) = 10 \log_{10}\!\left( \frac{P_{out}}{P_{in}} \right)

[Block diagram: power in → System → power out]

Decibel signal-to-noise ratio (S/N)_{dB}:

(S/N)_{dB} = 10 \log_{10}\!\left( \frac{P_{signal}}{P_{noise}} \right)
           = 10 \log_{10}\!\left( \frac{\langle s^2(t) \rangle}{\langle n^2(t) \rangle} \right)
           = 20 \log_{10}\!\left( \frac{V_{rms,\,signal}}{V_{rms,\,noise}} \right)

[Block diagram: signal s(t) plus noise n(t) entering a system]
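A minimal numeric sketch of how these two decibel quantities are computed (the power values below are made up for illustration):

```python
import math

def db_gain(p_out: float, p_in: float) -> float:
    """Decibel power gain of a system: 10*log10(Pout/Pin)."""
    return 10 * math.log10(p_out / p_in)

def snr_db(p_signal: float, p_noise: float) -> float:
    """Decibel signal-to-noise ratio: 10*log10(Psignal/Pnoise)."""
    return 10 * math.log10(p_signal / p_noise)

# Hypothetical values: 2 mW in, 50 mW out; 10 mW signal power, 0.1 mW noise power
print(db_gain(50e-3, 2e-3))   # ~14.0 dB gain
print(snr_db(10e-3, 0.1e-3))  # 20.0 dB SNR
```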
The phasor is a complex number that carries the amplitude and phase angle information of a
sinusoidal function. It does not include the angular frequency.

Euler's identity:  e^{j\theta} = \cos\theta + j\sin\theta

\cos\theta = \mathrm{Re}\{ e^{j\theta} \}

Given w(t) = A\cos(\omega_0 t + \varphi):

w(t) = A\,\mathrm{Re}\{ e^{j(\omega_0 t + \varphi)} \} = A\,\mathrm{Re}\{ e^{j\omega_0 t} e^{j\varphi} \} = \mathrm{Re}\{ A e^{j\varphi}\, e^{j\omega_0 t} \}

The factor A e^{j\varphi} carries the magnitude and phase angle information.
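The phasor transformation defined just below maps A\cos(\omega_0 t + \varphi) to the complex number A e^{j\varphi} = x + jy; a small Python sketch of that mapping and its inverse (the function names are my own, not from the text):

```python
import cmath
import math

def to_phasor(amplitude: float, phase_rad: float) -> complex:
    """Phasor of A*cos(w0*t + phi): the complex number A*e^{j*phi} = x + j*y."""
    return cmath.rect(amplitude, phase_rad)

def from_phasor(phasor: complex) -> tuple[float, float]:
    """Recover (A, phi) so that the time waveform is A*cos(w0*t + phi)."""
    return abs(phasor), cmath.phase(phasor)

# Example: the phasor of 5*cos(w0*t + 30 degrees)
p = to_phasor(5.0, math.radians(30))
print(p)                 # ~(4.330 + 2.5j), i.e. x = A*cos(phi), y = A*sin(phi)
print(from_phasor(p))    # ~(5.0, 0.5236 rad), i.e. A = 5 and phi = 30 degrees
```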
Phasor transformation:  A\cos(\omega_0 t + \varphi) \;\overset{P}{\longrightarrow}\; A e^{j\varphi} = x + jy,
where x = A\cos\varphi and y = A\sin\varphi.
(We may use the following two notations interchangeably:  A e^{j\varphi} = A\angle\varphi.)

Inverse phasor transformation:  A e^{j\varphi} \;\overset{P^{-1}}{\longrightarrow}\; A\cos(\omega_0 t + \varphi),
or  x + jy \;\overset{P^{-1}}{\longrightarrow}\; A\cos(\omega_0 t + \varphi),  where A = \sqrt{x^2 + y^2} and \varphi = \tan^{-1}(y/x).
Fourier Transform
A signal is a measurable physical quantity that carries information. In the time domain it is
described by w(t). It is often convenient to view a signal through its frequency components.

The Fourier Transform (FT) is the mathematical tool that identifies the frequency components
present in a waveform.

W(f) = \mathcal{F}[w(t)] = \int_{-\infty}^{\infty} w(t)\, e^{-j 2\pi f t}\, dt

where \mathcal{F}[\cdot] denotes the FT of [\cdot] and f is the frequency (Hz).

Conversely,

w(t) = \mathcal{F}^{-1}[W(f)] = \int_{-\infty}^{\infty} W(f)\, e^{j 2\pi f t}\, df

where \mathcal{F}^{-1}[\cdot] denotes the inverse FT of [\cdot].

w(t) and W(f) are thus a uniquely matched pair under the FT:  w(t) \leftrightarrow W(f)
Example:

w(t) = \begin{cases} e^{-t} & t \ge 0 \\ 0 & t < 0 \end{cases}

W(f) = \int_{0}^{\infty} e^{-t} e^{-j 2\pi f t}\, dt
     = \left[ \frac{e^{-(1 + j 2\pi f)t}}{-(1 + j 2\pi f)} \right]_{0}^{\infty}
     = \frac{1}{1 + j 2\pi f}
Note: In general it is difficult to evaluate the FT integral for arbitrary functions. In practice,
certain well-known transform pairs are used together with the properties of the FT.
Properties of the Fourier Transform
1. If w(t) is real, W(-f) = W^*(f).
2. Linearity:  a_1 w_1(t) + a_2 w_2(t) \leftrightarrow a_1 W_1(f) + a_2 W_2(f)
3. Time delay:  w(t - T) \leftrightarrow W(f)\, e^{-j 2\pi f T}
4. Frequency translation:  w(t)\, e^{j 2\pi f_0 t} \leftrightarrow W(f - f_0)
5. Convolution:  w_1(t) * w_2(t) \leftrightarrow W_1(f)\, W_2(f)
6. Multiplication:  w_1(t)\, w_2(t) \leftrightarrow W_1(f) * W_2(f)
Note:
A superscript * denotes the complex conjugate; * between two functions denotes the convolution integral.
Parseval's Theorem

\int_{-\infty}^{\infty} w_1(t)\, w_2^*(t)\, dt = \int_{-\infty}^{\infty} W_1(f)\, W_2^*(f)\, df

If w_1(t) = w_2(t) = w(t):

\int_{-\infty}^{\infty} |w(t)|^2\, dt = \int_{-\infty}^{\infty} |W(f)|^2\, df = E

E is the energy of w(t).

Energy spectral density (ESD):  \mathcal{E}(f) = |W(f)|^2   (unit: joules per hertz)

Total normalized energy:  E = \int_{-\infty}^{\infty} \mathcal{E}(f)\, df
Dirac Delta Function

The Dirac delta function \delta(x) satisfies

\int_{-\infty}^{\infty} w(x)\, \delta(x)\, dx = w(0)

for every function w(x) continuous at x = 0. This is one definition of \delta(x). Alternatively, we can
define \delta(x) by the following two conditions (together):

\int_{-\infty}^{\infty} \delta(x)\, dx = 1 \quad \text{AND} \quad \delta(x) = \begin{cases} \infty & x = 0 \\ 0 & x \ne 0 \end{cases}

Shifting property:  \int_{-\infty}^{\infty} w(x)\, \delta(x - x_0)\, dx = w(x_0).
Unit Step Function

Unit step function u(x):  u(x) = \begin{cases} 1 & x > 0 \\ 0 & x < 0 \end{cases}

Therefore,  u(x) = \int_{-\infty}^{x} \delta(\lambda)\, d\lambda   and   \frac{du(x)}{dx} = \delta(x).
More Commonly Used Functions

Rectangular function:  \Pi\!\left(\frac{t}{T}\right) = \begin{cases} 1 & |t| \le T/2 \\ 0 & |t| > T/2 \end{cases}

Triangular function:  \Lambda\!\left(\frac{t}{T}\right) = \begin{cases} 1 - \dfrac{|t|}{T} & |t| \le T \\ 0 & |t| > T \end{cases}

Sa(\cdot) function:  \mathrm{Sa}(x) = \dfrac{\sin x}{x}
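These three functions are easy to code; a small sketch (my own, in Python/NumPy) follows:

```python
import numpy as np

def rect(t, T=1.0):
    """Rectangular pulse Pi(t/T): 1 for |t| <= T/2, 0 otherwise."""
    return np.where(np.abs(t) <= T / 2, 1.0, 0.0)

def tri(t, T=1.0):
    """Triangular pulse Lambda(t/T): 1 - |t|/T for |t| <= T, 0 otherwise."""
    return np.where(np.abs(t) <= T, 1.0 - np.abs(t) / T, 0.0)

def Sa(x):
    """Sa(x) = sin(x)/x with Sa(0) = 1.  (NumPy's sinc is sin(pi*x)/(pi*x).)"""
    return np.sinc(np.asarray(x, dtype=float) / np.pi)

t = np.linspace(-2, 2, 5)
print(rect(t), tri(t), Sa(np.pi * t))
```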
Spectrum of Sine Wave

Figure 2–8 Waveform and spectrum of a switched sinusoid.

Spectrum of Truncated Sine Wave


Example: Double Exponential

w(t) = e^{-|t|}

W(f) = \int_{-\infty}^{\infty} e^{-|t|}\, e^{-j 2\pi f t}\, dt
     = \int_{-\infty}^{0} e^{t}\, e^{-j 2\pi f t}\, dt + \int_{0}^{\infty} e^{-t}\, e^{-j 2\pi f t}\, dt
     = \left[ \frac{e^{(1 - j 2\pi f)t}}{1 - j 2\pi f} \right]_{-\infty}^{0} + \left[ \frac{e^{-(1 + j 2\pi f)t}}{-(1 + j 2\pi f)} \right]_{0}^{\infty}
     = \frac{1}{1 - j 2\pi f} + \frac{1}{1 + j 2\pi f}

W(f) = \frac{2}{1 + (2\pi f)^2}
Convolution

w_3(t) = w_1(t) * w_2(t) = \int_{-\infty}^{\infty} w_1(\lambda)\, w_2(t - \lambda)\, d\lambda

Suppose t is fixed at an arbitrary value. Within the integration, w_2(t - \lambda) is w_2(\lambda) flipped
horizontally about \lambda = 0 and then moved to the right by t. Multiply w_1(\lambda) by w_2(t - \lambda) at each
point \lambda, then integrate over -\infty < \lambda < \infty. The result is w_3(t) for this fixed value of t.
Repeat this process for all values of t, -\infty < t < \infty.
Example

w_1(t) = \Pi\!\left( \frac{t - T/2}{T} \right)

w_2(t) = e^{-t/T}\, u(t)

w_3(t) = w_1(t) * w_2(t)
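A minimal numerical sketch of this example (my own illustration): on a uniform time grid the convolution integral becomes a discrete convolution scaled by the sample spacing, and the result can be compared with the closed form, which, assuming my algebra is right, is T(1 - e^{-t/T}) for 0 \le t \le T and T(e^{-(t-T)/T} - e^{-t/T}) for t > T.

```python
import numpy as np

dt = 1e-3
t = np.arange(0, 10, dt)

T = 1.0
w1 = np.where(np.abs(t - T / 2) <= T / 2, 1.0, 0.0)   # rectangular pulse on [0, T]
w2 = np.exp(-t / T)                                    # e^{-t/T} u(t)

# Approximate w3(t) = integral of w1(lambda) w2(t - lambda) d(lambda)
w3 = np.convolve(w1, w2)[: len(t)] * dt

# Closed form for comparison
w3_exact = np.where(t <= T,
                    T * (1 - np.exp(-t / T)),
                    T * (np.exp(-(t - T) / T) - np.exp(-t / T)))
print(np.max(np.abs(w3 - w3_exact)))   # small discretization error
```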
Power Spectral Density

P_w(f) = \lim_{T \to \infty} \frac{|W_T(f)|^2}{T}   (watts per hertz),

where w_T(t) \leftrightarrow W_T(f) and w_T(t) is w(t) truncated to the interval |t| < T/2.

Normalized average power:  P = \langle w^2(t) \rangle = \int_{-\infty}^{\infty} P_w(f)\, df

T
2
1
Autocorrelation : R w ( )  w(t ) w(t   )  lim
T  T  w(t )w(t   )dt
T

2

Wiener - Khintchine Theorem : R w ( )  Pw ( f )


2
P  w (t )  W 2
rms   Pw ( f )df  R w (0)

Example:

w(t) = 1 for -1 \le t \le 1, and w(t) = 0 elsewhere.

Find R_w(\tau), P_w(f), and P. (Sketch w(t + \tau) against w(t) for each range of \tau: \tau < -2, \tau > 2, -2 \le \tau < 0, and 0 \le \tau \le 2.)

R_w(\tau) = \langle w(t)\, w(t + \tau) \rangle

For \tau \ge 2, R_w(\tau) = 0.   For \tau \le -2, R_w(\tau) = 0.

For 0 \le \tau \le 2:

R_w(\tau) = \frac{1}{2} \int_{-1}^{1-\tau} 1\, dt = \frac{1}{2}\big[(1 - \tau) - (-1)\big] = 1 - \frac{\tau}{2}.

For -2 \le \tau \le 0:

R_w(\tau) = \frac{1}{2} \int_{-1-\tau}^{1} 1\, dt = \frac{1}{2}\big[1 - (-1 - \tau)\big] = 1 + \frac{\tau}{2}.

P = R_w(0) = 1.

R_w(\tau) is a triangular function:  R_w(\tau) = \Lambda\!\left(\frac{\tau}{2}\right)

P_w(f) = \mathcal{F}[R_w(\tau)] = 2\,\mathrm{Sa}^2(2\pi f)

[Plot: R_w(\tau) is a triangle of height 1 over -2 \le \tau \le 2.]
[Plot: P_w(f) = 2\,\mathrm{Sa}^2(2\pi f), peak value 2 at f = 0, nulls at f = \pm 0.5, \pm 1.0, \pm 1.5 Hz.]
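A short numerical check of this example (my own sketch; it approximates the time average over the 2 s pulse duration, exactly as in the calculation above):

```python
import numpy as np

dt = 1e-3
t = np.arange(-1, 1, dt)           # the pulse is nonzero only on [-1, 1]
w = np.ones_like(t)

taus = np.arange(-3, 3, 0.25)
R_num = []
for tau in taus:
    w_shift = np.where(np.abs(t + tau) <= 1, 1.0, 0.0)   # w(t + tau) on the same grid
    R_num.append(np.sum(w * w_shift) * dt / 2)           # (1/2) * integral of w(t) w(t+tau) dt

R_exact = np.maximum(0.0, 1 - np.abs(taus) / 2)          # the triangular result Lambda(tau/2)
print(np.max(np.abs(np.array(R_num) - R_exact)))         # small discretization error
```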
Orthogonal Series Representation
In general, signals and noise are difficult to represent by closed-form mathematical functions.
A convenient alternative is an orthogonal series, which represents signals and noise as elements
of an abstract vector space.

Definition. Two functions \varphi_n(t) and \varphi_m(t) are orthogonal on the interval (a, b) if

\int_{a}^{b} \varphi_n(t)\, \varphi_m^*(t)\, dt = 0.

Definition. \delta_{nm} is the Kronecker delta function:  \delta_{nm} = \begin{cases} 1 & n = m \\ 0 & n \ne m \end{cases}

Definition. A set of functions \varphi_i(t), i = 1, 2, 3, \ldots is mutually orthogonal on the interval (a, b) if

\int_{a}^{b} \varphi_n(t)\, \varphi_m^*(t)\, dt = \begin{cases} K_n & n = m \\ 0 & n \ne m \end{cases} = K_n \delta_{nm}.

If K_n = 1 for all n, then \varphi_i(t), i = 1, 2, 3, \ldots is mutually orthonormal.
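A quick numerical sketch of these definitions (my own illustration), checking the mutual orthogonality of the complex exponentials e^{j n \omega_0 t} over one period, the same set treated analytically in the example below:

```python
import numpy as np

T0 = 1.0                      # arbitrary period
w0 = 2 * np.pi / T0
a = 0.3                       # arbitrary starting point of the interval
dt = 1e-5
t = np.arange(a, a + T0, dt)

def inner(n, m):
    """Approximate the integral of exp(j*n*w0*t) * conj(exp(j*m*w0*t)) over [a, a + T0]."""
    return np.sum(np.exp(1j * n * w0 * t) * np.exp(-1j * m * w0 * t)) * dt

print(abs(inner(2, 5)))       # ~0        (n != m: orthogonal)
print(abs(inner(3, 3)))       # ~1.0 = T0 (n = m: K_n = T0)
```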
 
Example. Show that \{ e^{j n \omega_0 t}, n = 1, 2, 3, \ldots \} is an orthogonal set over the interval a \le t \le a + T_0,
where T_0 = 1/f_0, \omega_0 = 2\pi f_0, and n is an integer.

If n \ne m:

\int_{a}^{a+T_0} e^{j n \omega_0 t}\, e^{-j m \omega_0 t}\, dt
  = \int_{a}^{a+T_0} e^{j (n-m) \omega_0 t}\, dt
  = \left[ \frac{e^{j (n-m) \omega_0 t}}{j (n-m) \omega_0} \right]_{a}^{a+T_0}
  = \frac{e^{j (n-m) \omega_0 a} \left( e^{j (n-m) 2\pi} - 1 \right)}{j (n-m) \omega_0} = 0,

since e^{j (n-m) 2\pi} = \cos 2\pi(n-m) + j \sin 2\pi(n-m) = 1 for n \ne m.

If n = m:

\int_{a}^{a+T_0} e^{j n \omega_0 t}\, e^{-j n \omega_0 t}\, dt = \int_{a}^{a+T_0} 1\, dt = T_0.

Therefore,

\int_{a}^{a+T_0} e^{j n \omega_0 t}\, e^{-j m \omega_0 t}\, dt = \begin{cases} T_0 & n = m \\ 0 & n \ne m \end{cases} = T_0\, \delta_{nm}.

Thus K_n = T_0 for all n, and \{ e^{j n \omega_0 t}, n = 1, 2, 3, \ldots \} is mutually orthogonal.
Note that \left\{ \frac{1}{\sqrt{T_0}}\, e^{j n \omega_0 t}, n = 1, 2, 3, \ldots \right\} is mutually orthonormal.
Examples of Orthogonal Functions

Sinusoids Polynomials Square Waves


Example. Consider w_1(t) and w_2(t), where w_1(t) and w_2(t) have Fourier transforms in non-overlapping
intervals in the frequency domain. Then w_1(t) and w_2(t) are orthogonal.

\int_{-\infty}^{\infty} w_1(t)\, w_2^*(t)\, dt
  = \int_{-\infty}^{\infty} \left[ \int_{-\infty}^{\infty} W_1(f)\, e^{j 2\pi f t}\, df \right] \left[ \int_{-\infty}^{\infty} W_2(s)\, e^{j 2\pi s t}\, ds \right]^* dt
  = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} W_1(f)\, W_2^*(s)\, e^{j 2\pi f t}\, e^{-j 2\pi s t}\, df\, ds\, dt
  = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} W_1(f)\, W_2^*(s) \left[ \int_{-\infty}^{\infty} e^{j 2\pi (f - s) t}\, dt \right] df\, ds
  = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} W_1(f)\, W_2^*(s)\, \delta(f - s)\, df\, ds
      \quad \left( \text{because } \int_{-\infty}^{\infty} e^{j 2\pi (f - s) t}\, dt = \delta(f - s) \right)
  = \int_{-\infty}^{\infty} W_1(f)\, W_2^*(f)\, df = 0 \quad \text{(by the non-overlapping assumption).}
Theorem. A waveform w(t) can be represented over the interval (a, b) by

w(t) = \sum_n a_n \varphi_n(t).

Note: The theorem implies that w(t) belongs to a class of functions that \varphi_n(t), n = 1, 2, 3, \ldots can
represent exactly. In reality, at a minimum we need \varphi_n(t), n = 1, 2, 3, \ldots such that w(t) \approx \sum_n a_n \varphi_n(t)
with sufficient accuracy. Under this assumption, \varphi_n(t), n = 1, 2, 3, \ldots is said to be an orthogonal basis
set for w(t). Alternatively, we say that w(t) belongs to the vector space spanned by the basis set
\varphi_n(t), n = 1, 2, 3, \ldots. A specific function w(t) is then a point (a_1, a_2, a_3, a_4, \ldots) in the vector space
defined by that basis set.

What are the orthogonal coefficients a_n, n = 1, 2, 3, \ldots?

\int_{a}^{b} w(t)\, \varphi_n^*(t)\, dt = \int_{a}^{b} \left[ \sum_m a_m \varphi_m(t) \right] \varphi_n^*(t)\, dt
  = \sum_m a_m \int_{a}^{b} \varphi_m(t)\, \varphi_n^*(t)\, dt = \sum_m a_m K_m \delta_{mn} = a_n K_n

Therefore,  a_n K_n = \int_{a}^{b} w(t)\, \varphi_n^*(t)\, dt,  which implies  a_n = \frac{1}{K_n} \int_{a}^{b} w(t)\, \varphi_n^*(t)\, dt.

Parseval's theorem (another version):  \int_{a}^{b} |w(t)|^2\, dt = \sum_n K_n |a_n|^2.  (Prove this for yourself.)
Generation of w(t) from \varphi_n(t), n = 1, 2, 3, \ldots

Is there a way to find a_n, n = 1, 2, 3, \ldots from w(t) in a similar way?
→ Yes. We will see later.
Fourier Series
(pages 71 – 78 not covered)

Any periodic function w(t) with finite energy over one period can be represented by a complex Fourier series.

The complex Fourier series uses the orthogonal basis set \varphi_n(t) = e^{j n \omega_0 t}, n = 0, \pm 1, \pm 2, \ldots,
where \omega_0 = 2\pi f_0 = \frac{2\pi}{T_0} and T_0 is the period of w(t).

Theorem. For a periodic waveform w(t) with finite energy,

w(t) = \sum_{n=-\infty}^{\infty} c_n\, e^{j n \omega_0 t}
\quad \text{where} \quad
c_n = \frac{1}{T_0} \int_{0}^{T_0} w(t)\, e^{-j n \omega_0 t}\, dt.

Alternatively,

w(t) = \sum_{n=0}^{\infty} a_n \cos n\omega_0 t + \sum_{n=1}^{\infty} b_n \sin n\omega_0 t,

where

a_n = \begin{cases} \dfrac{1}{T_0} \displaystyle\int_{0}^{T_0} w(t)\, dt & n = 0 \\[2mm] \dfrac{2}{T_0} \displaystyle\int_{0}^{T_0} w(t) \cos n\omega_0 t\, dt & n \ge 1 \end{cases}
\qquad \text{and} \qquad
b_n = \frac{2}{T_0} \int_{0}^{T_0} w(t) \sin n\omega_0 t\, dt \quad \text{for all } n \ge 1.

This is the well-known form of the Fourier series in a real vector space.
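A small numerical sketch (my own) of the coefficient formula: it computes c_n for a 50%-duty-cycle square wave that is even about t = 0 and compares a few coefficients with the well-known closed form for that wave, c_0 = 1/2 and c_n = \sin(n\pi/2)/(n\pi) for n \ne 0.

```python
import numpy as np

T0 = 1.0                              # arbitrary period
w0 = 2 * np.pi / T0
dt = 1e-5
t = np.arange(0, T0, dt)

# 50%-duty-cycle square wave, even about t = 0 (it wraps around the period boundary)
w = np.where((t < T0 / 4) | (t > 3 * T0 / 4), 1.0, 0.0)

def c(n):
    """c_n = (1/T0) * integral over one period of w(t) exp(-j*n*w0*t) dt."""
    return np.sum(w * np.exp(-1j * n * w0 * t)) * dt / T0

for n in range(6):
    exact = 0.5 if n == 0 else np.sin(n * np.pi / 2) / (n * np.pi)
    print(n, c(n), exact)             # numerical and closed-form values agree closely
```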
Properties of Fourier Series
1. If w(t) is real, c_n = c_{-n}^*.
2. If w(t) is real and even, \mathrm{Im}[c_n] = 0.
3. If w(t) is real and odd, \mathrm{Re}[c_n] = 0.
4. Parseval's theorem:  \frac{1}{T_0} \int_{0}^{T_0} |w(t)|^2\, dt = \sum_{n=-\infty}^{\infty} |c_n|^2.
5. c_n = \begin{cases} \dfrac{a_n}{2} - j\dfrac{b_n}{2} & n > 0 \\[1mm] a_0 & n = 0 \\[1mm] \dfrac{a_{-n}}{2} + j\dfrac{b_{-n}}{2} & n < 0 \end{cases}
Proof of Parseval's theorem:

\frac{1}{T_0} \int_{0}^{T_0} |w(t)|^2\, dt = \frac{1}{T_0} \int_{0}^{T_0} w(t)\, w^*(t)\, dt
  = \frac{1}{T_0} \int_{0}^{T_0} \left[ \sum_{n=-\infty}^{\infty} c_n e^{j n \omega_0 t} \right] \left[ \sum_{m=-\infty}^{\infty} c_m^* e^{-j m \omega_0 t} \right] dt
  = \sum_{n=-\infty}^{\infty} \sum_{m=-\infty}^{\infty} c_n c_m^*\, \frac{1}{T_0} \int_{0}^{T_0} e^{j (n - m) \omega_0 t}\, dt
  = \sum_{n=-\infty}^{\infty} \sum_{m=-\infty}^{\infty} c_n c_m^*\, \delta_{nm}
  = \sum_{n=-\infty}^{\infty} |c_n|^2
Impulse Response
For a linear time-invariant system (without artificial time delay), the impulse response h(t)
describes the system completely:
y(t) = h(t)  if  x(t) = \delta(t)
For physical systems, h(t) = 0 for t < 0 (causality).

For arbitrary x(t):  y(t) = x(t) * h(t) = \int_{-\infty}^{\infty} x(\lambda)\, h(t - \lambda)\, d\lambda

Transfer function H(f):  h(t) \leftrightarrow H(f)

Y(f) = X(f)\, H(f),  or equivalently  H(f) = \dfrac{Y(f)}{X(f)}.  (H(f) is also called the frequency response function.)

For a physical system, h(t) is a real function. Then |H(f)| is even and \angle H(f) is odd.

H(f) relates the magnitudes and phase angles of the input and the output at each frequency f.
For example, if x(t) = A\cos(2\pi f t), then y(t) = A\,|H(f)|\cos\big(2\pi f t + \angle H(f)\big).
Example: RC Filter
x(t): input voltage
y(t): output voltage

x(t) = R\, i(t) + y(t)   and   i(t) = C\, \frac{dy(t)}{dt}

RC\, \frac{dy(t)}{dt} + y(t) = x(t)

Take the Fourier transform of both sides of the above equation:

RC\, (j 2\pi f)\, Y(f) + Y(f) = X(f)

H(f) = \frac{Y(f)}{X(f)} = \frac{1}{1 + j 2\pi f RC}

h(t) = \begin{cases} \dfrac{1}{\tau_0}\, e^{-t/\tau_0} & t \ge 0 \\ 0 & t < 0 \end{cases}
\quad \text{where } \tau_0 = RC \text{ (time constant).}

Power transfer function:  G_h(f) = |H(f)|^2 = \dfrac{1}{1 + (f/f_0)^2}   with   f_0 = \dfrac{1}{2\pi RC}.

G_h(f) = 0.5 if f = f_0.  f_0 is called the 3-dB frequency because 10 \log_{10} G_h(f_0) = 10 \log_{10} 0.5 \approx -3.
At f_0, the power of y(t) is half of (i.e., 3 dB less than) that of x(t).
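A small numerical sketch of this frequency response and its 3-dB point (my own, with arbitrary component values):

```python
import numpy as np

R = 1_000.0     # ohms   (arbitrary example value)
C = 1e-6        # farads (arbitrary example value)
f0 = 1 / (2 * np.pi * R * C)            # 3-dB frequency, about 159 Hz here

def H(f):
    """Frequency response of the RC low-pass filter: 1 / (1 + j*2*pi*f*R*C)."""
    return 1 / (1 + 1j * 2 * np.pi * f * R * C)

for f in (0.0, f0, 10 * f0):
    Gh = abs(H(f)) ** 2                 # power transfer function |H(f)|^2
    print(f, Gh, 10 * np.log10(Gh))     # at f0: Gh = 0.5, i.e. about -3 dB
```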
Distortion
A communication system is said to be distortionless if
y(t) = A\, x(t - T_d)
A: amplitude gain,  T_d: time delay  (A and T_d are constants).

Y(f) = A\, X(f)\, e^{-j 2\pi f T_d}
H(f) = A\, e^{-j 2\pi f T_d}
\Rightarrow |H(f)| = A   and   \angle H(f) = -2\pi T_d\, f
For a communication system to be distortionless:
1. |H(f)| is constant for all f.
2. \angle H(f) is a linear function of f.
Note: Sections 2.7 and 2.9 will be covered briefly.
Section 2.8 will not be covered.
Definition. A waveform w(t) is said to be (absolutely) bandlimited to B hertz if
W(f) = 0 for |f| \ge B.
Definition. A waveform w(t) is said to be (absolutely) timelimited to T seconds if
w(t) = 0 for |t| > T.

Theorem. A bandlimited waveform cannot be timelimited, and vice versa.


Sampling
Definition. Sampling is the process of evaluating a signal w(t) at a discrete set of points
t_1, t_2, t_3, t_4, \ldots in time. Usually, t_n = n T_s = \dfrac{n}{f_s}.  (T_s: sampling period.  f_s: sampling frequency.)

[Sketch: w(t) with sample points t_1, t_2, t_3, t_4, t_5]

If w(t) is sufficiently smooth, then w(t) can be completely reconstructed from the samples w(t_n), n = 1, 2, 3, \ldots.

Sampling Theorem
Any physical waveform (i.e., finite-energy signal) may be represented over the interval -\infty < t < \infty by

w(t) = \sum_{n=-\infty}^{\infty} a_n\, \frac{\sin\!\big( \pi f_s (t - n/f_s) \big)}{\pi f_s (t - n/f_s)}
\quad \text{where} \quad
a_n = f_s \int_{-\infty}^{\infty} w(t)\, \frac{\sin\!\big( \pi f_s (t - n/f_s) \big)}{\pi f_s (t - n/f_s)}\, dt.

Note: f_s has to satisfy a certain minimum value.
If w(t) is bandlimited to B hertz and f_s \ge 2B, then a_n = w(n/f_s).
To reconstruct a bandlimited waveform (to B hertz) without error:  (f_s)_{min} = 2B.
(f_s)_{min} is known as the Nyquist frequency.

Interpretation: Bandlimited signals are equivalent to a certain sum of sinc functions.

If w(t) is bandlimited to B hertz and w(t) is known at intervals of T_s = \dfrac{1}{2B}, then w(t) can be
completely reconstructed from the samples w(nT_s), n = 1, 2, 3, \ldots. This is a fundamental principle that
governs the transmission of analog signals through digital communication systems.

Let w_s(t) = w(t) \sum_{n=-\infty}^{\infty} \delta(t - nT_s).   w_s(t) is the impulse-sampled series of w(t):

w_s(t) = \sum_{n=-\infty}^{\infty} w(nT_s)\, \delta(t - nT_s)

Using the Fourier series of the impulse train,  w_s(t) = w(t) \sum_{n=-\infty}^{\infty} \frac{1}{T_s}\, e^{j n 2\pi f_s t},  so

W_s(f) = W(f) * \mathcal{F}\!\left[ \sum_{n=-\infty}^{\infty} \frac{1}{T_s}\, e^{j n 2\pi f_s t} \right]
       = \frac{1}{T_s}\, W(f) * \sum_{n=-\infty}^{\infty} \mathcal{F}\!\left[ e^{j n 2\pi f_s t} \right]
       = \frac{1}{T_s}\, W(f) * \sum_{n=-\infty}^{\infty} \delta(f - n f_s)
       = \frac{1}{T_s} \sum_{n=-\infty}^{\infty} W(f - n f_s)

[Sketch: w(t) with its sample points, and the impulse-sampled waveform w_s(t)]

Reconstruction:  w(t) → sampler (multiplication by \sum_n \delta(t - nT_s)) → w_s(t) → ideal low-pass filter with cutoff at B → w(t)
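A short numerical sketch (my own) of reconstruction by the sampling-theorem sum: a 3 Hz sinusoid, sampled at f_s = 10 Hz > 2B, is rebuilt from its samples with sinc interpolation. Sampling the same signal below 6 Hz would instead produce the aliasing described next.

```python
import numpy as np

B = 3.0                                    # the signal is a single 3 Hz tone
fs = 10.0                                  # sampling rate, comfortably above 2B = 6 Hz
Ts = 1 / fs

n = np.arange(-200, 201)                   # a finite number of samples (the theorem uses infinitely many)
samples = np.cos(2 * np.pi * B * n * Ts)   # a_n = w(n/fs)

t = np.linspace(-1, 1, 1001)               # reconstruction grid, far from the truncation edges

# w(t) ~ sum_n w(n*Ts) * sin(pi*fs*(t - n*Ts)) / (pi*fs*(t - n*Ts));
# np.sinc(x) = sin(pi*x)/(pi*x), so the interpolation kernel is sinc(fs*(t - n*Ts)).
w_rec = np.array([np.sum(samples * np.sinc(fs * (ti - n * Ts))) for ti in t])

w_true = np.cos(2 * np.pi * B * t)
print(np.max(np.abs(w_rec - w_true)))      # small error, caused by truncating the infinite sum
```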
If f_s < 2B, the sampling rate is insufficient, i.e., there are not enough samples
to reconstruct the original waveform. → Aliasing or spectral folding.
→ The original waveform cannot be reconstructed without distortion.
Dimensionality Theorem

For a bandlimited waveform with bandwidth B hertz, if the waveform can be completely specified
(i.e., later reconstructed by an ideal low-pass filter) by N = 2BT_0 samples taken during a time
period T_0, then N is the dimension of the waveform.

Conversely, to estimate the bandwidth of a waveform, find the number N such that N = 2BT_0 is the
minimum number of samples needed to reconstruct the waveform during a time period T_0. Then B follows.

As T_0 \to \infty, the error of this approximation goes to zero.

A slightly modified version of this theorem is the Bandpass Dimensionality


Theorem: Any bandpass waveform (with bandwidth B) can be determined
by N=2BTo samples taken during a period of To.
Data Rate Theorem (Corollary to Dimensionality Theorem)

The maximum number of independent quantities which can be transmitted by


a bandlimited channel (B hertz) during a time period of To is N=2BTo.

Definition. The baud rate of a digital communication system is the rate of


symbols or quantities transmitted per second.

From the Data Rate Theorem, the maximum baud rate of a system with a
bandlimited channel (B hertz) is 2B symbols / second.

Definition. The data rate (or bit rate) R of a system is the baud rate times the information
content per symbol, H. At the maximum baud rate, R = 2BH bits/second.

Suppose a source transmits one of M equally likely symbols. The information content of each
symbol is H = \log_2(1/\text{probability of each symbol}) = \log_2 M.
⇒ R = 2B \log_2 M
The maximum data rate (also known as the channel capacity) is determined by (1) the channel
bandwidth and (2) the channel SNR.
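A one-line numeric illustration (my own, with made-up numbers): a channel of bandwidth B = 3 kHz carrying M = 16 equally likely symbols.

```python
import math

B = 3_000        # channel bandwidth in hertz (example value)
M = 16           # number of equally likely symbols (example value)

baud_max = 2 * B           # maximum symbol rate: 2B symbols/second
H = math.log2(M)           # information per symbol: log2(M) = 4 bits
R_max = baud_max * H       # maximum data rate: 2B*log2(M) = 24,000 bits/second
print(baud_max, H, R_max)
```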
Different Definitions of Bandwidth
(Commonly used terms)
1. Absolute bandwidth f_2 - f_1:  W(f) = 0 for f > f_2 or f < f_1.
2. 3-dB (or half-power) bandwidth f_2 - f_1:  for every f_1 \le f^* \le f_2, |H(f^*)| \ge \frac{\sqrt{2}}{2} times the
   maximum value of |H(f)| over f_1 \le f \le f_2.

(Less useful definitions)


3. Equivalent bandwidth
4. Null to null bandwidth
5. Bounded spectrum bandwidth
6. Power bandwidth
7. FCC bandwidth
